Using mysqldump to Migrate MySQL-Compatible Databases
Introduction
mysqldump can migrate MySQL and MySQL-compatible databases such as RDS for MySQL, MariaDB, and TiDB, and it works across platforms and across versions.
Prepare the MySQL Environment
docker rm -f mysql5733171
docker run -d --name mysql5733171 -h mysql5733171 -p 33171:3306 \
  -v /etc/mysql/mysql5733171/conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=lhr -e TZ=Asia/Shanghai \
  mysql:5.7.30

docker exec -it mysql5733171 bash
mysql -uroot -plhr -h192.168.66.35 -P33171
create database lhrdb default character set utf8mb4;

docker logs -f mysql5733171

cat > /etc/mysql/mysql5733171/conf/my.cnf <<"EOF"
[mysqld]
default-time-zone = '+8:00'
log_timestamps = SYSTEM
skip-name-resolve
log-bin
server_id=573033171
character_set_server=utf8mb4
EOF
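Because my.cnf is written into the mapped config directory after the container has already started, the settings only take effect after a restart. A minimal sketch for applying and verifying them, assuming the same container name, host IP, port, and password as above:

# Restart the container so the new my.cnf under /etc/mysql/conf.d is picked up
docker restart mysql5733171

# Verify that the binlog, server_id, and character set settings are active
mysql -uroot -plhr -h192.168.66.35 -P33171 \
  -e "show variables like 'log_bin'; show variables like 'server_id'; show variables like 'character_set_server';"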
Query Database Objects
-- Common MySQL database objects:
--   Tables
--   Views
--   Routines (stored procedures and functions)
--   Triggers
--   Events (scheduled jobs)

use lhrdb;

create view `vw_ob_lhr` as
select db, type, cnt
from (select 'TABLE' type, table_schema db, count(*) cnt
        from information_schema.`TABLES` a
       where table_type = 'BASE TABLE'
       group by table_schema
      union all
      select 'EVENTS' type, event_schema db, count(*) cnt
        from information_schema.`EVENTS` b
       group by event_schema
      union all
      select 'TRIGGERS' type, trigger_schema db, count(*) cnt
        from information_schema.`TRIGGERS` c
       group by trigger_schema
      union all
      select 'PROCEDURE' type, routine_schema db, count(*) cnt
        from information_schema.ROUTINES d
       where `ROUTINE_TYPE` = 'PROCEDURE'
       group by db
      union all
      select 'FUNCTION' type, routine_schema db, count(*) cnt
        from information_schema.ROUTINES d
       where `ROUTINE_TYPE` = 'FUNCTION'
       group by db
      union all
      select 'VIEWS' type, table_schema db, count(*) cnt
        from information_schema.VIEWS f
       group by table_schema
     ) t
where db = 'lhrdb'
order by db, type;

select * from lhrdb.vw_ob_lhr;

-- Database size per schema
select a.SCHEMA_NAME, a.DEFAULT_CHARACTER_SET_NAME, a.DEFAULT_COLLATION_NAME,
       sum(table_rows) as '记录数',
       truncate(sum(data_length)/1024/1024, 2) as '数据容量(MB)',
       truncate(sum(index_length)/1024/1024, 2) as '索引容量(MB)',
       truncate(sum(data_length+index_length)/1024/1024, 2) as '总大小(MB)',
       truncate(sum(max_data_length)/1024/1024, 2) as '最大值(MB)',
       truncate(sum(data_free)/1024/1024, 2) as '空闲空间(MB)',
       max(f.filesize_M) as '磁盘文件大小(MB)'
  from INFORMATION_SCHEMA.SCHEMATA a
  left outer join information_schema.tables b on a.SCHEMA_NAME = b.TABLE_SCHEMA
  left outer join (select substring(b.file_name,3,locate('/',b.file_name,3)-3) as db_name,
                          truncate(sum(total_extents*extent_size)/1024/1024,2) filesize_M
                     from information_schema.FILES b
                    group by substring(b.file_name,3,locate('/',b.file_name,3)-3)) f
    on (a.SCHEMA_NAME = f.db_name)
 group by a.SCHEMA_NAME, a.DEFAULT_CHARACTER_SET_NAME, a.DEFAULT_COLLATION_NAME
UNION ALL
select '总计', '', '',
       sum(table_rows) as '记录数',
       truncate(sum(data_length)/1024/1024, 2) as '数据容量(MB)',
       truncate(sum(index_length)/1024/1024, 2) as '索引容量(MB)',
       truncate(sum(data_length+index_length)/1024/1024, 2) as '总大小(MB)',
       truncate(sum(max_data_length)/1024/1024, 2) as '最大值(MB)',
       truncate(sum(data_free)/1024/1024, 2) as '空闲空间(MB)',
       max(f.filesize_M) as '磁盘文件大小(MB)'
  from INFORMATION_SCHEMA.SCHEMATA a
  left outer join information_schema.tables b on a.SCHEMA_NAME = b.TABLE_SCHEMA
  left outer join (select substring(b.file_name,3,locate('/',b.file_name,3)-3) as db_name,
                          truncate(sum(total_extents*extent_size)/1024/1024,2) filesize_M
                     from information_schema.FILES b
                    group by substring(b.file_name,3,locate('/',b.file_name,3)-3)) f
    on (a.SCHEMA_NAME = f.db_name)
 order by `数据容量(MB)`, `索引容量(MB)`;

-- Top 10 largest tables
SELECT table_schema AS '数据库',
       table_name AS '表名',
       a.TABLE_TYPE, a.ENGINE, a.CREATE_TIME, a.UPDATE_TIME, a.TABLE_COLLATION,
       table_rows AS '记录数',
       TRUNCATE(a.DATA_LENGTH / 1024 / 1024, 2) AS '数据容量(MB)',
       TRUNCATE(index_length / 1024 / 1024, 2) AS '索引容量(MB)',
       TRUNCATE((data_length + index_length) / 1024 / 1024, 2) AS '总大小(MB)',
       TRUNCATE(a.DATA_FREE / 1024 / 1024, 2) AS '空闲空间(MB)',
       truncate(f.filesize_M, 2) AS '磁盘文件大小(MB)'
  FROM information_schema.TABLES a
  left outer join (select substring(b.file_name,3,locate('/',b.file_name,3)-3) as db_name,
                          substring(b.file_name,locate('/',b.file_name,3)+1,(LENGTH(b.file_name)-locate('/',b.file_name,3)-4)) as tb_name,
                          b.file_name,
                          (total_extents*extent_size)/1024/1024 filesize_M
                     from information_schema.FILES b
                    order by filesize_M desc
                    limit 20) f
    on (a.TABLE_SCHEMA = f.db_name and a.TABLE_NAME = f.tb_name)
 ORDER BY (data_length + index_length) DESC
 LIMIT 10;
For more on querying MySQL database and table sizes, see: https://www.dbaup.com/mysqlchaxunbiaokongjiandaxiao.html
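Before migrating, it can also help to save the object counts reported by vw_ob_lhr so they can be compared against the target after the import. A minimal sketch using the connection details from the setup above (the output file names are arbitrary):

# Snapshot source object counts for post-migration comparison
mysql -uroot -plhr -h192.168.66.35 -P33171 \
  -e "select * from lhrdb.vw_ob_lhr;" > lhrdb_objects_before.txt

# After the import, run the same query against the target and compare:
# diff lhrdb_objects_before.txt lhrdb_objects_after.txt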
Create Sample Objects
-- Function
set GLOBAL log_bin_trust_function_creators=on;
-- Without this setting, creating the function fails with:
-- ERROR 1418 (HY000) at line 200: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA
-- in its declaration and binary logging is enabled (you *might* want to use the less safe
-- log_bin_trust_function_creators variable)

DROP FUNCTION if exists rand_string;
delimiter //
CREATE DEFINER=`root`@`%` FUNCTION `rand_string`(n INT) RETURNS varchar(255) CHARSET utf8
BEGIN
  DECLARE chars_str varchar(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
  DECLARE return_str varchar(255) DEFAULT '';
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    SET return_str = concat(return_str, substring(chars_str, FLOOR(1 + RAND()*62), 1));
    SET i = i + 1;
  END WHILE;
  RETURN return_str;
END;//
delimiter ;

select rand_string(5);

-- Stored procedures
delimiter //
DROP PROCEDURE IF EXISTS mock_isam//
CREATE PROCEDURE mock_isam (IN rowCount int)
BEGIN
  DECLARE insertCount int;
  SET insertCount = 0;
  DROP TABLE IF EXISTS `isam_table`;  /* drop any existing table with this name before recreating it */
  /* adjust the CREATE TABLE statement below to build the table you need */
  CREATE TABLE `isam_table` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(20) NOT NULL,
    `age` tinyint(3) NOT NULL,
    PRIMARY KEY (`id`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
  START TRANSACTION;
  loopHandler : LOOP
    /* insert a row */
    INSERT INTO `isam_table`(`name`,`age`) VALUES( rand_string(20), CEIL(RAND() * 110) );
    SET insertCount = insertCount + 1;
    IF (insertCount >= rowCount) THEN
      LEAVE loopHandler;
    END IF;
  END LOOP loopHandler;
  COMMIT;
END //
delimiter ;

delimiter //
DROP PROCEDURE IF EXISTS mock_innodb//
CREATE PROCEDURE mock_innodb (IN rowCount int)
BEGIN
  DECLARE insertCount int;
  SET insertCount = 0;
  DROP TABLE IF EXISTS `innodb_table`;  /* drop any existing table with this name before recreating it */
  /* adjust the CREATE TABLE statement below to build the table you need */
  CREATE TABLE `innodb_table` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(20) NOT NULL,
    `age` tinyint(3) NOT NULL,
    PRIMARY KEY (`id`)
  ) ENGINE=innodb DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
  START TRANSACTION;
  loopHandler : LOOP
    /* insert a row */
    INSERT INTO `innodb_table`(`name`,`age`) VALUES( rand_string(20), CEIL(RAND() * 110) );
    SET insertCount = insertCount + 1;
    IF (insertCount >= rowCount) THEN
      LEAVE loopHandler;
    END IF;
  END LOOP loopHandler;
  COMMIT;
END //
delimiter ;

-- Insert data
call mock_innodb(2000);
call mock_isam(200);
select * from innodb_table limit 2;
select * from isam_table limit 2;

select * from mysql.proc where db = 'lhrdb' and `type` = 'PROCEDURE';  -- stored procedures
select * from mysql.proc where db = 'lhrdb' and `type` = 'FUNCTION';   -- functions

-- Trigger
create table time (time varchar(100));
CREATE TRIGGER trig1 AFTER INSERT ON isam_table FOR EACH ROW INSERT INTO time VALUES(NOW());
SHOW TRIGGERS;
select * from information_schema.`TRIGGERS` where trigger_schema='lhrdb';

-- Event
drop table if exists events_list;
create table events_list(id int PRIMARY key auto_increment, event_name varchar(20) not null, event_started timestamp not null);
drop event event_minute;
create event event_minute on schedule every 10 minute do insert into events_list(event_name,event_started) values('event_now', now());
set global event_scheduler=1;
show processlist;
show events;
select * from information_schema.`EVENTS` where event_schema='lhrdb';
select * from events_list;

MySQL [lhrdb]> select * from vw_ob_lhr;
+-------+-----------+-----+
| db    | type      | cnt |
+-------+-----------+-----+
| lhrdb | Events    |   1 |
| lhrdb | Function  |   1 |
| lhrdb | Procedure |   2 |
| lhrdb | Table     |   4 |
| lhrdb | Trigger   |   1 |
| lhrdb | View      |   1 |
+-------+-----------+-----+
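Note that set GLOBAL log_bin_trust_function_creators=on lasts only until the server is restarted. If the setting should survive restarts of the source or target instance, one option is to persist it in the my.cnf created earlier; a minimal sketch, assuming the same host-side config path as above:

# Append the setting to the container's my.cnf so it survives restarts
cat >> /etc/mysql/mysql5733171/conf/my.cnf <<"EOF"
log_bin_trust_function_creators = 1
EOF
docker restart mysql5733171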
Export the Data
-- On Windows, use "^" instead of "\" for line continuation

-- Export table structures and data together
mysqldump -uroot -plhr -h192.168.66.35 -P33171 \
  --single-transaction --hex-blob --routines --events --triggers --set-gtid-purged=OFF --default-character-set=utf8 \
  --max_allowed_packet=512M --databases lhrdb > lhrdb.sql

-- Or, stripping DEFINER clauses on the fly:
mysqldump -uroot -plhr -h192.168.66.35 -P33171 \
  --single-transaction --hex-blob --routines --events --triggers --set-gtid-purged=OFF --default-character-set=utf8 \
  --max_allowed_packet=512M --databases lhrdb \
  | sed -e 's/DEFINER=.*FUNCTION/FUNCTION/' \
        -e 's/DEFINER=.*TRIGGER/TRIGGER/' \
        -e 's/DEFINER=.*EVENT/EVENT/' \
        -e 's/DEFINER=.*PROCEDURE/PROCEDURE/' \
        -e 's/DEFINER=.*SQL SECURITY/SQL SECURITY/' \
  > lhrdb.sql

-- Export table structures only
mysqldump -uroot -plhr -h192.168.66.35 -P33171 \
  --single-transaction --hex-blob --routines --events --triggers --set-gtid-purged=OFF --default-character-set=utf8 \
  --databases lhrdb --no-data \
  | sed -e 's/DEFINER=.*FUNCTION/FUNCTION/' \
        -e 's/DEFINER=.*TRIGGER/TRIGGER/' \
        -e 's/DEFINER=.*EVENT/EVENT/' \
        -e 's/DEFINER=.*PROCEDURE/PROCEDURE/' \
        -e 's/DEFINER=.*SQL SECURITY/SQL SECURITY/' \
  > lhrdb_ddl.sql

-- Export data only
mysqldump -uroot -plhr -h192.168.66.35 -P33171 \
  --single-transaction --hex-blob --skip-triggers --set-gtid-purged=OFF --default-character-set=utf8 \
  --no-create-info --max_allowed_packet=512M --databases lhrdb > lhrdb_data.sql
Note: to export multiple databases, list their names after the --databases option separated by spaces, as in the sketch below.
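A minimal sketch of exporting several databases into one file; lhrdb2 and lhrdb3 are hypothetical names used only for illustration:

# Dump three databases into a single file; names are separated by spaces
mysqldump -uroot -plhr -h192.168.66.35 -P33171 \
  --single-transaction --hex-blob --routines --events --triggers --set-gtid-purged=OFF --default-character-set=utf8 \
  --max_allowed_packet=512M --databases lhrdb lhrdb2 lhrdb3 > multi_db.sql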
Import the Data
-- Drop the target database first
drop database lhrdb;

-- Import table structures and data in one go
mysql -f -h192.168.66.35 -P33171 -u root -plhr --default-character-set=utf8 < lhrdb.sql

-- Import table structures and data separately
mysql -f -h192.168.66.35 -P33171 -u root -plhr --default-character-set=utf8 < lhrdb_ddl.sql
mysql -f -h192.168.66.35 -P33171 -u root -plhr -D lhrdb --default-character-set=utf8 < lhrdb_data.sql

-- Or use source from within the mysql client
mysql -h192.168.66.35 -P33171 -u root -plhr
SQL> source lhrdb.sql
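For large dump files it can be useful to see how far the import has progressed. A minimal sketch using pv, assuming pv is installed on the client host:

# Stream the dump through pv to get a progress bar and ETA during the import
pv lhrdb.sql | mysql -f -h192.168.66.35 -P33171 -u root -plhr --default-character-set=utf8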
Export/Import Parameter Tuning and Notes
-- Settings on the source before exporting
set global wait_timeout=28800000;
set global net_read_timeout=28800;
set global net_write_timeout=28800;
set global max_allowed_packet=2147483648;

-- Settings on the target before importing
set sql_log_bin=0;
set global innodb_flush_log_at_trx_commit = 2;
set global sync_binlog = 20000;
set global max_allowed_packet=100000000;
set global net_buffer_length=100000;
set global interactive_timeout=28800000;
set global wait_timeout=28800000;

-- Notes
1. When exporting individual tables, any triggers defined on those tables are exported automatically.
2. When importing from the Windows cmd console, set the code page to match the character set of the exported SQL file; otherwise you may hit puzzling errors such as "ERROR at line 3759: Unknown command '\'". For utf8 files, run: chcp 65001
3. It is recommended to pass --default-character-set=utf8 for both export and import.
4. The error "mysqldump: Couldn't execute 'SELECT COLUMN_NAME, JSON_EXTRACT(HISTOGRAM, '$."number-of-buckets-specified"') FROM information_schema.COLUMN_STATISTICS WHERE SCHEMA_NAME = 'lhrdb' AND TABLE_NAME = 'events_list';': Unknown table 'COLUMN_STATISTICS' in information_schema (1109)" is caused by a version mismatch between the mysqldump client and the MySQL server (an 8.0 mysqldump run against an older server); use a matching mysqldump or add --column-statistics=0.
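Since set sql_log_bin=0 is a session-level setting, it does not carry over to the separate session that mysql opens when importing a file. One way to apply it to the import session is the client's --init-command option; a minimal sketch, reusing the data file from the export step:

# Disable binary logging for the import session only (requires the SUPER privilege)
mysql -f -h192.168.66.35 -P33171 -u root -plhr --default-character-set=utf8 \
  --init-command="SET SESSION sql_log_bin=0" < lhrdb_data.sql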
Errors When Importing a mysqldump File into Huawei Cloud RDS
For the export/import procedure, see: https://support.huaweicloud.com/usermanual-rds/rds_migration_mysql.html
Error: [Err] 1227 - Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Cause: the root account on Huawei Cloud RDS does not have the SUPER privilege by default; see: https://support.huaweicloud.com/rds_faq/rds_faq_0075.html
Solutions:
1. Edit the SQL file exported from Navicat and remove clauses like the following:
DEFINER=`root`@`xxx`
2. If you export and import with mysqldump, export the DDL and the data separately, batch-replace the DEFINER clauses in the DDL file (see the sketch below), and then import the DDL followed by the data.
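A minimal sketch of the batch replacement with sed, run against the DDL file produced earlier; the regular expression is one possible form, so adjust it to the DEFINER strings that actually appear in your dump, and <rds-host> is a placeholder for the RDS endpoint:

# Remove every DEFINER=`user`@`host` clause from the DDL dump in place
sed -i 's/DEFINER=`[^`]*`@`[^`]*`//g' lhrdb_ddl.sql

# Then load the DDL and the data into the RDS instance, for example:
# mysql -f -h<rds-host> -P3306 -u root -p --default-character-set=utf8 < lhrdb_ddl.sql
# mysql -f -h<rds-host> -P3306 -u root -p -D lhrdb --default-character-set=utf8 < lhrdb_data.sql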
For example, an error like the following may be reported: