MySQL xtrabackup compression and throttling



 

Official docs: https://www.percona.com/doc/percona-xtrabackup/8.0/backup_scenarios/compressed_backup.html

Official docs: https://www.percona.com/doc/percona-xtrabackup/LATEST/advanced/throttling_backups.html

 

 

***********************

Compressed backups

 

*****************

Related options

 

--compress: take a compressed backup

--decompress: decompress the backup directory before running prepare; the decompression algorithm must match the compression algorithm used

Compress individual backup files using the specified compression algorithm.

# Supported algorithms: quicklz (default) and lz4; quicklz output carries the .qp suffix seen in the listings below
Supported algorithms are 'quicklz' and 'lz4'. The default algorithm is 'quicklz'.

 

--compress-threads: number of worker threads used for compression during the backup (default 1)

--parallel: number of threads used for parallel decompression (default 1)

 

--remove-original: remove the compressed files from the backup directory after decompression

Percona XtraBackup doesn't automatically remove the compressed files. In order to clean up 
the backup directory you should use the --remove-original option. Even if they're not removed, 
these files will not be copied/moved over to the datadir if --copy-back or --move-back are used.
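Note that --decompress runs single-threaded unless --parallel is raised. A small sketch that sizes the thread count to the CPU and prints the resulting command line before running it (the target-dir is this article's example path, and `nproc` from GNU coreutils is assumed to be available):

```shell
# Build a decompress command with one worker per CPU core.
# The target-dir below is this article's example path, not a universal default.
threads=$(nproc)
cmd="xtrabackup --decompress --parallel=$threads --target-dir=/usr/mysql/single/backup"
echo "$cmd"     # sanity-check the command line first
# eval "$cmd"   # uncomment to actually run the decompression
```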

 

*****************

Example

 

Create the MySQL container

docker run -it -d --net fixed --ip 172.18.0.3 -p 3306:3306 \
-v /usr/mysql/single/data:/var/lib/mysql \
--privileged=true -e MYSQL_ROOT_PASSWORD=123456 --name mysql mysql

 

Back up the data

xtrabackup -u root --password=123456 -H 192.168.57.120 -P 3306 --backup \
--compress --compress-threads=2 \
--log-bin=/usr/mysql/single/data/binlog --log-bin-index=/usr/mysql/single/data/binlog.index \
--datadir=/usr/mysql/single/data --target-dir=/usr/mysql/single/backup


# Backup directory
[root@centos single]# ls backup
backup-my.cnf.qp  ib_buffer_pool.qp  mysql.ibd.qp        undo_001.qp                xtrabackup_checkpoints  xtrabackup_tablespaces.qp
binlog.000003.qp  ibdata1.qp         performance_schema  undo_002.qp                xtrabackup_info.qp
binlog.index.qp   mysql              sys                 xtrabackup_binlog_info.qp  xtrabackup_logfile.qp

 

Decompress the backup

xtrabackup --decompress --target-dir=/usr/mysql/single/backup

# Backup directory after decompression
[root@centos backup]# ls
backup-my.cnf     binlog.index       ibdata1     mysql.ibd.qp        undo_001.qp             xtrabackup_binlog_info.qp  xtrabackup_logfile
backup-my.cnf.qp  binlog.index.qp    ibdata1.qp  performance_schema  undo_002                xtrabackup_checkpoints     xtrabackup_logfile.qp
binlog.000003     ib_buffer_pool     mysql       sys                 undo_002.qp             xtrabackup_info            xtrabackup_tablespaces
binlog.000003.qp  ib_buffer_pool.qp  mysql.ibd   undo_001            xtrabackup_binlog_info  xtrabackup_info.qp         xtrabackup_tablespaces.qp


xtrabackup --decompress --remove-original --target-dir=/usr/mysql/single/backup

# Backup directory after decompression (compressed files removed)
[root@centos backup]# ls
backup-my.cnf  binlog.index    ibdata1  mysql.ibd           sys       undo_002                xtrabackup_checkpoints  xtrabackup_logfile
binlog.000003  ib_buffer_pool  mysql    performance_schema  undo_001  xtrabackup_binlog_info  xtrabackup_info         xtrabackup_tablespaces
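If a backup was decompressed without --remove-original, as in the first listing above, the leftover compressed files can also be cleared by hand: for a quicklz backup the option amounts to deleting every *.qp file. A minimal simulation on a scratch directory (stand-in filenames, not a real backup):

```shell
# Recreate the before/after of --remove-original on throwaway files:
# decompressed copies and their *.qp originals side by side.
dir=$(mktemp -d)
touch "$dir/ibdata1" "$dir/ibdata1.qp" "$dir/mysql.ibd" "$dir/mysql.ibd.qp"
find "$dir" -name '*.qp' -delete   # what --remove-original does for quicklz
ls "$dir"                          # only ibdata1 and mysql.ibd remain
```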

 

Prepare the backup

xtrabackup --prepare --target-dir=/usr/mysql/single/backup

 

Restore the data: copy the files into an empty datadir (afterwards, hand ownership back to the mysql user, e.g. chown -R mysql:mysql, before starting the server)

xtrabackup --copy-back --datadir=/usr/mysql/single/data2 --target-dir=/usr/mysql/single/backup

 

 

***********************

Throttling backups

 

# Although xtrabackup does not block database operations, any backup adds load to the system
Although xtrabackup does not block your database’s operation, any backup can add load to 
the system being backed up. 

# On systems without much spare I/O capacity, throttling the backup's read and write rate can help
On systems that do not have much spare I/O capacity, it might be helpful to throttle the 
rate at which xtrabackup reads and writes data. 

# The --throttle option limits the number of chunks read and written per second; each chunk is 10 MB
You can do this with the --throttle option. This option limits the number of chunks copied 
per second. The chunk size is 10 MB.

 

--throttle=1 

                   

 

# During a full backup, --throttle limits the number of read-and-write pairs per second
When specified with the --backup option, this option limits the number of pairs of read-
and-write operations per second that xtrabackup will perform. 

# During an incremental backup, it limits the number of read operations per second
If you are creating an incremental backup, then the limit is the number of read I/O 
operations per second

 

# By default there is no throttling; xtrabackup reads and writes as fast as it can to finish the backup
By default, there is no throttling, and xtrabackup reads and writes data as quickly as it can. 

# If the limit is too strict, the backup may be too slow to keep up with InnoDB's transaction log writes and may never complete
If you set too strict of a limit on the IOPS, the backup might be so slow that it will 
never catch up with the transaction logs that InnoDB is writing, so the backup might never complete.
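Because each chunk is 10 MB, a --throttle value maps directly to an I/O ceiling, which makes it easier to pick a limit that still outruns redo generation. A quick back-of-the-envelope calculation (chunk size taken from the docs quoted above):

```shell
# Throughput ceiling implied by --throttle=N: N chunks/s x 10 MB/chunk.
chunk_mb=10
for throttle in 1 10 40; do
  echo "--throttle=$throttle  =>  ~$(( throttle * chunk_mb )) MB/s"
done
```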

 

 


Copyright notice: this is an original article by weixin_43931625, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.