GreenPlum: Basic Optimization Steps After Creating a New Instance
- OS configuration
  - Recommended settings
  - Kernel parameters
  - Process priority and process limits
  - Configure disk read-ahead
  - Disk mount options
  - Disable transparent huge pages
  - Configure sufficient swap
- Installation
  - Segment count planning
  - pgbouncer connection pool
- Parameter configuration
  - Modify max_connections, shared_buffers, etc. (requires a restart)
  - Periodically clean master logs, or log only DDL statements
  - Other parameter tuning (no restart required)
  - Consider switching UDP to TCP
  - Asynchronous replication (use with caution)
  - Increase the FTS probe intervals (use with caution)
  - Modify the number of resource queues
  - Remote access
- Scheduled tasks
  - Collect statistics on a schedule
    - Method 1: vacuumdb
    - Method 2: analyzedb (recommended)
  - Shrink system catalogs on a schedule
  - Periodically drop gpload external tables
  - Periodically VACUUM the entire database
  - Restart gpcc on a schedule
  - Frequently VACUUM tables that bloat frequently
- Scheduled tasks 2
  - Start GP at boot
    - Simple version (use with caution) -- for single-host environments
    - Advanced version -- for multi-host environments
- Run gpcheckperf performance tests
- Summary
OS configuration

Recommended settings
```bash
# Disable shell timeout and set the locale
echo 'export TMOUT=0' >> /etc/profile
source /etc/profile
echo 'export LANG=en_US.UTF-8' >> /etc/profile

# Disable SELinux
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
setenforce 0

# Disable the firewall, or open all ports if it must stay on
systemctl status firewalld.service
systemctl disable firewalld.service
systemctl start firewalld
firewall-cmd --add-port=0-65535/tcp --permanent
firewall-cmd --add-port=0-65535/udp --permanent
firewall-cmd --reload
firewall-cmd --list-ports
systemctl stop firewalld.service

# Keep shared memory and semaphores when gpadmin logs out
sed -i 's/.*RemoveIPC.*/RemoveIPC=no/' /etc/systemd/logind.conf
echo "RemoveIPC=no" >> /usr/lib/systemd/system/systemd-logind.service
systemctl daemon-reload
systemctl restart systemd-logind.service

# Allow gpadmin to use cron
echo gpadmin >> /etc/cron.allow

# Raise SSH session limits for the gp management utilities
cat >> /etc/ssh/sshd_config <<"EOF"
MaxSessions 1000
MaxStartups 1000
# MaxStartups 50:80:200
EOF
systemctl restart sshd
```
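A quick spot-check that the settings above took effect (a minimal sketch; adjust paths for your distribution):

```bash
getenforce                                # expect Permissive or Disabled
systemctl is-enabled firewalld            # expect disabled
grep RemoveIPC /etc/systemd/logind.conf   # expect RemoveIPC=no
grep -E 'MaxSessions|MaxStartups' /etc/ssh/sshd_config
```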
Kernel parameters
```bash
cat >> /etc/sysctl.conf <<"EOF"
fs.file-max = 9000000
fs.inotify.max_user_instances = 1000000
fs.inotify.max_user_watches = 1000000
kernel.pid_max = 4194304
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 32000 1024000000 500 32000
# Note: vm.overcommit_ratio only takes effect when vm.overcommit_memory = 2
vm.overcommit_memory = 1
vm.overcommit_ratio = 95
net.ipv4.ip_forward = 1
vm.swappiness = 20
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 600
vm.dirty_ratio = 10
vm.dirty_writeback_centisecs = 100
vm.vfs_cache_pressure = 500
vm.min_free_kbytes = 2097152
net.ipv4.ipfrag_time = 600
net.ipv4.ipfrag_high_thresh = 450943040
net.ipv4.ipfrag_low_thresh = 400457280
net.ipv4.ipfrag_max_dist = 1000
net.core.somaxconn = 65535
net.ipv4.tcp_rmem = 4096 87380 82914560
net.ipv4.tcp_wmem = 4096 16384 61943040
net.core.rmem_default = 46777216
net.core.rmem_max = 467772160
net.core.wmem_default = 46777216
net.core.wmem_max = 467772160
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_tw_reuse = 1
# tcp_tw_recycle was removed in kernel 4.12+ and is ignored there
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.udp_mem = 123372720 164496990 246745440
net.ipv4.udp_rmem_min = 32384
net.ipv4.udp_wmem_min = 32384
sunrpc.udp_slot_table_entries = 64
EOF
sysctl -p
```
Process priority and process limits
```bash
ll /lib64/security/pam_limits.so
echo "session required /lib64/security/pam_limits.so" >> /etc/pam.d/login

cat >> /etc/security/limits.conf <<"EOF"
* soft nofile 655350
* hard nofile 655350
* soft nproc 655350
* hard nproc 655350
gpadmin soft priority -20
EOF

# On RHEL/CentOS 7, 20-nproc.conf overrides limits.conf for nproc
sed -i 's/4096/655350/' /etc/security/limits.d/20-nproc.conf
cat /etc/security/limits.d/20-nproc.conf
```
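To confirm the limits actually apply, check them from a fresh gpadmin session (a small sanity check, assuming the gpadmin user already exists):

```bash
su - gpadmin -c 'ulimit -n -u'   # expect 655350 for both open files and max user processes
```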
Configure disk read-ahead

On any device holding a data directory, the blockdev read-ahead size should be set to 16384.
```bash
/sbin/blockdev --getra /dev/sdb
/sbin/blockdev --setra 16384 /dev/sdb
echo '/sbin/blockdev --setra 16384 /dev/sdb' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local

# Create the LVM volume and XFS filesystem for the data directory
pvcreate /dev/sdb
vgcreate vggp /dev/sdb
lvcreate -n lvgp -l 100%FREE vggp
mkfs.xfs /dev/vggp/lvgp
mkdir /data
```
The setra call is appended to /etc/rc.local above so the setting is reapplied at boot.
Disk mount options
```bash
# Mount options for the data filesystem
mount -o rw,nodev,noatime,nobarrier,inode64 /dev/vggp/lvgp /data
echo '/dev/vggp/lvgp /data xfs defaults,rw,nodev,noatime,nobarrier,inode64 0 0' >> /etc/fstab
```
Disable transparent huge pages

THP is not recommended for database workloads (Oracle, MySQL, PostgreSQL, and MongoDB all advise disabling it): THP allocates memory dynamically at runtime, that runtime allocation adds latency, and the result is unfriendly to database memory management and causes performance jitter. So disable THP.
```bash
cat /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

cat >> /etc/rc.local <<"EOF"
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
chmod +x /etc/rc.d/rc.local
```
Reference: https://www.dbaup.com/pgshujukupeizhidaye.html
Configure sufficient swap

If memory runs short and automatic failover happens, you will see errors such as "FTS: cannot establish libpq connection (content=0, dbid=11): could not fork new process for connection: Cannot allocate memory", "FATAL: Out of memory. Failed on request of size 144 bytes. (context 'GPORCA memory pool')", or "FATAL: the database system is in recovery mode". With no swap configured, OOM can occur, and in severe cases it forces automatic segment failover.
```bash
# Host-level setting; swap size is typically 1.5x RAM
dd if=/dev/zero of=/root/.swapfile bs=1G count=100
chmod 0600 /root/.swapfile
mkswap /root/.swapfile
swapon /root/.swapfile
echo '/root/.swapfile swap swap defaults 0 0' >> /etc/fstab
swapon -s
```
Reference: https://www.dbaup.com/linuxzhongdejiaohuanfenquswap.html
Installation

Segment count planning

The table below lists the memory and CPU requirements for a given number of segment instances per host:
| Total segments (p+m) | CPU cores | Memory (GB) |
| --- | --- | --- |
| 2 | 4 | 16~32 |
| 2~4 | 6~8 | 24~32 |
| 4~6 | 8~16 | 32~64 |
| 6~8 | 12~16 | 48~64 |
| 8~12 | 16~24 | 64~84 |
| 12~16 | 24~32 | 64~128 |
| 30 | 64 | 250 |
| 50 | 128 | 500 |
Most environments I encounter use a 1+1+3 layout (1 master + 1 standby + 3 data hosts), where the standby is a low-spec machine and every other host has 128 GB of RAM and 32 CPU cores. Each data host runs 4 primaries + 4 mirrors = 8 instances. Having monitored these clusters for a long time, a lot of resources still sit idle. For hosts of this class, you can instead create 16 instances per data host (8 primaries + 8 mirrors) and set shared_buffers to just 512MB.
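Before resizing, it helps to confirm how many instances each host currently runs. A minimal check against the standard gp_segment_configuration catalog (content = -1 is the master/standby, so it is excluded):

```bash
# Count primary and mirror instances per host
psql -c "
SELECT hostname,
       sum(CASE WHEN role = 'p' THEN 1 ELSE 0 END) AS primaries,
       sum(CASE WHEN role = 'm' THEN 1 ELSE 0 END) AS mirrors
FROM   gp_segment_configuration
WHERE  content >= 0
GROUP  BY hostname
ORDER  BY hostname;"
```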
pgbouncer connection pool
```
-- In psql: create the pgbouncer auth user
create user pgbouncer password 'lhr' superuser;

# On the OS: write the pgbouncer configuration
mkdir -p /usr/local/greenplum-db/pgbouncer

cat > /usr/local/greenplum-db/pgbouncer/pgbouncer.ini <<"EOF"
[databases]
* = host=127.0.0.1 port=5432

[pgbouncer]
pool_mode = session
listen_port = 6432
listen_addr = *
auth_type = md5
auth_user = pgbouncer
auth_file = /usr/local/greenplum-db/pgbouncer/users.txt
logfile = /usr/local/greenplum-db/pgbouncer/pgbouncer.log
pidfile = /usr/local/greenplum-db/pgbouncer/pgbouncer.pid
admin_users = pgbouncer
default_pool_size = 1000
max_client_conn = 4000
ignore_startup_parameters = extra_float_digits,search_path
so_reuseport = 1
application_name_add_host = 1
query_wait_timeout = 0
log_connections = 0
log_disconnections = 0
sbuf_loopcnt = 0
reserve_pool_timeout = 6
reserve_pool_size = 100
listen_backlog = 65535
server_fast_close = 1
EOF

# In psql: export "user" "password" pairs for auth_file
psql
copy (select '"'||usename ||'" "' || passwd||'"' from pg_shadow order by 1) to '/usr/local/greenplum-db/pgbouncer/users.txt';

# Start pgbouncer as a daemon; -R performs an online restart of a running instance
/usr/local/greenplum-db/bin/pgbouncer -d /usr/local/greenplum-db/pgbouncer/pgbouncer.ini
/usr/local/greenplum-db/bin/pgbouncer -Rd /usr/local/greenplum-db/pgbouncer/pgbouncer.ini

# Connect through the pool, or to the admin console
psql -U xxt -h172.16.7.163 -p 6432 -d pgbouncer
psql -U pgbouncer -h172.16.7.163 -p 6432 -d pgbouncer
show help;
show clients;    -- client connection info
show pools;      -- connection pool info
show config;
show databases;
show servers;
show stats;
```
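Whenever a password changes in GP, users.txt must be re-exported; pgbouncer then picks up the new file without a restart via the standard admin-console RELOAD command (host and port as configured above):

```bash
psql -U pgbouncer -h 127.0.0.1 -p 6432 -d pgbouncer -c "RELOAD;"
```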
Parameter configuration

Modify max_connections, shared_buffers, etc. (requires a restart)
Note: for the standby master, these values must be changed separately in its postgresql.conf file.
To compute gp_vmem_protect_limit, use https://greenplum.org/calc/ ; there, "Primary Segments Per Server" means the total number of instances on the host, primaries and mirrors included. This parameter caps the total memory, in MB, that all statements on one instance may use. If a query would push usage past the limit, the allocation is refused and the query fails; a well-chosen value effectively prevents OOM. Note that gp_vmem_protect_limit does not account for shared memory, so OS-level OOM remains possible; exclude shared memory when doing the calculation.
Note: if the value is too large, an OOM ends in automatic instance failover; if too small, queries fail with OOM errors such as "ERROR: Canceling query because of high VMEM usage. Used: 29MB, available 1MB, red zone: 90MB (runaway_cleaner.c:202)", "Vmem limit reached, failed to allocate", the gpcc warning "Warning: Updates stalled due to heavy GPDB workload or segment failure (system, diskusage, alert)", or "DETAIL: Vmem limit reached, failed to allocate 12582912 bytes from tracker, which has 0 MB available", though in that case no instance failover occurs.
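For reference, the sizing formula from the Greenplum documentation (the same one behind https://greenplum.org/calc/) can be sketched in a few lines of shell. It assumes vm.overcommit_memory=2, and the RAM/swap/segment numbers below are purely illustrative:

```bash
RAM_GB=128; SWAP_GB=32; ACTING_PRIMARIES=8   # illustrative values, not a recommendation
# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
GP_VMEM_GB=$(echo "(($SWAP_GB + $RAM_GB) - (7.5 + 0.05 * $RAM_GB)) / 1.7" | bc -l)
# Per-instance limit, in MB
echo "gp_vmem_protect_limit = $(echo "$GP_VMEM_GB * 1024 / $ACTING_PRIMARIES" | bc -l | cut -d. -f1) MB"
```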
```bash
# The following parameters require a restart
# (the standby master's postgresql.conf must be edited separately)
# gp_vmem_protect_limit and gp_resqueue_priority_cpucores_per_segment must be calculated
# max_prepared_transactions must match max_connections on the master
gpconfig -c gp_vmem_protect_limit -v 16658 -m 102400
gpconfig -c shared_buffers -v 256MB -m 4GB
gpconfig -c max_connections -v 6000 -m 2000
gpconfig -c max_prepared_transactions -v 2000
gpconfig -c track_activity_query_size -v 102400 -m 102400
gpconfig -c gp_resqueue_priority_cpucores_per_segment -v 4 -m 32
gpconfig -c gp_enable_global_deadlock_detector -v on
gpconfig -c gp_segworker_relative_priority -v 0 -m 0 --skipvalidation
gpconfig -c runaway_detector_activation_percent -v 0
gpconfig -c gp_vmem_protect_segworker_cache_limit -v 200
gpstop -M fast -ra

# Verify
gpconfig -s gp_vmem_protect_limit
gpconfig -s shared_buffers
gpconfig -s max_connections
gpconfig -s max_prepared_transactions
gpconfig -s track_activity_query_size
gpconfig -s gp_resqueue_priority_cpucores_per_segment
gpconfig -s gp_enable_global_deadlock_detector
gpconfig -s gp_segworker_relative_priority
gpconfig -s runaway_detector_activation_percent
gpconfig -s gp_vmem_protect_segworker_cache_limit
```
Periodically clean master logs, or log only DDL statements

No database restart required!
```bash
gpconfig -s log_statement
gpconfig -c log_statement -v ddl --masteronly
gpstop -u
gpconfig -s log_statement

# Scheduled cleanup (crontab entry)
0 1 * * * find /opt/greenplum/data/master/gpseg-1/pg_log/ -name "gpdb*.csv" -ctime +7 -exec rm -rf {} \;

systemctl status crond
```
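One way to install that entry into gpadmin's crontab non-interactively (a sketch reusing the log path above):

```bash
(crontab -l 2>/dev/null; echo '0 1 * * * find /opt/greenplum/data/master/gpseg-1/pg_log/ -name "gpdb*.csv" -ctime +7 -exec rm -rf {} \;') | crontab -
crontab -l
```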
Reference: https://www.dbaup.com/peizhicrontabdingshiqingligreenplumrizhiwenjian.html
Other parameter tuning (no restart required)
```bash
gpconfig -c work_mem -v 2GB
gpconfig -c temp_buffers -v 4GB
gpconfig -c maintenance_work_mem -v 2GB
gpconfig -c effective_cache_size -v 60GB -m 100GB
gpconfig -c max_statement_mem -v 6GB
# gpconfig -c statement_mem -v 2GB
gpconfig -c gp_workfile_compression -v on
gpconfig -c gp_autostats_mode -v on_change
gpconfig -c gp_autostats_on_change_threshold -v 10000
gpconfig -c gp_vmem_idle_resource_timeout -v 10s
gpconfig -c stats_queue_level -v on
# gpconfig -c optimizer_enable_right_outer_join -v off
gpconfig -c optimizer_mdcache_size -v 10GB
gpconfig -c gp_cached_segworkers_threshold -v 20
gpconfig -c gpcc.enable_save_full_query_text -v on
gpconfig -c statement_timeout -v '3h'
gpconfig -c lock_timeout -v '5min'
# In case of packet loss:
# gpconfig -c gp_interconnect_min_retries_before_timeout -v 30

gpstop -u

# Verify
gpconfig -s work_mem
gpconfig -s statement_mem
gpconfig -s temp_buffers
gpconfig -s maintenance_work_mem
gpconfig -s effective_cache_size
gpconfig -s max_statement_mem
gpconfig -s gp_workfile_compression
gpconfig -s gp_vmem_idle_resource_timeout
gpconfig -s stats_queue_level
gpconfig -s gp_autostats_mode
gpconfig -s gp_autostats_on_change_threshold
gpconfig -s optimizer_enable_right_outer_join
gpconfig -s optimizer_mdcache_size
gpconfig -s gp_cached_segworkers_threshold
gpconfig -s gpcc.enable_save_full_query_text
gpconfig -s gp_interconnect_min_retries_before_timeout
gpconfig -s statement_timeout
gpconfig -s lock_timeout
```
Notes:

1. statement_mem misbehaved in tests on 6.24.4: setting it prevented GP from starting. It tested fine again on 6.25.1, then errored again on 6.27.
2. work_mem may be removed in a later release and is not recommended: WARNING: "work_mem": setting is deprecated, and may be removed in a future release.
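Given those version-specific results, one defensive option is to gate the optional statement_mem setting on the server version. The sketch below is hypothetical, and its version list reflects only the tests mentioned above:

```bash
# Pull the Greenplum version out of version(), e.g. "6.25.1"
VER=$(psql -At -c "select version()" | grep -oP 'Greenplum Database \K[0-9.]+')
case "$VER" in
  6.25.*) gpconfig -c statement_mem -v 2GB ;;       # worked in the 6.25.1 test
  *)      echo "skipping statement_mem on $VER" ;;  # failed on 6.24.4 and 6.27
esac
```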
Consider switching UDP to TCP
1. Test against your actual workload: in one environment with 200 segments, after switching to TCP some very simple SQL statements would hang and never execute; switching back to UDP restored normal behavior.
2. With UDP, watch out for UDP packet loss. A sketch of both the switch and a loss check follows.
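If you do try TCP, the switch goes through the gp_interconnect_type parameter (valid values include udpifc, the GP 6 default, and tcp) and requires a restart. A hedged sketch, plus a rough UDP packet-error check (netstat is from net-tools and may need installing on minimal systems):

```bash
gpconfig -s gp_interconnect_type
gpconfig -c gp_interconnect_type -v tcp
gpstop -M fast -ra
# Revert if simple queries start hanging, as described above:
# gpconfig -c gp_interconnect_type -v udpifc

# Rough per-host check for UDP drops
netstat -su | grep -i error
```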