Environment
This article describes how to install and configure a three-node Percona XtraDB Cluster on Debian servers using packages from the Percona repositories.
- Node 1 hostname: pxc1, IP address: 192.168.10.61
- Node 2 hostname: pxc2, IP address: 192.168.10.62
- Node 3 hostname: pxc3, IP address: 192.168.10.63
Prerequisites
The procedure described in this article assumes the following:
- Debian 10.10 is installed on all three nodes (Ubuntu 20.04.3 works as well).
- The firewall on all nodes allows connections on ports 3306, 4444, 4567, and 4568.
- The MySQL AppArmor profile is disabled.
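The firewall and AppArmor prerequisites can be sketched as follows. This is a sketch only: it assumes ufw is the firewall front end and that the profile is installed as usr.sbin.mysqld, so adjust both to your environment:

```shell
# Open the four ports PXC needs (assumes ufw; write iptables/nftables rules otherwise)
sudo ufw allow 3306/tcp   # MySQL client traffic
sudo ufw allow 4444/tcp   # SST (full State Snapshot Transfer)
sudo ufw allow 4567/tcp   # Galera group replication
sudo ufw allow 4568/tcp   # IST (Incremental State Transfer)

# Disable the MySQL AppArmor profile (profile name is an assumption; check /etc/apparmor.d/)
sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
```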
Step 1. Install PXC
sudo vim /etc/apt/sources.list  # add the following 163 mirror entries
deb http://mirrors.163.com/debian/ buster main non-free contrib
deb http://mirrors.163.com/debian/ buster-updates main non-free contrib
deb http://mirrors.163.com/debian/ buster-backports main non-free contrib
deb http://mirrors.163.com/debian-security/ buster/updates main non-free contrib
sudo wget https://repo.percona.com/apt/percona-release_latest.buster_all.deb
sudo chmod 755 percona-release_latest.buster_all.deb
sudo dpkg -i percona-release_latest.buster_all.deb
sudo apt-get update
sudo apt-get -y install percona-xtradb-cluster-57
sudo ss -tunlp  # check the listening ports
sudo service mysql stop  # stop the service
Step 2. Configure the first node
1. Make sure the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the first node (pxc1) contains the following three changes:
sudo vim /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf  # edit the configuration file; three changes are needed
# Cluster connection URL contains IPs of nodes
wsrep_cluster_address=gcomm://192.168.10.61,192.168.10.62,192.168.10.63  # 1. address list of all cluster nodes
# Node IP address
wsrep_node_address=192.168.10.61  # 2. this node's IP (pxc1)
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"  # 3. uncomment this line; keep the default password or change it
2. Start the first node with the following command:
sudo /etc/init.d/mysql bootstrap-pxc
3. After the first node starts, check the cluster status with:
mysql> show status like 'wsrep%';
+----------------------------------+----------------------------------------+
| Variable_name | Value |
+----------------------------------+----------------------------------------+
| wsrep_local_state_uuid | fc2d7eb9-4944-11ec-92cc-87002c885d25 |
| wsrep_protocol_version | 9 |
| wsrep_last_applied | 592 |
| wsrep_last_committed | 592 |
| wsrep_replicated | 6 |
| wsrep_replicated_bytes | 1440 |
| wsrep_repl_keys | 6 |
| wsrep_repl_keys_bytes | 192 |
| wsrep_repl_data_bytes | 850 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 600 |
| wsrep_received_bytes | 520083 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 141, 141 ] |
| wsrep_flow_control_interval_low | 141 |
| wsrep_flow_control_interval_high | 141 |
| wsrep_flow_control_status | OFF |
| wsrep_flow_control_active | false |
| wsrep_flow_control_requested | false |
| wsrep_cert_deps_distance | 52.072635 |
| wsrep_apply_oooe | 0.604730 |
| wsrep_apply_oool | 0.008446 |
| wsrep_apply_window | 1.788851 |
| wsrep_apply_waits | 35 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.005068 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 67 |
| wsrep_cert_bucket_count | 210 |
| wsrep_gcache_pool_size | 545912 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | ,192.168.10.61:3306,192.168.10.62:3306,192.168.10.63:3306 |
| wsrep_cluster_weight | 3 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | f2551ef3-49ac-11ec-a7d6-5f6c289b3167 |
| wsrep_gmcast_segment | 0 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | fc2d7eb9-49a7-11ec-92cc-87fc2c885d25 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 2 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.53(re33d74b) |
| wsrep_ready | ON |
+----------------------------------+----------------------------------------+
75 rows in set (0.00 sec)
mysql>
This output shows that the cluster has been successfully bootstrapped.
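Rather than reading the full listing every time, you can query just the variables that indicate cluster health. A quick sketch; it assumes you can connect as root from the node's shell:

```shell
# Show only the health-critical wsrep status variables
mysql -e "SHOW STATUS WHERE Variable_name IN
  ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment','wsrep_ready');"
```

On a healthy bootstrapped node you should see wsrep_cluster_status = Primary, wsrep_local_state_comment = Synced, and wsrep_ready = ON, as in the table above.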
To perform State Snapshot Transfers with XtraBackup, set up a new user with the appropriate privileges:
mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
mysql@pxc1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@pxc1> FLUSH PRIVILEGES;
Step 3. Configure the second node
1. Make sure the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the second node (pxc2) contains the following three changes:
sudo vim /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf  # edit the configuration file; three changes are needed
# Cluster connection URL contains IPs of nodes
wsrep_cluster_address=gcomm://192.168.10.61,192.168.10.62,192.168.10.63  # 1. address list of all cluster nodes
# Node IP address
wsrep_node_address=192.168.10.62  # 2. this node's IP (pxc2)
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"  # 3. uncomment this line; keep the default password or change it
2. Start the second node with the following command:
sudo /etc/init.d/mysql start
3. After the second node starts, check the cluster status the same way:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | b598a22e-ace3-11e2-0800-3e90eb9cd5be |
...
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
...
| wsrep_cluster_size | 2 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
...
| wsrep_ready | ON |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
This output confirms that the second node has joined the cluster.
Step 4. Configure the third node
1. Make sure the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the third node (pxc3) contains the following three changes:
sudo vim /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf  # edit the configuration file; three changes are needed
# Cluster connection URL contains IPs of nodes
wsrep_cluster_address=gcomm://192.168.10.61,192.168.10.62,192.168.10.63  # 1. address list of all cluster nodes
# Node IP address
wsrep_node_address=192.168.10.63  # 2. this node's IP (pxc3)
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"  # 3. uncomment this line; keep the default password or change it
2. Start the third node with the following command:
sudo /etc/init.d/mysql start
3. After the third node starts, check the cluster status the same way:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | b118af3e-ace3-11e2-0800-3e90eb9cd5d3 |
...
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
...
| wsrep_cluster_size | 3 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
...
| wsrep_ready | ON |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
This output confirms that the third node has joined the cluster.
Verification
To test that replication keeps data in sync, let's create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.
1. Create a new database on the second node:
mysql@pxc2> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec)
2. Create a table on the third node:
mysql@pxc3> USE percona;
Database changed
mysql@pxc3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
3. Insert records on the first node:
mysql@pxc1> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec)
4. Retrieve all rows from the table on the second node:
mysql@pxc2> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
| 1 | percona1 |
+---------+-----------+
1 row in set (0.00 sec)
This simple procedure confirms that all nodes in the cluster are synchronized and working as expected.
Additional notes: a database cluster should be deployed with an odd number of nodes; with an even number, a network partition can easily lead to split-brain. If server resources are tight (a common case is two machines in an active-active pair), you can use a three-node "two masters plus one arbitrator" topology. Galera Arbitrator is a member of the Percona XtraDB Cluster; it does not need a dedicated server and can be installed on a machine running other applications, as long as that machine has a good network connection to the cluster.
Galera Arbitrator is a cluster member that participates in voting, but not in the actual replication (although it receives the same data as the other nodes). It is also not included in flow control calculations.
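The odd-number rule comes down to quorum arithmetic: a partition stays Primary only if it holds a strict majority of the votes, which is why an even split deadlocks. A small illustrative shell function (not part of PXC, just a sketch of the arithmetic) makes the cases concrete:

```shell
# Prints "yes" if SURVIVING members out of TOTAL form a strict majority
has_quorum() { [ $(( 2 * $1 )) -gt "$2" ] && echo yes || echo no; }

has_quorum 2 3   # yes: a 3-node cluster survives losing one node
has_quorum 1 2   # no:  a 2-node cluster splits 1/1 with no majority (split-brain)
has_quorum 2 4   # no:  an even cluster can split evenly, and neither half wins
has_quorum 3 4   # yes: a 4-node cluster still tolerates only one failure, same as 3
```

This is why garbd is worth adding to a two-node setup: it contributes the tie-breaking third vote without storing any data.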
The following shows how to add a Galera Arbitrator node to an existing cluster:
1. Install Galera Arbitrator:
sudo vim /etc/apt/sources.list  # add the 163 mirror entries
deb http://mirrors.163.com/debian/ buster main non-free contrib
deb http://mirrors.163.com/debian/ buster-updates main non-free contrib
deb http://mirrors.163.com/debian/ buster-backports main non-free contrib
deb http://mirrors.163.com/debian-security/ buster/updates main non-free contrib
sudo wget https://repo.percona.com/apt/percona-release_latest.buster_all.deb
sudo chmod 755 percona-release_latest.buster_all.deb
sudo dpkg -i percona-release_latest.buster_all.deb
sudo apt-get update
sudo apt install percona-xtradb-cluster-garbd-5.7
2. Configure Galera Arbitrator by editing its configuration file (/etc/default/garb on Debian):
# Copyright (C) 2012 Codership Oy
# This config file is to be sourced by garb service script.
# REMOVE THIS AFTER CONFIGURATION  <- delete this line, otherwise garbd will fail to start
# A comma-separated list of node addresses (address[:port]) in the cluster
GALERA_NODES="192.168.10.61:4567,192.168.10.62:4567,192.168.10.63:4567"  # addresses of the cluster nodes
# Galera cluster name, should be the same as on the rest of the nodes.
GALERA_GROUP="pxc-cluster"  # must match the cluster name used on the other nodes
# Optional Galera internal options string (e.g. SSL settings)
# see http://galeracluster.com/documentation-webpages/galeraparameters.html
# GALERA_OPTIONS=""
# Log file for garbd. Optional, by default logs to syslog
# Deprecated for CentOS7, use journalctl to query the log for garbd
# LOG_FILE=""
3. Start Galera Arbitrator and check its status:
sudo service garbd start
[ ok ] Starting /usr/bin/garbd: :.
root@server:~# service garbd status
[ ok ] garb is running.
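To confirm the arbitrator actually joined, check the member count from any data node (assumes shell access to mysql as root on that node); garbd counts as a member even though it stores no data:

```shell
# On a data node: the cluster size should have grown by one after garbd started
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```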
Note: the line "# REMOVE THIS AFTER CONFIGURATION" must be deleted before starting the service, otherwise garbd will fail to start.
References:
https://www.percona.com/doc/percona-xtradb-cluster/5.7/install/apt.html#apt
https://www.percona.com/doc/percona-xtradb-cluster/5.7/howtos/garbd_howto.html