
How to Setup Mariadb Galera Cluster on Centos

MariaDB is a relational database management system (RDBMS), and MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It is available on Linux only and supports only the XtraDB/InnoDB storage engines. When Galera Cluster is in use, you can direct reads and writes to any node, and you can lose any individual node without interrupting operations and without the need for complex failover procedures.

  • At a high level, Galera Cluster comprises a database server (MySQL or MariaDB) that uses the Galera Replication Plugin to manage replication. In more detail, the MySQL replication plugin API has been extended to provide all the information and hooks required for true multi-master, synchronous replication. This extended API is called the Write-Set Replication API, or wsrep API.
    Through the wsrep API, Galera Cluster performs certification-based replication. A transaction for replication, the write-set, contains not only the database rows to replicate but also information on all the locks that were held by the database during the transaction. Each node certifies the replicated write-set against other write-sets in the applier queue. If there are no conflicting locks, the write-set is applied; at this point the transaction is considered committed, after which each node applies it to the tablespace.
    Benefits of Galera Cluster
  • Automatic Node Provisioning No need to manually back up the database and copy it to the new node.
  • Transparent to Applications Requires no (or minimal) changes to the application.
  • True Multi-master Read and write to any node at any time.
  • Synchronous Replication No slave lag, no data is lost at node crash.
  • Tightly Coupled All nodes hold the same state. No diverged data between nodes allowed.
  • Multi-threaded Slave For better performance. For any workload.
  • No Master-Slave Failover Operations or Use of VIP.
  • Hot Standby No downtime during failover (since there is no failover)
  • Supports InnoDB.
  • No Read and Write Splitting Needed.
    This article explains how to set up MariaDB Galera Cluster 10.0 with 3 nodes running CentOS 6.5 x86_64, resulting in an HA (high-availability) database cluster.
    CLUSTER DETAILS
    We are using the following three freshly installed VMs, each running a minimal install of CentOS 6.5 x86_64.
    Cluster db node 1 has hostname db-node1 and IP address 10.1.2.1
    Cluster db node 2 has hostname db-node2 and IP address 10.1.2.2
    Cluster db node 3 has hostname db-node3 and IP address 10.1.2.3
    Step 1: Add MariaDB Repositories
    Create a new MariaDB repository file /etc/yum.repos.d/mariadb.repo with the content below.
    For CentOS 6 – 64bit:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

For CentOS 6 – 32bit:

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
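Creating this file by hand is fine, but it can also be scripted. A minimal sketch (64-bit variant shown; swap in centos6-x86 for 32-bit). The /tmp path here is for illustration only — write to /etc/yum.repos.d/mariadb.repo on the real hosts, after which `yum repolist` should show the mariadb repo:

```shell
# Write the repo file in one shot (demo path; use /etc/yum.repos.d/mariadb.repo for real).
REPO=/tmp/mariadb.repo
cat > "$REPO" <<'EOF'
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF
# Quick sanity check that the file landed as expected.
grep baseurl "$REPO"
```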

Step 2 – Put SELinux into permissive mode.
Before starting the setup, put SELinux into permissive mode on all nodes:

[root@linuxpcfix ~]# setenforce 0
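Note that `setenforce 0` only changes the running mode and does not survive a reboot. A minimal sketch to persist the setting, shown here against a throwaway copy of the file — point CONF at /etc/selinux/config on the real nodes:

```shell
# Demo on a copy of the file; use CONF=/etc/selinux/config on the real hosts.
CONF=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"   # sample file contents
# Switch the boot-time mode to permissive.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$CONF"
grep '^SELINUX=' "$CONF"
```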

Step 3 – Install MariaDB Galera Cluster 10.0 software
Note: If you installed CentOS 6 with minimal packages, make sure you install the socat package from the EPEL repository before installing the MariaDB Galera Cluster 10.0 software.
You can install the EPEL repository with the following commands (for x86_64):

[root@linuxpcfix ~]#yum install epel-release
[root@linuxpcfix ~]#yum install socat

You can also use the following command to install socat directly from the EPEL repository:

[root@linuxpcfix ~]#yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/socat-1.7.2.3-1.el6.x86_64.rpm

Now install the MariaDB Galera Cluster 10.0 software by running the following command on all nodes:

[root@linuxpcfix ~]#yum install MariaDB-Galera-server MariaDB-client rsync galera

Step 4: Configure MariaDB security
Start the mysql daemon (MariaDB still uses the mysql init script):

[root@linuxpcfix ~]#service mysql start

Run the mysql_secure_installation script to improve security. Run the following command on all nodes:

[root@linuxpcfix ~]#/usr/bin/mysql_secure_installation

I used the password 'mydbpassword' and accepted all defaults (answered yes to all questions).
Step 5 – Create MariaDB Galera Cluster users
Now we have to create a new sst_user that is able to access the database. The 'sst_user' is the user one node uses to authenticate to another node during the State Snapshot Transfer (SST) phase. Run the following commands on all nodes:

[root@linuxpcfix ~]#mysql -u root -p
mysql> DELETE FROM mysql.user WHERE user='';
mysql> GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'mydbpassword';
mysql> GRANT USAGE ON *.* TO sst_user@'%' IDENTIFIED BY 'mydbpassword';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user@'%';
mysql> FLUSH PRIVILEGES;
mysql> quit

It is recommended to change '%' to the hostname or IP addresses from which these users may access the database, since '%' matches any host: as written, root and sst_user are allowed to connect from anywhere.
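For example, a tighter variant of the sst_user grants, restricted to the cluster subnet — a sketch assuming the 10.1.2.x addresses from the cluster details above; adjust the host pattern to your network:

```
mysql> GRANT USAGE ON *.* TO 'sst_user'@'10.1.2.%' IDENTIFIED BY 'mydbpassword';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'sst_user'@'10.1.2.%';
mysql> FLUSH PRIVILEGES;
```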
Step 6 – Create the MariaDB Galera Cluster config
First stop the mysql services on all nodes:

[root@linuxpcfix ~]#service mysql stop

Next, we have to add the MariaDB Galera Cluster configuration shown below to /etc/my.cnf.d/server.cnf. Note: use the same configuration on all nodes, adjusting the values as required for db2 and db3:

[root@linuxpcfix ~]# cat /etc/my.cnf.d/server.cnf
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.1.2.1,10.1.2.2,10.1.2.3"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.1.2.1'
wsrep_node_name='db-node1'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:mydbpassword

IMPORTANT NOTE: when creating this file on db2 and db3, do not forget to adjust the wsrep_node_address and wsrep_node_name variables.
On db2 :

wsrep_node_address='10.1.2.2'
wsrep_node_name='db-node2'

On db3 :

wsrep_node_address='10.1.2.3'
wsrep_node_name='db-node3'
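Before starting the cluster, make sure the nodes can reach each other: Galera uses more ports than just 3306. The sketch below prints iptables rules for the ports Galera needs (3306 for MySQL clients, 4567 for group communication, 4568 for incremental state transfer, 4444 for SST via rsync), assuming the 10.1.2.0/24 subnet from the cluster details; it is a dry run with `echo` — drop the `echo` to apply the rules on each node:

```shell
# Print (dry run) the firewall rules for the ports Galera uses.
# Assumes the 10.1.2.0/24 node subnet; adjust for your network.
rules=$(for port in 3306 4567 4568 4444; do
  echo "iptables -A INPUT -p tcp --dport $port -s 10.1.2.0/24 -j ACCEPT"
done)
printf '%s\n' "$rules"
```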

Step 7 – Initialize the first cluster node
Start MariaDB with the special '--wsrep-new-cluster' option. Do this on node db1 only, so that the primary component of the cluster is initialized:

[root@linuxpcfix ~]# /etc/init.d/mysql start --wsrep-new-cluster

Check the status by running the following command on node db1 only:

[root@linuxpcfix ~]# mysql -u root -p -e "show status like 'wsrep%'"

Some important lines in the output are the following:

wsrep_local_state_comment | Synced        <-- cluster is synced
wsrep_incoming_addresses  | 10.1.2.1:3306 <-- node db1 is a provider
wsrep_cluster_size        | 1             <-- cluster consists of 1 node
wsrep_ready               | ON            <-- good :)
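The same check can be scripted instead of eyeballed. A sketch that parses the status output — run here against a captured sample so it works anywhere; on a live node, feed it the output of `mysql -u root -p -e "show status like 'wsrep%'"` instead:

```shell
# Captured sample of the relevant status lines (replace with live output).
status="wsrep_local_state_comment Synced
wsrep_cluster_size 1
wsrep_ready ON"
# Pull out the fields we care about.
ready=$(printf '%s\n' "$status" | awk '$1 == "wsrep_ready" {print $2}')
size=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_size" {print $2}')
if [ "$ready" = "ON" ]; then
  echo "node healthy, cluster size $size"
else
  echo "node NOT ready"
fi
```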

Step 8 – Add the other cluster nodes
Verify that nodes db2 and db3 have the correct configuration in /etc/my.cnf.d/server.cnf.
Once the configuration is in place, run the command below on db2 to join it to the cluster:

[root@linuxpcfix ~]# service mysql start

Now check the cluster status with the command below:

[root@linuxpcfix ~]# mysql -u root -p -e "show status like 'wsrep%'"

You will see that node db2 has joined: the cluster size is now '2' and the IP address of node db2 is listed:

| wsrep_local_state_comment | Synced |
| wsrep_incoming_addresses | 10.1.2.1:3306,10.1.2.2:3306 |
| wsrep_cluster_size | 2 |
| wsrep_connected | ON |
| wsrep_ready | ON |

Repeat the same command on node db3 to make it a member of the cluster.

[root@linuxpcfix ~]# service mysql start

Check what has changed in the cluster status by executing the following command on, for example, db1:

[root@linuxpcfix ~]# mysql -u root -p -e "show status like 'wsrep%'"

Now you will see that node db3 has joined: the cluster size is '3' and the IP address of node db3 is listed:

| wsrep_local_state_comment | Synced |
| wsrep_incoming_addresses | 10.1.2.3:3306,10.1.2.1:3306,10.1.2.2:3306 |
| wsrep_cluster_size | 3 |
| wsrep_connected | ON |
| wsrep_ready | ON |
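To avoid logging in to each node in turn, the status check can be looped over all three node IPs — a sketch assuming the remote root access granted in Step 5, shown as a dry run with `echo`; remove the `echo` to execute for real:

```shell
# Print (dry run) a cluster-size check against each node IP.
cmds=$(for host in 10.1.2.1 10.1.2.2 10.1.2.3; do
  echo "mysql -h $host -u root -p -e \"show status like 'wsrep_cluster_size'\""
done)
printf '%s\n' "$cmds"
```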

Step 9 – Verify replication
The cluster is now running. Let's check whether replication is working. On node db1, create a database 'myclustertestdb' and insert a row by running the following commands:

[root@linuxpcfix ~]# mysql -u root -p -e 'CREATE DATABASE myclustertestdb;'
[root@linuxpcfix ~]# mysql -u root -p -e 'CREATE TABLE myclustertestdb.mycluster ( id INT NOT NULL AUTO_INCREMENT, name VARCHAR(50), ipaddress VARCHAR(20), PRIMARY KEY(id));'
[root@linuxpcfix ~]# mysql -u root -p -e 'INSERT INTO myclustertestdb.mycluster (name, ipaddress) VALUES ("db1", "10.1.2.1");'
Check if the database, table and data exist:
[root@linuxpcfix ~]# mysql -u root -p -e 'SELECT * FROM myclustertestdb.mycluster;'
Enter password:
+----+------+-----------+
| id | name | ipaddress |
+----+------+-----------+
|  2 | db1  | 10.1.2.1  |
+----+------+-----------+
(The id may not be 1: Galera offsets auto-increment values per node to avoid conflicts.)

Now do the check on node db2:

mysql -u root -p -e 'SELECT * FROM myclustertestdb.mycluster;'
Enter password:
+----+------+-----------+
| id | name | ipaddress |
+----+------+-----------+
|  2 | db1  | 10.1.2.1  |
+----+------+-----------+

Now do the same check on node db3:

mysql -u root -p -e 'SELECT * FROM myclustertestdb.mycluster;'
Enter password:
+----+------+-----------+
| id | name | ipaddress |
+----+------+-----------+
|  2 | db1  | 10.1.2.1  |
+----+------+-----------+

From these outputs we can confirm that everything was successfully replicated from node db1 to all other nodes.
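Replication should work in every direction, not just from db1. A sketch of the reverse check — insert a row on db3 and read it back on db1 (dry run with `echo`; drop the `echo` on the live nodes):

```shell
# Print (dry run) the commands for the reverse replication check.
cmds=$(
  echo "mysql -u root -p -e 'INSERT INTO myclustertestdb.mycluster (name, ipaddress) VALUES (\"db3\", \"10.1.2.3\");'  # run on db-node3"
  echo "mysql -u root -p -e 'SELECT * FROM myclustertestdb.mycluster;'  # run on db-node1: both rows should appear"
)
printf '%s\n' "$cmds"
```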
