How to Configure MariaDB MaxScale Master-Slave with Galera Cluster

In this article we will install a MariaDB Galera cluster together with MaxScale, the database proxy by MariaDB Corporation. MaxScale is an intelligent database proxy that can route database statements from the cluster to a single server. Unlike HAProxy, MaxScale uses the asynchronous I/O of the Linux kernel, which should help with performance.

The cluster we are going to build will use read-write splitting, meaning that MaxScale sends all writes to the master, and the cluster replicates them to all nodes, from which they can then be read.

Main Features of MaxScale

  1. If any database server fails, the connection will automatically be re-created to another node
  2. Connections can be dynamically added to or removed from a session
  3. MaxScale will route client requests across a number of database servers
  4. MaxScale can now replicate binlog events from MariaDB to Kafka. Data is sent to data lake environments like Hadoop or other data warehouses, which allows users to leverage real-time data for machine learning or real-time analytics

Installing the cluster

The first thing is to set up your hosts file with the hostnames and private IPs of all your hosts. This is needed so that the nodes can communicate over the private network. Here is my hosts file (/etc/hosts) on all 4 servers:

10.100.0.1 dwh-db1
10.100.0.2 dwh-db2
10.100.0.3 dwh-db3
10.100.0.5 dwh-lb1

The first three will be for the Galera cluster, and the fourth for the MaxScale proxy.
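Optionally, you can verify that the names resolve and that the nodes can reach each other over the private network (run from any of the hosts; dwh-db2 and dwh-lb1 below are just examples, test whichever peers you like):

ping -c 2 dwh-db2
ping -c 2 dwh-lb1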

At this link: https://downloads.mariadb.org/mariadb/repositories/#mirror=one
you can configure the yum or apt repository for the installation of MariaDB 10.1 (in this release the Galera cluster is integrated).
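As an illustration, on CentOS 7 the repository configurator generates a file similar to the one below; use the exact output of the page above for your distribution and mirror, as the baseurl shown here is only an assumed example:

# /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1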

Update the sources list (yum or apt)

Debian Like:
apt-get update
RedHat Like:
yum update

And then install MariaDB:

Debian Like:
apt install mariadb-server rsync
Redhat Like:
yum install MariaDB-server.x86_64 rsync
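Once the packages are installed, you can quickly confirm the installed version on each node:

mysql --version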

Configuring and building the cluster

Next we need to edit the configuration files and build the cluster. dwh-db1 will be the node used for bootstrapping the cluster, and the other nodes will join it. So let's first edit the configuration files on all three nodes.

vim /etc/mysql/my.cnf
or
nano /etc/mysql/my.cnf

There we need to find the [galera] section and change these lines:

[galera]
# Mandatory settings
wsrep_on                       = ON
wsrep_provider                 = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address          = gcomm://10.100.0.1,10.100.0.2,10.100.0.3
wsrep_provider_options         = "gmcast.mcast_addr=239.192.0.11;gcache.size=1024M"
binlog_format = ROW
wsrep_forced_binlog_format = ROW
wsrep_gtid_domain_id = 1 # This is useful if you are also using asynchronous replication
wsrep_gtid_mode = 1 # This is useful if you are also using asynchronous replication
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_debug = OFF
wsrep_log_conflicts = 1
wsrep_slave_threads = 4

wsrep_cluster_name = "DB_CLUSTER"
wsrep_node_address = "10.100.0.1" # Change this for every node
wsrep_node_name = "dwh-db1" # Change this for every node
wsrep_node_incoming_address = "10.100.0.1" # Change this for every node
wsrep_sst_method = rsync

The parameters gmcast.mcast_addr (not usable in every network setup) and gcache.size are tuning options.
With gmcast.mcast_addr we optimize network traffic: by default the primary node sends unicast packets to every node, while with multicast it is enough to send a single packet to reach each node.
gcache.size defines the disk space you want the node to use for caching write-sets (the default value is 128M). This parameter defines the amount of disk space allocated for the ring buffer storage; the node allocates this space when it starts the database server.
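Once the cluster is running (see the next step), you can verify that these provider options were picked up; the exact list printed depends on your Galera version:

mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G"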

After this is done, we need to start the cluster. If the database server happens to be running, stop it on all three nodes.

systemctl stop mariadb

On the first node run:

galera_new_cluster

On the other two nodes run:

systemctl start mariadb

Back on the first node, we need to set the root password, so we will run:

mysql_secure_installation

After you have run that script, you can type this command on any node:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

It should give you output similar to this:
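With all three nodes joined, the value should be 3; the output will look something like this:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+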

MaxScale Proxy installation and preparing the cluster

When we have the cluster up and running, we can turn to dwh-lb1 and install MaxScale on it. MaxScale is only supported on the x86_64 architecture for now. Let's download the MaxScale RPM package with wget (I'm using CentOS 7):

wget https://downloads.mariadb.com/MaxScale/2.0.5/rhel/7/x86_64/maxscale-2.0.5-1.rhel.7.x86_64.rpm

And after the download, we can install it:

yum install maxscale-2.0.5-1.rhel.7.x86_64.rpm

MaxScale has been installed. Now we need to enter the MySQL prompt on our Galera cluster again in order to create the maxscale user and grant it enough privileges for MaxScale to operate.

mysql -u root -p

And in the MySQL prompt type this, line by line:

CREATE USER 'maxscale'@'%' IDENTIFIED BY 'your-password-here';
Query OK, 0 rows affected (0.00 sec)

GRANT SELECT ON mysql.db TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SELECT ON mysql.user TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

N.B.: The CREATE USER statement will be propagated to all nodes of the cluster, but the grants will not be propagated, so you must apply the GRANT statements on each node.
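A quick way to confirm the user and its privileges on every node is:

SHOW GRANTS FOR 'maxscale'@'%';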

Configuring MaxScale

Let's explain how this config file works. The first part, under [maxscale], will respectively set the number of worker threads to 4, turn off logging to /var/log/syslog, turn on logging to /var/log/maxscale, turn on warning logging, logging to memory and notice logging, and leave the log_info and log_debug options (used for debugging the code) turned off.

The next important section is [Galera Monitor]. There we need to concentrate on several lines. The line that says servers= needs to be filled with the names of the servers. These are not hostnames; they are the names MaxScale gives the servers further down in this config file. In our case we will use server1 to server3. user is the username we created in the previous section, maxscale in our case. passwd is whatever you set as the password for the maxscale user.
The Galera Monitor will pick one of our three nodes as master and the others as slaves. The node with the lowest wsrep_local_index will be selected as master. If the cluster configuration changes, a new selection may happen and a node with a lower index will be selected as master. If you don't want the master to change this way, you can use the option disable_master_failback and set it to 1, as in our config file. This way the master won't change even if a new node with a lower index joins the cluster.
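If you are curious which node currently has the lowest index (and would therefore be picked as master), you can check it on each node:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_index'"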

Then back up the original config file:

mv /etc/maxscale.cnf /etc/maxscale.cnf.bk

After the file has been backed up and moved, let's make a new file from scratch:

vim /etc/maxscale.cnf

There, you can use this as a skeleton for the configuration, except of course you need to change the parts that are specific to your environment (server addresses and passwords):

[maxscale]
threads=4

# LOG
#
logdir=/var/log/maxscale/
syslog=0 #do not log to syslog
maxlog=1 #log to /var/log/maxscale
log_to_shm=1 #log to memory
log_warning=1 #log warnings
log_notice=1 #log notices
#log_info=1 #log info
#log_debug=1 #log full debug for dev only

# Server definitions
#
# Set the address of the server to the network
# address of a MySQL server.
#

[server1]
type=server
address=dwh-db1
port=3306
protocol=MySQLBackend

[server2]
type=server
address=dwh-db2
port=3306
protocol=MySQLBackend

[server3]
type=server
address=dwh-db3
port=3306
protocol=MySQLBackend

# Monitor for the servers
#
# This will keep MaxScale aware of the state of the servers.
# MySQL Monitor documentation:
# https://github.com/mariadb-corporation/MaxScale/blob/master/Documentation/Monitors/MySQL-Monitor.md

[Galera Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
passwd=enter your password
monitor_interval=2000
disable_master_failback=1 #Once the master fails, don't fall back to the old master
available_when_donor=1 #While a Galera node is in donor mode, it remains available
# Service definitions

#
# Service Definition for a read-only service and
# a read/write splitting service.
#

# ReadConnRoute documentation:
# https://github.com/mariadb-corporation/MaxScale/blob/master/Documentation/Routers/ReadConnRoute.md

[Read-Only Service]
type=service
router=readconnroute
servers=server1,server2,server3
user=maxscale
passwd=enter your password
router_options=synced

# ReadWriteSplit documentation:
# https://github.com/mariadb-corporation/MaxScale/blob/master/Documentation/Routers/ReadWriteSplit.md

[Read-Write Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
passwd=enter your password
max_slave_connections=100%

[MaxAdmin Service]
type=service
router=cli

# Listener definitions for the services
#
# These listeners represent the ports the
# services will listen on.
#

[Read-Only Listener]
type=listener
service=Read-Only Service
protocol=MySQLClient
port=3306

[Read-Write Listener]
type=listener
service=Read-Write Service
protocol=MySQLClient
port=3307

[MaxAdmin Listener]
type=listener
service=MaxAdmin Service
protocol=maxscaled
socket=default

After this has been saved, you can start the MaxScale service:

systemctl start maxscale.service
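Optionally, enable the service so it comes back up after a reboot:

systemctl enable maxscale.service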

And test whether it is working

maxadmin list servers
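You should see the three backends listed, one of them flagged as Master; the output will look roughly like this (connection counts and statuses will vary):

Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
server1            | dwh-db1         |  3306 |           0 | Master, Synced, Running
server2            | dwh-db2         |  3306 |           0 | Slave, Synced, Running
server3            | dwh-db3         |  3306 |           0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------

From a client machine you can then connect through the read-write listener we defined on port 3307, using any database user that exists on the cluster (the user name below is just a placeholder):

mysql -u youruser -p -h dwh-lb1 -P 3307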

Conclusion

We have successfully installed the MaxScale database proxy as a load balancer for our Galera cluster running on 3 nodes, with a fourth node for MaxScale. MaxScale is a good solution for large clusters; today we built the smallest possible configuration, but scaling out from here is possible. I hope this article was useful as an introduction to MaxScale configuration. Thank you for reading and have a good day.

N.B.: With more than three nodes you must purchase a MariaDB Enterprise license.
