MariaDB Galera Cluster Replication

December 5, 2018

Extending the topic of database auto-clustering, we'd like to cover MariaDB Galera Cluster, a high-availability synchronous replication solution that provides:

  • True multi-master topology
  • Automatic new node provisioning
  • No data loss when nodes crash
  • Data replicas remain consistent
  • Automatic membership control
  • No complex and time-consuming failovers
  • Parallel transaction execution on all cluster nodes
  • No slave lag
  • No lost transactions
  • Reads/writes scalability
  • Smaller client latencies
  • Support of multi-cloud and multi-region deployments

According to the official documentation, Galera implements so-called certification-based replication. The basic idea is that the transaction to be replicated – the write set – not only contains the database rows to replicate, it also includes information about all the locks that were held by the database (i.e. InnoDB) during the transaction. Each node then certifies the replicated write set against other write sets in the applier queue, and if there are no conflicting locks, we know that the write set can be applied. At this point, the transaction is considered committed, after which each node continues to apply it to the InnoDB tablespace.

This approach is also called virtually synchronous replication since it is logically synchronous, but actual writing (and committing) to the InnoDB tablespace happens independently (and thus, strictly speaking, asynchronously) on each node.
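The certification idea above can be illustrated with a toy sketch (this is an illustration of the concept, not Galera's actual implementation): a write set carries the locks its transaction held, and certification succeeds only if none of those locks conflict with write sets still pending in the applier queue.

```java
import java.util.List;
import java.util.Set;

// Toy sketch of certification-based replication (concept only, not Galera's code):
// a write set is certified by checking its lock set against every write set
// still waiting in the applier queue; no overlap means it can be applied.
public class CertifyDemo {
    // A write set: a transaction id plus the locks held during the transaction.
    record WriteSet(String id, Set<String> locks) {}

    // Returns true if 'incoming' holds no lock that conflicts with any
    // write set pending in the applier queue.
    static boolean certify(WriteSet incoming, List<WriteSet> applierQueue) {
        for (WriteSet pending : applierQueue) {
            for (String lock : incoming.locks()) {
                if (pending.locks().contains(lock)) {
                    return false; // conflicting lock: certification fails
                }
            }
        }
        return true; // no conflicts: the transaction is considered committed
    }

    public static void main(String[] args) {
        List<WriteSet> queue = List.of(new WriteSet("t1", Set.of("row:42")));
        System.out.println(certify(new WriteSet("t2", Set.of("row:7")), queue));  // true
        System.out.println(certify(new WriteSet("t3", Set.of("row:42")), queue)); // false
    }
}
```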

In Jelastic, Galera Cluster can be activated automatically while creating the environment. The default topology consists of 2 ProxySQL load balancers and 3 MariaDB instances.

MariaDB Galera Cluster Installation

Navigate to the Jelastic dashboard, click Create Environment, and select the MariaDB server within the topology wizard. Then activate Auto-Clustering and choose the Galera scheme. You can increase the default number of databases by pressing “+” in the Horizontal Scaling block.

In a few minutes, the environment will be created with the chosen topology and pre-configured interconnections.


You can monitor the state of health of the cluster nodes via the Orchestrator admin panel, which can be accessed with the credentials from the email related to the ProxySQL load balancer deployment. The cluster members are shown in the panel as separate clusters with one instance each.

Application Connection to MariaDB Galera Cluster

Let’s establish a connection to our MariaDB Galera Cluster from a Java web application, using the ProxySQL load balancer as an entry point. Follow the linked guide to find out about connecting other types of applications.

The creation of each master node within the MariaDB cluster is accompanied by an email with phpMyAdmin credentials. Accessing the database via the phpMyAdmin panel is useful for debugging or performing manual operations on the databases.

1. Log in to phpMyAdmin using the Admin Panel URL, Username, and Password received in the email. Choose the existing test database (or create a new one) in the left pane. The right pane then shows that there are no tables in the test database yet.

2. Get back to the Jelastic dashboard. We use a separate environment with a Tomcat 9 application server for this example. Now we have to create a database config file for our test application. To do this, click the Config icon next to your compute node, then navigate to the /opt/tomcat/temp directory and create a mydb.cfg file using the platform’s built-in file manager.

3. Put the following lines into the mydb.cfg file and fill in all the fields with the entrypoint credentials as shown in the picture above.

  • {connect_URL} – link to your DB cluster load balancer (i.e. ProxySQL node)
  • {db_name} – name of the database. We chose test in the first step
  • usePipelineAuth – if enabled, queries are executed using a pipeline (all queries are sent first, and only then are all results read), permitting faster connection handling. This value should be set to false, as this implementation doesn’t work with ProxySQL in front of the cluster
  • {user} and {password} – database credentials received in the email
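Putting the placeholders above together, mydb.cfg might look like the following sketch (the key names `host`, `username`, and `password` are assumptions about how the test application reads the file; keep the placeholders until you substitute your own credentials):

```properties
# Hypothetical mydb.cfg sketch built from the placeholders described above.
# {connect_URL} is the ProxySQL entry point, {db_name} is "test" from step 1.
host=jdbc:mariadb://{connect_URL}/{db_name}?usePipelineAuth=false
username={user}
password={password}
```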

4. Download the test application using the link below and deploy it to the Tomcat server.



  • To get full compatibility with the proxy layer, use the latest JDBC connector for MariaDB. Put the connector into /opt/tomcat/webapps/ROOT/WEB-INF/lib/
  • Don’t forget to restart your application server to apply the mydb.cfg changes by pressing the Restart Nodes button.
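To show how the pieces fit together, here is a sketch of how a Java web application might load mydb.cfg and build a JDBC URL for the MariaDB connector. The key names (`host`, `username`, `password`) are assumptions matching the config sketch above, not Jelastic's specification:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch: load mydb.cfg (assumed to be in java.util.Properties key=value
// format) and derive the JDBC URL. Key names are assumptions for illustration.
public class DbConfig {
    // usePipelineAuth must stay false when ProxySQL sits in front of the cluster.
    public static String jdbcUrl(Properties props) {
        return props.getProperty("host") + "?usePipelineAuth=false";
    }

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("/opt/tomcat/temp/mydb.cfg")) {
            props.load(in);
        }
        // With the MariaDB JDBC connector on the classpath, a connection
        // could then be opened like:
        // DriverManager.getConnection(jdbcUrl(props),
        //         props.getProperty("username"), props.getProperty("password"));
        System.out.println(jdbcUrl(props));
    }
}
```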

5. Once the deployment is finished, click Open in Browser in the popup window or next to your application server. Then click the Create test table in your database button in the application.

6. To ensure the connection was established and a new table was created, return to the phpMyAdmin panel.

You should see a table named {date-time of creation}. To make sure replication works properly, go through the phpMyAdmin panels of all database nodes in the cluster and check data availability using the same credentials.

Tip: In Jelastic, all MariaDB nodes are equipped with a phpMyAdmin panel. To access it, just press the Open in Browser button next to the database node.
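Besides checking the data itself, you can query Galera's standard wsrep status variables from any node to confirm cluster health (for example, via the phpMyAdmin SQL tab):

```sql
-- Number of nodes currently in the cluster (3 in the default topology)
SHOW STATUS LIKE 'wsrep_cluster_size';
-- 'Synced' means the node is fully joined and replicating
SHOW STATUS LIKE 'wsrep_local_state_comment';
```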

Great! In just a few simple steps, you’ve established access to your DB cluster from a web application and performed a simple management operation via a single entry point.

Now you have a highly available and reliable MariaDB Galera Cluster, automatically installed in a matter of minutes and provided with intuitive out-of-the-box management tools. Benefit from embedded database auto-clustering with Jelastic PaaS.

Related Articles

MySQL Single-Primary and Multi-Primary Group Replication

Master-Slave and Master-Master Replication with MariaDB/MySQL Auto-Clustering

MariaDB/MySQL Auto-Clustering with Load Balancing and Replication for High Availability and Performance


Subscribe to get the latest updates