MariaDB-as-a-Service in Jelastic Cloud Platform

| July 29, 2021

Jelastic MariaDB-as-a-Service is the result of our many years of experience with MariaDB hosting and our analysis of the best practices on the platform. It is an automation that does all the "hard" work behind the scenes, providing you with a ready-to-work solution in a matter of minutes.

MariaDB-as-a-Service powered by Jelastic PaaS offers numerous benefits, among them:

  • significantly simplified installation - no "deep" knowledge is required, installation is fully automated
  • reliability of the cluster - multi-instance database structure (based on various schemes) eliminates downtime risks
  • optimization by default - configurations are automatically adjusted to match your specific environment and ensure the best performance
  • minimal time-to-market - you are literally a few clicks away from getting a fully functional MariaDB database at any given moment

Below, we'll review these and other advantages of the MariaDB-as-a-Service in detail. So, let's get started!

Challenges and the Need for MariaDB Clusterization

One of the key challenges faced by developers is how to avoid downtime. It is among the most devastating disasters that can happen to your application. For example, consider the cost of downtime for the top US e-commerce sites:

downtime cost e-commerce

Moreover, the immediate monetary loss is nothing compared to the blow to reputation. You can lose existing and potential customers to competitors, which results in even greater losses. Thus, it is in your best interest to avoid downtime by all possible means.

Obviously, it is impossible to guarantee uptime with a standalone topology, as you will always have a single point of failure. To ensure high availability, a clustered solution should be used. However, such an approach adds complexity to the initial configuration and future maintenance. In the case of MariaDB, the list of tasks looks close to the following:

  • Create the required number of server nodes
  • Add MariaDB repositories to all nodes
  • Install MariaDB on all nodes
  • Configure each server in the cluster
  • Open firewall on every server for inter-node communication
  • Install and configure SQL Load Balancer
  • Initiate and start the cluster
  • Check the nodes and the cluster operability
  • Control and timely perform database software updates

For inexperienced users, this is quite a challenging and time-consuming task. Many companies solve this problem by moving in the Database-as-a-Service direction with easy-to-deploy solutions.

MariaDB hosting service options:
  • On premises - offers no automation; you'll need to manage all aspects of the hosting manually
  • Infrastructure-as-a-Service (IaaS) - provides automation only regarding an operating system and hardware maintenance
  • Self-Service Platform-as-a-Service (PaaS) - takes care of the installation and general maintenance while leaving the management of the database itself up to you (e.g. Jelastic self-managed MariaDB hosting, offered by default for every customer)
  • Managed Platform-as-a-Service - automates everything, providing you with a ready-to-work product (e.g. MariaDB-as-a-Service solution, request it from one of the certified Jelastic providers)

MariaDB-as-a-Service from Jelastic Cloud

Jelastic offers you out-of-the-box clusterization for the MariaDB database, available with a single flick of a switch in the topology wizard.

install mariadb cluster

Database Replication Scheme

The platform provides three different MariaDB replication schemes, SQL load balancing and easy scalability. All settings are wrapped in the intuitive GUI for simple management.

Primary-Secondary Replication

When you enable the Auto-Clustering switch, the default MariaDB Primary-Secondary scheme is selected. This scheme fits best when the load is mostly reads.

For the Primary-Secondary topology, we provision two database nodes: one primary and one secondary. ProxySQL servers are provisioned to load balance the SQL queries. The list of automatically configured parameters inherent to this topology:

server-id = {nodeId}
binlog_format = mixed
log-bin = mysql-bin
log-slave-updates = ON
expire_logs_days = 7
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
replicate-wild-ignore-table = performance_schema.%
replicate-wild-ignore-table = information_schema.%
replicate-wild-ignore-table = mysql.%

Primary-Primary Replication

For Primary-Primary replication topology, we create two database nodes working in the primary mode and two ProxySQL load balancers in front of the cluster. Consider this scheme when the application is actively writing to and reading from the databases.

The list of automatically configured parameters for this topology looks as follows:

server-id = {nodeId}
binlog_format = mixed
auto-increment-increment = 2
auto-increment-offset = {1 or 2}
log-bin = mysql-bin
expire_logs_days = 7
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
replicate-wild-ignore-table = performance_schema.%
replicate-wild-ignore-table = information_schema.%
replicate-wild-ignore-table = mysql.%
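The effect of the auto-increment pair above is easy to see: with auto-increment-increment = 2 and a distinct offset per primary, the two primaries generate disjoint AUTO_INCREMENT sequences, so concurrent inserts never produce colliding IDs. A minimal illustration in plain Python (a simulation, not tied to the actual Jelastic setup):

```python
def auto_increment_ids(offset, increment, count):
    """Simulate the AUTO_INCREMENT sequence one primary would generate."""
    return [offset + increment * i for i in range(count)]

# Node 1 (offset 1) produces 1, 3, 5, ...; node 2 (offset 2) produces 2, 4, 6, ...
node1 = auto_increment_ids(1, 2, 5)
node2 = auto_increment_ids(2, 2, 5)
assert not set(node1) & set(node2)  # the two sequences never overlap
```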

Galera Cluster Replication

For the Galera cluster, three MariaDB nodes and two ProxySQL nodes are added by default.

If you need a highly available database cluster distributed across geographically distant regions, the Galera cluster is the best choice. When creating such distributed topologies, keep in mind that only an odd number of nodes can maintain the quorum required to avoid split-brain issues that may occur due to network failures.
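The quorum rule can be sketched numerically: a network partition keeps quorum only while it holds a strict majority of the cluster members, which is why odd-sized clusters handle failures more gracefully than even-sized ones (a sketch of the rule, not Galera's actual implementation):

```python
def has_quorum(alive_nodes, cluster_size):
    """A partition stays operational only with a strict majority of members."""
    return alive_nodes > cluster_size / 2

# 3-node cluster: losing one node still leaves a majority of 2
assert has_quorum(2, 3)
# 4-node cluster split 2/2 by a network failure: neither half has quorum
assert not has_quorum(2, 4)
```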

The automatically configured parameters for this topology:

server-id = {nodeId}
binlog_format = ROW
# Galera Provider Configuration
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/
# Galera Cluster Configuration
wsrep_cluster_name = cluster
wsrep_cluster_address = gcomm://{node1},{node2},{node3}
wsrep-replicate-myisam = 1
# Galera Node Configuration
wsrep_node_address = {node.ip}
wsrep_node_name = {}
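For illustration, the wsrep_cluster_address value above is simply a gcomm:// URL listing the cluster members; a helper that builds it could look like the following (a hypothetical function, shown only to clarify the format):

```python
def gcomm_address(node_ips):
    """Build a wsrep_cluster_address value from the member IP list."""
    return "gcomm://" + ",".join(node_ips)

# e.g. a three-node cluster
print(gcomm_address(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
# -> gcomm://10.0.0.1,10.0.0.2,10.0.0.3
```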

Automatic Vertical Scaling

We run MariaDB instances inside isolated system containers, which dynamically change the amount of allocated resources (RAM and CPU) according to the current demand. Just specify the upper limit; there is no need to migrate or restart your container as it grows.

The platform automatically reconfigures database parameters based on the scaling limit to ensure new resources can be utilized:

key_buffer_size = ¼ of available RAM if total >200MB, ⅛ if <200MB
table_open_cache = 64 if total >200MB, 256 if <200MB
myisam_sort_buffer_size = ⅓ of available RAM
innodb_buffer_pool_size = ½ of available RAM

Tip: It is possible to configure MariaDB memory-related parameters manually if required by your application. You may change them in /etc/my.cnf.
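The sizing rules above can be expressed as a small function (a sketch of the sizing logic exactly as listed; the platform's real implementation may differ in details):

```python
def autotune(total_ram_mb):
    """Derive MariaDB memory parameters from total RAM, per the rules above."""
    large = total_ram_mb > 200
    return {
        "key_buffer_size": total_ram_mb // 4 if large else total_ram_mb // 8,
        "table_open_cache": 64 if large else 256,
        "myisam_sort_buffer_size": total_ram_mb // 3,
        "innodb_buffer_pool_size": total_ram_mb // 2,
    }

print(autotune(1024))  # values (in MB) for a 1 GB container
```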

Automatic Horizontal Scaling

You can easily add or remove instances with the "+" and "-" controls in the topology wizard. Based on the preselected scaling mode, new nodes are added either as empty instances (Stateless) or as clones of the primary layer (Stateful).

Custom triggers can be set up to automatically scale nodes based on the CPU, RAM, Network, or Disk usage.

Automatic Horizontal Scaling

If Auto-Clustering is enabled, new nodes are added in accordance with the cluster scheme:

Primary-Secondary Scaling

When adding a new database node, the Primary-Secondary scaling logic goes through the steps outlined below:

1. Define a secondary node in the topology
2. Remove the secondary from the ProxySQL distribution list
3. Stop the secondary; the primary's binlog position is recorded automatically
4. Clone the secondary (stateful horizontal scaling)
5. Start the original secondary and return it to the ProxySQL distribution list
6. Reconfigure server-id and report_host on the new secondary
7. Launch the new secondary and add it to ProxySQL
8. As soon as all missed transactions are applied and the new secondary catches up with the primary, ProxySQL includes it in the distribution

Primary-Primary Scaling

In this topology, a primary is always used to create new secondaries. All in all, the process is similar to Primary-Secondary scaling and is described in the respective manifest:

1. Define a second primary node in the topology
2. Remove the second primary from the ProxySQL distribution list
3. Stop the second primary; the binlog position is recorded automatically
4. Clone the second primary (stateful horizontal scaling)
5. Start the second primary and return it to the ProxySQL distribution list
6. Reconfigure the cloned node as a new secondary (disable the primary configuration)
7. Launch the new secondary and add it to ProxySQL
8. The first primary is chosen for further scaling
9. Choosing primaries sequentially for further secondaries distributes the secondaries equally between the primaries
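The sequential choice described above is effectively a round-robin over the primaries, so new secondaries end up spread evenly between them. A minimal sketch (the node names are hypothetical placeholders, not the platform's actual naming):

```python
from itertools import cycle

# Alternate between the two primaries when attaching new secondaries.
primaries = cycle(["primary-1", "primary-2"])
new_secondaries = [next(primaries) for _ in range(4)]
print(new_secondaries)  # alternates: each primary receives two secondaries
```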

Galera Cluster Scaling

For the Galera cluster, the situation differs from the Primary-Primary topology. Here, we use the stateless scaling mode and a different technique to catch up with the current database state; see the algorithm stages below:

1. Add a new node (stateless horizontal scaling)
2. Pre-configure wsrep_cluster_name, wsrep_cluster_address, wsrep_node_address and wsrep_node_name on the new node before adding it to the cluster
3. Add the new node to the cluster
4. Add the new node to ProxySQL (not for distribution)
5. The cluster automatically assigns a donor from existing nodes and does the State Snapshot Transfer from it to the new node
6. Once the synchronization is complete, ProxySQL will include the node in the distribution of the requests

Default and Custom Load Alerts

Jelastic PaaS provides a set of load alerts to automatically detect and notify you about high (i.e. close to the limit) resource usage. You can tune the default alerts to match your needs and add tracking of additional conditions - whether the usage of a particular resource type is above/below the stated value (%) during a specified period.

Load Alerts

If an alert is triggered, you’ll get an email notification about your application’s load change.

Anti-Affinity Rules

To ensure extra high availability and failover protection, all newly added containers of the same layer are created on different physical hosts.

Anti-Affinity Rules

For example, if the replication topology consists of two nodes, they are deployed on different hosts, as shown in the image above. In such a case, if one physical host fails, your database keeps working on the others.

Automatic Handling of OOM Killer Events

When an application runs out of memory, the OS has two options: crash the entire system or terminate the process (application) that is eating up the memory. It is better, of course, to end the process and save the OS from crashing. In a nutshell, the Out-Of-Memory (OOM) Killer is a kernel mechanism that sacrifices the application to keep the OS running.

In Jelastic, the OOM Killer plays an important role in this scenario and keeps the kernel from panicking. When the MariaDB process is forcibly terminated, a message appears in the /var/log/messages log file, indicating that the OOM Killer was triggered.

If the OOM Killer terminates the MariaDB process, Jelastic automatically adjusts the database configs, reducing the innodb_buffer_pool_size parameter by 10%. Then, the container is restarted to restore its operability. If the situation occurs again, the mentioned autoconfiguration cycle is repeated.
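The adjustment cycle is easy to trace: each OOM-caused restart shrinks innodb_buffer_pool_size by the configured percentage, up to the maximum number of cycles. The following is a simulation of the described behavior, not the platform's actual code:

```python
def reduce_buffer_pool(size_mb, adjustment=0.10, max_cycles=5):
    """Return the buffer pool size after each OOM-caused reduction cycle."""
    sizes = [size_mb]
    for _ in range(max_cycles):
        size_mb = int(size_mb * (1 - adjustment))  # cut by 10% per cycle
        sizes.append(size_mb)
    return sizes

print(reduce_buffer_pool(512))  # 512 -> 460 -> 414 -> 372 -> 334 -> 300
```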

You may customize the environment variables to adjust system behavior related to the OOM kills issue:

  • JELASTIC_AUTOCONFIG - enables/disables (true/false) Jelastic autoconfiguration
  • OOM_DETECTION_DELTA - sets the time interval (two seconds by default) during which Jelastic analyzes the /var/log/messages log after each service restart to determine whether the OOM Killer caused it
  • OOM_ADJUSTMENT - defines a value in %, MB, or GB (10% by default) by which the current innodb_buffer_pool_size parameter is reduced after each OOM-caused restart
  • MAX_OOM_REDUCE_CYCLES - configures the maximum number of innodb_buffer_pool_size reduction cycles (5 by default)
OOM Killer Events

Now you know the most important specifics of MariaDB hosting in Jelastic PaaS. Try it for yourself on one of the platforms available around the world.