Extending Scaling Opportunities within Multiple Middleware Stacks
With Jelastic PaaS, hosting your applications becomes truly flexible. In addition to automated vertical scaling, Jelastic also lets you increase or decrease the number of servers in your environment whenever your application requires it.
As announced in the Jelastic 2.5 Press Release, this version brings you even more scaling opportunities with our extended multi-node feature. Now you can scale horizontally not only the application server, but all of the nodes in your environment:
- application server
- load balancer
- database (SQL and NoSQL)
- cache instance
- Elastic VPS
The only exception is the Maven build node (as there is no need to scale it).
The scaling process has not changed and remains fairly easy: just open the environment topology wizard and use the appropriate buttons in its central pane to set the required number of nodes for the desired server:
The maximum number of same-type nodes can vary and depends on your hosting provider (usually this limit is 8 nodes).
All newly added nodes are created on different hardware nodes, ensuring even more reliability and high availability. The set of same-type nodes in an environment is presented as a string with the server's full name and an xN label at its end, which indicates the number of instances in the cluster. Using the triangle icon for a particular server, you can expand the full list of its nodes. Each of these nodes has a unique Node ID identifier and can be configured or restarted separately:
To facilitate interaction with numerous nodes of the same type, we've also added the ability to mark a particular node with an appropriate label, e.g. to define the master and slave nodes in a DB cluster.
Just double-click on the default Node ID: xxx value and specify the desired alternative name.
As for other management improvements, you can now reset the password not just for a separate node, but for a whole set of similar nodes, in case you need to regain administrator access to them:
And now, let’s dive into some details for each type of server you are able to scale.
The first thing you will notice when increasing the number of application server instances is the automatically enabled NGINX balancer node, which appears in your topology wizard:
This server is placed in front of your application and becomes the entry point of your environment. Its key role is to handle all incoming user requests and distribute them evenly among the specified number of app servers.
Such load distribution is performed via HTTP balancing, though you can optionally configure TCP balancing instead (e.g. to meet your application's requirements, to achieve faster request serving, or to balance non-HTTP traffic).
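To illustrate the even distribution described above, here is a minimal sketch of round-robin balancing in Python. The server names are hypothetical stand-ins for scaled app server nodes; a real NGINX balancer implements this logic internally.

```python
from itertools import cycle

# Hypothetical app server nodes sitting behind the balancer
app_servers = ["node-101", "node-102", "node-103"]

# Round-robin: each incoming request goes to the next server in turn
rotation = cycle(app_servers)

def route_request():
    """Pick the next upstream server for an incoming request."""
    return next(rotation)

# Six requests are spread evenly: each server handles exactly two
assignments = [route_request() for _ in range(6)]
```

Round-robin is only one strategy; real balancers may also weight servers or track active connections, but the effect is the same: no single app server absorbs all the traffic.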
It's also vital to note that each newly added application server node is a copy of the initial one, i.e. it contains the same set of configurations and files. If you already have several instances with varying content and would like to add more, the very first node will be cloned during scaling.
Increasing the number of NGINX balancer nodes makes sense if you would like to improve your app's accessibility and gain several entry points for it. In this case, you need Public IP addresses attached to all of your balancers.
Each newly added NGINX balancer node copies the initial one (in the same way as described for application server nodes above).
You should also take into consideration that adding multiple balancers to your environment is not available if High Availability is enabled for your compute node. In this case, you'll see the corresponding error message when trying to scale your NGINX balancer horizontally. For the same reason, the number of load balancer nodes will be automatically decreased to 1 if you switch HA on, so please pay attention to this.
With Jelastic you are able to scale both SQL and NoSQL databases:
Each newly added database node has its own hostname (which consists of the DB name, node ID, and environment host) and credentials for administrator access, which you'll receive in separate emails after it is added.
In contrast to the approach used for app servers and balancers, DB nodes newly added to a cluster will contain the default content, i.e. the existing databases and records of the initial server won't be copied to the new ones.
Note that after increasing the number of DB server instances you'll receive a set of independent nodes. If you would like your data to be replicated between them, please follow one of the instructions below (depending on the chosen database system):
We are going to automate these operations in the near future, to make our platform even easier to use and save your time for coding.
Nevertheless, you can already get a complete DB cluster of 2 nodes with automatically adjusted master-slave replication, in just a few clicks. For that, navigate to our Marketplace page (or to the appropriate section in your dashboard) and use the JPS package corresponding to the database you would like to use (see the Others category).
Memcached is a distributed memory object caching server, designed to greatly accelerate the serving of incoming requests. This is achieved by caching weighty data whose generation from scratch requires a considerable amount of resources. To make it clearer, some details on the Memory Allocation approach used by the Memcached system are presented here.
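The caching pattern described above can be sketched as follows. This is a minimal cache-aside illustration, assuming an in-memory dict as a stand-in for a real Memcached client (a real setup would connect to the node's host and port with a client library such as pymemcache):

```python
# In-memory dict standing in for a Memcached node (illustrative assumption)
cache = {}

def expensive_computation(key):
    """Simulates 'weighty data' that is costly to regenerate from scratch."""
    return f"result-for-{key}"

def get_with_cache(key):
    # Cache-aside: return the cached value if present; otherwise
    # compute it once and store it for subsequent requests
    if key not in cache:
        cache[key] = expensive_computation(key)
    return cache[key]

first = get_with_cache("report")   # computed, then cached
second = get_with_cache("report")  # served straight from the cache
```

The second lookup skips the expensive computation entirely, which is exactly the saving Memcached provides at scale.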
Take into consideration that each new Memcached node, created as a result of horizontally scaling the initial one, will contain the default set of data and configurations without any customizations:
Adding several Memcached instances to your environment improves the application's failover capabilities. For example, you can designate each of the nodes to serve a particular part of the application's data, or, even more beneficially, adjust your application to store its cache in all nodes simultaneously. That way, each server contains a full cache duplicate, which eliminates the risk of application downtime or cached data loss due to a particular Memcached server failure.
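The first option above, designating each node to serve part of the data, is typically done by hashing the cache key to pick a node. Here is a minimal sketch with hypothetical node names; a production client would usually use consistent hashing rather than plain modulo, so that resizing the cluster remaps fewer keys:

```python
import hashlib

# Hypothetical Memcached nodes, each serving a shard of the cached data
nodes = ["memcached-1", "memcached-2", "memcached-3"]

def pick_node(key):
    """Map a cache key to one node via a stable hash (simple modulo sharding)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# The same key always lands on the same node, so reads find what writes stored
owner = pick_node("user:42")
```

Because the mapping is deterministic, every app server in the cluster agrees on which Memcached node holds a given key without any coordination.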
In addition, you can also use your Memcached cluster as storage for user sessions, which is especially useful when working with a number of clustered application servers. With such a solution adjusted for your Java or PHP application, all handled sessions are backed up to the Memcached system. Afterwards, they can be fetched and reused by any app server in the cluster if the original one (which initially processed the session) fails, and your customers will not notice anything.
With several caching nodes added to the environment, you can configure the storing of session copies in each of them, ensuring these sessions remain accessible as long as at least a single Memcached server is working.
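The replicated-session idea can be sketched as writing every session to all caching nodes and reading from whichever node still holds it. The dicts below are illustrative stand-ins for independent Memcached servers:

```python
# Three dicts standing in for three independent Memcached nodes
nodes = [{}, {}, {}]

def store_session(session_id, data):
    # Replicate the session to every caching node, so any app
    # server can recover it even if the original node fails
    for node in nodes:
        node[session_id] = data

def load_session(session_id):
    # Read from the first node that still holds a copy
    for node in nodes:
        if session_id in node:
            return node[session_id]
    return None

store_session("sess-1", {"user": "alice"})
nodes[0].clear()  # simulate the failure of one Memcached server
recovered = load_session("sess-1")
```

Even with one node wiped out, the session survives on the remaining replicas, which is why the user notices nothing when a server fails.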
The ability to use multiple Elastic Virtual Private Servers greatly facilitates their management, eliminating the need to create a separate environment for each one. Nevertheless, every VPS node remains an independent server, able to run a separate application:
Increasing the number of nodes in a VPS cluster adds a default bare server to it. Obviously, each node has its own hostname and administrator credentials, provided in the corresponding email.
Note that each of the added VPS nodes has a separate Public IP address attached, i.e. if your cluster consists of 3 private servers, you’ll be charged for usage of 3 different IPs.
To sum up, multi-node support provides you with even more possibilities for hosting your applications, while at the same time simplifying their management. Continue to use Jelastic Cloud and enjoy its constantly expanding capabilities!