Kubernetes Cluster Setup with Automated Scaling and Pay-per-Use Pricing

July 3, 2019

k8s cluster

Kubernetes (K8s) is one of the leading platforms for deploying and managing fault-tolerant containerized applications. It is used to build cloud-native microservice applications, and it also enables companies to migrate existing projects into containers for greater efficiency and resiliency. A K8s cluster handles complex container orchestration tasks such as deployment, service discovery, rolling upgrades, self-healing, and security management.

The Kubernetes project is supported by the Cloud Native Computing Foundation, which helps enable cloud portability without vendor lock-in. K8s clusters can be deployed anywhere: on bare metal or in a public or private cloud.

At the same time, keep in mind that spinning up a Kubernetes cluster on your own servers from scratch is a complicated procedure. It requires a deep understanding of the cluster components and how they should be interconnected, as well as time and skills for monitoring and troubleshooting. For more details, refer to the Kubernetes The Hard Way article.

In addition, managed K8s services automate and simplify many operations, but the cloud "right-sizing" problem remains. To reach maximum efficiency, you have to predict the size of each worker node and the containers running inside it. Otherwise, you may end up paying for large workers that are not fully loaded, or juggling small VMs with automatic horizontal scaling, which adds complexity.

Jelastic has moved ahead, removing a number of these barriers and providing the functionality needed to get started with Kubernetes hosting easily while gaining maximum efficiency in terms of resource consumption:

  • Complex cluster setup is fully automated and converted to “one click” inside an intuitive UI
  • Instant vertical scaling based on load changes, fully automated by the platform
  • Fast automatic or manual horizontal scaling of K8s worker nodes with integrated auto-discovery
  • The Pay-per-Use pricing model is unlocked for Kubernetes hosting, so there is no need to overpay for reserved but unused resources
  • Jelastic Shared Storage is integrated with the Dynamic Volume Provisioner, so the physical volumes used by applications are automatically placed on the storage drive and can be accessed via SFTP/NFS or the integrated file manager
  • No public IPs are required by default; the Shared Load Balancer, provided out of the box, processes all incoming requests as a proxy server
  • Clusters can be provisioned across multiple regions, clouds, and on-premises with no friction or configuration differences, and no vendor lock-in

kubernetes cluster

Jelastic PaaS supplies the Kubernetes cluster with the following pre-installed components:

  • Containerd runtime controller
  • CNI plugin (powered by Weave) for overlay network support
  • Traefik ingress controller for routing HTTP/HTTPS requests to services
  • HELM package manager to auto-install pre-packed solutions from repositories
  • CoreDNS for internal name resolution
  • Dynamic provisioner of persistent volumes
  • Dedicated NFS storage
  • Metrics Server for gathering stats
  • Jelastic SSL for protecting the ingress network
  • Web UI Dashboard
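Once the cluster is deployed, the presence of these components can be checked with kubectl from the master node (or via remote API access, described below). A minimal sketch - namespaces and resource names may differ between versions, so adjust as needed:

```shell
# List all system pods to confirm the pre-installed components are running
# (Containerd is the node-level runtime, so it does not appear as a pod)
kubectl get pods --all-namespaces

# Check CoreDNS and the Metrics Server specifically; both conventionally
# live in the kube-system namespace
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get deployment metrics-server -n kube-system
```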

Explore the Kubernetes cluster installation steps in the video or the instructions below.

Kubernetes Cluster Installation

1. To get started, log in to the dashboard, find Kubernetes Cluster in the Marketplace, and click Install. Note that this clustered solution is available only for billing customers - learn about the trial limitations.

kubernetes in marketplace

2. Choose the type of installation:

  • Clean Cluster with pre-deployed Hello World example

k8s clean cluster

  • Deploy a custom Helm chart or stack via shell commands. Type the list of commands that execute the Helm chart or perform any other custom application deployment.

deploy custom helm

By default, you are offered to install the Open Liberty operator with the following set of commands:


kubectl create namespace "$OPERATOR_NAMESPACE"

kubectl apply -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-crd.yaml

curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-cluster-rbac.yaml | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/"  | kubectl apply -f -

curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-operator.yaml  | sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${OPERATOR_NAMESPACE}/"  | kubectl apply -n ${OPERATOR_NAMESPACE} -f -

kubectl apply -f https://raw.githubusercontent.com/jelastic-jps/kubernetes/v1.18.10/addons/open-liberty.yaml
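Once these commands finish, the operator installation can be verified with kubectl. A sketch, assuming the deployment and CRD names match the 0.7.0 release manifests referenced above:

```shell
# Wait until the operator deployment rolls out in its namespace
kubectl rollout status deployment/open-liberty-operator -n "$OPERATOR_NAMESPACE"

# Confirm the OpenLibertyApplication custom resource type is registered
kubectl get crd openlibertyapplications.openliberty.io
```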

3. As a next step, choose the required cluster topology. Two options are available:

  • Development: one master (1) and one scalable worker (1+) - lightweight version for testing and development purposes
  • Production: multi master (3) with API balancers (2+) and scalable workers (2+) - cluster with pre-configured high availability for running applications in production


    • Multi master (3) - three master nodes.
    • API balancers (2+) - two or more load balancers for distributing incoming API requests. In order to increase the number of balancers, scale them out horizontally.
    • Scalable workers (2+) - two or more workers (Kubernetes nodes). In order to increase the number of workers, scale them out horizontally.

dev production topology

4. Attach dedicated NFS Storage with dynamic volume provisioning. 

By default, every node has its own filesystem with read-write permissions, but to be accessible from other containers or to persist across redeployments, data should be placed on a dedicated volume.

You can use a custom dynamic volume provisioner by specifying the required settings in your deployment YAML files.

Or, you can keep the pre-configured volume manager and NFS Storage built into the Jelastic Kubernetes cluster. As a result, physical volumes are provisioned dynamically on demand and connected to the containers. The Storage Node can be accessed and managed using the file manager in the dashboard, via SFTP, or with any NFS client.

attach nfs storage
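With the built-in provisioner kept, requesting storage from an application is a matter of declaring a PersistentVolumeClaim against the cluster's default storage class. A minimal sketch (the claim name demo-data is hypothetical; run kubectl get storageclass to see which class is marked as default in your cluster):

```shell
# Create a PVC; the dynamic provisioner then creates a physical volume
# on the NFS Storage node and binds it to the claim on demand
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data        # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany      # NFS-backed volumes can be shared between pods
  resources:
    requests:
      storage: 1Gi
EOF

# Verify the claim is Bound to a dynamically provisioned volume
kubectl get pvc demo-data
```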

5. If necessary, you can install auxiliary software to monitor and troubleshoot the K8s cluster, and enable API access, with the help of the complementary tools checkboxes:

  • Install Prometheus & Grafana to monitor the K8s cluster and application health. This software requires an additional 5 GB of disk space for persistent volumes and consumes about 500 MB of RAM
  • Install Jaeger tracing tools to ensure effective troubleshooting of distributed services
  • Enable Remote API Access to provide the ability to manage K8s via API

install auxiliary software

If you decide not to install them now, you can do it later with the specific Cluster Configuration add-on.

6. To highlight all the package features and peculiarities, we initiate the installation of the Open Liberty application server runtime in the Production Kubernetes cluster topology with built-in NFS Storage.

Click the Install button and wait a few minutes. Once the installation process is completed, the cluster topology looks as follows:

kubernetes topology

7. You can access the Kubernetes administration dashboard along with the Open Liberty application server welcome page from the successful installation window.

  • use the Access Token and follow the Kubernetes dashboard link to manage the Kubernetes cluster

k8s cluster deployed

access token

k8s dashboard

  • access the Open Liberty welcome page by pressing the Open in Browser button

access open liberty

open liberty

Jelastic Kubernetes Distribution Add-Ons

The Jelastic K8s package comes with specific add-ons available on the master node:

kubernetes addons

These add-ons are designed to:

  • install SSL Certificate Manager:
    Before proceeding to the installation, attach a public IP to one of the worker nodes. Then create an A record for your external domain (for example, myservice.jele.website) using the generated public IP. After that, put this domain name in the Certificate Manager user interface and press Apply.

    kubernetes domain

    Along with managing SSL certificates, it deploys a separate ingress controller to balance the workload between applications bound to the workers' public IPs. In this case, all internal resources become accessible via worker node hostnames like node${nodeId}-${envName}.${platformDomain}, except for the worker node whose public IP address was used to bind the external domain (for example, myservice.jele.website) with the help of the Certificate Manager.

  • enable/disable GitLab server integration within Jelastic PaaS
  • automatically upgrade Kubernetes cluster
  • switch on a remote API access (see the section below for more information), if it wasn’t enabled during installation

    kubernetes API

  • install and configure monitoring tools Prometheus and Grafana, if they weren’t enabled during installation.

    kubernetes monitoring

    The respective email is sent and an informational popup with access credentials appears:

    kubernetes monitoring info

  • install the Jaeger tracing tool.

    kubernetes troubleshooting

    The respective email is sent and an informational popup with access credentials appears:

    jaeger access

Remote API Access to Kubernetes Cluster 

In order to access and manage the created Kubernetes cluster remotely using API, tick the Enable Remote API Access checkbox.

enable remote api

The Remote API Endpoint link and the Access Token should be used to access the Kubernetes api-server (balancer or master node).

api endpoint

The best way to interact with the api-server is the Kubernetes command-line tool, kubectl:

  • Install the kubectl utility on your local computer following the official guide. For this article, we used the installation for Ubuntu Linux.
  • Then create a local configuration for kubectl. To do this, open a terminal on your local computer and issue the following commands:
$ kubectl config set-cluster mycluster --server={API_URL}
$ kubectl config set-context mycluster --cluster=mycluster
$ kubectl config set-credentials user --token={TOKEN}
$ kubectl config set-context mycluster --user=user
$ kubectl config use-context mycluster


{API_URL} - the Remote API Endpoint link

{TOKEN} - the Access Token

Now you can manage your Kubernetes cluster from your local computer just by following the official tutorial.

As an example, let's take a look at the list of all available nodes in our cluster. Open a local terminal and issue the command using kubectl:

user@jelastic:~$ kubectl get nodes

available nodes

To disable/enable the API service after installation, use the master node's Configuration Add-On.

k8s cluster configuration

remote api access

Cluster Upgrade

To keep your Kubernetes cluster software up-to-date, use the Cluster Upgrade Add-On: just click the Start Cluster Upgrade button. The add-on checks whether a new version is available and, if so, installs it. During the upgrade, all nodes, including masters and workers, are redeployed to the new version one by one, while all existing data and settings remain untouched. Keep in mind that the upgrade procedure is sequential between versions, so if you upgrade to the latest version from one far behind it, you will have to run the procedure multiple times. An upgrade becomes available only once a new version has been globally published by the Jelastic team.

k8s cluster upgrade

To avoid downtime of your applications during the redeployment, consider using multiple replicas for your services.
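For example, a Deployment can be scaled to several replicas so that at least one pod keeps serving traffic while the nodes are redeployed one by one (myapp is a hypothetical deployment name):

```shell
# Keep three replicas of the service spread across the workers
kubectl scale deployment/myapp --replicas=3

# Before and after the upgrade, confirm that all replicas are available
kubectl get deployment myapp -o jsonpath='{.status.availableReplicas}'
```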

Statistics and Pay-per-Use Billing

Jelastic provides automatic vertical scaling for each worker and master node in the Kubernetes cluster, so the required resources are allocated on demand, based on real-time load. As a result, there is no need to monitor the changes all the time, as the system does it for you. In addition, there is a convenient way to check the current load across a group of nodes or each node separately: just press the Statistics button next to the required layer or specific node.

kubernetes usage statistics

Such highly automated scaling and the full containerization of Jelastic PaaS enable a billing model that is still relatively new in cloud computing. Despite its novelty, this model has already gained a reputation as the most cost-effective one: "pay-per-use", also called pay-as-you-use. As a result, payment for Kubernetes hosting within the platform is required only for the actually used resources, with no need to over-allocate, thus solving the "right-sizing" problem inherited from the first generation of cloud computing pricing ("pay-per-limits", also called "pay-as-you-go").

The whole billing process is transparent and can be tracked via the dashboard (Balance > Billing History). The price is based on the number of actually consumed resource units, cloudlets (128 MiB of RAM + 400 MHz of CPU each). Such granularity provides more flexibility in forming the bill, as well as clarity in cloud expenditures.

pay per use billing
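As a rough worked example of the cloudlet arithmetic (assuming, as a simplification, that the hourly charge is driven by the larger of the RAM-based and CPU-based counts - check your provider's pricing page for the exact rule):

```shell
# 1 cloudlet = 128 MiB of RAM + 400 MHz of CPU (the resource unit above)
ram_mib=1024; cpu_mhz=800                      # hypothetical hourly peaks
ram_cloudlets=$(( (ram_mib + 127) / 128 ))     # rounds up to 8
cpu_cloudlets=$(( (cpu_mhz + 399) / 400 ))     # rounds up to 2
billed=$(( ram_cloudlets > cpu_cloudlets ? ram_cloudlets : cpu_cloudlets ))
echo "$billed"                                 # → 8 cloudlets for that hour
```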

Jelastic PaaS allows automatic vertical scaling of Kubernetes cluster nodes, automatic horizontal scaling with auto-discovery of newly added workers, management via an intuitive UI, as well as implementation of the required CI/CD pipelines with Cloud Scripting and an open API. For a private setup, the platform can provision clusters across multiple clouds and on-premises with no vendor lock-in and with full interoperability across the clouds. This lets teams focus their valuable resources on developing application and service logic instead of spending time adjusting and supporting the infrastructure and API differences of each K8s service implementation. Try it out at one of the public Jelastic PaaS service providers and share your feedback with us for further improvements!

Related Articles

Public IP for Access to Kubernetes Application in Jelastic PaaS

Scaling Kubernetes on Application and Infrastructure Levels

Jelastic Released Kubernetes Package with Integrated Auto-Clustering and Pay-per-Use Pricing Model

Kubernetes Cluster Automated Upgrade in Jelastic PaaS

Kubernetes Integration with GitLab CI/CD Pipeline