How to Build Custom Software Stack Container Image and Add Template to Jelastic Private PaaS

April 22, 2020

Jelastic Platform-as-a-Service provides certified support of various stacks (application servers, databases, load balancers, cache servers and others) and this list can be extended with custom Docker-based templates. In this article, we’ll cover the steps on how to build a software stack as a container image (using Apache Kafka as a sample) and make it available as a custom template within the dedicated platform installed on-premise or on top of preferred cloud infrastructure.

Apache Kafka is a distributed streaming platform written in Java and Scala, which utilizes a publish/subscribe messaging model. In Kafka, producers write messages to topics, and consumers (subscribers) of the appropriate topics read them. It includes a set of utilities for creating topics and partitions, as well as ready-made console producers and consumers.

Now, let’s follow the steps required to get Kafka up and running as one of the available templates within topology wizard of your Private Jelastic PaaS installation.

Pre-Requirements

This guide is aimed at platform owners, i.e. admin access to the Jelastic Cluster Admin Panel (JCA) is required. You can install your own Private Jelastic PaaS either on-premise or on top of your preferred cloud infrastructure.

Tip: If you are a Public Cloud customer with access to the dev panel, you can add custom templates to your account following the Building Docker Image guide.

Also, you’ll need a Docker Hub account to store your custom images and Docker Engine CE to build them (it can be installed either locally or within your Jelastic account). In this tutorial, we’ll cover the latter option:

1. Locate the Docker Engine CE package in the Marketplace in the Dev & Admin Tools section.


2. Within the installation frame, choose the Create a clean standalone engine option and click Install. If required, change the Environment name.


3. Once the environment is created, it can be used for building templates. Access the Engine Node via Web SSH to start.

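Before moving on, you can optionally confirm that the engine is operational by checking the Docker version from the opened terminal:

$docker version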

We have prepared a ready-to-work example for building, publishing and testing the Kafka stack template. You can download this pre-configured example from the GitHub repository https://github.com/jelastic/kafka-image-building and follow the video tutorial that shows the whole process from scratch, including Jelastic Private PaaS installation on top of the DigitalOcean infrastructure.
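
If you prefer to start from this ready-made example, you can clone the repository straight into your Docker Engine environment (assuming the git client is available on the node):

$git clone https://github.com/jelastic/kafka-image-building.git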

Below, you can review the steps of building and publishing the Kafka stack, provided as a step-by-step instruction.

Composing Kafka Dockerfile

Create a Dockerfile using any preferred text editor, e.g. vim:

$vim Dockerfile

Tip: For a better understanding of how Dockerfiles work, we recommend examining the official Dockerfile references.

Fill it with the content below.

#The Jelastic Java Engine with the zulujdk-11.0.5 tag is used as a base image for this solution
FROM jelastic/javaengine:zulujdk-11.0.5

#Define a set of variables that are passed to the builder (stack template name, Kafka version, Scala programming language version)
ARG STACK_NAME="Kafka"
ARG STACK_VERSION=2.4.1
ARG SCALA_VERSION=2.13

#Set the environment variables for the image: default username (that the main service will be run under), installation directory in the template, user home directory, location for variables.conf file, custom Java arguments (if needed), ports accessible from outside.
ENV STACK_USER=kafka \
    STACK_PATH="/opt/kafka" \
    HOME_DIR="/home/jelastic" \
    JAVA_OPTS_CONFFILE="/home/jelastic/conf/variables.conf" \
    JAVA_ARGS="" \
    JELASTIC_EXPOSE=9092

#Use the RUN instruction to execute the required commands. Replace the default jvm user of the base image with the kafka one.
RUN groupmod -n ${STACK_USER} jvm; usermod -l ${STACK_USER} jvm; \

#Install Kafka broker of the required version.  
    cd /opt && curl -O https://downloads.apache.org/kafka/${STACK_VERSION}/kafka_${SCALA_VERSION}-${STACK_VERSION}.tgz && \
    tar -xf kafka_${SCALA_VERSION}-${STACK_VERSION}.tgz && rm -f kafka_${SCALA_VERSION}-${STACK_VERSION}.tgz && \
    mv kafka_${SCALA_VERSION}-${STACK_VERSION} kafka && \
    mkdir -p /opt/kafka/{zookeeper,kafka-logs,logs}; chown -R kafka:kafka /opt/kafka; ln -sfT /opt/kafka/logs /var/log/kafka; \

#Create /etc/jelastic/metainf.conf file which will be used by JEM (Jelastic Environment Manager) to determine COMPUTE_TYPE.
    echo -e "COMPUTE_TYPE=${STACK_USER}\n\
    COMPUTE_TYPE_VERSION=${STACK_VERSION%%.*}\n\
    COMPUTE_TYPE_FULL_VERSION=${STACK_VERSION}\n\
    CERTIFIED_VERSION=2\n\
    " > /etc/jelastic/metainf.conf; \

#Remove unused init script from the base template, the default Hello World application, and deployment module that is not needed for the current template. 
    rm -rf /etc/rc.d/init.d/{jvm,java} && \
    rm -rf /home/jelastic/APP && \
    rm -rf /var/lib/jelastic/overrides/jvm-common-deploy.lib; \

#Define data location for the kafka and zookeeper services.
    sed -i 's|^log.dirs=.*|log.dirs=/opt/kafka/kafka-logs|g' /opt/kafka/config/server.properties; \
    sed -i 's|^dataDir=.*|dataDir=/opt/kafka/zookeeper|g' /opt/kafka/config/zookeeper.properties; \

#Rename default user crontab file from jvm to kafka.
    mv /var/spool/cron/jvm /var/spool/cron/kafka;

#Add custom configuration files from the src subdirectory of the project folder. The required data will be configured and described in the next section of this guide.
ADD src/. /

#Open the required port (9092) in the container firewall.
EXPOSE 9092

#Create persistent data volumes.
VOLUME /opt/kafka/kafka-logs /opt/kafka/zookeeper

#Add metadata labels to the image for Jelastic PaaS to recognize it as a certified template:
#Default container user.
LABEL appUser=${STACK_USER} \
#Short template description.
    description="Jelastic ${STACK_NAME}" \
#The maximum and minimum cloudlet limits for the instance.
    cloudletsCount=16 \
    cloudletsMinCount=8 \
#Stack name to be displayed within JCA and end-user dashboard.
    name=${STACK_NAME} \
#Unique stack identifier.
    nodeType=kafka \
#Stack version.
    nodeVersion=${STACK_VERSION} \
#Environment layer where the stack template should be displayed at the wizard.
    nodeMission=extra \
#URL to the template icons. Each stack requires two icons named as logo_16x16.png and logo_32x32.png (with the appropriate sizes of 16x16 and 32x32 pixels).
    sourceUrl="https://raw.githubusercontent.com/jelastic/icons/master/kafka/"

Building an Image

1. Let’s create a dedicated directory to work in, for example:

$mkdir kafka

Enter the directory and add a Dockerfile from the previous section.

$cd kafka

$vim Dockerfile

2. The next step is to create a directory tree where the system service files will be located. These files are added to the image by the ADD src/. / instruction in the Dockerfile.

$mkdir -p src/etc/sudoers.d src/etc/systemd/system  src/etc/jelastic src/var/lib/jelastic/overrides

The directory structure will be created as follows:

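src/
├── etc/
│   ├── jelastic/
│   ├── sudoers.d/
│   └── systemd/
│       └── system/
└── var/
    └── lib/
        └── jelastic/
            └── overrides/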

3. Now, you need to create several configuration files to set up your stack. Use the vim text editor to create the files listed below and fill them with the required content.

In the src/etc/sudoers.d/kafka file, define the kafka service management commands (start, stop, restart) that members of the ssh-access group are allowed to execute via sudo without a password.

$vim src/etc/sudoers.d/kafka

File content:

Cmnd_Alias KAFKA_SERVICE = /sbin/service kafka stop, /sbin/service kafka start, /sbin/service kafka restart

%ssh-access ALL = NOPASSWD: KAFKA_SERVICE
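
With this rule in place, a user that belongs to the ssh-access group can manage the broker over SSH without a password prompt, e.g. (an illustrative call matching the alias above):

$sudo /sbin/service kafka restart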

 

The image uses systemd to initialize multiple daemon services, so in our example we need to create a system container. Since the utilized Kafka distribution has ZooKeeper as an integrated component, the two services run within a single container. As a result, two appropriate systemd service files should be created (kafka.service and zookeeper.service).

Tip: For advanced and clustered environments that require horizontal scaling, it is better to put these services inside separate containers. Keep an eye on our blog posts for upcoming tutorials on how to build clustered and horizontally scalable software stack templates.

$vim src/etc/systemd/system/kafka.service

File content:

[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties > /opt/kafka/logs/kafka_stdout.log 2>/opt/kafka/logs/kafka_stderr.log'
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target


As you can see in the [Unit] section, the kafka service depends on ZooKeeper and only starts after the zookeeper service.

Create another service file for ZooKeeper similar to the previous step.

$vim src/etc/systemd/system/zookeeper.service

File content:

[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
PartOf=kafka.service

[Service]
Type=simple
User=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

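After both unit files are in place and a container based on this image is running, you can verify that the services start in the described order using the standard systemd tooling inside the container (shown here for illustration only):

$systemctl status zookeeper kafka

$systemctl list-dependencies kafka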

The src/etc/jelastic/favourites.conf file configures shortcuts to the most commonly used files and directories shown in the left pane of the Configuration File Manager.

$vim src/etc/jelastic/favourites.conf

File content:

# This file is considered only during container creation. To modify the list of items at Favorites panel, please make the required changes within image initial settings and rebuild it.

[directories]
/home/jelastic
/opt/kafka/config
/opt/kafka/zookeeper
/opt/kafka/kafka-logs
/var/spool/cron
[files]
/home/jelastic/conf/variables.conf


The redeploy.conf file lists the custom files and directories that are kept during container redeploy.

$vim src/etc/jelastic/redeploy.conf

File content:

# This file stores links to custom configuration files or folders that will be kept during container redeploy.

/etc/jelastic/redeploy.conf
/opt/kafka/config
/opt/kafka/logs
/var/spool/cron/kafka
/usr/lib/locale
/etc/locale.conf


The envinfo.lib override is used by JEM to determine the template-specific logic (service initialization and restart in the case of Kafka).

$vim src/var/lib/jelastic/overrides/envinfo.lib

File content:

case ${COMPUTE_TYPE} in
kafka)
        STACK_PATH='/opt/kafka';
        DATA_OWNER='kafka:kafka';
        SERVICE='kafka';
;;
esac


4. Ensure that the stack template icons are available via the sourceUrl specified in the last line of the Dockerfile. However, if needed, you’ll be able to change them later on at JCA > Templates > Edit Template.
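
For instance, the availability of the icons can be quickly verified with curl (the logo_16x16.png and logo_32x32.png file names follow the convention mentioned in the Dockerfile comments; an HTTP 200 response means the icon is reachable):

$curl -I https://raw.githubusercontent.com/jelastic/icons/master/kafka/logo_16x16.png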

5. Build the image with the following command:

$docker image build -t <dockerhub-account>/kafka:2.4.1 .
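
Since STACK_VERSION and SCALA_VERSION are declared as ARG instructions in the Dockerfile, they can also be overridden at build time without editing the file, for example (keep in mind that only Kafka releases still hosted at downloads.apache.org can be fetched this way):

$docker image build --build-arg STACK_VERSION=2.4.1 --build-arg SCALA_VERSION=2.13 -t <dockerhub-account>/kafka:2.4.1 .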

6. Once it is built, log in to your Docker Hub account and push the image to it:

$docker login

$docker image push <dockerhub-account>/kafka:2.4.1

Adding and Testing Template

Once your image is available at Docker Hub, it is time to add it as a custom template via the Jelastic Cluster Admin Panel. You can check the dedicated tutorial on how to add a template to the platform.

Also, we’ll cover the main steps below:

1. In order to add a new template, go to the JCA > Templates section and click Add > From Docker Repository.


2. Type the same repository name that you used at the build stage (i.e. <dockerhub-account>/kafka). Press Enter or click the magnifying glass icon to proceed.

3. If everything is done properly, the image tags will be pulled from the Docker Hub repository.


All the template metadata is automatically pulled from the image (icons are applied automatically during addition as well). So, choose the tag that should be published as the default one and click Add.

4. Now, your template should appear in the list. We recommend testing the image before publishing it. Select Kafka and click Actions > Preview Unpublished in the toolbar.


Your dashboard account will be opened with all the software templates available (including unpublished ones).

5. Create a New Environment with the Kafka template (listed within the Extra section of the topology wizard) and click Create.

6. Once the environment is created, you can check whether the Kafka broker server works properly. Open the Web SSH terminal and run the commands as follows:

  • create a new test-topic topic

$/opt/kafka/bin/kafka-topics.sh --create --topic test-topic --zookeeper localhost:2181 --partitions 1 --replication-factor 1


  • write a few messages into the created topic

$/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic


  • exit with the Ctrl+C shortcut and try to read the messages that were sent by the producer (the --from-beginning flag is required to view messages sent before the consumer started):

$/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning

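As an additional optional check, you can list all topics registered in ZooKeeper to confirm that test-topic was created:

$/opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181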

Congratulations! As you can see, all of the messages sent to test-topic were obtained by the consumer, which means the Kafka stack is working properly.

7. Now, you can return to the JCA and Publish the template to make it available at the platform dashboard for all your PaaS customers.


See the video tutorial to follow the whole process.

That’s all! You have gone through all of the steps required to create a custom software stack template for your Jelastic Private PaaS and get it published at the platform dashboard. Now, you can use this guide to adapt your own solutions and add them to the platform.