Running a Hubot in the Kubernetes cluster — part 2
In my previous post I started preparing to run a Hubot instance in a Kubernetes cluster. I showed how to set up a Docker image, how to use MySQL as Hubot’s brain and so on. Now we are ready to jump into Helm and Charts and do some real Kubernetes stuff!
Just one remark: I assume you have a Kubernetes cluster running and an account on Docker Hub to push your images to. I also assume DevOps has already installed Jenkins on the cluster, which you can use for deployment.
Helm
The very first thing we must do is install Helm, also known as the package manager for Kubernetes. It basically allows deploying any service into a Kubernetes cluster based on a defined Chart. A chart is a packaging format, like a Dockerfile or Jenkinsfile, which you can use to describe how to deploy your service. It can be simple or complicated and lets you deploy really complex systems. In our case I will use two charts: a ready-to-use one for MySQL, and a second chart that describes how to deploy our Hubot.
Note: I’ve been using Helm 3.x, which is a bit different from Helm 2.x and generates a slightly different skeleton.
Hubot Chart
There is a stable Chart already available, but I wanted to understand and learn how to write Charts, so I decided to do it from scratch. Also, I wanted to use MySQL to store Hubot’s brain.
The simplest way to start writing a Chart is to use the helm command line to create a skeleton. Create a folder helm or k8s in the root folder of your Hubot, enter it, and run the command below:
helm create my-hubot
It will generate a set of files that define your Chart, and you should end up with a structure similar to the one below:
./
└── helm/
    └── my-hubot/
        ├── .helmignore    # Contains patterns to ignore
        ├── Chart.yaml     # Information about your chart
        ├── values.yaml    # The default values for your templates
        ├── charts/        # Charts that this chart depends on
        └── templates/     # The template files
            ├── _helpers.tpl
            ├── deployment.yaml
            ├── ingress.yaml
            ├── NOTES.txt
            ├── service.yaml
            ├── serviceaccount.yaml
            └── tests/     # The test files
                └── test-connection.yaml
(you can delete the tests folder and serviceaccount.yaml, as we won’t use them)
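For reference, that cleanup is just two shell commands (a sketch assuming the layout above, run from the root of your Hubot repo):

```shell
#!/bin/sh
# Remove the generated files we won't use (safe to run even if they
# are already gone; paths assume the `helm create` layout shown above).
rm -rf helm/my-hubot/templates/tests
rm -f  helm/my-hubot/templates/serviceaccount.yaml
```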
The most important files are values.yaml and deployment.yaml. We also need a template file configmap.yaml, which we have to create manually. Chart.yaml contains information about our Chart, its version and its dependencies; it will be needed to deploy MySQL.
Chart.yaml
This file contains some metadata describing your application, such as apiVersion, name and version. It is filled with comments explaining when you should increase version and so on; please read them through. In our case the most important thing is the ability to define dependencies on other Charts. As our Hubot will be using MySQL, we must declare the dependency here and configure it in values.yaml. Just add this section at the bottom of the file:
dependencies:
  - name: mysql
    version: 1.4.0
    repository: https://kubernetes-charts.storage.googleapis.com
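For orientation, here is what the complete Chart.yaml might look like after adding the dependency (a sketch: the name, description and version fields are whatever helm create generated for you; apiVersion: v2 marks the Helm 3 chart format):

```yaml
apiVersion: v2
name: my-hubot
description: A Helm chart for deploying my Hubot
type: application
version: 0.1.0
appVersion: "1.0"

dependencies:
  - name: mysql
    version: 1.4.0
    repository: https://kubernetes-charts.storage.googleapis.com
```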
Right now we are ready to set up the rest.
values.yaml
This file contains all the values used in the Chart; it is good practice to expose all the configuration options of your Chart here.
Tip: if you are using IntelliJ IDEA you can install a plugin to support editing these files.
At the very top of the file there is an image section, which describes the Docker image to use in this deployment. Let’s change repository to match the one you are using and add tag so we can deploy an exact version of our Hubot image. Also, if you’re using a private Docker repo, you must define a secret used to pull the image (imagePullSecrets); ask your DevOps folks for it and add it here:
image:
  repository: my-org/my-hubot
  tag: latest
  pullPolicy: IfNotPresent

imagePullSecrets:
  - name: sml-docker-hub
Now let’s move to the ingress section to define a few endpoints used by Kubernetes to monitor our service and to receive incoming Slack actions. Uncomment the two annotations and set tls-acme to false, as we won’t be using HTTPS for now. Then define host and paths as below:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "false"
  path: /
  hosts:
    - host: my-hubot.sml.io
      paths: [/status, /slack/actions]
  tls: []
  #  - secretName: my-hubot-tls
  #    hosts:
  #      - my-hubot.sml.io
The very last value we must define is the Slack token used by Hubot’s Slack connector. Let’s add a slack section with a token value; we can use this section to define other useful Slack options too, e.g. exitOnDisconnect:
slack:
  token: 11111111111
  exitOnDisconnect: true
  apiUrl: "https://myorg.slack.com/api"
As you can see, the token value is a random string, but that’s OK; we will provide the proper value during deployment, when the Jenkins job runs. We will come back to this file while configuring our MySQL instance, but now it’s time to define the deployment.
deployment.yaml
I assume you will be using the provided service account, configured by your Cluster Admins, so let’s delete the serviceAccountName option, as it is not needed.
We can now adjust the containers section to use the tag defined in values.yaml instead of AppVersion; this is handier, as we can use Jenkins to tag our image and then deploy that exact tag.
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
As you can see, we can now use values defined in values.yaml to configure our deployment, and since it is possible to override those values when deploying the service, we can externalise the whole configuration.
Now we must expose the container port and define some environment variables that will be passed into the container and used by our bot, so let’s modify the ports section and add an envFrom section as follows:
ports:
  - name: http
    containerPort: 8080
    protocol: TCP
envFrom:
  - configMapRef:
      name: {{ include "my-hubot.fullname" . }}
Tip: when using IntelliJ IDEA with the proper plugin, you can navigate to the included value with CMD+B.
As we referenced a new configMapRef, we must now create a new file representing the config. At the same level, create a file configmap.yaml and fill it as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}
  labels:
    {{- include "my-hubot.labels" . | nindent 4 }}
data:
  HUBOT_SLACK_TOKEN: {{ .Values.slack.token }}
  SLACK_API_URL: {{ .Values.slack.apiUrl }}
  MYSQL_URL: mysql://{{ .Values.mysql.mysqlUser }}:{{ .Values.mysql.mysqlPassword }}@{{ include "my-hubot.fullname" . }}-mysql/{{ .Values.mysql.mysqlDatabase }}
I think this file is self-explanatory. We are creating a set of environment variables based on our values.yaml, and as those values can be overridden when deploying the Chart, we can customise the deployment without modifying the bot itself.
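To make the templated MYSQL_URL concrete, here is a small shell sketch with stand-in values mirroring values.yaml (my-hubot is what the fullname helper would resolve to in this hypothetical release):

```shell
#!/bin/sh
# Stand-ins for the values the template pulls from values.yaml
MYSQL_USER=myhubot
MYSQL_PASSWORD=myhubot
FULLNAME=my-hubot          # assumed result of the fullname helper
MYSQL_DATABASE=myhubot

# Same shape as the MYSQL_URL entry in the ConfigMap
MYSQL_URL="mysql://${MYSQL_USER}:${MYSQL_PASSWORD}@${FULLNAME}-mysql/${MYSQL_DATABASE}"
echo "$MYSQL_URL"
```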
The two most important sections left, readinessProbe and livenessProbe, are used by k8s to check whether your service is ready to serve requests and whether it is still alive or needs to be killed and replaced by a new Pod. Just expose a simple endpoint in your Hubot:
robot.router.get '/status', (req, res) ->
  robot.logger.debug "Status check"
  res.send 'Ok'
This endpoint shouldn’t check whether the connection to MySQL or other dependent services is ready; the assumption here is that those services are also controlled by k8s or monitored in another way. Having this endpoint, we can define the health checks:
livenessProbe:
  httpGet:
    path: /status
    port: http
readinessProbe:
  httpGet:
    path: /status
    port: http
OK, our Hubot deployment is almost done; what is left is the MySQL configuration. Switch back to values.yaml.
MySQL config in values.yaml
As you remember, we have declared that our service depends on the MySQL Chart, and that Chart exposes a number of values that can be used to configure the running MySQL instance; check its documentation for more details. To use MySQL with Hubot, we must define a username and a password plus an init script that creates the database and table if they do not exist. In values.yaml, define a new section as below to configure the MySQL deployment:
mysql:
  mysqlRootPassword: pass
  mysqlUser: myhubot
  mysqlPassword: myhubot
  mysqlDatabase: myhubot
  initializationFiles:
    myhubot-db.sql: |-
      CREATE DATABASE IF NOT EXISTS myhubot DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
      USE myhubot;
      CREATE TABLE IF NOT EXISTS brain (id INT, data TEXT, PRIMARY KEY (id));
Put in some dummy passwords, as these will be overwritten during deploy anyway. And done… almost ;-) The problem is that Hubot starts in a few seconds, but MySQL needs a bit more time to be ready, so our Hubot will fail at startup because MySQL is not available yet.
To resolve this dependency, we must tell k8s how to initialise our Hubot container and wait for MySQL to be ready. To do this, we define an initContainers section in deployment.yaml.
initContainers
If you want to perform some actions before the main container starts, you can use scripts in the initContainers section. In our case it has to wait for MySQL to be ready, so we can use the MySQL image, which is already available thanks to the MySQL Chart. At the same level as the containers section, define an initContainers section:
containers:
  - name: {{ .Chart.Name }}
    ...
initContainers:
  - name: init-mydb
    image: mysql:5.7
    command: ['sh', '-c', 'until mysqladmin --host={{ include "my-hubot.fullname" . }}-mysql --port={{ .Values.mysql.service.port | default "3306" }} -u root -p{{ .Values.mysql.mysqlRootPassword }} ping; do echo waiting for {{ include "my-hubot.fullname" . }}-mysql; sleep 2; done;']
Basically, we run a container based on the mysql:5.7 image and execute a command in a loop to check whether our MySQL instance is ready. We again used values from values.yaml to build the command, which makes sense :)
You might say this could run forever, but Helm’s default timeout for deploying a Chart is 5 minutes, so no worries ;-)
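The waiting logic itself is the classic until-check-sleep shell pattern. Here is a self-contained sketch of it, where a marker file stands in for mysqladmin ping (which obviously needs a live server):

```shell
#!/bin/sh
# Stand-in for `mysqladmin ping`: "ready" once the marker file exists.
ready() { [ -f /tmp/db-ready-demo ]; }

rm -f /tmp/db-ready-demo
# Simulate MySQL becoming ready after about a second, in the background.
( sleep 1; touch /tmp/db-ready-demo ) &

attempts=0
until ready; do
  attempts=$((attempts + 1))
  echo "waiting for mysql (attempt $attempts)"
  sleep 0.2
done
echo "mysql is ready"
rm -f /tmp/db-ready-demo
```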
I hope your Chart is ready to be deployed; let’s test it!
helm --dry-run
If you have already configured access to your k8s cluster, so that you can use kubectl to monitor and manage it, you can also dry-run your deployment with the helm command.
Use this command to test your Chart:
helm upgrade --install --atomic --dry-run my-hubot ./helm/my-hubot
Using upgrade --install means that if the Chart was already installed it will just be upgraded; otherwise it will be installed from scratch. The --atomic flag wraps everything in a transaction, so in case of an error all changes are rolled back. And finally, --dry-run tells Helm to only simulate the deployment.
Now you should prepare an additional YAML file, but keep it out of the repo with Hubot’s code (or put it there and ignore it with .gitignore). This file must contain all the secrets used by the Chart to deploy the service, and its content must match the structure of the values.yaml file, like this:
slack:
  token: "...."
  exitOnDisconnect: true
  apiUrl: "https://mynamespace.slack.com/api"

mysql:
  mysqlRootPassword: "..."
  mysqlUser: "myhubot"
  mysqlPassword: "..."
The same file can then be used by Jenkins during hands-free deployments, so keep it secret ;) And again, you can run a test deployment with the real values:
helm upgrade --install --atomic --dry-run -f ~/myhubot-secrets.yaml my-hubot ./helm/my-hubot
It’s time to tell Jenkins how to deploy all this stuff, so let’s prepare a Jenkinsfile.
Jenkinsfile
This is a simple scripted pipeline (and I would like to convert it into a declarative pipeline at some point — stay tuned!), so we must start with:
#!/usr/bin/env groovy
Then declare some initial properties describing our job:
properties([
    buildDiscarder(
        logRotator(
            artifactDaysToKeepStr: '',
            artifactNumToKeepStr: '',
            daysToKeepStr: '',
            numToKeepStr: '10'
        )
    ),
    disableConcurrentBuilds(),
    [
        $class: 'GithubProjectProperty',
        displayName: 'My Hubot',
        projectUrlStr: 'https://github.com/my-org/my-hubot/'
    ],
    overrideIndexTriggers(false),
    pipelineTriggers([
        githubPush(),
        pollSCM('H/15 * * * *')
    ])
])
Nothing much to explain here; you can google each of these properties if you want. Now let’s add some nice tag-handling logic, which I borrowed from our fantastic DevOps guy (thanks, Grzegorz Kocur!):
String getGitHash() {
    return sh(
        script: 'git describe --always --tags --abbrev=7',
        returnStdout: true
    )?.trim()
}
I then combine it with the current build number to tag the image:
dockerTag = "CD-${currentBuild.number}-${getGitHash()}"
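Outside Jenkins, the same tag can be reproduced in plain shell (a sketch: BUILD_NUMBER stands in for currentBuild.number, and I create a throwaway git repo so git describe has a commit to describe):

```shell
#!/bin/sh
set -e
# Throwaway repo so `git describe --always` has something to work with
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'demo commit'

BUILD_NUMBER=42   # stands in for currentBuild.number
GIT_HASH=$(git describe --always --tags --abbrev=7)
DOCKER_TAG="CD-${BUILD_NUMBER}-${GIT_HASH}"
echo "$DOCKER_TAG"
```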
Now let’s add some boilerplate:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: ${serviceAccount}
  securityContext:
    runAsUser: 0
  volumes:
    - name: cache
      emptyDir: {}
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
  containers:
    - name: docker
      image: docker:stable
      command:
        - cat
      tty: true
      volumeMounts:
        - name: dockersock
          mountPath: "/var/run/docker.sock"
    - name: helm
      image: lachlanevenson/k8s-helm:v3.0.0
      command:
        - cat
      tty: true
    - name: node
      image: node:8.16.2-alpine3.10
      command:
        - cat
      tty: true
"""
) {
    ...
When running Jenkins in k8s with the Kubernetes plugin, you can run your jobs using predefined containers and perform steps within each of them. Here we use the docker container (which is obvious), then helm to deploy the Chart, and finally node to build the bot before deploying it.
So let’s go through each step:
) {
    node(POD_LABEL) {
        stage('Checkout') {
            checkout scm
            dockerTag = "CD-${currentBuild.number}-${getGitHash()}"
        }
The first step is to check out the source code and define the tag; then we compile the code to see if there are any errors. This is also a good place to run the unit tests for your Hubot:
) {
    node(POD_LABEL) {
        stage('Checkout') {
            ...
        }
        container('node') {
            stage('Compile') {
                sh 'npm install --global coffeescript'
                sh 'coffee -c -p scripts > /dev/null'
            }
        }
If compilation fails, the whole job fails and nothing bad happens :) Time to build a Docker image with the latest version of our bot and publish it to Docker Hub:
) {
    node(POD_LABEL) {
        stage('Checkout') {
            ...
        }
        container('node') {
            ...
        }
        container('docker') {
            stage('Build docker image') {
                sh "docker build -t my-org/my-hubot:${dockerTag} ."
            }
            stage('Publish docker image') {
                withCredentials(
                    [usernamePassword(
                        credentialsId: 'docker-hub-credentials',
                        passwordVariable: 'DOCKER_PASSWORD',
                        usernameVariable: 'DOCKER_USERNAME')]) {
                    sh """
                        docker login -u \${DOCKER_USERNAME} -p \${DOCKER_PASSWORD}
                        docker push my-org/my-hubot:${dockerTag}
                    """
                }
            }
        }
To push the image to Docker Hub, we must provide a username and password; this can be achieved with the withCredentials step, assigning them to environment variables used by the docker command. Here I’m using my fancy tag, a combination of the build number and the commit hash.
The final step is to deploy everything to the k8s cluster using Helm:
) {
    node(POD_LABEL) {
        stage('Checkout') {
            ...
        }
        container('node') {
            ...
        }
        container('docker') {
            ...
        }
        container('helm') {
            stage('Deploy') {
                if (env.BRANCH_NAME == 'master') {
                    withCredentials(
                        [file(
                            credentialsId: 'janusz-secrets',
                            variable: 'secrets')]) {
                        sh """
                            helm dependency update helm/my-hubot/
                            helm upgrade --install --atomic \
                                --set image.tag=${dockerTag} \
                                -f ${env.secrets} \
                                my-hubot ./helm/my-hubot
                        """
                    }
                } else {
                    echo 'Not a master branch, skipping deploy'
                }
            }
        }
    }
}
The deploy is only performed if we checked out the source code from the master branch; otherwise this step is skipped.
Next, I’m using the same secrets file as the one used earlier to test the Chart. You can define new credentials of type secret file in Jenkins and upload the file, which can then be used in your jobs as shown above.
The very first command in this step updates the Chart’s dependencies so they can be deployed; it must be run at least once, as only then will helm download them.
The next command is the well-known deploy command, just without --dry-run, and that’s all.
If everything went well, your Hubot will now be running in the k8s cluster. Don’t worry if it doesn’t work right away; it took me around 100 job runs to get everything set up as it should be ;-)
Final word
The state of knowledge presented in this blog post may already be outdated; the DevOps ecosystem is changing faster than JavaScript these days :D
I hope you at least grasp the main idea of what it means to run a service, here a Hubot, in a Kubernetes cluster, and that you will be able to adjust these ideas and tips to your own case. If not, leave a comment and maybe I will be able to help you!
Enjoy!