Kubernetes Helm logo
In the previous article, we translated the reference implementation's docker-compose.yaml file into a set of Kubernetes manifests. To deploy our app, we used the kubectl command to apply our manifest files. Since our reference implementation has a considerable number of individual manifest files, we created a shell script to start and stop the application. While this approach is sufficient to demonstrate that the application can run in our development Kubernetes environment, we would likely need additional, customized manifests and scripts for every environment in which we deploy the application (e.g., development, QA, production). What we need is a way to package the application and externalize its configuration. What we need is Helm.

Introducing Helm

Helm is an open-source package management tool for Kubernetes that allows us to easily package, configure, install, upgrade, and manage application dependencies. Helm packages are referred to as charts and are composed of a set of YAML configuration files and Go templates that are rendered by Helm's template engine into corresponding Kubernetes manifest files.

Installing Helm

The first step is to install Helm in our Microk8s environment. Microk8s makes this easy by supplying both a Helm and a Helm 3 add-on. We will use the latest version, Helm 3. For a complete overview of the changes between Helm 2 and Helm 3, refer to Migrating Helm v2 to v3.

microk8s enable helm3
[sudo] password for <user>
Enabling Helm 3
Fetching helm version v3.0.2.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.5M  100 11.5M    0     0  9310k      0  0:00:01  0:00:01 --:--:-- 9312k
Helm 3 is enabled

We can verify everything is working by checking Helm's version.

microk8s helm3 version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Creating Helm Charts

A Helm chart's elements exist in a well-defined directory structure. Helm can generate this structure for you using the following command:

microk8s helm3 create thinkmicroservices
Creating thinkmicroservices
If we tree the thinkmicroservices directory, we see:

tree thinkmicroservices

.
└── thinkmicroservices
    ├── charts
    ├── Chart.yaml
    ├── .helmignore
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   ├── serviceaccount.yaml
    │   ├── service.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml

4 directories, 10 files

	
Here we see Helm has generated the directory structure along with several important files:

  • charts- This directory contains manually managed chart dependencies
  • Chart.yaml- This YAML file contains the chart's metadata. The Chart.yaml file can include the following fields:

     
    apiVersion: The chart API version (required)
    name: The name of the chart (required)
    version: A SemVer 2 version (required)
    kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
    description: A single-sentence description of this project (optional)
    type: The type of the chart (optional)
    keywords:
      - A list of keywords about this project (optional)
    home: The URL of this project's home page (optional)
    sources:
      - A list of URLs to source code for this project (optional)
    dependencies: # A list of the chart requirements (optional)
      - name: The name of the chart (nginx)
        version: The version of the chart ("1.2.3")
        repository: The repository URL ("https://example.com/charts") or alias ("@repo-name")
        condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled )
        tags: # (optional)
          - Tags can be used to group charts for enabling/disabling together
        enabled: (optional) Enabled bool determines if chart should be loaded
        import-values: # (optional)
          - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
        alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
    maintainers: # (optional)
      - name: The maintainer's name (required for each maintainer)
        email: The maintainer's email (optional for each maintainer)
        url: A URL for the maintainer (optional for each maintainer)
    icon: A URL to an SVG or PNG image to be used as an icon (optional).
    appVersion: The version of the app that this contains (optional). This needn't be SemVer.
    deprecated: Whether this chart is deprecated (optional, boolean)
    annotations:
      example: A list of annotations keyed by name (optional).
    	
    
    Note: apiVersion, name, and version are the only required fields.
  • .helmignore- This file specifies the file patterns that the helm package command should ignore when packaging the application. It supports Unix shell glob matching, relative path matching, and negation (prefixed with !). For more details, refer to the Helm .helmignore file documentation.
  • templates- This directory contains the application's manifest template files. Each manifest template may contain zero or more template directives. Helm's template engine injects configuration values into these template directives when it renders the manifest files. Each template directive starts with {{, is followed by a value key, and ends with }}.

    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      logging-level: "INFO"
    
    
    Here, .Release.Name is one of Helm's built-in objects and resolves to the release name supplied at install time; keys defined in values.yaml are referenced through the .Values object instead (see the rendering sketch that follows this list).
    • deployment.yaml- is an example Deployment manifest generated by the helm3 create command.
    • _helpers.tpl-is generated by the helm3 create command and includes template partials and functions.
    • ingress.yaml-is an example Ingress manifest generated by the helm3 create command. An Ingress manages external access to the services in a cluster and often provides load balancing, SSL termination, and name-based virtual hosting.
    • NOTES.txt-is an example NOTES.txt file generated by the helm3 create command. This file contains useful instructions for Helm chart users that are echoed to the console at the end of a helm install or helm upgrade.
    • serviceaccount.yaml-is an example serviceaccount.yaml file generated by the helm3 create command. A ServiceAccount allows containers running in a Pod to communicate with the Kubernetes API server to perform cluster administration tasks.
    • service.yaml-is an example Service manifest generated by the helm3 create command. The service.yaml illustrates a templated example of a Kubernetes Service.
    • tests-is a directory containing tests used to validate the chart. Each test contains a Job definition specifying a container with a given command to run. Successful tests return a 0 exit code.
  • values.yaml- This YAML file contains the default configuration values for the chart. It is used by Helm's templating engine when generating the chart's Kubernetes manifests.
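
To make the lookup concrete, here is a minimal rendering sketch. The logging.level key and the configmap.yaml template are hypothetical additions used purely for illustration; they are not part of the generated chart.

# values.yaml (hypothetical entry)
logging:
  level: "INFO"

# templates/configmap.yaml (hypothetical template)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  logging-level: {{ .Values.logging.level | quote }}

Rendering the chart locally with helm3 template (no resources are created) substitutes the release name and the value from values.yaml:

microk8s helm3 template my-release ./thinkmicroservices

# rendered output for the hypothetical ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-configmap
data:
  logging-level: "INFO"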

Running the reference implementation with Helm

Now that we have a basic understanding of Helm, we are ready to package the reference implementation. We will start with a simple Helm chart that doesn't perform any templating. The first step is to modify the Chart.yaml of the thinkmicroservices chart we created earlier, editing it to describe our application.


apiVersion: v2
name: thinkmicroservices-ri
description: A simple Helm Chart for the ThinkMicroservices reference implementation application.

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0

Now we will delete the deployment.yaml, ingress.yaml, serviceaccount.yaml, and service.yaml example files and copy our working Kubernetes YAML manifest files into the chart's templates directory. We won't make any changes to these files for now; Helm will still process each one but, finding no template directives, will render it unmodified. Congratulations, you have now completed your first (albeit rudimentary) Helm chart. Of course, the next obvious step is to run it! We can accomplish this using the install command.
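
As a reference, the clean-up and copy described above might look something like this from the shell (the ../kubernetes source path is only an assumption about where the manifests from the previous article live):

# remove the generated example templates we won't use
rm thinkmicroservices/templates/deployment.yaml \
   thinkmicroservices/templates/ingress.yaml \
   thinkmicroservices/templates/serviceaccount.yaml \
   thinkmicroservices/templates/service.yaml

# copy our existing Kubernetes manifests into the chart's templates directory
cp ../kubernetes/*.yaml thinkmicroservices/templates/

With the manifests in place, we can install the chart.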

microk8s helm3 install simple-ri-chart ./thinkmicroservices
You should see output similar to this.

NAME: simple-ri-chart
LAST DEPLOYED: Tue Oct 27 22:43:11 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
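
Helm can also report on the release directly; the standard helm3 list and helm3 status commands show the installed releases and the current state of ours:

microk8s helm3 list
microk8s helm3 status simple-ri-chart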
We can also inspect the individual resources using microk8s kubectl. We will start by checking our deployments.

 microk8s kubectl get deployments
 NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
 grafana-deployment        1/1     1            1           17m
 mongo-express             1/1     1            1           17m
 postgresdb                1/1     1            1           17m
 elasticsearch             1/1     1            1           17m
 mongo-db-service          1/1     1            1           17m
 kibana                    1/1     1            1           17m
 content-service           1/1     1            1           17m
 fluentd                   1/1     1            1           17m
 rabbitmq                  1/1     1            1           17m
 postgresadmin             1/1     1            1           17m
 config-service            1/1     1            1           17m
 auth-service              1/1     1            1           17m
 telemetry-service         1/1     1            1           17m
 prometheus-deployment     1/1     1            1           17m
 account-history-service   1/1     1            1           17m
 feature-service           1/1     1            1           17m
 email-outbound-service    1/1     1            1           17m
 api-gateway-service       1/1     1            1           17m
 discovery-service         1/1     1            1           17m
 peer-signaling-service    1/1     1            1           17m
 sms-outbound-service      1/1     1            1           17m
 notification-service      1/1     1            1           17m
 account-profile-service   1/1     1            1           17m
 administration-service    1/1     1            1           17m

Now we can check our pods.

 microk8s kubectl get pods
 NAME                                       READY   STATUS    RESTARTS   AGE
 grafana-deployment-6f778cc4f-sb2dm         1/1     Running   0          21m
 mongo-express-679c74f889-mrjhr             1/1     Running   0          21m
 postgresdb-7f7685745f-kdbcj                1/1     Running   0          21m
 elasticsearch-7c59bb9fcc-d7hpc             1/1     Running   0          21m
 mongo-db-service-75489c945f-gxkpb          1/1     Running   0          21m
 kibana-5799f9fb6-72c2n                     1/1     Running   0          21m
 content-service-78d94c6dc4-gg2ch           1/1     Running   0          21m
 fluentd-8486c4db6f-mlkgt                   1/1     Running   0          21m
 rabbitmq-69876d496f-58x48                  1/1     Running   0          21m
 postgresadmin-6b586d8c88-z4kdp             1/1     Running   0          21m
 config-service-59447d9f59-7jw7g            1/1     Running   0          21m
 auth-service-7d789cd7b5-zw7zz              1/1     Running   0          21m
 prometheus-deployment-6d7645fcf9-mxhnn     1/1     Running   0          21m
 telemetry-service-7c8dbdc668-p94z2         1/1     Running   0          21m
 feature-service-d797c66cc-wlflv            1/1     Running   1          21m
 administration-service-5f54496fd5-s74d5    1/1     Running   1          21m
 api-gateway-service-6fb6bc99bd-d5s62       1/1     Running   1          21m
 peer-signaling-service-5fc44775b8-kn9l6    1/1     Running   1          21m
 discovery-service-5f64cbb6d4-nj6kc         1/1     Running   1          21m
 email-outbound-service-6fd8f89dfb-5p2xx    1/1     Running   1          21m
 account-history-service-65bdc97b4c-rtx52   1/1     Running   1          21m
 notification-service-585696d986-zkqn2      1/1     Running   1          21m
 sms-outbound-service-79c575c5df-8rg2h      1/1     Running   1          21m
 account-profile-service-b9b6b4878-4ktr4    1/1     Running   1          21m


and finally, our services.

 microk8s kubectl get services
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
kubernetes                ClusterIP   10.152.183.1     <none>        443/TCP               31d
account-profile-service   ClusterIP   10.152.183.65    <none>        5020/TCP              21m
kibana                    ClusterIP   10.152.183.164   <none>        5601/TCP              21m
mongo-express             ClusterIP   10.152.183.219   <none>        8081/TCP              21m
config-service            ClusterIP   10.152.183.98    <none>        8888/TCP              21m
content-service           ClusterIP   10.152.183.121   <none>        4040/TCP              21m
discovery-service         ClusterIP   10.152.183.11    <none>        8761/TCP              21m
account-history-service   ClusterIP   10.152.183.253   <none>        5010/TCP,5019/TCP     21m
elasticsearch             ClusterIP   10.152.183.6     <none>        9200/TCP              21m
auth-service              ClusterIP   10.152.183.165   <none>        7777/TCP              21m
administration-service    ClusterIP   10.152.183.38    <none>        9999/TCP              21m
sms-outbound-service      ClusterIP   10.152.183.192   <none>        6020/TCP              21m
email-outbound-service    ClusterIP   10.152.183.233   <none>        6010/TCP              21m
peer-signaling-service    ClusterIP   10.152.183.84    <none>        18433/TCP             21m
prometheus                NodePort    10.152.183.46    <none>        9090:31000/TCP        21m
feature-service           ClusterIP   10.152.183.201   <none>        3550/TCP              21m
postgresdb                NodePort    10.152.183.133   <none>        5432:30367/TCP        21m
postgresadmin             ClusterIP   10.152.183.209   <none>        1080/TCP              21m
rabbitmq                  ClusterIP   10.152.183.249   <none>        15672/TCP,5672/TCP    21m
mongo-db-service          ClusterIP   10.152.183.195   <none>        27017/TCP             21m
fluentd                   ClusterIP   10.152.183.86    <none>        24224/TCP,24224/UDP   21m
api-gateway-service       NodePort    10.152.183.105   <none>        8443:31368/TCP        21m
grafana                   NodePort    10.152.183.71    <none>        3000:30000/TCP        21m
telemetry-service         ClusterIP   10.152.183.75    <none>        3500/TCP              21m
notification-service      ClusterIP   10.152.183.47    <none>        6005/TCP              21m


Now that we have verified everything is running, we can shut it down using Helm's uninstall command.

microk8s helm3 uninstall simple-ri-chart
release "simple-ri-chart" uninstalled
We now have a basic Helm chart that will deploy the reference implementation to our Kubernetes cluster.

Externalizing values

Currently, our Helm chart does not externalize any of its configuration values. In this step, we externalize two values: the namespace and the API Gateway Service port. Our values.yaml file declares the following:


# Default values for thinkmicroservices.
# This is a YAML-formatted file.
 
application:
  namespace: t16s
  apiGatewayService:
    port: 9443
 


Now that we have externalized the desired values, we must modify all of the manifest files in the templates directory. We will use api-gateway-service-service.yaml as our example since it contains both externalized values. (Note: only this manifest requires the port template directive change.)


apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yaml
    kompose.service.type: clusterip
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api-gateway-service
  name: api-gateway-service
  namespace: {{ .Values.application.namespace }}
spec:
  ports:
  - name: "{{ .Values.application.apiGatewayService.port }}"
    port: {{ .Values.application.apiGatewayService.port }}
    targetPort: 8443
  selector:
    io.kompose.service: api-gateway-service
  type: NodePort
status:
  loadBalancer: {}

We have added namespace: {{ .Values.application.namespace }} to the metadata section and modified the spec/ports/name and spec/ports/port values to use {{ .Values.application.apiGatewayService.port }}. Once the remaining manifests have been modified to include the namespace, you can use Helm to install the application again. To verify the installation, you can use the same kubectl commands as before; however, you will need to include the --namespace flag.

 microk8s kubectl get deployments --namespace t16s
 microk8s kubectl get pods --namespace t16s
 microk8s kubectl get services --namespace t16s
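
It can also be helpful to confirm that Helm is injecting our values as expected. Both of the following standard Helm 3 commands render the chart's templates locally without modifying the cluster:

# render the templates and print the resulting manifests
microk8s helm3 template simple-ri-chart ./thinkmicroservices

# or simulate the install, showing the rendered manifests and computed values
microk8s helm3 install simple-ri-chart ./thinkmicroservices --dry-run --debug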

Updating the NOTES.txt

As mentioned earlier, the NOTES.txt file is echoed to the console whenever a helm install or helm upgrade is called. We will provide a new file that will tell us the name of the application, its release, and the command to get the API Gateway Service's IP address and port.


You have installed: {{ .Chart.Name }}.

Your release is named {{ .Release.Name }}.

Use the following command to determine the API-GATEWAY-SERVICE IP and port.

microk8s kubectl get all --namespace t16s | grep "service/api-gateway-service" | awk '{print $3 "\t" $5}'
                                                      

Now, whenever we perform a Helm install or upgrade, that message will be run through Helm's template engine and echoed to the console.

 	NAME: simple-ri-chart
 	LAST DEPLOYED: Wed Oct 28 17:24:28 2020
 	NAMESPACE: default
 	STATUS: deployed
 	REVISION: 1
 	NOTES:

 	You have installed: thinkmicroservices-ri.

 	Your release is named simple-ri-chart.

 	Use the following command to determine the API-GATEWAY-SERVICE IP and port.

 	microk8s kubectl get all --namespace t16s | grep "service/api-gateway-service" | awk '{print $3 "\t" $5}'

We can then run the suggested command to get the API Gateway Service IP address and port.

microk8s kubectl get all --namespace t16s | grep "service/api-gateway-service" | awk '{print $3 "\t" $5}'
You should see output similar to this.

10.152.183.166	9443:30553/TCP
We now have a working example of the reference implementation packaged in a Helm chart.

Overriding default values

Frequently, we will want to override the chart's default values. We can override the desired values from the command line using the --set flag.

helm3 install --set value1=v1 --set value2=v2  example-release ./helm-chart-name
In this example, we overrode two values: value1 and value2.
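
With our chart, nested keys from values.yaml are addressed with dotted paths. For example, a hypothetical override of the namespace and API Gateway Service port might look like this (the qa namespace and 8443 port are purely illustrative values):

microk8s helm3 install --set application.namespace=qa --set application.apiGatewayService.port=8443 simple-ri-chart ./thinkmicroservices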

If we have many values to override, we can pass one or more YAML files containing the replacement values.

helm3 install -f override1.yaml -f override2.yaml example-release ./helm-chart-name
In this example, we provide two external files: override1.yaml and override2.yaml. Helm will override the default values with the contents of override1.yaml and then override those values with the contents of override2.yaml. Each subsequent file's values take precedence over those of the preceding files.
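
For our chart, a hypothetical override file might point the deployment at a different namespace and gateway port (the file name and the values shown here are purely illustrative):

# qa-overrides.yaml
application:
  namespace: t16s-qa
  apiGatewayService:
    port: 9444

microk8s helm3 install -f qa-overrides.yaml qa-ri-chart ./thinkmicroservices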

Learning more

At this point, we are capable of packaging our application as a Helm chart, externalizing its configuration, and overriding those configuration values. However, we have only scratched the surface of Helm's capabilities. To dive deeper into Helm, visit Helm's Documentation site.

Coming Up

In our next article, we will look at deploying the reference implementation to a Kubernetes cloud provider.