Helm

Kubernetes configuration can be performed via imperative command line calls or declarative YAML files. OpenShift provides a user interface to allow manual configuration of the Kubernetes cluster, which is ideal for discovery and development purposes, but is not sustainable in a production solution.
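As a sketch of the two approaches (assuming a cluster reachable via kubectl), the same nginx deployment could be created imperatively or declaratively:

```shell
# Imperative: each change is a separate command that must be run in order
kubectl create deployment nginx-deployment --image=nginx:1.14.2
kubectl scale deployment nginx-deployment --replicas=3

# Declarative: the desired state is described in YAML and applied as a whole
kubectl apply -f nginx-deployment.yaml
```

The declarative form can be stored in version control and re-applied idempotently, which is why it is preferred for production.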

While Kubernetes YAML definitions are declarative, it is laborious to maintain multiple copies for similar deployment patterns and multiple target environments. The most fundamental declaration is a deployment, which defines what containers are to be deployed.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14.2
        ports:
        - containerPort: 80

To avoid proliferation of YAML definitions, and to provide the flexibility to alter deployment-specific aspects, Helm was introduced. Helm provides a template for deployments, which can be re-used for multiple applications across multiple environments.

graph TD

    subgraph test
        subgraph app1
        serv1["service"]
        appt1["pod"]
        end
        subgraph app2
        serv2["service"]
        appp2["pod"]
        end
    end

    subgraph prod
        subgraph app3
        serv3["service"]
        appt3["pod"]
        end
        subgraph app4
        serv4["service"]
        appp4["pod"]
        end
    end

  serv1 --> appt1
  serv2 --> appp2

  serv3 --> appt3
  serv4 --> appp4

classDef dotted stroke-dasharray: 2, 2
class test,prod dotted

classDef dashed stroke-dasharray: 5, 5
class app1,app2,app3,app4 dashed

Deploying each application, in each environment, would otherwise require imperative knowledge of the steps needed to achieve the desired outcome; see Desired State for a declarative, rather than imperative, approach to releases.


Helm Hello World

The following example is relatively complicated and doesn’t serve well as a learning exercise.

Use the Helm Getting Started material to create a template which has all the appropriate structure and some example charts.
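For example (assuming the Helm CLI is installed, and an illustrative chart name), the starter chart can be scaffolded and released with:

```shell
# Scaffolds Chart.yaml, values.yaml and the templates/ directory
helm create hello-world

# Installs the generated chart into the current cluster context
helm install hello-world ./hello-world
```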

Note

The template does not work in OpenShift because the root-less containers do not allow Nginx to bind to port 80.

How Helm Works

Using the previous YAML example, all of the elements that we want to re-use for multiple apps, or configure differently for progressive environments, are defined as properties. This is the basis of the files that make up the template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "Chart.fullname" . }}
  labels:
    {{- include "Chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "Chart.labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "Chart.labels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        ports:
        - containerPort: {{ .Values.service.port }}
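Before deploying, the templated output can be inspected locally; a sketch assuming the chart directory is ./chart and a release name of my-release:

```shell
# Render the templates to plain YAML without contacting the cluster
helm template my-release ./chart -f values.yaml
```

This is a useful check that the property substitution produces the YAML you expect.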

There are two files used with the templates to apply deploy-time settings. The first, Chart.yaml, is included with the template and implements the DRY principle (Don't Repeat Yourself): literals that are applied repeatedly across the template are defined here.

Chart.yaml

apiVersion: v2
name: nginx-container
fullname: nginx-deployment
description: A Helm chart for Kubernetes
appVersion: "1.16.0"
labels:
  app: nginx

A values file is used at deploy time to allow the re-use of the template across multiple applications, and environments.

replicaCount: 1

image:
  repository: docker.io/cdaf/fastapi
  tag: "50"

service:
  port: 80
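At deploy time the values file is supplied on the command line, so the same chart can be released with different settings per application or environment (release and chart names here are illustrative):

```shell
# Initial release using the values file
helm install my-app ./chart -f values.yaml

# Subsequent releases can override individual values per environment
helm upgrade my-app ./chart -f values.yaml --set replicaCount=3
```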

Tokenised Values

To avoid the creation of multiple values YAML files, and the inherent structural drift of those files, a single file should be defined with tokenised settings. The CDAF configuration management feature can be used to provide a human readable settings definition which gives an abstraction from the complexity of the Helm files.

example.cm

context    target  replicaCount  port
container  LINUX   1             8081
container  dev     1             8080
container  TEST    2             8001
container  PROD    2             8000

Now the values YAML contains tokens for deploy time replacement.

replicaCount: "%replicaCount%"

image:
  repository: docker.io/cdaf/fastapi
  tag: "50"

service:
  port: "%port%"

Helm Repository

To provide Helm charts as a re-usable asset, Helm provides versioning and packaging. The resulting versioned packages can be consumed by multiple applications and environments. To ensure the release package is consistent and repeatable, the Helm packages are downloaded at build (CI) and not during deployment (CD). The packages are included in the release package so there are no external dependencies at deploy time.

The Helm registry

The Helm command line can create the packaged templates and the required index file.

helm package $chart_name --destination public
helm repo index public
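Consumers can then add the web server as a repository and download the versioned package at build time (repository name and URL are illustrative):

```shell
# Register the web server hosting the packages and index.yaml
helm repo add cdaf https://example.org/public
helm repo update

# Download the pinned package version for inclusion in the release package
helm pull cdaf/internal-service --version 0.0.1
```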

The resulting package (.tgz) files and index.yaml are placed on a web server to provide the repository service, e.g.

apiVersion: v1
entries:
  internal-service:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.763749822Z"
    description: Use Values for Container Name
    digest: 9a0cf4c0989e3921bd9b4d2e982417c3eac04f5863feb0439ad52a9f1d6ffeb9
    name: internal-service
    type: application
    urls:
    - internal-service-0.0.1.tgz
    version: 0.0.1
  kiali-dashboard:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.764037805Z"
    description: Use Values for Container Name
    digest: aa65089080e3e04a6560a1f3b70fc8861609d8693c279b10154264a9fe9fc794
    name: kiali-dashboard
    type: application
    urls:
    - kiali-dashboard-0.0.2.tgz
    version: 0.0.2