How to Helm

Declarative Desired State Container Deployment using Helm

This approach is based on Autonomous Development, Authoritative Release, which decouples the development process from the release process.

This is an alternative implementation to Terraform Application Stack, using Helm instead of Terraform, but with the same core principles of runtime versioning and desired state.

The Application Stack can be defined once, and deployed many times into separate namespaces, e.g. development, test and production.

graph TD

  subgraph k8s["Kubernetes"]
    subgraph ns1["Dev namespace"]
      ns1-ingress["ingress"]
      subgraph ns1-pod-1["Pod"]
        ns1-con-a["container"]
      end
      subgraph ns1-pod-2["Pod"]
        ns1-con-b["container"]
        ns1-con-c["container"]
      end
    end
    subgraph ns2["Test namespace"]
      ns2-ingress["ingress"]
      subgraph ns2-pod-1["Pod"]
        ns2-con-a["container"]
      end
      subgraph ns2-pod-2["Pod"]
        ns2-con-b["container"]
        ns2-con-c["container"]
      end
    end
    subgraph ns3["Production namespace"]
      ns3-ingress["ingress"]
      subgraph ns3-pod-1["Pod"]
        ns3-con-a["container"]
      end
      subgraph ns3-pod-2["Pod"]
        ns3-con-b["container"]
        ns3-con-c["container"]
      end
    end
  end

  client --> ns1-ingress --> ns1-con-a
  ns1-ingress --> ns1-con-b --> ns1-con-c

  client --> ns2-ingress --> ns2-con-a
  ns2-ingress --> ns2-con-b --> ns2-con-c

  client --> ns3-ingress --> ns3-con-a
  ns3-ingress --> ns3-con-b --> ns3-con-c

classDef external fill:lightblue
class client external
 
classDef dashed stroke-dasharray: 5, 5
class ns1,ns2,ns3 dashed
 
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2,ns2-pod-1,ns2-pod-2,ns3-pod-1,ns3-pod-2 dotted
  • Helm

    Helm for Kubernetes

  • Desired State Release

    Full Stack Release Helm/Kubernetes

Subsections of How to Helm

Helm

Kubernetes configuration can be performed via imperative command line or declarative YAML files. OpenShift provides a user interface that allows manual configuration of the Kubernetes cluster, which is ideal for discovery and development purposes, but is not sustainable for a production solution.

While Kubernetes YAML definitions are declarative, it is laborious to maintain multiple copies for similar deployment patterns and multiple target environments. The most fundamental declaration is a deployment, which defines the containers to be deployed.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14.2
        ports:
        - containerPort: 80

To avoid proliferation of YAML definitions, and provide flexibility to alter deployment specific aspects, Helm was introduced. Helm provides a template for deployments, which can be re-used for multiple applications across multiple environments.
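
The effect of this templating can be sketched with plain shell substitution; the template fragment, value names and image below are illustrative, not a real chart:

```shell
# Minimal sketch of the render step Helm performs: placeholders in a
# template are replaced with deploy-time values. Illustrative only.
cat > deployment.tpl <<'EOF'
replicas: {{ .Values.replicaCount }}
image: {{ .Values.image }}
EOF

# Hypothetical per-environment values
replicaCount=3
image="nginx:1.14.2"

# Substitute the placeholders, as 'helm template' would from values.yaml
sed -e "s|{{ .Values.replicaCount }}|${replicaCount}|" \
    -e "s|{{ .Values.image }}|${image}|" deployment.tpl > deployment.yaml

cat deployment.yaml
```

One template, rendered with different value sets, yields the per-application and per-environment manifests without duplicating the YAML.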

graph TD

    subgraph test
        subgraph app1
        serv1["service"]
        appt1["pod"]
        end
        subgraph app2
        serv2["service"]
        appp2["pod"]
        end
    end

    subgraph prod
        subgraph app3
        serv3["service"]
        appt3["pod"]
        end
        subgraph app4
        serv4["service"]
        appp4["pod"]
        end
    end

  serv1 --> appt1
  serv2 --> appp2

  serv3 --> appt3
  serv4 --> appp4

classDef dotted stroke-dasharray: 2, 2
class test,prod dotted

classDef dashed stroke-dasharray: 5, 5
class app1,app2,app3,app4 dashed

Deploying each application, in each environment, requires imperative knowledge of the steps needed to achieve the desired outcome. See Desired State Release for a declarative alternative to this imperative approach.

Subsections of Helm

Helm Hello World

The following example is relatively complicated and doesn’t serve well as a learning exercise.

Use the Helm Getting Started material to create a template which has all the appropriate structure and some example charts.

Note

The template does not work in OpenShift because rootless containers do not allow Nginx to bind to port 80.

How Helm Works

Using the previous YAML example, all of the elements that we want to re-use for multiple apps, or configure differently for progressive environments, are defined as properties. This is the basis of the files that make up the template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "Chart.fullname" . }}
  labels:
    {{- include "Chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "Chart.labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "Chart.labels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        ports:
        - containerPort: {{ .Values.service.port }}

Two files are used with the templates to apply deploy-time settings. The first, Chart.yaml, is included with the template and implements the DRY principle, i.e. Don’t Repeat Yourself: it defines the literals that are applied repeatedly across the template.

Chart.yaml

apiVersion: v2
name: nginx-container
fullname: nginx-deployment
description: A Helm chart for Kubernetes
appVersion: "1.16.0"
labels:
  app: nginx

A values file is used at deploy time to allow the re-use of the template across multiple applications, and environments.

replicaCount: 1

image:
  repository: docker.io/cdaf/fastapi
  tag: "50"

service:
  port: 80

Tokenised Values

To avoid the creation of multiple values YAML files, and the inherent structural drift of those files, a single file should be defined with tokenised settings. The CDAF configuration management feature can be used to provide a human readable settings definition which gives an abstraction from the complexity of the Helm files.

example.cm

context    target  replicaCount  port
container  LINUX   1             8081
container  dev     1             8080
container  TEST    2             8001
container  PROD    2             8000

Now the values YAML contains tokens for deploy time replacement.

replicaCount: "%replicaCount%"

image:
  repository: docker.io/cdaf/fastapi
  tag: "50"

service:
  port: "%port%"
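
The detokenisation performed at deploy time can be sketched with sed, here using the values from the TEST row of example.cm; the sed commands stand in for the CDAF detokenise step:

```shell
# Sketch of token replacement against the tokenised values file.
# The TEST target values (2 replicas, port 8001) come from example.cm.
cat > values.yaml <<'EOF'
replicaCount: "%replicaCount%"

service:
  port: "%port%"
EOF

# Replace the %token% markers with the target environment settings
sed -i -e "s/%replicaCount%/2/" -e "s/%port%/8001/" values.yaml

cat values.yaml
```

The same tokenised file can thus serve every environment, with only the .cm settings table varying.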

Helm Repository

To provide Helm charts as a re-usable asset, Helm supports versioning and packaging. The resulting versioned packages can be consumed by multiple applications and environments. To ensure the release package is consistent and repeatable, the Helm packages are downloaded at build (CI) rather than during deployment (CD). The packages are included in the release package so there are no external dependencies at deploy time.

The Helm registry

The Helm command line can create the packaged templates and the required index file.

helm package $chart_name --destination public
helm repo index public

The resulting package files and index.yaml are placed on a web server to provide the repository service, e.g.

apiVersion: v1
entries:
  internal-service:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.763749822Z"
    description: Use Values for Container Name
    digest: 9a0cf4c0989e3921bd9b4d2e982417c3eac04f5863feb0439ad52a9f1d6ffeb9
    name: internal-service
    type: application
    urls:
    - internal-service-0.0.1.tgz
    version: 0.0.1
  kiali-dashboard:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.764037805Z"
    description: Use Values for Container Name
    digest: aa65089080e3e04a6560a1f3b70fc8861609d8693c279b10154264a9fe9fc794
    name: kiali-dashboard
    type: application
    urls:
    - kiali-dashboard-0.0.2.tgz
    version: 0.0.2

Desired State Release

Full Stack Release Helm/Kubernetes

To manage an application stack holistically, a declaration is required. From this declaration, desired state can be calculated, i.e. what changes need to be made for an environment to align with the declaration. The tool used in this example is Helmsman; however, another tool, Helmfile, has fundamentally the same configuration constructs. Each gathers one or more Helm applications to create an application stack. When a state change is calculated, only the necessary components are updated.

graph TD
  subgraph Test
    subgraph stack1["Declaration"]
      subgraph app1["Helmchart"]
        serv1["service"]
        appt1["pod"]
      end
      subgraph app2["Helmchart"]
        serv2["service"]
        appp2["pod"]
      end
    end
  end

  subgraph Prod
   subgraph stack2["Declaration"]
      subgraph app3["Helmchart"]
        serv3["service"]
        appt3["pod"]
      end
      subgraph app4["Helmchart"]
        serv4["service"]
        appp4["pod"]
      end
    end
  end

  serv1 --> appt1
  serv2 --> appp2

  serv3 --> appt3
  serv4 --> appp4

classDef AppStack fill:LightBlue
class stack1,stack2 AppStack

classDef dotted stroke-dasharray: 2, 2
class stack1,stack2 dotted

classDef dashed stroke-dasharray: 5, 5
class app1,app2,app3,app4 dashed

Subsections of Desired State Release

Build Once, Deploy Many

CI Process for Declarative Release

The following example is Helmsman, but the same mechanism works for Helmfile also.

Using DRY principles, a single declaration of the application stack is used, with tokens applied for deploy-time environment variations.

metadata:
  scope: "cluster microservices"
  maintainer: "Jules Clements"

namespaces:
  %name_space%:
    protected: false

apps:

  pull:
    name: "docker-registry-pull-secret"
    description: "GitLab Registry Pull Secret"
    namespace: "%name_space%"
    enabled: true
    chart: "pull-secrets-0.0.1.tgz"
    version: "0.0.1"
    valuesFile: "pods/docker-registry-pull-secret.yaml"

  cdaf-ui:
    name: "cdaf-ui"
    description: "CDAF Published Site (Django)"
    namespace: "%name_space%"
    enabled: true
    chart: "public-ingress-0.1.4.tgz"
    version: "0.1.4"
    valuesFile: "pods/cdaf-ui.yaml"
    set:
      dockerconfigjson: "$DOCKER_CONFIG_JSON"

The build-time process uses the declaration to determine the Helm charts required at deploy time. These are downloaded and included in the package, which avoids having to manage registry access at deploy time and ensures the charts are immutable within the release package.

helm repo add $repo_name https://kool-aid.gitlab.io/helm
IFS=$'\n'
for chart in $(cat .cdaf/customRemote/${SOLUTION}.yaml | grep chart: | sort | uniq); do eval "${SOLUTIONROOT}/pull.sh $repo_name $chart"; done
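
The chart extraction pipeline above can be seen in isolation against a sample declaration; the declaration content below is illustrative:

```shell
# Sketch of the grep | sort | uniq chart extraction, run against a
# sample Helmsman declaration (content is illustrative).
cat > solution.yaml <<'EOF'
apps:
  pull:
    chart: "pull-secrets-0.0.1.tgz"
  cdaf-ui:
    chart: "public-ingress-0.1.4.tgz"
  other:
    chart: "public-ingress-0.1.4.tgz"
EOF

# Duplicate chart references collapse to a single download each
grep chart: solution.yaml | sort | uniq
```

Each unique chart line is then passed to the pull script, so a chart shared by several apps is only downloaded once.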

Build & Package

There is no “compiled” output for the source files described above, so the self-contained release package capability of Continuous Delivery Automation Framework (CDAF) is used to produce a portable, re-usable deployment artefact, i.e. build once, deploy many.

graph LR

  subgraph ci["Continuous Integration"]
    persist[(persist)]
  end

  release.ps1

  subgraph cd["Continuous Delivery"]
    test
    prod
  end

  persist -->
  release.ps1 --> test
  release.ps1 --> prod

classDef blue fill:#007FFF
class release.ps1 blue
 
classDef dashed stroke-dasharray: 5, 5
class ci,cd dashed

The deployment uses an environment argument, which is a symbolic link to the settings that need to be detokenised at deploy time, e.g.

./release.ps1 QA

Helmsman Deploy-Time

Built Once, Deployed Many

This example is the deploy time process for Helmsman, although it is fundamentally the same for Helmfile. The tokenised application stack declaration is de-tokenised to apply the correct name_space at deploy time.

helm.tsk

sed -i -- "s/name_space/*****/g" ranger.yaml

The resulting deployment:

helmsman --apply -f ranger.yaml ranger-chart
 _          _
| |        | |
| |__   ___| |_ __ ___  ___ _ __ ___   __ _ _ __
| '_ \ / _ \ | '_ ` _ \/ __| '_ ` _ \ / _` | '_ \
| | | |  __/ | | | | | \__ \ | | | | | (_| | | | |
|_| |_|\___|_|_| |_| |_|___/_| |_| |_|\__,_|_| |_|  version: v3.11.0

Helm-Charts-as-Code tool.
WARNING: helm diff not found, using kubectl diff

INFO: Parsed [[ ranger.yaml ]] successfully and found [ 1 ] apps
INFO: Validating desired state definition
INFO: Setting up kubectl
INFO: Setting up helm
INFO: Setting up namespaces
INFO: Getting chart information
INFO: Chart [ /solution/deploy/ranger-chart ] with version [ 0.1.0 ] was found locally.
INFO: Charts validated.
INFO: Preparing plan
INFO: Acquiring current Helm state from cluster
INFO: Checking if any Helmsman managed releases are no longer tracked by your desired state ...
INFO: No untracked releases found

NOTICE: -------- PLAN starts here --------------
NOTICE: Release [ ranger ] in namespace [ test ] will be installed using version [ 0.1.0 ] -- priority: 0
NOTICE: -------- PLAN ends here --------------

INFO: Executing plan
NOTICE: Install release [ ranger ] version [ 0.1.0 ] in namespace [ test ]
NOTICE: Release "ranger" does not exist. Installing it now.
NAME: ranger
LAST DEPLOYED: Sun Aug 7 03:42:51 2022
NAMESPACE: test
STATUS: deployed
REVISION: 1
NOTES:

1. Get the application URL by running these commands:
 export POD_NAME=$(kubectl get pods --namespace test -l "app.kubernetes.io/name=ranger-chart,app.kubernetes.io/instance=ranger" -o jsonpath="{.items[0].metadata.name}")
 export CONTAINER_PORT=$(kubectl get pod --namespace test $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
 echo "Visit http://127.0.0.1:8080 to use your application"
 kubectl --namespace test port-forward $POD_NAME 8080:$CONTAINER_PORT

NOTICE: Finished: Install release [ ranger ] version [ 0.1.0 ] in namespace [ test ]

DRY

Don't Repeat Yourself

The key to using Helm charts rather than simply authoring Kubernetes YAML definitions is the use of templates. This way a deployment pattern can be defined once, with only the deploy-time, application-specific values changing.

In the generated Helm template the health probes are hard coded; replace these with shared definitions, .Values.service.port and .Values.service.probeContext.

deployment.yaml

      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: {{ .Values.service.probeContext }}
              port: {{ .Values.service.port }}
          readinessProbe:
            httpGet:
              path: {{ .Values.service.probeContext }}
              port: {{ .Values.service.port }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}

The .Values.service.port is already defined in the generated values file, but .Values.service.probeContext is not, so add this to the values definition.

values.yaml

service:
  type: ClusterIP
  port: 8000
  probeContext: /

Now replace the single values file with a file for each application being deployed based on this pattern, and create additional app definitions in the Helmsman declaration.

ranger.yaml

apps:
  kestrel:
    name: "kestrel"
    description: "dotnet core Kestrel API"
    namespace: "name_space"
    enabled: true
    chart: "public-ingress-0.1.3.tgz"
    version: "0.1.3"
    valuesFile: "dockerhub-public/kestrel.yaml"

  fastapi:
    name: "fastapi"
    description: "Python Fast API"
    namespace: "name_space"
    enabled: true
    chart: "public-ingress-0.1.1.tgz"
    version: "0.1.1"
    valuesFile: "dockerhub-public/fastapi.yaml"
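
Each valuesFile then carries only the application-specific overrides; a sketch of one of the referenced files, with illustrative port and image values:

```yaml
# dockerhub-public/fastapi.yaml (illustrative values, not from the source repository)
image:
  repository: docker.io/cdaf/fastapi
  tag: "50"

service:
  port: 8001
  probeContext: /
```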

Helmsman Secrets

Sensitive Data Management

Define the secret in your chart with a substitution value.

secrets.yaml

apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: >-
    {{ .Values.dockerconfigjson }}

Define the property with no value. Note also the reference to the secret used to pull from the private registry.

values.yaml

replicaCount: 1

image:
  repository: docker.io/cdaf/cdaf
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "464"

imagePullSecrets: [{ name: dockerhub-secret }]
dockerconfigjson: ""

Define the environment variable to be substituted into the chart

ranger.yaml

metadata:
  scope: "cluster ranger"
  maintainer: "Jules Clements"

namespaces:
  name_space:
    protected: false

apps:
  cdaf-ui:
    name: "cdaf-ui"
    description: "cdaf-ui"
    namespace: "name_space"
    enabled: true
    chart: "cdaf-ui"
    version: "0.1.2"
    set:
      dockerconfigjson: "$DOCKER_CONFIG_JSON"

No change is required to the helmsman command line, as the change above will trigger Helmsman to use the environment variable.
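
Before Helmsman runs, the pipeline must export the variable. A sketch of how the dockerconfigjson value might be prepared, with placeholder registry credentials:

```shell
# Sketch of preparing the DOCKER_CONFIG_JSON variable that Helmsman
# substitutes into the chart; registry, user and token are placeholders.
auth=$(printf '%s' 'registry-user:registry-token' | base64 | tr -d '\n')
DOCKER_CONFIG_JSON=$(printf '{"auths":{"docker.io":{"auth":"%s"}}}' "$auth" | base64 | tr -d '\n')
export DOCKER_CONFIG_JSON

# Verify the value decodes to the expected Docker configuration
echo "$DOCKER_CONFIG_JSON" | base64 -d
```

The value is base64 encoded twice by design: the inner auth field holds the encoded credentials, and the whole JSON document is encoded again for the Secret data field.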

Helmsman Version Constraints

Helmsman Update Limitations

Some changes cannot be updated in place; an example of this is the service port. If this is changed, the chart version has to be updated or the existing deployment manually removed.
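
A minimal sketch of bumping the chart patch version to force a new release; the chart name and versions are illustrative:

```shell
# Sketch of a chart version bump; Helmsman treats the new version as a
# release to apply, working around the in-place update limitation.
cat > Chart.yaml <<'EOF'
apiVersion: v2
name: public-ingress
version: 0.1.4
EOF

# Increment the patch version
sed -i "s/^version: 0.1.4/version: 0.1.5/" Chart.yaml
grep version: Chart.yaml
```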