This is an alternative implementation to Terraform Application Stack, using Helm instead of Terraform, but with the same core principles of runtime versioning and desired state.
The Application Stack can be defined once, and deployed many times into separate namespaces, e.g. development, test and production.
graph TD
subgraph k8s["Kubernetes"]
subgraph ns1["Dev namespace"]
ns1-ingress["ingress"]
subgraph ns1-pod-1["Pod"]
ns1-con-a["container"]
end
subgraph ns1-pod-2["Pod"]
ns1-con-b["container"]
ns1-con-c["container"]
end
end
subgraph ns2["Test namespace"]
ns2-ingress["ingress"]
subgraph ns2-pod-1["Pod"]
ns2-con-a["container"]
end
subgraph ns2-pod-2["Pod"]
ns2-con-b["container"]
ns2-con-c["container"]
end
end
subgraph ns3["Production namespace"]
ns3-ingress["ingress"]
subgraph ns3-pod-1["Pod"]
ns3-con-a["container"]
end
subgraph ns3-pod-2["Pod"]
ns3-con-b["container"]
ns3-con-c["container"]
end
end
end
client --> ns1-ingress --> ns1-con-a
ns1-ingress --> ns1-con-b --> ns1-con-c
client --> ns2-ingress --> ns2-con-a
ns2-ingress --> ns2-con-b --> ns2-con-c
client --> ns3-ingress --> ns3-con-a
ns3-ingress --> ns3-con-b --> ns3-con-c
classDef external fill:lightblue
class client external
classDef dashed stroke-dasharray: 5, 5
class ns1,ns2,ns3 dashed
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2,ns2-pod-1,ns2-pod-2,ns3-pod-1,ns3-pod-2 dotted
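The define-once, deploy-many pattern shown above can be sketched with plain Helm; the chart and release names here are illustrative:

```shell
# Release the same chart into separate namespaces (names are illustrative)
helm install app-dev ./app-chart --namespace dev --create-namespace
helm install app-test ./app-chart --namespace test --create-namespace
helm install app-prod ./app-chart --namespace prod --create-namespace
```

Each release carries its own values, so environment-specific settings can differ while the chart itself stays identical.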
Kubernetes configuration can be performed via imperative command line or declarative YAML files. OpenShift also provides a user interface for manual configuration of the Kubernetes cluster, which is ideal for discovery and development purposes, but is not sustainable for a production solution.
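The two styles can be contrasted with standard kubectl commands (the image and file name are illustrative):

```shell
# Imperative: issue a command that describes an action
kubectl create deployment nginx --image=nginx:1.16.0

# Declarative: describe the desired result and let Kubernetes reconcile to it
kubectl apply -f deployment.yaml
```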
While Kubernetes YAML definitions are declarative, it is laborious to maintain multiple copies for similar deployment patterns and multiple target environments. The most fundamental declaration is a Deployment, which defines which containers are to be deployed.
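A minimal Deployment declaration might look like the following sketch (names, image and port are illustrative; the fields mirror those templated by the Helm chart later in this article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.0
        ports:
        - containerPort: 8080
```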
To avoid proliferation of YAML definitions, and provide flexibility to alter deployment specific aspects, Helm was introduced. Helm provides a template for deployments, which can be re-used for multiple applications across multiple environments.
graph TD
subgraph test
subgraph app1
serv1["service"]
appt1["pod"]
end
subgraph app2
serv2["service"]
appp2["pod"]
end
end
subgraph prod
subgraph app3
serv3["service"]
appt3["pod"]
end
subgraph app4
serv4["service"]
appp4["pod"]
end
end
serv1 --> appt1
serv2 --> appp2
serv3 --> appt3
serv4 --> appp4
classDef dotted stroke-dasharray: 2, 2
class test,prod dotted
classDef dashed stroke-dasharray: 5, 5
class app1,app2,app3,app4 dashed
Deploying each application, in each environment, requires imperative knowledge of the steps needed to achieve the desired outcome; see Desired State for a declarative alternative to imperative releases.
The following example is relatively complicated and doesn’t serve well as a learning exercise.
Use the Helm Getting Started material to create a template which has all the appropriate structure and some example charts.
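The scaffold can be generated with the standard Helm command (the chart name is illustrative):

```shell
# Scaffold a chart with the standard structure and example templates
helm create app-chart
```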
Note
The template does not work in OpenShift because the root-less containers do not allow Nginx to bind to port 80.
How Helm Works
Using the previous YAML example, all of the elements that we want to re-use across multiple apps, or configure differently for progressive environments, are defined as properties. These templated properties form the basis of the files that make up the template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "Chart.fullname" . }}
  labels:
    {{- include "Chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "Chart.labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "Chart.labels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          ports:
            - containerPort: {{ .Values.service.port }}
Two files are used with the templates to apply deploy-time settings. The first, Chart.yaml, is included with the template and implements the DRY (Don't Repeat Yourself) principle: literals that are applied repeatedly across the template are defined here once.
Chart.yaml
apiVersion: v2
name: nginx-container
fullname: nginx-deployment
description: A Helm chart for Kubernetes
appVersion: "1.16.0"
labels:
  app: nginx
A values file is used at deploy time to allow the re-use of the template across multiple applications, and environments.
To avoid the creation of multiple values YAML files, and the inherent structural drift of those files, a single file should be defined with tokenised settings. The CDAF configuration management feature can be used to provide a human readable settings definition which gives an abstraction from the complexity of the Helm files.
example.cm
context target replicaCount port
container LINUX 1 8081
container dev 1 8080
container TEST 2 8001
container PROD 2 8000
Now the values YAML contains tokens for deploy time replacement.
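A sketch of what the tokenised values file might look like, assuming CDAF's %token% replacement convention and the columns from example.cm above:

```yaml
# values.yaml with CDAF tokens, resolved per target environment at deploy time
replicaCount: %replicaCount%
service:
  type: ClusterIP
  port: %port%
```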
To provide Helm charts as a re-usable asset, Helm provides versioning and packaging. The resulting versioned packages can be consumed by multiple applications and environments. To ensure the release package is consistent and repeatable, the Helm packages are downloaded at build (CI) and not during deployment (CD). The packages are included in the release package so there are no external dependencies at deploy time.
The Helm registry
Helm command line can create the packaged templates and the required index file.
helm package $chart_name --destination public
helm repo index public
The resulting package files and index.yaml are placed on a web server to provide the repository service, e.g.
apiVersion: v1
entries:
  internal-service:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.763749822Z"
    description: Use Values for Container Name
    digest: 9a0cf4c0989e3921bd9b4d2e982417c3eac04f5863feb0439ad52a9f1d6ffeb9
    name: internal-service
    type: application
    urls:
    - internal-service-0.0.1.tgz
    version: 0.0.1
  kiali-dashboard:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-11T08:51:15.764037805Z"
    description: Use Values for Container Name
    digest: aa65089080e3e04a6560a1f3b70fc8861609d8693c279b10154264a9fe9fc794
    name: kiali-dashboard
    type: application
    urls:
    - kiali-dashboard-0.0.2.tgz
    version: 0.0.2
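Consuming the repository at build (CI) time might look like the following sketch; the repository name and URL are illustrative:

```shell
# Resolve charts at build time so the release package has no deploy-time dependencies
helm repo add internal https://charts.example.internal/public
helm pull internal/internal-service --version 0.0.1
helm pull internal/kiali-dashboard --version 0.0.2
```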
To manage an application stack holistically, a declaration is required. From this declaration, the desired state can be calculated, i.e. what changes need to be made for an environment to be aligned with the declaration. The tool used in this example is Helmsman; another tool, Helmfile, has fundamentally the same configuration constructs. Each gathers one or more Helm applications to create an application stack. Only the necessary components are updated when a change is determined, based on the calculated state change.
graph TD
subgraph Test
subgraph stack1["Declaration"]
subgraph app1["Helmchart"]
serv1["service"]
appt1["pod"]
end
subgraph app2["Helmchart"]
serv2["service"]
appp2["pod"]
end
end
end
subgraph Prod
subgraph stack2["Declaration"]
subgraph app3["Helmchart"]
serv3["service"]
appt3["pod"]
end
subgraph app4["Helmchart"]
serv4["service"]
appp4["pod"]
end
end
end
serv1 --> appt1
serv2 --> appp2
serv3 --> appt3
serv4 --> appp4
classDef AppStack fill:LightBlue
class stack1,stack2 AppStack
classDef dotted stroke-dasharray: 2, 2
class stack1,stack2 dotted
classDef dashed stroke-dasharray: 5, 5
class app1,app2,app3,app4 dashed
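A minimal Helmsman desired state file might look like this sketch (the app and chart names are illustrative; name_space is the token detokenised at deploy time, as shown later):

```yaml
# ranger.yaml : Helmsman desired state file (illustrative)
namespaces:
  name_space:
    protected: false

apps:
  ranger:
    namespace: name_space
    enabled: true
    chart: ./ranger-chart
    version: 0.1.0
```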
The build-time process uses the declaration to determine the Helm charts that are required at deploy time. These are downloaded and included in the package, which has the advantage of not having to manage registry access at deploy time, and ensures the charts are immutable within the release package.
There is no “compiled” output for the source files described above, so the self-contained release package capability of Continuous Delivery Automation Framework (CDAF) is used to produce a portable, re-usable deployment artefact, i.e. build once, deploy many.
graph LR
subgraph ci["Continuous Integration"]
persist[(persist)]
end
release.ps1
subgraph cd["Continuous Delivery"]
test
prod
end
persist --> release.ps1 --> test
release.ps1 --> prod
classDef blue fill:#007FFF
class release.ps1 blue
classDef dashed stroke-dasharray: 5, 5
class ci,cd dashed
The deployment uses an Environment argument, which acts as a symbolic link to the settings that need to be detokenised at deploy time, e.g.
This example is the deploy time process for Helmsman, although it is fundamentally the same for Helmfile. The tokenised application stack declaration is de-tokenised to apply the correct name_space at deploy time.
helm.tsk
sed -i -- "s/name_space/*****/g" ranger.yaml
The resulting deployment:
helmsman --apply -f ranger.yaml ranger-chart
_ _
| | | |
| |__ ___| |_ __ ___ ___ _ __ ___ __ _ _ __
| '_ \ / _ \ | '_ ` _ \/ __| '_ ` _ \ / _` | '_ \
| | | | __/ | | | | | \__ \ | | | | | (_| | | | |
|_| |_|\___|_|_| |_| |_|___/_| |_| |_|\__,_|_| |_| version: v3.11.0
Helm-Charts-as-Code tool.
WARNING: helm diff not found, using kubectl diff
INFO: Parsed [[ ranger.yaml ]] successfully and found [ 1 ] apps
INFO: Validating desired state definition
INFO: Setting up kubectl
INFO: Setting up helm
INFO: Setting up namespaces
INFO: Getting chart information
INFO: Chart [ /solution/deploy/ranger-chart ] with version [ 0.1.0 ] was found locally.
INFO: Charts validated.
INFO: Preparing plan
INFO: Acquiring current Helm state from cluster
INFO: Checking if any Helmsman managed releases are no longer tracked by your desired state ...
INFO: No untracked releases found
NOTICE: -------- PLAN starts here --------------
NOTICE: Release [ ranger ] in namespace [ test ] will be installed using version [ 0.1.0 ] -- priority: 0
NOTICE: -------- PLAN ends here --------------
INFO: Executing plan
NOTICE: Install release [ ranger ] version [ 0.1.0 ] in namespace [ test ]
NOTICE: Release "ranger" does not exist. Installing it now.
NAME: ranger
LAST DEPLOYED: Sun Aug 7 03:42:51 2022
NAMESPACE: test
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace test -l "app.kubernetes.io/name=ranger-chart,app.kubernetes.io/instance=ranger" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace test $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace test port-forward $POD_NAME 8080:$CONTAINER_PORT
NOTICE: Finished: Install release [ ranger ] version [ 0.1.0 ] in namespace [ test ]
The key to using Helm charts rather than simply authoring Kubernetes YAML definitions is the use of templates. This way a deployment pattern can be defined once, with only the deploy time, application specific, values being changed.
In the generated Helm template the health probes are hard-coded; replace these with shared definitions, .Values.service.port & .Values.service.probeContext.
The .Values.service.port is already defined in the generated values file, but .Values.service.probeContext is not, so add this to the values definition.
values.yaml
service:
  type: ClusterIP
  port: 8000
  probeContext: /
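The probes in the template can then reference these shared values; a sketch, assuming the livenessProbe/readinessProbe layout generated by helm create:

```yaml
livenessProbe:
  httpGet:
    path: {{ .Values.service.probeContext }}
    port: {{ .Values.service.port }}
readinessProbe:
  httpGet:
    path: {{ .Values.service.probeContext }}
    port: {{ .Values.service.port }}
```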
Now replace the single values file with a file for each application being deployed based on this pattern, and create the additional app definitions in the Helmsman declaration.
Some changes cannot be updated in place; an example of this is the service port. If this is changed, the chart version has to be updated or the existing deployment manually removed.