Terraform Kubernetes

Full Stack Release using Terraform

This approach implements the Autonomous Development, Authoritative Release principle to orchestrate a full stack release, i.e. the automated coordination of Infrastructure as Code, Configuration Management and Application deployment.

This is an alternative implementation to How to Helm, using Terraform instead of Helm, with the same core principles of runtime versioning and desired state, while also including the Kubernetes Infrastructure as Code, all in a single language: Terraform.

The Application Stack can be defined once, and deployed many times into separate namespaces, e.g. development, test and production.
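As a sketch of this build-once/deploy-many idea (the resource names and variable here are illustrative assumptions, not taken from the actual package), the stack definition can be parameterised by namespace so the same code deploys to each environment:

```hcl
# Illustrative sketch only: a single stack definition parameterised by namespace.
# The %name_space% token follows the detokenisation convention used later in this guide.
variable "name_space" { default = "%name_space%" }

resource "kubernetes_namespace" "stack" {
  metadata {
    name = var.name_space
  }
}

# Each workload is anchored to the selected namespace, so the one
# definition deploys into dev, test or production unchanged.
resource "kubernetes_service" "ingress" {
  metadata {
    name      = "ingress"
    namespace = kubernetes_namespace.stack.metadata[0].name
  }
  spec {
    selector = { app = "ui" }
    port {
      port = 80
    }
  }
}
```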

graph TD

  subgraph k8s["Kubernetes"]
    subgraph ns1["Dev namespace"]
      ns1-ingress["ingress"]
      subgraph ns1-pod-1["Pod"]
        ns1-con-a["container"]
      end
      subgraph ns1-pod-2["Pod"]
        ns1-con-b["container"]
        ns1-con-c["container"]
      end
    end
    subgraph ns2["Test namespace"]
      ns2-ingress["ingress"]
      subgraph ns2-pod-1["Pod"]
        ns2-con-a["container"]
      end
      subgraph ns2-pod-2["Pod"]
        ns2-con-b["container"]
        ns2-con-c["container"]
      end
    end
    subgraph ns3["Production namespace"]
      ns3-ingress["ingress"]
      subgraph ns3-pod-1["Pod"]
        ns3-con-a["container"]
      end
      subgraph ns3-pod-2["Pod"]
        ns3-con-b["container"]
        ns3-con-c["container"]
      end
    end
  end

  client --> ns1-ingress --> ns1-con-a
  ns1-ingress --> ns1-con-b --> ns1-con-c

  client --> ns2-ingress --> ns2-con-a
  ns2-ingress --> ns2-con-b --> ns2-con-c

  client --> ns3-ingress --> ns3-con-a
  ns3-ingress --> ns3-con-b --> ns3-con-c

classDef external fill:lightblue
class client external
 
classDef dashed stroke-dasharray: 5, 5
class ns1,ns2,ns3 dashed
 
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2,ns2-pod-1,ns2-pod-2,ns3-pod-1,ns3-pod-2 dotted

Subsections of Terraform Kubernetes

Manifest

Declare Container Deployment as Terraform Package

The key component of the package is the release manifest, which declares the component versions of the solution. The desired state engine (Terraform) ensures all components of the release align with the declaration in the manifest. These versions are added to your CDAF.solution file.

solutionName=kat
artifactPrefix=0.4

ui_image=cdaf/cdaf:572
api_image=cdaf/kestrel:ubuntu-22.04-14
fast_image=cdaf/fastapi:50

While the stack construction is the same in all environments, unique settings for each environment are defined in configuration management files, e.g. properties.cm. The properties management is covered in more detail in the Configuration Management section.

context    target  work_space      name_space  api_node_category  api_ip        ui_ip     
container  TEST    kat_test        kat-test    secondary          10.224.10.11  10.224.10.21  
container  PROD    kat_production  kat-prod    primary            10.224.10.10  10.224.10.20  

Next, build a release package…

Terraform Build

Immutable Release Package

The key construct of the Authoritative Release is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variations in Terraform dependencies, modules are not downloaded at deploy time; instead they are resolved at build time and packaged into an immutable release package. For a consistent way of working, the Terraform build process resolves and validates dependencies.

Build-time Module Resolution

Most Terraform module resolution approaches pull from source control (Git) or a registry at deploy time, which can require additional credential management, risks unexpected module changes (if tags are used) and exposes the deployment to network connectivity issues. This approach instead treats modules like software dependencies, resolving them at build time and building them into an all-in-one immutable package.

The following state.tf defines the modules and versions that are required:

terraform {
  backend "local" {}
}

module "azure_k8s" {
  source = "gitlab.com/hdc-group/azure-private-registry/k8s"
  version = "0.0.14"
}

The following build.tsk triggers module download from a private registry using credentials in TERRAFORM_REGISTRY_TOKEN; these credentials will not be required at deploy time.

Write-Host "[$TASK_NAME] Verify Version`n" -ForegroundColor Cyan
terraform --version

VARCHK

MAKDIR $env:APPDATA\terraform.d
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf '  "credentials": {'
Add-Content $conf '    "app.terraform.io": {'
Add-Content $conf "      `"token`": `"$env:TERRAFORM_REGISTRY_TOKEN`""
Add-Content $conf '    }'
Add-Content $conf '  }'
Add-Content $conf '}'
Get-Content $conf

Write-Host "[$TASK_NAME] Log the module registry details`n" -ForegroundColor Cyan
Get-Content state.tf

Write-Host "[$TASK_NAME] In a clean workspace, first init will download modules, then fail, ignore this and init again"
if ( ! ( Test-Path ./.terraform/modules/azurerm )) { IGNORE "terraform init -upgrade -input=false" }

Write-Host "[$TASK_NAME] Initialise with local state storage and download modules`n" -ForegroundColor Cyan
terraform init -upgrade -input=false


The trick to using the downloaded, local copy of the modules is to reference the opinionated location of resolved modules, i.e. ./.terraform/modules/${module_declaration_above}/${registry_name}, as per the following example:

module "azure_private_registry" {
  source            = "./.terraform/modules/azure_k8s/azure-private-registry"
  REGISTRY_SERVER   = var.REGISTRY_SERVER
  REGISTRY_USERNAME = var.REGISTRY_USERNAME
  REGISTRY_PASSWORD = var.REGISTRY_PASSWORD
}

Validation

Once all modules have been downloaded, syntax is then validated.

Write-Host "[$TASK_NAME] Validate Syntax`n" -ForegroundColor Cyan
terraform validate

Write-Host "[$TASK_NAME] Generate the graph to validate the plan`n" -ForegroundColor Cyan
terraform graph


Once validated, copy the modules and your .tf files to a release directory, as outlined below, with consideration of numeric token substitution.

Numeric Token Handling

All of the deploy-time files are copied into the release directory. Because tokens cannot be used during the build process, an arbitrary numeric is used, and this is then replaced in the resulting release directory. Tokenisation is covered in more detail in the following section.

Write-Host "[$TASK_NAME] Tokenise variable file`n" -ForegroundColor Cyan
REFRSH .terraform\modules\* ..\release\.terraform\modules\
VECOPY *".tf" ..\release
VECOPY *".json" ..\release
REPLAC ..\release\variables.tf '{ default = 3 }' '{ default = %agent_count% }'
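For illustration (the variable name is assumed from the token), the REPLAC step turns the build-time numeric back into a token, so a declaration that was built and validated as `variable "agent_count" { default = 3 }` appears in the release directory as:

```hcl
variable "agent_count" { default = %agent_count% }
```

The file is no longer valid HCL at this point; it becomes valid again once deploy-time detokenisation substitutes the environment-specific value.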


Release Package

The deploy-time components are then copied into the release package, based on the storeFor definition in your solution directory.

# Tokenised Terraform Files
release


The modules and helper scripts are then packed into a self-extracting release executable, as per the standard CDAF release build process.


Deploy Time

The build-time state.tf file is replaced at deploy time by your .tsk file, replacing the declaration of local storage and removing the build-time module dependencies.

echo "[$TASK_NAME] Replace Local State with Remote"
$remote_state = "state.tf"
Set-Content $remote_state 'terraform {'
Add-Content $remote_state '  backend "remote" {'
Add-Content $remote_state "    organization = `"${env:TERRAFORM_ORG}`""
Add-Content $remote_state '    workspaces {'
Add-Content $remote_state "      name = `"${work_space}`""
Add-Content $remote_state '    }'
Add-Content $remote_state '  }'
Add-Content $remote_state '}'
Get-Content $remote_state
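With illustrative values substituted (the organisation name here is an assumption; the workspace name is taken from the configuration table shown earlier), the generated deploy-time state.tf would look like:

```hcl
# Generated at deploy time; values shown are illustrative
terraform {
  backend "remote" {
    organization = "my-org"
    workspaces {
      name = "kat_test"
    }
  }
}
```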

Configuration Management

Tokens and Properties

To avoid a configuration file for each environment, and the inevitable drift between those files, a single, tokenised, definition is used.

variable "aks_work_space"   { default = "%aks_work_space%" }
variable "name_space"       { default = "%name_space%" }
variable "REGISTRY_KEY"     { default = "@REGISTRY_KEY@" }
variable "REGISTRY_KEY_SHA" { default = "@REGISTRY_KEY_SHA@" }

To de-tokenise this definition at deploy time, name/value pair files are used. This allows the settings to be decoupled from the complexity of the configuration file format.

If these were to be stored as separate files in source control, they would suffer the same drift challenge, so in source control, the settings are stored in a tabular format, which is compiled into the name/value files during the Continuous Integration process.

target  aks_work_space  name_space  REGISTRY_KEY       REGISTRY_KEY_SHA
TEST    aks_prep        test        $env:REGISTRY_KEY  FD6346C8432462ED2DBA6...
PROD    aks_prod        prod        $env:REGISTRY_KEY  CA3CBB1998E86F3237CA1...

Note: environment variables can be used for dynamic value replacement, most commonly used for secrets.
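As a sketch (the file name and exact output format are assumptions), compiling the TEST row of the table above would produce a name/value pair file such as:

```
# TEST target properties (illustrative)
aks_work_space=aks_prep
name_space=test
REGISTRY_KEY=$env:REGISTRY_KEY
REGISTRY_KEY_SHA=FD6346C8432462ED2DBA6...
```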

These human readable configuration management tables are transformed into a computer friendly format and included in the release package (release.ps1). The REGISTRY_KEY and REGISTRY_KEY_SHA are used for Variable Validation, creating a properties.varchk as follows:

env:REGISTRY_KEY=$env:REGISTRY_KEY_SHA

Write the REGISTRY_KEY_SHA as a container environment variable, so that when the SHA changes, the container is automatically restarted to pick up the environment variable change, and hence the corresponding secret is also reloaded.

env {
  name = "REGISTRY_KEY_SHA"
  value = var.REGISTRY_KEY_SHA
}

An additional benefit of this approach is that when diagnosing an issue, the SHA can be used as an indicative secret verification.
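The tasks in this guide are PowerShell, but the verification idea can be sketched in plain shell: hash the secret in hand and compare it with the SHA recorded in configuration management (assumptions: SHA-256 and upper-case hex, matching the table above).

```shell
# Hypothetical check: compare the SHA of the secret actually in use
# against the SHA recorded in configuration management.
secret="example-secret"   # placeholder; the real value comes from the environment
actual=$(printf '%s' "$secret" | sha256sum | awk '{print toupper($1)}')

# In practice, "recorded" is the REGISTRY_KEY_SHA from the properties table;
# a mismatch indicates the running container holds a stale secret.
recorded="$actual"
if [ "$actual" = "$recorded" ]; then
  echo "secret matches recorded SHA"
else
  echo "secret does NOT match recorded SHA"
fi
```

Because only the SHA is compared, the secret itself never needs to be printed during diagnosis.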

Deploy

Deploy-time Detokenisation

To support the build-once/deploy-many model, the environment specific values are injected and the release is then deployed. Note that the release is immutable; any change to any component requires a new release to be created, eliminating cherry picking. The tasksRun.tsk performs two levels of detokenisation: the first applies environment specific settings, the second applies any solution level declarations.

Write-Host "[$TASK_NAME] Generic Properties Detokenisation`n" -ForegroundColor Cyan
Get-Content variables.tf
DETOKN variables.tf
DETOKN variables.tf $WORKSPACE\manifest.txt

Environment (TARGET) specific de-tokenisation is shown in blue, and solution level de-tokenisation in green:


Terraform Cloud is being used to perform state management. To avoid false negative reporting on Terraform apply, the operation is performed in a CMD shell.

echo "[$TASK_NAME] Azure Secrets are stored in the back-end, the token opens access to these"
MAKDIR $env:APPDATA\terraform.d
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf '  "credentials": {'
Add-Content $conf '    "app.terraform.io": {'
Add-Content $conf "      `"token`": `"$env:TERRAFORM_TOKEN`""
Add-Content $conf '    }'
Add-Content $conf '  }'
Add-Content $conf '}'

echo "[$TASK_NAME] Replace Local State with Remote"
$remote_state = "state.tf"
Set-Content $remote_state 'terraform {'
Add-Content $remote_state '  backend "remote" {'
Add-Content $remote_state "    organization = `"${env:TERRAFORM_ORG}`""
Add-Content $remote_state '    workspaces {'
Add-Content $remote_state "      name = `"${work_space}`""
Add-Content $remote_state '    }'
Add-Content $remote_state '  }'
Add-Content $remote_state '}'

Write-Host "[$TASK_NAME] Initialise Remote State`n" -ForegroundColor Cyan
terraform init -upgrade -input=false

EXECMD "terraform $OPT_ARG"


Feedback Loop

Realising the Feedback Loop

Based on Realising the Feedback Loop, once the package has been promoted to its final stage, it is then pushed to the artefact store.


This example uses Azure DevOps (ADO) with the az artifacts extension; see the example push.tsk.

Write-Host "[$TASK_NAME] Verify deployable artefact is available`n"
$package_name = (Get-Item "$(PWD)\release.ps1" -ErrorAction SilentlyContinue).FullName
if ( ! ( $package_name )) { ERRMSG "[PACKAGE_NOT_FOUND] $(PWD)\release.ps1 not found!" 9996 }

Write-Host "[$TASK_NAME] Verify Azure DevOps PAT is set correctly`n"
VARCHK push.varchk

PROPLD manifest.txt
$version = ${artifactPrefix} + '.' + ${BUILDNUMBER}

Write-Host "[$TASK_NAME] Push package using the following settings"
Write-Host "[$TASK_NAME]   `$ado_org      = $ado_org"
Write-Host "[$TASK_NAME]   `$ado_project  = $ado_project"
Write-Host "[$TASK_NAME]   `$ado_feed     = $ado_feed"
Write-Host "[$TASK_NAME]   `$SOLUTION     = $SOLUTION"
Write-Host "[$TASK_NAME]   `$version      = $version"
Write-Host "[$TASK_NAME]   `$package_name = $package_name"

az artifacts universal publish --organization $ado_org --project $ado_project --scope project --feed $ado_feed --name $SOLUTION --version $version --path $package_name

The package can be retrieved using the semantic version, or latest (current production).
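For example, a version-pinned retrieval can mirror the publish command above with `download` (the variables are the same ones assumed by the push step):

```
az artifacts universal download --organization $ado_org --project $ado_project --scope project --feed $ado_feed --name $SOLUTION --version $version --path .
```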


To see how this can be consumed in a Release Train approach, see Terraform Cloud.