Terraform Cloud

Full Stack Release using Terraform Cloud

This Release Train extends the Terraform Kubernetes authoritative release, combining the application stack deployment with the Infrastructure-as-Code solution.

graph TD
  client["🌐"]:::transparent

  apim["API Gateway"]

  subgraph k8s["Kubernetes"]
    subgraph ns1["Dev namespace"]
      ns1-ingress["ingress"]
      subgraph ns1-pod-1["Pod"]
        ns1-con-a["container"]
      end
      subgraph ns1-pod-2["Pod"]
        ns1-con-b["container"]
        ns1-con-c["container"]
      end
    end
  end

  client -->
  apim -->
  ns1-ingress --> ns1-con-a
  ns1-ingress --> 
  ns1-con-b --> ns1-con-c

classDef external fill:lightblue
class client external
 
classDef dashed stroke-dasharray: 5, 5
class ns1,ns2,ns3 dashed
 
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2,ns2-pod-1,ns2-pod-2,ns3-pod-1,ns3-pod-2 dotted

Each component publishes a self-contained release package to the Azure DevOps (ADO) artefact store. The ADO Release orchestrates these package deployments for each environment, ensuring the complete stack is promoted through each environment with aligned package versions.


graph LR

  subgraph Components
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end
  subgraph Infrastructure
    Abuild["Build"] -->
    Atest["Test"] -->
    Apublish["Publish"]
  end

  subgraph Release
    TEST
    PROD
  end

  store[(ADO Store)]

  Apublish --> store
  Spublish --> store
  store --> TEST
  TEST --> PROD

classDef release fill:lightgreen
class TEST,PROD release

Subsections of Terraform Cloud

Manifest

Declare Container Deployment as Terraform Package

The key component of the package is the release manifest, which declares the component versions of the solution. The desired state engine (Terraform) ensures all components for the release align with the declaration in the manifest. These are added to your CDAF.solution file.

solutionName=kat
artifactPrefix=0.4

ui_image=cdaf/cdaf:572
api_image=cdaf/kestrel:ubuntu-22.04-14
fast_image=cdaf/fastapi:50
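These manifest values can then be referenced as tokenised Terraform variables, following the %token% convention used elsewhere in this solution (a sketch; the variable names simply mirror the manifest entries above):

```hcl
variable "ui_image"   { default = "%ui_image%" }
variable "api_image"  { default = "%api_image%" }
variable "fast_image" { default = "%fast_image%" }
```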

While the stack construction is the same in all environments, unique settings for each environment are defined in configuration management files, e.g. properties.cm. The properties management is covered in more detail in the Configuration Management section.

context    target  work_space      name_space  api_node_category  api_ip        ui_ip     
container  TEST    kat_test        kat-test    secondary          10.224.10.11  10.224.10.21  
container  PROD    kat_production  kat-prod    primary            10.224.10.10  10.224.10.20  

Next, build a release package…

Terraform Build

Immutable Release Package

The key construct for the Authoritative Release is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variations in Terraform dependencies, modules are not downloaded at deploy time; instead they are resolved at build time and packaged into an immutable release package. For a consistent way of working, the Terraform build process resolves and validates dependencies.

Build-time Module Resolution

Most Terraform module resolution approaches pull from source control (Git) or a registry at deploy time, which can require additional credential management, risks unexpected module changes (if tags are used) and introduces potential network connectivity issues. This approach instead treats modules like software dependencies, resolving them at build time and packaging them into an all-in-one immutable package.

The following state.tf defines the modules and versions that are required.

terraform {
  backend "local" {}
}

module "stack_modules" {
  source  = "app.terraform.io/example/modules/azurerm"
  version = "0.2.0"
}

module "stack_components" {
  source  = "app.terraform.io/example/components/azurerm"
  version = "0.1.3"
}

The following build.tsk triggers module download from a private registry using credentials in TERRAFORM_REGISTRY_TOKEN; these credentials will not be required at deploy time.

Write-Host "[$TASK_NAME] Verify Version`n" -ForegroundColor Cyan
terraform --version

VARCHK

MAKDIR $env:APPDATA\terraform.d
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf '  "credentials": {'
Add-Content $conf '    "app.terraform.io": {'
Add-Content $conf "      `"token`": `"$env:TERRAFORM_REGISTRY_TOKEN`""
Add-Content $conf '    }'
Add-Content $conf '  }'
Add-Content $conf '}'
Get-Content $conf

Write-Host "[$TASK_NAME] Log the module registry details`n" -ForegroundColor Cyan
Get-Content state.tf

Write-Host "[$TASK_NAME] In a clean workspace, first init will download modules, then fail, ignore this and init again"
if ( ! ( Test-Path ./.terraform/modules/azurerm )) { IGNORE "terraform init -upgrade -input=false" }

Write-Host "[$TASK_NAME] Initialise with local state storage and download modules`n" -ForegroundColor Cyan
terraform init -upgrade -input=false
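The credentials file written by the task above follows the Terraform CLI credentials format (token value shown as a placeholder):

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "<TERRAFORM_REGISTRY_TOKEN>"
    }
  }
}
```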


Validation

Once all modules have been downloaded, the syntax is validated.

Write-Host "[$TASK_NAME] Validate Syntax`n" -ForegroundColor Cyan
terraform validate

Write-Host "[$TASK_NAME] Generate the graph to validate the plan`n" -ForegroundColor Cyan
terraform graph


Numeric Token Handling

All the deploy-time files are copied into the release directory. Because tokens cannot be used during the build process, an arbitrary numeric is used, which is then replaced in the resulting release directory. Tokenisation is covered in more detail in the following section.

Write-Host "[$TASK_NAME] Tokenise variable file`n" -ForegroundColor Cyan
REFRSH .terraform\modules\* ..\release\.terraform\modules\
VECOPY *".tf" ..\release
VECOPY *".json" ..\release
REPLAC ..\release\variables.tf           '{ default = 3 }'  '{ default = %agent_count% }'
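REPLAC is a CDAF helper; the substitution it performs can be illustrated with a short Python sketch (file name and token as per the REPLAC example above, not the CDAF implementation):

```python
from pathlib import Path

# Build-time file carries an arbitrary numeric default, e.g.:
#   variable "agent_count" { default = 3 }
variables = Path("variables.tf")
variables.write_text('variable "agent_count" { default = 3 }\n')

# Replace the arbitrary numeric with the deploy-time token,
# mirroring what REPLAC does in the release directory
text = variables.read_text()
variables.write_text(text.replace("{ default = 3 }", "{ default = %agent_count% }"))
```

The resulting file then contains the %agent_count% token, ready for deploy-time detokenisation.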


Release Package

The deploy-time components are then copied into the release package, based on the storeFor definition in your solution directory.

# Tokenised Terraform Files
release


The modules and helper scripts are then packed into a self-extracting release executable as per the standard CDAF release build process.


Configuration Management

Tokens and Properties

To avoid a configuration file for each environment, and the inevitable drift between those files, a single, tokenised, definition is used.

variable "aks_work_space"   { default = "%aks_work_space%" }
variable "name_space"       { default = "%name_space%" }
variable "REGISTRY_KEY"     { default = "@REGISTRY_KEY@" }
variable "REGISTRY_KEY_SHA" { default = "@REGISTRY_KEY_SHA@" }

To de-tokenise this definition at deploy time, name/value pair files are used. This allows the settings to be decoupled from the complexity of the configuration file format.

If these were to be stored as separate files in source control, they would suffer the same drift challenge, so in source control, the settings are stored in a tabular format, which is compiled into the name/value files during the Continuous Integration process.

target  aks_work_space  name_space  REGISTRY_KEY       REGISTRY_KEY_SHA
TEST    aks_prep        test        $env:REGISTRY_KEY  FD6346C8432462ED2DBA6...
PROD    aks_prod        prod        $env:REGISTRY_KEY  CA3CBB1998E86F3237CA1...

Note: environment variables can be used for dynamic value replacement, most commonly used for secrets.
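The compilation of the tabular definition into per-target name/value files can be sketched in Python (an illustration of the concept, not the CDAF tooling):

```python
# Compile a tabular CM definition into one name/value properties
# file per target, mirroring the TEST/PROD table above
table = """\
target  aks_work_space  name_space
TEST    aks_prep        test
PROD    aks_prod        prod
"""

lines = table.splitlines()
headers = lines[0].split()
for row in lines[1:]:
    values = dict(zip(headers, row.split()))
    target = values.pop("target")
    # e.g. TEST.properties containing name=value pairs
    with open(f"{target}.properties", "w") as f:
        for name, value in values.items():
            f.write(f"{name}={value}\n")
```

Each environment then receives a flat name/value file, independent of the configuration file format being detokenised.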

These human-readable configuration management tables are transformed to a computer-friendly format and included in the release package (release.ps1). The REGISTRY_KEY and REGISTRY_KEY_SHA are used for Variable Validation, creating a properties.varchk as follows:

env:REGISTRY_KEY=$env:REGISTRY_KEY_SHA

Writing the REGISTRY_KEY_SHA as a container environment variable means that when the SHA changes, the container is automatically restarted to pick up the environment variable change, and hence the corresponding secret is also reloaded.

env {
  name = "REGISTRY_KEY_SHA"
  value = var.REGISTRY_KEY_SHA
}

An additional benefit of this approach is that when diagnosing an issue, the SHA can be used as an indicative secret verification. How these are consumed is described later in the deploy section.
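As an illustration of that indicative verification, the live secret's digest can be compared with the recorded SHA (assuming here a SHA-256 hex digest; the actual algorithm used by the varchk process may differ):

```python
import hashlib

def secret_fingerprint(secret: str) -> str:
    """Return an indicative fingerprint of a secret without revealing it."""
    return hashlib.sha256(secret.encode()).hexdigest().upper()

# Recorded at CI time in the CM table (illustrative value)
recorded_sha = secret_fingerprint("example-registry-key")

# At diagnosis time, confirm the deployed secret matches the record
assert secret_fingerprint("example-registry-key") == recorded_sha
```

The digest confirms whether the expected secret is in place without exposing the secret itself.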

Release

Release Construction

The release combines the Infrastructure-as-Code (IaC) Continuous Integration (CI) output with the application components from Terraform Authoritative Release. The application authoritative release package (in green below) declares the image versions to be deployed to the infrastructure provided by the IaC release package.

graph LR
  Key["Legend<br/>Blue - IaC & CM<br/>Green - Application Stack"]

  subgraph ado["Azure DevOps"]
    git[(Git)]
    build-artefact[(Build)]
    iac["release.ps1"]
    package-artefact[(Artifacts)]
    app["release.ps1"]
  end

  subgraph az["Azure"]
    qa
    pp
    pr
  end

  registry[(Docker Registry)]

  git --CI--> build-artefact
  build-artefact --CD--> iac

  package-artefact --CD--> app

  registry -. "pull image" .-> qa
  app -. "terraform apply" .-> qa
  iac -. "terraform apply" .-> qa

  classDef infra fill:LightBlue
  class iac,az infra

  classDef app-stack fill:LightGreen
  class registry,app app-stack

In this example, the application release pipeline only deploys to the development environment to verify the package, and then pushes to the artefact store


At deploy time, the package is pulled from this store by its semantic version, as declared in the solution manifest, CDAF.solution.


artifactPrefix=0.5
productName=Azure Terraform for Kubernetes
solutionName=azt

kat_release=0.4.80

The two release artefacts are promoted together through the pipeline.


Intermediary

Terraform Cloud intermediary

The deployment itself is performed via the Terraform Cloud intermediary, which decouples the configuration management and provides state storage and execution processing.


An important aspect of the intermediary's function is to store dynamic outputs. For example, when the Infrastructure-as-Code solution provisions a Kubernetes cluster, the dynamically created configuration is stored as outputs.


The outputs are made available to the subsequent application deployment process.


The application components consume the state information that has been shared.

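A common way to consume such shared outputs is a remote state data source; a minimal sketch, assuming an illustrative organisation and workspace name (example and kat_infrastructure are placeholders):

```hcl
data "terraform_remote_state" "infrastructure" {
  backend = "remote"
  config = {
    organization = "example"
    workspaces = {
      name = "kat_infrastructure"
    }
  }
}

# e.g. cluster details provided by the IaC release
locals {
  cluster_name = data.terraform_remote_state.infrastructure.outputs.cluster_name
}
```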

Deploy

Deploy-time Detokenisation

The configuration management is consumed at deploy time.

Deployment Mechanics

To support the build-once/deploy-many model, the environment-specific values are injected and then deployed for the release. Note that the release is immutable; any change to any component requires a new release to be created, eliminating cherry-picking. The tasksRun.tsk performs multiple levels of detokenisation: the first applies environment-specific settings, the second applies any solution-level declarations, then cluster, group/region and non-secret elements of the credentials.

Write-Host "[$TASK_NAME] Generic Properties Detokenisation`n" -ForegroundColor Cyan
Get-Content variables.tf
DETOKN variables.tf

Write-Host "[$TASK_NAME] Custom Properties Detokenisation`n" -ForegroundColor Cyan
DETOKN variables.tf $azure_groups
DETOKN variables.tf $azure_credentials reveal

Environment (TARGET) specific de-tokenisation is shown in blue, and solution level de-tokenisation in green.

Cluster de-tokenisation is shown in blue, group/region de-tokenisation in green, and non-secret elements of the credentials in orange.

Terraform Cloud is being used to perform state management. To avoid false negative reporting on Terraform apply, the operation is performed in a CMD shell.

Write-Host "[$TASK_NAME] Azure Secrets are stored in the back-end, the token opens access to these"
MAKDIR "$env:APPDATA\terraform.d"
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf 'credentials "app.terraform.io" {'
Add-Content $conf "  token = `"$env:TERRAFORM_TOKEN`""
Add-Content $conf '}'

Write-Host "[$TASK_NAME] Replace Local State with Remote, load env_tag from $azure_groups"
PROPLD $azure_groups
$remote_state = "state.tf"
Set-Content $remote_state 'terraform {'
Add-Content $remote_state '  backend "remote" {'
Add-Content $remote_state "    organization = `"${env:TERRAFORM_ORG}`""
Add-Content $remote_state '    workspaces {'
Add-Content $remote_state "      name = `"${SOLUTION}_${resource_group}`""
Add-Content $remote_state '    }'
Add-Content $remote_state '  }'
Add-Content $remote_state '}'

terraform init -upgrade -input=false

Write-Host "[$TASK_NAME] Default action is plan`n" -ForegroundColor Cyan
if ( ! $OPT_ARG ) { $OPT_ARG = 'plan' }
EXECMD "terraform $OPT_ARG"
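The state.tf generated by the script above takes the following shape (organisation and workspace shown as placeholders for the resolved values):

```hcl
terraform {
  backend "remote" {
    organization = "<TERRAFORM_ORG>"
    workspaces {
      name = "<SOLUTION>_<resource_group>"
    }
  }
}
```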


Once the infrastructure has been deployed, the application components are installed. The release package is downloaded (in this example a container with the AZ extensions pre-installed is used) and then run for the environment.


Feedback Loop

Realising the Feedback Loop

Based on Realising the Feedback Loop, once the package has been promoted to its last stage, it is then pushed to the artefact store.


In this example, Azure DevOps (ADO) is used with the az artifacts extension; see the example push.tsk.

Write-Host "[$TASK_NAME] Verify deployable artefact is available`n"
$package_name = (Get-Item "$(PWD)\release.ps1" -ErrorAction SilentlyContinue).FullName
if ( ! ( $package_name )) { ERRMSG "[PACKAGE_NOT_FOUND] $(PWD)\release.ps1 not found!" 9994 }

Write-Host "[$TASK_NAME] Verify Azure DevOps PAT is set correctly`n"
VARCHK push.varchk

PROPLD manifest.txt
$version = ${artifactPrefix} + '.' + ${BUILDNUMBER}

Write-Host "[$TASK_NAME] Push $SOLUTION release package:"
Write-Host "[$TASK_NAME]   `$ado_org      = $ado_org"
Write-Host "[$TASK_NAME]   `$ado_project  = $ado_project"
Write-Host "[$TASK_NAME]   `$ado_feed     = $ado_feed"
Write-Host "[$TASK_NAME]   `$SOLUTION     = $SOLUTION"
Write-Host "[$TASK_NAME]   `$version      = $version"
Write-Host "[$TASK_NAME]   `$package_name = $package_name"

Write-Host "Publish deployable artefact`n"
az artifacts universal publish --organization $ado_org --project $ado_project --scope project --feed $ado_feed --name "$SOLUTION" --version $version --path $package_name

Write-Host "Verify wrapper is available`n"
$package_name = (Get-Item "$(PWD)\userenv.ps1" -ErrorAction SilentlyContinue).FullName
if ( ! ( $package_name )) { ERRMSG "[PACKAGE_NOT_FOUND] $(PWD)\userenv.ps1 not found!" 9995 }
az artifacts universal publish --organization "https://cdaf.visualstudio.com" --project $ado_project --scope project --feed $ado_feed --name "userenv" --version $version --path $package_name

The package can be retrieved using the semantic version, or latest (current production).
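Retrieval uses the matching az artifacts universal download command; a sketch using the solution name and release version from the example manifest above (organisation, project and feed values are placeholders):

```powershell
az artifacts universal download `
  --organization $ado_org --project $ado_project --scope project `
  --feed $ado_feed --name 'kat' --version '0.4.80' --path .
```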


Operations

Operational tasks can be performed using the production (latest) or specific release. In this example, a production-like development environment can be created and destroyed on demand.
