This Release Train extends the Terraform Kubernetes Authoritative Release, combining the application stack deployment with the Infrastructure-as-Code solution.
graph TD
client["🌐"]:::transparent
apim["API Gateway"]
subgraph k8s["Kubernetes"]
subgraph ns1["Dev namespace"]
ns1-ingress["ingress"]
subgraph ns1-pod-1["Pod"]
ns1-con-a["container"]
end
subgraph ns1-pod-2["Pod"]
ns1-con-b["container"]
ns1-con-c["container"]
end
end
end
client --> apim --> ns1-ingress --> ns1-con-a
ns1-ingress --> ns1-con-b --> ns1-con-c
classDef external fill:lightblue
class client external
classDef dashed stroke-dasharray: 5, 5
class ns1 dashed
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2 dotted
Each component publishes a self-contained release package to the Azure DevOps (ADO) artefact store. The ADO Release orchestrates these package deployments for each environment, ensuring the complete stack is promoted through each environment with aligned package versions.
graph LR
subgraph Components
Sbuild["Build"] --> Stest["Test"] --> Spublish["Publish"]
end
subgraph Infrastructure
Abuild["Build"] --> Atest["Test"] --> Apublish["Publish"]
end
subgraph Release
TEST
PROD
end
store[(ADO Store)]
Apublish --> store
Spublish --> store
store --> TEST
TEST --> PROD
classDef release fill:lightgreen
class TEST,PROD release
The key component of the package is the release manifest, which declares the component versions of the solution. The desired state engine (Terraform) ensures all components for the release align with the declaration in the manifest. These version declarations are added to your CDAF.solution file.
While the stack construction is the same in all environments, unique settings for each environment are defined in configuration management files, e.g. properties.cm. The properties management is covered in more detail in the Configuration Management section.
The key principle of the Authoritative Release is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variations in Terraform dependencies, modules are not downloaded at deploy time; instead, they are resolved at build time and packaged into an immutable release package. For a consistent way-of-working, the Terraform build process resolves and validates dependencies.
Build-time Module Resolution
Most Terraform module resolution approaches pull from source control (Git) or a registry at deploy time, which can require additional credential management, risks unexpected module changes (when mutable tags are used) and introduces potential network connectivity issues. This approach instead treats modules like software dependencies, resolving them at build time and building them into an all-in-one immutable package.
The following state.tf defines the modules and versions that are required.
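As a minimal sketch of such a state.tf (the organisation, module path and version below are illustrative assumptions, not the actual solution values), a pinned module from the private registry might be declared as:

```
# Illustrative only: pin the module to an exact version so the build is repeatable
module "azurerm" {
  source  = "app.terraform.io/example-org/azurerm/kubernetes"
  version = "1.0.0"
}
```

Pinning an exact version (rather than a mutable tag or range) is what makes the build-time resolution deterministic.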
The following build.tsk triggers module download from a private registry using credentials in TERRAFORM_REGISTRY_TOKEN; these credentials will not be required at deploy time.
Write-Host "[$TASK_NAME] Verify Version`n" -ForegroundColor Cyan
terraform --version
VARCHK
MAKDIR $env:APPDATA\terraform.d
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf ' "credentials": {'
Add-Content $conf ' "app.terraform.io": {'
Add-Content $conf " `"token`": `"$env:TERRAFORM_REGISTRY_TOKEN`""
Add-Content $conf ' }'
Add-Content $conf ' }'
Add-Content $conf '}'
Get-Content $conf
Write-Host "[$TASK_NAME] Log the module registry details`n" -ForegroundColor Cyan
Get-Content state.tf
Write-Host "[$TASK_NAME] In a clean workspace, first init will download modules, then fail, ignore this and init again"
if ( ! ( Test-Path ./.terraform/modules/azurerm )) { IGNORE "terraform init -upgrade -input=false" }
Write-Host "[$TASK_NAME] Initialise with local state storage and download modules`n" -ForegroundColor Cyan
terraform init -upgrade -input=false
Validation
Once all modules have been downloaded, the syntax is validated.
Write-Host "[$TASK_NAME] Validate Syntax`n" -ForegroundColor Cyan
terraform validate
Write-Host "[$TASK_NAME] Generate the graph to validate the plan`n" -ForegroundColor Cyan
terraform graph
Numeric Token Handling
All the deploy-time files are copied into the release directory. Because tokens cannot be used during the build process, an arbitrary numeric placeholder is used, which is then replaced in the resulting release directory. Tokenisation is covered in more detail in the following section.
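To illustrate (the variable name and placeholder value below are assumptions), a numeric placeholder keeps the file valid for build-time validation, and the build replaces it with a token in the release directory:

```
# Build time: 9999 is a valid number, so terraform validate succeeds
replicas = 9999

# Release directory, after replacement: the token is resolved at deploy time
replicas = %replicas%
```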
To de-tokenise this definition at deploy time, name/value pair files are used. This decouples the settings from the complexity of the configuration file format.
If these were stored as separate files in source control, they would suffer the same drift challenge, so the settings are stored in source control in a tabular format, which is compiled into the name/value files during the Continuous Integration process.
target aks_work_space name_space REGISTRY_KEY REGISTRY_KEY_SHA
TEST aks_prep test $env:REGISTRY_KEY FD6346C8432462ED2DBA6...
PROD aks_prod prod $env:REGISTRY_KEY CA3CBB1998E86F3237CA1...
Note: environment variables can be used for dynamic value replacement, most commonly used for secrets.
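For illustration (the exact compiled file name and layout are assumptions), the TEST row of the table above would compile to a name/value pair file along these lines:

```
# Compiled name/value pairs for target TEST (layout assumed)
aks_work_space=aks_prep
name_space=test
REGISTRY_KEY=$env:REGISTRY_KEY
REGISTRY_KEY_SHA=FD6346C8432462ED2DBA6...
```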
These human-readable configuration management tables are transformed to a computer-friendly format and included in the release package (release.ps1). The REGISTRY_KEY and REGISTRY_KEY_SHA are used for Variable Validation, creating a properties.varchk as follows:
env:REGISTRY_KEY=$env:REGISTRY_KEY_SHA
Write the REGISTRY_KEY_SHA as a container environment variable, so that when the SHA changes, the container is automatically restarted to pick up the environment variable change, and hence the corresponding secret is also reloaded.
env {
name = "REGISTRY_KEY_SHA"
value = var.REGISTRY_KEY_SHA
}
An additional benefit of this approach is that when diagnosing an issue, the SHA can be used as an indicative secret verification. How these are consumed is described later in the deploy section.
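As a sketch of that diagnostic check (PowerShell, variable names assumed from the table above), the SHA-256 of the secret currently in use can be computed and compared against the published REGISTRY_KEY_SHA without revealing the secret itself:

```
# Illustrative check: hash the secret and compare with the published SHA
$bytes  = [System.Text.Encoding]::UTF8.GetBytes($env:REGISTRY_KEY)
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$hash   = ([System.BitConverter]::ToString($sha256.ComputeHash($bytes))) -replace '-', ''
if ( $hash -ne $env:REGISTRY_KEY_SHA ) {
    Write-Host "Secret in use does not match the expected SHA"
}
```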
The release combines the Infrastructure-as-Code (IaC) Continuous Integration (CI) output with the application components from the Terraform Authoritative Release. The application authoritative release package (in green below) declares the image versions to be deployed to the infrastructure provided by the IaC release package.
graph LR
Key["Legend<br/>Blue - IaC & CM<br/>Green - Application Stack"]
subgraph ado["Azure DevOps"]
git[(Git)]
build-artefact[(Build)]
iac["release.ps1"]
package-artefact[(Artifacts)]
app["release.ps1"]
end
subgraph az["Azure"]
qa
pp
pr
end
registry[(Docker Registry)]
git --CI--> build-artefact
build-artefact --CD--> iac
package-artefact --CD--> app
registry -. "pull image" .-> qa
app -. "terraform apply" .-> qa
iac -. "terraform apply" .-> qa
classDef infra fill:LightBlue
class iac,az infra
classDef app-stack fill:LightGreen
class registry,app app-stack
In this example, the application release pipeline only deploys to the development environment to verify the package, and then pushes it to the artefact store.
The package is pulled from this store at deploy time, based on its semantic version as declared in the solution manifest, CDAF.solution.
artifactPrefix=0.5
productName=Azure Terraform for Kubernetes
solutionName=azt
kat_release=0.4.80
The two release artefacts are promoted together through the pipeline.
The deployment process itself is performed via the Terraform Cloud intermediary, which decouples configuration management and provides state storage and execution processing.
An important aspect of the intermediary's function is to store dynamic outputs; for example, the Infrastructure-as-Code solution provides a Kubernetes cluster, and the dynamically created configuration is stored as outputs.
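A hedged sketch of such an output in the IaC solution (the resource and output names are assumptions):

```
# Expose the dynamically created cluster configuration for downstream use
output "kube_config" {
  value     = azurerm_kubernetes_cluster.k8s.kube_config_raw
  sensitive = true
}
```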
The outputs are made available to the subsequent application deployment process.
The application components consume the state information that has been shared.
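One way the application workspace can read those shared outputs is via a terraform_remote_state data source (the organisation and workspace names below are illustrative):

```
# Read the IaC workspace outputs from Terraform Cloud (names assumed)
data "terraform_remote_state" "iac" {
  backend = "remote"
  config = {
    organization = "example-org"
    workspaces = {
      name = "azt_iac"
    }
  }
}

# Referenced as, for example: data.terraform_remote_state.iac.outputs.kube_config
```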
To support the build-once/deploy-many model, the environment-specific values are injected and then deployed for the release. Note that the release is immutable; any change to any component requires a new release to be created, eliminating cherry-picking. The tasksRun.tsk performs multiple levels of de-tokenisation: first environment-specific settings, then any solution-level declarations, then cluster, group/region and non-secret elements of the credentials.
Environment (TARGET) specific de-tokenisation is blue, and solution level de-tokenisation in green:
Cluster de-tokenisation is blue, group/region de-tokenisation in green and non-secret elements of the credentials in orange:
Terraform Cloud is being used to perform state management. To avoid false negative reporting on Terraform apply, the operation is performed in a CMD shell.
Write-Host "[$TASK_NAME] Azure Secrets are stored in the back-end, the token opens access to these"
MAKDIR "$env:APPDATA\terraform.d"
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf ' "credentials": {'
Add-Content $conf ' "app.terraform.io": {'
Add-Content $conf " `"token`": `"$env:TERRAFORM_TOKEN`""
Add-Content $conf ' }'
Add-Content $conf ' }'
Add-Content $conf '}'
Write-Host "[$TASK_NAME] Replace Local State with Remote, load env_tag from $azure_groups"
PROPLD $azure_groups
$remote_state = "state.tf"
Set-Content $remote_state 'terraform {'
Add-Content $remote_state ' backend "remote" {'
Add-Content $remote_state " organization = `"${env:TERRAFORM_ORG}`""
Add-Content $remote_state ' workspaces {'
Add-Content $remote_state " name = `"${SOLUTION}_${resource_group}`""
Add-Content $remote_state ' }'
Add-Content $remote_state ' }'
Add-Content $remote_state '}'
terraform init -upgrade -input=false
Write-Host "[$TASK_NAME] Default action is plan`n" -ForegroundColor Cyan
if ( ! $OPT_ARG ) { $OPT_ARG = 'plan' }
EXECMD "terraform $OPT_ARG"
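For illustration, with SOLUTION set to azt (from the CDAF.solution above) and resource_group set to prod, the script above would generate a state.tf containing (organisation name assumed):

```
terraform {
  backend "remote" {
    organization = "example-org"
    workspaces {
      name = "azt_prod"
    }
  }
}
```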
Once the infrastructure has been deployed, the application components are installed. The release package is downloaded (in this example, a container with the AZ extensions pre-installed is used) and then run for the environment.
The package can be retrieved using the semantic version, or latest (current production).
Operations
Operational tasks can be performed using the production (latest) or specific release. In this example, a production-like development environment can be created and destroyed on demand.