Release Train

The examples provided in this section build on the motivations of Autonomous Development, Authoritative Release, in some cases extending the declarative release principles.

Automating Release Management at Scale

In a large-scale environment, a release can include infrastructure, operational and application changes. In Scaled Agile Framework (SAFe) language, the role of coordinating these changes is called the Release Train Engineer (RTE). In many organisations, the coordination of these changes is manual. Automation of this coordination extends the Autonomous Development, Authoritative Release approach to include all aspects of the solution.

Release Train Engineering preserves Autonomous Development, while extending the development output assets beyond application code to include infrastructure, configuration management and test automation.

Fundamental to Release Train Engineering is a Desired State Engine. Examples of these include Terraform, AWS Cloud Development Kit (CDK), Azure Resource Manager/Bicep, Helmsman, Helmfile, Puppet, Ansible, Octopus*.

Intermediary

An intermediary provides a decoupled solution to perform the deployment actions of the release, based on a triggering request from the pipeline. Intermediaries, also known as orchestrators, can provide state management persistence, state reporting and drift remediation.

  • Octopus does not have a Desired State capability as such, but using a parent project, a release manifest can be constructed, and only child projects which have changed will be deployed. See the detailed explanation in the Octopus Deploy section.
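A minimal sketch of that parent-project behaviour, assuming hypothetical file names and a name=version manifest format (not Octopus syntax):

```shell
# Hypothetical sketch: compare the previous release manifest with the new one
# and deploy only the child projects whose package version has changed.
# File names and the name=version format are illustrative, not Octopus syntax.
cat > previous.manifest << 'EOF'
static-content=1.4.2
api=2.0.7
EOF
cat > current.manifest << 'EOF'
static-content=1.4.2
api=2.0.8
EOF

changed_children=''
while IFS='=' read -r child version; do
  previous=$(grep "^${child}=" previous.manifest | cut -d '=' -f 2)
  if [ "$version" != "$previous" ]; then
    changed_children="${changed_children}${child}"
    echo "deploy ${child} ${version}"    # only changed children are deployed
  fi
done < current.manifest
```

Here only the api child would be deployed; the unchanged static-content child is skipped.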

Subsections of Release Train

Azure DevOps (ADO) Release

Orchestrated Component Deploy

The Application Stack in this example deploys two components, static content and an API.

graph TD

  Agent["🌐"] 

  subgraph vm1["☁️ CloudFlare"]
    content["Static Content"]
    API
  end

  Agent --> content
  Agent --> API

classDef external fill:lightblue
class Agent external

classDef dashed stroke-dasharray: 5, 5
class vm1,vm2,vm3,vm4 dashed
 
classDef dotted stroke-dasharray: 2, 2
class vm1-pod-1,vm1-pod-2,vm2-pod-1,vm2-pod-2,vm3-pod-1,vm3-pod-2,vm4-pod-1,vm4-pod-2 dotted

Each component publishes a self-contained release package to the Azure DevOps (ADO) artefact store. The ADO Release orchestrates these package deployments for each environment, ensuring the complete stack is promoted through each environment with aligned package versions.


graph LR

  subgraph static["Static Content"]
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end
  subgraph API
    Abuild["Build"] -->
    Atest["Test"] -->
    Apublish["Publish"]
  end

  subgraph Release
    TEST
    PROD
  end

  store[(ADO Store)]

  Apublish --> store
  Spublish --> store
  store --> TEST
  TEST --> PROD

classDef release fill:lightgreen
class TEST,PROD release

Subsections of Azure DevOps (ADO) Release

Component CI

Autonomous Component Build & Test

Each component contains both application code and deployment automation. The development team can imperatively deploy to the dev environment, i.e. the API and Vue application can be deployed separately, with no assurance of version alignment.

Example Vue properties.cm file; the deployment tool used is Wrangler.

context    target  pages_app_project  fqdn                 api_url
container  DEV     petstore-dev       vue-dev.example.com  api-dev.example.com
container  TEST    petstore-tst       vue-tst.example.com  api-tst.example.com
container  PROD    petstore-prd       vue.example.com      api.example.com

Example API properties.cm file; the deployment tool used is Terraform.

context    target tf_work_space  pages_suffix
container  DEV    PetStack-Dev   dev
container  TEST   PetStack-Test  tst
container  PROD   PetStack-Prod  prd
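The tabular format above is compiled into per-target name/value files during Continuous Integration; a minimal sketch of that transform (illustrative only, not the actual CDAF transform.sh):

```shell
# Illustrative sketch (not the actual CDAF transform.sh): compile the tabular
# properties.cm format into one name=value file per target.
cat > properties.cm << 'EOF'
context    target tf_work_space  pages_suffix
container  DEV    PetStack-Dev   dev
container  TEST   PetStack-Test  tst
container  PROD   PetStack-Prod  prd
EOF

header=$(head -1 properties.cm)
tail -n +2 properties.cm | while read -r line; do
  [ -z "$line" ] && continue
  target=$(echo "$line" | awk '{print $2}')
  # pair each header column (from the third onward) with the row value
  echo "$line" | awk -v hdr="$header" '{
    split(hdr, h, /[ \t]+/)
    for (i = 3; i <= NF; i++) printf "%s=%s\n", h[i], $i
  }' > "${target}.properties"
done
cat DEV.properties
```

This yields DEV.properties, TEST.properties and PROD.properties, each containing only that target's settings.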

Due to the loose-coupling principle of CDAF, the same pipeline template is used for both components, even though the code and deployment automation are different (see orchestration templates in GitHub for Windows and Linux).

Note that Jest for Vue and Checkov for Terraform have both been configured to output results in JUnit XML format.

jobs:
  - job: Build
    displayName: Build and Package
    pool:
      vmImage: windows-latest
    steps:
      - task: PowerShell@2
        displayName: CDAF Release Build
        inputs:
          targetType: 'inline'
          script: |
            . { iwr -useb https://cdaf.io/static/app/downloads/cdaf.ps1 } | iex
            .\automation\entry.ps1 $(Build.BuildNumber) $(Build.SourceBranch) staging@$(Build.ArtifactStagingDirectory)
      - task: PublishTestResults@2
        condition: succeededOrFailed()
        inputs:
          testResultsFormat: 'JUnit'
          testResultsFiles: '**/test-results/*.xml' 
      - task: PublishBuildArtifacts@1

The resulting ADO component pipelines are independent.


Next, autonomous deploy…

Component CD

Autonomous Component Deploy

By using the feature-branch.properties capability of CDAF, branches containing the string dev will deploy to the development environment. This feature allows imperative deployment by the development team, without manipulating the pipeline, and therefore avoiding drift.

vue

# Feature Branch name match mapping to environment
dev=DEV

API

# Feature Branch name "contains" mapping to environment
dev=DEV release 'apply --auto-approve'

In the feature branch, where dev is in the branch name, CDAF will detect and execute a deployment, using the mapping above to invoke a release to DEV.
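The "contains" mapping above can be sketched as follows; the resolve_target helper is hypothetical, illustrating the branch-to-environment lookup only:

```shell
# Sketch of the "branch name contains" mapping: if the branch name contains a
# key from feature-branch.properties, return the mapped environment.
# resolve_target is a hypothetical helper, not part of CDAF.
cat > feature-branch.properties << 'EOF'
dev=DEV
EOF

resolve_target() {
  branch="$1"
  while IFS='=' read -r match env; do
    case "$branch" in
      *"$match"*) echo "$env"; return 0 ;;
    esac
  done < feature-branch.properties
  echo "NONE"
}

resolve_target 'feature/dev-login'   # prints DEV
resolve_target 'main'                # prints NONE
```

A branch such as feature/dev-login would therefore trigger a deployment to DEV, while main falls through to the trunk-based release path.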


The trunk based pipeline will only push a release artefact from the main branch, with a stand-up/tear-down integration test of the production build.


Next, publication…

Component Publish

Autonomous Component Publication

The final stage of the main pipeline is publication, which pushes the release package to the artefact registry.


Each component publishes its release package, so although the components use different technologies, they are available as consistent packages via the CDAF package process, which outputs a self-extracting release.ps1 (or release.sh for Linux) file.


Next, Release…

Release

Full Stack Release

The ADO Release function is used to create a release and promote it through the environments. The release obtains the components from the artefact store.


The Release is defined in order of dependency, i.e. the CloudFlare infrastructure is created/updated and configured with the API, then the front-end is deployed to the infrastructure.

The release itself includes no deployment logic; it simply invokes the packages provided by the component development teams.


When a new release is created, the latest versions are selected by default, and this defines the manifest for the release, i.e. different versions cannot be deployed to different environments. This ensures the stack is consistently promoted.

The latest versions do not have to be selected, but whatever is selected is fixed for that release instance.


When the release is promoted, no manual intervention is required except for approval gates, which can be approved by business or product owners without any further development effort.


Ansible Automation Platform

Full Stack Release using Ansible Automation Platform

Ansible Automation Platform is the replacement for Ansible Tower.

The Application Stack is a combination of Podman containers with an Apache reverse proxy for ingress.

This implementation does not include infrastructure, i.e. the creation of the host and related networking is not included in the automation; however, it does combine configuration management and software delivery.

graph TD
  client["🌐"]:::transparent

  subgraph dc["Data Center"]
    subgraph vm["Host"]
      Apache
      subgraph Podman
        vm1-con-a["Rails"]
        vm1-con-b["Spring"]
        vm1-con-c["Python"]
      end
    end
  end

  client -->
  Apache --> vm1-con-a
  Apache --> vm1-con-b
  Apache --> vm1-con-c

classDef transparent fill:none,stroke:none,color:black

classDef dashed stroke-dasharray: 5, 5
class dc dashed
 
classDef dotted stroke-dasharray: 2, 2
class Podman dotted

The configuration of the host and deployment of the application are defined once, and deployed many times, e.g. test and production.

graph LR

  subgraph Rails
    Rbuild["Build"] -->
    Rtest["Test"] -->
    Rpublish["Publish"]
  end
  subgraph Python
    Pbuild["Build"] -->
    Ptest["Test"] -->
    Ppublish["Publish"]
  end
  subgraph Spring
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end

  subgraph Release
    TEST:::release
    PROD:::release
  end

  store1[(GitLab Docker Registry)]
  store2[(Nexus Docker Registry)]

  Rpublish --> store1
  Spublish --> store1
  Ppublish --> store2
  store1 --> TEST
  store2 --> TEST
  TEST --> PROD

classDef release fill:lightgreen

Subsections of Ansible Automation Platform

Component Pipelines

Autonomous Development

Each development team is responsible for publishing a container image; how they do so is within their control. In this example GitLab and ThoughtWorks Go are used by different teams, each following their own branching strategy.


Both teams are using CDAF docker image build and push helpers.

productName=Ruby on Rails                    productName=Springboot
solutionName=rails                           solutionName=spring
artifactPrefix=0.3                           artifactPrefix=0.2
defaultBranch=main	
                                             containerImage=cdaf/linux
buildImage=ruby:3.2.2                        buildImage=registry.access.redhat.com/ubi9/openjdk-17-runtime

CDAF_PUSH_REGISTRY_URL=${CI_REGISTRY}        CDAF_PUSH_REGISTRY_URL=https://${NEXUS_REGISTRY}                     
CDAF_PUSH_REGISTRY_TAG=${semver} latest      CDAF_PUSH_REGISTRY_TAG=${NEXUS_REGISTRY}/${SOLUTION}:$BUILDNUMBER   
CDAF_PUSH_REGISTRY_USER=${CI_REGISTRY_USER}  CDAF_PUSH_REGISTRY_USER=${NEXUS_REGISTRY_USER}                        
CDAF_PUSH_REGISTRY_TOKEN=${CI_JOB_TOKEN}     CDAF_PUSH_REGISTRY_TOKEN=${NEXUS_REGISTRY_PASS}                      

Next, build a release package…

Manifest

Application Stack Declaration

The key component of the package is the release manifest; this declares the component versions of the solution. The desired state engine (Ansible) will ensure all components for the release align with the declaration in the manifest. These are added to your CDAF.solution file. To see an example component build, see the Java SpringBoot example.

artifactPrefix=1.2
productName=Ansible Provisioning
solutionName=ansible

# SMTP Configuration
smtp_image=registry.example/mails:0.0.26
smtp_container_name=mail_forwarder
smtp_container_ports=25:25
LISTEN_PORT=25
SITE_NAME=onprem

# OAuth Verification App
rails_image=registry.example/rails:0.3.117
rails_container_name=ruby_on_rails
rails_container_ports=3000:3000

# Springboot
spring_image=registry.example/spring:127
spring_container_name=spring_boot
spring_container_ports=8081:8080

While the stack construction is the same in all environments, unique settings for each environment are defined in configuration management files, e.g. properties.cm.

context    target      deployTaskOverride  sharepoint_list  rails_fqdn              spring_fqdn
remote     staging     tower.tsk           stage            rails-test.example.com  spring-test.example.com
remote     production  tower.tsk           prod             rails.example.com       spring.example.com

Next, build a release package…

Ansible Build

Immutable Release Package

The key construct for the Release Train is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variations in Ansible dependencies, playbooks are not downloaded at deploy time; instead they are resolved at build time and packaged into an immutable release package. For a consistent way-of-working, the Ansible build process resolves dependencies and validates the playbooks.

Due to the complexity, a custom build script build.sh is defined, broken down into the steps below.

Sprint Zero

Based on Sprint-Zero, it is critical that a deployment is verifiable by version. A message of the day (motd) file is generated with the build number included so that a user who logs in to the host can verify what version has been applied.

executeExpression "ansible-playbook --version"

echo "[$scriptName] Build the message of the day verification file"; echo
executeExpression "cp -v devops/motd motd.txt"
propertiesList=$(eval "$AUTOMATIONROOT/remote/transform.sh devops/CDAF.solution")
printf "$propertiesList"
eval $propertiesList
cat >> motd.txt <<< "State version : ${artifactPrefix}.${BUILDNUMBER}"
cat motd.txt

Resolve Dependencies

Playbook dependencies (collections) are then downloaded into the release.

common_collections='community.general ansible.posix containers.podman'
for common_collection in $common_collections; do
	executeExpression "ansible-galaxy collection install $common_collection $force_install -p ."
done


Validation

Once all playbooks have been downloaded, syntax is then validated.

for play in `find playbooks/ -maxdepth 1 -type f -name '*.yaml'`; do
	executeExpression "ansible-playbook $play --list-tasks -vv"
	for inventory in `find inventory/ -maxdepth 1 -type f`; do
		echo
		echo "ansible-playbook ${play} -i $inventory --list-hosts -vv"
		echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
		echo
		executeExpression "ansible-playbook ${play} -i $inventory --list-hosts -vv"
	done
done


Release Package

The deploy-time components are then copied into the release package, based on the storeFor definition in your solution directory.

# All Deploy-time Playbooks
release


The playbooks and helper scripts are then packed into a self-extracting release executable as per the standard CDAF release build process.


Ansible Deploy

Detokenisation and Release

At deploy time, the solution manifest and environment settings are applied; the following is an extract from tower.tsk.

echo "De-tokenise Environment properties prior to loading to Tower"
DETOKN roles/apache-reverse-proxy/vars/main.yml

echo "Resolve global config, i.e. container image version, then environment specific list names"
DETOKN roles/smtp/vars/main.yml
DETOKN roles/smtp/vars/main.yml $WORKSPACE/manifest.txt

DETOKN roles/rails/vars/main.yml
DETOKN roles/rails/vars/main.yml $WORKSPACE/manifest.txt

DETOKN roles/spring/vars/main.yml
DETOKN roles/spring/vars/main.yml $WORKSPACE/manifest.txt
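The core idea of a DETOKN-style step can be sketched as follows (an illustrative replacement only; the real CDAF DETOKN task supports more):

```shell
# Illustrative sketch of what a DETOKN-style step does: replace %token%
# placeholders in a vars file using name=value pairs from a manifest.
# (The real CDAF DETOKN task supports more; this shows the core idea.)
cat > main.yml << 'EOF'
rails_image: "%rails_image%"
rails_container_name: "%rails_container_name%"
EOF
cat > manifest.txt << 'EOF'
rails_image=registry.example/rails:0.3.117
rails_container_name=ruby_on_rails
EOF

while IFS='=' read -r name value; do
  [ -z "$name" ] && continue
  # rewrite the file with this token resolved
  sed "s|%${name}%|${value}|g" main.yml > main.tmp && mv main.tmp main.yml
done < manifest.txt
cat main.yml
```

After the loop, main.yml carries the concrete image version and container name declared in the manifest.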


As the Ansible Automation Platform is the intermediary, the declarations need to be moved to the intermediary and the release then triggered. In this example, the desired state is continually applied to remediate any drift, but can also be triggered via a command-line interface (CLI). The following extract from towerTemplate.sh sets up the configuration.

templateID=$(tower-cli job_template list -n "${name}" -f id)
if [ -z $templateID ]; then
	executeExpression "tower-cli job_template create --name '${name}' --inventory '${inventory}' --project '${project}' --playbook '${playbook}' --verbosity more_verbose"
else
	executeExpression "tower-cli job_template modify --name '${name}' --inventory '${inventory}' --project '${project}' --playbook '${playbook}' --verbosity more_verbose"
fi

for credential in $credentials; do
	executeExpression "tower-cli job_template associate_credential --job-template '${name}' --credential ${credential}"
done

Once configured, the deployment is triggered.

echo "With Project and Inventory loaded, can now create the Template which links the Inventory, Project, Playbook and Credentials"
${WORKSPACE}/towerTemplate.sh "$TARGET" "$TARGET" "$TARGET" 'playbooks/common.yaml' 'localadmin'

echo "Launch and watch the deployed playbooks"
templateID=$(tower-cli job_template list -n "$TARGET" -f id)
tower-cli job launch --job-template=$templateID


An overview of deployment activity and state management is available in the intermediary user interface.


Octopus Deploy

Release Orchestration using Octopus Deploy

Octopus Deploy is a dedicated release orchestration tool which does not have build capabilities and does not natively integrate with source control, instead it provides a repository to which build artefacts can be pushed. The following scenario is a stack which comprises a customer-facing application (React) front-end and Platform-as-a-Service (Mulesoft Anypoint) back-end.

The back-end deployment is itself an authoritative release solution with a source-driven manifest (see Custom Desired State Management Solution). The client will retrieve the static content from the content delivery network (CloudFlare).

graph TD
  client["🌐"]:::transparent

  subgraph cf["CloudFlare"]
    react-a["Static Content"]
  end

  subgraph ch["CloudHub"]
    patient["Patient API"]
    Admissions["Admissions API"]
  end

  client --> react-a
  client --> patient
  patient --> Admissions

classDef external fill:lightblue
class client external
 
classDef dashed stroke-dasharray: 5, 5
class cf,ch dashed

Octopus creates a release whenever either the state management or user interface packages are pushed, but this is not deployed into test until the release manager approves. The API construction and registration with AnyPoint exchange is not described here, this is treated as a prerequisite, see Custom Desired State Management Solution for a detailed breakdown of that process.

graph LR

  subgraph "Patient API"
    Rbuild["Build"] -->
    Rtest["Test"] -->
    Rpublish["Publish"]
  end
  subgraph "AnyPoint Desired State Management"
    Pbuild["Build"] -->
    Ptest["Test"] -->
    Ppublish["Publish"]
  end
  subgraph "Admissions API"
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end
  subgraph "CloudFlare Pages"
    Abuild["Build"] -->
    Atest["Test"] -->
    Apublish["Publish"]
  end

  subgraph Release
    TEST:::release
    PROD:::release
  end

  store1[(Anypoint Exchange)]
  store2[(Octopus Package Registry)]

  Rpublish --> store1
  Spublish --> store1
  Ppublish --> store2
  Apublish --> store2

  store1 --> TEST
  store2 --> TEST
  TEST --> PROD

classDef release fill:lightgreen

Subsections of Octopus Deploy

Octopus Pane of Glass

Overview of Stack Components

As an intermediary, Octopus provides release gating, orchestration and an overview of the stack components, showing which versions have been promoted to which environments.


Parent Project

The parent project does not perform any deployment activity itself; it serves as the orchestrator of the child projects, providing gating and sequencing.


Child Projects

The child projects use the same template process, but each has the release packages that have been built to perform its technology-specific deployment process.


Component Independence

The approach above does offer the ability to independently promote or roll back a child component. This can be beneficial for hot-fixes; however, it is discouraged as it breaks the stack alignment principles of the release train.

Decoupled Deployment

Orchestrated Release

The core principle of all the examples in this material is the production of a self-contained, immutable release package. This provides loose coupling with tool chains and re-usability for development environments (see Realising the Feedback Loop).

While Octopus provides a wide range of deployment mechanisms, as a release orchestrator, each child project has the same process, executing the release package for each component against the target environment.

Delivery Lifecycle

Octopus orchestration is called a lifecycle, which is a re-usable pattern. Each child item can use the same lifecycle because the deployment launch process is the same.


While the launch process is the same, each child component's underlying technologies can be very different.


Business Visibility

Non-technical Release View

After each environment deployment is successful, a Confluence page (one per component/environment) is updated, capturing release details. This provides visibility outside of the toolchain, which is easier to access by business users such as test managers and product owners. Using the content include macro, these pages can be merged.


Terraform Cloud

Full Stack Release using Terraform Cloud

This Release Train extends the Terraform Kubernetes authoritative release, combining the application stack deployment with the Infrastructure-as-Code solution.

graph TD
  client["🌐"]:::transparent

  apim["API Gateway"]

  subgraph k8s["Kubernetes"]
    subgraph ns1["Dev namespace"]
      ns1-ingress["ingress"]
      subgraph ns1-pod-1["Pod"]
        ns1-con-a["container"]
      end
      subgraph ns1-pod-2["Pod"]
        ns1-con-b["container"]
        ns1-con-c["container"]
      end
    end
  end

  client -->
  apim -->
  ns1-ingress --> ns1-con-a
  ns1-ingress --> 
  ns1-con-b --> ns1-con-c

classDef external fill:lightblue
class client external
 
classDef dashed stroke-dasharray: 5, 5
class ns1,ns2,ns3 dashed
 
classDef dotted stroke-dasharray: 2, 2
class ns1-pod-1,ns1-pod-2,ns2-pod-1,ns2-pod-2,ns3-pod-1,ns3-pod-2 dotted

Each component publishes a self-contained release package to the Azure DevOps (ADO) artefact store. The ADO Release orchestrates these package deployments for each environment, ensuring the complete stack is promoted through each environment with aligned package versions.


graph LR

  subgraph Components
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end
  subgraph Infrastructure
    Abuild["Build"] -->
    Atest["Test"] -->
    Apublish["Publish"]
  end

  subgraph Release
    TEST
    PROD
  end

  store[(ADO Store)]

  Apublish --> store
  Spublish --> store
  store --> TEST
  TEST --> PROD

classDef release fill:lightgreen
class TEST,PROD release

Subsections of Terraform Cloud

Manifest

Declare Container Deployment as Terraform Package

The key component of the package is the release manifest, this declares the component versions of the solution. The desired state engine (Terraform) will ensure all components for the release align with the declaration in the manifest. These are added to your CDAF.solution file.

solutionName=kat
artifactPrefix=0.4

ui_image=cdaf/cdaf:572
api_image=cdaf/kestrel:ubuntu-22.04-14
fast_image=cdaf/fastapi:50

While the stack construction is the same in all environments, unique settings for each environment are defined in configuration management files, e.g. properties.cm. The properties management is covered in more detail in the Configuration Management section.

context    target  work_space      name_space  api_node_category  api_ip        ui_ip     
container  TEST    kat_test        kat-test    secondary          10.224.10.11  10.224.10.21  
container  PROD    kat_production  kat-prod    primary            10.224.10.10  10.224.10.20  

Next, build a release package…

Terraform Build

Immutable Release Package

The key construct for the Authoritative Release is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variations in Terraform dependencies, modules are not downloaded at deploytime, instead they are resolved at build time and packaged into an immutable release package. For a consistent way-of-working, the Terraform build process resolves and validates dependencies.

Build-time Module Resolution

Most Terraform module resolution approaches pull from source control (Git) or a registry at deploy time, which can require additional credential management, risks unexpected module changes (if tags are used) and is exposed to network connectivity issues. This approach instead treats modules like software dependencies, resolving them at build time and packaging them into an all-in-one immutable package.

The following state.tf defines the modules and versions that are required.

terraform {
  backend "local" {}
}

module "stack_modules" {
  source  = "app.terraform.io/example/modules/azurerm"
  version = "0.2.0"
}

module "stack_components" {
  source  = "app.terraform.io/example/components/azurerm"
  version = "0.1.3"
}

The following build.tsk triggers module download from a private registry using credentials in TERRAFORM_REGISTRY_TOKEN; these credentials will not be required at deploy time.

Write-Host "[$TASK_NAME] Verify Version`n" -ForegroundColor Cyan
terraform --version

VARCHK

MAKDIR $env:APPDATA\terraform.d
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf '{'
Add-Content $conf '  "credentials": {'
Add-Content $conf '    "app.terraform.io": {'
Add-Content $conf "      `"token`": `"$env:TERRAFORM_REGISTRY_TOKEN`""
Add-Content $conf '    }'
Add-Content $conf '  }'
Add-Content $conf '}'
Get-Content $conf

Write-Host "[$TASK_NAME] Log the module registry details`n" -ForegroundColor Cyan
Get-Content state.tf

Write-Host "[$TASK_NAME] In a clean workspace, first init will download modules, then fail, ignore this and init again"
if ( ! ( Test-Path ./.terraform/modules/azurerm )) { IGNORE "terraform init -upgrade -input=false" }

Write-Host "[$TASK_NAME] Initialise with local state storage and download modules`n" -ForegroundColor Cyan
terraform init -upgrade -input=false


Validation

Once all modules have been downloaded, syntax is then validated.

Write-Host "[$TASK_NAME] Validate Syntax`n" -ForegroundColor Cyan
terraform validate

Write-Host "[$TASK_NAME] Generate the graph to validate the plan`n" -ForegroundColor Cyan
terraform graph


Numeric Token Handling

All the deploy-time files are copied into the release directory. Because tokens cannot be used during the build process, an arbitrary numeric is used, and this is then replaced in the resulting release directory. Tokenisation is covered in more detail in the following section.

Write-Host "[$TASK_NAME] Tokenise variable file`n" -ForegroundColor Cyan
REFRSH .terraform\modules\* ..\release\.terraform\modules\
VECOPY *".tf" ..\release
VECOPY *".json" ..\release
REPLAC ..\release\variables.tf           '{ default = 3 }'  '{ default = %agent_count% }'


Release Package

The deploy-time components are then copied into the release package, based on the storeFor definition in your solution directory.

# Tokenised Terraform Files
release


The modules and helper scripts are then packed into a self-extracting release executable as per the standard CDAF release build process.


Configuration Management

Tokens and Properties

To avoid a configuration file for each environment, and the inevitable drift between those files, a single, tokenised, definition is used.

variable "aks_work_space"   { default = "%aks_work_space%" }
variable "name_space"       { default = "%name_space%" }
variable "REGISTRY_KEY"     { default = "@REGISTRY_KEY@" }
variable "REGISTRY_KEY_SHA" { default = "@REGISTRY_KEY_SHA@" }

To De-tokenise this definition at deploy time, name/value pair files are used. This allows the settings to be decoupled from the complexity of configuration file format.

If these were to be stored as separate files in source control, they would suffer the same drift challenge, so in source control, the settings are stored in a tabular format, which is compiled into the name/value files during the Continuous Integration process.

target  aks_work_space  name_space  REGISTRY_KEY       REGISTRY_KEY_SHA
TEST    aks_prep        test        $env:REGISTRY_KEY  FD6346C8432462ED2DBA6...
PROD    aks_prod        prod        $env:REGISTRY_KEY  CA3CBB1998E86F3237CA1...

Note: environment variables can be used for dynamic value replacement, most commonly used for secrets.

These human-readable configuration management tables are transformed to a computer-friendly format and included in the release package (release.ps1). The REGISTRY_KEY and REGISTRY_KEY_SHA are used for Variable Validation, creating a properties.varchk as follows.

env:REGISTRY_KEY=$env:REGISTRY_KEY_SHA
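The intent of this check can be sketched as follows, with made-up illustration values (the real validation is performed by the CDAF VARCHK task):

```shell
# Sketch of the variable-validation idea: compare the SHA-256 of the secret
# held in an environment variable against the recorded hash, so a wrong or
# stale secret fails fast without ever printing the secret itself.
# REGISTRY_KEY and the expected hash here are made-up illustration values.
export REGISTRY_KEY='s3cret-value'
expected=$(printf '%s' 's3cret-value' | sha256sum | cut -d ' ' -f 1)

actual=$(printf '%s' "$REGISTRY_KEY" | sha256sum | cut -d ' ' -f 1)
if [ "$actual" = "$expected" ]; then
  echo 'REGISTRY_KEY verified (SHA-256 matches)'
else
  echo 'REGISTRY_KEY mismatch' >&2
  exit 1
fi
```

Only the hash is ever logged, so the comparison can safely appear in pipeline output.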

Writing the REGISTRY_KEY_SHA as a container environment variable means that when the SHA changes, the container is automatically restarted to pick up the environment variable change, and hence the corresponding secret is also reloaded.

env {
  name = "REGISTRY_KEY_SHA"
  value = var.REGISTRY_KEY_SHA
}

An additional benefit of this approach is that when diagnosing an issue, the SHA can be used as an indicative secret verification. How these are consumed is described later in the deploy section.

Release

Release Construction

The release combines the Infrastructure-as-Code (IaC) Continuous Integration (CI) output with the application components from Terraform Authoritative Release. The application authoritative release package (in green below) declares the image versions to be deployed to the infrastructure provided by the IaC release package.

graph LR
  Key["Legend<br/>Blue - IaC & CM<br/>Green - Application Stack"]

  subgraph ado["Azure DevOps"]
    git[(Git)]
    build-artefact[(Build)]
    iac["release.ps1"]
    package-artefact[(Artifacts)]
    app["release.ps1"]
  end

  subgraph az["Azure"]
    qa
    pp
    pr
  end

  registry[(Docker Registry)]

  git --CI--> build-artefact
  build-artefact --CD--> iac

  package-artefact --CD--> app

  registry -. "pull image" .-> qa
  app -. "terraform apply" .-> qa
  iac -. "terraform apply" .-> qa

  classDef infra fill:LightBlue
  class iac,az infra

  classDef app-stack fill:LightGreen
  class registry,app app-stack

In this example, the application release pipeline only deploys to the development environment to verify the package, and then pushes to the artefact store.


The package, identified by its semantic version, is pulled from this store at deploy time, based on the solution manifest, CDAF.solution.


artifactPrefix=0.5
productName=Azure Terraform for Kubernetes
solutionName=azt

kat_release=0.4.80

The two release artefacts are promoted together through the pipeline.


Intermediary

Terraform Cloud intermediary

The deployment process itself is processed via the Terraform Cloud intermediary, which decouples the configuration management, and provides state storage and execution processing.


An important aspect of the intermediary's function is to store dynamic outputs; for example, the Infrastructure-as-Code solution provides a Kubernetes cluster, and the dynamically created configuration is stored as outputs.


The outputs are made available to the subsequent application deployment process.
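A common way for the application deployment to consume these shared outputs is Terraform's remote state data source; the following is a sketch, with illustrative organisation, workspace and output names:

```hcl
# Hypothetical consumer: read the infrastructure workspace outputs.
# Organisation, workspace and output names are illustrative.
data "terraform_remote_state" "infrastructure" {
  backend = "remote"
  config = {
    organization = "example"
    workspaces = {
      name = "azt_infrastructure"
    }
  }
}

# Use the shared cluster details in the application deployment
provider "kubernetes" {
  host                   = data.terraform_remote_state.infrastructure.outputs.cluster_host
  cluster_ca_certificate = data.terraform_remote_state.infrastructure.outputs.cluster_ca
}
```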


The application components consume the state information that has been shared.


Deploy

Deploy-time Detokenisation

The configuration management is consumed at deploy time.

Deployment Mechanics

To support the build-once/deploy-many model, the environment-specific values are injected at deploy time for the release. Note that the release is immutable; any change to any component requires a new release to be created, eliminating cherry-picking. The tasksRun.tsk performs multiple levels of detokenisation: the first is for environment-specific settings, the second applies any solution-level declarations, then cluster, group/region and non-secret elements of the credentials.

Write-Host "[$TASK_NAME] Generic Properties Detokenisation`n" -ForegroundColor Cyan
Get-Content variables.tf
DETOKN variables.tf

Write-Host "[$TASK_NAME] Custom Properties Detokenisation`n" -ForegroundColor Cyan
DETOKN variables.tf $azure_groups
DETOKN variables.tf $azure_credentials reveal

Environment (TARGET) specific de-tokenisation is blue, and solution level de-tokenisation in green:


Cluster de-tokenisation is blue, group/region de-tokenisation in green and non-secret elements of the credentials in orange:


Terraform Cloud is being used to perform state management. To avoid false negative reporting on Terraform apply, the operation is performed in a CMD shell.

Write-Host "[$TASK_NAME] Azure Secrets are stored in the back-end, the token opens access to these"
MAKDIR "$env:APPDATA\terraform.d"
$conf = "$env:APPDATA\terraform.d\credentials.tfrc.json"
Set-Content $conf 'credentials "app.terraform.io" {'
Add-Content $conf "  token = `"$env:TERRAFORM_TOKEN`""
Add-Content $conf '}'

Write-Host "[$TASK_NAME] Replace Local State with Remote, load env_tag from $azure_groups"
PROPLD $azure_groups
$remote_state = "state.tf"
Set-Content $remote_state 'terraform {'
Add-Content $remote_state '  backend "remote" {'
Add-Content $remote_state "    organization = `"${env:TERRAFORM_ORG}`""
Add-Content $remote_state '    workspaces {'
Add-Content $remote_state "      name = `"${SOLUTION}_${resource_group}`""
Add-Content $remote_state '    }'
Add-Content $remote_state '  }'
Add-Content $remote_state '}'

terraform init -upgrade -input=false

Write-Host "[$TASK_NAME] Default action is plan`n" -ForegroundColor Cyan
if ( ! $OPT_ARG ) { $OPT_ARG = 'plan' }
EXECMD "terraform $OPT_ARG"


Once the infrastructure has been deployed, the application components are installed. The release package is downloaded (in this example a container with the AZ extensions pre-installed is used) and then run for the environment.


Feedback Loop

Realising the Feedback Loop

Based on Realising the Feedback Loop, once the package has been promoted to its last stage, it is pushed to the artefact store.


In this example Azure DevOps (ADO) is used, via the az artifacts extension; see the example push.tsk.

Write-Host "[$TASK_NAME] Verify deployable artefact is available`n"
$package_name = (Get-Item "$(PWD)\release.ps1" -ErrorAction SilentlyContinue).FullName
if ( ! ( $package_name )) { ERRMSG "[PACKAGE_NOT_FOUND] $(PWD)\release.ps1 not found!" 9994 }

Write-Host "[$TASK_NAME] Verify Azure DevOps PAT is set correctly`n"
VARCHK push.varchk

PROPLD manifest.txt
$version = ${artifactPrefix} + '.' + ${BUILDNUMBER}

Write-Host "[$TASK_NAME] Push $SOLUTION release package:"
Write-Host "[$TASK_NAME]   `$ado_org      = $ado_org"
Write-Host "[$TASK_NAME]   `$ado_project  = $ado_project"
Write-Host "[$TASK_NAME]   `$ado_feed     = $ado_feed"
Write-Host "[$TASK_NAME]   `$SOLUTION     = $SOLUTION"
Write-Host "[$TASK_NAME]   `$version      = $version"
Write-Host "[$TASK_NAME]   `$package_name = $package_name"

Write-Host "Verify deployable artefact is available`n"
az artifacts universal publish --organization $ado_org --project $ado_project --scope project --feed $ado_feed --name "$SOLUTION" --version $version --path $package_name

Write-Host "Verify wrapper is available`n"
$package_name = (Get-Item "$(PWD)\userenv.ps1" -ErrorAction SilentlyContinue).FullName
if ( ! ( $package_name )) { ERRMSG "[PACKAGE_NOT_FOUND] $(PWD)\userenv.ps1 not found!" 9995 }
az artifacts universal publish --organization "https://cdaf.visualstudio.com" --project $ado_project --scope project --feed $ado_feed --name "userenv" --version $version --path $package_name

The package can be retrieved using the semantic version, or latest (current production).


Operations

Operational tasks can be performed using the production (latest) or specific release. In this example, a production-like development environment can be created and destroyed on demand.
