Ansible Automation Platform

Full Stack Release using Ansible Automation Platform

Ansible Automation Platform is the replacement for Ansible Tower.

The Application Stack is a combination of Podman containers with an Apache reverse proxy for ingress.

This implementation does not include infrastructure, i.e. the creation of the host and related networking is not automated. It does, however, combine configuration management and software delivery.

graph TD
  client["🌐"]:::transparent

  subgraph dc["Data Center"]
    subgraph vm["Host"]
      Apache
      subgraph Podman
        vm1-con-a["Rails"]
        vm1-con-b["Spring"]
        vm1-con-c["Python"]
      end
    end
  end

  client -->
  Apache --> vm1-con-a
  Apache --> vm1-con-b
  Apache --> vm1-con-c

classDef transparent fill:none,stroke:none,color:black

classDef dashed stroke-dasharray: 5, 5
class dc dashed
 
classDef dotted stroke-dasharray: 2, 2
class Podman dotted
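The ingress layer above could be expressed as Apache virtual hosts along these lines; the hostnames and backend ports are illustrative, echoing the example manifest and environment settings later in this page, and are a sketch rather than the deployed configuration.

```apache
# Sketch only: route each public hostname to its Podman-published port
# (requires mod_proxy and mod_proxy_http)
<VirtualHost *:80>
    ServerName rails.example.com
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:3000/"
    ProxyPassReverse "/" "http://127.0.0.1:3000/"
</VirtualHost>

<VirtualHost *:80>
    ServerName spring.example.com
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:8081/"
    ProxyPassReverse "/" "http://127.0.0.1:8081/"
</VirtualHost>
```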

The configuration of the host and deployment of the application are defined once, and deployed many times, e.g. test and production.

graph LR

  subgraph Rails
    Rbuild["Build"] -->
    Rtest["Test"] -->
    Rpublish["Publish"]
  end
  subgraph Python
    Pbuild["Build"] -->
    Ptest["Test"] -->
    Ppublish["Publish"]
  end
  subgraph Spring
    Sbuild["Build"] -->
    Stest["Test"] -->
    Spublish["Publish"]
  end

  subgraph Release
    TEST:::release
    PROD:::release
  end

  store1[(GitLab Docker Registry)]
  store2[(Nexus Docker Registry)]

  Rpublish --> store1
  Spublish --> store1
  Ppublish --> store2
  store1 --> TEST
  store2 --> TEST
  TEST --> PROD

classDef release fill:lightgreen


Component Pipelines

Autonomous Development

Each development team is responsible for publishing a container image; how they do so is within their control. In this example, GitLab and ThoughtWorks Go are used by different teams. The GitLab team works branch based, while the Go team works trunk based.


Both teams use the CDAF Docker image build and push helpers.

Rails (GitLab Docker Registry):

productName=Ruby on Rails
solutionName=rails
artifactPrefix=0.3
defaultBranch=main
buildImage=ruby:3.2.2

CDAF_PUSH_REGISTRY_URL=${CI_REGISTRY}
CDAF_PUSH_REGISTRY_TAG=${semver} latest
CDAF_PUSH_REGISTRY_USER=${CI_REGISTRY_USER}
CDAF_PUSH_REGISTRY_TOKEN=${CI_JOB_TOKEN}

Spring (Nexus Docker Registry):

productName=Springboot
solutionName=spring
artifactPrefix=0.2
containerImage=cdaf/linux
buildImage=registry.access.redhat.com/ubi9/openjdk-17-runtime

CDAF_PUSH_REGISTRY_URL=https://${NEXUS_REGISTRY}
CDAF_PUSH_REGISTRY_TAG=${NEXUS_REGISTRY}/${SOLUTION}:$BUILDNUMBER
CDAF_PUSH_REGISTRY_USER=${NEXUS_REGISTRY_USER}
CDAF_PUSH_REGISTRY_TOKEN=${NEXUS_REGISTRY_PASS}
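For illustration, the push targets the helper produces for the Rails image can be composed from these settings; the values below are stand-ins for the GitLab CI pipeline variables, not real registry details.

```shell
#!/usr/bin/env bash
# Stand-in values for the pipeline variables referenced above (illustrative only)
CI_REGISTRY='registry.example'
solutionName='rails'
semver='0.3.117'

# The helper tags and pushes one image per entry in CDAF_PUSH_REGISTRY_TAG
CDAF_PUSH_REGISTRY_TAG="${semver} latest"
for tag in $CDAF_PUSH_REGISTRY_TAG; do
	echo "${CI_REGISTRY}/${solutionName}:${tag}"   # a docker push of this reference would follow
done
```

This prints `registry.example/rails:0.3.117` and `registry.example/rails:latest`, matching the two tags declared for the Rails component.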

Next, build a release package…

Manifest

Application Stack Declaration

The key component of the package is the release manifest, which declares the component versions of the solution. The desired-state engine (Ansible) ensures all components for the release align with the declaration in the manifest. These properties are added to your CDAF.solution file. For an example component build, see the Java SpringBoot example.

artifactPrefix=1.2
productName=Ansible Provisioning
solutionName=ansible

# SMTP Configuration
smtp_image=registry.example/mails:0.0.26
smtp_container_name=mail_forwarder
smtp_container_ports=25:25
LISTEN_PORT=25
SITE_NAME=onprem

# OAuth Verification App
rails_image=registry.example/rails:0.3.117
rails_container_name=ruby_on_rails
rails_container_ports=3000:3000

# Springboot
spring_image=registry.example/spring:127
spring_container_name=spring_boot
spring_container_ports=8081:8080
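The desired-state roles consume these properties as variables at deploy time. As a minimal sketch (not CDAF's actual property loader), resolving a single component's image from the manifest looks like this:

```shell
#!/usr/bin/env bash
# Sketch: write a two-line manifest excerpt, then extract one property value
manifest='manifest.txt'
cat > "$manifest" <<'EOF'
spring_image=registry.example/spring:127
spring_container_ports=8081:8080
EOF

# Take everything after the first '=' of the matching line
spring_image=$(grep '^spring_image=' "$manifest" | cut -d '=' -f 2-)
echo "$spring_image"   # registry.example/spring:127
```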

While the stack construction is the same in all environments, settings unique to each environment are defined in configuration management files, e.g. properties.cm.

context    target      deployTaskOverride  sharepoint_list  rails_fqdn              spring_fqdn
remote     staging     tower.tsk           stage            rails-test.example.com  spring-test.example.com
remote     production  tower.tsk           prod             rails.example.com       spring.example.com
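At deploy time, the row matching the target is applied. Conceptually, selecting one property for one target can be sketched with awk (this is an illustration of the column layout above, not CDAF's actual resolver):

```shell
#!/usr/bin/env bash
# Sketch: pick the rails_fqdn column for a given target from the properties table
target='production'
rails_fqdn=$(awk -v t="$target" '
	NR == 1 { for (i = 1; i <= NF; i++) if ($i == "rails_fqdn") col = i }  # locate column by header
	$2 == t { print $col }                                                 # match row by target name
' <<'EOF'
context    target      deployTaskOverride  sharepoint_list  rails_fqdn              spring_fqdn
remote     staging     tower.tsk           stage            rails-test.example.com  spring-test.example.com
remote     production  tower.tsk           prod             rails.example.com       spring.example.com
EOF
)
echo "$rails_fqdn"   # rails.example.com
```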

Next, build a release package…

Ansible Build

Immutable Release Package

The key construct of the Release Train is that all aspects of the release process are predictable and repeatable. To avoid deploy-time variation in Ansible dependencies, playbooks are not downloaded at deploy time; instead, dependencies are resolved at build time and packed into an immutable release package. For a consistent way of working, the Ansible build process resolves dependencies and validates the playbooks.

Due to the complexity, a custom build script, build.sh, is defined; it is broken down into the steps below.

Sprint Zero

Based on Sprint-Zero, it is critical that a deployment is verifiable by version. A message of the day (motd) file is generated with the build number included, so that a user who logs in to the host can verify which version has been applied.

executeExpression "ansible-playbook --version"

echo "[$scriptName] Build the message of the day verification file"; echo
executeExpression "cp -v devops/motd motd.txt"
propertiesList=$(eval "$AUTOMATIONROOT/remote/transform.sh devops/CDAF.solution")
printf "$propertiesList"
eval $propertiesList
cat >> motd.txt <<< "State version : ${artifactPrefix}.${BUILDNUMBER}"
cat motd.txt

Resolve Dependencies

The required Ansible Galaxy collections are then downloaded into the release.

common_collections='community.general ansible.posix containers.podman'
for common_collection in $common_collections; do
	executeExpression "ansible-galaxy collection install $common_collection $force_install -p ."
done


Validation

Once all dependencies have been downloaded, the syntax of each playbook is validated.

for play in $(find playbooks/ -maxdepth 1 -type f -name '*.yaml'); do
	executeExpression "ansible-playbook $play --list-tasks -vv"
	for inventory in $(find inventory/ -maxdepth 1 -type f); do
		echo
		echo "ansible-playbook ${play} -i $inventory --list-hosts -vv"
		echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
		echo
		executeExpression "ansible-playbook ${play} -i $inventory --list-hosts -vv"
	done
done


Release Package

The deploy-time components are then copied into the release package, based on the storeFor definition in your solution directory.

# All Deploy-time Playbooks
release


The playbooks and helper scripts are then packed into a self-extracting release executable, as per the standard CDAF release build process.
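The self-extracting mechanism can be sketched as a stub script with an archive appended after a marker. CDAF's actual packager differs, but the principle is the same; everything below (file names, payload content) is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of a self-extracting release: stub script + appended tar archive
mkdir -p demo/playbooks
echo '- hosts: all' > demo/playbooks/common.yaml
tar -czf payload.tar.gz -C demo playbooks

# The stub exits before bash ever reads the binary payload below the marker
cat > release.sh <<'EOF'
#!/usr/bin/env bash
# Extract everything after the __ARCHIVE__ marker, then stop
sed -n '/^__ARCHIVE__$/,$p' "$0" | tail -n +2 | tar -xz
exit 0
__ARCHIVE__
EOF
cat payload.tar.gz >> release.sh
chmod +x release.sh
```

Running ./release.sh in an empty directory restores playbooks/common.yaml from the embedded archive.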


Ansible Deploy

Detokenisation and Release

At deploy time, the solution manifest and environment settings are applied; the following is an extract from tower.tsk.

echo "De-tokenise Environment properties prior to loading to Tower"
DETOKN roles/apache-reverse-proxy/vars/main.yml

echo "Resolve global config, i.e. container image version, then environment specific list names"
DETOKN roles/smtp/vars/main.yml
DETOKN roles/smtp/vars/main.yml $WORKSPACE/manifest.txt

DETOKN roles/rails/vars/main.yml
DETOKN roles/rails/vars/main.yml $WORKSPACE/manifest.txt

DETOKN roles/spring/vars/main.yml
DETOKN roles/spring/vars/main.yml $WORKSPACE/manifest.txt


As the Ansible Automation Platform is the intermediary, the declarations need to be moved to the intermediary before the release is triggered. In this example, the desired state is continually applied to remediate any drift, but it can also be triggered via a command-line interface (CLI). The following extract from towerTemplate.sh sets up the configuration.

templateID=$(tower-cli job_template list -n "${name}" -f id)
if [ -z "$templateID" ]; then
	executeExpression "tower-cli job_template create --name '${name}' --inventory '${inventory}' --project '${project}' --playbook '${playbook}' --verbosity more_verbose"
else
	executeExpression "tower-cli job_template modify --name '${name}' --inventory '${inventory}' --project '${project}' --playbook '${playbook}' --verbosity more_verbose"
fi

for credential in $credentials; do
	executeExpression "tower-cli job_template associate_credential --job-template '${name}' --credential ${credential}"
done

Once configured, the deployment is triggered.

echo "With Project and Inventory loaded, can now create the Template which links the Inventory, Project, Playbook and Credentials"
${WORKSPACE}/towerTemplate.sh "$TARGET" "$TARGET" "$TARGET" 'playbooks/common.yaml' 'localadmin'

echo "Launch and watch the deployed playbooks"
templateID=$(tower-cli job_template list -n "$TARGET" -f id)
tower-cli job launch --job-template=$templateID


An overview of deployment activity and state management is available in the intermediary user interface.
