Infrastructure as code increases productivity and transparency. By storing the architecture configuration in version control, changes can be compared to the previous state, and the history becomes visible and traceable. Terraform is an open-source command-line tool that codifies APIs into declarative configuration files.
In this article, Terraform is used to create templates that are processed into the resources of an OpenShift project, such as services, build configurations, and deployment configurations.
Terraform does not just create resources; it offers a single command each for the creation, update, and deletion of tracked resources.
Terraform offers a module registry where everyone can search for and download modules with common infrastructure configurations for many providers. The module code is open source and can be inspected in the respective GitHub repository.
However, I could not find a module for OpenShift. Therefore, I worked out my own way to transform the following OpenShift setup into Terraform module templates.
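To give an idea of the goal: the templates below are collected in a local module, which a root configuration could call roughly like this. This is only a sketch; the module path and the wiring are my choices, not an official module, while the variables namespace and deploy_tag are the ones used by the templates later in this article.
module "apps_service" {
  # local module containing the templates described in this article (path is an assumption)
  source = "./modules/openshift-apps-service"

  namespace  = "prod"
  deploy_tag = "deploy-prod"
}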
Terraform Templates for Kubernetes
Kubernetes Service Configuration
The following file defines the configuration of a Kubernetes service. The specification creates a service named apps-service that targets port 8080 on any pod carrying the label app: apps-service. The new service is mapped to the namespace prod.
apiVersion: v1
kind: Service
metadata:
  name: apps-service
  namespace: prod
spec:
  externalTrafficPolicy: Cluster
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: apps-service
  sessionAffinity: None
  type: LoadBalancer
The sessionAffinity setting controls whether all requests of a client session are served by the same pod; with ClientIP, a session sticks to one pod, which allows caching of resources specific to that session. Here it is set to None, so requests are distributed across all pods of the service.
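For comparison, the same service could also be managed directly with the Terraform Kubernetes provider instead of a YAML template. A minimal sketch (the Terraform resource label apps-service is my choice):
resource "kubernetes_service" "apps-service" {
  metadata {
    name      = "apps-service"
    namespace = "prod"
  }

  spec {
    # route traffic to pods labeled app=apps-service
    selector {
      app = "apps-service"
    }

    port {
      port        = 8080
      target_port = 8080
    }

    session_affinity = "None"
    type             = "LoadBalancer"
  }
}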
Kubernetes Pod Configuration
To keep containerised applications portable, the configuration is codified into standardised files.
The OpenShift DeploymentConfig, BuildConfig, and the service configuration have to be codified into Terraform templates.
The ConfigMap defines the data used to configure a Pod.
Terraform provides a resource type kubernetes_config_map, of which we create an instance named apps-service-config. It consists of a required metadata block and an optional data block.
resource "kubernetes_config_map" "apps-service-config" {
metadata {
name = "apps-service-config"
namespace = prod
labels = {
app = "apps-service"
}
annotations {
project = prod
}
}
data {
service:
logFolder: /tmp
serviceName: apps_service
ip: 0.0.0.0
port: 8080
}
}
In the annotations section, we can store arbitrary unstructured key-value pairs. The name attribute names the config map apps-service-config; the name must be unique and cannot be changed afterwards. The namespace attribute places the config map in prod, within which the name has to be unique.
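Other resources can reference the config map through Terraform's interpolation instead of repeating the literal name, which also makes the dependency explicit. A small sketch (the output name is my choice):
output "config_map_name" {
  # resolves to "apps-service-config"
  value = "${kubernetes_config_map.apps-service-config.metadata.0.name}"
}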
OpenShift Deployment Configuration
The DeploymentConfig is OpenShift's template for deployments. It defines the deployment strategy, the replica count, and triggers that cause new deployments to be rolled out automatically.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: backend-prod
  name: apps-service
  namespace: prod
spec:
  replicas: 2
  selector:
    app: apps-service
    deploymentconfig: apps-service
  strategy:
    type: Rolling
  template:
    metadata:
      labels:
        app: apps-service
        deploymentconfig: apps-service
    spec:
      containers:
        - imagePullPolicy: IfNotPresent
          image: >-
            docker-registry.default.svc:5000/my-project-build/apps-service-run:deploy-prod
          name: apps-service
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 32Mi
          volumeMounts:
            - mountPath: /config
              name: service-config
      volumes:
        - configMap:
            defaultMode: 420
            name: apps-service-config
          name: service-config
The above OpenShift deployment configuration creates two replicas of the apps-service. It uses the default Rolling strategy, which waits for new pods to pass their readiness check before scaling down the old components. To generate such files, the deployment and service configurations are rendered from Terraform templates with the template_file data source, written to disk with local_file, and created in OpenShift via local-exec provisioners:
variable "apps_config" {
  type = "string"
}

data "template_file" "apps_dev_service_dc" {
  template = "${file("${path.module}/templates/apps_dc.tpl")}"

  vars {
    namespace     = "${var.namespace}"
    deploy_tag    = "${var.deploy_tag}"
    service_name  = "apps-dev-service"
    status_url    = "/status"
    replica_count = 2
  }
}

# write the rendered deployment configuration to disk so oc can pick it up
resource "local_file" "apps_dev_service_dc" {
  content  = "${data.template_file.apps_dev_service_dc.rendered}"
  filename = "openshift/apps_dev_service_dc.yaml"
}

data "template_file" "apps_dev_service_sc" {
  template = "${file("${path.module}/templates/apps_sc.tpl")}"

  vars {
    namespace    = "${var.namespace}"
    service_name = "apps-dev-service"
  }
}

resource "local_file" "apps_dev_service_sc" {
  content  = "${data.template_file.apps_dev_service_sc.rendered}"
  filename = "openshift/apps_dev_service_sc.yaml"
}

# depends_on ensures the rendered file exists before oc create runs
resource "null_resource" "apps_dev_service_dc_create" {
  depends_on = ["local_file.apps_dev_service_dc"]

  provisioner "local-exec" {
    command = "oc create -f openshift/apps_dev_service_dc.yaml"
  }
}

resource "null_resource" "apps_dev_service_sc_create" {
  depends_on = ["local_file.apps_dev_service_sc"]

  provisioner "local-exec" {
    command = "oc create -f openshift/apps_dev_service_sc.yaml"
  }
}
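The referenced template files are not shown above. Based on the service definition from the beginning, templates/apps_sc.tpl could look like the following sketch; the placeholders correspond to the vars block, and apps_dc.tpl follows the same pattern with ${deploy_tag}, ${replica_count}, and ${status_url}:
apiVersion: v1
kind: Service
metadata:
  name: ${service_name}
  namespace: ${namespace}
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: ${service_name}
  type: LoadBalancer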
Makefile for Terraform Commands
The following Makefile defines commands for the steps needed to create the template files.
make init initializes local settings and data that are used by subsequent commands (plugins and Terraform module files).
make plan runs Terraform's plan step.
This command outputs the execution plan before you apply it with make apply, showing the difference between the current state and the configuration you intend to apply.
Finally, make apply adapts the selected OpenShift project to the new service configuration.
.PHONY: init
init:
	terraform init
	terraform get

.PHONY: plan
plan:
	terraform plan

.PHONY: apply
apply:
	terraform apply
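Since Terraform also covers deletion (see the introduction), a destroy target can be added in the same way. Note that terraform destroy removes the tracked resources, i.e. the local files and null_resources; the objects created in OpenShift through local-exec would need their own destroy-time provisioners or a manual oc delete:
.PHONY: destroy
destroy:
	terraform destroy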
Apply Terraform OpenShift Templates
The created OpenShift templates are ready to be applied:
resource "null_resource" "apps_service_compile_imagestream" {
provisioner "local-exec" {
command = "oc create is apps-service-compile"
}
}
resource "null_resource" "apps_service_run_imagestream" {
provisioner "local-exec" {
command = "oc create is apps-service-run"
}
}
resource "null_resource" "apps_service_compile" {
provisioner "local-exec" {
command = "oc apply -f openshift\\apps_service_compile_bc.yaml"
}
}
resource "null_resource" "apps_service_run" {
provisioner "local-exec" {
command = "oc apply -f openshift\\apps_service_run_bc.yaml"
}
}
resource "null_resource" "apps_service_pipeline" {
provisioner "local-exec" {
command = "oc apply -f openshift\\apps_service_pipeline.yaml"
}
}
We apply the configuration defined in a YAML file with
provisioner "local-exec" {
  command = "oc apply -f openshift/<configfile>.yaml"
}
Image streams for the compilation step and the separate execution step are created with
provisioner "local-exec" {
  command = "oc create is <imageStreamName>"
}
The create command parses the given configuration file and creates the resources in OpenShift. Note that oc create fails if a resource already exists, while oc apply updates existing resources to match the file.
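One caveat of null_resource: it runs its provisioner only once. To re-run oc apply whenever the rendered file changes, a triggers block containing the rendered template content can be added. A sketch for the deployment configuration (the resource label apps_dev_service_dc_apply is my choice):
resource "null_resource" "apps_dev_service_dc_apply" {
  # re-run the provisioner whenever the rendered template changes
  triggers {
    dc_rendered = "${data.template_file.apps_dev_service_dc.rendered}"
  }

  provisioner "local-exec" {
    command = "oc apply -f openshift/apps_dev_service_dc.yaml"
  }
}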
Deploy Service in OpenShift
To deploy a new version of the app service, open a command line and log in to OpenShift with
oc login
The first time, you are asked for the server URL of the OpenShift instance you want to connect to, as well as your username and password.
If you are working with several Openshift projects, you can select one with
oc project <projectName>
Roll out the latest image of the service to the selected project with
oc tag apps-service-run:latest apps-service-run:deploy-prod
oc rollout latest apps-service -n my-project
The tag command allows you to take an existing image from an image stream and set it as the most recent image for a tag in another (or the same) image stream.
Here, we take the image currently tagged latest in the image stream apps-service-run and tag it as deploy-prod in the same image stream; the deployment configuration above then rolls out apps-service-run:deploy-prod.
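Alternatively, this manual rollout can be automated: the triggers mentioned in the DeploymentConfig section can watch the deploy-prod tag and start a new deployment whenever oc tag moves it. A sketch of such a trigger in the DeploymentConfig spec (the namespace my-project-build is taken from the image path above):
triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - apps-service
      from:
        kind: ImageStreamTag
        name: apps-service-run:deploy-prod
        namespace: my-project-build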