As the number of our new (and old) customers grows, it becomes crucial to use IaC and enjoy all of its benefits, such as mitigation of human errors, infrastructure (self-)documentation, etc.

Why Terraform?

As in (almost) every software product category, there are several products to choose from, but we felt Terraform would be the number one for us for sure. 🙂 There are basically two reasons:

HashiCorp product

We (as the Ackee DevOps team) have had very good experience with HashiCorp products. We’ve incorporated Vault to keep our application secrets really secret and we use Packer to create custom OS images for our virtual machines that run databases (which are provisioned by Terraform by the way, but we will talk about this later). Previously, we also worked with Vagrant, which worked perfectly.

Cloud-agnostic

Though we run almost everything on GCP nowadays, it is always nice to work with a “Swiss Army knife” – a product that can provision all platforms (or, in Terraform terminology: providers) with a single interface and DSL.

Terraform and CD

Since we already delivered our application and some infrastructure code with a Jenkins pipeline by applying Kubernetes YAML definition files to GKE clusters, it was easy to integrate the Terraform parts into it.

As we run our Jenkins agents in Docker, this “wget-ing” style of Terraform installation suited us well (Golang FTW):
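A sketch of what that can look like in the agent’s Dockerfile – Terraform ships as a single static Go binary, so “installation” is just downloading and unzipping it. The base image and pinned version below are illustrative, not our exact setup:

```dockerfile
FROM jenkins/jnlp-slave:latest

USER root
# Illustrative version – pin whichever release you need
ARG TERRAFORM_VERSION=0.11.7

RUN wget -q "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
    && unzip "terraform_${TERRAFORM_VERSION}_linux_amd64.zip" -d /usr/local/bin \
    && rm "terraform_${TERRAFORM_VERSION}_linux_amd64.zip"

USER jenkins
```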

Jenkins pipeline integration

As we mentioned, we already provisioned some infrastructure parts with the Jenkins pipeline, so we just extended that pipeline with new code:
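A condensed sketch of that stage follows – the credential ID, bucket naming and environment variables are illustrative placeholders, not our exact pipeline:

```groovy
stage('Terraform') {
    // Backward compatibility: skip projects with no Terraform definition
    if (fileExists('terraform')) {
        // Generate backend.tf from values the CD environment already knows
        writeFile file: 'terraform/backend.tf', text: """
terraform {
  backend "gcs" {
    bucket = "${env.PROJECT_ID}-terraform-state"
    prefix = "${env.ENVIRONMENT}"
  }
}
"""
        dir('terraform') {
            // sshagent lets terraform init fetch modules from private GitLab repos
            sshagent(['gitlab-deploy-key']) {
                sh 'terraform init'
                sh 'terraform apply -auto-approve'
            }
        }
    }
}
```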

This code first checks whether a directory named “terraform” exists, which keeps the pipeline backward-compatible with projects that are not defined in Terraform. Next, it creates a backend.tf file with information about the project that we already know from our CD environment, changes directory to terraform (that is where all our .tf files go), and then come the interesting parts:

We wrap everything in the sshagent pipeline directive – this allows us to use our private GitLab deploy SSH key, so we can fetch Terraform modules from GitHub repositories – public sources – and even from private GitLab repositories – private sources. This is good for privacy – we can keep some internal infrastructure characteristics hidden from the world – and also for sharing (and caring) – we can publish some nice generic work.

Then we run some pretty generic stuff: terraform init initializes the project and fetches modules, updating them if needed (so we have to lock module versions everywhere, which is a best practice anyway), and in the end we apply the changes.

Terraform plan

Are you missing the terraform plan command in our pipeline? Right, that’s because of the workflow we’ve chosen.

Since we use GCP service accounts for authorization and GCS buckets as the remote state backend, we can check with terraform plan on our local machines, then push to the git repo and let our CD pipeline run the actual terraform apply.
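This works because both the local machine and the CD pipeline share the same state through the GCS backend. A minimal backend.tf along those lines (bucket name and prefix are illustrative):

```hcl
terraform {
  backend "gcs" {
    # Both local `terraform plan` and the pipeline's `terraform apply`
    # read and write the same state in this bucket
    bucket = "my-project-terraform-state"
    prefix = "production"
  }
}
```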

Public work

We made some database provisioning work we wanted to share with the community.

It includes Packer image templates for GCP instances, which grab Ubuntu 16.04, upgrade packages, install a database (Elasticsearch, MongoDB or Redis), install the Stackdriver Monitoring agent and the Logging agent for log ingestion, and push the image to GCS:
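A trimmed-down sketch of such a template for Packer’s googlecompute builder – the project ID, zone and image name are placeholders, and the database installation steps are omitted for brevity:

```json
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "my-gcp-project",
    "source_image_family": "ubuntu-1604-lts",
    "zone": "europe-west1-b",
    "ssh_username": "packer",
    "image_name": "elasticsearch-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update && sudo apt-get -y upgrade",
      "curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh && sudo bash install-monitoring-agent.sh",
      "curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh && sudo bash install-logging-agent.sh"
    ]
  }]
}
```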

When you’ve created a Stackdriver-enabled image for your database, you can use our public Terraform module to provision a node (or a cluster!):
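Usage is a plain module block; the module source, variable names and values below are illustrative rather than the module’s actual interface:

```hcl
module "elasticsearch" {
  # Pinning the module to a git ref locks its version,
  # which is the best practice mentioned above
  source = "github.com/AckeeDevOps/terraform-elasticsearch?ref=v1.0.0"

  project        = "my-gcp-project"
  zone           = "europe-west1-b"
  instance_count = 3                     # a cluster!
  image          = "elasticsearch-1234"  # the Packer-built image
}
```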

Private GitLab integration

Besides the pretty generic work we have done on the database modules, we have created some internal modules that we want to keep private.

As shown previously, this is done by wrapping the terraform calls in sshagent, so we can pull public GitHub repos as well as private GitLab ones. The module definition goes like this:
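A module sourced over SSH from a private GitLab repository – the host and repository path are placeholders:

```hcl
module "internal_module" {
  # The git::ssh:// source works because terraform init runs inside
  # sshagent with our GitLab deploy key loaded
  source = "git::ssh://git@gitlab.example.com/devops/terraform-internal-module.git?ref=v1.0.0"
}
```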

Do we love Terraform?

Terraform, as well as all other HashiCorp products we use, has shown great versatility and interoperability with other products. It also helped us normalize and document our infrastructure in code so: Yes, we do!
