What is the best practice for Terraform? Terraform and Chef integration

Hi,

What would be the best practice for implementing Terraform?

My case: Ideally, I want to build a CI/CD pipeline in Azure DevOps that uses Terraform to provision infrastructure resources in Azure, then configures them with Chef (Chef Solo for now; we plan to move to Chef Server in the long run).

I have created a working pipeline that provisions a basic Azure VM. However, the Terraform structure is very basic and does not use any variable files. All the variables are stored as pipeline variables, which replace tokens in the Terraform files during the build/release process.

In the long run, we want to use Terraform to create resources for different environments: development, QA, and production. Each environment has certain machine types with different specifications. How should I structure my Terraform files most effectively, and how should I manage the variable files for those machine types?

And if I want to integrate Chef into the Terraform CI/CD pipeline, how would I achieve that? I know there is a provisioner in Terraform that can install the Chef client and connect the resource to a Chef server, but is that a good approach? And if I only use Chef Solo, how would I integrate it into this Terraform pipeline?

Thanks so much.

Hello!

There are many ways to implement Terraform in a pipeline. Regarding the first part of your question:

How would I then structure all my Terraform files in the most effective ways?

A common pattern is to create a subdirectory for each long-running environment. If any configuration is reused across environments, extract it into a shared module.

For example:

    environments/
        |-- dev/
            |-- dev.tf
        |-- prod/
            |-- prod.tf
    resources/
        |-- main.tf # module, etc. for reusable configuration
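
As a rough sketch, each environment file would then call the shared module with its own values. The module name, variables, and values here are illustrative assumptions, not taken from your pipeline:

```hcl
# environments/dev/dev.tf -- illustrative sketch; variable names are assumptions
module "vm" {
  source = "../../resources"

  environment = "dev"
  vm_size     = "Standard_B2s"
  vm_count    = 1
}
```

The prod configuration would call the same module with different values (a larger `vm_size`, a higher `vm_count`), so the machine-type differences live in a few lines per environment rather than in duplicated resource blocks.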

There are some fantastic documents and write-ups on structuring Terraform projects that can help.

There are many other patterns that will work; it all depends on your scale and development workflow. As for the second part of your question:

…how do I manage the variable files for those machine types?

You’re right, the variable files can get unruly! One approach is to add a pipeline step that retrieves configuration from a central store (for example, an Azure DevOps variable group or Azure Key Vault) and injects it into the run as a `.tfvars` file. This is essentially what Terraform Enterprise does with workspace variables. I’ve also scripted the composition of variable files in a pipeline, but it required an up-front investment to build the logic.
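
As a minimal sketch of that composition step, a script in the pipeline could write a `.tfvars` file from pipeline variables before `terraform plan` runs. The variable names (`ENVIRONMENT`, `VM_SIZE`, `VM_COUNT`) are hypothetical, stand-ins for whatever your pipeline defines:

```shell
#!/bin/sh
# Compose a per-environment .tfvars file from pipeline variables.
# ENVIRONMENT, VM_SIZE, and VM_COUNT are assumed to be set by the
# Azure DevOps pipeline; the defaults below are just for illustration.
set -eu

ENVIRONMENT="${ENVIRONMENT:-dev}"
VM_SIZE="${VM_SIZE:-Standard_B2s}"
VM_COUNT="${VM_COUNT:-1}"

TFVARS_FILE="environments/${ENVIRONMENT}/generated.auto.tfvars"
mkdir -p "environments/${ENVIRONMENT}"

# Terraform automatically loads *.auto.tfvars files from the working
# directory, so no extra -var-file flag is needed at plan time.
cat > "$TFVARS_FILE" <<EOF
environment = "${ENVIRONMENT}"
vm_size     = "${VM_SIZE}"
vm_count    = ${VM_COUNT}
EOF

echo "Wrote ${TFVARS_FILE}"
```

This keeps the pipeline variables as the single source of truth while giving Terraform an ordinary variable file, with no token replacement inside the `.tf` files themselves.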

And if I want to integrate Chef to the Terraform CI/CD pipeline, how would I achieve that?

In general, I prefer to use Packer to build an immutable image instead of running provisioners at apply time. Packer has a chef-solo provisioner. If there are post-provisioning steps that can’t be baked into the image, you may be able to use the remote-exec provisioner to trigger chef-solo. As you mentioned, there is also a Chef provisioner in Terraform if you choose to run a Chef server. There may be community plugins for chef-solo, but they may not be maintained.
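
For the remote-exec route, a hedged sketch might look like the following. The resource names, SSH details, and paths are all assumptions for illustration, and it presumes a `solo.rb` and node JSON already exist alongside your cookbooks:

```hcl
# Illustrative only: azurerm_linux_virtual_machine.vm and
# azurerm_public_ip.vm are assumed to exist elsewhere in the config.
resource "null_resource" "chef_solo" {
  # Re-run chef-solo whenever the VM is replaced.
  triggers = {
    vm_id = azurerm_linux_virtual_machine.vm.id
  }

  connection {
    type        = "ssh"
    host        = azurerm_public_ip.vm.ip_address
    user        = "adminuser"                # assumed admin username
    private_key = file("~/.ssh/id_rsa")      # assumed key path
  }

  # Upload local cookbooks plus solo.rb / node.json (paths assumed).
  provisioner "file" {
    source      = "chef/"
    destination = "/tmp/chef"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chef-solo -c /tmp/chef/solo.rb -j /tmp/chef/node.json",
    ]
  }
}
```

Note that this keeps configuration management coupled to `terraform apply`, which is exactly the coupling the Packer approach avoids; it is best reserved for steps that genuinely can’t go into the image.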

Hope this helps!
