How to deploy Apache .conf files during provisioning?

We are deploying an Apache web server using Terraform to host some applications.

We need to be able to deploy our own custom .conf files with several Apache settings including VirtualHosts, etc.

Most articles online suggest using Provisioners:

provisioner "file" {
    source      = "vhost-template.conf"
    destination = "/home/ubuntu/vhosts.conf"

    connection {
        type        = "ssh"
        user        = "ubuntu"
        private_key = "${file("/Path/to/EC2-Key-Pair")}"
        host        = "${self.public_dns}"
    }
}

But the official documentation is pretty much against the use of Provisioners.

What’s the official suggestion for this use case?

We would prefer handling this during provisioning with Terraform as opposed to including config files in the applications’ deployment artifacts.


Terraform is the wrong tool to be using for this job.

Terraform is primarily for provisioning infrastructure by calling cloud and service provider APIs.

For performing configuration management within a server, use a tool intended to operate in that environment, such as Ansible, Puppet, etc. - or even just simple ad-hoc scripts along with config managed in Git.

@maxb Interesting. I think Ansible Tower is already part of our infrastructure stack so I’ll follow up on that to see how it would work. We’re new to these tools and we’re figuring it out as we go.

A quick search turned up a good enough example of how to configure Apache using an Ansible Playbook. Sharing for future visitors of this thread:
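For reference, a minimal sketch of such a playbook; the host group, template name, target paths, and handler are illustrative assumptions, not the exact example I found:

```yaml
# Illustrative sketch only: host group, paths, and names are assumptions.
- hosts: webservers
  become: true
  tasks:
    - name: Install Apache
      ansible.builtin.apt:
        name: apache2
        state: present

    - name: Deploy the VirtualHost configuration from a template
      ansible.builtin.template:
        src: vhost-template.conf
        dest: /etc/apache2/sites-available/vhosts.conf
      notify: Reload Apache

    - name: Enable the site (skipped if the symlink already exists)
      ansible.builtin.command: a2ensite vhosts
      args:
        creates: /etc/apache2/sites-enabled/vhosts.conf
      notify: Reload Apache

  handlers:
    - name: Reload Apache
      ansible.builtin.service:
        name: apache2
        state: reloaded
```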

Thank you!

I do broadly agree that it would be better to use explicit configuration management software if you need to change the configuration on your system once it is running.

With that said, there are some other options discussed in the documentation:

  • Build a custom machine image that already includes the software and configuration you need, so that the system will immediately begin performing the intended job as soon as it boots, without any external intervention.

    One way to do this is with HashiCorp Packer, which uses a language similar to the Terraform language to describe the steps to start from a base machine image and then customize it to create a new one.

  • If your existing machine image already includes cloud-init and runs it on boot, you can use the “user data” mechanism of your chosen VM platform to pass instructions that cloud-init will execute during boot. Among other things, cloud-init supports writing files to disk at specific locations during the first boot of the system, as long as the files are small enough to fit in the user data blob your VM platform supports.
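To illustrate the first option, here is a minimal, untested Packer sketch; the region, instance type, source AMI filter, and install commands are all assumptions you would adapt to your environment:

```hcl
# Illustrative sketch: region, AMI filter, and paths are assumptions.
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

source "amazon-ebs" "apache" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "apache-vhosts-${formatdate("YYYYMMDDhhmm", timestamp())}"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.apache"]

  # Bake the custom VirtualHost configuration into the image itself.
  provisioner "file" {
    source      = "vhost-template.conf"
    destination = "/tmp/vhosts.conf"
  }

  provisioner "shell" {
    inline = [
      "sudo apt-get update && sudo apt-get install -y apache2",
      "sudo mv /tmp/vhosts.conf /etc/apache2/sites-available/vhosts.conf",
      "sudo a2ensite vhosts",
    ]
  }
}
```

Instances launched from the resulting AMI then need no external intervention at boot.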

There are some links in the docs to more details on both of these options, in case you’d like to learn more.
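As a concrete illustration of the second option, a Terraform `aws_instance` passing a cloud-config payload via user data might look roughly like this, assuming an image that already ships with Apache and cloud-init; the AMI ID and the VirtualHost contents are placeholders:

```hcl
# Sketch only: AMI ID, instance type, and file contents are placeholders.
# Assumes Apache and cloud-init are already present in the image.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder Ubuntu AMI
  instance_type = "t3.micro"

  # cloud-init "write_files" places the config on first boot;
  # "runcmd" then enables the site and reloads Apache.
  user_data = <<-EOT
    #cloud-config
    write_files:
      - path: /etc/apache2/sites-available/vhosts.conf
        permissions: "0644"
        content: |
          <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example
          </VirtualHost>
    runcmd:
      - [a2ensite, vhosts]
      - [systemctl, reload, apache2]
  EOT
}
```

Keep in mind that user data typically only runs on the instance's first boot, so this suits initial setup rather than ongoing configuration changes.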