Configure both local development and production environments with Terraform

I wonder if somebody has used Terraform for deploying both a production environment (in a cloud, with load balancers, etc.) and a local development environment (just one local machine, simplified configuration, no HA, etc.).

My particular concern is how much code reuse Terraform would allow in such a case. For instance, in Terraform the components for managing local files and remote files, and for executing local and remote commands, are different. In Ansible, by contrast, you can just change the connection type to “local” and the components responsible for managing files magically start working on the local machine the same way they work on a remote machine.

I’m evaluating Terraform for such a use case and would be grateful if someone shared their own experiences/considerations regarding it.

Hi @AndreiPashkin,

In situations where your environments are conceptually similar but structurally different, a common answer is to use Module Composition patterns to share the parts that are common while still allowing for differences.

That sort of approach would start by identifying suitable subsystem boundaries in your system: places where both environments use the same structure (e.g. they differ only in the number of objects of a given type, not in how the object types are interconnected), or where both environments include the same concept but implement it in different ways. Those are roughly where you would draw the boundaries between your shared modules.

Then you can write a separate configuration (that is, a separate root module) for each environment, and configure and wire together the shared modules differently in each one, sharing what makes sense and switching in different implementations where it does not.
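
As a minimal sketch of that pattern (the module path, variable names, and values here are hypothetical, just to show the shape): a shared module declares the knobs that vary, and each environment’s root module sets them differently.

# modules/app/variables.tf - inputs of the shared module
variable "environment" {
  type = string
}

variable "replica_count" {
  type    = number
  default = 1
}

# environments/prod/main.tf - production root module
module "app" {
  source        = "../../modules/app"
  environment   = "prod"
  replica_count = 3
}

# environments/dev/main.tf - development root module
module "app" {
  source        = "../../modules/app"
  environment   = "dev"
  replica_count = 1
}

Anything that differs structurally rather than numerically (e.g. a load balancer that exists only in production) would stay in the production root module rather than in the shared module.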

With that said, I suspect that by “development environment” you may mean software running on your local workstation. I think it’s fair to say that Terraform is not a great fit for that use case, because it’s primarily aimed at long-lived objects in remote systems rather than short-lived objects on the current system. Some folks have certainly had success representing development environments with Terraform, but it’s not Terraform’s focus, and in particular I’d agree with your suspicion that it would probably be quite hard to share much between those two cases, in part because the workflow needs of a local development environment are pretty different from those of a production environment.

What you’re working on might be a better fit for another HashiCorp product, Waypoint. At the time of writing Waypoint is a relatively new project, so it might not be a good fit for all situations, but it is perhaps worth considering. You can see the Waypoint Roadmap for a high-level overview of what’s planned for the future. Waypoint can be a good companion for Terraform in the sense that the non-development environments will typically need some long-lived supporting cloud infrastructure that Waypoint can then deploy into, and managing that infrastructure is much more in Terraform’s wheelhouse.


I think it should be possible to write a custom “local machine” provider that would deploy a restricted SSH server allowing only local connections, so that provisioners would be able to connect to it and operate on the local machine over SSH.

Would you say that this is the right direction of thought?

Since a Terraform provider is just arbitrary machine code, it would indeed be possible in principle to write a provider like the one you describe.

The main thing to keep in mind is Terraform’s assumption that the objects managed by resource blocks will “survive” between Terraform runs. If you use such a provider only with local state on the single system it’s managing, then in principle it could uphold that assumption. However, it probably wouldn’t be appropriate for a Terraform provider to embed a server in itself or to launch a server directly (rather than, say, arranging for the OS to start one), because the provider plugin exits immediately after the Terraform run completes.
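
As an aside, “local state” here just means using Terraform’s default local backend, which keeps the state file on the same machine where Terraform runs. A minimal sketch (the path shown is also the default, so this block is illustrative rather than required):

terraform {
  backend "local" {
    # State stays on the same machine the provider manages.
    path = "terraform.tfstate"
  }
}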

If you can make what the provider manages behave sufficiently like objects managed in a remote system via an API, then it would work; but if you deviate from those assumptions you’re likely to run into odd behavior, just because Terraform isn’t really designed for these use cases.

One example I’ve seen work okay is teams where each developer has Docker installed locally, using the kreuzwerker/docker provider to interact with the Docker daemon running on localhost. That Docker daemon’s behavior is enough like a “remote API” that it can work okay, although things can get a little messy if you routinely reboot your machine without running terraform destroy first: after a reboot the Docker daemon loses some (but not all) of its state, which may confuse a subsequent terraform plan.
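
For example, a minimal configuration in that style might look like the following, assuming the provider’s 3.x series; the image, container name, and ports are arbitrary placeholders:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {
  # Talk to the Docker daemon on the local machine.
  host = "unix:///var/run/docker.sock"
}

resource "docker_image" "nginx" {
  name = "nginx:1.25"
}

resource "docker_container" "dev" {
  name  = "local-dev-nginx"
  image = docker_image.nginx.image_id
  ports {
    internal = 80
    external = 8080
  }
}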


I’ve realized that it’s actually possible to operate on standalone machines without any custom providers, by using the null provider:

# main.tf
# Save in an empty directory, then execute terraform init and terraform apply.
terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
  }
}

# The null_resource manages no real object; the provisioner
# runs over SSH once, when the resource is created.
resource "null_resource" "test" {
  connection {
    host     = "YOUR_SSH_HOST"
    user     = "root"
    password = "YOUR_SSH_PASSWORD"
    type     = "ssh"
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello World!' > /tmp/terraform-test",
    ]
  }
}

It’s also possible to launch a local SSH server and connect to the local machine to operate on a development environment.
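
For instance, with an SSH server listening on the loopback interface (the credentials here are placeholders), the same kind of resource can target the local machine:

resource "null_resource" "local_test" {
  connection {
    # Assumes a local sshd accepting connections on 127.0.0.1.
    host     = "127.0.0.1"
    user     = "root"
    password = "YOUR_SSH_PASSWORD"
    type     = "ssh"
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello from localhost!' > /tmp/terraform-local-test",
    ]
  }
}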