Terraform & Puppet Integration Issue

I am currently working on integrating Puppet with Terraform. On an AWS EC2 instance, I created two provisioner blocks, one `file` and one `remote-exec`. The `remote-exec` provisioner block contains the Ubuntu commands for installing puppet-master, which run when I apply the Terraform template. This is my `remote-exec` provisioner, with the commands in order:

```hcl
provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "sudo apt-get install -y wget",
    "cd /tmp",
    "wget https://apt.puppetlabs.com/puppet-release-bionic.deb",
    "sudo dpkg -i puppet-release-bionic.deb",
    "sudo apt-get install -y puppet-master",
    "sudo apt policy puppet-master",
    "sudo systemctl status puppet-master.service",
    "echo 'JAVA_ARGS=\"-Xms512m -Xmx512m\"' | sudo tee -a /etc/default/puppet-master",
    "echo 'START=yes' | sudo tee -a /etc/default/puppet-master",
    "echo 'DAEMON_OPTS=\"\"' | sudo tee -a /etc/default/puppet-master",
  ]
}
```

I have noticed that when I apply my Terraform template, every command runs up until my first echo statement. For some reason I cannot append to the puppet-master file in the /etc/default directory; however, if I manually SSH into the EC2 instance and run the same command, it works. When the puppet-master file is created during installation, it has default permissions of `-rw-r--r--`. I tried to change these permissions within my Terraform code before applying it, but even with the proper command, strangely, neither the permissions nor the contents of the puppet-master file change.

My first question is: is there a way to append to a file from a bootstrap script, to avoid having to SSH in and add the lines manually? My second question is: what is the proper command in a bootstrap script to change a file's permissions on Ubuntu to allow writing?

Hi @saadillahi,

`systemctl status` returns a non-zero (i.e. non-successful) exit code if the service you name on the command line isn't currently active. My first guess, then, is that this service isn't actually running yet at the point your script reaches that command, so Terraform sees the non-successful status, assumes the script has failed, and stops executing it.

If that is the explanation, then you could work around it by running `sudo systemctl status puppet-master.service || true`, so that the result of the overall command line is always successful, because `true` always succeeds.
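The effect of `|| true` can be sketched with a plain shell snippet. Here `check_service` is a hypothetical stand-in for a `systemctl status` call against an inactive service:

```shell
#!/bin/sh
# Hypothetical stand-in for `systemctl status <unit>` when the unit is
# not active: it exits with a non-zero status (systemd uses 3 here).
check_service() {
  return 3
}

# Without `|| true`, the non-zero status propagates; a provisioner that
# aborts on the first failing command would stop here.
check_service
echo "exit status without '|| true': $?"

# With `|| true`, the overall command line always exits 0.
check_service || true
echo "exit status with '|| true': $?"
```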

However, provisioners are a last resort and so I think this would be a problem better solved using cloud-init, assuming that’s available in whatever VM image you are booting from.

Cloud-init's own "Cloud Config" configuration language has features that seem like they should be able to achieve a similar result, for example:

- `package_update` and `packages`, for refreshing the apt index and installing packages
- `runcmd`, for running arbitrary shell commands as root during first boot
- `write_files`, for creating files such as those under /etc/default

Using cloud-init here will avoid the need for Terraform to connect directly to your VM, and allows a declarative approach, which is typically a better fit for Terraform's model than an imperative one.
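As a rough sketch, the commands from your provisioner could translate into a cloud-config like the following. The package names and URL are copied from your question; treat this as a starting point rather than a tested configuration:

```yaml
#cloud-config
# Refresh the apt index and install wget before runcmd executes.
package_update: true
packages:
  - wget

# runcmd entries run as root during first boot, so no sudo is needed
# and appending to files under /etc/default works directly.
runcmd:
  - wget -P /tmp https://apt.puppetlabs.com/puppet-release-bionic.deb
  - dpkg -i /tmp/puppet-release-bionic.deb
  - apt-get update
  - apt-get install -y puppet-master
  - echo 'JAVA_ARGS="-Xms512m -Xmx512m"' >> /etc/default/puppet-master
  - echo 'START=yes' >> /etc/default/puppet-master
  - echo 'DAEMON_OPTS=""' >> /etc/default/puppet-master
```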

When you are using Amazon EC2, the "cloud config" settings should be placed into the `user_data` argument of your EC2 instance, and then cloud-init will automatically read that data from the internal EC2 user-data API during early system boot.
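For example (a sketch only; the resource name, AMI ID, and `cloud-config.yaml` filename are placeholders for whatever your configuration uses):

```hcl
resource "aws_instance" "puppet_master" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.medium"

  # cloud-init reads this via the EC2 user-data API during early boot,
  # so Terraform never needs to open an SSH connection to the instance.
  user_data = file("${path.module}/cloud-config.yaml")
}
```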