Run a shell script on Ubuntu after all the VMs are created

I am deploying multiple EC2 instances, and I want to run a shell script once all the instances are deployed, as my script needs the IPs of all the instances to work.


I’m not sure Terraform is the right place to be running such a script; this feels like you might be falling into the trap of trying to do everything with Terraform, instead of using it for what it is good at and reaching for other programming and sysadmin tools when they are more appropriate.

But one possible option would be to have a null_resource that depends on all of the VM resources, with a local-exec provisioner triggering a command.
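A minimal sketch of that approach, assuming a counted aws_instance resource (the resource names, script path, and variable names here are illustrative, not from the original question):

```hcl
resource "aws_instance" "vm" {
  count         = 3
  ami           = var.ami_id
  instance_type = "t3.micro"
}

resource "null_resource" "post_deploy" {
  # Re-run the provisioner whenever the set of private IPs changes.
  triggers = {
    instance_ips = join(",", aws_instance.vm[*].private_ip)
  }

  provisioner "local-exec" {
    # Pass all of the private IPs to the script as arguments.
    command = "./configure.sh ${join(" ", aws_instance.vm[*].private_ip)}"
  }
}
```

The splat reference inside `triggers` also gives the null_resource an implicit dependency on every instance, so the script only runs after all of them exist.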

Two points:

  1. If you just need to execute a script on each provisioned EC2 instance, you can use user_data.
  2. If you need the EC2 instances’ IP addresses, Terraform can output attributes such as private_dns, public_dns, and public_ip.
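For example (resource and output names are illustrative):

```hcl
resource "aws_instance" "vm" {
  count         = 3
  ami           = var.ami_id
  instance_type = "t3.micro"

  # Runs once on the first boot of each instance.
  user_data = file("${path.module}/bootstrap.sh")
}

output "instance_public_ips" {
  value = aws_instance.vm[*].public_ip
}

output "instance_private_dns" {
  value = aws_instance.vm[*].private_dns
}
```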

In addition to what others mentioned, I’ll note that EC2 instances can fail due to problems with the physical infrastructure they run on, so static solutions such as passing all of the IP addresses to all of the instances only at creation time tend to be brittle: if one of your instances fails and you create a new one to replace it, how will the existing instances learn that the old IP address is no longer valid and that there is a new IP address to use?

For situations like this I would typically recommend a more dynamic solution where the instances all register themselves into a central location as they boot and then periodically check that location to see if any new IP addresses have appeared or old IP addresses have disappeared from that location. This means that the only information an EC2 instance needs when it boots is the location where it should look to learn about the other instances and any credentials required to read and write that location.

If you want to do this without running any additional services, one way you can set this up in AWS EC2 is to configure all of your instances as belonging to a particular security group and then write some code that will run in your instances and politely poll ec2:DescribeNetworkInterfaces with the group-id filter to find all of the network interfaces currently belonging to that group. From there you can obtain the IP addresses of those network interfaces, which should all belong to the EC2 instances you set up in that security group.
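To make the polling step concrete, here is a sketch of the extraction logic, assuming the response shape returned by ec2:DescribeNetworkInterfaces. In a real instance you would obtain the response via boto3, e.g. `boto3.client("ec2").describe_network_interfaces(Filters=[{"Name": "group-id", "Values": ["sg-..."]}])`; the function below only shows how to pull the private IPs out of that response:

```python
def peer_private_ips(response):
    """Return the set of private IPs attached to the matching ENIs.

    `response` is a dict shaped like a DescribeNetworkInterfaces result.
    """
    ips = set()
    for eni in response.get("NetworkInterfaces", []):
        # Each ENI can carry multiple private IP addresses.
        for addr in eni.get("PrivateIpAddresses", []):
            ips.add(addr["PrivateIpAddress"])
    return ips


# Example with a hand-built response fragment:
sample = {
    "NetworkInterfaces": [
        {"PrivateIpAddresses": [{"PrivateIpAddress": "10.0.0.5"}]},
        {"PrivateIpAddresses": [{"PrivateIpAddress": "10.0.0.6"},
                                {"PrivateIpAddress": "10.0.0.7"}]},
    ]
}
# peer_private_ips(sample) → {"10.0.0.5", "10.0.0.6", "10.0.0.7"}
```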

If you detect that the set of IP addresses has changed since the last time you checked, you can update the configuration of whatever software is relying on those IP addresses, so that the system will gradually self-repair if you add or remove specific instances later.

An advantage of using security groups for this is that EC2 will automatically “register” each new EC2 instance into the group as a side-effect of creating the network interface, so you will only need to worry about implementing the querying step. You can use instance profiles to give your EC2 instances access to call the ec2:DescribeNetworkInterfaces action without you needing to explicitly set up credentials, and you can optionally also use the security groups as part of packet filtering if you want to ensure that these servers can only connect to each other and nothing else can connect to them.


Dear All,
Thank you for sharing your ideas with me. My problem was not configuring security groups. My goal was to add the private IPs and hostnames of all servers to the /etc/hosts file for internal communication.
But after looking around, I don’t think Terraform is a good way to achieve this goal.

Thank you all!