Reboot EC2 instance using amazon-ebs builder and ansible provisioner

Hello guys,

I would like to reboot my EC2 instance using the Ansible reboot module, because some modifications need a restart to take effect.


    - name: Reboot Server to apply changes
      reboot:
        reboot_timeout: 3600

Packer template:

    "variables": {
      "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
      "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
    "builders": [
        "type": "amazon-ebs",
        "region": "eu-central-1",
        "access_key": "{{user `aws_access_key`}}",
        "secret_key": "{{user `aws_secret_key`}}",
        "associate_public_ip_address": true,
        "source_ami_filter": {
          "filters": {
            "name": "centos"
          "owners": ["**********"],
          "most_recent": true
        "instance_type": "t2.medium",
        "ssh_interface": "session_manager",
        "ssh_username": "centos",
        "iam_instance_profile": "SSMInstanceProfile",
        "ami_name": "hosting {{timestamp}}"
    "provisioners": [
          "type": "ansible",
          "playbook_file": "ansible/provision.yml",
          "inventory_directory": "ansible/inventory/shared",
          "extra_arguments": [
            "-e 'AWS_ACCESS_KEY_ID={{user `aws_access_key`}} AWS_SECRET_ACCESS_KEY={{user `aws_secret_key`}}'"  
          "ansible_env_vars": [ "ANSIBLE_TIMEOUT=3000" ]

After executing the reboot task, Packer can't reconnect to the machine:

amazon-ebs: TASK [ec2-setup : Reboot Server to apply changes] *******************
==> amazon-ebs: ssh: handshake failed: read tcp> read: connection reset by peer

Is there a special option to get this working, or is rebooting not possible when using Ansible with Packer?


Hi! Try turning off the ssh proxy server that Packer uses to send Ansible commands to instances, using use_proxy: false – that should allow ansible to properly manage the connection through the reboot. (The ability to disable this proxy is a fairly recent feature introduced in Packer v1.5.6 and will become the default in v1.7.0)
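In the JSON template that would just be an extra flag on the ansible provisioner block, something like this (a sketch based on the template above, not tested against your setup):

    "provisioners": [
        {
          "type": "ansible",
          "playbook_file": "ansible/provision.yml",
          "use_proxy": false
        }
    ]

With the proxy disabled, Ansible connects straight to the instance instead of going through Packer's local SSH proxy, so it can handle the dropped connection and reconnect after the reboot itself.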

Hey SwampDragons. Thank you very much :slight_smile: . This feature is new to me. I tested a bit, and use_proxy: false combined with ssh_interface: session_manager throws an unreachable error: Failed to connect to the host via ssh: ssh: connect to host localhost port 8065: Cannot assign requested address.

I found a fresh pull request that may fix the reboot issue when ssh_interface: session_manager is set.

So I temporarily moved from session_manager to public_ip, but then I ran into the next thing. I'm using the ec2_eip module from Ansible to change the public IP address from a random one to a specific one I need, but this understandably doesn't work either, because Packer has no idea the IP has changed. Is there any possibility to change the IP address of that EC2 instance, or to just start the instance with one specific Elastic IP?
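For context, the task that swaps the IP mid-build looks roughly like this (a sketch; the IP and region are placeholders for my real values):

    - name: Associate a specific Elastic IP with the build instance
      ec2_eip:
        device_id: "{{ ansible_ec2_instance_id }}"
        public_ip: 203.0.113.10
        region: eu-central-1

As soon as this task runs, the instance's public address changes and Packer keeps trying the old one.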

Oh, interesting. I don’t think I’d tried using Ansible with the session manager before.

Changing IP addresses is definitely going to mess up your build. We currently don’t have a mechanism allowing Packer to swap IPs halfway through provisioning; you’d need to end the build and launch a new builder where you set ssh_host to your new IP… but I don’t think that the IP would persist between instances.

We’re figuring out what it would take to implement this feature, and will hopefully do it sometime in 2021, but that doesn’t help you now.

We’re going to try to get that PR fixing the session manager and reboots merged before the next release; I’ll try to remember to update you here once there’s a dev release for you to try.

I got it running. Until that feature is implemented, I'm going to use two Packer build processes: the first installs all the components up to the point where the reboot is required, and the second handles the rest. That means I end up with two AMIs, but that's okay.
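In case anyone wants to do the same split: the second build can pick up the AMI produced by the first one via source_ami_filter, matching on the ami_name pattern from the first template (a sketch; "hosting *" assumes the ami_name "hosting {{timestamp}}" from my template above):

    "source_ami_filter": {
      "filters": {
        "name": "hosting *"
      },
      "owners": ["self"],
      "most_recent": true
    }

With "most_recent": true the second build always continues from the latest AMI the first build produced.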

Thank you very much for your help :slight_smile: