Ansible Provisioner Failing on "Gathering Facts" Stage

I am running this from a macOS laptop against a Linux EC2 instance in AWS, and I only get this error when I run more than one Ansible provisioner back to back. The first provisioner runs fine and uses the same user and port as the last one. I tested this with very basic playbooks where the Ansible provisioner generated the inventory files itself, and everything worked. When I provide my own pre-generated inventory files, the first provisioner works and the second one fails during the “Gathering Facts” stage.

    amazon-ebs.root_ebs: TASK [Gathering Facts] *********************************************************
    amazon-ebs.root_ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: deployer
    amazon-ebs.root_ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=61244 -o 'IdentityFile="/var/folders/zv/vww0ltn96s7__8dpgpd1yqgxzgglh2/T/ansible-key737562872"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="deployer"' -o ConnectTimeout=10 -o IdentitiesOnly=yes -o ControlPath=/Users/rsilva/.ansible/cp/51af679d85 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /tmp `"&& mkdir "` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `" && echo ansible-tmp-1633247739.178648-5324-111217579249683="` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `" ) && sleep 0'"'"''
    amazon-ebs.root_ebs: <127.0.0.1> (123, b'', b'')
    amazon-ebs.root_ebs: <127.0.0.1> Failed to connect to the host via ssh:
    amazon-ebs.root_ebs: fatal: [default]: UNREACHABLE! => {
    amazon-ebs.root_ebs:     "changed": false,
    amazon-ebs.root_ebs:     "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `\" && echo ansible-tmp-1633247739.178648-5324-111217579249683=\"` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `\" ), exited with result 123",
    amazon-ebs.root_ebs:     "unreachable": true
    amazon-ebs.root_ebs: }

I am using the proxy and specifying a constant local port that both provisioners share. I suspect the Ansible provisioner is repeating some initial setup that the first provisioner already did, which causes the second one to fail to create its temporary directories. Both provisioners use the same user, so there should not be any permission errors, and the first provisioner runs without any. If I swap the order of the provisioners, whichever runs first still works and the second hits this error. Any ideas? Could it be because I am reusing the same port? If so, I am not sure how I am supposed to let Packer generate a dynamic port when I supply an existing inventory file.

Here are the provisioners:

    # Deploy ActiveMQ Database via Ansible
    provisioner "ansible" {
      inventory_file  = "./ansible-inventory/cloudfunc"
      playbook_file   = "./ansible-deploy/playbook-amqdb.yml"
      extra_arguments = [
        "--become"
      ]
      user       = "deployer"
      local_port = 61244
    }

    # Deploy ActiveMQ Service via Ansible
    provisioner "ansible" {
      inventory_file  = "./ansible-inventory/cloudfunc"
      playbook_file   = "./ansible-deploy/playbook-amq.yml"
      extra_arguments = [
        "--become"
      ]
      user       = "deployer"
      local_port = 61244
    }

Did you ever figure this out? I’m facing a similar problem.

@matt3, It has been a while, but I think not specifying the port and/or skipping the “Gathering Facts” stage (e.g. with --skip-tags always) let me get around this issue. I can now trigger multiple Ansible playbooks without problems. Skipping “Gathering Facts” may not be an option if your playbooks need that information; in my case they did not, since I only run specific setup stages/tags against the VM and moved the configuration stages that rely on instance-specific information to run later, after the final VM is provisioned for use.
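
For reference, a minimal sketch of what that ended up looking like (illustrative only; the playbook path is just an example, and it assumes you either let Packer generate the inventory or your static inventory does not hard-code the SSH port):

    # No local_port, so Packer picks a free port for each provisioner run.
    # Ansible's implicit fact-gathering task is tagged "always", so
    # --skip-tags always drops it (along with anything else tagged always).
    provisioner "ansible" {
      playbook_file = "./ansible-deploy/playbook-amqdb.yml"
      user          = "deployer"
      extra_arguments = [
        "--become",
        "--skip-tags", "always"
      ]
    }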

Thanks for your reply.

I eventually found use_proxy = false and it solved my problems. It seems the option currently defaults to true but will switch to false in a future release.
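
For anyone else hitting this, that is just one more attribute on each provisioner block. A rough sketch, assuming you let Packer generate the inventory (a static inventory file would need the instance’s real address, since Ansible no longer goes through the local proxy):

    # Connect to the instance directly instead of through Packer's SSH proxy
    # on 127.0.0.1.
    provisioner "ansible" {
      playbook_file   = "./ansible-deploy/playbook-amq.yml"
      user            = "deployer"
      extra_arguments = ["--become"]
      use_proxy       = false
    }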
