I am running this from a macOS laptop against a Linux EC2 instance in AWS, and I only get this error when running more than one Ansible provisioner in sequence. The first provisioner runs fine, and it uses the same user and port as the last provisioner. I tested this with very basic playbooks set up so that the Ansible provisioner generates the inventory files itself, and it worked fine. When I provide my own generated inventory files, the first provisioner works and the second one fails during the “Gathering Facts” stage.
amazon-ebs.root_ebs: TASK [Gathering Facts] *********************************************************
amazon-ebs.root_ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: deployer
amazon-ebs.root_ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=61244 -o 'IdentityFile="/var/folders/zv/vww0ltn96s7__8dpgpd1yqgxzgglh2/T/ansible-key737562872"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="deployer"' -o ConnectTimeout=10 -o IdentitiesOnly=yes -o ControlPath=/Users/rsilva/.ansible/cp/51af679d85 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /tmp `"&& mkdir "` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `" && echo ansible-tmp-1633247739.178648-5324-111217579249683="` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `" ) && sleep 0'"'"''
amazon-ebs.root_ebs: <127.0.0.1> (123, b'', b'')
amazon-ebs.root_ebs: <127.0.0.1> Failed to connect to the host via ssh:
amazon-ebs.root_ebs: fatal: [default]: UNREACHABLE! => {
amazon-ebs.root_ebs: "changed": false,
amazon-ebs.root_ebs: "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `\" && echo ansible-tmp-1633247739.178648-5324-111217579249683=\"` echo /tmp/ansible-tmp-1633247739.178648-5324-111217579249683 `\" ), exited with result 123",
amazon-ebs.root_ebs: "unreachable": true
amazon-ebs.root_ebs: }
I am using the proxy and specifying a constant local port that is shared by both provisioners. I suspect the Ansible provisioner is trying to do some initial setup that was already done by the first provisioner, which causes the second one to fail to create its temporary directories. Both provisioners use the same user, so there should not be any permission errors, and the first provisioner runs without errors. If I swap the order of the provisioners, the first one still runs fine and the second one hits this error. Any ideas? Could it be because I am using the same port? If so, I'm not sure how I'm supposed to let Packer generate a dynamic port while pointing it at an existing inventory file.
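Following the hint in the error message itself ("Consider changing the remote tmp path in ansible.cfg"), one thing I could try is pinning `remote_tmp` in an `ansible.cfg` next to the playbooks. This is only a sketch of that suggestion; I have not verified that it resolves the failure:

```ini
; ansible.cfg (hypothetical, per the hint in the error message)
[defaults]
; Root the temporary directory under /tmp instead of the default ~/.ansible/tmp
remote_tmp = /tmp/.ansible/tmp
```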
Here are the provisioners:
# Deploy ActiveMQ Database via Ansible
provisioner "ansible" {
  inventory_file  = "./ansible-inventory/cloudfunc"
  playbook_file   = "./ansible-deploy/playbook-amqdb.yml"
  extra_arguments = [
    "--become"
  ]
  user       = "deployer"
  local_port = 61244
}
# Deploy ActiveMQ Service via Ansible
provisioner "ansible" {
  inventory_file  = "./ansible-inventory/cloudfunc"
  playbook_file   = "./ansible-deploy/playbook-amq.yml"
  extra_arguments = [
    "--become"
  ]
  user       = "deployer"
  local_port = 61244
}
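For context, the shared inventory file both provisioners point at looks roughly like this (approximated from the log above, where the host is `default` at 127.0.0.1:61244 as user `deployer`; the actual file may differ):

```ini
; ./ansible-inventory/cloudfunc (approximate contents)
default ansible_host=127.0.0.1 ansible_port=61244 ansible_user=deployer
```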