Packer can't ssh to instance after setting system-wide crypto policy

I’m dealing with a problem where I use packer to create an AWS EC2 AMI after running an Ansible playbook on a RHEL 8 instance (ami-09b947b170ccd0dbc / RHEL-8.1.0_HVM-20191029-x86_64-0-Hourly2-GP2), and packer is unable to ssh to the working instance after a reboot step in the playbook. The command in the playbook that causes the problem is update-crypto-policies --set FIPS.
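For reference, the relevant part of the playbook looks roughly like this. This is a sketch reconstructed from the task names in the log below; the exact modules and options are assumptions, not the actual playbook:

```yaml
# Sketch only -- task names match the log, module choices are assumed.
- name: Set system-wide crypto policy
  become: true
  command: update-crypto-policies --set FIPS

- name: Reboot
  become: true
  shell: sleep 2 && reboot
  async: 1
  poll: 0

# Delegated to localhost, matching "[default -> localhost]" in the log.
- name: Wait for reboot to complete
  wait_for:
    host: "{{ ansible_host }}"
    port: 22
    delay: 30
    timeout: 300
  delegate_to: localhost

- name: Get uptime
  command: uptime
```

The failure happens at the first task that needs a fresh ssh connection after the reboot, which is consistent with the handshake error below.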

  • packer hangs after the task for the command runs and the instance reboots:
    amazon-ebs: TASK [Set system-wide crypto policy] *******************************************
    amazon-ebs: changed: [default]
    amazon-ebs: TASK [Reboot] ******************************************************************
    amazon-ebs: changed: [default]
    amazon-ebs: TASK [Wait for reboot to complete] *********************************************
    amazon-ebs: ok: [default -> localhost]
    amazon-ebs: TASK [Get uptime] **************************************************************
==> amazon-ebs: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Cancelling build after receiving interrupt
==> amazon-ebs: Terminating the source AWS instance...
    amazon-ebs:  [ERROR]: User interrupted execution
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.
Cleanly cancelled builds after being interrupted.
  • packer creates the AMI fine if the task is skipped

Additionally, if I do the following steps without packer,

  1. Spin up an instance using the ami-09b947b170ccd0dbc image
  2. Perform update-crypto-policies --set FIPS on the instance
  3. Reboot the instance and verify I can still ssh into it
  4. Use the AWS EC2 dashboard to create an image based on the instance
  5. Use the AMI created in the previous step as the starting image for the packer command

packer can’t ssh to the instance after it’s started:

==> amazon-ebs: Prevalidating AMI Name: Hardened-RHEL8_HVM_EBS-2020-01-
    amazon-ebs: Found Image ID: ami-0cb229954d8bb7f27
==> amazon-ebs: Creating temporary keypair: packer_5e172f7b-0171-546a-c8ed-428d8da514f3
==> amazon-ebs: Creating temporary security group for this instance: packer_5e172f7d-b703-da11-aabb-66a09eea58da
==> amazon-ebs: Authorizing access to port 22 from [] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-06a0ee795309c1fad
==> amazon-ebs: Waiting for instance (i-06a0ee795309c1fad) to become ready...
==> amazon-ebs: Using ssh communicator to connect:
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
==> amazon-ebs: Terminating the source AWS instance...

I created this gist with files related to the issue.

Does anyone have any suggestions about how to fix this? Should I open a packer issue on GitHub for this?

Take a look at this discussion on Red Hat’s forum:

Thanks. That’s a good article and I’m digesting it. I’m not sure it’s 100% applicable to my situation, because if I start an AWS instance using an AMI that has the FIPS policy set, I have no problem sshing to it. packer seems to be doing something different.

Also, the article talks about FIPS mode, but does update-crypto-policies set the mode? It sets the policy, but I must confess I’m not quite sure what the difference is.
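As far as I can tell, the two are separate things on RHEL 8 and can be inspected separately. These commands only exist on a RHEL 8 host, and the distinction below is my understanding rather than anything authoritative:

```
# Show the active system-wide crypto policy (what update-crypto-policies sets):
update-crypto-policies --show

# Report whether FIPS *mode* is enabled (kernel booted with fips=1
# in addition to the FIPS crypto policy):
fips-mode-setup --check

# fips-mode-setup --enable turns on FIPS mode for the whole system,
# which includes setting the FIPS crypto policy; by contrast,
# update-crypto-policies --set FIPS changes only the userspace policy.
```

So if the article is about FIPS mode, a machine with only the FIPS crypto policy set may be in a different state than what it describes.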