Ansible Provisioner Saturates / Blocks Network on Large File Copy

This issue only seems to happen when I am copying a large-ish file (~200 MB). The Ansible provisioner runs without problems until it hits that large file copy; then my local web browsing, email, and even other Packer runs are impacted and intermittently fail to resolve their respective hosts. Is anyone else hitting this? I'm not sure what information to provide here to help troubleshoot.

I ran these Packer builds in parallel, and only the build that includes a “deploy artifact” (remote file copy) step succeeded; the other builds (logs below) all failed because they were unable to establish an SSH connection while that multi-minute deploy step was in progress. I'm not sure why this happens when run via the Ansible provisioner but not when the same playbook is run via Ansible directly (outside of Packer). Additionally, if I omit the build that includes the “deploy artifact” step, all of the parallel builds complete fine.
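
For reference, the “deploy artifact” step boils down to an Ansible copy task along these lines (a simplified sketch; the file paths and names are placeholders, not my exact playbook):

    # roles/service/tasks/main.yml (illustrative sketch only)
    - name: deploy artifact
      ansible.builtin.copy:
        src: files/service-artifact.tar.gz    # ~200 MB file pushed from the control host
        dest: /opt/service/service-artifact.tar.gz
        owner: root
        group: root
        mode: "0644"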

Build Service 1 Image (which causes the issue):

    amazon-ebs.root_ebs: TASK [service : deploy artifact] *******************************************
    amazon-ebs.root_ebs: changed: [default]

Build Service 2 Image:

    amazon-ebs.root_ebs: ok: [default] => (item=git)
    amazon-ebs.root_ebs: changed: [default] => (item=mutt)
    amazon-ebs.root_ebs: changed: [default] => (item=libuuid-devel.x86_64)
    amazon-ebs.root_ebs: changed: [default] => (item=expat-devel)
    amazon-ebs.root_ebs: changed: [default] => (item=e2fsprogs-devel)
    amazon-ebs.root_ebs: changed: [default] => (item=cppunit)
==> amazon-ebs.root_ebs: EOF

Build Service 3 Image:

    amazon-ebs.root_ebs: TASK [mariadb : Cleanup - my.cnf files] ****************************************
    amazon-ebs.root_ebs: ok: [default] => (item=/home/mysql/.my.cnf)
    amazon-ebs.root_ebs: ok: [default] => (item=/root/.my.cnf)
==> amazon-ebs.root_ebs: Timeout during SSH handshake