Packer data: NoCredentialProviders: no valid providers in chain. Deprecated

Hi there!

I have been testing a Packer template with the HCL2 syntax for the past few days and have finally committed it to our repo. Although it was building fine manually (on my machine), I am getting the errors below in CI:

Error: Datasource.Execute failed: no valid credential sources for  found.
Full log
[2021-09-06T15:46:16.168Z] + packer init packer.pkr.hcl
[2021-09-06T15:46:16.434Z] + packer validate -var test=false -var version=latest packer.pkr.hcl
[2021-09-06T15:46:18.364Z] + packer inspect packer.pkr.hcl
[2021-09-06T15:46:18.627Z] Packer Inspect: HCL2 mode
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z] > input-variables:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z] var.ansible_host: "focal"
[2021-09-06T15:46:18.627Z] var.region: "eu-central-1"
[2021-09-06T15:46:18.627Z] var.test: "true"
[2021-09-06T15:46:18.627Z] var.version: "next"
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z] > local-variables:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z] local.aws_access_key: "xxx"
[2021-09-06T15:46:18.627Z] local.aws_secret_key: "xxx"
[2021-09-06T15:46:18.627Z] local.github_token: "xxx"
[2021-09-06T15:46:18.627Z] local.quay_password: "xxx"
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z] > builds:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]   > focal-ami:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     sources:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       amazon-ebs.focal
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     provisioners:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       ansible
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     post-processors:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       <no post-processor>
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]   > docker:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     sources:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       docker.focal
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     provisioners:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       ansible
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]     post-processors:
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       0:
[2021-09-06T15:46:18.627Z]         docker-tag
[2021-09-06T15:46:18.627Z]         docker-push
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.627Z]       1:
[2021-09-06T15:46:18.627Z]         docker-tag
[2021-09-06T15:46:18.627Z]         docker-push
[2021-09-06T15:46:18.627Z] 
[2021-09-06T15:46:18.891Z] + packer build -var test=false -var version=latest packer.pkr.hcl
[2021-09-06T15:46:19.156Z] Error: Datasource.Execute failed: no valid credential sources for  found.
[2021-09-06T15:46:19.156Z] 
[2021-09-06T15:46:19.156Z] Please see 
[2021-09-06T15:46:19.156Z] for more information about providing credentials.
[2021-09-06T15:46:19.156Z] 
[2021-09-06T15:46:19.156Z] Error: NoCredentialProviders: no valid providers in chain. Deprecated.
[2021-09-06T15:46:19.157Z] 	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
[2021-09-06T15:46:19.157Z] 
[2021-09-06T15:46:19.157Z] 
[2021-09-06T15:46:19.157Z]   on packer.pkr.hcl line 32:
[2021-09-06T15:46:19.157Z]   (source code not available)

Some things to note:

  1. The template (shown below) uses credentials from a Vault AWS secret, retrieved with an AppRole token. The AWS credentials are cached in a JSON file and read by the Packer template using the file and jsondecode functions (roughly sketched right after this list).
  2. I have checked extensively that these credentials are valid.
    1. packer inspect shows that they are valid
    2. If I set them in the environment, the problem disappears
  3. packer init and packer validate show correct output in CI.
  4. I have used this template on my development machine a lot - it definitely works.
  5. I am using the same Vault in dev as in CI.
  6. CI runs on Jenkins, using an EC2 agent provisioner. Jenkins injects the AppRole role ID and secret ID into the environment of an EC2 instance that has the same tools installed as my dev machine.
  7. The agent runs in the same AWS region as the AMI I want to build.
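
For note 1, the CI step that produces aws.json looks roughly like this (a sketch only; the secrets-engine path, role name and environment variable names are placeholders, not the real ones):

# Log in with the AppRole credentials Jenkins injects into the environment,
# then cache the Vault-issued AWS credentials so the template can read them
# with file("aws.json") and jsondecode() (access_key/secret_key sit under .data).
export VAULT_TOKEN="$(vault write -format=json auth/approle/login \
  role_id="$APPROLE_ROLE_ID" secret_id="$APPROLE_SECRET_ID" | jq -r .auth.client_token)"
vault read -format=json aws/creds/packer > aws.json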

The only thing I can think of, given the error, is that Packer is picking up AWS credentials from somewhere else on the host, following the AWS credential resolution order.

However, aws sts get-caller-identity shows that there are no configured credentials.
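
For reference, these are the usual places the AWS credential chain looks on a host (a rough checklist, not exhaustive):

env | grep '^AWS_'                # credential environment variables
ls -la ~/.aws 2>/dev/null         # shared credentials/config files
curl -s --max-time 1 http://169.254.169.254/latest/meta-data/iam/info   # instance profile
aws sts get-caller-identity       # whatever the chain actually resolves to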

I haven’t found anything in the GitHub issues for Packer either. What gives?


The template is as follows:

/* originally commented out
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
    docker = {
      version = ">= 1.0.1"
      source  = "github.com/hashicorp/docker"
    }
    ansible = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/ansible"
    }
  }
}
*/
locals {
  quay_password = vault("path/to/secret", "secret")
  github_token = vault("path/to/secret", "secret")
  aws_access_key = jsondecode(file("aws.json")).data.access_key
  aws_secret_key = jsondecode(file("aws.json")).data.secret_key
}

variable "test" {
  description = "Boolean toggle to skip ami creation. If set to true, AMI will _NOT_ be created."
  type    = bool
  default = true
}

variable "ansible_host" {
  description = "Hostname assigned to the focal docker instance created by the docker build."
  type = string
  default = "focal"
}

variable "region" {
  description = "AWS region to use."
  type    = string
  default = "eu-central-1"
}

variable "version" {
  description = "Semantic version used to tag the images."
  type    = string
  default = "next"
}

data "amazon-ami" "focal" {
  filters = {
    virtualization-type = "hvm"
    name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
  }
  owners      = ["099720109477"]
  most_recent = true
}

source "docker" "focal" {
  image   = "public.ecr.aws/lts/ubuntu:focal"
  commit  = true
  exec_user = "root"
  changes = [
    "ENTRYPOINT [\"jenkins-agent\"]",
    "WORKDIR /home/jenkins",
    "USER jenkins",
    "LABEL version=${var.version}"
  ]
  run_command = ["-d", "-i", "-t", "--name", "${var.ansible_host}", "{{.Image}}", "/bin/bash"]
}

source "amazon-ebs" "focal" {
  source_ami      = data.amazon-ami.focal.id
  region          = var.region
  skip_create_ami = var.test
  ami_name        = "ansible-test-harness-focal-${formatdate("YYYY-MM-DD-'T'hh-mm-ssZ", timestamp())}"
  instance_type   = "t3.micro"
  ssh_interface   = "private_ip"
  ssh_username    = "ubuntu"
  access_key = local.aws_access_key
  secret_key = local.aws_secret_key
  force_deregister = true
  force_delete_snapshot = true
  associate_public_ip_address = false
  shutdown_behavior = "terminate"
  launch_block_device_mappings {
    device_name = "/dev/sda1"
    volume_size = 40
    volume_type = "gp2"
    delete_on_termination = true
  }
  run_tags = {
    Name = "packer-builder-test-harness"
    stateful = false
    application = "jenkins"
    tier = "agent"
  }
  subnet_filter {
    filters = {
     // other filters
      "availability-zone" = "eu-central-1a"
    }
    most_free = true
  }
  security_group_filter {
    filters = {
      "xxx" : "xxx"
    }
  }
}

build {
  name    = "focal-ami"
  sources = ["source.amazon-ebs.focal"]

  provisioner "ansible" {
    only          = ["amazon-ebs.focal"]
    groups        = ["ec2"]
    playbook_file = "playbook.yml"
    user          = "ubuntu"
    ansible_env_vars = [
      "ANSIBLE_ROLES_PATH=$PWD/../../",
      "ANSIBLE_STDOUT_CALLBACK=yaml",
      "ANSIBLE_REMOTE_USER=ubuntu"
    ]
  }
}


build {
  name = "docker"
  sources = ["source.docker.focal"]

  provisioner "ansible" {
    only          = ["docker.focal"]
    groups        = ["docker"]
    playbook_file = "playbook.yml"
    user          = "root"
    ansible_env_vars = [
      "ANSIBLE_ROLES_PATH=$PWD/../../",
      "ANSIBLE_LOAD_CALLBACK_PLUGINS=1",
      "ANSIBLE_CALLBACK_WHITELIST=yaml,profile_tasks",
      # "ANSIBLE_REMOTE_USER=root",
      "ANSIBLE_PYTHON_INTERPRETER=python3",
      "ANSIBLE_TRANSPORT=docker"
    ]
    extra_arguments = [
      "-e ansible_host=${var.ansible_host}",
      "-e ansible_connection=docker",
      "-e",  "ansible_user=root"
    ]
  }

  post-processors {
    // redacted
  }
}


Update 09.09.2021: seems to be a similar issue to


Update 09.09.2021: seems that Packer data sources want the AWS credential chain, but don’t accept access_key and secret_key.

Ok, I think I have found out what’s going on.

I’m using locals to set AWS creds, but locals are evaluated after data sources according to

I guess the way to do this, until locals are included in the DAG, is to fall back to the typical AWS credential chain, e.g. environment variables exported in the CI step.
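
Something like this in the CI step, before packer build, should do it (a sketch, assuming jq is available on the agent):

# Export the cached Vault-issued credentials so the amazon-ami data source picks
# them up through the standard AWS credential chain (environment variables).
export AWS_ACCESS_KEY_ID="$(jq -r .data.access_key aws.json)"
export AWS_SECRET_ACCESS_KEY="$(jq -r .data.secret_key aws.json)"
packer build -var test=false -var version=latest packer.pkr.hcl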

Hope I’m getting this right, happy for some critical insight :slight_smile:

That’s correct: you can’t evaluate locals inside data sources. The DAG is probably still a while off, so I’m afraid you’ll need to find another workaround for now.
