Error: error configuring Terraform AWS Provider: failed to get shared config profile, default

Hey Team,

Good Day All!

We need your suggestion here: we recently upgraded our Terraform version to 1.1.7 and the required providers as below.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.8.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.0"
    }
  }
}

We are getting a strange error where the provider looks for an AWS config profile and fails, both locally and in the CI/CD pipeline.

The same configuration works with the older versions below. We pin Terraform and provider versions as necessary for the root and child modules:

terraform {
  required_version = "~> 1.1.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.7"
    }
    random = ">= 2"
  }
}

Please let us know what changes we need to make.

Regards,
Shibir G R
DevOps Engineer

I believe there was an undocumented breaking change in the AWS v4 provider which results in a failure when it looks for a profile that doesn't exist.

The v3 provider family didn't fail: it would fall back in the same manner as the AWS CLI and use the default profile (which works fine with configure-aws-credentials and other AWS tools).

The easiest solution is to create a variable (bool, default false) that indicates whether the operation is being run in a pipeline. In your pipeline, set this variable to true. Then, in your Terraform provider config, set the profile dynamically, using the pipeline variable as a flag:

profile = var.pipeline ? "" : "YourLocalProfileNameHere"

This will allow you to continue working locally with profile names, while the empty string will cause Terraform to grab the default profile (generated by configure-aws-credentials or some similar library).
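A minimal sketch of that approach, assuming a variable named `pipeline` and a placeholder local profile name and region (those names are illustrative, not from the thread):

```hcl
# Flag indicating whether Terraform is running in a CI/CD pipeline.
variable "pipeline" {
  type    = bool
  default = false
}

provider "aws" {
  region = "us-east-1" # placeholder region

  # Locally, use a named profile; in the pipeline, pass an empty string so the
  # provider falls back to the default credential chain (env vars, etc.).
  profile = var.pipeline ? "" : "YourLocalProfileNameHere"
}
```

In the pipeline you would then run something like `terraform plan -var="pipeline=true"`, or set `TF_VAR_pipeline=true` in the job environment.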


I confirm. Removing the profile line makes it work:

provider "aws" {
  profile    = "default"  # <- remove this line
  access_key = "xxx"
  secret_key = "yyy"
  region     = "zzz"
}
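An alternative sketch, assuming the standard AWS SDK environment variable names: supply the credentials via the environment instead of hard-coding keys in the provider block, and drop both `profile` and the key arguments entirely.

```shell
# Standard AWS SDK/CLI environment variables; the provider picks these up
# automatically when no profile or keys are configured in HCL.
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="yyy"
export AWS_DEFAULT_REGION="zzz"
```

This keeps secrets out of the configuration files, which is generally preferable in CI/CD.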

Same here: having the profile line worked for me the first time I ran terraform init, but when I ran it again I had to remove the line.

Thank you! I was tearing my hair out over why my Terraform configuration worked fine locally but terraform plan failed in the (Jenkins) pipeline, especially since I don't have access to the Jenkins instances, which made debugging that much more difficult.

This worked for me too.

It worked for me too

Coming to this thread a bit later with the same issue. I was trying to update Terraform to version 1.7.3. I did not have the profile set in the provider block, but I did have it in the backend block. Removing it from there as well fixed the issue for me.

Interestingly, the backend was set up successfully at the start of terraform init, so it didn't seem to be the source of the problem. But debugging the init output showed that all the modules were also set up successfully. 🤷
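For reference, a sketch of what that fix looks like in a backend block (the bucket, key, and region values are placeholders, not from the thread):

```hcl
terraform {
  backend "s3" {
    bucket = "my-state-bucket"       # placeholder
    key    = "env/terraform.tfstate" # placeholder
    region = "us-east-1"             # placeholder

    # profile = "default"  # <- removed; let credentials come from the
    #                      #    environment / default credential chain
  }
}
```

As with the provider block, omitting `profile` lets the backend authenticate from the environment, which behaves consistently both locally and in CI.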