Using credentials created by AWS SSO for Terraform

I have read a lot of articles related to this issue, including this one.

I am a little confused, so I want to check my understanding.

Using credentials created by AWS SSO and stored in ~/.aws/cli or ~/.aws/sso
to deploy AWS resources with Terraform is not possible. Is this correct?

It seems there is a possible way if you are using aws-sdk-go directly,
but simply declaring it in the Terraform file, such as
provider "aws" { ... } with shared_credentials_file and profile, does not work properly.

Please help me understand the right answer.

I have an AWS Organization using AWS SSO with Okta that I access via the AWS CLI v2. I can’t recall whether I’ve tested running a Terraform deployment against it yet. Most of my deployments currently run under Terraform Cloud in another AWS account without SSO, and from what I can tell Terraform Cloud doesn’t work well with SSO.

What problem are you experiencing? Perhaps I can attempt to reproduce it or assist. As for the shared credentials file and profile: the SSO setup for CLI v2 is set up just like any other profile, except that it references the SSO URL and requires logging in first. The ~/.aws/sso and ~/.aws/cli directories merely store the cached data, and ~/.aws/config still maintains the profile. ~/.aws/credentials is then not really used, as the SDK defers to the ~/.aws/sso and ~/.aws/cli cached data for the necessary credentials. ~/.aws/sso contains the cached AccessToken used to authenticate with SSO, while ~/.aws/cli contains the cached AccessKeyId, SecretAccessKey and SessionToken credentials that would normally be stored in ~/.aws/credentials.

So as long as you’re authenticated prior to attempting to run Terraform, and your provider declaration references the profile associated with your SSO-authenticated account, it should be able to execute. If you’re not logged in beforehand, I would expect you to get an unauthenticated or token-expired error.

As the shared credentials file ~/.aws/credentials doesn’t really contain the credentials and is optional, I would think it could be left out, simply referencing the profile found in the ~/.aws/config file. The SDK should then know to look in the ~/.aws/cli cached data for the right credentials for you. Again, I don’t believe I’ve tried this since enabling SSO, but when I have a chance I’ll give it a run to confirm.
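
For example, after an SSO login you can inspect the caches yourself. A rough sketch, assuming AWS CLI v2 defaults, jq installed, and a placeholder profile name (on my machine the cached JSON lives in cache/ subdirectories):

aws sso login --profile my-sso-profile              # browser auth; populates the caches
ls ~/.aws/sso/cache/                                # cached SSO AccessToken (JSON)
ls ~/.aws/cli/cache/                                # cached role credentials (JSON)
jq '.Credentials | keys' ~/.aws/cli/cache/*.json    # AccessKeyId, SecretAccessKey, SessionToken, Expiration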

I appreciate your reply.
I logged in to SSO using AWS CLI v2 ("aws sso login") and checked that /sso and /cli have cached credentials.
I am confused about how to set the provider in the .tf file to use the SSO profile. Some articles say
to use shared_credentials_file or profile, but that didn’t work.

@Heeseok-82 I’m not sure if this is the same error you got, but I was finally able to give it a try, and it did fail.

To document this for anyone else who wants to attempt it, I created a simple main.tf containing the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }

  required_version = ">= 0.13"
}

provider "aws" {
  region  = "us-east-1"
  profile = "sandbox"
}

resource "aws_s3_bucket" "test" {
  tags = {
    Name = "Test bucket"
  }
}

Inside my ~/.aws/config I had the sandbox profile configured as:

[profile sandbox]
sso_start_url = https://[sso name].awsapps.com/start
sso_region = us-east-1
sso_account_id = [my AWS account ID]
sso_role_name = AWSAdministratorAccess
region = us-east-1

I performed my aws sso login --profile sandbox and authenticated (in my case with Okta and MFA), then proceeded to run the following:

terraform init
terraform fmt
terraform plan

Things went fine until the plan was executed, at which point I received the following:

Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.

Please see https://registry.terraform.io/providers/hashicorp/aws
for more information about providing credentials.

Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
caused by: expected RFC3339 timestamp: parsing time "2021-04-16T00:26:08UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"


  on main.tf line 12, in provider "aws":
  12: provider "aws" {

I know the credentials are actually good, as an aws s3 ls --profile sandbox command executes just fine without issue. So I would assume this is an issue inside the Terraform AWS provider and how it parses the credentials when using SSO.


Interestingly, however, when I removed the profile = "sandbox" line from the provider block

provider "aws" {
  region  = "us-east-1"
}

and then looked at the ~/.aws/cli/cached/[hash].json file with the SSO credentials

{
  "ProviderType": "sso",
  "Credentials": {
    "AccessKeyId": "ASIAxxxxxxxxxxxxxxxZKPR",
    "SecretAccessKey": "CpONxxxxxxxxxxxxxxxm51k",
    "SessionToken": "IQoJxxxxxxxxxxxxxxxVty4=",
    "Expiration": "2021-04-15T17:40:03UTC"
  }
}

and performed the following:

export AWS_ACCESS_KEY_ID="ASIAxxxxxxxxxxxxxxxZKPR"
export AWS_SECRET_ACCESS_KEY="CpONxxxxxxxxxxxxxxxm51k"
export AWS_SESSION_TOKEN="IQoJxxxxxxxxxxxxxxxVty4="
terraform plan

I was pleased to see it return

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.test will be created
  + resource "aws_s3_bucket" "test" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = (known after apply)
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Name" = "Test bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Just as I had originally expected. This at least shows it can use the credentials, but it isn’t able to parse and determine the values on its own from the cached JSON files.


Thank you very much for the info.
Your testing documents exactly what I’ve done.
So far (maybe it will be updated later),
it seems that if I want to use credentials created by AWS SSO for Terraform,
exporting the credential info on the command line is required.
Thanks again!

@Heeseok-82, yes, from what I can tell it is having an issue parsing the cached JSON files and the expiration timestamps. The error was showing the timestamp from the ~/.aws/sso/cached JSON file that has the SSO access token. Because of that error, it never appeared to attempt to read the ~/.aws/cli/cached JSON file to get the access key ID, secret access key and session token. I’m not sure exactly how the filename is generated, but I assume it’s a hash of some data.

For now, you could write a script that runs the ~/.aws/cli/cached JSON file through jq, parses the .Credentials and exports them.

# jq -r strips the surrounding quotes so the values export cleanly
export AWS_ACCESS_KEY_ID=$(cat ~/.aws/cli/cached/[hash].json | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(cat ~/.aws/cli/cached/[hash].json | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(cat ~/.aws/cli/cached/[hash].json | jq -r .Credentials.SessionToken)
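
If you’d rather not hard-code the hashed filename, a variation (untested; it assumes the newest JSON file in that same directory is the current one) could be:

# Pick the most recently written cache file instead of hard-coding the hash,
# then export its credentials for Terraform to use.
CACHE_FILE=$(ls -t ~/.aws/cli/cached/*.json | head -n1)
export AWS_ACCESS_KEY_ID=$(jq -r .Credentials.AccessKeyId "$CACHE_FILE")
export AWS_SECRET_ACCESS_KEY=$(jq -r .Credentials.SecretAccessKey "$CACHE_FILE")
export AWS_SESSION_TOKEN=$(jq -r .Credentials.SessionToken "$CACHE_FILE")
terraform plan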

Here’s how I have it working for me -

aws-vault v6.3.1
aws-cli/2.2.28 Python/3.9.6 Darwin/20.5.0 source/x86_64 prompt/off
Terraform v1.0.4
on darwin_amd64

  1. Configure the SSO profile using aws configure sso
  2. Add `credential_process = aws-vault exec --json` to that profile in ~/.aws/config
  3. Export AWS_SDK_LOAD_CONFIG=1 into your current environment
  4. Configure your provider to reference the profile name you used in step 1
  5. Run terraform init/plan; this will open a new browser instance where you’ll need to accept the CLI login, then hand control back to Terraform to run your plan (a rough sketch of these steps follows below)
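
Putting those steps together as a rough sketch (the profile name is a placeholder; aws-vault’s exec normally takes the profile name as an argument):

# 1. create the SSO profile interactively
aws configure sso
# 2. in ~/.aws/config, under that profile, add something like:
#      credential_process = aws-vault exec my-sso-profile --json
# 3. make sure the SDK reads ~/.aws/config
export AWS_SDK_LOAD_CONFIG=1
# 4. reference the same profile in the provider block (profile = "my-sso-profile")
# 5. run Terraform; a browser window opens for the CLI login if the session is stale
terraform init
terraform plan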

Hope this helps


I was able to use AWS SSO simply by specifying the AWS_PROFILE environment variable. You can export this, or specify it on the command line.

For example:

export AWS_PROFILE="sandbox"
terraform plan

Or just

AWS_PROFILE="sandbox" terraform plan

Note: I am using AWS CLI v2.4.11, terraform 1.1.3, and AWS provider v3.72.0


How do I get more permanent creds for CI, instead of creds tied to my user that make it look in the audit log like every developer change was made by me?

AWS SSO (and its successor) is really designed for granting people access to things, whereas for CI systems you are better off looking at options like IAM roles. It sounds like you’ve currently got your CI system set up with your personal credentials, which is generally not good practice.

Hiya @stuart-c, how do I use Assume Role for Terraform to get temporary credentials?

Currently, I’ve set up integration with Okta and can log in with a user created in Okta. I’ve set up SAML integration through IAM. I also have an AssumeRole role, and I provide this role’s ARN in provider "aws" {}.

But how do I log in as a user and get the temporary credentials that use the AssumeRole to execute the Terraform code?

Do I use "aws configure sso", and will it create temporary credentials for Terraform to use via the AssumeRole?

I’m getting a 403 Forbidden response when I try to use "aws configure sso".
How do I actually use AssumeRole with Terraform to perform infra changes?
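
This is roughly the flow I’m imagining (the role ARN, account ID, and session name below are just placeholders):

# Assume the role manually, then export the temporary credentials for Terraform.
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/terraform-deploy \
  --role-session-name terraform)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
terraform plan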

I solved this by creating a second profile for terraform. This is my ~/.aws/config file:

[default]
region = eu-north-1
cli_pager=

[profile default]
sso_start_url = https://d-XXXXXXXX.awsapps.com/start
sso_region = eu-north-1
sso_account_id = YYYYYYYYYYYYYYY
sso_role_name = aws-sso-admin-permissions
region = eu-north-1

[profile terraform]
credential_process = sh -c "ls -t  /home/kamil/.aws/cli/cache/*.json  | head -n1 | xargs cat | jq -r '.Credentials + {Version: 1}'"

First I log in with AWS in a console using the default profile (aws sso login), then for Terraform I switch to the terraform profile (export AWS_PROFILE=terraform), and that’s it.
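
In short, the day-to-day flow looks roughly like this:

aws sso login                   # authenticate the SSO-backed default profile; refreshes ~/.aws/cli/cache
export AWS_PROFILE=terraform    # point Terraform at the credential_process profile
terraform plan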


I wrote a tool that automates this a few years back, when SSO was brand new and very few CLI tools supported it: GitHub - trondhindenes/awshelper, a wrapper which lets you use AWS SSO-based credentials with your existing utilities.

It simply reads the aws sso cli cache and passes the credential envvars into the subprocess it launches. So you can now run:

export AWS_PROFILE=my-aws-sso-profile
awshelper terraform plan

For anyone still experiencing this issue, this helped me:

Stuart is right: you shouldn’t use SSO credentials for this purpose. It is possible, but not best practice at all.
The session needs to be refreshed when its token expires, and the only way to do so with AWS SSO credentials is to log in again.
If you are forced to use this kind of credentials, you should look at “Maximum session duration” in AWS IAM Identity Center, under Settings > Authentication.

Also, you will need to run "aws sso login --profile [profile_name]" and keep your session fresh before running or automating anything.
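
If you do go down that road, a minimal pre-flight check (assuming a profile named sandbox, as used earlier in the thread) could look like:

# Re-login only when the cached SSO session can no longer sign an STS call.
aws sts get-caller-identity --profile sandbox >/dev/null 2>&1 \
  || aws sso login --profile sandbox
terraform plan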