Private S3 backend configuration - x509 cert not valid - wildcard certificate

Hi,

I am trying to define a private S3 backend like this,

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    bucket                      = "api-dev"
    key                         = "api_deploy"
    endpoint                    = "https://sfc.cs.srfc.com"
    encrypt                     = true
    region                      = "us-east-1"
  }
}

Now, the cert associated with the endpoint has a CN of

*.cs.srfc.com

And when performing a terraform init, I get an error like this,

│ Error: Error migrating the workspace "dev" from the previous "local" backend
│ to the newly configured "s3" backend:
│     Error loading state:
│     RequestError: send request failed
│ caused by: Get "https://api-dev.sfc.cs.srfc.com/env%3A/dev/api_deploy": x509: certificate is valid for *.cs.srfc.com, cs.srfc.com, not api-dev.sfc.cs.srfc.com

I think that’s because the CN is

*.cs.srfc.com

and the host is

api-dev.sfc.cs.srfc.com

so matching it would essentially require a double wildcard,

*.*.cs.srfc.com

Any ideas of how to handle this?
Is there a way to instruct Terraform to ignore this mismatch?

Any help is appreciated.

Hi @arun-a-nayagam,

I think you are right about the cause: a wildcard in the host names covered by your certificate only matches a single DNS label.

The usual answer in this situation is to use a certificate which also covers *.sfc.cs.srfc.com. A certificate can be valid for multiple host patterns at the same time, so it is technically possible to have a certificate which will work for both together, but it would also be valid to use a separate certificate for the parent domain vs. the child domain.
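
For illustration only, Terraform's own hashicorp/tls provider can express that idea of one certificate covering several host patterns. This self-signed sketch just mirrors the names in this thread; it is not necessarily how the team managing your endpoint would issue a real certificate:

resource "tls_private_key" "example" {
  algorithm = "RSA"
}

resource "tls_self_signed_cert" "example" {
  private_key_pem = tls_private_key.example.private_key_pem

  subject {
    common_name = "*.cs.srfc.com"
  }

  # One certificate can be valid for several host patterns at once.
  dns_names = [
    "*.cs.srfc.com",
    "cs.srfc.com",
    "*.sfc.cs.srfc.com", # covers api-dev.sfc.cs.srfc.com
  ]

  validity_period_hours = 8760
  allowed_uses          = ["server_auth", "digital_signature", "key_encipherment"]
}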

Hi @apparentlymart,
Thank you for the reply.
That S3 endpoint is managed by a different team.
By the looks of it, the first wildcard level is the bucket name that gets prefixed onto the actual endpoint host.
I feel uncomfortable asking a different team for a solution to what looks like an internal quirk of the S3 API implementation.

Is there a way to instruct Terraform to ignore this?

Terraform is implementing the TLS protocol and related checks as required by the specification, so I think the only good answer here is to make sure your server also correctly implements the protocol, by presenting a valid certificate.

Alternatively, you could make your server support the http: scheme instead of https: and then configure the http: URL, which means that there will be no authentication of the server but you will also not need to provide a valid certificate.
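
Concretely, that would just mean changing the endpoint scheme in your backend block, assuming the server also listens on plain HTTP (a sketch; everything else stays as you had it):

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    bucket                      = "api-dev"
    key                         = "api_deploy"
    endpoint                    = "http://sfc.cs.srfc.com" # plain HTTP: no certificate check, but no transport security either
    encrypt                     = true
    region                      = "us-east-1"
  }
}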

Hi @apparentlymart,

While using the AWS CLI there's an option to disable SSL verification, --no-verify-ssl,

aws s3 ls --no-verify-ssl --endpoint-url https://sfc.cs.srfc.com

Is there a way to achieve that through Terraform?

Hi @arun-a-nayagam,

The Terraform S3 backend has no option equivalent to that, as far as I know.

Disabling certificate authentication largely defeats the purpose of using TLS: if you aren't verifying the server, then the server could be an attacker attempting to intercept your messages. So if disabling certificate checks would be acceptable, then disabling TLS altogether is a similar solution, which will also stop the S3 backend from trying to authenticate the server's identity.

However, I would still suggest that using a correct TLS certificate is the best answer here. If you cannot obtain a correct certificate for this hostname then perhaps the best alternative is to rename the host containing your S3 implementation so that it is a hostname correctly covered by the existing certificate.

Using correct certificates and hostnames is a highly important part of deploying a TLS-based service and should not be trivially bypassed.

Hi @apparentlymart,

You are absolutely correct I shouldn’t look to disable TLS verification.

But the issue I have here is that the AWS CLI successfully makes its call to the host in the format https://<host_name>/<bucket_name>, as I can see in the AWS debug logs here,

2022-10-25 11:33:34,259 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://sfc.cs.srfc.com/api-dev?list-type=2&prefix=&delimiter=%2F&encoding-type=url, headers={'User-Agent': b'aws-cli/1.22.78 Python/3.10.2 Windows/10 botocore/1.24.23', 'X-Amz-Date': b'20221025T103334Z', 'amz-sdk-request': b'attempt=1'}>
2022-10-25 11:33:34,264 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): sfc.cs.srfc.com:443
2022-10-25 11:33:35,779 - MainThread - urllib3.connectionpool - DEBUG - https://sfc.cs.srfc.com:443 "GET /api-dev?list-type=2&prefix=&delimiter=%2F&encoding-type=url HTTP/1.1" 200 809

Whereas the S3 backend connects to the host in the format
https://<bucket_name>.<host_name>, which is what causes the CN mismatch.

I don't quite understand why the S3 backend behaves differently from the AWS CLI. Is it because I am making a mistake in the backend config?

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    bucket                      = "api-dev"
    key                         = "api_deploy"
    endpoint                    = "https://sfc.cs.srfc.com"
    encrypt                     = true
    region                      = "us-east-1"
  }
}

Hi @apparentlymart,

Ok, after quite a bit of debugging, I think the bucket name needs to be given with a leading "/", i.e. "/<bucket_name>". When that slash is omitted, the backend seems to connect in the https://<bucket_name>.<host_name> format.

So this backend config works fine,

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    bucket                      = "/api-dev"
    key                         = "api_deploy"
    endpoint                    = "https://sfc.cs.srfc.com"
    encrypt                     = true
    region                      = "us-east-1"
  }
}

Hi @arun-a-nayagam,

Oh, indeed the real Amazon S3 supports multiple URL styles that are all equivalent (the bucket name can appear either in the hostname or in the path), and Terraform's S3 backend is designed for the real S3, so it assumes that the remote server will support all of the URL styles that real S3 supports.

However, I think if you set the option force_path_style in the backend configuration then this will tell the backend to use the URL style your fake S3 is expecting, where the bucket name appears in the path instead of in the hostname.
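
As a sketch, the whole block would then look something like this (untested against your endpoint, but force_path_style is a documented S3 backend setting):

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    bucket                      = "api-dev" # no leading "/" workaround needed
    key                         = "api_deploy"
    endpoint                    = "https://sfc.cs.srfc.com"
    force_path_style            = true # use https://<host_name>/<bucket_name>/... instead of https://<bucket_name>.<host_name>/...
    encrypt                     = true
    region                      = "us-east-1"
  }
}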

Hi @apparentlymart,

Aah, that explains it. Thank you!

Arun