Hello, I ran through the Boundary tutorials and everything works fine locally. Now I’d like to deploy to AWS, and I see the diagram and basic steps, but there is a lot missing in terms of configuration. Are there some sample Terraform files for creating the EC2 instances and configuring those instances? Are we supposed to just ssh into those boxes and install everything manually? Are there any public AMIs posted? Maybe I’m missing something, but the documentation really dropped off when it came to deploying.
Thanks @coguy450 for trying out Boundary. We have a reference architecture with some example TF deployments for AWS here: https://github.com/hashicorp/boundary-reference-architecture
Let me know if this helps!
Yes, that’s what I needed, thank you very much!
Hi, I followed this setup to install Boundary in AWS.
I’m having some issues. Some of them I figured out by myself, like this:
aws/aws/ec2.tf:
provisioner "file" {
source = "${var.boundary_bin}/boundary"
destination = "/home/ubuntu/boundary"
# destination = "~/boundary"
}
I had to use the absolute path there, because the original destination creates a file literally named “~” in the Ubuntu home directory.
Then I had to modify the version of Postgres:
aws/aws/db.tf:
resource "aws_db_instance" "boundary" {
allocated_storage = 20
storage_type = "gp2"
engine = "postgres"
# engine_version = "11.8"
engine_version = "11.13"
After those modifications, step 4 → “terraform apply -target module.aws” finished successfully.
Then my LB is created, so I replaced the DNS name in:
aws/boundary/vars.tf
variable "url" {
# default = "http://127.0.0.1:9200"
# default = "http://boundary-demo-controller-ec52c62e6a9979ab.elb.us-east-1.amazonaws.com:9200"
default = "http://my-lb-dns:9200"
}
When I run step 5 → “terraform apply”, I’m getting this error:
│ Warning: Argument is deprecated
│
│ with module.boundary.boundary_account.backend_user_acct,
│ on boundary/principles.tf line 30, in resource "boundary_account" "backend_user_acct":
│ 30: login_name = lower(each.key)
│
│ Will be removed in favor of using attributes parameter
│
│ (and 7 more similar warnings elsewhere)
╵
╷
│ Error: error reading wrappers from "recovery_kms_hcl": Error configuring kms: error fetching AWS KMS wrapping key information: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
│
│ with module.boundary.provider["registry.terraform.io/hashicorp/boundary"],
│ on boundary/main.tf line 10, in provider "boundary":
│ 10: provider "boundary" {
I’m not able to fix this error.
Could someone help with this, please?
Thanks
Try adding the KMS key’s region explicitly to the awskms portion of the Boundary provider config in boundary/main.tf.
Is this ok?
provider "boundary" {
addr = var.url
recovery_kms_hcl = <<EOT
kms "awskms" {
region = "eu-central-1" // added region
purpose = "recovery"
key_id = "global_root"
kms_key_id = "${var.kms_recovery_key_id}"
}
EOT
}
Because it’s still not working …
Anyway, from this I understand that kms is just another way of authenticating, because I can also authenticate like this:
provider "boundary" {
addr = var.url
auth_method_id = "ampw_1234567890" # changeme
password_auth_method_login_name = "myuser" # changeme
password_auth_method_password = "passpass" # changeme
}
But how? I don’t have any auth_method created yet, as you can see in my screenshot.
Why don’t these .tf scripts create the Boundary setup with a predefined admin and password, like the Boundary dev server does?
Thank you
It’s not really the mode Boundary runs in that creates the static IDs for the auth method, admin user, etc. so much as boundary database init – dev mode just spins up a Postgres database in Docker and then runs boundary database init on it, and if you don’t pass database init flags telling it not to, it pre-creates the org, project, auth method, etc. with the hardcoded _1234567890 IDs by default.
That said, the KMS recovery workflow is intended as the Terraform-friendly workflow rather than depending on the hardcoded scopes and auth methods provided by database init – in fact, one of the purposes of the KMS recovery auth option is to resolve exactly the circular dependency you noted.
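To sketch what I mean (resource names and values here are illustrative, not the repo’s exact code): once the provider is authenticated through recovery_kms_hcl, Terraform itself can create the scope, auth method, and admin account rather than relying on the hardcoded ones from database init, along these lines:
resource "boundary_scope" "org" {
  scope_id                 = "global"
  name                     = "demo-org"
  auto_create_admin_role   = true
  auto_create_default_role = true
}

resource "boundary_auth_method" "password" {
  scope_id = boundary_scope.org.id
  type     = "password"
}

# Same (now deprecated) boundary_account + login_name style the repo's principles.tf uses
resource "boundary_account" "admin" {
  auth_method_id = boundary_auth_method.password.id
  type           = "password"
  login_name     = "admin"
  password       = "changeme"
}

resource "boundary_user" "admin" {
  scope_id    = boundary_scope.org.id
  name        = "admin"
  account_ids = [boundary_account.admin.id]
}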
Since this is running in Terraform, your local AWS credentials make a difference – are they valid in the shell you’re running Terraform in, and do they allow you to encrypt and decrypt with that KMS key? (i.e. can you successfully aws kms encrypt with it?)
Thank you for your answer. I’m logged into the shell with AWS credentials, but I’m not sure exactly what you meant, so here is my experiment:
cat ~/.aws/credentials <aws:dev>
[dev]
aws_access_key_id = abc123
aws_secret_access_key = def456
echo "test" | base64
dGVzdAo=
aws kms encrypt --key-id abc123 --plaintext dGVzdAo=
An error occurred (NotFoundException) when calling the Encrypt operation: Invalid keyId abc123
I don’t understand what --key-id I should use here, and also in the .tf script where the value is global_root.
The --key-id you need for the AWS CLI is the KMS key ID in AWS. It’ll be in a format like 00000000-0000-0000-0000-000000000000. You shouldn’t need to manually insert it into the Terraform code; it’s automatically inserted as a variable when terraform apply calls the boundary module. For running the test in the AWS CLI, you’ll do exactly what you did, but with that KMS key ID after the --key-id argument.
Also, I noticed you’ve got a profile for those credentials; you might need to set the AWS_PROFILE variable in your shell environment to that profile name.
Thank you very much. As you wrote, the problem was that I didn’t have AWS_PROFILE set in my terminal session. The reason is that I do it this way in Terraform:
provider "aws" {
region = "eu-central-1"
profile = "dev"
}
When you add profile = "dev" (or "prod") to your .tf script, Terraform reads that section from your ~/.aws/credentials file, and then you don’t have to log in with AWS credentials.
However, when I logged in to my terminal with AWS credentials, that kms started to work.
But I still don’t understand how this key-id gets there in the format 00000000-0000-0000-0000-000000000000, as it is defined as global_root in awskms { ... }.
There’s a bit of confusion here that I think is due to a remnant of some older code in this repo. Some KMS key types require a parameter called key_id as part of the Boundary configuration for them – the aead and ocikms key types. Others have another name for the unique KMS key identifier parameter – the awskms and alicloudkms types call it kms_key_id, the azurekeyvault and transit types use key_name, and gcpckms calls it crypto_key.
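To make the naming difference concrete, here are two recovery stanzas side by side (a sketch with placeholder values, based on the kms stanza docs rather than this repo): the AWS one identifies the key with kms_key_id, while the GCP one uses crypto_key.
kms "awskms" {
  purpose    = "recovery"
  region     = "eu-central-1"
  kms_key_id = "00000000-0000-0000-0000-000000000000" # AWS KMS key GUID
}

kms "gcpckms" {
  purpose    = "recovery"
  project    = "my-project"   # placeholder
  region     = "europe-west3" # placeholder
  key_ring   = "boundary"     # placeholder
  crypto_key = "recovery"     # GCP's name for the key identifier
}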
In this case the key configuration passes both key_id and kms_key_id, but since the awskms key type only uses the latter, the key_id line is just entirely ignored (it looks like that line might have been copied and pasted from an aead root key config, since our example of that key type uses the same value for that parameter). In fact I’ll probably put in a PR shortly to eliminate that line to make it less confusing.
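In other words, the recovery stanza inside recovery_kms_hcl can be trimmed down to just this (a sketch of what that cleanup would look like, not the actual PR):
kms "awskms" {
  purpose    = "recovery"
  region     = "eu-central-1"               # your KMS key's region
  kms_key_id = "${var.kms_recovery_key_id}" # the only key identifier awskms reads
}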
How the AWS KMS key’s GUID actually gets into the Boundary provider config starts with a reference within the Terraform code. In the aws module, the kms.tf file contains this block:
resource "aws_kms_key" "recovery" {
description = "Boundary recovery key"
deletion_window_in_days = 10
[...]
}
After Terraform creates that AWS KMS key, the state file contains a bunch of attributes about it, one of which is the ID. Try terraform state show module.aws.aws_kms_key.recovery and you should see a property named key_id that has a GUID for that key, along with an attribute id with the same GUID. The aws module has an output named kms_recovery_key_id that refers back to that aws_kms_key.recovery resource’s id property.
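In other words, somewhere in the aws module’s outputs there is effectively a block like this (a sketch; the exact file layout in the repo may differ):
output "kms_recovery_key_id" {
  value = aws_kms_key.recovery.id
}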
Over in the boundary module, one of the input variables defined in vars.tf is kms_recovery_key_id, and the main.tf in the top-level directory passes the kms_recovery_key_id output of the aws module as the kms_recovery_key_id input of the boundary module. That module in turn finally uses it as the value of the kms_key_id attribute of the Boundary Terraform provider.