Hi,
I am new to Waypoint and I am using it with GitLab on AWS to deploy a simple web application. I followed the instructions (Integrating Waypoint with GitLab CI/CD | Waypoint by HashiCorp) and everything worked fine in the beginning. After developing the application further, I am now facing the issue that every pipeline run fails.
$ waypoint deploy
Deploying...
Found existing ECS cluster: waypoint
Found existing IAM role to use: ecr-xyz-core
! transport is closing
Cleaning up file based variables
ERROR: Job failed: exit code 1
Does anyone have a hint about what exactly is going wrong and how I can fix it?
Welcome! If you’re running 0.2.2 and using aws-ecs, this is likely a known bug that we’ll be releasing a fix for today. In the meantime, you can use v0.2.1 or try adding an empty logging {} block to your deploy stanza, for example:
use "aws-ecs" {
  region = "us-east-1"
  memory = "512"
  logging {}
}
If this doesn’t fix it, please share your waypoint.hcl for more troubleshooting.
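If you’re not sure which version you’re on, the CLI should be able to tell you (it reports its own version, and the server version when it can reach one). Run it from the same environment your pipeline uses so it talks to the same server:

$ waypoint version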
I was able to install a 0.2.3 server both locally on Docker and remotely on EKS, and to run waypoint up successfully on each installation with the waypoint.hcl below. Let me know if you are able to verify your server version and we’ll go from there!
project = "example-nodejs"
app "example-nodejs" {
labels = {
"service" = "example-nodejs",
"env" = "dev"
}
build {
use "pack" {}
registry {
use "aws-ecr" {
region = "us-east-1"
repository = "waypoint-example"
tag = "latest"
}
}
}
deploy {
use "aws-ecs" {
region = "us-east-1"
memory = "512"
}
}
}
We need to figure out if the error is bubbling up as part of attempting to create the default subnets, or as part of creating the ALB after the subnet step completes.
Do you have useful logs on your AWS side?
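If you want to check from the AWS side yourself, a couple of AWS CLI calls can narrow down how far the deploy got. This is just a sketch; us-east-1 and default credentials are assumptions based on the example hcl above, so adjust the region and profile for your setup:

# check whether the default subnets exist in the target region
$ aws ec2 describe-subnets --filters Name=default-for-az,Values=true --region us-east-1

# check whether an ALB was created before the failure
$ aws elbv2 describe-load-balancers --region us-east-1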
@ltutar can you provide more information on your exact steps? I am not able to recreate this error, so in order to help I need a few more details on your setup.
Is your server installed locally on Docker? What version of Docker?
What version of the Waypoint server?
What waypoint.hcl are you using?
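To answer most of those in one go, something like the following should work, assuming a local Docker install and the default waypoint-server container name (adjust if your setup differs), and then paste your project’s waypoint.hcl alongside the output:

# Docker engine version
$ docker version

# confirm the server container is running (waypoint-server is the default name)
$ docker ps --filter "name=waypoint-server"

# CLI and server versions
$ waypoint version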
I am guessing there is some configuration in your AWS account that may be colliding here, but I’m not sure what that might be at this time.