I am using the HashiCorp Cloud Platform to run Consul.
From my application I can successfully access the Consul endpoints and the application works.
However, when I deploy the application to Kubernetes (GCP or AWS) using Shipa.IO I consistently encounter an “unable to load certificate” error. This error prevents the container image of my application from running in the cloud.
What is the workaround for running a containerized application in the cloud (under Kubernetes) that attempts to connect to HCP?
There is no trick to it; HCP uses an AWS SSL cert, so somehow your nodes cannot verify this cert, which they should be able to do if they have access. Check your rules and routes to see what's blocking them.
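A quick way to see exactly what the nodes are choking on is to attempt the TLS handshake yourself from inside the cluster. Here's a minimal Go sketch; the endpoint address below is a placeholder, so swap in your cluster's real URL:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Placeholder address; substitute your HCP Consul cluster's public endpoint.
	addr := "your-cluster.consul.example.hashicorp.cloud:443"

	// An empty tls.Config verifies against the system trust store. If this
	// fails inside the pod but succeeds locally, the container image is
	// likely missing the CA bundle (common with slim base images).
	conn, err := tls.Dial("tcp", addr, &tls.Config{})
	if err != nil {
		log.Fatalf("TLS handshake failed: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s issuer=%s\n", cert.Subject, cert.Issuer)
	}
	fmt.Println("certificate chain verified OK")
}
```

If this fails in the pod but works on your laptop, the problem is the image's trust store or the cluster's egress path, not HCP itself.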
I have just checked the ACL inbound and outbound rules. Both allow all traffic in and out.
However, when I deploy to the cloud (either GCP or AWS), the message about being unable to load the certificate keeps crashing the pod.
Is there an example of how to deploy to the cloud in a Docker container when using HCP? Apparently, something is missing in the deployment that is present when running locally.
Maybe make this a little easier and just deploy the HashiCorp Vault agent (sidecar). That way your application is out of the flow and you're testing HashiCorp to HashiCorp.
If you don't want to do the sidecar, any image with the Vault binary will let you test.
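For instance, a throwaway pod running something like this would tell you whether the HashiCorp-to-HashiCorp path is clean. This is a rough sketch using the official vault/api Go client; it assumes VAULT_ADDR (and VAULT_CACERT, if needed) are set in the pod's environment:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig picks up VAULT_ADDR and VAULT_CACERT from the
	// environment, so the same binary can be pointed at different endpoints.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	// The health endpoint is unauthenticated, so a failure here is a
	// TLS or network problem rather than a token problem.
	health, err := client.Sys().Health()
	if err != nil {
		log.Fatalf("health check failed: %v", err)
	}
	fmt.Printf("initialized=%v sealed=%v version=%s\n",
		health.Initialized, health.Sealed, health.Version)
}
```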
Thank you for the suggestion, but that is not possible, as the Consul agent is built into the framework.
I deployed Consul using Helm, and this same code/agent works.
Now we want to migrate to a managed Consul service, so we are experimenting with the new Consul managed service.
However, we are running into these issues.
It should be straightforward, but apparently an additional certificate is causing the issue where there should not be one.
We have a token and datacenter.
This works locally, but not in the cloud.
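For reference, this is roughly how the client is configured. It's a simplified Go sketch with placeholder address, token, datacenter, and CA path; the real code goes through the framework's built-in agent rather than the consul/api client directly:

```go
package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	cfg := consul.DefaultConfig()
	// All of these are placeholders; substitute your cluster's values.
	cfg.Address = "your-cluster.consul.example.hashicorp.cloud:443"
	cfg.Scheme = "https"
	cfg.Token = "<ACL token>"
	cfg.Datacenter = "dc1"
	// Explicit CA file, e.g. mounted into the pod from a Kubernetes secret,
	// in case the image's system trust store is incomplete.
	cfg.TLSConfig = consul.TLSConfig{CAFile: "/etc/ssl/certs/hcp-ca.pem"}

	client, err := consul.NewClient(cfg)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	// A trivial call to prove TLS, routing, and the token work end to end.
	leader, err := client.Status().Leader()
	if err != nil {
		log.Fatalf("status call failed: %v", err)
	}
	fmt.Println("current leader:", leader)
}
```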
I am enclosing the actual stack trace from the container logs so you can see what is happening.