I am evaluating Consul and trying to put it to use in a hybrid way. I have set up a Consul server cluster, but to avoid implementing a Consul client on each and every production node (some of them are in K8s, but most are still VMs), I am examining the case of every service registering itself with Consul by hitting a Go app that uses the go-consul API.
Consul has a REST API that you can use to register services. You can talk to any HTTP or HTTPS endpoint in your cluster that your ACL policies allow. However, if you register a service at the Consul server node, it can get confusing, because that node does not actually run the registered service. The service ends up with a different address than its node, and that will hurt in the long run. In our setup, Ansible deploys a Consul agent to each VM using https://github.com/brianshumate/ansible-consul.git, and we have a simple playbook to register services, so the services don’t need to implement that behaviour themselves.
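For reference, a minimal sketch of the kind of central registration you describe, using the official Go client (github.com/hashicorp/consul/api). The agent address, service name, and address/port below are placeholders, and the caveat above still applies: the service gets attached to whichever agent you register it against.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Point the client at a reachable Consul agent (or server) HTTP endpoint.
	cfg := api.DefaultConfig()
	cfg.Address = "consul.example.internal:8500" // placeholder address

	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatalf("creating consul client: %v", err)
	}

	// Register a service against the agent this client is talking to.
	// Note: the service will be attached to *that* agent's node.
	reg := &api.AgentServiceRegistration{
		ID:      "billing-1",  // placeholder service ID
		Name:    "billing",    // placeholder service name
		Address: "10.0.0.42",  // address of the VM actually running the service
		Port:    8080,
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatalf("registering service: %v", err)
	}
	log.Println("service registered")
}
```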
First of all, thank you for the reply.
The situation is that we’ll be migrating all services to K8s within the year, so we’d like to proceed with Consul without the overhead of changing Ansible roles and messing with VMs, since this is an interim phase.
That’s why I’m looking for a workaround.
That is not the case. I’m saying that I am in a transitional phase where I don’t want the overhead of changing Ansible roles (because in 9 months they are going to be useless), but I do want to register the existing VMs with Consul until we completely migrate to K8s.
Your question: “Is this a best practice or is it completely wrong?”
Ideas: Consul knows where these services are located because each service registers with its local Consul client. Operators can register services manually, configuration management tools can register services when they are deployed, or container orchestration platforms can register services automatically via integrations.
Hi Nikos,
The best practice would be to deploy consul clients to each node. Depending on your use case, this might not be worth the effort since you say that this is temporary.
The primary reasons for running clients on each node are health checking and performance.
Consul clients gossip with one another to determine if they’re healthy. If one of your nodes dies, Consul will quickly discover this through gossip and any services on that node will be marked as unhealthy. If using Consul DNS, this means that the services won’t be returned in the DNS call and so you won’t route to unhealthy services. If using the go api, there would be no way to mark the services as unhealthy unless you had something out-of-band doing this. You can also create health checks for your services that are performed by the Consul client. If there is no client then there’s nothing to run the health check.
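To make the health-check point concrete, here is a rough sketch of registering a service with an HTTP check against its local Consul client via the Go API; the service name, port, and check endpoint are placeholders.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Talk to the Consul client running on the same node (default 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatalf("creating consul client: %v", err)
	}

	// Register the service with an HTTP health check; the local client
	// polls this endpoint and marks the service unhealthy on failure.
	reg := &api.AgentServiceRegistration{
		Name: "web", // placeholder service name
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:                           "http://127.0.0.1:8080/health", // placeholder health endpoint
			Interval:                       "10s",
			Timeout:                        "2s",
			DeregisterCriticalServiceAfter: "1m", // clean up if it stays critical
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatalf("registering service: %v", err)
	}
}
```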
For performance, the Consul clients pipeline all requests through a single connection to the Consul servers. In your case, it sounds like you’d only have a couple of instances of the go-consul API running, so you’d actually likely have fewer connections.
So overall, it might make sense in your use case, but it’s not a best practice. There might be other issues that arise since it doesn’t follow Consul’s model. That being said, in Kubernetes we actually use the API to register Kubernetes services onto a “fake” node, so that’s an example of the same method you’re contemplating. See https://www.consul.io/docs/platform/k8s/service-sync.html
Hello and thanks for the detailed answer. I have switched tactics and I am about to deploy an agent on every node. Since we left off on service-sync, I am wondering how I can add an Envoy sidecar to the pods of a Deployment or a StatefulSet. I have already installed the Connect injector, but I see that you can only add annotations to a Pod and not a Deployment. How am I supposed to get around this?
You would need to set the annotation on the Pod template within your application’s Deployment (under spec.template.metadata.annotations). Here’s an example.
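A rough sketch of what that could look like; the name, labels, image, and port are placeholders, and the annotation key is the one the Connect injector watches for:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        # The injector looks for this annotation on the Pod template.
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```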