Clarification on single Consul Datacenter in multiple clusters

I am trying to follow the Single Consul Datacenter in Multiple Kubernetes Clusters guide in the Consul by HashiCorp docs.

It says this:

To use it we need to provide a way for the Consul clients to reach the first Kubernetes cluster. To do that, we need to save the kubeconfig for the first cluster as a Kubernetes secret...

I understand kubeconfig to be a generic term for a Kubernetes config file, so the statement above is ambiguous to me and I'm stuck.

Unfortunately, that particular page lacks command-line examples that set the required secrets.

Can anyone clarify what this kubeconfig secret is supposed to contain?

Hey @jimsnab

It should be a secret containing the contents of a kubeconfig file that gives access to the first Kubernetes cluster. That's because we need to provide a kubeconfig file in the cloud auto-join string for the Kubernetes provider, as described here.
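For reference, the k8s auto-join provider takes the kubeconfig path as a parameter in the join string, along with a label selector for the Consul server pods. It looks roughly like this (the mount path and labels here are illustrative assumptions, not values copied from the docs):

```
provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector="app=consul,component=server"
```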

To create the secret, you can run `kubectl create secret generic cluster1-kubeconfig --from-file=kubeconfig=$HOME/.kube/config`. Also, note how we reference it in the `client.join` value in the Helm values for the second cluster.
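Putting that together, the relevant part of the second cluster's values file would look something like this. This is a sketch that assumes the secret name and mount path from the command above; your label selector may differ:

```yaml
client:
  enabled: true
  # Join the servers in the first cluster via cloud auto-join,
  # reading the mounted kubeconfig to reach that cluster's API.
  join:
    - 'provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector="app=consul,component=server"'
  # Mount the kubeconfig secret into the client pods.
  extraVolumes:
    - type: secret
      name: cluster1-kubeconfig
      load: false
```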

Hope this helps!

$HOME/.kube/config was the context I needed. Thanks!

Hi again @ishustava1 ,

I have another question about this documentation page. There is a statement:

We set the externalServers.tlsServerName to server.dc1.consul. This is the DNS SAN (Subject Alternative Name) that is present in the Consul server's certificate.

I want to make sure I have the correct domain name, but when I inspect the server’s certificate that was auto-generated, there isn’t a DNS SAN.

I'm getting this cert from `kubectl get secret <name>-consul-ca-cert`, converting the base64 to PEM, and using an online certificate decoder to look at the cert details.
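In case it helps to see my exact steps, this is equivalent to what I'm doing, done locally with openssl instead of the web tool (assuming the cert is stored under the secret's tls.crt key, which is what the Helm values reference):

```shell
# Pull the cert out of the secret and inspect its extensions locally
kubectl get secret <name>-consul-ca-cert -o jsonpath='{.data.tls\.crt}' \
  | base64 -d > ca.pem
openssl x509 -in ca.pem -noout -text | grep -A1 'Subject Alternative Name'
```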

Is this a problem with the docs, did I do something wrong with the Consul server configuration, or am I looking at the wrong cert?