Now, when I try to schedule a deployment following this link, it fails with:
kubernetes_deployment.nginx: Creating...

Error: Failed to create deployment: Post "https://<ip>/apis/apps/v1/namespaces/default/deployments": x509: certificate signed by unknown authority

  on kubernetes.tf line 21, in resource "kubernetes_deployment" "nginx":
  21: resource "kubernetes_deployment" "nginx" {
Welcome! How did you create the key and certificate in that config? And did you create any when deploying your GKE cluster? My guess is that they aren't signed by the same CA chain.
I get the error below after removing the client_certificate, client_key, and cluster_ca_certificate parameters:
kubernetes_deployment.nginx: Still creating... [30s elapsed]
2020/12/13 16:45:23 [DEBUG] kubernetes_deployment.nginx: apply errored, but we're indicating that via the Error pointer rather than returning it: Failed to create deployment: Post "http:///apis/apps/v1/namespaces/default/deployments": dial tcp :80: i/o timeout
Sorry, my k8s experience is limited. I’ll see whether I can get this demo up and running tomorrow. Hopefully someone with more experience can chime in with help before then.
Error: Failed to create deployment: deployments.apps is forbidden: User "system:anonymous" cannot create resource "deployments" in API group "apps" in the namespace "default"
I suppose I will raise a new ticket for this.
Thank you so much @jlj7 for looking at this and replying, much appreciated.
I'm currently working on the same issue, and I can tell you why you are getting the "system:anonymous" message: as of K8s 1.19, basic authentication (i.e., username and password) to the Kubernetes API has been disabled. Your Kubernetes provider therefore can't log into the cluster API, and the cluster treats you as an anonymous user.
I have not been able to test this idea, but I think that if you enable "Client certificate" in the container cluster config, you should be able to use the client_certificate and client_key fields in the Kubernetes provider definition.
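In Terraform terms, a rough sketch of what I mean (untested; the resource name `google_container_cluster.primary` and the cluster name/location are placeholders, so adjust them to your config):

```hcl
# Ask GKE to issue a client certificate when the cluster is created.
resource "google_container_cluster" "primary" {
  name     = "my-gke-cluster"   # hypothetical name
  location = "us-central1"      # hypothetical location

  master_auth {
    client_certificate_config {
      issue_client_certificate = true
    }
  }
}

# Feed the issued certificate and key to the Kubernetes provider.
# The master_auth attributes are base64-encoded, hence base64decode().
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  client_certificate     = base64decode(google_container_cluster.primary.master_auth[0].client_certificate)
  client_key             = base64decode(google_container_cluster.primary.master_auth[0].client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}
```

Since the client certificate and the CA certificate both come from the same cluster resource here, they should be in the same chain, which I'd expect to also clear the original x509 error.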
I have a legacy cluster that was upgraded to 1.19 while my Terraform code was still using basic auth (long story), and I am currently trying to use the exec plugin to pull credentials from gcloud. Fingers crossed.
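I haven't verified this yet, but the shape I'm aiming for is roughly the following (the credential helper command and the cluster reference are assumptions on my part):

```hcl
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)

  # Defer authentication to an external credential helper instead of
  # embedding static credentials in the provider block.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}
```

An alternative I've seen is to skip exec entirely and set `token = data.google_client_config.default.access_token` (with a `data "google_client_config" "default" {}` block), which hands the provider your current gcloud OAuth token directly.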