How to get hostname of OVH Load Balancer created via nginx helm release

Hi everyone, I’d really appreciate some help on this one.

So, I’m creating an OVH Public Cloud Kubernetes cluster, which has gone great so far.

I have deployed an nginx ingress controller via the helm_release resource:

resource "helm_release" "nginx_ingress" {
  name       = "ingress-nginx"
  namespace = "ingress-nginx"
  create_namespace = true

  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  values = [var.helm.nginx-ingress.values]
}

As the service type is set to LoadBalancer, OVH creates a load balancer for me automatically, great! However, I'm struggling to work out how to extract that new load balancer's hostname/IP address via Terraform, as I'd then like to create DNS records that point to it.
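For context, the ingress-nginx chart keeps the controller Service type under `controller.service.type`; a stripped-down, illustrative set of values that produces a LoadBalancer Service would look like this (my real values contain more than this):

```
# Illustrative only - the chart value path controller.service.type is what
# controls the Service type; LoadBalancer is what makes OVH provision the LB.
locals {
  nginx_ingress_values = yamlencode({
    controller = {
      service = {
        type = "LoadBalancer"
      }
    }
  })
}
```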

I can get it manually via `kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"`, but I can't find a way to do it in Terraform.
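For reference, the sort of declarative equivalent I was hoping for would look roughly like this (untested sketch; it assumes the hashicorp/kubernetes provider is configured against this cluster and that its kubernetes_service data source exposes the load balancer status under the attribute path shown):

```
# Untested sketch - assumes the hashicorp/kubernetes provider is configured for
# this cluster; the status attribute path below is my assumption.
data "kubernetes_service" "ingress_nginx_controller" {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "ingress-nginx"
  }
}

output "lb_hostname_from_data_source" {
  value = data.kubernetes_service.ingress_nginx_controller.status[0].load_balancer[0].ingress[0].hostname
}
```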

I have tried:

Any ideas?

Thanks!

So, I figured out a way to do this.

I used a null_resource coupled with a local-exec provisioner to dump the kubeconfig file to disk (slightly dirty, I know).

I then run a shell command via the Invicton-Labs/shell-resource/external module to get the LB service's hostname.

I then use another null_resource coupled with a local-exec provisioner to delete the kubeconfig from disk.

I then output the shell command's stdout, which should be the LB's hostname.

resource "null_resource" "create_kubeconfig" {
  provisioner "local-exec" {
    command = "echo \"${module.k8s.kubeconfig_file}\" >> \"${path.module}/kubeconfig\" && chmod 0600 \"${path.module}/kubeconfig\""
  }
  depends_on = [
    module.k8s
  ]
}

module "k8s_lb_hostname_query" {
  source  = "Invicton-Labs/shell-resource/external"
  working_dir = path.module

  // The command to run on resource creation on Unix machines
  command_unix         = "kubectl --kubeconfig \"${path.module}/kubeconfig\" get service ingress-nginx-controller -n ingress-nginx -o jsonpath=\"{.status.loadBalancer.ingress[0].hostname}\""
  depends_on = [
    null_resource.create_kubeconfig,
    helm_release.nginx_ingress
  ]
}

resource "null_resource" "rm_kubeconfig" {
  provisioner "local-exec" {
    command = "rm -f \"${path.module}/kubeconfig\""
  }
  depends_on = [
    null_resource.create_kubeconfig,
    module.k8s_lb_hostname_query
  ]
}```
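
And finally the output itself. This relies on my assumption that the Invicton-Labs module exposes the command's standard output as an output named stdout, which is how I'm reading the value back:

```
# Assumes the Invicton-Labs/shell-resource/external module exposes the
# command's standard output as "stdout".
output "lb_hostname" {
  value = module.k8s_lb_hostname_query.stdout
}
```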

:)

A common alternative is to not manage the DNS records with Terraform at all and instead use ExternalDNS, which watches Kubernetes Ingresses and Services and configures external DNS providers (AWS Route 53, Google Cloud DNS and others) for you: https://github.com/kubernetes-sigs/external-dns
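If you go that route, Terraform never needs to read the hostname back: you run ExternalDNS in the cluster (pointed at a DNS provider it supports for your zone) and annotate the controller Service with the record you want. A rough sketch against the ingress-nginx chart, reusing your existing helm_release (the hostname is a placeholder, and the annotation key escaping follows the helm provider's convention for dots in set names):

```
# Sketch: same helm_release as above, plus an annotation on the controller
# Service telling ExternalDNS which record to create.
# "myapp.example.com" is a placeholder hostname.
resource "helm_release" "nginx_ingress" {
  name             = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  values = [var.helm.nginx-ingress.values]

  set {
    name  = "controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
    value = "myapp.example.com"
  }
}
```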
