Kubernetes Persistent Volumes with DigitalOcean?

Is it possible to configure a kubernetes_persistent_volume using DigitalOcean volumes with Terraform? I can do it with a YAML file for kubectl, e.g.:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: myapp
  name: myapp-config-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

but when I try to do the same with Terraform, it wants a persistent_volume_source. Passing an empty persistent_volume_source gets through terraform plan, but when I try to apply, it complains: "Error: PersistentVolume "myapp-config-pv" is invalid: spec: Required value: must specify a volume type"

resource "kubernetes_persistent_volume" "myapp-config-pv" {
  metadata {
    name = "myapp-config-pv"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    storage_class_name = "do-block-storage"
    persistent_volume_source { }
    capacity = {
      storage = "1Gi"
    }
  }
}

I don’t really know where to go from here. None of the arguments in the TF docs ( https://www.terraform.io/docs/providers/kubernetes/r/persistent_volume.html#persistent_volume_source-1 ) seem relevant. What am I missing?

Hi! It looks like you are using the kubernetes_persistent_volume resource, but you need kubernetes_persistent_volume_claim instead. With managed providers like DigitalOcean, you generally do not need to create a PV at all, just a PVC. Even in a non-managed setup, the PV is usually created by the cluster administrator, while the PVC is what the end user works with.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).

So you want something like:

resource "kubernetes_persistent_volume_claim" "myapp-config-pvc" {
  metadata {
    name = "myapp-config-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    storage_class_name = "do-block-storage"
    resources {
      requests = {
        storage = "1Gi"
      }
    }
  }
}

That fixed it, thanks! I have a related question:

In the YAML I'm using above, the name of the claim is "myapp-config-pvc". When I do the deployment using kubectl, I use that same claimName to specify the volume. But in Terraform, I have to use "kubernetes_persistent_volume_claim.myapp-config-pvc.metadata.0.name".
I understand that the volume actually created by DigitalOcean got a different name, and that's what's being looked up with "[...].metadata.0.name", but I wonder why I can't just use the name of the claim. It seems like an extra step that Terraform should be able to do, especially since kubectl can. Did I do it wrong somehow, or is this expected behaviour?
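For reference, this is roughly what that reference looks like in context. The deployment, container, and image names below are hypothetical placeholders; only the claim reference itself comes from the thread above:

```hcl
# Sketch of consuming the claim from a Terraform deployment.
# "myapp", its labels, and the image are made-up examples.
resource "kubernetes_deployment" "myapp" {
  metadata {
    name = "myapp"
  }
  spec {
    selector {
      match_labels = {
        app = "myapp"
      }
    }
    template {
      metadata {
        labels = {
          app = "myapp"
        }
      }
      spec {
        container {
          name  = "myapp"
          image = "myapp:latest"
          volume_mount {
            name       = "config"
            mount_path = "/etc/myapp"
          }
        }
        volume {
          name = "config"
          persistent_volume_claim {
            # Referencing the resource attribute rather than hard-coding
            # the string "myapp-config-pvc" also gives Terraform the
            # dependency edge between the two resources, so the claim is
            # created before the deployment that mounts it.
            claim_name = kubernetes_persistent_volume_claim.myapp-config-pvc.metadata.0.name
          }
        }
      }
    }
  }
}
```

Hard-coding claim_name = "myapp-config-pvc" would also work, since claim_name is just a string matched against the claim's metadata name (the same way kubectl matches claimName), but you would lose the implicit ordering between the resources.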