I followed the Nomad guide for creating CSI volumes, and I now have:
[root@cac1prodndc5 ~]# nomad plugin status aws-ebs0
ID = aws-ebs0
Provider = ebs.csi.aws.com
Version = v0.10.1
Controllers Healthy = 1
Controllers Expected = 1
Nodes Healthy = 4
Nodes Expected = 4
Allocations
ID Node ID Task Group Version Desired Status Created Modified
6b3ea6eb 7d415c3f nodes 2 run running 4d19h ago 4d19h ago
eda72a2e 5662f12b nodes 2 run running 4d21h ago 4d21h ago
5deb2f00 c8827229 controller 0 run running 4d21h ago 4d21h ago
7f1698e3 c8827229 nodes 2 run running 4d21h ago 4d21h ago
adc21354 f6164718 nodes 2 run running 4d21h ago 4d21h ago
[root@cac1prodndc5 ~]# nomad volume status ops-span2-linux
ID = ops-span2-linux
Name = ops-span2-linux
Namespace = default
External ID = vol-0b638cd31e9fa6864
Plugin ID = aws-ebs0
Provider = ebs.csi.aws.com
Version = v0.10.1
Capacity = 0 B
Schedulable = true
Controllers Healthy = 1
Controllers Expected = 1
Nodes Healthy = 4
Nodes Expected = 4
Access Mode =
Attachment Mode =
Mount Options =
Namespace = default
Allocations
No allocations placed
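For context, I registered the volume with a spec roughly like the following (a sketch based on the guide; everything except the external ID and plugin ID shown in the status above is reconstructed from memory, and depending on Nomad version the capabilities go in a capability block or in top-level access_mode/attachment_mode attributes):

```hcl
# volume.hcl -- registration spec sketch (values other than the
# external_id and plugin_id from the status output are assumptions)
type        = "csi"
id          = "ops-span2-linux"
name        = "ops-span2-linux"
external_id = "vol-0b638cd31e9fa6864"
plugin_id   = "aws-ebs0"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
```

registered via `nomad volume register volume.hcl`.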
However, when trying to use this volume in my job file via:
volume "span2_data" {
  type            = "csi"
  read_only       = false
  source          = "ops-span2-linux"
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
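In case it matters, the task consumes the group-level volume with a volume_mount block along these lines (the task name, driver, and destination path here are just illustrative, not my exact job file):

```hcl
task "server" {
  driver = "docker"

  volume_mount {
    volume      = "span2_data"  # must match the group-level volume label
    destination = "/data"       # example mount path inside the task
  }
}
```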
Nomad tells me:
* Constraint missing CSI Volume ops-span2-linux filtered 4 nodes
Why would it say that the CSI volume is missing when the status output shows it as healthy and schedulable?
One interesting note is that the following message gets repeated in the stderr for my ebs-controller job:
I0703 12:02:59.443371 1 controller.go:352] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
Any help would be greatly appreciated.