Vault Agent Injector leader election for HA in Kubernetes

Hi all,

After enabling the leaderElector option in the Helm chart, does anyone know why the second injector pod is constantly trying to get information about the Kubernetes cluster nodes?

Here are some Kubernetes audit log entries from AWS EKS 1.20:

{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "level": "Request",
    "auditID": "b00969a3-0af0-42ee-8cad-5c586aed3c54",
    "stage": "ResponseComplete",
    "requestURI": "/api/v1/nodes/ip-10-22-2-28.us-east-2.compute.internal",
    "verb": "get",
    "user": {
        "username": "system:serviceaccount:default:injection-agent-injector",
        "uid": "006c1be9-b49d-4f42-a362-0c2b74e3b612",
        "groups": [
            "system:serviceaccounts",
            "system:serviceaccounts:default",
            "system:authenticated"
        ]
    },
    "sourceIPs": [
        "10.22.3.84"
    ],
    "userAgent": "vault-k8s/v0.0.0 (linux/amd64) kubernetes/$Format",
    "objectRef": {
        "resource": "nodes",
        "name": "ip-10-22-2-28.us-east-2.compute.internal",
        "apiVersion": "v1"
    },
    "responseStatus": {
        "metadata": {},
        "status": "Failure",
        "reason": "Forbidden",
        "code": 403
    },
    "requestReceivedTimestamp": "2022-04-25T02:23:18.881221Z",
    "stageTimestamp": "2022-04-25T02:23:18.881621Z",
    "annotations": {
        "authentication.k8s.io/legacy-token": "system:serviceaccount:default:injection-agent-injector",
        "authorization.k8s.io/decision": "forbid",
        "authorization.k8s.io/reason": ""
    }
}

{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "level": "Request",
    "auditID": "1d92dc9c-ee27-4e82-ab89-615028945a1f",
    "stage": "ResponseComplete",
    "requestURI": "/api/v1/nodes/ip-10-22-2-28.us-east-2.compute.internal",
    "verb": "get",
    "user": {
        "username": "system:serviceaccount:default:injection-agent-injector",
        "uid": "006c1be9-b49d-4f42-a362-0c2b74e3b612",
        "groups": [
            "system:serviceaccounts",
            "system:serviceaccounts:default",
            "system:authenticated"
        ]
    },
    "sourceIPs": [
        "10.22.3.84"
    ],
    "userAgent": "vault-k8s/v0.0.0 (linux/amd64) kubernetes/$Format",
    "objectRef": {
        "resource": "nodes",
        "name": "ip-10-22-2-28.us-east-2.compute.internal",
        "apiVersion": "v1"
    },
    "responseStatus": {
        "metadata": {},
        "status": "Failure",
        "reason": "Forbidden",
        "code": 403
    },
    "requestReceivedTimestamp": "2022-04-25T02:23:36.058352Z",
    "stageTimestamp": "2022-04-25T02:23:36.058727Z",
    "annotations": {
        "authentication.k8s.io/legacy-token": "system:serviceaccount:default:injection-agent-injector",
        "authorization.k8s.io/decision": "forbid",
        "authorization.k8s.io/reason": ""
    }
}
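If it helps, the denial can also be reproduced from outside the pod using impersonation (the service account name is taken straight from the audit entries above):

    kubectl auth can-i get nodes \
      --as=system:serviceaccount:default:injection-agent-injector
    # returns "no", matching the 403 Forbidden in the audit log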

Part of the configuration I’m using in the Helm values.yaml file:

  leaderElector:
    enabled: true

  replicas: 2
  failurePolicy: "Fail"
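As I understand it, with leaderElector.enabled the replicas coordinate through a shared Kubernetes object rather than by talking to nodes; recent vault-k8s versions record the current leader in a ConfigMap (the name vault-k8s-leader below is my assumption based on the vault-k8s defaults), so which replica currently holds the lease can be checked with something like:

    kubectl get configmap vault-k8s-leader -n default -o yaml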

Could someone explain this?

Best,

Daniel LM

I wonder if this has something to do with affinity checking in k8s, since the default affinity for multiple injector replicas schedules each replica on a different k8s node (see the snippet below). But as far as I know the vault-k8s injector isn’t querying k8s nodes directly.
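For reference, the injector’s default affinity in the chart’s values.yaml looks roughly like this (paraphrased from the vault-helm defaults; the templated release name is simplified here):

    injector:
      affinity: |
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: vault-agent-injector
                  component: webhook
              topologyKey: kubernetes.io/hostname

Note that pod anti-affinity is evaluated by the kube-scheduler when it places the pods, not by the pods themselves, so it shouldn’t require the injector’s service account to read node objects.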