Hi,
We are running Vault 1.10.3 on GKE 1.25, with a GCS bucket as the storage backend.
This has worked for months, but in the last few weeks we have seen strange behaviour where the active pod's memory consumption climbs steadily until the pod crashes with an OOM, no matter how much memory the node has.
I noticed that there are thousands of entries under
gcs://<gcs_bucket>/sys/expire/id/auth//login
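In case it helps, here is a rough sketch of how to count these lease-expiration entries. The bucket name and auth mount path are placeholders for our actual values, and the exact prefix may differ in your setup:

```shell
# Count lease entries stored by the GCS backend under sys/expire/.
# "<gcs_bucket>" is a placeholder; -r lists objects recursively, so the
# count is approximate (it includes prefix/header lines in the output).
gsutil ls -r "gs://<gcs_bucket>/sys/expire/id/auth/" | wc -l

# The same leases can also be inspected through Vault itself, e.g. for a
# hypothetical auth mount called "kubernetes":
vault list sys/leases/lookup/auth/kubernetes/login
```

Both numbers kept growing while memory was climbing.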
At first I followed the steps in this thread and memory usage went back to normal.
Once memory had settled back to its usual values, I restarted the pods to reset the "restart" counter to zero; I thought that would make it easier to check whether restarts were still happening. However, after doing that, the previous behaviour came back.
Any hints? Has anyone experienced this issue?
Many thanks