Migrate Vault from a server instance cluster to EKS containers

Hi,
My current Vault setup is two servers, one active and one passive, with a PostgreSQL RDS backend. I'm in the process of migrating to EKS. I've followed the tutorial for setting up and configuring a new Vault cluster in EKS, and I'm attempting to connect it to a backend PostgreSQL RDS instance instead of MySQL. Is there a guide for this? Is it documented somewhere?
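
For reference, this is roughly the storage stanza I'm aiming for in the server config (the connection details below are placeholders, not my real values). From the PostgreSQL storage backend docs it also looks like the vault_kv_store table, plus vault_ha_locks for HA, have to be created in the database up front:

storage "postgresql" {
  # Placeholder RDS endpoint and credentials
  connection_url = "postgres://vault_user:vault_pw@my-rds-endpoint:5432/vault?sslmode=verify-full"
  ha_enabled     = "true"
}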

Thanks,
Jason

Got almost everything working; I'll paste my custom YAML below. I had to point the audit log at stdout. Now I need these pods to register with the Consul cluster in order to properly initialize consul-template to get secrets from Vault (not to use Consul as backend storage).

global:
  image: artifactory/docker/hashicorp/vault:1.18
  imageK8S: artifactory/docker/hashicorp/vault-k8s:1.6.1

injector:
  enabled: true
  tolerations: |
    - key: "mines"
      operator: "Equal"
      value: "true"

server:
  service:
    type: NodePort
  volumes:
    - name: my-ca-bundle
      configMap:
        name: my-ca-bundle.pem
  volumeMounts:
    - name: my-ca-bundle
      mountPath: /etc/ssl/certs/my-bundle.pem
      subPath: my-ca-bundle.pem
      readOnly: true
  tolerations: |
    - key: "Mines"
      operator: "Equal"
      value: "true"
  affinity: ""
  ha:
    enabled: true
    replicas: 2
    apiAddr: "https://${HOST}"
    config: |
      ui = "true"
      default_lease_ttl = "10h"
      max_lease_ttl = "240h"
      raw_storage_endpoint = "true"
      log_level = "info"
      plugin_directory = "/etc/vault/vault_plugins"

      listener "tcp" {
        tls_disable = "true"
        address = "0.0.0.0:8200"
        cluster_address = "0.0.0.0:8201"
      }

      storage "postgresql" {
        connection_url = "postgres://{VAULT}:{VAULTPW}@{VAULTRDS}:5432/Vault?sslmode=verify-full"
        ha_enabled = "true"
      }

      path "secret/*" {
        capabilities = ["create", "read", "update", "delete", "list"]
      }

      api_addr = "https://{HOST}"

      seal "awskms" {
        kms_key_id = "vaultKmsUnsealToken"
        endpoint = ""
      }
  ingress:
    enabled: true
    activeService: false
    hosts:
      - host: HOST
    annotations: |
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/group.name: {PRE}-eks
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/ssl-redirect: '443'
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
      alb.ingress.kubernetes.io/load-balancer-name: ${PRE}-eks-alb
      alb.ingress.kubernetes.io/security-groups: $ALB_SG
      alb.ingress.kubernetes.io/certificate-arn: $CERT,$BETA
      alb.ingress.kubernetes.io/subnets: $SUBA,$SUBB
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
      alb.ingress.kubernetes.io/healthcheck-path: /v1/sys/health
      alb.ingress.kubernetes.io/healthcheck-port: traffic-port
      alb.ingress.kubernetes.io/backend-protocol: HTTP
      alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
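
(One thing I noticed while tidying this up: the path "secret/*" block in the config above is Vault ACL policy syntax rather than server configuration, so I still need to pull it out into its own policy file and load it separately. Roughly, with a hypothetical file name:)

# secret-rw.hcl (hypothetical name): ACL policy granting full CRUD and list on secret/*
path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}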

I'm actually looking to have the Vault serf health status available from the Consul nodes in the Consul UI.
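
From the docs, the way to get Vault registered with Consul (and showing up with health status in the Consul UI) without switching storage back to Consul looks to be the service_registration stanza. A minimal sketch of what I'm planning to add to the ha config block above; the Consul address is a placeholder for wherever the Consul client agent or server is reachable from the Vault pods:

service_registration "consul" {
  # Placeholder: Consul agent/server address reachable from the Vault pods
  address = "consul-server.consul.svc.cluster.local:8500"
  service = "vault"
}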