K8s deployment goes into CrashLoopBackOff

I have the following simple config for my k8s Deployment. The sidecar container and all the other steps required for secret injection into a pod in the k8s cluster work fine. However, it only works if I specify the nginx image; every other image triggers CrashLoopBackOff, and the pod displays no logs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  selector:
    matchLabels:
      app: web-app
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "test-db"
        vault.hashicorp.com/agent-inject-secret-injected-creds.txt: "test/data/database/db-creds"
        vault.hashicorp.com/agent-inject-template-injected-creds.txt: |
          {{- with secret "test/data/database/db-creds" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
          {{- end -}}
      labels:
        app: web-app
    spec:
      serviceAccountName: test-database
      containers:
        - name: web-app
          # only works if I set the image to nginx; no other image works
          image: alpine:latest

I think this might be because alpine's default command is just /bin/sh, which exits immediately when run non-interactively with no command provided, whereas nginx runs in the foreground indefinitely by default.

e.g. if I run docker run --rm alpine, it exits immediately, but docker run --rm nginx runs until I cancel it. Kubernetes expects a Deployment's containers to be long-running processes, so it treats the alpine container's exit as a "crash", restarts it, and after repeated restarts backs off into CrashLoopBackOff.
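If that's the cause, the fix is to give the container an explicit command that keeps a foreground process running. A minimal sketch of the container spec (tail -f /dev/null is just a placeholder to keep the pod alive; a real workload would run its own foreground process instead):

      containers:
        - name: web-app
          image: alpine:latest
          # placeholder foreground process so the container doesn't
          # exit immediately and trigger CrashLoopBackOff
          command: ["sh", "-c", "tail -f /dev/null"]

With the pod running, you can kubectl exec into it and cat /vault/secrets/injected-creds.txt to confirm the injection worked (the agent injector writes rendered secrets under /vault/secrets/ by default).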

@tomhjp thank you so much for that clear explanation, that totally makes sense!