Container starts then immediately stops

    2022-12-22T11:13:53.823-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=e6cf0743-b0b1-675e-6fa8-9efd2ab14791 task=vault_task @module=logmon path=/var/lib/nomad/alloc/e6cf0743-b0b1-675e-6fa8-9efd2ab14791/alloc/logs/.vault_task.stdout.fifo timestamp=2022-12-22T11:13:53.823-0700
    2022-12-22T11:13:53.823-0700 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=e6cf0743-b0b1-675e-6fa8-9efd2ab14791 task=vault_task @module=logmon path=/var/lib/nomad/alloc/e6cf0743-b0b1-675e-6fa8-9efd2ab14791/alloc/logs/.vault_task.stderr.fifo timestamp=2022-12-22T11:13:53.823-0700
    2022-12-22T11:13:53.876-0700 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=0580e5c09e2071098f354a4c674e4b305557e060429b46a8338fe92923c5dcb8
    2022-12-22T11:13:53.974-0700 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=0580e5c09e2071098f354a4c674e4b305557e060429b46a8338fe92923c5dcb8
    2022-12-22T11:13:55.094-0700 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=e6cf0743-b0b1-675e-6fa8-9efd2ab14791 task=vault_task reason="Policy allows no restarts"
    2022-12-22T11:13:55.098-0700 [INFO]  client.gc: marking allocation for GC: alloc_id=e6cf0743-b0b1-675e-6fa8-9efd2ab14791
  • allocation status (output of nomad alloc status):
ID                     = 38ac2f61-14af-183d-d375-ae7ddaed74a8
Eval ID                = 696270fb-ecc8-3613-be9a-7f8ce9472e94
Name                   = dev_core.vault_group[0]
Node ID                = 9529d1bf-948d-bd58-b051-2f8b34537400
Node Name              = development_nirvai_core_leader
Job ID                 = dev_core
Job Version            = 0
Client Status          = failed
Client Description     = Failed tasks
Desired Status         = run
Desired Description    = <none>
Created                = 2022-12-22T11:19:56-07:00
Modified               = 2022-12-22T11:20:00-07:00
Deployment ID          = e83d5989-9151-9683-5f8e-71b4bc9dbeab
Deployment Health      = unhealthy
Reschedule Eligibility = 31s from now
Evaluated Nodes        = 1
Filtered Nodes         = 0
Exhausted Nodes        = 0
Allocation Time        = 43.003µs
Failures               = 0

Allocation Addresses (mode = "bridge"):
Label   Dynamic  Address
*vault  yes      192.168.0.5:8300

Task "vault_task" is "dead"
Task Resources:
CPU        Memory           Disk     Addresses
0/100 MHz  9.0 MiB/300 MiB  300 MiB  

Memory Stats
Cache  Swap  Usage
0 B    0 B   9.0 MiB

CPU Stats
Percent  Throttled Periods  Throttled Time
0.00%    0                  0

Task Events:
Started At     = 2022-12-22T18:19:57Z
Finished At    = 2022-12-22T18:19:59Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                       Type             Description
2022-12-22T11:19:59-07:00  Alloc Unhealthy  Unhealthy because of failed task
2022-12-22T11:19:59-07:00  Not Restarting   Policy allows no restarts
2022-12-22T11:19:59-07:00  Terminated       Exit Code: 1, Exit Message: "Docker container exited with non-zero exit code: 1"
2022-12-22T11:19:57-07:00  Started          Task started by client
2022-12-22T11:19:57-07:00  Task Setup       Building Task Directory
2022-12-22T11:19:56-07:00  Received         Task received by client

Placement Metrics
Node                                  binpack  job-anti-affinity  node-affinity  node-reschedule-penalty  final score
9529d1bf-948d-bd58-b051-2f8b34537400  0.014    0                  0              -1                       -0.493

  • If I run the container directly from the local registry, everything works.

  • When I run it through Nomad it doesn't work :frowning: the container exits immediately.

  • I set the restart policy to 0 attempts because otherwise it would just restart forever (see the sketch below).
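
A minimal sketch of the restart stanza that produces the "Policy allows no restarts" message seen in the agent log (group level; delay/interval left at their defaults):

    restart {
      attempts = 0      # never restart the task in place
      mode     = "fail" # mark the allocation failed instead of retrying
    }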

  • It runs from the local registry without any env vars (just to check whether it's an env thing).

  • I think I found it: the container can't find the cert.

  • If I run it with Compose and exec into the container, the certs are where they should be, so it's something to do with how I'm binding the volumes in the Nomad jobspec. Roughly what the binding looked like at this point is sketched below.
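
A sketch of the docker driver binding at this point (image name and paths are placeholders, not the real jobspec):

    task "vault_task" {
      driver = "docker"

      config {
        image = "registry.local/vault:latest"  # placeholder image

        # "host:container" bind mounts; relative host paths resolve
        # inside the allocation directory
        volumes = [
          "local/config:/vault/config",
          "local/data:/vault/file",
        ]
      }
    }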

Haha, rookie mistake: there are three volumes and I only supplied two in the Nomad config.
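
For anyone hitting the same thing, the fix was just adding the missing third mount (paths illustrative, matching the sketch above):

    volumes = [
      "local/config:/vault/config",
      "local/data:/vault/file",
      "local/certs:/vault/certs",  # the mount I had left out
    ]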

bam