Containerized Spring Boot Nomad job sometimes fails to read from application.yml

Hi all,

I have a Nomad job with a Docker task. The image is a containerized Spring Boot application.

Inside application.yml there’s a JDBC URL configuration that looks like this:

spring:
  datasource:
    url: "jdbc:mysql://${MY_MYSQL_HOST}:${MY_MYSQL_PORT}/${MY_MYSQL_DBNAME}?useSSL=false&serverTimezone=UTC&allowPublicKeyRetrieval=true"
    username: "${MY_MYSQL_USER}"
    password: "${MY_MYSQL_PASSWORD}"

where the MY_MYSQL_* variables are environment variables passed via the env stanza in the job.hcl file, like this:

      env {
        MYSAS_MYSQL_HOST = "..."
        MYSAS_MYSQL_PORT = "..."
        MYSAS_MYSQL_DBNAME = "..."
        MYSAS_MYSQL_USER = "..."
        MYSAS_MYSQL_PASSWORD = "..."
      }
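
For context: as far as I understand, Spring Boot resolves the ${VAR} placeholders itself against environment variables, and a missing variable normally makes startup fail fast with a "Could not resolve placeholder" error. There is also a ${VAR:default} fallback syntax; a minimal sketch with made-up defaults (not my real config):

```yaml
spring:
  datasource:
    # ${VAR:default} falls back to the default when the variable is unset,
    # so the app can start even if the environment was not injected
    url: "jdbc:mysql://${MY_MYSQL_HOST:localhost}:${MY_MYSQL_PORT:3306}/${MY_MYSQL_DBNAME:testdb}"
```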

I have a number of Spring Boot modules with the same JDBC configuration (2 of them have identical application.yml files), and job.hcl files that differ essentially only in the job name.

On my single-node development Nomad installation, 2 of the 3 jobs deploy successfully while the 3rd fails because Spring Boot cannot read the JDBC configuration from application.yml. It looks as if the environment variables were not set by Nomad, but I printed them out and they have in fact been set.
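
A quick way to double-check is reading the environment from inside the running allocation itself, rather than from the application logs. A sketch (the alloc ID is a placeholder, and the exported value is an example, not my real host):

```shell
# Against the real cluster (alloc ID is a placeholder):
#   nomad alloc exec -task app <alloc-id> env | grep MYSAS
# The same filter, runnable anywhere, just to show the pattern:
export MYSAS_MYSQL_HOST=192.0.2.10   # example value only
env | grep MYSAS
```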

The same Spring Boot modules work perfectly when started on a regular host (not a Nomad deployment).

My job.hcl content:

job "...job name..." {

  datacenters = ["dc1"]
  type        = "service"
  name        = "...job name..."
  
  update {
  
    # Maximum number of allocations updated in parallel
    max_parallel     = 1
  
    # Stabilization window: how long Nomad waits
    # for updated allocations to become 'healthy'
    min_healthy_time = "30s"

    # Maximum time allowed for the update before considering it failed
    healthy_deadline = "5m"
  
    # Enable automatic rollback
    auto_revert = true
    
    # Update strategy (rolling, canary, etc.)
    canary = 0
  }    

  # ================================
  # Allocation group
  # ================================
  group "main" {
    count = 1

    network {
      port "listen_port" {
        to = 8080
      }
      port "mgmt_port"   {
        to = 8081
      }
    }

    # ================================
    # Single Consul service
    # ================================
    service {
      name = "...job name..."
      port = "listen_port"

      # Health check on Spring Actuator
      check {
        name     = "check-primary"
        type     = "http"
        port     = "mgmt_port"
        path     = "/actuator/health/liveness"
        interval = "10s"
        success_before_passing = 1
        timeout  = "1s"
      }
    }

    # ================================
    # Docker task
    # ================================
    task "app" {
      driver = "docker"
      
      env {
        SERVER_PORT = 8080
        MANAGEMENT_SERVER_PORT = 8081
        MAX_UPLOAD_SIZE = "10MB"
        LOG_LEVEL = "DEBUG"
        MYSAS_MYSQL_HOST = "...ip address..."
        MYSAS_MYSQL_PORT = ...port...
        MYSAS_MYSQL_DBNAME = "...db name..."
        MYSAS_MYSQL_USER = "...db user..."
        MYSAS_MYSQL_PASSWORD = "...db password..."
      }

      config {
        image = "...private registry url.../...private registry path.../...job name...:0.0.2"
        auth {
            username = "...private registry user..."
            password = "...private registry pwd..."
        }
        ports = ["listen_port", "mgmt_port"]
      }

      resources {
        cpu    = 500
        memory = 512
      }
    }
  }
}

Please help! Thank you in advance

Hi, so one thing that stands out is that your config has ${MY_ but your env variables are MYSAS_*.

Does your JDBC configuration actually expand environment variables? It is not a universal thing: the program itself has to expand them, or some Docker images parse the config manually via the envsubst program; it all depends.
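
To illustrate the point: the ${VAR} text is inert until something expands it. A plain shell heredoc (unquoted delimiter) shows what an envsubst-style expansion pass would produce; the values are made up:

```shell
# The shell expands ${VAR} inside an unquoted heredoc, mimicking what
# an entrypoint running envsubst over a config template would do.
export MY_MYSQL_HOST=db.internal MY_MYSQL_PORT=3306
cat <<EOF
spring:
  datasource:
    url: "jdbc:mysql://${MY_MYSQL_HOST}:${MY_MYSQL_PORT}/mydb"
EOF
```

If nothing performs such a pass and the application does not resolve placeholders itself, the literal text ${MY_MYSQL_HOST} reaches the JDBC driver.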

Hi, no, that was just me trying to edit out sensitive information :smiley: The config and the env are actually consistent.

Basically, it looks as if Nomad sometimes fails to inject the env variables before starting the Spring Boot containers. I can’t explain it any other way: the jobs and the Spring Boot applications are essentially identical in configuration, yet 2 of the 3 jobs deploy correctly and the third can’t find its configuration.

It’s not a CPU/RAM problem, because the host machine is well resourced.

As far as I know, each Docker task in each job receives its own copy of the environment (the same environment variable name can be used by different job tasks), but even if an environment variable were shared between jobs it wouldn’t matter here, because all the jobs connect to the same database :smiley:
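
If the suspicion really is a race between env injection and container start, one way to take the env stanza out of the equation is Nomad’s template block with env = true, which is rendered before the task starts. A minimal sketch with placeholder values (not my actual job):

```hcl
task "app" {
  driver = "docker"

  # Nomad renders this file before starting the task and, with env = true,
  # loads each KEY=value line into the task's environment.
  template {
    destination = "secrets/db.env"
    env         = true
    data        = <<EOT
MYSAS_MYSQL_HOST=192.0.2.10
MYSAS_MYSQL_PORT=3306
EOT
  }
}
```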