CI/CD with Nomad

Hello folks!
I’ve set up Nomad & Consul clusters and am now exploring options for CI/CD.
I have the following setup:

  • private Git repository hosted on gitlab.com
  • private Docker registry
  • Nomad cluster in a private VPC, sitting behind a load balancer; the Nomad/Consul endpoints are not exposed to the world, only the respective webapp ports

At the moment my flow looks like this:

  • I push code
  • GitLab runs tests and builds Docker image
  • GitLab pushes Docker image to the private registry
  • I pick up the new image and manually update my Nomad job using the UI

I want to automate this flow. What tools would suit this best, given all the limitations?
I’m considering the following options:

  • Set up Waypoint and move the build from GitLab to Waypoint; I don’t like this option because I want to keep my pipelines in GitLab
  • Set up a background service that periodically polls GitLab and redeploys when a new image has been built
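
For comparison, here is a hedged sketch of what that automation could look like as a GitLab CI deploy stage. The job name, runner tag, and Nomad address are illustrative assumptions, not from your setup; the key idea is that the runner sits inside the VPC, so the Nomad API stays private:

```yaml
# Hypothetical .gitlab-ci.yml deploy stage; runs on a runner inside the
# VPC so the Nomad API never needs to be exposed publicly.
deploy:
  stage: deploy
  tags: [vpc-runner]              # illustrative tag for a runner near Nomad
  variables:
    NOMAD_ADDR: "http://nomad.service.consul:4646"   # assumed internal address
  script:
    # exit code 1 from `nomad job plan` only means "changes present"
    - nomad job validate job.nomad.hcl
    - nomad job plan job.nomad.hcl || [ $? -eq 1 ]
    - nomad job run job.nomad.hcl
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

A NOMAD_TOKEN would come from a masked CI/CD variable rather than being committed to the file.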

Nomad itself will do most of this for you. If data or variables change from one deploy to the next, a nomad plan / nomad run will pick the changes up and apply them for you.

Doing things like this would allow you to keep the investment in Gitlab pipelines (and perhaps even help clean them up or obsolete many steps).

If you take advantage of the update stanza, you can even implement useful continuous deployment concepts like canary checks and blue/green deployments directly.
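
For reference, a minimal sketch of such an update stanza inside a task group (the values are illustrative, not a recommendation):

```hcl
group "web" {
  update {
    max_parallel     = 1
    canary           = 1        # setting canary = count gives blue/green
    min_healthy_time = "10s"
    healthy_deadline = "5m"
    auto_revert      = true     # roll back if the new version is unhealthy
    auto_promote     = false    # promote canaries manually (or set true)
  }
  # ...
}
```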

As long as the Nomad API can be reached by the nomad CLI on a runner in your pipeline, you can plan / run changes from the pipeline.

I would think just using Nomad is the easiest way to deliver changes, as long as you can trigger a nomad plan / nomad run when conditions change, e.g. when the image for a task changes.
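
One way to detect "the image for a task changed" is to compare the currently deployed image against the freshly pushed tag and only deploy on a difference. A sketch with illustrative names; the deployed image can be read from `nomad job inspect` output, e.g. with jq: `nomad job inspect myjob | jq -r '.Job.TaskGroups[0].Tasks[0].Config.image'` (a crude sed fallback is used below so the sketch runs standalone):

```shell
# Extract the task's image from `nomad job inspect`-shaped JSON and compare
# it to the tag that was just pushed; only plan/run when they differ.
current_image() {
  # crude pure-shell extraction so this sketch does not require jq
  sed -n 's/.*"image" *: *"\([^"]*\)".*/\1/p' <<<"$1" | head -n 1
}

# stand-in for real `nomad job inspect` output (hypothetical job/image names)
sample='{"Job":{"TaskGroups":[{"Tasks":[{"Config":{"image":"registry.local/app:abc123"}}]}]}}'
cur="$(current_image "$sample")"
new="registry.local/app:def456"

if [ "$cur" != "$new" ]; then
  echo "image changed: $cur -> $new (would run nomad plan / nomad run)"
else
  echo "image unchanged, skipping deploy"
fi
```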

Curious to know if this is a useful approach. I too was attracted to Waypoint when it landed, but for me it still feels like it’s from the future.


I’m using GitHub Actions but I’m sure something like this can be done in GitLab.

I run this on self-hosted runners so that I don’t have to make the Nomad API public.

jobs:
  nomad:
    name: "Run jobs"
    runs-on: self-hosted

    strategy:
      fail-fast: false
      matrix:
        job: "${{ fromJson(inputs.jobs) }}"

    env:
      NOMAD_ADDR: "http://localhost:4646"
      NOMAD_NAMESPACE: "somenamespace"
      NOMAD_TOKEN: "token"

    defaults:
      run:
        working-directory: "jobs/${{ matrix.job.dir }}"

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Check connection
        run: nomad node status

      - name: Validate job
        run: nomad validate job.nomad.hcl

      - name: Plan job
        run: |
          echo "Run nomad plan"
          # Plan will return one of the following exit codes:
          # * 0: No allocations created or destroyed.
          # * 1: Allocations created or destroyed.
          # * 255: Error determining plan results.
          out="$(nomad plan job.nomad.hcl)" && rc=0 || rc=$?
          echo "$out"
          # Exit code 1 just means the plan found changes, so only 255 fails the step.
          if [ "$rc" -eq 255 ]; then
            exit 1
          fi
      - name: Run job
        run: nomad run job.nomad.hcl

nomad run job.nomad.hcl will exit after a successful deploy, or will time out depending on the update stanza settings or your CI/CD settings.
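
That exit-code handling around nomad plan can also be factored into a small reusable gate. A minimal sketch; plan_gate is a hypothetical helper name, and the stand-in commands below merely simulate plan's three documented exit codes:

```shell
# plan_gate runs whatever plan command it is given, prints the output,
# and fails only on exit code 255 (plan error), letting exit code 1
# (changes present) pass through as success.
plan_gate() {
  local out rc=0
  out="$("$@")" || rc=$?
  printf '%s\n' "$out"
  [ "$rc" -ne 255 ]
}

# Quick self-check with stand-ins for nomad plan's documented exit codes:
plan_gate sh -c 'echo "no changes"; exit 0'   && echo "ok: 0 passes"
plan_gate sh -c 'echo "diff found"; exit 1'   && echo "ok: 1 passes"
plan_gate sh -c 'echo "plan error"; exit 255' || echo "ok: 255 blocks"
```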

One thing that I miss is the ability to run and then poll separately (useful when running batch jobs, as they exit immediately), but it is easy to write a script that does that.
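
Such a script could look roughly like this. wait_until is a hypothetical helper; in a real pipeline the polled command would be something like `nomad job status -short myjob`, and "dead" is how Nomad reports a finished batch job:

```shell
# Poll the given command until its output contains "dead", up to $1
# attempts, sleeping $2 seconds between checks.
wait_until() {
  local max_tries="$1" sleep_s="$2"; shift 2
  local i out
  for i in $(seq 1 "$max_tries"); do
    out="$("$@")"
    case "$out" in
      *dead*) echo "finished after $i checks"; return 0 ;;
    esac
    sleep "$sleep_s"
  done
  echo "timed out"
  return 1
}

# Self-check with a stand-in that reports "running" twice, then "dead";
# a counter file is used because $(...) runs the command in a subshell.
state="$(mktemp)"
fake_status() {
  echo x >>"$state"
  if [ "$(wc -l <"$state")" -ge 3 ]; then echo "Status = dead"; else echo "Status = running"; fi
}
wait_until 5 0 fake_status   # prints: finished after 3 checks
rm -f "$state"
```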


@rozhok in v0.10 of Waypoint we launched a Tech Preview of pipelines in Waypoint! We’d love to get your feedback on it to see if it addresses your use case. 🙂


hi,

we’re using GitLab too … and for deploys and checks I use “levant”. I’ve added some more tools to the image, like curl / jq / http_check (from monitoring-plugins), and also nomad.

Every project has a nomad/foo.nomad job file with everything that is needed, and it is copied via artifact to the deploy stage. The only thing that isn’t nice: if you have one-shot containers (like for Django and collectstatic), the pipeline will always fail, because not all containers stay up. I try to catch this in a different way and use curl to check the status …

cu denny
