Running a Boundary worker through a Nomad job

Hi, I am lee, and I am a Nomad user.

This time, I created a new job for the first time in a long time, to run a worker for HCP Boundary.

My job is as follows:

locals {
  version = "0.14.0"
}

job "boundary-worker" {
  type        = "service"
  datacenters = ["ung"]

  group "worker" {
    count = 1
    service {
      name     = "boundary"
      tags     = ["worker"]
      provider = "nomad"
      port     = "worker"
    }
    network {
      port "worker" {
        static = 9202
      }
    }

    task "worker" {
      driver = "docker"

      config {
        image = "hashicorp/boundary:${local.version}"
        ports = ["worker"]
        volumes = [
          "local/boundary:/boundary/",
        ]
      }

      template {
        data        = <<EOH
hcp_boundary_cluster_id = "my_boundary_id"

disable_mlock = true

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  name = "ungworker"
  description = "A worker for a docker demo"
  public_addr = "my_ip"
  auth_storage_path = "/local/boundary/"
  tags {
    type      = ["unghee"]
  }
}

EOH
        destination = "/local/boundary/config.hcl"
      }

      env {
        SKIP_SETCAP = true
      }

      resources {
        memory = 600
      }
    }
  }
}

If you run “nomad run” like this, the worker appears to be running normally.
However, stdout is not shown in my Nomad, so the worker auth registration request ID that needs to be registered in Boundary is not visible.

Is there an issue with my job? Help me, everyone.

Hello swbs90,

To troubleshoot the issue, you have two options:

Option 1: Check Docker Logs Manually

  1. Open a terminal on your machine.
  2. Run the command docker ps to list all running containers.
  3. Identify the Boundary worker container.
  4. Run docker logs <container-id> to view the logs.
  5. Scroll to the top of the logs to retrieve the key.

Option 2: Use Nomad Web UI

  1. Open your Nomad Web UI in a browser.
  2. Navigate to “Jobs” and select your “Boundary Worker” job.
  3. Go to “Allocations” and choose the most recent one (your current job).
  4. Click on “View Logs”, go to the top, and get the key (it should be in stdout; if not, check the stderr logs).
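If you prefer the CLI over the Web UI, the same logs can be fetched with nomad alloc logs and the registration token pulled out of the captured output. A minimal sketch, assuming the job and task names from the post above; the “Worker Auth Registration Request:” log label and the helper name are assumptions based on what a controller-led Boundary worker prints at startup, not something confirmed in this thread:

```shell
# Fetch the worker task's stdout (and stderr) for an allocation of the job:
#   nomad alloc logs -job boundary-worker worker
#   nomad alloc logs -stderr -job boundary-worker worker
#
# Once the logs are captured to a file, pull out the registration token.
extract_reg_token() {
  # Print whatever follows the (assumed) label on its line, first match only.
  sed -n 's/.*Worker Auth Registration Request: *//p' "$1" | head -n1
}
```

You would then paste the extracted token into the HCP Boundary admin UI (or boundary workers create) to register the worker.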

I hope that helped 🙂

Hi,

I am still not seeing any log output.

Hello,

I resolved the problem, but by using the raw_exec driver instead of docker.

This is my job:

variable "boundary_version" {
  default = "0.14.2"
}

job "boundary-worker-raw" {
  type        = "service"
  datacenters = ["ung"]

  group "worker" {
    count = 1
    service {
      name     = "boundary"
      tags     = ["worker"]
      provider = "nomad"
      port     = "worker"
    }
    network {
      port "worker" {
        static = 9202
      }
    }

    task "worker" {
      driver = "raw_exec"

      config {
        command = "tmp/boundary"
        args    = ["server", "-config=tmp/config.hcl"]
      }

      artifact {
        source      = "https://releases.hashicorp.com/boundary/${var.boundary_version}+ent/boundary_${var.boundary_version}+ent_linux_arm64.zip"
        destination = "./tmp/"
      }

      template {
        data        = <<EOH
hcp_boundary_cluster_id = "my hcp boundary id"

disable_mlock = true

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = "my worker node ip"
  auth_storage_path = "/local/boundary/ungworker"
  tags {
    type      = ["unghee"]
  }
}

EOH
        destination = "tmp/config.hcl"
      }

      env {
        SKIP_SETCAP = true
      }

      resources {
        memory = 600
      }
    }
  }
}
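For what it's worth, the artifact source above follows the usual releases.hashicorp.com layout, so the URL can be derived from the version variable if you want to template it for other platforms. A small sketch; the helper function name is mine, and the URL pattern is taken directly from the artifact block in the job above:

```shell
# Build a Boundary download URL from a version (e.g. "0.14.2+ent"),
# OS, and architecture, following the pattern used in the artifact block.
boundary_release_url() {
  local version="$1" os="$2" arch="$3"
  echo "https://releases.hashicorp.com/boundary/${version}/boundary_${version}_${os}_${arch}.zip"
}
```

For example, boundary_release_url "0.14.2+ent" linux arm64 reproduces the exact source URL used in the job.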