Waypoint deploy error

I’m playing around with Waypoint for the first time. The build process succeeded just fine, but the deploy stage failed with the following error:

» Deploying...
! Unexpected response code: 400 (input.hcl:22,40-41: Missing key/value separator;
  Expected an equals sign ("=") to mark the beginning of the attribute value.)

I am using an existing Nomad job file that deployed just fine before I brought Waypoint into the picture. It’s fed into the Waypoint config as a template, like so:

# The name of your project. A project typically maps 1:1 to a VCS repository.
# This name must be unique for your Waypoint server. If you're running in
# local mode, this must be unique to your machine.
project = "nextcloud"

# Labels can be specified for organizational purposes.
# labels = { "foo" = "bar" }

# An application to deploy.
app "nextcloud" {
  # Build specifies how an application should be built. In this case,
  # we'll build using a Dockerfile and keep the image in a local registry.
  build {
    use "docker" {}
  }

  # Deploy to Nomad via a templated jobspec
  deploy {
    use "nomad-jobspec" {
      // Templated to perhaps bring in the artifact from a previous
      // build/registry, entrypoint env vars, etc.
      jobspec = templatefile("${path.app}/nextcloud.nomad.tpl", {
        service_name_app   = "nextcloud-app"
        service_name_cache = "nextcloud-cache"
        service_name_db    = "nextcloud-db"
        consul_token       = "..."
        redis_password     = "..."
      })
    }
  }
}

The only addition inside the Nomad template needed for my Waypoint deploy to do what I’m hoping for is a new task stanza:

[...]
        task "nextcloud_news_updater" {
          driver = "docker"

          config {
            image = "$${artifact.image}:$${artifact.tag}"
          }
        }
[...]

I took a bit of a gamble assuming that artifact.image and artifact.tag would work with Nomad’s $$ variable-escaping pattern (a kludge I’ve had to use to make these templates work with Terraform deployments). Without the $$ escaping, Waypoint yells at me that there isn’t a variable called artifact, much like Terraform would.
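
For anyone unfamiliar with that kludge: in HCL’s templatefile(), writing $${...} escapes the interpolation, so a literal ${...} survives into the rendered jobspec for Nomad to resolve at runtime. A rough illustration, using a made-up env var rather than anything from my actual template:

          env {
            # $$ stops templatefile() from interpolating this; the rendered
            # jobspec contains a literal ${node.unique.name} for Nomad to
            # resolve when the task runs
            NODE_NAME = "$${node.unique.name}"
          }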

What’s puzzling is that I have no idea where to find this input.hcl or determine what lives inside it. The waypoint-server, waypoint-runner, and CLI logs all just say that on line 22 of this file a key/value pair is malformed, and after several passes over my myriad templates I can’t find anything out of place.

Anyone have any thoughts on how I could go about troubleshooting this further?

Hello! I’ll go through the questions here.

Where does the input.hcl come from? This is the name that Nomad gives to the file when it completes a JobParseRequest, which is what Waypoint’s logic relies on. If you’d like to dig in further, that API ends up here.

I took a bit of a gamble assuming that the artifact.image and artifact.tag would work with Nomad’s $$ variable pattern: Unfortunately, this is not something Waypoint supports today. I believe this issue discusses precisely this problem; it includes some troubleshooting and ideas for how to support this within Waypoint if you are curious!

Any thoughts on further troubleshooting?: The above-linked issue would be a good start! I’ll let the team know we now have multiple people asking for this support, and that it’s something we should work on sooner rather than later.

Thanks a bunch @krantzinator! Still have some fussing about to do, but at least I’m on to the next hurdle.

I don’t know that I necessarily prefer having access to Nomad’s $${INTERNAL_TO_NOMAD_VAR_NAME} pattern over other options; it’s just that neither the learn guide for using Waypoint with Nomad nor the waypoint-examples repo made it clear how I should sew all the pieces together. With your explanation, and after reading through several disparate pages, I got there, but it would be nice to have the full interaction of handing the image name off as an artifact variable to the service.nomad.tpl template documented in one place.

As an aside, I would also love to see a “Day 2: You’ve got your waypoint-server and waypoint-runner on Nomad, now what?” article, as a few things happened after running through the learn guide that required reading “all the things” just to make sense of what was happening or what to do.

One specific example: Nomad saw fit to move the Waypoint server and runner to a different client node, and from then on my local context was adrift from the deployed server.

It took a fair amount of searching to even figure out that I had a local Waypoint context, let alone how to update it. It’s one of those things that, once you have a grip on the bigger picture of how Waypoint works, feels like a trivial detail. But at the start of the learning curve, having your target environment move on you the morning after finishing the learn guide left me feeling a little lost and frustrated.

Anyhoo, I’m excited to continue, but in case hearing about these experiences is useful to ongoing development and documentation, I’d be happy to discuss further.

Best,
Sam

For any future travellers wondering what to try, this is what cleared the error for me:

From my service.nomad.tpl:

[...]
        task "nextcloud_news_updater" {
          driver = "docker"

          config {
            image = "${artifact.image}:${artifact.tag}"
          }
        }
[...]

From my waypoint.hcl:

app "nextcloud" {
  # Build specifies how an application should be built. In this case,
  # we'll build using a Dockerfile and keep the image in a local registry.
  build {
    use "docker" {}
    registry {
      use "docker"{
        image = artifact.image
        tag = artifact.tag
      }
    }
  }

  # Deploy to Nomad via a templated jobspec
  deploy {
    use "nomad-jobspec" {
      // Templated to perhaps bring in the artifact from a previous
      // build/registry, entrypoint env vars, etc.
      jobspec = templatefile("${path.app}/nextcloud.nomad.tpl", {
        service_name_app   = "nextcloud-app"
        service_name_cache = "nextcloud-cache"
        service_name_db    = "nextcloud-db"
        consul_token       = "#####"
        redis_password     = "#####"
      })
    }
  }
}

That has moved the problem space to resolving the docker registry, but the templates were processed without error.

Bah! I spoke too soon.

Added local = true to that registry → use "docker" block, and now I’m seeing the error again.
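
For reference, that registry block in my waypoint.hcl now looks roughly like this:

    registry {
      use "docker" {
        image = artifact.image
        tag   = artifact.tag
        # keep the image in the local Docker daemon instead of pushing it
        local = true
      }
    }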

Hang on, so you were able to reference Waypoint outputs in your Nomad jobfile?

And then adding local = true to your waypoint.hcl gave you the same input.hcl:22,40-41: Missing key/value separator; error as your original message?

Well, to be clear, I have no idea what’s actually happening under the hood; I’m just trying to make sense of what’s coming up on the screen. :smiley: I was able to get the Docker container to build, which it previously wouldn’t do: before, it threw that error before the build could even complete. Now it builds and then throws that error when attempting to deploy. Baby steps.

I would love to be able to see the contents of that input.hcl file. Is there a way to preserve it after the build? Is it floating around in a tmp folder somewhere?

@dehuszar I regret not checking the Discuss forums before I appended to this issue! But try passing artifact = artifact in your templatefile args, keep image = "${artifact.image}:${artifact.tag}" in the template, and see how that works - recommended by @mitchellh
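
In other words, something roughly like this in the deploy block (an untested sketch; I’ve abbreviated the variable list):

  deploy {
    use "nomad-jobspec" {
      jobspec = templatefile("${path.app}/nextcloud.nomad.tpl", {
        # pass the whole artifact object through so the template can
        # reference ${artifact.image} and ${artifact.tag} directly
        artifact         = artifact
        service_name_app = "nextcloud-app"
        # ...the rest of the vars as before
      })
    }
  }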
