Dynamic routing based on URL

Hey there,

I’m fairly new to Consul/Nomad and I’m trying to work out how to solve my use-case.
My goal is to give our users URLs that connect them to specific tasks (batch jobs) running in the Nomad cluster. Think of Gitpod (https://gitpod.io/): there is a container running for each user, and you can connect directly to that container via a URL.

The URLs have the following format: <port>-<some-id>.ourdomain.com
Each URL corresponds to a job that has some-id somewhere in its job specification, probably as a service tag. The port in the URL is the service port to which the traffic should be routed.

job "docs" {
  group "example" {
    task "server" {
      service {
        tags = ["some-id"]
        ...
      }
    }
  }
}

What I basically need is dynamic routing to running jobs based on subdomains: the subdomain contains an ID that should route the traffic to the specific job/task/service with the same ID.
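To make the registration side concrete, a fuller version of the job above might look like the following — a minimal sketch, assuming a recent Nomad version with group-level networking; the job name, service name, port label, and tag value are all placeholders:

```hcl
job "user-session" {
  group "example" {
    network {
      # Nomad assigns a dynamic host port and registers it with the service
      port "http" {}
    }

    task "server" {
      service {
        name = "user-session"
        port = "http"
        # the per-user ID that appears in the subdomain,
        # e.g. 8080-abc123.ourdomain.com -> tag "abc123"
        tags = ["abc123"]
      }
    }
  }
}
```

With the ID in a tag and the port registered in Consul, a proxy in front of the cluster can resolve the subdomain to the right service instance.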

What do you think is the way to do this?


I think this is similar to how I have feature-branch jobs set up. If a dev checks in a branch called feature-test to the circle repo, they can access the deploy at circle-feature-test.ci.example.com. If that’s the kind of thing you need, then maybe this will help.

I put the generated deploy URL into the meta block

meta {
  hostname = "[[ $hostname ]]"
}

of the job, as well as into an haproxy service tag. I’m running HAProxy with consul-template and a stanza like the following to collect all of the URLs and set up load balancing:

[[- range services -]]
  [[ if .Tags | contains "haproxy" -]]
    [[- range $index, $name := service .Name -]]
      [[ if eq $index 0 ]]

backend b_[[.Name]]
    mode http
[[- /* server-template [[.Name]] 3 _[[.Name]]._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check */]]
      [[- end ]]
    server [[.Name]]-[[ $index ]] [[.Address]]:[[.Port]]

    [[- end -]]
  [[- end -]]
[[- end ]]

Our local DNS server returns the LB IP for any request to *.ci.example.com
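For the original use-case (routing on <port>-<some-id> subdomains), another option is to pick the backend dynamically from the Host header instead of templating one backend per service. A rough HAProxy sketch — the backend naming convention and cert path are illustrative, not from my setup:

```haproxy
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    # grab the first DNS label, e.g. "8080-abc123" from 8080-abc123.ourdomain.com
    http-request set-var(req.subdomain) req.hdr(host),lower,field(1,'.')
    # route to a backend named after that label, if one exists
    use_backend b_%[var(req.subdomain)] if { var(req.subdomain) -m found }
    default_backend b_fallback
```

This assumes your backends are named b_<port>-<some-id> (for example by the consul-template stanza above), so the mapping from subdomain to backend is purely mechanical.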

Hope that helps!


Thanks for the reply, I think the use cases are similar.

I do have a few questions:

  • Are you creating a new job for every branch on the fly, so that the job has the correct hostname, or are you using parameterized jobs somehow?

  • I have multiple dynamic services running on a node, each with its own dynamic local IP address (for example 192.168.1.3). They are all reachable from the node’s host network namespace. Would this work with your service registration/discovery?

Yes, a new job is created on the fly. Our job creation process happens in CI and uses Levant to fill in the details of a job template. Our setup relies on Consul service registration, so if you’re using Consul then yes, it will work. The Consul DNS name for the job will be predictable and independent of whatever IP is being used. I can provide more details about our template if you like.
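A minimal sketch of how such a Levant template could look — this assumes Levant’s default [[ ]] delimiters, and the branch variable and naming scheme are made up for illustration:

```hcl
job "circle-[[.branch]]" {
  group "web" {
    task "app" {
      meta {
        # the generated deploy URL for this branch
        hostname = "circle-[[.branch]].ci.example.com"
      }

      service {
        name = "circle-[[.branch]]"
        # picked up by the consul-template / HAProxy scrape
        tags = ["haproxy"]
      }
    }
  }
}
```

CI would then render and submit this with the branch name supplied as a Levant variable (for example via a variable file), producing one uniquely named job and Consul service per branch.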


Thank you for the examples; we are implementing something very similar to what you described.
One question I have is how exactly you are routing to your jobs. Are you exposing every job via a dynamic port as a service and then letting Nomad + Consul handle it?

Ultimately the clients talk to HAProxy, which dynamically configures backends based on Consul services. You can either use the native Consul support in HAProxy or use consul-template. Your wildcard DNS just needs to resolve all your dynamic addresses to the HAProxy IP.
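If you go the native route rather than consul-template, HAProxy’s DNS resolvers plus server-template can populate a backend from Consul DNS directly — roughly like this, where the service name and Consul agent address are placeholders:

```haproxy
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

backend b_myservice
    mode http
    # create up to 10 server slots, filled from the Consul SRV record
    server-template srv 10 _myservice._tcp.service.consul resolvers consul \
        resolve-opts allow-dup-ip resolve-prefer ipv4 check
```

This keeps the backend membership in sync with Consul without regenerating the HAProxy config, at the cost of a fixed upper bound on server slots per backend.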