Microservice Migration - Advice sought

Hi,

I’ve been looking at migrating our current microservices (described below) to Docker, and I’ve already quit Kubernetes four times. Nomad FTW!

I’m having a hard time wrapping my head around a (probably) simple architectural/structural change. I’m hoping that someone with more battle-experience than me might offer some wisdom.

Currently we have 28 separate “services”. Each service is written in Python and uses Nameko for RPC between the services. There is an “HTTP” service that acts as a bridge into the system for external calls to a REST API.

The services each have their own GitLab repo, and we use GitLab CI to deploy to 14 servers. The services run via Supervisor and use virtualenvs for the Python dependencies. Host dependencies are managed separately via Salt.

We exchange files between some of the services (for example, we have a “transcoding” service that is used by a few of the other services). We use NFS mounts inside the cluster of servers to do this, and each service knows “where” it is, and requests that files be saved to the appropriate host’s NFS share.

I’m not afraid to re-engineer the whole setup (if that is what is required). I understand that tasks within a “Group” can share a network namespace, but I’m not clear whether Groups in separate Jobs can.

Should I be looking at a single “Job” definition, or stick to separate repos and deployments?
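By a single “Job” definition I mean something like one job with a group per service (the names and details here are just placeholders, not my actual spec):

job "platform" {
  datacenters = ["dc1"]

  group "http" {
    task "http" {
      driver = "docker"
      # ... image, ports, etc.
    }
  }

  group "transcoding" {
    task "transcoding" {
      driver = "docker"
      # ... image, resources, etc.
    }
  }

  # ... one group per remaining service ...
}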

I have a demo service working, so the nuances of moving the services themselves to Docker aren’t a concern.

It’s really the inter-service exchange of files that I’m unsure about. I leverage the ~450 cores to do fan-out transcodes, and it’s working really well.

I’m sure there are lots of details missing here. Thanks for reading!

Geoff


Thanks to Dave, who suggested a (disturbingly) simple solution: just set the host name as an environment variable during task deployment.
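In the job spec that looks something like this (SERVICE_HOST is just a made-up name for whatever variable your service reads; ${attr.unique.hostname} is standard Nomad interpolation, so treat this as a sketch rather than my exact spec):

task "service" {
  driver = "docker"

  # ... image config as before ...

  # Expose the client node's hostname so the service can tell its peers
  # which host's NFS share files should be written to.
  env {
    SERVICE_HOST = "${attr.unique.hostname}"
  }
}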

The rest of it works as it always has done (NFS automounts to /net, which is passed through to the container via host_volume).

EDIT: Using the UI I discovered the exec feature (so useful!) and confirmed that “HOSTNAME” is already in the environment variables. It looks like I have all the pieces (and no excuses) to get this whole shebang moved to Nomad. Awesome!
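For reference, the same check from the CLI (the allocation ID and task name here are placeholders):

nomad alloc exec -task service <alloc-id> env | grep HOSTNAME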

EDIT 2: To mount the host NFS I had to enable Docker volumes on the clients:

{
  "client": {
    "enabled": true,
    "options": {
      "docker.volumes.enabled": true
    }
  }
}
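For anyone using HCL agent config on a newer Nomad, I believe the equivalent lives in the Docker plugin block - I haven't run this form myself, so treat it as a sketch:

plugin "docker" {
  config {
    # Allow tasks to bind-mount arbitrary host paths via the "volumes" list.
    volumes {
      enabled = true
    }
  }
}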

With that in place, I then added the volumes to the task:

      config {
        image = "${CI_REGISTRY_IMAGE}:latest"
        auth {
          username = "${CI_REGISTRY_USER}"
          password = "${CI_REGISTRY_PASSWORD}"
        }
        auth_soft_fail = false
        network_mode   = "host"
        volumes = [
          "/net:/net",
          "/smb:/smb"
        ]
      }

I tried defining them as host_volumes on the clients, but I kept getting an error about “volumes not being enabled”. As I type this, I wonder if it’s an either/or.

EDIT 3: Last one 🙂
The solution above works for me because it’s Docker-specific. But I wanted the “Nomad” way, and that works too:

In the client:

{
  "client": {
    "enabled": true,
    "host_volume": {
      "autofs_nfs": {
        "path": "/net",
        "read_only": false
      }
    }
  }
}
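If your agent config is HCL rather than JSON, the same stanza should look like this (I’ve only run the JSON form myself):

client {
  enabled = true

  host_volume "autofs_nfs" {
    path      = "/net"
    read_only = false
  }
}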

then in the job:

...
  group "service" {
    count = 1

    volume "autofs_nfs" {
      type      = "host"
      read_only = false
      source    = "autofs_nfs"
    }
    task "service" {
      driver = "docker"
      volume_mount {
        volume      = "autofs_nfs"
        destination = "/net"
        read_only   = false
      }
...

I then see the volumes in the UI, and I can browse them as normal.

This setup is a bit more involved, but I like it because it allows me to be specific about what host data is shared with containers.

I’m having an issue with this setup that I wasn’t expecting. The containers only see the paths that existed when the container started.

I use autofs on the host to auto-mount SMB shares on-demand. If a target doesn’t exist in /smb when the container starts, it’s not available inside the container. If I mount it on the host, it still isn’t available in the container.

EDIT: Very confused. Seems to be working now. I did change the mount to read/write, maybe that was the trick.

EDIT: Spoke too soon. There is still an issue. If I ls /smb/new-server from a container, I get a list of the shares on the server, but I’m unable to go any deeper.
If I ls /smb/new-server/new-share on the host, I can see the contents. If I then restart the Nomad job, I can ls /smb/new-server/new-share from the container (but not without restarting).
Might be an AutoFS bug?
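One thing I haven’t tried yet is Docker’s bind-mount propagation options, which (as I understand it) can let mounts created on the host after the container starts appear inside it. Untested sketch, and it needs the host path to be a shared mount on the host side, so take it with a grain of salt:

      config {
        volumes = [
          # ":rshared" asks Docker to propagate host mounts made after the
          # container starts (e.g. autofs triggering a mount under /smb).
          "/smb:/smb:rshared"
        ]
      }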

Maybe you could use Nomad CSI with an NFS back-end to make that even easier, but it depends on which NFS back-end you have (maybe there’s no Nomad CSI plugin for it).
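For example, with something like the upstream csi-driver-nfs plugin, the volume registration spec might look roughly like this - the plugin ID, server, share, and context keys below are guesses for illustration, not something I’ve verified against your back-end:

id        = "shared-media"
name      = "shared-media"
type      = "csi"
plugin_id = "nfs"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

# NFS-specific details; the exact keys depend on the CSI plugin you run.
context {
  server = "nfs.example.internal"
  share  = "/exports/media"
}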

This sounds similar, but it’s an old issue that has since been closed: autofs does not work in containers · Issue #13700 · moby/moby · GitHub

Thanks @that_man - I missed your reply. I’m going to dust this off and try again 🙂