How do I model a P2P application using Nomad?

I’m planning to deploy a P2P application (think Ethereum) directly on the Linux host using the exec driver (so no containers). I have a couple of questions:

1. Host volumes vs. ephemeral disks (sticky = true):

Should I use host volumes, or an ephemeral_disk with sticky set to true? My concern with host volumes is creating/removing them as instances scale up and down, while ephemeral disks feel less reliable because the data can be lost on restarts. Are there any best practices or specific considerations for P2P apps in this scenario?

2. Dynamic Persistent Storage:

Each new peer instance needs to:

  • Create its own dedicated directory on the host.
  • Store its data persistently, surviving restarts and scaling events.
  • Upon restart, automatically locate the directory it was previously writing to.
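For context, here is roughly what I’m comparing in the job spec (paths and volume names are illustrative, not my real config):

```hcl
# Option A: sticky ephemeral disk. Nomad tries to keep/migrate the alloc
# dir on restart, but this is best-effort rather than guaranteed storage.
group "peer" {
  ephemeral_disk {
    sticky  = true
    migrate = true
    size    = 10000 # MB, a scheduling hint
  }
}

# Option B: host volume. The path is declared statically in the Nomad
# client agent config and mounted into the task.
group "peer" {
  volume "p2p-data" {
    type   = "host"
    source = "p2p-data" # must match a host_volume in the client config

  }

  task "peer" {
    driver = "exec"

    volume_mount {
      volume      = "p2p-data"
      destination = "/data"
    }
  }
}
```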

You have covered the nuances between these two options. One thing you could do to reduce volume sprawl is configure the allocations to use a namespaced directory within the mounted volume.

One of the runtime environment variables available is NOMAD_ALLOC_INDEX, which could keep individual instances separate (see “Runtime Environment” in the Nomad docs on HashiCorp Developer).


      template {
        destination = "local/config.txt"
        data        = <<EOF
data_dir = "/path/to/mounted/volume/instance-${NOMAD_ALLOC_INDEX}"
EOF
      }
Beyond that, if you have a compatible storage device you could make use of the CSI driver to allocate storage that way.
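A CSI-backed setup would look roughly like this (plugin and volume IDs are placeholders, and the volume would need to be created/registered with `nomad volume` first):

```hcl
# Job-side: claim a pre-registered CSI volume for the group.
group "peer" {
  volume "peer-data" {
    type            = "csi"
    source          = "peer-data"        # registered volume ID (placeholder)
    attachment_mode = "file-system"
    access_mode     = "single-node-writer"
    per_alloc       = true               # claims "peer-data[0]", "peer-data[1]", ...
  }

  task "peer" {
    driver = "exec"

    volume_mount {
      volume      = "peer-data"
      destination = "/data"
    }
  }
}
```

The `per_alloc` option is handy for the per-instance case, since each allocation index claims its own volume.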


I did look into NOMAD_ALLOC_INDEX, but from what I understand it can change across restarts (like when an allocation crashes and comes back up). The core problem I’m trying to solve: I want a new ID every time I launch the application or issue a scale-up, but that ID should remain the same across restarts. Does it make sense to use NOMAD_ALLOC_ID?

Yeah, NOMAD_ALLOC_ID would work if you want a completely unique UUID for each instance. One downside I see is that you’d have to track which IDs are no longer valid in order to garbage collect their directories.

NOMAD_ALLOC_INDEX always runs from 0 to count - 1. If an allocation restarts it keeps the same index, and if you scale up the new allocations get the next indexes incrementally.
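Putting that together with the exec driver, a rough sketch (binary path and mount point are hypothetical):

```hcl
task "peer" {
  driver = "exec"

  config {
    command = "/bin/sh"
    # Nomad interpolates ${NOMAD_ALLOC_INDEX}, so each instance gets its
    # own directory and finds the same one again after a restart.
    args = [
      "-c",
      "mkdir -p /opt/p2p/instance-${NOMAD_ALLOC_INDEX} && exec /opt/p2p/bin/peer --data-dir /opt/p2p/instance-${NOMAD_ALLOC_INDEX}",
    ]
  }
}
```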

Thanks for your insights, @nickwales1. In this case ALLOC_INDEX is the more suitable solution for my use case, as it consistently yields the same number across restarts and a different number when I scale up. Regarding your suggestion about mounted volumes, I’m curious whether you’re referring to containers specifically, or to host processes in general. Currently I’m not planning to use containers, so I’m trying to understand how a volume mount compares to host volumes in this scenario.

From what I understand, host volumes are not managed by Nomad, and there are no dynamic host volumes available. This leads me to wonder whether volume mounts are managed by Nomad, particularly as I scale my application up or down. Would Nomad handle the scaling process efficiently in terms of volume management? Say I initially start with one instance of my application using a particular volume and later scale up to 5: would that lead to the creation of 4 additional volumes? Similarly, would Nomad remove volumes once I scale back down to 1?
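From the docs, host volumes appear to be declared statically in the client agent configuration, which is what makes me think Nomad won’t create or remove them as the job scales. Something like this (path is illustrative):

```hcl
# Nomad client agent config (not the job spec)
client {
  enabled = true

  host_volume "p2p-data" {
    path      = "/opt/p2p/data" # must already exist on the host
    read_only = false
  }
}
```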

Additionally, is it safe to assume that volume mounts will use the available space on the host as needed, or do I need to specify a size beforehand? Understanding this aspect would greatly help in planning the architecture and resources for my application.
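For example, ephemeral_disk does take a size, though I’m not sure to what extent it is enforced versus just used for placement:

```hcl
group "peer" {
  ephemeral_disk {
    size = 5000 # MB; appears to be a scheduling hint, not a hard quota
  }
}
```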