We’d like to use Nomad to schedule FaaS-like (Function-as-a-Service) executions. To improve response time for the user, we want to prewarm environments, i.e. keep a pool of, say, 100 environments running idle and waiting to be used. Currently we have one job with one task, which represents an environment, e.g. a Python environment. To run a user-supplied file, e.g. a Python script, we use the exec functionality on an allocation and execute it there.
Here’s my question: is it possible to somehow mark an allocation as in use, and then retrieve only the allocations that are not marked, to avoid running two functions in the same container?
One approach that comes to mind is something like Labels in Kubernetes. I found that Nomad supports the meta stanza for metadata, but that is not available at the allocation level.
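For context, here is a minimal sketch of what Nomad’s meta stanza does support: it can be set on the job, group, or task, so all allocations of the group share the same static values (the job, group, and task names below are made up for illustration):

```hcl
job "python-env" {
  group "env" {
    count = 100

    # meta is defined per group (or job/task), not per allocation:
    # every one of the 100 allocations sees identical values.
    meta {
      runtime = "python"
    }

    task "sandbox" {
      driver = "docker"
      # ...
    }
  }
}
```

This is exactly why it doesn’t help here: there is no way to set or update a meta value on one individual allocation while leaving its siblings untouched.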
Thanks for your response, and sorry for my late reply!
Right now, one job represents a set of similar environments in which code can be executed, e.g. Python environments. We use the count value of a group to increase the number of running environments, depending on user demand.
There is another problem we came across recently. Since Nomad is not aware of any metadata on allocations, won’t lowering the count value of a task group remove more or less arbitrary allocations? At least we can’t influence which allocations will be stopped.
Consul KV seems reasonable; however, we’d like to have the information attached directly to the allocation to make querying allocations easier.
Yes, exactly. I’d like to filter allocations based on metadata in the query to Nomad. That way I could give them a marker like in-use and query only the allocations that are currently not in use.
Ah, ok. Unfortunately there’s no way to do that with Nomad alone, since it’s not really built for arbitrary data storage. You would need to use an external store to correlate Nomad’s data with your own application-specific information.
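To make that concrete, here is a minimal sketch of the external-store pattern, assuming you periodically fetch the job’s allocation IDs from Nomad and keep the in-use flag elsewhere. In production that state would live in something like Consul KV (keyed by allocation ID, updated with check-and-set); a plain dict stands in for it here, and all names (`AllocationPool`, `acquire`, `release`) are hypothetical:

```python
class AllocationPool:
    """Tracks an in-use flag per allocation ID, outside of Nomad.

    Nomad only tells us which allocations exist; whether one is busy
    is application state, so it lives in this external registry.
    """

    def __init__(self, alloc_ids):
        # In production: one Consul KV entry per allocation, e.g.
        # faas/allocations/<alloc_id> = "idle" | "in-use".
        self._in_use = {alloc_id: False for alloc_id in alloc_ids}

    def acquire(self):
        """Return an idle allocation ID and mark it in use, or None if all are busy."""
        for alloc_id, busy in self._in_use.items():
            if not busy:
                self._in_use[alloc_id] = True
                return alloc_id
        return None

    def release(self, alloc_id):
        """Mark the allocation idle again once the function run finishes."""
        self._in_use[alloc_id] = False


# Dispatcher picks an idle allocation, execs the user code there, releases it.
pool = AllocationPool(["a1", "a2"])
alloc = pool.acquire()   # an idle allocation, never handed out twice
```

With a real Consul KV backend you would use a check-and-set write for `acquire` so two dispatchers can’t grab the same allocation concurrently; the scan-and-flip above is only safe single-threaded.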