Nomad agent docker privileged mode is disabled

When I use an EBS volume, I get the following error:
nomad agent docker privileged mode is disabled

Hi @asd99557,

I believe the enable privileged Docker jobs section of the Nomad CSI learn guide should help you here.

Copying over a short section; you will need to update your client configuration to include the following block:

plugin "docker" {
  config {
    allow_privileged = true
  }
}

Thanks,
jrasell and the Nomad team.

I want to use an EBS volume as a persistent storage device, not to use it with AWS EC2.

I want to use an EBS volume as a persistent storage device, not to use it with AWS EC2.

Can you explain what you mean by that? The CSI EBS plugin is only for use with EC2… you can’t attach an AWS EBS volume to anything other than an EC2 instance. It’s a block storage device, not object storage like S3 or network storage like NFS.

Thanks for the detail.
I have created a local NFS server and mounted its shared directory on the Nomad client.
Can you please let me know how to mount that NFS volume in a Nomad job?

Hey @asd99557 :wave: There is a CSI driver for NFS, but it’s listed as Alpha, so it’s probably not ready for anything but testing. There is a lot of development happening, though, so keep an eye on it for future use.

If you have one or more NFS shares mounted on the Nomad client, you could use host volumes to expose those. I have linked to a write-up on how to accomplish this below; just make sure that the folder you are pointing your volume(s) at resides on the NFS path you have mounted and you should be good to go :+1:
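To make that concrete, here is a minimal sketch of the client-side configuration (the volume name and paths are placeholders; it assumes the NFS share is already mounted at /mnt/nfs on the client host):

```hcl
# Nomad client configuration (e.g. /etc/nomad.d/client.hcl)
# Assumes the NFS share is already mounted at /mnt/nfs on this host.
client {
  enabled = true

  host_volume "nfs-data" {
    path      = "/mnt/nfs/data" # must reside on the mounted NFS path
    read_only = false
  }
}
```

After changing this, restart the Nomad agent on that client; the job file then references the volume by its name ("nfs-data" here).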

A word of caution though - NFS is not fully POSIX compliant, and certain applications will not work correctly over it.


Thanks once again for this information.
So, as a host_volume, can we use NFS as of now or not?
Can we use a Ceph volume mounted in a Nomad job?
What other options are there for persistent volume mounts?

Yes @asd99557, NFS is totally usable as long as the software you want to run inside Nomad has no problems with NFS :slight_smile:

You generally have at least three options when it comes to stateful workloads in Nomad:

  1. Portworx (which is supported by Nomad)
  2. Host Volumes (which can expose anything that you have mounted on the client)
  3. CSI drivers

This is actually a pretty good overview of the available options :point_down:

Now, with regards to Ceph I can think of 3 different strategies you could utilize :thinking:

  1. Mount Ceph block devices on a Nomad client and expose those using Host Volumes
  2. Mount CephFS on every Nomad client and expose relevant folders using Host Volumes
  3. Use the Ceph CSI driver
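For the third option, registering a pre-existing Ceph volume with Nomad might look roughly like the sketch below. This is only an illustration: the plugin ID, volume IDs, and secrets are all placeholders, and the exact fields depend on your Nomad and ceph-csi versions.

```hcl
# volume.hcl - a hypothetical sketch, registered with:
#   nomad volume register volume.hcl
# All IDs and secrets below are placeholders, not real values.
type        = "csi"
id          = "ceph-vol"
name        = "ceph-vol"
external_id = "<ceph-volume-id>" # ID of the pre-existing RBD image
plugin_id   = "ceph-csi"         # must match the deployed CSI plugin job

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

secrets {
  userID  = "<ceph-user>"
  userKey = "<ceph-key>"
}
```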

If you read through the 2 links I have posted during our conversation it should become pretty clear what Nomad has to offer in regards to supporting stateful workloads.

Let me know if I can be of further assistance :+1:


As a host_volume I want to use NFS, and I followed this link:
Stateful Workloads Overview | Nomad - HashiCorp Learn
I am not clear where to create host_volume.hcl: on the client side or the Nomad server side?
I edited client.hcl but it did not work, then I created host_volume.hcl on all three clients. It’s still not working. Can you please help me with how to create a host_volume?

Hey @asd99557 :wave:

I assume that you have mounted your NFS share on every Nomad client. In that case the host_volume.hcl file should be defined on each Nomad client where this job should be able to run (remember to restart the nomad service after changing the volumes config file).

Thanks for the information. After creating host_volume.hcl I restarted the service.
After that I used the command below to check the host, but it gave an error:
nomad node status -short -self
No allocations placed.
Could you please guide me?

Try nomad node status -verbose - that should tell you which Host Volumes a given client exposes. Make sure the correct volumes are showing up.

If the correct host volumes are registered, you should check your job file and verify that the volume {} and volume_mount {} stanzas are correct.
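For reference, the job-side wiring might look like the following sketch. The volume name must match the host_volume name in the client configuration; the image and mount path here are placeholders:

```hcl
job "example" {
  group "app" {
    # "nfs-data" must match the host_volume name in the client config
    volume "nfs-data" {
      type      = "host"
      source    = "nfs-data"
      read_only = false
    }

    task "server" {
      driver = "docker"

      config {
        image = "redis:7" # placeholder image for illustration
      }

      volume_mount {
        volume      = "nfs-data"
        destination = "/srv/data" # path inside the task
        read_only   = false
      }
    }
  }
}
```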