AWS announced in April that you can now mount EFS volumes in Batch jobs -
https://aws.amazon.com/blogs/hpc/introducing-support-for-per-job-amazon-efs-volumes-in-aws-batch/
It does work if you set up the job definition in the AWS console. However, with Terraform I can get the Batch job definition created except for the part that adds the EFS volume configuration.
This is as far as I’ve got -
resource "aws_batch_job_definition" "listefs3" {
name = "listefs3"
type = "container"
container_properties = jsonencode(
{
command = [
"ls",
"-l",
"-a",
"/mount/efs/",
]
environment = []
executionRoleArn = "arn:aws:iam::XXXXXXX:role/batchrole"
image = "amazonlinux"
jobRoleArn = "arn:aws:iam::XXXXXXX:role/batchrole"
linuxParameters = {
devices = []
tmpfs = []
}
mountPoints = [
{
containerPath = "/mount/efs"
sourceVolume = "efs"
},
]
resourceRequirements = [
{
type = "VCPU"
value = "2"
},
{
type = "MEMORY"
value = "2048"
},
]
secrets = []
ulimits = []
volumes = [
{
name = "efs"
efsVolumeConfiguration = {
fileSystemId = "fs-XXXXXXXXX"
rootDirectory = "/"
transitEncryption = "ENABLED"
authorizationConfig = {
accessPointId = "fsap-XXXXXXXXXXXXXXXXX"
"iam" = "ENABLED"
}
}
},
]
}
)
}
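In case it helps, here is a quick boto3 sketch to dump what Batch actually stored for this job definition (assuming credentials that allow batch:DescribeJobDefinitions) -

import json

import boto3

# Print the volumes/mountPoints that Batch registered for the
# "listefs3" job definition created by the Terraform resource above.
batch = boto3.client("batch")
resp = batch.describe_job_definitions(jobDefinitionName="listefs3", status="ACTIVE")

for jd in resp["jobDefinitions"]:
    props = jd.get("containerProperties", {})
    print(f'{jd["jobDefinitionName"]}:{jd["revision"]}')
    print("volumes:", json.dumps(props.get("volumes", []), indent=2))
    print("mountPoints:", json.dumps(props.get("mountPoints", []), indent=2))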
The efsVolumeConfiguration section is being ignored. Is there something I’m doing wrong?
Thank you
Theo