Remote-exec on AWS: . activate pytorch_p36 doesn't apply from the Terraform plan, but works interactively

Hi, I ran:
Instance type: g2.2xlarge
AMI ID: Deep Learning AMI (Ubuntu 16.04) Version 24.3 (ami-008349d4902badfc6)
main.tf section:
  provisioner "remote-exec" {
    inline = [
      "nvidia-smi",
      ". activate pytorch_p36",
      "python -c 'import torch; print(torch.cuda.is_available())'",
    ]
  }
The log:
aws_spot_instance_request.web (remote-exec): Connected!
aws_spot_instance_request.web (remote-exec): Thu Oct 17 08:17:16 2019
aws_spot_instance_request.web (remote-exec): +-----------------------------------------------------------------------------+
aws_spot_instance_request.web (remote-exec): | NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 |
aws_spot_instance_request.web (remote-exec): |-------------------------------+----------------------+----------------------+
aws_spot_instance_request.web (remote-exec): | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
aws_spot_instance_request.web (remote-exec): | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
aws_spot_instance_request.web (remote-exec): |===============================+======================+======================|
aws_spot_instance_request.web (remote-exec): | 0 GRID K520 On | 00000000:00:03.0 Off | N/A |
aws_spot_instance_request.web (remote-exec): | N/A 33C P0 44W / 125W | 0MiB / 4037MiB | 0% Default |
aws_spot_instance_request.web (remote-exec): +-------------------------------+----------------------+----------------------+

aws_spot_instance_request.web (remote-exec): +-----------------------------------------------------------------------------+
aws_spot_instance_request.web (remote-exec): | Processes: GPU Memory |
aws_spot_instance_request.web (remote-exec): | GPU PID Type Process name Usage |
aws_spot_instance_request.web (remote-exec): |=============================================================================|
aws_spot_instance_request.web (remote-exec): | No running processes found |
aws_spot_instance_request.web (remote-exec): +-----------------------------------------------------------------------------+
aws_spot_instance_request.web: Still creating… [1m0s elapsed]
aws_spot_instance_request.web (remote-exec): Traceback (most recent call last):
aws_spot_instance_request.web (remote-exec):   File "<string>", line 1, in <module>
aws_spot_instance_request.web (remote-exec): ModuleNotFoundError: No module named 'torch'
i.e. the environment is not activated. But when I then sshed into the instance manually, it worked:
ubuntu@ip-172-31-29-152:~$ . activate pytorch_p36
NOTE that the Amazon EC2 g2 instance type is no longer supported by the Deep Learning AMI. Please review the DLAMI EC2 instance selection guide for supported Amazon EC2 instance types.
(pytorch_p36) ubuntu@ip-172-31-29-152:~$ python -c "import torch; print(torch.cuda.is_available())"
True

What's the problem with remote-exec?

The problem is that Terraform builds an sh script (not bash) from the inline = [...] section and sends it to the EC2 instance to run, so bash-specific commands like this cannot run in the inline section. Forget it.
Use

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh",

instead.
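
For reference, script.sh could look roughly like the sketch below. It just wraps the same two commands from the question in a bash script; the #!/bin/bash shebang is what matters, because /tmp/script.sh is executed directly, so the shebang decides which shell interprets it. This is only a sketch and assumes activate is on the PATH for the ubuntu user (as it is in an interactive DLAMI session); if it isn't in a non-interactive run, source it by its full path instead.

  #!/bin/bash
  # Executed by bash (via the shebang), so sourcing conda's activate works here.
  set -e

  # Activate the pytorch_p36 environment; assumes `activate` is on PATH,
  # otherwise source the activate script by its full path.
  . activate pytorch_p36

  # Verify that PyTorch can see the GPU.
  python -c "import torch; print(torch.cuda.is_available())"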