File provisioner error: copying a folder from Windows to an Azure VM

Hello,

I keep getting this error message when copying my folder into my Azure VM. I think it's because the folder doesn't exist on the Azure VM, but the SCP protocol is supposed to create both folders and files.

│ Error: file provisioner error
│
│   with null_resource.ansible,
│   on connection.tf line 42, in resource "null_resource" "ansible":
│   42:   provisioner "file" {
│
│ timeout - last error: dial tcp 51.145.166.218:22: i/o timeout

This is my null_resource configuration that's meant to copy a few files and folders to my Azure VM.
Yet I keep having a problem with my file provisioner when trying to copy my ansible-k8s-setup folder.

  provisioner "file" {
    source      = "../.ssh/"
    destination = "/home/${var.username}/.ssh"
  }

This is my full resource configuration:

resource "null_resource" "ansible" {
  depends_on = [
    azurerm_linux_virtual_machine.vm,
    azurerm_linux_virtual_machine.ansible,
    local_file.ansible_inventory
  ]
  triggers = {
    always_run = timestamp()
  }

  connection {
    type        = "ssh"
    user        = var.username
    private_key = file("../.ssh/id_rsa")
    host        = azurerm_linux_virtual_machine.ansible.public_ip_address
  }

  provisioner "file" {
    source      = "../ansible-k8s-setup/"
    destination = "/home/${var.username}/ansible-k8s-setup"
  }
  provisioner "file" {
    source      = "../.ssh/"
    destination = "/home/${var.username}/.ssh"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo tee -a /etc/ssh/ssh_config <<EOF",
      "Host 10.10.5.*",
      "   StrictHostKeyChecking no",
      "   UserKnownHostsFile=/dev/null",
      "EOF",
      "chmod 600 /home/${var.username}/.ssh/id_rsa",
      "sudo apt update",
      "sudo apt-get update",
      "sudo apt install -y ansible",
      "cd /home/${var.username}/ansible-k8s-setup",
      "chown -R ${var.username}:${var.username} /home/${var.username}/ansible-k8s-setup",
      "cd /home/${var.username}/ansible-k8s-setup",
      "ansible-playbook -i inventory/inventory.ini playbooks/playbook.yml",   
      "ansible-playbook -i inventory/inventory.ini playbooks/hostname.yml",
    ]
  }
}
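For completeness, the `connection` block also supports an optional `timeout` argument (Terraform defaults to 5 minutes). This is the same block with it set explicitly, in case a slow VM boot is the cause — though raising it wouldn't help if port 22 is blocked:

```hcl
  connection {
    type        = "ssh"
    user        = var.username
    private_key = file("../.ssh/id_rsa")
    host        = azurerm_linux_virtual_machine.ansible.public_ip_address
    timeout     = "10m" # default is "5m"; only helps if SSH eventually comes up
  }
```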

I don't know what went wrong, as I was using it earlier and it worked perfectly.
This is my azurerm provider version: version = "3.110.0"
I'm a bit puzzled as to why it's not working.
Thanks for any solutions you might provide.

I tried adding another remote-exec provisioner to create the folder first, and ended up with another error that timed out in the same way.

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /home/${var.username}/ansible-k8s-setup",
      "chmod 700 /home/${var.username}/ansible-k8s-setup",
    ]
  }

This is my error message:

null_resource.ansible (remote-exec):   Checking Host Key: false
null_resource.ansible (remote-exec):   Target Platform: unix
null_resource.ansible: Still creating... [4m51s elapsed]
╷
│ Error: remote-exec provisioner error
│
│   with null_resource.ansible,
│   on connection.tf line 41, in resource "null_resource" "ansible":
│   41:   provisioner "remote-exec" {
│
│ timeout - last error: dial tcp 51.145.166.218:22: i/o timeout

These are my logs; it just tries to SSH over and over again until it times out.

null_resource.ansible (remote-exec): Connecting to remote host via SSH...
null_resource.ansible (remote-exec):   Host: x.x.x.x
null_resource.ansible (remote-exec):   User: rim
null_resource.ansible (remote-exec):   Certificate: false
null_resource.ansible (remote-exec):   SSH Agent: false
null_resource.ansible (remote-exec):   Checking Host Key: false
null_resource.ansible (remote-exec):   Target Platform: unix

Hi @RimDammak,

Did you try logging in to 51.145.166.218:22 via SSH manually from the host where you run Terraform (e.g. `ssh -i ../.ssh/id_rsa rim@51.145.166.218`)? The timeout looks more like an issue with the SSH server on the machine or the network in between (the Azure VNet, an NSG, or a firewall).
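If manual SSH also times out, a network security group blocking inbound port 22 is the usual culprit. As a sketch (the resource-group and NSG references here are placeholders for whatever your configuration actually uses), an NSG rule allowing inbound SSH would look like:

```hcl
# Hypothetical example: allow inbound SSH on port 22.
# Replace the resource_group_name / network_security_group_name
# references with the names from your own configuration.
resource "azurerm_network_security_rule" "allow_ssh" {
  name                        = "AllowSSH"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.example.name
  network_security_group_name = azurerm_network_security_group.example.name
}
```

Since it worked earlier and broke without a config change, also check whether the VM's public IP changed or the VM was deallocated.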

Best,
Andreas