Azure-arm builder: packer user not deleted after build is completed

I use Packer from the deb package packer_1.9.1-1_amd64.deb.
I built a custom image using the azure-arm builder.
After the build is completed there is still a packer user left in the new image.
Is that expected? I saw the bug Remove packer user at provisioning with AKS distro · Issue #899 · Azure/aks-engine · GitHub, but it's against the AKS distro, so I'm not sure whether it applies to a regular Azure VM.

If you add users to your VMs and don't delete them during the build, that's expected. Can you show us your config files?
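For an account that your own provisioning script creates, a cleanup step at the end of the build is usually enough. A minimal sketch as an extra shell provisioner (tempuser is just a placeholder for whatever account script1.sh creates):

  {
    "type": "shell",
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": ["id tempuser >/dev/null 2>&1 && userdel -r tempuser || true"]
  }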

I added one user during the build and I understand that user is expected to be part of the image. But I didn't add the "packer" user; I'm assuming it's needed to perform the build operations, but I don't want it left behind after the build is completed.
What would be the recommended way to remove the packer user from the build?
I tried to delete it by adding "userdel -r packer" at the end of the script, but it failed with this error message:
==> azure-arm: userdel: user packer is currently used by process 1020

Is there anything specific you need from my config file?
I'm guessing the most relevant part is the "provisioners" section, which looks like this:

  "provisioners": [{
    "scripts": ["script1.sh"],
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "type": "shell",
    "env": {
       "http_proxy": "{{user `http_proxy`}}",
       "https_proxy": "{{user `https_proxy`}}",
       "no_proxy": "{{user `no_proxy`}}",
    }
  }]

The parts where you add any kind of user.

So: the source part of your Packer config, since Packer needs a user to make an SSH connection, and any part of script1.sh that adds users for Packer to connect to.
After my build is completed I don't have any user named packer.
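As an aside, if I remember correctly the azure-arm builder names the temporary admin account on the build VM after the SSH communicator username, which defaults to packer, so that is most likely where the account comes from. Purely as an untested sketch (not a fix, since the account would still have to be removed before capture), setting ssh_username in the builder changes that name; imagebuilder is just a placeholder and the other required azure-arm options are omitted for brevity:

  "builders": [{
    "type": "azure-arm",
    "ssh_username": "imagebuilder"
  }]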

I don't add any users via Packer directly (in the packer.json file).
The user (but not a 'packer' user) is added by one of the commands in script1.sh.
So if you are saying you have no packer user after your build is completed, I would assume I shouldn't see one either, right?

Below is my complete config file:

{
  "variables": {
    "zima": "{{env `zima`}}",
    "client_id": "{{env `client_id`}}",
    "client_secret": "{{env `client_secret`}}",
    "tenant_id": "{{env `tenant_id`}}",
    "subscription_id": "{{env `subscription_id`}}",
    "managed_image_resource_group_name": "{{env `managed_image_resource_group_name`}}",
    "managed_image_name": "{{env `managed_image_name`}}",
    "virtual_network_name": "{{env `virtual_network_name`}}",
    "virtual_network_subnet_name": "{{env `virtual_network_subnet_name`}}",
    "virtual_network_resource_group_name": "{{env `virtual_network_resource_group_name`}}",
    "location": "{{env `location`}}",
    "vm_size": "{{env `vm_size`}}",
    "http_proxy": "{{env `http_proxy`}}",
    "https_proxy": "{{env `https_proxy`}}",
    "no_proxy": "{{env `no_proxy`}}",
    "admin_user": "{{env `admin_user`}}",
    "public_ssh_key": "{{env `public_ssh_key`}}"
  },

  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `managed_image_resource_group_name`}}",
    "managed_image_name": "{{user `managed_image_name`}}",

    "os_type": "Linux",
    "image_publisher": "canonical",
    "image_offer": "0001-com-ubuntu-server-focal",
    "image_sku": "20_04-lts-gen2",
    "virtual_network_name": "{{user `virtual_network_name`}}",
    "virtual_network_subnet_name": "{{user `virtual_network_subnet_name`}}",
    "virtual_network_resource_group_name": "{{user `virtual_network_resource_group_name`}}",

    "azure_tags": {
      "task": "custom image deployment"
    },

    "location": "{{user `location`}}",
    "vm_size": "{{user `vm_size`}}"
  }],

  "provisioners": [{
    "scripts": ["script1.sh"],
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "type": "shell",
    "env": {
      "admin_user": "{{user `admin_user`}}",
      "public_ssh_key": "{{user `public_ssh_key`}}",
      "http_proxy": "{{user `http_proxy`}}",
      "https_proxy": "{{user `https_proxy`}}",
      "no_proxy": "{{user `no_proxy`}}",
      "zima": "{{user `zima`}}",
      "lato": "latoxxx"
    }
  }]
}

That's weird. I ran my pipeline again and the 'packer' user is not there. It might have been a one-time occurrence of the problem or some intermittent issue.

That’s great to hear.
If the problem occurs again, let me know and I’ll try to help.
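If it does come back: userdel fails mid-build because the shell provisioner's SSH session runs as that user (hence the "currently used by process" error). The azure-arm examples in the Packer docs instead end the build with a waagent deprovision step as the very last provisioner, which prepares the VM for capture and removes the provisioned account. A minimal sketch, assuming waagent lives at /usr/sbin/waagent on the stock Canonical Ubuntu image:

  {
    "type": "shell",
    "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
    "inline": [
      "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
    ],
    "inline_shebang": "/bin/sh -x"
  }

Appending this after your script1.sh provisioner, so it runs last, should leave the captured image without the packer account.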