Due to a lack of options for state locking, I’m testing a custom combination of the local backend and sshfs.
I mount a remote filesystem locally using sshfs and instruct Terraform to use the “local” backend pointed inside the sshfs mount, effectively creating a centralised “local” backend.
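For context, the setup looks roughly like this, after something like `sshfs user@host:/srv/tf-state /mnt/tf-remote` (the mount point and paths are placeholders, not a recommendation):

```hcl
terraform {
  backend "local" {
    # /mnt/tf-remote is a hypothetical sshfs mount of the remote host
    path = "/mnt/tf-remote/project/terraform.tfstate"
  }
}
```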
I wasn’t sure it would work, so:
1. mount sshfs
2. run terraform apply to create the lock file
3. unmount sshfs
4. cancel terraform apply, so the lock file is left in place
5. mount sshfs again
6. edit identifying fields in the lock file to make it look like someone else is working with the same state file
7. run terraform apply

Result: terraform apply acquired another lock by simply overwriting the previous one.
I was expecting it to fail at step 7.
Is this expected?
How can I make it fail?
In general, you should assume any network filesystem does not support file locking unless it specifically says otherwise. SSHFS makes no such guarantee.
The .lock.info file created by Terraform is only a lock *info* file. The actual lock is implemented using either POSIX or Windows file-locking APIs. That lock cannot cross SSHFS connections, and it is automatically released by the operating system when the terraform process exits.
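To illustrate why the lock-info file alone proves nothing: the real lock is a kernel-level advisory lock. Here is a minimal POSIX-only Python sketch using `flock`, the same family of OS primitives described above (this is not Terraform’s actual code, and the paths are placeholders):

```python
import fcntl
import os
import tempfile

# Two open file descriptions on the same file stand in for two
# `terraform` processes; flock() conflicts between them just as it
# would between separate processes on the same machine.
path = os.path.join(tempfile.mkdtemp(), "terraform.tfstate")

holder = open(path, "w")                            # first "terraform" run
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)  # takes the real OS lock

contender = open(path, "a")                         # second "terraform" run
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    lock_stolen = True
except BlockingIOError:
    lock_stolen = False                             # kernel refuses: lock is held

print("second acquisition succeeded:", lock_stolen)  # False on a local fs

holder.close()  # process exit would do the same: the kernel drops the lock
fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now succeeds
released = True
print("acquired after holder released:", released)
```

The lock lives in the kernel of the machine that called `flock`, which is exactly why it cannot protect anything on the far side of an SSHFS connection, and why deleting or editing the lock-info file has no effect on it.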
You will not be able to use SSHFS to engineer reliably-locked state storage.
You must use one of the backends intended to provide remote state storage with locking for this.
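For example, a common choice is the s3 backend with a DynamoDB table for lock coordination (bucket, key, region, and table names below are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"             # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "tf-locks"                # enables state locking
  }
}
```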
Thanks for the insight. It’s kind of a shame one has to spin up either an HTTP server solution or a whole PostgreSQL instance, or go to a cloud provider.
Would be interesting to see why just using a filesystem is not enough, if you have any links.
Using a filesystem is absolutely fine, if it’s a local filesystem which supports standard operating system locking semantics. The problem is that SSHFS doesn’t.