The terraform-provider-vault_v3.0.1_x4 plugin crashes on GitLab runner

Hi folks - we're hitting the error below on every run, and the crash output asks for it to be reported, so I'm raising it here. Any ideas on a fix? This is having a severe impact on us. Nothing in our code has changed since the last successful run a week ago, when we upgraded Terraform to this:

terraform {
  required_version = "~> 1.0.11"
  required_providers {
    aws = {
      version = "~> 3.66.0"
      source  = "hashicorp/aws"
    }
    vault = {
      source = "hashicorp/vault"
    }
  }
}
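(Side note in case it's relevant: the vault provider above has no version constraint, so every terraform init resolves the latest release; the log below shows it pulling v3.0.1. A minimal sketch of pinning it, with the version number as a placeholder for whichever release last worked for you:

vault = {
  source  = "hashicorp/vault"
  # placeholder constraint: pin to a known-good release so init
  # cannot silently pick up a newer provider between runs
  version = "~> 3.0.1"
}

Committing the .terraform.lock.hcl that init generates, as the init output suggests, keeps provider selection reproducible across runners either way.)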

> Running with gitlab-runner 14.1.0 (8925d9a0)
> on dna-meshfnd-runner-gitlab-runner-58c4bc8df9-567vn E4Rfx4f_
> Resolving secrets
> 00:00
> Preparing the "kubernetes" executor
> 00:00
> Using Kubernetes namespace: gitlab-runners
> Using Kubernetes executor with image registry.gitlab.com/dwp/foundation-services-for-mesh/poc-runner/infra/tfv1-runner:20211123_0852 ...
> Using attach strategy to execute scripts...
> Preparing environment
> 00:06
> Waiting for pod gitlab-runners/runner-e4rfx4f-project-24337147-concurrent-06d7qv to be running, status is Pending
> Waiting for pod gitlab-runners/runner-e4rfx4f-project-24337147-concurrent-06d7qv to be running, status is Pending
> ContainersNotReady: "containers with unready status: [build helper]"
> ContainersNotReady: "containers with unready status: [build helper]"
> Running on runner-e4rfx4f-project-24337147-concurrent-06d7qv via dna-meshfnd-runner-gitlab-runner-58c4bc8df9-567vn...
> Getting source from Git repository
> 00:02
> Fetching changes with git depth set to 50...
> Initialized empty Git repository in /builds/E4Rfx4f_/0/dwp/foundation-services-for-mesh/vault-config/.git/
> Created fresh repository.
> Checking out ddf30f27 as main...
> Skipping Git submodules setup
> Executing "step_script" stage of the job script
> 00:21
> $ . ./ci/prep.sh
> VAULT_TOKEN set.
> $ cd services/terraform/configure
> $ terraform init
> Initializing modules...
> - vault_config in ../../../modules/vault_config
> - vault_pki in ../../../modules/pki
> Initializing the backend...
> Successfully configured the backend "s3"! Terraform will automatically
> use this backend unless the backend configuration changes.
> Initializing provider plugins...
> - Finding hashicorp/aws versions matching "~> 3.66.0"...
> - Finding latest version of hashicorp/local...
> - Finding latest version of hashicorp/vault...
> - Installing hashicorp/aws v3.66.0...
> - Installed hashicorp/aws v3.66.0 (signed by HashiCorp)
> - Installing hashicorp/local v2.1.0...
> - Installed hashicorp/local v2.1.0 (signed by HashiCorp)
> - Installing hashicorp/vault v3.0.1...
> - Installed hashicorp/vault v3.0.1 (signed by HashiCorp)
> Terraform has created a lock file .terraform.lock.hcl to record the provider
> selections it made above. Include this file in your version control repository
> so that Terraform can guarantee to make the same selections by default when
> you run "terraform init" in the future.
> Terraform has been successfully initialized!
> You may now begin working with Terraform. Try running "terraform plan" to see
> any changes that are required for your infrastructure. All Terraform commands
> should now work.
> If you ever set or change modules or backend configuration for Terraform,
> rerun this command to reinitialize your working directory. If you forget, other
> commands will detect it and remind you to do so if necessary.
> $ terraform workspace select $ENV
> $ terraform plan
> module.vault_config.vault_policy.root_iam: Refreshing state... [id=root_iam]
> module.vault_config.vault_aws_auth_backend_sts_role.runner_sts: Refreshing state... [id=auth/aws/config/sts/024329120042]
> module.vault_pki.vault_mount.pki_int: Refreshing state... [id=pki]
> module.vault_pki.vault_pki_secret_backend_intermediate_cert_request.cert_request: Refreshing state... [id=pki/intermediate/generate/internal]
> module.vault_config.vault_aws_auth_backend_client.client_conf: Refreshing state... [id=auth/aws/config/client]
> module.vault_config.vault_raft_snapshot_agent_config.s3_backups: Refreshing state... [id=s3]
> module.vault_config.aws_iam_policy.runner_assume_policy: Refreshing state... [id=arn:aws:iam::054474849801:policy/dev-runner-assume-policy]
> module.vault_pki.aws_s3_bucket_object.csr_file: Refreshing state... [id=dna-pwdev.dwpcloud.uk.csr]
> module.vault_config.aws_iam_role_policy_attachment.runner_assume_policy_attach: Refreshing state... [id=dev-vault-20211013115316786600000001]
> ╷
> │ Error: Request cancelled
> │
> │ with module.vault_config.vault_aws_auth_backend_role.runner_role,
> │ on ../../../modules/vault_config/runner_vault_auth.tf line 7, in resource "vault_aws_auth_backend_role" "runner_role":
> │ 7: resource "vault_aws_auth_backend_role" "runner_role" {
> │
> │ The plugin.(*GRPCProvider).UpgradeResourceState request was cancelled.
> ╵
> ╷
> │ Error: Plugin did not respond
> │
> │ with module.vault_config.vault_raft_snapshot_agent_config.s3_backups,
> │ on ../../../modules/vault_config/vault-backup.tf line 1, in resource "vault_raft_snapshot_agent_config" "s3_backups":
> │ 1: resource "vault_raft_snapshot_agent_config" "s3_backups" {
> │
> │ The plugin encountered an error, and failed to respond to the
> │ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more
> │ details.
> ╵
> ╷
> │ Error: Request cancelled
> │
> │ with module.vault_config.vault_aws_auth_backend_role.root_iam,
> │ on ../../../modules/vault_config/vault_auth.tf line 24, in resource "vault_aws_auth_backend_role" "root_iam":
> │ 24: resource "vault_aws_auth_backend_role" "root_iam" {
> │
> │ The plugin.(*GRPCProvider).UpgradeResourceState request was cancelled.
> ╵
> Stack trace from the terraform-provider-vault_v3.0.1_x4 plugin:
> panic: runtime error: invalid memory address or nil pointer dereference
> [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0xfac086]
> goroutine 154 [running]:
> github.com/hashicorp/terraform-provider-vault/vault.readSnapshotAgentConfigResource(0xc0001fc980, 0x11f4c60, 0xc0009b17c0, 0x1acbe00, 0xc0001b6000)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vault/vault/resource_raft_snapshot_agent_config.go:321 +0x346
> github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc0004641c0, 0x14537e8, 0xc0006e7ac0, 0xc0001fc980, 0x11f4c60, 0xc0009b17c0, 0x0, 0x0, 0x0)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.8.0/helper/schema/resource.go:335 +0x1ee
> github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc0004641c0, 0x14537e8, 0xc0006e7ac0, 0xc000783520, 0x11f4c60, 0xc0009b17c0, 0xc000795950, 0x0, 0x0, 0x0)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.8.0/helper/schema/resource.go:624 +0x1cb
> github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0xc0001a0318, 0x14537e8, 0xc0006e7ac0, 0xc0006e7b00, 0xc0006e7ac0, 0x40b965, 0x116fc80)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.8.0/helper/schema/grpc_provider.go:576 +0x47d
> github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0xc000881a00, 0x1453890, 0xc0006e7ac0, 0xc00088a960, 0xc000881a00, 0xc00079ff20, 0xc000369ba0)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.4.0/tfprotov5/tf5server/server.go:298 +0x105
> github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler(0x11b32a0, 0xc000881a00, 0x1453890, 0xc00079ff20, 0xc00088a900, 0x0, 0x1453890, 0xc00079ff20, 0xc000832000, 0x26f)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.4.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:344 +0x214
> google.golang.org/grpc.(*Server).processUnaryRPC(0xc000274540, 0x145ec18, 0xc0007ca780, 0xc0002ffe00, 0xc000678570, 0x1a8a7f0, 0x0, 0x0, 0x0)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/grpc@v1.32.0/server.go:1194 +0x52b
> google.golang.org/grpc.(*Server).handleStream(0xc000274540, 0x145ec18, 0xc0007ca780, 0xc0002ffe00, 0x0)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/grpc@v1.32.0/server.go:1517 +0xd0c
> google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000118470, 0xc000274540, 0x145ec18, 0xc0007ca780, 0xc0002ffe00)
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/grpc@v1.32.0/server.go:859 +0xab
> created by google.golang.org/grpc.(*Server).serveStreams.func1
> /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/grpc@v1.32.0/server.go:857 +0x1fd
> Error: The terraform-provider-vault_v3.0.1_x4 plugin crashed!
> This is always indicative of a bug within the plugin. It would be immensely
> helpful if you could report the crash with the plugin's maintainers so that it
> can be fixed. The output above should help diagnose the issue.
> Cleaning up file based variables
> 00:00
> ERROR: Job failed: command terminated with exit code 1

This turned out to have nothing to do with the plugin itself, but I'm putting the solution here in case anyone else hits the same thing. The cause was drift between the Terraform state file and the actual AWS resources: the plugin crashed at the point in the refresh where it reached a resource that differed on AWS from what was recorded in the Terraform state. The plugin does consistently panic in that situation, though, and the stack trace is very misleading, so it would be good if the plugin maintainers handled it more gracefully.
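For anyone debugging the same thing, here's a rough sketch of how to track down and clear a drifted entry from the shell. The resource address and import ID are taken from the refresh output above (the snapshot agent config refreshed with [id=s3]); substitute your own:

# List what the state thinks exists, then inspect the entry the crash pointed at.
terraform state list
terraform state show 'module.vault_config.vault_raft_snapshot_agent_config.s3_backups'

# If the stored object no longer matches reality, drop it from state and
# re-import the real one. The "s3" import ID here matches the [id=s3] in
# the refresh log above; use the address and ID from your own output.
terraform state rm 'module.vault_config.vault_raft_snapshot_agent_config.s3_backups'
terraform import 'module.vault_config.vault_raft_snapshot_agent_config.s3_backups' s3

After the state and the real resources line up again, the plan runs cleanly and the provider no longer panics.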