I deployed a Vault 1.14.3 node into an existing HA cluster whose other nodes were running Vault 1.6.1. The new node joined the cluster and unsealed successfully, but as soon as it became the active (leader) node it crashed with the error below; a sketch of how the node was joined is included after the log.
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.473Z [INFO] core: successfully mounted: type=kubernetes version="v0.16.0+builtin" path=kubernetes/ namespace="ID: root. Path: "
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.474Z [INFO] rollback: starting rollback manager
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.474Z [INFO] core: restoring leases
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.491Z [INFO] storage.raft: pipelining replication: peer="{Voter raft-node-1 node-1.vault-cluster.locals-env.com:8201}"
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.504Z [INFO] identity: entities restored
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.505Z [INFO] identity: groups restored
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.505Z [INFO] core: starting raft active node
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.505Z [INFO] storage.raft: starting autopilot: config="&{false 0 10s 24h0m0s 1000 0 10s false redundancy_zone upgrade_version}" reconcile_interval=0s
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.506Z [INFO] core: usage gauge collection is disabled
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.508Z [INFO] core: post-unseal setup complete
Dec 04 19:14:53 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: 2023-12-04T19:14:53.871Z [INFO] expiration: lease restore complete
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: panic: runtime error: invalid memory address or nil pointer dereference
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x31b4be0]
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: goroutine 272 [running]:
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: github.com/hashicorp/vault/vault.(*Core).GetHAPeerNodesCached(0x10717f00?)
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: /home/runner/work/vault/vault/vault/core.go:3750 +0x180
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: github.com/hashicorp/vault/vault.(*Core).getHAMembers(0xc00331a000)
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: /home/runner/work/vault/vault/vault/ha.go:118 +0x1ff
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: github.com/hashicorp/vault/vault.(*Core).monitorUndoLogs.func1()
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: /home/runner/work/vault/vault/vault/raft.go:244 +0x2bf
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: created by github.com/hashicorp/vault/vault.(*Core).monitorUndoLogs
Dec 04 19:14:54 ip-172-22-112-19.eu-west-1.compute.internal vault[26558]: /home/runner/work/vault/vault/vault/raft.go:217 +0x218
Dec 04 19:14:59 ip-172-22-112-19.eu-west-1.compute.internal vault[26766]: ==> Vault server configuration:
Dec 04 19:14:59 ip-172-22-112-19.eu-west-1.compute.internal vault[26766]: Administrative Namespace:
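For context, the new node uses Raft integrated storage (visible in the log as storage.raft) and was joined to the existing cluster roughly as sketched below. The leader address is inferred from the log (API port 8200 assumed), so the exact commands and automation on my side may differ slightly.

```sh
# Assumed join/unseal sequence for the new 1.14.3 node; addresses are
# inferred from the log, not copied verbatim from my configuration.
export VAULT_ADDR=https://127.0.0.1:8200

# Join the new node to the existing Raft cluster via one of the old nodes.
vault operator raft join https://node-1.vault-cluster.locals-env.com:8200

# Provide unseal key shares until the node is unsealed.
vault operator unseal
```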
What might be the reason for this behavior?
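If it helps, I can attach the output of the standard cluster checks below (a sketch; it assumes VAULT_ADDR and a valid token are set, and is run against a node that is still up):

```sh
# Show this node's Vault version, HA mode, and whether it is the active node.
vault status

# List the Raft peers and which node is currently the leader.
vault operator raft list-peers
```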