So - when writing multi-region deployments, I have tended to define two aws providers, each with an alias (aws.primary and aws.secondary) and role assumption (credentials in the org root, assume-role into the deployment account). Up to some point (I can check which version later), if I didn't specify primary or secondary, an error was thrown saying the "aws" provider wasn't defined (without an alias, at least). That was by design: since I am working in two different places, I am being explicit, with no "default" activity that can slip me up (and yes, I use the allowed_account_ids config on each provider to keep me from configuring the wrong environment). Most of the time this is two regions in the same account, but not always - some code, like RAM, needs "handshakes" between accounts.
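For context, a minimal sketch of the setup I mean (regions, account IDs, and the role ARN below are placeholders, not my real values):

```hcl
provider "aws" {
  alias               = "primary"
  region              = "us-east-1"       # placeholder region
  allowed_account_ids = ["111111111111"]  # placeholder deployment account

  assume_role {
    # credential lives in the org root; assume into the deployment account
    role_arn = "arn:aws:iam::111111111111:role/deploy" # placeholder
  }
}

provider "aws" {
  alias               = "secondary"
  region              = "us-west-2"       # placeholder region
  allowed_account_ids = ["111111111111"]

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/deploy"
  }
}
```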
Anyhow - I just made a mistake with a resource's "provider = aws.xxx" attribute, on a new TF 1.x and AWS Provider 4.x... and there is no longer an error when I don't provide a valid provider. There is now a "phantom" default aws provider, despite one never having been defined... and since it uses my raw credential, with no allowed_account_ids and no assume_role clause, it is writing to my root account (where my credential is sourced from).
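To illustrate what I mean (bucket names are placeholders): one resource pinned to an alias as intended, and one that falls through to the implied default:

```hcl
# Pinned to an aliased provider - what I intend:
resource "aws_s3_bucket" "pinned" {
  provider = aws.primary
  bucket   = "example-pinned-bucket"   # placeholder name
}

# No provider argument - Terraform now supplies an implied, empty default
# "aws" configuration, which uses my raw credential with no assume_role
# and no allowed_account_ids, so it targets the org root account:
resource "aws_s3_bucket" "unpinned" {
  bucket = "example-unpinned-bucket"   # placeholder name
}
```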
Now - this is my test environment, and prod isn't exactly the same (I don't have quite the same level of rights there, for obvious reasons), but it could be, and I really don't want to be making changes to an account I never configured Terraform to work on.
Is this "unconfigured/default provider" behavior expected? Because... blech. I could easily have destroyed my entire test org, and I configured things in the past specifically to avoid this sort of issue.
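In the meantime, one guard I'm considering (my own idea, not an official recommendation): declare a deliberately unusable default "aws" provider so anything that falls through to it fails fast instead of touching the org root account. The account ID below is a dummy value:

```hcl
# A default provider that can never match my real credential's account,
# so any resource missing a provider = aws.xxx argument fails at plan
# time rather than silently writing to the org root:
provider "aws" {
  region              = "us-east-1"       # required field; value irrelevant here
  allowed_account_ids = ["000000000000"]  # dummy account - no credential matches
}
```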