I've been wondering where to put the state for the remote-state S3 backend Terraform setup itself.
So I should use this module to create a remote state backend, then update the `terraform {}` block with the newly created backend and run `terraform init -migrate-state`?
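For reference, I'm imagining something like this in the root module, where the bucket, table, and region names are placeholders I made up:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"                 # hypothetical bucket created by the module
    key            = "bootstrap/terraform.tfstate"
    region         = "eu-west-1"                   # hypothetical region
    dynamodb_table = "my-tf-locks"                 # hypothetical lock table
    encrypt        = true
  }
}
```

followed by `terraform init -migrate-state` to copy the local state into the bucket.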
That gives me remote state, but doesn't it create a catch-22? I can't change anything without breaking access to the state.
And it's probably unwise to put the state in a private git repo?
@Raboo we have been using the following approach for 7 years now when creating state in an S3 + DynamoDB ("s3+ddb") backend, and it also works in Jenkins, GitLab, or GitHub CI/CD scenarios.
Our repos include `backend.tf_ENV` and `terraform.tfvars_ENV` files.
The build process for the state init or any state change includes a step that symlinks the correct ENV file to `terraform.tfvars` on the "first" run, and symlinks the correct `backend.tf` for any run after that (see the sketch below).
The first run, and any init run, also includes a step to migrate the state: `terraform init -migrate-state -input=false -force-copy`.
Any run after this then references the key that we use for the bootstrap `terraform.tfstate` file.
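A minimal sketch of that build step as a shell script, assuming a hypothetical `ENV` variable supplied by the pipeline and using the presence of the `backend.tf` symlink to detect the first run (both details are placeholders for however your CI handles this):

```sh
#!/bin/sh
set -eu

ENV="${ENV:?environment name, e.g. staging}"   # hypothetical pipeline variable

# Every run gets the environment's variables
ln -sf "terraform.tfvars_${ENV}" terraform.tfvars

if [ ! -e backend.tf ]; then
  # "First" run: no backend selected yet, so create the s3+ddb
  # resources while the state is still local
  terraform init -input=false
  terraform apply -input=false -auto-approve
fi

# Any run after that points at the environment's backend; the init
# migrates the local bootstrap state into it on the first pass
ln -sf "backend.tf_${ENV}" backend.tf
terraform init -migrate-state -input=false -force-copy
```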
State files should never be in a git repo.
I have not yet hit a situation where changing anything in our "s3+ddb" state creation process broke access to the state. As long as you are not blanket-removing your own access to S3, or locking yourself out of the bucket with an invalid policy, you should be fine. Even when a run times out, you can in most cases salvage the state through the `errored.tfstate` file and use it to complete the run.
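For completeness, the salvage step is typically just pushing that file back once the backend is reachable again:

```sh
# errored.tfstate is written to the working directory when Terraform
# cannot persist state after an apply; push it to the configured backend
terraform state push errored.tfstate
```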
This would make the README a one-stop tutorial for setting up new projects with remote state.
Once you have set up the provider, run the command below to migrate to the new remote state.
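That command being, presumably, the migration init discussed earlier in the thread:

```sh
terraform init -migrate-state
```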