When Storing the Terraform State, commit updates more often ( configurable? ) #24276
Description
Current Terraform Version
Terraform v0.12.21
+ provider.aws v2.51.0
+ provider.http v1.1.1
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.template v2.1.2
Use-cases
Occasionally in our environment, our network connectivity fails. This causes Terraform to fail as well, and prevents it from updating the remote state in S3.
EDIT: For this case, we are executing a Terraform project inside a Docker container, either as a docker run or a k8s pod. When Terraform is running and we lose the network connection to the container's interactive shell, the container is unceremoniously and immediately destroyed. Because the Docker container is terminated, Terraform gets no signal that it is being killed and has no opportunity to write any state.
When we restart Terraform, we receive errors like "Resource <> already exists". If Terraform had been saving the remote state more often, it wouldn't be 'confused' and would know not to re-create an already-existing resource that it had created on the previous run.
Any more-recent state stored locally where Terraform is executing is lost as well, because the Docker container (where this all runs) is also destroyed when we lose the network connection.
Attempted Solutions
I've tried monitoring the folder where Terraform is running to see if I can capture intermediate state files. Additionally, I've scanned the docs to see if there is any information or configuration on how state is written remotely.
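A minimal sketch of the folder-monitoring workaround described above: a small watcher that copies the local state file to a durable destination whenever its checksum changes, run alongside `terraform apply`. The destination path, interval, and function name here are illustrative assumptions, not anything Terraform itself provides, and a copy made mid-apply may of course still lag the real infrastructure.

```shell
#!/bin/sh
# Sketch: copy the state file to a durable destination whenever it changes,
# so a killed container loses less intermediate state. All names below
# (sync_state, the backup destination) are hypothetical.
sync_state() {
  state_file="$1"   # e.g. terraform.tfstate
  dest="$2"         # e.g. s3://my-backup-bucket/intermediate.tfstate (assumed bucket)
  interval="${3:-10}"  # seconds between checks
  iters="${4:-}"       # empty = loop forever; a number = check that many times

  last_sum=""
  i=0
  while [ -z "$iters" ] || [ "$i" -lt "$iters" ]; do
    if [ -f "$state_file" ]; then
      sum=$(cksum < "$state_file")
      if [ "$sum" != "$last_sum" ]; then
        case "$dest" in
          # For an S3 destination, assumes the AWS CLI is installed in the container.
          s3://*) aws s3 cp "$state_file" "$dest" ;;
          *)      cp "$state_file" "$dest" ;;
        esac
        last_sum="$sum"
      fi
    fi
    i=$((i + 1))
    sleep "$interval"
  done
}
```

Usage would look something like `sync_state terraform.tfstate s3://my-backup-bucket/intermediate.tfstate 10 &` started just before `terraform apply`. This is only a stopgap: it cannot help when the container dies between the check interval and the final state write, which is why a built-in option to push state more often would be preferable.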
Proposal
I would be interested in knowing whether, after any "Creation complete" or "Destruction complete" message, an updated state file could be pushed to the remote backend. I understand that any multi-threading might make this challenging, etc.