I am switching to GitLab and plan to use Terraform. I have used CloudFormation before and understand deploying a stack to AWS, creating a change set, and updating resources. How does updating/deleting work in Terraform?
It's similar to CFN. TF has a state file (which can be local or remote) where it stores information about your currently deployed resources and their configuration.
After any change to your TF config files, TF creates a plan of how to apply your changes in relation to what it has in the state. The plan is similar to a change set in CFN: it shows which resources have to be deleted, replaced, created, or modified.
Just like with a change set, you have the option to review the plan, and if you agree with the proposed actions, you can apply it.
The biggest difference is what happens on failure: CloudFormation rolls the stack back to its previous state, whereas Terraform leaves the resources in a partially deployed state.
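A minimal sketch of the plan/apply loop, assuming your working directory has already been initialized with terraform init:

# Write the execution plan to a file so the plan you review is exactly the one you apply
terraform plan -out=tfplan
# Inspect the proposed create/update/replace/destroy actions, then apply that saved plan
terraform apply tfplan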
There is Terraform code to configure an MWAA environment in AWS. When it runs a second time, the IAM role and policy already exist and do not need to be created again, so it fails with an error.
How do I ignore the creation of already-existing resources in TF?
I assume that you applied a Terraform plan which created the "MWAA" resource, then you somehow lost the state (stored locally and lost? or not shared with a different client?), and then you re-applied the plan and Terraform tried to create "MWAA" again.
In that case, your main problem is that you lost the state, and you need to make sure you persist it, e.g., by storing it in an S3 bucket.
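For example, a minimal S3 backend block might look like the following (the bucket, key, and lock-table names are hypothetical placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # hypothetical bucket holding the state
    key            = "mwaa/terraform.tfstate"  # hypothetical path of the state file
    region         = "us-east-1"
    dynamodb_table = "my-terraform-lock"       # optional state locking table (hypothetical)
  }
}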
However, if you really need to make Terraform aware of an already created resource, you need to bring it into Terraform's state. The tool for that is "terraform import", about which you can read more here: https://www.terraform.io/cli/import
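For example, to bring an IAM role that already exists in AWS under Terraform's management (the resource address and role name below are hypothetical):

# The address aws_iam_role.mwaa_execution_role must match a resource block in your configuration
terraform import aws_iam_role.mwaa_execution_role my-mwaa-execution-role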
If you already have the state file and Terraform is still trying to recreate the resource, then it may be a tag change or a modified timestamp value causing the difference...
To avoid this, you can restrict the apply to the specific resource you want using the terraform apply command:
terraform apply --target=<resource_type>.<resource_name>
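For example, to apply only the MWAA environment and skip the already-existing IAM resources (the resource address is hypothetical):

terraform apply --target=aws_mwaa_environment.this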
TL;DR: I have accidentally deployed via Terraform to multiple regions, using back-end state management in multiple regions as well. Now I want to clean it all up with Terraform. Is this possible? How should I approach the problem?
A few months ago I created a solution which I pushed to AWS via Terraform, using back-end state management with S3 and a DynamoDB table lock. It deployed successfully, but upon returning to it recently I discovered that I had apparently changed both the terraform init back-end parameters and the provider region values between deployments.
What I believe I was left with, back then, was two separate deployments - one in one region and one in another. The problem, now, is that I'm not sure which region's state is used to manage which region's resources.
My documented terraform init is set up to use us-east-1 to manage back-end state. Looking at the versioning of the .tf files, I can see that at some point I had resources deployed to eu-central-1. I don't know if I have erroneously deployed to one region while managing state in another, but I suspect so.
In an attempt to destroy the eu-central-1 resources, I ran the init below locally. What followed is shown after it.
> terraform init -backend-config="bucket=app-us-east-1-lambda-state" -backend-config="key=modules/app-lambda-function/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="dynamodb_table=app-us-east-1-lambda-lock"
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "s3" backend. An existing non-empty state already exists in
the new backend. The two states have been saved to temporary files that will be
removed after responding to this query.
Previous (type "s3"): C:\Users\USER\AppData\Local\Temp\terraform123456798\1-s3.tfstate
New (type "s3"): C:\Users\USER\AppData\Local\Temp\terraform123456789\2-s3.tfstate
Do you want to overwrite the state in the new backend with the previous state?
Enter "yes" to copy and "no" to start with the existing state in the newly
configured "s3" backend.
Enter a value:
Now, unfortunately (I suspect), at this point I typed "no", hit return, and the following is what happened...
Enter a value: no
Releasing state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/archive from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/archive v2.1.0
- Using previously-installed hashicorp/aws v3.30.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
At this point I tried to destroy the resources but this failed for various reasons. I have trimmed most of them, but what you see is a good example of them all...
> terraform destroy --auto-approve --var-file=../../local.tfvars
Acquiring state lock. This may take a few moments...
Error: error deleting CloudWatch Events Rule (my-app-rule): ValidationException: Rule can't be deleted since it has targets.
status code: 400, request id: 1234567-1234-1234-1234-12345679012
So what I believe I have done in the past is:
Deployed with both back-end management and providers set to eu-central-1
Changed provider to us-east-1 and re-deployed
Changed back-end management to us-east-1 and re-deployed
More recently, changed the back-end management back to eu-central-1 and attempted destroy
Now, I understand that this is all on my personal account and I can manually destroy all the resources using the console. However, I would like to understand what I should have done when I realised that (months ago) I had been repeatedly deploying while also changing the back-end and provider regions.
I created the AWS Elastic Beanstalk resources using Terraform and included S3 as the backend for storing the tfstate. I'm reusing the same Terraform infra code to deploy the same resources with different properties, like a different instance type, security groups, etc...
My question: is there a way I can still destroy the previous Beanstalk infra created by the same Terraform code? Maybe by referring to the tfstate files stored in S3 and then running terraform destroy? Thanks in advance for your answers.
If the Terraform S3 backend configured in your codebase points to the state containing the resources you would like to destroy, you can run terraform destroy and review the removal plan.
You can also simply run terraform apply, and Terraform will converge the previously existing infrastructure to the newly desired one, without the intermediate destroy run.
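As a hedged sketch, assuming the previous environment's state lives under its own key in the same S3 bucket (all names below are hypothetical placeholders):

# Point the backend at the state file of the environment you want to remove
terraform init -reconfigure -backend-config="bucket=my-terraform-state" -backend-config="key=beanstalk/old-env/terraform.tfstate" -backend-config="region=us-east-1"
# Review and confirm the removal plan for the resources tracked in that state
terraform destroy --var-file=old-env.tfvars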
I had provisioned some resources on AWS, including an EC2 instance, but after that we attached some extra security groups to these instances. Terraform has now detected this and says it will roll the change back as per the configuration file.
Let's say I had the below code, which attaches an SG to my EC2 instance:
vpc_security_group_ids = ["sg-xxxx"]
But now my problem is: how can I update the terraform.tfstate file so that it does not detach the manually attached security groups?
I can solve it as below:
I would refresh the Terraform state file with terraform refresh, which will update the state file.
Then I have to update my Terraform configuration file manually with the security group IDs that were attached manually.
But that is only feasible for a small setup. What if we have a complex scenario? Do we have any other mechanism in Terraform which would detect the drift and update it?
Thanks!!
There is no way Terraform will update your source code when it detects drift on AWS.
The process you mention is right:
Report the manual changes done in AWS into the Terraform code (as sketched below)
Do a terraform plan. It will refresh the state and show you if there is still a difference.
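For the security group example above, step 1 would mean adding the manually attached group to the list in the code (sg-yyyy is a hypothetical placeholder for the ID that was attached by hand):

vpc_security_group_ids = ["sg-xxxx", "sg-yyyy"]  # sg-yyyy: the group attached manually in the console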
You can use terraform import with the resource ID to import the remote changes into your Terraform state file. Later, use terraform plan to check whether the change is reflected in the code.
This can be achieved by updating the Terraform state file manually, but updating this file by hand is not best practice.
Also, if you are updating your AWS resources (created by Terraform) manually or outside the Terraform code, it defeats the whole purpose of Infrastructure as Code.
If you are looking to manage complex infrastructure on AWS using Terraform, it is best to follow best practices, and one of them is that all changes should be made via code.
Hope this helps.
terraform import <resource>.<resource_name> [unique_id_from_aws]
You may need to temporarily comment out any provider/resource that relies on the output of the manually created resource.
After running the above, un-comment the dependencies and run terraform refresh.
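A hypothetical sketch for the security group scenario above (the resource name and group ID are placeholders):

# Requires a matching resource "aws_security_group" "manual" block in your configuration
terraform import aws_security_group.manual sg-0123456789abcdef0
terraform refresh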
The accepted answer is technically not correct.
As per my testing:
terraform refresh will update the state file with the current live configuration
terraform plan will only update its in-memory view with the live configuration and compare it against the code, but will not actually update the state file
terraform apply will update the state file to the current live configuration, even if it says there are no changes to apply (use case: a manual change was made, the TF code was then updated to reflect it, and now you want to update the state file), as sketched below
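A sketch of that last workflow, assuming the manual change has already been copied into the .tf code:

terraform refresh   # writes the live configuration into the state file
terraform plan      # compares the code against the live configuration; the state file is not persisted
terraform apply     # persists the state, even when it reports no changes to apply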
I followed the tutorial at http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
The tutorial demonstrates how to automatically deploy a Lambda function and an API Gateway using AWS CloudFormation.
After some time I was able to complete the tutorial successfully. This means that when I push a commit to the GitHub repository linked to the AWS CodePipeline, the changed code is uploaded/packaged to AWS -> built -> and deployed (i.e. I can see the code change).
My problem is that I tried deleting the Lambda function and then triggering the CodePipeline by pushing a git commit. This triggered the pipeline, and I could watch the source, build, and staging steps complete successfully. However, I cannot find the Lambda. I thought that CloudFormation would recreate the application? Can you help?
If you deleted the function manually then you're most likely running into this issue:
Resources that are created as part of an AWS CloudFormation stack must be managed from the same stack. Modifications to a resource must be done by a stack update. If a resource is deleted, a stack update is also necessary to remove the resource from the template. If a resource has been accidentally or purposely manually deleted, you can encounter errors when attempting to perform a stack update.
https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/
You can resolve this by manually recreating the resource with the same name, then allowing CloudFormation to manage the resource in future.
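A hedged sketch of that recovery, assuming the deleted resource is the tutorial's Lambda function (the function name, role ARN, runtime, and zip path below are all hypothetical):

# Recreate the function with the exact name CloudFormation expects,
# then run a stack update so CloudFormation resumes managing it
aws lambda create-function \
  --function-name my-tutorial-function \
  --runtime python3.9 \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --handler index.handler \
  --zip-file fileb://function.zip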
The reason why I did not see any Lambda function was that I had only created the change set ("create or update change set") and missed adding the actual deploy stage, "execute change set".