I have a DMS task that failed and isn't resuming or restarting. Unfortunately, according to AWS Support, the only recourse is to destroy and recreate it. I have a large infrastructure that takes several hours to destroy and recreate with Terraform. I'm running Terraform version 1.2.X with the AWS provider version 4.17.0.
I tried running terraform plan -destroy -target="<insert resource_type>.<insert resource_name>". I tried with and without quotes, double hyphens prior to the target option, module names, etc. Every time the result comes back with this error:
Either you have not created any objects yet or the existing objects were...
My hierarchy is this: Main module -> sub module -> resource. My spelling and punctuation are correct.
I've Googled it. I find only the HashiCorp documentation, which specifies the syntax but not the naming convention, plus bug reports from years ago. How do I selectively destroy a resource?
It turns out I wasn't naming my resource correctly.
After some trial and error, I ran a destroy plan for my entire infrastructure (terraform plan <insert module runtime params> -destroy). Using the output from that, I found the name of the resource I wanted to destroy. The format was module.<submodule>.<resourcetype>.<resourcename>.
Once I acquired the resource name directly from Terraform, I first ran the terraform plan -destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command to verify the outcome, then the terraform destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command and it worked!
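For illustration, assuming the failed task is an aws_dms_replication_task named "main" defined in a submodule called "dms" (both names are hypothetical), the two commands would look like:

terraform plan -destroy -target="module.dms.aws_dms_replication_task.main"
terraform destroy -target="module.dms.aws_dms_replication_task.main"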
Related
I'm setting up a pipeline that provisions resources in AWS. Each time I run the pipeline, I get a "module already exists" error. I know the resources I want are already provisioned, but my understanding of Terraform is that if something already exists it just skips it and provisions the rest that doesn't exist yet. How do I make it skip existing modules and not result in a pipeline build error?
my understanding of Terraform is that if it already exists it just skips it and provisions
Sadly, your understanding is incorrect. TF does not check whether something exists before it provisions resources. By TF design principles, it is assumed that resources do not exist if they are to be managed by TF.
How do I make it skip existing modules and not result in a pipeline build error?
You have to do it manually. Pass some variables to your TF script for conditional creation of resources, as in the sketch below. TF has no capability to check for the pre-existence of resources unless you do it yourself.
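A minimal sketch of that approach, assuming a boolean variable gates creation (the variable and bucket names below are hypothetical):

variable "create_bucket" {
  type    = bool
  default = true
}

resource "aws_s3_bucket" "this" {
  # Created only when var.create_bucket is true; set it to false in
  # environments where the bucket already exists outside of Terraform.
  count  = var.create_bucket ? 1 : 0
  bucket = "my-example-bucket"
}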
Terraform does not skip a resource if it already exists; it throws an error and quits execution.
To deal with this kind of problem, the best alternative is to import the existing resource into your state file.
At the end of each resource page in the official documentation you will find an "Import" section; it usually goes like:
terraform import terraform_state_id component_id
Example:
terraform import aws_instance.web i-12345678
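Note that the matching resource block has to exist in your configuration before you run the import; a minimal sketch for the example above (the attribute values are placeholders to be reconciled on the next plan):

resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder, adjust after import
  instance_type = "t3.micro"     # placeholder, adjust after import
}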
I have a CDK-based CodePipeline with a 'Deploy' step for a Lambda function that started to fail recently, but has already succeeded multiple times for this branch in the past. The odd things about it, compared to the production branch that deploys fine:
the code changes are minimal (2 LOC) and pass the unit tests and all other CI steps
the only other change is a bumped version number of the deployed application in npm's package.json file
The CDK stack has not changed recently.
The failing step is the one where the CI pipeline tries to deploy the Lambda-based application via CloudFormation. When reviewing the error in CloudFormation, the error 'Update to resource type AWS::CodeDeploy::Application is not supported' pops up for the resource type AWS::CodeDeploy::Application, which is deployed by a CodePipelineAction.S3DeployAction.
The error seems to refer to the created deploy stack and not to a resource in it.
Update: I got an answer from AWS Support
This is a known issue, which you can track in the CDK issues on GitHub. I encourage you to add your voice and experience to this issue. The more input we have, the more visibility we have to improve the service. Please see the link below:
https://github.com/aws/aws-cdk/issues/15947
If you are updating this resource's tags, there is a CDK workaround. You can add excluded resource types within your construct. Below is a sample of how this would look:
const tagOptions = {
  excludeResourceTypes: ['AWS::CodeDeploy::Application'],
};
cdk.Tags.of(deployment).add('Name', `buffer-${props.environment}`, tagOptions);
The likely reason it broke: this resource type didn't support tags before, and now it does, leading to the issue.
We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now adopting CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering if there's an easier way to import the resource in each environment than initially going through the console to import the resource while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership"), but I'm not finding anything. Currently it seems like the easiest route would be to create an empty stack in each environment which imports the resource and then just deploy as normal.
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?
What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CI/CD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts instead of importing stg resources into prd.
A fantastic example and quickstart is the serverless-cicd-for-enterprise, which should serve as a good starting point for you.
You can't get stuck on ROLLBACK_COMPLETE, as that is the last action a failed change set executes. What it means is that the stack tried to update, couldn't, and has reverted to the last successful deployment. If this is the first deployment (no successful deployments yet), you will need to delete the stack and try again. However, if you have had a successful deployment, you can run an update stack.
So I've gone through some basic reading:
https://blog.gruntwork.io/an-introduction-to-terraform-f17df9c6d180
and
https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa
So I understand that .tfstate tells the Terraform CLI which resources it's actually responsible for managing. But couldn't that be done more minimally with a list of IDs?
Why does .tfstate need to contain the full configuration of every resource, if a refresh is run implicitly before terraform apply?
Wouldn't that refresh get complete information from the infrastructure, which could then be used to do the diff, etc.?
I suppose if you get complete information every time, you might as well record it. But I'm wondering if it's a necessary step. Thanks!
I'm facing a difficult issue to resolve.
I'm Terraforming the deployment of multiple resources on GCP's platform.
Those resources are all created through the Terraform GCP network module (https://github.com/terraform-google-modules/terraform-google-network).
I'm building 2 projects with a (shared) VPC and some subnetworks. Easy at first glance.
The first terraform init/plan/apply was OK; the tfstate file is on a GCS backend with versioning set to true.
Today, I launched a terraform plan on it to check that everything was OK before making some modifications.
The output of the plan tells me that Terraform wants to destroy some resources ... and recreate (add) ... exactly the same resources ...
The code is in our Bitbucket repo, with no changes to it since the last apply, which was OK.
I tried to retrieve an old version of the tfstate files and to disable the GCS backend so I could debug and correct it locally, but I can't find a way to refresh the current state.
I tried these tricks:
terraform refresh
terraform import (40 resources by hand ... and even though the import commands work, the plan command still wants to destroy my existing resources and recreate exactly the same ones ...)
So I'm wondering if you have already encountered the same problem.
If so, how did you manage it?
I can share my source on demand.
Terraform v0.12.9
provider.google v2.19.0
provider.google-beta v3.3.0
provider.null v2.1.2
provider.random v2.2.1
OK, big rookie mistake; terraform providers saved my day. No version was set on the module source ... I just defined it, re-ran the plan, and everything was fine again.
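A sketch of what pinning the module version looks like (the module name, version constraint, and inputs below are illustrative, not the exact ones I used):

module "network" {
  source  = "terraform-google-modules/network/google"
  version = "~> 2.0" # illustrative constraint; pin to the version you validated

  # ... module inputs (project_id, network_name, subnets, etc.) ...
}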