I'm setting up a pipeline that provisions resources in AWS. Each time I run the pipeline, I get a "module already exists" error. I know the resources I want are already provisioned, but my understanding of Terraform is that if it already exists it just skips it and provisions the rest that don't already exist. How do I make it skip existing modules and not result in a pipeline build error?
my understanding of Terraform is that if it already exists it just skips it and provisions
Sadly, your understanding is incorrect. TF does not check whether something already exists before it provisions resources. By TF design principles, resources are assumed not to exist yet if they are to be managed by TF.
How do I make it skip existing modules and not result in a pipeline build error?
You have to do it manually. Pass some variables to your TF script for conditional creation of resources. TF has no capability to check for the pre-existence of resources unless you do it yourself.
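A minimal sketch of what that conditional creation can look like (the variable and bucket names below are made up for illustration):

variable "create_bucket" {
  description = "Set to false in environments where the bucket already exists"
  type        = bool
  default     = true
}

resource "aws_s3_bucket" "this" {
  # count lets the pipeline toggle creation per environment
  count  = var.create_bucket ? 1 : 0
  bucket = "my-example-bucket"
}

The pipeline can then pass -var="create_bucket=false" for environments where the resource was provisioned outside this configuration.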
Terraform does not skip a resource if it already exists; it throws an error and stops execution.
To deal with this kind of problem, the best alternative is to import the existing resource into your state file.
At the end of each resource page in the official documentation you will find an "Import" section, which usually goes like:
terraform import terraform_state_id component_id
Example:
terraform import aws_instance.web i-12345678
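Note that terraform import only writes to the state file; the matching resource block must already exist in your configuration. A minimal sketch for the example above (the attribute values are placeholders, not the real instance's settings):

resource "aws_instance" "web" {
  # Placeholder attributes; after importing, run terraform plan and adjust
  # the configuration until it matches the real instance.
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}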
I have a DMS task that failed and isn't resuming or restarting. Unfortunately, according to AWS Support, the only recourse is to destroy and recreate it. I have a large infrastructure that takes several hours to destroy and recreate with Terraform. I'm running Terraform version 1.2.X with the AWS provider version 4.17.0.
I tried running terraform plan -destroy -target="<insert resource_type>.<insert resource_name>". I tried with and without quotes, double hyphens prior to the target option, module names, etc. Every time the result comes back with this error:
Either you have not created any objects yet or the existing objects were...
My hierarchy is this: Main module -> sub module -> resource. My spelling and punctuation are correct.
I've Googled it. I found only the HashiCorp documentation, which specifies the syntax but not the naming convention, as well as bug reports from years ago. How do I selectively destroy a resource?
It turns out I wasn't naming my resource correctly.
After some trial and error, I ran a destroy plan for my entire infrastructure (terraform plan <insert module runtime params> -destroy). Using the output from that, I found the name of the resource I wanted to destroy. The format was module.<submodule>.<resourcetype>.<resourcename>.
Once I acquired the resource name directly from Terraform, I first ran the terraform plan -destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command to verify the outcome, then the terraform destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command and it worked!
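For example, with a hypothetical DMS replication task declared in a submodule named dms, the two commands would look like this:

terraform plan -destroy -target="module.dms.aws_dms_replication_task.main"
terraform destroy -target="module.dms.aws_dms_replication_task.main"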
We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now starting to use CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering if there's an easier way to import the resource in each environment than to initially go through the console to import the resource while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership"), but I'm not finding anything. Currently it seems like the easiest route would be to create an empty stack in each environment which imports the resource and then just deploy as normal.
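For reference, the closest thing to driving this from a deployment system seems to be an import change set created via the CLI; a rough sketch, assuming a bucket named my-existing-bucket that the template declares with logical ID ExistingBucket and DeletionPolicy: Retain:

aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-existing-bucket \
  --change-set-type IMPORT \
  --resources-to-import '[{"ResourceType":"AWS::S3::Bucket","LogicalResourceId":"ExistingBucket","ResourceIdentifier":{"BucketName":"my-existing-bucket"}}]' \
  --template-body file://template.yaml

aws cloudformation execute-change-set --stack-name my-stack --change-set-name import-existing-bucket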
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?
What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CICD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts, instead of importing stg resources into prd.
A fantastic example and quickstart is the serverless-cicd-for-enterprise, which should serve as a good starting point for you.
You can't get stuck on ROLLBACK_COMPLETE, as that is the last action a failed change set executes. What it means is that the stack tried to update, couldn't, and has reverted to the last successful deployment. If this is the first deployment (no successful deployments yet), you will need to delete the stack and try again. However, if you have had a successful deployment, you can run a stack update.
So I've gone through some basic reading:
https://blog.gruntwork.io/an-introduction-to-terraform-f17df9c6d180
and
https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa
So I understand that .tfstate tells the Terraform CLI which resources it's actually responsible for managing. But couldn't that be done more minimally with a list of IDs?
Why does .tfstate need to contain the full configuration of every resource, if terraform refresh is run implicitly before terraform apply?
Wouldn't the refresh get complete information from the infrastructure? And then that could be used to do the diff, etc.
I suppose if you get complete information every time, you might as well record it. But I'm wondering if it's a necessary step. Thanks!
I'm facing an issue that is difficult to resolve.
I'm using Terraform to deploy multiple resources on the GCP platform.
Those resources are all included in Terraform's GCP network module (https://github.com/terraform-google-modules/terraform-google-network).
I'm building two projects with a shared VPC and some subnetworks. Easy at first glance.
The first terraform init/plan/apply was OK; the tfstate file is on a GCS backend with versioning set to true.
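(For context, the backend block looks roughly like the sketch below; the bucket name is made up, and versioning is enabled on the bucket itself rather than in the backend block.)

terraform {
  backend "gcs" {
    bucket = "my-tfstate-bucket"   # hypothetical bucket with object versioning enabled
    prefix = "network/state"
  }
}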
Today, I launched a terraform plan to check that everything was OK before making some modifications.
The plan output tells me that Terraform wants to destroy some resources ... and recreate (add) ... strictly the same resources ...
The code is in our Bitbucket repo, with no changes since the last apply, which was OK.
I tried to retrieve an old version of the tfstate file and to disable the GCS backend so I could debug and correct it locally, but I can't find a way to refresh the current state.
I tried these tricks:
terraform refresh
terraform import (40 resources by hand ... and even though the import commands work, the plan still wants to destroy my existing resources and recreate strictly the same ones ...)
So I'm wondering if you have already encountered the same problem.
If so, how did you manage it?
I can share my source on request.
Terraform v0.12.9
provider.google v2.19.0
provider.google-beta v3.3.0
provider.null v2.1.2
provider.random v2.2.1
OK, big rookie mistake; terraform providers saved my day. No version was set on the module source ... I just defined it, re-ran the plan, and everything was fine again.
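For anyone hitting the same thing, a minimal sketch of pinning the module version (the names, inputs, and version below are illustrative, not taken from the original code):

module "vpc" {
  source  = "terraform-google-modules/network/google"
  version = "~> 2.0"   # pin the module so plan doesn't silently pick up a newer release

  project_id   = var.project_id
  network_name = "shared-vpc"

  subnets = [
    {
      subnet_name   = "subnet-01"
      subnet_ip     = "10.10.10.0/24"
      subnet_region = "europe-west1"
    },
  ]
}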
I would like to put my existing cloud resources under version control before applying changes through Terraform. Is there any way I can run a single command and store the current state of my cloud?
I have tried to use Terraform's import command:
terraform import ADDR ID
But this takes a long time to identify all the resources and import them.
I have tried terraforming but this also needs a resource type to import:
terraforming s3
Is there any tool that can help in importing all existing resources?
While this doesn't technically answer your question, I would strongly advise against trying to import an entire existing AWS account into Terraform in one go, even if it were possible.
If you look at any Terraform best practices, an awful lot of it comes down to minimising the blast radius, so that only things that make sense to be changed at the same time as each other are ever applied at the same time. Charity Majors wrote up a good blog post about this and the impact it had when that wasn't the case.
Any tool that would mass-import things (e.g. terraforming) is just going to dump everything into a single state file, which, as mentioned before, is a bad idea.
While it sounds laborious, I'd recommend that you begin your migration to Terraform more carefully and methodically. In general I'd probably say that only new infrastructure should use Terraform, utilising Terraform's data sources to look up existing things, such as VPC IDs, that already exist.
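A minimal sketch of that data-source pattern (the tag value and CIDR below are made up):

data "aws_vpc" "existing" {
  # Look up a VPC that was created outside Terraform, without importing it
  tags = {
    Name = "legacy-vpc"
  }
}

resource "aws_subnet" "new" {
  vpc_id     = data.aws_vpc.existing.id   # new infrastructure references the existing VPC
  cidr_block = "10.0.42.0/24"
}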
Once you feel comfortable with using Terraform and structuring your infrastructure code and state files in a particular way, you can then begin to think about how you would map your existing infrastructure into Terraform state files etc. and begin manually importing specific resources as necessary.
Doing things this way also allows you to find your feet with Terraform a bit better and understand its limitations and strengths, while also working out how your team and/or CI will work together (e.g. remote state, state file locking, and orchestration) without tripping over each other or causing potentially crippling state issues.
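To give a flavour of the remote state and locking piece, a hedged sketch of an S3 backend with DynamoDB locking (bucket, key, table, and region are hypothetical):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"          # hypothetical state bucket
    key            = "network/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"             # enables state locking
    encrypt        = true
  }
}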
I'm using terraformer to import my existing AWS infrastructure. It's much more flexible than terraforming and doesn't have the issues mentioned in the other answers.