Terraform quickest way to import multiple resources - amazon-web-services

My Terraform state file is messed up. The resources already exist on AWS, so when I run terraform apply I get multiple "Already Exists" errors like the one below.
aws_autoscaling_group.mysql-asg: Error creating AutoScaling Group: AlreadyExists: AutoScalingGroup by this name already exists - A group with the name int-mysql-asg already exists
When I run terraform import the error goes away, but I have hundreds of resources that produce this error. What is the best way to sync the Terraform state and make terraform apply succeed?

You may want to look at Terraforming.
It's a Ruby project that states it can "Export existing AWS resources to Terraform style (tf, tfstate)".
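If the goal is only to re-sync the state rather than generate configuration, another option is to script terraform import over a list of resources. A rough shell sketch, assuming you can build a file that maps each Terraform resource address to its AWS identifier (the file name and the example pair are placeholders):
# import_map.txt contains one "address id" pair per line, for example:
# aws_autoscaling_group.mysql-asg int-mysql-asg
while read -r address id; do
  terraform import "$address" "$id"
done < import_map.txt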

Related

Terraform deploy different resources per environment

I'm new to Terraform, so I'm sure this is an easy question.
I'm trying to deploy into GCP using Terraform.
I have 2 different environments, both in the same GCP project:
nonlive
live
I have alerts for each environment, so that is what I intend to create:
If I deploy into an environment, then Terraform must create/update resources for that environment but must not update resources for the other environments.
I'm trying to use modules and conditions, similar to this:
module "enviroment_live" {
source = "./live"
module_create = (var.environment=="live")
}
resource "google_monitoring_alert_policy" "alert_policy_live" {
count = var.module_create ? 1 : 0
display_name = "Alert CPU LPProxy Live"
Problem:
When I deploy to the live environment, Terraform deletes the alerts for the nonlive environment, and vice versa.
Is it possible to update the resources of one environment without deleting those of the other?
Regards
As Marko E suggested, the solution was to use workspaces:
Terraform workspaces
The steps are:
Create a workspace for each environment.
On deploy (CI/CD), select the workspace before plan/apply:
terraform workspace select $ENVIROMENT
Use conditions (as explained above) to create/configure the resources.
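As a sketch of that last step (not part of the original answer), the selected workspace can be read directly via terraform.workspace, so the condition follows whichever workspace the pipeline selected:
module "enviroment_live" {
  source        = "./live"
  # create the live resources only when the "live" workspace is selected
  module_create = (terraform.workspace == "live")
}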

Terraform destroy error 'Instance cannot be destroyed' and 'Failed getting S3 bucket'

I'm currently trying to destroy a workspace. I know that there are some buckets with a 'do not destroy' type tag applied to them, so when I ran terraform destroy for the first time, I got an Instance cannot be destroyed error for two buckets:
Resource module.xxx.aws_s3_bucket.xxx_bucket has
lifecycle.prevent_destroy set, but the plan calls for this resource to be
destroyed. To avoid this error and continue with the plan, either disable
lifecycle.prevent_destroy or reduce the scope of the plan using the -target
flag.
So I navigated to the AWS console and deleted them manually, then ran terraform destroy again. Now it complains about one of the buckets that I removed manually: Failed getting S3 bucket: NotFound: Not Found; the other one seems fine.
Does anyone know how to resolve this please? Thanks.
If you removed the resource outside of Terraform (in this situation, a bucket deleted manually through the console), then you need to update the Terraform state accordingly. You can do this with the terraform state subcommand. Given your listed example of a resource named module.xxx.aws_s3_bucket.xxx_bucket, it would look like:
terraform state rm module.xxx.aws_s3_bucket.xxx_bucket
You can find more info in the documentation.
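If you are unsure of the exact resource address, you can list what Terraform is tracking first; a quick sketch (the grep pattern is just an example):
terraform state list | grep aws_s3_bucket
terraform state rm module.xxx.aws_s3_bucket.xxx_bucket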

terraform import fargate cluster

I have an existing, manually created Fargate cluster named "test-cluster" in us-west-1.
In my Terraform configuration file I created:
resource "aws_ecs_cluster" "mycluster" {
}
I ran the terraform command to import the cluster:
terraform import aws_ecs_cluster.mycluster test-cluster
I received this error message:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_ecs_cluster.cluster, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
I've also run aws configure and set the correct region.
Based on the comments, the issue was caused by using the wrong account in Terraform and/or the AWS console.
The solution was to use the correct account.
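A quick way to confirm which account and region Terraform will actually use is to check the CLI identity and pin the provider explicitly; a sketch, where the profile and region are placeholder values:
aws sts get-caller-identity
provider "aws" {
  region  = "us-west-1"
  profile = "correct-account"   # hypothetical named profile
}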

Terraform + Route53 - manage existing record

I have a production environment that is configured to have a domain name that points to a load balancer. This is already working, and it was configured using Route53.
I am using Terraform to deploy the infrastructure, including the Route53 record.
The Route53 record was set manually.
I would like Terraform to manage the Route53 record in subsequent deployments. However, when I run an update to the infrastructure that includes the Route53 record, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* module.asg.aws_route53_record.www: 1 error(s) occurred:
* aws_route53_record.www: [ERR]: Error building changeset:
InvalidChangeBatch: [Tried to create a resource record set
[name='foo.com.', type='A'] but it already exists]
Well, at first, this error makes sense, because the resource already exists. But, given this, how can I overcome this issue without causing downtime?
I've tried to manually edit the state file to include the route53 record, but that failed with the same error...
I'm happy to provide more information if necessary. Any suggestions that you might have are welcome. Thank you.
You can use terraform import to import the existing Route53 resource into your current terraform infrastructure. Here are the steps:
Init Terraform with your desired workspace via terraform init.
Define your aws_route53_record exactly the same as the existing resource that you have:
resource "aws_route53_record" "www" {
// your code here
}
Import the desired resource
terraform import aws_route53_record.www ZONEID_RECORDNAME_TYPE_SET-IDENTIFIER
For example:
terraform import aws_route53_record.www Z4KAPRWWNC7JR_dev.example.com_CNAME
After a successful import, the state of the existing resource is saved.
Run terraform plan to check the resource.
You can now update your existing resource.
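For a record that points at a load balancer, as described in the question, the definition is typically an alias record; a sketch with hypothetical zone and load balancer references:
resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id   # hypothetical zone resource
  name    = "foo.com"
  type    = "A"

  alias {
    name                   = aws_lb.main.dns_name   # hypothetical load balancer
    zone_id                = aws_lb.main.zone_id
    evaluate_target_health = true
  }
}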
You have to import the record into your Terraform state with the terraform import command. You should not edit the state manually!
See the resource Docs for additional information on how to import the record.
Keeping this here for new visitors.
Later versions of the AWS provider (~3.10) offer an argument allow_overwrite, which defaults to false.
With it, there is no need to edit the state file (not recommended) or run terraform import.
allow_overwrite - (Optional) Allow creation of this record in Terraform to overwrite an existing record, if any. This does not affect the ability to update the record in Terraform and does not prevent other resources within Terraform or manual Route 53 changes outside Terraform from overwriting this record. false by default. This configuration is not recommended for most environments.
from: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record#allow_overwrite
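A minimal sketch of how that looks in a resource block, reusing the placeholder zone ID and record name from the import example above (the CNAME target is also a placeholder):
resource "aws_route53_record" "www" {
  allow_overwrite = true                # take over the manually created record
  zone_id         = "Z4KAPRWWNC7JR"     # placeholder hosted zone ID
  name            = "dev.example.com"
  type            = "CNAME"
  ttl             = 300
  records         = ["my-lb-123456.us-west-1.elb.amazonaws.com"]   # placeholder target
}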

Exporting Google Cloud configuration

Is there a way to export the Google Cloud configuration for an object, such as a load balancer, in the same form one would use to set it up via the API?
I can quickly configure what I need in the console, but I am spending tons of time trying to replicate that with Terraform. It would be great if I could generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something already created outside of Terraform and want Terraform to manage it, or want to work out how to best configure it with Terraform, you can use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource, you could run terraform import google_compute_forwarding_rule.default terraform-test to import it into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
If you then run the plan again, Terraform will tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set a description on the load balancer, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
  ~ google_compute_forwarding_rule.default
      description: "This is the description I added via the console" => ""

Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.