I have a production environment that is configured to have a domain name that points to a load-balancer. This is already working, and it was configured using Route53.
I am using Terraform to deploy the infrastructure, but the Route53 record itself was set manually.
I would like Terraform to manage the Route53 record in subsequent deployments. However, when I update the infrastructure to include the Route53 record, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* module.asg.aws_route53_record.www: 1 error(s) occurred:
* aws_route53_record.www: [ERR]: Error building changeset:
InvalidChangeBatch: [Tried to create a resource record set
[name='foo.com.', type='A'] but it already exists]
At first glance, this error makes sense, because the resource already exists. But given that, how can I overcome this issue without causing downtime?
I've tried manually editing the state file to include the Route53 record, but that failed with the same error...
I'm happy to provide more information if necessary. Any suggestions that you might have are welcome. Thank you.
You can use terraform import to import the existing Route53 resource into your current terraform infrastructure. Here are the steps:
Initialize Terraform with your desired workspace via terraform init.
Define your aws_route53_record so that it exactly matches the existing record (a fuller sketch follows after these steps):
resource "aws_route53_record" "www" {
// your code here
}
Import the desired resource
terraform import aws_route53_record.www ZONEID_RECORDNAME_TYPE_SET-IDENTIFIER
For example:
terraform import aws_route53_record.www Z4KAPRWWNC7JR_dev.example.com_CNAME
After a successful import, the existing resource is saved into your state.
Run terraform plan to check the resource.
You can now manage and update the existing resource with Terraform.
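For reference, if the existing record is an alias A record pointing at a load balancer (as in the question), the resource definition might look like the sketch below. The hosted zone ID and the load balancer resource here are placeholders, not values from the question:

resource "aws_route53_record" "www" {
  zone_id = "Z1234567890ABC"   # placeholder hosted zone ID for foo.com
  name    = "foo.com"
  type    = "A"

  alias {
    name                   = "${aws_elb.web.dns_name}"   # hypothetical ELB resource
    zone_id                = "${aws_elb.web.zone_id}"
    evaluate_target_health = true
  }
}

The matching import command would then be terraform import aws_route53_record.www Z1234567890ABC_foo.com_A.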
You have to import the record into your Terraform state with the terraform import command. You should not edit the state manually!
See the resource Docs for additional information on how to import the record.
Keeping it here for new visitors.
Later versions of the AWS provider (~3.10) offer an allow_overwrite argument, which defaults to false.
There is no need to edit the state file (not recommended) or to run terraform import.
allow_overwrite - (Optional) Allow creation of this record in Terraform to overwrite an existing record, if any. This does not affect the ability to update the record in Terraform and does not prevent other resources within Terraform or manual Route 53 changes outside Terraform from overwriting this record. false by default. This configuration is not recommended for most environments.
from: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record#allow_overwrite
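As a minimal sketch of how it is used, assuming a simple (non-alias) A record and a zone managed elsewhere in the configuration; all values here are placeholders:

resource "aws_route53_record" "www" {
  zone_id         = aws_route53_zone.primary.zone_id  # hypothetical zone resource
  name            = "foo.com"
  type            = "A"
  ttl             = 300
  records         = ["203.0.113.10"]  # example IP
  allow_overwrite = true              # take over the manually created record on create
}

Note that the first apply overwrites the existing record's values with whatever is in your configuration, so make sure they match if you want to avoid a change in DNS behavior.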
I have an existing, manually created Fargate cluster named "test-cluster" in us-west-1.
In my Terraform configuration file I created:
resource "aws_ecs_cluster" "mycluster" {
}
I ran the terraform command to import the cluster:
terraform import aws_ecs_cluster.mycluster test-cluster
I received this error message:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_ecs_cluster.mycluster, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
I've also run aws configure, adding the correct region.
Based on the comments, the issue was caused by using the wrong account in Terraform and/or the AWS console.
The solution was to use the correct account.
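To confirm which account the Terraform AWS provider is actually using, one option (the output name here is just an example) is to add a caller-identity data source and compare the account ID with what you see in the AWS console:

data "aws_caller_identity" "current" {}

output "account_id" {
  value = "${data.aws_caller_identity.current.account_id}"
}

The same check is available from the command line with aws sts get-caller-identity.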
My Terraform state file is messed up. The resources already exist on AWS. When I run the terraform apply command, I get multiple "Already Exists" errors like the one below.
aws_autoscaling_group.mysql-asg: Error creating AutoScaling Group: AlreadyExists: AutoScalingGroup by this name already exists - A group with the name int-mysql-asg already exists
When I run terraform import, the error goes away, but I have hundreds of resources producing this error. What is the best way to sync the Terraform state and make terraform apply succeed?
You may want to look at Terraforming
It's a Ruby project that states "Export existing AWS resources to Terraform style (tf, tfstate)"
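In rough terms, based on the project's README (the resource subcommand here is an example; the README lists the supported types), usage looks like this:

# Print Terraform code for existing Auto Scaling Groups
terraforming asg > asg.tf

# Generate the matching state and merge it into an existing state file
terraforming asg --tfstate --merge=terraform.tfstate --overwrite

Run this once per resource type, and you end up with both the code and the state for your existing resources without hundreds of manual terraform import calls.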
Is there a way to export the Google Cloud configuration for an object, such as a load balancer, in the same format one would use to set it up via the API?
I can quickly configure what I need in the console site, but I am spending tons of time trying to replicate that with Terraform. It would be great if I can generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something already created outside of Terraform and want to have Terraform manage it, or want to work out how best to configure it with Terraform, you could use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
and run the plan again, Terraform will tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like setting the description on the forwarding rule, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
~ google_compute_forwarding_rule.default
description: "This is the description I added via the console" => ""
Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
I'm trying out Terraform to set up an S3 + CloudFront static site. Initially, I set up the site successfully, following the steps from https://alimac.io/static-websites-with-s3-and-hugo-part-1/
However, I afterwards changed the Terraform state backend from local to s3. Now, when I perform terraform apply, I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* aws_cloudfront_distribution.primary_domain: 1 error(s) occurred:
* aws_cloudfront_distribution.primary_domain: CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource.
status code: 409, request id: <removed>
* aws_cloudfront_distribution.secondary_domain: 1 error(s) occurred:
* aws_cloudfront_distribution.secondary_domain: CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource.
status code: 409, request id: <removed>
Any ideas about why this might be happening and what can I do to fix this issue?
Terraform uses the state file to keep track of resources it manages. If it does not have a particular resource (in this case probably your aws_cloudfront_distribution.primary_domain resource), it will create a new one and store the ID of that new resource in your state file.
It looks like you did a terraform apply with your local state file, changed the backend to s3 without porting the state, and then ran terraform apply again. This second S3-backed run had a blank state, so it tried to recreate your aws_cloudfront_distribution resources. The error indicates a conflict from using the same CNAME for two distributions, which is what happens when you run Terraform twice without keeping track of state in between.
You have a couple of options to fix this:
Go back to using your existing local state file, run terraform destroy to remove the resources it created, switch back to s3, then terraform apply to start anew. Be aware that this will actually delete resources.
Properly change your backend and reinitialize, then answer "yes" when asked about copying your local state to S3 (see the sketch after this list).
terraform import the resources you created with your local state file into your S3 backend. Do this with terraform import aws_cloudfront_distribution.primary_domain <EXISTING CLOUDFRONT DIST. ID>.
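If you take the second option, the backend change is a block like the following (the bucket, key, and region are placeholders). Once it is in place, terraform init detects the backend change and prompts you to copy the existing local state into S3:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"             # placeholder bucket name
    key    = "static-site/terraform.tfstate"  # placeholder state path
    region = "us-east-1"
  }
}

Run terraform init and answer "yes" when asked whether to copy the existing state to the new backend.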
I want to modify an existing VPC by removing the blackholed route tables and updating it with new route tables. The route tables I want to modify were created manually (not by Terraform). Is that possible in Terraform? Are there any sample templates I can refer to? Many thanks,
Deepak
If you have existing infrastructure in AWS and you want to manage it with Terraform, you need to use the Terraform import command.
First, write the Terraform code that matches the route tables you already have. For example:
resource "aws_route_table" "example" {
  vpc_id = "${aws_vpc.main.id}"  # or the literal VPC ID, if the VPC itself is not managed by Terraform
}
Next, look up the route table ID of the existing route table, and use the import command to have Terraform link the Terraform code above to that existing table:
terraform import aws_route_table.example rtb-12345678
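Once the route table is imported, the inline route blocks in your configuration become the source of truth: as soon as you declare any route block, Terraform manages the full set and will remove any route (including blackholed ones) that is not declared. As a sketch, with a placeholder CIDR and a hypothetical internet gateway resource:

resource "aws_route_table" "example" {
  vpc_id = "${aws_vpc.main.id}"

  # Routes not declared here (e.g. blackholed ones) will be
  # removed on the next terraform apply.
  route {
    cidr_block = "0.0.0.0/0"                        # placeholder destination
    gateway_id = "${aws_internet_gateway.main.id}"  # hypothetical IGW resource
  }
}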
You can also try out a tool like Terraforming which can generate the code and import the state automatically.