I am trying to update an already existing launch template on AWS using Terraform. Below is my Terraform config:
resource "aws_launch_template" "update" {
name = data.aws_launch_template.config.name
image_id = data.aws_ami.ubuntu.id
instance_type = "c5.large"
// arn = data.aws_launch_template.config.arn
}
When I pass the name, it fails with a 400 error:
Error: InvalidLaunchTemplateName.AlreadyExistsException: Launch template name already in use.
I want to keep the same launch template and just add an updated version. I couldn't find any documentation on the official Terraform website about modifying an existing template. Or am I missing something?
OS - macOS Catalina
Terraform version - v0.12.21
One thing to note about Terraform in general is that it wants to own the entire lifecycle of any resource it manages.
In your example, a launch template with that name already exists in AWS, so Terraform says, essentially, "I don't own this resource, so I shouldn't change it."
This is actually a pretty nice benefit, because it means that Terraform won't (or at least shouldn't) overwrite or delete resources that it doesn't know about.
Now, since you are referencing an existing launch template, I would recommend bringing it under Terraform's ownership (assuming you're allowed to do so). To do this, I would recommend:
1. Hard-coding the launch template's name in the resource itself, rather than referencing it via data (see the sketch below), and
2. Importing the resource by running a command like so:
terraform import aws_launch_template.update lt-12345678
Where you would replace lt-12345678 with your actual launch template ID. This brings the resource under Terraform's ownership and allows updates via Terraform code.
Just be careful that you're not stepping on someone else's toes if this template was created by somebody else.
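For step 1, a minimal sketch of what the resource might look like (the hard-coded name is a placeholder; use your template's real name):
resource "aws_launch_template" "update" {
  name          = "my-existing-launch-template" # placeholder: the template's actual name
  image_id      = data.aws_ami.ubuntu.id
  instance_type = "c5.large"
}
Once imported, subsequent terraform apply runs should publish new versions of this template rather than trying to create a brand new one.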
Related
Well, I have some resources in AWS that were created via a Terraform module. Now I have to switch to an almost identical module that differs only in small things, like the names of some resources, and I need to make the switch without any replacement. At the moment I only have problems with the names of 4 resources. Here is an example:
KMS-ALIAS: BEFORE: kms-alias-s3bucket, CHANGES IN MODULE: kms-alias-s3bucket-dev. How do I avoid replacement without changing the resource names? I have heard about terraform state mv but don't know how to use it properly.
Changing the Terraform state to add the -dev suffix to the resource names will make Terraform diverge from your cloud environment; any update to those resources afterwards will force a replacement, unless you never touch those resources again.
If your cloud environment has a bucket named xyz, you want your state to have the bucket named xyz as well. So changing those names depends on what those 4 resources are; a bucket name change forces replacement, for instance. If you really want this environment suffix, you can create another bucket with the desired name <bucket-name>-dev, move everything from the old bucket to the new one, and then import the new one into your state with terraform import; after that Terraform will not force a replacement anymore.
terraform import aws_s3_bucket.bucket <new-bucket-name>-dev
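Moving the objects from the old bucket to the new one can be done outside Terraform, for example with the AWS CLI (a hedged sketch; the bucket names are placeholders):
aws s3 sync s3://xyz s3://xyz-dev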
Additional Info
Modifying your state directly is usually for structural changes, and the resource's local name could be among those potential changes.
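For example, a hedged sketch of moving a resource to its new module address with terraform state mv (the addresses below are hypothetical; list your real ones first with terraform state list):
terraform state list
terraform state mv 'module.old_module.aws_kms_alias.this' 'module.new_module.aws_kms_alias.this'
This only rewrites Terraform's bookkeeping; it does not touch the actual AWS resources.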
resource "aws_s3_bucket" "bucket" { #bucket = resource local name
bucket = "my-tf-test-bucket" # my-tf-test-bucket = bucket name itself, it is unique and could not be changed without creating another bucket. AWS api does not allow that. That's why always depends on which resource.
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
In summary: if Terraform tries to replace a resource when you change some argument (such as the bucket name), that replacement really is required to apply the change in your remote system (cloud environment). If the AWS API / AWS Console allows you to change it without recreating (even when Terraform says you need to, which can happen sometimes), you can make the change there and then import the resource into your state instead of editing the state.
I have run a terraform script to create some resources, including a VPC with private subnets, an RDS instance, and Kinesis/Firehose. This is working fine.
When I went to re-run terraform and add some new resources (ElasticSearch in this case), Terraform started outputting a plan that included changes to AWS tags on many of my previously existing resources, which look like "map-migrated" = "d-server-01uw80xeqs2083". Here is a snippet from the plan:
  # module.rds.aws_db_instance.etl_metastore_rds_dbinstance will be updated in-place
  ~ resource "aws_db_instance" "rds_dbinstance" {
        id       = "MyRDSId"
        name     = "etldb"
      ~ tags     = {
          - "map-migrated" = "d-server-01uw80xeqs2083" -> null
            # (2 unchanged elements hidden)
        }
      ~ tags_all = {
          - "map-migrated" = "d-server-01uw80xeqs2083" -> null
            # (2 unchanged elements hidden)
        }
        # (48 unchanged attributes hidden)
    }
I don't know why these tags are being added. Neither Google nor the Terraform docs have been any help on this issue. Is this something I can safely ignore? I'm worried that somehow I have crossed versions of Terraform and it's doing a migration that I don't want. As far as I know I am using the same version of Terraform before and after (1.0.1).
Terraform proposes to remove those tags.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
  - destroy
The reason for this is that Terraform compares what you have running on AWS to what you have defined in your Terraform files (docs).
It will then take all the actions needed so that your AWS infrastructure matches your Terraform configuration (i.e. create, change, and destroy AWS resources).
Here Terraform detected that you have those "map-migrated" tags on various AWS resources, so it proposes to delete them, since they are not defined in your Terraform files.
Now why and how were those tags added - and can you safely remove/ignore them?
Why?
These tags are used for the AWS Migration Acceleration Program (MAP), that's why adding those tags is called "MAP tagging".
This is a cloud migration program, in which AWS offers to help companies get into the cloud by giving methods, tools, trainings and money (you can get AWS Credits).
Now a requirement for that program is tagging your migrated resources with a "map-migrated" tag - or no credits for you. (credits are based on the cost and usage of tagged resources)
How?
Maybe your team is using the AWS Application Migration Service, which offers a setting to automatically apply MAP tags to launched instances.
Or someone added the tags manually in the AWS account.
Can you safely remove/ignore them?
Technically you can, it won't break anything.
But your project manager might get really angry, since you will lose out on funding. And the person who added those tags in AWS won't be happy either, if you override them with Terraform.
Solution
So the solution will be to ask your manager for the "List of all MAP Included Services" and incorporate the MAP tags in the Terraform code for all appropriate resources:
tags = {
  ...
  "map-migrated" = "d-server-01uw80xeqs2083"
}
(Or - dirty solution - copy the tags from the terraform apply output into your code, until there are no such tag differences between AWS and your terraform code anymore.)
Note: this tag is always the same for one management (payer) account and all its member accounts. The unique value is called Server Id.
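Since the value is identical across the whole account, a hedged alternative (assuming AWS provider v3.38 or later; the region is a placeholder) is to set the tag once at the provider level with default_tags, so every resource managed by that provider picks it up automatically:
provider "aws" {
  region = "us-east-1" # placeholder region

  default_tags {
    tags = {
      "map-migrated" = "d-server-01uw80xeqs2083" # the Server Id from the plan output
    }
  }
}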
Is there a way to export the Google Cloud configuration for an object, such as a load balancer, in the same form one would use to set it up via the API?
I can quickly configure what I need in the console site, but I am spending tons of time trying to replicate that with Terraform. It would be great if I can generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something that was already created outside of Terraform and want Terraform to manage it, or want to work out how best to configure it with Terraform, you can use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
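A hedged tip at this point: terraform state show prints the attributes of the imported resource, which makes a handy starting point for writing the matching config:
terraform state show google_compute_forwarding_rule.default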
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
And run the plan again, Terraform will then tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set the description on the load balancer, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

~ google_compute_forwarding_rule.default
    description: "This is the description I added via the console" => ""

Plan: 5 to add, 5 to change, 3 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
After running out of space I had to resize my EBS volume. Now I want to make the size part of my Terraform configuration, so I added the following block to the aws_instance resource:
ebs_block_device {
  device_name = "/dev/sda1"
  volume_size = 32
  volume_type = "gp2"
}
Now after running terraform plan it wants to destroy the existing volume, which is terrible. I also tried to import the existing volume using terraform import, but it wanted me to use a different name for the resource, which is also not great.
So what is the correct procedure here?
The aws_instance resource docs mention that changes to any EBS block devices will cause the instance to be recreated.
To get around this you can use something other than Terraform to grow the EBS volumes using AWS' new elastic volumes feature. Terraform also cannot detect changes to any of the attached block devices created in the aws_instance resource:
NOTE: Currently, changes to *_block_device configuration of existing resources cannot be automatically detected by Terraform. After making updates to block device configuration, resource recreation can be manually triggered by using the taint command.
As such you shouldn't need to go back and change anything in your Terraform configuration unless you are wanting to rebuild the instance using Terraform at some point at which point the worry about losing the instance is obviously moot.
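To grow the volume out-of-band as suggested above, a hedged sketch using the AWS CLI (the volume ID is a placeholder):
# Grow the volume in place via elastic volumes; no detaching or instance recreation needed
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 32
Afterwards you still need to grow the partition and filesystem from inside the instance (for example with growpart and resize2fs on Linux).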
However, if for some reason you want to be able to make the change to your Terraform configuration and keep the instance from being destroyed then you would need to manipulate your state file.
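One hedged way to do that is Terraform's state pull/push workflow (keep a backup, and edit at your own risk):
terraform state pull > current.tfstate    # download the current state to a file
# ... manually add the ebs_block_device attributes to the instance entry in current.tfstate ...
terraform state push current.tfstate      # upload the edited state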
I want to share a terraform script that will be used across different projects. I know how to create and share modules, but this setup has a big annoyance: when I reference a module in a script and perform a terraform apply, if the module resource does not exist it will be created, but also if I perform a terraform destroy this resource will be destroyed.
If I have two projects depending on the same module, and in one of them I run terraform destroy, it may lead to an inconsistent state, since the module is still being used by the other project. The script will either fail because it cannot destroy the resource, or it will destroy the resource and break the other project.
In my scenario, I want to share network scripts between two projects, and I want the network resources to never be destroyed. I cannot create a project only for this resource, because I need to reference it somehow in my projects, and the only way to do that is via its ID, and I have no idea what that ID is going to be.
prevent_destroy is also not an option, since I do need to destroy the other resources, just not the shared one. That setting makes terraform destroy fail.
Is there any way to reference the resource, like by its name, or is there any other better approach to accomplish what I want?
If I understand you correctly, you have some resource R that is a "singleton". That is, only one instance of R can ever exist in your AWS account. For example, you can only ever have one aws_route53_zone with the name "foo.com". If you include R as a module in two different places, then either one may create it when you run terraform apply and either one may delete it when you run terraform destroy. You'd like to avoid that, but you still need some way to get an output attribute from R (e.g. the zone_id for an aws_route53_zone resource is generated by AWS, so you can't guess it).
If that's the case, then instead of using R as a module, you should:
1. Create R by itself in its own set of Terraform templates. Let's say those are under /terraform/R (a minimal sketch of these templates follows after the steps below).
2. Configure /terraform/R to use Remote State. For example, here is how you can configure those templates to store their remote state in an S3 bucket (you'll need to fill in the bucket name/region as indicated):
terraform remote config \
    -backend=s3 \
    -backend-config="bucket=(YOUR BUCKET NAME)" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=(YOUR BUCKET REGION)" \
    -backend-config="encrypt=true"
3. Define any output attributes you need from R as output variables. For example:
output "zone_id" {
value = "${aws_route_53.example.zone_id}"
}
When you run terraform apply in /terraform/R, it will store its Terraform state, including that output, in an S3 bucket.
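For completeness, a minimal sketch of what /terraform/R itself might contain alongside that output (the zone name is the hypothetical foo.com from earlier):
# /terraform/R/main.tf
resource "aws_route53_zone" "example" {
  name = "foo.com"
}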
Now, in all other Terraform templates that need that output attribute from R, you can pull it in from the S3 bucket using the terraform_remote_state data source. For example, let's say you had some template /terraform/foo that needed that zone_id parameter to create an aws_route53_record (you'll need to fill in the bucket name/region as indicated):
data "terraform_remote_state" "r" {
backend = "s3"
config {
bucket = "(YOUR BUCKET NAME)"
key = "terraform.tfstate"
region = "(YOUR BUCKET REGION)"
}
}
resource "aws_route53_record" "www" {
zone_id = "${data.terraform_remote_state.r.zone_id}"
name = "www.foo.com"
type = "A"
ttl = "300"
records = ["${aws_eip.lb.public_ip}"]
}
Note that terraform_remote_state is a read-only data source. That means when you run terraform apply or terraform destroy on any templates that use it, they will not have any effect on R.
For more info, check out How to manage terraform state and Terraform: Up & Running.