I have a user-data file attached to the main.tf. Whenever I make changes to the user-data file and then run terraform apply, the changes do not reflect on the server until I destroy and recreate the resources. Is this the default behaviour, or am I missing something? Thank you for any answers.
After making changes to the user-data file, I expected that terraform apply would create a new instance with the updated user-data content, but that is not happening.
As documented for the aws_instance resource in the Terraform AWS provider, you need to set the user_data_replace_on_change attribute to true. It is false by default:
user_data_replace_on_change - (Optional) When used in combination with user_data or user_data_base64 will trigger a destroy and recreate when set to true. Defaults to false if not set.
Also note that by default user_data is only applied at the time of instance creation, which is why Terraform destroys and recreates the instance when this flag is set.
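For example, a minimal sketch of an instance resource with the flag set (the resource name, AMI lookup, and user-data file path here are placeholders, not from the question):

```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id # placeholder AMI lookup
  instance_type = "t3.micro"

  # Read the user-data script from a file next to this module
  user_data = file("${path.module}/user-data.sh")

  # Recreate the instance whenever the user-data content changes
  user_data_replace_on_change = true
}
```

With this in place, editing user-data.sh and running terraform apply will plan a replacement of the instance rather than leaving it unchanged.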
I'm currently trying to destroy a workspace. I know that some buckets have a 'do not destroy' type setting applied to them, so when I ran terraform destroy for the first time, I got an 'Instance cannot be destroyed' error for two buckets:
Resource module.xxx.aws_s3_bucket.xxx_bucket has
lifecycle.prevent_destroy set, but the plan calls for this resource to be
destroyed. To avoid this error and continue with the plan, either disable
lifecycle.prevent_destroy or reduce the scope of the plan using the -target
flag.
So I navigated to the AWS console, deleted them manually, and tried running terraform destroy again; now it complains about one of the buckets I removed manually: Failed getting S3 bucket: NotFound: Not Found. The other one seems fine.
Does anyone know how to resolve this please? Thanks.
If you removed a resource through an action outside of Terraform (in this case, a bucket deleted manually through the console), then you need to update the Terraform state accordingly. You can do this with the terraform state subcommand. Given your example of a resource named module.xxx.aws_s3_bucket.xxx_bucket, it would look like:
terraform state rm module.xxx.aws_s3_bucket.xxx_bucket
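If you're not sure of the exact resource address to pass, terraform state list shows every address Terraform is currently tracking (the grep filter here is just illustrative):

```shell
# List all addresses in the state, filtered to S3 bucket resources
terraform state list | grep aws_s3_bucket
```

Run terraform state rm once per manually-deleted bucket, then terraform destroy should proceed without the NotFound error.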
You can find more info in the documentation.
I am trying to update my already existing launch template on AWS using Terraform.
Below is the config for Terraform.
resource "aws_launch_template" "update" {
  name          = data.aws_launch_template.config.name
  image_id      = data.aws_ami.ubuntu.id
  instance_type = "c5.large"
  // arn        = data.aws_launch_template.config.arn
}
On passing the name, it throws a 400 with the error below.
Error: InvalidLaunchTemplateName.AlreadyExistsException: Launch template name already in use.
I want to keep the same launch template and just create an updated version. I couldn't find any documentation on the official Terraform website for modifying templates. Or am I missing something?
OS - macOS Catalina
Terraform version - v0.12.21
One thing to note about terraform in general is that it wants to own the entire lifecycle of any resources it manages.
In your example, a launch template with that name already exists, so Terraform essentially says, "I don't own this resource, so I shouldn't change it."
This is actually a pretty nice benefit because it means that Terraform won't (or at least shouldn't) overwrite or delete resources that it doesn't know about.
Now, since you are referencing an existing launch template, I would recommend bringing it under Terraform's ownership (assuming you're allowed to do so). To do this, I would recommend:
Hard-coding the launch template's name in the resource itself, rather than referencing it via data, and
Importing the resource by running a command like so:
terraform import aws_launch_template.update lt-12345678
where you would replace lt-12345678 with your actual launch template ID. This brings the resource under Terraform's ownership and allows updates via Terraform code.
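As a sketch of the first step, the resource with the name hard-coded might look like this (the template name here is a placeholder for your real one):

```hcl
resource "aws_launch_template" "update" {
  name          = "my-existing-template" # hard-coded, not read via data
  image_id      = data.aws_ami.ubuntu.id
  instance_type = "c5.large"
}
```

After the import, subsequent terraform apply runs will create new versions of the existing template instead of failing with AlreadyExistsException.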
Just be careful that you're not stepping on the toes of someone else's resources, if you are in a context where this was created by someone else.
Is there a way to export Google Cloud configuration for an object, such as for the load balancer, in the same way as one would use to set it up via the API?
I can quickly configure what I need in the console site, but I am spending tons of time trying to replicate that with Terraform. It would be great if I can generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something already created outside of Terraform and want Terraform to manage it (or want to work out how best to configure it with Terraform), you can use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
  name = "terraform-test"
}
and run the plan again, Terraform will tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set the description on the forwarding rule, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
  ~ google_compute_forwarding_rule.default
      description: "This is the description I added via the console" => ""
Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
  name        = "terraform-test"
  description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
After running out of space I had to resize my EBS volume. Now I wanted to make the size part of my Terraform configuration, and added the following block to the aws_instance resource:
ebs_block_device {
  device_name = "/dev/sda1"
  volume_size = 32
  volume_type = "gp2"
}
Now, after running terraform plan, it wants to destroy the existing volume, which is terrible. I also tried to import the existing one using terraform import, but it wanted me to use a different name for the resource, which is also not great.
So what is the correct procedure here?
The aws_instance resource docs mention that changes to any EBS block devices will cause the instance to be recreated.
To get around this, you can grow the EBS volume outside of Terraform using AWS' elastic volumes feature. Terraform also cannot detect changes to any of the attached block devices created in the aws_instance resource:
NOTE: Currently, changes to *_block_device configuration of existing resources cannot be automatically detected by Terraform. After making updates to block device configuration, resource recreation can be manually triggered by using the taint command.
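As a sketch of the out-of-band resize, assuming a gp2 volume attached to the instance (the volume ID below is a placeholder), the AWS CLI can grow the volume in place:

```shell
# Grow the volume in place via elastic volumes; no detach or downtime required
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 32

# Once the volume reports the new size, extend the partition and filesystem
# on the instance itself, e.g. for ext4 on /dev/xvda1:
#   sudo growpart /dev/xvda 1 && sudo resize2fs /dev/xvda1
```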
As such, you shouldn't need to go back and change anything in your Terraform configuration unless you want to rebuild the instance with Terraform at some point, at which point the worry about losing the instance is moot anyway.
However, if for some reason you want to be able to make the change to your Terraform configuration and keep the instance from being destroyed then you would need to manipulate your state file.
I want to share a Terraform script that will be used across different projects. I know how to create and share modules, but this setup has a big annoyance: when I reference a module in a script and run terraform apply, the module's resource is created if it does not exist, but a terraform destroy will also destroy that resource.
If I have two projects depending on the same module, and in one of them I run terraform destroy, it may lead to an inconsistent state, since the module is being used by the other project. The script will either fail because it cannot destroy the resource, or it will destroy the resource and affect the other project.
In my scenario, I want to share network scripts between two projects, and I want the network resources to never be destroyed. I cannot create a project just for this resource, because I would need to reference it somehow in my projects, and the only way to do that is via its ID, which I have no way of knowing in advance.
prevent_destroy is also not an option, since I do need to destroy all the other resources except the shared one, and this setting makes terraform destroy fail.
Is there any way to reference the resource, like by its name, or is there any other better approach to accomplish what I want?
If I understand you correctly, you have some resource R that is a "singleton". That is, only one instance of R can ever exist in your AWS account. For example, you can only ever have one aws_route53_zone with the name "foo.com". If you include R as a module in two different places, then either one may create it when you run terraform apply and either one may delete it when you run terraform destroy. You'd like to avoid that, but you still need some way to get an output attribute from R (e.g. the zone_id for an aws_route53_zone resource is generated by AWS, so you can't guess it).
If that's the case, then instead of using R as a module, you should:
Create R by itself in its own set of Terraform templates. Let's say those are under /terraform/R.
Configure /terraform/R to use Remote State. For example, here is how you can configure those templates to store their remote state in an S3 bucket (you'll need to fill in the bucket name/region as indicated):
terraform remote config \
    -backend=s3 \
    -backend-config="bucket=(YOUR BUCKET NAME)" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=(YOUR BUCKET REGION)" \
    -backend-config="encrypt=true"
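(Note that the terraform remote config command was removed in Terraform 0.9; in later versions the equivalent is a backend block in the configuration itself, with the same placeholders to fill in:)

```hcl
terraform {
  backend "s3" {
    bucket  = "(YOUR BUCKET NAME)"
    key     = "terraform.tfstate"
    region  = "(YOUR BUCKET REGION)"
    encrypt = true
  }
}
```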
Define any output attributes you need from R as output variables. For example:
output "zone_id" {
  value = "${aws_route53_zone.example.zone_id}"
}
When you run terraform apply in /terraform/R, it will store its Terraform state, including that output, in an S3 bucket.
Now, in all other Terraform templates that need that output attribute from R, you can pull it in from the S3 bucket using the terraform_remote_state data source. For example, let's say you had some template /terraform/foo that needed that zone_id parameter to create an aws_route53_record (you'll need to fill in the bucket name/region as indicated):
data "terraform_remote_state" "r" {
  backend = "s3"
  config {
    bucket = "(YOUR BUCKET NAME)"
    key    = "terraform.tfstate"
    region = "(YOUR BUCKET REGION)"
  }
}
resource "aws_route53_record" "www" {
  zone_id = "${data.terraform_remote_state.r.zone_id}"
  name    = "www.foo.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.lb.public_ip}"]
}
Note that terraform_remote_state is a read-only data source. That means running terraform apply or terraform destroy on any templates that use it will not have any effect on R.
For more info, check out How to manage terraform state and Terraform: Up & Running.