Trigger random_id resource recreation on rds instance destroy and recreate - amazon-web-services

Folks, I'm trying to find a way, with the Terraform random_id resource, to generate a new random value whenever the RDS instance is destroyed and recreated due to a change, say when the username on the RDS instance has changed.
I want to attach this random value to the final_snapshot_identifier of the aws_db_instance resource so that the final snapshot gets a unique identifier every time it is created as part of the RDS instance being destroyed.
Current code:
resource "random_id" "snap_id" {
byte_length = 8
}
locals {
inst_id = "test-rds-inst"
inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}"
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = local.inst_snap_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
output "snap_id" {
value = aws_db_instance.rds.final_snapshot_identifier
}
Output after terraform apply:
snap_id = "test-rds-inst-snap-5553"
Use case I'm trying out:
#1:
Modify a value in the RDS instance to simulate a destroy & recreate:
Modify username to "foo-tmp"
terraform apply -auto-approve
Output:
snap_id = "test-rds-inst-snap-5553"
I was expecting the random_id to kick in and output a unique id, but it didn't.
Observation:
rds instance in deleting state
snapshot "test-rds-inst-snap-5553" in creating state
rds instance recreated and in available state
snapshot "test-rds-inst-snap-5553" in available state
#2:
Modify a value in the RDS instance again to simulate a destroy & recreate:
Modify username to "foo-new"
terraform apply -auto-approve
I kind of expected the error below, because the snap id didn't get a new value in the prior attempt, but tried anyway..
Observation:
Error: error deleting DB Instance (test-rds-inst): DBSnapshotAlreadyExists: Cannot create the snapshot because a snapshot with the identifier test-rds-inst-snap-5553 already exists.
I'm aware of the keepers{} map for the random_id resource, but I'm not sure which attribute of the RDS instance I need to put in the map so that the random_id resource gets recreated and provides a new unique value for the snap_id suffix.
I also suspect that referencing any attribute of the RDS instance in the random_id keepers might cause a circular dependency issue. I may be wrong, but I haven't tried it.
Any suggestions will be helpful. Thanks.

The easiest way to do this would be to use taint on the random_id resource, as per the documentation [1]:
To force a random result to be replaced, the taint command can be used to produce a new result on the next run.
Alternatively, looking at the example from the documentation, you could do something like:
resource "random_id" "snap_id" {
byte_length = 8
keepers {
snapshot_id = var.snapshot_id
}
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = random_id.snap_id.keepers.snapshot_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
This means that until the value of the variable snapshot_id changes, random_id will keep generating the same result. I'm not sure whether that works with locals, but you could try replacing var.snapshot_id with local.inst_snap_id. If that works, you could then name the snapshot using built-in functions like formatdate [2] and timestamp [3] to create a snapshot id tied to the time you were running apply, something like:
locals {
  inst_id      = "test-rds-inst"
  snap_time    = formatdate("YYYYMMDD", timestamp())
  inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}-${local.snap_time}"
}
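Another option, building on the keepers documented in [1], is to key the random_id off the same input that forces the RDS replacement in the first place. A minimal sketch, assuming the username comes from an input variable such as var.db_username (that variable is not in the original config):

variable "db_username" {
  type    = string
  default = "foo"
}

resource "random_id" "snap_id" {
  byte_length = 8

  # When the username changes, this random_id is replaced together with the
  # RDS instance, so each generation of the instance carries a fresh
  # final_snapshot_identifier suffix in state.
  keepers = {
    username = var.db_username
  }
}

resource "aws_db_instance" "rds" {
  .....
  username                  = var.db_username
  final_snapshot_identifier = local.inst_snap_id
  .....
}

Because the keeper references the input variable rather than an attribute of aws_db_instance, this should avoid the circular-dependency concern mentioned in the question.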
[1] https://registry.terraform.io/providers/hashicorp/random/latest/docs#resource-keepers
[2] https://www.terraform.io/language/functions/formatdate
[3] https://www.terraform.io/language/functions/timestamp

Related

How can I configure Terraform to update a GCP compute engine instance template without destroying and re-creating?

I have a service deployed on GCP compute engine. It consists of a compute engine instance template, instance group, instance group manager, and load balancer + associated forwarding rules etc.
We're forced into using compute engine rather than Cloud Run or some other serverless offering due to the need for docker-in-docker for the service in question.
The deployment is managed by terraform. I have a config that looks something like this:
data "google_compute_image" "debian_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance_template" "my_service_template" {
name = "my_service"
machine_type = "n1-standard-1"
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
}
...
metadata_startup_script = data.local_file.startup_script.content
metadata = {
MY_ENV_VAR = var.whatever
}
}
resource "google_compute_region_instance_group_manager" "my_service_mig" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
...
}
resource "google_compute_region_backend_service" "my_service_backend" {
...
backend {
group = google_compute_region_instance_group_manager.my_service_mig.instance_group
}
}
resource "google_compute_forwarding_rule" "my_service_frontend" {
depends_on = [
google_compute_region_instance_group_manager.my_service_mig,
]
name = "my_service_ilb"
backend_service = google_compute_region_backend_service.my_service_backend.id
...
}
I'm running into issues where Terraform is unable to perform any kind of update to this service without running into conflicts. It seems that instance templates are immutable in GCP, and doing anything like updating the startup script, adding an env var, or similar forces it to be deleted and re-created.
Terraform prints info like this in that situation:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
# module.connectors_compute_engine.google_compute_instance_template.airbyte_translation_instance1 must be replaced
-/+ resource "google_compute_instance_template" "my_service_template" {
~ id = "projects/project/..." -> (known after apply)
~ metadata = { # forces replacement
+ "TEST" = "test"
# (1 unchanged element hidden)
}
The only solution I've found for getting out of this situation is to delete the whole service and all associated entities, from the load balancer down to the instance template, and re-create them.
Is there some way to avoid this, so that I'm able to change the instance template without having to manually tear down and re-apply the Terraform config in two passes? At this point I'm even fine with some downtime for the service in question rather than a full rolling update, since that's what's happening now anyway.
I ran into this issue as well.
However, according to:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#using-with-instance-group-manager
Instance Templates cannot be updated after creation with the Google Cloud Platform API. In order to update an Instance Template, Terraform will destroy the existing resource and create a replacement. In order to effectively use an Instance Template resource with an Instance Group Manager resource, it's recommended to specify create_before_destroy in a lifecycle block. Either omit the Instance Template name attribute, or specify a partial name with name_prefix.
I would also test and plan with this lifecycle meta-argument:
+ lifecycle {
+   prevent_destroy = true
+ }
}
Or more realistically in your specific case, something like:
resource "google_compute_instance_template" "my_service_template" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
+ lifecycle {
+ create_before_destroy = true
+ }
}
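Following the documentation's note about the name attribute, a fuller sketch might pair name_prefix with create_before_destroy so each replacement template gets a freshly generated name (the prefix value below is illustrative, not from the original config):

resource "google_compute_instance_template" "my_service_template" {
  # Omit "name" and let the provider generate one from this prefix, so the
  # replacement template can be created while the old one still exists.
  name_prefix  = "my-service-"
  machine_type = "n1-standard-1"

  disk {
    source_image = data.google_compute_image.debian_image.self_link
    auto_delete  = true
    boot         = true
  }

  lifecycle {
    create_before_destroy = true
  }
}

With create_before_destroy, the new template is created first, the instance group manager is updated to reference it, and only then is the old template destroyed.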
So run terraform plan with either create_before_destroy or prevent_destroy = true on google_compute_instance_template before terraform apply to see the results.
Ultimately, you can remove google_compute_instance_template.my_service_template from the state file and import it back.
Some suggested workarounds in this thread:
terraform lifecycle prevent destroy

How to recreate aws_rds_cluster in Terraform

I am trying to create an encrypted version of my currently existing unencrypted aws_rds_cluster by updating my resource. I added:
kms_key_id = "mykmskey"
storage_encrypted = true
This is how my resource should look:
resource "aws_rds_cluster" "my_rds_cluster" {
cluster_identifier = "${var.service_name}-rds-cluster"
database_name = var.db_name
master_username = var.db_username
master_password = random_password.db_password.result
engine = var.db_engine
engine_version = var.db_engine_version
kms_key_id = "mykmskey"
storage_encrypted = true
db_subnet_group_name = aws_db_subnet_group.fleet_service_db_subnet_group.name
vpc_security_group_ids = [aws_security_group.fleet_service_service_db_security_group.id]
skip_final_snapshot = true
backup_retention_period = var.environment != "prod" ? null : 7
# snapshot_identifier = "my-rds-instance-snapshot"
tags = { Name = "${var.service_name}-rds-cluster" }
}
The problem is that the original resource had deletion_protection = true defined, which I also removed. But even though I removed it, the original cluster cannot be deleted by any means so that the new one can be created, neither through changes in Terraform nor manually in the AWS console; it just throws an error like:
error creating RDS cluster: DBClusterAlreadyExistsFault: DB Cluster already exists
Any ideas what to do in such cases?
To do that purely through Terraform, you would have to:
1. Remove deletion protection from the original Terraform resource.
2. Run terraform apply, which will remove deletion protection from the actual resource in AWS.
3. Make the modifications to the Terraform resource that will result in a delete or replace of the current resource.
4. Run terraform apply again, during which Terraform will now delete and/or replace the resource.
The key thing here is that you can't remove deletion protection at the same time you are actually deleting a resource, because Terraform isn't going to update an existing resource to modify an attribute before attempting to delete it.
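As an illustration of step 1, the first apply would change nothing except deletion protection (a sketch based on the cluster from the question; every other argument stays as it was):

resource "aws_rds_cluster" "my_rds_cluster" {
  cluster_identifier  = "${var.service_name}-rds-cluster"
  # Step 1: only flip this flag and apply, so AWS lifts the protection on the
  # existing cluster before any replacement is attempted.
  deletion_protection = false
  # ... all other arguments unchanged ...
}

Only once that apply has succeeded would you add kms_key_id and storage_encrypted = true (which force replacement) and apply a second time.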

Terraform wants to replace Google compute engine if its start/stop scheduler is modified

First of all, I am surprised that I have found very few resources on Google that mention this issue with Terraform.
This is an essential feature for optimizing the cost of cloud instances, though, so I'm probably missing a few things. Thanks for your tips and ideas!
I want to create an instance and manage its start and stop daily, programmatically.
The resource "google_compute_resource_policy" seems to meet my use case. However, when I change the stop or start time, Terraform plans to destroy and recreate the instance... which I absolutely don't want!
The resource "google_compute_resource_policy" is attached to the instance via the argument resource_policies where it is specified: "Modifying this list will cause the instance to recreate."
I don't understand why Terraform handles this simple update so badly. It is true that the schedule itself cannot be updated in place, but it is perfectly possible to detach it manually from the instance, destroy it, recreate it with the new stop/start schedule, and then attach it to the instance again.
Is there a workaround without going through a null resource to run a gcloud script to do these steps?
I tried adding an "ignore_changes" lifecycle on the "resource_policies" argument of my instance. Terraform no longer wants to destroy my instance, but it gives me the following error:
Error when reading or editing ResourcePolicy: googleapi: Error 400: The resource_policy resource 'projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule' is already being used by 'projects/my-project-id/zones/europe-west1-b/instances/my-instance', resourceInUseByAnotherResource"
Here is my Terraform code:
resource "google_compute_resource_policy" "instance_schedule" {
name = "my-instance-schedule"
region = var.region
description = "Start and stop instance"
instance_schedule_policy {
vm_start_schedule {
schedule = var.vm_start_schedule
}
vm_stop_schedule {
schedule = var.vm_stop_schedule
}
time_zone = "Europe/Paris"
}
}
resource "google_compute_instance" "my-instance" {
// ******** This is my attempted workaround ********
lifecycle {
ignore_changes = [resource_policies]
}
name = "my-instance"
machine_type = var.machine_type
zone = "${var.region}-b"
allow_stopping_for_update = true
resource_policies = [
google_compute_resource_policy.instance_schedule.id
]
boot_disk {
device_name = local.ref_name
initialize_params {
image = var.boot_disk_image
type = var.disk_type
size = var.disk_size
}
}
network_interface {
network = data.google_compute_network.default.name
access_config {
nat_ip = google_compute_address.static.address
}
}
}
If it's useful, here is what terraform apply returns:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_compute_resource_policy.instance_schedule must be replaced
-/+ resource "google_compute_resource_policy" "instance_schedule" {
~ id = "projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule" -> (known after apply)
name = "my-instance-schedule"
~ project = "my-project-id" -> (known after apply)
~ region = "https://www.googleapis.com/compute/v1/projects/my-project-id/regions/europe-west1" -> "europe-west1"
~ self_link = "https://www.googleapis.com/compute/v1/projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule" -> (known after apply)
# (1 unchanged attribute hidden)
~ instance_schedule_policy {
# (1 unchanged attribute hidden)
~ vm_start_schedule {
~ schedule = "0 9 * * *" -> "0 8 * * *" # forces replacement
}
# (1 unchanged block hidden)
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
Do you want to perform these actions in workspace "prd"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_resource_policy.instance_schedule: Destroying... [id=projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule]
Error: Error when reading or editing ResourcePolicy: googleapi: Error 400: The resource_policy resource 'projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule' is already being used by 'projects/my-project-id/zones/europe-west1-b/instances/my-instance', resourceInUseByAnotherResource
NB: I am working with Terraform 0.14.7 and I am using google provider version 3.76.0
An instance inside GCP can be powered off without destroying it, using the google_compute_instance resource with the argument desired_status. Keep in mind that if you are creating the instance for the first time, this argument needs to be set to "RUNNING". It can be used as follows:
resource "google_compute_instance" "default" {
name = "test"
machine_type = "f1-micro"
zone = "us-west1-a"
desired_status = "RUNNING"
}
You can also modify your main.tf if you need to stop the VM first and then start it, by creating a dependency in Terraform with depends_on.
As you can see in the following example, the service account is created first, and the key is not created until that first resource is done.
resource "google_service_account" "service_account" {
account_id = "terraform-test"
display_name = "Service Account"
}
resource "google_service_account_key" "mykey" {
service_account_id = google_service_account.service_account.id
public_key_type = "TYPE_X509_PEM_FILE"
depends_on = [google_service_account.service_account]
}
If the first component already exists, Terraform only deploys the dependent one.
I faced the same problem with a snapshot policy.
I controlled resource policy creation using a boolean flag input variable together with count. Initially I created the policy resource with the flag set to 'true'. When I want to change the schedule time, I set the flag to 'false' and apply the plan, which detaches and removes the policy.
I then set the flag back to 'true' and apply the plan with the new time.
This worked for me for the snapshot policy; hope it solves yours too.
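A minimal sketch of that flag pattern, assuming an input variable named enable_schedule_policy (the variable name and default are illustrative, not from the original answer):

variable "enable_schedule_policy" {
  type    = bool
  default = true
}

resource "google_compute_resource_policy" "instance_schedule" {
  # With the flag set to false, count is 0: the policy is detached from the
  # instance and destroyed. Setting it back to true recreates it with the
  # new schedule values.
  count  = var.enable_schedule_policy ? 1 : 0
  name   = "my-instance-schedule"
  region = var.region

  instance_schedule_policy {
    vm_start_schedule {
      schedule = var.vm_start_schedule
    }
    vm_stop_schedule {
      schedule = var.vm_stop_schedule
    }
    time_zone = "Europe/Paris"
  }
}

resource "google_compute_instance" "my-instance" {
  # ...
  resource_policies = var.enable_schedule_policy ? [google_compute_resource_policy.instance_schedule[0].id] : []
}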
I solved the "resourceInUseByAnotherResource" error by adding the following lifecycle to the google_compute_resource_policy resource:
lifecycle {
  create_before_destroy = true
}
Also, this requires the policy to have a unique name with each change; otherwise the new resource can't be created, because a resource with the same name already exists. So I appended a random ID to the end of the schedule name:
resource "random_pet" "schedule" {
keepers = {
start_schedule = "${var.vm_start_schedule}"
stop_schedule = "${var.vm_stop_schedule}"
}
}
...
resource "google_compute_resource_policy" "schedule" {
name = "schedule-${random_pet.schedule.id}"
...
lifecycle {
create_before_destroy = true
}
}

Preventing destroy of resources when refactoring Terraform to use indices

When I was just starting to use Terraform, I more or less naively declared resources individually, like this:
resource "aws_cloudwatch_log_group" "image1_log" {
name = "${var.image1}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_group" "image2_log" {
name = "${var.image2}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "image1_stream" {
name = "${var.image1}-log-stream"
log_group_name = aws_cloudwatch_log_group.image1_log.name
}
resource "aws_cloudwatch_log_stream" "image2_stream" {
name = "${var.image2}-log-stream"
log_group_name = aws_cloudwatch_log_group.image2_log.name
}
Then, 10-20 different log groups later, I realized this wasn't going to work well as infrastructure grew. I decided to define a variable list:
variable "image_names" {
type = list(string)
default = [
"image1",
"image2"
]
}
Then I replaced the resources using indices:
resource "aws_cloudwatch_log_group" "service-log-groups" {
name = "${element(var.image_names, count.index)}-log-group"
count = length(var.image_names)
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "service-log-streams" {
name = "${element(var.image_names, count.index)}-log-stream"
log_group_name = aws_cloudwatch_log_group.service-log-groups[count.index].name
count = length(var.image_names)
}
The problem here is that when I run terraform apply, I get 4 resources to add, 4 resources to destroy. I tested this with an old log group, and saw that all my logs were wiped (obviously, since the log was destroyed).
The names and other attributes of the log groups/streams are identical- I'm simply refactoring the infrastructure code to be more maintainable. How can I maintain my existing log groups without deleting them yet still refactor my code to use lists?
You'll need to move the existing resources within the Terraform state.
Try running terraform show to get the addresses under which the resources are stored; these will be something like [module.xyz.]aws_cloudwatch_log_group.image1_log ...
You can move it with terraform state mv [module.xyz.]aws_cloudwatch_log_group.image1_log '[module.xyz.]aws_cloudwatch_log_group.service-log-groups[0]'.
You can choose which index to assign to each resource by changing [0] accordingly.
Delete the old resource definition for each moved resource, as Terraform would otherwise try to create a new group/stream.
Try it with the first move and check with terraform plan whether the resource was moved correctly...
Also check whether you need to choose a specific index for the image_names list just to be sure, but I think that won't be necessary.
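On Terraform 1.1 and later, the same refactor can alternatively be recorded declaratively with moved blocks in the configuration, instead of running terraform state mv by hand (a sketch assuming that Terraform version):

# One moved block per renamed resource; repeat for image2 with index [1].
moved {
  from = aws_cloudwatch_log_group.image1_log
  to   = aws_cloudwatch_log_group.service-log-groups[0]
}

moved {
  from = aws_cloudwatch_log_stream.image1_stream
  to   = aws_cloudwatch_log_stream.service-log-streams[0]
}

On the next terraform plan, Terraform treats these as renames rather than a destroy/create pair.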

Creating RDS Instances from Snapshot Using Terraform

Working on a Terraform project in which I am creating an RDS cluster by grabbing and using the most recent production db snapshot:
# Get latest snapshot from production DB
data "aws_db_snapshot" "db_snapshot" {
  most_recent            = true
  db_instance_identifier = "${var.db_instance_to_clone}"
}

# Create RDS instance from snapshot
resource "aws_db_instance" "primary" {
  identifier                = "${var.app_name}-primary"
  snapshot_identifier       = "${data.aws_db_snapshot.db_snapshot.id}"
  instance_class            = "${var.instance_class}"
  vpc_security_group_ids    = ["${var.security_group_id}"]
  skip_final_snapshot       = true
  final_snapshot_identifier = "snapshot"
  parameter_group_name      = "${var.parameter_group_name}"
  publicly_accessible       = true

  timeouts {
    create = "2h"
  }
}
The issue with this approach is that subsequent runs of the Terraform code (once another snapshot has been taken) want to re-create the primary RDS instance (and subsequently the read replicas) with the latest snapshot of the DB. I was thinking of something along the lines of a boolean count parameter that indicates the first run, but setting count = 0 on the snapshot data source causes issues with the snapshot_identifier parameter of the db resource. Likewise, setting count = 0 on the db resource would mean destroying the db.
Use case for this is to be able to make changes to other aspects of the production infrastructure that this terraform plan manages without having to re-create the entire RDS cluster, which is a very time consuming resource to destroy/create.
Try placing an ignore_changes lifecycle block within your aws_db_instance definition:
lifecycle {
  ignore_changes = [
    snapshot_identifier,
  ]
}
This will cause Terraform to only look for changes to the database's snapshot_identifier upon initial creation.
If the database already exists, Terraform will ignore any changes to the existing database's snapshot_identifier field -- even if a new snapshot has been created since then.
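Placed in context, a sketch based on the aws_db_instance from the question would look like this (only the lifecycle block is new; the other arguments are unchanged):

resource "aws_db_instance" "primary" {
  identifier          = "${var.app_name}-primary"
  snapshot_identifier = "${data.aws_db_snapshot.db_snapshot.id}"
  # ... remaining arguments as in the question ...

  lifecycle {
    # Only consult snapshot_identifier at creation time; ignore later changes
    # so a newer snapshot doesn't force the instance to be replaced.
    ignore_changes = [
      snapshot_identifier,
    ]
  }
}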