AWS Terraform tried to destroy and rebuild RDS cluster - amazon-web-services

I have an RDS cluster I built using Terraform, which currently has deletion protection enabled.
Whenever I update my Terraform configuration for something (for example, a security group change) and apply it to the environment, Terraform always tries to destroy and rebuild the RDS cluster.
With deletion protection enabled the rebuild is blocked, but the terraform apply then fails because it cannot destroy the cluster.
How can I keep the existing RDS cluster instead of rebuilding it every time I apply my configuration?
resource "aws_rds_cluster" "env-cluster" {
  cluster_identifier      = "mysql-env-cluster"
  engine                  = "aurora-mysql"
  engine_version          = "5.7.mysql_aurora.2.03.2"
  availability_zones      = ["${var.aws_az1}", "${var.aws_az2}"]
  db_subnet_group_name    = "${aws_db_subnet_group.env-rds-subg.name}"
  database_name           = "dbname"
  master_username         = "${var.db-user}"
  master_password         = "${var.db-pass}"
  backup_retention_period = 5
  preferred_backup_window = "22:00-23:00"
  deletion_protection     = true
  skip_final_snapshot     = true
}

resource "aws_rds_cluster_instance" "env-01" {
  identifier         = "${var.env-db-01}"
  cluster_identifier = "${aws_rds_cluster.env-cluster.id}"
  engine             = "aurora-mysql"
  engine_version     = "5.7.mysql_aurora.2.03.2"
  instance_class     = "db.t2.small"
  apply_immediately  = true
}

resource "aws_rds_cluster_instance" "env-02" {
  identifier         = "${var.env-db-02}"
  cluster_identifier = "${aws_rds_cluster.env-cluster.id}"
  engine             = "aurora-mysql"
  engine_version     = "5.7.mysql_aurora.2.03.2"
  instance_class     = "db.t2.small"
  apply_immediately  = true
}

resource "aws_rds_cluster_endpoint" "env-02-ep" {
  cluster_identifier          = "${aws_rds_cluster.env-cluster.id}"
  cluster_endpoint_identifier = "reader"
  custom_endpoint_type        = "READER"
  excluded_members            = ["${aws_rds_cluster_instance.env-01.id}"]
}

I had a similar experience when trying to set up an AWS Aurora cluster and instance.
Each time I ran terraform apply, it tried to recreate the Aurora cluster and instance.
Here's my Terraform script:
locals {
aws_region = "eu-west-1"
tag_environment = "Dev"
tag_terraform = {
"true" = "Managed by Terraform"
"false" = "Not Managed by Terraform"
}
tag_family = {
"aurora" = "Aurora"
}
tag_number = {
"1" = "1"
"2" = "2"
"3" = "3"
"4" = "4"
}
}
# RDS Cluster
module "rds_cluster_1" {
source = "../../../../modules/aws/rds-cluster-single"
rds_cluster_identifier = var.rds_cluster_identifier
rds_cluster_engine = var.rds_cluster_engine
rds_cluster_engine_mode = var.rds_cluster_engine_mode
rds_cluster_engine_version = var.rds_cluster_engine_version
rds_cluster_availability_zones = ["${local.aws_region}a"]
rds_cluster_database_name = var.rds_cluster_database_name
rds_cluster_port = var.rds_cluster_port
rds_cluster_master_username = var.rds_cluster_master_username
rds_cluster_master_password = module.password.password_result
rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
rds_cluster_apply_immediately = var.rds_cluster_apply_immediately
allow_major_version_upgrade = var.allow_major_version_upgrade
db_cluster_parameter_group_name = var.rds_cluster_parameter_group_name
rds_cluster_deletion_protection = var.rds_cluster_deletion_protection
enabled_cloudwatch_logs_exports = var.enabled_cloudwatch_logs_exports
skip_final_snapshot = var.skip_final_snapshot
# vpc_security_group_ids = var.vpc_security_group_ids
tag_environment = local.tag_environment
tag_terraform = local.tag_terraform.true
tag_number = local.tag_number.1
tag_family = local.tag_family.aurora
}
Here's how I solved it:
The issue was that each time I ran terraform apply, Terraform wanted to recreate the resources because the cluster had been placed in two additional availability zones outside of Terraform's control:
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
# module.rds_cluster_1.aws_rds_cluster.main has changed
~ resource "aws_rds_cluster" "main" {
~ availability_zones = [
+ "eu-west-1b",
+ "eu-west-1c",
# (1 unchanged element hidden)
]
~ cluster_members = [
+ "aurora-postgres-instance-0",
However, my Terraform script only specified one availability zone (rds_cluster_availability_zones = ["${local.aws_region}a"]). All I had to do was specify all three availability zones for my region (rds_cluster_availability_zones = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]):
locals {
aws_region = "eu-west-1"
tag_environment = "Dev"
tag_terraform = {
"true" = "Managed by Terraform"
"false" = "Not Managed by Terraform"
}
tag_family = {
"aurora" = "Aurora"
}
tag_number = {
"1" = "1"
"2" = "2"
"3" = "3"
"4" = "4"
}
}
# RDS Cluster
module "rds_cluster_1" {
source = "../../../../modules/aws/rds-cluster-single"
rds_cluster_identifier = var.rds_cluster_identifier
rds_cluster_engine = var.rds_cluster_engine
rds_cluster_engine_mode = var.rds_cluster_engine_mode
rds_cluster_engine_version = var.rds_cluster_engine_version
rds_cluster_availability_zones = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]
rds_cluster_database_name = var.rds_cluster_database_name
rds_cluster_port = var.rds_cluster_port
rds_cluster_master_username = var.rds_cluster_master_username
rds_cluster_master_password = module.password.password_result
rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
rds_cluster_apply_immediately = var.rds_cluster_apply_immediately
allow_major_version_upgrade = var.allow_major_version_upgrade
db_cluster_parameter_group_name = var.rds_cluster_parameter_group_name
rds_cluster_deletion_protection = var.rds_cluster_deletion_protection
enabled_cloudwatch_logs_exports = var.enabled_cloudwatch_logs_exports
skip_final_snapshot = var.skip_final_snapshot
# vpc_security_group_ids = var.vpc_security_group_ids
tag_environment = local.tag_environment
tag_terraform = local.tag_terraform.true
tag_number = local.tag_number.1
tag_family = local.tag_family.aurora
}
Resources: Terraform wants to recreate cluster on every apply #8

If you don't want to have your RDS cluster in three zones, there is a workaround here: https://github.com/hashicorp/terraform-provider-aws/issues/1111#issuecomment-373433010
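A minimal sketch of that workaround, assuming Terraform 0.12+ syntax and the cluster from the question above: tell Terraform to ignore drift on availability_zones so that Aurora spreading the cluster across extra AZs no longer triggers a replacement.

resource "aws_rds_cluster" "env-cluster" {
  cluster_identifier = "mysql-env-cluster"
  engine             = "aurora-mysql"
  # ... remaining arguments exactly as in the original configuration ...

  lifecycle {
    # Aurora reports the additional AZs it places the cluster in;
    # ignoring this attribute keeps Terraform from planning a rebuild.
    ignore_changes = [availability_zones]
  }
}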

Related

Unable to configure Parameter Group for Neptune Cluster

I am trying to create a Neptune DB with Terraform, but I am facing the following issue.
Please find the Terraform script I am using below.
## Create Neptune DB cluster and instance
resource "aws_neptune_cluster_parameter_group" "neptune1" {
family = "neptune1.2"
name = "neptune1"
description = "neptune cluster parameter group"
parameter {
name = "neptune_enable_audit_log"
value = 1
apply_method = "pending-reboot"
}
}
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
}
resource "aws_neptune_cluster_instance" "gh-instance" {
count = 1
cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
engine = "neptune"
instance_class = "db.r5.large"
apply_immediately = true
}
resource "aws_neptune_subnet_group" "gh-dbg" {
name = "gh-dbg"
subnet_ids = ["${aws_subnet.private.id}" , "${aws_subnet.public.id}"]
}
I think I am not attaching the parameter group to the Neptune DB, and I am not sure how to do that.
I have tried the following keys in the Terraform instance script:
db_parameter_group
parameter_group_name
But both throw the error 'This argument is not expected here'.
According to the official documentation, the argument you are looking for is "neptune_cluster_parameter_group_name":
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
neptune_cluster_parameter_group_name = "${aws_neptune_cluster_parameter_group.neptune1.name}"
}
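If you also need an instance-level parameter group (as opposed to the cluster-level one above), the aws_neptune_cluster_instance resource has its own argument for that; a hedged sketch, where the "neptune1-instance" parameter group name is hypothetical:

resource "aws_neptune_cluster_instance" "gh-instance" {
  count              = 1
  cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
  engine             = "neptune"
  instance_class     = "db.r5.large"
  apply_immediately  = true

  # Instance-level (DB) parameter group, distinct from the cluster parameter group.
  # "neptune1-instance" is a hypothetical aws_neptune_parameter_group name.
  neptune_parameter_group_name = "neptune1-instance"
}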

InvalidDBClusterStateFault: Source cluster is in a state which is not valid for physical replication when adding a new rds cluster in global cluster

I am using Terraform to set up an RDS Global Cluster in 2 regions: us-east-1 and us-east-2. The engine is "aurora-postgres" and engine_version is "13.4".
I already had an existing cluster in us-east-1 created without Terraform, which I imported into Terraform, and now I want to create a global cluster with another cluster in us-east-2. So I am following this part of the aws-provider docs.
Here is what my current hcl looks like:
# provider.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
alias = "useast1"
region = "us-east-1"
assume_role {
role_arn = var.TF_IAM_ROLE_ARN
}
}
provider "aws" {
alias = "useast2"
region = "us-east-2"
assume_role {
role_arn = var.TF_IAM_ROLE_ARN
}
}
# rds.tf
locals {
rds-monitoring-role_arn = "iam role for rds monitoring"
kms_key_id = {
"us-east-1" : "aws managed rds key arn in us-east-1"
"us-east-2" : "aws managed rds key arn in us-east-2"
}
}
resource "aws_rds_global_cluster" "global-lego-production" {
global_cluster_identifier = "global-lego-production"
force_destroy = true
source_db_cluster_identifier = aws_rds_cluster.lego-production-us-east-1.arn
lifecycle {
ignore_changes = [
engine_version,
database_name
]
}
}
resource "aws_rds_cluster" "lego-production-us-east-1" {
provider = aws.useast1
engine = "aurora-postgresql"
engine_version = "13.4"
cluster_identifier = "lego-production"
master_username = "nektar"
master_password = var.RDS_MASTER_PASSWORD
database_name = "lego"
db_subnet_group_name = module.us-east-1.rds-lego-prod-subnet-group-id
db_cluster_parameter_group_name = module.us-east-1.rds-lego-production-parameter-group-id
backup_retention_period = 7
storage_encrypted = true
kms_key_id = local.kms_key_id.us-east-1
copy_tags_to_snapshot = true
deletion_protection = true
skip_final_snapshot = true
iam_database_authentication_enabled = true
enabled_cloudwatch_logs_exports = ["postgresql"]
vpc_security_group_ids = [
module.us-east-1.rds-db-webserver-security-group-id,
module.us-east-1.rds-db-quicksight-security-group-id
]
tags = {
vpc = "nektar"
}
lifecycle {
ignore_changes = [
engine_version,
global_cluster_identifier
]
}
}
resource "aws_rds_cluster_instance" "lego-production-us-east-1-instance-1" {
provider = aws.useast1
engine = aws_rds_cluster.lego-production-us-east-1.engine
engine_version = aws_rds_cluster.lego-production-us-east-1.engine_version
identifier = "lego-production-instance-1"
cluster_identifier = aws_rds_cluster.lego-production-us-east-1.id
instance_class = "db.r6g.4xlarge"
db_subnet_group_name = module.us-east-1.rds-lego-prod-subnet-group-id
monitoring_role_arn = local.rds-monitoring-role_arn
performance_insights_enabled = true
performance_insights_kms_key_id = local.kms_key_id.us-east-1
performance_insights_retention_period = 7
monitoring_interval = 60
tags = {
"devops-guru-default" = "lego-production"
}
lifecycle {
ignore_changes = [
instance_class
]
}
}
resource "aws_rds_cluster_instance" "lego-production-us-east-1-instance-2" {
provider = aws.useast1
engine = aws_rds_cluster.lego-production-us-east-1.engine
engine_version = aws_rds_cluster.lego-production-us-east-1.engine_version
identifier = "lego-production-instance-1-us-east-1b"
cluster_identifier = aws_rds_cluster.lego-production-us-east-1.id
instance_class = "db.r6g.4xlarge"
db_subnet_group_name = module.us-east-1.rds-lego-prod-subnet-group-id
monitoring_role_arn = local.rds-monitoring-role_arn
performance_insights_enabled = true
performance_insights_kms_key_id = local.kms_key_id.us-east-1
performance_insights_retention_period = 7
monitoring_interval = 60
tags = {
"devops-guru-default" = "lego-production"
}
lifecycle {
ignore_changes = [
instance_class
]
}
}
resource "aws_rds_cluster" "lego-production-us-east-2" {
provider = aws.useast2
engine = aws_rds_cluster.lego-production-us-east-1.engine
engine_version = aws_rds_cluster.lego-production-us-east-1.engine_version
cluster_identifier = "lego-production-us-east-2"
global_cluster_identifier = aws_rds_global_cluster.global-lego-production.id
db_subnet_group_name = module.us-east-2.rds-lego-prod-subnet-group-id
db_cluster_parameter_group_name = module.us-east-2.rds-lego-production-parameter-group-id
backup_retention_period = 7
storage_encrypted = true
kms_key_id = local.kms_key_id.us-east-2
copy_tags_to_snapshot = true
deletion_protection = true
skip_final_snapshot = true
iam_database_authentication_enabled = true
enabled_cloudwatch_logs_exports = ["postgresql"]
vpc_security_group_ids = [
module.us-east-2.rds-db-webserver-security-group-id,
module.us-east-2.rds-db-quicksight-security-group-id
]
tags = {
vpc = "nektar"
}
depends_on = [
aws_rds_cluster.lego-production-us-east-1,
aws_rds_cluster_instance.lego-production-us-east-1-instance-1,
aws_rds_cluster_instance.lego-production-us-east-1-instance-2
]
lifecycle {
ignore_changes = [
engine_version
]
}
}
resource "aws_rds_cluster_instance" "lego-production-us-east-2-instance-1" {
provider = aws.useast2
engine = aws_rds_cluster.lego-production-us-east-1.engine
engine_version = aws_rds_cluster.lego-production-us-east-1.engine_version
identifier = "lego-production-instance-1"
cluster_identifier = aws_rds_cluster.lego-production-us-east-2.id
instance_class = "db.r6g.4xlarge"
db_subnet_group_name = module.us-east-2.rds-lego-prod-subnet-group-id
monitoring_role_arn = local.rds-monitoring-role_arn
performance_insights_enabled = true
performance_insights_kms_key_id = local.kms_key_id.us-east-2
performance_insights_retention_period = 7
monitoring_interval = 60
tags = {
"devops-guru-default" = "lego-production"
}
lifecycle {
ignore_changes = [
instance_class
]
}
}
I applied it with terraform plan -out tfplan.out and then terraform apply tfplan.out (the initial plan only showed adding the 3 new resources: aws_rds_global_cluster, aws_rds_cluster and aws_rds_cluster_instance in us-east-2).
The global cluster was created successfully (as seen in the AWS Console), but the RDS cluster in us-east-2 fails with the error InvalidDBClusterStateFault: Source cluster: arn:aws:rds:us-east-1:<account-id>:cluster:lego-production is in a state which is not valid for physical replication.
I tried the same thing using just the AWS Console (without Terraform, via "Add Region" under the "Modify" option after selecting the global cluster), and it shows the same error.
What criteria am I missing for adding another region to my global cluster? It certainly isn't just Terraform acting up, and I couldn't find anywhere else on the internet where somebody encountered the same error.
If there is any other information I should provide, please comment.
You are referencing your useast2 cluster's engine to the useast1 cluster, which uses the useast1 provider, so it effectively tries to replicate the same thing.
You should create an additional resource such as "aws_rds_cluster" "lego-production-us-east-2" and provide the same information, but with useast2 as the provider.
For example, for your useast2 cluster you have:
resource "aws_rds_cluster" "lego-production-us-east-2" {
provider = aws.useast2
engine = 👉🏽aws_rds_cluster.lego-production-us-east-1.engine
engine_version = 👉🏽aws_rds_cluster.lego-production-us-east-1.engine_version
cluster_identifier = "lego-production-us-east-2"
Notice your engine is pointing to your useast1 cluster. Reference your engine and engine_version to a new rds cluster which will include your useast2 alias.
Let me know if this works.
It took me the AWS Developer Support Plan to resolve this.
The reason for the InvalidDBClusterStateFault error is apparently pretty straightforward: there are pending changes to the cluster that are scheduled to be applied at the next maintenance window.
That's it! To view the pending changes you can run the following command:
aws rds describe-db-clusters --db-cluster-identifier lego-production --query 'DBClusters[].{DBClusterIdentifier:DBClusterIdentifier,PendingModifiedValues:PendingModifiedValues}'
In my case, some changes made through Terraform were going to be applied at the next maintenance window. I had to add the following line to my aws_rds_cluster resource block to apply those changes immediately:
resource "aws_rds_cluster" "lego-production-us-east-1" {
...
+ apply_immediately = true
...
}
The same had to be done for the lego-production-us-east-2 resource block as well, just to be sure.
Once I applied these changes, the cluster addition to the global cluster took place as expected.
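As an extra sanity check, you can list the members of the global cluster with the AWS CLI to confirm that both regional clusters joined it; something along these lines should work:

aws rds describe-global-clusters \
  --global-cluster-identifier global-lego-production \
  --query 'GlobalClusters[].GlobalClusterMembers[].DBClusterArn'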

Terraform: prevent recreating rds cluster during each apply with ignore_changes

I managed to prevent the RDS cluster from being recreated on each apply by setting ignore_changes = all, but only one parameter -- cluster_members -- actually changes. Setting just this parameter in ignore_changes doesn't work: the cluster still gets destroyed and created again. Could someone help me understand why it doesn't work, please?
resource "aws_rds_cluster" "aurora_cluster" {
cluster_identifier = local.cluster_id
final_snapshot_identifier = try(data.aws_db_cluster_snapshot.start_from_snapshot[0].id, null)
engine = "aurora-mysql"
engine_version = var.rds_engine_version
engine_mode = "provisioned"
master_username = var.rds_master_username
master_password = var.rds_master_password
db_subnet_group_name = aws_db_subnet_group.rds-subnet-group.name
vpc_security_group_ids = concat([aws_security_group.rds_inbound.id], var.external_sgs)
apply_immediately = true
skip_final_snapshot = true
deletion_protection = var.deletion_protection
backup_retention_period = var.backup_retention_period
enabled_cloudwatch_logs_exports = var.rds_cloudwatch_logs
kms_key_id = data.aws_kms_alias.rds_kms_key.arn
storage_encrypted = true
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.rds-cluster-params.name
db_instance_parameter_group_name = aws_db_parameter_group.rds-instance-params.name
tags = merge(
local.common_tags,
{
Description = "RDS cluster ..."
}
)
lifecycle {
ignore_changes = [cluster_members]
}
}
Terraform plan:
  # aws_rds_cluster.aurora_cluster has changed
  ~ resource "aws_rds_cluster" "aurora_cluster" {
      ~ cluster_members = [
          + "rds-1-1",
          + "rds-1-2",
        ]
        id   = "rds-1"
        tags = {
            "Description" = "RDS cluster 1"
            "EnvName"     = "env"
            "EnvType"     = "dev"
            ...
        }
        # (37 unchanged attributes hidden)
    }
Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.
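The excerpt above is Terraform's "changed outside of Terraform" refresh note rather than the replacement plan itself; the attribute that actually drives the destroy/recreate is marked with "forces replacement" further down in the plan output. If that attribute turns out to be availability_zones (as in the answer near the top of this page), a hedged sketch of the lifecycle block would be:

resource "aws_rds_cluster" "aurora_cluster" {
  # ... arguments exactly as in the question ...

  lifecycle {
    # cluster_members only shows up as drift; availability_zones is assumed
    # here to be the attribute flagged with "forces replacement".
    ignore_changes = [cluster_members, availability_zones]
  }
}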

When using Terraform, why does my RDS instance tear down and stand back up when I make a change to an EC2 Instance in its in/egress rules?

I have an ec2 resource (shown) with its own security group (not shown)
resource "aws_instance" "outpost" {
ami = "ami-0469d1xxxxxxxx"
instance_type = "t2.micro"
key_name = module.secretsmanager.key_name
vpc_security_group_ids = [module.ec2_security_group.security_group_id]
subnet_id = module.vpc.public_subnets[0]
tags = {
Name = "${var.env}-${var.user}-ec2-outpost"
Terraform = "true"
Environment = var.env
Created = "${timestamp()}"
}
}
A security group for an RDS instance that has ingress and egress rules for that ec2's security group:
module "db_security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 4"
name = "${var.env}-${var.user}-${local.db_name}"
vpc_id = module.vpc.vpc_id
ingress_with_source_security_group_id = [
{
rule = "postgresql-tcp"
source_security_group_id = module.ec2_security_group.security_group_id
}
]
egress_with_source_security_group_id = [
{
rule = "postgresql-tcp"
source_security_group_id = module.ec2_security_group.security_group_id
}
]
}
And the RDS instance that is in db_security_group
module "rds" {
source = "terraform-aws-modules/rds/aws"
version = "~> 3.4.0"
identifier = "${var.env}-${var.user}-${local.db_name}"
engine = var.postgres.engine
engine_version = var.postgres.engine_version
family = var.postgres.family
major_engine_version = var.postgres.major_engine_version
instance_class = var.postgres.instance_class
allocated_storage = var.postgres.allocated_storage
max_allocated_storage = var.postgres.max_allocated_storage
storage_encrypted = var.postgres.storage_encrypted
name = var.postgres.name
username = var.postgres.username
password = var.rds_password
port = var.postgres.port
multi_az = var.postgres.multi_az
subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [module.db_security_group.security_group_id]
maintenance_window = var.postgres.maintenance_window
backup_window = var.postgres.backup_window
enabled_cloudwatch_logs_exports = var.postgres.enabled_cloudwatch_logs_exports
backup_retention_period = var.postgres.backup_retention_period
skip_final_snapshot = var.postgres.skip_final_snapshot
deletion_protection = var.postgres.deletion_protection
performance_insights_enabled = var.postgres.performance_insights_enabled
performance_insights_retention_period = var.postgres.performance_insights_retention_period
create_monitoring_role = var.postgres.create_monitoring_role
monitoring_role_name = "${var.env}-${var.user}-${var.postgres.monitoring_role_name}"
monitoring_interval = var.postgres.monitoring_interval
snapshot_identifier = var.postgres.snapshot_identifier
}
When I change something on the EC2 instance (like, say, iam_instance_profile), or anything about instances referenced in the inbound/outbound rules for module.db_security_group.security_group_id, why does the RDS instance get destroyed and recreated by Terraform?
It seems that, in addition to the username and password behavior seen when snapshot_identifier is given (here and here), Terraform will also mark the RDS instance for deletion and recreation when either of these parameters is set. You will see this happen when re-applying the plan in question: because the initial username and/or password is never actually set by Terraform, it thinks there is a change.
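One way to confirm exactly which attribute is triggering the replacement is to search the plan output for Terraform's "forces replacement" marker, for example:

terraform plan -no-color | grep -B 3 "forces replacement"

Whichever attribute is flagged there (snapshot_identifier, username or password, per the answer above) is the one to pin down or ignore.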

terraform import error with module with rds instances

So I created an RDS instance and I am trying to import it into Terraform. However, I am using modules in my code, so when running the import I get an error.
At first it says:
module.rds_dr.aws_db_instance.db_instance: Import prepared!
Prepared aws_db_instance for import
Then it gives the error:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_db_instance.db_instance,
the provider detected that no object exists with the given id. Only
pre-existing objects can be imported; check that the id is correct and that it
is associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
The command I ran was:
terraform import module.rds_dr.aws_db_instance.db_instance db-ID
I created the instance using a module sourced from GitHub. The code for the RDS instance is below:
# PostgreSQL RDS DR Instance
module "rds_dr" {
source = "git#github.com:****"
name = var.rds_name_dr
engine = var.rds_engine_dr
engine_version = var.rds_engine_version_dr
family = var.rds_family_dr
instance_class = var.rds_instance_class_dr
# WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
password = random_string.rds_password.result
port = var.rds_port_dr
security_groups = [aws_security_group.rds_app.id]
subnets = [module.vpc.public_subnets]
auto_minor_version_upgrade = var.rds_auto_minor_version_upgrade_dr
backup_retention_period = var.rds_backup_retention_period_dr
backup_window = var.rds_backup_window_dr
maintenance_window = var.rds_maintenance_window_dr
environment = var.environment
kms_key_id = aws_kms_key.rds.arn
multi_az = var.rds_multi_az_dr
notification_topic = var.rds_notification_topic_dr
publicly_accessible = var.rds_publicly_accessible_dr
storage_encrypted = var.rds_storage_encrypted_dr
storage_size = var.rds_storage_size_dr
storage_type = var.rds_storage_type_dr
apply_immediately = true
}
Also, this is part of the module code:
resource "aws_db_instance" "db_instance" {
allocated_storage = local.storage_size
allow_major_version_upgrade = false
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
backup_retention_period = var.read_replica ? 0 : var.backup_retention_period
backup_window = var.backup_window
character_set_name = local.is_oracle ? var.character_set_name : null
copy_tags_to_snapshot = var.copy_tags_to_snapshot
db_subnet_group_name = local.same_region_replica ? null : local.subnet_group
deletion_protection = var.enable_deletion_protection
engine = var.engine
engine_version = local.engine_version
final_snapshot_identifier = lower("${var.name}-final-snapshot${var.final_snapshot_suffix == "" ? "" : "-"}${var.final_snapshot_suffix}")
iam_database_authentication_enabled = var.iam_authentication_enabled
identifier_prefix = "${lower(var.name)}-"
instance_class = var.instance_class
iops = var.storage_iops
kms_key_id = var.kms_key_id
license_model = var.license_model == "" ? local.license_model : var.license_model
maintenance_window = var.maintenance_window
max_allocated_storage = var.max_storage_size
monitoring_interval = var.monitoring_interval
monitoring_role_arn = var.monitoring_interval > 0 ? local.monitoring_role_arn : null
multi_az = var.read_replica ? false : var.multi_az
name = var.dbname
option_group_name = local.same_region_replica ? null : local.option_group
parameter_group_name = local.same_region_replica ? null : local.parameter_group
password = var.password
port = local.port
publicly_accessible = var.publicly_accessible
replicate_source_db = var.source_db
skip_final_snapshot = var.read_replica || var.skip_final_snapshot
snapshot_identifier = var.db_snapshot_id
storage_encrypted = var.storage_encrypted
storage_type = var.storage_type
tags = merge(var.tags, local.tags)
timezone = local.is_mssql ? var.timezone : null
username = var.username
vpc_security_group_ids = var.security_groups
}
This is my code for the providers:
# pinned provider versions
provider "random" {
version = "~> 2.3.0"
}
provider "template" {
version = "~> 2.1.2"
}
provider "archive" {
version = "~> 1.1"
}
# default provider
provider "aws" {
version = "~> 2.44"
allowed_account_ids = [var.aws_account_id]
region = "us-east-1"
}
# remote state
terraform {
required_version = "0.12.24"
backend "s3" {
key = "terraform.dev.tfstate"
encrypt = "true"
bucket = "dev-tfstate"
region = "us-east-1"
}
}
I have inserted the correct DB ID, and I still do not know why Terraform says it "Cannot import non-existent remote object".
How do I fix this?
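As the error message suggests, the id passed to terraform import must be the DB instance identifier as it exists in the provider's configured region; note that the module above uses identifier_prefix, so the real identifier is a generated name rather than var.rds_name_dr. A quick way to list the identifiers that actually exist in us-east-1 (a hedged suggestion, assuming your credentials point at the right account):

aws rds describe-db-instances \
  --region us-east-1 \
  --query 'DBInstances[].DBInstanceIdentifier'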