"global_replication_group_id": conflicts with parameter_group_name in terraform - amazon-web-services

We use aws_elasticache_global_replication_group in Terraform to create a multi-region ElastiCache Redis cluster in AWS. Here is the code we are trying; after applying the Terraform plan we get the error "global_replication_group_id": conflicts with parameter_group_name.
resource "aws_elasticache_global_replication_group" "global-redis" {
  global_replication_group_id_suffix = "global-redis"
  primary_replication_group_id       = aws_elasticache_replication_group.primary.id
}
resource "aws_elasticache_replication_group" "primary" {
  replication_group_id          = "redis-primary"
  replication_group_description = "primary replication group"
  engine                        = "redis"
  engine_version                = "5.0.6"
  node_type                     = "cache.m5.large"
  snapshot_retention_limit      = var.snapshot_retention
  parameter_group_name          = var.parameter_group_name
  availability_zones            = var.availability-zones-primary
  number_cache_clusters         = 1
}
resource "aws_elasticache_replication_group" "secondary" {
  replication_group_id          = "redis-secondary"
  replication_group_description = "secondary replication group"
  global_replication_group_id   = aws_elasticache_global_replication_group.global-redis.global_replication_group_id
  snapshot_retention_limit      = var.snapshot_retention
  parameter_group_name          = var.parameter_group_name
  availability_zones            = var.availability-zones-secondary
  number_cache_clusters         = 1
  provider                      = aws.other_region
}
We couldn't find any documentation on this error and are looking for answers from anyone who has faced the same issue.
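The error message itself names the conflict: the AWS provider treats `parameter_group_name` and `global_replication_group_id` as mutually exclusive on `aws_elasticache_replication_group`, because a secondary inherits its engine and parameter settings from the global replication group. A hedged sketch of the likely fix (an assumption based on that conflict, not a confirmed answer) is to drop `parameter_group_name` from the secondary:

```hcl
resource "aws_elasticache_replication_group" "secondary" {
  replication_group_id          = "redis-secondary"
  replication_group_description = "secondary replication group"
  global_replication_group_id   = aws_elasticache_global_replication_group.global-redis.global_replication_group_id
  # parameter_group_name removed: a member of a global replication group
  # inherits engine and parameter settings from the global group, so setting
  # it here conflicts with global_replication_group_id.
  snapshot_retention_limit      = var.snapshot_retention
  availability_zones            = var.availability-zones-secondary
  number_cache_clusters         = 1
  provider                      = aws.other_region
}
```

If a custom parameter group is needed globally, it would instead be set on the global replication group or the primary, depending on provider version.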

Related

Unable to configure Parameter Group for Neptune Cluster

I am trying to create a Neptune DB with Terraform, but I am facing the following issue.
Please find the Terraform script I am using.
## Create Neptune DB cluster and instance
resource "aws_neptune_cluster_parameter_group" "neptune1" {
  family      = "neptune1.2"
  name        = "neptune1"
  description = "neptune cluster parameter group"

  parameter {
    name         = "neptune_enable_audit_log"
    value        = 1
    apply_method = "pending-reboot"
  }
}
resource "aws_neptune_cluster" "gh-cluster" {
  cluster_identifier                  = "gh-db"
  skip_final_snapshot                 = true
  iam_database_authentication_enabled = false
  apply_immediately                   = true
  neptune_subnet_group_name           = "${aws_neptune_subnet_group.gh-dbg.name}"
  vpc_security_group_ids              = ["${aws_security_group.sgdb.id}"]
  iam_roles                           = ["${aws_iam_role.NeptuneRole.arn}"]
}
resource "aws_neptune_cluster_instance" "gh-instance" {
  count              = 1
  cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
  engine             = "neptune"
  instance_class     = "db.r5.large"
  apply_immediately  = true
}
resource "aws_neptune_subnet_group" "gh-dbg" {
  name       = "gh-dbg"
  subnet_ids = ["${aws_subnet.private.id}", "${aws_subnet.public.id}"]
}
I think I am not attaching the parameter group to the Neptune cluster, and I am not sure how to do that.
I have tried the following keys in the Terraform instance script:
db_parameter_group
parameter_group_name
But both throw the error 'This argument is not expected here'.
According to the official documentation, the argument you are looking for is "neptune_cluster_parameter_group_name":
resource "aws_neptune_cluster" "gh-cluster" {
  cluster_identifier                   = "gh-db"
  skip_final_snapshot                  = true
  iam_database_authentication_enabled  = false
  apply_immediately                    = true
  neptune_subnet_group_name            = "${aws_neptune_subnet_group.gh-dbg.name}"
  vpc_security_group_ids               = ["${aws_security_group.sgdb.id}"]
  iam_roles                            = ["${aws_iam_role.NeptuneRole.arn}"]
  neptune_cluster_parameter_group_name = "${aws_neptune_cluster_parameter_group.neptune1.name}"
}
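As a side note, if an instance-level (DB) parameter group is also needed, `aws_neptune_cluster_instance` accepts `neptune_parameter_group_name`. A hedged sketch (the resource name `neptune1-instance` is hypothetical; it would reference a separately defined `aws_neptune_parameter_group`):

```hcl
resource "aws_neptune_cluster_instance" "gh-instance" {
  count              = 1
  cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
  engine             = "neptune"
  instance_class     = "db.r5.large"
  apply_immediately  = true
  # Hypothetical instance-level parameter group, distinct from the
  # cluster-level aws_neptune_cluster_parameter_group above.
  neptune_parameter_group_name = "${aws_neptune_parameter_group.neptune1-instance.name}"
}
```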

InvalidDBClusterStateFault: Source cluster is in a state which is not valid for physical replication when adding a new rds cluster in global cluster

I am using Terraform to set up an RDS Global Cluster in 2 regions, us-east-1 and us-east-2. The engine is "aurora-postgresql" and engine_version is "13.4".
I already had an existing cluster in us-east-1, created without Terraform, which I imported into Terraform; now I want to create a global cluster with another cluster in us-east-2. So I am following this part of the aws-provider docs.
Here is what my current HCL looks like:
# provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  alias  = "useast1"
  region = "us-east-1"

  assume_role {
    role_arn = var.TF_IAM_ROLE_ARN
  }
}

provider "aws" {
  alias  = "useast2"
  region = "us-east-2"

  assume_role {
    role_arn = var.TF_IAM_ROLE_ARN
  }
}
# rds.tf
locals {
  rds-monitoring-role_arn = "iam role for rds monitoring"
  kms_key_id = {
    "us-east-1" = "aws managed rds key arn in us-east-1"
    "us-east-2" = "aws managed rds key arn in us-east-2"
  }
}

resource "aws_rds_global_cluster" "global-lego-production" {
  global_cluster_identifier    = "global-lego-production"
  force_destroy                = true
  source_db_cluster_identifier = aws_rds_cluster.lego-production-us-east-1.arn

  lifecycle {
    ignore_changes = [
      engine_version,
      database_name
    ]
  }
}
resource "aws_rds_cluster" "lego-production-us-east-1" {
  provider                            = aws.useast1
  engine                              = "aurora-postgresql"
  engine_version                      = "13.4"
  cluster_identifier                  = "lego-production"
  master_username                     = "nektar"
  master_password                     = var.RDS_MASTER_PASSWORD
  database_name                       = "lego"
  db_subnet_group_name                = module.us-east-1.rds-lego-prod-subnet-group-id
  db_cluster_parameter_group_name     = module.us-east-1.rds-lego-production-parameter-group-id
  backup_retention_period             = 7
  storage_encrypted                   = true
  kms_key_id                          = local.kms_key_id.us-east-1
  copy_tags_to_snapshot               = true
  deletion_protection                 = true
  skip_final_snapshot                 = true
  iam_database_authentication_enabled = true
  enabled_cloudwatch_logs_exports     = ["postgresql"]

  vpc_security_group_ids = [
    module.us-east-1.rds-db-webserver-security-group-id,
    module.us-east-1.rds-db-quicksight-security-group-id
  ]

  tags = {
    vpc = "nektar"
  }

  lifecycle {
    ignore_changes = [
      engine_version,
      global_cluster_identifier
    ]
  }
}
resource "aws_rds_cluster_instance" "lego-production-us-east-1-instance-1" {
  provider                              = aws.useast1
  engine                                = aws_rds_cluster.lego-production-us-east-1.engine
  engine_version                        = aws_rds_cluster.lego-production-us-east-1.engine_version
  identifier                            = "lego-production-instance-1"
  cluster_identifier                    = aws_rds_cluster.lego-production-us-east-1.id
  instance_class                        = "db.r6g.4xlarge"
  db_subnet_group_name                  = module.us-east-1.rds-lego-prod-subnet-group-id
  monitoring_role_arn                   = local.rds-monitoring-role_arn
  performance_insights_enabled          = true
  performance_insights_kms_key_id       = local.kms_key_id.us-east-1
  performance_insights_retention_period = 7
  monitoring_interval                   = 60

  tags = {
    "devops-guru-default" = "lego-production"
  }

  lifecycle {
    ignore_changes = [
      instance_class
    ]
  }
}
resource "aws_rds_cluster_instance" "lego-production-us-east-1-instance-2" {
  provider                              = aws.useast1
  engine                                = aws_rds_cluster.lego-production-us-east-1.engine
  engine_version                        = aws_rds_cluster.lego-production-us-east-1.engine_version
  identifier                            = "lego-production-instance-1-us-east-1b"
  cluster_identifier                    = aws_rds_cluster.lego-production-us-east-1.id
  instance_class                        = "db.r6g.4xlarge"
  db_subnet_group_name                  = module.us-east-1.rds-lego-prod-subnet-group-id
  monitoring_role_arn                   = local.rds-monitoring-role_arn
  performance_insights_enabled          = true
  performance_insights_kms_key_id       = local.kms_key_id.us-east-1
  performance_insights_retention_period = 7
  monitoring_interval                   = 60

  tags = {
    "devops-guru-default" = "lego-production"
  }

  lifecycle {
    ignore_changes = [
      instance_class
    ]
  }
}
resource "aws_rds_cluster" "lego-production-us-east-2" {
  provider                            = aws.useast2
  engine                              = aws_rds_cluster.lego-production-us-east-1.engine
  engine_version                      = aws_rds_cluster.lego-production-us-east-1.engine_version
  cluster_identifier                  = "lego-production-us-east-2"
  global_cluster_identifier           = aws_rds_global_cluster.global-lego-production.id
  db_subnet_group_name                = module.us-east-2.rds-lego-prod-subnet-group-id
  db_cluster_parameter_group_name     = module.us-east-2.rds-lego-production-parameter-group-id
  backup_retention_period             = 7
  storage_encrypted                   = true
  kms_key_id                          = local.kms_key_id.us-east-2
  copy_tags_to_snapshot               = true
  deletion_protection                 = true
  skip_final_snapshot                 = true
  iam_database_authentication_enabled = true
  enabled_cloudwatch_logs_exports     = ["postgresql"]

  vpc_security_group_ids = [
    module.us-east-2.rds-db-webserver-security-group-id,
    module.us-east-2.rds-db-quicksight-security-group-id
  ]

  tags = {
    vpc = "nektar"
  }

  depends_on = [
    aws_rds_cluster.lego-production-us-east-1,
    aws_rds_cluster_instance.lego-production-us-east-1-instance-1,
    aws_rds_cluster_instance.lego-production-us-east-1-instance-2
  ]

  lifecycle {
    ignore_changes = [
      engine_version
    ]
  }
}
resource "aws_rds_cluster_instance" "lego-production-us-east-2-instance-1" {
  provider                              = aws.useast2
  engine                                = aws_rds_cluster.lego-production-us-east-1.engine
  engine_version                        = aws_rds_cluster.lego-production-us-east-1.engine_version
  identifier                            = "lego-production-instance-1"
  cluster_identifier                    = aws_rds_cluster.lego-production-us-east-2.id
  instance_class                        = "db.r6g.4xlarge"
  db_subnet_group_name                  = module.us-east-2.rds-lego-prod-subnet-group-id
  monitoring_role_arn                   = local.rds-monitoring-role_arn
  performance_insights_enabled          = true
  performance_insights_kms_key_id       = local.kms_key_id.us-east-2
  performance_insights_retention_period = 7
  monitoring_interval                   = 60

  tags = {
    "devops-guru-default" = "lego-production"
  }

  lifecycle {
    ignore_changes = [
      instance_class
    ]
  }
}
When applying it with terraform plan -out tfplan.out and then terraform apply tfplan.out (the initial plan only showed adding the three resources: aws_rds_global_cluster, plus the aws_rds_cluster and aws_rds_cluster_instance in us-east-2), the Global Cluster was created successfully (as seen in the AWS Console). But the RDS cluster in us-east-2 fails with the error InvalidDBClusterStateFault: Source cluster: arn:aws:rds:us-east-1:<account-id>:cluster:lego-production is in a state which is not valid for physical replication.
I tried the same thing using just the AWS Console (without Terraform, via "Add Region" under the "Modify" option on the Global Cluster), and it shows the same error.
What criterion is missing for adding another region to my global cluster? It certainly isn't just Terraform acting up, and I couldn't find anywhere else on the internet where somebody encountered the same error.
If there is any other information that I should provide, please comment.
You are referencing your useast2 cluster's engine to the useast1 cluster, which uses the useast1 provider, so it is trying to replicate the same thing.
You should create an additional resource such as "aws_rds_cluster" "lego-production-us-east-2", provide the same information, but set useast2 as the provider.
For example, for your useast2 cluster you have:
resource "aws_rds_cluster" "lego-production-us-east-2" {
  provider           = aws.useast2
  engine             = aws_rds_cluster.lego-production-us-east-1.engine         # <-- points at useast1
  engine_version     = aws_rds_cluster.lego-production-us-east-1.engine_version # <-- points at useast1
  cluster_identifier = "lego-production-us-east-2"
Notice your engine is pointing at your useast1 cluster. Reference your engine and engine_version to a new RDS cluster which uses your useast2 alias.
Let me know if this works.
It took the AWS Developer Support Plan for me to resolve this.
The reason for the error InvalidDBClusterStateFault is apparently quite straightforward: there are pending changes on the source cluster, waiting to be applied at the next maintenance window.
That's it! To view the pending changes you can run the following command:
aws rds describe-db-clusters --db-cluster-identifier lego-production --query 'DBClusters[].{DBClusterIdentifier:DBClusterIdentifier,PendingModifiedValues:PendingModifiedValues}'
In my case, some changes made through Terraform were going to be applied at the next maintenance window. I had to add the following line to my aws_rds_cluster resource block to apply those changes immediately:
  resource "aws_rds_cluster" "lego-production-us-east-1" {
    ...
+   apply_immediately = true
    ...
  }
The same had to be done for the lego-production-us-east-2 resource block as well, just to be sure.
Once I applied these changes, the cluster was added to the global cluster as expected.

Terraform: prevent recreating rds cluster during each apply with ignore_changes

I managed to prevent recreating the RDS cluster on each apply by setting ignore_changes = all, but only one attribute, cluster_members, actually changes. Setting only this attribute in ignore_changes doesn't work: the cluster still gets destroyed and created again. Could someone help me understand why?
resource "aws_rds_cluster" "aurora_cluster" {
  cluster_identifier               = local.cluster_id
  final_snapshot_identifier        = try(data.aws_db_cluster_snapshot.start_from_snapshot[0].id, null)
  engine                           = "aurora-mysql"
  engine_version                   = var.rds_engine_version
  engine_mode                      = "provisioned"
  master_username                  = var.rds_master_username
  master_password                  = var.rds_master_password
  db_subnet_group_name             = aws_db_subnet_group.rds-subnet-group.name
  vpc_security_group_ids           = concat([aws_security_group.rds_inbound.id], var.external_sgs)
  apply_immediately                = true
  skip_final_snapshot              = true
  deletion_protection              = var.deletion_protection
  backup_retention_period          = var.backup_retention_period
  enabled_cloudwatch_logs_exports  = var.rds_cloudwatch_logs
  kms_key_id                       = data.aws_kms_alias.rds_kms_key.arn
  storage_encrypted                = true
  db_cluster_parameter_group_name  = aws_rds_cluster_parameter_group.rds-cluster-params.name
  db_instance_parameter_group_name = aws_db_parameter_group.rds-instance-params.name

  tags = merge(
    local.common_tags,
    {
      Description = "RDS cluster ..."
    }
  )

  lifecycle {
    ignore_changes = [cluster_members]
  }
}
Terraform plan:
  # aws_rds_cluster.aurora_cluster has changed
  ~ resource "aws_rds_cluster" "aurora_cluster" {
      ~ cluster_members = [
          + "rds-1-1",
          + "rds-1-2",
        ]
        id   = "rds-1"
        tags = {
          Description = "RDS cluster 1"
          EnvName     = "env"
          EnvType     = "dev"
          ...
        }
        # (37 unchanged attributes hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.
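One hedged diagnostic step (an assumption about where to look, not a confirmed answer): ignore_changes only suppresses diffs for the attributes it lists, so if the plan still destroys and recreates the cluster, some other attribute must be forcing the replacement. Terraform flags that attribute with "# forces replacement" in the plan output, so searching for that marker reveals the real culprit. The plan excerpt below is purely illustrative (the attribute shown, final_snapshot_identifier, is a guess); in practice you would pipe real terraform plan -no-color output into grep instead:

```shell
# Write a hypothetical plan excerpt to a file so grep has something to scan;
# with a real configuration you would run:
#   terraform plan -no-color | grep "forces replacement"
cat <<'EOF' > plan.txt
-/+ resource "aws_rds_cluster" "aurora_cluster" {
      ~ final_snapshot_identifier = "snap-a" -> "snap-b" # forces replacement
    }
EOF

# Any line carrying this marker names an attribute that triggers destroy/create.
grep "forces replacement" plan.txt
```

Whatever attribute the marker points at is the one to stabilize (or, if acceptable, add to the ignore_changes list alongside cluster_members).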

When using Terraform, why does my RDS instance tear down and stand back up when I make a change to an EC2 Instance in its in/egress rules?

I have an ec2 resource (shown) with its own security group (not shown)
resource "aws_instance" "outpost" {
  ami                    = "ami-0469d1xxxxxxxx"
  instance_type          = "t2.micro"
  key_name               = module.secretsmanager.key_name
  vpc_security_group_ids = [module.ec2_security_group.security_group_id]
  subnet_id              = module.vpc.public_subnets[0]

  tags = {
    Name        = "${var.env}-${var.user}-ec2-outpost"
    Terraform   = "true"
    Environment = var.env
    Created     = "${timestamp()}"
  }
}
A security group for an RDS instance that has ingress and egress rules for that ec2's security group:
module "db_security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "~> 4"
  name    = "${var.env}-${var.user}-${local.db_name}"
  vpc_id  = module.vpc.vpc_id

  ingress_with_source_security_group_id = [
    {
      rule                     = "postgresql-tcp"
      source_security_group_id = module.ec2_security_group.security_group_id
    }
  ]

  egress_with_source_security_group_id = [
    {
      rule                     = "postgresql-tcp"
      source_security_group_id = module.ec2_security_group.security_group_id
    }
  ]
}
And the RDS instance that is in db_security_group
module "rds" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 3.4.0"

  identifier                            = "${var.env}-${var.user}-${local.db_name}"
  engine                                = var.postgres.engine
  engine_version                        = var.postgres.engine_version
  family                                = var.postgres.family
  major_engine_version                  = var.postgres.major_engine_version
  instance_class                        = var.postgres.instance_class
  allocated_storage                     = var.postgres.allocated_storage
  max_allocated_storage                 = var.postgres.max_allocated_storage
  storage_encrypted                     = var.postgres.storage_encrypted
  name                                  = var.postgres.name
  username                              = var.postgres.username
  password                              = var.rds_password
  port                                  = var.postgres.port
  multi_az                              = var.postgres.multi_az
  subnet_ids                            = module.vpc.private_subnets
  vpc_security_group_ids                = [module.db_security_group.security_group_id]
  maintenance_window                    = var.postgres.maintenance_window
  backup_window                         = var.postgres.backup_window
  enabled_cloudwatch_logs_exports       = var.postgres.enabled_cloudwatch_logs_exports
  backup_retention_period               = var.postgres.backup_retention_period
  skip_final_snapshot                   = var.postgres.skip_final_snapshot
  deletion_protection                   = var.postgres.deletion_protection
  performance_insights_enabled          = var.postgres.performance_insights_enabled
  performance_insights_retention_period = var.postgres.performance_insights_retention_period
  create_monitoring_role                = var.postgres.create_monitoring_role
  monitoring_role_name                  = "${var.env}-${var.user}-${var.postgres.monitoring_role_name}"
  monitoring_interval                   = var.postgres.monitoring_interval
  snapshot_identifier                   = var.postgres.snapshot_identifier
}
When I change something on the EC2 instance (say, iam_instance_profile), or anything about instances referenced in the inbound/outbound rules of module.db_security_group.security_group_id, why does the RDS instance get destroyed and recreated by Terraform?
It seems that, in addition to the username and password behavior seen when snapshot_identifier is given (here and here), Terraform will also mark the RDS instance for deletion and recreation when either of those parameters is set. You see this when re-applying the plan in question because the initial username and/or password is never actually set by Terraform, so it thinks there is a change.
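A hedged mitigation sketch: Terraform's lifecycle ignore_changes meta-argument can suppress the spurious diffs on those attributes. This assumes a directly managed aws_db_instance rather than the wrapper module above, since lifecycle blocks cannot be injected into a module's inner resources; the resource name and omitted arguments are illustrative:

```hcl
resource "aws_db_instance" "example" {
  # ... engine, instance_class, and other arguments as in the module above ...

  lifecycle {
    ignore_changes = [
      snapshot_identifier, # only meaningful at creation time
      username,            # never read back once restored from a snapshot
      password,            # never read back once restored from a snapshot
    ]
  }
}
```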

AWS RDS Aurora Global Cluster Error: Cannot specify user name for cross region replication cluster

I am trying to create an RDS global database in AWS using Terraform. The primary cluster gets created, but the secondary cluster fails with the following error:
* aws_rds_cluster.secondary: error creating RDS cluster:
  InvalidParameterCombination: Cannot specify user name for cross region replication cluster
  status code: 400, request id: 10b82a78-898c-49e6-b28f-0a318fdc226f
I tried removing master_username, but then I got the error below:
* aws_rds_cluster.secondary: provider.aws: aws_rds_cluster: :
  "master_username": required field is not set
My Terraform module to create an RDS global database in AWS:
resource "aws_rds_global_cluster" "rdsglobal" {
  provider                  = "aws.primary"
  global_cluster_identifier = "${var.global_database_id}"
  storage_encrypted         = "${var.storage_encrypted}"
}
resource "aws_rds_cluster_instance" "primary" {
  provider            = "aws.primary"
  count               = "${var.instance_count}"
  identifier          = "${var.db_name}-${count.index + 1}"
  cluster_identifier  = "${aws_rds_cluster.primary.id}"
  instance_class      = "${var.instance_class}"
  engine              = "${var.engine}"
  engine_version      = "${var.engine_version}"
  publicly_accessible = "${var.publicly_accessible}"
}
resource "aws_rds_cluster" "primary" {
  provider                  = "aws.primary"
  cluster_identifier        = "${var.primary_cluster_id}"
  database_name             = "${var.db_name}"
  port                      = "${var.port}"
  engine                    = "${var.engine}"
  engine_version            = "${var.engine_version}"
  master_username           = "${var.master_username}"
  master_password           = "${random_string.password.result}"
  vpc_security_group_ids    = ["${var.security_group_ids}"]
  db_subnet_group_name      = "${var.db_subnet_group_name}"
  storage_encrypted         = "${var.storage_encrypted}"
  backup_retention_period   = "${var.backup_retention_period}"
  skip_final_snapshot       = "${var.skip_final_snapshot}"
  engine_mode               = "${var.engine_mode}"
  global_cluster_identifier = "${aws_rds_global_cluster.rdsglobal.id}"
}
resource "aws_rds_cluster_instance" "secondary" {
  provider            = "aws.secondary"
  count               = "${var.instance_count}"
  identifier          = "${var.db_name}-${count.index + 1}"
  cluster_identifier  = "${aws_rds_cluster.secondary.id}"
  instance_class      = "${var.instance_class}"
  engine              = "${var.engine}"
  engine_version      = "${var.engine_version}"
  publicly_accessible = "${var.publicly_accessible}"
}
resource "aws_rds_cluster" "secondary" {
  depends_on                = ["aws_rds_cluster_instance.primary"]
  provider                  = "aws.secondary"
  cluster_identifier        = "${var.secondary_cluster_id}"
  port                      = "${var.port}"
  engine                    = "${var.engine}"
  engine_version            = "${var.engine_version}"
  master_username           = "${var.master_username}"
  master_password           = "${random_string.password.result}"
  vpc_security_group_ids    = ["${var.secondary_security_group_ids}"]
  db_subnet_group_name      = "${var.db_subnet_group_name}"
  engine_mode               = "${var.engine_mode}"
  global_cluster_identifier = "${aws_rds_global_cluster.rdsglobal.id}"
}
Reference: https://www.terraform.io/docs/providers/aws/r/rds_global_cluster.html
If you are creating a global cluster, you don't need to provide master_username and master_password for the secondary cluster: a secondary joined via global_cluster_identifier replicates its credentials from the primary.
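A hedged sketch of the secondary without credentials (this assumes an AWS provider version in which master_username is optional when global_cluster_identifier is set; the "required field is not set" error above suggests an older provider that may need upgrading first):

```hcl
resource "aws_rds_cluster" "secondary" {
  depends_on                = ["aws_rds_cluster_instance.primary"]
  provider                  = "aws.secondary"
  cluster_identifier        = "${var.secondary_cluster_id}"
  port                      = "${var.port}"
  engine                    = "${var.engine}"
  engine_version            = "${var.engine_version}"
  # master_username / master_password omitted: a secondary cluster joined via
  # global_cluster_identifier replicates credentials from the primary.
  vpc_security_group_ids    = ["${var.secondary_security_group_ids}"]
  db_subnet_group_name      = "${var.db_subnet_group_name}"
  engine_mode               = "${var.engine_mode}"
  global_cluster_identifier = "${aws_rds_global_cluster.rdsglobal.id}"
}
```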