I'm building a DMS infrastructure, but I'm stuck due to quotas. I'm trying to do the following:
1. I have a list of 64 DBs (replicated_db), split into chunks of 50.
2. Create X DMS instances, where X is the number of chunks from step 1.
3. Associate a DMS replication task with each DMS instance, following the chunks from step 1.
Steps 1 and 2 are OK, but I'm stuck on step 3.
What I have already done:
# Create a new replication instance
resource "aws_dms_replication_instance" "dms_instance" {
  count = length(chunklist(var.replicated_db, 50))

  allow_major_version_upgrade  = true
  apply_immediately            = false
  auto_minor_version_upgrade   = true
  allocated_storage            = var.allocated_storage
  availability_zone            = module.rds-mssql.db_instance_availability_zone[0]
  engine_version               = "3.4.7"
  multi_az                     = var.dms_multi_az
  preferred_maintenance_window = var.maintenance_window
  publicly_accessible          = false
  replication_instance_class   = var.replication_instance_class
  replication_instance_id      = "${var.identifier}-dms-${count.index}"
  replication_subnet_group_id  = aws_dms_replication_subnet_group.dms_subnet_group[0].id

  vpc_security_group_ids = flatten([
    var.vpc_security_group_ids,
    "sg-XXXXXXXXXXXXXXXXX",
    "sg-XXXXXXXXXXXXXXXXX",
    "sg-XXXXXXXXXXXXXXXXX",
  ])

  tags = var.tags
}

resource "aws_dms_replication_task" "dms_replication_task" {
  for_each = var.replicated_db

  migration_type           = "full-load"
  replication_instance_arn = aws_dms_replication_instance.dms_instance[*].replication_instance_arn
  replication_task_id      = "${var.identifier}-${replace(each.value, "_", "-")}-replication-task"
  table_mappings           = file("${var.path_table_mapping}/table_mappings.json")
  source_endpoint_arn      = aws_dms_endpoint.dms_endpoint_source[each.value].endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.dms_endpoint_target[each.value].endpoint_arn
  tags                     = var.tags

  depends_on = [
    aws_dms_endpoint.dms_endpoint_source,
    aws_dms_endpoint.dms_endpoint_target,
  ]
}
Could someone help me with this data manipulation?
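One way to pick, for each task, the instance that owns its chunk is to derive the chunk index from the DB's position in the list. A minimal sketch, assuming var.replicated_db is a list of DB names and the chunk size stays at 50 (the splat expression in the original returns a list of ARNs, which is why replication_instance_arn fails; a single ARN must be selected per task):

```hcl
resource "aws_dms_replication_task" "dms_replication_task" {
  for_each = toset(var.replicated_db)

  # index() finds this DB's position in the original list; dividing by the
  # chunk size and flooring yields the index of the instance for its chunk,
  # matching how chunklist() grouped the DBs.
  replication_instance_arn = aws_dms_replication_instance.dms_instance[
    floor(index(var.replicated_db, each.value) / 50)
  ].replication_instance_arn

  migration_type      = "full-load"
  replication_task_id = "${var.identifier}-${replace(each.value, "_", "-")}-replication-task"
  table_mappings      = file("${var.path_table_mapping}/table_mappings.json")
  source_endpoint_arn = aws_dms_endpoint.dms_endpoint_source[each.value].endpoint_arn
  target_endpoint_arn = aws_dms_endpoint.dms_endpoint_target[each.value].endpoint_arn
  tags                = var.tags
}
```

Note the toset() wrapper: for_each requires a set or map, so if replicated_db is declared as list(string) it must be converted first.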
I am trying to create a Neptune DB with Terraform, but I'm facing the following issue.
Please find the Terraform script I am using.
## Create Neptune DB cluster and instance
resource "aws_neptune_cluster_parameter_group" "neptune1" {
  family      = "neptune1.2"
  name        = "neptune1"
  description = "neptune cluster parameter group"

  parameter {
    name         = "neptune_enable_audit_log"
    value        = 1
    apply_method = "pending-reboot"
  }
}

resource "aws_neptune_cluster" "gh-cluster" {
  cluster_identifier                  = "gh-db"
  skip_final_snapshot                 = true
  iam_database_authentication_enabled = false
  apply_immediately                   = true
  neptune_subnet_group_name           = "${aws_neptune_subnet_group.gh-dbg.name}"
  vpc_security_group_ids              = ["${aws_security_group.sgdb.id}"]
  iam_roles                           = ["${aws_iam_role.NeptuneRole.arn}"]
}

resource "aws_neptune_cluster_instance" "gh-instance" {
  count              = 1
  cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
  engine             = "neptune"
  instance_class     = "db.r5.large"
  apply_immediately  = true
}

resource "aws_neptune_subnet_group" "gh-dbg" {
  name       = "gh-dbg"
  subnet_ids = ["${aws_subnet.private.id}", "${aws_subnet.public.id}"]
}
I think I am not attaching the parameter group to the Neptune cluster, and I am not sure how to do that.
I have tried the following arguments in the Terraform script:
db_parameter_group
parameter_group_name
But both throw the error 'This argument is not expected here'.
According to the official documentation, the argument you are looking for is neptune_cluster_parameter_group_name:
resource "aws_neptune_cluster" "gh-cluster" {
  cluster_identifier                   = "gh-db"
  skip_final_snapshot                  = true
  iam_database_authentication_enabled  = false
  apply_immediately                    = true
  neptune_subnet_group_name            = "${aws_neptune_subnet_group.gh-dbg.name}"
  vpc_security_group_ids               = ["${aws_security_group.sgdb.id}"]
  iam_roles                            = ["${aws_iam_role.NeptuneRole.arn}"]
  neptune_cluster_parameter_group_name = "${aws_neptune_cluster_parameter_group.neptune1.name}"
}
I managed to prevent recreating the RDS cluster on each apply by setting ignore_changes = all, but only one attribute, cluster_members, actually changes. Setting only that attribute in ignore_changes doesn't work: the cluster still gets destroyed and recreated. Could someone help me understand why?
resource "aws_rds_cluster" "aurora_cluster" {
  cluster_identifier               = local.cluster_id
  final_snapshot_identifier        = try(data.aws_db_cluster_snapshot.start_from_snapshot[0].id, null)
  engine                           = "aurora-mysql"
  engine_version                   = var.rds_engine_version
  engine_mode                      = "provisioned"
  master_username                  = var.rds_master_username
  master_password                  = var.rds_master_password
  db_subnet_group_name             = aws_db_subnet_group.rds-subnet-group.name
  vpc_security_group_ids           = concat([aws_security_group.rds_inbound.id], var.external_sgs)
  apply_immediately                = true
  skip_final_snapshot              = true
  deletion_protection              = var.deletion_protection
  backup_retention_period          = var.backup_retention_period
  enabled_cloudwatch_logs_exports  = var.rds_cloudwatch_logs
  kms_key_id                       = data.aws_kms_alias.rds_kms_key.arn
  storage_encrypted                = true
  db_cluster_parameter_group_name  = aws_rds_cluster_parameter_group.rds-cluster-params.name
  db_instance_parameter_group_name = aws_db_parameter_group.rds-instance-params.name

  tags = merge(
    local.common_tags,
    {
      Description = "RDS cluster ..."
    }
  )

  lifecycle {
    ignore_changes = [cluster_members]
  }
}
Terraform plan:
# aws_rds_cluster.aurora_cluster has changed
~ resource "aws_rds_cluster" "aurora_cluster" {
    ~ cluster_members = [
        + "rds-1-1",
        + "rds-1-2",
      ]
      id   = "rds-1"
      tags = {
        "Description" = "RDS cluster 1"
        "EnvName"     = "env"
        "EnvType"     = "dev"
        ...
      }
      # (37 unchanged attributes hidden)
  }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.
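Note that the "has changed" block in the plan is only a drift notice about changes made outside of Terraform; it is not by itself what forces replacement. If the cluster is still being replaced, the attribute forcing it may not be cluster_members at all. A sketch that additionally ignores availability-zone drift, which is a common cause of unwanted Aurora cluster replacement (attribute names assumed from the aws provider's aws_rds_cluster resource):

```hcl
  lifecycle {
    # cluster_members drifts as instances join the cluster;
    # availability_zones drifts when AWS spreads the cluster across more
    # AZs than the configuration lists.
    ignore_changes = [
      cluster_members,
      availability_zones,
    ]
  }
```

Running terraform plan after adding this shows which remaining attribute, if any, still triggers the destroy/create cycle.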
I have to create an RDS cluster with data encryption enabled across multiple regions. When creating the RDS I need to set "storage_encrypted": true, and when encryption is enabled AWS requires a multi-Region kms_key_id to create a global RDS cluster.
There are two scenarios I have to validate before creating the RDS:
When "storage_encrypted": true, use the given "kms_key_id": "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" and create the RDS.
When "storage_encrypted": false, fall back to the default KMS key and create the RDS.
How can I write conditions for these scenarios?
In variables.tf:
variable "storage_encrypted" {
  type    = bool
  default = false
}

variable "kms_key_id" {
  type    = string
  default = null
}
In the vars.tfvars.json file I have two parameters.
"storage_encrypted" : true,
"kms_key_id" : "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
This is the current aurora.tf module for the secondary cluster:
module "aurora_secondary" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "7.3.0"

  apply_immediately = var.apply_immediately
  create_cluster    = var.create_secondary_cluster

  providers = { aws = aws.secondary }

  is_primary_cluster = false
  source_region      = var.primary_region

  name                      = var.name
  engine                    = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine[0] : null
  engine_version            = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine_version[0] : null
  global_cluster_identifier = var.create_global_cluster ? aws_rds_global_cluster.this.*.id[0] : null

  storage_encrypted = var.storage_encrypted # <-- the flag in question

  create_random_password = var.create_random_password
  instance_class         = var.instance_class
  instances              = var.secondary_instances

  vpc_id                 = var.vpc_id_us_east_2
  create_db_subnet_group = var.create_secondary_db_subnet_group
  subnets                = var.private_subnets_us_east_2

  create_security_group = var.create_secondary_security_group
  allowed_cidr_blocks   = var.allowed_cidr_blocks

  create_monitoring_role = var.create_monitoring_role
  monitoring_interval    = var.monitoring_interval
  monitoring_role_arn    = var.monitoring_role_arn

  backup_retention_period      = var.backup_retention_period
  preferred_backup_window      = var.preferred_backup_window
  preferred_maintenance_window = var.preferred_maintenance_window

  depends_on = [
    module.aurora
  ]

  tags = local.cluster_tags
}
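A conditional expression can cover both scenarios directly on the kms_key_id argument. A minimal sketch, assuming the module forwards kms_key_id to aws_rds_cluster, where null means "let AWS use the default aws/rds key":

```hcl
  storage_encrypted = var.storage_encrypted

  # When encryption is on, pass the multi-Region key ARN; when it is off,
  # null makes AWS fall back to the default key, so no separate code path
  # is needed for the two scenarios.
  kms_key_id = var.storage_encrypted ? var.kms_key_id : null
```

With the variables declared as above (storage_encrypted defaulting to false and kms_key_id defaulting to null), the vars.tfvars.json values then drive which branch of the condition applies.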
We are using the GCP network and GKE modules in Terraform to create the VPC and then the GKE cluster. Now we would like to create a firewall rule targeting the GKE nodes. We don't want to modify the existing auto-created firewall rules, since the naming format GCP uses for them might change in the future and break our logic. That's why we need a separate firewall rule, along with a separate network tag applied to the GKE nodes. Module info:
VPC
module "vpc" {
  source = "terraform-google-modules/network/google"
  #version = "~> 2.5"

  project_id   = var.project_id
  network_name = "${var.project_name}-${var.env_name}-vpc"

  subnets = [
    {
      subnet_name   = "${var.project_name}-${var.env_name}-subnet"
      subnet_ip     = "${var.subnetwork_cidr}"
      subnet_region = var.region
    }
  ]

  secondary_ranges = {
    "${var.project_name}-${var.env_name}-subnet" = [
      {
        range_name    = "${var.project_name}-gke-pod-ip-range"
        ip_cidr_range = "${var.ip_range_pods_cidr}"
      },
      {
        range_name    = "${var.project_name}-gke-service-ip-range"
        ip_cidr_range = "${var.ip_range_services_cidr}"
      }
    ]
  }
}
GKE_CLUSTER
module "gke" {
  source     = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  project_id = var.project_id
  name       = "${var.project_name}-gke-${var.env_name}-cluster"
  regional   = true
  region     = var.region
  zones      = ["${var.region}-a", "${var.region}-b", "${var.region}-c"]

  network           = module.vpc.network_name
  subnetwork        = module.vpc.subnets_names[0]
  ip_range_pods     = "${var.project_name}-gke-pod-ip-range"
  ip_range_services = "${var.project_name}-gke-service-ip-range"

  http_load_balancing        = false
  network_policy             = false
  horizontal_pod_autoscaling = true
  filestore_csi_driver       = false
  enable_private_endpoint    = false
  enable_private_nodes       = true
  master_ipv4_cidr_block     = "${var.control_plane_cidr}"
  istio                      = false
  cloudrun                   = false
  dns_cache                  = false

  node_pools = [
    {
      name           = "${var.project_name}-gke-node-pool"
      machine_type   = "${var.machine_type}"
      node_locations = "${var.region}-a,${var.region}-b,${var.region}-c"
      min_count      = "${var.node_pools_min_count}"
      max_count      = "${var.node_pools_max_count}"
      disk_size_gb   = "${var.node_pools_disk_size_gb}"
      # local_ssd_count           = 0
      # spot                      = false
      # local_ssd_ephemeral_count = 0
      # disk_type                 = "pd-standard"
      # image_type                = "COS_CONTAINERD"
      # enable_gcfs               = false
      auto_repair  = true
      auto_upgrade = true
      # service_account = "project-service-account#<PROJECT ID>.iam.gserviceaccount.com"
      preemptible = false
      # initial_node_count = 80
    }
  ]

  # node_pools_tags = {
  #   all               = []
  #   default-node-pool = ["default-node-pool",]
  # }
}
FIREWALL
module "firewall_rules" {
  source       = "terraform-google-modules/network/google//modules/firewall-rules"
  project_id   = var.project_id
  network_name = module.vpc.network_name

  rules = [{
    name                    = "allow-istio-ingress"
    description             = null
    direction               = "INGRESS"
    priority                = null
    ranges                  = ["${var.control_plane_cidr}"]
    source_tags             = null
    source_service_accounts = null
    target_tags             = null
    target_service_accounts = null

    allow = [{
      protocol = "tcp"
      ports    = ["15017"]
    }]

    deny = []

    log_config = {
      metadata = "INCLUDE_ALL_METADATA"
    }
  }]

  depends_on = [module.gke]
}
Although the GKE module has a node_pools_tags property to define tags explicitly, we still need help setting it properly and then referencing the same tag value in the firewall module.
I found a working solution to my question posted earlier. Please refer to the GKE module snippet: we only need to modify the part below, and an explicit network tag will be created and applied to all the nodes in that node pool.
module "gke" {
  # ...
  node_pools = [
    {
      name = "gke-node-pool"
      # ...
    },
  ]

  # node_pools_tags maps each node pool name to a list of network tags.
  node_pools_tags = {
    "gke-node-pool" = ["gke-node-pool-network-tag"]
  }
}
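The same tag can then be referenced from the firewall module so the rule applies only to those nodes. A sketch, assuming the node_pools_tags value shown above (some versions of the firewall-rules module require every rule attribute to be set explicitly, as in the original question; the null attributes are omitted here for brevity):

```hcl
module "firewall_rules" {
  source       = "terraform-google-modules/network/google//modules/firewall-rules"
  project_id   = var.project_id
  network_name = module.vpc.network_name

  rules = [{
    name      = "allow-istio-ingress"
    direction = "INGRESS"
    ranges    = ["${var.control_plane_cidr}"]

    # Apply the rule only to nodes carrying the explicit network tag
    # created via node_pools_tags, not to the auto-named GKE rules.
    target_tags = ["gke-node-pool-network-tag"]

    allow = [{
      protocol = "tcp"
      ports    = ["15017"]
    }]
  }]

  depends_on = [module.gke]
}
```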
I have an RDS cluster I built using Terraform; it currently has deletion protection enabled.
When I update my Terraform script for something (for example a security group change) and apply it to the environment, it always tries to tear down and rebuild the RDS cluster.
Running this now with deletion protection stops the rebuild, but it causes terraform apply to fail because it cannot destroy the cluster.
How can I keep the existing RDS cluster without rebuilding it every time I run my script?
resource "aws_rds_cluster" "env-cluster" {
  cluster_identifier      = "mysql-env-cluster"
  engine                  = "aurora-mysql"
  engine_version          = "5.7.mysql_aurora.2.03.2"
  availability_zones      = ["${var.aws_az1}", "${var.aws_az2}"]
  db_subnet_group_name    = "${aws_db_subnet_group.env-rds-subg.name}"
  database_name           = "dbname"
  master_username         = "${var.db-user}"
  master_password         = "${var.db-pass}"
  backup_retention_period = 5
  preferred_backup_window = "22:00-23:00"
  deletion_protection     = true
  skip_final_snapshot     = true
}

resource "aws_rds_cluster_instance" "env-01" {
  identifier         = "${var.env-db-01}"
  cluster_identifier = "${aws_rds_cluster.env-cluster.id}"
  engine             = "aurora-mysql"
  engine_version     = "5.7.mysql_aurora.2.03.2"
  instance_class     = "db.t2.small"
  apply_immediately  = true
}

resource "aws_rds_cluster_instance" "env-02" {
  identifier         = "${var.env-db-02}"
  cluster_identifier = "${aws_rds_cluster.env-cluster.id}"
  engine             = "aurora-mysql"
  engine_version     = "5.7.mysql_aurora.2.03.2"
  instance_class     = "db.t2.small"
  apply_immediately  = true
}

resource "aws_rds_cluster_endpoint" "env-02-ep" {
  cluster_identifier          = "${aws_rds_cluster.env-cluster.id}"
  cluster_endpoint_identifier = "reader"
  custom_endpoint_type        = "READER"
  excluded_members            = ["${aws_rds_cluster_instance.env-01.id}"]
}
I had a similar experience when trying to set up an AWS Aurora cluster and instance.
Each time I ran terraform apply, it tried to recreate the Aurora cluster and instance.
Here's my Terraform script:
locals {
  aws_region      = "eu-west-1"
  tag_environment = "Dev"

  tag_terraform = {
    "true"  = "Managed by Terraform"
    "false" = "Not Managed by Terraform"
  }

  tag_family = {
    "aurora" = "Aurora"
  }

  tag_number = {
    "1" = "1"
    "2" = "2"
    "3" = "3"
    "4" = "4"
  }
}
# RDS Cluster
module "rds_cluster_1" {
  source = "../../../../modules/aws/rds-cluster-single"

  rds_cluster_identifier              = var.rds_cluster_identifier
  rds_cluster_engine                  = var.rds_cluster_engine
  rds_cluster_engine_mode             = var.rds_cluster_engine_mode
  rds_cluster_engine_version          = var.rds_cluster_engine_version
  rds_cluster_availability_zones      = ["${local.aws_region}a"]
  rds_cluster_database_name           = var.rds_cluster_database_name
  rds_cluster_port                    = var.rds_cluster_port
  rds_cluster_master_username         = var.rds_cluster_master_username
  rds_cluster_master_password         = module.password.password_result
  rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
  rds_cluster_apply_immediately       = var.rds_cluster_apply_immediately
  allow_major_version_upgrade         = var.allow_major_version_upgrade
  db_cluster_parameter_group_name     = var.rds_cluster_parameter_group_name
  rds_cluster_deletion_protection     = var.rds_cluster_deletion_protection
  enabled_cloudwatch_logs_exports     = var.enabled_cloudwatch_logs_exports
  skip_final_snapshot                 = var.skip_final_snapshot
  # vpc_security_group_ids            = var.vpc_security_group_ids

  tag_environment = local.tag_environment
  tag_terraform   = local.tag_terraform.true
  tag_number      = local.tag_number.1
  tag_family      = local.tag_family.aurora
}
Here's how I solved it:
The issue was that each time I ran terraform apply, Terraform detected changes made outside of Terraform in two additional availability zones:
Terraform detected the following changes made outside of Terraform since the last "terraform apply":

# module.rds_cluster_1.aws_rds_cluster.main has changed
~ resource "aws_rds_cluster" "main" {
    ~ availability_zones = [
        + "eu-west-1b",
        + "eu-west-1c",
        # (1 unchanged element hidden)
      ]
    ~ cluster_members = [
        + "aurora-postgres-instance-0",
However, my Terraform script only specified one availability zone (rds_cluster_availability_zones = ["${local.aws_region}a"]). All I had to do was specify all three availability zones for my region (rds_cluster_availability_zones = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]):
locals {
  aws_region      = "eu-west-1"
  tag_environment = "Dev"

  tag_terraform = {
    "true"  = "Managed by Terraform"
    "false" = "Not Managed by Terraform"
  }

  tag_family = {
    "aurora" = "Aurora"
  }

  tag_number = {
    "1" = "1"
    "2" = "2"
    "3" = "3"
    "4" = "4"
  }
}
# RDS Cluster
module "rds_cluster_1" {
  source = "../../../../modules/aws/rds-cluster-single"

  rds_cluster_identifier              = var.rds_cluster_identifier
  rds_cluster_engine                  = var.rds_cluster_engine
  rds_cluster_engine_mode             = var.rds_cluster_engine_mode
  rds_cluster_engine_version          = var.rds_cluster_engine_version
  rds_cluster_availability_zones      = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]
  rds_cluster_database_name           = var.rds_cluster_database_name
  rds_cluster_port                    = var.rds_cluster_port
  rds_cluster_master_username         = var.rds_cluster_master_username
  rds_cluster_master_password         = module.password.password_result
  rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
  rds_cluster_apply_immediately       = var.rds_cluster_apply_immediately
  allow_major_version_upgrade         = var.allow_major_version_upgrade
  db_cluster_parameter_group_name     = var.rds_cluster_parameter_group_name
  rds_cluster_deletion_protection     = var.rds_cluster_deletion_protection
  enabled_cloudwatch_logs_exports     = var.enabled_cloudwatch_logs_exports
  skip_final_snapshot                 = var.skip_final_snapshot
  # vpc_security_group_ids            = var.vpc_security_group_ids

  tag_environment = local.tag_environment
  tag_terraform   = local.tag_terraform.true
  tag_number      = local.tag_number.1
  tag_family      = local.tag_family.aurora
}
Resources: Terraform wants to recreate cluster on every apply #8
If you don't want to have your RDS in three zones, there is a workaround here: https://github.com/hashicorp/terraform-provider-aws/issues/1111#issuecomment-373433010
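The workaround in that issue amounts to telling Terraform to ignore availability-zone drift on the cluster instead of listing every zone. A sketch, assuming a plain aws_rds_cluster resource rather than the wrapper module:

```hcl
resource "aws_rds_cluster" "main" {
  # ... other arguments ...
  availability_zones = ["eu-west-1a"]

  lifecycle {
    # AWS adds extra AZs to an Aurora cluster on its side; ignoring the
    # attribute stops Terraform from trying to "undo" that drift and
    # recreating the cluster on every apply.
    ignore_changes = [availability_zones]
  }
}
```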