I have two VPCs: a blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e) and a shared VPC (vpc_id = vpc-076a4c26ec2217f9d), connected by VPC peering. I provision MariaDB in the shared VPC, but I get the error below.
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e
status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.master_identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
storage_type = var.storage_type
storage_encrypted = var.storage_encrypted
name = var.mariadb_name
username = var.mariadb_username
password = var.mariadb_password
port = var.mariadb_port
vpc_security_group_ids = [
  data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
  data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id
]
maintenance_window = var.maintenance_window_master
backup_window = var.backup_window_master
multi_az = true
tags = {
  Owner = "MariaDB"
  Environment = "blue-green"
}
enabled_cloudwatch_logs_exports = ["audit", "general"]
subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
create_db_option_group = true
apply_immediately = true
family = var.family
major_engine_version = var.major_engine_version
final_snapshot_identifier = var.final_snapshot_identifier
deletion_protection = false
parameters = [
  {
    name = "character_set_client"
    value = "utf8"
  },
  {
    name = "character_set_server"
    value = "utf8"
  }
]
options = [
  {
    option_name = "MARIADB_AUDIT_PLUGIN"
    option_settings = [
      {
        name = "SERVER_AUDIT_EVENTS"
        value = "CONNECT"
      },
      {
        name = "SERVER_AUDIT_FILE_ROTATIONS"
        value = "7"
      },
    ]
  },
]
}
module "replica" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.replica_identifier
replicate_source_db = module.master.this_db_instance_id
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
username = ""
password = ""
port = var.mariadb_port
vpc_security_group_ids = [
  data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
  data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
  data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id
]
maintenance_window = var.maintenance_window_replica
backup_window = var.backup_window_replica
multi_az = false
backup_retention_period = 0
create_db_subnet_group = false
create_db_option_group = false
create_db_parameter_group = false
major_engine_version = var.major_engine_version
}
Normally, what you should do is pass only security groups from the VPC where your RDS instance lives to vpc_security_group_ids. In your case that is the shared VPC:
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id]
With this single SG attached, you would then add ingress rules to it to allow traffic from the other security groups. So basically, your RDS instance would have one SG with multiple ingress rules, each specifying another security group as the allowed source.
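A minimal sketch of that approach (the rule resource name below is made up, and referencing a security group that lives in the peered blue VPC by ID only works for same-region VPC peering; otherwise allow the blue VPC CIDR block instead):
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id]

# Hypothetical rule: allow MariaDB traffic from the blue VPC EKS workers into the shared-VPC SG
resource "aws_security_group_rule" "mariadb_from_blue_workers" {
  type                     = "ingress"
  from_port                = var.mariadb_port
  to_port                  = var.mariadb_port
  protocol                 = "tcp"
  security_group_id        = data.terraform_remote_state.vpc-shared.outputs.default_security_group_id
  source_security_group_id = data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id
}
Repeat (or use for_each over) such a rule for each blue-VPC security group that needs access to the database.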
Related
I am trying to create a Neptune DB with Terraform, but I am facing the following issue.
Here is the Terraform script I am using:
## Create Neptune DB cluster and instance
resource "aws_neptune_cluster_parameter_group" "neptune1" {
family = "neptune1.2"
name = "neptune1"
description = "neptune cluster parameter group"
parameter {
  name = "neptune_enable_audit_log"
  value = 1
  apply_method = "pending-reboot"
}
}
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
}
resource "aws_neptune_cluster_instance" "gh-instance" {
count = 1
cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
engine = "neptune"
instance_class = "db.r5.large"
apply_immediately = true
}
resource "aws_neptune_subnet_group" "gh-dbg" {
name = "gh-dbg"
subnet_ids = ["${aws_subnet.private.id}" , "${aws_subnet.public.id}"]
}
I think I am not attaching the parameter group to the Neptune DB, and I am not sure how to do that.
I have tried the following keys in the Terraform instance script:
db_parameter_group
parameter_group_name
But both throw the error 'This argument is not expected here'.
According to the official documentation, the argument you are looking for is neptune_cluster_parameter_group_name:
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
neptune_cluster_parameter_group_name = "${aws_neptune_cluster_parameter_group.neptune1.name}"
}
I have to create RDS with storage encryption enabled across multiple regions. When creating the RDS cluster I need to set "storage_encrypted": true, and when encryption is enabled AWS requires a multi-Region kms_key_id to create a global RDS cluster.
There are two scenarios I have to handle before creating the RDS cluster:
When "storage_encrypted" is true, use the supplied "kms_key_id" : "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" and create the RDS cluster.
When "storage_encrypted" is false, fall back to the default KMS key and create the RDS cluster based on that condition.
How can I express these conditions in Terraform?
In the variables.tf
variable "storage_encrypted" {
type = bool
default = false
}
variable "kms_key_id" {
type = string
default = null
}
In the vars.tfvars.json file I have two parameters.
"storage_encrypted" : true,
"kms_key_id" : "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
This is the current aurora.tf module block for the secondary cluster:
module "aurora_secondary" {
source = "terraform-aws-modules/rds-aurora/aws"
version = "7.3.0"
apply_immediately = var.apply_immediately
create_cluster = var.create_secondary_cluster
providers = { aws = aws.secondary }
is_primary_cluster = false
source_region = var.primary_region
name = var.name
engine = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine[0] : null
engine_version = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine_version[0] : null
global_cluster_identifier = var.create_global_cluster ? aws_rds_global_cluster.this.*.id[0] : null
storage_encrypted = var.storage_encrypted # <-- the argument in question
create_random_password = var.create_random_password
instance_class = var.instance_class
instances = var.secondary_instances
vpc_id = var.vpc_id_us_east_2
create_db_subnet_group = var.create_secondary_db_subnet_group
subnets = var.private_subnets_us_east_2
create_security_group = var.create_secondary_security_group
allowed_cidr_blocks = var.allowed_cidr_blocks
create_monitoring_role = var.create_monitoring_role
monitoring_interval = var.monitoring_interval
monitoring_role_arn = var.monitoring_role_arn
backup_retention_period = var.backup_retention_period
preferred_backup_window = var.preferred_backup_window
preferred_maintenance_window = var.preferred_maintenance_window
depends_on = [
module.aurora
]
tags = local.cluster_tags
}
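One way to express that condition is a conditional expression on the key (a sketch, assuming the rds-aurora module exposes a kms_key_id input; when it is null, AWS falls back to its default RDS key):
# Inside the aurora module block(s): only pass the customer-managed key when encryption is on
storage_encrypted = var.storage_encrypted
kms_key_id        = var.storage_encrypted ? var.kms_key_id : null
For a global cluster, keep in mind that the key passed to the secondary must be usable in the secondary region (for example a multi-Region replica key), which is why kms_key_id is kept as a plain string variable here.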
We have a Lambda function in our VPC so that it can connect to our RDS instance. This Lambda also needs to connect to S3. It seems that in order to connect to S3 from a VPC, you need to set up a VPC endpoint of the Gateway type. Given the config below we are able to connect to our database, but we are still unable to get_object from S3:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.12.0"
name = var.name
cidr = var.vpc_cidr
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
public_subnets = var.vpc_public_subnets
private_subnets = var.vpc_private_subnets
database_subnets = var.vpc_database_subnets
create_database_subnet_group = true
create_database_subnet_route_table = true
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_dns_hostnames = true
tags = local.default_tags
}
module "endpoints_us_east_1" {
source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"
version = "3.10.0"
vpc_id = module.vpc.vpc_id
security_group_ids = [module.security_group_allow_all.security_group_id]
endpoints = {
  s3 = {
    service = "s3"
    service_type = "Gateway"
    route_table_ids = flatten([module.vpc.private_route_table_ids])
    tags = { Name = "s3-vpc-endpoint" }
  },
}
tags = local.default_tags
}
module "security_group_allow_all" {
source = "terraform-aws-modules/security-group/aws"
name = "${var.name}-allow-all"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = [var.vpc_cidr]
ingress_rules = ["all-all"]
egress_cidr_blocks = [var.vpc_cidr]
egress_rules = ["all-all"]
}
The lambda function (using the terraform module) has these settings applied to it:
vpc_subnet_ids = data.terraform_remote_state.foundation.outputs.vpc_private_subnets
vpc_security_group_ids = [data.terraform_remote_state.foundation.outputs.security_group_allow_all_id]
attach_network_policy = true
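One thing worth checking, as a sketch rather than a confirmed fix: the security group's egress only allows traffic to the VPC CIDR, while traffic through an S3 Gateway endpoint is addressed to Amazon's public S3 ranges (the regional S3 prefix list), so an additional egress rule may be needed. The snippet below assumes var.region is available:
# Look up the regional S3 prefix list that the Gateway endpoint routes to
data "aws_prefix_list" "s3" {
  name = "com.amazonaws.${var.region}.s3"
}

# Allow HTTPS egress from the Lambda's security group to the S3 prefix list
resource "aws_security_group_rule" "lambda_to_s3" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  prefix_list_ids   = [data.aws_prefix_list.s3.id]
  security_group_id = module.security_group_allow_all.security_group_id
}
Also verify that the Lambda's IAM role allows s3:GetObject on the bucket; the Gateway endpoint provides a network path but grants no permissions by itself.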
We are utilizing the GCP network and GKE modules in Terraform to create the VPC and then the GKE cluster. Now we would like to create a firewall rule whose target is the GKE nodes. We don't want to modify the existing auto-created firewall rules, because the naming format GCP uses for them might change in the future and our logic could break. That's why we need a separate firewall rule, along with a separate network tag attached to the GKE nodes.
Module info
VPC
module "vpc" {
source = "terraform-google-modules/network/google"
#version = "~> 2.5"
project_id = var.project_id
network_name = "${var.project_name}-${var.env_name}-vpc"
subnets = [
  {
    subnet_name = "${var.project_name}-${var.env_name}-subnet"
    subnet_ip = "${var.subnetwork_cidr}"
    subnet_region = var.region
  }
]
secondary_ranges = {
  "${var.project_name}-${var.env_name}-subnet" = [
    {
      range_name = "${var.project_name}-gke-pod-ip-range"
      ip_cidr_range = "${var.ip_range_pods_cidr}"
    },
    {
      range_name = "${var.project_name}-gke-service-ip-range"
      ip_cidr_range = "${var.ip_range_services_cidr}"
    }
  ]
}
}
GKE_CLUSTER
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
project_id = var.project_id
name = "${var.project_name}-gke-${var.env_name}-cluster"
regional = true
region = var.region
zones = ["${var.region}-a", "${var.region}-b", "${var.region}-c"]
network = module.vpc.network_name
subnetwork = module.vpc.subnets_names[0]
ip_range_pods = "${var.project_name}-gke-pod-ip-range"
ip_range_services = "${var.project_name}-gke-service-ip-range"
http_load_balancing = false
network_policy = false
horizontal_pod_autoscaling = true
filestore_csi_driver = false
enable_private_endpoint = false
enable_private_nodes = true
master_ipv4_cidr_block = "${var.control_plane_cidr}"
istio = false
cloudrun = false
dns_cache = false
node_pools = [
  {
    name = "${var.project_name}-gke-node-pool"
    machine_type = "${var.machine_type}"
    node_locations = "${var.region}-a,${var.region}-b,${var.region}-c"
    min_count = "${var.node_pools_min_count}"
    max_count = "${var.node_pools_max_count}"
    disk_size_gb = "${var.node_pools_disk_size_gb}"
    # local_ssd_count = 0
    # spot = false
    # local_ssd_ephemeral_count = 0
    # disk_type = "pd-standard"
    # image_type = "COS_CONTAINERD"
    # enable_gcfs = false
    auto_repair = true
    auto_upgrade = true
    # service_account = "project-service-account#<PROJECT ID>.iam.gserviceaccount.com"
    preemptible = false
    # initial_node_count = 80
  }
]
# node_pools_tags = {
#   all = []
#   default-node-pool = ["default-node-pool",]
# }
}
FIREWALL
module "firewall_rules" {
source = "terraform-google-modules/network/google//modules/firewall-rules"
project_id = var.project_id
network_name = module.vpc.network_name
rules = [{
  name = "allow-istio-ingress"
  description = null
  direction = "INGRESS"
  priority = null
  ranges = ["${var.control_plane_cidr}"]
  source_tags = null
  source_service_accounts = null
  target_tags = null
  target_service_accounts = null
  allow = [{
    protocol = "tcp"
    ports = ["15017"]
  }]
  deny = []
  log_config = {
    metadata = "INCLUDE_ALL_METADATA"
  }
}]
depends_on = [module.gke]
}
Although the GKE module has a tags property to define tags explicitly, we still need help instantiating it properly and then referencing the same tag value in the firewall module.
I found a working solution to my question posted earlier. Referring back to the GKE module snippet, we only need to modify the part below, and an explicit network tag will be created and attached to all the nodes in that node pool.
module "gke" {
.
.
node_pools = [
{
name = "gke-node-pool"
.
.
.
},
]
node_pools_tags = {
"gke-node-pool" = "gke-node-pool-network-tag"
}
}
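To close the loop with the firewall module from the question: once the nodes carry that network tag, the rule can target it directly (a sketch; the only change from the question's firewall_rules block is target_tags, and the tag string just has to match what was set in node_pools_tags):
module "firewall_rules" {
  # ...same arguments as in the question...
  rules = [{
    # ...other rule attributes unchanged...
    target_tags = ["gke-node-pool-network-tag"]
  }]
  depends_on = [module.gke]
}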
I have an ec2 resource (shown) with its own security group (not shown)
resource "aws_instance" "outpost" {
ami = "ami-0469d1xxxxxxxx"
instance_type = "t2.micro"
key_name = module.secretsmanager.key_name
vpc_security_group_ids = [module.ec2_security_group.security_group_id]
subnet_id = module.vpc.public_subnets[0]
tags = {
  Name = "${var.env}-${var.user}-ec2-outpost"
  Terraform = "true"
  Environment = var.env
  Created = "${timestamp()}"
}
}
A security group for an RDS instance that has ingress and egress rules for that ec2's security group:
module "db_security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 4"
name = "${var.env}-${var.user}-${local.db_name}"
vpc_id = module.vpc.vpc_id
ingress_with_source_security_group_id = [
  {
    rule = "postgresql-tcp"
    source_security_group_id = module.ec2_security_group.security_group_id
  }
]
egress_with_source_security_group_id = [
  {
    rule = "postgresql-tcp"
    source_security_group_id = module.ec2_security_group.security_group_id
  }
]
}
And the RDS instance that is in db_security_group
module "rds" {
source = "terraform-aws-modules/rds/aws"
version = "~> 3.4.0"
identifier = "${var.env}-${var.user}-${local.db_name}"
engine = var.postgres.engine
engine_version = var.postgres.engine_version
family = var.postgres.family
major_engine_version = var.postgres.major_engine_version
instance_class = var.postgres.instance_class
allocated_storage = var.postgres.allocated_storage
max_allocated_storage = var.postgres.max_allocated_storage
storage_encrypted = var.postgres.storage_encrypted
name = var.postgres.name
username = var.postgres.username
password = var.rds_password
port = var.postgres.port
multi_az = var.postgres.multi_az
subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [module.db_security_group.security_group_id]
maintenance_window = var.postgres.maintenance_window
backup_window = var.postgres.backup_window
enabled_cloudwatch_logs_exports = var.postgres.enabled_cloudwatch_logs_exports
backup_retention_period = var.postgres.backup_retention_period
skip_final_snapshot = var.postgres.skip_final_snapshot
deletion_protection = var.postgres.deletion_protection
performance_insights_enabled = var.postgres.performance_insights_enabled
performance_insights_retention_period = var.postgres.performance_insights_retention_period
create_monitoring_role = var.postgres.create_monitoring_role
monitoring_role_name = "${var.env}-${var.user}-${var.postgres.monitoring_role_name}"
monitoring_interval = var.postgres.monitoring_interval
snapshot_identifier = var.postgres.snapshot_identifier
}
When I change something on the EC2 instance (like, say, iam_instance_profile), or anything about the instances referenced in the inbound/outbound rules of module.db_security_group.security_group_id, why does the RDS instance get destroyed and recreated by Terraform?
It seems that, in addition to the username and password behavior seen when snapshot_identifier is given (here and here), Terraform will also mark the RDS instance for deletion and recreation when either of those parameters is set. You will see this happening when re-applying the plan in question, because the initial username and/or password is never actually applied by Terraform on a snapshot restore, so Terraform thinks there is a change.