I have to create an RDS cluster with data encryption enabled across multiple regions. When creating the RDS cluster I need to set "storage_encrypted": true, and when encryption is enabled AWS requires a multi-Region-capable kms_key_id to create the global RDS cluster.
There are two scenarios I have to validate before creating the RDS cluster:
When "storage_encrypted": true, use the provided "kms_key_id": "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" and create the RDS cluster.
If "storage_encrypted": false, fall back to the default KMS key and create the RDS cluster, validating that condition.
How can I write conditions for the above scenarios?
In variables.tf:
variable "storage_encrypted" {
type = bool
default = false
}
variable "kms_key_id" {
type = string
default = null
}
In the vars.tfvars.json file I have two parameters.
"storage_encrypted" : true,
"kms_key_id" : "arn:aws:kms:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
This is the current aurora.tf module block for the secondary cluster:
module "aurora_secondary" {
source = "terraform-aws-modules/rds-aurora/aws"
version = "7.3.0"
apply_immediately = var.apply_immediately
create_cluster = var.create_secondary_cluster
providers = { aws = aws.secondary }
is_primary_cluster = false
source_region = var.primary_region
name = var.name
engine = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine[0] : null
engine_version = var.create_global_cluster ? aws_rds_global_cluster.this.*.engine_version[0] : null
global_cluster_identifier = var.create_global_cluster ? aws_rds_global_cluster.this.*.id[0] : null
storage_encrypted = var.storage_encrypted # <-- the argument in question
create_random_password = var.create_random_password
instance_class = var.instance_class
instances = var.secondary_instances
vpc_id = var.vpc_id_us_east_2
create_db_subnet_group = var.create_secondary_db_subnet_group
subnets = var.private_subnets_us_east_2
create_security_group = var.create_secondary_security_group
allowed_cidr_blocks = var.allowed_cidr_blocks
create_monitoring_role = var.create_monitoring_role
monitoring_interval = var.monitoring_interval
monitoring_role_arn = var.monitoring_role_arn
backup_retention_period = var.backup_retention_period
preferred_backup_window = var.preferred_backup_window
preferred_maintenance_window = var.preferred_maintenance_window
depends_on = [
module.aurora
]
tags = local.cluster_tags
}
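One way to express the condition is a conditional expression on the module argument. Below is a minimal sketch, assuming the rds-aurora module exposes a kms_key_id input and that passing null falls back to the default behaviour (the default RDS key when encryption is on, no key otherwise):

module "aurora_secondary" {
  # ... existing arguments from the block above ...
  storage_encrypted = var.storage_encrypted
  # Only pass the supplied key ARN when encryption is requested;
  # otherwise leave the argument unset (null).
  kms_key_id        = var.storage_encrypted ? var.kms_key_id : null
}

Depending on your Terraform version, a check block or variable validation can additionally assert that a key ARN is actually supplied whenever var.storage_encrypted is true.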
Related
I am trying to create a Neptune DB with Terraform, but I am facing the following issue.
Please find the Terraform script I am using:
## Create Neptune DB cluster and instance
resource "aws_neptune_cluster_parameter_group" "neptune1" {
family = "neptune1.2"
name = "neptune1"
description = "neptune cluster parameter group"
parameter {
name = "neptune_enable_audit_log"
value = 1
apply_method = "pending-reboot"
}
}
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
}
resource "aws_neptune_cluster_instance" "gh-instance" {
count = 1
cluster_identifier = "${aws_neptune_cluster.gh-cluster.id}"
engine = "neptune"
instance_class = "db.r5.large"
apply_immediately = true
}
resource "aws_neptune_subnet_group" "gh-dbg" {
name = "gh-dbg"
subnet_ids = ["${aws_subnet.private.id}" , "${aws_subnet.public.id}"]
}
I think I am not attaching the parameter group to the Neptune DB, and I am not sure how to do that.
I have tried the following keys in the Terraform instance script:
db_parameter_group
parameter_group_name
But both throw the error 'This argument is not expected here'.
According to the official documentation, the argument you are looking for is "neptune_cluster_parameter_group_name":
resource "aws_neptune_cluster" "gh-cluster" {
cluster_identifier = "gh-db"
skip_final_snapshot = true
iam_database_authentication_enabled = false
apply_immediately = true
neptune_subnet_group_name = "${aws_neptune_subnet_group.gh-dbg.name}"
vpc_security_group_ids = ["${aws_security_group.sgdb.id}"]
iam_roles = ["${aws_iam_role.NeptuneRole.arn}"]
neptune_cluster_parameter_group_name = "${aws_neptune_cluster_parameter_group.neptune1.name}"
}
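As a side note, if instance-level parameters are also needed, the aws_neptune_cluster_instance resource takes a separate neptune_parameter_group_name argument; a hedged sketch (the aws_neptune_parameter_group resource and its name below are illustrative, not taken from the original post):

resource "aws_neptune_parameter_group" "neptune1_instance" {
  family      = "neptune1.2"
  name        = "neptune1-instance"
  description = "neptune instance parameter group"
}

resource "aws_neptune_cluster_instance" "gh-instance" {
  count              = 1
  cluster_identifier = aws_neptune_cluster.gh-cluster.id
  engine             = "neptune"
  instance_class     = "db.r5.large"
  apply_immediately  = true
  # Instance-level parameter group, distinct from the cluster parameter group above.
  neptune_parameter_group_name = aws_neptune_parameter_group.neptune1_instance.name
}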
We are using the GCP network and GKE modules in Terraform to create the VPC and then the GKE cluster. Now we would like to create a firewall rule whose target is the GKE nodes. We don't want to update the existing auto-created firewall rules, because the naming format GCP uses for them might change in the future and our logic could then fail. That's why we need a separate firewall rule along with a separate network tag pointing at the GKE nodes.
Module info:
VPC
module "vpc" {
source = "terraform-google-modules/network/google"
#version = "~> 2.5"
project_id = var.project_id
network_name = "${var.project_name}-${var.env_name}-vpc"
subnets = [
{
subnet_name = "${var.project_name}-${var.env_name}-subnet"
subnet_ip = "${var.subnetwork_cidr}"
subnet_region = var.region
}
]
secondary_ranges = {
"${var.project_name}-${var.env_name}-subnet" = [
{
range_name = "${var.project_name}-gke-pod-ip-range"
ip_cidr_range = "${var.ip_range_pods_cidr}"
},
{
range_name = "${var.project_name}-gke-service-ip-range"
ip_cidr_range = "${var.ip_range_services_cidr}"
}
]
}
}
GKE_CLUSTER
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
project_id = var.project_id
name = "${var.project_name}-gke-${var.env_name}-cluster"
regional = true
region = var.region
zones = ["${var.region}-a", "${var.region}-b", "${var.region}-c"]
network = module.vpc.network_name
subnetwork = module.vpc.subnets_names[0]
ip_range_pods = "${var.project_name}-gke-pod-ip-range"
ip_range_services = "${var.project_name}-gke-service-ip-range"
http_load_balancing = false
network_policy = false
horizontal_pod_autoscaling = true
filestore_csi_driver = false
enable_private_endpoint = false
enable_private_nodes = true
master_ipv4_cidr_block = "${var.control_plane_cidr}"
istio = false
cloudrun = false
dns_cache = false
node_pools = [
{
name = "${var.project_name}-gke-node-pool"
machine_type = "${var.machine_type}"
node_locations = "${var.region}-a,${var.region}-b,${var.region}-c"
min_count = "${var.node_pools_min_count}"
max_count = "${var.node_pools_max_count}"
disk_size_gb = "${var.node_pools_disk_size_gb}"
# local_ssd_count = 0
# spot = false
# local_ssd_ephemeral_count = 0
# disk_type = "pd-standard"
# image_type = "COS_CONTAINERD"
# enable_gcfs = false
auto_repair = true
auto_upgrade = true
# service_account = "project-service-account#<PROJECT ID>.iam.gserviceaccount.com"
preemptible = false
# initial_node_count = 80
}
]
# node_pools_tags = {
# all = []
# default-node-pool = ["default-node-pool",]
# }
}
FIREWALL
module "firewall_rules" {
source = "terraform-google-modules/network/google//modules/firewall-rules"
project_id = var.project_id
network_name = module.vpc.network_name
rules = [{
name = "allow-istio-ingress"
description = null
direction = "INGRESS"
priority = null
ranges = ["${var.control_plane_cidr}"]
source_tags = null
source_service_accounts = null
target_tags = null
target_service_accounts = null
allow = [{
protocol = "tcp"
ports = ["15017"]
}]
deny = []
log_config = {
metadata = "INCLUDE_ALL_METADATA"
}
}]
depends_on = [module.gke]
}
Although the GKE module has a tags property to define tags explicitly, we still need assistance with how to set it properly and then reference the same tag value in the firewall module.
I found a working solution to the question I posted earlier. Referring to the GKE module snippet above, we only need to modify the part below, and an explicit network tag will be created that points to all the nodes in that node pool.
module "gke" {
.
.
node_pools = [
{
name = "gke-node-pool"
.
.
.
},
]
node_pools_tags = {
"gke-node-pool" = "gke-node-pool-network-tag"
}
}
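The firewall module can then target those nodes with the same tag value; a sketch based on the firewall_rules module call from the question, where the tag string is the one chosen in node_pools_tags above:

module "firewall_rules" {
  source       = "terraform-google-modules/network/google//modules/firewall-rules"
  project_id   = var.project_id
  network_name = module.vpc.network_name

  rules = [{
    name                    = "allow-istio-ingress"
    description             = null
    direction               = "INGRESS"
    priority                = null
    ranges                  = [var.control_plane_cidr]
    source_tags             = null
    source_service_accounts = null
    # Only the GKE nodes carrying the tag created via node_pools_tags are targeted.
    target_tags             = ["gke-node-pool-network-tag"]
    target_service_accounts = null
    allow = [{
      protocol = "tcp"
      ports    = ["15017"]
    }]
    deny = []
    log_config = {
      metadata = "INCLUDE_ALL_METADATA"
    }
  }]

  depends_on = [module.gke]
}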
I have two VPCs: a blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e) and a shared VPC (vpc_id = vpc-076a4c26ec2217f9d). VPC peering connects the two VPCs. I provision MariaDB in the shared VPC, but I got the errors below.
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e
status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.master_identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
storage_type = var.storage_type
storage_encrypted = var.storage_encrypted
name = var.mariadb_name
username = var.mariadb_username
password = var.mariadb_password
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_master
backup_window = var.backup_window_master
multi_az = true
tags = {
Owner = "MariaDB"
Environment = "blue-green"
}
enabled_cloudwatch_logs_exports = ["audit", "general"]
subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
create_db_option_group = true
apply_immediately = true
family = var.family
major_engine_version = var.major_engine_version
final_snapshot_identifier = var.final_snapshot_identifier
deletion_protection = false
parameters = [
{
name = "character_set_client"
value = "utf8"
},
{
name = "character_set_server"
value = "utf8"
}
]
options = [
{
option_name = "MARIADB_AUDIT_PLUGIN"
option_settings = [
{
name = "SERVER_AUDIT_EVENTS"
value = "CONNECT"
},
{
name = "SERVER_AUDIT_FILE_ROTATIONS"
value = "7"
},
]
},
]
}
module "replica" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.replica_identifier
replicate_source_db = module.master.this_db_instance_id
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
username = ""
password = ""
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_replica
backup_window = var.backup_window_replica
multi_az = false
backup_retention_period = 0
create_db_subnet_group = false
create_db_option_group = false
create_db_parameter_group = false
major_engine_version = var.major_engine_version
}
Normally, what you should do is pass vpc_security_group_ids only from the VPC where your RDS instance lives. In your case that is the shared VPC:
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id]
With this one SG attached, you add ingress rules to it that allow traffic from the other security groups. So basically, your RDS instance has one SG with multiple ingress rules, and each ingress rule specifies another security group as the allowed source.
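A minimal sketch of that pattern (the rule name is illustrative; referencing a security group from the peered blue VPC as the source only works if your peering configuration allows cross-VPC security group references, otherwise allow the blue VPC's CIDR range via cidr_blocks instead):

resource "aws_security_group_rule" "mariadb_from_eks_workers" {
  type                     = "ingress"
  from_port                = var.mariadb_port
  to_port                  = var.mariadb_port
  protocol                 = "tcp"
  # The single SG attached to the RDS instance, in the shared VPC.
  security_group_id        = data.terraform_remote_state.vpc-shared.outputs.default_security_group_id
  # One rule per source SG that needs access; repeat for the other SGs.
  source_security_group_id = data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id
}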
I use the module https://github.com/cloudposse/terraform-aws-elasticsearch to provision Elasticsearch. I set kibana_hostname_enabled = false and domain_hostname_enabled = false. Per the documentation, dns_zone_id is not required. But it still asks for the DNS zone ID when I run terraform plan:
terraform plan
var.dns_zone_id
Route53 DNS Zone ID to add hostname records for Elasticsearch domain and Kibana
Enter a value:
I prefer not to use Route53. How can I avoid dns_zone_id? Below is the code:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
zone_awareness_enabled = var.zone_awareness_enabled
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
#dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = false
domain_hostname_enabled = false
iam_role_arns = ["*"]
iam_actions = ["es:*"]
enabled = var.enabled
vpc_enabled = var.vpc_enabled
name = var.name
tags = var.tags
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
}
In your code you have the following:
#dns_zone_id = var.dns_zone_id
So the plan is asking for your own var.dns_zone_id, which you declared (without a default) in your root module, not for the module's variable.
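A minimal sketch, assuming the prompt comes from a declaration like this in your own variables.tf: either remove the declaration entirely, or give it a default so terraform plan stops prompting for a value:

variable "dns_zone_id" {
  type    = string
  # An empty default prevents the interactive prompt when the value is unused.
  default = ""
}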
By setting kibana_hostname_enabled = false, will Terraform provide a random URL/endpoint for accessing Kibana?
So I created an RDS instance and I am trying to import it into Terraform. However, I am using modules in my code, so when running terraform import I get an error.
At first it says:
module.rds_dr.aws_db_instance.db_instance: Import prepared!
Prepared aws_db_instance for import
Then it gives this error:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_db_instance.db_instance,
the provider detected that no object exists with the given id. Only
pre-existing objects can be imported; check that the id is correct and that it
is associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
The command I ran was:
terraform import module.rds_dr.aws_db_instance.db_instance db-ID
I created the instance using a module sourced from GitHub. The code for the RDS instance is below:
# PostgreSQL RDS DR Instance
module "rds_dr" {
source = "git#github.com:****"
name = var.rds_name_dr
engine = var.rds_engine_dr
engine_version = var.rds_engine_version_dr
family = var.rds_family_dr
instance_class = var.rds_instance_class_dr
# WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
password = random_string.rds_password.result
port = var.rds_port_dr
security_groups = [aws_security_group.rds_app.id]
subnets = [module.vpc.public_subnets]
auto_minor_version_upgrade = var.rds_auto_minor_version_upgrade_dr
backup_retention_period = var.rds_backup_retention_period_dr
backup_window = var.rds_backup_window_dr
maintenance_window = var.rds_maintenance_window_dr
environment = var.environment
kms_key_id = aws_kms_key.rds.arn
multi_az = var.rds_multi_az_dr
notification_topic = var.rds_notification_topic_dr
publicly_accessible = var.rds_publicly_accessible_dr
storage_encrypted = var.rds_storage_encrypted_dr
storage_size = var.rds_storage_size_dr
storage_type = var.rds_storage_type_dr
apply_immediately = true
}
Also, this is part of the module code:
resource "aws_db_instance" "db_instance" {
allocated_storage = local.storage_size
allow_major_version_upgrade = false
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
backup_retention_period = var.read_replica ? 0 : var.backup_retention_period
backup_window = var.backup_window
character_set_name = local.is_oracle ? var.character_set_name : null
copy_tags_to_snapshot = var.copy_tags_to_snapshot
db_subnet_group_name = local.same_region_replica ? null : local.subnet_group
deletion_protection = var.enable_deletion_protection
engine = var.engine
engine_version = local.engine_version
final_snapshot_identifier = lower("${var.name}-final-snapshot${var.final_snapshot_suffix == "" ? "" : "-"}${var.final_snapshot_suffix}")
iam_database_authentication_enabled = var.iam_authentication_enabled
identifier_prefix = "${lower(var.name)}-"
instance_class = var.instance_class
iops = var.storage_iops
kms_key_id = var.kms_key_id
license_model = var.license_model == "" ? local.license_model : var.license_model
maintenance_window = var.maintenance_window
max_allocated_storage = var.max_storage_size
monitoring_interval = var.monitoring_interval
monitoring_role_arn = var.monitoring_interval > 0 ? local.monitoring_role_arn : null
multi_az = var.read_replica ? false : var.multi_az
name = var.dbname
option_group_name = local.same_region_replica ? null : local.option_group
parameter_group_name = local.same_region_replica ? null : local.parameter_group
password = var.password
port = local.port
publicly_accessible = var.publicly_accessible
replicate_source_db = var.source_db
skip_final_snapshot = var.read_replica || var.skip_final_snapshot
snapshot_identifier = var.db_snapshot_id
storage_encrypted = var.storage_encrypted
storage_type = var.storage_type
tags = merge(var.tags, local.tags)
timezone = local.is_mssql ? var.timezone : null
username = var.username
vpc_security_group_ids = var.security_groups
}
This is my code for the providers:
# pinned provider versions
provider "random" {
version = "~> 2.3.0"
}
provider "template" {
version = "~> 2.1.2"
}
provider "archive" {
version = "~> 1.1"
}
# default provider
provider "aws" {
version = "~> 2.44"
allowed_account_ids = [var.aws_account_id]
region = "us-east-1"
}
# remote state
terraform {
required_version = "0.12.24"
backend "s3" {
key = "terraform.dev.tfstate"
encrypt = "true"
bucket = "dev-tfstate"
region = "us-east-1"
}
}
I have inserted the correct DB ID, and I still do not know why Terraform says it cannot "import non-existent remote object".
How do I fix this?
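A hedged note rather than a confirmed diagnosis: the module shown above creates the instance with identifier_prefix = "${lower(var.name)}-", so the real DB instance identifier is that prefix plus a generated suffix, and terraform import must be given that exact identifier in the provider's configured region (us-east-1 here). One way to double-check before importing:

# List the DB instance identifiers that actually exist in the provider's region
# (assumes the AWS CLI is configured for the same account).
aws rds describe-db-instances --region us-east-1 --query 'DBInstances[].DBInstanceIdentifier'

# Then import using the exact identifier returned above.
terraform import module.rds_dr.aws_db_instance.db_instance <exact-db-instance-identifier>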