Using Terraform, I am able to successfully create an RDS cluster using the following config in Region 1:
resource "aws_rds_cluster" "aurora_cluster" {
cluster_identifier = "${var.environment_name}-aurora-cluster"
database_name = "mydb"
master_username = "${var.rds_master_username}"
master_password = "${var.rds_master_password}"
backup_retention_period = 14
final_snapshot_identifier = "${var.environment_name}AuroraCluster"
apply_immediately = true
db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.default.name}"
tags {
Name = "${var.environment_name}-Aurora-DB-Cluster"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_rds_cluster_instance" "aurora_cluster_instance" {
count = "${length(split(",", var.multi_azs))}"
identifier = "${var.environment_name}-aurora-instance-${count.index}"
cluster_identifier = "${aws_rds_cluster.aurora_cluster.id}"
instance_class = "db.t2.small"
publicly_accessible = true
apply_immediately = true
tags {
Name = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
output "db_primary_cluster_arn" {
rds_cluster.aurora_cluster.cluster_identifier}"
value = "${"${format("arn:aws:rds:%s:%s:cluster:%s", "${var.db_region}", "${data.aws_caller_identity.current.account_id}", "${aws_rds_cluster.aurora_cluster.cluster_identifier}")}"}"
}
and create a cross-region replica in Region 2 using the config below:
resource "aws_rds_cluster" "aurora_crr_cluster" {
cluster_identifier = "${var.environment_name}-aurora-crr-cluster"
database_name = "mydb"
master_username = "${var.rds_master_username}"
master_password = "${var.rds_master_password}"
backup_retention_period = 14
final_snapshot_identifier = "${var.environment_name}AuroraCRRCluster"
apply_immediately = true
# Referencing to the primary region's cluster
replication_source_identifier = "${var.db_primary_cluster_arn}"
tags {
Name = "${var.environment_name}-Aurora-DB-CRR-Cluster"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_rds_cluster_instance" "aurora_crr_cluster_instance" {
count = "${length(split(",", var.multi_azs))}"
identifier = "${var.environment_name}-aurora-crr-instance-${count.index}"
cluster_identifier = "${aws_rds_cluster.aurora_crr_cluster.id}"
instance_class = "db.t2.small"
publicly_accessible = true
apply_immediately = true
tags {
Name = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
When I want to promote the cross-region replica in Region 2 to a standalone cluster, I remove the replication source (replication_source_identifier) from the cross-region RDS cluster and run "terraform apply". The output from Terraform says:
module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifying... (ID: dev-aurora-crr-cluster)
replication_source_identifier: "arn:aws:rds:us-east-2:account_nbr:cluster:dev-aurora-cluster" => ""
module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifications complete after 1s (ID: dev-aurora-crr-cluster)
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
But I see NO change happening to the cross-region cluster in the AWS console. The replication source is still present and unchanged, and the cross-region cluster is NOT promoted to a standalone cluster in AWS.
If I try to do the same thing via the AWS CLI:
aws rds promote-read-replica-db-cluster --db-cluster-identifier="dev-aurora-crr-cluster" --region="us-west-1"
I see that the change is triggered immediately and the cross-region replica is promoted to a standalone cluster. Does anyone know where I may be going wrong, or does Terraform not support promoting cross-region replicas to standalone clusters? Please advise.
Terraform code is here:
resource "aws_rds_cluster" "tf-aws-rds-1" {
cluster_identifier = "aurora-cluster-1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
availability_zones = ["us-east-1a","us-east-1b","us-east-1c"]
database_name = "cupday"
master_username = "administrator"
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
storage_encrypted = true
kms_key_id = data.aws_kms_key.rds_key.arn
}
However, when I run terraform apply, I get the error message below:
aws_rds_cluster.tf-aws-rds-1: Creating...
Error: error creating RDS cluster: InvalidVPCNetworkStateFault: DB Subnet Group doesn't meet availability zone coverage requirement. Please add subnets to cover at least 2 availability zones. Current coverage: 0
status code: 400, request id: bc05fb5f-311c-4d15-821a-8b97fc27ab5b
However, I do have subnets in multiple AZs (screenshot omitted).
Any idea what the issue is and how to solve it?
P.S.: The subnets are created as below:
resource "aws_subnet" "tf-aws-sn" {
count = var.subnet_count
vpc_id = aws_vpc.tf-aws-vn.id
cidr_block = data.template_file.public_cidrsubnet[count.index].rendered
availability_zone = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)[count.index]
tags = local.common_tags
}
I get the Availability Zones as below:
data "aws_availability_zones" "available" {}
I don't see a reference to aws_db_subnet_group in your code, so I guess the default subnet group being used does not meet this constraint. You can create your own aws_db_subnet_group:
resource "aws_db_subnet_group" "db_subnets" {
name = "main"
subnet_ids = aws_subnet.tf-aws-sn[*].id
tags = {
Name = "My DB subnet group"
}
}
And then use it (no need for availability_zones in this case):
resource "aws_rds_cluster" "tf-aws-rds-1" {
cluster_identifier = "aurora-cluster-1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
db_subnet_group_name = aws_db_subnet_group.db_subnets.name
database_name = "cupday"
master_username = "administrator"
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
storage_encrypted = true
kms_key_id = data.aws_kms_key.rds_key.arn
}
I have some Terraform code that creates an AWS Aurora RDS cluster:
resource "aws_rds_cluster" "default" {
provider = aws.customer
cluster_identifier = "my_id"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
database_name = var.db_name
port = var.db_port
master_username = var.db_master_username
master_password = random_password.sqlpassword.result
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
skip_final_snapshot = true
db_subnet_group_name = aws_db_subnet_group.default.name
vpc_security_group_ids = [aws_security_group.rds.id]
deletion_protection = true
}
This code had been working fine for quite a while until just recently, when terraform apply started failing with this error: Error: Failed to modify RDS Cluster (my_id): InvalidParameterCombination: Cannot upgrade aurora-mysql from 5.7.mysql_aurora.2.07.2 to 5.7.mysql_aurora.2.03.2
To make a long story short, AWS upgraded the minor version number in a maintenance window and refuses to allow Terraform to downgrade the database. I'm fine with AWS doing this, but I don't want to have to commit new Terraform code every time this happens.
I tried being less specific in the engine version by using engine_version="5.7.mysql_aurora.2", but that failed like this: InvalidParameterCombination: Cannot find upgrade target from 5.7.mysql_aurora.2.07.2 with requested version 5.7.mysql_aurora.2.
What would be the appropriate Terraform method to allow the RDS Minor Version to float with modifications performed by AWS?
You can add an ignore_changes lifecycle block to the resource.
resource "aws_rds_cluster" "default" {
provider = aws.customer
cluster_identifier = "my_id"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
database_name = var.db_name
port = var.db_port
master_username = var.db_master_username
master_password = random_password.sqlpassword.result
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
skip_final_snapshot = true
db_subnet_group_name = aws_db_subnet_group.default.name
vpc_security_group_ids = [aws_security_group.rds.id]
deletion_protection = true
lifecycle {
ignore_changes = [
engine_version,
]
}
}
You can read more about this here: https://www.terraform.io/docs/language/meta-arguments/lifecycle.html
I'm trying to get a DocumentDB cluster up and running inside a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created yet:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added the depends_on setup to wait for the subnet to be created, but am still running into an issue:
resource "aws_docdb_cluster" "docdb" {
cluster_identifier = "my-docdb-cluster"
engine = "docdb"
master_username = "myusername"
master_password = "mypassword"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
skip_final_snapshot = true
apply_immediately = true
db_subnet_group_name = aws_subnet.eu-west-3a-private
depends_on = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in its own right that tells AWS where it may place a database instance in a VPC. It is not the same as referring to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "eu-west-3a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "eu-west-3b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to Elasticache subnet groups which use the aws_elasticache_subnet_group resource.
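As a minimal sketch (reusing the two subnets defined above), an equivalent Elasticache subnet group would look like this:
resource "aws_elasticache_subnet_group" "example" {
  name = "main-cache"
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id,
  ]
}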
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta-argument is only for resources that don't expose a parameter that would provide this dependency information directly.
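For example, here is a hedged sketch of the DocDB cluster from the question, rewritten against the aws_db_subnet_group.example above; the reference in db_subnet_group_name already creates the ordering, so an explicit depends_on is redundant:
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier  = "my-docdb-cluster"
  engine              = "docdb"
  master_username     = "myusername"
  master_password     = "mypassword"
  skip_final_snapshot = true
  apply_immediately   = true

  # Referencing the subnet group by name creates an implicit dependency,
  # so Terraform will create aws_db_subnet_group.example first.
  db_subnet_group_name = aws_db_subnet_group.example.name

  # depends_on = [aws_db_subnet_group.example]  # redundant: the reference above already orders creation
}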
It seems the value in the parameter is wrong. An aws_db_subnet_group created elsewhere outputs an id/arn, so you need to use the id value, although the depends_on clause looks okay:
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct; you can also try using the arn in place of the id.
In my application I am using an AWS Auto Scaling group created with Terraform. I launch the Auto Scaling group with a given number of instances in a region. But since only 20 instances are allowed per region, I want an Auto Scaling group that will create instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
required_version = ">= 0.9.3"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
provider "aws" {
alias = "eu-west-1"
region = "eu-west-1"
}
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "ap-southeast-1"
region = "ap-southeast-1"
}
provider "aws" {
alias = "ap-southeast-2"
region = "ap-southeast-2"
}
provider "aws" {
alias = "ap-northeast-1"
region = "ap-northeast-1"
}
provider "aws" {
alias = "sa-east-1"
region = "sa-east-1"
}
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "${var.asg_name}-"
image_id = "${var.ami_id}"
instance_type = "${var.instance_type}"
associate_public_ip_address = true
key_name = "${var.key_name}"
security_groups = ["${var.security_group_id}"]
user_data = "${data.template_file.user_data_client.rendered}"
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work; it only launches instances in a single region and throws an error once they reach 20.
How can we create instances across multiple regions in an Auto Scaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the providers to the module, as sketched below).
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.
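For reference, a hedged sketch of the module approach, assuming a hypothetical ./modules/asg module that wraps the launch configuration and Auto Scaling group from the question, and a Terraform version that supports the providers argument on modules (0.11 or later; the exact syntax differs slightly in 0.12+):
module "asg_eu_central_1" {
  # Hypothetical module path; the module would contain the aws_launch_configuration
  # and aws_autoscaling_group resources shown above.
  source = "./modules/asg"

  # Route every aws resource inside this module instance to the aliased eu-central-1 provider.
  providers = {
    aws = "aws.eu-central-1"
  }

  asg_name         = "${var.asg_name}-eu-central-1"
  instance_type    = "${var.instance_type}"
  key_name         = "${var.key_name}"
  min_size         = "${var.min_size}"
  max_size         = "${var.max_size}"
  desired_capacity = "${var.desired_capacity}"
}
Note that region-specific values (AMI IDs, security group IDs, subnet IDs) differ per region, so each module instance needs its own values for those variables rather than the ones defined for us-east-1.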
I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup, and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "template_file" "userdata" {
filename = "${var.userdata}"
vars {
domain = "${var.domain}"
name = "${var.name}"
}
}
resource "aws_spot_instance_request" "machine" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
}
}
resource "aws_route53_record" "machine" {
zone_id = "${var.route53ZoneId}"
name = "${aws_spot_instance_request.machine.tags.Name}"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working...
The documentation states that aws_spot_instance_request supports all the parameters of aws_instance, so I just changed a working aws_instance to an aws_spot_instance_request (with the addition of the price). Am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied to me.
It's a bug in Terraform; it seems to be fixed in master:
https://github.com/hashicorp/terraform/issues/1339