Terraform error creating subnet dependency - amazon-web-services

I'm trying to get a DocumentDB cluster up and running from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created yet:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added depends_on to wait for the subnet to be created, but I'm still running into an issue.
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"

A DB subnet group is a logical resource in its own right that tells AWS where it may place a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to ElastiCache subnet groups, which use the aws_elasticache_subnet_group resource.
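A minimal sketch of that, reusing the example subnets above (the group name here is hypothetical):

resource "aws_elasticache_subnet_group" "example" {
  name = "example-cache-subnets" # hypothetical name
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id,
  ]
}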
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta parameter is only for resources that don't expose a parameter that would provide this dependency information directly.
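Applied back to the original question, a minimal sketch might look like the following. It assumes a hypothetical second private subnet, since subnet groups need to span at least two AZs; aws_docdb_subnet_group is the DocDB-specific variant of the same idea:

resource "aws_docdb_subnet_group" "docdb" {
  name = "docdb-subnet-group"
  subnet_ids = [
    aws_subnet.eu-west-3a-private.id,
    aws_subnet.eu-west-3b-private.id, # hypothetical second subnet in another AZ
  ]
}

resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  # referencing the subnet group by name already creates the dependency,
  # so no depends_on is needed
  db_subnet_group_name    = aws_docdb_subnet_group.docdb.name
}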

It seems the value in the parameter is wrong. A db_subnet_group created somewhere else gives id/arn as its output, so you need to use the id value, although the depends_on clause looks okay.
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct; you can also try using arn in place of id.
Thanks,
Ashish

Related

Unable to create RDS instance even though subnets are in different Availability Zones

Terraform code is here:
resource "aws_rds_cluster" "tf-aws-rds-1" {
cluster_identifier = "aurora-cluster-1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
availability_zones = ["us-east-1a","us-east-1b","us-east-1c"]
database_name = "cupday"
master_username = "administrator"
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
storage_encrypted = true
kms_key_id = data.aws_kms_key.rds_key.arn
}
However, when I run terraform apply, I get the error message below:
aws_rds_cluster.tf-aws-rds-1: Creating...
Error: error creating RDS cluster: InvalidVPCNetworkStateFault: DB Subnet Group doesn't meet availability zone coverage requirement. Please add subnets to cover at least 2 availability zones. Current coverage: 0
status code: 400, request id: bc05fb5f-311c-4d15-821a-8b97fc27ab5b
However, I do have subnets in multiple AZs (screenshot omitted).
Any idea what the issue is and how do I solve it?
P.S.: The subnets are created as below:
resource "aws_subnet" "tf-aws-sn" {
count = var.subnet_count
vpc_id = aws_vpc.tf-aws-vn.id
cidr_block = data.template_file.public_cidrsubnet[count.index].rendered
availability_zone = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)[count.index]
tags = local.common_tags
}
I get the Availability Zones as below:
data "aws_availability_zones" "available" {}
I don't see a reference to aws_db_subnet_group in your code, so I guess the default subnet group being used does not meet this constraint. You can create your own aws_db_subnet_group:
resource "aws_db_subnet_group" "db_subnets" {
name = "main"
subnet_ids = aws_subnet.tf-aws-sn[*].id
tags = {
Name = "My DB subnet group"
}
}
And then use it (no need for availability_zones in this case):
resource "aws_rds_cluster" "tf-aws-rds-1" {
cluster_identifier = "aurora-cluster-1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.03.2"
db_subnet_group_name = aws_db_subnet_group.db_subnets.name
database_name = "cupday"
master_username = "administrator"
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
storage_encrypted = true
kms_key_id = data.aws_kms_key.rds_key.arn
}

DB Subnet Group doesn't meet availability zone coverage requirement. Please add subnets to cover at least 2 availability zones. Current coverage: 1

I have created two subnets for RDS, but I am still getting the error "DB Subnet Group doesn't meet the availability zone coverage requirement. Please add subnets to cover at least 2 availability zones. Current coverage: 1". When I check, both subnets, in fact all of my subnets, are getting created in the same availability zone. Can you please guide me?
resource "aws_db_subnet_group" "rdssubnet" {
name = "database subnet"
subnet_ids = ["${aws_subnet.rds_subnet.id}","${aws_subnet.rds_subnet1.id}"]
}
# provision the database
resource "aws_db_instance" "database" {
  identifier             = "database"
  instance_class         = var.db_instance_type
  allocated_storage      = var.db_size
  engine                 = "mysql"
  multi_az               = false
  name                   = "Database "
  password               = var.rds_password
  username               = var.rds_user
  engine_version         = "5.7.00"
  skip_final_snapshot    = true
  db_subnet_group_name   = aws_db_subnet_group.rdssubnet.name
  vpc_security_group_ids = [aws_security_group.rds.id]
}
When you create your aws_subnet resources, you have to specify the AZs in which to place them. There is a special attribute for that called availability_zone. For example:
resource "aws_subnet" "rds_subnet" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
}
resource "aws_subnet" "rds_subnet1" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
}
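If you'd rather not hard-code the AZ names, a rough sketch (the resource name and CIDRs here are hypothetical) using the aws_availability_zones data source, as in the previous question, spreads the subnets across AZs automatically:

data "aws_availability_zones" "available" {}

resource "aws_subnet" "rds" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index + 1)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}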

Terraform AWS - How to change default route to an existing route table from nat-gateway to ec2?

I have a private subnet with a default route targeting a NAT gateway. Both were created by Terraform.
Now I have other code that launches an EC2 instance to use as a NAT in my VPC (as the managed NAT gateway became very expensive). I'm trying to change the default route in my route table to this new EC2 instance and am getting the error below:
Error: Error applying plan:
1 error occurred:
* module.ec2-nat.aws_route.defaultroute_to_ec2-nat: 1 error occurred:
* aws_route.defaultroute_to_ec2-nat: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: 408deb59-d223-4c9f-9a28-209e2e0478e9
I know this route already exists, but how do I change this existing route to point at a new target, in this case my new EC2 instance's network interface?
Thanks for your help.
Following is the code I'm using:
#####################
# FIRST TERRAFORM
# create the internet gateway
resource "aws_internet_gateway" "this" {
  count  = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
  vpc_id = "${aws_vpc.this.id}"
  tags   = "${merge(map("Name", format("%s", var.name)), var.igw_tags, var.tags)}"
}

# Add default route (0.0.0.0/0) to internet gateway
resource "aws_route" "public_internet_gateway" {
  count                  = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
  route_table_id         = "${aws_route_table.public.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.this.id}"

  timeouts {
    create = "5m"
  }
}
#####################
# SECOND TERRAFORM
# Spin EC2 to run as NAT
resource "aws_instance" "ec2-nat" {
  count                       = "${var.instance_qtd}"
  ami                         = "${data.aws_ami.nat.id}"
  availability_zone           = "${var.region}a"
  instance_type               = "${var.instance_type}"
  key_name                    = "${var.aws_key_name}"
  vpc_security_group_ids      = ["${var.sg_ec2}", "${var.sg_ops}"]
  subnet_id                   = "${var.public_subnet_id}"
  iam_instance_profile        = "${var.iam_instance_profile}"
  associate_public_ip_address = true
  source_dest_check           = false

  tags = {
    Name  = "ec2-nat-${var.brand}-${var.role}-${count.index}"
    Brand = "${var.brand}"
    Role  = "${var.role}"
    Type  = "ec2-nat"
  }
}

# Add default route (0.0.0.0/0) to aws_instance.ec2-nat
variable "default_route" {
  default = "0.0.0.0/0"
}

resource "aws_route" "defaultroute_to_ec2-nat" {
  route_table_id         = "${var.private_route_id}"
  destination_cidr_block = "${var.default_route}"
  instance_id            = "${element(aws_instance.ec2-nat.*.id, 0)}"
}
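One possible way out, assuming your AWS provider version supports importing aws_route, is to adopt the existing route into the new resource's state so Terraform changes its target in place instead of trying to create a duplicate:

# hypothetical route table ID; use the real rtb-... value behind var.private_route_id
terraform import module.ec2-nat.aws_route.defaultroute_to_ec2-nat rtb-0123456789abcdef0_0.0.0.0/0

After the import, terraform apply should modify the existing 0.0.0.0/0 route to point at the instance rather than failing with RouteAlreadyExists. Alternatively, remove the default route from whichever configuration should no longer manage it, so only one configuration owns that route.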

Terraform - DB and security group are in different VPCs

What I am trying to achieve:
Create an RDS Aurora cluster and place it in the same VPC as the EC2 instances I start, so they can communicate.
I'm trying to create an SG named "RDS_DB_SG" and make it part of the VPC I'm creating in the process.
I also create an SG named "BE_SG" and make it part of the same VPC.
I'm doing this so I can get access between the two (RDS and BE server).
What I did so far:
Created a .tf config and started everything up.
What I got:
It starts OK if I don't include the RDS cluster inside the RDS SG; the RDS cluster ends up in its own VPC.
When I include the RDS cluster in the SG I want for it, the RDS cluster can't start and gets an error.
Error I got:
"The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-5a***63c and the EC2 security group is in vpc-0e5391*****273b3d"
Workaround for now:
I started the infrastructure without specifying a VPC for the RDS cluster. It created its own default VPC.
I then created manual VPC peering between the VPC that was created for the EC2 instances and the VPC that was created for the RDS cluster.
But I want them to be in the same VPC so I won't have to create the VPC peering manually.
My .tf code:
variable "vpc_cidr" {
description = "CIDR for the VPC"
default = "10.0.0.0/16"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_subnet" "vpc_subnet" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.vpc_cidr}"
availability_zone = "eu-west-1a"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_db_subnet_group" "subnet_group" {
name = "${var.env}-subnet-group"
subnet_ids = ["${aws_subnet.vpc_subnet.id}"]
}
resource "aws_security_group" "RDS_DB_SG" {
name = "${var.env}-rds-sg"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 3396
to_port = 3396
protocol = "tcp"
security_groups = ["${aws_security_group.BE_SG.id}"]
}
}
resource "aws_security_group" "BE_SG" {
name = "${var.env}_BE_SG"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "BE" {
ami = "ami-*********************"
instance_type = "t2.large"
associate_public_ip_address = true
key_name = "**********"
tags = {
Name = "WEB-${var.env}"
Porpuse = "Launched by Terraform"
ENV = "${var.env}"
}
subnet_id = "${aws_subnet.vpc_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.BE_SG.id}", "${aws_security_group.ssh.id}"]
}
resource "aws_rds_cluster" "rds-cluster" {
cluster_identifier = "${var.env}-cluster"
database_name = "${var.env}-rds"
master_username = "${var.env}"
master_password = "PASSWORD"
backup_retention_period = 5
vpc_security_group_ids = ["${aws_security_group.RDS_DB_SG.id}"]
}
resource "aws_rds_cluster_instance" "rds-instance" {
count = 1
cluster_identifier = "${aws_rds_cluster.rds-cluster.id}"
instance_class = "db.r4.large"
engine_version = "5.7.12"
engine = "aurora-mysql"
preferred_backup_window = "04:00-22:00"
}
Any suggestions on how to achieve my first goal?
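Following the pattern from the subnet-group answers above, a minimal sketch of a likely fix, not a confirmed solution, is to point the cluster at the subnet group you already define so it lands in your VPC instead of the default one. Note that RDS requires the subnet group to cover at least two AZs, so a second subnet in another AZ would also be needed:

resource "aws_rds_cluster" "rds-cluster" {
  cluster_identifier      = "${var.env}-cluster"
  database_name           = "${var.env}-rds"
  master_username         = "${var.env}"
  master_password         = "PASSWORD"
  backup_retention_period = 5
  # attaches the cluster to your VPC via the subnet group defined above
  db_subnet_group_name    = "${aws_db_subnet_group.subnet_group.name}"
  vpc_security_group_ids  = ["${aws_security_group.RDS_DB_SG.id}"]
}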

Terraform - Creating resources in one transaction / setting rollback policies

I'm using Terraform with AWS as a provider.
In one of my networks I accidentally configured wrong values, which led to a failure during resource creation.
The situation was that some of the resources were up and running,
but I would have preferred the whole process to be executed as one transaction.
I'm familiar with the output Terraform gives in such cases:
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
My question is: Is there a way to set up a rollback policy for cases where some resources were created and some failed?
Below is a simple example to reproduce the problem.
In the local variable 'az_list', just change the value from 'names' to 'zone_ids':
az_list = "${data.aws_availability_zones.available.zone_ids}"
A VPC will then be created with some default security groups and route tables, but without subnets.
resources.tf:
provider "aws" {
region = "${var.region}"
}
### Local data ###
data "aws_availability_zones" "available" {}
locals {
#In order to reproduce an error: Change 'names' to 'zone_ids'
az_list = "${data.aws_availability_zones.available.names}"
}
### Vpc ###
resource "aws_vpc" "base_vpc" {
cidr_block = "${var.cidr}"
instance_tenancy = "default"
enable_dns_hostnames = "false"
enable_dns_support = "true"
}
### Subnets ###
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet( var.cidr, 8, count.index + 1 + length(local.az_list) )}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet(var.cidr, 8, count.index + 1)}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
map_public_ip_on_launch = true
}
variables.tf:
variable "region" {
description = "Name of region"
default = "ap-south-1"
}
variable "cidr" {
description = "The CIDR block for the VPC"
default = "10.0.0.0/16"
}
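For what it's worth, the reproduction works because zone_ids returns zone IDs such as "aps1-az1", which are not valid availability_zone names, so the subnets fail while the VPC succeeds. If zone IDs were actually wanted, aws_subnet has a separate availability_zone_id argument; a minimal sketch under that assumption:

resource "aws_subnet" "private" {
  vpc_id               = "${aws_vpc.base_vpc.id}"
  cidr_block           = "${cidrsubnet(var.cidr, 8, count.index + 1 + length(local.az_list))}"
  # accepts zone IDs like "aps1-az1" instead of names like "ap-south-1a"
  availability_zone_id = "${element(local.az_list, count.index)}"
  count                = 2
}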