Terraform - How to loop on Specific subnets - amazon-web-services

I'm running Terraform code that creates 4 subnets; 2 of the subnets are public and their names start with "public".
Subnet code
## Private subnet
resource "aws_subnet" "private-subnet-az-a" {
  availability_zone       = "us-east-1a"
  vpc_id                  = aws_vpc.vpc-homework2.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = false
}
resource "aws_subnet" "private-subnet-az-b" {
  availability_zone       = "us-east-1b"
  vpc_id                  = aws_vpc.vpc-homework2.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = false
}
## Public subnet
resource "aws_subnet" "public-subnet-az-a" {
  availability_zone       = "us-east-1a"
  vpc_id                  = aws_vpc.vpc-homework2.id
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = true
}
resource "aws_subnet" "public-subnet-az-b" {
  availability_zone       = "us-east-1b"
  vpc_id                  = aws_vpc.vpc-homework2.id
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = true
}
When creating the load balancer I need to attach both public subnets to it. I tried the for expression you can see in the example below, but it is not working:
## Create lb code
resource "aws_lb" "nlb" {
  name               = "nlb-web"
  internal           = false
  load_balancer_type = "network"
  subnets            = [for subnet in aws_subnet.public-[*].id : subnet]
}

You can't construct such a loop; Terraform has no wildcard that matches resource names by prefix. The proper way of doing this is to create a map and use for_each to create your subnets:
variable "subnets" {
  default = {
    private-subnet-az-a = {
      cidr_block              = "10.0.1.0/24"
      map_public_ip_on_launch = false
      availability_zone       = "us-east-1a"
    }
    private-subnet-az-b = {
      cidr_block              = "10.0.2.0/24"
      map_public_ip_on_launch = false
      availability_zone       = "us-east-1b"
    }
    # and so on
  }
}
resource "aws_subnet" "subnet" {
  for_each                = var.subnets
  availability_zone       = each.value.availability_zone
  vpc_id                  = aws_vpc.vpc-homework2.id
  cidr_block              = each.value.cidr_block
  map_public_ip_on_launch = each.value.map_public_ip_on_launch
}
resource "aws_lb" "nlb" {
  name               = "nlb-web"
  internal           = false
  load_balancer_type = "network"
  subnets            = [for key, subnet in aws_subnet.subnet : subnet.id if length(regexall("public.*", key)) > 0]
}
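For completeness: if you keep the four separately named resources from the question (no for_each), there is no need for a loop at all; the two public subnet IDs can simply be listed. A minimal sketch against the resource names above:

```hcl
resource "aws_lb" "nlb" {
  name               = "nlb-web"
  internal           = false
  load_balancer_type = "network"

  # Reference the two public subnets directly by resource name.
  subnets = [
    aws_subnet.public-subnet-az-a.id,
    aws_subnet.public-subnet-az-b.id,
  ]
}
```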

Related

ALB health check failing with 502 error with my terraform config

Hi, I'm a beginner playing with VPCs on AWS and Terraform, and I'm stuck on an ALB health check issue.
I have 2 AZs with an EC2 instance and a web server on each. My goal is to set up the load balancer and be redirected to one of my EC2 instances; the instances are in private subnets with a NAT gateway and an Elastic IP.
I set up a bastion host to check over SSH whether my EC2 instances were properly connected to the internet, and the answer is yes.
This is my Terraform setup (maybe there is an obvious error that I haven't seen):
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
shared_credentials_file = "./aws/credentials"
region = "us-east-1"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "my-vpc"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "my-internet-gateway"
}
}
resource "aws_subnet" "public_a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "my-public-a-subnet"
}
}
resource "aws_subnet" "public_b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
map_public_ip_on_launch = true
tags = {
Name = "my-public-b-subnet"
}
}
resource "aws_subnet" "private_a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "us-east-1a"
tags = {
Name = "my-private-a-subnet"
}
}
resource "aws_subnet" "private_b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.4.0/24"
availability_zone = "us-east-1b"
tags = {
Name = "my-private-b-subnet"
}
}
resource "aws_nat_gateway" "main" {
allocation_id = aws_eip.main.id
subnet_id = aws_subnet.public_a.id
}
resource "aws_eip" "main" {
vpc = true
tags = {
Name = "my-nat-gateway-eip"
}
}
resource "aws_security_group" "main" {
name = "my-security-group"
description = "Allow HTTP and SSH access"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "my-security-group"
}
}
resource "aws_instance" "ec2_a" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.private_a.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-ec2-a"
}
key_name = "vockey"
user_data = file("user_data.sh")
}
resource "aws_instance" "ec2_b" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.private_b.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-ec2-b"
}
key_name = "vockey"
user_data = file("user_data.sh")
}
resource "aws_instance" "bastion" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.public_a.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-bastion"
}
key_name = "vockey"
user_data = file("user_data_bastion.sh")
}
resource "aws_alb" "main" {
name = "my-alb"
internal = false
security_groups = [aws_security_group.main.id]
subnets = [aws_subnet.public_a.id, aws_subnet.public_b.id]
tags = {
Name = "my-alb"
}
}
resource "aws_alb_target_group" "ec2" {
name = "my-alb-target-group"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
tags = {
Name = "my-alb-target-group"
}
}
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.main.id
}
tags = {
Name = "my-private-route-table"
}
}
resource "aws_route_table_association" "private_a" {
subnet_id = aws_subnet.private_a.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table_association" "private_b" {
subnet_id = aws_subnet.private_b.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "My Public Route Table"
}
}
resource "aws_route_table_association" "public_a" {
subnet_id = aws_subnet.public_a.id
route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "public_b" {
subnet_id = aws_subnet.public_b.id
route_table_id = aws_route_table.public.id
}
resource "aws_alb_listener" "main" {
load_balancer_arn = aws_alb.main.arn
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = aws_alb_target_group.ec2.arn
type = "forward"
}
}
resource "aws_alb_target_group_attachment" "ec2_a" {
target_group_arn = aws_alb_target_group.ec2.arn
target_id = aws_instance.ec2_a.id
port = 80
}
resource "aws_alb_target_group_attachment" "ec2_b" {
target_group_arn = aws_alb_target_group.ec2.arn
target_id = aws_instance.ec2_b.id
port = 80
}
It looks like you don't have a health_check block on the aws_alb_target_group resource. Try adding something like this:
resource "aws_alb_target_group" "ec2" {
  name     = "my-alb-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path    = "/"
    matcher = "200"
  }

  tags = {
    Name = "my-alb-target-group"
  }
}
Also, make sure that the HTTP services on your EC2 instances are listening and accepting connections on port 80. You should be able to curl http://<ec2 ip address> and get a 200 response.

How to reuse Elastic IPs for a set of private and public subnets dedicated to Fargate tasks

I got the following setup to create the networking requirements for a Fargate setup:
resource "aws_vpc" "main" {
cidr_block = var.cidr
tags = {
Environment = var.environment
DO_NOT_DELETE = true
CreatedBy = "terraform"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
data "aws_availability_zones" "region_azs" {
state = "available"
}
locals {
az_count = length(data.aws_availability_zones.region_azs.names)
}
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 4, count.index)
availability_zone = data.aws_availability_zones.region_azs.names[count.index]
count = local.az_count
tags = {
Name = "public-subnet-${data.aws_availability_zones.region_azs.names[count.index]}"
AvailabilityZone = data.aws_availability_zones.region_azs.names[count.index]
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
Type = "private"
DO_NOT_DELETE = true
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 4, count.index + local.az_count )
availability_zone = data.aws_availability_zones.region_azs.names[count.index]
count = local.az_count
map_public_ip_on_launch = true
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
DO_NOT_DELETE = true
Type = "public"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
Type = "public"
}
}
resource "aws_route" "public" {
route_table_id = aws_route_table.public.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
resource "aws_route_table_association" "public" {
count = local.az_count
subnet_id = element(aws_subnet.public.*.id, count.index)
route_table_id = aws_route_table.public.id
}
resource "aws_nat_gateway" "main" {
count = local.az_count
allocation_id = element(aws_eip.nat.*.id, count.index)
subnet_id = element(aws_subnet.public.*.id, count.index)
depends_on = [aws_internet_gateway.main]
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_eip" "nat" {
count = local.az_count
vpc = true
tags = {
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_route_table" "private" {
count = local.az_count
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Type = "private"
Vpc = aws_vpc.main.id
}
}
resource "aws_route" "private" {
count = local.az_count
route_table_id = element(aws_route_table.private.*.id, count.index)
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = element(aws_nat_gateway.main.*.id, count.index)
}
resource "aws_route_table_association" "private" {
count = local.az_count
subnet_id = element(aws_subnet.private.*.id, count.index)
route_table_id = element(aws_route_table.private.*.id, count.index)
}
resource "aws_security_group" "alb" {
name = "${var.resources_name_prefix}-alb-sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = "tcp"
from_port = 80
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
ingress {
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_security_group" "ecs_tasks" {
name = "${var.resources_name_prefix}-ecs-sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = "tcp"
from_port = 3000
to_port = 3000
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
This has been working great for a couple of availability zones, but now that I'm dynamically creating subnets to run tasks in every AZ of the region, I'm reaching the limit of Elastic IPs per region.
So I'm getting this error while trying to create the stack:
Error creating EIP: AddressLimitExceeded: The maximum number of addresses has been reached.
status code: 400
I'm wondering if the following part:
resource "aws_nat_gateway" "main" {
count = local.az_count
allocation_id = element(aws_eip.nat.*.id, count.index)
subnet_id = element(aws_subnet.public.*.id, count.index)
depends_on = [aws_internet_gateway.main]
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_eip" "nat" {
count = local.az_count
vpc = true
tags = {
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
could be restructured to use a single EIP and route internally, if that makes sense.
I modified your code a bit, but it's a mess; for example, all of your private subnets are named "public". It now creates two NAT gateways. Obviously, if you have subnets in, let's say, 6 AZs, there will be some cross-AZ traffic to reach those NATs.
Alternatively, simply don't create VPCs spanning so many AZs. Typically only two or three AZs are used for a VPC; having more than that is rarely needed.
Finally, you can ask AWS support for more EIPs if you want to preserve your original setup.
resource "aws_vpc" "main" {
cidr_block = var.cidr
tags = {
Environment = var.environment
DO_NOT_DELETE = true
CreatedBy = "terraform"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
data "aws_availability_zones" "region_azs" {
state = "available"
}
locals {
az_count = length(data.aws_availability_zones.region_azs.names)
}
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 4, count.index)
availability_zone = data.aws_availability_zones.region_azs.names[count.index]
count = local.az_count
tags = {
Name = "private-subnet-${data.aws_availability_zones.region_azs.names[count.index]}"
AvailabilityZone = data.aws_availability_zones.region_azs.names[count.index]
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
Type = "private"
DO_NOT_DELETE = true
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 4, count.index + local.az_count )
availability_zone = data.aws_availability_zones.region_azs.names[count.index]
count = local.az_count
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-${data.aws_availability_zones.region_azs.names[count.index]}"
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
DO_NOT_DELETE = true
Type = "public"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
Type = "public"
}
}
resource "aws_route" "public" {
route_table_id = aws_route_table.public.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
resource "aws_route_table_association" "public" {
count = local.az_count
subnet_id = element(aws_subnet.public.*.id, count.index)
route_table_id = aws_route_table.public.id
}
resource "aws_nat_gateway" "main" {
count = 2
allocation_id = element(aws_eip.nat.*.id, count.index)
subnet_id = element(aws_subnet.public.*.id, count.index)
depends_on = [aws_internet_gateway.main]
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_eip" "nat" {
count = 2
vpc = true
tags = {
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_route_table" "private" {
count = local.az_count
vpc_id = aws_vpc.main.id
tags = {
Environment = var.environment
CreatedBy = "terraform"
Type = "private"
Vpc = aws_vpc.main.id
}
}
resource "aws_route" "private" {
count = local.az_count
route_table_id = element(aws_route_table.private.*.id, count.index)
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = element(aws_nat_gateway.main.*.id, count.index)
}
resource "aws_route_table_association" "private" {
count = local.az_count
subnet_id = element(aws_subnet.private.*.id, count.index)
route_table_id = element(aws_route_table.private.*.id, count.index)
}
resource "aws_security_group" "alb" {
name = "${var.resources_name_prefix}-alb-sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = "tcp"
from_port = 80
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
ingress {
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
resource "aws_security_group" "ecs_tasks" {
name = "${var.resources_name_prefix}-ecs-sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = "tcp"
from_port = 3000
to_port = 3000
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Environment = var.environment
CreatedBy = "terraform"
Vpc = aws_vpc.main.id
}
}
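A note on why count = 2 NAT gateways still satisfy az_count private route tables above: Terraform's element() function wraps around (index modulo list length), so the private routes alternate between the two NATs. A minimal, self-contained sketch with made-up IDs:

```hcl
locals {
  nat_ids = ["nat-aaa", "nat-bbb"] # hypothetical NAT gateway IDs, for illustration only
}

output "route_targets" {
  # element() wraps around: indexes 0..5 alternate between the two IDs
  value = [for i in range(6) : element(local.nat_ids, i)]
}
```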

Problem with accessing ASG in private subnet from elb

I have a 502 error in the ALB.
My VPC and routes:
resource "aws_vpc" "My_VPC" {
cidr_block = "${var.vpcCIDRblock}"
instance_tenancy = "${var.instanceTenancy}"
enable_dns_support = "true"
enable_dns_hostnames = "true"
tags = {
Name = "My VPC"
}
}
resource "aws_subnet" "Public_Subnet" {
vpc_id = "${aws_vpc.My_VPC.id}"
cidr_block = "${var.subnetCIDRblock}"
map_public_ip_on_launch = "true"
availability_zone = "eu-central-1a"
tags= {
Name = "My Public Subnet"
}
}
resource "aws_subnet" "Public_Subnet_elb" {
vpc_id = "${aws_vpc.My_VPC.id}"
cidr_block = "${var.subnetCIDRblock4}"
map_public_ip_on_launch = "true"
availability_zone = "eu-central-1"
tags = {
Name = "My Public Subnet ELB"
}
}
resource "aws_subnet" "Private_Subnet" {
vpc_id = "${aws_vpc.My_VPC.id}"
cidr_block = "172.16.2.0/24"
map_public_ip_on_launch = "false"
availability_zone = "$eu-central-1a"
tags = {
Name = "My_Private_Subnet"
}
}
resource "aws_internet_gateway" "My_VPC_GW" {
vpc_id = "${aws_vpc.My_VPC.id}"
tags = {
Name = "My VPC Internet Gateway"
}
}
resource "aws_route_table" "eu-central-1a" {
vpc_id = "${aws_vpc.My_VPC.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.My_VPC_GW.id}"
}
tags = {
Name = "Public Subnet"
}
}
resource "aws_main_route_table_association" "public" {
vpc_id = "${aws_vpc.My_VPC.id}"
route_table_id = "${aws_route_table.eu-central-1a.id}"
}
resource "aws_route_table_association" "eu-central-1a-public" {
subnet_id = "${aws_subnet.Public_Subnet.id}"
route_table_id = "${aws_route_table.eu-central-1a.id}"
}
resource "aws_route_table_association" "elb" {
subnet_id = "${aws_subnet.Public_Subnet_elb.id}"
route_table_id = "${aws_route_table.eu-central-1a.id}"
}
resource "aws_eip" "eip" {
vpc = true
depends_on = ["aws_internet_gateway.My_VPC_GW"]
}
resource "aws_nat_gateway" "gateway" {
allocation_id = "${aws_eip.eip.id}"
subnet_id = "${aws_subnet.Public_Subnet.id}"
depends_on = ["aws_internet_gateway.My_VPC_GW"]
}
output "NAT_GW_IP" {
value = "${aws_eip.eip.public_ip}"
}
## Routing table
resource "aws_route_table" "private_route_table" {
vpc_id = "${aws_vpc.My_VPC.id}"
}
resource "aws_route" "private" {
route_table_id = "${aws_route_table.private_route_table.id}"
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = "${aws_nat_gateway.gateway.id}"
}
# Associate subnet private_subnet to private route table
resource "aws_route_table_association" "private_subnet_association" {
subnet_id = "${aws_subnet.Private_Subnet.id}"
route_table_id = "${aws_route_table.private_route_table.id}"
}
Each security group is open for incoming traffic on ports 80, 443, and 22; outbound is 0.0.0.0/0.
ELB
resource "aws_lb" "test" {
name = "test-lb-tf"
internal = false
load_balancer_type = "application"
security_groups = ["${aws_security_group.elb-security.id}"]
subnets = ["${aws_subnet.Public_Subnet_elb.id}","${aws_subnet.Public_Subnet.id}"]
enable_deletion_protection = false
depends_on = ["aws_nat_gateway.gateway"]
access_logs {
bucket = "test-listener"
prefix = "test-lb"
enabled = true
}
tags = {
Environment = "production"
}
}
resource "aws_lb_target_group" "test" {
name = "moodle-tg"
port = "80"
protocol = "HTTP"
vpc_id = aws_vpc.My_VPC.id
target_type = "instance"
deregistration_delay = "300"
health_check {
path = "/"
interval = "300"
port = "80"
matcher = "200"
protocol = "HTTP"
timeout = "10"
healthy_threshold = "10"
unhealthy_threshold= "10"
}
}
resource "aws_lb_listener" "front_end" {
load_balancer_arn = aws_lb.test.arn
port = "80"
protocol = "HTTP"
depends_on = ["aws_nat_gateway.gateway"]
default_action {
target_group_arn = "${aws_lb_target_group.test.arn}"
type = "forward"
}
}
resource "aws_lb_listener_rule" "asg-listener_rule" {
listener_arn = aws_lb_listener.front_end.arn
priority = 100
depends_on = ["aws_nat_gateway.gateway"]
condition {
path_pattern {
values = ["/"]
}
}
action {
type = "forward"
target_group_arn = aws_lb_target_group.test.arn
}
}
ASG
resource "aws_launch_configuration" "moodle-lc" {
name_prefix = "moodle-lc-"
image_id = "${data.aws_ami.centos.id}"
instance_type = "${var.instance}"
security_groups = ["${aws_security_group.web_ubuntu1.id}"]
key_name = "moodle_agents"
user_data = "${file("init-agent-instance.sh")}"
depends_on = ["aws_nat_gateway.gateway"]
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "moodle-agents" {
vpc_zone_identifier = ["${aws_subnet.Private_Subnet.id}"]
name = "agents"
max_size = "20"
min_size = "1"
health_check_grace_period = 300
health_check_type = "ELB"
desired_capacity = 2
target_group_arns = ["${aws_lb_target_group.test.arn}"]
force_delete = true
launch_configuration = "${aws_launch_configuration.moodle-lc.name}"
depends_on = ["aws_nat_gateway.gateway"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Name"
value = "Agent Instance"
propagate_at_launch = true
}
}
The user_data script just installs the Apache web server and starts it.
I read this article link and my code looks the same to me; can someone please explain where I made a mistake?
Without the NAT gateway (with the ASG in a public subnet) everything works fine, but it doesn't make sense to use an ALB to access instances that are already visible on the internet.
Your general architecture is correct, although there are still some mistakes:
An incorrect AZ (the stray "$" makes this an invalid zone name):
availability_zone = "$eu-central-1a"
Again a wrong AZ (this is a region name, not an availability zone):
availability_zone = "eu-central-1"
An ALB must be in two different AZs, so you should probably use "eu-central-1a" and "eu-central-1b".
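A corrected sketch of the two ALB subnets, assuming the intent is eu-central-1a and eu-central-1b (resource and variable names kept from the question):

```hcl
resource "aws_subnet" "Public_Subnet" {
  vpc_id                  = aws_vpc.My_VPC.id
  cidr_block              = var.subnetCIDRblock
  map_public_ip_on_launch = true
  availability_zone       = "eu-central-1a" # valid AZ name, no stray "$"
  tags = {
    Name = "My Public Subnet"
  }
}

resource "aws_subnet" "Public_Subnet_elb" {
  vpc_id                  = aws_vpc.My_VPC.id
  cidr_block              = var.subnetCIDRblock4
  map_public_ip_on_launch = true
  availability_zone       = "eu-central-1b" # an AZ, not the region, and different from the first
  tags = {
    Name = "My Public Subnet ELB"
  }
}
```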

Attach multiple private subnet to route table for each terraform

I have public and private subnets established in a VPC, created with for_each. I am now trying to create route tables for the subnets and NAT gateways, specifically for access from private instances. My subnets, route tables, and public subnet associations are working properly. I am having trouble getting my private subnets attached to the route table connecting them to the NAT gateway. I believe my logic is correct; my NAT gateways are sitting in my public subnets. The only issue is the private subnets being attached to the route table that connects to the NAT gateway. Below is my code; any advice is appreciated.
resource "aws_route_table" "public" {
for_each = var.pub_subnet
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = var.rt_tags
}
}
resource "aws_route_table_association" "public" {
for_each = aws_subnet.public
route_table_id = aws_route_table.public[each.key].id
subnet_id = each.value.id
}
resource "aws_route_table_association" "nat" {
for_each = aws_subnet.private
route_table_id = aws_route_table.nat[each.key].id
subnet_id = each.value.id
}
resource "aws_route_table" "nat" {
for_each = var.pub_subnet
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.main[each.key].id
}
tags = {
Name = var.rt_tags_private
}
}
resource "aws_subnet" "public" {
for_each = var.pub_subnet
vpc_id = aws_vpc.main.id
cidr_block = each.value.cidr_block
availability_zone = each.value.availability_zone
map_public_ip_on_launch = true
tags = {
Name = each.key
}
}
resource "aws_subnet" "private" {
for_each = var.priv_subnet
vpc_id = aws_vpc.main.id
cidr_block = each.value.cidr_block
availability_zone = each.value.availability_zone
map_public_ip_on_launch = false
tags = {
Name = each.key
}
}
Variables
variable "pub_subnet" {
type = map(object({
cidr_block = string
availability_zone = string
}))
default = {
"PubSub1" = {
cidr_block = "10.0.1.0/24"
availability_zone = "us-west-1a"
}
}
}
variable "priv_subnet" {
type = map(object({
cidr_block = string
availability_zone = string
}))
default = {
"PrivSub1" = {
cidr_block = "10.0.2.0/24"
availability_zone = "us-west-1c"
}
}
}
Error
Error: Invalid index
on vpc.tf line 61, in resource "aws_route_table_association" "nat":
61: route_table_id = aws_route_table.nat[each.key].id
|----------------
| aws_route_table.nat is object with 1 attribute "PubSub1"
| each.key is "PrivSub1"
The given key does not identify an element in this collection value.
NAT Gateway
resource "aws_nat_gateway" "main" {
for_each = aws_subnet.public
subnet_id = each.value.id
allocation_id = aws_eip.main[each.key].id
}
EIP
resource "aws_eip" "main" {
for_each = aws_subnet.public
vpc = true
lifecycle {
create_before_destroy = true
}
}
You are defining your NAT route table using var.pub_subnet, which has the form:
"PubSub1" = {
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-1a"
}
Thus, to refer to that aws_route_table you have to use the PubSub1 key.
However, in your aws_route_table_association you are iterating over aws_subnet.private, whose key is PrivSub1.
Update
The issue can be overcome by creating a local mapping of private => public subnet names, e.g.:
locals {
  private_public_mapping = zipmap(keys(var.priv_subnet), keys(var.pub_subnet))
}
resource "aws_route_table_association" "nat" {
  for_each       = aws_subnet.private
  route_table_id = aws_route_table.nat[local.private_public_mapping[each.key]].id
  subnet_id      = each.value.id
}
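For the variables in the question, the zipmap above produces a one-entry map. A small sketch of what it evaluates to (keys() returns map keys in lexical order, and zipmap pairs the two lists positionally):

```hcl
locals {
  # zipmap(["PrivSub1"], ["PubSub1"]) evaluates to { PrivSub1 = "PubSub1" }
  private_public_mapping = zipmap(["PrivSub1"], ["PubSub1"])
}

output "mapping" {
  value = local.private_public_mapping
}
```

Because the pairing is purely positional, this only behaves sensibly when the two variables have the same number of entries.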

Create AWS RDS instance in non default VPC using terraform

I am using Terraform v0.10.2. I have created a VPC in modules/vpc/main.tf and an ACL in modules/acl/main.tf, and I am accessing them through their outputs.
I can successfully create an EC2 instance in a public subnet of the above VPC like so:
subnet_id = "${element(module.vpc.public_subnet_ids, count.index)}"
I want to add the RDS instance to a private subnet. I tried what the Terraform docs say:
vpc_security_group_ids = [
"${aws_security_group.db_access_sg.id}"
]
db_subnet_group_name = "${module.vpc.aws_db_subnet_group_database}"
But it is being added to the default VPC. If I put the subnet group outside the module and access the resource, it gives a variable-not-found error.
I have referred to many GitHub examples, but without success. Am I missing something?
This is one of the links I referred to: https://github.com/hashicorp/terraform/issues/13739
Contents of modules/vpc/main.tf
resource "aws_vpc" "mod" {
cidr_block = "${var.cidr}"
tags {
Name = "${var.name}"
}
}
resource "aws_internet_gateway" "mod" {
vpc_id = "${aws_vpc.mod.id}"
}
resource "aws_route_table" "public" {
vpc_id = "${aws_vpc.mod.id}"
propagating_vgws = ["${compact(split(",", var.public_propagating_vgws))}"]
tags {
Name = "${var.name}-public"
}
}
resource "aws_route" "public_internet_gateway" {
route_table_id = "${aws_route_table.public.id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.mod.id}"
}
resource "aws_route_table" "private" {
vpc_id = "${aws_vpc.mod.id}"
propagating_vgws = ["${compact(split(",", var.private_propagating_vgws))}"]
tags {
Name = "${var.name}-private"
}
}
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.mod.id}"
cidr_block = "${element(split(",", var.private_subnets), count.index)}"
availability_zone = "${element(split(",", var.azs), count.index)}"
count = "${length(compact(split(",", var.private_subnets)))}"
tags {
Name = "${var.name}-private"
}
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.mod.id}"
cidr_block = "${element(split(",", var.public_subnets), count.index)}"
availability_zone = "${element(split(",", var.azs), count.index)}"
count = "${length(compact(split(",", var.public_subnets)))}"
tags {
Name = "${var.name}-public"
}
map_public_ip_on_launch = true
}
resource "aws_db_subnet_group" "database" {
name = "${var.name}-rds-subnet-group-${count.index}"
description = "Database subnet groups for ${var.name}"
subnet_ids = ["${aws_subnet.private.*.id}"]
#tags = "${merge(var.tags, map("Name", format("%s-database-subnet-group", var.name)))}"
count = "${length(compact(split(",", var.private_subnets)))}"
}
resource "aws_route_table_association" "private" {
count = "${length(compact(split(",", var.private_subnets)))}"
subnet_id = "${element(aws_subnet.private.*.id, count.index)}"
route_table_id = "${aws_route_table.private.id}"
}
resource "aws_route_table_association" "public" {
count = "${length(compact(split(",", var.public_subnets)))}"
subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
route_table_id = "${aws_route_table.public.id}"
}
Contents of modules/vpc/outputs.tf
output "vpc_id" {
value = "${aws_vpc.mod.id}"
}
output "public_subnet_ids" {
value = ["${aws_subnet.public.*.id}"]
}
output "private_subnet_ids" {
value = ["${aws_subnet.private.*.id}"]
}
output "aws_db_subnet_group_database" {
value = "${aws_db_subnet_group.database.name}"
}
Contents of modules/acl/main.tf
resource "aws_network_acl" "private_app_subnets" {
vpc_id = "${var.vpc_id}"
subnet_ids = ["${var.private_subnet_ids}"]
}
The issue was that I had enabled "Publicly Accessible" while trying to add the RDS instance to the private subnet. I also had to remove the count from aws_db_subnet_group, as ydaetskcoR told me to, of course.
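A sketch of the fix described above, in the Terraform 0.10-era syntax used in the question (the aws_db_instance attributes shown are assumptions based on the description, not code from the original module):

```hcl
# One subnet group covering all private subnets; no count needed.
resource "aws_db_subnet_group" "database" {
  name        = "${var.name}-rds-subnet-group"
  description = "Database subnet group for ${var.name}"
  subnet_ids  = ["${aws_subnet.private.*.id}"]
}

resource "aws_db_instance" "db" {
  # ... engine, instance_class, credentials, etc. ...
  db_subnet_group_name   = "${aws_db_subnet_group.database.name}"
  vpc_security_group_ids = ["${aws_security_group.db_access_sg.id}"]
  publicly_accessible    = false # this setting was the culprit
}
```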