I'm trying to create a NAT gateway in Terraform on each of the public subnets that I created.
I create the public subnets like this:
resource "aws_subnet" "public_subnet" {
count = length(var.vpc.public_subnets)
vpc_id = aws_vpc.vpc.id
availability_zone = var.vpc.public_subnets[count.index].availability_zone
cidr_block = var.vpc.public_subnets[count.index].cidr_block
tags = var.vpc.public_subnets[count.index].tags
}
I create all the Elastic IPs like this:
resource "aws_eip" "eip" {
for_each = { for eip in var.vpc.eip : eip.name => eip }
vpc = true
tags = each.value.tags
}
And finally I have a resource block to create 3 NAT gateways. Each NAT gateway has to use one subnet and one EIP:
resource "aws_nat_gateway" "ngw" {
count = length(var.vpc.public_subnets)
allocation_id = element(aws_eip.eip.*.allocation_id, count.index)
subnet_id = element(aws_subnet.public_subnet.*.id, count.index)
}
The result is: This object does not have an attribute named "allocation_id"
How should I iterate over the two resources to create a NAT gateway for each subnet/EIP pair?
Thanks.
Since you are using for_each for the EIPs, aws_eip.eip is a map, not a list. Thus, to access its values you can use the values function:
allocation_id = element(values(aws_eip.eip)[*].allocation_id, count.index)
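For context, here is a minimal sketch of the full resource with that fix applied (untested, assuming the variable structure from the question):

resource "aws_nat_gateway" "ngw" {
  count = length(var.vpc.public_subnets)

  # values() converts the for_each map into a list, which can then
  # be indexed positionally with count.index
  allocation_id = element(values(aws_eip.eip)[*].allocation_id, count.index)
  subnet_id     = element(aws_subnet.public_subnet.*.id, count.index)
}

Note that values() returns the map's values sorted by key, so the subnet/EIP pairing depends on the lexical order of the EIP names; if the exact pairing matters, consider keying both resources the same way.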
I have two VPCs:
  VPC A: RDS instance
  VPC B: EC2 instance
There are also a few subnets:
  VPC A: Private A, Private B, Peer A
  VPC B: Private A, Private B, Peer A
The RDS is in Private A, Private B, and Peer A of VPC A.
The EC2 is in Peer A of VPC B.
I want to connect to the RDS instance from the EC2.
I have created a peering:
resource "aws_vpc_peering_connection" "a_to_b" {
vpc_id = aws_vpc.a.id
peer_vpc_id = aws_vpc.b.id
auto_accept = true
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
}
resource "aws_vpc_peering_connection_accepter" "a_to_b" {
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
auto_accept = true
}
I also have route tables for the whole CIDR block like so:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
resource "aws_route_table" "b_peer" {
vpc_id = aws_vpc.b.id
}
resource "aws_route_table_association" "b_peer" {
route_table_id = aws_route_table.b_peer.id
subnet_id = aws_subnet.b_peer.id
}
resource "aws_route" "b_peer_a" {
route_table_id = aws_route_table.b_peer.id
destination_cidr_block = aws_subnet.a_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
I have also created security group ingress and egress rules between the RDS instance and the EC2 security group.
When I SSH into the EC2 instance, I can resolve the DNS name:
$ nslookup rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Server: 192.16.0.2
Address: 192.16.0.2#53
Non-authoritative answer:
Name: rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Address: 10.16.192.135
However, curl cannot connect:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
The expected response is:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
curl: (52) Empty reply from server
The VPC peering is "Active" and the route tables match the Terraform.
How can I get this to connect?
I did some tests on my own, and I'm pretty sure that the issue is caused by your routes, assuming that everything else in your VPC is correct (the VPC and subnet definitions are not shown).
Specifically, you wrote that the "RDS is in Private A, Private B, Peer A of VPC A". This means that the RDS primary may be in any of these subnets. You have no direct control over which one, as it's up to RDS to choose the subnet; you can only partially influence it by selecting AZs when you create the RDS. Consequently, your peering route tables should cover all three of these subnets. The easiest way to achieve this is to use the VPC CIDR range:
# Route from the instance in VPC B to any subnet in VPC A,
# covering the RDS in whichever of its subnets it lives
resource "aws_route" "b_peer_a" {
  route_table_id            = aws_route_table.b_peer.id
  destination_cidr_block    = aws_vpc.a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
Then you also need a route table in VPC A, associated with all the subnets that can host the RDS, with a route back over the peering connection:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route_table_association" "a_private1" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private1.id
}
resource "aws_route_table_association" "a_private2" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private2.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
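With routes in place on both sides, the TCP check from the question should then behave as expected when run from the EC2 instance (Postgres accepts the connection but drops the non-Postgres handshake, hence the empty reply rather than a timeout):

$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
curl: (52) Empty reply from server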
Two main questions about the Terraform code below.
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
If I create subnets based on the number of availability zones (us-east-2a, 2b, 2c, so the number is 3, and I create 3 subnets) and map them to an ECS cluster with an ALB, does the availability zone spread apply?
I'm trying to build infra like the image below.
resource "aws_vpc" "cluster_vpc" {
tags = {
Name = "ecs-vpc"
}
cidr_block = "10.30.0.0/16"
}
data "aws_availability_zones" "available" {
}
resource "aws_subnet" "cluster" {
vpc_id = aws_vpc.cluster_vpc.id
count = length(data.aws_availability_zones.available.names)
cidr_block = "10.30.${10 + count.index}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "ecs-subnet"
}
}
resource "aws_internet_gateway" "cluster_igw" {
vpc_id = aws_vpc.cluster_vpc.id
tags = {
Name = "ecs-igw"
}
}
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.cluster_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.cluster_igw.id
}
tags = {
Name = "ecs-route-table"
}
}
resource "aws_route_table_association" "to-public" {
count = length(aws_subnet.cluster)
subnet_id = aws_subnet.cluster[count.index].id
route_table_id = aws_route_table.public_route.id
}
resource "aws_ecs_cluster" "staging" {
name = "service-ecs-cluster"
}
resource "aws_ecs_service" "staging" {
name = "staging"
cluster = aws_ecs_cluster.staging.id
task_definition = aws_ecs_task_definition.service.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = aws_subnet.cluster[*].id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.staging.arn
container_name = var.app_name
container_port = var.container_port
}
resource "aws_lb" "staging" {
name = "alb"
subnets = aws_subnet.cluster[*].id
load_balancer_type = "application"
security_groups = [aws_security_group.lb.id]
access_logs {
bucket = aws_s3_bucket.log_storage.id
prefix = "frontend-alb"
enabled = true
}
tags = {
Environment = "staging"
Application = var.app_name
}
}
... (aws_lb_target_group and other specific components omitted)
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
Not really. It is there to provide a single, fixed endpoint (URL) for your ECS service. The ALB automatically distributes incoming connections from the internet across the tasks of your ECS service, which can be in one or multiple AZs. In your case it is only one AZ, since you are using desired_count = 1: you will have only one ECS task, running in a single AZ.
If I create subnets based on the number of availability zones (us-east-2a, 2b, 2c, so the number is 3, and I create 3 subnets) and map them to an ECS cluster with an ALB, does the availability zone spread apply?
Yes, because your ALB is enabled for the same subnets as your ECS service through aws_subnet.cluster[*].id. But as explained for the first question, you will still have only one task in one AZ.
My intent is to build infra which has three availability zones and also to deploy AWS Fargate tasks across all three availability zones.
As explained before, your desired_count = 1, so you will not have ECS tasks across 3 AZs; see the sketch below.
Also, you are creating only public subnets, while your schematic diagram shows that the ECS tasks should be in private ones.
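A minimal sketch of both changes (untested; aws_subnet.private is hypothetical and assumes you add private subnets analogous to aws_subnet.cluster, plus NAT gateways for outbound traffic):

resource "aws_ecs_service" "staging" {
  name            = "staging"
  cluster         = aws_ecs_cluster.staging.id
  task_definition = aws_ecs_task_definition.service.arn
  desired_count   = 3        # ECS spreads the tasks across the three subnets/AZs
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = aws_subnet.private[*].id  # hypothetical private subnets
    assign_public_ip = false                     # tasks reach the internet via NAT
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.staging.arn
    container_name   = var.app_name
    container_port   = var.container_port
  }
}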
This is what I'm trying to do: I have 3 NAT gateways deployed into separate AZs, and I am now trying to create one route table for my private subnets pointing to the NAT gateways. In Terraform I created the NAT gateways using for_each, and I am getting an error when associating them with a private route table, because I am referring to resources created with for_each from a resource that does not itself use for_each. Below are the code and the error message. Any advice would be appreciated.
resource "aws_route_table" "nat" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main[each.key].id
}
tags = {
Name = "${var.vpc_tags}_PrivRT"
}
}
resource "aws_eip" "main" {
for_each = aws_subnet.public
vpc = true
lifecycle {
create_before_destroy = true
}
}
resource "aws_nat_gateway" "main" {
for_each = aws_subnet.public
subnet_id = each.value.id
allocation_id = aws_eip.main[each.key].id
}
resource "aws_subnet" "public" {
for_each = var.pub_subnet
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, each.value)
availability_zone = each.key
map_public_ip_on_launch = true
tags = {
Name = "PubSub-${each.key}"
}
}
Error:

Error: Reference to "each" in context without for_each

  on vpc.tf line 89, in resource "aws_route_table" "nat":
  89:   nat_gateway_id = aws_nat_gateway.main[each.key].id

The "each" object can be used only in "resource" blocks, and only when the
"for_each" argument is set.
The problem is that you are referencing each.key in the nat_gateway_id property of the "aws_route_table" "nat" resource without a for_each anywhere in that resource or its sub-blocks.
Add a for_each to that resource and that should do the trick. Here is some sample code (untested):
resource "aws_route_table" "nat" {
for_each = var.pub_subnet
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main[each.key].id
}
}
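To complete the setup, each per-AZ route table would then be associated with its matching private subnet. A hedged sketch, assuming hypothetical private subnets (aws_subnet.private) created with the same for_each keys as var.pub_subnet:

resource "aws_route_table_association" "nat" {
  for_each       = aws_route_table.nat
  route_table_id = each.value.id
  subnet_id      = aws_subnet.private[each.key].id  # hypothetical, same AZ keys
}

Also note that the original Name tag would now collide across instances; something like "${var.vpc_tags}_PrivRT_${each.key}" keeps it unique per AZ.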
I'd like to create a security group which allows instances within a subnet to communicate with each other without exposing some ports outside.
Of course I can explicitly specify my CIDR, but how do I create a data source which gives me the CIDR block of my subnet in the default VPC?
With the Terraform data source aws_vpc, you can get what you need. The example below shows how:
variable "vpc_id" {}
data "aws_vpc" "selected" {
id = "${var.vpc_id}"
}
resource "aws_subnet" "example" {
vpc_id = "${data.aws_vpc.selected.id}"
availability_zone = "us-west-2a"
cidr_block = "${cidrsubnet(data.aws_vpc.selected.cidr_block, 4, 1)}"
}
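Since the question is specifically about the default VPC, the aws_vpc data source can also find it without an ID via default = true. A minimal sketch of feeding its CIDR into a security group rule (resource names and the port are illustrative):

data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "intra_vpc" {
  name   = "intra-vpc-only"
  vpc_id = "${data.aws_vpc.default.id}"

  # Allow this port only from addresses inside the default VPC's CIDR,
  # so it is never exposed outside the VPC
  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["${data.aws_vpc.default.cidr_block}"]
  }
}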
I'm attempting to associate my Elastic IP address with a newly created network load balancer using Terraform. I see no option in the aws_lb documentation for adding an Elastic IP like one is able to do in the AWS console. The difficulty is that you have to associate the Elastic IP upon creation of the NLB.
EDIT: They have now added an explicit example to their documentation!
The aws_lb resource has a subnet_mapping block which allows you to specify an Elastic IP per subnet that the network load balancer exists in.
An absolutely minimal example looks something like this:
resource "aws_eip" "lb" {
vpc = true
}
resource "aws_lb" "network" {
name = "test-lb-tf"
load_balancer_type = "network"
subnet_mapping {
subnet_id = "${var.subnet_id}"
allocation_id = "${aws_eip.lb.id}"
}
}
You'll probably want to run the load balancer in multiple subnets, though, in which case you'd use something like this:
variable "vpc" {}
data "aws_vpc" "selected" {
tags {
Name = "${var.vpc}"
}
}
data "aws_subnet_ids" "public" {
vpc_id = "${data.aws_vpc.selected.id}"
tags {
Tier = "public"
}
}
resource "aws_eip" "lb" {
count = "${length(data.aws_subnet_ids.public)}"
vpc = true
}
resource "aws_lb" "network" {
name = "test-lb-tf"
internal = false
load_balancer_type = "network"
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[0]}"
allocation_id = "${aws_eip.lb.id[0]}"
}
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[1]}"
allocation_id = "${aws_eip.lb.id[1]}"
}
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[2]}"
allocation_id = "${aws_eip.lb.id[2]}"
}
}
The above assumes you have tagged your VPC with a Name tag and your subnets with a Tier tag, in this case using public as the value for any external-facing subnets. It then creates an Elastic IP address for each of the public subnets and a network load balancer spanning all of them, attaching one of the Elastic IPs to each subnet.
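For illustration, the tags the two data sources expect might look like this if the VPC and subnets are also managed in Terraform (hypothetical resource names and CIDRs):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags {
    Name = "my-vpc"    # matched by the aws_vpc data source above
  }
}

resource "aws_subnet" "public_a" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"
  tags {
    Tier = "public"    # matched by the aws_subnet_ids data source above
  }
}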
The above answer is correct; however, it can now be simplified using dynamic blocks, available since Terraform 0.12. This has the advantage of working in VPCs with any number of subnets.
resource "aws_lb" "network" {
name = "test-lb-tf"
internal = false
load_balancer_type = "network"
dynamic "subnet_mapping" {
for_each = data.aws.subnet_ids.public_ids
content {
subnet_id = subnet_mapping.value
allocation_id = aws_eip.lb.id[subnet_mapping.key].allocation_id
}
}
}
Here's my implementation, based on the answer above:
resource "aws_eip" "nlb" {
for_each = toset(aws_subnet.public.*.id)
vpc = true
tags = {
"Name" = "my-app-nlb-eip"
}
}
resource "aws_lb" "nlb" {
name = "my-app-nlb"
internal = false
load_balancer_type = "network"
enable_deletion_protection = false
enable_cross_zone_load_balancing = true
dynamic "subnet_mapping" {
for_each = toset(aws_subnet.public.*.id)
content {
subnet_id = subnet_mapping.value
allocation_id = aws_eip.nlb[subnet_mapping.key].allocation_id
}
}
tags = {
Environment = "development"
}
}