Cannot connect to RDS in another VPC via VPC peering

I have two VPCs:
  VPC A
    RDS instance
  VPC B
    EC2 instance
There are also a few subnets:
  VPC A
    Private A
    Private B
    Peer A
  VPC B
    Private A
    Private B
    Peer A
The RDS is in Private A, Private B, Peer A of VPC A.
The EC2 is in Peer A of VPC B.
I want to connect to the RDS instance from the EC2.
I have created a peering:
resource "aws_vpc_peering_connection" "a_to_b" {
vpc_id = aws_vpc.a.id
peer_vpc_id = aws_vpc.b.id
auto_accept = true
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
}
resource "aws_vpc_peering_connection_accepter" "a_to_b" {
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
auto_accept = true
}
I also have route tables for the whole CIDR block like so:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
resource "aws_route_table" "b_peer" {
vpc_id = aws_vpc.b.id
}
resource "aws_route_table_association" "b_peer" {
route_table_id = aws_route_table.b_peer.id
subnet_id = aws_subnet.b_peer.id
}
resource "aws_route" "b_peer_a" {
route_table_id = aws_route_table.b_peer.id
destination_cidr_block = aws_subnet.a_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
I have also created security groups with ingress and egress rules on the RDS instance that reference the EC2 instance's security group.
When I SSH into the EC2 instance, I can resolve the RDS DNS name:
$ nslookup rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Server: 192.16.0.2
Address: 192.16.0.2#53
Non-authoritative answer:
Name: rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Address: 10.16.192.135
However, curl cannot connect:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
The expected response is:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
curl: (52) Empty reply from server
The VPC peering is "Active" and the route tables match the Terraform.
How can I get this to connect?

I did some tests on my own, and I'm pretty sure that the issue is caused by your routes, assuming that everything else in your VPCs is correct, as the VPC and subnet definitions are not shown.
Specifically, you wrote that the "RDS is in Private A, Private B, Peer A of VPC A". This means that the RDS master may be in any of these subnets. You have no control over it, as it's up to RDS to choose which subnet to use; you can only partially control this by selecting AZs when you create your RDS instance. Consequently, your peering route tables should cover all three of these subnets. The easiest way to achieve this is by using the VPC CIDR range:
# Route from the instance in VPC B to any subnet in VPC A,
# which hosts your RDS in all of its subnets
resource "aws_route" "b_peer_a" {
  route_table_id            = aws_route_table.b_peer.id
  destination_cidr_block    = aws_vpc.a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
Then you also need a route table in VPC A, associated with all three of its subnets, that routes back through the peering connection:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route_table_association" "a_private1" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private1.id
}
resource "aws_route_table_association" "a_private2" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private2.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}

Related

Terraform: iterate to associate AWS resources

I'm trying to create a NAT gateway with Terraform in each public subnet that I created.
I create the public subnets like this:
resource "aws_subnet" "public_subnet" {
count = length(var.vpc.public_subnets)
vpc_id = aws_vpc.vpc.id
availability_zone = var.vpc.public_subnets[count.index].availability_zone
cidr_block = var.vpc.public_subnets[count.index].cidr_block
tags = var.vpc.public_subnets[count.index].tags
}
I create all the Elastic IPs like this:
resource "aws_eip" "eip" {
for_each = { for eip in var.vpc.eip : eip.name => eip }
vpc = true
tags = each.value.tags
}
And finally, I have a resource block to create 3 NAT gateways. Each NAT gateway has to use a subnet and an EIP:
resource "aws_nat_gateway" "ngw" {
count = length(var.vpc.public_subnets)
allocation_id = element(aws_eip.eip.*.allocation_id, count.index)
subnet_id = element(aws_subnet.public_subnet.*.id, count.index)
}
This results in: This object does not have an attribute named "allocation_id"
How should I iterate over the two resources to create a NAT gateway for each pair of subnet/EIP?
Thanks.
Since you are using for_each for the EIPs, aws_eip.eip will be a map, not a list. Thus, to access its values you can use values():
allocation_id = element(values(aws_eip.eip)[*].allocation_id, count.index)
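For reference, the full NAT gateway resource with that fix applied might look like this (a sketch keeping the variable structure from the question); note that values() orders the map by key, so check that this order lines up with your subnet order:
resource "aws_nat_gateway" "ngw" {
  count = length(var.vpc.public_subnets)

  # values() turns the for_each map of EIPs into a list that
  # element() can index with count.index
  allocation_id = element(values(aws_eip.eip)[*].allocation_id, count.index)
  subnet_id     = element(aws_subnet.public_subnet.*.id, count.index)
}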

Share RDS instance with another VPC, but no other resources?

I have created two VPCs using Terraform:
resource "aws_vpc" "alpha" {
cidr_block = "10.16.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "Alpha"
}
}
resource "aws_subnet" "alpha_private_a" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.192.0/24"
availability_zone = "${var.aws_region}a"
tags = {
Name = "Alpha Private A"
}
}
resource "aws_subnet" "alpha_private_b" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.224.0/24"
availability_zone = "${var.aws_region}b"
tags = {
Name = "Alpha Private B"
}
}
resource "aws_route_table" "alpha_private" {
vpc_id = aws_vpc.alpha.id
tags = {
Name = "Alpha Private"
}
}
resource "aws_route_table_association" "alpha_private_a" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_a.id
}
resource "aws_route_table_association" "alpha_private_b" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_b.id
}
# The same again for VPC "Bravo"
I also have an RDS instance in VPC "Alpha":
resource "aws_db_subnet_group" "alpha_rds" {
subnet_ids = [ aws_subnet.alpha_private_a.id, aws_subnet.alpha_private_b.id ]
tags = {
Name = "Alpha RDS"
}
}
resource "aws_db_instance" "alpha" {
identifier = "alpha"
allocated_storage = 20
max_allocated_storage = 1000
storage_type = "gp2"
engine = "postgres"
engine_version = "11.8"
publicly_accessible = false
db_subnet_group_name = aws_db_subnet_group.alpha_rds.name
performance_insights_enabled = true
vpc_security_group_ids = [ aws_security_group.alpha_rds.id ]
lifecycle {
prevent_destroy = true
}
}
Then I have an Elastic Beanstalk instance inside VPC "Bravo".
What I want to achieve:
alpha_rds is accessible to my Elastic Beanstalk instance inside Bravo VPC
Nothing else inside Alpha VPC is accessible to Bravo VPC
Nothing else inside Bravo VPC is accessible to Alpha VPC
I think VPC Peering is required for this?
How can I implement this in Terraform?
Related but not Terraform:
Access Private RDS DB From Another VPC
AWS Fargate connection to RDS in a different VPC
You should be able to set it up like this:
Create a VPC Peering Connection between Alpha and Bravo
In the route table for Alpha, add a route for the CIDR range of Bravo and set its target to the peering connection (pcx-XXXXXX) to Bravo
In the route table for Bravo, add a route for the IP address(es) of the database and point it to the peering connection to Alpha
This setup guarantees that resources in Bravo can only communicate with the database in Alpha; every other packet to that VPC can't be routed.
The inverse is a little tougher: right now this setup should stop TCP connections from Alpha to Bravo from being established, because there is no return path except for the database. UDP traffic could still go through, though its responses will be dropped unless it comes from the database.
At this point you could set up network ACLs on the subnets in Bravo to deny traffic from Alpha except for the database IPs. This depends on your level of paranoia or your requirements in terms of isolation; personally I wouldn't do it, but it's Friday afternoon and I'm in a lazy mood ;-).
Update
As Mark B correctly pointed out in the comments, there is a risk that the private IP addresses of your RDS cluster may change on failover if the underlying host can't be recovered.
To address these concerns, you could create separate subnets in Alpha for your database node(s) and substitute the database IPs in my description above with the CIDRs of these subnets. That allows for slightly more flexibility and lets you get around the NACL problem as well, because you can just edit the routing table of the new database subnet(s) and only add the peering connection there.
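For reference, a minimal Terraform sketch of that asymmetric routing setup might look like the following. It assumes the mirrored Bravo resources implied by the question (aws_vpc.bravo and aws_route_table.bravo_private are assumed names) and that the Alpha private subnets hold only the database; adapt the names to your configuration.
# Peering between the two VPCs (same account and region, so auto_accept works)
resource "aws_vpc_peering_connection" "alpha_bravo" {
  vpc_id      = aws_vpc.alpha.id
  peer_vpc_id = aws_vpc.bravo.id
  auto_accept = true
}

# Alpha's database subnets can answer back to anything in Bravo
resource "aws_route" "alpha_db_to_bravo" {
  route_table_id            = aws_route_table.alpha_private.id
  destination_cidr_block    = aws_vpc.bravo.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

# Bravo only gets routes to the database subnets, not the whole Alpha CIDR
resource "aws_route" "bravo_to_alpha_db_a" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

resource "aws_route" "bravo_to_alpha_db_b" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_b.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}
This keeps the rest of Alpha unreachable from Bravo (and vice versa) except as return traffic from the database subnets, matching the behaviour described above.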

How do AWS ECS Fargate availability zones work?

Two main questions, with Terraform code:
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
If I create one subnet per availability zone (us-east-2a, 2b, 2c, so three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
I'm trying to build infrastructure like the diagram below.
resource "aws_vpc" "cluster_vpc" {
tags = {
Name = "ecs-vpc"
}
cidr_block = "10.30.0.0/16"
}
data "aws_availability_zones" "available" {
}
resource "aws_subnet" "cluster" {
vpc_id = aws_vpc.cluster_vpc.id
count = length(data.aws_availability_zones.available.names)
cidr_block = "10.30.${10 + count.index}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "ecs-subnet"
}
}
resource "aws_internet_gateway" "cluster_igw" {
vpc_id = aws_vpc.cluster_vpc.id
tags = {
Name = "ecs-igw"
}
}
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.cluster_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.cluster_igw.id
}
tags = {
Name = "ecs-route-table"
}
}
resource "aws_route_table_association" "to-public" {
count = length(aws_subnet.cluster)
subnet_id = aws_subnet.cluster[count.index].id
route_table_id = aws_route_table.public_route.id
}
resource "aws_ecs_cluster" "staging" {
name = "service-ecs-cluster"
}
resource "aws_ecs_service" "staging" {
name = "staging"
cluster = aws_ecs_cluster.staging.id
task_definition = aws_ecs_task_definition.service.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = aws_subnet.cluster[*].id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.staging.arn
container_name = var.app_name
container_port = var.container_port
}
resource "aws_lb" "staging" {
name = "alb"
subnets = aws_subnet.cluster[*].id
load_balancer_type = "application"
security_groups = [aws_security_group.lb.id]
access_logs {
bucket = aws_s3_bucket.log_storage.id
prefix = "frontend-alb"
enabled = true
}
tags = {
Environment = "staging"
Application = var.app_name
}
}
(The lb_target_group and other specific components are omitted.)
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
Not really. It is there to provide a single, fixed endpoint (URL) for your ECS service. The ALB will automatically distribute incoming connections from the internet across your ECS tasks. They can be in one or multiple AZs. In your case it is only one AZ, since you are using desired_count = 1. This means that you will have only one ECS task in a single AZ.
If I create one subnet per availability zone (us-east-2a, 2b, 2c, so three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
Yes, because your ALB is enabled for the same subnets as your ECS service through aws_subnet.cluster[*].id. But as explained in the first question, you will have only one task in one AZ.
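For completeness, since the target group and listener were omitted from the question, a minimal sketch of those pieces for Fargate could look like this (the name "staging-tg" and port 80 are assumptions; target_type = "ip" is required for awsvpc/Fargate tasks):
resource "aws_lb_target_group" "staging" {
  name        = "staging-tg"        # hypothetical name
  port        = var.container_port
  protocol    = "HTTP"
  vpc_id      = aws_vpc.cluster_vpc.id
  target_type = "ip"                # Fargate (awsvpc) tasks register by IP
}

resource "aws_lb_listener" "staging" {
  load_balancer_arn = aws_lb.staging.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.staging.arn
  }
}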
My intent is to build infrastructure that has three availability zones and to deploy AWS Fargate across those three availability zones.
As explained before, your desired_count is 1, so you will not have ECS tasks across 3 AZs.
Also, you are creating only public subnets, while your schematic diagram shows that the ECS services should be in private ones.
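As a sketch of the change that would actually spread tasks over the three AZs (assuming the rest of the service stays as in the question), raise desired_count so ECS has tasks to place in each subnet; ECS spreads Fargate tasks across the provided subnets' AZs by default:
resource "aws_ecs_service" "staging" {
  name            = "staging"
  cluster         = aws_ecs_cluster.staging.id
  task_definition = aws_ecs_task_definition.service.arn
  desired_count   = 3          # enough tasks for one per AZ
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = aws_subnet.cluster[*].id   # one subnet per AZ
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.staging.arn
    container_name   = var.app_name
    container_port   = var.container_port
  }
}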

Why can I not ping my EC2 instance when I've set up the VPC and EC2 via Terraform?

I have a setup via Terraform which includes a VPC, a public subnet, and an EC2 instance with a security group. I am trying to ping the EC2 instance but get timeouts.
A few things I've tried to ensure:
the EC2 is in the subnet, and the subnet is routed to internet via the gateway
the EC2 has a security group allowing all traffic both ways
the EC2 has an elastic IP
The VPC has an ACL that is attached to the subnet and allows all traffic both ways
I'm not sure what I missed here.
My tf file looks like (edited to reflect latest changes):
resource "aws_vpc" "foobar" {
cidr_block = "10.0.0.0/16"
}
resource "aws_internet_gateway" "foobar_gateway" {
vpc_id = aws_vpc.foobar.id
}
/*
Public subnet
*/
resource "aws_subnet" "foobar_subnet" {
vpc_id = aws_vpc.foobar.id
cidr_block = "10.0.1.0/24"
}
resource "aws_route_table" "foobar_routetable" {
vpc_id = aws_vpc.foobar.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.foobar_gateway.id
}
}
resource "aws_route_table_association" "foobar_routetable_assoc" {
subnet_id = aws_subnet.foobar_subnet.id
route_table_id = aws_route_table.foobar_routetable.id
}
/*
Web
*/
resource "aws_security_group" "web" {
name = "vpc_web"
vpc_id = aws_vpc.foobar.id
ingress {
protocol = -1
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = -1
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_network_acl" "main" {
vpc_id = aws_vpc.foobar.id
subnet_ids = [aws_subnet.foobar_subnet.id]
egress {
protocol = -1
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 0
to_port = 0
}
ingress {
protocol = -1
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 0
to_port = 0
}
}
resource "aws_instance" "web-1" {
ami = "ami-0323c3dd2da7fb37d"
instance_type = "t2.micro"
subnet_id = aws_subnet.foobar_subnet.id
associate_public_ip_address = true
}
resource "aws_eip" "web-1" {
instance = aws_instance.web-1.id
vpc = true
}
Why are you adding the self parameter in your security group rule? The Terraform docs state that "If true, the security group itself will be added as a source to this ingress rule", which basically means that only that security group can access the instance. Please remove that and try again.
EDIT: see comments below for steps that fixed the problem
Allowing all the traffic through the security group would not enable ping to the instance. You need to add a specific security group rule that allows ICMP, as in the sketch below, to enable the ping request.
Remember that AWS has made this rule separate to ensure that you know what you are doing. Being able to ping the instance from anywhere in the world leaves your instance vulnerable to people trying to find instances by brute-forcing IP addresses.
Hence, it is advisable to change this rule carefully.
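For reference, a minimal sketch of the question's security group with an explicit ICMP rule added (the 0.0.0.0/0 source is only for illustration):
resource "aws_security_group" "web" {
  name   = "vpc_web"
  vpc_id = aws_vpc.foobar.id

  # Existing rule allowing all traffic
  ingress {
    protocol    = -1
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Explicit ICMP rule so echo requests (ping) are permitted;
  # protocol "icmp" with -1/-1 means all ICMP types and codes
  ingress {
    protocol    = "icmp"
    from_port   = -1
    to_port     = -1
    cidr_blocks = ["0.0.0.0/0"] # ideally narrow this to a trusted source range
  }

  egress {
    protocol    = -1
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Note also that a security group only takes effect if it is attached to the instance, for example via vpc_security_group_ids on aws_instance.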

Terraform: aws_elb.terraformelb: : invalid or unknown key: subnet_id

I'm trying to assign an ELB to a public subnet that's within a new VPC:
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.dev-vpc.id}"
cidr_block = "${var.public_subnet}"
availability_zone = "${var.aws_region}a"
map_public_ip_on_launch = false
tags {
Name = "public"
Environment = "${var.environment}"
}
}
and I get the following error:
aws_elb.terraformelb: : invalid or unknown key: subnet_id
If I remove the subnet parameter, the ELB is assigned to a default VPC.
Here's my terraform elb code:
resource "aws_elb" "terraformelb" {
subnet_id = "${aws_subnet.public.id}"
security_groups = ["${aws_security_group.terraformelb-sg.id}"]
cross_zone_load_balancing = "true"
idle_timeout = "60"
connection_draining = "true"
connection_draining_timeout = "300"
tags = {
Name = "${var.environment}-${var.environment_name}-elb"
Env_Name = "${var.environment}-${var.environment_name}"
Environment = "${var.environment}"
Version = "${var.version}"
}
listener {
lb_port = 80
lb_protocol = "http"
instance_port = "${var.server_port}"
instance_protocol = "http"
}
health_check {
healthy_threshold = "10"
unhealthy_threshold = "2"
timeout = "2"
interval = "5"
target = "HTTP:${var.server_port}/"
}
}
Please let me know how to assign an ELB to a subnet.
Thanks,
It's subnets instead of subnet_id = "${aws_subnet.public.id}".
subnets is the parameter you want:
subnets - (Required for a VPC ELB) A list of subnet IDs to attach to the ELB.
subnets = ["${aws_subnet.public.id}"]
Also, availability_zones is not required for VPC ELBs; it's implied by the subnets provided.
availability_zones - (Required for an EC2-classic ELB) The AZ's to serve traffic in.
https://www.terraform.io/docs/providers/aws/r/elb.html#subnets
It may also be a good idea to provision several more public subnets in different AZs if you are setting cross_zone_load_balancing to "true":
Create a subnet in each Availability Zone where you want to launch instances. Depending on your application, you can launch your instances in public subnets, private subnets, or a combination of public and private subnets. A public subnet has a route to an Internet gateway. Note that default VPCs have one public subnet per Availability Zone by default.
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-backend-instances.html
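Putting that together, a minimal sketch of the corrected resource might look like this (tags omitted for brevity; aws_subnet.public_b is a hypothetical second public subnet in another AZ, added for cross-zone balancing):
resource "aws_elb" "terraformelb" {
  # "subnets" (a list of subnet IDs) replaces the invalid "subnet_id" key
  subnets         = ["${aws_subnet.public.id}", "${aws_subnet.public_b.id}"]
  security_groups = ["${aws_security_group.terraformelb-sg.id}"]

  cross_zone_load_balancing   = "true"
  idle_timeout                = "60"
  connection_draining         = "true"
  connection_draining_timeout = "300"

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = "${var.server_port}"
    instance_protocol = "http"
  }

  health_check {
    healthy_threshold   = "10"
    unhealthy_threshold = "2"
    timeout             = "2"
    interval            = "5"
    target              = "HTTP:${var.server_port}/"
  }
}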