Share RDS instance with another VPC, but no other resources?

I have created two VPCs using Terraform:
resource "aws_vpc" "alpha" {
cidr_block = "10.16.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "Alpha"
}
}
resource "aws_subnet" "alpha_private_a" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.192.0/24"
availability_zone = "${var.aws_region}a"
tags = {
Name = "Alpha Private A"
}
}
resource "aws_subnet" "alpha_private_b" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.224.0/24"
availability_zone = "${var.aws_region}b"
tags = {
Name = "Alpha Private B"
}
}
resource "aws_route_table" "alpha_private" {
vpc_id = aws_vpc.alpha.id
tags = {
Name = "Alpha Private"
}
}
resource "aws_route_table_association" "alpha_private_a" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_a.id
}
resource "aws_route_table_association" "alpha_private_b" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_b.id
}
# The same again for VPC "Bravo"
I also have an RDS instance in VPC "Alpha":
resource "aws_db_subnet_group" "alpha_rds" {
subnet_ids = [ aws_subnet.alpha_private_a.id, aws_subnet.alpha_private_b.id ]
tags = {
Name = "Alpha RDS"
}
}
resource "aws_db_instance" "alpha" {
identifier = "alpha"
allocated_storage = 20
max_allocated_storage = 1000
storage_type = "gp2"
engine = "postgres"
engine_version = "11.8"
publicly_accessible = false
db_subnet_group_name = aws_db_subnet_group.alpha_rds.name
performance_insights_enabled = true
vpc_security_group_ids = [ aws_security_group.alpha_rds.id ]
lifecycle {
prevent_destroy = true
}
}
Then I have an Elastic Beanstalk instance inside VPC "Bravo".
What I want to achieve:
alpha_rds is accessible to my Elastic Beanstalk instance inside Bravo VPC
Nothing else inside Alpha VPC is accessible to Bravo VPC
Nothing else inside Bravo VPC is accessible to Alpha VPC
I think VPC Peering is required for this?
How can I implement this in Terraform?
Related but not Terraform:
Access Private RDS DB From Another VPC
AWS Fargate connection to RDS in a different VPC

You should be able to set it up like this:
Create a VPC peering connection between Alpha and Bravo.
In the route table for Alpha, add a route for the CIDR range of Bravo and set the target to the peering connection (pcx-XXXXXX) to Bravo.
In the route table for Bravo, add a route for the IP address(es) of the database and point it at the peering connection to Alpha.
This setup guarantees that resources in Bravo can only communicate with the database in Alpha; every other packet to that VPC can't be routed.
The inverse is a little tougher. Right now this setup should stop TCP connections from Alpha to Bravo from being established, because there is no return path except from the database. UDP traffic could still go through, although its responses will be dropped unless they come from the database.
At this point you could set up network ACLs on the subnets in Bravo to deny traffic from Alpha except from the database IPs, as sketched below. This depends on your level of paranoia or your requirements in terms of isolation; personally I wouldn't do it, but it's Friday afternoon and I'm in a lazy mood ;-).
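If you do want that extra layer, here is a hypothetical sketch of such a NACL on Bravo's private subnets. The Bravo subnet names are assumed, and NACL rules are evaluated in ascending rule_no order:
resource "aws_network_acl" "bravo_private" {
  vpc_id     = aws_vpc.bravo.id
  subnet_ids = [aws_subnet.bravo_private_a.id, aws_subnet.bravo_private_b.id] # assumed names

  # allow return traffic from the database subnets to ephemeral ports first
  ingress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = aws_subnet.alpha_private_a.cidr_block
    from_port  = 1024
    to_port    = 65535
  }

  ingress {
    rule_no    = 110
    protocol   = "tcp"
    action     = "allow"
    cidr_block = aws_subnet.alpha_private_b.cidr_block
    from_port  = 1024
    to_port    = 65535
  }

  # then deny everything else coming from Alpha
  ingress {
    rule_no    = 120
    protocol   = "-1"
    action     = "deny"
    cidr_block = aws_vpc.alpha.cidr_block
    from_port  = 0
    to_port    = 0
  }

  # allow all other ingress (NACLs end in an implicit deny)
  ingress {
    rule_no    = 200
    protocol   = "-1"
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  # leave egress unrestricted
  egress {
    rule_no    = 100
    protocol   = "-1"
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}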
Update
As Mark B correctly pointed out in the comments, there is a risk that the private IP addresses of your RDS instance may change on failover if the underlying host can't be recovered.
To address this concern, you could create separate subnets in Alpha for your database node(s) and substitute the CIDRs of these subnets for the database IPs in the description above. That allows for slightly more flexibility and also gets around the NACL problem, because you can simply edit the routing table of the new database subnet(s) and add the route to the peering connection only there.
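Putting it together, a minimal Terraform sketch of the whole setup might look like this. It assumes the Alpha resources from the question plus a hypothetical aws_vpc.bravo and aws_route_table.bravo_private on the other side:
resource "aws_vpc_peering_connection" "alpha_bravo" {
  vpc_id      = aws_vpc.alpha.id
  peer_vpc_id = aws_vpc.bravo.id
  auto_accept = true
}

# Alpha -> Bravo: the return route lives only in the route table of the
# database subnets, so nothing else in Alpha has a path to Bravo
resource "aws_route" "alpha_db_to_bravo" {
  route_table_id            = aws_route_table.alpha_private.id
  destination_cidr_block    = aws_vpc.bravo.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

# Bravo -> Alpha: route only the CIDRs of the database subnets over the peering
resource "aws_route" "bravo_to_alpha_db" {
  count                     = 2
  route_table_id            = aws_route_table.bravo_private.id # assumed name
  destination_cidr_block    = element([aws_subnet.alpha_private_a.cidr_block, aws_subnet.alpha_private_b.cidr_block], count.index)
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}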

Related

Cannot connect to RDS in another VPC via VPC peering

I have two VPCs:
VPC A
RDS instance
VPC B
EC2 instance
There are also a few subnets:
VPC A
Private A
Private B
Peer A
VPC B
Private A
Private B
Peer A
The RDS is in Private A, Private B, Peer A of VPC A.
The EC2 is in Peer A of VPC B.
I want to connect to the RDS instance from the EC2.
I have created a peering connection:
resource "aws_vpc_peering_connection" "a_to_b" {
vpc_id = aws_vpc.a.id
peer_vpc_id = aws_vpc.b.id
auto_accept = true
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
}
resource "aws_vpc_peering_connection_accepter" "a_to_b" {
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
auto_accept = true
}
I also have route tables for the whole CIDR block like so:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
resource "aws_route_table" "b_peer" {
vpc_id = aws_vpc.b.id
}
resource "aws_route_table_association" "b_peer" {
route_table_id = aws_route_table.b_peer.id
subnet_id = aws_subnet.b_peer.id
}
resource "aws_route" "b_peer_a" {
route_table_id = aws_route_table.b_peer.id
destination_cidr_block = aws_subnet.a_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
I have also created security group rules for ingress and egress on the RDS instance, referencing the EC2 security group.
When I SSH into the EC2 instance, I can resolve the DNS name:
$ nslookup rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Server: 192.16.0.2
Address: 192.16.0.2#53
Non-authoritative answer:
Name: rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Address: 10.16.192.135
However, curl cannot connect:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
The expected response is:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
curl: (52) Empty reply from server
The VPC peering is "Active" and the route tables match the Terraform.
How can I get this to connect?
I did some tests on my own and I'm pretty sure the issue is caused by your routes, assuming everything else in your VPC is correct (the VPC and subnet definitions are not shown).
Specifically, you wrote that the "RDS is in Private A, Private B, Peer A of VPC A". This means the RDS primary may be in any of these subnets; you have no control over it, as it's up to RDS to choose which subnet to use. You can only partially control it by selecting AZs when you create your RDS instance. Consequently, your peering route tables should cover all three subnets. The easiest way to achieve this is to use the VPC CIDR range:
# Route from the instance in VPC B to any subnet in VPC A,
# which covers the RDS instance in all of its subnets
resource "aws_route" "b_peer_a" {
  route_table_id            = aws_route_table.b_peer.id
  destination_cidr_block    = aws_vpc.a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
You also need a route table in VPC A that is associated with all of its subnets and routes back through the peering connection:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route_table_association" "a_private1" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private1.id
}
resource "aws_route_table_association" "a_private2" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private2.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
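Although you mention the security groups are already in place, it is worth double-checking that the RDS security group admits the Postgres port from VPC B; using the peer VPC's CIDR is the most portable option over peering. A minimal sketch, with the security group name assumed:
resource "aws_security_group_rule" "rds_from_vpc_b" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  security_group_id = aws_security_group.rds.id # assumed name of the RDS security group
  cidr_blocks       = [aws_vpc.b.cidr_block]
}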

How do AWS ECS Fargate availability zones work?

Two main questions, with Terraform code:
Is the ALB for ECS Fargate there for routing to other availability zones, or for routing to containers?
If I create one subnet per availability zone (us-east-2a, 2b, 2c, so three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
I'm trying to build infrastructure like the diagram below:
resource "aws_vpc" "cluster_vpc" {
tags = {
Name = "ecs-vpc"
}
cidr_block = "10.30.0.0/16"
}
data "aws_availability_zones" "available" {
}
resource "aws_subnet" "cluster" {
vpc_id = aws_vpc.cluster_vpc.id
count = length(data.aws_availability_zones.available.names)
cidr_block = "10.30.${10 + count.index}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "ecs-subnet"
}
}
resource "aws_internet_gateway" "cluster_igw" {
vpc_id = aws_vpc.cluster_vpc.id
tags = {
Name = "ecs-igw"
}
}
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.cluster_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.cluster_igw.id
}
tags = {
Name = "ecs-route-table"
}
}
resource "aws_route_table_association" "to-public" {
count = length(aws_subnet.cluster)
subnet_id = aws_subnet.cluster[count.index].id
route_table_id = aws_route_table.public_route.id
}
resource "aws_ecs_cluster" "staging" {
name = "service-ecs-cluster"
}
resource "aws_ecs_service" "staging" {
name = "staging"
cluster = aws_ecs_cluster.staging.id
task_definition = aws_ecs_task_definition.service.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = aws_subnet.cluster[*].id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.staging.arn
container_name = var.app_name
container_port = var.container_port
}
resource "aws_lb" "staging" {
name = "alb"
subnets = aws_subnet.cluster[*].id
load_balancer_type = "application"
security_groups = [aws_security_group.lb.id]
access_logs {
bucket = aws_s3_bucket.log_storage.id
prefix = "frontend-alb"
enabled = true
}
tags = {
Environment = "staging"
Application = var.app_name
}
}
# ... lb_target_group and other specific components omitted
Is the ALB for ECS Fargate there for routing to other availability zones, or for routing to containers?
Not really. It is there to provide a single, fixed endpoint (URL) for your ECS service. The ALB automatically distributes incoming connections from the internet across your ECS tasks, which can be in one or multiple AZs. In your case it is only one AZ, since you are using desired_count = 1. This means you will have only one ECS task in a single AZ.
If I create one subnet per availability zone (us-east-2a, 2b, 2c, so three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
Yes, because your ALB is enabled for the same subnets as your ECS service through aws_subnet.cluster[*].id. But as explained for the first question, you will have only one task in one AZ.
My intent is to build infrastructure which has three availability zones and also to deploy AWS Fargate across the three availability zones.
As explained before, your desired_count = 1, so you will not have ECS tasks across three AZs.
Also, you are creating only public subnets, while your schematic diagram shows that the ECS services should be in private ones. A sketch of both changes follows.
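A minimal sketch of the service with both changes applied; aws_subnet.cluster_private (private subnets routed through a NAT gateway) is assumed and not shown:
resource "aws_ecs_service" "staging" {
  name            = "staging"
  cluster         = aws_ecs_cluster.staging.id
  task_definition = aws_ecs_task_definition.service.arn
  desired_count   = 3 # allows one task per AZ
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = aws_subnet.cluster_private[*].id # hypothetical private subnets
    assign_public_ip = false                            # tasks reach the internet via NAT
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.staging.arn
    container_name   = var.app_name
    container_port   = var.container_port
  }
}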

Terraform get IPs of vpc endpoint subnets

I am trying to set up AWS SFTP Transfer in VPC endpoint mode, but there is one thing I can't manage.
The problem is how to get the target IPs for the NLB target group.
The only output I found:
output "vpc_endpoint_transferserver_network_interface_ids" {
description = "One or more network interfaces for the VPC Endpoint for transferserver"
value = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}
gives network interface IDs, which cannot be used as targets:
Outputs:

api_url = https://12345.execute-api.eu-west-1.amazonaws.com/prod
vpc_endpoint_transferserver_network_interface_ids = [
  "eni-12345",
  "eni-67890",
  "eni-abcde",
]
I went through:
terraform get subnet integration ips from vpc endpoint subnets tab
and
Terraform how to get IP address of aws_lb
but neither of them seems to work. The latter fails with:
on modules/sftp/main.tf line 134, in data "aws_network_interface" "ifs":
134: count = "${length(local.nlb_interface_ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
You can create an Elastic IP per subnet (the EIPs must not be attached to an instance):
resource "aws_eip" "example1" {
  vpc = true
}

resource "aws_eip" "example2" {
  vpc = true
}
Then reference the Elastic IPs in subnet_mapping blocks when creating the Network LB:
resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "network"

  subnet_mapping {
    subnet_id     = "${aws_subnet.example1.id}"
    allocation_id = "${aws_eip.example1.id}"
  }

  subnet_mapping {
    subnet_id     = "${aws_subnet.example2.id}"
    allocation_id = "${aws_eip.example2.id}"
  }
}
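As a side note, the "count cannot be determined until apply" error from the question can be sidestepped when the number of endpoint subnets is known up front, because count may then be a literal. A sketch, assuming a single endpoint spanning three subnets:
data "aws_network_interface" "endpoint" {
  count = 3 # one ENI per subnet of the VPC endpoint
  id    = sort(aws_vpc_endpoint.transfer_server.network_interface_ids)[count.index]
}

output "transfer_server_endpoint_ips" {
  value = data.aws_network_interface.endpoint[*].private_ip
}
These private IPs could then be registered in an NLB target group of target type "ip".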

Terraform: aws_elb.terraformelb: : invalid or unknown key: subnet_id

I'm trying to assign an ELB to a public subnet that's within a new VPC:
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.dev-vpc.id}"
cidr_block = "${var.public_subnet}"
availability_zone = "${var.aws_region}a"
map_public_ip_on_launch = false
tags {
Name = "public"
Environment = "${var.environment}"
}
}
and I get the following error:
aws_elb.terraformelb: : invalid or unknown key: subnet_id
If I remove the subnet parameter, the ELB is assigned to the default VPC.
Here's my Terraform ELB code:
resource "aws_elb" "terraformelb" {
subnet_id = "${aws_subnet.public.id}"
security_groups = ["${aws_security_group.terraformelb-sg.id}"]
cross_zone_load_balancing = "true"
idle_timeout = "60"
connection_draining = "true"
connection_draining_timeout = "300"
tags = {
Name = "${var.environment}-${var.environment_name}-elb"
Env_Name = "${var.environment}-${var.environment_name}"
Environment = "${var.environment}"
Version = "${var.version}"
}
listener {
lb_port = 80
lb_protocol = "http"
instance_port = "${var.server_port}"
instance_protocol = "http"
}
health_check {
healthy_threshold = "10"
unhealthy_threshold = "2"
timeout = "2"
interval = "5"
target = "HTTP:${var.server_port}/"
}
}
Please let me know how to assign an ELB to a subnet.
Thanks,
It's subnets instead of subnet_id = "${aws_subnet.public.id}"; subnets is the parameter you want.
subnets - (Required for a VPC ELB) A list of subnet IDs to attach to
the ELB.
subnets = ["${aws_subnet.public.id}"]
Also, availability_zones is not required for VPC ELBs; it's implied by the subnets provided.
availability_zones - (Required for an EC2-classic ELB) The AZ's to
serve traffic in.
https://www.terraform.io/docs/providers/aws/r/elb.html#subnets
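With that change, the resource from the question becomes:
resource "aws_elb" "terraformelb" {
  subnets         = ["${aws_subnet.public.id}"]
  security_groups = ["${aws_security_group.terraformelb-sg.id}"]
  # ... the remaining arguments stay unchanged
}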
It may be a good idea to also provision several more public subnets in different AZs if you are setting cross_zone_load_balancing to "true"; a sketch follows the quoted documentation below.
Create a subnet in each Availability Zone where you want to launch
instances. Depending on your application, you can launch your
instances in public subnets, private subnets, or a combination of
public and private subnets. A public subnet has a route to an Internet
gateway. Note that default VPCs have one public subnet per
Availability Zone by default.
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-backend-instances.html
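For example, a hypothetical second public subnet in another AZ (a var.public_subnet_b CIDR variable is assumed):
resource "aws_subnet" "public_b" {
  vpc_id                  = "${aws_vpc.dev-vpc.id}"
  cidr_block              = "${var.public_subnet_b}"
  availability_zone       = "${var.aws_region}b"
  map_public_ip_on_launch = false

  tags {
    Name        = "public-b"
    Environment = "${var.environment}"
  }
}
The ELB would then use subnets = ["${aws_subnet.public.id}", "${aws_subnet.public_b.id}"].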

Terraform: programmatically generate multiple routing tables for multiple VPC peering connections

Using terraform-0.7.7. I'm following, more or less, the Best Practices repo, the AWS part:
https://github.com/hashicorp/best-practices/tree/master/terraform
I am trying to come up with a general template for building VPCs. The idea is to have a network module with several sub-modules (vpc, private_subnet, etc.) and to plug different variables from terraform.tfvars files into it in order to build different environments.
Let's say in the .tfvars file for one environment I have a list of availability zones, and another list with the IP blocks for the private subnets:
azs = "us-west-2a,us-west-2b,us-west-2c"
private_subnets = "10.XXX.1.0/24,10.XXX.2.0/24,10.XXX.3.0/24"
The network/private_subnets module will happily create subnets, route tables, and associations based on those lists:
resource "aws_subnet" "private" {
vpc_id = "${var.vpc_id}"
cidr_block = "${element(split(",", var.cidrs), count.index)}"
availability_zone = "${element(split(",", var.azs), count.index)}"
count = "${length(split(",", var.cidrs))}"
tags { Name = "${var.name}-${element(split(",", var.azs), count.index)}-private" }
lifecycle { create_before_destroy = true }
}
resource "aws_route_table" "private" {
vpc_id = "${var.vpc_id}"
count = "${length(split(",", var.cidrs))}"
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(split(",", var.nat_gateway_ids), count.index)}"
}
tags { Name = "${var.name}-${element(split(",", var.azs), count.index)}-private" }
lifecycle { create_before_destroy = true }
}
resource "aws_route_table_association" "private" {
count = "${length(split(",", var.cidrs))}"
subnet_id = "${element(aws_subnet.private.*.id, count.index)}"
route_table_id = "${element(aws_route_table.private.*.id, count.index)}"
lifecycle { create_before_destroy = true }
}
That works well. For each environment I have different azs and private_subnets lists, and the VPC is created correctly. The NAT gateways are created before that in a different module. I have one NAT gateway, one private subnet, and one private routing table per AZ.
But now I'm trying to create VPC peering connections in the same way.
peer_vpc_ids = "vpc-XXXXXXXX, vpc-YYYYYYYY"
peer_vpc_blocks = "10.XXX.0.0/16, 10.YYY.0.0/16"
So now I need to write Terraform code that goes into the private route tables and adds routes for each VPC peering connection.
The problem is that I need to iterate over two variables: the list of AZs and the list of peering connections, and Terraform does not seem to allow that. AFAICT, you cannot do nested loops in Terraform.
Am I missing something? Is there a better way to solve this problem?
Of course I could write by hand some custom spaghetti code that would build the VPC no matter what, but the goal here is to keep the code composable and maintainable, and separate the logic from the attributes.
You can potentially achieve this with some math in the element() interpolation function:
resource "aws_vpc_peering_connection" {
peer_vpc_ids = "${element(var.vpc_ids, count.index)}"
peer_vpc_blocks = "${element(var.vpc_blocks,floor(count.index / length(var.vpc_ids))}"
count = "${length(var.vpc_ids) * length(var.vpc_blocks)}"
}
Specifically the element(var.vpc_blocks,floor(count.index / length(var.vpc_ids)) call will make VPC blocks increment 1 for each of the var.vpc_ids values.
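Applied to the route tables from the question, the same cross-product trick might look like this (a aws_vpc_peering_connection.peer resource with a count over the peer VPCs is assumed):
resource "aws_route" "private_peering" {
  count = "${length(split(",", var.azs)) * length(split(",", var.peer_vpc_blocks))}"

  # element() wraps around, so this cycles through the per-AZ route tables...
  route_table_id = "${element(aws_route_table.private.*.id, count.index)}"

  # ...while the peering block and connection advance once per full cycle
  destination_cidr_block    = "${element(split(",", var.peer_vpc_blocks), floor(count.index / length(split(",", var.azs))))}"
  vpc_peering_connection_id = "${element(aws_vpc_peering_connection.peer.*.id, floor(count.index / length(split(",", var.azs))))}"
}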