How do AWS ECS Fargate availability zones work? - amazon-web-services

Two main questions, with Terraform code:
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
If I create a subnet per availability zone (us-east-2a, 2b, 2c, so three AZs and three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
I'm trying to build infrastructure like the diagram below.
resource "aws_vpc" "cluster_vpc" {
tags = {
Name = "ecs-vpc"
}
cidr_block = "10.30.0.0/16"
}
data "aws_availability_zones" "available" {
}
resource "aws_subnet" "cluster" {
vpc_id = aws_vpc.cluster_vpc.id
count = length(data.aws_availability_zones.available.names)
cidr_block = "10.30.${10 + count.index}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "ecs-subnet"
}
}
resource "aws_internet_gateway" "cluster_igw" {
vpc_id = aws_vpc.cluster_vpc.id
tags = {
Name = "ecs-igw"
}
}
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.cluster_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.cluster_igw.id
}
tags = {
Name = "ecs-route-table"
}
}
resource "aws_route_table_association" "to-public" {
count = length(aws_subnet.cluster)
subnet_id = aws_subnet.cluster[count.index].id
route_table_id = aws_route_table.public_route.id
}
resource "aws_ecs_cluster" "staging" {
name = "service-ecs-cluster"
}
resource "aws_ecs_service" "staging" {
name = "staging"
cluster = aws_ecs_cluster.staging.id
task_definition = aws_ecs_task_definition.service.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = aws_subnet.cluster[*].id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.staging.arn
container_name = var.app_name
container_port = var.container_port
}
resource "aws_lb" "staging" {
name = "alb"
subnets = aws_subnet.cluster[*].id
load_balancer_type = "application"
security_groups = [aws_security_group.lb.id]
access_logs {
bucket = aws_s3_bucket.log_storage.id
prefix = "frontend-alb"
enabled = true
}
tags = {
Environment = "staging"
Application = var.app_name
}
}
... (omitting lb_target_group and other specific components)

Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
Not really. It is there to provide a single, fixed endpoint (URL) for your ECS service. The ALB will automatically distribute incoming connections from the internet across your ECS tasks. The tasks can be in one or multiple AZs. In your case it is only one AZ, since you are using desired_count = 1. This means that you will have only one ECS task running in a single AZ.
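To make "routing to containers" concrete, here is a minimal sketch of the pieces the question omits (the target group and listener names, ports and protocol are assumptions, not taken from the original code); the important part is target_type = "ip", which is what lets the ALB register the Fargate task ENIs as targets:
resource "aws_lb_target_group" "staging" {
  name        = "staging-tg"            # assumed name
  port        = var.container_port
  protocol    = "HTTP"
  vpc_id      = aws_vpc.cluster_vpc.id
  target_type = "ip"                    # Fargate tasks are registered by ENI IP address
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.staging.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.staging.arn
  }
}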
If I create a subnet per availability zone (us-east-2a, 2b, 2c, so three AZs and three subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
Yes, because your ALB is enabled in the same subnets as your ECS service through aws_subnet.cluster[*].id. But as explained under the first question, you will have only one task in one AZ.
My intent is to build infrastructure that spans three availability zones and also to deploy AWS Fargate tasks across those three availability zones.
As explained above, your desired_count = 1, so you will not have ECS tasks across the three AZs.
Also, you are creating only public subnets, while your schematic diagram shows that the ECS tasks should be in private ones.
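As a rough sketch of the change implied here (reusing the question's resource names; whether you keep assign_public_ip or move the tasks into private subnets behind a NAT gateway is up to you), raising desired_count lets the ECS scheduler spread tasks across the three subnets, and it balances across AZs by default:
resource "aws_ecs_service" "staging" {
  name            = "staging"
  cluster         = aws_ecs_cluster.staging.id
  task_definition = aws_ecs_task_definition.service.arn
  desired_count   = 3          # one task can now be placed in each of the three AZ subnets
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = aws_subnet.cluster[*].id   # ideally private subnets, per the diagram
    assign_public_ip = true                       # needed for image pulls when there is no NAT gateway
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.staging.arn
    container_name   = var.app_name
    container_port   = var.container_port
  }
}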

Related

Cannot connect to RDS in another VPC via VPC peering

I have two VPCs:
VPC A
  RDS instance
VPC B
  EC2 instance
There are also a few subnets:
VPC A
  Private A
  Private B
  Peer A
VPC B
  Private A
  Private B
  Peer A
The RDS is in Private A, Private B, Peer A of VPC A.
The EC2 is in Peer A of VPC B.
I want to connect to the RDS instance from the EC2.
I have created a peering:
resource "aws_vpc_peering_connection" "a_to_b" {
vpc_id = aws_vpc.a.id
peer_vpc_id = aws_vpc.b.id
auto_accept = true
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
}
resource "aws_vpc_peering_connection_accepter" "a_to_b" {
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
auto_accept = true
}
I also have route tables for the whole CIDR block like so:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
resource "aws_route_table" "b_peer" {
vpc_id = aws_vpc.b.id
}
resource "aws_route_table_association" "b_peer" {
route_table_id = aws_route_table.b_peer.id
subnet_id = aws_subnet.b_peer.id
}
resource "aws_route" "b_peer_a" {
route_table_id = aws_route_table.b_peer.id
destination_cidr_block = aws_subnet.a_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
I have also created security groups from ingress and egress on the RDS instance to the EC2 security group.
When I SSH into the EC2 I can get the DNS:
$ nslookup rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Server: 192.16.0.2
Address: 192.16.0.2#53
Non-authoritative answer:
Name: rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com
Address: 10.16.192.135
However, curl cannot connect:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
The expected response is:
$ curl rds.xxxxxxxxxxx.eu-west-2.rds.amazonaws.com:5432
curl: (52) Empty reply from server
The VPC peering is "Active" and the route tables match the Terraform.
How can I get this to connect?
I did some tests on my own, and I'm pretty sure the issue is caused by your routes, assuming everything else in your VPCs is correct, since the VPC and subnet definitions are not shown.
Specifically, you wrote that "RDS is in Private A, Private B, Peer A of VPC A". This means that the RDS master may be in any of these subnets. You have no control over it, as it's up to RDS to choose which subnet to use; you can only partially control it by selecting AZs when you create your RDS instance. Consequently, your peering route tables should cover all three of these subnets. The easiest way to achieve this is by using the VPC CIDR range:
# Route from the instance in VPC B to any subnet in VPC A,
# which hosts your RDS in all of its subnets
resource "aws_route" "b_peer_a" {
  route_table_id            = aws_route_table.b_peer.id
  destination_cidr_block    = aws_vpc.a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
Then you also need a route table in VPC A, associated with all of its subnets, for your peering connection:
resource "aws_route_table" "a_peer" {
vpc_id = aws_vpc.a.id
}
resource "aws_route_table_association" "a_peer" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_peer.id
}
resource "aws_route_table_association" "a_private1" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private1.id
}
resource "aws_route_table_association" "a_private2" {
route_table_id = aws_route_table.a_peer.id
subnet_id = aws_subnet.a_private2.id
}
resource "aws_route" "a_peer_b" {
route_table_id = aws_route_table.a_peer.id
destination_cidr_block = aws_subnet.b_peer.cidr_block
vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}

Share RDS instance with another VPC, but no other resources?

I have created two VPCs using Terraform:
resource "aws_vpc" "alpha" {
cidr_block = "10.16.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "Alpha"
}
}
resource "aws_subnet" "alpha_private_a" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.192.0/24"
availability_zone = "${var.aws_region}a"
tags = {
Name = "Alpha Private A"
}
}
resource "aws_subnet" "alpha_private_b" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.224.0/24"
availability_zone = "${var.aws_region}b"
tags = {
Name = "Alpha Private B"
}
}
resource "aws_route_table" "alpha_private" {
vpc_id = aws_vpc.alpha.id
tags = {
Name = "Alpha Private"
}
}
resource "aws_route_table_association" "alpha_private_a" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_a.id
}
resource "aws_route_table_association" "alpha_private_b" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_b.id
}
# The same again for VPC "Bravo"
I also have an RDS in VPC "Alpha":
resource "aws_db_subnet_group" "alpha_rds" {
subnet_ids = [ aws_subnet.alpha_private_a.id, aws_subnet.alpha_private_b.id ]
tags = {
Name = "Alpha RDS"
}
}
resource "aws_db_instance" "alpha" {
identifier = "alpha"
allocated_storage = 20
max_allocated_storage = 1000
storage_type = "gp2"
engine = "postgres"
engine_version = "11.8"
publicly_accessible = false
db_subnet_group_name = aws_db_subnet_group.alpha_rds.name
performance_insights_enabled = true
vpc_security_group_ids = [ aws_security_group.alpha_rds.id ]
lifecycle {
prevent_destroy = true
}
}
Then I have an Elastic Beanstalk instance inside VPC "Bravo".
What I want to achieve:
alpha_rds is accessible to my Elastic Beanstalk instance inside Bravo VPC
Nothing else inside Alpha VPC is accessible to Bravo VPC
Nothing else inside Bravo VPC is accessible to Alpha VPC
I think VPC Peering is required for this?
How can I implement this in Terraform?
Related but not Terraform:
Access Private RDS DB From Another VPC
AWS Fargate connection to RDS in a different VPC
You should be able to set it up like this:
Create a VPC Peering Connection between Alpha and Bravo
In the route table for Alpha, add a route for the CIDR range of Bravo and set the target to the peering connection (pcx-XXXXXX) to Bravo
In the route table for Bravo, add a route for the IP address(es) of the database and point it at the peering connection to Alpha
This setup guarantees that resources in Bravo can only communicate with the database in Alpha; every other packet to that VPC can't be routed.
The inverse is a little tougher - right now this setup should stop TCP connections from Alpha to Bravo from being established, because there is no return path except for the database. UDP traffic could still go through, although its response will be dropped unless it comes from the database.
At this point you could set up Network Access Control lists in the Subnets in Bravo to Deny traffic from Alpha except for the database IPs. This depends on your level of paranoia or your requirements in terms of isolation - personally I wouldn't do it, but it's Friday afternoon and I'm in a lazy mood ;-).
Update
As Mark B correctly pointed out in the comments, there is a risk that the private IP addresses of your RDS cluster may change on failover if the underlying host can't be recovered.
To address these concerns, you could create separate subnets in Alpha for your database node(s) and substitute the database IPs in my description above with the CIDRs of these subnets. That should allow for slightly more flexibility and allows you to get around the NACL problem as well, because you can just edit the routing table of the new database subnet(s) and only add the Peering Connection there.
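Pulling the pieces together, here is a minimal Terraform sketch of this setup. The Bravo-side names aws_vpc.bravo and aws_route_table.bravo_private are assumptions mirroring the Alpha resources from the question; with the dedicated database subnets suggested in the update, you would swap in their CIDRs instead:
# Sketch only: Bravo-side resource names are assumed, Alpha-side names come from the question.
resource "aws_vpc_peering_connection" "alpha_bravo" {
  vpc_id      = aws_vpc.alpha.id
  peer_vpc_id = aws_vpc.bravo.id
  auto_accept = true
}

# Alpha -> Bravo: allow return traffic to the whole Bravo CIDR
resource "aws_route" "alpha_to_bravo" {
  route_table_id            = aws_route_table.alpha_private.id
  destination_cidr_block    = aws_vpc.bravo.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

# Bravo -> Alpha: only route to the database subnets, nothing else in Alpha
resource "aws_route" "bravo_to_alpha_db_a" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

resource "aws_route" "bravo_to_alpha_db_b" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_b.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}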

How to create an EKS cluster with public and private subnets using terraform?

I'm using Terraform to set up an EKS cluster. I need to make sure that my worker nodes will be placed in private subnets and that my public subnets will be used for my load balancers, but I don't actually know how to inject public and private subnets into my cluster, because I'm currently only using private ones.
resource "aws_eks_cluster" "master_node" {
name = "my-cluster"
role_arn = aws_iam_role.master_iam_role.arn
version = "1.14"
vpc_config {
security_group_ids = [aws_security_group.master_security_group.id]
subnet_ids = var.private_subnet_eks_ids
}
depends_on = [
aws_iam_role_policy_attachment.main-cluster-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.main-cluster-AmazonEKSServicePolicy,
]
}
resource "aws_autoscaling_group" "eks_autoscaling_group" {
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.eks_launch_config.id
max_size = var.max_size
min_size = var.min_size
name = "my-autoscaling-group"
vpc_zone_identifier = var.private_subnet_eks_ids
depends_on = [
aws_efs_mount_target.efs_mount_target
]
}
Give only private subnets to your EKS cluster, but before that, make sure you've tagged the public subnets like so:
Key: kubernetes.io/role/elb
Value: 1
as described here: https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
EKS will discover the public subnets in which to place the load balancer by querying for these tags.
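In Terraform that tag is just part of the subnet definition; here is a small sketch (the VPC reference, CIDR and AZ are placeholder values, not from the question):
resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id       # placeholder VPC reference
  cidr_block        = "10.0.101.0/24"       # example value
  availability_zone = "eu-north-1a"         # example value

  tags = {
    "kubernetes.io/role/elb" = "1"          # lets EKS discover this subnet for internet-facing load balancers
  }
}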
I usually create both public and private subnets in the VPC using the vpc module. Then I create the EKS cluster using the eks module and refer to the VPC data.
Example
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-north-1a", "eu-north-1b", "eu-north-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
}
And then the EKS cluster, where I refer to the VPC subnets using module.vpc.private_subnets and module.vpc.vpc_id:
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "my-eks-cluster"
cluster_version = "1.17"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups = [
{
instance_type = "t3.small"
asg_max_size = 2
}
]
}
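If you take the module route above, the discovery tags from the first answer can likely be passed through the vpc module's subnet tag inputs (public_subnet_tags and private_subnet_tags; argument names assumed from the terraform-aws-modules/vpc documentation), for example:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-north-1a", "eu-north-1b", "eu-north-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  # Assumed module inputs: attach the Kubernetes load balancer discovery tags
  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}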

Terraform - DB and security group are in different VPCs

What am I trying to achieve:
Create an RDS Aurora cluster and place it in the same VPC as the EC2 instances that I start, so they can communicate.
I'm trying to create an SG named "RDS_DB_SG" and make it part of the VPC I'm creating in the process.
I also create an SG named "BE_SG" and make it part of the same VPC.
I'm doing this so I can allow access between the two (RDS and the BE server).
What I did so far:
Created a .tf file and started everything up.
What I got:
It starts OK if I don't include the RDS cluster inside the RDS SG - the RDS creates its own VPC.
When I include the RDS cluster in the SG I want for it, the RDS cluster can't start and gets an error.
Error I got:
"The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-5a***63c and the EC2 security group is in vpc-0e5391*****273b3d"
Workaround for now:
I started the infrastructure without specifying a VPC for the RDS. It created its own default VPC.
I then manually created VPC peering between the VPC that was created for the EC2 instances and the VPC that was created for the RDS.
But I want them to be in the same VPC so I won't have to create the VPC peering manually.
My .tf code:
variable "vpc_cidr" {
description = "CIDR for the VPC"
default = "10.0.0.0/16"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_subnet" "vpc_subnet" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.vpc_cidr}"
availability_zone = "eu-west-1a"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_db_subnet_group" "subnet_group" {
name = "${var.env}-subnet-group"
subnet_ids = ["${aws_subnet.vpc_subnet.id}"]
}
resource "aws_security_group" "RDS_DB_SG" {
name = "${var.env}-rds-sg"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 3396
to_port = 3396
protocol = "tcp"
security_groups = ["${aws_security_group.BE_SG.id}"]
}
}
resource "aws_security_group" "BE_SG" {
name = "${var.env}_BE_SG"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "BE" {
ami = "ami-*********************"
instance_type = "t2.large"
associate_public_ip_address = true
key_name = "**********"
tags = {
Name = "WEB-${var.env}"
Porpuse = "Launched by Terraform"
ENV = "${var.env}"
}
subnet_id = "${aws_subnet.vpc_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.BE_SG.id}", "${aws_security_group.ssh.id}"]
}
resource "aws_rds_cluster" "rds-cluster" {
cluster_identifier = "${var.env}-cluster"
database_name = "${var.env}-rds"
master_username = "${var.env}"
master_password = "PASSWORD"
backup_retention_period = 5
vpc_security_group_ids = ["${aws_security_group.RDS_DB_SG.id}"]
}
resource "aws_rds_cluster_instance" "rds-instance" {
count = 1
cluster_identifier = "${aws_rds_cluster.rds-cluster.id}"
instance_class = "db.r4.large"
engine_version = "5.7.12"
engine = "aurora-mysql"
preferred_backup_window = "04:00-22:00"
}
Any suggestions on how to achieve my first goal?
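A hedged sketch of one way to keep everything in the same VPC (this is my assumption, not part of the original thread): give the cluster the db_subnet_group_name of the subnet group defined above, which pins the cluster to that subnet group's VPC. Note that an RDS subnet group needs subnets in at least two AZs, so the single subnet covering the whole /16 would first have to be split into two smaller subnets in different AZs:
# Sketch: assumes vpc_subnet has been split into two subnets in different AZs,
# e.g. aws_subnet.vpc_subnet_a (eu-west-1a) and aws_subnet.vpc_subnet_b (eu-west-1b)
resource "aws_db_subnet_group" "subnet_group" {
  name       = "${var.env}-subnet-group"
  subnet_ids = ["${aws_subnet.vpc_subnet_a.id}", "${aws_subnet.vpc_subnet_b.id}"]
}

resource "aws_rds_cluster" "rds-cluster" {
  cluster_identifier      = "${var.env}-cluster"
  engine                  = "aurora-mysql"
  master_username         = "${var.env}"
  master_password         = "PASSWORD"
  backup_retention_period = 5
  db_subnet_group_name    = "${aws_db_subnet_group.subnet_group.name}"   # keeps the cluster in this VPC
  vpc_security_group_ids  = ["${aws_security_group.RDS_DB_SG.id}"]
}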

How do I attach an elastic IP upon creation of a network load balancer with Terraform?

I'm attempting to associate my Elastic IP address with a newly created network load balancer using Terraform. I see no option in the aws_lb documentation for adding an Elastic IP like one is able to do in the AWS console. The difficulty is that you have to associate the Elastic IP upon creation of the NLB.
EDIT: They have now added an explicit example to their documentation!
The aws_lb resource has a subnet_mapping block which allows you to specify an Elastic IP per subnet that the network load balancer exists in.
An absolutely minimal example looks something like this:
resource "aws_eip" "lb" {
vpc = true
}
resource "aws_lb" "network" {
name = "test-lb-tf"
load_balancer_type = "network"
subnet_mapping {
subnet_id = "${var.subnet_id}"
allocation_id = "${aws_eip.lb.id}"
}
}
Obviously you'll want to run the load balancer in multiple subnets, so you'd use something like this:
variable "vpc" {}
data "aws_vpc" "selected" {
tags {
Name = "${var.vpc}"
}
}
data "aws_subnet_ids" "public" {
vpc_id = "${data.aws_vpc.selected.id}"
tags {
Tier = "public"
}
}
resource "aws_eip" "lb" {
count = "${length(data.aws_subnet_ids.public)}"
vpc = true
}
resource "aws_lb" "network" {
name = "test-lb-tf"
internal = false
load_balancer_type = "network"
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[0]}"
allocation_id = "${aws_eip.lb.id[0]}"
}
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[1]}"
allocation_id = "${aws_eip.lb.id[1]}"
}
subnet_mapping {
subnet_id = "${data.aws_subnet_ids.public.ids[2]}"
allocation_id = "${aws_eip.lb.id[2]}"
}
}
The above assumes you have tagged your VPC with a Name tag and your subnets with a Tier tag that, in this case, uses public as the value for any external-facing subnets. It then creates an Elastic IP address for each of the public subnets and a network load balancer spanning those public subnets, attaching an Elastic IP to each of them.
The above answer is correct; however, it can now be simplified using dynamic blocks, available since Terraform 0.12. This has the advantage of working in VPCs with more or fewer subnets.
resource "aws_lb" "network" {
name = "test-lb-tf"
internal = false
load_balancer_type = "network"
dynamic "subnet_mapping" {
for_each = data.aws.subnet_ids.public_ids
content {
subnet_id = subnet_mapping.value
allocation_id = aws_eip.lb.id[subnet_mapping.key].allocation_id
}
}
}
Here's my implementation, based on the answer above:
resource "aws_eip" "nlb" {
for_each = toset(aws_subnet.public.*.id)
vpc = true
tags = {
"Name" = "my-app-nlb-eip"
}
}
resource "aws_lb" "nlb" {
name = "my-app-nlb"
internal = false
load_balancer_type = "network"
enable_deletion_protection = false
enable_cross_zone_load_balancing = true
dynamic "subnet_mapping" {
for_each = toset(aws_subnet.public.*.id)
content {
subnet_id = subnet_mapping.value
allocation_id = aws_eip.nlb[subnet_mapping.key].allocation_id
}
}
tags = {
Environment = "development"
}
}