My situation is as follows:
I’m creating an AWS ECS on Fargate setup (an NGINX container for SFTP traffic) with Terraform, with a network load balancer in front of it. I have most parts set up and the current configuration works, but now I want to add more target groups so I can expose additional ports on the container. My variable looks like this:
variable "sftp_ports" {
type = map
default = {
test1 = {
port = 50003
}
test2 = {
port = 50004
}
}
}
and the actual deployment is as follows:
resource "aws_alb_target_group" "default-target-group" {
name = local.name
port = var.sftp_test_port
protocol = "TCP"
target_type = "ip"
vpc_id = data.aws_vpc.default.id
depends_on = [
aws_lb.proxypoc
]
}
resource "aws_alb_target_group" "test" {
for_each = var.sftp_ports
name = "sftp-target-group-${each.key}"
port = each.value.port
protocol = "TCP"
target_type = "ip"
vpc_id = data.aws_vpc.default.id
depends_on = [
aws_lb.proxypoc
]
}
resource "aws_alb_listener" "ecs-alb-https-listenertest" {
for_each = var.sftp_ports
load_balancer_arn = aws_lb.proxypoc.id
port = each.value.port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_alb_target_group.default-target-group.arn
}
}
This deploys the needed listeners and target groups just fine, but the problem I have is how to configure the registered-target part. The aws_ecs_service resource only allows one target group ARN as far as I can see, so I have no clue how to add the additional target groups to reach my goal. I've been wrapping my head around this problem and scoured the internet, but so far... nothing. So is it possible to configure the ECS service to contain multiple target group ARNs, or am I supposed to configure a single target group with multiple ports? (As far as I know, that is not supported out of the box; I checked the docs as well. But it is possible to add multiple registered targets in the GUI, so I guess it is a possibility.)
I’d like to hear from you guys,
Thanks!
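For what it's worth, ECS services can nowadays be attached to more than one target group, and the Terraform aws_ecs_service resource accepts multiple load_balancer blocks. A minimal sketch under that assumption, with the cluster, task definition, networking, and container name as placeholders:

resource "aws_ecs_service" "proxypoc" {
  name            = local.name
  cluster         = aws_ecs_cluster.proxypoc.id          # placeholder cluster
  task_definition = aws_ecs_task_definition.proxypoc.arn # placeholder task definition
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = data.aws_subnet_ids.default.ids    # placeholder networking
    security_groups = [aws_security_group.proxypoc.id]
  }

  # One load_balancer block per target group, i.e. per SFTP port.
  dynamic "load_balancer" {
    for_each = var.sftp_ports
    content {
      target_group_arn = aws_alb_target_group.test[load_balancer.key].arn
      container_name   = "nginx"                         # placeholder container name
      container_port   = load_balancer.value.port
    }
  }
}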
Using Terraform, I'm provisioning infra in AWS for my K3s cluster. I have provisioned an NLB with two listeners, on ports 80 and 443, with an appropriate self-signed cert. This works: I can access HTTP services in my cluster via the NLB.
resource "tls_private_key" "agents" {
algorithm = "RSA"
}
resource "tls_self_signed_cert" "agents" {
key_algorithm = "RSA"
private_key_pem = tls_private_key.agents.private_key_pem
validity_period_hours = 24
subject {
common_name = "my hostname"
organization = "My org"
}
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth"
]
}
resource "aws_acm_certificate" "agents" {
private_key = tls_private_key.agents.private_key_pem
certificate_body = tls_self_signed_cert.agents.cert_pem
}
resource "aws_lb" "agents" {
name = "basic-load-balancer"
load_balancer_type = "network"
subnet_mapping {
subnet_id = aws_subnet.agents.id
allocation_id = aws_eip.agents.id
}
}
resource "aws_lb_listener" "agents_80" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_80.arn
}
}
resource "aws_lb_listener" "agents_443" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TLS"
port = 443
certificate_arn = aws_acm_certificate.agents.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_443.arn
}
}
resource "aws_lb_target_group" "agents_80" {
port = 30000
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_lb_target_group" "agents_443" {
port = 30001
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_autoscaling_attachment" "agents_80" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_80.arn
}
resource "aws_autoscaling_attachment" "agents_443" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_443.arn
}
That's a cut-down version of my code.
I have configured my ingress controller to listen for HTTP and HTTPS on NodePorts 30000 and 30001 respectively. This works too.
The thing that doesn't work is that the NLB is terminating TLS, but I need it to pass the TLS traffic through. I'm doing this so that I can access the Kubernetes Dashboard (among other apps), but the dashboard requires HTTPS to sign in, something I can't provide if TLS is terminated at the NLB.
I need help configuring the NLB for passthrough. I have searched and searched and can't find any examples. If anyone knows how to configure this, it would be good to get some Terraform code, or even just an idea of the appropriate way of achieving it in AWS so that I can implement it myself in Terraform.
Do you need TLS passthrough, or just TLS communication between the NLB and the server? Or do you just need to configure your server to be aware that the initial connection was TLS?
For TLS passthrough you would install an SSL certificate on the server and delete the certificate from the load balancer. You would change the protocol of the port 443 listener on the load balancer from "TLS" to "TCP". This is not a very typical setup on AWS, and you can't use the free AWS ACM SSL certificates in this configuration; you would have to use something like Let's Encrypt on the server.
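With the resources from the question, that boils down to replacing the TLS listener with a plain TCP one (a sketch; the tls_* and aws_acm_certificate resources would simply be removed, and the existing agents_443 target group pointing at NodePort 30001 can stay as it is):

# TLS passthrough: a plain TCP listener on 443, no certificate on the NLB,
# so the encrypted traffic is handed to the ingress controller untouched.
resource "aws_lb_listener" "agents_443" {
  load_balancer_arn = aws_lb.agents.arn
  protocol          = "TCP"
  port              = 443

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.agents_443.arn
  }
}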
For TLS communication between the NLB and the server, you would install a certificate on the server (a self-signed cert is fine for this) and then just change the target group settings on the load balancer to point to the secure ports on the server.
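With the question's resources, that variant would keep the TLS listener and certificate as they are and re-encrypt towards the node's HTTPS NodePort, roughly like this (a sketch, not tested against this exact setup):

# TLS between the NLB and the nodes: the target group itself speaks TLS
# to the backend's HTTPS NodePort (30001 in the question).
resource "aws_lb_target_group" "agents_443" {
  port     = 30001
  protocol = "TLS"
  vpc_id   = var.vpc.id

  depends_on = [
    aws_lb.agents
  ]
}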
If you just want to make the server aware that the initial connection protocol was TLS, you would configure the server to use the x-forwarded-proto header passed by the load balancer to determine if the connection is secure.
I am running an ECS cluster with about 20 containers. I have a big monolith application running in one container, which needs to listen on 10 ports.
However, AWS allows a maximum of 5 load balancer target group attachments per ECS service.
Any ideas how to overcome this (if possible)? Here's what I've tried:
Defining 10+ target groups with 1 listener each. This doesn't work, since AWS allows at most 5 load_balancer definitions in aws_ecs_service; see the 2nd bullet under "Service load balancing considerations" here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Defining 10+ listeners with 1 target group; however, all listeners then forward to a single port on the container...
Trying without specifying port in the load_balancer definition in aws_ecs_service; however, AWS complains about a missing argument.
Trying without specifying port in the aws_lb_target_group; however, AWS complains that the target type is "ip", so port is required...
Here's my current code:
resource "aws_ecs_service" "service_app" {
name = "my_service_name"
cluster = var.ECS_CLUSTER_ID
task_definition = aws_ecs_task_definition.task_my_app.arn
desired_count = 1
force_new_deployment = true
...
load_balancer { # Note: I've stripped the for_each to simplify reading
target_group_arn = var.tga
container_name = var.n
container_port = var.p
}
}
resource "aws_lb_target_group" "tg_mytg" {
name = "Name"
protocol = "HTTP"
port = 3000
target_type = "ip"
vpc_id = aws_vpc.my_vpc.id
}
resource "aws_lb_listener" "ls_3303" {
load_balancer_arn = aws_lb.my_lb.id
port = "3303"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.tg_mytg.arn
}
}
...
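For reference, the for_each that was stripped from the load_balancer block above presumably looks something like the following dynamic block; the variable name and its shape are assumptions:

  # Hypothetical shape of the stripped for_each: one load_balancer block per
  # exposed port, driven by a map such as { p3303 = { tg_arn = "...", port = 3303 }, ... }
  dynamic "load_balancer" {
    for_each = var.app_ports # assumed variable
    content {
      target_group_arn = load_balancer.value.tg_arn
      container_name   = var.n
      container_port   = load_balancer.value.port
    }
  }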
I'm confused about configuring a load balancer (NLB) in AWS. When configuring the LB as below (it's the Terraform file), I never specified the HTTPS protocol. However, after the LB gets spun up, I can only reach my targets via https://LB_ARN:80, and nothing is shown when I hit http://LB_ARN:80. I am quite confused about the reason, and the even more confusing part is that when I use https://LB_ARN:80 as the address, my browser (Chrome) tells me the site is not secure (even though it is HTTPS). Any help please?
resource "aws_lb" "boundary" {
name = "boundary-nlb"
load_balancer_type = "network"
internal = false
subnets = data.terraform_remote_state.network.outputs.tokyo_vpc_main.public_subnet_ids
tags = merge(local.common_tags, {
Name = "boundary-${terraform.workspace}-controller-nlb"
})
}
resource "aws_lb_target_group" "boundary" {
name = "boundary-nlb"
port = 9200
protocol = "TCP"
vpc_id = data.terraform_remote_state.network.outputs.tokyo_vpc_main.vpc_id
stickiness {
enabled = false
type = "source_ip"
}
tags = merge(local.common_tags, {
Name = "boundary-${terraform.workspace}-controller-nlb-tg"
})
}
resource "aws_lb_target_group_attachment" "boundary" {
count = var.num_controllers
target_group_arn = aws_lb_target_group.boundary.arn
target_id = aws_instance.controller[count.index].id
port = 9200
}
resource "aws_lb_listener" "boundary" {
load_balancer_arn = aws_lb.boundary.arn
port = "80"
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.boundary.arn
}
}
resource "aws_security_group" "boundary_lb" {
vpc_id = data.terraform_remote_state.network.outputs.tokyo_vpc_main.vpc_id
tags = merge(local.common_tags, {
Name = "boundary-${terraform.workspace}-controller-nlb-sg"
})
}
resource "aws_security_group_rule" "allow_9200" {
type = "ingress"
from_port = 9200
to_port = 9200
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.boundary_lb.id
}
This appears to be a misconfiguration of your backend servers. Specifically, they seem to serve HTTPS connections on port 80.
Since you are using an NLB with the TCP protocol, any HTTPS connection is forwarded to your backend servers. Meaning, there is no SSL termination on your NLB. So even though you haven't specified HTTPS in your NLB settings, HTTPS connections are forwarded on top of TCP to your backend instances. The backend instances, not the NLB, handle the HTTPS, presumably with a self-signed SSL certificate, on the wrong port. This would explain the warnings from the browser.
I would recommend checking the configuration of your backend services and making sure that they are serving HTTP on port 80, not HTTPS as seems to be the case now.
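For contrast, if you actually wanted the NLB itself to terminate TLS, the listener would need the TLS protocol and a certificate, roughly like the sketch below (the ACM certificate resource is an assumption), and the backends behind it would then have to serve plain HTTP:

# Hypothetical TLS-terminating listener on the NLB (not what the current
# config does): requires a certificate and the "TLS" protocol.
resource "aws_lb_listener" "boundary_tls" {
  load_balancer_arn = aws_lb.boundary.arn
  port              = "443"
  protocol          = "TLS"
  certificate_arn   = aws_acm_certificate.boundary.arn # assumed certificate resource

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.boundary.arn
  }
}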
I have created a Custom VPC using the Terraform code below.
I have applied the Terraform module below, which:
Adds Public and Private Subnets
Configures IGW
Adds NAT Gateway
Adds SGs for both Web and DB Servers
After applying this, I am not able to ping/SSH from the public EC2 instance to the private EC2 instance.
Not sure what is missing.
# Custom VPC
resource "aws_vpc" "MyVPC" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default" # For Prod use "dedicated"

  tags = {
    Name = "MyVPC"
  }
}
# Creates "Main Route Table", "NACL" & "default Security Group"

# Create Public Subnet, Associate with our VPC, Auto assign Public IP
resource "aws_subnet" "PublicSubNet" {
  vpc_id                  = aws_vpc.MyVPC.id # Our VPC
  availability_zone       = "eu-west-2a"     # AZ within London, 1 Subnet = 1 AZ
  cidr_block              = "10.0.1.0/24"    # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"
  map_public_ip_on_launch = "true"           # Auto assign Public IP for Public Subnet

  tags = {
    Name = "PublicSubNet"
  }
}

# Create Private Subnet, Associate with our VPC
resource "aws_subnet" "PrivateSubNet" {
  vpc_id            = aws_vpc.MyVPC.id # Our VPC
  availability_zone = "eu-west-2b"     # AZ within London region, 1 Subnet = 1 AZ
  cidr_block        = "10.0.2.0/24"    # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"

  tags = {
    Name = "PrivateSubNet"
  }
}

# Only 1 IGW per VPC
resource "aws_internet_gateway" "MyIGW" {
  vpc_id = aws_vpc.MyVPC.id

  tags = {
    Name = "MyIGW"
  }
}

# New Public route table, so we can keep "default main" route table as Private. Route out to MyIGW
resource "aws_route_table" "MyPublicRouteTable" {
  vpc_id = aws_vpc.MyVPC.id # Our VPC

  route { # Route out IPV4
    cidr_block = "0.0.0.0/0" # IPV4 Route Out for all
    # ipv6_cidr_block = "::/0" The parameter destinationCidrBlock cannot be used with the parameter destinationIpv6CidrBlock # IPV6 Route Out for all
    gateway_id = aws_internet_gateway.MyIGW.id # Target : Internet Gateway created earlier
  }

  route { # Route out IPV6
    ipv6_cidr_block = "::/0"                        # IPV6 Route Out for all
    gateway_id      = aws_internet_gateway.MyIGW.id # Target : Internet Gateway created earlier
  }

  tags = {
    Name = "MyPublicRouteTable"
  }
}

# Associate "PublicSubNet" with the public route table created above, removes it from default main route table
resource "aws_route_table_association" "PublicSubNetnPublicRouteTable" {
  subnet_id      = aws_subnet.PublicSubNet.id
  route_table_id = aws_route_table.MyPublicRouteTable.id
}

# Create new security group "WebDMZ" for WebServer
resource "aws_security_group" "WebDMZ" {
  name        = "WebDMZ"
  description = "Allows SSH & HTTP requests"
  vpc_id      = aws_vpc.MyVPC.id # Our VPC : SGs cannot span VPC

  ingress {
    description = "Allows SSH requests for VPC: IPV4"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
  }

  ingress {
    description = "Allows HTTP requests for VPC: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # You can use Load Balancer
  }

  ingress {
    description      = "Allows HTTP requests for VPC: IPV6"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    description = "Allows SSH requests for VPC: IPV4"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
  }

  egress {
    description = "Allows HTTP requests for VPC: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description      = "Allows HTTP requests for VPC: IPV6"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }
}

# Create new EC2 instance (WebServer01) in Public Subnet
# Get ami id from Console
resource "aws_instance" "WebServer01" {
  ami             = "ami-01a6e31ac994bbc09"
  instance_type   = "t2.micro"
  subnet_id       = aws_subnet.PublicSubNet.id
  key_name        = "MyEC2KeyPair"                 # To connect using key pair
  security_groups = [aws_security_group.WebDMZ.id] # Assign WebDMZ security group created above
  # vpc_security_group_ids = [aws_security_group.WebDMZ.id]

  tags = {
    Name = "WebServer01"
  }
}

# Create new security group "MyDBSG" for DBServer
resource "aws_security_group" "MyDBSG" {
  name        = "MyDBSG"
  description = "Allows Public WebServer to Communicate with Private DB Server"
  vpc_id      = aws_vpc.MyVPC.id # Our VPC : SGs cannot span VPC

  ingress {
    description     = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
    from_port       = -1
    to_port         = -1
    protocol        = "icmp"
    cidr_blocks     = ["10.0.1.0/24"]                  # Public Subnet CIDR block, can be "WebDMZ" security group id too as below
    security_groups = [aws_security_group.WebDMZ.id]   # Tried this as above was not working, but still doesn't work
  }

  ingress {
    description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
    from_port   = 22                          # ssh ec2-user@<Private IP Address> -i MyPvKey.pem (private key pasted into MyPvKey.pem)
    to_port     = 22                          # Not a good practice to store the private key on WebServer; instead use a Bastion Host (hardened image, secure) to connect to the private DB
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows HTTP requests: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows HTTPS requests : IPV4"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows MySQL/Aurora requests"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["10.0.1.0/24"] # Public Subnet CIDR block, can be "WebDMZ" security group id too
  }

  egress {
    description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
    from_port   = 22                          # ssh ec2-user@<Private IP Address> -i MyPvtKey.pem (private key pasted into MyPvtKey.pem, chmod 400 MyPvtKey.pem)
    to_port     = 22                          # Not a good practice to store the private key on WebServer; instead use a Bastion Host (hardened image, secure) to connect to the private DB
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows HTTP requests: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows HTTPS requests : IPV4"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows MySQL/Aurora requests"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }
}

# Create new EC2 instance (DBServer) in Private Subnet, Associate "MyDBSG" Security Group
resource "aws_instance" "DBServer" {
  ami             = "ami-01a6e31ac994bbc09"
  instance_type   = "t2.micro"
  subnet_id       = aws_subnet.PrivateSubNet.id
  key_name        = "MyEC2KeyPair"                 # To connect using key pair
  security_groups = [aws_security_group.MyDBSG.id] # THIS WAS GIVING ERROR WHEN ASSOCIATING
  # vpc_security_group_ids = [aws_security_group.MyDBSG.id]

  tags = {
    Name = "DBServer"
  }
}

# Elastic IP required for NAT Gateway
resource "aws_eip" "nateip" {
  vpc = true

  tags = {
    Name = "NATEIP"
  }
}

# DBServer in private subnet cannot access internet, so add "NAT Gateway" in Public Subnet
# Add Target as NAT Gateway in default main route table. So there is route out to Internet.
# Now you can do yum update on DBServer
resource "aws_nat_gateway" "NATGW" {            # Create NAT Gateway in each AZ so in case of failure it can use other
  allocation_id = aws_eip.nateip.id             # Elastic IP allocation
  subnet_id     = aws_subnet.PublicSubNet.id    # Public Subnet

  tags = {
    Name = "NATGW"
  }
}

# Main Route Table add NATGW as Target
resource "aws_default_route_table" "DefaultRouteTable" {
  default_route_table_id = aws_vpc.MyVPC.default_route_table_id

  route {
    cidr_block     = "0.0.0.0/0"              # IPV4 Route Out for all
    nat_gateway_id = aws_nat_gateway.NATGW.id # Target : NAT Gateway created above
  }

  tags = {
    Name = "DefaultRouteTable"
  }
}
Why is ping timing out from WebServer01 to DBServer?
There are no specific NACLs and the default NACLs are wide open, so they should not be relevant here.
For this to work, the security group on WebServer01 needs to allow egress to the security group of DBServer, or to a CIDR that includes it.
aws_instance.DBServer uses aws_security_group.MyDBSG.
aws_instance.WebServer01 uses aws_security_group.WebDMZ.
The egress rules on aws_security_group.WebDMZ are as follows:
egress {
  description = "Allows SSH requests for VPC: IPV4"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
}

egress {
  description = "Allows HTTP requests for VPC: IPV4"
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

egress {
  description      = "Allows HTTP requests for VPC: IPV6"
  from_port        = 80
  to_port          = 80
  protocol         = "tcp"
  ipv6_cidr_blocks = ["::/0"]
}
Those egress rules mean:
Allow SSH to all IPv4 hosts
Allow HTTP to all IPv4 and IPv6 hosts
ICMP isn't listed, so the ICMP echo request will be dropped before it leaves aws_security_group.WebDMZ. This should be visible as a REJECT in the VPC FlowLog for the ENI of aws_instance.WebServer01.
Adding this egress rule to aws_security_group.WebDMZ should fix that:
egress {
  description = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
  from_port   = -1
  to_port     = -1
  protocol    = "icmp"
  cidr_blocks = ["10.0.2.0/24"]
}
DBServer may NOT respond to ICMP, so you may still see timeouts after making this change. Referencing the VPC FlowLog will help determine the difference. If you see ACCEPTs in the VPC FlowLog for the ICMP flows, then the issue is that DBServer doesn't respond to ICMP.
Nothing in aws_security_group.WebDMZ prevents SSH, so the problem with that must be elsewhere.
The ingress rules on aws_security_group.MyDBSG are as follows.
ingress {
  description     = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
  from_port       = -1
  to_port         = -1
  protocol        = "icmp"
  cidr_blocks     = ["10.0.1.0/24"]                  # Public Subnet CIDR block, can be "WebDMZ" security group id too as below
  security_groups = [aws_security_group.WebDMZ.id]   # Tried this as above was not working, but still doesn't work
}

ingress {
  description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
  from_port   = 22                          # ssh ec2-user@<Private IP Address> -i MyPvKey.pem (private key pasted into MyPvKey.pem)
  to_port     = 22                          # Not a good practice to store the private key on WebServer; instead use a Bastion Host (hardened image, secure) to connect to the private DB
  protocol    = "tcp"
  cidr_blocks = ["10.0.1.0/24"]
}
Those ingress rules mean:
Allow all ICMP from the Public Subnet (10.0.1.0/24)
Allow SSH from the Public Subnet (10.0.1.0/24)
SSH should be working. DBServer likely does not accept SSH connections.
Assuming your DBServer is 10.0.2.123, then if SSH is not running it would look like this from WebServer01 running ssh -v 10.0.2.123:
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 47: Applying options for *
debug1: Connecting to 10.0.2.123 [10.0.2.123] port 22.
debug1: connect to address 10.0.2.123 port 22: Operation timed out
ssh: connect to host 10.0.2.123 port 22: Operation timed out
Referencing the VPC FlowLog will help determine the difference. If you see ACCEPTs in the VPC FlowLog for the SSH flows, then the issue is that DBServer doesn't accept SSH connections.
Enabling VPC Flow Logs
Since this has changed a few times over the years, I will point you to AWS' own maintained, step by step guide for creating a flow log that publishes to CloudWatch Logs.
I recommend CloudWatch over S3 at the moment since querying using CloudWatch Logs Insights is pretty easy compared to setting up and querying S3 with Athena. You just pick the CloudWatch Log Stream you used for your Flow Log and search for the IPs or ports of interest.
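For completeness, provisioning the flow log itself in Terraform looks roughly like this; it is a sketch that reuses aws_vpc.MyVPC from the question, and the log group name, role name, and policy wording are assumptions:

resource "aws_cloudwatch_log_group" "flow_log" {
  name = "my-vpc-flow-log" # assumed name
}

resource "aws_iam_role" "flow_log" {
  name = "my-vpc-flow-log-role" # assumed name

  # Allow the VPC Flow Logs service to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "vpc-flow-logs.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "flow_log" {
  name = "my-vpc-flow-log-policy" # assumed name
  role = aws_iam_role.flow_log.id

  # Permissions the flow log needs to write to CloudWatch Logs.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ]
      Effect   = "Allow"
      Resource = "*"
    }]
  })
}

resource "aws_flow_log" "MyVPC" {
  vpc_id          = aws_vpc.MyVPC.id
  traffic_type    = "ALL"
  log_destination = aws_cloudwatch_log_group.flow_log.arn
  iam_role_arn    = aws_iam_role.flow_log.arn
}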
This example CloudWatch Logs Insights query will get the most recent 20 rejects on eni-0123456789abcdef0 (not a real ENI, use the actual ENI ID you are debugging):
fields @timestamp, @message
| sort @timestamp desc
| filter @message like 'eni-0123456789abcdef0'
| filter @message like 'REJECT'
| limit 20
In the VPC Flow Log a missing egress rule shows up as a REJECT on the source ENI.
In the VPC Flow Log a missing ingress rule shows up as a REJECT on the destination ENI.
Security Groups are stateful
Stateless packet filters require you to handle arcane things like the fact that most (not all) OSes use ports 32767-65535 for reply traffic in TCP flows. That's a pain, and it makes NACLs (which are stateless) a huge pain.
A stateful firewall like Security Groups tracks connections (the state in stateful) automatically, so you just need to allow the service port to the destination's SG or IP CIDR block in the source SG's egress rules and from the source SG or IP CIDR block in the destination SG's ingress rules.
Even though Security Groups (SGs) are stateful, they are default deny. This includes the outbound initiated traffic from a source. If a source SG does not allow traffic outbound to a destination, it's not allowed even if the destination has an SG that allows it. This is a common misconception. SG rules are not transitive, they need to be made on both sides.
AWS's Protecting Your Instance with Security Groups video explains this very well and visually.
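As a concrete illustration of "both sides", here is a sketch that allows MySQL from WebServer01 to DBServer using the question's security groups and the standalone rule resources described in the aside below (the rule names are made up):

# Source side: WebDMZ must allow the traffic out to MyDBSG...
resource "aws_security_group_rule" "WebDMZ_mysql_out" {
  security_group_id        = aws_security_group.WebDMZ.id
  type                     = "egress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.MyDBSG.id
}

# ...and destination side: MyDBSG must allow the same traffic in from WebDMZ.
resource "aws_security_group_rule" "MyDBSG_mysql_in" {
  security_group_id        = aws_security_group.MyDBSG.id
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.WebDMZ.id
}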
Aside: Style recommendation
You should use aws_security_group_rule resources instead of inline egress and ingress rules.
NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource (a single ingress or egress rule), and a Security Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.
Take this old-style aws_security_group with inline rules:
resource "aws_security_group" "WebDMZ" {
name = "WebDMZ"
description = "Allows SSH & HTTP requests"
vpc_id = aws_vpc.MyVPC.id
ingress {
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # You can use Load Balancer
}
egress {
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
And replace it with this modern-style aws_security_group with aws_security_group_rule resources for each rule:
resource "aws_security_group" "WebDMZ" {
name = "WebDMZ"
description = "Allows SSH & HTTP requests"
vpc_id = aws_vpc.MyVPC.id
}
resource "aws_security_group_rule" "WebDMZ_HTTP_in" {
security_group_id = aws_security_group.WebDMZ.id
type = "ingress"
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "WebDMZ_SSH_out" {
security_group_id = aws_security_group.WebDMZ.id
type = "egress"
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
Allowing all outbound traffic from the "WebDMZ" security group, as pasted below, resolves the issue.
egress { # Allow all traffic outbound. THIS WAS THE REASON YOU WERE NOT ABLE TO PING FROM WebServer to DBServer
  description = "Allows All Traffic Outbound from Web Server"
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}
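The same fix in the standalone aws_security_group_rule style recommended above would look roughly like this (a sketch; the resource name is made up):

resource "aws_security_group_rule" "WebDMZ_all_out" {
  security_group_id = aws_security_group.WebDMZ.id
  type              = "egress"
  description       = "Allows All Traffic Outbound from Web Server"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}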