background:
I used Terraform to build an AWS autoscaling group with a few instances spread across availability zones and linked by a load balancer. Everything is created properly, but the load balancer has no valid targets because there's nothing listening on port 80.
Fine, I thought. I'll install NGINX and throw up a basic config.
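For context, the kind of bootstrap I have in mind is roughly the sketch below (a minimal user_data example; the launch configuration name and AMI ID are placeholders, and this is the part I haven't gotten working yet):
resource "aws_launch_configuration" "web" {
  name_prefix     = "web-"
  image_id        = "ami-xxxxxxxxxxxxxxxxx" # placeholder Amazon Linux 2 AMI
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.aws_key.key_name
  security_groups = [aws_security_group.instance_sg.id]

  # Install NGINX so the load balancer has something listening on port 80
  user_data = <<-EOF
              #!/bin/bash
              amazon-linux-extras install -y nginx1
              systemctl enable --now nginx
              EOF

  lifecycle {
    create_before_destroy = true
  }
}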
expected behavior
instances should be able to reach yum repos
actual behavior
I'm unable to ping anything or run any package manager commands; I get the following error:
Could not retrieve mirrorlist https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list error was
12: Timeout on https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, 'Failed to connect to amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com port 443 after 2700 ms: Connection timed out')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
troubleshooting steps taken
I'm new to Terraform and still having issues with automated provisioning via user_data, so I SSH'd into the instance. The instance is set up in a public subnet with an auto-assigned public IP. Below is the code for the security groups.
resource "aws_security_group" "elb_webtrafic_sg" {
name = "elb-webtraffic-sg"
description = "Allow inbound web trafic to load balancer"
vpc_id = aws_vpc.main_vpc.id
ingress {
description = "HTTPS trafic from vpc"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP trafic from vpc"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "allow SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "all traffic out"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "elb-webtraffic-sg"
}
}
resource "aws_security_group" "instance_sg" {
name = "instance-sg"
description = "Allow traffic from load balancer to instances"
vpc_id = aws_vpc.main_vpc.id
ingress {
description = "web traffic from load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 80
to_port = 80
protocol = "tcp"
}
ingress {
description = "web traffic from load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 443
to_port = 443
protocol = "tcp"
}
ingress {
description = "ssh traffic from anywhere"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "all traffic to load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 0
to_port = 0
protocol = "-1"
}
tags = {
Name = "instance-sg"
}
}
#this is a workaround for the cyclical security group id reference
#I would like to find a way to have this rule destroyed first;
#it currently takes longer to destroy than to set up because
#terraform hangs on the dependency each SG has on the other,
#but it eventually works down to this rule and deletes it, clearing the deadlock
resource "aws_security_group_rule" "elb_egress_to_webservers" {
security_group_id = aws_security_group.elb_webtrafic_sg.id
type = "egress"
source_security_group_id = aws_security_group.instance_sg.id
from_port = 80
to_port = 80
protocol = "tcp"
}
resource "aws_security_group_rule" "elb_tls_egress_to_webservers" {
security_group_id = aws_security_group.elb_webtrafic_sg.id
type = "egress"
source_security_group_id = aws_security_group.instance_sg.id
from_port = 443
to_port = 443
protocol = "tcp"
}
Since I was able to SSH into the machine, I tried to set up the web instance security group to allow direct connection from the internet to the instance. Same errors: cannot ping outside web addresses, same error on YUM commands.
I can ping the default gateway in each subnet: 10.0.0.1, 10.0.1.1, and 10.0.2.1.
Here is the routing configuration I currently have set up.
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "production-vpc"
}
}
resource "aws_key_pair" "aws_key" {
key_name = "Tanchwa_pc_aws"
public_key = file(var.public_key_path)
}
#internet gateway
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "internet-gw"
}
}
resource "aws_route_table" "route_table" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "production-route-table"
}
}
resource "aws_subnet" "public_us_east_2a" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.0.0/24"
availability_zone = "us-east-2a"
tags = {
Name = "Public-Subnet us-east-2a"
}
}
resource "aws_subnet" "public_us_east_2b" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-2b"
tags = {
Name = "Public-Subnet us-east-2b"
}
}
resource "aws_subnet" "public_us_east_2c" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-2c"
tags = {
Name = "Public-Subnet us-east-2c"
}
}
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.public_us_east_2a.id
route_table_id = aws_route_table.route_table.id
}
resource "aws_route_table_association" "b" {
subnet_id = aws_subnet.public_us_east_2b.id
route_table_id = aws_route_table.route_table.id
}
resource "aws_route_table_association" "c" {
subnet_id = aws_subnet.public_us_east_2c.id
route_table_id = aws_route_table.route_table.id
}
I have created a custom VPC by applying the Terraform code below, which:
Adds Public and Private Subnets
Configures IGW
Adds NAT Gateway
Adds SGs for both Web and DB Servers
After applying this, I am not able to ping or SSH from the public EC2 instance to the private EC2 instance.
Not sure what is missing.
# Custom VPC
resource "aws_vpc" "MyVPC" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "default" # For Prod use "dedicated"
tags = {
Name = "MyVPC"
}
}
# Creates "Main Route Table", "NACL" & "default Security Group"
# Create Public Subnet, Associate with our VPC, Auto assign Public IP
resource "aws_subnet" "PublicSubNet" {
vpc_id = aws_vpc.MyVPC.id # Our VPC
availability_zone = "eu-west-2a" # AZ within London, 1 Subnet = 1 AZ
cidr_block = "10.0.1.0/24" # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"
map_public_ip_on_launch = "true" # Auto assign Public IP for Public Subnet
tags = {
Name = "PublicSubNet"
}
}
# Create Private Subnet, Associate with our VPC
resource "aws_subnet" "PrivateSubNet" {
vpc_id = aws_vpc.MyVPC.id # Our VPC
availability_zone = "eu-west-2b" # AZ within London region, 1 Subnet = 1 AZ
cidr_block = "10.0.2.0/24" # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"
tags = {
Name = "PrivateSubNet"
}
}
# Only 1 IGW per VPC
resource "aws_internet_gateway" "MyIGW" {
vpc_id = aws_vpc.MyVPC.id
tags = {
Name = "MyIGW"
}
}
# New Public route table, so we can keep "default main" route table as Private. Route out to MyIGW
resource "aws_route_table" "MyPublicRouteTable" {
vpc_id = aws_vpc.MyVPC.id # Our VPC
route { # Route out IPV4
cidr_block = "0.0.0.0/0" # IPV4 Route Out for all
# ipv6_cidr_block = "::/0" The parameter destinationCidrBlock cannot be used with the parameter destinationIpv6CidrBlock # IPV6 Route Out for all
gateway_id = aws_internet_gateway.MyIGW.id # Target : Internet Gateway created earlier
}
route { # Route out IPV6
ipv6_cidr_block = "::/0" # IPV6 Route Out for all
gateway_id = aws_internet_gateway.MyIGW.id # Target : Internet Gateway created earlier
}
tags = {
Name = "MyPublicRouteTable"
}
}
# Associate "PublicSubNet" with the public route table created above, removes it from default main route table
resource "aws_route_table_association" "PublicSubNetnPublicRouteTable" {
subnet_id = aws_subnet.PublicSubNet.id
route_table_id = aws_route_table.MyPublicRouteTable.id
}
# Create new security group "WebDMZ" for WebServer
resource "aws_security_group" "WebDMZ" {
name = "WebDMZ"
description = "Allows SSH & HTTP requests"
vpc_id = aws_vpc.MyVPC.id # Our VPC : SGs cannot span VPC
ingress {
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
}
ingress {
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # You can use Load Balancer
}
ingress {
description = "Allows HTTP requests for VPC: IPV6"
from_port = 80
to_port = 80
protocol = "tcp"
ipv6_cidr_blocks = ["::/0"]
}
egress {
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
}
egress {
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "Allows HTTP requests for VPC: IPV6"
from_port = 80
to_port = 80
protocol = "tcp"
ipv6_cidr_blocks = ["::/0"]
}
}
# Create new EC2 instance (WebServer01) in Public Subnet
# Get ami id from Console
resource "aws_instance" "WebServer01" {
ami = "ami-01a6e31ac994bbc09"
instance_type = "t2.micro"
subnet_id = aws_subnet.PublicSubNet.id
key_name = "MyEC2KeyPair" # To connect using key pair
security_groups = [aws_security_group.WebDMZ.id] # Assign WebDMZ security group created above
# vpc_security_group_ids = [aws_security_group.WebDMZ.id]
tags = {
Name = "WebServer01"
}
}
# Create new security group "MyDBSG" for WebServer
resource "aws_security_group" "MyDBSG" {
name = "MyDBSG"
description = "Allows Public WebServer to Communicate with Private DB Server"
vpc_id = aws_vpc.MyVPC.id # Our VPC : SGs cannot span VPC
ingress {
description = "Allows ICMP requests: IPV4" # For ping,communication, error reporting etc
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["10.0.1.0/24"] # Public Subnet CIDR block, can be "WebDMZ" security group id too as below
security_groups = [aws_security_group.WebDMZ.id] # Tried this as above was not working, but still doesn't work
}
ingress {
description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
from_port = 22 # ssh ec2-user#Private Ip Address -i MyPvKey.pem Private Key pasted in MyPvKey.pem
to_port = 22 # Not a good practise to use store private key on WebServer, instead use Bastion Host (Hardened Image, Secure) to connect to Private DB
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
ingress {
description = "Allows HTTP requests: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
ingress {
description = "Allows HTTPS requests : IPV4"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
ingress {
description = "Allows MySQL/Aurora requests"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
egress {
description = "Allows ICMP requests: IPV4" # For ping,communication, error reporting etc
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["10.0.1.0/24"] # Public Subnet CIDR block, can be "WebDMZ" security group id too
}
egress {
description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
from_port = 22 # ssh ec2-user#Private Ip Address -i MyPvtKey.pem Private Key pasted in MyPvKey.pem chmod 400 MyPvtKey.pem
to_port = 22 # Not a good practise to use store private key on WebServer, instead use Bastion Host (Hardened Image, Secure) to connect to Private DB
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
egress {
description = "Allows HTTP requests: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
egress {
description = "Allows HTTPS requests : IPV4"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
egress {
description = "Allows MySQL/Aurora requests"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
}
# Create new EC2 instance (DBServer) in Private Subnet, Associate "MyDBSG" Security Group
resource "aws_instance" "DBServer" {
ami = "ami-01a6e31ac994bbc09"
instance_type = "t2.micro"
subnet_id = aws_subnet.PrivateSubNet.id
key_name = "MyEC2KeyPair" # To connect using key pair
security_groups = [aws_security_group.MyDBSG.id] # THIS WAS GIVING ERROR WHEN ASSOCIATING
# vpc_security_group_ids = [aws_security_group.MyDBSG.id]
tags = {
Name = "DBServer"
}
}
# Elastic IP required for NAT Gateway
resource "aws_eip" "nateip" {
vpc = true
tags = {
Name = "NATEIP"
}
}
# DBServer in private subnet cannot access internet, so add "NAT Gateway" in Public Subnet
# Add Target as NAT Gateway in default main route table. So there is route out to Internet.
# Now you can do yum update on DBServer
resource "aws_nat_gateway" "NATGW" { # Create NAT Gateway in each AZ so in case of failure it can use other
allocation_id = aws_eip.nateip.id # Elastic IP allocation
subnet_id = aws_subnet.PublicSubNet.id # Public Subnet
tags = {
Name = "NATGW"
}
}
# Main Route Table add NATGW as Target
resource "aws_default_route_table" "DefaultRouteTable" {
default_route_table_id = aws_vpc.MyVPC.default_route_table_id
route {
cidr_block = "0.0.0.0/0" # IPV4 Route Out for all
nat_gateway_id = aws_nat_gateway.NATGW.id # Target : NAT Gateway created above
}
tags = {
Name = "DefaultRouteTable"
}
}
Why is ping timing out from WebServer01 to DBServer?
There are no specific NACLs and the default NACLs are wide open, so they should not be relevant here.
For this to work, the security group on WebServer01 needs to allow egress to the security group of DBServer, or to a CIDR block that includes it.
aws_instance.DBServer uses aws_security_group.MyDBSG.
aws_instance.WebServer01 uses aws_security_group.WebDMZ.
The egress rules on aws_security_group.WebDMZ are as follows:
egress {
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP <My PUBLIC IP>/32
}
egress {
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "Allows HTTP requests for VPC: IPV6"
from_port = 80
to_port = 80
protocol = "tcp"
ipv6_cidr_blocks = ["::/0"]
}
Those egress rules mean:
Allow SSH to all IPv4 hosts
Allow HTTP to all IPv4 and IPv6 hosts
ICMP isn't listed, so the ICMP echo request will be dropped before it leaves aws_security_group.WebDMZ. This should be visible as a REJECT in the VPC FlowLog for the ENI of aws_instance.WebServer01.
Adding this egress rule to aws_security_group.WebDMZ should fix that:
egress {
description = "Allows ICMP requests: IPV4" # For ping,communication, error reporting etc
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["10.0.2.0/24"]
}
DBServer may NOT respond to ICMP, so you may still see timeouts after making this change. Referencing the VPC FlowLog will help determine the difference. If you see ACCEPTs in the VPC FlowLog for the ICMP flows, then the issue is that DBServer doesn't respond to ICMP.
Nothing in aws_security_group.WebDMZ prevents SSH, so the problem with that must be elsewhere.
The ingress rules on aws_security_group.MyDBSG are as follows.
ingress {
description = "Allows ICMP requests: IPV4" # For ping,communication, error reporting etc
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["10.0.1.0/24"] # Public Subnet CIDR block, can be "WebDMZ" security group id too as below
security_groups = [aws_security_group.WebDMZ.id] # Tried this as above was not working, but still doesn't work
}
ingress {
description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using DBServer private ip address and copying private key to WebServer
from_port = 22 # ssh ec2-user#Private Ip Address -i MyPvKey.pem Private Key pasted in MyPvKey.pem
to_port = 22 # Not a good practise to use store private key on WebServer, instead use Bastion Host (Hardened Image, Secure) to connect to Private DB
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24"]
}
Those ingress rules mean:
Allow all ICMP from the Public Subnet (10.0.1.0/24)
Allow SSH from the Public Subnet (10.0.1.0/24)
SSH should be working at the security group level, so if it is not, DBServer likely does not accept SSH connections.
Assuming your DBServer is 10.0.2.123, then if SSH is not running it would look like this from WebServer01 running ssh -v 10.0.2.123:
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 47: Applying options for *
debug1: Connecting to 10.0.2.123 [10.0.2.123] port 22.
debug1: connect to address 10.0.2.123 port 22: Operation timed out
ssh: connect to host 10.0.2.123 port 22: Operation timed out
Referencing the VPC FlowLog will help determine the difference. If you see ACCEPTs in the VPC FlowLog for the SSH flows, then the issue is that DBServer doesn't accept SSH connections.
Enabling VPC Flow Logs
Since this has changed a few times over the years, I will point you to AWS's own maintained, step-by-step guide for creating a flow log that publishes to CloudWatch Logs.
I recommend CloudWatch over S3 at the moment, since querying with CloudWatch Logs Insights is much easier than setting up and querying S3 with Athena. You just pick the CloudWatch Log Group you used for your flow log and search for the IPs or ports of interest.
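If you want to manage the flow log itself with Terraform, a minimal sketch looks roughly like this (the resource names, log group, and IAM role are illustrative assumptions, not part of the configuration above):
resource "aws_cloudwatch_log_group" "vpc_flow_log" {
  name = "vpc-flow-logs"
}

# Role that the VPC Flow Logs service assumes to write into CloudWatch Logs
resource "aws_iam_role" "vpc_flow_log" {
  name = "vpc-flow-log-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "vpc-flow-logs.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "vpc_flow_log" {
  name = "vpc-flow-log-policy"
  role = aws_iam_role.vpc_flow_log.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ]
      Resource = "*"
    }]
  })
}

resource "aws_flow_log" "vpc" {
  vpc_id          = aws_vpc.MyVPC.id
  traffic_type    = "ALL"
  iam_role_arn    = aws_iam_role.vpc_flow_log.arn
  log_destination = aws_cloudwatch_log_group.vpc_flow_log.arn
}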
This example CloudWatch Logs Insights query will get the most recent 20 rejects on eni-0123456789abcdef0 (not a real ENI, use the actual ENI ID you are debugging):
fields @timestamp, @message
| sort @timestamp desc
| filter @message like 'eni-0123456789abcdef0'
| filter @message like 'REJECT'
| limit 20
In the VPC Flow Log a missing egress rule shows up as a REJECT on the source ENI.
In the VPC Flow Log a missing ingress rule shows up as a REJECT on the destination ENI.
Security Groups are stateful
Stateless packet filters require you to handle arcane things like the fact that most (not all) OSes use ports 32767-65535 for reply traffic in TCP flows. That's a pain, and it makes NACLs (which are stateless) a huge pain.
A stateful firewall like Security Groups tracks connections (the state in stateful) automatically, so you just need to allow the service port to the destination's SG or IP CIDR block in the source SG's egress rules, and from the source SG or IP CIDR block in the destination SG's ingress rules.
Even though Security Groups (SGs) are stateful, they are default deny. This includes outbound traffic initiated by a source. If a source SG does not allow traffic outbound to a destination, it's not allowed even if the destination has an SG that allows it. This is a common misconception. SG rules are not transitive; they need to be made on both sides.
AWS's Protecting Your Instance with Security Groups video explains this very well and visually.
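As a concrete sketch of "both sides" using the MySQL port between the two groups in this question (this assumes you have moved to standalone rule resources, as recommended in the aside below, since mixing in-line rules with aws_security_group_rule resources conflicts):
resource "aws_security_group_rule" "webdmz_mysql_egress" {
  security_group_id        = aws_security_group.WebDMZ.id # source side: allow outbound 3306
  type                     = "egress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.MyDBSG.id # ... to the DB security group
}

resource "aws_security_group_rule" "mydbsg_mysql_ingress" {
  security_group_id        = aws_security_group.MyDBSG.id # destination side: allow inbound 3306
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.WebDMZ.id # ... from the web security group
}
No rule is needed for the return traffic; the connection tracking handles it.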
Aside: Style recommendation
You should use aws_security_group_rule resources instead of inline egress and ingress rules.
NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group Rule resource (a single ingress or egress rule), and a Security Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.
Take this old-style aws_security_group with inline rules:
resource "aws_security_group" "WebDMZ" {
name = "WebDMZ"
description = "Allows SSH & HTTP requests"
vpc_id = aws_vpc.MyVPC.id
ingress {
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # You can use Load Balancer
}
egress {
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
And replace it with this modern-style aws_security_group with aws_security_group_rule resources for each rule:
resource "aws_security_group" "WebDMZ" {
name = "WebDMZ"
description = "Allows SSH & HTTP requests"
vpc_id = aws_vpc.MyVPC.id
}
resource "aws_security_group_rule" "WebDMZ_HTTP_in" {
security_group_id = aws_security_group.WebDMZ.id
type = "ingress"
description = "Allows HTTP requests for VPC: IPV4"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "WebDMZ_SSH_out" {
security_group_id = aws_security_group.WebDMZ.id
type = "egress"
description = "Allows SSH requests for VPC: IPV4"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
Allowing all outbound traffic from the "WebDMZ" security group, as pasted below, resolves the issue.
egress { # Allow all traffic outbound; THIS WAS THE REASON YOU WERE NOT ABLE TO PING FROM WebServer01 TO DBServer
description = "Allows All Traffic Outbound from Web Server"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
I have this Terraform script:
provider "aws" {
region = "us-weast-1"
}
resource "aws_security_group" "allow_all" {
name = "allow_all"
description = "Allow all inbound traffic"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = "vpc-154c1701"
}
resource "aws_instance" "wso2-testing" {
ami = "ami-0f9cf087c1f27d9b1"
instance_type = "t2.small"
key_name = "mykeypair"
vpc_security_group_ids = ["${aws_security_group.allow_all.id}"]
}
The machine is created correctly, but I can't connect to the EC2 instance over SSH using my key pair.
I always get the error:
ssh: connect to host x.x.x.x port 22: Operation timed out
The VPC is the default AWS VPC with an internet gateway.
You can add your own IP to the security group using the snippet below:
data "http" "myip" {
url = "https://ipv4.icanhazip.com"
}
ingress {
# All protocols and ports (change to whatever ports you need)
from_port = 0
to_port = 0
protocol = "-1"
# Please restrict your ingress to only necessary IPs and ports.
# Opening to 0.0.0.0/0 can lead to security vulnerabilities.
cidr_blocks = ["${chomp(data.http.myip.body)}/32"]
}
egress {
# Outbound traffic is set to all
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
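For completeness, a sketch of how those blocks might sit inside a full security group and be attached to the instance from the question (the resource name allow_my_ip is illustrative):
data "http" "myip" {
  url = "https://ipv4.icanhazip.com"
}

resource "aws_security_group" "allow_my_ip" {
  name        = "allow_my_ip"
  description = "Allow SSH from my current IP only"
  vpc_id      = "vpc-154c1701"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${chomp(data.http.myip.body)}/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "wso2-testing" {
  ami                    = "ami-0f9cf087c1f27d9b1"
  instance_type          = "t2.small"
  key_name               = "mykeypair"
  vpc_security_group_ids = ["${aws_security_group.allow_my_ip.id}"]
}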
You need to add your own IP to the inbound rules of your security group.
Check my blog or Git sample:
https://sv-technical.blogspot.com/2019/12/terraform.html
https://github.com/svermaji/terraform
HTH
I am creating a t2.micro instance using the following security groups in Terraform.
Allow 80 Port
resource "aws_security_group" "access-http" {
name = "Allow-80"
description = "Allow 80 inbound traffic"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Allow 8080 Port
resource "aws_security_group" "access-http-web" {
name = ""
description = "Allow 8080 inbound traffic"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Allow 22 Port
resource "aws_security_group" "access-ssh" {
name = "Access-ssh"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 1194
to_port = 1194
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Are there any security issues due to these security groups? I have used the default Network (VPC) and Subnet options. Please advise.
If this were me, I would terminate the instance and create a new one. When you create the "Allow 22 Port" security group, make sure the source address is your current IP address. If your public (home/office) IP address is dynamic, you will need to check it each time you connect and update it in the security group via the console if it has changed.
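For example, a minimal sketch of the "Allow 22 Port" group pinned to your own address (my_ip is an illustrative variable you would set yourself and update whenever your address changes):
variable "my_ip" {
  description = "Your current public IP address"
  default     = "203.0.113.10" # placeholder from the documentation range; replace with your real address
}

resource "aws_security_group" "access-ssh" {
  name = "Access-ssh"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.my_ip}/32"]
  }
}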
I want to create 2 VPC security groups.
One for the Bastion host of the VPC and one for the Private subnet.
# BASTION #
resource "aws_security_group" "VPC-BastionSG" {
name = "VPC-BastionSG"
description = "The sec group for the Bastion instance"
vpc_id = "aws_vpc.VPC.id"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["my.super.ip/32"]
}
egress {
# Access to the Private subnet from the bastion host[ssh]
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-PrivateSG.id}"]
}
egress {
# Access to the Private subnet from the bastion host[jenkins]
from_port = 8686
to_port = 8686
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-PrivateSG.id}"]
}
tags = {
Name = "VPC-BastionSG"
}
}
# PRIVATE #
resource "aws_security_group" "VPC-PrivateSG" {
name = "VPC-PrivateSG"
description = "The sec group for the private subnet"
vpc_id = "aws_vpc.VPC.id"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-BastionSG.id}"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-PublicSG.id}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-PublicSG.id}"]
}
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-PublicSG.id}"]
}
ingress {
from_port = 8686
to_port = 8686
protocol = "tcp"
security_groups = ["${aws_security_group.VPC-BastionSG.id}"]
}
ingress {
# ALL TRAFFIC from the same subnet
from_port = 0
to_port = 0
protocol = "-1"
self = true
}
egress {
# ALL TRAFFIC to outside world
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "VPC-PrivateSG"
}
}
When I terraform plan it, this error is returned:
Error configuring: 1 error(s) occurred:
* Cycle: aws_security_group.VPC-BastionSG, aws_security_group.VPC-PrivateSG
If I comment out the ingress rules for the BastionSG from the PrivateSG the plan executes fine.
Also, if I comment out the egress rules for the PrivateSG from the BastionSG it also executes fine.
The AWS Scenario 2 for building a VPC with Public/Private subnets and Bastion host describes the architecture I am trying to setup.
I have the exact same settings configured via the AWS console and it plays fine.
Why isn't Terraform accepting it?
Is there another way to connect the Bastion security group with the Private security group?
EDIT
As I understand it, there is a circular reference between the two security groups that somehow needs to be broken, even though the same setup is valid in AWS.
So I thought of allowing all outbound traffic (0.0.0.0/0) from the Bastion security group instead of pointing it at individual security groups.
Would that have a negative security impact?
Terraform attempts to build a dependency chain for all of the resources defined in the folder it is working on. Doing this lets it work out whether it needs to build things in a specific order, and it is pretty key to how Terraform works.
Your example is going to fail because you have a cyclic dependency (as Terraform helpfully points out) where each security group depends on the other one having been created already.
Sometimes these can be tricky to solve and may mean you need to rethink what you're trying to do (as you mention, one option would be to simply allow all egress traffic out from the bastion host and only restrict the ingress traffic on the private instances). In this case, though, you have the option of using the aws_security_group_rule resource in combination with the aws_security_group resource.
This means we can define empty security groups with no rules in them at first, which we can then use as targets for the security group rules we create for them.
A quick example might look something like this:
resource "aws_security_group" "bastion" {
name = "bastion"
description = "Bastion security group"
}
resource "aws_security_group_rule" "bastion-to-private-ssh-egress" {
type = "egress"
from_port = 22
to_port = 22
protocol = "tcp"
security_group_id = "${aws_security_group.bastion.id}"
source_security_group_id = "${aws_security_group.private.id}"
}
resource "aws_security_group" "private" {
name = "private"
description = "Private security group"
}
resource "aws_security_group_rule" "private-from-bastion-ssh-ingress" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
security_group_id = "${aws_security_group.private.id}"
source_security_group_id = "${aws_security_group.bastion.id}"
}
Now, Terraform can see that the dependency chain says that both security groups must be created before either of those security group rules as both of them are dependent on the groups already having been created.