background:
I used Terraform to build an AWS autoscaling group with a few instances spread across availability zones and linked by a load balancer. Everything is created properly, but the load balancer has no valid targets because there's nothing listening on port 80.
Fine, I thought. I'll install NGINX and throw up a basic config.
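Something like this minimal user_data is what I had in mind (a sketch, assuming Amazon Linux 2, where NGINX ships via the amazon-linux-extras channel):
#!/bin/bash
# sketch: install NGINX and start it so something is listening on port 80
amazon-linux-extras install -y nginx1
systemctl enable --now nginx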
expected behavior
instances should be able to reach yum repos
actual behavior
I'm unable to ping anything or run any of the package manager commands, and I get the following error:
Could not retrieve mirrorlist https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list error was
12: Timeout on https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, 'Failed to connect to amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com port 443 after 2700 ms: Connection timed out')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
troubleshooting steps taken
I'm new to Terraform, and I'm still having trouble getting automated provisioning via user_data to work, so I SSH'd into the instance. The instance is set up in a public subnet with an auto-provisioned public IP. Below is the code for the security groups.
resource "aws_security_group" "elb_webtrafic_sg" {
name = "elb-webtraffic-sg"
description = "Allow inbound web trafic to load balancer"
vpc_id = aws_vpc.main_vpc.id
ingress {
description = "HTTPS trafic from vpc"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP trafic from vpc"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "allow SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "all traffic out"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "elb-webtraffic-sg"
}
}
resource "aws_security_group" "instance_sg" {
name = "instance-sg"
description = "Allow traffic from load balancer to instances"
vpc_id = aws_vpc.main_vpc.id
ingress {
description = "web traffic from load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 80
to_port = 80
protocol = "tcp"
}
ingress {
description = "web traffic from load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 443
to_port = 443
protocol = "tcp"
}
ingress {
description = "ssh traffic from anywhere"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "all traffic to load balancer"
security_groups = [ aws_security_group.elb_webtrafic_sg.id ]
from_port = 0
to_port = 0
protocol = "-1"
}
tags = {
Name = "instance-sg"
}
}
# This is a workaround for the cyclical security group ID references.
# I would like to figure out a way for Terraform to destroy this rule first:
# it currently takes longer to destroy than to set up, because Terraform
# hangs on the dependency each SG has on the other, but it eventually
# works its way down to this rule and deletes it, clearing the deadlock.
resource "aws_security_group_rule" "elb_egress_to_webservers" {
security_group_id = aws_security_group.elb_webtrafic_sg.id
type = "egress"
source_security_group_id = aws_security_group.instance_sg.id
from_port = 80
to_port = 80
protocol = "tcp"
}
resource "aws_security_group_rule" "elb_tls_egress_to_webservers" {
security_group_id = aws_security_group.elb_webtrafic_sg.id
type = "egress"
source_security_group_id = aws_security_group.instance_sg.id
from_port = 443
to_port = 443
protocol = "tcp"
}
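What I'd eventually like to try (a sketch, untested) is declaring both security groups without inline rules and expressing every rule as a standalone aws_security_group_rule, so neither group embeds a reference to the other and the deadlock never forms. The instance-side HTTP ingress would then look like this (the resource name is hypothetical):
resource "aws_security_group_rule" "instance_http_from_elb" {
  security_group_id        = aws_security_group.instance_sg.id
  type                     = "ingress"
  source_security_group_id = aws_security_group.elb_webtrafic_sg.id
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
}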
Since I was able to SSH into the machine, I tried setting up the web instance security group to allow direct connections from the internet to the instance. Same errors: I cannot ping outside web addresses, and YUM commands fail the same way.
I can ping the default gateway in each subnet: 10.0.0.1, 10.0.1.1, and 10.0.2.1.
Here is the routing configuration I currently have set up.
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "production-vpc"
}
}
resource "aws_key_pair" "aws_key" {
key_name = "Tanchwa_pc_aws"
public_key = file(var.public_key_path)
}
#internet gateway
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "internet-gw"
}
}
resource "aws_route_table" "route_table" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "production-route-table"
}
}
resource "aws_subnet" "public_us_east_2a" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.0.0/24"
availability_zone = "us-east-2a"
tags = {
Name = "Public-Subnet us-east-2a"
}
}
resource "aws_subnet" "public_us_east_2b" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-2b"
tags = {
Name = "Public-Subnet us-east-2b"
}
}
resource "aws_subnet" "public_us_east_2c" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-2c"
tags = {
Name = "Public-Subnet us-east-2c"
}
}
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.public_us_east_2a.id
route_table_id = aws_route_table.route_table.id
}
resource "aws_route_table_association" "b" {
subnet_id = aws_subnet.public_us_east_2b.id
route_table_id = aws_route_table.route_table.id
}
resource "aws_route_table_association" "c" {
subnet_id = aws_subnet.public_us_east_2c.id
route_table_id = aws_route_table.route_table.id
}
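One assumption worth double-checking here: none of these subnets sets map_public_ip_on_launch, so the auto-assigned public IPs must be coming from the launch configuration. A subnet-level sketch with auto-assign enabled would be:
resource "aws_subnet" "public_us_east_2a" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "us-east-2a"
  map_public_ip_on_launch = true # sketch: auto-assign a public IPv4 at launch

  tags = {
    Name = "Public-Subnet us-east-2a"
  }
}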
Related
I've created the following key pair and EC2 instance using Terraform. I'll leave the SG config out of it, but it allows SSH from the internet.
When I try to SSH into this instance I get the errors "Server Refused our Key" and "No supported authentication methods available (server sent: publickey)".
However, I am able to log in when I create a separate EC2 instance in the console and assign it the same key pair assigned in the TF script.
Has anyone seen this behavior? What causes it?
# Create Dev VPC
resource "aws_vpc" "dev_vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_hostnames = "true"

  tags = {
    Name = "dev"
  }
}

# Create an Internet Gateway Resource
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.dev_vpc.id

  tags = {
    Name = "dev-engineering-igw"
  }
}

# Create a Route Table
resource "aws_route_table" " _dev_public_routes" {
  vpc_id = aws_vpc. _dev.id

  tags = {
    name = " _dev_public_routes"
  }
}

# Create a Route
resource "aws_route" " _dev_internet_access" {
  route_table_id         = aws_route_table. _dev_public_routes.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

# Associate the Route Table to our Public Subnet
resource "aws_route_table_association" " _dev_public_subnet_assoc" {
  subnet_id      = aws_subnet. _dev_public.id
  route_table_id = aws_route_table. _dev_public_routes.id
}

# Create public subnet for hosting customer-facing Django app
resource "aws_subnet" " _dev_public" {
  vpc_id            = aws_vpc. _dev.id
  cidr_block        = "10.0.0.0/17"
  availability_zone = "us-west-2a"

  tags = {
    Env = "dev"
  }
}
resource "aws_security_group" "allow_https" {
name = "allow_https"
description = "Allow http and https inbound traffic"
vpc_id = aws_vpc. _dev.id
ingress {
description = "HTTP and HTTPS into VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP and HTTPS into VPC"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "HTTP and HTTPS out of VPC for Session Manager"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_https"
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu20.id
instance_type = "t3.micro"
subnet_id = aws_subnet. _dev_public.id
associate_public_ip_address = "true"
vpc_security_group_ids = ["${aws_security_group.allow_https.id}"]
key_name = "key_name"
metadata_options { #Enabling IMDSv2
http_endpoint = "disabled"
http_tokens = "required"
}
tags = {
Env = "dev"
}
}
As specified in the comments, removing the metadata_options block from the instance resource resolves the issue.
The fix is to update the metadata_options to be:
metadata_options { # Enabling IMDSv2
  http_endpoint = "enabled"
  http_tokens   = "required"
}
Looking at the Terraform documentation for metadata_options shows that:
http_endpoint = "disabled" means that the metadata service is unavailable.
http_tokens = "required" means that the metadata service requires session tokens (i.e. IMDSv2).
This is an invalid configuration, as specified in the AWS docs:
You can opt in to require that IMDSv2 is used when requesting instance metadata. Use the modify-instance-metadata-options CLI command and set the http-tokens parameter to required. When you specify a value for http-tokens, you must also set http-endpoint to enabled.
It also explains the SSH symptom: cloud-init fetches the key pair's public key from the instance metadata service on first boot, so with the endpoint disabled the key is never installed and the server refuses it.
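For an instance that is already running, the same fix can be applied with the CLI command the docs mention (the instance ID here is a placeholder):
aws ec2 modify-instance-metadata-options \
    --instance-id <your-instance-id> \
    --http-tokens required \
    --http-endpoint enabled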
I am deploying my AWS resources with Terraform; one of the resources happens to be of type aws_instance (EC2), acting as my Bastion Host. It is in the public subnet, and I created a security group which allows SSH from my home IP. This security group works, as I am able to SSH into the Bastion Host.
resource "aws_security_group" "allow_home_to_bastion_ssh" {
name = "Home to bastion"
description = "Allow SSH - Home to Bastion"
vpc_id = var.vpc_id
ingress {
description = "SSH from Bastion"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["<My-Home-IP>/32"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "Home to bastion"
}
}
I also created other security groups, which I'm adding to the node group configuration under the remote_access section as shown below.
resource "aws_eks_node_group" "node_group" {
cluster_name = var.cluster_name
node_group_name = var.node_group_name
node_role_arn = var.node_pool_role_arn
subnet_ids = [var.subnet_1_id, var.subnet_2_id]
instance_types = ["t2.medium"]
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
update_config {
max_unavailable = 1
}
remote_access {
ec2_ssh_key = "<My-Key-Pair.pem>"
source_security_group_ids = [
var.allow_http_id,
var.allow_ssh_id,
var.allow_tls_id,
var.allow_bastion_to_eks_node_id
]
}
}
The allow_ssh_id is shown below; as shown above, it is added to the source_security_group_ids. I expect this to allow me to SSH from my Bastion Host to the EKS node created by the node group, since they're all in the same CIDR range and VPC.
resource "aws_security_group" "allow_ssh" {
name = var.sg_allow_ssh_name
description = "Allow SSH from CIDR"
vpc_id = var.vpc_id
ingress {
description = "SSH from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.vpc_cidr_block]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [var.vpc_cidr_block]
ipv6_cidr_blocks = ["::/0"]
}
The allow_bastion_to_eks_node_id is an additional security group I created, which is also added to the node group; it specifically allows SSH from the external IP of the Bastion Host onto the EKS node. See the code below.
resource "aws_security_group" "bastion_allow_ssh" {
name = var.sg_allow_bastion_ssh_name
description = "Allow SSH - Bastion to EKS"
vpc_id = var.vpc_id
ingress {
description = "SSH from Bastion"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.sg_allow_bastion_elastic_ssh}/32"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = var.sg_allow_bastion_ssh_name
}
}
As shown above, I am using the bastion's elastic IP, yet I cannot SSH to my EKS node from the Bastion Host. Not sure what is going on.
Note: the bastion host is in a public subnet but uses the same VPC as the EKS node, which is in a private subnet.
SSH ERROR
ssh: connect to host port 22: Operation timed out
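One thing worth checking, as a hedged note rather than a confirmed fix: inside a VPC, the node sees the bastion's private source address, not its elastic IP, so an ingress rule keyed to the elastic IP will not match; the usual pattern is to reference the bastion's security group directly. (Also note that remote_access.ec2_ssh_key expects the name of an EC2 key pair, not a .pem filename.) A minimal sketch of the ingress block:
ingress {
  description     = "SSH from the bastion security group"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  # sketch: reference the SG attached to the bastion instead of its elastic IP
  security_groups = [aws_security_group.allow_home_to_bastion_ssh.id]
}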
I have been following a YouTube guide to learn Terraform and have followed each step.
After running terraform apply, it was able to set up everything as expected, and I have verified this in the AWS console. But when trying to access the public IP, I get connection refused.
Below is the content of my main.tf file.
provider "aws" {
region = "us-east-1"
access_key = "ACCESS-KEY"
secret_key = "SECERT-KEY"
}
# VPC
resource "aws_vpc" "prod-vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "production"
  }
}

# create internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.prod-vpc.id

  tags = {
    Name = "Prod gateway"
  }
}

# create custom route table
resource "aws_route_table" "prod-route-table" {
  vpc_id = aws_vpc.prod-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "Prod"
  }
}

# Create a subnet
resource "aws_subnet" "subnet-1" {
  vpc_id                  = aws_vpc.prod-vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "prod-subnet"
  }
}

# Associate subnet with Route Table
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.subnet-1.id
  route_table_id = aws_route_table.prod-route-table.id
}
# Create Security Group to allow port 22, 80, 443
resource "aws_security_group" "allow_web" {
  name        = "allow_web_traffic"
  description = "Allow Web traffic"
  vpc_id      = aws_vpc.prod-vpc.id

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 2
    to_port     = 2
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_web"
  }
}
# Create a network interface with an ip in the subnet that was created earlier
resource "aws_network_interface" "web-server-nic" {
  subnet_id       = aws_subnet.subnet-1.id
  private_ips     = ["10.0.1.50"]
  security_groups = [aws_security_group.allow_web.id]

  tags = {
    Name = "prod-network-interface"
  }
}

# Assign an elastic ip to the network interface created in previous step
resource "aws_eip" "one" {
  vpc                       = true
  network_interface         = aws_network_interface.web-server-nic.id
  associate_with_private_ip = "10.0.1.50"
  depends_on                = [aws_internet_gateway.gw, aws_instance.web-server-instance]

  tags = {
    Name = "Prod-Elastic-ip"
  }
}

# Create Ubuntu server and install/enable apache2
resource "aws_instance" "web-server-instance" {
  ami               = "ami-0747bdcabd34c712a"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1a"
  key_name          = "main-key"

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.web-server-nic.id
  }

  user_data = <<-EOF
    #! /bin/bash
    sudo apt update -y
    sudo apt install apache2
    sudo bash -c 'echo your very first web server > /var/www/html/index.html'
  EOF

  tags = {
    Name = "Web-Server"
  }
}
You are missing -y in your user data, so your user data will just hang waiting for confirmation. It should be:
sudo apt install -y apache2
You also missed that another command is needed to start apache2 after installing it:
sudo systemctl start apache2
It also seems that the SSH port in the security group is not configured correctly. It should probably read 22 instead of 2 for both from_port and to_port.
The major problem here is the security group. For SSH you should open port 22, the default SSH port.
ingress {
  description = "SSH"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
Besides that, you should fix your user data with:
#! /bin/bash
sudo apt update -y
sudo apt install apache2 -y
sudo systemctl start apache2
sudo bash -c 'echo your very first web server > /var/www/html/index.html'
I hope this can help you and other people with the same or similar issue.
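A small addition worth considering on top of that fix, so Apache also comes back after a reboot (a one-line sketch):
sudo systemctl enable --now apache2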
I am new to Terraform and I am trying to create a simple structure with one ALB, two servers with a simple app, and one DB instance, but I get a 504 error when accessing the ALB's DNS, which according to the Amazon documentation means "The load balancer established a connection to the target but the target did not respond before the idle timeout period elapsed." I have gone over the code 100 times but I cannot find the mistake. This is my ALB config:
#ASG
resource "aws_launch_configuration" "web-lc" {
  name            = "web-lc"
  image_id        = "ami-0fc970315c2d38f01"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.ec2-webServers-sg.id]
  key_name        = "practica_final_kp"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo docker run -d --name rtb -p 8080:8080 vermicida/rtb
  EOF
}

resource "aws_autoscaling_group" "ec2-web-asg" {
  name                 = "ec2-web-asg"
  max_size             = 2
  min_size             = 2
  force_delete         = true
  launch_configuration = aws_launch_configuration.web-lc.name
  vpc_zone_identifier  = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]

  tag {
    key                 = "Name"
    value               = "ec2-web-asg"
    propagate_at_launch = "true"
  }
}

#ALB
resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id
}

resource "aws_alb" "web-alb" {
  name            = "web-alb"
  internal        = false
  subnets         = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]
  security_groups = [aws_security_group.lb-sg.id]
}

resource "aws_alb_listener" "front_end" {
  load_balancer_arn = aws_alb.web-alb.id
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_alb_target_group.tg-alb.id
    type             = "forward"
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = aws_autoscaling_group.ec2-web-asg.id
  alb_target_group_arn   = aws_alb_target_group.tg-alb.arn
}
This is the security group:
resource "aws_security_group" "ec2-webServers-sg" {
name = "ec2-webServers-sg"
vpc_id = aws_vpc.final-vpc.id
ingress {
description = "APP"
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24"]
}
egress {
description = "SQL"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["10.0.10.0/24", "10.0.20.0/24"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "F-web-servers-sg"
}
}
It looks like your service on the EC2 instance is running on port 8080, but your target group is pointing to port 80. You need to change the target group port to 8080.
There could also be a problem with security groups or VPC Network ACLs blocking the traffic, so those are worth checking alongside the target group port.
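To make that concrete, a sketch of the corrected target group, with the port changed to match the container and everything else unchanged:
resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 8080 # must match the port the app container listens on
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id
}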
I hope everyone that sees this is doing well.
I'm still learning the ropes with Terraform and AWS.
I've created a VPC with 4 subnets in it: 1 subnet is public and the other 3 are private. I currently have 1 EC2 instance in my public subnet (a bastion box/server). I have also created a security group for this instance and a NACL rule that allows me to connect via SSH to this instance from my IP only. For some reason, when I try to SSH onto this instance my terminal hangs and I see the following message:
OpenSSH_8.2p1 Ubuntu-4ubuntu0.1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to 'instance_public_ip [instance_public_ip] port 22
and then it tells me the connection timed out.
I changed the rule to allow SSH connections from all IPs (i.e. 0.0.0.0/0) but still get the same problem. The Terraform code for the infrastructure is as follows:
# Elastic IP for bastion server
resource "aws_eip" "bastion_eip" {
  instance = aws_instance.Bastion.id
  vpc      = true
}

# EIP association for bastion server
resource "aws_eip_association" "eip_assoc" {
  instance_id   = aws_instance.Bastion.id
  allocation_id = aws_eip.bastion_eip.id
}

# Create internet gateway
resource "aws_internet_gateway" "main-gateway" {
  vpc_id = aws_vpc.main-vpc.id

  tags = {
    Name = "main"
  }
}

# Create route table for public subnet
resource "aws_route_table" "public-route-table" {
  vpc_id = aws_vpc.main-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main-gateway.id
  }

  tags = {
    Name = "public-route-table"
  }
}

# Create subnet 4
resource "aws_subnet" "subnet-4" {
  vpc_id            = aws_vpc.main-vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "eu-west-2a"

  tags = {
    Name = "subnet-public"
  }
}

# Associate subnet 4 with public route table
resource "aws_route_table_association" "subnet-4" {
  subnet_id      = aws_subnet.subnet-4.id
  route_table_id = aws_route_table.public-route-table.id
}

# Create bastion server security group (subnet 4)
resource "aws_security_group" "bastion-sg" {
  name        = "bastion-sg"
  description = "Allow web traffic from specific IPs"
  vpc_id      = aws_vpc.main-vpc.id

  # SSH Traffic
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # allow web traffic.
  }

  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_access_bastion_server"
  }
}
# Create NACL for public subnet with Prod server & bastion server
resource "aws_network_acl" "public_nacl" {
  vpc_id     = aws_vpc.main-vpc.id
  subnet_ids = [aws_subnet.subnet-4.id]

  # Allow inbound http traffic from internet
  ingress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80
    to_port    = 80
  }

  # Allow outbound http traffic to internet
  egress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80
    to_port    = 80
  }

  # Allow inbound SSH traffic from specific IP
  ingress {
    protocol   = "tcp"
    rule_no    = 103
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 22
    to_port    = 22
  }

  # Allow outbound SSH traffic from specific IP
  egress {
    protocol   = "tcp"
    rule_no    = 103
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 22
    to_port    = 22
  }

  tags = {
    Name = "public NACL"
  }
}
# Create bastion box
resource "aws_instance" "Bastion" {
  ami                    = var.ami-id
  instance_type          = var.instance-type
  key_name               = "aws_key_name"
  vpc_security_group_ids = ["security_group_id"]
  subnet_id              = "subnet_id"

  tags = {
    Name = "Bastion Server"
  }
}
I've been looking at this for a while now and can't really see where I've gone wrong. Is the issue with my security group, my IGW, or my route table? If there's any other information you feel is needed, let me know :) and thanks for any help in advance.
I think the problem is in the security group.
# SSH Traffic
ingress {
  description = "SSH"
  from_port   = 0 # SSH client port is not a fixed port
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"] # allow web traffic. 46.64.73.251/32
}
egress {
  from_port   = 0 # SSH client port is not a fixed port
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
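One more thing worth checking in the NACL from the question: unlike security groups, network ACLs are stateless, so the public subnet's NACL must also allow the return traffic that flows back to the SSH client's ephemeral source port. Without a rule along these lines (a sketch; the rule number is arbitrary), the reply half of the connection is dropped even when the security group is correct:
# sketch: allow outbound return traffic to ephemeral client ports
egress {
  protocol   = "tcp"
  rule_no    = 200
  action     = "allow"
  cidr_block = "0.0.0.0/0"
  from_port  = 1024
  to_port    = 65535
}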