I am trying to deploy EC2 instances using Terraform and I am seeing the following error:
Error: Error launching source instance: InvalidGroup.NotFound: The security group 'prod-web-servers-sg' does not exist in VPC 'vpc-db3a3cb3'
Here is the Terraform template I'm using:
resource "aws_default_vpc" "default" {
}
resource "aws_security_group" "prod-web-servers-sg" {
name = "prod-web-servers-sg"
description = "security group for production grade web servers"
vpc_id = "${aws_default_vpc.default.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
#Subnet
resource "aws_subnet" "private_subnet" {
vpc_id = "${aws_default_vpc.default.id}"
cidr_block = "172.31.0.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_instance" "prod-web-server" {
ami = "ami-04b1ddd35fd71475a"
count = 2
key_name = "test_key"
instance_type = "r5.large"
security_groups = ["prod-web-servers-sg"]
subnet_id = "${aws_subnet.private_subnet.id}"
}
You have a race condition there: because the instance refers to the security group only by name, Terraform doesn't know it needs to wait until the security group is created before launching the instance.
To fix this, reference aws_security_group.prod-web-servers-sg.id in the aws_instance.prod-web-server resource so that Terraform can work out the dependency chain between the resources. You should also use vpc_security_group_ids instead of security_groups, as mentioned in the aws_instance resource documentation:
security_groups - (Optional, EC2-Classic and default VPC only) A list of security group names (EC2-Classic) or IDs (default VPC) to associate with.
NOTE:
If you are creating Instances in a VPC, use vpc_security_group_ids instead.
So you should have something like the following:
resource "aws_default_vpc" "default" {}
resource "aws_security_group" "prod-web-servers-sg" {
name = "prod-web-servers-sg"
description = "security group for production grade web servers"
vpc_id = aws_default_vpc.default.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
#Subnet
resource "aws_subnet" "private_subnet" {
vpc_id = aws_default_vpc.default.id
cidr_block = "172.31.0.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_instance" "prod-web-server" {
ami = "ami-04b1ddd35fd71475a"
count = 2
key_name = "test_key"
instance_type = "r5.large"
vpc_security_group_ids = [aws_security_group.prod-web-servers-sg.id]
subnet_id = aws_subnet.private_subnet.id
}
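As an aside, the implicit dependency you get from referencing aws_security_group.prod-web-servers-sg.id is all that is needed here, but if you ever have an ordering requirement that no attribute reference expresses, Terraform also supports an explicit depends_on meta-argument. A sketch, purely for illustration (it is redundant in this configuration):
resource "aws_instance" "prod-web-server" {
  ami                    = "ami-04b1ddd35fd71475a"
  count                  = 2
  key_name               = "test_key"
  instance_type          = "r5.large"
  vpc_security_group_ids = [aws_security_group.prod-web-servers-sg.id]
  subnet_id              = aws_subnet.private_subnet.id

  # Redundant here, since the reference above already orders the resources,
  # but this is how an explicit dependency is declared when needed.
  depends_on = [aws_security_group.prod-web-servers-sg]
}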
I am currently learning Terraform and need help with the code below. I want to create a simple architecture: an Auto Scaling group of EC2 instances behind an Application Load Balancer. The setup completes, but when I try to access the application endpoint it times out. I was also unable to reach the EC2 instances directly, because they sit in a security group that only allows access from the ALB security group. To work around this, I temporarily changed the instance security group's ingress rules, ran the user_data script manually, and then reverted the changes to complete my setup.
My question is: why does my setup not work with the code below? Is access being restricted by the load balancer security group, or is my launch configuration block incorrect?
data "aws_ami" "amazon-linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-kernel-5.10-hvm-2.0.20220426.0-x86_64-gp2"]
}
}
data "aws_availability_zones" "available" {
state = "available"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.0"
name = "main-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
public_subnets = ["10.0.4.0/24","10.0.5.0/24","10.0.6.0/24"]
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_launch_configuration" "TestLC" {
name_prefix = "Lab-Instance-"
image_id = data.aws_ami.amazon-linux.id
instance_type = "t2.nano"
key_name = "CloudformationKeyPair"
user_data = file("./user_data.sh")
security_groups = [aws_security_group.TestInstanceSG.id]
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "TestASG" {
min_size = 1
max_size = 3
desired_capacity = 2
launch_configuration = aws_launch_configuration.TestLC.name
vpc_zone_identifier = module.vpc.public_subnets
}
resource "aws_lb_listener" "TestListener"{
load_balancer_arn = aws_lb.TestLB.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.TestTG.arn
}
}
resource "aws_lb" "TestLB" {
name = "Lab-App-Load-Balancer"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
subnets = module.vpc.public_subnets
}
resource "aws_lb_target_group" "TestTG" {
name = "LabTargetGroup"
port = "80"
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
}
resource "aws_autoscaling_attachment" "TestAutoScalingAttachment" {
autoscaling_group_name = aws_autoscaling_group.TestASG.id
lb_target_group_arn = aws_lb_target_group.TestTG.arn
}
resource "aws_security_group" "TestInstanceSG" {
name = "LAB-Instance-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
vpc_id = module.vpc.vpc_id
}
resource "aws_security_group" "TestLoadBalanceSG" {
name = "LAB-LoadBalancer-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = module.vpc.vpc_id
}
I am trying to get the following Terraform script to run:
provider "aws" {
region = "us-west-2"
}
provider "random" {}
resource "random_pet" "name" {}
resource "aws_instance" "web" {
ami = "ami-a0cfeed8"
instance_type = "t2.micro"
user_data = file("init-script.sh")
subnet_id = "subnet-0422e48590002d10d"
vpc_security_group_ids = [aws_security_group.web-sg.id]
tags = {
Name = random_pet.name.id
}
}
resource "aws_security_group" "web-sg" {
name = "${random_pet.name.id}-sg"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "domain-name" {
value = aws_instance.web.public_dns
}
output "application-url" {
value = "${aws_instance.web.public_dns}/index.php"
}
However, it errors with the following:
Error: Error authorizing security group egress rules: InvalidGroup.NotFound: The security group 'sg-0c181b93d98173b0f' does not exist status code: 400, request id: 6cd8681b-ee70-4ec0-8509-6239c56169a1
The SG gets created with the correct name, but AWS claims it does not exist.
I am unsure how to resolve this.
As is typical, I worked it out straight after posting. I had neglected to add the vpc_id property to the aws_security_group, which meant it was created as an EC2-Classic security group, and those cannot have egress rules.
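For reference, a minimal sketch of the fixed security group, assuming the VPC should be the one that owns the hard-coded subnet (the aws_subnet data source lookup is just one way to obtain that ID; pasting the VPC ID in directly works too):
# Look up the VPC that owns the subnet the instance launches into.
data "aws_subnet" "selected" {
  id = "subnet-0422e48590002d10d"
}

resource "aws_security_group" "web-sg" {
  name   = "${random_pet.name.id}-sg"
  vpc_id = data.aws_subnet.selected.vpc_id # the property that was missing

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}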
I'm running the Terraform code below to deploy an EC2 instance inside a VPC to act as a web server, but for some reason I can't reach the website and can't SSH to it. I believe I have set the ingress and egress rules properly:
########Provider########
provider "aws" {
region = "us-west-2"
access_key = "[redacted]"
secret_key = "[redacted]"
}
########VPC########
resource "aws_vpc" "vpc1" {
cidr_block = "10.1.0.0/16"
tags = {
Name = "Production"
}
}
########Internet GW########
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.vpc1.id
}
########Route table########
resource "aws_route_table" "rt" {
vpc_id = aws_vpc.vpc1.id
route {
cidr_block = "0.0.0.0/24"
gateway_id = aws_internet_gateway.gw.id
}
route {
ipv6_cidr_block = "::/0"
gateway_id = aws_internet_gateway.gw.id
}
}
########Sub Net########
resource "aws_subnet" "subnet1" {
vpc_id = aws_vpc.vpc1.id
cidr_block = "10.1.0.0/24"
availability_zone = "us-west-2a"
map_public_ip_on_launch = "true"
tags = {
Name = "prod-subnet-1"
}
}
########RT association########
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.subnet1.id
route_table_id = aws_route_table.rt.id
}
########Security Group########
resource "aws_security_group" "sec1" {
name = "allow_web"
description = "Allow web inbound traffic"
vpc_id = aws_vpc.vpc1.id
ingress {
description = "HTTP from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.1.0.0/16"]
}
#SSH access from anywhere
ingress {
description = "SSH from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_web"
}
}
########Net Interface for the Instance########
#resource "aws_network_interface" "wsn" {
# subnet_id = aws_subnet.subnet1.id
# private_ips = ["10.0.1.50"]
# security_groups = [aws_security_group.sec1.id]
#}
########Load Balancer########
resource "aws_elb" "elb" {
name = "lb"
subnets = [aws_subnet.subnet1.id]
security_groups = [aws_security_group.sec1.id]
instances = [aws_instance.web1.id]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
########EC2 Instance########
resource "aws_instance" "web1" {
ami = "ami-003634241a8fcdec0" #ubuntu 18.4
instance_type = "t2.micro"
availability_zone = "us-west-2a"
key_name = "main-key"
subnet_id = aws_subnet.subnet1.id
#network_interface {
# device_index = 0
# network_interface_id = aws_network_interface.wsn.id
#}
user_data = <<-EOF
#!/bin/bash
sudo apt update -y
sudo apt install apache2 -y
sudo systemctl start apache2
sudo bash -c 'echo Hello world!!! > /var/www/html/index.html'
EOF
tags = {
Name = "HelloWorld"
}
}
output "aws_elb_public_dns" {
value = aws_elb.elb.dns_name
}
The plan and the apply run fine, but in the load balancer the instance shows as "OutOfService".
What could be wrong here?
You are missing the security group attachment on your instance: vpc_security_group_ids.
As a result, you won't be able to SSH to it, nor will HTTP traffic be allowed in from the outside.
Your route to the IGW is also incorrect. It should be:
cidr_block = "0.0.0.0/0"
The same applies to the security group attached to your ELB: to allow HTTP traffic from the internet, the port 80 ingress should use:
cidr_blocks = ["0.0.0.0/0"]
I have put together my first Terraform script for asset provisioning on AWS. However, I am not able to connect to the EC2 instance in the public subnet.
I can see that all of the expected resources are created: subnets, instances, route tables, gateway, etc.
I have excluded provider.tf because it contains sensitive secrets.
My region is ap-south-1.
resource "aws_vpc" "vpc1" {
cidr_block = "10.20.0.0/16"
tags = {
name = "tf_vpc"
}
}
# subnets below
resource "aws_subnet" "subnet_public"{
vpc_id = "${aws_vpc.vpc1.id}"
cidr_block = "10.20.10.0/24"
availability_zone = "ap-south-1a"
map_public_ip_on_launch = true
}
resource "aws_subnet" "subnet_private"{
vpc_id = "${aws_vpc.vpc1.id}"
cidr_block = "10.20.20.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_security_group" "sg-web" {
name ="allow80"
description="allows traffic on port 80"
vpc_id ="${aws_vpc.vpc1.id}"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
name="allowhttp"
}
}
resource "aws_default_route_table" "public" {
default_route_table_id = "${aws_vpc.vpc1.main_route_table_id}"
tags = {
name = "route-default"
}
}
resource "aws_internet_gateway" "ig"{
vpc_id = "${aws_vpc.vpc1.id}"
}
resource "aws_route_table" "route_public"{
vpc_id = "${aws_vpc.vpc1.id}"
}
resource "aws_route" "r1" {
route_table_id = "${aws_route_table.route_public.id}"
destination_cidr_block = "0.0.0.0/16"
gateway_id = "${aws_internet_gateway.ig.id}"
}
resource "aws_route_table_association" "public" {
subnet_id = "${aws_subnet.subnet_public.id}"
route_table_id = "${aws_route_table.route_public.id}"
}
resource "aws_instance" "ins1_web"{
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_public.id}"
vpc_security_group_ids = ["${aws_security_group.sg-web.id}"]
key_name = "myBOMkey-2"
tags = {
name="tf-1"
}
}
resource "aws_instance" "ins1_db"{
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_private.id}"
vpc_security_group_ids = ["${aws_security_group.sg-web.id}"]
key_name = "myBOMkey-2"
tags = {
name="tf-1"
}
}
Why can't I connect to my EC2 instance after apply?
Take a look at the CIDR in your route (0.0.0.0/16), which is not correct; it is likely a typo. "Any IP" is represented as "0.0.0.0/0", and that is the destination that needs to be routed to the internet gateway:
resource "aws_route" "r1" {
route_table_id = "${aws_route_table.route_public.id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.ig.id}"
}
Also missing from your security group configuration is an egress (outbound) rule: Terraform does not keep the default allow-all outbound rule, so you need to declare it explicitly. Refer to the Terraform security group documentation:
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
Hope this helps!
I'm trying to output the IP of my VPC NAT instance, but the IP is not correct. Here's my configuration:
resource "aws_vpc" "default" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
tags {
Name = "terraform-aws-vpc"
}
}
resource "aws_internet_gateway" "default" {
vpc_id = "${aws_vpc.default.id}"
}
/*
NAT Instance
*/
resource "aws_security_group" "nat" {
name = "vpc_nat"
description = "Allow traffic to pass from the private subnet to the internet"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${var.private_subnet_cidr}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["${var.private_subnet_cidr}"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
}
egress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = "${aws_vpc.default.id}"
tags {
Name = "NAT"
}
}
resource "aws_instance" "nat" {
ami = "ami-30913f47" # this is a special ami preconfigured to do NAT
availability_zone = "eu-west-1a"
instance_type = "m1.small"
key_name = "admin_key"
vpc_security_group_ids = ["${aws_security_group.nat.id}"]
subnet_id = "${aws_subnet.eu-west-1a-public.id}"
associate_public_ip_address = true
source_dest_check = false
tags {
Name = "VPC NAT"
}
}
resource "aws_eip" "nat" {
instance = "${aws_instance.nat.id}"
vpc = true
}
/*
Public Subnet
*/
resource "aws_subnet" "eu-west-1a-public" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.public_subnet_cidr}"
availability_zone = "eu-west-1a"
tags {
Name = "Public Subnet"
}
}
resource "aws_route_table" "eu-west-1a-public" {
vpc_id = "${aws_vpc.default.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.default.id}"
}
tags {
Name = "Public Subnet"
}
}
resource "aws_route_table_association" "eu-west-1a-public" {
subnet_id = "${aws_subnet.eu-west-1a-public.id}"
route_table_id = "${aws_route_table.eu-west-1a-public.id}"
}
/*
Private Subnet
*/
resource "aws_subnet" "eu-west-1a-private" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.private_subnet_cidr}"
availability_zone = "eu-west-1a"
tags {
Name = "Private Subnet"
}
}
resource "aws_route_table" "eu-west-1a-private" {
vpc_id = "${aws_vpc.default.id}"
route {
cidr_block = "0.0.0.0/0"
instance_id = "${aws_instance.nat.id}"
}
tags {
Name = "Private Subnet"
}
}
resource "aws_route_table_association" "eu-west-1a-private" {
subnet_id = "${aws_subnet.eu-west-1a-private.id}"
route_table_id = "${aws_route_table.eu-west-1a-private.id}"
}
output "NAT_Private_IP" {
value = "${aws_instance.nat.private_ip}"
}
I have tested the following:
aws_instance.nat.public_ip
and
aws_eip.nat.public_ip
but with no luck; aws_instance.nat.public_ip does not give the correct IP. This code is based on the Terraform AWS examples, and I'm trying to build a VPC bastion host.
It looks like you are trying to modify the default VPC; see the following:
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-ip-address-range/
https://github.com/terraform-providers/terraform-provider-aws/issues/3403
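As a sketch only, and assuming the confusion comes from the EIP association rather than from the default-VPC issues linked above: the usual pattern is to output the address from the aws_eip resource itself, because the instance's public_ip attribute can still hold the auto-assigned address that existed before the EIP was attached:
output "NAT_Public_IP" {
  # The EIP resource holds the address that is actually associated with the
  # NAT instance; aws_instance.nat.public_ip can lag behind it in state.
  value = "${aws_eip.nat.public_ip}"
}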