How to create aws_instance with an existing VPC?

Is it possible to create an EC2 instance while reusing an already existing VPC?
Running the following code yields Error launching source instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC. (status code: 400):
data "aws_security_groups" "my_tib_sg" {
tags = {
Name = "my-security-group"
}
}
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [data.aws_security_groups.my_tib_sg.id]
# more, irrelevant stuff...
}
From what I understand of the error, the aws_instance block requires a reference to my VPC, which is effectively implied by my security group. Besides, I can't find a way to refer to a VPC in an aws_instance block.
Update:
I updated the code per the answers below:
data "aws_security_groups" "my_tib_sg" {
tags = {
Name = "my-tib-sg"
}
}
data "aws_subnet" "my_subnet" {
tags = {
Name = "my-tib-subnet-1"
}
}
resource "aws_network_interface" "my_ani" {
subnet_id = data.aws_subnet.my_subnet.id
private_ips = ["10.0.0.10"]
tags = {
Name = "my-tib-ani"
by = "TF_TF"
}
}
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [data.aws_security_groups.my_tib_sg.id]
network_interface {
network_interface_id = aws_network_interface.my_ani.id
device_index = 0
}
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
But the error changes to "network_interface": conflicts with vpc_security_group_ids.
(Needless to mention: both my_subnet and my_tib_sg are in the same VPC.)

I typically use the subnet_id parameter directly on the aws_instance resource:
data "aws_security_groups" "my_tib_sg" {
tags = {
Name = "my-tib-sg"
}
}
data "aws_subnet" "my_subnet" {
tags = {
Name = "my-tib-subnet-1"
}
}
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [data.aws_security_groups.my_tib_sg.ids[0]]
# specify the subnet_id here
subnet_id = data.aws_subnet.my_subnet.id
# more, irrelevant stuff...
}
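Alternatively, if you want to keep the explicit aws_network_interface from your update, the "network_interface": conflicts with vpc_security_group_ids error can be avoided by dropping vpc_security_group_ids from the instance and attaching the security groups to the ENI itself. A minimal sketch of that approach (the security_groups argument on aws_network_interface does the work here):
resource "aws_network_interface" "my_ani" {
  subnet_id       = data.aws_subnet.my_subnet.id
  private_ips     = ["10.0.0.10"]
  security_groups = data.aws_security_groups.my_tib_sg.ids # SGs move onto the ENI

  tags = {
    Name = "my-tib-ani"
    by   = "TF_TF"
  }
}

resource "aws_instance" "nginx" {
  ami           = data.aws_ami.aws-linux.id
  instance_type = "t2.micro"
  key_name      = var.key_name
  # no vpc_security_group_ids here; it conflicts with network_interface

  network_interface {
    network_interface_id = aws_network_interface.my_ani.id
    device_index         = 0
  }
}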

Yes, you can add a new EC2 instance to an existing VPC.
You should provide the subnet_id to aws_instance. You would typically pass that into Terraform as a parameter, rather than hard-coding its value into your template.
Note: the subnet ID implicitly indicates the actual VPC (because a subnet only exists in one VPC).
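A minimal sketch of that pattern (the variable name and AMI data source are illustrative assumptions):
variable "subnet_id" {}

resource "aws_instance" "nginx" {
  ami           = data.aws_ami.aws-linux.id
  instance_type = "t2.micro"

  # the subnet implicitly pins the instance to the subnet's VPC
  subnet_id = var.subnet_id
}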

Is it possible to create an EC2 instance while reusing an already existing VPC?
Yes, you can create an EC2 instance in an existing VPC. Use the aws_vpc data source to look up the existing VPC, then reference it from your resources, as in the aws_instance example below:
variable "vpc_id" {}
data "aws_vpc" "selected" {
id = var.vpc_id
}
resource "aws_subnet" "example" {
vpc_id = data.aws_vpc.selected.id
availability_zone = "us-west-2a"
cidr_block = cidrsubnet(data.aws_vpc.selected.cidr_block, 4, 1)
}
resource "aws_security_group" "allow_tls" {
name = "allow_tls"
description = "Allow TLS inbound traffic"
vpc_id = data.aws_vpc.selected.id
ingress {
description = "TLS from VPC"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = [aws_vpc.main.cidr_block]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_tls"
}
}
resource "aws_network_interface" "foo" {
subnet_id = aws_subnet.example.id
private_ips = ["172.16.10.100"]
security_groups = [aws_security_group.allow_tls.id]
tags = {
Name = "primary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
network_interface {
network_interface_id = aws_network_interface.foo.id
device_index = 0
}
credit_specification {
cpu_credits = "unlimited"
}
}

Related

EC2 instances created by a launch template in an auto-scaling group are not being registered with the target group

These are my ALB configurations in case they are necessary. Skip down to see the meat of the problem.
resource "aws_autoscaling_attachment" "asg_attachment" {
autoscaling_group_name = aws_autoscaling_group.web.id
lb_target_group_arn = aws_lb_target_group.main.arn
}
resource "aws_lb" "main" {
name = "test-${var.env}-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.elb.id]
subnets = [aws_subnet.main1.id, aws_subnet.main2.id]
tags = {
Name = "${var.project_name}-${var.env}-alb"
Project = var.project_name
Environment = var.env
ManagedBy = "terraform"
}
}
resource "aws_lb_target_group" "main" {
name = "${var.project_name}-${var.env}-alb-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
deregistration_delay = 30
health_check {
interval = 10
matcher = "200-299"
path = "/"
}
}
resource "aws_lb_listener" "main" {
load_balancer_arn = aws_lb.main.arn
protocol = "HTTPS"
port = "443"
ssl_policy = "ELBSecurityPolicy-TLS-1-2-2017-01"
certificate_arn = aws_acm_certificate.main.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.main.arn
}
}
============
I have an auto-scaling group created through terraform:
resource "aws_autoscaling_group" "web" {
vpc_zone_identifier = [aws_subnet.main1.id]
launch_template {
id = aws_launch_template.web.id
version = "$Latest"
}
min_size = 1
max_size = 10
lifecycle {
create_before_destroy = true
}
}
The launch template looks like this:
resource "aws_launch_template" "web" {
name_prefix = "${var.project_name}-${var.env}-autoscale-web-"
image_id = var.web_ami
instance_type = "t3.small"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.webserver.id]
}
If I use a launch configuration instead of a launch template, it works:
resource "aws_launch_configuration" "web" {
image_id = var.web_ami
instance_type = "t3.small"
key_name = var.key_name
security_groups = [aws_security_group.webserver.id]
root_block_device {
volume_size = 8 # GB
volume_type = "gp3"
}
}
When using the launch configuration, this one line is added to the aws_autoscaling_group:
launch_configuration = aws_launch_configuration.web.name
and the launch_template section is removed.
aws_launch_configuration is deprecated, so I'd like to use the launch_template.
Everything is working fine; the instance spins up and I can connect to it and it passes the health check. The problem is that the EC2 instance doesn't automatically register with the target group. When I manually register it with the target group, then everything works fine.
How can I get the EC2 instances that spin up with a launch template to automatically get added to the target group?
It turns out aws_autoscaling_attachment is also deprecated, and I needed to add:
target_group_arns = [aws_lb_target_group.main.arn]
to my aws_autoscaling_group.
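With that change applied, the ASG resource from the question would look like this (only the target_group_arns line is new; the rest is taken from above):
resource "aws_autoscaling_group" "web" {
  vpc_zone_identifier = [aws_subnet.main1.id]
  target_group_arns   = [aws_lb_target_group.main.arn] # instances auto-register here

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  min_size = 1
  max_size = 10

  lifecycle {
    create_before_destroy = true
  }
}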

Terraform AWS: not able to ping or SSH just-created EC2 instances

I would like to ask for assistance.
I wrote a Terraform script which creates 5 EC2 instances, but I am not able to ping or SSH into them.
Do you see any potential issue with this? I have opened ICMP and SSH, but when I check from other computers/sites I see the port is closed.
When I create an EC2 instance manually, it works from my computer (I am able to SSH/ping), but not with this Terraform script.
provider "aws" {
version = "~> 3.0"
region = "us-east-1"
access_key = "AKxxxxxxxxxxx"
secret_key = "2CLBj/s9dC5r52Y"
}
# Create a VPC
resource "aws_vpc" "BrokenByteVPC" {
cidr_block = "192.168.100.0/28"
tags = {
Name = "BrokenByteVPC"
}
}
resource "aws_subnet" "BrokenbyteLB-subnet" {
vpc_id = aws_vpc.BrokenByteVPC.id
cidr_block = "192.168.100.0/28"
availability_zone = "us-east-1a"
tags = {
Name = "BrokenbyteLB-subnet"
}
}
resource "aws_internet_gateway" "BrokenByte-gateway" {
vpc_id = aws_vpc.BrokenByteVPC.id
tags = {
Name = "BrokenByte-gateway"
}
}
resource "aws_route_table" "BrokenByte-Route-table" {
vpc_id = aws_vpc.BrokenByteVPC.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.BrokenByte-gateway.id
}
}
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
route_table_id = aws_route_table.BrokenByte-Route-table.id
}
resource "aws_security_group" "allow_traffic" {
name = "allow_Traffic"
description = "Allow SSH,HTTP and HTTPS inbound traffic"
vpc_id = aws_vpc.BrokenByteVPC.id
ingress {
description = "Dozvoli SVEEEEEEEE"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH traffic"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP traffic"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS traffic"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Allow_ssh_http_https"
}
}
resource "aws_network_interface" "NginX-public" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
#private_ips = ["192.168.100.2"]
security_groups = [aws_security_group.allow_traffic.id]
}
resource "aws_network_interface" "NginX-LB" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
private_ips = ["192.168.100.10"]
security_groups = [aws_security_group.allow_traffic.id]
}
resource "aws_network_interface" "www1" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
private_ips = ["192.168.100.11"]
security_groups = [aws_security_group.allow_traffic.id]
}
resource "aws_network_interface" "www2" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
private_ips = ["192.168.100.12"]
security_groups = [aws_security_group.allow_traffic.id]
}
resource "aws_network_interface" "www3" {
subnet_id = aws_subnet.BrokenbyteLB-subnet.id
private_ips = ["192.168.100.13"]
security_groups = [aws_security_group.allow_traffic.id]
}
resource "aws_eip" "BrokenByte-PublicIP" {
vpc = true
network_interface = aws_network_interface.NginX-public.id
#associate_with_private_ip = "192.168.100.10"
depends_on = [aws_internet_gateway.BrokenByte-gateway, aws_instance.BrokenByteNginX]
}
resource "aws_instance" "BrokenByteNginX" {
ami = "ami-0dba2cb6798deb6d8"
availability_zone = "us-east-1a"
instance_type = "t2.micro"
key_name = "aws_test"
network_interface {
device_index=0
network_interface_id = aws_network_interface.NginX-LB.id
}
network_interface {
device_index=1
network_interface_id = aws_network_interface.NginX-public.id
}
tags = {
Name = "BrokenByteNginXLB"
}
user_data = <<-EOF
#!/bin/bash
sudo apt-get update -y
EOF
}
resource "aws_instance" "BrokenByteWWW1" {
ami = "ami-0dba2cb6798deb6d8"
availability_zone = "us-east-1a"
instance_type = "t2.micro"
key_name = "aws_test"
network_interface {
device_index=0
network_interface_id = aws_network_interface.www1.id
}
tags = {
Name = "BrokenByteWWW1"
}
}
resource "aws_instance" "BrokenByteWWW2" {
ami = "ami-0dba2cb6798deb6d8"
availability_zone = "us-east-1a"
instance_type = "t2.micro"
key_name = "aws_test"
network_interface {
device_index=0
network_interface_id = aws_network_interface.www2.id
}
tags = {
Name = "BrokenByteWWW2"
}
}
resource "aws_instance" "BrokenByteWWW3" {
ami = "ami-0dba2cb6798deb6d8"
availability_zone = "us-east-1a"
instance_type = "t2.micro"
key_name = "aws_test"
network_interface {
device_index=0
network_interface_id = aws_network_interface.www3.id
}
tags = {
Name = "BrokenByteWWW3"
}
}
None of your instances have a public IP address (except the one with aws_eip.BrokenByte-PublicIP), since your public subnet is missing map_public_ip_on_launch. You can rectify the issue with:
resource "aws_subnet" "BrokenbyteLB-subnet" {
vpc_id = aws_vpc.BrokenByteVPC.id
cidr_block = "192.168.100.0/28"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "BrokenbyteLB-subnet"
}
}
I was sure it was something related to the network card, but wasn't sure what.
Now it's fine, I can ping and SSH; I just swapped the public-IP interface to device 0 and removed the code for the other interface.
@Marcin, your first reply showed me which direction to look in.
# network_interface {
#   device_index         = 0
#   network_interface_id = aws_network_interface.NginX-LB.id
# }

network_interface {
  device_index         = 0
  network_interface_id = aws_network_interface.NginX-public.id
}
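For what it's worth, the same result can be had without managing ENIs at all; a minimal sketch (an assumption on my part, not part of the original fix) that relies on the subnet's map_public_ip_on_launch:
resource "aws_instance" "BrokenByteWWW1" {
  ami                    = "ami-0dba2cb6798deb6d8"
  instance_type          = "t2.micro"
  key_name               = "aws_test"
  subnet_id              = aws_subnet.BrokenbyteLB-subnet.id     # EC2 creates the primary ENI
  vpc_security_group_ids = [aws_security_group.allow_traffic.id] # SGs attach to that ENI

  # with map_public_ip_on_launch = true on the subnet,
  # this instance gets a public IP automatically
}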

Why can't I connect to my ec2 instance after apply?

I have put together my first Terraform script for asset provisioning on AWS. However, I am not able to connect to the EC2 instance in the public subnet.
I can see that all of the expected resources are created:
subnets, instances, route tables, gateway, etc.
I have excluded provider.tf because it contains sensitive secrets.
My region is ap-south-1.
resource "aws_vpc" "vpc1" {
cidr_block = "10.20.0.0/16"
tags = {
name = "tf_vpc"
}
}
# subnets below
resource "aws_subnet" "subnet_public"{
vpc_id = "${aws_vpc.vpc1.id}"
cidr_block = "10.20.10.0/24"
availability_zone = "ap-south-1a"
map_public_ip_on_launch = true
}
resource "aws_subnet" "subnet_private"{
vpc_id = "${aws_vpc.vpc1.id}"
cidr_block = "10.20.20.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_security_group" "sg-web" {
name ="allow80"
description="allows traffic on port 80"
vpc_id ="${aws_vpc.vpc1.id}"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
name="allowhttp"
}
}
resource "aws_default_route_table" "public" {
default_route_table_id = "${aws_vpc.vpc1.main_route_table_id}"
tags = {
name = "route-default"
}
}
resource "aws_internet_gateway" "ig"{
vpc_id = "${aws_vpc.vpc1.id}"
}
resource "aws_route_table" "route_public"{
vpc_id = "${aws_vpc.vpc1.id}"
}
resource "aws_route" "r1" {
route_table_id = "${aws_route_table.route_public.id}"
destination_cidr_block = "0.0.0.0/16"
gateway_id = "${aws_internet_gateway.ig.id}"
}
resource "aws_route_table_association" "public" {
subnet_id = "${aws_subnet.subnet_public.id}"
route_table_id = "${aws_route_table.route_public.id}"
}
resource "aws_instance" "ins1_web"{
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_public.id}"
vpc_security_group_ids = ["${aws_security_group.sg-web.id}"]
key_name = "myBOMkey-2"
tags = {
name="tf-1"
}
}
resource "aws_instance" "ins1_db"{
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_private.id}"
vpc_security_group_ids = ["${aws_security_group.sg-web.id}"]
key_name = "myBOMkey-2"
tags = {
name="tf-1"
}
}
Why can't I connect to my ec2 instance after apply?
Take a look at the CIDR (0.0.0.0/16), which does not seem to be correct; it might be a typo. "Any IP" is represented with "0.0.0.0/0", and that any-IP destination needs to be routed to the internet gateway:
resource "aws_route" "r1" {
route_table_id = "${aws_route_table.route_public.id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.ig.id}"
}
Also missing from your security group configuration is egress (outbound) traffic: Terraform does not keep AWS's default allow-all outbound rule, so you must declare it explicitly. Refer to the Terraform security group documentation.
egress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}
Hope this helps!

AWS ECS Terraform: The requested configuration is currently not supported. Launching EC2 instance failed

I have been trying to spin up ECS using Terraform. About two days ago it was working as expected; however, today I ran terraform apply and I keep getting the error
"The requested configuration is currently not supported. Launching EC2 instance failed"
I have researched this issue a lot. I tried hardcoding the VPC tenancy to default, and I've tried changing the region and the instance type, but nothing seems to fix it.
This is my Terraform config:
provider "aws" {
region = var.region
}
data "aws_availability_zones" "available" {}
# Define a vpc
resource "aws_vpc" "motivy_vpc" {
cidr_block = var.motivy_network_cidr
tags = {
Name = var.motivy_vpc
}
enable_dns_support = "true"
instance_tenancy = "default"
enable_dns_hostnames = "true"
}
# Internet gateway for the public subnet
resource "aws_internet_gateway" "motivy_ig" {
  vpc_id = aws_vpc.motivy_vpc.id

  tags = {
    Name = "motivy_ig"
  }
}

# Public subnet 1
resource "aws_subnet" "motivy_public_sn_01" {
  vpc_id            = aws_vpc.motivy_vpc.id
  cidr_block        = var.motivy_public_01_cidr
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Name = "motivy_public_sn_01"
  }
}

# Public subnet 2
resource "aws_subnet" "motivy_public_sn_02" {
  vpc_id            = aws_vpc.motivy_vpc.id
  cidr_block        = var.motivy_public_02_cidr
  availability_zone = data.aws_availability_zones.available.names[1]

  tags = {
    Name = "motivy_public_sn_02"
  }
}

# Routing table for public subnet 1
resource "aws_route_table" "motivy_public_sn_rt_01" {
  vpc_id = aws_vpc.motivy_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.motivy_ig.id
  }

  tags = {
    Name = "motivy_public_sn_rt_01"
  }
}

# Routing table for public subnet 2
resource "aws_route_table" "motivy_public_sn_rt_02" {
  vpc_id = aws_vpc.motivy_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.motivy_ig.id
  }

  tags = {
    Name = "motivy_public_sn_rt_02"
  }
}

# Associate the routing table to public subnet 1
resource "aws_route_table_association" "motivy_public_sn_rt_01_assn" {
  subnet_id      = aws_subnet.motivy_public_sn_01.id
  route_table_id = aws_route_table.motivy_public_sn_rt_01.id
}

# Associate the routing table to public subnet 2
resource "aws_route_table_association" "motivy_public_sn_rt_02_assn" {
  subnet_id      = aws_subnet.motivy_public_sn_02.id
  route_table_id = aws_route_table.motivy_public_sn_rt_02.id
}
# ECS Instance Security group
resource "aws_security_group" "motivy_public_sg" {
  name        = "motivys_public_sg"
  description = "Test public access security group"
  vpc_id      = aws_vpc.motivy_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5000
    to_port     = 5000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    cidr_blocks = [
      var.motivy_public_01_cidr,
      var.motivy_public_02_cidr
    ]
  }

  egress {
    # allow all traffic to private SN
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "motivy_public_sg"
  }
}
data "aws_ecs_task_definition" "motivy_server" {
task_definition = aws_ecs_task_definition.motivy_server.family
}
resource "aws_ecs_task_definition" "motivy_server" {
family = "motivy_server"
container_definitions = file("task-definitions/service.json")
}
data "aws_ami" "latest_ecs" {
most_recent = true # get the latest version
filter {
name = "name"
values = [
"amzn2-ami-ecs-*"] # ECS optimized image
}
owners = [
"amazon" # Only official images
]
}
resource "aws_launch_configuration" "ecs-launch-configuration" {
name = "ecs-launch-configuration"
image_id = data.aws_ami.latest_ecs.id
instance_type = "t2.micro"
iam_instance_profile = aws_iam_instance_profile.ecs-instance-profile.id
root_block_device {
volume_type = "standard"
volume_size = 100
delete_on_termination = true
}
enable_monitoring = true
lifecycle {
create_before_destroy = true
}
security_groups = [aws_security_group.motivy_public_sg.id]
associate_public_ip_address = "true"
key_name = var.ecs_key_pair_name
user_data = <<EOF
#!/bin/bash
echo ECS_CLUSTER=${var.ecs_cluster} >> /etc/ecs/ecs.config
EOF
}
resource "aws_appautoscaling_target" "ecs_motivy_server_target" {
max_capacity = 2
min_capacity = 1
resource_id = "service/${aws_ecs_cluster.motivy_ecs_cluster.name}/${aws_ecs_service.motivy_server_service.name}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
depends_on = [ aws_ecs_service.motivy_server_service ]
}
resource "aws_iam_role" "ecs-instance-role" {
name = "ecs-instance-role"
path = "/"
assume_role_policy = data.aws_iam_policy_document.ecs-instance-policy.json
}
data "aws_iam_policy_document" "ecs-instance-policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
role = aws_iam_role.ecs-instance-role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
resource "aws_iam_instance_profile" "ecs-instance-profile" {
name = "ecs-instance-profile"
path = "/"
role = aws_iam_role.ecs-instance-role.id
provisioner "local-exec" {
command = "sleep 10"
}
}
resource "aws_autoscaling_group" "motivy-server-autoscaling-group" {
name = "motivy-server-autoscaling-group"
termination_policies = [
"OldestInstance" # When a “scale down” event occurs, which instances to kill first?
]
default_cooldown = 30
health_check_grace_period = 30
max_size = var.max_instance_size
min_size = var.min_instance_size
desired_capacity = var.desired_capacity
# Use this launch configuration to define “how” the EC2 instances are to be launched
launch_configuration = aws_launch_configuration.ecs-launch-configuration.name
lifecycle {
create_before_destroy = true
}
# Refer to vpc.tf for more information
# You could use the private subnets here instead,
# if you want the EC2 instances to be hidden from the internet
vpc_zone_identifier = [aws_subnet.motivy_public_sn_01.id, aws_subnet.motivy_public_sn_02.id]
tags = [{
key = "Name",
value = var.ecs_cluster,
# Make sure EC2 instances are tagged with this tag as well
propagate_at_launch = true
}]
}
resource "aws_alb" "motivy_server_alb_load_balancer" {
name = "motivy-alb-load-balancer"
security_groups = [aws_security_group.motivy_public_sg.id]
subnets = [aws_subnet.motivy_public_sn_01.id, aws_subnet.motivy_public_sn_02.id]
tags = {
Name = "motivy_server_alb_load_balancer"
}
}
resource "aws_alb_target_group" "motivy_server_target_group" {
name = "motivy-server-target-group"
port = 5000
protocol = "HTTP"
vpc_id = aws_vpc.motivy_vpc.id
deregistration_delay = "10"
health_check {
healthy_threshold = "2"
unhealthy_threshold = "6"
interval = "30"
matcher = "200,301,302"
path = "/"
protocol = "HTTP"
timeout = "5"
}
stickiness {
type = "lb_cookie"
}
tags = {
Name = "motivy-server-target-group"
}
}
resource "aws_alb_listener" "alb-listener" {
load_balancer_arn = aws_alb.motivy_server_alb_load_balancer.arn
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = aws_alb_target_group.motivy_server_target_group.arn
type = "forward"
}
}
resource "aws_autoscaling_attachment" "asg_attachment_motivy_server" {
autoscaling_group_name = aws_autoscaling_group.motivy-server-autoscaling-group.id
alb_target_group_arn = aws_alb_target_group.motivy_server_target_group.arn
}
This is the exact error I get:
Error: "motivy-server-autoscaling-group": Waiting up to 10m0s: Need at least 2 healthy instances in ASG, have 0. Most recent activity: {
ActivityId: "a775c531-9496-fdf9-5157-ab2448626293",
AutoScalingGroupName: "motivy-server-autoscaling-group",
Cause: "At 2020-04-05T22:10:28Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 2.",
Description: "Launching a new EC2 instance. Status Reason: The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.",
Details: "{\"Subnet ID\":\"subnet-05de5fc0e994d05fe\",\"Availability Zone\":\"us-east-1a\"}",
EndTime: 2020-04-05 22:10:29 +0000 UTC,
Progress: 100,
StartTime: 2020-04-05 22:10:29.439 +0000 UTC,
StatusCode: "Failed",
StatusMessage: "The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed."
}
I'm not sure why it worked two days ago, but recent Amazon ECS-optimized AMIs use gp2 as their volume type, so you should choose gp2 for root_block_device.volume_type:
resource "aws_launch_configuration" "ecs-launch-configuration" {
# ...
root_block_device {
volume_type = "gp2"
volume_size = 100
delete_on_termination = true
}
# ...
}
data "aws_ami" "latest_ecs" {
most_recent = true # get the latest version
filter {
name = "name"
values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"] # ECS optimized image
}
owners = [
"amazon" # Only official images
]
}
For me, what worked was using a t3-generation instance type instead of t2.
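If the instance generation is indeed the culprit, the change is a single line in the launch configuration; a sketch (the t3.micro size is an assumption):
resource "aws_launch_configuration" "ecs-launch-configuration" {
  # ...
  instance_type = "t3.micro" # t3 generation instead of t2
  # ...
}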

Key Files in Terraform

Whenever I add key_name to my amazon resource, I can never actually connect to the resulting instance:
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
"key_name" = "po"
tags {
Name = "API_Server"
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}
When I do
ssh -i ~/Downloads/po.pem bitnami@IP
I just get a blank line in my terminal, as if I had put in a wrong IP. However, checking the Amazon console, I can see the instance is running. I'm not getting any errors from Terraform either.
By default, no network access is allowed. You need to explicitly allow network access by attaching a security group.
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
key_name = "po"
security_groups = ["${aws_security_group.api_server.id}"]
tags {
Name = "API_Server"
}
}
resource "aws_security_group" "api_server" {
name = "api_server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["XXX.XXX.XXX.XXX/32"] // Allow SSH from your global IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}