I am struggling to understand CIDR blocks in the way I am using them. My (probably wrong) understanding is that they are a way of reserving a range of IP addresses for your environment, which you can then apportion across applications. But I can't get this working in my case. I am using Terraform to manage a simple environment: a VPC containing a Lambda function and an RDS instance. The RDS instance will not be publicly accessible; the Lambda will be invoked by an HTTP trigger. The Lambda and the RDS instance each need their own subnets, and RDS needs two. I have this configuration in Terraform, which keeps failing with this and similar errors:
The new Subnets are not in the same Vpc as the existing subnet group
The Terraform setup is:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "vpc"
}
}
resource "aws_subnet" "rds_subnet_1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "eu-west-1a"
tags = {
Name = "rds_subnet_1a"
}
}
resource "aws_subnet" "rds_subnet_1b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "eu-west-1b"
tags = {
Name = "rds_subnet_1b"
}
}
resource "aws_subnet" "lambda_subnet_1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "eu-west-1a"
tags = {
Name = "lambda_subnet_1a"
}
}
resource "aws_db_subnet_group" "default" {
name = "main"
subnet_ids = [aws_subnet.rds_subnet_1a.id, aws_subnet.rds_subnet_1b.id]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_security_group" "rds" {
name = "rds-sg"
vpc_id = aws_vpc.main.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
egress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
tags = {
Name = "rds-sg"
}
}
resource "aws_security_group" "lambda" {
name = "lambda_sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = -1
self = true
from_port = 0
to_port = 0
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["10.0.0.0/16"]
}
tags = {
Name = "lambda_sg"
}
}
I know this is basic, but I think getting answers for my specific situation may help me understand the concepts better.
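On the CIDR question itself: a /16 VPC gives you a pool of 65,536 addresses, and each /24 subnet carves out a non-overlapping block of 256 of them. Terraform can do this arithmetic for you with the built-in `cidrsubnet()` function — a minimal illustration only (the local names are made up for this sketch, not from the configuration above):

```hcl
# cidrsubnet(prefix, newbits, netnum): adding 8 bits to a /16 yields a /24;
# netnum selects which /24 within the /16 (1 => 10.0.1.0/24, 2 => 10.0.2.0/24, ...).
locals {
  vpc_cidr         = "10.0.0.0/16"
  rds_cidr_1a      = cidrsubnet(local.vpc_cidr, 8, 1) # "10.0.1.0/24"
  rds_cidr_1b      = cidrsubnet(local.vpc_cidr, 8, 2) # "10.0.2.0/24"
  lambda_cidr_1a   = cidrsubnet(local.vpc_cidr, 8, 3) # "10.0.3.0/24"
}
```

Deriving subnet CIDRs this way guarantees they all fall inside the VPC block and never overlap, which is exactly the "reserve a range, then apportion it" idea.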
EDIT - lambda config:
resource "aws_lambda_function" "api_uprn" {
function_name = "api-uprn"
s3_bucket = aws_s3_bucket.lambdas_bucket.id
s3_key = "api-uprn/function_0.0.8.zip"
runtime = "python3.9"
handler = "app.main.handler"
role = aws_iam_role.lambda_exec.arn
vpc_config {
subnet_ids = [aws_subnet.subnet_1a.id]
security_group_ids = [aws_security_group.lambda.id]
}
}
resource "aws_cloudwatch_log_group" "api_uprn" {
name = "/aws/lambda/${aws_lambda_function.api_uprn.function_name}"
retention_in_days = 30
}
resource "aws_iam_role" "lambda_exec" {
name = "api_uprn"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "lambda_policy" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "rds_read" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "lambda_vpc_access" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
Can you please post the full error here? It will be easier to understand which resource is throwing the error.
My tip is that you need to change the subnet_ids in your Lambda configuration. From what I understand, it should look like this:
resource "aws_lambda_function" "api_uprn" {
  function_name = "api-uprn"
  s3_bucket     = aws_s3_bucket.lambdas_bucket.id
  s3_key        = "api-uprn/function_0.0.8.zip"
  runtime       = "python3.9"
  handler       = "app.main.handler"
  role          = aws_iam_role.lambda_exec.arn
  vpc_config {
    subnet_ids         = [aws_subnet.lambda_subnet_1a.id]
    security_group_ids = [aws_security_group.lambda.id]
  }
}
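As a side note, the rds security group in the question allows Postgres traffic from the entire VPC CIDR (10.0.0.0/16). A tighter pattern is to reference the Lambda security group directly, so only resources carrying that group can reach the database. A sketch, reusing the security group names from the question:

```hcl
resource "aws_security_group" "rds" {
  name   = "rds-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port = 5432
    to_port   = 5432
    protocol  = "tcp"
    # Allow Postgres only from resources attached to the Lambda SG,
    # instead of from the whole 10.0.0.0/16 block.
    security_groups = [aws_security_group.lambda.id]
  }
}
```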
Hi, I'm a beginner trying to play with VPCs on AWS using Terraform, and I'm stuck on an ALB health check issue.
I have two AZs, each with an EC2 instance running a web server. My goal is to set up the load balancer and be redirected to one of my EC2 instances. The instances are in private subnets behind a NAT gateway with an Elastic IP.
I've tried setting up a bastion host to check over SSH whether my EC2 instances have internet access, and the answer is yes.
This is my Terraform setup (maybe there is an obvious error that I haven't seen):
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
shared_credentials_file = "./aws/credentials"
region = "us-east-1"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "my-vpc"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "my-internet-gateway"
}
}
resource "aws_subnet" "public_a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "my-public-a-subnet"
}
}
resource "aws_subnet" "public_b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
map_public_ip_on_launch = true
tags = {
Name = "my-public-b-subnet"
}
}
resource "aws_subnet" "private_a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "us-east-1a"
tags = {
Name = "my-private-a-subnet"
}
}
resource "aws_subnet" "private_b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.4.0/24"
availability_zone = "us-east-1b"
tags = {
Name = "my-private-b-subnet"
}
}
resource "aws_nat_gateway" "main" {
allocation_id = aws_eip.main.id
subnet_id = aws_subnet.public_a.id
}
resource "aws_eip" "main" {
vpc = true
tags = {
Name = "my-nat-gateway-eip"
}
}
resource "aws_security_group" "main" {
name = "my-security-group"
description = "Allow HTTP and SSH access"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "my-security-group"
}
}
resource "aws_instance" "ec2_a" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.private_a.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-ec2-a"
}
key_name = "vockey"
user_data = file("user_data.sh")
}
resource "aws_instance" "ec2_b" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.private_b.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-ec2-b"
}
key_name = "vockey"
user_data = file("user_data.sh")
}
resource "aws_instance" "bastion" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.public_a.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-bastion"
}
key_name = "vockey"
user_data = file("user_data_bastion.sh")
}
resource "aws_alb" "main" {
name = "my-alb"
internal = false
security_groups = [aws_security_group.main.id]
subnets = [aws_subnet.public_a.id, aws_subnet.public_b.id]
tags = {
Name = "my-alb"
}
}
resource "aws_alb_target_group" "ec2" {
name = "my-alb-target-group"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
tags = {
Name = "my-alb-target-group"
}
}
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.main.id
}
tags = {
Name = "my-private-route-table"
}
}
resource "aws_route_table_association" "private_a" {
subnet_id = aws_subnet.private_a.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table_association" "private_b" {
subnet_id = aws_subnet.private_b.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "My Public Route Table"
}
}
resource "aws_route_table_association" "public_a" {
subnet_id = aws_subnet.public_a.id
route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "public_b" {
subnet_id = aws_subnet.public_b.id
route_table_id = aws_route_table.public.id
}
resource "aws_alb_listener" "main" {
load_balancer_arn = aws_alb.main.arn
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = aws_alb_target_group.ec2.arn
type = "forward"
}
}
resource "aws_alb_target_group_attachment" "ec2_a" {
target_group_arn = aws_alb_target_group.ec2.arn
target_id = aws_instance.ec2_a.id
port = 80
}
resource "aws_alb_target_group_attachment" "ec2_b" {
target_group_arn = aws_alb_target_group.ec2.arn
target_id = aws_instance.ec2_b.id
port = 80
}
It looks like you don't have a health_check block on the aws_alb_target_group resource. Try adding something like this:
resource "aws_alb_target_group" "ec2" {
  name     = "my-alb-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
  health_check {
    path    = "/"
    matcher = "200"
  }
  tags = {
    Name = "my-alb-target-group"
  }
}
Also, make sure that the HTTP services on your EC2 instances are listening and accepting connections on port 80. You should be able to curl http://<ec2 ip address> with a 200 response.
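If the default check cadence is too aggressive while you are debugging, the health_check block also accepts tuning parameters. A sketch with illustrative values, not a recommendation:

```hcl
health_check {
  path                = "/"
  matcher             = "200"
  interval            = 30 # seconds between checks
  timeout             = 5  # seconds before a single check counts as failed
  healthy_threshold   = 2  # consecutive successes before a target is "healthy"
  unhealthy_threshold = 3  # consecutive failures before a target is "unhealthy"
}
```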
I am quite desperate with an issue very similar to the one described in this thread.
https://github.com/OpenDroneMap/opendronemap-ecs/issues/14#issuecomment-432004023
When I attach the network interface to my EC2 instance, so that my custom VPC is used instead of the default one, the EC2 instance no longer joins the ECS cluster.
This is my terraform definition.
provider "aws" {}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
assign_generated_ipv6_cidr_block = true
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
}
resource "aws_subnet" "main" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.0.0/16"
availability_zone = "us-west-2a"
map_public_ip_on_launch = true
}
resource "aws_route_table" "main" {
vpc_id = aws_vpc.main.id
}
resource "aws_route_table_association" "rta1" {
subnet_id = aws_subnet.main.id
route_table_id = aws_route_table.main.id
}
resource "aws_route_table_association" "rta2" {
gateway_id = aws_internet_gateway.main.id
route_table_id = aws_route_table.main.id
}
resource "aws_security_group" "sg-jenkins" {
name = "sg_jenkins"
description = "Allow inbound traffic for Jenkins instance"
vpc_id = aws_vpc.main.id
ingress = [
{
description = "inbound all"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
self = null
prefix_list_ids = null
security_groups = null
}
]
egress = [
{
description = "outbound all"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
self = null
prefix_list_ids = null
security_groups = null
}
]
}
resource "aws_network_interface" "main" {
subnet_id = aws_subnet.main.id
security_groups = [aws_security_group.sg-jenkins.id]
}
resource "aws_instance" "ec2_instance" {
ami = "ami-07764a7d8502d36a2"
instance_type = "t2.micro"
iam_instance_profile = "ecsInstanceRole"
key_name = "fran"
network_interface {
device_index = 0
network_interface_id = aws_network_interface.main.id
}
user_data = <<EOF
#!/bin/bash
echo ECS_CLUSTER=cluster >> /etc/ecs/ecs.config
EOF
depends_on = [aws_internet_gateway.main]
}
### Task definition
resource "aws_ecs_task_definition" "jenkins-task" {
family = "namespace"
container_definitions = jsonencode([
{
name = "jenkins"
image = "cnservices/jenkins-master"
cpu = 10
memory = 512
essential = true
portMappings = [
{
containerPort = 8080
hostPort = 8080
}
]
}
])
# network_mode = "awsvpc"
volume {
name = "service-storage"
host_path = "/ecs/service-storage"
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a]"
}
}
### Cluster
resource "aws_ecs_cluster" "cluster" {
name = "cluster"
setting {
name = "containerInsights"
value = "enabled"
}
}
### Service
resource "aws_ecs_service" "jenkins-service" {
name = "jenkins-service"
cluster = aws_ecs_cluster.cluster.id
task_definition = aws_ecs_task_definition.jenkins-task.arn
desired_count = 1
# iam_role = aws_iam_role.foo.arn
# depends_on = [aws_iam_role_policy.foo]
# network_configuration {
# security_groups = [aws_security_group.sg-jenkins.id]
# subnets = [aws_subnet.main.id]
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a]"
}
}
You haven't created a route to your IGW. Thus your instance can't connect to the ECS service to register with your cluster. So remove rta2 and add a route:
# not needed. to be removed.
# resource "aws_route_table_association" "rta2" {
#   gateway_id     = aws_internet_gateway.main.id
#   route_table_id = aws_route_table.main.id
# }

# add the missing route to the IGW
resource "aws_route" "r" {
  route_table_id         = aws_route_table.main.id
  gateway_id             = aws_internet_gateway.main.id
  destination_cidr_block = "0.0.0.0/0"
}
I'm trying to create both services within the same VPC and give them appropriate security groups, but I can't make it work.
variable "vpc_cidr_block" {
default = "10.1.0.0/16"
}
variable "cidr_block_subnet_public" {
default = "10.1.1.0/24"
}
variable "cidr_block_subnets_private" {
default = ["10.1.2.0/24", "10.1.3.0/24", "10.1.4.0/24"]
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
}
resource "aws_subnet" "private" {
count = length(var.cidr_block_subnets_private)
cidr_block = var.cidr_block_subnets_private[count.index]
vpc_id = aws_vpc.vpc.id
availability_zone = data.aws_availability_zones.available.names[count.index]
}
resource "aws_security_group" "lambda" {
vpc_id = aws_vpc.vpc.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "rds" {
vpc_id = aws_vpc.vpc.id
ingress {
description = "PostgreSQL"
from_port = 5432
protocol = "tcp"
to_port = 5432
// security_groups = [aws_security_group.lambda.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_lambda_function" "event" {
function_name = "ServerlessExampleEvent"
timeout = 30
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/lambda-1.0.0-all.jar"
handler = "dk.fitfit.handler.EventRequestHandler"
runtime = "java11"
memory_size = 256
role = aws_iam_role.event.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for s in aws_subnet.private: s.id]
}
environment {
variables = {
JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.identifier}"
DATABASE_USERNAME = aws_db_instance.rds.username
DATABASE_PASSWORD = aws_db_instance.rds.password
}
}
}
resource "aws_db_subnet_group" "db" {
subnet_ids = aws_subnet.private.*.id
}
resource "aws_db_instance" "rds" {
allocated_storage = 10
engine = "postgres"
engine_version = "11.5"
instance_class = "db.t2.micro"
username = "postgres"
password = random_password.password.result
skip_final_snapshot = true
apply_immediately = true
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.db.name
}
resource "random_password" "password" {
length = 32
special = false
}
I tried not to clutter the question by posting only the relevant parts of my HCL. Please let me know if I missed anything important.
The biggest issue is the commented-out security_groups parameter on the ingress block of the rds security group. Uncommenting that should allow PostgreSQL traffic from the Lambda security group:
resource "aws_security_group" "rds" {
  vpc_id = aws_vpc.vpc.id
  ingress {
    description     = "PostgreSQL"
    from_port       = 5432
    protocol        = "tcp"
    to_port         = 5432
    security_groups = [aws_security_group.lambda.id]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
As well as that, your JDBC string is resolving to something like jdbc:postgresql://terraform-20091110230000000000000001.xxxx.us-east-1.rds.amazonaws.com:5432/terraform-20091110230000000000000001, because you aren't specifying an identifier for the RDS instance, so it defaults to a generated identifier prefixed with terraform- plus a timestamp and a counter. The important part to note here is that your RDS instance doesn't yet contain a database named terraform-20091110230000000000000001 for your application to connect to, because you haven't specified one.
You can have RDS create a database on the RDS instance by using the name parameter. You can then update your JDBC connection string to specify the database name as well:
resource "aws_db_instance" "rds" {
  allocated_storage      = 10
  engine                 = "postgres"
  engine_version         = "11.5"
  instance_class         = "db.t2.micro"
  username               = "postgres"
  password               = random_password.password.result
  skip_final_snapshot    = true
  apply_immediately      = true
  name                   = "foo"
  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.db.name
}
resource "aws_lambda_function" "event" {
  function_name = "ServerlessExampleEvent"
  timeout       = 30
  s3_bucket     = "mn-lambda"
  s3_key        = "mn/v1.0.0/lambda-1.0.0-all.jar"
  handler       = "dk.fitfit.handler.EventRequestHandler"
  runtime       = "java11"
  memory_size   = 256
  role          = aws_iam_role.event.arn
  vpc_config {
    security_group_ids = [aws_security_group.lambda.id]
    subnet_ids         = [for s in aws_subnet.private : s.id]
  }
  environment {
    variables = {
      JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.name}"
      DATABASE_USERNAME = aws_db_instance.rds.username
      DATABASE_PASSWORD = aws_db_instance.rds.password
    }
  }
}
I am trying to use Terraform to create an AWS EKS cluster with an ALB load balancer and a Kubernetes ingress.
I have been using this git repo and this blog to guide me.
The deploy fails with the following errors immediately after the cluster has been created.
Error: Post "https://E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps": dial tcp: lookup E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com on 8.8.8.8:53: no such host
on modules/alb/alb_ingress_controller.tf line 1, in resource "kubernetes_config_map" "aws_auth":
1: resource "kubernetes_config_map" "aws_auth" {
Error: Post "https://E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp: lookup E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com on 8.8.8.8:53: no such host
on modules/alb/alb_ingress_controller.tf line 20, in resource "kubernetes_cluster_role" "alb-ingress":
20: resource "kubernetes_cluster_role" "alb-ingress" {
Error: Post "https://E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com/apis/rbac.authorization.k8s.io/v1/clusterrolebindings": dial tcp: lookup E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com on 8.8.8.8:53: no such host
on modules/alb/alb_ingress_controller.tf line 41, in resource "kubernetes_cluster_role_binding" "alb-ingress":
41: resource "kubernetes_cluster_role_binding" "alb-ingress" {
Error: Post "https://E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com/api/v1/namespaces/kube-system/serviceaccounts": dial tcp: lookup E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com on 8.8.8.8:53: no such host
on modules/alb/alb_ingress_controller.tf line 62, in resource "kubernetes_service_account" "alb-ingress":
62: resource "kubernetes_service_account" "alb-ingress" {
Error: Failed to create Ingress 'default/main-ingress' because: Post "https://E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com/apis/extensions/v1beta1/namespaces/default/ingresses": dial tcp: lookup E8475B1B3693C979073BF0D721D876A7.sk1.ap-southeast-1.eks.amazonaws.com on 8.8.8.8:53: no such host
on modules/alb/kubernetes_ingress.tf line 1, in resource "kubernetes_ingress" "main":
1: resource "kubernetes_ingress" "main" {
Error: Post "https://641480DEC80EB445C6CBBEDC9D1F0234.yl4.ap-southeast-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps": dial tcp 10.0.21.192:443: connect: no route to host
on modules/eks/allow_nodes.tf line 22, in resource "kubernetes_config_map" "aws_auth":
22: resource "kubernetes_config_map" "aws_auth" {
Here is my Terraform code:
provider "aws" {
region = var.aws_region
version = "~> 2.65.0"
ignore_tags {
keys = ["kubernetes.io/role/internal-elb", "app.kubernetes.io/name"]
key_prefixes = ["kubernetes.io/cluster/", "alb.ingress.kubernetes.io/"]
}
}
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = <<EOF
- rolearn: ${var.iam_role_node}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
}
depends_on = [
var.eks_cluster_name
]
}
resource "kubernetes_cluster_role" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
rule {
api_groups = ["", "extensions"]
resources = ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
verbs = ["create", "get", "list", "update", "watch", "patch"]
}
rule {
api_groups = ["", "extensions"]
resources = ["nodes", "pods", "secrets", "services", "namespaces"]
verbs = ["get", "list", "watch"]
}
}
resource "kubernetes_cluster_role_binding" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "alb-ingress-controller"
}
subject {
kind = "ServiceAccount"
name = "alb-ingress-controller"
namespace = "kube-system"
}
}
resource "kubernetes_service_account" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
automount_service_account_token = true
}
resource "kubernetes_deployment" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
namespace = "kube-system"
}
spec {
selector {
match_labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
template {
metadata {
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
spec {
volume {
name = kubernetes_service_account.alb-ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.alb-ingress.default_secret_name
}
}
container {
# This is where you change the version when Amazon comes out with a new version of the ingress controller
image = "docker.io/amazon/aws-alb-ingress-controller:v1.1.7"
name = "alb-ingress-controller"
args = [
"--ingress-class=alb",
"--cluster-name=${var.eks_cluster_name}",
"--aws-vpc-id=${var.vpc_id}",
"--aws-region=${var.aws_region}"]
}
service_account_name = "alb-ingress-controller"
}
}
}
}
########################################################################################
# setup provider for kubernetes
//data "external" "aws_iam_authenticator" {
// program = ["sh", "-c", "aws-iam-authenticator token -i ${var.cluster_name} | jq -r -c .status"]
//}
data "aws_eks_cluster_auth" "tf_eks_cluster" {
name = aws_eks_cluster.tf_eks_cluster.name
}
provider "kubernetes" {
host = aws_eks_cluster.tf_eks_cluster.endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.tf_eks_cluster.certificate_authority.0.data)
//token = data.external.aws_iam_authenticator.result.token
token = data.aws_eks_cluster_auth.tf_eks_cluster.token
load_config_file = false
version = "~> 1.9"
}
# Allow worker nodes to join cluster via config map
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = <<EOF
- rolearn: ${aws_iam_role.tf-eks-node.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
}
depends_on = [aws_eks_cluster.tf_eks_cluster, aws_autoscaling_group.tf_eks_cluster]
}
resource "kubernetes_ingress" "main" {
metadata {
name = "main-ingress"
annotations = {
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/subnets" = var.app_subnet_stringlist
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.api.arn}, ${data.aws_acm_certificate.gitea.arn}"
"alb.ingress.kubernetes.io/listen-ports" = <<JSON
[
{"HTTP": 80},
{"HTTPS": 443}
]
JSON
"alb.ingress.kubernetes.io/actions.ssl-redirect" = <<JSON
{
"Type": "redirect",
"RedirectConfig": {
"Protocol": "HTTPS",
"Port": "443",
"StatusCode": "HTTP_301"
}
}
JSON
}
}
spec {
rule {
host = "api.xactpos.com"
http {
path {
backend {
service_name = "ssl-redirect"
service_port = "use-annotation"
}
path = "/*"
}
path {
backend {
service_name = "app-service1"
service_port = 80
}
path = "/service1"
}
path {
backend {
service_name = "app-service2"
service_port = 80
}
path = "/service2"
}
}
}
rule {
host = "gitea.xactpos.com"
http {
path {
backend {
service_name = "ssl-redirect"
service_port = "use-annotation"
}
path = "/*"
}
path {
backend {
service_name = "api-service1"
service_port = 80
}
path = "/service3"
}
path {
backend {
service_name = "api-service2"
service_port = 80
}
path = "/graphq4"
}
}
}
}
}
resource "aws_security_group" "eks-alb" {
name = "eks-alb-public"
description = "Security group allowing public traffic for the eks load balancer."
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = map(
"Name", "terraform-eks-alb",
"kubernetes.io/cluster/tf-eks-cluster", "owned"
)
}
resource "aws_security_group_rule" "eks-alb-public-https" {
description = "Allow eks load balancer to communicate with public traffic securely."
cidr_blocks = ["0.0.0.0/0"]
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.eks-alb.id
to_port = 443
type = "ingress"
}
resource "aws_security_group_rule" "eks-alb-public-http" {
description = "Allow eks load balancer to communicate with public traffic."
cidr_blocks = ["0.0.0.0/0"]
from_port = 80
protocol = "tcp"
security_group_id = aws_security_group.eks-alb.id
to_port = 80
type = "ingress"
}
resource "aws_eks_cluster" "tf_eks_cluster" {
name = var.cluster_name
role_arn = aws_iam_role.tf-eks-cluster.arn
vpc_config {
security_group_ids = [aws_security_group.tf-eks-cluster.id]
subnet_ids = var.app_subnet_ids
endpoint_private_access = true
endpoint_public_access = false
}
depends_on = [
aws_iam_role_policy_attachment.tf-eks-cluster-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.tf-eks-cluster-AmazonEKSServicePolicy,
]
}
# Setup for IAM role needed to setup an EKS cluster
resource "aws_iam_role" "tf-eks-cluster" {
name = "tf-eks-cluster"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "tf-eks-cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.tf-eks-cluster.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.tf-eks-cluster.name
}
########################################################################################
# Setup IAM role & instance profile for worker nodes
resource "aws_iam_role" "tf-eks-node" {
name = "tf-eks-node"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_instance_profile" "tf-eks-node" {
name = "tf-eks-node"
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-node-AmazonEC2FullAccess" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_role_policy_attachment" "tf-eks-node-alb-ingress_policy" {
policy_arn = aws_iam_policy.alb-ingress.arn
role = aws_iam_role.tf-eks-node.name
}
resource "aws_iam_policy" "alb-ingress" {
name = "alb-ingress-policy"
policy = file("${path.module}/alb_ingress_policy.json")
}
# generate KUBECONFIG as output to save in ~/.kube/config locally
# save the 'terraform output eks_kubeconfig > config', run 'mv config ~/.kube/config' to use it for kubectl
locals {
kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
server: ${aws_eks_cluster.tf_eks_cluster.endpoint}
certificate-authority-data: ${aws_eks_cluster.tf_eks_cluster.certificate_authority.0.data}
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "${var.cluster_name}"
KUBECONFIG
}
########################################################################################
# Setup AutoScaling Group for worker nodes
# Setup data source to get amazon-provided AMI for EKS nodes
data "aws_ami" "eks-worker" {
filter {
name = "name"
values = ["amazon-eks-node-v*"]
}
most_recent = true
owners = ["602401143452"] # Amazon EKS AMI Account ID
}
# Is provided in demo code, no idea what it's used for though! TODO: DELETE
# data "aws_region" "current" {}
# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encode this
# information and write it into the AutoScaling Launch Configuration.
# More information: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
locals {
tf-eks-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.tf_eks_cluster.endpoint}' --b64-cluster-ca '${aws_eks_cluster.tf_eks_cluster.certificate_authority.0.data}' '${var.cluster_name}'
USERDATA
}
resource "aws_launch_configuration" "tf_eks_cluster" {
associate_public_ip_address = true
iam_instance_profile = aws_iam_instance_profile.tf-eks-node.name
image_id = data.aws_ami.eks-worker.id
instance_type = var.instance_type
name_prefix = "tf-eks-spot"
security_groups = [aws_security_group.tf-eks-node.id]
user_data_base64 = base64encode(local.tf-eks-node-userdata)
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "tf_eks_cluster" {
name = "tf-eks-cluster"
port = 31742
protocol = "HTTP"
vpc_id = var.vpc_id
target_type = "instance"
}
resource "aws_autoscaling_group" "tf_eks_cluster" {
desired_capacity = "2"
launch_configuration = aws_launch_configuration.tf_eks_cluster.id
max_size = "3"
min_size = 1
name = "tf-eks-cluster"
vpc_zone_identifier = var.app_subnet_ids
target_group_arns = [aws_lb_target_group.tf_eks_cluster.arn]
tag {
key = "Name"
value = "tf-eks-cluster"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
propagate_at_launch = true
}
}
resource "aws_security_group" "tf-eks-cluster" {
name = "terraform-eks-cluster"
description = "Cluster communication with worker nodes"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform-eks"
}
}
resource "aws_security_group" "tf-eks-node" {
name = "terraform-eks-node"
description = "Security group for all nodes in the cluster"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform-eks"
}
}
# Allow inbound traffic from your local workstation's external IP
# to the Kubernetes API server. Set var.accessing_computer_ip to your
# real public IP; services like icanhazip.com can help you find it.
resource "aws_security_group_rule" "tf-eks-cluster-ingress-workstation-https" {
cidr_blocks = [var.accessing_computer_ip]
description = "Allow workstation to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.tf-eks-cluster.id
to_port = 443
type = "ingress"
}
########################################################################################
# Setup worker node security group
resource "aws_security_group_rule" "tf-eks-node-ingress-self" {
description = "Allow node to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = aws_security_group.tf-eks-node.id
source_security_group_id = aws_security_group.tf-eks-node.id
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "tf-eks-node-ingress-cluster" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = aws_security_group.tf-eks-node.id
source_security_group_id = aws_security_group.tf-eks-cluster.id
to_port = 65535
type = "ingress"
}
# allow worker nodes to access EKS master
resource "aws_security_group_rule" "tf-eks-cluster-ingress-node-https" {
description = "Allow pods to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.tf-eks-node.id
source_security_group_id = aws_security_group.tf-eks-cluster.id
to_port = 443
type = "ingress"
}
resource "aws_security_group_rule" "tf-eks-node-ingress-master" {
description = "Allow cluster control to receive communication from the worker Kubelets"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.tf-eks-cluster.id
source_security_group_id = aws_security_group.tf-eks-node.id
to_port = 443
type = "ingress"
}
resource "aws_internet_gateway" "eks" {
vpc_id = aws_vpc.eks.id
tags = {
Name = "internet_gateway"
}
}
resource "aws_eip" "nat_gateway" {
count = var.subnet_count
vpc = true
}
resource "aws_nat_gateway" "eks" {
count = var.subnet_count
allocation_id = aws_eip.nat_gateway.*.id[count.index]
subnet_id = aws_subnet.gateway.*.id[count.index]
tags = {
Name = "nat_gateway"
}
depends_on = [aws_internet_gateway.eks]
}
resource "aws_route_table" "application" {
count = var.subnet_count
vpc_id = aws_vpc.eks.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.eks.*.id[count.index]
}
tags = {
Name = "eks_application"
}
}
resource "aws_route_table" "vpn" {
vpc_id = aws_vpc.eks.id
tags = {
Name = "eks_vpn"
}
}
resource "aws_route_table" "gateway" {
vpc_id = aws_vpc.eks.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.eks.id
}
tags = {
Name = "eks_gateway"
}
}
resource "aws_route_table_association" "application" {
count = var.subnet_count
subnet_id = aws_subnet.application.*.id[count.index]
route_table_id = aws_route_table.application.*.id[count.index]
}
resource "aws_route_table_association" "vpn" {
count = var.subnet_count
subnet_id = aws_subnet.vpn.*.id[count.index]
route_table_id = aws_route_table.vpn.id
}
resource "aws_route_table_association" "gateway" {
count = var.subnet_count
subnet_id = aws_subnet.gateway.*.id[count.index]
route_table_id = aws_route_table.gateway.id
}
data "aws_availability_zones" "available" {}
resource "aws_subnet" "gateway" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.1${count.index}.0/24"
vpc_id = aws_vpc.eks.id
map_public_ip_on_launch = true
tags = {
Name = "eks_gateway"
}
}
resource "aws_subnet" "application" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.2${count.index}.0/24"
vpc_id = aws_vpc.eks.id
map_public_ip_on_launch = true
tags = map(
"Name", "eks_application",
"kubernetes.io/cluster/${var.cluster_name}", "shared"
)
}
resource "aws_subnet" "vpn" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.3${count.index}.0/24"
vpc_id = aws_vpc.eks.id
tags = {
Name = "eks_vpn"
}
}
resource "aws_vpc" "eks" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = map(
"Name", "eks-vpc",
"kubernetes.io/cluster/${var.cluster_name}", "shared"
)
}
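An aside on the CIDR arithmetic in the subnets above: building the blocks by string interpolation ("10.0.1${count.index}.0/24") works for small counts, but it is fragile once count grows. Terraform's built-in cidrsubnet() function computes child blocks from the parent CIDR directly. A sketch, assuming Terraform 0.12+ (the locals names here are illustrative, not from the original config):

```hcl
# Sketch: cidrsubnet(prefix, newbits, netnum) carves child networks out of a
# parent block. Adding 8 new bits to a /16 yields /24 children; netnum picks
# which child. Offsets 10, 20, and 30 reproduce the ranges used above.
locals {
  vpc_cidr          = "10.0.0.0/16"
  gateway_cidrs     = [for i in range(var.subnet_count) : cidrsubnet(local.vpc_cidr, 8, 10 + i)]
  application_cidrs = [for i in range(var.subnet_count) : cidrsubnet(local.vpc_cidr, 8, 20 + i)]
  vpn_cidrs         = [for i in range(var.subnet_count) : cidrsubnet(local.vpc_cidr, 8, 30 + i)]
}
# e.g. cidrsubnet("10.0.0.0/16", 8, 10) evaluates to "10.0.10.0/24"
```

Each subnet resource can then use cidr_block = local.gateway_cidrs[count.index] (and so on), keeping all the ranges derived from the single VPC block.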
I had originally tried to create the Kubernetes deployment in a single massive Terraform manifest. The fix was to move the Kubernetes resources into a separate Terraform manifest and apply it after updating the ~/.kube/config file.
The DNS errors were due to that file not being current for the new cluster.
Additionally, I needed to ensure that endpoint_private_access = true is set in the aws_eks_cluster resource.
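For reference, here is a minimal sketch of what that looks like. The aws_eks_cluster resource itself is not shown in the question, so the role and variable names below are assumptions based on the rest of the config; after the cluster is (re)created, the kubeconfig can be refreshed with `aws eks update-kubeconfig --name <cluster-name>`.

```hcl
resource "aws_eks_cluster" "tf_eks_cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.tf_eks_cluster.arn # assumed IAM role name

  vpc_config {
    subnet_ids         = var.app_subnet_ids
    security_group_ids = [aws_security_group.tf-eks-cluster.id]

    # Worker nodes in the VPC reach the API server over the private
    # endpoint, so it must be enabled alongside the public one.
    endpoint_private_access = true
    endpoint_public_access  = true
  }
}
```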
I have been trying to spin up ECS using Terraform. About two days ago it was working as expected, but today when I run terraform apply I keep getting an error saying
"The requested configuration is currently not supported. Launching EC2 instance failed"
I have researched this issue a lot: I tried hardcoding the VPC tenancy to default, and I tried changing the region and the instance type, but nothing seems to fix it.
This is my Terraform config:
provider "aws" {
region = var.region
}
data "aws_availability_zones" "available" {}
# Define a vpc
resource "aws_vpc" "motivy_vpc" {
cidr_block = var.motivy_network_cidr
tags = {
Name = var.motivy_vpc
}
enable_dns_support = "true"
instance_tenancy = "default"
enable_dns_hostnames = "true"
}
# Internet gateway for the public subnet
resource "aws_internet_gateway" "motivy_ig" {
vpc_id = aws_vpc.motivy_vpc.id
tags = {
Name = "motivy_ig"
}
}
# Public subnet 1
resource "aws_subnet" "motivy_public_sn_01" {
vpc_id = aws_vpc.motivy_vpc.id
cidr_block = var.motivy_public_01_cidr
availability_zone = data.aws_availability_zones.available.names[0]
tags = {
Name = "motivy_public_sn_01"
}
}
# Public subnet 2
resource "aws_subnet" "motivy_public_sn_02" {
vpc_id = aws_vpc.motivy_vpc.id
cidr_block = var.motivy_public_02_cidr
availability_zone = data.aws_availability_zones.available.names[1]
tags = {
Name = "motivy_public_sn_02"
}
}
# Routing table for public subnet 1
resource "aws_route_table" "motivy_public_sn_rt_01" {
vpc_id = aws_vpc.motivy_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.motivy_ig.id
}
tags = {
Name = "motivy_public_sn_rt_01"
}
}
# Routing table for public subnet 2
resource "aws_route_table" "motivy_public_sn_rt_02" {
vpc_id = aws_vpc.motivy_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.motivy_ig.id
}
tags = {
Name = "motivy_public_sn_rt_02"
}
}
# Associate the routing table to public subnet 1
resource "aws_route_table_association" "motivy_public_sn_rt_01_assn" {
subnet_id = aws_subnet.motivy_public_sn_01.id
route_table_id = aws_route_table.motivy_public_sn_rt_01.id
}
# Associate the routing table to public subnet 2
resource "aws_route_table_association" "motivy_public_sn_rt_02_assn" {
subnet_id = aws_subnet.motivy_public_sn_02.id
route_table_id = aws_route_table.motivy_public_sn_rt_02.id
}
# ECS Instance Security group
resource "aws_security_group" "motivy_public_sg" {
name = "motivys_public_sg"
description = "Test public access security group"
vpc_id = aws_vpc.motivy_vpc.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 5000
to_port = 5000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
var.motivy_public_01_cidr,
var.motivy_public_02_cidr
]
}
egress {
# allow all traffic to private SN
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "motivy_public_sg"
}
}
data "aws_ecs_task_definition" "motivy_server" {
task_definition = aws_ecs_task_definition.motivy_server.family
}
resource "aws_ecs_task_definition" "motivy_server" {
family = "motivy_server"
container_definitions = file("task-definitions/service.json")
}
data "aws_ami" "latest_ecs" {
most_recent = true # get the latest version
filter {
name = "name"
values = ["amzn2-ami-ecs-*"] # ECS optimized image
}
owners = [
"amazon" # Only official images
]
}
resource "aws_launch_configuration" "ecs-launch-configuration" {
name = "ecs-launch-configuration"
image_id = data.aws_ami.latest_ecs.id
instance_type = "t2.micro"
iam_instance_profile = aws_iam_instance_profile.ecs-instance-profile.id
root_block_device {
volume_type = "standard"
volume_size = 100
delete_on_termination = true
}
enable_monitoring = true
lifecycle {
create_before_destroy = true
}
security_groups = [aws_security_group.motivy_public_sg.id]
associate_public_ip_address = "true"
key_name = var.ecs_key_pair_name
user_data = <<EOF
#!/bin/bash
echo ECS_CLUSTER=${var.ecs_cluster} >> /etc/ecs/ecs.config
EOF
}
resource "aws_appautoscaling_target" "ecs_motivy_server_target" {
max_capacity = 2
min_capacity = 1
resource_id = "service/${aws_ecs_cluster.motivy_ecs_cluster.name}/${aws_ecs_service.motivy_server_service.name}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
depends_on = [ aws_ecs_service.motivy_server_service ]
}
resource "aws_iam_role" "ecs-instance-role" {
name = "ecs-instance-role"
path = "/"
assume_role_policy = data.aws_iam_policy_document.ecs-instance-policy.json
}
data "aws_iam_policy_document" "ecs-instance-policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
role = aws_iam_role.ecs-instance-role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
resource "aws_iam_instance_profile" "ecs-instance-profile" {
name = "ecs-instance-profile"
path = "/"
role = aws_iam_role.ecs-instance-role.id
provisioner "local-exec" {
command = "sleep 10"
}
}
resource "aws_autoscaling_group" "motivy-server-autoscaling-group" {
name = "motivy-server-autoscaling-group"
termination_policies = [
"OldestInstance" # When a "scale down" event occurs, which instances to kill first?
]
default_cooldown = 30
health_check_grace_period = 30
max_size = var.max_instance_size
min_size = var.min_instance_size
desired_capacity = var.desired_capacity
# Use this launch configuration to define "how" the EC2 instances are to be launched
launch_configuration = aws_launch_configuration.ecs-launch-configuration.name
lifecycle {
create_before_destroy = true
}
# Refer to vpc.tf for more information
# You could use the private subnets here instead,
# if you want the EC2 instances to be hidden from the internet
vpc_zone_identifier = [aws_subnet.motivy_public_sn_01.id, aws_subnet.motivy_public_sn_02.id]
tags = [{
key = "Name",
value = var.ecs_cluster,
# Make sure EC2 instances are tagged with this tag as well
propagate_at_launch = true
}]
}
resource "aws_alb" "motivy_server_alb_load_balancer" {
name = "motivy-alb-load-balancer"
security_groups = [aws_security_group.motivy_public_sg.id]
subnets = [aws_subnet.motivy_public_sn_01.id, aws_subnet.motivy_public_sn_02.id]
tags = {
Name = "motivy_server_alb_load_balancer"
}
}
resource "aws_alb_target_group" "motivy_server_target_group" {
name = "motivy-server-target-group"
port = 5000
protocol = "HTTP"
vpc_id = aws_vpc.motivy_vpc.id
deregistration_delay = "10"
health_check {
healthy_threshold = "2"
unhealthy_threshold = "6"
interval = "30"
matcher = "200,301,302"
path = "/"
protocol = "HTTP"
timeout = "5"
}
stickiness {
type = "lb_cookie"
}
tags = {
Name = "motivy-server-target-group"
}
}
resource "aws_alb_listener" "alb-listener" {
load_balancer_arn = aws_alb.motivy_server_alb_load_balancer.arn
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = aws_alb_target_group.motivy_server_target_group.arn
type = "forward"
}
}
resource "aws_autoscaling_attachment" "asg_attachment_motivy_server" {
autoscaling_group_name = aws_autoscaling_group.motivy-server-autoscaling-group.id
alb_target_group_arn = aws_alb_target_group.motivy_server_target_group.arn
}
This is the exact error I get
Error: "motivy-server-autoscaling-group": Waiting up to 10m0s: Need at least 2 healthy instances in ASG, have 0. Most recent activity: {
ActivityId: "a775c531-9496-fdf9-5157-ab2448626293",
AutoScalingGroupName: "motivy-server-autoscaling-group",
Cause: "At 2020-04-05T22:10:28Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 2.",
Description: "Launching a new EC2 instance. Status Reason: The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.",
Details: "{\"Subnet ID\":\"subnet-05de5fc0e994d05fe\",\"Availability Zone\":\"us-east-1a\"}",
EndTime: 2020-04-05 22:10:29 +0000 UTC,
Progress: 100,
StartTime: 2020-04-05 22:10:29.439 +0000 UTC,
StatusCode: "Failed",
StatusMessage: "The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed."
}
I'm not sure why it worked two days ago, but recent Amazon ECS-optimized AMIs use gp2 as their root volume type. You should choose gp2 for root_block_device.volume_type:
resource "aws_launch_configuration" "ecs-launch-configuration" {
# ...
root_block_device {
volume_type = "gp2"
volume_size = 100
delete_on_termination = true
}
# ...
}
data "aws_ami" "latest_ecs" {
most_recent = true # get the latest version
filter {
name = "name"
values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"] # ECS optimized image
}
owners = [
"amazon" # Only official images
]
}
For me, it worked after switching to t3-generation instances instead of t2.
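Putting both suggestions together, a launch configuration along these lines should work. This is a sketch, not the full resource: the name, AMI data source, and t3.micro instance type follow the question's config, and the remaining arguments (IAM profile, security groups, user data) are unchanged from the original.

```hcl
resource "aws_launch_configuration" "ecs-launch-configuration" {
  name          = "ecs-launch-configuration"
  image_id      = data.aws_ami.latest_ecs.id
  instance_type = "t3.micro" # t3 generation instead of t2

  root_block_device {
    volume_type           = "gp2" # match the ECS-optimized AMI's root volume type
    volume_size           = 100
    delete_on_termination = true
  }

  lifecycle {
    create_before_destroy = true
  }
  # ... remaining arguments as in the original config ...
}
```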