Sorry for the long post, but I hope it provides good background.
I don't know whether this is a bug or my code is wrong. I want to create an ECS cluster with EC2 Spot instances using a launch template and an ASG. My code is as follows.
For the ECS service, cluster, and task definition:
resource "aws_ecs_cluster" "main" {
name = "test-ecs-cluster"
}
resource "aws_ecs_service" "ec2_service" {
for_each = data.aws_subnet_ids.all_subnets.ids
name = "myservice_${replace(timestamp(), ":", "-")}"
task_definition = aws_ecs_task_definition.task_definition.arn
cluster = aws_ecs_cluster.main.id
desired_count = 1
launch_type = "EC2"
health_check_grace_period_seconds = 10
load_balancer {
container_name = "test-container"
container_port = 80
target_group_arn = aws_lb_target_group.alb_ec2_ecs_tg.id
}
network_configuration {
security_groups = [aws_security_group.ecs_ec2.id]
subnets = [each.value]
assign_public_ip = "false"
}
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
}
resource "aws_ecs_task_definition" "task_definition" {
container_definitions = data.template_file.task_definition_template.rendered
family = "test-ec2-task-family"
execution_role_arn = aws_iam_role.ecs_task_exec_role_ec2_ecs.arn
task_role_arn = aws_iam_role.ecs_task_exec_role_ec2_ecs.arn
network_mode = "awsvpc"
memory = 1024
cpu = 1024
requires_compatibilities = ["EC2"]
lifecycle {
create_before_destroy = true
}
}
data "template_file" "task_definition_template" {
template = file("${path.module}/templates/task_definition.json.tpl")
vars = {
container_port = var.container_port
region = var.region
log_group = var.cloudwatch_log_group
}
}
Launch template:
resource "aws_launch_template" "template_for_spot" {
name = "test-spor-ecs-launch-template"
disable_api_termination = false
instance_type = "t3.small"
image_id = data.aws_ami.amazon_linux_2_ecs_optimized.id
key_name = "FrankfurtRegion"
user_data = data.template_file.user_data.rendered
vpc_security_group_ids = [aws_security_group.ecs_ec2.id]
monitoring {
enabled = var.enable_spot == "true" ? false : true
}
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 30
}
}
iam_instance_profile {
arn = aws_iam_instance_profile.ecs_instance_profile.arn
}
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = file("${path.module}/user_data.tpl")
vars = {
cluster_name = aws_ecs_cluster.main.name
}
}
ASG with scaling policy:
resource "aws_autoscaling_group" "ecs_spot_asg" {
name = "test-asg-for-ecs"
max_size = 4
min_size = 2
desired_capacity = 2
termination_policies = [
"OldestInstance"]
vpc_zone_identifier = data.aws_subnet_ids.all_subnets.ids
health_check_type = "ELB"
health_check_grace_period = 300
mixed_instances_policy {
instances_distribution {
on_demand_percentage_above_base_capacity = 0
spot_instance_pools = 2
spot_max_price = "0.03"
}
launch_template {
launch_template_specification {
launch_template_id = aws_launch_template.template_for_spot.id
version = "$Latest"
}
override {
instance_type = "t3.large"
}
override {
instance_type = "t3.medium"
}
override {
instance_type = "t3a.large"
}
override {
instance_type = "t3a.medium"
}
}
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg.name
name = "test-ecs-cluster-scaling-policy"
policy_type = "TargetTrackingScaling"
adjustment_type = "ChangeInCapacity"
target_tracking_configuration {
target_value = 70
customized_metric_specification {
metric_name = "ECS-cluster-metric"
namespace = "AWS/ECS"
statistic = "Average"
metric_dimension {
name = aws_ecs_cluster.main.name
value = aws_ecs_cluster.main.name
}
}
}
}
EDIT:
I'm getting:
Error: InvalidParameterException: Creation of service was not idempotent. "test-ec2-service-qaz"
on ecs.tf line 5, in resource "aws_ecs_service" "ec2_service":
5: resource "aws_ecs_service" "ec2_service" {
EDIT2:
Changed the ecs_service name to name = "myservice_${replace(timestamp(), ":", "-")}", but I am still getting the same error.
I read in other issues that it can be caused by using lifecycle with create_before_destroy in the ecs_service, but that is not declared in my code. Maybe it is related to something else; I can't say what.
Thanks to @Marko E and @karnauskas on GitHub: with name = "myservice_${each.value}" I was able to deploy three ECS services. After correcting the subnet handling I was able to deploy everything as required (a sketch of the adjusted service follows the subnet data sources below). Subnets:
data "aws_subnet_ids" "all_subnets" {
vpc_id = data.aws_vpc.default.id
}
data "aws_subnet" "subnets" {
for_each = data.aws_subnet_ids.all_subnets.ids
id = each.value
}
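As mentioned above, here is a sketch of the service resource after that change. It assumes only the name changes and the rest of the resource stays as originally posted:
resource "aws_ecs_service" "ec2_service" {
  for_each = data.aws_subnet_ids.all_subnets.ids
  # each.value is the subnet ID, which keeps the per-subnet service name unique and stable,
  # unlike the timestamp()-based name that changed on every apply.
  name                              = "myservice_${each.value}"
  task_definition                   = aws_ecs_task_definition.task_definition.arn
  cluster                           = aws_ecs_cluster.main.id
  desired_count                     = 1
  launch_type                       = "EC2"
  health_check_grace_period_seconds = 10

  load_balancer {
    container_name   = "test-container"
    container_port   = 80
    target_group_arn = aws_lb_target_group.alb_ec2_ecs_tg.id
  }

  network_configuration {
    security_groups  = [aws_security_group.ecs_ec2.id]
    subnets          = [each.value]
    assign_public_ip = false
  }

  ordered_placement_strategy {
    type  = "binpack"
    field = "cpu"
  }
}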
Related
I am trying to create an EKS cluster with a max pods limit of 110, creating the node group with aws_eks_node_group:
resource "aws_eks_node_group" "eks-node-group" {
cluster_name = var.cluster-name
node_group_name = var.node-group-name
node_role_arn = var.eks-nodes-role.arn
subnet_ids = var.subnet-ids
version = var.cluster-version
release_version = nonsensitive(data.aws_ssm_parameter.eks_ami_release_version.value)
capacity_type = "SPOT"
lifecycle {
create_before_destroy = true
}
scaling_config {
desired_size = var.scale-config.desired-size
max_size = var.scale-config.max-size
min_size = var.scale-config.min-size
}
instance_types = var.scale-config.instance-types
update_config {
max_unavailable = var.update-config.max-unavailable
}
depends_on = [var.depends-on]
launch_template {
id = aws_launch_template.node-group-launch-template.id
version = aws_launch_template.node-group-launch-template.latest_version
}
}
resource "aws_launch_template" "node-group-launch-template" {
name_prefix = "eks-node-group"
image_id = var.template-image-id
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = var.ebs_size
}
}
ebs_optimized = true
user_data = base64encode(data.template_file.test.rendered)
# user_data = filebase64("${path.module}/example.sh")
}
data "template_file" "test" {
template = <<EOF
/etc/eks/bootstrap.sh ${var.cluster-name} --use-max-pods false --kubelet-extra-args '--max-pods=110'
EOF
}
The launch template is created just to provide the bootstrap arguments. I have tried supplying the same arguments in the aws_eks_cluster resource as well:
module "eks__user_data" {
source = "terraform-aws-modules/eks/aws//modules/_user_data"
version = "18.30.3"
cluster_name = aws_eks_cluster.metashape-eks.name
bootstrap_extra_args = "--use-max-pods false --kubelet-extra-args '--max-pods=110'"
}
but I have been unable to achieve the desired effect so far.
I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html. The CNI driver is enabled (version 1.12) and all other configuration seems correct too.
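One detail that may matter, offered as a hedged sketch rather than a confirmed fix: when the launch template supplies its own image_id, the node group passes the user data through verbatim, so it generally has to be a complete script (shebang included) that calls bootstrap.sh itself. Under that assumption, the template above would look something like this:
data "template_file" "test" {
  # Assumption: image_id points at an EKS-optimized AMI, so /etc/eks/bootstrap.sh exists
  # and the user data is not merged with any EKS-generated bootstrap.
  template = <<-EOF
    #!/bin/bash
    /etc/eks/bootstrap.sh ${var.cluster-name} --use-max-pods false --kubelet-extra-args '--max-pods=110'
  EOF
}
Whether the flag takes effect can be checked on a node with kubectl describe node, which reports the pods capacity.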
I have this Terraform template that adds/removes instances based on the load on the running instances behind a target group (TG).
main.tf
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_launch_template" "web-asg" {
name = "web-asg"
capacity_reservation_specification {
capacity_reservation_preference = "open"
}
image_id = var.web_image_id
instance_initiated_shutdown_behavior = "terminate"
instance_type = "t2.micro"
key_name = "keyname"
update_default_version = true
monitoring {
enabled = true
}
network_interfaces {
security_groups = var.web_server_security_group_ids
associate_public_ip_address = true
}
lifecycle {
create_before_destroy = true
}
tag_specifications {
resource_type = "instance"
tags = {
Name = "web-asg-launch-template"
}
}
user_data = filebase64("external-files/instance_provisioner.sh")
}
resource "aws_autoscaling_group" "web-asg" {
name = "web-asg"
min_size = 1
max_size = 3
desired_capacity = 1
termination_policies = ["OldestInstance"]
launch_template {
name = aws_launch_template.web-asg.name
version = aws_launch_template.web-asg.latest_version
}
vpc_zone_identifier = var.subnet_ids
lifecycle {
create_before_destroy = true
ignore_changes = [load_balancers, target_group_arns]
}
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 80
instance_warmup = 10
}
}
}
resource "aws_autoscaling_attachment" "web-asg" {
autoscaling_group_name = aws_autoscaling_group.web-asg.id
lb_target_group_arn = var.target_group_arn
}
resource "aws_autoscaling_policy" "scale_down" {
name = "web-asg-scale-down"
autoscaling_group_name = aws_autoscaling_group.web-asg.name
adjustment_type = "ChangeInCapacity"
scaling_adjustment = -1
cooldown = 120
}
resource "aws_cloudwatch_metric_alarm" "scale_down" {
alarm_description = "Monitors CPU utilization for web-asg ASG"
alarm_actions = [aws_autoscaling_policy.scale_down.arn]
alarm_name = "web-asg-scale-down"
comparison_operator = "LessThanOrEqualToThreshold"
namespace = "AWS/EC2"
metric_name = "CPUUtilization"
threshold = "30"
evaluation_periods = "2"
period = "120"
statistic = "Average"
dimensions = {
AutoScalingGroupName = aws_autoscaling_group.web-asg.name
}
}
resource "aws_autoscaling_policy" "scale_up" {
name = "web-asg-scale-up"
autoscaling_group_name = aws_autoscaling_group.web-asg.name
adjustment_type = "ChangeInCapacity"
scaling_adjustment = 1
cooldown = 120
}
resource "aws_cloudwatch_metric_alarm" "scale_up" {
alarm_description = "Monitors CPU utilization for web-asg ASG"
alarm_actions = [aws_autoscaling_policy.scale_up.arn]
alarm_name = "web-asg-scale-up"
comparison_operator = "GreaterThanOrEqualToThreshold"
namespace = "AWS/EC2"
metric_name = "CPUUtilization"
threshold = "75"
evaluation_periods = "2"
period = "120"
statistic = "Average"
dimensions = {
AutoScalingGroupName = aws_autoscaling_group.web-asg.name
}
}
The problem is: say one instance is already serving behind the TG, I change the AMI ID in the Terraform template and then run terraform apply.
What happens is that the running instance gets removed from the TG, then a new instance gets created and added to the TG. The TG becomes empty and cannot serve any traffic for a good two minutes.
I expect AWS to first add the new instance behind the TG and only then remove the old one, so that no traffic is lost.
What am I missing here?
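For reference, a sketch of the ASG settings usually involved in launch-before-terminate behaviour during an instance refresh. This is an assumption to verify, not a confirmed fix: max_healthy_percentage needs a recent AWS provider, and ELB health checks make the group wait for the replacement to pass the target group checks before the old instance is retired.
resource "aws_autoscaling_group" "web-asg" {
  name                      = "web-asg"
  min_size                  = 1
  max_size                  = 3
  desired_capacity          = 1
  vpc_zone_identifier       = var.subnet_ids
  termination_policies      = ["OldestInstance"]
  # Consider an instance healthy only once the target group says so.
  health_check_type         = "ELB"
  health_check_grace_period = 120

  launch_template {
    name    = aws_launch_template.web-asg.name
    version = aws_launch_template.web-asg.latest_version
  }

  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 60
      min_healthy_percentage = 100 # never drop below the current capacity
      max_healthy_percentage = 200 # allow the replacement to launch first
    }
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [load_balancers, target_group_arns]
  }
}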
I am unable to register the EC2 instances into the ECS cluster. I have created the cluster and the service and registered the task in it, but the EC2 instances are not registered as container instances. I pass user data that should register the instances into the cluster, but they never register: the instances are provisioned, they just don't join the ECS cluster. I am using a module-wise structure and have attached the files which are needed, plus a screenshot at the end of the question.
Autoscaling:
resource "aws_launch_configuration" "ec2" {
image_id = var.image_id
instance_type = var.instance_type
name = "ec2-${terraform.workspace}"
user_data = <<EOF
#!/bin/bash
echo 'ECS_CLUSTER=${var.cluster_name.name}' >> /etc/ecs/ecs.config
echo 'ECS_DISABLE_PRIVILEGED=true' >> /etc/ecs/ecs.config
EOF
key_name = var.key_name
iam_instance_profile = var.instance_profile
security_groups = [aws_security_group.webserver.id]
}
resource "aws_autoscaling_group" "asg" {
vpc_zone_identifier = var.public_subnet
desired_capacity = 2
max_size = 2
min_size = 2
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.ec2.name
target_group_arns = [var.tg.arn]
}
resource "aws_security_group" "webserver" {
name = "webserver-${terraform.workspace}"
description = "Allow internet traffic"
vpc_id = var.vpc_id
ingress {
description = "incoming for ec2-instance"
from_port = 0
to_port = 0
protocol = -1
security_groups = [var.alb_sg]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "webserver-sg"
}
}
output "ec2_sg" {
value = aws_security_group.webserver.id
}
Cluster:
resource "aws_ecs_cluster" "cluster" {
name = "wordpress-${terraform.workspace}"
}
output "cluster" {
value = aws_ecs_cluster.cluster.id
}
output "cluster1" {
value = aws_ecs_cluster.cluster
}
Service:
resource "aws_ecs_service" "wordpress" {
name = "Wordpress-${terraform.workspace}"
cluster = var.cluster
task_definition = var.task.id
desired_count = 2
scheduling_strategy = "REPLICA"
load_balancer {
target_group_arn = var.tg.arn
container_name = "wordpress"
container_port = 80
}
deployment_controller {
type = "ECS"
}
}
Task:
data "template_file" "init" {
template = "${file("${path.module}/template/containerdef.json")}"
vars = {
rds_endpoint = "${var.rds_endpoint}"
name = "${var.name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_ecs_task_definition" "task" {
family = "wordpress"
container_definitions = "${data.template_file.init.rendered}"
network_mode = "bridge"
requires_compatibilities = ["EC2"]
memory = "1GB"
cpu = "1 vCPU"
task_role_arn = var.task_execution.arn
}
main.tf
data "aws_availability_zones" "azs" {}
data "aws_ssm_parameter" "name" {
name = "Dbname"
}
data "aws_ssm_parameter" "password" {
name = "db_password"
}
module "my_vpc" {
source = "./modules/vpc"
vpc_cidr = var.vpc_cidr
public_subnet = var.public_subnet
private_subnet = var.private_subnet
availability_zone = data.aws_availability_zones.azs.names
}
module "db" {
source = "./modules/rds"
ec2_sg = "${module.autoscaling.ec2_sg}"
allocated_storage = var.db_allocated_storage
storage_type = var.db_storage_type
engine = var.db_engine
engine_version = var.db_engine_version
instance_class = var.db_instance_class
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "alb" {
source = "./modules/alb"
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "task" {
source = "./modules/task"
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
rds_endpoint = "${module.db.rds_endpoint}"
task_execution = "${module.role.task_execution}"
}
module "autoscaling" {
source = "./modules/autoscaling"
vpc_id = "${module.my_vpc.vpc_id}"
#public_subnet = "${module.my_vpc.public_subnets_ids}"
tg = "${module.alb.tg}"
image_id = var.image_id
instance_type = var.instance_type
alb_sg = "${module.alb.alb_sg}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
instance_profile = "${module.role.instance_profile}"
key_name = var.key_name
cluster_name = "${module.cluster.cluster1}"
}
module "role" {
source = "./modules/Iam_role"
}
module "cluster" {
source = "./modules/Ecs-cluster"
}
module "service" {
source = "./modules/services"
cluster = "${module.cluster.cluster}"
tg = "${module.alb.tg}"
task = "${module.task.task}"
}
ec2-instance role:
resource "aws_iam_role" "container_instance" {
name = "container_instance-${terraform.workspace}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_instance_profile" "ec2_instance_role" {
name = "iam_instance_profile-${terraform.workspace}"
role = aws_iam_role.container_instance.name
}
resource "aws_iam_role_policy_attachment" "ec2_instance" {
role = aws_iam_role.container_instance.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
Screenshot: (not reproduced here)
Based on the chat discussion, the issue could be caused by an incorrect instance profile reference; the launch configuration should point at the profile's name:
iam_instance_profile = var.instance_profile.name
The important thing is that the two instances are now correctly registered with the cluster.
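To show that reference in context, a minimal sketch of the launch configuration from the question with only the profile line changed (everything else is copied from the question's module):
resource "aws_launch_configuration" "ec2" {
  image_id      = var.image_id
  instance_type = var.instance_type
  name          = "ec2-${terraform.workspace}"

  # Register the instance with the ECS cluster on boot.
  user_data = <<EOF
#!/bin/bash
echo 'ECS_CLUSTER=${var.cluster_name.name}' >> /etc/ecs/ecs.config
echo 'ECS_DISABLE_PRIVILEGED=true' >> /etc/ecs/ecs.config
EOF

  key_name = var.key_name
  # Pass the instance profile's name (a string), not the whole profile object.
  iam_instance_profile = var.instance_profile.name
  security_groups      = [aws_security_group.webserver.id]
}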
Terraform newbie here. I have this code for an ECS scheduled task. Whenever I change it and apply the change, the first version of the task definition gets set in the ECS scheduled task, so I tried adding a lifecycle block to it.
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
}
Tried:
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
lifecycle {
ignore_changes = [task_definition_arn]
}
}
}
and
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
lifecycle {
ignore_changes = [ecs_target.task_definition_arn]
}
}
How do you ignore a nested field via lifecycle?
Found the solution. This works
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
lifecycle {
ignore_changes = [ecs_target.0.task_definition_arn]
}
}
The syntax looks unusual to me, but this is how it is :).
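As a side note (an assumption about newer Terraform versions, not something stated in the thread), the bracket index form refers to the same nested attribute and is also accepted:
lifecycle {
  # Equivalent to ecs_target.0.task_definition_arn: ignore drift on the first ecs_target block.
  ignore_changes = [ecs_target[0].task_definition_arn]
}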
I deploy ECS using Terraform.
When I run terraform apply everything is okay, but when I browse to the ECS service's Events tab I see this error:
service nginx-ecs-service was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
How do I fix that? What is missing in my Terraform file?
locals {
name = "myapp"
environment = "prod"
# This is the convention we use to know what belongs to each other
ec2_resources_name = "${local.name}-${local.environment}"
}
resource "aws_iam_server_certificate" "lb_cert" {
name = "lb_cert"
certificate_body = "${file("./www.example.com/cert.pem")}"
private_key = "${file("./www.example.com/privkey.pem")}"
certificate_chain = "${file("./www.example.com/chain.pem")}"
}
resource "aws_security_group" "bastion-sg" {
name = "bastion-security-group"
vpc_id = "${module.vpc.vpc_id}"
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = -1
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "bastion" {
depends_on = ["aws_security_group.bastion-sg"]
ami = "ami-0d5d9d301c853a04a"
key_name = "myapp"
instance_type = "t2.micro"
vpc_security_group_ids = ["${aws_security_group.bastion-sg.id}"]
associate_public_ip_address = true
subnet_id = "${element(module.vpc.public_subnets, 0)}"
tags = {
Name = "bastion"
}
}
# VPC Definition
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 2.0"
name = "my-vpc"
cidr = "10.1.0.0/16"
azs = ["us-east-2a", "us-east-2b", "us-east-2c"]
private_subnets = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
public_subnets = ["10.1.101.0/24", "10.1.102.0/24", "10.1.103.0/24"]
single_nat_gateway = true
enable_nat_gateway = true
enable_vpn_gateway = false
enable_dns_hostnames = true
public_subnet_tags = {
Name = "public"
}
private_subnet_tags = {
Name = "private"
}
public_route_table_tags = {
Name = "public-RT"
}
private_route_table_tags = {
Name = "private-RT"
}
tags = {
Environment = local.environment
Name = local.name
}
}
# ------------
resource "aws_ecs_cluster" "public-ecs-cluster" {
name = "myapp-${local.environment}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_security_group" "ecs-vpc-secgroup" {
name = "ecs-vpc-secgroup"
description = "ecs-vpc-secgroup"
# vpc_id = "vpc-b8daecde"
vpc_id = "${module.vpc.vpc_id}"
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "ecs-security-group"
}
}
resource "aws_lb" "nginx-ecs-alb" {
name = "nginx-ecs-alb"
internal = false
load_balancer_type = "application"
subnets = module.vpc.public_subnets
security_groups = ["${aws_security_group.ecs-vpc-secgroup.id}"]
}
resource "aws_alb_target_group" "nginx-ecs-tg" {
name = "nginx-ecs-tg"
port = "80"
protocol = "HTTP"
vpc_id = "${module.vpc.vpc_id}"
health_check {
healthy_threshold = 3
unhealthy_threshold = 10
timeout = 5
interval = 10
path = "/"
}
depends_on = ["aws_lb.nginx-ecs-alb"]
}
resource "aws_alb_listener" "alb_listener" {
load_balancer_arn = "${aws_lb.nginx-ecs-alb.arn}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.nginx-ecs-tg.arn}"
type = "forward"
}
}
resource "aws_ecs_task_definition" "nginx-image" {
family = "nginx-server"
network_mode = "bridge"
container_definitions = <<DEFINITION
[
{
"name": "nginx-web",
"image": "nginx:latest",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "tcp"
}
],
"memory": 128,
"cpu": 10
}
]
DEFINITION
}
data "aws_ecs_task_definition" "nginx-image" {
depends_on = ["aws_ecs_task_definition.nginx-image"]
task_definition = "${aws_ecs_task_definition.nginx-image.family}"
}
resource "aws_launch_configuration" "ecs-launch-configuration" {
name = "ecs-launch-configuration"
image_id = "ami-0d5d9d301c853a04a"
instance_type = "t2.micro"
iam_instance_profile = "ecsInstanceRole"
root_block_device {
volume_type = "standard"
volume_size = 35
delete_on_termination = true
}
security_groups = ["${aws_security_group.ecs-vpc-secgroup.id}"]
associate_public_ip_address = "true"
key_name = "myapp"
user_data = <<-EOF
#!/bin/bash
echo ECS_CLUSTER=${aws_ecs_cluster.public-ecs-cluster.name} >> /etc/ecs/ecs.config
EOF
}
resource "aws_autoscaling_group" "ecs-autoscaling-group" {
name = "ecs-autoscaling-group"
max_size = "1"
min_size = "1"
desired_capacity = "1"
# vpc_zone_identifier = ["subnet-5c66053a", "subnet-9cd1a2d4"]
vpc_zone_identifier = module.vpc.public_subnets
launch_configuration = "${aws_launch_configuration.ecs-launch-configuration.name}"
health_check_type = "EC2"
default_cooldown = "300"
lifecycle {
create_before_destroy = true
}
tag {
key = "Name"
value = "wizardet972_ecs-instance"
propagate_at_launch = true
}
tag {
key = "Owner"
value = "Wizardnet972"
propagate_at_launch = true
}
}
resource "aws_autoscaling_policy" "ecs-scale" {
name = "ecs-scale-policy"
policy_type = "TargetTrackingScaling"
autoscaling_group_name = "${aws_autoscaling_group.ecs-autoscaling-group.name}"
estimated_instance_warmup = 60
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = "70"
}
}
resource "aws_ecs_service" "nginx-ecs-service" {
name = "nginx-ecs-service"
cluster = "${aws_ecs_cluster.public-ecs-cluster.id}"
task_definition = "${aws_ecs_task_definition.nginx-image.family}:${max("${aws_ecs_task_definition.nginx-image.revision}", "${aws_ecs_task_definition.nginx-image.revision}")}"
launch_type = "EC2"
desired_count = 1
load_balancer {
target_group_arn = "${aws_alb_target_group.nginx-ecs-tg.arn}"
container_name = "nginx-web"
container_port = 80
}
depends_on = ["aws_ecs_task_definition.nginx-image"]
}
Update:
I tried to create the Terraform stack you shared with me and was able to reproduce the issue.
The issue was that the EC2 instance was unhealthy and the Auto Scaling group kept terminating the instance and launching a new one.
The solution was to remove the following configuration; I think the volume_type of "standard" was causing trouble:
root_block_device {
volume_type = "standard"
volume_size = 100
delete_on_termination = true
}
Check that you have done the basic steps to prepare the EC2 instance: use an ECS-optimized AMI to create the instance and attach the AmazonEC2ContainerServiceforEC2Role policy to its IAM role.
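A minimal sketch of those two steps, assuming the role behind the ecsInstanceRole instance profile referenced in the launch configuration (the resource labels here are placeholders):
# Latest ECS-optimized Amazon Linux 2 AMI, published by AWS as a public SSM parameter.
data "aws_ssm_parameter" "ecs_optimized_ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

# Attach the managed container-instance policy to the role used by the instance profile.
resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = "ecsInstanceRole"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
The launch configuration's image_id can then reference data.aws_ssm_parameter.ecs_optimized_ami.value instead of a hard-coded AMI ID.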
Reference:
AWS ECS Error when running task: No Container Instances were found in your cluster
setup instance role