Terraform newbie here. I have code for an ECS scheduled task. Whenever I change it and apply the change, the first revision of the task definition gets set on the ECS target again. So I tried adding a lifecycle block to it.
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
}
Tried:
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
lifecycle {
ignore_changes = [task_definition_arn]
}
}
}
and
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
lifecycle {
ignore_changes = [ecs_target.task_definition_arn]
}
}
How do you ignore a nested field via lifecycle?
Found the solution. This works:
resource "aws_cloudwatch_event_target" "sqs" {
rule = aws_cloudwatch_event_rule.sqs.name
target_id = local.namespace
arn = aws_ecs_cluster.app.arn
role_arn = aws_iam_role.ecsRole.arn
input = "{}"
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.sqs.arn
launch_type = "FARGATE"
platform_version = "LATEST"
network_configuration {
security_groups = [aws_security_group.nsg_task.id]
subnets = split(",", var.private_subnets)
}
}
lifecycle {
ignore_changes = [ecs_target.0.task_definition_arn]
}
}
The syntax was unusual to me, but this is how it is :).
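For what it's worth, newer Terraform versions also accept bracket index syntax in ignore_changes; this is a sketch assuming an otherwise identical resource, equivalent to the dotted form above:

resource "aws_cloudwatch_event_target" "sqs" {
  # ... same arguments and ecs_target block as above ...

  lifecycle {
    # index into the first (and only) ecs_target block, then its attribute
    ignore_changes = [ecs_target[0].task_definition_arn]
  }
}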
Related
I use CircleCI with the aws-ecr and aws-ecs orbs, which upload the new Docker image and are supposed to update the ECS service.
My issue is that the deployment uses the first, not the latest, task definition, and I cannot seem to find a way to make it deploy with the latest task definition.
I use Terraform to manage my infrastructure, which looks like this:
# Get subnets
data "aws_subnets" "subnets" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}

# Get security groups
data "aws_security_groups" "security_groups" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}

# Create load balancer
resource "aws_lb" "load_balancer" {
  name               = "${var.app_name}-load-balancer-${var.env}"
  subnets            = data.aws_subnets.subnets.ids
  security_groups    = data.aws_security_groups.security_groups.ids
  load_balancer_type = "application"

  tags = {
    Environment = var.env
  }
}

resource "aws_lb_target_group" "blue_target" {
  name        = "${var.app_name}-blue-target-${var.env}"
  protocol    = "HTTPS"
  port        = var.port
  target_type = "ip"
  vpc_id      = var.vpc_id

  health_check {
    healthy_threshold   = 5
    interval            = 30
    matcher             = 200
    path                = var.health_check_path
    protocol            = "HTTPS"
    timeout             = 10
    unhealthy_threshold = 2
  }
}

resource "aws_lb_target_group" "green_target" {
  name        = "${var.app_name}-green-target-${var.env}"
  protocol    = "HTTPS"
  port        = var.port
  target_type = "ip"
  vpc_id      = var.vpc_id

  health_check {
    healthy_threshold   = 5
    interval            = 30
    matcher             = 200
    path                = var.health_check_path
    protocol            = "HTTPS"
    timeout             = 10
    unhealthy_threshold = 2
  }
}
data "aws_acm_certificate" "cert" {
domain = var.domain
statuses = ["ISSUED"]
most_recent = true
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = aws_lb.load_balancer.arn
port = var.port
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = data.aws_acm_certificate.cert.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.blue_target.arn
}
}
# ECS
resource "aws_ecs_cluster" "cluster" {
  name = "${var.app_name}-cluster-${var.env}"
}

data "aws_ecr_repository" "ecr_repository" {
  name = var.image_repo_name
}

resource "aws_iam_role" "ecs_task_role" {
  name = "EcsTaskRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_policy" "secrets_manager_read_policy" {
  name        = "SecretsManagerRead"
  description = "Read only access to secrets manager"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Sid    = "",
        Effect = "Allow",
        Action = [
          "secretsmanager:GetRandomPassword",
          "secretsmanager:GetResourcePolicy",
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret",
          "secretsmanager:ListSecretVersionIds",
          "secretsmanager:ListSecrets"
        ],
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_task_role" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = aws_iam_policy.secrets_manager_read_policy.arn
}

resource "aws_iam_role_policy_attachment" "attach_s3_read_to_task_role" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

resource "aws_iam_role_policy_attachment" "attach_ses_to_task_role" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}

resource "aws_iam_role" "ecs_exec_role" {
  name = "EcsExecRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_ecs_task_exec_to_exec_role" {
  role       = aws_iam_role.ecs_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_role_policy_attachment" "attach_fault_injection_simulator_to_exec_role" {
  role       = aws_iam_role.ecs_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSFaultInjectionSimulatorECSAccess"
}
resource "aws_ecs_task_definition" "task_def" {
family = "${var.app_name}-task-def-${var.env}"
network_mode = "awsvpc"
task_role_arn = aws_iam_role.ecs_task_role.arn
execution_role_arn = aws_iam_role.ecs_exec_role.arn
requires_compatibilities = ["FARGATE"]
cpu = "256"
memory = "512"
runtime_platform {
cpu_architecture = "X86_64"
operating_system_family = "LINUX"
}
container_definitions = jsonencode([
{
name = "${var.app_name}-container-${var.env}"
image = "${data.aws_ecr_repository.ecr_repository.repository_url}:latest"
cpu = 0
essential = true
portMappings = [
{
containerPort = var.port
hostPort = var.port
},
]
environment = [
{
name = "PORT",
value = tostring("${var.port}")
},
{
name = "NODE_ENV",
value = var.env
}
]
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-create-group" = "true"
"awslogs-group" = "${var.app_name}-task-def-${var.env}"
"awslogs-region" = "${var.region}"
"awslogs-stream-prefix" = "ecs"
}
}
},
])
}
resource "aws_ecs_service" "service" {
lifecycle {
ignore_changes = [
task_definition,
load_balancer,
]
}
cluster = aws_ecs_cluster.cluster.arn
name = "${var.app_name}-service-${var.env}"
task_definition = aws_ecs_task_definition.task_def.arn
load_balancer {
target_group_arn = aws_lb_target_group.blue_target.arn
container_name = "${var.app_name}-container-${var.env}"
container_port = var.port
}
capacity_provider_strategy {
capacity_provider = "FARGATE"
base = 0
weight = 1
}
scheduling_strategy = "REPLICA"
deployment_controller {
type = "CODE_DEPLOY"
}
platform_version = "1.4.0"
network_configuration {
assign_public_ip = true
subnets = data.aws_subnets.subnets.ids
security_groups = data.aws_security_groups.security_groups.ids
}
desired_count = 1
}
# DEPLOYMENT
resource "aws_codedeploy_app" "codedeploy_app" {
  name             = "${var.app_name}-application-${var.env}"
  compute_platform = "ECS"
}

resource "aws_iam_role" "codedeploy_role" {
  name = "CodedeployRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "codedeploy.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_codedeploy_role" {
  role       = aws_iam_role.codedeploy_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
}

resource "aws_iam_role_policy_attachment" "attach_codedeploy_role_for_ecs" {
  role       = aws_iam_role.codedeploy_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS"
}

resource "aws_codedeploy_deployment_group" "deployment_group" {
  app_name               = aws_codedeploy_app.codedeploy_app.name
  deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"

  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE"]
  }

  blue_green_deployment_config {
    deployment_ready_option {
      action_on_timeout    = "CONTINUE_DEPLOYMENT"
      wait_time_in_minutes = 0
    }

    terminate_blue_instances_on_deployment_success {
      action                           = "TERMINATE"
      termination_wait_time_in_minutes = 5
    }
  }

  deployment_group_name = "${var.app_name}-deployment-group-${var.env}"

  deployment_style {
    deployment_option = "WITH_TRAFFIC_CONTROL"
    deployment_type   = "BLUE_GREEN"
  }

  load_balancer_info {
    target_group_pair_info {
      prod_traffic_route {
        listener_arns = [aws_lb_listener.listener.arn]
      }

      target_group {
        name = aws_lb_target_group.blue_target.name
      }

      target_group {
        name = aws_lb_target_group.green_target.name
      }
    }
  }

  service_role_arn = aws_iam_role.codedeploy_role.arn

  ecs_service {
    service_name = aws_ecs_service.service.name
    cluster_name = aws_ecs_cluster.cluster.name
  }
}
resource "aws_appautoscaling_target" "scalable_target" {
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
min_capacity = 1
max_capacity = 5
}
resource "aws_appautoscaling_policy" "cpu_scaling_policy" {
name = "${var.app_name}-cpu-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageCPUUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
resource "aws_appautoscaling_policy" "memory_scaling_policy" {
name = "${var.app_name}-memory-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageMemoryUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
And my circleci config (the relevant parts):
deploy:
  executor: aws-ecr/default
  working_directory: ~/code
  parameters:
    env:
      type: string
  steps:
    - attach_workspace:
        at: ~/
    - aws-ecr/build-and-push-image:
        # attach-workspace: true
        # create-repo: true
        repo: "test-<< parameters.env >>"
        tag: "latest,${CIRCLE_BUILD_NUM}"
    - aws-ecs/update-service: # orb built-in job
        family: "test-task-def-<< parameters.env >>"
        service-name: "test-service-<< parameters.env >>"
        cluster: "test-cluster-<< parameters.env >>"
        container-image-name-updates: "container=test-container-<< parameters.env >>,tag=${CIRCLE_BUILD_NUM}"
        deployment-controller: "CODE_DEPLOY"
        codedeploy-application-name: "test-application-<< parameters.env >>"
        codedeploy-deployment-group-name: "test-deployment-group-<< parameters.env >>"
        codedeploy-load-balanced-container-name: "test-container-<< parameters.env >>"
        codedeploy-load-balanced-container-port: 3000
Can anyone spot why my service isn't using the latest task definition, and why the deployments are using the old task definition for the service?
Some additional info:
The deployment created in CircleCI uses the latest task definition, and it's visible in the deployment revision as well (appspec).
I made a mistake when first creating the task definition: I used an ECR image's ID (a SHA digest) as container_definitions.image, so it cannot pull images.
The issue is that the ECS service is using that task definition, constantly trying to create tasks and failing during the CodeDeploy deployment, which is somewhat unexpected, as it should use the new task definition to create new tasks.
So I have a service with a wrongly configured task definition, and when I try to fix it, the deployment cannot go through with the fix, because it keeps using the wrong task definition to create new tasks, and the deployment is stuck.
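An editorial note, not from the original thread: because the service above declares ignore_changes = [task_definition], Terraform never re-points the service at a newer revision on its own. A commonly cited workaround, sketched here under the assumption that the family name stays fixed, is to resolve the newest ACTIVE revision through a data source:

# Look up the latest registered revision of the family in ECS
data "aws_ecs_task_definition" "latest" {
  task_definition = aws_ecs_task_definition.task_def.family
}

resource "aws_ecs_service" "service" {
  # ... other arguments as above ...
  # use whichever revision is newer: the one Terraform manages or the one
  # the CI pipeline registered outside of Terraform
  task_definition = "${aws_ecs_task_definition.task_def.family}:${max(aws_ecs_task_definition.task_def.revision, data.aws_ecs_task_definition.latest.revision)}"
}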
I'm new to Terraform and I'm trying to create an EKS cluster with Terraform 0.14.2. I'm trying to create it in an existing VPC by passing the subnet IDs and VPC ID, but I don't know how to pass the private and public subnets to the EKS cluster resource:
resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
version = var.eks_version
vpc_config {
subnet_ids = var.priv_subnet_id
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
}
And my vars are:
priv_subnet_id = {
  pre = ["subnet-XXX", "subnet-XXX", "subnet-XXX"]
}
Could you guide me on the best way to do it?
Thanks!
It's really strange that you would have a list of subnet IDs as a variable...
Most of the time the subnets are resources we create, like this:
resource "aws_subnet" "example1" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
}
resource "aws_subnet" "example2" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
}
resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
version = var.eks_version
vpc_config {
subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
}
If they are existing resources, then you want to get them using a data source:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet_ids
data "aws_subnet_ids" "example" {
vpc_id = var.vpc_id
}
resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
version = var.eks_version
vpc_config {
subnet_ids = data.aws_subnet_ids.example.ids
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
}
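Side note: on AWS provider v4 and later the aws_subnet_ids data source is deprecated (and removed in v5) in favor of aws_subnets; a roughly equivalent sketch:

data "aws_subnets" "example" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}

# then reference data.aws_subnets.example.ids for subnet_ids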
In the very strange case that you absolutely have to pass them as a variable, it could be something like the example below. (Note that in your example priv_subnet_id is a map of lists, so you would have to reference var.priv_subnet_id["pre"]; declaring it as a plain list, as shown here, is simpler.)
variable "subnet_ids" {
type = list(string)
default = ["subnet-X", "subnet-Y", "subnet-Z"]
}
resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
version = var.eks_version
vpc_config {
subnet_ids = var.subnet_ids
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
}
I am unable to register the EC2 instances into the ECS cluster. I have created the cluster and service and registered the task in it, but the EC2 instances are not registered. I have given user data that should register the instances into the cluster, but they do not register. The EC2 instances are provisioning; they are just not registering with the ECS cluster. I am implementing a module-wise structure and am attaching the needed files, with a screenshot at the end of the question.
Autoscaling:
resource "aws_launch_configuration" "ec2" {
image_id = var.image_id
instance_type = var.instance_type
name = "ec2-${terraform.workspace}"
user_data = <<EOF
#!/bin/bash
echo 'ECS_CLUSTER=${var.cluster_name.name}' >> /etc/ecs/ecs.config
echo 'ECS_DISABLE_PRIVILEGED=true' >> /etc/ecs/ecs.config
EOF
key_name = var.key_name
iam_instance_profile = var.instance_profile
security_groups = [aws_security_group.webserver.id]
}
resource "aws_autoscaling_group" "asg" {
vpc_zone_identifier = var.public_subnet
desired_capacity = 2
max_size = 2
min_size = 2
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.ec2.name
target_group_arns = [var.tg.arn]
}
resource "aws_security_group" "webserver" {
name = "webserver-${terraform.workspace}"
description = "Allow internet traffic"
vpc_id = var.vpc_id
ingress {
description = "incoming for ec2-instance"
from_port = 0
to_port = 0
protocol = -1
security_groups = [var.alb_sg]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "webserver-sg"
}
}
output "ec2_sg" {
value = aws_security_group.webserver.id
}
Cluster:
resource "aws_ecs_cluster" "cluster" {
name = "wordpress-${terraform.workspace}"
}
output "cluster" {
value = aws_ecs_cluster.cluster.id
}
output "cluster1" {
value = aws_ecs_cluster.cluster
}
Service:
resource "aws_ecs_service" "wordpress" {
name = "Wordpress-${terraform.workspace}"
cluster = var.cluster
task_definition = var.task.id
desired_count = 2
scheduling_strategy = "REPLICA"
load_balancer {
target_group_arn = var.tg.arn
container_name = "wordpress"
container_port = 80
}
deployment_controller {
type = "ECS"
}
}
Task:
data "template_file" "init" {
template = "${file("${path.module}/template/containerdef.json")}"
vars = {
rds_endpoint = "${var.rds_endpoint}"
name = "${var.name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_ecs_task_definition" "task" {
family = "wordpress"
container_definitions = "${data.template_file.init.rendered}"
network_mode = "bridge"
requires_compatibilities = ["EC2"]
memory = "1GB"
cpu = "1 vCPU"
task_role_arn = var.task_execution.arn
}
main.tf
data "aws_availability_zones" "azs" {}
data "aws_ssm_parameter" "name" {
name = "Dbname"
}
data "aws_ssm_parameter" "password" {
name = "db_password"
}
module "my_vpc" {
source = "./modules/vpc"
vpc_cidr = var.vpc_cidr
public_subnet = var.public_subnet
private_subnet = var.private_subnet
availability_zone = data.aws_availability_zones.azs.names
}
module "db" {
source = "./modules/rds"
ec2_sg = "${module.autoscaling.ec2_sg}"
allocated_storage = var.db_allocated_storage
storage_type = var.db_storage_type
engine = var.db_engine
engine_version = var.db_engine_version
instance_class = var.db_instance_class
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "alb" {
source = "./modules/alb"
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "task" {
source = "./modules/task"
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
rds_endpoint = "${module.db.rds_endpoint}"
task_execution = "${module.role.task_execution}"
}
module "autoscaling" {
source = "./modules/autoscaling"
vpc_id = "${module.my_vpc.vpc_id}"
#public_subnet = "${module.my_vpc.public_subnets_ids}"
tg = "${module.alb.tg}"
image_id = var.image_id
instance_type = var.instance_type
alb_sg = "${module.alb.alb_sg}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
instance_profile = "${module.role.instance_profile}"
key_name = var.key_name
cluster_name = "${module.cluster.cluster1}"
}
module "role" {
source = "./modules/Iam_role"
}
module "cluster" {
source = "./modules/Ecs-cluster"
}
module "service" {
source = "./modules/services"
cluster = "${module.cluster.cluster}"
tg = "${module.alb.tg}"
task = "${module.task.task}"
}
ec2-instance role:
resource "aws_iam_role" "container_instance" {
name = "container_instance-${terraform.workspace}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_instance_profile" "ec2_instance_role" {
name = "iam_instance_profile-${terraform.workspace}"
role = aws_iam_role.container_instance.name
}
resource "aws_iam_role_policy_attachment" "ec2_instance" {
role = aws_iam_role.container_instance.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
Screenshot: (image omitted)
Based on the chat discussion, the issue was caused by using an incorrect instance profile reference. It should be:
iam_instance_profile = var.instance_profile.name
The important thing is that the two instances are now correctly registered with the cluster.
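In other words, a minimal sketch of the fix, assuming the role module outputs the whole aws_iam_instance_profile object as instance_profile:

resource "aws_launch_configuration" "ec2" {
  # ... other arguments unchanged ...

  # pass the instance profile's *name*, not the whole object
  iam_instance_profile = var.instance_profile.name
}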
Sorry for the long post, but I hope it provides good background.
I don't know whether this is a bug or my code is wrong. I want to create an ECS cluster with EC2 Spot instances with the help of a launch template and an ASG. My code is as follows:
For the ECS service, cluster, and task definition:
resource "aws_ecs_cluster" "main" {
name = "test-ecs-cluster"
}
resource "aws_ecs_service" "ec2_service" {
for_each = data.aws_subnet_ids.all_subnets.ids
name = "myservice_${replace(timestamp(), ":", "-")}"
task_definition = aws_ecs_task_definition.task_definition.arn
cluster = aws_ecs_cluster.main.id
desired_count = 1
launch_type = "EC2"
health_check_grace_period_seconds = 10
load_balancer {
container_name = "test-container"
container_port = 80
target_group_arn = aws_lb_target_group.alb_ec2_ecs_tg.id
}
network_configuration {
security_groups = [aws_security_group.ecs_ec2.id]
subnets = [each.value]
assign_public_ip = "false"
}
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
}
resource "aws_ecs_task_definition" "task_definition" {
container_definitions = data.template_file.task_definition_template.rendered
family = "test-ec2-task-family"
execution_role_arn = aws_iam_role.ecs_task_exec_role_ec2_ecs.arn
task_role_arn = aws_iam_role.ecs_task_exec_role_ec2_ecs.arn
network_mode = "awsvpc"
memory = 1024
cpu = 1024
requires_compatibilities = ["EC2"]
lifecycle {
create_before_destroy = true
}
}
data "template_file" "task_definition_template" {
template = file("${path.module}/templates/task_definition.json.tpl")
vars = {
container_port = var.container_port
region = var.region
log_group = var.cloudwatch_log_group
}
}
Launch template:
resource "aws_launch_template" "template_for_spot" {
name = "test-spor-ecs-launch-template"
disable_api_termination = false
instance_type = "t3.small"
image_id = data.aws_ami.amazon_linux_2_ecs_optimized.id
key_name = "FrankfurtRegion"
user_data = data.template_file.user_data.rendered
vpc_security_group_ids = [aws_security_group.ecs_ec2.id]
monitoring {
enabled = var.enable_spot == "true" ? false : true
}
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 30
}
}
iam_instance_profile {
arn = aws_iam_instance_profile.ecs_instance_profile.arn
}
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = file("${path.module}/user_data.tpl")
vars = {
cluster_name = aws_ecs_cluster.main.name
}
}
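The user_data.tpl file itself isn't shown in the post; presumably it looks something like this minimal sketch (an assumption, mirroring the user data from the previous question, with cluster_name supplied by the vars map above):

#!/bin/bash
echo "ECS_CLUSTER=${cluster_name}" >> /etc/ecs/ecs.config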
ASG with scaling policy:
resource "aws_autoscaling_group" "ecs_spot_asg" {
name = "test-asg-for-ecs"
max_size = 4
min_size = 2
desired_capacity = 2
termination_policies = [
"OldestInstance"]
vpc_zone_identifier = data.aws_subnet_ids.all_subnets.ids
health_check_type = "ELB"
health_check_grace_period = 300
mixed_instances_policy {
instances_distribution {
on_demand_percentage_above_base_capacity = 0
spot_instance_pools = 2
spot_max_price = "0.03"
}
launch_template {
launch_template_specification {
launch_template_id = aws_launch_template.template_for_spot.id
version = "$Latest"
}
override {
instance_type = "t3.large"
}
override {
instance_type = "t3.medium"
}
override {
instance_type = "t3a.large"
}
override {
instance_type = "t3a.medium"
}
}
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg.name
name = "test-ecs-cluster-scaling-policy"
policy_type = "TargetTrackingScaling"
adjustment_type = "ChangeInCapacity"
target_tracking_configuration {
target_value = 70
customized_metric_specification {
metric_name = "ECS-cluster-metric"
namespace = "AWS/ECS"
statistic = "Average"
metric_dimension {
name = aws_ecs_cluster.main.name
value = aws_ecs_cluster.main.name
}
}
}
}
EDIT: I'm getting:

Error: InvalidParameterException: Creation of service was not idempotent. "test-ec2-service-qaz"

  on ecs.tf line 5, in resource "aws_ecs_service" "ec2_service":
   5: resource "aws_ecs_service" "ec2_service" {

EDIT 2: I changed the ecs_service name to name = "myservice_${replace(timestamp(), ":", "-")}" and am still getting the same error.
I read in other issues that this is caused by using lifecycle with the create_before_destroy statement in ecs_service, but that is not declared in my code. Maybe it is related to something else; I can't say what.
Thanks to #Marko E and #karnauskas on GitHub; with name = "myservice_${each.value}" I was able to deploy three ECS services. With a correction to the subnet handling I was able to deploy all the "stuff" as required. Subnets:

data "aws_subnet_ids" "all_subnets" {
  vpc_id = data.aws_vpc.default.id
}

data "aws_subnet" "subnets" {
  for_each = data.aws_subnet_ids.all_subnets.ids
  id       = each.value
}
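Putting it together, the fixed service header looks like this sketch (each.value is a subnet ID, which is unique and stable, so the service name is idempotent across applies):

resource "aws_ecs_service" "ec2_service" {
  for_each = data.aws_subnet_ids.all_subnets.ids
  name     = "myservice_${each.value}"
  # ... rest of the resource as in the original ...
}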
Every time I run terraform apply, the desired count changes from 1 -> 2. Please suggest.
In variables.tf I declared desired_count = 2, but the live service keeps showing 1, so each apply plans this change:
~ resource "aws_ecs_service" "ecs_service" {
cluster = "arn:aws:ecs::cluster/"
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
~ desired_count = 1 -> 2
enable_ecs_managed_tags = false
force_new_deployment = true
health_check_grace_period_seconds = 0
iam_role = "aws-service-role"
id = "arn:aws:ecs:-ecs-service"
launch_type = "FARGATE"
name = "ecs-service"
platform_version = "LATEST"
propagate_tags = "NONE"
scheduling_strategy = "REPLICA"
tags = {}
task_definition = "arn:aws:ecs:"
deployment_controller {
type = "ECS"
}
load_balancer {
container_name = "abcd"
container_port = 80
target_group_arn = "arn.*"
}
network_configuration {
assign_public_ip = false
security_groups = []
subnets = []
}
}
Here is the main.tf (note: I am using modules to invoke this main.tf):
############ ECS Service ##############
resource "aws_ecs_service" "ecs_service" {
  name                 = "${var.name}-ecs-service"
  cluster              = var.cluster_id
  task_definition      = var.task_definition_arn
  desired_count        = var.desired_count
  iam_role             = var.ecs_iam_role
  force_new_deployment = var.force_new_deployment
  launch_type          = "FARGATE"

  deployment_controller {
    type = "ECS"
  }

  deployment_maximum_percent         = "200"
  deployment_minimum_healthy_percent = "100"

  load_balancer {
    target_group_arn = var.lb_tg_arn
    container_name   = var.name
    container_port   = var.port
  }

  network_configuration {
    security_groups  = var.security_group_id
    subnets          = var.subnets
    assign_public_ip = var.assign_public_ip
  }

  depends_on = [var.aws_alb]
}
Variables are assigned according to the requirements of my project, with default = null.
I'm not sure what is really causing the change (an autoscaling policy adjusting the service's desired count outside of Terraform is a common cause), but if you only want to mute the change, you can ignore it as described in the Terraform docs:
resource "aws_ecs_service" "example" {
# ... other configurations ...
# Example: Create service with 2 instances to start
desired_count = 2
# Optional: Allow external changes without Terraform plan difference
lifecycle {
ignore_changes = [desired_count]
}
}
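For context, drift like this usually comes from something scaling the service outside of Terraform. A sketch of the kind of resource that typically does it (illustrative only; the names here are assumptions, not taken from the question):

resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${var.cluster_name}/${aws_ecs_service.ecs_service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 4
}

When such a target (or a manual console change) adjusts the running count, the next plan shows the difference, and ignore_changes = [desired_count] suppresses it.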