Cannot create an ECS Service using Terraform on AWS

I'm trying to create an ECS service using Terraform. I have some modules defined to create some necessary resources (like the alb, vpc, subnets, etc). All of those have been created successfully, but the aws_ecs_service is not being created.
This is the Terraform code I'm using:
terraform {
required_version = ">= 0.13"
}
resource "aws_ecs_task_definition" "main" {
family = "task-definition"
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.fargate_cpu
memory = var.fargate_memory
container_definitions = jsonencode([
{
name = "container-definition"
image = var.container_image
cpu = var.fargate_cpu
memory = var.fargate_memory
command = ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
port_mappings = [
{
container_port = var.app_port
host_port = var.app_port
}
]
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = "/ecs/task-definition"
awslogs-region = var.aws_region
awslogs-stream-prefix = "ecs"
}
}
}
])
}
module "load_balancer" {
source = "../alb"
vpc_id = var.vpc_id
app_port = var.app_port
public_subnets_ids = var.public_subnets_ids
health_check_path = "/"
}
resource "aws_ecs_service" "main" {
name = "testing-service"
cluster = var.ecs_cluster_id
task_definition = aws_ecs_task_definition.main.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.load_balancer.sg_id]
subnets = var.private_subnet_ids
assign_public_ip = true
}
load_balancer {
target_group_arn = module.load_balancer.alb_tg_arn
container_name = "container-definition"
container_port = var.app_port
}
depends_on = [
module.load_balancer
]
}
I'm fully aware that this fragment of code is not enough to reproduce the problem, but I have not been able to put together a smaller example that reproduces it. If you need the rest of the files, I can create a public repo or something like that with the rest of the code.
The error I'm getting is:
╷
│ Error: error creating testing-service service: error waiting for ECS service (testing-service) creation: InvalidParameterException: The container container-definition did not have a container port 8000 defined.
│
│ with module.service.aws_ecs_service.main,
│ on service/main.tf line 47, in resource "aws_ecs_service" "main":
│ 47: resource "aws_ecs_service" "main"
Update
Taking a look at the generated resources, I have seen that the port mapping has not been generated, even though I have it specified in the Terraform code (see the screenshot of the task definition created by that code).

You have a typo in your container definition. Instead of this:
port_mappings = [
{
container_port = var.app_port
host_port = var.app_port
}
]
You should have:
portMappings = [
{
containerPort = var.app_port
hostPort = var.app_port
}
]
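For reference: the keys inside jsonencode are passed straight through to the ECS API, so they must use the API's camelCase names; unrecognised snake_case keys are silently dropped, which is why the port mapping never showed up in the registered task definition. A corrected version of the whole container_definitions block could look like this (a sketch reusing the variables from the question; essential and protocol are optional additions):
container_definitions = jsonencode([
  {
    name      = "container-definition"
    image     = var.container_image
    cpu       = var.fargate_cpu
    memory    = var.fargate_memory
    essential = true
    command   = ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
    portMappings = [
      {
        containerPort = var.app_port
        hostPort      = var.app_port
        protocol      = "tcp"
      }
    ]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = "/ecs/task-definition"
        "awslogs-region"        = var.aws_region
        "awslogs-stream-prefix" = "ecs"
      }
    }
  }
])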

Related

Error: Unsupported argument terraform ecs with load balancer

I'd appreciate your support in solving this issue. I have a main.tf file like below:
resource "aws_ecs_service" "nodejs-service" {
name = "nodejs-service"
cluster = aws_ecs_cluster.project_cluster.id
task_definition = aws_ecs_task_definition.nodejs.arn
launch_type = "FARGATE"
desired_count = 1
load_balancer {
target_group_arns = module.alb.target_group_arns
container_name = "${aws_ecs_task_definition.nodejs.family}"
container_port = 8080 # Specifying the container port
}
network_configuration {
subnets = var.vpc.public_subnets
assign_public_ip = true
}
}
module "alb" {
source = "terraform-aws-modules/alb/aws"
version = "~> 8.0"
name = var.namespace
load_balancer_type = "application"
vpc_id = var.vpc.vpc_id
subnets = var.vpc.public_subnets
security_groups = [var.sg.lb]
http_tcp_listeners = [
{
port = 80
protocol = "HTTP"
target_group_index = 0
}
]
target_groups = [
{ name_prefix = "nodejs-service"
backend_protocol = "HTTP"
backend_port = 8080
target_type = "instance"
}
]
}
I receive this error:
│ Error: Unsupported argument
│
│ on modules/ecs/main.tf line 58, in resource "aws_ecs_service" "nodejs-service":
│ 58: target_group_arns = module.alb.target_group_arns
│
│ An argument named "target_group_arns" is not expected here. Did you mean "target_group_arn"?
Even if I change the argument on the service to target_group_arn, I receive an error that "target_group_arn" is not defined. Also, with module.alb.target_groups[0] the same kind of error appears with terraform plan:
load_balancer {
target_group_arn = module.alb.target_groups[0]
container_name = "${aws_ecs_task_definition.nodejs.family}"
container_port = 8080 # Specifying the container port
}
Error:
│ Error: Unsupported attribute
│
│ on modules/ecs/main.tf line 58, in resource "aws_ecs_service" "nodejs-service":
│ 58: target_group_arn = module.alb.target_groups[0]
│ ├────────────────
│ │ module.alb is a object
│
│ This object does not have an attribute named "target_groups".
As per the main.tf file, how can I select the target group which is defined in the alb module?
Thanks.
Tried: terraform plan; expected an ALB with a target group pointing at the nodejs-service container.
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.27"
}
null = {
source = "hashicorp/null"
version = ">= 2.0"
}
}
}
The issue is not in the module, but rather in the argument you are trying to use in the aws_ecs_service resource. You are currently setting target_group_arns, while the argument is singular, i.e., target_group_arn [1]:
load_balancer {
target_group_arn = module.alb.target_group_arns[0]
container_name = "${aws_ecs_task_definition.nodejs.family}"
container_port = 8080 # Specifying the container port
}
The example is with the first of the target groups returned from the module, so make sure you are using the correct one.
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#target_group_arn
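If you are not sure which ARN sits at which index, a quick way to check is to surface the module output and inspect it (a small sketch; the output name is illustrative):
output "alb_target_group_arns" {
  value = module.alb.target_group_arns
}
After terraform apply, terraform output alb_target_group_arns prints the list, which should follow the order of the target_groups you defined in the module block.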
The output from the alb module is a list. In your case it would be module.alb.target_group_arns[0].
Replace the block with this code (note the argument itself stays singular, target_group_arn):
load_balancer {
target_group_arn = module.alb.target_group_arns[0]
container_name = "${aws_ecs_task_definition.nodejs.family}"
container_port = 8080 # Specifying the container port
}

use terraform to create an aws codedeploy ecs infrastructure

I tried to use Terraform to set up an AWS CodeDeploy ECS infrastructure, following the AWS documentation to understand blue/green deployments (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-blue-green.html), this post as an example (it uses an EC2 instance): https://hiveit.co.uk/techshop/terraform-aws-vpc-example/02-create-the-vpc/, and finally the Terraform documentation as a reference: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codedeploy_deployment_group
The problem is that when I try to run a deployment from AWS CodeDeploy, the deployment gets stuck in the Install phase.
Here is the Terraform configuration I have written:
# main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
provider "aws" {
# defined in AWS_REGION env
# defined in AWS_ACCESS_KEY_ID env
# defined in AWS_SECRET_ACCESS_KEY env
}
# create repository to store docker image
resource "aws_ecr_repository" "repository" {
name = "test-repository"
}
# network.tf
resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "terraform-example-vpc"
}
}
resource "aws_internet_gateway" "gateway" {
vpc_id = aws_vpc.vpc.id
tags = {
Name = "terraform-example-internet-gateway"
}
}
resource "aws_route" "route" {
route_table_id = aws_vpc.vpc.main_route_table_id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gateway.id
}
resource "aws_subnet" "main" {
count = length(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.${count.index}.0/24"
map_public_ip_on_launch = true
availability_zone = element(data.aws_availability_zones.available.names, count.index)
tags = {
Name = "public-subnet-${element(data.aws_availability_zones.available.names, count.index)}"
}
}
# loadbalancer.tf
resource "aws_security_group" "lb_security_group" {
name = "terraform_lb_security_group"
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform-example-lb-security-group"
}
}
resource "aws_lb" "lb" {
name = "terraform-example-lb"
security_groups = [aws_security_group.lb_security_group.id]
subnets = aws_subnet.main.*.id
tags = {
Name = "terraform-example-lb"
}
}
resource "aws_lb_target_group" "group1" {
name = "terraform-example-lb-target1"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.vpc.id
target_type = "ip"
}
resource "aws_lb_listener" "listener_http" {
load_balancer_arn = aws_lb.lb.arn
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = aws_lb_target_group.group1.arn
type = "forward"
}
}
# cluster.tf
resource "aws_ecs_cluster" "cluster" {
name = "terraform-example-cluster"
tags = {
Name = "terraform-example-cluster"
}
}
resource "aws_iam_role" "ecsTaskExecutionRole" {
name = "ecsTaskExecutionRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
"Sid" : "",
"Effect" : "Allow",
"Principal" : {
"Service" : "ecs-tasks.amazonaws.com"
},
"Action" : "sts:AssumeRole"
}
]
})
}
resource "aws_ecs_task_definition" "task_definition" {
family = "deployment-app"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = 256
memory = 512
execution_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
container_definitions = jsonencode([
{
"name" : "app",
"image" : "httpd:2.4",
"portMappings" : [
{
"containerPort" : 80,
"hostPort" : 80,
"protocol" : "tcp"
}
],
"essential" : true
}
])
}
resource "aws_ecs_service" "service" {
cluster = aws_ecs_cluster.cluster.id
name = "terraform-example-service"
task_definition = "deployment-app"
launch_type = "FARGATE"
scheduling_strategy = "REPLICA"
platform_version = "LATEST"
desired_count = 1
load_balancer {
target_group_arn = aws_lb_target_group.group1.arn
container_name = "app"
container_port = 80
}
deployment_controller {
type = "CODE_DEPLOY"
}
network_configuration {
assign_public_ip = true
security_groups = [aws_security_group.lb_security_group.id]
subnets = aws_subnet.main.*.id
}
lifecycle {
ignore_changes = [desired_count, task_definition, platform_version]
}
}
# codedeploy.tf
resource "aws_codedeploy_app" "codedeploy_app" {
name = "example-codedeploy-app"
compute_platform = "ECS"
}
resource "aws_lb_target_group" "group2" {
name = "terraform-example-lb-target2"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.vpc.id
target_type = "ip"
}
resource "aws_codedeploy_deployment_group" "codedeploy_group" {
app_name = aws_codedeploy_app.codedeploy_app.name
deployment_group_name = "deployment_group_name"
service_role_arn = "###"
deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "CONTINUE_DEPLOYMENT"
wait_time_in_minutes = 0
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
termination_wait_time_in_minutes = 1
}
}
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
target_group_pair_info {
target_group {
name = aws_lb_target_group.group1.name
}
target_group {
name = aws_lb_target_group.group2.name
}
prod_traffic_route {
listener_arns = [aws_lb_listener.listener_http.arn]
}
}
}
ecs_service {
cluster_name = aws_ecs_cluster.cluster.name
service_name = aws_ecs_service.service.name
}
}
# datasource.tf
data "aws_availability_zones" "available" {}
Note: replace ### with the ARN of the AWSCodeDeployRoleForECS role (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/codedeploy_IAM_role.html). I haven't added it to Terraform yet.
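Should that role be managed in Terraform later, a minimal sketch (resource names are illustrative) could look like this:
resource "aws_iam_role" "codedeploy_for_ecs" {
  name = "example-codedeploy-for-ecs"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = "sts:AssumeRole"
        Principal = {
          Service = "codedeploy.amazonaws.com"
        }
      }
    ]
  })
}
resource "aws_iam_role_policy_attachment" "codedeploy_for_ecs" {
  role = aws_iam_role.codedeploy_for_ecs.name
  # AWS-managed policy for CodeDeploy blue/green deployments on ECS
  policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS"
}
The deployment group would then use service_role_arn = aws_iam_role.codedeploy_for_ecs.arn instead of the placeholder.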
After running
terraform plan
terraform apply
the whole stack is set up and I can reach the "It works!" page of httpd through the load balancer DNS name.
My problem is that when I push a new image to the repository, update the task definition, and create a new deployment, the deployment gets stuck at Step 1 without any error whatsoever.
For the example, I tried to push an nginx image instead of httpd:
aws ecs register-task-definition \
--family=deployment-app \
--network-mode=awsvpc \
--cpu=256 \
--memory=512 \
--execution-role-arn=arn:aws:iam::__AWS_ACCOUNT__:role/ecsTaskExecutionRole \
--requires-compatibilities='["FARGATE"]' \
--container-definitions='[{"name": "app","image": "nginx:latest","portMappings": [{"containerPort": 80,"hostPort": 80,"protocol": "tcp"}],"essential": true}]'
I am using the AWS console to create the deployment, with this YAML appspec:
version: 0.0
Resources:
- TargetService:
Type: AWS::ECS::Service
Properties:
TaskDefinition: "arn:aws:ecs:eu-west-3:__AWS_ACCOUNT__:task-definition/deployment-app:9"
LoadBalancerInfo:
ContainerName: "app"
ContainerPort: 80
PlatformVersion: "LATEST"
Can anyone help me understand my mistake?
Thanks!
I didn't know where to find a log from CodeDeploy to see what the problem was. In the end, I just needed to go to the ECS service and check the provisioning task; that task had failed with an error message.
The problem came from my ecsTaskExecutionRole: it didn't have enough ECR rights to pull the image I built.
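In Terraform terms, one way to grant those permissions is to attach the AWS-managed execution role policy, which covers the ECR pull and CloudWatch Logs permissions the execution role needs. A sketch against the ecsTaskExecutionRole resource from the question:
resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role = aws_iam_role.ecsTaskExecutionRole.name
  # grants ecr:GetAuthorizationToken, ecr:BatchGetImage, logs:PutLogEvents, etc.
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}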

Unable to register ec2 instance into ECS using terraform

I am unable to register the EC2 instance into the ECS cluster. I have created the cluster and the service and registered the task into it, but the EC2 instance is not registered. I provided user data to register the instance into the cluster, but it still does not register. The EC2 instances are provisioned; they just don't register with the ECS cluster. I am using a module-wise structure and am attaching the relevant files below, with a screenshot at the end of the question.
Autoscaling:
resource "aws_launch_configuration" "ec2" {
image_id = var.image_id
instance_type = var.instance_type
name = "ec2-${terraform.workspace}"
user_data = <<EOF
#!/bin/bash
echo 'ECS_CLUSTER=${var.cluster_name.name}' >> /etc/ecs/ecs.config
echo 'ECS_DISABLE_PRIVILEGED=true' >> /etc/ecs/ecs.config
EOF
key_name = var.key_name
iam_instance_profile = var.instance_profile
security_groups = [aws_security_group.webserver.id]
}
resource "aws_autoscaling_group" "asg" {
vpc_zone_identifier = var.public_subnet
desired_capacity = 2
max_size = 2
min_size = 2
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.ec2.name
target_group_arns = [var.tg.arn]
}
resource "aws_security_group" "webserver" {
name = "webserver-${terraform.workspace}"
description = "Allow internet traffic"
vpc_id = var.vpc_id
ingress {
description = "incoming for ec2-instance"
from_port = 0
to_port = 0
protocol = -1
security_groups = [var.alb_sg]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "webserver-sg"
}
}
output "ec2_sg" {
value = aws_security_group.webserver.id
}
Cluster:
resource "aws_ecs_cluster" "cluster" {
name = "wordpress-${terraform.workspace}"
}
output "cluster" {
value = aws_ecs_cluster.cluster.id
}
output "cluster1" {
value = aws_ecs_cluster.cluster
}
Service:
resource "aws_ecs_service" "wordpress" {
name = "Wordpress-${terraform.workspace}"
cluster = var.cluster
task_definition = var.task.id
desired_count = 2
scheduling_strategy = "REPLICA"
load_balancer {
target_group_arn = var.tg.arn
container_name = "wordpress"
container_port = 80
}
deployment_controller {
type = "ECS"
}
}
Task:
data "template_file" "init" {
template = "${file("${path.module}/template/containerdef.json")}"
vars = {
rds_endpoint = "${var.rds_endpoint}"
name = "${var.name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_ecs_task_definition" "task" {
family = "wordpress"
container_definitions = "${data.template_file.init.rendered}"
network_mode = "bridge"
requires_compatibilities = ["EC2"]
memory = "1GB"
cpu = "1 vCPU"
task_role_arn = var.task_execution.arn
}
main.tf
data "aws_availability_zones" "azs" {}
data "aws_ssm_parameter" "name" {
name = "Dbname"
}
data "aws_ssm_parameter" "password" {
name = "db_password"
}
module "my_vpc" {
source = "./modules/vpc"
vpc_cidr = var.vpc_cidr
public_subnet = var.public_subnet
private_subnet = var.private_subnet
availability_zone = data.aws_availability_zones.azs.names
}
module "db" {
source = "./modules/rds"
ec2_sg = "${module.autoscaling.ec2_sg}"
allocated_storage = var.db_allocated_storage
storage_type = var.db_storage_type
engine = var.db_engine
engine_version = var.db_engine_version
instance_class = var.db_instance_class
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "alb" {
source = "./modules/alb"
vpc_id = "${module.my_vpc.vpc_id}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
}
module "task" {
source = "./modules/task"
name = data.aws_ssm_parameter.name.value
username = data.aws_ssm_parameter.name.value
password = data.aws_ssm_parameter.password.value
rds_endpoint = "${module.db.rds_endpoint}"
task_execution = "${module.role.task_execution}"
}
module "autoscaling" {
source = "./modules/autoscaling"
vpc_id = "${module.my_vpc.vpc_id}"
#public_subnet = "${module.my_vpc.public_subnets_ids}"
tg = "${module.alb.tg}"
image_id = var.image_id
instance_type = var.instance_type
alb_sg = "${module.alb.alb_sg}"
public_subnet = "${module.my_vpc.public_subnets_ids}"
instance_profile = "${module.role.instance_profile}"
key_name = var.key_name
cluster_name = "${module.cluster.cluster1}"
}
module "role" {
source = "./modules/Iam_role"
}
module "cluster" {
source = "./modules/Ecs-cluster"
}
module "service" {
source = "./modules/services"
cluster = "${module.cluster.cluster}"
tg = "${module.alb.tg}"
task = "${module.task.task}"
}
ec2-instance role:
resource "aws_iam_role" "container_instance" {
name = "container_instance-${terraform.workspace}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_instance_profile" "ec2_instance_role" {
name = "iam_instance_profile-${terraform.workspace}"
role = aws_iam_role.container_instance.name
}
resource "aws_iam_role_policy_attachment" "ec2_instance" {
role = aws_iam_role.container_instance.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
Screenshot:
Based on the chat discussion:
The issue could be caused by using an incorrect instance profile reference; the launch configuration should point at the profile's name:
iam_instance_profile = var.instance_profile.name
The important thing is that the two instances are now correctly registered with the cluster.
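As a quick sanity check after applying the change, the registration can be confirmed from the CLI (substitute the actual cluster name, i.e. the wordpress-<workspace> cluster created above); the command should list two container instance ARNs:
aws ecs list-container-instances --cluster <cluster-name>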

Stuck at Code Deploy when trying to do Blue Green Deploy in CI/CD pipeline on ECS

I am trying to do blue/green deployment on ECS. When I deploy my service on the ECS cluster manually, it runs fine and passes all the health checks. But whenever I do a blue/green deployment of the same service on ECS, it gets stuck in the Install phase until it times out.
After the timeout I get this error: "The deployment timed out while waiting for the replacement task set to become healthy. This time out period is 60 minutes." I am not sure what to do now.
I have applied everything and tested the load balancer, target groups, and ECR; all of them seem to work fine when I deploy and test the service manually. Please find my Terraform code below and help me out on this. Let me know if you need further details.
ECS Cluster
resource "aws_ecs_cluster" "production-fargate-cluster" {
name = "Production-Fargate-Cluster"
}
#Application Load Balancer
resource "aws_alb" "ecs_cluster_alb" {
name = var.ecs_cluster_name
internal = false
security_groups = [aws_security_group.ecs_alb_security_group.id]
subnets = data.terraform_remote_state.infrastructure.outputs.two_public_subnets
tags = {
Name = "${var.ecs_cluster_name} - Application Load Balancer"
}
}
#First Target group
resource "aws_alb_target_group" "ecs_default_target_group" {
name = "${var.ecs_cluster_name}-BlueTG"
port = var.alb_target_group_port #port 80
protocol = "HTTP"
vpc_id = data.terraform_remote_state.infrastructure.outputs.vpc_id
target_type = "ip"
health_check {
enabled = true
path = "/actuator/health"
interval = 30
healthy_threshold = 3
unhealthy_threshold = 2
}
tags = {
Name = "Blue-TG"
}
}
#First Load balancer's listener
resource "aws_alb_listener" "ecs_alb_http_listener" {
load_balancer_arn = aws_alb.ecs_cluster_alb.arn
port = var.first_load_balancer_listener_port #80 port
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_alb_target_group.ecs_default_target_group.arn
}
lifecycle {
ignore_changes = [default_action]
}
}
#Second Load balancer's listener
resource "aws_alb_listener" "ecs_alb_http_listener_second" {
load_balancer_arn = aws_alb.ecs_cluster_alb.arn
port = 8080
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_alb_target_group.ecs_default_target_group_second.arn
}
lifecycle {
ignore_changes = [default_action]
}
}
#Second Target group
resource "aws_alb_target_group" "ecs_default_target_group_second" {
name = "${var.ecs_cluster_name}-GreenTG"
port = 8080
protocol = "HTTP"
vpc_id = data.terraform_remote_state.infrastructure.outputs.vpc_id
target_type = "ip"
health_check {
enabled = true
path = "/actuator/health"
interval = 30
healthy_threshold = 3
unhealthy_threshold = 2
}
tags = {
Name = "Blue-TG"
}
}
Fargate ECS Service
resource "aws_ecs_service" "ecs_service" {
name = var.ecs_service_name
task_definition = aws_ecs_task_definition.task_definition_for_application.arn
cluster = data.terraform_remote_state.platform.outputs.ecs_cluster_name
launch_type = "FARGATE"
network_configuration {
#since we have a load balancer and nat gateway attached we should be deploying in private subnets
#but I deployed in public subnet just to try some few things
#you can deploy services in private subnet!! And you should :)
subnets = data.terraform_remote_state.platform.outputs.ecs_public_subnets
security_groups = [aws_security_group.app_security_group.id]
assign_public_ip = true
}
load_balancer {
container_name = var.task_definition_name
container_port = var.docker_container_port
target_group_arn = data.terraform_remote_state.platform.outputs.aws_alb_target_group_arn[0] #target group with port 80 is given here
}
desired_count = 2
deployment_controller {
type = "CODE_DEPLOY"
}
lifecycle {
ignore_changes = [load_balancer, task_definition, desired_count]
}
}
#Task definition for application
resource "aws_ecs_task_definition" "task_definition_for_application" {
container_definitions = data.template_file.ecs_task_definition_template.rendered
family = var.task_definition_name
cpu = var.cpu
memory = var.memory
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
execution_role_arn = aws_iam_role.fargate_iam_role.arn
task_role_arn = aws_iam_role.ecs_task_execution_role.arn
}
#Role
resource "aws_iam_role" "fargate_iam_role" {
name = "fargate_iam_role"
assume_role_policy = data.aws_iam_policy_document.ecs-task-assume-role.json
}
resource "aws_iam_role_policy_attachment" "fargate_iam_role_policy" {
role = aws_iam_role.fargate_iam_role.name
policy_arn = data.aws_iam_policy.ecs-task-execution-role.arn
}
#Security Group
resource "aws_security_group" "app_security_group" {
name = "${var.ecs_service_name}-SG"
description = "Security group for springbootapp to communicate in and out"
vpc_id = data.terraform_remote_state.platform.outputs.vpc_id
ingress {
from_port = 80
protocol = "TCP"
to_port = 8080
cidr_blocks = [data.terraform_remote_state.platform.outputs.vpc_cidr_block]
}
egress {
from_port = 0
protocol = "-1"
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.ecs_service_name}-SG"
}
}
#CloudWatch
resource "aws_cloudwatch_log_group" "application_log_group" {
name = "/ecs/sun-api"
}
Code Pipeline
#Code Pipeline
resource "aws_codepipeline" "codepipeline_for_blue_green_deployment" {
name = var.pipeline_name
role_arn = aws_iam_role.codepipeline_roles.arn
artifact_store {
location = var.bucket_for_codepipeline
type = var.artifact_store_type
}
stage {
name = "github_Source"
action {
name = "github_Source"
category = "Source"
owner = var.source_stage_owner
provider = var.source_stage_provider
version = "1"
output_artifacts = ["SourceArtifact"]
configuration = {
PollForSourceChanges = true
OAuthToken = var.github_token
Owner = var.git_hub_owner
Repo = var.repo_name
Branch = var.branch_name
}
}
action {
name = "Image"
category = "Source"
owner = "AWS"
provider = "ECR"
version = "1"
output_artifacts = ["MyImage"]
run_order = 1
configuration = {
ImageTag: "latest"
RepositoryName:"umar-tahir-terraform-repo"
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "CodeDeployToECS"
version = "1"
input_artifacts = ["SourceArtifact","MyImage"]
configuration ={
ApplicationName = aws_codedeploy_app.application_deploy.name
DeploymentGroupName = aws_codedeploy_deployment_group.code_deployment_group.deployment_group_name
TaskDefinitionTemplateArtifact: "SourceArtifact",
AppSpecTemplateArtifact: "SourceArtifact",
TaskDefinitionTemplatePath: "taskdef.json",
AppSpecTemplatePath: "appspec.yaml",
Image1ArtifactName: "MyImage",
Image1ContainerName: "IMAGE1_NAME",
}
}
}
}
Code Deploy
resource "aws_codedeploy_app" "application_deploy" {
compute_platform = var.compute_platform
name = var.aws_codedeploy_app_name
}
resource "aws_codedeploy_deployment_group" "code_deployment_group" {
app_name = aws_codedeploy_app.application_deploy.name
deployment_group_name = var.deployment_group_name
deployment_config_name = var.deployment_config_name
service_role_arn = aws_iam_role.codedeploy_role_blue_green.arn
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = var.action_on_timeout
}
terminate_blue_instances_on_deployment_success {
action = var.terminate_blue_instances_on_deployment_success_action
}
}
ecs_service {
cluster_name = data.terraform_remote_state.aws_modules_state.outputs.ecs_cluster_name
service_name = "generalapplication"
}
deployment_style {
deployment_option = var.deployment_option
deployment_type = var.deployment_type
}
load_balancer_info {
target_group_pair_info {
prod_traffic_route {
listener_arns = [data.terraform_remote_state.aws_modules_state.outputs.listener_arns]
}
target_group {
name = data.terraform_remote_state.aws_modules_state.outputs.green_target_group_name
}
target_group {
name = data.terraform_remote_state.aws_modules_state.outputs.blue_target_group_name
}
}
}
}
appSpec.yml
version: 0.0
Resources:
- TargetService:
Type: AWS::ECS::Service
Properties:
TaskDefinition: <TASK_DEFINITION>
LoadBalancerInfo:
ContainerName: "springboottaskdefinition"
ContainerPort: 8080
PlatformVersion: "LATEST"
task def
{
"taskRoleArn": "arn-xxxx",
"executionRoleArn": "arn-xxxx",
"containerDefinitions": [
{
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/sun-api",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "springboottaskdefinition-LogGroup-stream"
}
},
"portMappings": [
{
"hostPort": 8080,
"protocol": "tcp",
"containerPort": 8080
}
],
"image": "<IMAGE1_NAME>",
"essential": true,
"name": "springboottaskdefinition"
}
],
"memory": "1024",
"family": "springboottaskdefinition",
"requiresCompatibilities": [
"FARGATE"
],
"networkMode": "awsvpc",
"cpu": "512"
}

AWS ECS Fargate disable Health check

I have a container running within AWS ECS using Fargate.
I register it with the Terraform below and add the hostname to service discovery. I can see it add the record into Route 53 and then remove it; I'm assuming this is because it fails a health check.
It fails with the error below when I start the container:
company/cmd createServer net.Listen company.qcap.prod:50031 listen tcp:
lookup company.qcap.prod on 10.0.0.2:53: no such host
My service can't listen on the address until the host is registered in Route 53, but Route 53 removes the DNS entry if it fails a health check.
Is there a way to tell the task to wait until that entry is added and then do the health check?
Below is my Terraform for the service
data "template_file" "company" {
template = file("./templates/ecs/company.json.tpl")
vars = {
app_port = 50010
fargate_cpu = var.fargate_cpu
fargate_memory = var.fargate_memory
aws_region = var.region
}
}
resource "aws_ecs_task_definition" "company" {
family = "company-app-task"
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
network_mode = "awsvpc"
requires_compatibilities = [
"FARGATE"]
cpu = var.fargate_cpu
memory = var.fargate_memory
container_definitions = data.template_file.company.rendered
}
resource "aws_ecs_service" "company-service" {
name = "company-service"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.company.id
desired_count = var.app_count
launch_type = "FARGATE"
network_configuration {
security_groups = [
aws_security_group.ecs_tasks.id
]
subnets = module.vpc.private_subnets
assign_public_ip = true
}
service_registries {
container_name = "company"
registry_arn = aws_service_discovery_service.company.arn
}
lifecycle {
create_before_destroy = true
}
depends_on = [
aws_iam_role_policy_attachment.ecs_task_execution_role
]
}
resource "aws_service_discovery_service" "company" {
name = "company"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.qcap_prod_sd.id
routing_policy = "MULTIVALUE"
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 10
}
}