Take ECS Task Definition environment variables from Terraform input variables

I am trying to deploy ECS task definition with Terraform. Here is my ECS task definition resource code:
resource "aws_ecs_task_definition" "my_TD" {
family = "my_container"
container_definitions = <<DEFINITION
[{
"name": "my_container",
"image": "${format("%s:qa", var.my_ecr_arn)}",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 300,
"networkMode": "awsvpc",
"environment": [
{
"name": "PORT",
"value": "80"
},
{
"name": "Token",
"value": "xxxxxxxx"
}
]
}
]
DEFINITION
requires_compatibilities = ["EC2"]
network_mode = "awsvpc"
cpu = "256"
memory = "512"
task_role_arn = var.ecs_role
execution_role_arn = var.ecs_role
}
The environment variables are hardcoded here, so I tried to take them from a Terraform input variable instead. I modified the definition like this:
variable "my_env_variables"{
default = [
{
"name": "PORT",
"value": "80"
},
{
"name": "token",
"value": "xxxxx"
}
]
}
...
...
"environment" : "${var.my_env_variables}"
...
...
It gives me an error like this:
var.my_env_variables is tuple with 1 element
Cannot include the given value in a string template: string required.
I am new to Terraform. How can I solve this issue?

You need a JSON string, which you can get using jsonencode. So you could try the following:
resource "aws_ecs_task_definition" "my_TD" {
family = "my_container"
container_definitions = <<DEFINITION
[{
"name": "my_container",
"image": "${format("%s:qa", var.my_ecr_arn)}",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 300,
"networkMode": "awsvpc",
"environment": ${jsonencode(var.my_env_variables)}
}
]
DEFINITION
requires_compatibilities = ["EC2"]
network_mode = "awsvpc"
cpu = "256"
memory = "512"
task_role_arn = var.ecs_role
execution_role_arn = var.ecs_role
}
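Alternatively, you can drop the heredoc entirely and build the whole container_definitions value with a single jsonencode call, which removes any chance of hand-written JSON quoting errors. A minimal sketch based on the resource above (it assumes every value in var.my_env_variables is already a string, since ECS rejects non-string environment values):

resource "aws_ecs_task_definition" "my_TD" {
  family                   = "my_container"
  requires_compatibilities = ["EC2"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  task_role_arn            = var.ecs_role
  execution_role_arn       = var.ecs_role

  # Build the document in HCL and let Terraform serialize it to JSON.
  # Note "networkMode" is not a valid container-level key; it belongs
  # at the task level (network_mode above), so it is omitted here.
  container_definitions = jsonencode([
    {
      name   = "my_container"
      image  = "${var.my_ecr_arn}:qa"
      memory = 300
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
      environment = var.my_env_variables
    }
  ])
}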

Related

Error: InvalidParameterException: Task definition does not support launch_type FARGATE

I use Terraform to create an ECS service, and I get this error:
"InvalidParameterException: Task definition does not support launch_type FARGATE"
I checked solutions from other posts, but they did not help. Can anyone tell me why I get this error?
container-definition template
[
  {
    "name": "api",
    "image": "${app_image}",
    "essential": true,
    "memoryReservation": 256,
    "environment": [
      {"name": "ALLOWED_HOSTS", "value": "${allowed_hosts}"}
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log_group_name}",
        "awslogs-region": "${log_group_region}",
        "awslogs-stream-prefix": "api"
      }
    },
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
ecs.tf
resource "aws_ecs_task_definition" "api" {
family = "${local.prefix}-api"
container_definitions = data.template_file.api_container_definitions.rendered
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = 256
memory = 512
execution_role_arn = aws_iam_role.task_execution_role.arn
task_role_arn = aws_iam_role.task_role.arn
volume {
name = "static"
}
tags = local.common_tags
}
resource "aws_ecs_service" "api" {
name = "${local.prefix}-api"
cluster = aws_ecs_cluster.main.name
task_definition = aws_ecs_task_definition.api.family
desired_count = 1
launch_type = "FARGATE"
network_configuration {
subnets = [
aws_subnet.public_a.id,
aws_subnet.public_b.id,
]
security_groups = [aws_security_group.ecs_service.id]
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_alb_target_group.ecs_tg.id
container_name = "api"
container_port = var.app_port
}
}
Thank you
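No accepted answer is recorded here, but one thing worth checking: task_definition = aws_ecs_task_definition.api.family points the service at the latest ACTIVE revision of the family, which is not necessarily the revision this configuration just registered; if the revision it resolves to was registered without FARGATE compatibility, the service create can fail with exactly this message. A hedged sketch of pinning the service to the Terraform-managed revision instead:

resource "aws_ecs_service" "api" {
  # ... other arguments unchanged ...

  # Referencing the ARN (or "family:revision") pins the service to the
  # exact revision registered by this configuration, rather than to
  # whatever revision is currently the latest ACTIVE one in the family.
  task_definition = aws_ecs_task_definition.api.arn
}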

Grafana Docker container will not launch on Fargate via Terraform, issue with mounted file storage

My Grafana container will not launch on Fargate via Terraform, as there is an issue with the mounted file storage. The logs show three entries:
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
GF_PATHS_DATA='/var/lib/grafana' is not writable.
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
My task role assigned to the task definition does have full EFS access.
I am very new to AWS and Terraform, so I am having trouble understanding what the Grafana documentation is suggesting. I've added a "user" field in the task definition and set it to 472, but I still get the same errors. Thank you so much for reading my issue.
resource "aws_ecs_task_definition" "grafana" {
family = "grafana"
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = "512"
memory = "1024"
task_role_arn = "arn:aws:iam::692869463706:role/grafanaTaskRole"
execution_role_arn = "arn:aws:iam::692869463706:role/ecsTaskExecutionRole"
container_definitions = <<DEFINITION
[
{
"memory": 512,
"cpu": 256,
"portMappings": [
{
"hostPort": 3000,
"containerPort": 3000,
"protocol": "tcp"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "ecs-grafana",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "grafana-task"
}
},
"essential": true,
"mountPoints": [
{
"containerPath": "/var/lib/grafana",
"sourceVolume": "efs-html",
"readOnly": null
}
],
"name": "grafana",
"image": "grafana/grafana:latest",
"user": "472"
}
]
DEFINITION
volume {
name = "efs-html"
efs_volume_configuration {
file_system_id = aws_efs_file_system.myfilesystem.id
root_directory = "/grafana"
transit_encryption = "ENABLED"
transit_encryption_port = 2999
authorization_config {
access_point_id = aws_efs_access_point.test.id
iam = "ENABLED"
}
}
}
}
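The thread doesn't record an accepted fix, but since these are classic uid/gid permission failures on the EFS-backed /var/lib/grafana, a common approach is to push the ownership handling into the EFS access point rather than the container: have the access point act as the grafana user (uid/gid 472) and create the root directory with that owner. A sketch for the aws_efs_access_point.test referenced above (note also that when an access point is supplied, ECS requires the efs_volume_configuration root_directory to be "/"; the directory selection moves to the access point):

resource "aws_efs_access_point" "test" {
  file_system_id = aws_efs_file_system.myfilesystem.id

  # Perform all NFS operations as the grafana user.
  posix_user {
    uid = 472
    gid = 472
  }

  # Create /grafana owned by uid/gid 472 if it does not already exist,
  # so the container can write to it from the first launch.
  root_directory {
    path = "/grafana"
    creation_info {
      owner_uid   = 472
      owner_gid   = 472
      permissions = "755"
    }
  }
}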

Terraform ECS Task Definition jsonencode issue

I get this error when applying Terraform. Something is clearly wrong with my env_vars. I've tried making name and value quoted and unquoted.
Error: ECS Task Definition container_definitions is invalid: Error
decoding JSON: json: cannot unmarshal number into Go struct field
KeyValuePair.Environment.Value of type string
locals:
locals {
  task_name = "${var.project_name}-${var.environment}-pgadmin"
  env_vars = [
    {
      name  = "ENV",
      value = var.environment
    },
    {
      name  = "POSTGRES_HOST",
      value = module.rds.db_address
    },
    {
      name  = "POSTGRES_USER",
      value = module.rds.db_username
    },
    {
      name  = "POSTGRES_PORT",
      value = module.rdsdb_port
    }
  ]
}
task def template:
data "template_file" "task-definition" {
template = file("${path.module}/container_definition_template.json.tpl")
vars = {
container_name = local.task_name
container_image = "dpage/pgadmin4"
container_port = 3001
env_variables = jsonencode(local.env_vars)
secrets = jsonencode(local.secrets)
}
}
Task def:
resource "aws_ecs_task_definition" "pgadmin_task_definition" {
family = local.task_name
container_definitions = data.template_file.task-definition.rendered
task_role_arn = aws_iam_role.ecsTaskRole.arn
network_mode = "awsvpc"
cpu = 1024
memory = 2048
requires_compatibilities = ["FARGATE"]
execution_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
}
The actual JSON template is:
[
  {
    "name": "${container_name}",
    "image": "${container_image}",
    "startTimeout": 120,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${aws_logs_group}",
        "awslogs-region": "${aws_region}",
        "awslogs-stream-prefix": "${aws_log_stream_prefix}"
      }
    },
    "environment": ${env_variables},
    "secrets": ${secrets},
    "portMappings": [
      {
        "containerPort": ${container_port},
        "hostPort": ${container_port}
      }
    ]
  }
]
I think this happens because module.rdsdb_port is a number in local.env_vars, not a string. You can try:
value = tostring(module.rdsdb_port)
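If more than one of these values can come through as a non-string (ports, counts, booleans), a for expression normalizes them all before the jsonencode; a small sketch, where env_vars_str is just an illustrative name:

locals {
  # Coerce every value to a string so jsonencode always emits JSON
  # strings, which is what ECS expects for environment values.
  env_vars_str = [
    for e in local.env_vars : {
      name  = e.name
      value = tostring(e.value)
    }
  ]
}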

EFS volume hangs when accessed from an ECS task

I was previously having issues providing access to EFS from an ECS task (Providing access to EFS from ECS task)
This has now been resolved, inasmuch as the task starts, and it all looks fine.
The problem is that running df, or ls or touch on the mountpoint hangs indefinitely.
The task definition is below:
{
  "taskDefinitionArn": "arn:aws:ecs:eu-west-2:000000000000:task-definition/backend-app-task:53",
  "containerDefinitions": [
    {
      "name": "server",
      "image": "000000000000.dkr.ecr.eu-west-2.amazonaws.com/foo-backend:latest-server",
      "cpu": 512,
      "memory": 1024,
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [],
      "mountPoints": [
        {
          "sourceVolume": "persistent",
          "containerPath": "/opt/data/",
          "readOnly": false
        }
      ],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/foo",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "family": "backend-app-task",
  "taskRoleArn": "arn:aws:iam::000000000000:role/ecsTaskRole",
  "executionRoleArn": "arn:aws:iam::000000000000:role/myEcsTaskExecutionRole",
  "networkMode": "awsvpc",
  "revision": 53,
  "volumes": [
    {
      "name": "persistent",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-00000000000000000",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED",
        "transitEncryptionPort": 2049,
        "authorizationConfig": {
          "accessPointId": "fsap-00000000000000000",
          "iam": "ENABLED"
        }
      }
    }
  ],
  "status": "ACTIVE",
  "requiresAttributes": [
    { "name": "com.amazonaws.ecs.capability.logging-driver.awslogs" },
    { "name": "ecs.capability.execution-role-awslogs" },
    { "name": "ecs.capability.efsAuth" },
    { "name": "com.amazonaws.ecs.capability.ecr-auth" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19" },
    { "name": "ecs.capability.efs" },
    { "name": "com.amazonaws.ecs.capability.task-iam-role" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.25" },
    { "name": "ecs.capability.execution-role-ecr-pull" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18" },
    { "name": "ecs.capability.task-eni" }
  ],
  "placementConstraints": [],
  "compatibilities": ["EC2", "FARGATE"],
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "registeredAt": "2022-03-08T14:23:47.391Z",
  "registeredBy": "arn:aws:iam::000000000000:root",
  "tags": []
}
According to the docs, hanging can occur when large amounts of data are being written to the EFS volume. That is not the case here: the EFS volume is new and empty, with a size of 6 KiB. I also tried configuring it with provisioned throughput, but that made no difference.
EDIT
IAM role definition:
data "aws_iam_policy_document" "ecs_task_execution_role_base" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
# ECS task execution role
resource "aws_iam_role" "ecs_task_execution_role" {
name = var.ecs_task_execution_role_name
assume_role_policy = data.aws_iam_policy_document.ecs_task_execution_role_base.json
}
# ECS task execution role policy attachment
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
role = aws_iam_role.ecs_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role2" {
role = aws_iam_role.ecs_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonElasticFileSystemClientFullAccess"
}
resource "aws_iam_policy" "ecs_exec_policy" {
name = "ecs_exec_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = ["ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel",
]
Effect = "Allow"
Resource = "*"
},
]
})
}
resource "aws_iam_role" "ecs_task_role" {
name = "ecsTaskRole"
assume_role_policy = data.aws_iam_policy_document.ecs_task_execution_role_base.json
managed_policy_arns = ["arn:aws:iam::aws:policy/AmazonElasticFileSystemClientFullAccess","arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy", aws_iam_policy.ecs_exec_policy.arn]
}
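The IAM side looks plausible, and an indefinite hang on df/ls usually points at the network path rather than at permissions. Two things worth ruling out, per AWS's EFS troubleshooting guidance: that a mount target exists in the task's availability zone, and that the security group on the mount targets accepts NFS traffic from the task. A sketch of the latter, with hypothetical security group names (efs_mount_targets attached to the EFS mount targets, ecs_tasks attached to the task's ENI):

resource "aws_security_group_rule" "nfs_from_ecs_tasks" {
  # EFS mount targets must accept TCP 2049 (NFS) from the task's
  # security group, or filesystem operations will stall.
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.efs_mount_targets.id
  source_security_group_id = aws_security_group.ecs_tasks.id
}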

Django and Nginx Docker Containers with AWS ECS Error

I am currently building a project using Terraform and AWS ECS with two containers: a Django app and Nginx (to host the static files). Currently it works great; however, I am receiving an error in the Nginx logs (via CloudWatch Logs) saying,
CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.
I know this has to do with Django's ALLOWED_HOSTS, since DEBUG is set to False in settings.py, but I feel everything is wired up as it should be. Here is what my settings.py has for ALLOWED_HOSTS:
ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS', '').split()
From there, here is my task definition file, container-def.json, which does the job in AWS ECS:
[
  {
    "name": "django-app",
    "image": "${django_docker_image}",
    "cpu": 10,
    "memory": 256,
    "memoryReservation": 128,
    "links": [],
    "essential": true,
    "portMappings": [
      {
        "hostPort": 0,
        "containerPort": 8000,
        "protocol": "tcp"
      }
    ],
    "command": ["gunicorn", "-w", "3", "-b", ":8000", "project.wsgi:application"],
    "environment": [
      {
        "name": "RDS_DB_NAME",
        "value": "${rds_db_name}"
      },
      {
        "name": "RDS_USERNAME",
        "value": "${rds_username}"
      },
      {
        "name": "RDS_PASSWORD",
        "value": "${rds_password}"
      },
      {
        "name": "RDS_PORT",
        "value": "5432"
      },
      {
        "name": "ALLOWED_HOSTS",
        "value": "${allowed_hosts}"
      }
    ],
    "mountPoints": [
      {
        "containerPath": "/usr/src/app/staticfiles",
        "sourceVolume": "static_volume"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/frontend-container",
        "awslogs-region": "us-east-1"
      }
    }
  },
  {
    "name": "nginx",
    "image": "${ngnix_docker_image}",
    "essential": true,
    "cpu": 10,
    "memory": 128,
    "links": ["django-app"],
    "portMappings": [
      {
        "hostPort": 0,
        "containerPort": 80,
        "protocol": "tcp"
      }
    ],
    "mountPoints": [
      {
        "containerPath": "/usr/src/app/staticfiles",
        "sourceVolume": "static_volume"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/nginx",
        "awslogs-region": "us-east-1"
      }
    }
  }
]
The variable is declared in my var.tf file:
####### Input URL of ALLOWED_HOSTS in Django's settings ############
variable "allowed_hosts" {
  description = "Domain name for allowed hosts"
  default     = ".example.org"
}
And lastly, here is the Terraform data template where all these variables are passed in:
### Here lies the container-def.json file specifying what parameters
### each container must have.
data "template_file" "ecs-containers" {
  template = file("container-definitions/container-def.json")
  vars = {
    django_docker_image = var.django_docker_image
    ngnix_docker_image  = var.ngnix_docker_image
    rds_db_name         = var.rds_db_name
    rds_username        = var.rds_username
    rds_password        = var.rds_password
    allowed_hosts       = var.allowed_hosts
  }
}
I would appreciate any feedback on this. I know I'm ALMOST there. Thanks all.
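One detail that stands out, though the thread records no answer: the CommandError comes from a Django management command, yet it lands in the /ecs/nginx log group, and the nginx container definition passes no environment variables at all. If that container's image runs any Django command on startup (collectstatic, for example), it would see an empty ALLOWED_HOSTS regardless of what the django-app container receives. A hedged first step would be to give that container the variable too, in container-def.json:

{
  "name": "nginx",
  ...
  "environment": [
    {
      "name": "ALLOWED_HOSTS",
      "value": "${allowed_hosts}"
    }
  ]
}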