I get this error when applying Terraform. It's clearly something wrong with my env_vars. I've tried wrapping name and value in quotes and without.
Error: ECS Task Definition container_definitions is invalid: Error
decoding JSON: json: cannot unmarshal number into Go struct field
KeyValuePair.Environment.Value of type string
locals:
locals {
  task_name = "${var.project_name}-${var.environment}-pgadmin"

  env_vars = [
    {
      name  = "ENV",
      value = var.environment
    },
    {
      name  = "POSTGRES_HOST",
      value = module.rds.db_address
    },
    {
      name  = "POSTGRES_USER",
      value = module.rds.db_username
    },
    {
      name  = "POSTGRES_PORT",
      value = module.rds.db_port
    }
  ]
}
Task definition template:
data "template_file" "task-definition" {
  template = file("${path.module}/container_definition_template.json.tpl")

  vars = {
    container_name  = local.task_name
    container_image = "dpage/pgadmin4"
    container_port  = 3001
    env_variables   = jsonencode(local.env_vars)
    secrets         = jsonencode(local.secrets)
  }
}
Task definition:
resource "aws_ecs_task_definition" "pgadmin_task_definition" {
  family                   = local.task_name
  container_definitions    = data.template_file.task-definition.rendered
  task_role_arn            = aws_iam_role.ecsTaskRole.arn
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048
  requires_compatibilities = ["FARGATE"]
  execution_role_arn       = aws_iam_role.ecsTaskExecutionRole.arn
}
The actual JSON template is:
[
  {
    "name": "${container_name}",
    "image": "${container_image}",
    "startTimeout": 120,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${aws_logs_group}",
        "awslogs-region": "${aws_region}",
        "awslogs-stream-prefix": "${aws_log_stream_prefix}"
      }
    },
    "environment": ${env_variables},
    "secrets": ${secrets},
    "portMappings": [
      {
        "containerPort": ${container_port},
        "hostPort": ${container_port}
      }
    ]
  }
]
I think this happens because module.rds.db_port is a number in local.env_vars, not a string; ECS requires every environment value to be a string, and jsonencode preserves the number. You can try with:
value = tostring(module.rds.db_port)
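If you would rather not remember to call tostring on each entry, a minimal sketch (assuming the same local.env_vars as above, with a hypothetical local name env_vars_str) coerces every value in one place, so a new number-typed entry cannot reintroduce the error:

```hcl
locals {
  # Coerce every environment value to a string before it is
  # jsonencode'd into the container definition.
  env_vars_str = [
    for e in local.env_vars : {
      name  = e.name
      value = tostring(e.value)
    }
  ]
}
```

You would then pass jsonencode(local.env_vars_str) to the template instead of jsonencode(local.env_vars).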
Related
Hello, I'm using Terraform to spin up Zookeeper for a development environment, and I keep running into the following issue when I apply the Terraform.
Stopped reason Error response from daemon: create
ecs-clearstreet-basis-dev-Zookeeper-46-clearstreet-confluent-c2cf998e98d1afd45900:
VolumeDriver.Create: mounting volume failed: Specified port [2999] is
unavailable. Try selecting a different port.
I don't have this issue when attaching an EFS to a Fargate container.
Here is the terraform for reference.
resource "aws_ecs_task_definition" "ecs-zookeeper" {
  family = "${var.project}-Zookeeper"
  container_definitions = templatefile("./task-definitions/zookeeper.tftpl",
    {
      zookeeper_image                  = "${var.zookeeper_image}:${var.zookeeper_image_version}"
      zookeeper_port                   = "${var.zookeeper_port}"
      zookeeper_port_communication     = "${var.zookeeper_port_communication}"
      zookeeper_port_election          = "${var.zookeeper_port_election}"
      zookeeper-servers                = "server.1=${var.project}1.${var.dns_zone}:2888:3888;2181"
      zookeeper-elect-port-retry       = "${var.zookeeper-elect-port-retry}"
      zookeeper_4lw_commands_whitelist = "${var.zookeeper_4lw_commands_whitelist}"
      aws_region                       = "${var.aws_region}"
    }
  )
  task_role_arn = var.ecs-task-role-arn
  network_mode  = "awsvpc"

  volume {
    name      = "resolv"
    host_path = "/etc/docker_resolv.conf"
  }

  volume {
    name = "Client-confluent"
    efs_volume_configuration {
      file_system_id          = var.efs-fsid
      root_directory          = "/Platform/confluent"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config {
        access_point_id = var.efs-confluent-fsap
        iam             = "ENABLED"
      }
    }
  }
}
resource "aws_ecs_service" "ecs-zookeeper" {
  name                    = "Zookeeper"
  cluster                 = aws_ecs_cluster.ecs.id
  task_definition         = aws_ecs_task_definition.ecs-zookeeper.arn
  enable_ecs_managed_tags = true
  enable_execute_command  = true
  desired_count           = 1
  propagate_tags          = "SERVICE"
  launch_type             = "EC2"

  # only manual task rotation via task stop
  deployment_minimum_healthy_percent = 33
  deployment_maximum_percent         = 100

  network_configuration {
    subnets          = var.vpc_subnets
    security_groups  = [var.ECS-EC2-SG]
    assign_public_ip = false
  }

  service_registries {
    registry_arn = aws_service_discovery_service.discovery_service-zookeeper.arn
  }

  ordered_placement_strategy {
    type  = "spread"
    field = "host"
  }

  ordered_placement_strategy {
    type  = "spread"
    field = "attribute:ecs.availability-zone"
  }

  placement_constraints {
    type       = "memberOf"
    expression = "attribute:program == PLATFORM"
  }

  lifecycle {
    create_before_destroy = true
  }

  # count = var.zookeeper-instance-number
}
resource "aws_service_discovery_service" "discovery_service-zookeeper" {
  name = "${var.project}-zookeeper"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.discovery_namespace.id
    dns_records {
      ttl  = 10
      type = "A"
    }
    routing_policy = "MULTIVALUE"
  }

  health_check_custom_config {
    failure_threshold = 1
  }

  # count = var.zookeeper-instance-number
}
Here is the task definition for reference (note that the original had two separate "mountPoints" keys, which is invalid JSON; they are merged into one array here, and the sourceVolume case is matched to the volume name "Client-confluent"):
[
  {
    "name": "zookeeper",
    "image": "${zookeeper_image}",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": ${zookeeper_port},
        "hostPort": ${zookeeper_port}
      },
      {
        "containerPort": ${zookeeper_port_communication},
        "hostPort": ${zookeeper_port_communication}
      },
      {
        "containerPort": ${zookeeper_port_election},
        "hostPort": ${zookeeper_port_election}
      }
    ],
    "environment": [
      {
        "name": "ZOO_SERVERS",
        "value": "${zookeeper-servers}"
      },
      {
        "name": "ZOO_STANDALONE_ENABLED",
        "value": "false"
      },
      {
        "name": "ZOO_ELECT_PORT_RETRY",
        "value": "${zookeeper-elect-port-retry}"
      },
      {
        "name": "ZOO_4LW_COMMANDS_WHITELIST",
        "value": "${zookeeper_4lw_commands_whitelist}"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-region": "${aws_region}",
        "awslogs-group": "/fargate/client/basis/program-zookeeper",
        "awslogs-create-group": "true",
        "awslogs-stream-prefix": "program-zookeeper"
      }
    },
    "workingDir": "/var/lib/zookeeper",
    "mountPoints": [
      {
        "sourceVolume": "resolv",
        "containerPath": "/etc/resolv.conf"
      },
      {
        "sourceVolume": "Client-confluent",
        "containerPath": "/var/lib/zookeeper"
      }
    ]
  }
]
Any help would be greatly appreciated!
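One thing worth checking, since the daemon reports that port 2999 is already taken on the EC2 host: transit_encryption_port is optional in efs_volume_configuration, and leaving it out lets the EFS mount helper pick a free port itself. A minimal sketch of the volume block under that assumption:

```hcl
  volume {
    name = "Client-confluent"
    efs_volume_configuration {
      file_system_id     = var.efs-fsid
      root_directory     = "/Platform/confluent"
      transit_encryption = "ENABLED"
      # transit_encryption_port omitted: the EFS mount helper
      # chooses an available port on the host instead of a fixed 2999,
      # which avoids collisions when several tasks share the instance.
      authorization_config {
        access_point_id = var.efs-confluent-fsap
        iam             = "ENABLED"
      }
    }
  }
```

This matters more on the EC2 launch type than on Fargate, because on EC2 multiple tasks can land on the same host and contend for the same fixed port.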
I use Terraform to create an ECS service and I got this error:
"InvalidParameterException: Task definition does not support launch_type FARGATE"
I checked solutions from other posts but they did not help. Can anyone tell me why I got this error?
container-definition template
[
  {
    "name": "api",
    "image": "${app_image}",
    "essential": true,
    "memoryReservation": 256,
    "environment": [
      {"name": "ALLOWED_HOSTS", "value": "${allowed_hosts}"}
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log_group_name}",
        "awslogs-region": "${log_group_region}",
        "awslogs-stream-prefix": "api"
      }
    },
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
ecs.tf
resource "aws_ecs_task_definition" "api" {
  family                   = "${local.prefix}-api"
  container_definitions    = data.template_file.api_container_definitions.rendered
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.task_execution_role.arn
  task_role_arn            = aws_iam_role.task_role.arn

  volume {
    name = "static"
  }

  tags = local.common_tags
}
resource "aws_ecs_service" "api" {
  name            = "${local.prefix}-api"
  cluster         = aws_ecs_cluster.main.name
  task_definition = aws_ecs_task_definition.api.family
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets = [
      aws_subnet.public_a.id,
      aws_subnet.public_b.id,
    ]
    security_groups  = [aws_security_group.ecs_service.id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_alb_target_group.ecs_tg.id
    container_name   = "api"
    container_port   = var.app_port
  }
}
Thank you
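One thing worth double-checking in the service above: task_definition is set to the bare family name, which resolves to the latest ACTIVE revision of that family rather than the revision this configuration just registered. If an older revision of the same family was registered without FARGATE compatibility, the service can end up pointing at it. A sketch that pins the exact revision Terraform created (other arguments unchanged):

```hcl
resource "aws_ecs_service" "api" {
  name    = "${local.prefix}-api"
  cluster = aws_ecs_cluster.main.name

  # Reference the full ARN so the service uses the exact revision
  # registered by this configuration, not whatever revision of the
  # family happens to be latest.
  task_definition = aws_ecs_task_definition.api.arn

  desired_count = 1
  launch_type   = "FARGATE"

  # ... network_configuration and load_balancer as in the question ...
}
```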
My Grafana container will not launch on Fargate via Terraform, as there is an issue with the mounted file storage. The logs display three entries:
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
GF_PATHS_DATA='/var/lib/grafana' is not writable.
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
My task role assigned to the task definition does have full EFS access.
I'm very new to AWS and Terraform, so I am having trouble understanding what the Grafana documentation is suggesting. I've added a "user" field in the task definition and set it to 472, but I still have the same errors. Thank you so much for reading my issue.
resource "aws_ecs_task_definition" "grafana" {
  family                   = "grafana"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "512"
  memory                   = "1024"
  task_role_arn            = "arn:aws:iam::692869463706:role/grafanaTaskRole"
  execution_role_arn       = "arn:aws:iam::692869463706:role/ecsTaskExecutionRole"

  container_definitions = <<DEFINITION
[
  {
    "memory": 512,
    "cpu": 256,
    "portMappings": [
      {
        "hostPort": 3000,
        "containerPort": 3000,
        "protocol": "tcp"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-create-group": "true",
        "awslogs-group": "ecs-grafana",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "grafana-task"
      }
    },
    "essential": true,
    "mountPoints": [
      {
        "containerPath": "/var/lib/grafana",
        "sourceVolume": "efs-html",
        "readOnly": null
      }
    ],
    "name": "grafana",
    "image": "grafana/grafana:latest",
    "user": "472"
  }
]
DEFINITION

  volume {
    name = "efs-html"
    efs_volume_configuration {
      file_system_id          = aws_efs_file_system.myfilesystem.id
      root_directory          = "/grafana"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config {
        access_point_id = aws_efs_access_point.test.id
        iam             = "ENABLED"
      }
    }
  }
}
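Because the mount goes through an EFS access point, the POSIX identity enforced on the file system comes from the access point, not from the container's "user" field, which is why setting user 472 alone does not help. A sketch of what the access point (here the existing aws_efs_access_point.test) could look like so that Grafana's uid/gid own the directory; uid/gid 472 is an assumption based on the user the official grafana/grafana image runs as:

```hcl
resource "aws_efs_access_point" "test" {
  file_system_id = aws_efs_file_system.myfilesystem.id

  # All I/O through this access point acts as uid/gid 472,
  # matching the grafana user inside the container.
  posix_user {
    uid = 472
    gid = 472
  }

  root_directory {
    path = "/grafana"
    # Create the directory owned by 472 if it does not exist yet,
    # so GF_PATHS_DATA is writable on first launch.
    creation_info {
      owner_uid   = 472
      owner_gid   = 472
      permissions = "755"
    }
  }
}
```

Note that when root_directory.path on the access point already points at /grafana, the root_directory in efs_volume_configuration is interpreted relative to the access point, so it typically stays "/".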
I have the settings.py file below of a Django application using Terraform and Docker Compose, and I'm trying to get the database values stored in AWS Secrets Manager into the ECS task definition.
settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": os.environ.get("POSTGRES_HOST"),
        "PORT": 5432,
    }
}
task definition
resource "aws_ecs_task_definition" "ecs_task_definition" {
  family                   = "ecs_container"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = data.aws_iam_role.fargate.arn
  task_role_arn            = data.aws_iam_role.fargate.arn

  container_definitions = jsonencode([
    {
      "name" : "ecs_test_backend",
      "image" : "${aws_ecr_repository.ecr_repository.repository_url}:latest",
      "cpu" : 256,
      "memory" : 512,
      "essential" : true,
      "portMappings" : [
        {
          containerPort = 8000
        }
      ],
      "logConfiguration" : {
        "logDriver" : "awslogs",
        "options" : {
          "awslogs-region" : "us-east-1",
          "awslogs-group" : "/ecs/ecs_test_backend",
          "awslogs-stream-prefix" : "ecs"
        }
      },
      "environment" : [
        {
          "name" : "POSTGRES_DB",
          "value" : "${var.POSTGRES_NAME}" <=== HERE
        },
        {
          "name" : "POSTGRES_PASSWORD",
          "value" : "${var.POSTGRES_PASSWORD}" <=== HERE
        },
        {
          "name" : "POSTGRES_USERNAME",
          "value" : "${var.POSTGRES_USERNAME}" <=== HERE
        },
        {
          "name" : "POSTGRES_PORT",
          "value" : "${var.POSTGRES_PORT}" <=== HERE
        },
        {
          "name" : "POSTGRES_HOST",
          "value" : "${var.POSTGRES_HOST}" <=== HERE
        }
      ]
    }
  ])
}
variables
variable "POSTGRES_PASSWORD" {
  type    = string
  default = "somepassword"
}
The variables above are the same ones used while creating the Postgres RDS instance.
The configuration below, var.XXX, does not seem to work, as the task logs return psycopg2.OperationalError: FATAL: password authentication failed for user "root".
It's probably because it's not able to read the value.
Is the above the correct way to grab a value from AWS Secrets Manager using Terraform and ECS?
You should use the sensitive data handling feature of ECS tasks, documented here.
You would move the environment variables from the environment block of your task definition to the secrets block, and give the ARN of the secret instead of the value. The ECS container will then read those secrets when it starts your container, and set them in the container's environment.
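A sketch of what that move could look like in the task definition from the question; the aws_secretsmanager_secret.postgres_password resource name is an assumption about how the secret is stored:

```hcl
  container_definitions = jsonencode([
    {
      name  = "ecs_test_backend"
      image = "${aws_ecr_repository.ecr_repository.repository_url}:latest"

      # Non-sensitive values can stay in environment ...
      environment = [
        { name = "POSTGRES_HOST", value = var.POSTGRES_HOST }
      ]

      # ... sensitive ones move to secrets, referenced by ARN.
      # ECS reads and injects the value when the container starts.
      secrets = [
        {
          name      = "POSTGRES_PASSWORD"
          valueFrom = aws_secretsmanager_secret.postgres_password.arn
        }
      ]
    }
  ])
```

The task execution role also needs secretsmanager:GetSecretValue on the referenced secret for the injection to work.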
I have been looking and I couldn't find any good examples of achieving this the proper way. While the response from Mark B adds some context, code snippets can still be helpful, since coding it (with all the requirements) is sometimes more complicated than the theoretical explanation. With that said, I will split what you need into pieces:
The parameter store
You need to save your secrets somewhere, and the parameter store is a good place for it according to AWS best practices:
resource "aws_ssm_parameter" "main" {
  for_each    = var.parameters
  name        = "/path/${var.app_name}/${each.key}"
  description = "Secrets for application ${var.app_name}"
  type        = "SecureString"
  value       = each.value
}

variable "parameters" {
  type = map(any)
}

variable "app_name" {
  type = string
}

output "arns" {
  value = [for k, v in var.parameters : { name = k, valueFrom = aws_ssm_parameter.main[k].arn }]
}
(The code above creates several parameters)
Be sure to add the correct permissions to your task execution role:
resource "aws_iam_role_policy" "parameter_policy" {
  name = "mypolicy"
  role = aws_iam_role.your_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ssm:GetParameters",
        ]
        Effect   = "Allow"
        Resource = ["*"]
      },
    ]
  })
}
Be aware that the above is just an example; to follow best practices, be sure to limit the resources to the ARN(s) of the parameters created above.
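One way to do that scoping, assuming the parameters were created by the aws_ssm_parameter.main resource above, is a for expression over its instances:

```hcl
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = ["ssm:GetParameters"]
        Effect = "Allow"
        # Grant access only to this application's parameters,
        # using the ARNs of the aws_ssm_parameter instances above
        # instead of a wildcard.
        Resource = [for p in aws_ssm_parameter.main : p.arn]
      },
    ]
  })
```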
Finally, how to use it:
container_definitions = <<TASK_DEFINITION
[
  {
    "name": "hello-world",
    "image": "nginxdemos/hello",
    "cpu": 1024,
    "memory": 2048,
    "essential": true,
    "secrets": ${jsonencode(module.hello-world-secrets.arns)},
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-region": "your_region",
        "awslogs-group": "hello_world",
        "awslogs-stream-prefix": "hello_world"
      }
    }
  }
]
TASK_DEFINITION
module "hello-world-secrets" {
  source   = "./modules/secrets"
  app_name = "hello-world"
  parameters = {
    hello = "world"
  }
}
You need to decide at which point to create and store the secret, e.g. have your CI/CD inject the secret and create the parameter store resource.
Hopefully this helps someone else looking for a way of achieving this.
I am trying to deploy ECS task definition with Terraform. Here is my ECS task definition resource code:
resource "aws_ecs_task_definition" "my_TD" {
  family = "my_container"

  container_definitions = <<DEFINITION
[
  {
    "name": "my_container",
    "image": "${format("%s:qa", var.my_ecr_arn)}",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "memory": 300,
    "networkMode": "awsvpc",
    "environment": [
      {
        "name": "PORT",
        "value": "80"
      },
      {
        "name": "Token",
        "value": "xxxxxxxx"
      }
    ]
  }
]
DEFINITION

  requires_compatibilities = ["EC2"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  task_role_arn            = var.ecs_role
  execution_role_arn       = var.ecs_role
}
The environment variables are hardcoded here, so I tried to take them from Terraform input instead. I modified it to:
variable "my_env_variables" {
  default = [
    {
      "name" : "PORT",
      "value" : "80"
    },
    {
      "name" : "token",
      "value" : "xxxxx"
    }
  ]
}
...
...
"environment" : "${var.my_env_variables}"
...
...
It's giving me an issue like this:
var.my_env_variables is tuple with 1 element
Cannot include the given value in a string template: string required.
I am new to Terraform. How can I solve this issue?
You need a JSON string, which you can get using jsonencode. So you could try the following:
resource "aws_ecs_task_definition" "my_TD" {
  family = "my_container"

  container_definitions = <<DEFINITION
[
  {
    "name": "my_container",
    "image": "${format("%s:qa", var.my_ecr_arn)}",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "memory": 300,
    "networkMode": "awsvpc",
    "environment": ${jsonencode(var.my_env_variables)}
  }
]
DEFINITION

  requires_compatibilities = ["EC2"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  task_role_arn            = var.ecs_role
  execution_role_arn       = var.ecs_role
}
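As a side note, a sketch of the same variable with an explicit type constraint; this makes Terraform reject any entry whose value is not a string before jsonencode ever runs (the default values here are just the question's placeholders):

```hcl
variable "my_env_variables" {
  # Each entry must be an object with string name and string value,
  # matching the shape ECS expects for container environment variables.
  type = list(object({
    name  = string
    value = string
  }))
  default = [
    { name = "PORT", value = "80" },
    { name = "token", value = "xxxxx" }
  ]
}
```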