At the moment I have multiple aws_ecs_task_definition resources that act as blueprints for my ECS tasks. Rather than keeping a separate module for each of these unique definitions, I would like to structure my task definition to be generic so that it can be reused for each of my containers. The main challenge I am facing is that the environment variables differ between task definitions: one task definition might have 4 environment variables and another might have 12. Also, some of the values for these variables are provided by other modules via outputs (e.g. database endpoints). Below is an example of an existing task definition.
resource "aws_ecs_task_definition" "service" {
  execution_role_arn       = var.ecsTaskExecutionRole
  task_role_arn            = var.ecsTaskExecutionRole
  requires_compatibilities = ["FARGATE"]
  family                   = "${var.name}-definition"

  container_definitions = jsonencode([
    {
      name      = var.name
      image     = var.image
      cpu       = 1024
      memory    = 2048
      essential = true
      command   = ["start", "--auto-build"]
      environment = [
        {
          name  = "database_endpoint"
          value = var.database_endpoint
        },
        {
          name  = "database_password"
          value = "admin"
        }
      ]
      healthCheck = {
        command     = ["CMD-SHELL", "echo \"hello\""]
        interval    = 10
        timeout     = 60
        retries     = 10
        startPeriod = 60
      }
      portMappings = [
        {
          containerPort = 7278
          hostPort      = 7278
        }
      ]
    }
  ])

  network_mode = "awsvpc"
  cpu          = 1024
  memory       = 2048

  runtime_platform {
    operating_system_family = "LINUX"
  }
}
In essence, I want to dynamically build the environment list based on an object that is provided as a variable in the module call. For example, that variable might look like this, where the empty-string values are filled in at a later point:
containers = {
  image1 = {
    image = "imageUrl"
  }
  image2 = {
    image               = "imageUrl"
    documentdb_endpoint = ""
  }
  image3 = {
    image          = "imageUrl"
    redis_endpoint = ""
  }
  image4 = {
    image = "imageUrl"
  }
}
Would appreciate any advice to help make the code more concise.
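One possible sketch (not a definitive answer): if the per-container environment variables are grouped under a single `env` map attribute — which is a restructuring of the `containers` variable above, and relies on `optional()` with a default, available from Terraform 1.3 — the environment list can be built with a `for` expression inside `jsonencode`:

```hcl
variable "containers" {
  description = "Map of container name => settings; env holds arbitrary environment variables"
  type = map(object({
    image = string
    env   = optional(map(string), {})
  }))
}

resource "aws_ecs_task_definition" "service" {
  for_each = var.containers

  family                   = "${each.key}-definition"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048

  container_definitions = jsonencode([
    {
      name      = each.key
      image     = each.value.image
      essential = true
      # Turn the env map into the list-of-objects shape ECS expects,
      # however many entries each container happens to have.
      environment = [
        for k, v in each.value.env : { name = k, value = v }
      ]
    }
  ])
}
```

Values that come from other modules' outputs (such as a database endpoint) can then be interpolated into the `env` map at the module call site.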
I have a terraform template that creates an AWS ECS task.
I filled a variable with a list of objects like this:
`
variables.tf
variable "microservices" {
  description = "the microservices to implement"
  type = list(object({
    name = string,
    port = number,
    secrets = optional(list(object({
      key = string,
      arn = string
    })))
  }))
}
`
Then in my main.tf I have the following:
`
main.tf
resource "aws_ecs_task_definition" "task_definition" {
  count                    = length(var.microservices)
  family                   = "${var.microservices[count.index].name}-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048
  execution_role_arn       = "arn:aws:iam::xxxxx:role/service-role/xxxx-test-service-role"
  container_definitions = jsonencode([
    {
      name      = var.microservices[count.index].name
      image     = aws_ecr_repository.microservices_ecr_repos[count.index].repository_url
      cpu       = 1
      essential = true
      Ulimits = [{
        Name      = "nofile"
        SoftLimit = 65535
        HardLimit = 65535
      }]
      //length(var.microservices[count.index].secrets) > 0 ?
      Secrets = [{
        Name      = length(var.microservices[count.index].secrets) > 0 ? var.microservices[count.index].secrets[0].key : 0
        ValueFrom = length(var.microservices[count.index].secrets) > 0 ? var.microservices[count.index].secrets[0].arn : 0
        //Name = var.microservices[count.index].secrets[0].key
        //ValueFrom = var.microservices[count.index].secrets[0].arn
`
I don't understand how I can create Secrets by parsing the variables.
The secrets are optional (they may or may not exist).
I would need a sort of for_each over just the Secrets section, to check whether secrets exist in the input and then fill in this field.
An example of inputs is the following:
`
microservices = [
{
"name" = "api",
"port" = 3000,
"secrets" = [{ "key" = "test123", "arn" = "0123"},{ "key" = "testXXX", "arn" = "1010"}] },
{
"name" = "web",
"port" = 3000
"secrets" = [{ "key" = "test456", "arn" = "4567"}]
}]
`
Has anyone approached this kind of issue/configuration? What I would like to achieve is to create a task definition in AWS ECS with a secrets field (or an empty secrets section) based on the microservices input.
I tested a different data structure, as in:
flatten object made of nested list in terraform
But in this scenario, although I was able to create a new data structure, when I create the resource (e.g. aws_ecs_task_definition) with a for_each it replicates some configuration, e.g. ECS tasks with the same name:
`
locals {
microservices_and_secrets = merge([
for ecs_taks, group in var.microservices:
{
for secrets_key, secret in group["secrets"]:
"${ecs_taks}-${secrets_key}" => {
name = group["name"]
port = group["port"]
secret = secret
}
}
]...)
}
`
`
resource "aws_ecs_task_definition" "task_definition" {
for_each = local.microservices_and_secrets
family = "${each.value.name}-${var.environment}" # <-- ISSUE with creation: the ECS task microservice name is replicated across for_each instances
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = 1024
memory= 2048
`
Another problem with this solution is that I can't have a microservice without any secrets, e.g. the issue occurs with the following input:
`
microservices = [
{
"name" = "api",
"port" = 3000,
"secrets" = [{ "key" = "test123", "arn" = "0123"},{ "key" = "testXXX", "arn" = "1010"}] },
{
"name" = "web",
"port" = 3000
"secrets" = [{ "key" = "test456", "arn" = "4567"}]
},
{
"name" = "ciaotask",
"port" = 3000
}
]
`
`
Error: Iteration over null value
│
│ on main-aws-ecs.tf line 153, in locals:
│ 152: {
│ 153: for secrets_key, secret in group["secrets"]:
│ 154: "${ecs_taks}-${secrets_key}" => {
│ 155: name = group["name"]
│ 156: port = group["port"]
│ 157: secret = secret
│ 158: }
│ 159: }
│ ├────────────────
│ │ group["secrets"] is null
│
│ A null value cannot be used as the collection in a 'for' expression.
`
Could anyone help with how to manage the ECS task creation based on the microservices input posted above?
The question is: how can I create one aws_ecs_task_definition for each microservice present in the microservices variable, where each task definition can have zero to n Secrets, starting from the microservices list of objects?
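For what it's worth, the null-secrets case can also be handled inline, without flattening, by substituting an empty list when the attribute is null. A sketch against the microservices variable shown above (the image reference and the other container attributes are placeholders):

```hcl
resource "aws_ecs_task_definition" "task_definition" {
  # One task definition per microservice, keyed by name
  for_each = { for m in var.microservices : m.name => m }

  family                   = "${each.key}-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048

  container_definitions = jsonencode([
    {
      name      = each.key
      image     = "placeholder-image-url" # placeholder
      essential = true
      # coalesce() substitutes [] when secrets is null, so the for
      # expression yields zero to n entries without erroring.
      secrets = [
        for s in coalesce(each.value.secrets, []) : {
          name      = s.key
          valueFrom = s.arn
        }
      ]
    }
  ])
}
```

With this shape the "Iteration over null value" error cannot occur, and a microservice with no secrets simply renders an empty secrets list.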
I solved the issue.
I started from this guide: https://codeburst.io/how-to-securely-use-aws-secrets-manager-to-inject-secrets-into-ecs-using-infrastructure-as-code-ff2b39b420b6
Then I created a template file like this:
`
container_definitions.json.tpl
[{
"name" : "${name}",
"image": "${image}",
"cpu" : 1,
"essential" : true,
"Ulimits" : [{
"Name" : "nofile",
"SoftLimit" : 65535,
"HardLimit" : 65535
}],
"Secrets" : ${secrets},
"Environment" : ${environment},
"LogConfiguration" : {
"LogDriver" : "awslogs",
"Options" : {
"awslogs-group" : "${awslogs-group}",
"awslogs-region" : "${aws_region}",
"awslogs-stream-prefix" : "ecs"
}
},
"portMappings" : [
{
"containerPort" : 3000,
"hostPort" : 3000
}
]
}]
`
In my main.tf, I created the resources in this way:
`
data "template_file" "container_definitions" {
  count    = length(var.microservices)
  template = file("${path.module}/template_dir/container_definitions.json.tpl")
  vars = {
    aws_region    = var.aws_region
    cpu           = 1
    image         = aws_ecr_repository.microservices_ecr_repos[count.index].repository_url
    name          = var.microservices[count.index].name
    awslogs-group = aws_cloudwatch_log_group.cloudwatch_log_groups[count.index].id
    environment   = jsonencode(var.microservices[count.index].environment)
    secrets       = jsonencode(var.microservices[count.index].secrets)
  }
}
/*
AWS ECS Task definition
*/
resource "aws_ecs_task_definition" "task_definition" {
  count                    = length(var.microservices)
  family                   = "${var.microservices[count.index].name}-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.microservices[count.index].cpu
  memory                   = var.microservices[count.index].memory
  execution_role_arn       = aws_iam_role.task_execution_roles[count.index].arn
  task_role_arn            = aws_iam_role.task_execution_roles[count.index].arn
  container_definitions    = data.template_file.container_definitions[count.index].rendered
}
`
In this way I was able to create a task definition in AWS ECS with 0..n secrets and 0..n environment variables based on input like the following:
`
microservices = [
{
"name" = "api",
"port" = 3000,
"cpu" = 1024,
"memory" = 2048,
"secrets" = [{ "name" = "test123", "valuefrom" = "0123"},{ "name" = "testXXX", "valuefrom" = "1010"}] },
{
"name" = "web",
"port" = 3000,
"cpu" = 1024,
"memory" = 2048,
"secrets" = [{ "name" = "test456", "valuefrom" = "4567"}],
"environment" = [{ "name" = "weenv", "value" = "emi_is_ok" },{ "name" = "weenv123", "value" = "emi_is_ok123" } ]
},
{
"name" = "ciaotask",
"port" = 3000
"cpu" = 1024,
"memory" = 2048
}
]
`
I hope this helps someone else who ran into the same issue.
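One note on this solution: the template_file data source comes from the now-deprecated hashicorp/template provider. On current Terraform versions, the built-in templatefile() function renders the same template file without a data source. A sketch of the equivalent resource, reusing the same template and variable names as above:

```hcl
resource "aws_ecs_task_definition" "task_definition" {
  count  = length(var.microservices)
  family = "${var.microservices[count.index].name}-${var.environment}"
  # ... requires_compatibilities, network_mode, cpu, memory, roles as before

  # templatefile() renders the .tpl at plan time; no data source needed
  container_definitions = templatefile(
    "${path.module}/template_dir/container_definitions.json.tpl",
    {
      aws_region    = var.aws_region
      cpu           = 1
      image         = aws_ecr_repository.microservices_ecr_repos[count.index].repository_url
      name          = var.microservices[count.index].name
      awslogs-group = aws_cloudwatch_log_group.cloudwatch_log_groups[count.index].id
      environment   = jsonencode(var.microservices[count.index].environment)
      secrets       = jsonencode(var.microservices[count.index].secrets)
    }
  )
}
```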
I want to create multiple resources (multiple GCP Cloud SQL instances), and this is what I have:
locals {
psql_settings = [
{ "name" : "psql1", "location" : "us-central1", "zone" : "us-central1-c" },
{ "name" : "psql2", "location" : "us-east1", "zone" : "us-east1-b" }
]
}
I have to use them in JSON format because this will be stored in Consul for dynamic changes.
Using this locals value, how can I create multiple resources? I am trying:
module "postgresql-db" {
depends_on = [
module.vpc
]
source = "../modules/postgres"
for_each = local.psql_settings[0]
name = each.value.name
random_instance_name = true
database_version = "POSTGRES_13"
project_id = "xyz-project"
zone = each.value.zone
region = each.value.location
...
...
It should be:
for_each = { for idx, val in local.psql_settings : idx => val }
This converts your list of maps into a map of maps, which is what for_each requires.
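As a variation: if the instance names are unique, you can key the map by name instead of by index, which makes plan output more readable and avoids churn if the list is ever reordered. A sketch (the remaining module arguments are unchanged from the snippet above):

```hcl
module "postgresql-db" {
  source = "../modules/postgres"

  # Key each instance by its name rather than its list position
  for_each = { for s in local.psql_settings : s.name => s }

  name                 = each.value.name
  random_instance_name = true
  database_version     = "POSTGRES_13"
  zone                 = each.value.zone
  region               = each.value.location
  # ... remaining arguments unchanged
}
```

With index keys, inserting an item in the middle of the list shifts every later key and forces those instances to be recreated; name keys are stable.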
I am bringing up an aws_ecs_task_definition with the following terraform configuration.
I pass local.image_tag as a variable to control the deployment of our ECR image through terraform.
I am able to bring up the ECS cluster on the initial terraform plan/apply cycle just fine.
However, on subsequent terraform plan/apply cycles, terraform forces a new container definition and therefore redeploys the entire task definition, even though our ECR image tag local.image_tag remains the same.
This behaviour causes an unintended task definition recycle without any changes to the ECR image, just terraform forcing values to defaults.
TF Config
resource "aws_ecs_task_definition" "this_task" {
family = "this-service"
execution_role_arn = var.this_role
task_role_arn = var.this_role
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = 256
memory = var.env != "prod" ? 512 : 1024
tags = local.common_tags
# Log to datadog if it's running in the prod account.
container_definitions = (
<<TASK_DEFINITION
[
{
"essential": true,
"image": "AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/thisisservice:${local.image_tag}",
"environment" :[
{"name":"ID", "value":"${jsondecode(data.aws_secretsmanager_secret_version.this_decrypt.secret_string)["id"]}"},
{"name":"SECRET","value":"${jsondecode(data.aws_secretsmanager_secret_version.this_decrypt.secret_string)["secret"]}"},
{"name":"THIS_SOCKET_URL","value":"${local.websocket_url}"},
{"name":"THIS_PLATFORM_API","value":"${local.platform_api}"},
{"name":"REDISURL","value":"${var.redis_url}"},
{"name":"BASE_S3","value":"${aws_s3_bucket.ec2_vp.id}"}
],
"name": "ec2-vp",
"logConfiguration": {
"logDriver": "awsfirelens",
"options": {
"Name": "datadog",
"apikey": "${jsondecode(data.aws_secretsmanager_secret_version.datadog_api_key[0].secret_string)["api_key"]}",
"Host": "http-intake.logs.datadoghq.com",
"dd_service": "this",
"dd_source": "this",
"dd_message_key": "log",
"dd_tags": "cluster:${var.cluster_id},Env:${var.env}",
"TLS": "on",
"provider": "ecs"
}
},
"portMappings": [
{
"containerPort": 443,
"hostPort": 443
}
]
},
{
"essential": true,
"image": "amazon/aws-for-fluent-bit:latest",
"name": "log_router",
"firelensConfiguration": {
"type": "fluentbit",
"options": { "enable-ecs-log-metadata": "true" }
}
}
]
TASK_DEFINITION
)
}
-/+ resource "aws_ecs_task_definition" "this_task" {
~ arn = "arn:aws:ecs:ca-central-1:AWS_ACCOUNT_ID:task-definition/this:4" -> (known after apply)
~ container_definitions = jsonencode(
~ [ # forces replacement
~ {
- cpu = 0 -> null
environment = [
{
name = "BASE_S3"
value = "thisisthevalue"
},
{
name = "THIS_PLATFORM_API"
value = "thisisthevlaue"
},
{
name = "SECRET"
value = "thisisthesecret"
},
{
name = "ID"
value = "thisistheid"
},
{
name = "THIS_SOCKET_URL"
value = "thisisthevalue"
},
{
name = "REDISURL"
value = "thisisthevalue"
},
]
essential = true
image = "AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/this:v1.0.0-develop.6"
logConfiguration = {
logDriver = "awsfirelens"
options = {
Host = "http-intake.logs.datadoghq.com"
Name = "datadog"
TLS = "on"
apikey = "thisisthekey"
dd_message_key = "log"
dd_service = "this"
dd_source = "this"
dd_tags = "thisisthetags"
provider = "ecs"
}
}
- mountPoints = [] -> null
name = "ec2-vp"
~ portMappings = [
~ {
containerPort = 443
hostPort = 443
- protocol = "tcp" -> null
},
]
- volumesFrom = [] -> null
} # forces replacement,
~ {
- cpu = 0 -> null
- environment = [] -> null
essential = true
firelensConfiguration = {
options = {
enable-ecs-log-metadata = "true"
}
type = "fluentbit"
}
image = "amazon/aws-for-fluent-bit:latest"
- mountPoints = [] -> null
name = "log_router"
- portMappings = [] -> null
- user = "0" -> null
- volumesFrom = [] -> null
} # forces replacement,
]
)
cpu = "256"
execution_role_arn = "arn:aws:iam::AWS_ACCOUNTID:role/thisistherole"
family = "this"
~ id = "this-service" -> (known after apply)
memory = "512"
network_mode = "awsvpc"
requires_compatibilities = [
"FARGATE",
]
~ revision = 4 -> (known after apply)
tags = {
"Cluster" = "this"
"Env" = "this"
"Name" = "this"
"Owner" = "this"
"Proj" = "this"
"SuperCluster" = "this"
"Terraform" = "true"
}
task_role_arn = "arn:aws:iam::AWS_ACCOUNT+ID:role/thisistherole"
}
Above is the terraform plan that is forcing a new task definition / container definition.
As you can see, terraform is replacing all the default values with null or empty. I have double-checked the terraform.tfstate file generated by the previous run, and those values are exactly the same as shown in the plan above.
I am not sure why this unintended behaviour is happening and would like some clues on how to fix it.
I am using terraform 0.12.25 and the latest terraform aws provider.
There is a known terraform aws provider bug for this issue.
To stop terraform from replacing the running task / container definition, I had to fill in all the default values shown in the terraform plan with either null or empty sets of configuration.
Once all the parameters were filled out, I ran the terraform plan/apply cycle again to confirm it no longer replaced the container definition.
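For example, based on the plan diff above, stating the defaults explicitly in the container definition looks roughly like this (a fragment of the first container only, not the full definition):

```hcl
container_definitions = jsonencode([
  {
    name      = "ec2-vp"
    essential = true
    # Explicit defaults matching what the provider reports back,
    # so subsequent plans see no diff and stop forcing replacement.
    cpu         = 0
    mountPoints = []
    volumesFrom = []
    portMappings = [
      {
        containerPort = 443
        hostPort      = 443
        protocol      = "tcp"
      }
    ]
    # ... image, environment, logConfiguration as before
  }
])
```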
I got the same issue when I have aws-for-fluent-bit as a sidecar container. Adding "user": "0" to this container definition is the minimal change that prevents the task definition from being forcibly recreated.
{
"name": "log_router",
"image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:latest",
"logConfiguration": null,
"firelensConfiguration": {
"type": "fluentbit",
"options": {
"enable-ecs-log-metadata": "true"
}
},
"user": "0"
}
Problem: I am using terraform to get a list of AMIs for a specific OS, ubuntu 20.08.
I have checked different examples (link).
When I use the script, it does not give me a list of AMIs.
Script
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-20.08-amd64-server-*"]
}
filter {
name = "virtualization - type"
values = ["hvm"]
}
owners = ["AWS"]
}
I have referred the below link as well
How are data sources used in Terraform?
Output:
[ec2-user@ip-172-31-84-148 ~]$ terraform plan
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Enter a value: us-east-1
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_ami.std_ami: Refreshing state...
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your configuration and real physical resources that exist. As a result, no actions need to be performed.
I am not sure where I am going wrong; I have checked a lot of links, some of which I have listed below.
Your data should be:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
output "test" {
value = data.aws_ami.ubuntu
}
The owner of the Ubuntu images is Canonical (099720109477), not AWS, and the image name is ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*, not ubuntu-xenial-20.08-amd64-server-* (there is no 20.08 release; xenial is 16.04 and focal is 20.04). Note also that the filter name is virtualization-type, with no spaces.
The above results in (us-east-1):
{
"architecture" = "x86_64"
"arn" = "arn:aws:ec2:us-east-1::image/ami-0dba2cb6798deb6d8"
"block_device_mappings" = [
{
"device_name" = "/dev/sda1"
"ebs" = {
"delete_on_termination" = "true"
"encrypted" = "false"
"iops" = "0"
"snapshot_id" = "snap-0f06f1549ff7327c9"
"volume_size" = "8"
"volume_type" = "gp2"
}
"no_device" = ""
"virtual_name" = ""
},
{
"device_name" = "/dev/sdb"
"ebs" = {}
"no_device" = ""
"virtual_name" = "ephemeral0"
},
{
"device_name" = "/dev/sdc"
"ebs" = {}
"no_device" = ""
"virtual_name" = "ephemeral1"
},
]
"creation_date" = "2020-09-08T00:55:25.000Z"
"description" = "Canonical, Ubuntu, 20.04 LTS, amd64 focal image build on 2020-09-07"
"filter" = [
{
"name" = "name"
"values" = [
"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*",
]
},
{
"name" = "virtualization-type"
"values" = [
"hvm",
]
},
]
"hypervisor" = "xen"
"id" = "ami-0dba2cb6798deb6d8"
"image_id" = "ami-0dba2cb6798deb6d8"
"image_location" = "099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907"
"image_type" = "machine"
"most_recent" = true
"name" = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200907"
"owner_id" = "099720109477"
"owners" = [
"099720109477",
]
"product_codes" = []
"public" = true
"root_device_name" = "/dev/sda1"
"root_device_type" = "ebs"
"root_snapshot_id" = "snap-0f06f1549ff7327c9"
"sriov_net_support" = "simple"
"state" = "available"
"state_reason" = {
"code" = "UNSET"
"message" = "UNSET"
}
"tags" = {}
"virtualization_type" = "hvm"
}
I'm trying to create a google_compute_instance_template with a container image.
In the GUI, under instance template, you have to check the checkbox:
"Deploy a container image to this VM instance"
After that, I can add the container image URI, and under the advanced options I can add environment variables, args, etc.
Unfortunately, I couldn't find how to do this from Terraform.
Thanks for your help.
I think this terraform module is what you're looking for - https://github.com/terraform-google-modules/terraform-google-container-vm
example usage:
module "gce-container" {
source = "github.com/terraform-google-modules/terraform-google-container-vm"
version = "0.1.0"
container = {
image="gcr.io/google-samples/hello-app:1.0"
env = [
{
name = "TEST_VAR"
value = "Hello World!"
}
],
volumeMounts = [
{
mountPath = "/cache"
name = "tempfs-0"
readOnly = "false"
},
{
mountPath = "/persistent-data"
name = "data-disk-0"
readOnly = "false"
},
]
}
volumes = [
{
name = "tempfs-0"
emptyDir = {
medium = "Memory"
}
},
{
name = "data-disk-0"
gcePersistentDisk = {
pdName = "data-disk-0"
fsType = "ext4"
}
},
]
restart_policy = "Always"
}
The Managed Instance example in https://github.com/terraform-google-modules/terraform-google-container-vm has not been updated for Terraform 1.1.2, which is what I am using, so I'm going to post my configuration.
module "gce-container" {
source = "terraform-google-modules/container-vm/google"
version = "~> 2.0"
container = {
image="gcr.io/project-name/image-name:tag"
securityContext = {
privileged : true
}
tty : true
env = [
{
name = "PORT"
value = "3000"
}
],
# Declare volumes to be mounted.
# This is similar to how docker volumes are declared.
volumeMounts = []
}
# Declare the Volumes which will be used for mounting.
volumes = []
restart_policy = "Always"
}
data "google_compute_image" "gce_container_vm_image" {
family = "cos-stable"
project = "cos-cloud"
}
resource "google_compute_instance_template" "my_instance_template" {
name = "instance-template"
description = "This template is used to create app server instances"
// the `gce-container-declaration` key is very important
metadata = {
"gce-container-declaration" = module.gce-container.metadata_value
}
labels = {
"container-vm" = module.gce-container.vm_container_label
}
machine_type = "e2-small"
can_ip_forward = false
scheduling {
automatic_restart = true
on_host_maintenance = "MIGRATE"
}
// Create a new boot disk from an image
disk {
source_image = data.google_compute_image.gce_container_vm_image.self_link
auto_delete = true
boot = true
disk_type = "pd-balanced"
disk_size_gb = 10
}
network_interface {
subnetwork = google_compute_subnetwork.my-network-subnet.name
// Add an ephemeral external IP.
access_config {
// Ephemeral IP
}
}
service_account {
# Compute Engine default service account
email = "577646309382-compute@developer.gserviceaccount.com"
scopes = ["cloud-platform"]
}
}