Terraform import ECS task definition from another project

I have multiple projects, each with its own Terraform configuration to manage the AWS infrastructure specific to that project. Infrastructure that's shared (a VPC, for example) I import into the projects that need it.
I want to glue together a number of different tasks from across different services using Step Functions, and some of them are Fargate ECS tasks. This means I need to specify the task definition ARN in the step function.
I can import a task definition, but if I later update the project that manages that task definition, the revision will change while the step function continues to point at the old revision.
At this point I might as well hard-code the task ARN into the step function and just have to remember to update it in the future.
Anyone know a way around this?

You can use the aws_ecs_task_definition data source to look up the latest revision of a task definition family:
data "aws_ecs_task_definition" "example" {
task_definition = "example"
}
output "example" {
value = data.aws_ecs_task_definition.example
}
Applying this gives the following output (assuming you have an example task definition family in your AWS account):
example = {
  "family"          = "example"
  "id"              = "arn:aws:ecs:eu-west-1:1234567890:task-definition/example:333"
  "network_mode"    = "bridge"
  "revision"        = 333
  "status"          = "ACTIVE"
  "task_definition" = "example"
  "task_role_arn"   = "arn:aws:iam::1234567890:role/example"
}
So you could do something like this:
data "aws_ecs_task_definition" "example" {
task_definition = "example"
}
data "aws_ecs_cluster" "example" {
cluster_name = "example"
}
resource "aws_sfn_state_machine" "sfn_state_machine" {
name = "my-state-machine"
role_arn = aws_iam_role.iam_for_sfn.arn
definition = <<EOF
{
"StartAt": "Manage ECS task",
"States": {
"Manage ECS task": {
"Type": "Task",
"Resource": "arn:aws:states:::ecs:runTask.waitForTaskToken",
"Parameters": {
"LaunchType": "FARGATE",
"Cluster": ${data.aws_ecs_cluster.example.arn},
"TaskDefinition": ${data.aws_ecs_task_definition.example.id},
"Overrides": {
"ContainerOverrides": [
{
"Name": "example",
"Environment": [
{
"Name": "TASK_TOKEN_ENV_VARIABLE",
"Value.$": "$$.Task.Token"
}
]
}
]
}
},
"End": true
}
}
}
EOF
}
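As a side note, the same definition can be written with jsonencode instead of a heredoc; it quotes the interpolated ARNs automatically and guarantees valid JSON. A sketch using the same data sources and IAM role as above:

resource "aws_sfn_state_machine" "sfn_state_machine" {
  name     = "my-state-machine"
  role_arn = aws_iam_role.iam_for_sfn.arn

  # jsonencode emits canonical JSON, so the data source ARNs are quoted for us.
  definition = jsonencode({
    StartAt = "Manage ECS task"
    States = {
      "Manage ECS task" = {
        Type     = "Task"
        Resource = "arn:aws:states:::ecs:runTask.waitForTaskToken"
        Parameters = {
          LaunchType     = "FARGATE"
          Cluster        = data.aws_ecs_cluster.example.arn
          TaskDefinition = data.aws_ecs_task_definition.example.id
          Overrides = {
            ContainerOverrides = [
              {
                Name = "example"
                Environment = [
                  { Name = "TASK_TOKEN_ENV_VARIABLE", "Value.$" = "$$.Task.Token" }
                ]
              }
            ]
          }
        }
        End = true
      }
    }
  })
}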

Related

CloudWatch rule not being triggered as event pattern is deployed in lexicographic order using Terraform

I'm trying to write a config where a Lambda function is triggered when there is an instance class change on an AWS RDS resource.
This is the custom event pattern:
{
  "source": [
    "aws.rds"
  ],
  "detail-type": [
    "RDS DB Instance Event"
  ],
  "detail": {
    "EventID": [
      "RDS-EVENT-0014"
    ]
  }
}
The following is my Terraform config for the CloudWatch event rule resource:
resource "aws_cloudwatch_event_rule" "rds_instance_event" {
name = "${var.region}-rds-instance-event"
description = "This event trigger is for RDS instance events"
event_pattern = <<EOF
{
"source": [
"aws.rds"
],
"detail-type": [
"RDS DB Instance Event"
],
"detail": {
"EventID": [
"RDS-EVENT-0014"
]
}
}
EOF
}
The problem is that the event_pattern gets uploaded in lexicographic order, and the CloudWatch event is not triggered. When I change the event pattern manually back to the original order, it works.
Does anyone know how to fix this?
I tried rendering it from a data template as follows; it still didn't work.
data "template_file" "event_pattern" {
template = file("${path.module}/manifests/rds-notification-event-rule.json")
}
resource "aws_cloudwatch_event_rule" "rds_instance_event" {
name = "${var.region}-rds-instance-event"
description = "This event trigger is for RDS instance events"
event_pattern = data.template_file.event_pattern.rendered
}
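Worth noting: CloudWatch Events matches patterns on JSON structure, not on key order, so the reordering alone should not break the rule. One way to rule out formatting or whitespace issues from the heredoc is to build the pattern with jsonencode; a sketch of the same rule (not a confirmed fix for the behavior described above):

resource "aws_cloudwatch_event_rule" "rds_instance_event" {
  name        = "${var.region}-rds-instance-event"
  description = "This event trigger is for RDS instance events"

  # jsonencode emits one canonical JSON document, eliminating any
  # hand-formatting differences between what is planned and applied.
  event_pattern = jsonencode({
    "source"      = ["aws.rds"]
    "detail-type" = ["RDS DB Instance Event"]
    "detail" = {
      "EventID" = ["RDS-EVENT-0014"]
    }
  })
}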

How to remove bigquery dataset permission via CLI

I want to remove multiple dataset permissions via the CLI (if possible in one go). Is there a way to do this with the CLI?
For example:
abc@gmail.com has roles/bigquery.dataowner on dataset demo_aa
xyzGroupSA@gmail.com has roles/bigquery.user on dataset demo_bb
I want to remove each email's permission from the respective dataset via the CLI with "bq".
(I went through the reference https://cloud.google.com/bigquery/docs/dataset-access-controls#bq_1 , but it relies on a local file reference and is quite lengthy. What about when you have a jump server in a production environment and need to do this by running commands?)
You can delegate this responsibility to CI/CD, for example.
Solution 1:
You can create a project:
my-project
    dataset_accesses.json
Run your script in Cloud Shell on your production project, or in a Docker container based on the gcloud-sdk image.
Use a service account that has the permissions to update dataset accesses.
Run the bq update command with the access file:
bq update --source dataset_accesses.json mydataset
dataset_accesses.json:
{
  "access": [
    {
      "role": "READER",
      "specialGroup": "projectReaders"
    },
    {
      "role": "WRITER",
      "specialGroup": "projectWriters"
    },
    {
      "role": "OWNER",
      "specialGroup": "projectOwners"
    },
    {
      "role": "READER",
      "specialGroup": "allAuthenticatedUsers"
    },
    {
      "role": "READER",
      "domain": "domain_name"
    },
    {
      "role": "WRITER",
      "userByEmail": "user_email"
    },
    {
      "role": "WRITER",
      "userByEmail": "service_account_email"
    },
    {
      "role": "READER",
      "groupByEmail": "group_email"
    }
  ],
  ...
}
Solution 2:
Use Terraform to update the permissions on your dataset:

resource "google_bigquery_dataset_access" "access" {
  dataset_id    = google_bigquery_dataset.dataset.dataset_id
  role          = "OWNER"
  user_by_email = google_service_account.bqowner.email
}
With Terraform, it is also easy to pass a list and apply the resource for each entry:
JSON file:
{
  "datasets_members": {
    "dataset_your_group1": {
      "dataset_id": "your_dataset",
      "member": "group:your_group@loreal.com",
      "role": "roles/bigquery.dataViewer"
    },
    "dataset_your_group2": {
      "dataset_id": "your_dataset",
      "member": "group:your_group2@loreal.com",
      "role": "roles/bigquery.dataViewer"
    }
  }
}
locals.tf:
locals {
  datasets_members = jsondecode(file("${path.module}/resource/datasets_members.json"))["datasets_members"]
}

resource "google_bigquery_dataset_access" "accesses" {
  for_each   = local.datasets_members
  dataset_id = each.value["dataset_id"]
  role       = each.value["role"]

  # group_by_email expects a bare email address, so strip the IAM-style "group:" prefix.
  group_by_email = trimprefix(each.value["member"], "group:")
}
This also works with google_bigquery_dataset_iam_binding.
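For completeness, a sketch of that google_bigquery_dataset_iam_binding variant (the dataset id and group emails are illustrative). A binding is authoritative for its role, so removing an email from members and re-applying revokes that member's access:

resource "google_bigquery_dataset_iam_binding" "viewers" {
  dataset_id = "your_dataset" # illustrative dataset id
  role       = "roles/bigquery.dataViewer"

  # Authoritative: any member removed from this list loses the role
  # on the next terraform apply.
  members = [
    "group:your_group@loreal.com",
    "group:your_group2@loreal.com",
  ]
}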

Terraform Import AWS Secrets Manager Secret Version

AWS maintains a secret versioning system; a new version is created if the secret value is updated or if the secret is rotated.
I am in the process of getting existing secrets in AWS under the purview of Terraform. As step 1, I declared all the Terraform resources I needed:
resource "aws_secretsmanager_secret" "secret" {
name = var.secret_name
description = var.secret_description
kms_key_id = aws_kms_key.main.id
recovery_window_in_days = var.recovery_window_in_days
tags = var.secret_tags
policy = data.aws_iam_policy_document.secret_access_policy.json
}
// AWS secrets manager secret version
resource "aws_secretsmanager_secret_version" "secret" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = jsonencode(var.secret_name_in_secrets_file)
}
Next I imported:
Import the secret to state:
terraform import module.<module_name>.aws_secretsmanager_secret.secret arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>
Import the secret version to state:
terraform import module.<module_name>.aws_secretsmanager_secret_version.secret 'arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>|<unique_secret_id aka AWSCURRENT>'
After this I expected the Terraform plan to only involve changes to the resource policy. But Terraform tried to destroy and recreate the secret version, which did not make sense to me.
After going ahead with the plan, the secret version that was initially associated with the AWSCURRENT staging label (the one I used in the import above) moved to the AWSPREVIOUS staging label, and a new AWSCURRENT version was created.
Before terraform import:
{
  "Versions": [
    {
      "VersionId": "initial-current",
      "VersionStages": [
        "AWSCURRENT"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxx"
    },
    {
      "VersionId": "initial-previous",
      "VersionStages": [
        "AWSPREVIOUS"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxxx"
    }
  ],
  "ARN": "xxxx",
  "Name": "xxxx"
}
After terraform import and apply:
{
  "Versions": [
    {
      "VersionId": "post-import-current",
      "VersionStages": [
        "AWSCURRENT"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxx"
    },
    {
      "VersionId": "initial-current",
      "VersionStages": [
        "AWSPREVIOUS"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxxx"
    }
  ],
  "ARN": "xxxx",
  "Name": "xxxx"
}
I was expecting initial-current to remain under the AWSCURRENT stage. Why did AWS move the initial AWSCURRENT version ID that I imported with Terraform to AWSPREVIOUS and create a new one, since nothing changed in terms of value or rotation? I expected no changes on that front since Terraform imported the version.

Not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken

I'm a newbie to Terraform and I'm trying to deploy a Docker image from AWS ECR to ECS. However, I'm getting the following error. Can someone help me resolve this?
ResourceInitializationError: unable to pull secrets or registry auth:
execution resource retrieval failed: unable to retrieve ecr registry
auth: service call has been retried 1 time(s):
AccessDeniedException: User: arn:aws:sts::AccountID:assumed-role/ecsExecution-1/25d077c2af604f4e93feead72a141e3g is not authorized to perform:
ecr:GetAuthorizationToken on resource: *
because no identity-based policy allows the
ecr:GetAuthorizationToken action
status code: 400, request id: 1a1bee4c-5ab6-4b44-bbf8-5586edea6b3g
This is my code:
resource "aws_ecs_cluster" "first-cluster" {
  name = "test-docker-deploy"
}

resource "aws_ecs_task_definition" "first-task" {
  family                = "first-task"
  container_definitions = <<TASK_DEFINITION
[
  {
    "name": "first-task",
    "image": "899696473236.dkr.ecr.us-east-1.amazonaws.com/first-repo:nginx-demo",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
TASK_DEFINITION
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = "${aws_iam_role.Execution_Role.arn}"
}

resource "aws_iam_role" "Execution_Role" {
  name               = "ecsExecution-1"
  assume_role_policy = "${data.aws_iam_policy_document.role_policy.json}"
}

data "aws_iam_policy_document" "role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_ecs_service" "first-service" {
  name            = "docker-service"
  cluster         = "${aws_ecs_cluster.first-cluster.id}"
  task_definition = "${aws_ecs_task_definition.first-task.arn}"
  launch_type     = "FARGATE"
  desired_count   = 1

  network_configuration {
    subnets          = ["${aws_default_subnet.subnet-a.id}"]
    assign_public_ip = true
  }
}

resource "aws_default_vpc" "default" {
}

resource "aws_default_subnet" "subnet-a" {
  availability_zone = "us-east-1a"
}
Besides the assume role policy (i.e., the trust policy), you need to have the execution policy [1]. The former says that the ECS task is allowed to assume the role in the background, and the latter says what the ECS task can do once it has assumed that role. So your trust policy is correct, but you need the following piece of code for this to work (i.e., the ecs_task_policy):
data "aws_iam_policy_document" "ecs_task_policy" {
statement {
sid = "EcsTaskPolicy"
actions = [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
]
resources = [
"*" # you could limit this to only the ECR repo you want
]
}
statement {
actions = [
"ecr:GetAuthorizationToken"
]
resources = [
"*"
]
}
statement {
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
resources = [
"*"
]
}
}
resource "aws_iam_role" "Execution_Role" {
name = "ecsExecution-1"
assume_role_policy = data.aws_iam_policy_document.role_policy.json
inline_policy {
name = "EcsTaskExecutionPolicy"
policy = data.aws_iam_policy_document.ecs_task_policy.json
}
}
data "aws_iam_policy_document" "role_policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
Also note that depending on what is inside the Docker image that you use for the task, it might be necessary to add more AWS permissions to the execution policy. The ECR repo access can be limited to the ARN of the ECR repo where the Docker image is located. In theory, the log permissions might not be required at this time, but if you want to see whether there are any errors you are going to need to send the logs somewhere. If you need that, you will have to add the logConfiguration section to the task definition as well [2]; a sketch of that is shown after the references.
[1] https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_execution_IAM_role.html
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#create_awslogs_loggroups
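For reference, a minimal sketch of the task definition from the question with the logConfiguration section from [2] added; the log group name /ecs/first-task is illustrative and must exist (or be created, as below) before the task writes to it:

resource "aws_cloudwatch_log_group" "first_task" {
  name = "/ecs/first-task" # illustrative name; keep in sync with awslogs-group below
}

resource "aws_ecs_task_definition" "first-task" {
  family                = "first-task"
  container_definitions = <<TASK_DEFINITION
[
  {
    "name": "first-task",
    "image": "899696473236.dkr.ecr.us-east-1.amazonaws.com/first-repo:nginx-demo",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/first-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "first-task"
      }
    },
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
TASK_DEFINITION
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.Execution_Role.arn
}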

Adding tags to ECS Service - InvalidParameterException

I have a fully working Fargate application up and running in AWS. I went back to add tags to all my resources to better monitor costs in a microservice architecture. Upon adding tags to my aws_ecs_service resource, I got the following exception when running terraform apply:
aws_ecs_service.main: error tagging ECS Cluster (arn:aws:ecs:*region*:*account_number*:service/*service_name*): InvalidParameterException: Long arn format must be used for tagging operations
After some research, I found that on November 15, AWS introduced a new ARN and ID format: https://aws.amazon.com/ecs/faqs/#Transition_to_new_ARN_and_ID_format
I know that I need to apply the settings to the IAM Role that I have assigned to my service, but I can't figure out how. Here is a link to the AWS docs for account settings: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Setting.html
Below is a snippet of the ECS service resource as well as the task definition:
resource "aws_ecs_task_definition" "app" {
  family                   = "${var.app_name}"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "${var.app_cpu}"
  memory                   = "${var.app_memory}"
  execution_role_arn       = "${var.execution_role_arn}"
  task_role_arn            = "${var.task_role_arn}"

  tags {
    Name        = "${var.app_name}-ecs-task-definition-${var.environment}"
    Service     = "${var.app_name}"
    Environment = "${var.environment}"
    Cost_Center = "${var.tag_cost_center}"
    Cost_Code   = "${var.tag_cost_code}"
  }

  container_definitions = <<DEFINITION
[
  {
    "cpu": ${var.app_cpu},
    "image": "${var.app_image}",
    "memory": ${var.app_memory},
    "name": "${var.app_name}",
    "networkMode": "awsvpc",
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "stash-${var.app_name}",
        "awslogs-region": "${var.aws_region}",
        "awslogs-stream-prefix": "${var.app_name}"
      }
    },
    "portMappings": [
      {
        "containerPort": ${var.app_port},
        "hostPort": ${var.app_port}
      }
    ]
  }
]
DEFINITION
}
resource "aws_ecs_service" "main" {
name = "${var.app_name}-service"
cluster = "${var.cluster_id}"
task_definition = "${aws_ecs_task_definition.app.arn}"
desired_count = "1"
launch_type = "FARGATE"
network_configuration {
security_groups = ["${var.security_groups}"]
subnets = ["${var.subnets}"]
}
load_balancer {
target_group_arn = "${var.target_group_arn}"
container_name = "${var.app_name}"
container_port = "${var.app_port}"
}
lifecycle {
ignore_changes = ["desired_count"]
}
tags {
Name = "${var.app_name}-ecs-service-${var.environment}"
Service = "${var.app_name}"
Environment = "${var.environment}"
Cost_Center = "${var.tag_cost_center}"
Cost_Code = "${var.tag_cost_code}"
}
}
Here is a look at my IAM role resource:
resource "aws_iam_role" "task_role" {
  name               = "${var.app_name}-task-${var.environment}"
  assume_role_policy = <<END
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
END
}
I am using Terraform version 0.11.8.
Since you mentioned Terraform, let me add this (I am also using Terraform and hit a very similar problem). You can use the AWS CLI, with the ECS subcommand put-account-setting, to enable the three long ARN format settings:
aws ecs put-account-setting --name containerInstanceLongArnFormat --value enabled --region <your-region>
aws ecs put-account-setting --name serviceLongArnFormat --value enabled --region <your-region>
aws ecs put-account-setting --name taskLongArnFormat --value enabled --region <your-region>
Reference: AWS Doc
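If you would rather manage this from Terraform itself, newer versions of the AWS provider expose an aws_ecs_account_setting_default resource; a sketch (note this requires a much newer provider than the Terraform 0.11.8 setup in the question):

resource "aws_ecs_account_setting_default" "service_long_arn" {
  name  = "serviceLongArnFormat"
  value = "enabled"
}

resource "aws_ecs_account_setting_default" "task_long_arn" {
  name  = "taskLongArnFormat"
  value = "enabled"
}

resource "aws_ecs_account_setting_default" "container_instance_long_arn" {
  name  = "containerInstanceLongArnFormat"
  value = "enabled"
}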
Per the online documentation for opting in to the new ARN format, you'll need root account access to opt in for a specific IAM role.
The steps detailed in the above link state you should:
Create an IAM role for your cluster (you have done this)
Log in as root
Head to the opt-in page and select that IAM role to opt in
Hopefully profit!
Note that you can also opt in for your entire account; this is optional until January 2020, at which point the new format becomes mandatory.