Once you've created a task definition in Amazon EC2 Container Service (ECS), how do you delete or remove it?
It's a known issue: once you deregister a task definition, it goes into the INACTIVE state and clutters up the ECS console.
If you want to vote for this to be fixed, there is an issue on GitHub; giving it a thumbs-up helps raise the priority of the request.
I recently found this gist (thanks to the creator for sharing!), which deregisters all task definitions in a specific region. You can adapt it to skip any definitions you want to keep: https://gist.github.com/jen20/e1c25426cc0a4a9b53cbb3560a3f02d1
You need to have jq to run it:
brew install jq
I hard-coded my region (eu-central-1), so be sure to adapt it for your use case:
#!/usr/bin/env bash

# Print every task definition ARN in the region.
get_task_definition_arns() {
    aws ecs list-task-definitions --region eu-central-1 \
        | jq -M -r '.taskDefinitionArns | .[]'
}

# Deregister a single task definition by ARN.
delete_task_definition() {
    local arn=$1
    aws ecs deregister-task-definition \
        --region eu-central-1 \
        --task-definition "${arn}" > /dev/null
}

for arn in $(get_task_definition_arns)
do
    echo "Deregistering ${arn}..."
    delete_task_definition "${arn}"
done
Then when I run it, it starts removing them:
Deregistering arn:aws:ecs:REGION:YOUR_ACCOUNT_ID:task-definition/NAME:REVISION...
One-liner approach inspired by Anna A's reply:
aws ecs list-task-definitions --region eu-central-1 \
    | jq -M -r '.taskDefinitionArns | .[]' \
    | xargs -I {} aws ecs deregister-task-definition \
        --region eu-central-1 \
        --task-definition {} \
    | jq -r '.taskDefinition.taskDefinitionArn'
There is no option to delete a task definition on the AWS console.
But you can deregister (delete) a task definition by running the following command once for each revision you have:
aws ecs deregister-task-definition --task-definition task_definition_name:revision_no
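For example, here is a minimal sketch that removes every revision of a single family (the family name my-task is a placeholder, and jq is assumed to be installed):
# Deregister every revision of the (hypothetical) family "my-task".
for arn in $(aws ecs list-task-definitions --family-prefix my-task \
    | jq -r '.taskDefinitionArns[]'); do
    echo "Deregistering $arn"
    aws ecs deregister-task-definition --task-definition "$arn" > /dev/null
done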
I created the following gist to safely review, filter, and deregister AWS task definitions and revisions in bulk (up to 100 at a time) using a JavaScript CLI.
https://gist.github.com/shivam-nagar/aa79b02b74f616f8714d51e419bd10de
You can use this to deregister all revisions of a task definition, which marks the task definition itself as INACTIVE.
This is now supported.
I just went into Task Definitions, clicked Actions, then clicked Deregister, and the task definition was removed from the UI.
I am using CircleCI for my CI/CD along with CodeDeploy. I would like to run an ecs run-task command and have the task complete before moving on to the more intricate deployment stages, which we handle with CodeDeploy and trigger through the CircleCI config. In a previous version of the AWS CLI the --wait flag was an option for this, but it is not an option in version 2+. Are there any other simple alternatives that people are using to get around this?
Adding my solution here thanks to Mark B's response.
TASK_ID=$(aws ecs run-task \
    --profile staging \
    --cluster <cluster-name> \
    --task-definition <task-definition> \
    --count 1 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[$SUBNET_ONE_STAGING, $SUBNET_TWO_STAGING, $SUBNET_THREE_STAGING],securityGroups=[$SECURITY_GROUP_IDS_STAGING],assignPublicIp=ENABLED}" \
    --overrides '{"containerOverrides":[{"name": "my-app","command": ["/bin/sh", "-c", "bundle exec rake db:migrate && bundle exec rake after_party:run"]}]}' \
    | jq -r '.tasks[0].taskArn') \
    && aws ecs wait tasks-stopped --cluster <cluster-name> --tasks ${TASK_ID}
You would use the aws ecs wait capability in the CLI. Note that this is the same in version 1 and version 2 of the CLI; as far as I'm aware, there was never a --wait flag for ECS tasks in the core AWS CLI.
Specifically, after starting the task and getting the task ID returned from the run-task command, you would use aws ecs wait tasks-stopped --tasks <task-id> to wait for the task to be done/stopped.
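As a minimal sketch (the names my-cluster and my-task are placeholders, and jq is assumed), the wait can be followed by a describe-tasks call to check how the container exited:
# Start the task and capture its ARN.
TASK_ARN=$(aws ecs run-task --cluster my-cluster --task-definition my-task \
    | jq -r '.tasks[0].taskArn')

# Block until the task stops.
aws ecs wait tasks-stopped --cluster my-cluster --tasks "$TASK_ARN"

# Inspect the container's exit code (0 means it succeeded).
aws ecs describe-tasks --cluster my-cluster --tasks "$TASK_ARN" \
    --query 'tasks[0].containers[0].exitCode'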
I have around 200 Lambda functions that I need to delete. Using the console I can only delete one at a time, which would be really painful. Does anyone know a CLI command to bulk-delete all the functions? Thanks!
I've just found an answer by adapting an old script I had for deleting IAM users:
# List every function name in the region, then delete them one by one.
aws lambda list-functions --region us-east-1 \
    | jq -r '.Functions | .[] | .FunctionName' \
    | while read -r uname1; do
        echo "Deleting $uname1"
        aws lambda delete-function --region us-east-1 --function-name "$uname1"
    done
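If you prefer the one-line xargs style used earlier on this page, an equivalent sketch (same region assumption, jq required) is:
aws lambda list-functions --region us-east-1 \
    | jq -r '.Functions[].FunctionName' \
    | xargs -I {} aws lambda delete-function --region us-east-1 --function-name {}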
What's the easiest way to display the currently running container images in an ECS service from the command line? Currently I'm using an ugly mix of the AWS CLI and jq:
aws ecs list-services --cluster CLUSTER_NAME \
    | jq -r '.serviceArns | .[] | select(. | contains("SERVICE_NAME")) | split("/") | last' \
    | xargs -I {} aws ecs describe-services --cluster CLUSTER_NAME --services {} \
    | jq -r '.services[0].taskDefinition | split("/") | last' \
    | xargs -I {} aws ecs describe-task-definition --task-definition {} \
    | jq '.taskDefinition.containerDefinitions[0].image'
Surely there must be an easier way?
You can do a little better if you describe the running tasks directly instead of going through services first. You can get rid of jq if you use --query. And you can get rid of xargs if you use a nested command.
aws ecs describe-tasks --output text \
    --query 'tasks[].containers[].[image]' \
    --tasks $(aws ecs list-tasks --desired-status RUNNING --query taskArns --output text)
list-tasks supports --cluster if you want to limit the results to just one of your clusters; pass the same --cluster to describe-tasks when the tasks are not in the default cluster.
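For example, scoped to a single cluster (my-cluster is a placeholder, not from the original answer):
aws ecs describe-tasks --cluster my-cluster --output text \
    --query 'tasks[].containers[].[image]' \
    --tasks $(aws ecs list-tasks --cluster my-cluster --desired-status RUNNING \
        --query taskArns --output text)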
ecs-cli is a command-line tool made specifically for ECS, but it doesn't print the image, only the task definition name.
I have just tried to simplify your query and it works. Please give it a try:
aws ecs describe-services \
    --services SERVICE_NAME \
    --cluster CLUSTER_NAME \
    | jq '.services[0].deployments[0].taskDefinition' \
    | xargs -I {} aws ecs describe-task-definition --task-definition {} \
    | jq '.taskDefinition.containerDefinitions[0].image'
I am trying to launch an ECS container instance, passing in user data to register it into a cluster and to run a task definition.
When the task is complete, the instance will be terminated.
I am using the guide in the AWS docs to start a task at container instance launch.
Below is my user data (cluster and task definition params omitted):
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=my_cluster
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
    exec 2>>/var/log/ecs/ecs-start-task.log
    set -x
    until curl -s http://localhost:51678/v1/metadata
    do
        sleep 1
    done
    # Grab the container instance ARN and AWS region from instance metadata
    instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
    cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
    region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
    # Specify the task definition to run at launch
    task_definition=my_task_def
    # Run the AWS CLI start-task command to start your task on this container instance
    aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
When the instance is created, it launches into the default cluster, not the one I specify in the user data, and no tasks are started.
I have deconstructed the above script to work out where it is failing, but I've had no luck.
Any help would be appreciated.
From the AWS documentation:
Configure your Amazon ECS container instance with user data, such as the agent environment variables from Amazon ECS Container Agent Configuration. Amazon EC2 user data scripts are executed only one time, when the instance is first launched.
By default, your container instance launches into your default cluster. To launch into a non-default cluster, choose the Advanced Details list. Then, paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
So, in order to add that EC2 instance to your ECS cluster, you should change this variable to the name of your cluster:
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
Replace your_cluster_name with the actual name of your cluster.
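To verify the registration after the instance boots, one quick check (a sketch; same your_cluster_name placeholder) is to list the cluster's container instances:
# Should include the new instance's container-instance ARN once the agent registers.
aws ecs list-container-instances --cluster your_cluster_name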
I have followed the code below from the AWS docs to start an ECS task when the EC2 instance launches. This works great.
However, my containers only run for a few minutes (ten at most), and once the task is finished the EC2 instance is shut down by a CloudWatch rule.
The problem I am finding is that, because the instances shut down straight after the task finishes, the automatic clean-up of Docker containers never happens, so the EC2 instance fills up and other tasks fail. I have tried lowering the time between clean-ups, but it can still be a bit flaky.
My next idea was to add docker system prune -a -f to the user data of the EC2 instance, but it doesn't seem to get run. I think that's because I am putting it in the wrong part of the user data; I have searched through the docs but can't find anything to help.
Question: where in the user data can I put the docker prune command to ensure it runs at each launch? (One possible placement is sketched after the user data below.)
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
    exec 2>>/var/log/ecs/ecs-start-task.log
    set -x
    until curl -s http://localhost:51678/v1/metadata
    do
        sleep 1
    done
    # Grab the container instance ARN and AWS region from instance metadata
    instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
    cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
    region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
    # Specify the task definition to run at launch
    task_definition=my_task_def
    # Run the AWS CLI start-task command to start your task on this container instance
    aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
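One placement worth trying, as a sketch rather than a confirmed fix: per the AWS documentation quoted earlier, the x-shellscript section of user data runs only on the very first boot, while the upstart job runs every time the ecs service starts. So the prune belongs inside the script ... end script block, for example just before the start-task call:
# Runs on every boot once the ECS agent (and therefore Docker) is up;
# reclaims space left behind by the previous boot's containers and images.
docker system prune -a -f >> /var/log/ecs/docker-prune.log 2>&1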
I hadn't considered terminating and then creating a new instance.
I currently use CloudFormation to create the EC2 instance.
What's the best workflow for terminating an EC2 instance after the task definition has completed, and then creating a new one on a schedule and registering it to the ECS cluster?
A CloudWatch scheduled rule that starts a Lambda, which creates the EC2 instance and registers it to the cluster?
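As a rough sketch of the create-and-register half (the AMI ID, instance profile name, and user-data file below are all placeholders, not values from this thread), the Lambda or scheduled job would essentially do the equivalent of:
# Launch a fresh instance that registers itself to the cluster via user data.
# instance-initiated-shutdown-behavior=terminate makes an in-instance
# "shutdown" terminate the instance instead of merely stopping it.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --count 1 \
    --iam-instance-profile Name=ecsInstanceRole \
    --user-data file://ecs-user-data.txt \
    --instance-initiated-shutdown-behavior terminate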