AWS deploying ECS services through CLI - amazon-web-services

I want to deploy (restart) my ECS tasks (launch type Fargate) through the AWS CLI (as the last step of CI/CD).
The issue is that it seems I have to stop the tasks and then start them again. That is still OK, but in the following command:
aws --region regionName ecs stop-task --cluster example-cluster --task taskID, for --task I must use either the task's UUID or the task's ARN, neither of which is fully fixed.
The task's UUID changes with each revision, and the ARN is a name whose last part is the revision number. Is there a fully fixed identifier that I can use instead of the ARN?
Also, in the ARN, if I have nginx:4 for example, I cannot use "latest" instead of 4, which makes this very difficult to handle and automate.

I found the solution: it was a mistake to use the *-task family of commands. To deploy a service, we simply use the update-service command, like this:
aws --region regionName ecs update-service --cluster clusterName --force-new-deployment --service serviceName
The key is --force-new-deployment; this command is useful for those who do not use CodeDeploy.
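If the pipeline should also wait for the rollout to finish before reporting success, the update can be paired with the services-stable waiter. A minimal sketch; the region, cluster, and service names are placeholders for your own values:

```shell
# Force a new deployment of a service and wait for it to settle.
redeploy_service() {
  local region="$1" cluster="$2" service="$3"
  aws --region "$region" ecs update-service \
    --cluster "$cluster" \
    --service "$service" \
    --force-new-deployment >/dev/null || return 1
  # Block until the new tasks are RUNNING and the old ones have drained.
  aws --region "$region" ecs wait services-stable \
    --cluster "$cluster" --services "$service"
}

# Usage (placeholder names): redeploy_service regionName clusterName serviceName
```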

Related

How to create an Amazon Elastic Container Service Task Role to be configured to the task definition initialized by Fargate in ECS

I created a task using Fargate and would then like to access the container via aws ecs exec for debugging purposes, so I tried the following AWS CLI command to enable the execute command:
aws ecs update-service --cluster dictionary --service dictionary-service --region eu-north-1 --enable-execute-command --force-new-deployment
But encountered the following error:
An error occurred (InvalidParameterException) when calling the UpdateService operation: The service couldn't be updated because a valid taskRoleArn is not being used. Specify a valid task role in your task definition and try again.
Question: How to create an Amazon Elastic Container Service Task Role in IAM?
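One way to answer this, sketched as CLI calls: create a role that ecs-tasks.amazonaws.com can assume, attach the SSM Messages permissions that ECS Exec requires, and then reference the role as taskRoleArn in the task definition. The role name and policy name below are illustrative, not anything the question prescribes:

```shell
# Trust policy allowing ECS tasks to assume the role.
TRUST_POLICY='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ecs-tasks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}'

create_ecs_task_role() {
  aws iam create-role \
    --role-name ecsExecTaskRole \
    --assume-role-policy-document "$TRUST_POLICY"
  # ECS Exec is carried over SSM Messages; the task role needs these actions.
  aws iam put-role-policy \
    --role-name ecsExecTaskRole \
    --policy-name AllowEcsExec \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [
          "ssmmessages:CreateControlChannel",
          "ssmmessages:CreateDataChannel",
          "ssmmessages:OpenControlChannel",
          "ssmmessages:OpenDataChannel"
        ],
        "Resource": "*"
      }]
    }'
}
```

After creating the role, set its ARN as taskRoleArn in the task definition and re-run the update-service command.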

Updating an ECS Service after Registering a new Task Definition gets the old task

I'm using GitLab and the AWS CLI to build code into an image, register a new task definition with the new image, and -- hopefully -- update the service with the new task definition.
In my build script, registering the new task definition with the new image works, using the command:
aws ecs register-task-definition --cli-input-json file://input.json --region $AWS_REGION
And then I'm trying to update my service with the new task definition using
aws ecs update-service --cluster $AWS_ECS_CLUSTER --service $AWS_ECS_SERVICE --force-new-deployment --region $AWS_REGION
When I go to the task definitions in ECS, I can see the new one, but the service always creates new tasks using the old task definition, even though a new one is there.
I'm guessing there's some delay between when the register-task-definition command returns and when the new definition is ready. Is there a way to check the status of the new task definition, or to update the service so that new tasks use the new definition?
You can force your ECS service to redeploy using the latest task definition revision by adding the --task-definition parameter to your ecs update-service command.
If you specify the task definition family name, it deploys the latest available revision:
aws ecs update-service --cluster $ECS_CLUSTER --service $ECS_SERVICE --task-definition $ECS_TASKDEFINITION_FAMILY_NAME --force-new-deployment --region $AWS_REGION
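To remove any ambiguity about which revision the service picks up, the new revision's ARN can also be captured directly from register-task-definition and passed to update-service. A sketch reusing the variable names from the question:

```shell
deploy_new_revision() {
  # Register the revision and capture its exact ARN via --query.
  local new_arn
  new_arn=$(aws ecs register-task-definition \
    --cli-input-json file://input.json \
    --region "$AWS_REGION" \
    --query 'taskDefinition.taskDefinitionArn' --output text) || return 1
  # Point the service at that exact revision, not "latest in family".
  aws ecs update-service \
    --cluster "$AWS_ECS_CLUSTER" \
    --service "$AWS_ECS_SERVICE" \
    --task-definition "$new_arn" \
    --force-new-deployment \
    --region "$AWS_REGION"
}
```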

cloudformation ecs force new deployment

What is the equivalent of the --force-new-deployment flag in CloudFormation?
(aws ecs update-service --cluster --service --force-new-deployment)
In my CloudFormation template, my Docker image tag is latest. However, I want to be able to force a redeployment when I replay the stack.
Thank you
Think of CloudFormation as just a provisioning service or an orchestrator.
Specifying an Image as repository-url/image:tag will definitely fetch the specified image with that specific tag from your repo, but only upon a stack operation. Once a stack operation is finished, any changes to the service depend on the next stack operation.
What you can do here is have either
a CloudWatch Events rule targeting a Lambda function whenever there is an image upload to ECR,
or, in case of using some repo other than ECR, a hook configured to invoke the update-service --cluster --service --force-new-deployment ... command whenever a new image is uploaded.
Considerations:
This can leave your stack in DRIFTED status.
Just make sure to keep your stack's Image property in sync with the latest image running in the service whenever you plan to update the stack.
Here's something I came up with to force a deploy after a stack update if needed. The service name is the same as the stack name, so if yours is different you will need to adjust accordingly.
START_TIME=$(date +%s)
aws cloudformation deploy --stack-name $STACK_NAME [other options]
LAST_DEPLOY=$(aws ecs describe-services --services $STACK_NAME --query "services[0].deployments[?status=='PRIMARY'].createdAt" --output text)
# createdAt is a fractional epoch timestamp; strip the decimal part before comparing.
if [ "$START_TIME" -gt "${LAST_DEPLOY%.*}" ]; then aws ecs update-service --service $STACK_NAME --force-new-deployment; fi
The best I have been able to come up with is to change my healthcheck from curl -f to curl --fail and back to force a new deployment (the two flags are synonyms, so behavior is unchanged but the task definition text differs). Not ideal, but it works.
I'd be interested in a better way, ideally built in to CloudFormation too.

How do I get the docker image ID of a Fargate task using the CLI/SDK?

I want to make sure that the task is running the latest image.
Within the container, I can get the docker image ID (such as 183f9552f0a5) by calling http://169.254.170.2/v2/metadata, however I am looking for a way to get it on my laptop.
Is this possible with AWS CLI or SDK?
You first need to get the Task Definition ARN for the Task using describe_tasks. You can skip this step if you already know the ARN.
aws ecs describe-tasks --tasks TASK_ARN
Then you can use describe_task_definition to get the image name.
aws ecs describe-task-definition --task-definition TASKDEF_ARN
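Putting the two calls together, with --query doing the extraction (the cluster name and task ARN are placeholders). Note that describe-tasks also exposes the running container's image digest directly, which identifies the exact image rather than just the tag:

```shell
# Image reference (repo:tag) from the task's task definition.
get_task_image() {
  local cluster="$1" task="$2" taskdef
  taskdef=$(aws ecs describe-tasks --cluster "$cluster" --tasks "$task" \
    --query 'tasks[0].taskDefinitionArn' --output text) || return 1
  aws ecs describe-task-definition --task-definition "$taskdef" \
    --query 'taskDefinition.containerDefinitions[0].image' --output text
}

# Exact image digest (sha256) of what is actually running in the task.
get_task_image_digest() {
  aws ecs describe-tasks --cluster "$1" --tasks "$2" \
    --query 'tasks[0].containers[0].imageDigest' --output text
}
```

Comparing the digest against the latest digest in your registry answers "is the task running the latest image" without relying on tags.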

AWS ECS restart Service with the same task definition and image with no downtime

I am trying to restart an AWS ECS service (basically stop and start all tasks within the service) without making any changes to the task definition.
The reason is that the image has the latest tag attached with every build.
I have tried stopping all tasks and having the service recreate them, but this causes a temporary-unavailability error while the service restarts on my instances (2).
What is the best way to handle this? Say, a blue-green deployment strategy so that there is no downtime?
This is what I have currently. Its shortcoming is that my app will be down for a couple of seconds while the service's tasks are being rebuilt after deleting them.
configure_aws_cli() {
  aws --version
  aws configure set default.region us-east-1
  aws configure set default.output json
}

start_tasks() {
  start_task=$(aws ecs start-task --cluster $CLUSTER --task-definition $DEFINITION --container-instances $EC2_INSTANCE --group $SERVICE_GROUP --started-by $SERVICE_ID)
  echo "$start_task"
}

stop_running_tasks() {
  tasks=$(aws ecs list-tasks --cluster $CLUSTER --service $SERVICE | $JQ ".taskArns | . []")
  tasks=( $tasks )
  for task in "${tasks[@]}"
  do
    [[ ! -z "$task" ]] && stop_task=$(aws ecs stop-task --cluster $CLUSTER --task "$task")
  done
}

push_ecr_image() {
  echo "Push built image to ECR"
  eval $(aws ecr get-login --region us-east-1)
  docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/repository:$TAG
}

configure_aws_cli
push_ecr_image
stop_running_tasks
start_tasks
Use update-service and the --force-new-deployment flag:
aws ecs update-service --force-new-deployment --service my-service --cluster cluster-name
Hold on a sec.
If I understood your use case correctly, this is addressed in the official docs:
If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest), you do not need to create a new revision of your task definition. You can update the service using the procedure below, keep the current settings for your service, and select Force new deployment....
To avoid downtime, you should tune two parameters: minimum healthy percent and maximum percent:
For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
This basically means that, regardless of whether and to what extent your task definition changed, there can be an "overlap" between the old tasks and the new ones, and this is how you achieve resilience and reliability.
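Those two parameters can also be set from the CLI via --deployment-configuration. A sketch with placeholder names, using 100/200 so that replacement tasks are started before the old ones are stopped:

```shell
# Roll the service with no drop below the desired count.
set_zero_downtime_rollout() {
  local cluster="$1" service="$2"
  aws ecs update-service \
    --cluster "$cluster" \
    --service "$service" \
    --deployment-configuration "minimumHealthyPercent=100,maximumPercent=200" \
    --force-new-deployment
}
```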
UPDATE:
Amazon has just introduced External Deployment Controllers for ECS (both EC2 and Fargate). It includes a new level of abstraction called a TaskSet. I haven't tried it myself yet, but such fine-grained control over service and task management (both APIs are supported) could potentially solve problems akin to this one.
After you push your new image to your Docker repository, you can create a new revision of your task definition (it can be identical to the existing task definition) and update your service to use the new task definition revision. This will trigger a service deployment, and your service will pull the new image from your repository.
This way your task definition stays the same (although updating the service to a new task definition revision is required to trigger the image pull), and still uses the "latest" tag of your image, but you can take advantage of the ECS service deployment functionality to avoid downtime.
The fact that I have to create a new revision of my task definition every time, even when there is no change in the task definition itself, is not right.
There are a bunch of crude bash implementations for this, which suggests AWS should have the ECS service scheduler listen for changes/updates to the image, especially for an automated build process.
My crude workaround was to have two identical task definitions and switch between them on every build. That way I don't accumulate redundant revisions.
Here is the specific script snippet that does that.
update_service() {
  echo "change task definition and update service"
  taskDefinition=$(aws ecs describe-services --cluster $CLUSTER --services $SERVICE | $JQ ".services | . [].taskDefinition")
  if [ "$taskDefinition" = "$TASK_DEF_1" ]; then
    newDefinition="$TASK_DEF_2"
  else
    newDefinition="$TASK_DEF_1"
  fi
  rollUpdate=$(aws ecs update-service --cluster $CLUSTER --service $SERVICE --task-definition $newDefinition)
}
Did you get this question solved? Perhaps this will work for you.
With a new release image pushed to ECR with a version tag, i.e. v1.05, and the latest tag, the image locator in my task definition needed to be explicitly updated to have this version tag appended, like :v1.05.
With :latest, the new image did not get pulled by the new container after aws ecs update-service --force-new-deployment --service my-service.
I was doing tagging and pushing like this:
docker tag ${imageId} ${ecrRepoUri}:v1.05
docker tag ${imageId} ${ecrRepoUri}:latest
docker push ${ecrRepoUri}
...whereas this is the proper way of pushing multiple tags:
docker tag ${imageId} ${ecrRepoUri}:v1.05
docker tag ${imageId} ${ecrRepoUri}:latest
docker push ${ecrRepoUri}:v1.05
docker push ${ecrRepoUri}:latest
This was only briefly mentioned in the official docs, without a proper example.
Works great https://github.com/fdfk/ecsServiceRestart
python ecsServiceRestart.py restart --services="app app2" --cluster=test
The quick and dirty way:
log in to the EC2 instance running the task
find your container with docker container list
use docker restart [container]