Track status when you update ECS service task definition

I am programmatically updating the task definition of an ECS service (the service has one task) using the update-service API. What is the best way to track the status of this update? The API returns immediately and does not wait for the update to complete. One way would be to continuously describe the tasks. Is there a better way?

Is there a better way?
It depends, but an alternative would be to create a dedicated CI/CD pipeline that updates your service, especially if this happens regularly. This could be done with AWS CodePipeline, following for example:
Tutorial: Amazon ECS Standard Deployment with CodePipeline
The initial setup can be slow, especially when done for the first time, but once in place it brings a lot of benefits, including:
automated deployments
notifications when a deployment fails or succeeds
a clear history of all deployments, and more.
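If you just need your script to block until the rollout finishes, rather than continuously describing tasks yourself, boto3 also ships a built-in services_stable waiter that does the DescribeServices polling for you (the AWS CLI equivalent is aws ecs wait services-stable). A minimal sketch, assuming hypothetical cluster, service, and task definition names:

```python
# Minimal sketch: trigger the update, then block until the service is stable.
# boto3's built-in "services_stable" waiter polls DescribeServices for you.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    taskDefinition="my-task:2",  # the new revision
)

waiter = ecs.get_waiter("services_stable")
waiter.wait(
    cluster="my-cluster",
    services=["my-service"],
    WaiterConfig={"Delay": 15, "MaxAttempts": 40},  # poll every 15 s, up to ~10 min
)
print("Deployment complete: runningCount matches desiredCount.")
```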

Related

AWS - Batch vs Fargate

I have a Docker image. I would like to create a container periodically and execute it as a job, say every hour, by creating a CloudWatch rule.
As we are using the AWS cloud, I am looking at the AWS Batch service. Interestingly, there is also the ECS Scheduled Task.
What is the difference between these two?
Note: I have an init container, i.e. two Docker containers to run one after another. That seems to be possible with an ECS Scheduled Task, but not with Batch.
AWS Batch is for batch jobs, such as processing numerous images or videos in parallel (one container per image/video). It is mostly useful for batch-type workloads, for example in research.
AWS Batch is built on top of ECS (it supports EC2 as well as Fargate). ECS itself simply allows you to run your containers; it has no specific use case and is more generic. If you don't have batch-type projects, then ECS is probably the better choice for you.
The other answers are spot on. I just wanted to add that we (the AWS container team) ran a session at re:Invent last year that covered these options and provided hints about when to use one over the other. The session covers the relationship between ECS, EC2 and Fargate (something that is often missed), as well as when to use "raw" ECS vs Step Functions vs Batch as an entry point for running your batch jobs. This is the link to the session.
If you want to run two containers in sequence on AWS Fargate, then you probably want to orchestrate them with AWS Step Functions. Step Functions allows you to call arbitrary tasks in series, and it has a direct integration with AWS Fargate:
Amazon EventBridge Rule (hourly) ----- uses AWS IAM role to gain permission to trigger Step Functions
|
| triggers
|
AWS Step Functions ----- Uses AWS IAM role to gain permission to trigger Fargate
|
| triggers
|
AWS Fargate (Amazon ECS) Task Definition
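As a sketch of what that wiring can look like in code (all ARNs, names, and the subnet ID below are hypothetical, and the IAM role must already be allowed to call ecs:RunTask), the state machine definition is just two ECS RunTask states chained in series:

```python
# Sketch of a two-step state machine: run the init container's task, then the
# main one. The "runTask.sync" integration makes each state wait until its
# Fargate task stops before moving on.
import json
import boto3

CLUSTER = "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster"

def run_task_state(task_def_arn, next_state=None):
    """Build one ECS RunTask state; ends the machine when next_state is None."""
    state = {
        "Type": "Task",
        "Resource": "arn:aws:states:::ecs:runTask.sync",
        "Parameters": {
            "LaunchType": "FARGATE",
            "Cluster": CLUSTER,
            "TaskDefinition": task_def_arn,
            "NetworkConfiguration": {
                "AwsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }
    if next_state:
        state["Next"] = next_state
    else:
        state["End"] = True
    return state

definition = {
    "StartAt": "InitContainer",
    "States": {
        "InitContainer": run_task_state(
            "arn:aws:ecs:us-east-1:123456789012:task-definition/init:1",
            next_state="MainContainer"),
        "MainContainer": run_task_state(
            "arn:aws:ecs:us-east-1:123456789012:task-definition/main:1"),
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="init-then-main",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsEcsRole",
)
```

The .sync suffix is what gives you the "one after another" behaviour; an EventBridge rule can then point at the state machine ARN to provide the hourly trigger from the diagram above.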
AWS Batch is designed for data processing tasks that need to scale out across many nodes. If your use case is simply to spin up a couple of containers in sequence, then AWS Batch will be overkill.
CloudWatch Event Rules
FYI, CloudWatch Event Rules still work, but the service has been rebranded as Amazon EventBridge. I'd recommend using the Amazon EventBridge console and APIs instead of the Amazon CloudWatch Events APIs going forward.

How to run a scheduled job in AWS?

In my application, I need to run a Fargate job (Job1) that loops through a particular task and invokes multiple tasks of another Fargate job (Job2). What are the possible ways to run this whole operation as a scheduled task? I tried creating an ECS cluster with 2 containers and scheduling both Job1 and Job2 using CloudWatch Events. But I was wondering: what is the use of AWS Batch? Is it an alternative to CloudWatch Events? Please suggest your thoughts.
You could use Amazon EventBridge for this task. It uses the same underlying API as CloudWatch Events, but with some relevant architectural changes to better implement an event-driven architecture.
Here's the official documentation on how to implement a schedule rule; you're looking to use an ECS target.
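For illustration, here is a minimal boto3 sketch of such a schedule rule with an ECS (Fargate) target; all ARNs, names, and the subnet ID are hypothetical, and the role must be allowed to call ecs:RunTask:

```python
# Minimal sketch: an hourly EventBridge schedule rule that runs a Fargate task.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="run-job1-hourly",
    ScheduleExpression="rate(1 hour)",
    State="ENABLED",
)

events.put_targets(
    Rule="run-job1-hourly",
    Targets=[{
        "Id": "job1",
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/job1:1",
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }],
)
```

Job1, once running, can then fan out the Job2 tasks itself, e.g. by calling run_task from its own code.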
AWS Batch serves a different purpose than the one in your use case. As per its official documentation:
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
What you're trying to do is quite simple; I recommend you keep it simple and don't try to overcomplicate it.

How can I update a container image with the imageDigest parameter in an AWS Fargate cluster with the AWS CLI?

My cluster is up and a task is running in it.
I need to update the container image in the running task in the cluster. How do I do that?
My image uses the latest tag, and every time a new change comes in, it is pushed to ECR under the latest tag.
Deploying with the tag latest isn't a best practice, because you lose a lot of visibility into what you are doing (e.g., scale-out events that deploy more tasks as part of a service will all end up using latest, but may effectively be running different versions of the code, etc.).
This pontificating aside, you didn't say whether you started your task(s) standalone, using the run-task API, or as part of a service.
If the former, you need to stop your task and run it again. If the latter, you need to redeploy your service using the --force-new-deployment flag.
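Here is a minimal boto3 sketch of both cases; the cluster, service, task, and subnet identifiers are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Case 1: standalone task (started with run-task). Stop it, then run it again
# so the fresh :latest image is pulled when the new task starts.
ecs.stop_task(cluster="my-cluster", task="0123456789abcdef0",
              reason="redeploy latest image")
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task",  # resolves to the latest revision of the family
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

# Case 2: task managed by a service. Force a new deployment; each replacement
# task pulls the image again and so picks up the new :latest digest.
ecs.update_service(cluster="my-cluster", service="my-service",
                   forceNewDeployment=True)
```

The CLI equivalent of the second call is aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment.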

AWS ECS run latest task definition

I am trying to run the latest task definition image built from a GitHub deployment (CD). On AWS, each build creates a new task definition revision, for example "task-api:1", "task-api:2", but my cluster is still running task-api:1 even though a new image has been built. So far I have had to manually stop the old task and start a new one. How can I have this automated?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service, when a new service deployment is started the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
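Concretely, a rolling update is kicked off by registering a new task definition revision and pointing the service at it. A minimal boto3 sketch of that step, with hypothetical names, ARNs, and image URI:

```python
# Sketch of the step a CD pipeline performs for a rolling update: register a
# new task definition revision for the freshly built image, then aim the
# service at it. ECS then replaces the old tasks with new ones.
import boto3

ecs = boto3.client("ecs")

new_revision = ecs.register_task_definition(
    family="task-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/task-api:42",  # new build
        "essential": True,
    }],
)["taskDefinition"]["taskDefinitionArn"]

# Pointing the service at the new revision starts the rolling update.
ecs.update_service(cluster="my-cluster", service="task-api-service",
                   taskDefinition=new_revision)
```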
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy and CodePipeline (and CodeCommit if you integrate your code repository in AWS with your CI/CD)
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
Judging from what you have said in the comments:
I created my task via the AWS console, I am running just the task definition on its own without service plus service with task definition launched via the EC2 not target both of them, so in the task definition JSON file on my Github both repositories they are tied to a revision of a task (could that be a problem?).
It's difficult to understand exactly how you have this set up, and it'd probably be a good idea for you to go back and understand the services you are using a little better, using the guide you are following or the AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html).
Through IaC such as the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
This can be automated, but you would need to set up a process to do it.
I would recommend reading some guides on how to automate deployments and updates using the CDK. Amazon provides a good guide to get you started: https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.
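As a taste of what that looks like, here is a minimal CDK (Python, v2) sketch in the spirit of the linked guide; the stack name and the ./app path are hypothetical. Each cdk deploy rebuilds the image from the local Dockerfile, registers a new task definition revision, and rolls the service onto it:

```python
# Minimal sketch: a load-balanced Fargate service whose image is built from
# a local Dockerfile on every deploy, so updates are automated end to end.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns
from constructs import Construct

class ApiServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "ApiService",
            cluster=cluster,
            memory_limit_mib=512,
            desired_count=1,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_asset("./app"),  # builds ./app/Dockerfile
            ),
        )

app = App()
ApiServiceStack(app, "ApiServiceStack")
app.synth()
```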

How do I kill a deployment in AWS OpsWorks?

How do I kill a long-running deployment in AWS OpsWorks?
We run deployments to an integration environment every time we commit to our code repo. Our current deployments are taking a long time, which causes them to stack on top of each other in OpsWorks. We're working on making our deployment process more efficient, but until we get that sorted out, is there an easy way to kill a deployment so we can just run the latest one in the queue?
Unfortunately there is no easy way.
There is no API call to cancel a deployment.
So the only possible approach is to check, on the instance itself, whether it's necessary to run the deployment or skip it. You can achieve this with a custom cookbook.