I have a Docker image. I would like to create a container from it periodically and run it as a job, say every hour, by creating a CloudWatch Rule.
As we are using AWS, I am looking at the AWS Batch service. Interestingly, there is also an ECS Scheduled Task.
What is the difference between these 2?
Note: I have an init container, i.e. two Docker containers that need to run one after another. This seems to be possible with an ECS Scheduled Task, but not with Batch.
AWS Batch is for batch jobs, such as processing numerous images or videos in parallel (one container per image/video). It is mostly useful for batch-type workloads, for example in research.
AWS Batch is built on top of ECS (it also supports EC2), and it lets you simply run your containers. ECS itself is not tied to a specific use case; it is more generic. If you don't have batch-type projects, then ECS would probably be the better choice for you.
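For a rough illustration of what "simply running your containers" through Batch looks like, here is a minimal boto3 sketch; the queue and job definition names are placeholders and assume you have already created a job queue and registered a job definition that points at your Docker image:

    import boto3

    batch = boto3.client("batch")

    # Submit one run of the container; "my-job-queue" and "my-job-definition"
    # are placeholders for resources you have already set up in Batch.
    response = batch.submit_job(
        jobName="hourly-processing-job",
        jobQueue="my-job-queue",
        jobDefinition="my-job-definition",
        containerOverrides={
            # Optional override of the image's default CMD
            "command": ["python", "process.py"],
        },
    )
    print(response["jobId"])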
The other answers are spot on. I just wanted to add that we (the AWS container team) ran a session at re:Invent last year that covered these options and gave hints about when to use one over the other. The session covers the relationship between ECS, EC2 and Fargate (something that is often missed), as well as when to use "raw" ECS vs. Step Functions vs. Batch as an entry point for running your batch jobs. This is the link to the session.
If you want to run two containers in sequence on AWS Fargate, then you probably want to orchestrate them with AWS Step Functions. Step Functions lets you call arbitrary tasks in series, and it has a direct integration with AWS Fargate.
Amazon EventBridge Rule (hourly) ----- uses AWS IAM role to gain permission to trigger Step Functions
|
| triggers
|
AWS Step Functions ----- Uses AWS IAM role to gain permission to trigger Fargate
|
| triggers
|
AWS Fargate (Amazon ECS) Task Definition
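As a rough sketch of the Step Functions piece of that diagram, the state machine below runs two Fargate task definitions one after the other using the synchronous ECS integration. All ARNs, subnets, and names are placeholders, and the role passed to create_state_machine must be allowed to call ecs:RunTask:

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    def ecs_task_state(task_definition_arn, next_state=None):
        """Build one synchronous Fargate RunTask state (placeholder ARNs/subnets)."""
        state = {
            "Type": "Task",
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "LaunchType": "FARGATE",
                "Cluster": "arn:aws:ecs:us-east-1:111111111111:cluster/my-cluster",
                "TaskDefinition": task_definition_arn,
                "NetworkConfiguration": {
                    "AwsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
        }
        if next_state:
            state["Next"] = next_state
        else:
            state["End"] = True
        return state

    definition = {
        "Comment": "Run the init container, then the main container",
        "StartAt": "InitTask",
        "States": {
            "InitTask": ecs_task_state(
                "arn:aws:ecs:us-east-1:111111111111:task-definition/init-task:1",
                next_state="MainTask",
            ),
            "MainTask": ecs_task_state(
                "arn:aws:ecs:us-east-1:111111111111:task-definition/main-task:1"
            ),
        },
    }

    sfn.create_state_machine(
        name="hourly-two-step-pipeline",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::111111111111:role/StepFunctionsEcsRole",  # placeholder
    )

The hourly EventBridge rule then simply targets this state machine instead of an individual task.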
AWS Batch is designed for data processing tasks that need to scale out across many nodes. If your use case is simply to spin up a couple of containers in sequence, then AWS Batch will be overkill.
CloudWatch Event Rules
FYI CloudWatch Event Rules still work, but the service has been rebranded as Amazon EventBridge. I'd recommend using the Amazon EventBridge console and APIs instead of Amazon CloudWatch Events APIs going forward.
Related
This is going to be a fairly general question. I have a pipeline that I would like to execute in real time. The pipeline can have sudden and unpredictable load changes, so scalability (both up and down) is important. The pipeline stages can be packaged as Docker containers, though they don't necessarily start that way.
I see three ways to build this pipeline on AWS: 1) write an Airflow DAG and use Amazon Managed Workflows for Apache Airflow (MWAA); 2) write an AWS Lambda pipeline orchestrated with AWS Step Functions; 3) write a Kubeflow pipeline on top of Amazon EKS.
These three options presumably have different ramifications in terms of cost and scalability. For example, scaling a Kubernetes cluster in AWS EKS will be a lot slower than scaling Lambda functions, assuming I don't hit the Lambda service quota. Can someone comment on the scalability of AWS managed Airflow? Does it scale faster than EKS? How does it compare to AWS Lambda?
Why not use Airflow to orchestrate the entire pipeline? Airflow can certainly invoke a Step Function using the StepFunctionStartExecutionOperator or by writing a custom Python function to do the same with the PythonOperator.
Seems like this solution would be the best of both worlds: true data orchestration, monitoring, and alerting in Airflow (while keeping a fairly light Airflow instance since it's pure orchestration) with the scalability and responsiveness in AWS Lambda.
I've used this method for a very similar use case in the past and it worked like a charm. Plus, if you need to scale this pipeline to integrate with other services and systems in the future, Airflow gives you that flexibility because it's an orchestrator and is system- and provider-agnostic.
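For illustration, a minimal DAG along those lines might look like the sketch below, assuming the Amazon provider package is installed; the state machine ARN is a placeholder and the exact import path can vary between provider versions:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.step_function import (
        StepFunctionStartExecutionOperator,
    )

    with DAG(
        dag_id="trigger_step_function_pipeline",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        # Kick off the Lambda-based pipeline defined in Step Functions;
        # the ARN below is a placeholder.
        start_pipeline = StepFunctionStartExecutionOperator(
            task_id="start_pipeline",
            state_machine_arn="arn:aws:states:us-east-1:111111111111:stateMachine:my-pipeline",
            state_machine_input={"run_date": "{{ ds }}"},
        )

Airflow stays a thin scheduler/orchestrator here, while all the heavy lifting scales inside Step Functions and Lambda.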
In my application I need to run a Fargate job (Job1) which loops over a particular task and invokes multiple tasks of another Fargate job (Job2). I want to know the possible ways to run this whole operation as a scheduled task. I tried creating an ECS cluster with two containers and scheduling both Job1 and Job2 to run using CloudWatch Events, but I was wondering what the use of AWS Batch is. Is it an alternative to CloudWatch Events? Please share your thoughts.
You could use Amazon EventBridge for this task. It uses the same underlying API as CloudWatch Events, but with some relevant architectural changes that better support an event-driven architecture.
Here's the official documentation on how to implement a schedule rule; you're looking to use an ECS target.
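For reference, a minimal sketch of such a schedule rule with an ECS (Fargate) target using boto3; all names, ARNs, and subnets are placeholders, and the role must be allowed to call ecs:RunTask:

    import boto3

    events = boto3.client("events")

    # Hourly schedule rule (placeholder names/ARNs throughout).
    events.put_rule(
        Name="run-job1-hourly",
        ScheduleExpression="rate(1 hour)",
        State="ENABLED",
    )

    # Point the rule at a task definition running on Fargate.
    events.put_targets(
        Rule="run-job1-hourly",
        Targets=[
            {
                "Id": "job1-fargate-task",
                "Arn": "arn:aws:ecs:us-east-1:111111111111:cluster/my-cluster",
                "RoleArn": "arn:aws:iam::111111111111:role/ecsEventsRole",
                "EcsParameters": {
                    "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111111111111:task-definition/job1:1",
                    "TaskCount": 1,
                    "LaunchType": "FARGATE",
                    "NetworkConfiguration": {
                        "awsvpcConfiguration": {
                            "Subnets": ["subnet-0123456789abcdef0"],
                            "AssignPublicIp": "ENABLED",
                        }
                    },
                },
            }
        ],
    )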
AWS Batch serves a different purpose than the one in your use case. As per the official documentation:
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
What you're trying to do is quite simple; I recommend you keep it that way and don't try to overcomplicate it.
The goal is to migrate our jobs from Control-M to AWS, but before I do that I want to better understand the differences between AWS Batch and AWS Step Functions. From what I've understood, AWS Step Functions seems more encompassing, in that one of my steps can run AWS Batch.
Can you explain the difference between AWS Batch and AWS Step Functions? Which is better suited to migrate to from Control-M? (Maybe this is a matter of preference.)
AWS Batch is a service for running offline workloads. With Batch, you can easily set up an offline workload using Docker, defining the set of instance types and how many instances will run the workload.
AWS Step Functions is a serverless workflow management service. It only provides a way to connect other AWS services: you cannot run a script in Step Functions itself; you only define the workflow, with input/output passed between other AWS services.
That said, you can use both services to migrate from Control-M to AWS, possibly together with other AWS services like Lambda (for small workloads), SNS (for e-mail) and S3 (for storage).
I have an AWS CLI invocation (in this case, to launch a configured EMR cluster to do some steps and then shut down) but I'm not sure how to go about running it daily.
I guess one way to do it is an EC2 micro instance running a cron job, or an ECS task on a micro instance that launches the command, but that all seems like it might be overkill. It looks like there's also a way to do it in Lambda, but from what I can tell it'd be kludgy.
This doesn't have to be a good long-term solution; something that's suitable until I can do it right (Data Pipeline) would work just fine.
Suggestions?
If it is not a strict requirement to use the AWS CLI, you can use one of the AWS SDKs from a Lambda function instead:
Schedule a CloudWatch Rule using a cron expression
When triggered, the CloudWatch Rule invokes a Lambda function
Implement a Lambda function that calls EMR using one of the supported SDKs (e.g. the EMR class in the AWS JavaScript SDK); a sketch follows below
Make sure that you have the IAM configuration in place
A full example is available in Schedule AWS Lambda Functions Using CloudWatch Events
Kludgy? Yes, some configuration is needed; however, if you take into account the amount of work required to launch EC2 / ECS (and to make sure it re-launches in the event of failure), I'd say it evens out.
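The steps above mention the JavaScript SDK; the same idea in Python with boto3 looks roughly like the sketch below. The cluster settings and step arguments are placeholders, and the handler assumes the Lambda execution role is allowed to call EMR:

    import boto3

    emr = boto3.client("emr")

    def handler(event, context):
        """Launch a transient EMR cluster, run one step, then auto-terminate.
        All names, instance types, and step arguments are placeholders."""
        response = emr.run_job_flow(
            Name="daily-transient-cluster",
            ReleaseLabel="emr-6.10.0",
            Instances={
                "MasterInstanceType": "m5.xlarge",
                "SlaveInstanceType": "m5.xlarge",
                "InstanceCount": 3,
                "KeepJobFlowAliveWhenNoSteps": False,  # shut down once steps finish
            },
            Steps=[
                {
                    "Name": "my-step",
                    "ActionOnFailure": "TERMINATE_CLUSTER",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": ["spark-submit", "s3://my-bucket/jobs/job.py"],
                    },
                }
            ],
            JobFlowRole="EMR_EC2_DefaultRole",
            ServiceRole="EMR_DefaultRole",
        )
        return {"JobFlowId": response["JobFlowId"]}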
Not sure about the whole task that you are doing, but to avoid doing it manually, and to avoid setting up yet more resources in AWS (as you mentioned), I would create a simple job in a Continuous Integration (CI) server like Jenkins, Bamboo, CircleCI... (the list can go on). I would assume that you already have a CI server running, so why not use it?
We can add tags to EC2 instances to help us better track billing usage and to manage instances.
Is there a way to achieve this when deploying containers in ECS? I would like the running container to be able to know what tags it currently has attached.
It really depends on what you're ultimately trying to visualize after the fact. I'll share a few off-the-cuff thoughts below, and maybe you can extrapolate on these to build something that satisfies your needs.
As you're probably aware, ECS Tasks themselves don't support the notion of tags; however, there are some workarounds that you could consider. For example, depending on how you're logging your application's behavior (e.g. batching logs to CloudWatch Logs), you could create a Log Stream name, for each ECS Task, that contains a delimited array of tags.
As part of a POC I was building recently, I used the auto-generated computer name to dynamically create CloudWatch Log Stream names. You could easily append or prepend the tag data that you embed in your container images, and then query the tag information from the CloudWatch Log Streams later on.
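As a small illustration of that idea, assuming the tags are baked into the image as an environment variable (TASK_TAGS is a made-up name, and the log group is assumed to exist):

    import os
    import socket

    import boto3

    logs = boto3.client("logs")

    # TASK_TAGS is a hypothetical env var baked into the image, e.g. "team=data,env=prod"
    tags = os.environ.get("TASK_TAGS", "untagged")
    stream_name = f"{socket.gethostname()}__{tags}"

    logs.create_log_stream(
        logGroupName="/my-app/ecs-tasks",  # placeholder log group
        logStreamName=stream_name,
    )

You can then filter or query the stream names later to recover the tag information per task.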
Another option would be to simply log a metric to CloudWatch Metrics, based on the number of ECS Tasks running off of each unique Task Definition registered in ECS.
You could build a very simple Lambda function that queries your ECS Tasks, on each cluster, and writes the Task count, for each unique Task Definition, to CloudWatch Metrics on a per-minute basis. CloudWatch Event Rules allow you to trigger Lambda functions on a cron schedule, so you can customize the period to your liking.
You can use this metric data to help drive scaling decisions about the ECS Cluster, the Services and Tasks running on it, and the underlying EC2 compute instances that support the ECS Cluster.
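A sketch of that Lambda using boto3 (the namespace and metric name below are made up, and pagination is omitted for brevity):

    from collections import Counter

    import boto3

    ecs = boto3.client("ecs")
    cloudwatch = boto3.client("cloudwatch")

    def handler(event, context):
        """Count running tasks per task definition and publish to CloudWatch."""
        counts = Counter()
        for cluster_arn in ecs.list_clusters()["clusterArns"]:
            task_arns = ecs.list_tasks(cluster=cluster_arn)["taskArns"]
            if not task_arns:
                continue
            tasks = ecs.describe_tasks(cluster=cluster_arn, tasks=task_arns)["tasks"]
            for task in tasks:
                counts[task["taskDefinitionArn"]] += 1

        # "Custom/ECS" and "RunningTaskCount" are made-up names for this sketch.
        for task_definition_arn, count in counts.items():
            cloudwatch.put_metric_data(
                Namespace="Custom/ECS",
                MetricData=[
                    {
                        "MetricName": "RunningTaskCount",
                        "Dimensions": [
                            {"Name": "TaskDefinition", "Value": task_definition_arn}
                        ],
                        "Value": count,
                        "Unit": "Count",
                    }
                ],
            )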
Hope this helps.
Just found this while trying to work out the current situation. For future searchers: I believe tagging was added some time after this question, in late 2018.
I've not yet worked out if you can set this up in the Console or if it's a feature of the API only, but e.g. the Terraform AWS provider now lets you set service or task definition tags to 'bubble through' to tasks – including Fargate ones – via propagate_tags.
I've just enabled this and it works but forces a new ECS service – I guess this is related to it not being obviously editable in the Console UI.
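If you are not using Terraform, the same behaviour is exposed through the ECS APIs themselves, e.g. via the propagateTags parameter of boto3's create_service; the cluster, service, task definition, and subnet below are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="my-cluster",            # placeholder
        serviceName="my-service",        # placeholder
        taskDefinition="my-task-def:1",  # placeholder
        desiredCount=1,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
        propagateTags="SERVICE",  # copy the service's tags onto each task it launches
        tags=[{"key": "team", "value": "data"}],
    )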