Amazon ECS schedule container

How can I schedule a container to turn off overnight according to a specified time zone in Amazon ECS?

If you are running an ECS service, you can use time-based auto scaling to scale the service down to 0 tasks at a specific time of day and scale it back up to 1 or more tasks at another time.
All schedules like this in AWS use the UTC time zone, so you would have to convert the times from your time zone to UTC before configuring the schedule.
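A minimal sketch of such scheduled scaling with boto3 and Application Auto Scaling; the cluster name, service name, and times are hypothetical, and the cron expressions are evaluated in UTC:

```python
import boto3

# The ECS service is addressed as service/<cluster>/<service>
# in the "ecs" namespace of Application Auto Scaling.
client = boto3.client("application-autoscaling")
resource_id = "service/my-cluster/my-service"  # hypothetical names

# Register the service as a scalable target (allowed range 0..2 tasks).
client.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=2,
)

# Scale down to 0 tasks at 20:00 UTC every day...
client.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="scale-in-overnight",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 20 * * ? *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)

# ...and back up to 1 task at 06:00 UTC.
client.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="scale-out-morning",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 6 * * ? *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
```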

Related

AWS ASG target tracking for ECS took 15 minutes to scale in after the ECS desired task count reached 0

I have an ECS cluster on AWS which uses a capacity provider. The ASG associated with the capacity provider is responsible for scaling EC2 instances out and in based on the ECS desired task count. It is worth mentioning that the desired task count is managed by a Lambda function and updated based on some metrics (it calculates the depth of an SQS queue and, based on that, changes the ECS desired task count).
Scaling out happens almost immediately (not counting provisioning and pending time), but when the desired task count is set to zero in ECS (by the Lambda function), it takes at least 15 minutes for the ASG to terminate the instances. Since we are using large numbers of high-performance EC2 instance types, this scale-in delay costs us a lot of money. I want to know if there is any way to reduce this cooldown time to a few minutes.
P.S.: I have set the default cooldown to 120 seconds, but it didn't change anything.

How to shut down EC2 instances backing ECS to save cost for staging/QA

We have hosted a Docker container on AWS ECS with EC2 instances and would like to terminate/shut down these EC2 instances at night and on weekends for staging/QA to save cost.
Thanks in advance :)
The AWS Instance Scheduler is a simple AWS-provided solution that enables customers to easily configure custom start and stop schedules for their Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances. The solution is easy to deploy and can help reduce operational costs for both development and production environments.
https://aws.amazon.com/solutions/implementations/instance-scheduler/
If you run the instances in an Auto Scaling group (ASG), you could use a scheduled action to set the desired capacity of the ASG to zero for off-peak times. A second scheduled action would bring it back up for working hours.
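A rough sketch of such a pair of scheduled actions with boto3; the ASG name and times are placeholders, and the Recurrence cron expressions are evaluated in UTC:

```python
import boto3

asg = boto3.client("autoscaling")

# Stop the instances at 20:00 UTC on weekday evenings.
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="my-ecs-asg",  # hypothetical ASG name
    ScheduledActionName="stop-overnight",
    Recurrence="0 20 * * 1-5",          # standard cron, UTC
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)

# Bring them back at 06:00 UTC on weekday mornings.
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="my-ecs-asg",
    ScheduledActionName="start-for-work",
    Recurrence="0 6 * * 1-5",
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)
```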
An alternative would be to set up a CloudWatch Events scheduled rule using a cron expression with a Lambda function as the target. The function would do the same as the scheduled scaling action, but because it is a Lambda function you could also do other things there, for example some pre-shutdown checks or post-shutdown cleanup.
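A sketch of what such a Lambda handler might look like, assuming the same hypothetical ASG name; the pre-shutdown check shown here is only an illustration:

```python
import boto3

asg = boto3.client("autoscaling")

ASG_NAME = "my-ecs-asg"  # hypothetical

def handler(event, context):
    # The scheduled rule could pass the target size in the event payload;
    # default to 0 (shutdown) here.
    desired = int(event.get("desired_capacity", 0))

    # Example pre-shutdown check: do nothing if the group is already
    # at the requested size.
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    current = groups["AutoScalingGroups"][0]["DesiredCapacity"]
    if current == desired:
        return {"status": "no-op", "desired": desired}

    # Lower MinSize along with DesiredCapacity so a value of 0 is allowed.
    asg.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        MinSize=desired,
        DesiredCapacity=desired,
    )
    return {"status": "updated", "from": current, "to": desired}
```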
This will work because, if your tasks run as part of a service, ECS will automatically relaunch them when the instances come back.
You could also manage the number of tasks using the scheduling capability of Amazon ECS.

Fargate ThrottlingException Rate exceeded

I am attempting to run 30 Fargate tasks at once and I am receiving "ThrottlingException: Rate exceeded".
In the ECS Service Limits, it mentions that the default limit for concurrent Fargate tasks is 50.
Am I being throttled for something other than the number of concurrent Fargate tasks? For example, is Fargate registering a container instance for each task; and thus I'm exceeding the container instance registration rate?
I reached out to AWS support and got the following answer:
ECS' run-task API, when launching a Fargate task, is throttled at 1 TPS by default with a burst rate of 10. This means that you can--at most--launch 10 tasks every 10 seconds. As such, we recommend that [you] use some backoff strategy on [your] end when launching tasks. Alternatively, [you] can use ECS create-service, in which case ECS will ensure that all tasks are run in time while honoring the throttle rate.
Essentially, although I could run 30 tasks concurrently, I couldn't start all 30 tasks at the same time due to the throttling of the run-task API for Fargate tasks.
As of November 7th 2018, this limit is not mentioned in the AWS documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_limits.html
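A sketch of one possible backoff strategy with boto3, retrying run_task when it is throttled; the cluster, task definition, and subnet values are placeholders:

```python
import time
import boto3
from botocore.exceptions import ClientError

ecs = boto3.client("ecs")

def run_task_with_backoff(max_retries=8):
    """Launch a single Fargate task, backing off when throttled."""
    delay = 1.0
    for _attempt in range(max_retries):
        try:
            return ecs.run_task(
                cluster="my-cluster",                    # hypothetical
                taskDefinition="my-task:1",              # hypothetical
                launchType="FARGATE",
                count=1,
                networkConfiguration={
                    "awsvpcConfiguration": {
                        "subnets": ["subnet-12345678"],  # hypothetical
                        "assignPublicIp": "ENABLED",
                    }
                },
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError("still throttled after %d retries" % max_retries)

# Launch the 30 tasks one at a time, letting the backoff absorb
# the 1 TPS / burst-10 limit on the run-task API.
for _ in range(30):
    run_task_with_backoff()
```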

Can multiple AWS Data Pipelines share an EC2 resource?

I have 10 pipelines that run at least twice per hour each and use an EC2 resource to copy data from an external MySQL server to S3.
My preference is to let the pipelines launch their own resources (as opposed to using a long-running instance launched manually), but I don't want 10 EC2 instances running continuously (EC2 instances are billed per hour) just to perform a 1-minute job twice an hour. Is there a way to have the pipelines share a launched instance?
You could have a long-running EC2 instance combined with Task Runner.
[...] In this pattern, you assume almost complete control over which
resources are used and how they are managed [...].
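For illustration, a rough boto3 sketch of how an activity in each pipeline could point at a shared Task Runner worker group instead of a pipeline-managed Ec2Resource; the pipeline ID, worker group name, and command are placeholders, and fields such as schedule and roles are omitted:

```python
import boto3

dp = boto3.client("datapipeline")

# The activity references a workerGroup rather than a "runsOn" Ec2Resource,
# so the Task Runner on the shared long-running instance (started with
# --workerGroup=shared-workers) picks up work from every pipeline that
# uses the same worker group name.
dp.put_pipeline_definition(
    pipelineId="df-0123456789ABCDEF",  # hypothetical pipeline ID
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            ],
        },
        {
            "id": "CopyToS3",
            "name": "CopyToS3",
            "fields": [
                {"key": "type", "stringValue": "ShellCommandActivity"},
                {"key": "workerGroup", "stringValue": "shared-workers"},
                {"key": "command", "stringValue": "/opt/scripts/mysql_to_s3.sh"},
            ],
        },
    ],
)
```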
Also, EC2 instances are no longer billed per hour: https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/
Not to be that guy lol, but Google offers per-minute charges (actually the first 10 minutes are billed, then per minute).

Will Amazon charge for the launching and terminating time of the cluster

When I spawn a cluster on Amazon EMR, it takes some time to launch. If I terminate the cluster before it finishes launching, do I get charged at all?
The cluster also takes some time to terminate. Suppose I terminate the cluster at 58 minutes and it takes an additional 5 minutes to terminate. Do I get charged for 1 hour or 2 hours?
Amazon EMR has two cost components: Amazon EC2 charges and Amazon EMR charges.
Both are charged based upon the running time of the Amazon EC2 instances, so you will be charged for the number of 60-minute periods that each Amazon EC2 instance was running (rounded up).
In your example, if you terminated the cluster at 58 minutes but the instances were still shown as "running" beyond 60 minutes, they would be charged for an additional hour.
If you are using Auto Scaling for EMR clusters, instances will automatically remain running until the end of the billing hour (giving extra capacity at no extra charge, through to the end of the hour).