I'm migrating software that I use to extract images from documents from iron.io to ECS Fargate, and container startup is very slow; sometimes it takes 3 minutes for a container to change state from PENDING to RUNNING. Is it possible to improve this speed? I've searched this subject, and there is little information about why it sometimes takes so long while other starts are faster (but still very slow).
The majority of the time spent transitioning an ECS Fargate task from PENDING to RUNNING is spent pulling images over the network from a Docker repository. Improving pull speed will most likely involve examining the images being pulled and trying to reduce layer sizes if at all possible.
If your service is behind an ALB, you can also try lowering the health check interval and the healthy threshold count so new tasks are marked healthy sooner.
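If you want to experiment with that, here is a minimal boto3 sketch; the target group ARN and the specific values are placeholders, not settings I know fit your service:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN - replace with your own.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-service/abc123"
)

# Tighten the health check so new Fargate tasks are considered healthy sooner:
# a 5-second interval with a healthy threshold of 2 means a task can pass
# health checks in roughly 10 seconds instead of the defaults.
elbv2.modify_target_group(
    TargetGroupArn=TARGET_GROUP_ARN,
    HealthCheckIntervalSeconds=5,
    HealthyThresholdCount=2,
)
```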
Related
I've been experiencing this with my ECS service for a few months now. Previously, when we would update the service with a new task definition, it would perform the rolling update correctly, deregistering the old tasks from the target group and draining all HTTP connections before eventually stopping them. However, lately ECS goes straight to stopping the old tasks before draining connections or removing them from the target group. This results in 8-12 seconds of API downtime for us while new HTTP requests continue to be routed to the now-stopped tasks that are still in the target group. This happens whether we trigger the service update via the CLI or the console - same behaviour. Shown here are a screenshot of a sample sequence of events from ECS demonstrating the issue, as well as the corresponding ECS agent logs for the same instance.
Of particular note when reviewing these ECS agent logs against the sequence of events is that the logs do not have an entry at 21:04:50 when the task was stopped. This feels like a clue to me, but I'm not sure where to go from here with it. Has anyone experienced something like this, or have any insights as to why the tasks wouldn't drain and be removed from the target group before being stopped?
For reference, the service is behind an AWS Application Load Balancer. Happy to provide additional details if someone thinks of anything else that may be relevant.
It turns out that ECS changed the timing of when the events would be logged in the UI in the screenshot. In fact, the targets were actually being drained before being stopped. The "stopped n running task(s)" message is now logged at the beginning of the task shutdown lifecycle steps (before deregistration) instead of at the end (after deregistration) like it used to.
That said, we were still getting brief downtime spikes at the load balancer level during deployments, but this ultimately turned out to be caused by the high startup overhead of the new task versions briefly pegging the CPU of the instances in the cluster at 100% when there was also enough traffic during the deployment, causing some requests to get dropped.
A good-enough-for-now solution was to adjust our minimum healthy deployment percentage up to 100% and set the maximum deployment percentage to 150% (as opposed to the old 200% setting), which forces the deployments to "slow down", launching only 50% of the intended new tasks at a time and waiting until they are stable before launching the rest. This spreads the high task startup overhead across two smaller CPU spikes rather than one large one and has so far successfully prevented any more downtime during deployments. We'll also be looking into reducing the startup overhead itself. Figured I'd update this in case it helps anyone else out there.
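For anyone wanting to apply the same throttling, a minimal boto3 sketch is below; the cluster and service names are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical cluster/service names - replace with your own.
ecs.update_service(
    cluster="my-cluster",
    service="my-api-service",
    deploymentConfiguration={
        # Never drop below the full desired count during a deployment...
        "minimumHealthyPercent": 100,
        # ...and only allow 50% extra tasks at a time, so the rollout happens
        # in two smaller waves instead of one big CPU spike.
        "maximumPercent": 150,
    },
)
```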
I am currently running a video encoding application on ECS but auto scaling is my biggest problem.
Users start live video encoding jobs from a front end. Once a job is placed, this is added as a redis queue (rq) job that runs on an ECS task placed on a c5d.large instance using ffmpeg.
Autoscaling is currently based on alarms. If CPU is above a set percentage, a new instance and task are spawned. If CPU is low, instances are checked, and if no jobs are running they are destroyed.
This is not a bad solution, but it feels clunky and slow. If a user wants to start two jobs one right after the other, it takes a couple of minutes for the instance to spawn and the task to be placed (even using warm pools).
Plus, CloudWatch alarms take a while to refresh and are not a super reliable way of measuring the work being done (a video encode at 720p will use less CPU than one at 1080p and thus throws off all my alarm thresholds).
Is there a better solution that someone can guide me to that allows for fast and precise autoscaling other than relying on cloudwatch alarms? I am tempted to try to create my own autoscaling system based on current executing jobs / workers and spawn/destroy instances directly calling the API from my code, but I'm hoping to find a better solution directly from within AWS.
Thanks
I too have this exact problem. AWS already has MediaConvert/Elastic Transcoder, but they're just too expensive, so I decided to create my own, first on Lambda with SST.dev (serverless), where each job is a single function invocation, but I ran into the 15-minute function timeout, mostly because I'm not copying codecs.
For scaling at this point, I would look at Kubernetes. This is the sort of problem that Kubernetes is intended to handle (dynamic resource scaling on demand). Kubernetes is rather non-trivial, but K8s is what the industry has settled on for the most part, so there are probably a lot of reasons to just go that route. You could start with K3s (I only just learned about it today) and move up to K8s when you are ready.
Since you're trying to find a solution directly within AWS, you could try EKS, but I'm not completely sure what the best option would be.
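If you want to stay on ECS, the idea floated in the question (scaling on the jobs themselves rather than CPU) could be done by publishing the queue backlog as a CloudWatch custom metric and letting a scaling policy track that instead. A rough sketch, assuming rq on Redis and boto3, with placeholder names:

```python
import boto3
import redis
from rq import Queue

# Hypothetical names - replace with your own queue and metric namespace.
QUEUE_NAME = "encoding-jobs"
NAMESPACE = "VideoEncoding"

redis_conn = redis.Redis(host="localhost", port=6379)
cloudwatch = boto3.client("cloudwatch")


def publish_backlog_metric() -> None:
    """Publish queued + in-progress encode jobs as a custom CloudWatch metric.

    A step-scaling or target-tracking policy can then key off this backlog,
    which reflects actual work better than CPU, since 720p and 1080p encodes
    use very different amounts of CPU for the same "one job".
    """
    queue = Queue(QUEUE_NAME, connection=redis_conn)
    backlog = len(queue) + queue.started_job_registry.count

    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[{"MetricName": "EncodeBacklog", "Value": backlog, "Unit": "Count"}],
    )


if __name__ == "__main__":
    # Run this on a schedule (cron, EventBridge, or a small sidecar loop).
    publish_backlog_metric()
```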
Premise
I'm trying to come up with the right choice of AWS construct for a containerized microservice (a set of microservices, in fact) deployment. The application will have an average load of 50% through the day, little to nothing during the night, and at very specific times in the day (which are not always predictable) there is a burst of high-volume requests. Also, it's not a super-busy set of microservices (in other words, 2 instances of 1 vCPU and 8 GB RAM will be just fine).
The Fargate compute option seems to be the better fit for this type of setup, except of course that:
When my application has little or no load during the night, I will still be charged for the full 1 vCPU and 8 GB (which to me is not true "pay as you use", as I might be using only 0.05 or 0.25 vCPU - hypothetical numbers).
The only way I can see to get around this is to write some redefinition strategy myself: watch CloudWatch events and recreate the Fargate tasks with less vCPU. However, that adds extra overhead in terms of deployment time (even if I ensure staggered deployments, it still means a lot of work each time there is a material event). Is there a better way to do this, or is there a more 'truly' out-of-the-box pay-as-you-use arrangement that lets you consume resources in a range continuously, based on what you are actually using at that moment, without having to jump through hoops?
Lastly, the purist in me still cannot reconcile, in theory, the fact that a microservice isn't really a 'task', so the Fargate compute option doesn't sound intuitively right to me, even if I could think of a microservice as an extreme case of a task that runs permanently. Cost-wise, am I better off using EC2, as some options seem to cost less than Fargate (I'm aware of the additional responsibility of maintaining/patching those EC2 instances)?
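For reference, the "recreate the tasks with less vCPU" workaround mentioned above could look roughly like the following boto3 sketch, run on a schedule (for example from a Lambda at night). All names are placeholders, and a real version would also need to copy over the rest of the task definition (task role, volumes, logging, and so on):

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical names - replace with your own cluster and service.
CLUSTER = "my-cluster"
SERVICE = "my-microservice"


def resize_service(cpu: str, memory: str) -> None:
    """Register a new task definition revision with a different Fargate size
    and point the service at it, which triggers a rolling deployment."""
    service = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
    task_def = ecs.describe_task_definition(
        taskDefinition=service["taskDefinition"]
    )["taskDefinition"]

    # NOTE: a complete version should also copy taskRoleArn, volumes, etc.
    new_revision = ecs.register_task_definition(
        family=task_def["family"],
        containerDefinitions=task_def["containerDefinitions"],
        executionRoleArn=task_def["executionRoleArn"],
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu=cpu,        # e.g. "256" (0.25 vCPU) overnight, "1024" during the day
        memory=memory,  # e.g. "512" overnight, "8192" during the day
    )
    ecs.update_service(
        cluster=CLUSTER,
        service=SERVICE,
        taskDefinition=new_revision["taskDefinition"]["taskDefinitionArn"],
    )


# Example: shrink overnight, restore in the morning.
# resize_service(cpu="256", memory="512")
# resize_service(cpu="1024", memory="8192")
```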
After migrating from EC2 cluster instances to AWS Fargate, I realized that deployments take a lot longer. Before, they would take 1-2 minutes; now some deployments take up to 5 minutes. This post claims that their deployments on Fargate even take up to 10 minutes.
Does anybody know of a way to speed them up? I can't find any documentation on this topic.
Through further googling I found this Reddit thread. An AWS employee wrote:
With regard to time to provision and start a container it is definitely longer when using Fargate. We may reduce the length of the provisioning state in the future, but Fargate is doing much more under the hood than ECS on your own self managed hosts. When you self manage hosts they are already up and running, and may even already have your docker image downloaded and cached locally, so ECS is able to launch the container very quickly. That's not the case with Fargate.
So shrinking the image should help a little. But in general I guess I'll have to live with it and hope for optimizations on AWS' side.
Here's the breakdown of tasks and possible improvements that I've found while researching options to improve my deployment times with ECS Fargate:
Fargate Deployment Overview
Here's a breakdown of what goes on behind the scenes and contributes to the deployment duration:
Provision the Fargate worker instance
Provision/attach the ENI
Download the Docker image
Here you have opportunities for improvement:
reduce the size of your Docker image
Networking throughput is based on the CPU allocations to the Fargate Task - if you allocate more CPU then you get more networking and the image will download faster
Application Startup time
Becomes a factor if your application requires a health check grace period, again affected by CPU allocation
If your task is associated with a load balancer the deployment will also need to pass health checks, and you'll need to account for:
Load balancer deregistration delay
Pass health checks: health check interval * healthy threshold count (for example, a 30-second interval with a threshold of 5 adds roughly 150 seconds)
How to deploy Fargate Task updates faster
Over-allocate the CPU
Reduce the deregistration delay (see the sketch after this list)
Set the health check threshold to 2 and the interval to 5 seconds
Don't forget to account for a health check grace period if your app needs it
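Here's a minimal boto3 sketch of the deregistration delay and grace period tweaks; the ARN, names, and values are placeholders rather than the exact settings I used:

```python
import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")

# Hypothetical identifiers - replace with your own.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-app/abc123"
)

# Reduce how long the ALB waits to drain connections from old tasks.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "30"}],
)

# If your app needs warm-up time, give the service an explicit grace period
# so the tightened health checks don't kill tasks that are still starting.
ecs.update_service(
    cluster="my-cluster",
    service="my-app-service",
    healthCheckGracePeriodSeconds=60,
)
```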
My Results
During my testing, I was able to deploy my application, which typically takes about 8 minutes with 1024 CPU (1 vCPU), in under 4 minutes with 4096 CPU (4 vCPU).
Disclaimer
Your tasks likely require considerably less CPU, and you don't want to pay for over-allocated CPU all the time. So, run your deployment with over-allocated resources and then run another deployment right after with the original CPU allocation.
Probably not a solution you want to use for every deployment, but could be a solution for hotfix deployments.
Additional Reading
Highly recommend reading Scaling containers on AWS in 2022
Two reasons they're slower, in my experience:
awsvpc network mode attaches an ENI to the task. When Lambda has to do the same for a function running in a VPC, it is known to dramatically increase the initial spin-up time.
Docker image size also affects startup time, since the image will usually need to be downloaded to whatever hidden host for a task to launch. I've done some benchmarking with a small 200MB container and a 2.5GB container. The former did start up quicker.
You can't do much about awsvpc, since Fargate requires it. Shrinking down that image would be your next biggest impact.
I have a web service running on several EC2 boxes. Based on the Cloudwatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?
Not by using Auto Scaling, at least not as instantly as you describe. You could make it much faster, however, by building your own modified AMI image in which you bake the JAR and the latest OS patches. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
Alternatively, you can manage it yourself by keeping servers switched off and starting them with custom Lambda scripts triggered by CloudWatch alerts. But since stopped servers aren't exactly free either, I would recommend against that for cost reasons.
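If you do go that route, here is a rough sketch of such a Lambda, assuming the standby instances are pre-built from your AMI and marked with a tag of your choosing (the tag key/value here are made up):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical tag used to mark pre-built standby servers.
STANDBY_FILTER = [
    {"Name": "tag:role", "Values": ["standby-web"]},
    {"Name": "instance-state-name", "Values": ["stopped"]},
]


def lambda_handler(event, context):
    """Start one stopped standby instance when the scale-up alarm fires."""
    reservations = ec2.describe_instances(Filters=STANDBY_FILTER)["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if not instance_ids:
        return {"started": []}

    started = instance_ids[:1]  # start one at a time; adjust to taste
    ec2.start_instances(InstanceIds=started)
    return {"started": started}
```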
Before you venture into the journey of auto scaling your infrastructure and spend the time and effort, perhaps you should do a bit of analysis on the traffic pattern day over day, week over week, and month over month and see if it's even necessary. Try answering some of these questions.
What was the highest traffic your app has ever handled? How did the servers fare given that traffic? How was the user response time?
When does your traffic ramp up or hit peak? Some apps get traffic during business hours while others in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts are averaging 20% CPU. If the requests triple to 3k requests/min, do you see around 60%-70% average CPU? That is a good indication that your app usage is fairly predictable and can scale linearly by adding more hosts. But if you've never seen traffic burst like that, there's no point over-provisioning.
Unless you have a Zynga-like application that can see a large amount of traffic at once, better understanding your traffic pattern and throwing in an additional host as insurance could be enough. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one solution would be to containerize your application with Docker or create your own AMI like others have suggested; it will still take a few minutes to boot them up. The next option is to keep hosts on standby and add them to your load balancers using scripts (or Lambda functions) that watch metrics you define, as sketched below (I'm assuming your app is running behind load balancers).
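A minimal sketch of that "put a standby host into rotation" step, assuming the instance was already started (for example by a Lambda like the one above) and assuming an ALB target group; the IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Hypothetical identifiers - replace with your own.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/web/abc123"
)
STANDBY_INSTANCE_ID = "i-0123456789abcdef0"

# Wait for the standby host to be running, then register it with the ALB
# target group so it starts receiving traffic once health checks pass.
ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY_INSTANCE_ID])

elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": STANDBY_INSTANCE_ID, "Port": 8080}],
)
```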
Good luck.