AWS Fargate Task Pricing

I have set up a Task Definition with a maximum allocation of 1024 CPU units and 2048 MiB of memory, with Fargate as the launch type. When I looked at the costs, it was way more expensive than I expected ($1.00 per day, or $0.06 per hour [us-east-1]). What I did was reduce it to 256 units, and I am waiting to see if the cost goes down. But how does the task's maximum allocation work? Is the task definition's maximum allocation responsible for Fargate provisioning a more powerful server at a higher cost, even if I don't use 100% of it?
The containers running 24/7 are a NestJS application + Apache (do not ask why) + Redis, and I can see that CPU usage is low, but the price is too high for me. Is Fargate the wrong choice for this? Should I go for EC2 instances with ECS?

When you run a task, Fargate provisions a container with the resources you have requested. It's not a question of "use up to this maximum CPU and memory," but rather "use this much CPU and memory." You'll pay for that much CPU and memory for as long as the task runs, as per the AWS Fargate pricing. At current prices, the CPU and memory you listed (1024 CPU units, 2048 MiB) come to $0.04937/hour, or $1.18488/day, or about $35.55/month.
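For reference, here's that arithmetic written out, assuming the published us-east-1 on-demand Linux/x86 rates at the time of writing (about $0.04048 per vCPU-hour and $0.004445 per GB-hour):

```typescript
// Rough Fargate on-demand cost estimate (us-east-1, Linux/x86 rates at the
// time of writing: $0.04048 per vCPU-hour, $0.004445 per GB-hour).
const VCPU_HOUR = 0.04048;
const GB_HOUR = 0.004445;

function fargateHourlyCost(cpuUnits: number, memoryMiB: number): number {
  const vcpus = cpuUnits / 1024; // 1024 CPU units = 1 vCPU
  const gb = memoryMiB / 1024;   // task memory is billed per GB
  return vcpus * VCPU_HOUR + gb * GB_HOUR;
}

const hourly = fargateHourlyCost(1024, 2048);
console.log(hourly.toFixed(5));             // 0.04937 per hour
console.log((hourly * 24).toFixed(5));      // 1.18488 per day
console.log((hourly * 24 * 30).toFixed(2)); // 35.55 per 30-day month
```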
Whether Fargate is the right or wrong choice is subjective. It depends on what you're optimizing for. If you just want to hand off a container and let AWS manage everything about how it runs, it's hard to beat ECS Fargate. On the other hand, if you are optimizing for lowest cost, on-demand Fargate is probably not the best choice. You could use Fargate Spot (about $10.66/month) if you can tolerate the constraints of Spot. Alternatively, you could use an EC2 instance (t3.small @ $14.98/month), but then you'll be responsible for managing everything.
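If you do want to try Fargate Spot, a minimal sketch with the AWS SDK for JavaScript v3 might look like the following. The cluster, service, task definition, and subnet IDs are placeholders, and the cluster would need the FARGATE_SPOT capacity provider enabled:

```typescript
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });

// Run the service on Fargate Spot instead of on-demand Fargate.
// Cluster, service, task definition, and subnet names are placeholders.
await ecs.send(new CreateServiceCommand({
  cluster: "my-cluster",
  serviceName: "nest-app",
  taskDefinition: "nest-app:1",
  desiredCount: 1,
  capacityProviderStrategy: [
    // Interruptible capacity, typically around 70% cheaper than on-demand.
    { capacityProvider: "FARGATE_SPOT", weight: 1 },
  ],
  networkConfiguration: {
    awsvpcConfiguration: {
      subnets: ["subnet-0123456789abcdef0"],
      assignPublicIp: "ENABLED",
    },
  },
}));
```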
You didn't mention how you're running Redis, which will factor in here as well. If you're running Redis on ElastiCache, you'll incur that cost too, but you won't have to manage anything. If you end up using an EC2 instance, you could run Redis on the same instance, saving latency and expense, with the trade-off that you'll have to install and operate Redis yourself.
Ultimately, you're making tradeoffs between time saved and money spent on managed services.

Related

AWS Serverless for Microservices and true "pay-as-you-use"

Premise
I'm trying to come up with the right choice of AWS construct for a containerized microservice (a set of microservices, in fact) deployment. The application will have an average load of 50% through the day, little to nothing during the night, and at very specific times of day (which are not always predictable) a burst of high-volume requests. Also, it's not a super-busy set of microservices (in other words, 2 instances of 1 vCPU and 8 GB RAM will be just fine).
The Fargate compute option seems to be a better fit for this type of setup, except of course that:
When my application has little or no load during the night, I will still be charged for the full 1 vCPU and 8 GB (which, to me, is not truly "pay as you use", as I might be using only 0.05 or 0.25 vCPU - hypothetical numbers).
The only way to get around this is to write some redefinition strategy myself: watch CloudWatch events and recreate the Fargate tasks with less vCPU. However, that carries extra overhead in deployment time (even if I ensure staggered deployments, it still means a lot of work each time there is a material event). Is there a better way to do this, or is there a more truly out-of-the-box pay-as-you-use arrangement that lets you consume resources in a range, continuously, based on what you are actually using at that moment, without having to jump through hoops?
Lastly, the purist in me still cannot reconcile, in theory, the fact that a microservice isn't really a 'task', and use of a Fargate compute option doesn't sound intuitively right to me, even if I could think of a microservice as an extreme case of a task running permanently. Cost-wise, am I better off using EC2, as some options seem to come out cheaper than Fargate (I'm aware of the additional responsibility of maintaining/patching those EC2 instances)?
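For what it's worth, the "redefinition strategy" described above (re-register the task definition with a smaller size and roll the service onto the new revision) could be sketched with the AWS SDK for JavaScript v3 roughly like this; the names and sizes below are placeholders, not a recommendation:

```typescript
import {
  ECSClient,
  DescribeTaskDefinitionCommand,
  RegisterTaskDefinitionCommand,
  UpdateServiceCommand,
} from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });

// Re-register the current task definition with smaller CPU/memory and roll
// the service onto it, e.g. triggered on a schedule for the overnight window.
async function downsizeService(cluster: string, service: string, family: string) {
  // Fetch the latest revision so the container definitions can be reused.
  const { taskDefinition } = await ecs.send(
    new DescribeTaskDefinitionCommand({ taskDefinition: family })
  );

  const registered = await ecs.send(new RegisterTaskDefinitionCommand({
    family,
    containerDefinitions: taskDefinition!.containerDefinitions!,
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    executionRoleArn: taskDefinition!.executionRoleArn,
    cpu: "256",     // down from 1024 for the low-traffic window (illustrative)
    memory: "1024", // must stay within a valid Fargate CPU/memory combination
  }));

  // Point the service at the new revision; ECS performs a rolling deployment.
  await ecs.send(new UpdateServiceCommand({
    cluster,
    service,
    taskDefinition: registered.taskDefinition!.taskDefinitionArn,
  }));
}
```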

AWS EC2 - Do auto-scaled instances run for a minimum amount of time (CPU load average based)?

I've been running a scheduler for my workload for a while now. Recently, demand has become more inconsistent, and the workload has been backing up at what should be slow points of the week. I've started implementing auto scaling groups in two of my regions that scale based on CPU load.
I've got it set at 80% CPU load average, and my queued work is good at maximizing the CPU, and I opted for more, smaller instances that are cheaper to run. Everything appears to be operating ideally, but I just have a concern about instances being started and stopped too often. I know on EC2 you pay for the full hour regardless of how long it runs during that hour, so...
Is the auto scaling taking this into account and leaving them running for at least a certain amount of time like ~30-45 minutes?
Do I have to instead work with the CPU average and the various timeouts to help prevent wasteful start/stops?
Depending on which AMI you're running, you might benefit from per-second billing. In that case, you'll be charged for a minimum of only 60 seconds. From my understanding of your use case, this billing method would be ideal (cost-wise) for you, as you seem to frequently start and stop instances that live for short amounts of time.
To my knowledge, there's no built-in mechanism in autoscaling that will try to optimize your EC2 usage to minimise costs.
If, however, you're using an AMI that is not eligible for per-second billing, you could look into Spot Instances to further minimise your costs, if your workload fits that scheduling model.
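As an illustration of working with the CPU average and warmup settings rather than relying on a minimum runtime, a target-tracking policy might be configured roughly like this with the AWS SDK for JavaScript v3 (the group and policy names are placeholders):

```typescript
import {
  AutoScalingClient,
  PutScalingPolicyCommand,
} from "@aws-sdk/client-auto-scaling";

const autoscaling = new AutoScalingClient({ region: "us-east-1" });

// Target-tracking policy on average CPU, with a long estimated warmup so
// freshly launched instances aren't counted (and replaced) too aggressively.
await autoscaling.send(new PutScalingPolicyCommand({
  AutoScalingGroupName: "worker-asg",   // placeholder
  PolicyName: "keep-cpu-near-80",       // placeholder
  PolicyType: "TargetTrackingScaling",
  EstimatedInstanceWarmup: 300, // seconds before a new instance counts toward the metric
  TargetTrackingConfiguration: {
    PredefinedMetricSpecification: {
      PredefinedMetricType: "ASGAverageCPUUtilization",
    },
    TargetValue: 80,
    // DisableScaleIn: true, // uncomment to handle scale-in yourself
  },
}));
```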

How to speed up deployments on AWS Fargate?

After migrating from EC2 cluster instances to AWS Fargate, I realized that deployments take a lot longer. Before, they would take 1-2 minutes; now some deployments take up to 5 minutes. This post claims that their deployments on Fargate even take up to 10 minutes.
Does anybody know of a way to speed them up? I can't find any documentation on this topic.
Through further googling I found this Reddit thread. An AWS employee wrote:
With regard to time to provision and start a container it is definitely longer when using Fargate. We may reduce the length of the provisioning state in the future, but Fargate is doing much more under the hood than ECS on your own self managed hosts. When you self manage hosts they are already up and running, and may even already have your docker image downloaded and cached locally, so ECS is able to launch the container very quickly. That's not the case with Fargate.
So shrinking the image should help a little. But in general I guess I'll have to live with it and hope for optimizations on AWS' side.
Here's the breakdown of tasks and possible improvements that I've found while researching options to improve my deployment times with ECS Fargate:
Fargate Deployment Overview
Here's a breakdown of what's going on behind the scenes that contributes to the deployment duration:
Provision the Fargate worker instance
Provision/attach the ENI
Download the Docker image
Here you have opportunities for improvement:
reduce the size of your Docker image
Network throughput is based on the CPU allocated to the Fargate task - if you allocate more CPU, you get more network throughput and the image will download faster
Application Startup time
Becomes a factor if your application requires a health check grace period, and is again affected by CPU allocation
If your task is associated with a load balancer, the deployment will also need to pass health checks, and you'll need to account for:
Load balancer deregistration delay
Passing health checks: roughly (health check interval × healthy threshold count)
How to deploy Fargate Task updates faster
Over-allocate the CPU
Reduce the deregistration delay
Set the health check threshold to 2 and the interval to 5 seconds (see the sketch after this list)
don't forget to account for a health check grace period if your app needs it
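As a rough sketch of those last settings with the AWS SDK for JavaScript v3 (the target group ARN is a placeholder, and the exact values should match what your app can tolerate):

```typescript
import {
  ElasticLoadBalancingV2Client,
  ModifyTargetGroupCommand,
  ModifyTargetGroupAttributesCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbv2 = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Placeholder ARN for the service's target group.
const targetGroupArn =
  "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app/0123456789abcdef";

// Tighten the health check so a new task is marked healthy after ~10 seconds.
await elbv2.send(new ModifyTargetGroupCommand({
  TargetGroupArn: targetGroupArn,
  HealthCheckIntervalSeconds: 5,
  HealthyThresholdCount: 2,
}));

// Shorten the deregistration (connection draining) delay from the 300s default.
await elbv2.send(new ModifyTargetGroupAttributesCommand({
  TargetGroupArn: targetGroupArn,
  Attributes: [{ Key: "deregistration_delay.timeout_seconds", Value: "10" }],
}));
```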
My Results
During my testing, I was able to deploy my application that typically takes about 8 minutes w/1024 CPU (1 vCPU) in under 4 minutes w/4096 CPU (4 vCPU).
Disclaimer
Your tasks likely require considerably less CPU, and you don't want to always pay for over-allocated CPU. So, run your deployment with over-allocated resources, and then run another deployment right after with the original CPU allocation.
Probably not something you want to do for every deployment, but it could be a solution for hotfix deployments.
Additional Reading
Highly recommend reading Scaling containers on AWS in 2022
Two reasons they're slower, in my experience:
awsvpc network mode attaches an ENI to the task. When the same thing has to happen for a Lambda running in a VPC, it is known to dramatically increase the initial spin-up time.
Docker image size also affects startup time, since the image usually needs to be downloaded to whatever hidden host is used before a task can launch. I've done some benchmarking with a small 200 MB container and a 2.5 GB container. The former started up quicker.
You can't do much about awsvpc, since Fargate requires it. Shrinking that image would be your next biggest impact.

How to get consistent CPU utilization on AWS

I've now learnt that when I start a new EC2 instance it has a certain number of CPU credits, because of which its performance is high when it starts processing but gradually drops over time as the credits run out. Past that point, the instance runs at what appears to be the baseline CPU utilisation rate. To put numbers on it: when I started the EC2 instance (t2.nano), CloudWatch reported around 80% CPU utilisation, gradually decreasing down to 5%.
Now I'm happy to use one of the better instance types pending the instance limit request. But whilst that is in progress, I'd like to know whether the issue of reducing performance over time will still hold even with the better instance type?
Would I require a dedicated host setup if I wish to ensure I get consistent CPU utilisation? The only problem I can see here is that I'm running an SQS worker queue, and Elastic Beanstalk lets us easily set up a worker environment which reads messages from the queue. From what I've read and from looking at the configuration options available in Elastic Beanstalk, I don't think I'll be able to launch instances into a dedicated host directly. Most of my reading has led me to believe that I'll have to learn how to use a VPC. Would that be correct?
So I guess my questions are: would simply moving to a more powerful instance type guarantee consistent CPU performance, or is a dedicated host required? If so, is it possible to set one up with Elastic Beanstalk, or would it have to be set up manually? And if it is set up manually, can it be configured to work with an SQS queue automatically?
If you want consistent CPU performance, you should avoid the burstable performance instances (the T2 family). All other families of instances (M5, C5, etc) will have consistent CPU performance over time. You can use any instance family with Elastic Beanstalk. No need for a dedicated host.
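If you want to confirm that credits are behind the drop-off you observed, a quick check of the CPUCreditBalance metric might look like this with the AWS SDK for JavaScript v3 (the instance ID is a placeholder):

```typescript
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({ region: "us-east-1" });

// Pull the CPU credit balance for a burstable (T2) instance over the last
// 6 hours; a balance heading toward zero explains the drop to baseline.
const { Datapoints } = await cloudwatch.send(new GetMetricStatisticsCommand({
  Namespace: "AWS/EC2",
  MetricName: "CPUCreditBalance",
  Dimensions: [{ Name: "InstanceId", Value: "i-0123456789abcdef0" }], // placeholder
  StartTime: new Date(Date.now() - 6 * 60 * 60 * 1000),
  EndTime: new Date(),
  Period: 300,
  Statistics: ["Average"],
}));

console.log(Datapoints?.map(d => `${d.Timestamp}: ${d.Average}`).join("\n"));
```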

Computing power of AWS Elastic Beanstalk instances

I have a CPU-intensive application that I'm considering hosting on 1+ AWS Elastic Beanstalk instances. If at all possible, I'd like to throttle it so that I don't dip over the "free" utilization of the instances.
So I need to figure out what kind of hardware/virtualized hardware the Beanstalk instances are running on, and compare that to the maximum CPU utilization of the free versions.
So for instance, if each Beanstalk instance is running on, say, 2 GHz CPUs, and my app performs a specific "supercalc" operation that takes 50 million CPU operations, but the free version of the app only allows me to utilize 100 billion operations per day, then I am limited to 100 billion / 50 million = 2,000 "supercalcs" per day on a free instance. And if the CPU is 2 GHz, it can perform 2 GHz / 50 million = 40 supercalcs per second, so my app instance could only run flat out for about 2,000 / 40 = 50 seconds before I've "maxed out" the free CPU utilization on the Beanstalk instance.
This is probably not a great example, but hopefully illustrates what I'm trying to achieve. I need to figure out how much I need to throttle my app, or how long my app could run before I max out the Beanstalk CPU utilization, and it really comes down to how beefy the AWS Beanstalk machines are. Thanks in advance!
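Just to make the hypothetical arithmetic explicit (same made-up numbers as above):

```typescript
// The hypothetical numbers from the question, written out.
const cpuHz = 2e9;            // a 2 GHz core ≈ 2 billion operations/second
const opsPerSupercalc = 50e6; // 50 million operations per "supercalc"
const freeOpsPerDay = 100e9;  // hypothetical free quota of 100 billion ops/day

const supercalcsPerDay = freeOpsPerDay / opsPerSupercalc;             // 2,000
const supercalcsPerSecond = cpuHz / opsPerSupercalc;                  // 40
const secondsUntilQuotaUsed = supercalcsPerDay / supercalcsPerSecond; // 50 seconds

console.log({ supercalcsPerDay, supercalcsPerSecond, secondsUntilQuotaUsed });
```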
Amazon EC2 instances aren't billed on a "CPU utilisation" basis (I think Google App Engine is?) - EC2 instance billing is based on the amount of time the machine is "on", regardless of what it is doing. See the Amazon EC2 Pricing page for the cost of running the different instance sizes in different regions.
There is a special case which is the "Micro" instance - this provides the ability to have short bursts of higher CPU usage than the "small" instance at a lower cost, but if you overuse it you get throttled back for a period (which you don't with a Small). This isn't the same as having an operation limit though, and the price remains the same whether you're throttled or not.
Also note that with Elastic Beanstalk you're also paying for the Elastic Load Balancer, any storage and bandwidth, and any database service you are using.
Given all that, though - AWS does have a Free Tier - however, this is only for the first 12 months of a new account. The Free Tier will cover the cost of a micro EC2 instance, Elastic Load Balancer, RDS database, and other ancillary services - see the link for more info.