Amazon m1.small vs micro instance CPU performance - amazon-web-services

I have an Amazon micro instance and it looks like the CPU is not enough. I'm going to upgrade to the next cheapest instance with more CPU available.
Could that be the m1.small instance? According to the description they have the same number of compute units, and it looks like the micro can even outperform the small instance when more compute units become available for short CPU bursts.

Update: note that this information is only really applicable to the previous generation t1.micro instance type, which had a cyclical clamping throttle algorithm. The current generation t2 instance class, including the t2.micro, has much better performance than the t1.micro and an entirely different algorithm controlling the throttling. Throttling on the t2 instance class is driven by CPU credits, which are visible in the CloudWatch metrics for the instance; the throttling is much more graceful and kicks in much later. Throttling on the t1.micro was essentially a black box, and the system would repeatedly shift in and out of throttled mode under high loads. There is no longer a compelling reason to use a t1 instance, unless you are running a PV AMI. The t2 is HVM.
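For what it's worth, the t2 credit metrics can be read straight out of CloudWatch. Here is a minimal boto3 sketch (the instance ID is a placeholder) that pulls the CPUCreditBalance metric for an instance:

    # Minimal sketch, assuming boto3 is configured with credentials and a region.
    # The instance ID is a placeholder -- substitute your own t2 instance.
    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",    # remaining burst credits
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(hours=3),
        EndTime=datetime.utcnow(),
        Period=300,                       # 5-minute datapoints
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))

Watching this balance drain under load tells you how close the instance is to being throttled.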
ECUs are "EC2 Compute Units" and represent, approximately, the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron processor.
This Comparison of t1.micro and m1.small explains that a small instance has 1 ECU continually available, while a Micro can operate in short bursts of up to 2 ECU, but with an ongoing baseline of much less.
In my testing, I've found that consuming 100% CPU for about 10-15 seconds on a micro instance gets you throttled down to a fraction of that -- approximately 0.2 ECU -- for about the next 2-3 minutes, when the throttling lifts for a few seconds, then the cycle repeats, though it only repeats if you are still pulling the hard burst. They accomplish the throttling via the hypervisor "stealing" a large percentage of your available cycles. You can see this in "top" when it's happening. If you go long enough without demanding 100% CPU, the 2 ECU burst is immediately available when you need it -- it's not as if they are cycling the performance up and down with a timer -- the throttling is reactive to the imposed load.
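If you want to see this behavior for yourself, a crude test is to burn CPU in a tight loop and log how much work gets done each second; on a throttled t1.micro the rate collapses after the burst window and then recovers in cycles. A rough sketch (plain Python, nothing instance-specific):

    # Rough sketch: burn CPU and print iterations completed per second.
    # On a burstable instance you should see the rate drop sharply once the
    # hypervisor starts stealing cycles, then recover when throttling lifts.
    # Stop it with Ctrl-C.
    import time

    def burn_for_one_second():
        count = 0
        end = time.time() + 1.0
        while time.time() < end:
            count += 1
        return count

    while True:
        print(time.strftime("%H:%M:%S"), burn_for_one_second(), "iterations")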
Over time, the small instance will get more processing done, since the micro is throttled so aggressively after a few seconds of heavy usage, long enough to more than counteract the brief periods of nice burstability. This makes sense, though, since the micro is a lower-cost instance.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html
...so, yes, try a small instance.

Related

Cloud Run, ideal vCPU and memory amount per instance?

When setting up Cloud Run, I am unsure how much memory and vCPU should be set per server instance.
I use Cloud Run for mobile apps.
I am confused about when to increase vCPU and memory instead of increasing server instances, and when to increase server instances instead of vCPU and memory.
How should I calculate it?
There isn't a good answer to that question. You have to know the limits:
The max number of requests that you can handle concurrently with 4 vCPUs and/or 32 GB of memory (up to 1000 concurrent requests)
The max number of instances on Cloud Run (1000)
Then it's a matter of tradeoffs, and it's highly dependent on your use case.
Bigger instances reduce the number of cold starts (and thus the high latency when your service scales up). But if you have only 1 request at a time, you will pay for a BIG instance to do a small amount of processing.
Smaller instances allow you to optimize cost and to add only a small slice of resources at a time, but you will have to spawn new instances more often and you will have more cold starts to endure.
Optimize what you prefer, find the right balance. No magic formula!!
You can simulate a load of requests against your current settings using k6.io, check the memory and CPU percentage of your container, and adjust them lower or higher to see if you can get more RPS out of a single container.
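If you would rather not set up k6, a quick-and-dirty load sketch in Python gives a rough RPS number too (the service URL, concurrency, and request count below are placeholders):

    # Quick-and-dirty load sketch: fire concurrent requests and report rough RPS.
    # The URL and settings are placeholders, not real Cloud Run values.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://my-service-abc123-uc.a.run.app/"  # placeholder service URL
    CONCURRENCY = 80        # matches the default --concurrency
    TOTAL_REQUESTS = 800

    def fetch(_):
        with urllib.request.urlopen(URL, timeout=30) as resp:
            return resp.status

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        statuses = list(pool.map(fetch, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start

    print(f"{TOTAL_REQUESTS} requests in {elapsed:.1f}s -> {TOTAL_REQUESTS / elapsed:.1f} RPS")
    print("non-200 responses:", sum(1 for s in statuses if s != 200))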
Once you are satisfied with a single container instance's throughput, let's say 100 RPS per instance, you can then specify the --min-instances and --max-instances flags with gcloud, depending of course on the --concurrency flag, which in this example would be set to 100.
Also note that concurrency starts at the default of 80 and can go up to 1000.
More info about this can be read on the links below:
https://cloud.google.com/run/docs/about-concurrency
https://cloud.google.com/sdk/gcloud/reference/run/deploy
I would also recommend investigating whether you need to pass the --cpu-throttling or --no-cpu-throttling flag, depending on how you want to handle cold starts.

high cpu in redis 2.8 (elasticache) cache.r3.large

Looking for some help with ElastiCache.
We're using ElastiCache Redis to run a Resque-based queueing system.
This means it's a mix of sorted sets and lists.
Under normal operation, everything is OK and we're seeing good response times and throughput.
CPU level is around 7-10%, and GET+SET commands are around 120-140K operations. (All metrics are CloudWatch based.)
But when the system experiences a (mild) burst of data, enqueuing several thousand messages, we see the server become nearly non-responsive.
The CPU is steady at 100% utilization (the metric says 50%, but it's maxing out a single core).
The number of operations drops to ~10K.
Response times slow to a matter of SECONDS per request.
We would expect that even IF the CPU got loaded to such an extent, the throughput level would stay the same; this is what we experience when running Redis locally. Redis can max out the CPU, but throughput stays high, and as it is natively single-threaded, no context switching appears.
As far as we know, we do NOT impose any limits or persistence, and there is no replication; we're using the basic config.
The size: cache.r3.large.
We are not using periodic snapshotting.
This seems like a characteristic of a rogue Lua script.
A defect in such a script could cause a big CPU load while degrading the overall throughput.
Are you using one? Try looking in the Redis slow log for it.
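A quick way to check from the client side is the SLOWLOG command; here is a minimal redis-py sketch (the ElastiCache endpoint below is a placeholder, not your actual cluster):

    # Minimal sketch using redis-py; the endpoint is a placeholder.
    import redis

    r = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)

    # Fetch the 10 most recent slow log entries (equivalent to SLOWLOG GET 10).
    for entry in r.slowlog_get(10):
        print(entry["id"], entry["duration"], entry["command"])

Long-running Lua scripts or expensive commands against big sorted sets will show up here with their execution time in microseconds.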

CPU credits on an AWS EC2 instance: how does that work?

I have a micro Amazon EC2 instance, and whenever the hosted application is given a large load for a couple of hours, the application slows down and the CPU credits drop almost to zero.
I have turned the auto scaling option on, but it still does not work. Can someone help me figure out how to get around this?
All t2 instances use a burstable model, which is not really intended for sustained heavy usage. The instance, when idling, builds up CPU credits up to a cap. When the CPU is maxed, the credits are spent. Once you run out, you are capped at a very low rate. The number of credits you can accumulate and the rate at which you earn them depend on which t2 instance you are using.
Autoscaling is for horizontal scaling. With it you can launch extra instances based on certain triggers. But you need to use a load balancer to spread traffic across instances.
As to the question of how you can tell from the CPU utilization whether you are using all of your credited CPU at 100%: in my experience, you don't. What you see in top or iostat, for example, is your CPU% reported at something quite low, like 30%, while IO is not bottlenecked, and you wonder why it is stuck at such a low CPU% usage.
But there is a value you might see in top, at the far right end, something like "68% st" -- that is the "steal" value. This means that you only get 32% of that CPU, and so your 30% CPU value is actually about 94% of what you actually get.
I have also observed that when you add up the CPU% of the processes in the running state (R) in top, you arrive at a number relative to your actually available CPU. For example, I had 24 processes running at 8% each on a t2.medium instance with 2 virtual CPUs; that means 192% actually running, which is 96% of the available CPU cycles, not the 32% reported by top and iostat as % user.
If I were to generate an auto scaling trigger, I would look at what I can get from the /proc file system and take the "steal" amount into account.
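For example, steal time is the 8th counter on the aggregate cpu line of /proc/stat, so a trigger script could sample it with something as small as this sketch:

    # Sketch: sample /proc/stat twice and report the steal percentage over the
    # interval. Fields after "cpu" are:
    # user nice system idle iowait irq softirq steal guest guest_nice
    import time

    def read_cpu_counters():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = read_cpu_counters()
    time.sleep(5)
    after = read_cpu_counters()

    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    steal_pct = 100.0 * deltas[7] / total if total else 0.0
    print(f"steal over the last 5s: {steal_pct:.1f}%")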

AWS EC2 reaches high CPU usage during nighttime

I just implemented a few alarms with CloudWatch last week, and I noticed a strange behavior with EC2 small instances every day between 6:30 and 6:45 (UTC).
I implemented one alarm to warn me when an Auto Scaling group has its CPU over 50% for 3 minutes (average sample) and another alarm to warn me when the same Auto Scaling group goes back to normal, which I considered to be CPU under 30% for 3 minutes (also an average sample). I did that twice: once for zone A and once for zone B.
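(For reference, the kind of "high use" alarm described here can be created with boto3 roughly as below; the Auto Scaling group name and SNS topic ARN are placeholders.)

    # Rough sketch of the "high use" alarm described above, using boto3.
    # The Auto Scaling group name and SNS topic ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="asg-zone-a-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg-zone-a"}],
        Statistic="Average",
        Period=60,                # 1-minute samples
        EvaluationPeriods=3,      # over 3 minutes
        Threshold=50.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )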
It looks OK, but something is happening between 6:30 and 6:45 that takes a certain amount of processing for 2 to 5 minutes. The CPU rises and sometimes triggers the "high use" alarm, but it always triggers the "returned to normal" alarm. Our system is currently in the early stages of development, so no users have access to it, and we don't have any processes/backups/etc. scheduled. We barely have Apache+PHP installed and configured, so I guess it can only be something related to the host machines.
Can anybody explain what is going on and how we can solve it, besides increasing the sample time or the % in the "return to normal" alarm? Folks on the Amazon forum said the Service Team would have a look once they get a chance, but it's been almost a week with no response.

Auto scaling batch jobs on Elastic Beanstalk

I am trying to set up scalable background image processing using Elastic Beanstalk.
My setup is the following:
Application server (running on Elastic Beanstalk) receives a file, puts it on S3 and sends a request to process it over SQS.
Worker server (also running on Elastic Beanstalk) polls the SQS queue, takes the request, loads the original image from S3, processes it into 10 different variants, and stores them back on S3.
These upload events are happening at a rate of about 1-2 batches per day, 20-40 pics each batch, at unpredictable times.
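(Roughly, the worker loop described above looks like the following boto3 sketch; the queue URL and the processing function are placeholders, not the actual code.)

    # Rough skeleton of the kind of worker loop described above, using boto3.
    # The queue URL and process_image() are placeholders.
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs"

    def process_image(body):
        # placeholder: download from S3, render the 10 variants, upload results
        pass

    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,   # long polling
        )
        for msg in resp.get("Messages", []):
            process_image(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])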
Problem:
I am currently using one micro instance for the worker. Generating one variant of the picture can take anywhere from 3 seconds to 25-30 (it seems the first ones are done in 3, but then the micro instance slows down; I think this is due to its design for short bursty workloads). Anyway, when I upload 30 pictures, that means the job takes 30 pics * 10 variants each * 30 seconds = 2.5 hours to process??!?!
Obviously this is unacceptable. I tried using a "small" instance for that; the performance is consistent there, but it's about 5 seconds per variant, so still 30*10*5 = 1500 seconds, or 25 minutes per batch. Still not really acceptable.
What is the best way to attack this problem that will get the fastest results and be price-efficient at the same time?
Solutions I can think of:
Rely on Beanstalk auto-scaling. I've tried that, setting up auto scaling based on CPU utilization. That seems very slow to react and unreliable. I've tried setting the measurement time to 1 minute and the breach duration to 1 minute, with thresholds of 70% to go up and 30% to go down, with increments of 1. It takes the system a while to scale up and then a while to scale down; I can probably fine-tune it, but it still feels weird. Ideally I would like to get a faster machine than a micro (small, medium?) to use for these spikes of work, but with Beanstalk that means I need to run at least one all the time, and since most of the time the system is idle, that doesn't make any sense price-wise.
Abandon Beanstalk for the worker, implement my own monitor of the SQS queue running on a micro, and let it fire up a larger machine (or group of larger machines) when there are enough pending messages in the queue, terminating them the moment we detect the queue is idle. That seems like a lot of work, unless there is a ready-made solution for this out there. In any case, I lose the Beanstalk benefits of deploying code through git, managing environments, etc.
I don't like either of these two solutions.
Is there any other nice approach I am missing?
Thanks
CPU utilization on a micro instance is probably not the best metric to use for autoscaling in this case.
Length of the SQS queue would probably be the better metric to use, and the one that makes the most natural sense.
Needless to say, if you can budget for a bigger baseline machine, everything will run that much faster.
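As a starting point, the queue backlog is cheap to read, so a custom trigger (or a custom CloudWatch metric) could be based on a sketch like this; the queue URL and threshold are placeholders:

    # Minimal sketch: read the approximate backlog of the worker queue with boto3.
    # The queue URL and the threshold are placeholders to tune for your batches.
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs"

    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
    print("pending image jobs:", backlog)

    if backlog > 20:
        print("scale up the worker environment")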