We are running a Docker container on AWS Elastic Beanstalk, which ran fine for a few weeks but suddenly started to experience very sudden CPU spikes (from ~5% to ~60% in a matter of minutes), which sometimes drop back down quickly and sometimes stay high long enough to trigger an autoscaling event and spin up a few extra instances (these are terminated some time later, once the CPU spike dies down).
The funny thing is, I wanted to investigate the problem today, so I SSHed into every instance (4 in total) and ran top on all of them, trying to locate the CPU-consuming process. I was surprised to discover that all instances show ~15% CPU busy (system + user combined), while the Elastic Beanstalk monitoring page still shows the servers at 60% CPU.
I measured these figures for the better part of an hour, making sure the reported CPU load stayed high while the top command still showed low values.
I've also tried measuring CPU for a while using the advice found here - https://askubuntu.com/questions/22021/how-to-log-cpu-load - and got the same very low CPU stats when querying the server directly.
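The logging boils down to appending periodic CPU snapshots to a file, roughly along these lines (a sketch; the exact script in the linked answer may differ, and the log path and one-minute interval here are just placeholders):

* * * * * (date; top -b -n 1 | head -n 5) >> /tmp/cpu_usage.log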
My question is: is it possible the AWS monitoring system is not showing me accurate data? Is there any way to verify the data displayed on the monitoring page?
Any help would be appreciated.
Related
I have a newbie question here - I'm new to the cloud and to Linux. I'm using Google Cloud now and am wondering, when choosing a machine config:
What if my machine is too slow? Will it make the app crash, or just slow it down?
How fast should my VM be? The image below shows the last 6 hours of CPU usage for a Python script I'm running. It's obviously using less than 2% of the CPU most of the time, but there's a small spike - should I care about the spike? Also, how high should my CPU usage get before I upgrade? If a script I'm running uses 50-60% of the CPU most of the time, I assume I'm safe - but what's the max before you upgrade?
What if my machine is too slow? Will it make the app crash, or just slow it down?
It depends.
Some applications will just respond slower. Some will fail if they have timeout restrictions. Some applications will begin to thrash, which means that all of a sudden the app becomes very, very slow.
A general rule, which varies among architects, is to never consume more than 80% of any resource. I use a 50% rule so that my service can handle burst traffic or denial-of-service attempts.
Based on your graph, your service is fine. The spike is probably normal system processing. If the spike went to 100%, I would be concerned.
Once your service consumes more than 50% of a resource (CPU, memory, disk I/O, etc.), it is time to upgrade that resource.
Also, consider that there are other services that you might want to add. Examples are load balancers, Cloud Storage, CDNs, firewalls such as Cloud Armor, etc. Those types of services tend to offload requirements from your service and make your service more resilient, available and performant. The biggest plus is your service is usually faster for the end user. Some of those services are so cheap, that I almost always deploy them.
You should choose machine family based on your needs. Check the link below for details and recommendations.
https://cloud.google.com/compute/docs/machine-types
If CPU is your concern, you should create a managed instance group that automatically scales based on CPU usage. Usually 80-85% is a good target CPU utilization. Check the link below for details, and see the sketch that follows it.
https://cloud.google.com/compute/docs/autoscaler/scaling-cpu
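As a rough sketch, setting that up looks like this (the group name, template, zone, and replica limits are placeholders, not recommendations):

gcloud compute instance-groups managed create web-mig \
    --template=web-template --size=1 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a --min-num-replicas=1 --max-num-replicas=5 \
    --target-cpu-utilization=0.80 --cool-down-period=90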
You should also consider the availability needed for your workload to keep costs efficient. See the link below for other useful info.
https://cloud.google.com/compute/docs/choose-compute-deployment-option
I have a REST API web server, built in .NET Core, that has data-heavy APIs.
This is hosted on AWS EC2. I have noticed that the average response time for certain APIs is ~4 seconds, and if I turn up the AWS EC2 specs, the response time goes down to a few milliseconds. I guess this is expected. What I don't understand is that even when I load test the APIs on a lower-end instance, the server never crosses 50% utilization of memory/CPU. So what is the correct technical explanation for the APIs performing faster if the lower-end instance never reaches 100% utilization of memory/CPU?
There is no simple answer; there are so many EC2 variations that you first need to figure out what is slowing down your API.
When you 'turn up' your EC2 instance, you are getting some combination of more memory, faster CPU, faster disk, and more network bandwidth - and we can't tell which of those 'more' features is improving your performance. Different instance classes are optimized for different problems.
It could be as simple as the better network bandwidth, or it could be that your application is disk-bound and the better instance you chose is optimized for I/O performance.
Knowing which resource your instance is short on would help you decide which type of instance to upgrade to - or, as you have found out, you can just upgrade to something 'bigger' and be happy with the performance (at the tradeoff of it being more expensive).
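If you want to narrow it down before resizing, a quick sketch of what to watch on the instance during a load test (assuming a Linux host with the sysstat package installed):

vmstat 5          # run queue, CPU and swap activity
iostat -x 5       # per-disk utilization and await (I/O wait) times
sar -n DEV 5      # network throughput per interface

Whichever of those is pegged while the response times sit at ~4 seconds points to the resource worth upgrading first.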
I have a persistent server that unpredictably receives new data from users, and it needs about 10 GPU instances to crank at the problem for about 5 minutes before I send the answer back to the users. The server itself is a cheap, always-on, single-CPU Google Cloud instance. When a user request comes in, my code launches my 10 created-but-stopped Google Cloud GPU instances with
gcloud compute instances start (instance list)
In the rare case that the stopped instances don't exist (sometimes they get wiped), that is detected and they are recreated with
gcloud beta compute instances create (...)
This system all works fine. My only complaint is that even with created-but-stopped instances, the launch time before my GPU code finally starts to run is about 5 minutes. Most of this is just the time for the instance itself to boot its Ubuntu host and call my code; the delay between Ubuntu running and the GPU code starting is only about 10 seconds.
How can I reduce this 5-minute delay? I imagine most of it comes from Google having to copy the 4GB of instance data over to the target machine, but the startup time of (vanilla) Ubuntu probably adds another minute. I'm not even sure I could quantify these two numbers independently; I can only measure the combined 3-7 minute delay from launch until my code starts responding.
I don't think the Ubuntu OS startup time is the major contributor to the latency, since I timed an actual machine with the same Ubuntu and the same GPU on my desk: from power-on it began running my GPU code in 46 seconds.
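If it helps, one way I could try to split those two numbers apart (assuming the image uses systemd, as stock Ubuntu does) is to run, on a freshly booted instance:

systemd-analyze          # total kernel + userspace boot time
systemd-analyze blame    # per-service breakdown of the slowest parts of boot

Whatever remains between that boot time and the full ~5 minutes would be provisioning time on Google's side.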
My goal is to get results back to my users as soon as possible, and that 5-minute startup delay is a bottleneck.
Would making the instance size smaller, say 2GB, help? What else can I do to reduce the latency?
2GB is large. That's a heckuva big image. You should be able to cut that down to 100MB, perhaps using Alpine instead of Ubuntu.
Copying 4GB of data is also less than ideal. Given that, I suspect the solution will be more of an architecture change than a code change.
But if you want to take a whack at everything that is NOT about your 4GB of data, there is a capability to prepare a custom image for your VMs. If you can build a slim custom image, that will help.
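As a sketch, once you have a VM prepared with a slimmed-down boot disk, capturing it as a reusable custom image looks roughly like this (the image, disk, and zone names are placeholders):

gcloud compute images create slim-gpu-image \
    --source-disk=prepped-vm-boot-disk \
    --source-disk-zone=us-central1-a

You would then point your existing instances create call at that image with --image=slim-gpu-image.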
There are good resources for learning more; the two I would start with are:
- Improve GCE Boot Times with Custom Images
- Three steps to Compute Engine startup-time bliss: Google Cloud Performance Atlas
I'm using a GitLab shared runner with Docker (current runner version: 10.0.2, Docker storage driver: overlay2), running on an AWS t2.small instance. I started experiencing issues with builds slowing down after some time (it's hard to say exactly when they become slow) - they take ~10x longer to finish than before. After killing the instance the problem disappears for a while, then after some time it slows down again.
Things I already checked:
CPU usage on the machine is around 20% the whole time
RAM usage is around 1.5GB during the heaviest build
IOPS on EBS are not exhausting the Burst Balance (e.g. right now the burst balance is around 80%)
Download speed
What else might be causing this?
Just in case: the jobs that run on this runner are mostly yarn install and yarn build of a medium-sized front-end React application.
You mention you're using a t2.small - what's the CPU credit balance on your instance when you see this slowdown?
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-monitoring-cpu-credits
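For what it's worth, you can also pull the credit balance straight from CloudWatch, roughly like this (the instance ID and time window are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2017-10-20T00:00:00Z --end-time 2017-10-21T00:00:00Z \
    --period 300 --statistics Average

If the balance is near zero while the builds are slow, the instance is being throttled to its baseline CPU.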
I'm trying to run a locust.io load test on an EC2 instance - a t2.micro. I fire up 50 concurrent users, and initially everything works fine, with the CPU load reaching ~15%. After an hour or so, though, network out shows a drop of about 80%.
Any idea why this is happening? It's certainly not due to CPU credits. Maybe I reached the network limits for a t2.micro instance?
Thanks
Are you sure it's not a CPU credit issue? Can you check your CPU credits over that same time period to see how they look?
Or better yet, run the same test on a non-t2 instance - one that isn't limited in its CPU usage.
A t2.micro starts burning through its CPU credit balance whenever usage is above roughly 10% of a CPU.
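Rough arithmetic, assuming the standard t2.micro numbers (it earns 6 credits per hour, and 1 credit = 1 vCPU at 100% for 1 minute): at ~15% CPU you spend about 0.15 × 60 = 9 credits per hour while earning 6, a net drain of ~3 credits per hour. Once the balance hits zero the instance is throttled back to its ~10% baseline, which would also cut the traffic locust can push out.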