What exactly is a "virtual core" on Amazon EC2?

The small Standard Instance is:
Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
Does this mean that you get access to an entire physical CPU core? Or are you sharing a more powerful core with other instances?
Is your performance affected by other people sharing the same "physical core" or other hardware?

You don't get a physical core for a small instance.
"One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation." Amazon EC2 Instance Types
You can run cat /proc/cpuinfo to see what hardware you're on.
For example, I have a micro instance whose underlying processor is an Intel(R) Xeon(R) CPU E5430 @ 2.66GHz.
From my understanding, 40% CPU in top equals 1 Compute Unit, so I can burst to 80% with my 2 Compute Units.
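If you prefer to check programmatically, here is a minimal sketch (assuming a Linux guest; this is just an illustration, not an AWS tool) that reads /proc/cpuinfo and reports the processor model and the number of virtual cores exposed to the instance:

# Read /proc/cpuinfo on a Linux guest to see the underlying processor model
# and how many virtual cores are exposed to the instance.
with open("/proc/cpuinfo") as f:
    lines = f.read().splitlines()

models = [line.split(":", 1)[1].strip() for line in lines if line.startswith("model name")]
print(f"{len(models)} virtual core(s): {models[0] if models else 'unknown'}")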

This is a rough estimate, so take it for what it's worth.
Funnily enough, the micro instance outperformed both the small and medium instances.
I ran PassMark PerformanceTest 8.0 on each of the instances below.
Each was installed with a basic Windows Server 2008 R2 configuration in Amazon's Virginia-based data center.
AWS size        PassMark score    CPU with a similar score
t1.micro        963               AMD Dual-Core Mobile ZM-80
m1.small        384.7             Intel Celeron M 1.60GHz
m1.medium       961               AMD Dual-Core Mobile ZM-80
m1.large        1249              Intel Core2 Duo T6400 @ 2.00GHz
m1.xlarge       3010              AMD Phenom 2 X4 12000
m3.xlarge       3911              Intel Xeon X5365 @ 3.00GHz
m3.2xlarge      6984              Intel Xeon E3-1220 V2 @ 3.10GHz
Currently the m3.2xlarge would cost about $7169 per year as a reserved instance, or $1578 per month as an on-demand instance.
Most unmanaged dedicated hosting companies I've seen offer Intel Xeon E3-1200 setups for around $2000-2500 per year.
In my opinion, AWS is great for scalability but very costly for anything long-term, as seems to be the case with any "cloud"-based server system.
UPDATE:
Here is a great tool for measuring cloud hosting benchmarks: http://cloudharmony.com/benchmarks

http://www.cpubenchmark.net/high_end_cpus.html
If you look at this table, 1 EC2 Compute Unit ≈ 350 CPU Mark points.
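As a rough, hedged sketch of that rule of thumb (the ~350 points per ECU figure is this answer's own estimate, and the scores are taken from the PassMark table earlier in this thread; the script is purely illustrative):

POINTS_PER_ECU = 350  # the ~350 CPU Mark points per ECU approximation from this answer

passmark = {"t1.micro": 963, "m1.small": 384.7, "m1.medium": 961,
            "m1.large": 1249, "m1.xlarge": 3010,
            "m3.xlarge": 3911, "m3.2xlarge": 6984}

for size, score in passmark.items():
    print(f"{size:12s} ~{score / POINTS_PER_ECU:.1f} ECU")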

Please go through these blog posts to get an idea of virtual cores; they explain it very well:
http://www.pythian.com/blog/virtual-cpus-with-amazon-web-services/
http://samrueby.com/2015/01/12/what-are-amazon-aws-vcpus/

According to this AWS forum post, a virtual core equates to a physical CPU core. Each virtual core can have one or more EC2 Compute Units, depending on the clock speed of the CPU.
Here is a more detailed analysis.

Related

E2 CPU Usage Goes Up Over Time on Google Compute Engine

It is quite strange that all six of my e2-small VM instances (all Debian 10) are increasing in CPU usage over time. Is this a bug on Google's side?
I can verify that this does not happen on an N1-based machine (g1-small, Debian 10, orange line):
I restarted the E2 instance (blue line) before the end of January and created a new N1 instance (orange line). Neither VM is being utilised yet, and you can see that the E2 instance's CPU usage increases over time.
Here's the output of top on the E2:
Here are 3 more VMs (used in production) which show CPU slowly creeping up over time (restarted Jan 26):
Is this a Google bug?
This is a bug with the google-osconfig-agent, fixed by:
sudo apt-get update && sudo apt-get upgrade google-osconfig-agent -y
I can confirm that, after several days, none of the affected VMs' CPU usage is increasing anymore.
After updating and restarting on Feb 18, the CPU usage is now stable.
No, this is not a bug; E2 small machines are shared-core machines.
Shared-core machine types use context-switching to share a physical core between vCPUs for the purpose of multitasking. Different shared-core machine types sustain different amounts of time on a physical core. Review the following sections to learn more.
In general, shared-core instances can be more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU machine types.
CPU Bursting
Shared-core machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically. Bursting doesn't incur any additional charges. You are charged the listed on-demand price for f1-micro, g1-small, and e2 shared-core machine types.
E2 shared-core machine types
E2 shared-core machines are cost-effective, have a virtio memory balloon device, and are ideal for small workloads. When you use E2 shared-core machine types, your VM runs two vCPUs simultaneously, shared on one physical core, for a specific fraction of time, depending on the machine type.
* e2-micro sustains 2 vCPUs, each for 12.5% of CPU time, totaling 25% of vCPU time.
* e2-small sustains 2 vCPUs, each at 25% of CPU time, totaling 50% of vCPU time.
* e2-medium sustains 2 vCPUs, each at 50% of CPU time, totaling 100% of vCPU time.
Each vCPU can burst up to 100% of CPU time for short periods before returning to these sustained limits.
Whether the instance bursts and its usage increases depends on the processes running on it.
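To make those fractions concrete, here is a small illustrative sketch (the baseline percentages come straight from the list above; the "sustained budget" interpretation is my own reading of the docs):

# Sustained (non-burst) CPU budget of GCE shared-core machine types,
# per the per-vCPU fractions listed above.
BASELINE_PER_VCPU = {"e2-micro": 0.125, "e2-small": 0.25, "e2-medium": 0.50}

def sustained_budget(machine_type, vcpus=2):
    # Total fraction of one physical core the VM can sustain without bursting.
    return BASELINE_PER_VCPU[machine_type] * vcpus

for mt in BASELINE_PER_VCPU:
    print(f"{mt}: ~{sustained_budget(mt):.0%} of a physical core sustained")
# Anything above that is opportunistic bursting and cannot be held indefinitely.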

Choice of VMWare Esxi host

Given a vSphere client, I am trying to find a way to determine the ESXi host on which a VM of given specs can be spawned. Does anyone know of a formula that relates the available CPU, RAM and disk on an ESXi host and helps decide which host is the better choice for spawning a VM of a defined flavor, a flavor here being a specified combination of CPUs, RAM and disk?
Basically, I want to determine the number of VMs of a given specification (CPU, RAM and disk) that can be spawned on a host.
You can use the VMware Configuration Maximums page to determine what your physical hardware and ESXi version can support.
The upper limit for a single VM is 128 virtual CPUs, 6 TB of RAM and 120 devices.
There are two main ways to go about this:
1. If you happen to have access to vROps, it has that capability built into the "Optimize Capacity" section of its UI.
2. Use your programming language of choice to perform the calculations manually. For a single host, divide the host's available CPU MHz by the desired VM MHz, divide the host's available RAM by the desired VM RAM, and take the host's datastore with the lowest amount of free space and divide that by the desired disk space of the VM. The lowest of those three figures is the maximum number of VMs of that specification that can be spawned on that particular host.
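Here is a minimal sketch of that calculation (the function and the host figures are hypothetical; real numbers would come from the vSphere API, for example via pyVmomi):

def max_vms(host_cpu_mhz, host_ram_mb, smallest_datastore_gb,
            vm_cpu_mhz, vm_ram_mb, vm_disk_gb):
    # The binding constraint (CPU, RAM or disk) caps the number of VMs.
    by_cpu = host_cpu_mhz // vm_cpu_mhz
    by_ram = host_ram_mb // vm_ram_mb
    by_disk = smallest_datastore_gb // vm_disk_gb
    return int(min(by_cpu, by_ram, by_disk))

# Hypothetical host: 20 cores at 2.4 GHz (48,000 MHz), 256 GB RAM, 2 TB smallest datastore.
print(max_vms(48_000, 262_144, 2_000,
              vm_cpu_mhz=4_000, vm_ram_mb=8_192, vm_disk_gb=100))  # -> 12, limited by CPU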

RAM for vCPUs on gcloud

How do I know the exact RAM of the instance that I have created on gcloud? I created an instance with n1-standard-8 (8 vCPUs, 30 GB memory). Is this the RAM? When I try to run a model, it gives me an out-of-memory error saying it tried to allocate 12 GB.
Hence, I want to know what the RAM of my instance is, and how I can increase it to run my model.
This guide describes the available machine types. According to that document, the machine type you mentioned breaks down as follows:
n1-standard-8: Your machine type
8-vCPUs: For the n1 series of machine types, a vCPU is implemented as a single hardware hyper-thread on a 2.6 GHz Intel Xeon E5 (Sandy Bridge), 2.5 GHz Intel Xeon E5 v2 (Ivy Bridge), 2.3 GHz Intel Xeon E5 v3 (Haswell), 2.2 GHz Intel Xeon E5 v4 (Broadwell), or 2.0 GHz Intel Skylake (Skylake).
30GB: Your system memory (RAM)
Also, you can run the following gcloud command in Cloud Shell to display all data associated with a Compute Engine virtual machine instance:
gcloud compute instances describe INSTANCE_NAME [--zone=ZONE]
In the meantime, you can run the free -m command on the machine to see the total and free memory of your instance, if you are using a Linux machine.
The instance must be shut down in order to edit it and increase the RAM and/or CPU. You can find more information in this article.
Yes, this should be the system RAM, which you can adjust while creating an instance or template. To check the available RAM on the instance (assuming you are using Linux), connect to it using the Google SSH shell and run the top command. It will display the memory available and in use, as well as the current processes.

Google Compute Engine - Low on Resource Utilisation

I use a VM Instance provided by Google Compute Engine.
Machine Type: n1-standard-8 (8 vCPUs, 30 GB memory).
When I check the CPU utilisation, it never goes above 12%. I use my VM for running Jupyter Notebook. I have tried loading dataframes that take 7.5 GiB (and it takes a long time to process the data even for simple operations), but the utilisation stays the same.
How can I utilise ~100% of the CPU power?
Or does my program use only 1 of the 8 CPUs, i.e. (1/8)*100 = 12.5%?
You can run the stress command to impose a configurable amount of CPU, memory, I/O, and disk stress on the system.
Example to stress 4 cores for 90 seconds:
stress --cpu 4 --timeout 90
In the meantime, go to the Google Cloud Console in your browser to check the CPU usage of your VM, or open a new SSH connection to the VM and run the top command to see your CPU status.
If, after running those commands, your CPU can reach over 99%, your instance is working fine, and you have to look at your application to find out why it is restricted to about 12% CPU usage.
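If the workload really is a single-threaded Python/pandas job, one way to approach full utilisation is to spread the work across processes, one per vCPU. This is only a sketch under that assumption (process_chunk is a made-up placeholder for the real per-chunk work):

from multiprocessing import Pool, cpu_count

def process_chunk(chunk):
    # Placeholder for the real per-chunk work (e.g. a pandas transformation).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))
    n = cpu_count()                      # 8 on an n1-standard-8
    chunks = [data[i::n] for i in range(n)]
    with Pool(n) as pool:                # one worker process per vCPU
        results = pool.map(process_chunk, chunks)
    print(sum(results))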

What type of EC2 instance is better suited for WSO2 CEP standalone?

Should I use a compute-optimized (c3, c4) or a memory-optimized (r3) instance for running a stand-alone WSO2 CEP server?
I searched the documentation but could not find anything about running this server on EC2.
As per the WSO2 SA recommendations,
Hardware Recommendation
Physical:
3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), and at least 10 GB of free disk space, sized according to the expected storage requirements (calculated by considering file uploads and backup policies). (e.g. if 3 Carbon instances run on one machine, it requires 4 CPU cores, 8 GB RAM and 30 GB of free space)
Virtual Machine:
2 compute units minimum (each unit equivalent to a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM and 10 GB of free disk space. One compute unit for the OS and one for the JVM. (e.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM and 30 GB of free space)
EC2: a c3.large instance to run one Carbon instance (e.g. an EC2 extra-large instance for 3 Carbon instances). Note: given the I/O performance of the c3.large instance, it is recommended to run multiple Carbon instances on a larger instance (c3.xlarge or c3.2xlarge).
NoSQL data nodes:
4 cores, 8 GB (http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architecturePlanningHardware_c.html)
Example
Let's say a customer needs 87 Carbon instances. They therefore need 87 CPU cores / 174 GB of memory / 870 GB of free space.
This is calculated without considering the resources for the OS. Each machine additionally needs 1 CPU core and 2 GB of memory for the OS.
Let's say they want to buy 10 machines; the total requirement is then 97 CPU cores (10 for the OS + 87 for Carbon), 194 GB of memory (20 GB for the OS + 174 GB for Carbon) and 870 GB of free space for Carbon (normally, storage will have more than this).
This means each machine gets 1/10 of the above and can run about 9 Carbon instances, i.e. roughly 10 CPU cores / 20 GB of memory / 100 GB of free storage.
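A small sketch of that sizing arithmetic (the per-instance figures are taken from the recommendations above; the function name is made up):

def cluster_requirements(carbon_instances, machines):
    # 1 CPU core and 2 GB RAM per Carbon instance, plus 1 core and 2 GB per machine for the OS.
    cores = carbon_instances + machines
    ram_gb = carbon_instances * 2 + machines * 2
    disk_gb = carbon_instances * 10      # 10 GB of free space per Carbon instance
    return cores, ram_gb, disk_gb

print(cluster_requirements(87, 10))      # -> (97, 194, 870), matching the example above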
Reference : https://docs.wso2.com/display/CLUSTER44x/Production+Deployment+Guidelines
Note:
However, everything depends on what you're going to process using CEP; therefore, please refer to Tharik's answer as well.
It depends on the type of processing the CEP node does. A CEP node requires a lot of memory if the processed event size is large, if the event throughput is high, or if there are time windows in the queries. In those cases, memory-optimized EC2 instances are better, as they provide the lowest price per GB of RAM. If there is a lot of computation in the algorithms you have extended, you might need the greater processing capabilities of compute-optimized instances.