Difference between 1 Shared vCPU and 1 vCPU - google-cloud-platform

In GCP, what is the difference between an f1-micro instance (1 shared vCPU) vs. a n1-standard-1 (1 vCPU)? Specifically, what is the difference between a shared vCPU and a vCPU?

Shared-core machine types
Shared-core machine types provide one virtual CPU that is allowed to
run for a portion of the time on a single hardware hyper-thread on the
host CPU running your instance. Shared-core instances can be more
cost-effective for running small, non-resource intensive applications
than standard, high-memory, or high-CPU machine types.
Source: https://cloud.google.com/compute/docs/machine-types#sharedcore
In other words, with a shared vCPU, Google doesn't guarantee you a full dedicated vCPU: the instance only gets a time slice of one hardware hyper-thread on the host.

Related

Relation between CPUs and vCPUs in GCE

I would like to know whether 1 vCPU in a GCE VM is equal to 1 CPU.
Our on-prem server has 8 CPUs and we want to find an equivalent GCE VM.
Should I opt for 8 vCPUs or 16 vCPUs?
I would be thankful if any Google documentation reference is provided.
As per this GCP documentation, a vCPU is a single hardware hyper-thread on one of the CPU processors. For example, Intel Xeon processors with Hyper-Threading technology support multiple threads on a single physical core. You can configure an instance with one or more of these hyper-threads as vCPUs.
A physical CPU is an actual hardware unit installed in a motherboard socket. To calculate the vCPU count that is equivalent to your 8 CPUs, you can follow this doc.
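The mapping above can be sketched as a quick calculation. This is a minimal illustration; the 2-threads-per-core figure assumes Hyper-Threading is enabled on the on-prem CPUs:

```python
def equivalent_vcpus(physical_cores: int, threads_per_core: int = 2) -> int:
    """Each GCE vCPU maps to one hardware hyper-thread, so the
    on-prem server's total thread count is the comparable figure."""
    return physical_cores * threads_per_core

# An 8-core Hyper-Threaded server exposes 16 threads,
# so a 16-vCPU GCE VM is the like-for-like choice.
print(equivalent_vcpus(8))  # 16
```

If Hyper-Threading is disabled on the on-prem server, each core exposes a single thread and 8 vCPUs would be the closer match.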

What is a burstable instance in GCP?

What is a burstable instance in GCP? Does it mean that the instance can be automatically deleted if other instances need the capacity? Is it the same as a preemptible instance?
Shared-core machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically. Bursting doesn't incur any additional charges. You are charged the listed on-demand price for f1-micro, g1-small, and e2 shared-core machine types.
See also:
CPU bursting

How to effectively compare different type of ec2 instances?

For example
The price of c6g.medium ($0.0340) is almost 3x that of t2.micro ($0.0116), yet I see that c6g.medium has only 1 vCPU, the same as t2.micro.
So how would you compare the CPU performance of the c6g.medium (AWS Graviton2 processors) with whatever the t2 uses for its CPU?
Is a c6g.medium more efficient than 3 t2.micro instances, if the t2s used all their CPU credits all the time?
Can I assume every c6g CPU has 3x more threads/cores than every t2 CPU?
The main difference between t2 and c6g instances is the burstable nature of the t2 instance.
A t2.micro is cheap because of the way its credit system works: you can't always use 100% of your CPU and can only burst there periodically. With a c6g instance, you can use 100% of the CPU at all times if you so desire. Another difference is memory: a t2.micro has 1 GB of memory whereas the c6g.medium has 2 GB allocated, which also increases the price.
Then there is the CPU architecture: the c6g uses ARM, which won't run x86-compiled applications natively, so some applications will need to be recompiled specifically to run successfully.
The major differences between the instance types can be found on the bottom of the EC2 instance types page:
https://aws.amazon.com/ec2/instance-types
If you need to do a real-world performance comparison of your application, the best way would really be to spin up some instances and run benchmarking / load testing on them to see which one performs better in your specific scenario.
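As a rough first pass before real benchmarking, you can normalize price by sustained CPU capacity. The sketch below uses the on-demand prices from the question and assumes the t2.micro's documented 10% baseline credit-accrual rate; it deliberately ignores architecture differences, which is exactly why a real benchmark is still needed:

```python
def price_per_sustained_vcpu_hour(hourly_price: float, sustained_fraction: float) -> float:
    """Hourly price divided by the fraction of a vCPU the instance
    can use indefinitely without running out of credits."""
    return hourly_price / sustained_fraction

t2_micro = price_per_sustained_vcpu_hour(0.0116, 0.10)   # 10% baseline
c6g_medium = price_per_sustained_vcpu_hour(0.0340, 1.0)  # no throttling

print(f"t2.micro:   ${t2_micro:.4f} per sustained vCPU-hour")
print(f"c6g.medium: ${c6g_medium:.4f} per sustained vCPU-hour")
# By this crude metric the c6g.medium is the cheaper sustained compute,
# even though its hourly price is ~3x higher.
```

This only holds for sustained, CPU-bound workloads; for bursty workloads that mostly idle, the t2's credit model can still come out cheaper.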

What vCPUs in Fargate really mean?

I was trying to get answers to my questions here and here, but I realized that I need to understand Fargate's implementation of vCPUs specifically. So my question is:
If I allocate 4 vCPUs to my task, does that mean that my
single-threaded app running in a container in this task will be able
to fully use all these vCPUs, since they are essentially just
portions of processor-core time that I can use?
Let's say I assigned 4 vCPUs to my task; did I, on a technical level,
assign 4 vCPUs on a physical core that can freely process one
thread (or even more with hyper-threading)? Is my logic correct for
the Fargate case?
p.s. It's a node.js app that runs sessions with multiple players interacting with each other, so I do want to give the single node.js process maximum capacity.
Fargate uses ECS (Elastic Container Service) in the background to orchestrate Fargate containers. ECS in turn relies on the compute resources provided by EC2 to host containers. According to AWS Fargate FAQ's:
Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances
...
ECS uses containers provisioned by Fargate to automatically scale, load balance, and manage scheduling of your containers
This means that a vCPU is essentially the same as an EC2 instance vCPU. From the docs:
Amazon EC2 instances support Intel Hyper-Threading Technology, which enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances.
So to answer your questions:
If you allocate 4 vCPUs to a single threaded application - it will only ever use one vCPU, since a vCPU is simply a hyperthread of a single core.
When you select 4 vCPUs you are essentially assigning 4 hyperthreads to a single physical core. So your single threaded application will still only use a single core.
If you want more fine grained control of CPU resources - such as allocating multiple cores (which can be used by a single threaded app) - you will probably have to use the EC2 Launch Type (and manage your own servers) rather than use Fargate.
Edit 2021: It has been pointed out in the comments that most EC2 instances in fact have 2 hyperthreads per CPU core. Some specialised instances such as the c6g and m6g have 1 thread per core, but the majority of EC2 instances have 2 threads per core. It is therefore likely that the instances used by ECS/Fargate also have 2 threads per core. For more details, see the documentation.
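If you do stay on Fargate, the usual workaround for a CPU-bound single-threaded runtime is to run one worker process per vCPU. A minimal sketch in Python (the question's app is Node.js, where `cluster.fork()` plays the analogous role):

```python
import multiprocessing as mp

def worker(n: int) -> int:
    # Stand-in for CPU-bound work; each call runs in its own process
    # and can therefore occupy its own vCPU.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # One process per available vCPU, as reported inside the container.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(worker, [100_000] * mp.cpu_count())
    print(len(results), "workers finished")
```

This spreads independent work across all allocated vCPUs, but it only helps if the workload can actually be partitioned across processes.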
You can inspect what physical CPU your ECS runs on, by inspecting the /proc/cpuinfo for model name field. You can just cat this file in your ENTRYPOINT / CMD script or use ECS Exec to open a terminal session with your container.
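Parsing the model name out of `/proc/cpuinfo` is straightforward. A small sketch; the function takes the file's text as a string so it can also be run against output captured via ECS Exec:

```python
def cpu_model(cpuinfo_text: str) -> str:
    """Return the first 'model name' value from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("model name"):
            return line.split(":", 1)[1].strip()
    return "unknown"

# On a live Linux host or container:
#   with open("/proc/cpuinfo") as f:
#       print(cpu_model(f.read()))
sample = "processor : 0\nmodel name : Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz\n"
print(cpu_model(sample))
```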
I've actually done this recently, because we've been observing some weird performance drops on some of our ECS Services. Out of 84 ECS Tasks we ran, this was the distribution:
Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz (10 tasks)
Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz (22 tasks)
Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (10 tasks)
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (25 tasks)
Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz (17 tasks)
Interesting that it's 2022 and AWS is still running CPUs from 2016 (the E5-2686 v4). All these tasks are fully-paid On-Demand ECS Fargate. When running some tasks on Spot, I even got an E5-2666 v3, which is from 2015, I think.
While getting a random CPU assigned to our ECS Tasks was somewhat expected, the differences between them are so significant that I observed one of my services report either 25% or 45% CPU utilization at idle, depending on which CPU it landed on in the "ECS Instance Type Lottery".

Is it possible to create a custom instance type on aws

One of our applications uses a lot of memory but not much CPU. We are using m4.2xlarge, which has 32 GB RAM & 8 cores. Our actual requirement is 4 cores and 8 GB RAM. I searched for an instance type with this combination but didn't find one. So, is there any way to create a custom instance type?
I'm afraid there is no such thing as custom EC2 instance type. You'll need to select one from the offered EC2 instance classes.
Also, your assumption about the number of cores for the m4.2xlarge instance is incorrect. An m4.2xlarge instance has 4 virtual cores, not 8. See Virtual Cores by Amazon EC2 and RDS DB Instance Type.
According to this, each vCPU is a hyperthread of an Intel Xeon core (except for T2 and m3.medium instances). AWS does not guarantee anything beyond that.
So, if you want a 4 virtual CPU (i.e., 2 virtual cores x 2 hyperthreads per core = 4 vCPU) instance with 7.5 GiB RAM, you can select the c4.xlarge instance.