Can I create custom EC2 hardware? [closed] - amazon-web-services

I would like to specify the hardware components of an EC2 instance instead of selecting an instance type.
For example, instance type A has a CPU from company B with C cores and D GB RAM.
I would like to build my own specifications by choosing every component.
When I google this question, I see results about creating an EC2 instance, which is not the same as creating an instance type.
I also see information about creating a machine image. From what I can tell, this is about making a custom operating system.
I suspect this isn't possible, but why? If EC2 machines are virtual, couldn't virtual components be assembled easily? And if EC2 instances map onto physical CPUs, is offering custom hardware simply too impractical?

This is not possible.
AWS has racks of 'host' computers, each with a particular specification in terms of CPU type, number of CPUs, RAM, directly-attached disks (sometimes), attached GPUs (sometimes), network connectivity, etc.
Each of these hosts is then divided into multiple 'instances'. For example, an R5 host contains 96 virtual CPUs and 768 GB of RAM:
It can be used as an entire computer, known as r5.metal, or
It can be divided into 2 x r5.12xlarge each with 48 vCPUs and 384 GB of RAM -- each being half of the host, or
It can be divided into 6 x r5.4xlarge each with 16 vCPUs and 128 GB of RAM -- each being 1/6th of the host, or
It can be divided into 48 x r5.large each with 2 vCPUs and 16 GB of RAM -- each being 1/48th of the host
And so on
AWS determines how to divide each host computer based on demand. However, each host can only be divided into smaller versions of itself.
EC2 Instance Families determine what type of CPU is provided and the ratio of CPU:RAM. Each host computer matches one of these Instance Families.
See: Amazon EC2 Instance Types - Amazon Web Services
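If you want to see this fixed slicing for yourself, the per-size specifications are published through the EC2 API. A minimal Python/boto3 sketch (assuming boto3 is installed and AWS credentials plus a default region are configured; the R5 sizes named are just examples):

    import boto3

    # Query the published specs of a few R5 sizes; the vCPU and memory
    # figures simply scale within the family.
    ec2 = boto3.client("ec2")

    resp = ec2.describe_instance_types(
        InstanceTypes=["r5.large", "r5.4xlarge", "r5.12xlarge", "r5.metal"]
    )

    for it in sorted(resp["InstanceTypes"],
                     key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{it['InstanceType']:>12}: {vcpus:>3} vCPUs, {mem_gib:>4.0f} GiB RAM")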

Related

Why not get 2 nano instances instead of 1 micro instance on AWS?

I'm choosing instances to run microservices on an AWS EKS cluster.
Reading this article and looking at the AWS docs, it seems that choosing many small instances instead of one larger instance is a better deal.
There seems to be no downside to taking, for instance, 2 x t3.nano (2 vCPU / 0.5 GiB each) vs 1 x t3.micro (2 vCPU / 1 GiB). The price and the total memory are the same, but the total CPU grows substantially the more instances you take.
I assume there are some processes running on each machine by default, but I found no place mentioning their impact on the machine's resources or usage. Is it negligible? Is there any advantage to taking one big instance instead?
The issue is whether your computing task can be completed on the smaller instances; there is also overhead in instance-to-instance communication that isn't present within a single instance.
So it comes down to fitting your solution, and your requirements, onto the instances.
There is no right answer to this question. The answer depends on your specific workload, and you have to try out both approaches to find out what works best for your case. There are advantages and disadvantages to both approaches.
For example, if the OS takes 200 MB on each instance, you are left with only about 600 MB across both nano instances combined vs about 800 MB on the single micro instance.
When the cluster scales out, initializing 2 nano instances will take roughly twice as long as initializing one micro instance to add the same capacity for the extra load.
Also, as noted by Cargo23, inter-instance communication might increase the latency of your application.
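A back-of-the-envelope sketch of that memory argument, using the 200 MB per-instance OS footprint assumed in the answer above (the answer rounds the figures; the exact overhead depends on the AMI and any agents you run):

    # Rough usable-memory comparison with a fixed per-instance OS footprint.
    OS_OVERHEAD_MIB = 200   # assumed OS/agent footprint per instance
    NANO_MEM_MIB = 512      # t3.nano
    MICRO_MEM_MIB = 1024    # t3.micro

    usable_two_nanos = 2 * (NANO_MEM_MIB - OS_OVERHEAD_MIB)   # 624 MiB
    usable_one_micro = MICRO_MEM_MIB - OS_OVERHEAD_MIB        # 824 MiB

    print(f"2 x t3.nano : {usable_two_nanos} MiB usable")
    print(f"1 x t3.micro: {usable_one_micro} MiB usable")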

Does one CPU (one thread of a CPU) or 1 GB of memory on AWS/Azure/DO/GCP mean the same thing consistently within a cloud provider?

Let me explain my question a little better, starting with the famous fictitious Scott Processor as an example.
Config C19 -- 2019 Scott Processor, base frequency 2.7 GHz, one core, one thread
Config C21 -- 2021 Scott Processor, base frequency 4.0 GHz, one core, one thread, plus further improvements
So, a process/program running on C21 is expected to run faster than on C19.
Now, say, any cloud provider in 2019 would have gone with the best processor at the time, i.e. C19. Great. In 2021 they would have built new systems with C21, but there is no guarantee that by 2021 they have retired or upgraded C19 to C21. Fine.
So now, in 2021, the cloud provider offers customers 1 CPU and does not disclose whether it is a C19 or a C21. It's just a CPU offering (AWS discloses whether it's an ARM or another type of processor, but not the frequency of the CPU).
Now suppose I buy a one-CPU service from the cloud provider and today I happen to get the C21 configuration. I run a benchmark of my application (v1.0.0), it takes 10 minutes to do some job, and I terminate my service. A month later I need to re-run the benchmark (without any application changes -- still v1.0.0), so I sign up again with the same cloud provider and get one CPU. Is it possible that this time I get the C19 configuration (I think so), and that my application now takes 13 minutes because I got a C19 rather than a C21?
So basically my point is: does a 1 CPU (or 1 GB memory) offering from a cloud provider mean the same performance right now (I'm not concerned about the future), no matter where or who signs up? And is it consistent across cloud providers, or could mileage vary between providers for the same resource?
If it is consistent (within one cloud provider, or across all of them), how does it work? How do they ensure that, across the various hardware configurations in their cloud, customers get the same mileage for a given type of resource?
Comment: Is this why providers like AWS have introduced generations across instance types -- like M1, M2 and T1, T2, etc. -- so they can guarantee the same performance for the same configuration within a given generation and instance type?
Your comment in bold is correct: A given 'family' of instances uses a specific chipset.
For AWS, these are listed on: Amazon EC2 Instance Types - Amazon Web Services
In the AWS data center, there are racks of 'host computers'. Each host belongs to a particular family. For example M5 is "Up to 3.1 GHz Intel Xeon® Platinum 8175M processors with new Intel Advanced Vector Extension (AVX-512) instruction set".
Each Host computer is then subdivided into multiple virtual computers. m5.24xlarge is the whole host computer with 96 virtual CPUs. Or, this can be divided into two m5.12xlarge virtual computers, each with 48 vCPUs. As long as you always choose the same instance type, then you will always receive the same virtual hardware (CPU, RAM).
Specifications vary across instance families since they use different CPUs and possibly different generations of hardware. Specifications also vary across cloud providers, so you would not expect to find exactly the same hardware specification between, say, AWS and Azure. All the cloud providers build their own hardware and probably use different chipsets even if they still come from Intel, AMD and NVIDIA. AWS even makes its own Graviton ARM chip.
When new Regions and Availability Zones are opened, they might not provide older generations of instance families. For example, the older M3 family is not available in newly-opened Regions.
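For AWS specifically, the per-family processor details are published and can be queried. A minimal boto3 sketch (assuming credentials and a default region are configured; the two instance types are just examples):

    import boto3

    ec2 = boto3.client("ec2")

    # Compare the published processor details of two instance families.
    resp = ec2.describe_instance_types(InstanceTypes=["m4.large", "m5.large"])

    for it in resp["InstanceTypes"]:
        proc = it["ProcessorInfo"]
        print(it["InstanceType"],
              proc["SupportedArchitectures"],
              proc.get("SustainedClockSpeedInGhz", "n/a"), "GHz")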

Nested virtualization in AWS bare metal C5 instances [closed]

I have a use case where I want to install Windows 10 on an AWS instance. On top of it, I want to install VMware Workstation, and inside VMware Workstation I want to install multiple VMs, e.g. Kali, Red Hat, etc. Earlier this week I had a plain AWS instance (with Windows Server 2016) and it didn't allow me to install VMs in VMware Workstation inside Server 2016; it said the hypervisor and VMware can't run simultaneously. While looking for a resolution, I found exactly the same issue as mine:
https://forums.aws.amazon.com/thread.jspa?threadID=293113
And it said something like this:
Nested virtualization is not supported on AWS instances unless you are using AWS bare metal instances. https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-instances-with-direct-access-to-hardware/
Now please tell me clearly: if I get a c5.xlarge bare metal instance from AWS, can I implement the use case described in my first paragraph? Please help -- I couldn't find an exact answer anywhere else!
Thank you in advance...
There is no such thing as a c5.xlarge bare metal instance.
Instances run on a physical 'host' in the AWS data center. Each host supports one 'family' of instances, such as C5. This is because each family has a specific type of processor and a particular ratio between CPU and RAM.
A C5 host has 96 vCPUs and 192 GB of RAM. This can be divided into different 'instance types' within the family, such as:
c5.large with 2 vCPUs and 4 GB RAM
c5.xlarge with twice as much (4 vCPUs, 8 GB RAM)
c5.12xlarge with 12 times as much as a c5.xlarge
All the way up to c5.24xlarge that has all 96 vCPUs and 192 GB of RAM
The instance type you choose basically gives you a 'slice' of the host.
If you wish to go bare metal, you get the entire host with 96 vCPUs and 192 GB of RAM -- the whole host computer, and it is big!
This is why you cannot get a c5.xlarge as a bare metal instance.
So, your choices are:
Get a c5.metal instance, install VMware and create smaller virtual computers, or
Use VMware Cloud on AWS where VMware runs the system for you and you can get smaller virtual computers, or
Give your students Amazon EC2 instances (which would be the simplest option!), or
Run your own hardware
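To confirm which types actually expose bare metal, the published instance-type catalogue can be filtered. A minimal boto3 sketch (assuming credentials are configured; results depend on the region):

    import boto3

    ec2 = boto3.client("ec2")

    # List the bare metal instance types available in the current region and
    # check whether any C5 size other than c5.metal appears.
    pages = ec2.get_paginator("describe_instance_types").paginate(
        Filters=[{"Name": "bare-metal", "Values": ["true"]}]
    )

    bare_metal = [
        it["InstanceType"]
        for page in pages
        for it in page["InstanceTypes"]
    ]
    print(sorted(t for t in bare_metal if t.startswith("c5")))
    # Expect something like ['c5.metal', 'c5d.metal', ...] -- there is no
    # c5.xlarge bare metal variant.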
I think Azure supports nested virtualization.

Ideal EC2 instance type for Magento 2.2.3

I am running Magento 2.2.3 on AWS EC2, currently on a c5.xlarge, and the performance seems fine to me, with proper page speed backed by CloudFront and Redis.
For cost optimization I decided to use an m4.large instance and saw a degradation in Magento page speed from 2.5 seconds to 6.6 seconds. I noticed that CPU usage on the m4.large instance went up during cache creation and was flat the rest of the time. The cache flush operation from the Magento admin panel took approx. 3.5 minutes, whereas on the c5.xlarge the same operation took 50 seconds.
Is something wrong with my application, or does the cache operation have a direct connection to my CPU? Which instance series is the right choice for Magento 2.2.3 in production?
Also, this was not the case with Magento 2.1.6: the cache flush operations were perfectly normal even on t2.medium instances, which we used earlier for dev.
Specifications:
m4.large: 2 vCPUs, 8 GB RAM
c5.xlarge: 4 vCPUs, 8 GB RAM
You really need to understand what the limiting factors are in your application, and pick an appropriate instance family.
In addition to having two additional virtual cores, the C5 runs on a newer CPU family (so it will usually have a slightly higher clock speed) and supports higher networking throughput than the m4.large.
Another big difference between the two is the maximum supported I/O rate: the c5.xlarge supports 16,000 IOPS while the m4.large is limited to 3,600. This is related to its lower networking capacity.
So determine if CPU, Networking or I/O is causing the slowdown, and then determine if the cost/benefits of moving to a more appropriate instance type are worth it.
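One way to compare the published limits yourself is to query both instance types from the EC2 API. A minimal boto3 sketch (assuming credentials are configured; the exact fields and figures come from the API for your region):

    import boto3

    ec2 = boto3.client("ec2")

    # Compare vCPU count, network performance and baseline EBS IOPS
    # of the two instance types discussed above.
    resp = ec2.describe_instance_types(InstanceTypes=["m4.large", "c5.xlarge"])

    for it in resp["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        net = it["NetworkInfo"]["NetworkPerformance"]
        ebs = it["EbsInfo"].get("EbsOptimizedInfo", {})
        print(f"{it['InstanceType']:>10}: {vcpus} vCPUs | network: {net} | "
              f"baseline EBS IOPS: {ebs.get('BaselineIops', 'n/a')}")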

Which EC2 instance size should I use to serve 10K users [closed]

I'm modeling costs for a REST API for a mobile e-commerce application and want to determine the appropriate instance size and count.
Our config:
- Apache
- Slim framework PHP
Our Estimation:
- Users/day: 10,000
- Page views/user: 10
- Total number of users: 500,000
- Total number of products: 36,000
That's an extremely difficult question to answer concretely, primarily because the instance type most appropriate for you depends on the application's requirements. Is the application memory intensive (use the R3 series)? Is it processing intensive (use the C4 series)? If it's a general application that is not particularly memory or processor intensive, you can stick with the M4 series, and if the web application really doesn't do much of anything besides serve up pages, maybe with some database access, then you can go with the T2 series.
Some things to keep in mind:
The T2 series instances don't give you 100% of the processor. You are given a % of the processor (base performance) and then credits to use if your application spikes. When you run out of credits, you are dropped down to base performance.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits
t2.nano --> 5%
t2.micro --> 10%
t2.small --> 20%
t2.medium --> 40%
t2.large --> 60%
Each of the instance sizes in each of the EBS-backed series (excluding T2) offers a different maximum throughput to EBS volumes.
https://aws.amazon.com/ec2/instance-types/
If I had to wager a guess, for 100,000 page views per day, assuming the web application does not do much other than generate pages, maybe with some DB access, I would think a t2.large would suffice, with the option to move up to an m4.large as the smallest M4 instance.
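A rough sanity check of that guess, using a hypothetical figure of 50 ms of CPU time per page view (an assumption for illustration, not something stated in the question):

    # Back-of-the-envelope check of the t2.large suggestion.
    page_views_per_day = 10_000 * 10                 # 10K users x 10 views each
    cpu_seconds_needed = page_views_per_day * 0.050  # assumed 50 ms CPU per view

    seconds_per_day = 24 * 3600
    avg_core_utilisation = cpu_seconds_needed / seconds_per_day   # ~5.8%

    baseline_core_share = 0.60   # t2.large baseline from the list above
    print(f"Average load     : {avg_core_utilisation:.1%} of one core")
    print(f"t2.large baseline: {baseline_core_share:.0%} of one core")
    # Plenty of headroom on average; the real risk is sustained spikes that
    # exhaust the CPU credit balance.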
But this all defeats the wonders of AWS. Just spin up an instance and try it for a few days. If you notice it's struggling, figure out why (processes taking too long, out-of-memory errors, etc.), shut down the instance and move up to the next size.
Also, AWS allows you to easily build fault tolerance into your architecture and to scale OUT, so if you end up needing 4 processors and 16 GB memory (1 x m4.xlarge instance), you may do just as well with 2 x m4.large instances (2 processors and 8 GB memory each) behind a load balancer. Then you have two instances with the same combined specs at roughly the same cost (I think it's actually marginally cheaper).
You can see instance pricing here:
https://aws.amazon.com/ec2/pricing/
You can also put together your (almost) entire AWS architecture costs using this calculator:
http://calculator.s3.amazonaws.com/index.html