Nested virtualization in AWS bare metal C5 instances [closed] - amazon-web-services

I have a use case where I want to install Windows 10 on an AWS instance. On top of that, I want to install VMware Workstation, and inside VMware Workstation I want to run multiple VMs, e.g. Kali, Red Hat, etc. Earlier this week I had a regular AWS instance (with Windows Server 2016) and it didn't allow me to run VMs in VMware Workstation inside Server 2016; it said the hypervisor and VMware can't run simultaneously. While looking for a resolution, I found the exact same issue as mine:
https://forums.aws.amazon.com/thread.jspa?threadID=293113
And it said something like this:
Nested virtualization is not supported on AWS instances unless you are using AWS bare metal instances. https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-instances-with-direct-access-to-hardware/
Now please tell me clearly: if I get a c5.xlarge bare metal instance on AWS, can I implement the use case described in my first paragraph? Please help. I couldn't find an exact answer anywhere else!
Thank you in advance.

There is no such thing as a c5.xlarge bare metal instance.
Instances run on a physical 'host' in the AWS data center. Each host supports one 'family' of instances, such as C5. This is because each family has a specific type of processor and a particular ratio between CPU and RAM.
A C5 host has 96 vCPUs and 192 GB of RAM. This can be divided into different 'instance types' within the family, such as:
c5.large with 2 vCPUs and 4 GB RAM
c5.xlarge with twice as much (4 vCPUs, 8 GB RAM)
c5.12xlarge with 12 times as much as a c5.xlarge (48 vCPUs, 96 GB RAM)
All the way up to c5.24xlarge, which has all 96 vCPUs and 192 GB of RAM
The instance type you choose basically gives you a 'slice' of the host.
If you wish to go bare metal, you get the entire host computer with all 96 vCPUs and 192 GB of RAM, and it is big!
This is why you cannot get a c5.xlarge as a bare metal instance.
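If you want to check these numbers yourself, the EC2 API reports the vCPU and memory figures for each size, along with a flag that says whether a type is bare metal. A minimal sketch using boto3 (the AWS SDK for Python), assuming credentials and a default region are already configured:

import boto3

ec2 = boto3.client("ec2")

# Compare a few C5 sizes with the bare metal option.
resp = ec2.describe_instance_types(
    InstanceTypes=["c5.large", "c5.xlarge", "c5.12xlarge", "c5.24xlarge", "c5.metal"]
)
for it in sorted(resp["InstanceTypes"], key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
    print(
        f"{it['InstanceType']:<14}"
        f"{it['VCpuInfo']['DefaultVCpus']:>3} vCPUs  "
        f"{it['MemoryInfo']['SizeInMiB'] // 1024:>4} GiB  "
        f"bare metal: {it['BareMetal']}"
    )

Only c5.metal comes back with the bare metal flag set to True; every other size in the family is a virtualized slice of the host.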
So, your choices are:
Get a c5.metal instance, install VMware and create smaller virtual computers (see the sketch after this list), or
Use VMware Cloud on AWS where VMware runs the system for you and you can get smaller virtual computers, or
Give your students Amazon EC2 instances (which would be the simplest option!), or
Run your own hardware
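If you go with the first option, launching the bare metal host is the same API call as any other instance type; only the size differs. A hedged boto3 sketch, where the AMI ID and key pair name are placeholders you would replace with your own:

import boto3

ec2 = boto3.client("ec2")

# Launch a single c5.metal host (placeholder AMI ID and key pair name).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Windows AMI
    InstanceType="c5.metal",
    KeyName="my-key-pair",            # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])

Because the bare metal instance exposes the hardware virtualization extensions directly, VMware Workstation (or ESXi) installed on it can run nested VMs.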

I think Azure supports nested virtualization.

Related

Can I create custom EC2 hardware? [closed]

I would like to specify the hardware components of an EC2 instance instead of selecting an instance type.
For example, instance type A has a CPU from company B with C cores and D GB RAM.
I would like to build my own specifications by choosing every component.
When I google this question, I see results about creating an EC2 instance, which is not the same as creating an instance type.
I also see information about creating a machine image. From what I can tell, this is about making a custom operating system.
I don’t think this is possible. Why? If EC2 machines are virtual, couldn’t you arrange virtual components with ease? If EC2 instances have physical CPUs, is it too inconvenient to offer custom hardware?
This is not possible.
AWS has racks of 'host' computers, each with a particular specification in terms of CPU type, number of CPUs, RAM, directly-attached disks (sometimes), attached GPUs (sometimes), network connectivity, etc.
Each of these hosts is then divided into multiple 'instances'.
Take the R5 family as an example: an R5 host contains 96 virtual CPUs and 768 GB of RAM.
It can be used as an entire computer, known as r5.metal, or
It can be divided into 2 x r5.12xlarge each with 48 vCPUs and 384 GB of RAM -- each being half of the host, or
It can be divided into 6 x r5.4xlarge each with 16 vCPUs and 128 GB of RAM -- each being 1/6th of the host, or
It can be divided into 48 x r5.large each with 2 vCPUs and 16 GB of RAM -- each being 1/48th of the host
And so on
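The same numbers can be pulled from the EC2 API instead of a chart. A small boto3 sketch, assuming credentials and a default region are configured, that shows how each R5 size relates to the full host:

import boto3

ec2 = boto3.client("ec2")

# How each R5 size relates to the full r5.metal host.
sizes = ["r5.large", "r5.4xlarge", "r5.12xlarge", "r5.24xlarge", "r5.metal"]
resp = ec2.describe_instance_types(InstanceTypes=sizes)
types = {t["InstanceType"]: t for t in resp["InstanceTypes"]}

host_vcpus = types["r5.metal"]["VCpuInfo"]["DefaultVCpus"]
for name in sizes:
    vcpus = types[name]["VCpuInfo"]["DefaultVCpus"]
    mem_gib = types[name]["MemoryInfo"]["SizeInMiB"] // 1024
    print(f"{name:<13}{vcpus:>3} vCPUs  {mem_gib:>4} GiB  1/{host_vcpus // vcpus} of the host")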
AWS somehow determines how to divide each host computer to meet demand, but each host can only be divided into smaller versions of itself.
EC2 Instance Families determine what type of CPU is provided and the ratio of CPU:RAM. Each host computer matches one of these Instance Families.
See: Amazon EC2 Instance Types - Amazon Web Services

Which EC2 instance size should I use to serve 10K users [closed]

I'm modeling costs for a REST API for a mobile e-commerce application and want to determine the appropriate instance size and number of instances.
Our config:
- Apache
- Slim framework PHP
Our Estimation:
- Users/Day: 10000
- Page views / User: 10
- Total number of users: 500,000
- Total number of products: 36,000
That's an extremely difficult question to give a concrete answer to, primarily because the most appropriate instance type depends on the application's requirements. Is the application memory intensive (use the R3 series)? Is it processing intensive (use the C4 series)? If it's a general application that is not particularly memory or processor intensive, you can stick with the M4 series, and if the web application really doesn't do much of anything besides serve up pages, maybe with some database access, then you can go with the T2 series.
Some things to keep in mind:
The T2 series instances don't give you 100% of the processor. You are given a percentage of the processor (baseline performance) plus credits to spend when your application spikes. When you run out of credits, you drop back to baseline performance (a sketch for checking your credit balance follows the list below).
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits
t2.nano --> 5%
t2.micro --> 10%
t2.small --> 20%
t2.medium --> 40%
t2.large --> 60%
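To see whether a T2 instance keeps running out of burst credits, you can watch its CPUCreditBalance metric in CloudWatch. A minimal boto3 sketch with a placeholder instance ID:

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hourly average CPU credit balance over the last 24 hours (placeholder instance ID).
now = datetime.datetime.utcnow()
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(hours=24),
    EndTime=now,
    Period=3600,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))

If the balance keeps trending toward zero, the instance is spending more CPU than its baseline allows, and a bigger T2 or an M4 would be a better fit.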
Each of the instances in each of the EBS-backed series (excluding T2) offers a different maximum throughput to the EBS volume.
https://aws.amazon.com/ec2/instance-types/
If I had to wager a guess, for 100,000 page views per day, assuming the web application does not do very much other than generate the pages, maybe with some DB access, I would think a t2.large would suffice, with the option to move up to an m4.large, the smallest M4 instance.
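To put that guess in perspective, it helps to translate the daily figures from the question into requests per second. A back-of-the-envelope sketch in Python; the 10x peak-to-average ratio is an assumption you should replace with your own traffic pattern:

# 10,000 users/day x 10 page views each = 100,000 page views/day.
page_views_per_day = 10_000 * 10
seconds_per_day = 24 * 60 * 60

avg_rps = page_views_per_day / seconds_per_day
peak_rps = avg_rps * 10  # assumed 10x peak-to-average ratio

print(f"average: {avg_rps:.1f} req/s, assumed peak: {peak_rps:.1f} req/s")
# average: 1.2 req/s, assumed peak: 11.6 req/s

A couple of requests per second on average, even with a generous peak factor, is well within what a single modest instance can serve for a mostly page-generating PHP app, which is why a t2.large or m4.large is a reasonable starting point.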
But this all defeats the wonder of AWS: just spin up an instance and try it for a few days. If you notice it's struggling, figure out why (processes taking too long, out-of-memory errors, etc.), shut down the instance, and move up to the next instance size.
Also, AWS lets you easily build fault tolerance into your architecture and scale out, so if you end up needing 4 processors and 16 GB of memory (1 x m4.xlarge instance), you may do just as well with 2 x m4.large instances (2 processors and 8 GB of memory each) behind a load balancer. You then have two instances with the same total specs at roughly the same cost (I think it's marginally cheaper, actually).
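As a rough illustration of why the scale-out option costs about the same, here is the arithmetic with hypothetical on-demand prices (use the pricing links below for real numbers in your region, and remember the load balancer adds its own charge):

# Hypothetical on-demand prices in USD/hour; check the pricing page for real values.
price_m4_large = 0.10
price_m4_xlarge = 0.20
hours_per_month = 730

one_xlarge = price_m4_xlarge * hours_per_month
two_large = 2 * price_m4_large * hours_per_month
print(f"1 x m4.xlarge: ${one_xlarge:.2f}/month")
print(f"2 x m4.large : ${two_large:.2f}/month")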
You can see instance pricing here:
https://aws.amazon.com/ec2/pricing/
You can also put together your (almost) entire AWS architecture costs using this calculator:
http://calculator.s3.amazonaws.com/index.html

Does VMware vSphere ESXi 5.1 installation erase all data from the disk? [closed]

A task assigned to me is to virtualize the lab using VMware vSphere products (ESXi and vSphere Client). I spent a couple of days getting to know vSphere and finally decided to get hands-on with it. When the installation began, I proceeded very carefully until I reached this warning message from the installer:
Warning: This disk will be repartitioned.
What I understand from this warning is that my entire hard disk is going to be repartitioned and formatted, and I am going to lose all my data. I was about to confirm, but suddenly realized that "repartition" means all my data on the disk will be gone, and that is something I really don't want to face.
My problem is that I want to install VMware ESXi 5.1 on a machine that has a single hard disk with 4 partitions (3 primary, 1 extended). Each of the three primary partitions hosts one OS, meaning 3 operating systems are already installed.
So I want to install VMware ESXi on one of the primary partitions; I am ready to lose the OS on that partition, but I do not want to lose my other 2 OSes or the data on the extended partition.
How can I install VMware ESXi 5.1 alongside my other OSes without losing data?
Is it possible? If yes, please guide me; if not, please give me your suggestions.
I came here after Googling. ;-(
Thanks a bunch.
Unfortunately, if you only have one local disk, the ESXi installation will reformat and repartition that drive. There is some good news, though!
Instead, you can download ESXi, which is a free hypervisor and comes in an "installable" form. That means you can put ESXi onto something like an external HD or a USB stick and boot your ESXi server from whichever external drive you select. This also lets you keep the partitions on your local disk in place while using that disk's remaining space for your VM datastores.
Here is a great link from VMware that shows you exactly how to do it, enjoy!
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020655

VMware Player vs VMware Workstation [closed]

Can anyone give me a simple comparison of the two? It is hard to get the idea from their website.
VMware Player runs a virtual machine but can't create one. [Edit: Now it can.] Workstation allows for the creation and administration of virtual machines. If you have a second machine, you can create the VM on one and run it with Player on the other. I bought Workstation and use it to set up testing VMs that Player runs. Hope this explains it for you.
Edit: According to the FAQ:
VMware Workstation is much more advanced and comes with powerful features including snapshots, cloning, remote connections to vSphere, sharing VMs, advanced Virtual Machines settings and much more. Workstation is designed to be used by technical professionals such as developers, quality assurance engineers, systems engineers, IT administrators, technical support representatives, trainers, etc.
VMware Player can be seen as a free, closed-source competitor to VirtualBox.
Initially VMware Player (up to version 2.5) was intended to operate on fixed virtual machines (e.g. play back pre-created virtual disks).
Many advanced features, such as remote connections to vSphere, are probably not required by most users, and VMware Player provides the same core virtualization technology and 3D acceleration as Workstation.
In my experience VMware Player 5 is faster than VirtualBox 4.2 RC3 and has better SMP performance. Both are great, however, each with its own unique advantages. Both are somewhat lacking in 2D rendering performance.
See the official FAQ, and a feature comparison table.
from http://www.vmware.com/products/player/faqs.html:
How does VMware Player compare to VMware Workstation? VMware Player enables you to quickly and easily create and run virtual machines. However, VMware Player lacks many powerful features, remote connections to vSphere, drag and drop upload to vSphere, multiple Snapshots and Clones, and much more.
Not being able to revert snapshots is a big no for me.
One main reason we went with Workstation over Player at my job is because we need to run VMs that use a physical disk as their hard drive instead of a virtual disk. Workstation supports using physical disks while Player does not.
Workstation has some features that Player lacks, such as teams (groups of VMs connected by private LAN segments) and multi-level snapshot trees. It's aimed at power users and developers; they even have some hooks for using a debugger on the host to debug code in the VM (including kernel-level stuff). The core technology is the same, though.
re: VMware Workstation support for physical disks vs virtual disks.
I run Player with the VM disk files on their own dedicated fast hard drive, independent of the OS hard drive.
This allows the OS and Player to read/write to their own drives simultaneously and independently; the performance difference is noticeable, and a second WD Black, Raptor, or SSD is cheap.
Placing the VM disk file on a second drive also works with Microsoft Virtual PC.

Development Environment in a VM against an isolated development/test network [closed]

I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons.
The standard setup is something like:
VMWare image given to devs with tools installed
VM is customized to suit project/stream needs
VM sits in a network & domain that is isolated from the live/production network
SCM connectivity is only possible through dev/test network
Email and office tools need to be on live network so this means having two separate desktops going at once
Heavyweight dev tools in use on VMs so they are very resource hungry
Some problems that people complain about are:
Development environment runs slower than normal (the host OS is Windows XP, so memory is limited)
Switching between the dev machine and the email/office machine is a pain; simple things like cut and paste are harder. This is less efficient from a usability perspective.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Need a separate login to Dev/Test network/domain
Has anyone seen or worked in other (hopefully better) setups to this that have similar constraints (as mentioned at the top)?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
Given that you said there are unspecified risk/governance/security/compliance reasons for your organization's use of VMs, I doubt any option we could provide could negate those. Ultimately it sounds like they just need their development team as sandboxed as possible.
(And even so, the question/answers would probably be better off at serverfault since it's more networking/security oriented.)
It sounds like a big problem is not having enough horsepower on the host OS. WinXP should be fine, but you need adequate hardware, i.e. at least 3 GB of RAM, a dual-core CPU, and hardware that supports virtualization. Clipboard sync should work with the VM.
I am not currently doing this, but I've thought about it, and we've been kicking the idea around as a way to standardize the dev environment and avoid wasting a day when you get a new PC. I'm dismayed to hear that it's not the utopia I had dreamed of...
I've been using VMs as a development environment for a long time. There's nothing inherently wrong with it, and it presents lots of benefits.
Ensuring a consistent environment
Separating file systems for different backup scenarios
Added security
Potentially gives developers access to more raw computing power.
There is a lot of innovation in the VM world, as evidenced by the growing popularity of VM farms, hardware support for virtualization, and controlled "turnkey" solutions, like MS's VirtualPC images for testing browser compatibility and the TurnKey set of appliances.
As others have said, your issues are probably due to insufficient hardware or sub-optimal configurations.
Development environment runs slower than normal (host OS is windows XP so memory is limited)
This should not be noticeable. XP vs. Windows Vista or Win7 is a marginal comparison. I would check the amount of physical RAM allocated to the VM.
Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective.
There are VM-specific optimizations/configurations that can make these tasks seamless. I would consult your VM maintenance staff.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Again, should be seamless, but consult VM staff.
Need a separate login to Dev/Test network/domain
I would see this as a business decision: your company could obviously set up virtual machines with the same domain policies as your own personal workstation, but may have other (big brother?) purposes for forcing you to log in separately.
As far as using VMs as an agent of control, I think there are better solutions, like well-designed authorization controls around the production machines. There's nothing like a paper trail to make people behave themselves.