I have just installed VMware ESXi 7 as a virtual machine, just for learning. I have seen that it is feasible to create nested VMs using VMware Workstation Player plus an Intel chipset: my goal is to create a virtual machine inside the virtualized ESXi server for testing.
At the moment I cannot install any VM, probably because I have not created a datastore yet.
To create a datastore I thought I would edit the partitions of the available free space (20 GB is enough for a Linux VM), but when I try to edit the partitions I only get a summary in which I cannot configure anything at all (see pics).
Do you have any suggestions?
It is not good practice to use the disk where the OS is installed as a datastore. Power off your ESXi VM and add another virtual disk to it. After you boot the server up again you will be able to create a new datastore on that disk.
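If you prefer the command line to the Host Client's "New datastore" wizard, a rough sketch of turning a blank second disk into a VMFS-6 datastore could look like the commands below; the device name, end sector, and datastore name are placeholders you would substitute from the output of the first two commands.
# List disks and note the identifier of the new, empty disk
esxcli storage core device list
# Find the usable sector range of that disk
partedUtil getUsableSectors /vmfs/devices/disks/<device>
# Create a GPT label with a single VMFS partition
# (AA31E02A400F11DB9590000C2911D1B8 is the VMFS partition type GUID)
partedUtil setptbl /vmfs/devices/disks/<device> gpt "1 2048 <endSector> AA31E02A400F11DB9590000C2911D1B8 0"
# Format partition 1 as VMFS-6 and name the datastore
vmkfstools -C vmfs6 -S datastore2 /vmfs/devices/disks/<device>:1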
I have a small virtual machine (f1-micro) on Google Cloud Platform. It has been running daily without issues for more than a year. Some days ago I tried to install an SFTP server: I installed some packages and created some users. Since then I am not able to access it :(.
Does anybody know what I can do to recover access? Maybe there is an option to move the data on the hard disk to another "fresh" virtual machine?
Thanks,
Ferran
I have a production server that hosts 3 VMs on ESXi 5.5.
Back in the day, I used a customized HP image to get ESXi installed on the ProLiant server.
I have purchased a new server with ESXi 6.7 installed and wonder if I can move my 3 VMs hosted on the old HP server onto my new server (running ESXi 6.7).
The HP server sits 1,500 km away, so it is challenging to test.
Did anyone come across any challenges moving VMs from one host to another running a different ESXi version?
Thank you
You won't have any problems with the transfer itself, but you need to decide which transfer method to use.
You can install the vCenter Server Appliance and then migrate with Storage vMotion.
If you do not want to install vCenter, you can power the VMs off and export them as OVF packages (more on OVF here on vmware.com's site). You can then add them again from the Deploy OVF section on the new host.
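For example, a hedged sketch of the ovftool round trip (host names, credentials, VM name, and datastore are placeholders):
# Export the powered-off VM from the old host into a local OVF package
ovftool vi://root@old-esxi-host/MyVM /exports/MyVM.ovf
# Deploy the package onto the new host, picking the target datastore
ovftool -ds=datastore1 /exports/MyVM.ovf vi://root@new-esxi-host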
Links below for vCenter installation.
https://www.tayfundeger.com/vcenter-server-appliance-6-7-kurulumu-bolum-1.html
https://www.tayfundeger.com/vcenter-server-appliance-6-7-kurulumu-bolum-2.html
Thanks.
VM objects are fairly backward compatible and usually work across quite a few years and a handful of versions, so you should be fine between those particular versions.
The biggest consideration is normally how to get the VM object data from point A to point B. Example:
Are you using storage based replication?
Are you SCPing the data directly from the hosts? (A rough sketch of that option follows this list.)
Are you exporting the VMs, transporting the data, and importing them?
Etc.
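As an illustration of the SCP option, a rough sketch (SSH must be enabled on both hosts; host names, datastore, and VM folder are placeholders):
# Copy the powered-off VM's folder from the old host to the new one
scp -r /vmfs/volumes/datastore1/MyVM root@new-esxi-host:/vmfs/volumes/datastore1/
# On the new host, register the copied VM so it shows up in the inventory
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx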
Yes, you can. The things you must take into account are:
1 - Hardware version. It's not possible to downgrade the HW version through the ESXi UI. This won't be a problem in your case, as you are moving the VMs to a higher ESXi version that still supports ESXi 5.5's HW version. Once you have the VMs on the target server you can decide to upgrade to the most recent HW version for your new platform.
2 - VMFS version. ESXi 6.7 allows the use of VMFS-5 or VMFS-6, which is a newer version of the VMware file system. You can indeed move VMs from VMFS-5 to VMFS-6. Nonetheless, unless it is unavoidable, I would keep the same VMFS version, as a cross-file-system migration can expose you to incompatibilities that you should avoid. (A quick way to check both versions is sketched after this list.)
3 - You will have to move your VMs over IP. If you don't own a VMware license that allows you to migrate them, you can use an ESXi backup tool from 33hops.com that is compatible with the unlicensed free ESXi.
XSIBackup-DC is a well-tested tool that allows you to live-migrate VMs over IP on licensed or unlicensed versions of ESXi.
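As a quick check on points 1 and 2, both versions can be read from an SSH session on the host; the datastore and VM names below are placeholders:
# List datastores with their file system type (VMFS-5 / VMFS-6)
esxcli storage filesystem list
# Show the virtual hardware version recorded in a VM's .vmx file
grep virtualHW.version /vmfs/volumes/datastore1/MyVM/MyVM.vmx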
I'm looking for ways to migrate a server from physical to the GCP cloud, but there are a lot of challenges to consider.
My plans are:
Lift and shift the data (thinking of this if not using Velostrata).
Migrate using GCP Velostrata.
Migrating using Velostrata was not so clear; there seems to be no defined way to do it. Link -> https://cloud.google.com/migrate/compute-engine/docs/4.5/how-to/prepare-vms-servers/physical-servers
Going through the documentation, it looks like the server has to be migrated to VMware first and then to the GCP cloud.
Can you guys help me simplify the steps and confirm this?
GCP has a couple of options to migrate instances.
Import disk
The import tool supports most virtual disk file formats, including VMDK and VHD.
This feature has the following limitations:
Linux virtual disks must use GRUB as the bootloader.
UEFI bootloaders are not supported for either Windows or Linux.
Linux virtual disks must meet the same requirements as custom images, including support for Virtio-SCSI storage controller devices.
When installed on Windows virtual disks, application-whitelisting software, such as Cb Protection by Carbon Black, can cause the import process to fail. You might need to uninstall such software prior to import.
If you are importing a virtual disk running RHEL, bring-your-own-license (BYOL) is supported only if the python-boto package is installed on the virtual disk prior to import.
Operating systems on virtual disks must support ACPI.
If you decide to go this route, I recommend you look at and use the compatibility precheck tool.
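If the disks pass the prechecks, the import itself is a short gcloud sequence; the sketch below assumes the disk has already been uploaded to a Cloud Storage bucket, and the bucket, image, OS, and zone values are placeholders:
# Import a VMDK from Cloud Storage as a bootable Compute Engine image
gcloud compute images import my-imported-image \
    --source-file=gs://my-bucket/server-disk.vmdk \
    --os=centos-7
# Create an instance from the imported image
gcloud compute instances create my-migrated-vm \
    --image=my-imported-image \
    --zone=us-central1-a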
Velostrata
Velostrata supports four different sources of machines:
On-premise VM
Azure
AWS
Physical server
The guide you shared indicates that you need to download the "Migrate for Compute Engine Connector ISO image" (included in the link), save it to a USB drive, and make it bootable.
Then you will need to continue with the steps here.
You can also take the path you suggest and do a P2V migration to a VMware environment using a tool such as VMware Converter.
Once your machine is in a VMware environment, follow the on-premises Velostrata migration guide.
When building an OVF, you can specify a VirtualSystemCollection tag that allows multiple VMs to share the same base disk image; any changes that the individual machines make are copy-on-write into a private disk area for each specific VM.
When you try to deploy images set up this way to ESXi, it complains: Unsupported element 'VirtualSystemCollection'. It would appear that you need the commercial vCenter or vApp servers from VMware to utilize this feature (from what I've been able to grok so far).
Is there a way to do this through free software (free like ESXi, or open source)?
The ultimate goal is to have a single disk image that is used as a base, and to bring up a cluster of VMs that are then individually configured, so that for a VM with a 500 MB disk I only need 500M + (num_vms * delta_per_vm) of storage rather than 500M * num_vms.
An ESXi host connected to a vCenter should support this via vApps, but since you wanted a non-commercial solution, the closest thing is to use VirtualBox.
The open source VirtualBox has multi-attach support to achieve this with different disk formats, and it works very well. It also supports qcow (QEMU copy-on-write) disks. Basically, you create a master disk and attach it to multiple VMs. (Huge disk space saving.)
It can also happily import multiple VMs from a single OVA file with VirtualSystemCollection, but unfortunately it still requires manual intervention to tell VirtualBox that the disks are shared after importing all the VMs. (Well, that defeats the purpose of appliance deployment in the first place...)
After creating the master disk (or after deployment), attaching it to multiple VMs can be done with the GUI or with the following command:
VBoxManage storageattach "vm-name" --storagectl "sata1" --port 0 --device 0 --type hdd --medium base.vdi --mtype multiattach
For more information, see http://www.electricmonk.nl/log/2011/09/24/multiple-virtualbox-vms-using-one-base-image-copy-on-write/ and http://virtbjorn.blogspot.com.tr/2012/12/virtualbox-multi-attach-disk.html
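A slightly fuller, hedged sketch of the same workflow with a second VM (VM, controller, and disk names are placeholders and mirror the command above):
# Explicitly mark the base disk as multi-attach (storageattach with
# --mtype multiattach does this implicitly as well)
VBoxManage modifymedium disk base.vdi --type multiattach
# Attach the same base disk to a second VM; each VM writes its changes
# to its own differencing image
VBoxManage storageattach "vm-name-2" --storagectl "sata1" --port 0 --device 0 --type hdd --medium base.vdi --mtype multiattach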
If you really want to use VMware ESXi, you can use data deduplication to achieve the same result at the block level (this is commonly used by cloud hosting companies). You can see the deduplication success rates with open source tools here: http://opendedup.org/deduprates
In VMware products, a multi-tier appliance (VirtualSystemCollection) is called a vApp. In vSphere, vApps live in vCenter and not ESX. So yes, you need vCenter to import a VirtualSystemCollection.
If you are using Workstation, you can also try the free vApprun tool:
https://labs.vmware.com/flings/vapprun
Here is what I did successfully to import such OVF images into my free ESXi server.
In the OVF file, an XML element, VirtualSystemCollection, defines the vApp.
You can manually edit the OVF file and remove or comment out this part as shown below. This will allow you to import the VM into ESXi without vCenter once the OVF image is converted using the VMware OVF Tool.
<!-- ovf:VirtualSystemCollection ovf:id="dummy-id">
<ovf:Info>A collection of virtual machines</ovf:Info>
<ovf:Name>dummy-name</ovf:Name>
<ovf:StartupSection>
<ovf:Info>VApp startup section</ovf:Info>
<ovf:Item ovf:id="dummy-id" ovf:order="0" ovf:startAction="powerOn" ovf:startDelay="0" ovf:stopAction="powerOff" ovf:stopDelay="0"/>
</ovf:StartupSection-->
Keep the remaining part intact and remove the following line at the end.
</ovf:VirtualSystemCollection>
Also make sure you have the latest ESXi Embedded Host Client installed to avoid other bug-related problems during import.
https://labs.vmware.com/flings/esxi-embedded-host-client
Converting OVF to VMX can be done using the VMware OVF Tool. On the command line it looks simply like the following:
ovftool <path_to_source>/<myvm>.ovf <path_to_target>/<myvm>.vmx
I develop exclusively on VMs. I currently run Boot Camp on a MacBook Pro and do all my development on a series of Virtual PC VMs for many different environments. This post by Andrew Connell literally changed the way I work.
I'm thinking about switching to Fusion and running everything in OS X, but I wasn't able to answer the following questions about VMware Fusion/Workstation/Server. I need to know if the following features from Virtual PC/Server exist in their VMware counterparts.
Differencing disks (ability to create a base VM and provision new VMs which just add deltas on top of the base [saves a ton of disk space, and makes it easy to spin up new VMs with a base set of functionality]). (Not available with Fusion, need Workstation [$189])
Undo disks (ability to rollback all changes to the VM within a session). (Available in both Workstation and Fusion [$189/$79.99 respectively])
Easily NAT out a different subnet for the VM to sit in. (In both Fusion/Workstation).
Share VMs between VM Player and VM Server. I'd like to build up a VM locally (on OS X/Fusion) and then move it to some server (Win2k3/Win2k8 and VM Server) and host it there but with VM Server. (In both Fusion/Workstation).
An equivalent to Hyper-V. (Both Fusion and Workstation take advantage of a type-2 hypervisor for 64-bit VMs; neither does for 32-bit VMs. VMware claims they're no slower as a result, and some benchmarks corroborate this assertion.)
Ability to Share disks between multiple VMs. If I have a bunch of databases on a virtual disk and want them to appear on more than one VM I should be able to just attach them. (Available in both Fusion and Workstation)
(Nice to have) Support for multiple processors assigned to a VM (Available in both Fusion and Workstation).
Is there a VMWare guru out there who knows for sure that the above features are available on the other side?
Also, the above has been free (as long as you have licenses for the Windows machines); besides buying Fusion, are there any other costs?
The end result of my research, thanks so much!
You can only create linked clones and full clones (which are close to differencing disks) in VMware Workstation (not Fusion). Workstation also has better snapshot management, in addition to other features which are difficult to enumerate. That being said, Workstation is $189 (as opposed to $79) and not available on OS X. In addition, Fusion 1.1 (the current release) has a bunch of display bugs on OS X 10.5 (it works well on 10.4). These will be remedied in Fusion 2.0, which is currently at RC1. I'll probably wait until v2.0 comes out and then use both Workstation and Fusion to provision and use these VMs on OS X.
I've not used Fusion, just Workstation and Server.
1) Yes, you can create a linked clone from the current VM state, or from a saved state (snapshot), in VMware Workstation. (A scripted example follows at the end of this answer.)
2) Yes, revert to snapshots
3) There's a number of different network setups, NAT's one of them
4) VMware virtual machines created with VMware Fusion are fully compatible with VMware’s latest products.
5) ?
6) You can add pre-existing disks to other VMs.
7) Yup, you can create multi-CPU VMs.
Workstation costs money, but VMware Server is free.
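If you later want to script linked clones and snapshots rather than click through the GUI, newer Workstation releases bundle the vmrun utility; a hedged sketch (paths and names are placeholders, and availability of the clone command depends on the Workstation version):
# Take a snapshot of the base VM to clone from
vmrun -T ws snapshot /vms/BaseVM/BaseVM.vmx base-snap
# Create a linked clone that stores only the deltas against that snapshot
vmrun -T ws clone /vms/BaseVM/BaseVM.vmx /vms/DevVM/DevVM.vmx linked -snapshot=base-snap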
It doesn't have #1, at least.
VMware Server is free, but only allows for one snapshot, a serious deficiency. VMware Workstation allows multiple snapshots and provides most of the same functionality.
VMware has a hypervisor which is equivalent to Hyper-V in Virtual PC.
You cannot share a VM that was created in Fusion with the Windows VMware Server (free version); you'll need the paid version to be able to share between both.
I'd also take a look at Sun's xVM VirtualBox for Mac. It runs Windows XP and Vista quite swiftly on my Mac.
1 and 2) VirtualBox has snapshots that branch off from the base VM like a tree. You can revert to any previous snapshot and name them. (A command-line example follows after this list.)
3) It has NAT support and bridged networking like the VMWare and Microsoft products.
4) There is no server version of VirtualBox, but I know it shares an engine with Qemu, so it may be possible to host your VBox images on Qemu.
5) VirtualBox does have a hypervisor if your Mac has VT-x enabled.
6) Sure, you can add existing disks to other VMs. But you can't run the same disk in multiple VMs at once. (Isn't that a restriction of all virtualization hosts, though?)
7) No. VirtualBox will give each image one CPU and spread them out.
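For points 1 and 2, the same snapshots can also be driven from the command line; the VM and snapshot names below are placeholders:
# Take a named snapshot of a VM
VBoxManage snapshot "WinXP-Dev" take "clean-install"
# Later, roll the VM back to that snapshot
VBoxManage snapshot "WinXP-Dev" restore "clean-install"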