I'm looking for ways to migrate a physical server to the GCP cloud, but there are a lot of challenges to be considered.
My plans are:
Lift and shift the data (thinking of this if not using Velostrata).
Migrate using GCP Velostrata.
Migrating with Velostrata was not so clear to me; there seems to be no single defined way to do it. Link: https://cloud.google.com/migrate/compute-engine/docs/4.5/how-to/prepare-vms-servers/physical-servers
Going through the documentation, it looks like the server has to be migrated to VMware first and then to the GCP cloud.
Can you help me simplify the steps and confirm whether this is correct?
GCP has a couple of options to migrate instances.
Import disk
The import tool supports most virtual disk file formats, including VMDK and VHD.
This feature has the following limitations:
Linux virtual disks must use GRUB as the bootloader.
UEFI bootloaders are not supported for either Windows or Linux.
Linux virtual disks must meet the same requirements as custom images, including support for virtio-SCSI storage controller devices.
When installed on Windows virtual disks, application-whitelisting software, such as Cb Protection by Carbon Black, can cause the import process to fail; you might need to uninstall such software prior to import.
If you are importing a virtual disk running RHEL, bring your own license (BYOL) is supported only if the python-boto package is installed on the virtual disk prior to import.
Operating systems on virtual disks must support ACPI.
If you decide to go this route, I recommend that you look at and run the compatibility precheck tool first.
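For reference, this is roughly what a disk import looks like from the CLI. A minimal sketch with placeholder bucket, file, and image names; the --os value must match the guest OS on the disk:
# Upload the disk file to a bucket you own, then import it as a GCE image
gsutil cp disk1.vmdk gs://my-migration-bucket/
gcloud compute images import my-imported-image --source-file=gs://my-migration-bucket/disk1.vmdk --os=rhel-7
Running the precheck tool on the source machine before uploading can save you from a failed import.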
Velostrata
Velostrata supports four different sources of machines:
On-premises VMs
Azure
AWS
Physical servers
The guide you shared indicates that you need to download the "Migrate for Compute Engine Connector ISO image" (included in the link), save it to a USB drive, and make it bootable.
Then you will need to continue with the steps here.
You can also take the path you suggested and do a P2V migration into a VMware environment using a tool such as VMware vCenter Converter.
Once your machine is in a VMware environment, follow the on-premises Velostrata migration guide.
Related
I've spent ages going around in circles on this so I'm hoping someone will point me in the right direction.
I'm creating a SQL Server lab running under Hyper-V on an Azure Windows Server 2016 Datacenter Gen 1 virtual machine. So far so good: I've got an AG running on two replica VMs. However, I want to expand the lab to include a Windows Failover Cluster, so I need to be able to create a shared disk, and that's where I'm stuck. Whenever I try to add a shared disk in the Hyper-V Manager (or PowerShell) I get the following error:
The storage where the virtual hard disk is located does not support virtual hard disk sharing
It can't be the type of Azure disk I'm using, as I get the same problem trying to create the Hyper-V shared disk on a Standard HDD, a Standard SSD and a Premium SSD with sharing enabled, so what else do I need to do?
Regards,
Gordon.
You can try the following possible solutions:
Please check whether you enabled sharing while creating the managed disk in the portal. If not, please enable it while deploying and set the max shares as per your requirement.
Reference:
Share an Azure managed disk across VMs - Azure Virtual Machines | Microsoft Docs
Enable shared disks for Azure managed disks - Azure Virtual Machines | Microsoft Docs
Please attach the shared VHDX file to a SCSI controller, since only the SCSI controller has the "virtual hard disk sharing" option. Because this is a Gen 1 virtual machine, keep the boot disk on the IDE controller (Gen 1 VMs can only boot from IDE) and add the shared data disk on SCSI (see the PowerShell sketch after the references below).
Note: please do not use this feature without Cluster Shared Volumes (CSV) or a Scale-Out File Server with SMB 3.0 on file-based storage; shared VHDX requires one of these as its underlying storage.
Reference:
the storage where the virtual hard disk is located does not support virtual disk sharing (microsoft.com)
Thank you Prabhu Dutta Mohanty for providing the reference link.
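To illustrate the second point, here is a minimal PowerShell sketch of creating a shared VHDX and attaching it to both replicas on the SCSI controller. The VM names, path, and size are placeholders, and the file must sit on CSV or Scale-Out File Server storage as noted above:
# Create a fixed-size VHDX on CSV/SOFS storage (path is an example)
New-VHD -Path C:\ClusterStorage\Volume1\shared.vhdx -Fixed -SizeBytes 10GB
# Attach it to both nodes with persistent reservations enabled
# (this is the "sharing" option in Hyper-V Manager)
Add-VMHardDiskDrive -VMName "SQLNODE1" -ControllerType SCSI -Path C:\ClusterStorage\Volume1\shared.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLNODE2" -ControllerType SCSI -Path C:\ClusterStorage\Volume1\shared.vhdx -SupportPersistentReservations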
Note: If the issue is still not resolved, please create a support request to Azure support from the portal (Support + Help) for assistance.
I have just installed VMware ESXi 7 as a virtual machine, just for learning. I have seen that it is feasible to create nested VMs using VMware Workstation Player on an Intel chipset; my testing goal is to create a virtual machine inside the virtualized ESXi server.
At the moment I cannot install any VM, probably because I have not created a datastore yet.
To create a datastore I thought I would edit the partitions on the available free space (for a Linux VM, 20 GB is enough), but when I try to edit the partitions I get a summary in which I cannot configure anything at all (see pics).
Do you have any suggestions?
It's not good practice to use the disk where the OS is installed as a datastore. Please power off your VM and add another disk to the ESXi VM. After you boot the server up again, you will be able to create a new datastore on that disk.
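If you prefer the command line to the host client UI, the same thing can be done over SSH. A rough sketch; the device name and sector range below are examples for a ~20 GB disk and will differ on your host (the long GUID is the standard VMFS partition type):
# List the available disks and pick the new, empty one
ls /vmfs/devices/disks/
# Create a GPT table with a single VMFS partition
partedUtil setptbl /vmfs/devices/disks/mpx.vmhba0:C0:T1:L0 gpt "1 2048 41943006 AA31E02A400F11DB9590000C2911D1B8 0"
# Format the partition as VMFS6 and label it as a new datastore
vmkfstools -C vmfs6 -S datastore2 /vmfs/devices/disks/mpx.vmhba0:C0:T1:L0:1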
I want to emulate the NFV framework for one of my projects, and I have proprietary software that can only be deployed as a virtual machine in VirtualBox. From my research, I found that VirtualBox uses a Type-2 hypervisor and that the ETSI NFV standards do not mandate a specific hypervisor. Now I have my network function running as a VNF. This architecture can be mapped to the NFV Infrastructure (NFVI) functional block of the NFV framework, as it has VNFs, virtual hardware, a Type-2 hypervisor, and physical hardware.
Is it safe to assume that my implementation is an emulation of the NFV framework because I do not really need VIM and MANO for this project?
I also found that I can use OpenStack as the VIM and OSM as the MANO. My proprietary software does not have an image and can't be instantiated through the VIM; the only way is deploying through VirtualBox. If my first assumption is wrong, is there a way to integrate VirtualBox (NFV infrastructure) with OpenStack (VIM) and OSM (MANO)?
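For reference, the basic VNF lifecycle operations that a VIM would drive can be performed against VirtualBox from the command line. A minimal sketch, assuming a VM named "my-vnf" already exists (the name is a placeholder):
# Instantiate: start the VNF VM without a GUI
VBoxManage startvm "my-vnf" --type headless
# Monitor: query the VM state
VBoxManage showvminfo "my-vnf" --machinereadable | grep ^VMState=
# Terminate: gracefully stop the VM
VBoxManage controlvm "my-vnf" acpipowerbutton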
When building an OVF, you can specify a VirtualSystemCollection tag that allows multiple VMs to share the same base disk image; any changes that the individual machines make are copy-on-write into a private disk area for each specific VM.
When you try to deploy images set up this way to ESXi, it complains: Unsupported element 'VirtualSystemCollection'. It would appear that you need the commercial vCenter or vApp servers from VMware to utilize this feature. (From what I've been able to grok so far.)
Is there a way to do this through free software (free like ESXi, or open source)?
The ultimate goal is that I want to have a single disk image that's used as a base, and to bring up a cluster of VMs that are then individually configured, so that for a VM with a 500 MB disk I only need '500M + (num_vms * delta_per_vm)' rather than '500M * num_vms'.
An ESXi host connected to a vCenter should support this via vApps, but since you wanted a non-commercial solution, the closest thing is to use VirtualBox.
The open source VirtualBox has multi-attach support to achieve this with different disk formats, and it works very well. It also supports qcow (QEMU copy-on-write) disks. Basically, you create a master disk and attach it to multiple VMs. (Huge disk space saving.)
It can also happily import multiple VMs from a single OVA file with VirtualSystemCollection, but unfortunately it still requires manual intervention to tell VirtualBox that the disks are shared after importing all the VMs. (Well, that defeats the point of appliance deployment in the first place...)
After creating the master disk (or after deployment), attaching it to multiple VMs can be done in the GUI or with the following command:
VBoxManage storageattach "vm-name" --storagectl "sata1" --port 0 --device 0 --type hdd --medium base.vdi --mtype multiattach
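The same command can be repeated for each additional VM; for example, with two hypothetical VMs named "node1" and "node2":
# Attach the same base image to each cluster node
VBoxManage storageattach "node1" --storagectl "sata1" --port 0 --device 0 --type hdd --medium base.vdi --mtype multiattach
VBoxManage storageattach "node2" --storagectl "sata1" --port 0 --device 0 --type hdd --medium base.vdi --mtype multiattach
VirtualBox then creates a per-VM differencing image automatically, which gives exactly the '500M + (num_vms * delta_per_vm)' layout you described.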
For more information, see http://www.electricmonk.nl/log/2011/09/24/multiple-virtualbox-vms-using-one-base-image-copy-on-write/ and http://virtbjorn.blogspot.com.tr/2012/12/virtualbox-multi-attach-disk.html
If you really want to use VMware ESXi, you can use data deduplication to achieve the same effect at the block level (an approach commonly used by cloud hosting companies). You can see deduplication success rates with open source tools here: http://opendedup.org/deduprates
In VMware products, a multi-tier appliance (VirtualSystemCollection) is called a vApp. In vSphere, vApps live in vCenter and not ESX. So yes, you need vCenter to import a VirtualSystemCollection.
If you are using Workstation, you can also try the free vApprun tool:
https://labs.vmware.com/flings/vapprun
Here is what I did successfully to have such OVF images imported into my free ESXi server.
In the OVF file, an XML element named VirtualSystemCollection defines the vApp.
You can manually edit the OVF file and remove or comment out this part as shown below. This will allow you to import the VM into ESXi without vCenter once the OVF image is converted using the VMware OVF Tool.
<!-- ovf:VirtualSystemCollection ovf:id="dummy-id">
<ovf:Info>A collection of virtual machines</ovf:Info>
<ovf:Name>dummy-name</ovf:Name>
<ovf:StartupSection>
<ovf:Info>VApp startup section</ovf:Info>
<ovf:Item ovf:id="dummy-id" ovf:order="0" ovf:startAction="powerOn" ovf:startDelay="0" ovf:stopAction="powerOff" ovf:stopDelay="0"/>
</ovf:StartupSection-->
Keep the remaining part intact and remove the following line at the end.
</ovf:VirtualSystemCollection>
Also make sure you have the latest ESXi Embedded Host Client installed to avoid other bug-related problems during import.
https://labs.vmware.com/flings/esxi-embedded-host-client
Converting OVF to VMX can be done using the VMware OVF Tool. On the command line it looks as simple as the following:
ovftool <path_to_source>/<myvm>.ovf <path_to_target>/<myvm>.vmx
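For example, with hypothetical paths (--acceptAllEulas just suppresses the interactive EULA prompts):
ovftool --acceptAllEulas ./appliance/myvm.ovf ./converted/myvm.vmx
The edited OVF can also be deployed straight to the host with a vi:// target locator, e.g. ovftool ./appliance/myvm.ovf vi://root@my-esxi-host/ (the host name is a placeholder).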
We need to perform tests on localized platforms, which puts a burden on our hardware resources: for just a few weeks at a time we might need plenty of servers and clients (Windows 2003, Windows 2008, Vista, XP, Red Hat, etc.) in multiple languages.
We have typically relied on blades with Windows 2003 and VMware, but these are sometimes outgrown by short-term needs, and the acquisition and deployment process is quite slow when the environment has to grow.
Is Amazon EC2/S3 usable in the following scenario?
Install VMware (the desktop product, because we need the ability to take snapshots) on an Amazon AMI.
Load existing VMware images from S3 and run them on EC2 instances (perhaps 3 or 4 server or client OSes on each EC2 instance).
We are more interested in the ability to very easily start or stop VMware snapshots for relatively short tests. This is just for testing configurations, not a production environment serving a real user workload; the only real user is the tester. These configurations might be required for just a few weeks and then turned off for a few months until the next release requires them again.
Is EC2/S3 a viable alternative for this type of testing purpose?
Do you actually need VMWare, or are you testing software that runs in the VMWare VMs? You might actually need VMWare if you are testing e.g. VMWare deployment policy, or are running code that tests the VMWare APIs. Examples of the latter might be you are testing an application server stack and currently using VMWare to test on many platforms.
If you actually need VMWare, I do not believe that you can install VMWare in EC2. Someone will correct & enlighten me if this is not the case.
If you don't actually need VMWare, you have more options. If you can use one of the zillion public AMIs as a baseline, clone the appropriate AMIs and customize them to suit your needs (save the customized version as a private AMI for your team). Then, you can use as many of them as you like. Perhaps you already have a bunch of VMWare images that you need to use in your testing. In that case, you can migrate your VMWare image to an EC2 AMI as described in various places in Google, for example:
http://thewebfellas.com/blog/2008/9/1/creating-an-new-ec2-ami-from-within-vmware-or-from-vmdk-files
(Apologies to the SO censors for not pasting the entire article here. It's pretty long.) But that's a shortcut; you can always use the documented AMI creation process to convert any machine (VMWare or not) to an AMI. Perform that process for each VMWare VM you have, and you'll be all set. Just keep in mind that when you create an AMI, you have to upload it to S3, and that will take a lot of time for large VMs.
This is a bit of a shameless plug, but we have a new startup that may deal with exactly your problem. Amazon EC2 is excellent for on-demand computing, but is really targeted at just a single user launching production servers. We've extended EC2 to make it a Virtual Lab Management environment, with self-service, policies and VM sharing. You can check it out at http://LabSlice.com and see if it meets your needs.
Amazon provides a solution themselves now: http://aws.typepad.com/aws/2010/12/amazon-vm-import-bring-your-vmware-images-to-the-cloud.html
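For what it's worth, that VM Import capability later gained a CLI front end as well. A rough sketch with placeholder bucket and key names, assuming the vmimport service role is already set up in your account:
# Upload the VMDK, then ask EC2 to import it as an AMI
aws s3 cp myvm-disk1.vmdk s3://my-import-bucket/
aws ec2 import-image --description "VMware test VM" --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=myvm-disk1.vmdk}"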