I am stuck with the 4 GB file-size limit on my .vdi file. How can I get around this limit?
Host OS: Ubuntu 10.04
Guest OS: Ubuntu 10.04
filesystem: fat32 (where the vdi file is located)
Can I create another VDI file and mount it in the guest filesystem?
If you have LVM configured in the guest, then just create another VDI, add it to the machine, then mount it and add it to LVM as a physical volume.
Next you can resize the logical volume. And it works!
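A minimal sketch of those LVM steps inside the guest, assuming the new VDI shows up as /dev/sdb and the volume group and logical volume are named vg0 and root (all hypothetical names):
sudo pvcreate /dev/sdb                      # turn the new disk into a physical volume
sudo vgextend vg0 /dev/sdb                  # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/vg0/root    # grow the logical volume into the new space
sudo resize2fs /dev/vg0/root                # grow the ext3/ext4 filesystem to match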
If you do not have LVM configured, then you can still add a VDI and mount it, but it will just be another part of the filesystem. What you can do is migrate your /home directory to this new volume and then mount it as /home.
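A rough sketch of that /home migration, assuming the new disk is /dev/sdb (hypothetical) and nothing is using /home while you copy:
sudo mkfs.ext3 /dev/sdb                     # format the new disk
sudo mkdir /mnt/newhome
sudo mount /dev/sdb /mnt/newhome
sudo rsync -avX /home/ /mnt/newhome/        # copy everything, preserving attributes
Then add an /etc/fstab entry such as "/dev/sdb /home ext3 defaults 0 2" so it mounts as /home on boot.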
filesystem: fat32 (where the vdi file is located)
That's where the 4 GB file-size limit comes from. Use ext3 (the default for Linux), ReiserFS, or any of the other modern filesystems.
You can create a split disk image like so:
VBoxManage createhd --filename /path/to/my-disk.vmdk --size 8192 --format vmdk --variant split2g
Now that VirtualBox 4.0 is out, you can resize VDIs!
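For instance, a sketch with VBoxManage (the disk name is a placeholder; --resize takes the new size in MB):
VBoxManage modifyhd my-disk.vdi --resize 20480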
I have a portable VirtualBox on my USB stick, which uses a FAT32 filesystem. When I tried to use a dynamic VDI with a 100 GB size limit in a virtual machine, VirtualBox displayed a warning telling me to use an NTFS filesystem instead of FAT, because FAT32's maximum file size is 4 GB.
I have an Ubuntu 16.04.3 LTS GCE instance.
I increased the system disk size (from 20 GB to 30 GB), but after a GCE restart, if I run
df -h
I still see a 20 GB disk.
In the past, on Ubuntu GCE instances, the system automatically picked up the new disk space after an instance restart.
Also in the documentation I read:
"Alternatively, instances that use most recent versions of Public
Images can automatically resize their partitions and file systems
after a system reboot. The SUSE Linux Enterprise Server (SLES) public
images are the only images that do not support this feature."
So, what is the problem?
What can I do to get the new space?
You can execute:
$ sudo lsblk                      # check that the disk itself shows the new size
$ sudo resize2fs /dev/sda1 30G    # grow the filesystem on the root partition
If you receive the message:
The filesystem is already 5242619 (4k) blocks long. Nothing to do!
Sorry, but I believe that you can't resize the disk using resize2fs in that case.
So here are two alternatives that you can follow.
Alternative 1 - Attach another disk to the VM.
Alternative 2 - Create a new disk from a snapshot of the original disk.
Steps to create a disk from a snapshot:
Stop the machine (not strictly necessary, but safer)
Take a snapshot
gcloud beta compute disks snapshot <disk_name> --project=<project_id> --snapshot-names=snapshot-1 --zone=<zone> --storage-location=us
Create a new machine from that snapshot (a gcloud sketch follows)
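A sketch of that last step with gcloud; the disk and instance names here are placeholders:
gcloud compute disks create new-disk-1 --source-snapshot=snapshot-1 --size=30GB --zone=<zone>
gcloud compute instances create new-vm-1 --disk=name=new-disk-1,boot=yes --zone=<zone>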
I'm trying to convert an OpenVZ container to VMware.
Since this is planned for roughly 1000 instances, I'm looking for a different approach than reinstalling from scratch.
I followed the steps in the last post:
https://communities.vmware.com/message/1719787#1719787
However, when booting from a live CD, it can't find any Linux partition.
I also tried yum install kernel-xx, which had no effect on the live CD not finding a partition, so I'm assuming there's an error while converting.
Does anyone know of a solution or some tweaks to the one I posted?
The OS in this case is CentOS 7 on OpenVZ 6.
Long story short: convert OpenVZ to KVM, then convert to VMware. The steps (with example commands after the list):
create a KVM guest with the same OS as your container
mount the KVM image file
rsync all data to that image file
unmount the image file, then start and stop the KVM guest once
convert the img to vmdk with qemu-img
move the vmdk file to the ESXi host
convert it to a thin-provisioned vmdk with vmkfstools
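A sketch of the two conversion commands, assuming the raw KVM image is named vm.img (hypothetical):
qemu-img convert -f raw -O vmdk vm.img vm.vmdk    # on the KVM host
vmkfstools -i vm.vmdk -d thin vm-thin.vmdk        # on the ESXi host, after moving the file over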
I had to tackle (and still am tackling) multiple issues to make it boot, like recreating the initramfs, reinstalling PolicyKit, reconfiguring networking, and adjusting GRUB.
Hope this helps someone.
It looks like you will have to go the OpenVZ -> KVM -> VMware route. This post by Roman Pertl explains how he did it, and it also links to some other tutorials.
rsync --exclude=/var/lib/initramfs-tools/* --exclude=/var/lock --exclude=/etc/fstab --exclude=/etc/modules --exclude=/etc/mtab --exclude=/boot/* --exclude=/proc/* --exclude=/lib/modules/* --exclude=/tmp/* --exclude=/dev/* --exclude=/sys/* -e ssh --delete-after --numeric-ids -avpogtStlHz / root@sourcevm:/
I downloaded a VM instance from the web and launched/modified it using VMware Workstation 12 Player.
I would now like to transfer this image onto an ESXi host running VMware ESXi Version 5.5.0.
I have tried copying the working directory "C:\Users\xxxx\Downloads\Kali-Linux-2.0.0-vm-amd64\Kali-Linux-2.0.0-vm-amd64" to the ESXi datastore and have tried to import it using a couple of methods:
I tried browsing to the datastore, right-clicking the "Kali-Linux-2.0.0-vm-amd64.vmx" file, and selecting "Add to inventory".
I tried creating a virtual machine, selecting the option to use an existing disk, and pointing it at the VMDK file.
Both methods allow me to create the machine, but fail with the following error when I try to power it up.
Failed to start the virtual machine.
Module DiskEarly power on failed.
Cannot open the disk '/vmfs/volumes/4dc219c6-2eb825c6-0119-d8d3855f4a40/Kali-Linux-2.0.0-vm-amd64/Kali-Linux-2.0.0-vm-amd64.vmdk' or one of the snapshot disks it depends on.
The system cannot find the file specified
VMware ESX cannot find the virtual disk "/vmfs/volumes/4dc219c6-2eb825c6-0119-d8d3855f4a40/Kali-Linux-2.0.0-vm-amd64/Kali-Linux-2.0.0-vm-amd64.vmdk". Verify the path is valid and try again.
I have checked and I can see the VMDK file on the Datastore.
I don't know if it is of any significance, but the files on my desktop are broken into multiple VMDK files, and when I copied them to the datastore they were turned into one large VMDK file.
It might be best to use VMware Converter to import the VM into your ESXi host, or even try an export to OVF from Workstation and then an import (Deploy OVF) on the ESXi host (a sketch follows).
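If you have VMware's OVF Tool installed, a minimal sketch of that export/deploy route (the datastore name and ESXi host are placeholders):
ovftool "C:\Users\xxxx\Downloads\Kali-Linux-2.0.0-vm-amd64\Kali-Linux-2.0.0-vm-amd64.vmx" kali.ova
ovftool -ds=datastore1 kali.ova vi://root@<esxi-host>/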
From the error generated, it looks like the original VM may still have some snapshots. Try removing any snapshots, then take note of the disk controller and disk type of the Workstation VM and check that they are supported for an ESXi VM (IDE, for example); ESXi likes SCSI.
I have an SSD and an HDD, and I want to install Genymotion and VirtualBox on the SSD. But when I install programs in a Genymotion emulator, they get installed on the SSD, and I want to avoid frequent writes to it. I know I can move Genymotion's emulators to the HDD (from the options), but I don't know where they are installed. So where are an emulator's apps installed? In its own folder (where Genymotion downloaded it), or somewhere else?
Thanks in advance.
On Linux, the deployed virtual machines' disks are stored in ~/.Genymotion/deployed/<name>. On Windows, they're found in C:\Users\<username>\AppData\Local\Genymobile\Genymotion\deployed\<name>.
I have a MacBook Air and space is at a premium. I have a Vagrant instance that has grown in size from 2 GB to 8 GB.
I was looking at options for reducing the disk size and found a few tutorials for VDI, but the actual file is a .vmdk file. Unfortunately, the tool to manage VMDK files is a commercially licensed tool from VMware.
Why does vagrant use the vmdk format as its default packaging format?
Is there a way to configure the vagrantfile and force it to use vdi instead of vmdk?
Simple answer is no.
VirtualBox only supports exporting images as OVF/OVA.
Vagrant 1.0.x base boxes are basically tar files of the VirtualBox exports. This changed a little in 1.1.x and 1.2+.
Anyway, technically you should still be able to convert the VMDK to VDI, but you will have to re-attach it to the existing VM or create a new one using it, e.g.:
VBoxManage clonehd in.vmdk out.vdi --format VDI
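A sketch of re-attaching the converted disk, assuming the VM is named "MyVM" and its controller "SATA Controller" (both hypothetical):
VBoxManage storageattach "MyVM" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium out.vdi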
Refer to http://docs.vagrantup.com/v2/boxes/format.html
In the past, boxes were just tar files of VirtualBox exports. With Vagrant supporting multiple providers, box files are now tar files where the contents differ for each provider. They are still tar files, but they may now optionally be gzipped as well.
Box files made for Vagrant 1.0.x and VirtualBox continue to work with Vagrant 1.1+ and the VirtualBox provider.
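For reference, a VirtualBox-provider box is just a (possibly gzipped) tarball, so you can inspect one yourself (the filename is a placeholder):
tar -tzf my-box.box
Typical contents are box.ovf, one or more .vmdk disks, and (for 1.1+) a metadata.json naming the provider.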