Can't boot VDI image after converting qcow2 QEMU image [closed] - virtualbox

I am trying to launch a VirtualBox VDI image obtained from a qcow2 image created with QEMU. The VDI image was created from the qcow2 image with the following command:
qemu-img convert -f qcow2 -O vdi debian-9.0-sparc64.qcow2 debian-9.0-sparc64.vdi
Version of qemu-img is :
$ qemu-img --version
qemu-img version 2.9.0
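As a sanity check on the conversion, it may help to inspect the output file before booting it; qemu-img can report the detected format and virtual size (assuming the file names above):
qemu-img info debian-9.0-sparc64.vdi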
But when I add the VDI image into VirtualBox (screenshot not shown) and launch it, I get a FATAL error message (screenshot not shown).
From what I have seen of similar FATAL errors, it seems that I have to attach an ISO image of the OS in addition to the VDI image I created, don't I?
I have an ISO image of Debian 9 Sparc64 (debian-9.0-sparc64-NETINST-1.iso), but this is a raw installation ISO, not an ISO of an already installed OS.
I tried to attach this ISO image in the configuration panel (screenshot not shown) and to change the ordering of the boot devices (screenshot not shown).
I have also tried to generate a VDI image by following this tutorial, but without success.
For the moment, I can only launch the qcow2 image with QEMU, like this:
qemu-system-sparc64 -name debian-sparc64 -machine sun4u,accel=tcg,usb=off -m 1024 \
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 \
-rtc base=utc -no-reboot -no-shutdown \
-boot strict=on \
-drive file=debian-9.0-sparc64.qcow2,if=none,id=drive-ide0-0-1,format=qcow2,cache=none \
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-1,id=ide0-0-1 \
-netdev user,id=hostnet0,hostfwd=tcp::5555-:22 \
-device e1000,netdev=hostnet0,id=net0,mac=52:54:00:ce:98:e8 \
-msg timestamp=on -nographic
I was motivated to launch Debian 9 Sparc64 with VirtualBox because, with qemu-system-sparc64, I can't get networking to work (but that is a different problem).
What might be wrong with the way I launch the VirtualBox VDI image? Any clues to fix this error message at boot would be appreciated.
Update 1
The issue seems to come from the conversion between qcow2 and VDI with the qemu-img tool. How do I make the VDI disk bootable? I have also tried:

VBoxManage convertdd debian-9.0-sparc64.qcow2 linux_file.vdi
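Note that VBoxManage convertdd (like convertfromraw) treats its input as a raw disk image, so feeding it a qcow2 file directly is likely to produce a corrupt VDI. A minimal sketch of a two-step conversion through a raw intermediate, assuming the file names above:

# Flatten the qcow2 to a raw image first, then let VirtualBox wrap it as VDI
qemu-img convert -f qcow2 -O raw debian-9.0-sparc64.qcow2 debian-9.0-sparc64.raw
VBoxManage convertfromraw debian-9.0-sparc64.raw debian-9.0-sparc64.vdi --format VDI

Even with a clean conversion, note that VirtualBox only virtualizes x86/amd64 guests, so a SPARC64 disk image may not boot in it regardless of how the image was produced.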

Related

Converting VirtualBox .VDI .VHD .VMDK to BOOTABLE .iso file

The title pretty much describes it all.
I thought it would be an extremely easy task, but I have been googling the topic for a few days and cannot find a proper solution.
I succeeded in converting it to .iso, but it is not BOOTABLE from a physical machine.
I have tried :
VBoxManage clonehd file.vdi output.iso --format RAW
I have tried :
VBoxManage clonemedium --format RAW gangina.vdi gangina.img
I have tried :
qemu-img convert -f vpc -O raw gangina.vhd gangina.raw
I have also tried to mount the bootable VDI file and run:
sudo dd if={mountedDirectory} of=gangina.iso status=progress
Unfortunately, none of the results is actually bootable from a physical machine.
I'm sad :(
You cannot dd from a mounted directory.
You can dd a partition, but it works better to dd the entire drive.
Example: sudo dd if=/dev/sda of=filename.iso status=progress
I am assuming you are on a Linux machine. When you have the image, write it to a USB drive, plug it in, and boot from it. I have used this method before with much success!
While you can image just a partition, say sda1 or sda2, dd'ing the entire drive will achieve what you are looking for.
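As a concrete sketch of the whole-drive approach (assuming the source drive shows up as /dev/sda and gangina.iso is the desired output; adjust both to your setup):

lsblk    # identify the whole source drive, not just a partition
sudo dd if=/dev/sda of=gangina.iso bs=4M status=progress
sync     # flush buffers before using the image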
Keep on Keeping on
You can convert your bootable .VDI, .VHD, or .VMDK source to a BOOTABLE raw image in the following way on Linux (e.g. Ubuntu, Mint, or Debian):
Convert .vdi to .img
qemu-img convert -f vdi -O raw source_image.vdi destination_image.img
Convert .vhd to .img
qemu-img convert -f vpc -O raw source_image.vhd destination_image.img
Convert .vmdk to .img
qemu-img convert -f vmdk -O raw source_image.vmdk destination_image.img
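A raw .img produced this way is a full disk image, boot sector included, so rather than repacking it as an ISO you can write it directly to a USB stick to boot a physical machine. A sketch, assuming the stick is /dev/sdX (replace with your actual device; this overwrites it completely):

sudo dd if=destination_image.img of=/dev/sdX bs=4M status=progress
sync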

Is there a counterpart to the VirtualBox command 'vboximg-mount' on macOS? (i.e., How do I unmount?)

I can find official documentation on FUSE-mounting virtual disk images but there doesn't seem to be a way to unmount.
Does anybody know how I can unmount a FUSE-mounted filesystem on the specified virtual disk image?
fusermount -u yourmountpoint
See fusermount -h
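On macOS specifically, fusermount is usually not available; with macFUSE (formerly osxfuse) the system unmount tools should work instead. A sketch, assuming yourmountpoint is the directory mounted by vboximg-mount:

umount yourmountpoint
# or, equivalently:
diskutil unmount yourmountpoint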

"virtual memory exhausted" when building Docker image

When building a Docker image, some C++ sources get compiled, and I ended up with errors like:
src/amun/CMakeFiles/cpumode.dir/build.make:134: recipe for target 'src/amun/CMakeFiles/cpumode.dir/cpu/decoder/encoder_decoder_state.cpp.o' failed
virtual memory exhausted: Cannot allocate memory
But when building the same .cpp code on the host machine, it works fine.
After some checking, the error message seems to be similar to the one that people get on a Raspberry Pi, https://www.bitpi.co/2015/02/11/how-to-change-raspberry-pis-swapfile-size-on-rasbian/
And after some more googling, this post on the Mac forum says that:
Swapfiles are dynamically created as needed, until either the disk is
full, or the kernel runs out of page table space. I do not think you
can change the page table space limits in the Mac OS X kernel. I have
not seen anything in all the years I've been using OS X.
Is there a way to increase the swap space for Docker build on Mac OS?
If not, what else can be done to overcome the "virtual memory exhausted" error when building a Docker image?
That does not seem trivial to do with XHyve.
As stated in this thread
I think the default size of the VM is 16GB. I kept running out of swap space even after bumping the ram on the VM up to 16GB.
Check if the method used for a VirtualBox VM would apply in XHyve: see "How to increase the swap space available in the boot2docker virtual machine?"
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
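To confirm the new swap space is actually active inside the VM before rerunning the build (a quick check; swapon and free are standard Linux tools, nothing boot2docker-specific assumed):

swapon -s    # or: free -m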
Check also this Digital Ocean Setup, again to test in your XHyve context.
mkswap is also seen here or in docker-root-xhyve/contrib/makehdd/makehdd.sh.
Since you have enough available memory on your host, I recommend assigning more memory to the Docker VM behind it.
As stated here:
You are on OSX, which runs Docker inside a Linux VM. Configure the max memory by clicking the whale icon in the task bar. By default it is 2 GB.
For further information: https://docs.docker.com/docker-for-mac/#memory
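To verify how much memory the Docker VM ended up with after changing the setting, docker info reports the VM's total memory on Docker for Mac (a quick check):

docker info | grep -i "total memory"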

Centos7 Kickstart on VirtualBox/VMware Workstation

I am trying to create a kickstart for CentOS 7.3. I have a Windows desktop with VMware Workstation Player installed. I started with a DVD that has CentOS 7.3 on it, created a VM in VMware Workstation Player, and installed the OS. I restarted the VM and copied all the files from /dev/sr0 (my DVD) to a new location. I copied the anaconda-generated kickstart file and renamed it to ks.cfg. I then used the command below to make an ISO.
mkisofs -o /home/kickstart.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -v "centos7.3"
Next I took this and burned it to a blank disc using:
growisofs --dvd-compat -Z /dev/cdrom=/home/kickstart.iso
When I use this disc in VirtualBox, mounted as the optical drive, the installer gets stuck on:
Started Show Plymouth Boot Screen
Started Device-Mapper Multipath Device Controller
Starting Open-iSCSI...
Reached target Paths.
Reached target Basic System.
Started Open-iSCSI.
Starting dracut initqueue hook..
while on VMware Workstation Player it gets to:
Started Show Plymouth Boot Screen
Started Device-Mapper Multipath Device Controller
Starting Open-iSCSI...
Reached target Paths.
Reached target Basic System.
Started Open-iSCSI.
Starting dracut initqueue hook..
... [sda] Assuming cache: write through
Why is it hanging at these spots? I have looked everywhere and can't seem to find any solutions so far.
You've probably found something else for this by now, but in case not, or in case someone else encounters this: I ran into some issues with this as well. I don't know if I had exactly the same issue, but mine was also hanging on the dracut initqueue, and changing one thing allowed the install to continue.
It turned out to be the -V flag on the mkisofs command. Whatever volume name you set with the -V flag (which it doesn't look like you have), it needs to match the value of LABEL in your /isolinux/isolinux.cfg file. In my fiddling I was using "MyLinuxISO" for this value.
in my /isolinux/isolinux.cfg:
label linux
menu label ^Install CentOS Linux 7 with KS
menu default
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=MyLinuxISO ks=cdrom:/ks.cfg
Using mkisofs:
mkisofs -o /home/kickstart.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -v -V "MyLinuxISO"
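To double-check that the volume label baked into the ISO matches what isolinux.cfg references, isoinfo (from the genisoimage package) can print it; a quick check, assuming the ISO path above:

isoinfo -d -i /home/kickstart.iso | grep -i "volume id"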
Don't know if this will help you but give it a shot?
Cheers

VMWARE ESXi PANIC: Failed to find HD boot partition

I've got problems installing VMware ESXi Server.
The installation finishes without any error messages and prompts me to reboot.
After pressing Enter, the system reboots. While booting, after the yellow loading screen, it switches to black and displays the following error message:
PANIC: Failed to find HD boot partition
All modules have been loaded without any errors.
After typing unsupported into the console, the BusyBox shell comes up.
I took a look in the /dev/disks directory, but no disk devices are listed there, unlike during the installation process.
When switching to the system console during installation, both SATA disks on the MPC51 controller are shown.
The controllers are named vmhba0 and vmhba32.
Does anyone know how to solve the problem?!
Hardware is a ESPRIMO P5615 (nForce4) from Fujitsu-Siemens.
The only solution I have found is to run the server from a thumb drive and use the embedded hard drive to store your virtual servers. This solution worked for me.
To achieve this, you will need:
A USB thumb drive 1GB or larger
An active Linux machine (or use a live CD option on your PowerEdge, such as Knoppix or a Gentoo LiveCD)
Mount your ESXi ISO:
mount -t iso9660 -o loop VMware-VMvisor-InstallerCD-3.5.0_Update_2-110271.i386.iso /mnt/esx
Write the installer file to the thumb drive:
tar xvzf /mnt/esx/install.tgz usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_2-110271.i386.dd.bz2 -O | bzip2 -d -c | dd of=/dev/sdb
Assumptions here (adjust to your settings):
/dev/sdb is where your thumb drive resides
VMware-VMvisor-InstallerCD-3.5.0_Update_2-110271.i386.iso is the name of your ISO file
usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_2-110271.i386.dd.bz2 is the name of the dd file in your iso (run tar ztf /mnt/esx/install.tgz to see what your exact file name is, it should be similar and relatively obvious)
It will take a few minutes to write, and when it's done simply boot off of this thumb drive. The PowerEdge servers have an internal USB (at least mine does) if aesthetics are important to you.
Source: http://cyborgworkshop.org/2008/08/30/install-vmware-esxi-onto-a-usb-thumbdrive/
EDIT 12/19/2009: ESXi 4.0.0 uses image.tgz instead of install.tgz to store its dd file.
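Going by that note, the same extraction should work for 4.0.0 with the archive name swapped. A sketch under that assumption; the .dd.bz2 path inside image.tgz below is hypothetical, so list the archive first to find the real one:

tar ztf /mnt/esx/image.tgz    # find the exact .dd.bz2 path for your build
# hypothetical path; substitute the one reported by the listing above
tar xvzf /mnt/esx/image.tgz usr/lib/vmware/installer/VMware-VMvisor-big-4.0.0.i386.dd.bz2 -O | bzip2 -d -c | dd of=/dev/sdb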