How can I extract files from a VDI (VirtualBox)?

I was using VirtualBox on my PC (Windows 7) and I managed to view some files in my .VDI file.
How can I open or view the contents of my .vdi file and retrieve the files from there?

I had a corrupted VDI file (according to the countless VDI-viewer programs I tried, which gave cryptic errors like "invalid handle", "no file selected", "please format disk") and I was not able to open the file, even with VirtualBox. I tried to convert it using the VirtualBox command-line tools, with no success. I tried mounting it in a new virtual machine and mounting it with ImDisk; no dice. I read four Microsoft TechNet articles, downloaded their utilities and tried countless things; no success.
However, with 7-Zip (https://www.7-zip.org/download.html) I was able to view all of the files and extract them selectively. Here's how I did it:
Install 7-Zip (make sure you also install the context-menu items, if prompted).
Right-click the VDI file and select "Open archive".
When the window appears you should see two entries: one called "Basic Microsoft Data Partition" and another system/reserved one. The file size is listed to the right of each entry, in bytes. Right-click the largest one and choose "Open Inside".
You should now see all of the files inside the archive. You can double-click folders to look inside them, and drag any files you'd like to extract straight to your desktop.
If 7-Zip gives you a cryptic error after extracting files, it may mean you closed the Explorer window of the folder you were copying files into.
If you didn't close the window and you're still getting an error, try extracting each sub-folder individually. Also make sure you have enough local hard-drive space, even if you are only copying the files to an external disk, as 7-Zip first extracts them to your local disk. If the files are highly compressible, you might get away with enabling NTFS compression on the AppData temp folder, so that when 7-Zip extracts the files locally they are compressed and can then be copied over to the other disk.
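The same steps can also be done from a command line with 7-Zip's 7z tool (package p7zip-full on most Linux distributions). A minimal sketch; the VDI path is a placeholder you must edit:

```shell
vdi="/path/to/your.vdi"   # <<== edit this (placeholder path)
if command -v 7z >/dev/null 2>&1 && [ -f "$vdi" ]; then
  7z l "$vdi"                # list contents; partitions show up as pseudo-files
  7z x "$vdi" -o./extracted  # extract everything under ./extracted
else
  echo "7z not installed or VDI path not set"
fi
```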

You can mount partitions from .vdi images using qemu-nbd:
sudo apt install qemu-utils
sudo modprobe nbd
vdi="/path/to/your.vdi" # <<== Edit this
sudo qemu-nbd -c /dev/nbd0 "$vdi"
# view partitions and select the one you want to mount.
# Using parted here, but you can also use cfdisk, fdisk, etc.
sudo parted /dev/nbd0 print
part=nbd0p2 # <<== partition you want to mount
sudo mkdir /mnt/vdi
sudo mount /dev/$part /mnt/vdi
Some users seem to need an extra parameter on the modprobe command. I didn't on Ubuntu 16.04, but if it doesn't work for you, try adding max_part=16:
sudo modprobe nbd max_part=16
When done:
sudo umount /dev/$part
sudo qemu-nbd --disconnect /dev/nbd0

Try out VMXray.
You can explore your vmdk image right inside your browser, select the files you want to extract, and extract them to the desired location. And not just vmdk: you can use VMXRay to look into and extract files from RAW, QEMU/KVM QCOW2, VirtualBox VDI, and ISO images. ext2, ext3, FAT and NTFS are the currently supported file systems. You can also use it to recover deleted photos from a raw dump of your camera's SD card, for example.
And do not worry: no data from your files is ever sent over the network. Data never leaves your machine; VMXRay works completely inside your browser.

As a first approach, you can simply try any archive viewer to open the .vdi file.
I tried 7-Zip on an Ubuntu MATE .vdi file and it showed the whole Linux file system.

An easy way is to attach the VDI as a second disk in another virtual machine.
The drive does not appear immediately; in Windows, open Disk Management, bring the disk online and assign it a drive letter.
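The attachment itself can be scripted on the host with VBoxManage. A sketch; the VM name "RescueVM", the controller name "SATA" and the VDI path are placeholders for your own setup:

```shell
# Placeholder VM name and VDI path -- substitute your own.
vm="RescueVM"
vdi="/path/to/your.vdi"
if command -v VBoxManage >/dev/null 2>&1 && VBoxManage list vms | grep -q "\"$vm\""; then
  # Attach the VDI to port 1 of the VM's SATA controller as a hard disk.
  VBoxManage storageattach "$vm" --storagectl "SATA" \
    --port 1 --device 0 --type hdd --medium "$vdi"
else
  echo "VBoxManage not available or VM \"$vm\" not found"
fi
```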

You can use ImDisk to mount a VDI file as a local drive in Windows; follow this VirtualBox forum thread and be happy. You can also convert the VDI to VHD and use the built-in Windows disk manager to mount the VHD (described here).
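The VDI-to-VHD conversion can be done with VirtualBox's own tooling. A sketch, with placeholder paths you must edit:

```shell
src="/path/to/your.vdi"   # placeholder paths -- edit both
dst="/path/to/your.vhd"
if command -v VBoxManage >/dev/null 2>&1 && [ -f "$src" ]; then
  # On VirtualBox 5.x and later the subcommand is clonemedium;
  # older releases call it clonehd.
  VBoxManage clonemedium disk "$src" "$dst" --format VHD
else
  echo "VBoxManage not available or source VDI path not set"
fi
```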

Related

Google Colab is telling me that a path to a video input is not set when it clearly is set correctly

I am using a Google Colab GPU to run an open-source vehicle-counting system on the darknet/yolov3 framework. I was able to get yolov3 running perfectly for video object detection, but I cannot get this second repository to run, and I think it is a Google Colab issue because I am new to it. Ivy-master and darknet-master are cloned under content; I also tried saving them in Drive, but it made no difference.
I have my .env file set up as vars.env and installed colab-env. When I run !python -m main in Colab, I get the error Path to video or camera input not set. The variable in my vars.env file looks like this:
VIDEO="./content/vehiclesystem/data/demodata/videos/sample.mp4". The path is correct, so why is Colab telling me the path is not set? I have tried asking the owner for help, but no luck.
I think the path should not contain the leading . (dot). To be certain, go to the Files panel on the left side, right-click the mp4 file and select the Copy path option.
Paste that path into the variable in your vars.env file. It should work now :)
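The underlying behaviour is easy to demonstrate outside Colab: a leading . is resolved against the current working directory (in Colab usually /content, which turns ./content/... into /content/content/...). A small sketch with a throwaway directory:

```shell
# Simulate the situation with a throwaway directory.
mkdir -p /tmp/pathdemo/content
cd /tmp/pathdemo
# A leading "." means "relative to the current working directory",
# so "./content" resolves to <cwd>/content, not to /content.
resolved=$(realpath ./content)
echo "$resolved"   # -> <cwd>/content
```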

Failed to import appliance: could not find a SCSI controller

I am on a Mac and am trying to import a virtual machine image (an .ova file). When I try to import the file into a VM I get the following error:
Could not find a storage controller named 'SCSI Controller'
Do any solutions already exist for this problem?
I got a clue to the answer from here: https://ctors.net/2014/07/17/vmware_to_virtualbox
Basically, you need to change the virtual disk controller, e.g. change ddb.adapterType from "buslogic" or "lsilogic" to "ide".
However, if you don't have VMware available to boot the original image, remove the VMware Tools and remove the hard disk, you can instead hack the .ovf file inside the .ova to switch the virtual SCSI controller to an IDE controller.
Here's how.
First, extract the .ova archive; let's assume it's in the current directory and called vm.ova:
mkdir ./temp
cd temp
tar -xvf ../vm.ova
This will extract three files: an *.ovf descriptor, a virtual disk *.vmdk file, and a manifest .mf file.
Edit the .ovf file and find the SCSI reference; it will be lsilogicsas, "buslogic" or "lsilogic". Replace that word with ide.
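The word swap can also be scripted, e.g. with GNU sed; a sketch demonstrated on a single toy .ovf line (the longest token is listed first so lsilogicsas is not half-matched as lsilogic):

```shell
# A toy stand-in for the real descriptor -- a real .ovf has much more around this line.
printf '<rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>\n' > vm.ovf
# Replace any of the three SCSI controller types with ide, in place.
sed -i -E 's/lsilogicsas|lsilogic|buslogic/ide/g' vm.ovf
cat vm.ovf
```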
While you are at it, you may want to rename the files so they don't contain spaces or strange characters in the name; this makes them more UNIX-friendly. Of course, if you rename the files, you need to update the references in the .ovf and .mf files.
Because you've modified the files, you need to recompute the SHA-1 values in the .mf file, e.g. run sha1sum and replace the old values in the .mf file:
$ sha1sum vm.ovf
4806ebc2630d9a1325ed555a396c00eadfc72248 vm.ovf
Now that you've swapped the disk controller and fixed up the manifest's SHA-1 values, you can pack the .ova back up. The files have to be in a specific order inside the archive (the .ovf descriptor first), so do this (use your own file names):
tar -cvf ../vm-new.ova ./vm.ovf
tar -rvf ../vm-new.ova ./vm.vmdk
tar -rvf ../vm-new.ova ./vm.mf
Done. Now open VirtualBox, click File -> Import Appliance and point it at the vm-new.ova file. Once imported, you should be able to start the VM.
Hope that helps.
Cheers, Karl
I ran into a similar problem, and I just extracted the .ova file and created a new VM with my own settings using the .vmdk file:
tar -xvf vm.ova
vm.ovf
vm.vmdk
vm.mf

AWS versioning - batch/mass restore to the last version that isn't 0 KB?

I made a very foolish error with a large image directory on our server, which is mounted via S3FS to an EC2 instance: I ran Image_Optim on it. It seemed to do a good job until I noticed missing files on the website; when I looked, I saw they were files that had been left at 0 KB...
...Fortunately I have versioning on, and a quick look seems to show that each 0 KB file also has a correct version from the exact same time.
It has happened to about 1,300 files in a directory of 2,500. The question is: can I batch-process all the 0 KB files and restore each to its latest version that is bigger than 0 KB?
The only batch-restore tool I can find is S3 Browser, but it restores all files in a folder to their latest version. In some cases that would work for the 0 KB files, but for many it won't. I also don't own the program, so I would rather do this with a command-line script if possible.
Once your file(s) have become 0 bytes, you cannot recover that content from the current version, at least not easily. If you mean restore/recover from an external backup, then that will work.
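That said, since versioning is enabled, the batch restore the question asks about can be sketched with the AWS CLI. An untested sketch: the bucket and prefix names are hypothetical, credentials are assumed to be configured, and list-object-versions returns versions newest-first per key:

```shell
bucket="my-bucket"   # hypothetical bucket name
prefix="images/"     # hypothetical prefix
if command -v aws >/dev/null 2>&1; then
  # Find keys whose latest version is 0 bytes...
  aws s3api list-object-versions --bucket "$bucket" --prefix "$prefix" \
    --query 'Versions[?IsLatest && Size==`0`].Key' --output text |
  tr '\t' '\n' | while read -r key; do
    [ -z "$key" ] && continue
    # ...take the newest version of that key larger than 0 bytes...
    vid=$(aws s3api list-object-versions --bucket "$bucket" --prefix "$key" \
      --query 'Versions[?Size > `0`] | [0].VersionId' --output text)
    [ "$vid" = "None" ] && continue
    # ...and copy it back on top, making it the new latest version.
    aws s3api copy-object --bucket "$bucket" --key "$key" \
      --copy-source "$bucket/$key?versionId=$vid"
  done
else
  echo "aws CLI not installed"
fi
```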

Installing OpenCart extensions locally

When installing OpenCart extensions, you're generally given a bunch of folders that should be copied to the root directory, and the extension files find their way into the right subfolders. This works fine with FTP software, but on a local installation (Mac OS X) using Finder, the same operation makes Finder overwrite the folders completely, deleting the actual site and keeping only the extension.
I can hold Alt when dragging the folders to get the option not to overwrite, but the problem is that I have hidden files visible, which means there is now a .DS_STORE file in each folder, and the hold-Alt approach doesn't work if there are ANY duplicate files in any of the folders.
I'm sure someone out there has stumbled upon the same problem. Any ideas for how to solve such a simple but annoying issue? I do not wish to use FTP software for local file management.
I had the same problem, and I found three different ways to solve it:
a - use another file manager; I personally use "Transmit" for this sort of thing;
b - use the terminal, e.g. ditto <source> <destination>. An even easier way: just type ditto, drag in the source folder, then drag in the destination folder; everything inside source will be merged into destination;
c - unzip the plugin inside the OC folder using the terminal, e.g. unzip plugin.zip (use tar -zxvf for a .tar.gz archive);
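ditto's merge behaviour can be reproduced on any Unix with plain cp (a different tool, same effect): copying src/. rather than src merges the contents into the destination instead of replacing the folder. A quick demo:

```shell
# Build a source and a destination that share a subfolder.
mkdir -p src/sub dst/sub
echo new  > src/sub/new.txt
echo keep > dst/sub/existing.txt
# "src/." copies the *contents* of src, merging into dst;
# files already in dst are kept (overwritten only on name clashes).
cp -a src/. dst/
ls dst/sub   # existing.txt  new.txt
```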

Tarring only the files of a directory

If I have a folder with a bunch of images, how can I tar ONLY the images, and not the folder structure leading to them, without having to cd into the images directory?
tar czf images.tgz /path/to/images/*
Now when images.tgz is extracted, the contents that come out are /path/to/images/...
How can I include only the images in the .tgz file (and not the three folders that lead to them)?
I know you can use --strip-components when untarring, although I'm not sure whether it also works when creating.
Perhaps iterate through the folder structure and pipe the results to tar sequentially via stdout/stdin (with cat or similar)? I'm not a Linux expert, so I'm afraid I can only theorise rather than provide hard code at this second; you've got me wondering now, though...
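tar can actually do the directory change for you with its -C option, which switches into the given directory before archiving, so the stored paths lose the leading folders. A sketch on a toy tree:

```shell
# Build a toy tree with three leading folders.
mkdir -p path/to/images
echo x > path/to/images/a.jpg
echo y > path/to/images/b.jpg
# -C changes into the directory first; entries are stored as ./a.jpg, ./b.jpg.
tar czf images.tgz -C path/to/images .
tar tzf images.tgz
```

Extracting images.tgz now drops the images directly into the current directory, with no path/to/images prefix.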