convert vmdk to ova using ovftool - vmware

I am trying to convert a vmdk file to an ova using ovftool.
This is the command I typed:
C:\Program Files\VMware\VMware OVF Tool>ovftool -st=vmdk "C:\Windows Server 2016\win2trg1-1.vmdk" -tt=ova "C:\Windows Server 2016\win2trg1-1.ova"
However, it did not work. The error is shown below:
Error: Failed to parse option '-st=vmdk'. Reason: Source type not known: vmdk
Completed with errors
I am using the Windows 8 cmd and I followed this link:
convert VMX to OVF using OVFtool. It did not work.
Any solutions?

Try this command:
ovftool [original .vmx location and filename] [new .ova location and filename]
Example:
ovftool test_machine.vmx test_machine.ova
If you don't have a .vmx file, you can convert from the .ovf instead:
ovftool test_machine.ovf test_machine.ova

If you have VMware Workstation, you can open your VM (the .vmx file) and use File > Export to OVF.... This will create an OVF file that you can use in VirtualBox.
According to this post, an OVA file is just a TAR archive containing the OVF package files. So you could do it by hand.
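A rough sketch of the by-hand route, assuming a package made up of test_machine.ovf, test_machine.mf and test_machine-disk1.vmdk (hypothetical names); note that the .ovf descriptor must be the first entry in the archive:
tar -cf test_machine.ova test_machine.ovf test_machine.mf test_machine-disk1.vmdk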

Related

gcloud builds submit command is not working as per the documentation

I am trying to build an image using the gcloud builds submit command, passing a GCS bucket object as the source as per the documented syntax, but it's not working.
gcloud builds submit gs://bucket/object.zip --tag=gcr.io/my-project/image
Error : -bash: gs://bucket_name/build_files.zip: No such file or directory
This path exists in the GCP project where I'm executing the command, but it still says no such file or directory.
What am I missing here?
Cloud Build looks for a local file or a tar.gz file on Google Cloud Storage.
In the case of a zip file like yours, the solution is to download the file locally, unzip it, and then launch your Cloud Build, as sketched below.
Indeed, you need to unzip the file. Cloud Build won't do it for you; it can only ungzip and untar files. When you add the --tag parameter, Cloud Build looks for a Dockerfile in your set of files and runs a docker build with it.
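A minimal sketch of that workaround, assuming the Dockerfile sits at the root of the archive (build_src is just a hypothetical scratch directory name):
gsutil cp gs://bucket/object.zip .
unzip object.zip -d build_src
gcloud builds submit build_src --tag=gcr.io/my-project/image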
Please try with single quotes (') or double quotes (") around gs://bucket/object.zip, and not the backquote (`), so the command would look like this:
gcloud builds submit 'gs://bucket/object.zip' --tag=gcr.io/my-project/image
It looks like there is an issue with the documentation; the changes have now been submitted to Google.

Why can't my GCP script/notebook find my file?

I have a working script that finds the data file when it is in the same directory as the script. This works both on my local machine and Google Colab.
When I try it on GCP, though, it cannot find the file. I tried 3 approaches:
PySpark Notebook:
Upload the .ipynb file which includes a wget command. This downloads the file without error, but I am unsure where it saves it to, and the script cannot find the file either (I assume because I am telling it that the file is in the same directory, and presumably wget on GCP saves it somewhere else by default).
PySpark with bucket:
I did the same as the PySpark notebook above, but first I uploaded the dataset to the bucket and then used the two links provided in the file details when you click the file name inside the bucket on the console (neither worked). I would like to avoid this though, as wget is much faster than downloading on my slow wifi and then reuploading to the bucket through the console.
GCP SSH:
Create cluster
Access VM through SSH.
Upload .py file using the cog icon
wget the dataset and move both into the same folder
Run script using python gcp.py
This just gives me an error saying the file was not found.
Thanks.
As per your first and third approaches, if you are running PySpark code on Dataproc, irrespective of whether you use a .ipynb file or a .py file, please note the points below:
If you use the wget command to download the file, it will be downloaded into the current working directory where your code is executed.
When you try to access the file through the PySpark code, it will look in HDFS by default. If you want to access the downloaded file from the current working directory, use the file:/// URI with an absolute file path.
If you want to access the file from HDFS, you have to move the downloaded file to HDFS and then access it from there using an absolute HDFS file path. Please refer to the example below:
hadoop fs -put <local file_name> </HDFS/path/to/directory>
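To make the two access paths concrete, here is a rough sketch with hypothetical paths, URL, and file names (dataset.csv, /user/myuser):
wget -P /tmp https://example.com/dataset.csv
hadoop fs -put /tmp/dataset.csv /user/myuser/dataset.csv
# inside the PySpark code, either of these should then resolve:
#   spark.read.csv("file:///tmp/dataset.csv")    <- local file on the driver
#   spark.read.csv("/user/myuser/dataset.csv")   <- HDFS, the default filesystem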

(gcloud.compute.images.create) Could not fetch resource: Invalid value for field 'resource.rawDisk.source'

I'm trying to create a custom image for Google Compute Engine by using a file from Cloud Storage with the following command:
gcloud compute images create my-custom-image-name --source-uri gs://my-storage-bucket-name/gce-demo-tar.gz
Output:
ERROR: (gcloud.compute.images.create) Could not fetch resource:
- Invalid value for field 'resource.rawDisk.source': 'https://storage.googleapis.com/storage/v1/b/my-storage-bucket-name/o/gce-demo-tar.gz'.
The provided source is not a supported file.
The file in question is from a virtual machine exported in RAW format using the following command:
VBoxManage clonehd -format RAW ~/VirtualBox\ VMs/SLES12sp5/SLES12sp5.qcow ~/disk.raw
Then archived with the following command:
gtar -cSzf gce-demo-tar.gz disk.raw
However, I'm not sure whether the problem is related to the file itself (I get exactly the same error if I try to import an OVA file) or whether it is related to storage permissions or configuration?
Thank you!
In the file path when specifying your --source-uri flag, try gs://my-storage-bucket-name/gce-demo.tar.gz and make sure the file is uploaded with the same name.
The error might be occurring because of the file extension you used, which is .gz when it should be .tar.gz instead.
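A hedged sketch of that suggestion, assuming the object already exists in the bucket under the old name. As a side check, GCE requires the archive to contain a file named disk.raw at its top level, which you can verify before retrying:
gtar -tzf gce-demo-tar.gz
# expected: a single top-level entry named disk.raw
gsutil mv gs://my-storage-bucket-name/gce-demo-tar.gz gs://my-storage-bucket-name/gce-demo.tar.gz
gcloud compute images create my-custom-image-name --source-uri gs://my-storage-bucket-name/gce-demo.tar.gz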

Is there a logic bug in VirtualBox between storageattach and registervm?

Background info:
Two files:
xxx.vbox is used to register the VM in VirtualBox.
xxx.vdi is a disk image that needs to be registered as a virtual disk.
I want to register both of the above, but the two commands seem to be mutually exclusive.
command line:
> VBoxManage.exe storageattach "Ubuntu-Lite" --storagectl
VBoxManage.exe: error: Could not find a registered machine named 'Ubuntu-Lite' (It needs a registered VM.)
> VBoxManage registervm "..\Ubuntu-Lite\Ubuntu-Lite.vbox"
VBoxManage.exe: error: Could not find an open hard disk with UUID {9a69f2a6-6199-49f6-825e-58eb29a82db4}
(It needs a registered disk.)
How to resolve that?
Resolved.
The directory where the VM files are stored keeps a backup file named xxx.vbox-prev.
Find it, then use the registervm command to register the VM directly (see the sketch below).
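A sketch of that recovery on Windows, reusing the paths from the question (adjust as needed); VirtualBox keeps the previous machine file as xxx.vbox-prev, so restoring it gives registervm a consistent .vbox to read:
copy "..\Ubuntu-Lite\Ubuntu-Lite.vbox-prev" "..\Ubuntu-Lite\Ubuntu-Lite.vbox"
VBoxManage registervm "..\Ubuntu-Lite\Ubuntu-Lite.vbox"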

Packer - 1st: How to create a file in the machine. 2nd: Export the machine to an ova file

I use Packer to create a template file which is deployed via API to our provider.
The template is built from CentOS 7.4 minimal ISO file, using a kickstart anaconda-ks.cfg.
In that kickstart file I'm configuring what packages to install in my template, and in the post-config part of the kickstart file I run different bash commands to configure it. In this post-config I also run a few cat > /path/file.sh <<EOF heredocs to put some files on disk.
1st.
One of the files is quite large, and although I've tried splitting it into pieces, the last piece freezes the template creation. I can see nothing wrong in my code. It seems to me like the last cat >> /path/file.sh <<EOF just freezes the Packer job.
The question is whether there is any method, like in Terraform, to use a template file somewhere in the Packer directory structure as the source for creating that /path/file.sh file in my template.
2nd.
When the template is finished I need to export it to an .ova file, because my provider does not accept any other file type.
As in my json file I'm using the virtualbox-iso builder and the vagrant post-processor, I'm wondering how I can do the last part: exporting to ova.
My first thought was to use ovftool, but as I'm new to Packer I do not know how to insert that into my Packer code.
Please help.
Thank you.
Use the file provisioner.
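A minimal sketch for the json template (the source path files/file.sh is hypothetical); the file provisioner copies a local file into the machine during the build, so the large heredoc is no longer needed:
{
  "provisioners": [
    {
      "type": "file",
      "source": "files/file.sh",
      "destination": "/path/file.sh"
    }
  ]
}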
Set "format": "ova" in your template and remove the vagrant post-processor, unless you also need a Vagrant box. *) See virtualbox-iso: format.
*) If you really need the Vagrant box, you should run a shell-local post-processor in parallel with the vagrant one that converts the ovf to ova, instead of setting format, since the Vagrant box most likely must contain an ovf; a sketch follows below.
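A hedged sketch of that layout (artifact paths are assumptions; by default the virtualbox-iso builder writes its ovf under output-virtualbox-iso/). Top-level entries of post-processors run as separate, parallel sequences against the builder's artifact, so the ovftool conversion does not consume the artifact the vagrant post-processor needs:
"post-processors": [
  "vagrant",
  {
    "type": "shell-local",
    "inline": ["ovftool output-virtualbox-iso/*.ovf template.ova"]
  }
]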