packer vmware-iso export to single file - virtualbox

I am using the virtualbox-iso and vmware-iso builders. I am on a Mac, so vmware-iso runs with VMware Fusion.
The virtualbox-iso output is a single .ova file.
But the vmware-iso output is actually a bunch of files, and I could not figure out a way to import them.
How do I make Packer export the vmware-iso output as a single importable file?

https://github.com/mitchellh/packer/issues/1593
Apparently Packer exports only the .vmx format for VMware.

If you're willing to go the plugin route, the following post-processor will do what you need:
packer-post-processor-ovftool
It uses VMware's command-line ovftool to give Packer the ability to convert .ovf output (actually multiple files within a single folder) into a single .ova file. Simply configure your Packer template as follows:
{
  "post-processors": [{
    "type": "ovftool",
    "only": ["vmware"],
    "format": "ova"
  }]
}
If you don't like that route, an .ova file is apparently just a tar archive of the OVF package files. You could use Packer's compress post-processor to compress the VMware build output into a single tar archive and then just rename the file extension from .tar to .ova. You would configure that as follows:
{
  "post-processors": [{
    "type": "compress",
    "only": ["vmware"],
    "output": "actuallyAnOVA.tar"
  }]
}
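After the build finishes, rename the archive by hand (file name taken from the example above):

mv actuallyAnOVA.tar actuallyAnOVA.ova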

Related

Cloudfoundry VCAP_SERVICES variables are not supplied to container for django

I have a dockerized Django application that I want to deploy on SAP Cloud Platform via the cloudfoundry CLI utility. I have added a couple of User Provided Services with their own sets of credentials. For example, I added AWS S3 as a User Provided Service and provided credentials for it.
Now those credentials are available in an environment variable:
VCAP_SERVICES={"user-provided":[{
  "label": "user-provided",
  "name": "s3",
  "tags": [],
  "instance_name": "s3",
  "binding_name": null,
  "credentials": {
    "aws_access_key": "****",
    "aws_secret_key": "****",
    "bucket": "****",
    "region": "****",
    "endpoint": "*****"
  },
  "syslog_drain_url": "",
  "volume_mounts": []
}]}
I have a .env file in which I have variables defined, e.g. AWS_ACCESS_KEY. Usually I pass a string value to the variable, which is then consumed by my app. But, given that I have configured it via the User Provided Service mechanism and the credentials are already there, I was wondering how I can access those credentials.
There are a few ways to extract service information in Python applications.
You can do it programmatically using the cfenv library. You would simply integrate this library into the start-up of your application. This generally offers the most flexibility, but it can sometimes be difficult to integrate with frameworks, depending on how they expect the configuration to be fed in.
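A minimal sketch with cfenv (the service name s3 is taken from your VCAP_SERVICES above):

# pip install cfenv
from cfenv import AppEnv

env = AppEnv()
s3 = env.get_service(name='s3')  # look up the user-provided service by name
if s3:
    aws_access_key = s3.credentials['aws_access_key']
    aws_secret_key = s3.credentials['aws_secret_key']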
You can generate environment variables or configuration files (such as the .env file from your example) on the fly. This can be done using a .profile script. A .profile script, if placed in the root of your application, will execute prior to your application but inside the runtime container. This allows you to adjust the configuration of your application at the last possible moment.
A .profile script is just a shell script, and in it you can use tools like jq or sed to extract information from the VCAP_SERVICES environment variable and put that information elsewhere (possibly into other environment variables or into a .env file).
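For example, a .profile sketch that exports the S3 credentials as plain environment variables (this assumes jq is available in the container and uses the service name s3 from your example):

# .profile: pull credentials out of VCAP_SERVICES before the app starts
export AWS_ACCESS_KEY="$(echo "$VCAP_SERVICES" | jq -r '.["user-provided"][] | select(.name == "s3") | .credentials.aws_access_key')"
export AWS_SECRET_KEY="$(echo "$VCAP_SERVICES" | jq -r '.["user-provided"][] | select(.name == "s3") | .credentials.aws_secret_key')"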
Because you are pushing a Python application, the .profile script could also execute a Python script. The Python buildpack runs first and guarantees that a Python runtime is available on the PATH for use by your .profile script. Thus you can do something like this to execute a Python script.
.profile script:
python $HOME/.profile.py
.profile.py script (I made up this name; you can call it anything):
#!/usr/bin/env python3
print("Hello from python .profile script")
You can even import Python libraries included in your requirements.txt file from this script.
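For instance, a .profile.py sketch that parses VCAP_SERVICES with the standard library and appends the S3 credentials to a .env file (the service name s3 and the variable names are taken from your example):

#!/usr/bin/env python3
# Extract the user-provided S3 credentials from VCAP_SERVICES
# and append them to a .env file for the app to consume.
import json
import os

vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
for service in vcap.get("user-provided", []):
    if service["name"] == "s3":
        creds = service["credentials"]
        with open(os.path.join(os.environ["HOME"], ".env"), "a") as env_file:
            env_file.write("AWS_ACCESS_KEY=%s\n" % creds["aws_access_key"])
            env_file.write("AWS_SECRET_KEY=%s\n" % creds["aws_secret_key"])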

Packer - 1st: How to create a file in the machine. 2nd: Export machine to ova file

I use Packer to create a template file which is deployed via API to our provider.
The template is built from the CentOS 7.4 minimal ISO file, using a kickstart file, anaconda-ks.cfg.
In that kickstart file I configure which packages to install in my template, and in the post-config part of the kickstart file I run various bash commands to configure it. In this post-config I also run a few cat > /path/file.sh <<EOF commands to put some files on disk.
1st.
One of the files is quite large, and although I've tried splitting it into pieces, the last piece freezes the template creation. I can see nothing wrong in my code. It seems to me that the last cat >> /path/file.sh <<EOF just freezes the Packer job.
The question is whether there is any method, as in Terraform, to use a template file somewhere in the Packer directory structure as the source for creating that /path/file.sh file in my template.
2nd.
When the template is finished I need to export it to an .ova file, because my provider does not accept any other file type.
As my json file uses the builder type virtualbox-iso and the post-processor type vagrant, I'm wondering how I can do this last part: export to ova.
My first thought was to use ovftool, but as I'm new to Packer I do not know how to insert that into my Packer code.
Please help.
Thank you.
1st: Use the file provisioner.
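A minimal sketch of a file provisioner entry (the source path is made up for illustration; the destination matches your question):

{
  "provisioners": [{
    "type": "file",
    "source": "files/file.sh",
    "destination": "/path/file.sh"
  }]
}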
Set "format": "ova" in your template and remove the vagrant post-processor unless you need a vagrant box too. *) See virtualbox-iso: format
*) If you really need that you should run a shell-local post-provisioner in parallel with the vagrant one that converts the ovf to ova instead of setting format, since most likely the vagrant box must contain an ovf.
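A sketch of the relevant builder setting (all other required virtualbox-iso fields are omitted here):

{
  "builders": [{
    "type": "virtualbox-iso",
    "format": "ova"
  }]
}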

convert vmdk to ova using ovftool

I am trying to convert a vmdk file to ova using ovftool.
This is the command I typed:
C:\Program Files\VMware\VMware OVF Tool>ovftool -st=vmdk "C:\Windows Server 2016\win2trg1-1.vmdk" -tt=ova "C:\Windows Server 2016\win2trg1-1.ova"
However, it did not work. The error is shown below:
Error: Failed to parse option '-st=vmdk'. Reason: Source type not known: vmdk
Completed with errors
I am using the Windows 8 cmd, and I did get help from this link: convert VMX to OVF using OVFtool. It did not work.
Any solutions?
Try this command:
ovftool [original .vmx location and filename] [new .ova location and filename]
Example:
ovftool test_machine.vmx test_machine.ova
If you don't have a .vmx, you can use the .ovf instead:
ovftool test_machine.ovf test_machine.ova
If you have VMware Workstation, you can open your VM (the .vmx file) and use File > Export to OVF.... This will create an OVF file that you can use in VirtualBox.
According to this post, an OVA file is just a TAR archive containing the OVF package files. So you could create one by hand.
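For example, a by-hand sketch using the file names from the question (the OVF descriptor must be the first entry in the archive):

tar -cf win2trg1-1.ova win2trg1-1.ovf win2trg1-1.vmdk

Note that you first need the .ovf plus .vmdk (e.g. from the export described above); ovftool does not accept a bare .vmdk as a source, which is why the -st=vmdk option fails.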

packer: Build type vmware-iso using ova as source

In packer, when using "type": "virtualbox-ovf" it is possible to define a base ova using:
"source_path": "file://{{user `pwd`}}/server-base.ova",
This option does not seem to be available when building with "type": "vmware-iso".
Is there an alternative way to build with the VMware provider, starting from a base .ova?
Use the vmware-vmx builder. If you have an OVA, use ovftool to convert it to a .vmx before running Packer.
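A minimal sketch (server-base.ova is the file name from the question):

ovftool server-base.ova server-base.vmx

Then point the vmware-vmx builder at the converted machine:

{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "server-base.vmx"
  }]
}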

Access latest file in repository with gradle

I've created a task that can extract local .tar files (which I previously downloaded manually from Artifactory) as a test. How can I reference the files from my Gradle script when they are on Artifactory? Should I just use the server path? All I've done with Gradle is basic stuff, and I haven't worked with repositories.
I'd also like to perform a certain action based on whether the file has changed since I last ran the script. Is that possible?
One way of doing this would be to create a new configuration for your TAR file. In my example I gave it the name myTar. In the repositories closure you define the URL of your Artifactory repository, and in the dependencies closure you reference the TAR file as a dependency. When you run Gradle, it will download the file for you and put it in your local repository. As you wrote, you have already created a task that extracts the TAR file. I created a task named extractMyTar which references the downloaded TAR file by its configuration name and untars it into a local directory.
configurations {
    myTar
}

repositories {
    maven {
        url 'http://my.artifactory/repo'
    }
}

dependencies {
    // the @tar suffix requests the artifact with the extension "tar"
    myTar 'your.org:artifact-name:1.0@tar'
}

task extractMyTar {
    doLast {
        File myTarFile = configurations.getByName('myTar').singleFile
        if (myTarFile.exists()) {
            ant.untar(src: myTarFile, dest: file('myDestDir'))
        }
    }
}
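Run the task from your project directory (task name from the example above):

gradle extractMyTar

Note that Gradle caches the downloaded artifact, so with a fixed version like 1.0 it will not be re-downloaded on every run.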