I'm using Packer to export an existing VirtualBox VM to an OVA, but I couldn't find how to specify a filename for that OVA.
The config looks like this:
"builders": [{
"type": "virtualbox-vm",
"vm_name": "base-vm",
"output_directory": "output-ova",
"format": "ova",
...
In the output I get output-ova/base-vm.ova; the desired output is something like output-ova/exported-vm.ova.
The vm_name option controls both the name of the existing VM and the name of the exported file.
So, how do I set a different name for the output file?
Did you try something like this?
"builders": [{
"type": "virtualbox-vm",
"vm_name": "base-vm",
"output_directory": "output-ova",
"format": "ova",
"export_opts":
[
"--output", "exported-vm.ova",
],
...
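As far as I know, export_opts is handed straight to VBoxManage export, so the equivalent manual command would be roughly the following (the VM name and output path are just the values from the example above):

# Rough equivalent of what the builder would run for the export step
VBoxManage export base-vm --output output-ova/exported-vm.ova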
I've created a simple VM in VirtualBox and installed Ubuntu; however, I am unable to import it into AWS and generate an AMI from it.
Operating system: Ubuntu 20.04.4 LTS
Kernel: Linux 5.4.0-104-generic
I've followed the steps in the docs and set up role-policy.json and trust-policy.json:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
I keep running into the error:
{
    "ImportImageTasks": [
        {
            "Description": "My server VM",
            "ImportTaskId": "import-ami-xxx",
            "SnapshotDetails": [
                {
                    "DeviceName": "/dev/sde",
                    "DiskImageSize": 2362320896.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "Url": "s3://xxxx/simple-vm.ova",
                    "UserBucket": {
                        "S3Bucket": "xxx",
                        "S3Key": "simple-vm.ova"
                    }
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2.",
            "Tags": []
        }
    ]
}
I've tried converting the disk between .vdi and .vmdk.
I've tried disabling the floppy drive and updating the initramfs.
I ran into this error and was able to get around it by using import-snapshot instead of import-image. From there I could create an image from the snapshot through the ordinary means.
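For anyone trying the same workaround, a minimal AWS CLI sketch might look like the following (the bucket, keys, IDs, and device names are placeholders, and the register-image flags may need adjusting for your image):

# Import the disk image as an EBS snapshot instead of an AMI
aws ec2 import-snapshot \
    --description "simple-vm disk" \
    --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=simple-vm.vmdk}"

# Wait for the task to complete and note the resulting SnapshotId
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxx

# Register an AMI from the imported snapshot
aws ec2 register-image \
    --name "simple-vm" \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-xxxxxxxx}"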
My goal is to have an AWS Systems Manager document download a script from S3 and then run that script on the selected EC2 instance. In this case, it will be a Linux OS.
According to the AWS documentation for aws:downloadContent, the sourceInfo input is of type StringMap.
The example code looks like this:
{
    "schemaVersion": "2.2",
    "description": "aws:downloadContent",
    "parameters": {
        "sourceType": {
            "description": "(Required) The download source.",
            "type": "String"
        },
        "sourceInfo": {
            "description": "(Required) The information required to retrieve the content from the required source.",
            "type": "StringMap"
        }
    },
    "mainSteps": [
        {
            "action": "aws:downloadContent",
            "name": "downloadContent",
            "inputs": {
                "sourceType": "{{ sourceType }}",
                "sourceInfo": "{{ sourceInfo }}"
            }
        }
    ]
}
This code assumes you will run the document by hand (console or CLI) and enter sourceInfo as a parameter. When running it by hand, anything entered in that parameter (an S3 URL) isn't accepted. However, I'm not trying to run this by hand but programmatically, and I want to hard-code the S3 URL into sourceInfo in mainSteps.
AWS does give an example of the syntax, which looks like this:
{
"path": "https://s3.amazonaws.com/aws-executecommand-test/powershell/helloPowershell.ps1"
}
I've coded the document action in mainSteps like this:
{
    "action": "aws:downloadContent",
    "name": "downloadContent",
    "inputs": {
        "sourceType": "S3",
        "sourceInfo": {
            "path": "https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh"
        },
        "destinationPath": "/tmp"
    }
},
However, it doesn't seem to work and I receive this error:
invalid format in plugin properties map[sourceInfo:map[path:https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh] sourceType:S3];
error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
Note: I have seen a post that explains how to format this for Windows. I did try it; it didn't work and doesn't seem relevant to my Linux needs.
So my questions are:
Do you need a sourceInfo parameter of type StringMap at all - one that won't be referenced as {{ sourceInfo }} in the aws:downloadContent step in mainSteps?
How do you properly format the aws:downloadContent action sourceInfo StringMap in mainSteps?
Thank you in advance for your effort.
I had a similar issue, as I didn't want anyone to have to type the values when running the document, so I added a default to the sourceInfo parameter:
"sourceInfo": {
"description": "(Required) Blah.",
"type": "StringMap",
"displayType": "textarea",
"default": {
"path": "https://mybucket-public.s3-us-west-2.amazonaws.com/automation.sh"
}
}
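If you're creating and invoking the document programmatically, a sketch with the AWS CLI could look like this (the document name, file name, and instance ID are placeholders); with the default in place, no parameters have to be typed at run time:

# Create the document from the JSON above
aws ssm create-document \
    --name "DownloadAndRunScript" \
    --document-type "Command" \
    --content file://download-script.json

# Run it against an instance; the sourceInfo default is used when no parameter is supplied
aws ssm send-command \
    --document-name "DownloadAndRunScript" \
    --instance-ids i-0123456789abcdef0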
I've just imported 1000+ endpoints into a new collection from a Swagger endpoint (awesome feature, by the way).
What I would like to do now is add an environment variable into the URL for this collection, as it's the same collection from Dev to Stage to Prod.
A simple regex or string-match substitution would be great, but I can't find any way to do this. Is it possible?
In the exported collection JSON we can see objects of the following form:
"url": {
"raw": "https://example.com/user",
"host": [
"https://example.com"
],
"path": [
"user"
]
}
The goal is to convert them to:
"url": {
"raw": "{{someUrl}}/user",
"host": [
"{{someUrl}}"
],
"path": [
"user"
]
}
Using sed we can achieve this as follows:
Export the collection to postman_collection.json
Use sed to replace https://example.com with {{someUrl}} (a variant without escaping is shown after these steps):
sed -i -- 's/https:\/\/example.com/{{someUrl}}/g' postman_collection.json
Re-import the collection
Create a Postman environment variable someUrl in the Dev, Stage, and Prod environments.
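As a small aside, sed accepts any delimiter after the s command, so using pipes avoids escaping the slashes entirely (same file as above; on BSD/macOS sed, -i additionally needs an explicit, possibly empty, suffix such as -i ''):

sed -i -- 's|https://example.com|{{someUrl}}|g' postman_collection.json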
I want to dynamically increment the environment variables (dates) for my AWS Data Pipeline and was wondering if anyone has achieved this through a ShellCommandActivity by changing the config.json file?
{
    "values": { ... }
}
I'm not sure exactly what you are trying to achieve, but you can use nested expressions anywhere in your pipeline definition:
#{format(@scheduledStartTime, 'YYYY-MM-dd')}
For example, as a parameter you can reference in your "command":
"parameters": [
{
"id": "myDate",
"type" : "DateTime"
}
],
"values": {
"myDate": "#{minusDays(myDateTime,1)}"
}
Or get the date as part of the shell command that is being executed (the -v form below is BSD/macOS syntax; with GNU date on Linux use date -d '1 day ago'):
date -v -1d
More info:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-expressions.html
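Tying this back to the question, a minimal sketch of a ShellCommandActivity that exposes the scheduled date as an environment variable for a script could look like the following (the Ec2Instance ref and my_script.sh are placeholders):

{
    "id": "ShellCommandActivityId_example",
    "type": "ShellCommandActivity",
    "runsOn": { "ref": "Ec2Instance" },
    "command": "export RUN_DATE=#{format(@scheduledStartTime, 'YYYY-MM-dd')} && ./my_script.sh"
}

The expression is evaluated by Data Pipeline before the command runs, so the script simply sees RUN_DATE as an ordinary environment variable.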
I successfully managed to get a data pipeline to transfer data from a set of tables in Amazon RDS (Aurora) to a set of .csv files in S3, with a CopyActivity connecting the two DataNodes.
However, I'd like each .csv file to have the name of the table (or view) that it came from. I can't quite figure out how to do this. I think the best approach is to use an expression in the filePath parameter of the S3DataNode.
But, I've tried #{table}, #{node.table}, #{parent.table}, and a variety of combinations of node.id and parent.name without success.
Here are a couple of JSON snippets from my pipeline:
"database": {
"ref": "DatabaseId_abc123"
},
"name": "Foo",
"id": "DataNodeId_xyz321",
"type": "MySqlDataNode",
"table": "table_foo",
"selectQuery": "select * from #{table}"
},
{
"schedule": {
"ref": "DefaultSchedule"
},
"filePath": "#{myOutputS3Loc}/#{parent.node.table.help.me.here}.csv",
"name": "S3_BAR_Bucket",
"id": "DataNodeId_w7x8y9",
"type": "S3DataNode"
}
Any advice you can provide would be appreciated.
I see that you have #{table} (did you mean #{myTable}?). If you are using a parameter to pass the name of the DB table, you can use it in the S3 filePath as well, like this:
"filePath": "#{myOutputS3Loc}/#{myTable}.csv",