Packer post process AMI to virtualbox? - amazon-web-services

I have Packer configured to use the amazon-ebs builder to create a custom AMI from the Red Hat 6 image supplied by Red Hat. I'd really like Packer to post-process the custom AMI into a VirtualBox image for local testing. I've tried adding a simple post-processor to my Packer JSON as follows:
"post-processors": [
{
"type": "vagrant",
"keep_input_artifact": false
}
],
But all I end up with is a tiny .box file. When I add this to vagrant, it just seems to be a wrapper for my original AMI in Amazon:
$ vagrant box list
packer (aws, 0)
I was hoping to see something like this:
rhel66 (virtualbox, 0)
Can Packer convert my AMI into a VirtualBox image?

The post-processor in your example just gives you a Vagrant box for that image. The image was built on AWS, so no, it didn't change anything. To turn it into a VirtualBox image you'd have to convert it.
Per the docs, have you tried:
{
  "type": "virtualbox",
  "only": ["virtualbox-iso"],
  "artifact_type": "vagrant.box",
  "metadata": {
    "provider": "virtualbox",
    "version": "0.0.1"
  }
}
The above is untested. AWS provides some docs on exporting here.
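If the goal really is a local VirtualBox image, the usual route is AWS VM Export rather than a Packer post-processor. A minimal sketch, assuming the AMI is eligible for export (marketplace-derived images often are not) and that the export service role and S3 bucket already exist; the AMI ID and bucket name below are placeholders:
# export the AMI's disk to S3 as a VMDK (IDs and bucket are hypothetical)
aws ec2 export-image --image-id ami-0123456789abcdef0 --disk-image-format VMDK --s3-export-location S3Bucket=my-export-bucket,S3Prefix=exports/
The resulting VMDK in S3 can then be downloaded and attached to a VirtualBox VM (or wrapped into a .box) by hand.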

Related

startup scripts on Google cloud platform using Packer

I'm using HashiCorp's Packer to create machine images for Google Cloud (the equivalent of an AMI on Amazon). I want every instance to run a script once the instance is created in the cloud. As I understand from the Packer docs, I could use startup_script_file to do this. I got this working, but it seems the script is only run once, at image creation, resulting in the same output on every running instance. How can I trigger this script only on instance creation so that I can have different output for every instance?
packer config:
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "project-id",
    "source_image": "debian-9-stretch-v20200805",
    "ssh_username": "name",
    "zone": "europe-west4-a",
    "account_file": "secret-account-file.json",
    "startup_script_file": "link to file"
  }]
}
script:
#!/bin/bash
echo $((1 + RANDOM % 100)) > test.log #output of this remains the same on every created instance.
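A hedged note on the behaviour: Packer's startup_script_file runs while the image is being built, so whatever it writes is baked into the image; a script that should run when each instance is created is normally passed as startup-script metadata at instance-creation time instead. A rough sketch (instance name, image name, and script file are placeholders):
# run per-instance.sh on first boot of this instance, not at image-build time
gcloud compute instances create my-instance --image my-packer-built-image --zone europe-west4-a --metadata-from-file startup-script=per-instance.sh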

Incorrect image reference when launching Dataflow Flex templates

We are using Dataflow Flex Templates and following this guide (https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates) to stage and launch jobs. This is working in our environment. However, when I SSH onto the Dataflow VM and run docker ps, I see it is referencing a different Docker image than the one we specify in our template (underlined in green in the screenshot):
The template I am launching from is as follows and jobs are created using gcloud beta dataflow flex-template run:
{
  "image": "gcr.io/<MY PROJECT ID>/samples/dataflow/streaming-beam-sql:latest",
  "metadata": {
    "description": "An Apache Beam streaming pipeline that reads JSON encoded messages from Pub/Sub, uses Beam SQL to transform the message data, and writes the results to a BigQuery",
    "name": "Streaming Beam SQL",
    "parameters": [
      {
        "helpText": "Pub/Sub subscription to read from.",
        "label": "Pub/Sub input subscription.",
        "name": "inputSubscription",
        "regexes": [
          ".*"
        ]
      },
      {
        "helpText": "BigQuery table spec to write to, in the form 'project:dataset.table'.",
        "is_optional": true,
        "label": "BigQuery output table",
        "name": "outputTable",
        "regexes": [
          "[^:]+:[^.]+[.].+"
        ]
      }
    ]
  },
  "sdkInfo": {
    "language": "JAVA"
  }
}
So I would expect the output of docker ps to show gcr.io/<MY PROJECT ID>/samples/dataflow/streaming-beam-sql as the image on Dataflow. When I launch the image from GCR to run on a GCE instance I get the following output when running docker ps:
Should I expect to see the name of the image I have referenced in the Dataflow template on the Dataflow VM? Or have I missed a step somewhere?
Thanks!
TL;DR: You are looking at the worker VM instead of the launcher VM.
With Flex Templates, when you run the job, it first creates a launcher VM, where it pulls your container and runs it to generate the job graph. This VM is destroyed after that step is completed. Then the worker VM is started to actually run the generated job graph. On the worker VM there is no need for your container; your container is used only to generate the job graph based on the parameters passed.
In your case, you are searching for your image on the worker VM. The launcher VM is short-lived and its name starts with launcher-*********************. If you SSH into that VM and run docker ps, you will be able to see your container image.
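To actually catch it, a rough sketch (instance name and zone are placeholders, and the launcher VM only exists while the job is starting up):
# find the short-lived launcher VM, then check its running containers
gcloud compute instances list --filter="name~^launcher-"
gcloud compute ssh launcher-<id> --zone <your-zone> --command "docker ps"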

error launching simple web app in docker container on AWS Elastic Beanstalk

That is what I get when I follow the instructions at the official Docker tutorial here: tutorial link
I uploaded my Dockerrun.aws.json file and followed all other instructions.
The logs show nothing even when I click Request:
Does anyone have a clue as to what I need to do, i.e. why would not having a default VPC even matter here? I have only used my AWS account to set up Linux EC2 instances for a Deep Learning nanodegree at Udacity in the past (I briefly tried to set up a VPC just for practice, but I am sure I deleted/terminated everything once I found out it is not included in the free tier).
The author of the official tutorial forgot to add that you have to include the tag in the image name in the Dockerrun.aws.json file (edit it in gedit or another editor), as below, where :firsttry is the tag:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "hockeymonkey96/catnip:firsttry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
It works.
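As a hedged companion to the above, the tagged image would have been built and pushed roughly like this (repository and tag taken from the Dockerrun.aws.json above):
# build and push the image under the tag that Dockerrun.aws.json references
docker build -t hockeymonkey96/catnip:firsttry .
docker push hockeymonkey96/catnip:firsttry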

How to mount an EFS file system using Docker in Beanstalk

I am using application version label jenkins-mt-002.zip, which defines a Dockerfile as well as a Dockerrun.aws.json file. The JSON file contains only the volume mapping.
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/efs_mount/master/live",
      "ContainerDirectory": "/root/.jenkins"
    }
  ]
}
I am trying to map an EFS mount located at /efs_mount on the host system to /root/.jenkins inside the Docker container. I thought I had set it up correctly, but apparently not. Could someone take a look and let me know what I'm doing wrong?
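One hedged observation, since the rest of the environment isn't shown: the Volumes entry above only bind-mounts a host directory into the container; it does not mount EFS itself, so /efs_mount has to already be mounted on the host. A minimal sketch of that host-side mount (the file system ID and region are placeholders; this would typically be run from an .ebextensions command on the instance):
# mount the EFS file system on the Beanstalk host before Docker maps it in
sudo mkdir -p /efs_mount
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /efs_mount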

Example for 'aws ec2 import-image'

Try as I might, I cannot get the import-image task to work. I'm looking for a working example that I can reproduce, preferably starting with a "raw" disk image.
Most recent problems:
"Unsupported kernel version" when using an image that works fine when converted with the mouse instead of the API (posted to EC2 forum, no response: https://forums.aws.amazon.com/thread.jspa?threadID=221844)
"No valid partitions" when using a VirtualBox VMDK image that boots just fine in VirtualBox.
I ran into a similar issue when I tried importing FreeBSD OVAs. According to the prerequisites/checklist, Amazon does not yet support VM import of FreeBSD; that produces the "No valid partitions" error.
Also, LUKS-encrypted partitions produced the same error for me (Ubuntu).
For "Unsupported kernel version", here is my output of that same error:
c:\Users\XXXXX\Documents>aws ec2 describe-import-image-tasks --import-task-ids "import-ami-fgacu4yu"
{
  "ImportImageTasks": [
    {
      "Status": "deleted",
      "SnapshotDetails": [
        {
          "UserBucket": {
            "S3Bucket": "myautomationbucket",
            "S3Key": "ubuntu14.04-patched.ova"
          },
          "DiskImageSize": 843476480.0,
          "Format": "VMDK"
        }
      ],
      "Description": "Optimus Custom Ubuntu14.04",
      "StatusMessage": "ClientError: Unsupported kernel version 4.2.0-36-generic",
      "ImportTaskId": "import-ami-XXXXXXXX"
    }
  ]
}
AWS posted a list of known-good kernels; however, it is not very detailed for my favorite flavor, Ubuntu.
http://docs.amazonaws.cn/en_us/AWSEC2/latest/WindowsGuide/VMImportPrerequisites.html
So what I did was downgrade the kernel to one of the acceptable versions.
I found out what was "acceptable" by running this command against an existing, known-good instance in EC2:
c:\Users\XXXXXX\Documents>aws ec2 describe-instance-attribute --instance-id i-12345678 --attribute kernel --region us-east-1
{
  "InstanceId": "i-12345678",
  "KernelId": {
    "Value": "aki-825ea7eb"
  }
}
So this aki-825ea7eb is the supported kernel ID. That isn't very helpful by itself, so after some research I realized that AWS may only have a limited list of supported kernels due to the limitations of their platform -- they are not running ESXi, you know. ;)
I searched around and found this useful, and followed the instructions for 13.04: https://www.linode.com/docs/tools-reference/custom-kernels-distros/run-a-distributionsupplied-kernel-with-pvgrub
I performed steps 1, 2, 3, and 4, skipped steps 5, 6, 7, and 8... then performed step 9 and then step 15.
After applying those changes to my VM, repackaging the VM as an OVA, and running my vmimport again, it imported successfully.
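For reference, the import-image call itself looks roughly like this; the bucket name is the one from the output above, while the key and description are placeholders for the repackaged OVA, and the vmimport service role from the prerequisites doc is assumed to already exist:
aws ec2 import-image --description "Ubuntu 14.04, downgraded kernel" --disk-containers "Format=ova,UserBucket={S3Bucket=myautomationbucket,S3Key=repackaged.ova}"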
Hope this helps.