How can I convert a Docker image into a (vagrant) VirtualBox box?

I've been looking at Packer.io, and would love to use it to provision/prepare the vagrant (VirtualBox) boxes used by our developers.
I know I could build the boxes with VirtualBox using the VirtualBox Packer builder, but I find Docker's layer caching makes iterating on the boxes much faster.
How do I produce the image with a Dockerfile and then export it as a Vagrant box?

Find the size of the docker image from docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mybuntu 1.01 7c142857o35 2 weeks ago 1.94 GB
Run a container based on the image docker run mybuntu:1.01
Create a QEMU image from the container.
Use the size of the image from the first step for the seek value (seek=IMAGE_SIZE, here 2G), and retrieve the appropriate container ID for the docker export command from docker ps -a:
dd if=/dev/zero of=mybuntu.img bs=1 count=0 seek=2G
mkfs.ext2 -F mybuntu.img
sudo mount -o loop mybuntu.img /mnt
docker export <CONTAINER-ID> | sudo tar x -C /mnt
sudo umount /mnt
Use qemu-utils to convert to vmdk
sudo apt-get install qemu-utils
qemu-img convert -f raw -O vmdk mybuntu.img mybuntu.vmdk
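Optionally (this check is not part of the original steps), you can sanity-check the result before importing it; qemu-img info ships in the same qemu-utils package:
qemu-img info mybuntu.vmdk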
More info on the formats available for conversion can be found in the qemu-img documentation.
Now you can import the vmdk file into VirtualBox.
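A rough sketch of that last step from the command line (the VM name mybuntu-vm, the memory size, and the box file name are arbitrary choices, and the exported root filesystem still needs a kernel/bootloader installed before the VM will actually boot):
VBoxManage createvm --name mybuntu-vm --ostype Ubuntu_64 --register
VBoxManage modifyvm mybuntu-vm --memory 2048
VBoxManage storagectl mybuntu-vm --name SATA --add sata --controller IntelAhci
VBoxManage storageattach mybuntu-vm --storagectl SATA --port 0 --device 0 --type hdd --medium mybuntu.vmdk
vagrant package --base mybuntu-vm --output mybuntu.box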

Provided that your target is VirtualBox, it would probably be better to use Vagrant for the whole process.
Vagrant ships with a Docker provisioner that can automatically install Docker on the VM and build a Dockerfile:
Vagrant.configure("2") do |config|
  config.vm.provision "docker" do |d|
    d.build_image "/vagrant/app"
  end
end
Once your image is built, you can produce a vagrant box using the vagrant package command.
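For example (the box and file names below are only illustrative):
vagrant package --output mybuntu.box
vagrant box add --name mybuntu mybuntu.box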

This is the route I'm going to try:
docker export to get a tar ball,
then create a VMDK using the qemu-* tools and steps as outlined here: https://superuser.com/a/482127/59809
This will allow me to set up/provision the machine using Docker, and then run it in VirtualBox controlled via Vagrant.

Related

Can't get CloudWatchAgent to start on Ubuntu18.04 in Docker build for AWS Batch job

I'm trying to build an image for use on EC2 instances in an AWS Batch job. I need to use Ubuntu 18.04 because the goal is to run some Fortran software that I can only get to compile on Ubuntu 18.04. I have the Fortran software and some Python scripts running well on a manually started Ubuntu 18.04 EC2 instance.
Now I'm trying to build an image with Docker (that I'll eventually apply to 100s or 1000s of EC2 instances)... but I have to get CloudWatchAgent (CWA) installed and started, and I can't get CWA to start during the Docker build. CWA starts and runs fine in my manual EC2 development instance (Ubuntu 18.04). I initially had problems with CWA in my manual instance because CWA uses systemctl, so I had to manually install systemd, and that worked after a reboot. But I'm not able to replicate this in my Docker build, and always get the error:
System has not been booted with systemd as init system (PID 1). Can't operate.
unknown init system
The command '/bin/sh -c sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json' returned a non-zero code: 1
I tried starting with an Ubuntu 18.04 image that is supposed to have systemd already installed, and tried rebooting my EC2 instance; same error. Here's the source: https://hub.docker.com/r/jrei/systemd-ubuntu
I looked for other ideas, e.g.: Docker System has not been booted with systemd as init system
... but couldn't figure out how to make it work in a Docker build.
So,
am I using the Ubuntu 18.04 image (that has systemd) in my build wrong? How should it be used in a Docker build?
is there another way to start CloudWatchAgent on Ubuntu 18.04 that gets around the systemd problem?
would it work / is there a way to restart the operating system inside the Docker container during the docker build stage?
am I stuck and will I have to try to recompile everything on a different Ubuntu or an AMI like Amazon Linux?
Or is there something else I'm missing?
Here's my Docker file:
#version with systemd already installed
FROM jrei/systemd-ubuntu@sha256:1b65424e0ec4f6772576b55c49e1470ba506504d1033e9da5795785b1d6a4d88 as ubuntu-base
RUN apt-get update && apt-get install -y \
sudo \
wget \
python3-pip
RUN sudo apt-get -y install libgfortran3
RUN sudo pip3 install boto3
RUN wget https://s3.us-east-2.amazonaws.com/amazoncloudwatch-agent-us-east-2/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
RUN sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
COPY . .
RUN cp amazon-cloudwatch-agent.json /opt/aws/amazon-cloudwatch-agent/etc/
ENV ECS_AVAILABLE_LOGGING_DRIVERS = awslogs
RUN sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json
RUN mkdir -p cpseqlogs
CMD python3 cpsequence.py
Thanks for any suggestions, ideas, or tips (I'm fairly new to Docker, but not totally green on Linux).

How can I use docker volume with ubuntu image to access a downloaded file from AWS S3?

I want to copy a file from AWS S3 to a local directory through a docker container.
This copying command is easy without docker, I can see the file downloaded in the current directory.
But the problem with Docker is that I don't even know how to access the file.
Here is my Dockerfile:
FROM ubuntu
WORKDIR "/Users/ezzeldin/s3docker-test"
RUN apt-get update
RUN apt-get install -y awscli
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
CMD [ "aws", "s3", "cp", "s3://ezz-test/s3-test.py", "." ]
The current working folder where I should see the downloaded file is s3docker-test/. This is what I'm doing after building the Dockerfile, to mount a volume myvol to the local directory:
docker run -d --name devtest3 -v $PWD:/var/lib/docker/volumes/myvol/_data ubuntu
So after running the image I get this:
download: s3://ezz-test/s3-test.py to ./s3-test.py
which shows that the file s3-test.py is already downloaded, but when I run ls in the interactive terminal I can't see it. So how can I access that file?
It looks like you are overriding the container's folder with your empty host folder when you run -v $PWD:/var/lib/docker/volumes/myvol/_data.
Try simply copying the files from the container to the host filesystem by running:
docker cp \
<containerId>:/Users/ezzeldin/s3docker-test/s3-test.py \
/host/path/target/s3-test.py
You can run this command even on a stopped container. But first you will have to run the container without the folder override:
docker run -d --name devtest3 ubuntu
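Alternatively (a sketch, not part of the answer above; the image name s3docker-test and the credential pass-through are assumptions), you can bind-mount the host directory directly onto the container's working directory so the downloaded file lands on the host:
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "$PWD":/Users/ezzeldin/s3docker-test \
  s3docker-test
After the container exits, s3-test.py should be sitting in the current host directory.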

How to upgrade to python3.6 in a deep learning vm image in google-cloud compute engine?

I'm using my workstation for deep learning and planning to move to a GCP Compute Engine workstation. GCP has custom deep learning images in its marketplace with easy JupyterLab integration, but those images are based on Python 3.5 and my code is based on Python 3.6, so I'm getting some runtime errors. I tried upgrading to Python 3.6 in the PyTorch deep learning VM image but was not successful. Can somebody guide me on how to install/upgrade to Python 3.6 in a GCP deep learning image?
According to the Deep Learning VM images documentation, they use Debian 9 "Stretch".
So all you have to do is follow this document: How to Install Python 3.6.4 on Debian 9.
To use the testing package repository, begin by editing the /etc/apt/sources.list file with your favorite editor (we'll use nano) and add the line below at the bottom of the file:
# sudo nano /etc/apt/sources.list
deb http://ftp.de.debian.org/debian testing main
Then execute the following command to make the ‘stable’ repository default on your server:
# echo 'APT::Default-Release "stable";' | sudo tee -a /etc/apt/apt.conf.d/00local
Now update the package list:
# sudo apt-get update
And install Python 3.6.4 from the Debian ‘testing’ repository using the following command:
# sudo apt-get -t testing install python3.6
If everything went well, run the following command to open the Python 3.6.4 interpreter:
# python3.6
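As an optional follow-up (not part of the original answer; the python3.6-venv package name and the environment path are assumptions), you can keep the system Python untouched and run your code from a Python 3.6 virtual environment:
# sudo apt-get -t testing install python3.6-venv
# python3.6 -m venv ~/py36env
# source ~/py36env/bin/activate
# python --version    (should report Python 3.6.x)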

How do I run docker on AWS?

I have an aws code pipeline which currently successfully deploys code to my EC2 instances.
I have a Docker image that has the necessary setup to run my code; the Dockerfile is provided below. When I run docker run -t it just loads up an interactive shell in my container but then hangs on any command (e.g. ls).
Any advice?
FROM continuumio/anaconda2
RUN apt-get install git
ENV PYTHONPATH /app/phdcode/panaxeaA1
# setting up venv
RUN conda create --name panaxea -y
RUN /bin/bash -c "source activate panaxea"
# Installing necessary packages
RUN conda install -c guyer pysparse
RUN conda install -c conda-forge pympler
RUN pip install pysparse
RUN git clone https://github.com/usnistgov/fipy.git
RUN cd fipy && python setup.py install
RUN cd ~
WORKDIR /app
COPY . /app
RUN cd panaxeaA1/models/alpha04c/launchers
RUN echo "launching..."
CMD python launcher_260818_aws.py
docker run -t simply starts a Docker container with a pseudo-tty connection to the container's stdin. However, just running this command does not establish an interactive shell to the container. You need that in order to run commands within your container.
You need to also append the -i command line flag along with the shell you wish to use. For example, docker run -it IMAGE_NAME bash will launch a container from the image you provide using bash as your interactive shell. You can then run Bash commands as you normally would.
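For instance (the image name panaxea-app is a placeholder for whatever you tagged your build as):
docker run -t panaxea-app /bin/bash     # a tty is allocated, but stdin is not attached, so the shell hangs
docker run -it panaxea-app /bin/bash    # interactive shell; commands like ls work as expected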
If you are looking for a simple way to run containers on EC2 instances in AWS, I highly recommend AWS EC2 Container Service (ECS) as an option. It is a very simple service for running containers that abstracts and manages much of the server level work involved in running containers.
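A minimal sketch of that route, assuming the image has already been pushed to a registry and a task definition JSON has been written (the cluster and task names here are made up):
aws ecs create-cluster --cluster-name panaxea-cluster
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs run-task --cluster panaxea-cluster --task-definition panaxea-task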

docker info command doesn't show anything on an EC2 instance

I have installed Docker using sudo yum install -y docker and started the Docker service by running the following commands. Initially it worked and I was able to run Docker containers. Now the Docker daemon is running, but when I run docker commands like docker ps, docker info, etc., nothing is shown on stdout.
I have uninstalled that Docker version using sudo yum remove docker, removed all the files manually, and installed a new one, but it's still the same issue.
Here is the link that I have followed to install docker in EC2 instance.
https://aws.amazon.com/blogs/devops/set-up-a-build-pipeline-with-jenkins-and-amazon-ecs/
Docker version
1.12.6, build 7392c3b/1.12.6
uname -a
Linux ip adress 4.4.41-36.55.amzn1.x86_64 #1 SMP Wed Jan 18 01:03:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
I was not able to figure out what went wrong. Could you please help me debug this issue?
Thank you in advance.
As I understand from what you said and from going through the link you mentioned, you have given docker command capabilities to the user jenkins, which you did using:
usermod -a -G docker jenkins
So in order to run docker-related commands you should log in as the user jenkins. You can use the following command to log in as the user jenkins:
sudo -su jenkins
From there you should be able to run the docker commands as expected.
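As an additional sanity check (not part of the answer above; the commands assume Amazon Linux), you can confirm the setup before switching users:
id -nG jenkins              # should include "docker"
sudo service docker status  # verify the Docker daemon is actually running
sudo -su jenkins
docker info                 # should now print daemon details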
PS - Follow the steps again to install docker.
Hope this helps.