How to run a docker image from within a docker image? - django

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.

I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I didn't catch the causal relationship here. In fact, you just need to add two steps to your Django image:
Follow Install client binaries on Linux to download the prebuilt docker client binary and add it to your Django image; the image will then have the docker command available.
When starting the Django container, bind-mount /var/run/docker.sock. This allows the Django container to talk directly to the docker daemon on the host machine and start the data-analysis container on the host. Since the analysis container is not started inside the Django container, the two have separate system resources; in other words, the analysis container's resources do not depend on those of the Django container. A minimal sketch of both steps is shown below.
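For illustration, here is a rough sketch of the two steps; the docker release URL, image names, and data paths are assumptions you would adapt to your setup:
# In the Django image's Dockerfile: add the static docker client binary
# (assumes curl is available in the base image; pick the release you need)
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xzf - --strip-components=1 -C /usr/local/bin docker/docker

# Start the Django container with the host's docker socket bind-mounted
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-django-image

# The celery task can then shell out to something like this (hypothetical
# analysis image and paths); the analysis container runs on the host daemon
$ docker run --rm -v /host/data:/data my-analysis-image /usr/local/bin/analyze /data/input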
Samples with one docker image which already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
As you can see, although the initial container does not have access to the host's /dev folder, the container started from within it (via the host's docker daemon) really does have separate resources.
If the above is what you need, then this is the right solution for you. Otherwise, you will have to install the analysis tool directly in your Django image.

Related

'Could not SQLConnect' error when connecting dockerised model and dockerised database

I'm trying to connect a dockerised C++ application to a dockerised database so that I can get it running and get some output; the configuration can be found in this question.
When I try to run the model (which is inside the application container) against the dockerised database:
>docker run --net xxxxx-network -it xxxxxrun:localbase
root@xxxxxxxx:/run# isql xxx.x.x.x user=root
[ISQL]ERROR: Could not SQLConnect
I'm new to ODBC and docker; can someone give me a hint? Many thanks.
I am assuming that you're running each docker container separately.
In this case, in order for your C++ application container to be able to connect to the MySQL container, they will need to be on the same network.
Create a Docker network: docker network create mysql-network
Run the C++ application container like so: docker run -it --network mysql-network xxxxxrun:localbase (xxxxxrun should be the name of the image and localbase the image tag that you want to run)
Run the MySQL database with a command similar to docker run --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
In this situation the two containers should be able to communicate freely with each other across the network.
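Put together, and giving each container an explicit name so the application can reach the database by that name on the shared network (the container name and password here are illustrative):
$ docker network create mysql-network
$ docker run --network mysql-network --name mysql-db -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
$ docker run -it --network mysql-network xxxxxrun:localbase
# inside the application container, the hostname mysql-db now resolves to the
# database container, so point your ODBC server/host setting at mysql-db
# rather than at an IP address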

Cannot launch interactive session in Windows IIS Docker container

I'm using the AWS "Windows Server 2016 Base with Containers" image (ami-5e6bce3e).
Using docker info I can confirm I have the latest (Server Version: 1.12.2-cs-ws-beta).
From Powershell (running as Admin) I can successfully run the "microsoft/windowsservercore" container in interactive mode, connecting to CMD in the container:
docker run -it microsoft/windowsservercore cmd
When I attempt to run the "microsoft/iis" container in interactive mode, although I am able to connect to IIS (via browser), I am never connected to the interactive CMD session in the container.
docker run -it -p 80:80 microsoft/iis cmd
Instead, I simply get:
Service 'w3svc' started
Using another Powershell window, I can:
docker container ls
...and see my container running.
Attempting to attach locks up and never returns.
I have since switched regions and found that there are different AMIs in each region:
us-east-1: ami-d08edfc7
us-west-2: ami-5e6bce3e
...both of these have the same result.
Relevant links used:
AWS announcement and simple Docker example
MSDN simple Docker example
MSDN IIS Docker example
Update
Using the following link I was able to create my own Dockerfile based on the server base image with IIS installed, and this seems to work fine.
custom Dockerfile
This is not an issue with the AWS AMIs; it was due to the way the Microsoft IIS Dockerfile is written (and to my being new to Docker).
Link to Microsoft's IIS DockerFile
The last line (line 7):
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
Difference between CMD and ENTRYPOINT
So since this Dockerfile uses ENTRYPOINT, to launch an interactive powershell session, use the following command:
docker run --entrypoint powershell -it -p 80:80 microsoft/iis
Note that the --entrypoint flag needs to come before the image name (right after run); anything placed after the image name is passed as an argument to the entrypoint instead, so this won't work:
docker run -it -p 80:80 microsoft/iis --entrypoint powershell
Here is another reference link regarding ENTRYPOINT and CMD differences
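As an aside, if the container is already running with its default entrypoint, you can also open an interactive shell inside it with docker exec instead of attaching (the container ID below is whatever docker container ls shows):
docker exec -it <container-id> powershell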

Need to take image of docker image or container from application installed machine in AWS

As I am working with docker, I need help taking a container or image from an existing AWS box. On my AWS box our application is installed and initialized.
Our application initialization takes quite some time, so I want to deploy this container (with the application already installed) at box launch time. As I understand it, a docker image taken at that point will already have my application initialized, so I can save the initialization time.
I am launching the machine through Ansible in an AWS VPC, so I can start the docker container there.
Can anyone help me with how to do this?
With Thanks,
Ezhilmurugan M I
If you docker commit your changes into an image with a tag, you can then push to a registry, and then pull down the images on another server.
$ docker commit <hash or name> yourusername/red_panda
$ docker push yourusername/red_panda
On other host
$ docker pull yourusername/red_panda
You could also export a container's filesystem, transfer the archive however you want, and then import it as an image on the new server.
$ docker export red_panda > latest.tar
$ cat latest.tar | docker import - exampleimagelocal:new
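Note that docker export/import work on a container's filesystem and drop the image's history and metadata; to move a complete image (for example the one produced by docker commit above), docker save and docker load may be the better fit:
$ docker save yourusername/red_panda > red_panda.tar
# copy red_panda.tar to the other host, then:
$ docker load < red_panda.tar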

AWS: CodeDeploy for a Docker Compose project?

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy since Travis has builtin support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is actually not what is needed here. My current status right now is that I can get Docker started with sudo service docker start but I cannot get docker-compose up to be successful. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to do docker-compose up manually when ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an After Install script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This is to explicitly start running the Docker daemon--without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution for the Amazon Linux AMI). The After Install script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
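For illustration, an AfterInstall script along those lines might look roughly like the following; the package name, Compose version, and paths are assumptions for the Amazon Linux AMI and should be adapted:
# after_install.sh (hypothetical name), declared with runas: root in appspec.yml
yum install -y docker
service docker start                # start the daemon explicitly
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose    # keep the binary in /opt/bin, not /usr/local/bin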
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin, stdout, and stderr redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
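Putting those pieces together, the ApplicationStart script can be as small as the following (the file name and project path are hypothetical); the key point is that it exits promptly so CodeDeploy does not sit waiting until the timeout:
#!/bin/bash
# application_start.sh (hypothetical), referenced from the ApplicationStart hook
cd /path/to/your/project
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
exit 0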
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.

Sharing directories in a Docker container both with a Dockerfile and after the container is running

Sharing data between a running docker container and my host (on AWS) seems overly complicated. From the docker documentation it seems as if I need to specify volumes when I start the container.
I found this: https://github.com/synack/docker-rsync
But this watches the host folder recursively and copies only from the host machine to the docker container.
I'm looking for a way to create (preferably in a Dockerfile) a folder visible on my host machine on AWS, so that I can scp files into that folder and have them visible in my docker container. I also want my docker container to be able to write to that folder, so that if the container is stopped I won't lose those files.
As a side note I already declared in my Dockerfile to
VOLUME /Training-master
but I don't know how to access it from my machine and when I stopped the container I lost the data.
Does anyone know how to do this or can they point me in the right direction?
What you are looking for is provided by docker's run-time options, documented here: http://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
At the end of that page, it's clearly mentioned:
Note: The host directory is, by its nature, host-dependent.
For this reason, you can’t mount a host directory from Dockerfile
because built images should be portable. A host directory wouldn’t
be available on all potential hosts.
As Raghav said, a shared host directory cannot be created from a Dockerfile, because of image portability.
But after you create the image you can run this command, and it will create a shared folder between the host and the container. Be careful, because the mount will hide (shadow) a directory in the docker container if it has the same name as an existing folder (a usage sketch follows the breakdown below):
$ sudo docker run -itd -v /home/ubuntu/Sharing:/Share dockeruser/imageID:version bash
/home/ubuntu/Sharing -- path to the sharing folder on the host computer
/Share -- path to the sharing folder inside the container
dockeruser/imageID:version -- the image (and tag) to run
-v -- specifies the host path:container path volume to mount
-d -- runs the container detached, in the background
bash -- the command for the container to execute
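For example, with the container started as above, a file copied into the host folder is immediately visible inside the container, and files the container writes under /Share survive in /home/ubuntu/Sharing after it stops (the host name and file name here are illustrative):
# from your local machine: copy a file into the shared folder on the AWS host
$ scp data.csv ubuntu@<aws-host>:/home/ubuntu/Sharing/
# on the AWS host: list the shared folder from inside the running container
$ docker exec <container-id> ls /Share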
Just for reference for Windows users:
1) You can mount a host folder into a container by
docker run -ti -v C:\local_folder\:c:\container_folder container1
2) Alternatively, you can create a volume:
docker volume create --name temp_volume
See the absolute path of the volume by:
docker volume inspect temp_volume
The mountpoint is the absolute path of the volume. You can add/remove files from that path. Then you can mount it to the container by:
docker run -ti -v temp_volume:c:\tmploc container1
Notice that both host and container are Windows machines.