Is supervisord needed for docker+gunicorn+nginx? - django

I'm running django with gunicorn inside docker, my entry point for docker is:
CMD ["gunicorn", "myapp.wsgi"]
Assuming there is already a process that runs the container when the system starts and restarts it when it stops, do I even need supervisord? If gunicorn crashes, won't that crash the container, which will then be restarted anyway?

The only time you need something like supervisord (or any other process supervisor) in a Docker container is when you need to start multiple independent processes inside the container when it starts.
For example, if you needed to start both nginx and gunicorn in the same container, you would need to investigate some sort of process supervisor. However, a much more common solution would be to place these two services in two separate containers. A tool like docker-compose helps manage multi-container applications.
If a container exits because the main process exits, Docker will restart that container if you configured a restart policy when you first started it (e.g., via docker run --restart=always ...).
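For illustration, a minimal docker-compose.yml sketch of that two-container layout (the service names, port, and proxy setup are assumptions, not from the question) could look like:
version: "3"
services:
  web:
    build: .                      # image containing Django + gunicorn
    command: gunicorn myapp.wsgi --bind 0.0.0.0:8000
    restart: always               # Docker restarts the service if gunicorn crashes
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - web
    restart: always
nginx would still need a config that proxies to web:8000; the point is simply that each process gets its own container and its own restart policy.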

The simple answer is no. And yes, you can start both nginx and gunicorn in the same container. You can either create a script that starts everything your container needs and launch it with CMD at the end of your Dockerfile, or you can combine everything like so:
CMD (cd /usr/src/app && \
nginx && \
gunicorn wsgi:application --config ../configs/gunicorn.conf)
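If you go the script route instead, a minimal sketch (the paths and config file follow the example above; the script name start.sh is an assumption) could be:
#!/bin/sh
# start.sh -- launch nginx (which daemonizes by default), then keep gunicorn in the foreground
set -e
cd /usr/src/app
nginx
exec gunicorn wsgi:application --config ../configs/gunicorn.conf
and in the Dockerfile:
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
Note that with this approach Docker only monitors the foreground process (gunicorn here); if nginx dies, the container keeps running.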
Hope that helps!

Related

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't follow the causal relationship here. In fact, you just need two additions:
Follow "Install client binaries on Linux" to download the prebuilt docker client binary; your Django image will then have the docker command.
When starting the Django container, add a bind mount for /var/run/docker.sock. This lets the Django container talk directly to the docker daemon on the host machine and start the data-analysis container on the host. Because the analysis container does not start inside the Django container, the two have separate system resources; in other words, the analysis container's resources are not limited by those of the Django container.
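A rough sketch of those two additions (the docker version in the URL and the image name my-django-image are placeholders; take the exact download URL from the linked install page, and curl is assumed to be available in the base image):
# In the Django image's Dockerfile: install just the docker CLI, not the daemon
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar -xzf - --strip-components=1 -C /usr/local/bin docker/docker

# When starting the Django container, hand it the host's docker socket
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-django-image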
A sample using a docker image that already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
As you can see, although the first container does not have access to the host's /dev folder, the container launched from inside it does, so the two really do have separate resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.
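Once the client binary and the socket mount are in place, the celery task can shell out roughly as the question describes; a sketch (the task name, image name, and arguments are placeholders):
import subprocess

from celery import shared_task

@shared_task
def run_analysis(input_path):
    # Talks to the host daemon through the mounted /var/run/docker.sock,
    # so the analysis container runs on the host, not inside the Django container.
    subprocess.run(
        ['docker', 'run', '--rm', '-v', f'{input_path}:/data', 'analysis-image'],
        check=True,
    )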

Google Cloud container orchestration and cleanup

In short, I am looking to see if it is possible to run multiple Docker containers on the same machine via gcloud's create-with-container functionality (or similar). The idea is that there will be some "worker" container (which does some arbitrary work) which runs and completes, followed by a "cleanup" container which subsequently runs performing the same task each time.
Longer explanation:
I currently have an application that launches tasks that run inside Docker containers on Google Cloud. I use gcloud beta compute instances create-with-container <...other args...> to launch the VM, which runs the specified container. I will call that the "worker" container, and the tasks it performs are not relevant to my question. However, regardless of the "worker" container, I would like to run a second, "cleanup" container upon the completion of the first. In this way, developers can independently write Docker containers that do not have to "repeat" the work done by the "cleanup" container.
Side note:
I know that I could alternatively specify a startup script (e.g. a bash script) which starts the docker containers as I describe above. However, when I first tried that, I kept running into issues where the docker pull <image> command would time out or fail for some reason when communicating with dockerhub. The gcloud beta compute instances create-with-container <...args...> command seemed to have error handling/retries built in, which seemed ideal. Does anyone have a working snippet that would provide relatively robust error handling in the startup script?
As far as I know the limitation is one container per VM instance. See limitations.
Answer: It is currently not possible to launch multiple containers with the create-with-container functionality.
Alternative: You mentioned that you have already tried launching your containers with a startup script. Another option would be to specify a cloud-init config through instance metadata. Cloud-init is built into Container-Optimized OS (the same OS that you would use with create-with-container).
It works by adding and starting a systemd service, which means that you can:
specify that your service should run after other services: network-online.target and docker.socket
specify a Restart policy for the service to do retries on failure,
add an ExecStopPost specification to run your cleanup (or add a separate service for that in the cloud-init config)
This is a snippet that could be a starting point (you would need to add it under the user-data metadata key):
#cloud-config

users:
- name: cloudservice
  uid: 2000

write_files:
- path: /etc/systemd/system/cloudservice.service
  permissions: 0644
  owner: root
  content: |
    [Unit]
    Description=Start a simple docker container
    Wants=network-online.target docker.socket
    After=network-online.target docker.socket

    [Service]
    ExecStart=/usr/bin/docker run --rm -u 2000 --name=cloudservice busybox:latest /bin/sleep 180
    ExecStopPost=/bin/echo Finished!
    Restart=on-failure
    RestartSec=30

runcmd:
- systemctl daemon-reload
- systemctl start cloudservice.service
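To attach the config to an instance, something along these lines should work (the instance name, zone, and file name are placeholders):
gcloud compute instances create my-instance \
    --image-family cos-stable \
    --image-project cos-cloud \
    --zone us-central1-a \
    --metadata-from-file user-data=cloud-init.yml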

docker restart container failed: "already in use", but there's no more docker image

I first got my nginx docker image:
docker pull nginx
Then I started it:
docker run -d -p 80:80 --name webserver nginx
Then I stopped it:
docker stop webserver
Then I tried to restart it:
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name..
See 'docker run --help'.
Well, it's an error. But in fact there's nothing in the container list now:
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Why did restarting the nginx container fail? How can I fix it?
It is because:
you have used the --name switch, and
the container is stopped but not removed.
You can find the stopped container with:
docker ps -a
You can simply start it using below command:
docker start webserver
EDIT: Alternatives
If you want to start the container with the below command each time,
docker run -d -p 80:80 --name webserver nginx
then use one of the following:
Method 1: use the --rm switch, i.e., the container is removed automatically as soon as it is stopped:
docker run -d -p 80:80 --rm --name webserver nginx
Method 2: stop and remove the container explicitly before running the command you are currently using:
docker stop <container name>
docker rm <container name>
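As a shorthand (not part of the steps above, but standard docker), you can also stop and remove in one step:
docker rm -f <container name>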
As the error says.
You have to remove (or rename) that container to be able to reuse that name
This leaves you two options.
You may delete the container that is using the name "webserver" using the command
docker rm 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74
and retry.
Or you may use a different name in the run command. This is not recommended, as you no longer need that old container.
It's better to remove the unwanted container and reuse the name.
While the great answers are correct, they didn't actually solve the problem I was facing.
How To:
Safely automate starting a named docker container regardless of its prior state
The solution is to wrap the docker run command with an additional check and either do a run or a stop + run (effectively restart with the new image) based on the result.
This achieves both of my goals:
Avoids the error
Allows me to periodically update the image (say new build) and restart safely
#!/bin/bash
# Adapt the following 3 parameters to your specific case
NAME=myname
IMAGE=myimage
RUN_OPTIONS='-d -p 8080:80'

ContainerID="$(docker ps --filter name="$NAME" -q)"
if [[ ! -z "$ContainerID" ]]; then
    echo "$NAME already running as container $ContainerID: stopping ..."
    docker stop "$ContainerID"
fi

echo "Starting $NAME ..."
exec docker run --rm --name "$NAME" $RUN_OPTIONS "$IMAGE"
Now I can run (or stop + start if already running) the $NAME docker container in an idempotent way, without worrying about this possible failure.

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser and also from the Docker command line via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure if one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
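For context, a minimal sketch of where those keys sit in docker-compose.yml (the ember service name matches the commands in the question; the build context, command, and port are assumptions):
services:
  ember:
    build: .
    command: ember server
    ports:
      - "4200:4200"
    stdin_open: True
    tty: True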
OK, finally resolved it. The issue with module resolution may have been long file name resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully and then I was able to access the Ember page served up at the IP:Port specified earlier in the comments
http://192.168.99.100:4200/

AWS: CodeDeploy for a Docker Compose project?

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy since Travis has builtin support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is actually not what is needed here. My current status right now is that I can get Docker started with sudo service docker start but I cannot get docker-compose up to be successful. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to do docker-compose up manually when ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an After Install script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This is to explicitly start running the Docker daemon--without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution for the Amazon Linux AMI). The After Install script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
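A rough sketch of such an After Install script (the compose version is only an example; the script name and paths are placeholders):
#!/bin/bash
# scripts/after_install.sh -- runs as root via runas: root in appspec.yml
set -e

# Install and start the Docker daemon on the Amazon Linux AMI
yum install -y docker
service docker start

# Install Docker Compose into /opt/bin rather than /usr/local/bin
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
    -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose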
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin, stdout, and stderr redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
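Putting the hooks together, a sketch of the relevant parts of appspec.yml (the file destination and script names are placeholders):
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_containers.sh
      timeout: 300
      runas: root
Here scripts/start_containers.sh would contain the backgrounded /opt/bin/docker-compose command shown above.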
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.