Dockerfile Customization for Google Cloud Run with Nginx and uWSGI - django

I'm trying to migrate a Django application from Google Kubernetes Engine to Google Cloud Run, which is fully managed. Basically, with Cloud Run, you containerize your application with a single Dockerfile and Google does the rest.
I have a Dockerfile which at one point calls a bash script via ENTRYPOINT.
But I need to start both Nginx and Gunicorn. The Google Cloud Run documentation suggests starting Gunicorn like this:
CMD gunicorn -b :$PORT foo.wsgi
(let's say my Django app is named "foo")
But I also need to start Nginx via:
CMD ["nginx", "-g", "daemon off;"]
And since only one CMD is allowed per Dockerfile, I'm not sure how to combine them.
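One workaround I've seen suggested is to keep a single CMD that runs a small wrapper script which launches both processes. A sketch of what I have in mind (start.sh and the port number are my own placeholders, and the Nginx config would have to listen on $PORT and proxy to Gunicorn):
#!/usr/bin/env bash
# start.sh -- run Gunicorn in the background, keep Nginx in the foreground
gunicorn -b 127.0.0.1:8000 foo.wsgi &
# Nginx must stay in the foreground so the container keeps running
exec nginx -g 'daemon off;'
with the Dockerfile ending in:
CMD ["./start.sh"]
But I don't know whether that is considered sound practice on Cloud Run.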
To try to get around some of these difficulties, I was looking into building from a base image that already has both working, and I came across this one:
https://github.com/tiangolo/meinheld-gunicorn-docker
But the paths don't quite match mine. Quoting from the documentation of that repo:
You don't need to clone the GitHub repo. You can use this image as a base image for other images, using this in your Dockerfile:
FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app
It will expect a file at /app/app/main.py. And will expect it to contain a variable app with your "WSGI" application.
My wsgi.py file ends up at /app/foo/foo/wsgi.py and contains an application named application.
If I understand that documentation correctly, when it says it expects the WSGI application to be named app and to live at /app/app/main.py, it's saying I need to tell the image that my application is instead called application and, rather than being at /app/app/main.py, is found at /app/foo/foo/wsgi.py.
I assume I can fix the app vs. application variable name by adding a line like app = application to my wsgi.py file, but I'm not sure how to correct the path that the image expects.
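If I'm reading that repo's README right, it also supports MODULE_NAME and VARIABLE_NAME environment variables, so I'm guessing a sketch like this might be what's needed (the dotted module path is my own guess based on where the file lands):
FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app
# hypothetical overrides: wsgi.py ends up at /app/foo/foo/wsgi.py,
# and the callable inside it is named application
ENV MODULE_NAME=foo.foo.wsgi
ENV VARIABLE_NAME=application
But I haven't been able to confirm this.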
Can someone explain to me how to adapt this to my needs?
(Or any other way to get it working)

Related

How does a django app start its virtualenv?

I'm trying to understand how virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it I understand I need to invoke it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global python?
More detail: It's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe the Django part is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for Django.
BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling Python in a virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you must call it as /var/webapps/yoursite/env/bin/python.
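For example (using the path above; the manage.py commands are just an illustration):
/var/webapps/yoursite/env/bin/python manage.py check
/var/webapps/yoursite/env/bin/pip install -r requirements.txt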
If you run just Django behind a reverse proxy, Django will use whatever Python environment the user that started the server had, i.e. whichever python command is on that user's PATH. If you're using a management tool like Gunicorn, you can specify which environment to use in its config, although Gunicorn itself expects you to activate the virtual environment before invoking it.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx
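Alternatively, a minimal sketch of invoking Gunicorn straight from the venv, so that no activation step is needed at all (the bind address and module path are placeholders):
/var/webapps/yoursite/env/bin/gunicorn --bind 127.0.0.1:8000 yoursite.wsgi:application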

Can't activate django python environment in docker django application

I can't understand how to run a django command in a docker application.
I am trying to run a command which would normally work:
source project/bin/activate
Which results in :
-bash: project/bin/activate: No such file or directory
The command would work in a non-docker django app for sure. I also tried:
docker-compose run web source project/bin/activate
docker-compose up source project/bin/activate
What is the right command, then?
Have you tried giving the absolute path to the activate file? Something like this:
~/workspace/project/bin/activate
The above might actually work.
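One more thing worth checking: source is a shell builtin, so even with the right path it has to run inside a shell rather than as a bare compose command. A sketch, assuming the venv really does exist at project/bin inside the image:
docker-compose run web bash -c "source project/bin/activate && python manage.py runserver"
Also note that many Docker images install packages into the system Python rather than into a virtualenv, in which case there is simply no activate file to source.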

Run django migrate in docker

I am building a Python+Django development environment using docker. I defined Dockerfile files and services in docker-compose.yml for web server (nginx) and database (postgres) containers, and a container that will run our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system, so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want them to run once when I create the containers and never again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one-time command containers will be supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend running docker exec manually after starting the container. I do not think there is a way to automate this inside a Dockerfile or docker-compose, for the two reasons given above.
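For example (the container name is whatever docker ps shows for your app container):
docker exec -it myproject_web_1 python manage.py migrate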
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved now by the new service profiles introduced with docker-compose 1.28.0. With profiles you can mark services to be only started in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"] # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
docker exec -it container-name bash
Then you will be inside the container, and you can run any command you would normally use in development, as if you were not using docker.

create dockerfile to build new images

FROM denmarkcontrevida/base:15.05
MAINTAINER Denmark Contrevida<denmarkcontrevida#esutek.com>
# Config files
# Config pyenv
# Config Nginx
# Config PostgreSQL
# Create DB & Restore database
This image will install the newest versions of:
PostgreSQL
Nginx
Pyenv
Django
Python 3
If you are about to install that many different services, make sure to start from a base image made to manage them.
Use phusion/baseimage-docker (which always starts the my_init script, to take care of zombie processes).
In that image, you can define multiple programs (daemons) to run:
You only have to write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>.
If your base image has an /etc/service/helper/run script, then any image based on it would run helper, plus any other /etc/service/xxx/run scripts of your own: replace xxx with your running services, like nginx, django or postgresql.
You wouldn't need one for python3 (which is simply invoked, and doesn't run in the background).
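As a sketch, the nginx one could look like this (layout per the baseimage docs quoted above; remember to make the script executable):
#!/bin/sh
# /etc/service/nginx/run -- runit keeps this in the foreground and restarts it if it crashes
exec nginx -g "daemon off;"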

Docker and Django manage.py

I'm trying to find a workflow with Docker and Django. Currently, I'm using the basic configuration from the docker documentation.
I'd like to use manage.py startapp directly from the container to start a new app using:
docker-compose run web ./manage.py startapp myapp
But all the files created in the volume are owned by the root user and not by myself, so I can't edit them from the host.
My idea is to avoid installing all the requirements on my host machine, but maybe I should not create the app from the container?
One possible solution is to create a user in the container with the same UID/GID as my user on the host machine, but that won't work if I try to use another account on my host machine...
Any suggestion?
What worked best for me was avoiding (or minimizing) file creation inside the containers.
My Dockerfile would just copy the requirements.txt and install them;
and the container would access the app files through a mounted volume.
I pass the env var PYTHONDONTWRITEBYTECODE=1 to the containers, so python does not create *.pyc/*.pyo files.
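For example, it can be passed per run (web is a placeholder service name):
docker-compose run -e PYTHONDONTWRITEBYTECODE=1 web ./manage.py shell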
The few times I cannot avoid it (like ./manage.py makemigrations), I run chown afterwards.
It's not ideal, but as this happens rarely for my case, I don't bother.
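Two other things that can help with the ownership problem (web is the compose service name from the question; --user is a standard docker-compose run flag):
# run the one-off command as your host user, so new files belong to you
docker-compose run --user "$(id -u):$(id -g)" web ./manage.py startapp myapp
# or fix ownership after the fact
sudo chown -R "$(id -u):$(id -g)" myapp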