How to deploy a django/gunicorn application as a systemd service?

Context
For the last few years, I have only been using docker to containerize and distribute django applications. For the first time, I am being asked to set things up so an application can be deployed the old way. I am looking for a way to get close to the level of encapsulation and automation I am used to while avoiding any pitfalls.
Current heuristic
So, the application (my_app):
is built as a .whl
comes with gunicorn
is published on a private Nexus and
can be installed on any of the client's servers with a simple pip install
(it relies on Apache as reverse proxy / static files server and Postgresql but this is irrelevant for this discussion)
To make sure that the app will remain available, I thought about running it as a service managed by systemd. Based on this article, here is what /etc/systemd/system/my_app.service could look like
[Unit]
Description=My app service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=centos
ExecStart=gunicorn my_app.wsgi:application --bind 127.0.0.1:8000
[Install]
WantedBy=multi-user.target
and, from what I understand, I could provide this service with environment variables, read via os.environ.get() in the settings, from a file such as /etc/systemd/system/my_app.service.d/.env:
SECRET_KEY=something
SQL_ENGINE=something
SQL_DATABASE=something
SQL_USER=something
SQL_PASSWORD=something
SQL_HOST=something
SQL_PORT=something
DATABASE=something
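For reference, systemd will not pick that file up automatically (only *.conf drop-ins inside a .service.d/ directory are read by default); a minimal sketch of wiring it in explicitly with EnvironmentFile=, assuming the path used above:
# added to the [Service] section of /etc/systemd/system/my_app.service
[Service]
# systemd reads KEY=value pairs from this file and exports them to the process,
# so os.environ.get("SECRET_KEY") etc. can see them
EnvironmentFile=/etc/systemd/system/my_app.service.d/.env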
Open questions
First of all, are there any obvious pitfalls?
Upgrade: will restarting the service be enough to upgrade the application after a pip install --upgrade my_app?
Library conflicts: is it possible to make sure that gunicorn my_app.wsgi:application uses the correct versions of the libraries used by the project, or do I need a virtualenv? If so, should source /path/to/the/venv/bin/activate be used in ExecStart, or is there a cleaner way to make ExecStart run inside a separate virtual environment?
Where should python manage.py migrate and python manage.py collectstatic live? Should I put them in /etc/systemd/system/my_app.service?

This is for the third question; maybe you can try it, though I'm not sure it works in every case.
You have to create a virtualenv and install gunicorn and django in it.
Create a Python file called gunicorn_config.py.
Fill it in like this, adjusting as needed:
# gunicorn_config.py
command = 'env/bin/gunicorn'   # path to the gunicorn binary inside the virtualenv
pythonpath = 'env/myproject'   # added to sys.path so the project package can be imported
bind = '127.0.0.1:8000'        # address and port gunicorn listens on
In the systemd unit, set ExecStart to env/bin/gunicorn -c env/gunicorn_config.py myproject.wsgi.
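Putting the answer together with the original question, here is a hedged sketch of what the [Service] section might end up looking like; the /opt/my_app paths, the ExecStartPre lines for migrate/collectstatic, and the settings module are assumptions to adapt, not part of the answer above:
# /etc/systemd/system/my_app.service -- sketch only, paths are assumptions
[Service]
Type=simple
User=centos
WorkingDirectory=/opt/my_app
EnvironmentFile=/etc/systemd/system/my_app.service.d/.env
# optional: run migrations and collect static files before starting the workers
ExecStartPre=/opt/my_app/env/bin/python -m django migrate --noinput --settings=my_app.settings
ExecStartPre=/opt/my_app/env/bin/python -m django collectstatic --noinput --settings=my_app.settings
# calling the virtualenv's gunicorn by absolute path removes the need to "activate" it
ExecStart=/opt/my_app/env/bin/gunicorn -c /opt/my_app/gunicorn_config.py my_app.wsgi
Restart=always
RestartSec=1
Because the virtualenv's own python and gunicorn binaries are used, imports resolve against the venv's site-packages, which addresses the library-conflicts question without sourcing activate; after pip install --upgrade my_app inside that venv, restarting the service should pick up the new code.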

Related

Dockerfile Customization for Google Cloud Run with Nginx and uWSGI

I'm trying to migrate a Django application from Google Kubernetes Engine to Google Cloud Run, which is fully managed. Basically, with Cloud Run, you containerize your application into a single Dockerfile and Google does the rest.
I have a Dockerfile which at one point does call a bash script via ENTRYPOINT
But I need to start Nginx and start Gunicorn. The Google Cloud Run documentation suggests starting Gunicorn like this:
CMD gunicorn -b :$PORT foo.wsgi
(let's say my Django app is named "foo")
But I also need to start Nginx via:
CMD ["nginx", "-g", "daemon off;"]
And since only one CMD is allowed per Dockerfile, I'm not sure how to combine them.
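As an aside, one common workaround for the single-CMD limitation (a sketch, not from the original post; the start.sh name is hypothetical) is a small entrypoint script that launches nginx in the background and keeps Gunicorn in the foreground:
#!/bin/sh
# start.sh -- hypothetical entrypoint launched via ENTRYPOINT ["./start.sh"];
# which of the two processes should listen on $PORT depends on the nginx config
nginx -g 'daemon off;' &               # nginx runs in the background
exec gunicorn -b :"$PORT" foo.wsgi     # gunicorn stays in the foreground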
To try to get around some of these difficulties, I was looking into using a Dockerfile to build from that already has both working and I came across this one:
https://github.com/tiangolo/meinheld-gunicorn-docker
But the paths don't quite match mine. Quoting from the documentation of that repo:
You don't need to clone the GitHub repo. You can use this image as a
base image for other images, using this in your Dockerfile:
FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app
It will expect a file at /app/app/main.py, and will expect it to contain a variable app with your "WSGI" application.
My wsgi.py file ends up at /app/foo/foo/wsgi.py and contains an application named application.
But if I understand that documentation correctly, when it says it will expect the WSGI application to be named app and located at /app/app/main.py, it's basically saying that I need to adjust the path and the variable name so that, when the image is built, it knows that app is called application and that instead of finding it at /app/app/main.py it will find it at /app/foo/foo/wsgi.py.
I assume I can fix the app vs application variable name by adding a line like app = application to my wsgi.py file, but I'm not sure how to correct the path that Docker expects.
Can someone explain to me how to adapt this to my needs?
(Or any other way to get it working)
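For what it's worth, a minimal sketch of the re-export idea mentioned above (the shim file and the copy destination are assumptions based on what the repo's docs say the image expects, not a verified recipe):
# app/main.py -- hypothetical shim, copied into the image at /app/app/main.py
import os

# point Django at the settings of the project named "foo"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "foo.settings")

# re-export the existing WSGI application under the variable name the image expects
from foo.wsgi import application as app
The Django project package foo would also need to be copied somewhere on the Python path (for example next to this shim) for the import to succeed.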

Docker container set up for Django & VueJS

Good afternoon,
I am looking for some guidance on where to concentrate my efforts. I keep getting off down these rabbit holes and cannot seem to find the path I am looking for.
I have developed a couple of small internal django apps but wish to integrate VueJS into the mix for a more dynamic front end.
My goals are:
I want to use Django-restframework for the backend calls
I want to use VueJS for the front end and make calls back to the REST API.
I want all of this to live in Docker container(s) that I can sync using Jenkins.
My questions / concerns:
I keep trying to build a single docker container for both VueJS and Django but starting with either Node or Python, I seem to end up in dependency hell. Does anyone have a good reference link?
I can't decide if I want it completely decoupled or to try to preserve some of the Django templates. The reason for the latter is that I don't want to lose the built-in Django authentication. I am not skilled enough to write the whole authentication piece, so I would rather not lose what is already done.
If I am completely decoupled and django is strictly the API, I could also have a single docker container for django, and a second docker container for the front end. Thoughts?
Finally, these webapps are all at the same risk level and exist on the same web app server with a separate postgres database server. Should nginx be on the server, then gunicorn in the docker container with django? Where do most devs draw the line on what is native on the server and what is served from a docker container? These are all pretty low-volume apps targeted for specific purposes.
Thanks for all your guidance as I continue to venture into new territory.
Kevin
Update December 2020
If you are interested in SSR, here's an outline for an approach I have been working on.
Update December 2020
Here's another way to do Django with Vue in containers on DigitalOcean with docker swarm. It's a lot simpler than the approach with AWS shown below, but not as scalable.
Update June 2020
I have made some changes to the project that make use of the Cloud Development Kit (CDK) and AWS Fargate instead of ECS with container instances to take advantage of serverless infrastructure.
Here's an overview of the project architecture (including an updated architecture diagram): https://verbose-equals-true.gitlab.io/django-postgres-vue-gitlab-ecs/start/overview/
Edit May 2019
Here is a setup for Django and Vue using ECS, the AWS container orchestration tool and GitLab for CI/CD. The repo is here.
I have been working on a project that demonstrates how to set up a Django + Vue project with docker. It is an open source project called verbose-equals-true (https://verbose-equals-true.tk). The source code for this project is available here: https://gitlab.com/briancaffey/verbose-equals-true
Here is an overview of how I'm structuring the project for production. The project uses docker compose to orchestrate the different containers for production and also for development.
Let me know if you have any questions about how I'm using Django/Vue/docker. I have documentation with detailed descriptions at https://verbose-equals-true.tk/docs.
Here are some thoughts on your questions/concerns:
I started with the official recommendations from VueJS for how to dockerize a Vue app, and an official example from Docker about how to dockerize a postgres + Django app. You can probably put everything in the same container, but I like separating things out as a way of keeping things modular and clear.
I'm using JWT for authentication with djangorestframework_jwt package. I can also use the builtin Django authentication system and Django admin.
I think it makes sense to have separate containers. In development you can serve the Vue app from the node container running npm run serve, and in production you can serve the production app's static files from an nginx container (you can use a multi-stage build for this part; a sketch is included below).
I would keep everything in containers with nothing on the server except the docker engine. This will simplify setup and will allow you to keep your application portable to wherever you decide to deploy it. The only thing that makes sense to keep separate is the postgres database. It is often much easier to connect to a managed database service like AWS RDS, but it is also possible to run a postgres container on your docker host machine to keep things even simpler. This will require that you do backups on your own, so you will need to be familiar with docker volumes.
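As an illustration of that multi-stage idea, here is a hedged sketch of a client Dockerfile that builds the Vue bundle in one stage and serves it with nginx in another; the image tags and the /app/dist output path are assumptions:
# client/Dockerfile -- sketch: build stage produces static files, nginx serves them
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:stable-alpine as production-stage
# copy the built bundle into nginx's default web root
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]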
I've been working with Django/Vue and this is what I do:
Create the Django project
Initialize the project's folder as a new Vue project using the vue-cli
From here I can start two development servers, one for Django and the other for Vue:
python manage.py runserver
In another terminal:
npm run serve
In order to consume my API in Vue, I use this configuration in vue.config.js:
module.exports = {
  baseUrl: process.env.NODE_ENV === 'production'
    ? '/static/'
    : '/',
  outputDir: '<PROJECT_BASE_DIR>/static',
  indexPath: '../templates/index.html',
  filenameHashing: false,
  devServer: {
    proxy: {
      '/api': {
        target: 'http://localhost:8000'
      }
    }
  },
}
devServer proxies API requests to the Django dev server; outputDir and indexPath make the build output land in the project's folders, <PROJECT_BASE_DIR>/templates/ and <PROJECT_BASE_DIR>/static/.
The next thing is to create a TemplateView and set its template_name to index.html (the file built by Vue); with this you have your SPA inside a Django view/template.
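A minimal sketch of that view wiring (the URL pattern and names are assumptions):
# urls.py -- sketch: serve the Vue-built index.html through a Django view
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    # template_name points at the index.html produced by "npm run build"
    path("", TemplateView.as_view(template_name="index.html"), name="spa"),
]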
With this approach you can use a Docker container for Django.
I hope this gives you some basic guidance to continue.
Alejandro
# docker-compose.yml
version: "3.9"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  server:
    build: server/
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
  client:
    build: client/
    command: npm run serve
    ports:
      - '8081:8080'
    depends_on:
      - server
#server django Dockerfile
# pull the official base image
FROM python:3
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SUPERUSER_PASSWORD="admin"
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app
EXPOSE 8000
CMD ["python", "manage.py", "runserver"]
#client vue Dockerfile
FROM node:14.17.5 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
EXPOSE 8080
CMD [ "npm", "run", "serve" ]
It's working.

Docker compose for production and development

So I use Python+Django (but it does not really matter for this question)
When I write my code I simply run
./manage.py runserver
which handles the web server, static files, automatic reload, etc.,
and to put it in production I use a series of commands like
./manage.py collectstatic
./manage.py migrate
uwsgi --http 127.0.0.1:8000 -w wsgi --processes=4
I also have a few other services like postgres and redis (which are common to both production and dev).
So I'm trying to adopt docker (plus docker-compose) here, and I cannot figure out how to split prod/dev with it.
Basically, in docker-compose.yml you define your services and images, but in my case the image in production should run one CMD and in dev another.
What are the best practices to achieve that?
You should create additional compose files like docker-compose-dev.yml or docker-compose-pro.yml and override parts of the original docker-compose.yml configuration using the -f option:
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up -d
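As a minimal sketch of that override pattern (the service name web, the volume mount, and the DEBUG variable are assumptions, not from the original answer), docker-compose-dev.yml would contain only the settings that differ in development:
# docker-compose-dev.yml -- sketch: only the dev-specific overrides
services:
  web:
    command: ./manage.py runserver 0.0.0.0:8000   # replaces the production command
    volumes:
      - .:/app          # mount the source for live reload
    environment:
      - DEBUG=1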
Sometimes I also use a different Dockerfile per environment and specify the dockerfile parameter in the build section of docker-compose-pro.yml, but I don't recommend it because you will end up with duplicated Dockerfiles.
Update
Docker has introduced the multi-stage builds feature (https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds), which allows you to create a single Dockerfile that covers different environments.
Usually having a different production and dev starting workflow is a bad idea. You should always try to keep both dev and prod environments very similar, even in the way you launch your applications. You should always externalize the configuration that is different between the different environments.
Having a different startup sequence is maybe acceptable; however, having multiple docker images (or Dockerfiles) per environment is a very bad idea. Docker images should be immutable and portable.
That said, you might have some constraints. docker-compose allows you to override the command specified in the image via the command property. I would recommend keeping the image production ready,
i.e. use something like CMD ./manage.py collectstatic && ./manage.py migrate && uwsgi --http 127.0.0.1:8000 -w wsgi --processes=4 in the Dockerfile.
In the compose file just override the CMD by specifying:
command: ./manage.py runserver
Having multiple compose files is usually not a big issue. You can keep them clean and manageable by using some nice compose file features such as extends, where one compose file can extend another.

How does a django app start its virtualenv?

I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to invoke it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe the Django part is served by gunicorn. The designer left town.
Okay, I've found out how, in my case, the virtualenv was being invoked for Django.
The BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
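As a side note, a quick way to confirm which interpreter the running app actually uses (a hedged check, not part of the original answer) is to log sys.executable from somewhere in the Django code:
# e.g. temporarily added to wsgi.py or a view -- logs the interpreter path
import sys
import logging

logging.getLogger(__name__).warning("Running under: %s", sys.executable)
# A path like /media/disk/www/aavso_apps/.venv/bin/python means the virtualenv is
# in use; something like /usr/bin/python would mean the global interpreter.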
Just use the absolute path when calling python in a virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you call /var/webapps/yoursite/env/bin/python.
If you just run Django behind a reverse proxy, it will use whatever Python environment was active for the user that started the server (whichever interpreter which python resolved to). If you're using a management tool like Gunicorn, you can specify which environment to use in its configs, although Gunicorn itself expects the virtual environment to be activated (or its own binary inside the venv to be called) before it is invoked.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx

Run django migrate in docker

I am building a Python+Django development environment using docker. I defined Dockerfile files and services in docker-compose.yml for the web server (nginx) and database (postgres) containers and for a container that will run our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system, so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want them to run once when I create the containers and never again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one-time command containers will get supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend running docker exec manually after starting the container. I do not think there is a way to automate this inside a Dockerfile or docker-compose, because of the two reasons above.
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is now finally resolved by the new service profiles introduced in docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"] # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
docker exec -it container-name bash
Then you will be inside the container and can run any command you would normally run when developing without docker.
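Alternatively, the same thing can be done non-interactively through compose (a sketch; the service name server is taken from the compose file shown earlier on this page and may differ in your setup):
# run migrations once, on demand, inside the already-running app container
docker-compose exec server python manage.py migrate
# or in a one-off container if the service isn't running yet
docker-compose run --rm server python manage.py migrate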