Django development server does not reload in Docker container

With cookiecutter-django I use the Docker setup in the sync version. Unfortunately, the development server does not restart automatically on code changes, which means I would have to restart the containers after every change. That is a hassle.
I am working with Windows 10 and Docker Desktop and the WSL 2 engine.
Thanks for any help

cookiecutter-django uses runserver_plus inside the local Docker container, so the broader question is: how do you get auto-reloading to work for runserver_plus inside Docker on WSL 2?
runserver_plus is built on Werkzeug and exposes Werkzeug's built-in reloading capabilities. While Werkzeug does support filesystem-event-based reloading, that doesn't work inside WSL 2 when you're mounting a file path from the Windows host.
The CLI documentation includes a section on this specific issue, which links to Werkzeug's auto-reloader docs. These state that you'll need the stat reloader, a brute-force update checker that stats every file every few seconds to look for changes. You can enable it explicitly:
python manage.py runserver_plus --reloader-interval 1 --reloader-type stat ...
(Note that polling every second may be more often than you need; raise the interval to whatever makes sense for you.)
In cookiecutter-django specifically, you'll find this command in the compose/local/django/start script. Modify the runserver_plus command there, rebuild the image, and restart your container.
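For reference, the command sits at the end of that start script; a hedged sketch of the change (the exact address and options may differ between cookiecutter-django versions):

# compose/local/django/start, before (roughly):
exec python manage.py runserver_plus 0.0.0.0:8000
# after, forcing the brute-force stat reloader (interval in seconds, tune to taste):
exec python manage.py runserver_plus --reloader-type stat --reloader-interval 3 0.0.0.0:8000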
The original question didn't ask about Celery, but I'll note that cookiecutter-django uses watchfiles to auto-reload Celery workers. This suffers from the same issue in Docker + WSL 2, and can be forced to use file-stat polling by setting WATCHFILES_FORCE_POLLING=true (in .envs/.local/.django in cookiecutter-django).

Related

Running postgres 9.5 and django in one container for CI (Bamboo)

I am trying to configure a CI job on Bamboo for a Django app; the tests to be run rely on a database (Postgres 9.5). It seems a prudent way to go about it is to run the whole test in a Docker container, as I do not control the agent environment and so cannot install Postgres there.
Most guides I found recommend running Postgres and Django in two separate containers and using docker-compose to manage them easily. In this scenario each Docker image runs just one service, started with CMD. In Bamboo, however, I cannot use docker-compose; I need to use just one image, so I am trying to get Postgres and Django to run nicely together in one container, with little success so far.
My problem is that I see no easy way to start Postgres as a service inside Docker but NOT as the Docker CMD command; the official postgres image uses an entrypoint.sh approach, which is also described in the official Docker docs.
But it is not clear to me how to implement that. I would appreciate your help!
Well, basically you would start postgres as a background process in the docker-entrypoint shell script that otherwise starts your Django application.
The only trick is that you need to put a 'trap' command in it, so that you can send a shutdown/kill to the background process when the main process stops.
Although I have done that a thousand times, I know it is a common source of programming errors. In general I just use my docker-systemctl-replacement, which takes care of running multiple applications as services, just as if the container were a virtual machine hosting multiple applications.
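To make that concrete, here is a minimal sketch of such an entrypoint script, assuming Postgres 9.5 installed from distribution packages (the binary and data paths are illustrative and will differ between images):

#!/bin/sh
PGBIN=/usr/lib/postgresql/9.5/bin
PGDATA=/var/lib/postgresql/9.5/main

# start postgres in the background and wait (-w) until it accepts connections
su postgres -c "$PGBIN/pg_ctl start -D $PGDATA -w"

# make sure postgres is shut down again when this script exits or is killed
trap 'su postgres -c "$PGBIN/pg_ctl stop -D $PGDATA -m fast"' TERM INT EXIT

# run the main process (here: the Django test suite)
python manage.py test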
Your only other option is to add a startup script to your Dockerfile, or kick it off as part of your docker run ... commands. We don't generally use the "Docker" tasks, as I find them ... distasteful (also why I usually just fall back to running a "Script" task, and directly calling docker run in that script task).
Anyway, you'd have to have your Docker container execute a script that would:
Start up Postgres (like a sudo systemctl start postgresql)
Execute your tests.
Your Dockerfile will have to install PostgreSQL and do some minor setup work, I imagine (like creating the relevant users and databases with the proper owner). And since we're all good citizens, we remember never to run our containers as root, right?
Note - you can always hack around getting two containers to talk to each other without using docker-compose. It's a bit less convenient, but you could do something like:
docker run --detach --cidfile=db_cidfile --name ci_db postgresql_image
...
docker run --link ci_db testing_image
Make sure that you EXPOSE the right ports on the postgresql image to the testing_image container.
EDIT: looking more at my specific case - we just install PostgreSQL into a base CentOS host rather than use the default postgresql image (using yum install http://yum.postgresql.org/..../pgdg-centos...rpm and then installing the postgresql-server and postgresql-contrib packages from there). There is a CMD [ "/usr/pgsql-ver/bin/postgres", "-D", "/var/lib/pgsql/ver/data"] in our Dockerfile, too. We don't do anything fancy with the docker container, though. NOTE: we don't use this in production at all; this is strictly for local and CI testing.

Deployment of Django in local network while continuing development

The question is in the title.
I need to deploy a Django application on a local network (I still don't know how, but I suppose it's quite easy), but I also still need to develop it. My question is: how do I allow users to use the application while I'm still developing it?
Is keeping two versions of the application, one deployed and one in development, a solution? That way I could replace the deployed application with the newly developed one whenever I finish coding.
Another question concerns the database: can I still modify the database if I just add new models without touching existing ones?
Thank you in advance,
This is a good blog entry that covers deploying Django using Heroku. I'll give you a quick rundown of what makes all the different technologies important:
Git
Git, or any other version control system, is certainly not required. Why it lends itself to deploying Django projects is that you're usually distributing your application by source, i.e. you're not compiling it or packaging it as an egg. Usually you'll organize your Git repository so that updating your application on the server only requires a checkout of the latest sources, nothing else.
virtualenv and pip
This, again, is not a strict requirement, but I'd strongly suggest you take the time to familiarize yourself with virtualenv and pip if you haven't already done so, since they make deploying your Python applications across different runtime environments, local or remote, a breeze.
Basically, your project will need to have at least Django and Gunicorn available on the Python path, possibly a database driver as well. Without these tools, that means every time you try to deploy your application somewhere, you have to install Python and do the easy_install dance all over again.
virtualenv will create a self-contained Python installation, which means the new Python instance will, by default, have its very own Python path configuration relative to the installation. pip is like easy_install on steroids: it supports installing Python dependencies directly from code repositories, and it supports a requirements file format with which you can install and configure all of your dependencies in one fell swoop.
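For example, a minimal requirements file for the stack described here might look like this (the package pins are illustrative):

# requirements.txt
Django==1.4.1
gunicorn==0.14.6
psycopg2==2.4.5    # only if you talk to PostgreSQL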
With virtualenv and pip, all you need on the target machine is an installed Python distribution and a simple text file listing your dependencies that pip can parse. From there it's roughly:
git clone repo /app/path
easy_install virtualenv
virtualenv /app/path
. /app/path/bin/activate
pip install -r /app/path/requirements.txt
Voila: Gunicorn, Django and all the other dependencies are then installed and immediately available. When you run the Gunicorn Django script with the Python instance in /app/path/bin, the script will have access to the Gunicorn sources and will be able to locate your Django project, which in turn has access to Django and the other dependencies.
Gunicorn
This is the actual Python application that will manage your Django instance and provide an HTTP interface that exposes it to HTTP clients. It starts several worker processes, each a distinct Python interpreter loaded with the sources of your application and its dependencies. The main Gunicorn process then takes charge of distributing requests across the worker processes for maximum throughput.
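As a minimal illustration (the WSGI module path and worker count are hypothetical), starting such a Gunicorn instance could look like:

gunicorn --workers 4 --bind 127.0.0.1:8080 myproject.wsgi:application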
The basic principle of wiring Nginx and Gunicorn
The most important thing to observe is that Nginx and Gunicorn are separate processes that you manage independently.
The Nginx web server will be publicly exposed, i.e. it will be directly accessible over the internet. For requests to static media, such as images, CSS stylesheets, JavaScript sources and PDF files accessible via the filesystem, Nginx will serve the files itself in the response body, provided you configure it to look on the path where you configured your project to collect static media.
Any other request should be proxied to your Gunicorn instance. Gunicorn will be configured to listen for HTTP requests on a certain port on the loopback interface, so you'll be using Nginx as a reverse proxy to http://127.0.0.1:8080 for requests destined for your Django instance.
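A minimal sketch of the corresponding Nginx server block (the domain and paths are hypothetical):

server {
    listen 80;
    server_name example.com;

    # serve collected static media straight from the filesystem
    location /static/ {
        alias /app/path/static/;
    }

    # proxy everything else to the Gunicorn instance on the loopback interface
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}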
This is the basic rundown for deploying Django projects into production, and it should satisfy the needs of 95% of the Django projects running out there. While I did reference Nginx and Gunicorn, this is the usual approach for setting up any web server as a reverse proxy in front of a Python WSGI server.

Update Docker container after making changes to image

I am new to Docker and am experimenting with developing a Django App on Docker.
I have followed the example in this link here:
Currently I am developing my app and have made changes to various files within the web directory. For now, in order to test my changes, I have had to remove all my running containers, stop my docker machine, start my docker machine, attach the docker machine, and run docker-compose up. This is a time-consuming process and is unproductive, especially if I need to keep testing after small changes.
My question is if I make changes to the image (changes in the web directory) how can I update my container to reflect those changes or should I be doing things differently?
How do other people develop using Docker? What are your best practices?
You could use a volume to map a host directory onto the container's web directory. Any changes in the host directory will then be reflected immediately, without restarting the container. See the post below.
How to make a docker container directory accesible from host?
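As a sketch of what that looks like in a docker-compose.yml (the service name and paths are hypothetical and depend on the tutorial you followed):

web:
  build: ./web
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - ./web:/usr/src/app    # host dir mounted over the container's code dir
  ports:
    - "8000:8000"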
You can use docker-compose up --build to rebuild the image and container after making changes. It will automatically rebuild and restart any changed containers. There shouldn't be any reason to stop docker machine. If you are using a Mac or Windows PC, you can try the new beta app, which is a bit easier to use than prior versions.
Also see: https://docs.docker.com/compose/reference/
As for best practices, this probably isn't really the right forum unless you have a more specific question.

Development and production with docker with multiple sites

Currently I have 3 linode servers:
1: Cache server (Ubuntu, varnish)
2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)
3: DB server (Ubuntu, postgresql + pg_bouncer)
On my app server I have multiple sites (top-level domains). Each site lives inside a virtual environment created with virtualenvwrapper. Some sites are big with a lot of traffic, and some sites are small with little traffic.
A typical site consist of python (django), celery (beat, flower) and gunicorn.
My current development pattern is to work inside a staging environment on the app server and commit changes to git. Then I switch to the production environment, do a git pull and a ./manage.py migrate, and restart the process with sudo supervisorctl restart sitename:. But this takes time! There must be a simpler method!
Therefore it seems like Docker could help simplify everything, but I can't decide on the best approach for managing all my sites and the containers inside each site.
I have looked at http://panamax.io and https://github.com/progrium/dokku, but not sure if one of them fit my needs.
Ideally I would run a development version of each site on my local machine (emulating the cache server, app server and db server), make code changes there and test them. Once I saw the changes worked, I would execute a command to do all the heavy lifting: send the changes to the Linode servers (mostly the app server, I would think), run the migrations, and restart the project on the server.
Could anyone point me in the right direction as how to achieve this?
I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.
There doesn't seem to be any really turnkey solution on Docker yet.
It's also been frustrating that most of the 'Django+Docker' tutorials just focus on a single Django site, so they bundle up the webserver and everything in the same Docker container. I think if you have multiple sites on a server you want them to share a single webserver, but this quickly gets more complicated than presented in the tutorials, which are no longer much help.
Roughly what I came up with is this:
using Fig to manage the containers and the complicated Docker config that would be tedious to type as command-line options all the time (see the sketch after this list)
sites are Django, on uWSGI+Nginx (no reason you couldn't have others though)
I have a git repo per site, plus a git repo for the 'server'
separate containers for db, nginx and each site
each site container has its own uWSGI instance... I do some config switching so I can bring up either a 'dev' container, with uWSGI acting as a standalone web server, or a 'live' container, where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server.
I'm not sure yet how useful the 'dev' uWSGI servers are; I might switch to just running the Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading.
In the 'server' repo I have all the shared Dockerfiles, for Nginx server, base uWSGI app etc.
In the 'server' repo I have made Fabric tasks to do my deployment (checkout server and site repos on the server, build docker images, run fig up etc).
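For the curious, the fig.yml this adds up to is roughly the following sketch (service names and paths are invented for illustration; Fig later became Docker Compose):

db:
  image: postgres
nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - site1
    - site2
site1:
  build: ./sites/site1
  volumes:
    - ./sites/site1:/app    # share local code for live editing in 'dev' mode
site2:
  build: ./sites/site2
  volumes:
    - ./sites/site2:/app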
Speaking of deployment, frankly I'm not too keen on the Docker Registry idea. It seems to mean you have to upload hundreds of megabytes of image files to the registry server each time you want to deploy a new container version. That sucks if you are on a limited-bandwidth connection at the time, and seems very inefficient.
That's why so far I decided to deploy new code via Git and build the new images on the server. I don't use a Docker Registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit so I'm curious for feedback.
I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax etc that may or may not work for you (I don't think any of them are really ready yet) you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.
I tried to get on with Dokku early on in my search but had to abandon because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox vm to run the Docker daemon. It didn't seem worth the hassle of this when I wasn't certain I wanted to be stuck with how Dokku works at the end of the day.

Different methods to deploy Django project and their pros and cons?

I am quite a noob when it comes to deploying a Django project. I'd like to know the various methods for deploying a Django project, and which one is preferred.
The Django documentation lists Apache/mod_wsgi, Apache/mod_python and FastCGI etc.
mod_python is deprecated now; one should use mod_wsgi instead.
Django with mod_wsgi is easy to set up, but:
you can only use one Python version at a time [edit: you can even only use the Python version mod_wsgi was compiled for]
[edit: it seems I was wrong about mod_wsgi not supporting virtualenv: it does]
So for multiple sites (targeting different Django/Python versions) on one server, mod_wsgi is not the best solution.
FastCGI can be used with virtualenv, and with different Python versions, since you run it with
./manage.py runfcgi …
and then configure your webserver to use this fcgi interface.
The new, hot stuff for Django deployment seems to be Gunicorn. It's a web server that implements WSGI and is typically used as a backend, with a "big" web server in front as a proxy.
Deployment with gunicorn feels a lot like fcgi: you run a process doing the django processing stuff with manage.py, and a webserver as frontend to the world.
But gunicorn deployment has some advantages over fcgi:
speed - I didn't find the sources, but benchmarks say fcgi is not as fast as the f suggests
config files: with fcgi you must do all configuration on the command line when executing the manage.py command. This becomes unhandy when running multiple Django instances via init.d (the system service startup on Unix-like OSes): it's always the same command line, with just a different configuration file. Gunicorn reads its settings from a file instead (see the sketch after this list)
gunicorn can drop privileges: no need to do this in your init.d script, and it's easy to switch to one user per django instance
gunicorn behaves more like a daemon: writing a pidfile and logfile, forking to the background, etc., which again makes using it in an init.d script easier.
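To illustrate the last three points (all names and paths are hypothetical), a Gunicorn config file is plain Python and can hold settings such as:

# /etc/gunicorn/mysite.py
bind = "127.0.0.1:8000"
workers = 3
user = "mysite"                            # drop privileges to this user
daemon = True                              # fork to the background
pidfile = "/var/run/gunicorn/mysite.pid"
errorlog = "/var/log/gunicorn/mysite.log"

which you then start with something like:

gunicorn -c /etc/gunicorn/mysite.py mysite.wsgi:application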
Thus, I would suggest using the gunicorn solution, unless you have a single site with low traffic on a single server, in which case you could use the mod_wsgi solution. But I think in the long run you'll be happier with gunicorn.
If you have a Django-only web server, I would suggest using nginx as the frontend proxy, as it's the best performing (again, this is based on benchmarks I read in some blog posts; I don't have the URL anymore).
Personally I use apache as the frontend proxy, as I need it for other sites hosted on the server.
A simple setup instruction for django deployment could be found here:
http://ericholscher.com/blog/2010/aug/16/lessons-learned-dash-easy-django-deployment/
My init.d script for gunicorn is located at github:
https://gist.github.com/753053
Unfortunately I did not yet blog about it, but an experienced sysadmin should be able to do the required setup.
Use Nginx/Apache/mod_wsgi and you can't go wrong.
If you prefer a simple alternative, just use Apache.
There is a very good deployment document: http://lethain.com/entry/2009/feb/13/the-django-and-ubuntu-intrepid-almanac/
I myself have faced a lot of problems deploying Django projects and automating the deployment process. Apache and mod_wsgi were like a curse for Django deployment. There are several tools like Nginx, Gunicorn, supervisord and Fabric which are trending for Django deployment. At first I used and configured them individually, without deployment automation, which took a lot of time (I had to maintain testing as well as production servers for my client, and had to update them as soon as a new feature was tested and approved). But then I stumbled upon django-fagungis, which totally automates my Django deployment, from cloning my project from Bitbucket to deploying it on my remote server (it uses Nginx, Gunicorn, supervisord, Fabric and virtualenv, and also installs all the dependencies on the fly), all with just three commands :) You can find more about it in my blog post here. Now I don't even have to get involved in this process (which used to take a lot of my time); one of my junior developers runs the three django-fagungis commands mentioned here on his local machine, and we get a crisp new copy of our project deployed in minutes, without any hassle :)