Celery and Django, queries cause ProgrammingError

I'm building a small Django project with cookiecutter-django and I need to run tasks in the background. Even though I set up the project with cookiecutter I'm facing some issues with Celery.
Let's say I have a model class called Job with three fields: a default primary key, a UUID and a date:
import uuid

from django.db import models


class Job(models.Model):
    # the default "id" primary key is added automatically by Django
    access_id = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)
    date = models.DateTimeField(auto_now_add=True)
Now if I do the following in a Django view everything works fine:
job1 = Job()
job1.save()
logger.info("Created job {}".format(job1.access_id))
job2 = Job.objects.get(access_id=job1.access_id)
logger.info("Retrieved job {}".format(job2.access_id))
If I create a Celery task that does exactly the same, I get an error:
django.db.utils.ProgrammingError: relation "metagrabber_job" does not exist
LINE 1: INSERT INTO "metagrabber_job" ("access_id", "date") VALUES ('e8a2...
Similarly this is what my Postgres docker container says at that moment:
postgres_1 | 2018-03-05 18:23:23.008 UTC [85] STATEMENT: INSERT INTO "metagrabber_job" ("access_id", "date") VALUES ('e8a26945-67c7-4c66-afd1-bbf77cc7ff6d'::uuid, '2018-03-05T18:23:23.008085+00:00'::timestamptz) RETURNING "metagrabber_job"."id"
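For reference, the Celery task in question looks roughly like this (a sketch; the task name, module layout and logger setup are illustrative, not the exact code from my project):

import logging

from celery import shared_task

from .models import Job

logger = logging.getLogger(__name__)


@shared_task
def create_job():
    # Same logic as the view above, just executed by a Celery worker
    job1 = Job()
    job1.save()
    logger.info("Created job {}".format(job1.access_id))
    job2 = Job.objects.get(access_id=job1.access_id)
    logger.info("Retrieved job {}".format(job2.access_id))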
Interestingly enough, if I look into my Django admin I do see that a Job object is created, but it carries a different UUID than the one in the logs.
If I then set CELERY_ALWAYS_EAGER = True, so that Django executes the task eagerly itself instead of handing it to a Celery worker: voila, it works again without error. But running the tasks inside Django isn't the point.
I did quite a bit of searching and I only found similar issues where the solution was to run manage.py migrate. However, I already did this, and it can't be the solution; otherwise Django wouldn't be able to execute the problematic code outside of Celery either.
So what's going on? I'm getting this exact same behavior for all my model objects.
edit:
Just in case, I'm using Django 2.0.2 and Celery 4.1

I found my mistake. If you are sure that your database is migrated properly and you still get errors like the above: it might very well be that you can't connect to the database. Your db host might be reachable, but not the database itself.
That means your config is probably broken.
Why was it misconfigured? In the case of cookiecutter-django there is a known issue where Celery complains about running as root on Mac, so I set the environment variable C_FORCE_ROOT in my docker-compose file. [Only do this locally; you should never do this in production!] Read about the issue here: https://github.com/pydanny/cookiecutter-django/issues/1304
The relevant parts of the config looked like this:
django: &django
  build:
    context: .
    dockerfile: ./compose/local/django/Dockerfile
  depends_on:
    - postgres
  volumes:
    - .:/app
  environment:
    - POSTGRES_USER=asdfg123456
    - USE_DOCKER=yes
  ports:
    - "8000:8000"
    - "3000:3000"
  command: /start.sh

celeryworker:
  <<: *django
  depends_on:
    - redis
    - postgres
  environment:
    - C_FORCE_ROOT=true
  ports: []
  command: /start-celeryworker.sh
However, setting this environment variable via an environment: key on the celeryworker service overrides the entire environment block inherited from the django anchor, so the database environment variables were never set on the celeryworker container, leaving me with a nonexistent database configuration.
I added the POSTGRES_USER variable to that container manually and things started to work again. Stupid mistake on my end, but I hope I can save some time for someone with this answer.
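For illustration, the fixed worker service ended up looking roughly like this (a sketch; the values are placeholders matching the excerpt above):

celeryworker:
  <<: *django
  depends_on:
    - redis
    - postgres
  environment:
    # repeat the variables that the overriding environment: key would otherwise drop
    - POSTGRES_USER=asdfg123456
    - USE_DOCKER=yes
    - C_FORCE_ROOT=true
  ports: []
  command: /start-celeryworker.sh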

Related

How to make docker-compose ".env" file take precedence over shell env vars?

I would like my docker-compose.yml file to use the ".env" file in the same directory as the "docker-compose.yml" file to set some environment variables, and for those to take precedence over any other env vars set in the shell. Right now I have
$ echo $DB_USER
tommyboy
and in my .env file I have
$ cat .env
DB_NAME=directory_data
DB_USER=myuser
DB_PASS=mypass
DB_SERVICE=postgres
DB_PORT=5432
I have this in my docker-compose.yml file ...
version: '3'

services:
  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: password

  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER: ${DB_USER}
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - postgres
In my Python 3/Django 3 project, I have this in my application's settings.py file
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASS'],
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT']
    }
}
However when I run my project, using "docker-compose up", I see
maps-web-1 | File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
maps-web-1 | connection = Database.connect(**conn_params)
maps-web-1 | File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
maps-web-1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
maps-web-1 | psycopg2.OperationalError: FATAL: role "tommyboy" does not exist
It seems like the Django container is using the shell's env var instead of what is passed in, and I was wondering if there's a way to have the Python/Django container use the ".env" file at the root for its env vars.
I thought at first I had misread your question, but I think my original comment was correct. As I mentioned earlier, it is common for your local shell environment to override things in a .env file; this allows you to override settings on the command line. In other words, if you have in your .env file:
DB_USER=tommyboy
And you want to override the value of DB_USER for a single docker-compose up invocation, you can run:
DB_USER=alice docker-compose up
That's why values in your local environment take precedence.
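One quick way to see which value Compose actually resolves before starting anything is to render the configuration (a shell sketch; docker-compose config is the Compose v1 spelling, docker compose config in v2):

$ docker-compose config    # prints the fully resolved compose file,
                           # with ${DB_USER} already substituted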
When using docker-compose with things that store persistent data -- like Postgres! -- you will occasionally see what seems to be weird behavior when working with environment variables that are used to configure the container. Consider this sequence of events:
We run docker-compose up for the first time, using the values in your .env file.
We confirm that we can connect to the database using the myuser user:
$ docker-compose exec postgres psql -U myuser directory_data
psql (10.5 (Debian 10.5-2.pgdg90+1))
Type "help" for help.
directory_data=#
We stop the container by typing CTRL-C.
We start the container with a new value for DB_USER in our
environment variable:
DB_USER=tommyboy docker-compose up
We try connecting using the tommyboy username...
$ docker-compose exec postgres psql -U tommyboy directory_data
psql: FATAL: role "tommyboy" does not exist
...and it fails.
What's going on here?
The POSTGRES_* environment variables you use to configure the Postgres
container are only relevant if the database hasn't already been
initialized. When you stop and restart a service with
docker-compose, it doesn't create a new container; it just restarts
the existing one.
That means that in the above sequence of events, the database was
originally created with the myuser username, and starting it the
second time when setting DB_USER in our environment didn't change
anything.
The solution here is to use the docker-compose down command, which
deletes the containers...
docker-compose down
And then create a new one with the updated environment variable:
DB_USER=tommyboy docker-compose up
Now we can access the database as expected:
$ docker-compose exec postgres psql -U tommyboy directory_data
psql (10.5 (Debian 10.5-2.pgdg90+1))
Type "help" for help.
directory_data=#
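One related caveat: docker-compose down removes containers and networks but not named volumes, so if the Postgres data directory lived in a named volume you would also need the -v flag to start from a clean database:

$ docker-compose down -v            # --volumes: also remove the volumes declared in the compose file
$ DB_USER=tommyboy docker-compose up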
Values in the shell take precedence over those specified in the .env file.
If your compose file references ${TAG} (for example image: 'webapp:${TAG}') and you set TAG to a different value in your shell, the substitution uses the shell value instead:
export TAG=v2.0
docker compose convert
version: '3'
services:
  web:
    image: 'webapp:v2.0'
Please refer to this link for more details: https://docs.docker.com/compose/environment-variables/
I cannot provide a better answer than the excellent one provided by @larsks, but please let me try to give you some ideas.
As @larsks also pointed out, any shell environment variable will take precedence over those defined in your docker-compose .env file.
This fact is stated as well in the docker-compose documentation when talking about environment variables, emphasis mine:
You can set default values for environment variables using a .env file,
which Compose automatically looks for in project directory (parent folder
of your Compose file). Values set in the shell environment override those
set in the .env file.
This means that, for example, providing a shell variable like this:
DB_USER=tommyboy docker-compose up
will definitely override any variable you may have defined in your .env file.
One possible solution to the problem is trying to use the .env file directly, instead of the environment variables.
Searching for information about your problem I came across this great article.
Among other things, in addition to explaining your problem too, it mentions as a note at the end of the post an alternative approach based on the use of the django-environ package.
I was unaware of the library, but it seems it provides an alternative way of configuring your application, reading the configuration directly from a configuration file:
import environ
import os

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# Set the project base directory
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Take environment variables from .env file
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))

# False if not in os.environ because of casting above
DEBUG = env('DEBUG')

# Raises Django's ImproperlyConfigured
# exception if SECRET_KEY not in os.environ
SECRET_KEY = env('SECRET_KEY')

# Parse database connection url strings
# like psql://user:pass@127.0.0.1:8458/db
DATABASES = {
    # read os.environ['DATABASE_URL'] and raises
    # ImproperlyConfigured exception if not found
    #
    # The db() method is an alias for db_url().
    'default': env.db(),

    # read os.environ['SQLITE_URL']
    'extra': env.db_url(
        'SQLITE_URL',
        default='sqlite:////tmp/my-tmp-sqlite.db'
    )
}
# ...
If required, it seems you could mix the variables defined in the environment as well.
Probably python-dotenv would allow you to follow a similar approach.
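For instance, a minimal sketch with python-dotenv; the override=True argument is what makes the .env file win over values already present in the shell environment:

import os

from dotenv import load_dotenv

# Load variables from the .env file; override=True makes values in the file
# take precedence over variables that are already set in the shell environment.
load_dotenv(override=True)

DB_USER = os.environ['DB_USER']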
Of course, it is worth mentioning that if you decide to use this approach you need to make the .env file accessible to your docker-compose web service and its associated container, perhaps by mounting an additional volume or by copying the .env file into the web directory you already mount as a volume.
You still need to cope with the PostgreSQL container configuration, but in a certain way it could help you achieve the objective you pointed out in your comment, because you could use the same .env file (certainly, a duplicated one).
According to your comment as well, another possible solution could be using Docker secrets.
In a similar way to how secrets work in Kubernetes, for example, as explained in the official documentation:
In terms of Docker Swarm services, a secret is a blob of data, such
as a password, SSH private key, SSL certificate, or another piece
of data that should not be transmitted over a network or stored
unencrypted in a Dockerfile or in your application’s source code.
You can use Docker secrets to centrally manage this data and
securely transmit it to only those containers that need access to
it. Secrets are encrypted during transit and at rest in a Docker
swarm. A given secret is only accessible to those services which
have been granted explicit access to it, and only while those
service tasks are running.
In a nutshell, it provides a convenient way for storing sensitive data across Docker Swarm services.
It is important to understand that Docker secrets are only available when using Docker Swarm mode.
Docker Swarm is an orchestration service offered by Docker, again similar to Kubernetes, with its own differences of course.
Assuming you are running Docker in Swarm mode, you could deploy your compose services in a way similar to the following, based on the official docker-compose docker secrets example:
version: '3.1'

services:
  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD: password
    secrets:
      - db_user

  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER_FILE: /run/secrets/db_user
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - postgres
    secrets:
      - db_user

secrets:
  db_user:
    external: true
Please, note the following.
We are defining a secret named db_user in a secrets section.
This secret could be based on a file or computed from standard in, for example:
echo "tommyboy" | docker secret create db_user -
The secret should be exposed to every container in which it is required.
In the case of Postgres, as explained in the section Docker secrets in the official Postgres docker image description, you can use Docker secrets to define the value of POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB: the name of the variable for the secret is the same as the normal ones with the suffix _FILE.
In our use case we defined:
POSTGRES_USER_FILE: /run/secrets/db_user
In the case of the Django container, this functionality is not supported out of the box, but since you can edit your settings.py as you need, you can use a helper function to read the required value in your settings.py file, as suggested for example in this simple but great article, something like:
import os


def get_secret(key, default):
    value = os.getenv(key, default)
    if os.path.isfile(value):
        with open(value) as f:
            return f.read()
    return value


DB_USER = get_secret("DB_USER_FILE", "")

# Use the value to configure your database connection parameters
This would probably make more sense for storing the database password, but it could be a valid solution for the database user as well.
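For instance, wiring the helper into the database settings might look like this (a sketch building on the get_secret() helper above; DB_PASS_FILE is a hypothetical second secret following the same pattern):

# Builds on the get_secret() helper defined above
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': get_secret('DB_USER_FILE', ''),
        'PASSWORD': get_secret('DB_PASS_FILE', ''),  # hypothetical secret-backed variable
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT'],
    }
}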
Please consider reviewing this excellent article too.
Based on the fact that the problem seems to be caused by the change in your environment variables in the Django container, one last thing you could try is the following.
The only requirement for your settings.py file is that it declares the different global variables with your configuration. But it says nothing about how to read them: in fact, I described different approaches in this answer and, after all, it is Python, so you can use the language to fit your needs.
In addition, it is important to understand that, unless you change any variables in your Dockerfile, both the Postgres and Django containers will receive exactly the same .env file with exactly the same configuration when they are created.
With these two things in mind, you could try creating, in your settings.py file, a local copy of the environment provided to the Django container, and reusing it across restarts or across whatever is causing the variables to change.
In your settings.py (please, forgive me for the simplicity of the code, I hope you get the idea):
import os

env_vars = ['DB_NAME', 'DB_USER', 'DB_PASS', 'DB_SERVICE', 'DB_PORT']

# On the first run, cache the relevant environment variables to a file
if not os.path.exists('/tmp/.env'):
    with open('/tmp/.env', 'w') as f:
        for env_var in env_vars:
            f.write('{}={}\n'.format(env_var, os.environ[env_var]))

# On every run, read the cached KEY=VALUE lines back into a dict
with open('/tmp/.env') as f:
    cached_env_vars_dict = dict(
        line.strip().split('=', 1) for line in f if line.strip()
    )
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': cached_env_vars_dict['DB_NAME'],
        'USER': cached_env_vars_dict['DB_USER'],
        'PASSWORD': cached_env_vars_dict['DB_PASS'],
        'HOST': cached_env_vars_dict['DB_SERVICE'],
        'PORT': cached_env_vars_dict['DB_PORT']
    }
    # ...
}
I think any of the aforementioned approaches is better, but this one would certainly ensure environment variable consistency across changes in the environment and container restarts.

Copying code using Dockerfile or mounting volume using docker-compose

I am following the official tutorial on the Docker website
The Dockerfile is:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
The docker-compose.yml is:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
I do not understand why in the Dockerfile they copy the code with COPY . /code/, but then mount it again in docker-compose with - .:/code. Isn't it enough to either copy or mount?
Both the volumes: and command: in the docker-compose.yml file are unnecessary and should be removed. The code and the default CMD to run should be included in the Dockerfile.
When you're setting up the Docker environment, imagine that you're handed root access to a brand-new virtual machine with nothing installed on it but Docker. The ideal case is being able to docker run your-image, as a single command, pulling it from some registry, with as few additional options as possible. When you run the image you shouldn't need to separately supply its source code or the command to run; these should usually be built into the image.
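As a sketch of that ideal (the image name and port are placeholders, not from the question):

$ docker build -t your-image .           # source code and the default CMD are baked in here
$ docker run -p 8000:8000 your-image     # nothing else needs to be supplied at run time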
In most cases you should be able to build a Compose setup with fairly few options. Each service needs an image: or build: (both, if you're planning to push the image), often environment:, ports:, and depends_on: (being aware of the limitations of the latter option), and your database container(s) will need volumes: for their persistent state. That's usually it.
The one case where you do need to override command: in Compose is if you need to run a separate command on the same image and code base. In a Python context this often comes up to run a Celery worker next to a Django application.
Here's a complete example, with a database-backed Web application with an async worker. The Redis cache layer does not have persistence and no files are stored locally in any containers, except for the database storage. The one thing missing is the setup for the database credentials, which requires additional environment: variables. Note the lack of volumes: for code, the single command: override where required, and environment: variables providing host names.
version: '3.8'
services:
  app:
    build: .
    ports: ['8000:8000']
    environment:
      REDIS_HOST: redis
      PGHOST: db
  worker:
    build: .
    command: celery worker -A queue -l info
    environment:
      REDIS_HOST: redis
      PGHOST: db
  redis:
    image: redis:latest
  db:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
Where you do see volumes: overwriting the image's code like this, it's usually an attempt to avoid needing to rebuild the image when the code changes. In the Dockerfile you show, though, the rebuild is almost free assuming the requirements.txt file hasn't changed. It's also almost always possible to do your day-to-day development outside of Docker – for Python, in a virtual environment – and use the container setup for integration testing and deployment, and this will generally be easier than convincing your IDE that the language interpreter it needs is in a container.
Sometimes the Dockerfile does additional setup (changing line endings or permissions, rearranging files) and the volumes: mount will hide this. It means you're never actually running the code built into the image in development, so if the image setup is buggy in some way you won't see it. In short, it reintroduces the "works on my machine" problem that Docker generally tries to avoid.
It is used to save the code as part of the image: when you use COPY, the code is baked into the image. Mounting a volume over it is only for development.
Ideally we use a single Dockerfile to create the image we use for both production and development. This increases the similarity of the app's runtime environment, which is a good thing.
In contrast to what @David writes: it's quite handy to do your day-to-day development with a Docker container. Your code runs in the same environment in production and development. If you use virtualenv in development you're not making use of that very practical attribute of Docker. The environments can diverge without you knowing, and prod can break while dev keeps on working.
So how do we let a single Dockerfile produce an image that we can run in production and use during development? First we should talk about the code we want to run. In production we want to have a specific collection of code to run (most likely the code at a specific commit in your repository). But during development we constantly change the code we want to run, by checking out different branches or editing files. So how do we satisfy both requirements?
We instruct the Dockerfile to copy some directory of code (in your example .) into the image, in your example /code. If we don't do anything else: that will be the code that runs. This happens in production.
But in development we can override the /code directory with a directory on the host computer using a volume. In the example the Docker Compose file sets the volume. Then we can easily change the code running in the dev container without needing to rebuild the image.
Also: even if rebuilding is fast, letting a Python process restart with new files is a lot faster.
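One common way to keep the production compose file self-contained while still getting this behaviour in development is to put the development-only bind mount in a docker-compose.override.yml, which Compose reads automatically in addition to docker-compose.yml (a sketch; the /code path matches the tutorial files above):

# docker-compose.override.yml -- picked up automatically by `docker-compose up`
version: '3'
services:
  web:
    volumes:
      - .:/code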

have different base image for an application in docker

I am new to the Docker world. I am trying to understand the Docker concept of parent images. Assume that I want to run my Django application on Docker. I want to use Ubuntu and Python, I want to have PostgreSQL as my database backend, and I want to run my Django application on the gunicorn web server. Can I have different base images for Ubuntu, Python, Postgres and gunicorn and create my Django container like this:
FROM ubuntu
FROM python:3.6.3
FROM postgres
FROM gunicorn
...
I am thinking about having different base images because if someday I want to update one of them, I only have to update the base image rather than going into Ubuntu and updating it there.
You can use multiple FROM instructions in the same Dockerfile, provided you are doing a multi-stage build.
One part of the Dockerfile would build an intermediate image used by another.
But that is generally used to cleanly separate the parents needed for building your final program from the parents needed to execute your final program.
No, you cannot create your image like this; the only image that will be treated as the base image in the Dockerfile you posted is the last one, FROM gunicorn. What you need is multi-stage builds, but before that I will clarify some concepts about such a Dockerfile.
A parent image is the image that your image is based on. It refers to
the contents of the FROM directive in the Dockerfile. Each subsequent
declaration in the Dockerfile modifies this parent image. Most
Dockerfiles start from a parent image, rather than a base image.
However, the terms are sometimes used interchangeably.
But in your case, I would not recommend putting everything in one Dockerfile. It would defeat the purpose of containerization.
Rule of Thumb
One process per container
Each container should have only one concern
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
dockerfile_best-practices
Apart from the database, you can use multi-stage builds:
If you use Docker 17.05 or higher, you can use multi-stage builds to
drastically reduce the size of your final image, without the need to
jump through hoops to reduce the number of intermediate layers or
remove intermediate files during the build.
With images being built by the final stage only, you can most of the time
benefit from the build cache and minimize image layers.
Your build stage may contain several layers, ordered from the less
frequently changed to the more frequently changed for example:
Install tools you need to build your application
Install or update library dependencies
Generate your application
use-multi-stage-builds
With multi-stage builds, the Dockerfile can contain multiple FROM lines; each stage starts with a new FROM line and a fresh context. You can copy artifacts from stage to stage, and the artifacts not copied over are discarded. This allows you to keep the final image smaller and include only the relevant artifacts.
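For illustration, a rough multi-stage sketch for a Python app (stage names and paths are illustrative, and it assumes gunicorn and the other dependencies are listed in requirements.txt):

# build stage: install dependencies into a virtualenv
FROM python:3.6.3 AS build
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install -r requirements.txt

# runtime stage: copy only the artifacts we need
FROM python:3.6-slim
COPY --from=build /venv /venv
WORKDIR /app
COPY . .
# assumes gunicorn was installed into the virtualenv via requirements.txt
CMD ["/venv/bin/gunicorn", "myproject.wsgi:application", "-b", ":8000"]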
Is it possible? Yes, technically multiple base images (FROM XXXX) can appear in a single Dockerfile. But it is not for what you are trying to do. They are used for multi-stage builds. You can read more about it here.
The answer to your question is that, if you want to achieve this type of Docker image, you should use one base image and install everything else in it with RUN commands, like this:
FROM ubuntu
RUN apt-get update && apt-get install -y postgresql   # install postgresql
...
Obviously it is not that simple. The base ubuntu image is very minimal; you have to install all the dependencies and tools needed to install Python, Postgres and gunicorn yourself with RUN commands. For example, if you need to download the Python source code using
RUN wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
wget is (most probably) not pre-installed in the ubuntu image. You have to install it yourself.
Should I do it? I think you would be going against the whole idea of dockerizing apps, which is not to build one giant monolithic image containing all the services, but to split services into separate containers (generally there should be one service per container) and then make these containers talk to each other with Docker networking tools. That is, you should use one container for Postgres, one for nginx and one for gunicorn, run them separately, and connect them via a network. There is an awesome tool, docker-compose, that comes with Docker to automate this kind of multi-container setup. You should really use it. For a more practical example, please read this good article.
You can use the official Docker image for Django: https://hub.docker.com/_/django/.
It is well documented and its Dockerfile is explained.
If you want to use different base images, then you should go with docker-compose.
Your docker-compose.yml will look like this:
version: '3'

services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000

  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web

  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/

  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
follow this blog for details https://realpython.com/django-development-with-docker-compose-and-machine/

Debugging Django on Docker on Vagrant with IDE

I have quite a Docker stack at the moment, composed of many containers, one of which is running an instance of Django.
At the moment, I'm limited to debugging by importing logging and using
logger = logging.getLogger(__name__)
logger.debug("your variable: " + variableName)
It's totally inefficient and requires me to rebuild the docker stack every time I want to re-evaluate a change.
I'm used to working in Komodo and having a robust, step-able debugger at my disposal, but I can't seem to find any documentation on how to wire up a Docker container inside a Vagrant VM to an IDE (or command-line debugger) that will let me step through code without a rebuild.
How can I wire up a debugging IDE to a docker container inside a Vagrant VM? Thanks.
I recommend using Docker Compose to handle and link your containers. I'm also using a Docker stack on my dev environment, with a container for each of:
- django
- postgres
- nginx
You just have to synchronize your code with the code inside your Docker container. To do that, use the volumes key in your docker-compose file. Here is an example with 2 containers (django and postgres):
db:
  image: postgres

web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/webapp
  ports:
    - "8000:8000"
  links:
    - db
This portion of the file does what you want: your whole project (.) is synchronized with the /webapp folder of your Docker container, so there is no more need to rebuild your Docker image:
volumes:
  - .:/webapp
Then, to debug, I recommend using pdb, which is in my opinion the best way to debug your Django app. Run:
docker-compose -f [path/to/your/docker-compose.yml] run --service-ports [name-of-your-django-container] python manage.py runserver
E.g.:
docker-compose -f django_project/docker-compose.yml run --service-ports django python manage.py runserver
Let's debug a view:
1. Import pdb in the view: import pdb
2. Add pdb.set_trace() in a method or a class in your view
3. Request the right URL
and you will be able to debug through your terminal.
You should see something like this:
(Pdb) > /webapp/car/views.py(18)get()
-> for car in serializer.data:
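For illustration, a hypothetical view with such a breakpoint, roughly matching the (Pdb) output above (the Car model and CarSerializer are assumptions, not code from the question):

import pdb

from django.http import JsonResponse
from django.views import View

from .models import Car
from .serializers import CarSerializer  # hypothetical serializer


class CarView(View):
    def get(self, request):
        serializer = CarSerializer(Car.objects.all(), many=True)
        pdb.set_trace()              # execution pauses here when the URL is requested
        for car in serializer.data:  # the line shown in the (Pdb) output above
            print(car)
        return JsonResponse(serializer.data, safe=False)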
Here is a tutorial on using Compose and Django from Docker: Quickstart Guide: Compose and Django

What is the correct way to set up a Django/PostgreSQL project with Docker Compose?

Can you tell me what is the best way to set up and configure a Django/PostgreSQL project with Docker Compose?
(with the latest versions of everything, python 3.4, django 1.8.1, etc.)
Have you looked at the examples on the Docker website? Here's a link describing exactly this.
Basically, you need two services: one for your Django app and one for your Postgres instance. You would probably want to build a Docker image for your Django app from your current folder, so you'll need to define a Dockerfile:
# Dockerfile
FROM python:3.4-onbuild
That's the whole Dockerfile! Using the magic -onbuild image, files are automatically copied to the container and the requirements are installed with pip. For more info, read here.
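Roughly speaking, the -onbuild variant behaves as if you had written the copy and install steps yourself; a sketch of the approximate plain-Dockerfile equivalent (paths follow the onbuild image's conventions, so treat the details as illustrative):

FROM python:3.4
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app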
Then, you simply need to define your docker-compose.yml file:
# docker-compose.yml
db:
  image: postgres

web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
Here, you've defined the Postgres service, which is built from the latest postgres image, and your Django app's service, which is built from the current directory. The web service exposes port 8000 so that you can access the app from outside its container, links to the database container (so that the two can communicate without anything specific on your side - more details here), and starts the container with the classic command you normally use to start your Django app. Also, a volume is defined in order to sync the code you're writing with the code inside your container (so that you don't need to rebuild your image every time you change the code).
As for having the latest Django version, you just have to specify it in your requirements.txt file.
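For example, a minimal requirements.txt pinning the versions mentioned in the question (psycopg2 is an assumption, needed for the PostgreSQL backend; the exact version numbers are illustrative):

Django==1.8.1
psycopg2==2.6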