Docker-Compose Workflow, docker-compose down? - django

I am learning Docker and have a working docker-compose setup with Django/PostgreSQL. Everything is working as expected. My question is: what is considered "best practice" for data persistence, and what is the risk to the data?
Here is my full docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - postgresql:/var/lib/postgresql
    ports:
      - "5432:5432"
    env_file: .env
  web:
    build: .
    command: python run_server.py
    volumes:
      - .:/project
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgresql:
The run_server.py script simply checks that the database can be connected to and then runs python manage.py runserver.
So if I stop my containers and restart them, the data persists. My concern is the docker-compose down command: it deletes the database. Is this intended? It seems like it would be very easy to run it and accidentally do a lot of damage.
Is there a way to make the database persist even if these containers are removed?

I followed this guide for integrating Django with Docker:
https://docs.docker.com/compose/django/
One way to make the data persist is to keep the database outside the Docker setup entirely and let the app connect to it via settings.py.
With this approach, the database survives when the container goes down, because it lives outside that container.
Another option is to run the database in a separate Docker container.
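If the concern is an accidental docker-compose down -v wiping the named volume, another option is to create the volume outside of Compose and mark it external, since Compose never deletes external volumes. A minimal sketch reusing the volume name from the question:

docker volume create postgresql

and in docker-compose.yml:

volumes:
  postgresql:
    external: true

Note that a plain docker-compose down (without -v) already leaves named volumes in place; only down -v or docker volume rm removes them.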

Related

docker-compose equivalent for ECS

I'm in the process of deploying my Flask application, which runs on Gunicorn and Nginx. I can run the app locally by creating one service for the Flask app and another for Nginx, and starting them with a docker-compose.yml file. I know how to deploy a single service built from a Dockerfile to ECS; however, I'm not sure how to do the same for multiple services. In other words, what is the ECS equivalent of running multiple services simultaneously with docker-compose?
I've tried defining two containers in my task definition, but for reasons I'm unable to figure out, the task "fails to start".
Following is my docker-compose.yml file, to give you a better idea:
version: '3'
services:
  flask_app:
    container_name: flask_app
    restart: always
    build: ./flask_app
    ports:
      - "8000:8000"
    command: gunicorn -w 3 -b 0.0.0.0:8000 wsgi:server
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - flask_app
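As a rough sketch of one path (assumptions: a recent Docker CLI with the Compose ECS integration, AWS credentials configured, and both images already pushed to a registry such as ECR, since build: is not supported when deploying to ECS):

docker context create ecs myecscontext
docker --context myecscontext compose up
docker --context myecscontext compose ps

The integration converts the compose file into a CloudFormation stack, with an ECS service per compose service. The older ecs-cli tool offers a similar ecs-cli compose workflow if you would rather end up with a single task definition containing both containers.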

How to dump a Postgres database in Django?

I have an application running in a Docker container and a PostgreSQL database running in a Docker container as well. I want to dump the database while inside the Django container. I know Django has dumpdata, but that command takes a long time; I also tried docker exec pg_dump, but docker exec doesn't work from inside the Django container.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
Is there any way to do a pg_dump from inside the Django container without using docker exec?
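One option: pg_dump can connect over TCP, so it can be run from inside the Django container against the db_postgres service. A sketch, assuming the postgres client tools are installed in the Django image and that .env defines the standard POSTGRES_USER/POSTGRES_PASSWORD/POSTGRES_DB variables:

# run inside the django container; the dump lands in ./media on the host via the bind mount
PGPASSWORD=$POSTGRES_PASSWORD pg_dump -h db_postgres -U $POSTGRES_USER -d $POSTGRES_DB -Fc -f /media/backup.dump

Restoring works the same way over the network with pg_restore -h db_postgres.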
While your containers are running, type:
docker-compose down -v
This will remove the volumes, and thus all the data stored in the container's database will be removed.
Now run
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But after pushing the image to Docker Hub (https://hub.docker.com/repository/docker/vivanks/firsttry),
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it doesn't start, and the container shows:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time do not understand the insistence on using Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to a database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. docker-compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added the CMD ... line to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The three commands above create a new bridge network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same non-default bridge network, your application can resolve the hostname db to the internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with option --rm), but you need to also manually clean up the rest. To do so run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (which means it will actually try to start an interactive Python shell, but since you're not allocating a terminal with -t, the shell just exits, successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
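If you bake it in, the exec form is generally preferred so that signals from docker stop reach the Python process directly; a sketch of the Dockerfile from the question with that line added:

FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
# exec form so SIGTERM from docker stop goes straight to the python process
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]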

How to connect a django app to a dockerized postgres db, both from a dockerized django and non-dockerized django using same DATABASE_HOST

I have a postgres container, docker-compose.yml:
services:
  db:
    container_name: db
    expose:
      - "5432"
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
And a django project with settings.py:
DATABASES = {
    'default': {
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1'),
        # etc
    }
}
.env
POSTGRES_HOST=db
When I run my django app locally with manage.py runserver 0.0.0.0:8000 it connects fine, using the default POSTGRES_HOST=127.0.0.1, because .env isn't loaded.
I also run my django app in a container sometimes:
docker-compose.yml:
web:
  #restart: unless-stopped
  build: .
  env_file: .env
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  volumes:
    - .:/app
  ports:
    - 8000:8000
  links:
    - db:db
However, this one uses the .env file and connects with POSTGRES_HOST=db.
If I try to connect the locally run Django app with POSTGRES_HOST=db, it fails:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
And if I try to run the django app in a container with POSTGRES_HOST=127.0.0.1, it fails in the same way.
How can I get them to use the same POSTGRES_HOST?
The problem seems to be in the network config. I don't see one.
The default behavior of docker-compose is that it creates a network for every compose file; by default the name is the folder name suffixed with '_default'.
The Django app and Postgres end up on different networks.
If your Django app and Postgres containers are defined in different docker-compose files, using container names to resolve hosts will not work by default (it can be done with a custom network config), because they are on two different networks.
Since you have bound the port, you can also reach Postgres directly through the host machine's private IP and port 5432; that way communication goes through the host network.
If you need the containers to talk to each other directly, make sure they are on the same Docker network, as sketched below.
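For example, with two separate compose files, one way is a pre-created external network (the network name here is just an example):

docker network create shared_net

and in both docker-compose.yml files:

networks:
  default:
    external:
      name: shared_net

With both stacks attached to shared_net, the Django container can reach Postgres by its container name (db).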
I figured out how to do it. It wasn't about getting them to use the same value; it was about getting them to read different values depending on how the app is run. So:
from docker-compose.yml
web:
  build: .
  command: bash -c "cd /app/src/ && python manage.py runserver 0.0.0.0:8000"
  env_file: .env
  environment:
    POSTGRES_HOST: db # takes precedence over the .env file
And in .env:
POSTGRES_HOST=127.0.0.1
Now, when I run locally with ./manage.py runserver, it uses the .env file and connects to the db container properly at 127.0.0.1:5432.
But if I run docker-compose up web, even though it also reads the .env file, the environment variable provided in the compose file takes precedence, so it uses POSTGRES_HOST: db and connects to the db container as well!
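To confirm which value wins, a quick check (sketch):

# what the container actually sees (should be db)
docker-compose run --rm web env | grep POSTGRES_HOST
# what the .env file provides for the local run (should be 127.0.0.1)
grep POSTGRES_HOST .env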

Docker: Reconnect new postgres container to existing Data container

I have three Docker containers: one running Django, another running Postgres, and a third acting as a data container for Postgres. I'm using docker-compose to link them up.
docker-compose.yml
dbdata:
  image: postgres
  container_name: dbdata_container
  volumes:
    - ./data:/var/lib/postgresql/data
  command: true
db:
  image: postgres
  container_name: postgres_container
  ports:
    - "5432:5432"
  volumes_from:
    - dbdata
web:
  build: .
  container_name: django_container
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I have mistakenly deleted my Postgres container.
How to create a new postgres container which connects to existing data container?
I have tried running:
docker-compose up
It fails with following error:
web_1 | django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
When I connect to Postgres using PgMastero, I cannot see any tables in it.
Kindly help
Found a workaround; posting it here in case it helps others who hit the same issue.
I think the links get messed up once the Postgres container is removed.
So if I do docker-compose up, Docker is unable to link to the db container, which I suppose still points at the deleted container, or the new container gets a different link name.
Deleting all three containers and doing docker-compose up does the trick. I am not sure why this is; it may be a bug in docker-compose.
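For what it's worth, the delete-and-recreate step can be done in one go; a sketch (the ./data bind mount on the host is not touched by these commands, so Postgres picks the existing data back up):

docker-compose down
docker-compose up -d --force-recreate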