I have dockerized a Django project with Postgres, Gunicorn, and Nginx following this tutorial.
Now I want to move the application to Azure Container Instances. Can I simply create a container group following this tutorial and expect the container images to communicate the right way?
To run the project locally I use docker-compose -f docker-compose.prod.yml up -d --build. But how is the communication between the containers handled in Azure Container Instances?
The docker-compose.prod.yml looks like this:
version: '3.7'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:
The containers will be able to communicate with each other using the service names (web, db, nginx) because they are part of the container group's local network. Also, take a look at the documentation, as you can't use docker-compose files directly unless you use the edge version of Docker Desktop.
On another note, upon restart you will lose whatever you stored in your volumes, because you are not using any kind of external storage. Look at the documentation.
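For the volume problem, one common option is to back the container group's volumes with Azure Files so data survives restarts. A rough sketch of the volumes fragment of an ACI deployment YAML (this is ACI's own deployment format, not docker-compose; the share name, account name, and key are placeholders for your own values):

volumes:
- name: mediafiles
  azureFile:
    shareName: media                       # pre-created Azure Files share (placeholder)
    storageAccountName: mystorageaccount   # placeholder
    storageAccountKey: <storage-account-key>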
Can somebody help me solve this issue? Why am I getting this error? I have db as the host in my .env, and I have links and a network in my docker-compose file too. I cannot figure out where the issue is being raised.
Here is my docker-compose file.
version: "3.9"
volumes:
dbdata:
networks:
django:
driver: bridge
services:
web:
build:
context: .
volumes:
- .:/home/django
ports:
- "8000:8000"
command: gunicorn Django.wsgi:application --bind 0.0.0.0:8000
container_name: django_web
restart: always
env_file: .env
depends_on:
- db
links:
- db:db
networks:
- django
db:
image: postgres
volumes:
- dbdata:/var/lib/postgresql
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
ports:
- 5430:5432
networks:
- django
container_name: django_db
Here is my .env with the database settings:
DB_USER=admin
DB_NAME=test
DB_PASSWORD=admin
DB_HOST=db
DB_PORT=5432
DB_SCHEMA=public
CONN_MAX_AGE=60
The problem occurs when you are using:
docker compose up --build
As the Docker Compose documentation describes the --build flag: "Build images before starting containers."
That means that during the build there is no db container running yet, so a db host name cannot exist.
A suggestion would be to not engage in any DB transaction during the build phase. You can run any database "rendering" as a seeding process at app start, as sketched below.
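A minimal sketch of that idea, assuming the standard manage.py layout: run migrations (and any seeding command) in the container's start command rather than in the Dockerfile, so they execute after the db container is up:

web:
  # run schema migrations / seeding at start, then launch the app server
  command: sh -c "python manage.py migrate && gunicorn Django.wsgi:application --bind 0.0.0.0:8000"
  depends_on:
    - db   # note: depends_on only orders startup; add a wait/retry if Postgres needs time to accept connections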
I've set up a Django project that runs in Docker. I want to have my dev environment on a remote server and have it auto-reload as I change my files locally. I've set up remote deployment in PyCharm and it works fine. All local changes are reflected on my remote server; I can also see that files get changed inside the Docker container, as I've set up a volume in my docker-compose file. However, auto-reloading is not working, and I cannot figure out why.
My docker-compose file:
version: '3.7'

services:
  web:
    container_name: my_project_ws
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn my_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
      - .:/my_project
    expose:
      - 8000
    env_file:
      - .env
    depends_on:
      - db
  db:
    container_name: my_project_db
    restart: always
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
    ports:
      - 80:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:
Running Gunicorn with --reload solved the problem.
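For reference, --reload is a standard Gunicorn flag that restarts workers when source files change; in the compose file above it is a one-line change:

web:
  ...
  command: gunicorn my_project.wsgi:application --reload --bind 0.0.0.0:8000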
When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times (for the web service, celeryworker, celerybeat, and flower).
The entire process is repeated four times
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'

services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower

volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because you are specifying the build section in all the services (via the django anchor), the image gets built for every service.
If you want to use the same image for all services but build it only once, specify the build section in only one service, the one started first (i.e., the service with no dependencies). In the other services, specify just the image field without a build section, and make those services depend on the first service, which builds the image.
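A sketch of that layout, trimmed to the relevant fields: only web has a build section; the other services just reference the same image and depend on web, which builds it:

services:
  web:
    image: myorganisation/myapp
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    command: /start
  celeryworker:
    image: myorganisation/myapp   # reuses the image built by the web service
    depends_on:
      - web
    command: /start-celeryworker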
The question is about keeping Redis data alive between docker-compose up and docker-compose down.
In the docker-compose.yaml file below, the db service uses the postgres_data:/var/lib/postgresql/data/ volume to keep data alive.
I would like to do something like this for the redis service, but I cannot find a workable solution. The only way I have managed to achieve this goal is to store the data in local storage: ./storage/redis/data:/data. All experiments with an external volume gave no results.
The question is: is it possible to store Redis data between docker-compose down and docker-compose up in a volume, as is done in the db service?
Sorry if the question is naive…
Thanks
version: '3.8'

services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    env_file:
      - ./series/.env
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    restart: always
    env_file:
      - ./series/.env
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=1q2w3e
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
        mode: host
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    ports:
      - target: 6379
        published: 6380
        protocol: tcp
        mode: host
    volumes:
      - ./storage/redis/data:/data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master

volumes:
  postgres_data:
You just need to add a named volume for Redis data next to the postgres_data:
volumes:
  postgres_data:
  redis_data:
Then change host path to the named volume:
redis:
  ...
  volumes:
    - redis_data:/data
If Redis saved data with the host path, then the above will work for you too. I mention that because you have to configure Redis to enable persistent storage (see the Redis Docker Hub page: https://hub.docker.com/_/redis).
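Putting it together, a minimal sketch of the redis service with a named volume (the compose file in the question already enables AOF persistence via --appendonly yes, which is what makes /data worth persisting):

redis:
  image: redis:alpine
  command: redis-server --appendonly yes   # enable AOF so data survives restarts
  volumes:
    - redis_data:/data

volumes:
  postgres_data:
  redis_data: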
Beware, running docker-compose down -v will destroy volumes as well.
I have a question about how to open a database, created with Docker using cookiecutter (https://github.com/cookiecutter/cookiecutter), in a database client.
Compose local file:
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: tienda_local_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: tienda_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
Is the port on the container mapped? Try using 127.0.0.1 (assuming this is on the same machine) as the host in your local client instead of the container name. If that doesn't work, can you share your docker-compose.yaml?
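Note that the postgres service in the compose file above has no ports entry, so nothing is published to the host. A sketch of the change, assuming the default Postgres port and no conflicting local instance:

postgres:
  ...
  ports:
    - "5432:5432"   # publish the container port so a client on the host can connect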
You don't have a network between the containers/services in your docker-compose file.
You can achieve this in a number of ways:
Link the containers/services. This is deprecated but, depending on your Docker version, may still work. Add to your docker-compose file:
django:
  ...
  links:
    - postgres
Connect the services to the same network. Add to both service definitions:
networks:
  - django
You also need to define the django network in the docker-compose file:
networks:
  django:
Connect your services via the host network. Just add to both service definitions:
network_mode: host
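Putting the second option together, a sketch of the relevant parts of the compose file:

services:
  django:
    ...
    networks:
      - django
  postgres:
    ...
    networks:
      - django

networks:
  django: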