Why does this docker compose file build the same image four times? - django

When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times: once each for the web service, celeryworker, celerybeat and flower.
The entire build process is repeated four times.
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env

  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower

volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local

Because you are specifying the build section in all the services (via the django anchor), the image gets built once per service.
If you want to use the same image for all services but build it only once, specify the build section in only the one service that starts first (i.e., the service with no dependencies), then give the other services just the image field without a build section and make them depend on the service that builds the image.
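A trimmed-down sketch of that layout, reusing the names from the question (environment, volumes and other settings omitted for brevity; the /start-* commands and Dockerfile path are taken from the original file):

services:
  web:
    image: myorganisation/myapp
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start

  celeryworker:
    image: myorganisation/myapp  # same image, no build section, so nothing is rebuilt
    command: /start-celeryworker
    depends_on:
      - web

  celerybeat:
    image: myorganisation/myapp
    command: /start-celerybeat
    depends_on:
      - web

  flower:
    image: myorganisation/myapp
    command: /start-flower
    depends_on:
      - web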

Related

Getting error code 247 while deploying django app

I am trying to deploy my Django application to a DigitalOcean droplet, using the least expensive one, which gives me 512 MB of RAM, 1 CPU and 10 GB of SSD. After setting everything up properly, I run docker-compose up --build to see if everything is fine, and it launches everything. In my docker-compose file I use a Postgres instance, a Redis instance, a Celery worker and the Django application I wrote. If that matters, here is the docker-compose file:
version: "3.9"
services:
db:
container_name: my_table_postgres
image: postgres
ports:
- 5432/tcp
volumes:
- my_table_postgres_db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=my_table_postgres
- POSTGRES_USER=dev
- POSTGRES_PASSWORD=blablabla
redis:
container_name: redis
image: redis
ports:
- 6739:6739/tcp
environment:
- REDIS_HOST=redis-oauth-user-service
volumes:
- redis_data:/var/lib/redis/data/
my_table:
container_name: my_table
build: .
command: python manage.py runserver 0.0.0.0:5000
volumes:
- .:/api
ports:
- "5000:5000"
depends_on:
- db
- redis
celery:
image: celery
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: ['python', '-m', 'celery', '-A', 'mytable' ,'worker', '-l', 'INFO']
volumes:
- .:/api
depends_on:
- redis
- my_table
links:
- redis
volumes:
my_table_postgres_db:
redis_data:
Then everything starts up, quite slowly, but when I make a request from something like Postman, the docker-compose terminal says that my_table exited with code 247. Can you please tell me why? Do I need to change some setting? Or is the droplet's RAM too low?
Thank you a lot
It was just a problem of sizing: the droplet had too little RAM to support all the requirements. After changing to a bigger droplet, everything worked fine.
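Since the root cause was memory, a hedged aside (not part of the original answer): if upgrading the droplet is not an option, you can cap individual services so an out-of-memory kill is confined to one container and easier to spot. With the Compose v3 format this looks roughly like the excerpt below (the 400M figure is an arbitrary placeholder; the deploy limits are honored by Docker Compose v2, or by docker-compose --compatibility with the classic tool):

services:
  my_table:
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    deploy:
      resources:
        limits:
          memory: 400M  # placeholder cap; tune to what the droplet can actually spare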

service refers to undefined volume

I want to share temp files between the Django project and the Celery worker (it works with TemporaryUploadedFiles, so I want the Celery worker to have access to these files to manage them). I've read about shared volumes, so I tried to implement one in my docker-compose file, but running it gives me this error:
$ docker compose up --build
service "web" refers to undefined volume shared_web_volume/: invalid compose project
Sometimes "web" is replaced with "celery", so neither Celery nor Django has access to this volume.
Here is my docker-compose.yml file:
volumes:
  shared_web_volume:
  postgres_data:

services:
  db:
    image: postgres:12.0-alpine
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env

  web:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    ports:
      - 1337:8000
    env_file:
      - ./.env
    depends_on:
      - db

  celery:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: celery -A MoreEnergy worker --loglevel=info
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    env_file:
      - ./.env
    depends_on:
      - web
      - redis

  redis:
    image: redis:5-alpine
What am I doing wrong?
Update: the temp dir is my project folder (I've set it with the FILE_UPLOAD_TEMP_DIR variable in the settings file), so I don't need to make one more volume only for the shared temp files (if I do, tell me).
Your shared volume is named shared_web_volume, which is different from the name you use in the services' volumes sections, shared_web_volume/ (note the trailing slash).
My suggestion is to remove the slash so it looks like this:
volumes:
  - "shared_web_volume:/MoreEnergy/"
Don't forget to do the same thing in the celery service.
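Applied to the file from the question, the relevant parts of both services end up like this (everything else unchanged):

services:
  web:
    volumes:
      - "shared_web_volume:/MoreEnergy/"  # name now matches the top-level volume
  celery:
    volumes:
      - "shared_web_volume:/MoreEnergy/"

volumes:
  shared_web_volume: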

django and AWS: Which is better lambda or fargate

I currently use a docker-compose.yml file to deploy my Django application on an AWS EC2 instance, but I feel the need for scaling and load balancing.
I have two choices:
AWS Lambda (using Zappa, but I have heard that this is no longer maintained. There are limitations on memory consumption and execution time (recently increased to 15 minutes from the earlier 5 minutes) and a 3 GB memory limit (CPU increases proportionally). Also, if it is used sporadically, it may need to be pre-warmed (called on a schedule) for extra performance.)
AWS Fargate (not sure how to use the docker-compose.yml file here)
My Django application requires some big libraries like pandas.
Currently I use the docker-compose.yml file below to run the Django application. It uses the following images: django-app, reactjs-app, postgres, redis and nginx.
# My version of docker = 18.09.4-ce
# Compose file format supported till version 18.06.0+ is 3.7
version: "3.7"
services:
  nginx:
    image: nginx:1.19.0-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx/localhost/configs:/etc/nginx/configs
    networks:
      - nginx_network

  postgresql:
    image: "postgres:13-alpine"
    restart: always
    volumes:
      - type: bind
        source: ../DO_NOT_DELETE_postgres_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATA: "/var/lib/postgresql/data/pgdata"
    networks:
      - postgresql_network

  redis:
    image: "redis:5.0.9-alpine3.11"
    command: redis-server
    environment:
      - REDIS_REPLICATION_MODE=master
    networks: # connect to the bridge
      - redis_network

  celery_worker:
    image: "python3.9_django_image"
    command:
      - sh -c celery -A celery_worker.celery worker --pool=solo --loglevel=debug
    depends_on:
      - redis
    networks: # connect to the bridge
      - redis_network
      - postgresql_network

  webapp:
    image: "python3.9_django-app"
    command:
      - sh -c python manage.py runserver 0.0.0.0:8000
    depends_on:
      - postgresql
    stdin_open: true # Add this line into your service
    tty: true # Add this line into your service
    networks:
      - postgresql_network
      - nginx_network
      - redis_network

  node_reactjs:
    image: "node16_reactjs-app"
    command:
      - sh -c yarn run dev
    stdin_open: true # Add this line into your service
    tty: true # Add this line into your service
    networks:
      - nginx_network

networks:
  postgresql_network:
    driver: bridge
  redis_network:
    driver: bridge
  nginx_network:
    driver: bridge
and access it using:
domain-name:80 for the React app
api.domain-name:80 for the Django REST APIs
which I configured in nginx.
So in my scenario, how can I shift to AWS?

Azure Container Instances: Create a multi-container group from Django+Nginx+Postgres

I have dockerized a Django project with Postgres, Gunicorn, and Nginx following this tutorial.
Now I want to move the application to Azure Container Instances. Can I simply create a container group following this tutorial and expect the container images to communicate the right way?
To run the project locally I use docker-compose -f docker-compose.prod.yml up -d --build. But how is the communication between the containers handled in Azure Container Instances?
The docker-compose.prod.yml looks like this:
version: '3.7'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db

  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db

  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:
The containers will be able to communicate with each other using the service names (web, db, nginx) because they are part of the container group's local network. Also, take a look at the documentation, as you can't use docker-compose files directly unless you use the edge version of Docker Desktop.
On another note, upon restarting you will lose whatever you stored in your volumes because you are not using some kind of external storage. Look at the documentation.
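As a sketch of what the "external storage" remark can look like with the Docker ACI integration the answer alludes to (the share and storage account names below are hypothetical, and the driver options apply to that integration rather than to plain local docker-compose):

volumes:
  postgres_data:
    driver: azure_file          # back the volume with an Azure file share so data survives restarts
    driver_opts:
      share_name: myfileshare             # hypothetical file share name
      storage_account_name: mystorageacct # hypothetical storage account name
  static_volume:
  media_volume: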

how to open a postgres database created with docker in a database client?

I have a question about how to open, in a database client, a database created with Docker using https://github.com/cookiecutter/cookiecutter.
Compose local file:
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: tienda_local_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: tienda_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
Is the port on the container mapped? Try using 127.0.0.1 (assuming this is on the same machine) as the host in your local client instead of the container name. If that doesn't work, can you share your docker-compose.yaml?
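The postgres service above indeed has no ports entry, so a sketch of the mapping that comment is asking about might look like this (5432 is Postgres' default port; publishing it on the same host port is an assumption):

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: tienda_production_postgres
    ports:
      - "5432:5432"  # publish Postgres on localhost:5432 so a desktop client can connect
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres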
You don't have a network between the containers/services in your docker-compose file.
You can achieve this in a number of ways:
Link the containers/services. This is deprecated but, depending on your Docker version, may still work. Add to your docker-compose file:
django:
  ...
  links:
    - postgres
Connect the services to the same network. Add to both service definitions:
networks:
  - django
You also need to define the django network in the docker-compose file:
networks:
  django:
Connect your services via the host network. Just add to both service definitions:
network_mode: host
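Putting the second option together with the compose file from the question, a minimal sketch (only the network-related keys shown, everything else as before) would be:

services:
  django:
    # ...existing django settings...
    networks:
      - django
  postgres:
    # ...existing postgres settings...
    networks:
      - django

networks:
  django: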