Developing with Celery and Docker - Django

I have noticed that when developing with Celery in a container, with a setup like this:
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  links:
    - redis
    - postgres
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
if I want to make changes to a Celery task, I have to completely rebuild the Docker image in order for the latest changes to take effect.
So I tried:
docker-compose -f celery.yml down
docker-compose -f celery.yml up
Nothing changed, then:
docker-compose -f celery.yml down
docker-compose -f celery.yml build
docker-compose -f celery.yml up
I have the new changes.
Is this the way to do it? It seems very slow to me to rebuild the image all the time. Would it be better to run Celery locally, outside the Docker containers?

Mount your . (that is, your working copy) as a volume within the container you're developing in.
That way you're using the fresh code from your working directory without having to rebuild (unless, say, you're changing dependencies or something else that requires a rebuild).
The idea is explained here by Heroku, emphasis mine:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp # <--- Whatever code your Celery workers need should be here
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

Related

Images lost every time on server when running "docker-compose up --build" + Django

I have Docker set up on a Django project. Images are stored on the server, but whenever I upload changes to the server and then run "docker-compose up --build", every image I had uploaded is lost: the paths still point to the images, but I am not able to view them. I don't know what happened there. Does anyone have an idea what to do about this? How can this issue be resolved? Is there anything that needs to be added to the docker-compose file?
Below is my docker compose file.
version: '3'
volumes:
  web-django:
  web-static:
services:
  redis:
    image: "redis:alpine"
  web:
    build: .
    image: project_web
    command: python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/mediafiles
    ports:
      - "8000:8000"
    depends_on:
      - redis
  celery:
    image: project_web
    command: celery -A project_name worker -l info
    restart: always
    depends_on:
      - web
      - redis
  celery-beat:
    image: citc_web
    command: celery -A project_name beat -l info
    restart: always
    depends_on:
      - web
      - redis
What am I missing in the above file, and how can I resolve this?

Getting error code 247 while deploying Django app

I am trying to deploy my Django application to DigitalOcean droplets, using the least expensive one, which gives me 512 MB of RAM, 1 CPU and 10 GB of SSD. After setting up everything properly, I ran docker-compose up --build to see if everything was fine, and it all launched. In my docker-compose file I use a Postgres instance, a Redis instance, a Celery instance and the Django application I wrote. If it matters, here is the docker-compose file:
version: "3.9"
services:
db:
container_name: my_table_postgres
image: postgres
ports:
- 5432/tcp
volumes:
- my_table_postgres_db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=my_table_postgres
- POSTGRES_USER=dev
- POSTGRES_PASSWORD=blablabla
redis:
container_name: redis
image: redis
ports:
- 6739:6739/tcp
environment:
- REDIS_HOST=redis-oauth-user-service
volumes:
- redis_data:/var/lib/redis/data/
my_table:
container_name: my_table
build: .
command: python manage.py runserver 0.0.0.0:5000
volumes:
- .:/api
ports:
- "5000:5000"
depends_on:
- db
- redis
celery:
image: celery
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: ['python', '-m', 'celery', '-A', 'mytable' ,'worker', '-l', 'INFO']
volumes:
- .:/api
depends_on:
- redis
- my_table
links:
- redis
volumes:
my_table_postgres_db:
redis_data:
Everything starts up, albeit quite slowly, but when I make a request from something like Postman, the docker-compose terminal says that my_table exited with code 247. Can you please tell me why? Do I need to change some setting? Or is the droplet's RAM too low?
Thank you a lot
It turned out to be just a sizing problem: the droplet had too little RAM to support all the requirements. After changing to a bigger droplet, everything worked fine.
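For what it's worth, on a 512 MB droplet it can also help to cap the worker's footprint. This is only a sketch, not part of the original fix; --concurrency and --max-tasks-per-child are standard Celery worker options:
celery:
  build:
    context: .
    dockerfile: Dockerfile
  # illustrative: one worker process, recycled every 100 tasks, to limit RAM use
  command: ['python', '-m', 'celery', '-A', 'mytable', 'worker', '-l', 'INFO', '--concurrency=1', '--max-tasks-per-child=100']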

service refers to undefined volume

I want to share temp files between my Django project and a Celery worker (the project works with TemporaryUploadedFiles, so I want the Celery worker to have access to these files in order to manage them). I've read about shared volumes, so I tried to implement one in my docker-compose file and ran it, but the command gave me this error:
$ docker compose up --build
service "web" refers to undefined volume shared_web_volume/: invalid compose project
Sometimes "web" is replaced with "celery" in the error, so neither Celery nor Django has access to this volume.
Here is my docker-compose.yml file:
volumes:
  shared_web_volume:
  postgres_data:
services:
  db:
    image: postgres:12.0-alpine
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    ports:
      - 1337:8000
    env_file:
      - ./.env
    depends_on:
      - db
  celery:
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile
    entrypoint: sh ./entrypoint.sh
    command: celery -A MoreEnergy worker --loglevel=info
    volumes:
      - "shared_web_volume/:/MoreEnergy/"
    env_file:
      - ./.env
    depends_on:
      - web
      - redis
  redis:
    image: redis:5-alpine
What am I doing wrong?
Update: the temp dir is my project folder (I've set it with the FILE_UPLOAD_TEMP_DIR variable in the settings file), so I don't need to make one more volume just for shared temp files (if I have to, tell me).
Your shared volume name is shared_web_volume, which is different from the one you use in the services' volumes sections, shared_web_volume/ (note the trailing slash). My suggestion is to erase the forward slash, like this:
volumes:
  - "shared_web_volume:/MoreEnergy/"
Don't forget to do the same thing in the celery service.
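Put together, a sketch of just the affected volumes entries (the rest of the file stays as it is):
services:
  web:
    volumes:
      - "shared_web_volume:/MoreEnergy/"
  celery:
    volumes:
      - "shared_web_volume:/MoreEnergy/"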

Why does this docker compose file build the same image four times?

When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times (for the web, celeryworker, celerybeat and flower services).
The entire process is repeated four times
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower
volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because you are specifying the build section in all the services (using the django anchor), it is getting built for every service.
If you want to use the same image for all services but build it only once, specify the build section in only one service, the one that starts first (i.e. the service with no dependencies). Then specify just the image field, without a build section, in the other services, and make those services depend on the first one, which builds the image.
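A minimal sketch of that layout, cut down from the file in the question (only the keys relevant to building are shown):
version: '3'
services:
  web:
    image: myorganisation/myapp  # built and tagged once, here
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    command: /start
  celeryworker:
    image: myorganisation/myapp  # no build section: reuses the image web built
    depends_on:
      - web
    command: /start-celeryworker
With this arrangement, docker-compose up builds the image once for web, and celeryworker simply starts a container from the same tag.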

How do I change where docker-compose mounts volumes?

I have a docker-compose.yml file which I am trying to run inside of Google Cloud Container-Optimized OS (https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os). Here's my docker-compose.yml file:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
When I run docker-compose up, I eventually get the error:
Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/my-repo': mkdir /rootfs: read-only file system
Reading further, it appears that /rootfs is locked down (https://cloud.google.com/container-optimized-os/docs/concepts/security), with only a few paths writable. I'd like to mount all my volumes under one of these directories, such as /home. Any suggestions on how I can do this? Is it possible to mount all my volumes under /home/xxxxxx by default, without having to change my docker-compose.yml file, for example by passing a flag to docker-compose up?
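For illustration only, the per-service workaround would be to replace the relative bind mounts with absolute paths under a writable directory; a sketch assuming the working copy lives at /home/jeremy/my-repo (the path that appears in the error message):
services:
  api:
    build: .
    volumes:
      - /home/jeremy/my-repo:/myapp  # absolute source path under the writable /home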