Permission denied: '/code/celerybeat.pid' - django

I can't run Celery beat using Docker.
celerybeat_1 | celery.platforms.LockFailed: [Errno 13] Permission denied: '/code/celerybeat.pid'
Docker service:
celerybeat:
  <<: *django
  depends_on:
    - postgres
    - redis
  command: /start-celerybeat.sh
start-celerybeat.sh:
#!/bin/sh
set -o errexit
set -o nounset
celery -A my_project.taskapp beat -l info --loglevel=debug --scheduler django_celery_beat.schedulers:DatabaseScheduler
How can I fix that?

Delete that file. Then modify the last line of start-celerybeat.sh, adding --pidfile /tmp/celerybeat.pid to the end, so that Celery writes its pid file to a location the container user can write to.
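For reference, a minimal sketch of the amended script, assuming the same my_project.taskapp app module as above (I also kept a single log-level flag, since the original command passed both -l info and --loglevel=debug):

#!/bin/sh
set -o errexit
set -o nounset

# Write the pid file to /tmp, which the container user can write to,
# instead of the /code working directory.
celery -A my_project.taskapp beat \
    --loglevel=info \
    --scheduler django_celery_beat.schedulers:DatabaseScheduler \
    --pidfile /tmp/celerybeat.pid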

Related

Unable to open unix socket in redis - Permission denied while firing up docker container

I am trying to fire up a separate Redis container to act as a broker for Celery. Can someone help me understand why the Docker user is not able to open the UNIX socket? I have even tried making the user root, but it doesn't seem to work. Please find the Dockerfile, docker-compose file, and redis.conf file below.
Dockerfile:
FROM centos/python-36-centos7
USER root
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH=/home/django/.local/bin:$PATH
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle.conf /home/django
RUN yum install -y dnf
RUN dnf install -y libaio libaio-devel
RUN rpm -i /home/django/oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm && \
    cp /home/django/oracle.conf /etc/ld.so.conf.d/ && \
    ldconfig && \
    ldconfig -p | grep client64
COPY ./requirements /home/django/requirements
WORKDIR /home/django
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r ./requirements/development.txt
COPY . .
RUN chmod 777 /home/django
EXPOSE 8000
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
Docker-compose file:
version: '3.8'
services:
  app:
    build: .
    volumes:
      - .:/django
      - cache:/var/run/redis
    image: app_name:django
    container_name: app_name
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    image: postgres:10.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=app_name
      - POSTGRES_PASSWORD=app_password
      - POSTGRES_DB=app_db
    labels:
      description: "Postgres Database"
    container_name: app_name-db-1
  redis:
    image: redis:alpine
    command: redis-server /etc/redis/redis.conf
    restart: unless-stopped
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/var/lib/redis
      - ./redis/redis-server.log:/var/log/redis/redis-server.log
      - cache:/var/run/redis/
      - ./redis/redis.conf:/etc/redis/redis.conf
    container_name: redis
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30
volumes:
  postgres_data:
  cache:
  static-volume:
docker-entrypoint.sh:
# run migration first
python manage.py migrate
python manage.py preload_sites -uvasas -l
python manage.py preload_endpoints -uvasas -l
python manage.py collectstatic --noinput
#start celery
export C_FORCE_ROOT='true'
celery multi start 1 -A realm -l INFO -c4
# start the server
python manage.py runserver 0:8000
redis.conf:
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
logfile /var/log/redis/redis-server.log
I am new to Docker, so apologies if I have missed something obvious or have not followed some of the best practices.
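One hedged aside on the diagnosis: with unixsocketperm 770, the socket Redis creates on the shared cache volume is only accessible to its owner and group, so the user inside the app container must share that group ID. A quick way to confirm that this really is a permission problem is to temporarily loosen the socket permission in redis.conf (a debugging value only; revert it once the group IDs are aligned):

# redis.conf - temporarily open the socket to all users to check
# whether the failure is a permission problem rather than a path problem
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
logfile /var/log/redis/redis-server.log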

How to scale Heroku Django Celery app with Docker

I'm trying to deploy my Django app to Heroku, and I have several doubts about dynos, workers, and deploy configuration.
In my heroku.yml file I have two process types, one for the web and the other for Celery. I would like each to have only one dyno, but with several workers, and to be scalable if necessary.
heroku.yml:
build:
  docker:
    web: Dockerfile-django
    celery: Dockerfile-django
run:
  web: gunicorn project.wsgi --log-file -
  celery: celery -A project worker -B --loglevel=INFO
docker-compose.yml:
version: '3.7'
services:
  web:
    container_name: dilains_django_ctnr
    build:
      context: .
      dockerfile: Dockerfile-django
    restart: always
    env_file: ./project/project/.env
    command: python manage.py check
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./project:/dilains
    depends_on:
      - postgres
      - redis
    ports:
      - 8000:8000
    networks:
      - dilains-ntwk
  redis:
    container_name: dilains_redis_ctnr
    build:
      context: .
      dockerfile: Dockerfile-redis
    volumes:
      - ./redis-data:/data
    ports:
      - 3679:3679
    networks:
      - dilains-ntwk
  celery:
    container_name: dilains_celery_ctnr
    build:
      context: .
      dockerfile: Dockerfile-django
    restart: always
    env_file: ./project/project/.env
    command: celery -A project worker -B --loglevel=INFO
    volumes:
      - ./project:/dilains
    depends_on:
      - redis
      - web
      - postgres
    networks:
      - dilains-ntwk
networks:
  dilains-ntwk:
    driver: bridge
Dockerfile-django:
FROM python:3.7-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add --no-cache bash postgresql postgresql-dev gcc python3-dev musl-dev jpeg-dev zlib-dev libjpeg
RUN mkdir /dilains
COPY ./project /dilains/
COPY ./requirements.txt /dilains/
WORKDIR /dilains
RUN pip install -r requirements.txt
EXPOSE 8000
I tried to scale each process type to 4 workers with these commands:
$ heroku ps -a app_name
=== celery (Standard-1X): /bin/sh -c celery\ -A\ project\ worker\ -B\ --loglevel\=INFO (1)
celery.1: up 2020/10/23 08:05:31 +0200 (~ 41m ago)
=== web (Standard-1X): /bin/sh -c gunicorn\ project.wsgi\ --log-file\ - (1)
web.1: up 2020/10/23 08:05:40 +0200 (~ 41m ago)
$ heroku ps:scale web=1 worker=4 -a app_name
$ heroku ps:scale celery=1 worker=4 -a app_name
I'm paying for Standard-1X, which says: number of process types - unlimited, and horizontal scaling - yes.
Could anybody help, please?
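A hedged note on the scaling commands: heroku ps:scale only accepts the process-type names declared in heroku.yml (here web and celery), so worker=4 addresses a process type that does not exist. Dyno count and in-dyno concurrency are separate knobs; something along these lines (process names assumed from the heroku.yml above) scales the dynos, while concurrency is set in the run commands themselves:

# scale dyno counts for the declared process types
heroku ps:scale web=1 celery=4 -a app_name

# in-dyno concurrency belongs in heroku.yml's run commands instead, e.g.:
#   web: gunicorn project.wsgi --workers 4 --log-file -
#   celery: celery -A project worker -B --loglevel=INFO --concurrency=4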

Django Model.objects.all() returning empty QuerySet in celery task

project/project/settings.py
...
CELERY_BEAT_SCHEDULE = {
    'find-subdomains': {
        'task': 'subdiscovery.tasks.mytask',
        'schedule': 10.0
    }
}
project/subdiscovery/tasks.py
from __future__ import absolute_import, unicode_literals
from celery import shared_task
from subdiscovery.models import Domain

@shared_task
def mytask():
    print(Domain.objects.all())
    return 99
The celery worker shows an empty QuerySet:
celery_worker_1 | [2019-08-12 07:07:44,229: WARNING/ForkPoolWorker-2] <QuerySet []>
celery_worker_1 | [2019-08-12 07:07:44,229: INFO/ForkPoolWorker-2] Task subdiscovery.tasks.mytask[60c59024-cd19-4ce9-ae69-782a3a81351b] succeeded in 0.004897953000181587s: 99
However, importing the same model works in a python shell:
./manage.py shell
>>> from subdiscovery.models import Domain
>>> Domain.objects.all()
<QuerySet [<Domain: example1.com>, <Domain: example2.com>, <Domain: example3.com>]>
I should mention it's running in a Docker stack.
EDIT:
Ok, entering the running docker container
docker exec -it <web service container id> /bin/sh
and running
$ celery -A project worker -l info
works as expected:
[2019-08-13 05:12:28,945: INFO/MainProcess] Received task: subdiscovery.tasks.mytask[7b2760cf-1e7f-41f8-bc13-fa4042eedf33]
[2019-08-13 05:12:28,957: WARNING/ForkPoolWorker-8] <QuerySet [<Domain: uber.com>, <Domain: example1.com>, <Domain: example2.com>, <Domain: example3.com>]>
Here's what the docker-compose.yml looks like
version: '3'
services:
  web:
    build: .
    image: app-image
    ports:
      - 80:8000
    volumes:
      - .:/app
    command: gunicorn -b 0.0.0.0:8000 project.wsgi
  redis:
    image: "redis:alpine"
    ports:
      - 6379:6379
  celery_worker:
    working_dir: /app
    command: sh -c './wait-for web:8000 && ./wait-for redis:6379 -- celery -A project worker -l info'
    image: app-image
    depends_on:
      - web
      - redis
  celery_beat:
    working_dir: /app
    command: sh -c 'celery -A project beat -l info'
    image: app-image
    depends_on:
      - celery_worker
Any idea why the worker started with docker-compose doesn't work, but entering the running container and starting a worker does?
Reposting from reddit https://www.reddit.com/r/docker/comments/cpoedr/different_behavior_when_starting_a_celery_worker/ewqx3mp?utm_source=share&utm_medium=web2x
Your problem here is that your celery worker doesn't see the SQLite database. You need to switch to a different DB or make your ./app volume visible to the worker:
version: '3'
services:
  ...
  celery_worker:
    working_dir: /app
    command: sh -c './wait-for web:8000 && ./wait-for redis:6379 -- celery -A project worker -l info'
    image: app-image
    volumes: # <- here
      - .:/app
    depends_on:
      - web
      - redis
  ...
I suggest switching to a more production-ready DB like Postgres.
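For completeness, a minimal sketch of what that switch could look like in settings.py; the db host name assumes a hypothetical postgres service in docker-compose, and the credentials are placeholders, not values from the original question:

# project/settings.py - illustrative values only
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'app_db',           # hypothetical database name
        'USER': 'app_user',         # hypothetical credentials
        'PASSWORD': 'app_password',
        'HOST': 'db',               # the postgres service name in docker-compose
        'PORT': 5432,
    }
}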

Run coverage test in a docker container

I am trying to use the coverage tool to measure the code coverage of my Django app. When I test locally it works fine, but when I push to GitHub I get some errors in Travis CI:
Traceback (most recent call last):
  File "/usr/local/bin/coverage", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 756, in main
    status = CoverageScript().command_line(argv)
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 491, in command_line
    return self.do_run(options, args)
  File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 641, in do_run
    self.coverage.save()
  File "/usr/local/lib/python3.6/site-packages/coverage/control.py", line 782, in save
    self.data_files.write(self.data, suffix=self.data_suffix)
  File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 680, in write
    data.write_file(filename)
  File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 467, in write_file
    with open(filename, 'w') as fdata:
PermissionError: [Errno 13] Permission denied: '/backend/.coverage'
The command "docker-compose run backend sh -c "coverage run manage.py test"" exited with 1.
My .travis.yml:
language: python
python:
  - "3.6"
services:
  - docker
before_script: pip install docker-compose
script:
  - docker-compose run backend sh -c "python manage.py test && flake8"
  - docker-compose run backend sh -c "coverage run manage.py test"
after_success:
  - coveralls
and my Dockerfile:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
# Install dependencies
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# Setup directory structure
RUN mkdir /backend
WORKDIR /backend
COPY scripts/start_dev.sh /
RUN dos2unix /start_dev.sh
COPY . /backend
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol/
RUN chmod -R 755 /vol/web
USER user
docker-compose:
backend:
  container_name: backend_dev_blog
  build: ./backend
  command: "sh -c /start_dev.sh"
  volumes:
    - ./backend:/backend
  ports:
    - "8000:8000"
  networks:
    - main
  environment:
    - DB_HOST=db
    - DB_NAME=blog
    - DB_USER=postgres
    - DB_PASS=supersecretpassword
  depends_on:
    - db
So after seeing the lack of permissions on /.coverage, I simply ran chmod +x .coverage; however, this made no difference and I still received the exact same error.
Your permission issue is most likely due to the fact that you have a volume (./backend:/backend) and that the user you run in your container (USER user) does not have the right permissions to write to this volume.
Since you probably cannot change the permissions of the Travis CI directory ./backend, you could try changing the user that writes files to that location. This is easy to do with docker-compose:
backend:
  container_name: backend_dev_blog
  build: ./backend
  command: "sh -c /start_dev.sh"
  user: root
  volumes:
    - ./backend:/backend
  ports:
    - "8000:8000"
  networks:
    - main
  environment:
    - DB_HOST=db
    - DB_NAME=blog
    - DB_USER=postgres
    - DB_PASS=supersecretpassword
  depends_on:
    - db
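An alternative worth knowing about (an assumption on my part, not part of the original answer): coverage.py honours the COVERAGE_FILE environment variable, so instead of running as root you could keep USER user and redirect the data file to a path the unprivileged user can write:

# write the .coverage data file to /tmp instead of the mounted volume
docker-compose run -e COVERAGE_FILE=/tmp/.coverage backend sh -c "coverage run manage.py test"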

Django cant connect to redis in docker

Sorry for my English. I have a Django project in which I want to use Celery for background tasks, and now I need to set up Docker for this library. This is my Dockerfile:
FROM python:3
MAINTAINER Alex2
RUN apt-get update
# Install wkhtmltopdf
RUN curl -L#o wk.tar.xz https://downloads.wkhtmltopdf.org/0.12/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz \
    && tar xf wk.tar.xz \
    && cp wkhtmltox/bin/wkhtmltopdf /usr/bin \
    && cp wkhtmltox/bin/wkhtmltoimage /usr/bin \
    && rm wk.tar.xz \
    && rm -r wkhtmltox
RUN apt-get install -y cron
# for celery
ENV APP_USER user
ENV APP_ROOT /src
RUN groupadd -r ${APP_USER} \
    && useradd -r -m \
       --home-dir ${APP_ROOT} \
       -s /usr/sbin/nologin \
       -g ${APP_USER} ${APP_USER}
# create directory for application source code
RUN mkdir -p /usr/django/app
COPY requirements.txt /usr/django/app/
WORKDIR /usr/django/app
RUN pip install -r requirements.txt
This is my docker-compose.dev:
version: '2.0'
services:
  web:
    build: .
    container_name: api_dev
    image: img/api_dev
    volumes:
      - .:/usr/django/app/
      - ./static:/static
    expose:
      - "8001"
    env_file: env/dev.env
    command: bash django_run.sh
  nginx:
    build: nginx
    container_name: ng_dev
    image: img/ng_dev
    ports:
      - "8001:8001"
    volumes:
      - ./nginx/dev_api.conf:/etc/nginx/conf.d/api.conf
      - .:/usr/django/app/
      - ./static:/static
    depends_on:
      - web
    links:
      - web:web
  db:
    image: postgres:latest
    container_name: pq01
    ports:
      - "5432:5432"
  redis:
    image: redis:latest
    container_name: rd01
    command: redis-server
    ports:
      - "8004:8004"
  celery:
    build: .
    container_name: cl01
    command: celery worker --app=myapp.celery
    volumes:
      - .:/usr/django/app/
    links:
      - db
      - redis
and I get this error:
cl01 | User information: uid=0 euid=0 gid=0 egid=0
cl01 |
cl01 | uid=uid, euid=euid, gid=gid, egid=egid,
cl01 | [2018-07-31 16:40:00,207: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 2.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:02,211: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 4.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:06,217: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 6.00 seconds...
I can't understand why it won't connect. My project settings file:
CELERY_BROKER_URL = 'redis://redis:8004/0'
CELERY_RESULT_BACKEND = 'redis://redis:8004/0'
Everything looks good, but maybe I'm missing some setting in some file. Please help me solve this problem.
I think the port mapping causes the problem. So, change the redis settings in the docker-compose.dev file as follows (ports option removed):
redis:
  image: redis:latest
  container_name: rd01
  command: redis-server
and in your settings.py:
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
You don't have to map the ports unless you are using them from your local environment; inside the Compose network, other containers reach Redis directly on its default port 6379 via the service name.
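If host access to Redis is still wanted, the mapping has to target the port Redis actually listens on inside the container. A sketch, reusing the 8004 host port from the original question:

redis:
  image: redis:latest
  container_name: rd01
  command: redis-server
  ports:
    - "8004:6379"   # host port 8004 -> container port 6379 (Redis default)

With this mapping, local tools on the host connect via port 8004, while the broker URL inside the network stays redis://redis:6379/0, because containers talk to the service port directly, not to the host mapping.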