Requests to Django views that start Celery tasks time out - django

I'm deploying a Django app with Docker. Here is my docker-compose.yml:
version: '3.1'
services:
  b2_nginx:
    build: ./nginx
    container_name: b2_nginx
    ports:
      - 1904:80
    volumes:
      - ./app/cv_baza/static:/static:ro
    restart: always
  b2_app:
    build: ./app
    container_name: b2_app
    volumes:
      - ./app/cv_baza:/app
    restart: always
  b2_db:
    container_name: b2_db
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: -
      MYSQL_DATABASE: cvbaza2
    volumes:
      - ./db:/var/lib/mysql
      - ./init:/docker-entrypoint-initdb.d
  rabbitmq:
    container_name: b2_rabbit
    hostname: rabbitmq
    image: rabbitmq:latest
    ports:
      - "5672:5672"
    restart: on-failure
  celery_worker:
    build: ./app
    command: sh -c "celery -A cv_baza worker -l info"
    container_name: celery_worker
    volumes:
      - ./app/cv_baza:/app
    depends_on:
      - b2_app
      - b2_db
      - rabbitmq
    hostname: celery_worker
    restart: on-failure
  celery_beat:
    build: ./app
    command: sh -c "celery -A cv_baza beat -l info"
    container_name: celery_beat
    volumes:
      - ./app/cv_baza:/app
    depends_on:
      - b2_app
      - b2_db
      - rabbitmq
    hostname: celery_beat
    image: cvbaza_v2_b2_app
    restart: on-failure
  memcached:
    container_name: b2_memcached
    ports:
      - "11211:11211"
    image: memcached:latest
networks:
  default:
In this configuration, hitting any route that is supposed to start a task just hangs until the request eventually times out. Example of such a route:
class ParseCSV(views.APIView):
    parser_classes = [MultiPartParser, FormParser]

    def post(self, request, format=None):
        path = default_storage.save("./internal/documents/csv/resumes.csv", File(request.data["csv"]))
        parse_csv.delay(path)
        return Response("Task has started")
The task at hand:
import csv

from celery import shared_task

from .models import Resumes  # assuming the model lives in the app's models.py

@shared_task
def parse_csv(file_path):
    with open(file_path) as resume_file:
        file_read = csv.reader(resume_file, delimiter=",")
        for row in file_read:
            new_resume = Resumes(first_name=row[0], last_name=row[1], email=row[2],
                                 tag=row[3], university=row[4], course=row[5],
                                 year=row[6], cv_link=row[7])
            new_resume.save()
None of the Docker containers produces an error. Nothing crashes; the request just times out and fails silently. Does anyone have a clue where the issue might lie?

Are you checking the result of the Celery task?
result = parse_csv.delay(path)
task_id = result.id
Then somewhere else (perhaps another view):
from celery.result import AsyncResult

task = AsyncResult(task_id)
if task.ready():
    status_message = task.get()
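Separately, a hang on .delay() itself usually means the broker is unreachable: by default the publish blocks while the connection is retried. Nothing in the compose file above sets a broker URL for the Django app, so it is worth confirming it points at the rabbitmq service rather than Celery's default of localhost. A minimal sketch, assuming cv_baza loads its Celery config from Django settings with the CELERY_ namespace (adjust to the project's actual setup):

# settings.py -- a sketch with hypothetical names; this namespaced style requires
# app.config_from_object("django.conf:settings", namespace="CELERY") in celery.py
CELERY_BROKER_URL = "amqp://guest:guest@rabbitmq:5672//"  # compose service name, not localhost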

Related

Project with Django, Docker, Celery, Redis giving error [MainProcess] Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused

I'm trying to create a Django project with Celery and Redis for the messaging service, using docker-compose. I'm getting Cannot connect to amqp://guest:**@127.0.0.1:5672. I'm not using guest as a user anywhere or 127.0.0.1:5672, and amqp is for RabbitMQ, but I'm not using RabbitMQ. So I don't know whether my docker-compose volumes are set incorrectly for Celery to pick up the settings, where it is getting amqp from, or whether the broker is misconfigured.
docker-compose.yml:
version: '3'
# network
networks:
  data:
  management:
volumes:
  postgres-data:
  redis-data:
services:
  nginx:
    image: nginx
    ports:
      - "7001:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ../static:/static
    command: [nginx-debug, '-g', 'daemon off;']
    networks:
      - management
    depends_on:
      - web
  db:
    image: postgres:14
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data/
      - ../data:/docker-entrypoint-initdb.d # import SQL dump
    environment:
      - POSTGRES_DB=link_checker_db
      - POSTGRES_USER=link_checker
      - POSTGRES_PASSWORD=passw0rd
    networks:
      - data
    ports:
      - "5432:5432"
  web:
    image: link_checker_backend
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DJANGO_LOG_LEVEL=ERROR
      - INITIAL_YAML=/code/initial.yaml
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code
    command: >
      sh -c "
      python manage.py migrate --noinput &&
      python manage.py collectstatic --no-input &&
      python manage.py runserver 0.0.0.0:7000
      "
    networks:
      - data
      - management
    depends_on:
      - db
  redis:
    image: redis
    volumes:
      - redis-data:/data
    networks:
      - data
  celery-default:
    image: link_checker_backend
    volumes:
      - ../:/code
      - ../link_checker:/code/link_checker
      - ../link_checker_django/:/code/link_checker_django
      - ./settings.py:/code/link_checker_django/settings.py
    working_dir: /code/link_checker
    command: celery -A celery worker --pool=prefork --concurrency=30 -l DEBUG
    networks:
      - data
    depends_on:
      - db
      - redis
celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")
app = Celery("link_checker")
app.config_from_object("django.conf:settings")
app.conf.task_create_missing_queues = True
app.autodiscover_tasks()
settings.py
BROKER_URL = "redis://redis:6379/0"
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
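For what it's worth, amqp://guest:**@127.0.0.1:5672// is Celery's built-in default broker, which it falls back on whenever it finds no broker setting; that can happen if the Django settings module is never loaded by the worker, or if the setting's name is not one the chosen configuration style reads. A minimal sketch of the namespaced style, assuming Celery 4+, where only CELERY_-prefixed settings are picked up:

# celery.py
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "link_checker_django.settings")
app = Celery("link_checker")
# With namespace="CELERY", Celery reads CELERY_BROKER_URL, CELERY_ACCEPT_CONTENT, ...
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

# settings.py
# CELERY_BROKER_URL = "redis://redis:6379/0"

With that style, BROKER_URL above would be renamed CELERY_BROKER_URL.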
File structure:
link_checker_django
  deploy
    docker-compose.yml
  link_checker
    celery.py
  link_checker_django
    settings.py
  manage.py
Thanks for any help.

Django/Docker/Postgresql: app connected to the 'wrong' database

I have developed a project with Django/Docker/PostgreSQL and use docker-compose to deploy it on a remote Linux server.
I want to deploy 2 apps based on the same code (and the same settings file), preprod and demo, with two distinct PostgreSQL databases (the databases are not dockerized): ecrf_covicompare_preprod and ecrf_covicompare_demo, respectively for preprod and demo.
App tests will be done by different teams.
I have:
2 docker-compose files, docker-compose.preprod.yml and docker-compose.demo.yml, respectively for preprod and demo
.env files, .env.preprod and .env.preprod.demo, respectively for preprod and demo
Database connection parameters are set in these .env files.
But my 2 apps connect to the same database (ecrf_covicompare_preprod).
If I connect to my 'web demo' container and print its environment variables, I get SQL_DATABASE=ecrf_covicompare_demo, which is correct.
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_covicompare_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_covicompare_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_covicompare_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_covicompare_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_covicompare_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
.env.preprod
SQL_DATABASE=ecrf_covicompare_preprod
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
docker-compose.demo.yml (simplified)
version: '3.7'
services:
  demo_web:
    container_name: ecrf_covicompare_web_demo
    //
    env_file:
      - ./.env.preprod.demo
    //
  demo_redis:
    container_name: ecrf_covicompare_redis_demo
    image: "redis:alpine"
  demo_celery:
    container_name: ecrf_covicompare_celery_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_celery-beat:
    container_name: ecrf_covicompare_celery-beat_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_nginx:
    container_name: ecrf_covicompare_nginx_demo
    //
    ports:
      - 1380:80
    depends_on:
      - demo_web
.env.preprod.demo
SQL_DATABASE=ecrf_covicompare_demo
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
I'm new to all the Docker Compose stuff, but to me your configuration looks fine. A few ideas I had:
you mention two different PostgreSQL databases. Are those hosted on the same PostgreSQL server or on two different servers? In both .env files you set DATABASE=postgres. If they are running on the same server instance, I could imagine this leading to them using the same database, depending on how this variable is used later on.
are you sure the env variables are set in time? Once you manually check them from inside the container they are set correctly, but are they also set while your containers are booting up? I'm no expert on how Docker Compose handles these files, but maybe you could try printing the env variables during container initialization from within some script.
are you completely sure it's not hardcoded somewhere? Maybe try searching all source files for the DB name they both connect to, including default fallbacks in settings.py (see the sketch below). I have failed with this far too often not to check it.
Hope this helps. It's a bit of a guess, but your configuration otherwise looks fine to me.
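One concrete pattern to look for, assuming settings.py builds DATABASES from these variables: a hardcoded fallback default can silently select the preprod database whenever the variable is missing at the moment Django loads. A minimal sketch of that pattern (the names here are hypothetical):

import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # If SQL_DATABASE is unset when this module is imported, the fallback
        # wins and both deployments quietly share the preprod database.
        "NAME": os.environ.get("SQL_DATABASE", "ecrf_covicompare_preprod"),
        "USER": os.environ.get("SQL_USER", "user_preprod"),
    }
}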

Django/SCSS: missing staticfiles manifest -> staticfiles.json with empty paths

EDIT 05/02/2021 10:45
I still have not found a solution to my issue.
Reading other posts shows that there are many possible causes for it.
Could someone help and explain to me how django_compressor should work?
For example:
is it right that the manifest file is called staticfiles.json?
is it abnormal that this file contains no paths?
which paths should it contain?
...
EDIT 04/02/2021 14:00
I ran
python manage.py findstatic --verbosity 2 theme.scss
and got the output below. Does that mean the path is correct?
Found 'theme.scss' here:
/usr/src/app/static/theme.scss
Looking in the following locations:
/usr/src/app/static
EDIT 04/02/2021 13:38
I should mention that with DEBUG = True and runserver it works,
meaning I can customize Bootstrap,
and I can see staticfiles.json in /usr/src/app/static, but this file contains no paths: {"paths": {}, "version": "1.0"}
EDIT 04/02/2021 13:04
During startup, the logs mention
0 static files copied to '/usr/src/app/static'.
Found 'compress' tags in:
/usr/src/app/cafe/templates/cafe/table.html
...
But I've checked inside the web container and the static files are available at /usr/src/app/static as expected (see the docker-compose file).
I am trying to use SCSS in my Django project with django_compressor and django-libsass.
Stack: Django/Nginx/PostgreSQL/Docker
I have configured 2 development environments: dev and preprod.
I get an error: ValueError: missing staticfiles manifest entry for 'theme.scss'
I did not understand it, because this had worked before; but after deleting the containers/images/volumes and rebuilding the whole project, I got this error.
I've tried DEBUG = True, STATIC_ROOT = 'static'... but nothing works.
The logs only raise this error.
Project structure:
- app
  - core
  - static
    - bootstrap
    - css
    - js
    - theme.scss
- nginx
settings -> preprod.py
DEBUG = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_FINDERS = [
    'compressor.finders.CompressorFinder',
]
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_libsass.SassCompiler'),
)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
LIBSASS_OUTPUT_STYLE = 'compressed'
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
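A likely culprit in the settings above, assuming that snippet is complete: defining STATICFILES_FINDERS replaces Django's defaults, so with only CompressorFinder listed, collectstatic discovers no files at all, which would explain both the "0 static files copied" log line and the empty manifest. A sketch that keeps Django's default finders alongside compressor's:

STATICFILES_FINDERS = [
    # Django's default finders; they are dropped as soon as the setting is overridden
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # django_compressor's finder, added on top of the defaults
    'compressor.finders.CompressorFinder',
]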
entrypoint.preprod.sh
python manage.py collectstatic --no-input
python manage.py compress --force
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: redis
    image: "redis:alpine"
  celery:
    container_name: celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  celery-beat:
    container_name: celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      db:
        condition: service_started
      web:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - app_volume:/var/lib/postgresql/backup
    env_file:
      - ./.env.preprod.db
  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  app_volume:

docker-compose issue - Celery container not able to access DB container

Regards
I have been working on a Django application that runs on Redis, PostgreSQL, Celery, and RabbitMQ.
I have written a docker-compose file to run all of these services in separate containers.
Here's my docker-compose.yml
version: "3.2"
services:
app:
build:
context: .
image: &app app
ports:
- "8000:8000"
env_file: &envfile
- env.env
volumes:
- ./app:/app
environment:
- DB_HOST=db
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
- redis
- broker
redis:
restart: always
image: redis:latest
ports:
- "6379:6379"
db:
image: postgres:12-alpine
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
worker:
build: .
image: *app
restart: always
env_file: *envfile
command: ["celery", "worker", "--app=worker.worker.app", "--concurrency=1", "--hostname=worker#%h", "--loglevel=INFO"]
volumes:
- ./app:/app
depends_on:
- broker
- redis
- db
broker:
image: rabbitmq:3
env_file: *envfile
ports:
- 5672:5672
flower:
image: zoomeranalytics/flower:0.9.1-4.0.2
restart: "no"
env_file: *envfile
ports:
- "5555:5555"
depends_on:
- broker
My application is working just fine and the containers seem to be working; the problem arises when I push an async job to the worker container. The worker container picks up the job and starts processing, but when I try to access the DB in the worker, it gives me the following error:
Task [640127f3-7769-4757-8c33-8de9052ca92c] raised unexpected: OperationalError('could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket "/tmp/.s.PGSQL.5432"?\n')
I understand that my worker container is trying to access the DB and not finding it. Can someone please help me access my DB container from the worker container? I'm quite stuck with this.
I tried depends_on and links, but nothing seems to be working.
Got it, thanks to my teammate.
I missed setting an environment variable in the worker container; as soon as I passed it, voila!
environment:
  - DB_HOST=db
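For completeness, a sketch of the corrected worker service, assuming the same DB_HOST convention the app service already uses:

worker:
  build: .
  image: *app
  restart: always
  env_file: *envfile
  environment:
    - DB_HOST=db  # without this, Django falls back to the local Unix socket, hence the error above
  command: ["celery", "worker", "--app=worker.worker.app", "--concurrency=1", "--hostname=worker@%h", "--loglevel=INFO"]
  volumes:
    - ./app:/app
  depends_on:
    - broker
    - redis
    - db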

Handling RACE CONDITION in Docker containers of a django app which include postgres,nginx,celery,redis,elasticsearch

I am new to Docker. I am having trouble deploying multiple containers at the same time; a race condition occurs. Every time I enter the docker-compose up --build command, elasticsearch or redis starts first, then the database starts and exits with error code 0, as do celery and nginx. I tried using the "sleep" command, but no luck (maybe I missed something). Here is my docker-compose.yml file:
version: "3"
services:
db:
image: postgres:9.6-alpine
container_name: myblogdb
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mydb
volumes:
- myblogdb_data:/var/lib/postgresql/data/
ports:
- "4949:5432"
web:
build: ./app
command: sh -c "gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app:/usr/src/app/
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "8000:8000"
depends_on:
- db
- redis
- es
nginx:
restart: always
build: ./nginx
volumes:
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "1337:80"
depends_on:
- web
redis:
image: "redis:alpine"
es:
image: elasticsearch:5.6.15-alpine
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms256M -Xmx256M"
volumes:
- my_blog_esdata:/usr/share/elasticsearch/data/
ports:
- "9200:9200"
celery:
restart: always
build: ./app
command: sh -c "celery -A djangoApp worker -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
celery-beat:
restart: always
build: ./app
command: sh -c "celery -A djangoApp beat -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
volumes:
myblogdb_data:
my_blog_static_volume:
my_blog_media_volume:
my_blog_esdata:
Please let me know if I'm missing something here. Thanks
You need to add a script like wait-for-it or wait-for in order to control startup and shutdown order in Compose; it basically tells a service to wait for another service before running its start command.
So if you want Django to wait for PostgreSQL, the command in docker-compose will be:
["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
There is a full explanation in the following answer; it describes the approach for MySQL and Golang, but the same concept applies to your case.
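Alternatively, newer Compose releases support the long form of depends_on together with a healthcheck, which avoids a wrapper script entirely. A minimal sketch, assuming a recent docker compose that implements the Compose Specification:

services:
  db:
    image: postgres:9.6-alpine
    healthcheck:
      # pg_isready exits 0 only once PostgreSQL accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    build: ./app
    depends_on:
      db:
        condition: service_healthy  # start web only after db reports healthy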