How to configure Docker to use Redis with Celery and Django

docker-compose.yml
version: '3.1'
services:
  redis:
    image: redis:latest
    container_name: rd01
    ports:
      - '6379:6379'
  webapp:
    image: webapp
    container_name: wa01
    ports:
      - "8000:8000"
    links:
      - redis
    depends_on:
      - redis
  celery:
    build: .
    container_name: cl01
    command: celery -A server worker -l info
    depends_on:
      - redis
I also don't feel I understand links and depends_on; I have tried different combinations.
Celery cannot connect to Redis. I get the following error:
[2018-08-01 13:59:42,249: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I believe I have set the broker URLs correctly in settings.py of my Django application (the webapp image):
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
What is the right way to dockerize a django project with celery and redis? TIA.
EDITS
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')

app = Celery('server')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
This is my Django project reduced to its simplest form to reproduce the error.

You have to add the Redis URL when initializing the Celery class:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')

app = Celery('server', broker='redis://redis:6379/0')  # Change is here <<<<
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
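If the worker still reports a 127.0.0.1 broker after this change, it helps to rule out networking first by confirming that the redis service name resolves from inside the cl01 container. A minimal sketch, assuming the redis-py package happens to be installed in that image (the requirements aren't shown):

import redis  # redis-py client, assumed to be available in the celery image

# "redis" is the docker-compose service name; containers on the same
# compose network can reach each other by that name.
conn = redis.Redis(host="redis", port=6379, db=0)
print(conn.ping())  # True if the hostname resolves and Redis answers

If this raises a connection error, the problem is the compose networking rather than the Celery configuration.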
UPDATE
[After a long discussion]
Change your docker-compose.yml to:
version: '3.1'
services:
  redis:
    image: redis:latest
    container_name: rd01
  webapp:
    build: .
    container_name: wa01
    ports:
      - "8000:8000"
    links:
      - redis
    depends_on:
      - redis
  celery:
    build: .
    volumes:
      - .:/src
    container_name: cl01
    command: celery -A server worker -l info
    links:
      - redis
and your Dockerfile to:
FROM python:3.6
RUN mkdir /webapp
WORKDIR /webapp
COPY . /webapp
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["/start.sh"]

Related

CookieCutter Django websocket disconnecting (using Docker)

I am trying to build a chat application. I am using Cookiecutter Django as the project initiator, with a Docker environment and Celery, since I need Redis for WebSocket intercommunication. But the WebSocket is constantly disconnecting even though I accept the connection in my consumer using self.accept(). I have also included CHANNEL_LAYERS in the config/settings/base.py file.
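For reference, since the CHANNEL_LAYERS setting itself isn't shown, a Redis-backed channel layer in a cookiecutter-django project typically looks roughly like this (a sketch, not the poster's actual settings; the host and port are assumptions based on the env file below):

# config/settings/base.py (sketch)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # Inside docker-compose the "redis" service name resolves;
            # the REDIS_URL from the env file could be used here instead.
            "hosts": [("redis", 6379)],
        },
    },
}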
Here is my local env file.
# General
# ------------------------------------------------------------------------------
USE_DOCKER=yes
IPYTHONDIR=/app/.ipython
# Redis
# ------------------------------------------------------------------------------
REDIS_URL=redis://redis:6379/0
REDIS_HOST=redis
REDIS_PORT=6379
# Celery
# ------------------------------------------------------------------------------
# Flower
CELERY_FLOWER_USER=xxxxx
CELERY_FLOWER_PASSWORD=xxxx
Here is my consumers.py file
from channels.generic.websocket import JsonWebsocketConsumer


class ChatConsumer(JsonWebsocketConsumer):
    """
    This consumer is used to show user's online status,
    and send notifications.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(args, kwargs)
        self.room_name = None

    def connect(self):
        print("Connected!")
        self.room_name = "home"
        self.accept()
        self.send_json(
            {
                "type": "welcome_message",
                "message": "Hey there! You've successfully connected!",
            }
        )

    def disconnect(self, code):
        print("Disconnected!")
        return super().disconnect(code)

    def receive_json(self, content, **kwargs):
        print(content)
        return super().receive_json(content, **kwargs)
Here's my routing.py file
from django.urls import path
from chat_dj.chats import consumers
websocket_urlpatterns = [
    path('', consumers.ChatConsumer.as_asgi()),
]
Here's my asgi.py file
"""
ASGI config
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/dev/howto/deployment/asgi/
"""
import os
import sys
from pathlib import Path
from django.core.asgi import get_asgi_application
# This allows easy placement of apps within the interior
# chat_dj directory.
ROOT_DIR = Path(__file__).resolve(strict=True).parent.parent
sys.path.append(str(ROOT_DIR / "chat_dj"))
# If DJANGO_SETTINGS_MODULE is unset, default to the local settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
# This application object is used by any ASGI server configured to use this file.
django_application = get_asgi_application()
# Import websocket application here, so apps from django_application are loaded first
from config import routing # noqa isort:skip
from channels.routing import ProtocolTypeRouter, URLRouter # noqa isort:skip
application = ProtocolTypeRouter(
    {
        "http": get_asgi_application(),
        "websocket": URLRouter(routing.websocket_urlpatterns),
    }
)
Here's my local.yml file
version: '3'

volumes:
  chat_dj_local_postgres_data: {}
  chat_dj_local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: chat_dj_local_django
    container_name: chat_dj_local_django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: chat_dj_production_postgres
    container_name: chat_dj_local_postgres
    volumes:
      - chat_dj_local_postgres_data:/var/lib/postgresql/data:Z
      - chat_dj_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres

  docs:
    image: chat_dj_local_docs
    container_name: chat_dj_local_docs
    platform: linux/x86_64
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./chat_dj:/app/chat_dj:z
    ports:
      - "9000:9000"
    command: /start-docs

  redis:
    image: redis:6
    container_name: chat_dj_local_redis
    ports:
      - "6379:6379"

  celeryworker:
    <<: *django
    image: chat_dj_local_celeryworker
    container_name: chat_dj_local_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: chat_dj_local_celerybeat
    container_name: chat_dj_local_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    image: chat_dj_local_flower
    container_name: chat_dj_local_flower
    ports:
      - "5555:5555"
    command: /start-flower
As I am using React for the frontend, here's the part of the frontend from where I am trying to connect to the WebSocket.
import React from 'react';
import useWebSocket, { ReadyState } from 'react-use-websocket';

export default function App() {
  const { readyState } = useWebSocket('ws://127.0.0.1:8000/', {
    onOpen: () => {
      console.log("Connected!")
    },
    onClose: () => {
      console.log("Disconnected!")
    }
  });

  const connectionStatus = {
    [ReadyState.CONNECTING]: 'Connecting',
    [ReadyState.OPEN]: 'Open',
    [ReadyState.CLOSING]: 'Closing',
    [ReadyState.CLOSED]: 'Closed',
    [ReadyState.UNINSTANTIATED]: 'Uninstantiated',
  }[readyState];

  return (
    <div>
      <span>The WebSocket is currently {connectionStatus}</span>
    </div>
  );
};
The WebSocket is constantly disconnecting and I don't know why. As I am using a Windows machine, I can't run Redis locally outside of Docker, and I don't know what's wrong with Cookiecutter Django.
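One way to narrow this down is to bypass React and connect to the consumer directly from Python; if this probe also drops immediately, the problem is on the server side rather than in the client. A minimal sketch, assuming the third-party websockets package is installed on the host:

import asyncio

import websockets  # third-party package, assumed installed: pip install websockets


async def probe():
    # Same URL the React app uses.
    async with websockets.connect("ws://127.0.0.1:8000/") as ws:
        # The ChatConsumer sends a welcome_message right after accepting.
        print(await ws.recv())


asyncio.run(probe())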

Celery beat sends the same task twice to the worker on every interval

I have the following scheduled task in example_app -> tasks.py:
from celery.schedules import crontab


@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        crontab(minute='*/1'),
        test.s(),
    )


@app.task
def test():
    print('test')
However this scheduled task is executed twice every minute:
celery_1 | [2022-02-08 16:53:00,044: INFO/MainProcess] Task example_app.tasks.test[a608d307-0ef8-4230-9586-830d0d900e67] received
celery_1 | [2022-02-08 16:53:00,046: INFO/MainProcess] Task example_app.tasks.test[5d5141cc-dcb5-4608-b115-295293c619a9] received
celery_1 | [2022-02-08 16:53:00,046: WARNING/ForkPoolWorker-6] test
celery_1 | [2022-02-08 16:53:00,047: WARNING/ForkPoolWorker-7] test
celery_1 | [2022-02-08 16:53:00,048: INFO/ForkPoolWorker-6] Task example_app.tasks.test[a608d307-0ef8-4230-9586-830d0d900e67] succeeded in 0.0014668999938294291s: None
celery_1 | [2022-02-08 16:53:00,048: INFO/ForkPoolWorker-7] Task example_app.tasks.test[5d5141cc-dcb5-4608-b115-295293c619a9] succeeded in 0.001373599996441044s: None
I read that this can be caused by changing the Django timezone away from UTC, which I have done on this project. I tried this solution from another question, but it doesn't stop the duplication:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.base')


class MyCeleryApp(Celery):
    def now(self):
        """Return the current time and date as a datetime."""
        from datetime import datetime
        return datetime.now(self.timezone)


app = MyCeleryApp('tasks')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
settings:
CELERY_BROKER_URL = "redis://redis:6379/0"
CELERY_RESULT_BACKEND = "redis://redis:6379/0"
running in docker:
celery:
  restart: always
  build:
    context: .
  command: celery --app=config.celery worker -l info
  depends_on:
    - db
    - redis
    - django
celery-beat:
  restart: always
  build:
    context: .
  command: celery --app=config.celery beat -l info
  depends_on:
    - db
    - redis
    - django
I have also tried the Docker setup like this:
celery:
  restart: always
  build:
    context: .
  command: celery --app=config.celery worker -B -l info
  depends_on:
    - db
    - redis
    - django
with the same result.
I am not sure what is causing this.
In case it helps anyone else, I solved the problem by removing the periodic task from tasks.py and defining it in settings. So the files look like this:
tasks.py
@app.task
def test():
    print('test')
settings.py
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "test-task": {
        "task": "example_app.tasks.test",
        "args": (),
        "schedule": crontab(),
    }
}
I'm not sure what was wrong with my previous setup, as I thought it was in line with the docs.
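As a usage note on the working setup above: the "schedule" value accepts either a crontab or a plain interval, so the same every-minute task could be written either way (a sketch, not the poster's code):

from datetime import timedelta

from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "test-task-cron": {
        "task": "example_app.tasks.test",
        "schedule": crontab(minute="*/1"),  # crontab() with no args also means every minute
    },
    "test-task-interval": {
        "task": "example_app.tasks.test",
        "schedule": timedelta(minutes=1),  # numeric seconds or a timedelta also work
    },
}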

ElasticSearch, FarmHaystack, Django connection Refused

I'm trying to turn this https://haystack.deepset.ai/docs/latest/tutorial5md into a dockerized Django app. The problem is that the code works when I run it locally, but the dockerized version gives me a connection refused error; my guess is that the two Docker containers can't find their way to each other.
This is my docker-compose.yaml file
version: '3.7'
services:
  es:
    image: elasticsearch:7.8.1
    environment:
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Dlog4j2.disable.jmx=true"
      - discovery.type=single-node
      - VIRTUAL_HOST=localhost
    ports:
      - "9200:9200"
    networks:
      - test-network
    container_name: es
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200"]
      retries: 6
  web:
    build: .
    command: bash -c "sleep 1m && python manage.py migrate && python manage.py makemigrations && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    networks:
      - test-network
    ports:
      - "8000:8000"
    depends_on:
      - es
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200"]
      retries: 6
networks:
  test-network:
    driver: bridge
and this is my apps.py
from django.apps import AppConfig
import logging
# from haystack.reader.transformers import TransformersReader
from haystack.reader.farm import FARMReader
from haystack.preprocessor.utils import convert_files_to_dicts, fetch_archive_from_http
from haystack.preprocessor.cleaning import clean_wiki_text
from django.core.cache import cache
import pickle
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
from haystack.retriever.sparse import ElasticsearchRetriever
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore


class SquadmodelConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'squadModel'

    def ready(self):
        document_store = ElasticsearchDocumentStore(host="elasticsearch", username="", password="", index="document")
        doc_dir = "data/article_txt_got"
        s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip"
        fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
        dicts = convert_files_to_dicts(dir_path=doc_dir, clean_func=clean_wiki_text, split_paragraphs=True)
        document_store.write_documents(dicts)
        reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=True)
        document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document")
        retriever = ElasticsearchRetriever(document_store=document_store)
        self.reader = reader
        self.retriever = retriever
my views.py
from django.apps import apps as allApps
from rest_framework.decorators import api_view
from rest_framework.response import Response
from haystack.pipeline import ExtractiveQAPipeline

theApp = allApps.get_app_config('squadModel')
reader = theApp.reader
retreiver = theApp.retriever


@api_view(['POST'])
def respondQuestion(request):
    question = request.data["question"]
    pipe = ExtractiveQAPipeline(reader, retreiver)
    prediction = pipe.run(query=question, top_k_retriever=10, top_k_reader=5)
    content = {"prediction": prediction}
    return Response(content)
Again, this Django API works perfectly locally with an Elasticsearch Docker image, but with this config I can't manage to make it work.
Any help?
As suggested by @leandrojmp, I just needed to replace "localhost" with "es" in apps.py:
document_store = ElasticsearchDocumentStore(host="es", username="", password="", index="document")

Celery tasks don't run in docker container

In Django, I want to perform a Celery task (let's say add 2 numbers) when a user uploads a new file to /media. What I've done is use signals, so when the associated Upload object is saved, the Celery task is fired.
Here's my code and Docker configuration:
signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver
from core.models import Upload
from core.tasks import add_me


def upload_save(sender, instance, signal, *args, **kwargs):
    print("IN UPLOAD SIGNAL")  # <----- LOGS PRINT UP TO HERE, IN CONTAINERS
    add_me.delay(10)


post_save.connect(upload_save, sender=Upload)  # My post save signal
tasks.py
from celery import shared_task


@shared_task(ignore_result=True, max_retries=3)
def add_me(upload_id):
    print('In celery')  # <----- This is not printed when in Docker!
    return upload_id + 20
views.py
class UploadView(mixins.CreateModelMixin, generics.GenericAPIView):
    serializer_class = UploadSerializer

    def post(self, request, *args, **kwargs):
        serializer = UploadSerializer(data=request.data)
        print("SECOND AFTER")
        print(request.data)  # <------ I can see my file name here
        if serializer.is_valid():
            print("THIRD AFTER")  # <------ This is printed OK in all cases
            serializer.save()
            print("FOURTH AFTER")  # <----- But this is not printed when in Docker!
            return response.Response(
                {"Message": "Your file was uploaded"},
                status=status.HTTP_201_CREATED,
            )
        return response.Response(
            {"Message": "Failure", "Errors": serializer.errors},
            status=status.HTTP_403_FORBIDDEN,
        )
docker-compose.yml
version: "3.8"
services:
db:
# build: ./database_docker/
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: test_db
POSTGRES_USER: test_user
POSTGRES_PASSWORD: test_pass
# volumes:
# - media:/code/media
web:
build: ./docker/
command: bash -c "python manage.py migrate --noinput && python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
- media:/code/media
ports:
- "8000:8000"
depends_on:
- db
rabbitmq:
image: rabbitmq:3.6.10
volumes:
- media:/code/media
worker:
build: ./docker/
command: celery -A example_worker worker --loglevel=debug -n worker1.%h
volumes:
- .:/code
- media:/code/media
depends_on:
- db
- rabbitmq
volumes:
media:
Dockerfile
FROM python:latest
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip3 install -r requirements.txt
COPY . /code/
WORKDIR /code
Everything works OK when not in Docker.
The problem is that when I deploy the above in Docker and try to upload a file, the request never finishes, even though the file is uploaded to the media folder (I confirmed this by accessing its contents in both the web and worker containers).
More specifically, it seems that the Celery task is never executed (or never finishes?) and the code after serializer.save() is never reached.
When I remove the signal (so no Celery task is fired) everything is OK. Can someone please help me?
I just figured it out. It turns out that I needed to add the following to the __init__.py of my application:
from .celery import app as celery_app
__all__ = ("celery_app",)
I don't know why everything runs smoothly without this piece of code when I'm not using containers...
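For completeness, that import only helps if there is a celery.py next to that __init__.py following the standard Django/Celery layout. A minimal sketch (the module names are assumptions based on the -A example_worker worker command):

# example_worker/celery.py (sketch; paths assumed)
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "example_worker.settings")

app = Celery("example_worker")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

The point of the __init__.py import is that the Django web process loads this app (and its broker settings), so .delay() calls made from the signal handler go to the configured broker rather than to Celery's defaults.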

celery workers unable to connect to dockerized redis instance using Django

I currently have a dockerized Django application and intend to use Celery to handle a long-running task.
But docker-compose up fails with the following error:
[2018-12-17 17:25:59,710: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379//: Error -2 connecting to redis:6379. Name or service not known..
There are some similar questions about this on SO, but they all seem to focus on CELERY_BROKER_URL in settings.py, which I believe I have set correctly as follows:
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
My docker-compose.yml :
db:
  image: postgres:10.1-alpine
  restart: unless-stopped
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  networks:
    - dsne-django-nginx
django: &python
  restart: unless-stopped
  build:
    context: .
  networks:
    - dsne-django-nginx
  volumes:
    - dsne-django-static:/usr/src/app/static
    - dsne-django-media:/usr/src/app/media
  ports:
    - 8000:8000
  depends_on:
    - db
    - redis
    - celery_worker
nginx:
  container_name: dsne-nginx
  restart: unless-stopped
  build:
    context: ./nginx
    dockerfile: nginx.dockerfile
  networks:
    - dsne-django-nginx
  volumes:
    - dsne-django-static:/usr/src/app/static
    - dsne-django-media:/usr/src/app/media
    - dsne-nginx-cert:/etc/ssl/certs:ro
    - /etc/ssl/:/etc/ssl/
    - /usr/share/ca-certificates/:/usr/share/ca-certificates/
  ports:
    - 80:80
    - 443:443
  depends_on:
    - django
redis:
  image: redis:alpine
celery_worker:
  <<: *python
  command: celery -A fv1 worker --loglevel=info
  ports: []
  depends_on:
    - redis
    - db
volumes:
  postgres_data:
  dsne-django-static:
    driver: local
  dsne-django-media:
    driver: local
  dsne-nginx-cert:
networks:
  dsne-django-nginx:
    driver: bridge
__init__.py:
from .celery import fv1 as celery_app
__all__ = ('celery_app',)
celery.py :
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
import fv1

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'fv1.settings')

app = Celery('fv1')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Where am I going wrong and why can't my celery workers connect to Redis?
Your redis container does not list port 6379. Note also that, unlike the other services in this compose file, the redis service is not attached to the dsne-django-nginx network, which would explain the worker's "Name or service not known" DNS error.