I am trying to build a chat application. I am using Cookiecutter Django as the project template, running in a Docker environment, with Celery, since I need Redis for WebSocket intercommunication. But the WebSocket keeps disconnecting even though I accept the connection in my consumer with self.accept(). I have also included CHANNEL_LAYERS in the config/settings/base.py file.
Here is my local env file.
# General
# ------------------------------------------------------------------------------
USE_DOCKER=yes
IPYTHONDIR=/app/.ipython
# Redis
# ------------------------------------------------------------------------------
REDIS_URL=redis://redis:6379/0
REDIS_HOST=redis
REDIS_PORT=6379
# Celery
# ------------------------------------------------------------------------------
# Flower
CELERY_FLOWER_USER=xxxxx
CELERY_FLOWER_PASSWORD=xxxx
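For reference, a minimal sketch of what the CHANNEL_LAYERS entry (plus the ASGI_APPLICATION setting Channels needs) typically looks like in config/settings/base.py with channels_redis and the REDIS_URL above — this is an assumed example, not a copy of my actual settings:

# Sketch only: assumed Channels settings in config/settings/base.py,
# using the channels_redis backend and the redis service from the env file above.
import os

ASGI_APPLICATION = "config.asgi.application"

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get("REDIS_URL", "redis://redis:6379/0")],
        },
    },
}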
Here is my consumers.py file
from channels.generic.websocket import JsonWebsocketConsumer


class ChatConsumer(JsonWebsocketConsumer):
    """
    This consumer is used to show user's online status,
    and send notifications.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.room_name = None

    def connect(self):
        print("Connected!")
        self.room_name = "home"
        self.accept()
        self.send_json(
            {
                "type": "welcome_message",
                "message": "Hey there! You've successfully connected!",
            }
        )

    def disconnect(self, code):
        print("Disconnected!")
        return super().disconnect(code)

    def receive_json(self, content, **kwargs):
        print(content)
        return super().receive_json(content, **kwargs)
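For completeness, here is a sketch (not part of my code above) of how the Redis channel layer is typically used from a consumer like this, via groups — the group name and event type here are assumptions:

# Sketch only: typical group-based use of the channel layer from a sync consumer.
from asgiref.sync import async_to_sync
from channels.generic.websocket import JsonWebsocketConsumer


class GroupChatConsumer(JsonWebsocketConsumer):
    def connect(self):
        self.room_name = "home"
        # Register this connection in a group backed by the Redis channel layer.
        async_to_sync(self.channel_layer.group_add)(self.room_name, self.channel_name)
        self.accept()

    def receive_json(self, content, **kwargs):
        # Fan the incoming message out to everyone in the group.
        async_to_sync(self.channel_layer.group_send)(
            self.room_name,
            {"type": "chat.message", "message": content},
        )

    def chat_message(self, event):
        # Handler invoked for "chat.message" events sent through the group.
        self.send_json(event["message"])

    def disconnect(self, code):
        async_to_sync(self.channel_layer.group_discard)(self.room_name, self.channel_name)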
Here's my routing.py file:
from django.urls import path

from chat_dj.chats import consumers

websocket_urlpatterns = [
    path('', consumers.ChatConsumer.as_asgi()),
]
Here's my asgi.py file:
"""
ASGI config
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/dev/howto/deployment/asgi/
"""
import os
import sys
from pathlib import Path
from django.core.asgi import get_asgi_application
# This allows easy placement of apps within the interior
# chat_dj directory.
ROOT_DIR = Path(__file__).resolve(strict=True).parent.parent
sys.path.append(str(ROOT_DIR / "chat_dj"))
# If DJANGO_SETTINGS_MODULE is unset, default to the local settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
# This application object is used by any ASGI server configured to use this file.
django_application = get_asgi_application()
# Import websocket application here, so apps from django_application are loaded first
from config import routing # noqa isort:skip
from channels.routing import ProtocolTypeRouter, URLRouter # noqa isort:skip
application = ProtocolTypeRouter(
    {
        "http": django_application,
        "websocket": URLRouter(routing.websocket_urlpatterns),
    }
)
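For comparison only: the Channels documentation usually wraps the websocket router in AuthMiddlewareStack and AllowedHostsOriginValidator. A sketch of that variant of the application object (assuming config/routing.py and django_application as above — my file does not currently use these wrappers):

# Sketch: the Channels-docs style application object with auth and origin validation.
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import AllowedHostsOriginValidator

application = ProtocolTypeRouter(
    {
        "http": django_application,
        "websocket": AllowedHostsOriginValidator(
            AuthMiddlewareStack(URLRouter(routing.websocket_urlpatterns))
        ),
    }
)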
Here's my local.yml file:
version: '3'

volumes:
  chat_dj_local_postgres_data: {}
  chat_dj_local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: chat_dj_local_django
    container_name: chat_dj_local_django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: chat_dj_production_postgres
    container_name: chat_dj_local_postgres
    volumes:
      - chat_dj_local_postgres_data:/var/lib/postgresql/data:Z
      - chat_dj_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres

  docs:
    image: chat_dj_local_docs
    container_name: chat_dj_local_docs
    platform: linux/x86_64
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./chat_dj:/app/chat_dj:z
    ports:
      - "9000:9000"
    command: /start-docs

  redis:
    image: redis:6
    container_name: chat_dj_local_redis
    ports:
      - "6379:6379"

  celeryworker:
    <<: *django
    image: chat_dj_local_celeryworker
    container_name: chat_dj_local_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: chat_dj_local_celerybeat
    container_name: chat_dj_local_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    image: chat_dj_local_flower
    container_name: chat_dj_local_flower
    ports:
      - "5555:5555"
    command: /start-flower
As I am using React on the frontend, here's the frontend code from which I am trying to connect to the WebSocket.
import React from 'react';
import useWebSocket, { ReadyState } from 'react-use-websocket';

export default function App() {
  const { readyState } = useWebSocket('ws://127.0.0.1:8000/', {
    onOpen: () => {
      console.log("Connected!")
    },
    onClose: () => {
      console.log("Disconnected!")
    }
  });

  const connectionStatus = {
    [ReadyState.CONNECTING]: 'Connecting',
    [ReadyState.OPEN]: 'Open',
    [ReadyState.CLOSING]: 'Closing',
    [ReadyState.CLOSED]: 'Closed',
    [ReadyState.UNINSTANTIATED]: 'Uninstantiated',
  }[readyState];

  return (
    <div>
      <span>The WebSocket is currently {connectionStatus}</span>
    </div>
  );
}
The WebSocket is constantly disconnecting and I don't know why. As I am using a Windows machine, I can't run Redis natively for local development, and I don't know what's wrong with Cookiecutter Django.
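As a side note on the Windows/no-Redis point: Channels also ships an in-memory channel layer intended only for single-process local development. A minimal sketch of using it in local settings (not what my project currently does):

# Sketch only: Redis-free channel layer for local development on Windows.
# Not suitable for production or for multiple processes.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels.layers.InMemoryChannelLayer",
    },
}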
Related
I'm trying to turn https://haystack.deepset.ai/docs/latest/tutorial5md into a Dockerized Django app. The problem is that the code works when I run it locally, but the Dockerized version gives me a connection refused error; my guess is that the two Docker containers can't find their way to each other.
This is my docker-compose.yaml file
version: '3.7'
services:
  es:
    image: elasticsearch:7.8.1
    environment:
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Dlog4j2.disable.jmx=true"
      - discovery.type=single-node
      - VIRTUAL_HOST=localhost
    ports:
      - "9200:9200"
    networks:
      - test-network
    container_name: es
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200"]
      retries: 6
  web:
    build: .
    command: bash -c "sleep 1m && python manage.py migrate && python manage.py makemigrations && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    networks:
      - test-network
    ports:
      - "8000:8000"
    depends_on:
      - es
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200"]
      retries: 6
networks:
  test-network:
    driver: bridge
and this is my apps.py
from django.apps import AppConfig
import logging
# from haystack.reader.transformers import TransformersReader
from haystack.reader.farm import FARMReader
from haystack.preprocessor.utils import convert_files_to_dicts, fetch_archive_from_http
from haystack.preprocessor.cleaning import clean_wiki_text
from django.core.cache import cache
import pickle
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
from haystack.retriever.sparse import ElasticsearchRetriever


class SquadmodelConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'squadModel'

    def ready(self):
        document_store = ElasticsearchDocumentStore(host="elasticsearch", username="", password="", index="document")
        doc_dir = "data/article_txt_got"
        s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip"
        fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
        dicts = convert_files_to_dicts(dir_path=doc_dir, clean_func=clean_wiki_text, split_paragraphs=True)
        document_store.write_documents(dicts)
        reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=True)
        document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document")
        retriever = ElasticsearchRetriever(document_store=document_store)
        self.reader = reader
        self.retriever = retriever
my views.py
from django.apps import apps as allApps
from rest_framework.decorators import api_view
from rest_framework.response import Response
from haystack.pipeline import ExtractiveQAPipeline

theApp = allApps.get_app_config('squadModel')
reader = theApp.reader
retreiver = theApp.retriever


@api_view(['POST'])
def respondQuestion(request):
    question = request.data["question"]
    pipe = ExtractiveQAPipeline(reader, retreiver)
    prediction = pipe.run(query=question, top_k_retriever=10, top_k_reader=5)
    content = {"prediction": prediction}
    return Response(content)
Again, this Django API works perfectly locally with an Elasticsearch Docker image, but in this configuration I can't manage to make it work.
Any help?
As suggested by @leandrojmp, I just needed to replace "localhost" with "es" in apps.py.
document_store = ElasticsearchDocumentStore(host="es", username="", password="", index="document")
In Django, I want to perform a Celery task (let's say add 2 numbers) when a user uploads a new file in /media. What I've done is use signals, so that when the associated Upload object is saved the Celery task is fired.
Here's my code and Docker configuration:
signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver

from core.models import Upload
from core.tasks import add_me


def upload_save(sender, instance, signal, *args, **kwargs):
    print("IN UPLOAD SIGNAL")  # <----- LOGS PRINT UP TO HERE, IN CONTAINERS
    add_me.delay(10)


post_save.connect(upload_save, sender=Upload)  # My post save signal
tasks.py
from celery import shared_task


@shared_task(ignore_result=True, max_retries=3)
def add_me(upload_id):
    print('In celery')  # <----- This is not printed when in Docker!
    return upload_id + 20
views.py
class UploadView(mixins.CreateModelMixin, generics.GenericAPIView):
    serializer_class = UploadSerializer

    def post(self, request, *args, **kwargs):
        serializer = UploadSerializer(data=request.data)
        print("SECOND AFTER")
        print(request.data)  # <------ I can see my file name here
        if serializer.is_valid():
            print("THIRD AFTER")  # <------ This is printed OK in all cases
            serializer.save()
            print("FOURTH AFTER")  # <----- But this is not printed when in Docker!
            return response.Response(
                {"Message": "Your file was uploaded"},
                status=status.HTTP_201_CREATED,
            )
        return response.Response(
            {"Message": "Failure", "Errors": serializer.errors},
            status=status.HTTP_403_FORBIDDEN,
        )
docker-compose.yml
version: "3.8"
services:
db:
# build: ./database_docker/
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: test_db
POSTGRES_USER: test_user
POSTGRES_PASSWORD: test_pass
# volumes:
# - media:/code/media
web:
build: ./docker/
command: bash -c "python manage.py migrate --noinput && python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
- media:/code/media
ports:
- "8000:8000"
depends_on:
- db
rabbitmq:
image: rabbitmq:3.6.10
volumes:
- media:/code/media
worker:
build: ./docker/
command: celery -A example_worker worker --loglevel=debug -n worker1.%h
volumes:
- .:/code
- media:/code/media
depends_on:
- db
- rabbitmq
volumes:
media:
Dockerfile
FROM python:latest
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip3 install -r requirements.txt
COPY . /code/
WORKDIR /code
Everything works OK when not in Docker.
The problem is that when I deploy the above in Docker and try to upload a file, the request never finishes, even though the file is uploaded to the media folder (confirmed by accessing its contents in both the web and worker containers).
More specifically, it seems that the Celery task is not executed (finished?) and the code after serializer.save() is never reached.
When I remove the signal (thus no Celery task is fired) everything is OK. Can someone please help me?
I just figured it out. Turns out that I needed to add the following to the __init__.py of my application.
from .celery import app as celery_app
__all__ = ("celery_app",)
Don't know why everything is running smoothly without this piece of code when I'm not using containers...
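For context, that import assumes a celery.py module sitting next to the project's settings. A minimal sketch of what it typically contains — the module name example_worker is taken from the `celery -A example_worker worker` command in docker-compose.yml, so treat the paths as assumptions:

# Sketch only: assumed example_worker/celery.py that the __init__.py import refers to.
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'example_worker.settings')

app = Celery('example_worker')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()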
I currently have a Dockerized Django application and intended to use Celery to handle a long-running task.
But docker-compose up fails with the following error:
[2018-12-17 17:25:59,710: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379//: Error -2 connecting to redis:6379. Name or service not known..
There are some similar questions about this on Stack Overflow, but they all seem to focus on CELERY_BROKER_URL in settings.py, which I believe I have set correctly as follows:
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
My docker-compose.yml :
db:
  image: postgres:10.1-alpine
  restart: unless-stopped
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  networks:
    - dsne-django-nginx

django: &python
  restart: unless-stopped
  build:
    context: .
  networks:
    - dsne-django-nginx
  volumes:
    - dsne-django-static:/usr/src/app/static
    - dsne-django-media:/usr/src/app/media
  ports:
    - 8000:8000
  depends_on:
    - db
    - redis
    - celery_worker

nginx:
  container_name: dsne-nginx
  restart: unless-stopped
  build:
    context: ./nginx
    dockerfile: nginx.dockerfile
  networks:
    - dsne-django-nginx
  volumes:
    - dsne-django-static:/usr/src/app/static
    - dsne-django-media:/usr/src/app/media
    - dsne-nginx-cert:/etc/ssl/certs:ro
    - /etc/ssl/:/etc/ssl/
    - /usr/share/ca-certificates/:/usr/share/ca-certificates/
  ports:
    - 80:80
    - 443:443
  depends_on:
    - django

redis:
  image: redis:alpine

celery_worker:
  <<: *python
  command: celery -A fv1 worker --loglevel=info
  ports: []
  depends_on:
    - redis
    - db

volumes:
  postgres_data:
  dsne-django-static:
    driver: local
  dsne-django-media:
    driver: local
  dsne-nginx-cert:

networks:
  dsne-django-nginx:
    driver: bridge
__init__.py:
from .celery import fv1 as celery_app
__all__ = ('celery_app',)
celery.py :
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
import fv1

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'fv1.settings')

app = Celery('fv1')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Where am I going wrong and why can't my celery workers connect to Redis?
Your redis container does not list port 6379.
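A minimal sketch of the change this answer points at — listing the port on the redis service in the docker-compose file above (whether expose or a published ports mapping is appropriate depends on the setup):

redis:
  image: redis:alpine
  expose:
    - "6379"
  # or, to also publish it on the host:
  # ports:
  #   - "6379:6379"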
I'm new to Docker. I've developed an app using Flask and it's working fine. When I tried deploying the app on Docker, the console says that it's running on http://127.0.0.1:5000/, but when I try to open that address in the browser it shows me "this website is not accessible".
Here's my Dockerfile:
FROM python:3.6
RUN mkdir /web
WORKDIR /web
ADD . /web/
RUN pip install -r requirements.txt
ENV FLASK_ENV="docker"
EXPOSE 5000
CMD ["python", "mongo.py"]
My docker-compose.yaml file:
version: '3'
services:
  # Define the Flask web application
  flaskapp:
    # Build the Dockerfile that is in the web directory
    build: ./web
    # Always restart the container regardless of the exit status; try and restart the container indefinitely
    restart: always
    # Expose port 8000 to other containers (not to the host of the machine)
    expose:
      - "8000"
    # Mount the web directory within the container at /home/flask/app/web
    # volumes:
    #   - ./web:/home/flask/app/web
    # Don't create this container until the redis and mongo containers (below) have been created
    depends_on:
      - redis
      - mongo
    # Link the redis and mongo containers together so that they can talk to one another
    links:
      - redis
      - mongo
    # Pass environment variables to the flask container (this debug level lets you see more useful information)
    environment:
      FLASK_DEBUG: 1
    # Deploy with 3 replicas in the case of failure of one of the containers (only in Docker Swarm)
    deploy:
      mode: replicated
      replicas: 3

  # Define the redis Docker container
  redis:
    # use the redis:alpine image: https://hub.docker.com/_/redis/
    image: redis:alpine
    restart: always
    deploy:
      mode: replicated
      replicas: 3

  # Define the redis NGINX forward proxy container
  # Define the mongo database
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: root
    # Expose port 5432 to other Docker containers
    expose:
      - "8081"
my mongo.py file:
from flask import Flask, jsonify, request
# from flask.ext.pymongo import PyMongo
from flask_pymongo import PyMongo

app = Flask(__name__)

app.config['MONGO_DBNAME'] = 'databasename'
# app.config['MONGO_URI'] = 'mongodb://username:password@hostname:port/databasename'
app.config['MONGO_URI'] = 'mongodb://root:root@localhost:27017/databasename'

mongo = PyMongo(app)


@app.route('/framework/', methods=['GET'])
def get_all_frameworks():
    framework = mongo.db.framework
    output = []
    for q in framework.find():
        output.append({'name': q['name'], 'language': q['language']})
    return jsonify({'result': output})


@app.route('/framework/find/<name>', methods=['GET'])
def get_one_framework(name):
    framework = mongo.db.framework
    q = framework.find_one({'name': name})
    if q:
        output = {'name': q['name'], 'language': q['language']}
    else:
        output = 'No results found'
    return jsonify({'result': output})


@app.route('/framework/', methods=['POST'])
def add_framework():
    framework = mongo.db.framework
    name = request.json['name']
    language = request.json['language']
    framework_id = framework.insert({'name': name, 'language': language})
    new_framework = framework.find_one({'_id': framework_id})
    output = {'name': new_framework['name'], 'language': new_framework['language']}
    return jsonify({'result': output})


@app.route('/framework/update/', methods=['POST'])
def update_framework():
    framework = mongo.db.framework
    name = request.json['name']
    language = request.json['language']
    myquery = {"name": name}
    newvalues = {"$set": {"language": language}}
    q = framework.update_one(myquery, newvalues)
    if q:
        output = {'name': q['name'], 'language': q['language']}
    else:
        output = 'No results found'
    return jsonify({'result': output})


@app.route('/framework/delete/<name>', methods=['GET'])
def delete_one_framework(name):
    framework = mongo.db.framework
    # name = request.json['name']
    myquery = {"name": name}
    q = framework.delete_one(myquery)
    if q:
        output = 'element deleted successfully '
    else:
        output = 'No results found'
    return jsonify({'result': output})


if __name__ == '__main__':
    app.run(debug=False)
    app.run(threaded=True)
    app.run(host='0.0.0.0')
To deploy my app I've used the commands:
docker build -t moapp4 .
docker run -p 5000:5000 moapp4
This shows:
Serving Flask app "mongo" (lazy loading)
Environment: docker
Debug mode: off
Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
docker-compose.yml
version: '3.1'
services:
  redis:
    image: redis:latest
    container_name: rd01
    ports:
      - '6379:6379'
  webapp:
    image: webapp
    container_name: wa01
    ports:
      - "8000:8000"
    links:
      - redis
    depends_on:
      - redis
  celery:
    build: .
    container_name: cl01
    command: celery -A server worker -l info
    depends_on:
      - redis
I also don't feel I understand links and depends_on; I tried different combinations.
Celery cannot connect to Redis. I get the following error:
[2018-08-01 13:59:42,249: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I believe I have set the broker URLs correctly in settings.py in my Django application (the webapp image):
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
What is the right way to dockerize a django project with celery and redis? TIA.
EDITS
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')

app = Celery('server')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
This is my Django project reduced to its simplest form to reproduce the error.
You have to add the Redis URL when initializing the Celery class, as:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')

app = Celery('server', broker='redis://redis:6379/0')  # Change is here <<<<
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
UPDATE
[After a long discussion]
Change your docker-compose.yml to:
version: '3.1'
services:
  redis:
    image: redis:latest
    container_name: rd01
  webapp:
    build: .
    container_name: wa01
    ports:
      - "8000:8000"
    links:
      - redis
    depends_on:
      - redis
  celery:
    build: .
    volumes:
      - .:/src
    container_name: cl01
    command: celery -A server worker -l info
    links:
      - redis
and the Dockerfile to:
FROM python:3.6
RUN mkdir /webapp
WORKDIR /webapp
COPY . /webapp
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["/start.sh"]