Django channels times out with daphne and worker

I have a problem with django channels.
My Django app was running perfectly with WSGI for HTTP requests.
I tried to migrate to Channels in order to allow WebSocket requests, and it turns out that after installing Channels and running ASGI (daphne) and a worker, the server answers with error 503 and the browser displays error 504 (timeout) for the HTTP requests that previously worked (the admin page, for example).
I have read all the tutorials I could find and I do not see what the problem can be. Moreover, if I run with "runserver", it works fine.
I have Nginx in front of the app (on a separate server), working as a proxy and load balancer.
I use Django 1.9.5 with asgi-redis>=0.10.0, channels>=0.17.0 and daphne>=0.15.0. The wsgi.py and asgi.py files are in the same folder. Redis is working.
The command I was previously using with WSGI (and which still works if I switch back to it) is:
uwsgi --http :8000 --master --enable-threads --module Cats.wsgi
The command that works using runserver is:
python manage.py runserver 0.0.0.0:8000
The commands that fail for the requests that work with the two other commands are:
daphne -b 0.0.0.0 -p 8000 Cats.asgi:channel_layer
python manage.py runworker
Other info:
I added 'channels' to INSTALLED_APPS (in settings.py).
Other relevant settings.py info:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "Cats.routing.app_routing",
        "CONFIG": {
            "hosts": [(os.environ['REDIS_HOST'], 6379)],
        },
    },
}
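To double-check the Redis connection the channel layer will use, here is a quick sketch with redis-py (which asgi-redis pulls in as a dependency):

import os
import redis

# Sanity-check sketch, run from a Django shell: True means the host in
# CHANNEL_LAYERS is reachable; an exception here would explain worker timeouts.
r = redis.StrictRedis(host=os.environ['REDIS_HOST'], port=6379)
print(r.ping())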
Cats/routing.py
from channels.routing import route, include
from main.routing import routing as main_routing
app_routing = [
    include(main_routing, path=r"^/ws/main"),
]
main/routing.py
from channels.routing import route, include
http_routing = [
]

stream_routing = [
    route('websocket.receive', 'main.consumers.ws_echo'),  # just for a test until everything works
]

routing = [
    include(stream_routing),
    include(http_routing),
]
main/consumers.py
def ws_echo(message):
    message.reply_channel.send({
        'text': message.content['text'],
    })
# this consumer is just for a test until everything works
Any idea what could be wrong? All help much appreciated! Thank you!
EDIT:
I tried a new thing:
python manage.py runserver 0.0.0.0:8000 --noworker
python manage.py runworker
And this does not work either, while python manage.py runserver 0.0.0.0:8000 was working...
Any idea that could help?

Channels will use the default Django views for un-routed HTTP requests.
Assuming your client-side JavaScript is right, I suggest you use only your default Cats/routing.py file, as follows:
from channels.routing import route
from main.consumers import *

app_routing = [
    route('websocket.connect', ws_echo, path="/ws/main")
]
Or, with reverse() to help with your path:
from django.urls import reverse
from channels.routing import route
from main.consumers import *

app_routing = [
    route('websocket.connect', ws_echo, path=reverse('main view name'))
]
I also think your consumer should be changed. When the browser connects over WebSockets, the server should first handle adding the message's reply channel, something like:
import json

from channels import Group

def ws_echo(message):
    Group("notifications").add(message.reply_channel)
    Group("notifications").send({
        "text": json.dumps({'testkey': 'testvalue'})
    })
The send function should probably be called on a different event, and the "notifications" Group should probably be changed to a channel dedicated to the user, something like:
import json

from channels import Group
from channels.auth import channel_session_user_from_http

@channel_session_user_from_http
def ws_echo(message):
    Group("notify-private-%s" % message.user.id).add(message.reply_channel)
    Group("notify-private-%s" % message.user.id).send({
        "text": json.dumps({'testkey': 'testvalue'})
    })
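To illustrate sending on a different event: the same per-user group can be written to from ordinary Django code, such as a view. A sketch (the view name and payload are hypothetical; Channels 1.x API):

import json

from channels import Group
from django.http import HttpResponse

def notify_user(request):
    # hypothetical example: push a message to the logged-in user's private group
    Group("notify-private-%s" % request.user.id).send({
        "text": json.dumps({'testkey': 'testvalue'})
    })
    return HttpResponse("sent")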

If you're using Heroku or Dokku, make sure you've properly set the "scale" to include the worker process. By default they will only run the web instance and not the worker!
For heroku
heroku ps:scale web=1:free worker=1:free
For Dokku, create a file named DOKKU_SCALE and add:
web=1
worker=1
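In both cases the Procfile needs to declare the two processes. A sketch built from the commands in the question ($PORT is the platform-provided port variable):

web: daphne -b 0.0.0.0 -p $PORT Cats.asgi:channel_layer
worker: python manage.py runworker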
See:
http://blog.codelv.com/2017/10/timouts-django-channels-on-dokku.html
https://blog.heroku.com/in_deep_with_django_channels_the_future_of_real_time_apps_in_django

Related

Socket handshake error when using gunicorn

I have a flask app that processes a WebSocket stream of audio from Twilio.
The app works fine without gunicorn, but when I start it with gunicorn I get only the first message of the socket (connect) and an unsuccessful handshake. Here is how the app looks:
from flask import Flask
from flask_sockets import Sockets
from geventwebsocket.handler import WebSocketHandler
from gevent import pywsgi
...

app = Flask(__name__)
sockets = Sockets(app)
...

@sockets.route('/media')
def media(ws):
    ...

if __name__ == '__main__':
    server = pywsgi.WSGIServer(('', HTTP_SERVER_PORT), app, handler_class=WebSocketHandler)
    server.serve_forever()
When I start the app directly using python flaskapp.py, it works OK.
When I start it using gunicorn by writing:
gunicorn -k flask_sockets.worker --bind 0.0.0.0:5055 --log-level=debug flaskapp:app
the connection "hangs" and carries no further than the initial connection, apparently due to the handshake failing.
It's important to note that I haven't "gevent monkey patched" the code, but I'm not sure if it has anything to do with the problem.
Any idea will be much appreciated!
Don't have the ability to test this right now, but perhaps try with:
from flask import Flask
from flask_sockets import Sockets
from geventwebsocket.handler import WebSocketHandler
from gevent import pywsgi
...

app = Flask(__name__)
sockets = Sockets(app)
...

@sockets.route('/media')
def media(ws):
    ...

server = pywsgi.WSGIServer(('', HTTP_SERVER_PORT), app, handler_class=WebSocketHandler)

if __name__ == '__main__':
    server.serve_forever()
Then change the launch command to:
gunicorn -k flask_sockets.worker --bind 0.0.0.0:5055 --log-level=debug flaskapp:server
(Gunicorn should be importing the server object, which can't live inside that final if statement, as that code only runs when launched with Python directly.)
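Since the question notes the code was never gevent monkey patched, that is also worth trying; patching is conventionally done at the very top of the entry module, before any other imports. A sketch:

# gevent monkey patching must happen before any other imports so the
# standard library's sockets become cooperative; worth trying if the
# handshake still fails after the restructuring above.
from gevent import monkey
monkey.patch_all()

from flask import Flask
# ...rest of flaskapp.py unchanged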

raise ConnectionError(self._error_message(e)) kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused

A minimal django/celery/redis setup is running locally, but when deployed to Heroku it gives me the following error when I run Python:
raise ConnectionError(self._error_message(e))
kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused.
This is my tasks.py file in my application directory:
from celery import Celery
import os

app = Celery('tasks', broker='redis://localhost:6379/0')
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])

@app.task
def add(x, y):
    return x + y
requirements.txt:
django
gunicorn
django-heroku
celery
redis
celery-with-redis
django-celery
kombu
I have set the worker dyno to 1.
Funny thing is, I could have sworn it was working before; now it doesn't work for some reason.
Once you have a minimal django-celery-redis project set up locally, here is how you deploy it on Heroku:
Add to your tasks.py:
import os

app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Make sure your requirements.txt is like this:
django
gunicorn
django-heroku
celery
redis
Add to your Procfile: "worker: celery worker --app=hello.tasks.app"
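For completeness, a full Procfile typically pairs that worker line with a web process. A sketch (the gunicorn module path is an assumption based on the hello package used below):

web: gunicorn hello.wsgi
worker: celery worker --app=hello.tasks.app --loglevel=info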
Make sure it still runs locally:
enter into the terminal: export REDIS_URL=redis://
run heroku local&
run python and enter:
import hello.tasks
hello.tasks.add.delay(1, 2)
It should return something like:
<AsyncResult: e1debb39-b61c-47bc-bda3-ee037d34a6c4>
Then deploy:
heroku apps:create minimal-django-celery-redis
heroku addons:create heroku-redis -a minimal-django-celery-redis
git add .
git commit -m "Demo"
git push heroku master
heroku open&
heroku ps:scale worker=1
heroku run python
import hello.tasks
hello.tasks.add.delay(1, 2)
You should see the task running in the application logs: heroku logs -t -p worker
This solved it for me; I forgot to import celery in project/__init__.py, like so:
from .celery import app as celery_app

__all__ = ("celery_app",)
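For context, that import assumes the conventional layout where a celery.py module sits next to settings.py. A sketch of what it usually contains ("project" stands for the actual package name):

# Sketch of a conventional project/celery.py that the __init__.py import expects.
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project')
app.config_from_object('django.conf:settings')  # read broker settings from Django
app.autodiscover_tasks()  # discover tasks.py modules in installed apps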

Django Channels Worker not responding to websocket.connect

I'm having a problem with django channels. Daphne accepts WebSocket CONNECT requests properly, but then the worker doesn't respond to the request with the supplied method in consumers.py. The thing is, this happens only most of the time. Sometimes it responds with the method in consumers.py, but most of the time the worker doesn't respond at all. I have duplicate code working fine in a Vagrant (trusty64) environment, but the code behaves like this on an actual trusty64 machine. It should be noted that the trusty64 machine that hosts the app also has other applications running (about 4 apps running at the same time).
I have a routing.py set up like this:
from channels import route
from app.consumers import connect_tracking, disconnect_tracking
channel_routing = [
    route("websocket.connect", connect_tracking, path=r'^/websocket/tms/tracking/stream/$'),
    route("websocket.disconnect", disconnect_tracking, path=r'^/websocket/tms/tracking/stream/$'),
]
with the corresponding consumers.py that looks like this:
import json
from channels import Group
from channels.sessions import channel_session
from channels.auth import http_session_user, channel_session_user, channel_session_user_from_http
from django.conf import settings

@channel_session_user_from_http
def connect_tracking(message):
    group_name = settings.TRACKING_GROUP_NAME
    print "%s is joining %s" % (message.user, group_name)
    Group(group_name).add(message.reply_channel)

@channel_session_user
def disconnect_tracking(message):
    group_name = settings.TRACKING_GROUP_NAME
    print "%s is leaving %s" % (message.user, group_name)
    Group(group_name).discard(message.reply_channel)
and some Channels-related lines in settings.py like this:
redis_host = os.environ.get('REDIS_HOST', 'localhost')

CHANNEL_LAYERS = {
    "default": {
        # This example app uses the Redis channel layer implementation asgi_redis
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(redis_host, 6379)],
        },
        "ROUTING": "tms_app.routing.channel_routing",
    },
}
Referencing another question, I've tried running daphne and the worker like this:
daphne tms_app.asgi:channel_layer --port 9015 --bind 0.0.0.0 -v2
python manage.py runworker -v3
I've captured daphne's and the worker's logs; they look like this:
Daphne log :
2016-12-30 17:00:18,870 INFO Starting server at 0.0.0.0:9015, channel layer tms_app.asgi:channel_layer
2016-12-30 17:00:26,788 DEBUG WebSocket open for websocket.send!APpWONQKKDXR
192.168.31.197:48933 - - [30/Dec/2016:17:00:26] "WSCONNECT /websocket/tms/tracking/stream/" - -
2016-12-30 17:00:26,790 DEBUG Upgraded connection http.response!sqlMPEEtolDP to WebSocket websocket.send!APpWONQKKDXR
corresponding worker log :
2016-12-30 17:00:22,265 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2016-12-30 17:00:22,265 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
As you can see, when there's a WSCONNECT event, the worker doesn't respond to it.
There's another question that's close to this issue that was solved by downgrading Twisted to 16.2 but it doesn't work for me.
UPDATE January 3, 2017
I cannot replicate the issue on a local Vagrant machine despite using the same code and the same settings for nginx, supervisor, gunicorn and daphne. I tried changing the channel layer settings to use IPC instead of Redis, and it works. Here are the settings:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_ipc.IPCChannelLayer",
        "ROUTING": "tms_app.routing.channel_routing",
        "CONFIG": {
            "prefix": "tms",
        },
    },
}
However, this does not solve the current problem, as I intend to use the Redis channel layer because it is easier to scale than IPC. Does this mean there's something wrong with my Redis server?
I think the reason your connection doesn't complete is that you are not sending the accept message, like this:
message.reply_channel.send({'accept': True})
This is what works for my version of Channels, but you should check the docs for your version to make sure what works for you.
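Applied to the consumer from the question, a minimal sketch (assuming Channels 1.x, where the connect handler has to accept the handshake before joining groups):

from channels import Group
from channels.auth import channel_session_user_from_http
from django.conf import settings

@channel_session_user_from_http
def connect_tracking(message):
    message.reply_channel.send({'accept': True})  # complete the WebSocket handshake first
    Group(settings.TRACKING_GROUP_NAME).add(message.reply_channel)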

Why is python-pdfkit hanging on printing page with OpenLayers3 content when run with uWSGI and NGINX?

I'm using Django served by uWSGI and NGINX.
Ubuntu 14.04.1 LTS 64-bit
Python 3.4
Django 1.7.4
uWSGI 1.9.17.1-debian (64bit)
NGINX 1.4.6
python-pdfkit 0.5.0
wkhtmltopdf 0.12.2.1
OpenLayers v3.0.0
When I try running pdfkit.from_url(...) to print a map to PDF, the request times out.
More specifically, it hangs in Python's subprocess.py communicate, in self._communicate:
with _PopenSelector() as selector:
    if self.stdin and input:
        selector.register(self.stdin, selectors.EVENT_WRITE)
    if self.stdout:
        selector.register(self.stdout, selectors.EVENT_READ)
    if self.stderr:
        selector.register(self.stderr, selectors.EVENT_READ)

    while selector.get_map():
        ...
selector.get_map() always returns a valid result, ensuring an infinite loop.
If I run this in the Django development server (instead of uWSGI+NGINX) everything runs fine.
in my view:
wkhtmltopdfBinLocationString = '/usr/local/bin/wkhtmltopdf'
wkhtmltopdfBinLocationBytes = wkhtmltopdfBinLocationString.encode('utf-8')
# this fixes some leftover python2 assumptions about strings
config = pdfkit.configuration(wkhtmltopdf=wkhtmltopdfBinLocationBytes)
pdfkit.from_url(reportPdfUrl, reportPdfFile, configuration=config, options={
    'javascript-delay': 1500
})
In several places I have seen answers along the lines of "set the close-on-exec flag on the socket" solving similar issues.
Is this something I can set from my "from_url" options (wkhtmltopdf does not accept it by that name), or can I configure uWSGI to assume close-on-exec? I have not been able to make either of these work, but maybe I just need help with changing my uWSGI configuration file:
[uwsgi]
workers = 1
chdir = [...]
plugins = python34
wsgi-file = [...]/wsgi.py
pythonpath = [...]
I tried something like
close-on-exec = true
but that didn't seem to do anything.
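One way to test the descriptor theory outside pdfkit is to invoke wkhtmltopdf directly through subprocess with close_fds=True, so the child does not inherit uWSGI's sockets. A sketch (the URL and output path are placeholders; close_fds=True is already the Python 3 default on POSIX, made explicit here):

import subprocess

# Diagnostic sketch: if this completes inside the uWSGI worker while
# pdfkit.from_url hangs, inherited descriptors are the likely culprit.
proc = subprocess.Popen(
    ['/usr/local/bin/wkhtmltopdf', '--javascript-delay', '1500',
     'http://example.com/map-report', '/tmp/report.pdf'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
out, err = proc.communicate()
print(proc.returncode, err.decode('utf-8', 'replace'))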
NOTE: the wsgi.py file is simple:
"""
WSGI config for dst project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "[my_project].settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Any thoughts?

Running periodic tasks with django and celery

I'm trying to create a simple periodic background task using the Django-Celery-RabbitMQ combination. I installed Django 1.3.1, and I downloaded and set up djcelery. Here is what my settings.py file looks like:
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
....
import djcelery
djcelery.setup_loader()
...
INSTALLED_APPS = (
    'djcelery',
)
And I put a 'tasks.py' file in my application folder with the following contents:
from celery.task import PeriodicTask
from celery.registry import tasks
from datetime import timedelta
from datetime import datetime

class MyTask(PeriodicTask):
    run_every = timedelta(minutes=1)

    def run(self, **kwargs):
        # datetime.now() must be converted to a string before concatenation
        self.get_logger().info("Time now: " + str(datetime.now()))
        print("Time now: " + str(datetime.now()))

tasks.register(MyTask)
And then I start up my django server (local development instance):
python manage.py runserver
Then I start up the celerybeat process:
python manage.py celerybeat --logfile=<path_to_log_file> -l DEBUG
I can see entries like this in the log:
[2012-04-29 07:50:54,671: DEBUG/MainProcess] tasks.MyTask sent. id->72a5963c-6e15-4fc5-a078-dd26da663323
And I can also see the corresponding entries getting created in the database, but I can't find where it is logging the text I specified in the run function of the MyTask class.
I tried fiddling with the logging settings and tried using the Django logger instead of the Celery logger, but to no avail. I'm not even sure my task is getting executed. If I print any debug information in the task, where does it go?
Also, this is the first time I'm working with any kind of message queuing system. It looks like the task will get executed as part of the celerybeat process, outside the Django web framework. Will I still be able to access all the Django models I created?
Thanks,
Venkat.
Celerybeat is the piece that pushes tasks when they are due, but it does not execute them. Your task instances are stored in the RabbitMQ server. You need to run the celeryd daemon to execute your tasks:
python manage.py celeryd --logfile=<path_to_log_file> -l DEBUG
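On the Django models question: with djcelery.setup_loader(), celeryd runs with the Django settings loaded, so the ORM is available inside tasks. A sketch (the app and model names are placeholders):

from celery.task import PeriodicTask
from datetime import timedelta

class CountEntriesTask(PeriodicTask):
    run_every = timedelta(minutes=5)

    def run(self, **kwargs):
        # placeholder model: any installed app's models can be queried here
        from myapp.models import Entry
        self.get_logger().info("Entry count: %d" % Entry.objects.count())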
Also, if you are using RabbitMQ, I recommend installing the RabbitMQ management plugin:
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
It will be available at http://<host>:55672/ (login: guest, password: guest). There you can check how many tasks are in your RabbitMQ instance.
You should check the RabbitMQ logs, since Celery sends the tasks to RabbitMQ and it should execute them. So all the prints from the tasks should be in the RabbitMQ logs.