Using Websocket in Django View Not Working - django

Problem Summary
I am sending data to a front-end (React component) using Django and websockets. When I run the app and send the data from my console, everything works. When I use a button on the front-end to trigger a Django view that runs the same function, it does not work and generates a confusing error message.
I want to be able to click a front-end button which begins sending the data to the websocket.
I am new to Django, websockets and React and so respectfully ask you to be patient.
Overview
Django back-end and React front-end connected using Django Channels (web-sockets).
User clicks button on front-end, which does fetch() on Django REST API end-point.
[NOT WORKING] The above endpoint's view begins sending data through the web-socket.
Front-end is updated with this value.
Short Error Description
The error Traceback is long, so it is included at the end of this post. It begins with:
Internal Server Error: /api/run-create
And ends with:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
What I've Tried
Sending Data Outside The Django View
The function below sends data to the web-socket.
Works perfectly when I run it in my console - front-end updates as expected.
Note: the same function causes the attached error when run from inside the Django view.
import json
import time

import numpy as np
import websocket


def gen_fake_path(num_cities):
    path = list(np.random.choice(num_cities, num_cities, replace=False))
    path = [int(num) for num in path]
    return json.dumps({"path": path})


def fake_run(num_cities, limit=1000):
    ws = websocket.WebSocket()
    ws.connect("ws://localhost:8000/ws/canvas_data")
    while limit:
        path_json = gen_fake_path(num_cities)
        print(f"Sending {path_json} (limit: {limit})")
        ws.send(path_json)
        time.sleep(3)
        limit -= 1
    print("Sending complete!")
    ws.close()
    return
Additional Detail
Relevant Files and Configuration
consumer.py
import json

from channels.generic.websocket import AsyncWebsocketConsumer


class AsyncCanvasConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.group_name = "dashboard"
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        print(f"Received: {text_data}")
        data = json.loads(text_data)
        # Fan the received path out to the "dashboard" group; the "prep" type
        # maps to the prep() handler below.
        to_send = {"type": "prep", "path": data["path"]}
        await self.channel_layer.group_send(self.group_name, to_send)

    async def prep(self, event):
        send_json = json.dumps({"path": event["path"]})
        await self.send(text_data=send_json)
Relevant views.py
@api_view(["POST", "GET"])
def run_create(request):
    serializer = RunSerializer(data=request.data)
    if not serializer.is_valid():
        return Response({"Bad Request": "Invalid data..."}, status=status.HTTP_400_BAD_REQUEST)
    # TODO: Do run here.
    serializer.save()
    fake_run(num_cities, limit=1000)
    return Response(serializer.data, status=status.HTTP_200_OK)
Relevant settings.py
WSGI_APPLICATION = 'evolving_salesman.wsgi.application'
ASGI_APPLICATION = 'evolving_salesman.asgi.application'
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels.layers.InMemoryChannelLayer"
    }
}
Relevant routing.py
websocket_url_pattern = [
    path("ws/canvas_data", AsyncCanvasConsumer.as_asgi()),
]
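The question does not show asgi.py, but for completeness, a typical Channels 3 wiring for this routing might look like the sketch below ("canvas" is a hypothetical app name holding routing.py; the real project may differ):

# asgi.py (illustrative sketch only)
import os

from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "evolving_salesman.settings")
django_asgi_app = get_asgi_application()

from canvas.routing import websocket_url_pattern  # hypothetical app name

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": URLRouter(websocket_url_pattern),
})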
Full Error
https://pastebin.com/rnGhrgUw
EDIT: SOLUTION
The suggestion by Kunal Solanke solved the issue. Instead of using fake_run() I used the following:
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

layer = get_channel_layer()
for i in range(10):
    path = list(np.random.choice(4, 4, replace=False))
    path = [int(num) for num in path]
    async_to_sync(layer.group_send)("dashboard", {"type": "prep", "path": path})
    time.sleep(3)
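Put together, the fixed view might look roughly like this. This is only a sketch based on the snippet above: it assumes the same RunSerializer and "dashboard" group from the question, and the loop bounds are just the illustrative values from the snippet:

import time

import numpy as np
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .serializers import RunSerializer  # assumed location, as in the question's project


@api_view(["POST", "GET"])
def run_create(request):
    serializer = RunSerializer(data=request.data)
    if not serializer.is_valid():
        return Response({"Bad Request": "Invalid data..."}, status=status.HTTP_400_BAD_REQUEST)
    serializer.save()

    layer = get_channel_layer()
    for _ in range(10):
        path = [int(num) for num in np.random.choice(4, 4, replace=False)]
        # group_send is a coroutine, so wrap it for use in this synchronous view
        async_to_sync(layer.group_send)("dashboard", {"type": "prep", "path": path})
        time.sleep(3)

    return Response(serializer.data, status=status.HTTP_200_OK)

Note that looping with time.sleep() inside the view blocks the HTTP response until the loop finishes; a background task (Celery, for example) is usually a better home for long-running sends.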

Rather than creating a new connection from the server back to itself, I'd suggest you use the get_channel_layer utility, because otherwise you end up increasing the server load by opening so many connections.
Once you get the channel layer, you can simply do a group send, as we normally do to send events.
You can read more about it here.
from channels.layers import get_channel_layer
from asgiref.sync import async_to_sync


def media_image(request, chat_id):
    if request.method == "POST":
        data = {}
        if request.FILES["media_image"] is not None:
            item = Image.objects.create(owner=request.user, file=request.FILES["media_image"])
            message = Message.objects.create(item=item, user=request.user)
            chat = Chat.objects.get(id=chat_id)
            chat.messages.add(message)
            layer = get_channel_layer()
            item = {
                "media_type": "image",
                "url": item.file.url,
                "user": request.user.username,
                "caption": item.title,
            }
            async_to_sync(layer.group_send)(
                "chat_%s" % str(chat_id),
                # this is the channel group name, which is defined inside your consumer
                {
                    "type": "send_media",
                    "item": item,
                },
            )
            return HttpResponse("media sent")
In the error log, I can see that the handshake succeeded for the first iteration and failed for the 2nd. You can check that by printing something in the for loop. If that's the case, the handshake most probably failed due to multiple connections. I don't know how many connections the InMemoryChannelLayer supports from the same origin, but that can be the reason the 2nd connection is getting disconnected. You can get some idea from the channels docs. Try using Redis if you don't want to change your code; it's pretty easy if you are using Linux.
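If you do switch to Redis, the only settings change is usually the channel layer backend. A minimal sketch, assuming the channels_redis package is installed and a Redis server is running locally (adjust host/port as needed):

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # host/port of your Redis instance
            "hosts": [("127.0.0.1", 6379)],
        },
    }
}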

Related

How to connect Django web socket with the third-Party Web Sockets?

I want to connect my Django WebSocket with a third-party web socket. This program is one I wrote, and it functions properly. To avoid having to re-login with the third-party API, I have now added code to check whether the same room is already present in my database. If we re-connect to the third-party API with the same API KEY, it gives the following error:
{"event":"login","status":401,"message":"Connected from another location"}
I want to check whether the same cryptocurrency coin is already connected or not; we don't want to log in with the same API KEY once we're connected. I have two issues here:
Don't send the login request to that web socket again.
Don't send the subscribe request if the same coin already exists. Let's say BTCUSD is already connected and giving me data; I want to just connect the next user to the same room and serve the data on the next request.
import json
import websocket
import time
import ssl
from channels.generic.websocket import AsyncWebsocketConsumer
from .models import Room

login = {
    "event": "login",
    "data": {
        "apiKey": "API_KEY",
    },
}


class CryptoConsumer(AsyncWebsocketConsumer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.ws = websocket.WebSocket(sslopt={"cert_reqs": ssl.CERT_NONE})
        self.ws.connect("wss://crypto.financialmodelingprep.com")

    async def connect(self):
        self.room_name = self.scope["url_route"]["kwargs"]["room_name"]
        self.room_group_name = "crypto_%s" % self.room_name

        # ****** Code Block ******
        subscribe = {
            "event": "subscribe",
            "data": {
                "ticker": self.room_name,
            },
        }
        room = await Room.add(self.room_name)  # Method in models.py to add the user and return True
        if room is False:
            self.ws.send(json.dumps(login))
            print("The group with the name %s doesn't exist" % self.room_group_name)
            time.sleep(1)
            self.ws.send(json.dumps(subscribe))
        # ****** End Code Block ******

        # Join room group
        await self.channel_layer.group_add(self.room_group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        unsubscribe = {
            "event": "unsubscribe",
            "data": {
                "ticker": self.room_name,
            },
        }
        self.ws.send(json.dumps(unsubscribe))
        self.ws.close()
        # Leave room group
        await self.channel_layer.group_discard(self.room_group_name, self.channel_name)

    # Receive message from WebSocket
    async def receive(self, text_data="{'text': 'Dummy Text'}"):
        text_data_json = json.loads(text_data)
        message = text_data_json["message"]
        message = str(self.ws.recv())
        # Send message to room group
        await self.channel_layer.group_send(
            self.room_group_name, {"type": "chat_message", "message": message}
        )

    # Receive message from room group
    async def chat_message(self, event):
        message = event["message"]
        # Send message to WebSocket
        await self.send(text_data=json.dumps({"message": message}))
Note: the reason I want to do this entire step is that we don't want the API KEY made public.
So the code coming from the front end will connect to our Django WebSocket, and then we'll connect to the third-party WebSocket and return the data it sends.
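For illustration only (this is not from the question's code), one way to make sure the login and subscribe requests are sent only once per ticker is a module-level registry of upstream connections that consumers consult before connecting again. Names such as UPSTREAM_CONNECTIONS and get_upstream are hypothetical, and this only shares connections within one server process:

import json
import ssl
import threading

import websocket

# Hypothetical registry: ticker -> already-logged-in upstream socket (per process).
UPSTREAM_CONNECTIONS = {}
_LOCK = threading.Lock()


def get_upstream(ticker, login_payload):
    # Return a shared upstream socket for this ticker, logging in and
    # subscribing only the first time the ticker is requested.
    with _LOCK:
        ws = UPSTREAM_CONNECTIONS.get(ticker)
        if ws is None:
            ws = websocket.WebSocket(sslopt={"cert_reqs": ssl.CERT_NONE})
            ws.connect("wss://crypto.financialmodelingprep.com")
            ws.send(json.dumps(login_payload))
            ws.send(json.dumps({"event": "subscribe", "data": {"ticker": ticker}}))
            UPSTREAM_CONNECTIONS[ticker] = ws
        return ws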

Django Channels consumer consuming 1 call twice

I am using a combination of DRF 3.11.0 and Channels 2.4.0 to implement a backend, and it is hosted on Heroku on 1 dyno with a Redis resource attached. I have a socket on my React frontend that successfully sends/receives from the backend server.
I am having an issue where any message sent back to the front end over the socket is being sent twice. I have confirmed through console.log that the front end is only pinging the back end once. I can confirm through print() inside of the API call that the function is only calling async_to_sync(channel_layer.group_send) once as well. The issue is coming from my consumer: when I use print(self.channel_name) inside of share_document_via_videocall(), I can see that two instances with different self.channel_names are being called (specific.AOQenhTn!fUybdYEsViaP and specific.AOQenhTn!NgtWxuiHtHBw). It seems like the consumer has connected to two separate channels, but I'm not sure why. When I put print() statements in my connect() I only see it go through the connect process once.
How do I ensure that I am only connected to one channel?
in settings.py:
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            # "hosts": [('127.0.0.1', 6379)],
            "hosts": [(REDIS_HOST)],
        },
    },
}
Consumer:
import json
from asgiref.sync import async_to_sync
from channels.db import database_sync_to_async
from channels.generic.websocket import AsyncWebsocketConsumer
from rest_framework.authtoken.models import Token
from django.contrib.auth.models import AnonymousUser
from .exceptions import ClientError
import datetime
from django.utils import timezone


class HeaderConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print("connecting")
        await self.accept()
        print("starting")
        print(self.channel_name)
        await self.send("request_for_token")

    async def continue_connect(self):
        print("continuing")
        print(self.channel_name)
        await self.get_user_from_token(self.scope['token'])
        await self.channel_layer.group_add(
            "u_%d" % self.user['id'],
            self.channel_name,
        )
        # ... more stuff

    async def disconnect(self, code):
        await self.channel_layer.group_discard(
            "u_%d" % self.user['id'],
            self.channel_name,
        )

    async def receive(self, text_data):
        text_data_json = json.loads(text_data)
        if 'token' in text_data_json:
            self.scope['token'] = text_data_json['token']
            await self.continue_connect()

    async def share_document_via_videocall(self, event):
        # Send a message down to the client
        print("share_document received")
        print(event)
        print(self.channel_name)
        print(self.user['id'])
        await self.send(text_data=json.dumps(
            {
                "type": event['type'],
                "message": event["message"],
            },
        ))

    @database_sync_to_async
    def get_user_from_token(self, t):
        try:
            print("trying token" + t)
            token = Token.objects.get(key=t)
            self.user = token.user.get_profile.json()
        except Token.DoesNotExist:
            print("failed")
            self.user = AnonymousUser()
REST API call:
class ShareViaVideoChat(APIView):
    permission_classes = (permissions.IsAuthenticated,)

    def post(self, request, format=None):
        data = request.data
        recipient_list = data['recipient_list']
        channel_layer = get_channel_layer()
        for u in recipient_list:
            if u['id'] != None:
                print("sending to:")
                print('u_%d' % u['id'])
                async_to_sync(channel_layer.group_send)(
                    'u_%d' % u['id'],
                    {'type': 'share_document_via_videocall',
                     'message': {
                         'document': {'data': {}},
                         'sender': {'name': 'some name'}
                     }
                    }
                )
        return Response()
With respect to you getting two calls with different channel names: are you sure your frontend has not connected twice to the consumer? Check the debug console in your browser.
I get the same problem with Next.js as the frontend of a Django Channels WebSocket server.
After searching, I found the problem is related to two things:
1- React strict mode (the request is sent twice):
To disable React strict mode in Next.js, go to the module named "next.config.js" and change the value for strict mode to false, as follows:
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: false,
}

module.exports = nextConfig
2- In Next.js the code runs twice (outside the useEffect hook), once on the server side and once on the client side, which means each user connects to the WebSocket server twice, gets two channel names, and joins the same group twice, each time with a different channel name.
So I changed my code to connect to the Django Channels server only from the client side. If you'd like to see my full code/example, kindly visit the following URL, and note the check for "typeof window === "undefined"":
frontend Next.js code:
https://stackoverflow.com/a/72288219/12662056
I don't know if my problem is the same as your problem, but I hope this is helpful.

Django Channels send group message from Celery task. Asyncio event loop stopping before all async tasks finished

I'm currently stuck on a particularly tricky problem; I'll try my best to explain it.
I have a Django project whose main purpose is to execute queued tasks from a DB rapidly. I use Celery and Celery beat to achieve this, with Django Channels to update my templates with the responses in real time.
The Celery worker is a gevent worker pool with a decent number of threads.
My task (simplified version):
@shared_task
def exec_task(action_id):
    # execute the action
    action = Action.objects.get(pk=action_id)
    response = post_request(action)
    # update action status
    if response.status_code == 200:
        action.status = 'completed'
    else:
        action.status = 'failed'
    # save the action to the DB
    action.save()
    channel_layer = get_channel_layer()
    status_data = {'id': action.id, 'status': action.status}
    status_data = json.dumps(status_data)
    try:
        async_to_sync(channel_layer.group_send)('channel_group', {'type': 'propergate_status', 'data': status_data})
    except:
        event_loop = asyncio.get_running_loop()
        future = asyncio.run_coroutine_threadsafe(channel_layer.group_send('channel_group', {'type': 'propergate_status', 'data': status_data}), event_loop)
        result = future.result()
My Error:
[2019-10-03 18:47:59,990: WARNING/MainProcess] actions queued: 25
[2019-10-03 18:48:02,206: WARNING/MainProcess]
c:\users\jack\documents\github\mcr-admin\venv\lib\site-packages\gevent_socket3.py:123:
RuntimeWarning: coroutine 'AsyncToSync.main_wrap' was never awaited
self._read_event = io_class(fileno, 1)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
[2019-10-03 18:48:02,212: WARNING/MainProcess] c:\users\jack\documents\github\mcr-admin\venv\lib\site-packages\gevent_socket3.py:123:
RuntimeWarning: coroutine 'BaseEventLoop.shutdown_asyncgens' was never
awaited self._read_event = io_class(fileno, 1) RuntimeWarning:
Originally, after I saved the action to the DB, I just called:
async_to_sync(channel_layer.group_send)('channel_group', {'type': 'propergate_status', 'data': status_data})
But I kept getting a runtime error, because you can't use async_to_sync if there is already an asyncio event loop running, as shown here at line 61. So I had multiple gevent threads trying to call async_to_sync very close together, constantly throwing the error in the link.
Which led me to this wonderful answer and the current version of exec_task, which has a 98% success rate in messaging the Django Channels group, but I really need it to be 100%.
The problem here is that occasionally the asyncio event loop is stopped before the coroutine I add has a chance to finish. I've been tweaking my code and playing around with the asyncio and event loop APIs, but I either break my code or get worse results. I have a feeling it might be to do with the asgiref async_to_sync function closing the loop early, but it's complex and I only started working with Python async a couple of days ago.
Any feedback, comments, tips or fixes are most welcome!
Cheers.
In the end I couldn't solve the problem and chose an alternative solution, using a Channels AsyncHttpConsumer to send the group message. It's not optimal, but it works and keeps the workflow in the Channels library.
Consumer:
class celeryMessageConsumer(AsyncHttpConsumer):
    async def handle(self, body):
        # send response
        await self.send_response(200, b"Received Loud and Clear", headers=[
            (b"Content-Type", b"text/plain"),
        ])

        # format the url-encoded string into json
        body_data = urllib.parse.unquote_plus(body.decode("utf-8"))
        body_data = json.loads(body_data)
        id = body_data['data']['id']

        await self.channel_layer.group_send(
            f"group_{id}",
            {
                'type': 'propergate.data',
                'data': body_data['data']
            }
        )
Routing:
application = ProtocolTypeRouter({
    'websocket': AuthMiddlewareStack(
        URLRouter(
            myApp.routing.websocket_urlpatterns
        )
    ),
    'http': URLRouter([
        path("celeryToTemplate/", consumers.celeryMessageConsumer),
        re_path('genericMyAppPath/.*', AsgiHandler),
    ]),
})
Http Request:
data = json.dumps({'id': id, 'status': status})
response = internal_post_request('http://genericAddress/celeryToTemplate/', data)
if response.status_code == 200:
    # phew
    pass
else:
    # whoops
    pass
Requests:
def internal_post_request(request_url, payload):
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.post(request_url, data=payload, headers=headers)
    return response
Hi, I'm currently encountering your exact issue, where it's critical to be able to send messages from completed Celery tasks to the client.
I was able to group_send before by connecting a signal to a model method, for example:
def SyncLogger(**kwargs):
    """A synchronous function to instigate the websocket layer
    to send messages to all clients in the project."""
    instance = kwargs.get('instance')
    # print('instance {}'.format(instance))
    args = eval(instance.args)
    channel_layer = channels.layers.get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        args['room'],
        {
            "type": "chat.message",
            "operation": args['operation'],
            "state": instance.state,
            "task": instance.task
        })
and the signal
post_save.connect(SyncLogger, TaskProgress)
Update
I was able to send messages as long as there's an event loop; this works no matter whether the consumer is async or not:
@shared_task()
def test_message():
    channel_layer = get_channel_layer()
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(channel_layer.group_send('sync_chat', {
        'type': 'chat.message',
        'operation': 'operation',
        'state': 'state',
        'task': 'task'
    }))
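As a point of comparison (not from the answer above): when the worker runs with Celery's default prefork pool rather than gevent, there is normally no event loop already running in the worker process, so the plain async_to_sync form is usually sufficient. A sketch under that assumption:

from asgiref.sync import async_to_sync
from celery import shared_task
from channels.layers import get_channel_layer


@shared_task
def test_message_sync():
    # Same group message as above, sent via async_to_sync.
    # Assumes no event loop is already running in this worker process.
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)('sync_chat', {
        'type': 'chat.message',
        'operation': 'operation',
        'state': 'state',
        'task': 'task',
    })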

Sending data to django channels groups via django views

TL;DR
Want this flow:
                     ws://...
websocket client 1 <-----------> websocket client 2
                        ^
                        |
          server (send messages via views)
So I have the following:
views.py
def alarm(request):
    layer = get_channel_layer()
    async_to_sync(layer.group_send)('events', {
        'type': 'events.alarm',
        'content': 'triggered'
    })
    return HttpResponse('<p>Done</p>')
consumers.py
class EventConsumer(JsonWebsocketConsumer):
    def connect(self):
        print('inside EventConsumer connect()')
        async_to_sync(self.channel_layer.group_add)(
            'events',
            self.channel_name
        )
        self.accept()

    def disconnect(self, close_code):
        print('inside EventConsumer disconnect()')
        print("Closed websocket with code: ", close_code)
        async_to_sync(self.channel_layer.group_discard)(
            'events',
            self.channel_name
        )
        self.close()

    def receive_json(self, content, **kwargs):
        print('inside EventConsumer receive_json()')
        print("Received event: {}".format(content))
        self.send_json(content)

    def events_alarm(self, event):
        print('inside EventConsumer events_alarm()')
        self.send_json(
            {
                'type': 'events.alarm',
                'content': event['content']
            }
        )
in routing.py
application = ProtocolTypeRouter({
    'websocket': AllowedHostsOriginValidator(
        AuthMiddlewareStack(
            URLRouter(
                chat.routing.websocket_urlpatterns,
            )
        )
    ),
})
where websocket_urlpatterns is
websocket_urlpatterns = [
    url(r'^ws/chat/(?P<room_name>[^/]+)/$', consumers.ChatConsumer),
    url(r'^ws/event/$', consumers.EventConsumer),
]
urls.py
urlpatterns = [
    url(r'^alarm/$', alarm, name='alarm'),
]
When I call /alarm/, only the HTTP request is made and the message is not sent to the websocket.
The following are the logs:
[2018/09/26 18:59:12] HTTP GET /alarm/ 200 [0.07, 127.0.0.1:60124]
My intention is to make the Django view send to a group (the use case would be for a server to send a notification to all connected members of the group).
What setting am I missing here?
I am running django channels 2.1.3 with redis as the backend. The CHANNEL_LAYERS etc. have all been set up.
Reference Links:
sending messages to groups in django channels 2
github issue
channels docs
EDIT:
I could send the message using the websocket-client from the view
from websocket import create_connection
ws = create_connection("ws://url")
ws.send("Hello, World")
But is it possible to send without using the above (I don't want to create a connection)?
Source code: chat-app
Credits to @Matthaus Woolard for making the concept pretty clear.
So this was the problem:
The client had disconnected when I tried to send the message from the Django view. This happened because the server restarted upon a code change. I refreshed the browser and it started to work.
Silly mistake.
So to summarize:
Add the following in connect() in the case of a synchronous consumer:
async_to_sync(self.channel_layer.group_add)('events', self.channel_name)
or add this in the case of an async consumer:
await self.channel_layer.group_add('events', self.channel_name)
create a view as follows:
def alarm(request):
    layer = get_channel_layer()
    async_to_sync(layer.group_send)('events', {
        'type': 'events.alarm',
        'content': 'triggered'
    })
    return HttpResponse('<p>Done</p>')
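For reference, the async counterpart of the consumer above is nearly identical; a sketch reusing the same 'events' group and events_alarm handler:

from channels.generic.websocket import AsyncJsonWebsocketConsumer


class AsyncEventConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await self.channel_layer.group_add('events', self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard('events', self.channel_name)

    async def events_alarm(self, event):
        # Handler name matches the 'events.alarm' message type sent by the view.
        await self.send_json({'type': 'events.alarm', 'content': event['content']})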

Emit/Broadcast Messages on REST Call in Python With Flask and Socket.IO

Background
The purpose of this project is to create an SMS-based kill switch for a program I have running locally. The plan is to create a web socket connection between the local program and an app hosted on Heroku. Using Twilio, receiving an SMS will trigger a POST request to this app. If it comes from a number on my whitelist, the application should send a command to the local program to shut down.
Problem
What can I do to find a reference to the namespace so that I can broadcast a message to all connected clients from a POST request?
Right now I am simply creating a new web socket client, connecting it and sending the message, because I can't seem to figure out how to get access to the namespace object in a way that lets me call an emit or broadcast.
Server Code
from gevent import monkey
from flask import Flask, Response, render_template, request
from socketio import socketio_manage
from socketio.namespace import BaseNamespace
from socketio.mixins import BroadcastMixin
from time import time
import twilio.twiml
from socketIO_client import SocketIO  # only necessary because of the hack solution
import socketIO_client

monkey.patch_all()

application = Flask(__name__)
application.debug = True
application.config['PORT'] = 5000

# White list
callers = {
    "+15555555555": "John Smith"
}

# Part of 'hack' solution
stop_namespace = None
socketIO = None


# Part of 'hack' solution
def on_connect(*args):
    global stop_namespace
    stop_namespace = socketIO.define(StopNamespace, '/chat')


# Part of 'hack' solution
class StopNamespace(socketIO_client.BaseNamespace):
    def on_connect(self):
        self.emit("join", 'server@email.com')
        print '[Connected]'


class ChatNamespace(BaseNamespace, BroadcastMixin):
    stats = {
        "people": []
    }

    def initialize(self):
        self.logger = application.logger
        self.log("Socketio session started")

    def log(self, message):
        self.logger.info("[{0}] {1}".format(self.socket.sessid, message))

    def report_stats(self):
        self.broadcast_event("stats", self.stats)

    def recv_connect(self):
        self.log("New connection")

    def recv_disconnect(self):
        self.log("Client disconnected")
        if self.session.has_key("email"):
            email = self.session['email']
            self.broadcast_event_not_me("debug", "%s left" % email)
            self.stats["people"] = filter(lambda e: e != email, self.stats["people"])
            self.report_stats()

    def on_join(self, email):
        self.log("%s joined chat" % email)
        self.session['email'] = email
        if not email in self.stats["people"]:
            self.stats["people"].append(email)
            self.report_stats()
        return True, email

    def on_message(self, message):
        message_data = {
            "sender": self.session["email"],
            "content": message,
            "sent": time() * 1000  # ms
        }
        self.broadcast_event_not_me("message", {"sender": self.session["email"], "content": message})
        return True, message_data


@application.route('/stop', methods=['GET', 'POST'])
def stop():
    '''Right here SHOULD simply be Namespace.broadcast("stop") or something.'''
    global socketIO
    if socketIO == None or not socketIO.connected:
        socketIO = SocketIO('http://0.0.0.0:5000')
        socketIO.on('connect', on_connect)
    global stop_namespace
    if stop_namespace == None:
        stop_namespace = socketIO.define(StopNamespace, '/chat')
    stop_namespace.emit("join", 'server@bayhill.com')
    stop_namespace.emit('message', 'STOP')
    return "Stop being processed."


@application.route('/', methods=['GET'])
def landing():
    return "This is Stop App"


@application.route('/socket.io/<path:remaining>')
def socketio(remaining):
    try:
        socketio_manage(request.environ, {'/chat': ChatNamespace}, request)
    except:
        application.logger.error("Exception while handling socketio connection",
                                 exc_info=True)
    return Response()
I borrowed code heavily from this project, chatzilla, which is admittedly pretty different because I am not really working with a browser.
Perhaps socketio was a bad choice for web sockets and I should have used Tornado, but this seemed like it would work well, and this setup helped me easily separate the REST and web socket pieces.
I just use Flask-SocketIO for that.
from gevent import monkey
monkey.patch_all()

from flask import Flask
from flask.ext.socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)


@app.route('/trigger')
def trigger():
    socketio.emit('response',
                  {'data': 'someone triggered me'},
                  namespace='/global')
    return 'message sent via websocket'


if __name__ == '__main__':
    socketio.run(app)
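Tying this back to the SMS kill-switch idea from the question: the trigger route could reuse a whitelist dict like callers before emitting. A sketch only; the '/stop' route and the 'From' form field are assumptions based on Twilio's usual webhook POST parameters:

from flask import request


@app.route('/stop', methods=['POST'])
def stop():
    # Twilio posts the sender's number as the 'From' form field.
    sender = request.form.get('From')
    if sender not in callers:  # callers: whitelist dict as in the question
        return 'ignored', 403
    socketio.emit('response', {'data': 'STOP'}, namespace='/global')
    return 'stop sent via websocket'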