Is there any way to build an interactive terminal using Django Channels with its current limitations? - django

It seems that with Django Channels there is no persistent state between websocket events. Even within the same websocket connection, you cannot preserve anything between calls to receive() on a class-based consumer. If it can't be serialized into the channel_session, it can't be stored.
I assumed that the class based consumer would be persisted for the duration of the web socket connection.
What I'm trying to build is a simple terminal emulator, where a shell session would be created when the websocket connects. Data read from the websocket would be passed as input to the shell, and the shell's output would be sent back out over the websocket.
I cannot find a way to persist anything between calls to receive(). It seems like they took all the bad things about HTTP and brought them over to websockets. With each call to connect(), receive(), and disconnect(), the whole Consumer class is re-instantiated.
So am I missing something obvious? Can I create another thread and have it read from a Group?
Edit: The answers to this can be found in the comments below. You can hack around it. Channels 3.0 will not instantiate the Consumers on every receive call.

The new version of Channels does not have this limitation. Consumers stay in memory for the duration of the websocket connection.
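Since the consumer instance now lives for the whole connection, the shell process from the question can simply become an instance attribute created in connect(). Independent of Channels, the session itself can be sketched with the standard library; the ShellSession name and the end-of-output marker trick are my own, not part of any Channels API:

```python
import subprocess

class ShellSession:
    """Hypothetical sketch: one shell process kept alive per websocket connection."""

    def __init__(self):
        # The shell persists across calls, so variables and cwd survive.
        self.proc = subprocess.Popen(
            ["/bin/sh"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )

    def run(self, command, marker="__END__"):
        """Send one command, read output until a marker line appears."""
        self.proc.stdin.write(command + "\necho " + marker + "\n")
        self.proc.stdin.flush()
        lines = []
        for line in self.proc.stdout:
            if line.strip() == marker:
                break
            lines.append(line)
        return "".join(lines)

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

In a Channels 3 consumer you would create the ShellSession in connect(), call run() (or write to the pipe) from receive(), and close it in disconnect().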

Related

Django, Manage threads and websocket for a data visualisation website

I am a Django beginner and here is the project:
I am building a data visualization website. I am loading a continuous data stream and I want the client to be able to choose the processing to apply (in Python; AI processing with TensorFlow, for example). Once the processing is chosen by the client, I want to launch it in a thread and send the results to the client every X seconds over a WebSocket (while still handling WebSocket messages coming from the client at the same time).
I have already done this with Flask, but I am not succeeding with Django.
I have followed many tutorials, in particular this one, which seems similar to what I want to do: https://www.neerajbyte.com/post/how-to-implement-websocket-in-django-using-channels-and-stream-websocket-data
The main issue is that I don't know how, or even where, to create my thread. I can't do it in an AsyncWebsocketConsumer class, because it doesn't let me launch a thread that is able to send a WebSocket message (I need to call an async function to send, and I can't call an async function from a thread).
One solution proposed in the above tutorial for sending data regularly is to create a management command. That solution doesn't suit me either, because the function executed that way can't be chosen by the client from an AsyncWebsocketConsumer class.
So here I am, really struggling to find a solution.
I have heard some things about Celery and Gunicorn here and there. What do you think of them, and how could they solve my issue?
I am open to every piece of advice, every tutorial, every idea which could allow me to move forward in my project.
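Not from the question itself, but one common pattern for this situation: let the blocking worker thread hand results back to the consumer's event loop with asyncio.run_coroutine_threadsafe, so the async send still executes on the loop. A stdlib-only sketch, where the send coroutine stands in for AsyncWebsocketConsumer.send():

```python
import asyncio
import threading
import time

def worker(loop, send, stop):
    """Blocking thread: produce a result, then schedule the async send
    on the consumer's event loop instead of calling it directly."""
    n = 0
    while not stop.is_set():
        n += 1
        asyncio.run_coroutine_threadsafe(send(n), loop)
        time.sleep(0.01)

async def main():
    received = []

    async def send(result):          # stand-in for consumer.send(...)
        received.append(result)

    loop = asyncio.get_running_loop()
    stop = threading.Event()
    t = threading.Thread(target=worker, args=(loop, send, stop), daemon=True)
    t.start()
    await asyncio.sleep(0.1)         # let the worker produce a few results
    stop.set()
    t.join()
    await asyncio.sleep(0.05)        # drain sends still queued on the loop
    return received

results = asyncio.run(main())
```

In a real consumer you would capture the loop with asyncio.get_running_loop() inside connect() and start the thread with the client-chosen processing function.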

django-channels databinding on model.save()

I have a channels app that is using databinding. When changes are made with the Django admin, they are pushed to the web as expected. I also have a loop set up on a socket connection to do some long polling on a GPIO unit and update the DB, but these changes are not being pushed to the web. The Channels documentation says:
Signals are used to power outbound binding, so if you change the values of a model outside of Django (or use the .update() method on a QuerySet), the signals are not triggered and the change will not be sent out. You can trigger changes yourself, but you’ll need to source the events from the right place for your system.
How do I go about triggering these changes, as happens with the admin?
Thanks, and please let me know if this is too vague.
The relevant low-level code is in lines 121-187 of channels/binding/base.py (at least in version 1.1.6). That's where the signals are received and processed. It involves a few different things, such as keeping track of which groups to send the messages to. So it's a little involved, but you can probably tease out how to do it, looking at that code.
The steps involved are basically:
Find the right groups for the client
Format your message in the same way that the databinding code would (see this section of the docs)
Send the message to all the relevant groups you found in step 1.
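The plumbing for those steps can be sketched in plain Python. Both names here are hypothetical: the payload dict only mimics the general shape of a databinding message, and group_send stands in for Channels' Group(name).send():

```python
def databinding_payload(model_label, pk, action, data):
    """Hypothetical: build a message shaped like the ones the databinding
    code emits (action is 'create', 'update' or 'delete')."""
    return {"model": model_label, "pk": pk, "action": action, "data": data}

def send_to_groups(groups, payload, group_send):
    """Fan the payload out to every relevant group; group_send stands in
    for Channels' Group(name).send()."""
    for name in groups:
        group_send(name, payload)
```

Your GPIO loop would call these after each database write, using the same group names the binding class computes for its clients.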
Alternatively, you might consider using a REST API such that the socket code submits a POST to the API (which would create a database record via the ORM in the normal way) rather than directly creating database records. Your signals will happen automatically in that case. djangorestframework (server-side) and requests (client-side, if you're using python for the long-polling code) are your friends if you want to go that way, for sure. If you're using another language for the long-polling client, there are many equivalent packages for REST API client work.
Good luck!

Multithreaded UDP server vs. non-blocking calls

First question here. I have searched for this but haven't found a solution that fully answers my problem. I'm using C++ and need to write a kind of UDP chat (server and client) for programs to interact with one another. At the moment it works quite well.
I'm using Google's protobuf for the messages.
I've structured it like this:
The server has a list of users currently logged in, as well as a list of messages to process and distribute.
One thread handles all receiving on its socket (I'm using one socket).
If the command field in the message is login, the server looks through the list and checks for that combination of port and IP. If it's not there, it creates a new user entry.
If the command is logout, the server looks for the user in the list and deletes it.
If the command is message, the server checks that the user is logged in and puts the message on the message list.
The 2nd thread is for sending.
It waits until there is a message in the list and then cycles through all users, sending the message to each of their sockets except the sender's.
The server has set options on its socket to receive from any IP.
My question now is: is this the most performant solution?
I've read about select and poll, but it's always about multiple receiving sockets, while I have only one.
I know the receiving thread may be idle most of the time, but in my environment there will be high-frequency message input.
I'm fairly new to socket programming, but I think this is the most elegant solution. I was wondering if I could even create another thread that gets a list from the receiving thread to process the messages.
Edit: how could I detect timeouts?
I mean, I could have a variable in the user list which increases or gets set to 0. But what if messages don't come frequently? Maybe a server-based ping message? Or maybe a flag on the message which gets set to acknowledged, with the message resent otherwise.
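The last-seen-timestamp idea can be sketched like this (in Python rather than C++, with hypothetical names): the receiving thread calls touch() for every datagram, and a periodic housekeeping pass calls prune() to drop idle users:

```python
import time

class UserTable:
    """Sketch of idle-timeout tracking: record the last-seen time per user
    and prune anyone silent for longer than `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def touch(self, addr, now=None):
        """Call on every received datagram from addr."""
        self.last_seen[addr] = time.monotonic() if now is None else now

    def prune(self, now=None):
        """Remove and return users idle longer than the timeout."""
        now = time.monotonic() if now is None else now
        idle = [a for a, t in self.last_seen.items() if now - t > self.timeout]
        for a in idle:
            del self.last_seen[a]
        return idle
```

The `now` parameter exists only to make the logic testable; in production you would let it default to time.monotonic().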
On the user side I need to first broadcast to find the server and then set the port and IP according to the answer. How could I do that?
Because this should run autonomously: a client should detect the server, log in, send its commands, and detect whether it is still online.
On the server side I don't know if this is as important. There might be a memory issue if too many clients connect and none ever log off. Maybe I'll set a 1-hour timeout to detect idle clients.
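The overall two-thread design described in the question (one thread receiving into a message list, one thread fanning messages out) can be sketched in Python rather than C++. Protobuf parsing is omitted, and the __quit__ sentinel exists only to shut the sketch down cleanly:

```python
import queue
import socket
import threading

def run_relay(host="127.0.0.1", port=0):
    """Sketch of the design: a receiver thread registers users and queues
    messages; a sender thread fans each message out to every other user."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    users = set()            # (ip, port) pairs seen so far
    inbox = queue.Queue()    # receiver -> sender hand-off

    def receiver():
        while True:
            data, addr = sock.recvfrom(4096)
            if data == b"__quit__":
                inbox.put(None)          # tell the sender to stop too
                return
            users.add(addr)              # "login" on first sight
            inbox.put((data, addr))

    def sender():
        while True:
            item = inbox.get()
            if item is None:
                return
            data, origin = item
            for user in list(users):
                if user != origin:       # everyone except the sender
                    sock.sendto(data, user)

    threads = [threading.Thread(target=receiver), threading.Thread(target=sender)]
    for t in threads:
        t.start()
    return sock.getsockname(), threads
```

The queue between the two threads is the key piece: it is the thread-safe "list of messages to process and distribute" from the description, and it also answers the follow-up question about adding a third processing thread, since any number of workers can consume from the same queue.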

libcurl multi asynchronous support

From the examples and documentation, it seems the libcurl multi interface provides asynchronous support in batch mode, i.e. easy handles are added to the multi handle and then the requests are fired simultaneously with curl_multi_socket_action. Is it possible to trigger a request when an easy handle is added, with control returning to the application after the request has been written to the socket?
EDIT:
It would help to fire requests in the model below, instead of firing them in a batch (assuming request creation on the client side and processing on the server take the same duration):
Client -----|-----|-----|-----|
Server < >|-----|-----|-----|----|
The multi interface returns control to the application as soon as it would otherwise block. It will therefore also return control after it has sent off the request.
But I guess you're asking how you can figure out exactly when the request has been sent? I think that's only really possible by using CURLOPT_DEBUGFUNCTION and seeing when the request is sent. Not really a convenient way...
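The contract being described, independent of libcurl, is "do as much I/O as possible without blocking, then hand control back". A Python sketch of that contract on a non-blocking socket (the perform name is hypothetical):

```python
import socket

def perform(sock, payload):
    """Write as much of payload as the kernel will accept without blocking,
    then return control to the caller, reporting how far we got. This is
    the same contract the multi interface gives the application."""
    sent = 0
    while sent < len(payload):
        try:
            sent += sock.send(payload[sent:])
        except BlockingIOError:
            break          # would block now: give control back
    return sent
```

The caller keeps the remaining bytes and calls perform again when the socket is writable, which is exactly what an event loop (libevent in the hiperfifo example) arranges.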
You can check this example:
https://curl.haxx.se/libcurl/c/hiperfifo.html
It's combined with libevent and libcurl.
When running, the program creates the named pipe "hiper.fifo"
Whenever there is input into the fifo, the program reads the input as a list
of URL's and creates some new easy handles to fetch each URL via the
curl_multi "hiper" API.
The fifo buffer is handled almost instantly, so you can even add more URLs while the previous requests are still being downloaded.
Then libcurl downloads all easy handles asynchronously via curl_multi_socket_action, so control returns to the application.

HTTP stream server: threads?

I already wrote here about the http chat server I want to create: Alternative http port?
This HTTP server should stream text to every user in the same chat room on the website. The browser will stay connected and wait for further HTML. (Yes, that works; the browser won't reject the connection.)
I have a new question: because this chat server doesn't need to receive information from the client, it's not necessary to listen to the client after the server has sent its first response. New chat messages will be sent to the server on a new connection.
So I can open 2 threads, one waiting for new clients (or new messages) and one for the html streaming.
Is this a good idea, or should I use one thread per client? I don't think it's good to have one thread per client when many chat users are online, since the server should handle multiple different chats with their own rooms.
Three possibilities:
1. One thread for all clients, sending text to each client in succession; there shouldn't be much lag since it's only text.
this will be like: user1.send("text");user2.send("text"),...
2. One thread per chat or chatroom
3. One thread per chat user - ... many...
Thank you, I haven't done much with sockets yet ;).
Right now, you seem to be thinking in terms of a given thread always carrying out a given (type of) task. While that basic design can make sense, to produce a scalable server like this, it generally doesn't work very well.
Often a slightly more abstract viewpoint works out better: you have tasks that need to get done, and threads that do those tasks -- but a thread doesn't really "care" about what task it executes.
With this viewpoint, you simply need to create some sort of data structure that describes each task that needs to be done. When you have a task you want done, you fill in a data structure to describe the task, and hand it off to get done. Somewhere, there are some threads that do the tasks.
In this case, the exact number of threads becomes mostly irrelevant -- it's something you can (and do) adjust to fit the number of CPU cores available, the type of tasks, and so on, not something that affects the basic design of the program.
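That task-queue viewpoint can be sketched with the standard library. This is a hand-rolled pool for illustration; in practice concurrent.futures.ThreadPoolExecutor does the same job:

```python
import queue
import threading

def start_pool(num_workers):
    """Task-queue sketch: workers don't care what task they run, they just
    pull (func, args) descriptions off a shared queue and execute them."""
    tasks = queue.Queue()

    def worker():
        while True:
            task = tasks.get()
            if task is None:          # shutdown sentinel
                return
            func, args = task
            func(*args)

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    return tasks, workers

def stop_pool(tasks, workers):
    """Queue one sentinel per worker, then wait for them to finish."""
    for _ in workers:
        tasks.put(None)
    for w in workers:
        w.join()
```

Here "send this chat line to that client" would simply be one more (func, args) task, so the number of workers can be tuned to the core count without changing the design.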
I think the easiest pattern for this simple app is to have a pool of threads, and then for each client pick an available thread or make the client wait until one becomes available.
If you want a serious understanding of HTTP server architecture concepts, google the following:
apache architecture
nginx architecture