I'm trying to do the following thing
People visit my site and press a button, which connects them to the server via WebSocket to send some data (this takes a while). If a second client visits the site and presses the button while another client is already sending data through the WebSocket, I want that second client to be redirected to another page showing its position in the queue for the WebSocket.
When the first client finishes the transfer, the next one in line connects. So basically I'm trying to get a server resource to be used by only one client at a time, with a queue for the others to wait in.
Is there a way to make something like this work?
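One framework-agnostic way to model this is a small gatekeeper that grants the resource to one client at a time and reports a queue position to everyone else. This is a minimal sketch of the bookkeeping only (the class and method names are hypothetical, and wiring it into a real WebSocket server is up to you):

```python
from collections import deque

class ResourceQueue:
    """Grants a single shared resource to one client at a time."""

    def __init__(self):
        self.active = None          # client currently using the resource
        self.waiting = deque()      # clients waiting their turn

    def acquire(self, client_id):
        """Return 0 if the client got the resource, else its queue position."""
        if self.active is None:
            self.active = client_id
            return 0
        self.waiting.append(client_id)
        return len(self.waiting)    # 1 = next in line

    def release(self, client_id):
        """Called when the active client finishes; promotes the next in line."""
        if self.active == client_id:
            self.active = self.waiting.popleft() if self.waiting else None
        return self.active
```

The button handler would call `acquire()` on connect: a return of 0 means start the transfer; anything else means redirect to the waiting page showing that position. When the transfer ends (or the socket closes), `release()` promotes the next client, which the server can then notify.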
I want to make a chat application like WhatsApp, and I want to make the backend server using Django Channels to handle all the real-time updates.
I have been exploring various sources but I could not figure out one thing: how do I manage a single WebSocket connection (a single endpoint) per user and still receive messages, in real time, from all the chats he is part of? As per my current understanding, I can add a channel (the WebSocket connection corresponding to a user) to different channel groups, but what if a user is part of a lot of groups (i.e. is eligible to receive updates from many chats)? Should I add that channel to all the groups he can be part of as soon as the connection is established, or is there a workaround like the one in my mind:
Store the list of channels corresponding to each user in a database.
Run a loop so that whenever a message is received by the server, it sends the message to the WebSocket connection of each user who should receive it?
Any help is appreciated. Thanks in advance.
Yes, for a simple chat system, you should just add the user's channel name to the groups he's subscribed to.
However, you definitely will need to model the chat system in the database for a more complex system. Let's say you have the models Chat, ChatMember, and Message. When a user connects to the WebSocket, he does not need to specify any chat because it is a general connection. Any message sent by the client has to specify the chat, so you can loop through the chat members and forward the message to all who are currently connected.
How do you know who is currently connected? This is the tricky part. In my architecture, I have a group for each user, sort of like an inbox; the group name is generated from the user ID. Each user can have several connections (mobile, web, etc.). All the connections coming from a user are added to the user's group, and the user's number of active connections is saved in an Inbox model: it is incremented on each new connection and decremented on disconnection.
So to know which chat members are currently online, I can just check that the user's inbox has at least one connection. If he is online I forward the message to his inbox group; otherwise I store the message in his inbox. Whenever a user connects, he is sent all the messages in his inbox and the inbox is cleared.
This is just an example of a way to implement it but you can also think up a custom architecture or improve on it.
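The deliver-or-store decision in the inbox architecture above can be modeled without Channels at all. Here is a framework-free sketch (the names are hypothetical; in a real Channels app `send` would be a `group_send` to the user's inbox group):

```python
class Inbox:
    """Per-user inbox: counts live connections and buffers missed messages."""

    def __init__(self):
        self.connections = 0   # incremented on connect, decremented on disconnect
        self.pending = []      # messages stored while the user is offline

def deliver(inboxes, user_id, message, send):
    """Forward to the user's group if online, otherwise store in the inbox."""
    inbox = inboxes.setdefault(user_id, Inbox())
    if inbox.connections > 0:
        send(user_id, message)
    else:
        inbox.pending.append(message)

def on_connect(inboxes, user_id, send):
    """Flush stored messages to a user who just connected."""
    inbox = inboxes.setdefault(user_id, Inbox())
    inbox.connections += 1
    for message in inbox.pending:
        send(user_id, message)
    inbox.pending.clear()
```

Note that in production the connection count and pending messages live in the database (the Inbox model above), not in process memory, since Channels workers can be distributed.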
We have an iOS app. We use HTTP services for getting and posting JSON data, and push notifications are enabled. If the backend services are down, is there any way to notify the user that the services are down?
Did you try setting a timeout? If the app can't connect to the server for some time, the connection attempt nearly always terminates, and in most programming languages it also raises a timeout exception. Check the timeout options on the object you use for HTTP communication; you can probably configure them. If you can't connect to the server at all to receive an HTTP response, simply tell the user "server unavailable" or something like that.
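The client-side detection described above boils down to a connection attempt with a short timeout. The app in question is iOS, but the pattern looks the same everywhere; a Python sketch (the function name is hypothetical):

```python
import socket

def server_reachable(host, port, timeout=0.5):
    """Return True if a TCP connection can be opened within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # covers timeouts, refused connections, unreachable hosts
        return False
```

If this returns False, show the "server unavailable" screen instead of attempting the real request.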
Ideally, if you know the backend will be down for a while (for updates etc.) but the HTTP server itself is still reachable, you can return a response containing text like "server unavailable". You could also send an empty response and detect it on the front end, but only if you never otherwise send empty responses; "server unavailable" is clearer and less likely to cause issues.
If the HTTP server will be periodically unavailable, try implementing update notifications: when the app starts, it asks the server when the next update will occur and saves the answer; on the next startup, it checks whether such an update is happening at the moment.
Besides that, if you really want to use push notifications and the server will be periodically unavailable, send a notification before the server goes down. You just need to use your imagination here.
What you can't do is send a notification when your server goes down unexpectedly, mainly because you have no way of notifying the client (the server you use for communication is the one that's down). However, as stated above, the client can detect the outage itself when it fails to connect. If you have a backup server, it can send a notification when the primary goes down; but if both servers go down, or only the backup is in a position to tell you, the client most likely won't know.
You could use an external company as your backup server, so that a power failure (or something like that) on your side doesn't take out the notification system as well. Hope it helps.
I want to implement long polling in a web service. I can set a sufficiently long time-out on the client. Can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies or surrounding SSH tunnels that may be in between of the client and the server and I have not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, rather than idle because the server has disconnected?
If so, how? I have been searching around four hours now but I don’t find information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond 102 Processing instead of 200 OK, and everything is fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the header? Do they make it into the transferred file, and may break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it didn't miss any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
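The serial-number scheme above reduces to a small event buffer on the server. The service in question is C++/Boost, but the logic is easiest to show in a short Python sketch (class and method names are hypothetical):

```python
import itertools

class EventBuffer:
    """Buffers recent events, each tagged with an increasing serial number."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.events = []                    # list of (serial, payload)
        self.serials = itertools.count(1)

    def publish(self, payload):
        self.events.append((next(self.serials), payload))
        del self.events[:-self.capacity]    # keep only the newest events

    def since(self, last_seen):
        """Events the client has not seen yet; empty means delay the response."""
        return [e for e in self.events if e[0] > last_seen]
```

On each GET, if `since(last_seen)` is non-empty the server responds immediately and closes the response; otherwise it holds the request open until the next `publish()`. A client that reconnects after a dropped connection loses nothing, because its `last_seen` serial number picks up exactly where it left off.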
First question here. I have searched for this but haven't found any solution that fully answers my problem. I'm using C++ and need to write a kind of UDP chat (server and client) for programs to interact with one another. At the moment it works quite well.
I'm using Google's protobuf for the messages.
I've written it like that:
The server has a list of users currently logged in, as well as a list of messages to process and distribute.
One thread handles all receiving on its socket (I'm using one socket).
If the command field in the message is login, the server looks through the list and checks for this combination of port and IP; if it isn't there, it creates a new user entry.
If the command is logout, the server looks for the user in the list and deletes it.
If the command is message, the server checks that the user is logged in and puts the message on the message list.
The 2nd thread is for sending.
It waits until there is a message in the list and then cycles through all users, sending the message to each of their sockets except the sender's.
The server has set options on its socket to receive from any IP.
My question now is: is this the most performant solution?
I've read about select and poll, but that's always about multiple receiving sockets, while I have only one.
I know the receiving thread may idle a lot, but in my environment there will be high-frequency message input.
I'm fairly new to socket programming, but I think this is the most elegant solution. I was wondering if I should even create another thread that takes a list from the receiving thread and processes the messages.
Edit: how could I detect timeouts?
I mean, I could have a variable in the user list that gets incremented or reset to 0, but what if messages don't come frequently? Maybe a server-based ping message? Or maybe a flag on the message that gets acknowledged, with the message resent otherwise.
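A common answer to the timeout question is a per-user last-seen timestamp, refreshed by any incoming datagram (or a periodic client ping) and swept periodically by the server. The project itself is C++, but a Python sketch of the bookkeeping (names hypothetical) shows the idea:

```python
import time

class UserTable:
    """Tracks last-seen times and drops users that have gone quiet."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}              # (ip, port) -> timestamp

    def touch(self, addr, now=None):
        """Call on every datagram received from addr (login, message, ping)."""
        self.last_seen[addr] = time.time() if now is None else now

    def sweep(self, now=None):
        """Remove and return users idle longer than the timeout."""
        now = time.time() if now is None else now
        stale = [a for a, t in self.last_seen.items() if now - t > self.timeout]
        for addr in stale:
            del self.last_seen[addr]
        return stale
```

The sending thread (or a dedicated timer) calls `sweep()` every so often and treats the returned addresses as logged out. This also answers the memory concern below: idle clients are reclaimed automatically.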
On the user side I need to first broadcast to find the server and then set the port and IP according to the answer. How could I do that? This should run autonomously, meaning a client should detect the server, log in, send its commands, and detect whether it is still online.
On the server side I don't know if this is so important. There might be a memory issue if too many clients connect and none log off. Maybe I'll set a one-hour timeout to detect idle clients.
So I am trying to write a chat system using django (I am relatively new to real time system).
I did some research; there are lots of options (Twisted, Tornado, etc.), but for now I decided to try using nginx as the web server and Redis's pub/sub.
The chat would be between two users at a time.
Following is what I was thinking of:
On authentication, all users issue a psubscribe chatctrl:*:. This essentially subscribes each of them to a control channel used to establish the initial conversation, which is always needed.
When a user u1 launches a chat with user u2, we
create a channel, say "chat:u1:u2" and subscribe to it.
The user u1 publishes a message to the control channel chatctrl:u1:u2: (a control message that would be listened to by u2) saying, effectively, "do you want to chat with me on channel chat:u1:u2?"
The user u2 gets this message, subscribes to the channel, and responds yes via another message on the control channel (or on the newly established channel).
A session is established and both users can publish to the same channel and listen to it as well.
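The handshake described in the steps above can be sketched end to end. Against real Redis you would use `psubscribe`/`publish` on the client connections; here an in-memory stand-in (the `PubSub` class is hypothetical) just shows the message sequence:

```python
from collections import defaultdict

class PubSub:
    """In-memory stand-in for Redis pub/sub, just to show the handshake."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # channel -> list of inboxes

    def subscribe(self, channel, inbox):
        self.subscribers[channel].append(inbox)

    def publish(self, channel, message):
        for inbox in self.subscribers[channel]:
            inbox.append((channel, message))

bus = PubSub()
u1_inbox, u2_inbox = [], []

bus.subscribe("chatctrl:u1:u2", u2_inbox)    # u2's psubscribe matches this channel
bus.subscribe("chat:u1:u2", u1_inbox)        # u1 creates and joins the chat channel
bus.publish("chatctrl:u1:u2",                # u1 sends the invitation
            "do you want to chat with me on chat:u1:u2?")
bus.subscribe("chat:u1:u2", u2_inbox)        # u2 accepts and joins
bus.publish("chat:u1:u2", "yes, hi!")        # both now share the channel
```

Note that u2's pattern subscription (`chatctrl:*`) is modeled here as a direct subscribe, since the stand-in has no pattern matching.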
My question is:
1. Does the above make sense, first of all? If not, how would you do it using Redis?
2. Where do I put the loop that listens for messages? Since it blocks when there are no messages, it cannot go in a view or in a model accessed by a view. Should it be in a spawned thread, and if so, how do I unsubscribe once the chat session is over?
Thanx!
See my answer here for an example of the system you describe.
In that code, the view spawns a Gevent greenlet that subscribes to Redis and pushes messages out to the client browser over socket.io.
The view then blocks until a message is received over socket.io, repeating for the duration of the chat session.
Hope that helps!
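On the second question above (where the blocking loop lives and how to stop it), the usual shape is a dedicated worker, whether greenlet or thread, plus a stop signal for teardown. A threading sketch with an in-memory queue standing in for the Redis subscription (the names are hypothetical):

```python
import queue
import threading

def chat_listener(inbox, handle, stop):
    """Blocking listen loop, meant to run in its own worker thread."""
    while True:
        try:
            message = inbox.get(timeout=0.1)  # block briefly, then re-check stop
        except queue.Empty:
            if stop.is_set():                 # drained and told to stop: exit
                break
            continue
        handle(message)

# usage: spawn the worker, feed it, then signal when the chat session ends
received = []
inbox = queue.Queue()
stop = threading.Event()
worker = threading.Thread(target=chat_listener,
                          args=(inbox, received.append, stop))
worker.start()
inbox.put("hello")
inbox.put("bye")
stop.set()          # the equivalent of unsubscribing when the session is over
worker.join()
```

With real Redis, setting the stop event would be paired with an `unsubscribe` on the pub/sub connection so the blocked `listen()` call returns.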