Using Chord, I can successfully open a WebSocket connection from ClojureScript in the browser.
The documentation and accompanying example, however, don't seem to list any options for closing that connection. Does anyone know how this should be done?
The Chord docs do actually show a way to close the connection, albeit on the server.
The solution is to pass the open channel to core.async's close! function. This is a generic channel operation and is not specific to Chord.
(close! channel)
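For the browser side, here is a minimal ClojureScript sketch of opening a Chord channel and later closing it with core.async's close!. It assumes Chord's chord.client/ws-ch API; the URL, the conn atom, and the connect!/disconnect! names are placeholders:

(ns example.client
  (:require [chord.client :refer [ws-ch]]
            [cljs.core.async :refer [<! >! close!]])
  (:require-macros [cljs.core.async.macros :refer [go]]))

;; Holds the open Chord channel so it can be closed later.
(defonce conn (atom nil))

(defn connect! []
  (go
    (let [{:keys [ws-channel error]} (<! (ws-ch "ws://localhost:3000/ws"))]
      (if error
        (js/console.error "connection failed:" (pr-str error))
        (do
          (reset! conn ws-channel)
          (>! ws-channel {:type :hello}))))))

(defn disconnect! []
  ;; close! is plain core.async; closing the channel closes the
  ;; underlying WebSocket, per the answer above.
  (when-let [ch @conn]
    (close! ch)
    (reset! conn nil)))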
I'm using AWS Kinesis Video Signalling Channels to set up a WebRTC connection, with help from the Amazon Kinesis Video Streams WebRTC SDK for JavaScript (on whose repo I have cross-posted this question: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js/issues/175).
In my code, the "viewer" side of the WebRTC call uses the KVSWebRTC.SignalingClient.sendSdpOffer function to send an SDP offer, and the "master" side of the call listens for "sdpOffer" events and reacts to them by calling KVSWebRTC.SignalingClient.sendSdpAnswer.
This usually all works fine. But I've found that the following sequence of events can happen:
The viewer user opens the webpage, then sends an SDP offer
The master user opens the webpage, and receives the SDP offer
Before the master side can send an SDP answer, the master user refreshes the page
The master user tries to rejoin the call, but no longer receives any SDP offer event - it's as if the offer has been swallowed by the original aborted session. The connection can only be made if the viewer side makes a new SDP offer.
Am I correct in thinking that the SDP offer is "consumed" when the master side signalling client first fires the "sdpOffer" event?
What's the appropriate way to handle this? Is the viewer side supposed to speculatively make fresh SDP offers every few seconds, in case the above sequence of events has occurred? Is there any way for the viewer side to know that their SDP offer has been lost?
I'm worried that a client could lose its connection abruptly and get disconnected without leaving any info for the backend. If this happens, will the socket stay active?
I have found the answer in an issue on the Django Channels repository:
Using websocket auto-ping to periodically assert clients are still connected to sockets, and cutting them loose if not. This is a feature that's already in daphne master if you want to try it, and which is coming in the next release; the ping interval and time-till-considered-dead are both configurable. This solves the (more common) clients-disconnecting-uncleanly issue.
Storing the timestamp of the last seen action from a client in a database, and then pruning ones you haven't heard from in a while. This is the only approach that will also get round disconnect not being sent because a Daphne server was killed, but it's more specialised, so Channels can't implement it directly.
Link to the issue on GitHub
I am trying to build a scalable chat server in Clojure. I am using http-kit, Compojure, and Redis pub/sub to communicate between different nodes (fan-out approach). The server will use WebSockets for the client-server connection, with a fallback to long polling. A single user can have multiple connections to the chat, one connection per browser tab, and messages should be delivered to all of those connections.
So basically, when the user connects, I store the channel in an atom along with a random UUID, like this:
{:userid1 [{:socketuuid "random uuid#1 for uerid1" :socket "channel#1 for userid1"}
{:socketuuid "random uuid#2" :socket "channel#2"}]
 :userid2 [{:socketuuid "random uuid#1 for userid2" :socket "channel#1 for userid2"}]}
The message is POSTed to a common route for both WebSocket and long-polling channels; the message structure looks like:
{:from "userid1" :to "userid2" :message "message content"}
The server finds all the channels in the atom for the :from and :to user IDs and sends the message to the connected channels for the respective users. It also publishes the message over Redis, where the other connected nodes look up the channels stored in their own atoms and deliver the message to the respective users.
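A minimal sketch of that fan-out step, using the atom layout shown above. http-kit's send!, Carmine as the Redis client, and Cheshire for JSON are assumptions, and the function names are placeholders:

(ns chat.fanout
  (:require [org.httpkit.server :as http]
            [taoensso.carmine :as car]   ;; assumed Redis client
            [cheshire.core :as json]))   ;; assumed JSON library

;; Same shape as the atom described above:
;; {:userid1 [{:socketuuid "..." :socket <http-kit channel>} ...] ...}
(defonce connections (atom {}))

(def node-id (str (java.util.UUID/randomUUID)))

(defn deliver-local!
  "Send msg to every open channel this node holds for the :from and :to users."
  [{:keys [from to] :as msg}]
  (doseq [user-id [from to]
          {:keys [socket]} (get @connections (keyword user-id))]
    (http/send! socket (json/generate-string msg))))

(defn handle-incoming!
  "Called from the common POST route: deliver to local channels, then publish
  so the other nodes can deliver to the channels held in their own atoms."
  [redis-spec msg]
  (deliver-local! msg)
  (car/wcar {:spec redis-spec}
    (car/publish "chat-messages"
                 (json/generate-string (assoc msg :origin node-id)))))

;; Each node's Redis subscriber should call deliver-local! only for messages
;; whose :origin is a different node, so the originating node does not
;; deliver the same message twice.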
So the problem I am running into is how to properly implement presence. Basically, http-kit gives you a status when a channel disconnects; the status can be "server-close" or "client-close". I can handle server disconnects (the client will reconnect automatically), but I run into problems when the disconnect happens from the client side, e.g. when the user navigates to another page and reconnects after a few seconds. How do I decide that the user has gone offline when the client disconnects? I am also concerned about message arrival between reconnects in long-polling mode (my long-polling timeout is 30 seconds).
Also, please suggest a good presence mechanism for the above architecture. Please comment if you need more info. Thanks.
Edit #1:
Can you recommend a good tutorial or other material on implementing presence in a chat server? I can't seem to find anything on it.
My current solution: I maintain a global count and a last-connected timestamp for the connected channels of a particular user ID. When a user disconnects, the count is decremented and a 10-second timeout is scheduled that checks whether the user has reconnected (i.e. whether the last-connected timestamp is 10 seconds old and the count is still zero); if not, the user is considered to have gone offline. Would you recommend this solution? If not, why not? Any improvements or better methods are appreciated.
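For reference, a sketch of that approach, assuming http-kit's on-close callback and org.httpkit.timer/schedule-task; user-state, on-connect!, on-close!, and mark-offline! are placeholder names:

(ns chat.presence
  (:require [org.httpkit.timer :refer [schedule-task]]))

;; {"userid1" {:count 2 :last-connected 1690000000000} ...}
(defonce user-state (atom {}))

(defn on-connect! [user-id]
  (swap! user-state update user-id
         (fn [{cnt :count :or {cnt 0}}]
           {:count (inc cnt) :last-connected (System/currentTimeMillis)})))

(defn mark-offline! [user-id]
  ;; Broadcast or persist the offline status here.
  (println user-id "went offline"))

(defn on-close! [user-id]
  (swap! user-state update user-id
         (fn [{cnt :count ts :last-connected}]
           {:count (dec (or cnt 1)) :last-connected ts}))
  ;; Wait 10 seconds before declaring the user offline, in case this is
  ;; just a page navigation and the client reconnects in the meantime.
  (schedule-task 10000
    (let [{cnt :count ts :last-connected} (get @user-state user-id)]
      (when (and (zero? cnt)
                 (>= (- (System/currentTimeMillis) (or ts 0)) 10000))
        (mark-offline! user-id)))))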
Also, please note that I am using http-kit's timer/schedule-task for this; would these timeouts have a significant performance impact?
There are two different cases here from the client side.
Long polling. I cannot see how this is a problem: if the client window closes, there won't be any polling anymore. That's one less client asking for data.
WebSockets. There is a close method available in the protocol, and the client will send a close notification if you implement it correctly. See, for instance: Closing WebSocket correctly (HTML5, JavaScript).
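If your browser side happens to be ClojureScript, the protocol-level close mentioned above is just interop on js/WebSocket; the URL and close reason here are placeholders:

;; Plain WebSocket interop: close with an optional status code
;; (1000 = normal closure) and reason, so the server's on-close
;; handler is notified cleanly instead of waiting for a timeout.
(def ws (js/WebSocket. "ws://localhost:8080/chat"))

(defn disconnect! []
  (.close ws 1000 "user navigated away"))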
Does that answer your question?
I am working on a TCP Server in Go. Now I want to notify all goroutines that are talking to clients to drop their connections, dump what they've got and stop.
Closing a channel is a way to notify all of them.
The question is: is that idiomatic Go? If I am wrong, then what should I do to notify all of the goroutines (something like ManualResetEvent in .NET)?
Note: I am a Go newbie, just learning, and I started with a TCP server because I have written one before in C#.
Yes, closing a channel is an idiomatic Go way of communicating between Goroutines.
You'll need to pass a channel into each goroutine as it launches and check the channel with a select statement after each network event.
You'll also want to set timeouts on network events so that you don't have connections hanging around forever.
As of Go 1.7, you can use the context package.
I need my client to stay connected to the server. Does anyone know how to do this? Or could you post links, tutorials, anything?
Also, when it restarts it says 'could not accept client', so how would I clear everything and make it accept connections again?
Server code:
For your server-side code, wrap the accept call in a loop. For each accepted socket, create a new thread, so that the next accept can be called right away.
On server startup you may also want to set the SO_REUSEADDR socket option. That way, if you have a crash or even a fast restart of the program, your server will be able to bind to the same port again without a problem.
Client code:
For your client code, just check for a socket error, and if one occurs, establish a new connection.
Other resources:
Beej's guide to network programming is a great resource for learning socket programming.
Frostbytes.com also has a great tutorial on socket programming.
If you want something more in depth, check out Unix Network Programming 3rd Edition by W. Richard Stevens.
Other options:
Instead of plain BSD-style sockets, you could also try using Boost.Asio for easier socket programming. You could check out its examples page.