Detect if re-connection has taken place in redis websocket on client side - django

I am quite new to django-websocket-redis and, as usual, I am facing some problems.
I have established a communication from the client to the server and vice versa using Websockets for Redis.
I would like to detect when a client is reconnected to or disconnected from the server (meaning when the connection is closed and/or opened again), so that I can implement a mechanism where clients are responsible for asking "what did I miss" when they reconnect, and then query the data that they missed.
Currently my client code is like this (fiddle here).
I can detect when the connection is established for the first time, but not when the websocket connection is broken and reconnected.
Any ideas on how I can do that?

The problem is with the callbacks you have set: it should not be on_connecting(); it should be only the name of the function, on_connecting.
Below is the code; replace yours with it and check whether that works.
var ws4redis = WS4Redis({
    uri: '{{ WEBSOCKET_URI }}foobar?subscribe-broadcast&publish-broadcast&echo',
    receive_message: receiveMessage,
    connecting: on_connecting,
    connected: on_connected,
    error: on_error,
    disconnected: on_disconnected,
    close: on_close,
    open: on_open,
});
When you write on_connecting(), the functions are called as soon as WS4Redis is initialized; that's why you see the console log for all the events.

Related

How can a gRPC client tell whether the gRPC server has cancelled?

I'm using a synchronous bidirectional streaming gRPC service, and I've made the server side close the connection when a timeout is detected.
// server-side code
if (timeout_detected) {   // pseudocode for the server's timeout check
    server_context_->TryCancel();
}
My question is: how do I detect on the client side whether the connection is still valid? If it is not, I could re-establish the connection. I've tried:
// client-side code
if (client_reader_writer_->Write(&request)) {
    // I consider the connection still valid
} else {
    // connection considered cancelled; re-establish it and do something
}
But client_reader_writer_->Write(&request) returns true even after the server's log shows that the call was cancelled.
If someone could give me some hints on this, I would be very grateful!
If your concern is about the link, keepalive can be used as a mechanism to check whether the channel is fine: HTTP/2 pings are transmitted at regular intervals, and if there is no response the connection is considered broken.
Another approach is to implement your own "heartbeat" mechanism, where a heartbeat is sent periodically to check for server/network failure while trying to write to the socket.
For a server timeout in a typical scenario, the client can be notified using context.abort(grpc.StatusCode.DEADLINE_EXCEEDED, 'RPC Time Out!').
Here is a reference for the same.
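For illustration, here is a minimal sketch of enabling keepalive on the client channel, shown with the Python gRPC API for brevity (the equivalent GRPC_ARG_KEEPALIVE_* channel arguments can be set from C++ via grpc::ChannelArguments); the address and interval values are only examples, not recommendations.
import grpc

# Keepalive options are standard gRPC core channel arguments; values are illustrative.
options = [
    ("grpc.keepalive_time_ms", 10000),           # send an HTTP/2 ping every 10 s
    ("grpc.keepalive_timeout_ms", 5000),         # treat the link as broken if no ack within 5 s
    ("grpc.keepalive_permit_without_calls", 1),  # keep pinging even when no RPC is in flight
]
channel = grpc.insecure_channel("localhost:50051", options=options)
Once a keepalive ping goes unanswered and the transport is torn down, a subsequent Write() on the stream should return false, which gives the client the signal it is looking for.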

How to gracefully handle auto disconnect of Daphne websockets

Daphne has a parameter --websocket_timeout (link). As mentioned in the docs:
--websocket_timeout WEBSOCKET_TIMEOUT
Maximum time to allow a websocket to be connected. -1 for infinite.
The socket is disconnected and no further communication can be done. However, the client does not receive a disconnect event and hence can't handle it gracefully. How does my client get to know whether the socket has been disconnected or not? I don't want to keep a timer on the client, nor do I want to keep re-checking the connection.
This is how I deploy my app
daphne -b 0.0.0.0 -p 8000 --websocket_timeout 1800 app.asgi:application
The socket gets auto-disconnected after every 30 minutes, but the client never gets to know about it.
What's the right way to go about this?
Update
I am trying to send an event before the connection is closed, by overriding my websocket_disconnect handler so that it sends JSON before disconnecting. However, it does not send the event.
class Consumer(AsyncJsonWebsocketConsumer):
    async def websocket_disconnect(self, message):
        """Over-riding."""
        print('Inside websocket_disconnect consumer')
        await self.send_json(
            {"event": "disconnecting..."}
        )
        await super().websocket_disconnect(message)
I'm not sure it's a problem that needs a solution. The client has a certainty that after X minutes of inactivity it will get disconnected, where X is determined by the server. It has no certainty it won't happen before that. So you need connectivity handling code regardless.
While it seems dirty to keep an idling connection around, I can't imagine it costing a lot of resources.
Your premise that the client doesn't get to know about it is wrong. When you register the onclose handler, the client receives a disconnect event and can act accordingly.
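For example, here is a minimal sketch of a reconnecting client using the third-party Python websockets package (the URL is just an example); in a browser the equivalent is simply assigning an onclose handler to the WebSocket object.
import asyncio
import websockets  # third-party package: pip install websockets

async def run():
    while True:  # reconnect loop
        try:
            async with websockets.connect("ws://localhost:8000/ws/app/") as ws:
                async for message in ws:
                    print("received:", message)
            # Leaving the async-for loop means the server closed the socket,
            # e.g. when Daphne's --websocket_timeout expires.
            print("server closed the connection, reconnecting ...")
        except (websockets.ConnectionClosedError, OSError):
            print("connection dropped, reconnecting ...")
        await asyncio.sleep(1)  # brief back-off before reconnecting

asyncio.run(run())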

What notification is provided for a lost connection in a C++ gRPC async server

I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data. I can set an Alarm object to trigger immediately and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process then there's no notification or callback that I've been able to find/enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2

Python socket.recv Closing Socket Prematurely

I have a web proxy that starts a TCP listener socket that accepts connections from clients. The listener accepts connections via:
clientConnection, clientAddress = listenerSocket.accept()
and then a new thread handles the client connection from there.
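For reference, the accept loop looks roughly like this (the port and the handler body are placeholders, not my actual code):
import socket
import threading

def handle_client(connection, address):
    pass  # per-connection receive/process/respond logic lives here

listenerSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listenerSocket.bind(("0.0.0.0", 8888))  # example port
listenerSocket.listen()

while True:
    clientConnection, clientAddress = listenerSocket.accept()
    threading.Thread(target=handle_client,
                     args=(clientConnection, clientAddress),
                     daemon=True).start()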
To mock a client connection, I am using telnet to connect to the proxy and issue commands. The proxy needs to receive data from telnet and I need to make sure that I receive all of it. To achieve this, I am doing the following:
while True:
    requestBytes = clientConnection.recv(1024)
    if not requestBytes:
        break
    requestBuffer += requestBytes
The proxy then decodes the bytes and does some things with them that take a little bit of time, and then has to send a response back to the same client. However, when using the above code, the connection with clientConnection gets closed long before I can process the bytes and respond.
Here's what I don't understand, when I use the following instead:
while True:
    requestBytes = clientConnection.recv(1024)
    requestBuffer += requestBytes
    break
It works just fine and the clientConnection remains intact. This obviously has a problem if I receive more than 1024 bytes, but the clientConnection does not get closed.
More specifically, the error occurs after I have a response to send to the client and call:
clientConnection.sendall(response)
clientConnection.shutdown(1)
clientConnection.close()
The line clientConnection.shutdown(1) throws the error:
[Errno 107] Transport endpoint is not connected
which is confusing because somehow it was able to still call sendall on the previous line. Note that I did not actually receive anything on the client side.
I am sure that the connection is not getting closed elsewhere in the code. What exactly is happening here and what is the best way to do something like recvall and keep the clientConnection open?
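For what it's worth, one pattern I'm considering is to read until a delimiter instead of until EOF, since recv() only returns an empty bytes object once the peer has closed its side. A rough sketch, assuming each request ends with a blank line (the delimiter and the placeholder response are assumptions, not my real protocol):
def recv_request(connection, delimiter=b"\r\n\r\n"):
    buffer = b""
    while delimiter not in buffer:
        chunk = connection.recv(1024)
        if not chunk:        # the peer closed its end of the connection
            return buffer or None
        buffer += chunk
    return buffer

request = recv_request(clientConnection)
if request is not None:
    response = b"HTTP/1.1 200 OK\r\n\r\n"   # placeholder response
    clientConnection.sendall(response)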

Use ColdFusion to read events over a TCP/IP stream

Our new phone system makes use of the Asterisk Manager API, which allows reading events and issuing commands over a TCP/IP stream. My question is: is there any way at all to use ColdFusion to read (and in turn process) the stream of events? As of now I'm able to view the phone events (incoming calls, transfers, hang-ups, etc.) via telnet, and I'm wondering if it's possible to use a ColdFusion event gateway to process these events as they come in.
Once the initial connection is made (via telnet), I have to submit the following key:values in order to authenticate the connection before the stream begins.
Action: login<CRLF>
Username: usr<CRLF>
Secret: abc123<CRLF>
<CRLF>
Just wanted to specify that as I'm not sure if that's a deal-breaker with possibly using a web service in this manner. Also note we are using ColdFusion 10 Enterprise.
I realize this is an old thread, but I am posting this in case it helps the next guy ....
AFAIK, it cannot be done with a standard CF Event Gateway. However, one possibility is using Asterisk-Java. It is a Java library that allows communication with an Asterisk server. More specifically, its Manager interface:
... is capable of sending [actions] and receiving [responses] and [events]. It does not add any further functionality but rather provides a Java view to Asterisk's Manager API (freeing you from TCP/IP connection and parsing stuff).
So it can be used to issue commands to the server, and receive events, just like telnet.
Starter example:
Download the Asterisk-Java jar and load it via this.javaSettings in your Application.cfc
Create a ManagerConnection with the settings for your Asterisk server
factory = createObject("java", "org.asteriskjava.manager.ManagerConnectionFactory");
connection = factory.init( "hostNameOrIP"
                         , portNum
                         , "userName"
                         , "theSecret" ).createManagerConnection();
Create a CFC to act as a listener. It will receive and handle events from Asterisk:
component {
    public void function onManagerEvent(any managerEvent)
    {
        // For demo purposes, just output a summary of the event
        WriteLog( text=arguments.managerEvent.toString(), file="AsteriskLog" );
    }
}
Using a bit of dynamic proxy magic, register the CFC with the connection:
proxyListener = createDynamicProxy( "path.YourCFCListener"
                                  , ["org.asteriskjava.manager.ManagerEventListener"] );
connection.addEventListener( proxyListener );
Log in to the server to begin receiving events, setting the appropriate event level: "off", "on", or a CSV list of the available events ("system", "call" and/or "log").
connection.login("on");
Run a simple "Ping" test to verify everything is working. Then sleep for a few seconds to allow some events to flow. Then close the connection.
action = createObject("java", "org.asteriskjava.manager.action.PingAction").init();
response = application.connection.sendAction(action);
writeDump(response.getResponse());
// disconnect and stop events
sleep(4000);
connection.logoff();
Check the demo log file. It should contain one or more events.
"Information","http-bio-8500-exec-4","10/14/16","15:17:19","XXXXX","org.asteriskjava.manager.event.ConnectEvent[dateReceived=Fri Oct 14 15:17:19 CDT 2016,....]"
NB: In a real application, the connection would probably be opened once, in OnApplicationStart, and stored in a persistent scope. Events would continue to stream as long as the connection remained open. The connection should only be closed when the application ends, to halt event streaming.
Yes-- you'd want to use a Socket Gateway. Ben Nadel has a great writeup about how to do this: Using Socket Gateways To Communicate Between ColdFusion And Node.js
Although he uses Node.js in his example, you should be able to use his guide to set up the Socket Gateway, then handle the data passed to it as you see fit.
What you want is a server-side TCP client. I suggest easySocket, a simple UDF that allows you to send TCP messages from ColdFusion by utilizing Java sockets.