How to gracefully handle auto disconnect of Daphne WebSockets - Django

Daphne has a parameter --websocket_timeout (see the documentation). As mentioned in the docs:
--websocket_timeout WEBSOCKET_TIMEOUT
Maximum time to allow a websocket to be connected. -1 for infinite.
The socket is disconnected and no further communication can be done. However, the client does not receive a disconnect event, hence it can't handle the disconnect gracefully. How does my client get to know whether the socket is disconnected or not? I don't want to keep a timer at the client, nor do I want to keep rechecking the connection.
This is how I deploy my app:
daphne -b 0.0.0.0 -p 8000 --websocket_timeout 1800 app.asgi:application
The socket gets auto-disconnected every 30 minutes, but the client never gets to know about this.
What's the right way to go about it?
Update
I'm trying to send an event before the connection is closed by overriding my websocket_disconnect handler so that it sends JSON before disconnecting. However, it does not send the event.
class Consumer(AsyncJsonWebsocketConsumer):

    async def websocket_disconnect(self, message):
        """Overriding."""
        print('Inside websocket_disconnect consumer')
        await self.send_json(
            {"event": "disconnecting..."}
        )
        await super().websocket_disconnect(message)

I'm not sure it's a problem that needs a solution. The client can be certain that after X minutes of inactivity it will get disconnected, where X is determined by the server. It has no certainty that it won't happen sooner, so you need connection-handling code on the client regardless.
While it seems dirty to keep an idle connection around, I can't imagine it costing a lot of resources.
Your premise that the client doesn't get to know about it is wrong: when you register an onclose handler, the client receives the disconnect event and can act accordingly.
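For illustration, here is a minimal sketch of such client-side handling. It assumes a Python client built on the websockets library (the library choice and the URL are assumptions, not part of the original question); in a browser client the equivalent hook is the WebSocket onclose handler.
import asyncio
import websockets

async def listen():
    while True:
        try:
            async with websockets.connect("ws://localhost:8000/ws/") as ws:
                async for message in ws:
                    print("received:", message)
        except websockets.ConnectionClosed as exc:
            # Fires when the server (e.g. Daphne's --websocket_timeout) closes
            # the socket; this is the client-side "disconnect event".
            print("server closed the connection, code:", exc.code)
        # Reconnect after a short pause instead of keeping a timer or polling.
        await asyncio.sleep(1)

asyncio.run(listen())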

Related

How can a gRPC client tell whether the gRPC server has cancelled?

I'm using a synchronous bidirectional streaming gRPC service, and I've made the server side close the connection when a timeout is detected.
// server-side code
if (timeout_detected) {
    server_context_->TryCancel();
}
My question is: how do I detect on the client side whether the connection is still valid? If I can detect it, I can re-establish the connection. I've tried:
// client-side code
if (client_reader_writer_->Write(&request)) {
    // I consider the connection still valid
} else {
    // the connection is considered cancelled; re-establish it and do something
}
But client_reader_writer_->Write(&request) returns true even after the server's log has shown the call was cancelled.
If someone could give me some hints on this, I would be very grateful!
If your concern is about the link itself, keepalive can be used as a mechanism to check whether the channel is fine: HTTP/2 pings are transmitted at regular intervals, and if there is no response the connection is considered broken.
Another approach is to come up with your own "heartbeat" mechanism, where a heartbeat is sent periodically to check for server/network failure while trying to write to the socket.
For a server timeout in a typical scenario, the client can be notified using context.abort(grpc.StatusCode.DEADLINE_EXCEEDED, 'RPC Time Out!').
Here is a reference for the same.
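As a hedged sketch of the keepalive approach in Python gRPC (the target address and interval values below are assumptions), the channel can be configured to send HTTP/2 pings and its connectivity can be observed like this:
import grpc

# HTTP/2 keepalive pings let the client notice a broken link without writing data.
options = [
    ("grpc.keepalive_time_ms", 10000),           # send a ping every 10 seconds
    ("grpc.keepalive_timeout_ms", 5000),         # treat the link as broken if no ack within 5 s
    ("grpc.keepalive_permit_without_calls", 1),  # ping even when no RPC is in flight
]
channel = grpc.insecure_channel("localhost:50051", options=options)

def on_state_change(state):
    # TRANSIENT_FAILURE or SHUTDOWN indicates the connection is no longer usable.
    print("channel state:", state)

channel.subscribe(on_state_change, try_to_connect=True)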

What notification is provided for a lost connection in a C++ gRPC async server

I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data. I can set an Alarm object to trigger immediately and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process then there's no notification or callback that I've been able to find/enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2
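For comparison, the Python server API exposes a similar notification through grpc.ServicerContext.add_callback. A minimal sketch (the service and method names here are hypothetical, not from the question):
import grpc

class EchoServicer:  # would normally derive from the generated *_pb2_grpc base class
    def StreamEcho(self, request_iterator, context: grpc.ServicerContext):
        def on_rpc_done():
            # Called when the RPC terminates, including client cancellation
            # or a lost connection surfaced by keepalive.
            print("RPC finished, still active:", context.is_active())

        context.add_callback(on_rpc_done)
        for request in request_iterator:
            yield request  # placeholder for real streaming work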

How often exactly does CCS need to close down a connection to perform load balancing?

I have an XMPP client and I have never received a CONNECTION_DRAINING message, which is why I have this question:
how often exactly does CCS need to close down a connection to perform load balancing?
This is the part of my code where I check whether I receive a CONNECTION_DRAINING message:
............................... more code
def message_callback(session, message):
    global unacked_messages_quota
    gcmData = message.getTags('data:gcm')
    if gcmData:
        print "alert, the connection is being drained and will be closed soon !!!!!!!!!!!!!"
    gcm = message.getTags('gcm')
    if gcm:
        gcm_json = gcm[0].getData()
        msg = json.loads(gcm_json)
        if not msg.has_key('message_type'):
            # Acknowledge the incoming message immediately.
            send({'to': msg['from'],
                  'message_type': 'ack',
                  'message_id': msg['message_id']})
.......................................................... more code
I have read the docs at https://developers.google.com/cloud-messaging/ccs,
specifically this part:
Periodically, CCS needs to close down a connection to perform load
balancing. Before it closes the connection, CCS sends a
CONNECTION_DRAINING message to indicate that the connection is being
drained and will be closed soon. "Draining" refers to shutting off the
flow of messages coming into a connection, but allowing whatever is
already in the pipeline to continue. When you receive a
CONNECTION_DRAINING message, you should immediately begin sending
messages to another CCS connection, opening a new connection if
necessary. You should, however, keep the original connection open and
continue receiving messages that may come over the connection (and
ACKing them)—CCS handles initiating a connection close when it is
ready.
The CONNECTION_DRAINING message looks like this:
<message>
  <data:gcm xmlns:data="google:mobile:data">
  {
    "message_type":"control"
    "control_type":"CONNECTION_DRAINING"
  }
  </data:gcm>
</message>
It is usually at least once a week, but can be much more frequent depending on the load.
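For completeness, here is a hedged sketch of checking the control payload from the callback in the question for the documented control_type (the helper name is made up; it reuses the xmpppy-style message object from the code above):
import json

def is_connection_draining(message):
    # Inspect every <data:gcm> stanza and look for the control payload
    # documented above.
    for tag in message.getTags('data:gcm') or []:
        try:
            payload = json.loads(tag.getData())
        except ValueError:
            continue
        if (payload.get('message_type') == 'control'
                and payload.get('control_type') == 'CONNECTION_DRAINING'):
            return True
    return False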

Synchronous ActiveMQ web service

I have a RESTful web service that sends a message through ActiveMQ and synchronously receives the response by creating a temporary listener in the same request.
The problem is that the listener waits for the response of the synchronous process but never dies. I need the listener to receive the response and then stop immediately, once the web service request has been answered.
This is a big problem, because a listener is created for each web service request and stays active, producing overhead.
The code in the link is not production grade - it's simply an example of how to make a "hello world" request-reply.
Here is some pseudo code to deal with consuming the response blockingly and closing the consumer afterwards.
MessageConsumer responseConsumer = session.createConsumer(tempDest);
Message response = responseConsumer.receive(waitTimeout);
// TODO handle msg
responseConsumer.close();
Temp destinations in JMS are pretty slow anyway. You can instead use JMSCorrelationID and make the replies go to a "regular queue" handled by a single consumer for all replies. That way you need some thread-handling code to hand the message over to the web service thread, but it will be non-blocking and very fast.
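The JMS snippets above are Java, but the correlation-ID pattern itself is broker-agnostic. Here is a minimal Python sketch of the dispatching side only (all names are made up and the actual broker send/receive calls are omitted):
import threading
import uuid

class ReplyDispatcher:
    """Routes replies arriving on one shared reply queue to waiting request threads."""

    def __init__(self):
        self._pending = {}              # correlation_id -> {"event", "reply"}
        self._lock = threading.Lock()

    def register_request(self):
        # Called by the web service thread before sending the request message.
        correlation_id = str(uuid.uuid4())
        entry = {"event": threading.Event(), "reply": None}
        with self._lock:
            self._pending[correlation_id] = entry
        return correlation_id, entry

    def on_reply(self, correlation_id, body):
        # Called by the single consumer listening on the shared reply queue.
        with self._lock:
            entry = self._pending.pop(correlation_id, None)
        if entry is not None:
            entry["reply"] = body
            entry["event"].set()

    def wait_for_reply(self, entry, timeout):
        # Blocks the web service thread until the reply arrives or times out.
        return entry["reply"] if entry["event"].wait(timeout) else None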

How to increase the socket timeout on the server side using Restify?

I use restify to implement a Node.js server. Basically, the server runs a time-consuming process per HTTP POST request, but somehow the socket gets closed and the client receives an error message like this:
[Error: socket hang up] code: 'ECONNRESET'
According to the error type, the socket is definitely closed on the server side.
Is there any option that I can set in restify's createServer method to solve this problem?
Edit:
The long-running process uses Mongoose to run a MongoDB operation. Could the socket hang-up also be caused by the connection to MongoDB? How do I increase the timeout for Mongoose? I found that the hang-up happens at exactly 120 seconds, so it might be due to some default timeout configuration?
Thanks in advance!
You can use the standard socket on the req object, and manually call setTimeout to increase the time before node hangs up the socket. By default, node has a 2 minute timer on all sockets for inactivity, which is why you are getting hang ups at exactly 120s (this has nothing to do with restify). As an example of increasing that, set up a handler to run before your long running task like this:
server.use(function (req, res, next) {
    // This will set the idle timer to 10 minutes
    req.connection.setTimeout(600 * 1000);
    res.connection.setTimeout(600 * 1000); // **Edited**
    next();
});
This seems not to actually be implemented:
https://github.com/mcavage/node-restify/issues/288