How to increase the socket timeout on the server side using Restify? - web-services

I use restify to implement a Node.js server. Basically, the server runs a time-consuming process for each HTTP POST request, but somehow the socket gets closed and the client receives an error message like this:
[Error: socket hang up] code: 'ECONNRESET'
According to the error type, the socket is definitely closed on the server side.
Is there any option that I can set in the createServer method of restify to solve this problem?
Edit:
The long-running process uses Mongoose to run MongoDB operations. Could the socket hang-up also be caused by the connection to MongoDB? How do I increase the timeout for Mongoose? I found that the hang-up happens at exactly 120 seconds, so it might be caused by some default timeout configuration.
Thanks in advance!

You can use the standard socket on the req object and manually call setTimeout to increase the time before Node hangs up the socket. By default, Node has a 2-minute inactivity timer on all sockets, which is why you are getting hang-ups at exactly 120 s (this has nothing to do with restify). As an example of increasing that, set up a handler to run before your long-running task like this:
server.use(function (req, res, next) {
    // This sets the idle timer on the underlying socket to 10 minutes
    req.connection.setTimeout(600 * 1000);
    res.connection.setTimeout(600 * 1000);
    next();
});
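For context, here is a minimal sketch of where that handler would sit in a full restify server; the /process route, port 8080 and runLongJob() helper are placeholders, not part of the original question:

var restify = require('restify');

var server = restify.createServer();

// Raise the socket idle timeout before any route handler runs.
server.use(function (req, res, next) {
    req.connection.setTimeout(600 * 1000);  // 10 minutes
    res.connection.setTimeout(600 * 1000);
    next();
});

// Placeholder long-running endpoint, e.g. the Mongoose-backed work.
server.post('/process', function (req, res, next) {
    runLongJob()                      // hypothetical helper doing the Mongoose/MongoDB work
        .then(function (result) {
            res.send(200, result);
            next();
        })
        .catch(next);
});

server.listen(8080);

Because server.use handlers run before the route handlers, the longer timeout is already in place when the slow request starts.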

This seems not to be actually implemented:
https://github.com/mcavage/node-restify/issues/288

Related

How to gracefully handle auto disconnect of Daphne websockets

Daphne has a parameter --websocket_timeout. As mentioned in the docs:
--websocket_timeout WEBSOCKET_TIMEOUT
Maximum time to allow a websocket to be connected. -1 for infinite.
The socket is disconnected and no further communication can be done. However, the client does not receive a disconnect event, hence it can't handle it gracefully. How does my client get to know whether the socket is disconnected or not? I don't want to keep a timer at the client, nor do I want to keep rechecking the connection.
This is how I deploy my app
daphne -b 0.0.0.0 -p 8000 --websocket_timeout 1800 app.asgi:application
The socket gets auto-disconnected every 30 minutes, but the client never gets to know about this.
What's the right way to go about it?
Update
I'm trying to send an event before the connection is closed, by overriding my websocket_disconnect handler so that it sends JSON before disconnecting. However, it does not send the event.
class Consumer(AsyncJsonWebsocketConsumer):
    async def websocket_disconnect(self, message):
        """Overriding."""
        print('Inside websocket_disconnect consumer')
        await self.send_json({
            "event": "disconnecting..."
        })
        await super().websocket_disconnect(message)
I'm not sure it's a problem that needs a solution. The client has a certainty that after X minutes of inactivity it will get disconnected, where X is determined by the server. It has no certainty it won't happen before that. So you need connectivity handling code regardless.
While it seems dirty to keep an idling connection around, I can't imagine it costing a lot of resources.
Your premise that the client doesn't get to know about it is wrong. When you register the onclose handler, the client receives a disconnect event and can act accordingly.
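For illustration, a minimal browser-side sketch of that onclose handling (the wss://example.com/ws/ URL and the reconnect delay are placeholders):

function connect() {
    var socket = new WebSocket('wss://example.com/ws/');   // placeholder endpoint

    socket.onmessage = function (event) {
        console.log('Received: ' + event.data);
    };

    socket.onclose = function (event) {
        // Fires when Daphne closes the socket (e.g. --websocket_timeout expires)
        // as well as on network drops, so no client-side timer is needed.
        console.log('WebSocket closed, code=' + event.code);
        setTimeout(connect, 1000);   // reconnect after a short delay
    };
}

connect();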

What notification is provided for a lost connection in a C++ gRPC async server

I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data. I can set an Alarm object to trigger immediately and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process then there's no notification or callback that I've been able to find/enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2
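As a rough sketch of how that fits into the usual async CallData pattern (assuming the stock helloworld Greeter service from the gRPC examples; the Tag/CallData scaffolding and HandleEvents name are illustrative, and error handling, shutdown and memory management are omitted):

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"   // assumed: generated from the example helloworld.proto

using helloworld::Greeter;
using helloworld::HelloRequest;
using helloworld::HelloReply;

// Two distinct tags per call so the completion-queue events can be told apart.
struct CallData;
struct Tag { CallData* call; bool is_done_event; };

struct CallData {
    CallData(Greeter::AsyncService* service, grpc::ServerCompletionQueue* cq)
        : responder(&ctx), request_tag{this, false}, done_tag{this, true} {
        // Must be registered before the RPC starts; done_tag is delivered on the
        // completion queue when the RPC terminates for any reason, including the
        // client disconnecting or being killed.
        ctx.AsyncNotifyWhenDone(&done_tag);
        service->RequestSayHello(&ctx, &request, &responder, cq, cq, &request_tag);
    }

    grpc::ServerContext ctx;
    HelloRequest request;
    grpc::ServerAsyncResponseWriter<HelloReply> responder;
    Tag request_tag;
    Tag done_tag;
};

void HandleEvents(grpc::ServerCompletionQueue* cq) {
    void* raw_tag;
    bool ok;
    while (cq->Next(&raw_tag, &ok)) {
        Tag* tag = static_cast<Tag*>(raw_tag);
        if (tag->is_done_event) {
            // Fires on normal completion and on lost connections alike;
            // IsCancelled() is only meaningful once this event has arrived.
            if (tag->call->ctx.IsCancelled()) {
                // Client went away (crash, network loss, explicit cancel).
            }
        } else if (ok) {
            // Normal request processing: fill a HelloReply, call
            // tag->call->responder.Finish(...), and create a new CallData.
        }
    }
}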

Setting connection and read timeout for sockets in python thrift client

I am executing the following code to establish a connection to a Thrift server using the official Thrift library for Python:
transport = TSocket.TSocket(self.__host, self.__port)
transport.setTimeout(2000)
Does this set a combined total of 2 seconds for the connect timeout and the read timeout, or is it simply the connect timeout? If so, how do I set the read timeout, and vice versa?
setTimeout affects each operation separately: for example the send, recv and connect operations.
It looks like you cannot set the read timeout differently from the connect timeout.
Also, since the timeout applies per operation, if you do a connect and then a read you can spend up to 2 + 2 = 4 seconds in total.
See socket.settimeout(...), which is the method that the Thrift TSocket uses underneath.
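A small sketch of how that plays out in practice; in the Python library, setTimeout() also applies to an already-open handle, so you can use one value for the connect and a different one for later reads and writes. MyService, do_work() and the host/port are placeholders:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
# from myservice import MyService          # hypothetical generated client

socket = TSocket.TSocket('backend.example.com', 9090)   # placeholder host/port
socket.setTimeout(2000)                    # 2 s, applies to the connect below
transport = TTransport.TBufferedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(transport)

transport.open()                           # connect, bounded by the 2 s timeout
socket.setTimeout(10000)                   # from here on, 10 s per send/recv
# client = MyService.Client(protocol)
# result = client.do_work()                # each read/write gets the 10 s budget
transport.close()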

Jetty HTTP/2: How to set client session timeout?

I'm trying to create one session and reuse it for every request.
The problem is that if I try to send a request more than 30 seconds after the session was created, I get:
Caused by: java.nio.channels.ClosedChannelException
    at org.eclipse.jetty.http2.HTTP2Session$ControlEntry.succeeded(HTTP2Session.java:1224) ~[http2-common-9.4.0.v20161208.jar:9.4.0.v20161208]
I tried this:
SSLSessionContext clientSessionContext = sslContextFactory.getSslContext().getClientSessionContext();
clientSessionContext.setSessionTimeout(60000);
but it doesn't seem to work.
If you are using HttpClient, the client idle timeout can be set with HttpClient.setIdleTimeout(long).
If you are using the low-level HTTP2Client, the client idle timeout can be set with HTTP2Client.setIdleTimeout(long).
Both will control the connection/session idle timeout, which is apparently what you want. A negative value will disable the idle timeout.
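For example, a minimal sketch with the high-level client over HTTP/2 (class names as in Jetty 9.4; the class name and timeout value here are illustrative, so treat the exact wiring as an assumption for your version):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class Http2ClientTimeout {
    public static void main(String[] args) throws Exception {
        HTTP2Client http2Client = new HTTP2Client();
        SslContextFactory sslContextFactory = new SslContextFactory();

        HttpClient httpClient = new HttpClient(
                new HttpClientTransportOverHTTP2(http2Client), sslContextFactory);
        httpClient.setIdleTimeout(300_000);   // 5 minutes; a negative value disables it
        httpClient.start();

        // Reuse httpClient (and therefore the HTTP/2 session) for all requests.
        // When using the low-level HTTP2Client directly instead, call
        // http2Client.setIdleTimeout(...) before starting it.
    }
}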

Netty file transfer proxy suffers big connection delay under high concurrency

I am doing a project of building a file transfer proxy using netty which should efficiently handle high concurrency.
Here is my structure:
Back Server, a normal file server just like the Http (File Server) example on netty.io, which receives and confirms a request and sends out a file either using ChunkedBuffer or zero-copy.
Proxy, with both NioServerSocketChannelFactory and NioClientSocketChannelFactory, both using cachedThreadPool, listening to clients' requests and fetching the file from the Back Server back to the clients. Once a new client is accepted, the newly accepted channel (channel1) is created by NioServerSocketChannelFactory and waits for the request. Once the request is received, the Proxy establishes a new connection to the Back Server using NioClientSocketChannelFactory, and the new channel (channel2) sends the request to the Back Server and delivers the response to the client. channel1 and channel2 each use their own pipeline.
More simply, the procedure is
channel1 accepted
channel1 receives the request
channel2 connected to Back Server
channel2 sends the request to Back Server
channel2 receives the response (including the file) from Back Server
channel1 sends the response obtained from channel2 to the client
once the transfer is done, channel2 closes and channel1 closes on flush (each client only sends one request)
Since the requested file can be big (10 MB), the proxy stops channel2 from being readable when channel1 is NOT writable, just like the Proxy Server example on netty.io (a sketch of that throttling follows below).
With the above structure, each client has one accepted channel, and once it sends a request it also corresponds to one client channel until the transfer is done.
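For reference, the throttling mentioned above looks roughly like this with the Netty 3.x API (adapted from the idea in the proxy example; the handler and field names are illustrative, and the mirror-image logic that re-enables reading once channel1 becomes writable again is omitted):

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Sits in channel2's pipeline and forwards data from the Back Server to the client.
public class BackendRelayHandler extends SimpleChannelUpstreamHandler {

    private final Channel clientChannel;   // channel1

    public BackendRelayHandler(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        clientChannel.write(e.getMessage());
        // If the client side cannot keep up, stop reading from the Back Server
        // so the proxy does not buffer the whole 10 MB file in memory.
        if (!clientChannel.isWritable()) {
            e.getChannel().setReadable(false);
        }
    }
}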
Then I use ab (Apache Bench) to fire up thousands of requests to the proxy and evaluate the request time. Proxy, Back Server and Client are three boxes on one rack with no other traffic load.
The results are weird:
File size 10 MB: when concurrency is 1, the connection delay is very small, but when concurrency increases from 1 to 10, the top 1% of connection delays becomes incredibly high, up to 3 seconds. The other 99% are very small. When concurrency increases to 20, the top 1% goes to 8 seconds, and ab even times out if concurrency is higher than 100. The 90th-percentile processing delay is usually linear with the concurrency, but the top 1% can abnormally go very high at a seemingly random concurrency level (it varies across test runs).
File size 1 KB: everything is fine, at least with concurrency below 100.
When I put them all on a single local machine, there is no connection delay.
Can anyone explain this issue and tell me which part is wrong? I have seen many benchmarks online, but they are pure ping-pong tests rather than this kind of large-file transfer and proxying. Hope this is interesting to you guys :)
Thank you!
========================================================================================
After some source-code reading today, I found one place that may prevent new sockets from being accepted. In NioServerSocketChannelSink.bind(), the boss executor will call Boss.run(), which contains a for loop for accepting the incoming sockets. In each iteration of this loop, after getting the accepted channel, AbstractNioWorker.register() is called, which is supposed to add the new socket to the selector running in the worker executor. However, in register(), a mutex called startStopLock has to be acquired before the worker executor is invoked. This startStopLock is also used in AbstractNioWorker.run() and AbstractNioWorker.executeInIoThread(), both of which take the lock before they invoke the worker thread. In other words, startStopLock is used in three functions. If it is held while AbstractNioWorker.register() runs, the for loop in Boss.run() will be blocked, which can cause delay in accepting incoming connections. Hope this helps.