HttpClient: Connection reset after 5 minutes idle, keep-alive possible? - web-services

I'm using HttpClient to send a SOAP request to a webservice to query some data. For some webservice parameters the execution takes longer than 5 minutes, and after 5 minutes I get a java.net.SocketException: Connection reset.
I think the error occurs because the connection idles for more than 5 minutes and a firewall then kills it.
Is there a way to send a keep-alive packet for an HTTP POST request, or something else to keep the connection alive? (I need a client-side solution if possible.)
If you google for HttpClient keep-alive you find a lot of topics about reusing a connection. In my case I only want to keep the connection alive until I get a response.
Method to execute the SOAP request:
import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.methods.PostMethod
import org.apache.commons.httpclient.methods.RequestEntity
import org.apache.commons.httpclient.methods.StringRequestEntity

def executeSOAPRequest(String url, String content, String soapAction, Integer timeout) {
    def retVal = new SoapResponse()
    PostMethod post = new PostMethod(url)
    // StringRequestEntity(content, contentType, charset) -- the second argument
    // is the content type, not the charset
    RequestEntity entity = new StringRequestEntity(content, "text/xml", "ISO-8859-1")
    post.setRequestEntity(entity)
    post.setRequestHeader("SOAPAction", soapAction)
    HttpClient httpclient = new HttpClient()
    httpclient.setTimeout(timeout)
    try {
        retVal.httpResponse = httpclient.executeMethod(post)
        retVal.httpResponseBody = post.getResponseBodyAsString()
    } catch (Exception e) {
        // ... exception handling ...
    } finally {
        // ... finally stuff ...
    }
    return retVal
}
Currently HttpClient 3.1 is used.

Five minutes is an eternity in telecommunications; these days a gigabyte can be transferred in less time, and keeping a connection idle consumes resources not only on the end machines but also in intermediate nodes such as routers and firewalls.
So IMHO you shouldn't try to keep the connection alive for that long, especially if you don't manage the networks you are using (i.e., firewalls may have their own timeouts and kill your connection). You should reduce the time the server needs to respond, or use another, asynchronous communication mechanism.
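That said, if you still want to try a purely client-side mitigation, one option sometimes suggested is enabling TCP keep-alive on the underlying socket. HttpClient 3.1 does not expose this directly, but you can register a custom ProtocolSocketFactory. A minimal sketch (the class name is mine; note also that most operating systems send the first keep-alive probe only after about two hours unless the kernel's keep-alive settings are tuned down):

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;

import org.apache.commons.httpclient.ConnectTimeoutException;
import org.apache.commons.httpclient.params.HttpConnectionParams;
import org.apache.commons.httpclient.protocol.ProtocolSocketFactory;

// Sketch: a socket factory that turns on OS-level TCP keep-alive so probes
// are sent on the otherwise idle connection while the server is working.
public class KeepAliveSocketFactory implements ProtocolSocketFactory {

    private Socket configure(Socket socket) throws IOException {
        socket.setKeepAlive(true); // OS-level TCP keep-alive
        return socket;
    }

    public Socket createSocket(String host, int port)
            throws IOException, UnknownHostException {
        return configure(new Socket(host, port));
    }

    public Socket createSocket(String host, int port, InetAddress localAddress, int localPort)
            throws IOException, UnknownHostException {
        return configure(new Socket(host, port, localAddress, localPort));
    }

    public Socket createSocket(String host, int port, InetAddress localAddress, int localPort,
            HttpConnectionParams params)
            throws IOException, UnknownHostException, ConnectTimeoutException {
        // a real implementation would honor params.getConnectionTimeout() here
        return createSocket(host, port, localAddress, localPort);
    }
}

Register it before the request is executed, e.g. Protocol.registerProtocol("http", new Protocol("http", new KeepAliveSocketFactory(), 80));. Whether the probes actually keep a given firewall's state alive depends on the network, so treat this as an experiment, not a guarantee.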

Related

Timeout on 3rd attempt

I've set up two services with Grapevine in a client-server scheme, and I'm having the following problem: after the second request, the server or the client hits a timeout limit.
After calling RestResponse response = this.client.Execute(request) to execute the request, I can see that it never arrives at the server.
This always happens on the 3rd call I send.

zmq DEALER socket zmq_recv_msg call always timeouts

I am using zmq ROUTER and DEALER sockets in my application (C++).
One process is listening on a zmq ROUTER socket for clients to connect (a service).
Clients connect to this service using a zmq DEALER socket. From the client I am doing synchronous (blocking) requests to the service. To avoid waiting infinitely for a response, I am setting RCVTIMEO on the DEALER socket to, let's say, 5 ms. After setting this timeout I observe unexpected behaviour on the client.
Here are the details:
Case 1: No RCVTIMEO is set on DEALER (client) socket
In this case, let's say the client sends 1000 requests to the service. For around 850 of these requests, the client receives a response within 5 ms; for the remaining 150 the response takes more than 5 ms to arrive.
Case 2: RCVTIMEO is set for 5 ms on DEALER (client) socket
In this case, for the first 150-200 requests I see a valid response received within the RCVTIMEO period. For all remaining requests I see an RCVTIMEO timeout, which is not expected. The requests in both cases are the same.
The expected behaviour would be: for ~850 requests we should receive a valid response (as they arrive within RCVTIMEO), and for the remaining ~150 requests we should see a timeout.
To get the timeout feature I also tried zmq_poll() instead of setting RCVTIMEO, but the results are the same: most of the requests time out.
I went through the zmq documentation for details, but didn't find anything.
Can someone please explain the reason for this behaviour?
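For reference, here is a minimal sketch of the client side as described, using the JeroMQ Java binding for brevity (the C++ API behaves the same way; endpoint and message contents are made up). The usual explanation for the observed behaviour is that a reply arriving after RCVTIMEO expires is not discarded: it stays queued on the DEALER socket and is returned by the next recv(), so from that point on every recv() hands back the reply to the previous request, and replies drift permanently out of step. The ZeroMQ guide's "Lazy Pirate" pattern therefore closes and reopens the socket after a timeout:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class DealerClientSketch {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket client = ctx.createSocket(SocketType.DEALER);
            client.setReceiveTimeOut(5);             // RCVTIMEO = 5 ms
            client.connect("tcp://localhost:5555");  // hypothetical endpoint

            for (int i = 0; i < 1000; i++) {
                client.send("request-" + i);
                byte[] reply = client.recv();        // returns null on timeout
                if (reply == null) {
                    // A late reply to request-i would stay queued and be
                    // returned by the NEXT recv(), pairing it with request-i+1.
                    // Lazy Pirate: destroy and recreate the socket to flush it.
                    ctx.destroySocket(client);
                    client = ctx.createSocket(SocketType.DEALER);
                    client.setReceiveTimeOut(5);
                    client.connect("tcp://localhost:5555");
                }
            }
        }
    }
}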

Netty file transfer proxy suffers big connection delays under high concurrency

I am doing a project of building a file transfer proxy using netty which should efficiently handle high concurrency.
Here is my structure:
Back Server: a normal file server, just like the Http(File Server) example on netty.io, which receives and confirms a request and sends out a file using either ChunkedBuffer or zero-copy.
Proxy: has both a NioServerSocketChannelFactory and a NioClientSocketChannelFactory, both using cachedThreadPool; it listens for clients' requests and fetches the file from the Back Server back to the clients. Once a new client is accepted, a new Channel (channel1) is created by the NioServerSocketChannelFactory and waits for the request. Once the request is received, the Proxy establishes a new connection to the Back Server using the NioClientSocketChannelFactory, and the new Channel (channel2) sends the request to the Back Server and delivers the response to the client. channel1 and channel2 each use their own pipeline.
More simply, the procedure is:
channel1 accepted
channel1 receives the request
channel2 connected to Back Server
channel2 sends the request to Back Server
channel2 receives the response (including the file) from Back Server
channel1 sends the response received on channel2 to the client
once the transfer is done, channel2 closes and channel1 closes on flush (each client sends only one request)
Since the requested file can be big (10MB), the proxy turns off channel2's readability when channel1 is NOT writable, just like the Proxy Server example on netty.io, as in the sketch below.
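A minimal Netty 3-style sketch of that throttling, modeled on the HexDumpProxy example (the class and field names are mine, and in real code channel2 is only set once the outbound connection is established):

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Sits in channel2's pipeline: forwards the Back Server's response to the
// client and suspends reads while the client connection cannot keep up.
public class BackServerRelayHandler extends SimpleChannelUpstreamHandler {

    private final Channel channel1; // the accepted client connection

    public BackServerRelayHandler(Channel channel1) {
        this.channel1 = channel1;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        channel1.write((ChannelBuffer) e.getMessage());
        if (!channel1.isWritable()) {
            // back-pressure: stop reading from the Back Server
            e.getChannel().setReadable(false);
        }
    }
}

// Sits in channel1's pipeline: resumes reads on channel2 once the client's
// outbound buffer has drained.
class ClientSideHandler extends SimpleChannelUpstreamHandler {

    private final Channel channel2; // the connection to the Back Server

    ClientSideHandler(Channel channel2) {
        this.channel2 = channel2;
    }

    @Override
    public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) {
        if (e.getChannel().isWritable()) {
            channel2.setReadable(true);
        }
    }
}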
With the above structure, each client has one accepted Channel, and once it sends a request it also corresponds to one client Channel until the transfer is done.
Then I use ab (Apache Bench) to fire thousands of requests at the proxy and measure the request time. Proxy, Back Server and Client are three boxes on one rack with no other traffic on it.
The results are weird:
File size 10MB: when concurrency is 1, the connection delay is very small, but when concurrency increases from 1 to 10, the top 1% of connection delays becomes incredibly high, up to 3 secs. The other 99% are very small. When concurrency increases to 20, the top 1% goes to 8 sec, and ab even times out if concurrency is higher than 100. The 90th-percentile processing delay is usually linear with the concurrency, but the top 1% can go abnormally high at a random concurrency level (it varies over multiple test runs).
File size 1K: everything is fine, at least with concurrency below 100.
With all three on a single local machine, there is no connection delay.
Can anyone explain this issue and tell me which part is wrong? I have seen a lot of benchmarking online, but it is pure ping-pong testing rather than this large-file transfer and proxy scenario. Hope this is interesting to you guys :)
Thank you!
========================================================================================
After some source code reading today, I found one place that may prevent new sockets from being accepted. In NioServerSocketChannelSink.bind(), the boss executor calls Boss.run(), which contains a for loop for accepting incoming sockets. In each iteration of this loop, after getting the accepted channel, AbstractNioWorker.register() is called, which is supposed to add the new socket to the selector running in the worker executor. However, in register(), a mutex called startStopLock has to be acquired before the worker executor is invoked. This startStopLock is also used in AbstractNioWorker.run() and AbstractNioWorker.executeInIoThread(), both of which take the mutex before they invoke the worker thread. In other words, startStopLock is used in 3 functions. If it is held while AbstractNioWorker.register() runs, the for loop in Boss.run() is blocked, which can cause incoming accept delay. Hope this helps.

How do I set a timeout for TIdHTTPProxyServer (not a connection timeout)

I am using TIdHTTPProxyServer and I want to terminate the connection when it successfully connects to the target HTTP server but receives no response for a long time (e.g. 3 mins).
Currently I can find no related property or event for this. Worse, if the client terminates the connection before the proxy server receives the response from the HTTP server, the OnException event is not fired until the proxy server receives the response. (That is, while the proxy server is still receiving no response from the HTTP server, I do not even know that the client has already terminated the connection...)
Any help will be appreciated.
Thanks!
Willy
Indy uses infinite timeouts by default. To do what you are asking for, you need to set the ReadTimeout property of the outbound connection to the target server. You can access that connection via the TIdHTTPProxyServerContext.OutboundClient property. Use the OnHTTPBeforeCommand event, which is triggered just before the OutboundClient connects to the target server, e.g.:
#include "IdTCPClient.hpp"

void __fastcall TForm1::IdHTTPProxyServer1HTTPBeforeCommand(TIdHTTPProxyServerContext *AContext)
{
    static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = ...;
}
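For the 3-minute cutoff mentioned in the question, that would presumably be ReadTimeout = 180000, since Indy's ReadTimeout is expressed in milliseconds.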

How to set a connection/request timeout for a Jetty server?

I'm running an embedded Jetty server (Jetty 6.1.24) inside my application like this:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.Connector;
import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.AbstractHandler;

Handler handler = new AbstractHandler()
{
    @Override
    public void handle(String target, HttpServletRequest request,
            HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // this can take a long time
        doSomething();
    }
};

Server server = new Server(8080);
Connector connector = new org.mortbay.jetty.nio.SelectChannelConnector();
server.addConnector(connector);
server.setHandler(handler);
server.start();
I would like to set a timeout value (2 seconds) so that if the handler.handle() method takes more than 2 seconds, the Jetty server times out and responds to the client with HTTP 408 (Request Timeout).
This is to guarantee that my application will not hold the client request for a long time and always responds within 2 seconds.
I did some research and tested "connector.setMaxIdleTime(2000);", but it doesn't work.
Take a look at the API for SelectChannelConnector (Jetty):
http://download.eclipse.org/jetty/7.6.17.v20150415/apidocs/org/eclipse/jetty/server/nio/SelectChannelConnector.html
I've tried to locate any timeout features of the channel (which controls incoming connections): setMaxIdleTime(), setLowResourceMaxIdleTime() and setSoLingerTime() appear to be available.
NOTE: the reason your timeout setting does not work has to do with the nature of sockets on your operating system, and perhaps with the nature of Jetty itself (I've read about it somewhere, but cannot remember where).
NOTE 2: I'm not sure why you are trying to limit the timeout; if you're trying to prevent denial of service, perhaps a better approach is limiting the buffer sizes?
Yes, this is possible. You can do this using Jetty's DoSFilter. This filter is generally used to configure a DoS-attack prevention mechanism for your Jetty web server, and its 'maxRequestMs' init parameter provides what you are looking for.
For more details, see:
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/servlets/DoSFilter.html
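A minimal sketch of wiring the filter up in embedded Jetty (this uses the Jetty 9 servlet classes rather than the asker's Jetty 6.1.24, needs the jetty-servlets jar, and the handler wiring is illustrative):

import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.DoSFilter;

public class DoSFilterSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        FilterHolder dos = new FilterHolder(DoSFilter.class);
        // abort requests that run longer than 2 seconds
        dos.setInitParameter("maxRequestMs", "2000");
        context.addFilter(dos, "/*", EnumSet.of(DispatcherType.REQUEST));
        // servlets serving the actual content would be registered on context here

        server.setHandler(context);
        server.start();
        server.join();
    }
}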