How to set a connection/request timeout for an embedded Jetty server?

I'm running an embedded Jetty server (Jetty 6.1.24) inside my application like this:
Handler handler = new AbstractHandler()
{
    @Override
    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // this can take a long time
        doSomething();
    }
};
Server server = new Server(8080);
Connector connector = new org.mortbay.jetty.nio.SelectChannelConnector();
server.addConnector(connector);
server.setHandler(handler);
server.start();
I would like to set a timeout value (2 seconds) so that if the handler.handle() method takes more than 2 seconds, the Jetty server times out and responds to the client with HTTP status 408 (Request Timeout).
This is to guarantee that my application will not hold the client request for a long time and always responds within 2 seconds.
I did some research and tested it with "connector.setMaxIdleTime(2000);", but it doesn't work.

Take a look at the API for SelectChannelConnector (Jetty):
http://download.eclipse.org/jetty/7.6.17.v20150415/apidocs/org/eclipse/jetty/server/nio/SelectChannelConnector.html
I've tried to locate the timeout features of the connector (which controls incoming connections): setMaxIdleTime(), setLowResourceMaxIdleTime() and setSoLingerTime() appear to be available.
NOTE: the reason your timeout setting does not work has to do with how sockets behave at the operating-system level, and perhaps with how Jetty itself handles idle connections (I've read about it somewhere, but cannot remember where).
NOTE 2: I'm not sure why you are trying to limit the timeout; if you're trying to prevent denial of service, perhaps a better approach is limiting buffer sizes?
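For illustration, a minimal sketch of where those setters live in the Jetty 6 API from the question. Note that they only bound how long an idle connection stays open; they do not abort a handle() call that is still running, which is why setting 2000 ms appears to have no effect on a slow handler:

import org.mortbay.jetty.Server;
import org.mortbay.jetty.nio.SelectChannelConnector;

Server server = new Server();
SelectChannelConnector connector = new SelectChannelConnector();
connector.setPort(8080);
// Bounds how long an *idle* connection is held open; it does not interrupt
// a handle() call that is still busy processing a request.
connector.setMaxIdleTime(2000);
// Stricter idle timeout applied when the server is low on resources.
connector.setLowResourceMaxIdleTime(1000);
server.addConnector(connector);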

Yes, this is possible. You could do this using Jetty's DoSFilter. This filter is generally used to configure a DoS-attack prevention mechanism for your Jetty web server, and its maxRequestMs init parameter provides what you are looking for.
For more details, check this:
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/servlets/DoSFilter.html
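Since DoSFilter is a servlet filter, the handler logic from the question would move into a servlet registered on a ServletContextHandler. A rough sketch against the Jetty 9 embedded API (package names differ on Jetty 6; SlowServlet is a placeholder for a servlet that runs your doSomething(), and the exact response the filter sends when the limit is hit may not be a 408):

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.servlets.DoSFilter;

Server server = new Server(8080);

// DoSFilter is a servlet filter, so the slow logic lives in a servlet
// registered on a ServletContextHandler rather than a raw Handler.
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
context.addServlet(new ServletHolder(new SlowServlet()), "/*"); // placeholder servlet

FilterHolder dos = new FilterHolder(DoSFilter.class);
dos.setInitParameter("maxRequestMs", "2000");      // cap on how long a single request may run
dos.setInitParameter("maxRequestsPerSec", "1000"); // keep the rate-limiting side permissive
context.addFilter(dos, "/*", EnumSet.of(DispatcherType.REQUEST));

server.setHandler(context);
server.start();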

Related

Limiting processing time of request for WCF or ASMX webservice

Let's say I have a web service (WCF and ASMX, .NET Framework 4.8) hosted on IIS 10. The web service has a method with this content:
public CustomerListResponse Get(CustomerListRequest request)
{
    // sleep for 1 hour
    System.Threading.Thread.Sleep(TimeSpan.FromHours(1));
    return new CustomerListResponse();
}
The Thread.Sleep line is just to show that there is code that can, in some cases, take a long time.
What I'm looking for is a setting or some way to limit the allowed processing time, for example to one minute, with an error returned to the client. I want the processing to be killed by IIS/WCF/ASMX if the execution time exceeds one minute.
Unfortunately I didn't find a way to do that in IIS. I also don't have access to the client code to set this limit on the other side; the change is possible only on the server side.
What I tried:
On the WCF binding there are a couple of properties: openTimeout="00:01:00" closeTimeout="00:01:00" sendTimeout="00:01:00" receiveTimeout="00:01:00". I set them all, but it didn't work; the code can still run for a long time.
<httpRuntime targetFramework="4.8" executionTimeout="60" /> - this also didn't work.
I don't have any other ideas for how to achieve this, but I believe there should be some way to control how long the server spends on processing.
You need to set the timeout on both the client side and the server side.
Client-side:
SendTimeout is used to initialize the OperationTimeout, which governs the whole interaction of sending a message (including receiving the reply message in a request-reply case). This timeout also applies when a reply message is sent from a callback contract method.
OpenTimeout and CloseTimeout are used when opening and closing channels (when no explicit timeout value is passed).
ReceiveTimeout is not used.
Server-side:
Send, open, and close timeouts are the same as on the client side (for callbacks).
ReceiveTimeout is used by the service framework layer to initialize the session-idle timeout.

Example of embedded Jetty and using Micrometer for stats (without Spring)

I am new to using Micrometer as a metrics/stats producer, and I am having a hard time getting it configured correctly with my Jersey/embedded Jetty server. I would like to add Jetty statistics.
I already have the servlet producing stats for the JVM in a Prometheus format.
Does anyone know of a good working example on how to configure it?
I am not using SpringBoot.
The best way is to look at the Spring Boot code. For example, it binds the Jetty connection metrics like this:
JettyConnectionMetrics.addToAllConnectors(server, this.meterRegistry, this.tags);
and it uses an ApplicationStartedEvent to find the Server reference:
private Server findServer(ApplicationContext applicationContext) {
    if (applicationContext instanceof WebServerApplicationContext) {
        WebServer webServer = ((WebServerApplicationContext) applicationContext).getWebServer();
        if (webServer instanceof JettyWebServer) {
            return ((JettyWebServer) webServer).getServer();
        }
    }
    return null;
}
There are other classes that record the thread usage and SSL handshake metrics.
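Without Spring you already hold the Server and thread pool references yourself, so the same binders can be attached directly. A rough sketch, assuming micrometer-core's Jetty binders and a Prometheus registry (the registry type, empty tags, and where you expose scrape() are assumptions, not from the question):

import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.jetty.JettyConnectionMetrics;
import io.micrometer.core.instrument.binder.jetty.JettyServerThreadPoolMetrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

QueuedThreadPool threadPool = new QueuedThreadPool();
Server server = new Server(threadPool);
// ... add connectors, the Jersey servlet, and handlers as usual ...

// Bind connection metrics for every connector on the server.
JettyConnectionMetrics.addToAllConnectors(server, registry, Tags.empty());
// Bind thread pool metrics (QueuedThreadPool implements ThreadPool.SizedThreadPool).
new JettyServerThreadPoolMetrics(threadPool, Tags.empty()).bindTo(registry);

server.start();
// Expose registry.scrape() from the Prometheus endpoint you already have.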

What notification is provided for a lost connection in a C++ gRPC async server

I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data. I can set an Alarm object to trigger immediately and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process then there's no notification or callback that I've been able to find/enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2

web service best practice - server timeout longer than http client timeout

I am trying to build a web service on top of HBase, so the code looks roughly like:
@GET
@Path("/blabla")
@Override
public List<String> getEvents($$$params$$$) {
    ......
    // calling HBase to query the events
    ......
}
When the HBase service is down, the HBase Java API keeps retrying to connect to the HBase region server until it eventually times out and throws a RuntimeException:
NoServerForRegionException: Unable to find region for event,,99999999999999 after 10 tries.
The logic itself is fine; my issue is that the HTTP client times out well before HBase gives up retrying, so my web service consumer gets no response at all, which is ugly.
Question:
What's the best practice when the server's timeout is potentially longer than that of the HTTP connection itself? How can the web service respond to the client gracefully in this case?
Set the caching on your Scan object to some reasonable value. Another thing: since you are using a web service to show the results to your users, I am assuming that you are showing only a few rows (or records) at a time. You can use HBase's PageFilter so that you fetch only a specified number of rows each time and don't have to wait for all the rows in one shot.
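A rough sketch of that suggestion against the current HBase client API (the table name "event", the page size, and the caching value are illustrative placeholders, not taken from the question):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PageFilter;

Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("event"))) {
    Scan scan = new Scan();
    scan.setCaching(100);               // rows transferred per RPC instead of one at a time
    scan.setFilter(new PageFilter(25)); // return at most ~25 rows per scan page
    try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
            // map each Result to the String returned by getEvents()
        }
    }
}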

How do I set a timeout for TIdHTTPProxyServer (not a connection timeout)

I am using TIdHTTPProxyServer, and I want to terminate the connection when it successfully connects to the target HTTP server but receives no response for a long time (e.g. 3 minutes).
Currently I can find no related property or event for this. Moreover, even if the client terminates the connection before the proxy server receives the response from the HTTP server, the OnException event will not be fired until the proxy server receives the response. (That is, if the proxy server still receives no response from the HTTP server, I don't even know that the client has already terminated the connection...)
Any help will be appreciated.
Thanks!
Willy
Indy uses infinite timeouts by default. To do what you are asking for, you need to set the ReadTimeout property of the outbound connection to the target server. You can access that connection via the TIdHTTPProxyServerContext.OutboundClient property. Use the OnHTTPBeforeCommand event, which is triggered just before the OutboundClient connects to the target server, e.g.:
#include "IdTCPClient.hpp"

void __fastcall TForm1::IdHTTPProxyServer1HTTPBeforeCommand(TIdHTTPProxyServerContext *AContext)
{
    static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = ...;
}