Jetty HTTP 2 Clients sharing same thread pool - jetty

Using Jetty 9.4.7
We are creating 2 clients using this code:
client = new HTTP2Client();
client.setIdleTimeout(-1); // disable client-side session idle timeout
client.setExecutor(httpThreadPool);
client.setConnectTimeout(connectionTimeoutMs);
try {
    client.addBean(this.sslContextFactory);
    client.start();
    FuturePromise<Session> sessionPromise = new FuturePromise<>();
    client.connect(sslContextFactory, new InetSocketAddress(this.host, this.port),
            new CustomSessionListener(clientInstanceName, this), sessionPromise);
    this.session = sessionPromise.get(this.connectionTimeoutMs, TimeUnit.MILLISECONDS);
} catch (Exception e) {
    // handle/log the connect failure
}
The httpThreadPool is common between the two clients. It is a ThreadPoolExecutor with core pool size 4 and max pool size 128.
The first client is created successfully. The second client fails with a TimeoutException (regardless of the target servers; we even pointed both at the same server).
If we assign separate thread pools (or let each client construct its own default QueuedThreadPool), everything works fine.
Aside from advice on the issue itself, is there any way to unwrap whatever exception is thrown when connecting the HTTP/2 client? We tried overriding onFailure(Session, Throwable) in SessionListener, but it never gets called.
Thanks.
EDIT: Log excerpt on DEBUG: https://pastebin.com/MUKrw4JP
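On the unwrapping question: FuturePromise implements java.util.concurrent.Future, so when the connect actually fails, get() typically throws an ExecutionException wrapping the real cause; a bare TimeoutException from get(timeout), by contrast, usually carries no cause, because the failure never reached the promise. A minimal, Jetty-free sketch of walking the cause chain (the class and the simulated IOException are illustrative, not Jetty API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class UnwrapExample {
    // Walk the cause chain down to the root failure.
    static Throwable rootCause(Throwable t) {
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a promise that failed the way a connect failure would.
        CompletableFuture<String> promise = new CompletableFuture<>();
        promise.completeExceptionally(new java.io.IOException("connect refused"));
        try {
            promise.get(1, TimeUnit.SECONDS);
        } catch (ExecutionException e) {
            System.out.println(rootCause(e).getMessage()); // prints "connect refused"
        }
    }
}
```

If you only ever see TimeoutException with no cause, that is consistent with the connect never completing at all rather than failing fast.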

Related

Grpc server with async and sync services at the same time

I need to be able to serve responses for some particular requests from the main thread, while the rest can arrive from any thread. With that in mind,
I created a gRPC server which has 2 services: one is implemented as an AsyncService, and the other as a sync service.
However, when adding a completion queue, the sync service no longer responds to requests.
builder.RegisterService(this);        // this inherits from Service (sync)
builder.RegisterService(&m_service);  // m_service is an AsyncService
m_mainThreadQueue = builder.AddCompletionQueue();
m_server = std::unique_ptr<Server>(builder.BuildAndStart());

// Kick off the first async handler for the completion queue
(new GrabSnapshotCallData(this, &m_service, m_mainThreadQueue.get()))->Proceed();

m_server->Wait();
Adding the completion queue makes the sync service no longer respond to requests.
I couldn't find much information about this particular topic anywhere, so perhaps it is not really supported in gRPC.
So, is there a way to have both async and sync services simultaneously on the same server? If not, what should I do to emulate that behavior?

Google App Engine - http request/response

I have a Java web app hosted on Google App Engine (GAE). The user clicks a button and gets a data table with 100 rows. At the bottom of the page there is a "Make Web service calls" button. Clicking it, the application takes one row at a time and makes a third-party web-service call using the URLConnection class. That part is working fine.
However, since there is a 60-second limit on the HttpRequest/Response cycle, not all 100 transactions go through; the timeout happens around row 50 or so.
How do I loop over the web-service calls without the user having to click 'Make Webservice calls' more than once?
Is there a way to stop the loop before 60 seconds and then start again without committing the HttpResponse? (I don't want to use an asynchronous Google backend.)
Also, does GAE support file upload (to get the 100 rows from a file instead of a database)?
Thank you.
Adding some code as per the comments:
URL url = new URL(urlString);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setConnectTimeout(35000);
connection.setRequestProperty("Accept-Language", "en-US,en;q=0.5");
connection.setRequestProperty("Authorization", encodedCredentials);
// Send the POST body
DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
wr.writeBytes(submitRequest);
wr.flush();
wr.close();
It all depends on what happens with the results of these calls.
If the results are not returned to the UI, there is no need to block it. You can use the Task Queue API to create 100 tasks and return a response to the user; this takes a few seconds at most. An additional benefit is that tasks let you make up to 10 calls in parallel.
If the results have to be returned to the user, you can still use up to 10 threads to process as many requests in parallel as possible. Hopefully this brings your total under one minute, but you cannot guarantee it, since you depend on responses from third-party resources which may be unavailable at the moment. You will have to implement your own retry mechanism.
Also note that users are not accustomed to waiting several minutes for a website to respond. Consider an approach where the user is notified after the last request is processed, without blocking your client code.
And yes, you can load data from files on App Engine.
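If the Task Queue is off the table, one generic way to respect the deadline is to process only as many rows as a time budget allows, then return a cursor that the client resubmits. This is a sketch under that assumption; the class and method names are illustrative, not App Engine API:

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchRunner {
    // Process rows until the time budget expires; return the index to resume from.
    static <T> int processUntilDeadline(List<T> rows, int start, long budgetMs, Consumer<T> call) {
        long deadline = System.currentTimeMillis() + budgetMs;
        int i = start;
        while (i < rows.size() && System.currentTimeMillis() < deadline) {
            call.accept(rows.get(i)); // one third-party web-service call per row
            i++;
        }
        return i; // the client (or a follow-up request) resubmits with this cursor
    }
}
```

The servlet would call this with, say, a 50-second budget and either render "done" or echo the returned cursor into a hidden form field so the next submit continues where the last one stopped.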
Try using asynchronous urlfetch calls:
LinkedList<Future<HTTPResponse>> futures = new LinkedList<>();
// Start all the requests
for (URL url : urls) {
    HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST);
    request.setPayload(...);
    futures.add(urlFetchService.fetchAsync(request));
}
// Collect all the results
for (Future<HTTPResponse> future : futures) {
    HTTPResponse response = future.get();
    // Do something with the response
}
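Outside App Engine, the same scatter/gather shape can be reproduced with standard java.util.concurrent; in this sketch the submitted lambda is a stand-in for a real HTTP call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ScatterGather {
    // Fire off all requests concurrently, then collect the results in order.
    public static List<String> fetchAll(List<String> urls) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> futures = new ArrayList<>();
        for (String url : urls) {
            // Stand-in for a real HTTP call
            futures.add(pool.submit(() -> "response for " + url));
        }
        List<String> results = new ArrayList<>();
        try {
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that request completes
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return results;
    }
}
```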

Embedded Jetty app with 2 ContextHandlers listening at same port - Loading issue

My embedded Jetty app (using Jetty 6.1.26) has 2 context handlers registered to it. Both listen on the same port. Below is a sample.
Server server = new Server();
SelectChannelConnector connector = new SelectChannelConnector();
connector.setAcceptors(2);
connector.setHost(IP);
connector.setPort(port);
server.addConnector(connector);
ContextHandler context1 = new ContextHandler();
context1.setContextPath("/abc");
context1.setHandler(handler1);
context1.setAllowNullPathInfo(true);
ContextHandler context2 = new ContextHandler();
context2.setContextPath("/xyz");
context2.setHandler(handler2);
context2.setAllowNullPathInfo(true);
ContextHandlerCollection hc = new ContextHandlerCollection();
hc.addHandler(context1);
hc.addHandler(context2);
server.setHandler(hc);
server.start();
I am also using a thread pool which is set at server level.
When I send enough requests to one context that all the threads are in use, a request sent to the second context takes a long time to be processed.
I also tried setting the thread pool at the SelectChannelConnector level.
I also tried adding more connectors with the same host/port so that each would have its own thread pool.
My requirement is that one context should not delay processing for the other (on the same port) when it is under load.
Can I have a dedicated thread pool for each context? Is there any other workaround?
I would appreciate a reply.
Thanks
Sarath
With Jetty, the ThreadPool is at the connector level; you can't have 2 different ThreadPools handling different contexts. As the connector accepts a request, it pulls a thread from the ThreadPool and hands it off to the Server.getHandler() chain, at which point it goes through the hierarchy of handlers until one of your contexts is used.
This means that the knowledge of the context comes in far too late to split up the ThreadPools.
Have you tried upgrading to Jetty 8 or Jetty 9 and using Async processing instead?
Or have you tried using the QoSFilter in Jetty 7, 8, or 9 to prioritize the handling better?
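In the same spirit as the QoSFilter suggestion, a crude way to keep one context from starving the other is to cap that context's concurrency with a semaphore inside its handler, leaving headroom in the shared pool. This is a sketch, not Jetty API:

```java
import java.util.concurrent.Semaphore;

public class ContextLimiter {
    private final Semaphore permits;

    public ContextLimiter(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Returns true if the request may proceed; false means reject (e.g. with 503).
    public boolean tryEnter() {
        return permits.tryAcquire();
    }

    public void exit() {
        permits.release();
    }
}
```

Call tryEnter() at the top of the busy context's handler and exit() in a finally block, with maxConcurrent set well below the pool's max threads so some threads always remain free for the other context.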

How to unpublish a web service using JAX-WS and bind to a different address

I have a JAX-WS web service deployed in an embedded Jetty server.
I need to change the IP address which is associated with the Endpoint
In order to publish I do:
Service service = new Service();
Endpoint.publish(address, service);
What happens is that when I stop and restart the server, the service is published again and bound to the new address I provide, but I get a warning like this:
WARNING: "GMBAL901: JMX exception on registration of MBean MBeanImpl[type=WSEndpoint,name=MyServiceService-myservice_servicePort,oname=com.sun.metro:pp=/,type=WSEndpoint,name=MyServiceService-myservice_servicePort]"
and if I query both the old address (e.g. 127.0.0.1) and the new one (e.g. 192.168.X.X), both are still answering (with two different instances of myService).
I don't want this behavior; I want the WS to be unbound from the old address.
How can I do that?
It turns out that I simply need to create the Endpoint object, publish it, and, when I need to restart, stop ep first (which ensures the same ep will not be republished), then create a new one and republish it.
Endpoint ep;
...
if (ep != null && ep.isPublished()) {
    ep.stop();
}
ep = Endpoint.create(service);
ep.publish(getEndpointAddress(port, service));
Still, it is better to wait half a second before restarting the server where the WS is published, as it sometimes gets stuck.

how to set connection/request timeout for jetty server?

I'm running an embedded jetty server (jetty 6.1.24) inside my application like this:
Handler handler = new AbstractHandler() {
    @Override
    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // this can take a long time
        doSomething();
    }
};
Server server = new Server(8080);
Connector connector = new org.mortbay.jetty.nio.SelectChannelConnector();
server.addConnector(connector);
server.setHandler(handler);
server.start();
I would like to set a timeout value (2 seconds) so that if the handler.handle() method takes more than 2 seconds, the Jetty server times out and responds to the client with HTTP code 408 (Request Timeout).
This is to guarantee that my application will not hold the client request for a long time and always responds within 2 seconds.
I did some research and tested "connector.setMaxIdleTime(2000);", but it doesn't work.
Take a look at the API for SelectChannelConnector (Jetty):
http://download.eclipse.org/jetty/7.6.17.v20150415/apidocs/org/eclipse/jetty/server/nio/SelectChannelConnector.html
I've tried to locate the timeout features of the channel (which controls incoming connections): setMaxIdleTime(), setLowResourceMaxIdleTime() and setSoLingerTime() appear to be available.
NOTE: the reason your timeout setting does not work has to do with the nature of sockets on your operating system, and perhaps the nature of Jetty itself (I've read about it somewhere, but cannot remember where).
NOTE 2: I'm not sure why you are trying to limit the timeout; perhaps a better approach is limiting the buffer sizes, if you're trying to prevent denial of service.
Yes, this is possible using Jetty's DoSFilter. This filter is generally used to configure DoS-attack prevention for your Jetty web server; its 'maxRequestMs' property provides what you are looking for.
For more details, check this:
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/servlets/DoSFilter.html
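As a generic fallback (not a Jetty feature), you can also bound the handler's work yourself by running it on a separate executor and giving up after the budget. Note that the connector thread still waits out the two seconds, and the work must be interruptible for the cancel to have any effect; all names here are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeBudget {
    // Run the work with a time budget; return an HTTP-style status code.
    static int runWithTimeout(Callable<Object> work, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Object> f = pool.submit(work);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return 200; // finished within the budget
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the worker; the work must be interruptible
            return 408;     // Request Timeout
        } catch (InterruptedException | ExecutionException e) {
            return 500;     // worker failed, or we were interrupted while waiting
        } finally {
            pool.shutdown();
        }
    }
}
```

Inside a handler you would wrap doSomething() in the Callable and call response.setStatus() with the returned code, accepting that the pooled thread itself is blocked for up to the budget.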