HTTP async response handling with the Pistache framework - C++

I am trying to write a c++ pistache server that, on a specific endpoint, has to contact another pistache server.
This is the scenario:
client -> server1 -> server2
client <- server1 <- server2
I am having problems waiting for the response in server1 and sending it back to the client asynchronously.
In more detail:
I think an efficient way of handling this would be to call response.send inside the resp.then block (which returns a Pistache::Async::Promise). Unfortunately, it gives a segmentation fault as soon as that endpoint is called and execution enters the then block, so I guess it is illegal to do it the way I wanted. The logs give no more detail than the segmentation fault itself, which makes it hard to debug.
Here is my server1 code to show how I implemented it.
void doSmth(const Rest::Request& request, Http::ResponseWriter httpResponse)
{
    auto resp_srv2 = client
                         .post(addr)
                         .body(json)
                         .send();

    resp_srv2.then(
        [&](Http::Response response) {
            httpResponse.send(response.code());
        },
        [&](std::exception_ptr exc) {
            PrintException excPrinter;
            excPrinter(exc);
        });
}
This way I could avoid using the barrier shown in the Pistache git repo. Using their barrier-based code, about 28k requests from the user are handled correctly and then it gets stuck, so I guess the resources are not being handled correctly.
Do you know how to send the response back to the client once the server2 response has been received asynchronously? I need to do it in an optimized way and manage all the resources correctly.
Thanks for your help!
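A minimal sketch of one possible fix, assuming the segfault comes from the [&] captures dangling once doSmth returns (httpResponse and the surrounding locals are destroyed before the continuation runs), is to move the ResponseWriter into the continuation so it stays alive until server2 replies. Depending on the Pistache version, the returned promise may also need to be kept alive (for example stored somewhere) until it resolves. Here client, addr, json and PrintException stand for the same objects used above:
#include <pistache/client.h>
#include <pistache/http.h>
#include <pistache/router.h>

using namespace Pistache;

void doSmth(const Rest::Request& request, Http::ResponseWriter httpResponse)
{
    // client, addr and json are the same objects referenced in the question.
    auto resp_srv2 = client.post(addr).body(json).send();

    resp_srv2.then(
        // Move the writer into the lambda so it outlives doSmth();
        // `mutable` is required because ResponseWriter::send() is non-const.
        [response = std::move(httpResponse)](Http::Response r) mutable {
            response.send(r.code(), r.body());
        },
        [](std::exception_ptr exc) {
            PrintException excPrinter;
            excPrinter(exc);
        });
}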

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API gets a lot of queries during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API by running the HTTP requests many times in Postman; when I try twenty times, I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not loaded, and when I go directly to the HTTP integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so timeout is not the problem. Also, the HTTP 503 does not come 30 seconds after the request; it comes instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. The AWS API Gateway expects the keep-alive parameters to be in a standard configuration, so I started tweaking my NGINX server parameters until I solved the issue.
Had the same issue with a self-made Node microservice integrated into AWS API Gateway. After some reconfiguration of the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify that your problem is the same, i.e. through more detailed log output:
In API Gateway - Logging, add more output in "Log format".
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After using the API Gateway endpoint and seeing a failure, consult the logs again; the integration error fields should now show the failure reason (for example INTEGRATION_NETWORK_FAILURE).
Solve in the Node.js microservice (using Express)
Add timeouts for headers and keep-alive to the Express server's socket configuration when listening.
const app = require('express')();

// If not already set, and you need to advertise the keep-alive through the
// HTTP response, you might want to use this:
/*
app.use((req: Request, res: Response, next: NextFunction) => {
    res.setHeader('Connection', 'keep-alive');
    res.setHeader('Keep-Alive', 'timeout=30');
    next();
});
*/

/* ...your main logic... */

const server = app.listen(8080, 'localhost', () => {
    console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

server.keepAliveTimeout = 30 * 1000; // <- important lines
server.headersTimeout = 35 * 1000;   // <- important lines
Reason
Some AWS components seem to demand that the connection be kept alive, even if the server responds otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse the connection, the recycling fails because the other side has most likely already closed it, hence the assumed "NETWORK_FAILURE".
This error appears intermittent, since at least API Gateway seems to close unused connections after a while, which gives a clean execution the next time. I can only assume they do this for high performance rather than falling back to anything slower.

Jetty HTTP/2 clients sharing the same thread pool

Using Jetty 9.4.7
We are creating 2 clients using this code:
client = new HTTP2Client();
client.setIdleTimeout(-1); // disable client session timeout
client.setExecutor(httpThreadPool);
client.setConnectTimeout(connectionTimeoutMs);
try {
    client.addBean(this.sslContextFactory);
    client.start();
    FuturePromise<Session> sessionPromise = new FuturePromise<>();
    client.connect(sslContextFactory, new InetSocketAddress(this.host, this.port),
            new CustomSessionListener(clientInstanceName, this), sessionPromise);
    this.session = sessionPromise.get(this.connectionTimeoutMs, TimeUnit.MILLISECONDS);
} catch(...
The httpThreadPool is shared between the two clients. It is a ThreadPoolExecutor with a core pool size of 4 and a max pool size of 128.
The first client is created successfully. The second client fails with a TimeoutException (regardless of the target servers; we even pointed them both at the same server).
If we assign separate thread pools (or let each client construct its own default QueuedThreadPool), everything works fine.
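For reference, a minimal sketch of that per-client-pool workaround, using Jetty's own QueuedThreadPool (the factory method, pool name and sizes here are illustrative, not taken from our actual setup):
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class Http2ClientFactory {
    // Sketch: build each HTTP2Client with its own QueuedThreadPool instead of
    // sharing one ThreadPoolExecutor between the two clients.
    public static HTTP2Client newClient(String name, long connectTimeoutMs) throws Exception {
        QueuedThreadPool pool = new QueuedThreadPool(128, 8); // maxThreads, minThreads (illustrative)
        pool.setName(name);                                   // makes thread dumps easier to read

        HTTP2Client client = new HTTP2Client();
        client.setExecutor(pool);                   // must be set before start()
        client.setIdleTimeout(-1);                  // disable client session timeout, as above
        client.setConnectTimeout(connectTimeoutMs);
        client.start();
        return client;
    }
}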
Aside from advice on the issue itself, is there any way to unwrap whatever exception is thrown when connecting the HTTP/2 client? We tried overriding onFailure(Session, Throwable) in SessionListener, but it never gets there.
Thanks.
EDIT: Log excerpt on DEBUG: https://pastebin.com/MUKrw4JP

Wildfly fails to respond to some requests when there are too many requests

I am using Wildfly 9.0. I am developing a web program with two main parts.
One part submits requests to the server's own web service, and the other is a web service that receives the requests and works on them with the database.
The whole flow is like this:
1. The user opens a webpage rendered by the server and submits some information.
2. The information is submitted to the server.
3. The server then submits an HTTP request to the web service on the same server.
4. The web service handles the request and replies.
5. The resulting information is then rendered to the user.
For example, some of the requests from the server to the web service are generated like this:
String url = "http://127.0.0.1:8080/myPathOfRFWS/someWS";
HttpUriRequest httpReq = new HttpPost(url);
httpReq.addHeader("userKey", "someString");
try (DefaultHttpClient client = new DefaultHttpClient()) {
    HttpResponse resp = client.execute(httpReq);
    return resp;
} catch (Exception e) {
    logException(e);
}
The code works perfectly fine under normal conditions.
However, when I run a stress test against the server with JMeter (roughly 60 requests in 60 seconds, each with several processing steps, so there are probably around 8-10 requests in flight at the same time), I find that a few requests often fail to be received on the web service side, and hence no response reaches the client.
I have checked the thread dump and found that the server has submitted the request to the web service, but the web service does not seem to receive the request or take any action after the call passes through the filter.
Here are some parts of the thread dump.
"default task-26" #298 prio=5 os_prio=0 tid=0x000000001a497000 nid=0xd3c runnable [0x000000001fa7d000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:296)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl.java:278)
at sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:159)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fde172b0> (a sun.nio.ch.Util$2)
- locked <0x00000000fde172a0> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fde17040> (a sun.nio.ch.WindowsSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at org.xnio.nio.SelectorUtils.await(SelectorUtils.java:46)
at org.xnio.nio.NioSocketConduit.awaitReadable(NioSocketConduit.java:345)
at org.xnio.conduits.AbstractSourceConduit.awaitReadable(AbstractSourceConduit.java:66)
at io.undertow.conduits.ReadDataStreamSourceConduit.awaitReadable(ReadDataStreamSourceConduit.java:101)
at io.undertow.conduits.FixedLengthStreamSourceConduit.awaitReadable(FixedLengthStreamSourceConduit.java:272)
at org.xnio.conduits.ConduitStreamSourceChannel.awaitReadable(ConduitStreamSourceChannel.java:151)
at io.undertow.channels.DetachableStreamSourceChannel.awaitReadable(DetachableStreamSourceChannel.java:77)
at io.undertow.server.HttpServerExchange$ReadDispatchChannel.awaitReadable(HttpServerExchange.java:1997)
at org.xnio.channels.Channels.readBlocking(Channels.java:295)
at io.undertow.servlet.spec.ServletInputStreamImpl.readIntoBuffer(ServletInputStreamImpl.java:170)
at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:146)
at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:133)
at org.jboss.resteasy.plugins.providers.ProviderHelper.writeTo(ProviderHelper.java:124)
at org.jboss.resteasy.plugins.providers.FileProvider.readFrom(FileProvider.java:88)
at org.jboss.resteasy.plugins.providers.FileProvider.readFrom(FileProvider.java:34)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.readFrom(AbstractReaderInterceptorContext.java:59)
at org.jboss.resteasy.core.interception.ServerReaderInterceptorContext.readFrom(ServerReaderInterceptorContext.java:62)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:51)
at org.jboss.resteasy.security.doseta.DigitalVerificationInterceptor.aroundReadFrom(DigitalVerificationInterceptor.java:32)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53)
at org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.aroundReadFrom(GZIPDecodingInterceptor.java:59)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53)
at org.jboss.resteasy.core.MessageBodyParameterInjector.inject(MessageBodyParameterInjector.java:150)
at org.jboss.resteasy.core.MethodInjectorImpl.injectArguments(MethodInjectorImpl.java:89)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:112)
at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:237)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
at io.undertow.websockets.jsr.JsrWebSocketFilter.doFilter(JsrWebSocketFilter.java:151)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
at com.mypackage.web.filter.AccessFilter.doFilter(AccessFilter.java:54)
Locked ownable synchronizers:
- <0x0000000084370a18> (a java.util.concurrent.ThreadPoolExecutor$Worker)
"default task-23" #295 prio=5 os_prio=0 tid=0x000000001a494800 nid=0x3d8 runnable [0x000000001dfac000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at com.mypackage.WsLib.callUriWs(WsLib.java:241)
Locked ownable synchronizers:
- <0x0000000084363dd8> (a java.util.concurrent.ThreadPoolExecutor$Worker)
Please advise if there is anything I can do to solve the issue or investigate it further.

Web service with an always-in-memory object and a queue

I have a function that gives recommendations to users. This function needs to do a lot of calculation to start, but after starting it uses the already-calculated matrix in memory. After that, any further calculation "feeds" the in-memory object for continuous learning.
My intention is to use this function for website users, but the response needs to come from that same in-memory "object", and the requests need to be processed sequentially because it is not thread safe.
What is the best way to get this working? My first idea was to use SignalR, so the user doesn't need to wait for the response, and a queue to send the requests to the object. But how can SignalR receive the response for this specific request?
The entire flow is:
1. The user opens a page.
2. JavaScript calls a service with the user ID and the current page.
3. The server queues the ID and the page.
4. The service calculates the results for each request in the queue and sends responses.
5. The server "receives" the response and sends it back to the client.
The main problem is that I don't see a way for the service to receive the response and send it back to the client once it is complete, without having to loop over queues.
Thanks!
If you are going to use SignalR, I would suggest using a hub method to accept these potentially long-running requests from the client. By doing so, it should become obvious "how SignalR can receive the response for this specific request".
You should be able to queue your calculations from inside your hub method, where you will have access to the caller's connection id (via the Context.ConnectionId property).
If you can await the result of your queued operation inside the hub method you queue from, you can then simply return the result from your hub method and SignalR will flow it back to the calling JavaScript. You can also use Clients.Caller... to send the result back.
If you go this route, I suggest you use async/await instead of blocking request threads while waiting for your long-running calculations to complete.
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server
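A rough sketch of that first approach (the hub class, queue helper, and result type below are hypothetical; the point is the shape of an awaited hub method that returns its result to the caller):
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class RecommendationHub : Hub
{
    // Hypothetical single-consumer queue that serializes access to the
    // non-thread-safe in-memory model and completes a Task per request.
    private static readonly CalculationQueue Queue = new CalculationQueue();

    // Because this is an async Task<T> hub method, SignalR sends the result
    // back to the JavaScript caller when the awaited work completes, without
    // blocking a request thread while the calculation sits in the queue.
    public async Task<RecommendationResult> GetRecommendations(int userId, string page)
    {
        return await Queue.EnqueueAsync(userId, page);
    }
}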
If you can't process your calculation results from the same method you queued the calculation from, you still have options. Just be sure to queue the caller's connection id and a request id along with the calculation to be processed.
Then, you can process the results of all your calculations from outside of your hub using GlobalHost.ConnectionManager.GetHubContext:
private IHubContext _context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

// Call ProcessResults whenever results are ready to send back to the client
public void ProcessResults(string connectionId, uint requestId, MyResult result)
{
    // Presumably there's JS code mapping request ids to results
    // if you can have multiple ongoing requests per client
    _context.Clients.Client(connectionId).receiveResult(requestId, result);
}
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#callfromoutsidehub

How to set a connection/request timeout for a Jetty server?

I'm running an embedded Jetty server (Jetty 6.1.24) inside my application like this:
Handler handler = new AbstractHandler()
{
    @Override
    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // this can take a long time
        doSomething();
    }
};
Server server = new Server(8080);
Connector connector = new org.mortbay.jetty.nio.SelectChannelConnector();
server.addConnector(connector);
server.setHandler(handler);
server.start();
I would like to set a timeout value (2 seconds) so that if the handler.handle() method takes more than 2 seconds, the Jetty server will time out and respond to the client with HTTP 408 (Request Timeout).
This is to guarantee that my application will not hold a client request for a long time and always responds within 2 seconds.
I did some research and tested connector.setMaxIdleTime(2000); but it doesn't work.
Take a look at the API for SelectChannelConnector (Jetty):
http://download.eclipse.org/jetty/7.6.17.v20150415/apidocs/org/eclipse/jetty/server/nio/SelectChannelConnector.html
I've tried to locate the timeout features of the channel (which controls incoming connections): setMaxIdleTime(), setLowResourceMaxIdleTime() and setSoLingerTime() appear to be available.
NOTE: the reason your timeout setting does not work has to do with the nature of the socket on your operating system, and perhaps even with the nature of Jetty (I've read about it somewhere, but cannot remember where).
NOTE 2: I'm not sure why you are trying to limit the timeout; if you're trying to prevent denial of service, perhaps a better approach is to limit the buffer sizes.
Yes, this is possible. You can do this using Jetty's DoSFilter. This filter is generally used to configure a DoS-attack prevention mechanism for your Jetty web server, and its 'maxRequestMs' property provides what you are looking for.
For more details, check the javadoc:
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/servlets/DoSFilter.html
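For reference, a hedged sketch of wiring the DoSFilter with maxRequestMs into an embedded server. It uses the Jetty 9 servlet API (ServletContextHandler and the jetty-servlets module), so it will not drop straight into the Jetty 6.1.24 setup above without updating dependencies, and the values are illustrative:
import java.util.EnumSet;
import javax.servlet.DispatcherType;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.DoSFilter;

public class TimeoutServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");

        // maxRequestMs limits how long a single request is allowed to run.
        FilterHolder dos = new FilterHolder(DoSFilter.class);
        dos.setInitParameter("maxRequestMs", "2000");
        // Raise the rate limit so the filter acts only as a request timeout here
        // (illustrative value; tune for your traffic).
        dos.setInitParameter("maxRequestsPerSec", "1000");
        context.addFilter(dos, "/*", EnumSet.of(DispatcherType.REQUEST));

        // Register servlets / application logic on the context here.

        server.setHandler(context);
        server.start();
        server.join();
    }
}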