WSO2 AM endpoint timeout doesn't work correctly

In my case, I ran a test by calling an API. Below are the relevant log entries I selected:
2020-10-05 15:38:43,585 - sThroughMessageProcessor-12 - DEBUG g.apache.synapse.core.axis2.Axis2FlexibleMEPClient: [] Setting Timeout for endpoint : Endpoint [IDGen--v1.0_APIproductionEndpoint], URI : http://localhost:8082/sequence/batchId/next?length=20 to static timeout value : 15000
...
2020-10-05 15:39:03,660 - sThroughMessageProcessor-13 - DEBUG .apache.synapse.core.axis2.SynapseCallbackReceiver: [] Callback removed for request message id : urn:uuid:be12d50b-503b-4095-a296-ee36f9964d29. Pending callbacks count : 0
2020-10-05 15:39:03,661 - sThroughMessageProcessor-13 - DEBUG .apache.synapse.core.axis2.SynapseCallbackReceiver: [] Synapse received an asynchronous response message
As you can see, the endpoint timeout is set to 15 s, yet the logs show that the request was dispatched at 15:38:43 and its callback was only removed at 15:39:03, i.e. after about 20 seconds.
During monitoring I found that the timeout usually fires correctly, but sometimes it behaves like this.
Could you suggest how I could investigate the root cause?
PS: I set
'synapse.global_timeout_interval'=600000
'http.socket.timeout'=610000
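For reference, these properties typically live in the gateway's configuration files (locations assumed for a WSO2 APIM-style distribution); a minimal sketch:
# repository/conf/synapse.properties (assumed location)
synapse.global_timeout_interval=600000
# repository/conf/passthru-http.properties (assumed location)
http.socket.timeout=610000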

Registered callbacks are not removed immediately when the endpoint timeout elapses. A timeout handler periodically checks for timed-out endpoints and pending responses, and timeout_handler_interval [1] defines how often that check runs, so there can be a delay before a registered callback is removed. Reducing this value reduces the delay, but it adds overhead to the gateway.
So what you are experiencing is not an issue; it is the expected behavior of the Synapse gateway.
[1] https://docs.wso2.com/display/EI611/Configuring+synapse.properties
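If the extra delay matters, the interval of that handler can be tuned in synapse.properties; a minimal sketch (the value shown is only an example, and as noted above, lowering it trades delay for gateway overhead):
# repository/conf/synapse.properties
# how often the timeout handler scans for expired endpoint callbacks, in milliseconds
synapse.timeout_handler_interval=15000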

Related

How to specify AWS SQS request timeout with C++ API?

I'm using AWS SQS and making a GetQueueUrl request at startup. The problem is that if there is no network connection, the request blocks and doesn't time out until more than 400 seconds have passed (using the default ClientConfiguration parameters).
The only timeout parameters I can see in ClientConfiguration are:
httpRequestTimeoutMs
requestTimeoutMs
connectTimeoutMs
However, changing these seems to have no effect in the case mentioned above. I tried setting all three to 1000 ms and, without a network connection, the GetQueueUrl request still blocks for several minutes until it times out.
If it helps, when the timeout does occur, the error I get on the request is: curlCode: 28, Timeout was reached
What am I missing here? Thanks!

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API handles lots of requests during the day, and looking at the logs I realized that it sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API by running the HTTP requests many times in Postman; out of every twenty attempts I get at least one 503.
I then thought the HTTP integration server was busy, but the server is not under load, and when I go directly to the integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so the timeout is not the problem. Also, the 503 is not returned after 30 seconds; it comes back instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects fairly standard keep-alive settings on the backend, so I tweaked my NGINX server parameters until the 503s disappeared.
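For anyone looking for a starting point, the change is along these lines; a minimal NGINX sketch (the directive values are examples, not the exact ones I used):
http {
    # keep idle connections open longer than API Gateway's connection-reuse window
    keepalive_timeout  75s;
    # allow many requests per kept-alive connection before NGINX closes it
    keepalive_requests 1000;
}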
I had the same issue with a self-made Node microservice integrated into AWS API Gateway. After reconfiguring the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify your problem is the same, i.e. through more detailed log output
In the API Gateway logging settings, add more output to the "Log format".
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After calling the API Gateway endpoint again and reproducing the failure, consult the logs; the error fields (errMsg, intError) should now show the failure reason, in my case INTEGRATION_NETWORK_FAILURE.
Solve it in the Node.js microservice (using Express)
Add header and keep-alive timeouts to the Express server's socket configuration once it starts listening.
const app = require('express')();
// if not already set, and you want to advertise the keep-alive in the HTTP response, you might use this
/*
app.use((req, res, next) => {
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('Keep-Alive', 'timeout=30');
  next();
});
*/
/* ...your main logic... */
const server = app.listen(8080, 'localhost', () => {
  console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});
server.keepAliveTimeout = 30 * 1000; // <- important lines: keep sockets open longer than the gateway reuses them
server.headersTimeout = 35 * 1000;   // <- should stay greater than keepAliveTimeout
Reason
Some AWS components seem to expect the connection to be kept alive, even if the server indicates otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse such a connection, the reuse fails because the other side has most likely already closed it, hence the assumed "NETWORK_FAILURE".
The error is intermittent because API Gateway appears to close unused connections after a while, giving a clean execution the next time. I can only assume they hold connections open for performance reasons rather than settling for anything less.

Jetty HTTP/2: How to set client session timeout?

I'm trying to create one session and reuse it for every request.
The problem is that if I try to send a request more than 30 seconds after the session was created, I get:
Caused by: java.nio.channels.ClosedChannelException
at org.eclipse.jetty.http2.HTTP2Session$ControlEntry.succeeded
(HTTP2Session.java:1224) ~[http2-common-9.4.0.v20161208.jar:9.4.0.v20161208]
I tried this:
SSLSessionContext clientSessionContext = sslContextFactory.getSslContext().getClientSessionContext();
clientSessionContext.setSessionTimeout(60000);
but it doesn't seem to work.
If you are using HttpClient, the client idle timeout can be set with HttpClient.setIdleTimeout(long).
If you are using the low-level HTTP2Client, the client idle timeout can be set with HTTP2Client.setIdleTimeout(long).
Both will control the connection/session idle timeout, which is apparently what you want. A negative value will disable the idle timeout.
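A minimal sketch of both options (Jetty 9.4 APIs; the 60-second value is only an example, and sslContextFactory is the same factory as in the question):
// high-level client: HttpClient running over HTTP/2
HTTP2Client http2Client = new HTTP2Client();
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), sslContextFactory);
httpClient.setIdleTimeout(60_000); // connection/session idle timeout in ms; a negative value disables it
httpClient.start();
// low-level client: HTTP2Client used directly
HTTP2Client lowLevelClient = new HTTP2Client();
lowLevelClient.setIdleTimeout(60_000); // same semantics for the raw HTTP/2 session
lowLevelClient.start();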

WSO2 ESB scheduled message forwarding processor becomes inactive after reaching max delivery attempt

I tried to follow this link, step by step, four times: for the first three attempts I used WSO2 MB as the broker, and the last time I tried Apache ActiveMQ. The problem is that when I shut down the SimpleQuoteService server and send messages to the proxy via SoapUI, they accumulate in my queue and my scheduled message forwarding processor becomes inactive after reaching the maximum delivery attempts, yet the WSO2 ESB documentation says:
"To test the failover scenario, shut down the JMS broker(i.e., the original message store) and send a few messages to the proxy service.
You will see that the messages are not sent to the back-end since the original message store is not available. You will also see that the messages are stored in the failover message store."
Can anyone explain this?
You can prevent the message processor from being deactivated by setting the "max.delivery.drop" parameter to 'Enabled'. The processor will then drop the message after the maximum delivery attempts instead of deactivating itself. See here for the documentation (definitions) of these parameters.
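A minimal sketch of a scheduled message forwarding processor with that parameter set (the processor, store, and endpoint names are placeholders and the other values are only examples):
<messageProcessor name="ForwardMP"
        class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
        messageStore="OriginalStore" targetEndpoint="BackendEP"
        xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
    <parameter name="max.delivery.drop">Enabled</parameter>
</messageProcessor>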

Connection timeout after request is read in WSO2 ESB

After the endpoint has been running continuously for some time, we get the message "connection timeout after request is read" and the ESB stops responding; we have to restart the WSO2 services.
I have already increased the socket timeout as suggested.
Timeouts in the ESB are defined at three levels:
endpoint timeout < socket timeout < synapse timeout. Check [1].
If you have defined an endpoint timeout for your endpoint, you can increase it up to the socket timeout, and you can increase the socket timeout up to the synapse timeout; the default synapse timeout is 2 minutes. So if you increase the endpoint timeout and the socket timeout to 2 minutes and still get no response from your backend service, then you should check the backend service itself.
Once a timeout occurs, that endpoint is suspended for 30000 ms, and any request sent to it during the suspension period is ignored by the ESB. You can disable the suspension period as mentioned in [2].
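A minimal endpoint sketch combining the two points above, i.e. a longer endpoint timeout plus suspension effectively disabled (the URI and values are examples, and the suspension settings follow my reading of [2], so double-check them against your ESB version):
<endpoint name="BackendEP" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="http://localhost:9000/services/SimpleStockQuoteService">
        <timeout>
            <duration>120000</duration>
            <responseAction>fault</responseAction>
        </timeout>
        <suspendOnFailure>
            <errorCodes>-1</errorCodes>
            <initialDuration>0</initialDuration>
            <progressionFactor>1.0</progressionFactor>
            <maximumDuration>0</maximumDuration>
        </suspendOnFailure>
        <markForSuspension>
            <errorCodes>-1</errorCodes>
        </markForSuspension>
    </address>
</endpoint>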
The keep-alive property is enabled by default in the ESB, but some firewalls ignore keep-alive packets from the ESB. In that case there is still an open connection between the ESB and the firewall, while the connection from the firewall to the backend may already be closed. Disabling the keep-alive property makes the ESB create a new connection for each request [3], and the backend will give the response.
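If you do need to disable keep-alive, a minimal sketch of the mediation property described in [3], placed in the relevant sequence before the call to the endpoint:
<property name="NO_KEEPALIVE" value="true" scope="axis2"/>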
1.http://soatutorials.blogspot.in/2014/11/how-to-configure-timeouts-in-wso2-esb.html
2.http://miyurudw.blogspot.com/2012/02/disable-suspension-of-wso2-esb-synapse.html
3.https://udaraliyanage.wordpress.com/2014/06/17/wso2-esb-keep-alive-property/