Connection timeout after request is read in WSO2 ESB

After the endpoint has been running continuously for some time, we get the message "connection timeout after request is read" and the ESB stops responding; we have to restart the WSO2 services. I had already increased the socket timeout as suggested.

Timeouts in the ESB are defined at three levels:
endpoint timeout < socket timeout < synapse timeout (see [1]).
If you have defined an endpoint timeout for your endpoint, you can increase it up to the socket timeout value, and you can increase the socket timeout up to the synapse timeout value. The default synapse timeout is 2 minutes. So if you increase the endpoint timeout and the socket timeout to 2 minutes and still get no response from your backend service, you should check the backend service itself.
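The hierarchy above can be sketched as follows. The endpoint name and URI are placeholders and the values are hypothetical examples; the endpoint timeout lives in the endpoint definition, while `http.socket.timeout` and `synapse.global_timeout_interval` go in passthru-http.properties and synapse.properties respectively:

```xml
<!-- Endpoint definition: endpoint-level timeout (lowest in the hierarchy) -->
<endpoint name="BackendEP">
  <address uri="http://localhost:8082/backend">
    <timeout>
      <duration>60000</duration>            <!-- 60 s, kept below the socket timeout -->
      <responseAction>fault</responseAction>
    </timeout>
  </address>
</endpoint>
```

```
# passthru-http.properties
http.socket.timeout=90000

# synapse.properties (default is 120000, i.e. 2 minutes)
synapse.global_timeout_interval=120000
```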
Once a timeout occurs, that endpoint is suspended for 30000 ms, so any request sent to it within the suspension period is ignored by the ESB. You can disable the suspension period as mentioned here [2].
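A hedged sketch of the endpoint configuration described in [2]; setting both suspension durations to 0 is the usual way to disable the suspension period (endpoint name and URI are placeholders):

```xml
<endpoint name="BackendEP">
  <address uri="http://localhost:8082/backend">
    <suspendOnFailure>
      <initialDuration>0</initialDuration>
      <progressionFactor>1.0</progressionFactor>
      <maximumDuration>0</maximumDuration>
    </suspendOnFailure>
  </address>
</endpoint>
```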
The keep-alive property is enabled by default in the ESB, but some firewalls discard keep-alive packets from the ESB. In that case there is still an active connection between the ESB and the firewall, while the connection from the firewall to the backend may already be closed. Disabling the keep-alive property makes the ESB create a new connection for each request [3], so the backend will return the response.
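As described in [3], keep-alive can be disabled per request with a property mediator in the sequence; a minimal sketch:

```xml
<property name="NO_KEEPALIVE" value="true" scope="axis2"/>
```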
1.http://soatutorials.blogspot.in/2014/11/how-to-configure-timeouts-in-wso2-esb.html
2.http://miyurudw.blogspot.com/2012/02/disable-suspension-of-wso2-esb-synapse.html
3.https://udaraliyanage.wordpress.com/2014/06/17/wso2-esb-keep-alive-property/

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP Integration backend server on EC2. The API receives lots of queries during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried running the HTTP requests many times in Postman; out of twenty attempts I get at least one 503.
I then suspected the HTTP integration server was busy, but the server is not loaded, and when I go directly to the integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so timeout is not the problem. Also, the HTTP 503 does not arrive after 30 seconds of the request but instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects standard keep-alive behavior from the backend, so I started tweaking my NGINX server parameters until the issue went away.
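The answer does not show the exact NGINX changes, so the following is only a hypothetical sketch of the kind of keep-alive tuning it refers to; the upstream address and all values are assumptions, not the author's actual configuration:

```
# nginx.conf sketch: keep upstream connections alive, and keep idle
# connections open longer than API Gateway is likely to reuse them
upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;                 # pool of idle upstream connections
}

server {
    listen 80;
    keepalive_timeout 75s;        # how long idle client connections stay open

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;   # required for upstream keep-alive
        proxy_set_header Connection "";
    }
}
```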
Had the same issue with a self-built Node microservice integrated into AWS API Gateway. After some reconfiguration of the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify your problem is alike, i.e. through more elaborate log output:
In API Gateway - Logging, add more output in "Log format".
Use this or similar content for "Log format":
{
  "httpMethod": "$context.httpMethod",
  "integrationErrorMessage": "$context.integrationErrorMessage",
  "protocol": "$context.protocol",
  "requestId": "$context.requestId",
  "requestTime": "$context.requestTime",
  "resourcePath": "$context.resourcePath",
  "responseLength": "$context.responseLength",
  "routeKey": "$context.routeKey",
  "sourceIp": "$context.identity.sourceIp",
  "status": "$context.status",
  "errMsg": "$context.error.message",
  "errType": "$context.error.responseType",
  "intError": "$context.integration.error",
  "intIntStatus": "$context.integration.integrationStatus",
  "intLat": "$context.integration.latency",
  "intReqID": "$context.integration.requestId",
  "intStatus": "$context.integration.status"
}
After hitting the API Gateway endpoint and seeing a failure, consult the logs again; the integration error fields (errMsg, intError, intStatus) should now show something like INTEGRATION_NETWORK_FAILURE.
Solve in NodeJS Microservice (using Express)
Add timeouts for headers and keep-alive to the Express server's socket configuration when listening.
const app = require('express')();

// If you also need to advertise the keep-alive settings in the HTTP response,
// you might use this (not required for the fix itself):
/*
app.use((req, res, next) => {
    res.setHeader('Connection', 'keep-alive');
    res.setHeader('Keep-Alive', 'timeout=30');
    next();
});
*/

/* ... your main logic ... */

const server = app.listen(8080, 'localhost', () => {
    console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

server.keepAliveTimeout = 30 * 1000; // <- important line
server.headersTimeout = 35 * 1000;   // <- important line
Reason
Some AWS components seem to demand that a connection be kept alive, even if the server indicates otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse the connection, the recycling fails because the other side has most likely already closed it, hence the assumed "NETWORK FAILURE".
This error is intermittent, since at least API Gateway seems to close unused connections after a while, giving a clean execution the next time. I can only assume they reuse connections for performance reasons.

Jetty closes the connection with idle timeout 30s instead of the configured timeout (60s)

I am using Spring Boot for REST API services.
We see lots of idle timeout problems when reading data; it reports "java.util.concurrent.TimeoutException: Idle timeout expired: 30000/30000 ms". Below is what I configured for the Jetty thread pool. Does anyone know why it fails with a 30s timeout rather than 60s?
int threadPoolIdleTimeout = 60000;
ThreadPool threadpool = new QueuedThreadPool(maxThreads, maxThreads, threadPoolIdleTimeout,
        new ArrayBlockingQueue<>(threadPoolQueueSize));
Unrelated.
That's the thread idle timeout, for reducing the number of idle threads in the thread pool.
The connection idle timeout is a different configuration.
Check the ServerConnector if a normal server connection.
Check the AsyncContext idle timeout if you are using Servlet Async Processing, or Servlet Async I/O.
Check the WebSocket Session if you are doing WebSocket requests.
Check the database DataSource configuration if you are worried about database connection idle timeouts.
Check the HTTP2 Session configuration for dealing with the virtual connections on an HTTP/2 connector.
And many more.
There are many idle timeouts, each specific to the situation you are dealing with; be aware of which one applies.
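As a sketch of the first item in the list above (this assumes Jetty 9.x on the classpath; the port, pool size, and timeout values are hypothetical), the connection idle timeout is set on the ServerConnector, not on the thread pool:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyIdleTimeout {
    public static void main(String[] args) throws Exception {
        Server server = new Server(new QueuedThreadPool(200));
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        // This is the *connection* idle timeout the exception refers to;
        // the thread pool's idle timeout is unrelated to it.
        connector.setIdleTimeout(60_000);
        server.addConnector(connector);
        server.start();
        server.join();
    }
}
```

If you are on Spring Boot's embedded Jetty, recent versions may expose a matching application property (`server.jetty.connection-idle-timeout`); check the documentation for your Boot version.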

WSO2 AM endpoint timeout doesn't work correctly

In my case, I did a test by calling an API. The following are selected logs:
2020-10-05 15:38:43,585 - sThroughMessageProcessor-12 - DEBUG g.apache.synapse.core.axis2.Axis2FlexibleMEPClient: [] Setting Timeout for endpoint : Endpoint [IDGen--v1.0_APIproductionEndpoint], URI : http://localhost:8082/sequence/batchId/next?length=20 to static timeout value : 15000
...
2020-10-05 15:39:03,660 - sThroughMessageProcessor-13 - DEBUG .apache.synapse.core.axis2.SynapseCallbackReceiver: [] Callback removed for request message id : urn:uuid:be12d50b-503b-4095-a296-ee36f9964d29. Pending callbacks count : 0
2020-10-05 15:39:03,661 - sThroughMessageProcessor-13 - DEBUG .apache.synapse.core.axis2.SynapseCallbackReceiver: [] Synapse received an asynchronous response message
You can see that we set an endpoint timeout of 15 s. The logs show that the callback was created at 15:38:43 and removed at 15:39:03, so it took 20 seconds.
During monitoring I found that it sometimes works correctly and sometimes does not.
Could you suggest any way to investigate the root cause?
PS: I set
'synapse.global_timeout_interval'=600000
'http.socket.timeout'=610000
Registered callbacks are not removed immediately after the endpoint timeout. There is a timeout handler that periodically checks the response and endpoint timeouts; timeout_handler_interval [1] defines this check period, so there can be a delay before a registered callback is removed. Reducing this value reduces the delay, but adds overhead to the gateway.
So what you are experiencing is not an issue but the expected behavior of the Synapse gateway.
[1] https://docs.wso2.com/display/EI611/Configuring+synapse.properties
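A minimal sketch of tuning this in synapse.properties; the value shown is a hypothetical example (a smaller interval reduces the callback-removal delay at the cost of more frequent checks):

```
# synapse.properties
# How often the timeout handler scans for expired callbacks (ms)
synapse.timeout_handler_interval=5000
```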

WSO2 ESB scheduled message forwarding processor becomes inactive after reaching max delivery attempt

I tried to follow this link, step by step, four times: the first three times I used WSO2 MB as the broker, and the last time I tried Apache ActiveMQ. The problem is that when I shut down the SimpleQuoteService server and send messages to the proxy via SoapUI, they accumulate in my queue and my scheduled message forwarding processor becomes inactive after reaching the max delivery attempts. However, the WSO2 ESB documentation says:
"To test the failover scenario, shut down the JMS broker (i.e., the original message store) and send a few messages to the proxy service.
You will see that the messages are not sent to the back-end since the original message store is not available. You will also see that the messages are stored in the failover message store."
Can anyone explain?
You can stop the message processor from being deactivated by setting the "max.delivery.drop" parameter to 'Enabled'. The processor will then drop the message after the max delivery attempts without deactivating itself. See here for the docs (definitions) of these parameters.
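For illustration, a hypothetical forwarding-processor definition with that parameter set; the processor, store, and endpoint names are placeholders:

```xml
<messageProcessor name="ForwardProcessor"
    class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
    messageStore="OriginalStore" targetEndpoint="BackendEP"
    xmlns="http://ws.apache.org/ns/synapse">
  <parameter name="interval">1000</parameter>
  <parameter name="max.delivery.attempts">4</parameter>
  <!-- Drop the message after max attempts instead of deactivating the processor -->
  <parameter name="max.delivery.drop">Enabled</parameter>
</messageProcessor>
```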

How do I set a timeout for TIdHTTPProxyServer (not the connection timeout)

I am using TIdHTTPProxyServer, and I want to terminate the connection when it connects to the target HTTP server successfully but receives no response for a long time (e.g. 3 minutes).
Currently I can find no related property or event for this. Moreover, if the client terminates the connection before the proxy server receives the response from the HTTP server, the OnException event is not fired until the proxy server receives the response. That is, while the proxy server is still waiting for a response from the HTTP server, I cannot even tell that the client has already terminated the connection.
Any help will be appreciated.
Thanks!
Willy
Indy uses infinite timeouts by default. To do what you are asking for, you need to set the ReadTimeout property of the outbound connection to the target server. You can access that connection via the TIdHTTPProxyServerContext.OutboundClient property. Use the OnHTTPBeforeCommand event, which is triggered just before the OutboundClient connects to the target server, e.g.:
#include "IdTCPClient.hpp"

void __fastcall TForm1::IdHTTPProxyServer1HTTPBeforeCommand(TIdHTTPProxyServerContext *AContext)
{
    // e.g. 3 minutes would be 3 * 60 * 1000 ms
    static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = ...;
}