Separate connect and read timeouts for JBossWS - web-services

Can anyone give a hint how to set separate connect and read timeouts when calling an external web service from a JBossWS-Native client?
All I've found so far is how to set a single timeout:
bindingProvider.getRequestContext().put("org.jboss.ws.timeout", 1000);
The same question (unanswered for a long time) on the JBoss forum:
http://community.jboss.org/thread/103582
Versions in use: jbossws-native-2.0.1.SP2 and jbossws-native-3.1.1.GA on JBoss 4.2.x.

Examined the source - it's not possible at all; there's only a single timeout.
Went to the JBossWS JIRA intending to file a feature request,
but found JBWS-3114 and this message:
I've added these two properties,
"javax.xml.ws.client.connectionTimeout" and
"javax.xml.ws.client.receiveTimeout", to the CXF and native stacks for
stack-agnostic timeout configuration:
public void testConfigureTimeout() throws Exception
{
    // Set timeout until a connection is established
    ((BindingProvider) port).getRequestContext()
        .put("javax.xml.ws.client.connectionTimeout", "6000");
    // Set timeout until the response is received
    ((BindingProvider) port).getRequestContext()
        .put("javax.xml.ws.client.receiveTimeout", "1000");
    String response = port.echo("testTimeout");
    System.out.println("Received response: " + response);
}
This should be included in the 3.4.0 release.
Rechecked the source - it's there!
Unfortunately, according to the compatibility matrix, jbossws-3.4.0 is supported only since JBoss AS 5.0.1.

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API receives lots of queries during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I discovered this, I tried running the HTTP requests many times in Postman; out of every twenty tries I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not loaded, and when I go directly to the integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so the timeout is not the problem. Also, the HTTP 503 does not come after 30 seconds of the request but instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects standard keep-alive behavior from the integration, so I started tweaking my NGINX server parameters until I solved the issue.
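For reference, a minimal sketch of the kind of NGINX tuning meant here; the directives are standard NGINX, but the values are illustrative assumptions to adapt to your setup:
http {
    # keep idle connections open long enough for API Gateway to reuse them
    keepalive_timeout 75s;      # how long an idle keep-alive connection stays open
    keepalive_requests 1000;    # max requests served over one keep-alive connection
}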
Had the same issue on a self-made Node microservice integrated into AWS API Gateway. After reconfiguring the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify your problem is the same, i.e. through more detailed log output:
In API Gateway, under Logging, add more output in "Log format".
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After using the API Gateway endpoint and hitting the failure, consult the logs again; with the format above, the integration error fields should now show INTEGRATION_NETWORK_FAILURE.
Solution in the NodeJS microservice (using Express)
Add timeouts for headers and keep-alive in the Express server's socket configuration when listening.
const app = require('express')();

// if not already set, and you need to advertise the keep-alive in the
// HTTP response, you might want to use this
/*
app.use((req, res, next) => {
    res.setHeader('Connection', 'keep-alive');
    res.setHeader('Keep-Alive', 'timeout=30');
    next();
});
*/

/* ..your main logic.. */

const server = app.listen(8080, 'localhost', () => {
    console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

server.keepAliveTimeout = 30 * 1000; // <- important lines
server.headersTimeout = 35 * 1000;   // <- important lines
Reason
Some AWS components seem to demand that a connection be kept alive, even if the server says otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse the connection, the recycling fails because the other side has most likely already closed it, hence the assumed "NETWORK-FAILURE".
This error is intermittent, since the API Gateway at least seems to close unused connections after a while, giving a clean execution the next time. I can only assume they do that for high performance and will not divert to anything less.

DefaultHandshakeHandler's determineUser not called on Production Server

I have a Grails project which uses Spring WebSockets. I have implemented DefaultHandshakeHandler to create a random principal name for each new session, and I use convertAndSendToUser to send messages.
Everything works fine when run locally. I am also able to deploy the WAR file on an AWS EC2 Linux instance running the latest Tomcat. The file deploys fine, and the connect and disconnect events on the WebSockets are detected correctly.
The only problem is, my CustomHandshakeHandler's determineUser does not get called in production. Because of this, my StompHeaderAccessor's principal is always null and the code starts spitting NPEs.
This is how I have declared the CustomHandshakeHandler:
class CustomHandshakeHandler extends DefaultHandshakeHandler {
    @Override
    protected Principal determineUser(ServerHttpRequest request,
                                      WebSocketHandler wsHandler,
                                      Map<String, Object> attributes) {
        // Generate principal with UUID as name
        return new StompPrincipal(UUID.randomUUID().toString())
    }
}
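(StompPrincipal is not a Spring class; it is presumably a trivial java.security.Principal implementation along these lines — a sketch for context, not the original class:)
import java.security.Principal;

// minimal Principal that only carries the generated session name
public class StompPrincipal implements Principal {
    private final String name;

    public StompPrincipal(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return name;
    }
}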
This is how I set the handshake handler configuration:
@Override
void registerStompEndpoints(StompEndpointRegistry stompEndpointRegistry) {
    stompEndpointRegistry.addEndpoint("/ws-ep")            // set the WebSocket endpoint to connect to
        .setHandshakeHandler(new CustomHandshakeHandler()) // set the custom handshake handler
        .withSockJS()                                      // add SockJS support for the frontend
}
I tried deploying the same WAR file on a local Tomcat 8 and it again works fine. It seems the problem is caused by something on AWS. I also searched for other AWS and WebSocket related issues and came across some ELB compatibility topics, but I don't think that's my case, since my WebSockets are otherwise working fine (events are being received).
Can someone please help out or point me in the right direction?
The problem was with the Nginx settings that upgrade the HTTP request to a WS request. This question helped a lot: Spring WebSocket: Handshake failed due to invalid Upgrade header: null
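For reference, the fix from that question is to make Nginx forward the WebSocket upgrade headers to the backend; a minimal sketch, assuming the /ws-ep endpoint from the code above, with the upstream address illustrative:
location /ws-ep {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    # forward the upgrade handshake so the servlet container sees a WebSocket request
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}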

WSO2 MB Cluster Giving Connection reset by peer

Test cluster of two brokers, WKA membership scheme, PostgreSQL message store; working fine for a couple of days, then throwing the following errors:
TID: [] [] [2016-07-19 12:09:24,738] ERROR {org.wso2.andes.server.protocol.MultiVersionProtocolEngine} - Error establishing session {org.wso2.andes.server.protocol.MultiVersionProtocolEngine}
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.read(SocketIoProcessor.java:218)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.process(SocketIoProcessor.java:198)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.access$400(SocketIoProcessor.java:45)
at org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:485)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51)
at java.lang.Thread.run(Thread.java:745)
Startup of the Message Broker looks fine: no errors, the JDBC connection to the PostgreSQL DB is OK, the registry mount looks OK. Then that error starts appearing in wso2carbon.log several times per minute.
Anyone have any ideas? As far as I know nothing has changed, and I don't know what it's trying to connect to.
This usually happens when clients connected to MB create a connection per message. JMS connections are heavyweight, and creating a connection for each message is not recommended. Therefore, please go through the client implementation and verify that connections are not created per message.
If by any chance you are using WSO2 ESB to publish/subscribe to queues/topics on MB, there is a connection-caching property "transport.jms.CacheLevel" in the ESB's axis2.xml, shown below. Read the documentation and use the appropriate caching level for your use case.
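For illustration, a sketch of what that parameter looks like in axis2.xml (placed under the JMS transport configuration; the level "connection" is an assumption, pick the level that fits your use case):
<!-- cache JMS connections instead of creating one per message;
     other levels include none, session, consumer and producer -->
<parameter name="transport.jms.CacheLevel">connection</parameter>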
There was a bug in ESB 4.8.1 that caused the connection caching property to be ignored; it is fixed in 4.9.0.
These are the possible causes I can think of with the given information. If you need more info, please provide a detailed use case.

How to define timeout for webservice client in websphere

Our application is hosted in WebSphere, and my web service client (JAX-WS) is making a call to a remote server. I need to define a timeout for this web service call. I tried different ways to set the timeout, with no luck. Here is what I tried:
Map<String, Object> requestContext = ((BindingProvider) binding).getRequestContext();
requestContext.put("com.ibm.websphere.webservices.jaxws.asynctimeout", 15000);
or
Map<String, Object> requestContext = ((BindingProvider) binding).getRequestContext();
requestContext.put(BindingProviderProperties.REQUEST_TIMEOUT, 15000);
requestContext.put(BindingProviderProperties.CONNECT_TIMEOUT, 15000);
None of them works.
Can anyone give a hint how to set up a timeout for a web service client in WebSphere?
Thx
Because JAX-WS in WAS relies on Axis2, I believe you could use the standard Axis2 approach. Try this (from the Axis2 docs):
Timeout Configuration
Two timeout instances exist at the transport level: socket timeout and connection timeout. These can be configured either at deployment time or at run time. If configuring at deployment time, the user has to add the following lines to axis2.xml.
For Socket timeout:
<parameter name="SO_TIMEOUT">some_integer_value</parameter>
For Connection timeout:
<parameter name="CONNECTION_TIMEOUT">some_integer_value</parameter>
For runtime configuration, it can be set as follows within the client stub:
...
Options options = new Options();
options.setProperty(HTTPConstants.SO_TIMEOUT, new Integer(timeOutInMilliSeconds));
options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, new Integer(timeOutInMilliSeconds));
// or
options.setTimeOutInMilliSeconds(timeOutInMilliSeconds);
...
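Put together against a generated stub, the runtime approach might look like this; MyServiceStub and the endpoint URL are hypothetical placeholders for your own generated client:
import org.apache.axis2.client.Options;
import org.apache.axis2.transport.http.HTTPConstants;

// MyServiceStub stands in for your Axis2-generated stub class
MyServiceStub stub = new MyServiceStub("http://remote-host/service");
Options options = stub._getServiceClient().getOptions();
// socket (read) timeout and connection timeout, in milliseconds
options.setProperty(HTTPConstants.SO_TIMEOUT, Integer.valueOf(15000));
options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, Integer.valueOf(15000));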
If you want more information check: http://axis.apache.org/axis2/java/core/docs/http-transport.html
Also:
http://wso2.org/library/209
http://singztechmusings.wordpress.com/2011/05/07/how-to-configure-timeout-duration-at-client-side-for-axis2-web-services/
If you're using ServiceClient check this thread please: Axis2 ServiceClient options ignore timeout
Please let me know if it worked ;)
You should set the timeout properties mentioned in the following link:
https://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rwbs_jaxwstimeouts.html
bindingProvider.getRequestContext().put(com.ibm.wsspi.webservices.Constants.CONNECTION_TIMEOUT_PROPERTY, timeout);
bindingProvider.getRequestContext().put(com.ibm.wsspi.webservices.Constants.RESPONSE_TIMEOUT_PROPERTY, timeout);
Set the timeout value in seconds, as a String.

how to set connection/request timeout for jetty server?

I'm running an embedded Jetty server (Jetty 6.1.24) inside my application like this:
Handler handler = new AbstractHandler()
{
    @Override
    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // this can take a long time
        doSomething();
    }
};
Server server = new Server(8080);
Connector connector = new org.mortbay.jetty.nio.SelectChannelConnector();
server.addConnector(connector);
server.setHandler(handler);
server.start();
I would like to set a timeout value (2 seconds) so that if the handler.handle() method takes more than 2 seconds, the Jetty server times out and responds to the client with HTTP code 408 (Request Timeout).
This is to guarantee that my application will not hold the client request for a long time and always responds within 2 seconds.
I did some research and tested it with "connector.setMaxIdleTime(2000);", but it doesn't work.
Take a look at the API for SelectChannelConnector (Jetty):
http://download.eclipse.org/jetty/7.6.17.v20150415/apidocs/org/eclipse/jetty/server/nio/SelectChannelConnector.html
I've tried to locate the timeout features of the channel (which controls incoming connections): setMaxIdleTime(), setLowResourceMaxIdleTime() and setSoLingerTime() appear to be available.
NOTE: the reason your timeout setting doesn't work has to do with the nature of sockets on your operating system, and perhaps with the nature of Jetty itself (I've read about it somewhere, but cannot remember where).
NOTE 2: I'm not sure why you are trying to limit the timeout; if you're trying to prevent denial of service, perhaps a better approach is limiting the buffer sizes?
Yes, this is possible. You could do this using Jetty's DoSFilter. This filter is generally used to configure a DoS attack prevention mechanism for your Jetty web server. A property of this filter called 'maxRequestMs' provides what you are looking for.
For more details, check this.
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/servlets/DoSFilter.html
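For illustration, a minimal sketch of wiring the DoSFilter into web.xml; the two-second limit matches the question above, while the filter mapping is an assumption to adapt:
<filter>
    <filter-name>DoSFilter</filter-name>
    <filter-class>org.eclipse.jetty.servlets.DoSFilter</filter-class>
    <init-param>
        <!-- abort requests that run longer than 2 seconds -->
        <param-name>maxRequestMs</param-name>
        <param-value>2000</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>DoSFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>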