How do JAX-WS async calls work with polling?

I need to invoke a long-running task via a SOAP web service, using JAX-WS on both ends, specifically Apache CXF 2.6.
I see that I can enable async methods in the CXF code generator, which creates two async methods per operation. Because of NAT issues, I cannot use WS-Addressing and callbacks, so I want to use the other, polling-based method instead.
I need to be sure that there will be no socket read timeouts using this mechanism, so I want to understand how it works.
Is it the case that a SOAP request is made to the server in a background thread which keeps the same, single, HTTP connection open, and the Future#isDone() checks to see if that thread has received a response?
If so, is there not a risk that a proxy server in between may impose its own timeout and cause an error if the server takes too long to respond?
What do other people do for invoking long running tasks via SOAP?
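For reference, the polling style I'm referring to looks roughly like this (a minimal sketch; LongTaskService, LongTaskPort, startJob and StartJobResponse are placeholder names standing in for whatever the CXF code generator produces from the real WSDL):

```java
import java.util.concurrent.TimeUnit;
import javax.xml.ws.Response;

// Sketch of the JAX-WS "polling" async style with placeholder generated types.
public class PollingClient {
    public static void main(String[] args) throws Exception {
        LongTaskPort port = new LongTaskService().getLongTaskPort();

        // The generated *Async variant returns a Response<T>, which extends Future<T>.
        Response<StartJobResponse> response = port.startJobAsync("some input");

        // Poll locally; the question is what happens to the underlying HTTP
        // connection while this loop spins.
        while (!response.isDone()) {
            TimeUnit.SECONDS.sleep(5);
        }
        System.out.println(response.get().getResult());
    }
}
```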

Yes, it would just keep checking the connection until a response is received. If something occurs between the client and server and the connection is lost, the response would not be retrievable.
For really long-running things, the better approach would be to split the long-running operation into two methods. One would take the input, launch the work on a background thread, and just return some sort of unique identifier. A second method would take that identifier and return the result. The client could call that second method to poll the server. That call could itself be long running and block, or use the async methods, or similar. If THAT request times out, the client can just call it again.
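A rough sketch of that split, with placeholder names and an in-memory job map purely for illustration (a real service would need eviction, error handling, and so on):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class LongTaskService {

    private final ExecutorService executor = Executors.newCachedThreadPool();
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Starts the work on a background thread and returns an identifier immediately.
    @WebMethod
    public String submitJob(final String input) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, executor.submit(() -> doLongRunningWork(input)));
        return id;
    }

    // Returns the result if it is ready, or null so the client knows to poll again.
    @WebMethod
    public String getResult(String id) {
        Future<String> job = jobs.get(id);
        try {
            return (job != null && job.isDone()) ? job.get() : null;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private String doLongRunningWork(String input) {
        // ... the actual long-running task ...
        return "result for " + input;
    }
}
```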

Related

Testing: When writing an HTTP response to the socket, write the headers, then sleep before writing the body

This is surely a weird one ... I'm doing some extreme integration style testing on a custom Java HTTP client for a backend service I'm working with. For reasons which don't matter here, the client has some specific quirks and a custom solution was the only real option.
For automated testing, I've built a "fake" version of the backend service by spinning up a Jetty server locally and having it behave in different ways (e.g. return 500, or wait 4 seconds before responding to simulate latency), then firing off a battery of tests against it with the client at build time.
Given the nature of this client, there is an unusual and specific scenario which I need to test, and I'm trying to find a way to make my Jetty server behave in the correct fashion. Basically, when returning the HTTP response, I need to immediately return the HTTP headers and the first few bytes of the HTTP body and then sleep. The goal is to trigger a socket timeout in the client specifically while it is reading the HTTP body.
Anyone know where in Jetty I could plug something in to force this behaviour? I was looking at the Connector interface but I'm not so sure that's the right place.
Thanks for any suggestions.
Write a few bytes to the HttpServletResponse.getOutputStream(), then call HttpServletResponse.flushBuffer() to immediately commit the response.
Bonus tip: use HttpServletResponse.sendError(-1) to terminate the connection abruptly.
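As a sketch, a test servlet along these lines (registered with your embedded Jetty; the byte counts and sleep duration are arbitrary) should reproduce the mid-body stall:

```java
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Commit the headers and the first few body bytes, then stall long enough
// to trip the client's socket read timeout in the middle of the body.
public class SlowBodyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.setContentType("text/plain");
        resp.setContentLength(1000); // promise more than we will send for a while

        OutputStream out = resp.getOutputStream();
        out.write("first few bytes".getBytes());
        resp.flushBuffer(); // commits the status line, headers and the bytes above

        try {
            Thread.sleep(30_000); // longer than the client's read timeout
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Optionally finish the body here if the client is still connected.
    }
}
```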

How to keep a HTTP long-polling connection open?

I want to implement long polling in a web service. I can set a sufficiently long time-out on the client. Can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies or surrounding SSH tunnels that may sit between the client and the server and that I do not control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, and not idle because the server has disconnected?
If so, how? I have been searching for around four hours now but I can't find information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond 102 Processing instead of 200 OK, and everything is fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the header? Do they make it into the transferred file, and may break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it didn't miss any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
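A client-side sketch of that loop (the /events URL, the "since" parameter and the one-event-per-line format are made up for illustration; error handling is kept minimal):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Long-polling loop: ask for events newer than the last serial number seen,
// and reconnect whenever the request times out or the connection drops.
public class LongPollClient {
    public static void main(String[] args) throws Exception {
        long lastSeen = 0;
        while (true) {
            try {
                URL url = new URL("http://example.com/events?since=" + lastSeen);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setReadTimeout(60_000); // give up and reconnect after a minute

                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // Assume each line is "<serial>:<payload>".
                        lastSeen = Long.parseLong(line.substring(0, line.indexOf(':')));
                        System.out.println(line);
                    }
                }
            } catch (Exception e) {
                // Timeout or dropped connection: just loop and reconnect.
            }
        }
    }
}
```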

Does WSDL have the concept of an asynchronous web method?

I'm writing an API in WCF 4.6.1. The client(s) will not be written by me, and will not necessarily be in .NET (they could be in any language/platform).
There is a web method which does something that can take a long time, so I want to encourage the client to call it asynchronously. I know that the client can be written to treat the web method as async (threading, etc.), but is there a way of "enforcing" the actual web service as an async operation? i.e. Does WSDL have a way of saying "this is an async method"?
Does WSDL have a way of saying "this is an async method"?
No, it doesn't. The communication between the client and the service is synchronous even if the client thread does not block while that call is taking place. That is to say, the invocation is asynchronous, not the web service method itself.
If you provide good documentation saying that, for a particular operation, it's advisable to use a separate thread because the response is slow to generate, you should be OK. Clients need to be built and their integration with the web service tested; the developers will notice the slow response and decide whether they need to make the call in a non-blocking way. Even blocking might be a solution for them, you never know: what you consider slow, others might have no issue with.
If you want to "force" clients not to block waiting for the response, you could, for example, use WS-Addressing (I'm assuming here that you are using WCF for a SOAP web service), where your client provides a callback endpoint that you can invoke when the response is ready. This complicates the client a bit, since it now needs a receiving endpoint. But a client developer might prefer to choose how she invokes the service (in a blocking or non-blocking way) as opposed to having to implement the WS-Addressing spec.

How to send request and receive response asynchronously to a .NET webservice by gSOAP2

I have a .NET web service and a client program written in C++. The client program uses gSOAP2 to access the web service. The problem is that I need to make a client request and receive the response from the server asynchronously. I have searched a lot on Google and also read sections 7.3 and 7.4 of the gSOAP user guide, but I still can't figure out how to do it. Please help me if you know.
Many thanks,
Tien
I don't think that gSOAP means the same thing by asynchronous as you do; an asynchronous gSOAP client fires off a message and then forgets about it. From reading your question, my understanding is that you want to start the SOAP request/response process, go away and do something else, and then come back later or be notified when the response has been returned.
If this is the case then I'd suggest you look at using threads to get the behaviour you want. Start a new thread to make the call; your main thread can then be notified, or can check back, when the call has completed. If you need data back from the call, I'd be tempted to write a thread that communicates via a pair of threadsafe queues: one queue to send requests into the thread and one to pass responses back out. The main thread writes to the input queue and reads the output queue. If you search on here for "C++ threadsafe queue" you'll get lots more info.
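Sketched in Java here just to show the shape of the pattern; in your case the same structure applies with C++ threads and whatever threadsafe queue you prefer, and the web-service call below is only a stand-in:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Worker thread fed by an input queue of requests, pushing results onto an
// output queue. The main thread stays free between handing off the request
// and collecting the response.
public class SoapWorker {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String request = requests.take();        // blocks until work arrives
                    responses.put(callWebService(request));  // blocking SOAP call happens here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        requests.put("do something slow");    // main thread hands off the request...
        doOtherWork();                        // ...carries on with other work...
        System.out.println(responses.take()); // ...and collects the result later.
    }

    private static String callWebService(String request) {
        return "response to " + request; // stand-in for the real gSOAP call
    }

    private static void doOtherWork() { /* anything else the main thread needs to do */ }
}
```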

Forcing asmx web service to handle requests one at a time

I am debugging an ASMX web service that receives "bursts" of requests, i.e. it is likely that the web service will receive 100 asynchronous requests within about 1 or 2 seconds. Each request seems to take about a second to process (this is expected and I'm OK with this performance). What is important, however, is that the requests are dealt with sequentially and no parallel processing takes place. I do not want any concurrent request processing, due to the external components called by the web service. Is there any way I can force the web service to handle requests one at a time?
I have seen the maxconnection attribute in machine.config, but this seems to apply only to outbound connections, whereas I wish to throttle the incoming connections.
Please note that refactoring into WCF is not an option at this point in time.
We are using IIS 6 on Windows Server 2003.
What I've done in the past is to simply put a lock statement around any access to the external resource I was using. In my case, it was a piece of unmanaged code that claimed to be thread-safe, but which in fact would trash the C runtime library heap if accessed from more than one thread at a time.
Perhaps you should be queuing the requests up internally and processing them one by one?
It may cause the clients to poll for results (if they even need them), but you'd get the sequential pipeline you wanted...
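To illustrate the shape of the idea (in Java here for brevity; the C# equivalent would be a lock statement or a single-consumer queue around the external component):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// "Queue internally, process one at a time": every incoming request submits
// its work to a shared single-threaded executor, so calls to the external
// component never overlap even when requests arrive concurrently.
public class SequentialGateway {
    private static final ExecutorService ONE_AT_A_TIME = Executors.newSingleThreadExecutor();

    // Called by each (possibly concurrent) web request.
    public String handle(String input) throws Exception {
        Future<String> result = ONE_AT_A_TIME.submit(() -> callExternalComponent(input));
        return result.get(); // the request blocks until its turn comes and completes
    }

    private String callExternalComponent(String input) {
        return "processed " + input; // stand-in for the non-thread-safe component
    }
}
```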
In IIS 7 you can set a limit on the number of connections allowed to a web site. Can you use IIS 7?