How to send a request and receive the response asynchronously from a .NET web service with gSOAP2 - web-services

I have a .NET web service and a client program written in C++. The client uses gSOAP2 to access the web service. The problem is that I need to make a client request and receive the response from the server asynchronously. I have searched a lot on Google and also read sections 7.3 and 7.4 of the gSOAP user guide, but I still can't figure out how to do it. Please help me if you know.
Many thanks,
Tien

I don't think gSOAP means the same thing by asynchronous as you do: an asynchronous gSOAP client fires off a message and then forgets about it. From reading your question, my understanding is that you want to start the SOAP request/response process, go away and do something else, and then come back later or be notified when the response has been returned.
If this is the case then I'd suggest you look at using threads to get the behaviour you want. Start a new thread to make the call; your main thread can then be notified, or can check back, when the call has completed. If you need data back from the call, I'd be tempted to write a worker thread that communicates via a pair of threadsafe queues: one queue to send requests into the thread and one to pass responses back out. The main thread writes to the input queue and reads from the output queue. If you search on here for C++ threadsafe queue you'll get lots more info.
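Here is a minimal sketch of that layout in C++11. ThreadsafeQueue, the Request/Response types and the commented-out soap_call_ns__GetData call are placeholders of my own; substitute whatever proxy call gSOAP generated for your service.

```cpp
// Worker thread plus a pair of threadsafe queues (bare-bones sketch).
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

template <typename T>
class ThreadsafeQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
        cv_.notify_one();
    }
    T pop() {                        // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
    bool try_pop(T& value) {         // non-blocking check for the main thread
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        value = std::move(q_.front());
        q_.pop();
        return true;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

struct Request  { std::string input;  };
struct Response { std::string output; };

// The worker owns its own soap context and blocks on the SOAP call;
// the main thread only ever touches the two queues.
void soap_worker(ThreadsafeQueue<Request>& in, ThreadsafeQueue<Response>& out) {
    // struct soap* ctx = soap_new();   // gSOAP runtime context for this thread
    for (;;) {
        Request req = in.pop();
        Response resp;
        // Placeholder for the generated proxy call, e.g.:
        // soap_call_ns__GetData(ctx, endpoint, NULL, &req, &resp);
        resp.output = "result for " + req.input;
        out.push(resp);
    }
}
```

The main thread would launch the worker with std::thread worker(soap_worker, std::ref(in), std::ref(out)); push Request objects into in, and periodically call out.try_pop() (or use whatever notification suits your design) to collect responses.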

Related

Asynchronous Request Handling Using Multithreading

I am working on a module that uses 10 queues to handle threads, and each of them sends curl requests using the curl_easy interface (along with a lock), so that a single connection is maintained until the response is received. I want to enhance request handling by using the curl_multi interface, where curl requests are sent by the threads and handled in parallel.
I have created separate code to implement it. For instance, I created 3 threads, handled one by one; the first thread sends requests to curl_multi while it is running and transfers exist, allocating resources with the curl_easy interface for each transfer.
I have gone through a lot of examples but cannot figure out how to implement it in C++. Also, because I have only recently learnt multithreading and curl concepts in C++, I need assistance with the approach.
I expect a single thread to be able to keep sending curl requests until the user stops sending.
Update: I have added two threads, and each sends two requests simultaneously. curl_multi is driven by an array of curl_easy handles.
I want to avoid the array because it limits the number of requests.
Can it be made asynchronous, accepting all transfers and exiting only when the client/user does? There are plenty of curl_multi examples, yet I am still not clear on how to implement this.
Reading the curl_multi documentation, it doesn't seem as if you have to create different threads for this: it works by adding your multiple easy handles to the multi handle object. You then call curl_multi_perform to drive all transfers in a non-blocking way.
I expect a single thread to be able to keep sending curl requests until the user stops sending.
I don't understand what you mean by this. Do you mean that you just want to keep those connections alive until everything is transferred? If so, curl_multi already gives you info on the progress of your transfers, which can help you determine what to do.
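For illustration, here is a minimal single-threaded sketch of that pattern (the URLs are placeholders): curl_multi_perform drives every added easy handle a little further each time it is called, curl_multi_wait blocks only until there is socket activity, and curl_multi_info_read reports which transfers have completed.

```cpp
// Several transfers in parallel on one thread with the curl_multi interface.
#include <curl/curl.h>
#include <vector>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();

    std::vector<const char*> urls = {
        "https://example.com/a", "https://example.com/b", "https://example.com/c"
    };
    for (const char* url : urls) {
        CURL* easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url);
        curl_multi_add_handle(multi, easy);      // one easy handle per transfer
    }

    int still_running = 0;
    do {
        // Drives all transfers a bit further; returns instead of blocking
        // until everything is done.
        curl_multi_perform(multi, &still_running);

        // Wait (up to 1 s) for activity on any of the transfers' sockets.
        curl_multi_wait(multi, nullptr, 0, 1000, nullptr);

        // Check which transfers have finished so far.
        int msgs_left = 0;
        while (CURLMsg* msg = curl_multi_info_read(multi, &msgs_left)) {
            if (msg->msg == CURLMSG_DONE) {
                curl_multi_remove_handle(multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
            }
        }
    } while (still_running > 0);

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}
```

New easy handles can be added with curl_multi_add_handle at any point while the loop is running, which is how you avoid a fixed-size array of requests.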
Hope it helps

Is there any way to build an interactive terminal using Django Channels with its current limitations?

It seems that with Django Channels there is no persistent state between events on the websocket. Even within the same websocket connection, you cannot preserve anything between calls to receive() on a class-based consumer. If it can't be serialized into the channel_session, it can't be stored.
I assumed that the class based consumer would be persisted for the duration of the web socket connection.
What I'm trying to build is a simple terminal emulator, where a shell session would be created when the websocket connects. Read data would be passed as input to the shell and the shell's output would be passed out the websocket.
I cannot find a way to persist anything between calls to receive(). It seems like they took all the bad things about HTTP and brought them over to websockets. With each call to connect(), receive(), and disconnect(), the whole Consumer class is re-instantiated.
So am I missing something obvious? Can I make another thread and have it read from a Group?
Edit: The answers to this can be found in the comments below. You can hack around it, and Channels 3.0 will not re-instantiate the Consumers on every receive call.
The new version of Channels does not have this limitation. Consumers stay in memory for the duration of the websocket request.

libcurl multi asynchronous support

From the examples and documentation, it seems the libcurl multi interface provides asynchronous support in batch mode, i.e. easy handles are added to the multi handle and the requests are then all fired simultaneously with curl_multi_socket_action. Is it possible to trigger a request as each easy handle is added, with control returning to the application after the request has been written to the socket?
EDIT:
It would help to fire requests in the model below, instead of firing them in a batch (assuming request creation on the client side and processing on the server take the same duration):
Client -----|-----|-----|-----|
Server < >|-----|-----|-----|----|
The multi interface returns "control" to the application as soon as it would otherwise block. It will therefore also return control after it has sent off the request.
But I guess you're asking how you can figure out exactly when the request has been sent? I think that's only really possible by using CURLOPT_DEBUGFUNCTION and watching for when the request data goes out. Not really a convenient way...
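A rough sketch of that idea: with CURLOPT_VERBOSE enabled, libcurl invokes the debug callback with CURLINFO_HEADER_OUT and CURLINFO_DATA_OUT as it hands request data to the socket, which is about the closest signal you can get that the request has gone out.

```cpp
#include <curl/curl.h>
#include <cstdio>

// Debug callback: called by libcurl with outgoing request headers/body.
static int debug_cb(CURL* handle, curl_infotype type,
                    char* data, size_t size, void* userptr) {
    (void)handle; (void)data; (void)userptr;
    if (type == CURLINFO_HEADER_OUT)
        std::printf("request headers written (%zu bytes)\n", size);
    else if (type == CURLINFO_DATA_OUT)
        std::printf("request body written (%zu bytes)\n", size);
    return 0;
}

void configure(CURL* easy) {
    curl_easy_setopt(easy, CURLOPT_DEBUGFUNCTION, debug_cb);
    curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L);   // required for the callback
}
```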
You can check this example from the documentation:
https://curl.haxx.se/libcurl/c/hiperfifo.html
It combines libevent and libcurl.
When running, the program creates the named pipe "hiper.fifo". Whenever there is input into the fifo, the program reads the input as a list of URLs and creates some new easy handles to fetch each URL via the curl_multi "hiper" API. The fifo buffer is handled almost instantly, so you can even add more URLs while the previous requests are still being downloaded.
Then libcurl will download all easy handles asynchronously, driven by curl_multi_socket_action, so control returns to your application between events.
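If you want the shape of it without libevent, here is a stripped-down, poll()-based sketch of the same socket_action mechanism (POSIX poll(), single placeholder URL). libcurl tells you which sockets to watch through CURLMOPT_SOCKETFUNCTION and how long to wait through CURLMOPT_TIMERFUNCTION; your loop feeds socket events back with curl_multi_socket_action, and control returns to you between events.

```cpp
#include <curl/curl.h>
#include <poll.h>
#include <map>
#include <vector>

// fd -> CURL_POLL_* mask of events libcurl currently wants on that socket.
static std::map<curl_socket_t, int> g_sockets;
static long g_timeout_ms = -1;

// libcurl tells us which sockets to watch and for what.
static int socket_cb(CURL*, curl_socket_t s, int what, void*, void*) {
    if (what == CURL_POLL_REMOVE) g_sockets.erase(s);
    else                          g_sockets[s] = what;
    return 0;
}

// libcurl tells us how long we may sleep before calling it again.
static int timer_cb(CURLM*, long timeout_ms, void*) {
    g_timeout_ms = timeout_ms;
    return 0;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

    CURL* easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy);

    // Kick off: reporting a "timeout" makes libcurl start the transfer and
    // announce its sockets through socket_cb.
    int running = 0;
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

    while (running > 0) {
        std::vector<pollfd> fds;
        for (auto it = g_sockets.begin(); it != g_sockets.end(); ++it) {
            short ev = 0;
            if (it->second & CURL_POLL_IN)  ev |= POLLIN;
            if (it->second & CURL_POLL_OUT) ev |= POLLOUT;
            pollfd p = { it->first, ev, 0 };
            fds.push_back(p);
        }
        int timeout = (g_timeout_ms >= 0) ? (int)g_timeout_ms : 1000;
        int n = poll(fds.empty() ? nullptr : fds.data(),
                     (nfds_t)fds.size(), timeout);

        if (n <= 0) {   // timeout (or nothing to watch yet): let libcurl retry
            curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
            continue;
        }
        for (size_t i = 0; i < fds.size(); ++i) {
            if (!fds[i].revents) continue;
            int mask = 0;
            if (fds[i].revents & POLLIN)  mask |= CURL_CSELECT_IN;
            if (fds[i].revents & POLLOUT) mask |= CURL_CSELECT_OUT;
            // Control comes back here as soon as libcurl would otherwise block.
            curl_multi_socket_action(multi, fds[i].fd, mask, &running);
        }
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}
```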

How to synchronize recv() when multithreading in C++ (CRT)

I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem I'm getting is that I'm using WaitForSingleObject(handle, 10000 ms) to make the server wait a few seconds while interacting with one client and then let the others access it, but then I start seeing the server answer the wrong client with the wrong message and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how could I ensure that every incoming message is received and the reply goes to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model - one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. Use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations outstanding simultaneously, one per client, and wait on them all with WaitForMultipleObjects(). To be specific, you wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects (MAXIMUM_WAIT_OBJECTS, which is 64); when it's exceeded, you will need another thread. The return value of WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
Or, as suggested above, you could probably use select() to figure out which socket has data.
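To make the overlapped WSARecv() option concrete, here is a hypothetical sketch (Winsock 2, error handling mostly omitted; ClientSlot, the 4 KB buffer and the echo-style reply are made up for illustration, and WSAStartup() is assumed to have already been called):

```cpp
// One outstanding overlapped WSARecv per client, all waited on together.
#include <winsock2.h>
#include <vector>

#pragma comment(lib, "ws2_32.lib")

struct ClientSlot {
    SOCKET        sock;
    WSAEVENT      event;          // signalled when a receive completes
    WSAOVERLAPPED overlapped;
    WSABUF        wsabuf;
    char          buffer[4096];
};

// Post (or re-post) an asynchronous receive on one client socket.
static bool post_recv(ClientSlot& c) {
    ZeroMemory(&c.overlapped, sizeof(c.overlapped));
    c.overlapped.hEvent = c.event;
    c.wsabuf.buf = c.buffer;
    c.wsabuf.len = sizeof(c.buffer);
    DWORD flags = 0;
    int rc = WSARecv(c.sock, &c.wsabuf, 1, nullptr, &flags, &c.overlapped, nullptr);
    return rc == 0 || WSAGetLastError() == WSA_IO_PENDING;
}

void serve(std::vector<ClientSlot>& clients) {
    std::vector<HANDLE> events;
    for (auto& c : clients) {
        c.event = WSACreateEvent();
        events.push_back(c.event);
        post_recv(c);
    }
    for (;;) {
        // The index of the signalled event identifies the client that sent
        // data, so the reply always goes back on that client's own socket.
        DWORD n = WaitForMultipleObjects((DWORD)events.size(), events.data(),
                                         FALSE, INFINITE);
        if (n == WAIT_FAILED) break;
        DWORD idx = n - WAIT_OBJECT_0;
        if (idx >= clients.size()) continue;      // defensive
        ClientSlot& c = clients[idx];

        DWORD bytes = 0, flags = 0;
        if (WSAGetOverlappedResult(c.sock, &c.overlapped, &bytes, FALSE, &flags)
            && bytes > 0) {
            send(c.sock, c.buffer, (int)bytes, 0);   // echo-style reply
        }
        WSAResetEvent(c.event);
        post_recv(c);                                // re-arm this client's receive
    }
}
```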

How do JAXWS async calls work with polling

I need to invoke a long running task via a SOAP web service, using JAXWS on both ends, specifically, Apache CXF 2.6 on both ends.
I see that I can enable async methods in the CXF code generator, which creates two async methods per operation. Because of NAT issues, I cannot use WS-Addressing and callbacks. So I may want to use the other polling method.
I need to be sure that there will be no socket read timeouts using this mechanism, so I want to understand how it works.
Is it the case that a SOAP request is made to the server in a background thread which keeps the same, single, HTTP connection open, and the Future#isDone() checks to see if that thread has received a response?
If so, is there not a risk that a proxy server in between may define its own timeout, and cause an error if the server takes too long to respond?
What do other people do for invoking long running tasks via SOAP?
Yes, it would just keep checking the connection until a response is received. If something occurs between the client and server and the connection is lost, the response would not be retrievable.
For really long-running things, the better approach would be to split the long-running operation into two methods: one that takes the input, launches the work on a background thread, and just returns some sort of unique identifier, and a second that takes that identifier and returns the result. The client could call the second method to poll the server. That call could itself be long-running, and could block or use the async methods or similar. If THAT request times out, the client can just call it again.