I'm working on a project that exposes a Web API for encrypting files, among other tasks. I want to make the encryption task asynchronous, because files can be larger than 1 GB and I do not want the client to wait for the file to be encrypted. Once a request for encryption is sent to the API, the client should be notified that the request was accepted, and when the task finishes, a notification about success or failure should be sent to the client. Meanwhile, the client can do anything else.
What are the best practices for this? Moreover, I am working in ASP.NET MVC.
You need to offload the encryption task to another thread on your server. This frees up (completes) the request-processing thread, and the client can continue with other work. You can wrap the encryption task so that a callback is invoked after successful completion or failure. This callback is responsible for notifying the client.
To notify the client upon completion of the encryption task, you have several options, which you must implement within your callback:
Email the result to the client.
If the client is a service listening on a specific port, you can accept a callback URL in the initial encryption request and invoke that URL after the encryption task completes. The assumption here is that the client is running an HTTP service.
If there are any other integration points with the client (such as a filesystem, database, or message-oriented middleware), use those to notify it of task completion.
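The accept-then-notify flow above can be sketched as follows. This is only an illustration, shown in Java with invented names (EncryptionService, submit) and the real encryption replaced by a stub; the same shape applies in ASP.NET MVC with a background Task and a continuation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical sketch: accept an encryption request, return immediately,
// and invoke a caller-supplied callback when the background work finishes.
class EncryptionService {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Returns as soon as the job is queued; the callback fires later with
    // "success" or "failure". The callback is where you would email the
    // client, invoke its callback URL, or write to a shared store.
    void submit(String file, Consumer<String> notifyClient) {
        worker.submit(() -> {
            try {
                encrypt(file);                 // potentially minutes for a 1 GB file
                notifyClient.accept("success");
            } catch (Exception e) {
                notifyClient.accept("failure");
            }
        });
    }

    private void encrypt(String file) throws Exception {
        Thread.sleep(100); // placeholder for the actual (slow) encryption
    }

    void shutdown() { worker.shutdown(); }
}
```

The request handler only calls submit() and immediately returns an "accepted" response; the notification path runs entirely on the worker thread.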
I want to implement long polling in a web service. I can set a sufficiently long timeout on the client. Can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies, or surrounding SSH tunnels that may sit between the client and the server and that are not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network components that the connection is intentionally idle, rather than that the server has disconnected?
If so, how? I have been searching for about four hours now, but I cannot find any information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond 102 Processing instead of 200 OK, and everything is fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code? Before or after the headers? Do they end up in the transferred file, and might they break it?
The web service/server is written in C++ using Boost, and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks whether any of the buffered events need to be sent, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response, and the response finishes at that point, in case a proxy wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it did not miss any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
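The server-side buffer in this design can be sketched as follows. This is shown in Java for brevity (the structure ports directly to a C++/Boost server), and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the event buffer: each event implicitly carries a serial number
// (its 1-based position), a long poll supplies the last serial the client
// has seen, and either gets the backlog at once or waits for the next event.
class EventBuffer {
    private final List<String> events = new ArrayList<>(); // index i holds serial i+1
    private final Object lock = new Object();

    // Called when the server generates a new event.
    void publish(String event) {
        synchronized (lock) {
            events.add(event);
            lock.notifyAll();          // wake any waiting long-poll requests
        }
    }

    // Handles one long poll: return all events after lastSeenSerial,
    // blocking (up to timeoutMillis) if the client is already up to date.
    List<String> poll(int lastSeenSerial, long timeoutMillis) throws InterruptedException {
        synchronized (lock) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (events.size() <= lastSeenSerial) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) return List.of(); // timed out: empty response, client re-polls
                lock.wait(remaining);
            }
            return new ArrayList<>(events.subList(lastSeenSerial, events.size()));
        }
    }
}
```

Because the client always resends its last-seen serial, a connection killed by an intermediary costs nothing: the next poll picks up exactly where the previous one left off.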
I am working on a Kurento custom plugin in which I have to make a cURL web request, send the audio to a server, and wait for the server's response. I was wondering whether there is any way to raise events to the Java server from the Kurento custom plugin synchronously. Should I make async calls to raise events, or make my cURL calls async?
Events fired from the media server are asynchronous. Requests, on the other hand, are synchronous, as there is only one thread attending to incoming requests.
I would suggest an event-based asynchronous model in all parts, so you don't block the call to your app server. If you still want to do that, you can wrap your asynchronous event in a synchronous call. You might want to have a look at some helper classes that we use for our tests: the AsyncManager and the AsyncEventManager. You can find an example of usage in any of the tests, but maybe this one is closest to what you want to achieve.
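The idea of wrapping an asynchronous event in a synchronous call can be sketched like this. Note this shows only the underlying pattern, not the actual API of AsyncManager or AsyncEventManager:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative wrapper: the asynchronous event handler completes a future,
// and the synchronous caller blocks on it with a timeout.
class SyncWrapper {
    private final CompletableFuture<String> event = new CompletableFuture<>();

    // Invoked from the asynchronous event handler when the event fires.
    void onEvent(String payload) {
        event.complete(payload);
    }

    // Blocks the caller until the event arrives or the timeout expires.
    String waitForEvent(long timeout, TimeUnit unit) throws Exception {
        try {
            return event.get(timeout, unit);
        } catch (TimeoutException e) {
            return null; // caller decides how to handle a missing event
        }
    }
}
```

The timeout matters: since there is only one thread attending requests, an unbounded wait here would stall it.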
I can't find any definitive answer here. My IoT service needs to tolerate flaky connections. Currently, I manage a local cache myself and retry a cloud-blob transfer as often as required. Could I replace this with an Azure Event Hubs service? That is, will the Event Hubs client (on IoT Core) buffer events until the connection is available? If so, where is the documentation on this?
It doesn't seem so according to:
https://azure.microsoft.com/en-us/documentation/articles/event-hubs-programming-guide/
It seems you are responsible for sending and caching:
Send asynchronously and send at scale

You can also send events to an Event Hub asynchronously. Sending asynchronously can increase the rate at which a client is able to send events. Both the Send and SendBatch methods are available in asynchronous versions that return a Task object. While this technique can increase throughput, it can also cause the client to continue to send events even while it is being throttled by the Event Hubs service and can result in the client experiencing failures or lost messages if not properly implemented. In addition, you can use the RetryPolicy property on the client to control client retry options.
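Since the caching is your responsibility, a minimal buffer-and-retry sketch might look like the following. The Predicate stands in for whatever send call you actually use (e.g. the Event Hubs client's Send), and all names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Illustrative local cache: events queue up while the connection is down
// and drain in order once sends start succeeding again.
class BufferedSender {
    private final Deque<String> pending = new ArrayDeque<>();

    void enqueue(String event) {
        pending.add(event);
    }

    // Attempt to flush the cache; stop at the first failed send and keep
    // the remaining events for a later retry. Returns how many were sent.
    int flush(Predicate<String> send) {
        int sent = 0;
        while (!pending.isEmpty()) {
            if (!send.test(pending.peek())) break; // connection still flaky
            pending.poll();
            sent++;
        }
        return sent;
    }

    int pendingCount() { return pending.size(); }
}
```

This keeps ordering and makes retries idempotent from the caller's point of view: flush() can be called as often as you like.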
I just learned WebSockets, but I'm still a C++ novice.
I'm using websocket++ 0.3X, and it is a veritable godsend (can't wait for 1.0). If there are multiple concurrent connections and one client sends the server a message, will the message trigger the handlers of all the other clients? If not, how can this be done? (Does this involve multithreading?)
What I want to do is the obvious: update the database via a message from a client, then update any other clients currently viewing the updated fields.
Sources:
http://www.zaphoyd.com/websocketpp/
https://github.com/zaphoyd/websocketpp/wiki
The on_message handler will be called only in the connection that received the message. That connection is responsible for updating the database and signaling to your program to send an update out to all other clients.
Take a look at the broadcast server example here: http://www.zaphoyd.com/websocketpp/manual/common-patterns/server-initiated-messages for a simple example of how to set this up.
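The pattern behind that example, reduced to its essentials, looks like this (sketched in Java for brevity; the Consumer callbacks stand in for websocket++ connection handles, which the linked C++ example uses for real):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Broadcast pattern in miniature: the server keeps a registry of open
// connections, and the single on_message handler that fires pushes the
// update out to every connected client.
class BroadcastServer {
    private final List<Consumer<String>> connections = new ArrayList<>();

    // Register a connection when it opens (remove it on close, omitted here).
    void onOpen(Consumer<String> connection) {
        connections.add(connection);
    }

    // Called only on the connection that received the message.
    void onMessage(String msg) {
        // 1. update the database here (omitted)
        // 2. fan the update out to every connected client
        for (Consumer<String> c : connections) {
            c.accept(msg);
        }
    }
}
```

No extra threads are required for this: as long as all handlers run on the server's event loop, iterating the connection list in onMessage is safe.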
I need to invoke a long running task via a SOAP web service, using JAXWS on both ends, specifically, Apache CXF 2.6 on both ends.
I see that I can enable async methods in the CXF code generator, which creates two async methods per operation. Because of NAT issues, I cannot use WS-Addressing and callbacks. So I may want to use the other polling method.
I need to be sure that there will be no socket read timeouts using this mechanism, so I want to understand how it works.
Is it the case that a SOAP request is made to the server in a background thread which keeps the same, single, HTTP connection open, and the Future#isDone() checks to see if that thread has received a response?
If so, is there not a risk that a proxy server in between may define its own timeout and cause an error if the server takes too long to respond?
What do other people do for invoking long running tasks via SOAP?
Yes, it would just keep checking the connection until a response is received. If something occurs between the client and the server and the connection is lost, the response will not be retrievable.
For really long-running work, the better approach is to split it into two methods. One takes the input, launches the work on a background thread, and just returns some sort of unique identifier. A second method takes that identifier and returns the result; the client calls it to poll the server. That second call could itself be long-running and block, or use the async methods, or similar. If that request times out, the client can simply call it again.
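The two-method split can be sketched as follows. The names are illustrative (this is not CXF API), and the Callable stands in for the long-running work behind the SOAP operation:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the launch-then-poll service: start() returns a ticket at once,
// result() is the repeatable poll call that can safely time out and be retried.
class LongTaskService {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // First SOAP operation: launch the work, hand back an identifier.
    String start(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(work));
        return id;
    }

    // Second SOAP operation: poll for the result. null means "not done yet,
    // call again", which is exactly what the client does after a timeout.
    String result(String id) throws Exception {
        Future<String> f = jobs.get(id);
        if (f == null || !f.isDone()) return null;
        jobs.remove(id);
        return f.get();
    }

    void shutdown() { pool.shutdown(); }
}
```

Because the identifier, not the connection, carries the state, a dropped or timed-out poll costs nothing: the client retries result() with the same id.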