Putting the existing problems aside, I moved on to testing whether the server in my universal app can handle multiple requests. It appears that it cannot handle multiple requests as advertised in the documentation. (See the source files ServerTask.cpp and MainPage.xaml.cpp for the related code and the README for background information.)
In background (i.e. suspended) mode, subsequent requests end up with
WinRT information: The object identifier does not represent a valid object.
EDIT: I just ran it again and the background ServerTask is not executing at all. When ClientTask is triggered, the app is automatically woken up from the Suspended state, and netstat indicates that it is listening on the appropriate port but not responding to requests.
While in foreground mode, subsequent requests end up with
WinRT information: An existing connection was forcibly closed by the remote host.
which suggests that I should not do
delete args->Socket;
after handling the request in MainPage::OnConnectionReceived. If I remove that line, it can handle 2-3 requests but still ends up with the same exception. On the other hand, is leaving sockets open the right way to go?
How should it be implemented?
Related
I want to implement long polling in a web service. I can set a sufficiently long time-out on the client. Can I give a hint to the intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies or surrounding SSH tunnels that may be between the client and the server and that are not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network components that the idle connection is intentional, rather than a sign that the server has disconnected?
If so, how? I have been searching for around four hours now, but I can't find any information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond with 102 Processing instead of 200 OK, and then everything is fine?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the headers? Do they end up in the transferred file, and might they break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it didn't miss any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it creates a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
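To make the buffered-events idea concrete, here is a minimal C++ sketch of the server-side piece, since the service in question is written in C++ with Boost. Everything in it (the EventBuffer class, publish, eventsSince, the in-memory deque and condition variable) is illustrative rather than part of any particular framework; the request handler would call eventsSince with the serial number taken from the URL or cookie and write whatever it returns as one complete HTTP response.

```cpp
// Minimal sketch of the server-side event buffer described above.
// Names and structure are illustrative; adapt to your own request handlers.
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <string>
#include <vector>

struct Event {
    std::uint64_t serial;   // monotonically increasing
    std::string payload;
};

class EventBuffer {
public:
    // Called by the producer whenever a new event is generated.
    void publish(std::string payload) {
        std::lock_guard<std::mutex> lock(mutex_);
        events_.push_back({next_serial_++, std::move(payload)});
        if (events_.size() > max_buffered_) events_.pop_front();
        cv_.notify_all();
    }

    // Called per GET request: returns every event newer than last_seen.
    // Blocks (the long poll) until at least one such event exists or the
    // timeout expires, whichever comes first.
    std::vector<Event> eventsSince(std::uint64_t last_seen,
                                   std::chrono::seconds timeout) {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait_for(lock, timeout, [&] {
            return !events_.empty() && events_.back().serial > last_seen;
        });
        std::vector<Event> out;
        for (const Event& e : events_)
            if (e.serial > last_seen) out.push_back(e);
        return out;   // may be empty if the timeout expired
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<Event> events_;
    std::uint64_t next_serial_ = 1;
    std::size_t max_buffered_ = 1000;
};
```

Note that with a bounded buffer, a client that falls further behind than the buffer reaches can no longer catch up from the buffer alone; how to handle that case (for example, a full resync) is left out of the sketch.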
I have a Django web application, and I was wondering whether it is possible to have nginx propagate the abort/close to uwsgi/Django.
Basically, I know that nginx is aware of the premature abort/close because uwsgi_ignore_client_abort defaults to "off", and you get 499 errors in your nginx logs when requests are aborted/closed before the response is sent. Once uwsgi finishes processing the request, it throws an "IO Error" when it tries to return the response to nginx.
Turning uwsgi_ignore_client_abort to "on" just makes nginx unaware of the abort/close, and removes the uwsgi "IO Errors" because uwsgi can still write back to nginx.
My use case is that I have an application where people page through some AJAX results very quickly, and if they page through quickly I abort the pending AJAX request for the page that they skipped; this keeps the client clean and efficient. But this does nothing for the server side (uwsgi/Django), because it still has to process every single request even if nothing will be waiting for the response.
Now, obviously there may be certain pages where I don't want the request to be prematurely aborted for any reason. But I use Celery for long-running requests that may fall into that category.
So is this possible? uWSGI's harakiri setting makes me think that it is possible at some level... I just can't figure out how to do it.
My use case is that I have an application where people page through some AJAX results very quickly, and if they page through quickly I abort the pending AJAX request for the page that they skipped; this keeps the client clean and efficient.
Aborting an AJAX request on the client side is done through XMLHttpRequest.abort(). If the request has not yet been sent out when abort() is called, then the request won't go out. But if the request has been sent, the server won't know that the request has been aborted. The connection won't be closed, there won't be any message sent to the server, nothing. If you want the server to know that a request is no longer needed, you basically need to come up with a way to identify requests so that when you make the initial request you get an identifier for it. Then, through another AJAX request you could tell the server that an earlier request should be cancelled. (If you search questions about abort() like this one and search for "server" you'll find explanations saying the same.)
Note that uwsgi_ignore_client_abort is something that deals with connection closures at the TCP level. That's a different thing from aborting an AJAX request. There is generally no action you can take in JavaScript that will entail closing a TCP connection. The browser optimizes the creation and destruction of connections to suit its needs. Just now, I did this:
I used lsof to check whether any process had a connection to example.com. There were none. (lsof is a *nix utility that allows listing open files. Network connections are "files" in *nix.)
I opened a page to example.com in Chrome. lsof showed the connection and the process that opened it.
Then I closed the page.
I polled with lsof to see if the connection I identified earlier was still open. It stayed open for about one minute after I closed the page, even though there was no real need to keep the connection open.
And there's no amount of fiddling with uwsgi settings that will make it aware of aborts performed through XMLHttpRequest.abort().
The use-case scenario you gave was one where users were paging fast through some results. I can see two possibilities for the description given in the question:
The user waits for a refresh before paging further. For instance, Alice is looking through a list of user names sorted alphabetically for user "Zeno", and each time a new page is shown, she sees the name is not there and pages down. In this case, there's nothing to abort, because the user's action depends on the request having been handled first. (The user has to see the new page before making a decision.)
The user just pages down without waiting for a refresh. Alice again is looking for "Zeno", but she figures it's going to be on the last page, so click, click, click she goes. In this case, you can debounce the requests made to the server. When the next-page button is pressed, increment the number of the page that should be shown to the user, but don't send the request right away. Instead, wait for a small delay after the user stops clicking the button and then send the request with the final page number, so you make one request instead of a dozen. Here is an example of a debounce performed for a DataTables search.
Now, obviously there may be certain pages where I don't want the request to be prematurely aborted for any reason.
This is precisely the problem with deciding this one way or the other.
Obviously, you may not want to continue spending system resources processing a connection that has since been aborted, e.g., an expensive search operation.
But then maybe the connection was important enough that it still has to be processed even if the client has disconnected.
E.g., the very same expensive search operation, but one that's actually not client-specific, and will be cached by nginx for all subsequent clients, too.
Or maybe an operation that modifies the state of your application: you clearly wouldn't want your application to end up in an inconsistent state!
As mentioned, the problem is with uWSGI, not with nginx. However, you cannot have uWSGI automatically decide what your intention was without you revealing that intention to uWSGI yourself.
And how exactly will you reveal your intention in your code? A whole bunch of programming languages don't really support multithreaded and/or asynchronous programming models, which makes it entirely non-trivial to cancel operations.
As such, there is no magic solution here. Even concurrency-friendly programming languages like Go have issues around the WithCancel context: you may have to pass it around in every function call that could possibly block, which makes the code very ugly.
Are you already doing that kind of context passing in Django? If not, then the solution is ugly but very simple: at any point where you can clearly abort the request, check whether the client is still connected with uwsgi.is_connected(uwsgi.connection_fd()):
http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html
From the examples and documentation, it seems the libcurl multi interface provides asynchronous support in batch mode, i.e. easy handles are added to the multi handle and then the requests are finally fired simultaneously with curl_multi_socket_action. Is it possible to trigger a request when an easy handle is added, with control returning to the application once the request has been written to the socket?
EDIT:
It would help to fire requests in the model below, instead of firing them in a batch (assuming request creation on the client side and processing on the server take the same duration):
Client -----|-----|-----|-----|
Server < >|-----|-----|-----|----|
The multi interface returns "control" to the application as soon as it would otherwise block. It will therefore also return control after it has sent off the request.
But I guess you're asking how you can figure out exactly when the request has been sent? I think that's only really possible by using CURLOPT_DEBUGFUNCTION and seeing when the request is sent. Not really a convenient way...
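For what it's worth, here is a rough sketch of that CURLOPT_DEBUGFUNCTION approach. The callback signature and the CURLINFO_HEADER_OUT / CURLINFO_DATA_OUT info types are part of the public libcurl API; the request_sent flag and how you react to it are purely illustrative.

```cpp
#include <curl/curl.h>

// Debug callback: libcurl reports every chunk of outgoing/incoming traffic
// here once CURLOPT_VERBOSE is enabled and a debug function is installed.
static int debug_cb(CURL* handle, curl_infotype type,
                    char* data, size_t size, void* userptr)
{
    bool* request_sent = static_cast<bool*>(userptr);
    // HEADER_OUT / DATA_OUT mean libcurl just handed outgoing bytes to the
    // transport; after the last of these, the request is on the wire.
    if (type == CURLINFO_HEADER_OUT || type == CURLINFO_DATA_OUT)
        *request_sent = true;
    (void)handle; (void)data; (void)size;
    return 0;
}

void arm_easy_handle(CURL* easy, bool* request_sent_flag)
{
    curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L);   // required for the debug callback
    curl_easy_setopt(easy, CURLOPT_DEBUGFUNCTION, debug_cb);
    curl_easy_setopt(easy, CURLOPT_DEBUGDATA, request_sent_flag);
}
```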
You can check this example in the documentation:
https://curl.haxx.se/libcurl/c/hiperfifo.html
It combines libevent and libcurl.
When running, the program creates the named pipe "hiper.fifo". Whenever there is input into the fifo, the program reads the input as a list of URLs and creates some new easy handles to fetch each URL via the curl_multi "hiper" API.
The fifo buffer is handled almost instantly, so you can even add more URLs while the previous requests are still being downloaded.
Then libcurl will download all the easy handles asynchronously via curl_multi_socket_action, so control returns to the application.
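In outline, the pattern the example follows looks roughly like this. It is heavily simplified: in hiperfifo.c the calls to curl_multi_socket_action are actually driven by libevent socket and timer callbacks registered through CURLMOPT_SOCKETFUNCTION and CURLMOPT_TIMERFUNCTION, and the add_download helper below is illustrative, not the example's exact code.

```cpp
#include <curl/curl.h>

// Queue a new transfer on an existing multi handle and let libcurl make
// progress immediately; it sends what it can without blocking and hands
// control straight back to the caller.
void add_download(CURLM* multi, const char* url)
{
    CURL* easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, url);
    curl_multi_add_handle(multi, easy);   // transfer is now queued

    int still_running = 0;
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);
}
```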
I am writing a simple web server in C++ that handles long-lived connections. However, I need to reload my web server from time to time. I wonder whether there is a way to hand over the established connections from one process to another so that I can keep them open across a reload.
Would it be enough to pass only the file descriptors? What would happen to the connection state?
Any similar open source project that does the same thing?
Any thoughts or ideas?
Thanks,
I really have no idea whether this is possible, but I think not. If you fork(), the child will "inherit" the descriptors, but I don't know whether they behave like they should (though I suspect that they do). And with forking, you can't run new code (can you?). Simple descriptor numbers are process-specific, so just passing them to a new, unrelated process won't work either, and they will be closed when your process terminates anyway.
One solution (in the absence of a simpler one) is to break your server into two processes:
Front-end: A very simple process that just accepts the connections, keeps them open, and forwards any data it receives to the second process, and vice versa.
Server: The real web server, that does all the logic and processing, but does not communicate with the clients directly.
The first and second processes communicate via a simple protocol. One required feature of this protocol is that it must support the second process being terminated and relaunched (a possible framing is sketched below).
Now, you can reload the actual server process without losing the client connections (since they are handled by the front-end process). And since this front-end is extremely simple and probably has very few configuration options and bugs, you will rarely need to reload it at all. (I'm assuming that you need to reload your server process because it runs into bugs that need to be fixed, or because you need to change configuration and such.)
Another important and helpful feature that this system can have is the ability to transition between server processes "gradually". That is, you already have a front-end and a server running, but you decide to reload the server. You launch another server process that connects to the front-end (while the old server is still running and connected), and the front-end process forwards all the new client connections to the new server process (or even all the new requests coming in on the existing client connections). When the old server finishes processing all the requests it has in flight, it gracefully and cleanly exits.
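For illustration, one possible framing for the front-end/server protocol might look like the sketch below. The Frame struct, the opcodes and the wire layout are all assumptions made up for this example, not an established protocol; the only essential idea is that the front-end assigns each client connection a stable id, so a freshly started server process can pick up where the old one left off.

```cpp
#include <cstdint>
#include <vector>

// Opcodes for messages exchanged between front-end and server (illustrative).
enum class Op : std::uint8_t {
    NewConnection   = 1,  // front-end accepted a client
    Data            = 2,  // bytes flowing in either direction
    CloseConnection = 3,  // client hung up / server wants to close
};

struct Frame {
    Op op;
    std::uint64_t connection_id;         // assigned by the front-end, never reused
    std::vector<std::uint8_t> payload;   // empty for NewConnection/CloseConnection
};

// Serialize a frame as: 1-byte op, 8-byte little-endian id,
// 4-byte little-endian payload length, then the payload itself.
std::vector<std::uint8_t> serialize(const Frame& f)
{
    std::vector<std::uint8_t> out;
    out.push_back(static_cast<std::uint8_t>(f.op));
    for (int i = 0; i < 8; ++i)
        out.push_back(static_cast<std::uint8_t>(f.connection_id >> (8 * i)));
    const std::uint32_t len = static_cast<std::uint32_t>(f.payload.size());
    for (int i = 0; i < 4; ++i)
        out.push_back(static_cast<std::uint8_t>(len >> (8 * i)));
    out.insert(out.end(), f.payload.begin(), f.payload.end());
    return out;
}

// On reload, the new server process connects to the front-end, which replays
// a NewConnection frame for every client it is still holding, then resumes
// forwarding Data frames as usual.
```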
As I said, this is a solution you might want to try only if nothing easier and simpler can be found.
Ok, strange setup, strange question. We've got a Client and an Admin web application for our SaaS app, running on ASP.NET 2.0/IIS 6. The Admin application can change options displayed in the Client application. When those options are saved in the Admin, we call a web service on the Client, from the Admin, to flush our cache of the options for that specific account.
Recently we started giving our Client application more than one worker process, which causes the cache of options to be cleared on only one of the currently running worker processes.
So, I obviously have other avenues for fixing this problem (though input is appreciated), but my question is: is there any way to target/iterate through each worker process via a web request?
I'm making some assumptions here for this answer....
I'm assuming the client app is using one of the .NET caching classes to store your application's options?
When you say 'flush' do you mean flush them back to a configuration file or db table?
Because the cache objects and data won't be shared between processes, you need a mechanism to signal to the code running on the other worker process that it needs to re-read its options into its cache, or you need to force the process to restart (which is not exactly convenient and most likely undesirable).
If you don't have access to the client source to modify it to watch either the options config file or the DB table (say, using a SqlCacheDependency), I think you're kinda stuck with this behaviour.
I have full access to both the Admin and the Client. By cache, I mean .NET's Cache object, and by flush I mean removing the item from the Cache object.
I'm aware that the worker processes don't share cache data. That's sort of my conundrum.
The system is the way it is to remove the need to hit SQL for every new session that comes in. So I'm trying to find a solution that can just tell each worker process that the cache needs to be cleared without getting SQL involved.