I have a Django web application, and I was wondering if it is possible to have nginx propagate the client abort/close to uwsgi/Django.
Basically I know that nginx is aware of the premature abort/close, because uwsgi_ignore_client_abort defaults to "off" and you get 499 errors in your nginx logs when requests are aborted/closed before the response is sent. Once uwsgi finishes processing the request, it throws an "IO Error" when it tries to return the response to nginx.
Setting uwsgi_ignore_client_abort to "on" just makes nginx ignore the abort/close, and removes the uwsgi "IO Errors" because uwsgi can still write back to nginx.
My use case is that I have an application where people page through some AJAX results very quickly, so if they quickly page through I abort the pending AJAX request for the page that they skipped; this keeps the client clean and efficient. But it does nothing for the server side (uwsgi/Django), which still has to process every single request even if nothing will be waiting for the response.
Now obviously there may be certain pages where I don't want the request to be prematurely aborted for any reason, but I use celery for long-running requests that may fall into that category.
So is this possible? uwsgi's harakiri setting makes me think that it is at some level... I just can't figure out how to do it.
My use case is that I have an application where people page through some AJAX results very quickly, so if they quickly page through I abort the pending AJAX request for the page that they skipped; this keeps the client clean and efficient.
Aborting an AJAX request on the client side is done through XMLHttpRequest.abort(). If the request has not yet been sent out when abort() is called, then the request won't go out. But if the request has been sent, the server won't know that the request has been aborted. The connection won't be closed, there won't be any message sent to the server, nothing. If you want the server to know that a request is no longer needed, you basically need to come up with a way to identify requests so that when you make the initial request you get an identifier for it. Then, through another AJAX request you could tell the server that an earlier request should be cancelled. (If you search questions about abort() like this one and search for "server" you'll find explanations saying the same.)
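To make that idea concrete, here is a minimal sketch of what the server side of such a cancellation scheme could look like in Django. Everything here is illustrative: the view names, the cache-key scheme, and the expensive_result_chunks() generator are assumptions, not anything from the question.

    # The client generates a request_id for each AJAX call and can later POST the
    # same id to cancel_view; the slow view checks the flag between units of work.
    from django.core.cache import cache
    from django.http import JsonResponse

    def results_view(request):
        request_id = request.GET["request_id"]
        results = []
        for chunk in expensive_result_chunks(request):   # hypothetical generator doing the real work
            if cache.get("cancelled:%s" % request_id):
                return JsonResponse({"cancelled": True})
            results.append(chunk)
        return JsonResponse({"results": results})

    def cancel_view(request):
        # The follow-up AJAX request lands here and marks the earlier request as unwanted.
        cache.set("cancelled:%s" % request.POST["request_id"], True, timeout=60)
        return JsonResponse({"ok": True})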
Note that uwsgi_ignore_client_abort is something that deals with connection closures at the TCP level. That's a different thing from aborting an AJAX request. There is generally no action you can take in JavaScript that will entail closing a TCP connection. The browser optimizes the creation and destruction of connections to suit its needs. Just now, I did this:
I used lsof to check whether any process had a connection to example.com. There were none. (lsof is a *nix utility that allows listing open files. Network connections are "files" in *nix.)
I opened a page to example.com in Chrome. lsof showed the connection and the process that opened it.
Then I closed the page.
I polled with lsof to see if the connection I identified earlier was still open. It stayed open for about one minute after I closed the page, even though there was no real need to keep the connection open.
And no amount of fiddling with uwsgi settings will make it aware of aborts performed through XMLHttpRequest.abort().
The use-case scenario you gave was one where users were paging fast through some results. I can see two possibilities for the description given in the question:
The user waits for a refresh before paging further. For instance, Alice is looking through a list of user names sorted alphabetically for user "Zeno", and each time a new page is shown, she sees the name is not there and pages down. In this case, there's nothing to abort because the user's action is dependent on the request having been handled first. (The user has to see the new page before making a decision.)
The user just pages down without waiting for a refresh. Alice again is looking for "Zeno" but she figures it's going to be on the last page, so click, click, click she goes. In this case, you can debounce the requests made to the server. When the next-page button is pressed, increment the number of the page that should be shown to the user but don't send the request right away. Instead, wait for a small delay after the user stops clicking the button and then send a single request with the final page number, so you make one request instead of a dozen. Here is an example of a debounce performed for a DataTables search.
Now obviously there may be certain pages where I don't want the request to be prematurely aborted for any reason.
This is precisely the problem with deciding this one way or the other.
Obviously, you may not want to continue spending system resources processing a connection that has since been aborted, e.g., an expensive search operation.
But then maybe the connection was important enough that it still has to be processed even if the client has disconnected.
E.g., the very same expensive search operation, but one that's actually not client-specific, and will be cached by nginx for all subsequent clients, too.
Or maybe an operation that modifies the state of your application — you clearly wouldn't want your application to end up in an inconsistent state!
As mentioned, the problem is with uWSGI, not with NGINX. However, you cannot have uWSGI automatically decide what your intention was without revealing that intention to uWSGI yourself.
And how exactly will you reveal your intention in your code? A whole bunch of programming languages don't really support multithreaded and/or asynchronous programming models, which makes it entirely non-trivial to cancel operations.
As such, there is no magic solution here. Even concurrency-friendly languages like Go have issues around the WithCancel context — you may have to pass it around in every function call that could possibly block, making the code very ugly.
Are you already doing the above context passing in Django? If not, then the solution is ugly but very simple — any time you can clearly abort the request, check whether the client is still connected with uwsgi.is_connected(uwsgi.connection_fd()):
http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html
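As a rough sketch of that check inside a view (the expensive_search_steps() helper is hypothetical, and the uwsgi module is only importable when the app actually runs under uWSGI, hence the guard):

    from django.http import HttpResponse, JsonResponse

    try:
        import uwsgi
    except ImportError:          # e.g. running under the Django dev server
        uwsgi = None

    def search_view(request):
        results = []
        for step in expensive_search_steps(request):     # hypothetical unit of work
            if uwsgi and not uwsgi.is_connected(uwsgi.connection_fd()):
                # nginx saw the client abort (the 499) and closed the uwsgi
                # connection; nobody will read the response, so stop working.
                return HttpResponse(status=204)
            results.append(step)
        return JsonResponse({"results": results})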
Related
Occasionally our Postgres database crashes and it can only be solved by restarting the server. We have tried increasing max_connections and Django's CONN_MAX_AGE. Also, I am trying to learn how to set up PgBouncer. However, I am convinced the underlying issue must be something else which is fixable.
I am trying to find what that issue is. The problem is I wouldn't know where or what to begin to look at. Here are some pieces of information:
The errors are always OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections and OperationalError: could not write to hash-join temporary file: No space left on device. I think this is caused by opening too many database connections, but I have never managed to catch this going down live so that I could inspect pg_stat_activity and see what actual connections were active.
Looking at the error log, the same URL shows up for the most part. I've checked the nginx log and it appears on many different lines, meaning the request is being made multiple times at once rather than Django logging the same error multiple times. All these requests receive a 499 Client Closed Request response. In addition to this same URL, there are of course sprinkled requests from other users trying to access our site.
I should mention that the logic the server processes when the URL in question is requested is pretty simple and I see nothing suspicious that could cause a database crash. However, for some reason, the page loads slowly in production.
I know this is very vague and very little to work with, but I am not used to sysadmin work; I only studied this kind of thing in college, and so far I've only worked as a developer.
Those two problems are mostly independent.
Running out of connection slots won't crash the database. It is just a sign that you either don't use a connection pool or you have a connection leak, i.e. you forget to close transactions in your code.
Running out of space will crash your database if the condition persists.
I assume that the following happens in your system:
Because someone forgot a couple of join conditions or for some other reason, some of your queries take a very long time.
They also produce a lot of (perhaps intermediate) results that are cached in temporary files that eventually fill up the disk. This out-of-space condition is cleared as soon as the query fails, but it can crash the database.
Because these queries take long and block a database session, your application keeps starting new sessions until it reaches the limit.
Solution:
Find and fix these runaway queries. As a stop-gap, you can set statement_timeout to terminate all statements that take too long (a configuration sketch follows this list).
Use a connection pool with an upper limit so that you don't run out of connections.
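One hedged way to apply both points from Django's side; the database name and the timeout values are examples, not recommendations:

    # settings.py
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "mydb",                     # example name
            # Reuse connections instead of opening a new one per request.
            "CONN_MAX_AGE": 60,
            # Ask PostgreSQL to kill any statement running longer than 30 seconds.
            "OPTIONS": {"options": "-c statement_timeout=30000"},
        }
    }

The same statement_timeout can also be set per role or per database directly in PostgreSQL if you prefer to enforce it outside the application.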
We are using Django 1.3.1 and Postgres 9.1
I have a view which just fires multiple selects to get data from the database.
The Django documentation mentions that when a request is completed, a ROLLBACK is issued if only SELECT statements were fired during the view. But I am seeing a lot of "idle in transaction" entries in the log, especially when I have more than 200 requests. I don't see any COMMIT or ROLLBACK statements in the Postgres log.
What could be the problem? How should I handle this issue?
First, I would check out the related post What does it mean when a PostgreSQL process is “idle in transaction”? which covers some related ground.
One cause of "Idle in transaction" can be developers or sysadmins who have entered "BEGIN;" in psql and forgot to "commit" or "rollback". I've been there. :)
However, you mentioned your problem is related to having a lot of concurrent connections. It sounds like investigating the "locks" tip from the post above may be helpful to you.
A couple more suggestions: this problem may be secondary. The primary problem might be that 200 connections is more than your hardware and tuning can comfortably handle, so everything gets slow, and when things get slow, more things are waiting for other things to finish.
If you don't have a reverse proxy like Nginx in front of your web app, consider adding one. It can run on the same host without additional hardware. The reverse proxy will serve to regulate the number of connections to the backend Django web server, and thus the number of database connections -- I've been here before with too many database connections, and this is how I solved it!
With Apache's prefork model, there is a 1-to-1 correspondence between the number of Apache workers and the number of database connections, assuming something like Apache::DBI is in use. Imagine someone connects to the web server over a slow connection. The web and database servers take care of the request relatively quickly, but then the request is held open on the web server unnecessarily long as the content is dribbled back to the client. Meanwhile, the database connection slot is tied up.
By adding a reverse proxy, the backend server can quickly deliver a reply back to the reverse proxy and then free the backend worker and database slot. The reverse proxy is then responsible for getting the content back to the client, possibly holding its own connection open for longer. You may have 200 connections to the reverse proxy up front, but you'll need far fewer workers and db slots on the backend.
If you graph the db slots with MRTG or similar, you'll see how many slots you are actually using, and can tune down max_connections in PostgreSQL, freeing those resources for other things.
You might also look at pg_top to help monitor what your database is up to.
I understand this is an older question, but this article may describe the problem of idle transactions in Django.
Essentially, Django's TransactionMiddleware will not explicitly COMMIT a transaction if it is not marked dirty (usually triggered by writing data). Yet, it still BEGINs a transaction for all queries even if they're read only. So, pg is left waiting to see if any more commands are coming and you get idle transactions.
The linked article shows a small modification to the transaction middleware to always commit (basically remove the condition that checks if the transaction is_dirty). I'll be trying this fix in a production environment shortly.
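For reference, here is a sketch of the kind of tweak the article describes, written against the old-style (pre-1.6) TransactionMiddleware the question is about; this is my reading of the fix, not the article's exact code:

    from django.db import transaction
    from django.middleware.transaction import TransactionMiddleware

    class AlwaysCommitTransactionMiddleware(TransactionMiddleware):
        def process_response(self, request, response):
            if transaction.is_managed():
                # Commit even for read-only requests, so the BEGIN issued for the
                # SELECTs is closed and the connection doesn't sit "idle in transaction".
                transaction.commit()
                transaction.leave_transaction_management()
            return response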
I'm putting together a website that will track user-defined events with time limits. Every user would be free to create events, and when the time limit expired, the server would need to take some action based on the outcome of the event. The specific component I'm struggling with is the time-keeping: think of eBay's auction clock -- it's set to expire at a certain time, clearly runs server-side, and takes some action when the time runs out. Searches for a "server-side timer," unfortunately, just bring back results for timers that get the time from the server instead of the client. :(
The most obvious solution is to run a script on the server, some program that would watch all the clocks and take action when any of them expired. Tragically, I'll be using free web hosting, and sincerely doubt that I'll be able to find someone who'll let me run arbitrary stuff on their servers.
The solutions that I've looked into:
Major concept option 1: persuade each user's browser to run the necessary timers (trivial javascript), and when the timers expire, take necessary action. The problem with this approach is obvious: there could be hundreds, if not thousands, of simultaneous expiring timers (they'll tend to expire in clusters), and the worst case is that every possible user could be viewing their timer expire. That's a server overload waiting to happen at the worst possible instant.
Major concept option 2: have one really trusted browser, say, a user logged in to the website as "cron" which could run all of the timers at once. The action would all happen in that browser's javascript, and would work great, as long as that browser never crashed, that machine never failed, and that internet connection never went down.
As you can see, I feel like I'm barking up the wrong forest on this problem. Some other ideas that have presented themselves:
AJAX: I'm not seeing anything here that will do quite what I need. It's all browser-run stuff, nothing like a server-side process that could run independent of the user's browser.
PHP: Runs neatly on the server, but only in response to client requests. I'm not seeing any clean way to make PHP fork off a process and run a timer independent of the user's browser.
JS: same problems as PHP, but easier to read. ;)
Ruby: There may be some multi-threading with Ruby, but it isn't readily apparent to me. Would it be possible to have each user's browser check to see if a timer process was running for their event, and spawn a new server-side ruby process if it wasn't?
I'm wide open for ideas -- I've started playing with concepts in JS and PHP, but I'm not tied to any language, particularly. The only constraint, really, is that I won't own the server that I'm running the site on, so I can't just run a neat little local process that does what I need it to do. :(
Any thoughts? Thanks in advance,
Dan
ASP.NET has multi-threading. You can have a static variable to collect the event data, and use a thread to do whatever is needed when the time comes. Afterwards you can empty the static variable so it's ready for future use.
http://leedale.wordpress.com/2007/07/22/multithreading-with-aspnet-20/
You might want to take a look at the Quartz scheduler for Java which also has a .NET version. With a friendly open source license (Apache 2.0) this is probably a very good starting point.
If you can control cron jobs, which at least I could on HostPapa's shared hosting, you could run a PHP file every minute (cron's smallest interval) which checks the timers and takes action based on them.
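The question is language-agnostic, so here is the same cron idea expressed as a Django management command; the Event model and its fields are invented for the sketch:

    # myapp/management/commands/process_expired_events.py
    from django.core.management.base import BaseCommand
    from django.utils import timezone

    from myapp.models import Event      # hypothetical model with expires_at / handled fields

    class Command(BaseCommand):
        help = "Take the required action for every event whose timer has expired"

        def handle(self, *args, **options):
            due = Event.objects.filter(expires_at__lte=timezone.now(), handled=False)
            for event in due:
                event.take_action()      # whatever the event's outcome requires
                event.handled = True
                event.save(update_fields=["handled"])

A crontab line such as "* * * * * python /srv/site/manage.py process_expired_events" (the path is an assumption) would then run it once a minute.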
I would suggest AJAX anyway. What we did on a game server was to emulate "server connects to client" via an AJAX request to the server without any time-out (an asynchronous, hanging connection). Basically you create one extra connection for each client that hangs on the server and waits for the server to take a self-invoked action. After the action is done you immediately start a new hanging connection, so there is one hanging all the time (and the server can talk to your client any time it wants). You can send JavaScript code from the server that decides what will happen next. On the server side you can treat clients that have these hanging connections as valid, and of course run your timers on the server.
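A very rough sketch of the server side of such a hanging (long-poll) request, with a hypothetical wait_for_event() helper that blocks until the server wants to talk to this client; note that with one synchronous worker per hanging connection this does not scale far:

    from django.http import JsonResponse

    def poll(request):
        event = wait_for_event(request.user, timeout=30)   # hypothetical: blocks until an event or timeout
        if event is None:
            # Nothing happened; the client immediately re-opens the hanging connection.
            return JsonResponse({"type": "timeout"})
        return JsonResponse({"type": event.kind, "payload": event.payload})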
I have an admin-controlled feature (importing a database) that can take some time to finish, so I want to show some feedback to the user during that time - for example a progress bar, or just some messages. Even sending the page in parts during the long action would suffice.
What would be the simplest way to do it in Django?
Ajax Polling -- Using a client-side timer, you constantly poll the server about its status. The process is like this: the user configures the database details and hits 'upload'. The file transfers, and the page request starts an asynchronous process on the server to perform the database import. Clicking upload also starts a client-side timer which at regular intervals sends an AJAX request to the server to ask about its progress. The server returns JSON and the client-side script figures out what it wants to do with it (a sketch of the server-side view follows this list).
COMET -- I'm not as familiar with this, but traditional AJAX works by the client sending a request to the server; it's known as 'pull' communication. In COMET, it's push: the server pushes data back to the client about its progress, even if the client didn't ask for it. This creates less strain on the server than polling. Google turns up some results for people using COMET with Django.
Reverse AJAX -- Similar to COMET. Reverse Ajax with Django.
(I apologize, I know the least about the last two, but I figured you'd at least like to know they exist.)
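Here is the sketch promised for option 1: a minimal Django view that the client-side timer can poll, assuming the import job (however you run it) writes its percentage into the cache under a made-up key scheme:

    from django.core.cache import cache
    from django.http import JsonResponse

    def import_progress(request, job_id):
        # The import process is assumed to call
        # cache.set("import-progress:%s" % job_id, percent) as it works.
        percent = cache.get("import-progress:%s" % job_id)
        if percent is None:
            return JsonResponse({"state": "unknown"}, status=404)
        return JsonResponse({"state": "running", "percent": percent})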
There's no way to do this without some sort of client-side scripting, i.e. Ajax. You need something that will poll the server at regular intervals and show a response to the user. There's a snippet that shows how this might be done.
Of course, to make that possible you'll also have to farm off the import itself to an off-line process. This would do the import, and record its progress somewhere regularly (in a file, or the database) so that the Ajax can query it. A good way of doing this might be to use celery, the Django-based distributed task queue.
Finally, you'll need a simple view that the Ajax will call, which will query the long-running process (or look at the progress record that it creates) and report back to the client.
So, fairly complicated.
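To make the moving parts a bit more concrete, here is a hedged sketch of the off-line side: a celery task that performs the import and records its progress in the cache, where a simple polling view (like the one sketched above) can read it. The import_row() helper and the key scheme are assumptions:

    from celery import shared_task
    from django.core.cache import cache

    @shared_task
    def run_import(job_id, rows):
        total = len(rows)
        for i, row in enumerate(rows, start=1):
            import_row(row)                                  # hypothetical per-row import step
            cache.set("import-progress:%s" % job_id,
                      int(100 * i / total), timeout=3600)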
Ok so coming in from a completely different field of software development, I have a problem that's a little out of my experience. I'll state it as plainly as possible without giving out confidential details:
I want to make a server that "does stuff" when requested by a client on the same network. The client will most likely be a back-end to a content management system.
The request consists of some parameters, an input file and several output files.
The files are quite large, from 10MB - 100MB of data that must be processed (possibly more). The client can specify destination for output files.
The client needs to be able to find out the status of the request - eg position in queue, percent complete. And obviously when and where to pick up output.
So, my questions are - What is a good method for the client and server to communicate? Should the client poll the server, or provide a "callback" somehow for status updates?
At this point the implementation platform is completely open - anything from C to scripting languages like Ruby are available (at either end), my main issue is how the communication should occur.
My first thought was to set up some web services between the machines. But web services aren't going to be too friendly or efficient with the large files.
Simple approach:
ServerA hits a web method on ServerB, "BeginProcess". The response gives you back an FTP location, username/password, and a ticket number.
ServerA delivers the files to FTP location.
ServerA regularly polls a webmethod "GetProcessStatus(ticketNumber)", possible return values: Awaiting files, Percent complete, Finished
Slightly more complicated approach, without the polling.
ServerA hits a web method on ServerB, "BeginProcess(postUrl)", sending along a URL you want status updates POSTed to. Response: an FTP location, username/password, and a ticket number.
ServerA delivers the files to FTP location.
ServerB POSTs updates to that URL on ServerA every XXX% completed.
For extra resilience you would keep the GetProcessStatus in case something gets lost in the ether...
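A loose sketch of the polling side of the simple approach, assuming plain HTTP/JSON endpoints rather than SOAP; every name below mirrors the steps above but is otherwise invented, and upload_via_ftp() is a hypothetical helper:

    import time
    import requests

    def process_file(path):
        job = requests.post("http://serverb/api/BeginProcess").json()
        upload_via_ftp(path, job["ftp_host"], job["username"], job["password"])
        while True:
            status = requests.get("http://serverb/api/GetProcessStatus",
                                  params={"ticket": job["ticket"]}).json()
            if status["state"] == "Finished":
                return status["output_urls"]
            time.sleep(30)      # poll at a modest interval so ServerB isn't hammered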
Files that will be up to 100MB aren't a good choice for a webservice, since you run a risk of the HTTP session timing out before you have completed your processing.
Having a web service for checking the status of these jobs would be more ideal. Handle the file transfers via FTP or whatever file transfer method you choose, and poll a web service for updates on status. When the process is completed, you might have an output file URL returned that can be downloaded.