I have had this issue for a while now: starting Django's internal server (runserver) becomes nearly unusable because there are so many errors like this reported in the console:
Exception ignored in: <generator object SQLCompiler.setup_query.<locals>.<genexpr> at 0x2F2DE360>
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\django\db\models\sql\compiler.py", line 39, in <genexpr>
if all(self.query.alias_refcount[a] == 0 for a in self.query.alias_map):
SystemError: error return without exception set
Basically, these are generators that were never consumed, and Python (at least 3.5 and up) reports this to the console. And there are MANY!
This hogs the Python process serving the app, as well as the PyCharm process trying to display all these errors in the console view. On a bad day, the app runs at something like 10% of its normal speed because of this.
I am currently mitigating the problem by installing a filter on stderr, which at least makes the console output usable again. It helps with CPU usage as well, but the exceptions still happen and still trigger PyCharm's hooks. As a result, CPU usage is still high, though not insane any more.
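For reference, the filter is essentially this (a minimal sketch; the marker strings are my own choice, and since the interpreter emits these reports over several write() calls, a real filter may need to buffer lines to suppress whole tracebacks):

import sys

_NOISE_MARKERS = (
    "Exception ignored in",
    "error return without exception set",
)

class FilteredStderr(object):
    """Wraps a stream and drops writes containing known noise markers."""
    def __init__(self, stream):
        self.stream = stream

    def write(self, text):
        if not any(marker in text for marker in _NOISE_MARKERS):
            self.stream.write(text)

    def flush(self):
        self.stream.flush()

sys.stderr = FilteredStderr(sys.stderr)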
How can I get rid of this permanently? Any interpretation of "get rid" accepted in proposed solutions.
This seems to be an issue with PyCharm.
Try setting the environment variable PYDEVD_USE_FRAME_EVAL=NO, as suggested in this ticket on the PyCharm Issue Tracker.
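The usual place to set it is the Environment variables field of your PyCharm Run/Debug configuration, so the debugger sees it at startup. Setting it from code only helps if it runs early enough; as a sketch (an assumption on my part, and it may be too late if pydevd has already initialised):

# at the very top of manage.py
import os
os.environ.setdefault("PYDEVD_USE_FRAME_EVAL", "NO")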
Related
One particular function in my Django server performs a lot of computation during a request and that computation involves a lot of local variables. When an error is thrown during the computation, Django spits out all the function's local variables onto the error page. This process takes Django 10-15 seconds (presumably because of the number/size of my local variables)...
I want to disable the error page for just that particular function, so I can get to see the simple stacktrace faster.
I tried using the sensitive_variables decorator but that doesn't do anything when DEBUG=True.
I also tried simply catching the exception and throwing a new exception:
try:
    return mymodule.performComputations(self)
except Exception as e:
    raise Exception(repr(e))
but Django still manages to output all the local variables from the performComputations method.
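I suspect this happens because the original exception survives as the new exception's __context__, so the debug page can still walk its frames. A variant I'm considering (untested) is suppressing the chain with from None:

try:
    return mymodule.performComputations(self)
except Exception as e:
    # 'from None' sets __suppress_context__, so the traceback reporter
    # should not follow the chain back to the frames holding the large locals
    raise Exception(repr(e)) from None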
My only option right now is to set DEBUG=False globally anytime I'm actively developing that particular code flow / likely to encounter errors. Any other suggestions? Thanks!
I am editing a Django 2.2 application and I received an Exception:
TypeError at /myappname/myurl
'RelatedManager' object is not iterable
Exception Location: /myprojectlocation/myappname/views.py in myview, line 41
Whoops, I accidentally typed .objects instead of .objects.all(). Easy fix.
But now, no matter how I edit views.py, I keep receiving the exact same exception, always pointing at line 41. I even commented line 41 out entirely, and still get the same exception. The exception Traceback shows the updated line 41, so I know I am definitely editing the right file and the changes are being saved, but the actual exception itself just keeps complaining about that RelatedManager no matter what I edit.
I've restarted the webserver and I've cleared all my browsing data. So what on earth is still "remembering" the old code that I've edited many times since?
Update: Everything is fine on the django development server (manage.py runserver). So apparently it's more of a uWSGI/nginx problem than anything I'm doing wrong with my django files.
Turns out that somehow my tmp/uwsgi.pid file wound up with the wrong process ID in its contents, so I was unwittingly failing to stop the master uWSGI process when I thought I was restarting the Django application.
I used pstree -p to see my process hierarchy and get the correct process ID of the master uWSGI process, edited uwsgi.pid to contain that correct ID, used pkill -9 -f /path/to/my/application to kill the process, and finally restarted my application again.
Possible Duplicate:
Django: IE doesn't load localhost or loads very SLOWLY
I just set up a clean dev environment on a computer running Windows 7 64-bit and installed all the latest officially released 64-bit versions of my tools including Django 1.3.1 and Python 2.7.2. I also got all the OS updates from MS and the computer vendor (HP), which I assume include fixes for IE9 bugs.
I am seeing exactly the same problem as reported 6 months ago in this StackOverflow question originally posed back on May 18 2011:
Django: IE doesn't load localhost or loads very SLOWLY
That is, Firefox works fine but IE9 hangs. The Django dev server, which appears to run single-threaded, seems to finish passing the response to the client and then sits waiting for the next request. IE9, however, seems to think it has not received the complete response (even though it has, including the static pages referenced in the main page, judging from the fact that the content reaches the cache and can be retrieved via an “X” disconnect followed by a refresh).
My question is, is there a definitive resolution for this problem? In a response to the original question dated Aug 23 2011 Catalin Iacob says “I filled ticket 15178 and just confirmed that using the multithreaded development server fixes it. The fix is in revision 16427.” I am running the final Django 1.3.1 but I don’t know what its revision number is. Is the fix in question in 1.3.1? Do I have to enable multithreading with an option in settings.py or whatever?
EDIT: Thanks to user1043838 and nagisa and maybe others to come for posting concrete constructive solutions to the problem. I will try the fix that goes into settings.py because it's non-invasive and easy to back out, but in general I want to work in as vanilla an environment as possible (a Windows env, that is). The problem bothers me but is far from a show-stopper at this point; Firefox + Firebug etc. is better for testing anyway, and if the cause is not outdated or misconfigured software then I can deal.
Disclaimer: I don't use Windows.
I will post the multithreaded-server fix as an answer, since I have a hunch that this is purely a connection-concurrency problem.
In your project's manage.py file, add
import settings

# Multithreaded server...
if settings.DEBUG:
    import SocketServer
    import django.core.servers.basehttp

    django.core.servers.basehttp.WSGIServer = \
        type('WSGIServer',
             (SocketServer.ThreadingMixIn,
              django.core.servers.basehttp.WSGIServer,
              object),
             {})
just before the if __name__ == "__main__": line. Then rerun your server using the same manage.py runserver command, and it should run as a multithreaded server.
But be aware: it is even less stable than the single-threaded server, and it sometimes fails to serve files at all.
It seems to be working fairly well on Internet Explorer 10 Developer Preview on Windows 8 x64 Developer Preview with Django version 1.4 pre-alpha SVN-17202 running Satchmo version 0.9.2-pre hg-unknown.
Maybe upgrade?
How can I get Django 1.0 to write all errors to the console or a log file when running runserver in debug mode?
I've tried using a middleware class with process_exception function as described in the accepted answer to this question:
How do you log server errors on django sites
The process_exception function is called for some exceptions (e.g. assert(False) in views.py), but it is not called for other errors like ImportErrors (e.g. import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors? But then I would expect runserver to complain if it were a compile-time error, and it doesn't.
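For reference, the middleware is essentially this (a sketch of what that answer describes, in the old-style middleware form of the Django 1.0 era):

# myapp/middleware.py
import traceback

class ExceptionLoggingMiddleware(object):
    def process_exception(self, request, exception):
        traceback.print_exc()  # runs inside Django's except block
        return None            # fall through to normal error handling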
I've watched Simon Willison's fantastic presentation on Django debugging (http://simonwillison.net/2008/May/22/debugging/) but I didn't see an option that would work well for me.
In case it's relevant, I'm writing a Facebook app and Facebook masks HTTP 500 errors with their own message rather than showing Django's awesomely informative 500 page. So I need a way for all types of errors to be written to the console or file.
Edit: I guess my expectation is that if Django can return a 500 error page with lots of detail when I have a bad import (ImportError) in urls.py, it should be able to write the same detail to the console or a file without having to add any additional exception handling to the code. I've never seen exception handling around import statements.
Thanks,
Jeff
It's a bit extreme, but for debugging purposes, you can turn on the DEBUG_PROPAGATE_EXCEPTIONS setting. This will allow you to set up your own error handling. The easiest way to set up said error handling would be to override sys.excepthook. This will terminate your application, but it will work. There may be things you can do to make this not kill your app, but this will depend on what platform you're deploying this for. At any rate, never use this in production!
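A sketch of what that could look like (the hook body is an assumption; substitute your own handling):

# settings.py
DEBUG_PROPAGATE_EXCEPTIONS = True  # debugging only, never in production

# anywhere that is imported early (settings.py works)
import sys
import traceback

def _dump_exception(exc_type, exc_value, tb):
    traceback.print_exception(exc_type, exc_value, tb)

sys.excepthook = _dump_exception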
For production, you're pretty much going to have to have extensive error handling in place. One technique I've used is something like this:
>>> def log_error(func):
...     def _call_func(*args, **argd):
...         try:
...             return func(*args, **argd)
...         except:
...             print "error"  # substitute your own error handling
...     return _call_func
...
>>> @log_error
... def foo(a):
...     raise AttributeError
...
>>> foo(1)
error
If you use log_error as a decorator on your view, it will automatically handle whatever errors happened within it.
The process_exception function is called for some exceptions (e.g. assert(False) in views.py) but process_exception is not getting called for other errors like ImportErrors (e.g. import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors?
In Python, all errors are run-time errors. This causes problems because these errors occur immediately when the module is imported, before your view is ever called. The first method I posted will catch errors like these for debugging. You might be able to figure something out for production, but I'd argue that you have worse problems if you're getting ImportErrors in a production app (and you're not doing any dynamic importing).
A tool like pylint can help you eliminate these kinds of problems though.
The process_exception function is called for some exceptions (e.g. assert(False) in views.py) but process_exception is not getting called for other errors like ImportErrors (e.g. import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors?
No, it's just because process_exception middleware is only called if an exception is raised in the view.
I think DEBUG_PROPAGATE_EXCEPTIONS (as mentioned first by Jason Baker) is what you need here, but I don't think you need to do anything additional (i.e. sys.excepthook, etc.) if you just want the traceback dumped to the console.
If you want to do anything more complex with the error (i.e. dump it to file or DB), the simplest approach would be the got_request_exception signal, which Django sends for any request-related exception, whether it was raised in the view or not.
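A sketch (the logger name and message are mine; got_request_exception itself is the real Django signal):

import logging
from django.core.signals import got_request_exception

logger = logging.getLogger("myproject.errors")  # hypothetical name

def log_exception(sender, request=None, **kwargs):
    # the signal is sent from inside the except block, so
    # logger.exception() picks up the active traceback
    logger.exception("Unhandled exception for %s",
                     getattr(request, "path", "<no request>"))

got_request_exception.connect(log_exception)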
The get_response and handle_uncaught_exception methods of django.core.handlers.BaseHandler are instructive (and brief) reading in this area.
without having to add any additional exception handling to the code. I've never seen exception handling around import statements.
Look around a bit more, you'll see it done (often in cases where you want to handle the absence of a dependency in some particular way). That said, it would of course be quite ugly if you had to go sprinkling additional try-except blocks all over your code to make a global change to how exceptions are handled!
First, there are very few compile-time errors that you'll see through an exception log. If your Python code doesn't have valid syntax, it dies long before logs are opened for writing.
In Django runserver mode, a "print" statement writes to stdout, which you can see. This is not a good long-term solution, however, so don't count on it.
When Django is running under Apache, however, it depends on which plug-in you're using. mod_python isn't easy to deal with. mod_wsgi can be coerced into sending stdout and stderr to a log file.
Your best bet, however, is the logging module. Put an initialization into your top-level urls.py to configure logging. (Or, perhaps, your settings.py)
Be sure that every module has a logger available for writing log messages.
Be sure that every web services call you make has a try/except block around it, and you write the exceptions to your log.
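Putting those three points together, a sketch (the file name and format are assumptions):

# settings.py (or top-level urls.py): one-time configuration
import logging
logging.basicConfig(
    level=logging.DEBUG,
    filename="django.log",  # assumption: a log file in the working directory
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

# in each module
log = logging.getLogger(__name__)

# around each web services call
try:
    result = call_remote_service()  # hypothetical call
except Exception:
    log.exception("web service call failed")
    raise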
http://groups.google.com/group/django-nashville/browse_thread/thread/b4a258250cfa285a?pli=1
If you are on a *nix system, you could write to a log file (e.g. mylog.txt) from Python and then run "tail -f mylog.txt" in the console. This is a handy way to view any kind of log in near real time.
I'm trying to test the happy-path for a piece of code which takes a long time to respond, and then begins writing a file to the response output stream, which prompts a download dialog in browsers.
The problem is that this process has failed in the past, throwing an exception after this long amount of work. Is there a way in selenium to wait-for-download or equivalent?
I could throw in a Thread.sleep, but that would be inaccurate and unnecessarily slow down the test run.
What should I do, here?
I had the same problem and invented something to solve it. While the file is downloading, the browser creates a temp file with a '.part' extension. So, as long as the temp file is still there, Python can wait 10 seconds and check again whether the file has finished downloading.
import os
from time import sleep

while True:
    if os.path.isfile('ts.csv.part'):
        sleep(10)      # still downloading
    elif os.path.isfile('ts.csv'):
        break          # download finished
    else:
        sleep(10)      # download has not started yet
driver.close()
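A variant with a time limit, so a broken download fails the test instead of hanging it forever (the 60-second cap is an arbitrary assumption):

import os
import time

deadline = time.time() + 60  # arbitrary cap
while time.time() < deadline:
    if os.path.isfile('ts.csv'):
        break
    time.sleep(10)  # covers both the '.part' and not-yet-started cases
else:
    raise AssertionError("download did not finish in time")

driver.close()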
So you have two problems here:
You need to cause the browser to download the file
You need to measure when the downloaded file is complete
Neither problem can be directly solved by Selenium (yet; 2.0 may help), but both are solvable problems. The first can be solved by GUI automation toolkits, such as AutoIt. It can also be solved by simply sending an automated keypress at the OS level that simulates the Enter key (this works for Firefox; it's a little harder on some versions of Chrome and Safari). If you're using Java, you can use Robot to do that. Other languages have similar toolkits for this sort of thing.
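In Python, a comparable OS-level keypress can be sent with a library such as pyautogui (my assumption; the answer itself names Java's Robot):

import pyautogui

# confirm the browser's download dialog by simulating the Enter key
pyautogui.press("enter")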
The second issue is probably best solved with some sort of proxy solution. For example, if your browser was configured to go through a proxy and that proxy had an API, you could query the proxy with that API to ask when network activity had ended.
That's what we do at http://browsermob.com, a startup I founded that uses Selenium to do load testing. We've released some of the proxy code as open source, available at http://browsermob.com/tools.
But two problems still persist:
You need to configure the browser to use the proxy. In Selenium 2 this is easier, but it's possible to do it with Selenium 1 as well. The key is just making sure that your browser launcher brings up the browser with the right profile/settings.
There currently is no API for BrowserMob proxy to tell you when network traffic has stopped! This is a big hole in the concept of the project that I want to fix as soon as I get the time. However, if you're keen to help out, join the Google Group and I can definitely point you in the right direction.
Hope that helps you identify your various options. Best of luck!
This is a Chrome-only solution for controlling downloads with JavaScript.
Using WebDriver (Selenium 2), it can be done within Chrome's chrome:// pages, which are HTML/CSS/JavaScript:
driver.get( "chrome://downloads/" );
waitElement( By.cssSelector("#downloads-summary-text") );  // waitElement: your own explicit-wait helper
// the next javascript snippet cancels the last/current download
// if your test ends in file attachment downloading
// you'll very likely need this if you have more re-instantiated tests left
((JavascriptExecutor)driver).executeScript("downloads.downloads_[0].cancel_();");
There are other Download.prototype.functions in "chrome://downloads/downloads.js"
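Roughly the same thing from the Python bindings, for comparison (an assumption; the chrome://downloads internals are not a stable API and differ between Chrome versions):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("chrome://downloads/")
# cancel the last/current download, mirroring the snippet above
driver.execute_script("downloads.downloads_[0].cancel_();")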
This suits you if you just need to test some info note (e.g. one triggered by a file-attachment download starting), and not the file itself.
Naturally you need to control step 1, mentioned by Patrick above, and through it you control step 2 FOR THE TEST, not for the functionality of the actual file download completing or cancelling.
See also: Javascript: Cancel/Stop Image Requests, which relates to stopping the browser.
This falls under the "things that can't be automated" category. Selenium is built with JavaScript and, due to JavaScript sandbox restrictions, it can't access downloads.
Selenium 2 might be able to do this once Alerts/Prompts have been implemented, but that won't happen for a little while yet.
If you want to check for the download dialog, try AutoIt. I use it for uploading and downloading files. Using AutoIt with Selenium RC is easier.
def file_downloaded?(file)
  until File.file?(file)
    p "File downloading in progress..."
    sleep 1
  end
end
*Ruby Syntax