I'm importing a library in my code that raises:
NotImplementedError: gevent is only usable from a single thread
The library is internal, so unfortunately I can't share it.
I managed to solve this for the Django development server by adding:
import gevent.monkey
gevent.monkey.patch_all(signal=False, httplib=False)
as the first two lines of my manage.py file, after the shebang. Then I moved on to deploying on Apache with mod_wsgi and thought it would be enough to put the same two lines at the top of my wsgi.py file. This was wrong. I think I've tried everything now; does anyone have any idea what to do?!
Any ideas for a file that is executed before the wsgi.py file, where I could try the monkey patch?
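For reference, this is roughly the wsgi.py layout I tried, with the patch before any other import. A minimal sketch; the project name is a placeholder:

import gevent.monkey
# The monkey patch has to run before anything else gets imported.
gevent.monkey.patch_all(signal=False, httplib=False)

import os
from django.core.wsgi import get_wsgi_application

# "myproject" is a placeholder for the real project name.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()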
I did not manage to solve this problem, but I did manage to replace the two gevent clients with other client types, which made the problem go away...
I think this is a simple fix, but I've deployed quite a few Django apps to Heroku and I still can't figure out what's going on.
Accessing https://dundjeon-finder.herokuapp.com/ gives me a 500 error when using the browser/curl, but if I shell into the app using heroku run ./manage.py shell I can render the views no problem. My logs aren't telling me anything (just that the response is 500) despite DEBUG being set to True, and Sentry isn't receiving an error (it has previously when the database env variable was set badly), so I'm assuming it's something to do with the way the request works.
The repo is public, any help would be much appreciated! The settings file is here.
Well, it was because I was using ASGI instead of WSGI. I'm not sure why that caused the errors, but I will do some more searching.
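In case it helps anyone: with gunicorn as the server, the fix amounted to pointing the Procfile at the WSGI module rather than the ASGI one. A sketch of the relevant line (the module name is a placeholder):

web: gunicorn myproject.wsgi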
I've spent quite a bit of time searching and I'm amazed I've not found an answer to this.
I've got a basic @app.errorhandler(500) in my Flask app. As expected, I get the debugger when running with DEBUG on, and I get my custom error page when it's off. The next stage of my build, though, is serving the app from gunicorn in a Docker container, and there I just get generic "Internal Server Error" responses. I'm guessing gunicorn is handling the errors now instead of Flask? But I can't for the life of me figure out how to ask it to let Flask handle the errors (if that's even possible), or make it use custom error pages.
The final stage will be gunicorn in docker behind nginx, but I think I've found a config directive for nginx to make it let gunicorn handle the errors - I just need to get gunicorn to pass it down one level further so I can use my custom error page, and fire off notifications to relevant people with details regarding the error that occurred (which I suspect I'd lose if I did a custom error page at the gunicorn or nginx level). Help would be GREATLY appreciated.
I have a Flask + gunicorn + Nginx setup where the Internal Server Error was handled by Nginx by default. I added the following handler to let 500 errors be handled by Flask:
import logging
import traceback

from werkzeug.exceptions import InternalServerError

@app.errorhandler(InternalServerError)
def handle_500(e):
    # Log the full traceback, then return a custom error response.
    logging.error(traceback.format_exc())
    return "Internal Server Error", 500
I was able to fix it with the handler below, but I still can't properly explain why. In any case, even though the original handler (@app.app_errorhandler(500)) worked fine with the Flask dev server and not with gunicorn, I don't believe the problem should be solved in gunicorn. It needs to be solved within the code.
import logging

logger = logging.getLogger(__name__)

@app.app_errorhandler(Exception)
def handle_exception_error(err):
    """Defines how to handle Exception errors."""
    # Log the exception and return a plain 500 response.
    logger.error(err)
    return "INTERNAL_SERVER_ERROR", 500
So this code has been running for about a week; today it started throwing this error. And it is not happening at the URL level, which is where many sources say it occurs.
I am using celery, djcelery and Django 1.9.5. In my celery task, in the part where I try to connect to my DB, it throws this error. The strange part is that when I run the code line by line in a shell, it works.
All this code runs inside a virtualenv shared by two projects with exactly the same requirements. To confirm, I just checked the Django version in pip: it is 1.9.5.
Please let me know if any extra info is required.
I'm using the Django celery task queue, and it works fine in development, but not at all in WSGI production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the rabbitmq server is working. Everything also seems peachy in Django: objects are created and routed to the task manager without problems. But then their status just stays "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready" as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not wsgi.
After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
The settings.py file appears to be the same as in development mode, so if that's already set up, there are four steps to shifting to production:
Download the daemon script since it's not included in the installation:
https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
Put it in /etc/init.d/celeryd
Make a file at /etc/default/celeryd, and put the variables listed here into it:
http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration
Start the script
This solved my problem.
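For reference, a minimal /etc/default/celeryd along the lines of the linked example might look like the sketch below. Every name and path is a placeholder; adjust them for your own project:

# Placeholder values -- adapt names and paths to your project layout.
CELERYD_NODES="worker1"
CELERYD_CHDIR="/path/to/myApp"
CELERYD_OPTS="--time-limit=300 --concurrency=2"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
export DJANGO_SETTINGS_MODULE="myApp.settings"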
I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can check with ps -ef | grep celeryd. To figure out what the error is when trying to run celeryd, you might want to do the following.
In your settings.py file you can give the path to the celery log file: CELERYD_LOG_FILE = <Path to the log file>.
And while running the celeryd server you can specify the log level: manage.py celeryd -l DEBUG.
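Put together, that might look like this; the log path is just an example, not a required location:

# settings.py
CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"

# then start the worker with verbose logging:
# python manage.py celeryd -l DEBUG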
I am using Lighttpd + FastCGI + Django on a dev machine. I start FastCGI server via manage.py's command line option.
The problem is that I make changes to the sources quite often, and I need FastCGI to pick up those changes automatically, just as "./manage.py runserver" does.
Is there a command-line option for that, perhaps, or any other solutions?
Have you looked at the code in the runserver part of manage.py that does the monitoring? I see no reason you couldn't just copy that code into a custom manage.py script and have it run the lighty restart command when changes are detected.
Alternatively, you could run a separate Python program that does the restart, using a package like pyinotify:
http://trac.dbzteam.org/pyinotify
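A minimal sketch of that second idea, assuming pyinotify is installed; the watch path and restart command are placeholders:

import subprocess

import pyinotify

class Restarter(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        # Restart the FastCGI process whenever a Python source file changes.
        if event.pathname.endswith(".py"):
            # Placeholder restart command -- use whatever restarts your server.
            subprocess.call(["/etc/init.d/lighttpd", "restart"])

wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, Restarter())
wm.add_watch("/path/to/project", pyinotify.IN_MODIFY, rec=True)
notifier.loop()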
I'm wondering if anyone has ever gotten this to work? I have tried implementing a reload mechanism using django's autoreload.py; unfortunately, I get errors like this when the fork occurs:
  File "django/core/servers/fastcgi.py", line 180, in runfastcgi
    WSGIServer(WSGIHandler(), **wsgi_opts).run()
  File "build/bdist.freebsd-6.4-RELEASE-p9-amd64/egg/flup/server/fcgi_fork.py", line 140, in run
  File "build/bdist.freebsd-6.4-RELEASE-p9-amd64/egg/flup/server/preforkserver.py", line 119, in run
  File "build/bdist.freebsd-6.4-RELEASE-p9-amd64/egg/flup/server/preforkserver.py", line 450, in _installSignalHandlers
ValueError: signal only works in main thread
My ideal setup would be to reload/kill my fcgi process and start a new one each time a code change is detected, similar to how Django does it with its internal server. I also tried removing the threading from autoreload.py, which gets past this error, but then the server does not seem to run properly (still investigating that).
Perhaps someone has tried CherryPy's autoreload.py in the settings.py file for Django?