django-celery works in development, fails in wsgi production: How to debug?

I'm using the django celery task queue, and it works fine in development, but not at all in wsgi production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the rabbitmq server is working. Everything also seems peachy in django: objects are created, and routed to the task manager without problems. But then their status just stays as "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready," as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not wsgi.

After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
The settings.py file appears to be the same as in development mode. So if that's already set up, there are four steps to shifting to production:
Download the daemon script since it's not included in the installation:
https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
Put it in /etc/init.d/celeryd
Make a file in /etc/default/celeryd, and put the variables here into it (a sketch is given below):
http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration
Start the script
This solved my problem.
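For reference, a minimal /etc/default/celeryd along the lines of the linked example configuration might look like this (node names, paths, and users are placeholders; adapt to your project):
# Name of nodes to start (a single worker here)
CELERYD_NODES="w1"

# Where to chdir at start: your Django project directory
CELERYD_CHDIR="/opt/myApp/"

# How to call "manage.py celeryd_multi" (django-celery era)
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# %n will be replaced with the node name
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user
CELERYD_USER="celery"
CELERYD_GROUP="celery"
With that in place, step 4 is just sudo /etc/init.d/celeryd start.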

I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can check with ps -ef | grep celeryd. To figure out what the error is when celeryd fails to start, do the following.
In your settings.py file, give the path to the celery log file: CELERYD_LOG_FILE = <Path to the log file>
and while running the celeryd server, specify the log level: manage.py celeryd -l DEBUG.
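A quick sketch of that workflow (both commands are from the answer above):
# is a worker process alive at all?
ps -ef | grep celeryd

# if not, run one in the foreground with verbose logging so the
# startup error prints straight to the terminal
python manage.py celeryd -l DEBUG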

Related

Gunicorn is not seeing new settings

I updated one environment variable inside settings.py and restarted gunicorn, but it is seeing the old settings. Here is the command I use to start it:
nohup /opt/my_proj/.virtualenvs/my_proj/bin/python2 /opt/my_proj/.virtualenvs/my_proj/bin/gunicorn --bind=0.0.0.0:8008 --timeout=1800 --log-level=debug --error-logfile=/opt/my_proj/gunicorn_nik.log --enable-stdio-inheritance --capture-output --user=me --pythonpath=/opt/me/code/my_proj,/opt/me/code/my_proj/seqr_settings wsgi &
I printed out paths to make sure the scripts are running under my 'my_proj' directory, and checked 'gunicorn_nik.log' to verify that it points to the 'my_proj' folder. Then I removed settings.py to make sure it really is the file gunicorn picks up; the startup failed, so it is. I also tried modifying settings.py to print something from it, but nothing shows up, and logger.info doesn't print from there either.
I have several Django projects running on the same cluster node (not sure whether that matters).
It's as if Django were storing some cached file and using it, but then how come restarting gunicorn doesn't fix it? Seems weird to me. Any suggestions would be greatly appreciated.
The issue was that an environment variable set in .bashrc was overriding the value of the same name in settings.py.
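That kind of shadowing happens when settings.py reads the value from the environment with a fallback, e.g. (hypothetical variable name):
import os

# If MY_PROJ_MODE is exported in .bashrc, gunicorn inherits it and this
# line silently ignores the default you edited below.
MY_PROJ_MODE = os.environ.get("MY_PROJ_MODE", "production")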

How to force application's stdout logs through uwsgi?

I have a Django application running behind uwsgi inside a Docker container. uwsgi is started via the ENTRYPOINT and CMD parameters in the Dockerfile. I successfully connected it to a separate Nginx container and see the expected results in the browser.
So far, so good.
Now I would like to see the application logs in the Django container, but I am not able to find the right combination of Django's LOGGING setting and uwsgi switches. I just see the standard uwsgi logs, which are useless for me.
Is it possible at all? It seems to me that I must make some wrapper Bash script, like:
uwsgi --socket 0.0.0.0:80 --die-on-term --module myapp.wsgi:application --chdir /src --daemonize /dev/null
tail -f /common.log
... set LOGGING inside Django to write into /common.log and tail it to output.
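A sketch of the LOGGING half of that workaround, assuming /common.log is writable by the uwsgi user:
# settings.py -- route Django's logs into the file the wrapper tails
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "file": {"class": "logging.FileHandler", "filename": "/common.log"},
    },
    "root": {"handlers": ["file"], "level": "INFO"},
}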
Is there some more elegant solution?
Updated 2016-02-24:
Yes, it is possible. I made a mistake somewhere in my first tests. I have published a working example at https://github.com/msgre/uwsgi_logging.
Use
log-master=true
in your uwsgi-conf.ini, or
--log-master
if you pass it as a command-line parameter.
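Put together, a minimal uwsgi-conf.ini would look something like this (the module and chdir values are placeholders taken from the question):
[uwsgi]
socket = 0.0.0.0:80
die-on-term = true
module = myapp.wsgi:application
chdir = /src
# let the master process handle logging, so the app's stdout/stderr
# end up on the container's stdout instead of being swallowed
log-master = true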

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log but they're not. I have celery running on a different machine using supervisor and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers, the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some default Django logging configuration, make sure you add back the gunicorn loggers, gunicorn.error and gunicorn.access, with whatever handlers you want.
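A sketch of what re-adding them looks like in a Django LOGGING dict (the handler name and levels are assumptions):
# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": True,  # this is what silences gunicorn's loggers
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # re-declare gunicorn's loggers so they survive the config load
        "gunicorn.error": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "gunicorn.access": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}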
In /etc/supervisor/conf.d/your-app.conf you should set the log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn script, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical
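Putting both pieces together, the supervisor program section might look like this (paths and names are placeholders):
[program:your-app]
command=/path/to/venv/bin/gunicorn YourWSGIModule:app --log-level=critical
stdout_logfile=/path/to/logfile.log
redirect_stderr=true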

Permission problems prevent celery from running as daemon?

I'm currently having some trouble running celery as a daemon. I use Apache to serve my Django application, so I set the uid and gid in the celery settings to "www-data". There are two places I know of so far that need access permission: /var/log/celery/*.log and /var/run/celery/*.pid, and I have already made them owned by "www-data". However, celery won't start when I run sudo service celeryd start. If I drop the --uid and --gid options from the command, celery starts fine as user "root".
One other thing I noticed: when celery does start as "root", it puts files like celery.bak, celery.dat, and celery.dir in my CELERYD_CHDIR, which is my Django application directory. I also changed that directory to be owned by "www-data", but celery still wouldn't start. I copied all the settings files from another machine where celery runs fine, so I don't think it's a settings problem. Does anyone have any clue? Thanks.
Su to the celery user and start celery from the command line. Most likely it's an app log, not celery's, that you need permission for.
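One way to do that without a full login shell (the user and paths are guesses based on the question):
# run the worker interactively as the daemon user; the real permission
# error (often an app-level log file) prints straight to the terminal
sudo -u www-data python manage.py celeryd -l DEBUG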

Getting broker started with django-celery

This is my first time using Celery, so this might be a really easy question. I'm following the tutorial. I added BROKER_URL = "amqp://guest:guest@localhost:5672/" to my settings file. I added the simple task to my app. Now I am "running the worker process" with
manage.py celeryd --loglevel=info --settings=settings
The --settings=settings flag is needed on Windows machines, since otherwise django-celery can't find the settings.
I get
[Errno 10061] No connection could be made because the target machine actively refused it. Trying again in 2 seconds...
So it seems like the worker is not able to connect to the broker. Do I have to start the broker? Is it automatically started with manage.py runserver? Do I have to install something besides django-celery? Do I have to do something like manage.py runserver BROKER_URL?
Any pointers would be much appreciated.
You need to install a broker first. Or try using the Django DB as the broker.
But I do not recommend using the Django DB in production. Redis is OK, but it may be a problem to run it on Windows.
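If you just want to experiment without installing RabbitMQ, the Django ORM transport from that era looks roughly like this (a sketch, assuming Celery 3.x with django-celery; it is slow and only suitable for toying around):
# settings.py -- use the Django database as the broker
BROKER_URL = "django://"
INSTALLED_APPS = INSTALLED_APPS + ("kombu.transport.django",)
# then run `manage.py syncdb` to create the queue tables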