Permission problems prevent celery from running as daemon?

I'm currently having trouble running celery as a daemon. I use Apache to serve my Django application, so I set the uid and gid in the celery settings to "www-data". There are two places I know of so far that need access permissions: /var/log/celery/*.log and /var/run/celery/*.pid, and I have already made them owned by "www-data". However, celery won't start when I run sudo service celeryd start. If I drop the --uid and --gid options from the command, celery starts fine as user "root".
One other thing I noticed: when I start celery as "root", it puts some files (celery.bak, celery.dat, celery.dir) in my CELERYD_CHDIR, which is my Django application directory. I changed that directory to be owned by "www-data" as well, but celery still wouldn't start. I copied all the config files from another machine where celery runs fine, so I don't believe it's a configuration problem. Does anyone have any clue? Thanks.

su to the celery user and start celery from the command line. Most likely it's an application log file, not one of celery's, that you're missing permissions for.
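Concretely, running the worker in the foreground as the service user usually surfaces the exact file it can't open. A sketch, assuming the user is www-data and the project root is /srv/myapp (both placeholders for your own values):

cd /srv/myapp
sudo -u www-data celery worker -A myapp --loglevel=DEBUG

The startup traceback should name the path that fails, which you can then chown to www-data.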

Active Django settings file from Celery worker (how to set DJANGO_SETTINGS_MODULE dynamically)

I have already looked around a lot for this but couldn't find a good answer. I'm using Celery and Django 3.2.13, without the django-celery package, since newer versions of Celery don't require it anymore. I managed to set up tasks and execute them using Redis; everything is working as it should there. However, I am integrating this into an existing, quite large Django project. There we have a couple of Django settings files, not just one, and we run a different one depending on the environment, for instance one for local machines and one for the server. My problem is that I can't seem to track down which settings file is "active" from the celery worker, which runs the celery.py file in my project root (as the documentation specifies). There the documentation has you specify the Django settings module like this:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', "celery_test.settings.development")
Now this works, but if I move the project to my local machine I need to change it to settings.local, and that every time. Reading the settings object at runtime, like I do in standard Django code, didn't work, since the celery worker executes in a different process. So, given this situation, does anyone have an idea how to dynamically fetch the active Django settings file from the celery worker? Or perhaps pass it in as a variable when starting the worker (like Django's --settings=project.settings.local)? Thanks!
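For reference, the celery.py in question follows the standard layout from the Celery/Django docs; a sketch, with celery_test as the project name:

import os
from celery import Celery

# setdefault only fills in a value when DJANGO_SETTINGS_MODULE is not
# already set, so a value exported in the shell takes precedence.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'celery_test.settings.development')

app = Celery('celery_test')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()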
I found a command-line solution.
When initializing the celery worker on the command line, just set the environment variable before the celery command:
DJANGO_SETTINGS_MODULE='proj.settings' celery -A proj worker -l info
But I'm getting an error. My command line:
DJANGO_SETTINGS_MODULE='celery_test.settings.development' celery -A celery_test worker -l info --pool=solo
DJANGO_SETTINGS_MODULE=celery_test.settings.development : The term
'DJANGO_SETTINGS_MODULE=celery_test.settings.development' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path
is correct and try again.
At line:1 char:1
+ DJANGO_SETTINGS_MODULE='celery_test.settings.development' celery -A c ...
+ CategoryInfo : ObjectNotFound: (DJANGO_SETTINGS...ngs.development:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
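That error comes from PowerShell: the VAR=value prefix before a command is POSIX shell syntax, which PowerShell doesn't understand. In PowerShell, set the environment variable as a separate statement first:

$env:DJANGO_SETTINGS_MODULE = 'celery_test.settings.development'
celery -A celery_test worker -l info --pool=solo

In cmd.exe the equivalent would be set DJANGO_SETTINGS_MODULE=celery_test.settings.development before the celery command.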

Celeryd with Django

I've been trying to run a celeryd (celery 3.1.25) service for my Django project, but when I define
CELERYD_USER="www-data"
CELERYD_GROUP="www-data"
in /etc/default/celeryd, which seems to be the standard approach, I get the following:
celery init v10.1.
Using config script: /etc/default/celeryd
This account is currently not available.
which of course has to do with the www-data user. If I change the user to, say, "celery", I get a permission denied error on a log file owned by www-data.
Really frustrated...
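"This account is currently not available" is the message printed by /usr/sbin/nologin, which is www-data's login shell on Debian/Ubuntu; the init script switches users with su, which invokes that shell. The simpler way out is to keep CELERYD_USER="celery" and hand that user the files it was denied; a sketch, assuming the standard log and pid locations:

sudo chown -R celery:celery /var/log/celery /var/run/celery

Alternatively you can stick with www-data by making the user switch bypass the login shell (for example su -s /bin/sh), but how to do that depends on the init script version.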

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR-level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log, but they're not. I have celery running on a different machine using supervisor, and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some default Django logging configuration, make sure you add back the gunicorn loggers, gunicorn.error and gunicorn.access with whatever handlers you want.
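In settings.py terms, that means either setting disable_existing_loggers to False or re-declaring the gunicorn loggers. A sketch, where the handler names and levels are placeholders:

LOGGING = {
    'version': 1,
    # False keeps loggers that were configured before Django applies
    # this dict, including gunicorn.error and gunicorn.access.
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django': {'handlers': ['console'], 'level': 'INFO'},
        # If disable_existing_loggers must stay True, re-declare these:
        'gunicorn.error': {'handlers': ['console'], 'level': 'INFO'},
        'gunicorn.access': {'handlers': ['console'], 'level': 'INFO'},
    },
}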
In /etc/supervisor/conf.d/your-app.conf you should set the log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn script, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical

Getting broker started with django-celery

This is my first time using Celery, so this might be a really easy question. I'm following the tutorial. I added BROKER_URL = "amqp://guest:guest@localhost:5672/" to my settings file. I added the simple task to my app. Now I start the worker process with
manage.py celeryd --loglevel=info --settings=settings
The --settings=settings flag is needed on Windows machines; otherwise django-celery can't find the settings.
I get
[Errno 10061] No connection could be made because the target machine actively refused it. Trying again in 2 seconds...
So it seems like the worker is not able to connect to the broker. Do I have to start the broker? Is it automatically started with manage.py runserver? Do I have to install something besides django-celery? Do I have to do something like manage.py runserver BROKER_URL?
Any pointers would be much appreciated.
You need to install a broker first, or try using the Django database as the broker. I don't recommend the Django database in production, though. Redis is fine, but it may be a problem to run it on Windows.
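To expand on that: the broker is a separate server process, and manage.py runserver does not start it for you. On Debian/Ubuntu the RabbitMQ broker from the tutorial would be installed and started roughly like this (a sketch; on Windows you would use the RabbitMQ and Erlang installers instead):

sudo apt-get install rabbitmq-server
sudo service rabbitmq-server start

Once something is listening on port 5672, the Errno 10061 connection-refused retries should stop.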

django-celery works in development, fails in wsgi production: How to debug?

I'm using the django-celery task queue, and it works fine in development, but not at all in wsgi production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the rabbitmq server is working. Everything also seems peachy in django: objects are created, and routed to the task manager without problems. But then their status just stays as "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready," as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not wsgi.
After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
Settings.py appears to be the same as in development mode. So if that's already set up, there are four steps to shifting to production:
1. Download the daemon script, since it's not included in the installation: https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
2. Put it in /etc/init.d/celeryd
3. Make a file in /etc/default/celeryd and put the variables listed here into it: http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration
4. Start the script
This solved my problem.
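For reference, a minimal /etc/default/celeryd along the lines of the linked example; every path and name below is a placeholder rather than a required value:

# Name of the node(s) to start
CELERYD_NODES="w1"
# Where to chdir at start (the Django project root)
CELERYD_CHDIR="/opt/myApp/"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# %n expands to the node name
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Run the workers as an unprivileged user
CELERYD_USER="celery"
CELERYD_GROUP="celery"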
I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can check with ps -ef | grep celeryd. To figure out what the error is when trying to run celeryd, you could do the following.
In your settings.py file, give the path to the celery log file: CELERYD_LOG_FILE = <Path to the log file>
and while running the celeryd server, specify the log level: manage.py celeryd -l DEBUG.
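Putting those together, a sketch (the log path is just an example):

# in settings.py
CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"

Then run the worker in the foreground and watch both the console and that file:

python manage.py celeryd -l DEBUG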