Celerybeat service not starting - Django

I'm trying to get celery running as a service, and I'm having a problem with the CELERYBEAT_OPTS parameter. I can start the celery service just fine, and I'm able to start celerybeat via the command line like this:
celery -A base beat -S djcelery.schedulers.DatabaseScheduler -l debug --pidfile=/tmp/celerybeat.pid
But when I start the celerybeat service like this:
sudo service celerybeat start
it doesn't start.
Here's my celerybeat config file at /etc/default/celerybeat:
export DJANGO_SETTINGS_MODULE="settings"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/path/to/.virtualenvs/django/bin/celery"
# Where to chdir at start.
CELERYD_CHDIR="/srv/myproj/"
# Extra arguments to celerybeat
# When the below line is commented out, the service starts!?!
CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYBEAT_USER="myuser"
CELERYBEAT_GROUP="mygroup"
And the oddest part is, as noted in the config file, if I comment out the CELERYBEAT_OPTS line, I can start the service just fine using the service command. So something is causing the service not to start when I specify CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler" in the config file. Does anyone have any clue what's going on here or how I might be able to troubleshoot it? Thank you.

I added this line to the /etc/default/celerybeat file and it started working:
CELERY_APP="base"
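With the app defined, the scheduler option no longer prevents startup; presumably the DatabaseScheduler can only be loaded once the app (and its Django settings) is known. For reference, the full /etc/default/celerybeat is then just the settings above plus the fix:
export DJANGO_SETTINGS_MODULE="settings"
CELERY_BIN="/path/to/.virtualenvs/django/bin/celery"
# The app instance to use; without this, the -S option below broke startup.
CELERY_APP="base"
CELERYD_CHDIR="/srv/myproj/"
CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_USER="myuser"
CELERYBEAT_GROUP="mygroup"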

Related

Django Celery with Redis Issues on Digital Ocean App Platform

After quite a bit of trial and error and a step-by-step attempt to find solutions, I thought I'd share the problems here and answer them myself according to what I've found. There is not much documentation on this anywhere except small bits and pieces, so this will hopefully help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the below errors and further resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the celery command celery -A your_app worker --beat -l info
or similar on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs when you make one of a number of potential mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about what they are and how they work. The default one is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with celery by default. In my case it was pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the celery command, but note that gevent and beat are not supported within a single command (it will result in an error). Instead, do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first command you can also specify the concurrency like so: --concurrency=100 (see the combined sketch below). This is not required but useful; read up on what it does, as that goes beyond the solution here.
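For reference, the worker command with the pool and an explicit concurrency value combined (100 is purely illustrative):
celery -A your_app worker --pool=gevent --concurrency=100 -l info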
PART 2:
In my specific case I tested the above locally (in development) first to make sure it worked. The next issue was getting this into production. I use Redis as the db/broker.
In my specific setup I have most of my celery configuration in the_main_app/celery/__init__.py, but sometimes people put it directly into the_main_app/celery.py. Whichever it is, make sure that the REDIS_URL is set correctly and used as the broker. For development it usually looks something like this (imports shown for completeness):
import os
from celery import Celery
# Fall back to a local Redis instance when REDIS_URL is not set.
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "celery first steps with django" help page but are not relevant for what I am showing here.
PART 3:
When you set up your Redis database on the App Platform (which is very simple), you will see the connection details listed as 'public network' and 'VPC network'.
The celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work. If you are not using a yaml file, you can simply copy and paste the entire connection string (select it from the dropdown!) from the Redis DB connection details, then set up an app-level environment variable in your Digital Ocean project named REDIS_URL and paste in that entire string (and also encrypt it!).
The string should look something like this (note rediss with two s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
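If you do manage the app through a yaml app spec instead, the same app-level variable can be declared roughly like this (a sketch of the App Platform app spec envs section; the value is the connection string you copied):
envs:
- key: REDIS_URL
  scope: RUN_TIME
  type: SECRET
  value: rediss://USER:PASS@URL.db.ondigitalocean.com:PORT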
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each command and pasted it into that component's Run Command.
That is basically the process step by step. Good luck!

Celery & Celery Beat daemons not running tasks

I've set up the celeryd and celerybeat daemons by following this guide. When the daemons start, everything is marked as OK; however, Celery simply doesn't run any of the tasks defined in my Django application.
This is my /etc/default/celeryd file:
CELERY_NODES="w1"
CELERY_BIN="/home/millez/myproject/venv/bin/celery"
CELERY_CHDIR="/home/millez/myproject"
CELERY_OPTS="-l info --pool=solo"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERY_CREATE_DIRS=1
and this is my /etc/default/celerybeat:
CELERY_BIN="/home/millez/myproject/venv/bin/celery"
CELERYBEAT_CHDIR="/home/millez/myproject"
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
If I manually restart the daemons (sudo /etc/init.d/celeryd restart and sudo /etc/init.d/celerybeat restart) and then check their statuses, this is the only output I get:
celeryd (node celery) (pid 2493) is up...
Running the actual celery commands manually works fine, e.g. celery -A myproject worker -l info, so it seems to be an issue with the way I've set up the daemons. However, I'm not too Linux-savvy, so if anyone happens to see some easy oversight I've made, let me know; this is driving me insane.

Celery and Celerybeat are running, but don't run tasks

I've already checked my code on a local server and I'm sure everything is OK in my code, so it seems something is wrong in the server configuration. I have a Linux server (Ubuntu 16.04) with nginx, redis, etc. installed. I also created configuration files for celery and celerybeat as below:
/etc/init.d/celeryd
/etc/default/celeryd
/etc/init.d/celerybeat
/etc/default/celerybeat
I checked their status; both of them are running, but when I check beat.log it doesn't do anything and only shows 'starting ...'.
celeryd file:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/amirali/AwesomeApp/awesome_env/bin/celery"
# App instance to use
CELERY_APP="AwesomeApp"
# Where to chdir at start. Where your manage.py is...
CELERYD_CHDIR="/home/amirali/AwesomeApp"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 -Ofair --concurrency=8"
# Set logging level to INFO
CELERYD_LOG_LEVEL="INFO"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
celerybeat file, /etc/default/celerybeat:
CELERYBEAT_LOG_LEVEL="info"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/amirali/AwesomeApp/awesome_env/bin/celery"
CELERYBEAT_USER="celery"
CELERYBEAT_GROUP="celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="AwesomeApp"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYBEAT_CHDIR="/home/amirali/AwesomeApp"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
export DJANGO_SETTINGS_MODULE="AwesomeApp.settings"
When we had to implement celery periodic tasks, it turned out celery-beat did not work properly: at some point it simply stopped launching tasks.
After some testing we decided not to waste our time on it anymore and to rely on the Linux crontab utility instead.
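As a sketch of that fallback, assuming the virtualenv and project paths from the question and a hypothetical management command named run_periodic_tasks wrapping the task logic, a crontab entry running it every five minutes would be:
*/5 * * * * /home/amirali/AwesomeApp/awesome_env/bin/python /home/amirali/AwesomeApp/manage.py run_periodic_tasks >> /var/log/celery/cron.log 2>&1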

Running celery as daemon does not create PID file

I have been scratching my brain on this one for the past few days. I have seen other issues on Stack Overflow (as this is a duplicate question) and I have tried everything to make this work: the workers are running fine, but celery is not starting up as a daemon process.
I run the command:
sudo service celeryd start
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v3.1.23 (Cipater)
> Starting nodes...
> worker1@ip-172-31-21-215: OK
I run:
sudo service celeryd status
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
The celeryd down: no pidfiles found error is what I need to resolve.
I know this question is a duplicate, but still bear with me on this one, because I have tried all of those solutions and am still unable to get it resolved.
I am deploying this script on Amazon Web Services. I am using a virtual environment.
The init.d script is taken directly from here, and I then gave it the required permissions.
Here is my configuration file:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
# CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/<user>/.virtualenvs/<virtualenv_name>/bin/celery"
# App instance to use
# comment out this line if you don't use an app
# CELERY_APP="proj"
# or fully qualified:
CELERY_APP="<project_name>.settings:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/<user>/projects/<project_name>/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I created the celery user following this article.
My project is a Django project, and I have specified the DJANGO_SETTINGS_MODULE environment variable in the celery configuration file, as specified in the documentation and in the Stack Overflow answer.
Do I need to change anything in the init.d script, or does anything else need to be added to the celery configuration file? Could it be about the celery user that I created? I also tried specifying
CELERYD_USER = ""
CELERYD_GROUP = ""
while also changing the DEFAULT_USER value to "" in the init.d script.
Still the issue persisted.
In one of the answers it was suggested that there might be some errors in the project itself, but I did not find any such errors, thanks to my test cases.
PS: I have written <user>, <virtualenv_name> and <project_name> for privacy reasons; in the actual files they have their original names.
I was having a similar issue on my Ubuntu server: [ERROR 2] FILE NOT FOUND. It turns out the /var/run/celery/ directories don't get created automatically, even if you set that in the celery.service configuration as done in the celery example docs. You can make that directory and grant the right permissions manually, but as soon as you reboot the server the directory will vanish, because it lives under a temporary directory.
After some reading about how the Linux system operates, I found out you just need to create a configuration file at /etc/tmpfiles.d/celery.conf with these lines:
d /var/run/celery 0755 admin admin -
d /var/log/celery 0755 admin admin -
Note: you will need to use a different user:group other than 'admin' or you can create a user:group called admin specifically to handle your celery process.
You can read more about this configuration and the way it operates by typing
man tmpfiles.d
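On systemd-based systems you can also apply the new tmpfiles configuration immediately, without waiting for a reboot (a standard systemd command, noted here for convenience):
sudo systemd-tmpfiles --create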
I had the issue and solved it just now, thank god! For me it was a permission issue. I had expected it to be in /var/run/celery or /var/log/celery, but it turned out to be the log file I had set up Django logging for. For some reason celery wanted to write to that file (I have to look into that) but had no permission. I found the error with the verbose command, skipping the daemonization step:
# C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
This is an old thread, but if any of you run into this error, I hope this may help!
Good luck!
I saw the same issue and it turned out to be a permissions issue.
Make sure to set the user/group that celery is running under to own the /var/log/celery/ and /var/run/celery/ folders.
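For example, assuming the celery user and group from the configurations above (adjust to whatever your daemon actually runs as):
sudo chown -R celery:celery /var/log/celery /var/run/celery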
See here for a step by step example:
Daemonizing celery

sudo /etc/init.d/celeryd start generates an "Unknown command: 'celeryd_multi'" error

I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set a wrong DJANGO_SETTINGS_MODULE:
try DJANGO_SETTINGS_MODULE="project.settings" instead of DJANGO_SETTINGS_MODULE="settings" (or vice versa).
The problem here is that when you run it as your user, the virtualenv already has the proper environment activated for your user "myuser", and it pulls packages from /home/myuser/.virtualenvs/myproject/...
When you do sudo /etc/init.d/celeryd start, you are starting celery as root, which probably doesn't have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists), and thus it looks for python packages in /usr/lib/..., where your default python lives and consequently where your celery is not installed.
Your options are to either:
1. Replicate the same virtualenv under the root user and start it like you tried, with sudo.
2. Keep the virtualenv where it is and start celery as your user "myuser" (no sudo), without using init scripts.
3. Write a script that runs su - myuser -c '/home/myuser/.virtualenvs/myproject/bin/celeryd' to invoke it from init.d as myuser.
4. Install supervisor outside of the virtualenv and let it do the dirty work for you (a sketch follows below).
Thoughts, in the same order:
1. Avoid using root for anything you don't have to.
2. If you don't need celery to start on boot, then this is fine, possibly wrapped in a script.
3. Plain hackish to me, but it works if you don't want to invest an additional 30 minutes in something else.
4. Probably the best way to handle ALL of your python startup needs; highly recommended.
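As a minimal sketch of the supervisor route, assuming the paths from this question (the program name and log file location are illustrative), a supervisor program section could look like:
[program:celeryd]
; Supervisor needs a foreground process, so run the plain celeryd worker
; command from django-celery rather than the daemonizing celeryd_multi.
command=/home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd --loglevel=INFO
directory=/home/myuser/projects/myproject
user=myuser
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
redirect_stderr=true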