I have been scratching my head over this one for the past few days. I have seen other questions on Stack Overflow (this is effectively a duplicate) and I have tried everything to make it work; the workers run fine, but celery is not starting up as a daemonized process.
I run the command:
sudo service celeryd start
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v3.1.23 (Cipater)
> Starting nodes...
> worker1@ip-172-31-21-215: OK
I run:
sudo service celeryd status
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
The celeryd down: no pidfiles found error is what I need to resolve.
I know this question is a duplicate, but please bear with me, because I have tried all of those answers and still could not get this resolved.
I am deploying this script on Amazon Web Services. I am using a virtual environment.
The init.d script is taken directly from here, and I then gave it the required permissions.
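For reference, "the required permissions" typically amounts to something like this (your exact commands may differ):
sudo chmod 755 /etc/init.d/celeryd        # make the script executable
sudo chown root:root /etc/init.d/celeryd  # init scripts should be owned by root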
Here is my configuration file:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
# CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/<user>/.virtualenvs/<virtualenv_name>/bin/celery"
# App instance to use
# comment out this line if you don't use an app
# CELERY_APP="proj"
# or fully qualified:
CELERY_APP="<project_name>.settings:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/<user>/projects/<project_name>/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I created the celery user following the process in this article.
My project is a Django project, and I have specified the DJANGO_SETTINGS_MODULE environment variable in the celery configuration file, as specified in the documentation and in the Stack Overflow answer.
Do I need to change anything in the init.d script, or does anything else need to be added to the celery configuration file? Or is it about the celery user that I created? I also tried specifying
CELERYD_USER = ""
CELERYD_GROUP = ""
while also changing the DEFAULT_USER value to "" in the init.d script.
Still the issue persisted.
In one of the answers it was suggested that there might be errors in the project itself, but I did not find any such errors, thanks to my test cases.
PS: I have written <user>, <virtualenv_name> and <project_name> as placeholders for privacy reasons; in the actual files they have their real names.
I was having a similar issue on my Ubuntu server ([ERROR 2] FILE NOT FOUND). It turns out the /var/run/celery/ directory does not get created automatically, even if you set that in the celery.service configuration as done in the celery example docs. You can create that directory and grant the right permissions manually, but as soon as you reboot the server the directory will vanish, because it lives on a temporary filesystem.
After some reading about how the Linux system operates, I found out you just need to create a configuration file at /etc/tmpfiles.d/celery.conf with these lines:
d /var/run/celery 0755 admin admin -
d /var/log/celery 0755 admin admin -
Note: you will need to use a different user:group than 'admin', or create a user:group called admin specifically to handle your celery process.
You can read more about this configuration and the way it operates by typing
man tmpfiles.d
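If you don't want to wait for a reboot, you can (assuming a systemd host) apply the new tmpfiles configuration immediately:
# create the directories declared in the new config right away
sudo systemd-tmpfiles --create /etc/tmpfiles.d/celery.conf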
I had this issue and solved it just now, thank god! For me it was a permission issue. I had expected it to be in /var/run/celery or /var/log/celery, but it turned out to be the log file I had set up Django logging for. For some reason celery wanted to write to that file (I still have to look into why) but had no permission. I found the error by running the command verbosely and skipping the daemonization step:
# C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
This is an old thread, but if any of you run into this error, I hope this helps!
Good luck!
I saw the same issue and it turned out to be a permissions issue.
Make sure to set the user/group that celery is running under to own the /var/log/celery/ and /var/run/celery/ folders.
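With the celery:celery user/group from the question's config, a minimal sketch of that looks like:
# create the directories if missing and hand them to the celery user
sudo mkdir -p /var/log/celery /var/run/celery
sudo chown -R celery:celery /var/log/celery /var/run/celery
Keep in mind that /var/run is often a temporary filesystem, so the directory may vanish on reboot (see the tmpfiles.d answer above).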
See here for a step by step example:
Daemonizing celery
After quite a bit of trial and error, and a step-by-step attempt to find solutions, I thought I would share the problems here and answer them myself based on what I found. There is not much documentation on this anywhere except small bits and pieces, so this will hopefully help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the below errors and further resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the celery command celery -A your_app worker --beat -l info
or similar on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs when you make one of a number of potential mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about how they work. The default one is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with celery by default. In my case it was: pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the celery command, but note that gevent and beat are not supported within a single command (it will result in an error). Instead, do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first line you can also specify the concurrency, like so: --concurrency=100. This is not required but useful. Read up on what it does, as that goes beyond the scope of this solution.
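For example, combining it with the worker command from above:
# illustrative value only; tune concurrency to your workload
celery -A your_app worker --pool=gevent --concurrency=100 -l info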
PART 2:
In my specific case I tested the above locally (in development) first to make sure it works. The next issue was getting this into production. I use Redis as the db/broker.
In my specific setup I have most of my celery configuration in the_main_app/celery/__init__.py, but sometimes people put it directly into the_main_app/celery.py. Whichever it is, make sure that the REDIS_URL is set correctly. For development it usually looks something like this, where YOUR_VAR_NAME is then set as the broker, as below:
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "celery first steps with django" help page but are not relevant for what I am showing here.
PART 3:
When you setup your Redis Database on the App Platform (which is very simple) you will see the connection details as 'public network' and 'VPC network'.
The celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work for me. If you are not using a yaml file, you can simply copy-paste the entire connection string (select it from the dropdown!) from the Redis DB connection details and then set up an App-Level environment variable in your Digital Ocean project named REDIS_URL and paste in that entire string (and also encrypt it!).
The string should look something like this (note rediss with two s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
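If you want to sanity-check that string before wiring it into the App-Level variable, a quick local test (purely illustrative, using the placeholders above) is:
# export the connection string the same way the App Platform would, then start a worker against it
export REDIS_URL="rediss://USER:PASS@URL.db.ondigitalocean.com:PORT"
celery -A your_app worker --pool=gevent -l info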
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each of them and pasted the commands into the Run Command field.
That is basically the process step by step. Good luck!
I've already checked my code on a local server and I'm sure everything is OK in my code, so it seems something is wrong in the server configuration. I have a Linux server (Ubuntu 16.04) with nginx, redis, etc. installed. I also created configuration files for celery and celerybeat as below:
/etc/init.d/celeryd
/etc/default/celeryd
/etc/init.d/celerybeat
/etc/default/celerybeat
I checked their status; both of them are running, but when I check beat.log it doesn't do anything and only shows 'starting ...'.
celeryd file:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/amirali/AwesomeApp/awesome_env/bin/celery"
# App instance to use
CELERY_APP="AwesomeApp"
# Where to chdir at start. Where your manage.py is...
CELERYD_CHDIR="/home/amirali/AwesomeApp"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 -Ofair --concurrency=8"
# Set logging level to DEBUG
CELERYD_LOG_LEVEL="INFO"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
celerybeat file:
File: /etc/default/celerybeat
CELERYBEAT_LOG_LEVEL="info"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/amirali/AwesomeApp/awesome_env/bin/celery"
CELERYBEAT_USER="celery"
CELERYBEAT_GROUP="celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="AwesomeApp"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYBEAT_CHDIR="/home/amirali/AwesomeApp"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
export DJANGO_SETTINGS_MODULE="AwesomeApp.settings"
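(One way to surface the underlying error is the C_FAKEFORK trick shown in an earlier answer; it should work for the celerybeat init script as well:)
sudo C_FAKEFORK=1 sh -x /etc/init.d/celerybeat start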
When we had to implement Celery periodic tasks, it turned out that celery beat did not work properly; at some point it would simply stop launching tasks.
After some tests we decided not to waste any more time on it and to rely on the Linux crontab utility instead.
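As a rough sketch (the management command name is hypothetical; the paths are the ones from the question), a crontab entry replacing a beat task could look like this:
# run a hypothetical Django management command every 15 minutes instead of a beat task
*/15 * * * * /home/amirali/AwesomeApp/awesome_env/bin/python /home/amirali/AwesomeApp/manage.py run_my_periodic_task >> /home/amirali/cron-task.log 2>&1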
I'm running Redis server 2.8.17 on a Debian server 8.5. I'm using Redis as a session store for a Django 1.8.4 application.
I haven't changed the software configuration on my server for a couple of months and everything was working just fine until a week ago when Django began raising the following error:
MISCONF Redis is configured to save RDB snapshots but is currently not able to persist to disk. Commands that may modify the data set are disabled. Please check Redis logs for details...
I checked the redis log and saw this happening about once a second:
1 changes in 900 seconds. Saving...
Background saving started by pid 22213
Failed opening .rdb for saving: Permission denied
Background saving error
I've read these two SO questions 1, 2 but they haven't helped me find the problem.
ps shows that user "redis" is running the server:
redis 26769 ... /usr/bin/redis-server *.6379
I checked my config file for the redis file name and path:
grep ^dir /etc/redis/redis.conf =>
dir /var/lib/redis
grep ^dbfilename /etc =>
dbfilename dump.rdb
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis too.
I also ran strace on the server process:
ps -C redis-server # pid = 26769
sudo strace -p 26769 -o /tmp/strace.out
But when I examine the output, I don't see any errors. In particular I don't see a "Permission denied" error as I would expect.
Also, /var/lib/redis is not an NFS directory.
Does anyone know what else could be causing this? I'd hate to have to stop using Redis. I know I can run the command "set stop-writes-on-bgsave-error yes" but that doesn't solve the problem.
This is now happening on a daily basis and the only way I can stop the error is to restart the Redis server.
Thanks.
I just had a similar issue. Despite my config file being correct, when I checked the actual dbfilename and dir in redis-client, they were incorrect.
Run redis-cli and then
CONFIG GET dbfilename
which should return something like:
1) "dbfilename"
2) "dump.rdb"
1) is just the key and 2) the value. Similarly, running CONFIG GET dir should return something like:
1) "dir"
2) "/var/lib/redis"
Confirm that these are correct and if not, set them with CONFIG SET dir /correct/path
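Note that CONFIG SET only changes the running instance; if the values in redis.conf really were wrong, you can also persist the corrected values back to the config file (supported since Redis 2.8) with:
127.0.0.1:6379> CONFIG SET dir /var/lib/redis
127.0.0.1:6379> CONFIG REWRITE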
Hope this helps!
If you have moved Redis to a new mounted volume such as /mnt/data-01:
sudo vim /etc/systemd/system/redis.service
Set ReadWriteDirectories=-/mnt/data-01
sudo mkdir /mnt/data-01/redis
Set chown and chmod on new redis data dir and rdb file.
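Concretely, assuming the redis:redis user/group from the question and a systemd-managed service, that could look like:
sudo chown -R redis:redis /mnt/data-01/redis
sudo chmod 750 /mnt/data-01/redis
sudo systemctl daemon-reload    # pick up the edited unit file
sudo systemctl restart redis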
The permissions on /var/lib/redis are 755 and it's owned by redis:redis
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis
Switch configurations while redis is running
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
127.0.0.1:6379> BGSAVE
tail /var/log/redis/redis.cnf (verify saved)
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here's what's actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the redis server is where a dump.rdb file will be created and updated.
Since you say your redis server has been working fine for a while and this just started happening, it seems you have started running the redis server in a directory where redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active redis client environment using redis-cli and update the dir key and set its value to your project folder or any folder where non-root has permissions to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that redis can write to it. You can search stackoverflow for how to do that).
You should now be able to shut down the redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the redis server from your project directory and changing the dir key back to ./:
CONFIG SET dir "./"
BGSAVE
That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
You can resolve this problem by going into the redis-cli
Type redis-cli in the terminal
Then run config set stop-writes-on-bgsave-error no; that resolved my problem.
Hope it resolves your problem too.
Up to version 3.2, Redis shipped with pretty insane defaults which left the port open to the public. In combination with the CONFIG SET instruction, anybody could easily change your redis config from the outside. If the error started appearing after some time, someone has probably changed your config.
On your local machine check that
telnet SERVER_IP REDIS_PORT
is denied. Otherwise, check your config; you should have the setting
bind 127.0.0.1
enabled.
Depending on the user that runs redis, you should also check for any damage the intruder may have done.
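A quick way to audit this on the server itself (config path as in the question above):
# confirm redis only listens on localhost and, ideally, requires a password
grep -E '^(bind|requirepass)' /etc/redis/redis.conf
sudo ss -ltnp | grep 6379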
I'm trying to get celery running as a service and I'm having a problem with the CELERYBEAT_OPTS parameter. I can start the celery service just fine and I'm able to start celerybeat fine via the command line like this:
celery -A base beat -S djcelery.schedulers.DatabaseScheduler -l debug --pidfile=/tmp/celerybeat.pid
But when I start the celerybeat service like this:
sudo service celerybeat start
it doesn't start.
Here's my celerybeat config file at /etc/default/celerybeat:
export DJANGO_SETTINGS_MODULE="settings"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/path/to/.virtualenvs/django/bin/celery"
# Where to chdir at start.
CELERYD_CHDIR="/srv/myproj/"
# Extra arguments to celerybeat
# When the below line is commented out, the service starts!?!
CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYBEAT_USER="myuser"
CELERYBEAT_GROUP="mygroup"
And the oddest part is, as noted in the config file, if I comment out the CELERYBEAT_OPTS line, I can start the service just fine using the service command. So something is causing the service not to start when I specify CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler" in the config file. Does anyone have any clue what's going on here or how I might be able to troubleshoot it? Thank you.
I added this line to the /etc/default/celerybeat file and it started working:
CELERY_APP="base"
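Putting it together, the relevant part of /etc/default/celerybeat ends up looking like this (everything else unchanged):
# App instance to use -- this was the missing piece
CELERY_APP="base"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="-S djcelery.schedulers.DatabaseScheduler"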
I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set the wrong DJANGO_SETTINGS_MODULE:
try DJANGO_SETTINGS_MODULE="project.settings" instead of DJANGO_SETTINGS_MODULE="settings" (or vice versa).
The problem here is that when you run it as your own user, the virtualenv already has the proper environment activated for your user "myuser", and it pulls packages from /home/myuser/.virtualenvs/myproject/...
When you do sudo /etc/init.d/celeryd start you are starting celery as root, which probably doesn't have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists), so it looks for python packages in /usr/lib/... where your default python is, and consequently where your celery is not installed.
Your options are to either:
Replicate the same virtualenv under root user and start it like you tried with sudo
Keep virtualenv where it is and start celery as your user "myuser" (no sudo) without using init scripts.
Write a script that will su - myuser -c /bin/sh /home/myuser/.virtualenvs/myproject/bin/celeryd to invoke it from init.d as myuser (a rough sketch of such a script is at the end of this answer).
Install supervisor outside of virtualenv and let it do the dirtywork for you
Thoughts:
Avoid using root for anything you don't have to.
If you don't need celery to start on boot then this is fine, wrapped in a script possibly.
Plain hackish to me, but it works if you don't want to invest an additional 30 minutes in something else.
Probably the best way to handle ALL of your python startup needs; highly recommended.
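For the script option above, a rough sketch of such a wrapper (the node name "worker1" is only an illustration) could be:
#!/bin/sh
# run celeryd_multi through myuser's virtualenv python instead of root's system python
su - myuser -c "/home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi start worker1"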