I am running Django, Celery, and RabbitMQ in a Docker container.
It's all configured well and running; however, when I try to install django-celery-beat I have a problem initialising the service.
Specifically, this command:
celery -A project beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
Results in this error:
celery.platforms.LockFailed: [Errno 13] Permission denied: '/usr/src/app/celerybeat.pid'
When looking at causes/solutions, the permission denied error appears to occur when the default scheduler (celery.beat.PersistentScheduler) attempts to keep track of the last run times in a local shelve database file and doesn't have write access.
However, I am using django-celery-beat and applying the --scheduler flag to use the django_celery_beat.schedulers service, which should store the schedule in the Django database and therefore not require write access.
What else could be causing this problem? / How can I debug this further?
celerybeat (celery.bin.beat) creates a pid file in which it stores the process id:
--pidfile
File used to store the process pid. Defaults to celerybeat.pid.
The program won’t start if this file already exists and the pid is
still alive.
You can leave --pidfile= empty in your command, but beware that celery will then have no way of knowing whether more than one celerybeat process is active.
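Alternatively, point the pid file somewhere the process can write. A minimal sketch, assuming /tmp is writable inside your container (the app name and scheduler are taken from the question):
celery -A project beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler --pidfile=/tmp/celerybeat.pid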
I'm facing an issue debugging Celery tasks running in a chain.
If I set the CELERY_ALWAYS_EAGER configuration, the tasks run in the same process one by one and I'm able to debug.
But when I set this configuration, another problem is raised: I have an issue creating a socket.
socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
I get an error:
_sock = _realsocket(family, type, proto)
error: [Errno 1] Operation not permitted
I guess it's a result of the CELERY_ALWAYS_EAGER configuration.
How can I handle this issue?
I would suggest not using CELERY_ALWAYS_EAGER and instead running the celery worker in a separate tab on your dev machine, like so:
celery -A proj worker -l INFO
This way you avoid nasty timing bugs, because your dev and live setups behave the same.
See https://docs.celeryproject.org/en/stable/django/first-steps-with-django.html#starting-the-worker-process
I have many EC2 instances that hold Celery jobs for processing. To efficiently start working through the whole queue, I have tested AWS-RunShellScript in AWS' SSM with a bash script that calls a Python script. For example, for a single instance this begins with sh start_celery.sh.
When I run the command in SSM, this is the following output (compare to other output below, after reading on):
/home/ec2-user/dh2o-py/venv/local/lib/python2.7/dist-packages/celery/utils/imports.py:167:
UserWarning: Cannot load celery.commands extension u'flower.command:FlowerCommand':
ImportError('No module named compat',)
namespace, class_name, exc))
/home/ec2-user/dh2o-py/tasks/task_harness.py:49: YAMLLoadWarning: calling yaml.load() without
Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
task_configs = yaml.load(conf)
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
failed to run commands: exit status 1
Note that only warnings are thrown. When I SSH to the same instance and run the same command (i.e. sh start_celery.sh), the same output results BUT the process runs.
I have verified that the process does NOT run when doing this via SSM, and I have no idea why. As a work-around, I tried running the sh start_celery.sh command with bootstrapping in user data for each EC2, but that failed too.
So why does SSM fail to actually run the process, when I succeed by SSHing to each instance and running identical commands? The details below relate to machine and Python configuration:
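As an aside, the output above hints at one possible cause: SSM runs commands as root (note uid=0 euid=0 in the log), and Celery exits rather than run a root worker that accepts pickle unless C_FORCE_ROOT is set. A hedged sketch of what adding that to start_celery.sh could look like (the worker invocation and the app name tasks are assumptions, not from the question, and running workers as root remains discouraged):
#!/bin/sh
# Assumption: start_celery.sh launches the worker roughly like this.
# Celery refuses to start as root with pickle enabled unless C_FORCE_ROOT is set.
export C_FORCE_ROOT=1
celery -A tasks worker -l info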
I am running supervisor/celery on an amazon aws server. Attempting to deploy a new application version eventually fails because the celery processes are not started. I have taken a look at the supervisord.conf file to ensure that the programs are included, which they are. At the end of the supervisord.conf file I have the following include:
[include]
files=celeryd.conf
files=flower.conf
I try to restart celery with
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-default celeryd-slowtasks
celeryd-default and celeryd-slowtasks being the names of the programs listed in celeryd.conf. I get the following error:
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
If I run
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart all
I get
flower: stopped
httpd: stopped
httpd: started
flower: started
without any mention of celery. Any idea how to start figuring this issue out?
Check /opt/python/etc/supervisord.conf, you are probably including a folder that you don't expect to be included.
Also ensure that the instance of supervisor that is running is actually using the config file you expect.
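One concrete thing to look for in the [include] section quoted above: supervisord expects a single files key holding a space-separated list of patterns, so a second files= line overrides the first, and celeryd.conf would silently never be included. A corrected include would read:
[include]
files = celeryd.conf flower.conf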
I'm running Redis server 2.8.17 on a Debian server 8.5. I'm using Redis as a session store for a Django 1.8.4 application.
I haven't changed the software configuration on my server for a couple of months and everything was working just fine until a week ago when Django began raising the following error:
MISCONF Redis is configured to save RDB snapshots but is currently not able to persist to disk. Commands that may modify the data set are disabled. Please check Redis logs for details...
I checked the redis log and saw this happening about once a second:
1 changes in 900 seconds. Saving...
Background saving started by pid 22213
Failed opening .rdb for saving: Permission denied
Background saving error
I've read these two SO questions 1, 2 but they haven't helped me find the problem.
ps shows that user "redis" is running the server:
redis 26769 ... /usr/bin/redis-server *:6379
I checked my config file for the redis file name and path:
grep ^dir /etc/redis/redis.conf =>
dir /var/lib/redis
grep ^dbfilename /etc/redis/redis.conf =>
dbfilename dump.rdb
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis too.
I also ran strace on the server process:
ps -C redis-server # pid = 26769
sudo strace -p 26769 -o /tmp/strace.out
But when I examine the output, I don't see any errors. In particular I don't see a "Permission denied" error as I would expect.
Also, /var/lib/redis is not an NFS directory.
Does anyone know what else could be causing this? I'd hate to have to stop using Redis. I know I can run the command "config set stop-writes-on-bgsave-error no" but that doesn't solve the problem.
This is now happening on a daily basis and the only way I can stop the error is to restart the Redis server.
Thanks.
I just had a similar issue. Despite my config file being correct, when I checked the actual dbfilename and dir in redis-client, they were incorrect.
Run redis-cli and then
CONFIG GET dbfilename
which should return something like
1) "dbfilename"
2) "dump.rdb"
1) is just the key and 2) the value. Similarly, running CONFIG GET dir should return something like
1) "dir"
2) "/var/lib/redis"
Confirm that these are correct and if not, set them with CONFIG SET dir /correct/path
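On Redis 2.8 and later you can also persist corrected values back into redis.conf straight from the client, assuming the server was started from a config file (CONFIG SET and CONFIG REWRITE are standard redis commands; the values shown are the ones from the question):
127.0.0.1:6379> CONFIG SET dir /var/lib/redis
127.0.0.1:6379> CONFIG SET dbfilename dump.rdb
127.0.0.1:6379> CONFIG REWRITE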
Hope this helps!
If you have moved Redis to a new mounted volume: /mnt/data-01.
sudo vim /etc/systemd/system/redis.service
Set ReadWriteDirectories=-/mnt/data-01
sudo mkdir /mnt/data-01/redis
Set chown and chmod on the new redis data dir and rdb file.
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis.
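A minimal sketch of those commands for the new location (the /mnt/data-01/redis path comes from the steps above; the dump.rdb filename is an assumption, so match it to your config):
sudo chown -R redis:redis /mnt/data-01/redis
sudo chmod 755 /mnt/data-01/redis
sudo chmod 644 /mnt/data-01/redis/dump.rdb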
Switch configurations while redis is running
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
127.0.0.1:6379> BGSAVE
tail /var/log/redis/redis.log (verify the save succeeded)
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here's what's actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the redis server is where the dump.rdb file will be created and updated.
Since you say your redis server has been working fine for a while and this just started happening, it seems you have started running the redis server in a directory where redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active redis client environment using redis-cli and update the dir key and set its value to your project folder or any folder where non-root has permissions to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that redis can write to it. You can search stackoverflow for how to do that).
You should now be able to shut down the redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the redis server from your project directory and changing the dir key back to "./".
CONFIG SET dir "./"
BGSAVE
That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
You can resolve this problem by going into the redis-cli.
Type redis-cli in the terminal.
Then run config set stop-writes-on-bgsave-error no; that resolved my problem.
Hope it resolves your problem too.
Up to Redis 3.2, it shipped with pretty insane defaults that left the port open to the public. In combination with the CONFIG SET instruction, anybody could easily change your redis config from outside. If the error starts appearing after some time, someone has probably changed your config.
On your local machine check that
telnet SERVER_IP REDIS_PORT
is denied. Otherwise check your config, you should have the setting
bind 127.0.0.1
enabled.
Depending on the user that runs redis, you should also check for damage that the intruder has done.
I have been scratching my brain over this one for the past few days. I have seen other questions about it on stackoverflow (so this is a duplicate question) and I have tried everything to make this work: the workers run fine, but celery is not starting up as a daemon process.
I run the command:
sudo service celeryd start
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v3.1.23 (Cipater)
> Starting nodes...
> worker1@ip-172-31-21-215: OK
I run:
sudo service celeryd status
and I get:
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
The celeryd down: no pidfiles found error is what I need to resolve.
I know this question is a duplicate, but please bear with me on this one, because I have tried all of the suggested solutions and am still unable to get it resolved.
I am deploying this script on Amazon Web Services. I am using a virtual environment.
The init.d script is taken directly from here, and I then gave it the required permissions.
Here is my configuration file:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
# CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/<user>/.virtualenvs/<virtualenv_name>/bin/celery"
# App instance to use
# comment out this line if you don't use an app
# CELERY_APP="proj"
# or fully qualified:
CELERY_APP="<project_name>.settings:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/<user>/projects/<project_name>/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I created the celery user using the process from this article.
My project is a Django project, and I have specified the DJANGO_SETTINGS_MODULE environment variable in the celery settings file as specified in the documentation and also in the stackoverflow answer.
Do I need to change anything in the init.d script, or does anything else need to be added to the celery configuration file? Is it about the celery user that I have created? I also tried specifying
CELERYD_USER=""
CELERYD_GROUP=""
while also changing the DEFAULT_USER value to "" in the init.d script.
Still the issue persisted.
One of the answers suggested that there might be errors in the project itself... but I did not find any such errors, all thanks to my test cases.
PS: I have written <user>, <project_name> and <virtualenv_name> above for privacy reasons; in the actual files they have their original names.
I was having a similar issue on my ubuntu server: [ERROR 2]FILE NOT FOUND. It turns out the /var/run/celery/ directories don't get created automatically, even if you set them in the celery.service configuration as done in the celery example docs. You can create that directory and grant the right permissions manually, but as soon as you reboot the server the directory will vanish, because it lives on a temporary filesystem.
After some reading about how the linux system operates, I found out you just need to create a configuration file at /etc/tmpfiles.d/celery.conf with these lines:
d /var/run/celery 0755 admin admin -
d /var/log/celery 0755 admin admin -
Note: you will need to use a different user:group other than 'admin' or you can create a user:group called admin specifically to handle your celery process.
You can read more about this configuration and the way it operates by typing
man tmpfiles.d
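To apply the new file immediately instead of waiting for a reboot, systemd can process it on demand (systemd-tmpfiles is the standard utility that reads these configs):
sudo systemd-tmpfiles --create /etc/tmpfiles.d/celery.conf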
I had the issue and solved it just now, thank god! For me it was a permission issue. I had expected it to involve /var/run/celery or /var/log/celery, but it turned out to be the log file I had set up Django logging for. For some reason celery wanted to write to that file (I still have to look into why) but had no permission. I found the error with the verbose command, skipping the daemonization step:
# C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
This is an old thread, but if any of you run into this error, I hope this helps!
Good luck!
I saw the same issue and it turned out to be a permissions issue.
Make sure the user/group that celery runs under owns the /var/log/celery/ and /var/run/celery/ folders.
See here for a step by step example:
Daemonizing celery
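A minimal sketch of that ownership fix, assuming the CELERYD_USER="celery" and CELERYD_GROUP="celery" values from the configuration above:
sudo mkdir -p /var/log/celery /var/run/celery
sudo chown -R celery:celery /var/log/celery /var/run/celery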