I am trying to run celery and celerybeat in production. In my current Django app I am able to test and run them using the commands "celery -A Gorgon worker" and "celery -A Gorgon beat -l debug --max-interval=10". I am running everything inside a virtualenv, and I am using Redis as the task broker.
The whole app runs on a gunicorn server, but when I try to daemonize the process it fails with a 111 connection error.
I have added the required scripts from https://github.com/celery/celery/tree/3.0/extra/generic-init.d into the directory /etc/init.d
As for the scripts in /etc/default, they look like this:
My celeryd script is as follows:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
#CELERYD_NODES="worker1 worker2 worker3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/ubuntu/sites/virtualenv/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="Gorgon"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/sites/source"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
#CELERYD_USER="celery"
#CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
My celerybeat script is:
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/ubuntu/sites/virtualenv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="Gorgon"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYBEAT_CHDIR="/home/ubuntu/sites/source"
# Extra arguments to celerybeat
#CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
How do I get my celery setup running as a daemon, using my current virtualenv in /home/ubuntu/sites/virtualenv?
To run Celery as a daemon, you can use Supervisor.
This link might help you get an idea of how to run Celery in daemon mode:
http://www.hiddentao.com/archives/2012/01/27/processing-long-running-django-tasks-using-celery-rabbitmq-supervisord-monit/
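For example, a Supervisor program section for the worker might look like the sketch below. The celery path, app name, and working directory come from your question; the program name, user, and log file are assumptions you would adapt:

[program:gorgon_celery]
; minimal sketch -- program name, user, and log path are placeholders
command=/home/ubuntu/sites/virtualenv/bin/celery -A Gorgon worker --loglevel=INFO
directory=/home/ubuntu/sites/source
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/gorgon_worker.log
stderr_logfile=/var/log/celery/gorgon_worker.log

A second [program:...] section with celery -A Gorgon beat as its command would cover the scheduler, and sudo supervisorctl reread followed by sudo supervisorctl update picks the new sections up.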
Related
Overview:
I'm trying to run celery as a daemon for tasks that send emails. It worked fine in development, but not in production. My website is up now and every function works fine (no Django errors), but the tasks aren't going through because the daemon isn't set up properly, and I get this error on Ubuntu 16.04:
project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'
Installed programs / hardware, and what I've done so far:
I'm using Django 2.0.5, Python 3.5, Ubuntu 16.04, RabbitMQ, and Celery, all on a VPS, and I'm using a venv for everything. I've installed Supervisor too, and it's running when I check with sudo service --status-all because it has a + next to it. Erlang is also installed, and when I check with top, RabbitMQ is running. Using sudo service rabbitmq-server status shows RabbitMQ is active too.
Originally, I followed the directions at the Celery website, but they were very confusing and I couldn't get it to work after ~40 hours of testing/reading/watching other people's solutions. Feeling very aggravated and defeated, I chose the directions here to get the daemon set up, hoping to get somewhere. I have gotten further, but I get the error above.
I read through the Supervisor documentation, checked the process states and program settings to try to debug the problem, and I'm lost, because as far as I can tell my paths are correct according to the documentation.
Here's my file structure stripped down:
home/
    my_user/                  # is a superuser
        portfolio-project/
            project/
                __init__.py
                celery.py
                settings.py   # this file is in here too
            app_1/
            app_2/
            ...
        ...
        logs/
            celery.log
        myvenv/
            bin/
                celery        # executable file, is colored green
    celery_user_nobody/       # not a superuser, but created for celery tasks
etc/
    supervisor/
        conf.d/
            project_celery.conf
Here is my project_celery.conf:
[program:project_celery]
command=/home/my_user/myvenv/bin/celery worker -A project --loglevel=INFO
directory=/home/my_user/portfolio-project/project
user=celery_user_nobody
numprocs=1
stdout_logfile=/home/my_user/logs/celery.log
stderr_logfile=/home/my_user/logs/celery.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
stopasgroup=true
priority=1000
Here's my __init__.py:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']
And here's my celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
UPDATE: Here is my settings.py:
This is the only Celery setting I have, because the Django example at the Celery website shows nothing more unless you use something like Redis. I put this in my settings.py file because the Django instructions say you can: CELERY_BROKER_URL = 'amqp://localhost'
UPDATE: I created the rabbitmq user:
$ sudo rabbitmqctl add_user rabbit_user1 mypassword
$ sudo rabbitmqctl add_vhost myvhost
$ sudo rabbitmqctl set_user_tags rabbit_user1 mytag
$ sudo rabbitmqctl set_permissions -p myvhost rabbit_user1 ".*" ".*" ".*"
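For reference, if Celery is meant to use this new user and vhost, the CELERY_BROKER_URL above would also need to point at them; the plain 'amqp://localhost' connects as the default guest user to the '/' vhost. A sketch, reusing the names created above:

# settings.py -- sketch only; substitute your real credentials
CELERY_BROKER_URL = 'amqp://rabbit_user1:mypassword@localhost:5672/myvhost'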
And when I do sudo rabbitmqctl status, I get Status of node 'rabbit@django2-portfolio', but oddly, I don't see any running nodes like the following, even though the directions here show that I should:
{nodes,[rabbit@myhost]},
{running_nodes,[rabbit@myhost]}]
Steps I followed:
1. I created the .conf and .log files in the places I said.
2. sudo systemctl enable supervisor
3. sudo systemctl start supervisor
4. sudo supervisorctl reread
5. sudo supervisorctl update # no errors up to this point
6. sudo supervisorctl status
And after step 6 I get this error:
project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'
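A quick sanity check for this kind of error is whether the path exists and whether the user supervisord runs the program as can actually reach it; hypothetical commands along these lines:

ls -l /home/my_user/myvenv/bin/celery
sudo -u celery_user_nobody /home/my_user/myvenv/bin/celery --version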
UPDATE: I checked the error logs, and I have multiple instances of the following in /var/log/rabbitmq/rabbit@django2-portfolio.log:
=INFO REPORT==== 9-Aug-2018::18:26:58 ===
connection <0.690.0> (127.0.0.1:42452 -> 127.0.0.1:5672): user 'guest' authenticated and granted access to vhost '/'
=ERROR REPORT==== 9-Aug-2018::18:29:58 ===
closing AMQP connection <0.687.0> (127.0.0.1:42450 -> 127.0.0.1:5672):
missed heartbeats from client, timeout: 60s
Closing statement:
Anyone have any idea what's going on? When I look at the absolute paths in my project_celery.conf file, everything seems to be set correctly, but something's obviously wrong. Looking things over some more, RabbitMQ says no nodes are running when I do sudo rabbitmqctl status, but Celery does when I do celery status (it shows OK 1 node online).
Any help would be greatly appreciated. I even made this account specifically because I had this problem. It's driving me mad. And if anyone needs any more info, please ask. This is my first time deploying anything, so I'm not a pro.
Can you try either of the following in your project_celery.conf?
command=/home/my_user/myvenv/bin/celery worker -A celery --loglevel=INFO
directory=/home/my_user/portfolio-project/project
or
command=/home/my_user/myvenv/bin/celery worker -A project.celery --loglevel=INFO
directory=/home/my_user/portfolio-project/
Additionally, in celery.py can you add the parent folder of the project module to sys.path (or make sure that you've packaged your deploy properly and have installed it via pip or otherwise)?
I suspect (from your comments with @Jack Shedd) that you're referring to a non-existent project, due to where directory is set relative to the magic celery.py file.
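A minimal sketch of that sys.path tweak, assuming celery.py lives at /home/my_user/portfolio-project/project/celery.py as shown in the file structure above:

# at the top of project/celery.py -- sketch only
import os
import sys

# make /home/my_user/portfolio-project importable so "project" resolves as a package
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))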
Related: this question asks something similar, but the answer isn't about using Gunicorn.
How do I correctly set DJANGO_SETTINGS_MODULE to production in my production environment, and to staging in my staging environment?
I have two settings files - staging.py and production.py.
Because I was having trouble setting the variables, I simply made my default lines in manage.py and wsgi.py look like:
manage.py
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
wsgi.py
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
...so that in production, no matter the shenanigans with this pesky variable, my production app would remain on production settings if DJANGO_SETTINGS_MODULE wasn't set.
The problem is that I want my staging app to remain on staging settings so that emails don't go out from that (separate) server.
I have the above files in staging, as well as these attempts to properly set settings.staging:
gunicorn.conf:
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly trigger a respawn
respawn
setuid django
setgid django
chdir /src
script
export DJANGO_SETTINGS_MODULE="settings.staging"
exec /opt/Envs/mysite/bin/gunicorn \
--name=mysite \
--pythonpath=/opt/Envs/mysite/lib/python2.7/site-packages \
--bind=127.0.0.1:9000 \
--config /src/bin/gunicorn/gunicorn.py \
mysite.wsgi:application
end script
Also, there is a file named /etc/profile.d/myenvvars.sh that contains:
export DJANGO_SETTINGS_MODULE=settings.staging
And finally, I'm using virtualenvwrapper, with this line in /opt/Envs/myappenv/bin:
export DJANGO_SETTINGS_MODULE=settings.staging
As you can see, I'm trying the belt-and-suspenders technique to keep the settings on staging on the staging server. However, despite these FOUR ways of trying to set DJANGO_SETTINGS_MODULE=settings.staging, it still sometimes defaults to settings.production and sends out emails.
What is the proper way to set DJANGO_SETTINGS_MODULE once and for all on both my staging and production servers?
I'm having an issue with upstart where I can start the job, but when I run
sudo stop up
it hangs.
This is the .conf file:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
expect fork
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
pwd
exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
#exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
# create a custom event in case we want to chain later
emit django_running
end script
I would really appreciate it if someone could give me an idea of why it hangs.
I think I have figured it out, or at least got something working, using:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
/usr/bin/python manage.py run_gunicorn -c config/gunicorn
end script
Things I've learnt that may help others:
Don't use exec inside the script tags; just code it as if you were in a shell.
Use expect fork if you fork once.
Use expect daemon if you fork twice (see the sketch below).
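To illustrate the last two points with a hypothetical variant: if gunicorn itself were started with its -D/--daemon flag (it detaches by double-forking), the job would need expect daemon rather than expect fork. A sketch, where the wsgi module name is a placeholder:

# hypothetical sketch -- wsgi module is a placeholder, paths reuse the ones above
description "gunicorn, self-daemonizing"
expect daemon
script
    cd /home/calum/django/django-nexus7/nexus7
    exec /usr/bin/gunicorn -D -c config/gunicorn nexus7.wsgi:application
end script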
I have a Django app that I am looking to deploy. I would like to use upstart to run the app.
So far I have added the upstart.conf file to /etc/init
and tried to run it using
start upstart
but all I get is
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.90" (uid=1000 pid=5873 comm="start upstart ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
The contents of the .conf file are:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
pre-start script
chdir $DJANGO_HOME
exec /usr/bin/python rm sqlite3.db
exec /usr/bin/python manage.py syncdb
exec /usr/bin/python manage.py loaddata fixtures/data.json
emit django_starting
end script
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
#exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
# create a custom event in case we want to chain later
emit django_running
end script
I have also tried using a much simpler .conf file, but I come up with more or less the same error.
I would really appreciate it if someone could give me an idea of what I'm doing wrong.
Upstart jobs can only be started by root, and that error appears if you try to start one as a normal user. Try this:
sudo start upstart
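Note also that the job name comes from the .conf filename under /etc/init, so if the file were saved as, say, /etc/init/nexus7.conf (a hypothetical name), it would be started with:

sudo start nexus7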
Last week I set up RabbitMQ and Celery on my production system, after having tested them on my local dev machine where everything worked fine.
I get the feeling that my tasks are not being executed in production, since I have about 1200 tasks still sitting in the queue.
I run a CentOS 5.4 setup, with celeryd and celerybeat daemons and WSGI.
I have added the import in the WSGI module.
When I run /etc/init.d/celeryd start, I get the following response:
[root@myvm myproject]# /etc/init.d/celeryd start
celeryd-multi v2.3.1
> Starting nodes...
> w1.myvm.centos01: OK
When I run /etc/init.d/celerybeat start, I get the following response:
[root@myvm fundedmyprojectbyme]# /etc/init.d/celerybeat start
Starting celerybeat...
So from the output it seems that the daemons start successfully, although when I look at the queues, the tasks only pile up rather than getting executed.
Now, if I do the same thing but use Django's manage.py instead (./manage.py celeryd and ./manage.py celerybeat), the tasks immediately start to get processed.
My /etc/default/celeryd:
# Where to chdir at start.
CELERYD_CHDIR="/www/myproject/"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
My /etc/default/celerybeat:
# Where the Django project is.
CELERYD_CHDIR="/www/myproject/"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
# Path to celeryd
CELERYD="/www/myproject/manage.py celeryd"
# Path to celerybeat
CELERYBEAT="/www/myproject/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
My /etc/init.d files for celeryd and celerybeat are based on the generic scripts.
Am I missing a part of the configuration?
I ran into a situation once where I had to add 'python' as a prefix to the CELERYD_MULTI variable.
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="python $CELERYD_CHDIR/manage.py celeryd_multi"
For whatever reason, my manage.py script would not execute normally (even though I had chmod +x'd it and configured my shebang properly). You might try this to see if it works.
Try running the following and see what the output tells you:
sh -x /etc/init.d/celeryd start
In my case there were some permission problems on /var/log for the user that celery runs as.
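If it turns out to be the same kind of permission issue, something along these lines usually sorts it out (a sketch, assuming the celery user/group from the /etc/default/celeryd above):

sudo mkdir -p /var/log/celery /var/run/celery
sudo chown -R celery:celery /var/log/celery /var/run/celery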