Upstart hangs on stop - Django

I'm having an issue with Upstart: I can start my job, but when I run
sudo stop up
it hangs.
This is the .conf file:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
expect fork
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
pwd
exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
#exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
# create a custom event in case we want to chain later
emit django_running
end script
I'd really appreciate it if someone could give me an idea of why it hangs.

I think I have figured it out, or at least got something working, using:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
/usr/bin/python manage.py run_gunicorn -c config/gunicorn
end script
Things I've learnt that may help others:
Don't use exec inside the script stanza; just write the commands as you would in a shell.
Use expect fork if your process forks once.
Use expect daemon if it forks twice (see the sketch below).
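For reference, here is a minimal sketch of how the expect stanza lines up with forking behaviour; the daemon paths are hypothetical placeholders:
# No 'expect' stanza: Upstart tracks the command directly,
# so it must stay in the foreground:
#     exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
# 'expect fork': the command calls fork() once and the parent exits:
#     expect fork
#     exec /usr/sbin/mydaemon
# 'expect daemon': the command forks twice (classic daemonization):
#     expect daemon
#     exec /usr/sbin/mydaemon --detach
If the expect stanza doesn't match the actual number of forks, Upstart ends up tracking the wrong PID, which is exactly the kind of mismatch that makes stop hang.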

Related

Run celery with Django start

I am using Django 1.11 and Celery 4.0.2.
We are using a PaaS (OpenShift 3), which runs on Kubernetes/Docker.
I am using a Python image; it only knows how to run one command on start (and it watches the exit code, restarting on failure).
How can I run a Celery worker at the same time as Django, so that a failure of either one kills both processes (worker and Django)?
I am using WSGI and gevent to start Django.
Thank you!
You could use Circus (Supervisor is an alternative, but it doesn't support Python 3 currently).
With Circus you create a circus.ini in your project directory.
Something like:
[watcher:celery]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = celery
args = worker --app=your_app --loglevel=DEBUG -E
[watcher:django]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = python
args = manage.py runserver
Then you start both with:
virtualenv/bin/circusd circus.ini
It should start both processes. I think this is a good way to create a "start" plan for your project: if you later want to add celerybeat or use Channels (WebSockets in Django), you can just add a new watcher to your circus.ini. It's pretty flexible.
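For example, a celerybeat watcher could be added right alongside the two above; this is a sketch reusing the same hypothetical paths:
[watcher:celerybeat]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = celery
args = beat --app=your_app --loglevel=INFO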

Celery not running in production

I am trying to run celery and celerybeat in production. In my current Django app I am able to test and run using the commands "celery -A Gorgon worker" and "celery -A Gorgon beat -l debug --max-interval=10". I am running everything through a virtualenv, and I am using Redis as the task broker.
The whole app is running on a gunicorn server, but when I try to daemonize the process, it fails with a 111 connection error.
I have added the required scripts from https://github.com/celery/celery/tree/3.0/extra/generic-init.d into the directory /etc/init.d
As for the scripts in /etc/default, they look like this:
My celeryd script is as follows:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
#CELERYD_NODES="worker1 worker2 worker3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/ubuntu/sites/virtualenv/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="Gorgon"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/sites/source"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
#CELERYD_USER="celery"
#CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
My celerybeat script is:
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/ubuntu/sites/virtualenv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="Gorgon"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYBEAT_CHDIR="/home/ubuntu/sites/source"
# Extra arguments to celerybeat
#CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
How do I get my Celery setup running as a daemon, using my current virtualenv in /home/ubuntu/sites/virtualenv?
To run Celery as a daemon, you can use Supervisor.
This link might help you get an idea of how to run Celery in daemon mode:
http://www.hiddentao.com/archives/2012/01/27/processing-long-running-django-tasks-using-celery-rabbitmq-supervisord-monit/
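As a rough sketch (not taken from the linked article), a Supervisor program block for the worker in this question might look like the following, reusing the paths from the question; adjust names and options to your setup:
[program:celery]
command=/home/ubuntu/sites/virtualenv/bin/celery -A Gorgon worker --loglevel=INFO
directory=/home/ubuntu/sites/source
user=celery
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.err.log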

How to use Upstart?

I have a Django app that I am looking to deploy, and I would like to use Upstart to run it.
So far I have added the upstart.conf file to /etc/init
and tried to run it using
start upstart
but all I get is:
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.90" (uid=1000 pid=5873 comm="start upstart ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
the contents of the .conf file are:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
pre-start script
chdir $DJANGO_HOME
rm sqlite3.db
/usr/bin/python manage.py syncdb
/usr/bin/python manage.py loaddata fixtures/data.json
emit django_starting
end script
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
#exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
# create a custom event in case we want to chain later
emit django_running
end script
I have also tried using a much simpler .conf file, but I get more or less the same error.
Would really appreciate it if someone could give me an idea of what I'm doing wrong.
Upstart jobs can only be started by root, and that error appears if you try to start one as a normal user. Try this:
sudo start upstart
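Once the job is started as root, you can check its state the same way:
sudo status upstart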

Getting Django in a VirtualEnv to run through Upstart

I've been trying to trudge through the docs and examples to get my Django app running through Upstart so I can have it running all the time, but am unable to do so.
Here's my upstart configuration file located at /etc/init/myapp.conf:
start on startup
#expect daemon
#respawn
console output
script
chdir /app/env/bin
exec source activate
exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
end script
When I type sudo service myapp start, the console says that it has started but it doesn't seem to be running.
Is it possible to see some debugging output to see what's going wrong?
I need to run my Django application as another user, i.e. djangouser. How can I do so?
(I've been commenting out some lines to test where the service is going wrong.) This is not for production, just for my internal development use.
Thanks.
Edit #1:
I have wrapped both my commands into a simple script at /app/run.sh
#!/bin/bash
cd /app/env/bin
source activate
cd /app/src
python manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
...and I've modified my /etc/init/myapp.conf to:
start on startup
expect daemon
exec su - djangouser -c "bash /app/run.sh"
When executing sudo service myapp start, the application starts but the PID is wrong and I can't seem to kill it with sudo service myapp stop.
Any ideas?
Change:
exec source activate
to just:
source activate
exec runs an executable, and source is a shell builtin rather than a binary, so there is nothing for exec to run. Sourcing activate will load the virtual environment. You should probably drop the other "exec" too. If that doesn't work, please post your Upstart logs.
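Putting it together, the script stanza from the question might become something like this sketch (based on the question's paths; note that Upstart runs script stanzas with /bin/sh, where the POSIX "." works even where "source" doesn't):
script
    cd /app/env/bin
    . ./activate
    # run in the foreground, without a trailing '&', so Upstart can track the PID
    exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000
end script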
A couple of remarks:
Logging the output somewhere other than /dev/null might be useful :)
runserver is not meant to be stable; I see it crashing sometimes, and in that case I guess you'll need to force Upstart to reload, or put the runserver call in a while loop (see the sketch after this list).
You will not be able to use an interactive debugger like ipdb with this setup.
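The while-loop variant could look like this; a sketch only, reusing the question's paths:
script
    cd /app/src
    # restart runserver whenever it exits; '|| true' keeps 'sh -e' from aborting
    while true; do
        /app/env/bin/python manage.py runserver 0.0.0.0:8000 || true
        sleep 1
    done
end script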
How about using nginx and uWSGI with your virtualenv? This will give you a production-like environment, but it will also start your Django app at startup. If you are using Ubuntu 10 you should take a look at uwsgi-python; otherwise just install the latest uWSGI. I usually point uWSGI at my virtualenv like so: sudo nano /etc/uwsgi-python/apps-available/app.xml
<uwsgi>
<socket>127.0.0.1:8889</socket>
<pythonpath>/home/user/code/</pythonpath>
<virtualenv>/home/user/code</virtualenv>
<pythonpath>/home/user/code/app</pythonpath>
<app mountpoint="/">
<script>uwsgiApp</script>
</app>
</uwsgi>
Also set up your nginx files at /etc/nginx/apps-available/default (the file is fairly straightforward). This will keep your Django app up at all times.
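The nginx side might look roughly like this; a sketch assuming the uwsgi socket from the XML above (the server_name is a placeholder):
server {
    listen 80;
    server_name example.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8889;
    }
}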
su is problematic because it forks the process. You can use sudo -u djangouser instead, or simply add
setuid djangouser
to your conf file.
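With setuid, the conf from the edit above could shrink to something like this sketch (no su, so no extra fork for Upstart to mistrack):
start on startup
setuid djangouser
exec bash /app/run.sh
You would also want run.sh to leave runserver in the foreground (no trailing &) so that Upstart tracks the right PID.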
This should work on Ubuntu 14.04 and possibly other versions as well:
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app start
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# cat /var/log/upstart/my_app.log
Performing system checks...
System check identified no issues (0 silenced).
You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.
June 30, 2015 - 06:54:18
Django version 1.8.2, using settings 'my_test.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app stop
my_app stop/waiting
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app stop/waiting
Here is the config to make it work:
root@vagrant-ubuntu-trusty-64:/etc/init# cat my_app.conf
description "my_app upstart script"
start on runlevel [23]
respawn
script
su vagrant -c "source /home/vagrant/dj_app/bin/activate; /home/vagrant/dj_app/bin/python /home/vagrant/my_test/manage.py runserver 0.0.0.0:8080"
end script

RabbitMQ, celeryd and celerybeat not executing tasks in production as daemons

Last week I set up RabbitMQ and Celery on my production system, after having tested it on my local dev machine where everything worked fine.
I get the feeling that my tasks are not being executed in production, since I have about 1200 tasks still sitting in the queue.
I run a CentOS 5.4 setup, with celeryd and celerybeat daemons and WSGI.
I have added the import to the WSGI module.
When I run /etc/init.d/celeryd start, I get the following response:
[root@myvm myproject]# /etc/init.d/celeryd start
celeryd-multi v2.3.1
> Starting nodes...
> w1.myvm.centos01: OK
When I run /etc/init.d/celerybeat start, I get the following response:
[root@myvm myproject]# /etc/init.d/celerybeat start
Starting celerybeat...
So from the output it seems that the daemons start successfully, yet when I look at the queues, tasks only keep accumulating rather than getting executed.
Now if I do the same thing but use Django's manage.py instead (./manage.py celeryd and ./manage.py celerybeat), the tasks immediately start to get processed.
My /etc/default/celeryd
# Where to chdir at start.
CELERYD_CHDIR="/www/myproject/"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
My /etc/default/celerybeat:
# Where the Django project is.
CELERYD_CHDIR="/www/myproject/"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
# Path to celeryd
CELERYD="/www/myproject/manage.py celeryd"
# Path to celerybeat
CELERYBEAT="/www/myproject/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
My /etc/init.d scripts for celeryd and celerybeat are based on the generic scripts.
Am I missing a part of the configuration?
I ran into a situation once where I had to add 'python' as a prefix to the CELERYD_MULTI variable.
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="python $CELERYD_CHDIR/manage.py celeryd_multi"
For whatever reason, my manage.py script would not execute normally (even though I had chmod +x'd it and configured the shebang properly). You might try this to see if it works.
Try running the following and see what the output tells you:
sh -x /etc/init.d/celeryd start
In my case there were some permission problems on /var/log for the user that celery runs as.
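If it turns out to be the same log/pid directory permissions, a fix along these lines is typical (assuming the celery user and group from the question's config):
sudo mkdir -p /var/log/celery /var/run/celery
sudo chown -R celery:celery /var/log/celery /var/run/celery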