How to use upstart? - Django

I have a Django app that I am looking to deploy. I would like to use upstart to run the app.
So far I have added the upstart.conf file to /etc/init
and tried to run it using
start upstart
but all I get is
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.90" (uid=1000 pid=5873 comm="start upstart ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
The contents of the .conf file are:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
pre-start script
    # plain shell inside the stanza: use cd, and don't exec each
    # command (exec replaces the shell, so later lines never run)
    cd $DJANGO_HOME
    rm -f sqlite3.db
    /usr/bin/python manage.py syncdb --noinput
    /usr/bin/python manage.py loaddata fixtures/data.json
    # events are emitted from a script via initctl
    initctl emit --no-wait django_starting
end script
script
    # My startup script, plain old shell scripting here.
    cd $DJANGO_HOME
    exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
    #exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
end script
# create a custom event in case we want to chain later
post-start script
    initctl emit --no-wait django_running
end script
I have also tried using a much simpler .conf file, but I get more or less the same error.
I would really appreciate it if someone could give me an idea of what I'm doing wrong.

Upstart jobs can only be started by root, and that error appears if you try to start one as a normal user. Try this:
sudo start upstart
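Note also that the job name comes from the conf file's name under /etc/init, not from the word "upstart" itself. If the file were named, say, django-nexus7.conf (a hypothetical name), you would control it with:
sudo start django-nexus7
sudo status django-nexus7
sudo stop django-nexus7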

Related

Run celery with Django at start

I am using Django 1.11 and Celery 4.0.2.
We are using a PaaS (OpenShift 3), which runs on Kubernetes/Docker.
I am using a Python image; it only knows how to run one command on start (and it watches the exit code, restarting on failure).
How can I run a celery worker at the same time as Django, so that a failure of either one kills both processes (worker and Django)?
I am using wsgi and gevent to start Django.
Thank you!
You could use circus (supervisord is an alternative, but it doesn't support Python 3 currently).
In circus you create a circus.ini in your project directory.
Something like:
[watcher:celery]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = celery
args = worker --app=your_app --loglevel=DEBUG -E
[watcher:django]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = python
args = manage.py runserver
Then you start both with:
virtualenv/bin/circusd circus.ini
It should start both processes. I think this is a good way to create a "start" plan for your project. Maybe you want to add celerybeat or use channels (websockets in Django); then you just add a new watcher to your circus.ini, as sketched below. It's pretty dynamic.
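For example, a celerybeat watcher following the same pattern (the app name and log level here are placeholders) might look like:
[watcher:celerybeat]
working_dir = /var/www/your_app
virtualenv = virtualenv
cmd = celery
args = beat --app=your_app --loglevel=INFO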

Not able to run Django server in Chef

After vagrant up, the Django server should be running automatically, but I'm not able to get it to work.
After vagrant ssh, manage.py is in /pro/e7.
In the Chef recipe, I do:
execute 'django_server' do
  command 'python ./manage.py runserver'
end
When I issue the command
vagrant provision
it says that
python can't find the file.
I have no idea how to fix it. Please help if you know the answer.
Python doesn't know the path of manage.py. Either use the full path of manage.py, or, if the path is /pro/e7/manage.py, set the working directory like this:
execute "django_server" do
  cwd "/pro/e7"
  command "python ./manage.py runserver"
end
If you don't know the path of manage.py, or you want to store it at a specific location, then you can do:
cookbook_file "/home/username/manage.py" do
  # source is resolved relative to the cookbook's files/ directory,
  # not an absolute path on the host system
  source "manage.py"
  owner "username"
  group "username"
  mode "0755"
  action :create
end
execute "django_server" do
  cwd "/home/username/"
  command "python ./manage.py runserver"
end
This will put the manage.py file in /home/username/ on the Vagrant guest machine and execute it.
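One caveat: runserver stays in the foreground, so the execute resource above will block the Chef run until the server is killed. A dev-only sketch that backgrounds it (the log path is an arbitrary choice) would be:
execute "django_server" do
  cwd "/home/username/"
  # nohup + & lets the chef-client run finish; a real deployment
  # would use a service resource instead of backgrounding
  command "nohup python ./manage.py runserver 0.0.0.0:8000 > /tmp/runserver.log 2>&1 &"
end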

upstart hangs on stop

I'm having an issue with upstart where I can start the job, but when I run
sudo stop up
it hangs.
This is the .conf file:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
expect fork
script
# My startup script, plain old shell scripting here.
chdir $DJANGO_HOME
pwd
exec /usr/bin/python manage.py run_gunicorn -c config/gunicorn
#exec /usr/bin/python manage.py runserver $DJANGO_HOST:$DJANGO_PORT &
# create a custom event in case we want to chain later
emit django_running
end script
I would really appreciate it if someone could give me an idea of why it hangs.
I think I have figured it out, or at least got something working, using:
# my upstart django script
# this script will start/stop my django development server
# optional stuff
description "start and stop the django development server"
version "1.0"
author "Calum"
console log
# configuration variables.
# You'll want to change these as needed
env DJANGO_HOME=/home/calum/django/django-nexus7/nexus7
env DJANGO_PORT=8000
env DJANGO_HOST=0.0.0.0 # bind to all interfaces
# tell upstart we're creating a daemon
# upstart manages PID creation for you.
#expect fork
script
    # My startup script, plain old shell scripting here.
    cd $DJANGO_HOME
    /usr/bin/python manage.py run_gunicorn -c config/gunicorn
end script
Things I've learnt that may help others:
don't use exec inside the script stanza; just write the commands as if you were in a shell
use expect fork if you fork once
use expect daemon if you fork twice
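A quick way to sanity-check the result (using the job name "up" from above) is to compare the PID upstart tracks with the actual gunicorn master:
sudo status up
ps aux | grep gunicorn
If the PID in the status output doesn't match a live gunicorn process, upstart is tracking the wrong fork, which is exactly the situation that makes stop hang.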

Getting Django in a VirtualEnv to run through Upstart

I've been trying to trudge through the docs and examples to get my Django app running through upstart so I can have it running all the time, but am unable to do so.
Here's my upstart configuration file located at /etc/init/myapp.conf:
start on startup
#expect daemon
#respawn
console output
script
chdir /app/env/bin
exec source activate
exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
end script
When I type sudo service myapp start, the console says that it has started but it doesn't seem to be running.
Is it possible to see some debugging output to see what's going wrong?
I need to run my Django application as another user, i.e. djangouser. How can I do so?
(I've been commenting out some lines to test where the service is going wrong.) This is not for production use, only for my internal development.
Thanks.
Edit #1:
I have wrapped both my commands into a simple script at /app/run.sh
#!/bin/bash
cd /app/env/bin
source activate
cd /app/src
python manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
...and I've modified my /etc/init/myapp.conf to:
start on startup
expect daemon
exec su - djangouser -c "bash /app/run.sh"
When executing sudo service myapp start, the application starts but the PID is wrong and I can't seem to kill it with sudo service myapp stop.
Any ideas?
Change:
exec source activate
to just:
source activate
This will load the virtual environment. You should probably drop the other "exec". If that doesn't work, please post your upstart logs.
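One caveat worth checking: upstart runs script stanzas with /bin/sh, where the sourcing builtin is spelled "." rather than "source", so inside a conf file you may need:
. /app/env/bin/activate
Alternatively, skip activation entirely and call the virtualenv's interpreter directly (/app/env/bin/python, as the question already does); a virtualenv's python finds its own site-packages without any activate step.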
A couple of remarks:
logging the output somewhere other than /dev/null might be useful :)
runserver is not meant to be stable; I see it crashing sometimes, and in that case I guess you'll need to force upstart to reload, or put the runserver call in a while loop
you will not be able to use an interactive debugger like ipdb with this setup
How about using nginx and uwsgi with your virtualenv? This will give you a production-like environment and will also start your Django app at startup. If you are using Ubuntu 10 you should take a look at uwsgi-python; otherwise just install the latest uwsgi. I usually start my virtualenv in uwsgi like so: sudo nano /etc/uwsgi-python/apps-available/app.xml
<uwsgi>
<socket>127.0.0.1:8889</socket>
<pythonpath>/home/user/code/</pythonpath>
<virtualenv>/home/user/code</virtualenv>
<pythonpath>/home/user/code/app</pythonpath>
<app mountpoint="/">
<script>uwsgiApp</script>
</app>
</uwsgi>
Also set up your nginx file at /etc/nginx/apps-available/default (the file is fairly straightforward). This will keep your Django app up at all times.
su is problematic because it forks the process. You can use sudo -u djangouser instead, or simply add
setuid djangouser
in your conf file.
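A minimal sketch of a conf file along these lines (paths taken from the question; it runs the virtualenv's python directly, so no activate step is needed):
description "myapp development server"
start on runlevel [2345]
respawn
setuid djangouser
chdir /app/src
exec /app/env/bin/python manage.py runserver 0.0.0.0:8000
Since runserver stays in the foreground, no expect stanza is needed and stop can signal the right PID.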
This should work on Ubuntu 14.04 and possibly other versions as well:
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app start
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# cat /var/log/upstart/my_app.log
Performing system checks...
System check identified no issues (0 silenced).
You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.
June 30, 2015 - 06:54:18
Django version 1.8.2, using settings 'my_test.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app stop
my_app stop/waiting
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app stop/waiting
Here is the config to make it work:
root@vagrant-ubuntu-trusty-64:/etc/init# cat my_app.conf
description "my_app upstart script"
start on runlevel [23]
respawn
script
su vagrant -c "source /home/vagrant/dj_app/bin/activate; /home/vagrant/dj_app/bin/python /home/vagrant/my_test/manage.py runserver 0.0.0.0:8080"
end script

RabbitMQ celeryd/celerybeat not executing tasks in production as a daemon

Last week I set up RabbitMQ and Celery on my production system, after having tested everything on my local dev box where it all worked fine.
I get the feeling that my tasks are not being executed in production, since I have about 1200 tasks that are still in the queue.
I run a CentOS 5.4 setup, with celeryd and celerybeat daemons and WSGI.
I have added the import to the wsgi module.
When I run, /etc/init.d/celeryd start I get the following response
[root@myvm myproject]# /etc/init.d/celeryd start
celeryd-multi v2.3.1
> Starting nodes...
> w1.myvm.centos01: OK
When I run /etc/init.d/celerybeat start I get the following response
[root@myvm fundedmyprojectbyme]# /etc/init.d/celerybeat start
Starting celerybeat...
So from the output it seems that the daemons start successfully, although when looking at the queues, tasks only accumulate rather than getting executed.
Now if I do the same thing but use Django's manage.py instead (./manage.py celeryd and ./manage.py celerybeat), the tasks immediately start to get processed.
My /etc/default/celeryd:
# Where to chdir at start.
CELERYD_CHDIR="/www/myproject/"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
My /etc/default/celerybeat:
# Where the Django project is.
CELERYD_CHDIR="/www/myproject/"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
# Path to celeryd
CELERYD="/www/myproject/manage.py celeryd"
# Path to celerybeat
CELERYBEAT="/www/myproject/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
My /etc/init.d scripts for celeryd and celerybeat are based on the generic init scripts.
Am I missing a part of the configuration?
I ran into a situation once where I had to add 'python' as a prefix to the CELERYD_MULTI variable.
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="python $CELERYD_CHDIR/manage.py celeryd_multi"
For whatever reason, my manage.py script would not execute normally (even though I had run chmod +x and configured my shebang properly). You might try this to see if it works.
Try running the following and see what the output tells you:
sh -x /etc/init.d/celeryd start
In my case there were some permission problems on /var/log for the user that celery runs as.
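For the permission case, a quick check along these lines (user and paths taken from the config above) can confirm it:
# verify the celery user can write the log and pid directories
sudo -u celery touch /var/log/celery/test.log /var/run/celery/test.pid
# if that fails, hand the directories to the celery user
sudo chown -R celery:celery /var/log/celery /var/run/celery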