Starting a Django project using cron - django

I have a Django project which I want to start every time the device reboots. I can start the Django application manually with the following four commands:
cd /home/pi/reg-sign
source djenv/bin/activate
cd /home/pi/reg-sign/mysite
python manage.py runserver 192.168.4.1:8000
But I cannot get this working in a crontab line.
So far I have:
@reboot cd /home/pi/reg-sign && /home/pi/reg-sign/djenv/bin/python /home/pi/reg-sign/mysite/manage.py python runserver 192.168.4.1:8000
however this does not start the server.
Any help would be much appreciated.
Many thanks,
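For reference, a crontab entry along these lines should work (an untested sketch based on the paths above): drop the stray python token before runserver and call the virtualenv's interpreter by its full path, so no activate step is needed:
@reboot cd /home/pi/reg-sign/mysite && /home/pi/reg-sign/djenv/bin/python manage.py runserver 192.168.4.1:8000 >> /home/pi/reg-sign/cron.log 2>&1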

Related

Jenkins execution of manage.py runserver

I couldn't find information on how to get a build result from manage.py runserver.
It runs constantly and no server log is output.
This means I can't execute the next shell command or trigger the next job.
The only solution I have come to is to use parallel jobs.
Anyone here with a better idea?
Thanks.
And just a few minutes ago my colleague showed me how to use nohup; with Jenkins's BUILD_ID variable overridden (Jenkins kills any process still tagged with the build's BUILD_ID when the build finishes, so overriding it leaves the server alive), it looks like this to get Success from the build while the server keeps running:
BUILD_ID=dontKillMe nohup python manage.py runserver host_server &

Jenkins: start Django server after successful build

We use Jenkins as our continuous integration system. We have two Django servers validated by Jenkins.
Jenkins successfully validates the first server. The second server depends on the first one, so at the end of the first server's validation we would like to launch the first server itself.
We are using Python, virtualenv and Django, and defined the Virtualenv Builder as follows:
pip install -r requirements.txt
rm -f .coverage
fab localhost test
coverage xml
nohup python manage.py runserver 9090 &
The issue is that the build never ends due to the nohup.
How can I launch the server after a successful build?
I had the same problem.
Ken,
I tried using fabric, but again python manage.py runserver runs continuously, so the next command never starts.
And just a few minutes ago my colleague showed me how to use nohup; with the BUILD_ID variable of Jenkins set, it looks like this to get Success from the build and leave the Django server running:
BUILD_ID=dontKillMe nohup python manage.py runserver host_server &
This worked for our Django project testing.
Since you are using fabric to test, I would recommend defining another fabric task, say deploy, which you could call when the build succeeds.
Just as the call to fab returns for a successful build (so that you reach the nohup line), I would expect the deploy task to return as well.
You may also want to consider making the server a service (either via an /etc/init.d-style script, or Upstart if on Ubuntu), and have the fabric task stop the currently running instance, copy over whatever new files it needs (or a similar process), and then restart it.
Assuming what you have above is a bash script or similar, you may also want to add set -e so that, if any of the commands returns a non-zero code, the script fails and, in turn, fails the build.
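A minimal sketch of such a build step, assuming the same commands as above (fab deploy stands in for the hypothetical deploy task suggested here):
#!/bin/bash
# fail the build as soon as any command exits non-zero
set -e
pip install -r requirements.txt
rm -f .coverage
fab localhost test
coverage xml
# hypothetical fabric task that restarts the server out of band and returns
fab deploy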

Django 1.2 - Management - Command - Can't Run manage.py commands on crontab

In my project I have an app, my_app, with a management command, my_command.py.
Over SSH I try:
from inside /my/folder/project/and/app/, python2.4 manage.py my_command, and all is OK,
but if I try python2.4 /my/folder/project/and/app/manage.py my_command, manage.py doesn't know my command...
I'm trying to run my command from a crontab.
Thx
laurent
In my experience, issues like this happen for several reasons.
First, I'd check the Python interpreter being used. If you are using virtualenv or something like that, you should ensure cron uses the correct python executable.
If your server has SELinux, you should ensure it isn't denying cron read access to some files.
I also had an issue like this because the settings file (I use a separate settings file to make it less verbose) didn't exist.
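For example, a crontab entry along these lines (an illustrative sketch using the question's paths; the interpreter location /usr/bin/python2.4 is an assumption, so substitute the output of which python2.4 or your virtualenv's bin/python) avoids both the interpreter and working-directory pitfalls:
# run the management command hourly, from inside the project directory
0 * * * * cd /my/folder/project/and/app && /usr/bin/python2.4 manage.py my_command >> /tmp/my_command.log 2>&1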

Getting Django in a VirtualEnv to run through Upstart

I've been trying to trudge through the docs and examples to get my Django app running through Upstart so I can have it running all the time, but am unable to do so.
Here's my upstart configuration file located at /etc/init/myapp.conf:
start on startup
#expect daemon
#respawn
console output
script
chdir /app/env/bin
exec source activate
exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
end script
When I type sudo service myapp start, the console says that it has started but it doesn't seem to be running.
Is it possible to see some debugging output to see what's going wrong?
I need to run my Django application as another user, i.e. djangouser. How can I do so?
(I've been commenting out some lines to test where the service is going wrong.) This is not for production use, only for my internal development.
Thanks.
Edit #1:
I have wrapped both my commands into a simple script at /app/run.sh
#!/bin/bash
cd /app/env/bin
source activate
cd /app/src
python manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
..and I've modified my /etc/init/myapp.conf to
start on startup
expect daemon
exec su - djangouser -c "bash /app/run.sh"
When executing sudo service myapp start, the application starts, but the tracked PID is wrong and I can't seem to kill it with sudo service myapp stop.
Any ideas?
Change:
exec source activate
to just:
source activate
This will load the virtual environment. You should probably drop the other "exec" as well. If that doesn't work, please post your Upstart logs.
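With those changes, the original config would look something like this (an untested sketch; note that Upstart runs script stanzas with /bin/sh, where source is spelled .):
start on startup
console output
script
    cd /app/env/bin
    . ./activate
    exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000
end script
Dropping the trailing > /dev/null 2>&1 & keeps runserver in the foreground so Upstart can track and stop it.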
A couple of remarks:
logging the output somewhere other than /dev/null might be useful :)
runserver is not meant to be stable; I see it crash sometimes, and in that case I guess you'll need to force Upstart to reload, or put the runserver call in a while loop (see the sketch after this list)
you will not be able to use an interactive debugger like ipdb with this setup
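The while-loop variant from the last remark could look like this inside the script stanza (a sketch, reusing the question's paths):
script
    cd /app/src
    while true; do
        # restart runserver whenever it exits; log output instead of discarding it
        /app/env/bin/python manage.py runserver 0.0.0.0:8000 >> /var/log/myapp.log 2>&1
        sleep 2
    done
end script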
How about using nginx and uWSGI with your virtualenv? This will give you a production-like environment but will also start your Django app at startup. If you are using Ubuntu 10 you should take a look at uwsgi-python, otherwise just install the latest uWSGI. I usually start my virtualenv in uWSGI like so: sudo nano /etc/uwsgi-python/apps-available/app.xml
<uwsgi>
<socket>127.0.0.1:8889</socket>
<pythonpath>/home/user/code/</pythonpath>
<virtualenv>/home/user/code</virtualenv>
<pythonpath>/home/user/code/app</pythonpath>
<app mountpoint="/">
<script>uwsgiApp</script>
</app>
</uwsgi>
Also set up your nginx file at /etc/nginx/apps-available/default (the file is fairly straightforward). This will keep your Django app up at all times.
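A minimal nginx server block to match the socket above might look like this (a sketch; server_name and the listen port are placeholders):
server {
    listen 80;
    server_name example.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8889;
    }
}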
su is problematic because it forks the process. You can use sudo -u djangouser instead, or simply add
setuid djangouser
to your conf file.
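For example, a minimal conf along these lines (a sketch assuming the paths from the question; setuid needs a reasonably recent Upstart):
description "myapp as djangouser"
start on runlevel [2345]
respawn
setuid djangouser
exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000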
This should work on Ubuntu 14.04 and possibly other versions as well:
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app start
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# cat /var/log/upstart/my_app.log
Performing system checks...
System check identified no issues (0 silenced).
You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.
June 30, 2015 - 06:54:18
Django version 1.8.2, using settings 'my_test.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app stop
my_app stop/waiting
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app stop/waiting
Here is the config to make it work:
root@vagrant-ubuntu-trusty-64:/etc/init# cat my_app.conf
description "my_app upstart script"
start on runlevel [23]
respawn
script
su vagrant -c "source /home/vagrant/dj_app/bin/activate; /home/vagrant/dj_app/bin/python /home/vagrant/my_test/manage.py runserver 0.0.0.0:8080"
end script

Running non-django commands from a sub-directory for a Django project hosted on Heroku?

I've deployed a Django application on Heroku. The application by itself works fine. I can run commands such as heroku run python project/manage.py syncdb and heroku run python project/manage.py shell and this works well.
My Django project makes use of the Python web-scraping library called Scrapy. Scrapy provides a command, scrapy crawl abc, which helps me scrape the websites I have defined in the Scrapy application. When I run a Scrapy command such as scrapy crawl spidername on my local machine, the application is able to scrape the data and copy it to my database. However, when I run the same command on Heroku from a sub-directory of my project directory, heroku run scrapy crawl spidername, nothing happens.
I don't see anything in the Heroku logs which can point to where I'm going wrong:
2012-01-26T15:45:38+00:00 heroku[run.1]: State changed from created to starting
2012-01-26T15:45:43+00:00 app[run.1]: Awaiting client
2012-01-26T15:45:43+00:00 app[run.1]: Starting process with command `project/spiderMainDir scrapy crawl spidername`
2012-01-26T15:45:44+00:00 heroku[run.1]: State changed from starting to up
2012-01-26T15:45:46+00:00 heroku[run.1]: State changed from up to complete
2012-01-26T15:45:46+00:00 heroku[run.1]: Process exited
Some additional information:
My scrapy app calls pipelines.py to save the scraped items to the database. In the pipelines.py file, this is what I've written to invoke the Django settings so that I can import my models and save data to the database from the scrapy application.
import os,sys
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(PROJECT_PATH)
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
Any pointers on where exactly I'm going wrong? How do I execute the scrapy command on Heroku such that my application can scrape an external website and save that data to the database? Isn't heroku run command the way external commands are run on Heroku?
I'm answering my own question because I discovered what the problem was. Heroku for some reason was not able to find scrapy when I executed the command from a sub-directory and not the top-level directory.
The command heroku run ... is generally run from the top-level directory. My project, which uses scrapy, requires going to a sub-directory and running the scrapy command from there (this is how scrapy is designed). This wasn't working on Heroku. So I opened a Heroku bash session by typing heroku run bash to see what was going on. When I ran the scrapy command from the top-level directory, Heroku recognized it, but when I went to a sub-directory, it failed to recognize the scrapy command. I suppose there is some problem related to the PATH. From the sub-directory, I had to specify the complete path to scrapy (~/bin/scrapy crawl spidername) to be able to execute it.
To run the scrapy command without opening the Heroku bash manually each time, my workaround was to create a shell script containing the following code, put it under the bin directory of my top-level directory, and push the changes to Heroku.
bin/scrapy.sh:
#!/usr/bin/env bash
# cd into the sub-directory scrapy expects, then forward all arguments to the scrapy binary
cd ~/project/spiderSubDirectory
~/bin/scrapy "$@"
After this was done, I could execute $ heroku run scrapy.sh crawl spidername from my local bash. I suppose it's not the best solution, but it works.
Isn't the way external commands are run in Heroku like heroku run appdir command?
It's actually heroku run command. By including your appdir in there, you produced an invalid command. Heroku's output doesn't give useful error messages when these commands fail; it just tells you that the command finished, which is what you're seeing. So for you, just change the command to something like:
heroku run scrapy crawl spidername