cron job 'django-cron' not running in ubuntu crontab - django

I am using django_cron to schedule a job. When I run python manage.py runcrons it works fine, but after adding the cron job to the Ubuntu crontab, the job is not executed.
My setting.py is:
CRON_CLASSES = [
    "home.cron.HomeCronJob",
]

FAILED_RUNS_CRONJOB_EMAIL_PREFIX = []

INSTALLED_APPS = (
    'django.contrib.auth',
    '..................',
    'django_cron',
)
My cron.py file is:
from django_cron import CronJobBase, Schedule
from home.management.commands.auto_renueva import republishAds


class HomeCronJob(CronJobBase):
    RUN_EVERY_MINS = 2
    MIN_NUM_FAILURES = 2

    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'home.home_cron_job'

    def do(self):
        republishAds()
Then I created a shell script, cron.sh, to run this job:
#! /bin/bash
source /home/cis/ENV/muna/bin/activate
python /home/cis/DjangoLive/Newmunda/mund2anuncios/manage.py runcrons
deactivate
and this is the line I added to the Ubuntu crontab:
*/1 * * * * /home/cis/DjangoLive/Newmunda/mund2anuncios/crons.sh >> /home/cis/Desktop/crons.log 2>> /home/cis/Desktop/cron_errors.log
Please suggest what I am doing wrong here.
Thanks in advance.

As a guess,
python /home/cis/DjangoLive/Newmunda/mund2anuncios/manage.py runcrons
will fail because PATH is not set in the cron environment. You should include the full path to the python interpreter.
Another common error in cron jobs is a missing execute permission on the script. Cron errors are normally emailed to root, so you should find more information about the errors in root's mailbox.
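For example, a minimal sketch of both fixes; the interpreter path below is inferred from the virtualenv activate line in the question, so adjust it to your layout:

# make sure the wrapper script is executable
chmod +x /home/cis/DjangoLive/Newmunda/mund2anuncios/crons.sh

# or skip the wrapper and call the virtualenv's interpreter directly from the crontab
*/1 * * * * /home/cis/ENV/muna/bin/python /home/cis/DjangoLive/Newmunda/mund2anuncios/manage.py runcrons >> /home/cis/Desktop/crons.log 2>&1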

Related

Can't send email in management command run by cron

I have a strange problem with a Django management command I am running via cron.
I have a production server set up to use Mailgun, and a management command that simply sends an email:
from django.core.mail import send_mail
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'Send email'

    def handle(self, *args, **options):
        send_mail('Test email', 'Test content', 'noreply@example.com',
                  ['me@example.com'], fail_silently=False)
This script works perfectly if I run it via the command line (I'm using virtualenvwrapper):
> workon myapp
> python manage.py do_command
or directly:
> /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
But when I set it up with cron (crontab -e):
*/1 * * * * /home/user/.venvs/project/bin/python /home/user/project/manage.py do_command
The script runs (without error), but the email isn't sent.
What could be going on?
OK, the issue was that the wrong DJANGO_SETTINGS_MODULE environment variable was set, and there were a few things throwing me off the scent:
My manage.py script defaults to the "development" version of my settings, settings.local, which uses the console email backend. Cron suppresses all output, so I wasn't seeing that happening.
Secondly, I was testing in a shell that already had DJANGO_SETTINGS_MODULE set to settings.production, so the script appeared to run correctly when I ran it from the command line.
The fix is easy: add DJANGO_SETTINGS_MODULE to the crontab:
DJANGO_SETTINGS_MODULE=config.settings.production
*/1 * * * * ...
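To confirm which email backend a given settings module selects, you can print it from a Django shell; a quick check, reusing the paths from the question:

echo "from django.conf import settings; print(settings.EMAIL_BACKEND)" | \
  DJANGO_SETTINGS_MODULE=config.settings.production /home/user/.venvs/project/bin/python /home/user/project/manage.py shell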

django-celery as a daemon: not working

I have a website project written with Django, Celery and RabbitMQ, and a '.delay' task (the task creates a new folder) is called when a button is clicked.
Everything works fine (the .delay task is called and a new folder is created) when I run celery via manage.py:
python manage.py celeryd
However, when I run celery as a daemon, the task is not executed (no folder is created), even though there is no error.
I was kind of following the tutorial: http://www.arruda.blog.br/programacao/django-celery-in-daemon/
My settings are:
/etc/default/celeryd:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/myproject"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS=""
# Name of the celery config module.
CELERY_CONFIG_MODULE="myproject.settings"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/w1.log"
CELERYD_PID_FILE="/var/run/celery/w1.pid"
# Workers should run as an unprivileged user.
#CELERYD_USER="root"
#CELERYD_GROUP="root"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myproject.settings"
The corresponding folders (/var/log/celery and /var/run/celery) have been created too.
For the /etc/init.d/celeryd init script, I used this version:
https://raw.github.com/ask/celery/1da3aa43d1e6de525beeda398d0acb8841d5b4d2/contrib/generic-init.d/celeryd
For /var/www/myproject/myproject/settings.py, I have:
import djcelery
djcelery.setup_loader()

BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"

INSTALLED_APPS = (
    'djcelery',
    ...
)
There was no error when I started celery using:
/etc/init.d/celeryd start
but there were no results either. Does anyone know how to fix this problem?
Celery's docs have a daemon troubleshooting section that might be helpful. Celery has a flag that lets you run your init script without actually daemonizing, and that should show what's going wrong:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
Newer versions of that init script have a dryrun command that's an easier-to-remember way to run the start command without daemonizing.
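If your copy of the script has it, that is simply the following (check the script's usage text if the command isn't recognized, since command names have varied between versions):

sh /etc/init.d/celeryd dryrun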

ConfigParser does not load sections when run via crontab

When I run the python script via the command line, everything works perfectly, but when the script is run from cron, ConfigParser produces an empty list of sections:
import ConfigParser
from tendo import singleton  # assumption: SingleInstance comes from the tendo package

me = singleton.SingleInstance()

######### Accessing the configuration file #######################################
config = ConfigParser.RawConfigParser()
config.read('./matrix.cfg')
sections = config.sections()

######### Build the current map ##################################################
print sections
Here is the cron job
* * * * * /usr/bin/python /etc/portmatrix/matrix.py | logger
and here is the output
Feb 12 12:59:01 dns01 CRON[30879]: (root) CMD (/usr/bin/python /etc/portmatrix/matrix.py | logger)
Feb 12 12:59:01 dns01 logger: []
ConfigParser tries to read the file ./matrix.cfg.
The path ./ means "in the current working directory".
So what assumption are you making about the current directory when the script is run from cron? I guess you have a file /etc/portmatrix/matrix.cfg and assume that ./ really means "in the same directory as the running script". This, however, is not true: cron normally starts jobs in the crontab owner's home directory, not in the script's directory.
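You can verify this by logging the working directory from inside the cron job; a one-line check (Python 2, to match the question's code):

import os
print os.getcwd()  # under cron this is usually the crontab owner's home directory, not /etc/portmatrix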
The simple fix is to provide the full path to the configuration file. E.g.:
config = ConfigParser.RawConfigParser()
config.read('/etc/portmatrix/matrix.cfg')
I ran into a similar situation and was able to fix it thanks to umläute's answer.
I'm using the os module, however, so in your case this would be something like:
import os
...
base_path = os.path.dirname(os.path.realpath(__file__))
config = ConfigParser.RawConfigParser()
config.read(os.path.join(base_path, 'matrix.cfg'))

celery parallel tasking error 'no result backend configured'

Running django-celery 3.1.16, Celery 3.1.17, Django 1.4.16. I'm trying to run some parallel tasks using 3 workers and collect the results with the following:
from celery import group
positions = []
jobs = group(celery_calculate_something.s(data.id) for data in a_very_big_list)
results = jobs.apply_async()
positions.extend(results.get())
The task celery_calculate_something returns an object to place in the results list:

@app.task(ignore_result=False)
def celery_calculate_something(id):
    <do stuff>
No matter what I try, I always get the same result when calling get() on results:
No result backend configured. Please see the documentation for more information.
However, the results backend IS configured: I have many other tasks with ignore_result=False merrily adding to the task meta table in django_celery. It must be something to do with using the results returned from group(). I should note that the backend is not set explicitly in settings; django-celery seems to set it automatically for you.
I have the worker collecting events using:
manage.py celery worker -l info -E
and celerycam running with
python manage.py celerycam
Inspecting the results object returned (an instance of GroupResult), I can see that the backend attribute is an instance of DisabledBackend. Is this the problem? What have I misunderstood?
You did not configure the results backend, so you need tables to store the results. Since you have django-celery, add it to INSTALLED_APPS in your settings.py file and then run the migration (python manage.py migrate). After that, open your celery.py file and set your backend to djcelery.backends.database:DatabaseBackend. Here's an example:
app = Celery('almanet',
             broker='amqp://guest@localhost//',
             backend='djcelery.backends.database:DatabaseBackend',
             include=['alm_crm.tasks'])  # references your tasks; don't forget the full absolute path
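If you configure Celery through your Django settings rather than in celery.py, the equivalent setting (to the best of my knowledge, for django-celery's database backend) is:

# settings.py
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'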
After that you can import the result module (from celery import result). Now you can save the group result and fetch it again later by its id:
from celery import group

positions = []
jobs = group(celery_calculate_something.s(data.id) for data in a_very_big_list)
results = jobs.apply_async()
results.save()
some_task_result = result.GroupResult.restore(results.id)
print some_task_result.ready()
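Once ready() returns True, get() on the restored GroupResult returns the subtasks' return values in order; a minimal sketch:

if some_task_result.ready():
    positions = some_task_result.get()  # list of each subtask's return value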

Running periodic tasks with django and celery

I'm trying to create a simple background periodic task using the Django-Celery-RabbitMQ combination. I installed Django 1.3.1, and downloaded and set up djcelery. Here is what my settings.py file looks like:
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
....
import djcelery
djcelery.setup_loader()
...
INSTALLED_APPS = (
    'djcelery',
)
And I put a 'tasks.py' file in my application folder with the following contents:
from celery.task import PeriodicTask
from celery.registry import tasks
from datetime import timedelta
from datetime import datetime


class MyTask(PeriodicTask):
    run_every = timedelta(minutes=1)

    def run(self, **kwargs):
        # str() is needed here: concatenating a datetime to a string raises TypeError
        self.get_logger().info("Time now: " + str(datetime.now()))
        print("Time now: " + str(datetime.now()))

tasks.register(MyTask)
And then I start up my django server (local development instance):
python manage.py runserver
Then I start up the celerybeat process:
python manage.py celerybeat --logfile=<path_to_log_file> -l DEBUG
I can see entries like this in the log:
[2012-04-29 07:50:54,671: DEBUG/MainProcess] tasks.MyTask sent. id->72a5963c-6e15-4fc5-a078-dd26da663323
And I can also see the corresponding entries getting created in the database, but I can't find where the text I specified in the run function of the MyTask class is being logged.
I tried fiddling with the logging settings, and tried using the django logger instead of the celery logger, but to no avail. I'm not even sure my task is getting executed. If I print any debug information in the task, where does it go?
Also, this is the first time I'm working with any kind of message queuing system. It looks like the task will be executed as part of the celerybeat process, outside the django web framework. Will I still be able to access all the django models I created?
Thanks,
Venkat.
celerybeat is the component that publishes tasks when they are due, but it does not execute them. Your task instances are stored on the RabbitMQ server. You need to run the celeryd daemon to actually execute your tasks:
python manage.py celeryd --logfile=<path_to_log_file> -l DEBUG
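As a convenience (if I remember the old django-celery CLI correctly), the worker can also embed the beat scheduler via the -B flag, so a single process both schedules and executes the periodic tasks:

python manage.py celeryd -B --logfile=<path_to_log_file> -l DEBUG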
Also, if you are using RabbitMQ, I recommend installing the RabbitMQ management plugin:
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
The web UI will then be available at http://<server>:55672/ (login: guest, password: guest). There you can check how many tasks are sitting in your RabbitMQ instance.
You should also check the worker's logs: celery only sends the task messages to RabbitMQ, and it is the celeryd worker that picks them up and executes them, so the output of your tasks' print statements should appear in the worker's log rather than in RabbitMQ's logs.