django "manage.py index" does not execute as a cron job - django

I am trying to develop a site using pinax.
To index the models using djapian, I've been trying to run "manage.py index" as a cron job, but I keep getting a pinax error: "Error: No module named notification". However, the task executes correctly when I run it from the shell. My crontab definition is as follows:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/mypath/test_project
# m h dom mon dow user command
*/1 * * * * root python /root/mypath/test_project/manage.py index >>/tmp/backup.log 2>&1
Can anyone explain why I am receiving this error?

Your error is probably because you don't have your PYTHONPATH set properly, in particular so that it includes the path to the "notification" module. You also need to set DJANGO_SETTINGS_MODULE, if it isn't already set in your environment.
Here's a shell script I use to wrap my own Django-based cron task:
#!/bin/sh
# Tell Django which settings module to use
DJANGO_SETTINGS_MODULE=mysettings
export DJANGO_SETTINGS_MODULE
# Make the project and its apps importable
PYTHONPATH=/path/to/python_libs:/path/to/my_django_apps
export PYTHONPATH
/path/to/python /path/to/my_django_script

As ars alluded to, cron runs with an entirely different set of environment variables than you do. The easiest way to fix that is to use a script similar to what he posted.
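Alternatively, the same fix can live in the crontab itself, since cron accepts variable assignments at the top of the file. A minimal sketch with paths assumed from the question; the apps directory is a guess, so point PYTHONPATH at whatever actually contains the "notification" module:
SHELL=/bin/sh
# make the project (and the directory holding "notification") importable;
# the apps path below is an assumption
PYTHONPATH=/root/mypath/test_project:/root/mypath/test_project/apps
DJANGO_SETTINGS_MODULE=settings
# m h dom mon dow user command
*/1 * * * * root python /root/mypath/test_project/manage.py index >>/tmp/backup.log 2>&1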

Related

Active Django settings file from Celery worker (how to set DJANGO_SETTINGS_MODULE dynamically)

So I already looked around a lot for this but couldn't find a good answer. I'm using Celery and Django 3.2.13, without the django-celery package, since newer versions of Celery don't require it anymore. I managed to set up tasks and execute them using Redis; everything is working as it should there. However, I am integrating this into an existing, quite large Django project. There we specified a couple of Django settings files, not just one, and we run a different one depending on the environment, for instance one for local machines and one for the server. My problem is that I can't seem to track down which settings file is "active" from the Celery worker, which runs the celery.py file in my project root (as the documentation specifies). There the documentation requires you to specify the Django settings file like this:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', "celery_test.settings.development")
Now this works, but if I move the project locally I need to change it to settings.local to make it work, every time. Reading the settings object at runtime, like I do in standard Django files, didn't work, since the Celery worker executes in a different process. So, given this situation, does anyone have an idea how to dynamically fetch the active Django settings file from the Celery worker? Or perhaps pass it in as a variable when starting the worker, like Django's --settings=project.settings.local? Thanks!
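For reference, that setdefault call sits in a celery.py that typically looks like this minimal sketch (layout per the Celery/Django docs; note that setdefault is only a fallback and does nothing when DJANGO_SETTINGS_MODULE is already set in the environment):
# celery.py at the project root
import os
from celery import Celery

# setdefault only supplies a fallback: an exported DJANGO_SETTINGS_MODULE wins
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'celery_test.settings.development')

app = Celery('celery_test')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()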
I found the command line solution
When initializing the celery worker on the command line, just set the environment variable prior to the celery command.
DJANGO_SETTINGS_MODULE='proj.settings' celery -A proj worker -l info
but I'm getting an error with my command line:
DJANGO_SETTINGS_MODULE='celery_test.settings.development' celery -A celery_test worker -l info --pool=solo
DJANGO_SETTINGS_MODULE=celery_test.settings.development : The term
'DJANGO_SETTINGS_MODULE=celery_test.settings.development' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path
is correct and try again.
At line:1 char:1
+ DJANGO_SETTINGS_MODULE='celery_test.settings.development' celery -A c ...
+ CategoryInfo          : ObjectNotFound: (DJANGO_SETTINGS...ngs.development:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
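That error comes from PowerShell, which, unlike POSIX shells, does not support the VAR=value prefix in front of a command. Setting the variable as a separate statement should work; a sketch assuming the same settings path:
# PowerShell equivalent of the VAR=value command prefix
$env:DJANGO_SETTINGS_MODULE = "celery_test.settings.development"
celery -A celery_test worker -l info --pool=solo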

Python Django set DJANGO_SETTINGS_MODULE when migrating source code

Task: Set up a new running environment by being provided python/Django source code and some additional details.
Problem: Cannot get Django-admin to validate due to missing/incorrect settings configuration
"django.core.exceptions.ImproperlyConfigured Requested setting USE_I18N, but settings are not configured. ..... "
You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings
Env details: Ubuntu OS, Python 2.7, Django 1.7, PostgreSQL (also Supervisor + gunicorn). Running a venv located at /home/dave/python-env/vas/bin/activate
Python sys.path
/usr/lib/python2.7/* (multiple defined)
/home/dave/python-env/vas/python2.7/site-packages
So I tried several methods (including #export DJANGO_SETTINGS_MODULE=project-name.settings....) with little success.
How can one set the DJANGO_SETTINGS_MODULE variable?
os.environ.setdefault() is set in wsgi.py (I know this is the next step),
BUT this value is also set in /manage.py ...?
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project_name.settings")
The directory /var/www/app/ (where the Python source code is located) has several entries; one of them is the project_name directory where settings.py sits....
I am new to Python/Django...
I am trying to get django-admin.py validate to run.
Update: Running #python manage.py runserver works OK, and #python manage.py validate|check returns "System check identified no issues (0 silenced)".
Running #django-admin.py check returns the error in question: "You must either define DJANGO_SETTINGS_MODULE ...."
UPDATE 2: Solution
Turns out you don't need django-admin.py, as suggested by Alasdair, and you can use manage.py instead.
Details: if manage.py check returns no issues and #pip install -r requirements.txt completes within your virtual environment, then you can run #manage.py createsuperuser.
I was able to use #manage.py runserver after creating a superuser, and with this new user (the database tables were empty for security reasons) I was able to log into 127.0.0.1:8000/admin. From there the models/tables were visible, and using the admin functions I could create a new user + group to access, as an admin user, the original system that was being migrated.
Also note that a database was required (running Postgres) with db/username/pass as per the settings files, and a git repository (at least empty, initialised) for raven...
Hope this helps someone coming into Python.
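The underlying difference, for anyone hitting the same error: manage.py sets a default DJANGO_SETTINGS_MODULE for its own project, while django-admin does not, so django-admin needs the settings spelled out. A sketch using the placeholder project name from the question:
# django-admin has no default settings module, so pass it explicitly
export DJANGO_SETTINGS_MODULE=project_name.settings
django-admin check
# or equivalently, without touching the environment
django-admin check --settings=project_name.settings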

Django-Crontab with deployment environment

I'm developing a module that uses the crontab.
The framework I'm using is Django, so I installed 'django-crontab'.
I tested it as the instructions describe and got it working in the localhost environment.
When I deployed it on AWS ("sudo service apache2 restart") after running 'python manage.py crontab add', it didn't work.
I think it only works in the localhost environment, doesn't it?
How can I solve this problem?
If you have more than one profile in your Django settings, you should specify one before adding the crontab. If not specified, django-crontab runs with the default environment, which is usually development. To run it in the production environment, you should do the following:
Specify the crontab settings module in settings/product.py, something like:
CRONTAB_DJANGO_SETTINGS_MODULE = 'gold.settings.product'
Then specify the settings profile and add the crontab:
export MYPROJECT_PROFILE=product
python manage.py crontab add
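For reference, django-crontab reads the jobs themselves from a CRONJOBS list in the settings file; a minimal sketch with placeholder app and function names:
# settings file (app and function names are placeholders)
CRONJOBS = [
    # run myapp/cron.py:my_scheduled_job every 5 minutes
    ('*/5 * * * *', 'myapp.cron.my_scheduled_job'),
]
Remember to re-run python manage.py crontab add after changing this list so the system crontab picks up the change.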

Cron job that accesses Django models

What I have done:
I have written a small application in Django with a few models, using sqlite3 as the backend. Now I want to write Python code that clears database records based on certain conditions.
Question:
How can I achieve the above requirement?
I think the cleanest way to do this is to write your own django-admin command.
You can then run the command using manage.py:
python manage.py your_command
Having a command that can be run in a shell, you can easily put it into your crontab. Optionally, django-admin commands can receive command line arguments, if needed.
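A minimal sketch of such a command, assuming a recent Django and placeholder app, model, and field names; it lives at myapp/management/commands/clear_stale.py (with empty __init__.py files in the management/ and commands/ directories):
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from myapp.models import Record  # placeholder model


class Command(BaseCommand):
    help = "Delete records older than the given number of days"

    def add_arguments(self, parser):
        parser.add_argument("--days", type=int, default=30)

    def handle(self, *args, **options):
        # delete everything older than the cutoff; created_at is a placeholder field
        cutoff = timezone.now() - timedelta(days=options["days"])
        stale = Record.objects.filter(created_at__lt=cutoff)
        count = stale.count()
        stale.delete()
        self.stdout.write("Deleted %d records" % count)
The crontab entry then invokes it like any other manage.py command, e.g. python manage.py clear_stale --days 7.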

Supervising virtualenv django app via supervisor

I'm trying to use supervisor in order to manage my django project running gunicorn inside a virtualenv.
My conf file looks like this:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
process_name=%(program_name)s
user=www-data
autostart=false
stdout_logfile=/var/log/gunicorn_diasporamas.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=2
stderr_logfile=/var/log/gunicorn_diasporamas_errors.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=2
The problem is, I need supervisor to launch the command after it has run 'source bin/activate' in my virtualenv. I've been searching Google for an answer but haven't found anything.
Note: I don't want to use virtualenvwrapper
Any help please?
The documentation for the virtualenv activate script says that it only modifies the PATH environment variable, in which case you can do:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin"
...
Since version 3.2 you can use variable expansion to preserve the existing PATH too:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin:%(ENV_PATH)s"
...
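After changing a program definition like this, supervisor has to re-read its configuration before the new environment takes effect; one common sequence, assuming supervisorctl is available:
supervisorctl reread
supervisorctl update
supervisorctl start diasporamas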