How can I perform Django's `syncdb --noinput` with call_command?

>>> from django.core.management import call_command
>>> call_command('syncdb')
executes the syncdb management command from within a python script. However, I want to run the equivalent of
$ python manage.py syncdb --noinput
from within a python shell or script. How can I do that?
The following lines don't work; they either stop to ask whether I want to create a superuser or raise an exception.
>>> call_command('syncdb', noinput = True) # asks for input
>>> call_command('syncdb', 'noinput') # raises an exception
I use Django 1.3.

call_command('syncdb', interactive=False)
EDIT:
I found the answer in the source code. The source code for all management commands can be found in a python module called management/commands/(command_name).py
The python module where the syncdb command resides is django.core.management.commands.syncdb
To find the source code of the command you can do something like this:
(env)$ ./manage.py shell
>>> from django.core.management.commands import syncdb
>>> syncdb.__file__
'/home/user/env/local/lib/python2.7/site-packages/django/core/management/commands/syncdb.pyc'
>>>
Of course, check the contents of syncdb.py, and not syncdb.pyc.
Or looking at the online source, the syncdb.py script contains:
make_option('--noinput', action='store_false', dest='interactive', default=True,
help='Tells Django to NOT prompt the user for input of any kind.'),
which tells us that instead of --noinput on the command line, we should pass interactive=False when automating commands with the call_command function.
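The same rule applies to any option: call_command takes the option's dest, not the flag spelling, as a keyword argument. A minimal sketch, assuming a configured settings module:
from django.core.management import call_command

# '--noinput' is declared with dest='interactive' and action='store_false',
# so pass the dest name with the value the flag would have stored:
call_command('syncdb', interactive=False)
# '-v 3' has dest='verbosity', so the same rule gives:
call_command('syncdb', interactive=False, verbosity=3)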

Related

Shebang command to call script from existing script - Python

I am running a Python script on my Raspberry Pi, at the end of which I want to call a second Python script in the same directory. I call it using os.system() as shown in the snippet below, but get import errors. I understand this is because the system interprets the script name as a shell command and needs to be told to run it with Python, via the shebang line at the beginning of my second script:
#!/usr/bin/env python
However, doing so does not solve the errors.
Here is the ending snippet from the first script:
# Time to Predict E
end3 = time.time()
prediction_time = end3-start3
print ("\nPrediction time: ", prediction_time, "seconds")
i = i+1
print (i)
script = '/home/pi/piNN/exampleScript.py'
os.system('"' + script + '"')
and here is the beginning of my second script:
'#!usr/bin/env python'
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
#from picamera import PiCamera
import argparse
import sys
import time
import numpy as np
import tensorflow as tf
import PIL.Image as Image
Any help is greatly appreciated :)
Since you have not posted the actual errors that you get when you run your code, this is my best guess. First, ensure that exampleScript.py is executable:
chmod +x /home/pi/piNN/exampleScript.py
Second, fix the shebang in exampleScript.py: it needs a leading slash, and it must be the literal first line of the file rather than a quoted string (quoted, it is just a string expression and the kernel never sees it). Change
'#!usr/bin/env python'
to
#!/usr/bin/env python
The setup that you have here is not ideal.
Consider simply importing your other script (make sure both files are in the same directory). Importing it executes all module-level code that is not wrapped in an if __name__ == "__main__": block; conversely, if you need to keep some code from running on import, place it inside that block.
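A minimal sketch of that layout (file and function names are hypothetical):
# exampleScript.py
def main():
    print("doing the second script's work")

if __name__ == "__main__":
    main()  # only runs when executed directly, not on import

# first_script.py
import exampleScript  # nothing runs on import thanks to the guard

exampleScript.main()  # run the work explicitly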
I have two Python files, a.py and b.py, and I set execute permission for b.py with:
chmod a+x b.py
Below is my sample:
a.py
#!/usr/bin/python
print 'Script a'
import os
script = './b.py'
os.system('"' + script + '"')
b.py
#!/usr/bin/python
print 'Script b'
Execute "python a.py", the result is:
Script a
Script b

Add method imports to shell_plus

In shell_plus, is there a way to automatically import selected helper methods, like the models are?
I often open the shell to type:
proj = Project.objects.get(project_id="asdf")
I want to replace that with:
proj = getproj("asdf")
Found it in the docs. Quoted from there:
Additional Imports
In addition to importing the models you can specify other items to
import by default. These are specified in SHELL_PLUS_PRE_IMPORTS and
SHELL_PLUS_POST_IMPORTS. The former is imported before any other
imports (such as the default models import) and the latter is imported
after any other imports. Both have similar syntax. So in your
settings.py file:
SHELL_PLUS_PRE_IMPORTS = (
    ('module.submodule1', ('class1', 'function2')),
    ('module.submodule2', 'function3'),
    ('module.submodule3', '*'),
    'module.submodule4'
)
The above example would directly translate to the following python
code which would be executed before the automatic imports:
from module.submodule1 import class1, function2
from module.submodule2 import function3
from module.submodule3 import *
import module.submodule4
These symbols will be available as soon as the shell starts.
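Applied to the question, you could define the helper in a module of your own and let shell_plus import it at startup (module path and names here are hypothetical):
# myproject/shell_helpers.py
from myapp.models import Project

def getproj(project_id):
    return Project.objects.get(project_id=project_id)

# settings.py
SHELL_PLUS_POST_IMPORTS = (
    ('myproject.shell_helpers', ('getproj',)),
)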
ok, two ways:
1) Using the PYTHONSTARTUP environment variable (see the Python docs):
# in some file (here, I'll call it "~/path/to/foo.py")
def getproj(p_od):
    # importing here because this script runs in any python shell session
    from some_app.models import Project
    return Project.objects.get(project_id=p_od)

# in your .bashrc
export PYTHONSTARTUP="~/path/to/foo.py"
2) Using IPython startup files (my favourite; see the IPython docs):
$ pip install ipython
$ ipython profile create
# put the foo.py script in your profile_default/startup directory.
# django runs ipython if it's installed.
$ django-admin.py shell_plus
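The startup directory of the default profile is ~/.ipython/profile_default/startup/, so the steps look roughly like this:
$ cp ~/path/to/foo.py ~/.ipython/profile_default/startup/
$ django-admin.py shell_plus
>>> proj = getproj("asdf")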

Django loaddata settings error

When trying to use loaddata on my local machine (win/sqlite):
python django-admin.py loaddata dumpdata.json
I get the following error:
raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE) ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I am using djangoconfig app if that helps:
"""
Django-config settings loader.
"""
import os
CONFIG_IDENTIFIER = os.getenv("CONFIG_IDENTIFIER")
if CONFIG_IDENTIFIER is None:
    CONFIG_IDENTIFIER = 'local'
# Import defaults
from config.base import *
# Import overrides
overrides = __import__(
    "config." + CONFIG_IDENTIFIER,
    globals(),
    locals(),
    ["config"]
)
for attribute in dir(overrides):
    if attribute.isupper():
        globals()[attribute] = getattr(overrides, attribute)
projects>python manage.py loaddata dumpdata.json --settings=config.base
WARNING: Forced to run environment with LOCAL configuration.
Problem installing fixture 'dumpdata.json': Traceback
(most recent call last):
File "loaddata.py", line 174, in handle
obj.save(using=using)
...
File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line
234, in execute
return Database.Cursor.execute(self, query, params)
IntegrityError: columns app_label, model are not unique
Don't use django-admin.py for anything other than setting up an initial project. Once you have a project, use manage.py instead - it sets up the reference to your settings file.
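That is, from your project directory:
$ python manage.py loaddata dumpdata.json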
syncdb will load content types, so you need to clear that table before loading your data. Something like this:
c:\> sqlite3 classifier.db
sqlite> delete from django_content_type;
sqlite> ^Z
c:\> python django-admin.py loaddata dumpdata.json
Also, make sure you do not create a superuser, or any user, when you syncdb, as those are likely to also collide with your data fixture ...
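If you prefer the ORM to raw SQL, the same cleanup can be done from a manage.py shell:
>>> from django.contrib.contenttypes.models import ContentType
>>> ContentType.objects.all().delete()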
There are two standard ways to provide your settings to Django.
Using an environment variable: set DJANGO_SETTINGS_MODULE=mysite.settings (or export on Unix)
Alternatively, as an option to django-admin.py: --settings=mysite.settings
Django-config does things differently because it allows you to have multiple settings files, and it works with manage.py to specify which one to use. You should use manage.py whenever possible; it sets up the environment. In your case, try this, where --settings points to the specific settings file you want to use from django-config's config folder:
django-admin.py loaddata dumpdata.json --settings=<config/settings.py>
Actually, --settings wants Python package syntax, so more like <mysite>.config.<your_settings>: a dotted module path, without the .py extension.
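For example, matching the --settings=config.base invocation above, if the override module is config/local.py:
$ python manage.py loaddata dumpdata.json --settings=config.local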

Django timer thread

I would like to compute some information in my Django application on a regular basis.
I need to select and insert data each second and want to use Django ORM.
How can I do this?
In a shell script, set the DJANGO_SETTINGS_MODULE variable and call a python script
export DJANGO_SETTINGS_MODULE=yourapp.settings
python compute_some_info.py
In compute_some_info.py, set up django and import your modules (look at how the manage.py script sets up to run Django)
#!/usr/bin/env python
import sys
try:
    import settings  # Assumed to be in the same directory.
except ImportError:
    sys.stderr.write("Error: Can't find the file 'settings.py'")
    sys.exit(1)
sys.path = sys.path + ['/yourapphome']
from yourapp.models import YourModel
YourModel.compute_some_info()
Then call your shell script in a cron job.
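A crontab entry for it might look like this (the wrapper script path is hypothetical; note that cron's finest granularity is one minute):
* * * * * /home/user/bin/compute_some_info.sh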
Alternatively, you can just keep a process running and sleeping, which is better if the task really runs every second; you would still want to be outside of the webserver, in your own process set up the same way.
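A minimal sketch of such a loop, reusing the names from the script above:
import time
from yourapp.models import YourModel

while True:
    YourModel.compute_some_info()
    time.sleep(1)  # roughly once a second; ignores the time the work itself takes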
One way to do it would be to create a custom command, and invoke python manage.py your_custom_command from cron or windows scheduler.
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
For example, create myapp/management/commands/myapp_task.py which reads:
from django.core.management.base import NoArgsCommand

class Command(NoArgsCommand):
    def handle_noargs(self, **options):
        print 'Doing task...'
        # invoke the functions you need to run on your project here
        print 'Done'
Then you can run it from cron like this:
export DJANGO_SETTINGS_MODULE=myproject.settings; export PYTHONPATH=/path/to/project_parent; python manage.py myapp_task

How can I call a custom Django manage.py command directly from a test driver?

I want to write a unit test for a Django manage.py command that does a backend operation on a database table. How would I invoke the management command directly from code?
I don't want to execute the command on the Operating System's shell from tests.py because I can't use the test environment set up using manage.py test (test database, test dummy email outbox, etc...)
The best way to test such things is to extract the needed functionality from the command itself into a standalone function or class. That abstracts the logic away from the "command execution stuff" and lets you write tests without additional requirements.
But if for some reason you cannot decouple the logic from the command, you can call it from any code using the call_command function, like this:
from django.core.management import call_command
call_command('my_command', 'foo', bar='baz')
Rather than do the call_command trick, you can run your task by doing:
from myapp.management.commands import my_management_task
cmd = my_management_task.Command()
opts = {} # kwargs for your command -- lets you override stuff for testing...
cmd.handle_noargs(**opts)
the following code:
from django.core.management import call_command
call_command('collectstatic', verbosity=3, interactive=False)
call_command('migrate', 'myapp', verbosity=3, interactive=False)
...is equal to the following commands typed in terminal:
$ ./manage.py collectstatic --noinput -v 3
$ ./manage.py migrate myapp --noinput -v 3
See running management commands from django docs.
The Django documentation on call_command fails to mention that out must be redirected to sys.stdout. The example code should read:
from django.core.management import call_command
from django.test import TestCase
from django.utils.six import StringIO
import sys
class ClosepollTest(TestCase):
    def test_command_output(self):
        out = StringIO()
        sys.stdout = out
        call_command('closepoll', stdout=out)
        self.assertIn('Expected output', out.getvalue())
Building on Nate's answer, I have this:
from optparse import OptionParser  # cmd.option_list is an optparse option list

def make_test_wrapper_for(command_module):
    def _run_cmd_with(*args):
        """Run the wrapped module's command with the supplied arguments"""
        cmd = command_module.Command()
        (opts, args) = OptionParser(option_list=cmd.option_list).parse_args(list(args))
        cmd.handle(*args, **vars(opts))
    return _run_cmd_with
Usage:
from myapp.management import mycommand
cmd_runner = make_test_wrapper_for(mycommand)
cmd_runner("foo", "bar")
The advantage here is that if you've used additional options with optparse, this will sort them out for you. It isn't quite perfect (it doesn't pipe outputs yet), but it will use the test database. You can then test for database effects.
I am sure that using Michael Foord's mock module, and rewiring stdout for the duration of a test, would let you get even more out of this technique: testing the output, exit conditions, etc.
An advanced way to run a management command with flexible arguments and captured output:
argv = self.build_argv(short_dict=kwargs)
cmd = self.run_manage_command_raw(YourManageCommandClass, argv=argv)
# output is available via cmd.stdout.getvalue() / cmd.stderr.getvalue()
Add this code to your base test class:
import io
from unittest import mock

@classmethod
def build_argv(cls, *positional, short_names=None, long_names=None,
               short_dict=None, **long_dict):
    """
    Build an argv list which can be provided to a manage command's "run_from_argv".
    1) positional arguments are passed first, as is
    2) short_names are passed next with a one-dash (-) prefix
    3) long_names are passed next with a two-dash (--) prefix
    4) short_dict items are passed as a one-dash (-) prefixed key followed by the value
    5) long_dict items are passed as a two-dash (--) prefixed key followed by the value
    """
    argv = [__file__, None] + list(positional)
    for name in short_names or []:
        argv.append(f'-{name}')
    for name in long_names or []:
        argv.append(f'--{name}')
    for name, value in (short_dict or {}).items():
        argv.append(f'-{name}')
        argv.append(str(value))
    for name, value in long_dict.items():
        argv.append(f'--{name}')
        argv.append(str(value))
    return argv

@classmethod
def run_manage_command_raw(cls, cmd_class, argv):
    """Run any manage.py command as a python object."""
    command = cmd_class(stdout=io.StringIO(), stderr=io.StringIO())
    with mock.patch('django.core.management.base.connections.close_all'):
        # patched to keep run_from_argv from closing the db connection
        command.run_from_argv(argv)
    return command
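A hypothetical usage inside a test, assuming a command class MyCommand that accepts one positional argument and a --limit option:
def test_my_command(self):
    # extra keyword arguments become --long options via **long_dict
    argv = self.build_argv('some_arg', limit=10)
    cmd = self.run_manage_command_raw(MyCommand, argv=argv)
    self.assertIn('expected output', cmd.stdout.getvalue())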