Django management command cannot see arguments?

Since upgrading to Django 1.8, there's a strange bug in my Django management command.
I run it as follows:
python manage.py my_command $DB_NAME $DB_USER $DB_PASS
And then I collect the arguments as follows:
class Command(BaseCommand):
    def handle(self, *args, **options):
        print args
        db_name = args[0]
        db_user = args[1]
        db_pass = args[2]
        self.conn = psycopg2.connect(database=db_name, user=db_user,
                                     password=db_pass)
Previously this worked fine, but now I see this error:
usage: manage.py my_command [-h] [--version] [-v {0,1,2,3}]
[--settings SETTINGS]
[--pythonpath PYTHONPATH]
[--traceback] [--no-color]
manage.py my_command: error: unrecognized arguments: test test test
It's not even getting as far as the print args statement.
If I run it without any arguments, then it errors on the args[0] line, unsurprisingly.
Am I using args wrong here? Or is something else going on?

It is a change in Django 1.8. As detailed here:
Management commands that only accept positional arguments
If you have written a custom management command that only accepts positional arguments and you didn’t specify the args command variable, you might get an error like Error: unrecognized arguments: ..., as variable parsing is now based on argparse which doesn’t implicitly accept positional arguments. You can make your command backwards compatible by simply setting the args class variable. However, if you don’t have to keep compatibility with older Django versions, it’s better to implement the new add_arguments() method as described in Writing custom django-admin commands.

def add_arguments(self, parser):
    parser.add_argument('args', nargs='*')
Add the above for backwards compatibility; breaking this was a really unwise decision by the folks updating Django.
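If you don't need backwards compatibility, a minimal sketch of the add_arguments() route for the command in the question could look like this (the argument names are illustrative, based on the question's psycopg2 call):

from django.core.management.base import BaseCommand
import psycopg2

class Command(BaseCommand):
    def add_arguments(self, parser):
        # named positional arguments instead of the old undeclared *args
        parser.add_argument('db_name')
        parser.add_argument('db_user')
        parser.add_argument('db_pass')

    def handle(self, *args, **options):
        # argparse delivers named arguments via options, not args
        self.conn = psycopg2.connect(database=options['db_name'],
                                     user=options['db_user'],
                                     password=options['db_pass'])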

Related

How can I create a Django auth user on the command line without having to manually enter the password?

I'm using Django 3.2 and the auth module. I would like to create a superuser on the command line (for eventual inclusion in a docker script). When I try this
$ python manage.py createsuperuser --username=joe --email=joe@example.com
I'm prompted for a password. The module does not seem to support a "--password" argument ...
$ python manage.py createsuperuser --username=joe --email=joe@example.com --password=password
usage: manage.py createsuperuser [-h] [--username USERNAME] [--noinput] [--database DATABASE]
[--email EMAIL] [--version] [-v {0,1,2,3}] [--settings SETTINGS]
[--pythonpath PYTHONPATH] [--traceback] [--no-color] [--force-color]
[--skip-checks]
manage.py createsuperuser: error: unrecognized arguments: --password=password
Is there a way I can auto-create a user without manual intervention?
This is done for security reasons: shell commands are usually written to a history file, so if you passed the password as a parameter, an attacker could later read the history and obtain the superuser's password. Instead, createsuperuser can read the DJANGO_SUPERUSER_PASSWORD environment variable to set the password when run non-interactively.
You thus can set the password with:
DJANGO_SUPERUSER_PASSWORD=somepassword python manage.py createsuperuser --no-input --username=joe --email=joe@example.com
I would however strongly advise against hard-coding DJANGO_SUPERUSER_PASSWORD in the script file; define the environment variable elsewhere.
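For completeness, the same thing can be done from Python (e.g. in a small bootstrap script) instead of the shell. This is a sketch assuming Django 3.0+, where createsuperuser in non-interactive mode reads the DJANGO_SUPERUSER_* environment variables, the default User model, and a hypothetical myproject.settings module:

import os
import django
from django.core.management import call_command

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # hypothetical settings module
django.setup()

# The password comes from the environment, not from a command-line argument,
# so it never ends up in the shell history or the process list.
# As noted above, prefer setting DJANGO_SUPERUSER_PASSWORD outside the script.
os.environ.setdefault('DJANGO_SUPERUSER_PASSWORD', 'somepassword')
call_command('createsuperuser', interactive=False,
             username='joe', email='joe@example.com')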

Celery: Getting unexpected run_command() takes 1 positional argument but 318 were given error

I am attempting to run an async task and am getting an unexpected error: run_command() takes 1 positional argument but 318 were given.
I have a list of commands that I want to run from a celery task.
run_command.chunks(iter(commands), 10).group().apply_async()

@task
def run_command(commands):
    for command in commands:
        print("RUNNING, ", command)
        print("Pid: ", os.getpid())
        os.system(command)
As shown above, I am attempting to break my commands down into batches that will be executed in parallel.
Thanks for the help
Celery treats its positional arguments as *args, so every command in your commands iterable should look like ('commandtext',)
commands = ['hello', 'world', '!']

@task
def run_command(command):
    '''run command'''
    os.system(command)

run_command.chunks(zip(commands), 2).group().apply_async()
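As a quick illustration (not part of the original answer), the difference between the two iterables is that zip() yields 1-tuples while iter() yields the bare strings:

commands = ['hello', 'world', '!']

# zip() wraps each element in a 1-tuple, the shape Celery expects for the
# positional arguments of each call:
print(list(zip(commands)))   # [('hello',), ('world',), ('!',)]

# iter() yields the strings themselves, so each string gets star-expanded
# character by character when the task is applied -- presumably where the
# "318 positional arguments" in the error comes from.
print(list(iter(commands)))  # ['hello', 'world', '!']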

How to Set a Default Subparser using Argparse Module with Python 2.7

I'm using Python 2.7 and I'm trying to accomplish shell-like behavior using argparse.
My issue, in general, that I cannot seem to find a way, in Python 2.7, to use argparse's subparsers as optional.
It's kind of hard to explain my issue so I'll describe what I require from my program.
The program has 2 modes of work:
Starting the program with a given command and its additional arguments (each command has its own additional arguments) will run a specific task.
Starting the program without a command will start a shell-like mode that can take a line of arguments and process it as if the program had been called with that line as its arguments.
So, if for example my program supports 'cmd1' and 'cmd2' commands, I could use it like so:
python program.py cmd1 additional_args1
python program.py cmd2 additional_args2
or with shell mode:
python program.py
cmd1 additional_args1
cmd2 additional_args2
quit
In addition, I also want my program to be able to take optional global arguments that will affect all commands.
For that I'm using argparse like so (This is a pure example):
parser = argparse.ArgumentParser(description="{} - Version {}".format(PROGRAM_NAME, PROGRAM_VERSION))
parser.add_argument("-i", "--info", help="Display more information")
subparsers = parser.add_subparsers()
parserCmd1 = subparsers.add_parser("cmd1", help="First Command")
parserCmd1.set_defaults(func=cmd1)
parserCmd2 = subparsers.add_parser("cmd2", help="Second Command")
parserCmd2.add_argument("-o", "--output", help="Redirect Output")
parserCmd2.set_defaults(func=cmd2)
So I can call cmd1 (with no additional args) or cmd2 (with or without -o flag). And for both I can add flag -i to display even more information of the called command.
My issue is that I cannot activate shell mode, because I have to provide cmd1 or cmd2 as an argument (because of using subparsers which are mandatory)
Restrictions:
I cannot use Python 3 (I know it can be easily done there)
Because of global optional arguments I cannot check to see if I get no arguments to skip arg parsing.
I don't want to add a new command that invokes the shell; shell mode must start when no command is provided at all
So how can I achieve This kind of behavior with argparse and python 2.7?
Another idea is to use two-stage parsing. One parser handles the 'globals', returning the strings it can't handle. Then the extras are conditionally handled by a second parser with subparsers.
import argparse

def cmd1(args):
    print('cmd1', args)

def cmd2(args):
    print('cmd2', args)

parser1 = argparse.ArgumentParser()
parser1.add_argument("-i", "--info", help="Display more information")

parser2 = argparse.ArgumentParser()
subparsers = parser2.add_subparsers(dest='cmd')
parserCmd1 = subparsers.add_parser("cmd1", help="First Command")
parserCmd1.set_defaults(func=cmd1)
parserCmd2 = subparsers.add_parser("cmd2", help="Second Command")
parserCmd2.add_argument("-o", "--output", help="Redirect Output")
parserCmd2.set_defaults(func=cmd2)

args, extras = parser1.parse_known_args()
if len(extras) > 0 and extras[0] in ['cmd1', 'cmd2']:
    args = parser2.parse_args(extras, namespace=args)
    args.func(args)
else:
    print('doing system with', args, extras)
sample runs:
0901:~/mypy$ python stack46667843.py -i info
('doing system with', Namespace(info='info'), [])
0901:~/mypy$ python stack46667843.py -i info extras for sys
('doing system with', Namespace(info='info'), ['extras', 'for', 'sys'])
0901:~/mypy$ python stack46667843.py -i info cmd1
('cmd1', Namespace(cmd='cmd1', func=<function cmd1 at 0xb74b025c>, info='info'))
0901:~/mypy$ python stack46667843.py -i info cmd2 -o out
('cmd2', Namespace(cmd='cmd2', func=<function cmd2 at 0xb719ebc4>, info='info', output='out'))
0901:~/mypy$
A bug/issue (with links) on the topic of 'optional' subparsers.
https://bugs.python.org/issue29298
Notice that this has a recent pull request.
With your script and the addition of
args = parser.parse_args()
print(args)
results are
1008:~/mypy$ python3 stack46667843.py
Namespace(info=None)
1009:~/mypy$ python2 stack46667843.py
usage: stack46667843.py [-h] [-i INFO] {cmd1,cmd2} ...
stack46667843.py: error: too few arguments
1009:~/mypy$ python2 stack46667843.py cmd1
Namespace(func=<function cmd1 at 0xb748825c>, info=None)
1011:~/mypy$ python3 stack46667843.py cmd1
Namespace(func=<function cmd1 at 0xb7134dac>, info=None)
I thought the 'optional' subparsers issue affected both the Py2 and Py3 versions, but apparently it doesn't. I'll have to look at the code to verify why.
In both languages, subparsers.required is False. If I set it to true
subparsers.required=True
(and add a dest to the subparsers definition), the PY3 error message is
1031:~/mypy$ python3 stack46667843.py
usage: stack46667843.py [-h] [-i INFO] {cmd1,cmd2} ...
stack46667843.py: error: the following arguments are required: cmd
So there's a difference in how the 2 versions test for required arguments. Py3 pays attention to the required attribute; Py2 (apparently) uses the earlier method of checking whether the positionals list is empty or not.
Checking for required arguments occurs near the end of parser._parse_known_args.
Python2.7 includes
# if we didn't use all the Positional objects, there were too few
# arg strings supplied.
if positionals:
    self.error(_('too few arguments'))
before the iteration that checks action.required. That's what's catching the missing cmd and saying too few arguments.
So a kludge is to edit your argparse.py and remove that block so it matches the corresponding section of the Py3 version.

How to show help for all subparsers in argparse when using a Class()

Very new to working with argparse and the cmd line. I've started to build a parser that allows a front-end user to enter data via the cmd terminal. The parser calls the API() class that I have created (which creates the SQLAlchemy session and so on), example shown here:
class API(object):
    def __init__(self):
        # all the session / engine config here

    def create_user(self, username, password, firstname, lastname, email):
        new_user = User(username, password, firstname, lastname, email)
        self.session.add(new_user)
        self.session.commit()
        print(username, firstname, lastname)

    def retrieve_user(self, username, firstname, lastname):
        # code here ... etc .
to implement in the CMD file here:
def main():
    parser = argparse.ArgumentParser(prog='API_ArgParse', description='Create, Read, Update, and Delete (CRUD) Interface Commands')
    subparsers = parser.add_subparsers(
        title='subcommands', description='valid subcommands', help='additional help')
    api = API()  # calling the API class functions/engine

    # Create command for 'user'
    create_parser = subparsers.add_parser('create_user', help='create a user')
    create_parser.add_argument('username', type=str, help='username of the user')
    create_parser.add_argument('password', type=str, help='password')
    create_parser.add_argument('firstname', type=str, help='first name')
    create_parser.add_argument('lastname', type=str, help='last name')
    create_parser.add_argument('email', type=str, help='email address')
    #args = parser.parse_args() <--EDIT:removed from here and placed on bottom
    api.create_user(args.username, args.password, args.firstname, args.lastname, args.email)

    # Retrieve command for 'user'
    retrieve_parser = subparsers.add_parser('retrieve_user', help='retrieve a user')
    retrieve_parser.add_argument('username', type=str, help='username')
    retrieve_parser.add_argument('firstname', type=str, help='first name')
    retrieve_parser.add_argument('lastname', type=str, help='last name')
    api.retrieve_user(args.username, args.firstname, args.lastname)
NEW EDIT/ADDITION OF args = parser.parse_args() to use both commands to reflect comments below.
    args = parser.parse_args()
    print(args)

if __name__ == '__main__':
    main()
and so on...
My problem is that the terminal is NOT printing the help for the new parsers (e.g. retrieve_parser, update_parser, etc.). Do I have to create an "args = parser.parse_args()" per section?
Secondly, do I create an "args = create_parser.parse_args()" in place of just "parser.parse..."? I notice they print two different things on the terminal.
Any clarification about where to place the parse_args() call (taking into consideration the use of the API() class) is greatly appreciated!!
Normally you call the parse_args method after you have created the parser (and that includes any subparsers), and before you need to use the resulting Namespace (usually called args).
retrieve_parser.add_argument('lastname', type=str, help='last name')
args = parser.parse_args() # <=========
api.retrieve_user(args.username, args.firstname, args.lastname)
The purpose of parse_args is to read the sys.argv list (which the shell/interpreter created from your command line), and 'parse' it using the specifications you created with add_argument and so on.
The main parser 'knows' about the subparsers, and will pass the argv list to the appropriate one (as selected by names like 'retrieve_user').
If the command line includes -h (or --help), the parser will display a help message and exit. This message lists the subparsers, but does not show their arguments. If -h follows a subparser name, it's the subparser that will display the help.
python main.py -h # main help
python main.py retrieve_user -h # subparser help
parser.print_help() can be used in your code to show that same help message. retrieve_parser.print_help() will do the same for the subparser. Obviously these commands will only work within main, and are of more use when debugging (not production).
There's no provision in argparse to display the help for all subparsers. However, you could construct such a function from the print_help commands that I just described. There probably is a SO answer with such a function (but I don't know what a good search term would be).
Crudely, main() could include:
parser.add_argument('--bighelp', action='store_true', help='show help for all subparsers')
....
if args.bighelp:
    parser.print_help()
    subparser1.print_help()
    subparser2.print_help()
    ....
    sys.exit(1)  # if want to quit
It's possible to get a list of the subparsers from the parser, but it's probably easier and clearer to maintain your own list.
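For example, a sketch of such a helper, relying on the choices mapping that the subparsers action exposes (it works, but it is not a formally documented part of argparse):

def print_all_help(parser, subparsers):
    """Print the main parser's help, then the help of every subparser."""
    parser.print_help()
    # subparsers.choices maps each subcommand name to its ArgumentParser
    for name, subparser in subparsers.choices.items():
        print('\n=== help for subcommand {!r} ==='.format(name))
        subparser.print_help()

You would call it with the main parser and the object returned by add_subparsers().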
I was focusing on how to get the parser working, not on the next step of calling the api.
You need to modify the add_subparsers, adding a dest parameter.
subparsers = parser.add_subparsers(dest='cmd',
    title='subcommands', description='valid subcommands', help='additional help')
Then, after you have defined all the subparsers, do:
# check args.cmd, since the subcommand name is stored there via dest, e.g.
# subparsers = parser.add_subparsers(dest='cmd', ...)
args = parser.parse_args()
if args.cmd in ['create_user']:   # or just == 'create_user'
    api.create_user(args...)
elif args.cmd in ['retrieve_user']:
    api.retrieve_user(args...)
The argparse docs show how you can streamline this by invoking:
args.subcmdfunc(args)
See the last 2 examples in the sub-commands section: https://docs.python.org/3/library/argparse.html#sub-commands
Either way, the idea is to find out from the args namespace which subcommand the user specified, and then call the correct api method.
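For instance, a minimal self-contained sketch of that set_defaults() dispatch, with plain functions standing in for the api methods and only a username argument for brevity:

import argparse

def create_user(args):
    print('creating user', args.username)    # stand-in for api.create_user(...)

def retrieve_user(args):
    print('retrieving user', args.username)  # stand-in for api.retrieve_user(...)

parser = argparse.ArgumentParser(prog='API_ArgParse')
subparsers = parser.add_subparsers(dest='cmd')

create_parser = subparsers.add_parser('create_user', help='create a user')
create_parser.add_argument('username')
create_parser.set_defaults(func=create_user)

retrieve_parser = subparsers.add_parser('retrieve_user', help='retrieve a user')
retrieve_parser.add_argument('username')
retrieve_parser.set_defaults(func=retrieve_user)

args = parser.parse_args()
args.func(args)  # dispatch to whichever handler the chosen subparser registered

Each subparser registers its own handler via set_defaults(func=...), so the if/elif chain on args.cmd disappears.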
Keep in mind that parser.parse_args() is equivalent to parser.parse_args(sys.argv[1:]). So whatever parser you use, whether the main one or one of the subparsers, it must be prepared to handle all the strings that your user might give. It's the main parser that knows how to handle the subparser names. A subparser only has to handle the strings that follow its name; it will probably give an error if given the full argv[1:] list. So normally you don't call parse_args on a specific subparser.

How can I call a custom Django manage.py command directly from a test driver?

I want to write a unit test for a Django manage.py command that does a backend operation on a database table. How would I invoke the management command directly from code?
I don't want to execute the command on the Operating System's shell from tests.py because I can't use the test environment set up using manage.py test (test database, test dummy email outbox, etc...)
The best way to test such things is to extract the needed functionality from the command itself into a standalone function or class. That abstracts away the "command execution stuff" and lets you write the test without additional requirements.
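A sketch of what that extraction could look like; the module and function names here are made up for illustration:

# myapp/services.py -- the extracted, command-independent logic
def do_backend_operation():
    # ... the database work the command used to do inline ...
    pass

# myapp/management/commands/my_command.py -- the command becomes a thin wrapper
from django.core.management.base import BaseCommand
from myapp.services import do_backend_operation

class Command(BaseCommand):
    def handle(self, *args, **options):
        do_backend_operation()

# myapp/tests.py -- the test exercises the function directly,
# inside the normal test database and test environment
from django.test import TestCase
from myapp.services import do_backend_operation

class MyCommandTest(TestCase):
    def test_backend_operation(self):
        do_backend_operation()
        # ... assert on the resulting database state ...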
But if for some reason you cannot decouple the logic from the command, you can call it from any code using call_command, like this:
from django.core.management import call_command
call_command('my_command', 'foo', bar='baz')
Rather than do the call_command trick, you can run your task by doing:
from myapp.management.commands import my_management_task
cmd = my_management_task.Command()
opts = {} # kwargs for your command -- lets you override stuff for testing...
cmd.handle_noargs(**opts)
the following code:
from django.core.management import call_command
call_command('collectstatic', verbosity=3, interactive=False)
call_command('migrate', 'myapp', verbosity=3, interactive=False)
...is equal to the following commands typed in terminal:
$ ./manage.py collectstatic --noinput -v 3
$ ./manage.py migrate myapp --noinput -v 3
See running management commands from django docs.
The Django documentation on call_command fails to mention that out must be redirected to sys.stdout. The example code should read:
from django.core.management import call_command
from django.test import TestCase
from django.utils.six import StringIO
import sys
class ClosepollTest(TestCase):
    def test_command_output(self):
        out = StringIO()
        sys.stdout = out
        call_command('closepoll', stdout=out)
        self.assertIn('Expected output', out.getvalue())
Building on Nate's answer I have this:
def make_test_wrapper_for(command_module):
    def _run_cmd_with(*args):
        """Run the possibly_add_alert command with the supplied arguments"""
        cmd = command_module.Command()
        (opts, args) = OptionParser(option_list=cmd.option_list).parse_args(list(args))
        cmd.handle(*args, **vars(opts))
    return _run_cmd_with
Usage:
from myapp.management import mycommand
cmd_runner = make_test_wrapper_for(mycommand)
cmd_runner("foo", "bar")
The advantage here is that if you've used additional options and OptParse, this will sort them out for you. It isn't quite perfect - it doesn't pipe outputs yet - but it will use the test database. You can then test for database effects.
I am sure that using Michael Foord's mock module and rewiring stdout for the duration of a test would let you get even more out of this technique - testing the output, exit conditions, etc.
A more advanced way to run a manage command with flexible arguments and captured output:
argv = self.build_argv(short_dict=kwargs)
cmd = self.run_manage_command_raw(YourManageCommandClass, argv=argv)
# Output is saved in cmd.stdout.getvalue() / cmd.stderr.getvalue()
Add this code to your base test class:
@classmethod
def build_argv(cls, *positional, short_names=None, long_names=None, short_dict=None, **long_dict):
    """
    Build an argv list which can be provided to a manage command's "run_from_argv".
    1) positional will be passed first, as is
    2) short_names will be passed after, with a one-dash (-) prefix
    3) long_names will be passed after, with a two-dash (--) prefix
    4) short_dict will be passed after, as a one-dash (-) prefixed key followed by its value
    5) long_dict will be passed after, as a two-dash (--) prefixed key followed by its value
    """
    argv = [__file__, None] + list(positional)[:]
    for name in short_names or []:
        argv.append(f'-{name}')
    for name in long_names or []:
        argv.append(f'--{name}')
    for name, value in (short_dict or {}).items():
        argv.append(f'-{name}')
        argv.append(str(value))
    for name, value in long_dict.items():
        argv.append(f'--{name}')
        argv.append(str(value))
    return argv
@classmethod
def run_manage_command_raw(cls, cmd_class, argv):
    """Run any manage.py command as a python object."""
    command = cmd_class(stdout=io.StringIO(), stderr=io.StringIO())
    with mock.patch('django.core.management.base.connections.close_all'):
        # patched to prevent closing the db connection
        command.run_from_argv(argv)
    return command