Add method imports to shell_plus - django

In shell_plus, is there a way to automatically import selected helper methods, like the models are?
I often open the shell to type:
proj = Project.objects.get(project_id="asdf")
I want to replace that with:
proj = getproj("asdf")

Found it in the docs. Quoted from there:
Additional Imports
In addition to importing the models you can specify other items to
import by default. These are specified in SHELL_PLUS_PRE_IMPORTS and
SHELL_PLUS_POST_IMPORTS. The former is imported before any other
imports (such as the default models import) and the latter is imported
after any other imports. Both have similar syntax. So in your
settings.py file:
SHELL_PLUS_PRE_IMPORTS = (
    ('module.submodule1', ('class1', 'function2')),
    ('module.submodule2', 'function3'),
    ('module.submodule3', '*'),
    'module.submodule4',
)
The above example would directly translate to the following python
code which would be executed before the automatic imports:
from module.submodule1 import class1, function2
from module.submodule2 import function3
from module.submodule3 import *
import module.submodule4
These symbols will be available as soon as the shell starts.
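For the original question, a hedged sketch of how this could be wired up with SHELL_PLUS_POST_IMPORTS (the helper module path is hypothetical; getproj, Project, and project_id come from the question):
# myproject/shell_helpers.py (hypothetical helper module)
def getproj(project_id):
    from some_app.models import Project  # the app/model path is an assumption
    return Project.objects.get(project_id=project_id)

# settings.py
SHELL_PLUS_POST_IMPORTS = (
    ('myproject.shell_helpers', ('getproj',)),
)
With that in place, proj = getproj("asdf") works as soon as shell_plus starts.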

OK, two ways:
1) Using the PYTHONSTARTUP environment variable (see the Python docs):
# In some file (here I'll call it "~/path/to/foo.py"):
def getproj(project_id):
    # Import inside the function, because this script runs in any Python
    # shell session, not only a Django-aware one
    from some_app.models import Project
    return Project.objects.get(project_id=project_id)

# In your .bashrc:
export PYTHONSTARTUP="$HOME/path/to/foo.py"
2) Using IPython startup files (my favourite) (see this Docs, this issue and this Docs):
$ pip install ipython
$ ipython profile create
# Put the foo.py script in your profile_default/startup directory.
# shell_plus will use IPython automatically if it is installed.
$ django-admin.py shell_plus

Related

Python script on Django shell not seeing import if import not set as global?

I have searched Stack Overflow and wasn't able to find this. I have noticed something I cannot wrap my head around: when run as a normal Python script the import works fine, but when run from the Django shell it behaves strangely and the import needs to be declared global to be seen.
You can reproduce it like this: make a file test.py in the folder containing manage.py. The code you can test with is below.
This doesn't work (contents of test.py):
#!/usr/bin/env python3
import chardet

class LoadList():
    def __init__(self):
        self.email_list_path = '/home/omer/test.csv'

    @staticmethod
    def check_file_encoding(file_to_check):
        encoding = chardet.detect(open(file_to_check, "rb").read())
        return encoding

    def get_encoding(self):
        return self.check_file_encoding(self.email_list_path)['encoding']

print(LoadList().get_encoding())
This works fine when chardet is declared global inside the test.py file:
#!/usr/bin/env python3
import chardet

class LoadList():
    def __init__(self):
        self.email_list_path = '/home/omer/test.csv'

    @staticmethod
    def check_file_encoding(file_to_check):
        global chardet
        encoding = chardet.detect(open(file_to_check, "rb").read())
        return encoding

    def get_encoding(self):
        return self.check_file_encoding(self.email_list_path)['encoding']

print(LoadList().get_encoding())
The first run is without global chardet and you can see the error. The second run is with global chardet set and it works fine.
What is going on, and can someone explain it to me? Why isn't the import seen until it is declared global?
Piping a file into shell is the same as piping it into the python command; it's not the same as running the file with python test.py. I suspect it's something to do with the way the newlines are interpreted and how the file is really parsed, but I don't have time to check.
Instead of this approach I'd recommend you write a custom management command, as sketched below.
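For reference, a minimal sketch of such a command (the app name, command name, and hard-coded CSV path are only illustrative):
# myapp/management/commands/check_encoding.py
from django.core.management.base import BaseCommand

import chardet


class Command(BaseCommand):
    help = "Detect the encoding of the e-mail list file"

    def handle(self, *args, **options):
        # The command runs as a regular module (not piped into a shell),
        # so the module-level import behaves normally
        with open('/home/omer/test.csv', 'rb') as f:
            encoding = chardet.detect(f.read())['encoding']
        self.stdout.write(str(encoding))
You would then run it with python manage.py check_encoding.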

Imported library 'owaspapi' contains no keywords. (if it's installed using pip)

I have made a library for Robot Framework (myapi.py). If I place it in the same directory as my robot test, I can import the library like this:
Library    myapi.py
It works just fine.
However, I made the library pip installable so that others can easily use it in other projects. The library installs just fine with pip. I also changed the robot test to import the library like this:
Library    myapi
When I run the robot test I get warning:
[ WARN ] Imported library 'myapi' contains no keywords.
Here's the (pip installable) library file structure:
setup.py
myapi/
    __init__.py
    myapi.py
    version.py
The setup.py content is:
from setuptools import setup, find_packages

exec(open('myapi/version.py').read())

setup(
    name='myapi',
    version=__version__,
    packages=['myapi'],
    install_requires=['requests']
)
The __init__.py content is:
from .version import __version__
The version.py content is:
__version__ = '1.1.0'
The myapi.py content is (only the first function I have is included):
import requests
import time
from time import strftime
import urllib2

__all__ = ['create_new_MY_session']


def create_new_MY_session():
    session_name = strftime('my_session_%S_%H_%M_%d_%m_%Y')
    r = requests.get("http://localhost:8080/JSON/core/action/newSession/?zapapiformat=JSON&name=" + session_name + "/'")
    print("Creating new session: " + session_name + ". Status code...")
    print(r.status_code)
    assert r.status_code == 200
And finally the beginning of the robot test (login.robot):
*** Settings ***
Suite Setup       Open Firefox With Proxy
Suite Teardown    Close Browser
Library           myapi
Library           OperatingSystem
Library           Selenium2Library
Resource          ws_keywords/product/webui.robot

*** Test Cases ***
MY Start New MY Session
    Create New MY Session
Given that the library works just fine when located right next to the robot test, what am I missing when I make it pip installable? Why does it complain that there are no keywords?
In your myapi.py file you are missing the class reference. When the file is placed directly inside your Robot Framework project this wasn't an issue, but when creating a pip installable module it is required. A basic Python library example is this:
myapi.py
class myapi(object):
    ROBOT_LIBRARY_VERSION = 1.0

    def __init__(self):
        pass

    def keyword(self):
        pass
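Applied to the library above, a hedged sketch would wrap the existing function as a method of such a class (and the package's __init__.py would then typically expose it, e.g. from .myapi import myapi):
# myapi/myapi.py (illustrative sketch only)
import requests
from time import strftime


class myapi(object):
    ROBOT_LIBRARY_VERSION = '1.1.0'

    def create_new_MY_session(self):
        # Robot Framework matches this method to the "Create New MY Session"
        # keyword (matching ignores case and spaces/underscores)
        session_name = strftime('my_session_%S_%H_%M_%d_%m_%Y')
        r = requests.get("http://localhost:8080/JSON/core/action/newSession/?zapapiformat=JSON&name=" + session_name + "/'")
        print("Creating new session: " + session_name + ". Status code...")
        print(r.status_code)
        assert r.status_code == 200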

Crontab cannot find the function imported in my python code

I want to import a function from another Python file. It works when I run it manually, but not from crontab.
So I have this:
file1.py (the main code)
file2.py (contains a function named read())
So I tried this:
file1.py
import urllib

import file2

url = 'api:v1/stack/alias'
params = urllib.urlencode({'local': file2.read()})
...
It works when I execute it manually but not when it is added to crontab.
After googling I found another solution:
file1.py
import sys
sys.path.append('/home/pi')
import file2
It works when executed manually but still not from crontab.
So is there another way to do it?
Thank you
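One likely cause (an assumption on my part, not stated in the question): cron starts jobs with a minimal environment and a different working directory, so imports that rely on where you launched the script from can break. A sketch of file1.py that derives the path from the script's own location instead:
# file1.py: sketch; makes the import independent of cron's working directory
import os
import sys

# file2.py is assumed to sit next to file1.py (e.g. in /home/pi)
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import file2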

How can I perform Django's `syncdb --noinput` with call_command?

>>> from django.core.management import call_command
>>> call_command('syncdb')
executes the syncdb management command from within a python script. However, I want to run the equivalent of
$ python manage.py syncdb --noinput
from within a python shell or script. How can I do that?
The following lines don't work without interrupting me with the question of whether I want to create a superuser.
>>> call_command('syncdb', noinput = True) # asks for input
>>> call_command('syncdb', 'noinput') # raises an exception
I use Django 1.3.
call_command('syncdb', interactive=False)
EDIT:
I found the answer in the source code. The source code for all management commands can be found in a Python module called management/commands/(command_name).py
The python module where the syncdb command resides is django.core.management.commands.syncdb
To find the source code of the command you can do something like this:
(env)$ ./manage.py shell
>>> from django.core.management.commands import syncdb
>>> syncdb.__file__
'/home/user/env/local/lib/python2.7/site-packages/django/core/management/commands/syncdb.pyc'
>>>
Of course, check the contents of syncdb.py, and not syncdb.pyc.
Or looking at the online source, the syncdb.py script contains:
make_option('--noinput', action='store_false', dest='interactive', default=True,
            help='Tells Django to NOT prompt the user for input of any kind.'),
which tells us that instead of --noinput on the command line, we should pass interactive=False if we want to automate the command with the call_command function.
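Putting it together, the non-interactive call (the same one-liner given above, shown with its import) is:
from django.core.management import call_command

# equivalent of: python manage.py syncdb --noinput
call_command('syncdb', interactive=False)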

Django timer thread

I would like to compute some information in my Django application on a regular basis.
I need to select and insert data each second, and I want to use the Django ORM.
How can I do this?
In a shell script, set the DJANGO_SETTINGS_MODULE variable and call a Python script:
export DJANGO_SETTINGS_MODULE=yourapp.settings
python compute_some_info.py
In compute_some_info.py, set up Django and import your modules (look at how the manage.py script sets things up to run Django):
#!/usr/bin/env python
import sys

try:
    import settings  # Assumed to be in the same directory.
except ImportError:
    sys.stderr.write("Error: Can't find the file 'settings.py'")
    sys.exit(1)

sys.path = sys.path + ['/yourapphome']

from yourapp.models import YourModel

YourModel.compute_some_info()
Then call your shell script from a cron job.
Alternatively, since you need it every second, you can keep a single process running and sleeping, as sketched below; either way you want to be outside of the webserver, in your own process that is set up this way.
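A minimal sketch of that keep-running variant, assuming the same environment setup as compute_some_info.py above:
#!/usr/bin/env python
# run_loop.py: sketch only; assumes DJANGO_SETTINGS_MODULE and sys.path
# are prepared exactly as in compute_some_info.py above
import time

from yourapp.models import YourModel

while True:
    YourModel.compute_some_info()
    time.sleep(1)  # roughly once per second; no drift correction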
One way to do it would be to create a custom management command and invoke python manage.py your_custom_command from cron or the Windows Task Scheduler.
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
For example, create myapp/management/commands/myapp_task.py which reads:
from django.core.management.base import NoArgsCommand


class Command(NoArgsCommand):
    def handle_noargs(self, **options):
        print 'Doing task...'
        # invoke the functions you need to run on your project here
        print 'Done'
Then you can run it from cron like this:
export DJANGO_SETTINGS_MODULE=myproject.settings; export PYTHONPATH=/path/to/project_parent; python manage.py myapp_task