Is there a way to directly apply SMOTE to a xxx.csv file in python-weka-wrapper3 - weka

For example, I have a dataset with 18 instances (class values True:False = 12:6) and 11 attributes. Can I use SMOTE in Python the same way as in the Weka GUI?
How can I do it in code?

Have you looked at the API examples and the example repository? These resources explain most if not all of the available functionality.
Here is an example of loading a CSV file and applying the SMOTE filter:
import weka.core.jvm as jvm
from weka.core.classes import from_commandline
from weka.core.packages import install_missing_package, installed_package
from weka.core.converters import load_any_file
jvm.start(packages=True)
# install SMOTE if necessary
installed = installed_package("SMOTE")
if not installed:
    success, restart = install_missing_package("SMOTE")
    if restart:
        print("Please rerun script")
        jvm.stop()
        import sys
        sys.exit(0)
# load data
data = load_any_file("/some/where/iris.csv", class_index="last")
# if the default parameters for loading the CSV file don't work,
# you need to configure the CSVLoader yourself and set the
# appropriate options (or even use a 3rd-party package):
# from weka.core.converters import Loader
# loader = Loader(classname="weka.core.converters.CSVLoader", options=[])
# data = loader.load_file("/some/where/iris.csv", class_index="last")
print(data.num_instances)
# apply SMOTE
# replace the command-line with the one that you can copy/paste from
# the Weka Explorer via right-click menu
smote = from_commandline("weka.filters.supervised.instance.SMOTE -C 0 -K 5 -P 100.0 -S 1", classname="weka.filters.Filter")
smote.inputformat(data)
filtered = smote.filter(data)
print(filtered.num_instances)
jvm.stop()
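If you prefer to configure the filter programmatically instead of pasting a command line, python-weka-wrapper3 also lets you instantiate the filter wrapper directly and pass the options as a list. A minimal sketch using the same options as the command line above:
from weka.filters import Filter

# build the SMOTE filter via the Filter wrapper instead of a command-line string
smote = Filter(classname="weka.filters.supervised.instance.SMOTE",
               options=["-C", "0", "-K", "5", "-P", "100.0", "-S", "1"])
smote.inputformat(data)
filtered = smote.filter(data)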

Related

How to configure firebase admin at Django server?

I am trying to add a custom token for user authentication (phone number and password), and as per the reference documents I would like to configure the server to generate the custom token.
I have installed it with $ sudo pip install firebase-admin
and also set up an environment variable: export GOOGLE_APPLICATION_CREDENTIALS="[to json file at my server]"
I am using a Django project on my server where I have created all my APIs.
I am stuck at the point where it says to initialize the app:
default_app = firebase_admin.initialize_app()
Where should I write the above statement within the Django files? And how should I generate an endpoint to get the custom token?
Regards,
PD
pip install firebase-admin
The credentials.json file includes some private keys, so you can’t add it to your project directly. If you’re using the git version control system and you want to keep this file in your project folder, you must add its name to your .gitignore.
Set your operating system environment variable. The command below works for macOS or Linux distributions; to set a variable on Windows, see https://www.computerhope.com/issues/ch000549.htm.
$ export GOOGLE_APPLICATION_CREDENTIALS='/path/to/credentials.json'
This part is important: the google package (which comes with the firebase_admin package) checks several places for credentials. One of them is os.environ.get('GOOGLE_APPLICATION_CREDENTIALS'). If you set this variable, you don’t need to do anything else to initialize Firebase. Otherwise you have to supply the credentials manually.
To initialize Firebase, see the setup configuration docs: https://firebase.google.com/docs/firestore/quickstart#initialize
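If you prefer not to rely on the environment variable, a minimal sketch of manual initialization with an explicit service account file (the /path/to/credentials.json path is a placeholder):
import firebase_admin
from firebase_admin import credentials

# explicitly load the service-account credentials instead of relying on
# the GOOGLE_APPLICATION_CREDENTIALS environment variable
cred = credentials.Certificate('/path/to/credentials.json')
firebase_admin.initialize_app(cred)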
Create a file named “firebase.py”.
$ touch firebase.py
Now we can use the firebase_admin package for querying. Our firebase.py looks like this:
import time
from datetime import timedelta
from uuid import uuid4
from firebase_admin import firestore, initialize_app
__all__ = ['send_to_firebase', 'update_firebase_snapshot']
initialize_app()
def send_to_firebase(raw_notification):
    db = firestore.client()
    start = time.time()
    db.collection('notifications').document(str(uuid4())).create(raw_notification)
    end = time.time()
    spend_time = timedelta(seconds=end - start)
    return spend_time

def update_firebase_snapshot(snapshot_id):
    start = time.time()
    db = firestore.client()
    db.collection('notifications').document(snapshot_id).update(
        {'is_read': True}
    )
    end = time.time()
    spend_time = timedelta(seconds=end - start)
    return spend_time
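As a hypothetical sketch (not part of the original answer) of where this fits in a Django project, you could call these helpers from a regular view; the view name and payload fields below are assumptions:
# views.py (hypothetical example)
from django.http import JsonResponse
from .firebase import send_to_firebase

def notify(request):
    # build the notification payload from the request; the fields are illustrative
    raw_notification = {'title': 'Hello', 'body': 'Sent from Django'}
    spend_time = send_to_firebase(raw_notification)
    return JsonResponse({'elapsed': str(spend_time)})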
You can refer to this link: https://medium.com/#canadiyaman/how-to-use-firebase-with-django-project-34578516bafe

python + wx & uno to fill libreoffice using ubuntu 14.04

I collected user data using a wx Python GUI and then used uno to fill this data into an OpenOffice document under Ubuntu 10.xx:
user + my-script ( +empty document ) --> prefilled document
After upgrading to Ubuntu 14.04, uno doesn't work with Python 2.7 anymore, and now we have LibreOffice instead of OpenOffice in Ubuntu. When I try to run my Python 2.7 code, it says:
ImportError: No module named uno
How could I bring it back to work?
what I tried:
installed https://pypi.python.org/pypi/unotools v0.3.3
sudo apt-get install libreoffice-script-provider-python
converted the code to python3 and got uno importable, but wx is not importable in python3 :-/
ImportError: No module named 'wx'
googled and read that Python 3 only works with wx Phoenix
so tried to install: http://wxpython.org/Phoenix/snapshot-builds/
but wasn't able to get it to run with python3
is there a way to get the uno bridge to work with py2.7 under ubuntu 14.04?
Or how to get wx to run with py3?
what else could I try?
Create a Python macro in LibreOffice that will do the work of inserting the data into LibreOffice, and then invoke the macro from your Python 2.7 code.
As the macro runs from within LibreOffice, it will use Python 3.
Here is an example of how to invoke a LibreOffice macro from the command line:
#!/usr/bin/python3
# -*- coding: utf-8 -*-
##
# a python script to run a libreoffice python macro externally
# NOTE: for this to run start libreoffice in the following manner
# soffice "--accept=socket,host=127.0.0.1,port=2002,tcpNoDelay=1;urp;" --writer --norestore
# OR
# nohup soffice "--accept=socket,host=127.0.0.1,port=2002,tcpNoDelay=1;urp;" --writer --norestore &
#
import uno
from com.sun.star.connection import NoConnectException
from com.sun.star.uno import RuntimeException
from com.sun.star.uno import Exception
from com.sun.star.lang import IllegalArgumentException
def uno_directmacro(*args):
    localContext = uno.getComponentContext()
    localsmgr = localContext.ServiceManager
    resolver = localsmgr.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", localContext)
    try:
        ctx = resolver.resolve("uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
    except NoConnectException as e:
        print("LibreOffice is not running or not listening on the port given - (" + e.Message + ")")
        return
    msp = ctx.getValueByName("/singletons/com.sun.star.script.provider.theMasterScriptProviderFactory")
    sp = msp.createScriptProvider("")
    scriptx = sp.getScript('vnd.sun.star.script:directmacro.py$directmacro?language=Python&location=user')
    try:
        scriptx.invoke((), (), ())
    except IllegalArgumentException as e:
        print("The command given is invalid (" + e.Message + ")")
        return
    except RuntimeException as e:
        print("An unknown error occurred: " + e.Message)
        return
    except Exception as e:
        print("Script error (" + e.Message + ")")
        print(e)
        return
    return None

uno_directmacro()
And this is the corresponding macro code within LibreOffice, called "directmacro.py" and stored in the user area for LibreOffice macros (which would normally be $HOME/.config/libreoffice/4/user/Scripts/python):
#!/usr/bin/python
from com.sun.star.awt.MessageBoxButtons import BUTTONS_OK, BUTTONS_OK_CANCEL, BUTTONS_YES_NO, BUTTONS_YES_NO_CANCEL, BUTTONS_RETRY_CANCEL, BUTTONS_ABORT_IGNORE_RETRY
from com.sun.star.awt.MessageBoxButtons import DEFAULT_BUTTON_OK, DEFAULT_BUTTON_CANCEL, DEFAULT_BUTTON_RETRY, DEFAULT_BUTTON_YES, DEFAULT_BUTTON_NO, DEFAULT_BUTTON_IGNORE
from com.sun.star.awt.MessageBoxType import MESSAGEBOX, INFOBOX, WARNINGBOX, ERRORBOX, QUERYBOX
def directmacro(*args):
    import socket, time
    class FontSlant():
        from com.sun.star.awt.FontSlant import (NONE, ITALIC,)
    # get the doc from the scripting context which is made available to all scripts
    desktop = XSCRIPTCONTEXT.getDesktop()
    model = desktop.getCurrentComponent()
    text = model.Text
    tRange = text.End
    cursor = desktop.getCurrentComponent().getCurrentController().getViewCursor()
    doc = XSCRIPTCONTEXT.getDocument()
    parentwindow = doc.CurrentController.Frame.ContainerWindow
    # you cannot insert simple text and text into a table with the same method,
    # so we have to know if we are in a table or not.
    # oTable and oCurCell will be None if we are not in a table
    oTable = cursor.TextTable
    oCurCell = cursor.Cell
    insert_text = "This is text inserted into a LibreOffice Document\ndirectly from a macro called externally"
    Text_Italic = FontSlant.ITALIC
    Text_None = FontSlant.NONE
    cursor.CharPosture = Text_Italic
    if oCurCell is None:  # Are we inserting into a table or not?
        text.insertString(cursor, insert_text, 0)
    else:
        cell = oTable.getCellByName(oCurCell.CellName)
        cell.insertString(cursor, insert_text, False)
    cursor.CharPosture = Text_None
    return None
You will of course need to adapt the code to either accept data as arguments, read it from a file or whatever.
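For example, one hedged way to pass the collected data is to have the Python 2.7 wx script dump it to a JSON file before invoking the macro, and have directmacro.py read that file instead of using the hard-coded string (the path and field names below are illustrative assumptions):
# on the Python 2.7 / wx side, before calling uno_directmacro():
import json
with open('/tmp/userdata.json', 'w') as f:   # the path is an assumption
    json.dump({'name': 'Alice', 'email': 'alice@example.com'}, f)

# inside directmacro.py, replace the hard-coded insert_text with:
import json
with open('/tmp/userdata.json') as f:
    data = json.load(f)
insert_text = "Name: %s\nEmail: %s" % (data['name'], data['email'])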
Ideally I would say use Python 3, because Python 2 is becoming outdated. The switch requires quite a few coding changes, but better sooner than later. So I tried:
sudo pip3 install -U --pre \
-f http://wxpython.org/Phoenix/snapshot-builds/ \
wxPython_Phoenix
However this gave me errors, and I didn't want to spend the next couple of days working through them. Probably the pre-release versions are not ready for prime time yet.
So instead, what I recommend is to switch to AOO for now. See https://stackoverflow.com/a/27980255/5100564 for instructions. AOO does not have all the latest features that LO has, but it is a good solid Office product.
Apparently it is also possible to rebuild LibreOffice with python 2 using this script: https://gist.github.com/hbrunn/6f4a007a6ff7f75c0f8b

Selenium Python Configure Jenkins to run build. My build fails

I am trying to configure Jenkins to build my Selenium Webdriver Python code.
When I click Build Now, it fails.
The Console output shows the following:
Building in workspace C:\Program Files\Jenkins\workspace\ClearCore
[ClearCore] $ cmd /c call C:\Windows\TEMP\hudson6133135491793466847.bat
C:\Program Files\Jenkins\workspace\ClearCore>copy E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py
The system cannot find the file specified.
C:\Program Files\Jenkins\workspace\ClearCore>python smoketests.py
python: can't open file 'smoketests.py': [Errno 2] No such file or directory
C:\Program Files\Jenkins\workspace\ClearCore>exit 2
Build step 'Execute Windows batch command' marked build as failure
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Finished: FAILURE
In PyCharm I have a smoketests.py file as follows:
import unittest
from xmlrunner import xmlrunner
from TestCases.LoginPage_TestCase import LoginPage_TestCase
from TestCases.AdministrationPage_TestCase import AdministrationPage_TestCase
from TestCases.DataConfigurationPage_TestCase import DataConfigurationPage_TestCase
login_tests = unittest.TestLoader().loadTestsFromTestCase(LoginPage_TestCase)
admin_tests = unittest.TestLoader().loadTestsFromTestCase(AdministrationPage_TestCase)
dataconf_tests = unittest.TestLoader().loadTestsFromTestCase(DataConfigurationPage_TestCase)
smoke_tests = unittest.TestSuite([login_tests, admin_tests, dataconf_tests])
xmlrunner.XMLTestRunner(verbosity=2, output='test-reports').run(smoke_tests)
I have a test_HTMLRunner.py as follows:
import unittest
import HTMLTestRunner
import os
from TestCases.LoginPage_TestCase import LoginPage_TestCase
from TestCases.AdministrationPage_TestCase import AdministrationPage_TestCase
from TestCases.DataConfigurationPage_TestCase import DataConfigurationPage_TestCase
# get the directory path to output report file
result_dir = os.getcwd()
login_tests = unittest.TestLoader().loadTestsFromTestCase(LoginPage_TestCase)
admin_tests = unittest.TestLoader().loadTestsFromTestCase(AdministrationPage_TestCase)
dataconf_tests = unittest.TestLoader().loadTestsFromTestCase(DataConfigurationPage_TestCase)
smoke_tests = unittest.TestSuite([login_tests, admin_tests, dataconf_tests])
# open the report file
outfile = open(result_dir + "\TestReport.html", "w")
# configure HTMLTestRunner options
runner = HTMLTestRunner.HTMLTestRunner(stream=outfile,
                                       title='Test Report',
                                       description='LADEMO create a basic project test')
# run the suite using HTMLTestRunner
runner.run(smoke_tests)
I have a suite.py as follows:
import sys
import unittest
import HTMLTestRunner
import os
import unittest
import AdministrationPage_TestCase
import LoginPage_TestCase
import DataConfigurationPage_TestCase
class Test_Suite(unittest.TestCase):
    def test_main(self):
        # suite of TestCases
        self.suite = unittest.TestSuite()
        self.suite.addTests([
            unittest.defaultTestLoader.loadTestsFromTestCase(LoginPage_TestCase.LoginPage_TestCase),
            unittest.defaultTestLoader.loadTestsFromTestCase(AdministrationPage_TestCase.AdministrationPage_TestCase),
            unittest.defaultTestLoader.loadTestsFromTestCase(DataConfigurationPage_TestCase.DataConfigurationPage_TestCase),
        ])
        runner = unittest.TextTestRunner()
        runner.run(self.suite)

if __name__ == "__main__":
    unittest.main()
In Jenkins I have configured the following:
From the section Build, Execute Windows Batch Command
copy E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py
python smoketests.py
From the section Post-Build Actions, Publish JUnit test result report
test_reports/*..xml
Below test_reports/*..xml it shows:
‘test_reports/*..xml’ doesn’t match anything: even ‘test_reports’ doesn’t exist
How do I get this to work, please? What am I doing wrong?
Is there any sample demo I could follow and then I can get my setup to work?
Thanks,
Riaz
The problem looks to be in the copy step of your batch file. Notice how it says it can't find the file. Surround the source and destination paths with double quotes so that Windows knows your path has spaces in it.
It also appears the copy operation doesn't have a destination specified. You may want to specify that too, although apparently that isn't a requirement, as I just found out :).
Once the copy operation succeeds, check the workspace directory to see if the file(s) you expect are present.
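For example, the quoted copy step in the Execute Windows batch command section could look like this (using Jenkins' %WORKSPACE% variable as an explicit destination; this is just a sketch of the quoting fix):
copy "E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py" "%WORKSPACE%"
python smoketests.py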
Alternatively, you can tell the Jenkins job to use a custom workspace, the directory where your tests live. With this configuration you don't even have to worry about copying files.
Here's how:
In the job config in Jenkins, open the Advanced Project Options and select use custom workspace and set the directory to E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\.
Then the build command can just be python smoketests.py.

Running PySpark in an IDE like Spyder?

I can run PySpark from the terminal and everything works fine.
~/spark-1.0.0-bin-hadoop1/bin$ ./pyspark
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.0.0
/_/
Using Python version 2.7.6 (default, May 27 2014 14:50:58)
However, when I try this in a Python IDE:
import pyspark
ImportError: No module named pyspark
How do I import it like other Python libraries such as numpy, scikit, etc.?
Working in the terminal works fine, I just wanted to work in the IDE.
I wrote this launcher script a while back expressly for that purpose. I wanted to be able to interact with the pyspark shell from within the bpython(1) code-completion interpreter and WING IDE, or any IDE for that matter because they have code completion as well as provide a complete development experience. Learning Spark core by just typing 'pyspark' isn't good enough. So I wrote this. This was written in a Cloudera CDH5 environment, but with a little tweaking you can get this to work in whatever your environment is (even manually installed ones).
How to use:
NOTE: You can place all of the following in your .profile (or equivalent).
(1) linux$ export MASTER='yarn-client | local[NN] | spark://host:port'
(2) linux$ export SPARK_HOME=/usr/lib/spark # Yours will vary.
(3) linux$ export JAVA_HOME=/usr/java/latest # Yours will vary.
(4) linux$ export NAMENODE='vps00' # Yours will vary.
(5) linux$ export PYSTART=${PYTHONSTARTUP} # See the in-line comments about why this alias to PYTHONSTARTUP is needed.
(6) linux$ export HADOOP_CONF_DIR=/etc/hadoop/conf # Yours will vary. This one may not be necessary to set. Try and see.
(7) linux$ export HADOOP_HOME=/usr/lib/hadoop # Yours will vary. This one may not be necessary to set. Try and see.
(8) bpython -i /path/to/script/below # The moment of truth. Note that this is 'bpython' (not just plain 'python', which would not give the code completion you desire).
>>> sc
<pyspark.context.SparkContext object at 0x2798110>
>>>
Now for use with an IDE, you simply determine how to specify the equivalent of a PYTHONSTARTUP script for that IDE, and set that to '/path/to/script/below'. For example, as I described in the in-line comments below, for WING IDE you simply set the key/value pair 'PYTHONSTARTUP=/path/to/script/below' inside the project's properties section.
See in-line comments for more information.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# ===========================================================================
# Author: Noel Milton Vega (PRISMALYTICS, LLC.)
# ===========================================================================
# Start-up script for 'python(1)', 'bpython(1)', and Python IDE interpreters
# when you want a 'client-mode' SPARK Shell (i.e. interactive SPARK shell)
# environment either LOCALLY, on a SPARK Standalone Cluster, or on SPARK
# YARN cluster. The code-sense/intelligence of bpython(1) and IDEs, in
# particular will aid in learning the SPARK core API.
#
# This script basically (1) first sets up an environment to launch a SPARK
# Shell, then (2) launches the SPARK Shell using the 'shell.py' python script
# provided in the distribution's SPARK_HOME; and finally (3) imports our
# favorite Python modules (for convenience; e.g. numpy, scipy; etc.).
#
# IMPORTANT:
# DON'T RUN THIS SCRIPT DIRECTLY. It is meant to be read in by interpreters
# (similar, in that respect, to a PYTHONSTARTUP script).
#
# Thus, there are two ways to use this file:
# # We can't refer to PYTHONSTARTUP inside this file b/c that causes a recursion loop
# # when calling this from within IDEs. So in step (0) we alias PYTHONSTARTUP to
# # PYSTARTUP at the O/S level, and use that alias here (since no conflict with that).
# (0): user$ export PYSTARTUP=${PYTHONSTARTUP} # We can't use PYTHONSTARTUP in this file
# (1): user$ export MASTER='yarn-client | local[NN] | spark://host:port'
# user$ bpython|python -i /path/to/this/file
#
# (2): From within your favorite IDE, specify it as your python startup
# script. For example, from within a WINGIDE project, set the following
# variables within a WING Project: 'Project -> Project Properties':
# 'PYTHONSTARTUP=/path/to/this/very/file'
# 'MASTER=yarn-client | local[NN] | spark://host:port'
# ===========================================================================
import sys, os, glob, subprocess, random
namenode = os.getenv('NAMENODE')
SPARK_HOME = os.getenv('SPARK_HOME')
# ===========================================================================
# =================================================================================
# This functions emulates the action of "source" or '.' that exists in bash(1),
# and can be used to set PYTHON environment variables (in Pythons globals dict).
# =================================================================================
def source(script, update=True):
    proc = subprocess.Popen(". %s; env -0" % script, stdout=subprocess.PIPE, shell=True)
    output = proc.communicate()[0]
    env = dict((line.split("=", 1) for line in output.split('\x00') if line))
    if update: os.environ.update(env)
    return env
# ================================================================================
# ================================================================================
# Here, we get the name of our current SPARK Assembly JAR file name (locally). We
# use that to create an HDFS URL that points to its location in HDFS when using
# YARN (i.e. when 'export MASTER=yarn-client'; we ignore it otherwise).
# ================================================================================
# Remember to always upload/update your distribution's current SPARK Assembly JAR
# to HDFS like this:
# $ hdfs dfs -mkdir -p /user/spark/share/lib" # Only necessary to do once!
# $ hdfs dfs -rm "/user/spark/share/lib/spark-assembly-*.jar" # Remove old version.
# $ hdfs dfs -put ${SPARK_HOME}/assembly/lib/spark-assembly-[0-9]*.jar /user/spark/share/lib/
# ================================================================================
SPARK_JAR_LOCATION = glob.glob(SPARK_HOME + '/lib/' + 'spark-assembly-[0-9]*.jar')[0].split("/")[-1]
SPARK_JAR_LOCATION = 'hdfs://' + namenode + ':8020/user/spark/share/lib/' + SPARK_JAR_LOCATION
# ================================================================================
# ================================================================================
# Update Pythons globals environment variable dict with necessary environment
# variables that the SPARK Shell will be looking for. Some we set explicitly via
# an in-line dictionary, as shown below. And the rest are set by 'source'ing the
# global SPARK environment file (although we could have included those explicitly
# here too, if we preferred not to touch that system-wide file -- and leave it as FCS).
# ================================================================================
spark_jar_opt = None
MASTER = os.getenv('MASTER') if os.getenv('MASTER') else 'local[8]'
if MASTER.startswith('yarn-'): spark_jar_opt = ' -Dspark.yarn.jar=' + SPARK_JAR_LOCATION
elif MASTER.startswith('spark://'): pass
else: HADOOP_HOME = ''
# ================================================================================
# ================================================================================
# Build '--driver-java-options' options for spark-shell, pyspark, or spark-submit.
# Many of these are set in '/etc/spark/conf/spark-defaults.conf' (and thus
# commented out here, but left here for reference completeness).
# ================================================================================
# Default UI port is 4040. The next statement allows us to run multiple SPARK shells.
DRIVER_JAVA_OPTIONS = '-Dspark.ui.port=' + str(random.randint(1025, 65535))
DRIVER_JAVA_OPTIONS += spark_jar_opt if spark_jar_opt else ''
# ================================================================================
# ================================================================================
# Build PYSPARK_SUBMIT_ARGS (i.e. the same ones shown in 'pyspark --help'), and
# apply them to the O/S environment.
# ================================================================================
DRIVER_JAVA_OPTIONS = "'" + DRIVER_JAVA_OPTIONS + "'"
PYSPARK_SUBMIT_ARGS = ' --master ' + MASTER # Remember to set MASTER on UNIX CLI or in the IDE!
PYSPARK_SUBMIT_ARGS += ' --driver-java-options ' + DRIVER_JAVA_OPTIONS # Built above.
# ================================================================================
os.environ.update(source('/etc/spark/conf/spark-env.sh', update = False))
os.environ.update({ 'PYSPARK_SUBMIT_ARGS' : PYSPARK_SUBMIT_ARGS })
# ================================================================================
# ================================================================================
# Next, adjust 'sys.path' so SPARK Shell has the python modules it needs.
# ================================================================================
SPARK_PYTHON_DIR = SPARK_HOME + '/python'
PY4J = glob.glob(SPARK_PYTHON_DIR + '/lib/' + 'py4j-*-src.zip')[0].split("/")[-1]
sys.path = [SPARK_PYTHON_DIR, SPARK_PYTHON_DIR + '/lib/' + PY4J] + sys.path
# ================================================================================
# ================================================================================
# With our environment set, we start the SPARK Shell; and then to that, we add
# our favorite Python imports (e.g. numpy, scipy; etc).
# ================================================================================
print('PYSPARK_SUBMIT_ARGS:' + PYSPARK_SUBMIT_ARGS) # For visual debug.
execfile(SPARK_HOME + '/python/pyspark/shell.py', globals()) # Start the SPARK Shell.
execfile(os.getenv('PYSTARTUP')) # Next, load our favorite Python modules.
# ================================================================================
Enjoy and good luck! =:)
Thanks to Ophir YokTon's post above, I finally managed to do it with Spark 1.4.1 + Spyder 2.3.4.
Here I would like to give a summary of all my steps, hoping it can help people in similar situations.
Add the PYTHONPATH variable to .bashrc (of course you can put it into another relevant profile file):
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
Make it take effect by running
source .bashrc
Create a copy of spyder as spyder.py in your Spyder bin directory:
cp spyder spyder.py
Start the Spyder IDE with the following command:
spark-submit spyder.py
I implemented the sample "simple app" from Apache Spark and it passed the running test in the Spyder environment. Please refer to the picture: http://i.stack.imgur.com/xTv6s.gif
pyspark probably isn't on your PYTHONPATH. Go to the location where the pyspark folder is located and add that folder to your path.
If you just want to import the module, adding it to the Python path is enough (see the sketch at the end of this answer).
If you want to run complete scripts from the IDE, you can create a 'tool' that uses spark-submit to execute your script from the IDE (instead of a normal run).
Specifically for Spyder (or other IDEs that are written in Python), you can run the IDE from within spark-submit.
example:
spark-submit.cmd c:\Python27\Scripts\spyder.py
Note that I had to rename spyder to spyder.py - it appears spark-submit relies on the extension to distinguish between Python, Java, and Scala.
Add any required parameters to spark-submit.
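As a minimal sketch of the "add it to the Python path" option above, assuming SPARK_HOME is set and that the bundled py4j zip name matches your distribution:
import os, sys

# make the pyspark package and its bundled py4j visible to this interpreter;
# adjust the py4j version to whatever ships with your Spark distribution
spark_home = os.environ['SPARK_HOME']
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python', 'lib', 'py4j-0.8.2.1-src.zip'))

from pyspark import SparkContext  # should now import without errors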

Add method imports to shell_plus

In shell_plus, is there a way to automatically import selected helper methods, like the models are?
I often open the shell to type:
proj = Project.objects.get(project_id="asdf")
I want to replace that with:
proj = getproj("asdf")
Found it in the docs. Quoted from there:
Additional Imports
In addition to importing the models you can specify other items to
import by default. These are specified in SHELL_PLUS_PRE_IMPORTS and
SHELL_PLUS_POST_IMPORTS. The former is imported before any other
imports (such as the default models import) and the latter is imported
after any other imports. Both have similar syntax. So in your
settings.py file:
SHELL_PLUS_PRE_IMPORTS = (
    ('module.submodule1', ('class1', 'function2')),
    ('module.submodule2', 'function3'),
    ('module.submodule3', '*'),
    'module.submodule4'
)
The above example would directly translate to the following python
code which would be executed before the automatic imports:
from module.submodule1 import class1, function2
from module.submodule2 import function3
from module.submodule3 import *
import module.submodule4
These symbols will be available as soon as the shell starts.
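Applied to the question, a minimal sketch would put the helper in its own module and pull it in via SHELL_PLUS_PRE_IMPORTS (the module path myapp.shell_helpers and the app name are assumptions):
# myapp/shell_helpers.py (hypothetical helper module)
from myapp.models import Project

def getproj(project_id):
    return Project.objects.get(project_id=project_id)

# settings.py
SHELL_PLUS_PRE_IMPORTS = (
    ('myapp.shell_helpers', ('getproj',)),
)
With that in place, proj = getproj("asdf") works as soon as shell_plus starts.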
OK, two ways:
1) Using the PYTHONSTARTUP variable (see the docs):
# in some file (here, I'll call it "~/path/to/foo.py")
def getproj(p_od):
    # I'm importing here because this script runs in any python shell session
    from some_app.models import Project
    return Project.objects.get(project_id=p_od)

# in your .bashrc
export PYTHONSTARTUP="~/path/to/foo.py"
2) Using ipython startup (my favourite) (see the docs, this issue, and the docs):
$ pip install ipython
$ ipython profile create
# put the foo.py script in your profile_default/startup directory.
# django runs ipython if it's installed.
$ django-admin.py shell_plus