spyderlib funny behaviour with any() and all() - python-2.7

I'm on Python 2.7 and I've just started using the Spyder IDE.
In a terminal Python session, if I do
any(i == 1 for i in [1, 2, 3, 4])
I get the answer
True
While if I do the same in Spyder I get the response
<generator object <genexpr> at 0x3fc8af0>
Why is it doing that? Am I missing a setting, or might this be a different version of Python (it says 2.7)?

Here's a quote from another related question about Spyder's Python console behavior:
One of Spyder's primary design goals is to make interactive scientific
computing as painless as possible. To facilitate that, by default
Spyder launches a custom-tailored interactive Python session at
startup. It achieves this customization by setting an environment
variable called PYTHONSTARTUP which specifies the path to a script
that will be executed at interpreter startup. You can control this
setting under Preferences...Console...Advanced settings. By default,
Spyder points to scientific_startup.py, which imports a whole host of
scientific modules and functions directly into the main namespace so
that quick, interactive exploration is easy.
As a consequence, the behavior you are experiencing is because you are actually calling the numpy versions of any and all which have been placed directly into the main namespace. To verify this, call
np.any(i == 1 for i in [1, 2, 3, 4])
or
np.all(i == 1 for i in [1, 2, 3, 4])
in the Spyder Python console, and you'll get the same generator objects being returned. By the way, these last two calls magically work because the startup script also does import numpy as np. For more details on what else is imported, type scientific at the Spyder Python console prompt.
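To see the difference side by side, here is a minimal sketch (assuming numpy is installed) that you can paste into a plain Python session:
import numpy as np

gen = (i == 1 for i in [1, 2, 3, 4])
print(any(gen))     # builtin any consumes the generator -> True

gen = (i == 1 for i in [1, 2, 3, 4])
print(np.any(gen))  # numpy stores the generator in a 0-d object array,
                    # so you get the generator object back, not a boolean
The builtin iterates the generator and returns a boolean; numpy cannot meaningfully iterate a bare generator, so the generator object itself comes back.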

Related

Multiple Tkinter windows pop up at runtime after failed attempts

First of all, I'm quite new at Python, Stackoverflow and programming in general so please forgive any formal errors I might have made as I'm still trying to grasp many of the required conceptual programming protocols.
Here's the issue:
I'm trying to work around a specific, seemingly simple problem I've been having when using Tkinter: Whenever I'm fiddling with some code that confuses me, it generally takes many attempts until I finally find a working solution. So I write some code, run it, get an error, make some changes, run it again, get another error, change it again... and so on until a working result is achieved.
When the code finally works, I unfortunately also get additional Tkinter main windows popping up for every failed run I've executed. So if I've made, say, 20 changes before I eventually achieve working code, 20 additional Tkinter windows pop up. Annoying...
Now, I was thinking that maybe handling the exceptions with try/except might avoid this, but I'm unsure how to accomplish this properly.
I've been looking for a solution but can't seem to find a post that addresses this issue. I'm actually not really sure how to formulate the problem correctly... Does anybody have any advice concerning this?
The following shows a simple but failed attempt of how I'm trying to circumvent this. The code works as is, but if you make a little typo in the code, run it a couple of times, then undo the typo and run the code again, you'll get multiple Tkinter windows which is what I'm trying to avoid.
Any help is, of course, appreciated...
(btw, I'm using Python 2.7.13.)
import Tkinter as tk

class App(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self)
        self.root = parent
        self.canvas = tk.Canvas(self)
        self.canvas.pack(expand=1, fill='both')
        self.bindings()

    def click(self, e):
        print 'clicked'

    def bindings(self):
        self.root.bind('<1>', self.click)

def main():
    root = tk.Tk()
    app = App(root)
    app.pack()
    root.mainloop()

if __name__ == '__main__':
    try:
        main()
    except:
        print 'Run failed...'
Alright, great. The issue indeed has nothing to do with Tkinter or Python but with the IDE itself. Thank you Ethan for pointing that out.
PyScripter has several modes or engines. I've been running scripts with its internal engine, which is faster but does not reinitialize with every run; I believe this causes the problem. The remote engine, on the other hand, does reinitialize with every run, which avoids the popups from failed runs.
A more in-depth explanation from the PyScripter manual below:
Python Engines:
Internal
It is faster than the other options; however, if there are problems with
the scripts you are running or debugging, they could affect the
reliability of PyScripter and could cause crashes. Another limitation
of this engine is that it cannot run or debug GUI scripts, nor can it
be reinitialized.
Remote
This is the default engine of PyScripter and is the recommended engine
for most Python development tasks. It runs in a child process and
communicates with PyScripter using rpyc. It can be used to run and
debug any kind of script. However, if you run or debug GUI scripts you
may have to reinitialize the engine after each run.
Remote Tk
This remote Python engine is specifically created to run and debug
Tkinter applications including pylab using the Tkagg backend. It also
supports running pylab in interactive mode. The engine activates a
Tkinter mainloop and replaces the mainloop with a dummy function so
that the Tkinter scripts you are running or debugging do not block the
engine. You may even develop and test Tkinter widgets using the
interactive console.
Remote Wx
This remote Python engine is specifically created to run and debug
wxPython applications including pylab using the WX and WXAgg backends.
It also supports running pylab in interactive mode. The engine
activates a wx MainLoop and replaces the MainLoop with a dummy
function so that the wxPython scripts you are running or debugging do
not block the engine. You may even develop and test wxPython Frames
and Apps using the interactive console. Please note that this engine
prevents the redirection of wxPython output since that would prevent
the communication with Pyscripter.
When using the Tk and Wx remote engines you can of course run or debug
any other non-GUI Python script. However bear in mind that these
engines may be slightly slower than the standard remote engine since
they also contain a GUI main loop. Also note that these two engines
override the sys.exit function with a dummy procedure.

Python Process Python segmentation fault (core dumped) [duplicate]

What are your best tips for debugging Python?
Please don't just list a particular debugger without saying what it can actually do.
Related
What are good ways to make my Python code run first time? - This discusses minimizing errors
PDB
You can use the pdb module, insert pdb.set_trace() anywhere and it will function as a breakpoint.
>>> import pdb
>>> a="a string"
>>> pdb.set_trace()
--Return--
> <stdin>(1)<module>()->None
(Pdb) p a
'a string'
(Pdb)
To continue execution use c (or cont or continue).
It is possible to execute arbitrary Python expressions using pdb. For example, if you find a mistake, you can correct the code, then type the corrected expression to have the same effect in the running code.
ipdb is a version of pdb for IPython. It allows the use of pdb with all the IPython features including tab completion.
It is also possible to set pdb to automatically run on an uncaught exception.
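One way to wire that up yourself is a custom excepthook that calls pdb.post_mortem; a minimal sketch using only the standard library:
import pdb
import sys
import traceback

def debug_on_crash(exc_type, exc_value, exc_traceback):
    # Print the traceback, then drop into pdb at the point of failure.
    traceback.print_exception(exc_type, exc_value, exc_traceback)
    pdb.post_mortem(exc_traceback)

sys.excepthook = debug_on_crash
Alternatively, running the script under python -m pdb restarts it in post-mortem mode when an uncaught exception occurs.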
Pydb was written to be an enhanced version of Pdb. Benefits?
http://pypi.python.org/pypi/pudb, a full-screen, console-based Python debugger.
Its goal is to provide all the niceties of modern GUI-based debuggers in a more lightweight and keyboard-friendly package. PuDB allows you to debug code right where you write and test it – in a terminal. If you've worked with the excellent (but nowadays ancient) DOS-based Turbo Pascal or C tools, PuDB's UI might look familiar.
Nice for debugging standalone scripts, just run
python -m pudb.run my-script.py
If you are using pdb, you can define aliases for shortcuts. I use these:
# Ned's .pdbrc
# Print a dictionary, sorted. %1 is the dict, %2 is the prefix for the names.
alias p_ for k in sorted(%1.keys()): print "%s%-15s= %-80.80s" % ("%2",k,repr(%1[k]))
# Print the instance variables of a thing.
alias pi p_ %1.__dict__ %1.
# Print the instance variables of self.
alias ps pi self
# Print the locals.
alias pl p_ locals() local:
# Next and list, and step and list.
alias nl n;;l
alias sl s;;l
# Short cuts for walking up and down the stack
alias uu u;;u
alias uuu u;;u;;u
alias uuuu u;;u;;u;;u
alias uuuuu u;;u;;u;;u;;u
alias dd d;;d
alias ddd d;;d;;d
alias dddd d;;d;;d;;d
alias ddddd d;;d;;d;;d;;d
Logging
Python already has an excellent built-in logging module. You may want to use the logging template here.
The logging module lets you specify a level of importance; during debugging you can log everything, while during normal operation you might only log critical things. You can switch things off and on.
Most people just use basic print statements to debug, and then remove the print statements. It's better to leave them in, but disable them; then, when you have another bug, you can just re-enable everything and look your logs over.
This can be the best possible way to debug programs that need to do things quickly, such as networking programs that need to respond before the other end of the network connection times out and goes away. You might not have much time to single-step a debugger; but you can just let your code run, and log everything, then pore over the logs and figure out what's really happening.
EDIT: The original URL for the templates was: http://aymanh.com/python-debugging-techniques
This page is missing so I replaced it with a reference to the snapshot saved at archive.org: http://web.archive.org/web/20120819135307/http://aymanh.com/python-debugging-techniques
In case it disappears again, here are the templates I mentioned. This is code taken from the blog; I didn't write it.
import logging
import optparse

LOGGING_LEVELS = {'critical': logging.CRITICAL,
                  'error': logging.ERROR,
                  'warning': logging.WARNING,
                  'info': logging.INFO,
                  'debug': logging.DEBUG}

def main():
    parser = optparse.OptionParser()
    parser.add_option('-l', '--logging-level', help='Logging level')
    parser.add_option('-f', '--logging-file', help='Logging file name')
    (options, args) = parser.parse_args()
    logging_level = LOGGING_LEVELS.get(options.logging_level, logging.NOTSET)
    logging.basicConfig(level=logging_level, filename=options.logging_file,
                        format='%(asctime)s %(levelname)s: %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')

    # Your program goes here.
    # You can access command-line arguments using the args variable.

if __name__ == '__main__':
    main()
And here is his explanation of how to use the above. Again, I don't get the credit for this:
By default, the logging module prints critical, error and warning messages. To change this so that all levels are printed, use:
$ ./your-program.py --logging-level=debug
To send log messages to a file called debug.log, use:
$ ./your-program.py --logging-level=debug --logging-file=debug.log
It is possible to print which Python lines are executed (thanks Geo!). This has any number of applications; for example, you could modify it to check when particular functions are called, or to track only particular lines.
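One standard-library way to do this is the sys.settrace hook; here is a minimal sketch (the work function and plain path printing are just illustration):
import sys

def trace_lines(frame, event, arg):
    # 'line' events fire once per executed source line; returning the
    # function itself keeps tracing inside the current frame.
    if event == 'line':
        print('%s:%d' % (frame.f_code.co_filename, frame.f_lineno))
    return trace_lines

def work():
    x = 1
    y = x + 1
    return y

sys.settrace(trace_lines)   # only frames entered after this are traced
work()
sys.settrace(None)          # switch tracing off again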
code.interact takes you into an interactive console
import code; code.interact(local=locals())
If you want to be able to easily access your console history, look at: "Can I have a history mechanism like in the shell?" (you will have to scroll down for it).
Auto-complete can be enabled for the interpreter.
ipdb is like pdb, with the awesomeness of ipython.
print statements
Some people recommend a debug_print function instead of print for easy disabling (see the sketch below)
The pprint module is invaluable for complex structures
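Such a debug_print helper can be as small as this sketch (the DEBUG flag name is my own):
DEBUG = True   # flip to False to silence all debug output at once

def debug_print(*args):
    # Thin wrapper around print that can be switched off globally.
    if DEBUG:
        print(' '.join(str(a) for a in args))

debug_print('value of x:', 42)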
the obvious way to debug a script
python -m pdb script.py
useful when that script raises an exception
useful when using virtualenv and the pdb command is not running with the venv's Python version.
if you don't know exactly where that script is
python -m pdb ``which <python-script-name>``
PyDev
PyDev has a pretty good interactive debugger. It has watch expressions, hover-to-evaluate, thread and stack listings and (almost) all the usual amenities you expect from a modern visual debugger. You can even attach to a running process and do remote debugging.
Like other visual debuggers, though, I find it useful mostly for simple problems, or for very complicated problems after I've tried everything else. I still do most of the heavy lifting with logging.
If you are familiar with Visual Studio, Python Tools for Visual Studio is what you are looking for.
Winpdb is very nice, and contrary to its name it's completely cross-platform.
It's got a very nice prompt-based and GUI debugger, and supports remote debugging.
In Vim, I have these three bindings:
map <F9> Oimport rpdb2; rpdb2.start_embedded_debugger("asdf") #BREAK<esc>
map <F8> Ofrom nose.tools import set_trace; set_trace() #BREAK<esc>
map <F7> Oimport traceback, sys; traceback.print_exception(*sys.exc_info()) #TRACEBACK<esc>
rpdb2 is a Remote Python Debugger, which can be used with WinPDB, a solid graphical debugger. Because I know you'll ask, it can do everything I expect a graphical debugger to do :)
I use pdb from nose.tools so that I can debug unit tests as well as normal code.
Finally, the F7 mapping will print a traceback (similar to the kind you get when an exception bubbles to the top of the stack). I've found it really useful more than a few times.
Defining useful __repr__() methods for your classes (so you can see what an object is) and using repr() or "%r" % (...) or "...{0!r}...".format(...) in your debug messages/logs is IMHO a key to efficient debugging.
Also, the debuggers mentioned in other answers will make use of your __repr__() methods.
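For instance, a minimal sketch:
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # An unambiguous representation, ideally valid code that
        # could recreate the object.
        return 'Point(x=%r, y=%r)' % (self.x, self.y)

p = Point(1, 2)
print('created %r' % p)   # -> created Point(x=1, y=2)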
Getting a stack trace from a running Python application
There are several tricks here. These include
Breaking into an interpreter/printing a stack trace by sending a signal
Getting a stack trace out of an unprepared Python process
Running the interpreter with flags to make it useful for debugging
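A minimal sketch of the first trick (POSIX only; the choice of SIGUSR1 is arbitrary): install a signal handler that prints the stack of whatever frame was interrupted, then send the signal from another shell with kill -USR1 <pid>.
import signal
import sys
import traceback

def dump_stack(signum, frame):
    # Print the stack of the frame interrupted by the signal.
    traceback.print_stack(frame, file=sys.stderr)

signal.signal(signal.SIGUSR1, dump_stack)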
If you don't like spending time in debuggers (and don't appreciate the poor usability of pdb's command-line interface), you can dump the execution trace and analyze it later. For example:
python -m trace -t setup.py install > execution.log
This will dump all executed source lines of setup.py install to execution.log.
To make it easier to customize trace output and write your own tracers, I put together some pieces of code into the xtrace module (public domain).
When possible, I debug using M-x pdb in emacs for source level debugging.
There is a full online course called "Software Debugging" by Andreas Zeller on Udacity, packed with tips about debugging:
Course Summary
In this class you will learn how to debug programs systematically, how
to automate the debugging process and build several automated
debugging tools in Python.
Why Take This Course?
At the end of this course you will have a solid understanding about
systematic debugging, will know how to automate debugging and will
have built several functional debugging tools in Python.
Prerequisites and Requirements
Basic knowledge of programming and Python at the level of Udacity
CS101 or better is required. Basic understanding of Object-oriented
programming is helpful.
Highly recommended.
If you want a nice graphical way to print your call stack in a readable fashion, check out this utility: https://github.com/joerick/pyinstrument
Run from command line:
python -m pyinstrument myscript.py [args...]
Run as a module:
from pyinstrument import Profiler
profiler = Profiler()
profiler.start()
# code you want to profile
profiler.stop()
print(profiler.output_text(unicode=True, color=True))
Run with django:
Just add pyinstrument.middleware.ProfilerMiddleware to MIDDLEWARE_CLASSES, then add ?profile to the end of the request URL to activate the profiler.
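In settings.py, that would look roughly like this (a sketch; the module path is the one given above, and MIDDLEWARE_CLASSES applies to older Django versions):
# settings.py (hypothetical project)
MIDDLEWARE_CLASSES = (
    'pyinstrument.middleware.ProfilerMiddleware',
    # ... the rest of your middleware ...
)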

iPython - "broken" shell/terminal after realoading Django

I'm embedding an iPython shell in a Django script (with the development server, e.g. runserver at localhost) like this:
...
from IPython.Shell import IPShellEmbed
ipshell = IPShellEmbed()
ipshell()
...
which gives me an interactive shell at the desired place. Now, if I modify the source code, Django automatically reloads, probably without correctly quitting the iPython shell, and "breaks" my terminal emulator (xterm, konsole) - text becomes invisible, etc. (the same effect occurs if iPython running inside Django is terminated with Ctrl+d).
Any suggestions as to what could be causing this? (I'm probably using iPython in the wrong way, but who knows.)
I cannot answer the question of why it's going wrong, but I can tell you how to recover from it: quit the development server and give the reset command.
Another way to prevent this from happening is to use the --noreload switch on the runserver command. This means that Django will not reload after a change, but it also doesn't break your debugger.
This issue is already fixed: http://code.djangoproject.com/ticket/15565
Thanks Django.

Run a C++ Program from Django Framework

I need to run a C++ program from the Django framework. In a sense, I get inputs from the UI in views.py. Once I have these inputs, I need to process them using my C++ program and use those results. Is it possible?
Compile that C++ program to an executable and call it with the subprocess module from Python.
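A minimal sketch (the binary path is hypothetical; subprocess.check_output is available from Python 2.7):
import subprocess

user_input = 'example'  # in practice, taken from the request in views.py

# Run the compiled C++ binary with the input as an argument and
# capture its standard output. Passing a list (not a shell string)
# avoids shell-injection problems with user-supplied input.
result = subprocess.check_output(['/path/to/mycpp', user_input])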
You can use swig to create a C++ module that can be imported in python.
An alternative is boost::python (but personally, I prefer swig).
One way of doing this would be to use os.popen. Assuming your C++ executable is on the system-wide path and is named mycpp, you would do something like:
results = os.popen('mycpp %s' % user_input).read()
However, this could get computationally expensive fast if you're invoking this command often, because os.popen basically forks off a subprocess. Also, as noted in the docs, it's been deprecated since Python 2.6, so proceed with caution.
Assuming you are on *nix, compile your C++ program and store it somewhere on your system, say /home/rishabh/myexe.
Now, from your Django app, call the executable using the commands module:
import commands
status, res = commands.getstatusoutput("/home/rishabh/myexe")
# status contains the process status (0 for success, non-zero for unsuccessful termination) and res contains the output of the process

writing pexpect like program in c++ on Linux

Is there any way of writing a small pexpect-like program which can launch a process and pass a password to that process?
I don't want to install and use the pexpect Python library, but I want to know the logic behind it so that I can build something similar using Linux system APIs.
You could just use "expect". It is very lightweight and is made to do what you're describing.
For very simple cases, empty is one option. It's a lightweight C program, and it can be used straight from a shell script and doesn't require Tcl.
For Debian/Ubuntu, the package is empty-expect.
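The core logic behind pexpect, expect and empty is a pseudo-terminal: the child runs on a pty slave, so it believes a real user is typing, which is why password prompts answer correctly. Here is a minimal sketch of that logic using Python's low-level pty/os modules (the command and prompt string are hypothetical); in C++ the same flow uses forkpty()/openpty() and execvp() from libc:
import os
import pty

# Run a child process on a pseudo-terminal and feed it a password.
pid, master_fd = pty.fork()
if pid == 0:
    # Child: stdin/stdout/stderr are now the pty slave, so the
    # program thinks it is talking to a real terminal.
    os.execvp('ssh', ['ssh', 'user@host'])  # hypothetical command
else:
    # Parent: wait for the password prompt, then answer it.
    # (A real program would add a timeout to avoid hanging here.)
    output = b''
    while b'password:' not in output.lower():
        output += os.read(master_fd, 1024)
    os.write(master_fd, b'secret\n')  # hypothetical password
    # Relay the remaining output until the child exits.
    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:
            break
        if not data:
            break
        os.write(1, data)
    os.waitpid(pid, 0)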