Multiple Tkinter windows pop up at runtime after failed attempts - python-2.7

First of all, I'm quite new at Python, Stackoverflow and programming in general so please forgive any formal errors I might have made as I'm still trying to grasp many of the required conceptual programming protocols.
Here's the issue:
I'm trying to work around a specific, seemingly simple problem I've been having when using Tkinter: Whenever I'm fiddling with some code that confuses me, it generally takes many attempts until I finally find a working solution. So I write some code, run it, get an error, make some changes, run it again, get another error, change it again... and so on until a working result is achieved.
When the code finally works, I unfortunately also get additional Tkinter main windows popping up for every failed run I've executed. So if I've made, say, 20 changes before I eventually achieve working code, 20 additional Tkinter windows pop up. Annoying...
Now, I was thinking that maybe handling the exceptions with try/except might avoid this, but I'm unsure how to accomplish this properly.
I've been looking for a solution but can't seem to find a post that addresses this issue. I'm actually not really sure how to formulate the problem correctly... Does anybody have some advice concerning this?
The following shows a simple but failed attempt of how I'm trying to circumvent this. The code works as is, but if you make a little typo in the code, run it a couple of times, then undo the typo and run the code again, you'll get multiple Tkinter windows which is what I'm trying to avoid.
Any help is, of course, appreciated...
(btw, I'm using Python 2.7.13.)
import Tkinter as tk

class App(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self)
        self.root = parent
        self.canvas = tk.Canvas(self)
        self.canvas.pack(expand=1, fill='both')
        self.bindings()

    def click(self, e):
        print 'clicked'

    def bindings(self):
        self.root.bind('<1>', self.click)

def main():
    root = tk.Tk()
    app = App(root)
    app.pack()
    root.mainloop()

if __name__ == '__main__':
    try:
        main()
    except:
        print 'Run failed...'

Alright, great. The issue indeed has nothing to do with Tkinter or Python but with the IDE itself. Thank you, Ethan, for pointing that out.
PyScripter has several modes, or engines. I've been running scripts with its internal engine, which is faster but does not reinitialize with every run. I believe this causes the problem. The remote engine, on the other hand, does reinitialize with every run, which avoids the failed-run popups.
A more in-depth explanation from the PyScripter manual below:
Python Engines:
Internal
It is faster than the other options however if there are problems with
the scripts you are running or debugging they could affect the
reliability of PyScripter and could cause crashes. Another limitation
of this engine is that it cannot run or debug GUI scripts nor can it
be reinitialized.
Remote
This is the default engine of PyScripter and is the recommended engine
for most Python development tasks. It runs in a child process and
communicates with PyScripter using rpyc. It can be used to run and
debug any kind of script. However if you run or debug GUI scripts you
may have to reinitialize the engine after each run.
Remote Tk
This remote Python engine is specifically created to run and debug
Tkinter applications including pylab using the Tkagg backend. It also
supports running pylab in interactive mode. The engine activates a
Tkinter mainloop and replaces the mainloop with a dummy function so
that the Tkinter scripts you are running or debugging do not block the
engine. You may even develop and test Tkinter widgets using the
interactive console.
Remote Wx
This remote Python engine is specifically created to run and debug
wxPython applications including pylab using the WX and WXAgg backends.
It also supports running pylab in interactive mode. The engine
activates a wx MainLoop and replaces the MainLoop with a dummy
function so that the wxPython scripts you are running or debugging do
not block the engine. You may even develop and test wxPython Frames
and Apps using the interactive console. Please note that this engine
prevents the redirection of wxPython output since that would prevent
the communication with Pyscripter.
When using the Tk and Wx remote engines you can of course run or debug
any other non-GUI Python script. However bear in mind that these
engines may be slightly slower than the standard remote engine since
they also contain a GUI main loop. Also note that these two engines
override the sys.exit function with a dummy procedure.
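If you have to stay on the internal engine (or any setup where the interpreter persists between runs), a defensive pattern is to tear the root window down even when the run fails. This is only a minimal sketch, not part of the original question's code, and it assumes the App class defined above:
import Tkinter as tk

def main():
    root = tk.Tk()
    try:
        app = App(root)          # App as defined in the question above
        app.pack()
        root.mainloop()
    finally:
        try:
            root.destroy()       # ensures no stray window survives a failed run
        except tk.TclError:
            pass                 # root was already destroyed on normal exit

if __name__ == '__main__':
    main()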


Launch a process completely independent of the launching process

Hello, I have a GUI application, a simple painting program, written with PyQt 4 and Python 2.7 and running on Windows. Below is an excerpt:
if __name__ == "__main__":
    app = QApplication(sys.argv)
    paint_app = main_paint_editor()  # instantiate the GUI
    paint_app.show()
    app.exec_()
This gets the GUI started as the main thread. Once the UI is up and running, there's a button that is supposed to launch a Flask app (served with CherryPy's WSGI server).
This is done via a signal-slot connection in the GUI, i.e.
QtCore.QObject.connect(my_webserver_button, QtCore.SIGNAL("triggered()"), launch_the_server)
The function launch_the_server is meant to start a separate process for the Flask app. As there is no real dependency between the GUI and the Flask app, I really want to use a separate process and not threads.
Also, while experimenting, I found that using threads in this specific case makes my paint GUI stutter. So a separate process is my goal.
In order to start the web server and Flask app in its own process I have
from cherrypy import wsgiserver
from multiprocessing import Process

def launch_the_server(self):
    flask_process = Process(target=start_cherrypy, name="local_webserver")
    flask_process.start()
    flask_process.join()

def start_cherrypy(self):
    localwsgi_server = wsgiserver.CherryPyWSGIServer(('localhost', 1234), flask_app,
                                                     numthreads=10, server_name="paintserver")
    localwsgi_server.start()
When I run in debug mode using Eclipse, this works fine. I get a separate process and the Flask application works fine.
However, when I build an executable of my entire application using py2exe, it doesn't work at all.
I do get an application which runs; however, when I try to launch the web server nothing happens. I put a couple of debug messages in the exe and none get printed.
I thought it might be a CherryPy problem at first, so I replaced the code with something simple, i.e. launching a process that simply writes "hello" to a text file.
Nothing happened in the executable, though it worked fine from the Eclipse debugger.
How can I launch a separate process from within an existing GUI application to run additional functions, such as starting a Flask app or writing text to a text file?
This answer worked for me.
py2exe-with-multiprocessing-fails-to-run-the-processes
Mainly the
import multiprocessing

if __name__ == '__main__':
    multiprocessing.freeze_support()

did the trick. It looks like it was more a quirk of py2exe. Thanks to everyone for looking.
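For context, here is a minimal sketch of how the frozen entry point ends up looking once freeze_support() is in place; main_paint_editor is the class name from the question, and the key point is that freeze_support() is the very first call under if __name__ == '__main__':
import sys
import multiprocessing
from PyQt4.QtGui import QApplication

if __name__ == "__main__":
    multiprocessing.freeze_support()   # must run first in a py2exe-frozen entry script
    app = QApplication(sys.argv)
    paint_app = main_paint_editor()    # the GUI class from the question
    paint_app.show()
    sys.exit(app.exec_())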

Best solution for sys.stdout causing 'pythonw.exe' to crash

This (what I consider a bug) plagued me for quite some time. After a lot of searching on the topic I've found a decent solution, but I am not happy with it.
The bug is as follows:
# starts main.pyw
import foo2
# foo2.py contains print()
# After a few seconds (sometimes minutes): "'pythonw.exe' has stopped working."
After some research, I found out that this is due to Python 2 trying to write to sys.stdout even though the output has no place to go. This was fixed in Python 3.
The program (main.pyw) is a Tkinter launcher for my game that captures the console output from foo.py. foo2.py is used by foo.py. main's console is never used, so the obvious choice was to hide it (and personally I have a small laptop, so the console was a space-hog/eye-sore).
Here is my make-shift solution:
# change 'main.pyw' to 'main.py'
import ctypes

def HideConsole():
    kernel32 = ctypes.WinDLL('kernel32')
    user32 = ctypes.WinDLL('user32')
    SW_HIDE = 0
    hwnd = kernel32.GetConsoleWindow()
    if hwnd:
        user32.ShowWindow(hwnd, SW_HIDE)

HideConsole()
This works fine. Python.exe starts, the console opens, the console closes, then Tkinter starts, and the crashes are stopped. But I get the feeling that this isn't the right solution; what are your thoughts? Is there a better way to hide the console without using pythonw?
Edit: I have tried redirecting sys.stdout, and the print() calls are also important to another script, so I'd rather not touch them.
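For completeness, a minimal sketch of the usual stdout-redirection alternative (which the edit above says was already tried): give sys.stdout and sys.stderr a real destination when running under pythonw, so buffered print output can never crash the process. The log file name is an assumption.
import sys

if sys.executable.lower().endswith("pythonw.exe"):
    log = open("gui_output.log", "a")   # hypothetical log file; any writable object works
    sys.stdout = log
    sys.stderr = log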

Python Process Python segmentation fault (core dumped) [duplicate]

What are your best tips for debugging Python?
Please don't just list a particular debugger without saying what it can actually do.
Related
What are good ways to make my Python code run first time? - This discusses minimizing errors
PDB
You can use the pdb module, insert pdb.set_trace() anywhere and it will function as a breakpoint.
>>> import pdb
>>> a="a string"
>>> pdb.set_trace()
--Return--
> <stdin>(1)<module>()->None
(Pdb) p a
'a string'
(Pdb)
To continue execution use c (or cont or continue).
It is possible to execute arbitrary Python expressions using pdb. For example, if you find a mistake, you can correct the code and then type the corrected expression to have the same effect in the running code.
ipdb is a version of pdb for IPython. It allows the use of pdb with all the IPython features including tab completion.
It is also possible to set pdb to automatically run on an uncaught exception.
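One way to get that behaviour, as a minimal sketch, is to install an excepthook that prints the traceback and then drops into a post-mortem session:
import sys
import traceback
import pdb

def debug_on_crash(exc_type, exc_value, tb):
    traceback.print_exception(exc_type, exc_value, tb)
    pdb.post_mortem(tb)            # inspect the frames where the exception was raised

sys.excepthook = debug_on_crash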
Pydb was written to be an enhanced version of Pdb. Benefits?
http://pypi.python.org/pypi/pudb, a full-screen, console-based Python debugger.
Its goal is to provide all the niceties of modern GUI-based debuggers in a more lightweight and keyboard-friendly package. PuDB allows you to debug code right where you write and test it – in a terminal. If you've worked with the excellent (but nowadays ancient) DOS-based Turbo Pascal or C tools, PuDB's UI might look familiar.
Nice for debugging standalone scripts, just run
python -m pudb.run my-script.py
If you are using pdb, you can define aliases for shortcuts. I use these:
# Ned's .pdbrc
# Print a dictionary, sorted. %1 is the dict, %2 is the prefix for the names.
alias p_ for k in sorted(%1.keys()): print "%s%-15s= %-80.80s" % ("%2",k,repr(%1[k]))
# Print the instance variables of a thing.
alias pi p_ %1.__dict__ %1.
# Print the instance variables of self.
alias ps pi self
# Print the locals.
alias pl p_ locals() local:
# Next and list, and step and list.
alias nl n;;l
alias sl s;;l
# Short cuts for walking up and down the stack
alias uu u;;u
alias uuu u;;u;;u
alias uuuu u;;u;;u;;u
alias uuuuu u;;u;;u;;u;;u
alias dd d;;d
alias ddd d;;d;;d
alias dddd d;;d;;d;;d
alias ddddd d;;d;;d;;d;;d
Logging
Python already has an excellent built-in logging module. You may want to use the logging template here.
The logging module lets you specify a level of importance; during debugging you can log everything, while during normal operation you might only log critical things. You can switch things off and on.
Most people just use basic print statements to debug, and then remove the print statements. It's better to leave them in, but disable them; then, when you have another bug, you can just re-enable everything and look your logs over.
This can be the best possible way to debug programs that need to do things quickly, such as networking programs that need to respond before the other end of the network connection times out and goes away. You might not have much time to single-step a debugger; but you can just let your code run, and log everything, then pore over the logs and figure out what's really happening.
EDIT: The original URL for the templates was: http://aymanh.com/python-debugging-techniques
This page is missing so I replaced it with a reference to the snapshot saved at archive.org: http://web.archive.org/web/20120819135307/http://aymanh.com/python-debugging-techniques
In case it disappears again, here are the templates I mentioned. This is code taken from the blog; I didn't write it.
import logging
import optparse

LOGGING_LEVELS = {'critical': logging.CRITICAL,
                  'error': logging.ERROR,
                  'warning': logging.WARNING,
                  'info': logging.INFO,
                  'debug': logging.DEBUG}

def main():
    parser = optparse.OptionParser()
    parser.add_option('-l', '--logging-level', help='Logging level')
    parser.add_option('-f', '--logging-file', help='Logging file name')
    (options, args) = parser.parse_args()
    logging_level = LOGGING_LEVELS.get(options.logging_level, logging.NOTSET)
    logging.basicConfig(level=logging_level, filename=options.logging_file,
                        format='%(asctime)s %(levelname)s: %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')

    # Your program goes here.
    # You can access command-line arguments using the args variable.

if __name__ == '__main__':
    main()
And here is his explanation of how to use the above. Again, I don't get the credit for this:
By default, the logging module prints critical, error and warning messages. To change this so that all levels are printed, use:
$ ./your-program.py --logging-level=debug
To send log messages to a file called debug.log, use:
$ ./your-program.py --logging-level=debug --logging-file=debug.log
It is possible to print which Python lines are executed (thanks Geo!). This has any number of applications; for example, you could modify it to check when particular functions are called, or adjust it so that it only tracks particular lines.
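As a rough illustration (not the code the original link pointed to), a trace function installed with sys.settrace can print each line as it executes; the filename filter here is an assumption, just to keep the output manageable:
import sys

def trace_lines(frame, event, arg):
    if event == 'line' and frame.f_code.co_filename.endswith('myscript.py'):  # hypothetical filter
        print '%s:%d' % (frame.f_code.co_filename, frame.f_lineno)
    return trace_lines

sys.settrace(trace_lines)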
code.interact takes you into an interactive console
import code; code.interact(local=locals())
If you want to be able to easily access your console history look at: "Can I have a history mechanism like in the shell?" (will have to look down for it).
Auto-complete can be enabled for the interpreter.
ipdb is like pdb, with the awesomeness of ipython.
print statements
Some people recommend a debug_print function instead of print for easy disabling
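A minimal sketch of that idea, with the flag and function names chosen only for illustration:
DEBUG = True   # flip to False to silence all debug output at once

def debug_print(*args):
    if DEBUG:
        print ' '.join(str(a) for a in args)

debug_print('value of x:', 42)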
The pprint module is invaluable for complex structures
the obvious way to debug a script
python -m pdb script.py
useful when that script raises an exception
useful when using virtualenv and pdb command is not running with the venvs python version.
if you don't know exactly where that script is
python -m pdb ``which <python-script-name>``
PyDev
PyDev has a pretty good interactive debugger. It has watch expressions, hover-to-evaluate, thread and stack listings and (almost) all the usual amenities you expect from a modern visual debugger. You can even attach to a running process and do remote debugging.
Like other visual debuggers, though, I find it useful mostly for simple problems, or for very complicated problems after I've tried everything else. I still do most of the heavy lifting with logging.
If you are familiar with Visual Studio, Python Tools for Visual Studio is what you are looking for.
Winpdb is very nice, and contrary to its name it's completely cross-platform.
It's got a very nice prompt-based and GUI debugger, and supports remote debugging.
In Vim, I have these three bindings:
map <F9> Oimport rpdb2; rpdb2.start_embedded_debugger("asdf") #BREAK<esc>
map <F8> Ofrom nose.tools import set_trace; set_trace() #BREAK<esc>
map <F7> Oimport traceback, sys; traceback.print_exception(*sys.exc_info()) #TRACEBACK<esc>
rpdb2 is a Remote Python Debugger, which can be used with WinPDB, a solid graphical debugger. Because I know you'll ask, it can do everything I expect a graphical debugger to do :)
I use pdb from nose.tools so that I can debug unit tests as well as normal code.
Finally, the F7 mapping will print a traceback (similar to the kind you get when an exception bubbles to the top of the stack). I've found it really useful more than a few times.
Defining useful repr() methods for your classes (so you can see what an object is) and using repr() or "%r" % (...) or "...{0!r}..".format(...) in your debug messages/logs is IMHO a key to efficient debugging.
Also, the debuggers mentioned in other answers will make use of the repr() methods.
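For example, a small sketch of the kind of __repr__ that pays off in debug output (the class is made up purely for illustration):
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # an unambiguous form shows up in logs, pdb, and "%r" formatting
        return 'Point(x=%r, y=%r)' % (self.x, self.y)

print 'current position: %r' % (Point(3, 4),)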
Getting a stack trace from a running Python application
There are several tricks here. These include
Breaking into an interpreter/printing a stack trace by sending a signal
Getting a stack trace out of an unprepared Python process
Running the interpreter with flags to make it useful for debugging
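As a minimal sketch of the first trick (POSIX only; the choice of SIGUSR1 is an assumption), you can register a handler that dumps the current stack whenever the process receives a signal:
import signal
import sys
import traceback

def dump_stack(signum, frame):
    traceback.print_stack(frame, file=sys.stderr)   # show where the process is right now

signal.signal(signal.SIGUSR1, dump_stack)
# Then, from another shell:  kill -USR1 <pid>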
If you don't like spending time in debuggers (and don't appreciate the poor usability of the pdb command-line interface), you can dump an execution trace and analyze it later. For example:
python -m trace -t setup.py install > execution.log
This will dump all source lines of the setup.py install execution to execution.log.
To make it easier to customize trace output and write your own tracers, I put together some pieces of code into xtrace module (public domain).
When possible, I debug using M-x pdb in emacs for source level debugging.
There is a full online course called "Software Debugging" by Andreas Zeller on Udacity, packed with tips about debugging:
Course Summary
In this class you will learn how to debug programs systematically, how
to automate the debugging process and build several automated
debugging tools in Python.
Why Take This Course?
At the end of this course you will have a solid understanding about
systematic debugging, will know how to automate debugging and will
have built several functional debugging tools in Python.
Prerequisites and Requirements
Basic knowledge of programming and Python at the level of Udacity
CS101 or better is required. Basic understanding of Object-oriented
programming is helpful.
Highly recommended.
If you want a nice graphical way to print your call stack in a readable fashion, check out this utility: https://github.com/joerick/pyinstrument
Run from command line:
python -m pyinstrument myscript.py [args...]
Run as a module:
from pyinstrument import Profiler
profiler = Profiler()
profiler.start()
# code you want to profile
profiler.stop()
print(profiler.output_text(unicode=True, color=True))
Run with django:
Just add pyinstrument.middleware.ProfilerMiddleware to MIDDLEWARE_CLASSES, then add ?profile to the end of the request URL to activate the profiler.
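For reference, a sketch of that wiring in a Django settings module; MIDDLEWARE_CLASSES is the setting name used by the Django versions contemporary with this answer (newer releases use MIDDLEWARE instead):
# settings.py
MIDDLEWARE_CLASSES = (
    'pyinstrument.middleware.ProfilerMiddleware',
    # ... your other middleware ...
)
# then request any page with ?profile appended to the URL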

Tkinter exe file - DOS screen flashes but GUI does not persist

I have created a GUI in Python 2.7.11 that consists of a main page, along with page 1 and page 2, which are linked through buttons on the main page. I converted the main page to an exe file using PyInstaller and there were no errors in the conversion. main page.exe appeared in the dist folder, but on clicking it, a DOS screen flashed and the main page did not open or persist on the screen.
Being a beginner, I am not sure about how to proceed further. Please help.
If you've got a line like root.mainloop() at the end (with root standing for your main Tk window) to make sure the event loop runs, then you'll need to debug your code. Try running a small segment of the code at a time to see if all goes well, and see where it is that all doesn't go well; then examine the offending part closely to find the error, maybe running some lines of code in the interpreter from the command line to see what (if any) error messages you get.
On the other hand, if you don't have a line like root.mainloop() at the end, that could produce the error you saw. Being a Python beginner myself, and having learned to program in Tcl where the Tk event loop runs automatically, I've seen that error a few times myself. :o(
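To illustrate the structure the answer describes, here is a minimal skeleton (the title and buttons are invented for illustration); without the final mainloop() call a frozen exe draws nothing and exits immediately:
import Tkinter as tk

root = tk.Tk()
root.title("main page")                      # hypothetical window title
tk.Button(root, text="Go to page 1").pack()  # placeholder buttons standing in for the real pages
tk.Button(root, text="Go to page 2").pack()
root.mainloop()                              # keeps the window alive until the user closes it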
There were multiple issues associated with converting a Python script that links modules through button clicks. Keeping these factors in mind, it would be best to convert it to an exe using cx_Freeze. It is more user-friendly and was highly effective for this GUI compared to PyInstaller and py2exe.

Transparent UI / hidden UI after BringToForeground

I am programming in C++ using Qt (on Windows). I have a GUI application that can be run from the command line so that users can schedule it to run using Scheduled Tasks.
Everything was working fine (I think), except when a user tried to schedule the task with the "run when user is logged on or not" option checked. In this instance the application would run fine, but the GUI would not pop up.
I thought maybe my problem was similar to this: https://serverfault.com/questions/101671/scheduled-tasks-w-gui-issue
I thought I found the issue because my GetProcID call was returning a list of ProcessId's and I was only using the first one it returned, which caused some issues. That process ID was then passed to BringToForeground.
After this change, it now brings up a transparent window, or no window at all apart from the application icon, on some machines (basically every test machine except the 3 machines where I can debug). It works exactly as required on my test machines.
The application works well if the GUI app is already running and you make the same call on the command line (it passes the call to that process to run). The app also works fine in normal UI mode (no command-line params passed).
EDIT:
Does anyone have any ideas about what might cause this? I am thinking it has something to do with the app not starting on the correct desktop, but I don't have a ton of experience with those and have no idea where to even begin.
EDIT 2: I only seem to have the issue when it is run remotely or through virtualization (still confirming whether this is truly the case).