pygame program exits with no error message when run with pythonw - python-2.7

I'm trying to run a pygame program using pythonw to avoid having the console window show up. This causes a weird issue related to print statements.
Basically, the program will just exit after a few seconds with no error message. The more printing I do, the faster it happens.
If I run it in IDLE or at the command prompt (or on Linux) the program works fine. The problem only happens when launched with pythonw (right-click, Open With, pythonw).
I'm using Python 2.7.11 on Windows XP 32-bit, with pygame 1.9.1release.
Is there a workaround for this? Why does the program simply terminate with no error?
import pygame
from pygame.locals import *

succeeded, failed = pygame.init()
display_surface = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()
terminate = False
while terminate is False:
    for event in pygame.event.get():
        if event.type == QUIT:
            terminate = True
    area = display_surface.fill((0, 100, 0))
    pygame.display.flip()
    elapsed = clock.tick(20)
    print str(elapsed) * 20
pygame.quit()

You don't need to remove print statements. Save them for later debugging. ;-)
Two steps to solve this problem:
First, keep all the code in a .py file - don't rename it to .pyw; say it is actualCode.py.
Then, create a new file runAs.pyw with the following lines in it:
# In runAs.pyw, we first send stdout to a StringIO so that it is not printed
import sys       # access to stdout
import StringIO  # StringIO implements a file-like class that needs no file on disk
sys.stdout = StringIO.StringIO()  # sends stdout to StringIO (not printed anymore)
import actualCode  # or whatever the name of your file is, see further details below
Note that just importing actualCode runs the file, so in actualCode.py you should not wrap the code to be executed in what I call the "is this the main running file" condition. For example,
# In actualCode.py file
....
....
....
if __name__ == '__main__':  # Don't use this condition; it evaluates to False when imported
    ...  # These lines won't be executed when this file is imported,
    ...  # so keep these lines outside the condition
# Note: The file in your question, as it is, is fine
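A related precaution, not part of the original answer but worth sketching: under pythonw, stderr has no console either, so an uncaught exception can die just as silently as the print statements do. A minimal variant of runAs.pyw (the log file name here is just an assumption) redirects both streams to a file, so print output and tracebacks are still available afterwards:
# runAs.pyw - minimal sketch; 'runAs.log' is an arbitrary file name
import sys

log = open('runAs.log', 'w')
sys.stdout = log   # capture print output instead of writing to a missing console
sys.stderr = log   # capture tracebacks too, so crashes are not silent
import actualCode  # importing the module runs it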

Related

Compiled console app quits immediately when importing ConfigParser (Python 2.7.12)

I am very new to Python and am trying to add some functionality to an existing Python program. I want to read values from a config INI file like this:
[Admin]
AD1 = 1
AD2 = 2
RSW = 3
When I execute the following code from IDLE, it works as it should (I was already able to read in values from the file, but deleted that part for a shorter code snippet):
#!/usr/bin/python
import ConfigParser
# built-in python libs
from time import sleep
import sys

def main():
    print("Test")
    sleep(2)

if __name__ == '__main__':
    main()
But the compiled exe quits before printing and waiting 2 seconds. If I comment out the import of ConfigParser, the exe runs fine.
This is how I compile into exe:
from distutils.core import setup
import py2exe, sys

sys.argv.append('py2exe')

setup(
    options={'py2exe': {'bundle_files': 1}},
    zipfile=None,
    console=['Test.py'],
)
What am I doing wrong? Is there maybe another easy way to read in a configuration, if ConfigParser for some reason doesn't work in a compiled exe?
Thanks in advance for your help!
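No answer is included for this question here, but one thing that sometimes helps with py2exe (an assumption, not a confirmed fix for this case) is forcing the module into the bundle explicitly via its standard includes option:
# setup.py - sketch assuming py2exe simply fails to pick up ConfigParser on its own
from distutils.core import setup
import py2exe, sys

sys.argv.append('py2exe')

setup(
    options={'py2exe': {'bundle_files': 1,
                        'includes': ['ConfigParser']}},
    zipfile=None,
    console=['Test.py'],
)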

Close Python IDLE shell without prompt

I am working on a script (script A) that needs to open a new Python IDLE shell, automatically run another script (script B) in it, and then close it. The following code is what I use for this purpose:
import sys
sys.argv=['','-n','-t','My New Shell','-c','execfile("VarLoader.py")']
import idlelib.PyShell
idlelib.PyShell.main()
However, I can't get the new shell to close automatically. I have tried adding the following to script B, but either it doesn't close the new shell or a window pops up asking whether I want to kill it:
exit()
and
import sys
sys.exit()
Instead of monkeypatching or modifying the IDLE source code to make your program skip the prompt to exit, I'd recommend creating a subclass of PyShell that overrides the close method to behave the way you want:
import idlelib.PyShell

class PyShell_NoExitPrompt(idlelib.PyShell.PyShell):
    def close(self):
        "Extend EditorWindow.close(), does not prompt to exit"
##        if self.executing:
##            response = tkMessageBox.askokcancel(
##                "Kill?",
##                "Your program is still running!\n Do you want to kill it?",
##                default="ok",
##                parent=self.text)
##            if response is False:
##                return "cancel"
        self.stop_readline()
        self.canceled = True
        self.closing = True
        return idlelib.PyShell.EditorWindow.close(self)
The remaining issue is that idlelib.PyShell.main would not use your subclass. You can, however, create a copy of that function - without modifying the original - with the FunctionType constructor, so that the copy picks up your modified class:
import functools
from types import FunctionType

def copy_function(f, namespace_override):
    """creates a copy of a function (code, signature, defaults) with a modified global scope"""
    namespace = dict(f.__globals__)
    namespace.update(namespace_override)
    new_f = FunctionType(f.__code__, namespace, f.__name__, f.__defaults__, f.__closure__)
    # update_wrapper(wrapper, wrapped): copy f's metadata onto new_f and return new_f
    return functools.update_wrapper(new_f, f)
Then you can run your extra IDLE shell like this:
import sys

# there is also a way to prevent the need to override sys.argv, but that isn't as concerning to me
sys.argv = ['', '-n', '-t', 'My New Shell', '-c', 'execfile("VarLoader.py")']

hacked_main = copy_function(idlelib.PyShell.main,
                            {"PyShell": PyShell_NoExitPrompt})
hacked_main()
Now you can leave IDLE the way it is and have your program work the way you want it to. (It is also compatible with other versions of Python!)

Trapping a shutdown event in Python

I posted a question about how to catch a "sudo shutdown -r 2" event in Python and was sent to this thread: Run code in python script on shutdown signal.
I'm running a Raspberry Pi v2 with Jessie.
I have read about signal and have tried to follow the ideas in the above thread, but so far I have not been successful. Here is my code:
import time
import signal
import sys

def CloseAll(Code, Frame):
    f = open('/mnt/usbdrive/output/TestSignal.txt','a')
    f.write('Signal Code:' + Code)
    f.write('Signal Frame:' + Frame)
    f.write('\r\n')
    f.close()
    sys.exit(0)

signal.signal(signal.SIGTERM,CloseAll)
print('Program is running')
try:
    while True:
        #get readings from sensors every 15 seconds
        time.sleep(15)
        f = open('/mnt/usbdrive/output/TestSignal.txt','a')
        f.write('Hello ')
        f.write('\r\n')
        f.close()
except KeyboardInterrupt:
    f = open('/mnt/usbdrive/output/TestSignal.txt','a')
    f.write('Done')
    f.write('\r\n')
    f.close()
The program runs in a "screen" session/window and reacts as expected to a Ctrl-C. However, when I exit the screen session, leaving the program running, and enter "sudo shutdown -r 2", the Pi reboots as expected after 2 minutes, but the TestSignal.txt file does not show that the signal.SIGTERM event was processed.
What am I doing wrong? Or better yet, how can I trap the shutdown event, usually initiated by a cron job, and close my Python program running in a screen session gracefully?
When you do not wait for such an event, but in a parallel session send SIGTERM to that process (e.g. by calling kill -15 $PID on the process id $PID of the running python script), you should see an instructive error message ;-)
Also, the comment about the mount point should be of interest after you have repaired the Python errors (TypeError: cannot concatenate 'str' and 'int' objects).
Try something like:
import time
import signal
import sys

LOG_PATH = '/mnt/usbdrive/output/TestSignal.txt'

def CloseAll(Code, Frame):
    f = open(LOG_PATH, 'a')
    f.write('Signal Code:' + str(Code) + ' ')
    f.write('Signal Frame:' + str(Frame))
    f.write('\r\n')
    f.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, CloseAll)
print('Program is running')
try:
    while True:
        # get readings from sensors every 15 seconds
        time.sleep(15)
        f = open(LOG_PATH, 'a')
        f.write('Hello ')
        f.write('\r\n')
        f.close()
except KeyboardInterrupt:
    f = open(LOG_PATH, 'a')
    f.write('Done')
    f.write('\r\n')
    f.close()
as a starting point. If this works somehow on your system, why not rewrite some portions like:
# ... 8< - - -
def close_all(signum, frame):
    with open(LOG_PATH, 'a') as f:
        f.write('Signal Code:%d Signal Frame:%s\r\n' % (signum, frame))
    sys.exit(0)

signal.signal(signal.SIGTERM, close_all)
# 8< - - - ...
Edit: To further isolate the error and move closer to a production-like mode, one might rewrite the code like this (given that syslog is running on the machine, which it should be; I have never worked on devices of that kind):
#! /usr/bin/env python
import datetime as dt
import time
import signal
import sys
import syslog

LOG_PATH = 'foobarbaz.log'  # '/mnt/usbdrive/output/TestSignal.txt'

def close_all(signum, frame):
    """Log to system log. Do not spend too much time after receipt of TERM."""
    syslog.syslog(syslog.LOG_CRIT, 'Signal Number:%d {%s}' % (signum, frame))
    sys.exit(0)

# register handler for SIGTERM(15) signal
signal.signal(signal.SIGTERM, close_all)

def get_sensor_readings_every(seconds):
    """Mock for sensor readings every seconds seconds."""
    time.sleep(seconds)
    return dt.datetime.now()

def main():
    """Main loop - maybe check usage patterns for file resources."""
    syslog.syslog(syslog.LOG_USER, 'Program %s is running' % (__file__,))
    try:
        with open(LOG_PATH, 'a') as f:
            while True:
                f.write('Hello at %s\r\n' % (
                    get_sensor_readings_every(15),))
    except KeyboardInterrupt:
        with open(LOG_PATH, 'a') as f:
            f.write('Done at %s\r\n' % (dt.datetime.now(),))

if __name__ == '__main__':
    sys.exit(main())
Points to note:
1. The log file for the actual measurements is separate from the logging channel for operational alerts.
2. The log file handle is safeguarded in context managing blocks and in usual operation is just kept open.
3. For alerting, the syslog channel is used.
4. As a sample of the message routing: syslog.LOG_USER on my system (OS X) gives me a message in all terminals, whilst the syslog.LOG_ERR priority message in the signal handler only targets the system log.
5. It should be more to the point during shutdown hassle (not opening a file, etc.).
The last point (5.) is important in case all processes receive a SIGTERM during shutdown, i.e. all want to do something (slowing things down); maybe screen also does not accept any buffered input anymore (or does not flush) - note that stdout is block buffered, not line buffered.
The decoupling of the output channels should also ease the eventual disappearance of the mount point of the measurement log file.

How to keep repeating and executing a unit-test script until interruption in Python

I would like to keep repeating/executing a script forever, until Ctrl+C is pressed. My code below doesn't work: it executes once and calls it finished. The .py file being executed is a unit test file.
import subprocess
import time

fileNames = [
    'KeepRepeatingThisFile.py',
]

try:
    while True:
        for i in fileNames:
            execfile(str(i))
        time.sleep(20)
except KeyboardInterrupt:
    pass
Last few lines of KeepRepeatingThisFile.py:
# Execute test
if __name__ == '__main__':
    unittest.main()
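No answer is included for this question here, but the likely cause (my assumption) is that unittest.main() calls sys.exit() when the tests finish, and that SystemExit propagates out of execfile and ends the outer loop. A minimal sketch of a workaround is to run the test file in a child process with the already-imported subprocess module, so only the child exits:
import subprocess
import sys
import time

fileNames = [
    'KeepRepeatingThisFile.py',
]

try:
    while True:
        for name in fileNames:
            # run the tests in their own interpreter; sys.exit() there
            # only ends the child process, not this loop
            subprocess.call([sys.executable, name])
        time.sleep(20)
except KeyboardInterrupt:
    pass
Alternatively, calling unittest.main(exit=False) in the test file keeps unittest from calling sys.exit() at all.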

Python multiprocessing: how to exit cleanly after an error?

I am writing some code that makes use of the multiprocessing module. However, since I am a newbie, what often happens is that some error pops up, putting a halt to the main application.
However, that application's children still remain running, and I get a long, long list of running pythonw processes in my task manager.
After an error occurs, what can I do to make sure all the child processes are killed as well?
There are two pieces to this puzzle:
1. How can I detect and kill all the child processes?
2. How can I make a best effort to ensure my code from part 1 is run whenever one process dies?
For part 1, you can use multiprocessing.active_children() to get a list of all the active children and kill them with Process.terminate(). Note the use of Process.terminate() comes with the usual warnings.
from multiprocessing import Process
import multiprocessing

def f(name):
    print 'hello', name
    while True: pass

if __name__ == '__main__':
    for i in xrange(5):
        p = Process(target=f, args=('bob',))
        p.start()

    # At user input, terminate all processes.
    raw_input("Press Enter to terminate: ")
    for p in multiprocessing.active_children():
        p.terminate()
One solution to part 2 is to use sys.excepthook, as described in this answer. Here is a combined example.
from multiprocessing import Process
import multiprocessing
import sys
from time import sleep

def f(name):
    print 'hello', name
    while True: pass

def myexcepthook(exctype, value, traceback):
    for p in multiprocessing.active_children():
        p.terminate()

if __name__ == '__main__':
    for i in xrange(5):
        p = Process(target=f, args=('bob',))
        p.start()
    sys.excepthook = myexcepthook

    # Sleep for a bit and then force an exception by doing something stupid.
    sleep(1)
    1 / 0
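A simpler variant worth sketching (not part of the original answer, just an alternative under the assumption that cleanup is only needed when the main process dies): mark the children as daemon processes. The multiprocessing module terminates daemonic children automatically when the parent process exits, including after an unhandled exception:
from multiprocessing import Process
from time import sleep

def f(name):
    print 'hello', name
    while True: pass

if __name__ == '__main__':
    for i in xrange(5):
        p = Process(target=f, args=('bob',))
        p.daemon = True  # daemonic children are terminated when the parent exits
        p.start()

    # The unhandled error still ends the main process, but no orphaned
    # pythonw processes are left behind.
    sleep(1)
    1 / 0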