from django.http import HttpResponse

def gettable(request):
    reqdata = request.POST
    data = reqdata['dataabc']
    # print data
    return HttpResponse("OK")
This works, but as soon as I uncomment print data, I see a 500 response in my dev console.
Why could this be happening? I just need to print a couple of things to the console to test.
Python 3
In Python 3, print is a function. Using print as a statement (print data) is a SyntaxError in Python 3, so the module fails to compile, your view never runs, and Django answers the request with an error 500.
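The fix is to call print as a function, which is also valid in Python 2 for a single argument:

print(data)  # works on both Python 2 and Python 3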
logging module
Using print is bad practice in libraries and in server-side / background-task code; use the logging module instead. Django even has a section in its documentation on how to configure and use logging properly.
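As a minimal sketch of that (the logger name and message are just illustrative), the view from the question could log instead of printing:

import logging

from django.http import HttpResponse

logger = logging.getLogger(__name__)

def gettable(request):
    data = request.POST['dataabc']
    logger.debug("received dataabc=%r", data)  # routed by your LOGGING config
    return HttpResponse("OK")

If nothing shows up in the console, the logger's level in your LOGGING setting is the first thing to check.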
mod_wsgi and print
https://code.google.com/p/modwsgi/wiki/DebuggingTechniques
Prior to mod_wsgi version 3.0, you would see this when using print with sys.stdout:
IOError: sys.stdout access restricted by mod_wsgi
and you would need to write explicitly to an unrestricted stream such as sys.stderr, e.g.:

import sys
print >> sys.stderr, data # python 2
You can, however, disable the restriction by editing your Apache configuration and using the WSGIRestrictStdout directive.
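For example, in the Apache configuration (at server-wide scope):

WSGIRestrictStdout Off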
Completely agree with dnozay, the logging module is the way to go.
More about print statements with WSGI:
http://blog.dscpl.com.au/2009/04/wsgi-and-printing-to-standard-output.html?m=1
I'm trying to debug some code in a Jupyter notebook. I've tried 3 or 4 different methods, and they all suffer from the same problem:
--Return--
None
> <ipython-input-22-04c6f5c205d1>(3)<module>()
1 import IPython.core.debugger as dbg
2 dber = dbg.Tracer()
----> 3 dber()
4 tst = huh.plot(ret_params=True)
5 type(tst)
ipdb> n
> y:\miniconda\lib\site-packages\ipython\core\interactiveshell.py(2884)run_code()
2882 finally:
2883 # Reset our crash handler in place
-> 2884 sys.excepthook = old_excepthook
2885 except SystemExit as e:
2886 if result is not None:
As you can see, the n command, which according to the pdb documentation should execute the next line, instead drops me into IPython's own machinery (run_code in interactiveshell.py) rather than the next line of my code (I'm assuming ipdb is just pdb adapted to work on IPython, especially since I can't find any command documentation that refers specifically to ipdb and not pdb).
s has the same problem. Stepping into the plot call is actually what I want to do (from what I understand, this is what s is supposed to do), but what I get is exactly the same as with n. I also just tried r and get the same problem.
Every example I've seen just uses Tracer()() or IPython.core.debugger.PDB().set_trace() to set a breakpoint in the line that follows the command, but both cause the same problems (and, I assume, are actually the exact same thing).
I also tried %debug (MultipleInstanceError) and %%debug (it doesn't show the code on the line being executed -- just says which line; using s doesn't step into the function, it just runs the line).
Edit: it turns out that, according to a blog post from April of this year, plain pdb should also work. It does let me debug the notebook interactively, but it only prints the current line being debugged (probably not a bug), and it has the same problem as IPython's set_trace() and Tracer()().
On a plain IPython console, IPython's set_trace (the only one I've tested) works just fine.
I encountered the same problem when debugging in a Jupyter Notebook. What works for me, however, is calling set_trace() inside a function. Why this happens is explained here, though I don't really understand why others don't encounter the problem. Anyway, if you need a pragmatic solution and want to debug a self-written function, try this:
from IPython.core.debugger import set_trace

def thisfunction(x):
    set_trace() # start debugging when calling the function
    x += 2
    return x

thisfunction(5) # ipdb console opens and I can use 'n'
Now I can use 'n' and the debugging process runs the next line without problems. If I use the following code, however, I run into your above-mentioned problem.
from IPython.core.debugger import set_trace

def thisfunction(x):
    x += 2
    return x

set_trace() # start debugging before calling the function.
# Calling 's' in the ipdb console to step inside "thisfunction" produces an error
thisfunction(5)
Hope this helps until somebody solves the problem completely.
OK, so I am trying to run one Python script (test1.py) and print something into a webpage at the end. I also want a subprocess (test2.py) to begin during this script. However, test2.py is going to take longer to execute than test1.py. The problem I am having is that test1.py is held up until test2.py completes. I have seen numerous people post similar questions, but none of the solutions I've seen fixed my issue.
Here's a heavily simplified version of my code that demonstrates the issue. I am running it on a local server and displaying it in Firefox.
test1.py:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import cgi
import cgitb; cgitb.enable()
import subprocess
form = cgi.FieldStorage()
print("Content-Type: text/html") # HTTP header to say HTML is following
print # blank line, end of headers
p = subprocess.Popen(['python','test2.py'])
print "script1 ended"
test2.py:
from time import sleep
print "script2 sleeping...."
sleep(60)
print "script2 ended"
Essentially I want to execute test1.py and have it say "script1 ended" in firefox, without having to wait 60 seconds for the subprocess to exit. I don't want anything from the test2 subprocess to show up.
EDIT: For anyone interested.. I solved the problem in windows by using os.startfile(), and as it turns out subprocess.Popen([...], stderr=subprocess.STDOUT, stdout=subprocess.PIPE) works in Linux. The scripts run in the background without delaying anything in the parent script when I do a wget on it.
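To illustrate (a quick sketch of that last approach, untested; note that a very chatty child could still block once the pipe buffer fills up):

import subprocess

# Redirect the child's output away from the inherited stdout (the HTTP
# response), so the response is not held open until test2.py finishes.
p = subprocess.Popen(['python', 'test2.py'],
                     stderr=subprocess.STDOUT,
                     stdout=subprocess.PIPE)
print("script1 ended")  # sent immediately; test2.py keeps running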
I'm embedding Python in a C++ application (using the Python C API) and I want Python exceptions that are thrown to be handled by an exception handler mechanism that is already setup in C++. This exception handler mechanism will print out the exception itself, so I don't want the embedded Python interpreter to print it to screen. Can I setup the embedded Python interpreter to suppress console output in some way?
You can plug a stub stream handler into both the standard output and standard error channels in Python.
Here is a sample (including restoring both channels afterwards):
import sys
import cStringIO
print("Silencing stdout and stderr")
_stdout = sys.stdout
_stderr = sys.stderr
sys.stdout = cStringIO.StringIO()
sys.stderr = cStringIO.StringIO()
print("This should not be printed out")
sys.stdout = _stdout
sys.stderr = _stderr
print("Revoiced stdout and stderr")
Running this sample should result in the following output:
Silencing stdout and stderr
Revoiced stdout and stderr
[UPDATED]
And here is the memory-saving variant, sending the output to devnull:
import os, sys

_stdout, _stderr = sys.stdout, sys.stderr
with open(os.devnull, 'w') as devnull:
    sys.stdout = devnull
    sys.stderr = devnull
    # ... here your code
# restore the real streams; otherwise the closed devnull file stays installed
sys.stdout, sys.stderr = _stdout, _stderr
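Both samples are Python 2 (cStringIO is gone in Python 3). On Python 3 the same silencing can be written with io.StringIO and the contextlib redirectors:

import io
from contextlib import redirect_stdout, redirect_stderr

buf = io.StringIO()
with redirect_stdout(buf), redirect_stderr(buf):
    print("This should not be printed out")
print("Revoiced stdout and stderr")  # the captured text is in buf.getvalue()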
How did you execute the code? If an exception is not caught and printed by Python itself (i.e. nobody has installed a handler on the Python side), it propagates to the embedding layer, where it can be examined from C (see the section Exception Handling in the reference manual).
If you call the Python script from C code, the calling function will return NULL or -1 (an exception has occurred), and you can then use the above-mentioned API to read the exception (e.g. PyErr_Occurred(), ...).
But if things are handled by the Python code itself (if someone installed a hook or something inside the code and prints the exception directly), you have no chance to catch things easily. Maybe you can add a hook of your own that calls your code; this can be done via sys.excepthook(type, value, traceback). Alternatively, find that spot and remove it from the code, or look at other methods to circumvent it.
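As a sketch of that hook idea (the handler body is a placeholder; how you hand the exception over to the C++ host is up to you):

import sys

def quiet_hook(exc_type, exc_value, exc_tb):
    # Print nothing; instead stash the exception where the embedding
    # C++ host can pick it up, e.g. via an extension module it exposes.
    pass

sys.excepthook = quiet_hook  # replaces the default printing handler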
This site is very informative about it: Python API/include files
Edit:
For redirecting stdout or stderr, you can use the C API:
FILE* sto = fopen("out.txt", "w+");
FILE* ste = fopen("outErr.txt", "w+");
PySys_SetObject("stdout", PyFile_FromFile(sto, "out.txt","wb", fclose));
PySys_SetObject("stderr", PyFile_FromFile(ste, "outErr.txt","wb", fclose));
before calling any other Python code. PyFile_FromFile may also be replaced by something else, I think, but I have not used this so far.
I get this error while running a python script (called by ./waf --run):
TypeError: abspath() takes exactly 1 argument (2 given)
The problem is that it is indeed called with: obj.path.abspath(env).
This is not a Python issue: that code worked perfectly before, and it's part of a huge project (ns3), so I doubt it is broken.
However, something must have changed in my settings, because this code worked before and now it doesn't.
Can you help me figure out why I get this error?
Here is the python code: http://pastebin.com/EbJ50BBt. The error occurs line 61.
The documentation of the method Node.abspath() states it does not take an additional env parameter, and I confirmed that it never did by checking the git history. I suggest replacing
if not (obj.path.abspath().startswith(launch_dir)
        or obj.path.abspath(env).startswith(launch_dir)):
    continue
with
if not obj.path.abspath().startswith(launch_dir):
    continue
If this code worked before, it is probably because the first operand of the or expression happened to always be True, so the second operand was never evaluated (or short-circuits). It seems to be a bug in your code anyway.
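A tiny demonstration of that short-circuit behaviour (boom is just an illustrative helper):

def boom():
    raise TypeError("never reached")

# When the left operand is truthy, the right operand is not evaluated,
# so a broken call on the right stays hidden until the left side is False.
print(True or boom())   # prints True, boom() never runs
print(False or boom())  # now boom() runs and the TypeError surfaces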
You should have a file name and line number in the traceback. Go to that file and line and find out what "obj" and "obj.path.abspath" are. A simple solution would be to put the offending line in a try/except block to print (or log) more information, i.e.:
# your code here
try:
    whatever = obj.path.abspath(env)
except Exception, e:
    # if you have a logger
    logger.exception("oops : obj is '%s' (%s)" % (obj, type(obj)))
    # else
    import sys
    print >> sys.stderr, "oops, got %s on '%s' (%s)" % (e, obj, type(obj))
    # if you can run this code directly from a shell, the next line will
    # drop you into the interactive debugger so you can inspect the
    # offending object and the whole call stack; else comment it out
    import pdb; pdb.set_trace()
    # and re-raise the exception
    raise
My bet is that "obj.path" is NOT the Python os.path module, and that "obj.path.abspath" is an instance method that only takes "self" as argument.
The problem came from the fact that waf apparently doesn't like symlinks; the Python code must not be prepared for such cases.
Problem solved, thanks for your help everybody.
I am looking for something like uWSGI + django autoreload mode for Flask.
I am running uwsgi version 1.9.5 and the option
uwsgi --py-autoreload 1
works great
If you're configuring uwsgi with command arguments, pass --py-autoreload=1:
uwsgi --py-autoreload=1
If you're using a .ini file to configure uwsgi and using uwsgi --ini, add the following to your .ini file:
py-autoreload = 1
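For example, a minimal .ini file (the module name is just a placeholder):

[uwsgi]
http = :8000
module = myapp:app
py-autoreload = 1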
For a development environment, you can try using uwsgi's --python-autoreload parameter.
Looking at the source code, it may work only in threaded mode (--enable-threads).
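For example (the WSGI file name is an assumption, and py-autoreload is the more commonly seen spelling of the same option):

uwsgi --http :8000 --wsgi-file app.py --enable-threads --py-autoreload 1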
You could try using supervisord as a manager for your uWSGI app. It also has a watch function that auto-reloads a process when a file or folder has been "touched"/modified.
You will find a nice tutorial here: Flask+NginX+Uwsgi+Supervisord
The auto-reloading functionality of development-mode Flask is actually provided by the underlying Werkzeug library. The relevant code is in werkzeug/serving.py -- it's worth taking a look at. But basically, the main application spawns the WSGI server as a subprocess that stats every active .py file once per second, looking for changes. If it sees any, the subprocess exits, and the parent process starts it back up again -- in effect reloading the changes.
There's no reason you couldn't implement a similar technique at the layer of uWSGI. If you don't want to use a stat loop, you can try using underlying OS file-watch commands. Apparently (according to Werkzeug's code), pyinotify is buggy, but perhaps Watchdog works? Try a few things out and see what happens.
Edit:
In response to the comment, I think this would be pretty easy to reimplement. Building on the example provided from your link, along with the code from werkzeug/serving.py:
""" NOTE: _iter_module_files() and check_for_modifications() are both
copied from Werkzeug code. Include appropriate attribution if
actually used in a project. """
import uwsgi
from uwsgidecorators import timer
import sys
import os

def _iter_module_files():
    # Yield the source file of every currently imported module.
    for module in sys.modules.values():
        filename = getattr(module, '__file__', None)
        if filename:
            old = None
            while not os.path.isfile(filename):
                old = filename
                filename = os.path.dirname(filename)
                if filename == old:
                    break
            else:
                if filename[-4:] in ('.pyc', '.pyo'):
                    filename = filename[:-1]
                yield filename

@timer(3)
def check_for_modifications(signum):  # uwsgidecorators passes the signal number
    # Function-static variable... you could make this global, or whatever
    mtimes = check_for_modifications.mtimes
    for filename in _iter_module_files():
        try:
            mtime = os.stat(filename).st_mtime
        except OSError:
            continue
        old_time = mtimes.get(filename)
        if old_time is None:
            mtimes[filename] = mtime
            continue
        elif mtime > old_time:
            uwsgi.reload()
            return

check_for_modifications.mtimes = {}  # init static
It's untested, but should work.
py-autoreload=1
in the .ini file does the job
import gevent.wsgi
import werkzeug.serving

@werkzeug.serving.run_with_reloader
def runServer():
    gevent.wsgi.WSGIServer(('', 5000), app).serve_forever()
(You can use an arbitrary WSGI server)
I am afraid that Flask is really too bare bones to have an implementation like this bundled by default.
Dynamically reloading code in production is generally a bad thing, but if you are concerned about a dev environment, take a look at this bash shell script: http://aplawrence.com/Unixart/watchdir.html
Just change the sleep interval to whatever suits your needs and substitute the echo command with whatever you use to reload uwsgi. I run uwsgi in master mode and just send a killall uwsgi command.