gdb Python API: exceptions disappearing? - gdb

Does anyone know why, in certain places, Python code inside of gdb doesn't properly handle exceptions? Or, to clarify, perhaps the exception message is going somewhere other than the *gud* buffer, and gdb is not returning control to the prompt as expected.
(I'm using GNU gdb (GDB) 7.11.50.20160212-git in Emacs (24.5.1) gud mode.)
For example:
class SomeEvent():
    def __init__(self, ...):
        ... do something ...

    def __call__(self):
        ... do something BAD here ...

gdb.post_event(SomeEvent())
When 'SomeEvent' is handled, it will just execute '__call__' up to the bad code, return, and then continue normal operation (as far as I can observe).
I've noticed this behavior in other 'callback' type methods, such as Stop() of a subclassed gdb.Breakpoint.

gdb.post_event ignores exceptions when the event object is invoked. You can see this clearly in the source code, in gdbpy_run_events:
/* Ignore errors. */
call_result = PyObject_CallObject (item->event, NULL);
if (call_result == NULL)
  PyErr_Clear ();
This seems like a bug to me -- it would be more useful to print a stack trace or something instead.
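Until that changes, one workaround is to report the exception yourself before gdb discards it. A minimal sketch, assuming the SomeEvent callable from the question above (report_exceptions is my own helper, not part of the gdb API); the same idea applies to other callbacks such as Stop() on a subclassed gdb.Breakpoint:

import traceback
import gdb

def report_exceptions(event):
    """Wrap an event callable so its traceback is printed instead of silently dropped."""
    def wrapper():
        try:
            event()
        except Exception:
            traceback.print_exc()
    return wrapper

gdb.post_event(report_exceptions(SomeEvent()))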

Wrap every function of every class of a module

Objective
Wrap every function of every class of gspread module.
I know there are countless posts on the subject, and most of them unanimously instruct to use decorators.
I'm not too familiar with decorators and felt like that approach is not as seamless as I hoped for; perhaps I didn't understand it correctly.
But, I found this answer which "felt" like what I'm looking for.
(poor) Attempt
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import os
import inspect

class GoogleSheetAPI:
    def __init__(self):
        f = os.path.join(os.path.dirname(__file__), 'credentials.json')
        os.environ.setdefault('GOOGLE_APPLICATION_CREDENTIALS', f)
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(f, scope)
        self.client = gspread.authorize(credentials)
        self.client.login()

def SafeCall(f):
    try:
        print 'before call'
        f()
        print 'after call'
    except:
        print 'exception caught'
        return None

for class_name, c in inspect.getmembers(gspread, inspect.isclass):
    for method_name, f in inspect.getmembers(c, inspect.ismethod):
        setattr(c, f, SafeCall(f))  # TypeError: attribute name must be string, not 'instancemethod'
g = GoogleSheetAPI()
spreadsheet = g.client.open_by_key('<ID>') # calls a function in gspread.Client
worksheet = spreadsheet.get_worksheet(0) # calls a function in gspread.Spreadsheet
worksheet.add_rows(['key','value']) # calls a function in gspread.Worksheet
Notes
When I use the word "seamless" I mean that considering my code has many calls to many gspread functions, I want to change as little as possible. Using inspect/setattr seems like the perfect/seamless trick.
There are actually three obvious issues with your code.
The first one is the TypeError, which is easy to solve FWIW: as the error message raised by setattr() states, "attribute name must be string, not 'instancemethod'". You are indeed passing f (the method itself) instead of method_name. What you want here is of course:
setattr(c, method_name, SafeCall(f))
The second issue is that your SafeCall "decorator" is NOT a decorator. A decorator (well, the kind of decorator you want here at least) returns a function that wraps the original one; your current implementation just calls the original function. What you wrote is, in fact, almost the body of the wrapper that SafeCall should return. An example of a proper decorator would be:
def decorator(func):
    def wrapper(*args, **kw):
        print("before calling {}".format(func))
        result = func(*args, **kw)
        print("after calling {}".format(func))
        return result
    return wrapper
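Since you mention not being too familiar with decorators: applying one just means calling it on a function and using the returned wrapper, either via the @ syntax or manually (the latter being what the setattr() line effectively does). A tiny illustration:

@decorator
def add(a, b):
    return a + b

add(2, 3)  # prints the "before"/"after" lines and returns 5

# equivalent, without the @ syntax:
def sub(a, b):
    return a - b

sub = decorator(sub)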
And finally, the third obvious issue is here:
except:
    print 'exception caught'
    return None
You certainly don't want this. This
1/ will catch absolutely everything (including SystemExit, which is what Python raises on sys.exit() calls, and StopIteration, which is how iterators signal they are exhausted),
2/ will discard all the very useful debugging information, making it impossible to diagnose what actually went wrong,
3/ will return something that may be plain unusable, so you'll have to test the return value of each method call; and since you won't know what went wrong, you won't be able to handle the issue otherwise than by printing "oops, something went wrong but don't ask me what, where or why" and exiting the program, which is definitely not better than letting the exception propagate: the program will crash in both cases, but at least if you leave the exception alone you'll have some hints about what caused the issue,
4/ or, much worse, will return a valid return value for the method (yes, quite a few methods are designed to change state and return None), so you won't even know something went wrong and will happily continue execution, which is a sure way to end up with incorrect results and corrupted data,
5/ not to mention that the methods you're decorating this way very probably call each other and use (expected) exceptions internally (with proper exception handling), so you are actually introducing bugs into an otherwise working (or mostly working) library.
IOW, this is probably the worst antipattern you can ever think of...
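That said, if the goal is only to trace calls rather than hide failures, a wrapper that logs and re-raises avoids most of the problems above. A rough sketch combining the two fixes (safe_call is my own name, and on Python 3 you would need inspect.isfunction instead of inspect.ismethod, since unbound methods are plain functions there):

import inspect
import traceback
import gspread

def safe_call(func):
    def wrapper(*args, **kwargs):
        print("before calling {}".format(func.__name__))
        try:
            result = func(*args, **kwargs)
        except Exception:
            traceback.print_exc()  # keep the debugging information...
            raise                  # ...and let the caller decide what to do
        print("after calling {}".format(func.__name__))
        return result
    return wrapper

for class_name, cls in inspect.getmembers(gspread, inspect.isclass):
    for method_name, method in inspect.getmembers(cls, inspect.ismethod):
        setattr(cls, method_name, safe_call(method))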

Is there an easy way to detect at runtime, whether JRuby profiling is enabled?

When profiling only parts of my JRuby program, I proceed as follows: I pass the option --profile.api to JRuby, and then do something like:
require 'jruby/profiler'
pdata = JRuby::Profiler.profile { my_code_to_be_profiled }
If the caller of the program forgets to pass --profile.api, the profile method raises an exception.
I would now like to test at runtime whether profiling is enabled or not. How can this be done in a good way? One possibility would be to just try profiling an empty block and see whether I get an exception:
require 'jruby/profiler'

profiling_enabled = true # Let's be optimistic
begin
  JRuby::Profiler.profile {}
rescue RuntimeError
  profiling_enabled = false
end
This works, but doesn't look very elegant. Can anybody offer a better solution?
Something along these lines should work:
if JRuby.runtime.instance_config.is_profiling
  pdata = JRuby::Profiler.profile { my_code_to_be_profiled }
end

Python Error message customization [duplicate]

When I raise my own exceptions in my Python libraries, the exception stack shows the raise line itself as the last item of the stack. This is obviously not an error, and it is conceptually right, but it puts the focus on something that is not useful for debugging when the code is being used externally, for example as a module.
Is there a way to avoid this and force Python to show the previous-to-last stack item as the last one, like the standard Python libraries do?
Due warning: modifying the behaviour of the interpreter is generally frowned upon. And in any case, seeing exactly where an error was raised may be helpful in debugging, especially if a function can raise an error for several different reasons.
If you use the traceback module, and replace sys.excepthook with a custom function, it's probably possible to do this. But making the change will affect error display for the entire program, not just your module, so is probably not recommended.
You could also look at putting code in try/except blocks, then modifying the error and re-raising it. But your time is probably better spent making unexpected errors unlikely, and writing informative error messages for those that could arise.
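For the try/except route, a minimal sketch of what that can look like (the helper names are made up, and the "from None" syntax is Python 3):

def _parse(value):
    return int(value)  # may raise ValueError internally

def public_api(value):
    try:
        return _parse(value)
    except ValueError:
        # Re-raise with a caller-oriented message; "from None" suppresses the
        # chained "during handling of the above exception" context, although
        # the raise line itself still appears in the traceback.
        raise ValueError("public_api() needs an integer-like value, got %r" % value) from None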
You can create your own exception hook in Python. Below is an example of the code that I am using.
import sys
import traceback

def exceptionHandler(got_exception_type, got_exception, got_traceback):
    listing = traceback.format_exception(got_exception_type, got_exception, got_traceback)
    # Remove the listing of the raise statement itself (the raise line).
    del listing[-2]
    filelist = ["org.python.pydev"]  # avoiding the debugger modules
    listing = [item for item in listing if len([f for f in filelist if f in item]) == 0]
    files = [line for line in listing if line.startswith("  File")]
    if len(files) == 1:
        # only one file, remove the header
        del listing[0]
    print>>sys.stderr, "".join(listing)
And below are some lines that I have used in my custom exception code.
sys.excepthook = exceptionHandler
raise Exception("My Custom error message.")
In the exceptionHandler method you can add file names or module names to the filelist list if you want to ignore any unwanted files. I have ignored the PyDev modules since I am using the PyDev debugger in Eclipse.
The above is used in my own module for a specific purpose. You can modify it and use it for your modules.
I'd suggest not using the exception mechanism to validate arguments, as tempting as that is. Coding with exceptions as conditionals is like saying, "crash my app if, as a developer, I don't think of all the bad conditions my provided arguments can cause." Perhaps reserving exceptions for things that are not only out of your control but also under the control of something else, like the OS, the hardware or the Python language, would be more logical; I don't know. In practice, however, I do use exceptions the way you are requesting a solution for.
To answer your question, in part, it is just as simple to code thusly:
class MyObject(object):
    def saveas(self, filename):
        if not validate_filename(filename):
            return False
        ...

caller:
if not myobject.saveas(filename): report_and_retry()
Perhaps not a great answer, just something to think about.

lldb: conditional breakpoint on a most derived type

A typical debugging pattern:
class Button : public MyBaseViewClass
{
    ...
};

....

void MyBaseViewClass::Resized()
{
    // <---- here I want to stop in case MyBaseViewClass is really a Button, but not a
    // ScrollBar, Checkbox or something else. I.e. I want a breakpoint condition on a
    // dynamic (most derived) type
}
Trivial approaches like a breakpoint condition on strstr(typeid(*this).name(), "Button") don't work, because for typeid the lldb console reports:
(lldb) p typeid(*this)
error: you need to include <typeinfo> before using the 'typeid' operator
error: 1 errors parsing expression
And of course, doing the #include in the console before making the call doesn't help.
You can do this in Python pretty easily. Set the breakpoint - say it is breakpoint 1 - then do:
(lldb) break command add -s python 1
Enter your Python command(s). Type 'DONE' to end.
def function (frame, bp_loc, internal_dict):
    """frame: the lldb.SBFrame for the location at which you stopped
       bp_loc: an lldb.SBBreakpointLocation for the breakpoint location information
       internal_dict: an LLDB support object not to be used"""
    this_value = frame.FindVariable("this", lldb.eDynamicDontRunTarget)
    this_type = this_value.GetType().GetPointeeType().GetName()
    if this_type == "YourClassNameHere":
        return True
    return False
DONE
The only tricky bit here is that when calling FindVariable I passed lldb.eDynamicDontRunTarget, which tells lldb to fetch the "dynamic" type of the variable, as opposed to the static type. As an aside, I could have also used lldb.eDynamicRunTarget, but I happen to know lldb doesn't have to run the target to get C++ dynamic types.
This way of solving the problem is nice in that you don't have to have used RTTI for it to work (though we'll only be able to get the type of classes that have some virtual method, since we use the vtable to do this magic). It will also be faster than a method that requires running code in the debuggee, as your expression would have to do.
BTW, if you like this trick, you can also put the breakpoint code into a python function in some python file (just copy the def above), then use:
(lldb) command script import my_functions.py
(lldb) breakpoint command add -F my_functions.function
so you don't have to keep retyping it.
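For reference, my_functions.py would then contain something like the following sketch; the class name "Button" is just the example from this question and should be replaced with whatever most derived type you want to stop on:

# my_functions.py -- reusable version of the breakpoint callback above
import lldb

def function(frame, bp_loc, internal_dict):
    """Stop only when 'this' is dynamically a Button."""
    this_value = frame.FindVariable("this", lldb.eDynamicDontRunTarget)
    this_type = this_value.GetType().GetPointeeType().GetName()
    return this_type == "Button"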

SWIG: Reporting Python exceptions from C++ code

I am using a library whose API docs specify to define a class inherited from some particular class of the library. The library itself is written in C++ and the bindings to Python are generated using SWIG. The problem is, when I run my Python code, no matter what exception Python throws, I get the error saying "terminate called after throwing an instance of 'Swig::DirectorMethodException'".
I would like the exceptions raised by the Python code to be reported while executing my program, especially in those cases where I get a ZeroDivisionError.
I tried to hack a bit by following the method described in the SWIG documentation at http://www.swig.org/Doc2.0/Python.html#Python_nn36 but with no luck. I still get the same message "terminate called after throwing an instance of 'Swig::DirectorMethodException'" no matter what I put in the module.i file.
Can some one please give me pointers on how to go about with this problem, so that Python exceptions are reported as they are?
This reports the exception raised by Python in the console of the program.
It is the useful fix from Madhusudan.C.S.; see his comment on ginbot's answer.
I am putting it up as an answer so that it becomes more visible.
/* MyInterface.i */
%module(directors="1") MyInterface

%feature("director:except") {
    if ($error != NULL) {
        PyObject *ptype, *pvalue, *ptraceback;
        PyErr_Fetch(&ptype, &pvalue, &ptraceback);
        PyErr_Restore(ptype, pvalue, ptraceback);
        PyErr_Print();
        Py_Exit(1);
    }
}
I don't know how far along you are with your code base, so this may be of little use, but I had better luck with boost::python than SWIG. Then you could do this: boost::python Export Custom Exception