Trace method resolution order at runtime - django

I am working on some code (not mine) and I am trying to understand it.
There are 16 classes in the inheritance tree.
Here are the classes according to inspect.getmro(self.__class__):
foo_barcheck.views.barcheck.AdminListActionView
foo_barcheck.views.barcheck.ListActionMixin
djangotools.utils.urlresolverutils.UrlMixin
foo.views.ListActionView
foo.views.ContextMixin
foo.views.fooMixin
djangotools.views.ListActionView
djangotools.views.ListViewMixin
django.views.generic.list.MultipleObjectMixin
django.views.generic.edit.FormMixin
djangotools.views.DTMixin
django.views.generic.base.TemplateView
django.views.generic.base.TemplateResponseMixin
django.views.generic.base.ContextMixin
django.views.generic.base.View
object
I want to trace what happens if I call self.do_magic().
I want to see all do_magic() calls of these 16 classes.
The ideal solution would look like this:
result_of_do_magic, list_of_do_magic_methods = trace_method_calls(self.do_magic)
I have no clue how to implement trace_method_calls(). It should execute the code and trace the method calls at runtime.
Is there a tracing guru who knows how to do this?
AFAIK this needs to be done at runtime, since I don't know whether all methods call the do_magic() of their parents via super().
Update
I don't want to modify the code to be able to trace it. I guess this should be possible, since the mocking library can do comparable magic.
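One possible sketch of trace_method_calls(), assuming do_magic() takes no arguments and is a plain instance method (the helper name and return shape below simply mirror the question): temporarily replace each class's own definition along the MRO with a recording wrapper, call the method, then restore the originals.

```python
import functools

def trace_method_calls(bound_method):
    """Call a bound method and report every class in the MRO whose own
    implementation of it actually ran. Hypothetical helper sketched for
    this question; assumes plain instance methods with no arguments."""
    obj = bound_method.__self__
    name = bound_method.__name__
    called = []
    patched = []

    def make_wrapper(cls, fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            called.append('{0}.{1}'.format(cls.__name__, name))
            return fn(self, *args, **kwargs)
        return wrapper

    # Temporarily replace each class's own definition with a recording wrapper.
    for klass in type(obj).__mro__:
        if name in vars(klass):
            original = vars(klass)[name]
            setattr(klass, name, make_wrapper(klass, original))
            patched.append((klass, original))
    try:
        result = getattr(obj, name)()
    finally:
        # Always restore the originals, even if the call raises.
        for klass, original in patched:
            setattr(klass, name, original)
    return result, called

# Demo with a tiny two-level hierarchy:
class Base(object):
    def do_magic(self):
        return 1

class Child(Base):
    def do_magic(self):
        return super(Child, self).do_magic() + 1

result, calls = trace_method_calls(Child().do_magic)
print(result, calls)  # 2 ['Child.do_magic', 'Base.do_magic']
```

Because the wrapper is installed on each class separately, a super() call that skips a level simply produces no entry for the skipped class, which is exactly the information the question asks for.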

If I understand correctly, I can see 2 solutions here.
1) [Brute-ish and relatively easy] Find all occurrences of "do_magic" in your code and wrap them with a decorator like:
def trace_this(fn):
    def wrapped(*args, **kwargs):
        print("do_magic is called!")
        result = fn(*args, **kwargs)
        print("Result is: {0}".format(result))
        return result
    return wrapped
...
@trace_this
def do_magic():
    ...
2) Run django as:
python -m trace -t manage.py runserver 127.0.0.1:8000 --noreload > trace.txt
and all invoked code will end up in trace.txt, which you can then parse/analyze with whatever tools you prefer.

As I understand it, what you want are the outer frames of a particular function call, to see which classes were involved in reaching the method in question.
If that is what you want, continue below; otherwise let me know in the comments.
Below is a POC to demonstrate how it works. You can build a decorator out of it.
from __future__ import print_function
import inspect

class A:
    def a(self):
        stacks = inspect.getouterframes(inspect.currentframe())
        for stack in stacks:
            frame = stack[0]
            klass = frame.f_locals.get("self")
            mthd = frame.f_code.co_name
            if klass:  # to avoid printing modules
                print('Called by class: {0}, method: {1}'.format(klass.__class__.__name__, mthd))

class B:
    def b(self):
        bb = A()
        bb.a()

class C:
    def c(self):
        cc = B()
        cc.b()

z = C()
z.c()
Output:
Called by class: A, method: a
Called by class: B, method: b
Called by class: C, method: c
Explanation:
Class C calls Class B, which in turn calls Class A.
When execution reaches A.a(), all the classes along the call path are present in the outer frames.
You can extract the functionality of A.a() into a decorator so that it prints all the outer frames leading up to the function call.
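One way the POC could be carved into such a decorator (trace_callers is a name invented here, not part of any library):

```python
import inspect
from functools import wraps

def trace_callers(fn):
    """Print the chain of calling methods before running the wrapped function."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        # Skip the wrapper's own frame with [1:].
        for frame_info in inspect.getouterframes(inspect.currentframe())[1:]:
            frame = frame_info[0]
            caller_self = frame.f_locals.get('self')
            if caller_self is not None:  # avoid printing module-level frames
                print('Called by class: {0}, method: {1}'.format(
                    caller_self.__class__.__name__, frame.f_code.co_name))
        return fn(*args, **kwargs)
    return wrapper

class A(object):
    @trace_callers
    def a(self):
        return 'done'

class B(object):
    def b(self):
        return A().a()

B().b()  # prints: Called by class: B, method: b
```

Note that the decorated method itself no longer appears as a separate frame (the wrapper's frame takes its place), so only the callers are printed.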

Related

Can global variable be used to call function?

SEFC.py:
import time
import traceback
import platform

sefc_verbose = False
obj_sefc = None

class CSEFC():
    def __init__(self):
        self.fp_platform = False
        self.bbu_platform = False
        return
    def __del__(self):
        return
    def ipmi_cmd_trace(self):
        return False
KCS.py:
import SEFC as sefc

class CKCS(CLogger):
    def __init__(self, str_ip = None, int_port = _DEFAULT_ATRAGON_PORT):
        CLogger.__init__(self)
        self.obj_json_client = None
    def send_ipmi_target(self, targetstr, raw_request, int_retry = 3):
        if sefc.obj_sefc.ipmi_cmd_trace():
            ## do stuff
I am reading code written by someone else. I can't seem to understand how, in if sefc.obj_sefc.ipmi_cmd_trace():, obj_sefc is used to call the ipmi_cmd_trace() function. obj_sefc is a global variable, I believe. But this code should not work. Also, I doubt my programming ability. This code seems to compile and work for others. Is this correct? Am I missing something here?
With just the code you've shown, you're right, it won't work, since obj_sefc is None in the SEFC module. However, I suspect there is some other code, which you haven't shown, that creates an instance of the CSEFC class and assigns it to that global variable. Then the code you've shown will work.
It's probably not a good design for the code you've shown to rely on some other code being run first, since it will fail if things run in the wrong order. However, using a global variable to contain a single instance of a class is not problematic in general. You just want to make sure the code that creates the instance is put somewhere that ensures it runs before the instance is needed. For instance, it could go at the bottom of the SEFC module, or at the top of the KCS module.
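A minimal sketch of the "bottom of the SEFC module" placement the answer suggests (class trimmed down to the one method used above):

```python
# SEFC.py (sketch): create the module-level singleton right after the
# class definition, so it is never None when importers use it.
class CSEFC(object):
    def ipmi_cmd_trace(self):
        return False

obj_sefc = CSEFC()  # runs at import time, before any caller needs it

# KCS.py would then `import SEFC as sefc` and can safely call:
print(obj_sefc.ipmi_cmd_trace())  # False
```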

how to group traits together, encapsulating them as a group

I have a coordinate system that it makes sense to treat as a whole group: the coordinates initialize, change, and reset simultaneously. I would also like to avoid re-rendering once per coordinate when they change together. Here is a simplified version of what I have in mind, but I can't quite get there. Thanks.
Cleaner code is better in my case, even if it uses more advanced features. Could the class Coord be wrapped as a trait itself?
from traits.api import *

class Coord(HasTraits):
    x = Float(1.0)
    y = Float(1.0)
    def __init__(self, **traits):
        HasTraits.__init__(self, **traits)

class Model:
    coord = Instance(Coord)

    @on_trait_change('coord')  # I would so have liked this to "just work"
    def render(self):  # re-update render whenever coordinates change
        ...

class Visualization:
    model = Instance(Model)
    def increment_x(self):
        self.model.coord.x += 1  # should play well with Model.render
    def new_coord(self):
        self.model.coord = Coord(x=2, y=2)  # should play well with Model.render
There are a couple of issues with your source code. Model and Visualization both need to be HasTraits classes for the listener to work.
Also, it is rare to actually need to write the __init__ method of a HasTraits class. Traits is designed to work without it. That said, if you do write an __init__ method, make sure to use super to properly traverse the method resolution order. (Note that you will find this inconsistently implemented in the extant documentation and examples.)
Finally, use the 'anytrait' name to listen for any trait:
from traits.api import Float, HasTraits, Instance, on_trait_change

class Coord(HasTraits):
    x = Float(1.0)
    y = Float(1.0)

class Model(HasTraits):
    coord = Instance(Coord, ())

    @on_trait_change('coord.anytrait')  # listens for any trait on `coord`.
    def render(self):
        print "I updated"

class Visualization(HasTraits):
    model = Instance(Model, ())
    def increment_x(self):
        self.model.coord.x += 1  # plays well with Model.render
    def new_coord(self):
        self.model.coord = Coord(x=2, y=2)  # plays well with Model.render
Here's my output:
>>> v = Visualization()
>>> v.increment_x()
I updated
>>> v.new_coord()
I updated

Detect method by self.sender() in Python

Is there a way to detect which method has called another method, similar to how you detect the sending object with self.sender()?
For example, I have a method A that enables all the checkboxes. On one page I have 10, on another 15. Depending on whether method B or method C calls method A, I can handle two scenarios in method A rather than copying the code.
Yes, there is a way. It utilizes the inspect module:
import inspect

def echo():
    """Returns the name of the function that called it"""
    return inspect.getouterframes(inspect.currentframe(), 2)[1][3]

def caller():
    return echo()

print(caller(), caller.func_name)
Output:
('caller', 'caller')
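That snippet is Python 2 (func_name, tuple-style frame records). Under Python 3 the same idea looks like this, since getouterframes returns FrameInfo objects with a .function attribute and func_name becomes __name__:

```python
import inspect

def echo():
    """Returns the name of the function that called it (Python 3 version)."""
    return inspect.getouterframes(inspect.currentframe(), 2)[1].function

def caller():
    return echo()

print(caller(), caller.__name__)  # caller caller
```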

Celery: clean way of revoking the entire chain from within a task

My question is probably pretty basic, but I still can't find a solution in the official docs. I have defined a Celery chain inside my Django application, performing a set of tasks dependent on each other:
chain( tasks.apply_fetching_decision.s(x, y),
       tasks.retrieve_public_info.s(z, x, y),
       tasks.public_adapter.s())()
Obviously the second and third tasks need the output of their parent; that's why I used a chain.
Now the question: I need to programmatically revoke the 2nd and 3rd tasks if a test condition in the 1st task fails. How do I do it in a clean way? I know I can revoke the tasks of a chain from within the method where I defined the chain (see this question and this doc), but inside the first task I have no visibility of the subsequent tasks, nor of the chain itself.
Temporary solution
My current solution is to skip the computation inside the subsequent tasks, based on the result of the previous task:
@shared_task
def retrieve_public_info(result, x, y):
    if not result:
        return []
    ...

@shared_task
def public_adapter(result, z, x, y):
    for r in result:
        ...
But this "workaround" has some flaws:
It adds unnecessary logic to each task (based on the predecessor's result), compromising reuse
It still executes the subsequent tasks, with all the resulting overhead
I haven't played much with passing references of the chain to tasks, for fear of messing things up. I admit I also haven't tried the exception-throwing approach, because I think that choosing not to proceed through the chain can be a functional (thus non-exceptional) scenario...
Thanks for helping!
I think I found the answer to this issue: this seems the right way to proceed, indeed. I wonder why such a common scenario is not documented anywhere, though.
For completeness I post the basic code snapshot:
@app.task(bind=True)  # Note that we need bind=True for self to work
def task1(self, other_args):
    # do_stuff
    if end_chain:
        self.request.callbacks[:] = []
    ...
Update
I implemented a more elegant way to cope with the issue and I want to share it with you. I am using a decorator called revoke_chain_authority, so that it can automatically revoke the chain without the boilerplate I previously described.
from functools import wraps

class RevokeChainRequested(Exception):
    def __init__(self, return_value):
        Exception.__init__(self, "")
        # Now for your custom code...
        self.return_value = return_value

def revoke_chain_authority(a_shared_task):
    """
    @see: https://gist.github.com/bloudermilk/2173940
    @param a_shared_task: a @shared_task(bind=True) celery function.
    @return:
    """
    @wraps(a_shared_task)
    def inner(self, *args, **kwargs):
        try:
            return a_shared_task(self, *args, **kwargs)
        except RevokeChainRequested as e:
            # Drop subsequent tasks in chain (if not EAGER mode)
            if self.request.callbacks:
                self.request.callbacks[:] = []
            return e.return_value
    return inner
This decorator can be used on a shared task as follows:
@shared_task(bind=True)
@revoke_chain_authority
def apply_fetching_decision(self, latitude, longitude):
    # ...
    if condition:
        raise RevokeChainRequested(False)
Please note the use of @wraps. It is necessary to preserve the signature of the original function; otherwise the signature is lost and Celery will make a mess of calling the right wrapped task (e.g. it will always call the first registered function instead of the right one).
As of Celery 4.0, what I found to be working is to remove the remaining tasks from the current task instance's request using the statement:
self.request.chain = None
Let's say you have a chain of tasks a.s() | b.s() | c.s(). You can only access the self variable inside a task if you bind the task by passing bind=True to the task's decorator.
@app.task(name='main.a', bind=True)
def a(self):
    if something_happened:
        self.request.chain = None
If something_happened is truthy, b and c wouldn't be executed.

Extending SWIG builtin classes

The -builtin option of SWIG has the advantage of being faster and of being exempt from a bug with multiple inheritance.
The drawback is that I can't set any attribute on the generated classes or any subclass:
- I can extend a Python builtin type like list, without hassle, by subclassing it:
class Thing(list):
    pass

Thing.myattr = 'anything'  # No problem
- However, using the same approach on a SWIG builtin type, the following happens:
class Thing(SWIGBuiltinClass):
    pass

Thing.myattr = 'anything'
AttributeError: type object 'Thing' has no attribute 'myattr'
How could I work around this problem ?
I found a solution quite by accident. I was experimenting with metaclasses, thinking I could manage to override the setattr and getattr functions of the builtin type in the subclass.
Doing this I discovered the builtins already have a metaclass (SwigPyObjectType), so my metaclass had to inherit from it.
And that's it. This alone solved the problem. I would be glad if someone could explain why:
SwigPyObjectType = type(SWIGBuiltinClass)

class Meta(SwigPyObjectType):
    pass

class Thing(SWIGBuiltinClass):
    __metaclass__ = Meta

Thing.myattr = 'anything'  # Works fine this time
The problem comes from how SWIG implements the classes under -builtin to be just like builtin classes (hence the name).
Builtin classes are not extensible: try to add or modify a member of str and Python won't let you modify the attribute dictionary.
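The str behavior the answer refers to is easy to verify:

```python
# Builtin types reject new attributes outright: assigning to str raises
# a TypeError instead of extending the class.
try:
    str.foo = lambda self: 'nope'
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```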
I do have a solution I've been using for several years.
I'm not sure I can recommend it because:
It's arguably evil - the moral equivalent of casting away const-ness in C/C++
It's unsupported and could break in future python releases
I haven't tried it with python3
I would be a bit uncomfortable using "black-magic" like this in production code - it could break and is certainly obscure - but at least one giant corporation IS using this in production code
But.. I love how well it works to solve some obscure features we wanted for debugging.
The original idea is not mine, I got it from:
https://gist.github.com/mahmoudimus/295200 by Mahmoud Abdelkader
The basic idea is to access the const dictionary in the swig-created type object as a non-const dictionary and add/override any desired methods.
FYI, the technique of runtime modification of classes is called monkeypatching, see https://en.wikipedia.org/wiki/Monkey_patch
First - here's "monkeypatch.py":
''' monkeypatch.py:
I got this from https://gist.github.com/mahmoudimus/295200 by Mahmoud Abdelkader,
his comment: "found this from Armin R. on Twitter, what a beautiful gem ;)"
I made a few changes for coding style preferences
- Rudy Albachten April 30 2015
'''

import ctypes
from types import DictProxyType, MethodType

# figure out the size of _Py_ssize_t
_Py_ssize_t = ctypes.c_int64 if hasattr(ctypes.pythonapi, 'Py_InitModule4_64') else ctypes.c_int

# python without tracing
class _PyObject(ctypes.Structure):
    pass
_PyObject._fields_ = [
    ('ob_refcnt', _Py_ssize_t),
    ('ob_type', ctypes.POINTER(_PyObject))
]

# fixup for python with tracing
if object.__basicsize__ != ctypes.sizeof(_PyObject):
    class _PyObject(ctypes.Structure):
        pass
    _PyObject._fields_ = [
        ('_ob_next', ctypes.POINTER(_PyObject)),
        ('_ob_prev', ctypes.POINTER(_PyObject)),
        ('ob_refcnt', _Py_ssize_t),
        ('ob_type', ctypes.POINTER(_PyObject))
    ]

class _DictProxy(_PyObject):
    _fields_ = [('dict', ctypes.POINTER(_PyObject))]

def reveal_dict(proxy):
    if not isinstance(proxy, DictProxyType):
        raise TypeError('dictproxy expected')
    dp = _DictProxy.from_address(id(proxy))
    ns = {}
    ctypes.pythonapi.PyDict_SetItem(ctypes.py_object(ns), ctypes.py_object(None), dp.dict)
    return ns[None]

def get_class_dict(cls):
    d = getattr(cls, '__dict__', None)
    if d is None:
        raise TypeError('given class does not have a dictionary')
    if isinstance(d, DictProxyType):
        return reveal_dict(d)
    return d

def test():
    import random
    d = get_class_dict(str)
    d['foo'] = lambda x: ''.join(random.choice((c.upper, c.lower))() for c in x)
    print "and this is monkey patching str".foo()

if __name__ == '__main__':
    test()
Here's a contrived example using monkeypatch:
I have a class "myclass" in module "mystuff" wrapped with swig -python -builtin
I want to add an extra runtime method "namelen" that returns the length of the name returned by myclass.getName()
import mystuff
import monkeypatch

# add a "namelen" method to all "myclass" objects
def namelen(self):
    return len(self.getName())

d = monkeypatch.get_class_dict(mystuff.myclass)
d['namelen'] = namelen

x = mystuff.myclass("xxxxxxxx")
print "namelen:", x.namelen()
Note that this can also be used to extend or override methods on builtin python classes, as is demonstrated in the test in monkeypatch.py: it adds a method "foo" to the builtin str class that returns a copy of the original string with random upper/lower case letters
I would probably replace:
# add a "namelen" method to all "myclass" objects
def namelen(self):
    return len(self.getName())

d = monkeypatch.get_class_dict(mystuff.myclass)
d['namelen'] = namelen
with
# add a "namelen" method to all "myclass" objects
monkeypatch.get_class_dict(mystuff.myclass)['namelen'] = lambda self: len(self.getName())
to avoid extra global variables.