Does pdb offer watchpoints? [duplicate] - python-2.7

There is a large Python project where one attribute of one class just has the wrong value in some place.
It should be a sqlalchemy.orm.attributes.InstrumentedAttribute, but when I run the tests it is a constant value, let's say a string.
Is there some way to run a Python program in debug mode and run some check (whether the variable changed type) automatically after each line of code is executed?
P.S. I know how to log changes to an attribute of a class instance with the help of inspect and the property decorator. Possibly I could use this method with metaclasses here...
But sometimes I need a more general and powerful solution...
Thank you.
P.P.S. I need something like this: https://stackoverflow.com/a/7669165/816449, but maybe with more explanation of what is going on in that code.

Well, here is a sort of slow approach. It can be modified to watch for a local variable change (just by name). Here is how it works: we call sys.settrace and analyse the value of obj.attr on each step.

The tricky part is that we receive 'line' events (that some line was executed) before the line is executed. So, when we notice that obj.attr has changed, we are already on the next line and we can't get the previous line's frame (because frames aren't copied for each line, they are modified in place). So on each line event I save traceback.format_stack to watcher.prev_st, and if on the next call of trace_command the value has changed, we print the saved stack trace to a file. Saving the traceback on each line is quite an expensive operation, so you'd have to set the include keyword to a list of your project's directories (or just the root of your project) in order not to watch how other libraries are doing their stuff and waste CPU.
watcher.py
import traceback

class Watcher(object):
    def __init__(self, obj=None, attr=None, log_file='log.txt', include=[], enabled=False):
        """
        Debugger that watches for changes in object attributes
        obj - object to be watched
        attr - string, name of attribute
        log_file - string, where to write output
        include - list of strings, debug files only in these directories.
            Set it to path of your project otherwise it will take long time
            to run on big libraries import and usage.
        """
        self.log_file = log_file
        with open(self.log_file, 'wb'): pass
        self.prev_st = None
        self.include = [incl.replace('\\', '/') for incl in include]
        if obj:
            self.value = getattr(obj, attr)
        self.obj = obj
        self.attr = attr
        self.enabled = enabled  # Important, must be last line of __init__.

    def __call__(self, *args, **kwargs):
        kwargs['enabled'] = True
        self.__init__(*args, **kwargs)

    def check_condition(self):
        tmp = getattr(self.obj, self.attr)
        result = tmp != self.value
        self.value = tmp
        return result

    def trace_command(self, frame, event, arg):
        if event != 'line' or not self.enabled:
            return self.trace_command
        if self.check_condition():
            if self.prev_st:
                with open(self.log_file, 'ab') as f:
                    print >>f, "Value of", self.obj, ".", self.attr, "changed!"
                    print >>f, "###### Line:"
                    print >>f, ''.join(self.prev_st)
        if self.include:
            fname = frame.f_code.co_filename.replace('\\', '/')
            to_include = False
            for incl in self.include:
                if fname.startswith(incl):
                    to_include = True
                    break
            if not to_include:
                return self.trace_command
        self.prev_st = traceback.format_stack(frame)
        return self.trace_command

import sys
watcher = Watcher()
sys.settrace(watcher.trace_command)
testwatcher.py
from watcher import watcher
import numpy as np
import urllib2

class X(object):
    def __init__(self, foo):
        self.foo = foo

class Y(object):
    def __init__(self, x):
        self.xoo = x

    def boom(self):
        self.xoo.foo = "xoo foo!"

def main():
    x = X(50)
    watcher(x, 'foo', log_file='log.txt', include=['C:/Users/j/PycharmProjects/hello'])
    x.foo = 500
    x.goo = 300
    y = Y(x)
    y.boom()
    arr = np.arange(0, 100, 0.1)
    arr = arr**2
    for i in xrange(3):
        print 'a'
        x.foo = i
    for i in xrange(1):
        i = i + 1

main()

There's a very simple way to do this: use watchpoints.
Basically you only need to do
from watchpoints import watch
watch(your_object.attr)
That's it. Whenever the attribute is changed, it will print out the line that changed it and how it changed. Super easy to use.
It also has more advanced features: for example, you can drop into pdb when the variable is changed, or use your own callback function instead of printing to stdout.
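For instance, a minimal sketch of the callback hook, assuming the three-argument callback signature described in the library's README (note that watchpoints targets Python 3, not 2.7):
from watchpoints import watch

class Holder(object):
    def __init__(self):
        self.attr = 1

def on_change(frame, elem, exec_info):
    # Custom handler invoked whenever the watched value changes.
    print("attr changed at", exec_info)

obj = Holder()
watch(obj.attr, callback=on_change)
obj.attr = 2  # runs on_change instead of the default printout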

A simpler way to watch for an object's attribute change (which can also be a module-level variable or anything accessible with getattr) would be to leverage the hunter library, a flexible code-tracing toolkit. To detect state changes we need a predicate, which can look like the following:
import traceback

class MutationWatcher:
    def __init__(self, target, attrs):
        self.target = target
        self.state = {k: getattr(target, k) for k in attrs}

    def __call__(self, event):
        result = False
        for k, v in self.state.items():
            current_value = getattr(self.target, k)
            if v != current_value:
                result = True
                self.state[k] = current_value
                print('Value of attribute {} has changed from {!r} to {!r}'.format(
                    k, v, current_value))
        if result:
            traceback.print_stack(event.frame)
        return result
Then, given some sample code:
class TargetThatChangesWeirdly:
    attr_name = 1

def some_nested_function_that_does_the_nasty_mutation(obj):
    obj.attr_name = 2

def some_public_api(obj):
    some_nested_function_that_does_the_nasty_mutation(obj)
We can instrument it with hunter like:
# or any other entry point that calls the public API of interest
if __name__ == '__main__':
    obj = TargetThatChangesWeirdly()

    import hunter
    watcher = MutationWatcher(obj, ['attr_name'])
    hunter.trace(watcher, stdlib=False, action=hunter.CodePrinter)

    some_public_api(obj)
Running the module produces:
Value of attribute attr_name has changed from 1 to 2
  File "test.py", line 44, in <module>
    some_public_api(obj)
  File "test.py", line 10, in some_public_api
    some_nested_function_that_does_the_nasty_mutation(obj)
  File "test.py", line 6, in some_nested_function_that_does_the_nasty_mutation
    obj.attr_name = 2
test.py:6     return        obj.attr_name = 2
              ...       return value: None
You can also use other actions that hunter supports, for instance Debugger, which breaks into pdb when an attribute change is detected.
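A hedged sketch of that variant, continuing the example above and assuming hunter exposes the Debugger action at the top level as its documentation suggests:
import hunter

# Same watcher and target as above; break into pdb on mutation
# instead of printing a stack trace.
hunter.trace(watcher, stdlib=False, action=hunter.Debugger)
some_public_api(obj)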

Try using __setattr__ to override the function that is called when an attribute assignment is attempted. See the documentation for __setattr__.
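A minimal sketch of the idea (the class name Watched and the attribute name foo are just placeholders):
class Watched(object):
    def __setattr__(self, name, value):
        # Called on every attribute assignment, so we can report changes.
        if name == 'foo':
            print 'foo is being set to %r' % (value,)
        super(Watched, self).__setattr__(name, value)

w = Watched()
w.foo = 1   # prints: foo is being set to 1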

You can use the Python debugger module, pdb (part of the standard library).
To use it, just import pdb at the top of your source file:
import pdb
and then set a trace wherever you want to start inspecting the code:
pdb.set_trace()
You can then step through the code with n, and investigate the current state by running Python commands.
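For example, a minimal sketch (the function and attribute names here are made up):
import pdb

class Holder(object):
    attr = None

def update(obj):
    pdb.set_trace()  # execution pauses here at a (Pdb) prompt
    # At the prompt: 'n' steps to the next line, and expressions like
    # type(obj.attr) inspect the current state.
    obj.attr = "new value"

update(Holder())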

def __setattr__(self, name, value):
    if name == "xxx":
        util.output_stack('xxxxx')  # util.output_stack is presumably the poster's helper that logs the current stack
    super(XXX, self).__setattr__(name, value)
This sample code helped me.

Related

Mock patch results in error -- TypeError: object() takes no parameters in Python 3

This is a bit complicated because I'm debugging some code written a long time ago in Python 2.7.
We're in the process of migrating to Python 3 (I know, I know) and I'm facing this problem when trying to fix the unit tests.
The problem is I'm getting the error TypeError: object() takes no parameters.
I'll list the functions below. I had to replace a lot of names of functions and objects, so if you see an inconsistency in module names, assume it's a typo.
First the class it's calling
class Parser(object):
    def __init__(self, some_instance, some_file):
        self._some_instance = some_instance
        self.stream = Parser.formsomestream(some_file)
        self.errors = []

    @staticmethod
    def formsomestream(some_file):
        # return a stream

class BetterParser(Parser):
    def parse(self):
        # skip some steps, shouldn't relate to the problem
        return details  # this is a string

class CSVUploadManager(object):
    def __init__(self, model_instance, upload_file):
        self._model_instance = model_instance
        self._upload_file = upload_file
        # then bunch of functions here

    # then.....
    def _parse(self):
        parser_instance = self._parser_class(self._model_instance, self._upload_file)
        self._csv_details = parser_instance.parse()
        # bunch of stuff follows

    def _validate(self):
        if not self._parsed:
            self._parse()
        validator_instance = self._validator_class(self._model_instance, self._csv_details)
        # some attributes to set up here

    def is_valid(self):
        if not self._validated:
            self._validate()
Now the test function
from somewhere.to.this.validator import MockUploadValidator
from another.place import CSVUploadManager

class TestSomething(SomeConfigsToBeMixedIn):
    @mock.patch('path.to.BetterParser.parse')
    @mock.patch('path.to.SomeValidator.__new__')
    @mock.patch('path.to.SomeValidator.validate')
    def test_validator_is_called(self, mock_validator_new, mock_parse):
        mock_validator_new.return_value = MockUploadValidator.__new__(MockUploadValidator)
        mock_parse.return_value = mock_csv_details
        mock_validator_new.return_value = MockUploadValidator()
        string_io = build_some_string_io_woohoo()  # this returns a StringIO
        some_file = get_temp_from_stream(string_io)
        upload_manager = CSVUploadManager(a_model_instance, some_file)
        upload_manager.is_valid()  # this is where it fails and produces that error
        self.assertTrue(mock_parse.called)
        self.assertTrue(mock_validator_new.called)
        validator_new_call_args = (SomeValidator, self.cash_activity, mock_csv_details)
        self.assertEqual(mock_validator_new._mock_call_args_list[0][0], validator_new_call_args)
As you can see, CSVUploadManager takes in a Django model instance and a file-like object; this thing will trigger self._parser_class, which calls BetterParser, and then BetterParser does its thing.
However, I'm guessing it's due to the mocking that it returns TypeError: object() takes no parameters.
My questions:
Why would this error occur?
Why is it only happening on Python 3.x? (I'm using 3.6.)
This also causes other tests (in different test cases) to fail when they would normally pass if I don't run them together with the failing test. Why is that?
Is it really related to mocking? I'd assume it is, because when I test on the server, the functionality is there.
EDIT: adding Traceback
Traceback (most recent call last):
  File "/path/to/lib/python3.6/site-packages/mock/mock.py", line 1305, in patched
    return func(*args, **keywargs)
  File "/path/to/test_file.py", line 39, in test_validator_is_called:
    upload_manager.is_valid()
  File "/path/to/manager.py", line 55, in is_valid
    self._validate()
  File "/path/to/manager.py", line 36, in _validate
    validator_instance = self._validator_class(self._model_instance, self._csv_details)
TypeError: object() takes no parameters
There should be three mock arguments besides self, one for each @mock.patch decorator. Stacked patch decorators are applied bottom-up, so the bottom-most patch supplies the first mock argument.
Like this:
@mock.patch('path.to.BetterParser.parse')
@mock.patch('path.to.SomeValidator.__new__')
@mock.patch('path.to.SomeValidator.validate')
def test_validator_is_called(self, mock_validate, mock_validator_new, mock_parse):
    ...

How to get a ticking timer with a dynamic label?

What I'm trying to do: whenever the cursor is over a label, it must show the time elapsed since the label was created. It does this well by subtracting the values (in def on_enter(i)), but I want the display to keep ticking while the cursor is still over the label.
I tried using the after function, but as a newbie I don't understand it well enough to use it on dynamic labels.
Any help will be appreciated, thanks.
Code:
from Tkinter import *
import datetime

date = datetime.datetime
now = date.now()
master = Tk()
list_label = []
k = []
time_var = []
result = []
names = []

def delete(i):
    k[i] = max(k) + 1
    time_var[i] = '<deleted>'
    list_label[i].pack_forget()

def create():  # new func
    i = k.index(max(k))
    for j in range(i + 1, len(k)):
        if k[j] == 0:
            list_label[j].pack_forget()
    list_label[i].pack(anchor='w')
    time_var[i] = time_now()
    for j in range(i + 1, len(k)):
        if k[j] == 0:
            list_label[j].pack(anchor='w')
    k[i] = 0

###########################
def on_enter(i):
    list_label[i].configure(text=time_now() - time_var[i])

def on_leave(i):
    list_label[i].configure(text=names[i])

def time_now():
    now = date.now()
    return date(now.year, now.month, now.day, now.hour, now.minute, now.second)
############################

for i in range(11):
    lb = Label(text=str(i), anchor=W)
    list_label.append(lb)
    lb.pack(anchor='w')
    lb.bind("<Button-3>", lambda event, i=i: delete(i))
    k.append(0)
    names.append(str(i))
    lb.bind("<Enter>", lambda event, i=i: on_enter(i))
    lb.bind("<Leave>", lambda event, i=i: on_leave(i))
    time_var.append(time_now())

master.bind("<Control-Key-z>", lambda event: create())
mainloop()
You would use after like this:
###########################
def on_enter(i):
    list_label[i].configure(text=time_now() - time_var[i])
    list_label[i].timer = list_label[i].after(1000, on_enter, i)

def on_leave(i):
    list_label[i].configure(text=names[i])
    list_label[i].after_cancel(list_label[i].timer)
However, your approach here is all wrong. You currently have some functions and a list of data. What you should do is make a single object that contains the functions and data together and make a list of those. That way you can write your code for a single Label and just duplicate that. It makes your code a lot simpler partly because you no longer need to keep track of "i". Like this:
import Tkinter as tk
from datetime import datetime

def time_now():
    now = datetime.now()
    return datetime(now.year, now.month, now.day, now.hour, now.minute, now.second)

class Kiran(tk.Label):
    """A new type of Label that shows the time since creation when the mouse hovers"""
    hidden = []

    def __init__(self, master=None, **kwargs):
        tk.Label.__init__(self, master, **kwargs)
        self.name = self['text']
        self.time_var = time_now()
        self.bind("<Enter>", self.on_enter)
        self.bind("<Leave>", self.on_leave)
        self.bind("<Button-3>", self.hide)

    def on_enter(self, event=None):
        self.configure(text=time_now() - self.time_var)
        self.timer = self.after(1000, self.on_enter)

    def on_leave(self, event=None):
        self.after_cancel(self.timer)  # cancel the timer
        self.configure(text=self.name)

    def hide(self, event=None):
        self.pack_forget()
        self.hidden.append(self)  # add this instance to the list of hidden instances

    def show(self):
        self.time_var = time_now()  # reset time
        self.pack(anchor='w')

def undo(event=None):
    '''if there's any hidden Labels, show one'''
    if Kiran.hidden:
        Kiran.hidden.pop().show()

def main():
    root = tk.Tk()
    root.geometry('200x200')
    for i in range(11):
        lb = Kiran(text=i)
        lb.pack(anchor='w')
    root.bind("<Control-Key-z>", undo)
    root.mainloop()

if __name__ == '__main__':
    main()
More notes:
Don't use lambda unless you are forced to; it's known to cause bugs.
Don't use wildcard imports (from module import *); they cause bugs and are against PEP 8.
Put everything in functions.
Use long, descriptive names. Single-letter names just waste time. Think of names as tiny comments.
Add a lot more comments to your code so that other people don't have to guess what the code is supposed to do.
Try a more beginner-oriented forum for questions like this, like learnpython.reddit.com.

How to modify the return address in Python 2 (or achieve an equivalent result)

Some background on why I want to achieve what I'm trying to achieve:
I am making a long-running, server-type application. I would like to be able to perform functional- and integration-style testing on this application with high coverage, especially of the failure and corner-case scenarios. In order to achieve this, I would like to inject various faults, configurable at run time, so that my tests can validate the program's behavior when such a condition is hit.
The Problem:
I would like to be able to dynamically decide the return behavior of a function. I would additionally like to do this with only a function call and without pre-processing the source code (macros). Here is a simple example:
from functools import wraps

def decorator(func):
    @wraps(func)
    def func_wrapper(*args, **kwargs):
        print 'in wrapper before %s' % func.__name__
        val = func(*args, **kwargs)
        print 'in wrapper after %s' % func.__name__
        return val
    return func_wrapper

@decorator
def grandparent():
    val = parent()
    assert val == 2
    # do something with val here

@decorator
def parent():
    foo = 'foo_val'
    some_func(foo)
    # other statements here
    child()
    # if the condition in child is met,
    # this would be dead (not-executed)
    # code. If it is not met, this would
    # be executed.
    return 1

def child(*args, **kwargs):
    # do something here to make
    # the assert in grandparent true
    return 2

# --------------------------------------------------------------------------- #

class MyClass:
    @decorator
    def foo(self):
        val = self.bar()
        assert val == 2

    def bar(self):
        self.tar()
        child()
        return 1

    def tar(self):
        return 42

# --------------------------------------------------------------------------- #
The grandparent() function in the code above calls parent() to get a response. It would then do something based on the value of val. The parent() function calls child() and unconditionally returns the value 1. I would like to write something in child() which causes the value it returns to be returned to grandparent(), skipping the rest of parent().
Restrictions/Permissions:
grandparent() could be function number n in a long chain of function calls, not necessarily the top-level function.
Only child() and any new helper functions called solely as a result of calling child() can be modified/created to make this work.
All work must be done at runtime. No pre-processing of the source files is acceptable.
The decision about the parent() function's behavior has to be made inside the specific child() call.
Using pure Python (2.7) or making use of the CPython API are both acceptable ways to solve this issue.
This is allowed to be a hack. The child() function will be inert in production mode.
Things I have tried:
I have tried modifying the stack list (deleting the parent() frame) retrieved from inspect.stack(), but this seems to not do anything.
I have tried creating new bytecode for the parent() frame and replacing it in the stack. This also does not seem to have an effect.
I have tried looking into the CPython functions related to stack management, but when I added or removed a frame, I kept getting stack underflows or overflows.
If you know the name(s) of child(), then you can patch all of child()'s callers at runtime: iterate through module and class functions, patch each call site of child() to add your custom logic, and hot-swap each caller of child() with its patched version.
Here is a working example:
#!/usr/bin/env python2.7
from six import wraps

def decorator(func):
    @wraps(func)
    def func_wrapper(*args, **kwargs):
        print 'in wrapper before %s' % func.__name__
        val = func(*args, **kwargs)
        print 'in wrapper after %s' % func.__name__
        return val
    return func_wrapper

@decorator
def grandparent():
    val = parent()
    assert val == 2
    # do something with val here

@decorator
def parent():
    # ...
    # ...
    child()
    # if the condition in child is met,
    # this would be dead (not-executed)
    # code. If it is not met, this would
    # be executed.
    return 1

def child(*args, **kwargs):
    # do something here to make
    # the assert in grandparent true
    return 2

# --------------------------------------------------------------------------- #

class MyClass:
    @decorator
    def foo(self):
        val = self.bar()
        assert val == 2

    def bar(self):
        self.tar()
        child()
        return 1

    def tar(self):
        return 42

# --------------------------------------------------------------------------- #

import sys
import inspect
import textwrap
import types
import itertools
import logging

logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)
log = logging.getLogger(__name__)

def should_intercept():
    # TODO: check system state and return True/False
    # just a dummy implementation for now based on # of args
    if len(sys.argv) > 1:
        return True
    return False

def _unwrap(func):
    while hasattr(func, '__wrapped__'):
        func = func.__wrapped__
    return func

def __patch_child_callsites():
    if not should_intercept():
        return
    for module in sys.modules.values():
        if not module:
            continue
        scopes = itertools.chain(
            [module],
            (clazz for clazz in module.__dict__.values() if inspect.isclass(clazz))
        )
        for scope in scopes:
            # get all functions in scope
            funcs = list(fn for fn in scope.__dict__.values()
                         if isinstance(fn, types.FunctionType)
                         and not inspect.isbuiltin(fn)
                         and fn.__name__ != __patch_child_callsites.__name__)
            for fn in funcs:
                try:
                    fn_src = inspect.getsource(_unwrap(fn))
                except IOError as err:
                    log.warning("couldn't get source for fn: %s:%s",
                                scope.__name__, fn.__name__)
                    continue

                # remove common indentations
                fn_src = textwrap.dedent(fn_src)
                if 'child()' in fn_src:
                    # construct patched caller source
                    patched_fn_name = "patched_%s" % fn.__name__
                    patched_fn_src = fn_src.replace(
                        "def %s(" % fn.__name__,
                        "def %s(" % patched_fn_name,
                    )
                    patched_fn_src = patched_fn_src.replace(
                        'child()', 'return child()'
                    )
                    log.debug("patched_fn_src:\n%s", patched_fn_src)

                    # compile patched caller into scope
                    compiled = compile(patched_fn_src, inspect.getfile(scope), 'exec')
                    exec(compiled) in fn.__globals__, scope.__dict__

                    # replace original caller with patched caller
                    patched_fn = scope.__dict__.get(patched_fn_name)
                    setattr(scope, fn.__name__, patched_fn)
                    log.info('patched %s:%s', scope.__name__, fn.__name__)

if __name__ == '__main__':
    __patch_child_callsites()
    grandparent()
    MyClass().foo()
Run with no arguments to get the original behavior (assertion failure). Run with one or more arguments and the assertion disappears.

How to order NDB query by the key?

I'm trying to use task queues on Google App Engine. I want to utilize the Mapper class shown in the App Engine documentation "Background work with the deferred library".
I get an exception on the ordering of the query result by the key:
def get_query(self):
    ...
    q = q.order("__key__")
    ...
Exception:
File "C:... mapper.py", line 41, in get_query
q = q.order("__key__")
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\ndb\query.py", line 1124, in order
'received %r' % arg)
TypeError: order() expects a Property or query Order; received '__key__'
INFO 2017-03-09 11:56:32,448 module.py:806] default: "POST /_ah/queue/deferred HTTP/1.1" 500 114
The article is from 2009, so I guess something might have changed.
My environment: Windows 7, Python 2.7.9, Google App Engine SDK 1.9.50
There are somewhat similar questions about ordering in NDB on SO.
What bugs me is that this code is from the official doc, presumably updated in Feb 2017 (recently), and posted by someone within the top 0.1% of SO users by reputation.
So I must be doing something wrong. What is the solution?
Bingo.
Avinash Raj is correct. If it were an answer I'd accept it.
Here is the full class code
#!/usr/bin/python2.7
# -*- coding: utf-8 -*-
from google.appengine.ext import deferred
from google.appengine.ext import ndb
from google.appengine.runtime import DeadlineExceededError
import logging

class Mapper(object):
    """
    from https://cloud.google.com/appengine/docs/standard/python/ndb/queries
    corrected with suggestions from Stack Overflow
    http://stackoverflow.com/questions/42692319/how-to-order-ndb-query-by-the-key
    """
    # Subclasses should replace this with a model class (eg, model.Person).
    KIND = None

    # Subclasses can replace this with a list of (property, value) tuples to filter by.
    FILTERS = []

    def __init__(self):
        logging.info("Mapper.__init__: {}")
        self.to_put = []
        self.to_delete = []

    def map(self, entity):
        """Updates a single entity.
        Implementers should return a tuple containing two iterables (to_update, to_delete).
        """
        return ([], [])

    def finish(self):
        """Called when the mapper has finished, to allow for any final work to be done."""
        pass

    def get_query(self):
        """Returns a query over the specified kind, with any appropriate filters applied."""
        q = self.KIND.query()
        for prop, value in self.FILTERS:
            q = q.filter(prop == value)
        q = q.order(self.KIND.key)  # the fixed version; the original q.order('__key__') failed
        # see http://stackoverflow.com/questions/42692319/how-to-order-ndb-query-by-the-key
        return q

    def run(self, batch_size=100):
        """Starts the mapper running."""
        logging.info("Mapper.run: batch_size: {}".format(batch_size))
        self._continue(None, batch_size)

    def _batch_write(self):
        """Writes updates and deletes entities in a batch."""
        if self.to_put:
            ndb.put_multi(self.to_put)
            self.to_put = []
        if self.to_delete:
            ndb.delete_multi(self.to_delete)
            self.to_delete = []

    def _continue(self, start_key, batch_size):
        q = self.get_query()
        # If we're resuming, pick up where we left off last time.
        if start_key:
            key_prop = getattr(self.KIND, '_key')
            q = q.filter(key_prop > start_key)
        # Keep updating records until we run out of time.
        try:
            # Steps over the results, returning each entity and its index.
            for i, entity in enumerate(q):
                map_updates, map_deletes = self.map(entity)
                self.to_put.extend(map_updates)
                self.to_delete.extend(map_deletes)
                # Do updates and deletes in batches.
                if (i + 1) % batch_size == 0:
                    self._batch_write()
                # Record the last entity we processed.
                start_key = entity.key
            self._batch_write()
        except DeadlineExceededError:
            # Write any unfinished updates to the datastore.
            self._batch_write()
            # Queue a new task to pick up where we left off.
            deferred.defer(self._continue, start_key, batch_size)
            return
        self.finish()
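For completeness, a hypothetical subclass sketching how the Mapper is meant to be used (the Person model and its processed property are assumptions for illustration, not part of the original code):
class Person(ndb.Model):
    processed = ndb.BooleanProperty(default=False)

class PersonMapper(Mapper):
    KIND = Person

    def map(self, entity):
        entity.processed = True
        return ([entity], [])  # update this entity, delete nothing

# Kick this off from a request handler; the mapper reschedules itself
# through the deferred library when it runs out of time.
deferred.defer(PersonMapper().run)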

Python 2.7: Defining default parameters based on globals?

I'm writing a utility where I would like to have global variables that change the way a function operates. By default I'd like all the functions to follow one style, but in certain cases I'd also like the ability to force the way a function operates.
Say I have a file Script_Defaults.py with
USER_INPUT = True
In another python file I have many functions like this:
from Script_Defaults import USER_INPUT

def DoSomething(var_of_data, user_input=USER_INPUT):
    if user_input:
        ... # some code here that asks the user what to change in var_of_data
    .... # goes on to do something
The problem I face here is that the default parameter is only evaluated once, when the file is first loaded.
I want to be able to set USER_INPUT to False or True during the run of the program. To get this behaviour I'm currently using it like this...
from Script_Defaults import USER_INPUT

def DoSomething(var_of_data, user_input=None):
    if user_input is None:
        user_input = USER_INPUT
    if user_input:
        ... # some code here that asks the user what to change in var_of_data
    .... # goes on to do something
This seems like a lot of unnecessary code, especially if I have a lot of conditions like USER_INPUT and many functions that need them. Is there a better way to get this functionality, or is this the only way?
Using decorators, and manipulation of a function's default arguments, you can use the following solution:
from change_defaults import Default, set_defaults

my_defaults = dict(USER_INPUT=0)

@set_defaults(my_defaults)
def DoSomething(var_of_data, user_input=Default("USER_INPUT")):
    return var_of_data, user_input

def main():
    print DoSomething("This")
    my_defaults["USER_INPUT"] = 1
    print DoSomething("Thing")
    my_defaults["USER_INPUT"] = 2
    print DoSomething("Actually")
    print DoSomething("Works", 3)

if __name__ == "__main__":
    main()
Which requires the following code:
# change_defaults.py
from functools import wraps

class Default(object):
    def __init__(self, name):
        super(Default, self).__init__()
        self.name = name

def set_defaults(defaults):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            # Backup original function defaults.
            original_defaults = f.func_defaults
            # Replace every `Default("...")` argument with its current value.
            function_defaults = []
            for default_value in f.func_defaults:
                if isinstance(default_value, Default):
                    function_defaults.append(defaults[default_value.name])
                else:
                    function_defaults.append(default_value)
            # Set the new function defaults.
            f.func_defaults = tuple(function_defaults)
            return_value = f(*args, **kwargs)
            # Restore original defaults (required to keep this trick working.)
            f.func_defaults = original_defaults
            return return_value
        return wrapper
    return decorator
By defining the default parameters with Default(parameter_name) you tell the set_defaults decorator which value to take from the defaults dict.
Also, with a little more code (irrelevant to the solution) you can make it work like:
@set_defaults(my_defaults)
def DoSomething(var_of_data, user_input=Default.USER_INPUT):
    ...
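One possible way to get that spelling (an assumption on my part, not part of the original answer) is a metaclass whose __getattr__ builds Default instances on attribute access:
class DefaultMeta(type):
    def __getattr__(cls, name):
        # Default.USER_INPUT becomes Default("USER_INPUT")
        return cls(name)

class Default(object):
    __metaclass__ = DefaultMeta  # Python 2 metaclass syntax

    def __init__(self, name):
        super(Default, self).__init__()
        self.name = name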