My simple code works like this:
In my_stuffmodule1.py, I have the following:
import sys

def main():
    result = 'found stuff here'
    return result

if __name__ == '__main__':
    main()
I want to use the result returned from my_stuffmodule1 in my next module below, called my_stuffmodule2:
import my_stuffmodule1
result

class Use_stuff(object):
    def stuff1(self):
        for item in result:
            code..
    def stuff2(self):
        code...
BUT I get errors such as 'result is not defined'. I want to use the items of the result string in my_stuffmodule2.
As result is defined in my_stuffmodule1.main, it is only visible inside main(); once you call main(), its value is returned.
So in your second module, you need to do this:
import my_stuffmodule1
result = my_stuffmodule1.main()
Now you'll have the value of result in your second module. If you don't want to do this, then in your first module you need to make sure main() is called when the module is evaluated (when it's imported). To do that, you'll need to put a call to main in the global scope, like this:
def main():
    return 'result found here'

result = main()
Now, when you import the module, main() will be called and you can do this:
import my_stuffmodule1
result = my_stuffmodule1.result
Note that you still have to use my_stuffmodule1.result because you are importing the module itself. If you want to refer to it as just result, you could do this:
from my_stuffmodule1 import result
However, keep in mind that this will overwrite any other result you might have in your second module. Therefore, it's better to import the module and qualify the name as my_stuffmodule1.result.
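A minimal sketch of that shadowing (the module is simulated with types.ModuleType so the snippet is self-contained):

```python
import sys
import types

# Stand-in for my_stuffmodule1 with a module-level result.
demo = types.ModuleType('my_stuffmodule1_demo')
demo.result = 'found stuff here'
sys.modules['my_stuffmodule1_demo'] = demo

result = 'my own local value'
from my_stuffmodule1_demo import result  # silently rebinds the local name

print(result)  # 'found stuff here', not 'my own local value'
```

Qualifying the name as my_stuffmodule1.result instead leaves the local binding untouched.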
At a guess, I'd say it's because you're importing the first file, and the result string only gets set when it's run in the context of main. Also, you are returning the result variable: it is defined within the scope of the main() function, not in a 'global' scope, as it were.
If you were to simply set the variable result outside of the main() function, does that work?
Alternatively, set the value of result to the return value of the main() function, in the second script.
Below is a working example of the second option:
my_stuffmodule1:

import sys

def main():
    result = 'found stuff here'
    return result

if __name__ == '__main__':
    main()

my_stuffmodule2:

import my_stuffmodule1

result = my_stuffmodule1.main()

class Use_stuff(object):
    def stuff1(self):
        for item in result:
            code..
    def stuff2(self):
        code...
If you want the result from the main method of my_stuffmodule1, then you will need to execute that module's main method. For example:
import my_stuffmodule1

result = my_stuffmodule1.main()

class Use_stuff(object):
    def stuff1(self):
        for item in result:
            code..
    def stuff2(self):
        code...
Consider this example:

def func_b(a):
    print a

def func_a():
    a = [-1]
    for i in xrange(0, 2):
        a[0] = i
        func_b(a)
And a test function that tries to test func_a and mocks func_b:

import mock
from mock import call

def test_a():
    from dataTransform.test import func_a
    with mock.patch('dataTransform.test.func_b', autospec=True) as func_b_mock:
        func_a()
    func_b_mock.assert_has_calls([call(0), call(1)])
After func_a has executed, I try to check that func_a made the correct calls to func_b, but since the for loop mutates the list, in the end I get:
AssertionError: Calls not found.
Expected: [call(0), call(1)]
Actual: [call([1]), call([1])]
The following works (importing mock from unittest is a Python 3 thing, and module is where func_a and func_b live):
import copy

import mock
from mock import call

class ModifiedMagicMock(mock.MagicMock):
    def _mock_call(_mock_self, *args, **kwargs):
        return super(ModifiedMagicMock, _mock_self)._mock_call(
            *copy.deepcopy(args), **copy.deepcopy(kwargs))
This inherits from MagicMock, and redefines the call behaviour to deepcopy the arguments and keyword arguments.
def test_a():
    from module import func_a
    with mock.patch('module.func_b', new_callable=ModifiedMagicMock) as func_b_mock:
        func_a()
    func_b_mock.assert_has_calls([call([0]), call([1])])
You can pass the new class into patch using the new_callable parameter; however, it cannot co-exist with autospec. Note that your function calls func_b with a list, so call(0), call(1) has to be changed to call([0]), call([1]). When run by calling test_a(), this passes silently.
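The root cause can be reproduced with a plain MagicMock and no patching at all; a minimal sketch:

```python
# A mock's call_args_list stores references to the arguments, not copies,
# so mutating the list after the calls changes what the mock "remembers".
from unittest import mock

m = mock.MagicMock()
a = [-1]
for i in range(2):
    a[0] = i
    m(a)

# Both recorded calls reference the same list, which now holds [1].
print(m.call_args_list == [mock.call([1]), mock.call([1])])  # True
```

This is exactly why ModifiedMagicMock deep-copies the arguments at call time.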
Now, we cannot use both new_callable and autospec, because new_callable is a generic factory, while in our case we just want a MagicMock override. But autospeccing is a very cool mock feature, and we don't want to lose it.
What we need is to replace MagicMock with ModifiedMagicMock just for our test: we want to avoid changing MagicMock's behavior for all tests, which could be dangerous. We already have a tool to do that, and it is patch, used with the new argument to replace the destination.
In this case we use decorators to avoid too much indentation and make it more readable:
@mock.patch('module.func_b', autospec=True)
@mock.patch("mock.MagicMock", new=ModifiedMagicMock)
def test_a(func_b_mock):
    from module import func_a
    func_a()
    func_b_mock.assert_has_calls([call([0]), call([1])])
Or:
@mock.patch("mock.MagicMock", new=ModifiedMagicMock)
def test_a():
    with mock.patch('module.func_b') as func_b_mock:
        from module import func_a
        func_a()
    func_b_mock.assert_has_calls([call([0]), call([1])])
SEFC.py:
import time
import traceback
import platform

sefc_verbose = False
obj_sefc = None

class CSEFC():
    def __init__(self):
        self.fp_platform = False
        self.bbu_platform = False
        return

    def __del__(self):
        return

    def ipmi_cmd_trace(self):
        return False
KCS.py:
import SEFC as sefc

class CKCS(CLogger):
    def __init__(self, str_ip=None, int_port=_DEFAULT_ATRAGON_PORT):
        CLogger.__init__(self)
        self.obj_json_client = None

    def send_ipmi_target(self, targetstr, raw_request, int_retry=3):
        if sefc.obj_sefc.ipmi_cmd_trace():
            ## do stuff
I am reading code written by someone else. I can't seem to understand how, in if sefc.obj_sefc.ipmi_cmd_trace():, obj_sefc is used to call the ipmi_cmd_trace() function. obj_sefc is a global variable, I believe, but it is initialized to None, so this code should not work. I also doubt my programming ability; this code seems to compile and work for others. Is this correct? Am I missing something here?
With just the code you've shown, you're right, it won't work, since obj_sefc is None in the SEFC module. However, I suspect that some other code you haven't shown creates an instance of the CSEFC class and assigns it to that global variable. Then the code you've shown will work.
It's probably not a good design for the code you've shown to rely on some other code being run first, since it will fail if things run in the wrong order. However, using a global variable to hold a single instance of a class is not problematic in general. You just want to make sure the code that creates the instance is put somewhere that ensures it runs before the instance is needed. For instance, it could go at the bottom of the SEFC module, or at the top of the KCS module.
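A minimal sketch of that ordering fix, with CSEFC stripped down to the one method the question uses: creating the instance at module level, right after the class definition, means any importer sees a real object instead of None.

```python
# Stripped-down sketch of the SEFC module with the singleton created
# at the bottom, after the class definition.
class CSEFC(object):
    def ipmi_cmd_trace(self):
        return False

# Module-level instance: this runs at import time, so importers never see None.
obj_sefc = CSEFC()

# What KCS.py-style code can now do safely:
if obj_sefc.ipmi_cmd_trace():
    pass  # trace the IPMI command here
```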
I didn't find quite what I was looking for.
I want to obtain the output (stdout) from a python function in real time.
The actual problem is that I want to plot a graph (with cplot from sympy) with a progress bar in my UI. The argument verbose makes cplot output the progress to stdout.
sympy.mpmath.cplot(lambda z: z, real, imag, verbose=True)
The output would be something like:
0 of 71
1 of 71
2 of 71
...
And so on.
I want to capture it line by line so I can make a progress bar (I realize this might not be possible without multithreading). I'm using Python 2.7, mainly because I need libraries that aren't in Python 3. So, how do I achieve that?
You can capture stdout by monkeypatching sys.stdout. A good way to do it is using a context manager, so that it gets put back when you are done (even if the code raises an exception). If you don't use a context manager, be sure to put the original sys.stdout back using a finally block.
You'll need a file-like object that takes the input and does what you want with it. Subclassing StringIO is a good start. Here's an example of a context manager that captures stdout and stderr and stores them on the bound variable.
from contextlib import contextmanager
from StringIO import StringIO  # use io.StringIO on Python 3

class CapturedText(object):
    pass

@contextmanager
def captured(disallow_stderr=True):
    """
    Context manager to capture the printed output of the code in the with block.

    Bind the context manager to a variable using `as` and the result will be
    in the stdout property.

    >>> with captured() as c:
    ...     print('hello world!')
    ...
    >>> c.stdout
    'hello world!\n'
    """
    import sys
    stdout = sys.stdout
    stderr = sys.stderr
    sys.stdout = outfile = StringIO()
    sys.stderr = errfile = StringIO()
    c = CapturedText()
    yield c
    c.stdout = outfile.getvalue()
    c.stderr = errfile.getvalue()
    sys.stdout = stdout
    sys.stderr = stderr
    if disallow_stderr and c.stderr:
        raise Exception("Got stderr output: %s" % c.stderr)
It works as shown in the docstring. You can replace StringIO() with your own class that writes the progress bar.
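As a sketch of that idea, here is a hypothetical file-like class whose write() parses lines of the form "N of M" (the format shown in the question) and feeds the completed fraction to a callback. Everything below is illustrative, not part of sympy:

```python
import re
import sys

class ProgressCapture(object):
    """File-like object that parses 'N of M' lines into a progress fraction."""
    def __init__(self, callback):
        self.callback = callback
        self._buf = ''

    def write(self, text):
        # Buffer partial writes until a full line is available.
        self._buf += text
        while '\n' in self._buf:
            line, self._buf = self._buf.split('\n', 1)
            m = re.match(r'\s*(\d+) of (\d+)', line)
            if m:
                self.callback(int(m.group(1)) / float(m.group(2)))

    def flush(self):
        # Some writers call flush(); make it a harmless no-op.
        pass

# Swap it in for sys.stdout, restoring the original in a finally block.
fractions = []
old_stdout = sys.stdout
sys.stdout = ProgressCapture(fractions.append)
try:
    print('0 of 71')   # stands in for cplot's verbose output
    print('1 of 71')
finally:
    sys.stdout = old_stdout
```

A progress bar UI would update inside the callback instead of collecting the fractions into a list.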
Another possibility would be to monkeypatch sympy.mpmath.visualization.print, since cplot uses print to print the output, and it uses from __future__ import print_function.
First, make sure you are using from __future__ import print_function if you aren't using Python 3, as this will otherwise be a SyntaxError.
Then something like:

def progressbar_print(*args, **kwargs):
    # Take *args and convert it to a progress output
    progress(*args)
    # If you want to still print the output, do it here
    print(*args, **kwargs)

sympy.mpmath.visualization.print = progressbar_print
You might want to monkeypatch it in a custom function that puts it back, as other functions in that module might use print as well. Again, remember to do this using either a context manager or a finally block so that it gets put back even if an exception is raised.
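That put-it-back wrapper can be sketched as a generic context manager; the module and attribute names below are stand-ins (a throwaway namespace rather than sympy.mpmath.visualization):

```python
import contextlib
import types

@contextlib.contextmanager
def patched_attr(namespace, name, replacement):
    """Temporarily replace an attribute, restoring it even on error."""
    original = getattr(namespace, name)
    setattr(namespace, name, replacement)
    try:
        yield
    finally:
        setattr(namespace, name, original)

# Demo on a throwaway module object standing in for the real one.
viz = types.ModuleType('viz_demo')
viz.print = print  # the module-level print function to intercept

lines = []
with patched_attr(viz, 'print', lambda *args, **kwargs: lines.append(' '.join(map(str, args)))):
    viz.print('3 of 71')   # captured instead of printed
```

After the with block, viz.print is the original function again, even if the body had raised.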
Monkeypatching sys.stdout is definitely the more standard way of doing this, but I like this solution in that it shows that having print as a function can actually be useful.
I have a class that transforms some values via a user-specified function. The reference to the function is passed in the constructor and saved as an attribute. I want to be able to pickle or make copies of class instances. In the __getstate__() method, I convert the dictionary entry to a string to make it safe for pickling or copying. However, in the __setstate__() method I'd like to convert back from string to function reference, so the new instance can transform values.
class transformer(object):
    def __init__(self, values=[1], transform_fn=np.sum):
        self.values = deepcopy(values)
        self.transform_fn = transform_fn

    def transform(self):
        return self.transform_fn(self.values)

    def __getstate__(self):
        obj_dict = self.__dict__.copy()
        # convert function reference to string
        obj_dict['transform_fn'] = str(self.transform_fn)
        return obj_dict

    def __setstate__(self, obj_dict):
        self.__dict__.update(obj_dict)
        # how to convert back from string to function reference?
The function reference that is passed can be any function, so solutions involving a dictionary with a fixed set of function references is not practical/flexible enough. I would use it like the following.
from copy import deepcopy
import numpy as np

my_transformer = transformer(values=[0, 1], transform_fn=np.exp)
my_transformer.transform()
This outputs: array([ 1. , 2.71828183])
new_transformer = deepcopy(my_transformer)
new_transformer.transform()
This gives me: TypeError: 'str' object is not callable, as expected.
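One way to make the round trip work, sketched under the assumption that transform_fn is always a module-level or builtin function (the builtin sum is used here to keep the sketch dependency-free): store the function's __module__ and __name__ in __getstate__, and re-import it in __setstate__.

```python
from copy import deepcopy
import importlib

class transformer(object):
    def __init__(self, values=(1,), transform_fn=sum):
        self.values = deepcopy(list(values))
        self.transform_fn = transform_fn

    def transform(self):
        return self.transform_fn(self.values)

    def __getstate__(self):
        obj_dict = self.__dict__.copy()
        fn = self.transform_fn
        # Store an importable (module, name) pair instead of str(fn).
        obj_dict['transform_fn'] = (fn.__module__, fn.__name__)
        return obj_dict

    def __setstate__(self, obj_dict):
        self.__dict__.update(obj_dict)
        module_name, fn_name = self.transform_fn
        # Re-import the module and look the function back up by name.
        self.transform_fn = getattr(importlib.import_module(module_name), fn_name)

t = transformer(values=[1, 2], transform_fn=sum)
t2 = deepcopy(t)        # round-trips through __getstate__/__setstate__
print(t2.transform())   # 3
```

This breaks for lambdas and nested functions, whose names are not importable, which is the same limitation pickle itself has.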
You could use dir to access names in a given scope, and then getattr to retrieve them.
For example, if you know the function is in numpy:
import numpy

attrs = [x for x in dir(numpy) if '__' not in x]  # I like to ignore private vars
if obj_dict['transform_fn'] in attrs:
    fn = getattr(numpy, obj_dict['transform_fn'])
else:
    print 'uhoh'
This could be extended to look in other modules / scopes.
If you want to search in the current scope, you can do the following:
import sys

this = sys.modules[__name__]
attrs = dir(this)
if obj_dict['transform_fn'] in attrs:
    fn = getattr(this, obj_dict['transform_fn'])
else:
    print 'Damn, well that sucks.'
To search submodules / imported modules you could iterate over attrs based on type (potentially recursively, though note that this is an attr of this).
If you are asking the same question I came here for, the answer is simply to use eval() to evaluate the name:

>>> ref = eval('name')

This returns whatever 'name' references in the scope where the eval() is executed; you can then check whether that reference is a function.
I have a function in a certain module that I want to redefine (mock) at runtime for testing purposes. As far as I understand, a function definition is nothing more than an assignment in Python (the module definition itself is a kind of function being executed). I want to do this in the setUp of a test case, so the function to be redefined lives in another module. What is the syntax for doing this?
For example, 'module1' is my module and 'func1' is my function; in my test case I have tried this (no success):

import module1
module1.func1 = lambda x: return True

(That lambda is itself a SyntaxError: a lambda body must be an expression, so it would need to be lambda x: True.)
import unittest

import module1

class MyTest(unittest.TestCase):
    def setUp(self):
        # Replace othermod.function with our own mock
        self.old_func1 = module1.func1
        module1.func1 = self.my_new_func1

    def tearDown(self):
        module1.func1 = self.old_func1

    def my_new_func1(self, x):
        """A mock othermod.function just for our tests."""
        return True

    def test_func1(self):
        module1.func1("arg1")
Lots of mocking libraries provide tools for doing this sort of mocking, you should investigate them as you will likely get a good deal of help from them.
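For instance, the standard library's unittest.mock can do the setUp/tearDown bookkeeping for you. In this sketch, module1 is simulated with a throwaway module object, since the question's real module isn't available:

```python
import sys
import types
from unittest import mock

# Stand-in for the question's module1.
module1 = types.ModuleType('module1_demo')
module1.func1 = lambda x: 'real result'
sys.modules['module1_demo'] = module1

# patch.object swaps the attribute and restores it when the block exits,
# even if an exception is raised inside it.
with mock.patch.object(module1, 'func1', return_value=True):
    patched = module1.func1('arg1')

restored = module1.func1('arg1')
print(patched, restored)  # True real result
```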
import foo

def bar(x):
    pass

foo.bar = bar
Just assign a new function or lambda to the old name:
>>> def f(x):
... return x+1
...
>>> f(3)
4
>>> def new_f(x):
... return x-1
...
>>> f = new_f
>>> f(3)
2
It also works when the function comes from another module:
### In other.py:
# def f(x):
#     return x+1
###

import other
other.f = lambda x: x-1
print other.f(1)  # prints 0, not 2
Use redef: http://github.com/joeheyming/redef
import module1
from redef import redef
rd_f1 = redef(module1, 'func1', lambda x: True)
When rd_f1 goes out of scope or is deleted, func1 goes back to normal.
If you want to reload the file foo.py that you are editing into the interpreter, you can make a simple-to-type function and use execfile(). However, I just learned that this doesn't work without a global list of all the functions (sadly), unless someone has a better idea:
Somewhere in file foo.py:

def refoo():
    global fooFun1, fooFun2
    execfile("foo.py")
In the python interpreter:
refoo() # You now have your latest edits from foo.py
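On Python 3, where execfile() is gone, importlib.reload() does the same job without needing a global list of functions. A self-contained sketch using a temporary module file (foo_demo and fooFun1 are stand-ins for the question's foo.py names):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # make reload recompile the source, not a cached .pyc

# Create a throwaway module file to "edit".
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'foo_demo.py')
with open(path, 'w') as f:
    f.write('def fooFun1():\n    return 1\n')

sys.path.insert(0, tmpdir)
import foo_demo
first = foo_demo.fooFun1()

# Simulate an edit, then pick it up with reload.
with open(path, 'w') as f:
    f.write('def fooFun1():\n    return 2\n')
importlib.reload(foo_demo)
second = foo_demo.fooFun1()
print(first, second)  # 1 2
```

Unlike the execfile() trick, reload rebinds every top-level name in the module, so no global declaration is needed.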