Get access to class attributes - python-2.7

import yaml


class Import_Yaml_Setting():
    def __init__(self, path):
        self.read_yaml(path)

    def read_yaml(self, path):
        stream = open(path, 'r')
        self.settings = yaml.load(stream)
        stream.close()


class MasterDef(Import_Yaml_Setting):
    def __init__(self, path):
        Import_Yaml_Setting.__init__(self, path)


def function_1():
    path = 'path_to_settings\\yaml_file.yaml'
    MasterDef(path)

def function_2():
    MasterDef.settings


if __name__ == '__main__':
    function_1()
    function_2()
My plan is to have a class Import_Yaml_Setting which imports settings from a YAML file. The class MasterDef inherits from Import_Yaml_Setting.
function_1 then calls MasterDef in order to import the settings. I want to do this once in my program; afterwards, I just want to access the imported settings without importing them again. That is what function_2 should do.
My problem
I don't know how to call MasterDef in the first place. If I created an instance of MasterDef, then I wouldn't have access to that instance in function_2.
Also, I get an error that says MasterDef has no attribute settings.
What would be the right way to do this?

There are a few things incorrect, so let's start with the most obvious.
If you have a class MasterDef, calling MasterDef() creates an instance
of that class. If you don't assign that instance to a variable, it will
immediately disappear.
Doing MasterDef.settings later on could only work if the class had a
class attribute or method called settings, and in that case you would not be
accessing the settings attribute of an instance.
Typically, such global settings are passed around, implemented as a function
object that does the loading only once, or made into a global variable (as
shown in the following example). Simplified, you would do:
from __future__ import print_function, absolute_import, division, unicode_literals


class MasterDef(object):
    def __init__(self):
        self.settings = dict(some='setting')

master_def = None

def function_1():
    global master_def
    if master_def is None:
        master_def = MasterDef()

def function_2():
    print('master_def:', master_def.settings)

if __name__ == '__main__':
    function_1()
    function_2()
which gives:
master_def: {'some': 'setting'}
A few notes to the above:
If, for whatever reason, you are doing anything new on Python 2.7,
make things more Python 3 compatible by including the from
__future__ import as indicated, even if you are only using the
print function (instead of the outdated print statement). It
will make transitioning easier (2.7 goes EOL in 2020).
Again, in 2.7 make your base classes a subclass of object; that
makes it possible, for example, to have properties.
By testing that master_def is None you can invoke function_1 multiple
times without re-creating the instance.
You should also be aware that PyYAML's load(), as noted in its
documentation, can be unsafe when you don't have full control over
your input. There is seldom a need to use load(), so use safe_load()
(a short sketch follows the example output below) or upgrade to my
ruamel.yaml package, which implements the newer YAML 1.2 standard
(released in 2009, so there is no excuse for using PyYAML, which still
doesn't support it).
As you also seem to be on Windows (assumed from your use of \\), consider using raw strings,
in which you don't need to escape the backslash, and building paths with os.path.join(). I am leaving out
your path part in my full example as I am not on Windows:
from __future__ import print_function, absolute_import, division, unicode_literals

import ruamel.yaml


class Import_Yaml_Setting(object):
    def __init__(self, path):
        self._path = path  # stored in case you want to write out the configuration
        self.settings = self.read_yaml(path)

    def read_yaml(self, path):
        yaml = ruamel.yaml.YAML(typ='safe')
        with open(path, 'r') as stream:
            return yaml.load(stream)


class MasterDef(Import_Yaml_Setting):
    def __init__(self, path):
        Import_Yaml_Setting.__init__(self, path)


master_def = None

def function_1():
    global master_def
    path = 'yaml_file.yaml'
    if master_def is None:
        master_def = MasterDef(path)

def function_2():
    print('master_def:', master_def.settings)

if __name__ == '__main__':
    function_1()
    function_2()
If your YAML file looks like:
example: file
very: simple
the output of the above program will be:
master_def: {'example': 'file', 'very': 'simple'}
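If you do stay with PyYAML, here is a minimal sketch of the two notes above, using safe_load() and os.path.join(); the load_settings name is just for illustration and is not from the original code:

import os
import yaml  # PyYAML


def load_settings(path):
    # safe_load() only constructs plain Python objects (dict, list, str, ...),
    # so untrusted input cannot trigger arbitrary object construction
    with open(path, 'r') as stream:
        return yaml.safe_load(stream)


# os.path.join() inserts the platform separator, so no '\\' escaping is needed
path = os.path.join('path_to_settings', 'yaml_file.yaml')
settings = load_settings(path)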

Related

How to fix circular importing?

It seems I have a circular import error, and I am currently struggling to fix it. Does anyone know what I should do?
In my models.py, containing ReservedItems & Order:
def reserveditem_pre_save_receiver(sender, instance, **kwargs):
    if not instance.order_reference:
        instance.order_reference = unique_order_reference_generator()
In my utils.py
from lumis.utils import get_random_string

from .models import Order, ReservedItem


def unique_order_reference_generator():
    new_id = get_random_string(length=10)
    reserved_item = ReservedItem.objects.filter(
        order_reference=new_id
    ).exists()
    order = Order.objects.filter(order_reference=new_id).exists()

    if reserved_item or order:
        return unique_order_reference_generator()
    else:
        return new_id
You can import modules locally in the body of the function, so:
from lumis.utils import get_random_string


def unique_order_reference_generator():
    from .models import Order, ReservedItem

    new_id = get_random_string(length=10)
    reserved_item = ReservedItem.objects.filter(
        order_reference=new_id
    ).exists()
    order = Order.objects.filter(order_reference=new_id).exists()

    if reserved_item or order:
        return unique_order_reference_generator()
    else:
        return new_id
This means that the module is not loaded when Python loads the file, but only when the function is actually called. As a result, we can load the unique_order_reference_generator function without having to load the module that depends on it.
Note that, as @Alasdair says, signals are typically defined in a dedicated file (signals.py, for example) which should be loaded in the ready() function of the app; a rough sketch of that layout follows below. But regardless of how you structure the code, local imports are frequently used to avoid circular imports.
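As a hedged illustration of that layout (the your_app label, class names, and file paths are placeholders, not taken from the question): the receiver moves to signals.py and is imported from AppConfig.ready(), so models.py no longer needs to import utils.py at all.

# your_app/signals.py
from django.db.models.signals import pre_save
from django.dispatch import receiver

from .models import ReservedItem
from .utils import unique_order_reference_generator


@receiver(pre_save, sender=ReservedItem)
def reserveditem_pre_save_receiver(sender, instance, **kwargs):
    if not instance.order_reference:
        instance.order_reference = unique_order_reference_generator()


# your_app/apps.py
from django.apps import AppConfig


class YourAppConfig(AppConfig):
    name = 'your_app'

    def ready(self):
        from . import signals  # noqa: F401 -- importing the module registers the receivers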
All the current suggestions are good. Move your signal handlers out of models. Models are prone to circular imports because they are used everywhere, so it is a good idea to keep only model code in models.py.
Personally, I don't like imports in the middle of the code (pylint flags them as import-outside-toplevel / "Import outside toplevel").
Instead, I use the Django application registry to look up models without importing them directly:
from django.apps import apps


def signal_handler(instance, *args, **kwargs):
    Order = apps.get_model('your_app', 'Order')
    ...
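As a follow-up sketch under the same placeholder app label: model signals can also be connected with a lazy 'app_label.ModelName' string for the sender, so the handler module never imports models.py at load time.

from django.apps import apps
from django.db.models.signals import pre_save


def signal_handler(sender, instance, **kwargs):
    # the model class is looked up at call time, not at import time
    Order = apps.get_model('your_app', 'Order')
    assert sender is Order  # the lazy lookup resolves to the real class


# Django resolves the string sender lazily as well
pre_save.connect(signal_handler, sender='your_app.Order')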

Can't import custom python module using multiprocess library

I'm just getting started with the multiprocessing library in my code base to parallelise a simple for loop where previously, in the serial version, I would import a custom configuration .py file and pass it to a function to be run.
However, I'm having issues with passing the configuration module to the parallelised processes.
NB: there are multiple custom configuration .py files which I want to pass to the different processes.
Example:
def get_custom_config():
    config_list = []
    for project_config in configs:
        config = importlib.import_module("config.%s.%s" % (prefix, project_config))
        config_list.append(config)
    return config_list

def print_config(config):
    print config.something_in_config_file

if __name__ == "__main__":
    config_list = get_custom_config()
    pool = mp.Pool(processes=2)
    pool.map(print_config, config_list)
Returns:
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
cPickle.PicklingError: Can't pickle <type 'module'>: attribute lookup __builtin__.module failed
What is the best way of passing a module to a parallel process?
I do have a possible solution for you, but I don't like the approach you have:
config = importlib.import_module("config.%s.%s" % (prefix, project_config))
You should try to have config as a dictionary of key/value pairs instead of a module, or import it that way.
The issue is that functions and modules are not picklable by default in Python 2.7. Functions are picklable by default in Python 3.x; modules still are not.
import importlib
import multiprocessing as mp

configs = ["abc", "def"]

import copy_reg
import types


def _pickle_module(module):
    module_name = module.__name__
    print("pickling" + module_name)
    path = getattr(module, "__file__", None)
    return _unpickle_module, (module_name, path)


def _unpickle_module(module_name, path):
    return importlib.import_module(module_name)


copy_reg.pickle(types.ModuleType, _pickle_module, _unpickle_module)


def get_custom_config():
    config_list = []
    for project_config in configs:
        config = importlib.import_module("config.%s" % (project_config))
        config_list.append(config)
    return config_list


def print_config(config):
    print(vars(config))


if __name__ == "__main__":
    config_list = get_custom_config()
    pool = mp.Pool(processes=2)
    pool.map(print_config, config_list)
This basically re-imports the module in the other process, so remember you are not sharing data between them; it is good for read-only configuration values.
But as I mentioned, passing modules to a different process makes little sense. Try to fix your approach instead of using the code I posted; a small sketch of the dict-based alternative follows the PS below.
PS: Solution inspired by Can't pickle <type 'cv2.BRISK'>: attribute lookup cv2.BRISK failed
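For completeness, a minimal sketch of that dict-based alternative (the config.abc / config.def module names stand in for your real configuration modules): extract plain, picklable data from each module in the parent process and pass only those dicts to the pool.

import importlib
import multiprocessing as mp

configs = ["abc", "def"]  # placeholder configuration names, as in the question


def get_custom_config():
    config_list = []
    for project_config in configs:
        module = importlib.import_module("config.%s" % project_config)
        # keep only simple, non-underscore attributes; values that are themselves
        # modules, classes or functions may still refuse to pickle
        settings = {k: v for k, v in vars(module).items() if not k.startswith('_')}
        config_list.append(settings)
    return config_list


def print_config(config):
    print(config)


if __name__ == "__main__":
    pool = mp.Pool(processes=2)
    pool.map(print_config, get_custom_config())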

I want to import a function from another python file which is inside the class

sample_wave is another Python file, onlistener is the class name, and onPartialTranscript is a function inside it. I have tried to import it like this:
from sample_wave import onPartialTranscript
How can I invoke the function inside the class from another file?
def create_username():
    username, pwd
    try:
        engine.say("Enter the user name and password for New user")
        engine.runAndWait()
        username = add_user(raw_input(onPartialTranscript(username), pwd=getpass.getpass()))
        engine.say("User Credentials added! %s" % username)
        engine.runAndWait()
    except Exception as e:
        engine.say("recheck credentials")
        engine.runAndWait()
You cannot import a function from a class; you should import the class, then create an object of that class and call the method on it:
from sample_wave import onlistener

x = onlistener()
username = ...
x.onPartialTranscript(username)
You can't import the function directly. However, you could:
1. Import the class and instantiate the object, as stated in the earlier answer
2. Rewrite the function
3. Import the class and create your own class as its child, overriding the __init__ method
I would vote for 3.; it seems the cleanest to me (a small sketch of that follows below).
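A minimal sketch of option 3, assuming onPartialTranscript does not rely on state that onlistener.__init__ sets up; the child class name and the argument are made up for illustration:

from sample_wave import onlistener


class PartialTranscriptOnly(onlistener):
    def __init__(self):
        # deliberately skip onlistener.__init__ so no listener state is created
        pass


helper = PartialTranscriptOnly()
username = helper.onPartialTranscript('raw user input')  # hypothetical argument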
Besides that, you can try the following experiment. Nevertheless, it can introduce unexpected results; the use of eval, for example, is not good practice.
test.py - the module which you want to import from:
class onlistener:
    def __init__(self):
        pass

    def testfunction(self):
        print "Imported"
In the following, the inspect module is used to find the function called testfunction:
import inspect

from test import onlistener


def main():
    classfunc = inspect.getsourcelines(onlistener.testfunction)
    # the first field contains the code (list of lines), convert it to string
    my_str = ''
    first = True
    for line in classfunc[0]:
        if first:
            # First line contains 'self' argument, remove it
            line = line.replace('self', '')
            first = False
        # we need to remove the indentation, here it's 4 spaces
        my_str += line[4:]
    # append the call to the function
    # newline is needed due to specification of compile function
    my_str += 'testfunction()\n'
    # create code object
    my_func = compile(my_str, '<string>', 'exec')
    # The last line will call the function and print "Imported"
    eval(my_func)


main()

__init__ variable not found in test class?

I recently changed from using nose to nose2; however, a lot of my testing code seems to have broken in the process. One thing in particular: the init variable I put in my test class, self.mir_axis, is giving this error:
mirror_index = mirror_matrix.index(self.mir_axis)
AttributeError: 'TestConvert' object has no attribute 'mir_axis'
This used to work with nose; however, with nose2 my init variable for some reason is no longer registering. Am I missing something here? I'm using Python 2.7.3 and Eclipse as an IDE, by the way.
from nose2.compat import unittest
from nose2.tools import params
from nose2 import session
from nose2.events import ReportTestEvent
from nose2.plugins import testid
from nose2.tests._common import (FakeStartTestEvent, FakeLoadFromNameEvent,
                                 FakeLoadFromNamesEvent, TestCase)
# Import maya modules
import maya.cmds as mc
# Absolute imports of other modules
from neo_autorig.scripts.basic import name
from neo_autorig.scripts.basic import utils

# Test class for converting strings
class TestConvert(TestCase):
    counter = 0  # counter to cycle through mir_axes

    def _init__(self):
        mir_axes = ['xy', '-xy', 'yz', '-yz']  # different axes to be applied
        self.mir_axis = mir_axes[self.__class__.counter]
        self.__class__.counter += 1  # increase counter when run
        if self.__class__.counter > 3:
            self.__class__.counter = 0  # if counter reaches max, reset
        self.utils = utils.Utils(self.mir_axis, False)  # pass module variables

    def setUp(self):  # set up maya scene
        side_indicator_l = mc.spaceLocator(n='side_indicator_left')[0]
        side_indicator_r = mc.spaceLocator(n='side_indicator_right')[0]
        mirror_matrix = ['xy', '-xy', 'yz', '-yz']
        trans_matrix = ['tz', 'tz', 'tx', 'tx']
        side_matrix = [1, -1, 1, -1]
        mirror_index = mirror_matrix.index(self.mir_axis)
        mc.setAttr(side_indicator_l+'.'+trans_matrix[mirror_index], side_matrix[mirror_index])
        mc.setAttr(side_indicator_r+'.'+trans_matrix[mirror_index], side_matrix[mirror_index]*-1)

    def tearDown(self):  # delete everything after
        mc.delete('side_indicator_left', 'side_indicator_right')

    def test_prefix_name_side_type(self):  # test string
        nc = name.Name('prefix_name_side_type')
        existing = nc.get_scenenames('transform')
        self.assertEqual(nc.convert('test', 'empty', self.utils.find_side('side_indicator_left'),
                                    'object', existing), 'test_empty_l_object')
        self.assertEqual(nc.convert('test', 'empty', self.utils.find_side('side_indicator_right'),
                                    'object', existing), 'test_empty_r_object')

# run if script is run from inside module
if __name__ == '__main__':
    import nose2
    nose2.main()
I see two problems with the snippet you posted:
The first one is that def _init__(self): is missing an underscore; it should be def __init__(self):
The second one (and it seems to be the reason for the error) is that the first line in _init__, mir_axes = ['xy', '-xy', ..., should be self.mir_axes = ...
Edit
You should use setUp instead of __init__ regardless, according to Ned Batchelder of coverage.py fame (a minimal sketch of that follows). :)
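A rough sketch of moving that bookkeeping into setUp(), keeping the counter idea from the question; the utils import mirrors the question's modules, and only the axis bookkeeping is shown (the Maya scene setup would stay in setUp as well):

import unittest

from neo_autorig.scripts.basic import utils  # module from the question


class TestConvert(unittest.TestCase):
    counter = 0  # cycles through the axes across test runs

    def setUp(self):
        mir_axes = ['xy', '-xy', 'yz', '-yz']
        self.mir_axis = mir_axes[self.__class__.counter]
        self.__class__.counter = (self.__class__.counter + 1) % len(mir_axes)
        self.utils = utils.Utils(self.mir_axis, False)
        # ... the existing Maya scene setup continues here ...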

making a function staticmethod in python is confusing

Hi, I have a GUI written using Tkinter and the code template is as follows. My question is that PyCharm gives me warnings on my functions (def func1, def func2) that they are static. To get rid of the warnings I placed @staticmethod above the functions. What does this do, and is it necessary?
# Use Tkinter for python 2, tkinter for python 3
import Tkinter as Tk
import ctypes
import numpy as np
import os, fnmatch
import tkFont


class MainWindow(Tk.Frame):
    def __init__(self, parent):
        Tk.Frame.__init__(self, parent)
        self.parent = parent
        self.parent.title('BandCad')
        self.initialize()

    @staticmethod
    def si_units(self, string):
        if string.endswith('M'):
            num = float(string.replace('M', 'e6'))
        elif string.endswith('K'):
            num = float(string.replace('K', 'e3'))
        elif string.endswith('k'):
            num = float(string.replace('k', 'e3'))
        else:
            num = float(string)
        return num


if __name__ == "__main__":
    # main()
    root = Tk.Tk()
    app = MainWindow(root)
    app.mainloop()
You can also turn off that inspection so that PyCharm doesn't warn you. Preferences -> Editor -> Inspections. Note that the inspection appears in the JavaScript section as well as the Python section.
You are right about @staticmethod being confusing. It is not really needed in Python code and in my opinion should almost never be used. Instead, since si_units is not really a method, move it out of the class and remove the unused self parameter. (Actually, you should have done that when adding @staticmethod; the posted code will not work right with self left in.)
Unless one has forgotten to use self where it is needed, this is (or at least should be) the intent of the PyCharm warning. No confusion, no fiddling with PyCharm settings.
While you are at it, you could condense the function and make it easily extensible to other suffixes by using a dict:
def si_units(string):
    d = {'k': 'e3', 'K': 'e3', 'M': 'e6'}
    end = string[-1]
    if end in d:
        string = string[:-1] + d[end]
    return float(string)

for f in ('1.5', '1.5k', '1.5K', '1.5M'):
    print(si_units(f))
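If you do want to keep the helper inside a class, here is a small sketch of what @staticmethod actually does; the UnitConverter name is made up for illustration, and note that there is no self parameter:

class UnitConverter(object):
    @staticmethod
    def si_units(string):
        # stored on the class and called without an implicit first argument
        d = {'k': 'e3', 'K': 'e3', 'M': 'e6'}
        end = string[-1]
        if end in d:
            string = string[:-1] + d[end]
        return float(string)


print(UnitConverter.si_units('4.7M'))    # callable on the class ...
print(UnitConverter().si_units('1.5k'))  # ... or on an instance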