How to implicitly use the base definition of a method - python-2.7

I'm currently developing for Python 2, and I'm trying to use abstract base classes to simulate interfaces. I have an interface, a base implementation of that interface, and many subclasses that extend the base implementation. It looks like this:
class Interface(object):
    __metaclass__ = ABCMeta

class IAudit(Interface):
    @abstractproperty
    def timestamp(self):
        raise NotImplementedError()

    @abstractproperty
    def audit_type(self):
        raise NotImplementedError()

class BaseAudit(IAudit):
    def __init__(self, audit_type):
        # init logic
        pass

    @property
    def timestamp(self):
        return self._timestamp

    @property
    def audit_type(self):
        return self._audit_type

class ConcreteAudit(BaseAudit):
    def __init__(self, audit_type):
        # init logic
        super(ConcreteAudit, self).__init__(audit_type)
However, PyCharm notifies me that ConcreteAudit should implement all abstract methods, even though BaseAudit (which is not declared as an ABC) already implements them and ConcreteAudit subclasses BaseAudit. Why is PyCharm warning me? Shouldn't it detect that IAudit's contract is already satisfied through BaseAudit?

Why is PyCharm warning you?
Because all Python IDEs suck, that's why.
Whenever an intern, junior programmer, or peer tells me that something I wrote isn't working for them, I tell them I'm not discussing it until they've run it as a plain Python script from the command line or the stock interpreter. 99% of the time, the problem disappears.
Why do they suck? Beats me. But they all sometimes hide exceptions, sometimes let you import things you didn't know about, and all sometimes decide (as in this case) that something is problematic when a real program running on the stock interpreter has no problem with it at all.
I tried your code in both Python 2.7 and Python 3.4, and as long as I add from abc import ABCMeta, abstractproperty at the top, it runs peachy-keen fine.
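For instance, with that import added, this quick check (the "login" value is just a stand-in) runs cleanly on the stock interpreter:
# BaseAudit provides concrete overrides for both abstract properties,
# so ABCMeta has nothing left to complain about at instantiation time.
audit = ConcreteAudit("login")    # no TypeError
print(isinstance(audit, IAudit))  # True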
So just ditch PyCharm, or update the tags to show that's where the error is.


Patch a class method used in a view

I'm trying to test my view given certain responses from AWS. To do this I want to patch a class I wrote to return certain things while running tests.
@patch.object(CognitoInterface, "get_user_tokens",
              return_value=mocked_get_user_tokens_return)
class TestLogin(TestCase):
    def test_login(self, mocked_get_user_tokens):
        print(CognitoInterface().get_user_tokens("blah", "blah"))  # Works, it prints the patched return value
        login_data = {"email": "whatever@example.com", "password": "password"}
        response = self.client.post(reverse("my_app:login"), data=login_data)
Inside the view from "my_app:login", I call...
CognitoInterface().get_user_tokens(email, password)
But this time, it uses the real method. I want it to use the patched return here as well.
It seems my patch only applies inside the test file. How can I make my patch apply to all code during the test?
Edit: Never figured out why @patch.object wasn't working. I just used @patch("path.to.file.from.project.root.ClassName.method_to_patch").
Also, see: http://bhfsteve.blogspot.com/2012/06/patching-tip-using-mocks-in-python-unit.html
Most likely CognitoInterface is imported in views.py earlier than the monkeypatching happens. You can check this by adding a print() in the views where CognitoInterface is imported (or declared; it's not obvious from the amount of code here), and another print() next to the monkeypatching.
If the monkeypatching happens later, you have to either delay the import or, in the worst case, monkeypatch in both places.
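For what it's worth, here is a minimal sketch of the workaround from the edit above. The dotted path ("my_app.views...") and the fake token dict are assumptions about the project layout; the point is to patch the name where the view looks it up:
from django.test import TestCase
from django.urls import reverse
from mock import patch  # unittest.mock on Python 3

@patch("my_app.views.CognitoInterface.get_user_tokens",
       return_value={"id_token": "fake", "access_token": "fake"})
class TestLogin(TestCase):
    def test_login(self, mocked_get_user_tokens):
        login_data = {"email": "whatever@example.com", "password": "password"}
        self.client.post(reverse("my_app:login"), data=login_data)
        # the view hit the mock instead of AWS
        self.assertTrue(mocked_get_user_tokens.called)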

Django - How can I have a big initialization required for a view (APIView)?

I have a simple APIView:
class MyView(APIView):
    symspell = Symspell()

    def post(self, request):
        res = self.symspell.do_something()
        return res
Here's my issue: the constructor of my Symspell class takes something like 30s to run. So whenever I run or do anything with my app (like ./manage.py migrate) it adds 30s to the runtime.
So my questions would be:
Is there a better way to do this (use a class with a long constructor in a view)?
Can I construct this view only when I'm actually running the server, and not during other operations like migrations?
Can I use the same class in several views?
Thanks for your help!
Is there a better way to do this (use a class with a long constructor in a view)?
I don't think so, although I've never seen a constructor this heavy, so don't consider me an authority on this.
Can I construct this view only when I'm actually running the server, and not during other operations like migrations?
This is doable if you run the constructor inside the initial() method of the APIView, which runs only when a request is actually dispatched.
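A hedged sketch of that idea (the caching attribute and its handling are mine, not from the question):
from rest_framework.views import APIView
from rest_framework.response import Response

class MyView(APIView):
    symspell = None  # not built at import time, so migrate stays fast

    def initial(self, request, *args, **kwargs):
        super(MyView, self).initial(request, *args, **kwargs)
        # pay the ~30s cost once, on the first real request
        if MyView.symspell is None:
            MyView.symspell = Symspell()

    def post(self, request):
        return Response(self.symspell.do_something())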
Can I use the same class in several views?
I think you mean use the same object in several views? If so, you can implement the class as a singleton to avoid rerunning the constructor each time.
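A minimal sketch of the singleton idea, assuming you can edit (or wrap) Symspell itself:
class Symspell(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        # every view that calls Symspell() receives the same object,
        # so the expensive setup runs once per process
        if cls._instance is None:
            cls._instance = super(Symspell, cls).__new__(cls)
            cls._instance._load()  # the ~30s part
        return cls._instance

    def _load(self):
        pass  # heavy initialization goes here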

How to test signals when using factory_boy with muted signals

I am using the factory_boy package and DjangoModelFactory to generate a factory for a model with muted signals:
@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    ...
I have a post_save signal connected to the model:
def send_notification(sender, instance, created, **kwargs):
    if created:
        send_email(...)

post_save.connect(send_notification, SomeModel)
How can I test that the signals work when I create an instance of the model using the factory class?
Some solutions for the direct question, followed by a caution.
A) Instead of turning off the signals, mock the side effects
@mock.patch('path.to.send_email')  # use the full dotted path where send_email is looked up
def test_mocking_signal_side_effects(self, mocked_send_email):
    my_obj = SomeModelTargetFactory()
    # mocked version of send_email was called
    self.assertEqual(mocked_send_email.call_count, 1)

    my_obj.foo = 'bar'
    my_obj.save()
    # didn't call send_email again
    self.assertEqual(mocked_send_email.call_count, 1)
Note: mock was a separate package before it joined the standard library in Python 3.3.
B) Use mute_signals as a context manager so you can selectively disable signals in your tests
This leaves the signals on by default, but lets you selectively disable them:
def test_without_signals(self):
    with factory.django.mute_signals(signals.post_save):
        my_obj = SomeModelTargetFactory()
        # ... perform actions w/o signals and assert ...
C) Mute signals in an extended version of the base factory
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    # ...

@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactoryNoSignals(SomeModelTargetFactory):
    pass
I've never tried this, but it seems like it should work. Additionally, if you just need the objects for a quick unit test where persistence isn't required, maybe factory_boy's build strategy is a viable option, as sketched below.
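For completeness, the build strategy looks like this; there is no database save, so save-related signals never fire at all:
obj = SomeModelTargetFactory.build()  # instance is created but not saved
assert obj.pk is None                 # nothing was written to the database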
Caution: muting signals, especially post_save, can hide nasty bugs
There are easily findable references about how using signals in your own code can create a false sense of decoupling (post_save, for example, is essentially the same as overriding and extending the save method). I'll let you research that to see if it applies to your use case.
I would definitely think twice about making it the default.
A safer approach is to "mute"/mock the receiver/side effect, not the sender.
The default Django model signals are used frequently by third party packages. Muting those can hide hard to track down bugs due to intra-package interaction.
Defining and calling (and then muting if needed) your own signals is better, but that is often just re-inventing a method call. Sentry is a good example of signals being used well in a large codebase.
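For illustration, a custom signal is only a few lines (all names here are invented):
import django.dispatch

# an explicit, app-level signal: easy to find and easy to mute or mock
# in tests without touching Django's built-in model signals
audit_saved = django.dispatch.Signal()

def save_audit(audit):
    audit.save()
    audit_saved.send(sender=audit.__class__, instance=audit)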
Solution A is by far the most explicit and safe. Solutions B and C, without the addition of your own signals, require care and attention.
I won't say there are no use cases for muting post_save entirely, but it should be the exception, and an alert to double-check the need in the first place.

Passing parameters to tearDown method

Suppose I have an entity that creates an SVN branch during its work. To perform functional testing I create multiple almost identical methods (I use the Python unittest framework, but the question applies to any test framework):
class Tester(unittest.TestCase):
    def test_valid1_url(self):
        url = "valid1"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_valid2_url(self):
        url = "valid2"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_invalid_url(self):
        url = "invalid"
        self.assertRaises(ValueError, BranchCreator().create_branch, url)
After each test I want to remove the resulting branch, or do nothing if the test failed. Ideally I would use something like the following:
@teardown_params(url='valid1')  # imaginary decorator
def test_valid1_url(self):
    ...

def tearDown(self, url):
    if url_exists(url): remove_branch(url)
But tearDown does not accept any parameters.
I see a few quite dirty solutions:
a) Create a field used_url in Tester, set it in every method, and use it in tearDown:
def test_valid1_url(self):
    self.used_url = "valid1"
    BranchCreator().create_branch(self.used_url)
    self.assertUrlExists(self.used_url)

...

def tearDown(self):
    if url_exists(self.used_url): remove_branch(self.used_url)
It should work because (at least in my environment) all tests run sequentially, so there would be no conflicts. But this solution violates the test independence principle because of the shared variable, and if I ever managed to launch the tests simultaneously, it would not work.
b) Use a separate method like cleanup(self, url) and call it from every method.
Is there any other approach?
I think solution b) could work, even though it forces you to call the helper method in every test, which sounds to me like a form of duplication.
Another approach would be calling the helper method inside the assertUrlExists function. That way the duplication is removed, and you can avoid checking the existence of the URL a second time just to manage the cleanup: you already have the assertion result and you can use it, as in the sketch below.
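A minimal sketch of that suggestion (url_exists and remove_branch are the asker's helpers):
class Tester(unittest.TestCase):
    def assertUrlExists(self, url):
        exists = url_exists(url)
        if exists:
            # reuse the check we already made instead of re-querying in
            # tearDown; a failed creation leaves nothing to remove
            remove_branch(url)
        self.assertTrue(exists, "branch %r was not created" % url)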

Setting metaclass of wrapped class with Boost.Python

I have an Event class defined in C++ that I expose to Python using Boost. My scripts are expected to derive from this class, and I'd like to do some initialization whenever a new child class is defined.
How can I set the metaclass of the exposed Event class such that whenever a Python script derives from this class, the metaclass could do the required initialization?
I would like to avoid having to explicitly use a metaclass in the scripts...
class KeyboardEvent(Event):  # This is what I want
    pass

class KeyboardEvent(Event, metaclass=EventMeta):  # This is not a good solution
    pass
Edit: Part of the solution
It seems there's no way to set the metaclass with Boost.Python. The next best thing is to improvise and change the metaclass after the class was defined. In native Python, the safe way to change a metaclass is to do this:
B = MetaClass(B.__name__, B.__bases__, dict(B.__dict__))  # type() needs a real dict, not B's dictproxy
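As a plain-Python illustration of that trick (EventMeta and its registry are hypothetical stand-ins for the real initialization):
class EventMeta(type):
    registry = []

    def __init__(cls, name, bases, namespace):
        super(EventMeta, cls).__init__(name, bases, namespace)
        # whatever per-class initialization is needed goes here
        EventMeta.registry.append(cls)

class Event(object):  # stand-in for the Boost-exposed class
    pass

# rebind Event to a copy created through the metaclass
Event = EventMeta(Event.__name__, Event.__bases__, dict(Event.__dict__))

class KeyboardEvent(Event):  # scripts stay metaclass-free
    pass

assert KeyboardEvent in EventMeta.registry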
In Boost, it'd look something like this:
BOOST_PYTHON_MODULE(event)
{
    using namespace boost::python;
    using boost::python::objects::add_to_namespace;

    class_<EventMetaClass> eventmeta("__EventMetaClass")
        ...;

    class_<Event> event("Event")
        ...;

    add_to_namespace(scope(), "Event",
        eventmeta(event.attr("__name__"), event.attr("__bases__"),
                  event.attr("__dict__")));
}
The problem is that I can't seem to find a way to define a metaclass with Boost.Python, which is why I've opened How to define a Python metaclass with Boost.Python?.
If Boost does not offer a way to do it from within C++, and it looks like it doesn't, the way to go is to create wrapper classes that implement the metaclass.
It can be done more or less automatically using a little bit of introspection. Let's suppose your Boost module is named "event": you should either name that file _event or place it inside your package, and write a Python file named "event.py" (or an __init__.py in your package) that does more or less this:
import _event

class eventmeta(type):
    ...  # the required per-class initialization goes here (__init__/__new__)

event_dict = globals()
for key, value in _event.__dict__.items():
    if isinstance(value, type):
        # recreate each native class with the new metaclass
        event_dict[key] = eventmeta(key, (value,), {})
    else:
        # set other module members as members of this module
        event_dict[key] = value
del key, value, event_dict
This code will automatically set module variables for any names found in the native "_event" module, and for each class it encounters it creates a new class with the changed metaclass, as in your example.
It may be that you get a metaclass conflict by doing this. If so, the way around it is to make the newly created classes proxies to the native classes, by creating proper __getattribute__ and __setattr__ methods. Just ask in a comment if you need to do that.