assert that a variable is a threading.Event instance (actually the protected class threading._Event)

I want to check that a variable e is an instance of threading.Event. However, when I create e, what actually gets created is an instance of the protected class threading._Event. For example:
import threading
e = threading.Event()
assert type(e) == threading.Event # raises AssertionError
assert type(e) == threading._Event # succeeds
Asserting that it is a protected class instance seems un-pythonic. Would assert type(e) == type(threading.Event()) be better? Would another option be better yet?

Have a look at this answer about subclassing threading.Event.
threading.Event is not a class; it's a function in threading.py:
def Event(*args, **kwargs):
    """A factory function that returns a new event.

    Events manage a flag that can be set to true with the set() method and
    reset to false with the clear() method. The wait() method blocks until
    the flag is true.
    """
    return _Event(*args, **kwargs)
Since this function returns an _Event instance, you can subclass _Event (although it's generally a bad idea to import and use underscored names):
from threading import _Event

class State(_Event):
    def __init__(self, name):
        super(State, self).__init__()  # note: State, not Event
        self.name = name

    def __repr__(self):
        return self.name + ' / ' + str(self.is_set())  # is_set() returns a bool
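A quick usage sketch of that subclass:

s = State('job-finished')
s.set()
print(repr(s))  # -> job-finished / True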
This was changed in Python 3, where Event is a real class:
class Event:
    """Class implementing event objects.

    Events manage a flag that can be set to true with the set() method and
    reset to false with the clear() method. The wait() method blocks until
    the flag is true. The flag is initially false.
    """

Related

abstractproperty + classmethod decorators in python

I want to force child classes to implement a classmethod in Python 2.7.
I tried this:
import abc

class Base(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractproperty
    def value(self):
        pass

    @abc.abstractproperty
    @classmethod
    def text(cls):
        pass

class Imp(Base):
    TEXT = "hi im text"

    @classmethod
    def haba(cls):
        print 'HI'

    @property
    def value(self):
        return 'asdasd'

    @classmethod
    @property
    def text(cls):
        return 'ho ho p'

print Imp.text
print Imp.TEXT
But I'm getting this output:
<bound method ABCMeta.? of <class '__main__.Imp'>>
hi im text
How can I properly force children to implement classmethod properties?
You can see that Imp.TEXT works, but there is no way to enforce the creation of this member from the base class this way.
After re-reading your question a few times, I concluded that you want a classmethod that behaves as if it were a property of the class itself.
First, Python's implementation of abstract method/property checking is meant to be performed at instantiation time only, not at class declaration. I hope you are aware of that.
Second, Python's descriptor protocol allows for the creation of the equivalent of "class properties", although there is no higher-level support for it in the language itself - you can create a class whose __get__ method returns your calculated property when the instance argument is None (usually descriptors return 'self' in that case, so that they can be retrieved from the class).
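A minimal sketch of that idea (the names here are mine, not from the question):

class ClassOnlyProperty(object):
    """Sketch of the descriptor described above."""
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, owner):
        if instance is None:
            return self.func(owner)  # accessed on the class: computed value
        return self                  # accessed on an instance: the descriptor

class Demo(object):
    @ClassOnlyProperty
    def label(cls):
        return "computed for " + cls.__name__

print(Demo.label)  # -> computed for Demo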
Finally, it is possible that by declaring a custom metaclass that is itself abstract, and then using it as your class's metaclass, abstract properties will trigger at runtime. Let's try that:
In [1]: import abc

In [2]: class AbsPropertyMeta(abc.ABC, type):
   ...:     @abc.abstractproperty
   ...:     def cl(cls):
   ...:         return "Ho ho ho"
   ...:

In [3]: class ConcreteExample(metaclass=AbsPropertyMeta):
   ...:     pass
   ...:
(Note that I will develop the answer using Python 3, which is what you should be using for any new project or when learning.)
As the example above shows, the property in the metaclass does work as a "class property", but Python does not enforce its redefinition in the class body.
So, if you really need this design, you should create a complete custom metaclass for that and let go of the abc.ABCMeta mechanisms entirely:
def abstractclassproperty(func):
    func._abstract_property = True
    return func

class clsproperty(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        return self.func(owner)

class ABCAbstractClsProperty(type):
    def __new__(mcls, name, bases, namespace, **kw):
        new_cls = super(ABCAbstractClsProperty, mcls).__new__(mcls, name, bases, namespace, **kw)
        for attr_name in dir(new_cls):  # dir() retrieves attributes from all superclasses
            attr = getattr(new_cls, attr_name)
            if getattr(attr, "im_func", None):  # Python 2 specific normalization
                attr = attr.im_func
            if getattr(attr, '_abstract_property', False) and new_cls.__dict__.get(attr_name) is not attr:
                raise TypeError("Can't create class {!r}: abstract property {!r} not implemented".format(name, attr_name))
        return new_cls

""" # Python 3:
class A(metaclass=ABCAbstractClsProperty):
    @abstractclassproperty
    def cl(cls):
        pass
"""

class A(object):
    __metaclass__ = ABCAbstractClsProperty

    @abstractclassproperty
    def cl(cls):
        pass

try:
    class B(A):
        pass
except TypeError:
    print("Check ok")

class C(A):
    @clsproperty
    def cl(cls):
        return "ho ho ho " + cls.__name__

print(C.cl)

Is the save() method committing changes asynchronously?

I have a simple model class in Django (v. 1.9.2) with Django Admin, like this:
from django.contrib.auth.models import User

class Foo(models.Model):
    ...
    users = models.ManyToManyField(User)
    bar = None
I have also overridden the save() method like this:
def save(self, *args, **kwargs):
    self.bar = 1
    async_method.delay(...)
    super(Foo, self).save(*args, **kwargs)
Here async_method is an asynchronous call to a task that will run on Celery, which takes the users field and will add some values to it.
At the same time, whenever a user is added to the ManyToManyField, I want to perform an action depending on the value of the bar field. For that, I have defined an m2m_changed signal handler:
def process_new_users(sender, instance, **kwargs):
    if kwargs['action'] == 'post_add':
        # Do some stuff
        print instance.bar

m2m_changed.connect(process_new_users, sender=Foo.users.through)
And here's the problem. Although I'm changing the value of bar inside the save() method, and before I call the asynchronous method, when process_new_users() is triggered, instance.bar is still None (its initial value).
I'm not sure if this is because the save() method commits changes asynchronously, so that when process_new_users() is triggered the changes have not yet been committed and it retrieves the old value, or if I'm missing something else.
Is my assumption correct? If so, is there a way to force the values in save() to be committed synchronously so I can then call the asynchronous method?
Note: Any alternative way of achieving this is also welcome.
UPDATE 1: Following @Gert's answer, I implemented a transaction.on_commit() trigger so that whenever the Foo instance is saved, I can safely call the asynchronous function afterwards. To do that I implemented this:
bar = models.BooleanField(default=False)  # bar has become a BooleanField

def call_async(self):
    async_method.delay(...)

def save(self, *args, **kwargs):
    self.bar = True
    super(Foo, self).save(*args, **kwargs)
    transaction.on_commit(lambda: self.call_async())
Unfortunately, this changes nothing. Instead of None I'm now getting False when I should be getting True in the m2m_changed signal.
You want to make sure that your database is up to date. In Django 1.9 there is the new transaction.on_commit hook, which can trigger Celery tasks.
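A minimal sketch of that pattern, assuming async_method is the Celery task from the question:

from django.db import models, transaction

class Foo(models.Model):
    bar = models.BooleanField(default=False)

    def save(self, *args, **kwargs):
        self.bar = True
        super(Foo, self).save(*args, **kwargs)
        # Run the Celery call only after the surrounding transaction
        # has actually committed, so the task sees the committed row.
        transaction.on_commit(lambda: async_method.delay(self.pk))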

Django, post_save signal recursion. How to bypass signal firing

I have a situation where, when one of my models (MyModel) is saved, I want to check a field and trigger the same change in any other MyModel instance with the same some_key.
The code works fine, but it recursively fires the signals. As a result I am wasting CPU/DB/API calls. I basically want to bypass the signals during the .save(). Any suggestions?
class MyModel(models.Model):
    # bah
    some_field = #
    some_key = #

# in package code __init__.py
@receiver(post_save, sender=MyModel)
def my_model_post_processing(sender, **kwargs):
    # do some unrelated logic...
    logic = 'fun! '

    # if something has changed... update any other field with the same id
    cascade_update = MyModel.objects.exclude(id=sender.id).filter(some_key=sender.some_key)
    for c in cascade_update:
        c.some_field = sender.some_field
        c.save()
Disconnect the signal before calling save and then reconnect it afterwards:
post_save.disconnect(my_receiver_function, sender=MyModel)
instance.save()
post_save.connect(my_receiver_function, sender=MyModel)
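If you use this pattern in several places, a small context manager keeps it tidy. A sketch; note that the signal is process-global state, so this is not safe if another thread saves the same model concurrently:

from contextlib import contextmanager

@contextmanager
def signal_disconnected(signal, receiver, sender):
    # Temporarily disconnect the receiver; reconnect even if saving fails.
    signal.disconnect(receiver, sender=sender)
    try:
        yield
    finally:
        signal.connect(receiver, sender=sender)

# Usage:
# with signal_disconnected(post_save, my_receiver_function, MyModel):
#     instance.save()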
Disconnecting the signal is not a DRY or consistent solution compared with alternatives such as using update() instead of save().
To bypass signal firing on your model, a simple way to go is to set an attribute on the current instance to prevent upcoming signals firing.
This can be done using a simple decorator that checks if the given instance has the 'skip_signal' attribute, and if so prevents the method from being called:
from functools import wraps

def skip_signal(signal_func):
    @wraps(signal_func)
    def _decorator(sender, instance, **kwargs):
        if hasattr(instance, 'skip_signal'):
            return None
        return signal_func(sender, instance, **kwargs)
    return _decorator
Based on your example, that gives us:
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=MyModel)
@skip_signal
def my_model_post_save(sender, instance, **kwargs):
    instance.some_field = my_value
    # Here we flag the instance with 'skip_signal'
    # and my_model_post_save won't be called again
    # thanks to our decorator, avoiding any signal recursion
    instance.skip_signal = True
    instance.save()
Hope this helps.
A solution may be to use the update() method to bypass the signal:
cascade_update = MyModel.objects.exclude(
    id=sender.id
).filter(
    some_key=sender.some_key
).update(
    some_field=sender.some_field
)
"Be aware that the update() method is converted directly to an SQL statement. It is a bulk operation for direct updates. It doesn't run any save() methods on your models, or emit the pre_save or post_save signals"
You could move the related-objects update code into the MyModel.save method. No playing with signals is needed then:
class MyModel(models.Model):
    some_field = #
    some_key = #

    def save(self, *args, **kwargs):
        cascade = kwargs.pop('cascade', True)
        super(MyModel, self).save(*args, **kwargs)
        if cascade:
            for c in MyModel.objects.exclude(id=self.id).filter(some_key=self.some_key):
                c.some_field = self.some_field
                c.save(cascade=False)  # prevent each cascaded save from re-triggering the loop

django: recursion using post-save signal

Here's the situation:
Let's say I have a model A in Django. When I'm saving an object (of class A) I need to save its fields into all other objects of this class. I mean, I need every other A object to be a copy of the last saved one.
When I use signals (post-save, for example) I get recursion (the objects try to save each other, I guess) and my Python dies.
I mean, I expected that using the .save() method on the same class in a pre/post-save signal would cause recursion, but I just don't know how to avoid it.
What do we do?
@ShawnFumo Disconnecting a signal is dangerous if the same model is saved elsewhere at the same time, don't do that!
@Aram Dulyan, your solution works but prevents you from using signals, which are so powerful!
If you want to avoid recursion and keep using signals, a simple way to go is to set an attribute on the current instance to prevent upcoming signals firing.
This can be done using a simple decorator that checks if the given instance has the 'skip_signal' attribute, and if so prevents the method from being called:
from functools import wraps

def skip_signal():
    def _skip_signal(signal_func):
        @wraps(signal_func)
        def _decorator(sender, instance, **kwargs):
            if hasattr(instance, 'skip_signal'):
                return None
            return signal_func(sender, instance, **kwargs)
        return _decorator
    return _skip_signal
We can now use it this way:
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=MyModel)
@skip_signal()
def my_model_post_save(sender, instance, **kwargs):
    # your processing
    pass

m = MyModel()
# Here we flag the instance with 'skip_signal'
# and my_model_post_save won't be called
# thanks to our decorator, avoiding any signal recursion
m.skip_signal = True
m.save()
Hope this helps.
This will work:
class YourModel(models.Model):
    name = models.CharField(max_length=50)

    def save_dupe(self):
        super(YourModel, self).save()

    def save(self, *args, **kwargs):
        super(YourModel, self).save(*args, **kwargs)
        for model in YourModel.objects.exclude(pk=self.pk):
            model.name = self.name
            # Repeat the above for all your other fields
            model.save_dupe()
If you have a lot of fields, you'll probably want to iterate over them when copying them to the other model. I'll leave that to you.
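For instance, a sketch of that iteration using _meta to enumerate the concrete non-primary-key fields instead of naming each one:

def save(self, *args, **kwargs):
    super(YourModel, self).save(*args, **kwargs)
    # Copy every concrete, non-primary-key field to the other rows.
    field_names = [f.name for f in self._meta.fields if not f.primary_key]
    for model in YourModel.objects.exclude(pk=self.pk):
        for name in field_names:
            setattr(model, name, getattr(self, name))
        model.save_dupe()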
Another way to handle this is to remove the listener while saving. So:
from django.db.models.signals import post_save

class Foo(models.Model):
    ...

def foo_post_save(sender, instance, **kwargs):
    post_save.disconnect(foo_post_save, sender=Foo)
    do_stuff_to_saved_instance(instance)
    instance.save()
    post_save.connect(foo_post_save, sender=Foo)

post_save.connect(foo_post_save, sender=Foo)

How do I use Django signals with an abstract model?

I have an abstract model that keeps an on-disk cache. When I delete the model, I need it to delete the cache. I want this to happen for every derived model as well.
If I connect the signal specifying the abstract model, this does not propagate to the derived models:
pre_delete.connect(clear_cache, sender=MyAbstractModel, weak=False)
If I try to connect the signal in __init__, where I can get the derived class name, it works, but I'm afraid it will attempt to clear the cache as many times as I've initialized a derived model, not just once.
Where should I connect the signal?
Building upon Justin Abrahms' answer, I've created a custom manager that binds a post_save signal to every child of a class, be it abstract or not.
This is some one-off, poorly tested code and is therefore provided with no warranties! It seems to work, though.
In this example, we allow an abstract model to define CachedModelManager as a manager, which then extends basic caching functionality to the model and its children. It allows you to define a list of volatile keys (a class attribute called volatile_cache_keys) that should be deleted upon every save (hence the post_save signal) and adds a couple of helper functions to generate cache keys, as well as retrieving, setting and deleting keys.
This of course assumes you have a cache backend set up and working properly.
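For reference, a minimal cache configuration in settings.py (the local-memory backend here is just an example; any properly configured backend works):

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}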
# helperapp\models.py
# -*- coding: UTF-8 -*-
from django.db import models
from django.core.cache import cache

class CachedModelManager(models.Manager):
    def contribute_to_class(self, model, name):
        super(CachedModelManager, self).contribute_to_class(model, name)
        setattr(model, 'volatile_cache_keys',
                getattr(model, 'volatile_cache_keys', []))
        setattr(model, 'cache_key', getattr(model, 'cache_key', cache_key))
        setattr(model, 'get_cache', getattr(model, 'get_cache', get_cache))
        setattr(model, 'set_cache', getattr(model, 'set_cache', set_cache))
        setattr(model, 'del_cache', getattr(model, 'del_cache', del_cache))
        self._bind_flush_signal(model)

    def _bind_flush_signal(self, model):
        models.signals.post_save.connect(flush_volatile_keys, model)

def flush_volatile_keys(sender, **kwargs):
    instance = kwargs.pop('instance', False)
    for key in instance.volatile_cache_keys:
        instance.del_cache(key)

def cache_key(instance, key):
    if not instance.pk:
        name = "%s.%s" % (instance._meta.app_label, instance._meta.module_name)
        raise models.ObjectDoesNotExist("Can't generate a cache key for " +
                                        "this instance of '%s' " % name +
                                        "before defining a primary key.")
    else:
        return "%s.%s.%s.%s" % (instance._meta.app_label,
                                instance._meta.module_name,
                                instance.pk, key)

def get_cache(instance, key):
    result = cache.get(instance.cache_key(key))
    return result

def set_cache(instance, key, value, timeout=60*60*24*3):
    result = cache.set(instance.cache_key(key), value, timeout)
    return result

def del_cache(instance, key):
    result = cache.delete(instance.cache_key(key))
    return result
# myapp\models.py
from django.contrib.auth.models import User
from django.db import models

from helperapp.models import CachedModelManager

class Abstract(models.Model):
    creator = models.ForeignKey(User)

    cache = CachedModelManager()

    class Meta:
        abstract = True

class Community(Abstract):
    members = models.ManyToManyField(User)

    volatile_cache_keys = ['members_list']

    @property
    def members_list(self):
        result = self.get_cache('members_list')
        if not result:
            result = self.members.all()
            self.set_cache('members_list', result)
        return result
Patches welcome!
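A quick usage sketch (some_user is an assumed existing User; note that the manager above is named cache, so there is no default objects manager here):

community = Community.cache.create(creator=some_user)
print(community.members_list)  # cache miss: queries the DB, then caches
print(community.members_list)  # served from the cache
community.save()               # post_save flushes 'members_list'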
I think you can connect to post_delete without specifying a sender, and then check whether the actual sender is in a list of your model classes. Something like:
from django.db.models import get_models

def my_handler(sender, **kwargs):
    if sender in get_models(someapp.models):  # sender is already the model class
        ...
Obviously you'll need more sophisticated checking etc, but you get the idea.
Create a custom manager for your model. In its contribute_to_class method, have it connect a handler to the class_prepared signal. That handler in turn binds more signals to the model.
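A sketch of that approach; the names are illustrative, not from an existing library:

from django.db import models
from django.db.models.signals import class_prepared, pre_delete

def clear_cache(sender, instance, **kwargs):
    pass  # hypothetical: delete this instance's on-disk cache

def bind_signals(sender, **kwargs):
    # Runs once per concrete model class, including every derived model.
    pre_delete.connect(clear_cache, sender=sender, weak=False)

class CacheManager(models.Manager):
    def contribute_to_class(self, model, name):
        super(CacheManager, self).contribute_to_class(model, name)
        class_prepared.connect(bind_signals, sender=model)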