Triggering post_save signal only after transaction has completed - django

I have written some APIs whose functions execute inside a transaction block. I call the save() method (after some modifications) on instances of one or several models, and I also consecutively index some JSON-related information about those instances in Elasticsearch. I want the database to roll back if, for some reason, the save() for one of the instances or the indexing to Elasticsearch fails.
Now, the problem is that even inside the transaction block, the post_save signal gets called, and that is an issue because some notifications are triggered from those signal handlers.
Is there a way to trigger post_save signals only after the transaction has completed successfully?

I think the simplest way is to use transaction.on_commit(). Here's an example using the models.Model subclass Photo that will only talk to Elasticsearch once the current transaction is over:
from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Photo)
def save_photo(**kwargs):
    transaction.on_commit(lambda: talk_to_elasticsearch(kwargs['instance']))
Note that if transaction.on_commit() is called while not in an active transaction, the callback will run right away.
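A small demonstration of that behaviour (a sketch; it assumes a configured Django project with a default database connection):

from django.db import transaction

def demo():
    with transaction.atomic():
        # registered inside a transaction: deferred until commit
        transaction.on_commit(lambda: print("after commit"))
        print("still inside the transaction")
    # no active transaction here: the callback runs immediately
    transaction.on_commit(lambda: print("runs right away"))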

Not really. The signals have nothing to do with the success or failure of the db transaction, but with the save method itself: the pre_save signal is fired before the call and the post_save signal is fired after it.
There are 2 approaches here:
you are going to inspect the instance in the post_save handler and decide whether the model was saved successfully or not; the simplest way to do that: in the save method, after the transaction has executed successfully, annotate your instance with a flag, say instance.saved_successfully = True, which you then test in the post_save handler.
you are going to ditch the post_save signal and create a custom signal for yourself, which you will trigger after the transaction has run successfully (a sketch follows below).
Makes sense?
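A minimal sketch of the second approach; the Photo model and the index_in_elasticsearch() helper are illustrative stand-ins, not names from the question:

from django.db import transaction
import django.dispatch

# custom signal, sent only after a successful commit
post_commit_save = django.dispatch.Signal()

def create_photo(**fields):
    with transaction.atomic():
        photo = Photo.objects.create(**fields)
        index_in_elasticsearch(photo)  # any exception here rolls everything back
    # this line is reached only if the atomic block committed
    post_commit_save.send(sender=Photo, instance=photo)
    return photo

Receivers connect to post_commit_save just like to any built-in signal, but they fire only for committed data.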
P.S.
If you strictly need to bind to the transaction commit, have a look at this package: https://django-transaction-hooks.readthedocs.org/en/latest/; it looks like the functionality was integrated into Django 1.9a.

I was having serious issues with Django's admin not allowing post_save transactions on parent objects when they had inline children being modified.
This was my solution to an error complaining about conducting queries in the middle of an atomic block:
from django.db import transaction

def on_user_post_save_impl(user):
    do_something_to_the_user(user)

def on_user_post_save(sender, instance, **kwargs):
    if not transaction.get_connection().in_atomic_block:
        on_user_post_save_impl(instance)
    else:
        transaction.on_commit(lambda: on_user_post_save_impl(instance))

We are using this little nugget:
from django.db import transaction
from django.db.models.signals import post_save

def atomic_post_save(sender, instance, **kwargs):
    if hasattr(instance, "atomic_post_save") and transaction.get_connection().in_atomic_block:
        transaction.on_commit(lambda: instance.atomic_post_save(sender, instance=instance, **kwargs))

post_save.connect(atomic_post_save)
Then we simply define an atomic_post_save method on any model we like:
class MyModel(Model):
    def atomic_post_save(self, sender, created, **kwargs):
        talk_to_elasticsearch(self)
Two things to notice:
We only call atomic_post_save when inside a transaction.
From inside atomic_post_save, it's too late in the flow to send messages and have them included in the current request.

Related

Django's m2m_changed trigger doesn't trigger properly

I have a model that looks as follows, and I wish to trigger a method every time the user_ids field gets changed. Using the post_save signal obviously didn't do anything, as ManyToMany relationships are special in that way.
class Lease(models.Model):
    unit = models.ForeignKey(Unit, on_delete=models.CASCADE)
    user_ids = models.ManyToManyField('user.User')
Using the m2m_changed trigger as follows also didn't do anything, which got me puzzled. I don't really understand what is wrong with this code, having also tried to leave the '.user_ids' out. There are no errors or anything; it just doesn't trigger when the user_ids on the Lease model are changed.
@receiver(m2m_changed, sender=Lease.user_ids)
def update_user_unit(sender, instance, **kwargs):
    print('Test')
Reading the documentation, I suppose the sender should be the intermediate model, not the ManyToMany field itself. Try this:
@receiver(m2m_changed, sender=Lease.user_ids.through)
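Put together, a sketch of the corrected receiver; the action check is an optional refinement, since m2m_changed fires for the pre_ and post_ variants of add, remove, and clear:

from django.db.models.signals import m2m_changed
from django.dispatch import receiver

@receiver(m2m_changed, sender=Lease.user_ids.through)
def update_user_unit(sender, instance, action, **kwargs):
    # react only once the relation has actually changed
    if action in ('post_add', 'post_remove', 'post_clear'):
        print('Test')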

How to trigger Django's pre_delete signal?

I have a User model and a UserImage model that contains a foreign key to a User. The foreign key is set to CASCADE delete.
Here is what the receivers look like in my models.py:
@receiver(pre_delete, sender=User)
def deleteFile(sender, instance, **kwargs):
    print("User pre_delete triggered")
    instance.thumbnail.delete()

@receiver(pre_delete, sender=UserImage)
def deleteFile(sender, instance, **kwargs):
    print("UserImage pre_delete triggered")
    instance.image.delete()
When I execute the following lines of code:
>>> User.objects.last().delete()
"UserImage pre_delete triggered"
For some reason the associated UserImage signal is being received but the actual User model's signal is not.
Am I missing something?
If you read the documentation carefully, you will see that the delete() method on a model will execute purely in SQL (if possible). So the delete() method on UserImage will not be called by Django, and thus the signal will not be triggered. If you want it to be triggered, you could override the delete method on your User model to also call delete() on the related objects. Something like this:
class User(models.Model):
    def delete(self, using=None):
        self.userimage_set.all().delete()
        super().delete(using=using)
UPDATE:
I did not read the question correctly, so I have to update my answer. I think what is happening is that both receiver functions have the same name, so the first one is overwritten by the second and only the second one is executed. I would suggest renaming one of the functions and seeing if that changes things.
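A sketch of that suggestion, with a distinct name per receiver (signal connections hold weak references by default, so rebinding the name deleteFile can leave only the second receiver alive):

from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=User)
def delete_user_thumbnail(sender, instance, **kwargs):
    print("User pre_delete triggered")
    instance.thumbnail.delete()

@receiver(pre_delete, sender=UserImage)
def delete_user_image(sender, instance, **kwargs):
    print("UserImage pre_delete triggered")
    instance.image.delete()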

Django signals dispatch_uid

I have a question regarding the usage of dispatch_uid for signals.
Currently, I am preventing multiple executions of the handler by simply adding if not instance.order_reference. I wonder now whether dispatch_uid has the same functionality, so that I can delete the if not clause.
signals.py
def reserveditem_create_order_reference(sender, instance, **kwargs):
    if not instance.order_reference:
        instance.order_reference = unique_order_reference_generator()
app.py
class OrdersConfig(AppConfig):
    name = 'orders'

    def ready(self):
        # pre_save signal for the ReservedItem model
        reserved_model = self.get_model('ReservedItem')
        pre_save.connect(
            reserveditem_create_order_reference,
            sender=reserved_model,
            dispatch_uid="my_unique_identifier",
        )
As per the docs:
In some circumstances, the code connecting receivers to signals may run multiple times. This can cause your receiver function to be registered more than once, and thus called multiple times for a single signal event. If this behavior is problematic (such as when using signals to send an email whenever a model is saved), pass a unique identifier as the dispatch_uid argument to identify your receiver function
So yes, setting a unique dispatch_uid will prevent your handler from being connected (and thereafter, called) more than once for a single signal event. Note, though, that dispatch_uid only deduplicates registration: the receiver still fires on every save of an instance, so if order_reference must never be regenerated on a later save, the if not instance.order_reference guard is still worth keeping.
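To illustrate the deduplication (a sketch, assuming the ReservedItem model is importable directly; the repeated connect call stands in for registration code that happens to run twice):

from django.db.models.signals import pre_save

# the second connect is a no-op because the dispatch_uid matches,
# so the handler still runs only once per save
pre_save.connect(reserveditem_create_order_reference,
                 sender=ReservedItem, dispatch_uid="my_unique_identifier")
pre_save.connect(reserveditem_create_order_reference,
                 sender=ReservedItem, dispatch_uid="my_unique_identifier")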

object is not in the database when django post_save is running?

I'm not sure whether what this ticket is talking about is what I'm experiencing.
https://code.djangoproject.com/ticket/14051
MyClass.objects.create() calls the post_save() handlers, and inside one of the handlers I do MyClass.objects.filter(id=instance.id) and it returns nothing.
So when you are inside a post_save signal handler, your instance is not yet found in the DB because it's not committed yet? Is this true?
This is the create part:
thread = ReviewThread.objects.create(**validated_data)
Each ReviewThread has a review_meta foreign key, and the related_name is 'review_threads':
class ReviewThread(forum_models.Thread):
    thread = models.OneToOneField(forum_models.Thread, parent_link=True)
    review_meta = models.ForeignKey(ReviewMeta, related_name='review_threads')
This is the receiving part:
def maybe_update_review_meta_primary_image(review_thread):
    ReviewThread.objects.filter(id=review_thread.id)  # returns nothing
    # ...

@receiver(post_save, sender=ReviewThread)
def update_review_meta(sender, instance, **kwargs):
    review_thread = instance
    maybe_update_review_meta_primary_image(review_thread)
If you use transactions, then yes, it is true, and it is pretty valid behaviour.
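If that is the situation, the transaction.on_commit() technique from the first answer above applies here as well; a sketch:

from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=ReviewThread)
def update_review_meta(sender, instance, **kwargs):
    # defer the work until the row is committed and visible
    transaction.on_commit(
        lambda: maybe_update_review_meta_primary_image(instance)
    )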

Modifying Django's pre_save/post_save Data

I'm having a hard time grasping these pre_save/post_save signals from Django.
What happens is that my model has a field called status, and when an entry in this model is added/saved, its status must be changed according to some condition.
My model looks like this:
class Ticket(models.Model):
    (...)
    status = models.CharField(max_length=1, choices=OFFERT_STATUS, default='O')
And my signal handler, configured for pre_save:
def ticket_handler(sender, **kwargs):
    ticket = kwargs['instance']
    (...)
    if someOtherCondition:
        ticket.status = 'C'
Now, if I put a ticket.save() just below that last if statement, I get a huge recursion black hole, since the call triggers the signal itself. And this problem happens with both pre_save and post_save.
Well... I guess that the ability to alter an entry before (or even after) saving it is pretty common in Django's universe. So, what am I doing wrong here? Are signals the wrong approach, or am I missing something else?
Also, once this pre_save/post_save function is triggered, would it be possible to access another model's instance and change a specific row entry on that?
Thanks
Signals are not a proper approach for updating fields in the same model prior to save. Override the model's save method instead of using a signal for this case.
def save(self, force_insert=False, force_update=False):
    self.status = whatever....
    super(Ticket, self).save(force_insert, force_update)
For updates to other models, signals are a great approach because you can easily decouple your models. Specifically, you can add a pre_/post_save signal to trigger actions without the need to modify the code of the saved model (which could reside in another application from a third party).
I agree with Carles that this may belong in save(). When you must do this with signals, make sure you have a very tight condition around the save(). Your test could be rewritten as:
if someOtherCondition and ticket.status != 'C':
    ticket.status = 'C'
    ticket.save()
You won't get into infinite recursion with the test done this way.
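One more thing worth noting: with pre_save specifically, mutating the instance is enough, because the save that triggered the signal has not happened yet; no extra save() call, and hence no recursion, is needed. A sketch reusing the question's names:

from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=Ticket)
def ticket_handler(sender, instance, **kwargs):
    if someOtherCondition:       # condition from the question
        instance.status = 'C'    # persisted by the save already in progress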