I have a User model and a UserImage model that contains a foreign key to a User. The foreign key is set to CASCADE delete.
Here is what the receivers look like in my models.py:
@receiver(pre_delete, sender=User)
def deleteFile(sender, instance, **kwargs):
    print("User pre_delete triggered")
    instance.thumbnail.delete()

@receiver(pre_delete, sender=UserImage)
def deleteFile(sender, instance, **kwargs):
    print("UserImage pre_delete triggered")
    instance.image.delete()
When I execute the following lines of code:
>>> User.objects.last().delete()
"UserImage pre_delete triggered"
For some reason the associated UserImage signal is being received but the actual User model's signal is not.
Am I missing something?
If you read the documentation carefully, you will see that delete() is, where possible, executed purely in SQL. So the delete() method on UserImage will not be called by Django, and thus the signal will not be triggered. If you want it to be triggered, you could override the delete() method on your User model to also delete the related objects. Something like this:
class User(models.Model):
    def delete(self, using=None, keep_parents=False):
        self.userimage_set.all().delete()
        super().delete(using=using, keep_parents=keep_parents)
UPDATE:
I did not read the question correctly, so I have to update my answer. I think what is happening is that both receivers have the same name, so the first function is overwritten by the second and only the second one stays connected (signal connections hold weak references by default, so once the name deleteFile is rebound, the first function can be garbage collected and its connection dropped). I would suggest renaming one of the functions and seeing if that changes things.
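For example, a minimal sketch with distinct receiver names (assuming the same User and UserImage models as above; this is a guess at the fix, not tested against your code):

from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=User)
def delete_user_thumbnail(sender, instance, **kwargs):
    # A distinct name, so this receiver is no longer shadowed.
    print("User pre_delete triggered")
    instance.thumbnail.delete()

@receiver(pre_delete, sender=UserImage)
def delete_user_image(sender, instance, **kwargs):
    print("UserImage pre_delete triggered")
    instance.image.delete()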
Related
I have a model that looks as follows, and I wish to trigger a method every time the user_ids field gets changed. Using the post_save signal obviously didn't do anything, as ManyToMany relationships are special in that way.
class Lease(models.Model):
    unit = models.ForeignKey(Unit, on_delete=models.CASCADE)
    user_ids = models.ManyToManyField('user.User')
Using the m2m_changed signal as follows also didn't do anything, which has me puzzled. I don't really understand what is wrong with this code; I have also tried leaving the '.user_ids' out. There are no errors or anything, it just doesn't trigger when the user_ids on the Lease model are changed.
@receiver(m2m_changed, sender=Lease.user_ids)
def update_user_unit(sender, instance, **kwargs):
    print('Test')
Reading the documentation, I suppose the sender should be the intermediate model, not the ManyToMany field itself. Try this:
@receiver(m2m_changed, sender=Lease.user_ids.through)
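A fuller sketch of the corrected receiver (the action filter is an assumption about which events you care about; m2m_changed also fires with pre_add, pre_remove, and so on):

from django.db.models.signals import m2m_changed
from django.dispatch import receiver

@receiver(m2m_changed, sender=Lease.user_ids.through)
def update_user_unit(sender, instance, action, **kwargs):
    # React only once the change has actually been applied.
    if action in ("post_add", "post_remove", "post_clear"):
        print('Test')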
Let's say I have a model called BookModel with 4 fields: title, author, price, and publish_year.
And I have a handler in signals:
@receiver([post_save, post_delete], sender=BookModel)
def signal_handler(sender, instance, **kwargs):
    ...
The question is how to detect that a specific model field has changed during save(). For example, if price has changed I want to do stuff. It's easier to explain in pseudocode...
@receiver([post_save, post_delete], sender=BookModel)
def signal_handler(sender, instance, **kwargs):
    # pseudocode below
    if field "price" has changed:
        do stuff
    else:
        do nothing
According to the docs it is possible if I use update_fields in save(), but what if I don't use it?
Also, is it possible to tell whether the signal came from post_save or from post_delete while still using one handler?
@receiver([post_save, post_delete], sender=BookModel)
def signal_handler(sender, instance, **kwargs):
    # pseudocode below
    if signal is post_save:
        if field "price" has changed:
            do stuff
        else:
            do nothing
    else:
        do other stuff
Thanks
You can try django-model-utils' FieldTracker to track changes in model fields. It also works with the post_save signal.
Checking changes using signals
The field tracker methods may also be used in pre_save and post_save signal handlers to identify field changes on model save.
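A minimal sketch of both ideas together: FieldTracker answers the "has price changed?" question, and the signal keyword argument (which the dispatcher passes to every receiver) distinguishes post_save from post_delete. The field definitions here are assumptions:

from django.db import models
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from model_utils import FieldTracker

class BookModel(models.Model):
    title = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=8, decimal_places=2)
    tracker = FieldTracker()  # records previous field values

@receiver([post_save, post_delete], sender=BookModel)
def signal_handler(sender, instance, signal, **kwargs):
    if signal is post_save:
        if instance.tracker.has_changed('price'):
            pass  # do stuff
    else:
        pass  # do other stuff (post_delete)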
I have written some APIs whose functions execute inside a transaction block. I call save() (after some modifications) on instances of one or several models, and then index some JSON-related information about those instances in Elasticsearch. I want the database to roll back if, for some reason, the save() for one of the instances fails, or the indexing in Elasticsearch fails.
Now, the problem is that even inside the transaction block the post_save signals get called, and that is an issue because some notifications are triggered from those signals.
Is there a way to trigger the post_save signals only after the transaction has completed successfully?
I think the simplest way is to use transaction.on_commit(). Here's an example using the models.Model subclass Photo that will only talk to Elasticsearch once the current transaction is over:
from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Photo)
def save_photo(**kwargs):
    transaction.on_commit(lambda: talk_to_elasticsearch(kwargs['instance']))
Note that if transaction.on_commit() is called while no transaction is active, the callback runs right away.
Not really. The signals have nothing to do with the success or failure of the database transaction, but with the save method itself: before the call the pre_save signal is fired, and after the call the post_save signal is fired.
There are 2 approaches here:
you are going to inspect the instance in the post_save handler and decide whether the model was saved successfully; the simplest way to do that: in the save method, after the transaction has executed successfully, annotate your instance with a flag, say instance.saved_successfully = True, which you then test in the post_save handler.
you are going to ditch the post_save signal and create a custom signal for yourself, which you trigger after the transaction has run successfully (see the sketch below).
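A minimal sketch of the custom-signal approach; the signal name photo_committed, the Photo model, and the index_in_elasticsearch and send_notifications helpers are all assumptions:

import django.dispatch
from django.db import transaction
from django.dispatch import receiver

# Hypothetical custom signal, sent only after a successful commit.
photo_committed = django.dispatch.Signal()

@receiver(photo_committed)
def notify_subscribers(sender, instance, **kwargs):
    send_notifications(instance)  # hypothetical notification helper

def save_photo_and_index(photo):
    with transaction.atomic():
        photo.save()
        index_in_elasticsearch(photo)  # hypothetical indexing helper
    # Only reached if the atomic block committed without raising.
    photo_committed.send(sender=type(photo), instance=photo)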
Makes sense?
P.S.
If you strictly need to bind to the transaction commit signal, have a look at this package: https://django-transaction-hooks.readthedocs.org/en/latest/; it looks like the functionality has been integrated into Django 1.9a.
I was having serious issues with Django's admin not allowing post_save queries on parent objects when they had inline children being modified.
This was my solution to an error complaining about conducting queries in the middle of an atomic block:
from django.db import transaction

def on_user_post_save_impl(user):
    do_something_to_the_user(user)

def on_user_post_save(sender, instance, **kwargs):
    if not transaction.get_connection().in_atomic_block:
        # No transaction in progress: safe to run right away.
        on_user_post_save_impl(instance)
    else:
        # Defer until the surrounding transaction commits.
        transaction.on_commit(lambda: on_user_post_save_impl(instance))
We are using this little nugget:
from django.db import transaction
from django.db.models.signals import post_save

def atomic_post_save(sender, instance, **kwargs):
    if hasattr(instance, "atomic_post_save") and transaction.get_connection().in_atomic_block:
        transaction.on_commit(lambda: instance.atomic_post_save(sender, instance=instance, **kwargs))

post_save.connect(atomic_post_save)
Then we simply define an atomic_post_save method on any model we like:
class MyModel(Model):
    def atomic_post_save(self, sender, created, **kwargs):
        talk_to_elasticsearch(self)
Two things to notice:
We only call atomic_post_save when inside a transaction.
From inside atomic_post_save it's too late in the flow to send messages and have them included in the current request.
I'm overriding Django's model delete() method in order to delete orphan files on disk for image fields, something like this:
class Image(models.Model):
    img = models.ImageField(upload_to=get_image_path)
    ...

    def delete(self, *args, **kwargs):
        self.img.delete()
        super(Image, self).delete(*args, **kwargs)
This works fine when I delete single objects from the admin, but when I select multiple objects and delete them, this doesn't seem to get called. I have been googling for a while but haven't hit the right keywords to get the answer, nor does the official documentation seem to cover this subject.
It does:
The delete() method does a bulk delete and does not call any delete() methods on your models. It does, however, emit the pre_delete and post_delete signals for all deleted objects (including cascaded deletions).
For that to work, you can override the delete() method on a QuerySet, and then attach that QuerySet as the manager:
class ImageQuerySet(models.QuerySet):
    def delete(self, *args, **kwargs):
        for obj in self:
            obj.img.delete()
        super(ImageQuerySet, self).delete(*args, **kwargs)

class Image(models.Model):
    objects = ImageQuerySet.as_manager()
    img = models.ImageField(upload_to=get_image_path)
    ...

    def delete(self, *args, **kwargs):
        self.img.delete()
        super(Image, self).delete(*args, **kwargs)
I believe this issue is addressed in the docs, where it says:
Overridden model methods are not called on bulk operations
Note that the delete() method for an object is not necessarily called when deleting objects in bulk using a QuerySet or as a result of a cascading delete. To ensure customized delete logic gets executed, you can use pre_delete and/or post_delete signals.
Unfortunately, there isn’t a workaround when creating or updating objects in bulk, since none of save(), pre_save, and post_save are called.
As suggested in the docs above, I believe a better solution is to use the post_delete signal, like so:
from django.db.models.signals import post_delete
from django.dispatch import receiver

class Image(models.Model):
    img = models.ImageField(upload_to=get_image_path)
    ...

@receiver(post_delete, sender=Image)
def delete_image_hook(sender, instance, using, **kwargs):
    instance.img.delete()
Unlike overriding the delete method, the delete_image_hook function should be called on bulk deletes and cascading deletes as well. Here is more information on using Django's Signals: https://docs.djangoproject.com/en/1.11/topics/signals/#connecting-to-signals-sent-by-specific-senders
Note on previous answers:
Some of the earlier posts suggest overriding the delete method of QuerySet, which may have performance implications and other unintended behavior. Perhaps those answers were written before Django's Signals were implemented, but I think using Signals is a cleaner approach.
The delete() method of a queryset operates directly on the database and does not call Model.delete(). From the docs:
Keep in mind that this will, whenever possible, be executed purely in SQL, and so the delete() methods of individual object instances will not necessarily be called during the process. If you’ve provided a custom delete() method on a model class and want to ensure that it is called, you will need to “manually” delete instances of that model (e.g., by iterating over a QuerySet and calling delete() on each object individually) rather than using the bulk delete() method of a QuerySet.
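For instance, a short sketch of the manual approach the docs describe, assuming the Image model from the question:

# Iterating calls each instance's (possibly overridden) delete().
for image in Image.objects.all():
    image.delete()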
If you want to override the Django administration interface's default behavior, you can write a custom delete action:
https://docs.djangoproject.com/en/dev/ref/contrib/admin/actions/
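A minimal sketch of such an action, assuming the Image model from the question and Django 3.2+ for the admin.action decorator (the action name and description are made up):

from django.contrib import admin

class ImageAdmin(admin.ModelAdmin):
    actions = ['delete_images']

    @admin.action(description="Delete selected images and their files")
    def delete_images(self, request, queryset):
        for obj in queryset:
            obj.delete()  # calls the overridden Model.delete()

admin.site.register(Image, ImageAdmin)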
Another method is to use the post_delete (or pre_delete) signal instead of overriding the delete method:
https://docs.djangoproject.com/en/dev/ref/signals/#django.db.models.signals.post_delete
Like pre_delete, but sent at the end of a model’s delete() method and a queryset’s delete() method.
The accepted answer may not work for everyone. I couldn't get it to work on Django 3.2, but that may only be because I already had a custom manager and was not confident that I could combine my customizations to a models.Manager with customizations to a models.QuerySet.
I found that overriding delete_queryset (available in Django 2.1+) on the model's admin (as described in this thorough and fully-illustrated answer from user Kushan Gunasekera to another related SO question) was quick and easy.
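A minimal sketch of that override, again assuming the Image model from the question:

from django.contrib import admin

@admin.register(Image)
class ImageAdmin(admin.ModelAdmin):
    def delete_queryset(self, request, queryset):
        # Called by the admin's bulk "delete selected" action (Django 2.1+).
        for obj in queryset:
            obj.img.delete(save=False)
        super().delete_queryset(request, queryset)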
I have a model which is overriding save() to slugify a field:
from django.utils.text import slugify

class MyModel(models.Model):
    name = models.CharField(max_length=200)
    slug = models.SlugField(max_length=200)

    def save(self, *args, **kwargs):
        self.slug = slugify(self.name)
        super(MyModel, self).save(*args, **kwargs)
When I run loaddata to load a fixture, this save() does not appear to be called, because the slug field is empty in the database. Am I missing something?
I can get it to work with a pre_save signal, but this is a bit of a hack and it would be nice to get save() working.
from django.db.models.signals import pre_save

def mymodel_pre_save(sender, **kwargs):
    instance = kwargs['instance']
    instance.slug = slugify(instance.name)

pre_save.connect(mymodel_pre_save, sender=MyModel)
Thanks in advance.
No you're not. save() is NOT called by loaddata, by design (it's way more resource intensive, I suppose). Sorry.
EDIT: According to the docs, pre_save is not called either (even though apparently it is?).
Data is saved to the database as-is, according to https://docs.djangoproject.com/en/dev/ref/django-admin/#what-s-a-fixture
I'm doing something similar now: I need a second model to have a parallel entry for each instance of the first model in the fixture. The second model can be enabled/disabled and has to retain that value across loaddata calls. Unfortunately, having a field with a default value (and leaving that field out of the fixture) doesn't seem to work: it gets reset to the default value when the fixture is loaded (the two models could have been combined otherwise).
So I'm on Django 1.4, and this is what I've found so far:
You're correct that save() is not called. There's a special DeserializedObject that does the insertion by calling save_base() on the Model class, so overriding save_base() on your model won't do anything, since your override is bypassed anyway.
@Dave is also correct: the current docs still say the pre_save signal is not called, but it is. It's behind a condition: if origin and not meta.auto_created
origin is the class for the model being saved, so I don't see why it would ever be falsy.
meta.auto_created has been False so far with everything I've tried, so I'm not yet sure what it's for. Looking at the Options object, it seems to have something to do with abstract models.
So yes, the pre_save signal is indeed being sent.
Further down, there's a post_save signal behind the same condition that is also being sent.
Using the post_save signal works. My models are more complex, including a ManyToMany on the "Enabled" model, but basically I'm using it like this:
from django.db.models.signals import post_save

class Info(models.Model):
    name = models.TextField()

class Enabled(models.Model):
    info = models.ForeignKey(Info)

def create_enabled(sender, instance, *args, **kwargs):
    if Info == sender:
        Enabled.objects.get_or_create(id=instance.id, info=instance)

post_save.connect(create_enabled)
And of course, initial_data.json only defines instances of Info.