Sorry for the odd subject line.
I'd like to override the Django model save() method and call some additional code if the model instance is newly created.
Sure, I can use signals, or check whether the model has an empty pk field and, if so, set a temporary variable and call the code later:
class EmailModel(models.Model):
    email = models.EmailField()

    def save(self, *args, **kwargs):
        is_new = self.pk is None
        super(EmailModel, self).save(*args, **kwargs)
        # Create necessary objects
        if is_new:
            self.post_create()

    def post_create(self):
        # do the job, send mails
        pass
But I'd like to keep the code clean and avoid the temporary variable in save().
So the question is: is it possible to tell whether the model instance is a newly created object right after the parent save()/save_base() call?
I've checked the Django sources but can't find the right way to do this.
Thanks
Really, signals are the best approach in this case.
You could use the post_save() signal and, in the listener, just check whether the credit_set exists for the current model instance and, if not, create one. That would be my choice; there is no need to overdo such a simple task.
Of course, if you really need to know exactly when the model was initialized (I doubt it), use the post_init() signal. You don't need to override the save() method just to set some additional variables. Just catch the post_init() signal, or pre_save(), and change/add what you want. IMHO there is no sense in overriding save() and checking whether this is a new instance or not; that's why the signals are there.
In a simple ForeignKey relation, I want to run specific code when one of the related objects is modified.
Here is some schematic code:
class Car(models.Model):
    pass

class Wheel(models.Model):
    car = models.ForeignKey('Car',
                            on_delete=models.CASCADE,
                            related_name='wheels',
                            null=True)

    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        my_custom_code()
According to Django's documentation, the reverse related manager's .add() method performs the database write using update() instead of save() by default (likewise for .remove(), etc.).
So in the code below, my_custom_code isn't called when using .add():
car = Car.objects.create()
wheel = Wheel.objects.create()  # ok, my_custom_code called here
car.wheels.add(wheel)  # not called here: Django uses the update mechanism, which does not go through save()

We need to pass bulk=False to add() in order to force it to use save():

car.wheels.add(wheel, bulk=False)  # ok, my_custom_code called here
This is a problem for me, as it is important that my_custom_code is called whenever any of the related objects is modified. If someone forgets to pass bulk=False, it will produce inconsistent data.
Is there a signal for this case (like the m2m_changed signal)?
Is there a way to force bulk=False for all methods of a ForeignKey relation?
Thanks for your help.
Why do I need this? My actual X problem is storing, on the parent record, a computed result that depends on the related elements (and I need this value to always be up to date).
My initial idea was to do the computation every time a related model is modified, by overriding save().
I am using Django 1.9 and am trying to override the save() method's behaviour. The problem is that by the time I do instance.some_field = some_value, the self object is already modified, whereas I need to know what the original value of some_field was. The only way out seems to be fetching the object from the database by self's pk.
So I wonder: what is the GENERIC syntax, if there is any, to fetch the instance from the database? By GENERIC, I mean that I don't want to explicitly type the model name (like MYMODEL.objects.get(...)), but rather have Django figure out the right model from the target instance.
To make the question clearer, let me illustrate my goal in pseudo-code:
def save_extended(self, *args, **kwargs):
    original_object = (self's model).objects.get(pk=self.pk)
Is it possible? Or maybe I don't need this, and there's some smart Django trick to fetch the instance with the original field values?
You can use django-dirtyfields and the pre_save signal:

@receiver(pre_save, sender=MyModel)
def pre_save_logic(sender, instance, **kwargs):
    # MyModel must inherit dirtyfields.DirtyFieldsMixin for get_dirty_fields()
    if 'my_field' in instance.get_dirty_fields():
        do_some_logic()
Is there a way, hopefully without breaking admin, to disable editing existing model instances on the ORM level?
I'm not talking about removing 'Save' and 'Save and continue' buttons from templates - there should be no operations that can change the values of a committed instance of a model.
Preferably, the 'Save As' option should work instead.
Override the save function for your model like so:
class MyModel(models.Model):
    def save(self, *args, **kwargs):
        if self.pk is None:
            super(MyModel, self).save(*args, **kwargs)
This only calls the superclass save function (which actually performs the save) if there is no pk, i.e. the model instance is new.
You could override your model class's save() (do nothing if self.pk is set) and delete() (always do nothing).
But really, the database level is the safest place for this. For example, in PostgreSQL you could write two simple rules:
CREATE RULE noupd_myapp_mymodel AS ON UPDATE TO myapp_mymodel
    DO INSTEAD NOTHING;
CREATE RULE nodel_myapp_mymodel AS ON DELETE TO myapp_mymodel
    DO INSTEAD NOTHING;

(Note the INSTEAD: a plain DO NOTHING runs in addition to the original update or delete, not instead of it, so it would not block anything.)
Either way, the admin wouldn't know anything about this, so everything still looks editable. See my answer to Whole model as read-only for an attempt at making a model read-only in the admin. For your purposes, keep the add permission as-is, and only declare all fields read-only when not adding.
EDIT: One reason why overriding delete() in your model class is not safe is that "bulk delete" (QuerySet.delete(), e.g. the admin checkbox action) will not call the individual instances' delete() method; it goes straight to SQL: https://docs.djangoproject.com/en/dev/topics/db/queries/#deleting-objects
For those who need to prevent MyModel.objects.filter(pk=123).update(name="bob"):
class NoUpdateQuerySet(models.QuerySet):
    def update(self, *args, **kwargs):
        pass

class MyModel(models.Model):
    objects = NoUpdateQuerySet.as_manager()
    ...
Django docs - link
I have a model which is overriding save() to slugify a field:
class MyModel(models.Model):
    name = models.CharField(max_length=200)
    slug = models.SlugField(max_length=200)

    def save(self, *args, **kwargs):
        self.slug = slugify(self.name)
        super(MyModel, self).save(*args, **kwargs)
When I run loaddata to load a fixture, this save() does not appear to be called, because the slug field ends up empty in the database. Am I missing something?
I can get it to work with a pre_save signal hook, but this feels like a bit of a hack and it would be nice to get save() working.
def mymodel_pre_save(sender, **kwargs):
    instance = kwargs['instance']
    instance.slug = slugify(instance.name)

pre_save.connect(mymodel_pre_save, sender=MyModel)
Thanks in advance.
No, you're not. save() is NOT called by loaddata, by design (it's way more resource intensive, I suppose). Sorry.
EDIT: According to the docs, pre_save is not called either (even though apparently it is?).
Data is saved to the database as-is, according to https://docs.djangoproject.com/en/dev/ref/django-admin/#what-s-a-fixture
I'm doing something similar now: I need a second model to have a parallel entry for each instance of the first model in the fixture. The second model can be enabled/disabled, and has to retain that value across loaddata calls. Unfortunately, having a field with a default value (and leaving that field out of the fixture) doesn't seem to work: it gets reset to the default value when the fixture is loaded (the two models could have been combined otherwise).
So I'm on Django 1.4, and this is what I've found so far:
You're correct that save() is not called. There's a special DeserializedObject that does the insertion by calling save_base() on the model class; overriding save_base() on your model won't do anything since it's bypassed anyway.
@Dave is also correct: the current docs still say the pre_save signal is not called, but it is. It's behind a condition: if origin and not meta.auto_created.
origin is the class of the model being saved, so I don't see why it would ever be falsy.
meta.auto_created has been False so far with everything I've tried, so I'm not yet sure what it's for. Looking at the Options object, it seems to have something to do with abstract models.
So yes, the pre_save signal is indeed being sent.
Further down, there's a post_save signal behind the same condition that is also being sent.
Using the post_save signal works. My models are more complex, including a ManyToMany on the "Enabled" model, but basically I'm using it like this:
from django.db.models.signals import post_save

class Info(models.Model):
    name = models.TextField()

class Enabled(models.Model):
    info = models.ForeignKey(Info)

def create_enabled(sender, instance, *args, **kwargs):
    if Info == sender:
        Enabled.objects.get_or_create(id=instance.id, info=instance)

post_save.connect(create_enabled)
And of course, initial_data.json only defines instances of Info.
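Since pre_save/post_save do fire during loaddata, a handler that creates side objects may want to skip fixture loads. The signal handlers receive raw=True when the instance is being saved by loaddata, so a guard can bail out in that case. A small standalone sketch of that guard; the decorator name and the created_ids side effect are invented for illustration:

```python
def skip_fixture_saves(handler):
    """Wrap a pre_save/post_save handler so loaddata calls (raw=True) are ignored."""
    def wrapper(sender, instance, **kwargs):
        if kwargs.get("raw"):
            return None  # fixture loading: do not touch other tables
        return handler(sender, instance, **kwargs)
    return wrapper


created_ids = []


@skip_fixture_saves
def create_enabled(sender, instance, **kwargs):
    # Hypothetical side effect standing in for Enabled.objects.get_or_create
    created_ids.append(instance)


# Simulating what the signal dispatcher would pass in each case:
create_enabled(None, 1)            # normal save: side effect runs
create_enabled(None, 2, raw=True)  # loaddata: side effect skipped
print(created_ids)  # [1]
```

The same check can of course be inlined as a plain `if kwargs.get('raw'): return` at the top of any handler instead of using a decorator.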
I override the models.Model save() method in order to manipulate some of the properties before saving them to the DB (for example, issuing a Google Maps API call to geolocate an address).
However, I would not like to issue a call to Google every time I update an entry via the admin panel, only when I insert a new entry.
Is there any easy way to do this (other than running a query against the DB inside save() to check whether the object already exists)?
Meir
Check the primary key of the object in save():
def save(self, *args, **kwargs):
    if self.pk:
        # this is executed when updating (pk exists)
        pass
    else:
        # this is executed when inserting
        pass
    super(Model, self).save(*args, **kwargs)  # save in both cases
Something like that.
(source) Read the link thoroughly; there's a gotcha there.
EDIT: gotchas, plural: read the comments for more gotchas.
https://docs.djangoproject.com/en/dev/ref/models/querysets/#get-or-create
Basically, with get_or_create() you can check whether your object exists and run your update, or create a new one if the object isn't in your database.