The save() documentation explains that:
    A subclass of ModelForm can accept an existing model instance as the keyword argument instance; if this is supplied, save() will update that instance. If it's not supplied, save() will create a new instance of the specified model.
However, self.instance in save() always has an object.
So, how do I tell if the instance is existing or a newly created one?
You can check self.instance.pk to see if the model has previously been saved. However, that could be unreliable in the case where you created a new instance of the model and then initialized a ModelForm with that instance before saving it.
Another possibility, based on the BaseModelForm source code in Django 1.2, is to check self.instance._adding, which will be True if the model was created and False otherwise. However, I haven't tested this, so YMMV.
If the first option works, I'd recommend using that rather than an undocumented feature of ModelForms; it's less likely to change in the future and probably clearer.
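For illustration, a minimal sketch of the pk check inside an overridden save() (MyModel and MyModelForm are placeholder names):

from django import forms

class MyModelForm(forms.ModelForm):
    class Meta:
        model = MyModel

    def save(self, commit=True):
        # pk is None only if the instance has never been saved; as noted above,
        # this is unreliable if an unsaved instance was passed in explicitly.
        is_new = self.instance.pk is None
        obj = super(MyModelForm, self).save(commit=commit)
        if is_new:
            pass  # handle the freshly created object here
        return obj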
If I have a relation like this in django
class Reporter(models.Model):
    pass

class Article(models.Model):
    reporter = models.ForeignKey(Reporter)
and I want to create a new reporter with articles all at once, I first have to save() the Reporter to the DB, and only then can I add the articles.
But sometimes I would like to prepare everything "offline" (in the sense of: before pushing anything to the DB), e.g. creating a Reporter object, adding articles to it, and maybe afterwards still modifying some attributes of the Reporter object.
Then, when everything is done, I want to push it all to the DB at once. But of course, when I use Reporter.article_set.add() before calling Reporter.save(), I get an error, because Django will try to add the articles and foreign keys to the DB automatically. Is there any way to prevent this and prepare my object, including the relations, "offline"?
My own approach would be to add a set_articles method to Reporter and then override the save() method so it checks whether any articles are set and adds them after saving the Reporter.
But before I start improvising, I would like to know if there are already any solutions within Django.
The save() method accepts an optional commit keyword argument, which accepts either True or False. If you call save() with commit=False, then it will return an object that hasn’t yet been saved to the database. In this case, it’s up to you to call save() on the resulting model instance.
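As a sketch of how that pattern is typically used (assuming a ModelForm named ArticleForm for the Article model above):

form = ArticleForm(request.POST)
if form.is_valid():
    article = form.save(commit=False)   # nothing has been written to the database yet
    article.reporter = reporter         # adjust the object and its relations "offline"
    article.save()                      # now push it to the DB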
This may seem like a dumb question but I really feel like Django-Rest-Framework does not explain really well how it works. There is too much black box magic which confuses the configuration instructions (in my opinion).
For instance, it says I can override create or update within my serializer. So when my view receives a POST, I can send data to the serializer, which has an update method declared.
class MySerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel

    def update(self, instance, validated_data):
Does update only get called when the model already exists and we're just updating some of its data? Or does it call create when it creates a new one?
If I were to add this method to that class,
    def create(self, validated_data):
        return MyObject.objects.create(**validated_data)
Is this specifically the method that must be called in order to add a new object? And should the ability to override it be put in the serializer, with this being the default method (and parameters) that gets called if it is not declared?
There is too much black box magic which confuses the configuration instructions (in my opinion).
If there is something in the documentation that you think can be improved, feel free to submit a pull request with an update. If it seems reasonable, it will probably be merged and show up in a future release.
Or does it call create when it creates a new one?
create is called when a serializer is not initialized with a model instance, but only data is passed into it. Once serializer.save() is called, the serializer checks whether an instance was passed in; since none was, it calls create, so a new model instance is created and saved to the database.
Does update only get called when the model already exists and we're just updating some of its data?
update is called when a serializer is initialized with a model instance (or other object) and serializer.save() is called. This is roughly equivalent to Model.objects.update() when compared to Model.objects.create().
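In other words (a view-level sketch, with MySerializer and my_obj as placeholders):

# create path: only data is passed in
serializer = MySerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save()                # ends up calling create()

# update path: an existing instance is passed in
serializer = MySerializer(my_obj, data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save()                # ends up calling update()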
Is this specifically the method that must be called in order to add a new object?
Serializers are designed to be saved (including object creation and updating) using the central serializer.save() method. This is similar to how a model can be saved or created using the model.save() method.
And should the ability to override it be put in the serializer, with this being the default method (and parameters) that gets called if it is not declared?
It is recommended to override create and update on the serializer level if you need to change the logic for how a model and its related objects need to be saved, such as when working with nested serializers.
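For reference, a simplified sketch of what serializer.save() does internally (the real DRF implementation carries extra bookkeeping, but the dispatch looks roughly like this):

def save(self, **kwargs):
    validated_data = dict(self.validated_data, **kwargs)

    if self.instance is not None:
        # initialized with an existing object: dispatch to update()
        self.instance = self.update(self.instance, validated_data)
    else:
        # initialized with data only: dispatch to create()
        self.instance = self.create(validated_data)

    return self.instance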
I'm using Django 1.7. I've got a default custom manager that filters on an "active" boolean field. According to the docs, it needs to be the default manager to work with related fields (i.e. accessing User.story_set only shows active Story objects). I'm keeping the standard manager for admin and shell access, but I am unable to save changes to objects; I suspect this is because save() passes through the default manager at some point.
class ActiveStoryManager(models.Manager):
    use_for_related_fields = True

    def get_query_set(self):
        return super(ActiveStoryManager, self).get_query_set().filter(active=True)

class Story(models.Model):
    active = models.BooleanField(default=True)
    ...

    objects = ActiveStoryManager()
    full_set = models.Manager()
This works well for all public-facing use. However, in admin and shell I am unable to affect inactive objects, including turning them back active.
story = Story.full_set.get(id=#) will fetch a story with active=False, but after setting active=True I am unable to save, getting a
django.db.utils.IntegrityError: duplicate key value violates unique constraint "stories_story_pkey"
DETAIL: Key (id)=(#) already exists.
Calling save(force_update=True) returns django.db.utils.DatabaseError: Forced update did not affect any rows.
So while save() is a model method, it seems to depend on the default manager at some point in the saving process.
A workaround is using the QuerySet API, e.g. Story.full_set.filter(id=#).update(active=True), but that's only usable in the shell and requires manually typing each change; I still can't save inactive instances in the admin.
Any help on this?
It cannot be done! As inancsevinc pointed out, save() relies on the default manager. The Django docs mention that get_query_set should not be modified on default managers, and I have sadly found out why. Hopefully related managers can be specified/controlled in the future, but for now this method will not work for me. Confirmed in Django IRC chat.
Instead, I'm throwing together an ordinary Manager method, as well as model methods for some models, to get equivalent functionality. It also requires changing all the related_set calls in the templates to include the new methods, so it's a pain, but there is no other way.
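A rough sketch of that workaround (names are illustrative): keep an unfiltered default manager so save() and the admin behave normally, and expose the filtering as an explicit method that templates and views call instead of relying on the default queryset.

class StoryManager(models.Manager):
    def active(self):
        # explicit filter instead of overriding get_query_set on the default manager
        return self.filter(active=True)

class Story(models.Model):
    active = models.BooleanField(default=True)

    objects = StoryManager()   # unfiltered default manager: save() and the admin work normally

In templates, {{ user.story_set.all }} then becomes {{ user.story_set.active }}, since the method should also be available on the reverse related manager.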
To make admin page work with a different manager, you can implement get_queryset method on your ModelAdmin class.
class StoryAdmin(ModelAdmin):
    def get_queryset(self, request):
        return self.model.full_set.get_queryset()
Django models generally handle the ON DELETE CASCADE behaviour quite adequately (in a way that works on databases that don't support it natively.)
However, I'm struggling to discover what is the best way to override this behaviour where it is not appropriate, in the following scenarios for example:
ON DELETE RESTRICT (i.e. prevent deleting an object if it has child records)
ON DELETE SET NULL (i.e. don't delete a child record, but set its parent key to NULL instead to break the relationship)
Update other related data when a record is deleted (e.g. deleting an uploaded image file)
The following are the potential ways to achieve these that I am aware of:
Override the model's delete() method. While this sort of works, it is sidestepped when the records are deleted via a QuerySet. Also, every model's delete() must be overridden to make sure Django's code is never called, and super() can't be called, as it may use a QuerySet to delete child objects.
Use signals. This seems ideal, as they are fired both when deleting the model directly and when deleting via a QuerySet. However, there is no way to prevent a child object from being deleted, so this is not usable to implement ON DELETE RESTRICT or SET NULL.
Use a database engine that handles this properly (what does Django do in this case?)
Wait until Django supports it (and live with bugs until then...)
It seems like the first option is the only viable one, but it's ugly, throws the baby out with the bath water, and risks missing something when a new model/relation is added.
Am I missing something? Any recommendations?
Just a note for those who run into this issue as well: there is now a built-in solution in Django 1.3.
See the details in the documentation for django.db.models.ForeignKey.on_delete. Thanks to the editor of the Fragments of Code site for pointing it out.
For the simplest possible scenario, just add this to your model's FK field definition:
on_delete=models.SET_NULL
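For example, on a nullable foreign key (Parent and Child are placeholder names; models.PROTECT is the rough equivalent of ON DELETE RESTRICT):

class Child(models.Model):
    parent = models.ForeignKey(
        Parent,
        null=True,
        on_delete=models.SET_NULL,   # or models.PROTECT to block deletion while children exist
    )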
Django only emulates CASCADE behaviour.
According to discussion in Django Users Group the most adequate solutions are:
To reproduce the ON DELETE SET NULL scenario, manually do obj.rel_set.clear() (for every related model) before obj.delete().
To reproduce the ON DELETE RESTRICT scenario, manually check whether obj.rel_set is empty before obj.delete(). (A sketch of both follows below.)
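As a rough sketch of both workarounds (rel_set stands in for whatever related manager applies, and clear() only works if the foreign key is nullable):

def delete_with_set_null(obj):
    obj.rel_set.clear()            # detach the children first (ON DELETE SET NULL)
    obj.delete()

def delete_with_restrict(obj):
    if obj.rel_set.exists():       # refuse to delete while children exist (ON DELETE RESTRICT)
        raise Exception("Cannot delete: related records exist")
    obj.delete()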
Ok, the following is the solution I've settled on, though it's far from satisfying.
I've added an abstract base class for all my models:
class MyModel(models.Model):
    class Meta:
        abstract = True

    def pre_delete_handler(self):
        pass
A signal handler catches any pre_delete events for subclasses of this model:
def pre_delete_handler(sender, instance, **kwargs):
    if isinstance(instance, MyModel):
        instance.pre_delete_handler()

models.signals.pre_delete.connect(pre_delete_handler)
In each of my models, I simulate any "ON DELETE RESTRICT" relations by throwing an exception from the pre_delete_handler method if a child record exists.
class RelatedRecordsExist(Exception):
    pass

class SomeModel(MyModel):
    ...
    def pre_delete_handler(self):
        # "children" stands in for whichever related manager applies
        if self.children.count():
            raise RelatedRecordsExist("SomeModel has child records!")
This aborts the delete before any data is modified.
Unfortunately, it is not possible to update any data in the pre_delete signal (e.g. to emulate ON DELETE SET NULL) as the list of objects to delete has already been generated by Django before the signals are sent. Django does this to avoid getting stuck on circular references and to prevent signaling an object multiple times unnecessarily.
Ensuring a delete can be performed is now the responsibility of the calling code. To assist with this, each model has a prepare_delete() method that takes care of setting keys to NULL via self.related_set.clear() or similar:
class MyModel(models.Model):
    ...
    def prepare_delete(self):
        pass
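A concrete subclass might then implement it like this (again, "children" stands in for the real related manager, and the foreign key must be nullable for clear() to work):

class SomeModel(MyModel):
    ...
    def prepare_delete(self):
        # emulate ON DELETE SET NULL: detach the children before this object is deleted
        self.children.clear()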
To avoid having to change too much code in my views.py and models.py, the delete() method is overridden on MyModel to call prepare_delete():
class MyModel(models.Model):
    ...
    def delete(self):
        self.prepare_delete()
        super(MyModel, self).delete()
This means that any deletes explicitly called via obj.delete() will work as expected, but if a delete has cascaded from a related object or is done via a queryset.delete() and the calling code hasn't ensured that all links are broken where necessary, then the pre_delete_handler will throw an exception.
And lastly, I've added a similar post_delete_handler method to the models that gets called on the post_delete signal and lets the model clear up any other data (for example deleting files for ImageFields.)
class MyModel(models.Model):
    ...
    def post_delete_handler(self):
        pass

def post_delete_handler(sender, instance, **kwargs):
    if isinstance(instance, MyModel):
        instance.post_delete_handler()

models.signals.post_delete.connect(post_delete_handler)
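For instance, a model with an ImageField could clean up its file on disk like this (the field name is illustrative):

class Photo(MyModel):
    image = models.ImageField(upload_to='photos/')

    def post_delete_handler(self):
        # the database row is already gone; remove the orphaned file as well
        self.image.delete(save=False)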
I hope that helps someone and that the code can be reworked into something more usable without too much trouble.
Any suggestions on how to improve this are more than welcome.
I have a model with a version field - autocreate timestamp.
When a model instance is being saved I want to create a new instance with a new timestamp instead of updating the old model instance.
Is it possible?
I thought of overriding the model's save() method, but I don't know how to create a new instance without causing an infinite save() loop.
Thanks
You could set self.id = None in the overridden save method - then in the super method, Django would do an INSERT rather than an UPDATE.
Or, as pointed out in the documentation here, you could use the force_insert=True parameter in the call to save, which does the same thing.
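A minimal sketch of that approach, assuming a model along these lines (class and field names are illustrative):

class Snapshot(models.Model):
    version = models.DateTimeField(auto_now_add=True)   # timestamp is set when the row is inserted

    def save(self, *args, **kwargs):
        # Always insert a new row instead of updating the existing one.
        self.pk = None
        kwargs['force_insert'] = True
        super(Snapshot, self).save(*args, **kwargs)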