I have two models, Invoice and InvoiceItems, which have a one-to-many relationship.
Throughout the code base we're creating InvoiceItems for a given Invoice using the Manager object as:
invoice.invoice_items.create(...)
The thing is, we now have a validation that has to run before an InvoiceItem is created, and going through the codebase to refactor every place that creates one would be a headache.
I wonder if there's a way to override the create method itself, or should we go for the model's save()?
To modify a Manager method you need to create your own Manager. Given the following case:
# models.py
class MyManager(models.Manager):
    def create(self, **kwargs):
        # write your own code (e.g. validation) here, then fall back to the default behaviour
        return super().create(**kwargs)

class MyModel(models.Model):
    # ... fields
    objects = MyManager()
Do not worry about the other methods (filter, delete, etc.); all of them will work as usual.
You can find more about custom managers here
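Applied to the question's Invoice/InvoiceItem case, a minimal sketch could look like this (the quantity field and the validation rule are made up for illustration):

from django.db import models

class InvoiceItemManager(models.Manager):
    def create(self, **kwargs):
        # hypothetical rule: an item must have a positive quantity
        if kwargs.get('quantity', 0) <= 0:
            raise ValueError('An InvoiceItem needs a positive quantity')
        return super().create(**kwargs)

class InvoiceItem(models.Model):
    invoice = models.ForeignKey('Invoice', models.CASCADE, related_name='invoice_items')
    quantity = models.PositiveIntegerField()

    objects = InvoiceItemManager()

Since objects is the model's default manager, the related manager behind invoice.invoice_items is derived from it, so invoice.invoice_items.create(...) should run the same check; it is worth verifying that on your Django version.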
Right now I'm reading about custom managers that you can use to add additional logic to a CRUD action like create. You write a custom manager class and then initialize the objects attribute of the model class with an instance of that custom manager class.
I've also read that you can use the pre_save and post_save signals to add additional logic to the save action. My question is: when should I use a custom manager class over signals, and is the use of a custom manager slower than the default manager of the Model class?
Thank you
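To make the comparison concrete, the two hooks I'm reading about look roughly like this (Order and its field are hypothetical):

from django.db import models
from django.db.models.signals import pre_save
from django.dispatch import receiver

class OrderManager(models.Manager):
    def create(self, **kwargs):
        # extra create-time logic lives in the manager
        return super().create(**kwargs)

class Order(models.Model):
    total = models.DecimalField(max_digits=8, decimal_places=2)

    objects = OrderManager()

@receiver(pre_save, sender=Order)
def order_pre_save(sender, instance, **kwargs):
    # extra save-time logic lives in a signal handler and runs on every save
    pass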
OK, here's one I wrote earlier. I've cut out a lot of irrelevant stuff, hoping all the essentials remain. You need to know there's a foreign-key chain PRSBlock -> PRS2 -> Jobline -> Description. Using this, all queries via PRSBlock.objects.filter(...) will return PRSBlock objects with extra fields membrane_t, substrate_t and material taken from the Description found by following this chain. They're almost always needed in this context, unlike most of the other things along that chain. select_related would be serious overkill.
from django.db.models import F

class PRSBlock_Manager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().annotate(
            membrane_t=F('PRS__jobline__description__mt'),
            substrate_t=F('PRS__jobline__description__ft'),
            material=F('PRS__jobline__description__material'),
        )

class PRSblock(models.Model):
    # override the default manager: always annotate with the Description
    # parameters relevant to block LUTs
    objects = PRSBlock_Manager()

    PRS = models.ForeignKey(PRS2, models.CASCADE, related_name='prs_blocks')
    # and lots of other fields that aren't relevant
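A usage sketch (the filter is hypothetical): every block returned already carries the three annotated attributes, computed in the same query:

blocks = PRSblock.objects.filter(PRS=some_prs)
for blk in blocks:
    print(blk.membrane_t, blk.substrate_t, blk.material)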
I am trying to create a very simple CRUD application using a REST API.
So I create a very simple model, serializer and viewset for all these.
And then I noticed that I don't fully understand some basic principles about the right use cases for calling, for example, the create method for my model instance.
As I understand it, Django provides several approaches:
1. I can define my CRUD methods inside the model class:
class Foo(models.Model):
    ...

    def create(self, **kwargs):
        foo = Foo(**kwargs)
        foo.save()
        return foo
2. I can also create instances using model serializers (it seems there is no big difference, because the same save method of the model instance is called):
class FooSerializer(serializers.ModelSerializer):
    ...

    class Meta:
        model = Foo
        ...

    def create(self, validated_data):
        fs = self.Meta.model(**validated_data)
        fs.save()
        return fs
2b. I can use simple serializers:
class FooSerializer(serializers.Serializer):
    def create(self, validated_data):
        return Foo.objects.create(**validated_data)
3. Finally, I can use perform_create, perform_update and so on from the viewset:
class FooView(ModelViewSet):
    serializer_class = FooSerializer

    def perform_create(self, serializer):
        serializer.save()
    ...
Are there patterns for when one or another solution should be used?
Could you please provide some explanation with use cases?
Thanks!
Let's go step by step through your points about creating/using a create method:
1. You don't need to write a create() method inside the model.
2. You don't need to write a create() method in a model serializer, unless you want to handle additional keywords or override the create() method to change the default behavior (reference).
2b. In serializers.Serializer you can write a create() method if you want to save an instance with that serializer. This is useful when you are using the serializer with generic API views or viewsets. A reference can be found in the documentation.
3. By writing a perform_create() method in the viewset, you are basically overriding the default perform_create() of the viewset. You can integrate additional tasks inside that function when overriding it (example).
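For instance, a common pattern for point 3 is to inject data the client must not control when overriding perform_create(); a sketch building on the FooView from the question (the owner field is hypothetical):

from rest_framework.viewsets import ModelViewSet

class FooView(ModelViewSet):
    serializer_class = FooSerializer

    def perform_create(self, serializer):
        # attach the requesting user; 'owner' is a hypothetical model field
        serializer.save(owner=self.request.user)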
While going through the official Django documentation I came across the Model instance reference section, which mentions that you can create an instance of a model through a custom manager method that calls self.create. I wanted to know what the difference is between using the create method directly and using the custom manager method, when both use the same fields and in both cases the data is saved in the DB.
Documentation:
https://docs.djangoproject.com/en/2.2/ref/models/instances/#creating-objects
class BookManager(models.Manager):
    def create_book(self, title):
        book = self.create(title=title)
        return book

class Book(models.Model):
    title = models.CharField(max_length=100)

    objects = BookManager()

book = Book.objects.create_book("Pride and Prejudice")
What is the difference between the above and this?
book2 = Book.objects.create(title="Pride and Prejudice")
Well, in this simplest case there is no difference. The reason for describing this technique in the docs is stated right there:
You may be tempted to customize the model by overriding the __init__
method. If you do so, however, take care not to change the calling
signature as any change may prevent the model instance from being
saved. Rather than overriding __init__, try using one of these
approaches:
It means you may want to set some extra/default values on the model instance. If you override the constructor for this purpose, it is a little unsafe (and not usual practice in Django). That's why two other techniques for doing this are described; you are mentioning one of them. You can do some extra stuff in the custom manager method if you want:
class BookManager(models.Manager):
    def create_book(self, title):
        # you can do some extra stuff here for instance creation
        book = self.create(title=title)
        # or here, when it is already saved to the db
        return book
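The other technique the docs mention is a classmethod on the model itself; a minimal sketch of the same idea (here it also saves, so behaviour matches the manager version):

class Book(models.Model):
    title = models.CharField(max_length=100)

    @classmethod
    def create_book(cls, title):
        book = cls(title=title)
        # extra stuff here, then persist explicitly
        book.save()
        return book

book = Book.create_book("Pride and Prejudice")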
Otherwise there is no difference.
I have a use case where a particular class can either be transient or persistent. Transient instances are built from a JSON payload on a PUT call, and may either be persisted to the database or used during the server call and then either returned or discarded. What is best practice for this case? My options seem to be:
Write two classes, one of which is a models.Model subclass, and the other of which isn't, and make them implement the same API, or
Use the Model subclass, but be careful not to call save().
Is either of these preferable, according to conventional use of Django models?
You'll need both:
abstract = True is useful if subclasses should still be concrete models, so that no table is created just for the parent class. It lets you opt out of multi-table inheritance and instead have the shared attributes duplicated into the subclasses' tables (abstract base inheritance).
managed = False is useful if the inheriting class should never be persisted at all. Django migrations and fixtures won't generate any database table for this.
class TransientModel(models.Model):
    """Inherit from this class to use django constructors and serialization but no database management"""

    def save(self, *args, **kwargs):
        pass  # avoid exceptions if called

    class Meta:
        abstract = True  # no table for this class
        managed = False  # no database management
class Brutto(TransientModel):
    """This is not persisted. No table app_brutto"""
    # do more things here
    pass
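A usage sketch, assuming Brutto declares, say, an amount field (both the field and the payload below are made up):

data = {"amount": "100.00"}   # hypothetical JSON body of the PUT call
obj = Brutto(**data)          # plain model constructor, no database involved
obj.save()                    # no-op: the overridden save() never touches the database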
In order to remain as DRY as possible, you could have an abstract mock class deriving from your model:
class A(models.Model):
    ...  # fields'n'stuff

class TransientA(A):
    def save(self, *args, **kwargs):
        pass  # avoid exceptions if called

    class Meta:
        abstract = True  # no table created
Now, even if you call save on it anywhere (even in methods inherited from A), you'll be shooting blanks.
The main purpose of a model is to contain business logic, so I want most of my code inside the Django model in the form of methods. For example, I want to write a method named get_tasks_by_user() inside the task model, so that I can access it as:
Tasks.get_tasks_by_user(user_id)
Following is my model code:
class Tasks(models.Model):
    slug = models.URLField()
    user = models.ForeignKey(User)
    title = models.CharField(max_length=100)

    objects = SearchManager()

    def __unicode__(self):
        return self.title

    days_passed = property(getDaysPassed)

    def get_tasks_by_user(self, userid):
        return self.filters(user_id=userid)
But this doesn't seem to work; I have used it in the view as:
tasks = Tasks.objects.get_tasks_by_user(user_id)
But it gives following error:
'SearchManager' object has no attribute 'get_tasks_by_user'
If I remove objects = SearchManager(), only the name of the manager in the error changes, so I think that is not the issue. It seems like I am making some very basic mistake; how can I do what I am trying to do? I know I can do the same thing via Tasks.objects.filter(user_id=userid), but I want to keep all such logic in the model. What is the correct way to do so?
An easy way to do this is by using the classmethod decorator to make it a class method. Inside class Tasks:
@classmethod
def get_tasks_by_user(cls, userid):
    return cls.objects.filter(user_id=userid)
This way you can simply call:
tasks = Tasks.get_tasks_by_user(user_id)
Alternatively, you can use managers per Tom's answer.
To decide which one to choose in your specific case, you can refer to James Bennett's (the release manager of Django) blog post on when to use managers vs. classmethods.
Any methods on a model class will only be available to instances of that model, i.e. individual objects.
For your get_tasks_by_user function to be available as you want it (on the collection), it needs to be implemented on the model manager.
class TaskManager(models.Manager):
    def get_tasks_by_user(self, user_id):
        return super(TaskManager, self).get_queryset().filter(user=user_id)

class Task(models.Model):
    # ...
    objects = TaskManager()
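With the manager attached, the call from the question works at the collection level (note the model is named Task in this snippet):

tasks = Task.objects.get_tasks_by_user(user_id)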