Limit the number of rows in a Django table

I have a table in my models file and I want to design it so that the table is limited to ten rows. When the limit is exceeded, the oldest row should be deleted. For some context, this is for a front-end display that shows a user the ten most recent links they have accessed. I am new to Django, so if anyone has a suggestion on how to do this, it would be greatly appreciated!

You could write a custom save method that checks YourModel.objects.count() and deletes the oldest row when the count reaches 10.
Something along the lines of:
def save(self, *args, **kwargs):
    if YourModel.objects.count() >= 10:  # >= so the table never exceeds 10 rows
        # Delete the oldest row; this assumes rows are created in pk order
        YourModel.objects.order_by('pk')[0].delete()
    super().save(*args, **kwargs)

In my opinion, you can use signals, a post_save signal in this case. This way, you keep the object creation and deletion logic separate.
Since you want the oldest row to be deleted, I am assuming you have a created field on the model.
Once a row is saved:
from django.db import models
from django.db.models.signals import post_save

def my_handler(sender, instance, **kwargs):
    qs = MyModel.objects.order_by('created')  # ensure ordering
    if qs.count() > 10:
        qs[0].delete()  # remove the oldest element

class MyModel(models.Model):
    title = models.CharField('title', max_length=200)
    created = models.DateTimeField(auto_now_add=True, editable=False)

post_save.connect(my_handler, sender=MyModel)
Of course, nothing stops you from using the pre_save signal instead, but use it only if you are absolutely sure the save method won't fail. A pre_save version is sketched below.
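For illustration, a pre_save variant might look like this (a minimal sketch, with the caveat above: the handler uses >= because it runs before the new row exists, and if save() then fails, the oldest row is already gone):

from django.db.models.signals import pre_save

def trim_handler(sender, instance, **kwargs):
    qs = MyModel.objects.order_by('created')
    if qs.count() >= 10:  # the new row has not been saved yet
        qs[0].delete()

pre_save.connect(trim_handler, sender=MyModel)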

Improving a bit on @karthikr's answer.
I also believe a post_save signal handler is the best way, but I would isolate the function in a class method and check the created flag. If, for example, you want to keep a log of the last 1000 emails your system has sent:
from django.db import models
from django.db.models.signals import post_save

LOG_SIZE = 1000

class EmailLog(models.Model):
    """Keeps a log of the 1000 most recent sent emails."""

    sent_at = models.DateTimeField(auto_now_add=True)
    message = models.TextField()
    to_addr = models.EmailField()

    @classmethod
    def post_create(
        cls,
        sender,
        instance: "EmailLog",
        created: bool,
        *args,
        **kwargs
    ):
        if created:  # Indicates whether this is a new object
            qset = EmailLog.objects.order_by("sent_at")  # Force ordering
            if qset.count() > LOG_SIZE:
                qset[0].delete()

post_save.connect(EmailLog.post_create, sender=EmailLog)

Related

Django: why does a model foreign key cascade not trigger delete()?

There are two basic ways to do something when an instance gets deleted:
Override Model.delete
Use a signal
I used to reckon both of them serve the same purpose and just offer different ways of writing the same thing.
However, on this occasion, I realised I was wrong:
class Human(models.Model):
    name = models.CharField(max_length=20)

class Pet(models.Model):
    name = models.CharField(max_length=20)
    owner = models.ForeignKey(Human, related_name="pet", on_delete=models.CASCADE)

    def delete(self, *args, **kwargs):
        print('------- Pet.delete is called')
        return super().delete(*args, **kwargs)

h = Human(name='jason')
h.save()
p = Pet(name="dog", owner=h)
p.save()
h.delete()
# nothing is shown
Why is Pet.delete not fired when Human.delete cascades over the foreign key? Do I have to use a signal for this? If so, would it cost more in performance?
I am building something fairly heavy: a comment system that filters the relevant records and deletes them when the commented target gets deleted. The comment model has many nullable foreign key fields, all with models.CASCADE set, and only one of them is assigned a value. But in the product delete view, when I call product.delete the cascade triggers, yet comment.delete is not fired.
Currently, the project has delete defined on many models, on the assumption that it is always triggered when the instance gets removed from the database, and it would be tremendous work to rewrite it all as signals. Is there a way to have delete called during the cascade? (I know it is likely impossible, since the cascade happens at the database level.)
I implemented a mixin for commentable models with extra methods defined, so I decided to change from a delete method to signals, something like this:
from django.db import models
from django.dispatch import receiver
from django.db.models.signals import pre_delete

# Create your models here.
class Base:
    def __init_subclass__(cls):
        @receiver(pre_delete, sender=cls)
        def pet_pre_delete1(sender, instance, **kwargs):
            print('pet pre delete 1 is called')

        @receiver(pre_delete, sender=cls)
        def pet_pre_delete2(sender, instance, **kwargs):
            print('pet pre delete 2 is called')

class Human(models.Model):
    name = models.CharField(max_length=20)

    def __str__(self):
        return f'<human>{self.name}'

class Pet(Base, models.Model):
    name = models.CharField(max_length=20)
    owner = models.ForeignKey(Human, related_name="pet", on_delete=models.CASCADE)

    def __str__(self):
        return f'<pet>{self.name}'

# ------- Pet.delete is called
# pet pre delete 1 is called
# pet pre delete 2 is called
It works fine in testing. I wonder if there is any risk in using this; would the receivers be garbage collected?
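A note on that worry, based on Django's documented signal behaviour rather than on the original post: signal connections hold weak references to their receivers by default, so nested functions like the ones above can be garbage collected once __init_subclass__ returns, silently disconnecting them. Passing weak=False keeps a strong reference. A minimal sketch:

from django.db.models.signals import pre_delete
from django.dispatch import receiver

class Base:
    def __init_subclass__(cls):
        # weak=False makes the dispatcher hold a strong reference, so the
        # nested receiver survives after __init_subclass__ returns.
        @receiver(pre_delete, sender=cls, weak=False)
        def log_pre_delete(sender, instance, **kwargs):
            print(f'{sender.__name__} pre delete is called')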

Is the post_save signal in Django atomic?

I really can't find anything solid in the docs. Let's say I'm doing something like this:
from django.db.models.signals import post_save
from django.dispatch import receiver

class Item(models.Model):
    total_score = models.IntegerField()

    def set_score(self):
        ...

class Review(models.Model):
    item = models.ForeignKey(Item, on_delete=models.CASCADE)
    score = models.IntegerField()

@receiver(post_save, sender=Review)
def my_handler(sender, instance, **kwargs):
    instance.item.set_score()
What I'm trying to do is call set_score() for the item object, whenever a review object is saved. Is this atomic? I definitely want the whole thing to be atomic, as a situation where a review is saved, but the item's total score is not updated is a recipe for bugs.
No, there's nothing special about signals with regard to database transactions (the only kind of atomicity handled by Django). It's up to you to ensure that the relevant commands are always part of the same database transaction.
One approach would be to simply rely on the calling code to do this by using ATOMIC_REQUESTS, using transactions in your views, etc.
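For example, a view could wrap the whole operation in an explicit transaction. A minimal sketch (create_review and its arguments are placeholders, not from the original question):

from django.db import transaction

def create_review(request, item_id):
    # Everything in this block commits or rolls back together,
    # including whatever the post_save handler writes.
    with transaction.atomic():
        review = Review(item_id=item_id, score=5)
        review.save()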
Or, since post_save signals are sent as part of Model.save(), you could simply override Review.save() and make it use a transaction.
from django.db import transaction

class Review(models.Model):
    ...

    @transaction.atomic
    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)

Django model validate a ManyToMany field before adding

I have a model that looks somewhat like this:
class Passenger(models.Model):
    name = models.CharField(max_length=50)
    surname = models.CharField(max_length=50)

class Flight(models.Model):
    capacity = models.IntegerField()
    passengers = models.ManyToManyField(Passenger)
Before adding a new passenger to the flight I would like to validate whether the number of passengers is not going to exceed the capacity. I was wondering what would be the best way to go about this.
Obviously I could manually check the number of passengers before adding a new one, but maybe there is some support in django? I tried writing a validator, but wasn't sure how to do it.
According to the Django docs, you can listen to the m2m_changed signal, which will trigger pre_add and post_add actions:
Using add() with a many-to-many relationship, however, will not call any save() methods (the bulk argument doesn't exist), but rather create the relationships using QuerySet.bulk_create(). If you need to execute some custom logic when a relationship is created, listen to the m2m_changed signal, which will trigger pre_add and post_add actions.
Following @M.Void's answer, here is a code example:
from django.db import models
from django.db.models.signals import m2m_changed
from django.core.exceptions import ValidationError

class MyModel(models.Model):
    m2mField = models.ManyToManyField('self')
    m2mFieldLimit = 2

def m2mField_changed(sender, instance, action, **kwargs):
    if action == 'pre_add':  # validate before the new relations are written
        if instance.m2mField.count() >= instance.m2mFieldLimit:
            raise ValidationError(f'Max number of records is {instance.m2mFieldLimit}')

m2m_changed.connect(m2mField_changed, sender=MyModel.m2mField.through)
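Usage would look roughly like this (assuming the model above, with its limit of 2):

a = MyModel.objects.create()
b = MyModel.objects.create()
c = MyModel.objects.create()
a.m2mField.add(b)  # fine
a.m2mField.add(c)  # fine, the limit of 2 is now reached
# any further add() raises ValidationError from the pre_add handler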
Override the clean method on the model to do the check you want:
class Passenger(models.Model):
    name = models.CharField(max_length=50)
    surname = models.CharField(max_length=50)

    def clean(self, *args, **kwargs):
        # clean gets called automatically by other things, so we can't always
        # expect flight_id to be provided
        if 'flight_id' in kwargs:
            flight = Flight.objects.get(pk=kwargs['flight_id'])
            if flight.passengers.all().count() >= flight.capacity:
                # flight is full!
                raise ValidationError('This flight is already at capacity.')
        super(Passenger, self).clean()

class Flight(models.Model):
    capacity = models.IntegerField()
    passengers = models.ManyToManyField(Passenger)
Note that to do this, you will have to pass in the flight ID when validating the passenger:
f = Flight.objects.get(...)
p = Passenger(name='First', surname='Last')
try:
    p.clean(flight_id=f.id)  # full_clean calls clean, among other validations
    p.save()
except ValidationError as e:
    pass  # do something to handle the error
Note that it is possible in multi-threaded applications for something to get validated successfully, but still fail to save in a race condition. You would need to add additional code to handle that.
See here for details on model validation.
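One common way to close that race, sketched here as an addition rather than part of the original answer, is to lock the flight row inside a transaction with select_for_update, so two concurrent adds cannot both pass the capacity check (add_passenger is a hypothetical helper):

from django.core.exceptions import ValidationError
from django.db import transaction

def add_passenger(flight_id, passenger):
    with transaction.atomic():
        # The row lock is held until the transaction commits.
        flight = Flight.objects.select_for_update().get(pk=flight_id)
        if flight.passengers.count() >= flight.capacity:
            raise ValidationError('Flight is full.')
        flight.passengers.add(passenger)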

How to exclude django model fields during a save?

I've got a fairly complicated Django model that includes some fields that should only be saved under certain circumstances. As a simple example,
from django.db import models

class MyModel(models.Model):
    name = models.CharField(max_length=200)
    counter = models.IntegerField(default=0)

    def increment_counter(self):
        self.counter = models.F('counter') + 1
        self.save(update_fields=['counter'])
Here I'm using F expressions to avoid race conditions while incrementing the counter. I'll generally never want to save the value of counter outside of the increment_counter function, as that would potentially undo an increment called from another thread or process.
So the question is, what's the best way to exclude certain fields by default in the model's save function? I've tried the following
def save(self, **kwargs):
    if 'update_fields' not in kwargs:
        update_fields = set(self._meta.get_all_field_names())
        update_fields.difference_update({
            'counter',
        })
        kwargs['update_fields'] = tuple(update_fields)
    super().save(**kwargs)
but that results in ValueError: The following fields do not exist in this model or are m2m fields: id. I could of course just add id and any m2m fields in the difference update, but that then starts to seem like an unmaintainable mess, especially once other models start to reference this one, which will add additional names in self._meta.get_all_field_names() that need to be excluded from update_fields.
For what it's worth, I mostly need this functionality for interacting with the django admin site; every other place in the code could relatively easily call model_obj.save() with the correct update_fields.
I ended up using the following:
from django.db import models

class MyModel(models.Model):
    name = models.CharField(max_length=200)
    counter = models.IntegerField(default=0)

    default_save_fields = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.default_save_fields is None:
            # This block should only get called for the first object loaded
            default_save_fields = {
                f.name for f in self._meta.get_fields()
                if f.concrete and not f.many_to_many and not f.auto_created
            }
            default_save_fields.difference_update({
                'counter',
            })
            self.__class__.default_save_fields = tuple(default_save_fields)

    def increment_counter(self):
        self.counter = models.F('counter') + 1
        self.save(update_fields=['counter'])

    def save(self, **kwargs):
        if self.id is not None and 'update_fields' not in kwargs:
            # If self.id is None (meaning the object has yet to be saved)
            # then do a normal update with all fields.
            # Otherwise, make sure `update_fields` is in kwargs.
            kwargs['update_fields'] = self.default_save_fields
        super().save(**kwargs)
This seems to work for my more complicated model which is referenced in other models as a ForeignKey, although there might be some edge cases that it doesn't cover.
I created a mixin class to make it easy to add to a model, inspired by clwainwright's answer. Though it uses a second mixin class to track which fields have been changed, inspired by this answer.
https://gitlab.com/snippets/1746711
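A minimal sketch of what such a mixin might look like, assumed for illustration (the linked snippet is the authoritative version):

from django.db import models

class ExcludeFieldsOnSaveMixin:
    """Exclude the fields named in save_excluded_fields from routine saves."""
    save_excluded_fields = ()

    def save(self, **kwargs):
        if self.pk is not None and 'update_fields' not in kwargs:
            kwargs['update_fields'] = tuple(
                f.name for f in self._meta.get_fields()
                if f.concrete and not f.many_to_many and not f.auto_created
                and f.name not in self.save_excluded_fields
            )
        super().save(**kwargs)

class MyModel(ExcludeFieldsOnSaveMixin, models.Model):
    name = models.CharField(max_length=200)
    counter = models.IntegerField(default=0)
    save_excluded_fields = ('counter',)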

Save the user from one model to another model

What I want to do is: whenever I create a new message, I want the sender of the Message to be added to the user set of the particular Thread the Message relates to.
How do I do that? Can it be done by overriding the save method? I can do it in views.py, but I was hoping it would be better to add it in models.py itself. Any help would be greatly appreciated. Thank you!
class Thread(models.Model):
    user = models.ManyToManyField(User)
    is_hidden = models.ManyToManyField(User, related_name='hidden_thread', blank=True)

    def __unicode__(self):
        return unicode(self.id)

class Message(models.Model):
    thread = models.ForeignKey(Thread)
    sent_date = models.DateTimeField(default=datetime.now)
    sender = models.ForeignKey(User)
    body = models.TextField()
    is_hidden = models.ManyToManyField(User, related_name='hidden_message', blank=True)

    def __unicode__(self):
        return "%s - %s" % (unicode(self.thread.id), self.body)
You could look up the reverse foreign key and get all the users for a particular thread without having to store them manually on Thread.
You can get the users associated with a thread via the reverse lookup:
User.objects.filter(message__thread=thread)
If you don't want to actively pull the user set as dm03514 showed, such as if you want to add users to the thread by default but maintain the ability to remove them from the thread many-to-many later, you can indeed do this by overriding the save method or by using a post_save signal.
save is good enough for almost all cases; the advantage of post_save is that it can more reliably distinguish between saving a new message and saving edits to an existing message. But if you're not creating messages with preselected PKs or loading them from fixtures, save can work fine:
class Message(models.Model):
    def save(self, *args, **kwargs):
        probably_new = (self.pk is None)
        super(Message, self).save(*args, **kwargs)
        if probably_new:
            self.thread.user.add(self.sender)
A signal would look like this:
from django.db.models.signals import post_save

def update_thread_users(sender, **kwargs):
    created = kwargs['created']
    raw = kwargs['raw']
    if created and not raw:
        instance = kwargs['instance']
        instance.thread.user.add(instance.sender)

post_save.connect(update_thread_users, sender=Message)
And then review the docs on preventing duplicate signals in case of multiple imports:
https://docs.djangoproject.com/en/dev/topics/signals/#preventing-duplicate-signals
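The usual fix from those docs is to pass a dispatch_uid so that repeated imports connect the handler only once:

post_save.connect(
    update_thread_users,
    sender=Message,
    dispatch_uid="update_thread_users_on_message_save",
)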