Django pre_save signal called when commit=False - django

I am writing a Django application that will track changes to the models, in a similar way to the admin interface. For example, I will be able to display a list of changes to a model, that look something like Changed Status from Open to Closed.
I am using the pre_save signal to do this, comparing the relevant fields between the existing item in the database and the "instance" being saved. To get the existing item I have to do sender._default_manager.get(pk=instance.pk), which seems a bit messy, but that part works.
The problem is that the view for changing this model calls the form's save() method twice (first with commit=False), so two changes get logged in the database because the pre_save signal is emitted twice.
Is there any way I can work around this? Or perhaps a different approach altogether - though I remember reading that the Django admin app uses signals to track changes that users make.

Looking through the Django source, it seems that pre_save signals are sent on every call to save(), even when commit is False. I would suggest inserting on the first pre_save, but adding a flag column to the changes table, e.g.:
from datetime import datetime

from django.db import models


class FooChanges(models.Model):
    foo = models.ForeignKey(Foo)
    dt = models.DateTimeField(default=datetime.now)
    field = models.CharField(max_length=50)
    value = models.CharField(max_length=50)  # Or whatever is appropriate here
    finished = models.BooleanField(default=False)
Then, your pre_save handler can be:
def pre_save_handler(sender, instance, **kwargs):
    foo_changes, created = FooChanges.objects.get_or_create(
        foo=instance, finished=False, field='Status', value=instance.status)
    if not created:
        foo_changes.finished = True
        foo_changes.save()
So on the first pre_save, you actually insert the change. On the second pass, you retrieve it from the database and mark it finished (finished=True), so you don't pick it up again the next time Foo's status changes.

use dispatch_uid:
http://docs.djangoproject.com/en/1.2/topics/signals/#preventing-duplicate-signals
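For example (a minimal sketch, reusing the pre_save_handler and Foo from the answer above; the uid string is just a placeholder), passing dispatch_uid when connecting the signal ensures the same receiver is only registered once, even if the module that connects it is imported more than once:

from django.db.models.signals import pre_save

pre_save.connect(pre_save_handler, sender=Foo,
                 dispatch_uid="foo_track_status_changes")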

Django Audit Log
django-audit-log is a pluggable app that does exactly what you want with little effort. I've used it in a project and I'll surely use it in many more now that I know it.

Related

Django - adding gamification features

I have a medium size Django REST app that I'm looking to add gamification features to.
The application in question is a school webapp where students can create mockup quizzes, participate in exams that teachers publish, and write didactical content that gets voted by other students and teachers.
I want to add some gamification features to make the app more interesting and to incentivize participation and usage of the various features: for example, each student will have a personal "reputation" score, and gain points upon completing certain actions--a student may gain points when completing a quiz with a high score, when submitting some content, or when receiving upvotes to such content.
The tricky part is I want to be able to have this logic be as separate as possible from the existing codebase, for various reasons: separation of concerns, ability to plug the engine in/out if needed, ability to easily deactivate features for certain groups of users, etc.
What I'm looking for here is some software engineering advice that's also Django-specific. Here's a high level description of what I'm thinking of doing--I'd like some advice on the approach.
- create a new gamification app. Here I will have models that describe a change in reputation for a user and possibly other related events. The app should also send notifications when gamification-related events occur
- from the gamification app, expose a callback-based interface, which the other primary app can call into to dispatch events
- use the django-lifecycle package to call the callbacks from gamification when triggers occur
This way, my existing models would only get touched to register the triggers from django-lifecycle (similar to signals). For example, let's say I want to give students points when they turn in an assignment. Let's say I have an AssignmentSubmission model to handle assignment submissions. With the added lifecycle hook, it'd look like this:
class AssignmentSubmission(models.Model):
    NOT_TURNED_IN = 0
    TURNED_IN = 1
    STATES = ((NOT_TURNED_IN, 'NOT_TURNED_IN'), (TURNED_IN, 'TURNED_IN'))

    user = models.ForeignKey(User, on_delete=models.CASCADE)
    assignment = models.ForeignKey(Assignment, on_delete=models.CASCADE)
    state = models.PositiveSmallIntegerField(choices=STATES, default=NOT_TURNED_IN)

    @hook(AFTER_UPDATE, when="state", was=NOT_TURNED_IN, is_now=TURNED_IN)
    def on_turn_in(self):
        get_gamification_interface().on_assignment_turn_in(self.user)
The on_assignment_turn_in method might look something like:
def on_assignment_turn_in(user):
    ReputationIncrease.objects.create(user=user, points=50)
    notifications.notify(user, "You gained 50 points")
This is pretty much just a sketch to give an idea.
I am unsure how get_gamification_interface() would work. Should it return a singleton? Maybe instantiate an object? Or return a class with static methods? I think it'd be best to have a getter like this as opposed to manually importing methods from the gamification app, but maybe it could also create too much overhead.
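One possibility I can think of (just a sketch of what I mean, with a made-up GamificationService class) would be a lazily created module-level singleton:

# gamification/interface.py
_interface = None

def get_gamification_interface():
    global _interface
    if _interface is None:
        # GamificationService is hypothetical: whatever object exposes
        # on_assignment_turn_in() and the other event callbacks.
        from gamification.services import GamificationService
        _interface = GamificationService()
    return _interface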
What's a good way to handle adding "pluggable" features to a project that are inherently entangled with existing models and business logic while also touching those as little as possible?
The foreign key approach is fine. You can easily chain and link queries using information from existing tables, and you can even avoid touching the original code by importing your models into the new app. You can use Django signals in your new app and ditch the django-lifecycle extension to avoid adding lines to your core models. I used the following approach to keep track of modified records in a table: take a TrackedModel with fields field_one, field_two, field_n..., which will be tracked by one of your new app's models, namely RecordTrackingModel:
from datetime import datetime

from django.db import models
from django.db.models.signals import post_save  # I'm choosing post_save just to illustrate.
from django.dispatch import receiver

from parent_app.models import TrackedModel  # The model you want to track from a parent app.


class RecordTrackingModel(models.Model):
    record = models.ForeignKey(TrackedModel, verbose_name="Tracked Model", on_delete=models.CASCADE)
    field_one = models.TextField(verbose_name="Tracked Field One", null=True, blank=True)  # Use same field type as the original
    field_two = models.TextField("Tracked Field Two")
    field_n = ...  # and so on for the remaining tracked fields
    notes = models.TextField(verbose_name="notes", null=True, blank=True)
    created = models.DateTimeField(verbose_name="Record creation date", auto_now=False, auto_now_add=True)


@receiver(post_save, sender=TrackedModel)  # Listen for the save signal coming from a saved or updated TrackedModel record.
def modified_records(sender, instance, **kwargs):
    record = instance
    tracked_field_one = instance.field_one
    tracked_field_two = instance.field_two
    tracked_field_n = another_function(instance.field_n)  # an external function that could take a field's content as input.
    note = 'Timestamp: ' + str(datetime.now().isoformat(timespec='seconds'))
    track_record = RecordTrackingModel.objects.create(
        record=record, field_one=tracked_field_one, field_two=tracked_field_two,
        field_n=tracked_field_n,  # ... and the rest of the tracked fields
        notes=note)
    return track_record
There's no need to add functions to your pre-existing models, as the signal dispatcher triggers whenever a save or delete signal comes from TrackedModel. You can then place "if" statements to decide whether or not to perform actions based on field values, e.g. just do nothing if an AssignmentSubmission record has a "Not turned in" status.
Check Django signals reference for more information about when they trigger.
Additionally, I would suggest changing the "state" field to a boolean type and renaming it to something more explicit like "is_turned_in" for ease of use; it will simplify your forms and code. Edit: for more than 2 choices (non-boolean), I prefer using a ForeignKey instead. It lets you modify the choices over time easily from the admin site.
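For instance (just a sketch; the model names are made up), the choices could live in a small model of their own and be edited from the admin:

class SubmissionState(models.Model):
    name = models.CharField(max_length=30, unique=True)

    def __str__(self):
        return self.name


class AssignmentSubmission(models.Model):
    # ... other fields as before ...
    state = models.ForeignKey(SubmissionState, on_delete=models.PROTECT)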
Edit:
Another approach could be mirroring the original models in your gamification app and calling for a mirror record update whenever the save method is used on the original model.
gamification_app/models.py:
from django.db import models
from django.db.models.signals import post_save  # I'm choosing post_save just to illustrate.
from django.dispatch import receiver

from parent_app.models import OriginalModel  # The model you want to track from a parent app.


def gamification_function(input, parameters):
    output = ...  # your gamification logic
    return output


class MirrorModel(models.Model):
    original_model = models.OneToOneField(OriginalModel, verbose_name="Original Model", on_delete=models.CASCADE)
    field_one = ...  # Same type as the original
    field_two = ...
    field_n = ...

    # Keeping the hook style here requires django-lifecycle (LifecycleModelMixin):
    @hook(AFTER_UPDATE, when="field_n", was=OPTION_1, is_now=OPTION_2)
    def on_turn_in(self):
        gamification_function(self.field, self.other_field)


@receiver(post_save, sender=OriginalModel)  # Listen for the save signal coming from the original app's record.
def mirror_model_update(sender, instance, **kwargs):
    pk = instance.pk
    if MirrorModel.objects.filter(original_model__pk=pk).exists():
        mirror_model = MirrorModel.objects.get(original_model__pk=pk)
        mirror_model.field_one = instance.field_one  # Map field values
        # ... other logic ...
        mirror_model.save()  # This should trigger your hook
    else:
        mirror_model = MirrorModel(original_model=instance, field_one=instance.field_one)  # ... and the rest of the fields
        mirror_model.save()  # This should trigger your hooks as well, but for a new record
This way you can decouple the gamification app, and you can even choose not to keep a record of all (or the same) fields as the original model, only the fields your functionality needs.
Your idea was good.
In the gamification app, add your views, protect them with LoginRequiredMixin, and extend them with a check that a related record in the AssignmentSubmission table exists and is turned in.
In this way you have 100% separated gamification views, models, logic, etc.
In the existing views you can add a link to a gamification view.

Django: assign foreignkey field with object?

models.py
class Order(TimeStampedModel):
    name = models.CharField(max_length=50)


class Payment(TimeStampedModel):
    order = models.ForeignKey(
        'orders.Order',
        null=True,
        blank=True,
        unique=True,
    )
What I want to do is to create a Payment first and add it to an Order.
Let's say the Order has already been created, without a payment.
<1>
In [1]: order = Order.objects.first()
In [2]: payment = Payment.objects.create()
In [3]: order.payment = payment
In [4]: order.save()
<2>
In [7]: order.payment_set.add(payment)
I'd like to know what the difference is between <1> and <2>. Which one is the right way?
Your first example can be rewritten to reduce DB usage (as currently written, it does an INSERT with a NULL order, followed by a separate UPDATE). This should be better:
order = Order.objects.first()
payment = Payment.objects.create(order=order)
Now, to the question. Both approaches are almost identical. However, there are differences if you are using Django >= 1.9: .add() now performs a bulk update by default (via QuerySet.update()) instead of calling save() on each instance. This means that if you have any signals like pre_save or post_save connected to your Payment model, they won't be executed unless you explicitly specify bulk=False:
order.payment_set.add(payment, bulk=False)
I think you may have your relation the wrong way around in your first scenario; lines 3 and 4 should be:
payment.order = order
payment.save()
1. Setting the FK on an instance.
The save method of your Payment is called: you are associating a single Payment with a single Order, and Payment.save() fires the save-related signals.
2. Using the RelatedManager.add method.
You can actually add multiple associations at once using this method. So for example you could associate multiple Payments with the one Order using:
order.payment_set.add(payment_1, payment_2, ... payment_N)
Importantly, since Django 1.9 the Payments are by default associated without their save() method being called, so you will not get save-related signals firing.
On the importance of save signals firing:
It's very common to use signals in Django applications. They provide a powerful way to add functionality to your application without cluttering up your model logic. If your application is using them, beware of adding associations with the RelatedManager.add approach, as the save-related signals will not fire.
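For example (a minimal sketch; the receiver below is made up), a handler like this fires for payment.save() and for add(payment, bulk=False), but not for the default bulk add(), which issues a single UPDATE:

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Payment)
def payment_saved(sender, instance, created, **kwargs):
    # Runs whenever Payment.save() is called; a bulk add() skips save()
    # entirely, so this never runs in that case.
    print("Payment %s saved (created=%s)" % (instance.pk, created))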

How to track changes when using update() in Django models

I'm trying to keep track of the changes whenever a field is changed.
I can see the changes in Django Admin History whenever I use the .save() method, but whenever I use the .update() method it does not record whatever I changed in my object.
I want to use update() because it can change multiple fields at the same time. It makes the code cleaner and more efficient (one query, one line...)
Right now I'm using this:
u = Userlist.objects.filter(username=user['username']).update(**user)
I can see all the changes when I do
u = Userlist.objects.get(username=user['username'])
u.lastname=lastname
u.save()
I'm also using django-simple-history to see the changes.
From the docs:
Finally, realize that update() does an update at the SQL level and, thus, does not call any save() methods on your models, nor does it emit the pre_save or post_save signals (which are a consequence of calling Model.save()).
update() works at the DB level, so Django admin cannot track changes when updates are applied via .update(...).
If you still want to track the changes on updates, you can use:
for user in Userlist.objects.filter(age__gt=40):
    user.lastname = 'new name'
    user.save()
This is however more expensive and is not advisable if the only benefit is tracking changes via the admin history.
Here's how I've handled this and it's worked well so far:
from django.forms.models import model_to_dict

# get the current model instance to update
instance = UserList.objects.get(username=username)

# convert the object to a dict of its current field values
obj_dict = model_to_dict(instance)

# create an (unsaved) instance of the model holding this old data
old_instance = UserList(**obj_dict)

# update the model instance (there are multiple ways to do this)
UserList.objects.filter(username=username).update(**user)

# get the updated object
updated_object = UserList.objects.get(username=username)

# get the list of field names on the model class
my_model_fields = [field.name for field in UserList._meta.get_fields()]

# keep only the fields whose values differ between the two instances
differences = list(filter(
    lambda field: getattr(updated_object, field, None) != getattr(old_instance, field, None),
    my_model_fields))
The differences variable gives you the list of fields that differ between the two instances. I also found it helpful to specify which model fields I don't want to check for differences (e.g. we know updated_date will always change, so we don't need to keep track of it):
skip_diff_fields = ['updated_date']
my_model_fields = []
for field in UserList._meta.get_fields():
    if field.name not in skip_diff_fields:
        my_model_fields.append(field.name)

Django making sure user and user profile have same pk

Right now I'm using Django's built in admin system to manage users, to which I've attached a profile to contain additional data using the following:
class Profile(models.Model):
    user = models.OneToOneField(User, editable=False)
    # Data fields here...
As it stands the User and Profile pk (and accordingly id number) will be the same if and only if the profile is created right after the user is created. I could guarantee that this would be the case during the registration process, and while that would cover most uses, creating users with the admin interface could cause mismatched ids to occur. Thus this does not seem like a very robust way to solve this problem and I'd like to hardcode the pk's to be the same. I'm not sure how to do this.
I thought the following would work:
profile_id = models.IntegerField(default=user.pk, editable=False,
                                 primary_key=True)
But it gives me the error:
AttributeError: 'OneToOneField' has no attribute 'pk'
What's the best way to guarantee that the profile and user have the same pk? Note: I'd really rather not deal with extending the base user model as using the OneToOneField to link the two seems to be sufficient for all my needs.
Thanks!
[edit]
My reasoning for asking the question:
My immediate problem was that I wanted a dictionary of values of the User's Profile, which I was retrieving usingprofile_values = Profile.objects.filter(pk=user.id).values()[0]. This highlighted the bug, and I "hacked" around it last night using pk=user.profile.id instead. In the light of the morning this does not seem like such a terrible hack. However, it seems like having pk discrepancies could lead to quiet and hard to catch bugs down the line, and thus forcing them to match up would be a Good Idea. But I'm new to Django so I'd entirely accept that it is, in fact, never a problem if you're writing your code correctly. That said, for almost academic reasons, I'd be curious to see how this might be solved.
[/edit]
As you already acknowledge, it was never really a problem, because we have a OneToOne mapping between the two models.
So when you need to get the Profile object corresponding to a User:
profile_values = Profile.objects.get(user=user)
assuming,
class Profile(models.Model):
    user = models.OneToOneField(User)
    ...
If your column name is not user, then use the corresponding name in the get query.
Still, if you are curious as to how to achieve the same pk for both models, we can set up a signal on every save of the User model. See the documentation.
def create_profile(sender, **kwargs):
    if kwargs["created"]:
        p = Profile(user=kwargs["instance"])  # ... plus any other initial field values
        p.save()

django.db.models.signals.post_save.connect(create_profile, sender=User)
create_profile() will be called every time any User object is saved.
In this function, we create Profile object only if a new User instance has been created.
If we start from a blank slate, then I think this will always make sure that a Profile exists for every User and is created right after the User, which in turn will give the same pk for both models.
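For completeness (this is separate from the signal approach above, just a sketch), another way to guarantee matching pks is to make the OneToOneField itself the primary key, so Profile simply reuses the User's pk:

class Profile(models.Model):
    user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE)
    # Data fields here...
    # Profile.pk is now always equal to the related User's pk.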
pk is a parameter in a filter() query, but not a field name. You probably want to use user.id.

Django form save with post_save signal causing conflict

I have a Physical_therapy_order model and an Event model (an Event has a ForeignKey to Physical_therapy_order). I have a view which allows a user to create a new Event. It also has a form with 3 fields from the Physical_therapy_order model.
def PTEventCreateView(request, pt_pk):
    # get the pt order and create a form for that order
    pt_order = get_object_or_404(Physical_therapy_order, pk=pt_pk)
    ptform = PT_schedule_form(instance=pt_order)
    if request.POST:
        eventform = PTEventForm(data=request.POST)
        ptform = PT_schedule_form(data=request.POST, instance=pt_order)
        if eventform.is_valid() and ptform.is_valid():
            # I do some checks here that compare data across the two forms.
            # If everything looks good I set keep_saving=True so I can
            # continue to save all the data provided in the two forms.
            if keep_saving:
                ptform.save()
                eventform.save()
                # ...send user to success page
This works just fine, EXCEPT: my PTEvent model has a function attached to its post_save signal. This function pulls the event's related pt_order and makes some modifications to it. Now, if I save the eventform first, the changes from the signal don't happen. If I save the ptform first, the ptform changes get discarded and the changes from the signal happen.
THIS IS IMPORTANT: the ptform edits three entirely different fields than the post_save signal does, so it's not like they're modifying the same data, only the same model instance. I thought a form only saves the fields in its Meta.fields attribute. Why would this be happening? Also, if I save the ptform first, then when the eventform is saved, shouldn't the signal use the updated Physical_therapy_order? I'm not sure if I'm even on the right track.
I think this is because of cached objects.
What I would suggest is:
1. Save eventform first.
2. Get a fresh instance of pt_order, either by querying the DB or through the saved instance from eventform.
3. Re-create the ptform with that fresh instance and save it.
Sample code change:
# your code
if keep_saving:
    evt = eventform.save()
    # I'm not sure of the exact name of your pt_order field on the Event model; change appropriately
    newptform = PT_schedule_form(data=request.POST, instance=evt.pt_order)
    newpt = newptform.save()