Model datetime field validation for fields with auto_now - django

I am very new to Django and Python in general, and I was trying to learn rest_framework to create RESTful APIs.
So I have a model like this:
class Listing(models.Model):
    listingid = models.BigIntegerField(primary_key=True)
    sellerid = models.IntegerField()
    createdon = models.DateTimeField(auto_now_add=True, editable=False)
    expirydate = models.DateTimeField(null=True)
    validationstatus = models.SmallIntegerField(default=0)
    listingstatus = models.SmallIntegerField(
        choices=((0, 'Active'),
                 (1, 'Hidden'),
                 (2, 'Suspended'),
                 (4, 'Expired'),
                 (5, 'Deleted'),
                 ),
        default=0)
Now I need to validate that the expirydate is always greater than the createdon date.
I know I can do this in the views, but I guess that would not be a good idea, since then the validation only exists in the views.
So that leaves me with the serializers and the model.
I know I can override the save method to check this, like so:
class MasterListing(models.Model):
    # fields here..

    def save(self, *args, **kwargs):
        if self.expirydate > self.createdon:
            return super().save(*args, **kwargs)
        raise ValidationError("Expiry date must be later than the created date")
but I don't know if this would be a good idea, since now I am raising an error which the programmer may forget to catch. I am also not sure if the fields will be populated when this method runs.
Another way I read about in the docs is the clean method, which I couldn't really understand so well.
Can anyone guide me on how to handle situations like this when working with rest_framework?
Some of the things I have read about validation till now:
Serializer Validation
Field level validation
Validators
Model Validation
override clean method
override save method
Just do it manually in the views
There seem to be so many options, and I might have even left a few out; I could not clearly get an idea of when to use which.
I am sorry if this is a little on the beginner level, but I am new to frameworks and Django seems to be very different from what I was doing in PHP. Any advice is welcome!
Edit: I will be using Django for rest_framework only and nothing else, since we only want to build RESTful APIs.

Django REST framework used to call Model.clean, which was previously the recommended place for putting validation logic that needed to be used in Django forms and DRF serializers. As of DRF 3.0, this is no longer the case and Model.clean will no longer be called during the validation cycle. With that change, there are now two possible places to put custom validation logic that works on multiple fields.
If you are only using Django REST framework for validation, and you don't have any other areas where data needs to be manually validated (like a ModelForm, or in the Django admin), then you should look into Django REST framework's validation framework.
class MySerializer(serializers.ModelSerializer):
    # ...

    def validate(self, data):
        # The keys can be missing in partial updates
        if "expirydate" in data and "createdon" in data:
            if data["expirydate"] < data["createdon"]:
                raise serializers.ValidationError({
                    "expirydate": "Expiry date must be later than the created date",
                })
        return super(MySerializer, self).validate(data)
If you need to use Django REST framework in combination with a Django component that uses model-level validation (like the Django admin), you have two options.
Duplicate your logic in both Model.clean and Serializer.validate, violating the DRY principle and opening yourself up to future issues.
Do your validation in Model.save and hope that nothing strange happens later (a rough sketch of this follows below).
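For the second option, a minimal sketch might look like this (assuming the Listing model from the question; full_clean() runs field validation and Model.clean() before every save):
from django.db import models

class Listing(models.Model):
    # fields as in the question...

    def save(self, *args, **kwargs):
        # full_clean() raises django.core.exceptions.ValidationError on invalid data,
        # so any code path that saves the model goes through validation
        self.full_clean()
        super().save(*args, **kwargs)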
but I don't know if this would be a good idea, since now I am raising an error which the programmer may forget to catch.
I would venture to say that it is better for the error to be raised than for invalid data to be silently saved. Once you start allowing invalid data, you have to put in checks everywhere the data is used to compensate for it. If you never allow it to get into an invalid state, you don't run into that issue.
I am also not sure if the fields will be populated when this method runs.
You should be able to assume that if an object is going to be saved, the fields have already been populated with their values.

If you would like to use both model validation and serializer validation with Django REST Framework 3.0, you can force your serializer to use the model validation like this (so you don't repeat yourself):
import django.core.exceptions
import rest_framework.exceptions
from rest_framework import serializers

class MySerializer(serializers.ModelSerializer):
    def validate(self, data):
        # copy the incoming values onto the model instance, then reuse Model.clean()
        # (note that self.instance is only set on updates, not on creates)
        for key, val in data.items():
            setattr(self.instance, key, val)
        try:
            self.instance.clean()
        except django.core.exceptions.ValidationError as e:
            raise rest_framework.exceptions.ValidationError(e.message_dict)
        return data
I thought about extracting my model's clean() logic into a separate function and having it raise either django.core.exceptions.ValidationError or rest_framework.exceptions.ValidationError based on a source parameter (or something like that), then calling it from both the model and the serializer. But that hardly seemed better to me.

If you want to make sure that your data is valid on the lowest level, use model validation (it should be run by the serializer class as well as by (model)form classes, e.g. the admin).
If you want the validation to happen only in your API/forms, put it in a serializer/form class. So the best place to put your validation would be Model.clean().
Validation should never actually happen in views, as they shouldn't get too bloated and the real business logic should be encapsulated in either models or forms.
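For example, a minimal sketch of Model.clean() for the Listing model from the question (note that, as discussed above, DRF 3 serializers will not call it automatically; forms and an explicit full_clean() will):
from django.core.exceptions import ValidationError
from django.db import models

class Listing(models.Model):
    # fields as in the question...

    def clean(self):
        super().clean()
        # createdon is auto_now_add, so it may still be unset before the first save
        if self.expirydate and self.createdon and self.expirydate <= self.createdon:
            raise ValidationError({"expirydate": "Expiry date must be later than the created date"})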

Related

When and where is `Field.blank` checked by DRF?

I have a model
class SomeModel(models.Model):
    emails = ArrayField(models.EmailField(), default=list)
And let's say I have the following Serializer of the model:
class SomeModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = SomeModel
        fields = ['emails']
The emails field is not blank-able, i.e. it's required to set a value for it when submitting a form for the model, or when making changes on its admin page.
My understanding is that DRF also relies on Django's internal machinery to validate whether emails is missing from the serializer data. But the thing is that I can't find where (and when) this happens.
I've found that DRF is not calling the Model's clean() method anymore (link). But what baffles me is that changing the blank value on the field seems to have a direct impact on the serializer. I switched to blank=True, and the serializer would then allow saving without that field... Then I switched back to blank=False, and the serializer would fail if emails is not present.
So do you have any idea of when and where DRF checks for a field's blank value?
Thanks!
As far as I know, it simply doesn't. Those are only used across forms and the Django admin interface.
I always specify those things on the serializer level, by setting the appropriate arguments for my fields (doc), in this case it would be allow_blank.
I am building REST APIs with Django, and the only case where the blank property on the model field catches me is when fiddling around on the admin page.
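For example, a minimal sketch of declaring the field explicitly on the serializer (assuming the SomeModel from the question; for a list-type field the relevant argument is allow_empty, while allow_blank applies to plain CharFields):
from rest_framework import serializers

class SomeModelSerializer(serializers.ModelSerializer):
    # declared explicitly so the "must not be empty" rule lives on the serializer,
    # independent of the model field's blank setting
    emails = serializers.ListField(child=serializers.EmailField(), allow_empty=False)

    class Meta:
        model = SomeModel
        fields = ['emails']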
However, there appears to be a package that could be of interest to you:
django-seriously.
I haven't used it, but it appears to call full_clean() on every save().
Of course, this has the disadvantage that you will probably lose DRF's nice error messages.

How to serialize data not coming from the request and properly validate it (ModelSerializer in Django Rest Framework)?

Using Django Rest Framework 3, Function Based Views, and the ModelSerializer (more specifically the HyperlinkedModelSerializer).
When a user submits a form from the client, I have a view that takes the request data, uses it to call to an external API, then uses the data from the external API to populate data for a model serializer.
I believe I have this part working properly, and from what I read, you are supposed to use context and validate().
In my model serializer, I have so far just this one overridden function:
from django.core.validators import URLValidator

def validate(self, data):
    if 'foo_url' in self.context:
        data['foo_url'] = self.context['foo_url']
        URLValidator(data['foo_url'])
    if 'bar_url' in self.context:
        data['bar_url'] = self.context['bar_url']
        URLValidator(data['bar_url'])
    return super(SomeSerializer, self).validate(data)
Just in case, the relevant view code is like so:
context = {'request': request}
...
context['foo_url'] = foo_url
context['bar_url'] = bar_url

s = SomeSerializer(data=request.data, context=context)
if s.is_valid():
    s.save(user=request.user)
    return Response(s.data, status=status.HTTP_201_CREATED)
Now assuming I have the right idea going (my model does populate its foo_url and bar_url fields from the corresponding context data), where I get confused is how the validation is not working. If I give it bad data, the model serializer does not reject it.
I assumed that in validate(), by adding the context data to the data, the data would be checked for validity when is_valid() was called. Maybe that's not the case; when I print out s (after constructing the serializer but before calling is_valid()) there is no indication that the request data has been populated with the context data from validate() (I don't know if it should be).
So I tried calling the URLValidators directly in the validate() method, but that still doesn't seem to work. There are no errors despite giving it invalid data like 'asdf' or an empty Python dict ({}). My test assertions show that the field indeed contains invalid data like '{}'.
What would be the proper way to do this?
You're not calling the validator.
By doing URLValidator(data['bar_url']) you're actually building a URL validator with custom schemes (see the docs) and that's it. The proper code should be:
URLValidator()(data['bar_url'])
Where you build a default url validator and then validate the value.
But anyway I would not use this approach; what I would do instead is directly add the extra data (not using the context) and let DRF do the validation by declaring the right fields:
# Somewhere in your view
request.data['bar_url'] = 'some_url'

# In serializer:
class MySerializer(serializers.ModelSerializer):
    bar_url = serializers.URLField()

    class Meta:
        fields = ('bar_url', ...)
To answer your comment
I also don't understand how this also manages to make it past the
Django's model validation
See this answer:
Why doesn't django's model.save() call full_clean()?
By default Django does not automatically call the .full_clean method, so you can save a model instance with invalid values (unless the constraints are enforced at the database level).
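For illustration, a minimal sketch of the difference (the model name is hypothetical, assuming a URLField named bar_url as in the question):
from django.core.exceptions import ValidationError

instance = MyModel(bar_url='asdf')   # hypothetical model with a URLField
instance.save()                      # succeeds: save() does not run field validation

try:
    instance.full_clean()            # explicit validation catches the invalid URL
except ValidationError as e:
    print(e.message_dict)            # will include an error for 'bar_url'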

Django/DRF wide validation

This is more of a conceptual question. I am not looking for code sample answers. Simply an insight into validation when working with Django and DRF.
Consider the following the model:
class Store(models.Model):
    id = models.CharField()
    products = JSONField(default='[]')
    regexp = models.CharField(max_length=255)
I am using Django REST Framework and I have a serializer which serializes this model for a StoreView.
I have some validation I would like to enforce. For example, I want products to take the form: {"id":x, "optional-title":y} and I would like to enforce some regex validation for regexp.
How would I enforce validation for this model in one single place and still get correct error returns? By 'correct error returns', I mean that I should return a 400 BAD REQUEST when I receive a bad payload from an API client, but I should also raise a normal Django ValidationError if I create an object at the model level.
I can't see the advantage of serializer-level validation. It appears to me that I would just need to duplicate my validations at the model level if I want to guarantee that a bad object never gets into the DB.
You can define a validate_<field> method within the serializer class:
def validate_regexp(self, value):
    # your regex validation goes here
    # valid_regex = .....
    if not valid_regex:
        raise serializers.ValidationError("Regex invalid")
    return value

Modify data before validation step with django rest framework

I have a simple Model that stores the user that created it with a ForeignKey. The model has a corresponding ModelSerializer and ModelViewSet.
The problem is that when the user submits a POST to create a new record, the user should be set by the backend. I tried overriding perform_create on the ModelViewSet to set the user, but it actually still fails during the validation step (which makes sense). It comes back saying the user field is required.
I'm thinking about overriding the user field on the ModelSerializer to be optional, but I feel like there's probably a cleaner and more efficient way to do this. Any ideas?
I came across this answer while looking for a way to update values before the control goes to the validator.
This might be useful for someone else - here's how I finally did it (DRF 3) without rewriting the whole validator.
class MyModelSerializer(serializers.ModelSerializer):
    def to_internal_value(self, data):
        data['user'] = '<Set Value Here>'
        return super(MyModelSerializer, self).to_internal_value(data)
For those who're curious, I used this to round decimal values to precision defined in the model so that the validator doesn't throw errors.
You can mark the user field as read_only.
This will ensure that the field is used when serializing a representation, but is not used when creating or updating an instance during deserialization.
In your serializers, you can do something like:
class MyModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        extra_kwargs = {
            'user': {'read_only': True}  # define the 'user' field as read-only
        }
You can then override perform_create() and set the user as per your requirements, as sketched below.
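A minimal sketch of that override, assuming a ModelViewSet using the MyModelSerializer above:
from rest_framework import viewsets

class MyModelViewSet(viewsets.ModelViewSet):
    serializer_class = MyModelSerializer

    def perform_create(self, serializer):
        # 'user' is read-only on the serializer, so it is supplied here from the
        # authenticated request instead of from the client payload
        serializer.save(user=self.request.user)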
Old topic, but it could be useful for someone.
If you want to alter your data before the serializer's validation runs:
serializer.initial_data["your_key"] = "your data"
serializer.is_valid(raise_exception=True)

M2M relationship validation in Django

I have these two models:
class Test(models.Model):
    problems = models.ManyToManyField('Problem')
    ...

class Problem(models.Model):
    type = models.CharField(max_length=3, choices=SOME_CHOICES)
    ...
Now, while adding Problems to a Test, I need to limit the number of particular type of problems in the Test. E.g. a Test can contain only 3 Problems of type A, and so on.
The only way to validate this seems to be by using m2m_changed signal on Test.problems.through table. However, to do the validation, I need to access the current Problem being added AND the existing Problems - which doesn't seem to be possible somehow.
What is the correct way to do something like this? M2M validation seems to be a topic untouched in the docs. What am I missing?
You are right on the part that you have to register an m2m_changed signal function like the following:
def my_callback(sender, instance, action, reverse, model, pk_set, **kwargs):
If you read the documentation you'll see that sender is the object-model that triggers the change and model is the object-model that will change. pk_set will give you the primary keys that will be the new reference for your model. So for your Test model you have to do something like this:
@receiver(m2m_changed)
def my_callback(sender, instance, action, reverse, model, pk_set, **kwargs):
    if action == "pre_add":
        problem_types = [x.type for x in model.objects.filter(id__in=pk_set)]
        if problem_types.count("A") > some_number:
            raise SomeException
Mind though that an exception at that level will not be caught if you're entering data from the Django admin site. To be able to provide user-friendly errors for Django admin data entry, you'll have to register your own form as the admin form. In your case, you need to do the following:
class ProblemTypeValidatorForm(ModelForm):
    def clean(self):
        super(ProblemTypeValidatorForm, self).clean()
        problem_types = [x.type for x in self.cleaned_data.get("problems") if x]
        if problem_types.count("A") > some_number:
            raise ValidationError("Cannot have more than {0} problems of type {1}"
                                  .format(some_number, "A"))
then in your admin.py
@admin.register(Test)
class TestAdmin(admin.ModelAdmin):
    form = ProblemTypeValidatorForm
Now keep in mind that these are implementations at two different levels. Neither will protect you from someone manually doing this:
one_test_object.problems.add(*Problem.objects.all())
one_test_object.save()
Personal opinion:
So keeping the above in mind, I suggest you go with the ModelForm & ModelAdmin approach, and if you're providing an API for CRUD operations, make your validations there as well (a sketch of that follows below). Nothing can protect you from someone entering stuff in your db through the Django shell. If you want that kind of guarantee, you should go directly to your db and write some kind of trigger script. But keep in mind that your db is really just data; your backend is the one with the business logic, so you shouldn't try to push business rules down to the db level. Keep the rules in your backend by validating your data at the spots where create/update happens.
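For the API side, a minimal sketch of the same check on a DRF serializer (the serializer name and the limit of 3 are hypothetical; a ModelSerializer maps the M2M to a list of Problem instances during validation):
from rest_framework import serializers

class TestSerializer(serializers.ModelSerializer):
    class Meta:
        model = Test
        fields = ['problems']

    def validate_problems(self, problems):
        # problems arrives as a list of Problem instances
        type_a = [p for p in problems if p.type == "A"]
        if len(type_a) > 3:
            raise serializers.ValidationError("Cannot have more than 3 problems of type A")
        return problems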
You can't override save for a M2M I'm afraid, but you can achieve what you want.
Use the m2m_changed signal where the action is pre_add.
The 'instance' kwarg will be the Test model the problem is being added to.
The 'pk_set' kwarg will contain the primary keys of the Problems being added (1 or more).
The validation logic will be something like this:
for pk in kwargs['pk_set']:
    p_type = Problem.objects.get(pk=pk).type
    type_count = kwargs['instance'].problems.filter(type=p_type).count()
    if p_type == 'A' and type_count >= 3:
        raise Exception("cannot have more than 3 Problems of type A")
[sorry don't have django on hand to verify the query]