I have a pluggable app for Django that provides a few forms. The forms have a few settings associated with them that control some of the forms' behavior (e.g., labels, initial values, and so on).
I've followed a blog post to set the default settings for the pluggable app, and that works well under normal circumstances. However, in tests, where I provide overrides, the overrides do not get applied at all.
Here's the code for one of the forms:
from django import forms
from django.conf import settings

# `currencies` is a module-level list of (value, label) pairs defined elsewhere.
# This block runs at import time, when the module is first loaded:
if settings.CURRENCY_FORM_INCLUDE_EMPTY:
    currencies.insert(0, (settings.CURRENCY_FORM_EMPTY_VALUE,
                          settings.CURRENCY_FORM_EMPTY_LABEL))

class CurrencyForm(forms.Form):
    currency = forms.ChoiceField(
        required=False,
        choices=currencies,
        label=settings.CURRENCY_FORM_LABEL,
        initial=settings.CURRENCY_FORM_INITIAL_VALUE)
Obviously, the moment the class is defined, settings like the label and initial value are evaluated immediately, so overrides have no effect on them.
I ended up with a rather hackish solution of evaluating all settings in the form's __init__ method:
class CurrencyForm(forms.Form):
    currency = forms.ChoiceField(required=False,
                                 choices=())

    def __init__(self, *args, **kwargs):
        super(CurrencyForm, self).__init__(*args, **kwargs)
        # All settings are read here, at instantiation time, so that
        # overridden settings (e.g. in tests) take effect.
        choices = list(currencies)
        if settings.CURRENCY_FORM_INCLUDE_EMPTY:
            choices.insert(0, (settings.CURRENCY_FORM_EMPTY_VALUE,
                               settings.CURRENCY_FORM_EMPTY_LABEL))
        self.fields['currency'].label = settings.CURRENCY_FORM_LABEL
        self.fields['currency'].choices = choices
        self.fields['currency'].initial = kwargs.get(
            'initial', {}
        ).get('currency', settings.CURRENCY_FORM_INITIAL_VALUE)
Obviously, there are lots of moving parts, and I'm not very happy with this code. How do I properly test the settings' effect on the forms without resorting to these hacks?
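For context, the kind of test I would like to have pass looks roughly like this (a sketch; the setting name matches the form above, everything else is illustrative):
from django.test import TestCase, override_settings

from myapp.forms import CurrencyForm  # module path is illustrative


class CurrencyFormSettingsTest(TestCase):
    @override_settings(CURRENCY_FORM_LABEL='Pick a currency')
    def test_label_override_is_applied(self):
        form = CurrencyForm()
        # Passes with the __init__-based form; fails with the class-level
        # version because the label was baked in at import time.
        self.assertEqual(form.fields['currency'].label, 'Pick a currency')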
I don't quite get what you're trying to do, but as a piece of advice, you should think about this in a more object-oriented way. For example, instead of using that if statement, you should define everything and just plug in what you use.
If you have two forms and want to use one of them at a time, you could have a setting like:
settings.FORM_TO_USE = CurrencyForm
And when you want to instantiate it, you can do:
def my_view(request):
    form = settings.FORM_TO_USE()
Lastly, try to keep the tests separate from the configuration. If you're unit-testing Django code, it shouldn't care what the settings are.
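If storing a class object directly in settings feels awkward, a variant of the same idea (not part of the answer above) is to store a dotted path and resolve it with Django's import_string:
from django.conf import settings
from django.utils.module_loading import import_string

# settings.py would contain something like (the path is illustrative):
# CURRENCY_FORM_CLASS = 'myapp.forms.CurrencyForm'

def my_view(request):
    form_class = import_string(settings.CURRENCY_FORM_CLASS)
    form = form_class()
    ...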
I've got a project with quite a few admin actions. Currently I'm registering them like so:
@admin.action(description='Some admin action description')
def do_something_action(self, request, queryset):
    pass
Some of these are being added to the admin class of another app, so I cannot simply add the function directly to the class where it is needed.
The problem is that these actions are shown project-wide, on every admin screen.
How can I stop this behaviour and manually set them only where they are wanted? If it matters, it's Django 3.2.
As I couldn't find out why these actions are shown project-wide, I decided to manually override the get_actions method.
First, a mixin was created to deal with excluding certain actions.
class ExcludedActionsMixin:
    '''
    Exclude admin actions. On the admin class, you're expected to have
    excluded_actions = [...]
    Keep in mind that this breaks the auto-discovery of actions:
    you will need to set the ones you actually want manually.
    '''
    def get_actions(self, request):
        # We want to exclude some actions from this admin. Django seems to assign all
        # globally registered actions to every admin in the project by default, but some
        # of the actions here are intended for another app, so that doesn't work for us.
        actions = super().get_actions(request)
        # Rebuild the actions dict, keeping excluded_actions in mind.
        for excluded_action in self.excluded_actions:
            try:
                del actions[excluded_action]
            except KeyError:
                pass
        return actions
This mixin is used both for local overrides in specific apps and to create a 'default' admin base class that excludes the most commonly unwanted actions:
class DefaultAdminActions(ExcludedActionsMixin, admin.ModelAdmin):
    # There are a number of actions we want excluded pretty much everywhere. Instead of
    # setting them again and again, we just declare them here
    # and import DefaultAdminActions instead of admin.ModelAdmin.
    excluded_actions = ['unwanted_action1', 'unwanted_action2', 'unwanted_action3']
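For illustration, using it on a concrete admin would then look something like this (the model and the extra action name are made up):
from django.contrib import admin

from myapp.models import Book  # illustrative model


@admin.register(Book)
class BookAdmin(DefaultAdminActions):
    # Everything from the project-wide default list, plus one more action
    # that only makes sense in another app.
    excluded_actions = DefaultAdminActions.excluded_actions + ['another_unwanted_action']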
Other approaches are more than welcome.
I have these two models:
class Test(models.Model):
    problems = models.ManyToManyField('Problem')
    ...

class Problem(models.Model):
    type = models.CharField(max_length=3, choices=SOME_CHOICES)
    ...
Now, while adding Problems to a Test, I need to limit the number of Problems of a particular type in the Test. E.g., a Test can contain only 3 Problems of type A, and so on.
The only way to validate this seems to be by using the m2m_changed signal on the Test.problems.through table. However, to do the validation, I need to access both the Problems being added AND the existing Problems, which doesn't seem to be possible.
What is the correct way to do something like this? M2M validation seems to be a topic untouched in the docs. What am I missing?
You are right that you have to register an m2m_changed signal handler like the following:
def my_callback(sender, instance, action, reverse, model, pk_set, **kwargs):
If you read the documentation, you'll see that sender is the intermediate (through) model of the relation and model is the class of the objects being added or removed; pk_set gives you the primary keys of those objects. So for your Test model you have to do something like this:
from django.db.models.signals import m2m_changed
from django.dispatch import receiver


# Limiting the receiver to the through model keeps it from firing on every
# M2M change in the project.
@receiver(m2m_changed, sender=Test.problems.through)
def my_callback(sender, instance, action, reverse, model, pk_set, **kwargs):
    if action == "pre_add":
        problem_types = [x.type for x in model.objects.filter(id__in=pk_set)]
        if problem_types.count("A") > some_number:
            raise SomeException
Mind, though, that an exception at that level will not be handled gracefully if you're entering data through the Django admin site. To provide user-friendly errors for Django admin data entry, you'll have to register your own form as the admin form. In your case, you need to do the following:
from django.core.exceptions import ValidationError
from django.forms import ModelForm


class ProblemTypeValidatorForm(ModelForm):
    def clean(self):
        super(ProblemTypeValidatorForm, self).clean()
        problem_types = [x.type for x in self.cleaned_data.get("problems") or [] if x]
        if problem_types.count("A") > some_number:
            raise ValidationError("Cannot have more than {0} problems of type {1}"
                                  .format(some_number, "A"))
Then, in your admin.py:
@admin.register(Test)
class TestAdmin(admin.ModelAdmin):
    form = ProblemTypeValidatorForm
Now keep in mind that these are implementations at two different levels. Neither will protect you from someone manually doing this:
one_test_object.problems.add(*Problem.objects.all())
one_test_object.save()
Personal opinion:
So, keeping the above in mind, I suggest you go with the ModelForm & ModelAdmin approach, and if you're providing an API for CRUD operations, add your validations there as well. Nothing can protect you from someone entering stuff into your db through the Django shell. If you want that kind of protection, you would have to go directly to your db and write some kind of trigger script. But keep in mind that your db holds the data, while your backend holds the business logic, so you shouldn't really try to push business rules down to the db level. Keep the rules in your backend by validating your data at the spots where creates and updates happen.
You can't override save for an M2M relation, I'm afraid, but you can achieve what you want.
Use the m2m_changed signal where the action is pre_add.
The instance kwarg will be the Test instance the Problems are being added to.
The pk_set kwarg will contain the primary keys of the Problems being added (one or more).
The validation logic will be something like this:
for pk in kwargs['pk_set']:
    p_type = Problem.objects.get(pk=pk).type
    type_count = kwargs['instance'].problems.filter(type=p_type).count()
    if p_type == 'A' and type_count >= 3:
        raise Exception("cannot have more than 3 Problems of type A")
[sorry don't have django on hand to verify the query]
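One practical note not covered in either answer: a receiver connected with @receiver only runs if the module defining it gets imported at startup. A minimal sketch of wiring that up in the app config (the app and module names are illustrative):
# myapp/apps.py
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # Importing the module registers the m2m_changed receiver defined there.
        from . import signals  # noqa: F401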
I am very new to Django and Python in general, and I was trying to learn rest_framework to create RESTful APIs.
So I have a model like this:
class Listing(models.Model):
    listingid = models.BigIntegerField(primary_key=True)
    sellerid = models.IntegerField()
    createdon = models.DateTimeField(auto_now_add=True, editable=False)
    expirydate = models.DateTimeField(null=True)
    validationstatus = models.SmallIntegerField(default=0)
    listingstatus = models.SmallIntegerField(
        choices=((0, 'Active'),
                 (1, 'Hidden'),
                 (2, 'Suspended'),
                 (4, 'Expired'),
                 (5, 'Deleted'),
                 ),
        default=0)
Now I need to validate that expirydate is always later than the createdon date.
I know I can do this in the views, but I guess that would not be a good idea, since then the validation would only exist in the views.
So that leaves me with the serializers and the model.
I know I can override the save method to check this, like so:
class MasterListing(models.Model):
    # fields here..

    def save(self, *args, **kwargs):
        if self.expirydate > self.createdon:
            return super().save(*args, **kwargs)
        raise ValidationError("Expiry date must be later than the creation date")
but I don't know if this would be a good idea, since now I am raising an error which the programmer may forget to catch. I am also not sure whether the fields would be populated by the time this method runs.
Another way I read about in the docs is the clean method, which I couldn't really understand very well.
Can anyone guide me on how to handle situations like this when you are working with the rest_framework?
Some of the things I have read about validation so far:
Serializer Validation
Field level validation
Validators
Model Validation
override clean method
override save method
Just do it manually in the views
There seem to be so many options, and I might even have left a few out; I could not get a clear idea of which to use when.
I am sorry if this is a little on the beginner level, but I am new to frameworks, and Django seems to be very different from what I was doing in PHP. Any advice is welcome!
Edit: I will be using Django for rest_framework only and nothing else, since we only want to build RESTful APIs.
Django REST framework used to call Model.clean, which was previously the recommended place for putting validation logic that needed to be used in Django forms and DRF serializers. As of DRF 3.0, this is no longer the case and Model.clean will no longer be called during the validation cycle. With that change, there are now two possible places to put in custom validation logic that works on multiple fields.
If you are only using Django REST framework for validation, and you don't have any other areas where data needs to be manually validated (like a ModelForm, or in the Django admin), then you should look into Django REST framework's validation framework.
from rest_framework import serializers


class MySerializer(serializers.ModelSerializer):
    # ...

    def validate(self, data):
        # The keys can be missing in partial updates
        if "expirydate" in data and "createdon" in data:
            if data["expirydate"] < data["createdon"]:
                raise serializers.ValidationError({
                    "expirydate": "Expiry date must be later than the creation date",
                })
        return super(MySerializer, self).validate(data)
If you need to use Django REST framework in combination with a Django component that uses model-level validation (like the Django admin), you have two options.
Duplicate your logic in both Model.clean and Serializer.validate, violating the DRY principle and opening yourself up to future issues.
Do your validation in Model.save and hope that nothing strange happens later.
but I don't know if this would be a good idea, since now I am raising an error which the programmer may forget to catch.
I would venture to say that it is better for the error to be raised than for knowingly invalid data to be saved. Once you start allowing invalid data, you have to put in checks everywhere the data is used to work around it. If you don't allow it to get into an invalid state, you don't run into that issue.
I am also not sure whether the fields would be populated by the time this method runs.
You should be able to assume that if an object is going to be saved, the fields have already been populated with their values.
If you would like to use both model validation and serializer validation with Django REST Framework 3.0, you can force your serializer to use the model validation like this (so you don't repeat yourself):
import django.core.exceptions
import rest_framework.exceptions
from rest_framework import serializers


class MySerializer(serializers.ModelSerializer):
    def validate(self, data):
        # Note: self.instance is only set on updates; on create you would need
        # to build an unsaved instance from `data` first.
        for key, val in data.items():
            setattr(self.instance, key, val)
        try:
            self.instance.clean()
        except django.core.exceptions.ValidationError as e:
            raise rest_framework.exceptions.ValidationError(e.message_dict)
        return data
I thought about extracting my model's clean() logic into a separate function and having it raise either django.core.exceptions.ValidationError or rest_framework.exceptions.ValidationError, depending on a source parameter (or something like that), and then calling it from both the model and the serializer. But that hardly seemed better to me.
If you want to make sure that your data is valid at the lowest level, use model validation (it is run by (model)form classes, e.g. in the admin; note that, as described above, DRF 3 serializers no longer call it automatically).
If you want the validation to happen only in your API or forms, put it in a serializer or form class. Otherwise, the best place to put your validation is Model.clean().
Validation should never actually happen in views; they shouldn't get too bloated, and the real business logic should be encapsulated in either models or forms.
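For reference, a minimal sketch of what the Model.clean approach could look like for the model in the question (field names are taken from the question; the error wording is illustrative):
from django.core.exceptions import ValidationError
from django.db import models


class Listing(models.Model):
    # ... fields as in the question ...

    def clean(self):
        # createdon is auto_now_add, so it is empty until the first save;
        # only compare when both values are present.
        if self.expirydate and self.createdon and self.expirydate <= self.createdon:
            raise ValidationError(
                {"expirydate": "Expiry date must be later than the creation date."})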
Basically, in a popup (Bootstrap) I would like to show all of the specified pre-populated fields from my model.
I found this code (https://groups.google.com/forum/#!searchin/django-rest-framework/HTMLFormRenderer/django-rest-framework/s24WFvnWMxw/hhmaD6Qw0AMJ)
class CreatePerformanceForm(forms.ModelForm):
    class Meta:
        model = Performance
        fields = ('field1', 'field2')


class PerformanceCreateView(ListCreateAPIView):
    serializer_class = PerformanceCreateSerializer
    model = Performance
    template_name = 'core/perform.html'

    def get(self, request, format=None):
        data = {
            'form': CreatePerformanceForm()
        }
        return Response(data)
My question is the same.
Is there a way to create the form directly from the serializer so I don't have to create a Django form?
I looked at HTMLFormRenderer, but the DRF documentation is quite sparse on this issue.
See this issue. Important part:
There are some improvements that could be made there [to HTMLFormRenderer], notably supporting error messaging against fields, and rendering the serializer directly into html without creating a Django form in order to do so [...]
So basically, HTMLFormRenderer also uses Django forms under the hood. You are also right that the documentation doesn't provide much guidance on it. What's more, it seems that this renderer might change soon. See here. Quote:
Note that the template used by the HTMLFormRenderer class, and the context submitted to it may be subject to change. If you need to use this renderer class it is advised that you either make a local copy of the class and templates, or follow the release note on REST framework upgrades closely.
I know this doesn't help much, but for now there is no better way than the way you did it.
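As a side note, later DRF versions document a way to render a serializer as an HTML form directly via the rest_framework template tags, so a separate Django form isn't strictly needed there. A rough sketch of that approach (the view name is illustrative; PerformanceCreateSerializer is assumed to be the serializer from the question):
from rest_framework import generics
from rest_framework.renderers import TemplateHTMLRenderer
from rest_framework.response import Response


class PerformanceFormView(generics.ListCreateAPIView):
    serializer_class = PerformanceCreateSerializer  # assumed to exist already
    renderer_classes = [TemplateHTMLRenderer]
    template_name = 'core/perform.html'

    def get(self, request, *args, **kwargs):
        # Hand the serializer itself to the template instead of a Django form.
        return Response({'serializer': self.get_serializer()})


# core/perform.html can then render the serializer as a form:
#   {% load rest_framework %}
#   <form action="." method="POST">
#     {% csrf_token %}
#     {% render_form serializer %}
#     <input type="submit" value="Save">
#   </form>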
This is a follow-up on How do you change the default widget for all Django date fields in a ModelForm?.
Suppose you have a very large number of models (e.g. A-ZZZ) that is growing with input from other developers beyond your control, and you want to change the way all date fields are entered (i.e. by using jQuery UI). What's the best way to ensure that all date fields are filled out using that new widget?
One suggestion from the cited question was:
from django import forms
from django.db import models

def make_custom_datefield(f):
    if isinstance(f, models.DateField):
        # return a form field with your custom widget here, e.g. (illustrative):
        return f.formfield(widget=forms.TextInput(attrs={'class': 'datepicker'}))
    else:
        return f.formfield()

class SomeForm(forms.ModelForm):
    formfield_callback = make_custom_datefield

    class Meta:
        # normal modelform stuff here...
        pass
However, is this possible to do when you don't have explicit ModelForms, but your URL patterns come from models directly? I.e., your URL config is like so:
url(r'^A/?$', 'list_detail.object_list', SomeModelA)
where SomeModelA is a model (not a form) that's turned into a ModelForm by Django in the background.
At present there are no forms for the models in my system. The only point of creating forms explicitly would be to add the formfield_callback suggested in the prior solution, but that goes against DRY principles and would be error-prone and labour-intensive.
I've considered (as suggested in the last thread) creating my own field that has a special widget and using that instead of the built-in one. It's not as labour-intensive, but it could be subject to errors (nothing a good grep couldn't fix, though).
Suggestions and thoughts are appreciated.
It sounds like you want to do this project-wide (i.e., you're not trying to do this in some cases, but in ALL cases in your running application).
One possibility is to replace the widget attribute of the DateField class itself. You would need to do this in some central location... something that is guaranteed to be loaded by every running instance of the Django app. Middleware can help with this. Otherwise, just put it in the __init__ file of your app.
What you want to do is re-assign the widget property of the forms.DateField class itself. When a new DateField is created, Django checks to see if the code specifies any particular widget in the field definition. If not, it uses the default for DateField. I'm assuming that if a user in your scenario really defined a particular widget, you'd want to honour that despite the change to your global API.
Try this as an example of forcing the default to some other widget... in this case a HiddenInput:
from django import forms

forms.DateField.widget = forms.HiddenInput

class Foo(forms.Form):
    a = forms.DateField()

f = Foo()
print(f.fields['a'].widget)
# results in <django.forms.widgets.HiddenInput object at 0x16bd910>
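To tie this back to the original question, the same class-level override can point DateField at a widget the jQuery UI datepicker can hook onto. A rough sketch (the CSS class name and the client-side initialisation are assumptions, not something Django provides):
from django import forms


class JQueryDateWidget(forms.TextInput):
    # A plain text input tagged with a CSS class for the jQuery UI datepicker.
    def __init__(self, attrs=None):
        final_attrs = {'class': 'datepicker'}  # assumed class name
        if attrs:
            final_attrs.update(attrs)
        super(JQueryDateWidget, self).__init__(attrs=final_attrs)


# Assign the class (not an instance) so each DateField instantiates its own widget.
forms.DateField.widget = JQueryDateWidget

# On the client side, something like the following would activate it:
#   $('.datepicker').datepicker();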