I have a model with an overridden __init__ method, like this:
class MyModel(models.Model):
    ...
    def __init__(self, *args, **kwargs):
        if not kwargs.get('skip', False):
            do_something()
        super().__init__(*args, **kwargs)
How can I pass the skip argument to __init__ when I iterate over the queryset:
data = [obj for obj in MyModel.objects.all()]
I would like to implement this via a method on a custom manager, so I could use something like queryset.with_skip().
I see that you don't remove the argument skip from kwargs before passing it to super().__init__. That means skip must be the name of a field; otherwise you would get TypeError("'skip' is an invalid keyword argument for this function").
If you really need do_something() to run whenever an object is created, so indispensably that nobody should be able to bypass it, then custom managers etc. are not enough.
Your problem is that models.Model.__init__(...) supports both *args and **kwargs, and they are meant to be interchangeable. Your override breaks that: if skip is passed in the complete tuple of positional arguments, you ignore it, and that is exactly what happens when the object is created from the database. Read the docs on Customizing model loading:
... If all of the model’s fields are present, then values are guaranteed to be in the order __init__() expects them. That is, the instance can be created by cls(*values)...
@classmethod
def from_db(cls, db, field_names, values):
    ...
    instance = cls(*values)
    ...
An easy way to fix it is to call do_something() after super().__init__ and read self.skip, instead of implementing parsing of both kwargs and args.
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    if not self.skip:
        do_something()
One possible problem is the post_init signal, which is sent at the end of super().__init__: with this ordering, do_something() runs after the signal has already fired, which matters if you rely on it.
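To make the ordering concrete, here is a sketch using Django's documented post_init signal (the receiver itself is hypothetical):

from django.db.models.signals import post_init

def my_receiver(sender, instance, **kwargs):
    # post_init is sent at the very end of models.Model.__init__,
    # so with the fix above do_something() has NOT run yet here.
    pass

post_init.connect(my_receiver, sender=MyModel)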
The last possibility is to support *args as well (hacky, but it still uses documented names in some way):
def __init__(self, *args, **kwargs):
    if kwargs:
        skip = kwargs.get('skip', False)
    else:
        # check "val is True" in order to skip if val is DEFERRED
        skip = any(field.name == 'skip' and val is True
                   for val, field in zip(args, self._meta.concrete_fields))
    if not skip:
        do_something()
    super().__init__(*args, **kwargs)
EDIT: Maybe you don't need what you asked for, and a proxy model that can do something extra over a basic model on the same data in the same database table is the right solution. ("Skip" doesn't look like a name describing the object's data, but like a name describing a mode of object creation. It is easier to test and maintain a subclass than a mysterious switch inside.)
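A minimal sketch of that idea, assuming the MyModel and do_something() from the question (the name field is a placeholder):

class MyModel(models.Model):
    name = models.CharField(max_length=100)  # placeholder field

class InitializedMyModel(MyModel):
    """Same table as MyModel, but runs do_something() on instantiation."""
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        do_something()

Code that needs the side effect instantiates or queries InitializedMyModel; code that wants to "skip" simply uses MyModel, with no constructor flag at all.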
I'm new to Django. I understand the usage of *args and **kwargs, and I know how to use them when overriding methods.
But I don't understand what purpose they serve when overriding the save() method in a model class.
My observation is that no arguments, positional or keyword, are passed to them anywhere. So why must I use them, and how?
Take this example:
from django.db import models
from django.utils.text import slugify

class DemoModel(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField()

    def save(self, *args, **kwargs):
        self.slug = slugify(self.title)
        super(DemoModel, self).save(*args, **kwargs)
Please explain.
From the Django model documentation:
It’s also important that you pass through the arguments that can be
passed to the model method – that’s what the *args, **kwargs bit does.
Django will, from time to time, extend the capabilities of built-in
model methods, adding new arguments. If you use *args, **kwargs in
your method definitions, you are guaranteed that your code will
automatically support those arguments when they are added.
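As a concrete illustration, models.Model.save() accepts optional keyword arguments such as update_fields; because the override forwards *args and **kwargs, such calls pass straight through the DemoModel above (a small sketch):

demo = DemoModel(title="Hello world")
demo.save()  # no extra arguments; slug is set before saving

demo.title = "New title"
demo.save(update_fields=["title", "slug"])  # forwarded to models.Model.save()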
Very late but,
You should add a return to the save method:
return super(DemoModel, self).save(*args, **kwargs)
Secondly, kwargs (keyword arguments) are also what Django passes as URL parameters to views, e.g. kwargs={"pk": self.object.id} when, after saving, a view needs to redirect to a detail view of the just-created object. Django's views handle this already, but you can pass extra parameters if you want.
I stumbled upon some code that is used to provide extra arguments to the request-handling methods. The problem is that I'm not sure it is the cleanest way to handle this case.
# (imports assumed for the snippets below)
from django.contrib import messages
from django.http import HttpResponseRedirect
from django.shortcuts import redirect
from django.utils.decorators import method_decorator
from django.views.generic import View

def check_permissions(check_mixins):
    """
    :param check_mixins: is given to the inner decorator

    Decorator that will automatically populate some parameters when
    using dispatch() toward the right method (get(), post())
    """
    def _decorator(_dispatch):
        def wrapper(request, *args, **kwargs):
            # Is it a problem if "self" isn't passed in the method definition here...
            for mixin in check_mixins:
                kwargs = mixin.check(request, *args, **kwargs)
                if isinstance(kwargs, HttpResponseRedirect):
                    return kwargs
            return _dispatch(request, *args, **kwargs)
        return wrapper
    return _decorator
class UserLoginMixin(object):
    def check(request, *args, **kwargs):
        # ... and here? It seems so ugly in my IDE.
        user = request.user
        if user.is_authenticated() and not user.is_anonymous():
            kwargs['user'] = user
            return kwargs
        return redirect('user_login')
class AppoExistMixin(object):
    def check(request, *args, **kwargs):
        # Here too...
        appo_id = kwargs['appo_id']
        try:
            appoff = IdAppoff.objects.get(id=appo_id)
            kwargs['appoff'] = appoff
            del kwargs['appo_id']
            return kwargs
        except IdAppoff.DoesNotExist:
            pass
        messages.add_message(request, messages.ERROR,
                             "Item doesn't exist!")
        return redirect('home')
class SecurityMixin(View):
    """
    Mixin that dispatches to the right method with augmented kwargs.
    kwargs are added if they match a specific treatment.
    """
    data = []

    def __init__(self, authenticators):
        super(SecurityMixin, self).__init__()
        # Clearing data in order not to add useless params to kwargs
        self.data.clear()
        # Build the list that contains each authenticator providing
        # context increase
        for auth in authenticators:
            self.data.append(auth)

    @method_decorator(check_permissions(data))
    # Why data and not self.data? How is it possible?
    def dispatch(self, request, *args, **kwargs):
        return super(SecurityMixin, self).dispatch(request, *args, **kwargs)
Each view then inherits from SecurityMixin and gets authenticators = [UserLoginMixin, ...] as a class attribute.
The problem I sometimes have (I can't reproduce the bug...) is that I get a KeyError on the augmented kwargs even though the URL definition is properly set, e.g.:
appo_id = kwargs['appo_id']
KeyError: 'appo_id'
I've been looking for hours and it seems that I will never find the solution... It's a bit frustrating.
If someone could help, it would be greatly appreciated.
I have a hunch that improper handling of class attributes is at fault.
CLASS VS INSTANCE
The class attribute data is overwritten every time SecurityMixin.__init__ is called:
class A:
    data = []

    def __init__(self, *args):
        self.data.clear()  # self.data references the class attribute
        for x in args:
            self.data.append(x)

x = A('foo')
# A.data = ['foo']
# x.data = ['foo']

y = A('bar')
# A.data = ['bar']
# y.data = ['bar']
# x.data = ['bar'] !!
HOWEVER:
class A:
    data = ['I am empty']

    def __init__(self, *args):
        self.data = []  # redeclaring data as an instance attribute
        for x in args:
            self.data.append(x)

x = A('foo')
# A.data = ['I am empty']
# x.data = ['foo']

y = A('bar')
# A.data = ['I am empty']
# y.data = ['bar']
# x.data = ['foo']
This class attribute data is passed to the decorator (you cannot pass an instance attribute such as self.data to a method decorator, because the instance does not yet exist when the decorator is declared).
The wrapped function, however, does have access to the instance if it is passed in ('self' argument).
Django's method_decorator removes this self argument: it is used to transform a function decorator (which does not get a self argument implicitly) into a method decorator (which does get a self parameter implicitly). That's why you do not have to include self in the parameter lists of the various mixin check methods: it was removed by method_decorator. To put it simply: use method_decorator to decorate a method with a function decorator. Read up on it in the Django docs on decorating class-based views.
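A minimal sketch of the difference (log_calls is a made-up function decorator):

from django.utils.decorators import method_decorator
from django.views.generic import View

def log_calls(func):
    # A plain function decorator: its wrapper expects request first, no self.
    def wrapper(request, *args, **kwargs):
        print('dispatching with', args, kwargs)
        return func(request, *args, **kwargs)
    return wrapper

class MyView(View):
    # method_decorator absorbs self, so log_calls never sees it.
    @method_decorator(log_calls)
    def dispatch(self, request, *args, **kwargs):
        return super(MyView, self).dispatch(request, *args, **kwargs)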
Knowing that, I am not really sure why check_permissions should be a function decorator as it is now, when you only use it to decorate methods.
You could just decorate dispatch with check_permissions itself:
def check_permissions(_dispatch):
    def _decorator(self, request, *args, **kwargs):  # adding self
        for mixin in self.data:  # referencing the INSTANCE data
            kwargs = mixin.check(request, *args, **kwargs)
            if isinstance(kwargs, HttpResponseRedirect):
                return kwargs
        return _dispatch(self, request, *args, **kwargs)  # don't forget self here
    return _decorator
@check_permissions
def dispatch(self, request, *args, **kwargs):
    ...
Maybe some view is trying to run AppoExistMixin because it is in that view's data list although it should not be, and the view's kwargs do not include 'appo_id'. You could also try being explicit by passing the wanted check mixins directly to the decorator: @method_decorator(check_permissions([UserLoginMixin, ...])). This way you don't have to mess with class vs instance attributes.
Also... you should rename data to something that you are unlikely to overwrite with your own variable.
If you want to be super-lazy you could just do:
appo_id = kwargs.get('appo_id', False)
if not appo_id:
    return kwargs
But this would only fix that particular error in that one view. It's ignoring a symptom instead of curing the disease.
Some more explanation:
function vs method. check_permissions is a function, while dispatch() is a method. You cannot simply use a function decorator on a method: for one thing, the implicit argument self (the instance the method belongs to) is passed to the decorator as well, although it may not expect it.
That is where django's method_decorator comes in by removing and storing self within the decorator. Compare the two signatures: wrapper(request, *args, **kwargs) vs _decorator(self, request, *args, **kwargs). In the former, method_decorator 'absorbed' self before the function decorator is called.
Think of it as an adapter, a decorator for the decorator, that 'bridges the gap' between function and method. Use it if you don't want to/cannot alter the decorator.
In your case, however, you can change the decorator to make it work with a method - thus you don't need django's method_decorator.
I have a model with a custom save function, where I want to perform certain functions based on a condition like this:
class ClassName(models.Model):
    def save(self, *args, **kwargs):
        reindex = **kwargs.pop("reindex")
        super().save(*args, **kwargs)
        if reindex:
            People.objects.create()
Now inside a task I want to call the following:
kwargs = { "reindex": False}
ClassName.objects.get_or_create(**kwargs)
When it does a create, it obviously runs the save function, but it's giving me an error saying reindex is not a field.
I have been researching it for a while now and can't figure out what to do.
Maybe someone can point me in the right direction.
I just want to pass in an argument into the get_or_create, so that I can conditionally perform a certain function in the save method.
When you do
kwargs = { "reindex": False}
ClassName.objects.get_or_create(**kwargs)
it is actually equivalent to
ClassName.objects.get_or_create(reindex=False)
Thus, since reindex appears not to be a field defined in the model ClassName, you get an error.
BTW, beyond the things which appear plainly erroneous, e.g. reindex = **kwargs.pop("reindex") (which is a syntax error), you should define reindex as one of the fields of your model. But I admit that I am answering blindly, because as written your class definition cannot work. If one assumes that reindex is an integer field, you could do
class ClassName(models.Model):
    reindex = models.IntegerField(null=True)

    def save(self, *args, **kwargs):
        super(ClassName, self).save(*args, **kwargs)
        # reindex is a real field now, so it arrives on the instance
        # (get_or_create passes it to the constructor, not to save())
        if self.reindex:
            People.objects.create()
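If you would rather not add a database field at all, another option is to pop the flag inside save() before delegating. Note this only helps when you call save() yourself: get_or_create() passes its keyword arguments to the model constructor, not to save(). A sketch, keeping the names from the question:

class ClassName(models.Model):
    def save(self, *args, **kwargs):
        # Remove the flag so it never reaches models.Model.save()
        reindex = kwargs.pop("reindex", False)
        super().save(*args, **kwargs)
        if reindex:
            People.objects.create()

obj = ClassName()
obj.save(reindex=True)  # save() consumes the flag itself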
I've created a model, and I'm rendering the default/unmodified model form for it. This alone generates 64 SQL queries because it has quite a few foreign keys, and those in turn have more foreign keys.
Is it possible to force it to always (by default) perform a select_related every time one of these models is returned?
You can create a custom manager, and simply override get_queryset for it to apply everywhere. For example:
class MyManager(models.Manager):
    def get_queryset(self):
        return super(MyManager, self).get_queryset().select_related('foo', 'bar')
(Prior to Django 1.6, it was get_query_set).
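You then attach the manager to the model as usual; a sketch (Foo and Bar stand in for your related models):

class MyModel(models.Model):
    foo = models.ForeignKey(Foo)
    bar = models.ForeignKey(Bar)

    objects = MyManager()

# Every query through the default manager now joins foo and bar:
MyModel.objects.all()      # related rows fetched in the same query
MyModel.objects.get(pk=1)  # accessing .foo or .bar costs no extra query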
Here's also a fun trick:
class DefaultSelectOrPrefetchManager(models.Manager):
    def __init__(self, *args, **kwargs):
        self._select_related = kwargs.pop('select_related', None)
        self._prefetch_related = kwargs.pop('prefetch_related', None)
        super(DefaultSelectOrPrefetchManager, self).__init__(*args, **kwargs)

    def get_queryset(self, *args, **kwargs):
        qs = super(DefaultSelectOrPrefetchManager, self).get_queryset(*args, **kwargs)
        if self._select_related:
            qs = qs.select_related(*self._select_related)
        if self._prefetch_related:
            qs = qs.prefetch_related(*self._prefetch_related)
        return qs

class Sandwich(models.Model):
    bread = models.ForeignKey(Bread)
    extras = models.ManyToManyField(Extra)
    # ...

    objects = DefaultSelectOrPrefetchManager(select_related=('bread',), prefetch_related=('extras',))
Then you can re-use the manager easily between model classes. As an example use case, this would be appropriate if you had a __unicode__ method on the model which rendered a string that included some information from a related model (or anything else that meant a related model was almost always required).
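For example, the __unicode__ case might look like this (a sketch; Pizza, Crust and Topping are hypothetical models):

class Pizza(models.Model):
    crust = models.ForeignKey(Crust)
    toppings = models.ManyToManyField(Topping)

    objects = DefaultSelectOrPrefetchManager(
        select_related=('crust',),
        prefetch_related=('toppings',),
    )

    def __unicode__(self):
        # Without the default select_related, rendering each Pizza
        # would cost an extra query just to fetch the crust.
        return u'%s pizza' % self.crust.name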
...and if you really want to get wacky, here's a more generalized version. It allows you to call any sequence of methods on the default queryset with any combination of args or kwargs. There might be some errors in the code, but you get the idea.
from django.db import models

class MethodCalls(object):
    """
    A mock object which logs chained method calls.
    """
    def __init__(self):
        self._calls = []

    def __getattr__(self, name):
        c = Call(self, name)
        self._calls.append(c)
        return c

    def __iter__(self):
        for c in self._calls:
            yield tuple(c)

class Call(object):
    """
    Used by `MethodCalls` objects internally to represent chained method calls.
    """
    def __init__(self, calls_obj, method_name):
        self._calls = calls_obj
        self.method_name = method_name

    def __call__(self, *method_args, **method_kwargs):
        self.method_args = method_args
        self.method_kwargs = method_kwargs
        return self._calls

    def __iter__(self):
        yield self.method_name
        yield self.method_args
        yield self.method_kwargs

class DefaultQuerysetMethodCallsManager(models.Manager):
    """
    A model manager class which allows specification of a sequence of
    method calls to be applied by default to base querysets.

    `DefaultQuerysetMethodCallsManager` instances expose a property
    `default_queryset_method_calls` to which chained method calls can be
    applied to indicate which methods should be called on base querysets.
    """
    def __init__(self, *args, **kwargs):
        self.default_queryset_method_calls = MethodCalls()
        super(DefaultQuerysetMethodCallsManager, self).__init__(*args, **kwargs)

    def get_queryset(self, *args, **kwargs):
        qs = super(DefaultQuerysetMethodCallsManager, self).get_queryset(*args, **kwargs)
        for method_name, method_args, method_kwargs in self.default_queryset_method_calls:
            qs = getattr(qs, method_name)(*method_args, **method_kwargs)
        return qs

class Sandwich(models.Model):
    bread = models.ForeignKey(Bread)
    extras = models.ManyToManyField(Extra)
    # Other field definitions...

    objects = DefaultQuerysetMethodCallsManager()
    objects.default_queryset_method_calls.filter(
        bread__type='wheat',
    ).select_related(
        'bread',
    ).prefetch_related(
        'extras',
    )
The python-mock-inspired MethodCalls object is an attempt at making the API more natural. Some might find that a bit confusing. If so, you could sub out that code for an __init__ arg or kwarg that just accepts a tuple of method call information.
Create a custom models.Manager and override all the methods (filter, get etc.) and append select_related onto every query. Then set this manager as the objects attribute on the model.
I would recommend just going through your code and adding the select_related where needed, because doing select_related on everything is going to cause some serious performance issues down the line (and it wouldn't be entirely clear where it's coming from).
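In other words, something like this at the call sites that actually need it (reusing the Sandwich model from above):

# The join is paid for only here, and the intent is visible at the call site.
sandwiches = Sandwich.objects.select_related('bread').prefetch_related('extras')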
How can one accomplish a class-based default value in the following scheme? That is, I would like inherited classes to set the default value for number differently:
class OrderDocumentBase(PdfPrintable):
    number = models.PositiveIntegerField(default=self.create_number())

    @classmethod
    def create_number(cls):
        raise NotImplementedError

class Invoice(OrderDocumentBase):
    @classmethod
    def create_number(cls):
        return 1

class CreditAdvice(OrderDocumentBase):
    @classmethod
    def create_number(cls):
        return 2
I have looked at this stackoverflow question, but it doesn't address the same problem. The only thing I thought would work was overloading OrderDocumentBase's __init__ method like this:
def __init__(self, *args, **kwargs):
    """
    Overload __init__ to enable dynamic set of default to number
    """
    super(OrderDocumentBase, self).__init__(*args, **kwargs)
    number_field = filter(lambda x: x.name == 'number', self._meta.fields)[0]
    number = self.__class__.create_number()
    number_field.default = number
This works, but only partially, and behaves quite weirdly. In the admin interface, I can see the default being set only on the second or a later page refresh. On the first try, None is set :(
The second possibility is redefining the number field in each subclass, but that doesn't seem very pretty. Is there any other way?
Can someone help?
It does feel nicer to do this via default=, but anything you use there has no way to get at your class or specific model. To have the default show up properly in places like the admin, you could set it in __init__() instead of save().
class OrderDocumentBase(PdfPrintable):
    number = models.PositiveIntegerField()

    def __init__(self, *args, **kwargs):
        super(OrderDocumentBase, self).__init__(*args, **kwargs)
        if not self.pk and not self.number:
            self.number = self.DEFAULT_NUMBER

class Invoice(OrderDocumentBase):
    DEFAULT_NUMBER = 2

class CreditAdvice(OrderDocumentBase):
    DEFAULT_NUMBER = 3
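With that in place, the behavior would be roughly as follows (a quick sketch; note the check uses `not self.number`, so an explicit 0 would be replaced too):

invoice = Invoice()
print(invoice.number)    # 2 -- filled in by __init__

advice = CreditAdvice(number=7)
print(advice.number)     # 7 -- an explicit truthy value is kept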
There are a couple of problems here. First, the self reference is not going to work: there is no self in the context of the class body, which is where you are declaring the PositiveIntegerField.
Second, passing a callable will not work either, because the callable gets bound when the field is declared and is not looked up again at runtime. So if you define, say,
class OrderDocumentBase(PdfPrintable):
    create_number = lambda: 0
    number = models.PositiveIntegerField(default=create_number)

class Invoice(OrderDocumentBase):
    create_number = lambda: 1
all Invoice instances will still get 0 as the default value.
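The binding issue is plain Python, not Django-specific; a stripped-down sketch of the same behavior:

class Base(object):
    create_number = staticmethod(lambda: 0)
    default = create_number  # bound NOW, to Base's callable

class Child(Base):
    create_number = staticmethod(lambda: 1)  # too late: default is already bound

print(Child.default())  # prints 0, not 1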
One way I can think of to tackle this is to override the save() method. You can check if the number has not been supplied and set it to a default before saving.
class OrderDocumentBase(PdfPrintable):
    number = models.PositiveIntegerField()

    def save(self, *args, **kwargs):
        if not self.number:
            self.number = self.DEFAULT
        super(OrderDocumentBase, self).save(*args, **kwargs)

class Invoice(OrderDocumentBase):
    DEFAULT = 2

class CreditAdvice(OrderDocumentBase):
    DEFAULT = 3
I tested the above with a small change (made OrderDocumentBase abstract as I did not have PdfPrintable) and it worked as expected.