How to do multiple save object calls in a Django view, but commit only once - django

I have a Django view in which I call my_model.save() on a single object (conditionally) in multiple spots. my_model is an instance of a normal model class.
Each save() is committed immediately in Django, so in the worst case the database gets hit several times. To prevent this, I defined a boolean variable save_model and set it to True whenever the object is modified. At the end of my view, I check this boolean and call save() on my object if needed.
Is there a simpler way of doing this? I tried Django's transaction.commit_on_success as a view decorator, but the save calls appear to get queued and committed anyway.

You could look into django-dirtyfields.
Simply use DirtyFieldsMixin as a mixin to your model. You will then be able to check if an object has changed (using obj.is_dirty()) before doing a save().
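A minimal sketch of that approach, assuming a hypothetical MyModel and condition (django-dirtyfields must be installed separately):
from django.db import models
from django.http import HttpResponse
from dirtyfields import DirtyFieldsMixin  # provided by the django-dirtyfields package

class MyModel(DirtyFieldsMixin, models.Model):
    attr1 = models.BooleanField(default=False)
    attr2 = models.BooleanField(default=False)

def my_view(request):
    obj = MyModel.objects.get(pk=1)      # hypothetical lookup
    if request.GET.get('flag'):          # hypothetical condition
        obj.attr1 = True
    if obj.is_dirty():                   # True only if a tracked field actually changed
        obj.save()
    return HttpResponse('ok')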

You can use transaction support anywhere in your code; the Django docs say it explicitly:
Although the examples below use view functions as examples, these decorators and context managers can be used anywhere in your code that you need to deal with transactions
But this isn't what transactions are for. You can get rid of your boolean variable by using an existing app for that, like django-dirtyfields.
But it smells like a bad design. Why do you need to call save multiple times? Are you sure there is no way to call it only once?
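For illustration, the context-manager form the docs mention would look roughly like this in the question's view; note that it only groups the commit, it does not reduce the number of UPDATE statements (a sketch; MyModel is a placeholder, and commit_on_success was later replaced by transaction.atomic):
from django.db import transaction
from django.http import HttpResponse

def my_view(request):
    obj = MyModel.objects.get(pk=1)    # hypothetical model and lookup
    with transaction.commit_on_success():
        obj.attr1 = True
        obj.save()                     # first UPDATE
        obj.attr2 = True
        obj.save()                     # second UPDATE, committed together with the first
    return HttpResponse('ok')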

There are two approaches here, though they are similar. The first is to apply all the changes and call save() once just before returning the response:
def my_view(request):
    obj = Mymodel.objects.get(...)
    if cond1:
        obj.attr1 = True
    elif cond2:
        obj.attr2 = True
    else:
        obj.attr1 = False
        obj.attr2 = False
    obj.save()
    return .......
The second is your approach.
There is no other way to do this, short of defining your own decorator or something similar; one way or another, you need to track whether your model has been modified (i.e. whether there are changes you want to save).


returning list data on POST in django-tastypie

I know always_return_data = True makes tastypie return object after creation.
I'd like to return a list of data upon object creation.
(When a user likes a post, I return all the likes info for that post.)
Is this a bad practice?
I could do this easily in a regular Django view, but I'm having a hard time with tastypie.
The only thing I can think of would be to create a Mixin for Resource/ModelResource with a different post_list/put_detail/put_list that takes care of this. Check out resources.py for the functions that you'd need to change.
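One rough, untested sketch of that mixin idea (LikeResource and Like are hypothetical; it reuses get_list() to build the response, so the status code will be 200 rather than 201):
from tastypie.resources import ModelResource

class ReturnListOnPostMixin(object):
    def post_list(self, request, **kwargs):
        # Let tastypie create the object as usual, then discard the
        # single-object response and return the full list instead.
        super(ReturnListOnPostMixin, self).post_list(request, **kwargs)
        return self.get_list(request, **kwargs)

class LikeResource(ReturnListOnPostMixin, ModelResource):
    class Meta:
        queryset = Like.objects.all()   # hypothetical Like model
        resource_name = 'like'
        always_return_data = True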

Django - How to pass dynamic models between pages

I have made a django app that creates models and database tables on the fly. This is, as far as I can tell, the only viable way of doing what I need. The problem arises of how to pass a dynamically created model between pages.
I can think of a few ways of doing such but they all sound horrible. The methods I can think of are:
Use global variables within views.py. This seems like a horrible hack and likely to cause conflicts if there are multiple simultaneous users.
Pass a reference in the URL and use some eval hackery to try and re-find the model. This is probably stupid, as the model could potentially be garbage collected en route.
Use a place-holder app. This seems like a bad idea due to conflicts between multiple users.
Have an invisible form that posts the model when a link is clicked. Again, very hacky.
Is there a good way of doing this, and if not, is one of these methods more viable than the others?
P.S. In case it helps: my app receives data (as a JSON string) from a pre-existing database and then caches it locally (i.e. on the webserver), creating an appropriate model and table on the fly. The idea is then to present this data and do various filtering and drill-downs on it without placing undue strain on the main database (as each query returns a few hundred results out of a database of hundreds of millions of data points). W.r.t. option 3, the tables are named based on a hash of the query and timestamp, whereas a place-holder app would have a predetermined name.
Thanks,
jhoyla
EDITED TO ADD: Thanks guys, I have now solved this problem. I ended up using both answers together to give a complete answer. As I can only accept one, I am going to accept the contenttypes one; sadly I don't have the reputation to give upvotes yet, but if/when I do I will return and upvote appropriately.
The solution in its totality:
from django.contrib.contenttypes.models import ContentType
from django.http import Http404

def view_a(request):
    model = create_model(...)
    request.session['model'] = ContentType.objects.get_for_model(model)
    ...

def view_b(request):
    ctmodel = request.session.get('model', None)
    if not ctmodel:
        raise Http404
    model = ctmodel.model_class()
    ...
My first thought would be to use content types and to pass the type/model information via the url.
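A hedged sketch of the URL variant (the URL name, the ct_id kwarg, the template name and create_model are placeholders; create_model stands in for the question's dynamic-model factory):
from django.contrib.contenttypes.models import ContentType
from django.shortcuts import redirect, render

def view_a(request):
    model = create_model(request)            # hypothetical dynamic-model factory
    ct = ContentType.objects.get_for_model(model)
    return redirect('view_b', ct_id=ct.pk)   # hypothetical URL name and kwarg

def view_b(request, ct_id):
    model = ContentType.objects.get(pk=ct_id).model_class()
    return render(request, 'results.html', {'rows': model.objects.all()})  # hypothetical template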
You could also use Django's sessions framework, e.g.
def view_a(request):
    your_model = request.session.get('your_model', None)
    if type(your_model) == YourModel:
        your_model.name = 'something_else'
        request.session['your_model'] = your_model
    ...

def view_b(request):
    your_model = request.session.get('your_model', None)
    ...
You can store almost anything in the session dictionary, and managing it is also easy:
del request.session['your_model']

Creating a Django callable object for Field.default

I'm trying to create a callable object to return the default value to use for a field when creating a new instance.
The logic for the value is dependent on other data in the model. I tried creating a separate class but have not hit on the right combination of factors. Example:
in models.py:
class Box(models.Model):
    inv_id = models.CharField(max_length=16, default=gen_inv_id())
The callable object will need to query the database model and increment a table value. I tried creating a class in a separate .py module under the app, but it needs a method to return a value. OO is not my strong suit at this point. I think the model has become invalid and the method depends on it so it seems like a chicken/egg scenario has emerged.
Thanks for any help.
Since forever (pre-1.0 days) the default keyword has supported callables. The issue with your code is that you're not passing in a callable (default=gen_inv_id), but the result of calling one (default=gen_inv_id()).
So you probably want to do:
class Box(models.Model):
    inv_id = models.CharField(max_length=16, default=gen_inv_id)
Check out the docs for the latest version that describes this:
https://docs.djangoproject.com/en/1.4/ref/models/fields/#default
I've run into this before. One thing you can do is override the class's save method, so that you first save the parameters you need to do the computation, then do the computation and re-save. If you're overriding the save method you'll need to call the superclass's save() (I forget what the exact notation is).
edit: the notation is super(Box, self).save(*args, **kwargs), with Box being your model class.
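A minimal sketch of that save() override, assuming gen_inv_id() is the question's helper that computes the next id from existing data:
from django.db import models

class Box(models.Model):
    inv_id = models.CharField(max_length=16, blank=True)

    def save(self, *args, **kwargs):
        # Assign the id only on first save, once the rest of the row's data
        # is available for gen_inv_id() to look at.
        if not self.inv_id:
            self.inv_id = gen_inv_id()   # the question's helper (assumed)
        super(Box, self).save(*args, **kwargs)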

Django - Prevent automatic related table fetch

How can I prevent Django, for testing purposes, from automatically fetching related tables that were not specified in the select_related() call during the initial query?
I have a large application where I make significant use of
select_related() to bring in related model data with each original
query. All select_related() calls specify the related models explicitly, rather than relying on the default, e.g. select_related('foo', 'bar', 'foo__bar')
As the application has grown, the select_related calls haven't
completely kept up, leaving a number of scenarios where Django happily
and kindly goes running off to the database to fetch related model
rows. This significantly increases the number of database hits, which
I obviously don't want.
I've had some success in tracking these down by checking the queries
generated using the django.db.connection.queries collection, but some
remain unsolved.
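For reference, checking that collection looks roughly like this (it is only populated when DEBUG = True; the code path being exercised is a placeholder):
from django.db import connection, reset_queries

reset_queries()
run_the_view_code(request)           # hypothetical code path under investigation
for query in connection.queries:     # a list of {'sql': ..., 'time': ...} dicts
    print query['sql']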
I've tried to find a suitable patch location in the django code to raise an
exception in this scenario, making the tracking much easier, but tend
to get lost in the code.
Thanks.
After some more digging, I've found the place in the code to do this.
The file in question is django/db/models/fields/related.py
You need to insert two lines into this file.
Locate class "SingleRelatedObjectDescriptor". You need to change the function __get__() as follows:
def __get__(self, instance, instance_type=None):
    if instance is None:
        return self
    try:
        return getattr(instance, self.cache_name)
    except AttributeError:
        raise Exception("Automated Database Fetch on %s.%s" % (instance._meta.object_name, self.related.get_accessor_name()))
        # leave the old code here for when you revert!
Similarly, in class "ReverseSingleRelatedObjectDescriptor" further down the code, you again need to change __get__() to:
def __get__(self, instance, instance_type=None):
    if instance is None:
        return self
    cache_name = self.field.get_cache_name()
    try:
        return getattr(instance, cache_name)
    except AttributeError:
        raise Exception("Automated Database Fetch on %s.%s" % (instance._meta.object_name, self.field.name))
        # BEWARE: % parameters are different to previous class
        # leave old code here for when you revert
Once you've done this, you'll find that Django raises an exception every time it performs an automatic database lookup. This is pretty annoying when you first start, but it will help you track down those pesky database lookups. Obviously, when you've found them all, it's probably best to revert the database code back to normal. I would only suggest using this during a debugging/performance investigation phase and not in the live production code!
So, you're asking how to stop a method from doing what it's specifically designed to do? I don't understand why you would want to do that.
However, one thing to know about select_related is that it doesn't automatically follow relationships which are defined as null=True. So if you can set your FKs to that for now, the relationship won't be followed.
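For example (a sketch with hypothetical models; this caveat applies to the argument-less form of select_related(), not to calls that name their relations explicitly):
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=100)
    # Because this FK is null=True, Book.objects.select_related() with no
    # arguments will not follow and cache the author automatically.
    author = models.ForeignKey(Author, null=True)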

Can a django template know whether the view it is invoked from has the @login_required decorator?

Let's say that I have a system that has some pages that are public (both non-authenticated users and logged-in users can view) and others which only logged-in users can view.
I want the template to show slightly different content for each of these two classes of pages. The @login_required view decorator is always used on views which only logged-in users can view. However, my template would need to know whether this decorator is used on the view from which the template was invoked.
Please keep in mind that I do not care whether the user is logged in or not for the public pages. What I care about is whether a page can be viewed by the general public, and the absence of an @login_required decorator will tell me that.
Can anyone throw me a hint on how the template could know whether a particular decorator is being used on the view from which it was invoked?
Yes, it is possible, but not terribly straightforward. The complicating factor is that Django's login_required decorator actually passes through 2 levels of indirection (one dynamic function and one other decorator), to end up at django.contrib.auth.decorators._CheckLogin, which is a class with a __call__ method.
Let's say you have a non-django, garden-variety decorated function that looks like this:
def my_decorator(func):
    def inner():
        return func()
    return inner

@my_decorator
def foo():
    print foo.func_name
    # results in: inner
Checking to see if the function foo has been wrapped can be as simple as checking the function object's name. You can do this inside the function. The name will actually be the name of the last wrapper function. For more complicated cases, you can use the inspect module to walk up the outer frames from the current frame if you're looking for something in particular.
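A tiny, purely illustrative sketch of that frame-walking idea:
import inspect

def report_callers():
    # Walk outward from the current frame and print each caller's function name.
    for frame_record in inspect.getouterframes(inspect.currentframe())[1:]:
        print frame_record[3]    # index 3 of the record is the function name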
In the case of Django, however, the fact that the decorator is actually an instance of the _CheckLogin class means that the function is not really a function, and therefore has no func_name property: trying the above code will raise an Exception.
Looking at the source code for django.contrib.auth.decorators._CheckLogin, however, shows that the _CheckLogin instance will have a login_url property. This is a pretty straightforward thing to test for:
@login_required
def my_view(request):
    is_private = hasattr(my_view, 'login_url')
Because _CheckLogin is also used to implement the other auth decorators, this approach will also work for permission_required, etc. I've never actually had a need to use this, however, so I really can't comment on what you should look for if you have multiple decorators around a single view... an exercise left to the reader, I guess (inspect the frame stack?).
As unrequested editorial advice, however, I would say that checking the function itself to see if it was wrapped like this strikes me as a bit fiddly. You can probably imagine all sorts of unpredictable behaviour waiting to happen when a new developer comes to the project and slaps some other decorator on. In fact, you're also exposed to changes in the django framework itself... a security risk waiting to happen.
I would recommend Van Gale's approach for that reason as something that is explicit, and therefore a much more robust implementation.
I would pass an extra context variable into the template.
So, the view that has @login_required would pass a variable like private: True and the other views would pass private: False
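A sketch of that approach (view and template names are placeholders):
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

@login_required
def members_view(request):
    # hypothetical private view: the flag tells the template the page is login-only
    return render(request, 'page.html', {'private': True})

def landing_view(request):
    # hypothetical public view
    return render(request, 'page.html', {'private': False})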
Why does your template need to know this? If the @login_required decorator is used, the view itself prevents people who aren't logged in from ever reaching the page, and therefore from ever seeing the template to begin with.
Templates are hierarchical, so why not have a @login_required version and a "no @login_required" version, both of which inherit from the same parent?
This would keep the templates a lot cleaner and easier to maintain.