Passing Initial Data to Django Form After Instantiation - django

Say I instantiate a Django Form in one of my views and can, conditionally, supply it with initial data. In the spirit of Not Repeating Myself, I would like to construct the form first and pass this information to the form later. What I would LIKE to do looks something like:
form = ThisForm()
if initial_data_element:
    initial_data = {
        'element': initial_data_element,
    }
    form.initial.update(initial_data)
What would be sub-optimal, but still answer my question, would be something that looks like:
form = ThisForm()
if initial_data_element:
    form.fields['element'].initial = initial_data_element
I was unable to find anything like this in the documentation or in other questions about forms here, which leads me to suspect that there is a different way that this class of problem is dealt with. For now I'll likely deal with it in the form's __init__ method, but I think I'd prefer its construction be straightforward to follow in the view.
Worth mentioning is the reason I would want to do this: my traditional approach of using a variable initialized to None is overwriting instance data in my ModelForm, and I'd really like to have a more robust solution.
My question: what is the recommended method of applying initial data to an already-instantiated form, if it is in fact even possible?
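For reference, the __init__ fallback mentioned above might look roughly like this (just a sketch; the form class, field name and the initial_data_element keyword argument are placeholders, not an established pattern):
from django import forms

class ThisForm(forms.Form):
    element = forms.CharField()

    def __init__(self, *args, **kwargs):
        # hypothetical keyword argument; pop it before calling the parent constructor
        initial_data_element = kwargs.pop('initial_data_element', None)
        super(ThisForm, self).__init__(*args, **kwargs)
        if initial_data_element is not None:
            self.initial['element'] = initial_data_element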

Related

Django Template set of sets

Is it possible to access a set of a set in a Django template?
i.e. a.b_set.c_set.count
so that it gets a count of all c objects related to all b objects which are related to a.
I know I can make a query in the backend, i.e. c.objects.filter(b__a=a), however I wish to do it from the template alone.
This may not be possible to do from the template, since templates were never intended for "complex" logic. You should do it in the view.
Since what you want is a new attribute on each object in the queryset, this is no one-liner.
Example:
a_list = a.objects.all()
for a in a_list:
    a.b_c_count = c.objects.filter(b__a=a).count()
And use it like that in the template:
{{ a.b_c_count }}
If you have a lot of a objects, this will be a bottleneck, so you may want to try the extra() method (and use a_list = a.objects.all().extra(**parameters)), or even raw SQL.
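If your Django version has aggregation support, an alternative sketch is to annotate the count in a single query (the 'b' and 'b__c' lookups assume the default reverse relation names for these models):
from django.db.models import Count

# one query; attaches b_c_count to each a object
a_list = a.objects.annotate(b_c_count=Count('b__c'))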

Secure-by-default django ORM layer---how?

I'm running a Django shop where we serve each our clients an object graph which is completely separate from the graphs of all the other clients. The data is moderately sensitive, so I don't want any of it to leak from one client to another, nor for one client to delete or alter another client's data.
I would like to structure my code such that I by default write code which adheres to the security requirements (No hard guarantees necessary), but lets me override them when I know I need to.
My main fear is that in a Twig.objects.get(...), I forget to add client=request.client, and likewise for Leaf.objects.get where I have to check that twig__client=request.client. This quickly becomes error-prone and complicated.
What are some good ways to get around my own forgetfulness? How do I make this a thing I don't have to think about?
One candidate solution I have in mind is this:
Set the default object manager as DANGER = models.Manager() on my abstract base class(es).
Have a method ok(request) on said base classes which applies .filter(leaf__twig__branch__trunk__root__client=request.client) as applicable.
Use MyModel.ok(request) instead of MyModel.objects wherever feasible.
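A rough sketch of that candidate solution (the Client model, the field names and the CLIENT_FILTER attribute are assumptions for illustration, not an established recipe):
from django.db import models

class ClientOwnedModel(models.Model):
    DANGER = models.Manager()  # unfiltered; defined first so it stays the default manager

    class Meta:
        abstract = True

    @classmethod
    def ok(cls, request):
        # each concrete model declares its own lookup path to the client
        return cls.DANGER.filter(**{cls.CLIENT_FILTER: request.client})

class Twig(ClientOwnedModel):
    CLIENT_FILTER = 'client'
    client = models.ForeignKey('Client', on_delete=models.CASCADE)

class Leaf(ClientOwnedModel):
    CLIENT_FILTER = 'twig__client'
    twig = models.ForeignKey(Twig, on_delete=models.CASCADE)

# in a view:
# twigs = Twig.ok(request).filter(...)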
Can this be improved upon? One not so nice issue is when a view calls a model method, e.g. branch.get_twigs_with_fruit, I now have to either pass a request for it to run through ok or I have to invoke DANGER. I like neither :-\
Is there some way of getting access to the current request? I think that might mitigate the situation...
I'll explain a different problem I had; however, I think the solution might be something to look into.
Once I was working on a project to visualize data, where I needed a really big table which stored all the data for all visualizations. That turned out to be a big problem because I would have to do things like Model.objects.filter(visualization=5), which was not very elegant and not efficient.
To make things simpler and more efficient I ended up creating dynamic models on the fly. Essentially I would create a separate table in the db on the fly and then store data only for that one visualization in it. My code was something like:
from django.db import models
from django.db.models.base import ModelBase

def get_model_class(table_name):
    class DynamicModelBase(ModelBase):
        # metaclass that gives each generated class a unique, table-specific name
        def __new__(cls, name, bases, attrs):
            name = '{}_{}'.format(name, table_name)
            return super(DynamicModelBase, cls).__new__(cls, name, bases, attrs)

    class Data(models.Model):
        # fields here
        __metaclass__ = DynamicModelBase

        class Meta(object):
            db_table = table_name

    return Data
dynamic_model = get_model_class('foo')
This was useful for my purposes because it allowed queries to be much faster but getting back to your issue I think something like this can be useful because this will make sure that each client's data is separate not only via a foreign key, but is actually separated in the db.
Using this method is pretty straightforward, except that before using the model you have to call the function to get it for each client. To make things more efficient you can cache/memoize the results of the function call so that the same class does not get rebuilt more than once.
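For example, the memoization could be as simple as a module-level dict in front of the factory function above (just a sketch):
_model_cache = {}

def get_cached_model_class(table_name):
    # build each dynamic model class only once per process
    if table_name not in _model_cache:
        _model_cache[table_name] = get_model_class(table_name)
    return _model_cache[table_name]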

How to do multiple save object calls in a Django view, but commit only once

I have a Django view in which I call save() on a single object (conditionally) in multiple spots. my_model is a normal model class.
Each save() hits the database immediately, and thus the database gets hit several times in the worst case. To prevent this, I defined a boolean variable save_model and set it to True whenever the object is modified. At the end of my view, I check this boolean and call save() on my object if needed.
Is there a simpler way of doing this? I tried Django's transaction.commit_on_success as a view decorator, but the save calls appear to get queued and committed anyway.
You could look into django-dirtyfields.
Simply use DirtyFieldsMixin as a mixin on your model. You will then be able to check whether an object has changed (using obj.is_dirty()) before doing a save().
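Roughly like this, assuming django-dirtyfields is installed (the field names are just placeholders):
from dirtyfields import DirtyFieldsMixin
from django.db import models

class MyModel(DirtyFieldsMixin, models.Model):
    attr1 = models.BooleanField(default=False)
    attr2 = models.BooleanField(default=False)

# in the view:
# obj.attr1 = True
# ...
# if obj.is_dirty():
#     obj.save()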
You can use transaction support anywhere in your code; the Django docs say it explicitly:
Although the examples below use view functions as examples, these decorators and context managers can be used anywhere in your code that you need to deal with transactions
But this isn't what transactions are for. You can get rid of your boolean variable by using an existing app for this, like django-dirtyfields.
But it smells like a bad design. Why do you need to call save multiple times? Are you sure there is no way to call it only once?
There are two approaches for this, but they are similar. The first one is to apply all changes and call save() once, right before returning the response.
def my_view(request):
    obj = Mymodel.objects.get(...)
    if cond1:
        obj.attr1 = True
    elif cond2:
        obj.attr2 = True
    else:
        obj.attr1 = False
        obj.attr2 = False
    obj.save()
    return .......
The second one is your approach. There is no other way to do this, unless you define your own decorator or take some similar approach; in the end you need to check whether your model was actually modified (or whether you want to save the changes to your data).

Changing a QuerySet object on the fly in Django

Can or should I ever do this in a view?
a = SomeTable.objects.all()
for r in a:
    if r.some_column == 'foo':
        r.some_column = 'bar'
It worked like a champ, but I tried a similar thing somewhere else and I was getting strange results, implying that QuerySet objects don't like to be trifled with. And, I didn't see anything in the docs good or bad for this sort of trick.
I know there are other ways to do this, but I'm specifically wanting to know if this is a bad idea, why it's bad, and if it is indeed bad, what the 'best' most django/pythonic way to change values on the fly would be.
This is fine as long as you don't do anything later that will cause the queryset to be re-evaluated - for example, slicing it. That will make another query to the database, and all your modified objects will be replaced with fresh ones.
A way to protect yourself against that would be to convert to a list first:
a = list(SomeTable.objects.all())
This way, further slicing etc won't cause a fresh db call, and any modifications will be preserved.
Yup. See docs here
SomeTable.objects.filter(some_column='foo').update(some_column='bar')
I would go with Django's idiom. It executes a single SQL statement with an UPDATE and a WHERE clause, rather than sending multiple SQL statements like your code would. This saves time. Check Django's connection.queries to compare the SQL timings.
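A quick way to eyeball the difference (a sketch; connection.queries is only populated when DEBUG=True):
from django.db import connection, reset_queries

reset_queries()
SomeTable.objects.filter(some_column='foo').update(some_column='bar')
for q in connection.queries:
    print(q['time'], q['sql'])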

Move a python / django object from a parent model to a child (subclass)

I am subclassing an existing model. I want many of the members of the parent class to now, instead, be members of the child class.
For example, I have a model Swallow. Now, I am making EuropeanSwallow(Swallow) and AfricanSwallow(Swallow). I want to take some but not all Swallow objects make them either EuropeanSwallow or AfricanSwallow, depending on whether they are migratory.
How can I move them?
It's a bit of a hack, but this works:
swallow = Swallow.objects.get(id=1)
swallow.__class__ = AfricanSwallow
# set any required AfricanSwallow fields here
swallow.save()
I know this is much later, but I needed to do something similar and couldn't find much. I found the answer buried in some source code here, but also wrote an example class-method that would suffice.
class AfricanSwallow(Swallow):

    @classmethod
    def save_child_from_parent(cls, swallow, new_attrs):
        """
        Inputs:
        - swallow: instance of Swallow we want to turn into an AfricanSwallow
        - new_attrs: dictionary of new attributes for AfricanSwallow

        Adapted from:
        https://github.com/lsaffre/lino/blob/master/lino/utils/mti.py
        """
        parent_link_field = AfricanSwallow._meta.parents.get(swallow.__class__, None)
        new_attrs[parent_link_field.name] = swallow
        for field in swallow._meta.fields:
            new_attrs[field.name] = getattr(swallow, field.name)
        s = AfricanSwallow(**new_attrs)
        s.save()
        return s
I couldn't figure out how to get my form validation to work with this method, however, so it could certainly be improved further; it probably means a database refactoring might be the best long-term solution...
Depends on what kind of model inheritance you'll use. See
http://docs.djangoproject.com/en/dev/topics/db/models/#model-inheritance
for the three classic kinds. Since it sounds like you want Swallow objects, an abstract base class is ruled out.
If you want to store different information in the db for Swallow vs AfricanSwallow vs EuropeanSwallow, then you'll want to use MTI. The biggest problem with MTI, as the official Django docs describe it, is that polymorphism doesn't work properly. That is, if you fetch a Swallow object from the DB which is actually an AfricanSwallow object, you won't get an instance of AfricanSwallow. (See this question.) Something like django-model-utils' InheritanceManager can help overcome that.
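A minimal sketch of how that would look with django-model-utils (assuming the package is installed):
from django.db import models
from model_utils.managers import InheritanceManager

class Swallow(models.Model):
    objects = InheritanceManager()

# returns AfricanSwallow / EuropeanSwallow instances where the rows have them
swallows = Swallow.objects.select_subclasses()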
If you have actual data you need to preserve through this change, use South migrations. Make two migrations -- first one that changes the schema and another that copies the appropriate objects' data into subclasses.
I suggest using django-model-utils' InheritanceCastModel. This is one implementation I like. You can find many more in djangosnippets and some blogs, but after going through them all I chose this one. Hope it helps.
Another (outdated) approach: if you don't mind losing the parent's id, you can just create brand new child instances from the parent's attrs. This is what I did:
ids = [s.pk for s in Swallow.objects.all()]
# I get the ids list to avoid memory problems with long lists
for i in ids:
    p = Swallow.objects.get(pk=i)
    c = AfricanSwallow(att1=p.att1, att2=p.att2.....)
    p.delete()
    c.save()
Once this runs, a new AfricanSwallow instance will be created replacing each initial Swallow instance
Maybe this will help someone :)