I've been drooling over Django all day while coding up an internal website in record time, but now I'm noticing that something is very inefficient with my ForeignKeys in the model.
I have a model which has 6 ForeignKeys, which are basically lookup tables. When I query all objects and display them in a template, it's running about 10 queries per item. Here's some code, which ought to explain it better:
from django.db import models

class Website(models.Model):
    domain_name = models.CharField(max_length=100)
    registrant = models.ForeignKey('Registrant')
    account = models.ForeignKey('Account')
    registrar = models.ForeignKey('Registrar')
    server = models.ForeignKey('Server', related_name='server')
    host = models.ForeignKey('Host')
    target_server = models.ForeignKey('Server', related_name='target')

class Registrant(models.Model):
    name = models.CharField(max_length=100)
...and 5 more simple tables. There are 155 Website records, and in the view I'm using:
Website.objects.all()
It ends up executing 1544 queries. In the template, I'm using all of the foreign fields, as in:
<span class="value">Registrant:</span> {{ website.registrant.name }}<br />
So I know it's going to run a lot of queries...but it seems like this is excessive. Is this normal? Should I not be doing it this way?
I'm pretty new to Django, so hopefully I'm just doing something stupid. It's definitely a pretty amazing framework.
You should use the select_related method, e.g.
Website.objects.select_related()
so that it will automatically do a join and follow all of those foreign keys when the query is performed, instead of loading them on demand as they are used. Django loads data from the database lazily, so by default you get the following behavior:
# one database query
website = Website.objects.get(id=123)
# first time account is referenced, so another query
print website.account.username
# account has already been loaded, so no new query
print website.account.email_address
# first time registrar is referenced, so another query
print website.registrar.name
and so on. If you use select_related, then a join is performed behind the scenes and all of these foreign keys are automatically followed and loaded on the first query, so only one database query is performed. So in the above example, you'd get
# one database query with a join and all foreign keys followed
website = Website.objects.select_related().get(id=123)
# no additional query is needed because the data is already loaded
print website.account.username
print website.account.email_address
print website.registrar.name
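Applied to the view from the question, a minimal sketch might look like this (the field names come from the Website model above; naming them explicitly keeps the join limited to what the template actually uses):
websites = Website.objects.select_related(
    'registrant', 'account', 'registrar', 'server', 'host', 'target_server'
)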
Difference between creating a foreign key for consistency and for joins
I am fine using ForeignKey and the QuerySet API with Django.
I just want to understand a little more deeply how it works behind the scenes.
The Django manual says:
a database index is automatically created on the ForeignKey. You can
disable this by setting db_index to False. You may want to avoid the
overhead of an index if you are creating a foreign key for consistency
rather than joins, or if you will be creating an alternative index
like a partial or multiple column index.
creating a foreign key for consistency rather than joins
This part is confusing me.
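(For reference, the setting the docs mention is just an argument on the field; a minimal sketch:)
place = models.ForeignKey('Place', db_index=False)  # skips the automatic index on the FK column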
I expected that a JOIN keyword would be used if you query with a foreign key, like below.
SELECT
*
FROM
vehicles
INNER JOIN users ON vehicles.car_owner = users.user_id
For example,
from django.db import models

class Place(models.Model):
    name = models.CharField(max_length=50)
    address = models.CharField(max_length=50)

class Comment(models.Model):
    place = models.ForeignKey(Place)
    content = models.CharField(max_length=50)
If you use a queryset like Comment.objects.filter(place=1), I expected a JOIN keyword in the low-level SQL command.
But when I checked by printing out queryset.query in the console, it showed the following.
(I simplified the model above just to explain; the output below shows all the attributes of my actual model, which you can ignore.)
SELECT
"bfm_comment"."id", "bfm_comment"."content", "bfm_comment"."user_id", "bfm_comment"."place_id", "bfm_comment"."created_at"
FROM "bfm_comment" WHERE "bfm_comment"."place_id" = 1
creating a foreign key for consistency vs creating a foreign key for joins
Simply put, I thought that using any queryset meant using the foreign key for joins, because you can easily get the parent table's data with c = Comment.objects.get(id=1) and then c.place.name. I thought it joined the two tables behind the scenes. But the result of print(queryset.query) didn't show a JOIN keyword; it found the row with a WHERE clause instead.
The way I understood it from an answer:
Case 1:
Comment.objects.filter(place=1)
result
SELECT
"bfm_comment"."id", "bfm_comment"."content", "bfm_comment"."user_id", "bfm_comment"."place_id", "bfm_comment"."created_at"
FROM "bfm_comment"
WHERE "bfm_comment"."id" = 1
Case 2:
Comment.objects.filter(place__name="df")
result
SELECT "bfm_comment"."id", "bfm_comment"."content", "bfm_comment"."user_id", "bfm_comment"."place_id", "bfm_comment"."created_at"
FROM "bfm_comment" INNER JOIN "bfm_place" ON ("bfm_comment"."place_id" = "bfm_place"."id")
WHERE "bfm_place"."name" = df
Case 1 searches for rows whose place_id column is 1, using just the Comment table.
But in Case 2, it needs to know the Place table's attribute 'name', so it has to use the JOIN keyword to check values in a column of the Place table. Right?
So is it alright to think that I am creating a foreign key for joins if I use a queryset like Case 2, and that it is then better to create an index on the foreign key?
For the above question, I think I can take the answer from the Django manual:
Consider adding indexes to fields that you frequently query using
filter(), exclude(), order_by(), etc. as indexes may help to speed up
lookups. Note that determining the best indexes is a complex
database-dependent topic that will depend on your particular
application. The overhead of maintaining an index may outweigh any
gains in query speed.
In conclusion, it really depends on how my application works with it.
If you execute the following command, the mystery will be revealed:
./manage.py sqlmigrate myapp 0001
Take care to replace myapp with your app name (bfm I think) and 0001 with the actual migration where the Comment model is created.
The generated SQL will reveal that the actual table is created with a place_id integer column rather than a place Place column. That is because the RDBMS doesn't know anything about models; the models exist only at the application level. It's the job of the Django ORM to fetch data from the RDBMS and convert it into model instances. That's why you always get a place member on each of your Comment instances, and that place member in turn gives you access to the members of the related Place instance.
So what happens when you do this?
Comment.objects.filter(place=1)
Django is smart enough to know that you are referring to a place_id, because 1 is obviously not an instance of Place. But if you used a Place instance, the result would be the same. So there is no join here. The above query would definitely benefit from an index on place_id, but it wouldn't benefit from a foreign key constraint! Only the Comment table is queried.
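A quick sketch of that point, using the models above (the exact printed SQL will vary by backend):
place = Place.objects.get(pk=1)

qs_by_instance = Comment.objects.filter(place=place)
qs_by_id = Comment.objects.filter(place=1)

# Both print the same single-table query filtering on place_id, with no JOIN
print(qs_by_instance.query)
print(qs_by_id.query)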
If you want a join, try this:
Comment.objects.filter(place__name='my home')
Queries of this nature, with the __ lookup, often result in joins, but sometimes they result in a subquery.
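For instance, a sketch where the ORM emits a subquery rather than a join, by passing a queryset to an __in lookup on the models above:
# The inner queryset is not evaluated; it becomes a subquery in the SQL,
# roughly: WHERE "bfm_comment"."place_id" IN (SELECT ... FROM "bfm_place" ...)
qs = Comment.objects.filter(place__in=Place.objects.filter(name='df'))
print(qs.query)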
Querysets are lazy.
https://docs.djangoproject.com/en/1.10/topics/db/queries/#querysets-are-lazy
QuerySets are lazy – the act of creating a QuerySet doesn’t involve
any database activity. You can stack filters together all day long,
and Django won’t actually run the query until the QuerySet is
evaluated. Take a look at this example:
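The docs' example uses their own models; a similar sketch with the Comment model from above:
qs = Comment.objects.filter(place__name='df')  # no database query yet
qs = qs.exclude(content='')                    # still no query; filters just stack up
print(qs)                                      # evaluating the queryset runs the SQL now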
I have a small Django project to learn with (it's a web UI for the RANCID backup software) and I've run into a problem.
The model for the app defines Devices and DeviceGroups. Each Device is a member of a group and has a couple of state flags - Enabled, Successful - to indicate whether it is operating correctly. Here are the relevant bits.
from django.db import models

class DeviceGroup(models.Model):
    group_name = models.CharField(max_length=60, unique=True)

class Device(models.Model):
    hostname = models.CharField(max_length=60, unique=True)
    enabled = models.BooleanField(default=True)
    device_group = models.ForeignKey(DeviceGroup)
    last_was_success = models.BooleanField(default=False, editable=False)
I have a summary table on the front 'dashboard' page, that shows a list of all the groups, and for each group, how many devices are in it. I'd like to also show the number of Active devices, and the number of failing (i.e. Not last_was_success) devices per-group. The plain device count is already available through the ForeignKey field.
This seems like the kind of thing that annotate is for, but not quite. And actually, I'm not sure how I'd do it with raw SQL either. Most likely as three queries and some lookup afterwards, or subqueries.
So - is it possible 'nicely' in Django? Or alternatively, how do you do the joining up again in the Template or View? The object passed into the template is simply:
device_groups = DeviceGroup.objects.order_by('group_name')
currently, and I don't think I can just add extra fields onto the queryset results "manually", can I? i.e. it's not a dict or similar.
I think you must use
device_groups = DeviceGroup.objects.all().order_by('group_name')
or
device_groups = DeviceGroup.objects.filter(condition).order_by('group_name')
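For the per-group counts asked about above, here is a hedged sketch using conditional aggregation (this assumes Django 1.8+ for Case/When, and that the reverse relation from DeviceGroup to Device keeps its default query name, device):
from django.db.models import Case, Count, IntegerField, Sum, When

device_groups = DeviceGroup.objects.annotate(
    device_count=Count('device'),
    # Devices per group that are enabled
    enabled_count=Sum(Case(When(device__enabled=True, then=1),
                           default=0, output_field=IntegerField())),
    # Devices per group whose last run was not a success
    failing_count=Sum(Case(When(device__last_was_success=False, then=1),
                           default=0, output_field=IntegerField())),
).order_by('group_name')

Model instances are also ordinary Python objects, so a view could instead attach such counts as attributes manually before passing the groups to the template, at the cost of extra queries.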
I've always found the Django orm's handling of subclassing models to be pretty spiffy. That's probably why I run into problems like this one.
Take three models:
from django.db import models

class A(models.Model):
    field1 = models.CharField(max_length=255)

class B(A):
    fk_field = models.ForeignKey('C')

class C(models.Model):
    field2 = models.CharField(max_length=255)
So now you can query the A model and get all the B models, where available:
the_as = A.objects.all()
for a in the_as:
    print a.b.fk_field.field2  # Note that this throws an error if there is no B record
The problem with this is that you are looking at a huge number of database calls to retrieve all of the data.
Now suppose you wanted to retrieve a QuerySet of all A models in the database, but with all of the subclass records and the subclass's foreign key records as well, using select_related() to limit your app to a single database call. You would write a query like this:
the_as = A.objects.select_related("b", "b__fk_field").all()
One query returns all of the data needed! Awesome.
Except not. Because this version of the query is doing its own filtering, even though select_related is not supposed to filter any results at all:
set_1 = A.objects.select_related("b", "b__fk_field").all() #Only returns A objects with associated B objects
set_2 = A.objects.all() #Returns all A objects
len(set_1) > len(set_2) #Will always be False
I used the django-debug-toolbar to inspect the query and found the problem. The generated SQL query uses an INNER JOIN to join the C table to the query, instead of a LEFT OUTER JOIN like other subclassed fields:
SELECT "app_a"."field1", "app_b"."fk_field_id", "app_c"."field2"
FROM "app_a"
LEFT OUTER JOIN "app_b" ON ("app_a"."id" = "app_b"."a_ptr_id")
INNER JOIN "app_c" ON ("app_b"."fk_field_id" = "app_c"."id");
And it seems if I simply change the INNER JOIN to LEFT OUTER JOIN, then I get the records that I want, but that doesn't help me when using Django's ORM.
Is this a bug in select_related() in Django's ORM? Is there any work around for this, or am I simply going to have to do a direct query of the database and map the results myself? Should I be using something like Django-Polymorphic to do this?
It looks like a bug; specifically, it seems to be ignoring the nullable nature of the A->B relationship. If, for example, you had a foreign key reference to B in A instead of the subclassing, that foreign key could be nullable, and Django would use a LEFT JOIN for it. You should probably raise this in the Django issue tracker. You could also try using prefetch_related instead of select_related; that might get around your issue.
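To illustrate the comparison that answer draws, here is a sketch with hypothetical alternative models (an explicit nullable foreign key instead of subclassing), for which select_related does produce a LEFT OUTER JOIN:
from django.db import models

class C(models.Model):
    field2 = models.CharField(max_length=255)

class B(models.Model):
    fk_field = models.ForeignKey(C)

class A(models.Model):
    field1 = models.CharField(max_length=255)
    b = models.ForeignKey(B, null=True, blank=True)  # nullable, so select_related('b') uses a LEFT OUTER JOIN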
I found a workaround for this, but I will wait a while to accept it in hopes that I can get some better answers.
The INNER JOIN created by select_related('b__fk_field') needs to be removed from the underlying SQL so that the results aren't filtered by the B records in the database. So the new query needs to leave the b__fk_field parameter out of select_related:
the_as = A.objects.select_related('b')
However, this forces us to call the database every time a C object is accessed from the A object.
for a in the_as:
    # Note that this throws a DoesNotExist error if a doesn't have an
    # associated b
    print a.b.fk_field.field2  # Hits the database every time.
The hack to work around this is to get all of the C objects we need from the database in one query and then have each B object reference them manually. We can do this because the B objects we retrieved already carry the fk_field_id that references their associated C object:
c_ids = [a.b.fk_field_id for a in the_as]  # Get all the C ids
the_cs = C.objects.filter(pk__in=c_ids)    # Run a query to get all of the needed C records

for c in the_cs:
    for a in the_as:
        if a.b.fk_field_id == c.pk:  # Throws DoesNotExist if no b associated with a
            a.b.fk_field = c
            break
I'm sure there's a functional way to write that without the nested loop, but this illustrates what's happening. It's not ideal, but it provides all of the data with the absolute minimum number of database hits - which is what I wanted.
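For reference, a sketch of one such rewrite (under the same assumption as the snippet above that every A has an associated B; it also handles several A rows sharing one C):
cs_by_pk = {c.pk: c for c in the_cs}  # one pass to build a pk -> C lookup
for a in the_as:
    a.b.fk_field = cs_by_pk[a.b.fk_field_id]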
I've got 2 existing models that I need to join that are non-relational (no foreign keys). These were written by other developers and cannot be modified by me.
Here's a quick description of them:
Model Process
    Field filename
    Field path
    Field somethingelse
    Field bar

Model Service
    Field filename
    Field path
    Field servicename
    Field foo
I need to join all instances of these two models on the filename and path columns. I've got existing filters I have to apply to each of them before this join occurs.
Example:
A = Process.objects.filter(somethingelse=231)
B = Service.objects.filter(foo='abc')
result = A.filter(filename=B.filename, path=B.path)  # what I'd like to express; this doesn't actually work
This sucks, but your best bet is to iterate all models of one type, and issue queries to get your joined models for the other type.
The other alternative is to run a raw SQL query to perform these joins, and retrieve the IDs for each model object, and then retrieve each joined pair based on that. More efficient at run time, but it will need to be manually maintained if your schema evolves.
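A sketch of the first approach, using the filters and field names given in the question:
pairs = []
for process in Process.objects.filter(somethingelse=231):
    # One extra query per Process row, matching on the shared columns
    matches = Service.objects.filter(foo='abc',
                                     filename=process.filename,
                                     path=process.path)
    for service in matches:
        pairs.append((process, service))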
I have 2 tables, simpleDB_all and simpleDB_some. The "all" table has an entry for every item I want, while the "some" table has entries only for some items that need additional information. The Django models for these are:
from django.db import models

class all(models.Model):
    name = models.CharField(max_length=40)
    important_info = models.CharField(max_length=40)

class some(models.Model):
    all_key = models.OneToOneField(all)
    extra_info = models.CharField(max_length=40)
I'd like to create a view that shows every item in "all" with the extra info if it exists in "some". Since I'm using a 1-1 field I can do this with almost complete success:
allitems = all.objects.all()
for item in allitems:
    print item.name, item.important_info, item.some.extra_info
but when I get to the item that doesn't have a corresponding entry in the "some" table I get a DoesNotExist exception.
Ideally I'd be doing this loop inside a template, so it's impossible to wrap it in a "try" clause. Any thoughts?
I can get the desired effect directly in SQL using a query like this:
SELECT all.name, all.important_info, some.extra_info
FROM all LEFT JOIN some ON all.id = some.all_key_id;
But I'd rather not use raw SQL.
You won't get a DoesNotExist exception in the template - they are hidden, by design, by the template system.
The SQL you give is what is generated, more or less, when you use select_related on your query (if you're using Django 1.2 or a checkout more recent than r12307, from February):
allitems = all.objects.select_related('some')
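If the same loop ever runs in Python code rather than in a template (where, as noted, the exception is hidden), a sketch for handling the rows with no "some" entry might look like this:
allitems = all.objects.select_related('some')
for item in allitems:
    try:
        extra_info = item.some.extra_info
    except some.DoesNotExist:
        extra_info = ''  # no matching "some" row for this item
    print item.name, item.important_info, extra_info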