How expensive are `count` calls for Django querysets?

I have a list of "posts" I have to render. For each post, I must do three filter querysets, OR them together, and then count the number of objects. Is this reasonable? What factors might make this slow?
This is roughly my code:
def viewable_posts(request, post):
    private_posts = post.replies.filter(permissions=Post.PRIVATE, author_profile=request.user.user_profile).order_by('-modified_date')
    community_posts = post.replies.filter(permissions=Post.COMMUNITY, author_profile__in=request.user.user_profile.following.all()).order_by('-modified_date')
    public_posts = post.replies.filter(permissions=Post.PUBLIC).order_by('-modified_date')
    mixed_posts = private_posts | community_posts | public_posts
    return mixed_posts

def viewable_posts_count(request, post):
    return viewable_posts(request, post).count()

The biggest factor I can see is that you run filter actions for each post. If possible, you should fetch the results associated with each post in ONE query. As for the count, it's the most efficient way of getting the number of results from a query, so it's likely not the problem.
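For context (an illustration of mine, not from the original answer): count() runs a single SELECT COUNT(*) on the database instead of fetching rows, which you can verify by inspecting the SQL Django records when DEBUG = True:

from django.db import connection, reset_queries

reset_queries()
viewable_posts(request, post).count()
# With DEBUG = True, the last recorded query is a SELECT COUNT(*) ...
print(connection.queries[-1]['sql'])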

Try the following code:
def viewable_posts(request, post):
    private_posts = post.replies.filter(permissions=Post.PRIVATE, author_profile=request.user.user_profile).values_list('id', flat=True)
    community_posts = post.replies.filter(permissions=Post.COMMUNITY, author_profile__in=request.user.user_profile.following.all()).values_list('id', flat=True)
    public_posts = post.replies.filter(permissions=Post.PUBLIC).values_list('id', flat=True)
    # values_list gives querysets, so collect the ids into a plain list first
    Lposts_id = list(private_posts)
    Lposts_id.extend(community_posts)
    Lposts_id.extend(public_posts)
    viewable_posts = post.replies.filter(id__in=Lposts_id).order_by('-modified_date')
    viewable_posts_count = post.replies.filter(id__in=Lposts_id).count()
    return viewable_posts, viewable_posts_count
It should improve the following things:
- order_by runs once, instead of three times
- the count method runs on a query that selects only the indexed id field
- Django uses a faster filter with values_list, both for the count and for the filtering
- depending on your database, the database's own cache may pick up the posts just queried for viewable_posts and reuse them for viewable_posts_count
Indeed, if you can squeeze the first three filter queries into one, you will save time as well.
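If you do want the three permission branches in a single query, a sketch (mine, not from the original answers, assuming the Post model and user_profile relations shown in the question) using Q objects could look like this:

from django.db.models import Q

def viewable_posts(request, post):
    profile = request.user.user_profile
    # OR the three permission branches inside one filter() call
    # instead of building three separate querysets and combining them.
    return post.replies.filter(
        Q(permissions=Post.PRIVATE, author_profile=profile)
        | Q(permissions=Post.COMMUNITY, author_profile__in=profile.following.all())
        | Q(permissions=Post.PUBLIC)
    ).order_by('-modified_date')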

Related

Django: replace a loop of queries with a single ORM query

In the following code I am converting date_no to the day of the week, so I have to loop seven times, once for each day, and the query executes each time. I want to change this code so that there is no loop and only one query runs.
import calendar

from django.db.models import Avg, Max, Min

day_wise = {}
for date_no in range(1, 7):
    # Total 7 queries
    BusinessShareInfo_result = BusinessShareInfo.objects.filter(Date__week_day=date_no).all()
    day_wise[calendar.day_name[date_no]] = {'Average': 0, 'Maximum': 0, 'Minimum': 0}
    data = BusinessShareInfo_result.aggregate(Avg('Turnover'), Max('Turnover'), Min('Turnover'))
    day_wise[calendar.day_name[date_no]]['Average'] = data['Turnover__avg']
    day_wise[calendar.day_name[date_no]]['Maximum'] = data['Turnover__max']
    day_wise[calendar.day_name[date_no]]['Minimum'] = data['Turnover__min']
I just want the functionality to stay the same, but without any loop.
Even if you do not write a loop yourself, a loop still runs somewhere, for example while fetching rows from the database. That looping is not what is inefficient. What is inefficient is making seven queries, because making a query, regardless of what the query is, is already expensive in itself.
You can make use of an ExtractWeekDay expression [Django-doc] to reduce the number of queries to one:
from django.db.models import Avg, Max, Min
from django.db.models.functions import ExtractWeekDay

qs = BusinessShareInfo.objects.values(
    week_day=ExtractWeekDay('Date')
).annotate(
    Average=Avg('Turnover'),
    Max=Max('Turnover'),
    Min=Min('Turnover'),
)
result = {
    calendar.day_name[r['week_day']]: {
        'Average': r['Average'],
        'Max': r['Max'],
        'Min': r['Min'],
    }
    for r in qs
}
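One caveat worth flagging (my note, not from the original answer): ExtractWeekDay follows the week_day lookup's numbering, 1 (Sunday) through 7 (Saturday), while calendar.day_name is indexed 0 (Monday) through 6 (Sunday), so the dictionary key above needs a shift, for example:

# Map Django's 1=Sunday..7=Saturday numbering onto
# calendar.day_name's 0=Monday..6=Sunday indexing.
calendar.day_name[(r['week_day'] - 2) % 7]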
Note: normally the names of the fields in a Django model are written in snake_case, not PascalCase, so it should be date instead of Date.

Django: ManyToMany filter matching on ALL items in a list

I have the following Book model:
class Book(models.Model):
    authors = models.ManyToManyField(Author, ...)
    ...
In short:
I'd like to retrieve the books whose authors are strictly equal to a given set of authors. I'm not sure if there is a single query that does it, but any suggestions will be helpful.
In long:
Here is what I tried (it fails with an AttributeError):
from django.db.models import Count

# A sample set of authors
target_authors = set((author_1, author_2))

# To reduce the search space,
# first retrieve those books with just 2 authors.
candidate_books = Book.objects.annotate(c=Count('authors')).filter(c=len(target_authors))

final_books = QuerySet()
for author in target_authors:
    temp_books = candidate_books.filter(authors__in=[author])
    final_books = final_books and temp_books
... and here is what I got:
AttributeError: 'NoneType' object has no attribute '_meta'
In general, how should I query a model with the constraint that its ManyToMany field contains a set of given objects as in my case?
ps: I found some relevant SO questions but couldn't get a clear answer. Any good pointer will be helpful as well. Thanks.
Similar to @goliney's approach, I found a solution. However, I think the efficiency could be improved.
# A sample set of authors
target_authors = set((author_1, author_2))

# To reduce the search space, first retrieve those books with just 2 authors.
candidate_books = Book.objects.annotate(c=Count('authors')).filter(c=len(target_authors))

# In each iteration, we filter out those books which don't contain one of the
# required authors - the instance on the iteration.
for author in target_authors:
    candidate_books = candidate_books.filter(authors=author)

final_books = candidate_books
You can use complex lookups with Q objects:
from django.db.models import Q
...
target_authors = set((author_1, author_2))

q = Q()
for author in target_authors:
    q &= Q(authors=author)

Book.objects.annotate(c=Count('authors')).filter(c=len(target_authors)).filter(q)
Note that Q() & Q() is not equivalent to .filter().filter(): the raw SQL is different. Combining Q objects with & just adds conditions to a single WHERE clause, like WHERE "book"."author" = "author_1" AND "book"."author" = "author_2", which can never both hold for one row, so it returns an empty result.
The only way to match ALL authors is to chain filter() calls, which produces a separate inner join on the through table for each condition: ... ON ("author"."id" = "author_book"."author_id") INNER JOIN "author_book" T4 ON ("author"."id" = T4."author_id") WHERE ("author_book"."author_id" = "author_1" AND T4."author_id" = "author_2")
I came across the same problem and came to the same conclusion as iuysal, until I had to do a medium-sized search (with 1,000 records and 150 filters my request would time out).
In my particular case the search would usually return no records, since the chance that a single record aligns with ALL 150 filters is very small, so you can get around the performance issues by verifying that there are records left in the QuerySet before applying more filters, to save time:
# In each iteration, we filter out those books which don't contain one of the
# required authors - the instance on the iteration.
for author in target_authors:
    if candidate_books.count() > 0:
        candidate_books = candidate_books.filter(authors=author)
For some reason Django applies filters even to empty QuerySets.
For optimization to be applied correctly, however, a prepared QuerySet and correctly applied indexes are necessary.
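As an aside (my suggestion, not part of the original answer): when the check only needs to know whether any rows remain, exists() is usually cheaper than count() > 0, because the database can stop at the first matching row:

# Sketch: exists() issues a LIMIT 1 query instead of a COUNT(*),
# and breaking out early skips the remaining pointless filters.
for author in target_authors:
    if not candidate_books.exists():
        break
    candidate_books = candidate_books.filter(authors=author)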

Django QuerySet update performance

Which one would be better for performance?
We take a slice of products, which makes it impossible to bulk update:
products = Product.objects.filter(featured=True).order_by("-modified_on")[3:]

for product in products:
    product.featured = False
    product.save()
or (invalid):
for product in products.iterator():
    product.update(featured=False)
I have also tried the QuerySet in lookup, as follows:
Product.objects.filter(pk__in=products).update(featured=False)
This line works fine on SQLite, but it raises the following exception on MySQL, so I couldn't use it:
DatabaseError: (1235, "This version of MySQL doesn't yet support
'LIMIT & IN/ALL/ANY/SOME subquery'")
Edit: Also, the iterator() method causes the query to be re-evaluated, so it is bad for performance.
As @Chris Pratt pointed out in the comments, the second example is invalid because the objects don't have update methods. Your first example will require a number of queries equal to results + 1, since it has to update each object individually. That can be really costly if you have 1,000 products. Ideally you want to reduce this to a more fixed expense if possible.
This is a similar situation to another question:
Django: Cannot update a query once a slice has been taken
That being said, you would have to do it in at least 2 queries, and you have to be a bit sneaky about how you construct the LIMIT...
Using Q objects for complex queries:
from django.db.models import Q

# get the IDs we want to exclude
products = Product.objects.filter(featured=True).order_by("-modified_on")[:3]

# flatten them into just a list of ids
ids = products.values_list('id', flat=True)

# use Q objects to build a list of "AND id NOT EQUAL TO i" conditions
limits = [~Q(id=i) for i in ids]

Product.objects.filter(*limits, featured=True).update(featured=False)
In some cases it's acceptable to cache the QuerySet in a list:
products = list(products)
Product.objects.filter(pk__in=products).update(featured=False)
A small optimization with values_list:
products_id = list(products.values_list('id', flat=True))
Product.objects.filter(pk__in=products_id).update(featured=False)
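Putting the pieces together (a sketch of mine, not from the original answers): materializing the ids of the sliced queryset into a list sidesteps the MySQL subquery limitation and still performs a single UPDATE:

# Evaluate the slice into a plain list of ids first, so MySQL never
# sees a LIMIT inside an IN (...) subquery, then bulk-update once.
ids = list(
    Product.objects.filter(featured=True)
    .order_by("-modified_on")
    .values_list('id', flat=True)[3:]
)
Product.objects.filter(pk__in=ids).update(featured=False)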

Improving Django performance with 350,000+ records and a complex query

I have a model like this:
class Stock(models.Model):
    product = models.ForeignKey(Product)
    place = models.ForeignKey(Place)
    date = models.DateField()
    quantity = models.IntegerField()
I need to get the latest (by date) quantity for every product in every place, with almost 500 products, 100 places and 350,000 stock records in the database.
My current code is below. It worked in testing, but with the real data it takes so long that it's useless:
stocks = Stock.objects.filter(product__in=self.products,
                              place__in=self.places, date__lt=date_at)
stock_values = {}
for prod in self.products:
    for place in self.places:
        key = u'%s%s' % (prod.id, place.id)
        stock = stocks.filter(product=prod, place=place, date=date_at)
        if len(stock) > 0:
            stock_values[key] = stock[0].quantity
        else:
            try:
                stock = stocks.filter(product=prod, place=place).order_by('-date')[0]
            except IndexError:
                stock_values[key] = 0
            else:
                stock_values[key] = stock.quantity
return stock_values
How would you make it faster?
Edit:
I rewrote the code as this:
stock_values = {}
for product in self.products:
    for place in self.places:
        try:
            stock_value = Stock.objects.filter(product=product, place=place, date__lte=date_at)\
                .order_by('-date').values('quantity')[0]['quantity']
        except IndexError:
            stock_value = 0
        stock_values[u'%s%s' % (product.id, place.id)] = stock_value
return stock_values
It works better (from 256 seconds down to 64), but it still needs improving. Maybe some custom SQL, I don't know...
Arthur's right: len(stock) isn't the most efficient way to do that. You could go further along the "easier to ask for forgiveness than permission" route with something like this inside the inner loop:
key = u'%s%s' % (prod.id, place.id)
try:
    stock = stocks.filter(product=prod, place=place, date=date_at)[0]
    quantity = stock.quantity
except IndexError:
    try:
        stock = stocks.filter(product=prod, place=place).order_by('-date')[0]
        quantity = stock.quantity
    except IndexError:
        quantity = 0
stock_values[key] = quantity
I'm not sure how much that would improve things compared to just changing the length check, though it should at least restrict each lookup to two queries with LIMIT 1 on them (see Limiting QuerySets).
Mind you, this still performs a lot of database hits, since you could run through that loop almost 50,000 times. Optimize how you're looping and you're in a better position still.
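To go further (a sketch of mine, not from the original answer): on PostgreSQL you can collapse the whole loop into a single query, because distinct(*fields) maps to DISTINCT ON and keeps the first row per (product, place) pair after ordering newest-first:

# PostgreSQL-only sketch: one query for the latest row per pair.
# Pairs with no stock rows simply won't appear in the dict
# (the loop version stored 0 for those).
latest = (
    Stock.objects
    .filter(product__in=self.products, place__in=self.places, date__lte=date_at)
    .order_by('product_id', 'place_id', '-date')
    .distinct('product_id', 'place_id')
)
stock_values = {u'%s%s' % (s.product_id, s.place_id): s.quantity for s in latest}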
Maybe the trick is in that len() method!
From the docs:
Note: Don't use len() on QuerySets if all you want to do is determine the number of records in the set. It's much more efficient to handle a count at the database level, using SQL's SELECT COUNT(*), and Django provides a count() method for precisely this reason. See count() below.
So try changing len() to count(), and see if it makes things faster!
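A minimal illustration of the difference (my example, not from the original answer):

n = stocks.count()  # one SELECT COUNT(*); only the number crosses the wire
n = len(stocks)     # evaluates the queryset: fetches every row, then counts in Python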

Django Object Filter (last 1000)

How would one go about retrieving the last 1,000 values from a database via an objects.filter? The one I currently have brings me the first 1,000 values entered into the database (i.e. with 10,000 rows it brings me rows 1-1,000 instead of 9,000-10,000).
Current Code:
limit = 1000
Shop.objects.filter(ID=someArray[ID])[:limit]
Cheers
Solution:
queryset = Shop.objects.filter(id=someArray[id])
limit = 1000
count = queryset.count()
endoflist = queryset.order_by('timestamp')[count-limit:]
endoflist is the queryset you want.
Efficiency:
The following is from the Django docs about the reverse() queryset method:
To retrieve the last five items in a queryset, you could do this:
my_queryset.reverse()[:5]
Note that this is not quite the same as slicing from the end of a sequence in Python. The above example will return the last item first, then the penultimate item and so on. If we had a Python sequence and looked at seq[-5:], we would see the fifth-last item first. Django doesn't support that mode of access (slicing from the end), because it's not possible to do it efficiently in SQL.
So I'm not sure if my answer is merely inefficient, or extremely inefficient. I moved the order_by to the final query, but I'm not sure if this makes a difference.
reversed(Shop.objects.filter(id=someArray[id]).reverse()[:limit])
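An alternative sketch (mine, not from the original answers) that avoids the extra count() query: order newest-first, slice, then restore chronological order in Python:

# One query with ORDER BY timestamp DESC LIMIT 1000; reversing the
# small result list in memory restores chronological order.
last_rows = list(Shop.objects.filter(ID=someArray[ID]).order_by('-timestamp')[:limit])
last_rows.reverse()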