I'm constructing a query using the Q object but it's hanging.
When I "AND" the filters together, the query works fine. Here is the example:
School.objects.filter( Q(city__search='"orlando"'), Q(schoolattribute__attribute__name__search='"subjects"') )
But when I "OR" the filters together, the query just hangs because I'm assuming there's too much to process:
School.objects.filter( Q(city__search='"orlando"') | Q(schoolattribute__attribute__name__search='"subjects"')
I'm wondering what's going on here exactly and what can I do to mitigate it. Why does the query work when "AND" is used, but not when "OR" is used?
EDIT: Good tip, @psagers. It turns out that the AND query gets two INNER JOINs, whereas the OR query gets two LEFT OUTER JOINs.
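You can verify the join types yourself by printing the SQL Django compiles for each variant; a quick sketch, assuming the School model from the question:
from django.db.models import Q

and_qs = School.objects.filter(
    Q(city__search='"orlando"'),
    Q(schoolattribute__attribute__name__search='"subjects"'),
)
or_qs = School.objects.filter(
    Q(city__search='"orlando"') |
    Q(schoolattribute__attribute__name__search='"subjects"')
)
print(and_qs.query)  # related tables joined with INNER JOIN
print(or_qs.query)   # same tables joined with LEFT OUTER JOIN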
Given your situation, I'll assume the following:
You have a really big data set
You don't want to fetch too many entries
To optimize your code, you'd probably be better off using two queries:
schools_by_city = School.objects.filter(city__search='"orlando"')
schools_by_attribute_city = School.objects.filter(schoolattribute__attribute__name__search='"subjects"')
result = set(schools_by_city).union(set(schools_by_attribute_city))
This will probably be better than your original query (because each query can use an INNER JOIN), but you should test it out. If my assumptions are wrong, you should probably rethink your db structure (e.g. use a specialized search tool instead of MySQL full-text search, rethink SchoolAttribute, whatever floats your boat).
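If your Django version has QuerySet.union() (1.11+), you can also push the union into the database as a SQL UNION, which keeps the INNER JOIN in each subquery; a sketch, untested against your schema:
schools_by_city = School.objects.filter(city__search='"orlando"')
schools_by_attribute = School.objects.filter(schoolattribute__attribute__name__search='"subjects"')
# UNION deduplicates by default, mirroring set().union()
result = schools_by_city.union(schools_by_attribute)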
When attempting to return a list of values from django objects, will performance be better using a list comprehension:
[x.value for x in Model.objects.all()]
or calling list() on django's values_list function:
list(Model.objects.values_list('value', flat=True))
and why?
The second approach (using values_list()) is the more efficient one. The reason is that it modifies the SQL query sent to the database so that only the requested values are selected.
The first approach FIRST selects all fields of every row from the database, and only then extracts the values in Python. So you have already "spent" the resources to fetch every column with that approach.
You can compare the queries generated by wrapping your QuerySet with str(queryset.query) and it will return the actual SQL query that gets executed.
See example below
from django.db import models

class Model(models.Model):
    foo = models.CharField(max_length=100)  # max_length is required for CharField
    bar = models.CharField(max_length=100)
str(Model.objects.all().query)
# SELECT "model"."id", "model"."foo", "model"."bar" FROM "model"
str(Model.objects.values_list("foo").query)
# SELECT "model"."foo" FROM "model"
I had also somewhat assumed the argument in the currently-accepted answer would be correct. Namely, that fetching fewer fields would lead to Model.objects.values_list('foo') taking less time to execute than Model.objects.all(). However, I didn't find this in practice when using %timeit.
I actually found that Model.objects.values_list('foo', flat=True) would take ~2-10x longer than just Model.objects.all(). I found this was the case for:
an empty django table
a table with 10s of rows
a table with millions of rows
Including or removing flat=True seemed to make no significant difference in execution time for values_list(). I would be interested in what others find as well.
So, from a pure "what SQL is executed" point of view, although the values_list() query fetches fewer field values from the db, I imagine there is additional logic in Django's source for .values_list() versus .all() that could lead to different execution times (including .all() taking less time).
However, to fully address the initial example code, we would also need to factor in any further execution-time differences from using a list comprehension [] in the .all() case vs list() in the .values_list() case. The general discussion of list() vs a list comprehension is covered in other questions already.
TL;DR: So I imagine it is a trade-off between these two factors:
the apparent difference in execution time between .values_list() and .all() (my tests indicate we can't simply deduce that fetching fewer fields leads to faster execution; more investigation of the underlying Django source code is needed to find the cause)
any differences between using a list comprehension and list()
In my test cases, I generally found the .all() query was actually faster than the .values_list() query, but when also factoring in the transformation to a list, the .values_list() scenario would take less time overall. So it may well depend on the scenario...
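For anyone who wants to reproduce these measurements, a minimal benchmark sketch (run inside a configured Django shell; Model is the example model from above, and the iteration count is arbitrary):
import timeit

# Force evaluation in both cases, otherwise the lazy QuerySet never hits the db.
t_all = timeit.timeit(lambda: [x.foo for x in Model.objects.all()], number=100)
t_values = timeit.timeit(lambda: list(Model.objects.values_list('foo', flat=True)), number=100)
print(f'.all(): {t_all:.3f}s  .values_list(): {t_values:.3f}s')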
I found that objects could be duplicated in a queryset. However, after I iterate over the objects (doing nothing with them), the result changes and seems to be correct.
Here are the commands I have typed into the shell
First I got a queryset ordered by the field 'receiveTime'. It then appeared that ds[1996] was equal to ds[1997]. So I tried this loop:
for d in ds:
    pass
After that, ds[1996] is no longer equal to ds[1997]. But what did I do?
Maybe it is an effect of lazy evaluation?
Update 1: I have just reproduced it. I didn't do any inserting or deleting in between.
These are the commands I just typed into the shell.
Update 2: I have looked at the raw SQL queries issued when I access ds[0] and ds[1], shown in the second picture. The SQL queries are correct, but the results seem wrong. I think the reason may be that the sort key receiveTime is the same for the two objects, which leaves their relative order undefined?
Here are the raw SQL queries:
Replace order_by("receive_time") with order_by("receive_time", "id"). PostgreSQL uses qsort, which is an unstable sort: given only receive_time, the order of rows with equal values is not guaranteed, and each index access on an unevaluated queryset (like ds[1996]) issues a fresh query that may break the tie differently.
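As a sketch (Message is a hypothetical stand-in for the model in the question):
# Ambiguous: rows sharing the same receive_time can come back in any order,
# and each index access on an unevaluated queryset runs a new query.
ds = Message.objects.order_by('receive_time')
# Deterministic: id breaks the tie, so every query agrees.
ds = Message.objects.order_by('receive_time', 'id')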
Don't post code or logs in images. Ever.
This question already has answers here: Django: __in query lookup doesn't maintain the order in queryset (closed as a duplicate 8 years ago).
I've searched online and could only find one blog that seemed like a hackish attempt to keep the order of a query list. I was hoping to query using the ORM with a list of strings, but doing it that way does not keep the order of the list.
From what I understand, in_bulk only works if you have the ids of the items you want to query.
Can anybody recommend an ideal way of querying by a list of strings and making sure the objects are kept in their proper order?
So in a perfect world I would be able to query a set of objects by doing something like this...
Entry.objects.filter(id__in=['list', 'of', 'strings'])
However, the results do not keep that order, so 'strings' could come before 'list', etc.
The only workaround I see (and I may just be tired, or this may be perfectly acceptable, I'm not sure) is doing this...
for i in listOfStrings:
    object = Object.objects.get(title=str(i))
    myIterableCorrectOrderedList.append(object)
Thank you,
The problem with your solution is that it does a separate database query for each item.
This answer gives the right solution if you're using ids: use in_bulk to create a map between ids and items, and then reorder them as you wish.
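For the id case, in_bulk gives you the map directly; a quick sketch with made-up ids:
ids = [42, 7, 19]
entry_map = Entry.objects.in_bulk(ids)  # one query, returns {id: Entry}
ordered_entries = [entry_map[i] for i in ids]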
If you're not using ids, you can just create the mapping yourself:
values = ['list', 'of', 'strings']
# one database query
entries = Entry.objects.filter(field__in=values)
# one trip through the list to create the mapping
entry_map = {entry.field: entry for entry in entries}
# one more trip through the list to build the ordered entries
ordered_entries = [entry_map[value] for value in values]
(You could save yourself a line by using list.index(), as in this example, but since index() is O(n) the performance will not be good for long lists.)
Remember that ultimately this is all done to a database; these operations get translated down to SQL somewhere.
Your Django query loosely translated into SQL would be something like:
SELECT * FROM entry_table e WHERE e.title IN ("list", "of", "strings");
So, in a way, your question is equivalent to asking how to ORDER BY the order something was specified in a WHERE clause. (Needless to say, I hope, this is a confusing request to write in SQL -- NOT the way it was designed to be used.)
You can do this in a couple of ways, as documented in some other answers on StackOverflow [1] [2]. However, as you can see, both rely on adding (temporary) information to the database in order to sort the selection.
Really, this should suggest the correct answer: the information you are sorting on should be in your database. Or, back in high-level Django-land, it should be in your models. Consider revising your models to save a timestamp or an ordering when the user adds favorites, if that's what you want to preserve.
Otherwise, you're stuck with one of the solutions that either grabs the unordered data from the db then "fixes" it in Python, or constructing your own SQL query and implementing your own ugly hack from one of the solutions I linked (don't do this).
tl;dr The "right" answer is to keep the sort order in the database; the "quick fix" is to massage the unsorted data from the database to your liking in Python.
EDIT: Apparently MySQL has a feature, ORDER BY FIELD(...), that will let you do this, if that happens to be your backend.
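For completeness, newer Django versions (1.8+) can express the same idea portably with conditional expressions; a sketch using the field names from the earlier answer:
from django.db.models import Case, When

values = ['list', 'of', 'strings']
# Compiles to CASE WHEN field='list' THEN 0 WHEN field='of' THEN 1 ... END
preserved = Case(*[When(field=v, then=pos) for pos, v in enumerate(values)])
entries = Entry.objects.filter(field__in=values).order_by(preserved)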
Is there a way to use fuzzy matching in a django queryset filter?
I'm looking for something along the lines of:
Object.objects.filter(fuzzymatch(namevariable)__gt=.9)
Or is there a way to use lambda functions or something similar in Django queries? If so, how much would it affect performance (given that I have a stable set of ~6000 objects in my database that I want to match against)?
(realized I should probably put my comments in the question)
I need something stronger than contains, something along the lines of difflib. I'm basically trying to get around doing a Object.objects.all() and then a list comprehension with fuzzy matching.
(although I'm not necessarily sure that doing that would be much slower than trying to filter based on a function, so if you have thoughts on that I'm happy to listen)
also, even though it's not exactly what I want, I'd be open to some kind of tokenized opposite-contains, like:
Object.objects.filter(['Virginia', 'Tech']__in=Object.name)
Where something like "Virginia Technical Institute" would be returned. Although case insensitive, preferably.
When you're using the ORM, the thing to understand is that everything you do converts to SQL commands, and it's the performance of the underlying queries on the underlying database that matters. Case in point:
SELECT COUNT(*) ...
Is that fast? It depends on whether your database keeps metadata to answer that question - MySQL/MyISAM does, MySQL/InnoDB does not. In plain English: this is one lookup in MyISAM, and a full table scan in InnoDB.
Next thing - in order to do exact-match lookups efficiently, SQL has to be told when you create the table; you can't just expect it to figure it out. For this purpose SQL has the CREATE INDEX statement - in Django, use db_index=True in the field options of your model. Bear in mind that this adds a performance hit on writes (to maintain the index) and obviously needs extra storage (for the data structure), so you cannot "INDEX all the things". Also, I don't think it will help for fuzzy matching - but it's worth noting anyway.
Next consideration - how do we do fuzzy matching in SQL? Well apparently LIKE and CONTAINS allow a certain amount of searching and wildcard-results to be executed in SQL. These are T-SQL links - translate for your database server :) You can achieve this via Model.objects.get(fieldname__contains=value) which will produce LIKE SQL, or similar. There are a number of options available there for different lookups.
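For instance, the tokenized match described in the question can be approximated by ANDing case-insensitive contains lookups together; a sketch, assuming a name field:
import operator
from functools import reduce
from django.db.models import Q

tokens = ['Virginia', 'Tech']
# Every token must appear somewhere in name, case-insensitively.
query = reduce(operator.and_, (Q(name__icontains=t) for t in tokens))
matches = Object.objects.filter(query)  # would match "Virginia Technical Institute"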
This may or may not be powerful enough for you - I'm not sure.
Now, for the big question: performance. Chances are if you're doing a contains search that the SQL server will have to hit all of the rows in the database - don't take my word on that, but it would be my bet - even with indexing on. With 6000 rows this might not take all that long; then again, if you're doing this on a per-connection-to-your-app basis it's probably going to create a slowdown.
Next thing to understand about the ORM: if you do this:
Model.objects.get(fieldname__contains=value)
Model.objects.get(fieldname__contains=value)
You will issue two queries to the database server. In other words, the ORM doesn't always cache the results - so you might just want to do an .all() and search in memory. Do read about caching and querysets.
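Searching in memory with difflib (which the question mentions) might look like this; a sketch, assuming a name field and the ~6000-row table:
import difflib

target = 'Virginia Tech'
objs = list(Object.objects.all())  # one query; results now held in memory
matches = [o for o in objs
           if difflib.SequenceMatcher(None, o.name.lower(), target.lower()).ratio() > 0.9]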
Further on that last page, you'll also see Q objects - useful for more complicated queries.
So in summary then:
SQL contains some basic fuzzy matching-like parameters.
Whether or not these are sufficient depends on your needs.
How they perform depends on your SQL server - definitely measure it.
Whether you can cache these results in memory depends on how likely you are to need to scale (it might be worth measuring the memory commit as a result), whether you can share the cache between instances, and whether the cache would be frequently invalidated (if it would be, don't do it).
Ultimately, I'd start by getting your fuzzy matching working, then measure, then tweak, then measure until you work out how to improve performance. 99% of this I learnt doing exactly that :)
With Postgres as the database, you can use TrigramSimilarity to do fuzzy search and rank your results by different weights as well. Here is the link to the documentation:
https://docs.djangoproject.com/en/2.0/ref/contrib/postgres/search/#trigram-similarity
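A minimal sketch of what that looks like (requires the pg_trgm extension and django.contrib.postgres in INSTALLED_APPS; the 0.3 threshold is arbitrary):
from django.contrib.postgres.search import TrigramSimilarity

results = (Object.objects
           .annotate(similarity=TrigramSimilarity('name', 'Virginia Tech'))
           .filter(similarity__gt=0.3)
           .order_by('-similarity'))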
For full text search you can refer to https://czep.net/17/full-text-search.html
If you need something stronger than contains lookup, have a look at regex lookups: https://docs.djangoproject.com/en/1.0/ref/models/querysets/#regex
I am hoping someone can help me out with a quick question I have regarding chaining Django querysets. I am noticing a slow down because I am evaluating many data points in the database to create data trends. I was wondering if there was a way to have the chained filters evaluated locally instead of hitting the database. Here is a (crude) example:
pastries = Bakery.objects.filter(productType='pastry') # <--- will obviously always hit DB, when evaluated
cannoli = pastries.filter(specificType='cannoli') # <--- can this be evaluated locally instead of hitting the DB when evaluated, as long as pastries was evaluated?
I have checked the docs and I do not see anything specifying this, so I guess it's not possible, but I wanted to check with the 'braintrust' first ;-).
BTW - I know that I could do this myself by implementing some methods to loop through these data points and evaluate the criteria, but there are so many data points that my deadline does not permit me to implement this manually.
Thanks in advance.
QuerySet methods always produce SQL that computes the desired result. This is why you cannot, for example, call certain methods after slicing: SQL does not support that. The ORM does nothing more than assemble that SQL. If you want fancier processing, you will need to perform it in Python code yourself.
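If you evaluate the first queryset yourself, the narrowing step can be done in plain Python; a sketch of that manual approach using the example above:
pastries = list(Bakery.objects.filter(productType='pastry'))  # one DB query
# Pure Python from here on; no further DB hits.
cannoli = [p for p in pastries if p.specificType == 'cannoli']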