I am working on a test project using django-nonrel.
After enabling the admin interface and adding some entities to the database, I added search_fields to the ModelAdmin class. When I tried to search, I got the following error:
DatabaseError: Lookup type 'icontains' isn't supported
In order to fix this, I added an index like this:
from models import Empresa
from dbindexer.api import register_index

# tell dbindexer to emulate the icontains lookup for the 'nombre' field
register_index(Empresa, {'nombre': 'icontains'})
But now I am getting the following error:
First ordering property must be the same as inequality filter property, if specified for this query; received key, expected idxf_nombre_l_icontains
Am I trying to do something that is not supported by django-nonrel and dbindexer yet?
Thanks in advance for any help.
I have the same problem (in a different case) and know the cause of it, but currently have no solution.
This is caused by a limitation of GAE's datastore: if a query contains an inequality comparison (<, >, >= or the like), the first sort order must be on the property used in that inequality filter; only after that may the query order by any other property.
If you use GAE's datastore directly, this limitation is easy to work around: first order by the property used in the inequality, then sort by whatever else you want, as in the sketch below.
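A minimal sketch of that technique against the raw datastore, assuming a hypothetical db.Model mirror of the Django model (the property name comes from the error message above; the search term is made up):

from google.appengine.ext import db

# hypothetical raw-datastore mirror of the Django model
class EmpresaKind(db.Model):
    nombre = db.StringProperty()
    idxf_nombre_l_icontains = db.StringProperty()

term = 'foo'  # hypothetical search term
q = EmpresaKind.all()
q.filter('idxf_nombre_l_icontains >=', term)
q.order('idxf_nombre_l_icontains')  # the inequality property must be ordered first
q.order('nombre')                   # only then may any other ordering follow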
Unfortunately, django-nonrel and djangoappengine's database wrapper don't seem to be able to do that (I've tried the order-by-first technique through the Django model and still got the error, though maybe it's just me), not to mention the use of dbindexer as a wrapper of djangoappengine.db, which is itself a wrapper around GAE's database...
Bottom line: debugging this mess can be hell. You may want to use the GAE datastore directly just for this case, or wait for the djangoappengine team to come up with a better alternative.
I kind of fixed it by changing the ordering property in the ModelAdmin subclass:
class EmpresaAdmin(admin.ModelAdmin):
    search_fields = ('nombre',)
    # order by the attribute autogenerated by dbindexer
    ordering = ('idxf_nombre_l_icontains',)
Does anyone know a better way to fix this?
You can query Django's JSONField either by direct lookup or by using annotations. Now, I realize that if you annotate a field you can build all sorts of complex queries, but for the very basic query, which one is actually the preferred method?
Example: let's say I have a model like so:
from django.db import models
from django.contrib.postgres.fields import JSONField  # on Django 3.1+, use models.JSONField

class Document(models.Model):
    data = JSONField()
And then I store an object using the following command:
>>> Document.objects.create(data={'name': 'Foo', 'age': 24})
Now, the query I want is the most basic: find all documents where data__name is 'Foo'. I can do this in two ways, one using an annotation and one without, like so:
>>> from django.db.models.expressions import RawSQL
>>> Document.objects.filter(data__name='Foo')
>>> Document.objects.annotate(name=RawSQL("(data->>'name')::text", [])).filter(name='Foo')
So what exactly is the difference? And if I can make basic queries, why do I need to annotate? Provided of course I am not going to make complex queries.
There is no reason whatsoever to use raw SQL for queries you can express in ORM syntax. That said, for someone who is conversant in SQL but less experienced with Django's ORM, RawSQL might provide an easier path to a certain result than the ORM, which has its own learning curve.
There may also be more complex queries where the ORM runs into problems or where it might not give you the exact SQL query that you need. It is in these cases that RawSQL comes in handy, although the ORM is getting more feature-complete with every iteration, with:
Cast (since 1.10),
Window functions (since 2.0),
a constantly growing array of wrappers for database functions
the ability to define custom wrappers for database functions with Func expressions (since 1.8) etc.
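For example, the Func route mentioned in the last point takes only a couple of lines (a minimal sketch; Round and the price field are illustrative names, not part of the question):

from django.db.models import Func

class Round(Func):
    # wraps the database's ROUND() function for use in annotations
    function = 'ROUND'

# hypothetical usage:
# SomeModel.objects.annotate(rounded_price=Round('price'))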
They are interchangeable, so it's a matter of taste. I think Document.objects.filter(data__name='Foo') is better because:
It's easier to read
In the future, MariaDB or MySQL may support JSON fields, and your code will then be able to run on both PostgreSQL and MariaDB.
As a general rule, don't use RawSQL: you can create security holes in your app.
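If you do reach for RawSQL, at least pass user-supplied values through its params argument instead of interpolating them into the SQL string (a minimal sketch; user_key is a hypothetical value standing in for untrusted input):

from django.db.models.expressions import RawSQL

user_key = 'name'  # imagine this arrived from user input

# unsafe: interpolating the value into the SQL string invites injection
# Document.objects.annotate(name=RawSQL("(data->>'%s')::text" % user_key, []))

# safer: the database driver quotes the value passed via params
Document.objects.annotate(name=RawSQL("(data->>%s)::text", [user_key]))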
I have created custom Django model-field subclasses based on CharField, but which use to_python() to ensure that the values on the returned model objects are more complex objects (some are lists, some are dicts with a specific format, etc.) -- I'm using MySQL, so some of the PostgreSQL field types are not available.
All is working great, but Pylint believes that all values in these fields will be strings, and thus I get a lot of "unsupported-membership-test" and "unsubscriptable-object" warnings on code that uses these models. I can disable these individually, but I would prefer to let Pylint know that these fields return certain object types. Type hints are not helping, e.g.:
class MealPrefs(models.Model):
    user = ...foreign key...
    prefs: dict[str, list[str]] = \
        custom_fields.DictOfListsExtendsCharField(
            default={'breakfast': ['cereal', 'toast'],
                     'lunch': ['sandwich']},
        )
I know that certain built-in Django fields return correct types for Pylint (CharField, IntegerField) and certain other extensions have figured out ways of specifying their type so Pylint is happy (MultiSelectField) but digging into their code, I can't figure out where the "magic" specifying the type returned would be.
(note: this question is not related to the INPUT:type of Django form fields)
Thanks!
I had a look at this out of curiosity, and I think most of the "magic" actually comes from pylint-django.
In the Django source code, e.g. for CharField, there is nothing that could really give a type hinter the notion that this is a string. And since the class inherits only from Field, which is also the parent of other non-string fields, the knowledge needs to be encoded elsewhere.
Digging through the source code of pylint-django, on the other hand, I found where this most likely happens:
in pylint_django.transforms.fields, several fields are hardcoded in a similar fashion:
_STR_FIELDS = ('CharField', 'SlugField', 'URLField', 'TextField', 'EmailField',
'CommaSeparatedIntegerField', 'FilePathField', 'GenericIPAddressField',
'IPAddressField', 'RegexField', 'SlugField')
Further below, a suspiciously named function, apply_type_shim, adds information to the class based on the type of field it is ('str', 'int', 'dict', 'list', etc.).
This additional information is passed to inference_tip, which according to the astroid docs, is used to add inference info (emphasis mine):
astroid can be used as more than an AST library, it also offers some
basic support of inference, it can infer what names might mean in a
given context, it can be used to solve attributes in a highly complex
class hierarchy, etc. We call this mechanism generally inference
throughout the project.
astroid is the underlying library used by Pylint to represent Python code, so I'm pretty sure that's how the information gets passed to Pylint. If you follow what happens when you import the plugin, you'll find this interesting bit in pylint_django/plugin.py, where it actually imports the transforms, effectively adding the inference tip to the AST node.
I think if you want to achieve the same with your own classes, you could either:
Derive directly from another Django field class that already has the associated type you're looking for.
Create and register an equivalent pylint plugin, which would also use astroid to add information to the class so that Pylint knows what to do with it; a minimal skeleton for this option is sketched below.
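A sketch of that second option, modelled on pylint-django's own transform (the field name DictOfListsExtendsCharField comes from the question; the astroid import paths match older astroid releases and may differ on newer ones):

from astroid import MANAGER, inference_tip, nodes, scoped_nodes

def looks_like_custom_field(cls):
    return cls.name == 'DictOfListsExtendsCharField'

def infer_as_dict(cls, _context=None):
    # pull in dict's methods so Pylint accepts membership tests and subscripting
    lookups = scoped_nodes.builtin_lookup('dict')
    base_nodes = [n for n in lookups[1] if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)

def register(linter):
    # required entry point for a pylint plugin; the transform below is
    # registered at import time
    pass

MANAGER.register_transform(nodes.ClassDef,
                           inference_tip(infer_as_dict),
                           looks_like_custom_field)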
I initially thought that you were using the plugin pylint-django directly, but maybe you are explicitly using prospector, which automatically installs pylint-django if it finds Django.
Neither the pylint checker nor its plugin uses information from Python type annotations (PEP 484). Pylint can parse code with annotations without understanding them, e.g. it does not warn about "unused-import" if a name is used only in annotations. The message unsupported-membership-test is reported on a line with the expression something in object_A simply if the class A() doesn't have a __contains__ method. Similarly, the message unsubscriptable-object is related to the __getitem__ method.
You can patch pylint-django for your custom fields this way:
Add a function:
def my_apply_type_shim(cls, _context=None):  # noqa
    if cls.name == 'MyListField':
        base_nodes = scoped_nodes.builtin_lookup('list')
    elif cls.name == 'MyDictField':
        base_nodes = scoped_nodes.builtin_lookup('dict')
    else:
        return apply_type_shim(cls, _context)
    base_nodes = [n for n in base_nodes[1] if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)
into pylint_django/transforms/fields.py
and also replace apply_type_shim with my_apply_type_shim in the same file, at this line:
def add_transforms(manager):
    manager.register_transform(nodes.ClassDef,
                               inference_tip(my_apply_type_shim),
                               is_model_or_form_field)
This adds the base class list or dict, respectively, with the magic methods explained above, to your custom field classes whenever they are used in a Model or a FormView.
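With that patch in place (and your own field-class names substituted for MyListField/MyDictField), code like this should no longer trigger the warnings (a hypothetical usage of the asker's model):

prefs = MealPrefs.objects.get(pk=1).prefs
if 'lunch' in prefs:          # no unsupported-membership-test
    meals = prefs['lunch']    # no unsubscriptable-object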
Notes:
I also thought about a plugin stub solution that does the same, but the prospector alternative seems so complicated for SO that I prefer to simply patch the source after installation.
Model and FormView are the only classes in Django created by metaclasses. It is a great idea to emulate a metaclass in plugin code and to control the analysis of simple attributes. If I remember correctly, MyPy (referenced in a comment here) also has a Django plugin, mypy-django, but only for FormView, because writing annotations for django.db is more complicated than working with attributes - I spent a week trying to work on it.
I'm working on an events site and have a one-to-many relationship between a production and its performances. When I have a performance object and need its production id, at the moment I have to do:
$productionId = $performance->getProduction()->getId();
In cases where I literally just need the production id, it seems like a waste to send off another database query to get a value that's already in the object somewhere.
Is there a way round this?
Edit 2013.02.17:
What I wrote below is no longer true. You don't have to do anything in the scenario outlined in the question, because Doctrine is clever enough to load the id fields into related entities, so the proxy objects will already contain the id, and it will not issue another call to the database.
Outdated answer below:
It is possible, but inadvisable.
The reason is that Doctrine tries to truly adhere to the principle that your entities should form an object graph, where foreign keys have no place because they are just artifacts of the way relational databases work.
You should instead do one of the following:
rewrite the association to be eager-loaded, if you always need the related entity
write a DQL query (preferably on a Repository) to fetch-join the related entity
let it lazy-load the related entity by calling a getter on it
If you are not convinced and really want to avoid all of the above, there are two ways (that I know of) to get the id of a related object without triggering a load and without resorting to tricks like reflection or serialization:
If you already have the object in hand, you can retrieve the inner UnitOfWork object that Doctrine uses internally and use its getEntityIdentifier() method, passing it the unloaded entity (the proxy object). It will return the id without triggering the lazy-load.
Assuming you have many-to-one relation, with multiple articles belonging to a category:
$articleId = 1;
$article = $em->find('Article', $articleId);
$categoryId = $em->getUnitOfWork()->getEntityIdentifier($article->getCategory());
Coming in 2.2, you will be able to use the IDENTITY DQL function to select just a foreign key, like this:
SELECT IDENTITY(u.Group) AS group_id FROM User u WHERE u.id = ?0
It is already committed to the development versions.
Still, you should really try to stick to one of the "correct" methods.
I currently have a line in my Django app like this:
db.execute("SELECT SUM(price * qty) FROM inventory_orderline WHERE order_id = %s", [self.id])
I'd rather perform this through the models interface provided by Django, but can't find any references.
I'm pretty sure this can be done with an annotation, but the examples only cover a few of them and I can't find a list in the documentation.
I'd like to do something like this:
self.line_items.annotate(lineprice=Product('orderline__price', 'orderline__qty')).aggregate(Sum('lineprice'))
Can anyone suggest an annotation class to use to perform the multiplication? Even better, a link to the API listing all these annotation/aggregation classes?
It's not clearly documented, but this can be done with F() expressions:
from django.db.models import F, Sum

self.line_items.annotate(lineprice=F('orderline__price') * F('orderline__qty')).aggregate(Sum('lineprice'))
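One caveat worth noting (not from the answer above): on newer Django versions, multiplying two fields of different types, say a DecimalField price by an IntegerField qty, can raise a "mixed types" error unless you declare the result type explicitly. A sketch using ExpressionWrapper, assuming price is a DecimalField:

from django.db.models import DecimalField, ExpressionWrapper, F, Sum

self.line_items.annotate(
    lineprice=ExpressionWrapper(
        F('orderline__price') * F('orderline__qty'),
        output_field=DecimalField(),
    ),
).aggregate(Sum('lineprice'))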
Unfortunately, there aren't enough built-in aggregate functions, and specifically there is not one for Product.
But that doesn't limit you in any way other than having to write a "non-concise" ORM query. Specifically for your case, you should be able to do:
self.line_items.extra(select={"lineprice": "orderline__price*orderline__qty"}).aggregate(Sum('lineprice'))
Say my_instance is of model MyModel.
I'm looking for a good way to do:
my_model = get_model_for_instance(my_instance)
I have not found any really direct way to do this.
So far I have come up with this:
from django.db.models import get_model
my_model = get_model(my_instance._meta.app_label, my_instance.__class__.__name__)
Is this acceptable? Is it even a sure-fire, best practice way to do it?
There is also _meta.object_name, which seems to deliver the same as __class__.__name__. Does it? Is it better or worse? If so, why?
Also, how do I know I'm getting the correct model if the app label occurs multiple times within the scope of the project, e.g. 'auth' from 'django.contrib.auth' when there is also a 'myproject.auth'?
Would such a case make get_model unreliable?
Thanks for any hints/pointers and sharing of experience!
my_model = type(my_instance)
To prove it, you can create another instance:
my_new_instance = type(my_instance)()
This is why there's no direct way of doing it: Python objects already have this feature.
updated...
I liked marcinn's response that uses type(x). This is identical to what the original answer used (x.__class__), but I prefer using functions over accessing magic attributes. In the same manner, I prefer vars(x) to x.__dict__, len(x) to x.__len__(), and so on.
updated 2...
For deferred instances (mentioned by @Cerin in the comments), you can access the original class via instance._meta.proxy_for_model.
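A quick sketch of the deferred case (assuming an older Django version where .only()/.defer() build a dynamic subclass; MyModel and the pk are hypothetical):

obj = MyModel.objects.only('id').get(pk=1)
type(obj)                    # a dynamic deferred subclass of MyModel
obj._meta.proxy_for_model    # the original MyModel class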
At least for Django 1.11, this should work (also for deferred instances):
def get_model_for_instance(instance):
    return instance._meta.model
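A tiny usage sketch (my_instance as in the question):

my_model = get_model_for_instance(my_instance)
new_instance = my_model()  # for deferred instances this yields the original class, unlike type(my_instance)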