serialize a list of objects in json in Django

In my Python code in views.py, I have made a list of object instances. For example, if one of my models in models.py is Sentence, then at one point I have a list of Sentences:
sentenceList = [Sentence1,Sentence2,...,SentenceN]
I have been happily using Django's template language to send this list off to a template, where I can loop through the sentences, but I am now implementing some AJAX functionality and would like to serialize something like this "sentenceList" and send it back over AJAX.
It seems that I am caught between two worlds here, though: the regular Python JSON serializer, which handles lists and plain Python objects but not model instances, and the Django serializer, which seems to deal only with Django model querysets.
Does anyone have a suggestion of how I should handle this (including not using lists of object instances if it is for some reason not good form)?
Thank you!
-Andrew

The Django serializer should be able to deal with a regular list of Django objects, according to the docs:
(Actually, the second argument can be any iterator that yields Django objects, but it'll almost always be a QuerySet).
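For example, a minimal sketch based on that note in the docs (the Sentence model comes from the question; the view name and app module are placeholders):
from django.core import serializers
from django.http import HttpResponse
# from myapp.models import Sentence  # "myapp" is a placeholder for your app

def sentences_json(request):
    # Any list (or other iterable) of model instances works, not just a QuerySet.
    sentence_list = list(Sentence.objects.all())
    data = serializers.serialize("json", sentence_list)
    return HttpResponse(data, content_type="application/json")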

Related

Python/Django model dictionary allows one type of update, but not another

I am working on some Django/Python code.
Basically, the backend of my code gets sent a dict of parameters named 'p'. These values all come off Django models.
When I tried to override them as such:
p['age']=25
I got a 'model error'. Yet, if I write:
p.age=25
it works fine.
My suspicion is that choice #1 tries to do a dict-style update on an object that Django created, and that object refuses to be updated that way, whereas with choice #2 Python simply rebinds an attribute named 'age' on the instance, without regard for the prior origin, type, or class of what Django created.
All of this is in a RESTful framework, and actually in test code. So even if I am right I don't believe it changes anything for me in reality.
But can anyone explain why one type of assignment to an existing dict works, and the other fails?
p is a model instance (an object), not a dict. Django built it that way.
As such, attribute assignment (p.age) lets you change an attribute of the object, while item assignment (p['age']) fails because the instance does not support dict-style access.
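A minimal sketch of the difference, using a hypothetical Person model (the names here are made up for illustration, and the model would live in an installed app's models.py):
from django.db import models

class Person(models.Model):
    age = models.IntegerField(default=0)

p = Person(age=30)

p.age = 25          # attribute assignment on the model instance: works fine
try:
    p['age'] = 25   # item assignment: Person defines no __setitem__
except TypeError as exc:
    print(exc)      # 'Person' object does not support item assignment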

Specify typing for Django field in model (for Pylint)

I have created custom Django model-field subclasses based on CharField, but which use to_python() to ensure that the values returned on model instances are more complex objects (some are lists, some are dicts with a specific format, etc.) -- I'm using MySQL, so some of the PostgreSQL field types are not available.
All is working great, but Pylint believes that all values in these fields will be strings and thus I get a lot of "unsupported-membership-test" and "unsubscriptable-object" warnings on code that uses these models. I can disable these individually, but I would prefer to let Pylint know that these models return certain object types. Type hints are not helping, e.g.:
class MealPrefs(models.Model):
    user = ...foreign key...
    prefs: dict[str, list[str]] = \
        custom_fields.DictOfListsExtendsCharField(
            default={'breakfast': ['cereal', 'toast'],
                     'lunch': ['sandwich']},
        )
I know that certain built-in Django fields return correct types for Pylint (CharField, IntegerField) and certain other extensions have figured out ways of specifying their type so Pylint is happy (MultiSelectField) but digging into their code, I can't figure out where the "magic" specifying the type returned would be.
(note: this question is not related to the INPUT:type of Django form fields)
Thanks!
I had a look at this out of curiosity, and I think most of the "magic" actually comes from pylint-django.
In the Django source code, e.g. for CharField, there is nothing that could really give a type hinter the notion that this is a string. And since the class inherits only from Field, which is also the parent of other non-string fields, the knowledge needs to be encoded elsewhere.
On the other hand, digging through the source code of pylint-django, I found where this most likely happens:
in pylint_django.transforms.fields, several fields are hardcoded in a similar fashion:
_STR_FIELDS = ('CharField', 'SlugField', 'URLField', 'TextField', 'EmailField',
               'CommaSeparatedIntegerField', 'FilePathField', 'GenericIPAddressField',
               'IPAddressField', 'RegexField', 'SlugField')
Further below, a suspiciously named function, apply_type_shim, adds information to the class based on the type of field it is ('str', 'int', 'dict', 'list', etc.).
This additional information is passed to inference_tip, which according to the astroid docs, is used to add inference info (emphasis mine):
astroid can be used as more than an AST library, it also offers some
basic support of inference, it can infer what names might mean in a
given context, it can be used to solve attributes in a highly complex
class hierarchy, etc. We call this mechanism generally inference
throughout the project.
astroid is the underlying library used by Pylint to represent Python code, so I'm pretty sure that's how the information gets passed to Pylint. If you follow what happens when you import the plugin, you'll find this interesting bit in pylint_django/plugin.py, where it actually imports the transforms, effectively adding the inference tip to the AST node.
I think if you want to achieve the same with your own classes, you could either:
Directly derive from another Django field class that already has the associated type you're looking for.
Create and register an equivalent pylint plugin that would also use astroid to add information to the class, so that Pylint knows what to do with it (a rough sketch of this option follows below).
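A rough, untested sketch of that second option: the astroid calls mirror the apply_type_shim approach described above and should be treated as assumptions, not verified code; the field name DictOfListsExtendsCharField comes from the question.
from astroid import MANAGER, inference_tip, nodes, scoped_nodes

def _is_custom_field(cls):
    # Only touch the custom field class from the question.
    return cls.name == 'DictOfListsExtendsCharField'

def _infer_as_dict(cls, _context=None):
    # Pretend instances of this field class behave like the builtin dict,
    # the same trick pylint-django's apply_type_shim uses.
    found = scoped_nodes.builtin_lookup('dict')[1]
    base_nodes = [n for n in found if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)

def register(linter):
    # Entry point required so `pylint --load-plugins=my_field_plugin` loads this module.
    pass

MANAGER.register_transform(nodes.ClassDef, inference_tip(_infer_as_dict), _is_custom_field)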
I initially thought that you were using the plugin pylint-django directly, but maybe you are using prospector, which automatically installs pylint-django if it finds Django.
Neither the pylint checker nor its plugin uses information from Python type annotations (PEP 484). Pylint can parse code with annotations without understanding them, e.g. it does not warn about "unused-import" if a name is used only in annotations. The message unsupported-membership-test is reported on a line with an expression like something in object_A simply if the class of object_A doesn't have a __contains__ method. Similarly, the message unsubscriptable-object is related to the __getitem__ method.
You can patch pylint-django for your custom fields this way:
Add a function:
def my_apply_type_shim(cls, _context=None):  # noqa
    if cls.name == 'MyListField':
        base_nodes = scoped_nodes.builtin_lookup('list')
    elif cls.name == 'MyDictField':
        base_nodes = scoped_nodes.builtin_lookup('dict')
    else:
        return apply_type_shim(cls, _context)
    base_nodes = [n for n in base_nodes[1] if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)
into pylint_django/transforms/fields.py
and also replace apply_type_shim with my_apply_type_shim in the same file, in this line:
def add_transforms(manager):
    manager.register_transform(nodes.ClassDef, inference_tip(my_apply_type_shim), is_model_or_form_field)
This adds the base class list or dict, respectively, with the magic methods explained above, to your custom field classes if they are used in a Model or FormView.
Notes:
I also thought about a plugin/stub solution that does the same, but the alternative with "prospector" seemed too complicated for an answer here, so I prefer to simply patch the source after installation.
Model and FormView are the only classes in Django that are created by metaclasses. It is a great idea to emulate a metaclass in plugin code in order to control the analysis of simple attributes. If I remember correctly, MyPy, referenced in a comment here, also has a Django plugin, mypy-django, but only for FormView, because writing annotations for django.db is more complicated than working with attributes. I tried to work on that for a week.

packing objects as json with django?

I've run into a snag in my views.
Here "filtered_posts" is array of Django objects coming back from the model.
I am having a little trouble figuring out how to get as text data that I can
later pack into json instead of using serializers.serialize...
What results is that the data comes double-escaped (escaped once by serializers.serialize and a second time by json.dumps).
I can't figure out how to return the data from the db the same way it would come back if I were using the MySQLdb lib directly, in other words, as strings instead of references to objects. As it stands, if I take out the serializers.serialize, I get a list of these Django objects, and it doesn't even list them all (it abbreviates them with '...(remaining elements truncated)...').
I don't think I should, but should I be using the __unicode__() method for this? (And if so, how should I be invoking it?)
JSONtoReturn = json.dumps({
    'lowest_id': user_posts[limit - 1].id,
    'user_posts': serializers.serialize("json", list(filtered_posts)),
})
The Django Rest Framework looks pretty neat. I've used Tastypie before, too.
I've also done RESTful APIs that don't include a framework. When I do that, I define toJSON methods on my objects that return dictionaries, and cascade the call to related elements. Then I call json.dumps() on that. It's a lot of work, which is why the frameworks are worth looking at.
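A minimal sketch of that hand-rolled pattern (the Post/Author models and field names are made up for illustration):
import json
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

    def to_json(self):
        return {"id": self.id, "name": self.name}

class Post(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

    def to_json(self):
        # Cascade into the related object instead of emitting just its PK.
        return {"id": self.id, "title": self.title, "author": self.author.to_json()}

def posts_as_json(filtered_posts):
    # json.dumps() runs exactly once, on plain dicts/lists, so nothing is double-escaped.
    return json.dumps([p.to_json() for p in filtered_posts])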
What you're looking for is Django REST Framework. It handles related objects in exactly the way you're expecting it to (you can include a nested object, as in your example, or simply have it output the PK of the related object for the key).
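For instance, a rough sketch of what that looks like with DRF's ModelSerializer and a nested serializer for the related object (the Post/Author models and fields are hypothetical):
from rest_framework import serializers
# from myapp.models import Author, Post  # "myapp" is a placeholder

class AuthorSerializer(serializers.ModelSerializer):
    class Meta:
        model = Author
        fields = ["id", "name"]

class PostSerializer(serializers.ModelSerializer):
    # Nest the full related object; drop this line to emit only the PK.
    author = AuthorSerializer()

    class Meta:
        model = Post
        fields = ["id", "title", "author"]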

Using Django ORM to serialize data and related members

I'm currently working on a REST API using Django. I started with the nice djangorestframework, and I loved using its "View" class.
But now I'm facing a serialization problem.
I do not like doing the serialization with the Serializer classes.
The main goal is to prepare a sort of giant dict with all the info and hand it to a renderer class, which translates it into XML, JSON, or YAML depending on the "Accept:" HTTP header. The goal is classy, but 60% of the CPU time is spent on creating the "GIANT DICT".
This dict can be created using Django models, but I think instantiating classes and objects on the fly is VERY inefficient. I'm trying to use some QuerySet methods to filter which model members I want and get a simple dict back: the .values() method. Unfortunately, I can't access the m2m and foreign key fields of my models that way.
Has anyone tried this already? Any thoughts?
You could use the QuerySet's iterator method:
... For a QuerySet which returns a large number of objects that you only
need to access once, this can result in better performance and a
significant reduction in memory.
Your code should look like:
for obj in SomeModel.objects.values_list('id', 'name').iterator():
    # do something
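As a side note on the m2m/foreign-key complaint in the question, a hedged sketch: .values() and .values_list() can follow relations with the double-underscore lookup, so you can still build plain dicts without instantiating model objects (the model and field names below are made up):
rows = (
    SomeModel.objects
    .values('id', 'name', 'category__title')   # follows the foreign key
    .iterator()
)
giant_dict = {'items': list(rows)}
# For an m2m lookup such as 'tags__name', expect one row per related object,
# so you may need to regroup the rows afterwards.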

Pitfalls of generating JSON in Django templates

I've found myself unsatisfied with Django's ability to render JSON data. If I use the built-in serializers, then foreign-key relationships are not included in the data (only the keys). Also, it seems to be impossible to include custom data in the JSON feed that isn't part of the model being serialized.
As a test I implemented a template that rendered some JSON for the resultset of a particular model. I was able to include/exclude whatever parts of the model I wanted and was able to include custom data as well.
The test seemed to work well and wasn't slower than the recommended serialization methods.
Are there any pitfalls to using this method of serialization?
While it's hard to say definitively whether this method has any pitfalls, it's the method we use in production, as you control everything that is serialized, even if the underlying model is changed. We've been running a high-traffic application for almost two years using this method.
Hope this helps.
One problem might be escaping metacharacters like ". Django's template system automatically escapes dangerous characters, but it's set up to do that for HTML. You should look up exactly what the template escaping does, and compare that to what's dangerous in JSON. Otherwise, you could cause XSS problems.
You could think about constructing a data structure of dicts and lists, and then running a JSON serializer on that, rather than directly on your database model.
I don't understand why you see the choice as being either 'use Django serializers' or 'write JSON in templates'. The middle way, which to my mind is much more robust and fits your use case well, is to build up your data as Python lists/dictionaries and then simply use simplejson.dumps() to convert it to a JSON string.
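A short sketch of that middle way (the view, model, and field names are invented; simplejson.dumps() can stand in for json.dumps() on older setups):
import json
from django.http import HttpResponse

def posts_json(request):
    # Build plain lists/dicts, including whatever custom data you need,
    # then serialize exactly once.
    data = [
        {'id': p.id, 'title': p.title, 'author': p.author.name, 'extra': 'anything else'}
        for p in Post.objects.select_related('author')
    ]
    return HttpResponse(json.dumps(data), content_type='application/json')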
We use this method to produce the custom JSON format consumed by datatables.net.
It was the easiest method we found to accomplish this task, and it has worked fine with no problems so far.
You can find details here: http://datatables.net/development/server-side/django
So far, generating JSON from templates, we've run into the need to escape newlines. Looking at doing simplejson.dumps() next.