I'm working on a project which requires a REST API. I have tried Piston, but it doesn't suit my requirements as it currently allows only one handler per model.
Tastypie seems to be a better alternative. However, I'm stuck on the following problem. My article class is displayed according to a complex rule involving ranking and creation date. To improve server performance, I created a dummy table that records the order of all the articles, so that on each user request the complex ordering process is not executed; instead, the server looks up each article's order in the dummy table.
With Tastypie, a queryset is required. However, because I want to use the order recorded in the dummy table, I have to use a more complex code snippet to retrieve the data.
Is there any possibility that I can return an array of article objects and have Tastypie transform them into proper JSON?
What you need is to extend the queryset in Meta. Assuming your articles table should be ordered by some additional data, your queryset would be defined, for example, like this:
class Meta:
    queryset = Article.objects.extra(select={
        'ordering': 'SELECT foo FROM bar'
    }).order_by('ordering')
You have to define the additional field in your resource:
ordering = fields.IntegerField(attribute="ordering", default=0, readonly=True)
The additional field should now be returned with all other fields retrieved from your queryset. Note that if you define the fields attribute in your meta you also have to add the new field there.
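To see what the extra(select=...) pattern does at the SQL level, here is a minimal sketch using plain sqlite3 instead of Django; the table and column names (article, article_order, position) are made up for illustration, standing in for the Article model and the dummy ordering table:

```python
import sqlite3

# Hypothetical schema standing in for the Article model and the dummy
# ordering table; names are illustrative, not from the original post.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE article_order (article_id INTEGER, position INTEGER);
    INSERT INTO article VALUES (1, 'first'), (2, 'second'), (3, 'third');
    INSERT INTO article_order VALUES (1, 30), (2, 10), (3, 20);
""")

# Roughly the SQL that Article.objects.extra(select={'ordering': ...})
# .order_by('ordering') generates: each row gains an extra computed
# column that is used only for sorting.
rows = conn.execute("""
    SELECT id, title,
           (SELECT position FROM article_order
             WHERE article_order.article_id = article.id) AS ordering
      FROM article
     ORDER BY ordering
""").fetchall()

titles = [title for _, title, _ in rows]
print(titles)  # ['second', 'third', 'first']
```

The point is that the ordering never has to be recomputed: the dummy table already holds it, and the subselect just reads it back per row.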
Suppose we have the following model:
class Publications(models.Model):
    author = ..........
    post = ..........
and we don't want duplicate records to be stored in the database.
This could be done with unique_together on the model:
class Meta:
    unique_together = ('author', 'post')
or it could be done in the view with something like:
register_exist = Publications.objects.filter(...).exists()
if not register_exist:
    # code to save the info
What are the advantages or disadvantages of using these methods?
class Meta:
    unique_together = ('author', 'post')
A constraint at the database level. This keeps the data consistent no matter which view inputs the data.
But the other one:
register_exist = Publications.objects.filter(...).exists()
if not register_exist:
    # code to save the info
A constraint at the application level. There is a cost to querying to check whether the record already exists. And the data might become inconsistent across the application if somebody adds a new record without going through this step (by accident or otherwise).
In a nutshell, the unique_together attribute creates a UNIQUE constraint, whereas .filter(...) merely filters the QuerySet with respect to the given conditions.
In other words, if you apply unique_together in your model, you can't break that constraint through the ORM even if you try to do so.
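The database-level behaviour can be demonstrated without Django at all; here is a minimal sketch with stdlib sqlite3, where UNIQUE (author, post) plays the role of unique_together and the engine itself rejects the duplicate:

```python
import sqlite3

# Minimal stand-in for unique_together = ('author', 'post'):
# the UNIQUE constraint lives in the database, not in application code.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE publications (
        id INTEGER PRIMARY KEY,
        author TEXT,
        post TEXT,
        UNIQUE (author, post)
    )
""")
conn.execute("INSERT INTO publications (author, post) VALUES ('alice', 'p1')")

try:
    # Same (author, post) pair again: the engine refuses it.
    conn.execute("INSERT INTO publications (author, post) VALUES ('alice', 'p1')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    # Django surfaces this as django.db.IntegrityError
    duplicate_rejected = True

print(duplicate_rejected)  # True
```

No matter which code path performs the insert, the duplicate cannot land in the table; the filter().exists() approach protects only the code paths that remember to run the check.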
I need to allow users to create and store filters for one of my models. The only decent idea I came up with is something like this:
class MyModel(models.Model):
    field1 = models.CharField()
    field2 = models.CharField()

class MyModelFilter(models.Model):
    owner = models.ForeignKey('User', on_delete=models.CASCADE, verbose_name=_('Filter owner'))
    filter = models.TextField(_('JSON-defined filter'), blank=False)
So the filter field stores a string like:
{"field1": "value1", "field2": "value2"}.
Then, somewhere in code:
from functools import reduce  # reduce is not a builtin in Python 3

filters = MyModelFilter.objects.filter(owner_id=owner_id)
querysets = [MyModel.objects.filter(**json.loads(f.filter)) for f in filters]
result_queryset = reduce(lambda x, y: x | y, querysets)
This is not safe, and I need to control the available filter keys somehow. On the other hand, it exposes the full power of Django's queryset filters. For example, with this code I can filter on related models.
So I wonder, is there any better approach to this problem, or maybe a 3rd-party library, that implements same functionality?
UPD:
The reduce in the code is for combining the filters with an OR condition.
UPD2:
User-defined filters will be used by another part of the system to filter newly added model instances, so I really need to store them server-side somehow (not in cookies or anything like that).
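The "control available filter keys" concern above can be sketched as a plain whitelist check before the JSON ever reaches Model.objects.filter(**kwargs); the field and lookup names here are illustrative, not from the original models:

```python
import json

# A sketch of key whitelisting: reject any lookup that is not
# explicitly allowed before unpacking it into .filter(**kwargs).
# ALLOWED_KEYS is a hypothetical whitelist for the example models.
ALLOWED_KEYS = {"field1", "field2", "field1__icontains"}

def load_safe_filter(raw_json):
    """Parse a stored filter string and refuse disallowed lookups."""
    data = json.loads(raw_json)
    bad_keys = set(data) - ALLOWED_KEYS
    if bad_keys:
        raise ValueError(f"disallowed filter keys: {sorted(bad_keys)}")
    return data

kwargs = load_safe_filter('{"field1": "value1", "field2": "value2"}')
print(kwargs)  # {'field1': 'value1', 'field2': 'value2'}

try:
    load_safe_filter('{"owner__password__startswith": "a"}')
except ValueError as exc:
    print(exc)
```

Without such a check, a stored filter can traverse arbitrary relations (e.g. probing fields of related models character by character), which is exactly the unsafety the question points out.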
SOLUTION:
In the end, I used django-filter to generate the filter form, then grabbed its query data, converted it to JSON and saved it to the database.
After that, I could deserialize that field and use it in my FilterSet again. One problem that I couldn't solve cleanly is testing a single model instance against my FilterSet (when the model is already fetched and I need to check whether it matches the filter), so I ended up doing it manually (by checking each filter condition against the model).
Are you sure this is actually what you want to do? Are your end users going to know what a filter is, or how to format the filter?
I suggest that you look into the Django-filter library (https://django-filter.readthedocs.io/).
It will enable you to create filters for your Django models, and then assist you with rendering the filters as forms in the UI.
I have been mulling over this for a while, looking at many Stack Overflow questions and going through the aggregation docs.
I need to get a dataset of PropertyImpressions grouped by date. Here is the PropertyImpression model:
# models.py
class PropertyImpression(models.Model):
    '''
    Impression data for Property items
    '''
    property = models.ForeignKey(Property, db_index=True)
    imp_date = models.DateField(auto_now_add=True)
I have tried many variations of the view code, but I'm posting this version because I consider it the most logical, simple code, which according to the documentation and examples should do what I'm trying to do.
# views.py
from django.db.models import Count
from django.shortcuts import render

def admin_home(request):
    '''
    This is the home dashboard for admins, which currently just means staff.
    Other users that try to access this page will be redirected to login.
    '''
    prop_imps = PropertyImpression.objects.values('imp_date').annotate(count=Count('id'))
    return render(request, 'reportcontent/admin_home.html', {'prop_imps': prop_imps})
Then in the template, when using the {{ prop_imps }} variable, it gives me a list of the PropertyImpressions, but grouped by both imp_date and property. I need this to group only by imp_date; according to the values docs, adding .values('imp_date') should group by just that field, shouldn't it?
When leaving off the .annotate in the prop_imps variable, it gives me a list of all the imp_dates, which is really close, but when I group by the date field it for some reason groups by both imp_date and property.
Maybe you have defined a default ordering in your PropertyImpression model?
In this case, you should add order_by() before annotate to reset it:
prop_imps = PropertyImpression.objects.values('imp_date').order_by() \
    .annotate(count=Count('id'))
It's explained in Django documentation here:
Fields that are mentioned in the order_by() part of a queryset (or which are used in the default ordering on a model) are used when selecting the output data, even if they are not otherwise specified in the values() call. These extra fields are used to group “like” results together and they can make otherwise identical result rows appear to be separate. This shows up, particularly, when counting things.
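What the quoted documentation describes can be shown at the SQL level with stdlib sqlite3; the table and data below are invented for illustration, with property_id playing the role of the default-ordering field that leaks into the GROUP BY:

```python
import sqlite3

# Invented table standing in for PropertyImpression: when the model's
# default ordering field (property_id here) leaks into the GROUP BY,
# otherwise identical dates split into separate rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE impression (id INTEGER PRIMARY KEY,
                             property_id INTEGER, imp_date TEXT);
    INSERT INTO impression (property_id, imp_date) VALUES
        (1, '2023-01-01'), (2, '2023-01-01'), (1, '2023-01-02');
""")

# What .values('imp_date').order_by().annotate(count=Count('id')) runs:
grouped_by_date = conn.execute("""
    SELECT imp_date, COUNT(id) FROM impression GROUP BY imp_date
""").fetchall()

# What you effectively get when the default ordering drags property_id
# into the grouping:
grouped_by_both = conn.execute("""
    SELECT imp_date, COUNT(id) FROM impression GROUP BY imp_date, property_id
""").fetchall()

print(len(grouped_by_date))  # 2 -> one row per date
print(len(grouped_by_both))  # 3 -> dates split per property
```

The empty order_by() call removes the model's default ordering from the query, so only imp_date ends up in the GROUP BY clause.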
I'm building an ecommerce website.
I have a Product model that holds info common to all product types:
class Product(models.Model):
    name = models.CharField()
    description = models.CharField()
    categories = models.ManyToManyField(Category)
Then I have SimpleProduct and BundleProduct that have FK to Product and hold info specific to the product type. BundleProduct has a m2m field to other Products.
class SimpleProduct(Product):
    some_field = models.CharField()

class BundleProduct(Product):
    products = models.ManyToManyField(Product)
When displaying the catalog, I'm making one query against the Product model and then another query per product to get the additional info. This involves a large number of queries.
I can improve it by using select_related on the simpleproduct and bundleproduct fields.
I can further improve it by using the select_reverse app for m2m fields like categories.
This is a big improvement, but more queries are still required because a BundleProduct has several products, which can also have relations to other products (configurable products).
Is there a way to make a single query against Product that will retrieve the m2m categories, the one-to-one SimpleProduct and BundleProduct, and the BundleProduct's products?
Will this custom query look like a Django queryset, with all the managers and properties?
Thanks
You could take a look at the extra method of querysets; it may give you the opportunity to add some additional fields. But if you want raw queries, you can use the raw method of managers. It returns a kind of queryset that does not harness the full power of normal querysets, but should be enough for your concerns. On that same page the execute method is also shown; it is for truly custom SQL that can't even be translated into raw querysets.
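To make the raw-SQL route concrete, here is a sketch of the kind of hand-written query it implies, using stdlib sqlite3 and a simplified stand-in schema for the models above (table and column names are invented): one LEFT JOIN pulls the base product row together with its subtype row in a single query.

```python
import sqlite3

# Simplified stand-in schema for Product / SimpleProduct; in Django
# you would run similar SQL via Product.objects.raw(...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE simpleproduct (product_id INTEGER, some_field TEXT);
    INSERT INTO product VALUES (1, 'widget'), (2, 'bundle');
    INSERT INTO simpleproduct VALUES (1, 'extra-info');
""")

# One query instead of one-per-product: products without a subtype row
# simply get NULL for the joined columns.
rows = conn.execute("""
    SELECT p.id, p.name, s.some_field
      FROM product p
 LEFT JOIN simpleproduct s ON s.product_id = p.id
  ORDER BY p.id
""").fetchall()

print(rows)  # [(1, 'widget', 'extra-info'), (2, 'bundle', None)]
```

The m2m relations (categories, bundle contents) would still need either additional joins (with duplicated product rows to regroup in Python) or one extra query per relation, which is what prefetch-style approaches do.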
Does select_related work for GenericRelation relations, or is there a reasonable alternative? At the moment Django is doing individual SQL calls for each item in my queryset, and I'd like to avoid that using something like select_related.
class Claim(models.Model):
    proof = generic.GenericRelation(Proof)

class Proof(models.Model):
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')
I'm selecting a bunch of Claims, and I'd like the related Proofs to be pulled in instead of queried individually.
There isn't a built-in way to do this. But I've posted a technique for simulating select_related on generic relations on my blog.
Blog content summarized:
We can use Django's _content_object_cache field to essentially create our own select_related for generic relations.
generics = {}
for item in queryset:
    generics.setdefault(item.content_type_id, set()).add(item.object_id)
content_types = ContentType.objects.in_bulk(generics.keys())
relations = {}
for ct, fk_list in generics.items():
    ct_model = content_types[ct].model_class()
    relations[ct] = ct_model.objects.in_bulk(list(fk_list))
for item in queryset:
    setattr(item, '_content_object_cache',
            relations[item.content_type_id][item.object_id])
Here we get all the different content types used by the relationships in the queryset, and the set of distinct object IDs for each one, then use the built-in in_bulk manager method to get all the content types at once in a nice ready-to-use dictionary keyed by ID. Then, we do one query per content type, again using in_bulk, to get all the actual objects.
Finally, we simply set the relevant object on the _content_object_cache field of the source item. The reason we do this is that this is the attribute that Django would check, and populate if necessary, if you called x.content_object directly. By pre-populating it, we're ensuring that Django will never need to perform the individual lookup - in effect, what we're doing is implementing a kind of select_related() for generic relations.
Looks like select_related and generic relations don't work together. I guess you could write some kind of accessor for Claim that gets them all via the same query. This post gives you some pointers on raw SQL to get generic objects, if you need them.
You can use the .extra() function to manually pull extra fields:
Claim.objects.filter(proof__filteryouwant=valueyouwant).extra(select={'field_to_pull': 'proof_proof.field_to_pull'})
The .filter() will do the join; the .extra() will pull the field.
proof_proof is the SQL table name for the Proof model.
If you need more than one field, specify each of them in the dictionary.