This might sound like a stupid question, so apologies in case I'm wasting your time. I have a ton of results coming from the data in my Django project. It's a table with many columns and almost 4,000 rows. I am using DataTables for pagination, filtering, horizontal scrolling, and sorting the columns.
Server-side, I am also using django-filter for querying the database.
My problem is that loading the initial data (not yet filtered via django-filter) takes a lot of time. Should I implement pagination on the server side? If so, how does this work with DataTables? Will DataTables paginate/display only the (first page of) paginated data coming from the server-side query? Is there a way for the two to work together?
Thanks.
Related
I have a model with approximately 150K rows.
It takes 1.3s to render the ListView for this model.
When I click the change link in the ListView, it takes almost 2 minutes to render the change view.
Other models have normal render times for the edit view.
Any ideas how to speed this up?
Your best bet is to limit the number of returned rows and implement some type of pagination in your application.
Django conveniently implements a type of pagination out of the box.
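For illustration, a minimal sketch of that built-in pagination in a view; the Article model and template path are placeholders, not from the question:

from django.core.paginator import Paginator
from django.shortcuts import render

from .models import Article  # placeholder model

def article_list(request):
    queryset = Article.objects.all()     # lazily evaluated queryset
    paginator = Paginator(queryset, 50)  # 50 rows per page
    page_obj = paginator.get_page(request.GET.get("page", 1))  # clamps invalid page numbers
    return render(request, "articles/list.html", {"page_obj": page_obj})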
First of all, ask yourself these questions:
Are you doing a lot of work with your data in templates?
Can that work be done in the backend, so the template only renders the result?
Do I use pagination?
As far as I know, pagination in Django is implemented with LIMIT and OFFSET SQL statements, which don't perform well when you have many pages. In our projects, we wrote raw SQL for this purpose, which works a little bit faster.
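The raw SQL itself isn't shown here; as one hedged illustration of the general idea, keyset ("seek") pagination avoids large OFFSETs by filtering on the last seen key instead. Article and the page size are placeholders:

from .models import Article  # placeholder model

def next_page(last_id, page_size=50):
    # Equivalent to: WHERE id > %s ORDER BY id LIMIT 50.
    # The database can seek straight past last_id via the primary-key index,
    # instead of scanning and discarding OFFSET rows.
    return list(Article.objects.filter(id__gt=last_id).order_by("id")[:page_size])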
Also, you can install Django Debug Toolbar, which shows you which statements the Django ORM is executing and measures how long they take.
How can I make Django Rest Frameworks browsable UI fast with RelatedField?
I'm aware this has already been asked here: Django REST Framework: slow browsable UI because of large related table but the answer is no longer valid for new versions of DRF
Including two PrimaryKeyRelatedFields gives me a 5s+ load time; removing them takes me back down to under 0.3s.
I've tried setting html_cutoff=100 or even html_cutoff=1, but it seems to make no difference to load times.
Any ideas? I'm currently on DRF 3.3.2.
Edit: the tables involved have 120 to 12,000 records, but it would be great to handle much larger amounts.
Since DRF version 3.4.4, it's possible to limit the number of relationships being displayed by using select field cutoffs.
From the DRF documentation:
When rendered in the browsable API relational fields will default to only displaying a maximum of 1000 selectable items. If more items are present then a disabled option with "More than 1000 items…" will be displayed.
...
You can also control these globally using the settings HTML_SELECT_CUTOFF and HTML_SELECT_CUTOFF_TEXT.
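For reference, here is a rough sketch of what the per-field cutoff and the global settings look like; Post, Author and the field names are placeholders, not from the question:

# serializers.py - per-field cutoff on a relational field (placeholder names)
from rest_framework import serializers

from .models import Author, Post  # placeholder models

class PostSerializer(serializers.ModelSerializer):
    author = serializers.PrimaryKeyRelatedField(
        queryset=Author.objects.all(),
        html_cutoff=100,                              # show at most 100 selectable options
        html_cutoff_text="More than 100 authors...",  # text for the disabled option
    )

    class Meta:
        model = Post
        fields = ["id", "title", "author"]

# settings.py - the global equivalents
REST_FRAMEWORK = {
    "HTML_SELECT_CUTOFF": 100,
    "HTML_SELECT_CUTOFF_TEXT": "More than 100 items...",
}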
This question is similar or duplicate of this one Django REST Framework: slow browsable UI because of large related table.
In essence it's the N+1 problem, and in the context of Django it can be fixed by eager loading of data, i.e. calling prefetch_related() or select_related() on the QuerySet. Check this answer.
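A rough sketch of that eager loading in a DRF view; Post, author and tags are placeholder names:

from rest_framework import viewsets

from .models import Post                 # placeholder model
from .serializers import PostSerializer  # placeholder serializer

class PostViewSet(viewsets.ModelViewSet):
    serializer_class = PostSerializer
    # select_related() follows forward FK/one-to-one relations with a SQL JOIN;
    # prefetch_related() issues one extra query per many-to-many/reverse relation.
    queryset = Post.objects.select_related("author").prefetch_related("tags")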
Not quite the answer I am looking for, but it currently looks like there is already activity around this on GitHub: https://github.com/tomchristie/django-rest-framework/issues/3329. With a little luck, one of those patches will get merged soon.
I have a database table with over a million records
In my views, I select all records like below:
data = Student.objects.all()
I get a memory error when rendering the result to a grid on the template.
Is there any good practice for handling large querysets without running into this error?
Regards
Joshua
I can't comment yet, so I'll just post this as an answer. You might want to consider using jQuery DataTables for the front-end UI. It has a server-side processing option, which is ideal for dealing with a large database. Just a suggestion.
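As a rough sketch of what a hand-rolled server-side processing endpoint for DataTables can look like: only the basic draw/start/length parameters are handled, searching and ordering are left out, and the "name" field is a placeholder:

from django.http import JsonResponse

from .models import Student

def students_datatable(request):
    draw = int(request.GET.get("draw", 1))
    start = int(request.GET.get("start", 0))
    length = int(request.GET.get("length", 25))

    queryset = Student.objects.all()
    total = queryset.count()
    page = queryset.values("id", "name")[start:start + length]  # only one page is pulled from the database

    return JsonResponse({
        "draw": draw,
        "recordsTotal": total,
        "recordsFiltered": total,  # no search applied in this sketch
        "data": list(page),
    })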
I have a site which uses the standard Django comments (well, a subclass of that, but mostly the same). I want to cache the list of comments that's rendered on each page, as that's quite a big and slow query. But, while I know how to cache individual querysets, I can't see how best to do it for the comments app.
It looks like the querysets for these lists of comments are generated in the BaseCommentNode template tag, so I can't see an easy way to check whether there's a cached version of that queryset and return it if so... what's my best way of nicely caching this query?
(I'm also caching every page for all logged-out users, with a 5 minute expiry, but think my site would benefit from caching queries like this for a longer period.)
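As one hedged illustration of caching such a query with Django's low-level cache API; the key format, timeout and the get_comments_for() helper are hypothetical:

from django.core.cache import cache

def cached_comments(obj, timeout=60 * 60):
    key = f"comments:{obj._meta.label_lower}:{obj.pk}"
    comments = cache.get(key)
    if comments is None:
        comments = list(get_comments_for(obj))  # hypothetical helper; force evaluation before caching
        cache.set(key, comments, timeout)
    return comments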
I've started using the Django REST Framework in preparation for production, but unfortunately it is performing quite slowly.
I am returning an array of 500 dictionaries, each with 5 key-value pairs. In the shell, the call time is not noticeable at all - you press enter, and it's done. Previously, when I was serving the same content directly without the REST framework, there was also no noticeable delay. However, with the REST framework, it takes about 1-2 seconds after the page has rendered for the content to display.
I do not think this is due to javascript as hitting the same details through the browseable API results in a similar delay.
Also, I am NOT caching at the moment.
There's no way anyone else is going to be able to debug this for you from the details given in the question.
Are you reusing an existing generic view or writing your own view?
Are you serializing the data? If so, what does your serializer definition look like?
Does the issue manifest when rendering to JSON, or just when rendering to the Browsable API?
You mention serving the content without REST framework - what did the views look like before, and what do they look like after?
The REST framework views are trivial, so use a profiling tool, or simply override them and add some timing. Likewise the renderers are trivial - subclass whatever renderer you're using at the moment, override .render() and add a couple of timing calls either side of calling the parent's .render() method.
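A sketch of that renderer-timing idea, assuming the default JSONRenderer is the one in use:

import logging
import time

from rest_framework.renderers import JSONRenderer

logger = logging.getLogger(__name__)

class TimedJSONRenderer(JSONRenderer):
    def render(self, data, accepted_media_type=None, renderer_context=None):
        # Wrap the parent's render() with a couple of timing calls.
        start = time.monotonic()
        result = super().render(data, accepted_media_type, renderer_context)
        logger.info("Rendering took %.3fs", time.monotonic() - start)
        return result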
If you think you've narrowed down a problem to a specific area then throw together a minimal test case and submit it as an issue.
The serialization itself is unlikely to be the issue; I've used the same serialization engine to replicate Django's fixture dumping, and there was no significant degradation.
It's feasible that if you're doing lookups across model relationships you might need to add .select_related() or .prefetch_related() calls when constructing the queryset, exactly as you would with any other Django view.
Edit: Note that following on from this post there were significant serializer speed improvements made, as noted in this ticket.