Running a Django application on App Engine, we need to make a query that returns approximately 450 rows per request, including M2M joins via prefetch_related and select_related.
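For reference, the query shape is roughly the following (the model names here are made up, not our real ones):

```python
# Hypothetical models standing in for the real ones.
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Article(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    tags = models.ManyToManyField(Tag)

# Each request evaluates roughly this queryset, returning ~450 rows:
articles = (
    Article.objects
    .select_related("author")      # FK resolved via JOIN in the main query
    .prefetch_related("tags")      # M2M fetched in a second query
)
```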
When we make many concurrent requests, the query time for each request goes up in such a way that all requests end simultaneously.
Running the same concurrent requests against a non-App Engine Django installation, or against an App Engine instance with threading set to false, does not show this behavior.
There is also a slight improvement when the requests are spread across different App Engine instances.
Has anyone seen this before?
Sounds like your database backend is too heavily loaded by your query. Have you tried upgrading to a higher tier?
The basic tier only handles 25 concurrent queries. You said "many" in your question, so if "many" > 25, that's the source of your problem:
https://developers.google.com/cloud-sql/pricing
Related
I want to test whether Django can handle a heavy load of requests at once, continuously. How do I set up this scenario? Using tests?
Eg:
300 Read requests
300 Write requests
within 1 sec.
You can take any basic ORM query like
User.objects.get(id=some_id)
as a single request, and repeat it as 300 different read requests.
Do the same for the write requests, but change any field of your choice.
Then check whether it holds up for at least 10 seconds.
You can think of this scenario as multiple concurrent users using the Django app.
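A rough sketch of those read and write operations, assuming the stock django.contrib.auth User model:

```python
from django.contrib.auth.models import User

def read_request(some_id):
    # One "read request": fetch a user by primary key.
    return User.objects.get(id=some_id)

def write_request(some_id):
    # One "write request": change any field of your choice and save it.
    user = User.objects.get(id=some_id)
    user.first_name = "LoadTest"
    user.save()
```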
I think I found the solution myself: use Locust. But I'll still wait for someone to give a better answer before accepting my own answer.
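For anyone else trying this, a minimal locustfile sketch for the scenario above (the endpoint paths are made up and need to match your app's URLs):

```python
from locust import HttpUser, task, between

class MobileApiUser(HttpUser):
    wait_time = between(0.1, 0.5)

    @task
    def read(self):
        # Hypothetical read endpoint exposed by the Django app.
        self.client.get("/api/users/1/")

    @task
    def write(self):
        # Hypothetical write endpoint and payload; adjust auth/fields as needed.
        self.client.post("/api/users/1/", json={"first_name": "LoadTest"})
```

You would run it with something like locust -f locustfile.py --host http://localhost:8000 and set the user count and spawn rate from the web UI.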
There is also a paid load-testing tool you can use for this, Gatling.
I came across this article:
https://medium.com/@hakibenita/how-to-manage-concurrency-in-django-models-b240fed4ee2
which describes how one request can change a record that another request is currently working with.
Now, this article is from 2017, and I haven't found anything about Django concurrency since.
Also, manage.py runserver is single-threaded.
Does this mean the issue is now handled by Django internally, or do I still have to manage concurrency manually when I deploy with Apache?
Requests are usually handled by multiple workers, which are in fact different processes. This means you will be handling simultaneous requests accessing the database, so yes, you have to watch for race conditions; the framework will not do that for you.
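For example, two common manual safeguards in Django, sketched against a hypothetical Account model with a balance field:

```python
from django.db import transaction
from django.db.models import F

from myapp.models import Account  # hypothetical model with a "balance" field

def withdraw_with_lock(account_id, amount):
    # select_for_update() takes a row-level lock; concurrent requests
    # block on this row until the transaction commits.
    with transaction.atomic():
        account = Account.objects.select_for_update().get(id=account_id)
        account.balance -= amount
        account.save()

def withdraw_atomic_update(account_id, amount):
    # F() pushes the arithmetic into the database, avoiding the
    # read-modify-write race entirely.
    Account.objects.filter(id=account_id).update(balance=F("balance") - amount)
```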
I have been developing a Django server application that mostly works as an API endpoint for a mobile app, with about 30 different models, almost all of them having FKs to each other or to other models, or being in M2M relationships. Now that we're going into production (but do not yet have a lot of users), I have noticed (using Silk) that the most complex query, which fetches a bunch of objects as JSON, makes about 500 SQL queries (those objects each have about 5 FKs and 2 M2Ms, which are all fetched as object fields in the JSON).

The numbers don't seem too huge (50k qps seems to be a normal figure for Postgres, which we are using as our DBMS), but I am getting worried about the future. Are those numbers normal in early production? What is a normal number of database queries per view for an API like the one I described? We are not currently using DRF, but I am looking towards it. Does it solve this problem?
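For illustration, a hedged sketch of the usual N+1 pattern behind query counts like that, and how it is typically reduced with select_related/prefetch_related (the model and field names are made up):

```python
from myapp.models import Item  # hypothetical model with an "owner" FK and a "tags" M2M

# N+1 pattern: one query for the list, then extra queries per object
# for every FK and M2M touched while serializing to JSON.
items = Item.objects.all()
payload = [
    {
        "owner": item.owner.name,                    # one query per item (FK)
        "tags": [t.name for t in item.tags.all()],   # one query per item (M2M)
    }
    for item in items
]

# The same payload with a bounded number of queries.
items = (
    Item.objects
    .select_related("owner")       # FK resolved via JOIN in the initial query
    .prefetch_related("tags")      # one extra query for the M2M, joined in Python
)
payload = [
    {"owner": item.owner.name, "tags": [t.name for t in item.tags.all()]}
    for item in items
]
```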
I have implemented data sync using MS Sync Framework 2.1 over WCF to sync multiple SQL Express databases with a central SQL Server. Syncing happens every three minutes through a Windows service. Recently, we noticed that a huge amount of data is being exchanged over the network (~100 MB every 15 minutes). When I checked with Fiddler, the client calls the service with a GetKnowledge request four times in a session, and each response is around 6 MB, although there are no changes at all in either database. This does not seem normal. How do I optimize the system to reduce such heavy traffic? Please help.
I have defined two scopes: the first has 15 tables, all download-only; the second has 3 tables with upload-only direction.
The XML response has a very large number of <range> tags under the coreFragments/coreFragment/ranges tag, which is the major contributor to the response size.
Let me know if any additional information is required.
It must be the sync knowledge. Do you do lots of deletes, or do you have lots of replicas? Try running a metadata cleanup and see if it compacts the sync knowledge.
Creating one-to-one scopes and re-provisioning fixed the issue. I am still not sure what caused the original problem.
Do you happen to have any join tables and use an ORM? If you do, then this post might help:
https://kumarkrish.wordpress.com/2015/01/07/microsoft-sync-frameworks-heavy-traffic/
Does running Django on Google App Engine, as opposed to the default webapp2 framework, consume additional resources? Any metrics?
From the mailing list, I've seen multiple comments on Django instances taking significantly longer to start up if cold. This would be noticeable if your app is rarely used.
From my app's log it looks like the first request took about 4 seconds to start up.
My Django instances use 41-43MB. Not sure about webapp2.
If you're using Django-nonrel as an ORM, I'm sure there are a few cycles spent in the ORM translation, but I doubt it's significant. My Tastypie REST layer can take anywhere between 40 and 120 ms for the same request; it looks like that performance is dictated more by the datastore than anything else.