I have observed that our production environment is slow when running with
debug=False
We have to wait 15 to 20 seconds for a single API response.
The same APIs respond in milliseconds; the only thing we changed is
debug=True
We have a setup where a CDN is calling Nginx which is calling a uwsgi server. Some of the requests take a lot of time for Django to handle, so we are relying on the CDN for caching. However, the CDN has a hard timeout of 30 seconds, which is unfortunately not configurable.
If we could send a blank line every few seconds until the response arrives from the uwsgi server, the CDN would not time out. Is there a way to make Nginx send a blank line every few seconds until the response is received?
I see a few possibilities:
1. Update your Django app to work this way: have *it* start dribbling a response immediately, so the connection is never idle long enough to hit the 30-second limit.
2. Rework your design so users' requests never take more than 30 seconds to answer. For example, use a frequent cron job to prime the cache on your backend server, so that when the CDN asks for assets they are already ready. Web servers can also be configured to check for a pre-built static ".gz" version of a URL, which might be a good fit here.
3. Configure Nginx to cache the requests. The first time the CDN requests the slow URL it may time out, but Nginx ought to cache the result anyway; the next time the CDN asks, Nginx should have the cached response ready.
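The caching option can be sketched with a minimal Nginx fragment (paths, zone name, and upstream address are illustrative, not from the original post):

```nginx
# Define a cache zone (path and zone name are placeholders).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=slow_api:10m
                 max_size=1g inactive=60m;

server {
    location /slow-endpoint/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_cache slow_api;
        proxy_cache_valid 200 10m;
        # Serve a stale copy while a fresh one is fetched in the
        # background, so the CDN usually gets an immediate answer.
        proxy_cache_use_stale updating timeout;
        proxy_cache_background_update on;
        # Allow the upstream longer than the CDN's 30 s limit.
        proxy_read_timeout 300s;
    }
}
```

With `proxy_cache_use_stale` plus `proxy_cache_background_update`, only the very first request (with a cold cache) ever risks hitting the CDN's timeout.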
I have a website that is experiencing some slow initial response time. The site is built with Django, and runs on an Apache2 server on Ubuntu. I have been using the Django Debug Toolbar for debugging and optimization.
When making a request to a user profile page, the browser is 'waiting' for ~800ms and receiving for ~60ms for the initial request. However, the Django Debug Toolbar shows that time spent on the CPU and time spent on SQL queries only adds up to ~425ms.
Chrome DevTools: (screenshot omitted)
Django Debug Toolbar: (screenshot omitted)
Even a request to the index page (which has no SQL queries and almost no processing - it just responds with the template) shows ~250ms wait time.
I tried temporarily upgrading the VM to a much more powerful CPU, but that did not (noticeably) change this metric.
This leads me to believe that the wait is not due to inefficient code or database latency, but instead due to some Apache or Ubuntu settings.
After the initial response, the other requests to load page resources (js files, images etc) have a much more reasonable wait time of ~20ms.
What could account for the relatively large initial 'waiting' time?
What tools can I use to get a better picture of where that time is going?
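One common culprit on Apache + mod_wsgi is running in embedded mode, where each Apache process may load the application lazily on its first request. A hedged sketch of a daemon-mode configuration that preloads the app (directive values and paths are illustrative):

```apache
# Run the Django app in a dedicated daemon process group.
WSGIDaemonProcess mysite processes=2 threads=15 python-home=/opt/venv
WSGIProcessGroup mysite
WSGIApplicationGroup %{GLOBAL}
# Load the application at process startup instead of on first request,
# so no visitor pays the import cost.
WSGIImportScript /srv/mysite/wsgi.py process-group=mysite application-group=%{GLOBAL}
WSGIScriptAlias / /srv/mysite/wsgi.py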
In my project, I use Django and deploy it on Heroku. On Heroku I run a uWSGI server (in asynchronous mode); the database is MySQL on AWS RDS. I use 7 dynos to scale the Django app.
When I run a stress test at 600 requests/second with a 30-second timeout,
more than 50% of the requests time out.
Any ideas to help me improve my server's performance?
If your async setup is right (and that is the hardest part), your only solution is adding more dynos. If you are not sure about django + async (or have not done any particular customization to make them work together), you probably have a broken setup with no concurrency at all.
Take into account that uWSGI async mode can mean dozens of different setups (gevent, uGreen, callbacks, greenlets...), so some detail on your configuration would help.
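As an illustration of one such setup, here is a hedged sketch of a gevent-based uWSGI configuration (module name and values are placeholders, and it only helps if every blocking call, including the MySQL driver, is gevent-friendly):

```ini
[uwsgi]
module = mysite.wsgi:application
master = true
processes = 4
; gevent loop engine: each worker can juggle up to 100 concurrent
; requests, but only if all I/O cooperatively yields.
gevent = 100
; monkey-patch the stdlib before the application is loaded
gevent-early-monkey-patch = true
; kill any request stuck longer than 30 s
harakiri = 30
```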
I have a Django server running on Apache via mod_wsgi. It has a massive background task, triggered via an API call, that searches emails and generally takes a few hours to finish.
To facilitate debugging (exceptions and everything else happen in the background), I created an API call that runs the task synchronously. The browser then blocks for those hours and receives the results directly.
In localhost this is fine. However, in the real Apache environment, after about 30 minutes I get a 504 Gateway Timeout error.
How do I change the settings so that Apache allows - just in this debug phase - for the HTTP request to block for a few hours without returning a 504 Gateway Timeout?
I'm assuming this can be changed in the Apache configuration.
You should not be doing long running tasks within Apache processes, nor even waiting for them. Use a background task queueing system such as Celery to run them. Have any web request return as soon as it is queued and implement some sort of polling mechanism as necessary to see if the job is complete and results can be obtained.
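The queue-then-poll pattern above can be sketched as follows. This is a minimal in-process illustration using a thread and a dict in place of Celery's broker and result backend (all names here are illustrative, not Celery's API):

```python
import threading
import uuid

# In-memory job registry; Celery would replace this with a real
# broker and result backend shared across processes.
_results = {}

def submit(task, *args):
    """Start the task in the background and return a job id immediately,
    just as the web request handler should."""
    job_id = str(uuid.uuid4())
    _results[job_id] = {"state": "PENDING", "result": None}

    def _run():
        _results[job_id] = {"state": "DONE", "result": task(*args)}

    threading.Thread(target=_run, daemon=True).start()
    return job_id

def poll(job_id):
    """What the 'is it done yet?' endpoint would return to the client."""
    return _results[job_id]
```

The browser (or a script) then calls the polling endpoint periodically, so no single HTTP request ever has to stay open for hours.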
Also, are you sure the 504 isn't coming from some front-end proxy (explicit or transparent) or load balancer? Apache has no default timeout of 30 minutes.
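That said, if you still need to unblock the debug phase temporarily, these are the knobs that usually matter (an illustrative fragment; values are placeholders, and any proxy in front of Apache enforces its own separate limit):

```apache
# Global connection timeout, in seconds - raise it past the task length.
Timeout 14400
# In mod_wsgi daemon mode the daemon's own limits apply as well.
WSGIDaemonProcess mysite socket-timeout=14400 request-timeout=14400
```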
I have a django application running behind varnish and nginx.
There is a periodic task running every two minutes, accessing a locally running jsonrpc daemon and updating a django model with the result.
Sometimes the django app stops responding, ending in an nginx gateway error. Looking through the logs, it seems that when this happens, the periodic task accessing the jsonrpc daemon is also timing out.
The task itself is pretty simple: A value is requested from jsonrpc daemon and saved in a django model, either updating an existing entry or creating a new one. I don't think that any database deadlock is involved here.
I am a bit lost as to how to track this down. To start, I don't know whether the timeout of the task is causing the overall site timeout, or whether some other problem is causing BOTH timeouts. After all, a timeout in the asynchronous task should not have any influence on the website's response time, should it?
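One way to rule the task out as a cause is to give its jsonrpc call a hard timeout, so a hung daemon can only fail that one task rather than tie anything up indefinitely. A minimal sketch, assuming the daemon speaks JSON-RPC 2.0 over HTTP (the URL, method name, and helper names are illustrative):

```python
import json
import urllib.request

def build_payload(method, params):
    """Assemble a JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params,
    }).encode()

def call_jsonrpc(url, method, params, timeout=5.0):
    """Call the daemon with a hard timeout: a hung daemon now raises an
    exception in the task instead of blocking it forever."""
    req = urllib.request.Request(
        url,
        data=build_payload(method, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["result"]
```

If the site keeps going down even after the task fails fast, that points to a shared underlying cause (e.g. database contention or worker exhaustion) rather than the task itself.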