Gateway timeout 504 when moving from Django development server to Nginx/uWSGI?

I have a Django app that talks to a remote Raspberry Pi to acquire imagery through the Pi's camera. Whenever I test the "get a new image" button in the app, the browser gets hung up for about 60 seconds and the image never arrives.
The Raspberry Pi is trying to POST the image to the Django app which is then supposed to save it to persistent storage.
The Nginx logs show a 504 "Gateway Timeout" at the 60-second mark. However, this worked smoothly with the Django development server, where the POST took about a second. What's going wrong now?

Make sure you're running uWSGI with multiple processes and threads.
You can test this on the uWSGI command line by adding:
--processes 4 --threads 2
or in a uWSGI ini file:
processes = 4
threads = 2
If the Pi is POSTing the image back to the app while waiting to display the result to the user, then uWSGI needs to be able to handle both things concurrently.
Another possibility is that your Django app uses threads itself; without the --threads N or --enable-threads option, uWSGI does not enable the GIL when your app runs, so your app's own threads never get scheduled. Adding --enable-threads (or enable-threads = true in the ini file) enables the GIL without starting multiple app threads.
See the note on Python threads in the uWSGI docs if you suspect that your app's threading may be the problem.
In any case, make sure you've provided for enough concurrency if you're seeing gateway timeouts.
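For reference, a minimal uwsgi.ini sketch putting these options together (the module and socket values are illustrative placeholders, not taken from the question):
[uwsgi]
# hypothetical Django module and socket; adjust to your project
module = mysite.wsgi:application
socket = /run/uwsgi/mysite.sock
master = true
# several workers, each with two threads, so a second request
# (the Pi's POST) can be served while the first one is still waiting
processes = 4
threads = 2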

Related

Django dev server extremely slow

We have an app that works perfectly fine in production but runs very slowly on the dev machines.
Django==2.2.4
I'm using Ubuntu 20.04 but other devs are using macOS and even Windows.
Our production server is very small compared to the dev laptops (and yet the app runs very slowly in every dev environment; we are 5 developers).
The app makes several requests on load, since it's a single-page application that uses Django REST Framework with React.js on the front end.
We have tried different databases locally (currently PostgreSQL; we also tried MySQL and sqlite3), with and without Docker, but it does not change the performance.
Each individual request takes a few seconds to execute, but when they all run together things get very slow. As more requests are in flight, performance starts to drop.
The app takes between 2 and 3 minutes to load in the dev environment, while in any production or staging environment it takes a little over 10 seconds.
We also tried disabling DEBUG in the back end and the front end; nothing changes.
My suspicion is that one of the causes is that the dev server is single-threaded and does not start processing a request until the previous one has finished.
This makes the dev environment very hard to work with.
I've seen alternatives (plugins) to make the dev server multi-threaded, but those solutions do not work with the latest versions of Django.
What alternatives could we try to improve this?
Looks like posting this question helped me think of an alternative. Using gunicorn in the dev environment really helps.
Installed it with
pip install gunicorn
And then execute it using this:
venv/bin/gunicorn be-app.wsgi --access-logfile - --workers 2 --bind localhost:8000
Of course, if I want to access the static and media files I'll have to set up a local nginx, but it's not a big deal.
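If you do go the local nginx route for static and media files, a minimal server block sketch might look like this (the paths are illustrative; it assumes gunicorn is bound to localhost:8000 as above):
server {
    listen 80;

    # hypothetical locations; point these at your STATIC_ROOT / MEDIA_ROOT
    location /static/ {
        alias /home/me/project/staticfiles/;
    }
    location /media/ {
        alias /home/me/project/media/;
    }

    # everything else goes to gunicorn
    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
    }
}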

apache + mod_wsgi restart keeping active tasks

I'm running my Django project using Apache + mod_wsgi in daemon mode. When I need the server to pick up changes in the source code, I touch the wsgi.py file, but I have an issue with this approach.
Some tasks triggered from the front end take 10 minutes to complete. If I touch the wsgi file while one of these long tasks is running, it gets killed by the restart.
Is there any way to make the server reload the code while keeping previously started, unfinished tasks running until they are done?
Thanks!
Don't run long-running tasks in web processes. Hand them off to an offline task queue such as Celery; its workers run (and restart) independently of the web server, so reloading your code won't kill in-flight jobs.
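A minimal sketch of what that hand-off could look like, assuming Celery with a Redis broker (the module and task names here are illustrative, not from the question):
# tasks.py -- illustrative sketch; assumes a Redis broker on localhost
from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")

@app.task
def long_front_end_task(item_id):
    # the 10-minute job runs in a Celery worker process,
    # so touching wsgi.py no longer kills it
    ...

# in a Django view: enqueue and return immediately
# long_front_end_task.delay(42)
You only restart the worker when the task code itself changes, and Celery's warm shutdown (SIGTERM) waits for running tasks to finish first.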

Data integrity and recovery with Django and Nginx

Django can be run either behind Nginx (or some other server, but we are currently going to use Nginx) or with manage.py runserver, Django's own development server. In both cases I need data integrity and recovery.
For data integrity, (some of) my Django requests must not be terminated before they finish, because they need to complete the DB modifications they have started in order to preserve data integrity (and no, using SQL transactions is not a solution). They should not be killed as soon as Nginx receives service nginx stop (on Debian Linux) or a similar command (on other OSes), but should finish processing before terminating. How do I do this?
For data recovery I want:
create an empty file when the server starts and remove it when the server stops (how do I do this both with Nginx and with manage.py runserver?)
when the server starts, if the file is found (indicating a crash of our software), run my "data recovery" script before the server starts up. How do I do this?
A couple of things here. First, you should definitely never use runserver in production. Second, you don't really run Django in nginx; you run it in a WSGI server, such as uWSGI or Gunicorn. These are often run behind nginx, but they don't start and stop when it does.
If you're just trying to make sure that the server doesn't stop while views are still processing, you could definitely do that with uWSGI. There's a pretty good writeup on this subject in the uWSGI documentation called "The Art of Graceful Reloading".
As long as the server is reloaded gracefully, I don't think you need to worry as much about recovery. However, if you still wanted to try it, you'd need to handle the marker file's creation and deletion in a shell script that wraps the server process, since none of your Python code runs when the server stops.
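If you do want the marker-file approach anyway, a wrapper script sketch could look like this (the marker path, recovery command, and uwsgi invocation are all illustrative assumptions):
#!/bin/sh
# run_server.sh -- illustrative wrapper around the WSGI server
MARKER=/var/run/myapp/running.marker

if [ -f "$MARKER" ]; then
    # marker survived the last run: the server did not stop cleanly
    python manage.py recover_data   # hypothetical recovery script
fi

touch "$MARKER"
uwsgi --ini uwsgi.ini               # blocks until the server exits
rm -f "$MARKER"                     # reached only after a clean shutdown
A hard crash (or kill -9) never reaches the rm line, so the marker is still there on the next start and triggers recovery.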

Django-Gunicorn-Nginx: Why do I get an internal server error every time I restart Gunicorn?

I set up a droplet on DigitalOcean using the one-click installer. I host my code in a git repo and use git pull to merge changes, followed by service gunicorn reload to restart gunicorn. The problem is that every time I do this and then visit my site I get an "internal server error" message, and after I refresh once or twice the actual page loads.
It is strange because I get the message even if I wait a while (15 minutes) before visiting the web page, so I'm not sure whether I get it because gunicorn was still restarting or for some other reason. Any hints on what might be going on?
Try --graceful-timeout on the command line, or graceful_timeout in a config file.
Without it, the old workers seem to stick around until the next request comes in.
When set to 0, it causes the workers and the master to quit instantly.
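For reference, the setting can go in either place; the value and module name here are illustrative placeholders:
# on the command line
gunicorn myproject.wsgi --graceful-timeout 5

# or in gunicorn.conf.py
graceful_timeout = 5
Hope it helps.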

Django - What happens when setting WEB_CONCURRENCY too high on a Heroku 1X dyno

I have a Django app running on Heroku with Gunicorn on a 1X dyno, and I just found out that you can set WEB_CONCURRENCY.
What's the optimal WEB_CONCURRENCY?
The article Deploying Python Applications with Gunicorn describes Gunicorn's various parameters and their effect on Heroku.
Below is the text from that article regarding WEB_CONCURRENCY:
Gunicorn forks multiple system processes within each dyno to allow a Python app to support multiple concurrent requests without requiring them to be thread-safe. In Gunicorn terminology, these are referred to as worker processes (not to be confused with Heroku worker processes, which run in their own dynos).
Each forked system process consumes additional memory. This limits how many processes you can run in a single dyno. With a typical Django application memory footprint, you can expect to run 2–4 Gunicorn worker processes on a 1X dyno. Your application may allow for a variation of this, depending on your application’s specific memory requirements.
We recommend setting a configuration variable for this setting, so you can tweak it without editing code and redeploying your application.
$ heroku config:set WEB_CONCURRENCY=3
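Note that Gunicorn itself reads WEB_CONCURRENCY as the default for its workers setting, so a typical Procfile needs no explicit --workers flag (the module name below is a placeholder):
# Procfile -- gunicorn picks up WEB_CONCURRENCY for its worker count
web: gunicorn myproject.wsgi --log-file -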