Problem
Uploading 1-2 MB files works fine.
When I attempt to upload a 16 MB file, I get a 502 error after several seconds.
In more detail:
I click "Upload"
Google Chrome uploads the file (the upload status changes from 0% to 100% in the bottom-left corner)
The status changes to "Waiting for HOST", where HOST is my site's hostname
After about half a minute, the server returns "502 Bad Gateway"
My view:
def upload(request):
    if request.method == 'POST':
        f = File(data=request.FILES['file'])
        f.save()
        return redirect(reverse(display), f.id)
    else:
        return render('filehosting_upload.html', request)
render(template, request[, data]) is my own shorthand that deals with some AJAX stuff.
The filehosting_upload.html:
{% extends "base.html" %}
{% block content %}
    <h2>File upload</h2>
    <form action="{% url nexus.filehosting.views.upload %}" method="post" enctype="multipart/form-data">
        {% csrf_token %}
        <input type="file" name="file">
        <button type="submit" class="btn">Upload</button>
    </form>
{% endblock %}
Logs & specs
There is nothing informative in the logs that I can find.
Versions:
Django==1.4.2
Nginx==1.2.1
gunicorn==0.17.2
Command line parameters
command=/var/www/ernado/data/envs/PROJECT_NAME/bin/gunicorn -b localhost:8801 -w 4 PROJECT_NAME:application
Nginx configuration for related location:
location /files/upload {
    client_max_body_size 100m;
    proxy_pass http://HOST;
    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;
}
Nginx log entry (changed MY_IP and HOST)
2013/03/23 19:31:06 [error] 12701#0: *88 upstream prematurely closed connection while reading response header from upstream, client: MY_IP, server: HOST, request: "POST /files/upload HTTP/1.1", upstream: "http://127.0.0.1:8801/files/upload", host: "HOST", referrer: "http://HOST/files/upload"
Gunicorn log
2013-03-23 19:31:06 [12634] [CRITICAL] WORKER TIMEOUT (pid:12829)
2013-03-23 19:31:06 [12634] [CRITICAL] WORKER TIMEOUT (pid:12829)
2013-03-23 19:31:06 [13854] [INFO] Booting worker with pid: 13854
Question(s)
How can I fix this?
Is it possible to fix this without the nginx upload module?
Update 1
I tried the suggested configuration:
gunicorn --workers=3 --worker-class=tornado --timeout=90 --graceful-timeout=10 --log-level=DEBUG --bind localhost:8801 --debug
Works fine for me now.
I run my gunicorn with these parameters; try:
python manage.py run_gunicorn --workers=3 --worker-class=tornado --timeout=90 --graceful-timeout=10 --log-level=DEBUG --bind 127.0.0.1:8151 --debug
or, if you start gunicorn differently, run it with the same options.
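If you prefer a config file over command-line flags, the same options can live in a gunicorn configuration file; here is a minimal sketch (the file name gunicorn.conf.py and the PROJECT_NAME module are assumptions, adjust them to your project):

# gunicorn.conf.py -- rough equivalent of the flags above (a sketch, adjust to your setup)
bind = "localhost:8801"
workers = 3
worker_class = "tornado"   # async worker, so a slow upload does not trip the worker timeout
timeout = 90               # the default of 30 s matches the ~30 s 502 described in the question
graceful_timeout = 10
loglevel = "debug"

Start it with gunicorn -c gunicorn.conf.py PROJECT_NAME:application.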
For handling large files you should use an async worker class. Also, I had some trouble using gevent with Python 3.7; it is better to use 3.6.
Django, Python 3.6 example:
Install:
pip install gevent
Run:
gunicorn --chdir myApp myApp.wsgi --workers 4 --worker-class=gevent --bind 0.0.0.0:80 --timeout=90 --graceful-timeout=10
You need to use another worker type class, an async one like gevent or tornado; see the two excerpts below for more explanation, and the small demo after them:
First explanation:
You may also want to install Eventlet or Gevent if you expect that your application code may need to pause for extended periods of time during request processing
Second one:
The default synchronous workers assume that your application is resource bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. For instance, a request to the internet meets this criteria. At some point the external network will fail in such a way that clients will pile up on your servers.
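To make the quoted point concrete, here is a tiny standalone WSGI app (a hypothetical demo module, demo.py, not code from the question) whose single slow request behaves like a large upload being processed:

# demo.py -- hypothetical illustration of why sync workers time out on slow requests
import time

def application(environ, start_response):
    time.sleep(45)  # simulate a request that takes a long time, e.g. a big upload
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"done\n"]

With the default sync worker (gunicorn demo:application), the worker handling this request is killed after the default 30-second timeout, which is exactly the WORKER TIMEOUT / 502 pattern from the question. With an async worker and a longer timeout (gunicorn demo:application --worker-class gevent --timeout 90, with gevent installed), the request completes and other requests are still served while it waits.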
Related
I am running a docker-compose setup with nginx routing requests to my Django server, using this config:
upstream django {
    server backend:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
To use the {% if debug %} template tag I know I need both the correct internal IP and the DEBUG setting turned on. I get the correct internal IPs by running the following code snippet in my Django settings.py:
import socket
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS = [ip for ip in ips] + ['127.0.0.1']
When I docker exec -it backend sh, and import the DEBUG and INTERNAL_IPS settings, I get that the values are ['172.19.0.4', '127.0.0.1'] and True, respectively. Then, when I run docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) to check the containers' IP address, I see that the backend container that runs Django has an IP of 172.19.0.4, same as the internal IP.
Despite all this, in the Django template I get a Django debug-mode error (further proving that debug is turned on) saying that an error was triggered on line 61:
57 {% if debug %}
58 <!-- This url will be different for each type of app. Point it to your main js file. -->
59 <script type="module" src="http://frontend:3000/src/main.jsx"></script>
60 {% else %}
61 {% render_vite_bundle %}
62 {% endif %}
Can someone help me understand what I'm doing wrong and why this template tag won't work? Is there an easier way to do an if statement based on the Django debug setting? Thanks in advance!
I figured it out! I was getting the IP of the backend container when I used hostname, _, ips = socket.gethostbyname_ex(socket.gethostname()), but what I really wanted was to mark Nginx's forwarding as "internal". So I changed the line to hostname, _, ips = socket.gethostbyname_ex("nginx") and the request got forwarded correctly. It might be worth considering whitelisting all of the Docker containers' internal IPs so we can receive messages freely.
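For reference, the relevant part of settings.py after the fix looks roughly like this (a sketch; "nginx" is the compose service name assumed in the setup above):

# settings.py -- sketch: treat requests proxied by the nginx container as "internal"
import socket

# Resolve the nginx service's container IP(s) rather than this container's own hostname,
# because with the proxy_pass setup above that is the REMOTE_ADDR Django actually sees.
hostname, _, ips = socket.gethostbyname_ex("nginx")
INTERNAL_IPS = list(ips) + ["127.0.0.1"]

Note that this lookup only succeeds where the nginx service is resolvable (i.e. inside the compose network), so running manage.py outside Docker may need a fallback.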
I installed Mezzanine CMS with its default settings and am trying to serve it with gunicorn.
-- With python manage.py runserver, all static files are served only if DEBUG = True
Logs said:
... (DEBUG=False)
[07/Sep/2018 12:23:56] "GET /static/css/bootstrap.css HTTP/1.1" 301 0
[07/Sep/2018 12:23:57] "GET /static/css/bootstrap.css/ HTTP/1.1" 404 6165
...
-- With gunicorn helloworld.wsgi --bind 127.0.0.1:8000, no static files are found!
Logs said:
$ gunicorn helloworld.wsgi --bind 127.0.0.1:8000
[2018-09-07 14:03:56 +0200] [15999] [INFO] Starting gunicorn 19.9.0
[2018-09-07 14:03:56 +0200] [15999] [INFO] Listening at: http://127.0.0.1:8000 (15999)
[2018-09-07 14:03:56 +0200] [15999] [INFO] Using worker: sync
[2018-09-07 14:03:56 +0200] [16017] [INFO] Booting worker with pid: 16017
Not Found: /static/css/bootstrap.css/
Not Found: /static/css/mezzanine.css/
Not Found: /static/css/bootstrap-theme.css/
Not Found: /static/mezzanine/js/jquery-1.8.3.min.js/
Not Found: /static/js/bootstrap.js/
Not Found: /static/js/bootstrap-extras.js/
Please have a look at the requested URLs: gunicorn or Mezzanine (or something else?) adds a / character to the end of the URL.
I also ran python manage.py collectstatic, with no effect :(
STATIC_ROOT is correct and I applied https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-static-files-during-development
Do you have any tips or a solution? I'm afraid I didn't search correctly!
Thanks
Momo
It's OK for me now, thanks!
A better search found me the answers:
-- With the internal dev server, DEBUG=False will not serve static files; that's a job for the web server (nginx, Apache): Why does DEBUG=False setting make my django Static Files Access fail?
If you still need to serve static files locally (e.g. for testing without debug) you can run the dev server in insecure mode:
manage.py runserver --insecure
-- With gunicorn, we need to add the static routes manually in urls.py: How to make Django serve static files with Gunicorn?
In urls.py, add this:
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
# ... the rest of your URLconf goes here ...
urlpatterns += staticfiles_urlpatterns()
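Put together, the end of urls.py looks roughly like this (a sketch, not Mezzanine's full URLconf); keep in mind that staticfiles_urlpatterns() returns patterns only while DEBUG is True, so for production traffic it is still better to let nginx/Apache serve STATIC_ROOT:

# urls.py -- sketch showing where the staticfiles patterns are appended
from django.contrib.staticfiles.urls import staticfiles_urlpatterns

urlpatterns = [
    # ... the rest of your URLconf goes here ...
]

# Let Django itself serve files discovered by the staticfiles app (a debug/testing convenience).
urlpatterns += staticfiles_urlpatterns()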
More info here
well done!
I'm running a Django web application using Nginx and uWSGI. I'm having problems with the requests hanging for no apparent reason.
I have added a bunch of logging to the application, and this snippet is where it seems to hang. There are two log lines at the start of the try block; the first one gets printed, but not the second one, so it would seem that it hangs in the middle of that code. This code is from a middleware class that I added in the Django configuration.
def process_request(self, request):
    if 'auth' not in request.session:
        try:
            log.info("Auth not found")    # this line is logged
            log.info("another log line")  # this line is never logged
            if request.is_ajax():
                return HttpResponse(status=401)
            ...
I managed to get a backtrace from the uWSGI thread and this is where it's stuck:
*** backtrace of 76 ***
/usr/bin/uwsgi(uwsgi_backtrace+0x2e) [0x45121e]
/usr/bin/uwsgi(what_i_am_doing+0x30) [0x451350]
/lib/x86_64-linux-gnu/libc.so.6(+0x36c30) [0x7f8a4b2b8c30]
/lib/x86_64-linux-gnu/libc.so.6(epoll_wait+0x33) [0x7f8a4b37d653]
/home/vdr/vdr-ui/env/local/lib/python2.7/site-packages/gevent/core.so(+0x27625) [0x7f8a44092625]
/home/vdr/vdr-ui/env/local/lib/python2.7/site-packages/gevent/core.so(ev_run+0x29b) [0x7f8a4409d11b]
/home/vdr/vdr-ui/env/local/lib/python2.7/site-packages/gevent/core.so(+0x32bc0) [0x7f8a4409dbc0]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x4bd4) [0x7f8a4a0c30d4]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x80d) [0x7f8a4a0c517d]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x162310) [0x7f8a4a0c5310]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7f8a4a08ce23]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x7d30d) [0x7f8a49fe030d]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_Call+0x43) [0x7f8a4a08ce23]
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x47) [0x7f8a4a04b837]
/home/vdr/vdr-ui/env/local/lib/python2.7/site-packages/greenlet.so(+0x375c) [0x7f8a49b1c75c]
/home/vdr/vdr-ui/env/local/lib/python2.7/site-packages/greenlet.so(+0x30a6) [0x7f8a49b1c0a6]
[0x7f8a42f26f38]
*** end of backtrace ***
SIGUSR2: --- uWSGI worker 3 (pid: 76) is managing request /login?next=/&token=45092ca6-c1a0-4c23-9d44-4d171fc561b8 since Wed Dec 2 09:52:44 2015 ---
The Nginx error log prints out [error] 619#0: *55 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.17.0.1, server: vdr
There are no errors in the printouts from uWSGI, so I'm a bit at a loss. Has anyone seen anything similar? All this is running within a Docker container if that makes any difference.
Nginx conf:
upstream uwsgi {
    server unix:///tmp/vdr.sock;
}

server {
    listen 80;
    charset utf-8;
    client_max_body_size 500M;
    server_name localhost 172.17.0.2;

    location /static {
        alias /home/vdr/vdr-ui/static;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi;
        uwsgi_read_timeout 200s;
    }
}
uWSGI conf:
[uwsgi]
chdir = %d
module = alft_ui.wsgi:application
uid=1000
master=true
pidfile=/tmp/vdr.pid
vacuum=true
max-requests=5000
processes=4
env=DJANGO_SETTINGS_MODULE=alft_ui.settings.prod-live
home=/home/vdr/vdr-ui/env
socket=/tmp/vdr.sock
chmod-socket=666
So I finally found the cause for this. It turns out that my setup script added some logstash settings to the Django configuration. These settings pointed to the IP 10.8.0.1 which wasn't reachable from this environment. This would explain why the app got stuck on a logging line. Removing these settings made everything work again.
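As an illustration of how a logging setting can stall a request (a hypothetical sketch using the standard library's SocketHandler rather than the actual logstash handler from my setup): any handler that opens a TCP connection to an unreachable address can block a log call inside the request while it tries to connect, which looks exactly like a request hanging between two log lines.

# settings.py -- hypothetical example; 10.8.0.1 stands in for an unreachable log collector
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "remote": {
            "level": "INFO",
            "class": "logging.handlers.SocketHandler",  # stdlib TCP handler as a stand-in
            "host": "10.8.0.1",
            "port": 5000,
        },
    },
    "loggers": {
        "": {"handlers": ["remote"], "level": "INFO"},
    },
}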
Always good to know that it was your own fault all along :)
I have a Django site developed using Pinax. When I deploy it with apache+mod_wsgi it works fine, but when I deploy it with nginx+uwsgi it nearly works fine, except for any page that includes a {% csrf_token %} tag. The crashed page does not display a Django error page, but an Nginx 502 error page. The Nginx error log is:
2012/06/08 09:11:59 [error] 30224#0: *79 upstream sent invalid header while reading response header from upstream, client: 211.142.12.3, server: mysite.com, request: "GET /discuss/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:9001", host: "mysite.com", referrer: "http://mysite.com/"
uwsgi displays:
{address space usage: 42319872 bytes/40MB} {rss usage: 22573056 bytes/21MB} [pid: 21398|app: 0|req: 1/3] 110.178.82.221 () {42 vars in 988 bytes} [Fri Jun 8 18:27:01 2012] GET /discuss/ => generated 31139 bytes in 2306 msecs (HTTP/1.1 200) 5 headers in 358 bytes (1 switches on core 0)
The error occurs on a GET request, not a POST request. I tested this: when I delete the {% csrf_token %} tag from the template, it's fine. So there must be a relationship between the token and the error, not anything else.
What's going on?
Okay, it's solved. I had installed uwsgi by compiling it from source. I deleted that version, reinstalled it using pip install uwsgi, and now everything is fine!
I'm using django-piston for my REST JSON API, and I have it all set up for documentation through piston's built-in generate_doc function. Under the Django runserver it works great: the template that loops over the doc objects successfully lists the docstrings for both the class and each method.
When I serve the site via nginx and uWSGI, the docstrings are empty. At first I thought this was a problem with the Django markup filter and the restructuredtext formatting, but when I turned that off and simply tried to view the raw docstring values in the template, they are None.
I don't see any issues in the logs, and I can't understand why nginx/uWSGI is the factor here, but honestly it does work great under the dev runserver. I'm kind of stuck on how to start debugging this through nginx/uWSGI. Has anyone run into this situation, or have a suggestion of where I can start looking?
My doc view is pretty simple:
views.py
def ApiDoc(request):
    docs = [
        generate_doc(handlers.UsersHandler),
        generate_doc(handlers.CategoryHandler),
    ]
    c = {
        'docs': docs,
        'model': 'Users'
    }
    return render_to_response("api/docs.html", c, RequestContext(request))
And my template is almost identical to the stock piston template:
api/docs.html
{% load markup %}
...
{% for doc in docs %}
    <h5>top</h5>
    <h3><a id="{{doc.name}}">{{ doc.name|cut:"Handler" }}:</a></h3>
    <p>
        {{ doc.doc|default:""|restructuredtext }}
    </p>
    ...
    {% for method in doc.get_all_methods %}
        {% if method.http_name in doc.allowed_methods %}
            <dt><a id="{{doc.name}}_{{method.http_name}}">request</a> <i>{{ method.http_name }}</i></dt>
            {% if method.doc %}
                <dd>
                    {{ method.doc|default:""|restructuredtext }}
                <dd>
            {% endif %}
The rendered result of this template under nginx is that doc.doc and method.doc are None. I have tried removing the filter and just checking the raw values to confirm this.
I'm guessing the problem has to be somewhere in the uWSGI layer and its environment. I'm running uWSGI with a config like this:
/etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems
and runlevel [2345])
stop on runlevel [016]
respawn
exec /usr/sbin/uwsgi \
--uid www-data \
--socket /opt/run/uwsgi.sock \
--master \
--logto /opt/log/uwsgi_access.log \
--logdate \
--optimize 2 \
--processes 4 \
--harakiri 120 \
--post-buffering 8192 \
--buffer-size 8192 \
--vhost \
--no-site
And my nginx server entry location snippet looks like this:
sites-enabled/mysite.com
server {
    listen 80;
    server_name www.mysite.com mysite.com;

    set $home /var/www/mysite.com/projects/mysite;
    set $pyhome /var/www/mysite.com/env/mysite;
    root $home;

    ...

    location ~ ^/(admin|api)/ {
        include uwsgi_params;
        uwsgi_pass uwsgi_main;
        uwsgi_param UWSGI_CHDIR $home;
        uwsgi_param UWSGI_SCRIPT wsgi_app;
        uwsgi_param UWSGI_PYHOME $pyhome;
        expires epoch;
    }

    ...
}
Edit: Configuration Info
Server: Ubuntu 11.04
uWSGI version 1.0
nginx version: nginx/1.0.11
django non-rel 1.3.1
django-piston latest pypi 0.2.3
python 2.7
uWSGI is starting up the interpreter in a way equivalent to the -OO option of the Python command line. This second level of optimisation removes docstrings.
-OO : remove doc-strings in addition to the -O optimizations
Change:
--optimize 2
to:
--optimize 1
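A quick way to see the same effect outside uWSGI (a small standalone check, not part of the project above):

# check_docstrings.py -- shows what the second optimisation level strips
def handler():
    """Docs for this handler."""

if __name__ == "__main__":
    # Prints the docstring when run normally, but None when run as:
    #   python -OO check_docstrings.py
    # which is the same stripping that uWSGI's "--optimize 2" applies.
    print(handler.__doc__)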