Nginx + Django: {% csrf_token %} causes 502 "invalid header" error

I have a Django site developed using Pinax. When I deploy it with Apache + mod_wsgi, it works fine. But when I deploy it with Nginx + uWSGI, it nearly works: any page whose template includes a {% csrf_token %} tag crashes. The crashed page does not display a Django error page; it displays an Nginx 502 error page. The Nginx error log is:
2012/06/08 09:11:59 [error] 30224#0: *79 upstream sent invalid header
while reading response header from upstream, client: 211.142.12.3,
server: mysite.com, request: "GET /discuss/ HTTP/1.1", upstream:
"uwsgi://127.0.0.1:9001", host: "mysite.com", referrer:
"http://mysite.com/"
uWSGI displays:
{address space usage: 42319872 bytes/40MB} {rss usage: 22573056
bytes/21MB} [pid: 21398|app: 0|req: 1/3] 110.178.82.221 () {42 vars in
988 bytes} [Fri Jun 8 18:27:01 2012] GET /discuss/ => generated 31139
bytes in 2306 msecs (HTTP/1.1 200) 5 headers in 358 bytes (1 switches
on core 0)
The error occurs on a GET request, not a POST request. I tested this: when I delete the {% csrf_token %} tag from the template, everything is OK. So there must be a relationship between the token and the error, not anything else.
What's going on?

Okay, it's solved. I had installed uWSGI by compiling it from source. I deleted that version, reinstalled it with pip install uwsgi, and everything is fine!
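For anyone with the same setup, the swap looks roughly like this (a sketch; the rm path is an assumption, since where a source build lands depends on how it was installed):

which uwsgi                      # find the active binary
sudo rm /usr/local/bin/uwsgi     # assumed location of the source-built binary
pip install uwsgi                # reinstall from PyPI
uwsgi --version                  # confirm the pip build is now the one in use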

Related

Nginx + uWSGI + Django returning 502 with a big request body and an expired session

I have a Django view that processes POST requests of varying size (between 20 and 30,000 characters). The API is only available to registered users and is validated via a session header. The API works well in my test cases, but I noticed some 502s in the Nginx log. The error log shows this line:
2016/12/26 19:53:15 [error] 1048#0: *72 sendfile() failed (32: Broken pipe) while sending request to upstream, client: XXX.XXX.XXX.XXX, server: , request: "POST /api/v1/purchase HTTP/1.1", upstream: "uwsgi://unix:///opt/project/sockets/uwsgi.sock:", host: "staging.example.com"
After some tests, I managed to recreate this call with a big request body.
curl -XPOST https://staging.example.com/api/v1/purchase \
-H "Accept: application/json" \
-H "token: development-token" \
-H "session: bad-session" \
-i -d '{"receipt-data": "<25677 character string>"}'
HTTP/1.1 100 Continue
HTTP/1.1 502 Bad Gateway
Server: nginx/1.4.6 (Ubuntu)
Date: Mon, 26 Dec 2016 19:54:32 GMT
Content-Type: text/html
Content-Length: 181
Connection: keep-alive
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
What seems to happen is that Django determines the session is not valid and returns the response (403) before the client finishes delivering the body.
If I'm correct, is there a way to make Django send that 100 status after checking the headers, instead of Nginx doing it?
If not, is there a more elegant solution than waiting for the body before checking the headers?
I've found a statement that adding the HTTP header Connection: keep-alive to the client request should fix this issue. I'll verify it later, but I'm posting it here already in the hope it helps someone.
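If that statement holds, the reproduction above needs only one extra header; an untested sketch using the same staging endpoint and dummy tokens as before:

curl -XPOST https://staging.example.com/api/v1/purchase \
  -H "Accept: application/json" \
  -H "Connection: keep-alive" \
  -H "token: development-token" \
  -H "session: bad-session" \
  -i -d '{"receipt-data": "<25677 character string>"}'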

Django + uWSGI holds the response for a long time

I'm running a Django web application using Nginx and uWSGI, and I've hit a problem with the finish_process view.
I added logging at the beginning and at the end of the view.
I make a request at 17:20:18, and the view finishes at 17:20:48. But uWSGI does not return the response at that point; after 577 seconds it throws an IOError when it tries to write the response to the client, because Nginx has closed the connection (uwsgi_read_timeout is 300 seconds).
My question is: why does uWSGI hold the response for so long after Django has handled the view? I'm a bit at a loss.
Django log:
[INFO]246 views.py/finish_process 2016-03-06 17:20:18: [VIEW][START] finish_process: id=4
[INFO]282 views.py/finish_process 2016-03-06 17:20:48: [VIEW][END] finish_process: id=4
uWSGI log:
Sun Mar 6 17:29:55 2016 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/finish_process/ (10.11.16.251)
IOError: write error
[pid: 3275|app: 0|req: 48689/48688] 10.11.16.251 () {34 vars in 553 bytes} [Sun Mar 6 17:20:18 2016] POST /api/finish_process/ => generated 0 bytes in 577024 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 4)
Nginx error.log:
2016/03/06 17:25:18 [error] 3052#0: *44561 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.11.16.251, server: skyline, request: "POST /api/finish_process/ HTTP/1.1", upstream: "uwsgi://unix:/var/run/skyline.sock:", host: "10.11.16.253"
uwsgi.ini:
[uwsgi]
socket = /var/run/skyline.sock
chdir = /opt/skyline
processes = 1
threads = 10
master = true
env = DJANGO_SETTINGS_MODULE=skyline.prod_settings
module = skyline.wsgi:application
chmod-socket = 666
vacuum = true
die-on-term = true
Nginx conf:
server {
    listen 80;
    server_name skyline;
    charset utf-8;
    client_max_body_size 50M;
    uwsgi_read_timeout 300;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/run/skyline.sock;
    }
}
Updated:
Solved. I made a mistake.
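The poster never says what the mistake was, but for anyone else whose worker sits on a finished response like this, uWSGI's harakiri option is a useful guard: it forcibly recycles any worker whose request runs longer than the given number of seconds, so the hang shows up as an explicit harakiri entry in the log instead of a silent 577-second stall. A sketch against the uwsgi.ini above (300 is an assumption, picked to match uwsgi_read_timeout):

[uwsgi]
; ...options from the question...
harakiri = 300
harakiri-verbose = true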

HTTP 403 error + "readv() failed (104: Connection reset by peer) while reading upstream"

Preface: I'm running nginx + gunicorn + Django on an Amazon EC2 instance, using s3boto as the default storage backend. I'm on the free tier. The EC2 security group allows HTTP, SSH, and HTTPS.
I'm attempting to send a multipart/form-data request containing a single element: a photo. When attempting the upload, the iPhone (where the request comes from) hangs. The photo is around 9.5 MB in size.
When I check the nginx-access.logs:
"POST /myUrl/ HTTP/1.1" 400 5 "-""....
When I check the nginx-error.logs:
[error] 5562#0: *1 readv() failed (104: Connection reset by peer) while reading upstream, client: my.ip.addr.iphone, server: default, request: "POST /myUrl/ HTTP/1.1", upstream: "http://127.0.0.1:8000/myUrl/", host: "ec2-my-server-ip-addr.the-location-2.compute.amazonaws.com"
[info] 5562#0: *1 client my.ip.addr.iphone closed keepalive connection
I really cannot figure out why this is happening... I have tried changing the timeout settings in /etc/nginx/sites-available/default...
server {
    ...
    client_max_body_size 20M;
    client_body_buffer_size 20M;
    location / {
        keepalive_timeout 300;
        proxy_read_timeout 300;
    }
}
Any thoughts?
EDIT: After talking on IRC a little more, the asker's problem is the 403 itself, not the nginx error. Leaving my comments on the nginx error below, in case anyone else stumbles into it someday.
I ran into this very problem last week and spent quite a while trying to figure out what was going on. See here: https://github.com/benoitc/gunicorn/issues/872
Basically, as soon as Django sees the headers, it knows the request isn't authenticated. It doesn't wait for the large request body to finish uploading; it responds immediately, and gunicorn closes the connection right after. nginx keeps sending data, and the end result is that gunicorn sends an RST packet to nginx. Once this happens, nginx cannot recover, and instead of sending the actual response from gunicorn/django, it sends a 502 Bad Gateway.
I ended up putting in a piece of middleware that accesses a couple of fields on the Django request, which ensures the entire request body is downloaded before Django sends a response:
import re

checker = re.compile(feed_url_regexp)  # feed_url_regexp is defined elsewhere in the project

class AccessPostBodyMiddleware:
    def process_request(self, request):
        if checker.match(request.path.lstrip('/')) is not None:
            # Just need to access the request info here;
            # not sure which one of these actually does the trick.
            # This will download the entire request,
            # fixing this random issue between gunicorn and nginx.
            _ = request.POST
            _ = request.REQUEST  # removed in Django 1.9+; drop on newer versions
            _ = request.body
        return None
However, I do not have control of the client. Since you do (in the form of your iPhone app), maybe you can find a way to handle the 502 Bad Gateway. That will keep your app from having to send the entire request twice.

django-paypal: IPN requests are always INVALID

I'm using dcramer's fork of django-paypal, but I always get an invalid IPN when working with my sandbox accounts.
I receive the following IPN:
Invalid postback. (INVALID)
I tried everything that showed up on Google:
checked seller & buyer emails
sandbox accounts are both verified
I use form.sandbox to render the paypal form
tried removing custom values
there is no non-ascii character in the request
When manually checking the request against https://www.sandbox.paypal.com/cgi-bin/webscr, I also get INVALID.
Has anyone encountered this issue? Is there any more verbose page for validating IPN requests?
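For reference, the manual check mentioned above is just echoing the IPN parameters back to PayPal with cmd=_notify-validate added and reading the one-word reply. A minimal sketch of that postback (Python 3 standard library; verify_ipn and the call site are illustrative, not django-paypal's API):

from urllib.parse import urlencode
from urllib.request import Request, urlopen

SANDBOX_URL = "https://www.sandbox.paypal.com/cgi-bin/webscr"

def verify_ipn(params):
    # Echo back every POST parameter PayPal sent, plus cmd=_notify-validate.
    data = urlencode(dict(params, cmd="_notify-validate")).encode("utf-8")
    with urlopen(Request(SANDBOX_URL, data=data)) as resp:
        return resp.read().decode("utf-8")  # expected: 'VERIFIED' or 'INVALID'

# e.g. inside a Django IPN view: verify_ipn(request.POST.dict())

If even this bare-bones postback returns INVALID, the problem is in the submitted data itself (encoding, missing fields) rather than in django-paypal.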
Yes, I also get errors on post-back starting yesterday (18 June):
Opened POST Back Socket to PayPal.
PayPal Post Back returns HTTP/1.0 400 Bad Request
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 216
Expires: Mon, 18 Jun 2012 22:18:00 GMT
Date: Mon, 18 Jun 2012 22:18:00 GMT
Connection: close
<HTML><HEAD>
<TITLE>Invalid URL</TITLE>
</HEAD><BODY>
<H1>Invalid URL</H1>
The requested URL "/cgi-bin/webscr", is invalid.<p>
....
</BODY></HTML>
: not handled.
I use my own IPN integration. It tries to handle all replies from PayPal, which is why I get that last message (": not handled."). I upgraded a package yesterday, so I'm not quite sure it's a PayPal problem, though.

Nginx connection reset, response from uWSGI lost

I have a Django app hosted via Nginx and uWSGI. For a certain very simple request, I get different behaviour for GET and POST, which should not be the case.
The uWSGI daemon log:
[pid: 32454|app: 0|req: 5/17] 127.0.0.1 () {36 vars in 636 bytes} [Tue Oct 19 11:18:36 2010] POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
[pid: 32455|app: 0|req: 5/18] 127.0.0.1 () {32 vars in 521 bytes} [Tue Oct 19 11:18:50 2010] GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
The Nginx accesslog:
127.0.0.1 - - [19/Oct/2010:18:18:36 +0200] "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 0 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
127.0.0.1 - - [19/Oct/2010:18:18:50 +0200] "GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 80 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
The Nginx errorlog:
2010/10/19 18:18:36 [error] 4615#0: *5 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0", upstream: "uwsgi://unix:sock/uwsgi.sock:", host: "localhost:9201"
In essence, Nginx loses the response somewhere when I use POST, but not when I use GET.
Anybody knows something about that?
Pass --post-buffering 1 to uWSGI.
This will automatically buffer any HTTP request body larger than 1 byte.
The problem is caused by the way Nginx manages upstream disconnections.
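For an ini-style config, the equivalent would be (a sketch, matching the option described above):

[uwsgi]
; buffer any request body larger than 1 byte in full
; before handing the request to the application
post-buffering = 1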
I hit the same issue, but in my case I can't disable uwsgi_pass_request_body, as most of the time (though not always) my app does need the POST data.
This is the workaround I found, while this issue is not fixed in uWSGI:
http://permalink.gmane.org/gmane.comp.python.wsgi.uwsgi.general/813
import django.core.handlers.wsgi

class ForcePostHandler(django.core.handlers.wsgi.WSGIHandler):
    """Workaround for: http://lists.unbit.it/pipermail/uwsgi/2011-February/001395.html"""
    def get_response(self, request):
        request.POST  # force reading of POST data before the response is generated
        return super(ForcePostHandler, self).get_response(request)

application = ForcePostHandler()
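To try this, point uWSGI's module option at wherever you save the handler instead of the stock wsgi.py; the module path below is a placeholder:

[uwsgi]
; this module exposes the 'application' object built above
module = myproject.wsgi_force_post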
I am facing the same issue. I tried all the solutions above, but they did not work. Ignoring the request body in my case is simply not an option.
Apparently it is a bug in nginx and uWSGI when dealing with POST requests whose response is smaller than 4052 bytes.
What solved it for me was adding --pep3333-input to uWSGI's parameter list. After that, all POSTs are returned correctly.
Versions of nginx/uwsgi I'm using:
$ nginx -V
nginx: nginx version: nginx/0.9.6
$ uwsgi --version
uWSGI 0.9.7
After a lucky find in further research (http://answerpot.com/showthread.php?577619-Several%20Bugs/Page2), I found something that helped...
Supplying the uwsgi_pass_request_body off; directive in the Nginx conf resolves this problem...
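In context, that directive belongs in the location block that talks to uWSGI; a sketch based on the socket path in this thread's error log (only safe when the application never needs the POST data, as the earlier answers note):

location / {
    include uwsgi_params;
    uwsgi_pass unix:sock/uwsgi.sock;
    # do not forward the request body to uWSGI at all
    uwsgi_pass_request_body off;
}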