Unknown time gap in django + uwsgi profile

I'm trying to understand why my website (django + uwsgi + nginx) takes about 1.5 seconds to respond.
I'm using jazzband/django-silk for this purpose, and in its admin panel I see that the request takes about 800 ms:
https://i.stack.imgur.com/SJNaZ.png
But when I inspect the nginx / uwsgi logs, I see that the execution time is much higher, from 1.5 to 2 seconds.
uwsgi log:
=> generated 99087 bytes in 1818 msecs (HTTP/2.0 200) 4 headers in 133 bytes (1 switches on core 0)
nginx log (timings at the end):
10.136.52.100 - - [27/May/2020:14:51:04 +0000] "GET / HTTP/2.0" 200 12250 "https://localhost:4443/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36" 1.819 1.820 .
So I'm losing about 500 ms on every request. Can somebody help me understand what is wrong, or is this expected behavior?
My uWSGI config is below:
[uwsgi]
module = newsletter.wsgi
master = true
processes = 10
socket = /var/run/newsletter/app.sock
chmod-socket = 777
vacuum = true
enable-threads = true
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
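One way to narrow down where the extra ~500 ms goes is to timestamp requests at the WSGI layer itself, outside Django's middleware stack and outside what Silk measures. Below is a minimal sketch, assuming the stock newsletter/wsgi.py layout; TimingMiddleware is purely illustrative, not part of any library. If its number matches Silk's ~800 ms, the remaining time is spent in uWSGI buffering or transport to nginx; if it matches the uWSGI log, something in the WSGI stack that Silk doesn't see is slow.

# newsletter/wsgi.py (sketch)
import logging
import time

from django.core.wsgi import get_wsgi_application

logger = logging.getLogger("wsgi.timing")


class TimingMiddleware:
    """Log wall-clock time spent producing each response in the WSGI app."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.monotonic()
        # A standard (non-streaming) Django response is fully rendered by the
        # time the app returns, so this captures the whole handler time.
        response = self.app(environ, start_response)
        elapsed_ms = (time.monotonic() - start) * 1000.0
        logger.info("%s handled in %.0f ms", environ.get("PATH_INFO"), elapsed_ms)
        return response


application = TimingMiddleware(get_wsgi_application())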

Related

Oauth2-proxy - 404 error when redirecting to upstream url (Django application web page)

I'm trying to protect a Django application with oauth2-proxy (version 7.2.1 or 7.3.0).
In the oauth2-proxy configuration, when the upstream url is set to something like --upstream="http://127.0.0.1:8000", the redirection works fine (and it returns a home page I have defined in the application).
But if I use an upstream like --upstream="http://127.0.0.1:8000/hello", it returns a 404 error instead of the hello page that is also defined in the application.
The page http://127.0.0.1:8000/hello works fine when invoked directly, returning "GET /hello HTTP/1.1" 200 136, so I would say it is not a problem with the page.
This is the command line I'm using:
oauth2-proxy.exe ^
--http-address=127.0.0.1:4180 ^
--email-domain=* ^
--cookie-secure=false ^
--cookie-secret=adqeqpioqr809718 ^
--upstream="http://127.0.0.1:8000/hello" ^
--redirect-url=http://127.0.0.1:4180/oauth2/callback ^
--oidc-issuer-url=http://127.0.0.1:28081/auth/realms/testrealm ^
--insecure-oidc-allow-unverified-email=true ^
--provider=keycloak-oidc ^
--client-id=oauth2_proxy ^
--ssl-insecure-skip-verify=true ^
--client-secret=L2znXLhGX4N0j3nsZYxDKfdYpXHMGDkX ^
--skip-provider-button=true
When oauth2-proxy succeeds in proxying (--upstream="http://127.0.0.1:8000"), I get the page and the following oauth2-proxy output:
[2022/09/08 10:52:06] [proxy.go:89] mapping path "/" => upstream "http://127.0.0.1:8000"
[2022/09/08 10:52:06] [oauthproxy.go:148] OAuthProxy configured for Keycloak OIDC Client ID: oauth2_proxy
[2022/09/08 10:52:06] [oauthproxy.go:154] Cookie settings: name:_oauth2_proxy secure(https):false httponly:true expiry:168h0m0s domains: path:/ samesite: refresh:disabled
[2022/09/08 10:57:01] [oauthproxy.go:866] No valid authentication in request. Initiating login.
127.0.0.1:54337 - 9bbfcf75-da91-487a-a55e-40472e4adb23 - - [2022/09/08 10:57:01] 127.0.0.1:4180 GET - "/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 302 380 0.001
127.0.0.1:54337 - e0d8ed12-e4dd-4da6-9fbb-cf689fc53f8f - mail#gmail.com [2022/09/08 10:57:09] [AuthSuccess] Authenticated via OAuth2: Session{email:mail#gmail.com user:93547bcc-2441-414a-9149-c7533c4f5d23 PreferredUsername:testuser token:true id_token:true created:2022-09-08 10:57:09.789934 -0300 -03 m=+303.019857301 expires:2022-09-08 11:02:09.7839238 -0300 -03 m=+603.013847101 refresh_token:true groups:[role:offline_access role:uma_authorization role:default-roles-testrealm role:account:manage-account role:account:manage-account-links role:account:view-profile]}
[2022/09/08 10:57:09] [session_store.go:163] WARNING: Multiple cookies are required for this session as it exceeds the 4kb cookie limit. Please use server side session storage (eg. Redis) instead.
127.0.0.1:54337 - e0d8ed12-e4dd-4da6-9fbb-cf689fc53f8f - - [2022/09/08 10:57:09] 127.0.0.1:4180 GET - "/oauth2/callback?state=ahuKzCYr7jR4P4mmjniIt67TttZKyxGv4mLfEwKlQio%3A%2F&session_state=86ac9bd1-9756-4916-83e9-ec0496b5b767&code=df3940e5-58f5-49ac-a821-5607f0f2faae.86ac9bd1-9756-4916-83e9-ec0496b5b767.cd30a162-8e4d-4a2d-bff6-168e444aed92" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 302 24 0.029
127.0.0.1:54337 - d58ace6e-afe9-4737-9b12-dbc17fdd0ca2 - mail#gmail.com [2022/09/08 10:57:09] 127.0.0.1:4180 GET / "/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 200 138 0.005
On the Django side I get:
**"GET / HTTP/1.1" 200 138**
When oauth2-proxy fails (--upstream="http://127.0.0.1:8000/hello"), I get the following oauth2-proxy output:
[2022/09/08 10:33:58] [proxy.go:89] mapping path "/hello" => upstream "http://127.0.0.1:8000/hello"
[2022/09/08 10:33:58] [oauthproxy.go:148] OAuthProxy configured for Keycloak OIDC Client ID: oauth2_proxy
[2022/09/08 10:33:58] [oauthproxy.go:154] Cookie settings: name:_oauth2_proxy secure(https):false httponly:true expiry:168h0m0s domains: path:/ samesite: refresh:disabled
[2022/09/08 10:37:20] [oauthproxy.go:866] No valid authentication in request. Initiating login.
127.0.0.1:53615 - 54c0f3d8-b3c0-4d48-8353-fe69be0e4500 - - [2022/09/08 10:37:20] 127.0.0.1:4180 GET - "/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 302 380 0.001
127.0.0.1:53615 - 0bec934e-05a3-4cc8-9306-fffc28597c8f - mail#gmail.com [2022/09/08 10:37:28] [AuthSuccess] Authenticated via OAuth2: Session{email:mail#gmail.com user:93547bcc-2441-414a-9149-c7533c4f5d23 PreferredUsername:testuser token:true id_token:true created:2022-09-08 10:37:28.6527488 -0300 -03 m=+210.486252601 expires:2022-09-08 10:42:28.6468518 -0300 -03 m=+510.480355601 refresh_token:true groups:[role:offline_access role:uma_authorization role:default-roles-testrealm role:account:manage-account role:account:manage-account-links role:account:view-profile]}
[2022/09/08 10:37:28] [session_store.go:163] WARNING: Multiple cookies are required for this session as it exceeds the 4kb cookie limit. Please use server side session storage (eg. Redis) instead.
127.0.0.1:53615 - 0bec934e-05a3-4cc8-9306-fffc28597c8f - - [2022/09/08 10:37:28] 127.0.0.1:4180 GET - "/oauth2/callback?state=nox0LM3fIlVU1kamoLBaktByeLCcIWiBvRLdHFIuhd4%3A%2F&session_state=808c0654-c9e7-4593-b5dc-95d3231438ea&code=e220414d-e949-4e2d-8d33-55de96f8f5d4.808c0654-c9e7-4593-b5dc-95d3231438ea.cd30a162-8e4d-4a2d-bff6-168e444aed92" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 302 24 0.024
127.0.0.1:53615 - 9454773f-cade-46fe-870f-70d09fc49ffb - mail#gmail.com [2022/09/08 10:37:28] 127.0.0.1:4180 GET - "/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.27" 404 19 0.000
On the Django side I get:
Nothing, as the Django app is never reached, so there are no logs.
Could you please help me find out what could be happening? I would really appreciate it!
It doesn't seem to be a problem with the application, as the pages work fine when invoked directly.
If it is a mistake in my oauth2-proxy command line/configuration, I would appreciate it if someone pointed me to the error so I can correct it. Otherwise, any hint would also be much appreciated.
The only thing I've noticed in the oauth2-proxy logs is that no matter what I put in --upstream, the final GET (which I think is the redirection to the upstream) is always GET - "/". It is the same in both attempts, and it only succeeds in the first one because it matches the mapping [proxy.go:89] mapping path "/".
The reason it was giving the 404 error is that --upstream points to a url to which the proxy will pass the request once authenticated; it will not redirect to that address unless you specifically ask for it in the original request.
So the correct way of making the request is http://127.0.0.1:4180/hello, i.e. including the whole path to the endpoint you want to reach (instead of, for example, http://127.0.0.1:4180).
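To illustrate the difference (a sketch, assuming the proxy and the Django app from the question are running locally and the login/session-cookie step has already been completed):

import requests

# The path of the proxied request must match the upstream mapping ("/hello").
r = requests.get("http://127.0.0.1:4180/hello")
print(r.status_code)  # 200: forwarded to http://127.0.0.1:8000/hello

# Requesting "/" matches no mapping when the upstream is ".../hello".
r = requests.get("http://127.0.0.1:4180/")
print(r.status_code)  # 404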

How to use the pool connections set up with django-postgrespool2 when querying the database?

I have just set up django-postgrespool2 to create a connection pool between my Django app and my PostgreSQL database. I followed the readme guide on how to install and configure it, and my project now runs with django-postgrespool2.
However, here comes my question: how do I verify that the pooled connections are being used when I query the database? What code should be used when connecting to the database? Is it any different, or can I use the same code as before?
My database settings in settings.py, where I have set the database engine to django_postgrespool2:
DATABASES = {
    'default': {
        'ENGINE': 'django_postgrespool2',
        'NAME': env_config.get('DB_NAME'),
        'OPTIONS': {
            'options': '-c search_path=myappdjango'
        },
        'USER': env_config.get('DB_USER'),
        'PASSWORD': env_config.get('DB_PASSWORD'),
        'HOST': env_config.get('DB_HOST'),
        'PORT': env_config.get('DB_PORT')
    }
}
My settings for django-postgrespool2:
DATABASE_POOL_CLASS = 'sqlalchemy.pool.QueuePool'
DATABASE_POOL_ARGS = {
    'max_overflow': 10,
    'pool_size': 5,
    'recycle': 300
}
Code example of how I connect and query the database:
def paginateData(self, paginationData, search):
    sqlSelect = "SELECT * FROM tablex "
    sqlWhere = self.buildCTPagianteSqlWhere(search)
    sqlOrderBy = "ORDER BY name "
    sqlPagination = "LIMIT %s OFFSET %s;"
    sql = sqlSelect + sqlWhere + sqlOrderBy + sqlPagination
    sqlParams = self.buildCTPaginateParams(paginationData, search)
    cursor = db.cursor("mydatabase", sql, sqlParams)
    dataResult = cursor.connect()
    return dataResult
Does the db call cursor = db.cursor("mydatabase", sql, sqlParams) actually utilize the pooled connections, or does the db code have to be written differently with django-postgrespool2?
Output sample from Django:
2021-03-08 08:35:50:DEBUG:z.pool: new connection
2021-03-08 08:35:50:DEBUG:z.pool: retrieved from pool
March 08, 2021 - 08:35:50
Django version 2.2.5, using settings 'myapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
2021-03-08 08:35:52:DEBUG:z.pool: new connection
2021-03-08 08:35:52:DEBUG:z.pool: retrieved from pool
[08/Mar/2021 08:35:53] "GET /administration/getlineandcellname?lineid=14&cellid=58 HTTP/1.1" 200 77
[08/Mar/2021 08:35:53] "GET /checkuser/?username=tobbe HTTP/1.1" 200 3
2021-03-08 08:35:53:DEBUG:z.pool: returned to pool
[08/Mar/2021 08:35:54] "GET /administration/getlineandcellname?lineid=14&cellid=58 HTTP/1.1" 200 77
[08/Mar/2021 08:35:56] "GET /administration/getlineandcellname?lineid=14&cellid=58 HTTP/1.1" 200 77
2021-03-08 08:35:57:DEBUG:z.pool: retrieved from pool
[08/Mar/2021 08:35:57] "GET /checkuser/?username=tobbe HTTP/1.1" 200 3
2021-03-08 08:35:57:DEBUG:z.pool: returned to pool
[08/Mar/2021 08:35:58] "GET /administration/getlineandcellname?lineid=14&cellid=58 HTTP/1.1" 200 77
I can see information in the Django output such as retrieved from pool, so it does seem to work. However, I would like help verifying that I am using the db connection pool correctly.
Thanks!
You can turn on the settings log_connections and log_disconnections in your postgresql.conf and check that the database is logging fewer connections than your code is checking out of and back into the pool.
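For reference, those two parameters exist in stock PostgreSQL and can be switched on like this:

# postgresql.conf: log every connection attempt and termination
log_connections = on
log_disconnections = on

As for the query code: django-postgrespool2 plugs in at the database-backend level (the ENGINE setting), so the ordinary Django cursor API should check connections out of the pool transparently, and call sites should not need to change. A minimal sketch of an equivalent query through django.db.connection, with the table name taken from the question:

from django.db import connection

def paginate_tablex(limit, offset):
    # This cursor is served by the SQLAlchemy QueuePool configured in
    # settings; no pool-specific code is needed at the call site.
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT * FROM tablex ORDER BY name LIMIT %s OFFSET %s;",
            [limit, offset],
        )
        return cursor.fetchall()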

Internal Server Error when I try to use HTTPS protocol for traefik backend

My setup is ELB --https--> traefik --https--> service
I get back a 500 Internal Server Error from traefik on every request, and it doesn't appear the request ever makes it to the service: the service is running Apache with access logging, and I see no incoming requests logged. I am able to curl the service directly and receive the expected response. Both traefik and the service are running in Docker containers. I am also able to use port 80 all the way through with success, and I can use https to traefik with port 80 to the service; in that case I get an error from Apache, but the request does go all the way through.
traefik.toml
logLevel = "DEBUG"
RootCAs = [ "/etc/certs/ca.pem" ]
#InsecureSkipVerify = true
defaultEntryPoints = ["https"]
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "/etc/certs/cert.pem"
keyFile = "/etc/certs/key.pem"
[entryPoints.http]
address = ":80"
[web]
address = ":8080"
[traefikLog]
[accessLog]
[consulCatalog]
endpoint = "127.0.0.1:8500"
domain = "consul.localhost"
exposedByDefault = false
prefix = "traefik"
The tags used for the consul service:
"traefik.enable=true",
"traefik.protocol=https",
"traefik.frontend.passHostHeader=true",
"traefik.frontend.redirect.entryPoint=https",
"traefik.frontend.entryPoints=https",
"traefik.frontend.rule=Host:hostname"
The debug output from traefik for each request:
time="2018-04-08T02:46:36Z"
level=debug
msg="vulcand/oxy/roundrobin/rr: begin ServeHttp on request"
Request="{"Method":"GET","URL":{"Scheme":"","Opaque":"","User":null,"Host":"","Path":"/","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"Proto":"HTTP/1.1","ProtoMajor":1,"ProtoMinor":1,"Header":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br"],"Accept-Language":["en-US,en;q=0.9"],"Cache-Control":["max-age=0"],"Cookie":["__utmc=80117009; PHPSESSID=64c928bgf265fgqdqqbgdbuqso; _ga=GA1.2.573328135.1514428072; messagesUtk=d353002175524322ac26ff221d1e80a6; __hstc=27968611.cbdd9ce39324304b461d515d0a8f4cb0.1523037648547.1523037648547.1523037648547.1; __hssrc=1; hubspotutk=cbdd9ce39324304b461d515d0a8f4cb0; __utmz=80117009.1523037658.5.2.utmcsr=|utmccn=(referral)|utmcmd=referral|utmcct=/; __utma=80117009.573328135.1514428072.1523037658.1523128344.6"],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.81 Safari/537.36"],"X-Amzn-Trace-Id":["Root=1-5ac982a8-b9615451a35258e3fd2a825d"],"X-Forwarded-For":["76.105.255.147"],"X-Forwarded-Port":["443"],"X-Forwarded-Proto":["https"]},"ContentLength":0,"TransferEncoding":null,"Host”:”hostname”,”Form":null,"PostForm":null,"MultipartForm":null,"Trailer":null,"RemoteAddr":"10.200.20.130:4880","RequestURI":"/","TLS":null}"
time="2018-04-08T02:46:36Z" level=debug
msg="vulcand/oxy/roundrobin/rr: Forwarding this request to URL"
Request="{"Method":"GET","URL":{"Scheme":"","Opaque":"","User":null,"Host":"","Path":"/","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":""},"Proto":"HTTP/1.1","ProtoMajor":1,"ProtoMinor":1,"Header":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br"],"Accept-Language":["en-US,en;q=0.9"],"Cache-Control":["max-age=0"],"Cookie":["__utmc=80117009; PHPSESSID=64c928bgf265fgqdqqbgdbuqso; _ga=GA1.2.573328135.1514428072; messagesUtk=d353002175524322ac26ff221d1e80a6; __hstc=27968611.cbdd9ce39324304b461d515d0a8f4cb0.1523037648547.1523037648547.1523037648547.1; __hssrc=1; hubspotutk=cbdd9ce39324304b461d515d0a8f4cb0; __utmz=80117009.1523037658.5.2.utmcsr=|utmccn=(referral)|utmcmd=referral|utmcct=/; __utma=80117009.573328135.1514428072.1523037658.1523128344.6"],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.81 Safari/537.36"],"X-Amzn-Trace-Id":["Root=1-5ac982a8-b9615451a35258e3fd2a825d"],"X-Forwarded-For":["76.105.255.147"],"X-Forwarded-Port":["443"],"X-Forwarded-Proto":["https"]},"ContentLength":0,"TransferEncoding":null,"Host”:”hostname”,”Form":null,"PostForm":null,"MultipartForm":null,"Trailer":null,"RemoteAddr":"10.200.20.130:4880","RequestURI":"/","TLS":null}" ForwardURL="https://10.200.115.53:443"
assume "hostname" is the correct host name. Any assistance is appreciated.
I think your problem comes from "traefik.protocol=https"; remove this tag.
You can also remove traefik.frontend.redirect.entryPoint=https, because it's useless here: that tag creates a redirection to the https entrypoint, but your frontend is already on the https entry point (traefik.frontend.entryPoints=https).
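With both suggestions applied, the consul tags from the question would become (a sketch; "hostname" still stands in for the real host):
"traefik.enable=true",
"traefik.frontend.passHostHeader=true",
"traefik.frontend.entryPoints=https",
"traefik.frontend.rule=Host:hostname"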

Django + uWSGI holds the response for a long time

I'm running a Django web application using Nginx and uWSGI, and I've hit a problem with the finish_process view.
I added logging at the beginning and the end of the view. I make a request at 17:20:18, and the view finishes at 17:20:48. But uWSGI does not return the response at that point; 577 seconds after the request, it throws an IOError when it tries to write the response to the client, because nginx has already closed the connection (uwsgi_read_timeout is 300 seconds).
My question is: why does uWSGI hold the response so long after Django has handled the view? I'm a bit at a loss.
Django log:
[INFO]246 views.py/finish_process 2016-03-06 17:20:18: [VIEW][START] finish_process: id=4
[INFO]282 views.py/finish_process 2016-03-06 17:20:48: [VIEW][END] finish_process: id=4
uWSGI log:
Sun Mar 6 17:29:55 2016 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/finish_process/ (10.11.16.251)
IOError: write error
[pid: 3275|app: 0|req: 48689/48688] 10.11.16.251 () {34 vars in 553 bytes} [Sun Mar 6 17:20:18 2016] POST /api/finish_process/ => generated 0 bytes in 577024 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 4)
Nginx error.log:
2016/03/06 17:25:18 [error] 3052#0: *44561 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.11.16.251, server: skyline, request: "POST /api/finish_process/ HTTP/1.1", upstream: "uwsgi://unix:/var/run/skyline.sock:", host: "10.11.16.253"
uwsgi.ini:
[uwsgi]
socket = /var/run/skyline.sock
chdir = /opt/skyline
processes = 1
threads = 10
master = true
env = DJANGO_SETTINGS_MODULE=skyline.prod_settings
module = skyline.wsgi:application
chmod-socket = 666
vacuum = true
die-on-term = true
Nginx conf:
server {
    listen 80;
    server_name skyline;
    charset utf-8;
    client_max_body_size 50M;
    uwsgi_read_timeout 300;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/run/skyline.sock;
    }
}
Update: Solved. I made a mistake.

Nginx connection reset, response from uWSGI lost

I have a Django app hosted via Nginx and uWSGI. For a certain very simple request, I get different behaviour for GET and POST, which should not be the case.
The uWSGI daemon log:
[pid: 32454|app: 0|req: 5/17] 127.0.0.1 () {36 vars in 636 bytes} [Tue Oct 19 11:18:36 2010] POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
[pid: 32455|app: 0|req: 5/18] 127.0.0.1 () {32 vars in 521 bytes} [Tue Oct 19 11:18:50 2010] GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
The Nginx access log:
127.0.0.1 - - [19/Oct/2010:18:18:36 +0200] "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 0 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
127.0.0.1 - - [19/Oct/2010:18:18:50 +0200] "GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 80 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
The Nginx error log:
2010/10/19 18:18:36 [error] 4615#0: *5 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0", upstream: "uwsgi://unix:sock/uwsgi.sock:", host: "localhost:9201"
In essence, Nginx somewhere loses the response if I use POST, but not if I use GET.
Does anybody know anything about this?
Pass --post-buffering 1 to uWSGI.
This will automatically buffer any HTTP request body larger than 1 byte.
The problem is caused by the way nginx manages upstream disconnections.
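In an ini config, the equivalent would be (assuming the rest of the config stays as-is):
[uwsgi]
post-buffering = 1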
I hit the same issue, but in my case I can't disable "uwsgi_pass_request_body", as most of the time (but not always) my app does need the POST data.
This is the workaround I found while this issue remains unfixed in uwsgi:
http://permalink.gmane.org/gmane.comp.python.wsgi.uwsgi.general/813
import django.core.handlers.wsgi


class ForcePostHandler(django.core.handlers.wsgi.WSGIHandler):
    """Workaround for: http://lists.unbit.it/pipermail/uwsgi/2011-February/001395.html
    """
    def get_response(self, request):
        request.POST  # force reading of POST data before the response is written
        return super(ForcePostHandler, self).get_response(request)


application = ForcePostHandler()
I was facing the same issues. I tried all the solutions above, but they were not working, and ignoring the request body in my case is simply not an option.
Apparently it is a bug in nginx and uwsgi when dealing with POST requests whose response is smaller than 4052 bytes.
What solved it for me was adding --pep3333-input to the parameter list of uwsgi. After that, all POSTs are returned correctly.
Versions of nginx/uwsgi I'm using:
$ nginx -V
nginx: nginx version: nginx/0.9.6
$ uwsgi --version
uWSGI 0.9.7
After a lucky find in further research (http://answerpot.com/showthread.php?577619-Several%20Bugs/Page2), I found something that helped: supplying the uwsgi_pass_request_body off; parameter in the Nginx conf resolves this problem.
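In context, that directive sits in the uwsgi location block, something like the sketch below (the socket path is taken from the error log above; the include uwsgi_params line is assumed from a typical setup):
location / {
    include uwsgi_params;
    uwsgi_pass unix:sock/uwsgi.sock;
    uwsgi_pass_request_body off;  # do not forward the request body upstream
}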