How to disable nginx proxy buffer in Elastic Beanstalk nginx - amazon-web-services

I keep getting the following errors:
2022/12/18 04:04:00 [warn] 9797#9797: *3712915 an upstream response is
buffered to a temporary file /var/lib/nginx/tmp/proxy/5/07/0000015075
while reading upstream, client: 10.8.5.39, server: , request: "GET
/api/test HTTP/1.1", upstream: "http://127.0.0.1:8080/api/test", host:
"cms-api.internal.testtest.com"
So I decided to disable the proxy buffer, since it's server-to-server communication within the LAN, not a slower client. Asking EC2 support is useless; they just told me they don't support nginx - DUH.
I found a great article on how to calculate buffers, etc.: https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx
I set the following settings via an .ebextensions config:
client_body_buffer_size 100M;
client_max_body_size 100M;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
I realise I'm still having the same issue. Initially I tried to adjust the buffer sizes, but it didn't work, then I outright turned buffering off, and I'm still having the same issue. Any advice?

I set the following settings via an .ebextensions config
That's why it does not work. For configuring nginx you have to use .platform, not .ebextensions, as explained in the AWS docs. So you have to create a file, e.g.
.platform/nginx/conf.d/myconf.conf
with the content:
client_body_buffer_size 100M;
client_max_body_size 100M;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
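For reference, a minimal sketch of how that file could be added to the application source bundle and deployed (assuming an Amazon Linux 2 platform, where .platform/nginx/conf.d/*.conf is pulled into the http block of the platform's nginx.conf; eb deploy is just one way to redeploy):
# from the root of the application source bundle
mkdir -p .platform/nginx/conf.d
cat > .platform/nginx/conf.d/myconf.conf <<'EOF'
client_body_buffer_size 100M;
client_max_body_size 100M;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
EOF
# redeploy so the new configuration is applied
eb deploy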

Related

*25 upstream prematurely closed connection while reading response header from upstream. Flask + AWS ELB

I'm using AWS ELB to deploy a Flask backend API. I have an endpoint that uploads a file and another endpoint that reads data from the file.
Every other endpoint works fine, but when I run the endpoint that reads data from the file that was uploaded, it runs for about 30 seconds and then throws a 502 Bad Gateway error.
ELB error: 2022/03/08 17:30:19 [error] 25807#25807: *25 upstream prematurely closed connection while reading response header from upstream, client: 172.31.0.71,
.platforms/nginx/conf.d/myconfig.conf:
keepalive_timeout 3600s;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
fastcgi_send_timeout 3600s;
fastcgi_read_timeout 3600s;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
uwsgi_read_timeout 3600s;
uwsgi_send_timeout 3600s;
uwsgi_socket_keepalive on;
I have tried a whole lot of answers but none has worked for me. Please, I need help.

nginx returning malformed packet/no response with 200 when request body is large

I've hosted my Django REST framework API server with gunicorn behind nginx. When I hit the API through nginx with a small body in the request, the response comes back. But with a large payload, it returns nothing with a 200 OK response.
However, when I hit gunicorn directly, it returns a proper response.
NGINX is messing up the response if the request payload is large.
I captured packets via tcpdump, and it shows that the response contains a MALFORMED PACKET. Following is the TCP dump:
[Malformed Packet: JSON]
[Expert Info (Error/Malformed): Malformed Packet (Exception occurred)]
[Malformed Packet (Exception occurred)]
[Severity level: Error]
[Group: Malformed]
NGINX config:
server {
listen 6678 backlog=10000;
client_body_timeout 180s;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 120s;
proxy_connect_timeout 120s;
proxy_pass http://localhost:8000;
proxy_redirect default;
}
}
I've never seen NGINX give me this much trouble before. Any help appreciated.
If nginx and gunicorn are running on the same server, then rather than using the loopback for the two to talk to each other, a unix socket is a bit more performant, I believe. I can't tell from the config snippet whether you're already doing that. The only other thing I'm seeing in the gunicorn deploy docs that might be helpful here is client_max_body_size 4G;, which according to the nginx docs defaults to 1 MB.
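One way to narrow it down, given that hitting gunicorn directly works: send the same large payload to nginx and to gunicorn and compare the status code and the downloaded response size (the endpoint path and payload file below are placeholders; the ports are taken from the config above):
# through nginx
curl -s -o /dev/null -w 'via nginx:    %{http_code} %{size_download} bytes\n' \
    -H 'Content-Type: application/json' --data-binary @large-payload.json \
    http://localhost:6678/your/endpoint/
# directly against gunicorn
curl -s -o /dev/null -w 'via gunicorn: %{http_code} %{size_download} bytes\n' \
    -H 'Content-Type: application/json' --data-binary @large-payload.json \
    http://localhost:8000/your/endpoint/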

Nginx: broken_header with proxy_protocol and ELB

I am trying to set up proxy_protocol in my nginx config. My server sits behind an AWS load balancer (ELB), and I have enabled Proxy Protocol on that for both ports 80 and 443.
However, this is what I get when I hit my server:
broken header: "��/��
'���\DW�Vc�A{����
�#��kj98���=5���g#32ED�</A
" while reading PROXY protocol, client: 172.31.12.223, server: 0.0.0.0:443
That is a direct copy paste from the nginx error log - wonky characters and all.
Here is a snip from my nginx config:
server {
listen 80 proxy_protocol;
set_real_ip_from 172.31.0.0/20; # Coming from ELB
real_ip_header proxy_protocol;
return 301 https://$http_host$request_uri;
}
server {
listen 443 ssl proxy_protocol;
server_name *.....com
ssl_certificate /etc/ssl/<....>;
ssl_certificate_key /etc/ssl/<....>;
ssl_prefer_server_ciphers On;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
ssl_session_cache shared:SSL:10m;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl_stapling on;
ssl_stapling_verify on;
...
I can't find any help online about this issue. Other people have had broken header issues, but their errors with bad headers are always readable - they don't look like they are encoded the way mine are.
Any ideas?
Two suggestions:
Verify that your ELB listener is configured to use TCP as the protocol, not HTTP. I have an LB config like the following that's routing to Nginx with proxy_protocol configured:
{
"LoadBalancerName": "my-lb",
"Listeners": [
{
"Protocol": "TCP",
"LoadBalancerPort": 80,
"InstanceProtocol": "TCP",
"InstancePort": 80
}
],
"AvailabilityZones": [
"us-east-1a",
"us-east-1b",
"us-east-1d",
"us-east-1e"
],
"SecurityGroups": [
"sg-mysg"
]
}
You mentioned that you have enabled Proxy Protocol on the ELB, so I'm assuming you've followed the AWS setup steps. If so, then the ELB should be crafting the HTTP request correctly, with the first line as something like PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n. However, if the HTTP request is not coming into Nginx with the PROXY ... line, that could cause the problem you're seeing. You can reproduce this by hitting the EC2 DNS name directly in the browser, or by sshing into the EC2 instance and trying something like curl localhost; you should then see a similar broken header error in the Nginx logs.
To find out whether it works with a correctly formed HTTP request you can use telnet:
$ telnet localhost 80
PROXY TCP4 198.51.100.22 203.0.113.7 35646 80
GET /index.html HTTP/1.1
Host: your-nginx-config-server_name
Connection: Keep-Alive
Then check the Nginx logs and see if you have the same broken header error. If not then the ELB is likely not sending the properly formatted PROXY request, and I'd suggest re-doing the ELB Proxy Protocol configuration, maybe with a new LB, to verify it's set up correctly.
I had a similar situation: nginx had proxy_protocol on but the AWS ELB setting was not enabled, so I got a similar message.
The solution was to edit the ELB settings to turn Proxy Protocol on.
I had this error and came across this ticket:
https://trac.nginx.org/nginx/ticket/886
which ultimately led me to figuring out that I had an unneeded proxy_protocol declaration in my nginx.conf file. I removed that and everything was working again.
Oddly enough, everything worked fine with nginx version 1.8.0, but when I upgraded to nginx version 1.8.1, I started seeing the error.
I got this unreadable header issue too, and here is the cause and how I fixed it.
In my case, Nginx was configured with use-proxy-protocol=true properly. It complained about the broken header solely because the AWS ELB did not add the required header (e.g. PROXY TCP4 198.51.100.22 203.0.113.7 35646 80) at all. Nginx saw the encrypted HTTPS payload directly. That's why it printed out all the unreadable characters.
So, why didn't the AWS ELB add the PROXY header? It turned out I used the wrong ports in the commands to enable the Proxy Protocol policy. The instance ports should be used instead of 80 and 443.
The ELB has the following port mapping.
80 -> 30440
443 -> 31772
The commands should be
aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
--instance-port 30440 \
--policy-names ars-ProxyProtocol-policy
aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
--instance-port 31772 \
--policy-names ars-ProxyProtocol-policy
but I used 80 and 443 by mistake.
Hope this helps somebody.
Stephen Karger's solution above is correct: you must make sure to configure your ELB to support Proxy Protocol. Here are the AWS docs for doing exactly that: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html. The docs are a bit daunting at first, so if you want you can just skip to steps 3 and 4 under the Enable Proxy Protocol Using the AWS CLI section. Those are the only steps necessary for enabling the proxy channeling. Additionally, as Stephen also suggested, you must make sure that your ELB is using TCP instead of HTTP or HTTPS, as neither of these will behave properly with ELB's proxy implementation. I suggest moving your socket channel away from common ports like 80 and 443, just so you can still maintain those standardized connections with their default behavior. Of course, making that call depends entirely on how your app stack looks.
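For reference, a rough sketch of those two CLI steps from the linked doc (the load balancer name, policy name, and instance port below are placeholders; substitute your own):
aws elb create-load-balancer-policy \
    --load-balancer-name my-loadbalancer \
    --policy-name my-ProxyProtocol-policy \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-loadbalancer \
    --instance-port 80 \
    --policy-names my-ProxyProtocol-policy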
If it helps, you can use the npm package wscat to debug your websocket connections like so:
$ npm install -g wscat
$ wscat --connect 127.0.0.1
If the connection works in local, then it is for sure your load balancer. However, if it doesn't, there is almost definitely a problem with your socket host.
Additionally, a tool like nmap will aid you in discovering open ports. A nice checklist for debugging:
npm install -g wscat
# can you connect to it from within the server?
ssh ubuntu@69.69.69.69
wscat -c 127.0.0.1:80
# can you connect to it from outside the server?
exit
wscat -c 69.69.69.69:80
# if not, is your socket port open for business?
nmap -p 80 69.69.69.69
You can also use nmap from within your server to discover open ports. To install nmap on Ubuntu, simply sudo apt-get install nmap; on OS X, brew install nmap.
Here is a working config that I have, although it does not provide SSL support at the moment. In this configuration, I have port 80 feeding my Rails app, port 81 feeding a socket connection through my ELB, and port 82 open for internal socket connections. Hope this helps somebody! Anybody deploying with Rails, Unicorn, and Faye should find this helpful. :) Happy hacking!
# sets up deployed ruby on rails server
upstream unicorn {
server unix:/path/to/unicorn/unicorn.sock fail_timeout=0;
}
# sets up Faye socket
upstream rack_upstream {
server 127.0.0.1:9292;
}
# sets port 80 to proxy to rails app
server {
listen 80 default_server;
keepalive_timeout 300;
client_max_body_size 4G;
root /path/to/rails/public;
try_files $uri/index.html $uri.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://unicorn;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /path/to/rails/public;
}
}
# open 81 to load balancers (external socket connection)
server {
listen 81 proxy_protocol;
server_name _;
charset UTF-8;
location / {
proxy_pass http://rack_upstream;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# open 82 to internal network (internal socket connections)
server {
listen 82;
server_name _;
charset UTF-8;
location / {
proxy_pass http://rack_upstream;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}

Nginx consumes Upgrade header after proxy_pass

So I have been banging my head against the wall for the better part of 2 days, please help.
I am attempting to establish a Websocket connection using this
django-websocket-redis configuration.
There are 2 instances of uwsgi running, one for the website and one for the websocket communication.
I used wireshark heavily to find out what exactly is happening, and apparently nginx is eating the headers "Connection: Upgrade" and "Upgrade: websocket".
here is the critical nginx config part:
upstream websocket {
server 127.0.0.1:9868;
}
location /ws/ {
proxy_pass_request_headers on;
access_log off;
proxy_http_version 1.1;
proxy_pass http://websocket;
proxy_set_header Connection "Upgrade";
proxy_set_header Upgrade websocket;
}
As you can see in the two screenshots, the tcpdump of the internal communication shows that the handshake works nicely, but in my browser (second image) the headers are missing.
Any ideas are greatly appreciated. I am truly stuck here :(
Versions:
nginx - 1.7.4
uwsgi - 2.0.7
pip freeze:
Django==1.7
MySQL-python==1.2.5
django-redis-sessions==0.4.0
django-websocket-redis==0.4.2
gevent==1.0.1
greenlet==0.4.4
redis==2.10.3
six==1.8.0
uWSGI==2.0.7
wsgiref==0.1.2
I would use gunicorn for deploying a Django application, but anyway.
I remembered that I saw this in the gunicorn docs:
If you want to be able to handle streaming request/responses or other
fancy features like Comet, Long polling, or Web sockets, you need to
turn off the proxy buffering. When you do this you must run with one
of the async worker classes.
To turn off buffering, you only need to add proxy_buffering off; to
your location block:
So your location block would be:
location /ws/ {
proxy_pass_request_headers on;
access_log off;
proxy_http_version 1.1;
proxy_redirect off;
proxy_buffering off;
proxy_pass http://websocket;
proxy_set_header Connection "upgrade";
proxy_set_header Upgrade websocket;
}
Link to the gunicorn guide for deploying behind nginx:
http://docs.gunicorn.org/en/latest/deploy.html?highlight=header
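If it helps, once this is deployed you can check from the client side whether the Upgrade/Connection headers survive the proxy using wscat (mentioned in an earlier answer; the host below is a placeholder):
wscat -c ws://your-nginx-host/ws/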
Hope this helps

Nginx responds with 502 error

While trying to deploy my app to DigitalOcean I did everything according to this tutorial: How To Deploy a Local Django App to a VPS.
While Gunicorn is working perfectly and http://95.85.34.87:8001/ opens my app, Nginx does not work: http://95.85.34.87 or http://95.85.34.87/static causes a 502 error.
The Nginx log says:
2014/04/19 02:43:52 [error] 896#0: *62 connect() failed (111: Connection refused) while connecting to upstream, client: 78.62.163.9, server: 95.85.34.87, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "95.85.34.87"
My nginx configuration file looks like this:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name 95.85.34.87;
access_log off;
location /static/ {
alias /opt/myenv/static/;
}
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
In Django settings I have ALLOWED_HOSTS set to ['*'].
Nginx is listening to port 80:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 894/nginx
tcp6 0 0 :::80 :::* LISTEN 894/nginx
I think the point is that Nginx does not forward requests to Gunicorn for some reason...
EDIT: I changed the proxy_pass http://127.0.0.1:8001; line under location / to my server's IP address (instead of localhost) and everything worked. I am not sure if it's a good decision or not.
I see the instructions tell you to use this to start Gunicorn:
$ gunicorn_django --bind yourdomainorip.com:8001
If you start it like this then Gunicorn will listen only on the interface that is bound to yourdomainorip.com. So it won't listen on the loopback interface and won't receive anything sent to 127.0.0.1. Rather than changing nginx's configuration like you mention in your edit, you should do:
$ gunicorn_django --bind localhost:8001
This would cause Gunicorn to listen on the loopback. This is preferable because if you bind Gunicorn to an external interface people can access it without going through nginx.
With this setup the interaction between nginx and your Django app is like this:
nginx is the entry point for all HTTP requests. It listens on 95.85.34.87:80.
When a request is made to a URL that should be forwarded to your application, nginx forwards it by connecting on localhost:8001 (same as 127.0.0.1:8001).
Your Django application is listening on localhost:8001 to receive forwards from nginx.
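A quick, hedged way to verify that each process is listening where this setup expects (the grep pattern just matches the two ports discussed here):
# on the server: list listening TCP sockets and the owning processes
sudo netstat -tlnp | grep -E ':(80|8001)\b'
# expect nginx on 0.0.0.0:80 and Gunicorn on 127.0.0.1:8001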
By the way, gunicorn_django is deprecated.
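For reference, with plain gunicorn the equivalent would be something along these lines (the WSGI module path is a placeholder for your own project's):
gunicorn --bind 127.0.0.1:8001 myproject.wsgi:application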
And another thing: don't set ALLOWED_HOSTS to serve all domains. If you do so you are opening yourself to cache poisoning. Set it only to the list of domains that your Django project is meant to serve. See the documentation for details.