I have successfully deployed a Next.js with Next-Auth project on AWS EB. Everything looks fine. However, I can't get past the sign-in form.
Here's what the browser console shows:
502 Bad Gateway: POST http://----.elasticbeanstalk.com/api/auth/callback/credentials? Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
Here's what the EB logs show:
upstream sent too big header while reading response header from upstream, client: x.x.x.x, server: , request: "POST /api/auth/callback/credentials? HTTP/1.1", upstream: "http://-----/api/auth/callback/credentials?", host: "----", referrer: "http://----.elasticbeanstalk.com/"
I've also shoved some console logs into the [...nextauth].tsx file to find the issue.
The console logs appear fine all through Providers.Credentials.
The console logs appear fine all through the jwt callback.
But the console logs never appear in the session callback.
So it dies somewhere between the jwt and session callbacks.
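For reference, the logging sits roughly like this (an illustrative sketch, not the actual app code; signatures follow the next-auth v3 style that Providers.Credentials implies):

// pages/api/auth/[...nextauth].tsx (illustrative sketch)
callbacks: {
  async jwt(token, user) {
    console.log('jwt callback reached');      // prints fine
    return token;
  },
  async session(session, token) {
    console.log('session callback reached');  // never prints
    return session;
  },
},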
Here's the config that I have under .ebextensions/
.ebextensions/proxy_custom.config:
files:
  "/etc/nginx/conf.d/proxy_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;

container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
Here is an updated config (based on what I've used); you may want to try the settings individually to see which works best for you.
files:
  "/etc/nginx/conf.d/proxy_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      large_client_header_buffers 4 32k;
      fastcgi_buffers 16 32k;
      fastcgi_buffer_size 32k;
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;

container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"
Make sure to check what the actual header sizes of your requests are and adjust the buffer sizes accordingly. Run curl -s -w '%{size_header}' -o /dev/null https://example.com, replacing example.com with your service URL, and add request headers via -H if needed. This will give you the response header size in bytes.
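For example, a hypothetical check against a session endpoint (the URL and cookie header are placeholders for your own):

curl -s -w '%{size_header}' -o /dev/null \
  -H 'Cookie: next-auth.session-token=<token>' \
  https://your-app.elasticbeanstalk.com/api/auth/session
# prints e.g. 9300 -> roughly 9 KB of response headers,
# so proxy_buffer_size must be comfortably above that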
Don't set those buffers too high; use calculations specific to your app. Arbitrarily high values won't do your RAM any good, because those buffers are allocated per connection.
Reference: https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx
The issue was caused by a cookie that was too large to set.
The .ebextensions config was never being applied, because the app runs on Amazon Linux 2; on Amazon Linux 2 the configs need to go under .platform instead.
These are the lines that did it:
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
proxy_buffers 4 256k;
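On Amazon Linux 2 that means shipping them in a .platform file rather than .ebextensions, e.g. (the file name is just an example):

.platform/nginx/conf.d/proxy_custom.conf:

proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
proxy_buffers 4 256k;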
This answer from another post helped narrow down the issue.
Related
I keep getting the following errors:
2022/12/18 04:04:00 [warn] 9797#9797: *3712915 an upstream response is buffered to a temporary file /var/lib/nginx/tmp/proxy/5/07/0000015075 while reading upstream, client: 10.8.5.39, server: , request: "GET /api/test HTTP/1.1", upstream: "http://127.0.0.1:8080/api/test", host: "cms-api.internal.testtest.com"
So I decided to disable proxy buffering, since it's server-to-server communication within the LAN, not a slower client. Asking EC2 support is useless; they just told me they don't support nginx.
I found a great article on how to calculate buffers, etc.: https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx
I set the following settings in my ebextension:
client_body_buffer_size 100M;
client_max_body_size 100M;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
I realise I'm still having the same issue. Initially I tried to adjust the buffer sizes, but that didn't work; then I turned buffering off outright, and I still have the same issue. Any advice?
"I set the following settings in my ebextension"
That's why it does not work. For configuring nginx you have to use .platform, not .ebextensions, as explained in the AWS docs. So you have to create a file, e.g.
.platform/nginx/conf.d/myconf.conf
with the content:
client_body_buffer_size 100M;
client_max_body_size 100M;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
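After deploying, you can sanity-check on the instance that nginx actually picked the file up (nginx -T dumps the full effective configuration):

sudo nginx -T | grep -E 'proxy_buffering|proxy_buffer_size'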
I have a Django application running on an EC2 instance. It uploads files up to 100 MB without any problems, but above that size it gives the error 413 Request Entity Too Large.
I've tried the following in /etc/nginx/sites-available/default under the server block:
client_max_body_size 10G;
I've also applied the same configuration in my domain configuration files, but all in vain. This configuration works well on my other servers, where I am running PHP applications.
Note: I've used gunicorn with supervisor for running the Django application.
The client_max_body_size has to be defined for both the HTTP and HTTPS server blocks, as stated in this answer.
So your config under /etc/nginx/sites-available/default would look something like:
http {
    server {
        ...
        listen 80;
        server_name xxxx.net;
        client_max_body_size 10G;
    }
    server {
        ...
        listen 443 default_server ssl;
        server_name xxxx.net;
        client_max_body_size 10G;
    }
}
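To apply and verify the change, something along these lines should work (the upload endpoint is a placeholder; the point is that a body over the old 100 MB limit should no longer be rejected with 413):

sudo nginx -t && sudo systemctl reload nginx
head -c 200000000 /dev/zero | curl -s -o /dev/null -w '%{http_code}\n' \
  --data-binary @- https://xxxx.net/upload
# expect anything but 413 once the new limit is active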
@Helge thanks, I've applied the configuration parameters in both of them.
The issue is resolved now. Initially the error was thrown because the configuration parameter was defined below other parameters; after fixing that, Cloudflare started throwing a 413 error, so my team lead made the corresponding configuration change there.
In your 01__django.config add:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
Use these, or SSH into your instance and edit the main config directly. Go up to the filesystem root:

cd ..
cd ..

then open the config:

sudo nano /etc/nginx/nginx.conf

and add:

client_max_body_size 20M; # allow request bodies of up to 20 MB

Now reload your NGINX server using:

sudo nginx -s reload

(nginx has no restart signal; the valid -s signals are stop, quit, reopen and reload.)
I'm trying to increase the size of files that can be uploaded to the server. However, there seems to be a cap that prevents anything over 1 MB from being uploaded.
I've read a lot of answers and none have worked for me.
I've done everything in this question
Stackoverflow question
I did everything here as well.
AWS resource
Here's the full error:
2021/01/15 05:08:35 [error] 24140#0: *150 client intended to send too large body: 2695262 bytes, client: [ip-address-removed], server: , request: "PATCH /user HTTP/1.1", host: "host.domain.com", referrer: "host.domain.com"
I've made a file in this directory (which is at the root of my source code)
.ebextensions/nginx/conf.d/myconf.conf
In it I have
client_max_body_size 40M;
I know that EB is acknowledging it because if there's an error it won't deploy it.
Other than that I have no idea what to do
Does anybody know what might be the issue here?
Edit: backend is nodejs v12
Edit: Also tried this inside of a .conf file in .ebextensions
files:
  "/etc/nginx/conf.d/myconf.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
Update:
Beanstalk on the Amazon Linux 2 AMI has a slightly different path for NGINX config extensions:
.platform/nginx/conf.d
There you can place NGINX config extension files with *.conf extension, for example:
.platform/nginx/conf.d/upload_size.conf:
client_max_body_size 20M;
Documentation for this is here.
Original answer:
Nginx normally does not read config from where it serves the content. By default the only config nginx reads is /etc/nginx/nginx.conf, and this file includes other files from /etc/nginx/conf.d/ with a *.conf extension.
You need to place client_max_body_size 40M; there.
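Roughly, the relevant part of a stock /etc/nginx/nginx.conf looks like this (a sketch; details vary by distribution):

http {
    ...
    include /etc/nginx/conf.d/*.conf;
    ...
}

so any *.conf file you drop into /etc/nginx/conf.d/ ends up inside the http context, which is a valid place for client_max_body_size.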
First of all: I researched the topic after writing the question. There are many similar questions, but I think the problem here is different.
No 'Access-Control-Allow-Origin' - Node / Apache Port Issue
We are getting this Access-Control-Allow-Origin CORS error only when we send a heavy request:
Access to XMLHttpRequest at 'https://backend.*****.es/xxxxxx' from origin 'https://www.testing.*******.es' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource
The rest of the requests work fine. I tried setting:
app.use((req, res, next) => {
    res.header('Access-Control-Allow-Origin', '*');
    next();
});
but I got the same error.
I am starting to think it may be related to nginx.
This is the architecture we are using:
NodeJs, Expressjs
Middlewares:
const LIMIT = '100mb';

const global = () => [
    morganMiddleware(),
    compression(),
    cors(),
    bodyParser.urlencoded({ extended: true, limit: LIMIT }),
    bodyParser.json({ limit: LIMIT }),
    helmet(),
];
Server: Nginx 12.14.1
Host in AWS Elastic BeanStalk
Let me know if anyone has any clue what might be happening, because I do not know whether it is coming from our Node.js server or from nginx. We have tested many solutions and are still checking out other options.
Locate the type of Amazon Linux 2 (AL2) on your Elastic Beanstalk Dashboard:
Create the following directory structure from the root directory of your App:
.platform/nginx/conf.d/
Create the following file:
.platform/nginx/conf.d/01_custom_ngix.conf
Add client_max_body_size and adjust the value in the 01_custom_ngix.conf file you created (the example here is 1000M = 1 GB), and make sure your AWS instance type is large enough, e.g. t2.medium:
client_max_body_size 1000M;
client_body_buffer_size 100M;
...
...
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
Commit your changes with Git.
Deploy your code:
> eb deploy
options: eb deploy --verbose --debug
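Once deployed, you can confirm the CORS headers come back on a cross-origin request (the domains and path are placeholders):

curl -s -D - -o /dev/null \
  -H 'Origin: https://www.testing.example.es' \
  https://backend.example.es/some-endpoint | grep -i 'access-control-'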
We are getting this error on our server:
2020/02/20 10:56:00 [error] 2784#0: *127 client intended to send too large body: 4487648 bytes, client: 172.31.3.222, server: , request: "PUT /cars/drafts/f7124841-f72c-4133-b49e-d9f709b4cf4d HTTP/1.1", host: "backend.xxxxx.es", referrer: "https://www.testing.xxxx.es/sellcar/photos/f7124841-f72c-4133-b49e-d9f709b4cf4d"
So although the error that appears on the front end is about Access-Control-Allow-Origin, it is actually caused by a body-size limit set in NGINX.
Now we just needed to figure out how to access our Elastic Beanstalk instance to change those variables, and that is already solved:
SSH to Elastic Beanstalk instance
To solve it, create a file x.config inside .ebextensions and set these variables with the following YAML:
files:
  "/etc/nginx/conf.d/01-client_max_body_size.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # MODIFIED VARIABLES ADDED BY 01-client_max_body_size.conf
      client_max_body_size 12m;
      client_body_buffer_size 16k;

container_commands:
  nginx_reload:
    command: sudo service nginx reload
Then redeploy your environment.
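After redeploying, you can SSH to the instance (see the link above) and confirm the file was written and the resulting config is valid:

cat /etc/nginx/conf.d/01-client_max_body_size.conf
sudo nginx -t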
Thank you very much all for your help!
I have an nginx configuration that redirects to a Django REST service (through gunicorn).
Everything works correctly, but when the response is too big (it takes more than 30 s to respond) I get a 503 Service Unavailable error.
I am sure it is this issue, because other requests work correctly; it fails only on specific requests where the response is large and fetching the data from a third-party API takes too long.
Below is my nginx configuration:
server {
    listen www.server.com:80;
    server_name www.server.com;
    client_max_body_size 200M;
    keepalive_timeout 300;

    location /server/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_connect_timeout 120s;
        proxy_read_timeout 300s;
        client_max_body_size 200M;
    }

    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}
I am sure the issue is from nginx and not gunicorn, because if I do a curl from inside the machine I get a response.
Thanks,
You do specify proxy_connect_timeout and proxy_read_timeout, but never proxy_send_timeout. (TBH, I don't think you need to modify the timeout for connect(2), as that call simply establishes the TCP connection and wouldn't depend on the size or time of an individual page; but the other two seem like fair game.)
Additionally, as per https://stackoverflow.com/a/48614613/1122270, another consideration might be proxy_http_version — your curl is probably using HTTP/1.1, whereas nginx does HTTP/1.0 by default, and your backend might behave differently.
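Putting both suggestions together, the proxied location would look something like this (a sketch based on the config above; the 300s value simply mirrors the existing proxy_read_timeout):

location /server/ {
    proxy_pass http://127.0.0.1:8000/;
    proxy_connect_timeout 120s;
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;   # timeout for sending the request to the backend
    proxy_http_version 1.1;    # match what curl uses by default
    client_max_body_size 200M;
}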
When you run the following:
$ gunicorn --help | grep -A2 -i time
--graceful-timeout INT
Timeout for graceful workers restart. [30]
--do-handshake-on-connect
Whether to perform SSL handshake on socket connect
--
-t INT, --timeout INT
Workers silent for more than this many seconds are
killed and restarted. [30]
So I would assume the timeout happens in gunicorn, not nginx. You don't just need a timeout increase on the nginx side, but also in gunicorn.
You can either add
timeout = 180
to your config.py file, or add it to the command line when launching gunicorn:
gunicorn -t 180 ......
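For completeness, a sketch of the config-file variant (gunicorn config files are plain Python; the exact file name depends on how you launch gunicorn):

# config.py
timeout = 180           # workers silent for more than 180 s are killed and restarted
graceful_timeout = 180  # optional: give graceful restarts the same window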