I have a Django web app that uses an nginx reverse proxy in front of a gunicorn application server (the upstream).
My nginx logs are filling up with errors like these: 2020/03/03 22:51:57 [error] 9605#9605: *162393 upstream sent too big header while reading response header from upstream, client: 168.155.46.104, server: example.com, request: "GET /static/img/favicons/manifest.json HTTP/2.0", upstream: "http://unix:/home/ubuntu/app/myproj/gunicorn.sock:/static/img/favicons/manifest.json", host: "example.com", referrer: "https://example.com/signup/create-pass/new/6d756265726e2d3131/18/"
I'm assuming gunicorn was unable to serve manifest.json.
This shouldn't have happened. I've created manifest.json and placed it in the relevant location. Using the Favicon checker at https://realfavicongenerator.net/ shows me this error:
The Web App Manifest at https://example.com/static/img/favicons/site.webmanifest cannot be downloaded. If I hit that URL directly in the browser, I end up seeing a 502 Bad Gateway error.
How can I fix this?
I figured out the cause of the error.
Files placed in the /static/ folder of my project are supposed to be served directly by nginx. But the json extension wasn't included in my nginx config in a way that let the web server handle the file, so the request fell through to gunicorn.
Once I fixed that (see below), the manifest file was served directly by nginx, and gunicorn never came into the loop. Problem solved!
This is what I added to nginx's virtual host file to solve the issue. Notice that the pattern includes the json extension:
location ~* \.(?:ico|css|js|gif|jpg|jpeg|png|svg|woff|ttf|eot|json)$ {
    root /home/ubuntu/app/my_proj;
    expires 120d;
    access_log off;
    error_log off;
}
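As an alternative, assuming every static asset in the project lives under the /static/ URL prefix (as described above), a single prefix location would serve them all without having to list extensions; this is only a sketch of that variant, not my actual config:

location /static/ {
    # with "root", nginx appends the full URI, so /static/img/favicons/manifest.json
    # is looked up at /home/ubuntu/app/my_proj/static/img/favicons/manifest.json
    root /home/ubuntu/app/my_proj;
    expires 120d;
    access_log off;
}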
I have been following this tutorial on how to configure my contact form using AWS Lambda/SES/API Gateway:
https://www.freecodecamp.org/news/how-to-receive-emails-via-your-sites-contact-us-form-with-aws-ses-lambda-api-gateway/
I am able to test successfully with the deployed Lambda code, but the tutorial doesn't explain the Nginx/web server configuration part. I'm not sure if this is the correct way, but I have tried to have the contact form POST to the AWS Lambda function, and Nginx keeps prepending my root directory, namely /usr/share/nginx/html.
Here is the Nginx log:
[error] 29#29: *10 open() "/usr/share/nginx/html/contact/<https:/xxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/SendContactEmail>" failed (2: No such file or directory), client: <ip address>, server: www.website.net, request: "POST /contact/%3Chttps://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/SendContactEmail%3E HTTP/2.0", host: "www.website.net", referrer: "https://www.website.net/contact/"
"POST /contact/%3Chttps://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/SendContactEmail%3E HTTP/2.0" 404 548 "https://www.website.net/contact/"
I realize the issue is with the location block in my Nginx config, but I'm trying to figure out what I'm doing wrong here.
location /contact/ {
    proxy_set_header Host $proxy_host;
    proxy_ssl_server_name on;
}
When I hit the Submit button it should contact the AWS Lambda function, but instead it gives a 404 Not Found because Nginx has the wrong path.
I found the issue: the JavaScript code for the endpoint did not need the angle brackets <>. Once I removed them, it all started working as it should.
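For what it's worth, if you did want nginx itself to forward /contact/ to the API Gateway endpoint (rather than posting to it straight from the browser), the location block would also need a proxy_pass directive. A rough sketch, reusing the masked endpoint from the logs above:

location /contact/ {
    # forward the form POST to API Gateway; the xxxxxxxxxx host is the masked endpoint from the logs
    proxy_pass https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/SendContactEmail;
    proxy_set_header Host $proxy_host;
    proxy_ssl_server_name on;
}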
I'm trying to increase the size of uploadable files to the server. However, there seems to be a cap that prevents anything over 1MB from being uploaded.
I've read a lot of answers and none have worked for me.
I've done everything in this question:
Stackoverflow question
I did everything here as well:
AWS resource
Here's what I have as the full error
2021/01/15 05:08:35 [error] 24140#0: *150 client intended to send too large body: 2695262 bytes, client: [ip-address-removed], server: , request: "PATCH /user HTTP/1.1", host: "host.domain.com", referrer: "host.domain.com"
I've created a file at this path (at the root of my source code):
.ebextensions/nginx/conf.d/myconf.conf
In it I have
client_max_body_size 40M;
I know that EB is picking the file up, because if it contains an error it won't deploy.
Other than that, I have no idea what to do.
Does anybody know what might be the issue here?
Edit: the backend is Node.js v12.
Edit: I also tried this inside of a .conf file in .ebextensions:
files:
  "/etc/nginx/conf.d/myconf.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
Update:
Beanstalk on the Amazon Linux 2 AMI uses a slightly different path for NGINX config extensions:
.platform/nginx/conf.d
There you can place NGINX config extension files with a *.conf extension, for example:
.platform/nginx/conf.d/upload_size.conf:
client_max_body_size 20M;
Documentation for this is here.
Original answer:
Nginx normally does not read config from the directory it serves content from. By default, the only config nginx reads is /etc/nginx/nginx.conf, and that file includes other files from /etc/nginx/conf.d/ with a *.conf extension.
You need to place client_max_body_size 40M; in a file there.
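For context, the reason a drop-in file works is that the stock /etc/nginx/nginx.conf typically contains an include along these lines inside its http block (the exact surrounding contents vary by platform version, so treat this as a sketch):

http {
    # ... other settings ...

    # any *.conf file dropped into conf.d is pulled into the http context,
    # so a file containing only "client_max_body_size 40M;" takes effect globally
    include /etc/nginx/conf.d/*.conf;
}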
I'm using nginx and gunicorn to deploy my Django+React web application. When I start the nginx service and try to access the application in the browser, the browser's console shows an error saying:
GET http://server_ip/static/css/2.2546a949.chunk.css net::ERR_ABORTED 403 (Forbidden)
Below is what my nginx conf file looks like
server {
    listen 80;
    server_name server_ip;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }

    location /static/ {
        autoindex on;
        alias react_build_folder_location;
    }
}
Also, I've given read and execute permission to the static files, but I'm still not able to fix the issue. Can anyone please help me? Kindly excuse me if it's a repeated question.
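For reference, this is how nginx maps the failing URL to a file on disk with a location/alias pair like the one above; the absolute path below is hypothetical and stands in for whatever react_build_folder_location points to:

# with "alias", the matched prefix (/static/) is replaced by the alias path, so
#   GET /static/css/2.2546a949.chunk.css
# is looked up at
#   /path/to/react/build/static/css/2.2546a949.chunk.css
location /static/ {
    autoindex on;
    alias /path/to/react/build/static/;   # hypothetical path; both it and the location end in /
}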
I have tried a lot of different things, but none of the solutions I found are helping.
I'm putting my corporate site on a DigitalOcean server running Ubuntu 16.04 by following the DigitalOcean directions (which have worked well for me before), but it is only serving some of the static files.
Here are the links to the images.
<h3>Here is the image that doesn't load</h3>
<img src="http://206.189.161.104/static/images/frac_stack_1.jpg" alt="Image that doesn't load">
<h3>Here is the image that does load in the same folder</h3>
<img src="http://206.189.161.104/static/images/coil_pic.jpg" alt="Image that doesn't load" style="width:200px;height:200px;>
Here is my nginx config:
server {
    listen 80;
    server_name 206.189.161.104;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static {
        root /home/dmckim/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/dmckim/myproject/myproject.sock;
    }
}
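For reference, this is how that static location resolves a request on disk (it matches the path that shows up in the error log further down):

# with "root", nginx appends the full request URI to the root path, so
#   GET /static/images/frac_stack_1.jpg
# is looked up at
#   /home/dmckim/myproject/static/images/frac_stack_1.jpg
location /static {
    root /home/dmckim/myproject;
}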
I tried removing the trailing slash from the static location (as shown above). I also tried changing root to alias and adding the static folder to the path, but I got the same results.
Here is the code from my settings.py file:
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
    '/home/dmckim/myproject/static/',
    '/home/dmckim/myproject/static/images/',
)
I also tried clearing collectstatic before running collectstatic again, and I always run these commands afterwards and make sure my browser cache is cleared:
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl restart nginx
The permissions on the files are -rw-rw-r-- for both the image that does load and the one that doesn't. I also tried a lot of ways to change permissions (I don't really understand them, but they were suggested in other posts). I even destroyed the server and started from scratch to make sure I hadn't messed anything up with the permissions.
I don't see anything wrong with the nginx process logs or the access logs but the error logs show the following:
2018/05/31 13:04:19 [error] 11481#11481: *22 open() "/home/dmckim/myproject/static/images/frac_stack_1.jpg" failed (2: No such file or directory), client: 12.184.4.50, server: 206.189.161.104, request: "GET /static/images/frac_stack_1.jpg HTTP/1.1", host: "206.189.161.104", referrer: "http://206.189.161.104/frac-stacks/"
The gunicorn logs show a 404 for the images that won't load.
Here is the www-data user: uid=33(www-data) gid=33(www-data) groups=33(www-data)
Here is my user: uid=1000(dmckim) gid=1000(dmckim) groups=1000(dmckim),27(sudo)
The file names are case sensitive. Your image is actually "http://206.189.161.104/static/images/frac_stack_1.JPG", not "http://206.189.161.104/static/images/frac_stack_1.jpg".
<h3>Here is the image that loads</h3>
<img src="http://206.189.161.104/static/images/frac_stack_1.JPG" alt="Image that doesn't load" style="width:200px;height:200px;">
<h3>Here is the other image that does load in the same folder</h3>
<img src="http://206.189.161.104/static/images/coil_pic.jpg" alt="Image that doesn't load" style="width:200px;height:200px;>
Note that your results may differ when running this locally: Windows isn't case sensitive, but Linux is. See this question for details.
I've set up my Django application on Apache+mod_wsgi. To serve the static files I'm using Nginx, as suggested on Django's project website: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/
Apache is running on port 8081 and nginx is on port 80. Now some people have suggested that my configuration is wrong and that I should reverse the roles of Apache and Nginx. I'm not sure why that should be. And if my configuration is indeed wrong, why would the Django website suggest the wrong method?
The Django docs you linked to do not suggest you use Apache as a reverse proxy. They simply suggest you use a separate web server, so I'd say the docs are not clear on that subject; they are not suggesting anything wrong.
My initial answer assumed you had nginx as a reverse proxy, because port 80 is the HTTP port, the one used when a browser goes to a URL with no port specified.
There are numerous complete guides to setting up nginx + apache a quick Google search away, but here is the gist for setting up nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}

location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
All you need to do is remove the proxy lines from your apache configuration and add the proxy statements to your nginx.conf instead.
If you really want to serve your site from port 8081, you could potentially have nginx listen on port 8081 and have apache listen on a different port.
The point is, Apache sits on some obscure port, serving only the requests nginx forwards to it, while static files are served directly by nginx.
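For completeness, here is a minimal sketch of how those location blocks might sit inside a full server block; the server_name is a placeholder, and the extra proxy_set_header lines are common additions rather than part of the snippet above:

server {
    listen 80;
    server_name example.com;   # placeholder; use your own domain

    location /media/ {
        # serve static media directly from nginx
        root /srv/anuva_project/www/;
        expires 30d;
    }

    location / {
        # everything else goes to Apache/mod_wsgi running Django on port 8081
        proxy_pass http://127.0.0.1:8081/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}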