I'm using Nginx and uWSGI with a WSGI app. When I try to upload an image, sometimes the application does not receive it, and I used to get a 413 Request Entity Too Large error.
I fixed this issue by adding client_max_body_size 4M; and my Nginx conf now looks something like:
# sample Nginx server block here
The error stopped showing, but the file still does not reach the application. I don't understand why it works on some computers and not on others.
If you’re getting 413 Request Entity Too Large errors when uploading, you need to increase the size limit in nginx.conf (or whichever configuration file is in use). Add client_max_body_size xxM inside the server section, where xx is the size (in megabytes) that you want to allow.
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    server {
        client_max_body_size 20M;
        listen       80;
        server_name  localhost;

        # Main location
        location / {
            proxy_pass http://127.0.0.1:8000/;
        }
    }
}
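After changing the configuration, reload Nginx (for example with nginx -s reload, or sudo service nginx reload on Ubuntu) so the new limit takes effect.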
It means the uploaded file is larger than the maximum body size Nginx allows. See client_max_body_size.
So try the following instead of a fixed value:
server {
    [...]
    client_max_body_size 0;
    [...]
}
A value of 0 disables the max upload check entirely, though I'd recommend putting a fixed value such as 3M, 10M, etc. instead.
I have 2 servers: the first runs Nginx and Django, the second is for storage.
Nginx and app server IP: 192.168.1.1
Storage server IP: 192.168.1.2
Nginx is installed on both servers.
The media settings in my Django config:
MEDIA_ROOT = os.path.join(BASE_DIR,'media')
MEDIA_URL = '/media/'
The Nginx config on the first server:
server {
    listen 80;
    server_name 192.168.1.1;

    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /media/ {
        # no URI part on proxy_pass, so the /media/ prefix is passed
        # through unchanged and matches the storage server's /media/ location
        proxy_pass http://192.168.1.2;
    }

    location / {
        uwsgi_pass unix:/tmp/uwsgi/app.sock;
        include uwsgi_params;
    }

    location /static/ {
        alias /home/ubuntu/app/static/;
    }
}
and the storage server's Nginx config:
server {
    listen 80;
    server_name 192.168.1.2;

    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log;

    location /media/ {
        alias /home/ubuntu/app/media/;  # trailing slash so /media/x maps to .../media/x
    }
}
Questions:
1. How can Django save uploaded files to the storage server (192.168.1.2)? Solutions with minimal changes to the code are preferred.
2. How can Nginx serve files back from the storage server, given that the end user only types 192.168.1.1?
Solution 1: Network File System (NFS)
An example of such a network file system is GlusterFS.
What it does is make multiple disks or servers available as a single directory. So you can configure it to present your other server as the media directory, and any files you put there will automatically be stored on that other server. You can then fetch those files as if they were in the media directory, even though they live on another machine. No changes are required to your Django code or Nginx config.
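As a rough sketch of what the Gluster side could look like (the volume name and brick path are placeholders, and both machines need the GlusterFS packages installed):

# On the storage server (192.168.1.2): export a directory as a Gluster volume.
gluster volume create media-vol 192.168.1.2:/data/bricks/media
gluster volume start media-vol

# On the app server (192.168.1.1): mount the volume over Django's media directory,
# so files Django saves there land on the storage server transparently.
mount -t glusterfs 192.168.1.2:/media-vol /home/ubuntu/app/media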
Solution 2: Custom File Storage backend
Another solution is to write your own File Storage backend and save the images to your second server from there. In fact, there's a library called django-storages which supports uploading files to another server using FTP. See docs: http://django-storages.readthedocs.io/en/latest/backends/ftp.html
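A minimal sketch of what that looks like in settings.py, assuming an FTP server is running on the storage machine (the credentials and path below are placeholders):

# settings.py
# Route all FileField/ImageField uploads through django-storages' FTP backend.
DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage'
# Placeholder credentials; point this at the FTP server on 192.168.1.2.
FTP_STORAGE_LOCATION = 'ftp://user:password@192.168.1.2:21/media/'

With that in place, Django writes uploads to the second server, and the existing location /media/ block on the first server can keep serving them.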
Personally, the second solution seems better to me because you don't really need NFS right now. And even if you do later on, you can install Gluster on your second server and scale out from there.
I'm building a web app with Django that uses a pre-trained scikit-learn model to process data that a user inputs through a web form. During development I'm able to load the model into memory by running the following in urls.py:
modelRF = joblib.load('model.pkl')
However, when I try to deploy the app inside a Docker container I receive a 504 Gateway Timeout Error. I've tried increasing the timeout limits in the nginx.conf file without any success. I was wondering whether this could also be a problem with the amount of memory assigned to the container.
I'm not sure whether the problem is related to Docker or to the way I'm loading the model into memory while in deployment (rather than in development). I'm using docker-compose with nginx, supervisor and uwsgi.
My nginx.conf file looks like this:
upstream django {
    server unix:///tmp/uwsgi.sock;  # for a file socket
}

server {
    listen 80 default_server;
    server_name .example.com;
    charset utf-8;

    # max upload size
    client_max_body_size 75M;

    # Django media
    location /media {
        alias /home/docker/code/media;
    }

    location /static {
        alias /home/docker/code/static;
    }

    location / {
        uwsgi_pass django;
        include /home/docker/code/uwsgi_params;
    }
}
Add the uwsgi_read_timeout directive inside the django location block, like this:
location / {
    uwsgi_pass django;
    include /home/docker/code/uwsgi_params;
    uwsgi_read_timeout 3000;
}
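The value is in seconds (the default is 60), so 3000 gives the slow model-loading request up to 50 minutes before Nginx gives up and returns a 504.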
I have been following the DigitalOcean tutorial How To Serve Django Applications with uWSGI and Nginx on Ubuntu 14.04, so that later I can deploy my own Django application using Nginx + uWSGI.
In this tutorial they create 2 basic Django apps to be served later by Nginx. I have tested that the apps work using the Django development server and with uWSGI alone.
When I got to the Nginx part I ran into a problem: I don't have a server_name for now, only an IP to work with, so I tried to differentiate between the Django apps using the port number.
The default Nginx server (xxx.xxx.xxx.xxx:80) responds correctly, but when I try to access the Django apps using xxx.xxx.xxx.xxx:8080 or xxx.xxx.xxx.xxx:8081 I get 502 Bad Gateway.
I think the problem is in the way I am defining listen inside the server blocks. What would be the correct way of doing this, or what might I be doing incorrectly?
These are my server blocks (in sites-enabled):
firstsite app
server {
    listen xxx.xxx.xxx.xxx:8080;
    #server_name _;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /root/firstsite;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/root/firstsite/firstsite.sock;
    }
}
secondsite app
server {
    listen xxx.xxx.xxx.xxx:8081;
    #server_name _;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /root/secondsite;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/root/secondsite/secondsite.sock;
    }
}
default Nginx
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
UPDATE:
I was checking the error log under /var/log/nginx, and when I try to connect to firstsite I get the following error:
2016/02/05 15:55:23 [crit] 11451#0: *6 connect() to unix:/root/firstsite/firstsite.sock failed (13: Permission denied) while connecting to upstream, client: 188.37.180.101, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/root/firstsite/firstsite.sock:", host: "178.62.229.183:8080"
On Ubuntu, Nginx runs as the www-data user by default; your uWSGI server won't (which is actually a good thing, unless it runs as root). If you create a unix socket for uWSGI, access to it is controlled like any other file on the system, and by default it may be restricted to the user that created the socket.
On top of that, you're creating your sockets in the /root/ directory. That directory is readable only by the root user, and some Linux distributions won't allow access to anything inside it even if the permissions are set correctly.
So what you have to do is:
- put the sockets outside of the /root/ directory (/var/run is a good place for that)
- make sure Nginx has access to those sockets (put --chmod-socket 666 or --chown-socket yourusername:www-data into your uWSGI startup line), for example as sketched below
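A minimal example of such a startup line (the module name, paths, and user are placeholders):

uwsgi --socket /var/run/firstsite.sock \
      --chown-socket yourusername:www-data --chmod-socket=660 \
      --module firstsite.wsgi --master --processes 2

with the matching Nginx directive then pointing at the new path: uwsgi_pass unix:/var/run/firstsite.sock;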
And if you're running that uWSGI server as root, be aware that this is really dangerous. Any process running as root can do anything to your system, so if you make a mistake in your code or someone hacks in, they can inject malicious software into your server, steal data from it, or just destroy everything.
I am trying to configure Nginx to leverage static file caching in the browser.
My configuration file is as follows:
server {
    listen 80;
    server_name localhost;
    client_max_body_size 4G;

    access_log /home/user/webapps/app_env/logs/nginx-access.log;
    error_log  /home/user/webapps/app_env/logs/nginx-error.log;

    location /static/ {
        alias /home/user/webapps/app_env/static/;
    }

    location /media/ {
        alias /home/user/webapps/app_env/media/;
    }

    ...
}
When I add the following caching configuration, the server fails to load static files and I am not able to restart Nginx:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
The nginx-error log shows open() "/usr/share/nginx/html/media/cover_photos/292f109e-17ef-4d23-b0b5-bddc80708d19_thumbnail.jpeg" failed (2: No such file or directory)
I have done quite some research online but cannot solve this problem.
Can anyone help me or just give me some suggestions on implementing static file caching in Nginx?
Thank you!
Reference: Leverage browser caching for Nginx
Again, I have to answer my own question.
The root problem lies in the "path".
I found the answer from @Dayo; here I quote:
"You are missing the root directive for the images location block. Therefore, nginx will look for the files in the default location, which varies by installation, and since you have most likely not placed the files there, you will get a 404 Not Found error."
Answer from Dayo
Thus, I added the root path to my configuration file as follows:
root /home/user/webapps/app_env/;
The whole configuration will look like this:
server {
    listen 80;
    server_name localhost;

    root /home/user/webapps/app_env/;
    client_max_body_size 4G;

    access_log /home/user/webapps/app_env/logs/nginx-access.log;
    error_log  /home/user/webapps/app_env/logs/nginx-error.log;

    location /static/ {
        alias /home/user/webapps/app_env/static/;
    }

    location /media/ {
        alias /home/user/webapps/app_env/media/;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;
    }

    ...
}
And everything just works nicely.
I hope people with the same problem can learn from this.
I'm attempting to set up browser caching on Nginx with Django. The current (working) Nginx configuration for static files is the following:
server {
    listen 443 ssl;
    server_name SERVER;

    ssl_certificate     /etc/ssl/CERT.pem;
    ssl_certificate_key /etc/ssl/KEY.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 4G;

    access_log /webapps/site/logs/nginx-access.log;
    error_log  /webapps/site/logs/nginx-error.log;

    location /static/ {
        alias /webapps/site/static/;
    }

    # other locations, etc.
}
I would like to set up a rule that caches images and the like in the browser to limit the number of requests per page (there are often 100 or so images per page, but the images are the same throughout the entire site). I tried adding a few variations of the following rule:
location ~* \.(css|js|gif|jpe?g|png)$ {
    expires 365d;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
However, when I do this, I get nothing but 404 errors (though the configuration file checks out and reloads without errors). I believe that this has something to do with the alias but I am not sure how to fix it.
Any suggestions would be appreciated!
You are missing the root directive for the images location block. Therefore, nginx will look for the files in the default location, which varies by installation, and since you have most likely not placed the files there, you will get a 404 Not Found error.
It works for the /static/ location block because you defined an alias. I suspect, though, that the alias is simply what should be the root for both. If so, then try ...
server {
    listen 443 ssl;
    server_name SERVER;

    root /path/to/web/root/folder/;
    [...]

    # Your locations ... Most likely no need for alias in any.
}
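For a quick illustration of why this works (the paths below are placeholders): with root, the full request URI is appended to the configured path, while alias replaces the matched location prefix.

location /static/ {
    root /webapps/site;             # /static/a.css -> /webapps/site/static/a.css
}

location /static/ {
    alias /webapps/site/static/;    # /static/a.css -> /webapps/site/static/a.css
}

A regex location such as the caching block above, with neither directive, falls back to the server-level root, which is why defining root once at the server level fixes the 404 errors.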