I'm looking for "how to compress a JS file to cut load time" and I tried the solution from my question (I'm using ExtJS).
My friend suggested this too, but it uses Apache as the web server. Does anybody know how to do the trick in nginx?
My hosting uses nginx as the web server and I don't know anything about web server configuration.
Sorry if my English is bad.
If you do not know anything about web server configuration, I assume you also do not know how or where to edit the config file.
The nginx conf file is located at /etc/nginx/nginx.conf (verified in Ubuntu 12.04)
By default, the nginx gzip module is enabled, so first check whether it is actually on using an online gzip test tool.
If it is disabled, add this before the server {...} block in nginx.conf:
# output compression saves bandwidth
gzip on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
# make sure gzip does not lose large gzipped js or css files
# see http://blog.leetsoft.com/2007/07/25/nginx-gzip-ssl.html
gzip_buffers 16 8k;
# Disable gzip for certain browsers.
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
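After reloading nginx, you can also verify compression from the command line (a quick check, assuming one of your JS files is served at /app.js on your host):
$ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://example.com/app.js | grep -i content-encoding
Content-Encoding: gzip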
I use this configuration in my nginx.conf; you need:
gzip on;
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
gzip_static on;
expires 1w;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
}
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg)$ {
gzip_static on;
expires 1w;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
}
You need to use the nginx HTTP gzip or the nginx HTTP gzip static module. The static module would be helpful for content like your JavaScript libraries that rarely changes, saving you needless re-compression for every client.
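Note that gzip_static does not compress anything itself: it serves a pre-compressed file.gz sitting next to the original, so you must generate those files at build or deploy time yourself. A sketch, assuming your JS lives under /javascripts/:
location /javascripts/ {
    gzip_static on;  # serves e.g. app.js.gz when it exists and the client accepts gzip
}
# and beforehand, on deploy: gzip -k -9 /path/to/javascripts/*.js  (-k keeps the originals)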
I'm working on an nginx reverse-proxy container image to proxy frontend files from S3, and I'm trying to access these files from a specific folder in the bucket instead of just the base path. So far I can only serve the index.html, which I'm using a rewrite for, but I'm getting a 403 on the JS and CSS files.
I've tried including mime.types
include mime.types;
I've tried adding an s3 folder bucket param
proxy_pass http://YOURBUCKET.s3-website.eu-central-1.amazonaws.com/$1;
And then various regex options
Here is my nginx conf file
server {
listen 80;
listen 443 ssl;
ssl_certificate /etc/ssl/nginx-server.crt;
ssl_certificate_key /etc/ssl/nginx-server.key;
server_name timemachine.com;
sendfile on;
default_type application/octet-stream;
resolver 8.8.8.8;
server_tokens off;
location ~ ^/app1/(.*) {
set $s3_bucket_endpoint "timemachineapp.s3-us-east-1.amazonaws.com";
proxy_http_version 1.1;
proxy_buffering off;
proxy_ignore_headers "Set-Cookie";
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header x-amz-meta-s3cmd-attrs;
proxy_hide_header Set-Cookie;
proxy_set_header Authorization "";
proxy_intercept_errors on;
rewrite ^/app1/?$ /dev/app1/index.html; # <-- I can only access index.html; the other JS and CSS files throw a 403
proxy_pass https://timemachineapp.s3-us-east-1.amazonaws.com;
break;
}
}
As you can see, I'm trying to make it so that when the user goes to https://timemachine/app1 they get the homepage with all the CSS and JS files loaded. Again, what I'm getting is a 403 and sometimes a 404. Insight appreciated.
From the question it looks like
There's a constant request-url prefix /app1/
There's a constant proxied-url prefix /dev/app1/
On that basis...
First, enable the debug log
There will already be an error_log directive in the nginx config, locate it and temporarily change to debug:
error_log /dev/stderr debug;
This will allow you to see how these requests are being processed.
Try naive-simple first
Let's use this config (other header directives omitted for brevity):
location = /app1 { # redirect for consistency
return 301 /app1/;
}
location = /app1/ { # explicitly handle the 'index' request
proxy_pass https://example.com/dev/app1/index.html;
}
location /app1/ {
proxy_pass https://example.com/dev/;
}
And emit a request to it:
$ curl -I http://test-nginx/app1/some/path/some-file.txt
HTTP/1.1 403 Forbidden
...
Note that S3 returns a 403 rather than a 404 for objects that don't exist when the caller lacks list permission on the bucket; nginx is just proxying that response here.
Let's look in the logs to see what happened:
2023/01/28 14:46:10 [debug] 15#0: *1 test location: "/"
2023/01/28 14:46:10 [debug] 15#0: *1 test location: "app1/"
2023/01/28 14:46:10 [debug] 15#0: *1 using configuration "/app1/"
...
"HEAD /dev/some/path/some-file.txt HTTP/1.0
Host: example.com
Connection: close
User-Agent: curl/7.79.1
Accept: */*
"
So our request became https://example.com/dev/some/path/some-file.txt
That's because the way proxy_pass works is:
If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive
Meaning:
Nginx receives:
/app1/some/path/some-file.txt
^ the normalized path starts here
Proxied-upstream receives:
/dev/some/path/some-file.txt
^ and was appended to proxy-pass URI
I point this out as renaming/moving things on s3 may lead to a simpler nginx setup.
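For example, if the objects lived under /app1/ in the bucket (matching the request prefix, a hypothetical layout), proxy_pass without a URI part would forward the path through unchanged:
location /app1/ {
    proxy_pass https://example.com;  # no URI part: /app1/some/path is passed as-is
}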
Rewrite all paths, not specific requests
Modifying the config above like so:
location = /app1 { # redirect for consistency
return 301 /app1/;
}
location = /app1/ { # explicitly handle the 'index' request
proxy_pass https://example.com/dev/app1/index.html;
}
location /app1/ {
rewrite ^/(.*) /dev/$1 break; # prepend with /dev/
# rewrite ^/app1/(.*) /dev/app1/$1 break; # OR this
proxy_pass https://example.com/; # no path here
}
And trying that test-request again yields the following logs:
"HEAD /dev/app1/some/path/some-file.txt HTTP/1.0
Host: example.com
Connection: close
User-Agent: curl/7.79.1
Accept: */*
"
In this way the index request works, but so do arbitrary paths, and there's no need to modify this config to handle each individual URL requested.
Alright, so I found a solution. Unless I'm missing something, this is easier than I thought. For my use case, all I had to do was add multiple rewrites with those CSS files passed in (I'm sure there's a simpler way to match any .css extension regardless of the file name). Anyway, here is the solution at the moment:
server {
listen 80;
listen 443 ssl;
ssl_certificate /etc/ssl/nginx-server.crt;
ssl_certificate_key /etc/ssl/nginx-server.key;
server_name timemachine.com;
sendfile on;
default_type application/octet-stream;
resolver 8.8.8.8;
server_tokens off;
location ~ ^/app1/(.*) {
set $s3_bucket_endpoint "timemachineapp.s3-us-east-1.amazonaws.com";
proxy_http_version 1.1;
proxy_buffering off;
proxy_ignore_headers "Set-Cookie";
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header x-amz-meta-s3cmd-attrs;
proxy_hide_header Set-Cookie;
proxy_set_header Authorization "";
proxy_intercept_errors on;
rewrite ^/app1/?$ /dev/app1/index.html;
rewrite ^/app1/?$ /dev/app1/cssfile.css; # <-- and keep adding, if needed
proxy_pass https://timemachineapp.s3-us-east-1.amazonaws.com;
break;
}
}
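For what it's worth, a single extra rewrite should cover every asset under the prefix instead of listing each CSS file by hand (a sketch assuming the same /dev/app1/ bucket layout as above):
rewrite ^/app1/?$ /dev/app1/index.html break;
rewrite ^/app1/(.+)$ /dev/app1/$1 break;  # catches cssfile.css, any .js, and everything else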
I'm trying to find a way to load an HTML page that shows "Server Down" or something similar while I'm building my application.
Right now, every time I build my backend and frontend, there are a couple of seconds during which I see the template below if I go to the site:
I would like to customize that page, or show a different template saying "Server Down at the moment" or "Building".
My nginx.conf is the following. Where should I put the location for a 403.html template to load? I think this needs to be outside of the build folder, since the 403 page appears while it's building.
server { # [ASK]: is this what's causing the problem ?
root /home/smiling/smiling-frontend/website/build; ## development build
index index.html;
server_name frontend.develop.smiling.be; ## development domain
charset utf-8;
gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss;
location / {
try_files $uri $uri/ /index.html;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|svg|woff|woff2|ttf)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:css|js)$ {
expires 7d;
access_log off;
add_header Cache-Control "public";
}
location ~ /\.well-known {
allow all;
}
location ~ /\.ht {
deny all;
}
add_header Access-Control-Allow-Origin '*/';
add_header Access-Control-Allow-Headers 'origin, x-requested-with, content-type, accept, authorization';
add_header Access-Control-Allow-Methods 'GET, POST';
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/backend.develop.smiling.be/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/backend.develop.smiling.be/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Your last sentence is somewhat self-contradictory... you say you'd rather not do something, but then want to do that same thing nevertheless.
You could define your own page or string to be served on errors:
error_page 403 /403.html;
location = /403.html {
internal;
return 403 "Server Down at the moment"; # <- this could also contain an HTML string if your nginx defaults to text/html as content type.
}
You could also put a 403.html file in your root folder and skip the location part in order to serve a full HTML file here.
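A minimal sketch of that variant, assuming the error page lives in a directory that survives rebuilds (here /var/www, a hypothetical path):
error_page 403 /errors/403.html;
location = /errors/403.html {
    root /var/www;  # resolves to /var/www/errors/403.html, outside the build folder
    internal;
}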
I have everything cached, and if I log into my account, I can no longer log out. How do you get out? I need to know how to delete the cookies and session on logout!
P.S. If I disable caching at the nginx level, everything works fine, so the problem is in nginx.
nginx conf
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
proxy_connect_timeout 5;
proxy_send_timeout 10;
proxy_read_timeout 10;
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 24 16k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_temp_path /tmp/nginx/proxy_temp;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=first_zone:100m;
proxy_cache one;
proxy_cache_valid any 30d;
proxy_cache_key $scheme$proxy_host$request_uri$cookie_US;
server conf
upstream some {
server unix:/webapps/some/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name server name;
expires 7d;
client_max_body_size 4G;
access_log /webapps/some/logs/nginx-access.log;
error_log /webapps/some/logs/nginx-error.log;
error_log /webapps/some/logs/nginx-crit-error.log crit;
error_log /webapps/some/logs/nginx-debug.log debug;
location /static/ {
alias /webapps/some/static/;
}
location /media/ {
alias /webapps/some/media/;
}
location ~* ^(?!/media).*\.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
root root_path;
expires 7d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
access_log off;
}
location ~* ^(?!/static).*\.(?:css|js|html)$ {
root root_path;
expires 7d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
access_log off;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_cache one;
proxy_cache_min_uses 1;
proxy_cache_use_stale error timeout;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://some;
break;
}
}
error_page 404 /404.html;
location = /error_404.html {
root /webapps/some/src/templates;
}
error_page 500 502 503 504 /500.html;
location = /error_500.html {
root /webapps/some/src/templates;
}
}
Instead of logging out with a GET request, change your logout view to accept a form POST.
POST requests should not be cached.
This has the added security benefit of preventing users from being logged out via iframes or malicious links (e.g. https://example.com/logout/), assuming you have not disabled Django's CSRF protection.
Note: there is a ticket on Django's bug tracker related to this issue.
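On the nginx side, note that only responses to GET and HEAD are cached by default (proxy_cache_methods defaults to GET HEAD), so a POST logout will always reach the backend; you can state this explicitly if you want:
proxy_cache_methods GET HEAD;  # the default; responses to POST are never cached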
You have the following question:
I need to know how to delete the cookies and session on logout!
With the following code:
proxy_cache_key $scheme$proxy_host$request_uri$cookie_US;
We first have to ask: what's in $cookie_US?
If it's simply the login name, then you need to realise that anyone who knows that login name, sets their own cookie accordingly, and knows the complete URL of a hidden resource that this user (and only this user) has access to, and which has been accessed recently (and is thus freshly cached), can gain 'unauthorised' access to that resource, since it will be served straight from the cache, likely without any re-validation.
Basically, for caching user-specific content, you have to make sure you set http://nginx.org/r/proxy_cache_key to an actually secret, non-guessable value, which can then be cleared on the user's end to log out. Even after the user logs out, the cache is still subject to replay attacks by anyone who still possesses that secret value, but this is usually mitigated by a short cache expiration time; plus, the secret is still supposed to stay secret even after logout.
And clearing the session is as easy as re-setting the variable to something that no longer grants the user access; you can even implement the whole logout entirely within nginx:
proxy_cache_key $scheme$proxy_host$request_uri$cookie_US;
location /logout {
add_header Set-Cookie "US=empty; Expires=Tue, 19-Jan-2038 03:14:07 GMT; Path=/";
return 200 "You've been logged out!";
}
P.S. Note that the above code technically opens you up to CSRF attacks: any other page can simply embed an iframe with /logout on your site, and your users would be logged out. Ideally, you might want to require a confirmation of logout, or check $http_referer to ensure the link was followed from your own site.
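A sketch of that referer check using nginx's valid_referers (assuming your own host names are listed in server_name):
location /logout {
    valid_referers server_names;
    if ($invalid_referer) {
        return 403;  # the logout link was not followed from our own site
    }
    add_header Set-Cookie "US=empty; Expires=Tue, 19-Jan-2038 03:14:07 GMT; Path=/";
    return 200 "You've been logged out!";
}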
I want to maintain one single URL for each page, and I'm using the index index.html directive so that the page at /writing/index.html is displayed when someone visits /writing/. However, with this index directive, /writing/index.html is still a valid URL that nginx serves a page at.
I want /writing/index.html to 301 redirect to /writing/, and likewise for the root path (/index.html -> /) and all other URLs (/foo/bar/index.html -> /foo/bar/).
I want to use a regular expression that only matches the /index.html ending such as: ^(.*/)index\.html$
But if I add
rewrite "^(.*)/index\.html$" $1 last;
to my nginx conf, I see /writing/index.html 301 redirect to /writing/, which is good, but I also see /writing/ 301 redirect to /writing/ in an infinite loop.
So my question is: why does the above rewrite regex match /writing/ when it does not end in index.html? Is it because of the internal index directive in the nginx conf?
I've seen other one off solutions on StackOverflow for redirecting a single path, but not a solution that does it in a clean/generic way like this.
Below is my current nginx.conf
server {
listen 80;
server_name example.com *.example.com;
charset utf-8;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
rewrite "^(.*/)index\.html$" $1 permanent;
location / {
root /srv/www/example.com/;
index index.html;
}
error_page 404 /404/;
}
So the solution to this problem was to run the rewrite only when the actual request's $request_uri matches, which avoids the internal re-routing done by the index directive: a request for /writing/ is internally rewritten to /writing/index.html by index, which re-triggers the server-level rewrite and causes the loop, even though the client never asked for index.html.
Pretty much use this instead:
if ($request_uri ~ "^(.*/)index\.html$") {
rewrite "^(.*/)index\.html$" $1 permanent;
}
I believe a location block with a return would be more efficient and easier to read:
location ~ ^(.*/)index\.html$ {
return 301 $1;
}
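A quick way to confirm the behaviour (hypothetical host):
$ curl -I http://example.com/writing/index.html
HTTP/1.1 301 Moved Permanently
Location: /writing/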
We are having trouble with uploads to our site running Django and gunicorn behind nginx. We also have a GlusterFS mount on the app server where uploaded files are stored, distributed and replicated across several servers. (All tiers are on AWS.)
When we upload a file (~15 MB), we get a 502 Bad Gateway, and the nginx logs show "upstream prematurely closed connection while reading response header from upstream, client". Our upload speed to the site is extremely slow (<5k), yet we can upload to other sites just fine and get around 10MB upload with anything else.
Is there any configuration I am missing to allow file uploads through gunicorn or nginx?
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 256;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml
#            application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
conf.d files (two separate files):
# file one:
client_max_body_size 256m;
# file two:
proxy_read_timeout 10m;
proxy_buffering off;
send_timeout 5m;
We have a feeling it may be either nginx or the Gluster mount. We have been working on this for days, have looked through all the timeout settings in nginx and gunicorn, and haven't made any progress.
Any help would be appreciated. Thank you!
So, we solved the problem. It had nothing to do with our code, server setup, or Amazon. We narrowed it down to uploads from Linux machines on our network only: there was a bug with TCP window scaling in the firewall that reset the upload once it hit a limit.
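For anyone landing here with similar symptoms, these are the nginx directives that normally gate large or slow uploads and are worth ruling out first (illustrative values, not the fix in this case):
client_max_body_size 256m;     # bodies larger than this are rejected with a 413
client_body_timeout 120s;      # how long nginx waits between successive reads of the body
proxy_read_timeout 600s;       # how long nginx waits for gunicorn's response
proxy_request_buffering off;   # stream the body to the upstream instead of buffering (nginx >= 1.7.11)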
Thanks to anyone who attempted this.