Deny access to specific file extensions in a specific dir on NGINX - regex

I want to deny access to manifest.json, list.json (or simply *.json) and all sourcemaps *.map in the packs/ folder.
I tried something like:
location ^/packs/.*\.(json|map)$ {
    deny all;
    return 404;
}
Didn't work out. I can still access those files :(
How can I restrict access to those files in the packs/ folder?

You are trying to use a regex-matched location; those locations are declared with a ~ modifier (or ~* if you want case-insensitive matching):
location ~ ^/packs/.*\.(json|map)$ {
    deny all;
    return 404;
}
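A hedged side note: return belongs to the rewrite module and runs in nginx's rewrite phase, which precedes the access phase where deny is evaluated, so with both directives present clients should see the 404 and the deny all never actually fires. If you would rather send an explicit 403, a minimal sketch drops the return:

location ~ ^/packs/.*\.(json|map)$ {
    deny all;    # access phase: responds 403 to everyone
}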

Related

NGINX 301 remove .html from url, but not for files from a specific path (folder)

In my location / block I have added the following if statement to redirect all requests with the .html extension to the extensionless URL:
if ($request_uri ~ ^/(.*)\.html) {
    return 301 /$1$is_args$args;
}
This works fine, but actual .html files are also being redirected, which results in a 404. I want to avoid this by adding an exception for all requests whose URL contains the directory "/static/", but I am not sure how to do that. I tried to add the following if statement before the one above:
if ($request_uri ~ /static/) {
}
But I'm not sure what to put in it to stop the .html if statement from executing. Returning $host$request_uri doesn't work, unfortunately.
How would I be able to do this? I cannot use location directives due to limitations of my host; everything is being done in the location / block.
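One possible approach (a hedged, untested sketch): nginx regexes are PCRE, so a negative lookahead can fold the /static/ exception into the single if statement, keeping everything inside location /:

if ($request_uri ~ "^/(?!static/)(.*)\.html") {
    # redirect .html URLs to the extensionless form,
    # but skip anything under /static/
    # (use (?!.*/static/) instead if /static/ can appear deeper in the path)
    return 301 /$1$is_args$args;
}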

nginx redirection loop with negative lookahead

I'm trying to deny access to some resources unless the request comes from known IPs.
I've come up with this; it seems good on paper, but when running it I get an endless redirect loop:
location ~ ^/(?!(not-ready-yet|robots\.txt)).*$ {
    error_page 403 = @badip;
    allow 172.22.0.8;
    deny all;
    # try to serve file directly, fallback to index.php
    try_files $uri /index.php$is_args$args;
}
location / {
    # try to serve file directly, fallback to index.php
    try_files $uri /index.php$is_args$args;
}
location @badip {
    return 301 $scheme://$http_host/not-ready-yet;
}
I don't understand why; shouldn't I be redirected once and then match the second location?
This is a temporary restriction, so an ugly hack is totally acceptable as it won't stay in the codebase.
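A plausible cause (hedged): after the 301, the browser requests /not-ready-yet, which the second location handles; its try_files fallback internally redirects to /index.php, and that URI matches the restricted regex location again (the lookahead doesn't exclude it), producing another 403 and another redirect to /not-ready-yet, and so on. A sketch of a workaround is to exclude the front controller from the restriction as well (untested, and note that it leaves /index.php itself reachable directly, which may be acceptable for a temporary hack):

location ~ ^/(?!(not-ready-yet|robots\.txt|index\.php)).*$ {
    error_page 403 = @badip;
    allow 172.22.0.8;
    deny all;
    try_files $uri /index.php$is_args$args;
}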

How can I serve nested projects in Nginx

I have a Lumen API project with multiple git tags for API versioning, so I have to deploy multiple checkouts of the project.
The folder structure on the server looks like this:
var
www
api-staging
master
v1
public
index.php
...
v2
public
index.php
...
lastest
public
index.php
...
...
Now I'd like to serve the projects via nginx so that the URL looks something like this:
http://BRANCH.domain.tld/VERSION/ e.g. http://master.domain.tld/lastest/
I have tried a lot with regexp, but nothing really worked. I hope you can help me out.
You will need to capture the BRANCH using a regular expression server_name statement. See this document for more.
The root is constructed by appending /public to the captured VERSION, which requires a regular expression location and an alias statement. See this document for more.
For example:
server {
    ...
    server_name ~^(?<branch>.+)\.domain\.tld$;

    location ~ ^/(?<version>[^/]+)/(?<name>.*)$ {
        alias /var/www/api-staging/$branch/$version/public/$name;

        if (!-e $request_filename) { rewrite ^ /$version/index.php last; }

        location ~ \.php$ {
            if (!-f $request_filename) { return 404; }
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            ...
        }
    }
}
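A hedged walk-through of how a request maps onto the tree above:

# http://master.domain.tld/lastest/css/app.css  (hypothetical request)
#   $branch  = master
#   $version = lastest
#   $name    = css/app.css
#   alias    -> /var/www/api-staging/master/lastest/public/css/app.css
# if that file doesn't exist, the request is rewritten to /lastest/index.php,
# which the nested \.php$ location hands to FastCGI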

nginx rule for limiting access based on URL

I am trying to set up a rule in nginx that denies traffic if a certain URL is accessed from outside a specified IP address range. Instead of specifying the URL pattern that needs to be protected, I would like to set up a rule that essentially says the following:
if the URL is not a certain format, apply the deny/allow rules in a permissions.conf file.
Previously I had this (which seems to work):
location ^~ /admin {
    include permissions.conf;
}
The permissions.conf file:
allow 127.0.0.1;
deny all;
I would now like to replace the rule above with one that only gets hit if the URL is not of a certain pattern (so in the case below, if it is not /a/test, it should apply the permissions.conf file). The format below is not working; any ideas on how to fix it?
location ~ (/a\/(?!test)) {
    include permissions.conf;
}
I tried this as well:
location ~ (/a/(?!test)) {
    include permissions.conf;
}
and
location ~* ^(?!/a/test/) {
    include permissions.conf;
}
thanks in advance
I think you can try this:
location / {
    if ($request_uri !~ "^/a/test$") {
        include permissions.conf;
    }
}
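A hedged caveat about that answer: nginx only permits a small set of directives inside if (essentially those from the rewrite module), and allow/deny are not among them, so including an allow/deny file there may fail at configuration load. A sketch of an alternative using plain prefix locations, where the longer prefix wins and serves as the exception:

location /a/test {
    # exception: no access rules here
}
location / {
    include permissions.conf;
}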

Retrieve static file from Amazon S3 bucket

I am trying to configure nginx in such a way that whenever there is a bad gateway response, it fetches static HTML content from the S3 bucket.
The URL structure of the request is some_bucket/folder1/folder2/text,
and the data is stored in the S3 bucket with a directory structure of s3.amazonaws.com/some_bucket/folder1/folder2/folder1_folder2.html.
I am not able to determine the values for folder1 and folder2, so I cannot build the HTML file path dynamically and use proxy_pass.
Also, I tried try_files, but I think that does not work for URLs.
Any idea how to tackle this problem?
Thanks.
An nginx S3 proxy can handle dynamically built URLs; you can also hide a directory and even parts of a private URL, such as the AWS key.
For instance, the original URL is the following:
https://your_bucket.s3.amazonaws.com/readme.txt?AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY&Signature=sagw4gsafdhsd&Expires=3453445231
Resulting URL:
https://your_server/proxy_private_file/readme.txt?st=sagw4gsafdhsd&e=3453445231
The configuration is not difficult:
location ~* ^/proxy_private_file/(.*) {
    set $s3_bucket      'your_bucket.s3.amazonaws.com';
    set $aws_access_key 'AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY';
    set $url_expires    'Expires=$arg_e';
    set $url_signature  'Signature=$arg_st';
    set $url_full       '$1?$aws_access_key&$url_expires&$url_signature';

    proxy_http_version     1.1;
    proxy_set_header       Host $s3_bucket;
    proxy_set_header       Authorization '';
    proxy_hide_header      x-amz-id-2;
    proxy_hide_header      x-amz-request-id;
    proxy_hide_header      Set-Cookie;
    proxy_ignore_headers   "Set-Cookie";
    proxy_buffering        off;
    proxy_intercept_errors on;
    proxy_pass             http://$s3_bucket/$url_full;
}
See the full configuration for more details.
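A hedged walk-through of the URL mapping in that configuration:

# client requests:
#   https://your_server/proxy_private_file/readme.txt?st=sagw4gsafdhsd&e=3453445231
#   $1      = readme.txt          (captured by the location regex)
#   $arg_st = sagw4gsafdhsd  ->   Signature=sagw4gsafdhsd
#   $arg_e  = 3453445231     ->   Expires=3453445231
# upstream request:
#   http://your_bucket.s3.amazonaws.com/readme.txt?AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY&Expires=3453445231&Signature=sagw4gsafdhsd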
This is what I did, for anyone (probably a newbie) who may encounter this problem.
location ~* ^/some_bucket/(.*)/(.*)/.* {
    proxy_pass http://s3.amazonaws.com/some_bucket/$1/$2/$1_$2.html;
}
~* means a case-insensitive regex match
^ anchors the pattern at the start of the URI
() captures parameters.
For example,
a user enters www.example.com/some_bucket/folder1/folder2/text
Then it is processed as follows:
~* ensures a case-insensitive match (for case-sensitive, drop the * and use just ~)
^ anchors at the start of the URI; the host www.example.com is not part of what a location regex sees
/some_bucket/ is matched literally, then
.* matches any number of any character (for digits only, replace it with [0-9]*)
() ensures that the matched value gets captured
So $1 captures folder1
and $2 captures folder2.
The final .* without parentheses matches any characters but does not capture them.
Now the captured values can be used to find the file in the Amazon bucket using
proxy_pass http://s3.amazonaws.com/some_bucket/$1/$2/$1_$2.html;
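One more hedged caveat: when proxy_pass contains variables, nginx evaluates the upstream address at request time, and resolving a domain name then requires a resolver directive, or requests may fail with a "no resolver defined" error:

resolver 8.8.8.8;  # any DNS server reachable from your host; this address is only an example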
https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms can be helpful