I'm trying to use Nginx in front of Django's development webserver (127.0.0.1:8000) to serve the static content. I'd like Nginx to serve all files under '/static' and pass every other request on to Django's webserver, but I'm stuck! Here's what I've done:
Got Nginx running on my OSX, so the 'welcome to Nginx!' page shows on localhost.
Changed my /etc/hosts file to add 'testdev.com':
127.0.0.1 localhost
127.0.0.1 testdev.com
Made /sites-available and /sites-enabled directories in /usr/local/src/nginx-1.2.6.
My nginx.conf file in /conf is the default plus the include statement:
include /usr/local/src/nginx-1.2.6/sites-enabled/testdev.com;
My testdev.com file is in sites-available, with a symlink in /sites-enabled.
server {
    root /<path-to-my-django-project>/website/static;
    server_name testdev.com;
    gzip off;
    listen 8000;

    location = /favicon.ico {
        rewrite "/favicon.ico" /img/favicon.ico;
    }

    proxy_set_header Host $host;

    location / {
        if (-f $request_filename) {
            add_header X-Static hit;
            access_log off;
        }
        if (!-f $request_filename) {
            proxy_pass http://127.0.0.1:8000;
            add_header X-Static miss;
        }
    }
}
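As an aside, the file-existence test in the config above is usually expressed with try_files rather than if blocks. A sketch of the idiomatic equivalent (the named location @django is my own label, not part of the original config):

```nginx
location / {
    # Serve the URI from the document root if such a file exists,
    # otherwise hand the request off to the named location below.
    try_files $uri @django;
}

location @django {
    proxy_pass http://127.0.0.1:8000;
}
```

try_files avoids the well-known pitfalls of if inside location blocks while achieving the same "static hit or proxy miss" behavior.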
If I curl testdev.com, it shows Nginx:
curl -I http://testdev.com
HTTP/1.1 200 OK
Server: nginx/1.2.6
Date: Mon, 22 Apr 2013 18:37:30 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Sun, 21 Apr 2013 19:39:47 GMT
Connection: keep-alive
Accept-Ranges: bytes
But if I try to access a static file, I get a 404:
curl -I http://testdev.com/static/css/style.css
HTTP/1.1 404 Not Found
Server: nginx/1.2.6
Date: Mon, 22 Apr 2013 18:38:53 GMT
Content-Type: text/html
Content-Length: 168
Connection: keep-alive
All this is based on a Google search and finding this.
I added the
listen 8000
statement in my testdev.com conf file because I thought it was needed for the Nginx virtual host, but I'm super confused. The blog author used
127.0.1.1 testdev.com
in his hosts file, but if I add that, the first curl command just hangs.
What am I doing wrong?
Thanks all - I've got it working. Here's my working testdev.com conf:
server {
    root /<path-to-django-site>;
    server_name testdev.com;
    gzip off;
    autoindex on;

    proxy_set_header Host $host;

    location /static/ {
        add_header X-Static hit;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
It looks like a location block inherits the server's root path if you don't supply one of its own. Now when I curl:
curl -I http://testdev.com/static/js/utils.js
HTTP/1.1 200 OK
Server: nginx/1.2.6
Date: Tue, 23 Apr 2013 01:36:07 GMT
Content-Type: application/x-javascript
Content-Length: 2730
Last-Modified: Thu, 13 Dec 2012 18:54:10 GMT
Connection: keep-alive
X-Static: hit
Accept-Ranges: bytes
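For clarity, the inherited root can also be spelled out explicitly inside the location block. A sketch of the equivalent (path placeholder as above):

```nginx
location /static/ {
    # Equivalent to inheriting the server-level root:
    # a request for /static/js/utils.js is looked up at
    # /<path-to-django-site>/static/js/utils.js
    root /<path-to-django-site>;
    add_header X-Static hit;
}
```

Note that root appends the full URI to the path; alias would be used instead if the on-disk directory name differed from the URI prefix.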
Many thanks @Evgeny - got me on the right lines. Might be useful for others looking to do the same.
We recently moved from a traditional webserver to AWS. For each relevant website, we have two EC2 instances (mirrors) that share an S3 bucket for their httpdocs folder, which is where the relevant website files are served out of. Each pair of EC2 instances is behind an AWS load balancer. We have taken steps (described below) to keep caching out of the system. Unfortunately, I'm running into some caching (?) issues as I try to update one of the sites.
The specific behavior I'm encountering is that I will update a file within the httpdocs folder on one of the EC2 instances via SFTP. I can then SSH onto either EC2 instance and confirm in Vi that the change is in place. However, if I re-load the webpage (whether regularly or with an Empty Cache and Hard Reload), the change does not show up.
Previously, I could work around this issue by switching my SFTP connection to the other EC2 instance for the pair and re-uploading the file. However, that work-around appears to have stopped working.
We have worked to eliminate all caching throughout the system. We do not have ElastiCache installed, we added some configuration lines to Nginx's config file to disable its cache, and we have our browsers (Chrome) set to "Disable cache" under the Network tab of the Developer Console. However, despite those efforts, it appears that an older version of the page is getting cached somewhere.
Where might this caching be taking place, and how can we disable it? Or might there be another cause for this behavior?
cURL Output Requested by Parsifal
Because the original file I worked with is behind a login screen, I created a test.php file to place in the root of the webserver, outside the login screen. I confirmed similar behavior with that file, i.e. updates show up on the server if checked via Vi but the changed file is not served via the webserver. I also checked the file via the S3 web console, and S3 shows the changes that have been made.
Unfortunately I'm uncertain what "EC2_HOST_NAME" means or what "PUBLIC_DNS" means. I provided both options for what EC2_HOST_NAME might mean. All of our public DNS records point to the load balancer, but I tried that anyway.
First request from comments:
ubuntu@ip-[IP ADDRESS]:~$ curl -v http://localhost/test.php > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Mon, 22 Nov 2021 19:39:33 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
{ [76 bytes data]
100 65 0 65 0 0 9285 0 --:--:-- --:--:-- --:--:-- 9285
* Connection #0 to host localhost left intact
There was the same output on the mirror EC2 instance for the same command, except the timestamp was a few minutes later, the 65's were 20's, and the 9285's were 5000's.
Second request from comments, IP address-based:
PS C:\Users\nwehneman> curl.exe -v http://[IP ADDRESS]/test.php
* Trying [IP ADDRESS]...
* TCP_NODELAY set
* Connected to [IP ADDRESS] ([IP ADDRESS]) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: [IP ADDRESS]
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Mon, 22 Nov 2021 19:44:36 GMT
< Content-Type: text/html; charset=UTF-8
< X-Cache: MISS from barracuda.ama.local
< Transfer-Encoding: chunked
< Via: 1.1 barracuda.ama.local (http_scan_byf/3.5.16)
< Connection: keep-alive
<
This is a test page.<BR><BR>This is a test edit of the test page.* Connection #0 to host [IP ADDRESS] left intact
Second request from comments, EC2-related name-based:
PS C:\Users\nwehneman> curl.exe -v http://ec2-[IP ADDRESS].us-east-2.compute.amazonaws.com/test.php
* Trying [IP ADDRESS]...
* TCP_NODELAY set
* Connected to ec2-[IP ADDRESS].us-east-2.compute.amazonaws.com ([IP ADDRESS]) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: ec2-[IP ADDRESS].us-east-2.compute.amazonaws.com
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Mon, 22 Nov 2021 19:26:23 GMT
< Content-Type: text/html; charset=UTF-8
< X-Cache: MISS from barracuda.ama.local
< Transfer-Encoding: chunked
< Via: 1.1 barracuda.ama.local (http_scan_byf/3.5.16)
< Connection: keep-alive
<
This is a test page.<BR><BR>This is a test edit of the test page.* Connection #0 to host ec2-[IP ADDRESS].us-east-2.compute.amazonaws.com left intact
Third request from comments:
PS C:\Users\nwehneman> curl.exe -v http://[DOMAIN NAME]-test-lb-1130955252.us-east-2.elb.amazonaws.com/test.php
* Trying [IP ADDRESS]...
* TCP_NODELAY set
* Connected to [DOMAIN NAME]-test-lb-1130955252.us-east-2.elb.amazonaws.com ([IP ADDRESS]) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: [DOMAIN NAME]-test-lb-1130955252.us-east-2.elb.amazonaws.com
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: awselb/2.0
< Date: Mon, 22 Nov 2021 19:51:52 GMT
< Content-Type: text/html
< Content-Length: 134
< Location: https://[DOMAIN NAME]-test-lb-1130955252.us-east-2.elb.amazonaws.com:443/test.php
< X-Cache: MISS from barracuda.ama.local
< Via: 1.1 barracuda.ama.local (http_scan_byf/3.5.16)
< Connection: keep-alive
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
</body>
</html>
* Connection #0 to host [DOMAIN NAME]-test-lb-1130955252.us-east-2.elb.amazonaws.com left intact
Second Update
So, the problem seems to be isolated to one of the EC2 instances. When I run curl -v http://localhost/test.php on the first EC2 instance, it downloads the then-current file. Here is the output:
[USERID]@ip-[IP ADDRESS]:~$ curl -v http://localhost/test.php
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Tue, 23 Nov 2021 20:39:16 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Last-Modified: Tuesday, 23-Nov-2021 20:39:16 GMT
< Cache-Control: no-store
<
* Connection #0 to host localhost left intact
This is a reverted test page.
However, if I run the same command on the second EC2 instance, I get an older, presumably cached, version of the file:
[USERID]@ip-[IP ADDRESS]:/etc/nginx$ curl -v http://localhost/test.php
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Tue, 23 Nov 2021 20:39:49 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Last-Modified: Tuesday, 23-Nov-2021 20:39:49 GMT
< Cache-Control: no-store
<
* Connection #0 to host localhost left intact
This is a test page.<BR><BR>This is a test edit of the test page.<BR><BR>This is a second test edit of the test page.
The nginx configuration files are the same between the two instances. Here's the /etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    # gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
And here's the /etc/nginx/sites-available/default file:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
    listen 80;
    # listen [::]:80 default_server;
    #server_name _;

    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }

    server_name [DOMAIN];
    [PERMUTATIONS ON DOMAIN OMITTED]

    client_max_body_size 128m;
    proxy_read_timeout 120;
    client_header_timeout 3000;
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;

    # root "/var/www/";
    root "/var/www/vhosts/[DOMAIN]/httpdocs";
    access_log "/var/www/vhosts/system/[DOMAIN]/logs/proxy_access_log";
    error_log "/var/www/vhosts/system/[DOMAIN]/logs/proxy_error_log";

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    # server_name _;

    location ~ \.php$ {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;

        # kill cache
        add_header Last-Modified $date_gmt;
        add_header Cache-Control "no-store";
        if_modified_since off;
        expires off;
        etag off;
    }

    location / {
        # kill cache
        add_header Last-Modified $date_gmt;
        add_header Cache-Control "no-store";
        if_modified_since off;
        expires off;
        etag off;

        # kill cache
        # expires -1;
        # don't cache it
        # proxy_no_cache 1;
        # even if cached, don't try to use it
        # proxy_cache_bypass 1;
    }
}
Again, these files are identical between the two EC2 instances, yet one caches and the other doesn't. I've also manually looked for a cache location on the EC2 instance and found none. (If there's a programmatic way to identify nginx's cache, I'm all ears!)
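There is a reasonably programmatic way to check: `nginx -T` dumps the full effective configuration with every included file expanded, which can then be grepped for any cache directives. A sketch, with the grep demonstrated against an inline sample config so the pipeline itself can be run anywhere:

```shell
# On the live instance, dump the effective config and look for cache zones:
#   sudo nginx -T 2>/dev/null | grep -Ei '(proxy|fastcgi|uwsgi|scgi)_cache'
# The same grep, shown against an inline sample config:
printf 'http {\n    proxy_cache_path /var/cache/nginx keys_zone=one:10m;\n}\n' |
    grep -Ei '(proxy|fastcgi|uwsgi|scgi)_cache'
# A hit such as "proxy_cache_path ..." means nginx has a cache zone defined;
# no output means no cache directives are active in nginx itself.
```

If that grep comes up empty on both instances, the stale responses are coming from somewhere other than nginx (for example the upstream PHP layer or an intermediate proxy).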
It's also weird (to me at least) that every web browser request gets routed to the second, caching EC2 instance, even though we have a load balancer in front of the two instances. Shouldn't the load balancer be dividing the load between them?
I'm attempting to use a video tag to display video in Safari.
Here is my snippet of html:
<video autoplay="" muted="" loop="" preload="auto" poster="http://my.ip.add.ress/static/my_video_image.jpg">
    <source src="http://my.ip.add.ress/static/my_video.mp4" type="video/mp4" />
    <source src="http://my.ip.add.ress/static/my_video.webm" type="video/webm" />
</video>
The static files (css, js, images) are being served up properly.
The problem I run into is that when Safari requests the video, nginx is supposed to return a 206 Partial Content response. Instead, it returns a 200 OK with (I think) the whole file, even though Safari only requested a range of the video using the Range header. This stops the video from playing in Safari. As it sits, my current setup works in Chrome and Firefox.
I'm using nginx to serve the video content. I'd like to avoid using a 3rd party server as this is for a small project :).
My question is: how do I properly set up nginx to serve videos to Safari? It seems nginx is ignoring the Range header in the request. Is there a way to tell nginx to honor that header?
Here is my nginx config in /etc/nginx/sites-available/myproject:
server {
    listen 80;
    server_name my.ip.add.ress;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /home/website/my_python_virtual_env/my_project/static_folder_containing_mp4_videos/;
    }

    location / {
        # gunicorn to django
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
Here is the request:
Request
Range: bytes=0-1
Accept: */*
Connection: Keep-Alive
Accept-Encoding: identity
DNT: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 Safari/605.1.15
Referer: http://my.ip.add.ress/
X-Playback-Session-Id: 97A1EC54-85A3-42A1-8EA2-8657D03058B6
Here is the response:
Response
Content-Type: video/mp4
Date: Thu, 13 Sep 2018 17:48:40 GMT
Last-Modified: Wed, 12 Sep 2018 22:20:39 GMT
Server: nginx/1.14.0 (Ubuntu)
Content-Length: 10732143
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
On sites that do have video working, the request/response looks like this:
Request
GET /big_buck_bunny.mp4 HTTP/1.1
Range: bytes=0-1
Host: clips.vorwaerts-gmbh.de
Accept: */*
Connection: keep-alive
Accept-Encoding: identity
Accept-Language: en-us
DNT: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 Safari/605.1.15
X-Playback-Session-Id: C2EAAF63-1230-44A9-9A16-6332C1EDEBF0
Response
HTTP/1.1 206 Partial Content
ETag: "5416d8-47f21fa7d3300"
Content-Type: video/mp4
Date: Thu, 13 Sep 2018 17:28:47 GMT
Last-Modified: Tue, 09 Feb 2010 02:50:20 GMT
Server: cloudflare
Content-Length: 2
Expires: Fri, 13 Sep 2019 17:28:47 GMT
Connection: keep-alive
Content-Range: bytes 0-1/5510872
Set-Cookie: __cfduid=d2776dbf7a6baaa1b2f2572d600deda141536859727; expires=Fri, 13-Sep-19 17:28:47 GMT; path=/; domain=.vorwaerts-gmbh.de; HttpOnly
Vary: Accept-Encoding
Cache-Control: public, max-age=31536000
CF-RAY: 459c5511b243a064-SLC
CF-Cache-Status: HIT
I feel silly posting this, but here was my problem.
My nginx instance was not set up to serve the media: anything under /media/ was being served by Django. Django does not serve mp4 videos properly for Safari because it doesn't handle Range requests - though it serves them well enough for Chrome to work! ;)
The fix was simple: add a location entry for /media/ to my nginx conf file for the website.
server {
    listen 80;
    server_name my.ip.add.ress;

    location = /favicon.ico { access_log off; log_not_found off; }

    # still have to have this location entry to serve the static files...
    location /static/ {
        alias /home/website/my_python_virtual_env/my_project/static_folder_containing_static_files/;
    }

    # Added this location entry: my videos were NOT in the static folders,
    # they were in the media folders. I feel dumb, but hopefully this will
    # help someone else out there!
    location /media/ {
        alias /home/website/my_python_virtual_env/my_project/media_folder_containing_mp4_videos/;
    }

    location / {
        # gunicorn to django
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
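To confirm that nginx (rather than Django) is now serving the file, a ranged request should come back as 206 with a Content-Range header. A quick check along these lines (host and path are placeholders from the config above):

```shell
curl -sI -H 'Range: bytes=0-1' http://my.ip.add.ress/media/my_video.mp4
# A working setup should answer with:
#   HTTP/1.1 206 Partial Content
#   Content-Range: bytes 0-1/<total-size>
#   Content-Length: 2
```

nginx supports byte-range requests for static files out of the box, so no extra configuration beyond the /media/ location is needed.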
I'm using WhiteNoise to serve static files from a Django app running under gunicorn. For some reason, the Cache-Control and Access-Control-Allow-Origin headers returned by the gunicorn backend are not being passed back to the client through the nginx proxy.
Here's what the response looks like for a sample request to the gunicorn backend:
% curl -I -H "host: www.myhost.com" -H "X-Forwarded-Proto: https" http://localhost:8000/static/img/sample-image.1bca02e3206a.jpg
HTTP/1.1 200 OK
Server: gunicorn/19.8.1
Date: Mon, 02 Jul 2018 14:20:42 GMT
Connection: close
Content-Length: 76640
Last-Modified: Mon, 18 Jun 2018 09:04:15 GMT
Access-Control-Allow-Origin: *
Cache-Control: max-age=315360000, public, immutable
Content-Type: image/jpeg
When I make a request for the same file via the nginx server, the two headers are missing.
% curl -I -H "Host: www.myhost.com" -k https://my.server.com/static/img/sample-image.1bca02e3206a.jpg
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 02 Jul 2018 14:09:25 GMT
Content-Type: image/jpeg
Content-Length: 76640
Last-Modified: Mon, 18 Jun 2018 09:04:15 GMT
Connection: keep-alive
ETag: "5b27758f-12b60"
Accept-Ranges: bytes
My nginx config is pretty much what is documented in the gunicorn deployment docs, i.e. I haven't enabled nginx caching (nginx -T | grep -i cache is empty) or done anything else I would think is out of the ordinary.
What am I missing?
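One way to narrow this kind of issue down is to capture the headers from both hops and diff them; whichever header lines disappear at the proxy were either stripped by nginx or never requested from the backend at all. A sketch using the hosts from the transcripts above (both are placeholders):

```shell
# Headers straight from the gunicorn backend:
curl -sI -H 'Host: www.myhost.com' \
    http://localhost:8000/static/img/sample-image.1bca02e3206a.jpg | sort > backend-headers.txt

# Headers as served through the nginx proxy:
curl -skI -H 'Host: www.myhost.com' \
    https://my.server.com/static/img/sample-image.1bca02e3206a.jpg | sort > proxy-headers.txt

# Lines present only in backend-headers.txt never made it past nginx.
diff backend-headers.txt proxy-headers.txt
```

In this case the diff would show Cache-Control and Access-Control-Allow-Origin only on the backend side, plus nginx-added headers (ETag, Accept-Ranges) on the proxy side, pointing to nginx serving the file itself rather than proxying.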
The problem is that you have

location / {
    try_files $uri @proxy_to_app;
}

in your nginx config, so nginx just serves the files itself; gunicorn never even sees the request and of course can't add the headers.
It turns out I had forgotten the root directive I had configured many months ago, which was now picking up the static files. My error was in assuming that since I hadn't configured a location /static directive, nginx would be proxying all requests to the backend.
The solution for me was to remove the $uri reference from the try_files directive:
location / {
    try_files /dev/null @proxy_to_app;
}
Alternatively, I could have simply put the contents of the @proxy_to_app location block directly inside the location / block.
Thanks to Alexandr Tatarinov for the suggestion in the comments.
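That alternative would look something like the following sketch, assuming the @proxy_to_app block matches the example in the gunicorn deployment docs (adjust the directives and the app_server upstream name to whatever the actual named location contains):

```nginx
location / {
    # Contents of @proxy_to_app inlined; every request is proxied,
    # so nothing is served directly from disk by nginx.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
}
```

The try_files /dev/null trick keeps the named location intact, which is handy if other locations also reference @proxy_to_app.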
My configuration now is nginx + uwsgi + different django apps.
My nginx configuration is as follows:
location /app1/ {
    uwsgi_pass app1;
    include /home/code/uwsgi_params; # the uwsgi_params file you installed
}

location /app2/ {
    uwsgi_pass app2;
    include /home/code/uwsgi_params; # the uwsgi_params file you installed
}
and I also set the mount point in uwsgi in order to make the reverse proxy work, something like this:
mount = /app1=app1.wsgi:application
manage-script-name = true
My app requires login before accessing the content of the website, so when I type www.example.com/app1, uwsgi returns a 302 redirect response:
< HTTP/1.1 302 Found
< Server: nginx/1.10.3 (Ubuntu)
< Date: Mon, 11 Dec 2017 10:30:47 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Location: /login/?next=/app1/
< X-Frame-Options: DENY
< Vary: Cookie
The browser then follows the link in Location; however, because /login/?next=/app1/ lacks the /app1/ prefix, nginx cannot match it to the uwsgi location and send the request on. Instead, it tries to find login/?next=/app1/ locally in its root.
How can I rewrite the redirect response with the correct prefix? Should I configure this on the nginx side or the uwsgi side?
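One direction worth trying (a sketch, not verified against this setup): pass the prefix to the app as SCRIPT_NAME so Django builds the Location header with /app1/ included itself, rather than rewriting the response in nginx afterwards:

```nginx
location /app1/ {
    include /home/code/uwsgi_params;
    # Tell the app its URL prefix; Django prepends SCRIPT_NAME
    # when reversing named URLs, so generated redirects gain /app1.
    uwsgi_param SCRIPT_NAME /app1;
    uwsgi_pass app1;
}
```

Caveat: this only helps for URLs Django reverses by name; a hard-coded setting such as LOGIN_URL = '/login/' is used verbatim and would still need to be written as a named URL (or spelled with the prefix) to pick up SCRIPT_NAME.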
I have a problem with nginx. Here is my simple virtual host config that doesn't seem to work properly:
server {
    listen 80;
    server_name my.site;

    access_log /home/my.site/www/my.site/log/access.log;
    error_log /home/my.site/www/my.site/log/error.log error;

    root /home/my.site/www/my.site/public/;
    charset utf-8;

    location /search/ {
        error_page 418 = @passenger;
        recursive_error_pages on;
        if ( $arg_mode = block ) { return 418; }
        default_type text/html;
        try_files $request_uri @passenger;
    }

    location / {
        try_files $uri @passenger;
    }

    location @passenger {
        root /home/my.site/www/my.site/public/;
        passenger_enabled on;
    }
}
The problem is specifically with the location /search/. I want nginx to pass the request straight to the backend if the URI includes the parameter 'mode' with the value 'block' (i.e. the URI looks like http://my.site/search/word?mode=block&type=... (other parameters)).
But it doesn't work: if the static file /public/search/word exists, the server sends it even when mode=block is present in the URI. What is my misstep?
Your configuration is fine nginx-wise, and any request for /search/ with mode=block will get sent to @passenger.
A simple test case is ...
server {
    listen 80;
    server_name example.com;

    location /search/ {
        error_page 418 = @passenger;
        recursive_error_pages on;
        if ( $arg_mode = block ) { return 418; }
        default_type text/html;
        echo "/search/";
    }

    location @passenger {
        echo "@passenger!";
    }
}
Calling this using curl (URL quoted so the shell doesn't interpret the & characters):
# curl -i 'http://example.com/search/word/?mode=block&a=b&c=d'
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 02 Oct 2014 11:35:50 GMT
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Connection: keep-alive
@passenger!
So, if files are getting served, then this is happening in the @passenger block, as @Alexey Ten inferred in his comment.