I've run into a small issue: we're using a load balancer for a new project, but we cannot force the www. prefix without creating a redirect loop between requests.
We're currently using NGINX, and the snippet that does the redirect is the following:
LOAD BALANCER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;
# FORGE CONFIG (DOT NOT REMOVE!)
include upstreams/mywebsite.com;
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name .mywebsite.com;
if ($host !~* ^www\.){
rewrite ^(.*)$ https://www.mywebsite.com$1;
}
# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/mywebsite.com/225451/server.crt;
ssl_certificate_key /etc/nginx/ssl/mywebsite.com/225451/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
charset utf-8;
access_log off;
error_log /var/log/nginx/mywebsite.com-error.log error;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/server/*;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://370308_app/;
proxy_redirect off;
# Handle Web Socket Connections
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
HTTP SERVER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;
server {
listen 80;
listen [::]:80;
server_name .mywebsite.com;
root /home/forge/mywebsite.com/public;
if ($host !~* ^www\.){
rewrite ^(.*)$ https://www.mywebsite.com$1;
}
# FORGE SSL (DO NOT REMOVE!)
# ssl_certificate;
# ssl_certificate_key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/mywebsite.com-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
Thing is, with this config I'm only getting redirect loops from the server.
Help please :D <3
After writing the prior general-purpose answer, I Googled "FORGE CONFIG (DOT NOT REMOVE!)", and this was the first result:
https://laracasts.com/discuss/channels/forge/forge-how-to-disable-nginx-default-redirection
inside nginx/forge-conf/be106.net/before/redirect.conf file there is this simple config:
…
server_name www.my-domain.net;
return 301 $scheme://my-domain.net$request_uri;
…
is there a simple way of removing this without altering the file itself(as it look like bad idea).
So, it appears that the redirect is being caused by the tooling (Laravel Forge) you're using, which means we've found the most likely cause of the loop!
In turn, the appropriate way to configure your application to avoid said loop would be outside of the scope of StackOverflow.
However, as a workaround:
consider whether you actually need all those forge-conf include directives at the load-balancer level; alternatively, you could fake the domain passed to the backend so that it does not trigger the redirect (provided you remove your own redundant redirects):
- proxy_set_header Host $http_host;
+ proxy_set_header Host mywebsite.com;
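In context, a minimal sketch of how that could look in the load-balancer's location block (assuming the rest of the proxy settings stay exactly as they are, and that your own non-www redirects are removed as noted above):
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # fake a non-www Host so the backend's forge-conf redirect never matches
    proxy_set_header Host mywebsite.com;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://370308_app/;
    proxy_redirect off;
    # Handle Web Socket Connections
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}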
note that the reason the forge-conf/mywebsite.com/before/redirect.conf file takes precedence over your own configuration for .mywebsite.com is the order of the include directives; you could potentially move the /before/* include to be after your own configuration, if such a move would otherwise make sense.
I don't think the nginx snippets you provided would cause a redirect loop by themselves.
First, you have to figure out whether it's an actual redirect — very often in these questions, the 301 Moved Permanently response gets cached in your browser, and subsequently you see a cached version, instead of a fresh one.
Subsequently, you'd have to figure out what is causing the redirect loop:
Try adding a unique string to each redirect directive, to see which one is actually causing the loop (as sketched below).
if ($host !~* ^www\.) {return 301 $scheme://www.$host/levelX$request_uri;}
Ask yourself why you have so many redirect directives in the first place: there doesn't seem to be much of a valid reason to have redirect directives both at the front-end load balancer and at the backend.
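Expanding on the unique-string idea, a sketch with hypothetical markers per layer (remove them once the culprit is identified):
# load balancer, HTTPS server block
if ($host !~* ^www\.) { return 301 https://www.mywebsite.com/lb443$request_uri; }

# backend, HTTP server block
if ($host !~* ^www\.) { return 301 https://www.mywebsite.com/http80$request_uri; }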
If the above doesn't resolve the issue, then you know the redirect loop is not coming from the files you've provided, and you have to dig deeper: it could come from some other file (perhaps one of your include directives), from a default server for www.mywebsite.com defined elsewhere that redirects back to mywebsite.com, or from the application layer itself.
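As a general, loop-resistant pattern (a sketch, not Forge-specific): keep the www force in a dedicated server block for the bare domain, at one layer only, so the main server never issues a redirect itself:
# redirect-only server for the bare domain
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mywebsite.com;
    # same ssl_certificate / ssl_certificate_key as the main block
    return 301 https://www.mywebsite.com$request_uri;
}

# main server only ever answers for the www host
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.mywebsite.com;
    # ... proxy_pass to the backend as before ...
}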
Related
I am setting up etherpad-lite in a subdirectory at this location.
Unfortunately the files in 'static' aren't being loaded:
Clearly something is going on in my nginx config, which (partially) looks like this:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
listen [::]:80;
server_name _;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name www.whitewaterwriters.com;
ssl_certificate /etc/letsencrypt/live/www.whitewaterwriters.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.whitewaterwriters.com/privkey.pem;
return 301 https://whitewaterwriters.com$request_uri;
}
server {
listen 443 ssl;
server_name whitewaterwriters.com;
ssl_certificate /etc/letsencrypt/live/whitewaterwriters.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/whitewaterwriters.com/privkey.pem;
root /usr/share/nginx/html;
index index.html index.php;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /watchtower/.*/live/pdfs/ {
autoindex on;
}
location /watchtower {
root /usr/share/nginx/html/;
}
location /etherpad {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_read_timeout 300;
proxy_pass http://localhost:9001/;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location /{
root /usr/share/nginx/html/whitewaterwriters-site/_site/;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers PROFILE=SYSTEM;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
My question is: how do I configure nginx so that the missing files appear?
There are some other questions on this topic, both in the GitHub issues and on SE, but they are, in general, solved by moving from etherpad to etherpad-lite (which I already use), or are both unanswered and approaching a decade old...
Short answer: if you add a trailing slash to your prefixed location, everything should work as expected.
map $http_upgrade $connection_upgrade {
'' close;
default upgrade;
}
server {
...
location /etherpad/ {
proxy_buffering off; # recommended by etherpad nginx hosting examples
proxy_set_header Host $host;
# optional headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; # EP logs to show the actual remote IP
proxy_set_header X-Forwarded-Proto $scheme; # for EP to set secure cookie flag when https is used
# recommended with keepalive connections
proxy_http_version 1.1;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# upstream
proxy_pass http://127.0.0.1:9001/;
}
}
If you want the /etherpad URI (without a trailing slash) to work too, add the following location in case you don't get an HTTP 301 redirect from /etherpad to /etherpad/ with the above configuration:
location = /etherpad {
return 301 /etherpad/;
}
For me it wasn't necessary, but it can depend on your server environment.
To preserve the query string, if any, you can use return 301 /etherpad/$is_args$args; or rewrite ^ /etherpad/ permanent; instead.
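For example, the query-preserving variant would be:
location = /etherpad {
    return 301 /etherpad/$is_args$args;
}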
Long answer (and what happens under the hood).
There are many questions on SO about "how can I host a webapp under a URI prefix". Here is one of my answers, and here is a ServerFault thread on a similar topic.
The only right way to do it is to make your proxied app request its assets via relative URIs only (think assets/script.js instead of /assets/script.js) or via the right URI prefix (/etherpad/assets/script.js).
Luckily, etherpad requests its assets using relative paths (e.g. <script src="static/js/index.js"></script>), making it suitable to be hosted under any URI prefix. The problem is, when your origin URI is /etherpad, the browser considers the current remote web server directory to be the root one and requests the above script from the server as scheme://domain/static/js/index.js. That request won't even be caught by your location /etherpad { ... } (since it doesn't start with /etherpad). On the other hand, when your origin URI is /etherpad/, the browser considers the current remote web server directory to be /etherpad/ and correctly requests the above script from the server as scheme://domain/etherpad/static/js/index.js.
Now let's see what happens with a proxied request /etherpad/<path> using your original configuration. Since you are using a trailing slash after the upstream address (http://localhost:9001 + /), nginx cuts the location /etherpad prefix from the request URI and replaces it with that slash (or whatever other URI is used in the proxy_pass directive after the upstream name), resulting in //<path>. You can read the A little confused about trailing slash behavior in nginx or nginx and trailing slash with proxy pass SO threads for more details. Anyway, that URI won't be served by etherpad, giving you a Cannot GET //<path> error.
Changing location /etherpad { ... } to location /etherpad/ { ... } will make both of the aforementioned problems go away.
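To illustrate with a hypothetical pad URI /etherpad/p/test:
# original: prefix location without the trailing slash
location /etherpad {
    # /etherpad/p/test -> "/etherpad" is replaced with "/" -> //p/test ("Cannot GET //p/test")
    proxy_pass http://localhost:9001/;
}

# fixed: trailing slash on the location prefix
location /etherpad/ {
    # /etherpad/p/test -> "/etherpad/" is replaced with "/" -> /p/test
    proxy_pass http://localhost:9001/;
}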
A few words about the etherpad wiki examples, especially this one.
Both
location /etherpad/ {
proxy_pass http://127.0.0.1/;
...
}
and
location /etherpad/ {
rewrite ^/etherpad(/.*) $1 break;
proxy_pass http://127.0.0.1;
...
}
do the same thing: they strip the /etherpad prefix from the request URI before passing it to the upstream. However, the first one does it in a much more efficient way. It is a good practice to avoid regular expressions whenever possible. Using
location = /etherpad {
return 301 /etherpad/;
}
is also more efficient than
rewrite ^/etherpad$ /etherpad/ permanent;
The second and third location blocks from the above wiki example completely duplicate the functionality of the first one. Moreover, that example breaks WebSocket support (whoever wrote it could at least have added that support to the location /pad/socket.io { ... } block).
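For reference, a sketch of what adding WebSocket support to such a block might look like (the rewrite and upstream address are assumptions modelled on the examples above, not the wiki's actual content):
location /pad/socket.io {
    rewrite ^/pad(/.*) $1 break;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://127.0.0.1:9001;
}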
And never do the thing used in that example:
location ~ ^/$ { ... }
Use exact matching location instead:
location = / { ... }
Here is one more configuration I've tested in order to check whether I can serve etherpad static assets directly via nginx. It seems to be workable, although I didn't test it a lot. It uses the uncompressed js/css asset versions (which should not impact performance when you are using gzip or some other compression). It is also a good example of a configuration where you can't avoid using a rewrite directive to strip a prefix from the request URI.
location /etherpad/static/ {
# trying to serve assets directly via nginx
# if the asset is not found, pass the request to the nodejs upstream
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
Update
As suggested on GitHub, URIs starting with the /etherpad/static/plugins/ prefix should always be passed to the nodejs upstream, since no corresponding assets exist under the /path/to/etherpad/src/static/ directory. Although a fallback to the nodejs upstream is already defined (try_files $uri @etherpad), to eliminate the extra stat system call produced by the try_files directive we can modify the above configuration to this one:
location ~ ^/etherpad/static/(?!plugins/) {
# trying to serve assets directly via nginx
# if the asset is not found, pass the request to the nodejs upstream
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
(using negative lookahead regex, better readability) or to this one:
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location /etherpad/static/ {
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/static/plugins/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/static/plugins/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
(only prefix locations, better performance). The repetitive part
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
and probably the other optional headers (X-Real-IP, X-Forwarded-For, X-Forwarded-Proto) mentioned at the very beginning of the answer, can be moved to a separate file, e.g. etherpad-proxy.conf, and included into the main nginx config with the include directive.
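For instance (the shared file path is an assumption), a file like /etc/nginx/etherpad-proxy.conf could contain:
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
and each proxied location would then shrink to something like:
location /etherpad/ {
    include /etc/nginx/etherpad-proxy.conf;
    proxy_pass http://127.0.0.1:9001/;
}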
You can try pointing the static content at the correct folder with:
location /static {
root /usr/share/nginx/html/whitewaterwriters-site/_site/static;
}
# or something like:
location /etherpad/static {
root /usr/share/nginx/html/whitewaterwriters-site/_site/;
}
since this is working: https://whitewaterwriters.com/etherpad/static/js/vendors/html10n.js?v=869d568c
I'm trying to create a Django site on my nginx server. I already have other sites in other sub-folders. I use a gunicorn service to redirect from nginx to Django.
I'm able to access the default Django welcome page (https://example.com/django/) but I can't get to the admin page of my Django site (if I enter https://example.com/django/admin, it redirects me to https://example.com/admin/login/?next=/admin/ and I get an nginx 404). Manually changing the redirect to https://example.com/django/admin/login/?next=/admin/ shows a plain HTML login page (as if the static content was not loaded).
I'm only starting webdev so I might be wrong, but it seems the error is in my nginx config.
Here is my nginx configuration file:
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}
server {
server_name example.com www.example.com;
# listen 80;
# SSL configuration
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # drop SSLv3 (POODLE vulnerability)
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
root /var/www/example.com;
index index.php index.html;
location / {
try_files $uri $uri/ $uri.html $uri.php$is_args$query_string;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
# With php-cgi (or other tcp sockets):
#fastcgi_pass 127.0.0.1:9000;
}
location /biketrack {
try_files $uri $uri/ =404;
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
}
# Django configuration
location /django/static/ {
alias /home/pi/elops-tracker-project/static;
}
location /django {
include proxy_params;
rewrite ^/django/(.*) /$1 break;
# alias /home/pi/elops-tracker-project
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header Host $http_host;
# proxy_set_header X-Forwarded-Host $server_name;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_redirect off;
# proxy_set_header SCRIPT_NAME /django
proxy_pass http://unix:/home/pi/elops-tracker-project/elops_tracker.sock;
}
}
I need to know if it is possible to use nginx as a reverse proxy to serve several web apps, each one hosted on a different Raspberry Pi.
As can be seen in the diagram, the Raspberry Pis will all be connected to an unmanaged switch; on the first Pi I intend to install nginx so it can serve as a reverse proxy depending on the website requested from the internet, e.g. www.site1.com, www.site2.com, etc.
Is this possible?
Will I be able to access those RPis from a computer connected to the modem, not to the switch?
Note: The modem is a wifi modem and the switch is an unmanaged wired switch.
Apologies for my poor drawing skills, and thanks for any help. I need to know if this idea is possible before buying all this stuff.
I think it is possible, but there are some requirements:
a static external IP assigned to the modem;
static IPs on the RPis;
correct forwarding rules on the modem.
I mean, you need to forward all requests like the following:
modem:80 -> rp0:80
modem:443 -> rp0:443
On rp0 the ports may differ from 80 and 443, so please set up the correct rules and reflect them in the nginx config.
After that, set up upstreams or use the IPs of rp1-3 in the website configs:
upstream rp1 {
server 192.168.1.11:port;
}
upstream rp2 {
server 192.168.1.12:port;
}
upstream rp3 {
server 192.168.1.13:port;
}
Replace port with the port that the appropriate RPi listens on.
Website configs will be like the following:
server {
server_name site1.com www.site1.com ;
location / { proxy_pass http://rp1 ; }
}
server {
server_name site2.com www.site2.com ;
location / { proxy_pass http://rp2 ; }
}
Add any params you need.
Also, if you are going to host some static websites, the best way is to place them on rp0.
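For example, a hypothetical static site served straight from rp0 could be as simple as:
server {
    listen 80;
    server_name site3.com www.site3.com;
    root /var/www/site3;
    index index.html;
}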
EDIT 1
Example of working config:
server {
listen 80;
server_name site1.com www.site1.com ;
location / { rewrite ^ https://$host$request_uri permanent;}
}
server {
listen 443 ssl;
server_name site1.com www.site1.com;
ssl_certificate /etc/letsencrypt/live/site1/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/site1/key.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_pass http://rp1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $remote_addr;
port_in_redirect off;
proxy_redirect http://rp1/ /;
}
}
Please note: if you are going to use Let's Encrypt, the best way is to set up certbot (or something similar) on rp0; it will be easier to renew certs automatically. Also, use /etc/letsencrypt/live/site1/fullchain.pem.
In order to use multiple SSL domains, be sure that the installed nginx supports SNI:
# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
This is the nginx conf on the side of the head node:
server {
listen 80;
server_name www.codingindfw.com codingindfw.com;
location / { rewrite ^ https://$host$request_uri permanent;}
}
server{
listen 443 ssl;
server_name www.codingindfw.com codingindfw.com;
client_max_body_size 4G;
ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location / {
proxy_pass http://192.168.0.8;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $remote_addr;
port_in_redirect off;
proxy_redirect http://192.168.0.8/ /;
}
}
And this is the nginx conf file on the client running the actual Django app:
server {
listen 80 default_server;
server_name www.codingindfw.com;
client_max_body_size 4G;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/pi/coding-in-dfw;
}
location /media/ {
root /home/pi/coding-in-dfw;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/pi/coding-in-dfw/mysocket.sock;
}
}
Right now I have one Django project on one domain. I want to serve three Django projects under one domain, separated by path prefix, for example www.domain.com/firstone/, www.domain.com/secondone/, etc. How do I configure nginx to serve multiple Django projects under one domain? And how do I configure static file serving in this case?
My current nginx config is:
server {
listen 80;
listen [::]:80;
server_name domain.com www.domain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name domain.com www.domain.com;
ssl_certificate /etc/nginx/ssl/Certificate.crt;
ssl_certificate_key /etc/nginx/ssl/Certificate.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
root /home/admin/web/project;
location /static {
alias /home/admin/web/project/static;
}
location /media {
alias /home/admin/web/project/media;
}
location /assets {
alias /home/admin/web/project/assets;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_pass http://127.0.0.1:8000/;
client_max_body_size 100M;
}
# Proxies
# location /first {
# proxy_pass http://127.0.0.1:8001/;
# }
#
# location /second {
# proxy_pass http://127.0.0.1:8002/;
# }
error_page 500 502 503 504 /media/50x.html;
}
You have to run your projects on different ports, like firstone on 8000 and secondone on 8001.
Then in the nginx conf, in place of location /, you have to write location /firstone/ and proxy_pass it to port 8000, and then write the same kind of location block for the second one as location /secondone/ and proxy_pass it to port 8001.
For static files and media, you have to make them available as /firstone/static, and the same for secondone.
Another way is to specify the same MEDIA_ROOT and STATIC_ROOT for both projects.
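A rough sketch of what this could look like (the ports follow the answer; the static paths are assumptions to adapt to your STATIC_ROOT, and Django may also need to know about the prefix, e.g. via FORCE_SCRIPT_NAME):
location /firstone/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:8000/;
}
location /secondone/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:8001/;
}
# per-project static files, collected to assumed directories
location /firstone/static/ {
    alias /home/admin/web/firstone/static/;
}
location /secondone/static/ {
    alias /home/admin/web/secondone/static/;
}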
As @prof.phython correctly states, you'll need to run a separate gunicorn process for each of the apps. This results in each app running on a separate port.
Next, create a separate upstream block under http for each of these app servers:
upstream app1 {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
#server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
server 127.0.0.1:9000 fail_timeout=0;
}
Obviously, change the name and port number of each upstream block accordingly.
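For example, a hypothetical second app on another port:
upstream app2 {
    # assumed port of the second app's gunicorn process
    server 127.0.0.1:9001 fail_timeout=0;
}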
Then, under your http->server block define the following for each:
location @app1_proxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app1;
}
Make sure the last line there points at what you named the upstream block (app1), and that @app1_proxy is specific to that app as well.
Finally within the http->server block, use the following code to map a URL to the app server:
location /any/subpath {
# checks for static file, if not found proxy to app
try_files $uri @app1_proxy;
}
What prof.phython said should be correct. I'm not an expert on this but I saw a similar situation with our server as well. Hope the shared nginx.conf file helps!
server {
listen 80;
listen [::]:80;
server_name alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name web.alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 443 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/html/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 443 ssl;
server_name web.alicebot.tech;
ssl_certificate /etc/letsencrypt/live/web.alicebot.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/web.alicebot.tech/privkey.pem; # managed by Certbot
location /static/ {
autoindex on;
alias /var/www/html/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 8000 ssl;
listen [::]:8000 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
autoindex on;
alias /var/www/alice_v2/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
}
}
As you can see, we had different domain names here, which you won't be needing, so you'll need to change the server names inside the server { ... } blocks.
https://mydomainName.com --> AWS-ELB [ingress 443 --> egress 80] --> OmnibusGitlab
Now Omnibus redirects to the following and times out:
http://mydomainName.com/users/sign_in
Any way to debug this issue?
The full path has to be https, because if you go in through a reverse proxy that accepts https, then you have to come back out as https as well.
Separate out the nginx configuration, because the Omnibus solution has constraints that block the flexibility we have with a standard nginx.
Do the following to make this change:
edit /etc/gitlab/gitlab.rb
and add
nginx['enable'] = false
web_server['external_users'] = ['www-data'] #for ubuntu nginx user
web_server['external_users'] = ['nginx'] # for centos 6-7
Add the following configuration to enable GitLab via a plain nginx:
/etc/nginx/sites-available/server
server {
listen *:443 default_server ssl;
ssl_certificate /etc/ssl/certs/myserver.crt;
ssl_certificate_key /etc/ssl/private/myserver.key;
server_name myhostname.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
client_max_body_size 50m; #or 5000
access_log /var/log/gitlab/nginx_access.log;
error_log /var/log/gitlab/nginx_error.log;
location / {
try_files $uri $uri/index.html $uri.html @gitlab;
}
location @gitlab {
proxy_read_timeout 300; # Some requests take more than 30 seconds.
proxy_connect_timeout 300; # Some requests take more than 30 seconds.
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://gitlab;
}
error_page 502 /502.html;
}
gitlab-redirect
/etc/nginx/sites-available/gitlab-redirect
server {
listen 80;
server_name myhostname.com;
return 301 https://myhostname.com;
}