I need to know whether it is possible to use Nginx as a reverse proxy to serve several web apps, each hosted on a different Raspberry Pi.
As can be seen in the diagram, the Raspberry Pis will all be connected to an unmanaged switch. On the first one I intend to install nginx so it can act as a reverse proxy, routing requests from the internet to the right Pi depending on the website requested, e.g. www.site1.com, www.site2.com, etc.
Is this possible?
Will I be able to access those RPis from a computer connected to the modem, not to the switch?
Note: The modem is a wifi modem and the switch is an unmanaged wired switch.
Apologies for my poor drawing skills, and thanks for any help. I need to know if this idea is possible before buying all this stuff.
I think it is possible, but there are some requirements:
a static external IP assigned to the modem;
static IPs on the RPis;
correct forwarding rules on the modem.
I mean, you need to forward all requests like the following:
modem:80 -> rp0:80
modem:443 -> rp0:443
On rp0 the ports may differ from 80 and 443, so please set up the correct forwarding rules and reflect them in the nginx config.
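For instance, a minimal sketch on rp0, assuming the modem forwards external port 80 to internal port 8080 (the port and the upstream address are only examples, not from the original setup):
# nginx on rp0 listens on whatever internal port the modem forwards to
server {
    listen 8080;
    server_name site1.com www.site1.com;
    location / {
        proxy_pass http://192.168.1.11:80;  # rp1 (assumed IP and port)
    }
}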
After that, set up upstreams or use the IPs of rp1-rp3 in the website configs:
upstream rp1 {
server 192.168.1.11:port;
}
upstream rp2 {
server 192.168.1.12:port;
}
upstream rp3 {
server 192.168.1.13:port;
}
Replace port with the port that the appropriate RPi listens on.
The website configs will look like the following:
server {
server_name site1.com www.site1.com ;
location / { proxy_pass http://rp1 ; }
}
server {
server_name site2.com www.site2.com ;
location / { proxy_pass http://rp2 ; }
}
Add any params you need.
Also, if you are going to host some static websites, the best way is to place them on rp0.
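For a static site served directly from rp0, a minimal sketch could look like this (the server_name and root path are placeholders I made up):
server {
    listen 80;
    server_name static.site4.com;   # hypothetical site name
    root /var/www/static-site;      # assumed path on rp0
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}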
EDIT 1
Example of a working config:
server {
listen 80;
server_name site1.com www.site1.com ;
location / { rewrite ^ https://$host$request_uri permanent;}
}
server {
listen 443 ssl;
server_name site1.com www.site1.com;
ssl_certificate /etc/letsencrypt/live/site1/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/site1/key.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_pass http://rp1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $remote_addr;
port_in_redirect off;
proxy_redirect http://rp1/ /;
}
}
Please note: if you are going to use Let's Encrypt, the best way is to set up certbot (or something else) on rp0; it will be easier to renew the certificates automatically. Also, use /etc/letsencrypt/live/site1/fullchain.pem.
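If certbot runs on rp0 in webroot mode, the port-80 server blocks can answer the ACME HTTP-01 challenges locally before redirecting everything else to HTTPS. A sketch, assuming a webroot of /var/www/letsencrypt (my assumption, not from the original config):
server {
    listen 80;
    server_name site1.com www.site1.com;

    # Serve ACME challenges directly from rp0 (webroot path is assumed)
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Everything else goes to HTTPS
    location / {
        rewrite ^ https://$host$request_uri permanent;
    }
}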
In order to use multiple SSL domains, make sure the installed nginx supports SNI:
# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
This is the nginx conf on the side of the head node:
server {
listen 80;
server_name www.codingindfw.com codingindfw.com;
location / { rewrite ^ https://$host$request_uri permanent;}
}
server{
listen 443 ssl;
server_name www.codingindfw.com codingindfw.com;
client_max_body_size 4G;
ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location / {
proxy_pass http://192.168.0.8;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $remote_addr;
port_in_redirect off;
proxy_redirect http://192.168.0.8/ /;
}
}
And this is the nginx conf file on the client running the actual Django app:
server {
listen 80 default_server;
server_name www.codingindfw.com;
client_max_body_size 4G;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/pi/coding-in-dfw;
}
location /media/ {
root /home/pi/coding-in-dfw;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/pi/coding-in-dfw/mysocket.sock;
}
}
Related
I am trying to add my SSL certificate to my Django application according to this tutorial. I can open my website using 'https://ebluedesign.online', but web browsers return something like 'The certificate cannot be verified by a trusted certificate authority.' After accepting the warning, my page is displayed correctly.
My nginx file looks like this:
upstream app_server {
server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}
server {
#listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
server_name ebluedesign.online;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/ebluedesign.online/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/ebluedesign.online/privkey.pem;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/app/logs/nginx-access.log;
error_log /home/app/logs/nginx-error.log;
location /static/ {
alias /home/app/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
server {
listen 80;
listen [::]:80;
server_name ebluedesign.online;
return 301 https://$host$request_uri;
}
My certificates are also visible here:
/etc/letsencrypt/live/ebluedesign.online/...
How can I solve this problem with the SSL certificate? I use a free SSL certificate from https://letsencrypt.org/.
EDIT:
What is odd is that if you go to http://ebluedesign.online/ it works fine, even though your file makes it seem as if port 80 isn't listened to at all. Do you happen to have two config files set up in nginx? It is possible that the one you posted is not the one being used.
I’ve followed this tutorial many times with success. You could try using it from scratch if you have the opportunity: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
Now I have one Django project on one domain. I want to serve three Django projects under one domain, separated by path, for example: www.domain.com/firstone/, www.domain.com/secondone/, etc. How do I configure Nginx to serve multiple Django projects under one domain, and how should static file serving be configured in this case?
My current Nginx config is:
server {
listen 80;
listen [::]:80;
server_name domain.com www.domain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name domain.com www.domain.com;
ssl_certificate /etc/nginx/ssl/Certificate.crt;
ssl_certificate_key /etc/nginx/ssl/Certificate.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
root /home/admin/web/project;
location /static {
alias /home/admin/web/project/static;
}
location /media {
alias /home/admin/web/project/media;
}
location /assets {
alias /home/admin/web/project/assets;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_pass http://127.0.0.1:8000/;
client_max_body_size 100M;
}
# Proxies
# location /first {
# proxy_pass http://127.0.0.1:8001/;
# }
#
# location /second {
# proxy_pass http://127.0.0.1:8002/;
# }
error_page 500 502 503 504 /media/50x.html;
}
You have to run your projects on different ports, e.g. firstone on 8000 and secondone on 8001.
Then, in the nginx conf, in place of location /, write location /firstone/ and proxy_pass it to port 8000, then add the same kind of location block for the second one as location /secondone/ and proxy_pass it to port 8001 (see the sketch below).
For static files and media, you have to make them available as /firstone/static and likewise for secondone.
Another way is to specify the same MEDIA_ROOT and STATIC_ROOT for both projects.
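A minimal sketch of that idea, assuming the two projects listen on 127.0.0.1:8000 and 127.0.0.1:8001 and that their static files live under paths I invented for illustration:
server {
    listen 443 ssl;
    server_name domain.com www.domain.com;
    # ... ssl_certificate / ssl_certificate_key etc. as in the existing config ...

    # First project under /firstone/ (trailing slash in proxy_pass strips the prefix)
    location /firstone/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:8000/;
    }
    location /firstone/static/ {
        alias /home/admin/web/firstone/static/;   # assumed path
    }

    # Second project under /secondone/
    location /secondone/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:8001/;
    }
    location /secondone/static/ {
        alias /home/admin/web/secondone/static/;  # assumed path
    }
}
Depending on how the apps build their URLs, each Django project may also need to know it lives under a subpath (e.g. via FORCE_SCRIPT_NAME) so generated links include the /firstone/ or /secondone/ prefix.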
As @prof.phython correctly states, you'll need to run a separate gunicorn process for each of the apps. This results in each app running on a separate port.
Next, create a separate upstream block under http for each of these app servers:
upstream app1 {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
#server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
server 127.0.0.1:9000 fail_timeout=0;
}
Obviously, change the name and port number of each upstream block accordingly.
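For example, a second app server might get its own block like this (name and port are placeholders):
upstream app2 {
    # second gunicorn instance on its own TCP port
    server 127.0.0.1:9001 fail_timeout=0;
}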
Then, under your http->server block define the following for each:
location @app1_proxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app1;
}
Make sure the last line there points at whatever you called the upstream block (app1), and @app1_proxy should be specific to that app as well.
Finally within the http->server block, use the following code to map a URL to the app server:
location /any/subpath {
# checks for static file, if not found proxy to app
try_files $uri @app1_proxy;
}
What prof.phython said should be correct. I'm not an expert on this but I saw a similar situation with our server as well. Hope the shared nginx.conf file helps!
server {
listen 80;
listen [::]:80;
server_name alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name web.alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 443 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/html/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 443 ssl;
server_name web.alicebot.tech;
ssl_certificate /etc/letsencrypt/live/web.alicebot.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/web.alicebot.tech/privkey.pem; # managed by Certbot
location /static/ {
autoindex on;
alias /var/www/html/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 8000 ssl;
listen [::]:8000 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
autoindex on;
alias /var/www/alice_v2/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
}
}
As you can see, we had different domain names here, which you won't be needing. So you'll need to change the server names inside the server { ... } blocks.
I currently have a working Django + Gunicorn + Nginx setup for https://www.example.com and http://sub.example.com. Note the main domain has ssl whereas the subdomain does not.
This is working correctly with the following two nginx configs. First is www.example.com:
upstream example_app_server {
server unix:/path/to/example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name www.example.com;
return 301 https://www.example.com$request_uri;
}
server {
listen 443 ssl;
server_name www.example.com;
if ($host = 'example.com') {
return 301 https://www.example.com$request_uri;
}
ssl_certificate /etc/nginx/example/cert_chain.crt;
ssl_certificate_key /etc/nginx/example/example.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ciphers removed to save space in post';
ssl_prefer_server_ciphers on;
client_max_body_size 4G;
access_log /var/log/nginx/www.example.com.access.log;
error_log /var/log/nginx/www.example.com.error.log info;
location /static {
autoindex on;
alias /path/to/example/static;
}
location /media {
autoindex on;
alias /path/to/example/media;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://example_app_server;
break;
}
}
}
Next is sub.example.com:
upstream sub_example_app_server {
server unix:/path/to/sub_example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name sub.example.com;
client_max_body_size 4G;
access_log /var/log/nginx/sub.example.com.access.log;
error_log /var/log/nginx/sub.example.com.error.log info;
location /static {
autoindex on;
alias /path/to/sub_example/static;
}
location /media {
autoindex on;
alias /path/to/sub_example/media;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://sub_example_app_server;
break;
}
}
}
As mentioned, this is all working. What I am trying to do now is to use ssl on the subdomain as well. I have a second ssl certificate for this purpose which has been activated with the domain register for this subdomain.
I have updated the original nginx config from above for sub.example.com to have exactly the same format as example.com, but pointing to the relevant ssl cert/key etc:
upstream sub_example_app_server {
server unix:/path/to/sub_example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name sub.example.com;
return 301 https://sub.example.com$request_uri;
}
server {
listen 443 ssl;
server_name sub.example.com;
if ($host = 'sub.example.com') {
return 301 https://sub.example.com$request_uri;
}
ssl_certificate /etc/nginx/sub_example/cert_chain.crt;
ssl_certificate_key /etc/nginx/sub_example/example.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ciphers removed to save space in post';
ssl_prefer_server_ciphers on;
client_max_body_size 4G;
access_log /var/log/nginx/sub.example.com.access.log;
error_log /var/log/nginx/sub.example.com.error.log info;
location /static {
autoindex on;
alias /path/to/sub_example/static;
}
location /media {
autoindex on;
alias /path/to/sub_example/media;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://sub_example_app_server;
break;
}
}
}
I haven't changed anything with my domain registrar / DNS because everything was already working correctly before adding SSL for the subdomain. I'm not sure if there is something I need to change.
When browsing to http://sub.example.com I am redirected to https://sub.example.com, so that part appears to be working. However the site does not load and the browser error is: This page isn't working. sub.example.com redirected you too many times. ERR_TOO_MANY_REDIRECTS
https://www.example.com is still working.
I don't have any errors in my nginx or gunicorn logs. I can only guess I have configured something in the sub.example.com nginx config incorrectly.
The section in the ssl server configuration:
if ($host = 'sub.example.com') { return 301 https://sub.example.com$request_uri; }
is the problem. That condition is true for every request to https://sub.example.com, so each request is redirected back to itself in a loop. Removing it should eliminate the too-many-redirects error.
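A sketch of the corrected HTTPS block, identical to the one posted but with the self-redirect removed (everything else stays as is):
server {
    listen 443 ssl;
    server_name sub.example.com;

    # No "if ($host = ...)" redirect here: the port-80 server block
    # already sends plain-HTTP traffic to HTTPS.

    ssl_certificate /etc/nginx/sub_example/cert_chain.crt;
    ssl_certificate_key /etc/nginx/sub_example/example.key;
    # ... the rest of the existing configuration stays the same ...
}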
I am trying to load balance frontend (public) and backend (private) servers in AWS. I got the nginx file working with a single server IP address, but the load balancer DNS name doesn't seem to work. Below is my nginx.conf for the frontend server. In the listener section of the load balancer, the load balancer port is 443 and the instance port is 9000. Any suggestions are greatly appreciated.
WORKING...
server {
listen 80;
rewrite ^(.*) https://example.com$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/ssl/chain.crt;
ssl_certificate_key /etc/ssl/key.crt;
listen localhost:443;
server_tokens off;
client_max_body_size 300M;
location / {
root /var/www/html;
index index.html index.htm;
}
location /api/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_pass https://<BackendIP>:9000/api/;
proxy_set_header Host $http_host;
}
}
}
NOT WORKING...
server {
listen 80;
rewrite ^(.*) https://example.com$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/ssl/chain.crt;
ssl_certificate_key /etc/ssl/key.crt;
listen localhost:443;
server_tokens off;
client_max_body_size 300M;
location / {
root /var/www/html;
index index.html index.htm;
}
location /api/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_pass https://<LOADBALANCER-DNS>:9000/api/;
proxy_set_header Host $http_host;
}
}
}
https://mydomainName.com --> AWS-ELB [ingress 443 --> egress 80] --> OmnibusGitlab
Now Omnibus redirects to the following and times out
http://mydomainName.com/users/sign_in
Any way to debug this issue?
The full path has to be HTTPS: if you go in through a reverse proxy that accepts HTTPS, then you have to come back over HTTPS as well.
Separate out the Nginx configuration, because the Omnibus bundled nginx has constraints that block the flexibility we have with a standard nginx.
Do the following to make this change:
Edit /etc/gitlab/gitlab.rb and add the following (keep only the web_server line that matches your distribution):
nginx['enable'] = false
web_server['external_users'] = ['www-data'] # for the Ubuntu nginx user
web_server['external_users'] = ['nginx']    # for CentOS 6-7
Add the following configuration to serve GitLab via a plain nginx:
/etc/nginx/sites-available/server
server {
listen *:443 default_server ssl;
ssl_certificate /etc/ssl/certs/myserver.crt;
ssl_certificate_key /etc/ssl/private/myserver.key;
server_name myhostname.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
client_max_body_size 50m; #or 5000
access_log /var/log/gitlab/nginx_access.log;
error_log /var/log/gitlab/nginx_error.log;
location / {
try_files $uri $uri/index.html $uri.html @gitlab;
}
location @gitlab {
proxy_read_timeout 300; # Some requests take more than 30 seconds.
proxy_connect_timeout 300; # Some requests take more than 30 seconds.
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://gitlab;
}
error_page 502 /502.html;
}
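Note that proxy_pass http://gitlab; refers to an upstream block that is not shown above. For an Omnibus install it typically points at the gitlab-workhorse socket; a sketch, assuming the default socket path:
upstream gitlab {
    # gitlab-workhorse UNIX socket of the Omnibus install (path assumed)
    server unix:/var/opt/gitlab/gitlab-workhorse/socket fail_timeout=0;
}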
gitlab-redirect
/etc/nginx/sites-available/gitlab-redirect
server {
listen 80;
server_name myhostname.com;
return 301 https://myhostname.com;
}