I have an AWS EMR cluster and I have been trying to configure a path (/hbase) to access HBase on EMR through NGINX. To achieve this I created the configuration file /etc/nginx/conf.d/hbase.conf:
server {
charset utf-8;
listen 80;
# HBase works when location /hbase/ is replaced with location /.
# It does not work like below.
location /hbase/
{
proxy_pass http://localhost:16010;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Here is my /etc/nginx/nginx.conf on EMR:
#user nobody;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
log_format main '$remote_addr - [$time_local] "$request" '
'$status "$http_referer" '
'"$http_x_forwarded_for"';
sendfile on;
keepalive_timeout 65;
# HTTPS server
server {
listen 18888 ssl;
ssl_certificate /etc/ssl/certs/nginx.crt;
ssl_certificate_key /etc/ssl/certs/nginx.key;
server_name localhost;
location /webhdfs/v1/user {
proxy_pass http://localhost:14000;
proxy_read_timeout 1800;
proxy_connect_timeout 1800;
}
location /sessions {
proxy_pass http://localhost:8998;
proxy_read_timeout 1800;
proxy_connect_timeout 1800;
}
location /batches {
proxy_pass http://localhost:8998;
proxy_read_timeout 1800;
proxy_connect_timeout 1800;
}
location /proxy {
proxy_pass http://ip-10-100-0-4.ec2.internal:20888;
proxy_read_timeout 1800;
proxy_connect_timeout 1800;
}
} #end server tag
} #end http tag
The problem is that when I hit http://tempmyserverurl/hbase it gives me a 404 Not Found error. But when I change location /hbase to / in my hbase.conf, it redirects to master_status and the HBase UI is accessible.
I simply want NGINX to serve the HBase UI under the /hbase location. I have also tried using a different server with a proxy_pass pointing at this EMR server, but that did not work either.
Can anyone point me in the right direction? Help me figure out what I am missing here.
Thanks in advance.
Given
location /hbase
{
proxy_pass http://localhost:16010;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
A request to http://example.com/hbase/some/path would be proxied to http://localhost:16010 as http://localhost:16010/hbase/some/path, which most likely does not exist because hbase is still in the URL, hence the 404 error.
From https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter.
location /hbase
{
proxy_pass http://localhost:16010/;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
With this example, a request to http://example.com/hbase/some/path would be proxied to http://localhost:16010 with the hbase prefix stripped from the URL, which most likely fixes the issue. (To get an exact /some/path mapping rather than //some/path, declare the location with a trailing slash as well, i.e. location /hbase/; see the trailing-slash discussion in the related answer below.)
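For completeness, a minimal hbase.conf sketch along those lines (adapted from the answer above; the redirect and the /hbase/ prefix handling are assumptions, not a verified EMR setup):
server {
    charset utf-8;
    listen 80;

    # redirect the bare /hbase to /hbase/ so relative links keep the prefix
    location = /hbase {
        return 301 /hbase/;
    }

    # strip the /hbase/ prefix before passing requests to the HBase master UI
    location /hbase/ {
        proxy_pass http://localhost:16010/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Note that if the HBase UI requests its assets with absolute paths (e.g. /static/...), those requests bypass the /hbase/ location entirely; that is the same class of problem discussed in the related answer below.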
Related
I am setting up etherpad-lite in a subdirectory at this location.
Unfortunately the files in 'static' aren't being loaded:
Clearly something is going on in my nginx, which (partially) looks like this:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
listen [::]:80;
server_name _;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name www.whitewaterwriters.com;
ssl_certificate /etc/letsencrypt/live/www.whitewaterwriters.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.whitewaterwriters.com/privkey.pem;
return 301 https://whitewaterwriters.com$request_uri;
}
server {
listen 443 ssl;
server_name whitewaterwriters.com;
ssl_certificate /etc/letsencrypt/live/whitewaterwriters.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/whitewaterwriters.com/privkey.pem;
root /usr/share/nginx/html;
index index.html index.php;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~/watchtower/.*/live/pdfs/ {
autoindex on;
}
location /watchtower {
root /usr/share/nginx/html/;
}
location /etherpad {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_read_timeout 300;
proxy_pass http://localhost:9001/;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location / {
root /usr/share/nginx/html/whitewaterwriters-site/_site/;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers PROFILE=SYSTEM;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
My question is: how do I configure nginx so that the missing files appear?
There are some other questions on this topic, both in the GitHub issues and on SE, but they are, in general, solved by moving from etherpad to etherpad-lite (which I already use), or are unanswered and approaching a decade old...
Short answer: if you add a trailing slash to your prefix location, everything should work as expected.
map $http_upgrade $connection_upgrade {
'' close;
default upgrade;
}
server {
...
location /etherpad/ {
proxy_buffering off; # recommended by etherpad nginx hosting examples
proxy_set_header Host $host;
# optional headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; # EP logs to show the actual remote IP
proxy_set_header X-Forwarded-Proto $scheme; # for EP to set secure cookie flag when https is used
# recommended with keepalive connections
proxy_http_version 1.1;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# upstream
proxy_pass http://127.0.0.1:9001/;
}
}
If you want the /etherpad URI (without the trailing slash) to work too, and the above configuration does not already give you an HTTP 301 redirect from /etherpad to /etherpad/, add the following location:
location = /etherpad {
return 301 /etherpad/;
}
For me it wasn't necessary, but it can depend on your server environment.
To preserve the query string, if any, you can use return 301 /etherpad/$is_args$args; or rewrite ^ /etherpad/ permanent; instead.
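For example, the query-string-preserving variant of the same redirect (this is just the directive from the previous sentence placed in context) would be:
location = /etherpad {
    return 301 /etherpad/$is_args$args;
}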
Long answer (and what happens under the hood).
There are many questions on SO about "how can I host a webapp under a URI prefix". Here is one of my answers, and here is a ServerFault thread on a similar topic.
The only right way to do it is to make your proxied app request its assets via relative URIs only (e.g. assets/script.js instead of /assets/script.js) or via the right URI prefix (/etherpad/assets/script.js).
Luckily, etherpad requests its assets using relative paths (e.g. <script src="static/js/index.js"></script>), which makes it suitable for hosting under any URI prefix. The problem is that when your origin URI is /etherpad, the browser considers the current remote web server directory to be the root one and requests the above script as scheme://domain/static/js/index.js. That request won't even be caught by your location /etherpad { ... } (since it doesn't start with /etherpad). On the other hand, when your origin URI is /etherpad/, the browser considers the current remote web server directory to be /etherpad/ and correctly requests the above script as scheme://domain/etherpad/static/js/index.js.
Now let's see what happens with a proxied request /etherpad/<path> using your original configuration. Since you are using a trailing slash after the upstream address (http://localhost:9001 + /), nginx cuts the location /etherpad prefix from the request URI and prepends the remainder with that slash (or with any other URI given in the proxy_pass directive after the upstream name), resulting in //<path>. You can read the "A little confused about trailing slash behavior in nginx" or "nginx and trailing slash with proxy pass" SO threads for more details. Anyway, that URI won't be served by etherpad, giving you a Cannot GET //<path> error.
Changing location /etherpad { ... } to location /etherpad/ { ... } will make both of the aforementioned problems go away.
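To make the mapping concrete, here is a before/after sketch with an illustrative request path:
# original: prefix without a trailing slash, upstream URI "/"
location /etherpad {
    proxy_pass http://127.0.0.1:9001/;    # /etherpad/p/test -> //p/test ("Cannot GET //p/test")
}

# fixed: trailing slash on the location as well
location /etherpad/ {
    proxy_pass http://127.0.0.1:9001/;    # /etherpad/p/test -> /p/test
}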
A few words about the etherpad wiki examples, especially this one.
Both
location /etherpad/ {
proxy_pass http://127.0.0.1/;
...
}
and
location /etherpad/ {
rewrite ^/etherpad(/.*) $1 break;
proxy_pass http://127.0.0.1;
...
}
do the same thing: they strip the /etherpad prefix from the request URI before passing it to the upstream. However, the first one does it in a much more efficient way; it is good practice to avoid regular expressions whenever possible. Using
location = /etherpad {
return 301 /etherpad/;
}
is also more efficient than
rewrite ^/etherpad$ /etherpad/ permanent;
The second and third location blocks from the above wiki example completely duplicate the functionality of the first one. Moreover, that example breaks WebSocket support (whoever wrote it could at least have added that support to the location /pad/socket.io { ... } block).
And never do what is used in this example:
location ~ ^/$ { ... }
Use an exact match location instead:
location = / { ... }
Here is one more configuration I've tested in order to check whether I could serve the etherpad static assets directly via nginx. It seems to work, although I haven't tested it much. It uses the uncompressed js/css asset versions (which should not impact performance when you are using gzip or some other compression). It is also a good example of a configuration where you can't avoid using the rewrite directive to strip a prefix from the request URI.
location /etherpad/static/ {
# trying to serve assets directly via nginx
# if the asset is not found, pass the request to the nodejs upstream
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
Update
As was suggested on GitHub, URIs starting with the /etherpad/static/plugins/ prefix should always be passed to the nodejs upstream, since no corresponding assets exist under the /path/to/etherpad/src/static/ directory. Although there is already a fallback to the nodejs upstream (try_files $uri @etherpad), to eliminate the extra stat system call produced by the try_files directive we can modify the above configuration to this one:
location ~ ^/etherpad/static/(?!plugins/) {
# trying to serve assets directly via nginx
# if the asset is not found, pass the request to the nodejs upstream
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
(using negative lookahead regex, better readability) or to this one:
location /etherpad/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/;
}
location /etherpad/static/ {
rewrite ^/etherpad(/.*) $1 break;
root /full/path/to/etherpad-lite/src;
try_files $uri @etherpad;
}
location /etherpad/static/plugins/ {
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001/static/plugins/;
}
location @etherpad {
proxy_redirect / /etherpad/;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:9001;
}
(only prefix locations, better performance). The repetitive part
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
along with the other optional headers (X-Real-IP, X-Forwarded-For, X-Forwarded-Proto) mentioned at the very beginning of the answer, can be moved to a separate file, e.g. etherpad-proxy.conf, and included into the main nginx config with the include directive.
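For example (the snippet path is illustrative):
# /etc/nginx/snippets/etherpad-proxy.conf
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_http_version 1.1;
and then, in each proxying location:
location /etherpad/ {
    include /etc/nginx/snippets/etherpad-proxy.conf;
    proxy_pass http://127.0.0.1:9001/;
}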
You can try to point nginx at the static content in the correct folder with:
location /static {
root /usr/share/nginx/html/whitewaterwriters-site/_site/static;
}
# or something like:
location /etherpad/static {
root /usr/share/nginx/html/whitewaterwriters-site/_site/;
}
since this is working: https://whitewaterwriters.com/etherpad/static/js/vendors/html10n.js?v=869d568c
I'm trying to set up a reverse WSS proxy with nginx to an Amazon API Gateway WebSocket API, but I have had no luck with the nginx configuration, so I would be glad if you could help me sort this out.
Let me give you some details:
I have an EC2 instance running nginx that has attached to it an elastic ip address.
I also have dns records to point traffic from connect.example.com to that ip address.
I have set up nginx as a reverse proxy to pass the traffic from connect.example.com to app.example.com on port 443 with SSL (I have generated the relevant certificates).
app.example.com hosts a WebSocket API on Amazon's API Gateway service.
I can see from nginx's access logs that my requests reach the EC2 instance, but I always get error responses (400, 403, 500, 502, etc.) no matter how I change the nginx config file.
I don't understand where the problem lies, even though I have searched around and tried various configurations.
I'm attaching my config files below for reference:
nginx.conf
# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";
events {
worker_connections 1024;
}
http {
#include mime.types;
#default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log "/opt/bitnami/nginx/logs/access.log";
#add_header X-Frame-Options SAMEORIGIN;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
#connection_pool_size 112;
#sendfile on;
#tcp_nopush on;
#tcp_nodelay on;
#gzip on;
#gzip_http_version 1.0;
#gzip_comp_level 2;
#gzip_proxied any;
#gzip_types text/plain text/css application/javascript text/xml application/xml+rss;
#keepalive_timeout 65;
#ssl_protocols TLSv1.2 TLSv1.3;
#ssl_ciphers HIGH:!aNULL:!MD5;
client_max_body_size 80M;
#server_tokens on;
#include "/opt/bitnami/nginx/conf/server_blocks/*.conf";
# HTTP Server
#server {
# Port to listen on, can also be set in IP:PORT format
# listen 80;
# include "/opt/bitnami/nginx/conf/bitnami/*.conf";
# include "/opt/bitnami/nginx/conf/ssl/ssl-redirect.conf";
# location /status {
# stub_status on;
# access_log off;
# allow 127.0.0.1;
# deny all;
# }
# }
include "/opt/bitnami/nginx/conf/ssl/ssl.conf";
}
ssl.conf
resolver app.example.com;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl;
#listen [::]:443 ssl;
server_name connect.example.com;
#ssl on;
ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
root /usr/share/nginx/html;
underscores_in_headers on;
location / {
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass https://ws-backend$uri$is_args$args;
proxy_read_timeout 9000;
proxy_pass_request_headers on;
#Websocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header Sec-WebSocket-Accept $http_sec_websocket_accept;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
upstream ws-backend {
server app.example.com:443;
}
When I connect directly to app.example.com I have no problem and the response is the following:
expected response
But when I connect through connect.example.com I get the following response:
actual response
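Not a verified fix for this setup, but when nginx proxies to an HTTPS upstream such as API Gateway, two things worth checking are the TLS SNI name sent to the upstream and the Host header the upstream expects. A sketch of the relevant directives, reusing the names from the config above:
location / {
    # WebSocket support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    # present the upstream's own hostname for SNI and Host, not the proxy's
    proxy_set_header Host app.example.com;
    proxy_ssl_server_name on;
    proxy_ssl_name app.example.com;

    proxy_pass https://ws-backend;
}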
Now I have one Django project on one domain. I want to serve three Django projects under one domain, separated by path, for example www.domain.com/firstone/, www.domain.com/secondone/, etc. How do I configure nginx to serve multiple Django projects under one domain? And how do I configure static file serving in this case?
My current nginx config is:
server {
listen 80;
listen [::]:80;
server_name domain.com www.domain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name domain.com www.domain.com;
ssl_certificate /etc/nginx/ssl/Certificate.crt;
ssl_certificate_key /etc/nginx/ssl/Certificate.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
root /home/admin/web/project;
location /static {
alias /home/admin/web/project/static;
}
location /media {
alias /home/admin/web/project/media;
}
location /assets {
alias /home/admin/web/project/assets;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_pass http://127.0.0.1:8000/;
client_max_body_size 100M;
}
# Proxies
# location /first {
# proxy_pass http://127.0.0.1:8001/;
# }
#
# location /second {
# proxy_pass http://127.0.0.1:8002/;
# }
error_page 500 502 503 504 /media/50x.html;
}
You have to run your projects on different ports, e.g. firstone on 8000 and secondone on 8001.
Then in the nginx conf, instead of a single location /, write a location /firstone/ block that proxies to port 8000, and a matching location /secondone/ block that proxies to port 8001, as sketched below.
For static files and media, you have to make them available as /firstone/static, and the same for secondone.
The other way is to use the same MEDIA_ROOT and STATIC_ROOT for both projects.
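A hedged sketch of that layout (ports and filesystem paths are illustrative; each Django project also needs to know it is served under a prefix, e.g. via FORCE_SCRIPT_NAME):
location /firstone/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:8000/;
}

location /firstone/static/ {
    alias /home/admin/web/firstone/static/;
}

location /secondone/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:8001/;
}

location /secondone/static/ {
    alias /home/admin/web/secondone/static/;
}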
As @prof.phython correctly states, you'll need to run a separate gunicorn process for each of the apps. This results in each app running on a separate port.
Next create a separate upstream block, under http for each of these app servers:
upstream app1 {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
#server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
server 127.0.0.1:9000 fail_timeout=0;
}
Obviously change the name and port number for each upstream block accordingly.
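For example, a second app served by its own gunicorn process might get an upstream like this (name and port are illustrative):
upstream app2 {
    server 127.0.0.1:9001 fail_timeout=0;
}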
Then, under your http->server block define the following for each:
location @app1_proxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app1;
}
Make sure the last line there points at what you called the upstream block (app1), and that @app1_proxy is specific to that app also.
Finally within the http->server block, use the following code to map a URL to the app server:
location /any/subpath {
# checks for static file, if not found proxy to app
try_files $uri @app1_proxy;
}
What prof.phython said should be correct. I'm not an expert on this but I saw a similar situation with our server as well. Hope the shared nginx.conf file helps!
server {
listen 80;
listen [::]:80;
server_name alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name web.alicebot.tech;
return 301 https://web.alicebot.tech$request_uri;
}
server {
listen 443 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/html/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 443 ssl;
server_name web.alicebot.tech;
ssl_certificate /etc/letsencrypt/live/web.alicebot.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/web.alicebot.tech/privkey.pem; # managed by Certbot
location /static/ {
autoindex on;
alias /var/www/html/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
server {
listen 8000 ssl;
listen [::]:8000 ssl;
server_name alicebot.tech;
ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
ssl_certificate_key /etc/ssl/alicebot.key;
location /static/ {
autoindex on;
alias /var/www/alice_v2/static/;
expires 1M;
access_log off;
add_header Cache-Control "public";
proxy_ignore_headers "Set-Cookie";
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
}
}
As you can see, we had different domain names here, which you won't be needing, so you'll need to change the server names inside the server { ... } blocks.
I need to edit the nginx configuration of an AWS Elastic Beanstalk environment so that it accepts payloads (POST request bodies) of up to 50MB.
The default nginx payload limit is 1MB.
I researched many questions and answers, and found this:
https://stackoverflow.com/a/40745569
But I'm not sure how to access the nginx configuration file behind the Elastic Beanstalk environment.
I also tried with this AWS doc:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html
But I still couldn't get responses for POST requests with payloads larger than 1MB. (The server is a Node.js server.)
So, please let me know how to change the nginx max payload size from the default 1MB to 50MB. Please note that nginx is running in an AWS Elastic Beanstalk environment.
Appendix 1: Here's the .ebextensions/nginx/nginx.conf file code I used:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 33282;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
include conf.d/*.conf;
map $http_upgrade $connection_upgrade {
default "upgrade";
}
server {
listen 80 default_server;
root /var/app/current/public;
location / {
}
location /api {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /var/log/nginx/access.log main;
client_header_timeout 60;
client_body_timeout 60;
keepalive_timeout 60;
gzip off;
gzip_comp_level 4;
# Include the Elastic Beanstalk generated locations
include conf.d/elasticbeanstalk/01_static.conf;
include conf.d/elasticbeanstalk/healthd.conf;
client_max_body_size 100M; #100mb
}
client_max_body_size 100M; #100mb
}
Appendix 2. I already added these 2 lines to the Node.js Express app:
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: "50mb", extended: true, parameterLimit:50000}));
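For reference, the linked AWS docs also describe extending the default proxy configuration with a drop-in .conf file instead of replacing the whole nginx.conf, which is usually enough for this single setting. A sketch, with the drop-in path depending on the platform generation (both paths are assumptions to check against the doc for your platform version):
# Amazon Linux 2 platforms:                     .platform/nginx/conf.d/client_max_body_size.conf
# Older platforms (layout from the linked doc): .ebextensions/nginx/conf.d/client_max_body_size.conf
client_max_body_size 50M;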
https://mydomainName.com --> AWS ELB [ingress 443 --> egress 80] --> Omnibus GitLab
Now Omnibus redirects to the following URL and times out:
http://mydomainName.com/users/sign_in
Is there any way to debug this issue?
The full path has to be HTTPS, because if you go in through a reverse proxy that accepts HTTPS, you have to come back out as HTTPS as well.
Separate the nginx configuration, because the Omnibus bundle has constraints that block the flexibility you have with a standard nginx.
Do the following to make this change:
edit /etc/gitlab/gitlab.rb
and add
nginx['enable'] = false
web_server['external_users'] = ['www-data'] #for ubuntu nginx user
web_server['external_users'] = ['nginx'] # for centos 6-7
Then run sudo gitlab-ctl reconfigure to apply the change. Next, add the following configuration to serve GitLab via the system nginx:
/etc/nginx/sites-available/server
server {
listen *:443 default_server ssl;
ssl_certificate /etc/ssl/certs/myserver.crt;
ssl_certificate_key /etc/ssl/private/myserver.key;
server_name myhostname.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
client_max_body_size 50m; #or 5000
access_log /var/log/gitlab/nginx_access.log;
error_log /var/log/gitlab/nginx_error.log;
location / {
try_files $uri $uri/index.html $uri.html @gitlab;
}
location @gitlab {
proxy_read_timeout 300; # Some requests take more than 30 seconds.
proxy_connect_timeout 300; # Some requests take more than 30 seconds.
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://gitlab; # requires a matching "upstream gitlab { ... }" block (not shown here)
}
error_page 502 /502.html;
}
gitlab-redirect
/etc/nginx/sites-available/gitlab-redirect
server {
listen 80;
server_name myhostname.com;
return 301 https://myhostname.com;
}