I want to enable HTTP/2 with the official nginx image, but no matter how long I try, the protocol stays on HTTP/1.1.
Dockerfile for nginx:
FROM nginx
COPY ./docker/nginx/etc/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/etc/nginx/conf.d/default.conf.https /etc/nginx/conf.d/default.conf
/etc/nginx/nginx.conf is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
# run ulimit -n to check
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
# Buffer size for post submission
client_body_buffer_size 10k;
client_max_body_size 8m;
# Buffer size for header
client_header_buffer_size 1k;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf is:
# Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name 0.0.0.0;
ssl_certificate /etc/nginx/certs/server.crt;
ssl_certificate_key /etc/nginx/certs/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
expires $expires;
location = /favicon.ico {
log_not_found off;
}
location /static/ {
alias /static_files/;
}
location / {
access_log /var/log/nginx/wsgi.access.log;
error_log /var/log/nginx/wsgi.error_log warn;
proxy_pass http://app_wsgi:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /ws/ {
try_files $uri @proxy_to_ws;
}
location @proxy_to_ws {
access_log /var/log/nginx/asgi.access.log;
error_log /var/log/nginx/asgi.error_log warn;
proxy_pass http://app_asgi:8001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
Docker-compose file for nginx part:
nginx:
restart: always
build:
context: .
dockerfile: docker/nginx/Dockerfile.https
ports:
- 80:80
- 443:443
volumes:
- ./app/static:/static_files
- ./ssl/certs:/etc/nginx/certs
depends_on:
- app_wsgi
- app_asgi
Going inside the nginx container and running the nginx -V command:
root@0a15f404bf1d:/# nginx -V
nginx version: nginx/1.17.9
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.17.9/debian/debuild-base/nginx-1.17.9=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
Is there anything wrong with my settings?
I checked in the Chrome dev tools and saw that all requests are still sent over the HTTP/1.1 protocol.
My architecture is:
Nginx <-> gunicorn <-> Django application
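For reference, a protocol check from outside the browser can rule out dev-tools confusion. Assuming port 443 of the container is published on the machine you run this from (as in the compose file above), curl can report what ALPN negotiated; -k is only there because the certificate is self-signed:
curl -vk --http2 https://localhost/ -o /dev/null 2>&1 | grep -iE "ALPN|HTTP/"
# A working setup prints something like "ALPN: server accepted h2" followed by an "HTTP/2 200" line;
# if this also shows http/1.1, the problem is on the nginx side rather than in Chrome.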
I had a similar issue: I was implementing a proxy pass and calling the nginx server, and I kept receiving status 426 (Upgrade Required) until I set up the following configuration:
upstream mservername {
server my.example.domain:443;
keepalive 20;
}
server {
listen 8443 ssl http2;
server_name my.example.domain;
access_log /opt/bitnami/nginx/logs/access_my_example_domain.log;
error_log /opt/bitnami/nginx/logs/error_my_example_domain.log;
ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
ssl_protocols TLSv1.3 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location /resource {
http2_push_preload on;
proxy_ssl_session_reuse off;
proxy_ssl_server_name on;
proxy_ssl_name my.example.domain;
proxy_ssl_trusted_certificate /opt/bitnami/nginx/conf/bitnami/certs/my_example_domain/my_domain_cert.crt;
proxy_set_header content-type "application/xml";
proxy_set_header accept "application/xml";
proxy_hide_header X-Frame-Options;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass https://my.example.domain/resource;
}
}
Hope this helps. In my case it solved the issue.
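For comparison, the only parts strictly required for HTTP/2 termination are the http2 flag on an ssl listener plus a certificate; everything else is independent of the protocol. A stripped-down sketch (server_name and the certificate paths and upstream are placeholders carried over from the question):
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    location / {
        # HTTP/2 is only spoken between the browser and nginx;
        # the proxied connection to the backend stays on HTTP/1.x.
        proxy_pass http://app_wsgi:8000;
    }
}
Note that nginx does not speak HTTP/2 to proxied upstreams, so seeing HTTP/1.1 between nginx and gunicorn is expected and does not mean the client side failed to negotiate h2.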
Related
I am trying to set up websockets on my Django application using Daphne and Nginx. On my local setup everything works as expected, but when I upload it to the server the websockets do not respond. This is the nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
client_max_body_size 10M;
}
and this is my sites-available file which is accessed by Nginx:
server {
server_name 139.59.9.118 newadmin.aysle.tech;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/django/AysleServer/src;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
location /wss/ {
proxy_pass http://0.0.0.1:8001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/newadmin.aysle.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/newadmin.aysle.tech/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = newadmin.aysle.tech) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name 139.59.9.118 newadmin.aysle.tech;
listen 80;
return 404; # managed by Certbot
}
and this is my daphne.service file:
[Unit]
Description=WebSocket Daphne Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/home/django/AysleServer/src
#ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -b 0.0.0.0 -p 8001 adminPanel.asgi:application
ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -e ssl:8001:privateKey=/etc/letsencrypt/live/newadmin.aysle.tech/privkey.>
Restart=on-failure
[Install]
WantedBy=multi-user.target
I tried sending a websocket request like this:
ws://newadmin.aysle.tech/ws/test/
ws://newadmin.aysle.tech:8001/ws/test/
But I do not get any response back. I tried checking the log files for errors but there are none. My guess is that Nginx is not forwarding the request to Daphne, probably a configuration issue, but I do not know what to change. Please help me with this; thanks for your time in advance. Please note that I am also using Gunicorn to handle the HTTP requests and they work as expected.
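For what it's worth, the handshake can also be tested without a browser. This is a generic curl probe (the path matches the /wss/ location in the nginx config above, and the Sec-WebSocket-Key is just an arbitrary base64 value); --http1.1 forces the protocol version the WebSocket upgrade needs:
curl -i -N --http1.1 \
    -H "Connection: Upgrade" -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
    https://newadmin.aysle.tech/wss/test/
# "HTTP/1.1 101 Switching Protocols" means nginx forwarded the upgrade to Daphne;
# anything else (404, 502, a timeout) points at the nginx location or the Daphne side.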
Since you are using ssl in your nginx config, you also have to use wss instead of ws as the scheme.
Also, your location is /wss/, so your URI should use this location too.
Try this for a request from the client:
wss://newadmin.aysle.tech/wss/test/
If this doesn't work, you could also check whether your host even allows WebSockets, or whether you have to activate them. For example, I used a DjangoEurope server and had to activate WebSockets for the URI.
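If changing the client path from /ws/ to /wss/ is not an option, the match can also be made on the nginx side instead. This is only a sketch that renames the location while keeping the proxy settings from the question (note that 0.0.0.1 in the question looks like a typo for 127.0.0.1, so it is corrected here):
location /ws/ {
    proxy_pass http://127.0.0.1:8001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Either way, the scheme used by the browser has to be wss:// because the site is served over HTTPS.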
I'm trying to set up a reverse WSS proxy with nginx to an Amazon API Gateway WebSocket API, but I have had no luck with the nginx configuration, so I would be glad if you helped me sort this out.
Let me give you some details:
I have an EC2 instance running nginx that has attached to it an elastic ip address.
I also have DNS records to point traffic from connect.example.com to that IP address.
I have set up nginx as a reverse proxy to proxy the traffic from connect.example.com to app.example.com on port 443 with SSL (I have generated the relevant certificates).
On app.example.com lies a WebSocket API on Amazon's API Gateway service.
I can see from nginx's access logs that my requests reach the EC2 instance, but I always get error responses no matter how I change the nginx config file (400, 403, 500, 502, etc.).
I don't seem to understand where the problem lies, even though I have searched around and tried various configurations.
I'm attaching my config files below for reference:
nginx.conf
# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";
events {
worker_connections 1024;
}
http {
#include mime.types;
#default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log "/opt/bitnami/nginx/logs/access.log";
#add_header X-Frame-Options SAMEORIGIN;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
#connection_pool_size 112;
#sendfile on;
#tcp_nopush on;
#tcp_nodelay on;
#gzip on;
#gzip_http_version 1.0;
#gzip_comp_level 2;
#gzip_proxied any;
#gzip_types text/plain text/css application/javascript text/xml application/xml+rss;
#keepalive_timeout 65;
#ssl_protocols TLSv1.2 TLSv1.3;
#ssl_ciphers HIGH:!aNULL:!MD5;
client_max_body_size 80M;
#server_tokens on;
#include "/opt/bitnami/nginx/conf/server_blocks/*.conf";
# HTTP Server
#server {
# Port to listen on, can also be set in IP:PORT format
# listen 80;
# include "/opt/bitnami/nginx/conf/bitnami/*.conf";
# include "/opt/bitnami/nginx/conf/ssl/ssl-redirect.conf";
# location /status {
# stub_status on;
# access_log off;
# allow 127.0.0.1;
# deny all;
# }
# }
include "/opt/bitnami/nginx/conf/ssl/ssl.conf";
}
ssl.conf
resolver app.example.com;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl;
#listen [::]:443 ssl;
server_name connect.example.com;
#ssl on;
ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
root /usr/share/nginx/html;
underscores_in_headers on;
location / {
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass https://ws-backend$uri$is_args$args;
proxy_read_timeout 9000;
proxy_pass_request_headers on;
#Websocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header Sec-WebSocket-Accept $http_sec_websocket_accept;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
upstream ws-backend {
server app.example.com:443;
}
When I connect directly to app.example.com I have no problem and the response is the following:
expected response
But when I connect through connect.example.com I get the following response:
actual response
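One detail that frequently causes 403/400 responses from managed endpoints such as API Gateway is the upstream's expectation about SNI and the Host header: it only answers for its own hostname, while the config above forwards $http_host (i.e. connect.example.com). Purely as a sketch reusing the hostnames from the question, a variant of the proxy settings worth trying is:
location / {
    proxy_pass https://ws-backend$uri$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # present the upstream's own name for both TLS SNI and the Host header
    proxy_set_header Host app.example.com;
    proxy_ssl_server_name on;
    proxy_ssl_name app.example.com;
}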
After migrating my Django web app from HTTP to HTTPS, when I type, for example,
r = requests.get('http://xxxx.com')
it gives me this error:
requests.exceptions.SSLError: HTTPSConnectionPool(host=my_host_name,port:443) Max retries exceeded with url:http://xxxx.com (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)'),))
But I did set up the nginx config for the redirection: for example, when I put any HTTP address in my browser it redirects me to the correct HTTPS address.
I would like to do the same thing for the API request.
I don't want to change the request addresses in my backend code; I just want to redirect the HTTP requests to HTTPS, if that is possible.
My nginx config:
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
proxy_cache_path /path/cache keys_zone=cache:10m levels=1:2 inactive=600s
max_size=100m;
default_type application/octet-stream;
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
access_log /path/access.log;
error_log /Path/error.log error;
gzip on;
gzip_disable "msie6";
gzip_types text/xml application/xml application/xml+rss text/javascript;
upstream app_servers {
server 127.0.0.1:8080;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl on;
ssl_certificate /PATH/certificate.crt;
ssl_certificate_key /PATH/certificate.key;
proxy_cache cache;
proxy_cache_valid 200 1s;
#ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
server_name my_host_name;
access_log /path/nginx-access.log compression;
location /static/ {
alias /path/static/;
}
location /nginx_status {
stub_status on;
allow all;
deny all;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_pass_request_headers on;
proxy_read_timeout 1200;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
server {
listen 9999 ;
server_name my_host_name ;
return 307 https://my_domain.com$request_uri;
}
}
The error kind of puzzles me, but in your Nginx config file I see that you're not listening on the default HTTP port. You should add a server block that listens on the HTTP port (80) and redirects to HTTPS (443) from there.
Add this block inside your http block:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name my_host_name;
return 301 https://$host$request_uri;
}
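Separately, the CERTIFICATE_VERIFY_FAILED part of the traceback means the Python client could not validate the certificate chain; a missing intermediate certificate is a common culprit when browsers accept the site but requests does not. A generic way to inspect what the server actually sends (substitute the real hostname for the placeholder):
openssl s_client -connect xxxx.com:443 -servername xxxx.com </dev/null
# Look for "Verify return code: 0 (ok)" near the end of the output;
# any other code means the chain presented by nginx is incomplete or untrusted by the client.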
I'm setting up the first production architecture for my Django-based app. I'm using nginx + gunicorn + remote postgres database setup.
After performing simple API load tests with https://loader.io I've found that when increasing the number of clients sending API requests above 60 clients/second in a 30-second test, the tool reports errors saying the connections time out.
When using a double server setup with a load balancer I can double the clients/second number, but I would expect a single 3 vCPU / 1 GB RAM setup to be able to handle more than 30 requests/second - am I right?
I've tried a lot of different gunicorn / nginx config parameters but nothing seems to help.
This is the content of my /etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
events {
worker_connections 4000;
multi_accept on;
use epoll;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 512;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
reset_timedout_connection on;
keepalive_requests 100000;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
This is the content of my /etc/nginx/sites-available/MY_DOMAIN file:
server {
listen 80;
listen [::]:80;
server_name MY_DOMAIN www.MY_DOMAIN;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl on;
client_max_body_size 5M;
server_name MY_DOMAIN www.MY_DOMAIN;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/backend;
}
location /loaderio-b061bddf86a67379411d4ef54f7ee430/ {
root /var/www/backend;
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/backend/MY_SOCKET.sock;
}
location /ws/ {
include proxy_params;
proxy_pass http://unix:/var/www/backend/ws.sock;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
}
ssl_certificate /etc/letsencrypt/live/MY_DOMAIN/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/MY_DOMAIN/privkey.pem; # managed by Certbot
}
This is the content of my supervisor file:
[program:gunicorn]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/gunicorn --workers 5 --keep-alive 15 --worker-class gevent --bind unix:/var/www/backend/SOCK_NAME.sock config.wsgi:application
autostart=true
autorestart=true
log_level=debug
stderr_logfile=/var/log/gunicorn/gunicorn.out.log
stdout_logfile=/var/log/gunicorn/gunicorn.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[program:daphne]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/daphne -u /var/www/backend/ws.sock config.asgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/daphne/daphne.out.log
stdout_logfile=/var/log/daphne/daphne.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[group:GROUP_NAME]
programs=gunicorn,daphne
When performing the load test the CPU vCores are between 10-18% load and the RAM usage is around 70%.
Is it possible to push this single server instance above 60 req/sec, or is it just a hardware limitation? (I've already tried a DigitalOcean 16 vCPU / 8 GB RAM droplet and the results were pretty much the same, no matter whether 5 or 15 workers were used.)
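For what it's worth, gunicorn's own sizing guideline is roughly (2 x number of cores) + 1 workers, and with the gevent worker class the per-worker connection cap matters as well. A sketch of the supervisor command line with those knobs made explicit for a 3 vCPU box (paths and names are the ones from the question):
command=/root/.pyenv/versions/VENV_NAME/bin/gunicorn --workers 7 --worker-class gevent --worker-connections 1000 --keep-alive 15 --backlog 2048 --bind unix:/var/www/backend/SOCK_NAME.sock config.wsgi:application
With CPU sitting at 10-18% while errors still appear, the bottleneck is more likely blocking I/O inside the workers (for example the round-trips to the remote Postgres database) than raw hardware, so measuring database latency under load is worth doing before scaling the droplet.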
I upgraded Ubuntu and after the upgrade, example.com (name changed) stopped working and started giving the error below. I am using nginx (nginx/1.10.0) and Ubuntu (release 16.04). The whole setup is on AWS EC2.
502 Bad Gateway
Below are the contents of the nginx error log:
[error] 1201#1201: *27 connect() failed (111: Connection refused) while connecting to upstream, client: 171.76.106.51, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3000/favicon.ico", host: "public IP", referrer: "https://publicIP/"
Below is the nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
nginx sites-enabled default is a symlink to sites-available default, and below is its content:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
}
Another nginx conf file
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
proxy_pass http://localhost:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forward-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
When I try http://localhost:3000/ it says connection refused:
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:3000... failed: Connection refused.
When I am doing "wget www.google.com" its says connected
Can you please help here ?
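Since nginx reports "Connection refused" on 127.0.0.1:3000, the proxy side is reaching the right address and the most likely explanation is that the backend that used to listen on port 3000 is not running after the upgrade (or is now listening on a different port or interface). A generic first check, not specific to any particular app server:
sudo ss -ltnp | grep 3000
# no output means nothing is listening on port 3000; start whatever service runs
# the app on that port, then reload nginx:
sudo systemctl reload nginx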