I have an AWS ELB load balancer with three dynamic IPs behind the domain example.com, port 443.
Our client wants to access the API, but their outbound firewall rules require them to whitelist the dynamic IPs every time they change.
To resolve this, we created a subdomain (api.example.com) with an Elastic IP and an Nginx reverse proxy, so every request that comes in on api.example.com is forwarded to example.com.
The issue: if the client whitelists the load balancer IPs and makes a request to the proxy server (api.example.com), everything works fine, but if the ELB IPs are not whitelisted he gets a timeout error.
# server configuration for api.example.com
server {
    server_name api.example.com;

    resolver 8.8.8.8 valid=10s;
    resolver_timeout 10s;
    set $upstream_endpoint https://example.com;

    location / {
        proxy_redirect off;
        proxy_read_timeout 3600;
        proxy_connect_timeout 1m;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_pass $upstream_endpoint;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    }
}
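For completeness, the server block shown has no listen directive. Assuming TLS terminates on the proxy instance itself (the certificate paths below are placeholders, not from the original config), it would also need something like:

```
server {
    # assumption: TLS terminates on the proxy, so it must listen on 443
    # with its own certificate (paths are placeholders)
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/certs/api.example.com.crt;
    ssl_certificate_key /etc/ssl/private/api.example.com.key;

    # ... resolver, location and proxy_pass directives as above ...
}
```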
Please help. Thanks in advance.
I installed Ghost on my EC2 instance running Ubuntu 18 by following the official guide.
I didn't opt in for a Let's Encrypt certificate, though. I wanted to roll my own with Amazon Certificate Manager and load-balance requests to the website via Route 53 and a CloudFront distribution.
The issue is that the blog doesn't load; instead, I am presented with the default nginx homepage.
This is my website config in /etc/nginx/sites-enabled:
server {
    listen 80;
    listen [::]:80;

    server_name paulrberg.com;
    root /var/www/ghost/system/nginx-root; # used for acme.sh SSL verification (https://acme.sh)

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}
I suspect the issue is the nginx configuration: the config Ghost provides may not be compatible with an Amazon certificate coupled with Route 53 and CloudFront.
Is this even doable, or do I have to use the Let's Encrypt certificate and give up on my infrastructure of choice?
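One hedged guess, since the stock nginx page usually means the request fell into nginx's default server: CloudFront may not be forwarding a Host header that matches `server_name paulrberg.com` (by default it sends the origin's domain name to the origin). A sketch of a catch-all server that would rule this out; the origin domain mentioned in the comment is a placeholder:

```
server {
    # catch whatever Host header CloudFront sends to the origin;
    # alternatively, list the origin domain explicitly, e.g.
    #   server_name paulrberg.com origin.paulrberg.com;  # placeholder name
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }
}
```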
This is what I have setup on AWS in a nutshell:
I have an EC2 instance (Windows Server), let's call it WebAppInstance, which hosts a .NET-based web API application.
Then I have another EC2 instance (Windows Server) running another copy of the same web app; let's call it WebAppInstanceStaging.
Now, to achieve canary deployment, I created a third EC2 instance (Ubuntu) hosting nginx, which routes each request to either WebAppInstance or WebAppInstanceStaging based on a request header.
I have put my nginx instance behind an ELB to make use of the SSL certificate I have in AWS Certificate Manager (ACM), since an ACM certificate cannot be used directly on an EC2 instance. Then I created a Route 53 record set in the domain registered with AWS (*.mydomain.com).
In Route 53 I created a record set for myapp.mydomain.com.
Now, when I access http://myapp.mydomain.com I can reach the app, but when I try https://myapp.mydomain.com I see an error saying "This site can't be reached" (ERR_CONNECTION_REFUSED).
Below is the configuration of my nginx:
upstream staging {
    server myappstaging.somedomain.com:443;
}

upstream prod {
    server myapp.somedomain.com:443;
}

# map to different upstream backends based on header
map $http_x_server_select $pool {
    default "prod";
    staging "staging";
}

server {
    listen 80;
    server_name <publicIPOfMyNginxEC2> <myapp.mydomain.com>;

    location / {
        proxy_pass https://$pool;

        # standard proxy settings
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;

        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
    }
}
I've been trying to figure this out for more than a day. What am I missing?
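A hedged observation: ERR_CONNECTION_REFUSED on HTTPS usually means nothing is accepting connections on port 443 at all. Since the nginx above only has `listen 80`, the ELB would need a 443 listener (carrying the ACM certificate) that forwards to instance port 80; nginx itself can then stay plain HTTP. A sketch, assuming SSL terminates at the ELB:

```
# assumption: the ELB's 443 listener (with the ACM cert) forwards to
# instance port 80, so nginx never terminates TLS itself
server {
    listen 80;
    server_name myapp.mydomain.com;

    location / {
        # $pool comes from the map block shown earlier
        proxy_pass https://$pool;
        # a Classic ELB already sets X-Forwarded-Proto/For for the backend
        proxy_set_header Host $http_host;
    }
}
```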
I want to add TLS to my AWS Elastic Beanstalk application. It is a Node.js app running behind an nginx proxy server.
Here are the steps I've completed:
Get a wildcard certificate from AWS Certificate Manager.
Add the certificate in the load balancer configuration section of my EB environment.
The relevant part of my nginx config is:
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    content: |
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }

      server {
        listen 8080;
        proxy_set_header X-Forwarded-Proto $scheme;

        if ($http_x_forwarded_proto != 'https') {
          return 301 https://$host$request_uri;
        }

        location / {
          proxy_pass http://nodejs;
          proxy_http_version 1.1;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
When I try to access my application over HTTPS, I get a 408 Request Timeout.
My understanding is that to enable SSL on nginx we need to add the certificate and private key files and listen on port 443. But since I'm using an ACM certificate, I don't have those files.
What am I missing in my nginx config for this to work?
Thanks
In the load balancer listener configuration, the port 443 listener's "Instance Port" setting should be 80.
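One related gotcha worth hedging: with the HTTP-to-HTTPS redirect in the config above, the load balancer's health check (which arrives as plain HTTP without the X-Forwarded-Proto header set to https) can see a 301 and mark the instance unhealthy. A common workaround is to exempt the health check path; the `/health` path here is an assumption, and the `map` would need to sit at http level (which an included conf.d file does):

```
# flag plain-HTTP requests (as reported by the ELB) for redirect
map $http_x_forwarded_proto $needs_redirect {
    default 0;
    http    1;
}

server {
    listen 8080;

    # let the health check through without redirecting
    # (path is an assumption; match your environment's health check URL)
    location = /health {
        access_log off;
        proxy_pass http://nodejs;
    }

    location / {
        if ($needs_redirect) {
            return 301 https://$host$request_uri;
        }
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```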
I have spent hours trying to come up with a solution and have read a lot of WebSocket write-ups involving nginx, still with no luck.
I have a WebSocket application, containerised with Docker and running on EC2 instances via ECS. My WebSocket service needs to be able to autoscale when needed.
I have tested connectivity with a Classic ELB and all works well, but it doesn't support the WebSocket protocol.
That leaves the ALB and the NLB.
The ALB only allows the HTTP and HTTPS protocols but does support WebSockets; however, I am unsure how to configure it so I can reach my WebSocket over the WSS protocol, and the target group health checks fail.
The NLB works well since it allows TCP, but the one issue is that it doesn't terminate SSL.
The only solution I found was to install the SSL certificate on the EC2 instance and set up nginx as a reverse proxy to the Docker container, but I have had no joy with that: I have no experience with nginx, so I may not have the right config, and I am still not able to connect to the WebSocket over WSS. Any assistance is welcome.
My main objective is connecting to the WebSocket over WSS.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    upstream docker-nginx {
        server nginx:443;
        ssl_certificate /etc/ssl/private/example.chained.crt;
        ssl_certificate_key /etc/ssl/private/example.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
    }

    upstream localhost {
        server apache:443;
    }

    server {
        listen 443;

        location / {
            proxy_pass http://localhost;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 443;

        location / {
            proxy_pass http://example.com;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
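For the record, the pieces usually missing for WSS in a config like the one above are TLS termination in the server block (the `ssl_*` directives are not valid inside an `upstream` block) plus the WebSocket upgrade headers. A minimal sketch, with the server name and container address as placeholders:

```
server {
    listen 443 ssl;
    server_name example.com;                    # placeholder

    # TLS settings belong in the server block, not in an upstream block
    ssl_certificate     /etc/ssl/private/example.chained.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;       # container port, assumption

        # required for the WebSocket upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        proxy_read_timeout 3600s;               # keep idle sockets open
    }
}
```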
This was resolved by using an ALB.
I have a question about configuring nginx to forward HTTPS traffic received on a custom port to the same custom port on the destination URL. My case is given below.
I have a VPC in AWS. I'm running nginx on the NAT instance (a bastion server in my case), which receives the HTTPS traffic.
My app instance within the VPC is the destination for the requests forwarded by nginx. It has two custom subdomains, one for one-way SSL authentication and the other for two-way SSL authentication.
I am serving the URI that requires two-way authentication on a custom port, rather than 443. The URIs whose services run on port 443 use one-way SSL authentication (server authentication).
In my nginx configuration file, I listen on this custom port so that HTTPS requests get redirected to the same custom port on the app instance after the SSL handshake is done. But I observed that after the handshake phase, requests were being redirected to port 443 (the default) on the app instance.
The HTTPS packets are sent using an HTTPBuilder object, available in Java/Groovy, after setting up an HTTPS scheme.
A sample of the nginx configuration I'm using is given here:
server {
    listen 8000;
    server_name myserver.com;

    ssl on;
    ssl_certificate /path/to/server/cert;
    ssl_certificate_key /path/to/server/key;
    ssl_client_certificate /path/to/client/cert;
    ssl_verify_client on;

    # HTTPS requests to this URI get redirected to port 443 by default
    location /customUri1 {
        # switch off logging
        access_log off;

        proxy_pass http://app-instance-ip:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443;
    server_name myserver.com;

    ssl on;
    ssl_certificate /path/to/server/cert;
    ssl_certificate_key /path/to/server/key;

    location /customUri2 {
        # switch off logging
        access_log off;

        proxy_pass http://app-instance-ip:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
Is there any nginx configuration mechanism that might allow me to send the HTTPS requests to the custom port on the app instance? Thanks.
I found the issue. It was with the way I was posting the HTTPS packets to the server: in my Groovy code, the HTTPBuilder's uri.port field was set to 443 rather than 8000. After changing it to 8000, my client is able to post to the server.