Configuring nginx to allow HTTPS traffic on a custom port - amazon-web-services

I have a question about configuring nginx to forward HTTPS traffic received on a custom port to the same custom port on the destination URL. My setup is described below.
I have a VPC in AWS. I'm running nginx on the NAT instance (a bastion server in my case), which receives the HTTPS traffic.
My app instance within the VPC is the destination for the requests forwarded by nginx. It has two custom sub-domains: one for one-way SSL authentication and the other for two-way SSL authentication.
I am serving the URI that uses two-way authentication on a custom port, rather than 443. The URIs with services running on port 443 use one-way SSL authentication (server authentication).
In my nginx configuration file, I listen on this custom port so that, after the SSL handshake is done, the HTTPS requests are forwarded to the same custom port on the app instance. But I observed that after the handshake phase, requests were being sent to port 443 of the app instance by default.
The HTTPS requests are sent using an HTTPBuilder object, available in Java/Groovy, after setting up an HTTPS scheme.
A sample of the nginx configuration I'm using is given here:
server {
    listen 8000;
    server_name myserver.com;

    ssl on;
    ssl_certificate /path/to/server/cert;
    ssl_certificate_key /path/to/server/key;
    ssl_client_certificate /path/to/client/cert;
    ssl_verify_client on;

    # https requests to this URI get redirected to port 443 by default
    location /customUri1 {
        # switch off logging
        access_log off;

        proxy_pass http://app-instance-ip:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443;
    server_name myserver.com;

    ssl on;
    ssl_certificate /path/to/server/cert;
    ssl_certificate_key /path/to/server/key;

    location /customUri2 {
        # switch off logging
        access_log off;

        proxy_pass http://app-instance-ip:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
Is there any nginx configuration mechanism that would allow me to send the HTTPS requests to the custom port on the app instance? Thanks.
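For reference, the port nginx forwards to is determined entirely by the proxy_pass directive, not by the listen port. If the goal were to reach a custom port on the app instance, the relevant line would look roughly like this (the address and port 8000 are placeholders taken from the question):

```nginx
location /customUri1 {
    # nginx forwards to whatever host:port proxy_pass names,
    # regardless of which port this server block listens on
    proxy_pass https://app-instance-ip:8000;
}
```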

I found the issue. It was with the way I was posting the HTTPS requests to the server. In my Groovy code, the HTTPBuilder's uri.port field was set to 443 rather than 8000. After changing it to 8000, my client is able to post to the server.
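The underlying behaviour is easy to see with a URL parser (Python here purely for illustration; the hostname is the placeholder from the question): an https URL with no explicit port implies 443, so a client built from such a URL never reaches the custom listener.

```python
from urllib.parse import urlsplit

# Without an explicit port, an https URL implies 443 -- which is why
# the client posted to 443 even though nginx was listening on 8000.
implicit = urlsplit("https://myserver.com/customUri1")
explicit = urlsplit("https://myserver.com:8000/customUri1")

print(implicit.port)   # None -> the client library falls back to 443
print(explicit.port)   # 8000
```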

Related

nginx reverse proxy keeps showing the default website

I created an HTTP API with an EC2 instance integration. Two Python applications are running on the EC2 instance, on ports 8002 and 5005. There is an nginx reverse proxy running on the EC2 instance that should direct requests from the API Gateway to the correct port based on the server name, but it always ends up directing traffic to the default server. Any idea what the issue is?
This is what my nginx config looks like:
server {
    listen 80 default_server;
    server_name example1.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8002;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:5005;
    }
}
I tried changing the nginx config a couple of times but nothing worked.
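For illustration, nginx's selection logic here can be sketched in Python (hostnames and ports taken from the question): for a plain `listen 80`, the server block is chosen by matching the request's Host header against each server_name, and any request that matches nothing lands on the default_server. That is also why a Host header carrying the instance's IP or DNS name, as an API Gateway integration may send, always ends up at the default server.

```python
# A sketch (not nginx source) of how nginx picks a server block for
# "listen 80": exact server_name match on the Host header, falling
# back to the default_server when nothing matches.
servers = {
    "example1.com": "http://localhost:8002",
    "example2.com": "http://localhost:5005",
}
default = "http://localhost:8002"  # the listen 80 default_server block

def pick_upstream(host_header: str) -> str:
    return servers.get(host_header.lower(), default)

print(pick_upstream("example2.com"))  # http://localhost:5005
print(pick_upstream("10.0.0.5"))      # no match -> http://localhost:8002
```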

Nginx reverse proxy timeout error issue

I have an AWS ELB load balancer with three dynamic IPs and the domain example.com, on port 443.
Our client wants to access the API, but he has outbound firewall rules that require whitelisting the dynamic IPs every time.
As a resolution, we created a subdomain (api.example.com) with an elastic IP and an nginx reverse proxy, so every request that comes to api.example.com is forwarded to example.com.
The issue is that if the client whitelists the load balancer IPs and makes a request to the proxy server (api.example.com), everything works fine, but if the ELB IPs are not whitelisted he gets a timeout error.
# server configuration api.pelocal.com
server {
    server_name api.example.com;

    resolver 8.8.8.8 valid=10s;
    resolver_timeout 10s;
    set $upstream_endpoint https://example.com;

    location / {
        proxy_redirect off;
        proxy_read_timeout 3600;
        proxy_connect_timeout 1m;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_pass $upstream_endpoint;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    }
}
Please help. Thanks in advance.
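Unrelated to the whitelisting itself, one thing worth checking in this kind of setup: when nginx proxies to an HTTPS upstream by hostname, it does not send SNI during the upstream TLS handshake by default, which can cause handshake failures or timeouts with some endpoints (including load balancers that require SNI). A commonly added pair of directives inside the location block is:

```nginx
# Send the upstream's hostname via SNI during the upstream TLS handshake
proxy_ssl_server_name on;
proxy_ssl_name example.com;
```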

How to configure Nginx Load Balancer for WSO2 IS + EI + BPS cluster on the same server?

I have one server with WSO2 IS, EI, and BPS, and a second server with IS, EI, and BPS. I want to create a cluster with a load balancer. IS uses port 9444, EI uses port 9443, and BPS uses port 9445. I can't configure nginx load balancing correctly for the three systems, because they all use different ports, and I didn't find any info in the documentation. Where should I write the different ports of IS, EI, and BPS in the nginx config so that the LB web page opens for each of IS, EI, and BPS?
I configured nginx LB for the IS cluster, and it works. Then I configured nginx LB for EI, and it works. Then BPS. I don't know how to merge these configurations into one config.
This is the config for EI. The configs for IS and BPS are the same, but with the other ports.
upstream example.com (SHOULD I WRITE 9443 PORT HERE?) {
    server 1.1.1.1:9443;
    server 1.1.1.2:9443;
    ip_hash;
}

server {
    listen 443 (SHOULD I WRITE 9443 PORT HERE?);
    server_name example.com (SHOULD I WRITE 9443 PORT HERE?);

    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.cer;
    ssl_certificate_key /etc/nginx/ssl/key.key;
    ssl_client_certificate /etc/nginx/ssl/ca.pem;
    ssl_verify_client on;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://example.com (SHOULD I WRITE 9443 PORT HERE?);
        proxy_ssl_certificate /etc/nginx/ssl/cert.cer;
        proxy_ssl_certificate_key /etc/nginx/ssl/key.key;
        proxy_ssl_session_reuse on;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
You can define 3 hostnames for the servers as below.
bps.wso2.com
is.wso2.com
ei.wso2.com
Then you can define three upstreams and three servers. An example config can be found at https://docs.wso2.com/display/AM210/Configuring+the+Proxy+Server+and+the+Load+Balancer
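A sketch of what the merged config could look like under that scheme: three upstreams and three server blocks, all listening on 443 and distinguished only by server_name, so the product ports appear only in the upstreams. The IPs and upstream names are illustrative, and the SSL/proxy-header directives from the question's config would carry over into each server block.

```nginx
upstream is_nodes  { server 1.1.1.1:9444; server 1.1.1.2:9444; ip_hash; }
upstream ei_nodes  { server 1.1.1.1:9443; server 1.1.1.2:9443; ip_hash; }
upstream bps_nodes { server 1.1.1.1:9445; server 1.1.1.2:9445; ip_hash; }

server {
    listen 443 ssl;
    server_name ei.wso2.com;   # the product is chosen by hostname,
                               # so every server block can listen on 443
    ssl_certificate     /etc/nginx/ssl/cert.cer;
    ssl_certificate_key /etc/nginx/ssl/key.key;

    location / {
        proxy_pass https://ei_nodes;   # the product port lives only in the upstream
    }
}
# ...plus two more server blocks: is.wso2.com -> is_nodes, bps.wso2.com -> bps_nodes
```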

TLS on Elastic Beanstalk running reverse proxy

I want to add TLS to my AWS Elastic Beanstalk application. It is a Node.js app running behind an nginx proxy server.
Here are the steps I've completed:
Get a wildcard certificate from AWS Certificate Manager.
Add the certificate in the load balancer configuration section of my EB environment.
The relevant part of my nginx config is:
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    content: |
      upstream nodejs {
          server 127.0.0.1:8081;
          keepalive 256;
      }

      server {
          listen 8080;

          proxy_set_header X-Forwarded-Proto $scheme;
          if ( $http_x_forwarded_proto != 'https' ) {
              return 301 https://$host$request_uri;
          }

          location / {
              proxy_pass http://nodejs;
              proxy_http_version 1.1;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }
When I try to access my application using HTTPS, I get a 408 Request Timed Out.
It is my understanding that to enable SSL on nginx we need to add the cert along with the pem file and listen on port 443. But since I'm using an ACM certificate, I don't have the cert and pem files.
What am I missing in my nginx config for this to work?
Thanks
In the load balancer listener configuration, for the port 443 listener, the "Instance Port" setting should be 80.
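The answer above maps roughly to the classic-ELB option settings in an .ebextensions file (the certificate ARN is a placeholder, and exact option names can vary by platform version): the ELB terminates TLS with the ACM certificate on 443 and forwards plain HTTP to the instance, so nginx never needs the cert or pem files.

```yaml
# Sketch only -- the certificate ARN is a placeholder
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: arn:aws:acm:us-east-1:123456789012:certificate/example
    InstancePort: 80
    InstanceProtocol: HTTP
```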

Websocket Wss on on AWS Application Load Balancer or Network Load Balancer

I have spent hours trying to come up with a solution and have read a lot of WebSocket-with-nginx solutions, still no luck.
I have a WebSocket application, containerised with Docker and running on EC2 instances using ECS. My WebSocket needs to be able to autoscale when needed.
I have tested connectivity with a Classic ELB and all works well, but it doesn't support the WebSocket protocol.
So it's down to ALB and NLB:
ALB only allows the HTTP and HTTPS protocols but does support WebSockets; I am unsure how to set that up so I can access my WebSocket over the WSS protocol, and the target group health checks fail.
NLB works well as it allows the TCP protocol, but the only issue is that it doesn't terminate SSL.
The only solution left was to install the SSL cert on the EC2 instance and set up an nginx reverse proxy to the Docker container, but I have had no joy with that. As I have no experience with nginx I might not have the right config, and I am still not able to connect to the WebSocket over WSS. Any assistance welcomed.
My main objective is connecting to the WebSocket over WSS.
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-nginx {
        server nginx:443;
        ssl_certificate /etc/ssl/private/example.chained.crt;
        ssl_certificate_key /etc/ssl/private/example.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
    }

    upstream localhost {
        server apache:443;
    }

    server {
        listen 443;

        location / {
            proxy_pass http://localhost;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 443;

        location / {
            proxy_pass http://example.com;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
This was resolved by using an ALB.
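For anyone who still needs the NLB route with TLS terminated on the instance, a minimal nginx sketch for terminating wss:// (cert paths and the backend port are placeholders): the proxy_http_version and Upgrade/Connection headers are what allow the WebSocket handshake to pass through the proxy.

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/private/example.chained.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;        # the containerised app
        proxy_http_version 1.1;                  # required for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```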