Kubernetes nginx ingress path-based routing of HTTPS in AWS

Question: Within Kubernetes, how do I configure the nginx ingress to treat traffic from an elastic load balancer as HTTPS, when it is defined as TCP?
I am working with a Kubernetes cluster in an AWS environment. I want to use an nginx ingress to do path-based routing of the HTTPS traffic; however, I do not want to do SSL termination or reencryption on the AWS elastic load balancer.
The desired setup is:
client -> elastic load balancer -> nginx ingress -> pod
Requirements:
1. The traffic must be end-to-end encrypted.
2. An AWS ELB must be used (the traffic cannot go directly into Kubernetes from the outside world).
The problem that I have is that to do SSL passthrough on the ELB, I must configure the ELB listeners as TCP. However, when the ELB is defined as TCP, all traffic bypasses nginx.
As far as I can tell, I can set up a TCP passthrough via a ConfigMap, but that is merely another passthrough; it does not allow me to do path-based routing within nginx.
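For reference, the ConfigMap passthrough I mean is the ingress-nginx tcp-services map, roughly like this (a sketch; the namespace and service names are illustrative):

    # tcp-services ConfigMap for ingress-nginx: raw TCP forwarding only.
    # Each key is an external port, each value is "<namespace>/<service>:<port>".
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      "443": "default/my-app:443"   # blindly forwards the TCP stream; no path routing

This forwards the whole TCP stream, so nginx never sees the HTTP layer.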
I am looking for a way to define the ELB as TCP (for passthrough) while still having the ingress treat the traffic as HTTPS.
I can define the ELB as HTTPS, but then there is a second, unnecessary negotiate/break/reencrypt step in the process that I want to avoid if at all possible.

To make this clearer, I'll start from the OSI model, which tells us that TCP is a layer 4 protocol and HTTP/HTTPS is a layer 7 protocol. So, frankly speaking, HTTP/HTTPS data is encapsulated in TCP before the remaining layers of encapsulation are applied to transfer the packet to another network device.
If you set up a Classic (TCP) load balancer, it stops inspecting a packet after reading the TCP part, which is enough to decide (according to the LB configuration) to which IP address and which port the packet should be delivered. After that, the LB takes the TCP payload, wraps it in another TCP layer, and sends it to the destination (which in turn causes all the other OSI layers to be applied).
To make your configuration work as expected, you need to expose the nginx-ingress-controller Pod using a NodePort service. Then the Classic ELB can be configured to deliver traffic to any cluster node on the port selected for that NodePort service; usually it is between 30000 and 32767. So your LB pool will look like the following:
Let's imagine the cluster nodes have IP addresses 10.132.10.1 through 10.132.10.10 and the NodePort is 30276.
ELB Endpoint 1: 10.132.10.1:30276
ELB Endpoint 2: 10.132.10.2:30276
...
ELB Endpoint 10: 10.132.10.10:30276
Note: In the case of an AWS ELB, I believe the nodes' DNS names should be used instead of IP addresses.
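A minimal sketch of the NodePort Service itself (the labels and namespace are assumptions; match them to your controller deployment):

    # Expose the nginx ingress controller on a fixed NodePort on every node
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: https
          port: 443
          targetPort: 443
          nodePort: 30276   # the port the ELB endpoints above point at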
This should result in the following sequence of traffic distribution from a client to a Kubernetes application Pod:
1. The client sends a TCP packet with the HTTP/HTTPS request in its payload to ELB_IP:ELB_port (a.b.c.d:80).
2. The ELB receives the IP packet, analyzes its TCP data, picks the appropriate endpoint from the backend pool (the list of Kubernetes cluster nodes), and creates another TCP packet with the same HTTP/HTTPS data inside, replacing the destination IP and destination TCP port with the cluster node IP and the Service NodePort (l.m.n.k:30xxx); it then sends the packet to the selected destination.
3. The Kubernetes node receives the TCP packet and, using its iptables rules, changes the destination IP and destination port of the packet again, forwarding it (according to the NodePort Service configuration) to the destination Pod; in this case, the nginx-ingress-controller Pod.
4. The nginx-ingress-controller Pod receives the TCP packet; since the TCP data says it is to be delivered locally, the Pod extracts the HTTP/HTTPS data and hands the request to the Nginx process inside the Nginx container.
5. The Nginx process receives the HTTP/HTTPS request, decrypts it (in the case of HTTPS), and analyzes the HTTP headers.
6. According to the nginx.conf settings, the Nginx process rewrites the HTTP request and delivers it to the cluster Service configured for that host and URL path (see the Ingress sketch after this list).
7. The Nginx process sends the modified HTTP request to the backend application: a TCP header is added and the packet is sent to the backend Service's IP_address:TCP_port.
8. The iptables rules defined for the backend Service deliver the packet to one of the Service endpoints (application Pods).
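For step 6, here is a minimal Ingress sketch of the path-based routing (host, paths, and backend names are illustrative):

    # Ingress consumed by the nginx ingress controller: routes by URL path
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-routing
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - my-elb.example.com
          secretName: ingress-tls   # certificate with the ELB FQDN in its SAN (see the note below)
      rules:
        - host: my-elb.example.com
          http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: api-svc
                    port:
                      number: 8080
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-svc
                    port:
                      number: 80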
Note: To terminate SSL on the ingress controller, you have to create SSL certificates that include the ELB IP and ELB FQDN in the SAN section.
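That certificate is handed to the controller as an ordinary TLS Secret, referenced by the Ingress above; a sketch (the key material is a placeholder):

    # kubernetes.io/tls Secret; the cert's SAN must cover the ELB FQDN/IP
    apiVersion: v1
    kind: Secret
    metadata:
      name: ingress-tls
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate>
      tls.key: <base64-encoded private key>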
Note: If you want to terminate SSL on the application Pod to get end-to-end SSL encryption, you may want to configure nginx to pass the SSL traffic through instead.
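With ingress-nginx, one way to do that is the ssl-passthrough annotation; a sketch, assuming the controller was started with the --enable-ssl-passthrough flag (note that passthrough routing works per host via SNI only, since nginx can no longer see paths inside the encrypted traffic):

    # The TLS stream is forwarded to the pod unmodified; the pod terminates SSL
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: passthrough-app
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    spec:
      ingressClassName: nginx
      rules:
        - host: secure.example.com   # matched via SNI, not URL path
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: secure-app
                    port:
                      number: 443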
Bottom line: an ELB configured to deliver TCP traffic to a Kubernetes cluster works perfectly with the nginx-ingress controller if you configure it correctly.
In GKE (Google Kubernetes Engine), if you create a Service with type: LoadBalancer, it creates exactly this kind of TCP LB, which forwards traffic to a Service NodePort; Kubernetes is then responsible for delivering it to the Pod. EKS (Elastic Kubernetes Service) from AWS works in much the same way.
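For comparison, a minimal type: LoadBalancer sketch; on EKS the in-tree provider creates a Classic ELB by default, and the annotation shown below (an optional extra) requests an NLB instead:

    # The cloud provider provisions the load balancer and points it
    # at the NodePort that Kubernetes allocates on every node.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"   # optional, EKS only
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - port: 443
          targetPort: 443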

Related

aws network load balancer ping fail from terminal

We have configured our website server behind a Network Load Balancer. When we try to ping our domain name from a terminal, all pings are lost.
I tried to figure it out, but I have no clue how to configure the NLB to answer pings from a terminal.
With an NLB, you need to create one or more listeners and route them to specific targets to serve the intended requests.
Network traffic that does not match a configured listener is classified as unintended traffic. ICMP requests other than Type 3 (unreachable) are also considered unintended traffic. Network Load Balancers drop unintended traffic without forwarding it to any targets.
Source : https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html

Regarding TLS termination at application load balancer AWS

I have an application in a container, which serves HTTP on port 1429. The container is deployed in AWS EKS. I have configured the ALB with a certificate; the listener is HTTPS on port 443.
I need to terminate TLS at the ALB and forward the request over HTTP to port 1429.
I configured the ingress target port as 1429.
I am getting a target TLS Negotiation Error in the CloudWatch metrics.
Any suggestions on this?
I would double-check that the target group protocol is set to HTTP. Seeing as your application is deployed to EKS, you could port-forward to the port in question and make an HTTP curl request to ensure that no TLS errors are thrown and that the request is handled as expected.
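If the ALB is managed by the AWS Load Balancer Controller, the target group protocol comes from an Ingress annotation; a minimal sketch (the service name and certificate ARN are placeholders):

    # ALB terminates TLS on 443 and forwards plain HTTP to the pod on 1429
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-alb
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/certificate-arn: <your ACM certificate ARN>
        alb.ingress.kubernetes.io/backend-protocol: HTTP   # target group must speak HTTP, not HTTPS
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app-svc
                    port:
                      number: 1429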

how to configure the aws elastic beanstalk load balancer for different layers?

I read the AWS documentation for Elastic Beanstalk, where AWS is responsible for scaling and auto-managing the servers. The same documentation has an option for changing and configuring the load balancer. In my case I want to change it to balance the requests that reach the servers at the IP network layer (L3), but it says that only HTTP and TCP can be listened for and balanced.
I am developing a chat application backend that needs to be built with scaling in mind. How can I configure the load balancer to listen at L3?
For the chat application to work, it must make the TCP connection with the server, not the load balancer; that's why I must forward the packets at the IP layer to the server, so the server can establish a TCP connection with the app (if I am wrong and I can do it at the TCP layer, tell me).
If I can't, is there another option, or will I be forced to use EC2, handle all the system management overhead myself, and create my own load balancer?
ELB Classic operates at either Layer 4 or Layer 7. Those are the options.
For the chat application to work, it must make the TCP connection with the server, not the load balancer; that's why I must forward the packets at the IP layer to the server, so the server can establish a TCP connection with the app.
You're actually incorrect about this. If you need to know the client's source IP address, you can enable the Proxy Protocol on your ELB, and support this in your server code.
When the ELB establishes each new connection to the instance with the Proxy Protocol enabled, it emits a single-line preamble containing the 5-tuple describing the external connection, which your application can interpret. It then opens up the L4 connection's payload streams and is transparent for the remainder of the connection.
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html
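For illustration, a version 1 preamble is a single text line ahead of the payload (format per the proxy-protocol spec; the addresses and ports are examples, and \r\n is the literal CRLF terminator):

    PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n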

Websocket timeouts using AWS Application Load Balancer

I'm getting gateway time-outs when trying to use a port specifically for websockets using an Application Load Balancer inside an Elastic Beanstalk environment.
The web application and websocket server are held within a Docker container; the application runs fine, however wss://domain.com:8080 just times out.
Here are the load balancer listeners, using the SSL cert for wss (screenshot omitted).
The target group it points to accepts a 'Protocol' of HTTP (I've tried HTTPS) and forwards to port 8080 on an EC2 instance. Or at least it should. (There doesn't appear to be an option for TCP on Application Load Balancers.)
I've looked over the Application Load Balancer logs, and it looks like the request reaches the target group but times out on its connection to the EC2 instance, and I'm stumped as to why.
All AWS Security Groups have been opened to all traffic for the time being. I've checked the host and found that the port is open and being listened on by Nginx, which routes to the correct port on the Docker container.
docker ps also shows the expected container and ports, and once inside the container I can see that the port is being listened on by the websocket server (command output omitted).
So it can't be the EC2 instance itself, can it? Is there an issue routing websockets via ports in an ALB?
-- Edit --
(Screenshots of the current ALB security group and the EC2 instance security group were attached here.)
The accepted answer here seems to be "open Security Groups for EC2 (web server) and ALB inbound & outbound communication on required ports, since websockets need two-way communication."
This is incorrect and the reason why it solved the problem is coincidental.
Let me explain:
"Websockets needs two way communication..." - Sure but the TCP sessions is only ever opened from one way - from the client.
You don't have to allow any outbound connections from the EC2 instance (web server) in order to use web sockets.
Of course the ALB needs to be able to open TCP connections to the EC2 instance. But not to the client. Why? Well, the ALB accepts TCP connections (usually on ports 80 and 443), completing the TCP session that was initiated by the client. It then tries to set up a new TCP session to the web server behind the ALB, on whatever port you decided the web server should listen on. The security group around the ALB needs to allow outbound connections on this port to the web server. That is why "open up everything" worked; it has nothing to do with "two-way communication".
You could use any ports, of course, but you don't need any ports other than 80 and 443 (such as 8080) on either the load balancer or the EC2 instance.
Websockets need two-way communication; make sure the security groups attached to all resources (EC2 & ALB) allow both inbound and outbound communication on the required ports.

How to get the port number of client request with AWS ELB using TCP and Nginx on server

While using HTTP/HTTPS as the load balancer protocol, we get the original protocol of the request (i.e. whether it is HTTP or HTTPS) from the X-Forwarded-Proto header.
Now, using this header in the nginx configuration, it can be determined whether the originating call was HTTP or HTTPS, and action can be taken accordingly.
But if the ELB listeners are instead configured for TCP (the screenshot originally shown here is omitted), how can we determine whether the request came in via port 80 or port 443?
You have a couple of options, at least:
Option 1 is not to send both types of traffic to the same port on the instances. Instead, configure the application to listen on an additional port, such as 81 or 8080, and send the SSL-originating traffic there. Then use the port where the traffic arrives at the instance to differentiate between the two types of traffic.
Option 2 is to enable the PROXY protocol on the ELB, after modifying the application to understand it. This has the advantage of also giving you the client IP address.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
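If the nginx in question is the Kubernetes ingress controller, both ends of option 2 can be switched on declaratively; a sketch, assuming ingress-nginx and the in-tree AWS cloud provider:

    # 1) Ask AWS to prepend the PROXY protocol preamble on all target ports
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443
    ---
    # 2) Tell nginx to parse the preamble (ingress-nginx ConfigMap option)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      use-proxy-protocol: "true"

With the preamble parsed, nginx sees the original client address and destination port, so it can distinguish traffic that arrived on 80 from traffic that arrived on 443.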