Regarding TLS termination at Application Load Balancer (AWS)

I have an application in a container that listens on HTTP on port 1429. The container is deployed in AWS EKS. I have configured the ALB with a certificate; the listener protocol is HTTPS on port 443.
I need to terminate TLS at the ALB and forward the requests as plain HTTP to port 1429.
I configured the ingress target port as 1429.
I am getting a target TLS Negotiation Error in the CloudWatch metrics.
Any suggestions on this?

I would double-check that the target group protocol is set to HTTP. Since your application is deployed to EKS, you could port-forward to the port in question and make a plain-HTTP curl request to ensure that no TLS errors are thrown and that the request is handled as expected.
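A quick way to run both checks, assuming the AWS Load Balancer Controller manages the ingress (the deployment and ingress names below are placeholders):

# 1. Verify the container answers plain HTTP (no TLS) on 1429
kubectl port-forward deploy/my-app 1429:1429 &
curl -v http://localhost:1429/

# 2. With the AWS Load Balancer Controller, the target group protocol comes
#    from this annotation; HTTP tells the ALB not to negotiate TLS with pods
kubectl annotate ingress my-ingress \
  alb.ingress.kubernetes.io/backend-protocol=HTTP --overwrite

A target TLS Negotiation Error typically means the ALB is trying to speak TLS to a backend that only speaks plain HTTP, which is exactly what a target group protocol of HTTPS would cause here.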

Related

Where to configure websockets with an application load balancer on AWS EC2?

According to the AWS documentation, "WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer."
However, when I select Application Load Balancer in EC2, I don't have any option other than HTTP and HTTPS.
I would like to use the secure WebSocket protocol (wss://), which I believe would be over TLS on port 8888.
How can I input this option?
The solution was to use HTTPS for the listener protocol, even though the browser is making requests to wss://.
For the port number, configuring both the listener and the environment instance to port 8888 works.
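For reference, the same listener can be created from the CLI; the ARNs below are placeholders:

# HTTPS listener on 8888; the browser then connects with wss://domain:8888/...
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 8888 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>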

How to set up a GCP LoadBalancer for mixed HTTPS and gRPC traffic

I am trying to make sense of the GCP LoadBalancer for the use case of mixed HTTPS and gRPC backend. The LoadBalancer documentation seems to indicate that you can/should use the HTTP(S) LoadBalancer, as that "includes HTTP/2". For backend services I appear to be able to specify a named "grpc" port and set it to be number 7000, but if I use the gcloud command to view my backend services:
gcloud compute backend-services list --format=json
My service is shown to use portName "grpc" (correct) with port "80" (incorrect), even though I was prompted that the instance group had named ports, and I could (and did) choose "grpc:7000".
On the frontend side, I can only select ports 80 and 8080 for HTTP, or 443 for HTTPS. No mention of HTTP/2, but I guess "HTTPS includes HTTP/2".
Am I right that I cannot use the layer 7 LoadBalancer at all for my scenario? The documentation is not very explicit about ports, and if I search the web for gRPC I get loads of stories about load balancing Kubernetes-hosted apps.
In order to use gRPC, you need to use HTTP/2.
To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
Configure an HTTPS load balancer.
Enable HTTP/2 as the protocol from the load balancer to the backends.
HTTP/2 and HTTPS are not one and the same; however, h2 (HTTP/2 over TLS) only works over HTTPS. And HTTP/2 to the backends is not enabled by default; you need to enable it.
See: https://cloud.google.com/load-balancing/docs/https/ for further information.
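A sketch of the two gcloud commands involved (the resource names and zone are assumptions):

# Make sure the instance group's named port really maps grpc -> 7000
gcloud compute instance-groups set-named-ports my-instance-group \
  --named-ports=grpc:7000 --zone=us-central1-a

# Point the backend service at that named port and speak HTTP/2 to backends
gcloud compute backend-services update my-backend-service \
  --global --port-name=grpc --protocol=HTTP2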

Kubernetes nginx ingress path-based routing of HTTPS in AWS

Question: Within Kubernetes, how do I configure the nginx ingress to treat traffic from an elastic load balancer as HTTPS, when it is defined as TCP?
I am working with a Kubernetes cluster in an AWS environment. I want to use an nginx ingress to do path-based routing of the HTTPS traffic; however, I do not want to do SSL termination or reencryption on the AWS elastic load balancer.
The desired setup is:
client -> elastic load balancer -> nginx ingress -> pod
Requirements:
1. The traffic be end-to-end encrypted.
2. An AWS ELB must be used (the traffic cannot go directly into Kubernetes from the outside world).
The problem I have is that to do SSL passthrough on the ELB, I must configure the ELB listener as TCP. However, when the ELB is defined as TCP, all traffic bypasses nginx.
As far as I can tell, I can set up a TCP passthrough via a ConfigMap, but that is merely another passthrough; it does not allow me to do path-based routing within nginx.
I am looking for a way to define the ELB as TCP (for passthrough) while still having the ingress treat the traffic as HTTPS.
I can define the ELB as HTTPS, but then there is a second, unnecessary negotiate/break/reencrypt step in the process that I want to avoid if at all possible.
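For reference, the ConfigMap mentioned above is the nginx ingress controller's tcp-services map; a minimal sketch with assumed names, showing why it is only a raw port mapping with no HTTP-level routing:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "8443": "default/my-app:8443"   # exposed port -> namespace/service:port
EOF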
To make it clearer, I'll start from the OSI model, which tells us that TCP is a layer 4 protocol and HTTP/HTTPS is a layer 7 protocol. So, frankly speaking, HTTP/HTTPS data is encapsulated in TCP data before the remaining layers' encapsulations are applied to transfer the packet to another network device.
If you set up a Classic (TCP) LoadBalancer, it stops reading packet data after the TCP part, which is enough to decide (according to the LB configuration) to which IP address and which port the packet should be delivered. After that, the LB takes the TCP payload data, wraps it in another TCP layer, and sends it to the destination point (which in turn causes all the other OSI layers to be applied).
To make your configuration work as expected, you need to expose the nginx-ingress-controller Pod using a NodePort Service. The Classic ELB can then be configured to deliver traffic to any cluster node on the port selected for that NodePort Service, usually somewhere between 30000 and 32767. So your LB pool will look like the following (a sketch of the NodePort Service itself appears after the example):
Let's imagine the cluster nodes have IP addresses 10.132.10.1...10 and the NodePort is 30276.
ELB Endpoint 1: 10.132.10.1:30276
ELB Endpoint 2: 10.132.10.2:30276
...
ELB Endpoint 10: 10.132.10.10:30276
Note: in the case of an AWS ELB, I believe the nodes' DNS names should be used instead of IP addresses.
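A minimal sketch of that NodePort Service (namespace, labels, and the fixed nodePort are assumptions matching the example above):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30276   # the fixed port used in the ELB pool above
EOF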
So it should cause the following sequence of traffic distribution from a client to a Kubernetes application Pod:
1. The client sends a TCP packet with an HTTP/HTTPS request in the payload to ELB_IP:ELB_port (a.b.c.d:80).
2. The ELB receives the IP packet, analyzes its TCP data, finds the appropriate endpoint from the backend pool (the whole list of Kubernetes cluster nodes), creates another TCP packet with the same HTTP/HTTPS data inside, replaces the destination IP and destination TCP port with the cluster node IP and the Service NodePort (l.m.n.k:30xxx), and sends it to the selected destination.
3. The Kubernetes node receives the TCP packet and, using iptables rules, changes the destination IP and destination port of the TCP packet again, forwarding the packet (according to the NodePort Service configuration) to the destination pod; in this case, the nginx-ingress-controller pod.
4. The nginx-ingress-controller pod receives the TCP packet and, because the TCP data says it has to be delivered locally, extracts the HTTP/HTTPS data from it and hands the HTTP/HTTPS request to the Nginx process inside the Nginx container in the Pod.
5. The Nginx process in the container receives the HTTP/HTTPS request, decrypts it (in the case of HTTPS), and analyzes all the HTTP headers.
6. According to the nginx.conf settings, the Nginx process rewrites the HTTP request and delivers it to the cluster Service specified for the configured host and URL path.
7. The Nginx process sends the rewritten HTTP request to the backend application: a TCP header is added and the packet is sent to the backend Service IP_address:TCP_port.
8. The iptables rules defined for the backend Service deliver the packet to one of the Service endpoints (application Pods).
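The host and URL path mapping used in step 6 comes from ordinary Ingress resources; a minimal sketch with assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api          # example.com/api/... is routed to api-svc
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 8080
EOF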
Note: to terminate SSL on the ingress controller, you have to create SSL certificates that include the ELB IP and the ELB FQDN in the SAN section.
Note: if you want to terminate SSL on the application Pod instead, to have end-to-end SSL encryption, you may want to configure nginx to pass the SSL traffic through.
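A sketch of that pass-through configuration (the ingress name is assumed, and the controller must have been started with --enable-ssl-passthrough):

# The TLS stream is forwarded to the pod untouched, so nginx can no longer
# see HTTP headers; routing falls back to the SNI host name only
kubectl annotate ingress my-app-ingress \
  nginx.ingress.kubernetes.io/ssl-passthrough=true --overwrite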
Bottom line: an ELB configured to deliver TCP traffic to a Kubernetes cluster works perfectly with the nginx-ingress controller if you configure it correctly.
In GKE (Google Kubernetes Engine), if you create a Service with type: LoadBalancer, it creates exactly this kind of TCP LB, which forwards traffic to a Service NodePort; Kubernetes is then responsible for delivering it to the Pod. EKS (Elastic Kubernetes Service) from AWS works in much the same way.
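For illustration, this is the kind of Service that triggers such a TCP LB (names and ports assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # the cloud controller provisions an external TCP LB
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
EOF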

running 2 SSL listeners on ELB with different ports

I have a Spring Boot web application running on an AWS EC2 instance behind a Classic ELB. I am using HTTPS between the client and the ELB, so traffic coming in on port 443 is routed to port 8080; I have deployed the certificate to the ELB.
In the same application I have an embedded ActiveMQ broker running on port 61616 as part of the JVM. Clients connect to it using TCP (TCP://domain.com:61616).
I want the client to connect to my AMQ using SSL similar to the way they connect to the application (through HTTPS).
I have added a listener to the ELB where the client connects using SSL (SSL://domain.com:61616) and the ELB routes to the internal port using TCP, and I have deployed the same certificate to the ELB as the one I used for the application. For example, here is what I have:
Basically I want to use SSL between the client and the ELB and TCP from ELB to the instance.
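For reference, the listener described above corresponds to this Classic ELB CLI call (the load balancer name and certificate ARN are placeholders):

# Terminate SSL on 61616 at the ELB, speak plain TCP to the instance
aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners "Protocol=SSL,LoadBalancerPort=61616,InstanceProtocol=TCP,InstancePort=61616,SSLCertificateId=<certificate-arn>"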
Why doesn't this work? When I try to connect using openssl
openssl s_client -connect domain.com:61616
I get the following:
CONNECTED(00000003)
write:errno=104
no peer certificate available
No client certificate CA names sent
SSL handshake has read 0 bytes and written 247 bytes
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
Why can I not use a port other than 443 for SSL?

AWS Application Load Balancer Web Socket Issues

I have used Elastic Beanstalk to create an environment with Tomcat 8 and Java 8, configured as load-balanced with auto scaling.
Deployed a WebSocket server on this.
On the running EC2 instance, my WebSocket client (Tyrus API) is able to communicate over WebSocket, for example ws://ip/chat.
Now I need a TCP connection (count) based auto scaling strategy, for which I switched to an Application Load Balancer (ALB) with a target group pointing to this EC2 instance.
Stickiness has been enabled, and the ALB is using an HTTP listener on port 80 with a listener rule "/chat" pointing to this target group.
All involved security groups have all TCP in and out traffic enabled for testing.
Invoking ws://ELB/chat doesn't work, resulting in a 404:
Caused by: org.glassfish.tyrus.core.HandshakeException: Response code was not 101: 404
Any inputs on how this should be configured?
The final aim is to be able to communicate with the WebSocket server over the ALB and then auto scale based on the TCP "ActiveConnectionCount" metric.
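One way to verify what the listener rule is actually matching (the listener ARN is a placeholder):

# Shows each rule's path pattern and target group. Note that an ALB path
# pattern of "/chat" matches only that exact path; "/chat/*" is needed if
# the WebSocket handshake URL has anything after /chat.
aws elbv2 describe-rules --listener-arn <listener-arn>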