Uniform balancing with AWS Network Load Balancer

We have several servers behind an AWS network load balancer.
The algorithm used for balancing traffic is a flow hash, described as follows:
"With Network Load Balancers, the load balancer node that receives the connection uses the following process:
Selects a target from the target group for the default rule using a flow hash algorithm. It bases the algorithm on:
The protocol
The source IP address and source port
The destination IP address and destination port
The TCP sequence number
Routes each individual TCP connection to a single target for the life of the connection.
The TCP connections from a client have different source ports and sequence numbers, and can be routed to different targets."
Due to the persistence of connections, server load may become unbalanced, which can cause problems.
How can the Network Load Balancer be configured to route new connections to the server with the least load?

ALBs now support Least Outstanding Requests routing. NLB does not appear to support this (yet?).
Is there any possibility of adapting your load balancing strategy from NLBs to ALBs?
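If you do move to an ALB, the routing algorithm is a target group attribute; a minimal sketch with boto3 (the target group ARN is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")
    # Switch the ALB target group from the default round_robin to
    # least_outstanding_requests routing.
    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/my-tg/abc123",
        Attributes=[
            {"Key": "load_balancing.algorithm.type",
             "Value": "least_outstanding_requests"},
        ],
    )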

AWS network load balancer ping fails from terminal

We have configured our website server with network load balancing. When we try to ping our domain name from a terminal, all pings are lost.
I tried to figure it out but have no clue how to configure the NLB to respond to pings from a terminal.
With an NLB you need to create one or more listeners and route them to specific targets to serve the intended requests.
Network traffic that does not match a configured listener is classified as unintended traffic. ICMP requests other than Type 3 (unreachable) are also considered unintended traffic. Network Load Balancers drop unintended traffic without forwarding it to any targets.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html
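For example, adding a TCP listener with boto3 might look like this (the ARNs are placeholders). Note that ping itself (ICMP echo) will still be dropped, so test reachability against the listener's TCP port instead, e.g. with telnet or curl:

    import boto3

    elbv2 = boto3.client("elbv2")
    # An NLB only forwards traffic that matches a listener; everything
    # else, including ICMP echo (ping), is dropped as unintended traffic.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/abc",
        Protocol="TCP",
        Port=443,
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/def",
        }],
    )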

Has anyone identified a good solution for IPv6 in GKE, Google Game Servers, or Agones?

I am currently hosting a game service with Google Game Servers (https://cloud.google.com/game-servers). This is basically running Agones in GKE. This works great for the most part, except when it comes to IPv6.
I am struggling to find any suggestions on how to make this kind of setup IPv6 compatible. It seems like this should be the answer: https://cloud.google.com/load-balancing/docs/ipv6. However, Agones is set up to run servers across a port range as it spins up and shuts down servers, and it seems I would need a specific port and non-persistent connections to a specific machine to use Google's load balancer solution.
For reference, this is a NodeJS backend relying on socket.io communication.
Any suggestions would be appreciated.
As already stated in the comments, Google Cloud VPCs do not support IPv6 connectivity:
Google Cloud VPCs do not support IPv6. A few public facing services such as HTTPS Load Balancers do support IPv6 but that will not help you with internal services. – John Hanley Sep 29 at 12:23
If your stack requires IPv6 connectivity, unfortunately you won't be able to deploy it on Google Kubernetes Engine for the time being, as it is subject to the same rules as any other Compute resource on GCP and uses the same VPC network.
As you can read in the official VPC specifications:
VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network; VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. However, it is possible to create an IPv6 address for a global load balancer.
As for the global load balancers (which do support IPv6), here is all the information you need:
Google Cloud supports IPv6 clients with HTTP(S) Load Balancing, SSL Proxy Load Balancing, and TCP Proxy Load Balancing. The load balancer accepts IPv6 connections from your users, and then proxies those connections to your backends.
You can configure both IPv4 and IPv6 external addresses for the following:
external HTTP(S) load balancers
SSL proxy load balancers
TCP proxy load balancers
The protocols and port ranges supported by each of them are listed in their individual specifications (all links are available above).
SSL Proxy Load Balancing:
This does not affect SSL proxy load balancers. External forwarding rules, which are used in the definition of an SSL load balancer, can only reference TCP ports 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300. Traffic with a different TCP destination port is not forwarded to the load balancer's backend.
TCP Proxy Load Balancing:
TCP Proxy Load Balancing is intended for TCP traffic on specific well-known ports, such as port 25 for Simple Mail Transfer Protocol (SMTP). For more information, see Port specifications. For client traffic that is encrypted on these same ports, use SSL Proxy Load Balancing.
with one caveat:
Note: TCP Proxy Load Balancing doesn't support TCP ports 80 or 8080. For HTTP traffic, use HTTP(S) Load Balancing.
When it comes to External HTTP(S) Load Balancing, its name speaks for itself.
So if you instead need to use arbitrary port ranges, as you mentioned, the answer is: no, unfortunately you can't do that using Google Cloud Load Balancing solutions.
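For completeness, if you could live with the fixed ports of a proxy-based load balancer, reserving the global IPv6 address it needs might look like this with the google-cloud-compute Python client (a sketch; the project and resource names are placeholders):

    from google.cloud import compute_v1

    # Reserve a global external IPv6 address for use with an HTTP(S),
    # SSL Proxy, or TCP Proxy Load Balancing frontend.
    address = compute_v1.Address(
        name="game-lb-ipv6",      # placeholder name
        ip_version="IPV6",
        address_type="EXTERNAL",
    )
    client = compute_v1.GlobalAddressesClient()
    client.insert(project="my-project", address_resource=address)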

TCP Connection forcibly closed by pass-through load balancer?

I've set up a TCP network load balancer, as described here: https://cloud.google.com/load-balancing/docs/network. I need to balance traffic from anywhere on the internet to my backend VMs, running a custom application listening on a non-standard TCP port.
Everything seems to work initially, but after about 10 seconds the connected clients are disconnected, reporting the error "An existing connection was forcibly closed by the remote host." For debugging, I allowed my backend VMs to have public IPs; when connecting to any of them directly, bypassing the load balancer, everything works and there's no disconnect.
As I understand it, the load balancer setup I'm using should be pass-through: once the backend VM is selected, the TCP connection should essentially be with the backend VM, with the load balancer no longer involved. The backend VMs are certainly not terminating the connection forcibly; as far as the backends are concerned, the connection persists after the client disconnects and only times out later. The timeout settings described for other Google Cloud load balancers don't seem to apply to External TCP/UDP Network Load Balancing.
What am I missing?
TCP/UDP network load balancers are pass-through load balancers and do not proxy connections to your backend instances, so your backends receive the original client request. The network load balancer doesn't do any Transport Layer Security (TLS) offloading or proxying. Traffic is directly routed to your VMs.
Confirm that your network load balancer is set up correctly using these steps.
Ensure that the server software running on your backend VMs is listening on the IP address of the load balancer's forwarding rule.
Make sure you've configured firewall rules that allow the source IP ranges used for network load balancing health checks.
Additionally, you can capture a tcpdump to narrow down your issue, which may point to the specific resource involved.
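On the second point above, a pass-through load balancer preserves the forwarding rule's IP as the packet's destination address, so the backend process must accept it. A minimal sketch of a server bound to all interfaces (port 5000 stands in for your custom port):

    import socket

    # Bind to 0.0.0.0 so the process also accepts connections addressed
    # to the load balancer's forwarding-rule IP, which a pass-through
    # network LB leaves unchanged in the packet.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 5000))
    srv.listen()
    while True:
        conn, peer = srv.accept()
        conn.sendall(b"hello from backend\n")
        conn.close()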

Kubernetes nginx ingress path-based routing of HTTPS in AWS

Question: Within Kubernetes, how do I configure the nginx ingress to treat traffic from an elastic load balancer as HTTPS, when it is defined as TCP?
I am working with a Kubernetes cluster in an AWS environment. I want to use an nginx ingress to do path-based routing of the HTTPS traffic; however, I do not want to do SSL termination or reencryption on the AWS elastic load balancer.
The desired setup is:
client -> elastic load balancer -> nginx ingress -> pod
Requirements:
1. The traffic must be end-to-end encrypted.
2. An AWS ELB must be used (the traffic cannot go directly into Kubernetes from the outside world).
The problem I have is that to do SSL passthrough on the ELB, I must configure the ELB for TCP traffic. However, when the ELB is defined as TCP, all traffic bypasses nginx.
As far as I can tell, I can set up a TCP passthrough via a ConfigMap, but that is merely another passthrough; it does not allow me to do path-based routing within nginx.
I am looking for a way to define the ELB as TCP (for passthrough) while still having the ingress treat the traffic as HTTPS.
I can define the ELB as HTTPS, but then there is a second, unnecessary negotiate/break/reencrypt step in the process that I want to avoid if at all possible.
To make it clearer I'll start from the OSI model, which tells us that TCP is a level 4 protocol and HTTP/HTTPS is a level 7 protocol. So, frankly speaking, HTTP/HTTPS data is encapsulated in TCP data before the remaining levels of encapsulation are applied to transfer the packet to another network device.
If you set up a Classic (TCP) LoadBalancer, it stops reading packet data after reading the TCP part, which is enough to decide (according to the LB configuration) to which IP address and to which IP port this data packet should be delivered. After that, the LB takes the TCP payload data, wraps it in another TCP layer, and sends it to the destination point (which in turn causes all the other OSI layers to be applied).
To make your configuration work as expected, you need to expose the nginx-ingress-controller Pod using a NodePort service (a sketch of such a Service follows the endpoint list below). Then the Classic ELB can be configured to deliver traffic to any cluster node on the port selected for that NodePort service; usually it is between 30000 and 32767. So your LB pool will look like the following:
Let's imagine the cluster nodes have IP addresses 10.132.10.1...10 and the NodePort port is 30276.
ELB Endpoint 1: 10.132.10.1:30276
ELB Endpoint 2: 10.132.10.2:30276
...
ELB Endpoint 10: 10.132.10.10:30276
Note: In the case of AWS ELB, node DNS names should probably be used instead of IP addresses.
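As referenced above, a sketch of that NodePort Service using the official Kubernetes Python client (the name, namespace, and labels are assumptions; many deployments define this in YAML instead):

    from kubernetes import client, config

    config.load_kube_config()
    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="ingress-nginx-nodeport"),  # placeholder
        spec=client.V1ServiceSpec(
            type="NodePort",
            # The selector must match the nginx-ingress-controller Pod labels.
            selector={"app.kubernetes.io/name": "ingress-nginx"},
            ports=[client.V1ServicePort(name="https", port=443,
                                        target_port=443, node_port=30276)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="ingress-nginx", body=svc)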
So it should cause the following sequence of traffic distribution from a client to a Kubernetes application Pod:
1. The client sends a TCP packet with the HTTP/HTTPS request in its payload to ELB_IP:ELB_port (a.b.c.d:80).
2. The ELB receives the IP packet, analyzes its TCP data, finds the appropriate endpoint from the backend pool (the whole list of Kubernetes cluster nodes), creates another TCP packet with the same HTTP/HTTPS data inside, replaces the destination IP and destination TCP port with the cluster node IP and the Service NodePort TCP port (l.m.n.k:30xxx), and then sends it to the selected destination.
3. The Kubernetes node receives the TCP packet and, using iptables rules, changes the destination IP and destination port of the TCP packet again, forwarding the packet (according to the NodePort Service configuration) to the destination Pod, in this case the nginx-ingress-controller Pod.
4. The nginx-ingress-controller Pod receives the TCP packet; because according to the TCP data it has to be delivered locally, it extracts the HTTP/HTTPS data and sends the data (the HTTP/HTTPS request) to the Nginx process inside the Nginx container in the Pod.
5. The Nginx process in the container receives the HTTP/HTTPS request, decrypts it (in the case of HTTPS), and analyzes all the HTTP headers.
6. According to the nginx.conf settings, the Nginx process rewrites the HTTP request and delivers it to the cluster Service specified for the configured host and URL path.
7. The Nginx process sends the changed HTTP request to the backend application.
8. A TCP header is then added to the HTTP request and it is sent to the backend service IP_address:TCP_port.
9. The iptables rules defined for the backend Service deliver the packet to one of the Service endpoints (application Pods).
Note: To terminate SSL on the ingress controller you have to create SSL certificates that include the ELB IP and ELB FQDN in the SAN section.
Note: If you want to terminate SSL on the application Pod instead, to have end-to-end SSL encryption, you may want to configure nginx to pass the SSL traffic through.
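For that case, the ingress-nginx controller offers an SSL passthrough annotation; note that the controller must be started with --enable-ssl-passthrough, and that passthrough routes on the SNI hostname rather than the URL path. A sketch with the Python client, where names and hosts are placeholders:

    from kubernetes import client, config

    config.load_kube_config()
    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(
            name="chat-passthrough",  # placeholder
            annotations={
                # Forward the TLS stream to the backend Pod unterminated.
                "nginx.ingress.kubernetes.io/ssl-passthrough": "true",
            },
        ),
        spec=client.V1IngressSpec(rules=[client.V1IngressRule(
            host="chat.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/", path_type="Prefix",
                backend=client.V1IngressBackend(
                    service=client.V1IngressServiceBackend(
                        name="chat-backend",
                        port=client.V1ServiceBackendPort(number=443))),
            )]),
        )]),
    )
    client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)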
Bottom line: an ELB configured to deliver TCP traffic to a Kubernetes cluster works perfectly with the nginx-ingress controller if you configure it correctly.
In GKE (Google Kubernetes Engine), if you create a Service with type: LoadBalancer, it creates exactly this kind of TCP LB, which forwards traffic to a Service NodePort, and Kubernetes is then responsible for delivering it to the Pod. EKS (Elastic Kubernetes Service) from AWS works in much the same way.

How to configure the AWS Elastic Beanstalk load balancer for different layers?

I read the AWS documentation for the Elastic Beanstalk program, where AWS is responsible for scaling the servers and managing them automatically. The same documentation describes an option for changing and configuring the load balancer. In my case I want to change it to balance the requests that come to the servers at the IP network layer (L3), but it says that only HTTP and TCP can be listened for and balanced.
I am developing a chat application backend that needs to be designed with scaling in mind. How can I configure the load balancer to listen on L3?
For the chat application to work, it must make the TCP connection with the server, not the load balancer; that's why I must deliver the packets at the IP layer to the server, so that the server can establish a TCP connection with the app (if I am wrong and I can do it at the TCP layer, tell me).
If I can't, is there another option, or will I be forced to use EC2, handle all the system management overhead myself, and create my own load balancer?
ELB Classic operates at either Layer 4 or Layer 7. Those are the options.
For the chat application to work, it must make the TCP connection with the server, not the load balancer; that's why I must deliver the packets at the IP layer to the server, so that the server can establish a TCP connection with the app.
You're actually incorrect about this. If you need to know the client's source IP address, you can enable the Proxy Protocol on your ELB, and support this in your server code.
When the ELB establishes each new connection to the instance, with the Proxy Protocol enabled, it emits a single-line preamble containing the 5-tuple describing the external connection, which your application can interpret. Then it opens up the L4 connection's payload streams and is transparent for the remainder of the connection.
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html
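A minimal sketch of reading that preamble in server code (this assumes Proxy Protocol version 1, the human-readable text variant, and reduces error handling to the essentials):

    import socket

    def read_proxy_preamble(conn: socket.socket):
        r"""Read the Proxy Protocol v1 line the ELB prepends, e.g.
        'PROXY TCP4 203.0.113.7 10.0.0.5 51000 5000\r\n', and return
        the real client address as (ip, port)."""
        line = b""
        while not line.endswith(b"\r\n"):
            ch = conn.recv(1)  # byte-by-byte so no application payload is consumed
            if not ch:
                raise ConnectionError("connection closed during preamble")
            line += ch
        parts = line.decode("ascii").split()
        # parts == ['PROXY', 'TCP4', src_ip, dst_ip, src_port, dst_port]
        if not parts or parts[0] != "PROXY":
            raise ValueError("expected a Proxy Protocol preamble")
        return parts[2], int(parts[4])

Enabling the preamble itself happens on the ELB side, for example by creating a policy of type ProxyProtocolPolicyType and attaching it to the backend instance port, as described in the linked documentation.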