EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with an NLB ingress and managing my own worker nodes using a node group (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
Generally, when I have not used EKS and have created my own k8s cluster, I have spun up one NLB per service. I am not sure how it would work in the case of EKS, with one NLB ingress for the whole cluster and multiple services inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated.

when I have not used EKS and have created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. load balancing on the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. There are options, configured by annotations on the Service; the most recent feature is IP mode. See the EKS Network Load Balancing doc for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
The load balancer forwards traffic to the target Pods matched by the selector: in your Service. Each Service of type: LoadBalancer gets its own NLB, just as in a self-managed cluster, so a single NLB never has to distinguish between services.
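For example, two Services like these (the names and selectors are just placeholders, not from the question) would each get their own NLB, and each NLB only forwards to the Pods its Service's selector matches:
apiVersion: v1
kind: Service
metadata:
  name: app1-svc                  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: app1                     # only Pods labeled app=app1
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: app2-svc                  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: app2                     # only Pods labeled app=app2
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP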
The alternative is to use an Application Load Balancer (ALB), which works on the HTTP/HTTPS level using Kubernetes Ingress resources. The ALB requires an Ingress controller installed in the cluster, and the controller for the ALB was recently updated; see AWS Load Balancer Controller.
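A rough sketch of such an Ingress, assuming the AWS Load Balancer Controller is installed and reusing the placeholder service names from above, with one ALB routing by path to multiple services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-svc    # placeholder service name
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc    # placeholder service name
                port:
                  number: 80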

Related

Create static domain address for Network Load Balancer in multizone EKS cluster

I'm fairly new to the AWS EKS service, and I'm trying to deploy a UDP network load balancer.
I have an EKS cluster inside a VPC with two subnets in two availability zones, and I want to have a fixed address assigned to the NLB. Currently, I have this in my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: udpserver-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXXXX,eipalloc-YYYYY"
spec:
  selector:
    app: udpserver
  type: LoadBalancer
  ports:
    - protocol: UDP
      port: 5002
      targetPort: 5002
  externalTrafficPolicy: Local
This is the closest case that I found on Stack Overflow, but the accepted solution only works when you have just one availability zone, as each Elastic IP defined in the service.beta.kubernetes.io/aws-load-balancer-eip-allocations annotation is assigned to a subnet in a different availability zone.
So, with this approach, I have two static IP addresses pointing to the two different subnets in both availability zones, instead of one single domain name pointing to the "global" load balancer.
The problem, however, is the same: every time I deploy the service, a new NLB is created, with a different domain name.
How could I make this load balancer DNS fixed? Am I missing/misunderstanding anything?

AWS Network Load Balancer and TCP traffic with AWS Fargate

I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Using the service from inside the cluster with CLUSTER-IP works. When I apply my config with kubectl the following happens:
Service is created in K8s
NLB is created in AWS
NLB gets Status 'active'
VPC and other values for the NLB look correct
Target Group is created in AWS
There are 0 targets registered
I can't register targets because the target group expects instances, which I do not have
EXTERNAL-IP is <pending>
Listener is not created automatically
Then I create a listener for port 80 and TCP. After some waiting, an EXTERNAL-IP is assigned to the service in AWS.
My problem: it does not work. The service is not reachable using the DNS name of the NLB on port 80.
The in-tree Kubernetes Service LoadBalancer integration for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not to Fargate.
But you can now install the AWS Load Balancer Controller and use IP mode on your Service of type LoadBalancer; this also works for AWS Fargate.
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets

Connecting to Kubernetes cluster on AWS internal network

I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to do http(s) requests from cluster1 into cluster2 through a VPC peering. The VPC peering is setup and I can ping hosts from Cluster1 to hosts in Cluster2 currently.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic in the scenario above.
You can create an internal LoadBalancer.
All you need to do is to create a regular service of type LoadBalancer and annotate it with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Use an internal load balancer.
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
That will instruct the AWS cloud provider integration to allocate the ELB in a private subnet, which should make the services behind it reachable from the other VPC.
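For example, a workload in Cluster1 could then reference the internal ELB through an ExternalName Service, so application config in Cluster1 can use a stable in-cluster name (the hostname below is only a placeholder for the DNS name the internal ELB actually gets):
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service          # local alias used inside Cluster1
  namespace: test
spec:
  type: ExternalName
  # placeholder: replace with the internal ELB DNS name created in Cluster2
  externalName: internal-xxxxxxxxxx.us-east-1.elb.amazonaws.com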

Expose a Hazelcast cluster on AWS EKS with a load balancer

We have a Hazelcast 3.12 cluster running inside an AWS EKS kubernetes cluster.
Do you know how to expose a Hazelcast cluster with more than one pod, running inside an AWS EKS Kubernetes cluster, to clients outside the Kubernetes cluster?
The Hazelcast cluster has 6 pods and is exposed outside of the kubernetes cluster with a kubernetes "Service" of type LoadBalancer (AWS classic load balancer).
When I run a Hazelcast client from outside of the kubernetes cluster, I am able to connect to the Hazelcast cluster using the AWS load balancer. However, when I try to get some value from a Hazelcast map, the client fails with this error:
java.io.IOException: No available connection to address [172.17.251.81]:5701 at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:75
The error mentions the IP address 172.17.251.81. This is an internal kubernetes IP for a Hazelcast pod that I cannot connect to from outside the kubernetes cluster. I don't know why the client is trying to connect to this IP address instead of the Load Balancer public IP address.
On the other hand, when I scale the hazelcast cluster from 6 to 1 pod, I am able to connect and get the map value without any problem.
In case that you want to review the kubernetes LoadBalancer Service configuration:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-elb
  labels:
    app: hazelcast
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
    - name: tcp-hazelcast-elb
      port: 443
      targetPort: 5701
  selector:
    app: hazelcast
  type: LoadBalancer
If you expose all Pods with one LoadBalancer service, then you need to use Hazelcast Unisocket Client.
hazelcast-client:
  smart-routing: false
If you want to use the default Smart Client (which means better performance), then you need to expose each Pod with a separate service, because each Pod needs to be accessible from outside the Kubernetes cluster.
Read more in the blog post: How to Set Up Your Own On-Premises Hazelcast on Kubernetes.
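For illustration, a per-Pod Service could look roughly like this, assuming Hazelcast runs as a StatefulSet named hazelcast (you would create one such Service per Pod; names are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: hazelcast-0               # one Service per Pod: hazelcast-0, hazelcast-1, ...
spec:
  type: LoadBalancer
  selector:
    app: hazelcast
    # this label is set by the StatefulSet controller and pins the Service to a single Pod
    statefulset.kubernetes.io/pod-name: hazelcast-0
  ports:
    - port: 5701
      targetPort: 5701
      protocol: TCP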

Replacing AWS ELB in K8 cluster

I have a k8s cluster deployed in AWS using kube-aws. When I deploy a service, a new ELB is added to expose the service to the internet. Can I use an ingress controller to replace the ELB, or is there any other way to expose services other than an ELB?
First, replace type: LoadBalancer with type: ClusterIP in your service definition. Then you have to configure the Ingress and deploy a controller, like Nginx.
If you are looking for a full example, I have one here: nginx-ingress-controller.
The ingress will expose your services using some of your workers' public IPs, usually two of them. Just check your ingress with kubectl get ing -o wide and create the DNS records.
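A rough sketch of that setup, assuming an Nginx ingress controller is deployed; the service name, labels, and hostname are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-service                # placeholder name
spec:
  type: ClusterIP                 # was type: LoadBalancer before
  selector:
    app: my-app                   # placeholder label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapp.example.com     # placeholder hostname for the DNS record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80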