I currently have a setup where I have 1 NLB in my EKS cluster. I deployed ServiceA and ServiceB and both of them use the same NLB.
However, I am curious to know whether it is possible to create more than one NLB in an EKS cluster.
If yes, which use case would this be useful for?
And how would I specify ServiceC to use NLB1 and ServiceD to use NLB2?
I did not find specific documentation surrounding this and any pointers on this would be helpful. Thanks!
It is possible.
I've never done it with NLBs, but this should be as simple as deploying a second Service of type LoadBalancer with the annotation indicating it's an NLB rather than an ELB.
As to use case, a few that spring to mind:
strict requirements for segregation of traffic
namespacing of project resources
Routing would be accomplished by binding the Deployment manifests for ServiceC to the Service for NLB1 and the Deployment manifests for ServiceD to the Service for NLB2. Services route to Pods through selectors, so it's merely a matter of ensuring your mapping is correct.
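A minimal sketch of what that could look like with the legacy in-tree annotation (all names, labels, and ports here are placeholders): each Service of type LoadBalancer provisions its own NLB, and the selector pins each Service to one workload's Pods.

```yaml
# Hypothetical sketch: two Services, each provisioning its own NLB.
apiVersion: v1
kind: Service
metadata:
  name: service-c
  annotations:
    # Ask the in-tree AWS cloud provider for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: service-c   # must match the Pod labels in ServiceC's Deployment
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-d
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: service-d   # must match the Pod labels in ServiceD's Deployment
  ports:
    - port: 80
      targetPort: 8080
```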
In a Kubernetes configuration, for an externally exposed service component we use:
type: LoadBalancer
If our k8s cluster runs inside a cloud provider like AWS, which provides its own load balancer, how does all this work? Do we need to configure things so that one of these load balancers is not active?
AWS has now taken over the open source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (easiest) as well as non-EKS clusters (you need to install the AWS VPC CNI, etc., to make IP target mode work, which is required if you have a peered VPC environment).
This is the official/native solution for managing AWS LB (aka ELBv2) resources (Application LB, Network LB) using K8s. The Kubernetes in-tree controller always reconciles Service objects with type: LoadBalancer.
Once configured correctly, AWS LB controller will manage the following 2 types of LBs:
Application LB, via the Kubernetes Ingress object. It operates at L7 and provides HTTP-related features.
Network LB, via a Kubernetes Service object with the correct annotations. It operates at L4 and provides fewer features, but claims much higher throughput.
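For example, with the controller installed, an NLB-backed Service is requested with annotations roughly like these (a sketch; the name, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Hand the Service to the AWS Load Balancer Controller instead of the in-tree controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # Register Pod IPs directly as targets (IP target mode, needs the AWS VPC CNI)
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app      # placeholder; must match your Pod labels
  ports:
    - port: 80
      targetPort: 8080
```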
To my knowledge, this works best together with external-dns -- it automatically updates your Route53 records with your LB's A records, which makes the whole service discovery solution Kubernetes-native.
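For instance, an annotation like the following on the Service metadata (the hostname is a placeholder) lets external-dns create and maintain the Route53 record for you:

```yaml
metadata:
  annotations:
    # Picked up by external-dns, which creates/updates a Route53 record
    # pointing at the load balancer's DNS name.
    external-dns.alpha.kubernetes.io/hostname: my-service.example.com
```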
Also, in general, you should avoid using the Classic ELB, as AWS has marked it as deprecated.
I'm pretty new to k8s and I'm trying to figure out how to expose multiple HTTP services to the Internet in a cheap manner. Currently I'm using an AWS EKS cluster with managed node groups, so the cheap way would be to not provision any kind of ELB, since it costs money. I would also like those services to sit in private subnets, so that only the Ingress resource is exposed and the nodes stay private. One load balancer per service is definitely not an option, as it would break my budget.
The options I consider:
Using K8s Ingress resources (to be precise: the Istio ingress controller). The downside is that when we create an Ingress resource, AWS creates a load balancer, which I will need to pay for.
Run the node groups in public subnets and create K8s Services of type NodePort, so I could reach each service at NodeIP:NodePort (the NodePort being specific to each service). The downside is that I would need to remember all the IPs and ports assigned to each service. I can live with one service, but as the number grows, that becomes pretty awful to keep track of (a sketch of such a Service appears below, after this list).
Lastly, failing any other option, I could create one load balancer with a public IP plus an ingress controller with Istio. I would then reach each service via the load balancer's single DNS name and route to services by request path.
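For reference, a rough sketch of what one Service under option 2 would look like (names and the port number are placeholders; the nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: NodePort
  selector:
    app: service-a   # placeholder; must match the Pod labels
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30081   # reachable at <any-node-ip>:30081
```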
Looking forward to any solution and inputs.
I don't think there is any magic here. Options 1 and 3 are basically one and the same (unless I am missing something). As you pointed out, option 2 isn't viable, for the reasons you call out. You have a number of options to choose from. I don't know the Istio ingress (but I assume it will be fine). We often see customers using either the NGINX ingress or the ALB ingress controller.
All of these options require a Load Balancer.
I was using AWS ECS Fargate to run my application, and I am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer, and it seems that Kubernetes itself has a load balancer Service type, and that it creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
Yes, you need it to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the following link explains how to use an ALB as the Ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage your apps, and in some cases there is tight coupling between the app and the configuration of your load balancer, so it makes sense to manage the infrastructure the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as an L4 load balancer). There is also the Kubernetes Ingress, which corresponds to an application load balancer (also known as an L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an ingress controller. For AWS, you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. using IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with using Route53.
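As an illustration, a minimal Ingress that the AWS Load Balancer Controller would reconcile into an ALB could look like this (a sketch; the names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route straight to Pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder backend Service
                port:
                  number: 80
```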
See the EKS docs about EKS network load balancing and EKS application load balancing
As the others already mentioned: it is NOT required, but it is very helpful to use an ALB.
There are a couple of different solutions for this; my favorite is:
Use an ingress controller like ingress-nginx (there are multiple different ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the ingress controller's Service to use type NodePort with a port like 30080 (see the sketch after this list)
Create your own AWS ALB (with Terraform, for example) and add the NodePort 30080 to the target group
Create an Ingress resource to configure the ingress controller
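A rough sketch of step 2 (the namespace and selector labels are assumptions; they depend on how ingress-nginx was installed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # the fixed port the ALB's target group points at
```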
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
When creating ALB rules in the AWS console, is there an ingress controller running in the background?
I'm a bit confused while learning about the ALB ingress controller. I thought it would just be API calls to the ALB service in AWS, but the instructor installs an AWS ingress controller and then writes rules to redirect paths to NodePort services.
What's the difference between creating the rules in the AWS console and doing it via the Kubernetes cluster?
This is the controller that's being installed.
https://github.com/kubernetes-sigs/aws-load-balancer-controller
In general, in Kubernetes, a controller is a running program that observes some Kubernetes objects and manages other objects based on that. For example, there is a controller that observes Deployments and creates ReplicaSets from them, and a controller that observes ReplicaSets and creates matching Pods. In this case, the ALB ingress controller observes Ingress objects and creates the corresponding AWS ALB rules.
Nothing stops you from doing the same setup by hand, or using a tool like Terraform that's a little more specialized for managing cloud resources. Using Ingress objects is a little more portable across environments (you can use a similar Ingress setup for a local minikube installation, for example). This approach would also let you use a single tool like Helm to create a Deployment, Service, and Ingress, and have the ALB automatically updated; if you delete the Helm chart, it will delete the Kubernetes objects, and the ALB will again update itself.
It's also possible that a development team would have permissions in Kubernetes to create Ingress objects, but wouldn't (directly) have permissions in AWS to create load balancers, and this setup might make sense depending on your local security and governance requirements.
Just check my diagram first.
So, from a high-level point of view, you have an AWS ALB that forwards the traffic to the underlying cluster. The ingress controller is then responsible for redirecting the traffic to the correct Kubernetes Service, and the Service redirects the traffic to one of its Pods. There are multiple different solutions for this.
My favorite solution is:
Use an ingress controller like ingress-nginx (there are multiple different ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the ingress controller's Service to use type NodePort with a port like 30080
Create your own AWS ALB (with Terraform, for example) and add the NodePort 30080 to the target group
Create an Ingress resource to configure the IngressController
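And for step 4, a minimal Ingress consumed by the ingress-nginx controller could look like this (the hostname and backend Service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder backend Service
                port:
                  number: 80
```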
So I hope the diagram helps you understand the roles of the ALB and the ingress controller.
If you still have questions, just ask here.
I have two microservices that need to be deployed in the same ECS service for efficient resource usage.
Both of them have the same context path, so I cannot use the path-pattern condition in the ALB, and ECS doesn't seem to allow multiple ALBs for a single service.
Is it possible to have two target groups serving the microservices on different ports? Or is there any other solution?
Yes, you can have two different target groups, each with a unique port, under the same ALB. I use this construction to support the HTTP and HTTPS protocols on the same instance with an ALB. It should be the same for ECS.
You can definitely have a single ALB serving up two different microservices on different ports of ECS instances. Typically when you're going this far, you might want to look at dynamic port mapping. The ALB still needs a way to decide which target group to go to -- hostname matching, for instance.
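As an illustration of host-name matching, routing two hostnames to two target groups under one ALB could be sketched in CloudFormation like this (all resource references and hostnames are placeholders):

```yaml
Resources:
  # Hypothetical sketch: two listener rules on one ALB, each forwarding a
  # different hostname to its own target group. The listener and target
  # groups referenced here are placeholders defined elsewhere.
  ServiceARule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref AlbListener
      Priority: 10
      Conditions:
        - Field: host-header
          HostHeaderConfig:
            Values: [service-a.example.com]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ServiceATargetGroup
  ServiceBRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref AlbListener
      Priority: 20
      Conditions:
        - Field: host-header
          HostHeaderConfig:
            Values: [service-b.example.com]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ServiceBTargetGroup
```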
What I'm not totally sure I understand is why you want to share an ECS service -- why not put each microservice in its own ECS service and share an ALB instead?
Anyway, both are likely possible. I have several microservices, each with their own ECS service sharing ECS instances and an ALB in a cluster using host name matching on the ALB. If you really want to use a single ECS service, it seems like it would still be possible.