When a Kubernetes service is exposed via an Ingress object, is the load balancer "physically" deployed in the cluster, i.e. as some pod or controller running on the cluster nodes, or is it just another managed service provisioned by the given cloud provider?
Are there cloud-provider-specific differences? Is the answer the same for Google Kubernetes Engine and Amazon Web Services?
By default, a Kubernetes cluster has no ingress controller at all. This means that you need to deploy one yourself if you are on premises.
Some cloud providers do provide a default ingress controller in their Kubernetes offering, though, and this is the case for GKE. In their case the ingress controller is provided "as a service", but I am unsure about where exactly it is deployed.
Talking about AWS, if you deploy a cluster using kops you're on your own (you need to deploy an ingress controller yourself), but other deployment options on AWS may include an ingress controller.
I would like to make some clarifications concerning the Google ingress controller, starting from its definition:
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
First of all, if you want to understand its behaviour better, I suggest you read the official Kubernetes GitHub description of this resource.
In particular, notice that:
It is a Daemon
It is deployed in a pod
It is in kube-system namespace
It is hidden from the customer
However, you will not be able to "see" this resource, for example by running:
kubectl get all --all-namespaces
because it is running on the master and not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the official documentation:
GCE/Google Kubernetes Engine deploys an ingress controller on the master
Note that the master of any Google Kubernetes Engine cluster is not accessible to the user and is completely managed.
I will answer with respect to Google Kubernetes Engine.
Yes. Every time you deploy a new Ingress resource, a load balancer is created, which you can view from the section:
GCP Console --> Network services --> LoadBalancing
Clicking on the respective load balancer ID gives you all the details, for example the external IP, the backend service, etc.
In a Kubernetes configuration, for an external service component we use:
type: LoadBalancer
If we have a k8s cluster running inside a cloud provider like AWS, which provides its own load balancer, how does all this work then? Do we need to configure things so that one of these load balancers is not active?
AWS has now taken over the open source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (easiest) as well as non-EKS clusters (you need to install the AWS VPC CNI etc. to make IP target mode work, which is required if you have a peered VPC environment).
This is the official/native solution for managing AWS LB (a.k.a. ELBv2) resources (Application LB, Network LB) from Kubernetes; by contrast, the Kubernetes in-tree controller always reconciles Service objects with type: LoadBalancer on its own (creating Classic ELBs by default).
Once configured correctly, the AWS LB controller will manage the following two types of LBs:
Application LB, via a Kubernetes Ingress object. It operates at L7 and provides features related to HTTP.
Network LB, via a Kubernetes Service object with the correct annotations (see the example manifest below). It operates at L4 and provides fewer features, but claims much higher throughput.
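To make the second case concrete, here is a minimal sketch of a Service handed over to the AWS Load Balancer Controller via annotations so that it provisions a Network LB with IP targets. The name, selector and ports are hypothetical, and the exact annotation set can vary with the controller version:

apiVersion: v1
kind: Service
metadata:
  name: my-backend  # hypothetical name
  annotations:
    # hand this Service to the AWS Load Balancer Controller
    # instead of the legacy in-tree cloud provider
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # register pod IPs directly as targets (IP target mode)
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080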
To my knowledge, this works best when used together with external-dns: it automatically updates your Route53 records with your LB's A records, which makes the whole service discovery solution k8s-y.
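As a rough sketch of how that wiring looks, external-dns keys off an annotation like the one below (the hostname is hypothetical, and external-dns must already be deployed with access to the Route53 zone):

apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
  annotations:
    # external-dns watches for this annotation and creates/updates
    # the matching Route53 record, pointing it at the provisioned LB
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080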
Also, in general, you should avoid the Classic ELB, as it's marked as deprecated by AWS.
I have two EKS clusters in a VPC.
Cluster A running in the public subnet of the VPC [the frontend application is deployed here]
Cluster B running in the private subnet of the VPC [the backend application is deployed here]
I would like to establish networking between these two clusters such that the pods from cluster A are able to communicate with the pods from cluster B.
At a high level, you will need to expose the backend application via a K8s service. You'd then expose this service via an ingress object (see here for the details and how to configure it). Front-end pods will automatically be able to reach this service endpoint if you point them to it. It is likely that you will want to do the same thing to expose your front-end service (via an ingress).
Usually an architecture like this is deployed into a single cluster, and in that case you'd only need one ingress for the front end; the back end would be reachable through standard in-cluster discovery of the back-end service. But because you are doing this across clusters, you have to expose the back-end service via an ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
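As a minimal sketch of the cross-cluster part (all names and the hostname are hypothetical, and it assumes the AWS Load Balancer Controller is installed in cluster B, with an internal ALB since the backend sits in a private subnet):

# in cluster B: a Service fronting the backend pods...
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
---
# ...and an Ingress exposing that Service outside the cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    alb.ingress.kubernetes.io/scheme: internal  # keep the backend private
spec:
  ingressClassName: alb
  rules:
    - host: backend.internal.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80

Front-end pods in cluster A would then be pointed at backend.internal.example.com, e.g. via an environment variable.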
I was using AWS ECS Fargate to run my application, and I am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer; it seems that Kubernetes itself has a LoadBalancer service type, and that it creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
Yes, you need it to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the following link explains how to use an ALB as the Ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage apps. In some cases there is tight coupling between the app and the configuration of your load balancer, so it makes sense to manage the infrastructure the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as an L4 load balancer). There is also the Kubernetes Ingress, which corresponds to an application load balancer (also known as an L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an Ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with using Route53.
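For reference, installation is typically done with Helm, roughly like this (a sketch assuming the IAM role and service account prerequisites from the AWS docs are already in place; my-cluster is a placeholder):

helm repo add eks https://aws.github.io/eks-charts
helm repo update
# install the controller into kube-system; the service account is assumed
# to exist already, bound to an IAM role with the required permissions
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller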
See the EKS docs about EKS network load balancing and EKS application load balancing
As already mentioned by the others: no, it is not required, but it is very helpful to use an ALB.
There are a couple of different solutions for that. My favorite solution is:
Use an Ingress controller like ingress-nginx (there are multiple different Ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the Ingress controller Service to use NodePort and a port like 30080 (see the sketch after this list)
Create your own AWS ALB, with Terraform for example, and add the NodePort 30080 to the target group
Create an Ingress resource to configure the Ingress controller
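A minimal sketch of step 2, assuming the namespace and labels used by the official ingress-nginx manifests:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080  # the port you register in the ALB target group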
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
When creating ALB rules in the AWS console, is there an ingress controller running in the background?
I'm a bit confused while learning about the ALB ingress controller. I thought it would just be API calls to the ALB service in AWS, but the instructor seems to install an AWS ingress controller and then write rules to redirect paths to NodePort services.
What's the difference between creating the rules in the AWS console and doing it via the Kubernetes cluster?
This is the controller that's being installed.
https://github.com/kubernetes-sigs/aws-load-balancer-controller
In general, in Kubernetes, a controller is a running program that observes some Kubernetes objects and manages other objects based on that. For example, there is a controller that observes Deployments and creates ReplicaSets from them, and a controller that observes ReplicaSets and creates matching Pods. In this case, the ALB ingress controller observes Ingress objects and creates the corresponding AWS ALB rules.
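For example, here is a hedged sketch of an Ingress that this controller would turn into ALB listener rules (the names and paths are hypothetical; target-type: instance makes the ALB route via the NodePorts of the referenced services):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance  # route via NodePorts
spec:
  rules:
    - http:
        paths:
          # each path becomes an ALB listener rule with its own target group
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service  # hypothetical
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # hypothetical
                port:
                  number: 80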
Nothing stops you from doing the same setup by hand, or using a tool like Terraform that's a little more specialized for managing cloud resources. Using Ingress objects is a little more portable across environments (you can use a similar Ingress setup for a local minikube installation, for example). This approach would also let you use a single tool like Helm to create a Deployment, Service, and Ingress, and have the ALB automatically updated; if you uninstall the Helm release, it will delete the Kubernetes objects, and the ALB will again update itself.
It's also possible that a development team would have permissions in Kubernetes to create Ingress objects, but wouldn't (directly) have permissions in AWS to create load balancers, and this setup might make sense depending on your local security and governance requirements.
Just check my diagram first.
So, from a high-level point of view, you have an AWS ALB which redirects the traffic to the underlying cluster. The Ingress controller is then responsible for redirecting the traffic to the correct Kubernetes Service, and the Service redirects the traffic to one of its pods. There are multiple different solutions for that.
My favorite solution is:
Use an Ingress controller like ingress-nginx (there are multiple different Ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the Ingress controller Service to use NodePort as the type and a port like 30080
Create your own AWS ALB, with Terraform for example, and add the NodePort 30080 to the target group
Create an Ingress resource to configure the Ingress controller
I hope the diagram helps you understand the purpose of an ALB together with an Ingress controller.
If you still have questions, just ask here.
Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point them at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools and sends traffic to a NodePort for your service.
However, it would be preferable to have the LB created automatically, like Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this set up. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
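Usage is roughly as follows, a sketch based on the project README (the ingress spec, GCP project and kubeconfig listing the member clusters are placeholders you'd fill in):

# create a multi-cluster ingress across all clusters in clusters.yaml
kubemci create my-mci \
    --ingress=ingress.yaml \
    --gcp-project=my-project \
    --kubeconfig=clusters.yaml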