Kubernetes LoadBalancer service vs cloud load balancer - amazon-web-services

In a Kubernetes configuration, for an externally reachable service component we use:
type: LoadBalancer
If the k8s cluster is running inside a cloud provider like AWS, which provides its own load balancer, how does all this work? Do we need to configure things so that one of these load balancers is not active?

AWS now maintains the open-source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (the easiest case) as well as non-EKS clusters (where you need to install the AWS VPC CNI etc. to make IP target mode work, which is required if you have a peered-VPC environment).
This is the official/native solution for managing AWS LB (a.k.a. ELBv2) resources (Application ELB, Network ELB) from Kubernetes; without it, the in-tree Kubernetes controller reconciles Service objects of type: LoadBalancer into Classic ELBs by default.
Once configured correctly, the AWS LB controller will manage the following two types of LBs:
Application LB, via the Kubernetes Ingress object. It operates at L7 and provides HTTP-related features.
Network LB, via a Kubernetes Service object with the correct annotations. It operates at L4 and provides fewer features, but with claimed much higher throughput.
To my knowledge, this works best together with external-dns: it automatically updates your Route 53 records with A/ALIAS records for your LBs, which makes the whole service-discovery solution Kubernetes-native.
Also, in general, you should avoid the Classic ELB, as AWS has marked it deprecated.
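For concreteness, here is a minimal sketch of both flavors, assuming the AWS LB controller is installed; the names (my-app, my-service, my-ingress, app.example.com) are placeholders, and the annotations are the documented ones for the controller's NLB and ALB modes:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Hand reconciliation of this Service to the AWS LB controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # Register pod IPs directly as targets (IP target mode)
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80

The first object provisions a Network LB (L4), the second an Application LB (L7) routing app.example.com to my-service.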

Related

AWS EKS - Multi-cluster communication

I have two EKS clusters in a VPC.
Cluster A runs in a public subnet of the VPC [the frontend application is deployed here].
Cluster B runs in a private subnet of the VPC [the backend application is deployed here].
I would like to set up networking between these two clusters such that pods from cluster A can communicate with pods from cluster B.
At a high level, you will need to expose the backend application via a K8s Service. You'd then expose this Service via an Ingress object (see here for the details and how to configure it). Frontend pods will automatically be able to reach this Service endpoint if you point them to it. You will likely want to do the same thing to expose your frontend service (via an Ingress).
Usually an architecture like this is deployed into a single cluster, in which case you'd only need one Ingress for the frontend, and the backend would be reachable through standard in-cluster discovery of the backend Service. But because you are doing this across clusters, you have to expose the backend Service via an Ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
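As a rough sketch of that high-level flow (assuming the AWS Load Balancer Controller is installed in cluster B; all names are placeholders), the backend would be exposed roughly like this, and frontend pods would then be pointed at the resulting Ingress hostname:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    # An internal ALB, since cluster B lives in a private subnet
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80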

Do I need an AWS ALB for an application running in EKS?

I was using AWS ECS Fargate to run my application. I am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer, and it seems that Kubernetes itself has a LoadBalancer service type, and that it creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route 53? Does it change if I redeploy the service?
Yes, you need it to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the following link explains how to use an ALB as the Ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage apps, and in some cases there is a tight coupling between the app and the configuration of your load balancer, so it makes sense to manage that infrastructure the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as an L4 load balancer). There is also the Kubernetes Ingress, which corresponds to an application load balancer (also known as an L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an Ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for the network-load-balancer case, e.g. IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with Route 53 (see the sketch below).
See the EKS docs about EKS network load balancing and EKS application load balancing.
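As an illustration of the Elastic IP feature (a sketch; the eipalloc-... IDs and names are placeholders, and the annotation names are taken from the AWS Load Balancer Controller docs):

apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # One EIP allocation per public subnet the NLB is placed in (placeholder IDs)
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-0abc123,eipalloc-0def456
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443

Because the Elastic IPs are fixed, a plain A record in Route 53 pointing at them keeps working across redeployments.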
As already mentioned by the others, an ALB is NOT required, but it is very helpful to use one.
There are a couple of different solutions; my favorite is:
Use an Ingress controller like ingress-nginx (there are multiple Ingress controllers available for Kubernetes; a very good comparison is provided here).
Configure the Ingress controller's Service to use NodePort, with a port like 30080 (see the sketch after this list).
Create your own AWS ALB (with Terraform, for example) and add NodePort 30080 to the target group.
Create an Ingress resource to configure the Ingress controller.
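A minimal sketch of the NodePort step, assuming ingress-nginx was installed with its standard labels (the name, namespace, and selector labels depend on how you installed it):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080
    - name: https
      port: 443
      targetPort: https
      nodePort: 30443

The Terraform-managed ALB's target group would then point at port 30080 (and 30443) on the cluster's node group, and the node security group must allow those ports from the ALB.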
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route 53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks

Expose Kubernetes services running in EKS through API Gateway

I am new to Kubernetes and AWS and am exploring different AWS technologies for a project. One thing I am doing as part of that is seeing how routes in API Gateway can connect to an EKS cluster (in a VPC).
This is what I have working:
An EKS Cluster
In the EKS cluster I have the nginx ingress-controller running
I have an EC2 instance inside the VPC and have verified that I can reach a service running in the cluster from it by using the ingress-controller URL
This is what I am trying:
I tried to create an API Gateway route to access the same service using the ingress-controller URL. To achieve that, I followed the steps here (because my cluster is in a VPC): https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-nlb-for-vpclink-using-console.html
One thing that is not clear to me is how to specify the ingress-controller URL as a target for the NLB. The only targets I can specify are EC2 instances, but I want to direct the traffic through the ingress controller (which is a Service of type LoadBalancer in K8s).
If I am going about this the wrong way, please advise on the right way to expose an EKS cluster in API Gateway through the nginx ingress controller. Thanks!
I have found the problem. When using the nginx-ingress-controller, I just had to add the annotation specifying that the load balancer is of type "nlb":
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Once I deployed the ingress controller with this annotation, it automatically created an NLB in AWS and set the targets according to the Ingress definitions. I had been creating a new NLB myself and then trying to point it at the ingress controller, which is neither needed nor the right way.
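For reference, a sketch of where that annotation lives (the Service name, namespace, and labels depend on how the nginx ingress controller was installed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Tells the cloud provider to provision an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

That auto-created NLB is then the one you attach to the API Gateway VPC link.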

Are Kubernetes Ingress objects deployed in the cluster?

When a Kubernetes service is exposed via an Ingress object, is the load balancer "physically" deployed in the cluster, i.e. as some pod-based controller running on the cluster nodes, or is it just another managed service provisioned by the given cloud provider?
Are there cloud-provider-specific differences? Does the answer hold for both Google Kubernetes Engine and Amazon Web Services?
By default, a Kubernetes cluster has no Ingress controller at all. This means that you need to deploy one yourself if you are on premises.
Some cloud providers do provide a default Ingress controller in their Kubernetes offering, though, and this is the case with GKE. In their case the Ingress controller is provided "as a service", but I am unsure about where exactly it is deployed.
Talking about AWS: if you deploy a cluster using kops you're on your own (you need to deploy an Ingress controller yourself), but other deployment options on AWS may include an Ingress controller.
I would like to make some clarifications concerning the Google Ingress controller, starting from its definition:
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
First of all, if you want to understand its behaviour better, I suggest you read the official Kubernetes GitHub description of this resource.
In particular, notice that:
It is a daemon
It is deployed in a pod
It is in the kube-system namespace
It is hidden from the customer
However, you will not be able to "see" this resource, for example by running kubectl get all --all-namespaces, because it runs on the master and is not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the official documentation:
GCE/Google Kubernetes Engine deploys an ingress controller on the master
Note that the master of any Google Kubernetes Engine cluster is not accessible to the user and is completely managed.
I will answer with respect to Google Kubernetes Engine.
Yes, every time you deploy a new Ingress resource, a load balancer is created, which you can view from the section:
GCP Console --> Network services --> Load balancing
Clicking on the respective load balancer ID gives you all the details, for example the external IP, the backend service, etc.
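For example (illustrative; my-ingress is a placeholder name), the address of the provisioned load balancer also shows up on the Ingress resource itself:

kubectl get ingress my-ingress
# the ADDRESS column shows the LB's external IP once provisioning completes
kubectl get ingress my-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'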

Make k8s services available via ingress on an AWS cluster created with kops

After trying Kubernetes on a few KVMs with kubeadm, I'd like to set up a proper auto-scalable cluster on AWS with kops and serve a few websites with it.
The mind-blowing magic of kops create cluster ... gives me a bunch of EC2 instances, makes the k8s API available at test-cluster.example.com, and even configures my local ~/.kube/config so that I can kubectl apply -f any-stuff.yaml right away. This is just great!
I'm at the point where I can send my deployments to the cluster and configure the ingress rules; all this is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.
In my small KVM k8s setup I simply install traefik and expose it on ports :80 and :443. Then I go to my DNS settings and add a few A records, which point to the public IP(s) of my cluster node(s). On AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So it feels like I need to use an external load balancer, given that my traefik Helm-chart Service exposes two random ports instead of fixed :80 and :443, but I'm not sure.
What are the options? What do they cost? What should go into the DNS records in case the domains are not managed by AWS?
Configuring your service as a LoadBalancer service is not sufficient for your cluster to set up the actual load balancer; you need an ingress controller running, such as the nginx one below.
You should add the kops nginx ingress addon: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
In this case the nginx ingress controller on AWS will find the Ingress and create an AWS ELB for it. I am not sure of the cost, but it's worth it.
You can also consider NodePorts, which you can access via the nodes' public IPs and the node port (be sure to add a rule to your security group).
You can also consider the newer AWS ELBv2, a.k.a. ALB, which supports HTTP/2 and WebSockets. You can use the alb-ingress-controller https://github.com/coreos/alb-ingress-controller for this.
Finally, if you want SSL (which you should), consider the kube-lego project, which will automate getting SSL certificates for you: https://github.com/jetstack/kube-lego
In my case I used the nginx-ingress-controller. I think the setup with traefik would be the same.
1) Set the traefik Service type to LoadBalancer.
Kubernetes will then provision an ELB for it.
2) Set a CNAME or ALIAS record in Route 53 pointing to the ELB hostname.
You can use https://github.com/kubernetes-incubator/external-dns to synchronize exposed Services and Ingresses with Route 53.
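For example (a sketch; www.example.com and the labels are placeholders), external-dns watches for an annotation like this on Services and Ingresses and creates the matching Route 53 record:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    # external-dns creates/updates this record in the matching Route 53 zone
    external-dns.alpha.kubernetes.io/hostname: www.example.com
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443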