Kubernetes Load Balancers AWS

We've currently got a production application using Kubernetes on AWS. Everything's working very well, except I think we've made a configuration mistake.
We expose different services from within the cluster on domain names, and we're now up to about 5 different services. Kubernetes' standard way to expose these services is through load balancers, but in our config we've created 6 load balancers. As you can imagine, that many load balancers running can incur substantial cost overheads.
Is there any way to configure an individual load balancer to route to Kubernetes targets based on domain names? So we can have our domains all point at a single ELB and have it route to the correct service internally?

You can use an Ingress controller. An Ingress sets up a single AWS load balancer and can be used to expose many services. If your services are all HTTP-based, it should work quite well. For more information about Ingress, have a look at the Kubernetes docs or at the default NGINX-based implementation. If needed, there are also other implementations, for example ones based on the Envoy proxy.
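For illustration, host-based routing through a single Ingress might look like the sketch below (host names, service names, and the ingress class are hypothetical; the class depends on which controller you install):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain
spec:
  ingressClassName: nginx          # class of whichever controller you installed
  rules:
  - host: app-one.example.com      # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-one          # hypothetical Service
            port:
              number: 80
  - host: app-two.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80
```

With this, one load balancer fronts the controller, and the controller routes each Host header to the matching Service.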

Related

Routing traffic from an external load balancer to an internal load balancer on GCP

Let's assume we are laboring under the following requirements:
We would like to run our code on GCP, and wish to do so on Kubernetes as opposed to any of the managed solutions (App Engine, Cloud Run, ...).
Our services need to be exposed to the internet.
We would like to preserve client IPs (the service should be able to read them).
We would also like to deploy different versions of our services and split traffic between them based on weights.
All of the above, except for traffic splitting, can be achieved by a traditional Ingress resource, which on GCP is implemented by a global external HTTP(S) load balancer (classic). For this reason I was overjoyed when I noticed that GCP is implementing the new Kubernetes Gateway and Route resources. As explained here, in combination they are able to do everything the Ingress resource did and more, specifically weighted traffic distribution.
Unfortunately, however, once I began implementing this on a test cluster, I discovered that the Gateway resource is implemented by 4 different classes, all of which are backed by the same two load balancers already backing the existing Ingress resource. And only the two internal gateway classes, backed by GCP's internal HTTP(S) load balancer, support traffic splitting. (Screenshots comparing the gateway classes omitted.)
So, if we want to expose our services to the internet, we cannot traffic split, and if we wish to traffic split, we may only do so internally. I wasn't dismayed by this; it actually kind of makes sense. Ideally, the gke-l7-gxlb class (backed by the global external HTTP(S) load balancer (classic)) would support traffic splitting, but I've encountered this sort of architecture in orgs before: an external load balancer does SSL termination and then sends traffic to internal load balancers, which split traffic based on various rules. There are even diagrams of such setups, using multiple load balancers, on GCP's various tutorial pages. However, to finally return to the title: I cannot seem to convince GCP's external load balancer to route traffic to the internal one. It seems to be very restricted in its backend definitions, and simply choosing an IP address (provided to us upon creation of the Gateway resource, i.e. the internal load balancer) does not appear to be an option.
Is this possible? Am I completely off base here? Should I be solving this in a completely different way? Is there an easier way to tick the above 4 checkboxes? I feel like sending traffic to an IP in my VPC network should be one of an external load balancer's most basic functions, but maybe there is a reason it's not allowing me to do this?
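For reference, the weighted traffic distribution the question mentions is expressed in the Gateway API as weights on a route's backendRefs; a minimal sketch (Gateway and Service names are hypothetical, and the API version may differ by GKE release):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: weighted-split
spec:
  parentRefs:
  - name: internal-gateway     # hypothetical Gateway using an internal class
  rules:
  - backendRefs:
    - name: my-service-v1      # hypothetical Service, current version
      port: 80
      weight: 90               # 90% of traffic
    - name: my-service-v2      # hypothetical Service, new version
      port: 80
      weight: 10               # 10% of traffic
```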

Do I need AWS ALB for application running in EKS?

I was using AWS ECS Fargate to run my application, and I am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer, and it seems that Kubernetes itself has a LoadBalancer service type, which creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
Yes, you need it if you want to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the AWS documentation explains how to use an ALB as the Ingress controller in EKS.
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some parts of the infrastructure from the Kubernetes cluster in the same way you manage your apps. In some cases there is tight coupling between the app and the configuration of your load balancer, so it makes sense to manage both the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as L4 load balancer). There is also Kubernetes Ingress that corresponds to an application load balancer (also known as L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an Ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with Route53.
See the EKS docs about network load balancing and application load balancing.
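To make the L4/L7 distinction above concrete, here is a sketch of both flavors as handled by the AWS Load Balancer Controller (names, ports, and selectors are assumptions):

```yaml
# L4: a Service of type LoadBalancer, provisioned as an NLB
# by the AWS Load Balancer Controller.
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # hypothetical
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
# L7: an Ingress, provisioned as an ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```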
As the other answers already mention: no, it is NOT required, but it is very helpful to use an ALB.
There are a couple of different solutions; my favorite is:
1. Use an Ingress controller like ingress-nginx (there are multiple Ingress controllers available for Kubernetes; a very good comparison is provided here).
2. Configure the Ingress controller's Service to use NodePort, with a port like 30080.
3. Create your own AWS ALB (with Terraform, for example) and add NodePort 30080 to the target group.
4. Create an Ingress resource to configure the Ingress controller.
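Step 2 could be sketched as follows, assuming the standard ingress-nginx labels and namespace (these may differ in your install):

```yaml
# Pin the ingress-nginx controller Service to NodePort 30080 so an
# externally managed ALB target group can point at it on every node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller     # name assumed from the default install
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080                  # fixed port registered in the ALB target group
```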
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
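The article covers external-dns; with that controller installed in the cluster, a hostname annotation is enough to have a Route53 record created and kept up to date, so the auto-generated ELB hostname never has to be published manually, even across redeploys. A sketch (domain and service name are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # hypothetical
  annotations:
    # external-dns creates a Route53 record for this name pointing at the ELB
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
```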

Load balancer for kubernetes clusters

I need some help configuring my load balancer for my Kubernetes clusters. The internal load balancer works fine. Now, I'd like to expose the service via https and I'm stumped on the Ingress configuration.
First of all, please take into account that whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of that load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services; any changes you make are overwritten by GKE.
With that in mind, note that Ingress for Internal HTTP(S) Load Balancing deploys the Google Cloud internal HTTP(S) load balancer: a private pool of load balancers deployed in your network, providing internal load balancing for clients and backends within your VPC, as per the GKE documentation.
Now we are ready to configure an Ingress for the internal load balancer; the GKE documentation includes an example of a simple Ingress exposing a simple service.
My suggestion is to implement that simple configuration first, in order to learn how an Ingress works, and then configure an Ingress for GKE as per this documentation.
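A minimal internal Ingress on GKE might look like this (the service name is hypothetical, and TLS configuration is omitted):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    # selects the internal HTTP(S) load balancer instead of the external one
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: my-internal-service   # hypothetical
      port:
        number: 80
```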
Let me know if you're still having doubts about it or if you need more assistance.
Have a nice day, and stay safe.

Microservice security with AWS

I've got an ECS cluster where I have a couple of services running. Each of them has its own load balancer, so for every service I have a URL like http://my-service-1234554321.eu-west-1.elb.amazonaws.com. But I would like to open only one of all these (e.g. 10) services to the whole world, while all the others stay hidden and are accessible only from services in this cluster via HTTP. Is that possible, and how can I do it?
Elastic Load Balancers can either be internet-facing (open to traffic from the Internet) or internal (accepting traffic only from within a VPC).
When you create the load balancer for a service, specify the scheme as internal for the services you only wish to access from within the cluster. For the service that needs to be external, set it to internet-facing.
The ECS documentation talks about setting the Load Balancer scheme here.
Just remember that a load balancer cannot be both internet-facing and internal at the same time. If you later decide that you want to expose services that were internal over the Internet, you will probably need to create a second, internet-facing ELB for them.
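In CloudFormation terms, the difference is just the Scheme property; a sketch (resource names and subnet IDs are placeholders):

```yaml
Resources:
  InternalServiceLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internal            # reachable only from inside the VPC
      Type: application
      Subnets: [subnet-placeholder-a, subnet-placeholder-b]
  PublicServiceLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing     # open to the Internet
      Type: application
      Subnets: [subnet-placeholder-a, subnet-placeholder-b]
```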

How to create application load balancer on aws for kubernetes

The question is similar to the following SO question, but I am not looking to create a Classic Load Balancer.
How to create Kubernetes load balancer on aws
AWS now provides 2 types of load balancer, the Classic Load Balancer and the Application Load Balancer. Please read the following document for more information:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
I already know how the Classic Load Balancer works with Kubernetes. I wonder if there is any flag/tool that would let us configure an Application Load Balancer as well.
An AWS ALB Ingress Controller has been built which you can find on GitHub: https://github.com/coreos/alb-ingress-controller
I can tell you that as of Kubernetes v1.2.3/4 there is no built-in support for Application Load Balancers.
That said, what I do is expose internally load-balanced pods via a Service of type NodePort. You can then implement any type of AWS load balancing you would like, including new Application Load Balancer features such as content-based routing, by setting up your own AWS ALB that directs a URL path like /blog to a specific NodePort.
You can read more about NodePorts here: http://kubernetes.io/docs/user-guide/services/#type-nodeport
For bonus points, you could script the creation of the ALB via something like boto3 and have it provisioned when you provision the Kubernetes services/pods/replication controllers.
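As a sketch of that bonus step, the parameters for such a content-based routing rule could be assembled like this and passed to boto3's elbv2 create_rule call (the ARNs below are placeholders):

```python
# Build the arguments for an ALB listener rule that forwards a URL path
# (e.g. /blog) to a specific target group. With boto3 you would then call:
#   boto3.client("elbv2").create_rule(**build_path_rule(...))
def build_path_rule(listener_arn, target_group_arn, path, priority):
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,
        "Conditions": [{"Field": "path-pattern", "Values": [path]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# Placeholder ARNs; substitute the real listener and target group ARNs.
rule = build_path_rule("arn:listener-placeholder", "arn:tg-placeholder", "/blog", 10)
```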