Load balancer for Kubernetes clusters - google-cloud-platform

I need some help configuring my load balancer for my Kubernetes clusters. The internal load balancer works fine. Now I'd like to expose the service via HTTPS, and I'm stumped on the Ingress configuration.

First of all, please take into account that whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of the HTTP(S) load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services. Any changes you make are overwritten by GKE.
With that in mind, note that Ingress for Internal HTTP(S) Load Balancing deploys the Google Cloud internal HTTP(S) load balancer. This private pool of load balancers is deployed in your network and provides internal load balancing for clients and backends within your VPC, as per this documentation.
Now we are ready to configure an Ingress for the internal load balancer. This is an example of how to configure a simple Ingress in order to expose a simple service.
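A minimal sketch of such an Ingress, assuming a backend Service named my-service on port 80 and a TLS Secret named my-tls-secret (all names are placeholders; the gke-l7-ilb annotation is what selects the internal HTTP(S) LB on GKE, so check the documentation above for your GKE version):

```yaml
# Hypothetical internal Ingress; GKE provisions the internal HTTP(S) LB for it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gke-l7-ilb"   # deploy the *internal* HTTP(S) LB
spec:
  tls:
    - secretName: my-tls-secret   # hypothetical Secret holding your cert/key for HTTPS
  defaultBackend:
    service:
      name: my-service            # hypothetical Service; on GKE it should be NEG-backed
      port:
        number: 80
```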
My suggestion is to implement the basic configuration first, in order to understand how an Ingress works, and then configure an Ingress for GKE as per this documentation.
Let me know if you still have doubts about it or if you need more assistance.
Have a nice day, and stay safe.

Related

GCP HTTP Load balancer to TCP Load balancer

What I am trying to figure out is how I can connect a TCP load balancer with an HTTP/HTTPS load balancer in GCP.
I have installed Kong on a GKE cluster, and it creates a TCP load balancer.
Now, if I have multiple GKE clusters with Kong, they will all have their own TCP load balancers.
From a user perspective, I would then need to do DNS load balancing, which I don't think is always fruitful.
So I'm trying to figure out if I can use Cloud CDN, NEGs, and/or an HTTP/HTTPS load balancer to act as a front end for Kong's TCP load balancer.
Is this possible, or are there any alternatives? Thanks!
There are several options depending on what you are trying to do and your needs, but if you must use Kong inside each GKE cluster and handle your SSL certificates yourself, then:
TCP Proxy LB
(Optional) You can deploy a NodePort Service instead of a LoadBalancer Service for your Kong deployment. Since you are trying to unify all your Kong services, having an individual load balancer exposed to the public internet per cluster can work, but you will be paying for every extra external IP address you use (a sketch of such a Service follows below).
You can manually deploy a TCP proxy load balancer that uses the same GKE instance groups and port as your NodePort / current load balancer (behind the scenes). You would need to set up a backend for each GKE cluster node pool you are currently using, across all the GKE clusters where you deploy your Kong service.
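For the optional NodePort step above, a minimal sketch of such a Service, assuming your Kong pods are labeled app: kong and the Kong proxy listens on 8000 (labels, ports, and names are assumptions; adjust to your deployment):

```yaml
# Hypothetical NodePort Service in front of Kong, used as the LB backend port.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: NodePort
  selector:
    app: kong            # assumed label on your Kong pods
  ports:
    - name: proxy
      port: 80
      targetPort: 8000   # Kong's default proxy port (assumption)
      nodePort: 30080    # must fall in the cluster's NodePort range (30000-32767 by default)
```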
HTTP(S) LB
You can use NodePorts, or take advantage of your current load balancer setup as backends (the same approach as with the TCP proxy LB), with the addition of NEGs in case you want to use those.
You would need to deploy and maintain this manually, but you can also configure your SSL certificates here (if you plan to provide HTTPS connections), since client TLS termination happens here.
The advantage here is that you can leave SSL certificate renewal to GCP (once configured), and you can also use Cloud CDN to reduce latency and costs; as of today, this feature can only be used with the HTTP(S) LB.
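If you go the NEG route, here is a sketch of the GKE standalone-NEG annotation on the Service; the NEGs it creates can then be attached as backends of the HTTP(S) LB you build manually (names and ports are assumptions):

```yaml
# Hypothetical Service asking GKE to create standalone NEGs for port 80.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'   # standalone NEG for port 80
spec:
  type: ClusterIP
  selector:
    app: kong            # assumed label on your Kong pods
  ports:
    - port: 80
      targetPort: 8000   # Kong's default proxy port (assumption)
```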

GCP, Autoscaling on internal load balancer

I managed to set up autoscaling based on an external load balancer, but I didn't find a way to do the same for an internal load balancer.
Is this feature supported? How would I go about autoscaling my instance group based on the internal load balancer?
The issue is that when you configure an instance group to scale by HTTP requests, you need an HTTP load balancer, which is internet-facing. So the TCP/UDP load balancer, which can be internal, doesn't work for that.
The Internal Load Balancer uses a backend service which can use a managed instance group. You can assign a managed instance group to the backend or target pools of both internal and network load balancers.
Keep in mind that the Network Load Balancer uses target pools instead of backend services, but target pools can use managed instance groups as well.
Take a look at the documentation for more details. Alternatively, I found this and this post, which I believe can be useful to you.
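Since request-based autoscaling requires the HTTP(S) load balancer, the usual approach behind an internal TCP/UDP load balancer is to scale the managed instance group on CPU utilization (or a custom metric) instead. A minimal Deployment Manager sketch of such an autoscaler, assuming an existing zonal managed instance group named my-igm in my-project (all names, zones, and thresholds are assumptions):

```yaml
# Hypothetical autoscaler for the MIG that backs the internal load balancer.
resources:
  - name: my-autoscaler
    type: compute.v1.autoscaler
    properties:
      zone: us-west1-a
      target: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-west1-a/instanceGroupManagers/my-igm
      autoscalingPolicy:
        minNumReplicas: 2
        maxNumReplicas: 10
        coolDownPeriodSec: 60
        cpuUtilization:
          utilizationTarget: 0.6   # scale on CPU, since LB-utilization scaling needs an HTTP(S) LB
```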
From your last comment:
I'm not able to set up a TCP load balancer which has a backend service; I only get a REGIONAL backend service, which doesn't support HTTP load balancing.
As stated in the Internal Load Balancing Concepts, "internal client requests stay internal to your VPC network and region", so there is no need for HTTP here, nor for a multi-regional setup.
On the same page, under the section "About Internal Load Balancing", the diagram shows a classic load-balancing architecture, featuring one global (HTTP) load balancer and multiple internal (TCP/UDP) load balancers, one for each region.
Further on, under "Deploying Internal Load Balancing with clients across VPN or Interconnect", the following is stated in an "Important" note:
Internal Load Balancing is a regional product. [...] An internal load balancer cannot forward or receive traffic to and from VM instances in other regions.
Basically, if your managed instance group has instances across multiple regions, then you need an external load balancer, but if all your instances are within the same region (instances can be split across zones within this same region, e.g. us-west1-a/b/c), then you can rely on an internal load balancer.

Difference between Classic and Elastic Load Balancer

I am learning about the AWS Elastic and Classic Load Balancers. I understand what a load balancer does, but can someone please explain the difference between them?
I'm studying for an AWS certificate and I need to be able to explain the difference. Thanks in advance.
As others have said, you have three types of Elastic Load Balancer (ELB).
You can select the appropriate load balancer based on your application needs. If you need flexible application management, we recommend that you use an Application Load Balancer. If extreme performance and static IP is needed for your application, we recommend that you use a Network Load Balancer. If you have an existing application that was built within the EC2-Classic network, then you should use a Classic Load Balancer.
That's from the AWS ELB page, see a feature comparison and description of each service here: https://aws.amazon.com/elasticloadbalancing/features/
The AWS API and documentation are very confusing about load balancers.
The first release of the load balancer (TCP only) was called ELB, for Elastic Load Balancer.
The second and current release is called ALB, for Application Load Balancer. These deal with TCP/HTTP/HTTPS, filtering rules, etc. Be careful: in the API, ALBs are called LoadBalancer_v2!
In 2022, we have the Gateway Load Balancer in addition.
So there are four load balancers:
Application Load Balancer - HTTP, HTTPS, gRPC (targets: IP, Instance, Lambda)
Network Load Balancer - TCP, UDP, TLS (targets: IP, Instance, Application Load Balancer)
Gateway Load Balancer - IP (targets: IP, Instance)
Classic Load Balancer - SSL/TLS, HTTP, HTTPS (for EC2-Classic networks)
https://aws.amazon.com/elasticloadbalancing/features/

Kubernetes Load Balancers AWS

We've currently got a production application using Kubernetes on AWS. Everything's working very well, except I think we've made a configuration mistake.
We expose different services from within the cluster on domain names, and we're now up to about 5 different services. Kubernetes' standard way to expose these services is through load balancers, but in our config we've created 6 load balancers. As you can imagine, that many load balancers running can incur substantial cost overhead.
Is there any way to configure an individual load balancer to route to Kubernetes targets based on domain names, so we can have one domain pointing at an ELB and have it route to the correct services internally?
You can use an Ingress controller. An Ingress will set up a single AWS load balancer and can be used to expose many services. If your services are all HTTP-based, it should work quite well. For more information about Ingress, have a look at the Kubernetes docs or at the default NGINX-based implementation. If needed, there are also other implementations using, for example, the Envoy proxy.
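A minimal sketch of host-based routing through a single Ingress, assuming the NGINX ingress controller and two Services named app1 and app2 (hostnames, class, and names are assumptions):

```yaml
# Hypothetical Ingress routing two domains to two Services behind one AWS ELB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller is installed
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80
```

Point each domain's DNS record at the single ELB the controller creates, and the controller routes requests by Host header.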

How to create application load balancer on aws for kubernetes

My question is similar to the following SO question, but I am not looking to create a classic load balancer.
How to create Kubernetes load balancer on aws
AWS now provides two types of load balancer: the classic load balancer and the application load balancer. Please read the following document for more information:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
I already know how the classic load balancer works with Kubernetes. I wonder if any flag or tool exists so that we can also configure an application load balancer.
An AWS ALB Ingress Controller has been built, which you can find on GitHub: https://github.com/coreos/alb-ingress-controller
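A minimal sketch of an Ingress handled by that controller, using its documented ingress.class and scheme annotations (written against the current networking.k8s.io/v1 API; names are assumptions):

```yaml
# Hypothetical Ingress that the ALB ingress controller turns into an AWS ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    kubernetes.io/ingress.class: alb                   # picked up by the ALB ingress controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # create a public ALB
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # hypothetical backend Service
                port:
                  number: 80
```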
I can tell you that as of Kubernetes v1.2.3/4 there is no built-in support for Application Load Balancers.
That said, what I do is expose internally load-balanced pods via a NodePort Service. You can then implement any type of AWS load balancing you would like, including new Application Load Balancer features such as content-based routing, by setting up your own AWS ALB that directs a URL path like /blog to a specific NodePort.
You can read more about NodePorts here: http://kubernetes.io/docs/user-guide/services/#type-nodeport
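For example, a NodePort Service for the /blog backend might look like the sketch below (labels, ports, and names are assumptions); you would then register the instances in an ALB target group on that node port and add a listener rule forwarding /blog* to it:

```yaml
# Hypothetical NodePort Service that an ALB target group can point at.
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: NodePort
  selector:
    app: blog          # assumed label on the blog pods
  ports:
    - port: 80
      targetPort: 8080 # assumed container port
      nodePort: 30180  # the ALB target group targets this port on each node
```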
For bonus points, you could script the creation of the ALB via something like boto3 and have it provisioned when you provision the Kubernetes services/pods/rc.