I am trying to figure out how I can connect a TCP load balancer with an HTTP/HTTPS load balancer in GCP.
I have installed Kong on a GKE cluster, and it creates a TCP load balancer.
Now if I have multiple GKE clusters with Kong, each will have its own TCP load balancer.
From a user's perspective I would then need DNS load balancing, which I don't think is always fruitful.
So I'm trying to figure out whether I can use Cloud CDN, NEGs, and/or an HTTP/HTTPS load balancer to act as a front end for Kong's TCP load balancers.
Is this possible, and are there any alternatives? Thanks!
There are several options depending on what you are trying to do and your needs, but if you must use Kong inside each GKE cluster and handle your SSL certificates yourself, then:
TCP Proxy LB
(Optional) You can expose your Kong deployment with a NodePort Service instead of a LoadBalancer Service. Since you are trying to unify all your Kong services, keeping individual load balancers exposed to the public internet can work, but you will pay for every extra external IP address you use.
You can manually deploy a TCP Proxy Load Balancer that uses the same GKE instance groups and port as your NodePort / current load balancer behind the scenes. You would need to set up a backend for each GKE node pool you are using, across all the GKE clusters where your Kong service is deployed.
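For illustration, a NodePort Service for the Kong proxy might look like the sketch below (the names, labels, and node ports are assumptions rather than the Kong chart's actual values; 8000 and 8443 are Kong's default proxy ports):

```yaml
# Hypothetical NodePort Service exposing the Kong proxy on fixed node ports
# that the manually created TCP Proxy LB backends can target.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
spec:
  type: NodePort
  selector:
    app: kong            # must match your Kong Deployment's pod labels
  ports:
  - name: proxy
    port: 80
    targetPort: 8000     # Kong's default HTTP proxy port
    nodePort: 30080      # fixed so the LB backend config stays stable
  - name: proxy-ssl
    port: 443
    targetPort: 8443     # Kong's default TLS proxy port
    nodePort: 30443
```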
HTTP(S) LB
You can use NodePorts or, as with the TCP Proxy LB, reuse your current load balancer setup as backends, with the addition of NEGs in case you want to use those.
You would need to deploy and maintain this manually, but you can also configure your SSL certificates here (if you plan to serve HTTPS), since client TLS termination happens at this layer.
The advantage here is that you can leave SSL certificate renewal to GCP (once configured), and you can also use Cloud CDN to reduce latency and costs; as of today, this feature can only be used with the HTTP(S) LB.
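If you go the NEG route, GKE can create standalone NEGs for a Service through the cloud.google.com/neg annotation, and those NEGs can then be attached as backends of the HTTP(S) LB you maintain manually. A sketch with hypothetical names:

```yaml
# Hypothetical Service with standalone NEGs: GKE creates one zonal NEG per
# zone for port 80, which you attach to the HTTP(S) LB's backend service.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  type: ClusterIP
  selector:
    app: kong
  ports:
  - port: 80
    targetPort: 8000     # Kong's default HTTP proxy port
```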
Related
It appears that Istio-based or MultiCluster Ingress on GKE will only work with external load balancers. Due to a regulatory limitation, we must allow external traffic to ingress only via an on-premises hardware load balancer; this traffic must then be routed to GCP via Partner Interconnect.
With such a setup, supposing we create a static IP with an internal load balancer on GCP, will MultiCluster Ingress work with this internal load balancer?
Alternatively, how would you design a solution for multiple GKE clusters that need to be load balanced behind this type of on-premises hardware LB ingress?
MultiCluster Ingress only supports public load balancers.
You can use the new Gateway API with the gke-l7-rilb-mc GatewayClass; this should deploy an internal multi-cluster L7 load balancer. You can then route external traffic through your on-prem F5 and over Partner Interconnect to the load balancer provisioned via the Gateway API.
Just keep in mind that the Gateway API and controller are still in preview for now; they should be GA by the end of the year: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways#blue-green
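As a sketch of that setup (the resource names and namespace are placeholders, and since the API is still in preview, the group/version may differ from what your cluster serves; multi-cluster routes typically target ServiceImports rather than plain Services):

```yaml
# Hypothetical internal multi-cluster Gateway; verify the served API
# version with `kubectl api-resources` before applying.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: internal-http
  namespace: apps
spec:
  gatewayClassName: gke-l7-rilb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# Route traffic arriving from the F5 / Partner Interconnect path to a
# ServiceImport that spans the member clusters.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: apps
spec:
  parentRefs:
  - name: internal-http
  rules:
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: app-backend      # placeholder multi-cluster Service name
      port: 8080
```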
I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load balance those two VMs and route based on traffic (i.e., if traffic is heavier on one VM instance, it should switch to the other VM).
Currently I am accessing the two instances via two different IP addresses over HTTP. Now I want to access both VMs over HTTPS under the same DNS name, routed by path, like https://dns-name/service1/ and https://dns-name/service2/.
How can I do this load balancing using an NGINX ingress?
I am new to the AWS cloud. Can someone help me, guide me, or suggest appropriate references for getting to a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
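For the path-based HTTPS routing described in the question (https://dns-name/service1/ and /service2/), an Application Load Balancer can terminate TLS and route by path. A hypothetical CloudFormation fragment, assuming the HTTPS listener and the two target groups pointing at your VMs are defined elsewhere in the template:

```yaml
# Hypothetical ALB listener rules: forward /service1/* and /service2/*
# to separate target groups behind the same DNS name.
Resources:
  Service1Rule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref HttpsListener          # assumed defined elsewhere
      Priority: 10
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values: ["/service1/*"]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref Service1TargetGroup
  Service2Rule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref HttpsListener
      Priority: 20
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values: ["/service2/*"]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref Service2TargetGroup
```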
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This would distribute the traffic when resolving the DNS Name rather than using a Load Balancer. It's not quite the same because DNS information is cached, so the same client would continue to be redirected to the same server until the cache is cleared. However, it is practically free to use.
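As a sketch of the weighted approach (the domain names and IPs are placeholders), two records with the same name split traffic 50/50:

```yaml
# Hypothetical CloudFormation sketch of Route 53 weighted routing:
# two A records with the same name, each pointing at one VM.
Resources:
  Vm1Record:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: A
      TTL: '60'
      SetIdentifier: vm-1
      Weight: 50
      ResourceRecords:
        - 203.0.113.10       # placeholder public IP of the first VM
  Vm2Record:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: A
      TTL: '60'
      SetIdentifier: vm-2
      Weight: 50
      ResourceRecords:
        - 203.0.113.11       # placeholder public IP of the second VM
```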
I would like to know a few things about GCP load balancers.
Do GCP load balancers support HTTPS and AMQP ports with SSL termination?
Can GCP load balancers forward requests to internal IPs?
As I am not familiar with GCP, can anyone help me out with this?
Thanks.
Your question is very broad, so I will point you to the relevant documentation here.
GCP supports HTTPS load balancing:
Google Cloud HTTP(S) Load Balancing is a global, proxy-based Layer 7 load balancer that enables you to run and scale your services worldwide behind a single external IP address. External HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and Google Kubernetes Engine (GKE).
and SSL termination:
When using SSL Proxy Load Balancing for your SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP. For the types of backends that are supported, see Backends.
Have a look at the general overview of GCP's load balancing options; I don't know what your goals and requirements are, but this should be helpful when choosing your load balancing method.
Answering your last question about forwarding requests to internal IPs: yes, load balancers can and will do just that. A load balancer can either terminate or forward your HTTPS requests.
If you need to, you can also use it at layer 4 and make it just a simple TCP/UDP load balancer.
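For example, on GKE a layer-4 internal load balancer that forwards straight to private backend IPs can be requested with a single Service annotation (a sketch; the name, labels, and use of the AMQP port are illustrative):

```yaml
# Hypothetical internal TCP (L4) load balancer on GKE: no TLS termination,
# traffic is forwarded to backends on their internal IPs.
apiVersion: v1
kind: Service
metadata:
  name: amqp-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-broker       # placeholder pod labels
  ports:
  - name: amqp
    port: 5672           # AMQP's standard port
    targetPort: 5672
```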
If you specify more requirements then I may be able to make my answer more detailed.
I need some help configuring my load balancer for my Kubernetes clusters. The internal load balancer works fine. Now I'd like to expose the service via HTTPS, and I'm stumped on the Ingress configuration.
First of all, please take into account that whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of the HTTP(S) load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services. Any changes that you make are overwritten by GKE.
With the above in mind, note that Ingress for Internal HTTP(S) Load Balancing deploys the Google Cloud internal HTTP(S) load balancer. This private pool of load balancers is deployed in your network and provides internal load balancing for clients and backends within your VPC, as per this documentation.
Now we are ready to configure an Ingress for the internal load balancer. Here is an example of how to configure a simple Ingress in order to expose a simple service.
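A minimal sketch (the backend Service name, port, and TLS Secret are placeholders; note that the backing Service must be NEG-annotated, e.g. cloud.google.com/neg: '{"ingress": true}', for the internal Ingress to program it):

```yaml
# Hypothetical internal Ingress: the gce-internal class requests GCP's
# internal HTTP(S) load balancer instead of the external one.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  tls:
  - secretName: my-tls-secret   # placeholder Secret with your cert and key
  defaultBackend:
    service:
      name: hello-service       # placeholder backend Service
      port:
        number: 80
```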
My suggestion is to implement this first configuration in order to learn how an Ingress works, and then try to configure an Ingress for GKE as per this documentation.
Let me know if you still have doubts about it or if you need more assistance.
Have a nice day, and stay safe.
I am trying to understand in which scenario I should pick a service registry over a load balancer.
From my understanding, both solutions cover the same functionality.
For instance, if we consider consul.io, the feature list includes:
Service Discovery
Health Checking
Key/Value Store
Multi Datacenter
Whereas a load balancer like Amazon ELB, for instance, offers:
instances configurable to accept traffic only from your load balancer
accept traffic using the following protocols: HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP)
distribute requests to EC2 instances in multiple Availability Zones
The number of connections scales with the number of concurrent requests that the load balancer receives
configure the health checks that Elastic Load Balancing uses to monitor the health of the EC2 instances registered with the load balancer so that it can send requests only to the healthy instances
You can use end-to-end traffic encryption on those networks that use secure (HTTPS/SSL) connections
[EC2-VPC] You can create an Internet-facing load balancer, which takes requests from clients over the Internet and routes them to your EC2 instances, or an internal-facing load balancer, which takes requests from clients in your VPC and routes them to EC2 instances in your private subnets. Load balancers in EC2-Classic are always Internet-facing.
[EC2-Classic] Load balancers for EC2-Classic support both IPv4 and IPv6 addresses. Load balancers for a VPC do not support IPv6 addresses.
You can monitor your load balancer using CloudWatch metrics, access logs, and AWS CloudTrail.
You can associate your Internet-facing load balancer with your domain name.
etc.
So in this scenario I am failing to understand why I would pick something like consul.io or Netflix Eureka over Amazon ELB for service discovery.
I have a hunch that this might be due to implementing client side service discovery vs server side service discovery, but I am not quite sure.
You should think about it as client side load balancing versus dedicated load balancing.
Client side load balancers include Baker Street (http://bakerstreet.io); SmartStack (http://nerds.airbnb.com/smartstack-service-discovery-cloud/); or Consul HA Proxy (https://hashicorp.com/blog/haproxy-with-consul.html).
Client side LBs use a service discovery component (Baker Street uses a stateless pub/sub service discovery mechanism; SmartStack uses ZooKeeper; Consul HA Proxy uses Consul) as part of their implementation, but they provide the health checking / end-to-end functionality you're probably looking for.
AWS ELB and Eureka differ at many points:
Edge Services vs Mid-tier Services
AWS ELB is a load balancing solution for edge services exposed to end-user web traffic. Eureka fills the need for mid-tier load balancing.
A mid-tier server is an application server that sits between the user's machine and the database server, where the processing takes place. The middle-tier server performs the business logic.
While you can theoretically put your mid-tier services behind the AWS ELB, in EC2-Classic you would expose them to the outside world, thereby losing all the usefulness of AWS security groups.
Dedicated vs Client side Load Balancing
AWS ELB is also a traditional proxy-based load balancing solution, whereas with Eureka the load balancing happens at the instance/server/host level in a round-robin fashion. The client instances know all the information about which servers they need to talk to.
If you are looking for sticky-session load balancing (all requests from a user during the session are sent to the same instance), which AWS now offers, Eureka does not offer a solution out of the box.
Load Balancer Outages
Another important aspect that differentiates proxy-based load balancing from load balancing using Eureka is that your application can be resilient to the outages of the load balancers since the information regarding the available servers is cached on the Eureka client.
This does require a small amount of memory, but it buys better resiliency. The Eureka client fetches all the registry information at once; in subsequent requests to the Eureka server, it receives only the delta, i.e., the changes in the registry information, rather than the whole registry. Also, Eureka servers can operate in cluster mode, where each peer is not affected by the performance of other peers.
Scale and convenience
Also, imagine thousands of microservices running, each with multiple instances. You would require 1,000 ELBs, one for each microservice, or something like HAProxy sitting behind the ELB to make layer-7 decisions based on the hostname, etc., and then forward the traffic to a subset of instances. With Eureka, you only deal with the application name, which is far less complicated.
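To illustrate that, with Spring Cloud Netflix a service just declares its application name and the registry address, and clients discover peers by that name. A minimal application.yml sketch with placeholder names:

```yaml
# Hypothetical Spring Boot application.yml: the service registers itself in
# Eureka under its application name; clients resolve peers by that name and
# load-balance across the returned instances on the client side.
spring:
  application:
    name: payments-service                           # the only "address" callers need
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-host:8761/eureka/   # placeholder registry URL
```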
A service discovery component usually has a notification mechanism. It is not a load balancer, even though some have the capability to act as one. It can notify registered clients about changes, for example a load balancer going down.
A client can query a service discovery/registry to find a load balancer that is running, whereas a load balancer does not notify a client when it is down.
You should also read about Eureka.
Amazon ELB routes your service requests to EC2 instances behind the load balancer, and the IP addresses of those EC2 instances are not consistent. You can also use Eureka, which does the same job but is based on a service registry and client-side load balancing, in which the application client for each region has a copy of the registry.
You can read more about it here :
https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance