Is it possible in GCP to create an internal load balancer that balances the traffic between two Compute Instances in different regions?
Two instances (running NAT) are in different regions (e.g. one in us-central1 and the other in asia-south1), serving the same thing on the same ports, and the internal load balancer (e.g. with IP 170.0.0.4) accepts requests from clients and forwards them to these VMs.
This would help in creating a highly available service (NAT in this case) that keeps working even when one VM, the service on it, or a whole region is down.
EDIT:
Adding some more details here:
Both VMs and the Load Balancer have internal IPs.
Both VMs and the Load Balancer are in the same VPC network.
I need a layer 7 (HTTP(S)) internal load balancer.
Internal load balancing is regional only, so with back-ends in different regions it can still work, but you will have to set up the load balancer yourself.
It's not possible "out of the box".
You can have a look at the Internal Load Balancing documentation, which explains how this works and why. There is also a table with the available load balancing options.
If you want to configure your own LB then maybe try Envoy Proxy (or Nginx, or any solution you want).
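For illustration, here is a minimal Envoy sketch that round-robins HTTP traffic across the two NAT VMs. The listen port and the two internal IPs (one per region) are placeholders for your actual values:

```yaml
# Minimal Envoy sketch (v3 API): one HTTP listener, one cluster with
# the two cross-region VMs as endpoints. All addresses are placeholders.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: nat_vms
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: nat_vms }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: nat_vms
    connect_timeout: 1s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: nat_vms
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 10.128.0.2, port_value: 8080 }  # us-central1 VM (placeholder)
        - endpoint:
            address:
              socket_address: { address: 10.160.0.2, port_value: 8080 }  # asia-south1 VM (placeholder)
```

Keep in mind the proxy itself then becomes a single point of failure, so for real HA you would run more than one Envoy instance.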
In essence - unless you set up your own load balancer, GCP doesn't have this functionality.
You could also use an external load balancer (which is riskier) to balance the traffic, and restrict external traffic to your instances (to just a set of known IPs or ranges).
With only two endpoints in different regions, the internal load balancer has nothing it can balance. You could only put both of them behind a load balancer, and then it could balance their traffic. Moving both into the same region might be the only option for using the internal load balancer, but there too the servers need to be put behind it as backends.
Let's assume we are laboring under the following requirements:
We would like to run our code on GCP and wish to do so on Kubernetes, as opposed to on any of the managed solutions (App Engine, Cloud Run...).
Our services need to be exposed to the internet
We would like to preserve client IPs (the service should be able to read them)
We would also like to deploy different versions of our services and split traffic between them based on weights
All of the above, except for traffic splitting, can be achieved by a traditional Ingress resource, which on GCP is implemented by a global external HTTP(S) load balancer (classic). For this reason I was overjoyed when I noticed that GCP is implementing the new Kubernetes Gateway and Route resources. As explained here, in combination they are able to do everything the Ingress resource did and more; specifically, weighted traffic distribution.
Unfortunately however, once I began implementing this on a test cluster, I discovered that the Gateway resource is implemented by 4 different gateway classes, all of which are backed by the same two load balancers already backing the existing Ingress resource, and only the two internal gateway classes, backed by GCP's internal HTTP(S) load balancer, support traffic splitting.
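For reference, this is roughly what weighted traffic splitting looks like with the internal gateway class (gke-l7-rilb is GKE's regional internal class; the service names, ports, and weights here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: internal-gateway
spec:
  gatewayClassName: gke-l7-rilb   # internal HTTP(S) LB; the external classes reject weighted routes
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
spec:
  parentRefs:
  - name: internal-gateway
  rules:
  - backendRefs:
    - name: store-v1   # hypothetical Service names
      port: 8080
      weight: 90
    - name: store-v2
      port: 8080
      weight: 10
```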
So, if we want to expose our services to the internet, we cannot traffic split, and if we wish to traffic split, we may only do so internally. I wasn't dismayed by this; it actually kind of makes sense. Ideally, the gke-l7-gxlb class (backed by the global external HTTP(S) load balancer (classic)) would support traffic splitting, but I've encountered this sort of architecture in orgs before: an external load balancer does SSL termination and then sends traffic to internal load balancers, which split the traffic based on various rules. GCP's various tutorial pages even show diagrams of this kind, using multiple load balancers. However, to finally return to the title, I cannot seem to convince GCP's external load balancer to route traffic to the internal one. It seems to be very restricted in its backend definitions, and simply pointing it at an IP address (the one provided to us upon creation of the Gateway resource, i.e. the internal load balancer) does not appear to be an option.
Is this possible? Am I completely off base here? Should I be solving this in a completely different way? Is there an easier way to achieve the above four requirements? I feel like sending traffic to an IP in my VPC network should be one of an external load balancer's most basic functions, but maybe there is a reason it's not allowing me to do this?
I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load balance those two VMs and route based on the traffic (i.e. if one VM instance is getting more traffic, requests should switch to the other VM).
Currently I am accessing the two instances over HTTP at two different IP addresses. Now I want to access those two VMs over HTTPS and route to the instances under the same DNS name, like (https://dns name/service1/) and (https://dns name/service2/).
How can I do the load balancing using an nginx ingress?
I am new to the AWS cloud. Can someone help me, guide me, or suggest some appropriate references for getting to a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
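If you go that route, an Application Load Balancer can do exactly the HTTPS plus path-based routing described in the question. Here is a rough CloudFormation sketch; the subnet, VPC, and instance IDs and the ACM certificate ARN are placeholders you would substitute:

```yaml
Resources:
  AppAlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      Subnets: [subnet-aaaa1111, subnet-bbbb2222]     # two AZs (placeholders)
  Service1Targets:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: vpc-12345678                             # placeholder
      Protocol: HTTP
      Port: 80
      Targets:
        - Id: i-0service1vm                           # VM serving /service1 (placeholder)
  Service2Targets:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: vpc-12345678
      Protocol: HTTP
      Port: 80
      Targets:
        - Id: i-0service2vm                           # VM serving /service2 (placeholder)
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppAlb
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: arn:aws:acm:us-east-1:111122223333:certificate/example   # placeholder
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref Service1Targets
  Service2Rule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref HttpsListener
      Priority: 10
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values: ["/service2/*"]
      Actions:
        - Type: forward
          TargetGroupArn: !Ref Service2Targets
```

The listener terminates TLS with the ACM certificate, sends /service2/* to the second target group, and everything else to the first, all under one DNS name.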
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This would distribute the traffic when resolving the DNS name, rather than using a load balancer. It's not quite the same, because DNS information is cached, so the same client will keep being directed to the same server until the cache expires. However, it is practically free to use.
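A sketch of what the weighted records could look like in CloudFormation, with the zone, record name, and IPs as placeholders:

```yaml
Resources:
  WeightedRecordA:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.          # placeholder zone
      Name: app.example.com.
      Type: A
      SetIdentifier: vm-a
      Weight: 50                            # half of the resolutions
      TTL: "60"
      ResourceRecords: ["203.0.113.10"]     # VM A's IP (placeholder)
  WeightedRecordB:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: A
      SetIdentifier: vm-b
      Weight: 50
      TTL: "60"
      ResourceRecords: ["203.0.113.11"]     # VM B's IP (placeholder)
```

A short TTL (60 seconds here) limits how long clients stick to one server through DNS caching.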
I managed to set up autoscaling based on an external load balancer, but I didn't find a way to do the same for an internal load balancer.
Is this feature supported, how to go about auto-scaling my instance group based on the internal load balancer?
The issue is that when you configure an instance group to scale by HTTP requests, you need an HTTP(S) load balancer, which is internet-facing; so the TCP/UDP load balancer, which can be internal, doesn't work for that.
The Internal Load Balancer uses a backend service which can use a managed instance group. You can assign a managed instance group to the backend or target pools of both internal and network load balancers.
Keep in mind that the Network Load Balancer uses target pools instead of backend services, but target pools can use managed instance groups as well.
Take a look at the documentation for more details. Alternatively, I found two related posts that I believe can be useful to you.
From your last comment:
I'm not able to set up a TCP load balancer which has a backend service; I only get a REGIONAL backend service, which doesn't support HTTP load balancing.
As stated in the Internal Load Balancing Concepts, "internal client requests stay internal to your VPC network and region", so there is neither need of HTTP here, nor a multi-regional setup.
On the same page, under the section "About Internal Load Balancing", the diagram shows a classic load balancing architecture, featuring one global (HTTP) and multiple internal (TCP/UDP) load balancers, one per region.
Further on, under "Deploying Internal Load Balancing with clients across VPN or Interconnect", the following is stated in an "Important" note:
Internal Load Balancing is a regional product. [...] An internal load balancer cannot forward or receive traffic to and from VM instances in other regions.
Basically, if your managed instance group has instances across multiple regions, then you need an external load balancer, but if all your instances are within the same region (instances can be split across zones within this same region, e.g. us-west1-a/b/c), then you can rely on an internal load balancer.
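Concretely, within one region you can still attach an autoscaler to the managed instance group behind the internal load balancer; you just scale it on CPU or a custom metric instead of load-balancing utilization, which is only available with HTTP(S) load balancing. A Deployment Manager sketch, assuming a managed instance group resource named web-mig defined elsewhere in the same config (names and region are placeholders):

```yaml
resources:
- name: web-autoscaler                   # placeholder name
  type: compute.v1.regionAutoscaler
  properties:
    region: us-west1                     # placeholder; same region as the internal LB
    target: $(ref.web-mig.selfLink)      # the MIG serving the internal LB's backend service
    autoscalingPolicy:
      minNumReplicas: 2
      maxNumReplicas: 10
      # loadBalancingUtilization requires an HTTP(S) load balancer,
      # so scale on CPU utilization instead.
      cpuUtilization:
        utilizationTarget: 0.6
```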
We want to give more server resources to some enterprise customer. How can we configure our load balancer so that users from certain IP addresses will route to our more high-end servers?
This isn't possible with Elastic Load Balancers (ELBs). ELB is designed to distribute all traffic approximately equally to all of the instances behind it. It does not have any selective routing capability or custom "weighting" of back-ends.
Given the relatively low cost of an additional balancer, one option is to set up a second one with a different hostname, in front of this preferred class of instances, and provide that alternate hostname to your priority clients.
Otherwise you'll need to use a third party balancer, either behind, or instead of, ELB, which will allow you to perform more advanced routing of requests, based on the client IP, the URI path, or other variables.
A balancer operating behind the ELB seems redundant at first glance, but it really isn't, since the second balancer can provide more features, while ELB conveniently provides the front-end entry point resiliency into a cluster of load balancers spanning availability zones, without you having to manage that aspect.
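To illustrate the third-party option, here is a hedged Envoy sketch that sends connections from a known enterprise IP range to the high-end pool; the ranges and backend addresses are placeholders, and it assumes the proxy actually sees the client's source IP (e.g. via proxy protocol from the front-end balancer):

```yaml
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 443 }
    filter_chains:
    # Connections from the enterprise customer's range (placeholder)
    # are proxied to the high-end servers.
    - filter_chain_match:
        source_prefix_ranges:
        - { address_prefix: 203.0.113.0, prefix_len: 24 }
      filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: premium
          cluster: premium_pool
    # Everyone else goes to the standard servers.
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: standard
          cluster: standard_pool
  clusters:
  - name: premium_pool
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: premium_pool
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 10.0.1.10, port_value: 443 }   # high-end server (placeholder)
  - name: standard_pool
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: standard_pool
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 10.0.2.10, port_value: 443 }   # standard server (placeholder)
```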
I am working on AWS. I have a question regarding how many applications a load balancer can support.
If I have an application whose traffic is routed and managed by one load balancer, can I use that LB for another application as well?
Also, if I can use that ELB for other applications, how will the ELB know which traffic should be routed to application A's servers and which to application B's servers?
Thanks
I think you may be misunderstanding the role of the load balancer. The whole point of a load balancer is that any of the servers behind it can provide any of the services. By setting it up this way you ensure that the failure of any one server will not affect availability of the service.
You can load balance any TCP service such as HTTP just by adding it as a "listener" for the ELB. The ELB can therefore support as many applications as you want to forward to the servers behind it.
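For example, a single classic ELB serving two applications via two listeners could look like this in CloudFormation; the ports and instance IDs are placeholders:

```yaml
Resources:
  SharedElb:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones: !GetAZs ""
      Instances: [i-0aaaabbbb, i-0ccccdddd]   # placeholders; each instance runs both apps
      Listeners:
        # Application A: HTTP on port 80, forwarded to instance port 8080
        - LoadBalancerPort: "80"
          InstancePort: "8080"
          Protocol: HTTP
        # Application B: raw TCP on port 9000
        - LoadBalancerPort: "9000"
          InstancePort: "9000"
          Protocol: TCP
```

The ELB distinguishes the applications purely by port, which is why every instance behind it has to provide both services.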
If you set up an image of a server that provides all the services you need, you can even combine the ELB with Auto Scaling to scale the number of servers up and down, launching or terminating instances from that image as the load varies.