Consolidating multiple AWS Classic Load Balancers into a single load balancer

We currently use a single AWS Classic Load Balancer per EC2 instance. This was cost-effective when we only had a few instances, but now that the project is growing we have 8 Classic Load Balancers, which is starting to cost more than we'd like.
What could I do to consolidate these multiple load balancers into a single load balancer?
The current load balancers are only used to forward HTTP/HTTPS traffic to the single EC2 instance registered with each of them.
I have DNS A records set up to route to the load balancers.

Without knowing all the details, you might be better off creating a single Application Load Balancer with multiple target groups. That way there is only one load balancer, and the segregation happens at the target-group level rather than the load-balancer level.
If you need HTTP/S access to some pieces of infrastructure and raw TCP access to others, then you might consider one Network Load Balancer and one Application Load Balancer.
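A minimal AWS CLI sketch of that consolidation, assuming placeholder IDs, hostnames, and ARNs (the `$..._ARN` shell variables stand in for ARNs you would capture from the earlier `create` calls):

```shell
# Hedged sketch: subnet/SG/VPC/instance IDs and hostnames are placeholders.
# One shared ALB; each old Classic Load Balancer becomes a target group
# selected by a host-header listener rule.
aws elbv2 create-load-balancer \
  --name shared-alb --subnets subnet-aaaa subnet-bbbb \
  --security-groups sg-0123456789abcdef0

# One target group per application (formerly one CLB per EC2 instance)
aws elbv2 create-target-group \
  --name app1-tg --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets \
  --target-group-arn "$APP1_TG_ARN" --targets Id=i-0123456789abcdef0

# HTTPS listener with a default target group, then one rule per app hostname
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --default-actions Type=forward,TargetGroupArn="$APP1_TG_ARN"
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" --priority 10 \
  --conditions Field=host-header,Values=app2.example.com \
  --actions Type=forward,TargetGroupArn="$APP2_TG_ARN"
```

Your existing A records would then become Route 53 alias records pointing at the single ALB, since an ALB's underlying IP addresses can change over time.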

Related

Load Balancing of 2 instances in AWS

I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load-balance the two VMs and route requests based on traffic (i.e., if one VM instance is getting more traffic, requests should go to the other).
Currently I access the two instances at two different IP addresses over HTTP. Now I want to access both VMs over HTTPS under the same DNS name, like (https://dns name/service1/), (https://dns name/service2/)
How can I do the load balancing using an nginx ingress?
I am new to the AWS cloud. Can someone help me, or point me to appropriate references for a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
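Since the question asks for two paths under one DNS name over HTTPS, here is a hedged sketch of the ELB approach with the AWS CLI, assuming placeholder ARNs (held in shell variables) and an ACM certificate for the domain:

```shell
# Hedged sketch: $ALB_ARN, $CERT_ARN, $LISTENER_ARN, $TG1_ARN, $TG2_ARN
# are placeholders for ARNs you would look up or capture beforehand.
# One HTTPS listener on an ALB, with /service1/* as the default route
# and a path rule sending /service2/* to the second target group.
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --default-actions Type=forward,TargetGroupArn="$TG1_ARN"

aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" --priority 1 \
  --conditions Field=path-pattern,Values='/service2/*' \
  --actions Type=forward,TargetGroupArn="$TG2_ARN"
```

Each target group would contain one of the two VMs; the ALB terminates TLS, so the instances can keep serving plain HTTP behind it.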
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This distributes the traffic at DNS-resolution time rather than through a load balancer. It's not quite the same, because DNS responses are cached, so the same client will keep going to the same server until its cached entry expires. However, it is practically free to use.
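A minimal sketch of that weighted setup with the AWS CLI, assuming a placeholder hosted-zone ID, domain, and example IPs:

```shell
# Hedged sketch: zone ID, name, and addresses are placeholders.
# Two weighted A records under one name; Route 53 answers roughly
# 50/50 across resolvers (subject to DNS caching).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "A",
        "SetIdentifier": "vm-1", "Weight": 50, "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "A",
        "SetIdentifier": "vm-2", "Weight": 50, "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.20"}]}}
    ]}'
```

A low TTL (60 seconds here) limits how long a cached answer keeps a client pinned to one server.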

What's the difference between a load balancer and target group in AWS?

I'm following along with a course and I don't really get the difference between an AWS load balancer and an AWS target group. The course kind of talks about them interchangeably. Does an AWS target group include an AWS load balancer? What's the theoretical and practical difference?
In AWS, a load balancer is an actual server (or cluster of servers) managed entirely by Amazon that accepts incoming traffic and routes the traffic across multiple backend servers, thus distributing the load.
A target group is simply a list of target servers that the load balancer should distribute the load to.
You configure the load balancer by telling it to send all traffic that matches a certain pattern (like all traffic that comes in on a certain port, or all traffic that is for a certain domain name) to a specific target group.
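To make the relationship concrete, a hedged CLI sketch with placeholder names, IDs, and ARNs: the target group is just your named list of servers, and a listener rule on the AWS-managed load balancer points a traffic pattern at it:

```shell
# Target group = a named list of YOUR servers (placeholder IDs)
aws elbv2 create-target-group \
  --name my-servers --protocol HTTP --port 8080 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets \
  --target-group-arn "$TG_ARN" --targets Id=i-aaaa1111 Id=i-bbbb2222

# Load balancer rule = "traffic matching this pattern goes to that target group"
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" --priority 5 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn="$TG_ARN"
```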
Load Balancer - AWS's thing.
Target Group - your thing.
A target group is a collection of your own servers (one or more).
The load balancer helps distribute incoming traffic (API requests, etc.) to these different target groups based on rules and listeners.
You assign a DNS/domain name to the load balancer; all incoming traffic comes to it first, and it then distributes the traffic to the servers in the target groups.

GCP internal load balancer between two VMs (Compute instances)

Is it possible in GCP to create an internal load balancer that balances the traffic between two Compute Instances in different regions?
The two instances (running NAT on them) are in different regions (e.g. one in us-central1 and the other in asia-south1), serving something on the same ports, and the internal load balancer (e.g. with IP 170.0.0.4) accepts requests from clients and forwards them to these VMs.
This would help in creating a highly available service (NAT in this case) that will work even when one VM or the service or region is down.
EDIT:
Adding some more details here:
Both VMs and the load balancer have internal IPs.
Both VMs and the load balancer are in the same VPC network.
I need a layer-7 (HTTP(S)) internal load balancer.
Internal Load Balancing is regional only, so since you want back-ends in different regions it can still be done, but you will have to set it up yourself.
It's not possible "out of the box".
You can have a look at the Internal Load Balancing documentation, which explains how this works and why. There is also a table of the available load-balancing options.
If you want to configure your own LB then maybe try Envoy Proxy (or Nginx, or any solution you want).
In essence - unless you set up your own load balancer GCP doesn't have the functionality.
You could also use an external load balancer (which carries some risk) to balance the traffic, and restrict external access to your instances to a known set of IPs or ranges.
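A hedged sketch of that restriction with gcloud (placeholder network, tag, and client range). It relies on a pass-through network load balancer preserving the client source IP, and on GCP's implied deny-ingress default:

```shell
# Only the known client range can reach the tagged backend VMs;
# everything else is dropped by the VPC's implied deny-ingress rule.
gcloud compute firewall-rules create allow-known-clients-only \
  --network my-vpc --direction INGRESS --action ALLOW \
  --rules tcp:80,tcp:443 \
  --source-ranges 198.51.100.0/24 \
  --target-tags nat-vm
```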
With only two endpoints, a load balancer by itself has nothing to balance; you would have to put both of them behind the load balancer so it could balance their traffic. Moving both into the same region might be the only option for using the internal load balancer, but here too the servers need to be put behind it.

Why is it that the existing APIs used with Classic Load Balancer cannot be used with Application Load Balancer?

AWS documentation mentions 'Application Load Balancers require a new set of APIs'. Why is it that the existing APIs used with Classic Load Balancer cannot be used with Application Load Balancer?
The main difference between Classic Load Balancers (v1, old generation, 2009) and Application Load Balancers (v2, new generation, 2016) is that ALBs have a port-mapping feature to redirect to a dynamic port. By comparison, with CLBs you would need one CLB per application.
Overall, CLBs are deprecated: you use ALBs for HTTP/HTTPS and WebSockets, and Network Load Balancers for TCP.
Coming to your question: on an ALB you map certain paths (like an API endpoint) to a target group (e.g. EC2 instances). Within those instances you could trigger a Lambda, or whatever executes your logic. That logic can stay the same as it was when you used it with a CLB.
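One concrete way to see the split, sketched with the AWS CLI: the two generations live in separate API namespaces, so the same operation name calls a different API and returns a different fleet of load balancers:

```shell
# `elb` is the Classic Load Balancer API (2012-06-01);
# `elbv2` is the ALB/NLB API (2015-12-01). Neither lists the other's LBs.
aws elb describe-load-balancers     # Classic Load Balancers only
aws elbv2 describe-load-balancers   # Application/Network Load Balancers only
```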

GCP, Autoscaling on internal load balancer

I managed to set up autoscaling based on an external load balancer, but I didn't find a way to do the same for an internal load balancer.
Is this feature supported, how to go about auto-scaling my instance group based on the internal load balancer?
The issue is that when you configure an instance group to scale on HTTP requests, you need an HTTP(S) load balancer, which is internet-facing; the TCP/UDP load balancer, which can be internal, doesn't work for that.
The Internal Load Balancer uses a backend service which can use a managed instance group. You can assign a managed instance group to the backend or target pools of both internal and network load balancers.
Keep in mind that the Network Load Balancer uses target pools instead of backend services, but target pools can use managed instance groups as well.
Take a look at the documentation for more details. There are also a couple of related posts that I believe can be useful to you.
From your last comment:
I'm not able to set up a TCP load balancer which has a backend service; I only get a REGIONAL backend service, which doesn't support HTTP load balancing.
As stated in the Internal Load Balancing Concepts, "internal client requests stay internal to your VPC network and region", so there is neither need of HTTP here, nor a multi-regional setup.
On the same page, under section "About Internal Load Balancing", the schema shows a classic load balancing architecture, featuring one global (http) and multiple internal (tcp/udp) load balancers for each region.
Further on, under "Deploying Internal Load Balancing with clients across VPN or Interconnect", the following is stated in an "Important" note:
Internal Load Balancing is a regional product. [...] An internal load balancer cannot forward or receive traffic to and from VM instances in other regions.
Basically, if your managed instance group has instances across multiple regions, then you need an external load balancer, but if all your instances are within the same region (instances can be split across zones within this same region, e.g. us-west1-a/b/c), then you can rely on an internal load balancer.
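Putting the pieces above together, a hedged gcloud sketch with placeholder names in a single region (us-west1): a regional managed instance group with autoscaling serves as the backend of an internal TCP load balancer. Note the autoscaler here uses CPU utilization, since the HTTP-load-balancing-utilization signal is not available behind an internal TCP load balancer:

```shell
# Hedged sketch: template, health check, and resource names are placeholders.
# Regional managed instance group; the autoscaler is attached to the group
# itself, independent of the load balancer in front of it.
gcloud compute instance-groups managed create my-mig \
  --region us-west1 --template my-template --size 2
gcloud compute instance-groups managed set-autoscaling my-mig \
  --region us-west1 --max-num-replicas 10 \
  --target-cpu-utilization 0.6   # scale on CPU, not HTTP LB utilization

# Internal (regional) TCP load balancer using the group as its backend
gcloud compute backend-services create my-ilb-bs \
  --load-balancing-scheme internal --protocol tcp \
  --region us-west1 --health-checks my-hc
gcloud compute backend-services add-backend my-ilb-bs \
  --region us-west1 --instance-group my-mig --instance-group-region us-west1
gcloud compute forwarding-rules create my-ilb-fr \
  --load-balancing-scheme internal --region us-west1 \
  --ports 80 --backend-service my-ilb-bs --network default
```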