Difference between Classic and Elastic Load Balancer

I am learning about AWS elastic and classic load balancer. I understand what a load balancer does, but can someone please explain what the difference is between them?
I'm studying for an AWS certification and need to be able to explain the difference. Thanks in advance.

As others have said, you have three types of Elastic Load Balancer (ELB).
You can select the appropriate load balancer based on your application needs. If you need flexible application management, we recommend that you use an Application Load Balancer. If extreme performance and static IP is needed for your application, we recommend that you use a Network Load Balancer. If you have an existing application that was built within the EC2-Classic network, then you should use a Classic Load Balancer.
That's from the AWS ELB page, see a feature comparison and description of each service here: https://aws.amazon.com/elasticloadbalancing/features/

The AWS API and documentation are confusing when it comes to load balancers.
The first release (a TCP-only load balancer) was called ELB, for Elastic Load Balancer.
The second and current generation is called ALB, for Application Load Balancer. ALBs handle TCP/HTTP/HTTPS, filtering rules, etc. Be careful: in the API, ALBs are called LoadBalancer v2 (the elbv2 namespace)!
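You can see this naming split directly in the AWS CLI. A sketch (it assumes the CLI is installed and credentials are configured; both commands are read-only):

```shell
# v1 API: Classic Load Balancers live under the "elb" namespace
aws elb describe-load-balancers

# v2 API: Application and Network Load Balancers live under the
# separate "elbv2" namespace
aws elbv2 describe-load-balancers
```

A Classic Load Balancer will not show up under elbv2, and vice versa.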

As of 2022 there is also a Gateway Load Balancer, making four load balancer types in total:
Application Load Balancer - HTTP, HTTPS, gRPC (targets: IP, instance, Lambda),
Network Load Balancer - TCP, UDP, TLS (targets: IP, instance, Application Load Balancer),
Gateway Load Balancer - IP (targets: IP, instance),
Classic Load Balancer - SSL/TLS, HTTP, HTTPS (for EC2-Classic networks).
https://aws.amazon.com/elasticloadbalancing/features/

Related

Do GCP load balancers support HTTPS and AMQP ports with SSL termination?

I would like to know a few things about GCP load balancers.
Do GCP Load Balancers support HTTPS and AMQP ports with SSL termination?
Can GCP Load Balancers forward requests to internal IPs?
As I am not familiar with GCP, can anyone help me out on this?
Thanks.
Your question is very broad, so I will point you to the relevant documentation here.
GCP supports HTTPS load balancing:
Google Cloud HTTP(S) Load Balancing is a global, proxy-based Layer 7 load balancer that enables you to run and scale your services worldwide behind a single external IP address. External HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and Google Kubernetes Engine (GKE).
and SSL termination:
When using SSL Proxy Load Balancing for your SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP. For the types of backends that are supported, see Backends.
Have a look at the general overview of GCP's load balancing options - I don't know what your goals and requirements are but this should be helpful when choosing your load balancing method.
Answering your last question about forwarding requests to internal IPs - yes - LBs can and will do just that. The load balancer can either terminate or forward your HTTPS requests.
If you need to, you can use it at layer 4 as a simple TCP/UDP load balancer.
If you specify more requirements then I may be able to make my answer more detailed.

Load balancer for kubernetes clusters

I need some help configuring my load balancer for my Kubernetes clusters. The internal load balancer works fine. Now, I'd like to expose the service via https and I'm stumped on the Ingress configuration.
First of all, take into account that whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of the HTTP(S) load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services. Any changes that you make are overwritten by GKE.
With that in mind, note that Ingress for Internal HTTP(S) Load Balancing deploys the Google Cloud Internal HTTP(S) Load Balancer. This private pool of load balancers is deployed in your network and provides internal load balancing for clients and backends within your VPC, as per this documentation.
Now we are ready to configure an Ingress for the Internal Load Balancer. Here is an example of how to configure a simple Ingress in order to expose a simple service.
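Since the goal is to expose the service over HTTPS, a minimal Ingress manifest might look like the sketch below. This is illustrative, not taken from your setup: the Ingress name, Service name, and TLS secret name are assumptions you would adapt to your cluster, and the `gce-internal` class annotation is what selects GKE's internal HTTP(S) load balancer (omit it to get the external one).

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                 # hypothetical name
  annotations:
    # GKE-specific class for the *internal* HTTP(S) load balancer;
    # omit this annotation to provision the external HTTP(S) load balancer.
    kubernetes.io/ingress.class: "gce-internal"
spec:
  tls:
  - secretName: my-tls-secret      # hypothetical TLS secret (cert + key)
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service       # hypothetical Service name
            port:
              number: 80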
My suggestion is to implement the simple configuration first, so you understand how an Ingress works, and then configure an Ingress for GKE as per this documentation.
Let me know if you still have doubts or need more assistance.
Have a nice day, and stay safe.

Why is it that the existing APIs used with Classic Load Balancer cannot be used with Application Load Balancer?

AWS documentation mentions 'Application Load Balancers require a new set of APIs'. Why is it that the existing APIs used with Classic Load Balancer cannot be used with Application Load Balancer?
The main difference between Classic Load Balancers (v1, old generation, 2009) and Application Load Balancers (v2, new generation, 2016) is that ALBs support routing rules and port mapping to dynamic ports, so a single ALB can serve multiple applications. By comparison, you would need one CLB per application.
Overall, CLBs are legacy: use ALBs for HTTP/HTTPS and WebSockets, and Network Load Balancers for TCP.
Coming to your question: on an ALB you map certain paths (like an API endpoint) to a target group (e.g. a set of EC2 instances). Within those instances you can run whatever logic you like, and that logic can stay the same as when you used a CLB. The new API set (elbv2) exists because ALBs introduce concepts that CLBs do not have - listeners with rules, target groups, and path/host-based routing - which do not fit the v1 API model.
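As a sketch of that path-to-target-group mapping, an ALB listener rule can be added with the v2 CLI. The ARNs below are placeholders you would replace with your own:

```shell
# Route requests whose path matches /api/* to a dedicated target group.
# Both ARNs are placeholders.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/abc/def \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/api-tg/xyz
```

There is no equivalent command in the v1 (elb) API, which is exactly why ALBs needed a new API set.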

How to assign Elastic IP to Application Load Balancer in AWS?

I created an Application Load Balancer in AWS.
How can I assign an Elastic IP address to the application load balancer? I didn't find any IP address in the load balancer description.
An Application Load Balancer cannot be assigned an Elastic IP address (static IP address).
However, a Network Load Balancer can be assigned one Elastic IP address for each Availability Zone it uses.
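As a sketch, attaching an Elastic IP to an NLB happens at creation time via subnet mappings. The IDs below are placeholders, and the commands assume configured AWS credentials:

```shell
# Allocate an Elastic IP (note the returned AllocationId)
aws ec2 allocate-address --domain vpc

# Create a Network Load Balancer and bind one Elastic IP per subnet/AZ
aws elbv2 create-load-balancer \
  --name my-nlb \
  --type network \
  --subnet-mappings SubnetId=subnet-0abc1234,AllocationId=eipalloc-0def5678
```

Repeat the SubnetId/AllocationId pair for each Availability Zone the NLB spans.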
If you do not wish to use a Network Load Balancer, you can combine the two by putting the Network Load Balancer in front of the Application Load Balancer:
See: Using static IP addresses for Application Load Balancers | Networking & Content Delivery
You can now get global static IPs for your Application Load Balancer directly from the Load Balancer Management Console, either in the creation wizard or in the Integrated services tab. See this blog post.
Another option is to use AWS Global Accelerator:
AWS Global Accelerator
However, it's probably going to be more expensive than an NLB-in-front-of-ALB architecture.

Load balancer in EC2 AWS

I am working on AWS and have a question about how many applications a load balancer can support.
If I have an application whose traffic is routed and managed by one load balancer, can I use that load balancer for another application as well?
And if so, how will the ELB know which traffic should be routed to Application A's servers and which to Application B's?
Thanks
I think you may be misunderstanding the role of the load balancer. The whole point of a load balancer is that any of the servers behind it can provide any of the services. By setting it up this way you ensure that the failure of any one server will not affect availability of the service.
You can load balance any TCP service such as HTTP just by adding it as a "listener" for the ELB. The ELB can therefore support as many applications as you want to forward to the servers behind it.
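For a Classic ELB, each forwarded service is simply an extra listener. A sketch (the load balancer name, ports, and subnet ID are placeholders):

```shell
# One Classic Load Balancer with two listeners, forwarding to
# different instance ports (placeholders throughout).
aws elb create-load-balancer \
  --load-balancer-name my-clb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080" \
              "Protocol=TCP,LoadBalancerPort=5000,InstanceProtocol=TCP,InstancePort=5000" \
  --subnets subnet-0abc1234
```

With an Application Load Balancer you would instead use host- or path-based listener rules to send each application's traffic to its own target group.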
If you set up an image of a server that provides all the services you need, you can even pair the ELB with an Auto Scaling group that launches or terminates instances from that image as the load varies.