Load balancer methodology (L4, L7) and AWS

Recently I have been trying to catch up on the knowledge I'm missing around load balancing internals, and I found a great article on the topic.
But it left me with more questions than before ;)
Up to now, my understanding has been that L4 LBs come in two flavors:
LB of the terminating type - which creates a new connection to a backend for each incoming connection.
LB of the passthrough type - which can be further split into NAT-based and direct-routing variants (the latter possibly with tunneling).
Now, one of the questions that came to my mind is how this fits into the AWS world: what type of LB is AWS Network Load Balancer in that case?
The next thing is about L7 LBs.
Does a layer 7 LB also rely on NAT or direct routing, or is it beyond all that entirely? While there is quite a lot of material around layer 4, layer 7 is covered really poorly when it comes to proper articles on internals. I know only the top products, like HAProxy and nginx, but I still don't get the difference between them :(
I will be very thankful to anyone who can give me at least some advice on how to connect the dots here :)

what type of LB is AWS Network Load Balancer in that case?
The Network Load Balancer is a layer 4 load balancer; see the information provided in the Amazon docs:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
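To make the taxonomy concrete on the AWS side, here is a minimal sketch (Python with boto3; the name and subnet ID are placeholders) showing that the layer 4 variant is selected simply by Type="network" when provisioning, while Type="application" would give you the layer 7 ALB instead:

# Minimal sketch: provisioning an NLB with boto3.
# The name and subnet ID are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_load_balancer(
    Name="my-nlb",                          # placeholder name
    Type="network",                         # layer 4 (NLB); "application" would be the layer 7 ALB
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],   # placeholder subnet
)
print(response["LoadBalancers"][0]["LoadBalancerArn"])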
For the second question.
Does a layer 7 LB also rely on NAT or direct routing, or is it beyond all that entirely?
In the Amazon world, the Application Load Balancer is the only one that supports layer 7 load balancing, and it works as described in the article:
"The client makes a single HTTP/2 TCP connection to the load balancer. The load balancer then proceeds to make two backend connections"
So the client has a direct connection to the load balancer, and that connection is terminated and split into new connections served by the backend pools.
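To illustrate that terminating behavior, here is a minimal sketch of a layer 7-style proxy in Python. It skips HTTP parsing entirely and just shows the two-connection structure: the client's TCP connection ends at the proxy, and a brand-new connection is opened to a backend. The backend addresses are made-up placeholders; this is not how the ALB is implemented, only an illustration of the pattern:

# Sketch of a terminating proxy: the client's connection ends here,
# and a *separate* connection is opened to a backend.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.10", 8080), ("10.0.0.11", 8080)]  # placeholder backends
pool = itertools.cycle(BACKENDS)

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes until the source side closes.
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def handle(client: socket.socket) -> None:
    backend = socket.create_connection(next(pool))  # the second, backend-side connection
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    pump(backend, client)

server = socket.create_server(("0.0.0.0", 8080))
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()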

GCP classic load balancer vs. modern load balancer: WebSockets don't work

We are having some issues getting WebSockets to work with a load balancer in Google Cloud. We narrowed it down to a difference between the classic load balancer (works fine) and the HTTPS load balancer with advanced traffic management that is selected by default but marked as a preview (does not work).
We have an instance group that definitely supports WebSockets, i.e., we can connect to it directly via its IP address.
We set up a load balancer and went for the one with traffic management. That worked fine for normal requests, but all the WebSocket requests fail with a 502. We did not select HTTP/2 (which is documented as not working for this). We tried all sorts of things to get this working. Even though it is documented that this should work out of the box, it clearly doesn't.
$ websocat wss://lb.tryformation.com/websocket/messages
websocat: WebSocketError: Received unexpected status code (502 Bad Gateway)
websocat: error running
As a last resort, I then set up a classic LB with the same configuration: same instance group, same health check, same certificate, etc. And this worked on the first try.
So, clearly the new-style load balancer does not work as advertised when it comes to WebSockets. The question is: why? Is this a known issue, or is there something I should configure to get WebSockets working with it?
We're fine using the classic LB since it works, but I would like to understand the issue.
FWIW:
Assuming you're using GCP's global external HTTP(S) "modern" load balancer, the documentation under GCP CLB Overview > WebSocket support states:
The global external HTTP(S) load balancer with advanced traffic management capability does not support Websockets. Websockets work with the global external HTTP(S) load balancer (classic) and regional external HTTP(S) load balancer as expected.
If you're using the regional "modern" LB, keep in mind that these "modern" load balancers are still in Preview. I'm sure you've seen this, but I'm only noting it because I've had experience with GCP products in the past that claimed to "support websockets" while in Preview, but didn't work correctly until available in GA.
Since you didn't provide more details, it's impossible to reproduce the issue, and hence to conclude anything; there are just too many variables.
From your description it looks like some issue with traffic management in HTTPS load balancing. If you can reproduce it, you can report it at Google's IssueTracker under the load balancing component: describe the issue in more detail and provide detailed reproduction steps and, if possible, the setup you used. After that, someone will get back to you :)

Amazon EC2 Load Balancer

I want to know the major differences between the Amazon Application Load Balancer (ALB) and the Classic Load Balancer (CLB). I have searched for this, but only found examples, such as: a Classic Load Balancer serves only the same content, while an Application Load Balancer can serve different content, grouped via target groups.
The ALB has some features (e.g., host-based routing, path-based routing), but my question is why we would use an ALB instead of a Classic Load Balancer. Please provide use cases for both.
ALB
The Application Load Balancer (Elastic Load Balancing V2) lets you direct traffic to EC2 instances based on more complex criteria/rules, specifically URL paths. You can have users trying to access "/signup" go to one group of instances and users trying to access "/homepage" go to another. In comparison to the classic ELB, an ALB inspects traffic at the application level (OSI layer 7). It's at this level that URL paths can be parsed, for example.
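As a hedged illustration of such path-based rules, here is how one could be attached to an existing ALB listener with boto3 (the ARNs are placeholders you would replace with your own):

# Minimal sketch: a path-based routing rule on an ALB listener.
# Both ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxx/yyy",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/signup*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/signup/zzz",
    }],
)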
ELB/Classic
The classic load balancer routes traffic uniformly using information at the TCP level (OSI layer 4, transport). It will either send requests to the instances "round-robin" style or utilize sticky sessions and send each user/client to the same instance they initially landed on.
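Both strategies are simple enough to sketch. The following is purely illustrative, not ELB's actual implementation: round-robin cycles through the instances, while sticky sessions pin a session ID to whichever instance it first landed on:

# Illustrative sketch of round-robin vs. sticky sessions.
import itertools

instances = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]  # placeholder instances
rr = itertools.cycle(instances)
sticky: dict[str, str] = {}  # session id -> pinned instance

def pick_instance(session_id=None):
    if session_id is None:
        return next(rr)                # plain round-robin
    if session_id not in sticky:
        sticky[session_id] = next(rr)  # pin on first request
    return sticky[session_id]          # same instance ever after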
Why ALB over ELB?
You could use an ALB if you decided to architect your system in such a way that each path had its own set of instances or its own service. So
/signup, /login, /info, etc. all go through one load balancer that is pinned to your domain name https://mysite.com, but a different set of EC2 instances services each path. ALBs only support HTTP/HTTPS; if your system uses another protocol, you would have to use an ELB/classic load balancer. WebSockets and HTTP/2 are currently only supported on the ALB.
Another reason you might choose the ALB over the ELB is that there are some additional newer features that have not yet been added to the ELB, or may never be added. As Michael points out below, AWS WAF is not supported on the classic load balancer, but it is on the ALB. I expand on other feature differences farther down.
Why ELB over ALB?
Architecturally speaking, it's much simpler to send every request to the same set of instances and then, internally within your application, delegate requests for certain paths to certain functions/classes/methods. This is essentially the monolith design most applications start out as. For most workloads, dedicating traffic to certain instance groups (the ALB approach) would be a waste of EC2 power. You would have some instances doing lots of work and others barely used.
Similarly, there are features of the classic ELB that have not yet arrived on the ALB. I expand on that below.
Update - More on Feature Differences
From a product perspective, they differ in other ways that aren't really related to how they operate; it's more about some features not yet being present on one or the other.
HTTP to HTTPS Redirection - For example, in the ALB each target group (the group of instances to which you're assigning a specific route) can currently only handle one protocol, so implementing HTTP to HTTPS redirects requires two instances minimum. With the ELB you can handle HTTP to HTTPS redirection on a single instance. I imagine the ALB will have this feature soon.
https://forums.aws.amazon.com/thread.jspa?threadID=247546
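For completeness, the extra redirect-only instance described above would serve something like the following (a purely illustrative Python sketch, not an AWS feature): answer every plain-HTTP request with a 301 to the same URL over HTTPS:

# Sketch of a minimal HTTP-to-HTTPS redirect server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "example.com")  # fallback is a placeholder
        self.send_response(301)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

HTTPServer(("0.0.0.0", 80), RedirectHandler).serve_forever()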
Multiple SSL Certificates on One Load Balancer - With an ALB you can assign multiple SSL certificates for different domains. This is not possible on a classic load balancer, though the feature has been requested. For a classic load balancer you can use a wildcard certificate, but that is not the same thing. The ALB makes use of SNI (Server Name Indication) to make this possible, whereas this has not been added to the classic ELB feature set.

Do load balancers flood?

I am reading about load balancing.
I understand the idea that load balancers distribute the load among several slave servers of any given app. However, very little of the literature I can find talks about what happens when the load balancers themselves start struggling with the huge amount of requests, to the point where the "simple" task of load balancing (distributing requests among slaves) becomes an impossible undertaking.
Take for example this picture where you see 3 Load Balancers (LB) and some slave servers.
Figure 1: Clients know one IP to which they connect, one load balancer is behind that IP and will have to handle all those requests, thus that first load balancer is the bottleneck (and the internet connection).
What happens when the first load balancer starts struggling? If I add a new load balancer alongside the first one, I must add yet another one in front so that the clients still only need to know one IP. So the dilemma continues: I still have only one load balancer receiving all my requests...!
Figure 2: I added one load balancer, but for having clients to know just one IP I had to add another one to centralize the incoming connections, thus ending up with the same bottleneck.
Moreover, my internet connection will also reach the limit of clients it can handle, so I will probably want to have my load balancers in remote places to avoid flooding my own internet connection. However, if I distribute my load balancers and still want my clients to know just one single IP to connect to, I again need one central load balancer behind that IP carrying all the traffic...
How do real-world companies like Google and Facebook handle these issues? Can it be done without giving the clients multiple IPs and expecting them to choose one at random, so that not every client connects to the same load balancer and floods it?
Your question doesn't sound AWS-specific, so here's a generic answer (the elastic LB in AWS auto-scales depending on traffic):
You're right, you can overwhelm a load balancer with the number of incoming requests. If you deploy an LB on a standard machine, you're likely to first exhaust/overload the network stack, including the maximum number of open connections and the handling rate of incoming connections.
As a first step, you would fine-tune the network stack of your LB machine. If that still does not provide the required throughput, there are special-purpose load balancer appliances on the market that are built from the ground up and highly optimized to handle a large number of incoming connections and route them to several servers. Examples of these are F5 and NetScaler.
You can also design your application in ways that help you split traffic across different subdomains, thereby reducing the number of requests one LB has to handle.
It is also possible to implement round-robin DNS, where you would have one DNS entry point to several client-facing LBs instead of just one, as you've depicted.
Advanced load balancers like NetScaler and similar also do GSLB (global server load balancing) with DNS, not simple DNS round-robin (to explain further scaling):
If clients are to connect to, e.g., service.domain.com, you let the load balancers become the authoritative DNS for the zone and add all the load balancers as valid name servers.
When a client looks up "service.domain.com", any of your load balancers will answer the DNS request and reply with the IP of the correct data center for that client. You can then further make the load balancer answer the DNS request based on the geolocation of the client, the latency between the client's DNS server and the NetScaler, or the load of the different data centers.
In each data center you typically set up one node, or several nodes in a cluster. You can scale quite high using such a design.
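From the client's point of view, the effect of such DNS-based distribution is easy to observe: one name resolves to several addresses, and different lookups (or different resolvers) can be handed different ones. A small Python sketch, reusing the hypothetical service.domain.com name from above:

# Sketch: observing DNS-based distribution from the client side.
import random
import socket

infos = socket.getaddrinfo("service.domain.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print("advertised addresses:", addresses)

# A plain client just picks one; with GSLB the authoritative server has
# already chosen which addresses to advertise to this particular resolver.
print("connecting to:", random.choice(addresses))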
Since you tagged Amazon: they have load balancers built into their system, so you don't need to run your own. Just use ELB and Amazon will direct the traffic to the correct systems.
If you are doing it yourself, load balancers typically have a very light processing load. They typically do little more than redirect a connection from one machine to another based on a shallow inspection (or no inspection) of the data. It is possible for them to be overwhelmed, but typically that requires a load that would saturate most connections.
If you are running it yourself, and if your load balancer is doing more work or your connection is getting saturated, the next step is to use Round-Robin DNS for looking up your load balancers, generally using a combination of NS and CNAME records so different name lookups give different IP addresses.
If you plan to use the Amazon Elastic Load Balancer, they claim that:
Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic without requiring manual intervention.
So you can go with it and do not need to handle the load balancer with your own instance/product.

Websocket Load Balancing on AWS EC2

We are building a scaled application that uses WebSockets on AWS EC2. We were considering using the default ELB (Elastic Load Balancing) for this, but that unnecessarily makes the load balancer itself a bottleneck for traffic-heavy operations (see this related thread), so we are currently looking into a way to send the client the connection details of a "good instance" to connect to instead. However, the Elastic Load Balancing API does not seem to support a query of the sort "give me the (public) connection details of a good instance", which is odd, because that is the core functionality of any load balancer. Maybe I have just not looked in the right place?
UPDATE:
Currently, we are investigating two simple solutions using default implementations:
Use ELB in TCP mode, which tunnels all traffic through the ELB.
Simply connect to the public IP of the instance that the ELB connected you to for your initial GET request. The second solution requires public IPs to be enabled, but does not route all traffic through the ELB.
I was concerned about that very last part because I assumed that the ELB is not in the same building as the instance it gave you. But I assume it usually is in the same building, or has some other high-speed connection to the instances? In that case, the tunneling overhead is negligible.
Both solutions seem to be equally viable, or am I overlooking something?
If your application manages to make the ELB a bottleneck, then you are a pretty big fish. Why don't you first try using their load balancer, trusting that they do their job right? It is difficult to do "better", and the most difficult part of this is defining what "better" means in the first place. You did not really define that in your question, so I am pretty sure you are well off using just their load balancer.
In some cases it might make sense to develop your own load balancing logic, especially if your machine usage depends on very special metrics not per se accessible to the ELB system.
Yes, I'd say both solutions are viable.
The upside of the second is that it allows greater customization of the load balancing logic you may want to implement (providing an improvement over the ELB's round-robin), dispatching requests to a server of your convenience after an initial HTTP GET request.
The downside may be on the security front. It's not clear whether security and SSL are part of your requirements, but in case they are, the second solution forces you to handle SSL at the EC2 instance level, which can be inconvenient and affect each node's performance. Otherwise, WebSocket communications may be left unsecured.
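A hypothetical sketch of that second approach from the client's side, where an initial HTTP GET (to an assumed /which-host endpoint, not a real ELB API) returns the public host of a chosen instance, and the WebSocket is then opened directly against it:

# Hypothetical sketch: HTTP GET first, then a direct WebSocket.
# The /which-host endpoint and hostname are assumptions for illustration.
import json
import urllib.request

# Step 1: ask the ELB-fronted service which instance to use.
with urllib.request.urlopen("https://lb.example.com/which-host") as resp:
    host = json.load(resp)["host"]

# Step 2: the long-lived WebSocket connection bypasses the ELB entirely.
ws_url = f"wss://{host}/socket"
print("would open WebSocket to:", ws_url)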

UDP service with Amazon Web Services

Good Day,
I have been using AWS quite a bit for my cloud-based system for a hardware project. Using SimpleDB and the notification service provided is great.
However, I need a backend on AWS that basically listens to incoming requests, processes them, and sends a response back to a particular address. Some kind of UDP service.
I could easily write a C#/C++ app for it, but I am not sure if I can host it on AWS. Does anyone know how this works?
Short answer: yes.
EC2 instances are just like any other virtual machine; obviously you can run a server on one that listens for UDP. Configuring the network for this is, of course, slightly more complicated, but possible. The one thing making it more complicated is that with UDP you will not be able to enjoy the load balancer service that Amazon offers, as it (currently) only supports TCP-based protocols.
So, if you have one server you wish to put on the internet, the procedure is probably the same as what you'd do with a TCP server: set up a server and an Elastic IP pointing to it, and then have your clients connect to it (by knowing the Elastic IP you've been allocated, or by referring to that IP via DNS resolution). If you have multiple servers that you wish to set up answering on the same address, life is a bit more complicated. With TCP, you could have set up an Amazon load balancer and assigned your Elastic IP to it. If you want a load balancer for UDP, the stock Amazon load balancer can't do that, but you can still find a software load balancer (there are hundreds of them in Amazon's public images library) to set up.
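For the "listens to requests, processes them, and replies" part of the question, such a UDP service is only a few lines in any language. A minimal Python sketch (the port is arbitrary, and on EC2 the security group must allow inbound UDP on it):

# Minimal UDP service: listen, "process", reply to the sender.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # arbitrary port

while True:
    data, addr = sock.recvfrom(4096)  # addr is the sender's (ip, port)
    result = data.upper()             # placeholder processing step
    sock.sendto(result, addr)         # send back to that particular address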
Nginx has an Amazon image that will load balance UDP for $2,500/yr, or you can launch your own EC2 instance and use open-source Nginx.
My specific use case was a UDP logging service; if you can use hostnames, Route 53 could be a scalable managed solution as well.