I'm very green when it comes to running servers, so I'm trying to understand the trade-offs. It seems to me that the primary benefit of using a GCP load balancer would be a layer of indirection that can be changed at runtime: if some disaster happens in my GCP region and knocks everything out of commission, I can quickly spin up my servers in another region, modify my load balancer to point there instead, and be back up and running in a matter of hours with no changes to clients.
Load balancers are expensive, though. It seems like I could get the same level of benefit from using Cloudflare Forwarding Rules, yet that doesn't seem to be a common practice.
What am I not realizing? What additional benefits would the GCP load balancer give me over Cloudflare Forwarding Rules?
Related
We are looking to move our blog platform onto a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help: in this case, which is the best option to use?
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
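For a sense of what managing it yourself involves, here is a minimal HAProxy configuration sketch for the path-based split described in the question (the backend names and addresses are placeholders, not from your setup):

    frontend www
        bind *:80
        # send /blog requests to the blog server, everything else to the main site
        acl is_blog path_beg /blog
        use_backend blog_server if is_blog
        default_backend web_server

    backend web_server
        server web1 10.0.0.10:80 check

    backend blog_server
        server blog1 10.0.0.20:80 check

The routing itself is only a few lines; the operational burden described above is everything around it.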
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing across different regions is a concern for you. Given the details you have provided, however, I'm not sure why you would want this.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB lets you route traffic based on various attributes, and the path in the URL is one of them.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an AWS ALB you can have two target groups, each with instances handling traffic: one for the first path (www.example.com) and a second target group for the other path (www.example.com/blog); see the AWS CLI sketch after these points.
ALB also supports SNI (which lets a single ALB serve multiple certificates for multiple domains), so all you need to do is set up a single HTTPS listener and upload your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
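As a rough illustration of that path rule, something like this AWS CLI call would forward /blog traffic to the blog target group (the ARNs are placeholders you would substitute):

    aws elbv2 create-rule \
        --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy \
        --priority 10 \
        --conditions Field=path-pattern,Values='/blog*' \
        --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/blog/zzzz

Requests that match no rule fall through to the listener's default action, which would point at the main site's target group.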
I have answered [something similar]; it might help you as well.
This is my opinion; take it as such. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy: cheap (USD 4 or less if you get a t3a as a spot instance), or free if you place it inside another EC2 instance of yours, perhaps using Docker.
If your project is neither small nor personal, go with ALB: more expensive, but simpler and better integrated with other AWS services.
HAProxy can handle tons of connections, but you have to do more things yourself. ALB can also handle tons of connections, and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects because if your project doesn't grow, then you don't have to touch HAProxy. It is set-and-forget, the same as ALB, but costs less.
You usually won't worry about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year ALB is the way to go.
If you are learning, then ALB should be considered, because real clients usually love to stick to AWS in all aspects; HAProxy is your call and also your risk (you would only be reducing costs for a company that usually pays far more than that for your salary, so it is not worth the risk).
I am reading about load balancing.
I understand the idea that load balancers distribute the load among several backend servers of a given app. However, very little of the literature I can find talks about what happens when the load balancers themselves start struggling with the sheer volume of requests, to the point that the "simple" task of load balancing (distributing requests among the backends) becomes an impossible undertaking.
Take for example this picture, where you see 3 load balancers (LB) and some backend servers.
Figure 1: Clients know one IP to connect to; one load balancer sits behind that IP and has to handle all those requests, so that first load balancer is the bottleneck (along with the internet connection).
What happens when the first load balancer starts struggling? If I add a second load balancer alongside the first, I must add yet another one in front so that clients still only need to know one IP. So the dilemma continues: I still have only one load balancer receiving all my requests...!
Figure 2: I added one load balancer, but to keep clients knowing just one IP I had to add another one to centralize the incoming connections, ending up with the same bottleneck.
Moreover, my internet connection will also reach the limit of clients it can handle, so I will probably want to place my load balancers in remote locations to avoid flooding my own connection. However, if I distribute my load balancers but want clients to know just one single IP to connect to, I still need one central load balancer behind that IP carrying all the traffic once again...
How do real-world companies like Google and Facebook handle these issues? Can it be done without giving clients multiple IPs and expecting them to pick one at random, so that not every client connects to the same load balancer and floods us?
Your question doesn't sound AWS-specific, so here's a generic answer (Elastic Load Balancing in AWS auto-scales with traffic):
You're right: you can overwhelm a load balancer with the number of incoming requests. If you deploy an LB on a standard machine, the first thing you are likely to exhaust or overload is the network stack, including the maximum number of open connections and the rate at which incoming connections can be handled.
As a first step, you would fine tune the network stack of your LB machine. If that still does not provide the required throughput, there are special-purpose load balancer appliances on the market that are built from the ground up and highly optimized to handle a large number of incoming connections and route them to several servers; examples are F5 and NetScaler.
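To make "fine tune the network stack" a bit more concrete, these are a few of the Linux sysctls that typically come up in such tuning; the values below are purely illustrative, since the right ones depend entirely on your workload:

    net.core.somaxconn = 4096                   # longer accept queue for pending connections
    net.ipv4.tcp_max_syn_backlog = 8192         # absorb bursts of new connection attempts
    net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for connections to backends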
You can also design your application in ways that help you split traffic across different subdomains, thereby reducing the number of requests one LB has to handle.
It is also possible to implement round-robin DNS, where one DNS name points to several client-facing LBs instead of just one as you've depicted.
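For illustration, round-robin DNS is nothing more than several A records for the same name; in BIND zone-file syntax (addresses are placeholders):

    www.example.com.  300  IN  A  203.0.113.10  ; LB 1
    www.example.com.  300  IN  A  203.0.113.11  ; LB 2
    www.example.com.  300  IN  A  203.0.113.12  ; LB 3

Resolvers generally rotate the order of the returned records, so clients spread roughly evenly across the load balancers.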
Advanced load balancers like NetScaler also do GSLB (global server load balancing) with DNS, not just simple DNS round-robin; to explain how this scales further:
If clients connect to, say, service.domain.com, you let the load balancers become the authoritative DNS servers for the zone, and you add all the load balancers as valid name servers.
When a client looks up "service.domain.com", any of your load balancers will answer the DNS request and reply with the IP of the correct data center for that client. You can then make the load balancer answer the DNS request based on the geo-location of the client, the latency between the client's DNS server and the NetScaler, or the load of the different data centers.
In each data center you typically set up one node, or several nodes in a cluster. You can scale quite high using such a design.
Since you tagged Amazon: they have load balancers built into their platform, so you don't need to run your own. Just use ELB and Amazon will direct the traffic to the right systems.
If you are doing it yourself, load balancers typically have a very light processing load. They do little more than redirect a connection from one machine to another based on a shallow inspection (or no inspection) of the data. It is possible for them to be overwhelmed, but that typically requires a load heavy enough to saturate the network connection first.
If you are running it yourself, and your load balancer is doing more work or your connection is getting saturated, the next step is to use round-robin DNS for looking up your load balancers, generally using a combination of NS and CNAME records so that different name lookups yield different IP addresses.
If you plan to use Amazon Elastic Load Balancing, they claim that:
"Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic without requiring manual intervention."
So you can go with it and do not need to run the load balancer yourself on your own instance or product.
We are building a scaled application that uses WebSockets on AWS EC2. We considered using the default ELB (Elastic Load Balancing) for this, but that unnecessarily makes the load balancer itself a bottleneck for traffic-heavy operations (see this related thread), so we are currently looking into a way to send the client the connection details of a "good instance" to connect to instead. However, the ELB API does not seem to support a query of the sort "give me the (public) connection details of a good instance", which is odd, because that is the core functionality of any load balancer. Maybe I have just not looked in the right place?
UPDATE:
Currently, we are investigating two simple solutions using default implementations:
1) Use ELB in TCP mode, which tunnels all traffic through the ELB.
2) Simply connect to the public IP of the instance that the ELB routed your initial GET request to; a sketch of this follows. This option requires public IPs to be enabled, but does not route all traffic through the ELB.
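A minimal sketch of that second option, assuming the standard EC2 instance metadata service (IMDSv1 shown; IMDSv2 additionally requires a session token):

    # run on the instance that served the initial GET; returns the
    # public IPv4 the client should open its WebSocket against
    curl http://169.254.169.254/latest/meta-data/public-ipv4

The instance would include this address in its HTTP response, and the client then connects to it directly.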
I was concerned about the tunneling in the first option, because I assumed that the ELB is not in the same building as the instance it hands you to. But I assume it usually is in the same building, or has some other high-speed connection to the instances? In that case, the tunneling overhead is negligible.
Both solutions seem equally viable, or am I overlooking something?
If your application manages to make the ELB a bottleneck, then you are a pretty big fish. Why don't you first try their load balancer and trust that they do their job right? It is difficult to do "better", and the most difficult part is defining what "better" even means. You certainly have not defined that in your question, so I am pretty sure you would be well off just using their load balancer.
In some cases it might make sense to develop your own load-balancing logic, especially if your machine selection depends on very specific metrics that are not necessarily accessible to the ELB system.
Yes, I'd say both solutions are viable.
The upside of the second is that it allows greater customization of the load-balancing logic you may want to implement (an improvement over the ELB's round-robin), dispatching requests to a server of your choosing after the initial HTTP GET request.
The downside may be on the security front. It's not clear whether security and SSL are part of your requirements, but if they are, the second solution forces you to handle SSL at the level of the EC2 instances, which can be inconvenient and affect each node's performance. Otherwise, the WebSocket communication may be left unsecured.
Questions about load balancers if you have time.
So I've been using AWS for some time now. Super basic instances, using them to do some tasks whenever I needed something done.
I have a task that needs to be load balanced now. It's not a public service though. It's pretty much a giant cron job that I don't want running on the same servers as my website.
I set up an AWS load balancer, but it doesn't do what I expected it to do.
It gets stuck on one server and doesn't load balance at all. I've read why it does this, and that's all well and good, but I need it to be a serious round-robin load balancer.
edit:
I've set up the instances in different zones, but no matter how many instances I add to the ELB, it just uses one. If I take that instance down, it switches to a different one, so I know it's working. But I really would like it to always use a different one, under every circumstance.
I know there are alternatives. Here's my question(s):
Would a custom PHP load balancer be an OK option for now?
I.e.: have a list of servers, and have PHP randomly select an EC2 instance. It wouldn't be scalable at all, but at least I could set it up in two minutes and it would work for now.
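A minimal sketch of that idea (the worker addresses are hypothetical placeholders):

    <?php
    // Naive "load balancing": pick a worker at random for each job
    // instead of sending it through the ELB.
    $workers = ['10.0.1.11', '10.0.1.12', '10.0.1.13'];
    $target  = $workers[array_rand($workers)];
    // hand the job off, e.g. over HTTP:
    // file_get_contents("http://{$target}/run-task?id={$taskId}");

Random selection evens out over many jobs, but note there is no health checking here: a dead worker would keep receiving its share of jobs.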
or
Should I take the time to learn how HAProxy works, and set that up in place of the AWS ELB?
or
Or am I doing it wrong: does AWS's ELB do round-robin after all, and I just have something configured wrong?
edit:
Structure:
1) Web server finds a task to do.
2) If it's too large, it sends it off to AWS (to the load balancer).
3) Do the job on EC2
4) Report back via curl to an API
5) Rinse and repeat
Everything works great. But because the connection always comes from my server (one IP), it gets stuck to a single EC2 machine.
ELB works well for sites whose load increases gradually. If you are expecting an uncommon and sudden increase in load, you can ask AWS to pre-warm it for you.
I can tell you I have used ELB in different scenarios and it has always worked well for me. Since you didn't provide much information about your architecture, I would bet that ELB will work for you; as for all connections hitting only one server, I would ask you:
1) Did you check the ELB to see how many instances are behind it?
2) The instances that you have behind the ELB: are they all alive?
3) Are you accessing your application through the ELB DNS?
Anyway, here is an excerpt from an excellent article that makes a very good comparison between ELB and HAProxy: http://harish11g.blogspot.com.br/2012/11/amazon-elb-vs-haproxy-ec2-analysis.html
"ELB provides Round Robin and Session Sticky algorithms based on EC2 instance health status. HAProxy provides a variety of algorithms like Round Robin, Static-RR, Least Connection, source, uri, url_param, etc."
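For a flavor of those HAProxy options, a hedged backend sketch (server addresses are placeholders); "balance source" would reproduce exactly the stickiness you are fighting, while "balance roundrobin" rotates strictly:

    backend workers
        balance roundrobin           # strict rotation, regardless of source IP
        # balance leastconn          # alternative: favor the least-busy server
        server w1 10.0.1.11:80 check
        server w2 10.0.1.12:80 check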
Hope this helps.
This point comes as a surprise to many users of Amazon ELB. Amazon ELB behaves a little strangely when incoming traffic originates from a single or specific IP range: it does not do round robin efficiently and instead sticks the requests. Under such conditions, Amazon ELB starts favoring a single EC2 instance, or EC2 instances in a single availability zone in multi-AZ deployments. For example: if you have Application A (the customer company) and Application B, and Application B is deployed inside AWS infrastructure with an ELB front end, then when all the traffic generated by Application A (a single host) is sent to Application B in AWS, Application B's ELB will not efficiently round-robin the traffic to the web/app EC2 instances deployed under it. This is because the entire incoming traffic from Application A comes from a single firewall/NAT or a specific IP range of servers, and the ELB will start unevenly sticking the requests to a single EC2 instance, or to EC2 instances in a single AZ. Note: users usually encounter this during load testing, so it is ideal to load test AWS infrastructure from multiple distributed agents.
More info at Point 9 in the following article: http://harish11g.blogspot.in/2012/07/aws-elastic-load-balancing-elb-amazon.html
HAProxy is not hard to learn and is tremendously lightweight yet flexible. I actually use HAProxy behind ELB for the best of both worlds: the hardened, managed, hands-off reliability of ELB facing the Internet and unwrapping SSL, and the flexible configuration of HAProxy letting me fine-tune how things hit my servers. I've never lost an HAProxy instance yet, but if I do, ELB will just take that one out of rotation... as I have seen happen when the back-end servers all became inaccessible, which (because of the way it's configured) makes ELB think the HAProxy is unhealthy; but that's by design in my setup.
Hey guys, I was wondering if this seems like a viable solution to the age-old problem of Amazon Elastic Load Balancers lacking a dedicated IP, and thus A-record support.
What if I created a micro/small instance and hooked it to an Elastic IP? I could then use that IP as the A-record address for my website. That instance would forward 100% of its traffic (via HAProxy?) to the ELB's address, which would then operate normally and forward that traffic to my server pool.
With this architecture I can use my A-record and an ELB.
Are there any downsides to this aside from the cost of the initial instance that forwards its traffic to the ELB?
Will this double forwarding create too much lag or is it really negligible since they're all in AWS?
Thanks for feedback.
If you are currently using Route 53 for your DNS, it does have support for handling the zone apex.
https://forums.aws.amazon.com/message.jspa?messageID=260459
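For illustration, pointing the zone apex at an ELB is done with a Route 53 alias record; a sketch with the AWS CLI (the zone IDs and names are placeholders; the AliasTarget HostedZoneId must be the ELB's own hosted zone ID for its region, not yours):

    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "Z35SXDOTRQ7X7K",
              "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'

The alias behaves like an A record at the apex but resolves to whatever IPs the ELB currently has, so you get apex support without a fixed IP.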
Not sure if this answers your question, since you didn't mention why you need a dedicated IP.
Are there any downsides to this aside from the cost of the initial instance that forwards its traffic to the ELB?
Er, yes. You're losing about 99.9% of the benefits of ELB.
Will this double forwarding create too much lag or is it really negligible since they're all in AWS?
No, the lag should be small (sub-millisecond). The two main problems are:
1) Your instance will become a bottleneck when your traffic increases. You won't be able to survive a sudden rush, such as being linked from a high-traffic website like Slashdot or Oprah.
The whole point of ELB is that it can manage scaling (both the frontend and the backend) for you. If you insert a single box into the flow, it largely prevents ELB from doing anything useful.
Also, a micro instance can take very little traffic. You would have to go to at least an m1.large if you don't want your network packets throttled.
2) Your instance will become a single point of failure. When your box dies, your website will be down. ELB can prevent problems on both the frontend and the backend with redundancy.
Perhaps if you explained why you needed an A record?
(It is also possible to run your own front end(s): just create a box with an EIP, and put nginx and/or HAProxy on it. But as with everything, there are trade-offs.)
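If you do run your own front end, a minimal illustrative nginx configuration for such a box might look like this (the pool addresses are placeholders):

    # /etc/nginx/nginx.conf (sketch): EIP box proxying to a pool
    events {}

    http {
        upstream pool {
            server 10.0.1.11;
            server 10.0.1.12;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://pool;
            }
        }
    }

nginx balances the upstream round-robin by default, so this box would do simple load balancing on its own.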