AWS ECS and Load Balancing

I see that ECS services can use Application Load Balancers, and the dynamic port mapping works automagically. However, an ALB has a maximum of 10 rules other than the default rules. Does that mean I need a separate ALB for every 10 services, unless I'm willing to access them via different ports (in which case the default rules would kick in)? This seems obvious, but for something touted as the solution to load balancing in a microservices environment, it would be incredibly limiting. Am I missing something?

As far as I know and have experienced, this is indeed true: you are limited to 10 rules per ALB. Keep in mind that this setup (ALB + ECS) is fairly new, so it is possible that Amazon will raise the limits as people request it.
Keep in mind as well that a rule forwards to a target group that typically has multiple targets; in a microservice architecture this translates to multiple instances of the same service. So while you can run only 10 different services, you can run, say, 10 instances of each service, balancing 100 containers with a single ALB.
Alternatively (to save costs) you could create one listener with multiple rules, but those rules have to be distinguished by path pattern and all listen on (rather than route to) the same port. Each rule can forward to a target group of your choice, e.g. you can route /service1 to container 1 and /service2 to container 2 within one listener.
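For illustration, here is a minimal boto3 sketch of such path-based rules on one listener; the listener and target group ARNs are placeholders, not real resources:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs; substitute your own listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
SERVICE1_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service1/111"
SERVICE2_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service2/222"

# Route /service1* and /service2* to their own target groups within
# the same listener; each rule needs a unique priority.
for priority, (path, tg_arn) in enumerate(
    [("/service1*", SERVICE1_TG), ("/service2*", SERVICE2_TG)], start=1
):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [path]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```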

Yes, you are correct, and the limit is low. However, if you are able to use different CNAMEs for your services, then one ALB per service, each with a single target group, behaves no differently from one shared ALB with multiple target groups and rules. Dynamic ports are probably the main part of their "microservices solution" argument.

Related

HAProxy vs ALB or any other load balancer: which one to use?

We are looking to move our blog platform to a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help in this case what is the best option to use:
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing across different regions is a concern for you. Given the details you have provided, however, I'm not sure why you would want this.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB allows us to route traffic based on various attributes, and the path in the URL is one of them.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an AWS ALB you can have two target groups, one handling traffic for the first path (www.example.com) and a second target group for the other path (www.example.com/blog).
ALB also supports SNI (which lets a single ALB serve multiple certificates for multiple domains), so all you need to do is set up a single HTTPS listener and attach your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
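To make the SNI point concrete, attaching an extra ACM certificate to an existing HTTPS listener is a single call; a minimal boto3 sketch with placeholder ARNs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for the existing HTTPS listener and the extra certificate.
HTTPS_LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
BLOG_CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000"

# Add the certificate to the listener; the ALB then selects the right
# certificate per request via SNI.
elbv2.add_listener_certificates(
    ListenerArn=HTTPS_LISTENER_ARN,
    Certificates=[{"CertificateArn": BLOG_CERT_ARN}],
)
```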
I have answered [something similar] before; it might help you as well.
This is my opinion; take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance, or free if you place it inside another EC2 instance of yours, maybe using Docker).
If your project is not personal or not small, go with ALB (more expensive, but simpler and better integrated with other AWS services).
HAProxy can handle tons of connections, but you have to do more things by yourself. ALB can also handle tons of connections and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects because if your project doesn't grow, then you don't have to touch HAProxy. It is set-and-forget the same as ALB, but costs less.
You usually won't care about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year ALB is the way to go.
If you are learning, then ALB should be considered, because real clients usually love to stick with AWS in all aspects; HAProxy is your call and also your risk (just to reduce cost for a company that usually pays a lot more for your salary, so it is not worth the risk).

AWS ALB - single for all services?

We have many internet-facing services. What are the considerations in choosing between an ALB per service and a single ALB for all of them, using listener rules pointing to target groups?
Each service has its own cluster/target group, with different functionality and a different URL.
Can one service spike impact other services?
Is it going to be a single point of failure ?
Cost perspective ?
Observability, monitoring, logs ?
Ease of management ?
Personally I would normally use a single ALB and use different listeners for different services.
For example, say I have service1.domain.com and service2.domain.com. I would have two host-based rules on the same listener of one ALB, routing to the two different services.
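For illustration, a minimal boto3 sketch of such host-header rules; the listener ARN, target group ARNs, and hostnames are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder shared listener and per-service target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/shared-alb/abc/def"
SERVICES = {
    "service1.domain.com": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service1/111",
    "service2.domain.com": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service2/222",
}

# One host-based rule per service on the shared listener;
# priorities must be unique within the listener.
for priority, (host, tg_arn) in enumerate(SERVICES.items(), start=1):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "host-header", "Values": [host]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```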
In my experience ALB is highly available and scales very nicely without any issues. I've never had a service become unreachable due to scaling issues. ALBs scale based on "Load Balancer Capacity Units" (LBCUs). As your load balancer requires more capacity, AWS automatically assigns more LBCUs, which allows it to handle more traffic.
Source: Own experience working on an international system consisting of monoliths and microservices which have a large degree of scaling between timezones.
You don't have an impact on service B if service A has a spike, but identifying which service is having a bad time can be a bit of a pain.
From a monitoring perspective it is also harder, because it is not that easy to quickly identify which service/target is suffering.
For management, as soon as different teams need to create/manage their own targets, it can create conflicts.
I wouldn't encourage you to use that monolithic architecture.
From a cost perspective you can use one load balancer with multiple forward rules, but using a single central load balancer for an entire application ecosystem essentially duplicates the standard monolith architecture while enormously increasing the number of instances served by that one load balancer. In addition to being a single point of failure for the entire system should it go down, this single load balancer can very quickly become a major bottleneck, since all traffic to every microservice has to pass through it.
Using a separate load balancer per microservice type adds some overhead, but it confines the single point of failure to one microservice: in this model, incoming traffic for each type of microservice is sent to a different load balancer.

Application Load Balancer on AWS for microservices

I have five microservices, each running on a different port, and I have to implement an Application Load Balancer on AWS. I see two scenarios:
Create 5 target groups, one per microservice -- I wonder if this will be complicated.
Or create rules in a particular listener where I can define path (port) based routing -- not sure about this.
What things can I try?
You will need 5 target groups regardless of the solution. You'd have a listener for HTTP and another for HTTPS. Then use path-based rules to forward the traffic to the appropriate target group.
See additional link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
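As a rough illustration of that setup, here is a boto3 sketch that creates one target group per service and a path rule for each; the VPC ID, listener ARN, service names, and ports are all assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"

# Illustrative service names and container ports.
services = {"svc-a": 8081, "svc-b": 8082, "svc-c": 8083, "svc-d": 8084, "svc-e": 8085}

for priority, (name, port) in enumerate(services.items(), start=1):
    # One target group per microservice.
    tg = elbv2.create_target_group(
        Name=name,
        Protocol="HTTP",
        Port=port,              # port traffic is forwarded to
        VpcId=VPC_ID,
        TargetType="instance",  # use "ip" for awsvpc/Fargate tasks
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
    # One path-based rule per microservice on the shared listener.
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [f"/{name}/*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```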

Several firewalls when creating Google Cloud Platform instance. Which to use?

I created a Google Cloud Platform Ubuntu 16.04 instance. It seems GCP has several places where traffic can be filtered:
The Instances section of the GCP console lets me allow or disallow HTTP and HTTPS traffic.
In the Networking section I can create additional firewall rules which limit access to the network.
Finally, in the Ubuntu instance itself I can configure UFW to block/allow certain ports.
Should I configure all of these? Would it be better to just configure one and allow all in the others?
As a note, this instance will serve a website, so I would only allow HTTP/HTTPS traffic.
The complete answer is that it depends.
For number one, the only thing that happens is that the default-allow-http rule gets applied to that instance.
The Networking section is where you define your own rules that you want applied to instances. It becomes easier to maintain all your networking configs in Google Cloud once you have multiple instances and load balancers. You can apply a single rule to several machines, and you can compose rules.
Finally, I would use ufw/iptables only as a last-resort config. For example, if I had some machines behind a load balancer and one of them were doing something weird, I would SSH into it, block port 80, and investigate.
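To make the Networking-section option concrete, here is a minimal sketch using the google-api-python-client library; the project ID, rule name, and target tag are illustrative assumptions, and credentials are taken from the environment:

```python
from googleapiclient import discovery

# Build a Compute Engine API client (uses application default credentials).
compute = discovery.build("compute", "v1")

firewall_body = {
    "name": "allow-http-https",           # hypothetical rule name
    "network": "global/networks/default",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    "sourceRanges": ["0.0.0.0/0"],        # open to the internet
    "targetTags": ["web-server"],         # applies only to tagged instances
}

# Insert the firewall rule at the network level (placeholder project ID).
compute.firewalls().insert(project="my-project", body=firewall_body).execute()
```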

How to use Application Load Balancer for an ECS Service with multiple port mappings?

I want to be able to use an ALB (ELBv2) to route traffic to multiple port mappings that are exposed by a task of a given service.
Example --
Service A is composed of 1 Task running with Task Definition B.
Task Definition B has one 'Container' which internally runs two daemons on two different port numbers (port 8000 and port 9000, both TCP). Thus, Task Definition B has two ports that need to be mapped to the ALB.
I'm not too worried about the ports the ALB exposes (they don't have to be 8000 and 9000, but it would help if they were).
my-lb-dns.com:8000 -> myservice:8000
my-lb-dns.com:9000 -> myservice:9000
Any ideas on how to create multiple listeners and target groups to achieve this? Nothing in the Console UI is allowing me to do this, and the API has not been very helpful either.
After speaking with AWS support, it appears that the ECS service integration is geared toward microservices that are expected to expose only one port.
Having an ECS Service use an Application Load Balancer to map two or more ports isn't supported.
Of course, an additional load balancer can be added manually by configuring the appropriate target groups etc., but ECS will not automatically update that configuration when services are updated or scaled up, or when the underlying container instances change.
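If you do go the manual route described above, the core step is registering the task's second host port with an extra target group yourself. A minimal boto3 sketch follows; the target group ARN, instance ID, and port are placeholders, and keeping the registration in sync as tasks move or scale is entirely on you:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN/ID for the manually managed second target group
# and the EC2 container instance running the task.
SECOND_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/myservice-9000/333"
INSTANCE_ID = "i-0123456789abcdef0"

# Manually register the host port for the second daemon (port 9000);
# ECS will NOT update this registration when the task is replaced.
elbv2.register_targets(
    TargetGroupArn=SECOND_TG_ARN,
    Targets=[{"Id": INSTANCE_ID, "Port": 9000}],
)
```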