discussion
Today I came across an architecture where a single load balancer routes traffic to multiple target groups via multiple listeners:
loadbalancer:80 -> target group 1
loadbalancer:81 -> target group 2
Is it really advisable to use a load balancer like this? What is the real use case for a load balancer with multiple listeners?
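For reference, this pattern is straightforward to express in Terraform. A minimal sketch, assuming an existing ALB and target groups (all resource names here are hypothetical):

```hcl
# One ALB, two listeners on different ports, each forwarding to its own target group.
resource "aws_lb_listener" "app_one" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app_one.arn
  }
}

resource "aws_lb_listener" "app_two" {
  load_balancer_arn = aws_lb.main.arn
  port              = 81
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app_two.arn
  }
}
```

A common motivation is consolidating several internal services behind one load balancer to save cost, with each service reachable on its own port.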
Related
I have been looking at the Terraform Elastic Load Balancing resources and noticed that there are stickiness blocks inside both a listener (aws_lb_listener > default_action > forward > stickiness) and a target group.
Is there any difference between the two if the listener forwards requests to its associated target group?
Should one configure both in the same way to get sticky behaviour?
Or is it better to configure stickiness on the target group instead?
I had to go into the AWS console and look at the Load Balancer settings to see what was going on with this. Apparently you can add multiple target groups to a single listener, and the listener will spread the requests among all target groups. As part of spreading the traffic among multiple target groups you can enable a "group stickiness" setting that will cause all traffic from one source to always go to the same target group.
I had never noticed the ability to add multiple target groups to a listener before, and I had to do some searching to find any documentation on this feature. It was apparently announced via this blog post, which links to some documentation here.
So to summarize, the aws_lb_listener setting is a separate stickiness setting that only applies to weighted target groups, and "sticks" the traffic to a specific target group, not individual targets. The aws_lb_target_group stickiness setting "sticks" the traffic to an individual target.
Unless you are using multiple weighted target groups, you will want to always use the aws_lb_target_group setting for session stickiness. If you are using weighted target groups and also need sticky sessions then you would enable it in both places. If you don't normally need sticky sessions, but you do want to "stick" to a specific target group for some reason, like in a blue-green deployment scenario, then you would only enable it at the listener level.
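To make the distinction concrete, here is a hedged Terraform sketch of the weighted-target-group case with stickiness at both levels (resource names like `blue` and `green` are hypothetical; assumes the VPC and ALB are defined elsewhere):

```hcl
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"

    forward {
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 90
      }
      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = 10
      }

      # Listener-level stickiness: pins a client to ONE target GROUP
      # for the duration, so weighted routing stays consistent per client.
      stickiness {
        enabled  = true
        duration = 3600
      }
    }
  }
}

resource "aws_lb_target_group" "blue" {
  name     = "blue"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  # Target-group-level stickiness: pins a client to ONE TARGET
  # (instance/IP) within the group, i.e. classic session affinity.
  stickiness {
    type    = "lb_cookie"
    enabled = true
  }
}
```

With both enabled, a client is first stuck to a target group, then stuck to a specific target inside it.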
I have five microservices, each running on a different port, and I have to implement an Application Load Balancer on AWS. I have two scenarios:
Create 5 target groups, one per microservice -- I wonder if this will be complicated.
Or create rules in a particular listener that define path (port) based routing -- not sure about this.
What things can I try?
You will need 5 target groups regardless of the solution. You'd have a listener for HTTP and another for HTTPS, then use path-based rules to forward traffic to the appropriate target group.
See additional link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
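In Terraform, each path-based rule is an `aws_lb_listener_rule`. A sketch for one of the five microservices (the path `/svc1/*` and all resource names are illustrative assumptions):

```hcl
resource "aws_lb_listener_rule" "svc1" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10 # lower number = evaluated first

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.svc1.arn
  }

  condition {
    path_pattern {
      values = ["/svc1/*"]
    }
  }
}
```

You would repeat this block (with a distinct priority, path pattern, and target group) for each of the five services.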
In Amazon's description of its Application Load Balancer, they include the following diagram (which I have seen replicated in other materials on the topic):
But the documentation describes the Listener as being configured to communicate with Target Groups, not with the Targets directly. If that is the case, shouldn't there be only one arrow from the Listener to each Target Group it targets?
A Target Group is really just a logical construct to tell the load balancer what its targets are. The load balancer still does the work of "balancing the load" between the targets, thus it is aware of each available target and sends requests directly to them.
I have an example.com site that currently uses a load balancer with multiple servers.
We wish to launch an isolated application and route /path to only that new isolated load balancer.
I can create a "/path" listener rule in the load balancers, but for the life of me I cannot figure out how best to structure Route53 to allow a setup that follows this pseudocode:
if REQUEST is "/path" or "/path/*"
use load balancer B
else
use load balancer A
An Application Load Balancer is the only thing you'll need. Within it you create 2 target groups, one for each app (or, as I did, run the apps on different ports and assign each target group to its own port, so that one set of auto-scaling servers serves all applications).
Going down your route, though: create 2 target groups pointing to the respective servers needed for each app, then create the path rules under the load balancer's listener tab and send each path to the correct target group.
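The pseudocode above maps almost one-to-one onto a listener rule. A hedged Terraform sketch, assuming the isolated app has its own target group (names are hypothetical); anything not matching falls through to the listener's default action (load balancer A's backend):

```hcl
# if REQUEST is "/path" or "/path/*" -> isolated app's target group
resource "aws_lb_listener_rule" "isolated_app" {
  listener_arn = aws_lb_listener.main.arn
  priority     = 1

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.isolated_app.arn
  }

  condition {
    path_pattern {
      values = ["/path", "/path/*"]
    }
  }
}
```

Note that this keeps everything behind one load balancer, so Route53 only needs a single record pointing at it; the path decision happens at the listener, not in DNS (Route53 cannot route on URL paths).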
Regards
Liam
So I am trying to get my AWS setup working with DNS.
I have 2 instances (currently).
I have 4 task definitions. 3 of these need to run on ports 80/443, but each on a separate subdomain.
Currently, if I stop/start a task, it can end up on either of my instances. This causes issues, as the subdomain DNS can end up pointing to the wrong place.
I imagine I need to setup some kind of load balancer to point the DNS at, but unsure how to get that to route through to the correct tasks.
So my questions:
Do I need a single load balancer, or one per 'task / subdomain'?
How do I handle the ports, going from a set source port to one of any number of destination ports (if I end up having multiple containers running the same task)?
Am I over complicating this massively, or is there a simpler way to achieve this?
Do I need a single load balancer, or one per 'task / subdomain'?
You can have a single Application Load Balancer and three target groups for the API, site, and web app. Then you can do rule-based routing in the load balancer listener, as shown in the following screenshot.
Ref: http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
You can then map your domains www.domain.com and app.domain.com to the load balancer.
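Since each subdomain shares ports 80/443, the routing decision is made on the Host header. A sketch of one such rule in Terraform (domain and resource names are placeholders):

```hcl
# Route app.domain.com to the web app's target group;
# other hosts fall through to the listener's default action.
resource "aws_lb_listener_rule" "web_app" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 20

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_app.arn
  }

  condition {
    host_header {
      values = ["app.domain.com"]
    }
  }
}
```

One rule per subdomain, all on the same listener, lets three services share ports 80/443 behind a single load balancer.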
How do I handle the ports to go from a set source port, to one of any number of destination ports (if I end up having multiple containers running the same task)
When you create services for your task definitions in ECS you can configure load balancing using the target groups you created.
Ref: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html (Check on "Configuring Your Service to Use a Load Balancer")
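The ECS side of that wiring, sketched in Terraform (assuming a cluster, task definition, and target group already exist; names are illustrative):

```hcl
resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 2

  # ECS registers each task's container with the target group,
  # using a dynamic host port, so the ALB always finds the task
  # no matter which instance it lands on.
  load_balancer {
    target_group_arn = aws_lb_target_group.api.arn
    container_name   = "api"
    container_port   = 8080
  }
}
```

With dynamic host port mapping (host port 0 in the task definition), the source-port-to-destination-port question disappears: the target group tracks whatever ephemeral port each container receives.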