Application Load Balancer on AWS for microservices - amazon-web-services

I have five microservices, each running on a different port, and I need to put an Application Load Balancer on AWS in front of them. I see two options:
Create 5 target groups, one per microservice -- I wonder if this will be complicated.
Or create rules on a single listener that do path-based routing -- not sure about this.
What things can I try?

You will need 5 target groups regardless of which option you pick. You'd have a listener for HTTP and another for HTTPS, then use path-based rules to forward the traffic to the appropriate target group.
See additional link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
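The combination of the two scenarios can be sketched in Terraform. The service names, ports, and path patterns below are placeholders (the question doesn't give them), and the ALB, listener, and `var.vpc_id` are assumed to be defined elsewhere:

```hcl
# One target group per microservice; names, ports, and paths are assumed.
locals {
  services = {
    users   = { port = 5001, priority = 10 }
    orders  = { port = 5002, priority = 20 }
    billing = { port = 5003, priority = 30 }
    search  = { port = 5004, priority = 40 }
    reports = { port = 5005, priority = 50 }
  }
}

resource "aws_lb_target_group" "svc" {
  for_each = local.services

  name     = each.key
  port     = each.value.port
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

# One path-based rule per service on the shared listener.
resource "aws_lb_listener_rule" "svc" {
  for_each = local.services

  listener_arn = aws_lb_listener.http.arn
  priority     = each.value.priority

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.svc[each.key].arn
  }

  condition {
    path_pattern {
      values = ["/${each.key}/*"]
    }
  }
}
```

So both scenarios happen at once: 5 target groups, plus one rule per target group on a single listener.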

Related

How can I host a server from two domains?

I have two domains in Cloudflare. I want to use those two domains for my application.
For example, www.abc.com, www.xzy.com
They should point at the same server. I created load balancers and target groups for my ECS clusters. They work, but there is one problem: the first target group's registered targets are updated automatically, but the second one's are not, because I can only define one target group for ECS Fargate. So after every deployment I have to update the registered targets manually. How can I do this automatically?
AWS ALB supports host-based routing using the Host header, so both domains can share the one ALB and the one automatically-managed target group.
You can follow this guide to set it up
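A minimal sketch of a host-based rule in Terraform, assuming the HTTPS listener and the ECS-managed target group (`aws_lb_target_group.app`) already exist:

```hcl
# Both domains match one rule and forward to the same target group,
# so ECS keeps the registered targets up to date after each deployment.
resource "aws_lb_listener_rule" "by_host" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }

  condition {
    host_header {
      values = ["www.abc.com", "www.xzy.com"]
    }
  }
}
```

With this, there is no need for a second target group at all; both Cloudflare domains just CNAME to the same ALB.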

AWS - ELB - Routing http/https traffic to a custom port of EC2 instance

I have an application up and running on an EC2 instance at port 5000. I've been trying to add either an Application Load Balancer or a Classic Load Balancer to route my traffic to this application.
At this point, the application is available over HTTP at http://example.com:5000/.
So my question is: what steps do I need to take to make this application available without typing the port number in the URL?
Please note that I want to have multiple instances of the app up and running on different ports, mapped to different subdomains.
Thanks
So after spending a couple of hours and going through the documentation again, this is how it worked for me.
Created an Application load balancer
Created a Target Group that listens on HTTP port 80.
In this target group, selected the ec2 instance and registered it on port 5000
In the load balancer section, added two listeners, one for HTTP and one for HTTPS, with a default action to forward all traffic to the target group created in step 2. It all worked for me.
The important bit was to set up the target group in steps 2 and 3 correctly. I was creating two target groups, one for HTTP and one for HTTPS, which was incorrect. I just had to create one target group for HTTP only.
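The steps above can be sketched in Terraform. The resource names, and the ALB and instance they refer to (`aws_lb.main`, `aws_instance.app`, `var.vpc_id`), are assumptions for illustration:

```hcl
# The target group talks to the instance on the app's port (5000)...
resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 5000
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

resource "aws_lb_target_group_attachment" "app" {
  target_group_arn = aws_lb_target_group.app.arn
  target_id        = aws_instance.app.id
  port             = 5000
}

# ...while the listener accepts traffic on the standard port 80
# and forwards it to that target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```

The key point matches step 2: the listener port (80) and the target registration port (5000) are independent, which is what removes the port from the URL.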

Relationship between Forwarding Rules, Target HTTP Proxy, URL Map, and Backend Service, in GCP

I'm new to GCP and pretty confused by the load balancing setup if you have an HTTP service (I asked a different question about TCP load balancing here: purpose of Target Pools in GCP).
It seems like, if you have a service which uses HTTP and you want to use Load Balancing, you have to create a lot of different components to make it happen.
In the tutorial I'm going through in Qwiklabs (https://google.qwiklabs.com/focuses/558?parent=catalog), you need to set things up so that requests flow like this: Forwarding Rule -> Target HTTP Proxy -> URL Map -> Backend Service -> Managed Instance Group. However, it doesn't really explain the relationship between these things.
I think the purpose of the Managed Instance Group is clear, but I don't understand the relationship between the others or their purpose. Can you provide an easy definition of the other components and describe how they are different from each other?
All these entities are not different components - they are just a way to model the configuration in a more flexible and structured way.
Forwarding Rule: This is just a mapping of IP & port to target proxy. You can have multiple forwarding rules pointing to the same target proxy - this is handy when you want to add another IP address or enable IPv6 or additional ports later on without redeploying the whole loadbalancer.
Target Proxy: This is all about how to handle connections. In your case with a target HTTP proxy, it sets up HTTP handling. With a target HTTPS proxy, you can configure SSL certificates as well.
URL Map: This only makes sense in the HTTP/HTTPS case - since the HTTP/HTTPS proxy parses requests, it can make decisions based on the requested URL. With a URL map, you can send different parts of your website to different services - this is for example great for microservice architectures.
Backend Service: This encapsulates the concept of a group of servers / endpoints that can handle a class of requests. The backend service lets you fine-tune some aspects of load balancing like session affinity, how long to wait for backends, what to do if they're unhealthy and how to detect it. The set of backends can be identified by an instance group (with or without autoscaling etc.) but can also be something like a GCS bucket for serving static content.
The reason for having those all separate entities is to let you mix and match or reuse parts as makes sense. For example, if you had some sort of real-time communication platform, you might have forwarding rules for web and RTC traffic. The web traffic might go through a HTTP(S) proxy with a URL map, serving static content from a GCS bucket. The RTC traffic might go through a target TCP proxy or even a UDP network level load balancer but point at the same set of backends / the same instance group.
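The chain of entities described above can be sketched in Terraform; the names are placeholders, and the health check and managed instance group are assumed to be defined elsewhere:

```hcl
# Forwarding rule: maps an IP and port to a target proxy.
resource "google_compute_global_forwarding_rule" "web" {
  name       = "web-fr"
  port_range = "80"
  target     = google_compute_target_http_proxy.web.id
}

# Target proxy: handles the connection (HTTP here; an HTTPS proxy
# would also carry the SSL certificates) and consults the URL map.
resource "google_compute_target_http_proxy" "web" {
  name    = "web-proxy"
  url_map = google_compute_url_map.web.id
}

# URL map: decides which backend service gets each request;
# host/path rules could be added here for a microservice split.
resource "google_compute_url_map" "web" {
  name            = "web-map"
  default_service = google_compute_backend_service.web.id
}

# Backend service: the group of endpoints, plus health checking
# and load-balancing policy. Its backend is the managed instance group.
resource "google_compute_backend_service" "web" {
  name          = "web-backend"
  health_checks = [google_compute_health_check.web.id]

  backend {
    group = google_compute_instance_group_manager.web.instance_group
  }
}
```

Read bottom-up, each resource points at the next stage in the request flow from the Qwiklabs tutorial: forwarding rule -> proxy -> URL map -> backend service -> instance group.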

Running multiple web services on a single ECS cluster

If I have an ECS cluster with N distinct websites running as N services on said cluster - how do I go about setting up the load balancers?
The way I've done it currently is for each website X,
I create a new target group spanning all instances in the cluster
I create a new application load balancer
I attach the ALB to the service using the target group
It seems to work... but I want to make sure this is the correct way to do this.
Thanks!
The way you are doing it is of course one way to do it and how most people accomplish this.
Application load balancers also support two other types of routing. Host based and path based.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#host-conditions
Host based routing will allow you to route based off of the incoming host from that website. So for instance if you have website1.com and website2.com you could send them both through the same ALB and route accordingly.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
Similarly, you can do the same thing with the path. If your websites were website1.com/site1/index.html and website1.com/site2/index.html, you could put both of those on the same ALB and route accordingly.
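A sketch of the host-based variant in Terraform, assuming the shared listener and per-site target groups (`site1`, `site2`) are defined elsewhere; the domain names are the question's examples:

```hcl
# One shared ALB: each site gets its own rule matching its Host header
# and forwarding to its own target group.
resource "aws_lb_listener_rule" "site1" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.site1.arn
  }

  condition {
    host_header {
      values = ["website1.com"]
    }
  }
}

resource "aws_lb_listener_rule" "site2" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 20

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.site2.arn
  }

  condition {
    host_header {
      values = ["website2.com"]
    }
  }
}
```

This collapses N load balancers into one, which also saves on the per-ALB hourly cost.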

AWS ECS and Load Balancing

I see that ECS services can use Application Load Balancers, and the dynamic port stuff works automagically. However, an ALB has a maximum of 10 rules other than default rules. Does that mean that I need a separate ALB for every 10 services unless I wish to access via a different port (in which case the default rules would kick in)? This seems obvious, but for something touted to be the solution to load balancing in a microservices environment, this would seem incredibly limiting. Am I missing something?
As far as I know and have experienced, this is indeed true: you are limited to 10 rules per ALB by default. Take into account that this setup (ALB + ECS) is fairly new, so it is possible that Amazon will raise the limit as people request it.
Take into account as well that each rule typically forwards to a target group with multiple targets; in a microservice architecture this translates to multiple instances of the same service. So you can run only 10 different services, but with 10 instances of each service you are balancing 100 containers with a single ALB.
Alternatively (to save costs) you could create one listener with multiple rules, but the rules have to be distinguished by path pattern and all share the listener's port. Each rule can forward to a target group of your choice, e.g. you can route /service1 to container 1 and /service2 to container 2 within one listener.
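A sketch of how an ECS service plugs into such a rule, in Terraform. The cluster, task definition, target group, and container name/port are placeholders, not from the question:

```hcl
# The ECS service registers its tasks (with their dynamic host ports)
# into its own target group; a path rule on the shared listener then
# forwards /service1/* traffic to that target group.
resource "aws_ecs_service" "service1" {
  name            = "service1"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.service1.arn
  desired_count   = 2

  load_balancer {
    target_group_arn = aws_lb_target_group.service1.arn
    container_name   = "service1"
    container_port   = 8080
  }
}
```

ECS handles the dynamic-port registration; the rule limit only caps how many such target groups one ALB can route to by path or host.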
Yes, you are correct, and it is a low limit. However, if you are able to use different CNAMEs for your services, then giving each service its own ALB with a single target group won't behave differently from having one ALB with multiple target groups and rules. Dynamic ports are probably the main part of their "microservices solution" argument.