So I am trying to get my AWS setup working with DNS.
I have 2 instances (currently).
I have 4 task definitions. Three of these need to serve traffic on ports 80/443, but each on a separate subdomain.
Currently, if I stop/start a task, it can end up on either of my instances, which means the subdomain DNS can end up pointing at the wrong place.
I imagine I need to set up some kind of load balancer to point the DNS at, but I'm unsure how to get it to route through to the correct tasks.
So my questions:
Do I need a single load balancer, or one per 'task / subdomain'?
How do I handle mapping from a fixed source port to one of any number of destination ports (if I end up having multiple containers running the same task)?
Am I overcomplicating this massively, or is there a simpler way to achieve this?
Do I need a single load balancer, or one per 'task / subdomain'?
You can have a single Application Load Balancer and three target groups, one each for the Api, Site, and Web App. Then you can do rule-based routing in the load balancer listener, as described in the reference below.
Ref: http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
You can then map your domains www.domain.com and app.domain.com to the load balancer.
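For reference, here is a minimal boto3 sketch of that setup; all ARNs, zone IDs, and domain names below are placeholders you would replace with your own:

    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")

    LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."

    # One host-header rule per subdomain, each forwarding to its own target group.
    rules = [
        ("api.domain.com", "arn:aws:elasticloadbalancing:...:targetgroup/api/..."),
        ("www.domain.com", "arn:aws:elasticloadbalancing:...:targetgroup/site/..."),
        ("app.domain.com", "arn:aws:elasticloadbalancing:...:targetgroup/webapp/..."),
    ]
    for priority, (host, tg_arn) in enumerate(rules, start=10):
        elbv2.create_rule(
            ListenerArn=LISTENER_ARN,
            Priority=priority,  # lower numbers are evaluated first
            Conditions=[{"Field": "host-header", "Values": [host]}],
            Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
        )

    # Point each subdomain at the load balancer with a Route 53 alias record.
    route53.change_resource_record_sets(
        HostedZoneId="Z_MY_HOSTED_ZONE",  # your domain's hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.domain.com",
                "Type": "A",
                "AliasTarget": {
                    # CanonicalHostedZoneId and DNSName come from
                    # elbv2.describe_load_balancers()
                    "HostedZoneId": "Z_ALB_ZONE",
                    "DNSName": "my-alb-1234.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )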
How do I handle mapping from a fixed source port to one of any number of destination ports (if I end up having multiple containers running the same task)?
When you create services for your task definitions in ECS, you can configure load balancing using the target groups you created.
Ref: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html (see the section "Configuring Your Service to Use a Load Balancer")
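A sketch of that service creation with boto3; the cluster, service, and container names and the target group ARN are placeholders, and containerName/containerPort must match your task definition:

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="my-cluster",
        serviceName="api",
        taskDefinition="api-task:1",
        desiredCount=2,
        role="ecsServiceRole",  # lets ECS (de)register tasks with the target group
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api/...",
            "containerName": "api",
            "containerPort": 80,  # the container port inside the task
        }],
    )

With an ALB target group you can set the host port to 0 in the task definition; ECS then registers whatever ephemeral host port each container gets, which is what answers the fixed-source-port-to-many-destination-ports question.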
Related
I have an example.com site that currently uses a load balancer with multiple servers.
We wish to launch an isolated application and route /path to only that new isolated load balancer.
I can create a listener rule for "/path" in the load balancers, but for the life of me I cannot figure out how best to structure Route53 to allow for a setup that follows this pseudocode:
if REQUEST is "/path" or "/path/*"
use load balancer B
else
use load balancer A
An Application Load Balancer is the only thing you'll need. Within it you create two target groups, one per app (or, as I did, run the apps on different ports and assign each target group its own port, so that one set of auto-scaling servers serves all applications).
Going down your route, though: create two target groups pointing to the respective servers needed for each app, then create the path rules under the load balancer's listener tab and send each path to the correct target group.
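Note that Route53 never sees the path; DNS resolves only the hostname. So point the whole domain at a single ALB and let a listener rule do the split. A boto3 sketch, with placeholder ARNs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # /path and /path/* go to target group B; the listener's default action
    # forwards everything else to target group A.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
        Priority=1,
        Conditions=[{"Field": "path-pattern", "Values": ["/path", "/path/*"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/app-b/..."}],
    )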
If I have an ECS cluster with N distinct websites running as N services on said cluster - how do I go about setting up the load balancers?
The way I've done it currently is for each website X,
I create a new target group spanning all instances in the cluster
I create a new application load balancer
I attach the ALB to the service using the target group
It seems to work... but I want to make sure this is the correct way to do it.
Thanks!
The way you are doing it is of course one way to do it, and it is how most people accomplish this.
Application load balancers also support two other types of routing. Host based and path based.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#host-conditions
Host-based routing will allow you to route based on the incoming Host header. So, for instance, if you have website1.com and website2.com, you could send them both through the same ALB and route accordingly.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
Similarly, you can do the same thing with the path. If your websites were website1.com/site1/index.html and website1.com/site2/index.html, you could put both of those on the same ALB and route accordingly.
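If it helps, host and path conditions can even be combined in one rule, so a single ALB can replace the per-site load balancers. A boto3 sketch with placeholder ARNs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Both conditions must match (they are ANDed): requests for
    # website1.com/site2/* go to the site2 target group.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/shared-alb/...",
        Priority=5,
        Conditions=[
            {"Field": "host-header", "Values": ["website1.com"]},
            {"Field": "path-pattern", "Values": ["/site2/*"]},
        ],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/site2/..."}],
    )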
I have three EC2 instances behind a Classic Load Balancer. Ideally I should have two tasks running across those instances, so when creating the service I set the desired count of tasks to 2.
My problem arises when I try to run a new version of the task definition. I update the service to run the new task definition, so it should theoretically run two updated tasks in place of the old ones, since I have three EC2 instances running.
What actually happens is that only one updated task runs alongside the old tasks, so altogether three tasks are running even though the desired count is set to 2.
Does anyone know a solution for this?
When using a Classic Load Balancer, you can only map static ports on the EC2 instance.
Your deployment settings are:
minimumHealthyPercent: 100%
maximumPercent: 200%
The new version of the service would require two more hosts with the requested TCP port free. Since you only have three servers in the cluster, this condition cannot be satisfied. You can either add more servers to your cluster, or use an Application Load Balancer (ALB), which integrates with Docker dynamic port mapping.
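If you want to stay on the Classic Load Balancer for now, another way to unblock the deployment (assuming you can tolerate briefly running one task instead of two) is to let ECS stop an old task before starting its replacement, by lowering minimumHealthyPercent. A boto3 sketch with placeholder names:

    import boto3

    ecs = boto3.client("ecs")

    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        taskDefinition="my-task:2",  # the new revision
        deploymentConfiguration={
            "maximumPercent": 200,
            "minimumHealthyPercent": 50,  # with desiredCount=2, one old task
                                          # may be stopped to free its static port
        },
    )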
Update regarding security groups:
To manage security groups, you can reference one security group from another. For example, give your ALB a security group, say 'app-gateway-alb', which allows specific ports from outside your network; then, on the container instances, use a security group which allows ANY TCP from 'app-gateway-alb'. This is achieved by putting the security group ID in the field where you would generally put the CIDR rule.
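A boto3 sketch of that ingress rule (the group IDs below are placeholders; real ones look like sg-0123456789abcdef0):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-CONTAINER-HOSTS",      # security group on the ECS instances
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 0,                 # cover the whole dynamic port range
            "ToPort": 65535,
            # reference the ALB's security group instead of a CIDR block
            "UserIdGroupPairs": [{"GroupId": "sg-APP-GATEWAY-ALB"}],
        }],
    )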
I want to be able to use an ALB (ELBv2) to route traffic to multiple port mappings that are exposed by a task of a given service.
Example --
Service A is composed of 1 Task running with Task Definition B.
Task Definition B has one 'Container' which internally runs two daemons on two different port numbers (port 8000 and port 9000, both TCP). Thus, Task Definition B has two ports that need to be mapped to the ALB.
I'm not too worried about the ports that the ALB exposes (they don't have to be 8000 and 9000, but will help if they were).
my-lb-dns.com:8000 -> myservice:8000
my-lb-dns.com:9000 -> myservice:9000
Any ideas on how to create multiple listeners and target groups to achieve this? Nothing in the Console UI is allowing me to do this, and the API has not been very helpful either.
After speaking with AWS support, it appears that the ECS service is geared toward micro-services that are expected to expose only one port.
Having an ECS Service use an Application Load Balancer to map two or more ports isn't supported.
Of course, an additional load balancer can be added manually by configuring the appropriate target groups etc., but ECS will not automatically update that configuration when services are updated or scaled, or when the underlying container instances change.
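For completeness, a boto3 sketch of that manual wiring for the second port (all ARNs and IDs are placeholders, and you must re-run the registration yourself whenever tasks or instances change, since ECS will not):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Second target group and listener for port 9000; ECS manages only the
    # target group attached to the service (port 8000 here).
    tg = elbv2.create_target_group(
        Name="myservice-9000",
        Protocol="HTTP",
        Port=9000,
        VpcId="vpc-12345678",
        TargetType="instance",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-lb/...",
        Protocol="HTTP",
        Port=9000,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )

    # Register the container instance(s) by hand.
    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[{"Id": "i-0123456789abcdef0", "Port": 9000}],
    )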
I am currently using an Nginx server as my load balancer, but in order to use Amazon's load balancing features I want to move to Amazon ELB. The problem is that my application has different routes or locations (same domain name with different sub-URLs) handled by different EC2 instances. For example, abc.com/ is handled by one set of EC2 instances while abc.com/xyz/* is handled by another set. For now I use Nginx to specify different upstream lists and the locations they handle. I looked for this in Amazon ELB but didn't find it. Is it possible to do this in Amazon ELB, or is there any way around it?
Sorry, other than supporting sticky sessions, there is no request-based routing logic in the Classic ELB.