How to use Application Load Balancer for an ECS Service with multiple port mappings?

I want to be able to use an ALB (ELBv2) to route traffic to multiple port mappings that are exposed by a task of a given service.
Example --
Service A is composed of 1 Task running with Task Definition B.
Task Definition B has one 'Container' which internally runs two daemons on two different port numbers (port 8000 and port 9000, both TCP). Thus, Task Definition B has two ports that need to be mapped to the ALB.
I'm not too worried about the ports that the ALB exposes (they don't have to be 8000 and 9000, but it would help if they were).
my-lb-dns.com:8000 -> myservice:8000
my-lb-dns.com:9000 -> myservice:9000
Any ideas on how to create multiple listeners and target groups to achieve this? Nothing in the Console UI is allowing me to do this, and the API has not been very helpful either.

After speaking with AWS support, it appears that the ECS service is geared toward micro-services that are expected to expose only one port.
Having an ECS Service use an Application Load Balancer to map two or more ports isn't supported.
Of course, an additional Load Balancer can be manually added by configuring the appropriate target groups etc., but ECS will not automatically update the configuration when services are updated or scaled up, and also when the underlying container instances change.
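If you do take the manual route, a rough AWS CLI sketch for the second port could look like the following (the load balancer ARN, VPC ID, instance ID and target group name are placeholders, and HTTP is assumed since an ALB only forwards HTTP/HTTPS); keeping the registered targets in sync as tasks move or scale is then your responsibility, not ECS's:

# Extra target group and listener for the second port (placeholders throughout).
aws elbv2 create-target-group \
    --name myservice-9000 \
    --protocol HTTP --port 9000 \
    --vpc-id vpc-0123456789abcdef0

aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-lb/1234abcd \
    --protocol HTTP --port 9000 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/myservice-9000/abcdef123456

# Register the container instance(s) yourself, using whatever host port the
# task actually bound port 9000 to -- this is the part ECS will not manage.
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/myservice-9000/abcdef123456 \
    --targets Id=i-0123456789abcdef0,Port=9000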

Related

How to expose a port of a container in AWS Fargate

I have an ASP.NET Core 5 web app and an API (ASP.NET Core API). I want to run these two applications in one Fargate task. I am an absolute beginner in the container and AWS Fargate world, so based on my R&D I came up with the following 4-step solution:
Will create a task definition which will have two container definitions, each with its own exposed container port defined in portMappings. Let's say port 49153 for the Web container and 49155 for the API container.
Will create two target groups with target type IP and the desired ports. Let's call them target1 (port 49153) and target2 (port 49155).
Will create a service and add two load balancers to this service, like:
"loadBalancers":[
{
"targetGroupArn":"target1Arn",
"containerName":"webapp",
"containerPort":49153
},
{
"targetGroupArn":"target2Arn",
"containerName":"webapi",
"containerPort":49155
} ]
Will route incoming traffic to the specific target groups in the ALB listeners (a CLI sketch of these steps follows below).
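As a rough AWS CLI sketch of how steps 2-4 could fit together (cluster, service, task definition, subnet, security group and target group values are placeholders; target1Arn and target2Arn stand for the ARNs from step 2):

# Create the service with two load balancer / target group mappings (sketch only).
aws ecs create-service \
    --cluster my-cluster \
    --service-name two-port-service \
    --task-definition my-taskdef:1 \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa],securityGroups=[sg-bbb],assignPublicIp=ENABLED}" \
    --load-balancers \
        targetGroupArn=target1Arn,containerName=webapp,containerPort=49153 \
        targetGroupArn=target2Arn,containerName=webapi,containerPort=49155

Each ALB listener (or listener rule) then forwards to the matching target group, which is what step 4 sets up.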
I tried to implement this solution but failed, as the ports exposed in the task definition are somehow not being hit. (I am no expert, so if the above solution is not as it should be, please suggest the appropriate one.)
What I described above is my end goal, but for simplicity I tried exposing a specific port with a single container in the task definition and failed with this too. I did it in the following way:
Published my container image to ECR with the AWS Toolkit for Visual Studio 2019 (Dockerfile omitted).
Created a new task definition with the uploaded container image and 49153 as the containerPort in portMappings.
Created a target group "Target49153" with target type IP and port 49153.
Created a new service named "SRVC" with this task definition.
The security group my service is attached to has the following inbound rules (screenshot omitted).
After doing all this, my service fails with the error message:
service SRVC (port 49153) is unhealthy in target-group Target49153 due to (reason Request timed out).
When I try to access the app via the task's public IP, e.g. "http://taskpublicip:49153", it gives "ERR_CONNECTION_REFUSED". However, when I edit the security group of my service and add an inbound rule allowing all traffic from anywhere, the application works on port 80, i.e. "http://taskpublicip", but I am not able to hit port 49153 in any way. Please help me find the right way. Thanks!

How does CodeDeploy work with dynamic port mapping?

I have been trying for weeks to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD and make deployments faster, safer, etc.
As I keep diving into it, I feel like either I am not doing it the right way at all, or it is simply not suitable for our case.
What our AWS infrastructure looks like:
We have an ECS cluster that for now contains one service (on EC2), associated with one or more tasks, a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies the request internally to the API on port 5000.
We have an application load balancer associated with this service that is publicly reachable. It currently has 2 listeners, HTTP and HTTPS. Both listeners redirect to the same target group, which only has the instance(s) where our reverse proxy is. Note that the instance port to redirect to is random (check this link).
We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.
What we may have in the future:
Other tasks will be on the same instance as our API. For example, we may create another API that is in the same cluster as before, on another port, with another reverse proxy, and yet another load balancer. We may have some Batch jobs running, and other things.
What's the problem:
Well, for now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy is stuck at creating replacement tasks, and when I look at the logs of the service, there is the following error:
service xxxx-xxxx was unable to place a task because no container
instance met all of its requirements. The closest matching
container-instance yyyy is already using a port required by your task.
I don't really understand this, since port assignment is random, but maybe CodeDeploy operates before that and just sees that the assigned port is 0, i.e. the same as in the previous task definition?
I don't really know how I can resolve this, and I even doubt that CodeDeploy is usable in our case...
-- Edit 02/18/2021 --
So, I now know why it is not working. Like I said, the host port that the reverse proxy is reachable on is random. But the port that my API is listening on is not random.
But now, even if I make the API port random like the reverse proxy one, how would my reverse proxy know on which port the API will be reachable? I tried linking the containers, but it seems that doesn't work in the configuration file (I use nginx as the reverse proxy).
--
Not specifying hostPort seems to assign a "random" port on the host.
But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container, which is at API:32798. I think I'm missing something.
You're probably getting this port conflict because you have two tasks on the same host that both want to map port 80 of the host into their containers.
I've tried to visualize the conflict:
The violet boxes share a port namespace, and so do the green and orange boxes. This means that within each box you can use each port from 1 to ~65k only once. When you explicitly require a host port, ECS would have to map the violet port 80 to two container ports, which doesn't work.
You don't want to explicitly map these container ports to a host port; let ECS worry about that.
Just specify the container port in the load balancer integration in the service definition and ECS will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these on random host ports and automatically register those ports with the load balancer.
Service Definition docs (search for containerPort)
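As a minimal sketch of the two fragments involved, assuming an EC2 launch type with dynamic host ports (the names and the target group ARN are placeholders): in the task definition only the container port is fixed (hostPort 0, or omitting hostPort, means ECS picks a free host port), and the service's load balancer integration refers to that same container port.

Task definition fragment:
"containerDefinitions": [
  {
    "name": "reverse-proxy",
    "portMappings": [
      { "containerPort": 80, "hostPort": 0 }
    ]
  }
]

Service definition fragment:
"loadBalancers": [
  {
    "targetGroupArn": "reverseProxyTargetGroupArn",
    "containerName": "reverse-proxy",
    "containerPort": 80
  }
]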

Eureka with AWS ECS

We are using Eureka with the AWS ECS service, which can scale Docker containers.
In ECS if you leave out the host port, or specify it as being '0', in your task definition, then the port will be chosen automatically and reported back to the service. After the task is running, describing it should show what port(s) it bound to.
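For example (the cluster name and task ARN are placeholders), describing a running task shows the dynamically assigned host port in its network bindings:

aws ecs describe-tasks \
    --cluster my-cluster \
    --tasks arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef0 \
    --query 'tasks[0].containers[0].networkBindings'

# Illustrative output shape:
# [ { "bindIP": "0.0.0.0", "containerPort": 80, "hostPort": 32768, "protocol": "tcp" } ]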
How can Eureka resolve which port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but it cannot identify the exposed port.
Hi @Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing.
The main difference is that traffic flows over the (application) load balancer, whereas the service registry just gives you a healthy endpoint that your client can address directly (so the network traffic does not flow through the service registry).
Cheap is a very relative term; maybe it's cheap for some, maybe it's unnecessary overhead for others.
The issue was resolved
https://github.com/Netflix/eureka/issues/937
Currently the ECS agent knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same job: it works as service registry and discovery. You don't need to run an additional service (Eureka), and an ALB is cheap.
There is another solution.
You can create an application load balancer and a target group into which the Docker containers are launched.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can use Route 53 for DNS routing.
It looks like this (diagrams omitted): "Service Discovery with Loadbalancer-Hostname" and "Request Flow".
If you have two containers of the same task on different hosts, both will communicate the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.
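As a minimal sketch, assuming the services are Spring Cloud Netflix Eureka clients (the load balancer DNS name is a placeholder), each container definition could advertise the load balancer hostname through environment variables, which Spring Boot's relaxed binding maps to eureka.instance.hostname and eureka.instance.non-secure-port:

"containerDefinitions": [
  {
    "name": "my-service",
    "environment": [
      { "name": "EUREKA_INSTANCE_HOSTNAME", "value": "my-alb-123456789.eu-west-1.elb.amazonaws.com" },
      { "name": "EUREKA_INSTANCE_NONSECUREPORT", "value": "80" }
    ]
  }
]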

AWS ECS handling DNS subdomains across multiple instances

So I am trying to get my AWS setup working with DNS.
I have 2 instances (currently).
I have 4 task definitions. 3 of these need to run on ports 80/443, but all on separate subdomains.
Currently, if I stop/start a task, it can end up on either of my instances. This causes issues, with the subdomain DNS potentially pointing to the wrong place.
I imagine I need to setup some kind of load balancer to point the DNS at, but unsure how to get that to route through to the correct tasks.
So my questions:
Do I need a single load balancer, or one per 'task / subdomain'?
How do I handle the ports to go from a set source port, to one of any number of destination ports (if I end up having multiple containers running the same task)
Am I over complicating this massively, or is there a simpler way to achieve this?
Do I need a single load balancer, or one per 'task / subdomain'?
You can have a single application load balancer and three target groups, for the API, the site and the web app. Then you can do rule-based routing in the load balancer listener (screenshot omitted).
Ref: http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
You can then map your domains www.domain.com and app.domain.com to the load balancer.
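A rough sketch of such a host-based listener rule with the AWS CLI (the listener ARN, target group ARN and priority are placeholders):

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-lb/1234abcd/5678efgh \
    --priority 10 \
    --conditions Field=host-header,Values=app.domain.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/webapp-tg/abcdef123456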
How do I handle the ports to go from a set source port, to one of any number of destination ports (if I end up having multiple containers running the same task)
When you create services for your task definitions in ECS you can configure load balancing using the target groups you created.
Ref: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html (Check on "Configuring Your Service to Use a Load Balancer")

AWS ECS and Load Balancing

I see that ECS services can use Application Load Balancers, and the dynamic port stuff works automagically. However, an ALB has a maximum of 10 rules other than default rules. Does that mean I need a separate ALB for every 10 services, unless I wish to access them via a different port (in which case the default rules would kick in)? This seems obvious, but for something touted as the solution to load balancing in a microservices environment, this would seem incredibly limiting. Am I missing something?
As far as I know and have experienced, this is indeed true; you are limited to 10 listeners per ALB. Take into account that this setup (ALB + ECS) is fairly new, so it is possible that Amazon will adjust the limits as people request it.
Take into account as well that a listener typically has multiple targets; in a microservice architecture this translates to multiple instances of the same service. So you can run 10 different services, but with 10 instances of each service you are balancing 100 containers with a single ALB.
Alternatively (to save costs) you could create one listener with multiple rules, but they have to be distinguished by path pattern and have to listen on (not route to) the same port. Rules can forward to a target group of your choice, e.g. you can route /service1 to container 1 and /service2 to container 2 within one listener (see the sketch below).
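For illustration, one such path-based rule could look roughly like this (the listener ARN, target group ARN and priority are placeholders); a second rule for /service2* would point at the other service's target group:

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-lb/1234abcd/5678efgh \
    --priority 1 \
    --conditions Field=path-pattern,Values='/service1*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/service1-tg/abcdef123456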
Yes, you are correct, and it is a low limit. However, if you are able to use different CNAMEs for your services, then having them in an ALB with a single target group for each service won't behave differently from having one ALB and multiple target groups, each with rules. Dynamic ports are probably the main part of their "microservices solution" argument.