How does CodeDeploy work with dynamic port mapping?

For weeks I have been trying to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD and make deployments faster, safer, etc.
The more I dive into it, the more I feel that either I am not doing it the right way at all, or it is simply not suitable in our case.
What our AWS infra looks like:
We have an ECS cluster that currently contains one service (on EC2), associated with one or more tasks: a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies the request internally to the API on port 5000.
We have an Application Load Balancer associated with this service, which is publicly reachable. It currently has two listeners, HTTP and HTTPS. Both listeners forward to the same target group, which only contains the instance(s) where our reverse proxy runs. Note that the instance port the traffic is forwarded to is random (check this link).
We have an Auto Scaling group that scales the number of instances depending on the number of calls to the Application Load Balancer.
What we may have in the future:
Other tasks will run on the same instances as our API. For example, we may create another API in the same cluster, on another port, with another reverse proxy and yet another load balancer. We may also have some batch jobs running, and other things.
What the problem is:
Well, for now, deploying "manually" (that is, telling the ECS service to make a new deployment) doesn't work. CodeDeploy gets stuck creating the replacement tasks, and when I look at the service's event log, I see the following error:
service xxxx-xxxx was unable to place a task because no container
instance met all of its requirements. The closest matching
container-instance yyyy is already using a port required by your task.
I don't really understand this, since port assignment is random, but maybe CodeDeploy operates before that assignment and just sees that the assigned port is 0, the same as in the previous task definition?
I don't really know how to resolve this, and I even doubt that CodeDeploy is usable in our case...
-- Edit 02/18/2021 --
So, I now know why it is not working. As I said, the host port that my reverse proxy is reachable on is random. But the port that my API listens on is still not random.
But even if I make the API port random like the reverse proxy one, how would my reverse proxy know which port the API is reachable on? I tried linking the containers, but that doesn't seem to work in the configuration file (I use NGINX as the reverse proxy).
--
Not specifying hostPort seems to assign a "random" port on the host.
But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container at something like API:32798. I think I'm missing something.

You're probably getting this port conflict because you have two tasks on the same host that both want to map host port 80 into their containers.
I've tried to visualize the conflict:
The violet boxes (the host) share a port namespace, and so do the green and orange boxes (the two containers). This means that within each box you can use each port from 1 to ~65k only once. When you explicitly require a host port, ECS will try to map the violet port 80 to two container ports, which doesn't work.
You don't want to explicitly map these container ports to host ports; let ECS worry about that.
Just specify the container port in the load balancer integration in the service definition, and ECS will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these on random host ports and automatically register those ports with the load balancer.
Service Definition docs (search for containerPort)
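To make that concrete, here is a minimal, hypothetical sketch of the two relevant fragments (container name, image, and target group ARN are placeholders, not taken from the question). In the task definition, set hostPort to 0 (or leave it out) so ECS picks an ephemeral host port:
"portMappings": [
    { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
]
And in the service definition, reference only the container port; ECS registers whichever host port it picked with the target group:
"loadBalancers": [
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example-tg/0123456789abcdef",
        "containerName": "reverse-proxy",
        "containerPort": 80
    }
]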

Related

How to expose a port of a container in AWS Fargate

I have an ASP.NET Core 5 web app and an API (ASP.NET Core API). I want to run these two applications in one Fargate task. I am an absolute beginner in the container and AWS Fargate world, so based on my R&D I came up with the following 4-step solution:
Will create a task definition with two container definitions, each exposing its own container port by defining portMappings. Let's say port 49153 for the web container and 49155 for the API container (see the sketch after this list).
Will create two target groups with target type IP and the desired ports. Let's assume target1 (port 49153) and target2 (port 49155).
Will create a service and add two load balancer mappings to this service, like:
"loadBalancers":[
{
"targetGroupArn":"target1Arn",
"containerName":"webapp",
"containerPort":49153
},
{
"targetGroupArn":"target2Arn",
"containerName":"webapi",
"containerPort":49155
} ]
Will route incoming traffic to the specific target group in the ALB listeners.
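For reference, a minimal sketch of the portMappings described in step 1 (container names are hypothetical). Note that in a Fargate (awsvpc) task definition only the container port is given, and it has to be the port the application inside the container actually listens on:
"containerDefinitions": [
    {
        "name": "webapp",
        "portMappings": [ { "containerPort": 49153, "protocol": "tcp" } ]
    },
    {
        "name": "webapi",
        "portMappings": [ { "containerPort": 49155, "protocol": "tcp" } ]
    }
]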
I tried to implement this solution but failed, as the ports exposed in the task definition are somehow not being hit. (I am no expert, so if the above solution is not how it should be done, please suggest the appropriate one.)
What I defined in the points above is my end goal, but for simplicity I tried exposing a specific port with a single container in the task definition, and failed with this too. I did it the following way:
Published my container image to ECR with the AWS Toolkit for Visual Studio 2019. The Dockerfile looks like:
Created a new task definition with the uploaded container image and 49153 as the containerPort in portMappings.
Created a target group "Target49153" with target type IP and port 49153.
Created a new service named "SRVC" with this task definition.
The security group my service is attached to has the following inbound rules.
After doing this, my service fails with the error message:
service SRVC (port 49153) is unhealthy in target-group Target49153 due to (reason Request timed out).
When I try to access the app with the task's public IP, like "http://taskpublicip:49153", I get "ERR_CONNECTION_REFUSED". However, when I edit the security group of my service and add an inbound rule allowing all traffic from anywhere, the application works on port 80, like "http://taskpublicip". But I am not able to hit port 49153 in any way. Please help me find the right way to do this. Thanks!

How can 2 services talk to each other on AWS Fargate?

I set up a Fargate cluster on AWS. My cluster has the following services:
server-A (port 3000)
server-B (port 4000)
Each service is in the same VPC and has the same security group (any ports, any source, any destination). The VPC is isolated from the internet.
Now, I want server-A to send an HTTP query to server-B. I would assume that, as in Docker Swarm, there is a private DNS that maps the service name to its private IP, and that it would be as simple as sending the query to http://server-B:4000. However, server-A gets a timeout, which means it can't reach server-B.
I've read in the documentation that I can put the 2 containers in the same service, each container listening on a different port, so that, thanks to the loopback interface, I could query http://127.0.0.1:4000 from server-A and server-B would respond, and vice versa.
However, I want to be able to scale server-A and server-B independently, so I think it makes sense to keep them independent of each other by having 2 services.
I've read that, for 2 tasks to talk to each other, I need to set up a load balancer. Coming from the world of Docker Swarm, it was so easy to query services by their service name, and behind the scenes the request was forwarded to one of the containers in that service. But it doesn't seem to work like that on AWS Fargate.
Questions:
how can server-A talk to server-B?
As services sometimes redeploy, their private IPs change, so it makes no sense to query by IP; querying by hostname seems the most natural way.
Do I need to setup any kind of internal DNS?
Thanks for your help, I am really lost on doing this simple setup.
After searching, I found out that this was because I had not enabled "Service Discovery" during service creation, so no private DNS entry was created. Here is some additional documentation which explains the exact steps:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
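As a rough illustration of what that setup ends up looking like (the ARN and namespace below are made-up placeholders), the ECS service is pointed at an AWS Cloud Map service via serviceRegistries, after which server-B becomes resolvable under the namespace, e.g. server-B.local:
"serviceName": "server-B",
"serviceRegistries": [
    {
        "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef"
    }
]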

Eureka with AWS ECS

We are using Eureka with the AWS ECS service, which can scale Docker containers.
In ECS, if you leave out the host port or specify it as '0' in your task definition, the port will be chosen automatically and reported back to the service. After the task is running, describing it should show what port(s) it is bound to.
How can Eureka resolve which port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but cannot identify the exposed port.
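To illustrate the "reported back" part: once a task is running, describing it (for example with the ECS DescribeTasks API) returns the dynamically assigned host port in the container's network bindings. The values below are made up:
"networkBindings": [
    {
        "bindIP": "0.0.0.0",
        "containerPort": 8080,
        "hostPort": 32768,
        "protocol": "tcp"
    }
]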
The issue was resolved (see https://github.com/Netflix/eureka/issues/937); the ECS agent now knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same thing: it works as service registry and discovery. You don't need to run an additional service (Eureka), and an ALB is cheap.
Hi @Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing. The main difference is that traffic flows over the (application) load balancer, whereas the service registry just gives you a healthy endpoint that your client can address directly (so the network traffic does not flow over the service registry).
Cheap is a very relative term; maybe it's cheap for some, maybe it's unnecessary overhead for others.
There is another solution.
You can create an Application Load Balancer and a target group into which the Docker containers are registered.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can use Route 53 for DNS routing.
It looks like this:
[Diagrams: Service Discovery with Loadbalancer-Hostname / Request Flow]
If you have two containers of the same task on different hosts, both will report the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.

How to deploy continuously using just One EC2 instance with ECS

I want to deploy my Node.js web app continuously using just one EC2 instance with ECS. I cannot create multiple instances for this app.
My current continuous integration process:
Travis builds the code from GitHub, builds, tags, and pushes the Docker image, and deploys it to ECS via the ECS Deploy shell script.
Every time a deployment happens, the following error occurs, because port 80 is always in use by my web app:
The closest matching container-instance ffa4ec4ccae9
is already using a port required by your task
Is it actually possible to use ECS with one instance? (documentation not clear)
How to get rid of this port issue on ECS? (stop the running container)
What is the way to get this done without using a Load Balancer?
Is there anything I missed, or anything I am doing that goes against best practices?
The main issue is the port conflict, which occurs when deploying a second instance of the task on the same node in the cluster. Apart from that (e.g. when not using a load balancer, or when not binding any host ports at all), nothing should stop you from running multiple copies of the task.
To solve this issue, Amazon introduced a dynamic ports feature in a recent update:
Dynamic ports makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
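In practice, the only change in the task definition is the host port; a minimal, hypothetical fragment looks like this (ECS then picks a port from the ephemeral range and registers it with the ALB target group):
"portMappings": [
    { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
]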
Here's a way to do it using the green/blue deployment pattern:
Host your containers on ports 8080 and 8081 (or whatever ports you want). Let's call 8080 green and 8081 blue. (You may have to switch the networking mode from bridge to host to get this to work on a single instance.)
Use Elastic Load Balancing to redirect the traffic from 80/443 to green or blue.
When you deploy, use a script to swap the active listener on the ELB to the other color/container (a rough sketch of this step follows below).
This also allows you to roll back to a 'last known good' state.
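A rough sketch of the swap step, assuming an ALB (ELBv2) and made-up ARNs: the deploy script repoints the listener's forward action at the target group of the color that was just deployed, e.g. by passing default actions like the following to the elbv2 ModifyListener API:
"DefaultActions": [
    {
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blue/0123456789abcdef"
    }
]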
See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html for more information.

How to use Application Load Balancer for an ECS Service with multiple port mappings?

I want to be able to use an ALB (ELBv2) to route traffic to multiple port mappings that are exposed by a task of a given service.
Example --
Service A is composed of 1 Task running with Task Definition B.
Task Definition B has one 'Container' which internally runs two daemons on two different port numbers (port 8000 and port 9000, both TCP). Thus, Task Definition B has two ports that need to be mapped to the ALB.
I'm not too worried about the ports that the ALB exposes (they don't have to be 8000 and 9000, but will help if they were).
my-lb-dns.com:8000 -> myservice:8000
my-lb-dns.com:9000 -> myservice:9000
Any ideas on how to create multiple listeners and target groups to achieve this? Nothing in the Console UI is allowing me to do this, and the API has not been very helpful either.
After speaking with AWS support, it appears that ECS services are geared toward microservices that are expected to expose only one port.
Having an ECS Service use an Application Load Balancer to map two or more ports isn't supported.
Of course, an additional load balancer can be added manually by configuring the appropriate target groups etc., but ECS will not automatically update that configuration when services are updated or scaled, or when the underlying container instances change.