How to expose a port of a container in AWS Fargate

I have an ASP.NET Core 5 web app and an API (ASP.NET Core API). I want to run these two applications in one Fargate task. I am an absolute beginner in the container and AWS Fargate world, so based on my R&D I came up with the following 4-step solution:
Create a task definition with two container definitions, each exposing its own container port by defining portMappings; let's say port 49153 for the web container and 49155 for the API container (see the sketch after these steps).
Create two target groups with target type IP and the desired ports; let's assume target1 (port 49153) and target2 (port 49155).
Create a service and add two load balancer mappings to it, like:
"loadBalancers":[
{
"targetGroupArn":"target1Arn",
"containerName":"webapp",
"containerPort":49153
},
{
"targetGroupArn":"target2Arn",
"containerName":"webapi",
"containerPort":49155
} ]
Route incoming traffic to the specific target groups in the ALB listeners.
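A minimal sketch of what I mean by the two container definitions and their portMappings (image URIs and names are placeholders):

"containerDefinitions": [
  {
    "name": "webapp",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/webapp:latest",
    "portMappings": [ { "containerPort": 49153, "protocol": "tcp" } ]
  },
  {
    "name": "webapi",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/webapi:latest",
    "portMappings": [ { "containerPort": 49155, "protocol": "tcp" } ]
  }
]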
I tried to implement this solution but failed: the ports exposed in the task definition are somehow not being hit. (I am no expert, so if the above solution is not as it should be, please suggest an appropriate one.)
The points above describe my end goal, but for simplicity I first tried exposing a specific port with a single container in the task definition, and failed with that too. I did it in the following way:
Published my container image to ECR with the AWS Toolkit for Visual Studio 2019 (Dockerfile screenshot omitted).
Created a new task definition with the uploaded container image and 49153 as the containerPort in portMappings.
Created a target group "Target49153" with target type IP and port 49153.
Created a new service named "SRVC" from this task definition.
The security group my service is attached to has the following inbound rules (screenshot omitted).
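For reference, an inbound rule allowing the container port from the ALB's security group would look roughly like this in the EC2 IpPermissions format (the group ID is a placeholder):

{
  "IpProtocol": "tcp",
  "FromPort": 49153,
  "ToPort": 49153,
  "UserIdGroupPairs": [ { "GroupId": "sg-0123456789abcdef0" } ]
}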
After doing this, my service fails with the error message:
service SRVC (port 49153) is unhealthy in target-group Target49153 due to (reason Request timed out).
When I try to access the app via the task's public IP, like "http://taskpublicip:49153", it gives "ERR_CONNECTION_REFUSED". However, when I edit my service's security group and add an inbound rule allowing all traffic from anywhere, the application works on port 80, like "http://taskpublicip", but I am not able to hit port 49153 in any way. Please help me find the right way. Thanks!
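One possible explanation, given that the app answers on port 80: ASP.NET Core containers listen on port 80 by default, and the containerPort in portMappings does not redirect traffic to a different port, so the app may simply not be listening on 49153 at all. A sketch of one way to make it listen there, via the standard ASPNETCORE_URLS environment variable in the container definition:

"environment": [
  { "name": "ASPNETCORE_URLS", "value": "http://+:49153" }
]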

Related

Health Check keeps failing for ECS container

I am currently trying to deploy 2 ECS services on a single EC2 instance for a test environment.
Here is what I have done so far:
Successfully created 2 Security Groups, one for the Load Balancer and one for the EC2 instance.
My EC2 Security Group
My ALB Security Group
Successfully created 2 different Task Definitions for my 2 applications, both Spring Boot applications. The first application runs on port 8080, and the Container Port in its Task Definition is also 8080. The second application runs on port 8081, and the Container Port in its Task Definition is also 8081.
Successfully created an ECS cluster with an Auto-Scaling Group as Capacity Provider. The cluster also recognizes the Container Instance created from the Auto-Scaling Group (I am using t2.micro since it is in the free tier). Attached the created Security Group to the EC2 instance.
My EC2 Security Group
Successfully created an ALB with 2 forward listeners, 8080 and 8081, configured with 2 different Target Groups, one for each service. Attached the created Security Group to the ALB.
Here is how the ECS behaves with my services:
I attempted to create 2 new services. The first service is mapped to port 8080 on the ALB, the second one to port 8081. Each of them has a different Target Group, but the Health Check configurations are the same.
Health Check Configuration for Service 1
Health Check Configuration for Service 2
The first service deployed pretty smoothly; the health check returned success on the first try.
However, for the second service I used the exact same configuration as the first one, just a different listener port on the ALB and the application container running on a different port number (which I believe should not be a problem). The service attempted 10 times before failing the deployment, and I kept getting this repeated error message: service <service_name> instance <instance_id> port <port_number> is unhealthy in target-group <target_group_name> due to (reason Health checks failed).
This did not happen with my first service with the same configuration. The weird thing is that when I send a request to the ALB domain name on port 8081, the application on the second service seems to work fine without any error. It is just that the failing health check keeps throwing my service off.
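For context, the relevant Target Group health check fields look roughly like this in the elbv2 API (a sketch, not the actual settings):

"HealthCheckProtocol": "HTTP",
"HealthCheckPort": "traffic-port",
"HealthCheckPath": "/",
"HealthCheckIntervalSeconds": 30,
"Matcher": { "HttpCode": "200" }

Since the health check hits the target on its traffic port and must get a matching response code, an app that returns a 404 or a redirect on the health check path will be marked unhealthy even though real requests through the listener work.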
I went over a bunch of posts and nothing really helped with the current situation. Also, it is kind of dumb that I cannot dig up any further details beyond the info in the image below (screenshot omitted).
Does anyone have any suggestions to resolve this problem? I would really appreciate it.

gRPC in AWS Elastic Beanstalk load balancer / network setup

I have been at this for a couple of days and just can't figure it out.
I have tried this with gRPC in Node.js and Java on Elastic Beanstalk. On a normal VPS it's quite simple: just create a grpc_pass proxy and it's set. I would like to move my microservices over to AWS Elastic Beanstalk but can't get gRPC to connect.
What I did:
Created a new Java environment on Elastic Beanstalk and deployed my service. The gRPC server is on port 9086.
I have looked around the net and the closest thing I could find to a tutorial is "New – Application Load Balancer Support for End-to-End HTTP/2 and gRPC", but it does not cover how to set up the load balancer for gRPC for an instance.
Using the guide, I made a few changes to the Target Group, like so:
Created a Target Group using the instance's configuration.
I have tried building the Target Group with both HTTP and HTTPS for port 9086.
After creating the Target Group, I registered the instance with it.
After that I went to the load balancer and created a listener on port 443 and forwarded it to the Target Group. Port 443 is also open in the security group.
The secure listener settings point to the AWS certificate allocated to the URL.
I have tried both HTTP and HTTPS on the target group on port 9086, but all my gRPC client calls fail with either status 13 (INTERNAL) or 14 (UNAVAILABLE), meaning the request is not going through. I have confirmed in the logs that the gRPC server is up and running.
Does anybody know where I am going wrong here? I feel like it's something simple that I am missing; I just can't find any tutorials or documentation on the proper way to set this up. Is what I am trying to do even possible on AWS Elastic Beanstalk?
From what I see in your screenshots, your ALB targets were added but did not pass the health check, meaning they are not allowed to accept any traffic yet.
You can find a good sample of a gRPC application with an implemented health check in the attached file in this article:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html#attachments-abf727c1-ff8b-43a7-923f-bce825d1b459
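For reference, a gRPC-capable Target Group needs its protocol version set to GRPC. A rough sketch of the relevant elbv2 CreateTargetGroup fields (the name and VPC ID are placeholders; /AWS.ALB/healthcheck is the default health check path for gRPC target groups, and grpc-code 12 matches the UNIMPLEMENTED status a server returns for that unknown method):

{
  "Name": "grpc-targets",
  "Protocol": "HTTPS",
  "ProtocolVersion": "GRPC",
  "Port": 9086,
  "VpcId": "vpc-0123456789abcdef0",
  "HealthCheckEnabled": true,
  "HealthCheckPath": "/AWS.ALB/healthcheck",
  "Matcher": { "GrpcCode": "12" }
}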

How does CodeDeploy work with dynamic port mapping?

I have been trying for weeks to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD and make deployments faster, safer, etc.
As I keep diving into it, I feel like either I am not doing it the right way at all, or it is not suitable for our case.
What our AWS infra is:
We have an ECS cluster that for now contains one service (on EC2), associated with one or more tasks, a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies internally to the API on port 5000.
We have an application load balancer associated with this service that is publicly reachable. It currently has 2 listeners, HTTP and HTTPS. Both listeners forward to the same target group, which only contains the instance(s) where our reverse proxy is. Note that the instance port forwarded to is random (check this link).
We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.
What we may have in the future:
Other tasks will be on the same instance as our API. For example, we may create another API in the same cluster as before, on another port, with another reverse proxy, and yet another load balancer. We may have some batch jobs running, and other things.
What's the problem:
Well, for now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy is stuck at creating replacement tasks, and when I look at the logs of the service, there is the following error:
service xxxx-xxxx was unable to place a task because no container
instance met all of its requirements. The closest matching
container-instance yyyy is already using a port required by your task.
This I don't really understand, since port assignment is random; but maybe CodeDeploy operates before that, sees that the assigned port is 0, and considers it the same as in the previous task definition?
I don't really know how I can resolve this, and I even doubt that CodeDeploy is usable in our case...
-- Edit 02/18/2021 --
So, I now know why it is not working. Like I said, the host port for the reverse proxy is random, but the port my API is listening on is not random.
But now, even if I make the API port random like the reverse proxy one, how would my reverse proxy know on which port the API is reachable? I tried to link containers, but it seems that it doesn't work in the configuration file (I use NGINX as the reverse proxy).
--
Not specifying hostPort seems to assign a "random" port on the host.
But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container, which is at API:32798. I think I'm missing something.
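A hedged sketch of what linking might look like, assuming bridge network mode (the names api and web are placeholders): linked containers in the same task talk to each other over the container port, not the random host port, so NGINX would never need to know 32798.

"containerDefinitions": [
  {
    "name": "api",
    "portMappings": [ { "containerPort": 5000, "hostPort": 0 } ]
  },
  {
    "name": "web",
    "links": ["api"],
    "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
  }
]

With the link in place, the NGINX config could proxy_pass to http://api:5000 directly, regardless of which host port ECS assigned.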
You're probably getting this port conflict because you have two tasks on the same host that want to map port 80 of the host into their containers.
I've tried to visualize the conflict (diagram omitted):
The violet (host) boxes share a port namespace, and so do the green and orange (container) boxes. This means in each box you can use the ports from 1 to ~65k only once. When you explicitly require a host port, it will try to map the violet port 80 to two container ports, which doesn't work.
You don't want to explicitly map these container ports to the host port; let ECS worry about that.
Just specify the container port in the load balancer integration in the service definition and it will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these as random host ports and automatically register those ports with the load balancer.
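A minimal sketch of that setup (ARNs and names are placeholders). In the task definition's container definition:

"portMappings": [
  { "containerPort": 80, "hostPort": 0 }
]

and in the service definition:

"loadBalancers": [
  {
    "targetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/abc123",
    "containerName": "reverse-proxy",
    "containerPort": 80
  }
]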
Service Definition docs (search for containerPort)

How to deploy continuously using just One EC2 instance with ECS

I want to deploy my Node.js web app continuously using just one EC2 instance with ECS. I cannot create multiple instances for this app.
My current continuous integration process:
Travis builds the code from GitHub, builds, tags, and pushes the Docker image, and deploys to ECS via an ECS deploy shell script.
Every time the deployment happens, the following error occurs, because port 80 is always in use by my web app:
The closest matching container-instance ffa4ec4ccae9
is already using a port required by your task
Is it actually possible to use ECS with one instance? (The documentation is not clear.)
How do I get rid of this port issue on ECS? (Stop the running container?)
What is the way to get this done without using a load balancer?
Is there anything I have missed, or am I doing something against best practices?
The main issue is the port conflict, which occurs when deploying a second instance of the task on the same node in the cluster. Apart from that, nothing should stop you from running multiple tasks on one container instance (e.g. when not using a load balancer, or not binding to any host ports at all).
To solve this issue, Amazon introduced a dynamic ports feature in a recent update:
Dynamic ports makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
Here's a way to do it using the blue/green deployment pattern:
Host your containers on ports 8080 and 8081 (or whatever ports you want). Let's call 8080 green and 8081 blue. (You may have to switch the networking mode from bridge to host to get this to work on a single instance.)
Use Elastic Load Balancing to redirect the traffic from 80/443 to green or blue.
When you deploy, use a script to swap the active listener on the ELB to the other color/container (see the sketch below).
This also allows you to roll back to a 'last known good' state.
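A sketch of the swap itself: the deploy script points the listener's default action at the other color's target group, roughly this input to elbv2 ModifyListener (ARNs are placeholders):

{
  "ListenerArn": "arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/abc/def",
  "DefaultActions": [
    { "Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/blue-8081/xyz" }
  ]
}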
See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html for more information.

How to use Application Load Balancer for an ECS Service with multiple port mappings?

I want to be able to use an ALB (ELBv2) to route traffic to multiple port mappings that are exposed by a task of a given service.
Example --
Service A is composed of 1 Task running with Task Definition B.
Task Definition B has one 'Container' which internally runs two daemons on two different port numbers (port 8000 and port 9000, both TCP). Thus, Task Definition B has two ports that need to be mapped to the ALB.
I'm not too worried about the ports that the ALB exposes (they don't have to be 8000 and 9000, but it would help if they were).
my-lb-dns.com:8000 -> myservice:8000
my-lb-dns.com:9000 -> myservice:9000
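In elbv2 terms, that would be two listeners, each forwarding to its own target group; roughly like this, with an equivalent listener for port 9000 (ARNs are placeholders):

{
  "LoadBalancerArn": "arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-lb/abc",
  "Port": 8000,
  "Protocol": "HTTP",
  "DefaultActions": [ { "Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/tg-8000/xyz" } ]
}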
Any ideas on how to create multiple listeners and target groups to achieve this? Nothing in the Console UI is allowing me to do this, and the API has not been very helpful either.
After speaking with AWS support, it appears that the ECS service is geared toward microservices that are expected to expose only one port.
Having an ECS Service use an Application Load Balancer to map two or more ports isn't supported.
Of course, an additional load balancer can be manually added by configuring the appropriate target groups etc., but ECS will not automatically update the configuration when services are updated or scaled up, or when the underlying container instances change.