How to make two containers connectable in AWS ECS Fargate?

I have two containers added to a task definition.
Node container:
name: nodeAPI
exposed port: 5001
Mongo connection string in an environment variable: mongodb://mongo (the name of the Mongo container)
Mongo container:
name: mongo
exposed port: 27017
The Node container is not able to connect to Mongo when I run this task. I am using Fargate with the awsvpc network mode.
How do I fix this?
And how do I make it work when running them from separate task definitions?

Since every task in Fargate gets its own ENI, the correct way for containers in the same task definition to communicate is over the local loopback interface, 127.0.0.1 (localhost).
For example:
The first container can reach the second container at 127.0.0.1:<port of second container>, and the second container can reach the first at 127.0.0.1:<port of first container>.
This is very well explained in AWS Blog: https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/
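As a sketch of what that looks like in practice (image names are placeholders, and MONGO_URL is an assumed variable name), the container definitions in the task would point the Node app at Mongo via localhost rather than a container name:

```json
{
  "family": "node-mongo",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "containerDefinitions": [
    {
      "name": "nodeAPI",
      "image": "my-registry/node-api:latest",
      "portMappings": [{ "containerPort": 5001 }],
      "environment": [
        { "name": "MONGO_URL", "value": "mongodb://localhost:27017" }
      ]
    },
    {
      "name": "mongo",
      "image": "mongo",
      "portMappings": [{ "containerPort": 27017 }]
    }
  ]
}
```

The key change from the question is the connection string: localhost:27017 instead of mongodb://mongo.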

There's also a security-group configuration when you run the task; check whether the required ports are open (Fargate security group).
Fargate acts like an EC2 instance; the only difference is that you supply a Docker image, so you still have to do the normal EC2-style network configuration.

If both containers are defined within the same task definition, then they are able to communicate using localhost.
In your example, your Node.js app will talk to Mongo at localhost:27017.

Related

ECS on EC2 bidirectional communication between two containers in the same task

I'm trying to configure ECS task on EC2 instance. Network mode in task definition is Bridge
My task has two containers inside, that should communicate with each other. e.g. ContainerA makes requests to ContainerB and ContainerB makes requests to ContainerA.
Everything works well when I use docker-compose; containers can communicate by their names. But when I deploy on ECS, those containers don't see each other. I can partly fix this problem using links in the task definition; however, links work only in one direction. If I set links for both containers, I receive this error message when creating the task definition:
Unable to create a new revision of Task Definition web-app:12
Container links should not have a cycle
It would be great to hear any thoughts on what I missed and whether this is actually possible. Honestly, I thought that containers inside one task would communicate automatically by container name, especially when they are on the same bridge network.
I know that there is a Service Discovery feature that allows communication between two services by name, but still, I would prefer to have one service and one task with two containers inside.
Thank you for any help.
If both containers are defined in the same task definition they are available via localhost:
For instance, if ContainerA is listening on port 8081 and ContainerB is listening on port 8082, they can simply reach each other by:
localhost:8081
localhost:8082
Side note: same concept as in Kubernetes pod with two containers - they are accessible via localhost
EDIT: that's relevant for the awsvpc network mode, as you can see in the documentation:
containers that belong to the same task can communicate over the localhost interface
docker-compose uses not a bridge but a user-defined network by default. That's why addressing by service name works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
https://docs.docker.com/compose/networking/
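For illustration (service names and images are hypothetical), this is the Compose behavior being quoted: the api service can reach the db service simply as db:27017, with no extra configuration:

```yaml
# docker-compose.yml sketch: Compose creates a default user-defined network,
# so each service is resolvable by the other at its service name.
version: "3"
services:
  api:
    image: my-node-api
    environment:
      - MONGO_URL=mongodb://db:27017   # "db" resolves via Compose's DNS
  db:
    image: mongo
```

It's exactly this name-based resolution that ECS's bridge mode does not provide.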
ECS on EC2 uses links by default, which are deprecated by Docker and cannot have cycles, as you've found out.
Unfortunately user-defined networks are not supported by AWS despite a long-standing issue: https://github.com/aws/containers-roadmap/issues/184
That's what I've figured out so far. I guess the only option left is to use the awsvpc network mode which allows containers to communicate via localhost (which I find rather awkward):
This means that all containers in a task are addressable by the IP addresses of the elastic network interface, and they can communicate with each other over the localhost interface.
https://aws.amazon.com/blogs/compute/under-the-hood-task-networking-for-amazon-ecs/

fargate hostname set awsvpc

I'm pushing out an AWS Fargate task with two containers.
One of the containers is Nginx, which routes traffic to the other container running in the task.
In Docker Compose, the containers can reach each other using Docker's DNS service.
This does not work in Fargate.
I'm sure someone else has figured out how to communicate between containers in Fargate. I see there is a network mode setting, but Fargate seems to default to awsvpc, and the container settings do not allow me to set the hostname.
Does anyone have some ideas on how to make it possible for two containers in the same task to refer to each other by hostname?
Since you are using the awsvpc networking mode, containers that are part of the same task, as in your case, can simply use localhost to communicate with each other:
containers that belong to the same task can communicate over the localhost interface.
So you could modify your application/containers to use localhost for between-container communication within the same task.
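For the Nginx case specifically, that means proxying to the sibling container over loopback instead of a hostname. A minimal sketch, assuming the app container listens on port 3000 (that port is an assumption):

```nginx
# nginx.conf fragment for the Nginx container in the Fargate task.
# In awsvpc mode, containers in one task share the loopback interface,
# so "localhost" reaches the other container in the same task.
server {
    listen 80;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

No hostname or links are needed; only the port distinguishes the containers.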

Two ECS tasks connectivity in one EC2 Host

I have two ECS services running with the EC2 launch type and bridge networking mode. Each service has one task, and both tasks are running on the same EC2 container host.
On that same ECS host, the API container in the first task is trying to communicate with the DB container in the second task by hostname and port number (e.g. http://abc-def:5000). What are the ways to achieve this?
Your options are:
Put everything in a single task definition. Then you can use the link attribute to enable container A to communicate with container B at B:port. Note that links do not support circular relations: if A can talk to B, B will not be able to talk back to A.
Switch to the host network mode. This way the containers can communicate via localhost.
Proper service discovery with a tool like Consul or AWS ECS Service Discovery. I have no experience with the latter. See here.
Put your tasks behind ALB and use this load balancer to communicate between tasks.
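For the first option, a hedged sketch of the relevant container definitions (names and images are illustrative): the api container links to db, so api can reach it at db:5000, but db cannot reach api the same way:

```json
"containerDefinitions": [
  {
    "name": "api",
    "image": "my-api",
    "links": ["db"],
    "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
  },
  {
    "name": "db",
    "image": "my-db",
    "portMappings": [{ "containerPort": 5000 }]
  }
]
```

Adding "links": ["api"] to the db container as well would produce the "Container links should not have a cycle" error mentioned in a related question above.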

Running multiple ECS tasks based on same task definitions in one host, using different ports

I have one ECS task definition. Can multiple tasks based on this task definition run on one EC2 instance, on several ports?
I already have several tasks running on several EC2 instances and want to reduce resource consumption, since one EC2 instance has more than enough resources to run several tasks. Is it possible to run the same task several times on different ports on one EC2 instance?
Yes, ECS has supported this well since 2016. You can leave the host port empty in the container definition; this results in a random port being chosen for your container. As a result, more instances of the same task definition can run on one ECS instance.
You can configure your ECS service in combination with an Application Load Balancer so that when it starts a new task, it will register the port number in the associated target group. This way you never have to deal with the random port.
If you setup your service via the AWS console, configuration is pretty straightforward.
This can be configured by setting Host Port to 0 in the Port Mappings setting of the container definition when defining the task.
It allows ECS to assign random host ports to tasks running on the same EC2 instance.
For more details please check - Setup Dynamic Port Mapping for ECS
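In the task definition JSON, that setting boils down to this fragment (the container port is illustrative). With hostPort set to 0, ECS picks an unused port from the instance's ephemeral range for each task and registers it with the ALB target group:

```json
"portMappings": [
  {
    "containerPort": 8080,
    "hostPort": 0,
    "protocol": "tcp"
  }
]
```

Omitting hostPort entirely has the same effect in bridge mode as setting it to 0.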

How to deploy continuously using just One EC2 instance with ECS

I want to deploy my nodejs webapp continuously using just One EC2 instance with ECS. I cannot create multiple instances for this app.
My current continuous integration process:
Travis builds the code from GitHub, builds and tags the Docker image, pushes it, and deploys to ECS via an ECS deploy shell script.
Every time the deployment happens, the following error occurs, because port 80 is always in use by my webapp:
The closest matching container-instance ffa4ec4ccae9
is already using a port required by your task
Is it actually possible to use ECS with one instance? (documentation not clear)
How to get rid of this port issue on ECS? (stop the running container)
What is the way to get this done without using a Load Balancer?
Anything I missed or doing apart from the best practises?
The main issue is the port conflict, which occurs when deploying a second instance of the task on the same node in the cluster. Apart from that, nothing stops you from running multiple tasks on one instance (for example, when not using a load balancer, or when not binding to any host ports at all).
To solve this issue, Amazon introduced a dynamic ports feature in a recent update:
Dynamic ports makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
Here's a way to do it using the green/blue deployment pattern:
Host your containers on ports 8080 and 8081 (or whatever ports you want). Let's call 8080 green and 8081 blue. (You may have to switch the networking mode from bridge to host to get this to work on a single instance.)
Use Elastic Load Balancing to redirect the traffic from 80/443 to green or blue.
When you deploy, use a script to swap the active listener on the ELB to the other color/container.
This also allows you to roll back to a 'last known good' state.
See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html for more information.
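The listener swap in the deploy step can be scripted with the AWS CLI, for example (the ARNs are placeholders for your own resources, and "blue" is the target group to promote):

```
# Point the ALB listener's default action at the "blue" target group.
# Replace both ARNs with your actual listener and target-group ARNs.
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/blue/zzzz
```

Rolling back is the same command pointed at the green target group.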