Setting a container hostname in Fargate with awsvpc - amazon-web-services

I'm deploying an AWS Fargate task with 2 containers.
One of the containers is Nginx, which routes traffic to the other container running in the task.
In Docker Compose, the containers can reach each other using Docker's built-in DNS service.
This does not work in Fargate.
I'm sure someone else has figured out how to communicate between containers in Fargate. I see there is a Network Mode setting, but Fargate defaults to awsvpc, and with that mode the container settings do not allow me to set the hostname.
Does anyone have an idea how to make it possible for two containers in the same task to refer to each other by hostname?

Since you are using the awsvpc network mode, containers that are part of the same task, as in your case, can simply use localhost to communicate with each other. The documentation states:
containers that belong to the same task can communicate over the localhost interface.
So you can modify your application/containers to use localhost for communication between containers in the same task.
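For example, here is a minimal sketch of what the Nginx side could look like, assuming the application container listens on port 5000 (the port and location are placeholders for your actual setup):
# nginx.conf fragment (sketch): in awsvpc mode both containers share one
# network namespace, so the app container is reachable over loopback.
server {
    listen 80;
    location / {
        # 5000 is an assumed port; use whatever port your app container listens on
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
    }
}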

Related

ECS on EC2 bidirectional communication between two containers in the same task

I'm trying to configure an ECS task on an EC2 instance. The network mode in the task definition is Bridge.
My task has two containers inside that should communicate with each other, e.g. ContainerA makes requests to ContainerB and ContainerB makes requests to ContainerA.
Everything works well when I use docker-compose; the containers can communicate by their names. But when I deploy on ECS, those containers don't see each other. I can partly fix this problem using Links in the task definition; however, it works only in one direction. If I set links for both containers, I receive this error message when creating the task definition:
Unable to create a new revision of Task Definition web-app:12
Container links should not have a cycle
It would be great to hear any thoughts on what I missed and whether it's actually possible. Honestly, I thought that containers inside one task should communicate automatically by container name, especially when they are under the same Bridge network.
I know that there is a feature, Service Discovery, that allows communication between two services by name, but still, I would prefer to have one service and task with two containers inside.
Thank you for any help.
[screenshot: ContainerA network settings]
If both containers are defined in the same task definition they are available via localhost:
For instance, if ContainerA is listening on port 8081 and ContainerB is listening on port 8082, they can simply reach each other by:
localhost:8081
localhost:8082
Side note: this is the same concept as a Kubernetes pod with two containers - they are accessible to each other via localhost.
EDIT: this is relevant for the awsvpc network mode, as you can see in the documentation:
containers that belong to the same task can communicate over the localhost interface
docker-compose does not use the default bridge network; it sets up a user-defined network by default. That's why addressing by service name works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
https://docs.docker.com/compose/networking/
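As an illustration, a minimal Compose sketch where two services resolve each other by name (the service names and images are placeholders):
# docker-compose.yml (sketch): Compose puts both services on one
# user-defined network, so each can resolve the other by service name.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    # inside this container, http://api:8080 resolves to the api service
  api:
    image: my-api:latest   # placeholder image
    expose:
      - "8080"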
ECS on EC2 with bridge mode relies on links, which are deprecated by Docker and cannot have cycles, as you've found out.
Unfortunately user-defined networks are not supported by AWS despite a long-standing issue: https://github.com/aws/containers-roadmap/issues/184
That's what I've figured out so far. I guess the only option left is to use the awsvpc network mode, which allows containers to communicate via localhost (something I find rather awkward):
This means that all containers in a task are addressable by the IP addresses of the elastic network interface, and they can communicate with each other over the localhost interface.
https://aws.amazon.com/blogs/compute/under-the-hood-task-networking-for-amazon-ecs/

Connectivity between two ECS tasks on one EC2 host

I have 2 ECS services running with the EC2 launch type and bridge networking mode. Each service has 1 task, and both tasks are running on the same EC2 container host.
On the same ECS host, the API container in the 1st task is trying to communicate with the DB container in the 2nd task by hostname and port number (e.g. http://abc-def:5000). What are the ways to achieve this?
Your options are:
Put everything in a single task definition. Then you can use the links attribute to enable container A to reach container B as B:port (see the sketch after this list). Note that links do not support circular relations: if A can link to B, B cannot also link back to A.
Switch to host network mode. This way you can communicate via localhost.
Proper service discovery with a tool like Consul or AWS ECS Service Discovery. I have no experience with the latter. See here.
Put your tasks behind an ALB and use this load balancer to communicate between tasks.
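For option 1, a sketch of the relevant part of a task definition using links in bridge mode (the names, images, and ports are placeholders); containerB is then reachable from containerA as http://containerB:5000:
{
  "containerDefinitions": [
    {
      "name": "containerA",
      "image": "my-api:latest",
      "links": ["containerB"],
      "memory": 256
    },
    {
      "name": "containerB",
      "image": "my-db:latest",
      "memory": 256
    }
  ]
}
Remember that the reverse direction (containerB calling containerA) cannot be expressed with a second link, since links must not form a cycle.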

How to make two containers connectable in AWS ECS Fargate?

I have two containers added to a task definition.
Node container:
name: nodeAPI
port: 5001 (exposed)
mongo connection string in the env variable: mongodb://mongo [name of the mongo container]
Mongo container:
name: mongo
port: 27017 (exposed)
The node container is not able to connect to Mongo when I run this task. I am using Fargate with the awsvpc network mode.
How do I fix this?
And how do I make it work if I run them from separate task definitions?
Every task in Fargate gets its own ENI, so the correct way to communicate between containers in the same task definition is to call the local loopback address, 127.0.0.1 or localhost.
For example:
The first container can reach the second container at 127.0.0.1:<port of second container>, and the second container can reach the first at 127.0.0.1:<port of first container>.
This is very well explained in AWS Blog: https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/
There's a security-group configuration when you run the task; check whether the required ports are open.
[screenshot: Fargate security group]
Fargate acts much like EC2; the only difference is that you supply a Docker image,
so you have to do the normal EC2 configuration.
If both containers are defined within the same "Task Definition" then they are able to communicate using "localhost".
In your example, your Node.js app will talk to mongo at localhost:27017.
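Concretely, that means changing the connection string in the node container's environment from the container name to localhost. A sketch of the relevant containerDefinitions entry (the image and the MONGO_URL variable name are assumptions; use whatever your app actually reads):
{
  "name": "nodeAPI",
  "image": "my-node-api:latest",
  "portMappings": [{ "containerPort": 5001 }],
  "environment": [
    { "name": "MONGO_URL", "value": "mongodb://localhost:27017" }
  ]
}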

Communication Between Microservices in AWS ECS

I'm having trouble with communication between microservices. I have many Spring Boot applications, and many HTTP and AMQP (RabbitMQ) requests between them. Locally (in dev) I use Eureka (Netflix OSS) without Docker images.
The question is: how can I get the same behavior on the Amazon ECS infrastructure? What is the common practice for communication between microservices using Docker? Can I still use Eureka for service discovery? Besides that, how will this communication work between container instances?
I'd suggest reading up on ECS Service Load Balancing, in particular two points:
1. Your ECS service configuration may say that you let ECS or the EC2 instance essentially pick what port number the service runs on externally. (i.e. for Spring Boot applications inside the Docker container, your application thinks it's running on port 8080, but in reality, to anything outside the Docker container, it may be running on port 1234.)
2. ECS clusters will check the health endpoint you defined in the load balancer, and kill/respawn instances of your service that have died.
A load balancer gives you different ways to specify which application in the cluster you are talking to. This can be route based or DNS name based (and maybe some others). Thus http://myservice.example.com/api could point to a different ECS service than http://myservice.example.com/app... or http://app.myservice.example.com vs http://api.myservice.example.com.
You can configure ECS without a load balancer, but I'm unsure how well that would work in this situation.
Now, you're talking service discovery. You can still use Eureka for service discovery, having Spring Boot take care of that. You may need to be clever about how you tell Eureka where your service lives (as the hostname inside the Docker container may be useless, and the port number inside the container certainly will be). You may need to do something clever to correctly derive those values, like introspecting with the AWS APIs. I think this SO answer describes it correctly, or at least close enough to get started.
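For illustration, one hedged way to do that with Spring Cloud's Eureka client is to override the registered address from environment variables holding the externally visible host and port (HOST_IP and HOST_PORT are assumed names, populated however your setup discovers them, e.g. from the EC2 instance metadata service):
# application.yml (sketch): register the externally reachable address with
# Eureka instead of the in-container hostname/port. HOST_IP and HOST_PORT
# are assumed environment variables injected at container start.
eureka:
  instance:
    ip-address: ${HOST_IP}
    prefer-ip-address: true
    non-secure-port: ${HOST_PORT}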
Additionally, ECS apparently has service discovery built in now. This is either new since I last used ECS, or we didn't use it because we had other solutions. It may be worth a look if you aren't completely tied to Eureka for other reasons.
Thanks for your reply. For now I'm using Eureka because I'm also using Feign for communication between microservices.
My case is this: I have microservices (for example A, B, and C). A communicates with B and C by way of Feign (REST).
[diagram: microservices example]
Example of Code on Microservice A:
@FeignClient("b-service")
public interface BFeign {
}
@FeignClient("c-service")
public interface CFeign {
}
Using ECS and an ALB, is it still possible to use Feign? Either way, how would you suggest I do this?
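One hedged option, if you move off Eureka: @FeignClient accepts a url attribute, so each client can point at a fixed ALB DNS name (or a Route 53 alias) that routes to the corresponding ECS service. A sketch, where the hostname and endpoint are placeholders:
import java.util.List;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

// Sketch: a Feign client resolved via a fixed load-balancer hostname instead
// of Eureka. The url value is a placeholder for your actual ALB DNS name.
@FeignClient(name = "b-service", url = "http://b-service.internal.example.com")
public interface BFeign {
    // hypothetical endpoint, for illustration only
    @GetMapping("/api/items")
    List<String> items();
}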

How to map subdomains to multiple docker containers (as web servers) hosted using Elastic Beanstalk on AWS

I have seen an AWS example of hosting an nginx server and a PHP server running in separate Docker containers within an instance.
I want to use this infrastructure to host multiple docker containers (each being its own web server on a different unique port).
Each of these unique web application needs to be available on the internet with a unique subdomain.
Since one instance will not be enough for all the Docker containers, I will need them spread over multiple instances.
How can I host hundreds of Docker containers over several instances, with one nginx-proxy container doing the routing to map a subdomain to each web application container using its unique port?
E.g.
app1.mydomain.com --> docker container exposing port 10001
app2.mydomain.com --> docker container exposing port 10002
app3.mydomain.com --> docker container exposing port 10003
....
...
If I use an nginx-proxy container, it would be easy to map each port number to a different subdomain. This would be true if all the Docker containers were on the same instance as the nginx-proxy container.
But can I map it to Docker containers hosted on a different instance? I am planning to use Elastic Beanstalk to create new instances for the extra Docker containers.
In that case, nginx runs on one instance, while there are containers on different instances.
How do I achieve the end goal of hundreds of web applications hosted on separate docker containers mapped to unique subdomains?
To be honest, your question is not quite clear to me. It seems you could deploy an Nginx container on each instance, holding the proxy configuration for every app container on it; as the cluster scales out, all of the instances would have an Nginx as well. So you could just set an ELB on top of it (Elastic Beanstalk supports it natively), and you would be good.
Nonetheless, I think you're intending to push Elastic Beanstalk too hard. I mean, it's not supposed to be used that way, like a big, generic Docker cluster. Elastic Beanstalk was built to facilitate application deployments, and nowadays containers are just one of the, let's say, platforms available (although it's not a language or framework, of course) for people to do that. But Elastic Beanstalk is not a container manager.
So, in my opinion, what makes sense is to deploy a single container per Beanstalk application with an ELB on top of it, so you don't need to worry about the underlying machines and their IPs. That way, you can easily set up a frontend proxy to route requests, because you have a permanent address for your application pool. And being independent pools, they can scale independently, and so on.
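For the frontend proxy piece, a minimal sketch of how one subdomain could be routed to the ELB in front of one application's pool (the hostnames are placeholders):
# nginx.conf fragment (sketch): one server block per subdomain, each proxying
# to the ELB in front of that application's Beanstalk pool.
server {
    listen 80;
    server_name app1.mydomain.com;
    location / {
        proxy_pass http://app1-elb-123456.us-east-1.elb.amazonaws.com;
        proxy_set_header Host $host;
    }
}
One caveat: nginx resolves a static proxy_pass hostname only at startup, and ELB IPs change over time, so a production setup would typically add a resolver directive and use a variable for the upstream hostname.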
There are some more complex solutions out there which try to solve the problem of deploying containers across one wide cluster, like Google's Kubernetes, keeping track of them and providing endpoints for each application group. There are also solutions for dynamic reverse proxies, like this one, recently released, and probably a lot of other solutions popping up every day, but all of them would demand a lot of customization. In that case, though, we are not talking about an AWS solution.