Communication between two or more instances of an ECS service - amazon-web-services

I know that with AWS ECS's service discovery you can make one service talk to another by using the service.cluster pattern in the URL (and service discovery does the rest for you). But I want to know if there is a way to make one instance of a service talk to another container in the same service on AWS Elastic Container Service.
I have searched for this for a while now and decided to ask here as I wasn't able to find any conclusive ideas on how to make it work.

You need to associate your ECS service with an Application Load Balancer (ALB), which ensures that ECS registers the tasks (i.e. one or more containers) with the ALB and gives the services a uniform URL to reach each other. ALBs support path-based routing to multiple target groups, allowing microservice-style communication between tasks/services.
As per the AWS documentation, you can use a single ALB for multiple services, because the ALB provides path-based routing and priority rules. For further information, you can also view the linked example, which explains how path-based routing can be used to forward requests to different target groups backed by different ECS tasks/services.
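As a rough illustration (not taken from the linked documentation), listener rules for path-based routing might be created like this with boto3; the ARNs, paths, and priorities are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical listener and target group ARNs -- substitute your own.
listener_arn = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."
orders_tg = "arn:aws:elasticloadbalancing:...:targetgroup/orders/..."
users_tg = "arn:aws:elasticloadbalancing:...:targetgroup/users/..."

# Route /orders/* to one ECS service's target group ...
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": orders_tg}],
)

# ... and /users/* to another service's target group on the same ALB.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/users/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": users_tg}],
)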

Related

How to deploy many ECS services using one instance and one load balancer?

I'm new to AWS and I am trying to gauge what migrating our existing applications into AWS would look like. I'm trying to host multiple apps as Services under a single ECS cluster, and use one Application Load Balancer with hostname rules to route requests to the correct container.
I was originally thinking I could give each service its own Target Group, but I ran into the RESOURCE:ENI error, which from what I can tell means that I can't just attach as many Target Groups as I want to the same cluster.
I don't want to create a separate cluster for each app, or use separate load balancers for them because these apps are very small and receive little to no traffic so it just wouldn't make sense. Even the minimum of 0.25 vCPU/0.5 GB that Fargate has is overkill for these apps.
What's the best way to host many apps under one ECS cluster and one Load Balancer? Is it best to create my own reverse-proxy server to do the routing to different apps?
You are likely using the awsvpc network mode for the task definitions. You could change it to the (default) bridge mode instead; your services don't sound like ones that need the added network performance of the native EC2 networking stack that awsvpc provides.
As I understand it, the target groups' target type should then be instance.
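A minimal sketch of that combination with boto3, assuming a hypothetical container listening on port 8080 (image, names, and VPC id are placeholders): bridge mode with hostPort 0 gives dynamic host ports, and the matching target group uses the instance target type.

import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Bridge-mode task definition; hostPort 0 lets ECS pick a free host port per task.
ecs.register_task_definition(
    family="small-app",
    networkMode="bridge",
    containerDefinitions=[{
        "name": "web",
        "image": "my-registry/small-app:latest",  # placeholder image
        "memory": 128,
        "portMappings": [{"containerPort": 8080, "hostPort": 0}],
    }],
)

# Instance-type target group; ECS registers the dynamic host ports for you.
elbv2.create_target_group(
    Name="small-app-tg",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC id
    TargetType="instance",
)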

Do I need a load balancer in an AWS Elastic Beanstalk environment

My applications run on Elastic Beanstalk and communicate purely with internal services like Kinesis and DynamoDB. There is no web traffic needed. Do I need an Elastic Load Balancer in order to scale my instances up and down? I want to add and remove instances purely based on some CloudWatch metrics. Do I need the ELB to do managed updates etc.?
If there is no traffic to the service then there is no need to have a load balancer.
In fact the load balancer is primarily to distribute inbound traffic such as web requests.
Autoscaling can still be accomplished without a load balancer, with scaling based on whatever CloudWatch metric you want to use. In fact this is generally how consumer-based applications tend to work.
To create this without a load balancer you would want to configure your environment as a worker environment.
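As an illustration of metric-based scaling without a load balancer (not taken from the question; the environment name and thresholds are placeholders), the environment's scaling trigger can be pointed at a CloudWatch metric such as CPUUtilization via the aws:autoscaling:trigger options, here through boto3:

import boto3

eb = boto3.client("elasticbeanstalk")

# Scale the environment on average CPU rather than on load-balancer traffic.
eb.update_environment(
    EnvironmentName="my-worker-env",  # placeholder environment name
    OptionSettings=[
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "20"},
    ],
)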
@Chris already answered, but I would like to complement his answer with the following:
There is no web traffic needed?
Even if you communicate only with Kinesis and DynamoDB, your instances still need to be able to reach the internet to call those AWS services. So outbound web traffic is required from your instances; it is only direct inbound traffic to your instances that is not needed.
To fully separate your EB env from the internet you should have a look at the following:
Using Elastic Beanstalk with Amazon VPC
The document describes what you can do and what can't be done when using private subnets.

AWS ECS Private and Public Services

I have a scenario where I have to deploy multiple micro-services on AWS ECS. I want to make services able to communicate with each other via APIs developed in each micro-service. I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS. How can I achieve this? Can I use AWS ECS service discovery by having all services in a private subnet to enable communication between each of them? Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
The combination of an AWS load balancer (for public access) and Amazon ECS service discovery (for internal communication) is a good fit for this kind of web application.
Built-in service discovery in ECS is another feature that makes it easy to develop a dynamic container environment without needing to manage as many resources outside of your application. ECS and Route 53 combine to provide highly available, fully managed, and secure service discovery.
Service discovery is a technique for getting traffic from one container to another using the container's direct IP address, instead of an intermediary like a load balancer. It is suitable for a variety of use cases:
Private, internal service discovery
Low latency communication between services
Long lived bidirectional connections, such as gRPC.
Yes, you can use AWS ECS service discovery, with all services in a private subnet, to enable communication between them.
This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to lookup where they need to make connections based on the state of each service.
Yes, you can use a load balancer to make the front-end micro-service accessible to end-users over the internet. The referenced architecture diagram shows an AWS load balancer and service discovery combined for a web application in ECS: the backend container, which is in a private subnet, serves public requests through the ALB, while the rest of the containers use AWS service discovery to talk to each other.
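A minimal boto3 sketch of that wiring, assuming a hypothetical private namespace corp and a worker service (cluster name, task definition, subnets, and security groups are placeholders):

import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

# Private DNS namespace; this call is asynchronous and returns an operation id.
sd.create_private_dns_namespace(Name="corp", Vpc="vpc-0123456789abcdef0")

# Service discovery service: Cloud Map keeps an A record for "worker.corp" up to date.
registry = sd.create_service(
    Name="worker",
    NamespaceId="ns-placeholder",  # id of the namespace once the operation completes
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 10}]},
)

# ECS service registered into service discovery instead of (or as well as) a load balancer.
ecs.create_service(
    cluster="my-cluster",
    serviceName="worker",
    taskDefinition="flask-worker",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
    }},
    serviceRegistries=[{"registryArn": registry["Service"]["Arn"]}],
)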
Amazon ECS Service Discovery
Let’s launch an application with service discovery! First, I’ll create two task definitions: “flask-backend” and “flask-worker”. Both are simple AWS Fargate tasks with a single container serving HTTP requests. I’ll have flask-backend ask worker.corp to do some work and I’ll return the response as well as the address Route 53 returned for worker. Something like the code below:
import os
import socket
import requests
from flask import Flask

app = Flask(__name__)
namespace = os.getenv("namespace")  # e.g. ".corp", including the leading dot
worker_host = "worker" + namespace

@app.route("/")
def backend():
    r = requests.get("http://" + worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)
Note that in this private architecture there is no public subnet, just a private subnet. Containers inside the subnet can communicate with each other using their internal IP addresses, but they need some way to discover each other's IP address.
AWS service discovery offers two approaches:
DNS based: Route 53 creates and maintains a custom DNS name which resolves to one or more IP addresses of other containers, for example http://nginx.service.production. Other containers can then send traffic to the destination by simply opening a connection using this DNS name.
API based: containers can query an API to get the list of IP address targets available, and then open a connection directly to one of the other containers.
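As a small illustration of both lookup styles (assuming a hypothetical worker service registered in a namespace called corp; the names are not from the question):

import socket
import boto3

# DNS based: resolve the friendly name that Route 53 maintains for the service.
worker_ip = socket.gethostbyname("worker.corp")

# API based: ask Cloud Map directly for the registered instances.
sd = boto3.client("servicediscovery")
resp = sd.discover_instances(NamespaceName="corp", ServiceName="worker")
targets = [inst["Attributes"].get("AWS_INSTANCE_IPV4") for inst in resp["Instances"]]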
You can read more about AWS service discovery and its use cases in the amazon-ecs-service-discovery announcement and the related documentation.
According to the documentation, "Amazon ECS does not support registering services into public DNS namespaces"
In other words, when it registers the DNS, it only uses the service's private IP address which would likely be problematic. The DNS for the "public" services would register to the private IP addresses which would only work, for example, if you were on a VPN to the private network, regardless of what your subnet rules were.
I think a better solution is to attach the services to one of two load balancers... one internet facing, and one internal. I think this works more naturally for scaling the services up anyway. Service discovery is cool, but really more for services talking to each other, not for external clients.
I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS.
I would use Service Discovery to wire the services internally and the Elastic Load Balancer integration to make them accessible for the public.
The load balancer can do the load balancing on one side and the DNS SRV records can do the load balancing for your APIs internally.
There is a similar question here on Stack Overflow and the answer [1] to it outlines a possible solution using the load balancer and the service discovery integrations in ECS.
Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
Yes, the load balancer can register targets in a private subnet.
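For illustration, a hedged boto3 sketch of that setup: an internet-facing ALB in public subnets with an ip-type target group whose targets live in private subnets (all identifiers are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing ALB placed in public subnets.
alb = elbv2.create_load_balancer(
    Name="frontend-alb",
    Subnets=["subnet-public-a", "subnet-public-b"],  # placeholder public subnet ids
    Scheme="internet-facing",
    Type="application",
)

# Target group of type "ip"; the registered IPs can sit in private subnets.
tg = elbv2.create_target_group(
    Name="frontend-tg",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC id
    TargetType="ip",
)

# Listener on :80 forwarding public traffic to the private targets.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)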
References
[1] https://stackoverflow.com/a/57137451/10473469

AWS ECS - Deploying two different microservices in a single ECS service

I have two microservices that need to be deployed in the same ECS service for efficient resource usage.
Both of them have the same context path, so I cannot use a path-pattern rule in the ALB, and ECS doesn't seem to allow multiple ALBs for a single ECS service.
Is it possible to have two target groups serving the microservices at different ports? Or is there any other solution?
Yes, you can have two different target groups, each with a unique port, under the same ALB. I use this construction to support the HTTP and HTTPS protocols on the same instance with an ALB; it should be the same for ECS.
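Sketched with boto3 (the ARNs and listener ports below are illustrative placeholders): one ALB, two listeners on different ports, each forwarding to its own target group.

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs -- substitute your ALB and the two services' target groups.
alb_arn = "arn:aws:elasticloadbalancing:...:loadbalancer/app/shared-alb/..."
service_a_tg = "arn:aws:elasticloadbalancing:...:targetgroup/service-a/..."
service_b_tg = "arn:aws:elasticloadbalancing:...:targetgroup/service-b/..."

# One listener per port, each forwarding to its own target group.
elbv2.create_listener(LoadBalancerArn=alb_arn, Protocol="HTTP", Port=80,
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": service_a_tg}])
elbv2.create_listener(LoadBalancerArn=alb_arn, Protocol="HTTP", Port=8081,
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": service_b_tg}])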
You can definitely have a single ALB serving up two different microservices on different ports of ECS instances. Typically when you're going this far, you might want to look at dynamic port mapping. The ALB still needs a way to decide which target group to go to -- hostname matching, for instance.
What I'm not totally sure I understand is why you want to share an ECS service -- why not put each microservice in its own ECS service and share an ALB instead?
Anyway, both are likely possible. I have several microservices, each with their own ECS service sharing ECS instances and an ALB in a cluster using host name matching on the ALB. If you really want to use a single ECS service, it seems like it would still be possible.

How to expose API endpoints from a private AWS ALB

We have several microservices on AWS ECS and a single ALB with a different target group for each microservice. We want to expose some endpoints externally while keeping other endpoints for internal communication only.
The problem is that if we put our load balancer in a public subnet, we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of security concerns such as DDoS protection.
What possible approaches can we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will be the least admin and probably the cheapest overall.
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance:
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Assign ALB to your WAF ACL.
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
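Only as a rough sketch of the steps above, using the WAFv2 API via boto3 (this is not the configuration from the answer): here the rule is inverted into a block of a private path prefix for any source outside an allow list, with a default allow, which achieves the same outcome. The path prefix, CIDR range, and names are assumptions.

import boto3

wafv2 = boto3.client("wafv2")

# IP set of callers allowed to reach the internal endpoints (placeholder CIDR).
ip_set = wafv2.create_ip_set(
    Name="internal-callers", Scope="REGIONAL",
    IPAddressVersion="IPV4", Addresses=["203.0.113.0/24"],
)

visibility = {"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True,
              "MetricName": "private-endpoints"}

# Block /internal/* unless the source IP is in the allow list; allow everything else.
acl = wafv2.create_web_acl(
    Name="private-endpoint-acl", Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig=visibility,
    Rules=[{
        "Name": "block-internal-paths",
        "Priority": 0,
        "Action": {"Block": {}},
        "VisibilityConfig": visibility,
        "Statement": {"AndStatement": {"Statements": [
            {"ByteMatchStatement": {
                "SearchString": b"/internal/",
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "STARTS_WITH",
            }},
            {"NotStatement": {"Statement": {
                "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]},
            }}},
        ]}},
    }],
)

# Attach the web ACL to the ALB (placeholder ALB ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
)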
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS
one Network Load Balancer (NLB) with one IP-based target group
The IP-based target group will hold the internal ALB's IP addresses. Because the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
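Roughly, what that Lambda does is resolve the internal ALB's current private IPs and register them into the NLB's IP target group; a simplified sketch of the same idea (the DNS name and target group ARN are placeholders):

import socket
import boto3

elbv2 = boto3.client("elbv2")

# Resolve the internal ALB's current private IP addresses from its DNS name.
alb_dns = "internal-my-alb-1234567890.eu-west-1.elb.amazonaws.com"  # placeholder
alb_ips = {info[4][0] for info in socket.getaddrinfo(alb_dns, 80, socket.AF_INET)}

# Register those IPs into the NLB's ip-type target group (port 80 on the ALB).
nlb_tg_arn = "arn:aws:elasticloadbalancing:...:targetgroup/alb-ips/..."  # placeholder
elbv2.register_targets(
    TargetGroupArn=nlb_tg_arn,
    Targets=[{"Id": ip, "Port": 80} for ip in alb_ips],
)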
Both the ALB and NLB are scalable and protected from DDoS by AWS; AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support multiple target group registration per service, it is already in their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it:
Two load balancers, one in a private and one in a public subnet.
APIs meant to be public are exposed directly through the public LB.
For the few private API endpoints that need to be exposed, we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while serving only the public ones via API Gateway and proxying back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use-case though might be their doc on ELB Security Groups. These are, as you may expect, security groups that are applied at the ELB level rather than the Instance level.
Using security groups, you can specify who has access to which endpoints.
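For instance, a hedged boto3 sketch (IDs and the port are placeholders): the services' security group only accepts traffic on the container port from the ALB's security group, so only endpoints exposed through the ALB are reachable from outside.

import boto3

ec2 = boto3.client("ec2")

alb_sg = "sg-0aaaaaaaaaaaaaaaa"      # placeholder: security group attached to the ALB
service_sg = "sg-0bbbbbbbbbbbbbbbb"  # placeholder: security group attached to the ECS tasks

# Allow inbound traffic to the tasks only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId=service_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": alb_sg}],
    }],
)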