AWS ECS - Deploying two different microservices in a single ECS service

I have two microservices that need to be deployed in the same ECS service for efficient resource usage.
Both of them have the same context path, so I cannot use a path-pattern rule in the ALB, and ECS doesn't seem to allow multiple ALBs for a single service.
Is it possible to have two target groups serving the microservices at different ports? Or is there any other solution?

Yes, you can have two different target groups, each with a unique port, under the same ALB. I use this setup to support both HTTP and HTTPS on the same instance behind an ALB; it should work the same way for ECS.

You can definitely have a single ALB serving up two different microservices on different ports of ECS instances. Typically when you're going this far, you might want to look at dynamic port mapping. The ALB still needs a way to decide which target group to go to -- hostname matching, for instance.
What I'm not totally sure I understand is why you want to share an ECS service -- why not put each microservice in its own ECS service and share an ALB instead?
Anyway, both are likely possible. I run several microservices, each in its own ECS service, sharing ECS instances and a single ALB in one cluster, with hostname matching on the ALB. If you really want to use a single ECS service, that should still be possible.
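To make the two-target-groups idea concrete, here is a minimal sketch of the parameters two such target groups would take, in the shape boto3's elbv2 `create_target_group` call expects. The names, ports, and VPC ID are placeholders, not values from the question:

```python
# Sketch: two target groups on different ports under one ALB.
# Each microservice registers with its own target group; the ALB
# listener then decides which one to forward to (e.g. by hostname).
# All names/IDs below are illustrative placeholders.

def target_group(name, port):
    """Parameters in the shape boto3's elbv2 create_target_group expects."""
    return {
        "Name": name,
        "Protocol": "HTTP",
        "Port": port,                      # each service gets its own port
        "VpcId": "vpc-0123456789abcdef0",  # placeholder VPC
        "TargetType": "instance",
    }

tg_a = target_group("svc-a-tg", 8080)
tg_b = target_group("svc-b-tg", 9090)
```

With dynamic port mapping (mentioned above), the `Port` here is only the default; ECS registers each task's ephemeral host port with the target group automatically.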

Related

How to deploy many ECS services using one instance and one load balancer?

I'm new to AWS and I am trying to gauge what migrating our existing applications into AWS would look like. I'm trying to host multiple apps as Services under a single ECS cluster, and use one Application Load Balancer with hostname rules to route requests to the correct container.
I was originally thinking I could give each service its own Target Group, but I ran into the RESOURCE:ENI error, which from what I can tell means that I can't just attach as many Target Groups as I want to the same cluster.
I don't want to create a separate cluster for each app, or use separate load balancers for them because these apps are very small and receive little to no traffic so it just wouldn't make sense. Even the minimum of 0.25 vCPU/0.5 GB that Fargate has is overkill for these apps.
What's the best way to host many apps under one ECS cluster and one Load Balancer? Is it best to create my own reverse-proxy server to do the routing to different apps?
You are likely using the awsvpc network mode in your task definitions. You could change it to the (default) bridge mode instead. Your services don't sound like ones that need the added network performance of the native EC2 networking stack.
As I understand it, the target groups' target type should then be instance.
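A minimal sketch of what such a task definition could look like, expressed as the Python dict boto3's `register_task_definition` accepts. Bridge mode with `hostPort: 0` gives each task an ephemeral host port, so many small tasks can share one instance without exhausting ENIs; the family and image names are placeholders:

```python
# Sketch: an ECS task definition using bridge network mode with dynamic
# host-port mapping, so many tasks can share one EC2 instance and register
# with an "instance"-type target group. Names are illustrative only.

task_definition = {
    "family": "small-app",
    "networkMode": "bridge",  # default mode; avoids one ENI per task
    "containerDefinitions": [
        {
            "name": "web",
            "image": "my-registry/small-app:latest",  # placeholder image
            "memory": 128,
            "portMappings": [
                # hostPort 0 asks Docker for an ephemeral host port;
                # ECS registers that port with the target group for you.
                {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}
            ],
        }
    ],
}
```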

Cheap solution for exposing multiple HTTP services in K8s cluster (AWS EKS)

I'm pretty new to k8s and I'm trying to figure out how to expose multiple HTTP services to the Internet cheaply. I'm currently using an AWS EKS cluster with managed node groups, so the cheap way would be not to provision any kind of ELB, since they cost money. I would also like those services to sit in private subnets, so that e.g. only the Ingress resource is exposed and the nodes stay private. One load balancer per service is definitely not an option, as it would blow my budget.
The options I consider:
Using K8s Ingress resources (to be precise, the Istio ingress controller). The downside is that when we create an Ingress resource, AWS creates a load balancer, which I will need to pay for.
Run the node groups in public subnets and create K8s Services of type NodePort, so I could reach each service at NodeIP:NodePort (a NodePort specific to each service). The downside is that I would need to remember all the IPs and ports assigned to each service. I can live with one service, but as the number grows that becomes pretty awful to keep track of.
Lastly, if there is no other option, create one load balancer with a public IP and also create an ingress controller with Istio. Then I can reach every service through the load balancer's single DNS name and route to services by request path.
Looking forward to any solution and inputs.
I don't think there is any magic here. Option 1 and 3 are basically one and the same (unless I am missing something). As you pointed out I don't think option 2 is viable for the reasons you call out. You have a number of options to go with. I don't know the Istio ingress (but I assume it will be fine). We often see customers using either the NGINX ingress or the ALB ingress.
All of these options require a Load Balancer.
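Option 3 (one load balancer, one ingress controller, routing by path) can be sketched as a single Ingress resource. It is shown here as a Python dict mirroring the YAML manifest, just to make the shape concrete; the service names and paths are placeholders:

```python
# Sketch: one Ingress routing two services through a single load balancer
# by request path. In practice this would be a YAML manifest applied with
# kubectl; service names and paths are illustrative placeholders.

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "shared-ingress"},
    "spec": {
        "rules": [{
            "http": {
                "paths": [
                    # /app1/* -> app1-svc, /app2/* -> app2-svc
                    {"path": "/app1", "pathType": "Prefix",
                     "backend": {"service": {"name": "app1-svc",
                                             "port": {"number": 80}}}},
                    {"path": "/app2", "pathType": "Prefix",
                     "backend": {"service": {"name": "app2-svc",
                                             "port": {"number": 80}}}},
                ]
            }
        }]
    },
}
```

However many services you add as paths here, only the one load balancer in front of the ingress controller is billed.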

NLB - Multiple NLBs in EKS?

I currently have a setup where I have 1 NLB in my EKS cluster. I deployed ServiceA and ServiceB and both of them use the same NLB.
However, I am curious to know whether it is possible to create more than one NLB in an EKS cluster?
If yes, which use case would this be useful for?
And how would I specify ServiceC to use NLB1 and ServiceD to use NLB2?
I did not find specific documentation surrounding this and any pointers on this would be helpful. Thanks!
It is possible.
I've never done it with NLBs, but this should be as simple as deploying a second Service of type LoadBalancer with the annotation indicating it's an NLB rather than a classic ELB.
As to use case, a few that spring to mind:
strict requirements for segregation of traffic
namespacing of project resources
Routing would be accomplished by binding the deployment manifests for service C to the Service for NLB1 and the deployment manifests for service D to the Service for NLB2. Services route to pods through selectors, so it's merely a matter of ensuring your mapping is correct.
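A sketch of what that second Service could look like, written as a Python dict mirroring the manifest. The NLB annotation shown is the one the in-tree AWS cloud provider recognizes; the service name, labels, and ports are placeholders:

```python
# Sketch: a second Service of type LoadBalancer annotated to get an NLB
# instead of a classic ELB. The selector binds this Service (and its NLB)
# to service D's pods. Names and ports are illustrative placeholders.

service_d = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "service-d",
        "annotations": {
            # Asks the AWS cloud provider to provision an NLB.
            "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
        },
    },
    "spec": {
        "type": "LoadBalancer",
        # Must match the pod labels in service D's deployment manifest.
        "selector": {"app": "service-d"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

Service C would get an identical Service of its own with its own selector, giving you two NLBs with cleanly segregated traffic.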

Can we use one ALB with AWS ECS Fargate?

I have a bunch of microservices running in AWS ECS Fargate with ALBs and Route53; each microservice has its own ALB with a Route53 record.
Is there any way I can use only one ALB for all the microservices and route requests to the respective services?
Here, I am not using EKS, just AWS ECS Fargate.
To serve multiple Fargate services from a single ALB, you need to create different target groups (TGs) for them. Each service will have its own TG, and the ALB will forward traffic to the respective TG based on listener rules.
Since you have Route53, a common choice is to create subdomains, e.g. service1.example.com and service2.example.com. You associate them as simple Alias A records with the same ALB.
On the ALB you will have a single listener (e.g. HTTP 80) with different rules. For example:
Rule one will match a Host header equal to service1.example.com, with an action of Forward to TG1.
Rule two will match a Host header equal to service2.example.com, with an action of Forward to TG2.
A default rule is compulsory (or make rule two the default).
And that's it. Any request from the internet directed to service1.example.com will go to TG1, which is Fargate service1. Same for requests to service2.example.com.
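The rule table above can be sketched as data, with a tiny resolver that mimics how the ALB picks a target group from the Host header (lowest priority number wins, default rule as fallback). The domain names and TG names follow the example above:

```python
# Sketch: the listener rules from the answer above as data, plus a resolver
# that mimics ALB host-header matching. Domains and TG names are the
# example values, not real resources.

rules = [
    {"priority": 1, "host": "service1.example.com", "target_group": "TG1"},
    {"priority": 2, "host": "service2.example.com", "target_group": "TG2"},
]
DEFAULT_TG = "TG2"  # the compulsory default rule

def resolve(host):
    """Return the target group the ALB would forward this Host header to."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["host"] == host:
            return rule["target_group"]
    return DEFAULT_TG

print(resolve("service1.example.com"))  # -> TG1
```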

Communication between two or more instances of an ECS service

I know that with AWS ECS's service discovery you can make one service talk to another by using the service.cluster pattern in the URL (service discovery does the rest for you). But I want to know if there is a way to have one instance of a service talk to another container in the same service on AWS Elastic Container Service.
I have searched for this for a while now and decided to ask here as I wasn't able to find any conclusive ideas on how to make it work.
You need to associate your ECS service with an Application Load Balancer (ALB), which ensures ECS registers the tasks (i.e. one or more containers) with the ALB, providing a uniform URL for the services to communicate with each other. ALBs support path-based routing to multiple target groups, allowing microservice-style communication between tasks/services.
As per the AWS documentation, you can use a single ALB for multiple services, as the ALB provides path-based routing and priority rules. Path-based routing can be used to forward requests to different target groups backed by different ECS tasks/services.
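A path-based rule like the one this answer describes has the same shape as a host-header rule, just with a path-pattern condition. Here is a hedged sketch in the form boto3's elbv2 `create_rule` expects; the path and target-group ARN are placeholders:

```python
# Sketch: an ALB path-pattern listener rule, the mechanism described above
# for routing between ECS services behind one ALB. Path and ARN are
# illustrative placeholders only.

def path_rule(priority, pattern, target_group_arn):
    """Parameters in the shape boto3's elbv2 create_rule expects,
    forwarding requests whose path matches `pattern` to a target group."""
    return {
        "Priority": priority,
        "Conditions": [{"Field": "path-pattern", "Values": [pattern]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

orders_rule = path_rule(
    5, "/orders/*", "arn:aws:elasticloadbalancing:region:acct:targetgroup/orders"
)
```

Lower `Priority` numbers are evaluated first, so more specific paths should get lower numbers than catch-all patterns.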