Networking Between Tasks in AWS ECS Fargate

I'm trying to set up a cluster with several different tasks that need to be able to communicate with each other. I have turned on Service Discovery for each task, and I can see all of the Route 53 DNS entries in my private hosted zone get updated as I spin up new tasks, but for whatever reason, when I try to use the domain name of a service (wordpress.local), my other containers cannot resolve it. They are all in the same Availability Zone and the same subnet. I'm not totally certain what else I need to do to get these tasks to communicate with each other, aside from setting up a target group in my load balancer, which seems unnecessary since I have Service Discovery turned on...
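Not an answer from the thread, but a minimal check worth sketching: from inside one of the tasks, try resolving the Service Discovery name directly (assuming you can exec into a running container; wordpress.local is the name from the question). One common cause of this symptom is that the VPC's enableDnsSupport and enableDnsHostnames attributes are off, in which case Route 53 private hosted zone records never resolve.

```python
import socket

# Try to resolve the Service Discovery name from inside a task in the
# same VPC. Records in a Route 53 private hosted zone only resolve if
# the VPC has enableDnsSupport and enableDnsHostnames enabled.
try:
    infos = socket.getaddrinfo("wordpress.local", 80, proto=socket.IPPROTO_TCP)
    for info in infos:
        print("resolved:", info[4][0])  # private IP of a task
except socket.gaierror as err:
    print("resolution failed:", err)
```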

Related

How to deploy many ECS services using one instance and one load balancer?

I'm new to AWS and I am trying to gauge what migrating our existing applications into AWS would look like. I'm trying to host multiple apps as Services under a single ECS cluster, and use one Application Load Balancer with hostname rules to route requests to the correct container.
I was originally thinking I could give each service its own Target Group, but I ran into the RESOURCE:ENI error, which from what I can tell means that I can't just attach as many Target Groups as I want to the same cluster.
I don't want to create a separate cluster for each app, or use separate load balancers for them because these apps are very small and receive little to no traffic so it just wouldn't make sense. Even the minimum of 0.25 vCPU/0.5 GB that Fargate has is overkill for these apps.
What's the best way to host many apps under one ECS cluster and one Load Balancer? Is it best to create my own reverse-proxy server to do the routing to different apps?
You are likely using awsvpc network mode for the task definitions. You could change it to the (default) bridge mode instead. Your services don't seem to be ones that would need the added network performance boost of using the native EC2 networking stack.
As far as I understand, the target groups' target type should be instance in that case.
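Not from the answer, but a rough boto3 sketch of what that combination might look like; the family name, image, and VPC ID are placeholders. With bridge mode, a hostPort of 0 lets Docker assign a dynamic host port per task, which is why the target group's target type has to be instance.

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Bridge-mode task definition: hostPort 0 means Docker picks an
# ephemeral host port, so many tasks can share one instance without
# consuming an ENI each (the RESOURCE:ENI limit the question hit).
ecs.register_task_definition(
    family="my-app",  # placeholder
    networkMode="bridge",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "web",
        "image": "my-app:latest",  # placeholder
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 0}],
    }],
)

# Instance-type target group: ECS registers each task's instance and
# dynamic host port here automatically.
elbv2.create_target_group(
    Name="my-app-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder
    TargetType="instance",
)
```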

ECS Fargate cross microservice communication options

I have been looking into different ways of connecting multiple microservices, each within its own service/task, using ECS Fargate.
Normally, if all microservices are defined in the same task definition, we can just use the local IP with the corresponding ports, but this means we cannot scale individual microservices. From what I can tell, there are two 'main' ways of enabling this communication when we break these out into multiple services:
1. Add a load balancer to each service and use the load balancer's public IP as the single point of access from one service to another.
Questions I have on this are:
a. Do all the services that need to communicate need to sit in the same VPC, with each service's incoming rules set to the security group of the load balancer?
b. Say we have now provisioned the entire setup and need to set one of the load balancers' public DNS names in a microservice's code base. What's the best way of attaining this? I'm guessing some sort of Terraform script that 'assumes' the public DNS that will be added to it?
2. Making use of AWS Service Discovery, meaning we can query service to service with a simple built-up identifier.
Question I have for this is:
a. Can we still attach load balancers to the services and STILL use service discovery? Or does service discovery have an under-the-hood load balancer already configured?
Many thanks in advance for any help!
1.a All services in the same VPC and their security groups (SGs)
I assume that you are talking about the case where each service has its own load balancer (LB). Since the LBs are public, they can be in any VPC, region, or account.
SGs are generally set up so that a service's incoming rules allow only connections from the SG of the LB.
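As a hedged illustration (not from the answer; the group IDs are placeholders), that SG arrangement is a single ingress rule that references the LB's group instead of a CIDR range:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the service's SG to accept port 80 traffic only from the LB's
# SG. Both group IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0service00000000a",  # the service's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-0loadbalancer000b"}],  # the LB's SG
    }],
)
```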
1.b DNS
Each task can have environment variables, and this is a good way to pass in the DNS values. If you are talking about Terraform (TF), then TF would provision the LBs first and then create the tasks, setting the env variables to the DNS names of the LBs. Thus you would know the DNS of the LBs, as they would have been created before your services.
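A sketch of that ordering, using boto3 rather than Terraform (the principle is the same); the names, subnets, and image are placeholders. The LB is created first, so its DNS name is known by the time the dependent task definition is registered:

```python
import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")

# Create the LB first so its DNS name exists before the task definition.
lb = elbv2.create_load_balancer(
    Name="orders-lb",  # placeholder
    Subnets=["subnet-aaa0000000000000a", "subnet-bbb0000000000000b"],
)
lb_dns = lb["LoadBalancers"][0]["DNSName"]

# Hand the DNS name to the dependent service as an environment variable.
ecs.register_task_definition(
    family="checkout",  # placeholder
    containerDefinitions=[{
        "name": "checkout",
        "image": "checkout:latest",  # placeholder
        "memory": 512,
        "environment": [{"name": "ORDERS_URL", "value": f"http://{lb_dns}"}],
    }],
)
```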
2.a Service discovery (SD)
SD is only for private communication between services. No internet is involved, so everything must be in the same VPC or peered VPCs. So it's basically the opposite of the first solution with LBs.
I think you should also be able to use a public LB along with SD.
SD does not use an LB. Instead, when you query the DNS name of a service through SD, you get the private IP addresses of its tasks in random order. The random order approximates load balancing of connections between the tasks in a service.
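A small demonstration of that behavior, assuming a Service Discovery name of backend.local (a placeholder) with several healthy tasks behind it: repeated lookups return the same set of task IPs, just not in a fixed order.

```python
import socket

# Each lookup of the SD name returns the private IPs of all healthy
# tasks; the varying order is the only 'load balancing' SD provides.
for attempt in range(3):
    infos = socket.getaddrinfo("backend.local", 80, proto=socket.IPPROTO_TCP)
    ips = [info[4][0] for info in infos]
    print(f"lookup {attempt}: {ips}")
```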

Attach multiple services in amazon application load balancer

I have two Dropwizard services running in AWS ECS instances, and I need to use a single ALB for both of these services, each running in its own Docker container as part of the same ECS cluster.
In the load balancer settings I can't see how I can map it to two services. Please guide me, as I need this to keep overhead low and for cost-saving purposes.
You will need to add a listener for each container port mapping, even if they're on the same host. The additional listener(s) have to be added after the wizard.
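A rough boto3 sketch of adding one such listener on an existing ALB and forwarding it to the second service's target group; the ARNs and port are placeholders. (On an ALB you could alternatively keep a single listener and split traffic with host- or path-based rules, as in the question above.)

```python
import boto3

elbv2 = boto3.client("elbv2")

# Second listener on the existing ALB, forwarding to the second
# service's target group. ARNs and port are placeholders.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                    "loadbalancer/app/my-alb/0000000000000000",
    Protocol="HTTP",
    Port=8081,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                          "targetgroup/service-two/0000000000000000",
    }],
)
```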

EC2 instance autoscale in different regions?

I'm pretty new to AWS and have already set up an EC2 instance running my Node.js server. I created an AMI and added it to an Auto Scaling group. Now I want to set up a load balancer that has one IP address and uses different Auto Scaling groups in different regions. It should connect the user to the region with the lowest delay and consistently send and receive WebSocket messages from that server.
But all I see in my settings is the VPC for the European region. Do I have to set up a new VPC? Or is what I'm trying to do even possible?
Hope somebody can help me out, cheers!
It is possible to do that using Route 53. You create your load balancers in the regions you want, with their instances running the same application, and set up Route 53 to route requests based on latency or geolocation.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
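A hedged boto3 sketch of the latency-routing side; the hosted zone ID, domain, and load balancer values are all placeholders. One record per region, each aliasing that region's load balancer, lets Route 53 answer queries with the lowest-latency endpoint:

```python
import boto3

route53 = boto3.client("route53")

# One latency record per region, each aliasing that region's LB.
# Zone IDs, domain, and LB DNS names below are placeholders.
for region, lb_dns, lb_zone in [
    ("eu-central-1", "my-lb-eu.example.elb.amazonaws.com", "Z00000000000EU"),
    ("us-east-1", "my-lb-us.example.elb.amazonaws.com", "Z00000000000US"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": region,  # must be unique per record
                "Region": region,         # enables latency-based routing
                "AliasTarget": {
                    "HostedZoneId": lb_zone,
                    "DNSName": lb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )
```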

Web Service Large Volume Of Calls On Amazon EC2

I am new to using web services, but we have built a simple web service hosted in IIS on an Amazon EC2 instance, with an Amazon RDS-hosted database server. This all works fine as a prototype for our mobile application.
The next stage is to look at scale, and I need to know how we can have a cluster of instances handling the web service calls, as we expect a high number of calls and need to scale the number of instances handling them.
I am pretty new to this. At the moment we use an IP address in the call to the web service, which implies it's directed at a specific server. How do we build an architecture on Amazon where a request from the mobile device can be handled by one of a number of servers, and where we can scale the capacity to handle more web service calls just by adding more servers?
Thanks for any help
Steve
You'll want to use load balancing, which AWS conveniently offers:
http://aws.amazon.com/elasticloadbalancing/
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance.
In addition to Elastic Load Balancing, you'll want to have an Amazon Machine Image created, so you can launch instances on-demand without having to do manual configuration on each instance you launch. The EC2 documentation describes that process.
There's also Auto Scaling, which lets you set specific metrics to watch and automatically provision more instances. I believe it's throttled, so you don't have to worry about creating way too many, assuming you set reasonable thresholds at which to start and stop launching more instances.
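A hedged boto3 sketch of that combination, assuming the AMI is already baked into a launch template (all names, IDs, and ARNs are placeholders): the group spans two Availability Zones, registers instances with the load balancer's target group, and scales on average CPU.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch instances from the prebaked AMI via a launch template, spread
# across two AZs, registered with the LB's target group.
# All names, IDs, and ARNs are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",
    LaunchTemplate={"LaunchTemplateName": "web-service-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                     "targetgroup/web/0000000000000000"],
)

# Target-tracking policy: add or remove instances to hold ~50% average
# CPU, in the spirit of the reasonable thresholds mentioned above.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-service-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```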
Last (for a simple overview), you'll want to consider being in multiple availability zones so you're resilient to any potential outages. They aren't frequent, but they do happen. There's no guarantee you'll be available if you're only in one AZ.