To Devs,
The architecture is an ALB in front of an ECS cluster running several different images, with autoscaling for the tasks. Should there be one Service (with its own task definition) per image, with the ALB routing to a separate target group for each?
Or can a single Service run multiple task definitions, one per image? I will be using Fargate, but I don't believe that should change the architecture.
Thanks,
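The first layout described in the question (one ECS Service per image, each registered with the shared ALB through its own target group) might be sketched in Terraform roughly like this; all resource names, ports, paths, and variables here are assumptions for illustration, not values from the question:

```terraform
# One target group per service/image; target_type "ip" is what Fargate's
# awsvpc networking mode requires.
resource "aws_lb_target_group" "api" {
  name        = "api-tg"
  port        = 8080
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"
}

# A listener rule on the shared ALB decides which target group
# (and therefore which image) receives a given request.
resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn
  }

  condition {
    path_pattern {
      values = ["/api/*"]
    }
  }
}

# One Service per image, attached to its own target group.
resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  load_balancer {
    target_group_arn = aws_lb_target_group.api.arn
    container_name   = "api"
    container_port   = 8080
  }

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [var.service_sg_id]
  }
}
```

With this pattern each service scales independently, and the ALB listener rules are the single place where routing to the different images is decided.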
My main objective is to utilize a GPU for one of our existing tasks, which is currently deployed through Fargate.
We have existing load balancers for our staging and production environments.
Currently we have two ECS Fargate clusters which deploy Fargate serverless tasks.
We want to be able to deploy one of our existing Fargate tasks with a GPU, but because Fargate doesn't support GPUs, we need to configure it as an EC2 task.
To do this, I believe we need to create EC2 Auto Scaling groups, associated with both the staging and production environments, that allow ECS to deploy EC2 instances with a GPU.
I'm unsure whether or not we need to create a new cluster to house the EC2 task, or if we can put the EC2 task in our existing clusters (can you mix Fargate and EC2 like this?).
We're using Terraform for Infrastructure as code.
Any AWS documentation or relevant Terraform docs would be appreciated.
You can absolutely mix Fargate and EC2 tasks in the same cluster. I'd recommend checking out capacity providers for this: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html
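Assuming the GPU instances come from an EC2 Auto Scaling group as the question describes, wiring that group into the existing cluster as a capacity provider might look roughly like this in Terraform; the resource names are illustrative assumptions:

```terraform
# Wrap the (already defined) GPU Auto Scaling group in a capacity provider.
resource "aws_ecs_capacity_provider" "gpu" {
  name = "gpu-ec2"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.gpu.arn

    # Let ECS scale the ASG based on pending GPU tasks.
    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

# Attach both Fargate and the EC2/GPU provider to the same cluster.
resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name = aws_ecs_cluster.main.name

  capacity_providers = [
    "FARGATE",
    aws_ecs_capacity_provider.gpu.name,
  ]
}
```

The existing Fargate services keep using the FARGATE capacity provider, while the GPU service can target the EC2 provider via a capacity_provider_strategy block, so no new cluster is strictly required.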
I have a cluster with a mixture of services running on EC2 and Fargate, all used internally. I am looking to deploy a new Fargate service which is going to be publicly available over the Internet and will get around 5000 requests per minute.
What factors do I need to consider so that I can choose if a new cluster should be created or if I can reuse the existing one? Would sharing of clusters also lead to security issues?
If your deployment is purely using Fargate, not EC2, then there's really no technical reason to split it into a separate ECS cluster, but there's also no reason to keep it in the same cluster. There's no added cost to create a new Fargate cluster, and logically separating your services into separate ECS clusters can help you monitor them separately in CloudWatch.
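Since the question weighs creating a new cluster against reusing the existing one, it is worth noting how little a separate Fargate-only cluster amounts to in Terraform; the name and the optional Container Insights setting (one way to get the separate CloudWatch monitoring mentioned above) are the only choices of note:

```terraform
resource "aws_ecs_cluster" "public_api" {
  name = "public-api"

  # Optional: per-cluster CloudWatch Container Insights metrics.
  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}
```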
I have deployed a Multi-container Docker on AWS Elastic Beanstalk and it is working as it is supposed to be, I have configured all the load balancers that I need for the communication of each Docker container.
But when it comes to scaling a single container, I do not know how to do it. Elastic Beanstalk provides a single EC2 instance that I am able to scale, but if I scale it, all of my Docker containers will be scaled when, for instance, only one actually needs to be scaled.
I am also aware that when an environment is created on EB, AWS also creates a cluster on ECS, and inside that cluster I have one task assigned for all of my Docker images, but I cannot scale them individually.
Does anyone know how to scale only one Docker image in this scenario?
From what I've read so far:
An EC2 Auto Scaling group is a simple way to scale your server by running more copies of it, with a load balancer in front of the EC2 instance pool.
ECS is more like Kubernetes: it is used when you need to deploy multiple services in Docker containers that work with each other to form an application, and auto scaling is a feature of ECS itself.
Are there any differences I'm missing here? Because ECS seems almost always the superior choice if it works the way I understand it.
You are right: in a very simple sense, an EC2 Auto Scaling group is a way to add/remove (register/deregister) EC2 instances behind a Classic Load Balancer or target groups (ALB/NLB).
ECS has two types of scaling, as does any container orchestration platform:
Cluster Auto Scaling: adds/removes EC2 instances in a cluster when tasks are pending to run
Service Auto Scaling: adds/removes tasks in a service based on demand; it uses the Application Auto Scaling service behind the scenes
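A minimal Terraform sketch of the second kind, Service Auto Scaling via Application Auto Scaling; the cluster/service names, capacity bounds, and CPU target are assumptions:

```terraform
# Register the service's DesiredCount as a scalable target.
resource "aws_appautoscaling_target" "svc" {
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-service"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

# Target-tracking policy: add/remove tasks to hold average CPU near 60%.
resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.svc.service_namespace
  resource_id        = aws_appautoscaling_target.svc.resource_id
  scalable_dimension = aws_appautoscaling_target.svc.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```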
I'm planning to use Docker, and associate 1 EC2 instance with 1 Microservice.
Why do I want to deploy Docker in AWS ECS vs. ElasticBeanstalk?
It is said that AWS ECS has a native support to Docker. Is that it?
It would be great if you could elaborate on the pros and cons of running Docker on AWS ECS vs. Elastic Beanstalk.
Elastic Beanstalk (multi-container) is an abstraction layer on top of ECS (Elastic Container Service) with some bootstrapped features and some limitations:
Automatically interacts with ECS and ELB
Cluster health and metrics are readily available and displayed without any extra effort
Load balancer must terminate HTTPS and all backend connections are HTTP
Easily adjustable autoscaling and instance sizing
Container logs are all collected in one place, but still segmented by instance – so in a cluster environment finding which instance served a request that logged some important data is a challenge.
Can only set hard memory limits in container definitions
All cluster instances must run the same set of containers
As for ECS, it is Amazon's answer to container orchestration. It's a bit rough around the edges and definitely a leap from Elastic Beanstalk, but it has the advantage of significantly more flexibility, including the ability to define a custom scheduler.
All of the limitations imposed by Elastic Beanstalk are lifted.
Refer to these for more info:
Elastic Beanstalk vs. ECS vs. Kubernetes
Amazon EC2 Container Service
Amazon Elasticbeanstalk