I have a Fargate cluster in a dev environment which contains an ECS service supporting a single client.
We need to onboard 50 more clients, so I wanted to know what the best practices around Fargate clusters are. I looked around and did not find any suitable content (including the AWS Fargate FAQ). Can anyone help me with the below:
Should I create one Fargate cluster per client, or one ECS service per client within the same Fargate cluster? Which one is better and why?
Is there any limit on how many Fargate clusters can be created in AWS?
Let's say it depends, but none of the options you can pick will result in you doing anything wrong. A cluster in Fargate doesn't have a very specific meaning, because there are no container instances you would provision and attach to said cluster(s) to provide capacity. In the context of Fargate a cluster really just becomes a sort of "folder" or namespace. The only real advantage of having multiple clusters is that you can scope your users at the cluster level and delegate the ability to deploy into said clusters. If you don't have a specific need like that, for simplicity you are probably fine with just one cluster and 50 separate ECS services in it.
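If you go the single-cluster route, onboarding another client is mostly a matter of registering that client's task definition and adding one more service to the shared cluster. A rough AWS CLI sketch (cluster, service, task definition, and network IDs below are all placeholders):

# add one more client service to the shared dev cluster
aws ecs create-service \
  --cluster shared-dev \
  --service-name client-042 \
  --task-definition client-042-app:1 \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}"

In practice you would template this per client (CloudFormation, Terraform, Copilot, etc.) rather than running ad-hoc CLI calls.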
this is not a question about how to implement HPA on an EKS cluster running Fargate pods... It's about whether it is necessary to implement HPA along with Fargate, because as far as I know, Fargate is a "serverless" solution from AWS: "Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers."
So I'm not sure in which cases I would want to implement HPA on an EKS cluster running Fargate, but the option is there. So I would like to know if someone could give more information.
Thank you in advance
EKS/Fargate allows you to NOT run the "Cluster Autoscaler" (CA) because there are no nodes you need to manage to run your pods. This is what is meant by "no over-provisioning and paying for additional servers."
HOWEVER, you could/would still use HPA, because Fargate does not provide a resource scaling mechanism for your pods. You can configure the size of your Fargate pods via K8s requests, but at that point each one is a regular pod with finite resources. You can use HPA to determine the number of pods (on Fargate) you need to run at any point in time for your deployment.
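As a minimal sketch (the deployment name and thresholds below are made up, and this assumes the metrics-server add-on is installed in the cluster), a Deployment whose pods land on Fargate can be scaled with a perfectly ordinary HPA:

# scale my-api between 2 and 10 Fargate pods based on average CPU utilization
kubectl autoscale deployment my-api --cpu-percent=70 --min=2 --max=10
kubectl get hpa my-api

For the CPU target to work, the pod spec must declare resource requests, which is also what determines the size of each Fargate pod.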
I'm trying to deploy a Docker image to AWS's Elastic Container Service, and then run this as an EC2 instance (via Fargate). However, I believe I need to specify a minimum of 1 running instance in the TaskDefinition.
What I want to achieve, though, is basically to be able to spin up this container on demand, as it'll be used infrequently, and then shut it down after. So the plan was to start/stop this via a Lambda and redirect to the public IP (so within web request timeouts).
I've seen examples of how to do this using EC2, but none actually using Fargate. I don't believe I can define an EC2 task based on a Docker image (if I can, this might be my solution?).
Does anyone know if it's possible to achieve this? If so could you provide some guidance on how I might approach it, and if you've any CloudFormation examples that would be brilliant.
There is almost no difference between defining an ECS task for EC2 and for Fargate. The only real difference is networking: with Fargate you have to use awsvpc networking.
You can use a Lambda, but there is a better way to achieve your use case.
To spin up exactly one task, you have to set:
Minimum instances: 0
Desired count: 1
Max instances: 1 or more
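In ECS terms, those knobs map to the service's desired count plus the min/max capacity of a Service Auto Scaling scalable target. A hedged CLI sketch (cluster and service names are placeholders; max capacity can be 1 or more):

aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 0 \
  --max-capacity 1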
Autoscaling solution
However, a better idea than Lambda is to use Service Auto Scaling. ECS Service Auto Scaling is driven by CloudWatch metrics, so you can push a metric to CloudWatch to start the task, run your computation, and at the end of the computation push a metric that scales the task back down.
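A rough sketch of the metric side of that idea (the namespace and metric name are made up; you would also need a scaling policy or alarm on the ECS service that reacts to this metric):

# signal that work is pending, so the scale-out policy can start the task
aws cloudwatch put-metric-data --namespace "MyApp/Work" --metric-name PendingJobs --value 1

# when the computation is finished, signal that the service can scale back to zero
aws cloudwatch put-metric-data --namespace "MyApp/Work" --metric-name PendingJobs --value 0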
Manual solution
Another solution is to switch the service's desired count to 1 when you want to start the task and back to 0 when you want to stop it.
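That is a single API call, which is easy to do from a Lambda. A minimal sketch with placeholder cluster and service names:

# start the task
aws ecs update-service --cluster my-cluster --service my-service --desired-count 1

# stop it again when you are done
aws ecs update-service --cluster my-cluster --service my-service --desired-count 0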
References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
When I select "Scheduled Job" while initializing resources, how is the process handled internally?
Can I verify the container in ECS? I guess it will use batch jobs for this option.
# copilot init
Note: It's best to run this command in the root of your Git repository.
Welcome to the Copilot CLI! We're going to walk you through some questions
to help you get set up with an application on ECS. An application is a collection of
containerized services that operate together.
Which workload type best represents your architecture? [Use arrows to move, type to filter, ? for more help]
> Load Balanced Web Service
Backend Service
Scheduled Job
What will be the charges if I select backend service or scheduled job?
Copilot uses Fargate containers under the hood; therefore, your charges for a backend service are based on the number of containers you have running and the CPU/memory size of those containers. The minimum container size is 0.25 vCPU and 512 MB of reserved memory.
For other service types, your pricing depends on a few more things.
Load Balanced Web Service
Fargate containers based on size and number (~$9/month for the smallest possible container)
Application Load Balancer (about $20/month depending on traffic)
Backend Service
Fargate containers based on size and number (~$9/month for the smallest possible container)
Scheduled Job
Fargate containers based on size, number, and invocation frequency and duration (i.e. you only pay for the minutes you use)
State Machine transitions: the first 4,000 transitions in a month are free, which corresponds to an invocation frequency of about once every 21 minutes, assuming there are no retry transitions. Transitions beyond that limit are billed at a low rate.
Other notes
All Copilot-deployed resources are grouped with a number of resource tags. You can use those tags to understand billing activity, and even add your own tags via the --resource-tags flag in copilot svc deploy or copilot job deploy.
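For example (the service name, environment name, and tag values below are placeholders), adding your own cost-allocation tags on top of the default Copilot ones might look like this:

copilot svc deploy --name api --env test --resource-tags team=payments,costcenter=1234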
The tags we use to logically group resources are the following:
copilot-application: name of the application this resource belongs to
copilot-environment: name of the environment this resource belongs to
copilot-service: name of the service or job this resource belongs to
The copilot-service tag is used for both jobs and services for legacy reasons.
Copilot refers to these entities as common cloud architectures [1].
I could not find an official document which outlines in detail how these architectures are composed. I guess they are treated as an implementation detail from the creators' perspective, if you look at point one of the AWS Copilot CLI charter [2]:
Users think in terms of architecture, not of infrastructure. Developers creating a new microservice shouldn't have to specify VPCs, load balancer settings, or complex pipeline configuration. They may not know anything about other AWS services. They should be able to specify what "kind" of application it is and how it fits into their overall architecture; the infrastructure should be generated from that.
I have to agree that more sophisticated users always ask themselves what the costs of a specific architecture will look like, and I completely endorse the idea of having a dedicated Copilot command such as copilot estimate costs service-xy which could be executed before creating the service.
There is some high-level documentation on the architecture types Load Balanced Web Service and Backend Service. [3]
It mentions the command copilot svc show in conjunction with the --resources flag [4]:
You can also provide an optional --resources flag to see all AWS resources associated with your service.
I think this gives you the ability to estimate costs right after bringing the services up and running.
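For example (the service name api is a placeholder):

copilot svc show --name api --resources

This lists the AWS resources (load balancer, ECS service, security groups, and so on) associated with the service, which you can then feed into a cost estimate.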
A somewhat more complicated approach, which I frequently apply to understand complex constructs in the AWS CDK, is to look at the source code. For example, you could open the corresponding Go file for the Load Balanced Web Service architecture: [5]. Digging into the code, you'll notice that they make it pretty clear that they are using Fargate containers instead of EC2 instances.
That is also what they tell us in the high-level service docs [4]:
You can select a Load Balanced Web Service and Copilot will provision an application load balancer, security groups, an ECS Service and run your service on Fargate.
More on Fargate: [6]
Btw, there is a really interesting comment in the issue section which outlines why they decided against supporting EC2 in the first place [7]:
What features have the team currently explicitly decided against?
EC2 comes to mind. I think we could have built a really nice experience on top of EC2 instances - but I think there's a difference between building an "abstraction" around ECS / Fargate and building an "illusion" around ECS / EC2. What I mean by that is that if we created the illusion of a fully hands-off EC2 experience, customers might be surprised that they are expected to be in charge of patching and security maintenance of those instances. This isn't something that Copilot, a CLI, can really automate for people (realistically). We're still trying to figure out a good way to expose EC2 to folks - but we definitely see Fargate as the future.
For example, I need to deploy three Docker containers to my ECS cluster, which has three EC2 instances. Is it possible to deploy these three containers to different EC2 machines?
I'm thinking of deploying a Kafka cluster (broker1, broker2, broker3 and zookeeper1, zookeeper2, zookeeper3) across the three EC2 instances respectively.
If you have a BrokerService and a ZooKeeperService, then the tasks will already be balanced across Availability Zones by the spread placement strategy, so this should occur; but it is true that if there wasn't sufficient capacity you might not get the ideal placement.
Fargate Compute
There are a couple of ways to force this. The easiest I can think of is to use Fargate, if it is an option. This will ensure the highest level of high availability, but then you are forced to use Fargate instead of EC2, which might breach your requirements; you may, for example, need block storage, which you don't get with Fargate. It could cost or save you money depending on whether it would save you from deploying new EC2 instances, but you may have Reserved Instances which you want to use, so it depends.
Service Per AZ
Otherwise you could create a service for each AZ. Each service would have a placement constraint using the cluster query language to define which zone it should reside in:
"PlacementConstraints": [{
    "Type": "memberOf",
    "Expression": "attribute:ecs.availability-zone == us-east-1a"
}]
You would use us-east-1{a-c} for each service.
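A hedged CLI sketch of how one of those per-AZ services might be created (cluster, service, and task definition names are placeholders):

aws ecs create-service \
  --cluster kafka-cluster \
  --service-name broker-1a \
  --task-definition kafka-broker:1 \
  --desired-count 1 \
  --placement-constraints "type=memberOf,expression=attribute:ecs.availability-zone == us-east-1a"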
Creating different clusters, each with its instances in a different AZ, would also achieve this.
Look into daemon services.
ECS offers two scheduling strategies for services: replica and daemon.
Replica spreads tasks across Availability Zones and may place multiple tasks (Docker containers) on the same EC2 host, balancing according to the placement strategy.
Daemon places exactly one task per service on each EC2 container host, which fulfills your expectation.
PS: Daemon services don't work with Fargate. Doesn't look like you are using Fargate anyway.
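A minimal sketch of such a daemon service with the AWS CLI (cluster, service, and task definition names are placeholders; no desired count is given because the daemon strategy derives it from the number of container instances):

aws ecs create-service \
  --cluster kafka-cluster \
  --service-name zookeeper \
  --task-definition zookeeper:1 \
  --scheduling-strategy DAEMON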
You can use the distinctInstance placement constraint to place a service's replica tasks on different instances.
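For example (again with placeholder names), a replica service whose three broker tasks must each land on a different container instance:

aws ecs create-service \
  --cluster kafka-cluster \
  --service-name kafka-broker \
  --task-definition kafka-broker:1 \
  --desired-count 3 \
  --placement-constraints "type=distinctInstance"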
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
So, is AWS Fargate better than Amazon EKS managed node groups? When should I choose managed node groups?
We chose to go with AWS Managed Node groups for the following reasons:
DaemonSets are not supported on EKS Fargate, so observability tools like Splunk and Datadog have to run as sidecar containers in each pod instead of as a DaemonSet per node
On EKS Fargate each pod runs in its own VM and container images are not cached on nodes, making pod startup times 1-2 minutes long
All replies are on point. There isn't really a "better" here; it's a trade-off. I am part of the container team at AWS and we recently wrote about the potential advantages of using Fargate over EC2. Faster pod start times, image caching, large pod configurations, and special hardware requirements (e.g. GPUs) are all good reasons for needing to use EC2. We are working hard to make Fargate a better place to be, though, by filling some of those gaps so that you can appreciate only the advantages.
There is no "better" here; your requirements (and skills) make one product better than the other!
The real difference with Fargate is that it's serverless, so you don't need, for example, to care about right-sizing EC2 instances, and you won't pay for idle time.
To go straight to the point: unless you are a K8S expert I would suggest Fargate.