Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
So, is AWS Fargate better than Amazon EKS managed node groups? When should I choose managed node groups?
We chose to go with Amazon EKS managed node groups for the following reasons:
DaemonSets are not supported on EKS Fargate, so observability agents such as Splunk and Datadog have to run as sidecar containers in each pod instead of as one DaemonSet pod per node
On EKS Fargate each pod runs in its own VM and container images are not cached on nodes, which makes pod startup times 1-2 minutes long
All replies are on point. There isn't really a "better" here; it's a trade-off. I am part of the container team at AWS and we recently wrote about the potential advantages of using Fargate over EC2. Faster pod start times, image caching, large pod configurations, and special hardware requirements (e.g. GPUs) are all good reasons for needing to use EC2. We are working hard to make Fargate a better place to be, though, by filling some of those gaps so that you can enjoy only the advantages.
Neither is better than the other. Your requirements (and skills) make one product better than the other!
The real difference with Fargate is that it's serverless, so you don't need to care about right-sizing EC2 instances, for example, and you don't pay for idle time.
To go straight to the point: unless you are a Kubernetes expert, I would suggest Fargate.
Related
I have a Fargate cluster in a dev environment which contains an ECS service supporting a single client.
We need to onboard 50 more clients, so I wanted to know what the best practices around Fargate clusters are. I looked around and did not find any suitable content (including the AWS Fargate FAQ). Can anyone help me with the below:
Should I create one Fargate cluster per client, or create one ECS service per client within the same Fargate cluster? Which one is better and why?
Is there any limit on how many Fargate clusters can be created in AWS?
Let's say it depends, but none of the options you can pick will result in you doing anything wrong. A cluster in Fargate doesn't have a very specific meaning because there are no container instances you would provision and attach to said cluster(s) to provide capacity. In the context of Fargate a cluster really just becomes some sort of "folder" or namespace. The only real advantage of having multiple clusters is that you can scope your users at the cluster level and delegate the ability to deploy into said clusters. If you don't have a specific need like that, for simplicity you are probably good with just one cluster and 50 separate ECS services in it.
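To make that concrete, here is a minimal boto3 sketch (my own illustration, not part of the answer above) of a single shared cluster with one Fargate service per client; the cluster name, service names, task definition, and network values are placeholders you would replace with your own.

```python
# Hypothetical sketch: one shared ECS cluster with one Fargate service per
# client. All names, the task definition, and the network settings are
# placeholders, not values from the original question or answer.
import boto3

ecs = boto3.client("ecs")

# A single "folder"-style cluster shared by all clients.
ecs.create_cluster(clusterName="shared-clients")

clients = ["client-a", "client-b"]  # ... up to 50
for client in clients:
    ecs.create_service(
        cluster="shared-clients",
        serviceName=f"app-{client}",
        taskDefinition="app-task:1",  # assumed pre-registered task definition
        desiredCount=1,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],    # placeholder subnet
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder SG
                "assignPublicIp": "DISABLED",
            }
        },
    )
```

With this layout, onboarding another client is just another create_service call against the same cluster.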
This is not a question about how to implement HPA on an EKS cluster running Fargate pods... It's about whether it is necessary to implement HPA along with Fargate, because as far as I know, Fargate is a "serverless" solution from AWS: "Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers."
So I'm not sure in which cases I would want to implement HPA on an EKS cluster running Fargate, but the option is there. I would like to know if someone could give more information.
Thank you in advance
EKS/Fargate allows you to NOT run the Cluster Autoscaler (CA) because there are no nodes you need to run your pods on. This is what is being referred to with "no over-provisioning and paying for additional servers."
HOWEVER, you could/would still use HPA because Fargate does not provide a resource scaling mechanism for your pods. You can configure the size of your Fargate pods via K8s requests, but at that point it is a regular pod with finite resources. You can use HPA to determine the number of pods (on Fargate) you need to run at any point in time for your deployment.
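To make the HPA side concrete, here is a minimal sketch using the official Kubernetes Python client; it assumes a Deployment named "web" in the default namespace and a working metrics pipeline (e.g. metrics-server), both of which are placeholders of mine rather than anything from the original answer.

```python
# Minimal sketch: create an autoscaling/v1 HPA for a Fargate-backed Deployment.
# Assumptions: a Deployment named "web" exists in "default" and metrics-server
# (or another metrics source) is installed so CPU utilization is available.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Each replica the HPA adds is scheduled onto its own Fargate capacity, so the HPA decides how many pods to run while Fargate takes care of where they run.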
I've found a few posts on a similar topic, but wanted to clarify:
If I am running Kubernetes in AWS (natively, e.g. by deploying with Kops), is there any mechanism that can deploy additional nodes to the AWS node ASG to cater for resource requirements?
For example, if I deploy a 2 worker node cluster (ASG) that has a total of 8 GB of memory, and I create a few Kubernetes deployments onto the cluster where memory requirements become greater than 8 GB, is there a mechanism that will abstractly scale the underlying ASG to provide the required resources without me needing to manually increase the size of the ASG?
Thanks in advance.
Have you tried the Kubernetes Cluster Autoscaler project?
It is AWS-compatible, so it should answer your requirements.
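For intuition about what the Cluster Autoscaler automates, here is a rough boto3 sketch of the manual equivalent: inspect the worker ASG and raise its desired capacity when you need another node. The ASG name is a placeholder (Kops names them per instance group), and in practice the autoscaler triggers this for you based on unschedulable pods.

```python
# Rough sketch of the step the Cluster Autoscaler automates: bump the worker
# ASG's desired capacity so a new node joins the cluster.
# "nodes.my-cluster.example.com" is a placeholder, Kops-style ASG name.
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "nodes.my-cluster.example.com"

def add_worker_node() -> None:
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]

    desired = group["DesiredCapacity"]
    if desired < group["MaxSize"]:
        # The ASG's launch configuration/template is what makes the new EC2
        # instance bootstrap and register itself as a Kubernetes worker.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=ASG_NAME, DesiredCapacity=desired + 1
        )
```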
We can scale Docker containers using the Service Auto Scaling feature in AWS with the help of CloudWatch alarms:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_autoscaling_tutorial.html
Are there any other options available to scale the Docker containers if CPU/memory utilization reaches 80%, without using CloudWatch?
Note: We can achieve this in Kubernetes using horizontal pod autoscaling. I want to achieve the same in AWS without CloudWatch support.
You can use Amazon ECS to scale Docker containers. AWS provides a native orchestration platform (ECS) as well as support for Kubernetes.
If you decide to use ECS native container orchestration, it involves a learning curve where you need to understand ECS-specific terms such as tasks, services, and so on. The same goes for Kubernetes, where you need to understand pods, services, and so on.
When using ECS, it manages the underlying complexities such as placing containers across the multiple EC2 instances that power the container cluster, supporting load balancer integration for container-level load balancing, and supporting fault tolerance by replacing unhealthy containers.
It is also possible to use AWS Fargate, which also works with ECS, where the underlying nodes in the cluster are entirely managed by AWS without even exposing the number of EC2 instances powering the cluster. This means you can scale up and down to a large number of containers without worrying about allocating EC2 instances to the cluster. However, it is more expensive in comparison, which limits its use to workloads that require higher levels of scalability with less predictability, where the pricing is justified.
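If the goal is specifically to scale without CloudWatch alarms, one option (my own hypothetical sketch, not something the answer above prescribes) is a small control loop that reads utilization from whatever metrics source you already run and adjusts the ECS service's desired count through the API; get_average_cpu(), the cluster name, and the service name below are placeholders.

```python
# Hypothetical sketch: scale an ECS service without CloudWatch alarms by
# polling your own metrics source and updating the service's desired count.
import time
import boto3

ecs = boto3.client("ecs")
CLUSTER, SERVICE = "my-cluster", "my-service"  # placeholder names

def get_average_cpu() -> float:
    """Placeholder: return average CPU utilization (0-100) for the service
    from your own metrics pipeline (e.g. Prometheus, Datadog, cAdvisor)."""
    raise NotImplementedError

while True:
    cpu = get_average_cpu()
    service = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
    desired = service["desiredCount"]

    if cpu > 80:
        ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=desired + 1)
    elif cpu < 40 and desired > 1:
        ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=desired - 1)

    time.sleep(60)
```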
Since you can enable autoscaling of containers through DC/OS, when running it on an EC2 cluster, is it still necessary, or is it redundant, to run your cluster in an Auto Scaling group?
There are two (orthogonal) concepts at play here, and unfortunately the term 'auto-scale' is ambiguous:
One is that certain IaaS platforms (incl. AWS) support dynamically adding VMs to a cluster.
The other is the capability of a container orchestrator to scale the number of copies of a service (called instances in the case of Marathon, or replicas in the context of Kubernetes), as long as there are sufficient resources (CPU, RAM, etc.) available in the cluster.
In the simplest case you'd auto-scale the services up to the point where overall cluster utilization is high (>60%? >70%? >80%?) and then use the IaaS-level auto-scaling functionality to add further nodes. It turns out that scaling back in is the trickier part.
So, complementary rather than redundant.
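To illustrate why the two layers are complementary, here is a schematic Python sketch of the decision logic described above: add service replicas first, and grow the underlying node group only when the cluster itself is running hot. Every function and threshold here is a placeholder standing in for the orchestrator (Marathon/Kubernetes) and IaaS (e.g. an AWS ASG) APIs.

```python
# Schematic sketch of the two complementary scaling layers described above.
# Thresholds are arbitrary examples; the returned strings stand in for calls
# to the orchestrator (scale replicas) and the IaaS (grow the ASG).

CLUSTER_UTIL_THRESHOLD = 0.75   # grow the cluster above this utilization
SERVICE_LOAD_THRESHOLD = 0.80   # add service replicas above this load

def reconcile(service_load: float, cluster_utilization: float) -> str:
    # Layer 1: orchestrator-level scaling (more copies of the service).
    if service_load > SERVICE_LOAD_THRESHOLD:
        if cluster_utilization < CLUSTER_UTIL_THRESHOLD:
            return "scale out service replicas"          # room left in the cluster
        # Layer 2: IaaS-level scaling (more VMs/nodes in the ASG).
        return "add a node, then scale out replicas"     # the cluster itself is full
    return "no action"

# Example: busy service on an already-busy cluster -> grow the cluster first.
print(reconcile(service_load=0.9, cluster_utilization=0.85))
```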