Is there any API we can use to scale the number of pods up or down in AWS EKS?
I went through the documentation on horizontal pod autoscaling, but that doesn't fulfil my requirement: I want to build an API to scale the pods, and that approach focuses more on kubectl commands.
I was able to achieve this using the official Java client (client-java) offered by Kubernetes.
The listNamespacedDeployment method can be used to get the deployments in a namespace, and from those the pods they manage.
replaceNamespacedDeployment can be used to replace the specified deployment with an updated replica count, scaling the number of pods up or down.
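The exact signatures differ between client releases, so treat the following as a rough sketch of the read-modify-replace flow rather than the answer's exact code. It uses the official JavaScript/TypeScript client (@kubernetes/client-node, pre-1.0 positional signatures; newer 1.x releases take a single options object), which exposes the same generated method names as the Java client. The deployment name, namespace, and replica count are placeholders.

```typescript
import * as k8s from "@kubernetes/client-node";

// Sketch: scale a deployment by rewriting spec.replicas.
// Names ("my-app", "default", 5) are placeholders; the calls shown use the
// pre-1.0 positional style of @kubernetes/client-node.
async function scaleDeployment(name: string, namespace: string, replicas: number) {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // kubeconfig or in-cluster configuration
  const apps = kc.makeApiClient(k8s.AppsV1Api);

  // Read the current deployment (analogous to readNamespacedDeployment /
  // listNamespacedDeployment in the Java client).
  const { body: deployment } = await apps.readNamespacedDeployment(name, namespace);

  // Change the desired replica count and write the object back
  // (analogous to replaceNamespacedDeployment in the Java client).
  deployment.spec!.replicas = replicas;
  await apps.replaceNamespacedDeployment(name, namespace, deployment);
}

scaleDeployment("my-app", "default", 5).catch(console.error);
```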
I'm trying to create an AWS EKS cluster with Pulumi and it seems two components exist:
@pulumi/eks, providing a Cluster component
@pulumi/aws, providing an eks.Cluster component
@pulumi/eks seems to be higher level, but I cannot find documentation specifying the concrete difference between them, or whether one is preferred depending on the use case.
What's the difference between those two components?
The Cluster in @pulumi/eks is a component resource that is built on top of the eks.Cluster from @pulumi/aws and other resources, to simplify provisioning of EKS clusters. Its goal is to make common scenarios achievable with a handful of lines of code, as opposed to the involved model of raw AWS resources.
You can find some usage examples in:
AWS Crosswalk: AWS Elastic Kubernetes Service
Easily Create and Manage AWS EKS Kubernetes Clusters with Pulumi.
I suggest you start with @pulumi/eks and see if it works well for you.
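For a feel of how little code the higher-level component needs, here is a minimal TypeScript sketch (the cluster name and node-group sizes are placeholder values, not taken from the linked docs):

```typescript
import * as eks from "@pulumi/eks";

// High-level @pulumi/eks component: it provisions the EKS control plane plus
// sensible defaults (VPC/subnet wiring, IAM roles, security groups, a default
// node group) in one resource. The name and sizes below are placeholders.
const cluster = new eks.Cluster("my-cluster", {
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

// Export the kubeconfig so you can reach the cluster with kubectl.
export const kubeconfig = cluster.kubeconfig;
```

With the lower-level aws.eks.Cluster from @pulumi/aws you would instead create and wire up the IAM roles, VPC/subnets, security groups, and node groups yourself.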
I have an AWS EKS cluster with only a Fargate profile, no node groups.
Is it possible to enable HPA in this case? I tried to enable the metrics server as described here, but pod creation fails with the error:
0/4 nodes are available: 4 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
Any insights?
You need to create a Fargate profile for this.
If you are deploying it into another namespace, then you need to create a Fargate profile that covers that namespace.
It looks like your nodes have a taint for which there is no corresponding toleration added to the pod/deployment specs. You can find out more about taints and tolerations here.
As for autoscaling of pods, it is indeed possible, as can be seen in a similar tutorial here.
I have previously used AWS Fargate to deploy a Docker image as 10+ tasks in a cluster.
Now I want to do the same on Azure. I have managed to run the image in a container group, but I want to create replicas of the container group, just as it is possible to run multiple tasks on AWS.
Can anyone suggest how to achieve the same on Azure?
Also, if I want to scale the container groups, how could I do that? (On AWS there were scaling policies and auto-scaling groups for this.)
A container group (a pod, in AKS terms) is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes.
Question: How to create replicas of the container group in Azure?
Answer: Here are two common ways to deploy a multi-container group:
1. Resource Manager template
A Resource Manager template is recommended when you need to deploy additional Azure service resources (for example, an Azure Files share).
2. YAML file
Due to the YAML format's more concise nature, a YAML file is recommended when your deployment includes only container instances.
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups
There are two tools you can use to scale container groups: the most popular, AKS, and Docker Swarm.
I would suggest using AKS, because Docker Swarm is limited to scaling Docker containers only, while AKS can scale all types of containers, for example containerd, rkt, and Docker containers. AKS runs containers inside pods, so you can think of a container group as a pod. By default AKS does not auto-scale or self-heal, but using higher-level Kubernetes objects such as a ReplicaSet we can auto-scale, and Kubernetes supports horizontal pod autoscaling.
Follow this link for autoscaling pods using Kubernetes: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli#autoscale-pods
It sounds like Azure Web App for Containers is what you're looking for. Like Fargate, you just point it at your image repository and tell it which image version to use.
Controlling the number of instances is done either manually, using a slider in the portal (this can also be done using the Azure CLI), or automatically based on conditions on certain metrics.
And like Fargate, Azure Web Apps for Containers also provides various ways of doing side-by-side deployments (called deployment slots) that you can route part of your traffic to.
Although all of this could also be achieved using K8s (as mentioned by others), I think Web Apps for Containers is probably the closer "1-to-1 equivalent" of Amazon's Fargate.
I have used ECS Fargate, and it provides containerization and auto-scaling based on request count, CPU, and memory.
It is working as expected.
I have started to explore AWS EKS, and I didn't see any advantage in using it, as everything it offers is already provided by ECS Fargate.
Could someone help me understand where to use ECS Fargate and where to use AWS EKS?
Any help is appreciated.
Thanks,
Harry
You would use AWS EKS if you want to use Kubernetes.
Since Kubernetes is a standard, you could in theory move your application from AWS EKS to other cloud providers like Azure, Google Cloud, or DigitalOcean easily since they all support Kubernetes.
If you don't care about Kubernetes, then I find that AWS ECS with the AWS Fargate [serverless compute for containers] deployment type is currently the easiest method of running Docker containers on AWS.
Note that Amazon is actively working on adding the Fargate deployment type to the EKS service.
I would check back after the AWS re:Invent conference next month to see how things have changed in this area.
We hear these questions often, and I have tried to capture some of the core principles of this comparison/positioning in this blog post.
What does AWS' Elastic Kubernetes Service (EKS) do exactly, if so much configuration is needed in CloudFormation, which is (yet) another AWS service?
I followed the AWS EKS Getting Started guide in the docs (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf), where it seems heavy CloudFormation knowledge is required to run EKS.
Am I mistaken or something?
So, in addition to learning the Kubernetes .yaml manifest definitions, AWS expects you to learn their CloudFormation .yaml configuration manifests as well in order to run k8s on EKS (and those are all PascalCase as opposed to k8s' camelCase, I might add)?
I understand that EKS does some management of the latest k8s version and the control plane, and is "secure by default", but other than that?
Why wouldn't I just run k8s on AWS using kops then, and deal with the slightly outdated k8s versions?
Or am I supposed to do EKS + CloudFormation + kops at which point GKE looks like a really tempting alternative?
Update:
At this point, after researching EKS in detail and seeing how reliant it is on CloudFormation manifests, I'm really thinking EKS is just a thin wrapper over CloudFormation.
Likely a business response to the alarming popularity of k8s and GKE in general, with no substance to back the service.
Hopefully this helps save the time of anyone evaluating the half-baked service that is EKS.
To run Kubernetes on AWS you have basically 2 options:
using kops: it will create the master nodes + worker nodes under the hood, on plain EC2 machines
EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers; I recommend you follow this workshop)
EKS alone provides only the master nodes of a Kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.
I tried both kops and EKS + workers, and I ended up using EKS, because I found it easier to set up and maintain, and more fault-tolerant.
I felt the same difficulties earlier, and no article could give me, at a glance, the list of things that need to be done. A lot of people just recommend using eksctl, which in my opinion creates a bloated and hard-to-manage kind of CloudFormation.
Basically, EKS is just a wrapper around Kubernetes; there are some points of integration between Kubernetes and AWS that still need to be set up manually.
I've written an article that I hope can help you understand all the pieces that need to be in place.
EKS is the managed control plane for Kubernetes, while CloudFormation is an infrastructure templating service.
Instead of EKS, you can run and manage the control plane (master nodes) yourself on top of EC2 machines if you want to optimize for costs. For EKS you have to pay for the underlying infrastructure (EC2 + networking, etc.) plus the managed service fee (the EKS price).
CloudFormation provides a nice interface to template and automate your infrastructure. You may use Terraform in place of CloudFormation.