HPA on AWS EKS with Fargate

I have AWS EKS cluster with only Fargate profile, no Node Groups.
Is it possible to enable HPA in this case? I tried to enable the metrics server as described here, but pod creation fails with the error:
0/4 nodes are available: 4 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
Any insights?

You need to create a Fargate profile for this.
If you are deploying it into another namespace, then you need to create a Fargate profile for that namespace.
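For example, with eksctl a profile for the namespace the metrics server is deployed into (usually kube-system) could be declared roughly as follows; this is only a sketch of the relevant ClusterConfig fields, and the cluster name and region are placeholders:

# eksctl ClusterConfig sketch: a Fargate profile selecting the kube-system
# namespace so pods deployed there (e.g. metrics-server) can run on Fargate.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder cluster name
  region: us-east-1     # placeholder region
fargateProfiles:
  - name: fp-kube-system
    selectors:
      - namespace: kube-system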

It looks like your nodes have a taint for which there is no corresponding toleration added to your pod/deployment specs. You can find more about taints and tolerations here.
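For illustration only, a toleration matching the taint from the error message would look roughly like the snippet below in a pod spec (the taint's effect is assumed here to be NoSchedule); note that on a Fargate-only cluster the usual fix is a matching Fargate profile rather than tolerating the taint:

spec:
  tolerations:
    - key: eks.amazonaws.com/compute-type   # key/value taken from the error message
      operator: Equal
      value: fargate
      effect: NoSchedule                    # assumed effect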
As for autoscaling of pods: it is indeed possible, as can be seen in a similar tutorial here.
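As a minimal sketch, assuming a Deployment named my-app and a working metrics server, an HPA object looks roughly like this (the autoscaling API version available depends on your cluster version):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:               # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%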

Related

Autoscaling of PODs in AWS EKS

Is there any API that we can use to upscale/downscale the number of pods in AWS EKS?
I tried to go through the documentation related to horizontal pod autoscaling, but that doesn't fulfil my requirement, as I want to create an API to scale the pods and that approach focuses more on kubectl commands.
I was able to achieve this using the client-java-api offered by Kubernetes.
The listNamespacedDeployment method can be used to get the deployments and the pods behind those deployments.
replaceNamespacedDeployment can be used to replace the specified deployment to upscale or downscale the number of pods.

How to debug EKS on Fargate not sending logs to Cloudwatch

I have a cluster on EKS that uses a mix of Fargate and managed EC2 nodes. I want to implement native FluentBit logging for the containers running on Fargate nodes and have tried following these guides: https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html and https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/.
My cluster was originally on an older version that didn't support native logging for Fargate, but as part of this I updated it to version 1.18 / 7.
However, no logs are showing up in CloudWatch.
The pod annotations look correct:
Annotations:  CapacityProvisioned: 0.25vCPU 0.5GB
              Logging: LoggingEnabled
              kubernetes.io/psp: eks.privileged
Status:       Running
I'm not able to find any error logs anywhere. Is there any way to figure out what issue might be going on?
I did not find any way to debug this issue, but did solve it. I'm using Terraform to define infrastructure, and my FluentBit config was indented in the Terraform code. This will silently break logging. Removing the indentation fixed the issue.
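For reference, the Fargate logging guide linked above expects a ConfigMap in the aws-observability namespace shaped roughly like the following (the region and log group name are placeholders); when this content is embedded in Terraform, the Fluent Bit sections must keep exactly this indentation or logging breaks silently:

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true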

Change the horizontal-pod-autoscaler-sync-period in EKS

I have a k8s cluster deployed in AWS's EKS and I want to change horizontal-pod-autoscaler-sync-period from the default 30s value.
How can I change this flag?
Unfortunately, you are not able to do this on GKE, EKS, and other managed clusters.
In order to change or add flags for kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node and the ability to modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS, and similar clusters are managed only by their providers, which do not give you access to the master nodes.
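For comparison, on a self-managed control plane the change would be an extra flag in that static pod manifest, roughly like this (10s is only an example value):

spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        - --horizontal-pod-autoscaler-sync-period=10s   # example value
        # ...the existing flags stay as they are...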
Similar questions:
1) horizontal-autoscaler-in-a-gke-cluster
2) change-the-horizontal-pod-autoscaler-sync-period-with-gke
As a workaround, you can create a cluster using kubeadm init and configure/change it in any way you want.
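With kubeadm, the flag can also be set at cluster creation time through a configuration file passed to kubeadm init --config; a minimal sketch (the API version depends on your kubeadm release):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-sync-period: "10s"   # example value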

What exactly does EKS do if CloudFormation is needed?

What does AWS' Elastic Kubernetes Service (EKS) do exactly, if so much configuration is needed in CloudFormation, which is (yet) another AWS service?
I followed the AWS EKS Getting Started in the docs at (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf) where it seems CloudFormation knowledge is heavily required to run EKS.
Am I mistaken or something?
So in addition to learning the Kubernetes .yaml manifest definitions, to run k8s on EKS, AWS expects you to learn their CloudFormation .yaml configuration manifests as well (which are all PascalCase as opposed to k8s' camelCase, I might add)?
I understand that EKS does some management of the latest k8s version and the control plane, and is "secure by default", but other than that?
Why wouldn't I just run k8s on AWS using kops then, and deal with the slightly outdated k8s versions?
Or am I supposed to do EKS + CloudFormation + kops at which point GKE looks like a really tempting alternative?
Update:
At this point, after looking into EKS in detail and how reliant it is on CloudFormation manifests, I'm really thinking EKS is just a thin wrapper over CloudFormation.
It is likely a business response to the alarming popularity of k8s and GKE in general, with no substance to back the service.
Hopefully this helps save the time of anyone evaluating the half-baked service that is EKS.
To run Kubernetes on AWS you have basically 2 options:
1) using kops, which will create master nodes + worker nodes under the hood, on plain EC2 machines
2) EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers; I recommend you follow this workshop)
EKS alone provides only the master nodes of a kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.
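As a rough sketch of the eksctl route mentioned above (cluster name, region, instance type, and node counts are placeholders), a single config file can describe both the control plane and a worker node group:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster     # placeholder
  region: us-east-1      # placeholder
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2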
I tried both kops and EKS + workers, and I ended up using EKS, because I found it easier to set up and maintain, and more fault-tolerant.
I felt the same difficulties earlier, and no article could give me the requirements at a glance for the things that need to be done. A lot of people just recommend using eksctl, which in my opinion will create a bloated and hard-to-manage kind of CloudFormation.
Basically, EKS is just a wrapper around Kubernetes; there are some points of integration between Kubernetes and AWS that still need to be done manually.
I've written an article that I hope could help you understand all the processes that need to be in place.
EKS is the managed control plane for Kubernetes, while CloudFormation is an infrastructure templating service.
Instead of EKS, you can run and manage the control plane (master nodes) on top of EC2 machines yourself if you want to optimize for costs. For using EKS, you have to pay for the underlying infrastructure (EC2 + networking, ...) and a managed service fee (the EKS price).
CloudFormation provides a nice interface to template and automate your infrastructure. You may use Terraform in place of CloudFormation.

Kubernetes multi-master cluster on AWS

We have created a single-master, three-worker-node cluster on AWS using Terraform, user-data YAML files, and CoreOS AMIs. The cluster works as expected, but we now need to scale the masters up from one to three for redundancy purposes. My question is: other than using etcd clustering and/or the information provided at http://kubernetes.io/docs/admin/high-availability/, do we have any options to deploy a new cluster, or scale up the existing cluster, with multiple master nodes? Let me know if more details are required to answer this question.
The kops project can set up a high-availability master for you when creating a cluster.
Pass the following when you create the cluster (replacing the zones with whatever is relevant to you):
--master-zones=us-east-1b,us-east-1c,us-east-1d
Additionally, it can export Terraform files if you want to continue to use Terraform.