NodeCreationFailure -> Unhealthy nodes in the Kubernetes cluster - amazon-web-services

I have created an Amazon Elastic Kubernetes Service (EKS) cluster in the US East (Ohio) us-east-2 region. After the cluster setup I created a Fargate profile, which completed successfully. Now I am trying to add a node group, but it ends with the error "NodeCreationFailure: Unhealthy nodes in the kubernetes cluster". What's the reason?

This error means your nodes are unable to register with your Amazon EKS cluster.
A quick-and-dirty solution is to attach the AmazonEKS_CNI_Policy to the worker node group role.
If that solves the problem, be aware that the recommended approach is instead to assign that policy to a dedicated IAM role for the VPC CNI plugin:
https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
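For reference, the quick fix above can be applied with the AWS CLI. This is only a sketch: the role name eks-worker-node-role is a placeholder for your actual node group role.

```shell
# Attach the AWS-managed CNI policy to the worker node group role.
# "eks-worker-node-role" is a placeholder -- substitute your own role name.
aws iam attach-role-policy \
  --role-name eks-worker-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

# Verify the policy now shows up on the role
aws iam list-attached-role-policies --role-name eks-worker-node-role
```

After attaching the policy, retry creating the node group; the nodes should then be able to register with the cluster.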

Related

AWS VPC for EKS node group

I am trying to create a node group in my EKS cluster, but I am getting "NodeCreationFailure: Instances failed to join the kubernetes cluster".
After reading a lot of documentation I think the problem is in the VPC configuration. I've tried multiple solutions, like enabling DNS hostnames and adding endpoints to the subnets, but I'm still getting the same error.
Can anyone guide me to solve this issue?
First, make sure that the VPC and Subnet configurations are correct and that the EC2 instances that you are trying to create the Node Group with have the correct security group settings.
Next, ensure that the EC2 instances have the correct IAM role attached and that it has the necessary permissions for the EKS cluster.
Finally, ensure that the IAM user or role that you are using to create the Node Group has the correct permission for the EKS cluster.
If all of the above are configured correctly, you may need to check the cluster events to troubleshoot further. You can do this by running "kubectl get events --sort-by=.metadata.creationTimestamp" against the EKS cluster.
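The checks above can be sketched with the AWS CLI and kubectl. The VPC ID, cluster name and node group name below are placeholders, not values from the question.

```shell
# 1. The VPC must have DNS support and DNS hostnames enabled
#    for instances to resolve the EKS API endpoint and join.
#    (vpc-0123456789abcdef0 is a placeholder.)
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# 2. Check which IAM role is attached to the node group
#    (cluster and node group names are placeholders).
aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name my-nodes \
  --query 'nodegroup.nodeRole'

# 3. Inspect recent cluster events, oldest first.
kubectl get events --sort-by=.metadata.creationTimestamp
```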

Is it possible to add an ELB (cloud provider) to an existing Kubernetes cluster running on RHEL 8 EC2?

I have a cluster running on AWS EC2 (not a managed EKS cluster), and I'm trying to add a load balancer to the cluster without restarting it or initializing a new node. Is that possible? I've already set the permissions and tags described in this post: https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
But the thing is that we must add the flag --cloud-provider=aws to the kubelet before adding the node to the cluster.
Are there any other options or another way to do it?
You can try using the AWS Load Balancer Controller; it works with both managed and self-managed K8s clusters: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
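A hedged sketch of installing the controller with its Helm chart. The cluster name my-cluster is a placeholder, and the aws-load-balancer-controller service account (with its IAM role for service accounts) is assumed to have been created beforehand.

```shell
# Add the EKS charts repo and install the AWS Load Balancer Controller.
# --set values are placeholders; the service account with IAM permissions
# is assumed to already exist in kube-system.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```

The controller then reconciles Service and Ingress resources into AWS load balancers itself, so you do not need to restart the kubelets with --cloud-provider=aws.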

EKS nodes fail to communicate with AWS Classic Load Balancer

I "inherited" an unmanaged EKS cluster with two nodegroups created through eksctl with Kubernetes version 1.15. I updated the cluster to 1.17 and managed to create a new nodegroup with eksctl, and the nodes successfully join the cluster (I had to update aws-cni from 1.5.x to 1.6.x to do so). However, the Classic Load Balancer of the cluster marks my two new nodes as OutOfService.
I noticed the load balancer security group was missing from my node security groups, so I added it to my two new nodes, but nothing changed; the nodes were still unreachable from outside the EKS cluster. I could get my nodes to change their state to InService by applying the security group of my two former nodes, but manually inserting the very same inbound/outbound rules seems to have no effect on traffic. Only the former nodegroup's security group seems to work in this case. I've reached a dead end and am asking here because I can't find any additional information in the AWS documentation. Does anyone know what's wrong?

Is an EKS node group really necessary?

I have a few questions on EKS node groups.
I don't understand the concept of a node group and why it is required. Can't we create an EC2 instance and run kubeadm join to join the EC2 node to the EKS cluster? What advantage does a node group hold?
Do node groups (be they managed or self-managed) have to exist in the same VPC as the EKS cluster? Is it not possible to create a node group in another VPC? If so, how?
Managed node groups are a way to let AWS manage part of the lifecycle of the Kubernetes nodes. You are of course still allowed to configure self-managed nodes if you need or want to. To be fair, you could also spin up a few EC2 instances and configure your own K8s control plane. It boils down to how much you want managed vs. how much you want to do yourself. The other extreme on this spectrum would be to use Fargate, which is a fully managed experience (there are no nodes to scale or configure, no AMIs, etc.).
The EKS cluster (control plane) lives in a separate AWS-managed account/VPC. See here. When you deploy a cluster, EKS will ask you which subnets (and which VPC) you want the EKS cluster to manifest itself in (through ENIs that get plugged into your VPC/subnets). That VPC is where your self-managed workers, your managed node groups and your Fargate profiles need to be plugged into. You can't use another VPC to add capacity to the cluster.
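The point that all capacity must attach to the cluster's VPC shows up directly in an eksctl config. A minimal sketch, with placeholder names and IDs, assuming the cluster and node group are created together:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster            # placeholder
  region: us-east-2
vpc:
  id: vpc-0123456789abcdef0   # the cluster VPC -- all node groups attach here
  subnets:
    private:
      us-east-2a: { id: subnet-0aaa11112222bbbb3 }  # placeholder
      us-east-2b: { id: subnet-0ccc44445555dddd6 }  # placeholder
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
    privateNetworking: true   # nodes land in the private subnets above
```

There is no field for putting a node group in a different VPC; workers can only be placed in subnets of the cluster VPC.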

Setup an ETCD cluster behind an ELB using EC2 instances

I am new to Kubernetes and have a requirement in which I have to set up an etcd cluster behind an ELB. Our K8s cluster will be hosted using Rancher. Can anyone please share the steps or a link for the same?
There are 2 options:
1: If you will import the cluster into Rancher, then #suren is right. You basically need to create a Kubernetes cluster that conforms to all of your needs and then import it into Rancher.
2: If you plan to launch clusters using Rancher, then you need to create a NodeTemplate that uses AWS EC2 for the etcd nodes. Then you can launch the cluster by ticking the master, etcd and worker roles separately and referencing your etcd NodeTemplate in the etcd node group.
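If you go the option-1 route and build the cluster yourself (for example with RKE) before importing it into Rancher, the dedicated etcd role is expressed in cluster.yml roughly like this; the addresses and SSH user are placeholders:

```yaml
# RKE cluster.yml sketch: separate etcd nodes that an ELB could target
nodes:
  - address: 10.0.1.10        # placeholder etcd node
    user: ec2-user
    role: [etcd]
  - address: 10.0.1.11        # placeholder etcd node
    user: ec2-user
    role: [etcd]
  - address: 10.0.1.12        # placeholder etcd node
    user: ec2-user
    role: [etcd]
  - address: 10.0.2.10        # placeholder control plane node
    user: ec2-user
    role: [controlplane]
  - address: 10.0.3.10        # placeholder worker node
    user: ec2-user
    role: [worker]
```

The ELB would then be configured separately to forward to the etcd instances; how the control plane reaches etcd (directly or via the ELB) depends on your setup.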