I "inherited" an unmanaged EKS cluster with two nodegroups created through eksctl with Kubernetes version 1.15. I updated the cluster to 1.17 and managed to create a new nodegroup with eksctl and nodes successfully join the cluster (i had to update aws-cni to 1.6.x from 1.5.x to do so). However the the Classic Load Balancer of the cluster marks my two new nodes as OutOfService.
I noticed the Load Balancer Security Group was missing from my node Security Groups thus i added it to my two new nodes but nothing changed, still the nodes were unreachable from outside the EKS cluster. I could get my nodes change their state to InService by applying the Security Group of my two former nodes but manually inserting the very same inbound/outbound rules seems to sort no effect on traffic. Only the former nodegroup security group seems to work in this case. I reached a dead end and asking here because i can't find any additional information on AWS documentation. Anyone knows what's wrong?
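For what it's worth, this is roughly how I've been inspecting the security groups attached to the nodes and the load balancer's view of them (the instance ID and load balancer name below are placeholders for my actual resources):

    # List the security groups attached to one of the new nodes
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[].Instances[].SecurityGroups'

    # Check which instances the Classic Load Balancer considers healthy
    aws elb describe-instance-health --load-balancer-name my-cluster-clb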
I am trying to create a Node group in my EKS cluster, but I am getting "NodeCreationFailure: Instances failed to join the kubernetes cluster".
After reading a lot of documentation I think the problem is in the VPC configuration. I've tried multiple solutions, like enabling DNS hostnames and adding endpoints to the subnets, but I still get the same error.
Can anyone guide me on how to solve this issue?
First, make sure that the VPC and subnet configurations are correct and that the EC2 instances you are trying to create the Node Group with have the correct security group settings.
Next, ensure that the EC2 instances have the correct IAM role attached and that it has the necessary permissions for the EKS cluster.
Finally, ensure that the IAM user or role that you are using to create the Node Group has the correct permissions for the EKS cluster.
If all of the above are configured correctly, you may need to check the EKS cluster events to troubleshoot further. You can do this by running the command "kubectl get events --sort-by=.metadata.creationTimestamp" against the EKS cluster.
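For example, something along these lines will show the health status EKS reports for the node group as well as recent cluster events (the cluster and node group names are placeholders):

    # Check the health issues EKS reports for the node group
    aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name my-nodegroup \
        --query 'nodegroup.health'

    # Look at recent cluster events for clues about failed node registrations
    kubectl get events --sort-by=.metadata.creationTimestamp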
So, I have an interesting issue...
Private subnets for an EKS cluster have run out of IPs. There aren't enough addresses in the subnets to allow the cluster to fully 'keep warm' all of the IPs it can use, so the subnets read '0 Free' in the AWS Console, but clearly some are free in the cluster because the issue is intermittent (though becoming more problematic).
The current node groups are unmanaged, and the cluster can't be rebuilt.
I created new subnets and created a managed node group in them. But when I check the nodes, they can't resolve DNS. They can ping IPs but not resolve DNS.
The obvious answer is a missing rule in the SG, but the unmanaged nodes have a custom SG with a bunch of rules for cross-VPC comms etc. I know we can't add a custom SG to a managed nodegroup, so I thought perhaps I'd replicate the custom SG onto the SG the managed nodegroup is using (the EKS-managed SG which is used for the control plane and managed node groups).
This did not resolve the problem.
My questions are mainly: should this work? If the SG isn't stopping DNS from working, then what is? (I've spun up a busybox pod on a node in the node group and checked nslookup; the correct DNS server is there, but no resolution.)
I read that AWS is working on support for adding a custom SG to a managed nodegroup, but it's not yet possible.
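For completeness, this is roughly how I've been testing DNS from the new nodes and inspecting the security group rules (the image, pod name and SG ID are placeholders; the test pod may need a nodeSelector to land on the new node group):

    # Run a throwaway pod and try to resolve an in-cluster name
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default

    # Inspect the inbound rules of the SG the new nodes use; DNS needs TCP/UDP 53
    # open between this SG and the SG of the nodes running the CoreDNS pods
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
        --query 'SecurityGroups[].IpPermissions'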
I have a few questions on EKS node groups.
I don't understand the concept of a node group and why it is required. Can't we create an EC2 instance and run kubeadm join to join that node to the EKS cluster? What advantage does a node group hold?
Do node groups (be they managed or self-managed) have to exist in the same VPC as the EKS cluster? Is it not possible to create a node group in another VPC? If so, how?
Managed node groups are a way to let AWS manage part of the lifecycle of the Kubernetes nodes. You are of course still allowed to configure self-managed nodes if you need/want to. To be fair, you can also spin up a few EC2 instances and configure your own K8s control plane. It boils down to how much you want managed vs how much you want to do yourself. The other extreme on this spectrum would be to use Fargate, which is a fully managed experience (there are no nodes to scale or configure, no AMIs, etc.).
The EKS cluster (control plane) lives in a separate AWS-managed account/VPC. See here. When you deploy a cluster, EKS will ask you which subnets (and which VPC) you want the cluster to manifest itself in (through ENIs that get plugged into your VPC/subnets). That VPC is where your self-managed workers, your managed node groups and your Fargate profiles need to be plugged in. You can't use another VPC to add capacity to the cluster.
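For example, when adding a managed node group with eksctl, the node group is created into subnets of the VPC the cluster was originally set up with; something along these lines (all names and sizes are placeholders):

    # Add a managed node group to an existing cluster; its subnets must belong
    # to the VPC the cluster was created in
    eksctl create nodegroup \
      --cluster my-cluster \
      --name managed-ng-1 \
      --managed \
      --nodes 2 \
      --node-type m5.large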
I have created an Amazon Elastic Kubernetes Service cluster in the US East (Ohio) us-east-2 region. After the cluster setup I created a Fargate profile, which completed successfully. Now I am trying to add a Node group, but it ends with the error "NodeCreationFailure: Unhealthy nodes in the kubernetes cluster". What's the reason?
This error means your nodes are unable to register with your Amazon EKS cluster.
A quick and dirty solution consists in adding the AmazonEKS_CNI_Policy to the worker nodegroup IAM role.
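For example, something like this (the role name is a placeholder for the actual IAM role attached to your worker nodes):

    # Quick fix: attach the CNI policy directly to the node group's IAM role
    aws iam attach-role-policy \
      --role-name my-eks-nodegroup-role \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy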
If that solves the problem, please be aware that the recommended approach is instead the one described here:
https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
I'm looking at a guide to install Kubernetes on AWS EC2 instances using kops (Link). I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control plane and etcd nodes. Is it possible to set an IP in some configuration file so that my cluster is created with a specific IP on my control plane node and my etcd node? If a control plane node restarts and doesn't have an Elastic IP, its IP changes and a large number of issues start. I want to prevent this problem, or at least change my control plane node's IP after deployment.
I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control plane and etcd nodes
The correct way, and the way almost every provisioning tool that I know of does this, is to use either an Elastic Load Balancer (ELB) or the newer Network Load Balancer (NLB) to put an abstraction layer in front of the master nodes, for exactly that reason. It does one step better than just an EIP and assigns one EIP per Availability Zone (AZ), along with a stable DNS name. It's my recollection that the masters can also keep themselves in sync with the ELB (unknown about the NLB, but certainly conceptually possible), so if new ones come online they register with the ELB automatically.
Then, a similar answer applies to the etcd nodes, and for the same reason, although as far as I know etcd has no such ability to keep the nodes in sync with the fronting ELB/NLB, so that would need to be done by the script that provisions any new etcd nodes.
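For example (hedging here, since flags vary between kops versions), kops can put a load balancer in front of the API servers at cluster creation time, roughly like this (the cluster name, state store and zones are placeholders):

    # Create an HA cluster whose API servers sit behind a public load balancer
    kops create cluster \
      --name my-cluster.example.com \
      --state s3://my-kops-state-store \
      --zones us-east-1a,us-east-1b,us-east-1c \
      --master-count 3 \
      --api-loadbalancer-type public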
At the time of writing this, there isn't any out-of-the-box solution from kops.
But you can try k8s-eip for this if your use case isn't critical. I wrote this tool for my personal cluster to save cost.