user data for managed node group - amazon-web-services

How can I edit the user data in a managed node group that is part of my EKS cluster?
I tried to create a new version of the Launch Template that EKS created for the cluster, but I get an error under "Health issues" of the cluster: Ec2LaunchTemplateVersionMismatch.
I want the nodes in the managed node group to mount the EFS automatically, without doing it manually on each instance, because of the autoscaling.
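A rough Terraform sketch of one approach: put the user data in your own launch template and point the node group at it, rather than editing the EKS-generated copy. The EFS file system ID (fs-12345678), mount path (/mnt/efs) and resource names below are placeholders; user data for managed node groups has to be in MIME multi-part format.

resource "aws_launch_template" "eks_nodes" {
  name = "eks-nodes-efs"

  # Managed node groups expect launch template user data in MIME multi-part format.
  user_data = base64encode(<<-EOT
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="==BOUNDARY=="

    --==BOUNDARY==
    Content-Type: text/x-shellscript; charset="us-ascii"

    #!/bin/bash
    # Mount the EFS file system on every node at boot (placeholder fs ID and path).
    yum install -y amazon-efs-utils
    mkdir -p /mnt/efs
    mount -t efs fs-12345678:/ /mnt/efs

    --==BOUNDARY==--
  EOT
  )
}

resource "aws_eks_node_group" "with_efs" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...
  launch_template {
    id      = aws_launch_template.eks_nodes.id
    version = aws_launch_template.eks_nodes.latest_version
  }
}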

Related

Why does the EKS cluster create a clone of the launch template?

Our EKS cluster is managed by Terraform and uses an EC2 Launch Template defined as a Terraform resource. Our aws_eks_node_group includes a launch_template block, as shown below.
resource "aws_eks_node_group" eks_node_group" {
.........................
..........................
launch_template {
id = aws_launch_template.eksnode_template.id
version = aws_launch_template.eksnode_template.default_version
name = aws_launch_template.eksnode_template.name
}
}
However, after a while EKS deployed a new Launch Template of its own and attached it to the relevant Auto Scaling group.
Why did this happen in the first place, and how can it be avoided in the future?
How can we link the customer-managed Launch Template to the EKS Auto Scaling group via Terraform again? I tried changing the name or version of the launch template, but it still uses the one created by EKS (self-managed).
The Amazon EKS API creates this launch template either by copying one you provide or by creating one automatically with default values in your account.
We don't recommend that you modify auto-generated launch templates.
(source)

Unexpected behaviour of AWS ASG, AWS Launch Template within AWS EKS

I've created an AWS EKS cluster with managed node group(s), and I've also created an AWS ASG (Auto Scaling Group) and an AWS Launch Template. But when I attach the AWS Launch Template to the managed EKS node groups, it creates a duplicate of the existing AWS Launch Template.
--
Those Launch Templates:
DEV/MANAGED/EKS-WORKERS-SM/LATEST/TEMPLATE/EU-CENTRAL-1X
and
eks-XXXXXXXX-XXXXXXXX-XXXXXXXX
are identical, and I don't understand why EKS is creating duplicates.
--
The same thing is happening with the AWS ASG (Auto Scaling Group). Is there any way to fix this problem?
Technologies used:
Terraform
Launch Template resource
Auto Scaling Group resource
EKS resource
EKS node group resource
It seems you're running both resources. If you want to manage the nodes yourself, you don't need the EKS node group resource; instead, use the Launch Template resource together with the Auto Scaling Group resource and proper tagging, as in the sketch below.
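A minimal sketch of that self-managed setup in Terraform, assuming the launch template resource already exists and using placeholder names (my-cluster, eks-workers, var.private_subnet_ids):

resource "aws_autoscaling_group" "eks_workers" {
  name                = "eks-workers"
  min_size            = 1
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.eks_workers.id
    version = aws_launch_template.eks_workers.latest_version
  }

  # This tag is what marks the nodes as belonging to the cluster.
  tag {
    key                 = "kubernetes.io/cluster/my-cluster"
    value               = "owned"
    propagate_at_launch = true
  }
}

With self-managed nodes the launch template is used as-is by the ASG, so EKS does not create its own copy of it.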

The results of `aws eks list-nodegroups` and `eksctl get nodegroups` are inconsistent

eksctl get nodegroups --cluster=cluster-name --profile=dev
aws eks list-nodegroups --cluster-name=cluster-name --profile=dev
The first result is correct.
The second result is empty, as follows:
{
    "nodegroups": []
}
I used these two commands to get the nodegroups of the cluster, but found that the results were not consistent.
Both commands used the same configuration file, ~/.aws/config.
I checked the cluster name with both commands: each detects the cluster correctly, but the second cannot detect the nodegroup.
Thanks in advance.
According to eksctl documentation:
Listing nodegroups
To list the details about a nodegroup or all of the nodegroups, use:
eksctl get nodegroup --cluster=<clusterName> [--name=<nodegroupName>]
Nodegroup immutability
By design, nodegroups are immutable. This means that if you need to
change something (other than scaling) like the AMI or the instance
type of a nodegroup, you would need to create a new nodegroup with the
desired changes, move the load and delete the old one. Check Deleting and draining.
And for list-nodegroups, from the AWS documentation:
Lists the Amazon EKS managed node groups associated with the specified cluster in your AWS account in the specified Region. Self-managed node groups are not listed.
As you can see, the commands differ: self-managed node groups are not listed by the second command.

Nodes are not joining in AWS EKS

I have launched a cluster using AWS EKS successfully and applied aws-auth, but the nodes are not joining the cluster. I checked the log messages of a node and found this:
Dec 4 08:09:02 ip-10-0-8-187 kubelet: E1204 08:09:02.760634 3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unauthorized
Dec 4 08:09:03 ip-10-0-8-187 kubelet: W1204 08:09:03.296102 3542 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.296217 3542 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.459361 3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unauthorized
I am not sure what is wrong. I have attached EKS full access to the instance node roles.
If you are using Terraform, or modifying tags and name variables, make sure the cluster name matches in the tags!
A node must be "owned" by a specific cluster, and nodes will only join the cluster they are supposed to. I overlooked this, and there isn't a lot of documentation to go on when using Terraform. Make sure the variables match. This is the node tag naming the parent cluster to join:
tag {
  key                 = "kubernetes.io/cluster/${var.eks_cluster_name}-${terraform.workspace}"
  value               = "owned"
  propagate_at_launch = true
}
If you have followed the AWS getting-started guide, there is an easy way to connect all the worker nodes and join them to the EKS cluster.
Link: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
I think you forgot to edit the aws-auth ConfigMap with the instance role profile ARN; see the sketch below.
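A rough sketch of managing that ConfigMap from Terraform with the Kubernetes provider; the role name (eks_node_role) is a placeholder for whatever instance role the worker nodes actually use:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # mapRoles must list the instance role ARN of the worker nodes,
    # otherwise the kubelet gets "Unauthorized" and the nodes never join.
    mapRoles = yamlencode([
      {
        rolearn  = aws_iam_role.eks_node_role.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}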

cfn Cluster custom AMI creation

I am trying to use a custom (private) AMI with the cfncluster framework.
I have created the AMI, but every time I set the AMI ID in the config file (.cfncluster/config), the stack fails with "Status: cfncluster-Belle - ROLLBACK_IN_PROGRESS".
How can I use my own private AMI for the worker nodes in the cfn cluster?