AWS EKS nodes creation failure

I have a cluster in AWS created by following these instructions.
Then I tried to add nodes to this cluster according to this documentation.
It seems that the nodes fail to be created, with the vpc-cni and coredns add-ons reporting the health issue type insufficientNumberOfReplicas: "The add-on is unhealthy because it doesn't have the desired number of replicas."
The status of the pods (kubectl get pods -n kube-system):
NAME READY STATUS RESTARTS AGE
aws-node-9cwkd 0/1 CrashLoopBackOff 13 42m
aws-node-h4qjt 0/1 CrashLoopBackOff 13 42m
aws-node-jrn5x 0/1 CrashLoopBackOff 13 43m
coredns-745979c988-25fcc 0/1 Pending 0 120m
coredns-745979c988-qvh7h 0/1 Pending 0 120m
kube-proxy-2bmlq 1/1 Running 0 42m
kube-proxy-hjcrw 1/1 Running 0 43m
kube-proxy-j9r9n 1/1 Running 0 42m
The logs of the aws-node-9cwkd pod:
{"level":"info","ts":"2021-11-30T14:11:14.156Z","caller":"entrypoint.sh","msg":"Validating env variables ..."}
{"level":"info","ts":"2021-11-30T14:11:14.157Z","caller":"entrypoint.sh","msg":"Install CNI binaries.."}
{"level":"info","ts":"2021-11-30T14:11:14.177Z","caller":"entrypoint.sh","msg":"Starting IPAM daemon in the background ... "}
{"level":"info","ts":"2021-11-30T14:11:14.179Z","caller":"entrypoint.sh","msg":"Checking for IPAM connectivity ... "}
{"level":"info","ts":"2021-11-30T14:11:16.189Z","caller":"entrypoint.sh","msg":"Retrying waiting for IPAM-D"}
{"level":"info","ts":"2021-11-30T14:11:18.198Z","caller":"entrypoint.sh","msg":"Retrying waiting for IPAM-D"}
{"level":"info","ts":"2021-11-30T14:11:20.205Z","caller":"entrypoint.sh","msg":"Retrying waiting for IPAM-D"}
{"level":"info","ts":"2021-11-30T14:11:22.215Z","caller":"entrypoint.sh","msg":"Retrying waiting for IPAM-D"}
{"level":"info","ts":"2021-11-30T14:11:24.226Z","caller":"entrypoint.sh","msg":"Retrying waiting for IPAM-D"}
Running kubectl describe pod aws-node-h4qjt -n kube-system shows the following error:
Readiness probe failed: {"level":"info","ts":"2021-11-30T14:11:07.145Z","caller":"/usr/local/go/src/runtime/proc.go:225","msg":"timeout: failed to connect service \":50051\" within 5s"}
Any help in getting the nodes created successfully would be highly appreciated.

It's most likely a problem with the node's IAM role. You can get more information by exec-ing into the pod and viewing ipamd.log:
kubectl exec -it aws-node-9cwkd -n kube-system -- /bin/bash
cat /host/var/log/aws-routed-eni/ipamd.log
Here's an example of the error I saw when I hit the same issue:
{"level":"error","ts":"2021-12-02T13:27:51.464Z","caller":"ipamd/ipamd.go:444","msg":"Failed to call ec2:DescribeNetworkInterfaces for [eni-0c01bd25ae6999ed5]: UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id: 0438b84b-8052-4f31-9d63-c2ff7512f131"}
In my case I had to attach the AmazonEKS_CNI_Policy managed policy to the node IAM role.
https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
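If you prefer the CLI over the console, attaching that managed policy looks roughly like this; eksNodeRole is a placeholder for whatever IAM role your node group actually uses:
# Attach the CNI managed policy to the node IAM role (the role name is a placeholder)
aws iam attach-role-policy \
  --role-name eksNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy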

I used the eksctl command line tool with the --nodes flag and everything was created successfully, as expected.
eksctl create cluster --name cluster-name \
--nodes 3 \
--node-type=t3.large \
--region=eu-west-1
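If the cluster already exists, as in the question, a similar sketch for just adding worker nodes would be eksctl create nodegroup with the same placeholder values:
# Add a node group to an existing cluster (values mirror the example above)
eksctl create nodegroup --cluster=cluster-name \
  --nodes=3 \
  --node-type=t3.large \
  --region=eu-west-1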

Related

ErrImagePull on EKS cluster core services

I am trying to update my cluster and I am getting an error pulling the images. This is happening on coredns, aws-node and other core services. As far as I can tell, I am a full admin on this particular cluster. When I tried to do a docker pull to see if the issue was with something else, I got "no basic auth credentials". I have done some research and can't seem to find any references to this issue.
kube-system coredns-bd9bb9b78-wwmdd 0/1 ErrImagePull 0 52m
kube-system coredns-bd9bb9b78-wwmdd 0/1 ImagePullBackOff 0 52m
kube-system aws-node-zgd2w 0/1 Init:ErrImagePull 0 62m
kube-system aws-node-zgd2w 0/1 Init:ImagePullBackOff 0 63m
kube-system coredns-bd9bb9b78-wwmdd 0/1 ErrImagePull 0 57m
kube-system coredns-bd9bb9b78-wwmdd 0/1 ImagePullBackOff 0 57m
user@User-MacBook-Pro ~ % docker pull 643272868765.dkr.ecr.us-east-1.amazonaws.com/eks/coredns:v1.8.4
Error response from daemon: Head "https://643272868765.dkr.ecr.us-east-1.amazonaws.com/v2/eks/coredns/manifests/v1.8.4": no basic auth credentials
It turns out that it was a permissions issue. I used an ID that had permissions to download the image, and it downloaded successfully.
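For reference, authenticating the local Docker client against that registry before pulling would look something like the following, assuming the AWS CLI credentials in use have ECR read permissions (the account ID and image tag are the ones from the output above):
# Log the local Docker client in to the ECR registry, then retry the pull
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 643272868765.dkr.ecr.us-east-1.amazonaws.com
docker pull 643272868765.dkr.ecr.us-east-1.amazonaws.com/eks/coredns:v1.8.4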

AWS Load Balancer Failed to Deploy

I'm trying to create an AWS ALB Ingress through EKS, following the steps in the document https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I was successful up to step 7, creating the controller:
[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl apply -f v2_0_0_full.yaml
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws created
mutatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/aws-load-balancer-controller configured
role.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-role created
clusterrole.rbac.authorization.k8s.io/aws-load-balancer-controller-role created
rolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-rolebinding created
service/aws-load-balancer-webhook-service created
deployment.apps/aws-load-balancer-controller created
certificate.cert-manager.io/aws-load-balancer-serving-cert created
issuer.cert-manager.io/aws-load-balancer-selfsigned-issuer created
validatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook created
However, the controller does NOT get to "Ready" status:
[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 0/1 1 0 29m
I'm also able to list the pod associated with the controller, which also shows NOT READY:
[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-XXXXXXXXXX-p4l7f 0/1 Pending 0 30m
I also can't seem to get its logs in order to try and debug the issue:
[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl -n kube-system logs aws-load-balancer-controller-XXXXXXXXXX-p4l7f
[ec2-user@ip-X-X-X-X eks-cluster]$
Furthermore, the /var/log directory also does not have any related logs.
Please help me understand why it is not coming to READY state. Also let me know how to enable logging to debug these kind of issues.
I found the answer here. A Fargate deployment requires the region and the VPC ID.
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=<cluster-name> \
--set serviceAccount.create=false \
--set region=<region-code> \
--set vpcId=<vpc-xxxxxxxx> \
--set serviceAccount.name=aws-load-balancer-controller \
-n kube-system
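If you need to look up the VPC ID for the vpcId flag, something along these lines should work (the cluster name and region placeholders match the helm command above):
# Look up the cluster's VPC ID for the --set vpcId=... flag
aws eks describe-cluster --name <cluster-name> --region <region-code> \
  --query "cluster.resourcesVpcConfig.vpcId" --output text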
From the current LB controller manifest I found out that the LB controller Pod specification doesn't have a readiness probe, only a liveness probe. That means the Pod becomes Ready as soon as it passes the liveness probe:
livenessProbe:
  failureThreshold: 2
  httpGet:
    path: /healthz
    port: 61779
    scheme: HTTP
  initialDelaySeconds: 30
  timeoutSeconds: 10
But as we can see in the following output, the LB controller's Pod is in the Pending state:
[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-XXXXXXXXXX-p4l7f 0/1 Pending 0 30m
If a Pod stays in the Pending state, it means that kube-scheduler is unable to bind the Pod to a cluster node for whatever reason.
kube-scheduler is the part of the Kubernetes control plane that is responsible for assigning Pods to Nodes.
No Pod logs exist at this phase, because the Pod's containers have not started yet.
The most convenient way to check the reason is using the kubectl describe command:
kubectl describe pod/podname -n namespacename
At the bottom of the output there is a list of events related to the Pod's life cycle. Here is an example for a generic Ubuntu Pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37s default-scheduler Successfully assigned default/ubuntu to k8s-w1
Normal Pulling 25s (x2 over 35s) kubelet, k8s-w1 Pulling image "ubuntu"
Normal Pulled 23s (x2 over 30s) kubelet, k8s-w1 Successfully pulled image "ubuntu"
Normal Created 23s (x2 over 30s) kubelet, k8s-w1 Created container ubuntu
Normal Started 23s (x2 over 29s) kubelet, k8s-w1 Started container ubuntu
The kubectl get events command can also show the problem. For example:
LAST SEEN TYPE REASON OBJECT MESSAGE
21s Normal Scheduled pod/ubuntu Successfully assigned default/ubuntu to k8s-w1
9s Normal Pulling pod/ubuntu Pulling image "ubuntu"
7s Normal Pulled pod/ubuntu Successfully pulled image "ubuntu"
7s Normal Created pod/ubuntu Created container ubuntu
7s Normal Started pod/ubuntu Started container ubuntu
or an event stating the reason why the scheduler can't assign the Pod to a Node:
"No nodes are available that match all of the predicates: Insufficient cpu (2), Insufficient memory (2)".
In some cases, errors can be found in the kube-scheduler Pod logs in the kube-system namespace. The logs can be listed using the following command:
kubectl logs $(kubectl get pods -l component=kube-scheduler,tier=control-plane -n kube-system -o name) -n kube-system
The most common reasons why a Pod isn't scheduled are the following (see the sketch after this list for a quick way to check the first two):
The Nodes lack the CPU or memory resources requested by the Pod.
The Pod cannot tolerate Taints on the Nodes.
The Pod has an Affinity/Anti-Affinity configuration that prevents it from being scheduled.
Storage or other specific resource requirements (like a GPU) in the Pod spec cannot be satisfied.
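As mentioned above, a quick way to check the first two reasons is to inspect the nodes directly; a minimal sketch:
# Show allocatable CPU/memory per node (compare against the Pod's resource requests)
kubectl describe nodes | grep -A 7 "Allocatable"
# Show any taints that the Pod would need to tolerate
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'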

Kubectl get deployments shows No resources found in default namespace

I am trying my hand at Kubernetes and I tried to deploy an image as a k8s service:
root@KubernetesMiniKube:/usr/local/bin# kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
pod/hello-minikube created
root@KubernetesMiniKube:/usr/local/bin# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube 1/1 Running 0 16s
root@KubernetesMiniKube:/usr/local/bin# kubectl get deployments
No resources found in default namespace.
Why am I seeing "No resources found" when there is actually a resource running inside the default namespace?
When you use $ kubectl run, it creates a Pod.
In your example that's exactly what happened: it created a Pod named hello-minikube.
pod/hello-minikube created
If you want to create a Deployment:
Deployments represent a set of multiple, identical Pods with no unique identities. A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
you can do it using the command:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
deployment.apps/hello-minikube created
user@cloudshell:$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 8s
You can also create a Deployment using YAML.
Save the YAML from this documentation example and use kubectl apply.
$ vi nginx.yaml
<paste a proper YAML definition; you can also use the nano editor or download a ready-made YAML file - a minimal sketch follows the output below>
user@cloudshell:$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 3m48s
nginx-deployment 3/3 3 3 64s
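As promised above, here is roughly what such a minimal manifest looks like, applied straight from a heredoc instead of a saved file (the image tag and replica count are arbitrary example values, in the spirit of the documentation example):
# Apply a minimal nginx Deployment without creating a separate file
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF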
Please let me know if you have further questions regarding this answer.

How to pull a private container from AWS ECR to a local cluster

I am currently having trouble trying to pull my remote Docker image hosted in AWS ECR. I am getting this error when running a deployment.
Step 1)
run
aws ecr get-login-password --region cn-north-1 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn
Step 2)
run kubectl create -f backend.yaml
From here the following happens:
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 Pending 0 2s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ContainerCreating 0 4s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ErrImagePull 0 6s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ImagePullBackOff 0 7s
So then I run kubectl describe pod backend and it outputs:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 117s default-scheduler Successfully assigned default/backend-89d75f7df-qwqdq to minikube
Normal Pulling 32s (x4 over 114s) kubelet, minikube Pulling image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest"
Warning Failed 31s (x4 over 114s) kubelet, minikube Failed to pull image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/v2/baopals/manifests/latest: no basic auth credentials
Warning Failed 31s (x4 over 114s) kubelet, minikube Error: ErrImagePull
Warning Failed 19s (x6 over 113s) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 4s (x7 over 113s) kubelet, minikube Back-off pulling image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest"
The main error is no basic auth credentials.
Now what I am confused about is that I can push images to my ECR fine, and I can also push to my remote EKS cluster. Essentially, the only thing I can't do right now is pull from my private repository hosted on ECR.
Is there something obvious that I'm missing here that is preventing me from pulling from private repos so I can use them on my local machine?
To fetch an ECR image locally you have to log in to ECR and then fetch the Docker image, while on Kubernetes you have to use a Secret to store the ECR login details and use it each time an image is pulled from ECR.
Here is a shell script for the Kubernetes case; it automatically takes values from your AWS configuration, or you can update the variables at the start of the script.
ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text) #aws account number
REGION=ap-south-1 #aws ECR region
SECRET_NAME=${REGION}-ecr-registry #secret_name
EMAIL=abc@xyz.com #can be anything
TOKEN=`aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://$ACCOUNT.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
The secret is then referenced as an imagePullSecret in your YAML file so that images can be pulled from the private Docker repository.
https://github.com/harsh4870/ECR-Token-automation/blob/master/aws-token.sh
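As a rough illustration of that last step, the workload references the secret created by the script; the image and Pod names below are placeholders, and the secret name follows the script's ${REGION}-ecr-registry pattern:
# Reference the registry secret so the kubelet can authenticate the image pull
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: backend
    image: xxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/backend:latest
  imagePullSecrets:
  - name: ap-south-1-ecr-registry
EOF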
When a node in your cluster launches a container, it needs credentials to access the private registry to pull the image. Even if you have authenticated on your local machine, the node cannot reuse that login, because by design it could be running on another machine; so you have to provide the credentials in the Pod template. Follow this guide to do that:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Basically you store the ECR credentials as a Secret and provide it in imagePullSecrets in the Pod spec. The Pod will then be able to pull the image every time.
If you are developing with your cluster running on your local machine, you don't even need to do that. You can have the Pod reuse the image that you have already downloaded to your local cache by either setting the imagePullPolicy in the container spec to IfNotPresent, or using a specific tag instead of latest for your image.
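A minimal sketch of that for a local minikube-style setup (the image name and tag are placeholders; the point is the pinned tag plus IfNotPresent):
# Reuse the locally cached image instead of pulling from the registry every time
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: backend-local
spec:
  containers:
  - name: backend
    image: baopals/backend:v1.0.0
    imagePullPolicy: IfNotPresent
EOF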

Two clusters on EKS, how to switch between them

I am not exactly sure what's going on, which is why I am asking this question. When I run this command:
kubectl config get-clusters
I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
then I run:
kubectl config current-context
and I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and if I run kubectl get pods, I get the expected output.
But how do I switch to the other cluster/context? What's the difference between a cluster and a context? I can't figure out how these commands differ:
When I run them, I still get the pods from the wrong cluster:
root@4c2ab870baaf:/# kubectl config set-context arn:aws:eks:us-west-2:913617820371:cluster/eks1
Context "arn:aws:eks:us-west-2:913617820371:cluster/eks1" modified.
root@4c2ab870baaf:/#
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
root@4c2ab870baaf:/# kubectl config set-cluster arn:aws:eks:us-west-2:91xxxxxx371:cluster/eks1
Cluster "arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1" set.
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
So I really don't know how to properly switch between clusters or contexts, and how to switch the auth routine when doing so.
For example, here is the contexts section of my kubeconfig:
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
    user: arn:aws:eks:us-west-2:91xxxx371:cluster/ignitecluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
- context:
    cluster: arn:aws:eks:us-west-2:91xxxx371:cluster/teros-eks-cluster
    user: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
To clarify the difference between set-context and use-context:
A context is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. So when you run set-context, you are just adding or modifying context details in your configuration file ~/.kube/config; it doesn't switch you to that context, while use-context actually does.
Thus, as Vasily mentioned, in order to switch between clusters run
kubectl config use-context <CONTEXT-NAME>
Also, if you run kubectl config get-contexts you will see a list of contexts with an indication of the current one.
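As a concrete sketch, you could define a friendlier context name pointing at one of the clusters and then switch to it (my-eks is an arbitrary alias chosen for the example):
# set-context only writes/updates the entry in ~/.kube/config - it does not switch to it
kubectl config set-context my-eks \
  --cluster=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1 \
  --user=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
# use-context is what actually switches kubectl over
kubectl config use-context my-eks
# verify which context is active
kubectl config current-context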
Use
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
Consider using kubectx for managing your contexts.
Usage
View all contexts (the current context is bolded):
$ kubectx
arn:aws:eks:us-east-1:12234567:cluster/eks_app
->gke_my_second_cluster
my-rnd
my-prod
Switch to other context:
$ kubectx my-rnd
Switched to context "my-rnd".
Bonus:
At the same link, also check out the kubens tool.
This is the best command to switch between different EKS clusters.
I use it every day.
aws eks update-kubeconfig --name example
Documentation:
https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html
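For the two clusters from the question, that would look roughly like this; the optional --alias flag just gives each context a shorter name to switch between:
# Add or refresh kubeconfig entries for both clusters with short context names
aws eks update-kubeconfig --region us-west-2 --name eks-cluster-1 --alias eks-cluster-1
aws eks update-kubeconfig --region us-west-2 --name eks1 --alias eks1
# Then switch with:
kubectl config use-context eks1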