I am new to Kubernetes and I am facing a problem that I do not understand. I created a 4-node cluster in AWS, 1 master node (t2.medium) and 3 worker nodes (c4.xlarge), and they were successfully joined together using kubeadm.
Then I tried to deploy three Cassandra replicas using this yaml, but the pods never leave the Pending state; when I do:
kubectl describe pods cassandra-0
I get the message
0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.
And I do not understand why, as the machines should be powerful enough to cope with these pods and I haven't deployed any other pods. I am not sure if this means anything but when I execute:
kubectl describe nodes
I see this message:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Therefore my question is why this is happening and how I can fix it.
Thank you for your attention
Each node tracks the total amount of requested RAM (resources.requests.memory) for all pods assigned to it, and that sum cannot exceed the node's allocatable memory. I would triple-check that you have no other pods; you should see them in the output of kubectl describe node.
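If it helps, here is a minimal sketch of how to compare what the Pending pod is asking for against what each node has left; <node-name> is a placeholder:
# memory the pending pod is requesting
kubectl get pod cassandra-0 -o jsonpath='{.spec.containers[0].resources.requests.memory}'
# per-node view: compare the "Allocated resources" requests against "Allocatable"
kubectl describe node <node-name>
# everything already scheduled on that node, including kube-system pods
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>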
Related
We have 4 t3.medium instances (technically 17 pods allowed per node), but at some point a lot of services crash and can't be restarted because the pod limit is reached.
0/3 nodes are available: 1 Too many pods, 2 node(s) had taint {node.kubernetes.io/unreachable: }
The problem is that we have only deployed 13 services in total, each with 1 replica. How is this possible?
Even the AWS management interface tells us, that there is free space for more pods.
Your error message says 3 nodes, not 4, and your 13 services may all be running on a single node, because 2 of the nodes are unreachable.
So first make sure that those 2 nodes are reachable again,
then check your application resource requests.
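A minimal sketch of how to check node status, taints, and pod distribution, assuming you have kubectl access (<node-name> is a placeholder):
kubectl get nodes
# a NotReady node carries the node.kubernetes.io/unreachable taint shown in your error
kubectl describe node <node-name> | grep -i -A3 taints
# see how the pods are actually spread across the nodes
kubectl get pods --all-namespaces -o wide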
I have a k8s environment where I am running 3 masters and 7 worker nodes. Daily, my pods end up in the Evicted state due to disk pressure.
I am getting the below error on my worker node:
Message: The node was low on resource: ephemeral-storage.
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
But my worker node has enough resources to schedule pods.
Having analysed the comments, it looks like pods go into the Evicted state when they're using more resources than available, depending on a particular pod's limits. A solution in that case might be manually deleting the evicted pods, since they're no longer using resources at that point. To read more about node-pressure eviction, you can visit the official documentation.
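As a rough sketch, evicted pods are left in the Failed phase, so they can be listed and cleaned up like this (the namespace is a placeholder; double-check the list before deleting anything):
# list failed/evicted pods across the cluster
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
# delete them in a given namespace once confirmed they are only evicted leftovers
kubectl delete pods --field-selector=status.phase=Failed -n <namespace>
# check which nodes are currently reporting DiskPressure
kubectl describe nodes | grep -iE "^Name:|DiskPressure"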
The GKE documentation about resource quotas says that those hard limits are only applied for clusters with 10 or fewer nodes.
Even though we have more than 10 nodes, this quota has been created and cannot be deleted
Is this a bug on GKE side or intentional and the documentation is invalid?
I experienced a really strange error today using GKE. Our hosted gitlab-runner stopped running new jobs, and the message was:
pods "xxxx" is forbidden: exceeded quota: gke-resource-quotas, requested: pods=1, used: pods=1500, limited: pods=1500
So the quota resource is non-editable (as the documentation says). The problem, however, was that there were just 5 pods running, not 1500. So it could be a Kubernetes bug in the way it calculated the node count, I'm not sure.
After upgrading the control plane and the nodes, the error didn't go away, and I didn't know how to reset the counter.
What did work for me was to simply delete this resource quota. I was surprised that this was even allowed, /shrug.
kubectl delete resourcequota gke-resource-quotas -n gitlab-runner
After that, same resource quota was recreated, and the pods were able to run again.
The "gke-resource-quotas" protects the control plane from being accidentally overloaded by the applications deployed in the cluster that creates excessive amount of kubernetes resources. GKE automatically installs an open source kubernetes ResourceQuota object called ‘gke-resource-quotas’ in each namespace of the cluster. You can get more information about the object by using this command [kubectl get resourcequota gke-resource-quotas -o yaml -n kube-system].
Currently, GKE resource quotas include four kubernetes resources, the number of pods, services, jobs, and ingresses. Their limits are calculated based on the cluster size and other factors. GKE resource quotas are immutable, no change can be made to them either through API or kubectl. The resource name “gke-resource-quotas” is reserved, if you create a ResourceQuota with the same name, it will be overwritten.
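For example, to see the current usage against the hard limits in a given namespace (the namespace below is a placeholder):
# hard limits and current usage for the GKE-managed quota in one namespace
kubectl get resourcequota gke-resource-quotas -n <namespace> -o yaml
# quick overview across all namespaces
kubectl get resourcequota --all-namespaces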
On AWS EKS
I'm adding a deployment with 17 replicas (requesting and limiting 64Mi of memory) to a small cluster with 2 nodes of type t3.small.
Counting the kube-system pods, the total number of running pods per node is 11, and 1 is left pending, i.e.:
Node #1:
aws-node-1
coredns-5-1as3
coredns-5-2das
kube-proxy-1
+7 app pod replicas
Node #2:
aws-node-1
kube-proxy-1
+9 app pod replicas
I understand that t3.small is a very small instance. I'm only trying to understand what is limiting me here. The memory request is not it; I'm way below the available resources.
I found that there is IP addresses limit per node depending on instance type.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html?shortFooter=true#AvailableIpPerENI .
I didn't find any other documentation saying explicitly that this limits pod creation, but I'm assuming it does.
Based on the table, t3.small can have 12 IPv4 addresses. If this is the case and this is the limiting factor, then since I have 11 pods, where did the 1 missing IPv4 address go?
The real maximum number of pods per EKS instance is actually listed in this document.
For t3.small instances, it is 11 pods per instance. That is, you can have a maximum of 22 pods in your cluster. 6 of these pods are system pods, so there remains a maximum of 16 workload pods.
You're trying to run 17 workload pods, so it's one too many. I guess 16 of these pods have been scheduled and 1 is left pending.
The formula for defining the maximum number of pods per instance is as follows:
N * (M-1) + 2
Where:
N is the number of Elastic Network Interfaces (ENI) of the instance type
M is the number of IP addresses of a single ENI
So, for t3.small, this calculation is 3 * (4-1) + 2 = 11.
Values for N and M for each instance type are listed in this document.
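As a quick sanity check, the arithmetic and the value the kubelet actually reports can be compared like this (the node name is a placeholder; the ENI numbers come from the AWS table linked in the question):
# t3.small: N = 3 ENIs, M = 4 IPv4 addresses per ENI
N=3; M=4
echo $(( N * (M - 1) + 2 ))   # prints 11
# what the node itself advertises as its pod capacity
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'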
For anyone who runs across this when searching Google: be advised that as of August 2021 it's now possible to increase the max pods on a node using the latest AWS VPC CNI plugin, as described here.
Using the basic configuration explained there, a t3.medium node went from a max of 17 pods to a max of 110, which is more than adequate for what I was trying to do.
This is why we stopped using EKS in favor of a kops-deployed, self-managed cluster.
IMO EKS, which employs the aws-cni, causes too many constraints; it actually goes against one of the major benefits of using Kubernetes: efficient use of available resources.
EKS moves the system constraint away from CPU/memory usage into the realm of network IP limitations.
Kubernetes was designed to provide high density and to manage resources efficiently. Not quite so with EKS's version, since a node could be idle, with almost its entire memory available, and yet the cluster will be unable to schedule pods on an otherwise low-utilized node if pods > (N * (M-1) + 2).
One could be tempted to employ another CNI such as Calico, but you would be limited to worker nodes, since access to master nodes is forbidden.
This causes the cluster to have two networks, and problems will arise when trying to access the K8s API or working with admission controllers.
It really does depend on workflow requirements; for us, high pod density, efficient use of resources, and having complete control of the cluster are paramount.
Connect to your EKS node and run this:
/etc/eks/bootstrap.sh clusterName --use-max-pods false --kubelet-extra-args '--max-pods=50'
Ignore 'nvidia-smi: not found' in the output.
The whole script is located at https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh
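To confirm the new limit took effect, something along these lines should work (a rough check, not an official procedure; the node name is a placeholder):
# still on the node: verify the kubelet was started with the new flag
ps -ef | grep [k]ubelet | tr ' ' '\n' | grep -- --max-pods
# from your workstation: the node should now advertise the higher pod capacity
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'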
EKS allows you to increase the maximum number of pods per node, but this can be done only with Nitro instances; check the list here.
Make sure you have VPC CNI 1.9+ (a verification sketch follows after this answer).
Enable prefix delegation for the VPC CNI plugin:
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
If you are using a self-managed node group, make sure to pass the following in BootstrapArguments:
--use-max-pods false --kubelet-extra-args '--max-pods=110'
Or you could create the node group using eksctl:
eksctl create nodegroup --cluster my-cluster --managed=false --max-pods-per-node 110
If you are using a managed node group with a specified AMI, it has bootstrap.sh, so you could modify user_data to do something like this:
/etc/eks/bootstrap.sh my-cluster \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=110'
Or simply use eksctl by running:
eksctl create nodegroup --cluster my-cluster --max-pods-per-node 110
For more details, check the AWS documentation: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
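A rough way to verify the prerequisites and that the setting took effect (the node name is a placeholder):
# VPC CNI version currently running (needs to be 1.9 or newer)
kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'
# confirm prefix delegation is enabled on the aws-node daemonset
kubectl get daemonset aws-node -n kube-system -o yaml | grep -A1 ENABLE_PREFIX_DELEGATION
# the node should now report the raised pod limit
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'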
I have a Kubernetes cluster deployed on AWS via kops, consisting of 3 master nodes, each in a different AZ. As is well known, kops deploys the cluster so that etcd runs on each master node through two pods, each of which mounts an EBS volume for saving its state. If you lose the volumes of 2 of the 3 masters, you automatically lose consensus among the masters.
Is there a way to use the data from the only master that still has the cluster state, and restore quorum among the three masters based on that state? I recreated this scenario, but the cluster becomes unavailable, and I can no longer access the etcd pods of any of the 3 masters, because those pods fail with an error. Moreover, etcd itself becomes read-only, and it is impossible to add or remove members of the cluster in order to attempt manual intervention.
Tips? Thanks to all of you
This is documented here. There's also another guide here.
You basically have to back up your cluster and create a brand new one.
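If you can still exec into the surviving etcd pod, taking a snapshot with etcdctl is one way to preserve its data before rebuilding; the pod name, endpoint, and certificate paths below are placeholders and depend on how kops set up etcd in your cluster:
# take a point-in-time snapshot from the healthy member
kubectl exec -n kube-system <etcd-pod-name> -- sh -c \
  "ETCDCTL_API=3 etcdctl --endpoints=<etcd-endpoint> \
     --cacert=<ca.crt> --cert=<client.crt> --key=<client.key> \
     snapshot save /tmp/etcd-backup.db"
# copy the snapshot off the pod before tearing the cluster down
kubectl cp kube-system/<etcd-pod-name>:/tmp/etcd-backup.db ./etcd-backup.db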