I am using AWS EKS. While trying to mount EFS to my EKS cluster, I get the following error:
Warning FailedMount 3m1s kubelet Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[nfs-client-root nfs-client-provisioner-token-8bx56]: timed out waiting for the condition
Warning FailedMount 77s kubelet MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/b07f3f15-b655-435c-8ec1-8d14b8690c1d/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 172.31.26.154:/mnt/nfs_share/ /var/lib/kubelet/pods/b07f3f15-b655-435c-8ec1-8d14b8690c1d/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-23226.scope.
mount.nfs: Connection timed out
I also tried to connect to an external NFS server and got the same warning.
I have opened inbound rules allowing all traffic in the EKS cluster, EFS, and NFS security groups.
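As a sanity check that the NFS server is actually reachable from inside the cluster, something like this should work (the server IP is taken from the mount error above; the pod name and busybox image are just placeholders):
kubectl run nfs-test --rm -it --image=busybox --restart=Never -- nc -zv -w 5 172.31.26.154 2049
If this also times out, the problem is network-level (security groups, NACLs, routing) rather than a missing NFS client on the nodes.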
If the problem is that the nodes are missing the nfs-common package, please let me know the steps to install it on the nodes.
Since I am using AWS EKS, I am unable to log in to the nodes.
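For what it's worth, if the node's instance profile includes the AWS Systems Manager permissions, you can usually get a shell on an EKS node without SSH (the instance ID below is a placeholder):
aws ssm start-session --target i-0123456789abcdef0
# On the default Amazon Linux EKS AMIs the NFS client package is nfs-utils:
sudo yum install -y nfs-utils
# On Ubuntu-based nodes it would be nfs-common instead:
sudo apt-get update && sudo apt-get install -y nfs-common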
When creating an EC2 machine to act as an external NFS server, you must place it in the VPC used by the EKS cluster and include it in the security group that the nodes use to communicate with each other.
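A minimal sketch of opening the NFS port from the node security group to the NFS server's security group (both group IDs are placeholders):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111bbbb22222 \
  --protocol tcp --port 2049 \
  --source-group sg-0ccc3333dddd44444
# sg-0aaa... = security group attached to the NFS server instance
# sg-0ccc... = security group attached to the EKS worker nodes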
Related
I have a single-node Kubernetes cluster set up on AWS, running in a VPC with one public and one private subnet.
The master node is in the public subnet and the worker node is in the private subnet.
On the AWS console I can successfully register a cluster and download the connector manifest, which I then apply on my master node, but unfortunately the pods don't start. Below is what I observed:
kubectl get pods
NAME READY STATUS RESTARTS AGE
eks-connector-0 0/2 Init:CrashLoopBackOff 7 (4m36s ago) 19m
kubectl logs eks-connector-0
Defaulted container "connector-agent" out of: connector-agent, connector-proxy, connector-init (init)
Error from server (BadRequest): container "connector-agent" in pod "eks-connector-0" is waiting to start: PodInitializing
The pods are failing to start with the above logged errors.
I would suggest providing the output of:
kubectl get pod eks-connector-0 -o yaml
kubectl logs -p eks-connector-0
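Since the pod is stuck in Init:CrashLoopBackOff, the init container's logs are probably the most informative; its name (connector-init) appears in the "Defaulted container" message above:
kubectl logs eks-connector-0 -c connector-init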
When pods are scaled up through the HPA, the following error occurs and pod creation fails.
If I manually change the replicas of the deployment, the pods run normally.
It seems to be a CNI-related problem, and the same behavior occurs even after installing VPC CNI 1.7.10 on the 1.20 cluster as an add-on.
200 IPs per subnet should be sufficient, and the outbound security group rules are open.
The issue does not occur when the number of pods is scaled via kubectl (see the commands after the cluster details below).
7s Warning FailedCreatePodSandBox pod/b4c-ms-test-develop-5f64db58f-bm2vc Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7632e23d2f3db8f8b8c0335aaaa6afe1e52ad43cf293bfa6789aa14f5b665cf1" network for pod "b4c-ms-test-develop-5f64db58f-bm2vc": networkPlugin cni failed to set up pod "b4c-ms-test-develop-5f64db58f-bm2vc_b4c-test" network: CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7632e23d2f3db8f8b8c0335aaaa6afe1e52ad43cf293bfa6789aa14f5b665cf1"
Region: eu-west-1
Cluster Name: dev-pangaia-b4c-eks
For AWS VPC CNI issue, have you attached node logs?: No
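For reference, the two scaling paths look like this (the deployment and namespace names are inferred from the pod name in the event above, so treat them as placeholders):
# Manual scaling: pods start normally
kubectl scale deployment b4c-ms-test-develop --replicas=5 -n b4c-test
# HPA-driven scaling: triggers the FailedCreatePodSandBox error
kubectl autoscale deployment b4c-ms-test-develop --cpu-percent=50 --min=2 --max=10 -n b4c-test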
I want to mount EFS on ECS Fargate, but I keep getting the error below:
Status reason CannotStartContainerError: ResourceInitializationError: failed to create new container runtime task: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:71: ...
Command ["df -h && while true; do echo \\\"RUNNING\\\"; done"]
I followed this article: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/
The EFS security group has an inbound rule allowing the ECS security group on port 2049. The ECS task runs a simple nginx container on port 80, and port 80 is accessible from the internet.
In the task definition I have also added the volume I created, and the same volume is referenced in the container definition.
I did everything as per the article but am still getting the error.
Can you please help me figure out what is wrong here? Maybe I'm missing some configuration.
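For comparison, the relevant fragment of a task definition wiring an EFS volume into a container looks roughly like this (the file system ID, volume name, and container path are placeholders, not values from your setup); note also that EFS on Fargate requires platform version 1.4.0 or later:
"volumes": [
  {
    "name": "efs-volume",
    "efsVolumeConfiguration": { "fileSystemId": "fs-0123456789abcdef0", "rootDirectory": "/" }
  }
],
"containerDefinitions": [
  {
    "mountPoints": [
      { "sourceVolume": "efs-volume", "containerPath": "/mnt/efs" }
    ]
  }
]
The "sourceVolume" in the container definition must match the "name" of the volume exactly.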
Unable to attach EFS to EC2 instances. We tried various ways to mount, and each throws the same error. Logs:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-c04ksakbe520.efs.us-wsjn-1.amazonaws.com:/ efs
mount.nfs4: Connection timed out
We found that the VPC was using a custom DNS server in its DHCP option set to resolve company on-premises URLs. In order to mount an EFS file system using its DNS name, the connecting EC2 instance must be inside a VPC and must be configured to use the DNS server provided by Amazon [1]. Using the IP address of the mount target in the same Availability Zone as the instance (us-east-1a), we were able to mount the EFS [2] with the following command:
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 83.23.23.4:/ efs-mount-point
Then we added the following line to /etc/fstab so the EFS mounts automatically on boot:
83.23.23.4:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
We tested it successfully by running "mount -a".
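If you need to look up the mount-target IP for your Availability Zone, something like this should work (the file system ID is a placeholder):
aws efs describe-mount-targets \
  --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress}' \
  --output table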
[1] Mounting on Amazon EC2 with a DNS Name - https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html
[2] Mounting File Systems Without the EFS Mount Helper - https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-old.html
I'm trying to mount an EFS file system on my EC2 instances.
I have followed this walkthrough closely, but it seems the EFS will not mount using its DNS name.
When I use the IP it works, but I don't find the files created by instance 1 inside the mounted folder on instance 2; in other words, the EFS is not really shared.
Please help.
For information: DNS settings are enabled in the VPC.
EFS and EC2 are in the same VPC.
The EFS security group has an ingress rule that allows the EC2 security group on port 2049.
What else should I check?
root@ip:~# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EC2_AVAIL_ZONE.fs-4644458f.efs.$REGION.amazonaws.com:/ /efs-mount-point
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
root@ip:~# mount -a -t nfs4
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
root@ip:~# mount -a
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
If you have a custom DNS server configured, you may need to redirect queries for AWS domains to the Amazon-provided DNS server:
echo "server=/amazonaws.com/169.254.169.253" > /etc/dnsmasq.d/amazonaws.com.conf
echo "prepend domain-name-servers 127.0.0.1;" >> /etc/dhcp/dhclient.conf
service dnsmasq restart
service network restart
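Afterwards you can verify that the EFS hostname now resolves (using the file system DNS name from the question):
nslookup fs-4644458f.efs.eu-west-1.amazonaws.com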