Facing an issue with attaching an EFS volume to Kubernetes pods

I am running my Docker containers on a Kubernetes cluster on AWS EKS. Two of my containers use a shared volume, and these containers run in two different pods, so I want a common volume on AWS that both pods can use.
I created an EFS volume and mounted it. I am following this link to create the PersistentVolumeClaim, but I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The volume ID and region are correct.
Detailed error message from the pod describe output:
timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw]
MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32

AWS EFS uses the NFS volume plugin, and as per Kubernetes Storage Classes, the NFS volume plugin does not come with an internal provisioner the way EBS does.
So the steps will be:
Create an external provisioner for the NFS volume plugin.
Create a storage class.
Create a volume claim.
Use the volume claim in a Deployment.
In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS file system you created.
In the Deployment section, change server: to the DNS endpoint of the EFS file system you created.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
For more explanation and details go to https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
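A quick way to check that everything is wired up, assuming the manifests above are saved to a file such as efs-provisioner.yaml (the filename is just an example):
kubectl apply -f efs-provisioner.yaml
kubectl get pods -l app=efs-provisioner   # the provisioner pod should reach Running
kubectl get pvc efs                       # the claim should move from Pending to Bound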

The problem for me was that I was specifying a path other than / in my PV, and the directory referenced by that path did not yet exist on the NFS server. I had to create that directory manually first.
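In case it helps anyone, this is roughly what I mean (a sketch only; the file system ID, region, and directory name are placeholders for whatever your PV's path points at):
# temporarily mount the EFS root on any EC2 instance in the same VPC
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# create the sub-directory referenced by the PV's "path", then unmount
sudo mkdir -p /mnt/efs/persistentvolumes
sudo umount /mnt/efs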

The issue was that I had two EC2 instances running, but I had mounted the EFS volume on only one of them, and Kubernetes kept deploying pods on the instance that did not have the volume mounted. I mounted the same volume on both instances and used the PV and PVC below, and it is working fine now.
ec2 mounting: AWS EFS mounting with EC2
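For reference, the mount I ran on each instance looked roughly like this (the DNS name and mount point are placeholders; the exact command is in the AWS EFS mounting guide linked above):
# on every EC2 worker node
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 efs_public_dns.amazonaws.com:/ /mnt/efs
# optional: persist the mount across reboots
echo 'efs_public_dns.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,_netdev 0 0' | sudo tee -a /etc/fstab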
PV.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com
    path: "/"
PVC.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
replicaset.yml
----- only volume section -----
volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs

Related

AWS EKS Fargate - Unable to mount EFS volume with statefulset

I want to run a StatefulSet in AWS EKS Fargate and attach an EFS volume to it, but I am getting errors when mounting the volume in the pod.
These are the errors I am getting from describe pod.
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal LoggingEnabled 114s fargate-scheduler Successfully enabled logging for pod
Normal Scheduled 75s fargate-scheduler Successfully assigned default/app1 to fargate-10.0.2.123
Warning FailedMount 43s (x7 over 75s) kubelet MountVolume.SetUp failed for volume "efs-pv" : rpc error: code = Internal desc = Could not mount "fs-xxxxxxxxxxxxxxxxx:/" at "/var/lib/kubelet/pods/b799a6d6-fe9e-4f80-ac2d-8ccf8834d7c4/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o tls fs-xxxxxxxxxxxxxxxxx:/ /var/lib/kubelet/pods/b799a6d6-fe9e-4f80-ac2d-8ccf8834d7c4/volumes/kubernetes.io~csi/efs-pv/mount
Output: Failed to resolve "fs-xxxxxxxxxxxxxxxxx.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.
See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail.
Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first.
Warning: config file does not have fall_back_to_mount_target_ip_address_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fall_back_to_mount_target_ip_address_enabled = True].
If anyone has set up an EFS volume with an EKS Fargate cluster, please take a look at this. I have been stuck on it for a long time.
What I have set up
Created an EFS volume
CSIDriver Object
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false
Storage Class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS filesystem ID>
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
Pod Configuration
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
I had the same question as you literally a day after and have been working on the error nonstop since then! Did you check to make sure your VPC had DNS hostnames enabled? That is what fixed it for me.
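If you want to check this from the CLI, something like the following should show the flag and, if needed, enable it (the VPC ID is a placeholder):
# check whether DNS hostnames are enabled on the cluster's VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
# enable the flag if the value comes back false
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value": true}'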
Just an FYI: if you are using Fargate and you want to change this, I had to go as far as deleting the entire cluster after changing the DNS hostnames flag in order for the change to propagate. With a normal EC2 instance it usually takes something like renewing the IP configuration via DHCP to force the flag to propagate, but since Fargate is a managed system I was unable to find a way to do that from the node itself. I have created another post here attempting to answer that question.
Another quick FYI: if your pod execution role doesn't have access to EFS, you will need to add a policy that allows access (I just used the AWS-managed AmazonElasticFileSystemFullAccess policy for the time being to get things working). Once again, you will have to relaunch your whole cluster for this role change to propagate if you haven't already done so!
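For that quick-and-dirty route, attaching the managed policy to the Fargate pod execution role looks roughly like this (the role name is a placeholder for whatever your cluster uses):
aws iam attach-role-policy \
  --role-name my-eks-fargate-pod-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess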

How to use volume gp3 in storage class on EKS?

I'm converting gp2 volumes to gp3 volumes for EKS, but I'm getting this error.
Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
This is my config.
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp3
parameters:
  fsType: ext4
  type: gp3
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: test-pvc
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp3
When I run kubectl describe pvc/test, this is the response:
Name: test-pvc
Namespace: default
StorageClass: gp3
Status: Pending
Volume:
Labels: app=test-pvc
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 58s (x9 over 4m35s) persistentvolume-controller Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
I'm using Kubernetes version 1.18.
Can someone help me? Thanks!
I found the solution for using gp3 volumes in a storage class on EKS.
First, you need to install the Amazon EBS CSI driver; the official instructions are here.
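If you use Helm, the install is roughly as follows (a sketch only; the IAM/IRSA prerequisites from the official instructions still apply):
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver --namespace kube-system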
Next, once the Amazon EBS CSI driver is installed, create the storage class ebs-sc, for example:
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
Now you can use gp3 volumes in a storage class on EKS.
You can verify by deploying these resources:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ebs-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: app-gp3-in-tree
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: persistent-storage
          mountPath: /usr/share/nginx/html
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-gp3-claim
EOF
Detailed documentation on Migrating Amazon EKS clusters from gp2 to gp3 EBS volumes: https://aws.amazon.com/vi/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/
References: Persistent Storage in EKS failing to provision volume
The default in-tree Kubernetes storage driver supports only up to gp2. To use gp3 you need to install the AWS EBS CSI driver. Here are the official instructions to install the driver.
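If you also want gp3 to become the cluster default once the new StorageClass exists, you can flip the default-class annotation (class names here match the examples above; gp2 is the stock EKS default class):
kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'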

How to use Same EFS for mounting multiple directories in Kubernetes deployment

I am trying to find a solution for using the same Amazon EFS file system to mount multiple directories in a Kubernetes deployment. Here is my use case:
I have an application named app1 that needs to persist a directory named /opt/templates to EFS.
I have another application named app2 that needs to persist a directory named /var/logs to EFS.
We deploy the applications as Kubernetes pods in an Amazon EKS cluster. If I use the same EFS for both of the above mounts, I can see all the files from both directories, /opt/templates and /var/logs, because I am using the same EFS.
How can I use the same EFS for both applications without seeing app1's mounted files in app2's directory? Is it even possible to use the same EFS ID for multiple applications?
Here are the Kubernetes manifests I used for one of the applications, including the PersistentVolume, the PVC, and the Deployment:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-1
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc-report
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy1
  template:
    metadata:
      labels:
        app: deploy1
    spec:
      containers:
        - name: app1
          image: imageXXXX
          ports:
            - containerPort: 6455
          volumeMounts:
            - name: temp-data
              mountPath: /opt/templates/
      volumes:
        - name: shared-data
          emptyDir: {}
        - name: temp-data
          persistentVolumeClaim:
            claimName: efs-pvc-1
It looks like you can do that by including the path as part of the volume handle.
A sub-directory of the EFS file system can be mounted inside a container. This gives the cluster operator the flexibility to restrict how much data different containers can access on EFS.
For example:
volumeHandle: [FileSystemId]:[Path]
I think you will need to create two separate PVs and PVCs, one for /opt/templates, and the other for /var/logs, each pointing to a different path on your EFS.
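A rough sketch of what that could look like, assuming the two sub-directories already exist on the file system (fs-XXXXX, the storage class name, and the paths are placeholders; each app's PVC would then bind to its own PV):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-templates
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX:/opt/templates   # sub-directory for app1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-logs
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX:/var/logs        # sub-directory for app2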

custom tag on EBS volume provisioned dynamically by Kubernetes

I'm dynamically provisioning an EBS volume (Kubernetes on AWS through EKS) via a PersistentVolumeClaim with a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8sebs
parameters:
  encrypted: "false"
  type: gp2
  zones: us-east-1a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
PVC below
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testk8sclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8sebs
  resources:
    requests:
      storage: 1Gi
And the pod that uses the volume:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: alpine
      image: alpine:3.2
      volumeMounts:
        - mountPath: "/var/k8svol"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: testk8sclaim
I need to tag the EBS volume with a custom tag.
The documentation mentions nothing about tagging for the aws-ebs provisioner, the StorageClass, or the PVC. I've spent hours trying to add a tag to the dynamically provisioned EBS volume, with no luck.
Is creating custom tags for EBS a possibility in this scenario, and if it is, how can it be achieved?
Thank you,
Greg
It seems that, at this point in time, this is not yet possible.
I found these:
https://github.com/kubernetes/kubernetes/pull/49390
https://github.com/kubernetes/kubernetes/issues/50898
Hopefully something will be done soon.
The current approach is to use the AWS EBS CSI driver instead of the in-tree K8s provisioner: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
If you use this new provisioner, you can add custom tags using this: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/e175fe64989019e2d8f77f5a5399bad1dfd64e6b/charts/aws-ebs-csi-driver/values.yaml#L79
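If you install the driver with its Helm chart, the setting the link points at is extraVolumeTags; a values fragment might look like the sketch below (the key's nesting can differ between chart versions, so check the values.yaml of the version you install; the tag names and values are just examples):
# values.yaml fragment for the aws-ebs-csi-driver chart
controller:
  extraVolumeTags:
    team: platform
    costCenter: "1234"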

Kubernetes on GKE can't mount volumes

There are two nodes and two pods running in my cluster (one pod on each node).
My persistent volume claim is below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: blockchain-data
  annotations: {
    "volume.beta.kubernetes.io/storage-class": "blockchain-disk"
  }
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 500Gi
and my storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: blockchain-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
and I mounted it in my container like this:
spec:
  containers:
    - image: gcr.io/indiesquare-dev/geth-node:v1.8.12
      imagePullPolicy: IfNotPresent
      name: geth-node
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - name: blockchain-data
          mountPath: /root/.ethereum
  volumes:
    - name: blockchain-data
      persistentVolumeClaim:
        claimName: blockchain-data
I have replicas set to 2. When I start the deployment, the first pod starts correctly with the disk properly mounted.
However, the second pod gets stuck at ContainerCreating.
If I run kubectl describe pods:
Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-c56fbb79-954f-11e8-870b-4201ac100003" Volume is already exclusively attached to one node and can't be attached to another
According to this message, I think I am trying to attach a disk that is already attached to another node.
What I want is two persistent volumes, each attached separately to one of the two pods. If the pods scale up, each new pod should get its own volume.
How can I do this?
You can't attach a GCE persistent disk to multiple nodes, so if your pods land on different nodes you can't reuse the same disk.
You would need something like the ReadOnlyMany access mode, but you have ReadWriteOnce.
Read https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes
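One common pattern for one-disk-per-pod (not from the answer above, just a sketch reusing the names from the question): run the workload as a StatefulSet with volumeClaimTemplates, so each replica gets its own PVC and therefore its own pd-ssd disk:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth-node
spec:
  serviceName: geth-node        # assumes a headless Service with this name exists
  replicas: 2
  selector:
    matchLabels:
      app: geth-node
  template:
    metadata:
      labels:
        app: geth-node
    spec:
      containers:
      - name: geth-node
        image: gcr.io/indiesquare-dev/geth-node:v1.8.12
        volumeMounts:
        - name: blockchain-data
          mountPath: /root/.ethereum
  volumeClaimTemplates:         # one PVC (and one disk) is created per replica
  - metadata:
      name: blockchain-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: blockchain-disk
      resources:
        requests:
          storage: 500Gi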