Sharing AWS EFS Access Points across Persistent Volumes

I have an Access Point created on AWS EFS, and I now need to share it across multiple Persistent Volumes in Kubernetes, which would eventually be used by multiple namespaces.
Is there a way I can do this, or would I need to create a separate volume with its own size allocation under the same mount point?

...share it across multiple Persistent Volumes in Kubernetes which would eventually be used by multiple namespaces
First, install the EFS CSI driver.
Then create the StorageClass and PersistentVolume representing the EFS volume and access point you have created:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <name>
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name>
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <name> # <-- match this to the StorageClass name
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-handle-id>::<access-point-id>
The double colon in the volumeHandle is intentional: the format is <FileSystemId>:<Subpath>:<AccessPointId>, and leaving the subpath empty mounts the root of the access point.
In each namespace where you wish to mount the access point, create a PersistentVolumeClaim. Note that a PersistentVolume binds to exactly one PersistentVolumeClaim, so create one PV like the one above, all pointing at the same volumeHandle, for each claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <name> # <-- match this to the StorageClass name
  resources:
    requests:
      storage: 1Gi # <-- match this to the PersistentVolume
As usual, you reference the claim in your pod spec to use the volume:
...
volumes:
  - name: <name>
    persistentVolumeClaim:
      claimName: <name>
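For example, a Deployment in one of the namespaces might consume the claim like this (a minimal sketch; the names my-app and shared-efs, the image, and the mount path /data are placeholders, not from the original question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: <namespace>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx                # placeholder image
          volumeMounts:
            - name: shared-efs
              mountPath: /data        # placeholder mount path
      volumes:
        - name: shared-efs
          persistentVolumeClaim:
            claimName: <name> # <-- match this to the PVC name in this namespace

Because EFS supports ReadWriteMany, pods in different namespaces (each via their own PV/PVC pair pointing at the same access point) can read and write the volume concurrently.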

Related

How to use volume gp3 in storage class on EKS?

I'm converting gp2 volumes to gp3 volumes for EKS but I'm getting this error:
Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
This is my config.
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp3
parameters:
  fsType: ext4
  type: gp3
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: test-pvc
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp3
When I run kubectl describe pvc/test-pvc, this is the response:
Name:          test-pvc
Namespace:     default
StorageClass:  gp3
Status:        Pending
Volume:
Labels:        app=test-pvc
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age                  From                         Message
  ----     ------              ----                 ----                         -------
  Warning  ProvisioningFailed  58s (x9 over 4m35s)  persistentvolume-controller  Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
I'm using Kubernetes version 1.18.
Can someone help me? Thanks!
I found the solution to use gp3 volumes in a storage class on EKS.
First, you need to install the Amazon EBS CSI driver by following the official instructions here.
Next, after the Amazon EBS CSI driver is installed, create the storage class ebs-sc, for example:
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
Now you can use gp3 volumes in a storage class on EKS.
You can check by deploying the following resources:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ebs-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: app-gp3-in-tree
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: persistent-storage
          mountPath: /usr/share/nginx/html
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-gp3-claim
EOF
Detailed documentation on Migrating Amazon EKS clusters from gp2 to gp3 EBS volumes: https://aws.amazon.com/vi/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/
References: Persistent Storage in EKS failing to provision volume
The default in-tree Kubernetes storage driver supports only up to gp2. To use gp3 you need to install the AWS EBS CSI driver. Here are the official instructions to install the driver.

How to use Same EFS for mounting multiple directories in Kubernetes deployment

I am trying to find a solution to make use of the same Amazon EFS for mounting multiple directories in the Kubernetes deployment. Here is my use case
I have an application named app1 that needs to persist a directory named "/opt/templates" to EFS
I have another application named app2 that needs to persist a directory named "/var/logs" to EFS
We deploy the applications as Kubernetes pods in the Amazon EKS cluster. If I use the same EFS for both of the above mounts, I can see all the files from both directories, "/opt/templates" and "/var/logs", since they share the same EFS.
How can I solve the problem of using the same EFS for both applications without app1's mounted files being visible in app2's directory? Is it even possible to use the same EFS ID for multiple applications?
Here are the Kubernetes manifests I used for one of the applications, which include the PersistentVolume, PVC, and Deployment:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-1
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc-report
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy1
  template:
    metadata:
      labels:
        app: deploy1
    spec:
      containers:
        - name: app1
          image: imageXXXX
          ports:
            - containerPort: 6455
          volumeMounts:
            - name: temp-data
              mountPath: /opt/templates/
      volumes:
        - name: shared-data
          emptyDir: {}
        - name: temp-data
          persistentVolumeClaim:
            claimName: efs-pvc-1
It looks like you can do that by including the path as part of the volume handle.
A subdirectory of EFS can be mounted inside the container. This gives the cluster operator the flexibility to restrict the amount of data being accessed from different containers on EFS.
For example:
volumeHandle: [FileSystemId]:[Path]
I think you will need to create two separate PVs and PVCs, one for /opt/templates, and the other for /var/logs, each pointing to a different path on your EFS.
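For completeness, a sketch of that approach, reusing the fs-XXXXX file system ID from the question (the subdirectory names /templates and /logs are placeholders, and the directories must already exist on the EFS file system):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-templates
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX:/templates   # app1 only sees this subdirectory
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-logs
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX:/logs        # app2 only sees this subdirectory

Each application then gets its own PVC; setting spec.volumeName on the claims (or using distinct storage class names) keeps each claim bound to the intended PV.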

custom tag on EBS volume provisioned dynamically by Kubernetes

I'm dynamically provisioning an EBS volume (Kubernetes on AWS through EKS) through a PersistentVolumeClaim with a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8sebs
parameters:
  encrypted: "false"
  type: gp2
  zones: us-east-1a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
PVC below
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testk8sclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8sebs
  resources:
    requests:
      storage: 1Gi
And the pod that uses the volume:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: alpine
      image: alpine:3.2
      volumeMounts:
        - mountPath: "/var/k8svol"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: testk8sclaim
I need to tag the EBS volume with a custom tag.
The documentation mentions nothing about tagging for the aws-ebs provisioner, the StorageClass, or the PVC. I've spent hours trying to add a tag to the dynamically provisioned EBS volume, but no luck.
Is creating custom tags for EBS a possibility in this scenario, and if it is, how can it be achieved?
Thank you,
Greg
It seems that, at this point in time, this is not yet possible.
I found these:
https://github.com/kubernetes/kubernetes/pull/49390
https://github.com/kubernetes/kubernetes/issues/50898
Hopefully something will be done soon.
The current approach is to use the AWS EBS CSI driver instead of the in-tree Kubernetes provisioner: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
If you use this new provisioner, you can add new tags using this: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/e175fe64989019e2d8f77f5a5399bad1dfd64e6b/charts/aws-ebs-csi-driver/values.yaml#L79
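A hedged sketch of what that can look like: recent versions of the EBS CSI driver accept per-StorageClass tags through tagSpecification_N parameters, and the Helm chart linked above also exposes controller.extraVolumeTags for tags applied to every volume the driver creates. Verify that the driver version you run supports these; the StorageClass name and the tag keys/values below are placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8sebs-tagged                        # placeholder name
provisioner: ebs.csi.aws.com                 # the CSI provisioner, not kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "false"
  tagSpecification_1: "team=platform"        # placeholder custom tag
  tagSpecification_2: "cost-center=12345"    # placeholder custom tag
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer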

Facing an issue with attaching EFS volume to Kubernetes pods

I am running my Docker containers on a Kubernetes cluster on AWS EKS. Two of my Docker containers use a shared volume, and both of these containers run inside two different pods. So I want a common volume that can be used by both pods on AWS.
I created an EFS volume and mounted it. I am following a link to create the PersistentVolumeClaim, but I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The VolumeId and region are correct.
Detailed error message from the pod describe:
timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw]
MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
AWS EFS uses the NFS volume plugin, and as per Kubernetes Storage Classes, the NFS volume plugin does not come with an internal provisioner like EBS does.
So the steps will be:
Create an external Provisioner for NFS volume plugin.
Create a storage class.
Create one volume claim.
Use volume claim in Deployment.
In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS you created.
In the Deployment section, change server: to the DNS endpoint of the EFS you created.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
For more explanation and details go to https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
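The last step above, "Use volume claim in Deployment", is not shown in the manifests; a minimal sketch of a Deployment consuming the efs claim might look like the following (the name efs-app, the image, and the mount path are placeholders):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: efs-app
  template:
    metadata:
      labels:
        app: efs-app
    spec:
      containers:
        - name: app
          image: nginx              # placeholder image
          volumeMounts:
            - name: efs-data
              mountPath: /data      # placeholder mount path
      volumes:
        - name: efs-data
          persistentVolumeClaim:
            claimName: efs          # the PVC created above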
The problem for me was that I was specifying a path other than / in my PV, and the directory on the NFS server referenced by that path did not yet exist. I had to manually create that directory first.
The issue was that I had two EC2 instances running, but I had mounted the EFS volume on only one of them, and Kubernetes kept scheduling the pods on the EC2 instance that didn't have the mounted volume. I have now mounted the same volume on both instances and am using the PV and PVC below. It is working fine.
EC2 mounting: AWS EFS mounting with EC2
PV.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com
    path: "/"
PVC.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
replicaset.yml
----- only volume section -----
volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs

Kubernetes on GKE can't mount volumes

There are two nodes and 2 pods running in my cluster
(1 pod on each node)
My persistent volume claim is below
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: blockchain-data
  annotations: {
    "volume.beta.kubernetes.io/storage-class": "blockchain-disk"
  }
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 500Gi
and my StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: blockchain-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
and I mounted it in my container like this:
spec:
  containers:
    - image: gcr.io/indiesquare-dev/geth-node:v1.8.12
      imagePullPolicy: IfNotPresent
      name: geth-node
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - name: blockchain-data
          mountPath: /root/.ethereum
  volumes:
    - name: blockchain-data
      persistentVolumeClaim:
        claimName: blockchain-data
I have replicas set to 2. When I start the deployment, the first pod starts correctly with the disk properly mounted.
However, the second pod gets stuck at ContainerCreating.
If I run kubectl describe pods, I get:
Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-c56fbb79-954f-11e8-870b-4201ac100003" Volume is already exclusively attached to one node and can't be attached to another
I think, according to this message, I am trying to attach a disk that is already attached to another node.
What I want is for two persistent volumes to be attached separately to the two pods. If the pods scale up, each should have a different volume attached.
How can I do this?
You can't attach a GCE Persistent Disk to multiple nodes. So if your pods are landing on different nodes you can't reuse the same disk.
You need something like ReadOnlyMany access mode but you have ReadWriteOnce.
Read https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes
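If the goal is one disk per replica, one common pattern (not part of the original answer) is to run the pods as a StatefulSet with volumeClaimTemplates, so Kubernetes provisions a separate PVC, and therefore a separate disk, for each replica. A minimal sketch, assuming the blockchain-disk StorageClass above and a headless Service named geth-node:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth-node
spec:
  serviceName: geth-node               # assumes a headless Service with this name exists
  replicas: 2
  selector:
    matchLabels:
      app: geth-node
  template:
    metadata:
      labels:
        app: geth-node
    spec:
      containers:
        - name: geth-node
          image: gcr.io/indiesquare-dev/geth-node:v1.8.12
          volumeMounts:
            - name: blockchain-data
              mountPath: /root/.ethereum
  volumeClaimTemplates:
    - metadata:
        name: blockchain-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: blockchain-disk   # the StorageClass defined in the question
        resources:
          requests:
            storage: 500Gi

Each pod (geth-node-0, geth-node-1, ...) then gets its own PersistentVolumeClaim and its own GCE persistent disk.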