I'm converting a gp2 volume to gp3 on EKS, but I'm getting this error:
Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
This is my config:
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp3
parameters:
  fsType: ext4
  type: gp3
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: test-pvc
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp3
When I run kubectl describe pvc/test-pvc, this is the response:
Name: test-pvc
Namespace: default
StorageClass: gp3
Status: Pending
Volume:
Labels: app=test-pvc
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 58s (x9 over 4m35s) persistentvolume-controller Failed to provision volume with StorageClass "gp3": invalid AWS VolumeType "gp3"
I'm using Kubernetes version 1.18.
Can someone help me? Thanks!
I found the solution for using gp3 volumes in a storage class on EKS.
First, you need to install the Amazon EBS CSI driver following the official instructions here.
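For example, at the time of writing the driver can be installed as an EKS managed add-on. This is only a sketch: the cluster name, account ID, and IAM role below are placeholders you have to replace with your own.
# see which driver versions are available
aws eks describe-addon-versions --addon-name aws-ebs-csi-driver
# install the driver as a managed add-on
eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster <your-cluster-name> \
  --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-driver-role>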
Next, after the Amazon EBS CSI driver is installed, create the storage class ebs-sc, for example:
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
Now you can use gp3 volumes via a storage class on EKS.
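If the gp3 defaults (3000 IOPS, 125 MiB/s) are not enough, the EBS CSI driver also accepts gp3 tuning parameters in the storage class. The class name and values below are just illustrative:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc-fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "4000"        # example value, must stay within gp3 limits
  throughput: "200"   # MiB/s, example value
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer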
You can check by deploying these resources:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ebs-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: app-gp3-in-tree
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: persistent-storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-gp3-claim
EOF
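Because the storage class uses WaitForFirstConsumer, the PVC only binds once the pod is scheduled. A quick way to check (exact output will vary):
kubectl get pod app-gp3-in-tree   # should reach Running
kubectl get pvc ebs-gp3-claim     # STATUS should be Bound
kubectl get pv                    # the bound PV should show a gp3 volume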
Detailed documentation on Migrating Amazon EKS clusters from gp2 to gp3 EBS volumes: https://aws.amazon.com/vi/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/
References: Persistent Storage in EKS failing to provision volume
The default (in-tree) Kubernetes storage driver supports only up to gp2. To use gp3, you need to install the AWS EBS CSI driver. Here are the official instructions to install the driver.
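A quick way to see which provisioner each storage class is using (the in-tree kubernetes.io/aws-ebs versus the CSI ebs.csi.aws.com):
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner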
I installed the EFS CSI driver and got their Static Provisioning example to work: I was able to start a pod that appended to a file on the EFS volume. I could delete the pod and start another one to inspect that file and confirm the data written by the first pod was still there. But what I actually need to do is mount the volume read-only, and I am having no luck there.
Note that after I successfully ran that example, I launched an EC2 instance, mounted the EFS filesystem in it, and added the data that my pods need to access in a read-only fashion. Then I unmounted the EFS filesystem and terminated the instance.
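For reference, this is roughly how the filesystem can be mounted from an EC2 instance in the same VPC to pre-load data. The filesystem ID comes from the PV below; the region and mount point are placeholders:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-26120da7.efs.<region>.amazonaws.com:/ /mnt/efs
# ...copy the read-only data into /mnt/efs...
sudo umount /mnt/efs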
Using the configuration below, which is based on the Static Provisioning example referenced above, my pod does not start Running; it remains in ContainerCreating.
Storage class:
$ kubectl get sc efs-sc -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"efs-sc"},"provisioner":"efs.csi.aws.com"}
  creationTimestamp: "2020-01-12T05:36:13Z"
  name: efs-sc
  resourceVersion: "809880"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/efs-sc
  uid: 71ecce62-34fd-11ea-8a5f-124f4ee64e8d
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Persistent Volume (this is the only PV in the cluster that uses the EFS Storage Class):
$ kubectl get pv efs-pv-ro -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv-ro"},"spec":{"accessModes":["ReadOnlyMany"],"capacity":{"storage":"5Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-26120da7"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-01-12T05:36:59Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: efs-pv-ro
  resourceVersion: "810231"
  selfLink: /api/v1/persistentvolumes/efs-pv-ro
  uid: 8d54a80e-34fd-11ea-8a5f-124f4ee64e8d
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: efs-claim-ro
    namespace: default
    resourceVersion: "810229"
    uid: e0498cae-34fd-11ea-8a5f-124f4ee64e8d
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26120da7
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
status:
  phase: Bound
Persistent Volume Claim (this is the only PVC in the cluster attempting to use the EFS storage class):
$ kubectl get pvc efs-claim-ro -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim-ro","namespace":"default"},"spec":{"accessModes":["ReadOnlyMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-01-12T05:39:18Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: efs-claim-ro
  namespace: default
  resourceVersion: "810234"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/efs-claim-ro
  uid: e0498cae-34fd-11ea-8a5f-124f4ee64e8d
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  volumeName: efs-pv-ro
status:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 5Gi
  phase: Bound
And here is the Pod. It remains in ContainerCreating and does not switch to Running:
$ kubectl get pod efs-app -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"efs-app","namespace":"default"},"spec":{"containers":[{"args":["infinity"],"command":["sleep"],"image":"centos","name":"app","volumeMounts":[{"mountPath":"/data","name":"persistent-storage","subPath":"mmad"}]}],"volumes":[{"name":"persistent-storage","persistentVolumeClaim":{"claimName":"efs-claim-ro"}}]}}
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2020-01-12T06:07:08Z"
  name: efs-app
  namespace: default
  resourceVersion: "813420"
  selfLink: /api/v1/namespaces/default/pods/efs-app
  uid: c3b8421b-3501-11ea-b164-0a9483e894ed
spec:
  containers:
  - args:
    - infinity
    command:
    - sleep
    image: centos
    imagePullPolicy: Always
    name: app
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: persistent-storage
      subPath: mmad
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-z97dh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-254-51.ec2.internal
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim-ro
  - name: default-token-z97dh
    secret:
      defaultMode: 420
      secretName: default-token-z97dh
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    message: 'containers with unready status: [app]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    message: 'containers with unready status: [app]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-12T06:07:08Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: centos
    imageID: ""
    lastState: {}
    name: app
    ready: false
    restartCount: 0
    state:
      waiting:
        reason: ContainerCreating
  hostIP: 192.168.254.51
  phase: Pending
  qosClass: BestEffort
  startTime: "2020-01-12T06:07:08Z"
I am not sure if subPath will work with this configuration or not, but the same problem happens whether or not subPath is in the Pod configuration.
The problem does seem to be with the volume. If I comment out the volumes and volumeMounts section, the pod runs.
It seems that the PVC has bound with the correct PV, but the pod is not starting.
I'm not seeing a clue in any of the output above, but maybe I'm missing something?
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
aws-efs-csi-driver version: v0.2.0
Note that one of the requirements is Golang version 1.13.4+, but you have go1.12.12, so you have to update it. If you are upgrading from an older version of Go, you must first remove the existing version.
Take a look here: upgrading-golang.
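A minimal sketch of that upgrade on Linux, assuming the standard tarball install (the version number is just an example):
# remove the existing installation first, as the Go docs recommend
sudo rm -rf /usr/local/go
curl -LO https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
go version   # should now report go1.13.4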
This driver is supported on Kubernetes version 1.14 and later Amazon EKS clusters and worker nodes. Alpha features of the Amazon EFS CSI Driver are not supported on Amazon EKS clusters.
Regarding "Cannot mount read-only volume in Kubernetes pod (using EFS CSI driver in AWS EKS)": try changing the access mode to:
accessModes:
  - ReadWriteMany
You can find more information here: efs-csi-driver.
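Concretely, that change would look something like this for the PV and PVC above (a sketch that keeps the same names, size, and filesystem ID):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-ro
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26120da7
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-ro
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
If the container itself should only see the data read-only, you can still set readOnly: true on the pod's volumeMounts entry even with a ReadWriteMany access mode.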
Make sure that the EFS filesystem is accessible from the Kubernetes cluster when you create it. This can be achieved by creating the filesystem inside the same VPC as the Kubernetes cluster or by using VPC peering.
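One way to sanity-check this is to confirm the filesystem has mount targets in the cluster's subnets and that their security group allows NFS (TCP 2049) from the worker nodes; the security group ID below is a placeholder:
aws efs describe-mount-targets --file-system-id fs-26120da7
aws ec2 describe-security-groups --group-ids <mount-target-sg-id> \
  --query 'SecurityGroups[].IpPermissions'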
Static provisioning - the EFS filesystem needs to be created manually first; then it can be mounted inside a container as a persistent volume (PV) using the driver.
Mount options - mount options can be specified in the persistent volume (PV) to define how the volume should be mounted. Aside from normal mount options, you can also specify tls as a mount option to enable encryption in transit for the EFS filesystem.
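For example, encryption in transit can be enabled by adding tls to the PV spec:
spec:
  mountOptions:
    - tls   # encryption in transit via the EFS mount helper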
Because Amazon EFS is an elastic file system, it does not enforce any file system capacity limits. The actual storage capacity value in persistent volumes and persistent volume claims is not used when creating the file system. However, since storage capacity is a required field in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does not limit the size of your Amazon EFS file system.
I'm dynamically provisioning an EBS volume (Kubernetes on AWS through EKS) through a PersistentVolumeClaim with a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8sebs
parameters:
  encrypted: "false"
  type: gp2
  zones: us-east-1a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
PVC below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testk8sclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8sebs
  resources:
    requests:
      storage: 1Gi
And the pod that uses the volume:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: alpine
      image: alpine:3.2
      volumeMounts:
        - mountPath: "/var/k8svol"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: testk8sclaim
I need to tag the EBS volume with a custom tag.
The documentation mentions nothing about tagging for the aws-ebs provisioner, the StorageClass, or the PVC. I've spent hours trying to add a tag to the dynamically provisioned EBS volume, but no luck.
Is creating custom tags for EBS volumes possible in this scenario, and if so, how can it be achieved?
Thank you,
Greg
It seems that, at this point in time, this is not possible yet.
Found these:
https://github.com/kubernetes/kubernetes/pull/49390
https://github.com/kubernetes/kubernetes/issues/50898
Hopefully something will be done soon.
The current approach is to use the AWS EBS CSI driver instead of the Kubernetes in-tree provisioner: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
If you use this new provisioner, you can add new tags using this: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/e175fe64989019e2d8f77f5a5399bad1dfd64e6b/charts/aws-ebs-csi-driver/values.yaml#L79
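A rough example of what that looks like with the driver's Helm chart; the exact value name may differ between chart versions (check the linked values.yaml), and the tag keys/values here are made up:
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  --set controller.extraVolumeTags.team=platform \
  --set controller.extraVolumeTags.environment=dev
Every volume the driver provisions afterwards should then carry those tags.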
I am running my Docker containers with the help of a Kubernetes cluster on AWS EKS. Two of my containers use a shared volume, and they run inside two different pods, so I want a common volume that can be used by both pods on AWS.
I created an EFS volume and mounted it. I am following this link to create a PersistentVolumeClaim, but I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The VolumeId and region are correct.
Detailed error message from the pod describe:
timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw]
MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
AWS EFS uses an NFS-type volume plugin, and as per Kubernetes Storage Classes, the NFS volume plugin does not come with an internal provisioner like EBS does.
So the steps will be:
Create an external provisioner for the NFS volume plugin.
Create a storage class.
Create a volume claim.
Use the volume claim in a Deployment.
In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS you created.
In the Deployment section, change server: to the DNS endpoint of the EFS you created.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
For more explanation and details go to https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
The problem for me was that I was specifying a path in my PV other than /, and the directory on the NFS server referenced by that path did not yet exist. I had to manually create that directory first.
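In other words, if the PV points at something like path: /data in the EFS/NFS export, that directory has to exist already. A rough way to create it from a machine that can reach the filesystem (the DNS name and path are placeholders):
sudo mount -t nfs4 -o nfsvers=4.1 <efs-dns-name>:/ /mnt/efs
sudo mkdir -p /mnt/efs/data
sudo umount /mnt/efs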
The issue was that I had two EC2 instances running, but I had mounted the EFS volume to only one of them, and kubectl was always deploying pods on the EC2 instance that didn't have the mounted volume. I then mounted the same volume to both instances and used the PVC and PV below, and it is working fine.
ec2 mounting: AWS EFS mounting with EC2
PV.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com
    path: "/"
PVC.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
replicaset.yml
----- only volume section -----
volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs
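For completeness, the container section of the same ReplicaSet would reference that volume with a volumeMounts entry; the container name, image, and mount path here are only illustrative:
containers:
  - name: app
    image: nginx
    volumeMounts:
      - name: test-volume
        mountPath: /shared-data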
There are two nodes and two pods running in my cluster (one pod on each node).
My persistent volume claim is below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: blockchain-data
  annotations: {
    "volume.beta.kubernetes.io/storage-class": "blockchain-disk"
  }
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 500Gi
and my storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: blockchain-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
and I mounted it in my container like this:
spec:
  containers:
  - image: gcr.io/indiesquare-dev/geth-node:v1.8.12
    imagePullPolicy: IfNotPresent
    name: geth-node
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - name: blockchain-data
      mountPath: /root/.ethereum
  volumes:
  - name: blockchain-data
    persistentVolumeClaim:
      claimName: blockchain-data
I have replicas set to 2. When I start the deployment, the first pod starts correctly with the disk properly mounted.
However, the second pod gets stuck at ContainerCreating.
If I run kubectl describe pods, I see:
Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-c56fbb79-954f-11e8-870b-4201ac100003" Volume is already exclusively attached to one node and can't be attached to another
According to this message, I think I am trying to attach a disk that is already attached to another node.
What I want to do is to have two persistent volumes separately attached to two pods. If the pods scale up, then each should have a different volume attached.
How can I do this?
You can't attach a GCE persistent disk to multiple nodes, so if your pods land on different nodes you can't reuse the same disk.
You would need something like the ReadOnlyMany access mode, but you have ReadWriteOnce.
Read https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes
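If the goal really is a separate disk per replica (as described in the question), one common pattern is a StatefulSet with volumeClaimTemplates, which creates a distinct PVC, and therefore a distinct disk, for each replica. A rough sketch, with names, sizes, and the headless service assumed rather than taken from the question:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth-node
spec:
  serviceName: geth-node          # assumes a matching headless Service exists
  replicas: 2
  selector:
    matchLabels:
      app: geth-node
  template:
    metadata:
      labels:
        app: geth-node
    spec:
      containers:
        - name: geth-node
          image: gcr.io/indiesquare-dev/geth-node:v1.8.12
          volumeMounts:
            - name: blockchain-data
              mountPath: /root/.ethereum
  volumeClaimTemplates:
    - metadata:
        name: blockchain-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: blockchain-disk
        resources:
          requests:
            storage: 500Gi
Each replica then gets its own PVC (blockchain-data-geth-node-0, blockchain-data-geth-node-1, and so on), so no pod ever tries to attach a disk that is already attached to another node.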