I have observed two kinds of syntax for PV & PVC creation in AWS EKS:
1) Using the volume ID while creating both the PV & PVC (create the volume manually and use that ID)
2) Without using the volume ID (dynamic provisioning of the PV)
example-1:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv-aws"
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  awsElasticBlockStore:
    volumeID: vol-xxxxxxxx
    fsType: ext4
In this case, I create the EBS volume manually and use its volume ID to create both the PV and the PVC.
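For completeness, a PVC that binds to this pre-created PV might look like the following (a sketch; the claim name pvc-aws and the explicit volumeName reference are assumptions, not part of the original post):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-aws          # hypothetical name, not from the original post
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: gp2
  volumeName: pv-aws     # bind explicitly to the statically created PV above
  resources:
    requests:
      storage: 10G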
example-2:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
In this case, just creating the PVC provisions the backing volume in AWS and the PV automatically.
What is the difference, and which should be used in which scenarios? What are the pros and cons?
It should be based on your requirements. Static provisioning is generally not scalable: you have to create the volumes outside of the Kubernetes context. Mounting existing volumes is mainly useful in disaster-recovery scenarios.
Using StorageClasses, i.e. dynamic provisioning, is generally preferred because of the convenience. You can create roles and resource quotas to control and limit storage usage and decrease operational overhead.
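For example, a ResourceQuota can cap how much dynamically provisioned storage a namespace may request (a sketch; the namespace and quota values are illustrative assumptions):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota        # illustrative name
  namespace: team-a          # illustrative namespace
spec:
  hard:
    requests.storage: 500Gi              # total storage all PVCs in the namespace may request
    persistentvolumeclaims: "10"         # maximum number of PVCs in the namespace
    gp2.storageclass.storage.k8s.io/requests.storage: 300Gi  # cap for the gp2 StorageClass specifically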
I am working with a couple of k8s pods that have a PVC attached to them as an EBS volume in AWS. I made the mistake of increasing the size of the volume through the EBS console in AWS. I was thinking I could do it through the EBS console and then exec into the container on the pod and "extend the file system". After getting into the container, I realized I was not able to extend the file system directly in the container.
That is when I came across PVC and how to increase the volume through the k8s resource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"files","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"150Gi"}},"storageClassName":"default"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    volume.kubernetes.io/storage-resizer: kubernetes.io/aws-ebs
  creationTimestamp: "2021-05-20T12:18:55Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: files
  namespace: default
  resourceVersion: "202729286"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/files
  uid: a02bb805-de70-4fc8-bcef-a4943eb4ca0b
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: default
  volumeMode: Filesystem
  volumeName: pvc-a02bb805-de70-4fc8-bcef-a4943eb4ca0b
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 150Gi
  conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-06-28T21:15:01Z"
      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
      status: "True"
      type: FileSystemResizePending
  phase: Bound
I have increased the size in this resource to the same size I manually increased it to in the EBS console. Additionally, I have added the allowVolumeExpansion attribute to the StorageClass and set it to true. However, I am still seeing the old size of the volume after deleting the pods linked to this PVC. Any ideas on how I can increase the PVC would be helpful.
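For reference, the StorageClass change described above would look roughly like this (a sketch; the class name and gp2 type mirror the manifests above, but the exact parameters are assumptions):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: default                      # matches storageClassName in the PVC above
provisioner: kubernetes.io/aws-ebs   # in-tree provisioner, as shown in the annotations above
parameters:
  type: gp2
allowVolumeExpansion: true           # required for a PVC resize request to be accepted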
I have an Access Point created on AWS EFS, and now I need to share it across multiple PersistentVolumes in Kubernetes, which would eventually be used by multiple namespaces.
Is there a way to do this, or would I need to create a separate volume with a size allocation under the same mount point?
...share it across multiple Persistent Volumes in Kubernetes which would eventually be used by multiple namespaces
First, install the EFS CSI driver.
Then create the StorageClass and PersistentVolume representing the EFS volume and access point you have created:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <name>
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name>
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <name> # <-- match this to the StorageClass name
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-handle-id>::<access-point-id>
In each namespace where you wish to mount the access point, create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <name> # <-- match this to the StorageClass name
  resources:
    requests:
      storage: 1Gi # <-- match this to the PersistentVolume
As usual, you specify the volume in your spec to use it:
...
volumes:
  - name: <name>
    persistentVolumeClaim:
      claimName: <name>
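And mount it into your container, for example (a sketch; the container name and mount path are illustrative assumptions):
containers:
  - name: app                  # illustrative container name
    image: <image>
    volumeMounts:
      - name: <name>           # must match the volume name above
        mountPath: /mnt/efs    # illustrative mount path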
I'm trying to figure out how to share data between a CronJob and a Kubernetes Deployment.
I'm running Kubernetes hosted on AWS EKS.
I've created a persistent volume with a claim and have tried to loop in the claim through both the CronJob and the Deployment containers; however, after the CronJob runs on its schedule, the data still isn't in the other container where it should be.
I've seen some threads about using AWS EBS, but I'm not so sure that's the way to go.
Another thread talked about running different schedules to get the persistent volume.
- name: +vars.cust_id+-sophoscentral-logs
  persistentVolumeClaim:
    claimName: +vars.cust_id+-sophoscentral-logs-pvc
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: +vars.cust_id+-sp-logs-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: +vars.cust_id+-sp-logs-pvc
    namespace: +vars.namespace+
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/+vars.cust_id+-sophosdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: +vars.cust_id+-sp-logs-pvc
  namespace: +vars.namespace+
  labels:
    component: sp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: +vars.cust_id+-sp-logs-pv
EBS volumes do not support ReadWriteMany as a mode. If you want to stay within the AWS ecosystem, you would need to use EFS which is a hosted NFS product. Other options include self hosted Ceph or Gluster and their related CephFS and GlusterFS tools.
This should generally be avoided if possible. NFS brings a whole host of problems to the table, and while CephFS (and probably GlusterFS, but I'm less familiar with that one personally) is better, it's still a far cry from a "normal" network block device volume. Make sure you understand the limitations this brings with it before you include this in a system design.
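If you do go the EFS route, a dynamically provisioned setup with the EFS CSI driver might look roughly like this (a sketch, assuming the driver is installed; the file system ID, directory permissions, and names are placeholders/assumptions):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc                      # illustrative name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # create an EFS access point per volume
  fileSystemId: fs-xxxxxxxx         # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs                 # illustrative name
spec:
  accessModes:
    - ReadWriteMany                 # supported by EFS, unlike EBS
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                  # EFS does not enforce this size, but the field is required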
I'm dynamically provisioning an EBS volume (Kubernetes on AWS through EKS) through a PersistentVolumeClaim with a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8sebs
parameters:
  encrypted: "false"
  type: gp2
  zones: us-east-1a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
The PVC is below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testk8sclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8sebs
  resources:
    requests:
      storage: 1Gi
And the pod that uses the volume:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: alpine
      image: alpine:3.2
      volumeMounts:
        - mountPath: "/var/k8svol"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: testk8sclaim
I need to tag the EBS volume with a custom tag.
The documentation mentions nothing about tagging for the aws-ebs provisioner, the StorageClass, or the PVC. I've spent hours trying to add a tag to the dynamically provisioned EBS volume, but no luck.
Is creating custom tags for EBS volumes possible in this scenario, and if so, how can it be achieved?
Thank you,
Greg
It seems like, at this point in time, this is not possible yet.
Found these:
https://github.com/kubernetes/kubernetes/pull/49390
https://github.com/kubernetes/kubernetes/issues/50898
Hopefully something will be done soon.
The current approach is to use the AWS EBS CSI driver instead of the K8s in-tree provisioner: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
If you use this new provisioner, you can add new tags using this: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/e175fe64989019e2d8f77f5a5399bad1dfd64e6b/charts/aws-ebs-csi-driver/values.yaml#L79
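With the CSI driver's Helm chart, that roughly corresponds to setting extra volume tags in the chart values (a sketch; the tag keys and values are illustrative assumptions):
# values.yaml override for the aws-ebs-csi-driver Helm chart (sketch)
controller:
  extraVolumeTags:          # applied to every volume the driver creates
    team: platform          # illustrative tag
    cost-center: "1234"     # illustrative tag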
There are two nodes and two pods running in my cluster (one pod on each node).
My PersistentVolumeClaim is below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: blockchain-data
  annotations: {
    "volume.beta.kubernetes.io/storage-class": "blockchain-disk"
  }
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 500Gi
and my StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: blockchain-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
and I mounted it in my container like this:
spec:
  containers:
    - image: gcr.io/indiesquare-dev/geth-node:v1.8.12
      imagePullPolicy: IfNotPresent
      name: geth-node
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - name: blockchain-data
          mountPath: /root/.ethereum
  volumes:
    - name: blockchain-data
      persistentVolumeClaim:
        claimName: blockchain-data
I have replicas set to 2. When I start the deployment, the first pod starts correctly with the disk properly mounted.
However, the second pod gets stuck at ContainerCreating.
If I run kubectl describe pods, I see:
Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-c56fbb79-954f-11e8-870b-4201ac100003" Volume is already exclusively attached to one node and can't be attached to another
I think, according to this message, I am trying to attach a disk that is already attached to another node.
What I want to do is to have two persistent volumes separately attached to two pods. If the pods scale up, then each should have a different volume attached.
How can I do this?
You can't attach a GCE Persistent Disk to multiple nodes. So if your pods are landing on different nodes you can't reuse the same disk.
You need something like ReadOnlyMany access mode but you have ReadWriteOnce.
Read https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes
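If the goal is one disk per replica, a common pattern (not spelled out in the answer above, but consistent with it) is a StatefulSet with volumeClaimTemplates, which creates a separate PVC, and therefore a separate disk, for each pod. A minimal sketch, reusing names from the question where possible; the StatefulSet name and headless Service are illustrative assumptions:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth-node               # illustrative name
spec:
  serviceName: geth-node        # a headless Service with this name is assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: geth-node
  template:
    metadata:
      labels:
        app: geth-node
    spec:
      containers:
        - name: geth-node
          image: gcr.io/indiesquare-dev/geth-node:v1.8.12
          volumeMounts:
            - name: blockchain-data
              mountPath: /root/.ethereum
  volumeClaimTemplates:         # one PVC per replica: blockchain-data-geth-node-0, -1, ...
    - metadata:
        name: blockchain-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: blockchain-disk
        resources:
          requests:
            storage: 500Gi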