Kubernetes pod failing with Invalid Volume Zone mismatch

I have a Jenkins service deployed in EKS v1.16 using a Helm chart. The PV and PVC were accidentally deleted, so I recreated them as follows:
Pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-vol
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-2b/vol-xxxxxxxx
  capacity:
    storage: 120Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: jenkins-ci
    namespace: ci
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  volumeMode: Filesystem
status:
  phase: Bound
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-ci
  namespace: ci
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 120Gi
  volumeMode: Filesystem
  volumeName: jenkins-vol
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 120Gi
  phase: Bound
kubectl describe sc gp2
Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2","namespace":""},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
The issue I'm facing is that the pod does not run when it is scheduled on a node in a different availability zone than the EBS volume. How can I fix this?

Add a nodeSelector to your deployment file, which will match it to a node in the needed availability zone (in your case us-east-2b):
nodeSelector:
  topology.kubernetes.io/zone: us-east-2b
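For reference, this is a minimal sketch of where the selector sits in a Deployment's pod template (the names and image here are illustrative, not taken from your Helm chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins                  # illustrative name
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-east-2b   # pin the pod to the volume's AZ
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts               # illustrative image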

Add the following labels to the PersistentVolume (note that the region label takes the region, us-east-2, while the zone label takes the zone, us-east-2b):
labels:
  failure-domain.beta.kubernetes.io/region: us-east-2
  failure-domain.beta.kubernetes.io/zone: us-east-2b
example:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
  labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b
  name: test-pv-1
spec:
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    fsType: xfs
    volumeHandle: vol-0d075fdaa123cd0e
  capacity:
    storage: 100Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
With the above labels, the pod will automatically be scheduled in the same AZ as the volume.
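If the PV already exists, the same labels can be applied in place with kubectl instead of re-creating the object (a sketch, using the PV name from the question):

kubectl label pv jenkins-vol \
  failure-domain.beta.kubernetes.io/region=us-east-2 \
  failure-domain.beta.kubernetes.io/zone=us-east-2b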

Related

0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available

I am deploying Jenkins on a one-master, one-node Kubernetes cluster and I get an error when I try to do dynamic volume provisioning. I am not sure what went wrong; please help.
My StorageClass file:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
My PVC file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: standard
  volumeMode: Filesystem
Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jcasc
  replicas: 1
  template:
    metadata:
      labels:
        app: jcasc
    spec:
      volumes:
        - name: jenkins-pvc
          persistentVolumeClaim:
            claimName: jenkins-pvc
      containers:
        - name: jenkins
          image: jenkins:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-pvc
              mountPath: "/var/jenkins_home"
See this troubleshooting doc on fixing the "pod has unbound immediate PersistentVolumeClaims" error. For dynamic provisioning you can see this doc, and for PersistentVolumes this doc.
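Before digging into the docs, it is usually worth checking what the scheduler and the provisioner are reporting (a sketch using the names from the question):

kubectl describe pvc jenkins-pvc                            # shows why the claim is still Pending
kubectl get storageclass                                    # confirms the 'standard' class and its provisioner
kubectl get events --sort-by=.metadata.creationTimestamp    # provisioning / scheduling errors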

How can update existing aws ec2 volume id in persistent volume yaml file

I created an AWS EC2 volume using a StorageClass, PersistentVolume, and PersistentVolumeClaim. Due to an AZ problem my EC2 instance moved to another AZ, so I created a snapshot of the existing volume and created a new volume in the AZ where the instance now runs.
The problem is that I cannot update the newly created volume ID in my PersistentVolume YAML:
error: persistentvolumes "jenkins-pv" is invalid
How do I update the existing volume ID in the PV YAML?
Below are my YAML files.
StorageClass YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jenkins-sc
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
PersistentVolumeClaim YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: dev
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: jenkins-sc
PersistentVolume YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  finalizers:
    - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b
  name: jenkins-pv
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-2b/vol-0c999673840f0836e
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: jenkins-pvc
    namespace: dev
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - us-east-2b
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
                - us-east-2
  persistentVolumeReclaimPolicy: Retain
  storageClassName: jenkins-sc
  volumeMode: Filesystem
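The volume source of a PersistentVolume is immutable after creation, which is why applying the edited YAML is rejected as invalid. A common workaround (a sketch, assuming the PV's reclaim policy is Retain so the EBS disk itself is untouched) is to delete the PV object and recreate it with the new volume ID, zone labels and nodeAffinity values, then re-create the claim so it binds to the new PV:

kubectl delete pvc jenkins-pvc -n dev   # the claim must be re-created to bind to the new PV
kubectl delete pv jenkins-pv            # deletes only the API object, not the EBS disk (Retain)
# edit volumeID, the zone/region labels and the nodeAffinity values to the new volume and AZ
kubectl apply -f jenkins-pv.yaml
kubectl apply -f jenkins-pvc.yaml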

AWS EBS Volume with kubernetes issue

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv
Output: Running scope as unit run-20000.scope.
mount: /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 does not exist.
Warning FailedAttachVolume 7s (x6 over 23s) attachdetach-controller AttachVolume.NewAttacher failed for volume "pv" : Failed to get AWS Cloud Provider. GetCloudProvider returned <nil> instead
Warning FailedMount 7s kubelet, ip-172-31-3-191.us-east-2.compute.internal MountVolume.SetUp failed for volume "pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv
Output: Running scope as unit run-20058.scope.
mount: /var/lib/kubelet/pods/2e47e8b4-4755-46d6-9bc4-461ea02a6cb9/volumes/kubernetes.io~aws-ebs/pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 does not exist.
I have a Kubernetes cluster running in the same availability zone where the EBS volume is available.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: asvignesh
  name: _PVC_
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-retain
  volumeMode: Filesystem
  volumeName: _PV_
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: _PV_
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: xfs
    volumeID: aws://us-east-1a/vol-xxxxxxxxx
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: asvignesh
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: asvignesh
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: asvignesh
spec:
  selector:
    matchLabels:
      app: asvignesh
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: asvignesh
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: _PVC_
Are you running the cluster on managed Kubernetes or on bare metal?
Because the "Failed to get AWS Cloud Provider" warning usually means the AWS cloud provider is not configured: on the Kubernetes side of the house, you'll need to make sure that the --cloud-provider=aws command-line flag is present for the API server, controller manager, and every kubelet in the cluster.
Document to refer to: https://blog.scottlowe.org/2018/09/28/setting-up-the-kubernetes-aws-cloud-provider/
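A minimal sketch of what that flag looks like on a kubeadm-style control plane (file paths and surrounding fields are illustrative; a managed control plane such as EKS already has this configured for you):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        - --cloud-provider=aws
        # ...remaining flags unchanged...

# kubelet drop-in on every node, e.g. /etc/default/kubelet or /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cloud-provider=aws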
Example YAML
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
Ref : https://faun.pub/mysql-pod-with-persistent-ebs-volume-in-eks-150af369ff94

k8s Pv is not created

I want to create a PV and PVC, but the PV is not created. Can you please advise what I am doing wrong?
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "100"
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: myns
spec:
  storageClassName: io1
  volumeName: my-sc
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
The issue is that the PV is not created. What am I doing wrong?
Your StorageClass is named my-sc, but your PersistentVolumeClaim has spec.storageClassName set to io1.
The storageClassName should match an existing StorageClass name (my-sc).
You should delete your PVC and re-create it with the proper storageClassName.
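A corrected claim might look like the sketch below. Note that spec.volumeName is meant to pin a claim to one specific pre-existing PersistentVolume, so for dynamic provisioning it should simply be left out; also, because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the volume will only be provisioned once a Pod that uses the claim is scheduled.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: myns
spec:
  storageClassName: my-sc        # must match the StorageClass name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi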

Multiple Volume mounts in EKS pod

I am running AWS EKS and want one of the containers to have multiple mounts backed by the same EFS file system.
I created 1 EFS file system, 2 PVs and 2 PVCs:
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
        - name: docket
          mountPath: /docket
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: efs-data-claim
    - name: docket
      persistentVolumeClaim:
        claimName: efs-docket-claim
And these are my PV / PVCs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-data-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-docket-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-docket-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
When I deploy the pod, I always get the following error. But if I go with only 1 PVC for both mounts, it works fine. Could anyone please let me know what is happening?
There is a 1-to-1 mapping between PersistentVolumes and EFS file systems.
If you use two different EFS file systems, with volumeHandle: fs-XXXXX and volumeHandle: fs-YYYYY, it will work.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "logpv"
spec:
  capacity:
    storage: "2Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "fs-28147e70"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "qmpv"
spec:
  capacity:
    storage: "2Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "fs-13c3ac4b"
---
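If both mounts really do have to come from a single EFS file system, an alternative (a sketch; the access point ID is illustrative and assumes access points have been created on the file system) is to give each PV its own EFS access point, which the EFS CSI driver accepts in the volume handle in the form fs-id::access-point-id:

csi:
  driver: efs.csi.aws.com
  volumeHandle: "fs-XXXXX::fsap-0123456789abcdef0"   # illustrative access point ID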