After running kubectl apply -f pvc.yaml with the YAML file below, I can find the mount path /var/local/pvctest inside the container that was created. However, the host path /var/local/pvctest is not created on the worker node.
I'm new to PVs & PVCs with EKS, and any help fixing this issue is much appreciated!
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pvctest
  labels:
    alias: pvctest
spec:
  selector:
    matchLabels:
      alias: pvctest
  replicas: 1
  template:
    metadata:
      labels:
        alias: pvctest
    spec:
      containers:
        - name: pvctest
          image: neo4j
          ports:
            - containerPort: 7474
            - containerPort: 7687
          volumeMounts:
            - name: testpv
              mountPath: /var/local/pvctest
      volumes:
        - name: testpv
          persistentVolumeClaim:
            claimName: pvctest-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvtest
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/local/pvctest
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvctest-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
A PersistentVolume with hostPath requires the directory on the host to be created in advance. If you want the directory to be created automatically for you, use a hostPath volume with type: DirectoryOrCreate directly in the pod spec:
...
containers:
  - name: pvctest
    image: neo4j
    ...
    volumeMounts:
      - name: testpv
        mountPath: /var/local/pvctest
volumes:
  - name: testpv
    hostPath:
      path: /data
      type: DirectoryOrCreate
A PV/PVC is actually optional for hostPath.
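If you do want to keep the PV/PVC pair, the same auto-creation behavior should be achievable by setting the hostPath type on the PersistentVolume itself; a minimal sketch reusing the objects from the question:
# Sketch: the PV from the question, with the hostPath type set so the
# kubelet creates the directory on the node if it does not exist yet.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvtest
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/local/pvctest
    type: DirectoryOrCreate
Keep in mind that with hostPath the directory lives on whichever node the pod happens to run on, so you have to inspect that specific node to see it.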
Related
I have an EKS cluster with multiple deployments (microservices). I would like all of them to write logs to the same folder on an EFS mount, even across restarts, scaling, etc. Currently it creates a folder named after the persistent volume ID, which breaks our requirement. Is it possible to always mount the same folder, even when the persistent volume is recreated? Can it always point to the same folder in EFS?
It currently creates folders like this:
"/logs/pvc-2exxxcxs4-0xx7-4e11-813a-65xxxxxxxx/"
Instead, I would like it to be just "/logs", or some fixed path with no dependency on the PVC ID/name.
Below are the current YAMLs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        ........
          volumeMounts:
            - name: dev-logs-efs
              mountPath: /logs
      volumes:
        - name: dev-logs-efs
          persistentVolumeClaim:
            claimName: dev-logs-efs-pvc
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dev-logs-efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-xxxxxxxxxxx
  basePath: "/logs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-logs-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dev-logs-efs-sc
  resources:
    requests:
      storage: 5Gi
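One possible approach (a sketch, not tested against this setup): instead of dynamic provisioning with efs-ap, create an EFS access point whose root directory is /logs and statically provision a PV/PVC against it, so the mounted path never depends on the PVC ID. The fs-/fsap- IDs below are placeholders.
# Sketch: static provisioning against an access point rooted at /logs.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-logs-efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                  # static provisioning, no dynamic class
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxx::fsap-xxxxxxxxxxx   # file system ID + access point ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-logs-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: dev-logs-efs-pv           # bind to the fixed PV above
  resources:
    requests:
      storage: 5Gi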
I have been working on a project with Postgres and want to add it to EKS. My deployment is giving me this error:
Warning FailedScheduling 29s default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
I have tried changing the storage capacity but nothing worked.
Various components of the YAML file are below:
apiVersion: v1
kind: Secret
metadata:
  name: xdb-secret
  namespace: default
type: Opaque
data:
  django-secret-key: xxx
  django_database_name: xxx
  django_database_username: xxx
  django_database_password: xxx
  email_host_user: xxx
  email_host_password: xxx
# ---
# apiVersion: v1
# kind: PersistentVolume
# metadata:
#   name: xdb-pv
#   labels:
#     type: local
# spec:
#   capacity:
#     storage: 10Gi
#   volumeMode: Filesystem
#   accessModes:
#     - ReadWriteOnce
#   storageClassName: standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xdb-pvc
  namespace: default
  labels:
    app: local
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # volumeName: xdb-pv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xdb-deployment
  namespace: default
  labels:
    app: xdb
spec:
  selector:
    matchLabels:
      app: xdb
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: xdb
    spec:
      # initContainers:
      # Init containers are exactly like regular containers, except:
      #   - Init containers always run to completion.
      #   - Each init container must complete successfully before the next one starts.
      containers:
        - name: xdb-cont
          image: postgres:latest
          env:
            - name: DJANGO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: django-secret-key
            - name: DJANGO_DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: django_database_name
            - name: DJANGO_DATABASE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: django_database_username
            - name: DJANGO_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: django_database_password
            - name: EMAIL_HOST_USER
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: email_host_user
            - name: EMAIL_HOST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: xdb-secret
                  key: email_host_password
            - name: DJANGO_DATABASE_PORT
              value: "5432"
            - name: DJANGO_DATABASE_HOST
              value: xdb-service
          ports:
            - containerPort: 5432
              name: xdb-cont
          volumeMounts:
            - name: xdb-volume-mount
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: xdb-volume-mount
          persistentVolumeClaim:
            claimName: xdb-pvc
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: xdb-service
  namespace: default
spec:
  selector:
    app: xdb
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
Details:
replica: 1
node: 2
each node capacity: 20Gi
Capacity type: Spot
Desired size: 2 nodes
Minimum size: 2 nodes
Maximum size: 5 nodes
I am not sure what's going wrong here.
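One thing worth verifying (an assumption, since the cluster's StorageClasses are not shown): the PVC requests storageClassName: standard, but an EKS cluster typically only ships with a class named gp2 and no class named standard, so the claim never binds and the pod reports "unbound immediate PersistentVolumeClaims". A sketch of the same PVC pointed at the class EKS usually provides; verify the real name with kubectl get storageclass and make sure an EBS provisioner is actually running in the cluster:
# Sketch only: class name "gp2" is the usual EKS default, not a given.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xdb-pvc
  namespace: default
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi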
I am deploying Jenkins on a one-master, one-node Kubernetes cluster and am getting an error when I try to do dynamic volume provisioning. Not sure what went wrong; please help.
My StorageClass file:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
My PVC file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: standard
  volumeMode: Filesystem
My Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jcasc
  replicas: 1
  template:
    metadata:
      labels:
        app: jcasc
    spec:
      volumes:
        - name: jenkins-pvc
          persistentVolumeClaim:
            claimName: jenkins-pvc
      containers:
        - name: jenkins
          image: jenkins:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-pvc
              mountPath: "/var/jenkins_home"
See this troubleshooting doc on fixing the "pod has unbound immediate PersistentVolumeClaims" error.
For dynamic provisioning, see this doc.
For PersistentVolumes, see the PV doc.
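As a side note, the in-tree kubernetes.io/aws-ebs provisioner only works if the cluster is configured with the AWS cloud provider. On a self-managed one-master/one-node cluster it is often easier to install the aws-ebs-csi-driver and point the StorageClass at it instead; a hedged sketch, assuming that driver is deployed:
# Sketch: same class, but backed by the EBS CSI driver.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer   # bind once the pod is scheduled, so the volume lands in the right AZ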
I am running AWS EKS and want one of the containers to have multiple mounts backed by the same EFS.
I created 1 EFS file system, 2 PVs and 2 PVCs:
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
        - name: docket
          mountPath: /docket
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: efs-data-claim
    - name: docket
      persistentVolumeClaim:
        claimName: efs-docket-claim
And these are my PV / PVCs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-data-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-docket-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-docket-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
When I deploy the pod, I always get the following error.
But if I go with only one PVC for both mounts, it works fine. Could anyone please let me know what is happening?
There is a 1-to-1 mapping between PersistentVolumes and EFS file systems here: both of your PVs use the same volumeHandle, so they are treated as the same underlying volume.
If you use two different EFS file systems, with volumeHandle: fs-XXXXX and volumeHandle: fs-YYYYY, it will work.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "logpv"
spec:
  capacity:
    storage: "2Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "fs-28147e70"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "qmpv"
spec:
  capacity:
    storage: "2Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "fs-13c3ac4b"
---
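If you would rather keep a single EFS file system, another approach that should work (assuming you can create EFS access points) is to give each PV a distinct access point on the same file system, so the volumeHandles stay unique; a sketch with hypothetical fsap- IDs:
# Sketch: two PVs on one EFS file system, distinguished by access points.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX::fsap-aaaaaaaa   # hypothetical access point ID
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-docket-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXX::fsap-bbbbbbbb   # hypothetical access point ID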
I'm collecting Prometheus metrics from a uWSGI application hosted on Kubernetes, but the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster, and I have assigned persistent storage to it.
How do I retain the metrics from the pods even after they deleted?
The Prometheus deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus
  name: prometheus
spec:
  type: LoadBalancer
  loadBalancerIP: ...
  ports:
    - port: 80
      protocol: TCP
      targetPort: 9090
  selector:
    app: prometheus
Application deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-app
  template:
    metadata:
      labels:
        app: api-app
    spec:
      containers:
        - name: nginx
          image: nginx
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
          ports:
            - containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 50Mi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
        - name: api-app
          image: azurecr.io/app_api_se:opencv
          workingDir: /app
          command: ["/usr/local/bin/uwsgi"]
          args:
            - "--die-on-term"
            - "--manage-script-name"
            - "--mount=/=api:app_dispatch"
            - "--socket=/var/run/app/uwsgi.sock"
            - "--chmod-socket=777"
            - "--pyargv=se"
            - "--metrics-dir=/storage"
            - "--metrics-dir-restore"
          resources:
            requests:
              cpu: 150m
              memory: 1Gi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: storage
              mountPath: /storage
      volumes:
        - name: app-api
          emptyDir: {}
        - name: storage
          persistentVolumeClaim:
            claimName: app-storage
        - name: nginx-conf
          configMap:
            name: app
      tolerations:
        - key: "sku"
          operator: "Equal"
          value: "test"
          effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api-app
  name: api-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: api-app
Your issue is with the type of controller used to deploy Prometheus. A Deployment is the wrong choice here: it is meant for stateless applications that don't need to keep any persistent identity or data across Pod rescheduling.
You should switch to a StatefulSet* if you require the data (the metrics scraped by Prometheus) to persist across Pod (re)scheduling.
*This is how Prometheus is deployed by default with prometheus-operator.
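A minimal sketch of such a StatefulSet, reusing the container spec from the question; the governing service name, storage class, and size below are placeholders you would adapt to your cluster:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: default
spec:
  serviceName: prometheus          # placeholder: a headless service governing the StatefulSet
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
  volumeClaimTemplates:            # one PVC per replica, retained across Pod rescheduling
    - metadata:
        name: prometheus-storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: default  # placeholder: use a class that exists in your cluster
        resources:
          requests:
            storage: 20Gi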
With this configuration for a volume, the data is removed when the pod goes away. You are basically looking for a PersistentVolume; see the documentation and example.
Also check PersistentVolumeClaim.
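For completeness, the app-storage claim referenced by the Deployment's storage volume is not shown in the question; a minimal sketch of what it could look like (the access mode, class name, and size here are assumptions):
# Sketch of the claim referenced by the Deployment; class and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage
spec:
  accessModes:
    - ReadWriteMany              # assumed, so both replicas can share /storage
  storageClassName: azurefile    # placeholder: whatever shared-storage class your cluster provides
  resources:
    requests:
      storage: 5Gi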