I have a database running in a Kubernetes cluster on AWS. The database is deployed as a StatefulSet with 3 replicas. Each replica uses an AWS EBS volume as its persistent storage.
If I shut down a database node, Kubernetes automatically starts a new one. The newly started node finds its corresponding persistent volume (AWS EBS volume) without any problems.
But what happens if I shut down the whole Kubernetes cluster? The AWS EBS volumes are still there, but will the Kubernetes cluster or the database StatefulSet find its corresponding persistent volumes on AWS after a full cluster restart?
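For reference, such a setup might look roughly like this (names, image, and storage class below are illustrative placeholders, not from the original setup):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                        # placeholder name
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13      # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2     # assumes an EBS-backed storage class exists
        resources:
          requests:
            storage: 10Gi
```

Each replica gets a PVC with a deterministic name (data-db-0, data-db-1, data-db-2), which is why a replacement pod finds the same EBS volume again: as long as the PVC and PV objects still exist in the cluster state after a restart, the pods rebind to them.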
Kubernetes relies on etcd for state storage. If you're using kops to bring your cluster up, then your etcd data is backed by AWS EBS volumes. It is recommended to back up your etcd periodically so that you can fully recover from a disaster.
See here:
https://github.com/kubernetes/kops/blob/master/docs/etcd_backup.md
Related
I have a managed AWS EKS cluster and the EC2 instances (k8s nodes) can run in any availability zone (ap-northeast-1a, 1b, 1c, 1d in my case). I have a MinIO pod running on the cluster that uses an EBS volume (created automatically) in the ap-northeast-1d AZ. The EC2 instance running in this zone died 2 days ago and a new EC2 instance was created automatically, but in a different zone, ap-northeast-1a. Now my pod cannot be scheduled on the new node because it cannot mount the volume (AZ mismatch).
I am looking for a better solution to tackle this issue.
Here is what I did -
I created a snapshot of my existing volume and created a new volume in the same zone from that snapshot. I modified the PV and PVC resources, changing the AZ and volume ID, and the pods were able to schedule again and are running fine.
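For illustration, the PV fields that had to change look roughly like this (name, volume ID, and zone values are placeholders; the exact zone label key depends on the Kubernetes version):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv                                          # placeholder name
  labels:
    failure-domain.beta.kubernetes.io/zone: ap-northeast-1a   # newer clusters use topology.kubernetes.io/zone
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: aws://ap-northeast-1a/vol-0123456789abcdef0     # placeholder; the new volume created from the snapshot
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - ap-northeast-1a                              # must match the zone of the new volume
```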
I believe this is a dirty workaround, and therefore I am looking for a more appropriate solution.
Any suggestions would be appreciated.
I am using a manually created Kubernetes cluster (set up with kubeadm) deployed on AWS EC2 instances (manually created EC2 instances). I want to use AWS EBS volumes as Kubernetes persistent volumes. How can I use AWS EBS volumes within the Kubernetes cluster for persistent volumes?
Cluster details:
kubectl version: 1.19
kubeadm version: 1.19
Posted as a community wiki for better visibility with a general solution, as no further details / logs were provided. Feel free to expand it.
The officially supported way to mount an Amazon Elastic Block Store volume as a Kubernetes volume on a self-managed Kubernetes cluster running on AWS is to use the awsElasticBlockStore volume type.
To manage the lifecycle of Amazon EBS volumes on a self-managed Kubernetes cluster running on AWS, please install the Amazon Elastic Block Store Container Storage Interface (CSI) driver.
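As a rough sketch (not an official example), a pre-created EBS volume can be referenced directly from a pod with the awsElasticBlockStore volume type; the names and volume ID below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ebs-test                  # placeholder name
spec:
  containers:
    - name: app
      image: nginx                # placeholder image
      volumeMounts:
        - name: ebs-volume
          mountPath: /data
  volumes:
    - name: ebs-volume
      awsElasticBlockStore:
        volumeID: vol-0123456789abcdef0   # placeholder; ID of an existing EBS volume in the node's AZ
        fsType: ext4
```

Note that this in-tree volume type requires the cluster to run with the AWS cloud provider enabled and the volume must be in the same availability zone as the node; with the CSI driver installed, you would instead create a StorageClass with provisioner ebs.csi.aws.com and let PVCs provision volumes dynamically.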
We have a kops-based k8s cluster running on AWS with deployments using EFS as persistent volumes; now we would like to migrate to EKS along with those PVC-backed deployments.
Could someone help me with migrating deployments that use PersistentVolumeClaims to an EKS cluster in AWS?
You cannot move PersistentVolumeClaims to another cluster; you need to re-create them in the new cluster. You need to back up the data and restore it from the backup in the new cluster.
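As a hedged sketch only: re-creating a static EFS-backed PV and PVC in the new cluster, assuming the EFS CSI driver is installed on EKS (names and the filesystem ID below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                 # EFS ignores the value, but the field is required
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```

The deployments in the new cluster then reference efs-claim like any other PVC.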
I wanted to use some AWS EBS volumes as persistent storage for a deployment. I've configured the storage class and a PV, but I haven't been able to configure a cloud provider.
The K8s documentation (as far as I understand) is for Kubernetes clusters running on a specific cloud provider, instead of an on-prem cluster using cloud resources. As the title says: Is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster?
If so, can you add a cloud provider to your existing cluster? (Everything I've found online suggests that you add it when running kubeadm init.)
Thank you!
You cannot use EBS storage in the same manner as you would when running in the cloud, but you can use AWS Storage Gateway to store snapshots/backups of your volumes in the cloud.
AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud.
The feature you are interested in is called Volume Gateway:
The Volume Gateway presents your applications block storage volumes using the iSCSI protocol. Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes, and stored in the cloud as Amazon EBS snapshots.
Unfortunately, you might not be able to automate volume creation the way you can when running directly on AWS, so some things you may have to do manually.
No, you cannot, because EBS volumes can only be mounted inside AWS (usually on EC2 instances).
Currently I am using Kubernetes v1.11.6.
I deployed Kubernetes on AWS using kops.
In the k8s cluster, I deployed Kafka and Elasticsearch.
The PVCs for Kafka and Elasticsearch are backed by EBS volumes in AWS.
My question is how to monitor how much PVC space is used and how much remains available.
This did not work: How to monitor disk usage of kubernetes persistent volumes?
The metrics mentioned there no longer seem to be exposed starting from 1.12.
I thought of using AWS CloudWatch, but I assume Kubernetes has an answer to this generic problem.
I should be able to see the used and remaining available disk space for each PVC.
Generally speaking, you can monitor the following metrics:
kubelet_volume_stats_capacity_bytes
kubelet_volume_stats_available_bytes
These metrics can be scraped from the kubelet endpoint on each node with tools like Prometheus :)
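For example, a minimal sketch of a Prometheus alerting rule built on those metrics (group name, threshold, and labels are arbitrary placeholders):

```yaml
groups:
  - name: pvc-usage
    rules:
      - alert: PersistentVolumeClaimAlmostFull
        # Fires when less than 10% of the PVC's capacity is still available.
        expr: |
          kubelet_volume_stats_available_bytes
            / kubelet_volume_stats_capacity_bytes < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.persistentvolumeclaim }} in {{ $labels.namespace }} is over 90% full"
```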