Currently I am using Kubernetes v1.11.6.
I deployed Kubernetes on AWS using kops.
In the k8s cluster, I deployed Kafka and Elasticsearch.
The PVCs for Kafka and Elasticsearch are backed by EBS volumes in AWS.
My question is how to monitor how much of each PVC is used and how much remains available.
This did not work: How to monitor disk usage of kubernetes persistent volumes?
The metrics no longer seem to be exposed starting from 1.12.
I thought of using AWS CloudWatch, but I am thinking Kubernetes will have some answer for this generic problem.
I should be able to see PVC used and remaining available disk space.
Generally speaking, you can monitor the following metrics:
kubelet_volume_stats_capacity_bytes
kubelet_volume_stats_available_bytes
These metrics can be scraped from the kubelet endpoint on each node with tools like Prometheus :)
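For example, a Prometheus alerting rule can combine the two metrics to flag volumes that are almost full. A minimal sketch, where the 10% threshold and the group/alert names are arbitrary placeholders:

```yaml
# Sketch of a Prometheus alerting rule based on the kubelet volume metrics.
# The 10% free-space threshold and the names are placeholders; adjust them
# to your environment.
groups:
  - name: pvc-usage
    rules:
      - alert: PersistentVolumeClaimFillingUp
        expr: |
          kubelet_volume_stats_available_bytes
            / kubelet_volume_stats_capacity_bytes < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.persistentvolumeclaim }} in {{ $labels.namespace }} has less than 10% free space"
```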
I'm using Amazon Elastic Kubernetes Service, and I need to run a Fargate service on Spot capacity.
Is it possible for EKS? Currently, I can find information about ECS only. So what steps should I follow to achieve this?
Fargate for EKS does not support Spot instances yet. You can upvote here for the feature.
In the documentation it is mentioned:
With AWS Fargate, there are no upfront costs and you pay only for the
resources you use. You pay for the amount of vCPU, memory, and storage
resources consumed by your containerized applications running on
Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes
Service (EKS).
reference: https://aws.amazon.com/fargate/pricing/
This shows that Fargate Spot should work just fine for AWS EKS.
I am using a manually created Kubernetes cluster (set up with kubeadm) deployed on manually created AWS EC2 instances. I want to use AWS EBS volumes for Kubernetes persistent volumes. How can I use AWS EBS volumes within the Kubernetes cluster for persistent volumes?
Cluster details:
kubectl version: 1.19
kubeadm version: 1.19
Posted as a community wiki answer for better visibility with a general solution, as there are no further details/logs provided. Feel free to expand it.
The officially supported way to mount Amazon Elastic Block Store as a Kubernetes volume on a self-managed Kubernetes cluster running on AWS is to use the awsElasticBlockStore volume type.
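For illustration, a minimal sketch of a PersistentVolume using that in-tree volume type; the volume ID, size, and filesystem type are placeholders, and the EBS volume must already exist in the same availability zone as the node that mounts it:

```yaml
# Sketch of a statically provisioned PV backed by an existing EBS volume.
# volumeID, capacity and fsType are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: "<volume-id>"   # e.g. an existing vol-... ID (placeholder)
    fsType: ext4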
To manage the lifecycle of Amazon EBS volumes on a self-managed Kubernetes cluster running on AWS, install the Amazon Elastic Block Store Container Storage Interface (CSI) Driver.
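With the CSI driver installed, dynamic provisioning is typically wired up through a StorageClass. A minimal sketch, assuming the driver's ebs.csi.aws.com provisioner and illustrative parameters:

```yaml
# Sketch of a StorageClass for dynamic provisioning via the EBS CSI driver.
# Assumes the driver is installed and nodes/controllers have IAM permissions
# to create and attach EBS volumes. Parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
```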
I wanted to use some AWS EBS volumes as persistent storage for a deployment. I've configured the storage class and a PV, but I haven't been able to configure a cloud provider.
The K8s documentation (as far as I understand) is for Kubernetes clusters running on a specific cloud provider, instead of an on-prem cluster using cloud resources. As the title says: Is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster?
If so, can you add a cloud provider to your existing cluster? (Everything I've found online suggests that you add it when running kubeadm init.)
Thank you!
You cannot use EBS storage in the same manner as you would when running in the cloud, but you can use AWS Storage Gateway to store snapshots/backups of your volumes in the cloud.
AWS Storage Gateway is a hybrid cloud storage service that connects
your existing on-premises environments with the AWS Cloud
The feature you are interested in is called Volume Gateway.
The Volume Gateway presents your applications block storage volumes
using the iSCSI protocol. Data written to these volumes can be
asynchronously backed up as point-in-time snapshots of your volumes,
and stored in the cloud as Amazon EBS snapshots.
Unfortunately, you might not be able to automate the creation of volumes the way you could when running directly on AWS, so some things you might have to do manually.
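If you do expose a Volume Gateway volume over iSCSI to your on-prem nodes, it can be consumed through Kubernetes' in-tree iscsi volume type. A minimal sketch, where the target portal, IQN, and LUN are placeholders that depend entirely on how the gateway volume was created:

```yaml
# Sketch of a PV pointing at an iSCSI volume exposed by a Volume Gateway.
# targetPortal, iqn and lun are placeholders for your gateway's values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gateway-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.10:3260          # gateway IP:port (placeholder)
    iqn: iqn.1997-05.com.amazon:myvolume  # placeholder IQN
    lun: 0
    fsType: ext4
```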
No, you cannot, because EBS volumes can only be mounted inside AWS (usually on EC2 instances).
I am looking into the cluster-level scalability of Kubernetes using the Cluster Autoscaler. These articles (1, 2) talk about the possibility of bursting from on-premises to cloud using the cluster autoscaler, but I am not able to see any instructions on how to achieve this.
Also, this talks about having the entire K8s cluster on EC2 instances and achieving auto-scaling using the cluster autoscaler.
I also understand that cluster autoscaler does not come with Kubernetes and needs to be installed separately.
So these are my questions:
Is it possible to scale nodes up and down from on-premises to the AWS cloud using the cluster autoscaler or by other means?
If the above is possible, any reference links would be of great help.
I have a database running in a Kubernetes cluster on AWS. The database is deployed as a StatefulSet with 3 replicas. Each replica uses an AWS EBS volume as its persistent volume.
If I shut down a database node, Kubernetes automatically starts a new one. The newly started node finds its corresponding persistent volume (AWS EBS volume) without any problems.
But what happens if I shut the Kubernetes cluster down? The AWS EBS volumes are still there. But does the Kubernetes cluster or the database StatefulSet find its corresponding persistent volumes on AWS after a full cluster restart?
Kubernetes relies on etcd for state storage. If you're using kops to bring your cluster up, then your etcd is backed by AWS EBS volumes. It is recommended to back up your etcd periodically to be able to fully recover from disaster.
See here:
https://github.com/kubernetes/kops/blob/master/docs/etcd_backup.md
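For context on why the StatefulSet finds its volumes again: PVC names produced by volumeClaimTemplates are deterministic (e.g. data-db-0, data-db-1, data-db-2), and the bound PV objects, which carry the EBS volume IDs, live in etcd along with the rest of the cluster state. A minimal sketch of such a StatefulSet; the image, storage class, and sizes are placeholders:

```yaml
# Sketch of a 3-replica StatefulSet whose PVCs (data-db-0..2) each bind to
# an EBS-backed PV. Image, storageClassName and sizes are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13          # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2         # assumed EBS-backed class
        resources:
          requests:
            storage: 20Gi
```

As long as those PV/PVC objects survive (or etcd is restored from backup), each replica reattaches its own EBS volume after a full cluster restart.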