I am using a manually created Kubernetes Cluster (using kubeadm) deployed on AWS ec2 instances (manually created ec2 instances). I want to use AWS EBS volumes for Kubernetes persistent volume. How can I use AWS EBS volumes within Kubernetes cluster for persistent volumes?
Cluster details:
kubectl version: 1.19
kubeadm version: 1.19
Posted community wiki for better visibility with general solution as there are no further details / logs provided. Feel free to expand it.
The officially supported way to mount an Amazon Elastic Block Store volume as a Kubernetes volume on a self-managed Kubernetes cluster running on AWS is to use the awsElasticBlockStore volume type.
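As a sketch (the volume ID, size, and names are placeholders, and this assumes the cluster was started with the AWS cloud provider enabled), a PersistentVolume backed by a pre-created EBS volume using the in-tree awsElasticBlockStore type can look like this:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    # Placeholder ID of an EBS volume created beforehand,
    # in the same availability zone as the node that will mount it
    volumeID: vol-0123456789abcdef0
    fsType: ext4
```

A pod (or a PersistentVolumeClaim bound to this PV) can then mount it like any other volume; note that EBS volumes are ReadWriteOnce, so only one node can mount the volume at a time.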
To manage the lifecycle of Amazon EBS volumes on a self-managed Kubernetes cluster running on AWS, install the Amazon Elastic Block Store Container Storage Interface (CSI) Driver.
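Once the EBS CSI driver is installed, dynamic provisioning is typically wired up with a StorageClass and a PVC along these lines (names and the requested size are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
# Delay volume creation until a pod is scheduled, so the EBS volume
# is provisioned in the same availability zone as the consuming node
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
```

With this in place, creating the PVC and referencing it from a pod causes the driver to create and attach an EBS volume automatically.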
I have hosted my K8s cluster on AWS EC2. I want to use AWS EBS self-provisioning in my K8s cluster for PVCs, so I came across EBS-CSI, but I can't find a document that helps me install it into my self-hosted cluster.
https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
I'm using Amazon Elastic Kubernetes Service, and I need to run a Fargate service on a Spot node.
Is this possible for EKS? Currently I can only find information about ECS. What steps should I follow to achieve this?
Fargate for EKS does not support Spot instances yet. You can upvote the feature request here.
The documentation mentions:
With AWS Fargate, there are no upfront costs and you pay only for the
resources you use. You pay for the amount of vCPU, memory, and storage
resources consumed by your containerized applications running on
Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes
Service (EKS).
reference: https://aws.amazon.com/fargate/pricing/
This suggests that Fargate Spot should work just fine with AWS EKS.
I wanted to use some AWS EBS volumes as a persistent storage for a deployment. I've configured the storage class and a PV, but I haven't been able to configure a Cloud provider.
The K8s documentation (as far as I understand) is for Kubernetes clusters running on a specific cloud provider, instead of an on-prem cluster using cloud resources. As the title says: Is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster?
If so, can you add a cloud provider to your existing cluster? (Everything I've found online suggests that you add it when running kubeadm init.)
Thank you!
You cannot use EBS storage in the same manner as you would when running on the cloud, but you can use AWS Storage Gateway to store snapshots/backups of your volumes in the cloud.
AWS Storage Gateway is a hybrid cloud storage service that connects
your existing on-premises environments with the AWS Cloud
The feature you are interested in is called Volume Gateway:
The Volume Gateway presents your applications block storage volumes
using the iSCSI protocol. Data written to these volumes can be
asynchronously backed up as point-in-time snapshots of your volumes,
and stored in the cloud as Amazon EBS snapshots.
Unfortunately, you might not be able to automate volume creation in the way you could when running directly on AWS, so some things may have to be done manually.
No, you cannot, because EBS volumes can only be mounted inside AWS (usually on EC2 instances).
I have a database running in a Kubernetes cluster on AWS. The database is deployed as a StatefulSet with 3 replicas. Each replica uses an AWS EBS storage as its persistent volume.
If I shutdown a database node, Kubernetes starts automatically a new one. The newly started node finds its corresponding persistent volume (AWS EBS volume) without any problems.
But what happens if I shut the Kubernetes cluster down? The AWS EBS volumes are still there. But does the Kubernetes cluster or the database StatefulSet find its corresponding persistent volumes on AWS after a full cluster restart?
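For reference, the StatefulSet is defined roughly like this (a sketch; the image, names, and storage class are hypothetical). Each replica gets a PVC named after the template and the pod, e.g. data-db-0:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13   # hypothetical database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2   # assumes an EBS-backed StorageClass
        resources:
          requests:
            storage: 10Gi
```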
Kubernetes relies on etcd for state storage, including the binding between PVCs and PVs. If you're using kops to bring your cluster up, then your etcd data is stored on AWS EBS volumes. It is recommended to back up your etcd periodically so you can fully recover from a disaster.
See here:
https://github.com/kubernetes/kops/blob/master/docs/etcd_backup.md
I've read that AWS does not support Kubernetes and has built its own Docker orchestration engine, EC2 Container Service. However, on the Kubernetes getting-started page there is a guide on how to run Kubernetes on AWS:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md
Which is right?
You can install Kubernetes on a normal Amazon EC2 server.
The new container service is a separate offering by Amazon, called ECS.
EDIT: In 2018, AWS released a managed container service for Kubernetes called EKS: https://aws.amazon.com/eks/
Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
Kubernetes provides scripts that make it simple to set up a cluster on a set of EC2 machines. The setup does pretty much everything needed to get started quickly.
Here is the link: https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/getting-started-guides/aws.md
Yes, it's possible to set up Kubernetes on AWS. See: http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html
You can also manually set up Kubernetes on AWS by launching an EC2 instance.
For setting it up on a Red Hat AMI: https://access.redhat.com/articles/1353773
(Note: Kubernetes needs a flannel network to be set up for managing networking between Docker containers running on different hosts (minions).)
Amazon's Container Service (ECS) is unrelated to Kubernetes.
There are 3 main options for installing Kubernetes on AWS:
CoreOS has a CLI for installing and managing Kubernetes on AWS: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
Kubernetes has scripts for setting up a cluster on AWS: http://kubernetes.io/docs/getting-started-guides/aws/
Manual installation on EC2. There are lots of options here: http://kubernetes.io/docs/getting-started-guides/#cloud
As an aside, Minikube is now available, which is nice for running locally to try things out:
http://kubernetes.io/docs/getting-started-guides/minikube/
AWS recently launched EKS, which provides managed Kubernetes master nodes. This should be what you are looking for.
Yes. You can use kubeadm to install Kubernetes on EC2 instances.
Other tools are also available:
kops
EKS
kubeadm