I am looking into the cluster-level scalability of Kubernetes using the Cluster Autoscaler. These articles (1, 2) talk about the possibility of bursting from on-premises to the cloud using the Cluster Autoscaler, but I am not able to find any instructions on how to achieve this.
This also talks about running the entire K8s cluster on EC2 instances and achieving auto-scaling with the Cluster Autoscaler.
I also understand that the Cluster Autoscaler does not come with Kubernetes and needs to be installed separately.
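From what I can see in the autoscaler repo, on a pure-AWS setup it is deployed as an ordinary Deployment that is pointed at one or more Auto Scaling groups. A trimmed sketch of what that looks like; the ASG name and image tag are placeholders, and the required ServiceAccount/RBAC and IAM setup are omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
      - name: cluster-autoscaler
        # image tag is a placeholder; pick one matching your cluster version
        image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        # format is min:max:ASG-name; "my-worker-asg" is a placeholder
        - --nodes=1:10:my-worker-asg
        - --v=4
```

But this always assumes the nodes live in an AWS Auto Scaling group, which is what makes the on-premises-to-cloud case unclear to me.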
So, these are my questions:
Is it possible to scale nodes up and down from on-premises into the AWS cloud using the Cluster Autoscaler, or by some other means?
If the above is possible, any reference links would be of great help.
Related
I am considering using AWS EKS to deploy a Kubernetes cluster, but I wonder whether it supports bringing other nodes into the cluster. Those nodes may come from GCP, on-prem infrastructure, etc.
I have two Kubernetes clusters, one on Google Cloud Platform (GKE) and one on Amazon Web Services built using Kops.
How do I implement multi-cluster communication between a pod in the AWS Kops cluster and a pod in the GKE cluster?
My Kops cluster uses flannel in vxlan mode for networking, so there are no routes in the route table on the AWS side, and creating a VPC-level VPN tunnel does not help. I need to set up a VPN tunnel between the Kubernetes clusters and manage the routes myself.
Please advise me as to how I can achieve this.
I am new to AWS EKS. I have an application for which I need one worker node (a pod) of the Kubernetes cluster to run on my on-premises infrastructure. Is that possible, and if so, how can I achieve it?
In theory you can run EKS and on-prem Kubernetes clusters at the same time and manage them via a single federation control plane. I've never tried it with EKS, but EKS is mostly vanilla Kubernetes, so it should work.
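If you go that route, one option is the KubeFed project: you register each cluster with kubefedctl join and then place workloads through federated resources. A rough sketch, assuming the clusters were joined under the placeholder names eks-cluster and onprem-cluster:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  template:                 # an ordinary Deployment, minus apiVersion/kind
    metadata:
      labels:
        app: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx:1.25
  placement:
    clusters:
    - name: eks-cluster      # placeholder: name used at kubefedctl join
    - name: onprem-cluster   # placeholder
```

Again, I have not verified this against EKS specifically, so treat it as a starting point rather than a recipe.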
I am using AWS EC2 to install a Rancher cluster, and then setting up a Kubernetes cluster from the Rancher server.
For auto scaling, there are a few ways to do it:
Use Rancher cattle webhook service
https://rancher.com/docs/rancher/v1.6/en/cattle/webhook-service/
This approach uses Prometheus to monitor CPU usage and then adds or removes nodes based on the alerts it fires.
Use Terraform to generate rancher-master-ha, rancher-nodes, networking, and the database dynamically
http://rancher.com/aws-rancher-building-resilient-stack/
This works well for a Rancher cluster.
Horizontal Pod Autoscaling Walkthrough
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
This is the official way to do horizontal pod autoscaling (a minimal manifest sketch is shown after this list).
Kubernetes Autoscaler
https://github.com/kubernetes/autoscaler
This is also an official way to auto-scale a Kubernetes cluster.
Use AWS Auto Scaling
https://aws.amazon.com/autoscaling/
For this option, how do I connect it to the Rancher cluster and the Kubernetes cluster running on EC2?
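For reference, the Horizontal Pod Autoscaler option above only needs a manifest like the following; note that it scales the pods of a Deployment, not the EC2 nodes. The names and thresholds here are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU utilization
```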
There are many ways to do auto scaling, but which is the best? And, most importantly, how do I use AWS Auto Scaling with this architecture?
Since you deployed Kubernetes with Rancher, you should use Rancher webhooks for this operation.
Use Prometheus/Grafana to fire a webhook when CPU utilization goes over some threshold, along the lines of the sketch below.
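A rough sketch of what that looks like: a Prometheus alerting rule plus an Alertmanager receiver that posts to the Rancher webhook trigger URL. The expression, threshold, and URL are placeholders you will need to adapt, and depending on the receiver you may need a small adapter between the Alertmanager payload and the Rancher webhook call.

```yaml
# Prometheus rule file (e.g. cpu-alerts.yml) -- threshold is a placeholder
groups:
- name: node-cpu
  rules:
  - alert: NodeCPUHigh
    expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Node CPU above 80% for 10 minutes"

# alertmanager.yml -- route the alert to the Rancher webhook service
route:
  receiver: rancher-scale-up
receivers:
- name: rancher-scale-up
  webhook_configs:
  # paste the trigger URL generated by the Rancher webhook service here
  - url: "<trigger URL generated by the Rancher webhook service>"
```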
We have set up two Kubernetes clusters on AWS, one using the kube-up script and the other with the kops create cluster command. We find that inter-pod communication in the kops-created cluster is very slow, which makes us suspect a routing problem. If anyone has experienced this issue and has a solution to suggest for reducing the latency, it would be much appreciated.