I have a Kubernetes cluster set up and I want to stop it so it doesn't generate additional costs, but keep my deployments and configurations saved so that it will work when I start it again. I tried disabling autoscaling and resizing the node pool, but I get the error INVALID_ARGUMENT: Autopilot clusters do not support mutating node pools.
With GKE (Autopilot or not) you pay for two things:
The control plane, fully managed by Google
The workers: node pools for standard GKE, the running Pods for GKE Autopilot.
In both cases, you can't stop the control plane because you don't manage it. The only solution is to delete the cluster.
In both cases, you can scale your Pods/node pools to 0 and therefore remove the worker cost.
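For a standard (non-Autopilot) cluster, scaling an existing node pool down to zero is a single command; the cluster, pool, and zone names below are placeholders:

```sh
# Scale the worker node pool of a standard GKE cluster to 0 nodes (placeholder names)
gcloud container clusters resize my-standard-cluster \
  --node-pool default-pool \
  --num-nodes 0 \
  --zone us-central1-a
```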
That being said, in your case you have no solution other than to delete your Autopilot cluster and save your configuration in config files (the YAML files). Next time you want to start your Autopilot cluster, create a new one, load your config, and that's all.
For persistent data, you have to save it externally (on GCS for instance) and reload it as well. That's the boring part.
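A minimal sketch of that save / delete / recreate workflow with gcloud and kubectl; the cluster name, region, and bucket are placeholders, and ideally you already keep the original manifests in version control instead of dumping live objects:

```sh
# Export the workload configuration you want to keep (or reuse your versioned manifests)
kubectl get deployments,services,configmaps,secrets,ingresses -A -o yaml > cluster-backup.yaml

# Copy persistent data to a GCS bucket (placeholder bucket name)
gsutil -m cp -r /path/to/exported-data gs://my-cluster-backup/

# Delete the Autopilot cluster to stop the billing
gcloud container clusters delete my-autopilot-cluster --region europe-west1

# Later: recreate the cluster and reload the configuration
gcloud container clusters create-auto my-autopilot-cluster --region europe-west1
gcloud container clusters get-credentials my-autopilot-cluster --region europe-west1
kubectl apply -f cluster-backup.yaml
```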
Note: you get the management fee of 1 cluster free per billing account.
We are using GKE to host our apps with Anthos. Our default node pool is set to autoscale, but I noticed that out of 5 running nodes, only 2 are hosting our actual services.
All the others are only running internal GKE/Anthos system services.
The issue with that is that there isn't enough room left for running our own services. I guess these system workloads are vital for the cluster, otherwise the autoscaler would remove those nodes.
What would be the best approach to solve this issue? I thought of upgrading the nodes' machine type to allow more resources per node, leaving more room within each one and thus running fewer nodes, but I wanted to make sure I was not simply missing something about how GKE works.
I've been digging for quite some time now, but it seems that would be my only option.
GKE itself requires several add-on resources which are deployed as part of your cluster. You can fine-tune the resource usage of some of the GKE add-ons for smaller clusters. Additionally, each Anthos capability you enable typically deploys a set of controllers as well. GKE and Anthos try to minimize the compute resources used by these services/controllers, but you do need to account for them when calculating the right size(s) for your nodes. A good rule of thumb is to assume that system services/controllers will use ~1 vCPU when using GKE/Anthos (it's typically lower than that, but it makes things easier). So if your workloads all request >= 1 vCPU, you'll likely need to use nodes that have a minimum of 4 vCPUs. You'll also want to enable the cluster autoscaler for your node pools if you don't want to pre-provision everything.
A better option would be to use node auto-provisioning: in that case you don't need to create/manage your own node pools, because GKE will automatically add/remove nodes and node pools based on the resources requested by your deployments.
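For example, node auto-provisioning can be enabled on an existing cluster with gcloud; the cluster name, region, and resource limits below are illustrative placeholders, not recommendations:

```sh
# Enable node auto-provisioning with cluster-wide resource limits (illustrative values)
gcloud container clusters update my-cluster \
  --region us-central1 \
  --enable-autoprovisioning \
  --min-cpu 1 --max-cpu 64 \
  --min-memory 1 --max-memory 256
```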
Good morning. I am doing some tests with the new Google Kubernetes Engine Autopilot mode. I know that it automates a lot of the machine resource management, but I am not sure exactly what it automates. Does it only take care of provisioning the hardware resources that I set inside my PodSpec, or does it also take care of scaling the number of containers I have up and down based on traffic intensity?
I am coming from Cloud Run, so my main question is: with GKE Autopilot, do I need to do something for it to create new container instances when the traffic intensity increases, or is it all managed automatically? Do I need to set up HPA, VPA and other autoscaler technologies when using Autopilot?
For GKE Autopilot, you still need to create the HPA and VPA configuration yourself.
GKE Autopilot handles the scaling of nodes by default.
You can read more at: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#comparison
Scaling (Autopilot): Pre-configured. Autopilot handles all the scaling and configuring of your nodes.
Scaling (Standard): Default. You configure Horizontal Pod autoscaling (HPA) and Vertical Pod autoscaling (VPA).
Do I need to set HPA, VPA and other autoscaler technologies when using autopilot?
A node autoscaler is not required, as node scaling is managed by GKE by default and nodes will be scaled as your workloads require; Pod-level autoscaling (HPA/VPA) is still something you configure yourself.
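As a hedged example, creating an HPA on Autopilot works the same way as on a standard cluster; the deployment name and thresholds below are placeholders:

```sh
# Create an HPA for a hypothetical deployment named "my-app":
# keep between 2 and 10 replicas, targeting 60% average CPU utilization
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=60

# Check the resulting HorizontalPodAutoscaler
kubectl get hpa my-app
```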
The Cloud Run on GKE documentation says
Note that although these instructions don't enable cluster autoscaling to resize clusters for demand, Cloud Run for Anthos on Google Cloud automatically scales instances within the cluster.
Does that mean that if I create a Cloud Run cluster using the default configuration, my service will never scale past the capacity of the three nodes of the cluster?
Is it possible to enable Kubernetes autoscaling for Cloud Run clusters, or will that conflict with the internal Cloud Run autoscaler? I'd like to be able to scale up my Cloud Run cluster to many nodes, but take advantage of the autoscaler to avoid wasting resources.
You can define an autoscaling NodePool.
The warning just means that the Cloud Run (or Knative) autoscaler only manages Pod autoscaling and doesn't manage node autoscaling.
Node autoscaling is managed by Kubernetes (the cluster autoscaler), based on the resources your Pods request.
Remember, you can't scale to 0 nodes, but you can scale to 0 Pods. In addition, node scaling is very slow compared to Pod scaling.
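A sketch of adding an autoscaling node pool to an existing cluster; the cluster name, zone, and node counts are placeholders:

```sh
# Add a node pool with the cluster autoscaler enabled (placeholder names and sizes)
gcloud container node-pools create autoscaled-pool \
  --cluster my-cloudrun-cluster \
  --zone us-central1-a \
  --num-nodes 1 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```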
I created an EKS cluster but while deploying pods, I found out that the native AWS CNI only supports a set number of pods because of the IP restrictions on its instances. I don't want to use any third-party plugins because AWS doesn't support them and we won't be able to get their tech support. What happens right now is that as soon as the IP limit is hit for that instance, the scheduler is not able to schedule the pods and the pods go into pending state.
I see there is a cluster autoscaler which can do horizontal scaling.
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
Using a larger instance type with more available IPs is an option but that is not scalable since we will run out of IPs eventually.
Is it possible to set a pod limit for each node in cluster-autoscaler so that, when that limit is reached, a new instance is spawned? Since each pod uses one secondary IP of the node, that would solve our issue and we wouldn't have to worry about scaling. Is this a viable option? Also, if anybody has faced this, I'd appreciate hearing how you overcame this limitation.
EKS's node groups use an Auto Scaling group for node scaling.
You can follow this workshop as a dedicated example.
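To be clear about the pod limit: you don't set it in the cluster autoscaler itself. EKS already caps pods per node based on the instance type's ENI/IP limits, and the autoscaler adds a node whenever a pod stays Pending because no existing node has room for it. A rough sketch, with placeholder cluster and instance values:

```sh
# Max pods per node with the AWS VPC CNI = ENIs * (IPv4 addresses per ENI - 1) + 2
# e.g. for t3.medium: 3 ENIs * (6 - 1) + 2 = 17 pods

# Create a managed node group backed by an Auto Scaling group (placeholder values)
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --name autoscaled-ng \
  --node-type m5.large \
  --nodes 3 --nodes-min 3 --nodes-max 10 \
  --asg-access
```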
We have configured a Kubernetes cluster on EC2 machines in our AWS account using the kops tool (https://github.com/kubernetes/kops), based on AWS posts (https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/) as well as other resources.
We want to set up a K8s cluster of masters and slaves such that:
It will automatically resize (both masters as well as nodes/slaves) based on system load.
Runs in multi-AZ mode, i.e. at least one master and one slave in every AZ (availability zone) in the same region, e.g. us-east-1a, us-east-1b, us-east-1c and so on.
We tried to configure the cluster in the following ways to achieve the above.
Created a K8s cluster on AWS EC2 machines using kops with the below configuration: node count=3, master count=3, zones=us-east-1c,us-east-1b,us-east-1a. We observed that a K8s cluster was created with 3 master and 3 slave nodes, with one master and one slave in each of the 3 AZs.
Then we tried to resize the nodes/slaves in the cluster using (https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml). We set node_asg_min to 3 and node_asg_max to 5. When we increased the workload on the slaves such that the autoscaling policy was triggered, we saw that additional slave nodes (beyond the default 3 created during setup) were spawned, and they joined the cluster in various AZs. This worked as expected. There is no question here.
We also wanted to set up the cluster such that the number of masters increases based on system load. Is there some way to achieve this? We tried a couple of approaches and the results are shared below:
A) We were not sure if the cluster autoscaler helps here, but we nevertheless tried to resize the masters in the cluster using (https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml). This is useful while creating a new cluster, but it was not useful for resizing the number of masters in an existing cluster. We did not find a parameter to specify node_asg_min and node_asg_max for masters the way they exist for slave nodes. Is there some way to achieve this?
B) We increased the MIN count from 1 to 3 in the ASG (auto-scaling group) associated with one of the three IGs (instance groups), one per master. We found that new instances were created; however, they did not join the master cluster. Is there some way to achieve this?
Could you please point us to steps or resources on how to do this correctly, so that we can configure the number of masters to automatically resize based on system load while running in multi-AZ mode?
Kind regards,
Shashi
There is no need to scale Master nodes.
Master components provide the cluster’s control plane. Master components make global decisions about the cluster (for example, scheduling), and detect and respond to cluster events (for example, starting up a new pod when a replication controller’s ‘replicas’ field is unsatisfied).
Master components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all master components on the same machine, and do not run user containers on this machine. See Building High-Availability Clusters for an example multi-master-VM setup.
Master node consists of the following components:
kube-apiserver
Component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane.
etcd
Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.
kube-scheduler
Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.
kube-controller-manager
Component on the master that runs controllers.
cloud-controller-manager
runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.
For more detailed explanation please read the Kubernetes Components docs.
Also, if you are thinking about HA, you can read about Creating Highly Available Clusters with kubeadm.
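For context, a minimal sketch of how an HA control plane is bootstrapped with kubeadm: the number of control-plane nodes is fixed up front (an odd number, for etcd quorum) rather than autoscaled. LOAD_BALANCER_DNS and the token/hash/key values are placeholders printed by kubeadm itself:

```sh
# On the first control-plane node (placeholder load balancer endpoint)
sudo kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
  --upload-certs

# On each additional control-plane node, using the values printed by kubeadm init
sudo kubeadm join LOAD_BALANCER_DNS:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <key>
```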
I think your assumption is that, similar to Kubernetes nodes, masters divide the work between each other. That is not the case, because the main task of the masters is to maintain consensus with each other. This is done with etcd, which is a distributed key-value store. Maintaining such a store is easy for 1 machine but gets harder the more machines you add.
The advantage of adding masters is being able to survive more master failures, at the cost of having to make all masters fatter (more CPU/RAM, etc.) so that they perform well enough.