Add specific GKE labels (not Kubernetes ones) to a node pool - google-cloud-platform

We have two dev teams working on different parts of our product. Both teams share the same cluster, each with its working version of the code deployed in a separate namespace so they can test without interfering with each other.
Now we want each team to have its own budget for the testing environment, which means we need the usage cost for each team. From what I know about GCP, the only way to keep track of the cost of each resource is to attach labels to it. The development cluster we have already has a GKE label which is shared across all resources created by the cluster.
The problem is that, since both teams use the same cluster, they share the same GKE labels. So I would like to have one node pool for each team, with specific labels on each one.
I couldn't find anything that would allow me to do that, so I decided to ask here.
It would be very overkill to create a separate cluster for each team.

You can use the cluster resource usage metering feature of GKE. It lets you track usage by Kubernetes namespace and labels, broken down per resource (CPU, memory, etc.).
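For illustration, a minimal sketch of enabling it, assuming a cluster named dev-cluster, a BigQuery dataset cluster_usage, and a project my-project (all placeholder names; the export table name follows the schema GKE creates in the dataset):

```sh
# Export cluster resource usage records to BigQuery
# (cluster and dataset names are placeholders).
gcloud container clusters update dev-cluster \
    --resource-usage-bigquery-dataset=cluster_usage

# Later, break usage down per namespace (one namespace per team).
bq query --use_legacy_sql=false '
  SELECT namespace, usage.unit, SUM(usage.amount) AS amount
  FROM `my-project.cluster_usage.gke_cluster_resource_usage`
  GROUP BY namespace, usage.unit'
```

Since each team already deploys to its own namespace, grouping the exported records by namespace gives you a per-team cost breakdown without needing per-team node pools.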

Related

Cluster nodes only used by internal pods

We are using GKE to host our apps with Anthos. Our default node pool is set to autoscale, but I noticed that out of 5 running nodes, only 2 are hosting our actual services.
All the others are running internal services.
The issue with that is that there's not enough room left for running our own services. I guess these are vital for the cluster; otherwise the autoscaler would remove those nodes.
What would be the best approach to solve this issue? I thought of upgrading the nodes' machine type to allow more resources per node, have more room within each, and thus run fewer nodes, but I wanted to make sure I was not simply missing something about how GKE works.
I've been digging for quite some time now, but it seems that would be my only option.
GKE itself requires several add-on resources which are deployed as part of your cluster. You can fine-tune the resource usage of some of the GKE add-ons for smaller clusters. Additionally, each Anthos capability you enable typically deploys a set of controllers as well. GKE and Anthos try to minimize the compute resources used by these services/controllers, but you do need to account for them when calculating the right size(s) for your nodes. A good rule of thumb is to assume that system services/controllers will use ~1 vCPU when using GKE/Anthos (it's typically lower than that, but it makes things easier). So if your workloads all request >=1 vCPU, you'll likely need to use nodes that have a minimum of 4 vCPUs. You'll also want to enable the cluster autoscaler for your node pools if you don't want to pre-provision everything.
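If you want to see how much room those system services actually take before resizing, you can inspect the requests directly; a quick check along these lines (plain kubectl, nothing GKE-specific assumed):

```sh
# Show how much of each node's allocatable capacity is already requested.
kubectl describe nodes | grep -A 8 "Allocated resources"

# List system pods with their CPU/memory requests.
kubectl get pods -n kube-system -o custom-columns=\
'NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu,MEM:.spec.containers[*].resources.requests.memory'
```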
A better option would be to use node auto-provisioning: in that case you don't need to create or manage your own node pools, as GKE will automatically add/remove nodes and node pools based on the resources requested by your deployments.
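As a sketch, enabling it on an existing cluster could look like this (the cluster name and resource limits are placeholders to size for your own workloads):

```sh
# Enable node auto-provisioning with cluster-wide resource limits;
# GKE then creates/deletes node pools to fit pending pods.
gcloud container clusters update my-cluster \
    --enable-autoprovisioning \
    --min-cpu=1 --max-cpu=64 \
    --min-memory=1 --max-memory=256
```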

Is there a distributed multi-cloud service mesh solution that's available? Something that cuts across GCP, AWS, Azure and even on-premise setup?

Yes, it is possible with the Istio multi-cluster, single-mesh model.
According to the Istio documentation:
Multiple clusters
You can configure a single mesh to include multiple clusters. Using a multicluster deployment within a single mesh affords the following capabilities beyond that of a single cluster deployment:
Fault isolation and fail over: cluster-1 goes down, fail over to cluster-2.
Location-aware routing and fail over: Send requests to the nearest service.
Various control plane models: Support different levels of availability.
Team or project isolation: Each team runs its own set of clusters.
(Figure: a service mesh with multiple clusters)
Multicluster deployments give you a greater degree of isolation and availability but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.
You can configure inter-cluster communication based on the network and the options supported by your cloud provider. For example, if two clusters reside on the same underlying network, you can enable cross-cluster communication by simply configuring firewall rules.
Single mesh
The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name mysvc in the foo namespace. Additionally, workload instances share a common identity since service account names are unique within a namespace, just like service names.
A single mesh can span one or more clusters and one or more networks. Within a mesh, namespaces are used for tenancy.
Hope it helps.
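As a rough sketch of the multi-cluster, single-mesh setup described above (mesh, cluster, and network names are placeholders; see the Istio multicluster installation docs for the full procedure):

```sh
# Install Istio in each cluster with a shared mesh ID but a
# distinct cluster name (all values are placeholders).
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
istioctl install -f cluster1.yaml

# Let cluster2 discover services running in cluster1 by sharing
# API-server credentials (repeat in the other direction as needed).
istioctl x create-remote-secret --name=cluster1 | \
    kubectl apply -f - --context=cluster2
```

The same pattern applies whether the clusters run on GCP, AWS, Azure, or on-premise, as long as the control planes can reach each other's API servers.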
An alternative could be using this tool with a Kubernetes cluster that spans all of your selected cloud providers at the same time, without the hassle of managing each of them separately.

Kubernetes on AWS with multiple accounts?

I wonder if it is possible to run a single EKS cluster within one AWS account and give access to it (the entire cluster or specific namespaces) to another account.
Here's a scenario:
In my company we have multiple customers and host their systems within AWS. We'd like to set up an AWS Organizations structure with subaccounts per customer (+ maybe separate accounts for prod and test). Some of the customers are already being migrated to Kubernetes, so we need an EKS cluster. Now, setting up separate clusters for each customer would not be cost-effective: partly because each control plane would cost over 100 USD, and partly because we would need separate node groups for each customer, which would decrease the benefits of scale.
For this reason I thought of setting up a single EKS cluster and giving the subaccounts created for customers access to it.
Can I achieve this? And how can I do it relatively simply?
Follow these steps:
Create a separate namespace for each customer rather than creating a separate cluster.
Define resource quotas at the namespace level to manage the resources.
Create RBAC Roles and RoleBindings to control access at the namespace level for each customer, as in the sketch below.
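A minimal sketch of those three steps for one customer (the namespace, quota limits, and group name are illustrative; the group would be mapped from the customer account's IAM role via the aws-auth ConfigMap):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: customer-a
---
# Cap the resources this customer's workloads can request.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Namespace-scoped permissions for the customer's team.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: customer-a-dev
  namespace: customer-a
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "services", "configmaps", "deployments", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-a-dev-binding
  namespace: customer-a
subjects:
- kind: Group
  name: customer-a-devs   # hypothetical group from the aws-auth mapping
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: customer-a-dev
  apiGroup: rbac.authorization.k8s.io
EOF
```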

Best Practice GCP - GKE | Multiple services

We have different GCP projects, namely for DEV/STAGE/PROD.
In the DEV project we have two services running in one cluster as part of Phase 1, in a custom VPC network and subnet.
As the project expands into what we call Phase 2, we will be adding more services to the DEV GCP project, going from 2 services to 6.
The discussion we are currently having is whether, for Phase 2, to have the services in:
- the same cluster, or
- a different cluster
Considering the ingress rules and page-routing policies, it would be great if veterans could give some leads on which of the above approaches would be good for the project.
You can use the same cluster. If you have insufficient resources to deploy all the pods you need for the various services, consider scaling up the cluster instead of creating a new one. You may also want to consider node pool autoscaling or node auto-provisioning, as sketched below.
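For instance, turning on autoscaling for an existing node pool could look like this (cluster, pool, and node bounds are placeholders):

```sh
# Let the existing pool grow and shrink instead of adding a cluster.
gcloud container clusters update dev-cluster \
    --node-pool=default-pool \
    --enable-autoscaling \
    --min-nodes=1 --max-nodes=6
```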
There are really only 2 limitations on the number of services in a cluster: the total number of k8s objects (this is somewhere near 300k~400k and is a limitation of etcd), and the number of service IPs provided at cluster creation (the secondary range you assigned for services).
Aside from the above two limitations, I don't really see much of a reason to create new clusters for the new services. If you have in-house design requirements, that is different, but from a purely k8s or GKE point of view, you can definitely continue to use the same cluster.

Kubernetes - adding more nodes

I have a basic cluster, which has a master and 2 nodes. The 2 nodes are part of an AWS autoscaling group, asg1. These 2 nodes are running application1.
I need to be able to add further nodes that run application2 to the cluster.
Ideally, I'm looking to maybe have a multi-region setup, whereby application2 can run in multiple regions but be part of the same cluster (not sure if that is possible).
So my question is, how do I add nodes to a cluster, more specifically in AWS?
I've seen a couple of articles whereby people have spun up the instances and then manually logged in to install the kubelet and various other things, but I was wondering if it could be done in a more automated way?
Thanks
If you followed these instructions, you should have an autoscaling group for your minions.
Go to AWS panel, and scale up the autoscaling group. That should do it.
If you did it somehow manually, you can clone a machine by selecting an existing minion/slave and choosing "Launch more like this".
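If you prefer the CLI over the console, scaling the group could look like this (the group name and capacity are placeholders, and it assumes the group's launch configuration bootstraps the kubelet as in the linked setup):

```sh
# Raise the desired capacity; the new instances join the cluster
# automatically via the group's launch configuration.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name asg1 \
    --desired-capacity 4
```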
As Pablo said, you should be able to add new nodes (in the same availability zone) by scaling up your existing ASG. This will provision new nodes that will be available for you to run application2. Unless your applications can't share the same nodes, you may also be able to run application2 on your existing nodes without provisioning new nodes if your nodes are big enough. In some cases this can be more cost effective than adding additional small nodes to your cluster.
To your other question, Kubernetes isn't designed to be run across regions. You can run a multi-zone configuration (in the same region) for higher availability applications (which is called Ubernetes Lite). Support for cross-region application deployments (Ubernetes) is currently being designed.