I've got an existing GKE cluster and I'd like to configure it with private nodes as per the GKE hardening guide.
It seems the option to select a private cluster is disabled in the cluster configuration UI, and setting it in Terraform with a private_cluster_config block forces destruction of the cluster.
Is there no way to configure private nodes for an existing cluster?
Unfortunately at this point it is not possible:
You cannot convert an existing, non-private cluster to a private cluster.
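For reference, here is a minimal sketch of what the Terraform side looks like when a cluster is created as private from the start; the names, region, and CIDRs are placeholders, not your actual values. Adding a private_cluster_config block like this to an existing google_container_cluster is exactly what forces the destroy-and-recreate, so the realistic path is standing up a new private cluster and migrating workloads to it.

```hcl
# Sketch only: a GKE cluster has to be created as private; it cannot be
# converted later. All names/CIDRs below are hypothetical.
resource "google_container_cluster" "private" {
  name     = "my-private-cluster"
  location = "europe-west1"

  # Private clusters must be VPC-native (alias IP ranges).
  ip_allocation_policy {}

  private_cluster_config {
    enable_private_nodes    = true            # nodes get internal IPs only
    enable_private_endpoint = false           # keep a public control-plane endpoint
    master_ipv4_cidr_block  = "172.16.0.0/28" # /28 reserved for the control plane
  }

  # Restrict who can reach the (still public) control-plane endpoint.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24"         # hypothetical office range
      display_name = "office"
    }
  }

  initial_node_count = 1
}
```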
So, I am trying to set up an EKS cluster using the Terraform EKS module.
Everything is good until I try to set cluster_endpoint_public_access to false. I can no longer access the cluster with my kubeconfig, and I can no longer apply Terraform changes to the cluster; it fails with the error "CLUSTER UNREACHABLE".
Is there a solution for this? Did I maybe forget something?
Setting cluster_endpoint_public_access_cidrs is not practical because there would be far too many IPs to list (team members, the GitLab CI, etc.).
Thank you
You are probably trying to access the cluster from outside the VPC. Since you closed off access from public networks, it is just doing what you told it to.
Try creating an EC2 instance in the same VPC, access it over SSH, use the same credentials you used when creating the EKS cluster, and you should see that the kubeconfig works from that machine.
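If it helps, here is a rough sketch of keeping the private endpoint reachable from inside the VPC while turning off public access. It assumes the terraform-aws-modules/eks module and a companion VPC module; exact variable names can vary between module versions, and the cluster name is made up:

```hcl
# Sketch, assuming the terraform-aws-modules/eks module; names are placeholders.
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-cluster"              # hypothetical
  cluster_version = "1.27"
  vpc_id          = module.vpc.vpc_id         # assumes a companion VPC module
  subnet_ids      = module.vpc.private_subnets

  # Turn off the public endpoint, but keep the private one so that
  # anything inside the VPC (bastion, CI runner, VPN/peering) can still
  # reach the Kubernetes API.
  cluster_endpoint_public_access  = false
  cluster_endpoint_private_access = true
}
```

With this, terraform apply and kubectl keep working as long as they run from somewhere with network reachability into the VPC (an EC2 bastion, a CI runner in a private subnet, or over VPN/Direct Connect).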
I am a bit confused about EKS cluster endpoint access versus an EKS private cluster. An EKS private cluster needs to use ECR as its container registry, but if I keep the EKS cluster endpoint private, does that mean it is a private cluster?
The EKS cluster endpoint is orthogonal to the way you configure the networking for your workloads. Usually an EKS private cluster is a cluster WHOSE NODES AND WORKLOADS do not have outbound access to the internet (commonly used by big enterprises with hybrid connectivity, so that data only travels within a private network, i.e. the VPC and on-prem). The endpoint is where your kubectl points to, and it is a separate setting: it can be public, private, or both at the same time. In most cases, if you want an EKS private cluster, it is likely that you also want the endpoint to be private, but that is just the obvious choice, not a technical requirement.
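To illustrate the workload-networking side (not the endpoint setting): with no outbound internet access, nodes typically pull images from ECR through VPC endpoints. A rough sketch follows; the VPC/subnet/route-table IDs and region are placeholders:

```hcl
# Sketch: ECR access for a cluster with no outbound internet.
# All IDs and the region below are hypothetical.
resource "aws_vpc_endpoint" "ecr_api" {
  vpc_id              = "vpc-0123456789abcdef0"
  service_name        = "com.amazonaws.eu-west-1.ecr.api"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = ["subnet-aaa", "subnet-bbb"]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr_dkr" {
  vpc_id              = "vpc-0123456789abcdef0"
  service_name        = "com.amazonaws.eu-west-1.ecr.dkr"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = ["subnet-aaa", "subnet-bbb"]
  private_dns_enabled = true
}

# ECR stores image layers in S3, so a gateway endpoint is needed as well.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = "vpc-0123456789abcdef0"
  service_name      = "com.amazonaws.eu-west-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = ["rtb-ccc"]
}
```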
I would like to create a bastion host to manage a private GKE cluster on GCP.
The bastion host is a GCE VM named bastion.
The cluster is a GKE private cluster named cluster.
The flow should be:
User -> (SSH via IAP) -> bastion -> (gke control-plane) -> cluster
For both resources, I would like to create and configure two service accounts from scratch in order to follow the principle of least privilege.
Do you have any suggestions for the optimal setup for scopes and roles?
For a better overview of how to handle GKE clusters for production purposes, I would suggest taking a look at this article, specifically the section dedicated to Private Clusters, which mentions the option of using VPC Service Controls to help mitigate the risk of data exfiltration.
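As a starting point, here is one possible least-privilege sketch in Terraform. The project, names, user, and the choice of roles/container.developer are my assumptions rather than an official recommendation, so tighten them to your own needs:

```hcl
# Sketch of a minimal bastion setup; all names/IDs are hypothetical.
resource "google_service_account" "bastion" {
  account_id   = "bastion-sa"
  display_name = "Bastion host service account"
}

# The bastion only needs to talk to the GKE control plane, so a narrow
# GKE role is usually enough (restrict further if you can).
resource "google_project_iam_member" "bastion_gke" {
  project = "my-project"
  role    = "roles/container.developer"
  member  = "serviceAccount:${google_service_account.bastion.email}"
}

resource "google_compute_instance" "bastion" {
  name         = "bastion"
  machine_type = "e2-micro"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  # No external IP block: reachable only through IAP TCP forwarding.
  network_interface {
    subnetwork = "my-subnet"
  }

  service_account {
    email = google_service_account.bastion.email
    # cloud-platform scope plus IAM roles is the usual pattern: access is
    # governed by the roles above rather than by legacy scopes.
    scopes = ["cloud-platform"]
  }
}

# Allow a user (or group) to open IAP tunnels to the bastion.
resource "google_project_iam_member" "iap_tunnel_user" {
  project = "my-project"
  role    = "roles/iap.tunnelResourceAccessor"
  member  = "user:alice@example.com"
}
```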
I'm new to Kubernetes and GCP. I have one cluster, called A, where my service is running. In order to run, it needs an Elasticsearch cluster. I would like to run Elasticsearch in a different GKE cluster so it won't have any impact on my current services. I created another cluster, called B, and now I have an issue: I don't know how to make the service in cluster A connect to the Elasticsearch pod in cluster B.
Currently I've found two solutions:
a) make Elasticsearch accessible on the public internet - I want to avoid that solution
b) add another container to the service that would authenticate to cluster B and run kubectl proxy
Neither of them seems like the perfect solution. How can I solve such a case? Do you have any tutorials, articles, or tips on how to solve it properly?
edit:
important notes:
both clusters belong to the default VPC
the clusters already exist; I cannot tear them down and recreate them with a different configuration
You can expose your service with type LoadBalancer and the internal load balancer annotation. This will create an internal load balancer that the other GKE cluster can reach over the VPC.
This article details the steps.
This article explains the available service options on GKE in detail.
With these options you don't have to expose your service to the public internet or destroy your GKE clusters.
Hope this helps.
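For completeness, here is a rough sketch of such an internal LoadBalancer Service, written with the Terraform kubernetes provider since the clusters already exist. The names, namespace, and port are placeholders, and the annotation key depends on your GKE version (older versions use cloud.google.com/load-balancer-type instead):

```hcl
# Sketch: expose Elasticsearch in cluster B via an internal load balancer
# that cluster A can reach over the shared VPC. Names are hypothetical.
resource "kubernetes_service" "elasticsearch_internal" {
  metadata {
    name      = "elasticsearch-internal"
    namespace = "default"
    annotations = {
      # Tells GKE to create an internal (VPC-only) load balancer.
      "networking.gke.io/load-balancer-type" = "Internal"
    }
  }

  spec {
    type = "LoadBalancer"
    selector = {
      app = "elasticsearch"
    }
    port {
      port        = 9200
      target_port = 9200
    }
  }
}
```

Cluster A then connects to the internal IP assigned to that load balancer, and nothing is exposed to the public internet.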
I'm trying to set up a Kubernetes cluster using kops,
with all of my nodes and masters running in private subnets in my existing AWS VPC.
When passing the VPC ID and network CIDR to the create command, I'm forced to have EnableDNSHostnames=true.
I wonder if it's possible to set up a cluster with that option set to false,
so that all of the instances launched in the private VPC won't have public addresses.
Thanks
It's completely possible to run in private subnets; that's how I deploy my cluster (https://github.com/upmc-enterprises/kubernetes-on-aws), where all servers are in private subnets and access is granted via bastion boxes.
For kops specifically, it looks like there's support (https://github.com/kubernetes/kops/issues/428), but I'm not a big user of it, so I can't speak 100% to how well it works.