EKS Cluster endpoint access

I am a bit confused about EKS cluster endpoint access versus an EKS private cluster. An EKS private cluster needs to use ECR as its container registry. But if I set the EKS cluster endpoint to private, does that mean it is a private cluster?

The EKS cluster endpoint is orthogonal to the way you configure the networking for your workloads. Usually an EKS private cluster is a cluster whose nodes and workloads do not have outbound access to the internet (commonly used by big enterprises with hybrid connectivity, so that data only travels within a private network, i.e. the VPC and on-prem). The endpoint is what your kubectl points to, and it is a separate setting: it can be public, private, or both at the same time. In most cases, if you want an EKS private cluster it is likely that you want the endpoint to be private as well, but that is just the obvious choice, not a technical requirement.
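You can check and flip the endpoint setting independently of your workload networking. A minimal sketch with the AWS CLI, where the cluster name and region are placeholders:

    # Show the current endpoint access flags for the cluster
    aws eks describe-cluster --name my-cluster --region eu-west-1 \
      --query 'cluster.resourcesVpcConfig.[endpointPublicAccess,endpointPrivateAccess]'

    # Make the endpoint private-only (kubectl then works only from inside the VPC)
    aws eks update-cluster-config --name my-cluster --region eu-west-1 \
      --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

Either way, the nodes' outbound internet access is unaffected, which is the point made above.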

Related

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed (I found aws-node)?
What I have tried:
deployed my own EKS cluster
read the documentation
searched for additional info
The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane allows inbound traffic on port 443 from the public subnet; create a rule for this if one doesn't exist. This enables communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would from your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands (see the sketch after this list).
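Putting the steps together, a minimal sketch of the bastion workflow; the security group IDs, key name, bastion IP, region, and cluster name are all placeholder assumptions:

    # Step 3: allow 443 from the bastion's security group to the cluster security group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0cluster1234567890 \
      --protocol tcp --port 443 \
      --source-group sg-0bastion1234567890

    # Steps 2 and 4: SSH in, then point kubectl at the private endpoint from inside the VPC
    ssh -i bastion-key.pem ec2-user@<bastion-public-ip>
    aws eks --region eu-west-1 update-kubeconfig --name my-private-cluster
    kubectl get nodes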
Sidenote:
If this is for an enterprise project, you can also look into using AWS Site-to-Site VPN or Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

How to make an Aurora instance reachable over the internet?

I have a staging and a production EKS cluster on AWS, and they use different DBs.
I need to deploy a replica of the prod app deployment in a temp namespace inside the staging cluster.
Now, the temp deployment needs to be connected to the prod Aurora.
The staging and production clusters are in separate VPCs which [unfortunately] have the same CIDRs, so I cannot peer the two VPCs.
Also, the Aurora cluster is deployed in private subnets.
One [temporary] solution I am thinking of is to essentially make the private subnet the Aurora writer is deployed into public, and have my app in the staging cluster reach the prod DB over the internet.
I found the private subnet that the Aurora writer is deployed into
Found the routing table that it uses
Could I just change the routing rule from 0.0.0.0/0 -> NAT-12345 to 0.0.0.0/0 -> IGW-12345, so that instead of the NAT gateway it uses the Internet Gateway?
Is this viable, and if so, do I need to do anything else for the DB endpoint, e.g. the-prod-aurora-postgres.cluster-something123.uk-west-45.rds.amazonaws.com, to be reachable over the internet?
Yes, that's correct. You have to:
Create a public subnet (with a route to the IGW).
Change the Aurora connectivity to publicly accessible.
Adjust the security group to allow access only from the EKS cluster.
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-private-public-endpoints/
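A sketch of those steps with the AWS CLI; the route table, instance identifier, security group ID, and the staging cluster's egress IP are placeholder assumptions:

    # Swap the subnet's default route from the NAT gateway to the Internet Gateway
    aws ec2 replace-route --route-table-id rtb-12345 \
      --destination-cidr-block 0.0.0.0/0 --gateway-id igw-12345

    # Flip the writer instance to publicly accessible (a per-instance setting)
    aws rds modify-db-instance \
      --db-instance-identifier prod-aurora-writer \
      --publicly-accessible --apply-immediately

    # Only allow Postgres traffic from the staging cluster's NAT egress IP
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aurora1234567890 \
      --protocol tcp --port 5432 \
      --cidr 203.0.113.10/32

Once the instance is publicly accessible, the cluster endpoint DNS name resolves to a public IP from outside the VPC and continues to resolve to the private IP from inside it.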

How to launch EKS node group into a private subnet without a NAT gateway?

I am using EKS and I want to enhance security by keeping one of the two node groups in a private subnet. However, I have read a few documents from AWS which say that if a node group is launched in a private subnet, there has to be a NAT gateway so that the node group can reach the AWS control plane VPC and communicate with the master. Setting up a NAT gateway would be too expensive. If there is a workaround I can use, I would be happy to know about it. I know that using eksctl we can launch a node group into a private subnet without NAT, but I need something that can be done without eksctl. If I am wrong in my understanding, please do let me know.
AWS provides an explanation and a VPC template (amazon-eks-fully-private-vpc.yaml) for EKS without NAT in a post titled:
How do I create an Amazon EKS cluster and node groups that don't require access to the internet?
Instead of NAT, VPC interface endpoints are used for:
ec2
logs
ecr.api
ecr.dkr
sts
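For reference, a sketch of creating those interface endpoints with the AWS CLI; the VPC, subnet, security group, and route table IDs and the region are placeholder assumptions:

    # One interface endpoint per service the nodes need to reach privately
    for svc in ec2 logs ecr.api ecr.dkr sts; do
      aws ec2 create-vpc-endpoint \
        --vpc-id vpc-12345 \
        --vpc-endpoint-type Interface \
        --service-name com.amazonaws.eu-west-1.$svc \
        --subnet-ids subnet-aaa111 subnet-bbb222 \
        --security-group-ids sg-0endpoints123 \
        --private-dns-enabled
    done

    # ECR serves image layers from S3, so the template also adds a gateway endpoint
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-12345 \
      --vpc-endpoint-type Gateway \
      --service-name com.amazonaws.eu-west-1.s3 \
      --route-table-ids rtb-12345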

Least privilege IAM setup for managing a GKE private cluster using a bastion host

I would like to create a bastion host to manage a private GKE cluster on GCP.
The bastion host is a GCE VM named bastion.
The cluster is a GKE private cluster named cluster.
The flow should be:
User -> (SSH via IAP) -> bastion -> (gke control-plane) -> cluster
For both resources, I would like to create and configure two service accounts from scratch in order to follow the principle of least privilege.
Do you have any suggestions for the optimal setup for scopes and roles?
For a better overview of how to handle GKE clusters in production, I would suggest taking a look at this article, specifically the section dedicated to private clusters, which mentions the alternative of using VPC Service Controls to help you mitigate the risk of data exfiltration.
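As a starting point for the least-privilege side of the question, a sketch with gcloud; the project, account names, and the choice of roles/container.developer are assumptions to be tightened further with Kubernetes RBAC, not a verified baseline:

    # Dedicated service account for the bastion VM
    gcloud iam service-accounts create bastion-sa --display-name "Bastion SA"

    # Let the bastion's service account call the GKE API for cluster resources
    gcloud projects add-iam-policy-binding my-project \
      --member "serviceAccount:bastion-sa@my-project.iam.gserviceaccount.com" \
      --role "roles/container.developer"

    # Let the user tunnel to the bastion through IAP (no public SSH exposure)
    gcloud projects add-iam-policy-binding my-project \
      --member "user:alice@example.com" \
      --role "roles/iap.tunnelResourceAccessor"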

Can an existing GKE cluster be made private?

I've got an existing GKE cluster and I'd like to configure it with private nodes as per the GKE hardening guide.
It seems like the possibility for selecting a private cluster is disabled in the cluster configuration UI, and setting it in Terraform with a private_cluster_config block forces destruction of the cluster.
Is there no way to configure private nodes for an existing cluster?
Unfortunately, at this point it is not possible. As the documentation states:
You cannot convert an existing, non-private cluster to a private cluster.
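Since conversion isn't supported, the usual path is to create a replacement cluster that is private from the start and migrate workloads to it. A sketch with gcloud; the cluster name and control plane CIDR are placeholders:

    # Create a new cluster with private nodes (VPC-native networking is required)
    gcloud container clusters create private-cluster \
      --enable-ip-alias \
      --enable-private-nodes \
      --master-ipv4-cidr 172.16.0.16/28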