How is an EKS cluster accessible when deployed in a private subnet? - amazon-web-services

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint through which the cluster can be accessed from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed (I found aws-node)?
Deployed my own EKS cluster
Read the documentation
Tried to scrape for additional info

The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive 443 traffic from the public subnet. You can create a rule for this if one doesn't exist (see the example commands after these steps). This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
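To illustrate, the whole flow could look roughly like the commands below. This is only a sketch: the security group ID, subnet CIDR, region, and cluster name are placeholders you would replace with your own values.
# Allow HTTPS from the bastion's public subnet to the cluster security group (step 3)
aws ec2 authorize-security-group-ingress --group-id <cluster-sg-id> --protocol tcp --port 443 --cidr <public-subnet-cidr>
# On the bastion: point kubectl at the cluster and verify access (step 4)
aws eks --region <region> update-kubeconfig --name <name-of-your-cluster>
kubectl get nodes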
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or AWS Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

Related

aws: API Gateway is not able to connect to EKS cluster running in private subnet

I have a use case where I need to deploy the EKS cluster in private subnets and access it through API Gateway.
Currently, if I deploy the EKS cluster in a public subnet and try to access it, it works fine. However, it does not work when the EKS cluster is deployed into private subnets.
Currently, API Gateway is configured with a VPC link to access the EKS cluster securely.
A Network Load Balancer is configured to connect to the EKS cluster nodes.
Please let me know if there is anything that I am missing here.
Thanks,
Avinash

Access control plane from another ec2 instance

I am trying to access kubectl on the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?
I already have the kubeconfig file on my second machine, but running kubectl gives me a connection error.
Edit: Both of the VPCs are private and have similar CIDRs.
If both of your EC2 instances are in different VPCs, you can set up VPC peering.
If you want to expose your master and K8s setup, you can directly use the public IP of the EC2 instance (if one exists) and kubectl will connect to the K8s cluster over the internet.
You can also check out peering multiple VPCs with the same CIDR range if that is what you are looking for: Multiple VPC and Subnet with same CIDR blocks
Or: https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr
If your EKS API server is private, create peering between the VPCs and allow your second EC2 instance's private IP.
If your EKS API server is public, you can allow your second EC2 instance's public IP from the AWS console, in the EKS security or networking section.
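As a rough sketch of the private case (assuming the CIDRs don't fully overlap; see the links above for the same-CIDR situation), the peering connection, routes, and ingress rule could be created from the CLI like this, with all IDs being placeholders:
# Create and accept the peering connection between the two VPCs
aws ec2 create-vpc-peering-connection --vpc-id <requester-vpc-id> --peer-vpc-id <accepter-vpc-id>
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id <pcx-id>
# Add a route in each VPC's route table pointing at the peering connection
aws ec2 create-route --route-table-id <route-table-id> --destination-cidr-block <other-vpc-cidr> --vpc-peering-connection-id <pcx-id>
# Allow the second instance's private IP to reach the API server on 443
aws ec2 authorize-security-group-ingress --group-id <control-plane-sg-id> --protocol tcp --port 443 --cidr <second-instance-private-ip>/32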

AWS ECS docker container RDS integration

I have two VPCs in the same account: VPC-A has RDS installed, and VPC-B has services deployed through the ECS EC2 launch type.
VPC-B has multiple subnets. Services deployed through ECS on EC2 couldn't connect to RDS; they keep getting the following error message ("Is the server running on host "....").
Whereas telnet to the RDS database port from an EC2 instance (E1) in a VPC-B subnet can connect to the database.
But when the same services are launched through ECS, the connection fails. When the container is started manually, it works (it's able to connect to the database).
I also set up a peering connection between the two VPCs, but the connection problem exists only when the container is started through the ECS EC2 deployment.
The dropdown for public IP shows "Disabled" and no other options. The subnets are public subnets.
Any help/thoughts will be highly helpful.
As per the AWS docs, the "awsvpc" network mode launches tasks with a private IP, and to interact with external services a NAT gateway needs to be attached to the subnet.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-considerations
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway.
"Auto assign public IP" mode is "Enabled" with "bridge" netowrking mode on on ECS EC2 launch.

How do I SSH tunnel to a remote server whilst remaining on my machine?

I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a Bastion server on our public subnet. Tunnelling directly through to our cluster is easy. However, we need to have our deployment machine establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?
Many thanks,
John
In AWS
Scenario 1
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
If that's the case, you can run kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided; if you don't have the kubeconfig file, follow these steps.
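For example, a kubeconfig for an EKS cluster can typically be generated with the AWS CLI (region and cluster name are placeholders):
aws eks --region <region> update-kubeconfig --name <cluster-name>
kubectl get svc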
Scenario 2
When you have the private cluster endpoint enabled (which seems to be your case):
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
Either you can modify your private endpoint (Steps here) or follow these Steps.
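As a sketch, the VPC DNS attributes and the cluster endpoint access mentioned above can be adjusted from the CLI; the VPC ID and cluster name are placeholders, and each VPC attribute has to be modified in a separate call:
# Make sure DNS resolution and hostnames are enabled for the cluster's VPC
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-hostnames "{\"Value\":true}"
# Example: enable private access while keeping public access on
aws eks update-cluster-config --name <cluster-name> --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true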
There are probably simpler ways to get it done, but the first solution that comes to my mind is setting up simple SSH port forwarding.
Assuming that you have SSH access to both machines, i.e. Concourse has SSH access to Bastion and Bastion has SSH access to the Cluster, it can be done as follows:
First, set up so-called local SSH port forwarding on Bastion (pretty well described here):
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>
Now you can access your kubernetes api from Bastion by:
curl localhost:<kube-api-server-port>
However, it still isn't what you need. Now you need to forward it to your Concourse machine. On Concourse, run:
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
From now on, you have your Kubernetes API available on localhost of your Concourse machine, so you can e.g. access it with curl:
curl localhost:<kube-api-server-port>
or incorporate it in your .kube/config.
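For instance, a hypothetical cluster entry pointing at the forwarded port could be added like this (the names tunneled-cluster and tunneled are made up, and the API server certificate may not include localhost, hence the TLS-verification flag):
kubectl config set-cluster tunneled-cluster --server=https://localhost:<kube-api-server-port> --insecure-skip-tls-verify=true
kubectl config set-context tunneled --cluster=tunneled-cluster --user=<existing-user>
kubectl config use-context tunneled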
Let me know if it helps.
You can also make such a tunnel more persistent. You can find more on that here.
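One common way to do that (assuming autossh is installed on Concourse) is to let autossh keep the forwarding session alive:
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>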

Is it possible to run kubernetes in a shared AWS VPC private network, without dns hostnames enabled?

I am trying to set up a Kubernetes cluster using kops,
having all of my nodes and master running in private subnets in my existing AWS VPC.
When passing the VPC ID and network CIDR to the create command, I'm forced to have EnableDNSHostnames=true.
I wonder if it's possible to set up a cluster with that option set to false,
so that all of the instances launched in the private VPC won't have a public address.
Thanks
It's completely possible to run in private subnets; that's how I deploy my cluster (https://github.com/upmc-enterprises/kubernetes-on-aws), where all servers are in private subnets and access is granted via bastion boxes.
For kops specifically, it looks like there's support (https://github.com/kubernetes/kops/issues/428), but I'm not a big user of it so I can't speak 100% to how well it works.
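For reference, a rough kops invocation for a private topology in an existing VPC might look like this (names, zones, and IDs are placeholders, and the exact flags can vary by kops version):
kops create cluster --name <cluster-name> --zones <az-1>,<az-2> --topology private --networking calico --vpc <vpc-id> --subnets <private-subnet-id-1>,<private-subnet-id-2> --bastion
As far as I know, this does not remove the EnableDnsHostnames requirement on the VPC; it only keeps the instances themselves in private subnets without public addresses.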