how to create worker nodes in private subnet in EKS - amazon-web-services

I have created an EKS cluster. The VPC that is part of EKS has 4 subnets: 2 public subnets and 2 private subnets.
I also added a worker node group with 3 nodes.
Now, the issue is that all these worker nodes are deployed in the public subnets. However, I want at least one node in a private subnet.
Please suggest how to deploy a worker node in a private subnet through the EKS management console.

Follow this guide to create a managed node group:
https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html
Specify the private subnets while configuring the networking, as described in step 8.
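The same node group can also be created from the CLI; a minimal sketch, assuming placeholder cluster name, node role ARN, and subnet IDs. Because only the private subnet IDs are passed, the nodes can only launch there:

```shell
# Create a managed node group restricted to private subnets.
# Cluster name, node role ARN, and subnet IDs below are placeholders.
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name private-workers \
  --node-role arn:aws:iam::111122223333:role/eksNodeRole \
  --subnets subnet-0private1example subnet-0private2example \
  --scaling-config minSize=1,maxSize=3,desiredSize=1
```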

Related

Access control plane from another ec2 instance

I am trying to access kubectl for the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?
I already have the kubeconfig file on my second machine, but on running kubectl it gives me a connection error.
Edit: Both VPCs are private and have the same CIDR.
If both of your EC2 instances are in different VPCs, you can set up VPC peering.
If you want to expose your master and Kubernetes setup, you can use the public IP (if one exists) of the EC2 instance directly, and kubectl will connect to the cluster over the internet.
You can also look into peering multiple VPCs with the same CIDR range if that is what you need: Multiple VPC and Subnet with same CIDR blocks
Or: https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr
If your EKS API server is private, create a peering connection between the VPCs and allow your second EC2 instance's private IP.
If your EKS API server is public, you can allow your second EC2 instance's public IP from the AWS console, in the EKS security or networking section.
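For the public-endpoint case, the allow-listing can also be done from the CLI rather than the console; a sketch, assuming a placeholder cluster name and a placeholder public IP for the second instance:

```shell
# Restrict public access to the EKS API endpoint to the second
# instance's public IP. Cluster name and IP are placeholders.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.10/32"
```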

Is it possible to make AWS EKS nodes (EC2 instances) automatically get an IP from a specific subnet (on the same VPC or another VPC)?

We have an EKS cluster running in a VPC and we are thinking of extending this VPC or creating another VPC with a different subnet IP range. The EKS nodes (EC2 instances) run with multiple ENIs, that is, with multiple private IPs.
We wonder if it is possible to make these EC2 instances serving as EKS nodes automatically get an IP from this new subnet, within the current VPC or in the other VPC, when they are instantiated. If the subnet is in another VPC, should we have a VPC peering connection between the two VPCs? Can this be done with CloudFormation templates on EKS? What is the best practice here? Thanks.
The way to extend the VPC in EKS is to add a secondary CIDR block and configure the CNI plugin to use subnets created in that secondary CIDR block. The CNI is ultimately responsible for assigning the IP addresses available in the subnet CIDR to the pods.
To pick the correct CIDR range for the VPC extension and to configure the CNI, see this article:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/
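In outline, the procedure from that article looks roughly like this; a sketch, assuming placeholder VPC, subnet, and security group IDs (the full procedure also requires the ENIConfig names or labels to match the nodes' Availability Zones):

```shell
# 1. Attach a secondary CIDR block to the VPC (ID and CIDR are placeholders).
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0example --cidr-block 100.64.0.0/16

# 2. Enable custom networking in the VPC CNI plugin so pods get IPs
#    from the ENIConfig subnets instead of the node's own subnet.
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true

# 3. Create one ENIConfig per AZ, pointing at a subnet in the new CIDR.
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: eu-central-1a
spec:
  subnet: subnet-0secondaryexample
  securityGroups:
    - sg-0example
EOF
```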

Your internal load balancer must have a private subnet

I want to create an internal NLB on AWS to two EC2 instances in two AZs/subnets.
Configuration: 1 VPC, two AZs, 2 private and 2 public subnets:
AZ             public subnet   private subnet
eu-central-1a  public 1        private 1A
eu-central-1b  public 2        private 2A
The EC2 instances are located in the private subnets.
Under Mappings/eu-central-1a I can select public 1 or private 1A.
Under Mappings/eu-central-1b there's only public 2 and the error message:
Your internal load balancer must have a private subnet.
You can update the subnet’s route table in the VPC Console
The private subnets and their route tables look identical. Not sure what else needs to be done.
In the meantime I used the AWS CLI, which solved the problem:
aws elbv2 create-load-balancer --type network --name my-load-balancer --subnets subnet-07.......9528 subnet-0a..........5170 --scheme internal
To solve this issue in the console, remove the internet gateway route from the route table of the private subnets if it exists there; a subnet with a route to an internet gateway is treated as public.
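Whether a subnet counts as private can be checked from the CLI by looking for a default route to an internet gateway; a sketch with placeholder subnet and route table IDs:

```shell
# List the non-local routes of the route table associated with the subnet.
# An igw-... entry for 0.0.0.0/0 means the subnet is treated as public.
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0private2example \
  --query 'RouteTables[].Routes[?GatewayId!=`local`]'

# If such a route exists, remove it (route table ID is a placeholder):
aws ec2 delete-route \
  --route-table-id rtb-0example \
  --destination-cidr-block 0.0.0.0/0
```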

ELB OutofService if EKS worker nodes in private subnet

I created a VPC with 1 public subnet and 2 private subnets as in this link.
Then I created a new EKS cluster and selected all 3 subnets. For the EKS worker nodes, I only used the 2 private subnets, and the nodes registered with the cluster.
Then I tried to create the sample project in this link. Everything looked good: pods, services, and the ELB were created. But the ELB health check failed with "OutOfService". In the worker node security group, I allowed all traffic from the ELB.
Is there anything I am missing?
A few points you can verify:
* the ELB security group is allowed in the worker instance security group
* the ELB subnets are in the same Availability Zones as the worker nodes
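The first point can be checked or fixed from the CLI; a sketch, assuming placeholder security group IDs and the default Kubernetes NodePort range used by the health checks:

```shell
# Allow traffic from the ELB's security group into the worker node
# security group on the NodePort range. Both group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0workernodes \
  --protocol tcp \
  --port 30000-32767 \
  --source-group sg-0elbexample
```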

EKS - Worker nodes from multiple VPC / Accounts

Has anyone managed to set up an EKS cluster having worker nodes from multiple VPCs or even Accounts?
As @Prabhat mentioned in the comment, nodes in an EKS cluster can only communicate through public and private subnets belonging to the same VPC, in compliance with the security policies for that particular AWS account.