EKS - Worker nodes from multiple VPC / Accounts - amazon-web-services

Has anyone managed to set up an EKS cluster having worker nodes from multiple VPCs or even Accounts?

As @Prabhat mentioned in the comment, communication between nodes in an EKS cluster can only happen through public and private subnets belonging to the same VPC, in compliance with the security policies of that particular AWS account.

Related

AWS VPC for EKS node group

I am trying to create a node group in my EKS cluster, but I am getting "NodeCreationFailure: Instances failed to join the kubernetes cluster".
After reading a lot of documentation, I think the problem is in the VPC configuration. I've tried multiple solutions, such as enabling DNS hostnames and adding endpoints to the subnets, but I'm still getting the same error.
Can anyone guide me to solve this issue?
First, make sure that the VPC and Subnet configurations are correct and that the EC2 instances that you are trying to create the Node Group with have the correct security group settings.
Next, ensure that the EC2 instances have the correct IAM role attached and that it has the necessary permissions for the EKS cluster.
Finally, ensure that the IAM user or role that you are using to create the Node Group has the correct permission for the EKS cluster.
If all of the above are configured correctly, you may need to check the cluster's events to troubleshoot further. You can do this by running "kubectl get events --sort-by=.metadata.creationTimestamp" against the cluster.
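One frequent cause of nodes failing to join, beyond the checks above, is that the node group's IAM role is not mapped in the cluster's aws-auth ConfigMap. A minimal sketch of that ConfigMap is below; the account ID and role name are placeholders for your node group's instance role.

```yaml
# aws-auth ConfigMap in the kube-system namespace.
# A missing or wrong mapRoles entry for the node IAM role is a common
# cause of "Instances failed to join the kubernetes cluster".
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: use your node group's instance role here
    - rolearn: arn:aws:iam::111122223333:role/eksNodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

You can inspect the live ConfigMap with "kubectl describe configmap aws-auth -n kube-system" to compare against the role actually attached to your instances.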

EKS Cluster endpoint access

I am a bit confused about EKS cluster endpoint access and EKS private clusters. An EKS private cluster needs to use ECR as its container registry, but if I keep the EKS cluster endpoint private, does that mean it is a private cluster?
The EKS cluster endpoint is orthogonal to the way you configure the networking for your workloads. Usually an EKS private cluster is a cluster WHOSE NODES AND WORKLOADS do not have outbound access to the internet (commonly used by big enterprises with hybrid connectivity, so that the data flow only travels within a private network, i.e. the VPC and on-prem). The endpoint is where your kubectl points to, and it's a different thing. It can be public, private, or both at the same time. In most cases, if you want an EKS private cluster, it's likely that you want the endpoint to be private as well, but that's just the obvious choice, not a technical requirement.
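To illustrate that the endpoint setting is toggled independently of workload networking, here is a sketch using the AWS CLI (the cluster name "my-cluster" is a placeholder):

```shell
# Make the Kubernetes API endpoint private-only. This changes where
# kubectl can reach the control plane from; it does not, by itself,
# make the nodes or workloads private.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```

Setting both flags to true gives you the "both at the same time" configuration mentioned above.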

How to launch EKS node group into a private subnet without a NAT gateway?

I am using EKS and I want to enhance security by keeping one of the two node groups in a private subnet. However, I have read a few documents from AWS saying that if a node group has to be launched in a private subnet, then there has to be a NAT gateway connection so that the node group can reach the AWS control plane VPC and communicate with the master. Putting up a NAT gateway would be too much because of its charges. If there is a workaround I can use, I would be happy to know about it. I know that using eksctl we can launch a node group into a private subnet without NAT, but I need something that can be done without eksctl. If I am wrong in my understanding, then please do let me know.
AWS provides an explanation and a VPC template (amazon-eks-fully-private-vpc.yaml) for EKS without NAT in a post titled:
How do I create an Amazon EKS cluster and node groups that don't require access to the internet?
Instead of NAT, VPC interface endpoints are used for:
ec2
logs
ecr.api
ecr.dkr
sts
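Since the question asks for a way to do this without eksctl, the endpoints in the list above can be created directly with the AWS CLI. The sketch below uses placeholder VPC, subnet, security group, and route table IDs; the security group must allow inbound HTTPS (443) from the node subnets for the interface endpoints to be reachable.

```shell
# Placeholder IDs: substitute your own VPC, subnets, SG, and region
VPC_ID=vpc-0123456789abcdef0
SUBNETS="subnet-aaa subnet-bbb"
SG_ID=sg-0123456789abcdef0   # must allow HTTPS (443) from the node subnets
REGION=us-east-1

# Interface endpoints replacing the NAT gateway for control-plane,
# logging, registry, and STS traffic
for svc in ec2 logs ecr.api ecr.dkr sts; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.${REGION}.${svc}" \
    --subnet-ids $SUBNETS \
    --security-group-ids "$SG_ID" \
    --private-dns-enabled
done

# A gateway endpoint for S3 is also needed, because ECR stores
# image layers in S3
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Gateway \
  --service-name "com.amazonaws.${REGION}.s3" \
  --route-table-ids rtb-0123456789abcdef0
```

Interface endpoints are billed per hour and per GB, so compare that cost against a NAT gateway for your traffic volume before committing.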

How to add instances (nodes) from a different VPC to an existing EKS cluster (both VPCs are different)

I have an AWS EKS cluster in the dev VPC, and I now have a few EC2 instances in the test VPC, so I need to add the test-VPC instances to the existing cluster. Can we do that?
FYI, I have set up VPC peering as well, and the peering is working.
An Amazon EKS cluster is provisioned in a single VPC.
If you have a dev-VPC and a test-VPC, you need to use two different EKS clusters.

AWS EMR on VPC with EC2 Instance

I have been reading about AWS EMR on a VPC, but the material seems to focus on design considerations for how the AWS EMR service accesses the EMR cluster for API calls.
What I am trying to do is host a VPC with an ALB and an EC2 instance running an application, as a service for accessing an EMR cluster.
VPC -> Internet Gateway -> Load Balancer -> EC2 (Application endpoints) -> EMR Cluster
I don't want the cluster to be accessible from outside except through the public IP via the internet gateway, and that public IP should only reach the EC2 instance hosting the application, which calls the EMR cluster in the same VPC.
Is it recommended approach?
The design looks something like below.
Some challenges I am tackling are how to access S3 from EMR if it is on a VPC, whether the application running on EC2 can access the EMR cluster, and whether the EMR cluster would be publicly available.
Any guidance, links, or recommendations would be welcome.
EDIT:
Or, if I create EMR on a VPC, do I need to wrap it inside another VPC, something like below?
The simplest design is:
Put everything in a public subnet in a VPC
Use Security Groups to control access to the EMR cluster
If you are security-paranoid, then you could use:
Put publicly-accessible resources (eg EC2) in a public subnet
Put EMR in a private subnet
Use a NAT Gateway or VPC-Endpoints to allow EMR to communicate with S3 (which is outside the VPC)
The first option is simpler and Security Groups act as firewalls that can fully protect the EMR cluster. You would create three security groups:
ELB-SG: Permit inbound access from the Internet on your desired ports. Associate the security group with your Load Balancer.
EC2-SG: Permit inbound access from ELB-SG (from the Security Group itself). Associate the security group with your EC2 instances.
EMR-SG: Permit inbound access from EC2-SG (from the Security Group itself). Associate EMR-SG with the EMR cluster.
This will permit only the Load Balancer to communicate with the EC2 instances and only the EC2 instances to communicate with the EMR cluster. The EMR cluster will be able to connect directly to the Internet to access Amazon S3 due to default rules permitting Outbound access.
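The three-security-group chain described above can be sketched with the AWS CLI as follows. The group names, the VPC ID, and the application and EMR ports (8080 and 8998 for Livy) are assumptions; substitute your own.

```shell
# Placeholder VPC ID
VPC_ID=vpc-0123456789abcdef0

# Create the three chained security groups
ELB_SG=$(aws ec2 create-security-group --group-name elb-sg \
  --description "ALB: inbound from internet" --vpc-id "$VPC_ID" \
  --query GroupId --output text)
EC2_SG=$(aws ec2 create-security-group --group-name ec2-sg \
  --description "App: inbound from ALB only" --vpc-id "$VPC_ID" \
  --query GroupId --output text)
EMR_SG=$(aws ec2 create-security-group --group-name emr-sg \
  --description "EMR: inbound from app only" --vpc-id "$VPC_ID" \
  --query GroupId --output text)

# Internet -> Load Balancer on HTTPS
aws ec2 authorize-security-group-ingress --group-id "$ELB_SG" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Load Balancer -> EC2 application port (8080 is an assumed app port)
aws ec2 authorize-security-group-ingress --group-id "$EC2_SG" \
  --protocol tcp --port 8080 --source-group "$ELB_SG"

# EC2 -> EMR (8998 assumes Livy; use whatever port your app calls)
aws ec2 authorize-security-group-ingress --group-id "$EMR_SG" \
  --protocol tcp --port 8998 --source-group "$EC2_SG"
```

Referencing the upstream security group with --source-group, rather than a CIDR range, is what enforces the "only the previous tier may connect" chain; note that EMR also attaches its own managed security groups, which you would supply as additional groups when creating the cluster.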