Sharing of subnets across multiple EKS clusters

I am setting up two EKS clusters in one VPC.
Is it possible to share the subnets among these two clusters? Is there any problem with that approach?
I was thinking of creating three private subnets that could be shared between these two EKS clusters.

I did a little research on this topic, and the official EKS documentation doesn't say anything about avoiding this approach.
In summary, AWS recommends the following about subnet/VPC networking:
Make sure your subnets are large enough (if there are insufficient IP addresses available, your pods will not get an IP address; see the sizing sketch below)
Prefer private subnets for your worker nodes and public subnets for load balancers
Reference: https://aws.github.io/aws-eks-best-practices/reliability/docs/networkmanagement/#recommendations_1
By the way, for better security you can implement network policies and encryption in transit (load balancers, a service mesh); please read this doc for more details: https://aws.github.io/aws-eks-best-practices/security/docs/network/#network-security
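On the sizing point: AWS reserves five IP addresses in every subnet (the first four and the last one), so the usable count is slightly smaller than the CIDR suggests. A quick sizing sketch using Python's ipaddress module; the three CIDRs below are only placeholders for the shared private subnets:

```python
import ipaddress

def usable_ips(cidr: str) -> int:
    """Usable IPs in an AWS subnet: total addresses minus the 5 AWS reserves."""
    return ipaddress.ip_network(cidr).num_addresses - 5

# Placeholder CIDRs for three shared private subnets
for cidr in ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]:
    print(cidr, "->", usable_ips(cidr), "usable IPs")
```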

It's possible. In this case, don't forget to add as many tags as necessary to your subnets (one per EKS cluster), such as:
kubernetes.io/cluster/cluster1: shared
kubernetes.io/cluster/cluster2: shared
...
kubernetes.io/cluster/clusterN: shared
This way, you ensure automatic subnet discovery by load balancers and ingress controllers.
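If you apply those tags programmatically, a minimal boto3 sketch could look like the following; the subnet IDs and cluster names are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder subnet IDs and cluster names; substitute your own
subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"]
clusters = ["cluster1", "cluster2"]

# One "shared" tag per cluster so both clusters (and their load balancer /
# ingress controllers) can discover the same subnets
ec2.create_tags(
    Resources=subnet_ids,
    Tags=[
        {"Key": f"kubernetes.io/cluster/{name}", "Value": "shared"}
        for name in clusters
    ],
)
```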

A VPC per cluster is generally considered best practice, owing to VPC IP address constraints and deployment best practices. You may have your reasons to run multiple EKS clusters per subnet; however, a common Kubernetes pattern is to separate clusters by environment (e.g. dev/test/qa/staging/prod) and use namespaces to separate teams/developers within a given environment.
Multiple EKS clusters in a shared VPC is not a great idea, as you can easily run out of IP ranges. Check this info on IP networking.

Related

Link between two containers in two different ECS clusters

I'm looking for the best way to access a service running in a container in ECS cluster "A" from another container running in ECS cluster "B".
I don't want to make any ports public.
Currently I have found a way to make it work within the same VPC: by adding the security group of the cluster "B" instance to an inbound rule of the cluster "A" security group, services from cluster "A" become reachable from containers running in "B" by private IP address.
But that requires the security rule to be added (which is not convenient) and won't work across regions. Maybe there's a better solution that covers both cases: same VPC and region, and different VPCs and regions?
The most flexible solution to your problem is to rely on some kind of service discovery. The AWS-native options would be the Route 53 service registry or AWS Cloud Map. The latter is newer and also the one recommended in the docs. Check out these two links:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
You could go for open source solutions like Consul.
All this could be overkill if you just need to link two individual containers. In that case you could create a small script, deployed as a Lambda, that queries the AWS API and retrieves the target info.
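As a rough sketch of such a script (the cluster and service names are placeholders, and it assumes the target service uses awsvpc networking so each task gets its own private IP):

```python
import boto3

ecs = boto3.client("ecs")

def task_private_ips(cluster, service):
    """Return the private IPv4 addresses of an ECS service's running tasks."""
    task_arns = ecs.list_tasks(
        cluster=cluster, serviceName=service, desiredStatus="RUNNING"
    )["taskArns"]
    if not task_arns:
        return []
    tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]
    ips = []
    for task in tasks:
        # awsvpc tasks expose their ENI details on the task attachments
        for attachment in task.get("attachments", []):
            for detail in attachment.get("details", []):
                if detail["name"] == "privateIPv4Address":
                    ips.append(detail["value"])
    return ips

# Placeholder names
print(task_private_ips("cluster-a", "my-service"))
```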
Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you could communicate between containers via the load balancer. Note that this can increase costs because traffic goes through the load balancer.
Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331
To avoid adding custom security rules, you could simply set up VPC peering between regions, which should allow instances in VPC 1 in Region A to reach instances in VPC 2 in Region B. This document describes how such connectivity may be established, and it also provides references on how to link VPCs within the same region.

Why should I configure an AWS ECS Service with two or more subnets?

Why should I configure an AWS ECS service or an EC2 instance with two or more private subnets from the same VPC? What would be the benefit of doing so instead of configuring it within just one subnet? Is it because of availability? I've read the documentation, but it was not clear about this.
Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html
This is generally to distribute your ECS service across multiple availability zones, allowing your service to maintain high availability.
A subnet is bound to a single AZ, so it is assumed each subnet is in a different AZ.
By splitting across multiple subnets, during an outage load can be shifted to launch containers entirely in other subnets (assuming they're in different AZs).
This is generally encouraged for all services that support multiple availability zones.
More information on Amazon ECS availability best practices is available on the AWS blog.
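As an illustration, here is a hedged boto3 sketch of creating a Fargate service spread across two subnets in different AZs; every name and ID below is a placeholder:

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",              # placeholder cluster name
    serviceName="my-service",          # placeholder service name
    taskDefinition="my-task:1",        # placeholder task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            # Two private subnets in different AZs so tasks can keep running
            # if one AZ has an outage
            "subnets": ["subnet-0aaa1111", "subnet-0bbb2222"],
            "securityGroups": ["sg-0ccc3333"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```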

Designing highly available VPCs

I have been reading and watching everything [1] I can related to designing highly available VPCs. I have a couple of questions. For a typical 3-tier application (web, app, db) that needs HA within a single region it looks like you need to do the following:
Create one public subnet in each AZ.
Create one web, app, and db private subnet in each AZ.
Ensure your web, app and db EC2 instances are split evenly between AZs (for this post assume the DBs are running hot/hot and the apps are stateless).
Use an ALB / Auto Scaling to distribute load across the web tier. From what I've read, ALBs provide HA across AZs within the same region.
Utilize Internet gateways to provide a target route for Internet traffic.
Use NAT gateways to source-NAT the private-subnet instances so they can reach the Internet.
With this approach, do you need to deploy one Internet gateway and one NAT gateway per AZ? If you only deploy one, what happens when you have an AZ outage? Are these services AZ-aware (I can't find a good answer to this question)? Any and all feedback (glad to RTFM) is welcome!
Thank you,
- Mick
[1] The last two resources I reviewed:
Deploying production grade VPCs
High Availability Application Architectures in Amazon VPC
You need a NAT gateway in each AZ, as its redundancy is limited to a single AZ. Here is a snippet from the official documentation:
Each NAT gateway is created in a specific Availability Zone and
implemented with redundancy in that zone.
You need just a single Internet gateway per VPC, as it is redundant across AZs and is a VPC-level resource. Here is a snippet from the official Internet gateway documentation:
An internet gateway is a horizontally scaled, redundant, and highly
available VPC component that allows communication between instances in
your VPC and the internet. It therefore imposes no availability risks
or bandwidth constraints on your network traffic.
Here is a highly available architecture diagram showing a NAT gateway per AZ and the Internet gateway as a VPC-level resource.
Image source: https://aws.amazon.com/quickstart/architecture/vpc/
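For example, a minimal boto3 sketch that creates one NAT gateway per AZ (the public subnet IDs and AZ names are placeholders; each NAT gateway needs its own Elastic IP, and each AZ's private route table should then point at its local NAT gateway):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder: one public subnet per AZ
public_subnets_by_az = {
    "us-east-1a": "subnet-0aaa1111",
    "us-east-1b": "subnet-0bbb2222",
    "us-east-1c": "subnet-0ccc3333",
}

nat_gateways = {}
for az, subnet_id in public_subnets_by_az.items():
    eip = ec2.allocate_address(Domain="vpc")  # Elastic IP per NAT gateway
    nat = ec2.create_nat_gateway(
        SubnetId=subnet_id, AllocationId=eip["AllocationId"]
    )
    nat_gateways[az] = nat["NatGateway"]["NatGatewayId"]

print(nat_gateways)
```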

3-tier web application subnet segmentation in AWS VPC

I'm new to AWS VPC setup for a 3-tier web application. I created a VPC with the CIDR 10.0.0.0/16; what is the best practice for subnet segmentation in an AWS VPC for a 3-tier web application? I have an ELB with 2 EC2 instances, plus RDS and S3 in the backend.
Please advise!! Thanks.
A common pattern you will find is:
VPC with /16 (eg 10.0.0.0/16, which gives all 10.0.x.x addresses)
Public subnets with /24 (eg 10.0.5.0/24, which gives all 10.0.5.x addresses)
Private subnets with /23 (eg 10.0.6.0/23, which gives all 10.0.6.x and 10.0.7.x) -- this is larger because most resources typically go into private subnets and it's a pain to have to make it bigger later
Of course, you can change these above sizes to whatever you want within allowed limits.
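As a quick illustration of those sizes with Python's ipaddress module (the CIDRs are just the examples above):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
public = ipaddress.ip_network("10.0.5.0/24")
private = ipaddress.ip_network("10.0.6.0/23")

for name, net in [("VPC", vpc), ("Public", public), ("Private", private)]:
    inside = "-" if net == vpc else net.subnet_of(vpc)  # does it fit in the VPC?
    print(f"{name}: {net} -> {net.num_addresses} addresses (inside VPC: {inside})")
```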
Rather than creating a 3-tier structure, consider a 2-tier structure:
1 x Public Subnet per AZ for the Load Balancer (and possibly a Bastion/Jump Box)
1 x Private Subnet per AZ for everything else — application, database, etc.
There is no need to put apps and databases in separate private subnets unless you are super-paranoid. You can use security groups to provide the additional layer of security without using separate subnets. This also means fewer IP addresses are wasted (eg in a partially-used subnet).
Of course, you could just use Security Groups for everything and just use one tier, but using private subnets gives that extra level of assurance that things are configured safely.
The way we do it:
We create a VPC that is a /16, e.g. 172.20.0.0/16. Do not use the default VPC.
Then we create a set of subnets for each application “tier”.
Public - Anything with a public IP. Load balancers and NAT gateways are pretty much the only things here.
Web DMZ - Web servers go here. Anything that is a target for the load balancer.
Data - Resources responsible for storing and retrieving data: RDS instances, EC2 database servers, ElastiCache instances.
Private - For resources that are truly isolated from Internet traffic. Management and reporting. You may not need this in your environment.
Subnets are all /24. One subnet per availability zone. So there would be like 3 Public subnets, 3 Web DMZ subnets, etc.
Network ACLs control traffic between the subnets. Public subnets can talk to Web DMZ. Web DMZ can talk to Data. Data subnets can talk to each other to facilitate clustering. Private subnets can’t talk to anybody.
I intentionally keep things very coarse in the Network ACL. I do not restrict specific ports/applications. We do that at the Security Group level.
Pro tip: Align the subnet groups on a /20 boundary to simplify your Network ACL rules. Instead of listing each data subnet individually, you can just list a single /20 that encompasses all the data subnets.
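A small sketch of that alignment, assuming the 172.20.0.0/16 VPC above and a purely illustrative /20 block per tier:

```python
import ipaddress

vpc = ipaddress.ip_network("172.20.0.0/16")

# Carve the VPC into /20 blocks, one per tier (illustrative allocation order)
tiers = dict(zip(["public", "web-dmz", "data", "private"], vpc.subnets(new_prefix=20)))

# Each tier block then yields per-AZ /24 subnets
data_block = tiers["data"]
data_subnets = list(data_block.subnets(new_prefix=24))[:3]

print("Single NACL entry covering the whole data tier:", data_block)
print("Per-AZ data subnets:", [str(s) for s in data_subnets])
```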
Some people would argue this level of separation is excessive. However I find it useful because it forces people to think about the logical structure of the application. It guards against someone doing something stupid with a Security Group. It’s not bulletproof, but it is a second layer of defense. Also, we sometimes get security audits from customers that expect to see a traditional structure like you would find in an on-prem network.

I have limited IP space in my AWS VPC. How do I set up Kubernetes in AWS where worker nodes and control plane are in different subnets

I have limited IPs in my public-facing VPC, which basically means I cannot run the K8s worker nodes in that VPC, since I would not have enough IPs to support all the pods. My requirement is to run the control plane in my public-facing VPC and the worker nodes in a different VPC with a private IP range (192.168.x.x).
We use Traefik for ingress and have deployed it as a DaemonSet. These pods are exposed using a Kubernetes Service of type LoadBalancer backed by an NLB, and we created a VPC endpoint on top of this NLB, which lets us reach the Traefik endpoint from our public-facing VPC.
However, based on the docs, it looks like NLB support is still in the alpha stage. I am curious what my other options are given the above constraints.
Usually, in a Kubernetes cluster, pods run in a separate overlay subnet that should not overlap with the existing IP subnets in the VPC.
This functionality is provided by Kubernetes cluster networking solutions like Calico, Flannel, Weave, etc.
So, you only need to have enough IP address space to support all cluster nodes.
The main benefit of using an NLB is exposing the client IP address to the pods, so if there is no such requirement, a regular ELB would be fine for most cases.
You can add a secondary CIDR to your VPC and use one of the two options mentioned here to have pods use the secondary VPC CIDR:
https://aws.amazon.com/blogs/containers/optimize-ip-addresses-usage-by-pods-in-your-amazon-eks-cluster/
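A hedged sketch of the first step, associating a secondary CIDR with the VPC via boto3 (the VPC ID is a placeholder, and 100.64.0.0/16 is just a commonly used secondary range); the per-AZ pod subnets and the VPC CNI custom-networking configuration described in the linked post still need to follow:

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    CidrBlock="100.64.0.0/16",       # secondary range for pod IPs
)
print(resp["CidrBlockAssociation"]["CidrBlock"])
```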