Kubernetes on AWS - managing virtual IPs

I've just set up a 3-node Kubernetes cluster on AWS using Kelsey's guide. I notice that Kubernetes assigns a unique virtual IP address to each pod and service. In the guide, an AWS route table is used to map the virtual IPs to actual IPs. While this works, it seems quite primitive and doesn't scale as nodes are added to or removed from the Kubernetes cluster.
What's the standard way to handle these virtual IPs when hosting Kubernetes on AWS at scale?

AWS route tables have a limit of 50 entries each, so that's not a very scalable solution. The most common approach is to use an overlay network. Two popular ones are:
- CoreOS Flannel
- Weaveworks Weave
The Flannel README in particular gives a good overview of how it works.
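For a rough idea of what that looks like in practice, here is a minimal sketch of Flannel's network configuration (assuming the upstream kube-flannel manifest and a 10.244.0.0/16 pod CIDR): every node gets a subnet carved out of one cluster-wide pod network, and the VXLAN backend encapsulates pod-to-pod traffic between nodes, so no per-node VPC route table entries are needed.

```yaml
# Sketch of the kube-flannel-cfg ConfigMap shipped with the upstream manifest.
# The pod network CIDR is an assumption; adapt it to your cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    app: flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```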

Related

How to connect GKE to Mongo Atlas Cluster hosted in AWS?

My GKE cluster cannot connect to my Mongo Atlas cluster, and I have no idea why, nor do I have many ways of troubleshooting it.
One of the reasons it may be failing is this:
Atlas does not support Network Peering between clusters deployed in a single region on different cloud providers. For example, you cannot set up Network Peering between an Atlas cluster hosted in a single region on AWS and an application hosted in a single region on GCP. (https://www.mongodb.com/docs/atlas/security-vpc-peering/#set-up-a-network-peering-connection)
Only a vague hint is given about creating Network Peering Containers to overcome that limitation, but honestly, I have no idea what to do.

Sharing of subnets across multiple EKS clusters

I am setting up two EKS clusters in one VPC.
Is it possible to share the subnets among these two clusters? Is there any problem with that approach?
I was thinking of creating three private subnets that could be shared between these two EKS clusters.
I did a little research on this topic, and the official EKS documentation doesn't say anything about avoiding this approach.
In summary, AWS recommends the following about subnet/VPC networking:
- Make sure your subnets are large enough (if insufficient IP addresses are available, your pods will not get an IP address).
- Prefer private subnets for your worker nodes and public subnets for load balancers.
Reference: https://aws.github.io/aws-eks-best-practices/reliability/docs/networkmanagement/#recommendations_1
By the way, for better security you can implement network policies and encryption in transit (at the load balancers, or by adding a service mesh); please read this doc for more details: https://aws.github.io/aws-eks-best-practices/security/docs/network/#network-security
It's possible. In this case, don't forget to add as many tags as necessary to your subnets (one for each EKS cluster), such as:
kubernetes.io/cluster/cluster1: shared
kubernetes.io/cluster/cluster2: shared
...
kubernetes.io/cluster/clusterN: shared
This way you ensure automatic subnet discovery by load balancers and ingress controllers.
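As a concrete (hypothetical) example, tagging a shared private subnet for two clusters named cluster1 and cluster2 might look like this in CloudFormation; the VPC reference, CIDR, and availability zone are placeholders:

```yaml
# Hypothetical CloudFormation sketch: a private subnet tagged for two EKS clusters.
Resources:
  SharedPrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref SharedVpc          # assumes a VPC resource/parameter named SharedVpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: us-east-1a
      Tags:
        - Key: kubernetes.io/cluster/cluster1
          Value: shared
        - Key: kubernetes.io/cluster/cluster2
          Value: shared
        - Key: kubernetes.io/role/internal-elb   # marks the subnet for internal load balancers
          Value: "1"
```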
A VPC per cluster is generally considered best practice, owing to VPC IP address constraints and to deployment isolation. You may have your reasons to run multiple EKS clusters per subnet; however, a common generic Kubernetes pattern is to separate clusters by environment (e.g. dev/test/qa/staging/prod) and to use namespaces to separate teams/devs within a given environment.
Running multiple EKS clusters in a shared VPC is not a great idea, as you will easily run out of IP ranges. Check this info on IP networking.

Link between two containers in two different ECS clusters

I'm looking for the best way to get access to a service running in container in ECS cluster "A" from another container running in ECS cluster "B".
I don't want to make any ports public.
Currently I found a way to make it work in the same VPC: by adding the security group of the cluster "B" instance to an inbound rule of cluster "A"'s security group. After that, services from cluster "A" are available to containers running in "B" by private IP address.
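For reference, the rule I'm describing looks roughly like this (a CloudFormation sketch; the security group references and the port are placeholders):

```yaml
# Hypothetical sketch: cluster A's security group allows inbound traffic from
# cluster B's security group on an assumed application port (8080).
Resources:
  AllowClusterBToClusterA:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref ClusterASecurityGroup               # SG attached to cluster A's instances
      SourceSecurityGroupId: !Ref ClusterBSecurityGroup # SG attached to cluster B's instances
      IpProtocol: tcp
      FromPort: 8080
      ToPort: 8080
```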
But that requires this security rule to be added (which is not convenient) and won't work across regions. Maybe there's a better solution that covers both cases: the same VPC and region, and different VPCs and regions?
The most flexible solution for your problem is to rely on some kind of service discovery. The AWS-native options are Route 53-based service registries and AWS Cloud Map. The latter is newer and the one recommended in the docs. Check out these two links:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
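To make that more concrete, here is a hypothetical CloudFormation-style sketch of the Cloud Map approach (the namespace and service names are made up, and the ECS service is abbreviated): the service registers itself in a private DNS namespace, and containers in the other cluster resolve it by name as long as they share the VPC.

```yaml
# Hypothetical sketch of ECS service discovery via AWS Cloud Map.
Resources:
  InternalNamespace:
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
    Properties:
      Name: internal.local            # placeholder namespace
      Vpc: !Ref SharedVpc             # assumes a shared VPC reference
  ServiceADiscovery:
    Type: AWS::ServiceDiscovery::Service
    Properties:
      Name: service-a                 # placeholder service name
      NamespaceId: !Ref InternalNamespace
      DnsConfig:
        DnsRecords:
          - Type: A
            TTL: 10
  ServiceA:
    Type: AWS::ECS::Service
    Properties:
      Cluster: cluster-A
      ServiceRegistries:
        - RegistryArn: !GetAtt ServiceADiscovery.Arn
      # task definition, desired count etc. omitted for brevity;
      # containers in cluster "B" (same VPC) can now resolve service-a.internal.local
```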
Alternatively, you could go for an open-source solution like Consul.
All this could be overkill if you just need to link two individual containers. In that case you could write a small script, deployed as a Lambda, that queries the AWS API and retrieves the target info.
Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you can communicate between containers via the load balancer. Note that this can lead to increased costs because traffic goes through the load balancer.
Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331
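Sketched in the same CloudFormation style (all names are placeholders, and the service definition is abbreviated), attaching one ECS service to two target groups of the same load balancer could look like this:

```yaml
# Hypothetical sketch: one ECS service with two target groups, one per exposed port.
Resources:
  ServiceWithTwoPorts:
    Type: AWS::ECS::Service
    Properties:
      Cluster: cluster-A
      LoadBalancers:
        - ContainerName: app
          ContainerPort: 8080
          TargetGroupArn: !Ref HttpTargetGroup    # placeholder target group
        - ContainerName: app
          ContainerPort: 9090
          TargetGroupArn: !Ref AdminTargetGroup   # placeholder target group
```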
To avoid adding custom security rules, you could simply set up VPC peering between regions, which allows instances in VPC 1 in Region A to reach instances in VPC 2 in Region B. This document describes how such connectivity may be established; the same document also covers how to link VPCs in the same region.

Does EKS supports kubenet or some type of virtual network to get more IP space?

My company gave us a very small subnet (around 30 IP addresses). By default in Kubernetes every node gets a block of IPs assigned to it, and every pod gets an IP, so having only 30 IPs to draw from isn't nearly enough to run a Kubernetes cluster. I need hundreds, specifically around 400 or more, to be able to stand up this cluster. I have never used EKS, and that is what we will be using. After some research I saw that AKS in Azure can do virtual networks with kubenet (so you can have all the IPs you need), so even with a small subnet Kubernetes can still function. This doc explains it pretty well from the Azure side: https://learn.microsoft.com/en-us/azure/aks/configure-kubenet.
I am still digging into whether EKS supports kubenet and haven't found anything yet. I would appreciate any feedback on a virtual network or plugin I can use in EKS to get more IP space.
You are having trouble finding information on EKS utilizing kubenet because kubenet has a significant limitation in AWS that is not present in Azure. With kubenet, EKS would be limited to a theoretical maximum of 50 cluster nodes, because kubenet relies on the VPC routing table for access to other nodes. Since AWS limits the number of routes on a VPC routing table to 50, the number of nodes is theoretically capped at 50 (less if you count the default route for internet traffic or routes to other EC2 instances).
In AKS (Azure) the UDR limit for a subnet is 250 (254 - 4 reserved by Azure).
EKS deploys with the VPC CNI plugin enabled by default. You can get around the small-subnet limitation by enabling CNI custom networking, which gives you the same kind of IP masquerade you get with kubenet (i.e. a separate internal Kubernetes network for pods that is NAT'd to the EKS EC2 node's interface).
Here is a good description:
https://medium.com/elotl-blog/kubernetes-networking-on-aws-part-i-99012e938a40
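To sketch what custom networking looks like (the IDs below are placeholders), you create one ENIConfig per availability zone that points pod ENIs at a secondary, larger subnet, so pods draw their IPs from a different CIDR than the worker nodes:

```yaml
# Hedged sketch of VPC CNI custom networking. Subnet and security group IDs are placeholders.
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                   # typically named after the availability zone
spec:
  subnet: subnet-0123456789abcdef0   # secondary subnet with plenty of free IPs
  securityGroups:
    - sg-0123456789abcdef0
```

Custom networking also has to be switched on by setting the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true environment variable on the aws-node DaemonSet.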

GKE - how to connect two clusters to each other

I'm new to Kubernetes and GCP. I have one cluster, called A, where my service is running. In order to run, it needs an Elasticsearch cluster. I would like to run Elasticsearch in a different GKE cluster so it won't have any impact on my current services. I created another cluster, called B, and now I have an issue: I don't know how to connect the service in cluster A to the Elasticsearch pod in cluster B.
Currently I've found two solutions:
a) make Elasticsearch accessible on the public internet - I want to avoid that
b) add another container to the service that authenticates to cluster B and runs kubectl proxy
Neither seems to be the perfect solution. How should such a case be solved? Do you have any tutorials, articles, or tips on how to solve it properly?
Edit - important notes:
- both clusters belong to the default VPC
- the clusters already exist; I cannot tear them down and recreate them reconfigured
You can expose your service with type LoadBalancer and the internal load balancer annotation. This creates an internal load balancer that the other GKE cluster can reach, as long as both clusters are in the same VPC.
This article details the steps.
This article explains the available service options on GKE in detail.
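A minimal sketch of such a service, assuming the Elasticsearch pods are labelled app: elasticsearch and listen on port 9200 (the service name and the legacy annotation form are assumptions; newer GKE versions use networking.gke.io/load-balancer-type: "Internal"):

```yaml
# Hedged sketch: expose Elasticsearch in cluster B via a GCP internal load balancer
# so workloads in cluster A (same VPC) can reach it without public exposure.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch        # assumes the Elasticsearch pods carry this label
  ports:
    - port: 9200
      targetPort: 9200
```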
With these options you don't have to expose your service to the public internet or tear down your GKE clusters.
Hope this helps.