AWS EKS Cross-Cluster Load Balancing Between Pods Using a Layer 4 ELB

I have a situation where I'm deploying a k8s Service of type LoadBalancer in one EKS cluster, which creates a Layer 4 ELB in AWS. This ELB can discover k8s pods in the same EKS cluster (based on a label selector). What do I need to do so that the same Layer 4 ELB can discover pods running in another EKS cluster?
My primary use case is supporting cross-cluster injection using a mutating admission controller.
I have a mutating admission controller that injects a sidecar container into a pod. A webhook server (pod) is responsible for the actual injection, and it is this webhook server that I want to load balance across EKS clusters (as sketched below). I am trying to see if I can avoid deploying the webhook server in every EKS cluster, which would help me in two ways:
1) Reduced monitoring / operational overhead.
2) Since the kube-apiserver is responsible for calling the webhook server in an EKS cluster, if for whatever reason the webhook server in that cluster is unavailable, injection should still happen, since a webhook server would be running in another EKS cluster.
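For reference, Kubernetes does allow the webhook to live outside the cluster that calls it: a MutatingWebhookConfiguration accepts a clientConfig.url instead of an in-cluster clientConfig.service reference. A minimal sketch of what that could look like, assuming a hypothetical load balancer DNS name and a placeholder CA bundle:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector            # hypothetical name
webhooks:
  - name: inject.sidecar.example.com
    clientConfig:
      # Point the apiserver at a load balancer in front of the webhook server
      # instead of an in-cluster Service; the URL must be HTTPS.
      url: "https://webhook-nlb-1234.elb.us-east-1.amazonaws.com/mutate"  # placeholder DNS name
      caBundle: "<base64-encoded CA certificate>"                         # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore  # don't block pod creation if the webhook is unreachable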

Related

AWS Fargate: How to deploy a Fargate service task with a network load balancer

Background
Current state: I currently have an NLB that routes to an nginx server running on an EC2 instance.
Goal
I am trying to replace the nginx EC2 instance with a Fargate service that runs nginx.
I would like to keep the current NLB and point its existing target group at the Fargate service.
Problem
According to AWS documentation, an ECS Fargate service supports load balancing with an NLB or ALB: https://docs.aws.amazon.com/AmazonECS/latest/userguide/service-load-balancing.html
However, when I try to deploy the nginx task, the load balancing section only offers the option to select an existing ALB or create a new one.
I tried changing the task protocol to TCP and UDP; regardless of the protocol, when I try to deploy the task as a service, the only load balancer option is still Application Load Balancer.
Question
How do I load balance to a Fargate service task using an NLB? Am I missing a specific setting somewhere?
If you cannot target a Fargate service from an NLB directly, would it be reasonable to route traffic from the NLB to an ALB whose target group points at the Fargate service?
You can absolutely use an NLB with an ECS Fargate service; I've done this many times. My guess is that you are simply hitting a bug in the AWS web console. I've always used Terraform to deploy this sort of thing, but I just checked the ECS web UI, and on the second step of creating a new ECS service I get the option of using a Network Load Balancer.
If your view doesn't look like that, try switching away from the "New ECS Experience" UI, which is still fairly beta and missing a lot of features.
Update: I went back and checked, and the new ECS UI is indeed currently missing the option to select an NLB, so you will have to keep using the old version of the UI until that is fixed.
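If the console keeps getting in the way, the same thing can be done from the AWS CLI. A sketch of a create-service input, assuming the NLB's target group already exists; all names and ARNs are placeholders, and note that Fargate's awsvpc networking requires the target group to use target type "ip":

```yaml
# Hypothetical input for: aws ecs create-service --cli-input-yaml file://service.yaml
cluster: my-cluster                      # placeholder cluster name
serviceName: nginx-nlb
taskDefinition: nginx:1                  # placeholder task definition family:revision
desiredCount: 2
launchType: FARGATE
loadBalancers:
  # Target group attached to the existing NLB (target type "ip"):
  - targetGroupArn: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/nginx-tg/0123456789abcdef
    containerName: nginx
    containerPort: 80
networkConfiguration:
  awsvpcConfiguration:
    subnets: [subnet-aaa111, subnet-bbb222]   # placeholder subnets
    securityGroups: [sg-ccc333]               # placeholder security group
    assignPublicIp: DISABLED
```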

Unable to understand AWS Fargate Pricing

We have a requirement to deploy Redis, exposed through an external LoadBalancer Service, on AWS EKS (Elastic Kubernetes Service). Since Redis will run as a StatefulSet, which of the following combinations is the best fit with EKS?
EKS with Self-managed nodes
EKS with Managed Node Groups
EKS with AWS Fargate
However, I have read that AWS Fargate should be used for deploying stateless applications:
Fargate, thus far, has been ideal for running stateless containerized workloads in a secure and cost-effective manner. Secure because Fargate runs each pod in a VM-isolated environment and patches nodes automatically. Cost-effective because, in Fargate, you only pay for the compute resources you have configured for your pod.
I don't understand how stateless applications are more cost-effective here. Could you verify the statement below? It would be quite helpful.
In a stateless application, a container runs only while there are requests; otherwise the number of instances drops to zero, just as in GCP Cloud Run. In a stateful application, by contrast, the container has to be running all the time, and for this reason we should use EC2 instances for stateful applications.
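For concreteness, the kind of workload under discussion would look roughly like this (a sketch only; the names, image tag, ports, and replica count are assumptions, not from the question):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7              # assumed image tag
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:               # the persistent volume is the "stateful" part
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: LoadBalancer                  # exposes Redis through an external load balancer
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```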

AWS EKS - Multi-cluster communication

I have two EKS clusters in one VPC.
Cluster A runs in a public subnet of the VPC [the frontend application is deployed here].
Cluster B runs in a private subnet of the VPC [the backend application is deployed here].
I would like to set up networking between these two clusters such that pods from cluster A can communicate with pods from cluster B.
At a high level, you will need to expose the backend application via a K8s Service. You'd then expose this Service via an Ingress object (see here for the details and how to configure it). Frontend pods will automatically be able to reach this service endpoint if you point them to it. You will likely want to do the same thing to expose your frontend service (via an Ingress).
Usually an architecture like this is deployed into a single cluster, in which case you'd only need one Ingress for the frontend, and the backend would be reachable through standard in-cluster discovery of the backend Service. But because you are doing this across clusters, you have to expose the backend Service via an Ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
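In manifest form, the backend exposure could look roughly like this (a sketch; the labels, ports, and the use of the AWS Load Balancer Controller's "alb" ingress class are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend              # assumed pod label
  ports:
    - port: 80
      targetPort: 8080        # assumed container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    # Keep the load balancer internal, since cluster B sits in private subnets:
    alb.ingress.kubernetes.io/scheme: internal
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
```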

AWS Kubernetes inter-cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but the VPCs are peered properly so the clusters can communicate with one central cluster, which hosts an Elasticsearch service that collects logs from all clusters. We do not use the AWS Elasticsearch service, but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I run fluentd pods on each node of every cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need to reach a service inside the central cluster.
What is the best way to achieve this?
This is all new ground for me, so I wanted to make sure I'm not missing something obvious.
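One approach that fits the peered-VPC setup (a sketch, not from the question; the annotations are the AWS cloud provider's Service annotations, and the names and ports are assumptions): expose Elasticsearch in the central cluster behind an internal load balancer that the other VPCs can reach over peering, then point each cluster's fluentd output at that DNS name.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"       # provision an NLB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"  # keep it private to the VPCs
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch       # assumed label on the Elasticsearch pods
  ports:
    - port: 9200
      targetPort: 9200
```

Each cluster's fluentd DaemonSet would then use the resulting NLB DNS name as its Elasticsearch host instead of an in-cluster service name.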

Creating a load balancer for the kube-apiserver in a highly available Kubernetes cluster

I am trying to create a highly available Kubernetes cluster for my CI/CD pipeline, to deploy my Spring Boot microservices.
I am following this official Kubernetes document:
https://kubernetes.io/docs/setup/independent/high-availability/
My confusion: while reading, I found that I need to create a load balancer for the kube-apiserver to form the HA cluster. I am planning to use AWS EC2 machines for the cluster, so I can get an Elastic Load Balancer from AWS. Do I need to create a separate load balancer as described in the document, or can I use the ELB for this?
You can just use an ELB for this purpose.
Hopefully these "Kubernetes and ELBs, The Hard Way" instructions will be useful for you.
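The load balancer's address is what you'd pass to kubeadm as the control-plane endpoint. A minimal sketch, with a placeholder ELB DNS name and an assumed Kubernetes version:

```yaml
# kubeadm init --config=kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0   # assumed version
# DNS name of the AWS load balancer fronting the kube-apiservers (placeholder):
controlPlaneEndpoint: "k8s-apiserver-1234567890.us-east-1.elb.amazonaws.com:6443"
```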