I have two EKS Clusters in a VPC.
Cluster A runs in the public subnet of the VPC [the frontend application is deployed here].
Cluster B runs in the private subnet of the VPC [the backend application is deployed here].
I would like to establish networking between these two clusters such that pods from cluster A can communicate with pods from cluster B.
At a high level, you will need to expose the backend application via a K8s Service. You'd then expose this Service via an Ingress object (see here for the details and how to configure it). Frontend pods will automatically be able to reach this Service endpoint if you point them to it. It is likely that you will want to do the same thing to expose your frontend service (via an Ingress).
Usually an architecture like this is deployed into a single cluster, and in that case you'd only need one Ingress for the frontend; the backend would be reachable through standard in-cluster discovery of the backend Service. But because you are doing this across clusters, you have to expose the backend Service via an Ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
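A minimal sketch of what this could look like on the backend cluster, assuming the AWS Load Balancer Controller is installed with an alb IngressClass, and that the backend pods are labeled app: backend and listen on port 8080 (all names here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    alb.ingress.kubernetes.io/scheme: internal   # keep the ALB reachable only inside the VPC
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80

Frontend pods would then be pointed at the DNS name of the internal ALB this Ingress provisions.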
Related
In a Kubernetes configuration, for an external service component we use:
type: LoadBalancer
If we have a K8s cluster running inside a cloud provider like AWS, which provides its own load balancer, how does all this work then? Do we need to configure things so that one of these load balancers is not active?
AWS has now taken over the open source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (easiest) as well as non-EKS clusters (you need to install the AWS VPC CNI, etc., to make IP target mode work, which is required if you have a peered-VPC environment).
This is the official/native solution for managing AWS LB (aka ELBv2) resources (Application LB, Network LB) from K8s. (The Kubernetes in-tree controller otherwise always reconciles Service objects with type: LoadBalancer.)
Once configured correctly, the AWS LB controller will manage the following two types of LB:
Application LB, via the Kubernetes Ingress object. It operates at L7 and provides HTTP-related features.
Network LB, via the Kubernetes Service object with the correct annotations. It operates at L4 and provides fewer features, but claims MUCH higher throughput.
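For example, a hedged sketch of a Service that asks the controller for an internet-facing NLB (assuming the controller is installed; the app name and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  annotations:
    # handled by the AWS Load Balancer Controller rather than the in-tree controller
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080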
To my knowledge, this works best together with external-dns -- it automatically updates your Route 53 records with your LB's A records and thus makes the whole service discovery solution k8s-y.
Also, in general, you should avoid the Classic ELB, as AWS has marked it as deprecated.
I have two Kubernetes clusters running inside AWS EKS. How can I connect them so that both can communicate and share data?
One cluster runs only stateless applications, while the other runs stateful ones such as Redis DB, RabbitMQ, etc.
What would be the easiest way to set up communication?
If you have a specific cluster to run DBs and other private stateful workloads, then ensure that your worker nodes for that EKS cluster are private.
The next step would be to create a Service resource that exposes your Redis DB with an internal endpoint. You can achieve that by specifying the following:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
With the above, the cluster's stateful workloads are exposed using internal endpoints only. Once this is done, you have two options to connect your VPCs:
VPC peering, to allow one cluster to connect to the other.
Transit Gateway, which the two VPCs use to communicate privately.
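Putting it together, a minimal sketch of such an internal Service (assuming the Redis pods are labeled app: redis and listen on 6379; note that some older in-tree controller versions expect "0.0.0.0/0" rather than "true" as the annotation value):

apiVersion: v1
kind: Service
metadata:
  name: redis
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer   # provisions an internal ELB rather than a public one
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379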
I will be following the approach suggested by @marcincuber of using an internal load balancer.
However, I also found another workaround: exposing the Redis and RabbitMQ Services with type: LoadBalancer.
Since both my clusters are in the same VPC, there is no need for VPC peering or any gateway setup; I am thinking of restricting the traffic using the standard Kubernetes Service field loadBalancerSourceRanges, as in the sketch below.
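A hedged sketch of that restriction (the CIDR is illustrative; you would substitute the VPC CIDR, or just the subnets of the calling cluster):

apiVersion: v1
kind: Service
metadata:
  name: redis
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/16   # illustrative: only this range may reach the load balancer
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379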
I have a scenario where I have to deploy multiple micro-services on AWS ECS. I want to make services able to communicate with each other via APIs developed in each micro-service. I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS. How can I achieve this? Can I use AWS ECS service discovery by having all services in a private subnet to enable communication between each of them? Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
The combination of an AWS load balancer (for public access) and Amazon ECS Service Discovery (for internal communication) is the perfect choice for a web application.
Built-in service discovery in ECS is another feature that makes it easy to develop a dynamic container environment without needing to manage as many resources outside of your application. ECS and Route 53 combine to provide highly available, fully managed, and secure service discovery.
Service discovery is a technique for getting traffic from one container to another using the container's direct IP address, instead of an intermediary like a load balancer. It is suitable for a variety of use cases:
Private, internal service discovery
Low latency communication between services
Long-lived bidirectional connections, such as gRPC.
Yes, you can use AWS ECS service discovery with all services in a private subnet to enable communication between them.
This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to look up where they need to make connections based on the state of each service.
Yes, you can use a load balancer to make the front-end micro-service accessible to end-users over the internet. You can look at this diagram, which shows AWS LB and service discovery for a web application in ECS.
You can see that the backend container, which is in a private subnet, serves public requests through the ALB, while the rest of the containers use AWS service discovery.
Amazon ECS Service Discovery
Let's launch an application with service discovery! First, I'll create two task definitions: "flask-backend" and "flask-worker". Both are simple AWS Fargate tasks with a single container serving HTTP requests. I'll have flask-backend ask worker.corp to do some work and I'll return the response as well as the address Route 53 returned for worker. Something like the code below:
import os
import socket
import requests
from flask import Flask

app = Flask(__name__)
namespace = os.getenv("namespace")  # e.g. "corp"
worker_host = "worker." + namespace

@app.route("/")
def backend():
    # ask the worker to do some work, and report which IP Route 53 returned for it
    r = requests.get("http://" + worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)
Note that in this private architecture there is no public subnet, just a private subnet. Containers inside the subnet can communicate with each other using their internal IP addresses. But they need some way to discover each other's IP address.
AWS service discovery offers two approaches:
DNS based (Route 53 creates and maintains a custom DNS name which resolves to one or more IP addresses of other containers, for example http://nginx.service.production. Other containers can then send traffic to the destination by just opening a connection using this DNS name.)
API based (Containers can query an API to get the list of available IP address targets, and then open a connection directly to one of the other containers.)
You can read more about AWS service discovery and its use cases in amazon-ecs-service-discovery and here.
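As a hedged sketch of how the DNS-based approach is wired up in CloudFormation (the VPC, subnet, cluster, and task definition references are illustrative and assumed to exist already):

Resources:
  # Private DNS namespace: services register under *.corp inside the VPC
  CorpNamespace:
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
    Properties:
      Name: corp
      Vpc: vpc-0123456789abcdef0        # illustrative
  # Registry: Route 53 A records named worker.corp, one per healthy task
  WorkerRegistry:
    Type: AWS::ServiceDiscovery::Service
    Properties:
      Name: worker
      NamespaceId: !Ref CorpNamespace
      DnsConfig:
        DnsRecords:
          - Type: A
            TTL: 10
  # ECS service that registers its tasks in the registry above
  WorkerService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: my-cluster               # illustrative
      TaskDefinition: flask-worker      # illustrative
      DesiredCount: 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-0123456789abcdef0  # private subnet, illustrative
      ServiceRegistries:
        - RegistryArn: !GetAtt WorkerRegistry.Arn

flask-backend can then simply resolve worker.corp, as in the Python snippet above.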
According to the documentation, "Amazon ECS does not support registering services into public DNS namespaces".
In other words, when it registers the DNS, it only uses the service's private IP address, which would likely be problematic. The DNS records for the "public" services would point to private IP addresses, which would only work, for example, if you were on a VPN into the private network, regardless of what your subnet rules were.
I think a better solution is to attach the services to one of two load balancers: one internet-facing, and one internal. I think this works more naturally for scaling the services up anyway. Service discovery is cool, but really more for services talking to each other, not for external clients.
I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS.
I would use Service Discovery to wire the services internally, and the Elastic Load Balancer integration to make them accessible to the public.
The load balancer can do the load balancing on one side and the DNS SRV records can do the load balancing for your APIs internally.
There is a similar question here on Stack Overflow and the answer [1] to it outlines a possible solution using the load balancer and the service discovery integrations in ECS.
Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
Yes, the load balancer can register targets in a private subnet.
References
[1] https://stackoverflow.com/a/57137451/10473469
When a Kubernetes service is exposed via an Ingress object, is the load balancer "physically" deployed in the cluster, i.e. as some pod controller inside the cluster nodes, or is it just another managed service provisioned by the given cloud provider?
Are there cloud-provider-specific differences, for example between Google Kubernetes Engine and Amazon Web Services?
By default, a Kubernetes cluster has no ingress controller at all. This means that you need to deploy one yourself if you are on premises.
Some cloud providers do provide a default ingress controller in their Kubernetes offering, though, and this is the case for GKE. In their case the ingress controller is provided "as a service", but I am unsure about exactly where it is deployed.
Talking about AWS, if you deploy a cluster using kops you're on your own (you need to deploy an ingress controller yourself), but different deployment options on AWS may include an ingress controller.
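If you do deploy your own controller (say, ingress-nginx), your Ingress objects then select it by class; a minimal, illustrative sketch:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx   # must match whichever controller you deployed
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80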
I would like to make some clarifications concerning the Google ingress controller, starting from its definition:
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
First of all, if you want to better understand its behaviour, I suggest you read the official Kubernetes GitHub description of this resource.
In particular, notice that:
It is a Daemon
It is deployed in a pod
It is in kube-system namespace
It is hidden from the customer
However, you will not be able to "see" this resource by running, for example, kubectl get all --all-namespaces, because it runs on the master and is not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the official documentation:
GCE/Google Kubernetes Engine deploys an ingress controller on the master
Note that the master of any Google Cloud Kubernetes cluster is not accessible to the user and is completely managed.
I will answer with respect to Google Cloud Engine.
Yes, every time you deploy a new Ingress resource, a load balancer is created, which you can view from the section:
GCP Console --> Network services --> Load balancing
Clicking on the respective load balancer ID gives you all the details, for example the external IP, the backend service, etc.
Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load-balance between the apps running on these clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy, and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic to a NodePort of your service.
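A hedged sketch of the Service side of that manual setup (the fixed nodePort value is illustrative; the cross-region backend service would then target this port on each cluster's instance groups):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # fixed, so the GCP backend service can target it on every node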
However it would be preferable to create the LB automatically, like Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this set up. Note that GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress