Cross-cluster communication in Kubernetes - amazon-web-services

I have two Kubernetes clusters running inside AWS EKS. How can I connect them so that they can communicate and share data?
One cluster runs only stateless applications, while the other runs stateful workloads like Redis and RabbitMQ.
What would be the easiest way to set up communication?

If you have a specific cluster for DBs and other private stateful workloads, then ensure that the worker nodes for that EKS cluster are private.
The next step is to create a Service resource that exposes your Redis DB with an internal endpoint. You can achieve this by specifying the following annotation:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
With the above, your stateful workloads are exposed only through internal endpoints. Once this is done, you have two options to connect your VPCs:
VPC peering, to allow one cluster's VPC to connect to the other.
A Transit Gateway, which the two VPCs use to communicate privately.
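Putting the annotation into a complete Service manifest might look like the sketch below; the Service name, selector labels, and Redis port are assumptions, not from the original answer.

```yaml
# Sketch: expose Redis through an internal AWS load balancer.
# Name, labels, and port are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: redis-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: redis
  ports:
    - port: 6379        # Redis default port
      targetPort: 6379
```

Pods in the peered cluster would then connect to the internal load balancer's DNS name on port 6379.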

I will follow the approach suggested by marcincuber and use an internal load balancer.
However, I also found another workaround: exposing the Redis and RabbitMQ Services as type LoadBalancer.
Since both of my clusters are in the same VPC, there is no need for VPC peering or any gateway setup; I plan to restrict the traffic using the standard Kubernetes loadBalancerSourceRanges field on the Service.
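A sketch of that restriction, assuming an internal load balancer for RabbitMQ; the CIDR blocks are placeholders for the other cluster's actual node subnets, and the name, labels, and port are illustrative.

```yaml
# Sketch: RabbitMQ Service reachable only from the listed subnets.
# loadBalancerSourceRanges limits which source CIDRs the LB accepts.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq
  ports:
    - port: 5672        # AMQP default port
      targetPort: 5672
  loadBalancerSourceRanges:
    - 10.0.1.0/24       # placeholder: stateless cluster's node subnet
    - 10.0.2.0/24       # placeholder: second node subnet
```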

Related

AWS EKS - Multi-cluster communication

I have two EKS Clusters in a VPC.
Cluster A running in Public subnet of VPC [Frontend application is deployed here]
Cluster B running in Private subnet of VPC [Backend application is deployed here]
I would like to establish networking between these two clusters such that pods in Cluster A can communicate with pods in Cluster B.
At the high level, you will need to expose the backend application via a K8s service. You'd then expose this service via an ingress object (see here for the details and how to configure it). Front end pods will automatically be able to reach this service endpoint if you point them to it. It is likely that you will want to do the same thing to expose your front-end service (via an ingress).
Usually an architecture like this is deployed into a single cluster, in which case you'd only need one ingress for the front end, and the back end would be reachable through standard in-cluster discovery of the back-end service. But because you are doing this across clusters, you have to expose the back-end service via an ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
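An Ingress exposing the backend Service from cluster B could look like the following sketch; the hostname, Service name, and port are assumptions for illustration.

```yaml
# Sketch: Ingress routing traffic to a backend Service in cluster B.
# Host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
    - host: backend.internal.example.com   # placeholder internal hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc          # Service wrapping the backend pods
                port:
                  number: 8080
```

Front-end pods in cluster A would then be configured with `backend.internal.example.com` as their backend endpoint.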

How to make GKE clusters communicate using private IPs or networks?

Is there any way to make GKE clusters (in the same network) talk to each other using internal IP addresses?
I know GKE internal load balancers can handle traffic only from the same network and the same region. And I find this implementation very strange.
I understand pod IPs are routable, but they are not static and can change at any time. I also know external load balancers have a loadBalancerSourceRanges configuration option with which I can allow only the subnets I want, but what if I want all communication to stay internal, without using a public IP?
Is there any way to achieve this, e.g. by configuring firewall rules, or by using "Global routing mode" when creating the VPC network, or anything else?
If you have two clusters in two different regions and you want them to communicate using internal IPs, your best option is to use a NodePort Service in each cluster to expose your pods, and then configure a VM instance to act as a proxy for each cluster.
This has the same effect as using a LoadBalancer Service as an internal load balancer, but with the benefit that it works across regions. It also allows the same proxy to handle requests for all your services.
The one thing to be careful of is overloading the proxy instance. Depending on the number of requests, you may need to configure multiple proxy instances per cluster, each handling only a handful of services.
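The NodePort side of that setup might be sketched as follows; the proxy VM would then forward traffic to any node's IP on the chosen node port. All names and port numbers are illustrative.

```yaml
# Sketch: NodePort Service exposing backend pods on every node.
# A proxy VM in the other region forwards to <node-ip>:30080.
apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 8080        # in-cluster port
      targetPort: 8080  # container port
      nodePort: 30080   # port opened on every node (placeholder)
```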
Thanks to Global Access, adding the annotation networking.gke.io/internal-load-balancer-allow-global-access: "true" will allow you to communicate across regions.
"Global access is available in Beta on GKE clusters 1.16+ and GA on 1.17.9-gke.600+."
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access
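Combined with an internal load balancer, the annotation might be used as in this sketch; the Service name, selector, and ports are assumptions.

```yaml
# Sketch: GKE internal LB with global access, so clients in other
# regions of the same VPC can reach it. Names/ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: backend-ilb
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```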

How do I achieve multicluster pod to pod communication in kubernetes?

I have two kubernetes clusters, one in Google Cloud Platform over GKE and one in Amazon Web Services built using Kops.
How do I implement multi-cluster communication between a pod in the AWS Kops cluster and a pod in GKE?
My Kops cluster uses flannel-vxlan mode of networking, hence there are no routes in the route table on the AWS side. So creating a VPC level VPN tunnel is not helpful. I need to achieve a VPN tunnel between the Kubernetes clusters and manage the routes.
Please advise me as to how I can achieve this.

aws kubernetes inter cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but they are properly VPC-peered so they can communicate with one central cluster that runs an Elasticsearch service collecting logs from all clusters. We do not use the AWS Elasticsearch service but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, each with its own internal AWS load balancer.
I have fluentd pods on every node of each cluster (through a DaemonSet), and they need to be able to reach Elasticsearch in the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need access to a service inside the central cluster.
What is the best way to achieve this?
This has been all breaking new ground for me so I wanted to make sure I'm not missing something obvious somewhere.
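One way to wire this up, assuming Elasticsearch in the central cluster is exposed through an internal load balancer reachable over the VPC peering, is to point each cluster's fluentd at that internal endpoint. The sketch below uses the env-var names of the common fluent/fluentd-kubernetes-daemonset image; the LB hostname is a placeholder and most fields are illustrative.

```yaml
# Sketch: fluentd DaemonSet shipping to the central cluster's
# Elasticsearch via its internal LB DNS name (placeholder value).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: internal-es-example.elb.amazonaws.com  # placeholder internal LB hostname
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
```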

cross-region k8s inter-cluster communication in GCP

I am looking for a way to access services/applications in a remote k8s cluster(C2) hosted in a different region(R2) from a client application in my current cluster(C1 in region R1).
The server application needs to be load-balanced (an FQDN is preferred over an IP).
Communication is through the private network; no internet.
I tried using an internal LB for C2, which didn't work; I later realized it is a regional product.
Moreover, it seems the same constraint applies to VPC peering.
Please suggest how to achieve this.
You can't use GCP internal load balancers across regions; they are regional products. However, you may be able to use an internal nginx ingress, which may not be limited to the same region.
Alternatively, you can create VPC-native clusters using alias IPs, which lets you reach pods directly. This offers no built-in load balancing, but it is an alternative.
Finally, if you need to use internal load balancers, you can create a VPN tunnel between the two regions and a route that forces traffic through the gateway. Traffic coming through the tunnel will look regional to the ILB, but this configuration is more expensive, and with more moving parts there is a higher chance of failure.
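The internal nginx ingress option might be sketched by exposing the ingress controller itself through an internal LB with global access enabled, giving one private cross-region entry point for all services behind it; the namespace, labels, and ports are assumptions.

```yaml
# Sketch: nginx ingress controller behind an internal LB with global
# access, reachable privately from other regions. Names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
```

Clients in C1 would then address services in C2 via the ingress hostnames, satisfying the FQDN preference.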