How do I achieve multi-cluster pod-to-pod communication in Kubernetes?

I have two Kubernetes clusters: one on Google Cloud Platform (GKE) and one on Amazon Web Services, built using Kops.
How do I implement multi-cluster communication between a pod in the AWS Kops cluster and a pod in the GKE cluster?
My Kops cluster uses flannel in vxlan mode, so pod traffic is encapsulated and there are no pod routes in the route table on the AWS side. Creating a VPC-level VPN tunnel is therefore not helpful; I need a VPN tunnel between the Kubernetes clusters themselves, along with a way to manage the routes.
Please advise me on how I can achieve this.
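For reference, a sketch of what the flannel configuration looks like with the vxlan backend; the ConfigMap name and namespace are taken from the upstream flannel manifest, and the pod CIDR shown is the Kops default, so treat the details as illustrative. Because vxlan encapsulates inter-node pod traffic in UDP, the cloud route tables never learn pod routes, unlike the host-gw or aws-vpc backends, which program a route per node:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg   # name/namespace as in the upstream flannel manifest
  namespace: kube-system
data:
  # "Network" below is the Kops default pod CIDR; yours may differ
  net-conf.json: |
    {
      "Network": "100.96.0.0/11",
      "Backend": {
        "Type": "vxlan"
      }
    }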

Related

How to connect GKE to Mongo Atlas Cluster hosted in AWS?

My GKE cluster cannot connect to my Mongo Atlas cluster and I have no idea why, nor do I have many ways of troubleshooting it.
One of the reasons it may be failing is this:
Atlas does not support Network Peering between clusters deployed in a single region on different cloud providers. For example, you cannot set up Network Peering between an Atlas cluster hosted in a single region on AWS and an application hosted in a single region on GCP. (https://www.mongodb.com/docs/atlas/security-vpc-peering/#set-up-a-network-peering-connection)
A rather unhelpful hint is given regarding the creation of Network Peering Containers to overcome that limitation, but honestly, I have no idea what to do.

Multi-cloud clusters cross-communication

We have an active k8s cluster deployed on AWS and a second active cluster deployed on GCP.
I am trying to understand what the right design is, and the pros and cons of each option:
a) Stretched cluster: the AWS and GCP clusters cross-communicate.
b) Isolated clusters: AWS and GCP sit behind a global load balancer (GLB) and serve requests independently (a sketch of this option follows below).
Additional note: there are 50+ services deployed in the cluster.
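For option (b), a minimal sketch of how each cluster could expose the same service independently. The hostname, the use of external-dns, and the port numbers are assumptions for illustration, not part of the original setup:

apiVersion: v1
kind: Service
metadata:
  name: frontend   # hypothetical service, deployed identically in both clusters
  annotations:
    # if external-dns runs in each cluster, both LBs register under one name
    external-dns.alpha.kubernetes.io/hostname: frontend.example.com
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080

Traffic splitting or failover between the two resulting records is then handled at the DNS layer (weighted or geo routing), which is usually what the "GLB" in this design amounts to.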

How can I give access to statping deployed outside the k8s cluster to monitor k8s services uptime?

I want statping to be independent of the infrastructure it is monitoring, but I want to check the uptime of services that are on ClusterIPs inside the k8s EKS cluster. Will setting up a kubeconfig on the EC2 instance help?
There are multiple ways to access Kubernetes Services from the statping EC2 instance.
All of them are discussed in https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/ (see in particular https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies).
kubectl proxy (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#using-kubectl-proxy) is a good option for your use case if you already have a kubeconfig on the statping EC2 instance.
You can then follow https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls to construct the proxy URLs, as in the sketch below.
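As a concrete illustration: with kubectl proxy running on the instance (it listens on localhost:8001 by default), an apiserver proxy URL takes the form http://localhost:8001/api/v1/namespaces/<namespace>/services/<service-name>:<port-name>/proxy/<path>. A statping bulk-import services entry pointing at such a URL could then look like the sketch below; the service name, namespace, and health path are hypothetical, and the field names should be checked against statping's services-file documentation:

services:
  - name: my-api via apiserver proxy   # hypothetical ClusterIP service "my-api"
    domain: http://localhost:8001/api/v1/namespaces/default/services/my-api:http/proxy/healthz
    type: http
    method: GET
    expected_status: 200
    check_interval: 60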

Cross-cluster communication in Kubernetes

I have two Kubernetes clusters running in AWS EKS. How can I connect them so that both can communicate and share data?
One cluster runs only stateless applications, while the other runs stateful workloads such as Redis, RabbitMQ, etc.
What would be the easiest way to set up communication?
If you have a dedicated cluster for DBs and other private stateful workloads, then ensure that the worker nodes for that EKS cluster are private.
The next step is to create a Service resource that exposes your Redis DB on an internal endpoint. You can achieve this by specifying the following annotation (a fuller manifest sketch follows after it):
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
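In context, a minimal sketch of the full Service manifest (the name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: redis
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer   # provisions an AWS ELB, internal due to the annotation
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379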
With the above, the cluster's stateful workloads are exposed only on internal endpoints within the VPC. Once this is done, you have two options to connect your VPCs:
VPC peering to allow one cluster to connect with the other.
Transit Gateway, which the two VPCs can use to communicate privately.
I will be following the approach suggested by @marcincuber of using an internal load balancer.
However, I also found another workaround: exposing the Redis and RabbitMQ services with type LoadBalancer.
Since both of my clusters are in the same VPC, there is no need for VPC peering or any gateway setup; I am thinking of restricting the traffic using the Kubernetes Service field loadBalancerSourceRanges, as sketched below.
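A minimal sketch of that restriction; the CIDR stands in for the shared VPC (or the other cluster's subnets) and is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/16   # illustrative VPC CIDR; only this range may reach the LB
  selector:
    app: rabbitmq
  ports:
    - port: 5672
      targetPort: 5672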

AWS Kubernetes inter-cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but they are properly VPC-peered so that they can communicate with one central cluster, which runs an Elasticsearch service that collects logs from all clusters. We do not use the AWS Elasticsearch service but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I have fluentd pods on every node of each cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need access to a Service that lives inside the central cluster.
What is the best way to achieve that?
This has all been breaking new ground for me, so I wanted to make sure I'm not missing something obvious.
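One common pattern (a sketch of a typical approach, not the poster's confirmed setup): expose Elasticsearch in the central cluster through an internal load balancer, which peered VPCs can reach, and point each cluster's fluentd output at that load balancer's DNS name. VPC peering alone is not enough for a ClusterIP Service, because pod and service CIDRs are not routable across the peering; the Service has to sit on a VPC-routable address. The namespace and labels here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging   # hypothetical namespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200

The security groups on the central cluster's nodes (or on the load balancer) must also allow traffic from the peered VPCs' CIDR ranges.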