GKE - how to connect two clusters to each other - google-cloud-platform

I'm new to Kubernetes and GCP. I have a cluster called A where my service runs. The service needs an Elasticsearch cluster, and I would like to run Elasticsearch in a different GKE cluster so it won't have any impact on my current services. I created another cluster called B, and now I don't know how to make the service in cluster A connect to the Elasticsearch pod in cluster B.
So far I've found two options:
a) make Elasticsearch accessible on the public internet - I want to avoid this
b) add a sidecar container to the service that authenticates to cluster B and runs kubectl proxy
Neither of these seems ideal. How should I solve this? Do you have any tutorials, articles, or tips on how to do it properly?
edit:
important notes:
both clusters belong to the default VPC
the clusters already exist; I cannot tear them down and recreate them with a different configuration

You can expose your service with type LoadBalancer, annotated as internal. This creates an internal load balancer that the other GKE cluster can reach over the VPC.
This article details the steps.
This article explains the available Service options on GKE in detail.
With these options you don't have to expose your service to the public internet or destroy your GKE clusters.
Hope this helps.
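As a concrete illustration, a Service of type LoadBalancer with the internal annotation might look like the sketch below (the name, labels, and port are assumptions based on a typical Elasticsearch setup, not from the question):

```yaml
# Hypothetical Service in cluster B exposing Elasticsearch.
# The annotation tells GKE to allocate a VPC-internal address
# for the load balancer instead of a public one.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
```

Since both clusters are in the default VPC, pods in cluster A can then reach Elasticsearch at the internal IP assigned to this Service (e.g. http://&lt;internal-ip&gt;:9200), provided the VPC firewall rules allow traffic from cluster A's node and pod ranges.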

Related

How to connect GKE to Mongo Atlas Cluster hosted in AWS?

My GKE cluster cannot connect to my Mongo Atlas cluster and I have no idea why, nor do I have many ways of troubleshooting it.
One of the reasons it may be failing is this:
Atlas does not support Network Peering between clusters deployed in a single region on different cloud providers. For example, you cannot set up Network Peering between an Atlas cluster hosted in a single region on AWS and an application hosted in a single region on GCP. (https://www.mongodb.com/docs/atlas/security-vpc-peering/#set-up-a-network-peering-connection)
A rather vague hint is given about creating Network Peering Containers to overcome that limitation, but honestly, I have no idea what to do.

How do I achieve multicluster pod to pod communication in kubernetes?

I have two Kubernetes clusters, one on Google Cloud Platform (GKE) and one on Amazon Web Services built using kops.
How do I implement multi-cluster communication between a pod in the AWS kops cluster and a pod in GKE?
My kops cluster uses flannel in vxlan mode, so there are no routes in the route table on the AWS side, and creating a VPC-level VPN tunnel is not helpful. I need to set up a VPN tunnel between the Kubernetes clusters and manage the routes myself.
Please advise me as to how I can achieve this.

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, including a Service Fabric cluster containing many applications and services which communicate with Azure VMs through Azure load balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I can see to achieve this are:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable - can Service Fabric services even "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial idea of how this would work, and here is a sample (not approved by Microsoft) with a cross-region Service Fabric cluster configuration. I know these are different regions in Azure rather than different cloud providers, but the sample can still be useful to see how some of the pieces are configured.
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer, "creating multiregion / multi-datacenter cluster in Service Fabric is possible." I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
One detail that is not made very clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to get right; this is why they avoid recommending them, but it is possible.
If you want to go the AWS route, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
It is the first of a 4-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.

aws kubernetes inter cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but they are properly VPC-peered to one central cluster which runs an Elasticsearch service that collects logs from all clusters. We do not use the AWS Elasticsearch service, but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I'm running fluentd pods on every node of each cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need to be able to access a service inside the central cluster.
What is the best way to achieve that?
This has all been breaking new ground for me, so I wanted to make sure I'm not missing something obvious.
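One common approach here is to expose the central Elasticsearch service through an internal load balancer, so the peered VPCs can reach it without traversing the public internet. A minimal sketch, assuming a typical Elasticsearch Service (the name, labels, and port are placeholders, not from the question):

```yaml
# Hypothetical Service in the central cluster; the annotation asks the
# AWS cloud provider to create an internal (VPC-only) load balancer,
# which is reachable from the peered VPCs.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
```

The fluentd DaemonSets in the other clusters can then be pointed at the load balancer's DNS name; the security groups on the central cluster also need to allow traffic from the other clusters' node ranges.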

Kubernetes AWS Dashboard Questions

I have a working Kubernetes gossip-based cluster deployed on AWS using kops.
I also have the dashboard running on localhost:8001 on my local machine.
If I understand correctly, this URL https://kubernetes.io/docs/admin/authentication/ gives the different ways to expose the dashboard properly, among other things.
What are the easiest and simplest steps to expose the cluster's dashboard across the internet?
What are the disadvantages of using a gossip-based cluster?
Is it all right to stop my EC2 instances when I am not using the cluster? Are there any reconfiguration steps needed when the EC2 instances are restarted? Is there any sequence in which the EC2 instances must be restarted?
[I realised that 3 is a bad question: the autoscaling group will start another EC2 instance for each stopped EC2 instance (the Kubernetes and kops people are too good and the joke's on me). That said, how can I stop/start the Kubernetes cluster when I am not using it / when I need it?]
Just making sure someone else finds this useful.
If you are trying to save costs when not using the kops Kubernetes cluster on AWS, there are these options:
1. Tear it down completely (you can rebuild it later), OR
2. In the two autoscale groups, edit the min, desired, and max values to 0. This will bring down the cluster (you can later revert these values and the cluster will come back up on its own).
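For option 2, the edit can be scripted with the AWS CLI; a sketch, where the group names are placeholders (list yours with `aws autoscaling describe-auto-scaling-groups`):

```shell
# Scale the kops autoscaling groups down to zero to stop the cluster.
# Revert the values later to bring it back.
for asg in nodes.mycluster.example.com masters.mycluster.example.com; do
  aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name "$asg" \
    --min-size 0 --max-size 0 --desired-capacity 0
done
```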
Thanks.
R