Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy, and SSL Proxy support cross-region load balancing. You can point one at multiple GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools and sends traffic to a NodePort for your service.
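For the Kubernetes side, that means exposing the service on a NodePort; a minimal sketch (the service name, labels, ports, and NodePort value are all placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app              # hypothetical service name
    spec:
      type: NodePort
      selector:
        app: my-app             # assumes pods are labeled app: my-app
      ports:
        - port: 80              # port inside the cluster
          targetPort: 8080      # container port
          nodePort: 30080       # fixed port on every node; the GCP backend service targets this

Pinning nodePort to a fixed value (rather than letting Kubernetes pick one) lets you configure the same port on the backend services for every cluster's instance groups.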
However, it would be preferable to have the LB created automatically, as Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this set up. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
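For reference, a minimal sketch of the kind of Ingress spec you would hand to kubemci (the resource name, static-IP name, and backing service are placeholders; it assumes a pre-reserved global static IP and a NodePort service that uses the same nodePort in every cluster):

    apiVersion: extensions/v1beta1      # the Ingress API version of the kubemci era
    kind: Ingress
    metadata:
      name: my-mci                      # hypothetical name
      annotations:
        kubernetes.io/ingress.class: gce-multi-cluster             # keeps in-cluster controllers from claiming it
        kubernetes.io/ingress.global-static-ip-name: my-global-ip  # hypothetical reserved global static IP
    spec:
      backend:
        serviceName: my-app             # hypothetical NodePort service
        servicePort: 80

You would then run something like kubemci create my-mci --ingress=ingress.yaml --gcp-project=<project> --kubeconfig=<kubeconfig with one context per cluster>; see the README for the exact flags.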
Related
If I have an architecture in AWS with many different microservices, each in its own AWS ECS cluster with an AWS Application Load Balancer in front of it, would I still need Consul or Linkerd?
Conversely, if I don't use an AWS Application Load Balancer, would I be required to use something like Consul or Linkerd to handle the load balancing as the ECS clusters scale up/down?
I'm trying to understand whether these are all complementary services or whether they are competing services.
Meshes are really designed to mediate communications within a cluster. Linkerd could secure communications across this kind of multicluster setup, but to be candid, I find myself wondering if you'd be better served by putting all the microservices in the same cluster. That would let you use an API gateway and a service mesh to secure everything and provide better reliability and observability, and it would save you money on LBs.
(Full disclosure: I have ties to Linkerd and to Emissary-ingress, so I'm a touch biased. :) )
In a Kubernetes configuration, for an external service component we use:
type: LoadBalancer
If our k8s cluster is running inside a cloud provider like AWS, which provides its own load balancer, how does all this work? Do we need to configure things so that one of these load balancers is not active?
AWS has now taken over the open-source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (easiest) as well as non-EKS clusters (you need to install the AWS VPC CNI, etc., to make IP target mode work, which is required if you have a peered-VPC environment).
This is the official/native solution for managing AWS LB (a.k.a. ELBv2) resources (Application LB, Network LB) from K8s. Note that the Kubernetes in-tree controller reconciles any Service object with type: LoadBalancer by default, so the correct annotations are needed to hand a Service over to this controller.
Once configured correctly, the AWS LB controller will manage the following two types of LBs:
Application LB, via the Kubernetes Ingress object. It operates at L7 and provides HTTP-related features.
Network LB, via a Kubernetes Service object with the correct annotations. It operates at L4 and provides fewer features but reportedly MUCH higher throughput (see the sketch below).
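A minimal sketch of the Network LB case (the name, labels, and ports are placeholders, and the annotation set assumes a recent version of the controller):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-backend                                                      # hypothetical
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external        # hand this Service to the AWS LB controller
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip   # IP target mode (requires the AWS VPC CNI)
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    spec:
      type: LoadBalancer
      selector:
        app: my-backend                                                     # assumes pods labeled app: my-backend
      ports:
        - port: 80
          targetPort: 8080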
To my knowledge, this works best when used together with external-dns: it automatically updates your Route 53 records with your LB's A records and thus makes the whole service-discovery solution k8s-y.
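As a sketch of that interplay, a single annotation is enough for external-dns to publish the record (the hostname is a placeholder, and it assumes external-dns is deployed with access to the Route 53 zone; combine with the LB annotations above as needed):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-backend                                                # hypothetical
      annotations:
        external-dns.alpha.kubernetes.io/hostname: api.example.com    # record external-dns creates, pointing at the LB
    spec:
      type: LoadBalancer
      selector:
        app: my-backend
      ports:
        - port: 80
          targetPort: 8080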
Also, in general, avoid using the Classic ELB, as AWS has marked it as deprecated.
I have two EKS clusters in a VPC.
Cluster A is running in a public subnet of the VPC [the frontend application is deployed here]
Cluster B is running in a private subnet of the VPC [the backend application is deployed here]
I would like to set up networking between these two clusters so that the pods in cluster A can communicate with the pods in cluster B.
At a high level, you will need to expose the backend application via a K8s service. You'd then expose this service via an ingress object (see here for the details and how to configure it). Front-end pods will automatically be able to reach this service endpoint if you point them at it. You will likely want to do the same thing to expose your front-end service (via an ingress).
Usually an architecture like this is deployed into a single cluster, in which case you'd only need one ingress for the front end, and the back end would be reachable through standard in-cluster discovery of the back-end service. But because you are doing this across clusters, you have to expose the back-end service via an ingress. The alternative would be to enable cross-cluster discovery using a mesh (see here for more details).
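As a rough sketch of the backend side (all names and the hostname are placeholders; the ingress class depends on which controller you install):

    apiVersion: v1
    kind: Service
    metadata:
      name: backend                            # hypothetical
    spec:
      selector:
        app: backend                           # assumes pods labeled app: backend
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: backend
    spec:
      rules:
        - host: backend.internal.example.com   # hypothetical hostname the front-end pods call
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: backend
                    port:
                      number: 80

The front-end pods would then be configured (via an environment variable or config map, say) with http://backend.internal.example.com as the backend URL.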
All of our cloud assets are currently inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the Load Balancers' frontend IP configurations point to the VMs' private IPs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put in Azure, though. I don't know whether this is possible. The Service Fabric services communicate with the Azure VMs through the Load Balancers using the VMs' private IP addresses, so the only ways I can see to achieve this are to either:
Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable: can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but I am very new to the idea of using two cloud providers in a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to give an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but the sample can be of use to see how some of the things are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud-agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end, the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer, "creating a multi-region / multi-datacenter cluster in Service Fabric is possible," I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The detail that is not well understood is that any option not hosted in Azure requires an extra level of management, because you have to take care of the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to build, which is why Microsoft avoids recommending them, but it is possible.
If you want to go the AWS route, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
It is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.
We've currently got a production application using Kubernetes on AWS. Everything's working very well, except I think we've made a configuration mistake.
We expose different services from within the cluster on domain names, and we're now up to about 5 different services. Kubernetes' standard way to expose these services is through load balancers, but in our config we've created 6 load balancers. As you can imagine, running that many load balancers can incur substantial cost overhead.
Is there any way to configure an individual load balancer to route to Kubernetes targets based on domain names? So we can have our domains pointing at a single ELB and have that route to the correct services internally?
You can use an Ingress controller. The Ingress will set up a single AWS load balancer and can be used to expose many services. If your services are all HTTP-based, it should work quite well. For more information about Ingress, you can have a look at the Kubernetes docs or at the default NGINX-based implementation. If needed, there are also some other implementations using, for example, the Envoy proxy.
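A minimal sketch of host-based routing with one Ingress (the hostnames and service names are placeholders; it assumes the NGINX ingress controller, which provisions a single ELB for its own Service):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: apps                         # hypothetical
    spec:
      ingressClassName: nginx            # assumes the NGINX ingress controller is installed
      rules:
        - host: app1.example.com         # hypothetical domain for the first service
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app1           # hypothetical service
                    port:
                      number: 80
        - host: app2.example.com         # hypothetical domain for the second service
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app2           # hypothetical service
                    port:
                      number: 80

Each additional service becomes another host rule on the same Ingress, so DNS for every domain points at the one ELB.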