I'm trying to create a multi-region Google Cloud Run setup and can't find any documentation.
My goal is to create a Google HTTPS Load Balancer and map its targets to my 3 Google Cloud Run services.
https://lb.test.com/ >
eu.test.com > Europe Cloud Run
na.test.com > North America Cloud Run
sa.test.com > South America Cloud Run
The problem is, I can't find an option to map my HTTPS load balancer to my Cloud Run services.
If this is not possible yet, can I use an external DNS LB such as AWS Route 53?
Thanks!
Mapping a load balancer to Cloud Run is possible now. This can be achieved by creating NEGs (Network Endpoint Groups) that point to a Cloud Run service.
I implemented this today and came across this thread. To find out how, follow the instructions at
https://cloud.google.com/load-balancing/docs/negs/setting-up-serverless-negs#creating_the
I have recently published a guide on this in our official documentation: http://cloud.google.com/run/docs/multiple-regions
The solution involves adding the newly introduced "Serverless Network Endpoint Groups" as backends to your load balancer.
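For reference, the gcloud steps look roughly like this (the service, NEG, and backend names below are placeholders, and europe-west1 stands in for any one of your regions):

    # Create a serverless NEG pointing at the Cloud Run service in one region.
    gcloud compute network-endpoint-groups create eu-neg \
        --region=europe-west1 \
        --network-endpoint-type=serverless \
        --cloud-run-service=my-service

    # Create a global backend service and attach the NEG to it.
    gcloud compute backend-services create run-backend --global
    gcloud compute backend-services add-backend run-backend \
        --global \
        --network-endpoint-group=eu-neg \
        --network-endpoint-group-region=europe-west1

Repeat the NEG creation and add-backend steps for each remaining region against the same backend service; the global load balancer then routes each request to the region closest to the client. (The URL map, target HTTPS proxy, and forwarding rule steps are omitted here.)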
I do not think you can use a Google HTTPS Load Balancer to make a Cloud Run service multi-regional (the HTTPS Load Balancer supports only Compute Engine VMs as backends). Your question was very interesting, so I did some research.
The only useful documents I found about this topic:
Running Multi-Region Apps on Google Cloud (Cloud Next '19).
Going Multi-Regional in Google Cloud Platform
They explain how you can make a cloud service multi-regional using Apigee or proxy servers such as HAProxy and Nginx.
Related
I currently have two services in GCP: one is on the Google App Engine flexible environment and the other is on Cloud Run. The idea is to distribute traffic between those two services through a load balancer. Does anyone know how I can proceed?
You can create an HTTP(S) Load Balancer with two backend configurations, one for App Engine and the other for Cloud Run.
Consider these notes:
Your LB will direct requests to each backend depending on the path, and each backend will handle balancing according to the specs you have configured for it.
You will create and configure each backend separately; both will be of type Serverless Network Endpoint Group (serverless NEG). See here for details, and the rough sketch below.
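As a gcloud sketch, assuming a Cloud Run service called my-run-service, an App Engine app in us-central1, and a made-up /api/* path rule (none of these names come from the question):

    # Serverless NEG for the Cloud Run service.
    gcloud compute network-endpoint-groups create run-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --cloud-run-service=my-run-service

    # Serverless NEG for the App Engine app.
    gcloud compute network-endpoint-groups create gae-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --app-engine-app

    # One global backend service per NEG.
    gcloud compute backend-services create run-backend --global
    gcloud compute backend-services add-backend run-backend --global \
        --network-endpoint-group=run-neg \
        --network-endpoint-group-region=us-central1
    gcloud compute backend-services create gae-backend --global
    gcloud compute backend-services add-backend gae-backend --global \
        --network-endpoint-group=gae-neg \
        --network-endpoint-group-region=us-central1

    # URL map: App Engine by default, Cloud Run for /api/* requests.
    gcloud compute url-maps create my-lb --default-service=gae-backend
    gcloud compute url-maps add-path-matcher my-lb \
        --path-matcher-name=api-paths \
        --default-service=gae-backend \
        --path-rules="/api/*=run-backend"

    # (Target proxy, certificate, and forwarding rule omitted for brevity.)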
I'd like to add a cache to my Django app hosted on Cloud Run.
From the Django official docs, we can connect Django to a memory-based cache. Since I'm using Cloud Run, in-memory state gets wiped whenever an instance shuts down.
Memorystore seems good for this purpose, but there's only a tutorial for Flask and Redis.
How could I achieve this?
Or should I just use database caching?
Connect the Redis instance to the Cloud Run service using the steps in the documentation.
To connect from Cloud Run (fully managed) to Memorystore, you need to use the mechanism called "Serverless VPC Access", or a "VPC Connector".
First, you have to create a Serverless VPC Access connector and then configure Cloud Run to use this connector.
See connecting to a VPC Network for more information.
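Concretely, something along these lines (the connector name, region, and IP range are placeholders):

    # Create the Serverless VPC Access connector in the project's VPC network.
    gcloud compute networks vpc-access connectors create my-connector \
        --region=us-central1 \
        --network=default \
        --range=10.8.0.0/28

    # Attach the connector to the Cloud Run service so it can reach
    # the Memorystore instance's private IP.
    gcloud run services update my-django-app \
        --region=us-central1 \
        --vpc-connector=my-connector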
Alternatives to using this include:
Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
Stay within fully managed serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
See this answer to connect to Memorystore from Cloud Run using an SSH tunnel via GCE.
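Once Cloud Run can reach the instance over the connector, the Django side is just a cache backend pointed at the Memorystore host. A minimal settings.py sketch, assuming the django-redis package and a Memorystore private IP exposed in a REDIS_HOST environment variable (both assumptions, not from the question):

    # settings.py -- cache backed by Memorystore (Redis) via the VPC connector.
    import os

    CACHES = {
        "default": {
            "BACKEND": "django_redis.cache.RedisCache",
            # REDIS_HOST holds the Memorystore instance's private IP (hypothetical env var).
            "LOCATION": "redis://%s:6379/0" % os.environ.get("REDIS_HOST", "10.0.0.3"),
            "OPTIONS": {
                "CLIENT_CLASS": "django_redis.client.DefaultClient",
            },
        }
    }

Database caching (the question's fallback) also works on Cloud Run without a connector, but a round-trip to Redis is typically much cheaper than a query against a database cache table.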
I'm trying to find out whether Spring Cloud Zuul works on GCP or not. If it does, should we use it the same way we do on AWS?
(The services are deployed on App Engine.)
Thanks,
As Zuul is an edge server, acting much like a bastion host, it should have no problem working on GCP. The data flow and architecture will depend entirely on the GCP API and service where you deploy Spring Cloud; in your case (the App Engine API), the management and configuration come from Cloud VPC features such as Routes and the Load Balancer APIs, which will work alongside Zuul.
Here is a guide explaining how to deploy it on GKE as a containerized service, where they integrate Zuul with Istio at the service layer.
We have all of our cloud assets currently inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure load balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I could see achieving this are:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable - can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions within Azure rather than different cloud providers, but the sample can be useful to see how some of the pieces are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud-agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this resource screen.
Extending Oleg's answer that "creating a multi-region / multi-datacenter cluster in Service Fabric is possible", I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on-premises.
One detail that is not made very clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, autoscaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to do, which is why Microsoft avoids recommending them, but it is possible.
If you want to go with the AWS approach, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
It is the first of a four-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.
I read the HIPAA doc from Google, but I'm not sure whether the Google Cloud load balancer is HIPAA compliant: Google Cloud HIPAA.
Google says all the networks and regions are HIPAA compliant, so I think this includes these products:
VPC
Cloud DNS
Cloud Interconnect
Cloud CDN
Cloud Load Balancing
Is this correct, or am I wrong?
I think it is, because Kubernetes Engine uses the Cloud load balancer.
According to the documentation that you provided, and in here, the network is covered by HIPAA:
"The Google Cloud BAA covers GCP’s entire infrastructure (all regions, all zones, all network paths, all points of presence)"
But let's not forget that you need to do your bit by securing the environment and applications that run on top of GCP, hence why it's called a shared security model.