Can the URL of a service on GCP Cloud Run be aliased? - google-cloud-platform

Q. Can the URL of a service on GCP Cloud Run be aliased with a static string?
I plan to run my service on Cloud Run. The problems are:
The URL generated by Cloud Run is not known before service creation.
My service's region does not support domain mapping on Cloud Run.
The URL is dynamically generated, like "https://hihihi-sehvxcp7uq-du.a.run.app".
Suppose there are two services, A and B, where A calls B. For A to call B, A must know B's URL. Because the URL is dynamic, B's URL must be injected into A as configuration at startup time. I feel this behavior adds unnecessary complexity: to run a one-line curl command, metadata or configuration has to be fetched first.
But if the URL could be aliased to a static string (like DNS or /etc/hosts), this extra configuration could be thrown away.
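The configuration-injection workaround described above can be sketched with gcloud; the service names and region below are hypothetical:

```shell
# Fetch B's dynamically generated URL (service and region names are examples).
B_URL=$(gcloud run services describe b-service \
  --region=asia-northeast1 --format='value(status.url)')

# Inject it into A as an environment variable at deploy time,
# so A can call B without a hard-coded hostname.
gcloud run deploy a-service \
  --image=gcr.io/my-project/a-service \
  --region=asia-northeast1 \
  --set-env-vars="B_URL=${B_URL}"
```

A's code then reads B_URL from its environment instead of a fixed hostname.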

No, you cannot alias the Cloud Run service URL.
Since your deployment region does not support custom domains, your option is an HTTP(S) Load Balancer.
Setting up a load balancer with Cloud Run (fully managed), App Engine, or Cloud Functions
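The load balancer setup from that guide can be sketched roughly as follows; the service, region, and certificate names are placeholders, so treat this as an untested outline rather than a verified recipe:

```shell
# Serverless NEG pointing at the Cloud Run service.
gcloud compute network-endpoint-groups create my-neg \
  --region=asia-northeast1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=hihihi

# Global backend service backed by the NEG.
gcloud compute backend-services create my-backend \
  --global --load-balancing-scheme=EXTERNAL
gcloud compute backend-services add-backend my-backend --global \
  --network-endpoint-group=my-neg \
  --network-endpoint-group-region=asia-northeast1

# URL map, HTTPS proxy, and forwarding rule give you a stable frontend
# address, independent of the generated run.app URL.
gcloud compute url-maps create my-map --default-service=my-backend
gcloud compute target-https-proxies create my-proxy \
  --url-map=my-map --ssl-certificates=my-cert
gcloud compute forwarding-rules create my-rule \
  --global --target-https-proxy=my-proxy --ports=443
```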

Related

What's the best way to load balance between Cloud Run services across projects?

Consider a scenario with two identical Cloud Run services ("Real Time App") deployed in two different regions, let's say EU and US. These services use Firestore for real time communication, and minimizing latency is important. Since Firestore only allows specifying one region per project, each Cloud Run service is deployed in its own project and uses its regional Firestore instance. It is not needed for a service in the US to access Firestore data in the EU and vice versa.
Is there a way to deploy a global HTTPS load balancer to route requests to the Cloud Run service closest to the user when the services are defined in different projects?
I attempted a setup with a shared VPC between a Host "Global" project (in the US) and 2 service projects (EU and US). I created a Cloud Run Service, Network Endpoint Group (NEG), and Backend Service in each regional project. I then attempted to create a Global forwarding rule, Target HTTPS proxy, and URL Map in the host project. However, the URL Map cannot be given a backend service in another project, complaining that:
Cross-project references for this resource are not allowed.
Indeed, per the Shared VPC Architecture and Cross-project service referencing section of the documentation it seems that:
Cross-project service referencing is not supported for the global external HTTP(S) load balancer
and that, if I understood correctly, the following rules apply:
The NEG must be defined in the same project as the Cloud Run Service
The Backend Service must be in the same project as the NEG
The Target HTTP(S) Proxy and associated URL Map must be in the same project as the Backend Service
The Forwarding Rule must be in the same project as the Backend Service
essentially requiring the entire chain to be defined in one project.
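Under those rules, each regional project ends up owning its whole chain. A hypothetical sketch for one of the regional projects (project, service, and resource names are placeholders):

```shell
# Every resource in the chain must name the same project.
PROJECT=eu-project

gcloud compute network-endpoint-groups create rta-neg \
  --project="$PROJECT" --region=europe-west1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=real-time-app

gcloud compute backend-services create rta-backend \
  --project="$PROJECT" --global
gcloud compute backend-services add-backend rta-backend \
  --project="$PROJECT" --global \
  --network-endpoint-group=rta-neg \
  --network-endpoint-group-region=europe-west1

# ...and the URL map, target proxy, and forwarding rule must likewise
# be created in $PROJECT, not in the host project.
```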
Are there recommended workarounds for this scenario?
One solution I can think of is to create a "Router" Cloud Run Service in the Host global project behind a load balancer, with multi region deployment. Its sole purpose is to respond to the client with the regional URL endpoint of the closest "Real Time App" Cloud Run service.
I am wondering whether there is a more elegant solution, though.

Configure URL redirection in GCP

Cloud Composer is Google Cloud's offering of Apache Airflow, the workflow management platform.
Composer deploys the Airflow web server in an AppEngine instance, and thus the URL of the deployed webapp is non-customizable. As a service deployed in AppEngine, the host name of the URL ends in ".appspot.com", but has an automatically generated prefix, and is not easily predictable.
How can I assign a custom, easier-to-remember host name to point to this service?
In particular, there are firewall rules in place, so a firewall exception for *.appspot.com would be too wide.
You can get inspiration from my article and do a similar thing, not with Cloud Run but with the App Engine URL.
I mean:
Create an internet NEG pointing to appspot.com.
Add a Host header equal to your Cloud Composer appspot URL.
Create your Load Balancer with the domain name that you want.
I didn't test it; let me know if it's suitable and if it works for you.
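As the answer says, this is untested; the idea might be sketched like so (all resource names and the appspot prefix are placeholders):

```shell
# Global internet NEG whose endpoint is the Composer/App Engine FQDN.
gcloud compute network-endpoint-groups create composer-neg \
  --global --network-endpoint-type=internet-fqdn-port
gcloud compute network-endpoint-groups update composer-neg --global \
  --add-endpoint="fqdn=my-composer-prefix.appspot.com,port=443"

# Backend service using the internet NEG.
gcloud compute backend-services create composer-backend \
  --global --load-balancing-scheme=EXTERNAL
gcloud compute backend-services add-backend composer-backend --global \
  --network-endpoint-group=composer-neg \
  --global-network-endpoint-group

# Rewrite the Host header so App Engine routes the request to the
# right app behind appspot.com.
gcloud compute backend-services update composer-backend --global \
  --custom-request-header='Host: my-composer-prefix.appspot.com'
```

The load balancer frontend (certificate, URL map, proxy, forwarding rule) is then created for your own domain as usual.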

Google Cloud Platform Load Balancer with Cloud Run throws 404 error

I'm trying to set up a multi-region deployment with a Load Balancer that routes traffic to the Cloud Run app deployed in the region closest to the visitor, following this tutorial: https://cloud.google.com/run/docs/multiple-regions
I have a Google Cloud Platform Load Balancer set up with a backend service that points to three regional network endpoint groups, each linked to a separate instance of the Cloud Run app in a different region.
When I access the Cloud Run app in any region directly by its Cloud Run URL (like https://cms-us-east1-dpuglk7uja-ue.a.run.app), it works well.
When I access the app through the load balancer domain in Europe, it works well too.
But when I access the app through the load balancer domain in any other region (US, Asia), I get a 404 error with the message: The requested URL was not found on this server. That's all we know.
I've done everything explained in the tutorial and I'm not sure what's wrong. Here are the regions I'm using: europe-north1, us-east1, asia-northeast1.
Is there any chance that the beta version of the Serverless NEG is still buggy?
Your load balancer configuration is the right one: one backend service, and one serverless NEG per region.
The condition for this to work is that the Cloud Run service has the SAME name in every region it is deployed to.
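Assuming the service is called cms (the URL above suggests so, but adjust to yours), re-deploying it under one shared name in all three regions might look like:

```shell
# The service name ("cms" here, as an example) must be identical in
# every region so each serverless NEG resolves to the same service.
for region in europe-north1 us-east1 asia-northeast1; do
  gcloud run deploy cms \
    --image=gcr.io/my-project/cms \
    --region="$region" \
    --allow-unauthenticated
done
```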

How to lockdown routing to public and private parts, that's behind a GCP load balancer

I have a static JS application hosted in a Cloud Storage bucket, served through Cloud CDN. This setup is behind a single HTTPS load balancer.
I have two entry points to the app, namely /endpoint1 and /endpoint2. Right now both endpoints are publicly accessible, whereas only /endpoint1 should be public. I want to make /endpoint2 private and only accessible from a certain address space, say 10.0.0.0/24.
Is it possible to achieve this? If not, please suggest a workaround.
At this time your objectives are not possible without changing services.
Your objectives:
/endpoint1 is public
/endpoint2 is accessible from a CIDR block.
Note: 10.0.0.0/24 is a private IP block. Only public IP addresses reach the cloud.
Cloud Storage does not offer firewall type rules to control access. Buckets/objects are either public or controlled by IAM identity-based access and OAuth access tokens containing IAM roles/permissions.
You will need to move your app to a service that offers compute abilities so that you can implement your access control objectives. Options include Compute Engine, Cloud Run, Firebase, App Engine Standard, or App Engine Flexible.
Note: you can continue to host your public static files on Cloud Storage and also with your combination of HTTP Load Balancer plus CDN plus Cloud Storage. You might need to change the DNS endpoint so that your app can be hosted from multiple combined services.
Advanced Solution:
Leave /endpoint1 hosted by LB + CDN + Cloud Storage. Create a compute service from the list above to host /endpoint2. Add a backend to the load balancer that points to the compute service. Create a URL Map to forward traffic to the correct backend. Now you can add Cloud Armor to control access to /endpoint2.
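The Cloud Armor step could look roughly like this; the policy and backend names are placeholders, and note that 10.0.0.0/24 itself is private and will never reach the load balancer, so a public CIDR stands in for it here:

```shell
# Hypothetical sketch: restrict the /endpoint2 backend with Cloud Armor.
gcloud compute security-policies create endpoint2-policy

# Allow only the trusted range (placeholder public CIDR).
gcloud compute security-policies rules create 1000 \
  --security-policy=endpoint2-policy \
  --src-ip-ranges=203.0.113.0/24 --action=allow

# Make the default rule (priority 2147483647) deny everything else.
gcloud compute security-policies rules update 2147483647 \
  --security-policy=endpoint2-policy --action=deny-403

# Attach the policy to the compute backend that serves /endpoint2.
gcloud compute backend-services update endpoint2-backend \
  --global --security-policy=endpoint2-policy
```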
Simple Solution:
If your site is low traffic and/or does not have large storage requirements, move your app to Cloud Run. Very low cost, easy to implement and manage, plus autoscaling. Put access control in your application logic.

Send POST request from one service to another in Amazon ECS

I have a Node-Express website running on a microservices-based architecture. I deployed the microservices on an Amazon ECS cluster with one EC2 instance. The microservices sit behind an Application Load Balancer that routes external traffic correctly to the services. This system works as expected except for one problem: I need to make a POST request from one service to another. I am trying to use axios for this, but I don't know what URL to post to. When testing locally, I just used axios.post('http://localhost:3000/service2', ...) inside service 1, but how should I do it here?
There are various ways.
1. Use an Application Load Balancer in front of the services
In this method, you put your microservices behind the load balancer(s) and, to send a request, you use the load balancer URL. You can use path-based routing on the same load balancer, or use multiple load balancers.
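For example, with path-based routing the call from service 1 becomes an HTTP request to the ALB's DNS name instead of localhost (the hostname below is a placeholder):

```shell
# Service 1 posts to service 2 via the ALB; path-based routing on
# /service2 forwards the request to service 2's target group.
ALB_URL="http://my-alb-123456.us-east-1.elb.amazonaws.com"
curl -X POST "${ALB_URL}/service2" \
  -H "Content-Type: application/json" \
  -d '{"hello": "from service 1"}'
```

In the Node code, that means replacing http://localhost:3000 with the ALB DNS name, typically supplied to the container as an environment variable.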
2. Use Service Discovery
In this method, you let the requester discover the target service. Service discovery can be done in various ways: using an ALB, Route 53, ECS Service Discovery, a key-value store, configuration management, or third-party software such as Consul.