Using Pub/Sub on a public Cloud Run service

According to the "Authenticating service-to-service" documentation for Cloud Run, to use Pub/Sub and Cloud Scheduler with a service, unauthenticated access must be disabled; both products rely on HTTP push calls because Cloud Run services can scale to zero.
My services allow internal and Load Balancer traffic and must be publicly available for frontend clients, but they also must be able to communicate with each other privately via Pub/Sub.
Is there a way to achieve this? It feels unnatural to create a separate private service just for using Pub/Sub.

It's a missing piece. You can't route Pub/Sub push subscriptions or Cloud Scheduler calls (nor Cloud Tasks, Cloud Build, Workflows, ...) through your VPC. I asked Google Cloud a few months ago, and it should be fixed by new network features soon. At least in 2021!
So, in your case, since your Cloud Run service is accessible from the public internet through a Load Balancer, you can use this public endpoint to call the path that you want on your service and thus perform the process.
If your Cloud Run service is only accessible with ingress=internal, you can't for now.
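As an illustration, here's a minimal sketch of that setup with the google-cloud-pubsub Python client: a push subscription pointed at the public Load Balancer endpoint, with an OIDC token attached so the service can still verify the caller. The domain, topic, and service account names are hypothetical.

```python
# A minimal sketch, assuming the google-cloud-pubsub library; the
# domain, topic, and service account names are hypothetical.
from google.cloud import pubsub_v1

project_id = "my-project"

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path(project_id, "service-events")
subscription_path = subscriber.subscription_path(project_id, "service-events-push")

push_config = pubsub_v1.types.PushConfig(
    # The public path exposed through the Load Balancer.
    push_endpoint="https://api.example.com/internal/pubsub-handler",
    # Pub/Sub attaches a signed OIDC token that the receiving
    # service (or middleware in front of it) can verify.
    oidc_token=pubsub_v1.types.PushConfig.OidcToken(
        service_account_email=f"pubsub-pusher@{project_id}.iam.gserviceaccount.com",
    ),
)

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "push_config": push_config,
    }
)
```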

Related

Can Services in GCP's Monitoring monitor endpoints?

I installed managed Anthos on a GKE cluster. Anthos Service Mesh is working and is displaying my API. Thanks to that, Services in Monitoring automatically detects my API. This is great, as it enables me to easily set SLOs and an Error Budget for my API.
However, I would like to be able to easily set SLOs for individual endpoints in my API. Services (in Monitoring) detects only my API and not the endpoints within it (my API is one pod/container + sidecar). I tried to add endpoints to Services in Monitoring, but it looks like it is only possible to add Kubernetes objects there.
Is there a way to use Services in Monitoring with endpoints? Is the only way to do so to break endpoints into separate microservices?
You can monitor your endpoints using Cloud Endpoints with OpenAPI, which allows you to monitor the health of APIs you own by using the logs and metrics Cloud Endpoints maintains for you automatically. When users make requests to your API, Endpoints logs information about the requests and responses and also tracks three of the four golden signals of monitoring: latency, traffic, and errors. These usage and performance metrics help you monitor your API.
See Configuring Cloud Endpoints for the configuration process, Monitoring your API as a reference on the monitoring process, and the Cloud Endpoints overview for general background.
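If you want to read those metrics programmatically rather than in the console, here is a minimal sketch using the google-cloud-monitoring Python client. It assumes an Endpoints service already receiving traffic; the project ID is hypothetical.

```python
# A minimal sketch, assuming the google-cloud-monitoring library and an
# Endpoints service already receiving traffic; PROJECT_ID is hypothetical.
import time

from google.cloud import monitoring_v3

PROJECT_ID = "my-project"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},  # the last hour
    }
)

# request_count is one of the metrics Endpoints maintains automatically.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "serviceruntime.googleapis.com/api/request_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.metric.labels, len(series.points))
```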

Setting Cloud Monitoring uptime checks for non publicly accessible backends

I'm having some trouble setting uptime checks for some Cloud Run services that don't allow unauthenticated invocations.
For context, I'm using Cloud Endpoints + ESPv2 as an API gateway that's connected to a few Cloud Run services.
The ESPv2 container/API gateway allows unauthenticated invocations, but the underlying Cloud Run services do not (since requests to these backends flow via the API gateway).
Each Cloud Run service has an internal health check endpoint that I'd like to hit periodically via Cloud Monitoring uptime checks.
This serves the purpose of ensuring that my Cloud Run services are healthy, but it also gives the added benefit of reduced cold boot times, as the containers are kept 'warm'.
However, since the protected Cloud Run services expect a valid authorisation header, all of the requests from Cloud Monitoring fail with a 403.
From the Cloud Monitoring UI, it looks like you can only configure a static auth header, which won't work in this case. I need to be able to dynamically create an auth header per request sent from Cloud Monitoring.
I can see that Cloud Scheduler supports this already. I have a few internal endpoints on the Cloud Run services (that aren't exposed via the API gateway) that are hit via Cloud Scheduler, and I am able to configure an OIDC auth header on each request. Ideally, I'd be able to do the same with Cloud Monitoring.
I can see a few workarounds for this, but all of them are less than ideal:
Allow unauthenticated invocations for the underlying Cloud Run services. This will make my internal services publicly accessible and then I will have to worry about handling auth within each service.
Expose the internal endpoints via the API gateway/ESPv2. This is effectively the same as the previous workaround.
Expose the internal endpoints via the API gateway/ESPv2 AND configure some sort of auth. This sort of works, but at the time of writing the only auth methods supported by ESPv2 are API keys and JWT. JWT is already out of the question, but I guess an API key would work. Again, this requires a bit of setup which I'd rather avoid if possible.
Would appreciate any thought/advice on this.
Thanks!
This simple solution may work for your use case, as it is easier to just use a TCP uptime check on port 443:
Create your own Cloud Run service using https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy.
Create a new uptime check on TCP port 443 against the Cloud Run URL.
Wait a couple of minutes.
Location results: All locations passed
Virginia OK
Oregon OK
Iowa OK
Belgium OK
Singapore OK
Sao Paulo OK
I would also like to note that Cloud Run is a fully managed Google product with a 99.95% monthly uptime SLA and no recent incidents in the past few months, but proactively monitoring it on your end is a very good thing too.
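If you prefer to create the check programmatically instead of through the UI, a minimal sketch with the google-cloud-monitoring Python client could look like this (the project ID and Cloud Run hostname are hypothetical):

```python
# A minimal sketch, assuming the google-cloud-monitoring library; the
# project ID and Cloud Run hostname are hypothetical.
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"

client = monitoring_v3.UptimeCheckServiceClient()
config = monitoring_v3.UptimeCheckConfig(
    display_name="cloud-run-tcp-check",
    monitored_resource={
        "type": "uptime_url",
        # The bare Cloud Run hostname, without the https:// scheme.
        "labels": {"host": "my-service-abc123-uc.a.run.app"},
    },
    tcp_check=monitoring_v3.UptimeCheckConfig.TcpCheck(port=443),
    timeout={"seconds": 10},
    period={"seconds": 300},  # check every 5 minutes
)
client.create_uptime_check_config(
    request={"parent": f"projects/{PROJECT_ID}", "uptime_check_config": config}
)
```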

GCP Cloud Scheduler Permission Errors with Service Account

I have created a set of Cloud Functions that work to ingest data into Google Cloud Storage. The functions are triggered by HTTP GET requests and have been set to only accept internal traffic.
However, when I use Cloud Scheduler to invoke the functions, I continually get permission errors, even after specifying a service account with the proper permissions for each of the functions. I have set each of the functions to be in the us-central1 region and have researched the docs and Stack Overflow with no success so far. Can I get some assistance with this?
Cloud Scheduler is a serverless product. This means it doesn't belong to your project and doesn't send requests to your Cloud Functions through your VPC. In addition, Cloud Scheduler isn't yet supported in VPC Service Controls.
Thus, you can't do it that way. The workaround is to allow all ingress traffic on the Cloud Function and to uncheck allow-unauthenticated access. Your function is then callable from anywhere (from the internet), BUT a valid authentication is required to invoke it.
Use your service account and add it to Cloud Scheduler for invoking your function. Grant it a sufficient role for this (such as Cloud Functions Invoker).
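For illustration, a minimal sketch of creating such a job with the google-cloud-scheduler Python client, with an OIDC token minted from that service account (the project, region, function URL, and account names are hypothetical):

```python
# A minimal sketch, assuming the google-cloud-scheduler library; the
# project, region, function URL, and service account are hypothetical.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/ingest-trigger",
    schedule="0 * * * *",  # every hour
    http_target=scheduler_v1.HttpTarget(
        uri="https://us-central1-my-project.cloudfunctions.net/ingest",
        http_method=scheduler_v1.HttpMethod.GET,
        # Scheduler mints an OIDC token from this service account for
        # every call; the account needs the Cloud Functions Invoker role.
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-invoker@my-project.iam.gserviceaccount.com"
        ),
    ),
)
client.create_job(request={"parent": parent, "job": job})
```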
Alternative
However, if you would prefer not to deploy your function publicly accessible on the internet (the "allow internal traffic only" ingress mode), there is an alternative.
Change your Cloud Scheduler job to publish a Pub/Sub message instead of calling your function directly. Then deploy your function linked to the Pub/Sub topic instead of in HTTP target mode.
You might have some updates to perform in your code, especially if you have parameters to handle (initially in the query string or the body, now everything is in the Pub/Sub message published by Cloud Scheduler). But your function is then only callable through your Pub/Sub topic and no other way.
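For reference, a minimal sketch of the Pub/Sub-triggered variant of the function, assuming the 1st gen background-function signature; the JSON payload layout is hypothetical and depends on what you configure Cloud Scheduler to publish:

```python
# A minimal sketch of the Pub/Sub-triggered variant, assuming the
# 1st gen background-function signature; the payload layout is
# hypothetical and depends on what Cloud Scheduler publishes.
import base64
import json

def ingest(event, context):
    """Triggered by a message on the linked Pub/Sub topic."""
    # Parameters that used to arrive in the query string or body now
    # come in the base64-encoded message data.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    print(f"Ingesting data from {payload.get('source')}")
```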
According to the documentation, in order to trigger a Cloud Function from Cloud Scheduler you have to use Pub/Sub. These are the steps:
Create the Cloud Function and make it triggered by a Pub/Sub topic.
Create the Pub/Sub topic.
Create the Cloud Scheduler job that will invoke the Pub/Sub trigger.
Once you do that you will be able to test-run the Cloud Scheduler job and verify whether it's working now. The final schema is something like this:
Cloud Scheduler job => Pub/Sub topic => Cloud Function
Once it's working remember to revert the roles granted to the Cloud Scheduler service account, as this method doesn't require them.
Here I found a blog post that does the same but with a more practical approach that you can follow from a CLI.
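If you'd rather script step 3 than use the console, here is a minimal sketch with the google-cloud-scheduler Python client (the project, topic, and job names are hypothetical):

```python
# A minimal sketch of step 3, assuming the google-cloud-scheduler
# library; project, topic, and job names are hypothetical.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/nightly-trigger",
    schedule="0 3 * * *",  # 03:00 every day
    pubsub_target=scheduler_v1.PubsubTarget(
        # The full path of the topic the Cloud Function is subscribed to.
        topic_name="projects/my-project/topics/function-trigger",
        data=b'{"task": "nightly"}',
    ),
)
client.create_job(request={"parent": parent, "job": job})
```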

Serverless web app with automatically created, scheduled one-time jobs

I'm trying to figure out if it's feasible to create a serverless web app in which an API function creates a job that is scheduled to run once at a specific time and date.
I've looked at the three main providers, AWS, Google Cloud and Microsoft Azure. All three provide everything needed for a serverless web app in general, but I'm not sure I understand if any of them support what I described above.
AWS has CloudWatch, which has an API. However, there is nothing about Events in the API doc; it looks like Events can only be created by hand in the console or via Terraform.
Google Cloud has the Scheduler. However, there is no mention of an API in the docs. It does support Terraform too, though.
Microsoft has the Azure Scheduler, and that one seems to support creating jobs via an API.
Doesn't Terraform require an API? So am I missing something?
I'm completely new to serverless web apps. Is this even the correct approach to do this?
Edit:
I just realized that it's possible to create Amazon CloudWatch events via an API, however, it's called EventBridge... That makes me think I might have missed something in Google Cloud as well. However, I'm still wondering if this is the right approach?
To provide a little more detail on what I want to do:
A user creates an event in the web frontend.
My API function that the frontend calls creates some cloud version of a cron job that is to be run once at a specific time and date.
The job triggers another function that does something with a third-party API at the time specified by the user.
On Google Cloud, you can deploy your app on serverless services (Cloud Run, Cloud Functions or App Engine). Then you can set up a Cloud Scheduler job. Cloud Scheduler can call an HTTP URL and thereby trigger your serverless service.
About the API accessibility of Google Cloud services: "all is API". Anything you can do in the console or with the gcloud CLI, you can also do with API calls.
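To make that concrete, here's a minimal sketch of an API function creating a one-off Cloud Scheduler job in Python; all names and URLs are hypothetical. Note that a cron expression of this shape recurs yearly, so the handler (or your API) should delete the job after it fires:

```python
# A minimal sketch, assuming the google-cloud-scheduler library; every
# name and URL is hypothetical. A cron expression of this shape recurs
# yearly, so the handler (or this API) should delete the job once it
# has fired.
from datetime import datetime

from google.cloud import scheduler_v1

def schedule_one_time_job(run_at: datetime, event_id: str) -> None:
    client = scheduler_v1.CloudSchedulerClient()
    parent = client.common_location_path("my-project", "us-central1")

    # Pin the schedule to the exact minute, hour, day, and month requested.
    cron = f"{run_at.minute} {run_at.hour} {run_at.day} {run_at.month} *"

    job = scheduler_v1.Job(
        name=f"{parent}/jobs/event-{event_id}",
        schedule=cron,
        time_zone="UTC",
        http_target=scheduler_v1.HttpTarget(
            uri=f"https://my-handler-abc123-uc.a.run.app/fire?event={event_id}",
            http_method=scheduler_v1.HttpMethod.POST,
        ),
    )
    client.create_job(request={"parent": parent, "job": job})
```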

Google Cloud Functions: Pub/Sub vs Rest triggering

Is Pub/Sub significantly faster way of communicating between, say, Kubernetes Engine (GKE) api server and a Cloud Function (GCF)?
Is it possible to use Pub/Sub to have such communication between GKE from one Google Cloud Project and GCF from another Google Cloud Project?
What is the way to communicate with Cloud Functions from another Google Cloud Project with low latency?
I think a global answer will clarify your questions. For this particular case, there are two ways to trigger a Google Cloud Function (GCF): you can make an HTTP request directly, or you can subscribe the GCF to a topic using Pub/Sub [https://cloud.google.com/functions/docs/calling].
If your requests are occasional, an HTTP request will be faster because you don't need an intermediary. If that's not the case, the Pub/Sub subscription queues the messages and ensures delivery by retrying them until it receives confirmation.
To communicate from Google Kubernetes Engine (GKE) in one Google Cloud project to a Google Cloud Function (GCF) in another project, you can use either option: trigger the GCF by HTTP request directly, or do it by publishing a message. When publishing, specify the project you are sending to and the desired topic in that project.
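For example, a minimal sketch of publishing from GKE in one project to a topic in another project with the google-cloud-pubsub client; the project and topic names are hypothetical, and the publishing service account needs the Pub/Sub Publisher role on the target topic:

```python
# A minimal sketch, assuming the google-cloud-pubsub library; project
# and topic names are hypothetical, and the publishing service account
# needs the Pub/Sub Publisher role on the target topic.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# The topic path names the *target* project, not the caller's project.
topic_path = publisher.topic_path("other-project", "gcf-trigger")

future = publisher.publish(topic_path, b'{"action": "process"}')
print(f"Published message {future.result()}")
```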
Also, you need to give the service account the proper permissions to access one project from the other:
For Pub/Sub: https://cloud.google.com/pubsub/docs/authentication
For HTTP requests: https://cloud.google.com/solutions/authentication-in-http-cloud-functions
Google Cloud Functions HTTP triggers documentation: https://cloud.google.com/functions/docs/calling/http
Pub/Sub client libraries documentation: https://cloud.google.com/pubsub/docs/reference/libraries (you can reach GitHub through the links in the code samples and see function examples for each language)