How to export Prometheus to Google Cloud Monitoring using OpenTelemetry? - google-cloud-platform

We have an ASGI API (FastAPI) that exposes a Prometheus metrics endpoint. How can we export these metrics to Google Cloud Monitoring using OpenTelemetry, without using a sidecar?

If you want to export your OpenTelemetry metrics to Cloud Monitoring, Prometheus is unnecessary: you can use the OpenTelemetry - Cloud Monitoring integration directly.
In Python, there is an OpenTelemetry exporter that allows you to do that. No sidecar.
Then, if you need to query your metrics with PromQL, you can use Managed Service for Prometheus, which offers a compliant PromQL endpoint and is based on Monarch (Google's internal monitoring system).
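A minimal sketch of that exporter setup, assuming the opentelemetry-exporter-gcp-monitoring package and Application Default Credentials are available; the meter and metric names are illustrative:
```python
# pip install opentelemetry-sdk opentelemetry-exporter-gcp-monitoring
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.cloud_monitoring import CloudMonitoringMetricsExporter

# Push metrics to Cloud Monitoring every 60 seconds -- no sidecar involved.
reader = PeriodicExportingMetricReader(
    CloudMonitoringMetricsExporter(), export_interval_millis=60_000
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# Instrument the app directly instead of scraping the /metrics endpoint.
meter = metrics.get_meter("fastapi-app")
request_counter = meter.create_counter(
    "request_count", description="Number of handled requests"
)
request_counter.add(1, {"endpoint": "/health"})  # e.g. from a FastAPI middleware
```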

OpenTelemetry doesn't really have a way to scrape a Prometheus endpoint that I'm aware of. I would recommend simply using the Prometheus/Cloud Monitoring integration if that's at all an option. Otherwise, you can instrument your application to write to Cloud Monitoring using OpenTelemetry.

Related

Can Services in GCP's Monitoring monitor endpoints?

I installed managed Anthos on a GKE cluster. Anthos Service Mesh is working and displays my API. Thanks to that, the Services page in Monitoring automatically detects my API. This is great, as it lets me easily set SLOs and an error budget for my API.
However, I would like to be able to easily set SLOs for individual endpoints in my API. Services (in Monitoring) detects only my API, not the endpoints within it (my API is one pod/container plus a sidecar). I tried to add endpoints to Services in Monitoring, but it looks like it is only possible to add Kubernetes objects there.
Is there a way to use Services in Monitoring with endpoints? Is the only option to break the endpoints out into separate microservices?
You can monitor your endpoints using Cloud Endpoints with OpenAPI, which lets you monitor the health of APIs you own using the logs and metrics Cloud Endpoints maintains for you automatically. When users make requests to your API, Endpoints logs information about the requests and responses and tracks three of the four golden signals of monitoring: latency, traffic, and errors. These usage and performance metrics help you monitor your API.
See Configuring Cloud Endpoints for the configuration process, Monitoring your API as a reference for the monitoring process, and the Cloud Endpoints overview for background.
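As a rough illustration, Endpoints is driven by an OpenAPI spec, and each path/operation you declare becomes an individually monitored operation, which is what gives you per-endpoint metrics. A minimal hypothetical openapi.yaml (service and path names are made up) might look like:
```yaml
# Minimal hypothetical Cloud Endpoints spec; deploy with:
#   gcloud endpoints services deploy openapi.yaml
swagger: "2.0"
info:
  title: my-api
  version: "1.0.0"
host: my-api.endpoints.my-project-id.cloud.goog
paths:
  /orders:            # each declared path/operation gets its own metrics
    get:
      operationId: listOrders
      responses:
        "200":
          description: OK
```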

Setting Cloud Monitoring uptime checks for non-publicly accessible backends

I'm having some trouble setting uptime checks for some Cloud Run services that don't allow unauthenticated invocations.
For context, I'm using Cloud Endpoints + ESPv2 as an API gateway that's connected to a few Cloud Run services.
The ESPv2 container/API gateway allows unauthenticated invocations, but the underlying Cloud Run services do not (since requests to these backends flow via the API gateway).
Each Cloud Run service has an internal health check endpoint that I'd like to hit periodically via Cloud Monitoring uptime checks.
This serves the purpose of ensuring that my Cloud Run services are healthy, but it also gives the added benefit of reduced cold boot times, as the containers are kept 'warm'.
However, since the protected Cloud Run services expect a valid authorisation header, all of the requests from Cloud Monitoring fail with a 403.
From the Cloud Monitoring UI, it looks like you can only configure a static auth header, which won't work in this case. I need to be able to dynamically create an auth header per request sent from Cloud Monitoring.
I can see that Cloud Scheduler supports this already. I have a few internal endpoints on the Cloud Run services (that aren't exposed via the API gateway) that are hit via Cloud Scheduler, and I am able to configure an OIDC auth header on each request. Ideally, I'd be able to do the same with Cloud Monitoring.
I can see a few workarounds for this, but all of them are less than ideal:
Allow unauthenticated invocations for the underlying Cloud Run services. This will make my internal services publicly accessible and then I will have to worry about handling auth within each service.
Expose the internal endpoints via the API gateway/ESPv2. This is effectively the same as the previous workaround.
Expose the internal endpoints via the API gateway/ESPv2 AND configure some sort of auth. This sort of works, but at the time of writing the only auth methods supported by ESPv2 are API keys and JWT. JWT is already out of the question, but I guess an API key would work. Again, this requires a bit of setup, which I'd rather avoid if possible.
Would appreciate any thought/advice on this.
Thanks!
This simple solution may work for your use case, as it is easy to just use a TCP uptime check on port 443:
Create your own Cloud Run service using https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy.
Create a new uptime check on TCP port 443 against the Cloud Run URL.
Wait a couple of minutes.
Location results: All locations passed
Virginia OK
Oregon OK
Iowa OK
Belgium OK
Singapore OK
Sao Paulo OK
I would also like to point out that Cloud Run is a fully managed Google product with a 99.95% monthly uptime SLA and no recent incidents in the past few months, but proactively monitoring it on your end is a very good thing too.
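If you prefer to create the check programmatically rather than in the UI, a sketch with the google-cloud-monitoring client could look like this (the project ID and hostname are placeholders):
```python
# pip install google-cloud-monitoring
from google.cloud import monitoring_v3

project_id = "your-project-id"              # placeholder
host = "your-service-abc123-uc.a.run.app"   # placeholder Cloud Run hostname

client = monitoring_v3.UptimeCheckServiceClient()

config = monitoring_v3.UptimeCheckConfig()
config.display_name = "cloud-run-tcp-check"
# A TCP check only tests that the port accepts connections, so it works
# even though the service rejects unauthenticated HTTP requests.
config.monitored_resource = {
    "type": "uptime_url",
    "labels": {"project_id": project_id, "host": host},
}
config.tcp_check = {"port": 443}
config.timeout = {"seconds": 10}
config.period = {"seconds": 300}

new_config = client.create_uptime_check_config(
    request={"parent": f"projects/{project_id}", "uptime_check_config": config}
)
print(new_config.name)
```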

"Vue+S3+Lambda" architecture on Google Cloud

My single-page website (VueJS) has only a few transactions, so I would like to implement it using a serverless architecture.
A recommended architecture on AWS for a simple Web Application is the following:
Vue app uploaded to AWS S3
Connect to the backend via a REST API
Use Lambda functions to connect to a database
However, I would like to do this on Google Cloud as I plan to use BigQuery for Analytics.
What would be a similar, suitable architecture using GCP products to launch my Vue-based website with some straightforward backend processes?
You can use:
Cloud Storage for S3
API Gateway or Cloud Endpoints for the REST API (compare your load needs and pricing)
Cloud Functions for Lambda
In terms of implementation complexity, it will be more or less the same. Some features are implemented more conveniently in GCP than in AWS, and some vice versa.
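For the Lambda part, here is a sketch of an HTTP-triggered Cloud Function in Python, querying BigQuery since that's the planned analytics store; the function name and the analytics.events dataset/table are hypothetical:
```python
# main.py - deploy with:
#   gcloud functions deploy api --runtime python310 --trigger-http --allow-unauthenticated
import json
from google.cloud import bigquery

client = bigquery.Client()  # created once, reused across invocations

def api(request):
    """HTTP entry point serving the Vue app's REST backend."""
    # Placeholder query; `analytics.events` is a hypothetical dataset/table.
    rows = client.query("SELECT COUNT(*) AS n FROM `analytics.events`").result()
    count = next(iter(rows)).n
    # CORS header so the Vue app served from Cloud Storage can call this API.
    headers = {"Access-Control-Allow-Origin": "*", "Content-Type": "application/json"}
    return (json.dumps({"event_count": count}), 200, headers)
```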

Google Cloud Functions: Pub/Sub vs Rest triggering

Is Pub/Sub significantly faster way of communicating between, say, Kubernetes Engine (GKE) api server and a Cloud Function (GCF)?
Is it possible to use Pub/Sub to have such communication between GKE from one Google Cloud Project and GCF from another Google Cloud Project?
What is the way to communicate with Cloud Functions from another Google Cloud Project with low latency?
I think a general answer will clarify your questions. For this particular case, there are two ways to trigger a Google Cloud Function (GCF): you can make an HTTP request directly, or you can subscribe the GCF to a topic using Pub/Sub (https://cloud.google.com/functions/docs/calling).
If your requests are occasional, an HTTP request will be faster because you don't need an intermediary. If that's not the case, a Pub/Sub subscription queues the messages and ensures delivery by retrying until it receives confirmation.
To communicate between Google Kubernetes Engine (GKE) in one Google Cloud project and a GCF in another, you can use either option: trigger the GCF by an HTTP request directly, or publish a message, specifying the target project and the desired topic in that project (see the sketch after the links below).
You also need to give the service account the proper permissions to access one project from the other:
For Pub/Sub: https://cloud.google.com/pubsub/docs/authentication
For HTTP requests: https://cloud.google.com/solutions/authentication-in-http-cloud-functions
Cloud Functions HTTP triggers documentation: https://cloud.google.com/functions/docs/calling/http
Pub/Sub client library documentation: https://cloud.google.com/pubsub/docs/reference/libraries (you can reach GitHub through the links in the code and see function examples for each language)
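A minimal sketch of the cross-project publish from the GKE side, assuming the publishing service account has roles/pubsub.publisher on the target topic; the project and topic names are placeholders:
```python
# pip install google-cloud-pubsub
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()

# The topic path names the *target* project explicitly; that is what makes
# cross-project triggering work, provided IAM allows it.
topic_path = publisher.topic_path("other-project-id", "gcf-trigger-topic")

future = publisher.publish(topic_path, b"payload", origin="gke-api-server")
print(f"published message id: {future.result()}")  # blocks until acknowledged
```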

GCP: enable services/APIs via REST API or Python module

How can I enable/disable APIs/services in a Google Cloud project via REST APIs or Python?
For example, I want to enable the following API/service in a project:
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?project=
You can programmatically enable or disable a GCP service using the Service Usage API. There are also methods for batch operations and for querying service state; see the documentation below:
https://cloud.google.com/service-usage/docs/reference/rest/v1/services
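A sketch in Python using the discovery-based client, assuming Application Default Credentials with permission to enable services; the project ID is a placeholder:
```python
# pip install google-api-python-client
from googleapiclient import discovery

serviceusage = discovery.build("serviceusage", "v1")
name = "projects/your-project-id/services/iam.googleapis.com"  # placeholder project

# Enable the IAM API; services().disable(...) works the same way.
operation = serviceusage.services().enable(name=name, body={}).execute()
print(operation)

# Query the resulting state of the service.
service = serviceusage.services().get(name=name).execute()
print(service["state"])  # "ENABLED" or "DISABLED"
```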