I am trying to build a Python Cloud Run service that should be triggered whenever a file is uploaded to a Google Cloud Storage bucket. However, even though I have already created an Eventarc trigger for this, the service is not being triggered: I cannot find any entries in the Cloud Run service logs, yet the trigger tab shows an Eventarc trigger associated with the service.
[![Cloud Run Trigger Image][1]][1]
[![Cloud Run Logs][2]][2]
Any ideas or links that can help me here?
[1]: https://i.stack.imgur.com/ijjh2.png
[2]: https://i.stack.imgur.com/QhFhk.png
In your logs, the line
booting worker with pid: 4
indicates that your Cloud Run instance did indeed get triggered, but it might have failed to boot, since there is no further log output.
To debug, deploy a demo Cloud Run service that just logs the incoming message (see the sketch below). That way it is easy to see whether it gets triggered at all, and with what payload.
There is an easy tutorial from Google along these lines.
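For illustration, here is a minimal sketch of such a logging service, assuming Flask and an Eventarc trigger that POSTs the event to the service root (the route and names are placeholders, not taken from your setup):

```python
# Minimal debugging service: logs whatever Eventarc delivers so you can confirm
# the trigger fires and inspect its payload. Assumes Flask; adjust to your stack.
import os

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def index():
    # Eventarc delivers the event as an HTTP POST; dump headers and body.
    print("Headers:", dict(request.headers))
    print("Body:", request.get_data(as_text=True))
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

If even this service never logs anything, the trigger itself (event filters, trigger service account, ingress settings) is the likelier culprit.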
I have two Cloud Run services within the same VPC/project, where one service (A) builds its request based upon the output of a request to another service (B).
I am calling Cloud Run (B) from Cloud Run (A) by providing Cloud Run (B)'s trigger URL in Cloud Run (A)'s .env.
The error I receive from curl is "System Unavailable".
The error logs from the Cloud Run service are:
POST 503
"The request failed because either the HTTP response was malformed or connection to the instance had an error. Additional troubleshooting documentation can be found at: https://cloud.google.com/run/docs/troubleshooting#malformed-response-or-connection-error"
The logs show that Cloud Run (A) is successfully calling the other service (B), but the request takes 60-120 seconds until a response is generated. I set the request timeout to 10 min on Cloud Run (A) to be safe, but I still face errors on Cloud Run (A).
The only network-specific non-default setting used when setting up both services is "Ingress = Internal + load balancing".
This setup works when the original reference Cloud Run (B) is sent a request from a GCE VM server running the same image/container setup.
What Cloud Run setting(s) do I need so that one Cloud Run service can properly request data from another?
I am referencing the Cloud Run service from both the other Cloud Run service and the VM server via its trigger URL:
cat .env
URL=https://<name>.a.run.app
As @John Hanley explained in the comments, my answer also focuses on the same points.
For your service (B) to be called by service (A), these are the conditions that must be met:
- Make sure both services are deployed under the same VPC network and in the same project. For more information, a similar thread has explained these constraints.
- For Cloud Run inter-service communication, follow this thread, which answers some of your questions.
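If ingress and networking check out, authentication is the usual remaining issue. As a rough sketch (assuming service A is Python, uses the requests library, and reads B's URL from the URL environment variable as in your .env), an authenticated call from A to B could look like this:

```python
# Sketch of an authenticated service-to-service call from Cloud Run (A) to (B).
# Assumes B's trigger URL is in the URL environment variable, as in the .env above.
import os

import requests
import google.auth.transport.requests
import google.oauth2.id_token

def call_service_b(payload: dict) -> requests.Response:
    audience = os.environ["URL"]  # e.g. https://<name>.a.run.app
    # Mint an ID token for B's URL using A's runtime service account.
    auth_request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_request, audience)
    return requests.post(
        audience,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=600,  # generous, since B can take 60-120 seconds to respond
    )
```

Note that A's runtime service account also needs the Cloud Run Invoker role (roles/run.invoker) on service B for the call to be accepted.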
I have a NestJS application running on ECS (AWS). My CloudWatch log is broken: each JSON line is logged as a separate log line.
Image below:
Can anybody suggest an idea?
I have a Redis instance running in GCP Memorystore, and I have enabled notify-keyspace-events on this instance. My ultimate goal is to publish messages from my Redis instance when certain keys expire, and on these events, make a call to a service I have on Cloud Run with the data of the key as input.
How should I go about building this? The only way I can think of is to have a thread always running in my Cloud Run instance to check for new messages in Redis Pub/Sub channels. I am afraid this might not work, though, as Cloud Run is not going to allow background tasks.
I am looking for a way to generate a POST request to my Cloud Run service when the Redis message is published, but I could not find a way to do this yet.
What I know so far that can be integrated together is Cloud Pub/Sub with Cloud Run, as stated in these guides here and here.
What I don't know for sure is whether you will be able to somehow publish events from your GCP Memorystore instance to a Pub/Sub topic. Maybe, if you are able to read in real time which Redis keys expire, you could manually publish these events as messages to your Pub/Sub topic, and your Cloud Run service could then subscribe to the same topic to receive the messages from it.
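As a rough sketch of that relay idea (a small always-on worker outside Cloud Run, e.g. on a GCE VM or GKE; the project ID, topic name, Redis host and database number are placeholders):

```python
# Sketch: listen for Redis key-expiration events and republish them to Pub/Sub.
# PROJECT_ID, TOPIC_ID, and the Redis host/db are placeholders for your own values.
import redis
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"   # placeholder
TOPIC_ID = "expired-keys"   # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore private IP (placeholder)
pubsub = r.pubsub()
# Requires notify-keyspace-events to include expired events (e.g. the "Ex" flags).
pubsub.psubscribe("__keyevent@0__:expired")

for message in pubsub.listen():
    if message["type"] == "pmessage":
        expired_key = message["data"]  # bytes: the name of the expired key
        publisher.publish(topic_path, data=expired_key)
```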
Another thing you could consider is using background Cloud Functions.
As for sending a direct POST request to your Cloud Run service, the following documentation could be useful for you.
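If you end up using a Pub/Sub push subscription that targets your Cloud Run service, the handler would receive the expired key roughly like this (a sketch assuming Flask and the standard Pub/Sub push envelope):

```python
# Sketch of a Cloud Run endpoint receiving Pub/Sub push messages (assumes Flask).
import base64

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_push():
    envelope = request.get_json()
    if not envelope or "message" not in envelope:
        return ("Bad Request: invalid Pub/Sub message", 400)

    # The expired key name was published as the message data (base64-encoded).
    data = envelope["message"].get("data", "")
    expired_key = base64.b64decode(data).decode("utf-8") if data else ""
    print(f"Key expired: {expired_key}")

    return ("", 204)
```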
I am running a python script in Cloud Run on a daily basis with Cloud Scheduler to pull data from BigQuery and upload it to Google Cloud Storage as a CSV file. The Cloud Scheduler setup utilizes an HTTP "Target" with a GET "HTTP method". Also, Cloud Scheduler authenticates the https endpoint using a service account with the "Add OIDC token" option.
When running Cloud Scheduler and Cloud Run with a very small subset of the BigQuery data for a job that takes a few seconds, the "Result" in Cloud Scheduler always shows "Success" and the job completes as intended. However, when running Cloud Scheduler and Cloud Run with the full BigQuery dataset for a job that takes a few minutes, the "Result" in Cloud Scheduler always shows "Failed", even though the CSV file is typically (although not always) uploaded into Google Cloud Storage as intended.
(1) When running Cloud Scheduler and Cloud Run on the full BigQuery dataset, why does the "Result" in Cloud Scheduler always show "Failed", even though the job is typically finishing as intended?
(2) How can I fix Cloud Scheduler and Cloud Run to ensure the job always completes as intended and the "Result" in Cloud Scheduler always shows "Success"?
It's a common pitfall with Cloud Scheduler. I have raised it with Google many times, but nothing has changed so far...
The GUI (the web console) doesn't let you configure everything, especially not the timeout. Your Cloud Scheduler job fails because it considers that it didn't receive the answer in time when you scan your full BQ dataset (which can take a few minutes).
To solve this, use the command line (gcloud), especially the attempt-deadline parameter. You can also have a look at other params: retry, backoff, ... The allowed customization is interesting, but not present in the GUI!
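For example, something along these lines should raise the deadline (the job name and value are placeholders; check gcloud scheduler jobs update http --help for the exact options available in your gcloud version):

```
gcloud scheduler jobs update http my-bq-export-job --attempt-deadline=600s
```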
How do I view the stdout/stderr output logs for Cloud ML? I've tried using gcloud beta logging read and also gcloud beta ml jobs stream-logs, and nothing... all I see are the INFO-level logs generated by the system, e.g. "Tearing down TensorFlow".
Also, in the case where I have an error showing that the Docker container exited with a non-zero code, it links me to a GUI page that shows the same stuff as gcloud beta ml jobs stream-logs. Nothing shows me the actual console output that my job produced...
Help please??
It may be the case that the Cloud ML service account does not have permission to write to your project's Stackdriver Logs, or the Logging API is not enabled on your project.
First check whether the Stackdriver Logging API is enabled for the project by going to the API manager: https://console.cloud.google.com/apis/api/logging.googleapis.com/overview?project=[YOUR-PROJECT-ID]
The Cloud ML service account should be automatically added as an Editor to the project, which allows it to write to the project's logs, but if you have changed your project permissions it may have lost that role. If so, check that you've manually given the Cloud ML service account the Logs Writer permission.
If you are unsure of the service account used by Cloud ML, this page has instructions on how to find it: https://cloud.google.com/ml/docs/how-tos/using-external-buckets
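If the role does turn out to be missing, one way you might grant it is along these lines (the project ID and service account email are placeholders you would replace with your own values):

```
gcloud projects add-iam-policy-binding YOUR-PROJECT-ID \
  --member="serviceAccount:YOUR-CLOUD-ML-SERVICE-ACCOUNT" \
  --role="roles/logging.logWriter"
```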