Google Cloud Workflows with Cloud Run - google-cloud-platform

We are making a few changes to the present architecture and want to implement Google Cloud Workflows to track the flow of project creation. All the handlers are deployed on Cloud Run. Now, how can I call the specific endpoints on Cloud Run from the workflow?
I only have one Cloud Run URL. I am new to Cloud. Any help will be much appreciated.

To summarize: you want to use Workflows with Cloud Run and Cloud Functions. Please have a look at this - here
For reference, below are the abstracted steps from that guide, as an example to give you an idea; you create a single workflow, connecting one service at a time:
1. Deploy two Cloud Functions services: the first function generates a random number, and then passes that number to the second function, which multiplies it.
2. Using Workflows, connect the two HTTP functions together. Execute the workflow and return a result that is then passed to an external API.
3. Using Workflows, connect an external HTTP API that returns the log for a given number. Execute the workflow and return a result that is then passed to a Cloud Run service.
4. Deploy a Cloud Run service that allows authenticated access only. The service returns the math.floor for a given number.
5. Using Workflows, connect the Cloud Run service, execute the entire workflow, and return a final result.
An excerpt (from the above reference), as an example of how to create a Cloud Run service from a container and attach it to a workflow...
Build the container image:
export SERVICE_NAME=<your_svc_name>
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${SERVICE_NAME}
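If you want to confirm the image was pushed before deploying (an optional check, using the same variables as above):
gcloud container images list-tags gcr.io/${GOOGLE_CLOUD_PROJECT}/${SERVICE_NAME}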
Deploy the container image to Cloud Run, ensuring that it only accepts authenticated calls:
gcloud run deploy ${SERVICE_NAME} \
--image gcr.io/${GOOGLE_CLOUD_PROJECT}/${SERVICE_NAME} \
--platform managed \
--no-allow-unauthenticated
When you see the service URL, the deployment is complete. You will need to specify that URL when updating the workflow definition.
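If you need to look the URL up again later, you can retrieve it with gcloud (assuming a fully managed service; replace the region with yours):
gcloud run services describe ${SERVICE_NAME} \
--platform managed \
--region <your_region> \
--format='value(status.url)'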
Create a text file named e.g. "workflow.yaml" with the following content:
- randomgen_function:
    call: http.get
    args:
        url: https://us-central1-*****.cloudfunctions.net/randomgen
    result: randomgen_result
- multiply_function:
    call: http.post
    args:
        url: https://us-central1-*****.cloudfunctions.net/multiply
        body:
            input: ${randomgen_result.body.random}
    result: multiply_result
- log_function:
    call: http.get
    args:
        url: https://api.mathjs.org/v4/
        query:
            expr: ${"log(" + string(multiply_result.body.multiplied) + ")"}
    result: log_result
- floor_function:
    call: http.post
    args:
        url: https://**service URL**
        auth:
            type: OIDC
        body:
            input: ${log_result.body}
    result: floor_result
- return_result:
    return: ${floor_result}
Note: here you replace service URL with your Cloud Run service URL generated above.
This connects the Cloud Run service in the workflow. Note that the auth key ensures that an authentication token is being passed in the call to the Cloud Run service.
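For the OIDC call to succeed, the workflow has to run as a service account that is allowed to invoke the Cloud Run service. A minimal sketch, assuming a new service account named workflows-sa (the name is just an example) that is then reused in the deploy step below:
export SERVICE_ACCOUNT=workflows-sa
gcloud iam service-accounts create ${SERVICE_ACCOUNT}
gcloud run services add-iam-policy-binding ${SERVICE_NAME} \
--member=serviceAccount:${SERVICE_ACCOUNT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com \
--role=roles/run.invoker \
--platform managed \
--region <your_region>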
Deploy the workflow, passing in the service account:
cd ~
gcloud workflows deploy <<your_workflows_name>> \
--source=workflow.yaml \
--service-account=${SERVICE_ACCOUNT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com
Execute the workflow:
gcloud workflows run <<your_workflows_name>>
The output should resemble the following:
result: '{"body":............;
...
startTime: '2021-05-05T14:36:48.762896438Z'
state: SUCCEEDED
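You can also inspect past runs from the CLI (assuming the workflow name used above):
gcloud workflows executions list <<your_workflows_name>>
gcloud workflows executions describe <execution_id> --workflow=<<your_workflows_name>>
Regarding the "one Cloud Run URL" part of your question: the url in each workflow step is just a full HTTP URL, so you can point different steps at different paths on the same Cloud Run service (for example https://your-service-url/create-project and https://your-service-url/notify - hypothetical paths, use whatever your handlers expose), repeating the auth block on each step.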

Related

Why doesn't my Cloud Function (2nd gen) trigger?

After getting some help deploying the sample code for a Cloud Function with a storage trigger, the function now deploys just fine, but it won't trigger :-(
The function is deployed.
The deployed code (per the tutorial):
'use strict';
// [START functions_cloudevent_storage]
const functions = require('@google-cloud/functions-framework');
// Register a CloudEvent callback with the Functions Framework that will
// be triggered by Cloud Storage.
functions.cloudEvent('imageAdded', cloudEvent => {
console.log(`Event ID: ${cloudEvent.id}`);
console.log(`Event Type: ${cloudEvent.type}`);
const file = cloudEvent.data;
console.log(`Bucket: ${file.bucket}`);
console.log(`File: ${file.name}`);
console.log(`Metageneration: ${file.metageneration}`);
console.log(`Created: ${file.timeCreated}`);
console.log(`Updated: ${file.updated}`);
});
// [END functions_cloudevent_storage]
The deploy command I used:
gcloud functions deploy xxx-image-handler --gen2 --runtime=nodejs16 --project myproject --region=europe-west3 --source=. --entry-point=imageAdded --trigger-event-filters='type=google.cloud.storage.object.v1.finalized' --trigger-event-filters='bucket=xxx_report_images'
To test I just uploaded a text file from command line:
gsutil cp test-finalize.txt gs://xxx_report_images/test-finalize.txt
Which worked fine, and the file was uploaded to the xxx_report_images bucket (which also is in the europe-west3 region) but the function is never triggered, which I can see by 0 items in the log:
gcloud beta functions logs read xxx-image-handler --gen2 --limit=100 --region europe-west3
Listed 0 items.
What is going on here? This seems very straight-forward and I fail to see what I'm missing, so hopefully I can get some guidance.
**** EDIT 1 *****
Regarding comment 1 below: I can see in the Eventarc trigger list that the service account used by Eventarc is xxx-compute@developer.gserviceaccount.com
And I can see in the IAM Principals list that this indeed is the Default compute service account, which has the Editor role. I also explicitly added the Eventarc Connection Publisher role to this service account but without success (I assume the Editor role already contains this role, but just to be sure...). So I guess the issue is not related to your suggestion (?).
BTW, I tried the suggested gcloud compute project-info describe --format="value(defaultServiceAccount)" but just got Could not fetch resource: - Required 'compute.projects.get' permission for 'projects/myproject', and I couldn't figure out which role to add to which service account to enable this. However, as noted above, I found the info in the Eventarc section of the console instead.
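If it helps, the same information can also be read from the CLI instead of the console (the trigger name below is whatever the function deployment created; adjust the location if needed):
gcloud eventarc triggers list --location=europe-west3
gcloud eventarc triggers describe <trigger_name> --location=europe-west3 --format='value(serviceAccount)'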
**** EDIT 2 ****.
Indeed, after more testing I think I have nailed it down to a permission issue on Eventarc, just as suggested in the comment. I've posted a new question which is more specific.
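In case it helps others landing here: for 2nd gen Cloud Storage triggers, Eventarc delivers events through Pub/Sub, and a grant that is often missing is roles/pubsub.publisher on the Cloud Storage service agent. A sketch of that grant, assuming the project ID myproject from the question:
PROJECT_NUMBER=$(gcloud projects describe myproject --format='value(projectNumber)')
gcloud projects add-iam-policy-binding myproject \
--member="serviceAccount:service-${PROJECT_NUMBER}@gs-project-accounts.iam.gserviceaccount.com" \
--role="roles/pubsub.publisher"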

How to send the results of a newman report to Datadog?

I have built a small microservice that is connected to Datadog, i.e. API calls to this service using Postman are shown in Datadog. I have generated the newman report for the service using -
newman run collection.json --reporters cli,json --reporter-json-export output.json
Now, I want the contents of my newman report output.json to be shown in Datadog. Any help/idea on how to do that would be really appreciated.
Please do the following:
1. Construct an API call with your API key that posts data to Datadog - ref here
2. Write a separate Node.js script to call this API with the report attached.
3. Call your script after the newman execution (see the sketch after this list).
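Instead of a separate Node.js script, even a single curl call after the newman run can do it. A minimal sketch posting the report to the Datadog Logs intake API (assumes the US1 Datadog site and that DD_API_KEY is set; the intake host differs for other sites, and very large reports may exceed the per-entry size limit, in which case post only the run summary):
newman run collection.json --reporters cli,json --reporter-json-export output.json
curl -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-d @output.json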

Invalid service name [GOOGLE_APPLICATION_CREDENTIALS=name] in GCP when connecting Twilio messaging with Dialogflow

I have created an agent in Dialogflow and connected it to a Cloud Function via webhook. Now I want to integrate it with Twilio text messaging, so I followed the https://github.com/GoogleCloudPlatform/dialogflow-integrations/tree/master/twilio#readme tutorial, but when I run the command:
"gcloud beta run deploy --image gcr.io/test1/dialogflow-twilio--update-env-vars GOOGLE_APPLICATION_CREDENTIALS=test1.json --memory 1Gi"
it gives me the error:
(gcloud.beta.run.deploy) Invalid service name [GOOGLE_APPLICATION_CREDENTIALS=name].
Service name must use only lowercase alphanumeric characters
and dashes. Cannot begin or end with a dash, and cannot be longer than 63 characters...
My gcloud SDK version is 290.0.1. I have created a service account that has been given the dialogflow-client role, and I am using that account's JSON file. Please help me figure out what I am missing.
You must be entering GOOGLE_APPLICATION_CREDENTIALS=name whenever the command prompts you to enter a service name. In this case you can simply hit enter and it will create a default service name for you.
From README.md:
When prompted for a service name hit enter to accept the default.
Edit:
Run your command like this (add a space between dialogflow-twilio and --update-env-vars):
gcloud beta run deploy --image gcr.io/test1/dialogflow-twilio --update-env-vars GOOGLE_APPLICATION_CREDENTIALS=test1.json --memory 1Gi
The current Google Cloud SDK version is 316. There is one release per week. If yours is 290, that means you are 26 weeks behind, roughly 6 months.
Update your gcloud SDK; it should fix your issue (your gcloud version simply doesn't know the parameter you use, and takes the parameter value as the name of the Cloud Run service).
Try a gcloud components update
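A minimal update-and-verify sequence (note that gcloud components update only works if the SDK was not installed through a package manager such as apt or yum; in that case update the package instead):
gcloud components update
gcloud version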

“Create new version” ignores custom service account

I'm trying to deploy a new version of a model to AI Platform; it's a custom prediction routine. I've managed to deploy just fine when I have all the resources in the same GCP project, but when I try to deploy and point the GCS files to a bucket in a different project, it fails. So I'm trying to pass which service account to use when creating the version, but it keeps getting ignored.
That's the message I get:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json returned "Field: version.deployment_uri Error: The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-*****@cloud-ml.google.com.iam.gserviceaccount.com.". Details: "[{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'version.deployment_uri', 'description': 'The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-******@cloud-ml.google.com.iam.gserviceaccount.com.'}]}]
My request looks like
POST https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json
{
    "name": "v1",
    "deploymentUri": "gs://[bucket-gcp-project-2]",
    "pythonVersion": "3.5",
    "runtimeVersion": "1.13",
    "package_uris": "gs://[bucket-gcp-project-2]/model.tar.gz",
    "predictionClass": "predictor.Predictor",
    "serviceAccount": "my-service-account@[gcp-project-1].iam.gserviceaccount.com"
}
The service account has access in both projects.
Specifying a service account is documented as a beta feature. Try using the gcloud SDK, e.g.:
gcloud components install beta
gcloud beta ai-platform versions create v1 \
--service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com ...
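For completeness, a sketch of the full beta command matching the JSON request from the question (placeholders kept as in the question):
gcloud beta ai-platform versions create v1 \
--model=[model_name] \
--origin=gs://[bucket-gcp-project-2] \
--runtime-version=1.13 \
--python-version=3.5 \
--package-uris=gs://[bucket-gcp-project-2]/model.tar.gz \
--prediction-class=predictor.Predictor \
--service-account=my-service-account@[gcp-project-1].iam.gserviceaccount.com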

How to check if a gcloud backend service/url map is ready

Is there a way to determine if a backend service is ready? I ask because I run a script that creates a backend and then a URL map that uses this backend. The problem is I sometimes get errors saying the backend is not ready for use. I need to be able to pause until the backend is ready before I create the URL map. I could check the error response for the phrase 'is not ready', but this isn't reliable for future versions of gcloud. This is somewhat related to another post I recently made on how to reliably check for gcloud errors.
The same goes for the URL map: when I create a proxy that uses the URL map, I sometimes get an error saying the URL map is not ready.
Here's an example of what I'm experiencing:
gcloud compute url-maps add-path-matcher app-url-map \
--path-matcher-name=web-path-matcher \
--default-service=web-backend \
--new-hosts="example.com" \
--path-rules="/*=web-backend"
ERROR: (gcloud.compute.url-maps.add-path-matcher) Could not fetch resource:
- The resource 'projects/my-project/global/backendServices/web-backend' is not ready
gcloud compute target-https-proxies create app-https-proxy \
--url-map app-url-map \
--ssl-certificates app-ssl-cert
ERROR: (gcloud.compute.target-https-proxies.create) Could not fetch resource:
- The resource 'projects/my-project/global/urlMaps/app-url-map' is not ready
gcloud -v
Google Cloud SDK 225.0.0
beta 2018.11.09
bq 2.0.37
core 2018.11.09
gsutil 4.34
I would assume it's gcloud alpha resources list ...
See the Error Messages of the Resource Manager and scroll down to the bottom, where it reads:
notReady - The API server is not ready to accept requests.
which equals HTTP 503, SERVICE_UNAVAILABLE.
Adding the --verbosity option might provide some more details; see the documentation.
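As far as I know these gcloud commands don't expose an explicit readiness check, so a pragmatic workaround is to retry the dependent command with a delay until the 'not ready' state clears. A minimal bash sketch (the attempt count and sleep interval are arbitrary):
retry() {
  # Retry the given command up to 10 times, sleeping 10 seconds between attempts.
  local n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge 10 ]; then
      echo "Giving up after $n attempts: $*" >&2
      return 1
    fi
    sleep 10
  done
}
retry gcloud compute url-maps add-path-matcher app-url-map \
--path-matcher-name=web-path-matcher \
--default-service=web-backend \
--new-hosts="example.com" \
--path-rules="/*=web-backend"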