I have deployed a gcloud background function (Pub/Sub-triggered) in the Emulator.
It is successfully invoked from the command line:
functions call helloPubSub --data='{"message":"Hello World"}'
How can I invoke this local gcloud function from my local server code?
= = =
Below is my server-side code that publishes to the topic:
pubsub
  .topic(topicName)
  .publisher()
  .publish(dataBuffer)
  .then(results => {
    const messageId = results[0];
    console.log(`Message ${messageId} published.`);
    res.status(200).send({hello: 'world'});
  })
  .catch(err => {
    console.error('ERROR:', err);
    res.status(500).send({err: err});
  });
I receive the following error message:
{"err":{"code":7,"metadata":{"_internal_repr":{}},"note":"Exception occurred in retry method that was not classified as transient"}}
In the official docs it states:
Note: Functions deployed in the Emulator with non-HTTP triggers like Cloud Storage or Cloud Pub/Sub will not be invoked by these services. The Emulator is intended as a development environment only, so you would need to call these functions manually to have them invoked.
So if you deployed a function locally with a Cloud Pub/Sub trigger, the only way to invoke it is with the command-line command:
functions call [your-function]
A Pub/Sub topic invokes a Cloud Function endpoint upon receiving a new message. If an error happens inside the Cloud Function, the function returns an error. Will Pub/Sub retry the delivery in case of an error?
The Cloud Function was deployed without the retry option; I want the retry control on the Pub/Sub side.
I tried a sample Pub/Sub-topic-triggered Cloud Function which always returns an error on execution:
package p

import (
	"context"
	"errors"
)

// PubSubMessage is the payload of a Pub/Sub event.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

// PushBackOffTest always fails, to exercise Pub/Sub redelivery.
func PushBackOffTest(ctx context.Context, m PubSubMessage) error {
	print(string(m.Data))
	return errors.New("always returns error")
}
But the Cloud Function is not executed again; it ran only once.
The ACK deadline is 600 seconds and the max delivery attempts is 6, configured from the GCP console.
If you want the event to be redelivered in the event of an error, then you need to enable retry in your Cloud Function by checking the "Retry on failure" box. Otherwise, Cloud Functions will acknowledge the message received from Pub/Sub regardless of the result of processing it. Checking this box is what tells Cloud Functions to use Cloud Pub/Sub's retry mechanism for unacknowledged messages.
I have written a small Cloud Function in GCP which is subscribed to a Pub/Sub event. Whenever a Cloud Build is triggered, the function posts a message into a Slack channel over a webhook.
In the response we get lots of details such as the trigger name, branch name, and variable details, but I am more interested in the build logs URL.
Currently the build logs URL in the response looks like: logUrl: https://console.cloud.google.com/cloud-build/builds/899-08sdf-4412b-e3-bd52872?project=125205252525252
which requires GCP console access to check the logs.
In the console, though, there is a View Raw option. Is it possible to get that direct URL in the event response, so that I can send it straight to Slack and anyone can access the logs without GCP console access?
In your Cloud Build event message, you need to extract 2 values from the JSON message:
logsBucket
id
The raw file is stored here
<logsBucket>/log-<id>.txt
So, you can get it easily in your function with Cloud Storage client library (preferred solution) or with a simple HTTP Get call to the storage API.
If you need more guidance, let me know your dev language, I will send you a piece of code.
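As a minimal sketch of the path construction described above (`rawLogPath` is a hypothetical helper, and `build` is assumed to be the decoded Cloud Build message):

```javascript
// Build the raw log object path <logsBucket>/log-<id>.txt from a decoded
// Cloud Build message (`build` is an assumption of this sketch).
function rawLogPath(build) {
  return build.logsBucket + '/log-' + build.id + '.txt';
}
```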
As @guillaume blaquiere suggested, here is the piece of code used in the Cloud Function to generate the signed URL of the Cloud Build logs:
const { Storage } = require('@google-cloud/storage');
const gcs = new Storage();

const filename = 'log-' + build.id + '.txt';
const file = gcs.bucket(BUCKET_NAME).file(filename);

const getURL = async () => {
  return new Promise((resolve, reject) => {
    file.getSignedUrl({
      action: 'read',
      expires: Date.now() + 76000000
    }, (err, url) => {
      if (err) {
        console.error(err);
        return reject(err); // return here so resolve() is not also called
      }
      resolve(url);
    });
  });
};

const signedUrl = await getURL();
If anyone is looking for the whole code, please follow this link: https://github.com/harsh4870/Cloud-build-slack-notification/blob/master/singedURL.js
I have two scripts that I want to run, function_1 and function_2, where function_2 has to be run only after successful execution of function_1.
To do this on GCP I convert each script to a Cloud Function, which I set to be triggered by Pub/Sub as is standard in GCP. Let's say I want to schedule these functions using Composer. Since there's no Operator for executing a Cloud Function, I have to use the Pub/Sub operator to fire a message to my topic, which will in turn execute the function.
Here's my issue: is it possible to have the trigger for function_2 in Composer execute only after function_1 has successfully run? Since my DAG task is not the execution of the function but rather the firing of the message to Pub/Sub (which in turn runs the function), I don't understand how I can have function_2 run after the upstream function has fully executed, instead of when the Pub/Sub message is sent.
The Google Cloud Functions Operators were released in Airflow version 1.10.1; they solve this problem, with no need for Pub/Sub.
If you are running an Airflow version earlier than 1.10.1, using the http_operator is the workaround: return the status from the Cloud Function.
Reference:
https://airflow.apache.org/docs/stable/howto/operator/gcp/function.html
https://airflow.apache.org/docs/stable/_api/airflow/operators/http_operator/index.html#module-airflow.operators.http_operator
I have successfully run Firebase emulator:
E:\firebase>firebase emulators:start
i emulators: Starting emulators: functions, firestore
! Your requested "node" version "8" doesn't match your global version "10"
+ functions: Emulator started at http://localhost:5001
! No Firestore rules file specified in firebase.json, using default rules.
i firestore: Serving ALL traffic (including WebChannel) on http://localhost:8080
! firestore: Support for WebChannel on a separate port (8081) is DEPRECATED and will go away soon. Please use port above instead.
i firestore: Emulator logging to firestore-debug.log
+ firestore: Emulator started at http://localhost:8080
i firestore: For testing set FIRESTORE_EMULATOR_HOST=localhost:8080
i functions: Watching "E:\firebase\functions" for Cloud Functions...
! functions: Your GOOGLE_APPLICATION_CREDENTIALS environment variable points to E:\firebase\key.json. Non-emulated services will access production using these credentials. Be careful!
+ functions[notifyNewMessage]: firestore function initialized.
+ All emulators started, it is now safe to connect.
The functions file with notifyNewMessage function is below:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.notifyNewMessage = functions.firestore
  .document('test/{test}')
  .onCreate((docSnapshot, context) => {
    console.log(docSnapshot.data());
  });
When I create a new document manually in my Firebase console, my CLI on Windows does not log anything. How can I fix this so that the function's logs appear in my CLI?
I was just being an idiot and had not re-compiled my code.
The local emulator doesn't respond to changes in the Firestore database that's hosted in Google's cloud and visible in the console. What it does respond to are changes in the locally emulated Firestore database also running on your machine. If you want your Firestore function to trigger in the local emulator, you will have to instead make a change to the emulated Firestore, as described in the documentation. You might want to go through the provided quickstart to get some experience with this.
If you don't want to use the Firestore emulator and just want to trigger it directly for testing, you can use the Firebase CLI local shell.
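For example, a small script like the following (a sketch: the demo-project id, the RUN_EMULATOR_DEMO flag, and the document contents are assumptions of this example, and firebase-admin must be installed) can write to the emulated Firestore so the trigger fires:

```javascript
// Point the Admin SDK at the emulated Firestore from the log above.
// This must be set before firebase-admin is initialized.
function emulatorEnv(host, port) {
  return { FIRESTORE_EMULATOR_HOST: host + ':' + port };
}
Object.assign(process.env, emulatorEnv('localhost', 8080));

// firebase-admin is assumed to be installed; the require is guarded so the
// sketch still loads where the package is absent.
let admin = null;
try { admin = require('firebase-admin'); } catch (e) {}

// RUN_EMULATOR_DEMO is a hypothetical opt-in flag so the write only runs
// when the emulator is actually up.
if (admin && process.env.RUN_EMULATOR_DEMO) {
  admin.initializeApp({ projectId: 'demo-project' }); // hypothetical project id
  // Creating a document under test/ should fire notifyNewMessage.
  admin.firestore().collection('test').doc('doc1')
    .set({ text: 'hello emulator' })
    .then(() => console.log('document written to emulated Firestore'))
    .catch(err => console.error(err));
}
```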
I'm writing a Lambda (in Node.js 6.10) to update a SageMaker endpoint. To do so I have to create a new HyperParameterTuningJob (and then describe it).
I have successfully called all functions of the SageMaker service from the SDK (like listModels, createTrainingJob, ...) (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SageMaker.html) except some of them.
All the functions related to HyperParameterTuningJob
(createHyperParameterTuningJob, describeHyperParameterTuningJob, listHyperParameterTuningJobs and stopHyperParameterTuningJob)
are not recognized by the SDK in the Lambda.
I have attached the 'AmazonSageMakerFullAccess' policy to the IAM role used (where all these functions are allowed), so the error can't come from an authorization problem.
I have already created a HyperParameterTuningJob (through the AWS console) called 'myTuningJob', and I get an error every time I use the function describeHyperParameterTuningJob.
Here is my Lambda code:
const AWS = require('aws-sdk');
const sagemaker = new AWS.SageMaker({region: 'eu-west-1', apiVersion: '2017-07-24'});
var role = 'arn:aws:iam::xxxxxxxxxxxx:role/service-role/AmazonSageMaker-ExecutionRole-xxxxxxxxxxxxxxx';

exports.handler = (event, context, callback) => {
  var params = {
    HyperParameterTuningJobName: 'myTuningJob'
  };
  sagemaker.describeHyperParameterTuningJob(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log(data);
  });
};
When I test this code in AWS Lambda, it returns this result in the console:
Function Logs:
START RequestId: 6e79aaa4-9a18-11e8-8dcd-d58423b413c1 Version: $LATEST
2018-08-07T08:03:56.336Z 6e79aaa4-9a18-11e8-8dcd-d58423b413c1 TypeError: sagemaker.describeHyperParameterTuningJob is not a function
at exports.handler (/var/task/index.js:10:15)
END RequestId: 6e79aaa4-9a18-11e8-8dcd-d58423b413c1
REPORT RequestId: 6e79aaa4-9a18-11e8-8dcd-d58423b413c1 Duration: 50.00 ms
Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 32 MB
RequestId: 6e79aaa4-9a18-11e8-8dcd-d58423b413c1 Process exited before completing request
When I call all other functions of the SageMaker service from the SDK, they run correctly, without any error.
I can't find any explanation in the documentation of why these HyperParameterTuningJob-related functions are not recognized as functions in the SDK.
Does anyone have any idea why it doesn't work, or any solution to call these functions?
In AWS Lambda, only SDK APIs that have a stable release are available.
The SDK support for the SageMaker service is not stable yet, so the functions related to HyperParameterTuningJob are not in the version of the SDK bundled with AWS Lambda.
To use these functions, you need to install the latest version of the SDK locally on your machine (with npm install aws-sdk).
Then zip the node_modules folder together with your script (called index.js), and upload this zip into AWS Lambda.
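A quick way to confirm which SDK build is actually running (a diagnostic sketch; `hasSdkMethod` is a hypothetical helper) is to log the SDK version and probe the client for the missing method:

```javascript
// Probe whether a client object exposes a given API method; older bundled
// aws-sdk builds simply lack the HyperParameterTuningJob operations.
function hasSdkMethod(client, methodName) {
  return typeof client[methodName] === 'function';
}

// aws-sdk may not be installed where this runs, so the require is guarded.
let AWS = null;
try { AWS = require('aws-sdk'); } catch (e) {}

if (AWS) {
  const sagemaker = new AWS.SageMaker({ region: 'eu-west-1' });
  console.log('aws-sdk version:', AWS.VERSION);
  console.log('describeHyperParameterTuningJob available:',
    hasSdkMethod(sagemaker, 'describeHyperParameterTuningJob'));
}
```

If the probe prints false, the bundled SDK predates the tuning-job APIs and the zipped node_modules approach above is the fix.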