Google Cloud Functions Credentials for Local Development - google-cloud-platform

I have a google cloud function. Within this function, I want to write files to GCS (google cloud storage), then get a signed URL of the file that is written to GCS and send that URL to the caller.
For local development, I run the functions locally using the functions-framework command:
functions-framework --source=.build/ --target=http-function --port 8082
When I want to write to GCS or get the signed URL, the cloud functions framework just tries to get the credentials from the signed-in gcloud CLI user. However, I want to point it to read the credentials from a service account. For all other gcloud development purposes, we have put the service account information in a local creds.json file and point the gcloud to read from that file.
Is there any way I can achieve this for functions? Meaning that when I start the functions locally (using functions-framework), I point it to the creds.json file to read the credentials from there?

All of Google's SDKs, e.g. the one for GCS, use Application Default Credentials, which you should rely on instead of explicitly pointing your code at a key file. If that holds for functions-framework as well, then exporting the GOOGLE_APPLICATION_CREDENTIALS environment variable should work.
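For example, assuming the key file is the same creds.json you already use elsewhere (adjust the path as needed):
# point ADC at the service account key, then start the framework as usual
export GOOGLE_APPLICATION_CREDENTIALS="$(pwd)/creds.json"
functions-framework --source=.build/ --target=http-function --port 8082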
In that case, gcloud auth application-default login is the better recommendation, especially for testing the signed URL: with that local credential, just as with the Cloud Functions credential obtained through the metadata server, no private key is present, so the signed URL has to be generated in a specific way (you provide a token and the service account that should sign the URL).
Running gcloud auth application-default login creates Application Default Credentials, which have all the powers of your user account and are persisted in the file {HOME}/.config/gcloud/application_default_credentials.json.
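If, instead, you do want to point the local function directly at the service account key in creds.json (as in the original question), here is a minimal sketch of the write-then-sign flow, assuming @google-cloud/storage and placeholder bucket/object names; because creds.json contains a private key, V4 signing works locally without any extra setup:
const {Storage} = require('@google-cloud/storage');

// Point the client at the service account key explicitly
// (same effect as exporting GOOGLE_APPLICATION_CREDENTIALS).
const storage = new Storage({keyFilename: './creds.json'});

async function writeAndSign() {
  // 'my-bucket' and 'report.txt' are placeholder names.
  const file = storage.bucket('my-bucket').file('report.txt');
  await file.save('hello from local dev');
  const [url] = await file.getSignedUrl({
    version: 'v4',
    action: 'read',
    expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
  });
  return url;
}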

Cloud Function Local Development Authentication
This is what I'm doing for local development of a Google Cloud Function, written in Node.js, that is triggered by a Pub/Sub event. The function reads a file from Google Cloud Storage and uses the Functions Framework for Node.js.
TL;DR
# shell A
gcloud auth application-default login
npm start
# shell B: send a mock pub/sub event message
curl -d "@mockPubSub.json" \
-X POST \
-H "Content-Type: application/json" \
http://localhost:8080
Greater Details
Cloud Function with Functions Framework
Docs: Functions Framework Nodejs
package.json
note the --target and --signature-type
{
...
"scripts": {
"start": "npx functions-framework --target=helloPubSub --signature-type=http"
},
"dependencies": {
"#google-cloud/debug-agent": "^7.0.0",
"#google-cloud/storage": "^6.0.0"
},
"devDependencies": {
"#google-cloud/functions-framework": "^3.1.2"
}
...
}
sample Node.js cloud function that downloads a file into memory
/* modified from the sample
index.js
*/
const {Storage} = require('@google-cloud/storage');
function log(message, severity = 'DEBUG', payload) {
// Structured logging
// https://cloud.google.com/functions/docs/monitoring/logging#writing_structured_logs
if (!!payload) {
// If payload is an Error, get the stack trace.
if (payload instanceof Error && !!payload.stack) {
if (!!message ) {
message = message + '\n' + payload.stack;
} else {
message = payload.stack;
}
}
}
const logEntry = {
message: message,
severity: severity,
payload : payload
};
console.log(JSON.stringify(logEntry));
}
function getConfigFile(payload){
console.log("Get Config File from GCS")
const bucketName = 'some-bucket-in-a-project';
const fileName = 'config.json';
// Creates a client
const storage = new Storage();
async function downloadIntoMemory() {
// Downloads the file into a buffer in memory.
const contents = await storage.bucket(bucketName).file(fileName).download();
console.log(
`Contents of gs://${bucketName}/${fileName} are ${contents.toString()}.`
);
}
downloadIntoMemory().catch(console.error);
}
exports.helloPubSub = async (pubSubEvent, context) => {
/*
Read payload from the event and log the exception in App project if the payload cannot be parsed
*/
let payload;
try {
payload = Buffer.from(pubSubEvent.body.message.data, 'base64').toString();
const pubSubEventObj = JSON.parse(payload);
console.log("name: ", pubSubEventObj.name);
getConfigFile(pubSubEventObj);
} catch (err) {
log('failed to process payload: ' + payload + '\n', 'ERROR', err);
}
};
Mock Message for Pub/Sub Event
blog reference, but I'm not using the emulator
myJson.json
{"widget": {
"debug": "on",
"window": {
"title": "Sample Konfabulator Widget",
"name": "main_window",
"width": 500,
"height": 500
},
"image": {
"src": "Images/Sun.png",
"name": "sun1",
"hOffset": 250,
"vOffset": 250,
"alignment": "center"
},
"text": {
"data": "Click Here",
"size": 36,
"style": "bold",
"name": "text1",
"hOffset": 250,
"vOffset": 100,
"alignment": "center",
"onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
}
}}
encode for Pub/sub message ( likely there's a better way )
cat myJson.json | grep -v % | base64
take that output and put it into value for the data key:
mockPubSub.json
{
"message": {
"attributes": {
"greeting": "Hello from the Cloud Pub/Sub Emulator!"
},
"data": "< put the output of the base64 from above here >",
"messageId": "136969346945"
},
"subscription": "projects/myproject/subscriptions/mysubscription"
}
Follow the steps from the TL;DR above.
Disclaimers
gcloud auth application-default login uses the permissions of the user who runs the command. So remember: in production, the service account the Cloud Function runs as will need read access to the storage bucket (an example grant command follows these disclaimers).
while scrubbing this (i.e. renaming bits) and copying it over, I may have messed something up. Sorry if that is true.
this is all contrived; if you are curious, my design is to take a message from Cloud Scheduler that includes relevant details about what to read from the config.
this article explains a way to hot-reload the cloud function
It remains unclear to me why I need --signature-type=http to mock the message, but for now, that is what I am using.
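Granting that read access to the function's runtime service account might look roughly like this (the service account email is a placeholder; the bucket name matches the sample code above):
gsutil iam ch serviceAccount:my-function-sa@my-project.iam.gserviceaccount.com:objectViewer gs://some-bucket-in-a-project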

Related

Google Cloud Functions - How to correctly setup the default credentials?

I'm using Google Cloud Functions to listen to a topic in Pub/Sub and send data to a collection in Firestore. The problem is: whenever I test the function (using the test tab that is provided in GCP) and check the logs from that function, it always throws this error:
Error: Could not load the default credentials.
Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
That link didn't help, by the way, as they say the Application Default Credentials are found automatically, but it's not the case here.
This is how I'm using Firestore, in index.js:
const admin = require('firebase-admin')
admin.initializeApp()
var db = admin.firestore()
// ...
db.collection('...').add(doc)
In my package.json, these are the dependencies (I'm using BigQuery too, which raises the same error):
{
"name": "[function name]",
"version": "0.0.1",
"dependencies": {
"#google-cloud/pubsub": "^0.18.0",
"#google-cloud/bigquery": "^4.3.0",
"firebase-admin": "^8.6.1"
}
}
I've already tried:
Creating a new service account and using it in the function setting;
Using the command gcloud auth application-default login in Cloud Shell;
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS via Cloud Shell to a json file (I don't even know if that makes sense);
But nothing seems to work :( How can I configure this default credential so that I don't have to ever configure it again? Like, a permanent setting for the entire project so all my functions can have access to Firestore, BigQuery, IoT Core, etc. with no problems.
This is the code that I am using:
const firebase = require('firebase');
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const serviceAccount = require("./key.json");
const config = {
credential: admin.credential.cert(serviceAccount),
apiKey: "",
authDomain: "project.firebaseapp.com",
databaseURL: "https://project.firebaseio.com",
projectId: "project",
storageBucket: "project.appspot.com",
messagingSenderId: "",
appId: "",
measurementId: ""
};
firebase.initializeApp(config);
const db = admin.firestore();

AWS Kinesis: user address in event encoded / encrypted

In my React Native mobile app, I use AWS Amplify to send info about user actions (screen views, button taps, swipes, etc.) by means of Analytics.record(...) to AWS Pinpoint which in turn feeds them into a AWS Kinesis Data Stream. I have created an AWS Lambda Python 3 function that listens to events in this data stream.
Setup has been a breeze, thanks to outstanding documentation and everything works fine - except for one thing:
When a user logs in, I update the Pinpoint Endpoint with the user ID, email address and some more attributes using Analytics.updateEndpoint(...). In the lambda function, I base64-decode the event payload as shown in this sample code and a sample event payload looks roughly like this:
{
"event_type": "_session.start",
"event_timestamp": 1572345161558,
"application": {
"app_id": "<some app ID>",
"cognito_identity_pool_id": "us-east-1:<some pool ID>",
"sdk": {},
"version_name": "<the app version I put in using updateEndpoint(...)>"
... <snipped for brevity> ...
},
"attributes": {},
"endpoint": {
"ChannelType": "APNS",
"Address": "=ABAQRuUDJD ... <some longish binary value> j0eL+69lsY=",
"EndpointStatus": "ACTIVE",
"Location": {
"Country": "US"
},
"Demographic": {
"Make": "iPhone",
"Model": "iPhone X",
"ModelVersion": "13.1.3",
...
"Platform": "ios"
},
"User": {
"UserId": "us-east-1:<Cognito ID of the user that logged in>",
"UserAttributes": {}
},
... <snipped for brevity> ...
},
"awsAccountId": "<my account ID>"
}
The user email address in the "Address" field above is not contained in the Kinesis Data Stream event as plain text, but encoded (or encrypted ?) somehow.
My question: Can anybody tell me how it is encoded / encrypted ? And, ideally, how to get the plain text address ?
I tried to base64-decode it or decrypt it using my default AWS KMS key (and a combination thereof), but no luck.
Alternatively, I could use the (plain text) user ID to look up the email address in the AWS Cognito user pool used to manage authentication & authorization, but getting it from the event directly would obviously be a lot simpler...
I have searched the web up and down, asked in the AWS-Amplify channel on gitter, but that Address encoding / encryption just does not seem to be documented anywhere...

How to return an entire Datastore table by name using Node.js on a Google Cloud Function

I want to retrieve a table (with all rows) by name, making an HTTP request with something like {"table": user} in the body.
I tried this code without success:
'use strict';
const {Datastore} = require('@google-cloud/datastore');
// Instantiates a client
const datastore = new Datastore();
exports.getUsers = (req, res) => {
//Get List
const query = this.datastore.createQuery('users');
this.datastore.runQuery(query).then(results => {
const customers = results[0];
console.log('User:');
customers.forEach(customer => {
const cusKey = customer[this.datastore.KEY];
console.log(cusKey.id);
console.log(customer);
});
})
.catch(err => { console.error('ERROR:', err); });
}
Google Datastore is a NoSQL database that works with entities rather than tables. What you want is to load all the "records", which are key identifiers in Datastore, together with all their "properties" (the "columns" you see in the Console), filtered by the "Kind" name, which is the "table" you are referring to.
Here is a solution for retrieving all the key identifiers and their properties from Datastore, using an HTTP-triggered Cloud Function running in the Node.js 8 environment.
Create a Google Cloud Function and choose the trigger to HTTP.
Choose the runtime to be Node.js 8
In index.js replace all the code with this GitHub code.
In package.json add:
{
"name": "sample-http",
"version": "0.0.1",
"dependencies": {
"#google-cloud/datastore": "^3.1.2"
}
}
Under Function to execute add loadDataFromDatastore, since this is the name of the function that we want to execute.
NOTE: This will log all the loaded records into the Stackdriver logs of the Cloud Function. The response for each record is JSON, therefore you will have to convert the response to a JSON object to get the data you want. Get the idea and modify the code accordingly.
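The linked GitHub code isn't reproduced here, but an HTTP function of that shape looks roughly like the following sketch (it returns the entities in the HTTP response rather than only logging them; the fallback kind name 'users' and the property handling are my assumptions, not the exact contents of the linked file):
const {Datastore} = require('@google-cloud/datastore');

// Instantiates a client
const datastore = new Datastore();

// HTTP-triggered function: loads every entity of a kind and returns it as JSON.
exports.loadDataFromDatastore = async (req, res) => {
  try {
    // The kind ("table") name comes from the request body, e.g. {"table": "users"}.
    const kind = req.body.table || 'users';
    const query = datastore.createQuery(kind);
    const [entities] = await datastore.runQuery(query);

    // Attach each entity's key id/name so the caller also gets the "row" identifier.
    const rows = entities.map(entity =>
      Object.assign({ key: entity[datastore.KEY].id || entity[datastore.KEY].name }, entity)
    );
    res.status(200).json(rows);
  } catch (err) {
    console.error('ERROR:', err);
    res.status(500).send(err.message);
  }
};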

AWS Pinpoint/Ionic - "Resource not found" error when trying to send push through CLI

I am new at programming with AWS services, so some fundamental things are pretty hard for me. Recently, I was asked to develop an app that used Amazon Pinpoint to send push notifications, as a test for considering future implementations.
As you can see in another question I posted in here (Amazon Pinpoint and Ionic - Push notifications not working when app is in background), I was having trouble trying to send push notifications to users when my app is running in the background. The app was developed using Ionic by following these steps.
When I was almost giving up, I decided to try sending the pushes directly through Firebase, and it finally worked. Some research took me to this question, in which another user described the problem as only happening in the AWS Console, so the solution would be to use the CLI. After searching a little, I found this tutorial about how to send Pinpoint messages to users using the CLI, which seemed to be what I wanted. Combining it with this documentation about the phonegap plugin, I was able to generate a JSON I thought could be a solution:
{
"ApplicationId":"io.ionic.starter",
"MessageRequest":{
"Addresses": {
"": {
"BodyOverride": "",
"ChannelType": "GCM",
"Context": {
"": ""
},
"RawContent": "",
"Substitutions": {},
"TitleOverride": ""
}
},
"Context": {
"": ""
},
"Endpoints": {"us-east-1": {
"BodyOverride": "",
"Context": {},
"RawContent": "",
"Substitutions": {},
"TitleOverride": ""
}
},
"MessageConfiguration": {
"GCMMessage": {
"Action": "OPEN_APP",
"Body": "string",
"CollapseKey": "",
"Data": {
"": ""
},
"IconReference": "",
"ImageIconUrl": "",
"ImageUrl": "",
"Priority": "High",
"RawContent": "{\"data\":{\"title\":\"sometitle\",\"body\":\"somebody\",\"url\":\"insertyourlinkhere.com\"}}",
"RestrictedPackageName": "",
"SilentPush": false,
"SmallImageIconUrl": "",
"Sound": "string",
"Substitutions": {},
"TimeToLive": 123,
"Title": "",
"Url": ""
}
}
}
}
But when I executed it in cmd with aws pinpoint send-messages --color on --region us-east-1 --cli-input-json file://test.json, I got the response An error occurred (NotFoundException) when calling the SendMessages operation: Resource not found.
I believe I didn't write the JSON file correctly, since it's my first time doing this. So please, if any of you know what I am doing wrong, no matter which step I misunderstood, I would appreciate the help!
"Endpoints" field in the Message request deals with the endpoint id (the identifier associated with an end user device while registering to pinpoint and not the region.)
In case if you haven't registered any endpoints with Pinpoint, you can use the "Addresses" field. After registering the GCM Channel in Amazon Pinpoint, you can get the GCM device token from your device and specify it here.
Here is a sample for sending direct messages using Amazon Pinpoint Note: The example deals with sending SMS message. You should have registered a SMS channel first and created an endpoint with the endpoint id as "test-endpoint1". Otherwise, you can use the "Addresses" field instead of "Endpoints" field.
aws pinpoint send-messages --application-id $APP_ID --message-request '{"MessageConfiguration": {"SMSMessage":{"Body":"hi hello"}},"Endpoints": {"test-endpoint1": {}}}
Also note: the ApplicationId is generated by Pinpoint. When you visit the Pinpoint console and choose your application, the URL will be of the format
https://console.aws.amazon.com/pinpoint/home/?region=us-east-1#/apps/someverybigstringhere/
Here "someverybigstringhere" is the ApplicationId, not the name you gave your project.

"We can not access the URL currently."

When I call the Google API, it returns "We can not access the URL currently." But the resource exists and can be accessed.
https://vision.googleapis.com/v1/images:annotate
request content:
{
"requests": [
{
"image": {
"source": {
"imageUri": "http://yun.jybdfx.com/static/img/homebg.jpg"
}
},
"features": [
{
"type": "TEXT_DETECTION"
}
],
"imageContext": {
"languageHints": [
"zh"
]
}
}
]
}
response content:
{
"responses": [
{
"error": {
"code": 4,
"message": "We can not access the URL currently. Please download the content and pass it in."
}
}
]
}
As of August, 2017, this is a known issue with the Google Cloud Vision API (source). It appears to repro for some users but not deterministically, and I've run into it myself with many images.
Current workarounds include either uploading your content to Google Cloud Storage and passing its gs:// uri (note it does not have to be publicly readable on GCS) or downloading the image locally and passing it to the vision API in base64 format.
Here's an example in Node.js of the latter approach:
// Imports implied by the snippet: the Vision client and a request helper
const vision = require('@google-cloud/vision')
const client = new vision.ImageAnnotatorClient()
const request = require('request-promise-native').defaults({
encoding: 'base64'
})
// `image` is the remote image URL; download it as base64 and pass the bytes directly
const data = await request(image)
const response = await client.annotateImage({
image: {
content: data
},
features: [
{ type: vision.v1.types.Feature.Type.LABEL_DETECTION },
{ type: vision.v1.types.Feature.Type.CROP_HINTS }
]
})
I faced the same issue when I was trying to call the API using the Firebase Storage download URL (although it worked initially).
After looking around, I found the example below in the API docs for Node.js.
NodeJs example
// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');
// Creates a client
const client = new vision.ImageAnnotatorClient();
/**
* TODO(developer): Uncomment the following lines before running the sample.
*/
// const bucketName = 'Bucket where the file resides, e.g. my-bucket';
// const fileName = 'Path to file within bucket, e.g. path/to/image.png';
// Performs text detection on the gcs file
const [result] = await client.textDetection(`gs://${bucketName}/${fileName}`);
const detections = result.textAnnotations;
console.log('Text:');
detections.forEach(text => console.log(text));
For me, only uploading the image to Google Cloud Storage and passing its gs:// URI in the request worked.
In my case, I tried retrieving an image used by Cloudinary our main image hosting provider.
When I accessed the same image but hosted on our secondary Rackspace powered CDN, Google OCR was able to access the image.
Not sure why Cloudinary didn't work when I was able to access the image via my web browser, but that was my little workaround for the situation.
I believe the error is caused by the Cloud Vision API refusing to download images on a domain whose robots.txt file blocks Googlebot or Googlebot-Image.
The workaround that others mentioned is in fact the proper solution: download the images yourself and either pass them in the image.content field or upload them to Google Cloud Storage and use the image.source.gcsImageUri field.
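For the REST request shown in the question, that means replacing image.source.imageUri with the base64 content field, roughly like this (the base64 string is a placeholder):
{
"requests": [
{
"image": {
"content": "<base64-encoded image bytes>"
},
"features": [
{
"type": "TEXT_DETECTION"
}
]
}
]
}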
For me, the issue was resolved by requesting the URI (e.g. gs://bucketname/filename.jpg) instead of the public URL or authenticated URL.
const vision = require('@google-cloud/vision');
function uploadToGoogleCloudlist (req, res, next) {
const originalfilename = req.file.originalname;
const bucketname = "yourbucketname";
const imageURI = "gs://"+bucketname+"/"+originalfilename;
const client = new vision.ImageAnnotatorClient(
{
projectId: 'yourprojectid',
keyFilename: './router/fb/yourprojectid-firebase.json'
}
);
var visionjson;
async function getimageannotation() {
const [result] = await client.imageProperties(imageURI);
visionjson = result;
console.log ("vision result: "+JSON.stringify(visionjson));
return visionjson;
}
getimageannotation().then( function (result){
var datatoup = {
url: imageURI || ' ',
filename: originalfilename || ' ',
available: true,
vision: result,
};
})
.catch(err => {
console.error('ERROR CODE:', err);
});
next();
}
I faced the same issue several days ago.
In my case the problem was caused by using queues and firing API requests at the same time from the same IP. After changing the number of parallel processes from 8 to 1, the share of these errors dropped from ~30% to less than 1%.
Maybe it will help somebody. I think there is some internal limit on Google's side for loading remote images (because, as people have reported, using Google Cloud Storage also solves the problem).
My hypothesis is that an overall (short) timeout exists on the Google API side which limits the number of files that can actually be retrieved.
Sending 16 images for batch labelling is possible, but only 5 or 6 will be labelled because the origin webserver hosting the images was unable to return all 16 files within <Google-Timeout> milliseconds.
In my case, the image URI that I was specifying in the request pointed at a large image, roughly 4000px x 6000px. When I changed it to a smaller version of the image, the request succeeded.
The very same request works for me. It is possible that the image host was temporarily down and/or had issues on their side. If you retry the request, it will most likely work for you.