I am trying to get a Cloud Function to create a Cloud Task that will invoke a Cloud Function. Easy.
The flow and use case are very close to the official tutorial here.
I also looked at this article by Doug Stevenson and in particular its security section.
No luck; I am consistently getting a 16 (UNAUTHENTICATED) error in Cloud Tasks.
If I can trust what I see in the console, it seems that Cloud Tasks is not attaching the OIDC token to the request.
Yet, in my code I do have the oidcToken object:
import { v2beta3, protos } from "@google-cloud/tasks";
import {
  PROJECT_ID,
  EMAIL_QUEUE,
  LOCATION,
  EMAIL_SERVICE_ACCOUNT,
  EMAIL_HANDLER,
} from "./../config/cloudFunctions";

export const createHttpTaskWithToken = async function (
  payload: {
    to_email: string;
    templateId: string;
    uid: string;
    dynamicData?: Record<string, any>;
  },
  {
    project = PROJECT_ID,
    queue = EMAIL_QUEUE,
    location = LOCATION,
    url = EMAIL_HANDLER,
    email = EMAIL_SERVICE_ACCOUNT,
  } = {}
) {
  const client = new v2beta3.CloudTasksClient();
  const parent = client.queuePath(project, location, queue);

  // Convert message to buffer.
  const convertedPayload = JSON.stringify(payload);
  const body = Buffer.from(convertedPayload).toString("base64");

  const task = {
    httpRequest: {
      httpMethod: protos.google.cloud.tasks.v2beta3.HttpMethod.POST,
      url,
      oidcToken: {
        serviceAccountEmail: email,
        audience: new URL(url).origin,
      },
      headers: {
        "Content-Type": "application/json",
      },
      body,
    },
  };

  try {
    // Send create task request.
    const request = { parent: parent, task: task };
    const [response] = await client.createTask(request);
    console.log(`Created task ${response.name}`);
    return response.name;
  } catch (error) {
    if (error instanceof Error) console.error(error.message);
    return;
  }
};
When logging the task object from the code above to Cloud Logging, I can see that the service account is the one I created for this purpose and that the Cloud Tasks are created successfully.
The IAM roles and the function that the Cloud Task needs to invoke both look as expected (screenshots omitted).
Everything seems to be there, in theory.
Any advice as to what I would be missing?
Thanks,
Your audience is incorrect. It must end with the function name; here you only have the region and the project: https://<region>-<projectID>.cloudfunctions.net/. Use the full Cloud Functions URL.
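In other words, keeping everything else from the code in the question, the task definition would become something like this (a sketch; url must be the full trigger URL, e.g. https://<region>-<projectID>.cloudfunctions.net/<functionName>):

const task = {
  httpRequest: {
    httpMethod: protos.google.cloud.tasks.v2beta3.HttpMethod.POST,
    url,
    oidcToken: {
      serviceAccountEmail: email,
      // The full function URL, not new URL(url).origin, which drops
      // the /<functionName> path that the audience check expects.
      audience: url,
    },
    headers: { "Content-Type": "application/json" },
    body,
  },
};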
I got this error when the task attempts processing.
This is my Node.js code:
const { CloudTasksClient, protos } = require("@google-cloud/tasks");

const client = new CloudTasksClient();

async function quickstart(message: any) {
  // TODO(developer): Uncomment these lines and replace with your values.
  const project = ""; // project ID
  const queue = ""; // queue name
  const location = ""; // region
  const payload = JSON.stringify({
    id: message.id,
    data: message.data,
    attributes: message.attributes,
  });
  const inSeconds = 180;

  // Construct the fully qualified queue name.
  const parent = client.queuePath(project, location, queue);

  const task = {
    appEngineHttpRequest: {
      headers: { "Content-type": "application/json" },
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      relativeUri: "/api/download",
      body: "",
    },
    scheduleTime: {},
  };

  if (payload) {
    task.appEngineHttpRequest.body = Buffer.from(payload).toString("base64");
  }
  if (inSeconds) {
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }

  const request = {
    parent: parent,
    task: task,
  };

  console.log("Sending task:");
  console.log(task);

  // Send create task request.
  const [response] = await client.createTask(request);
  console.log(`Created task ${response.name}`);
  return true;
}
The task is created without issue. However, it didn't trigger my Cloud Function, and I got a 404 or an unhandled exception in my cloud logs. I have no idea what's going wrong.
I also tested with the gcloud CLI without the issue; the gcloud CLI is able to trigger my Cloud Function with the provided URL.
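For reference, and assuming the goal is to invoke a Cloud Function rather than an App Engine handler: an appEngineHttpRequest can only dispatch to App Engine services via a relativeUri, so a task aimed at a Cloud Functions URL would instead use an httpRequest with the full URL (and, for an authenticated function, an OIDC token as in the question above). A sketch with placeholder values:

const task = {
  httpRequest: {
    httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
    // Full Cloud Functions trigger URL (placeholder)
    url: "https://<region>-<projectID>.cloudfunctions.net/download",
    headers: { "Content-type": "application/json" },
    body: Buffer.from(payload).toString("base64"),
    oidcToken: {
      serviceAccountEmail: "<sa-name>@<projectID>.iam.gserviceaccount.com",
      audience: "https://<region>-<projectID>.cloudfunctions.net/download",
    },
  },
};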
I am scouring the documentation, and it only provides pseudo-code of the credentials for v3 (e.g. const client = new S3Client(clientParams)).
How do I initialize an S3Client with the bucket and credentials to perform a getSignedUrl request? Any resources pointing me in the right direction would be most helpful. I've even searched YouTube, SO, etc and I can't find any specific info on v3. Even the documentation and examples doesn't provide the actual code to use credentials. Thanks!
As an aside, do I have to include the fake folder structure in the filename, or can I just use the actual filename? For example: bucket/folder1/folder2/uniqueFilename.zip or uniqueFilename.zip
Here's the code I have so far. (Keep in mind I was returning the wasabiObjKey to ensure I was getting the correct file name. I am. It's the client, GetObjectCommand, and getSignedUrl that I'm having issues with.)
exports.getPresignedUrl = functions.https.onCall(async (data, ctx) => {
  const wasabiObjKey = `${data.bucket_prefix ? `${data.bucket_prefix}/` : ''}${data.uid.replace(/-/g, '_').toLowerCase()}${data.variation ? `_${data.variation.replace(/\./g, '').toLowerCase()}` : ''}.zip`
  const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
  const s3 = new S3Client({
    bucketEndpoint: functions.config().s3_bucket.name,
    region: functions.config().s3_bucket.region,
    credentials: {
      secretAccessKey: functions.config().s3.secret,
      accessKeyId: functions.config().s3.access_key
    }
  })
  const command = new GetObjectCommand({
    Bucket: functions.config().s3_bucket.name,
    Key: wasabiObjKey,
  })
  const { getSignedUrl } = require("@aws-sdk/s3-request-presigner")
  const url = getSignedUrl(s3, command, { expiresIn: 60 })
  return wasabiObjKey
})
There is a credential chain that provides credentials to your API calls from the SDK:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html
Loaded from AWS Identity and Access Management (IAM) roles for Amazon EC2
Loaded from the shared credentials file (~/.aws/credentials)
Loaded from environment variables
Loaded from a JSON file on disk
Other credential-provider classes provided by the JavaScript SDK
You can embed the credentials inside your source code, but it's not the preferred way.
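By contrast, a client that relies on environment variables (one link in that chain) needs no inline credentials at all. A minimal sketch, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in the environment:

const { S3Client } = require("@aws-sdk/client-s3");

// No credentials property: the default provider chain picks them up
// from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment.
const client = new S3Client({ region: "ap-southeast-1" });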
new S3Client(configuration: S3ClientConfig): S3Client
where S3ClientConfig contains a credentials property:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

let client = new S3Client({
  region: 'ap-southeast-1',
  credentials: {
    accessKeyId: '',
    secretAccessKey: ''
  }
});

(async () => {
  const response = await client.send(new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" }));
  console.log(response);
})();
Sample response:
'$metadata': {
  httpStatusCode: 200,
  requestId: undefined,
  extendedRequestId: '7kwrFkEp3lEnLU+OtxjrgdmS6gQmvPdbnqqR7I8P/rdFrUPBkdKYPYykWivuHPXCF1IHgjCIbe8=',
  cfId: undefined,
  attempts: 1,
  totalRetryDelay: 0
},
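Since the original question was specifically about getSignedUrl, here is a minimal sketch of the presigning step as well (bucket and key are placeholders; note that getSignedUrl returns a Promise and must be awaited, which the question's code was missing):

const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const client = new S3Client({ region: "ap-southeast-1" }); // credentials via the chain above

(async () => {
  const command = new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" });
  const url = await getSignedUrl(client, command, { expiresIn: 60 }); // URL valid for 60 seconds
  console.log(url);
})();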
Here's a simple approach I use (in Deno) for testing (in case you don't want to go the signedUrl approach and just let the SDK do the heavy lifting for you):
import { config as env } from 'https://deno.land/x/dotenv/mod.ts' // https://github.com/pietvanzoen/deno-dotenv
import { S3Client, ListObjectsV2Command } from 'https://cdn.skypack.dev/@aws-sdk/client-s3' // https://github.com/aws/aws-sdk-js-v3

const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = env()

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const credentials = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/s3clientconfig.html
const config = {
  region: 'ap-southeast-1',
  credentials,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3client.html
const client = new S3Client(config)

export async function list() {
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/listobjectsv2commandinput.html
  const input = {
    Bucket: 'BucketNameHere'
  }
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/command.html
  const cmd = new ListObjectsV2Command(input)
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/listobjectsv2command.html
  return await client.send(cmd)
}
Following this documentation, when requesting a batchPredict I run into this error via the API:
{
  "error": {
    "code": 13,
    "message": "internal"
  }
}
Additionally, here's a screenshot of the error I see when I try to use the "Test & Use" tab. Neither message is descriptive, so I'm not sure where the error lies.
In the request, I include the path to my CSV file in the Google Storage, which links to a video in the same bucket. Here's the contents of the CSV:
gs://XXXXXXXXXXXX/movie1.mov,0,inf
gs://XXXXXXXXXXXX/movie2.mov,0,inf
I also include the path to a /Results folder (in the same bucket) to save the predictions.
Code making the call:
const { PredictionServiceClient } = require('@google-cloud/automl').v1beta1;

const client = new PredictionServiceClient();

async function batchPredict() {
  const request = {
    name: client.modelPath('project-id-xxxxxx', 'us-central1', 'VOTxxxxxxxxxx'),
    inputConfig: {
      gcsSource: {
        inputUris: ['gs://XXXXXXXXXXXX/apitest.csv'],
      },
    },
    outputConfig: {
      gcsDestination: {
        outputUriPrefix: 'gs://XXXXXXXXXXXX/results/',
      },
    },
  };

  const [operation] = await client.batchPredict(request);
  // Wait for the batch prediction to complete.
  const [response] = await operation.promise();
}
Please let me know if I need to provide any more detail.
The possible root cause is one of these two:
There is an issue somewhere in your code. So, if your code is not the same as below, I suggest that you try it out (changing the appropriate variables of course).
There is something wrong with your model, which is the most probable root cause (as per the error message itself).
So, if it is not your code, you should create a private issue report on issue-tracker explaining your issue with as many details as possible, as well as your use case and impact.
As it is private, only Googlers and you will have access to it so feel free to share your project and model IDs.
Here is what I did to try to reproduce your issue (be sure to follow the "Before you begin" guide):
I have trained a model on gs://YOUR_BUCKET/TRAINING.csv
TRAIN,gs://automl-video-demo-data/traffic_videos/traffic_videos_train.csv
TEST,gs://automl-video-demo-data/traffic_videos/traffic_videos_test.csv
Predicted on a couple of videos on gs://YOUR_BUCKET/VIDEOS_TO_ANNOTATE.csv (inputUri):
gs://automl-video-demo-data/traffic_videos/highway_078.mp4,0,inf
gs://automl-video-demo-data/traffic_videos/highway_079.mp4,10.00000,15.50000
using the Node.js predict example from the tutorial:
/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
const projectId = 'YOUR_PROJECT';
const location = 'us-central1';
const modelId = 'VOTXXXXXXXXXXXXXXXXXX';
const inputUri = 'gs://YOUR_BUCKET/VIDEOS_TO_ANNOTATE.csv';
const outputUri = 'gs://YOUR_BUCKET/outputs/';

// Imports the Google Cloud AutoML library
const { PredictionServiceClient } = require('@google-cloud/automl').v1beta1;

// Instantiates a client
const client = new PredictionServiceClient();

async function batchPredict() {
  // Construct request
  const request = {
    name: client.modelPath(projectId, location, modelId),
    inputConfig: {
      gcsSource: {
        inputUris: [inputUri],
      },
    },
    outputConfig: {
      gcsDestination: {
        outputUriPrefix: outputUri,
      },
    },
  };

  const [operation] = await client.batchPredict(request);
  console.log('Waiting for operation to complete...');

  // Wait for operation to complete.
  const [response] = await operation.promise();
  console.log(
    `Batch Prediction results saved to Cloud Storage bucket. ${response}`
  );
}

batchPredict();
Note that I have also tried the REST & CMD LINE predict example.
And in both cases, it worked well and I received a correct response:
Nodejs prediction's response:
Waiting for operation to complete...
Batch Prediction results saved to Cloud Storage bucket. [object Object]
REST & CMD LINE prediction's response:
{
  "name": "projects/XXXXXXXXXX/locations/us-central1/operations/VOTXXXXXXXXXXXXXXX",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2021-04-16T08:09:52.102270Z",
    "updateTime": "2021-04-16T08:09:52.102270Z",
    "batchPredictDetails": {
      "inputConfig": {
        "gcsSource": {
          "inputUris": [
            "gs://MY_BUCKET/VIDEOS_TO_ANNOTATE.csv"
          ]
        }
      }
    }
  }
}
Problem:
I am trying to get data from a text file stored in S3. I get it right in the intent handler using async/await, but I want to get the string into the localisation file, as I am trying to implement the solution in two languages.
I am getting an error saying the skill does not respond correctly.
This is file.js:
const AWS = require('aws-sdk');

//========================
// This step is not required if you are running your code inside Lambda or in
// a local environment that has AWS set up
//========================
const s3 = new AWS.S3();

async function getS3Object(bucket, objectKey) {
  try {
    const params = {
      Bucket: bucket, // e.g. 'my-bucket'
      Key: objectKey, // e.g. 'file.txt'
    };
    const data = await s3.getObject(params).promise();
    return data.Body.toString('utf-8');
  } catch (e) {
    throw new Error(`Could not retrieve file from S3: ${e.message}`);
  }
}

module.exports = getS3Object;
This is the localisation.js code:
const dataText = require('./file.js');

async let textTitle = await dataText().then(); // this does not work

module.exports = {
  en: {
    translation: {
      WELCOME_BACK_MSG: textTitle,
    }
  },
  it: {
    translation: {
      WELCOME_MSG: textTitle,
    }
  }
}
The problem is that in your localisation.js file you are trying to export something that is obtained via an asynchronous function call, but you cannot do that directly: module.exports is assigned and returned synchronously. Please see, for instance, this SO question and answer for in-depth background.
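For completeness, a minimal sketch of the generic workaround: export a promise and have every consumer await it (getS3Object and the bucket/key values are taken from your snippets above):

// localisation.js
const getS3Object = require('./file.js');

// Export a promise that resolves to the strings, rather than the strings themselves.
module.exports = (async () => {
  const textTitle = await getS3Object('my-bucket', 'file.txt');
  return {
    en: { translation: { WELCOME_BACK_MSG: textTitle } },
    it: { translation: { WELCOME_MSG: textTitle } },
  };
})();

// Consumers then do: const strings = await require('./localisation.js');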
As you are mentioning an Alexa skill, and given the name of the file, localisation.js, I assume you are trying something similar to the solution proposed in this GitHub repository.
Analyzing the content of the index.js file they provide, it seems the library is using i18next for localisation.
The library provides the concept of backend if you need to load your localisation information from an external resource.
You can implement a custom backend, although the library offers one that could fit your needs, i18next-http-backend.
As indicated in the documentation, you can configure the library to fetch your localization resources with this backend with something like the following:
import i18next from 'i18next';
import Backend from 'i18next-http-backend';

i18next
  .use(Backend)
  .init({
    backend: {
      // for all available options read the backend's repository readme file
      loadPath: '/locales/{{lng}}/{{ns}}.json'
    }
  });
Here in SO you can find a more complete example.
You need to provide a similar configuration to the localisation interceptor provided in the Alexa skill example project, perhaps something like:
import Alexa from 'ask-sdk-core';
import i18n from 'i18next';
import HttpApi from 'i18next-http-backend';

/**
 * This request interceptor will bind a translation function 't' to the handlerInput
 */
const LocalizationInterceptor = {
  process(handlerInput) {
    const localisationClient = i18n
      .use(HttpApi)
      .init({
        lng: Alexa.getLocale(handlerInput.requestEnvelope),
        // resources: languageStrings,
        backend: {
          loadPath: 'https://your-bucket.amazonaws.com/locales/{{lng}}/translations.json',
          crossDomain: true,
        },
        returnObjects: true
      });

    localisationClient.localise = function localise() {
      const args = arguments;
      const value = i18n.t(...args);
      if (Array.isArray(value)) {
        return value[Math.floor(Math.random() * value.length)];
      }
      return value;
    };

    handlerInput.t = function translate(...args) {
      return localisationClient.localise(...args);
    };
  }
};
Please be aware that, instead of a text file, you need to return a valid JSON file with the appropriate translations:
{
  "WELCOME_MSG": "Welcome!!",
  "WELCOME_BACK_MSG": "Welcome back!!"
}
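Once the interceptor has run, handlers can resolve strings through the bound function; a hypothetical handler snippet:

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    // Resolved from the S3-hosted translations.json for the request's locale
    const speakOutput = handlerInput.t('WELCOME_BACK_MSG');
    return handlerInput.responseBuilder.speak(speakOutput).getResponse();
  },
};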
I followed Google's quick-start documentation for the Speech API to enable billing and the API for an account. This account has authorized a service account to create Compute instances on its behalf. After creating an instance on the child account, which hosts a binary that uses the Speech API, I am unable to successfully use the example C# code provided in Google's C# speech example:
try
{
    var speech = SpeechClient.Create();
    var response = speech.Recognize(new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        LanguageCode = "en"
    }, RecognitionAudio.FromFile(audioFiles[0]));
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Debug.WriteLine(alternative.Transcript);
        }
    }
}
catch (Exception ex)
{
    // ...
}
Requests fail on the SpeechClient.Create() line with the following error:
Grpc.Core.RpcException: Status(StatusCode=Unauthenticated, Detail="Exception occured in metadata credentials plugin.")
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Grpc.Core.Internal.AsyncCall`2.UnaryCall(TRequest msg)
   at Grpc.Core.Calls.BlockingUnaryCall[TRequest,TResponse](CallInvocationDetails`2 call, TRequest req)
   at Grpc.Core.DefaultCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
   at Grpc.Core.Internal.InterceptingCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
   at Google.Cloud.Speech.V1.Speech.SpeechClient.Recognize(RecognizeRequest request, CallOptions options)
   at Google.Api.Gax.Grpc.ApiCall.<>c__DisplayClass0_0`2.b__1(TRequest req, CallSettings cs)
   at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass1_0`2.b__0(TRequest request, CallSettings callSettings)
   at Google.Api.Gax.Grpc.ApiCall`2.Sync(TRequest request, CallSettings perCallCallSettings)
   at Google.Cloud.Speech.V1.SpeechClientImpl.Recognize(RecognizeRequest request, CallSettings callSettings)
   at Google.Cloud.Speech.V1.SpeechClient.Recognize(RecognitionConfig config, RecognitionAudio audio, CallSettings callSettings)
   at Rc2Solver.frmMain.RecognizeWordsGoogleSpeechApi() in C:\Users\jorda\Google Drive\VSProjects\Rc2Solver\Rc2Solver\frmMain.cs:line 1770
I have verified that the Speech API is activated. Here is the scope that the service account uses when creating the Compute instances:
credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(me)
    {
        Scopes = new[] { ComputeService.Scope.Compute, ComputeService.Scope.CloudPlatform }
    }.FromPrivateKey(yk)
);
I have found no information or code online about specifically authorizing or authenticating the Speech API for service account actors. Any help is appreciated.
It turns out the issue was that the Compute Engine instances needed to be created with a ServiceAccount parameter specified. Otherwise the instances have no service account attached, and therefore no default credential for the SpeechClient.Create() call to pick up. Here is the proper way to create an instance attached to a service account; it will use the SA tied to the project ID:
service = new ComputeService(new BaseClientService.Initializer() {
    HttpClientInitializer = credential,
    ApplicationName = "YourAppName"
});

string MyProjectId = "example-project-27172";
var project = await service.Projects.Get(MyProjectId).ExecuteAsync();

ServiceAccount servAcct = new ServiceAccount() {
    Email = project.DefaultServiceAccount,
    Scopes = new[] {
        "https://www.googleapis.com/auth/cloud-platform"
    }
};

Instance instance = new Instance() {
    MachineType = service.BaseUri + MyProjectId + "/zones/" + targetZone + "/machineTypes/" + "g1-small",
    Name = name,
    Description = name,
    Disks = attachedDisks,
    NetworkInterfaces = networkInterfaces,
    ServiceAccounts = new[] {
        servAcct
    },
    Metadata = md
};

batchRequest.Queue<Instance>(service.Instances.Insert(instance, MyProjectId, targetZone),
    (content, error, i, message) => {
        if (error != null) {
            AddEventMsg("Error creating instance " + name + ": " + error.ToString());
        } else {
            AddEventMsg("Instance " + name + " created");
        }
    });