GCP Video Intelligence - batchPredict error

Following this documentation, when requesting a batchPredict I run into this error via API
{
  "error": {
    "code": 13,
    "message": "internal"
  }
}
Additionally, here's a screenshot of the error I see when I try to use the "Test & Use" tab. Neither message is descriptive (code 13 is gRPC's generic INTERNAL status), so I'm not sure where the error lies.
In the request, I include the path to my CSV file in Google Cloud Storage, which links to videos in the same bucket. Here are the contents of the CSV:
gs://XXXXXXXXXXXX/movie1.mov,0,inf
gs://XXXXXXXXXXXX/movie2.mov,0,inf
I also include the path to a /Results folder (in the same bucket) to save the predictions.
Code making the call:
const {PredictionServiceClient} = require('@google-cloud/automl').v1beta1;

const client = new PredictionServiceClient();

async function batchPredict() {
  const request = {
    name: client.modelPath('project-id-xxxxxx', 'us-central1', 'VOTxxxxxxxxxx'),
    inputConfig: {
      gcsSource: {
        inputUris: ['gs://XXXXXXXXXXXX/apitest.csv'],
      },
    },
    outputConfig: {
      gcsDestination: {
        outputUriPrefix: 'gs://XXXXXXXXXXXX/results/',
      },
    },
  };
  // Execute the batch predict request (as in the tutorial sample).
  const [operation] = await client.batchPredict(request);
  const [response] = await operation.promise();
}
Please let me know if I need to provide any more detail.

The possible root cause is one of these two:
There is an issue somewhere in your code. So, if your code is not the same as below, I suggest that you try it out (changing the appropriate variables, of course).
There is something wrong with your model, which is the most probable root cause (as per the error message itself).
So, if it is not your code, you should create a private issue report on issue-tracker explaining your issue and giving as many details as possible on it, as well as your use case and impact.
As it is private, only Googlers and you will have access to it, so feel free to share your project and model IDs.
Here is what I did to try to reproduce your issue (be sure to follow the "before you begin" guide):
I have trained a model on gs://YOUR_BUCKET/TRAINING.csv
TRAIN,gs://automl-video-demo-data/traffic_videos/traffic_videos_train.csv
TEST,gs://automl-video-demo-data/traffic_videos/traffic_videos_test.csv
Predicted on a couple of videos listed in gs://YOUR_BUCKET/VIDEOS_TO_ANNOTATE.csv (inputUri), where each row is the video URI followed by the segment start and end times in seconds (inf meaning the end of the video):
gs://automl-video-demo-data/traffic_videos/highway_078.mp4,0,inf
gs://automl-video-demo-data/traffic_videos/highway_079.mp4,10.00000,15.50000
using the Node.js predict example from the tutorial:
/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
const projectId = 'YOUR_PROJECT';
const location = 'us-central1';
const modelId = 'VOTXXXXXXXXXXXXXXXXXX';
const inputUri = 'gs://YOUR_BUCKET/VIDEOS_TO_ANNOTATE.csv';
const outputUri = 'gs://YOUR_BUCKET/outputs/';

// Imports the Google Cloud AutoML library
const {PredictionServiceClient} = require('@google-cloud/automl').v1beta1;

// Instantiates a client
const client = new PredictionServiceClient();

async function batchPredict() {
  // Construct request
  const request = {
    name: client.modelPath(projectId, location, modelId),
    inputConfig: {
      gcsSource: {
        inputUris: [inputUri],
      },
    },
    outputConfig: {
      gcsDestination: {
        outputUriPrefix: outputUri,
      },
    },
  };

  const [operation] = await client.batchPredict(request);
  console.log('Waiting for operation to complete...');
  // Wait for operation to complete.
  const [response] = await operation.promise();
  console.log(
    `Batch Prediction results saved to Cloud Storage bucket. ${response}`
  );
}

batchPredict();
Note that I have also tried the REST & CMD LINE predict example.
And in both cases, it worked well and I received a correct response:
Node.js prediction's response:
Waiting for operation to complete...
Batch Prediction results saved to Cloud Storage bucket. [object Object]
REST & CMD LINE prediction's response:
{
  "name": "projects/XXXXXXXXXX/locations/us-central1/operations/VOTXXXXXXXXXXXXXXX",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2021-04-16T08:09:52.102270Z",
    "updateTime": "2021-04-16T08:09:52.102270Z",
    "batchPredictDetails": {
      "inputConfig": {
        "gcsSource": {
          "inputUris": [
            "gs://MY_BUCKET/VIDEOS_TO_ANNOTATE.csv"
          ]
        }
      }
    }
  }
}
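As an aside, the [object Object] in the Node.js output above is just JavaScript's default string coercion of the response object; a small tweak of my own to the sample's final log line (not part of the official sample) makes the payload visible:
// Serialize the response so the log shows its fields instead of "[object Object]".
console.log(
  `Batch Prediction results saved to Cloud Storage bucket. ${JSON.stringify(response, null, 2)}`
);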

Related

Cloud Functions / Cloud Tasks UNAUTHENTICATED error

I am trying to get a Cloud Function to create a Cloud Task that will invoke a Cloud Function. Easy.
The flow and use case are very close to the official tutorial here.
I also looked at this article by Doug Stevenson and in particular its security section.
No luck; I am consistently getting a 16 (UNAUTHENTICATED) error in Cloud Tasks.
If I can trust what I see in the console it seems that Cloud Task is not attaching the OIDC token to the request:
Yet, in my code I do have the oidcToken object:
const { v2beta3, protos } = require("@google-cloud/tasks");

import {
  PROJECT_ID,
  EMAIL_QUEUE,
  LOCATION,
  EMAIL_SERVICE_ACCOUNT,
  EMAIL_HANDLER,
} from "./../config/cloudFunctions";

export const createHttpTaskWithToken = async function (
  payload: {
    to_email: string;
    templateId: string;
    uid: string;
    dynamicData?: Record<string, any>;
  },
  {
    project = PROJECT_ID,
    queue = EMAIL_QUEUE,
    location = LOCATION,
    url = EMAIL_HANDLER,
    email = EMAIL_SERVICE_ACCOUNT,
  } = {}
) {
  const client = new v2beta3.CloudTasksClient();

  const parent = client.queuePath(project, location, queue);

  // Convert message to buffer.
  const convertedPayload = JSON.stringify(payload);
  const body = Buffer.from(convertedPayload).toString("base64");

  const task = {
    httpRequest: {
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      url,
      oidcToken: {
        serviceAccountEmail: email,
        audience: new URL(url).origin,
      },
      headers: {
        "Content-Type": "application/json",
      },
      body,
    },
  };

  try {
    // Send create task request.
    const request = { parent: parent, task: task };
    const [response] = await client.createTask(request);
    console.log(`Created task ${response.name}`);
    return response.name;
  } catch (error) {
    if (error instanceof Error) console.error(Error(error.message));
    return;
  }
};
When logging the task object from the code above in Cloud Logging, I can see that the service account is the one that I created for this purpose and that the Cloud Tasks are successfully created.
IAM:
And the function that the Cloud Task needs to invoke:
Everything seems to be there, in theory.
Any advice as to what I would be missing?
Thanks,
Your audience is incorrect. It must end with the function name. Here, you only have the region and the project: https://<region>-<projectID>.cloudfunctions.net/. Use the full Cloud Functions URL, including the function name.
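A minimal sketch of the fix, reusing the variables from the code in the question (the example URL is hypothetical):
// The audience must be the full target URL of the function, including the
// function name, e.g. "https://us-central1-my-project.cloudfunctions.net/emailHandler".
const task = {
  httpRequest: {
    httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
    url,
    oidcToken: {
      serviceAccountEmail: email,
      audience: url, // full URL, not new URL(url).origin
    },
    headers: {
      "Content-Type": "application/json",
    },
    body,
  },
};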

Is there a way to get a string (data) from a text file stored in S3 into an Alexa localisation.js file?

Problem:
I am trying to get the data from a text file stored in S3. I get it right in the intent handler using async/await, but I want to get the string in the localisation file, as I am trying to implement the solution in 2 languages.
I am getting an error saying the skill does not respond correctly.
This is file.js
const AWS = require('aws-sdk');

//========================
// This step is not required if you are running your code inside Lambda or in
// a local environment that has AWS set up
//========================
const s3 = new AWS.S3();

async function getS3Object(bucket, objectKey) {
  try {
    // Use the arguments rather than hard-coded values.
    const params = {
      Bucket: bucket, // e.g. 'my-bucket'
      Key: objectKey, // e.g. 'file.txt'
    };
    const data = await s3.getObject(params).promise();
    return data.Body.toString('utf-8');
  } catch (e) {
    throw new Error(`Could not retrieve file from S3: ${e.message}`);
  }
}

module.exports = getS3Object;
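For reference, this is how the export resolves correctly inside an async context such as an intent handler (a simplified sketch; the bucket and key names are hypothetical):
const getS3Object = require('./file.js');

async function demo() {
  // Awaiting the promise yields the file contents as a string.
  const text = await getS3Object('my-bucket', 'file.txt');
  console.log(text);
}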
This is the localisation.js file code:
const dataText = require('file.js');

async let textTitle = await dataText().then(); // this does not work

module.exports = {
  en: {
    translation: {
      WELCOME_BACK_MSG: textTitle,
    }
  },
  it: {
    translation: {
      WELCOME_MSG: textTitle,
    }
  }
}
The problem is that in your localisation.js file you are trying to export something that is obtained via an asynchronous function call, but you cannot do that directly; module.exports is assigned and returned synchronously. Please see, for instance, this SO question and answer for more in-depth background.
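A minimal sketch of the constraint and one possible workaround, under the assumption that the caller can await (the function and file names are illustrative):
const getS3Object = require('./file.js');

// module.exports cannot wait for a promise to settle, so export an async
// factory instead and await it wherever the translations are needed.
module.exports.buildTranslations = async function buildTranslations() {
  const textTitle = await getS3Object('my-bucket', 'file.txt'); // hypothetical names
  return {
    en: { translation: { WELCOME_BACK_MSG: textTitle } },
    it: { translation: { WELCOME_MSG: textTitle } },
  };
};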
As you are mentioning an Alexa skill, and given the name of the file, localisation.js, I assume you are trying something similar to the solution proposed in this GitHub repository.
Analyzing the content of the index.js file they provide, it seems the library uses i18next for localisation.
That library provides the concept of a backend if you need to load your localisation information from an external resource.
You can implement a custom backend, although the library offers one that could fit your needs, i18next-http-backend.
As indicated in the documentation, you can configure the library to fetch your localisation resources with this backend with something like the following:
import i18next from 'i18next';
import Backend from 'i18next-http-backend';

i18next
  .use(Backend)
  .init({
    backend: {
      // for all available options read the backend's repository readme file
      loadPath: '/locales/{{lng}}/{{ns}}.json'
    }
  });
Here in SO you can find a more complete example.
You need to provide a similar configuration to the localisation interceptor provided in the Alexa skill example project, perhaps something like:
import * as Alexa from 'ask-sdk-core';
import i18n from 'i18next';
import HttpApi from 'i18next-http-backend';

/**
 * This request interceptor will bind a translation function 't' to the handlerInput
 */
const LocalizationInterceptor = {
  process(handlerInput) {
    const localisationClient = i18n
      .use(HttpApi)
      .init({
        lng: Alexa.getLocale(handlerInput.requestEnvelope),
        // resources: languageStrings,
        backend: {
          loadPath: 'https://your-bucket.amazonaws.com/locales/{{lng}}/translations.json',
          crossDomain: true,
        },
        returnObjects: true
      });

    localisationClient.localise = function localise() {
      const args = arguments;
      const value = i18n.t(...args);
      if (Array.isArray(value)) {
        return value[Math.floor(Math.random() * value.length)];
      }
      return value;
    };

    handlerInput.t = function translate(...args) {
      return localisationClient.localise(...args);
    };
  }
};
Please be aware that instead of a text file you need to return a valid JSON file with the appropriate translations:
{
  "WELCOME_MSG": "Welcome!!",
  "WELCOME_BACK_MSG": "Welcome back!!"
}

Dropzone.js + AWS S3 stalling queue

I'm trying to implement a dropzone.js uploader to Amazon S3 using the aws-sdk.js for the browser. But when I exceed the 'parallelUploads' maximum in the settings, the queue never completes. I'm using the approach in the following link:
amazon upload
relevant parts of my code:
var dz = new Dropzone("#DZContainer", {
  acceptedFiles: "image/*,.jpg,.jpeg,.png,.gif",
  autoQueue: true,
  autoProcessQueue: true,
  parallelUploads: 10,
  clickable: [".uploadButton"],
  accept: function(file, done) {
    let params = {
      "Bucket": "upload-bucket",
      "Key": getFullKey(file.name),
      Body: file,
      Region: "us-east-1",
      ContentType: file.type
    };
    file.s3upload = new AWS.S3.ManagedUpload({ params: params });
    if (typeof(done) === 'function') done();
  },
  canceled: function(file) {
    if (file.s3upload) file.s3upload.abort();
  },
  init: function() {
    this.on('removedfile', function(file) {
      if (file.s3upload) file.s3upload.abort();
    });
  }
});
dz.uploadFiles = function(files) {
  for (var j = 0; j < files.length; j++) {
    var file = files[j];
    dz.SendFile(file);
  }
};

dz.SendFile = function(file) {
  file.s3upload.send(function(err, data) {
    if (err) {
      console.error(err);
      dz.emit("error", file, err.message);
    } else {
      dz.emit("complete", file);
    }
  });
};
If I drag in (or use the clickable) more than 10 files, the first 10 complete but it never processes the rest of the queue. What am I missing? All help is appreciated.
EDIT: With a little more digging into Dropzone, it looks as though the file status is never getting set to complete. I see a function called _finished() in the Dropzone code, but I'm having a hard time figuring out what specifically is supposed to trigger that function. I have tried dz.emit("complete", file) as listed below, as well as adding dz.emit("success", file), but my breakpoint at the first line of the _finished() function never triggers. Thus file.status never gets set to completed.
Does anyone know when/what/how _finished() is supposed to be run?
As mentioned in the edit, I was able to track down where the .status was not properly getting set. This turned out to be in a private Dropzone function called _finished().
On further examination, I noticed that _finished() also calls emit("complete", file) after setting file.status to Dropzone.SUCCESS and emitting "success". It then checks whether autoProcessQueue is set and, if it is, returns the result of a processQueue() call.
I had a hard time figuring out what triggered this function, as it was bound to an onload event that, I eventually realized, was tied to an XMLHttpRequest object used by the internal uploader (which is being overridden by the S3 uploader).
So I modified the function to emulate what Dropzone._finished() was doing, and it's behaving as expected:
dz.SendFile = function(file) {
  file.s3upload.send(function(err, data) {
    if (err) {
      console.error(err);
      dz.emit("error", file, err.message);
    } else {
      file.status = Dropzone.SUCCESS;
      dz.emit("success", file, data, err);
      dz.emit("complete", file);
      if (dz.options.autoProcessQueue)
        dz.processQueue();
    }
  });
};
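One way to confirm the queue now drains past the parallelUploads limit is Dropzone's queuecomplete event (the log message is just an illustration):
// Fires once every queued file has reached a terminal state, which only happens
// when file.status is set correctly in the send callback above.
dz.on("queuecomplete", function() {
  console.log("All queued uploads finished");
});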

How to properly set an API call in QML using XMLHttpRequest

I am building a small weather app as an exercise to learn QML and properly make an API call, using OpenWeather; you can see a typical API response there.
The problem I am having is that I can't get the API call to work. After setting up a minimal example with some cities, which you can see below, the weather symbol should appear right next to each city, but it does not. The list of icons can be found here. The source code of the MVE can be found here for completeness.
The error from the compiler: qrc:/main.qml:282: SyntaxError: JSON.parse: Parse error
This is what is happening
This is what is expected
Typical API JSON response can be found both here and below:
{
  "coord": {
    "lon": -122.08,
    "lat": 37.39
  },
  "weather": [
    {
      "id": 800,
      "main": "Clear",
      "description": "clear sky",
      "icon": "01d"
    }
  ],
  "base": "stations",
  "main": {
    "temp": 282.55,
    "feels_like": 281.86,
    "temp_min": 280.37,
    "temp_max": 284.26,
    "pressure": 1023,
    "humidity": 100
  },
  "visibility": 16093,
  "wind": {
    "speed": 1.5,
    "deg": 350
  },
  "clouds": {
    "all": 1
  },
  "dt": 1560350645,
  "sys": {
    "type": 1,
    "id": 5122,
    "message": 0.0139,
    "country": "US",
    "sunrise": 1560343627,
    "sunset": 1560396563
  },
  "timezone": -25200,
  "id": 420006353,
  "name": "Mountain View",
  "cod": 200
}
Below is a snippet of code related to the API call:
main.qml
// Create the API getCondition to get JSON data of weather
function getCondition(location, index) {
  var res
  var url = "api.openweathermap.org/data/2.5/weather?id={city id}&appid={your api key}"
  var doc = new XMLHttpRequest()
  // parse JSON data and put code result into codeList
  doc.onreadystatechange = function() {
    if (doc.readyState === XMLHttpRequest.DONE) {
      res = doc.responseText
      // parse data
      var obj = JSON.parse(res) // <-- Error Here
      if (typeof(obj) == 'object') {
        if (obj.hasOwnProperty('query')) {
          var ch = obj.query.results.channel
          var item = ch.item
          codeList[index] = item.condition["code"]
        }
      }
    }
  }
  doc.open('GET', url, true)
  doc.send()
}
In order to solve this problem I consulted several sources: first of all, the official documentation and the related function. I believe it is correctly set up, but I added the reference for completeness.
I also came across this one, which explained how to simply apply XMLHttpRequest.
I also dug more into the problem and consulted this one, which also explained how to apply the JSON parsing function. But still something is not correct.
Thanks for pointing me in the right direction to solve this problem.
Below is the answer to my question. I was not reading the JSON response properly; after console-logging the problem, the solution became clear. The code was correct from the beginning; only the response needed to be reviewed properly and in great detail, the JSON response being a bit confusing:
function getCondition() {
  var request = new XMLHttpRequest()
  request.open('GET', 'http://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=key', true);
  request.onreadystatechange = function() {
    if (request.readyState === XMLHttpRequest.DONE) {
      if (request.status && request.status === 200) {
        console.log("response", request.responseText)
        var result = JSON.parse(request.responseText)
      } else {
        console.log("HTTP:", request.status, request.statusText)
      }
    }
  }
  request.send()
}
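As a follow-up, the icon code for the weather symbol sits in weather[0].icon of the response shown earlier; here is a small sketch of turning it into an image URL inside the success branch above (weatherIcon is a hypothetical Image element id):
var result = JSON.parse(request.responseText)
var iconCode = result.weather[0].icon // e.g. "01d" in the sample response
// OpenWeather serves matching icons at this documented URL pattern.
weatherIcon.source = "http://openweathermap.org/img/wn/" + iconCode + "@2x.png"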
Hope that helps!
In your code, your url shows this: "api.openweathermap.org/data/2.5/weather?id={city id}&appid={your api key}". You need to replace {city id} and {your api key} with real values.
You can solve it by providing an actual city ID and API key in your request URL.

how to get data from {} in graphql

I want to get data about the user's addinfo (a boolean value).
When I do console.log(data.user), I can get data.user, as shown in the picture below.
But when I do console.log(data.user.user), it shows that user is undefined, as shown in the picture below.
{
  user(token: "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6ImI3ZTA5YmVhOTAzNzQ3ODQiLCJleHAiOjE1NjM4OTcxNzksIm9yaWdJYXQiOjE1NjM4OTY4Nzl9.QFB58dAvqIC9RBBohN1b3TdR542dBZEcXOG1MSTqAQQ") {
    user {
      id
      addinfo
    }
  }
}
This query returns:
{
  "data": {
    "user": {
      "user": {
        "id": "4",
        "addinfo": false
      }
    }
  }
}
I can't see the rest of your code, but if the code is fetching your users, there is a time before the request comes back when your user has not been fetched yet. It looks like your screenshot shows this: there is an undefined before the successful object.
You need to ensure that the data has come back first, by checking whether the data prop is truthy or some other way of checking that the promise has completed.
i.e.
if (!data.user) return 'Loading...';

return (
  <Switch>
    ...
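For a fuller picture, here is a minimal sketch assuming Apollo Client's useQuery hook (GET_USER and the token variable are hypothetical placeholders for your query):
import { useQuery } from '@apollo/client';

function Profile({ token }) {
  // GET_USER is your gql query document (hypothetical name).
  const { loading, error, data } = useQuery(GET_USER, { variables: { token } });

  if (loading) return 'Loading...'; // request still in flight, data is undefined
  if (error) return `Error: ${error.message}`;

  // Safe to read the nested field once the data has arrived.
  return <p>{String(data.user.user.addinfo)}</p>;
}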
In GraphQL I'm getting user info using e.g. below code:
async getUser(id) {
  const result = await this.api.query({
    query: gql(getUser),
    variables: {
      id,
    },
  });

  return result.data.getUser || null;
}
I'm invoking it by:
const user = await userService.getUser(id);
and I do have access to the user's properties.
Maybe you're trying to get the user data before it is retrieved and available?