I've been using Google Cloud Video Intelligence for text detection. Now I want to use it for speech transcription as well, so I added the SPEECH_TRANSCRIPTION feature alongside TEXT_DETECTION, but the response only contains the result for one feature, the last one.
const gcsUri = 'gs://path-to-the-video-on-gcs';
const request = {
inputUri: gcsUri,
features: ['TEXT_DETECTION', 'SPEECH_TRANSCRIPTION'],
};
// Detects text in a video
const [operation] = await video.annotateVideo(request);
const [operationResult] = await operation.promise();
const annotationResult = operationResult.annotationResults[0];
const textAnnotations = annotationResult.textAnnotations;
const speechTranscriptions = annotationResult.speechTranscriptions;
console.log(textAnnotations) // --> []
console.log(speechTranscriptions) // --> [{...}]
Is this a case where annotation is performed on only one feature at a time?
Annotation will be performed for both features. Below is example code.
const videoIntelligence = require('@google-cloud/video-intelligence');
const client = new videoIntelligence.VideoIntelligenceServiceClient();
const gcsUri = 'gs://cloud-samples-data/video/JaneGoodall.mp4';
async function analyzeVideoTranscript() {
const videoContext = {
speechTranscriptionConfig: {
languageCode: 'en-US',
enableAutomaticPunctuation: true,
},
};
const request = {
inputUri: gcsUri,
features: ['TEXT_DETECTION','SPEECH_TRANSCRIPTION'],
videoContext: videoContext,
};
const [operation] = await client.annotateVideo(request);
console.log('Waiting for operation to complete...');
const results = await operation.promise();
// Gets annotations for video
console.log('Result------------------->');
console.log(results[0].annotationResults);
results[0].annotationResults.forEach((annotationResult, index) => {
console.log(`annotation result no: ${index + 1} =======================>`);
console.log(annotationResult.speechTranscriptions);
console.log(annotationResult.textAnnotations);
});
}
analyzeVideoTranscript();
N.B.: What I have found is that annotationResults may not return the results in the same order as the declared features. You may want to adjust the code to your needs accordingly.
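Since the order isn't guaranteed, one option is to select each result by the field it carries rather than by its position. A minimal sketch, assuming the results variable from the example above:
const annotationResults = results[0].annotationResults;
// Pick each result by the field it actually carries, not by its index.
const textResult = annotationResults.find(r => r.textAnnotations && r.textAnnotations.length > 0);
const speechResult = annotationResults.find(r => r.speechTranscriptions && r.speechTranscriptions.length > 0);
console.log(textResult ? textResult.textAnnotations : 'no text annotations found');
console.log(speechResult ? speechResult.speechTranscriptions : 'no speech transcriptions found');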
I want to know if it's possible to train Dialogflow CX through the API by placing new training phrases in my code (I am using Node.js) and automatically updating the list of phrases in that intent. One thing to add: I want to add a new phrase to the intent's list, not update an existing phrase.
Thank you in advance!
I was reading the documentation of Dialogflow CX and found this: https://github.com/googleapis/nodejs-dialogflow-cx/blob/main/samples/update-intent.js. However, that implementation updates a specific phrase instead of adding one to the list.
Using the sample code you provided in your question, I updated it to show how to add a new phrase to the list: newTrainingPhrase contains the training phrase; append newTrainingPhrase to intent[0].trainingPhrases, and set updateMask to "training_phrases" to point at the part of the intent you would like to update.
See code below:
'use strict';
async function main(projectId, agentId, intentId, location, displayName) {
const {IntentsClient} = require('@google-cloud/dialogflow-cx');
const intentClient = new IntentsClient({apiEndpoint: 'us-central1-dialogflow.googleapis.com'});
async function updateIntent() {
// projectId, agentId, intentId, location, and displayName come in as the
// arguments of main(), e.g. 'your-project-id', 'your-agent-id',
// 'your-intent-id', 'us-central1', and 'store.hours'; redefining them
// here as constants would shadow those arguments.
const agentPath = intentClient.projectPath(projectId);
const intentPath = `${agentPath}/locations/${location}/agents/${agentId}/intents/${intentId}`;
//define your training phrase
var newTrainingPhrase = {
"parts": [
{
"text": "What time do you open?",
"parameterId": ""
}
],
"id": "",
"repeatCount": 1
};
const intent = await intentClient.getIntent({name: intentPath});
intent[0].trainingPhrases.push(newTrainingPhrase);
const updateMask = {
paths: ['training_phrases'],
};
const updateIntentRequest = {
intent: intent[0],
updateMask,
languageCode: 'en',
};
// Send the request to update the intent.
const result = await intentClient.updateIntent(updateIntentRequest);
console.log(result);
}
updateIntent();
}
process.on('unhandledRejection', err => {
console.error(err.message);
process.exitCode = 1;
});
main(...process.argv.slice(2));
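You can then run the script with your own values, e.g. node update-intent.js your-project-id your-agent-id your-intent-id us-central1 store.hours (assuming you saved it as update-intent.js).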
I'm trying to create a REST API with ExpressJS that accepts an image and passes it to another service (with a POST request), which is in charge of performing some operations (resize, etc.) and storing it in AWS S3. I know the same thing could easily be done with a Lambda function directly, but I have a Kubernetes cluster and I want to make it worth it.
All components are already working, with the exception of the service that forwards the image to the second service.
The idea I found on the internet is to use a stream, but I get the exception Error: Expected a stream at Object.getStream [as default].
How can I solve this? Is this the right practice, or is there a better solution to achieve the same result?
const headers = req.headers;
const files: any = req.files
const filename = files[0].originalname;
const buffer = await getStream(files[0].stream)
const formFile = new FormData();
formFile.append('image', buffer, filename);
headers['Content-Type'] = 'multipart/form-data';
axios.post("http://localhost:1401/content/image/test/upload/", formFile, {
headers: headers,
})
.catch((error) => {
const { status, data } = error.response;
res.status(status).send(data);
})
I've found a solution, which I'm posting here for those who run into the same problem.
Install form-data in your Node project:
yarn add form-data
Then in your controller:
const headers = req.headers;
const files: any = req.files
const formFile = new FormData();
files.forEach((file: any) => {
const filename = file.originalname;
const buffer = file.buffer
formFile.append('image', buffer, filename);
})
// set the correct header otherwise it won't work
headers["content-type"] = `multipart/form-data; boundary=${formFile.getBoundary()}`
// now you can send the image to the second service
axios.post("http://localhost:1401/content/image/test/upload/", formFile, {
headers: headers,
})
.then((r: any) => {
// sendStatus already finishes the response; no extra end() is needed
res.sendStatus(r.status)
})
.catch((error) => {
const { status, data } = error.response;
res.status(status).send(data);
})
I have a task for my company where I have to do a monthly user access review via CloudWatch.
This is a manual process where I have to go to CloudWatch > CloudWatch Logs > Log groups > /var/log/example_access > example-instance and then document the logs for a list of users from a randomly generated date. The example instance is a certificate manager box which is linked to our entire production fleet of nodes. I also have to document which commands each user ran on specific nodes.
I'm wondering if there is any way I can automate this process and dump the output into Word docs? It's getting painful as the list of users/employees keeps increasing. Thanks.
Sure there is, though I don't reckon you want Word docs; I'd launch an Elasticsearch instance on AWS and give the users who want the data access to Kibana.
Also, circulating Word docs in an org is bad juju; depending on your Windows/Office version, it carries risks.
Add this Lambda function, then go into CloudWatch and add it as a subscription filter on the right log groups (see the SDK sketch after the code).
Note you may get missing log entries if they're not logged in JSON format or have funky formatting; if you're using a standard log format, it should work.
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const elasticsearch = require('elasticsearch')
/**
* This is an example function to stream CloudWatch logs to Elasticsearch.
* @param event
* @param context
* @param callback
*/
export default (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = true
const payload = Buffer.from(event.awslogs.data, 'base64') // Buffer.from replaces the deprecated new Buffer()
const esClient = new elasticsearch.Client({
httpAuth: process.env.esAuth, // your params here
host: process.env.esEndpoint, // your params here.
})
zlib.gunzip(payload, (err, result) => {
if (err) {
return callback(err)
}
const logObject = JSON.parse(result.toString('utf8'))
const elasticsearchBulkData = transform(logObject)
// Control messages produce no bulk data, so skip them.
if (!elasticsearchBulkData) {
return callback(null, 'success')
}
const params = { body: [] }
params.body.push(elasticsearchBulkData)
esClient.bulk(params, (err, resp) => {
// Report bulk errors instead of swallowing them, and only signal
// success once Elasticsearch has acknowledged the request.
if (err) {
return callback(err)
}
callback(null, 'success')
})
})
}
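// transform() converts a decoded CloudWatch Logs payload into the
// newline-delimited action/document pairs that the Elasticsearch bulk API expects.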
function transform(payload) {
if (payload.messageType === 'CONTROL_MESSAGE') {
return null
}
let bulkRequestBody = ''
payload.logEvents.forEach((logEvent) => {
const timestamp = new Date(1 * logEvent.timestamp)
// index name format: cwl-<env>-YYYY.MM.DD
const indexName = [
`cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
(`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
(`0${timestamp.getUTCDate()}`).slice(-2), // day
].join('.')
const source = buildSource(logEvent.message, logEvent.extractedFields)
source['@id'] = logEvent.id
source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
source['@message'] = logEvent.message
source['@owner'] = payload.owner
source['@log_group'] = payload.logGroup
source['@log_stream'] = payload.logStream
const action = { index: {} }
action.index._index = indexName
action.index._type = 'lambdaLogs'
action.index._id = logEvent.id
bulkRequestBody += `${[
JSON.stringify(action),
JSON.stringify(source),
].join('\n')}\n`
})
return bulkRequestBody
}
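// buildSource() prefers the fields CloudWatch extracted for the event;
// failing that, it tries to parse any JSON embedded in the raw message.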
function buildSource(message, extractedFields) {
if (extractedFields) {
const source = {}
for (const key in extractedFields) {
if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
const value = extractedFields[key]
if (isNumeric(value)) {
source[key] = 1 * value
continue
}
const jsonSubString = extractJson(value)
if (jsonSubString !== null) {
source[`$${key}`] = JSON.parse(jsonSubString)
}
source[key] = value
}
}
return source
}
const jsonSubString = extractJson(message)
if (jsonSubString !== null) {
return JSON.parse(jsonSubString)
}
return {}
}
function extractJson(message) {
const jsonStart = message.indexOf('{')
if (jsonStart < 0) return null
const jsonSubString = message.substring(jsonStart)
return isValidJson(jsonSubString) ? jsonSubString : null
}
function isValidJson(message) {
try {
JSON.parse(message)
} catch (e) { return false }
return true
}
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n)
}
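If you'd rather wire up the subscription filter from code instead of the console, something along these lines should work with the AWS SDK for JavaScript v2 (the region, ARNs, and names below are placeholders, not values from this setup):
const AWS = require('aws-sdk')

const lambda = new AWS.Lambda({ region: 'us-east-1' })
const logs = new AWS.CloudWatchLogs({ region: 'us-east-1' })

async function subscribe() {
  // Allow CloudWatch Logs to invoke the function.
  await lambda.addPermission({
    FunctionName: 'cwl-to-elasticsearch',
    StatementId: 'cwl-invoke',
    Action: 'lambda:InvokeFunction',
    Principal: 'logs.amazonaws.com',
    SourceArn: 'arn:aws:logs:us-east-1:123456789012:log-group:/var/log/example_access:*',
  }).promise()
  // An empty filter pattern streams every event in the log group.
  await logs.putSubscriptionFilter({
    logGroupName: '/var/log/example_access',
    filterName: 'to-elasticsearch',
    filterPattern: '',
    destinationArn: 'arn:aws:lambda:us-east-1:123456789012:function:cwl-to-elasticsearch',
  }).promise()
}

subscribe().catch(console.error)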
Now you should have your logs going into Elasticsearch; go into Kibana and you can search by date, and you can even write endpoints to allow people to query their own data!
The easy way is to just give stakeholders Kibana access and let them check it out.
It might not be exactly what you wanted, but I reckon it'll work better.
I'm trying to make a dose-schedule app where, when the alarm the user set goes off, the app shows a page asking whether the user has taken the medicine, and the user chooses snooze or done by swiping ("done" to the left, "snooze" to the right).
I want the app to open automatically from the background at the scheduled time.
I've already tried "nativescript-local-notification", but with that one the user must press the notification to open or enter the app. I've also read about "nativescript background service", but it seems to be the same as what I've tried.
Could you tell me the way, or give me some example?
I've solved it by myself. I'm putting the solution here in case it helps someone like me.
First you have to set an alarm.
alarm.helper.js
import * as application from 'tns-core-modules/application' // provides application.android.context below
import * as utils from 'tns-core-modules/utils/utils' // provides utils.ad below
import * as AlarmReceiver from '@/services/AlarmReceiver' // Do not remove
export const setAlarm = data => {
const ad = utils.ad
const context = ad.getApplicationContext()
const alarmManager = application.android.context.getSystemService(android.content.Context.ALARM_SERVICE)
const intent = new android.content.Intent(context, io.nerdrun.AlarmReceiver.class)
const { id, time, title, name } = data
// set up alarm
intent.putExtra('id', id)
intent.putExtra('title', title)
intent.putExtra('name', name)
intent.putExtra('time', time.toString())
const pendingIntent = android.app.PendingIntent.getBroadcast(context, id, intent, android.app.PendingIntent.FLAG_UPDATE_CURRENT)
alarmManager.setExact(alarmManager.RTC_WAKEUP, time.getTime(), pendingIntent)
console.log('registered alarm')
}
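A hypothetical call, assuming time is a Date (the helper calls both time.toString() and time.getTime()):
// Schedule a dose reminder one minute from now (all values illustrative).
setAlarm({
  id: 1,
  time: new Date(Date.now() + 60 * 1000),
  title: 'Dose reminder',
  name: 'Aspirin'
})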
Next, extend Android's BroadcastReceiver to create the AlarmReceiver.
AlarmReceiver.js
export const AlarmReceiver = android.content.BroadcastReceiver.extend('io.nerdrun.AlarmReceiver', {
init: function() {
console.log('init receiver')
},
onReceive: function(context, intent) {
console.log('You got the receiver man!!')
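// Copy the extras from the broadcast intent onto a new intent for the
// app's main activity, then launch it from the background.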
const activityIntent = new android.content.Intent(context, com.tns.NativeScriptActivity.class)
const id = intent.getExtras().getInt('id')
const title = intent.getExtras().getString('title')
const name = intent.getExtras().getString('name')
const time = intent.getExtras().getString('time')
activityIntent.putExtra('id', id)
activityIntent.putExtra('title', title)
activityIntent.putExtra('name', name)
activityIntent.putExtra('time', time)
activityIntent.setFlags(android.content.Intent.FLAG_ACTIVITY_NEW_TASK)
context.startActivity(activityIntent)
}
})
Register the receiver in your manifest.
AndroidManifest.xml
<receiver android:name="io.nerdrun.AlarmReceiver" />
Of course, you could extend Activity on Android in your project as well, but I haven't implemented that.
After the receiver has run, it navigates to the main activity; you can then control whatever you want in app.js, as below:
app.js
application.on(application.resumeEvent, args => {
if (args.android) {
console.log('resume succeed!!!')
const android = args.android
const intent = android.getIntent()
const extras = intent.getExtras()
if (extras) {
const id = extras.getInt('id')
const title = extras.getString('title')
const name = extras.getString('name')
const time = extras.getString('time')
// Collect the extras into the props object passed to the page.
const props = { id, title, name, time }
Vue.prototype.$store = store
Vue.prototype.$navigateTo(routes.home, { clearHistory: true, props: props })
}
}
})
I have set up a Hyperledger Sawtooth network from the Sawtooth docs; you can find the docker-compose.yaml I used to set up the network here:
https://sawtooth.hyperledger.org/docs/core/releases/1.0/app_developers_guide/sawtooth-default.yaml
Transaction processor code:
const { TransactionProcessor } = require('sawtooth-sdk/processor'); // missing in the original post, but required below
const { TransactionHandler } = require('sawtooth-sdk/processor/handler');
const { InvalidTransaction } = require('sawtooth-sdk/processor/exceptions');
const { TextEncoder, TextDecoder } = require('text-encoding/lib/encoding');
const crypto = require('crypto');
const _hash = (x) => {
return crypto.createHash('sha512').update(x).digest('hex').toLowerCase();
}
const encoder = new TextEncoder('utf8');
const decoder = new TextDecoder('utf8');
const TP_FAMILY = 'grocery';
const TP_NAMESPACE = _hash(TP_FAMILY).substring(0, 6);
class GroceryHandler extends TransactionHandler {
constructor() {
super(TP_FAMILY, ['1.0.0'], [TP_NAMESPACE]);
this.timeout = 500;
}
apply(request, context) {
console.log('Transaction Processor Called!');
this._context = context;
this._request = request;
const actions = ['createOrder'];
try {
let payload = JSON.parse(decoder.decode(request.payload));
let action = payload.action
if(!action || !actions.includes(action)) {
throw new InvalidTransaction(`Unsupported action "${action}"!`);
}
try {
return this[action](payload.data);
} catch(e) {
console.log(e);
}
} catch(e) {
throw new InvalidTransaction('Pass a valid json string.');
}
}
createOrder(payload) {
console.log('Creating order!');
let data = {
id: payload.id,
status: payload.status,
created_at: Math.floor((new Date()).getTime() / 1000)
};
return this._setEntry(this._makeAddress(payload.id), data);
}
_setEntry(address, payload) {
let dataBytes = encoder.encode(JSON.stringify(payload));
let entries = {
[address]: dataBytes
}
return this._context.setState(entries);
}
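// State addresses here are the 6-hex-char family namespace followed by the
// first 64 hex chars of the SHA-512 of the order id (70 characters total).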
_makeAddress(id) {
return TP_NAMESPACE + _hash(id).substr(0,64);
}
}
const transactionProcessor = new TransactionProcessor('tcp://validator:4004');
transactionProcessor.addHandler(new GroceryHandler());
transactionProcessor.start();
Client code:
const { createContext, CryptoFactory } = require('sawtooth-sdk/signing');
const { protobuf } = require('sawtooth-sdk');
const { TextEncoder } = require('text-encoding/lib/encoding');
const request = require('request');
const crypto = require('crypto');
const encoder = new TextEncoder('utf8');
const _hash = (x) => {
return crypto.createHash('sha512').update(x).digest('hex').toLowerCase();
}
const TP_FAMILY = 'grocery';
const TP_NAMESPACE = _hash(TP_FAMILY).substr(0, 6);
const context = createContext('secp256k1');
const privateKey = context.newRandomPrivateKey();
const signer = new CryptoFactory(context).newSigner(privateKey);
let payload = {
action: 'createOrder', // must match an entry in the TP's `actions` list
data: {
id: '1'
}
};
const address = TP_NAMESPACE + _hash(payload.data.id).substr(0, 64); // same address the TP derives in _makeAddress
const payloadBytes = encoder.encode(JSON.stringify(payload));
const transactionHeaderBytes = protobuf.TransactionHeader.encode({
familyName: TP_FAMILY,
familyVersion: '1.0.0',
inputs: [address],
outputs: [address],
signerPublicKey: signer.getPublicKey().asHex(),
batcherPublicKey: signer.getPublicKey().asHex(),
dependencies: [],
payloadSha512: _hash(payloadBytes)
}).finish();
const transactionHeaderSignature = signer.sign(transactionHeaderBytes);
const transaction = protobuf.Transaction.create({
header: transactionHeaderBytes,
headerSignature: transactionHeaderSignature,
payload: payloadBytes
});
const transactions = [transaction]
const batchHeaderBytes = protobuf.BatchHeader.encode({
signerPublicKey: signer.getPublicKey().asHex(),
transactionIds: transactions.map((txn) => txn.headerSignature),
}).finish();
const batchHeaderSignature = signer.sign(batchHeaderBytes)
const batch = protobuf.Batch.create({
header: batchHeaderBytes,
headerSignature: batchHeaderSignature,
transactions: transactions
});
const batchListBytes = protobuf.BatchList.encode({
batches: [batch]
}).finish();
request.post({
url: 'http://localhost:8008/batches',
body: batchListBytes,
headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
if (err) {
return console.log(err);
}
console.log(response.body);
});
Validator log: https://justpaste.it/74y5g
Transaction processor log: https://justpaste.it/5ayn6
> grocery-tp@1.0.0 start /processor
> node index.js tcp://validator:4004
Connected to tcp://validator:4004
Registration of [grocery 1.0.0] succeeded
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
After the entry below appears in the validator logs, the processor stops receiving any transactions.
[2018-07-04 10:39:18.026 DEBUG block_validator] Block(c9636780f4babea6b8103665bc1fb19a59ce0ba66289494fc61972e97423a3273dd1d41e93ddf90c933809dab5350a0a83b282aaf25ebdcc6619735e25d8b337 (block_num:75, state:00704f66a517e79dc064e63586b12d677a3b60ce25363a4654fa819a59e4132c, previous_block_id:32b07cd79093aee0b7833b8924c8fef01fce798f3d58560c83c9891b2c05c02f2a4b894de43503fdcb0f129e9f365cfbdc415b798877393f7e75598195ad3c94)) rejected due to state root hash mismatch: 00704f66a517e79dc064e63586b12d677a3b60ce25363a4654fa819a59e4132c != e52737049078b9e0f149bb58fc4938473a5e889fa427536b0e862c4728df5004
When Sawtooth processes a transaction, it will send it to your TP more than once and then compare the hashes across the multiple invocations to ensure the same result is returned. If, within the TP, you are generating a different address, or a variation of the data stored at an address, it will fail the transaction.
The famous saying in Sawtooth is that the TP must be deterministic for each transaction; in other words, it is similar to the rule in functional programming: the same TP called with the same transaction should produce the same result.
Things to watch for:
Be careful not to construct an address that incorporates timestamp elements, incremental counts, or other random bits of information.
Be careful not to do the same for the data you are storing at an address; see the sketch below.
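In the posted TP, created_at: Math.floor((new Date()).getTime() / 1000) is exactly this kind of non-determinism: each invocation writes different bytes to the same address, so the computed state roots diverge, which matches the "state root hash mismatch" in the validator log. A sketch of one possible fix, assuming you are free to change the payload format, is to let the client put the timestamp inside the signed payload so every invocation of the TP stores identical data:
// Client side: the timestamp travels inside the signed payload.
let payload = {
  action: 'createOrder',
  data: {
    id: '1',
    status: 'created', // illustrative; the posted client omits status
    created_at: Math.floor(Date.now() / 1000) // computed once, by the client
  }
};

// TP side (method inside GroceryHandler): store what the payload carries
// instead of computing "now" inside the processor.
createOrder(payload) {
  console.log('Creating order!');
  const data = {
    id: payload.id,
    status: payload.status,
    created_at: payload.created_at // deterministic across repeated invocations
  };
  return this._setEntry(this._makeAddress(payload.id), data);
}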