Where to find stored results from a Cloud Speech-to-Text API call?

I am performing a batch of asynchronous long_running_recognize transcriptions using Google's Cloud Speech-to-Text, and it appears that some of my requests are timing out, and/or not returning anything. How may I access the stored results of my API calls? I'm using Python 3.7.
I realize that the API call returns results to the function that made the call. What I'm asking is, does Google store the results of my API calls somewhere? And how do I access them?

For larger audio files you should call the asynchronous method, which invokes LongRunningRecognize. This submits a long-running operation and immediately returns a response like:
{
  "name": "operation_name",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeMetadata",
    "progressPercent": 34,
    "startTime": "2016-08-30T23:26:29.579144Z",
    "lastUpdateTime": "2016-08-30T23:26:29.826903Z"
  }
}
With this response you can poll for the result given the operation_name:
curl -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
"https://speech.googleapis.com/v1/operations/your-operation-name"
Note: if you are not receiving any return values with this method, try increasing the client's timeout and retry settings. Be aware that retry expects a google.api_core.retry.Retry object rather than an integer number of attempts, so it is something like:
long_running_recognize(config=config, audio=audio, retry=Retry(deadline=300), timeout=300)
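For completeness, the whole flow in Python could look like the following minimal sketch. It assumes the google-cloud-speech package and an audio file already uploaded to Cloud Storage; the bucket path and recognition config are placeholders:

from google.cloud import speech

client = speech.SpeechClient()

# Placeholder config; adjust encoding/sample rate to your audio.
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.flac")  # placeholder URI

# Submits the long-running operation and returns immediately.
operation = client.long_running_recognize(config=config, audio=audio)

# Save this name: it is what you poll (e.g. with the curl call above)
# if this process dies before the transcription finishes.
print("Operation name:", operation.operation.name)

# Block for up to 5 minutes waiting for the result.
response = operation.result(timeout=300)
for result in response.results:
    print(result.alternatives[0].transcript)

Google keeps the finished operation (and its result) retrievable via the operations endpoint for a limited time, so saving the operation name lets you fetch results even after your original call timed out locally.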


How to disable message encryption for gcp pubsub emulator?

I'm using the official GCP PubSub emulator to test integration locally.
I'd like to send messages via plain curl/Postman tools, but it is getting complicated because the emulator seems to require incoming messages to be encrypted.
For instance, if we send it like this:
curl --location --request POST 'http://localhost:8091/v1/projects/my-project/topics/transactions:publish' \
--header 'Content-Type: application/json' \
--data-raw '{"messages":[{"data":"{\"foo\":\"baz\"}","attributes":{}}]}'
Then, I'm getting 400:
{
  "error": {
    "code": 400,
    "message": "Payload isn't valid for request.",
    "status": "INVALID_ARGUMENT"
  }
}
due to invalid incoming messages. The emulator seems to require encryption: if I sniff an encrypted body from a real client and replay it, the request works.
But it is a burden to encrypt messages when running locally.
To disable encryption in GCP I could follow this guide, but that is not applicable to a local emulator run: there is no GCP environment, or I don't know how to set one up.
Are there any options to disable the emulator's decryption? If not, where should I report this? There is no GitHub project for the emulator.
OK, it took me some time to understand, but I think you have mixed up two things: encryption and encoding.
The data value in Pub/Sub isn't encrypted, it's encoded in base64. No encryption is required here. Base64 is a plain encoding that protects binary data and special characters from data loss and compatibility issues.
Note: on your local computer with the Pub/Sub emulator, the data are encrypted neither at rest nor in transit. On Google Cloud, with the Pub/Sub service, the data are encrypted both in transit and at rest.
With curl you can use this command on Linux (note echo -n, so the trailing newline is not included in the encoded data):
curl --location --request POST 'http://localhost:8091/v1/projects/my-project/topics/transactions:publish' \
  --header 'Content-Type: application/json' \
  --data-raw "{\"messages\":[{\"data\":\"$(echo -n "{\"foo\":\"baz\"}" | base64 -)\",\"attributes\":{}}]}"
Yes, the backslashes are tedious...
I don't know how to do this in Postman.
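If the curl escaping gets annoying, the same request is easy to script. A minimal Python sketch, assuming the emulator endpoint from the question and the third-party requests library:

import base64
import json

import requests

# Emulator endpoint from the question; no auth token is needed locally.
url = "http://localhost:8091/v1/projects/my-project/topics/transactions:publish"

# Pub/Sub expects the message data base64-encoded, not encrypted.
payload = json.dumps({"foo": "baz"}).encode("utf-8")
data = base64.b64encode(payload).decode("ascii")

body = {"messages": [{"data": data, "attributes": {}}]}
resp = requests.post(url, json=body)
print(resp.status_code, resp.json())  # expect 200 and the published messageIds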

How to retrieve the current listeners from a wowza cloud live stream via api?

We are using Wowza Streaming Cloud to run a weekly live streaming event. Is there a way to get the current listener count as live data from the API?
We found two endpoints, but they appear to be equally dysfunctional:
https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/y7tm2dfl/live leads to
{
  "meta": {
    "status": 403,
    "code": "ERR-403-RecordUnaccessible",
    "title": "Record Unaccessible Error",
    "message": "The requested resource isn't accessible.",
    "description": ""
  },
  "request_id": "def6744dc2d7a609c61f488560b80019",
  "request_timestamp": "2020-03-27T19:54:14.443Z"
}
https://api.cloud.wowza.com/api/v1.4/usage/viewer_data/stream_targets/y7tm2dfl leads to
{
  "meta": {
    "status": 404,
    "code": "ERR-404-RouteNotFound",
    "title": "Route Not Found Error",
    "message": "The requested endpoint couldn't be found.",
    "description": ""
  },
  "request_id": "11dce4349e0b97011820a39032d9664a",
  "request_timestamp": "2020-03-27T19:56:01.637Z"
}
y7tm2dfl is one of the two stream target ids we get from calling https://api.cloud.wowza.com/api/v1.4/live_streams/nfpvspdh/stats
Is this the right way? According to this question, the data might only be available with a delay of 2 hours...
Does anybody know of something that can actually count as live data?
Thanks a lot!
From Wowza Support:
The below endpoint is the correct one to use for near realtime view counts:
curl -H "wsc-api-key: ${WSC_API_KEY}" \
-H "wsc-access-key: ${WSC_ACCESS_KEY}" \
-H "Content-Type: application/json" \
-X "GET" \
"https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/y7tm2dfl/live"
It appears this stream target "y7tm2dfl" is an Akamai push, which takes two or more hours before results are available. You'll need to create a new stream target that uses Fastly to take advantage of the near-realtime stats.
https://www.wowza.com/docs/add-and-manage-stream-targets-in-wowza-streaming-cloud#add-a-wowza-cdn-on-fastly-target-for-hls-playback
This will retrieve the "Current Unique Viewers", which is defined as "the number of unique viewers for the stream in the last 90 seconds". It is only available with Fastly stream targets in API v1.4.
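To keep a live count on screen during the event, you could poll that endpoint. A small Python sketch, assuming a Fastly-based stream target id and the two API keys from your Wowza account (the endpoint and headers are the ones from the curl call above):

import os
import time

import requests

# Keys come from your Wowza Streaming Cloud account; the stream target id is a placeholder.
HEADERS = {
    "wsc-api-key": os.environ["WSC_API_KEY"],
    "wsc-access-key": os.environ["WSC_ACCESS_KEY"],
    "Content-Type": "application/json",
}
URL = "https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/y7tm2dfl/live"

while True:
    resp = requests.get(URL, headers=HEADERS)
    resp.raise_for_status()
    # Inspect resp.json() for the exact field names returned for your account.
    print(resp.json())
    time.sleep(90)  # "Current Unique Viewers" covers the last 90 seconds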

Using HyperLedger Fabric with C++ Application

So I am considering HyperLedger Fabric to use with an application I have written in C++. From my understanding, the interactions, i.e. posting and retrieving data, are all done in chaincode, and in all of the examples I have seen this is invoked using the CLI interface docker container.
I simply want to be able to store data produced by my application on a blockchain.
My question is: how do I invoke the chaincode externally? Surely this is something that can be done. I saw that there was a REST SDK, but it is no longer supported, so I don't want to go near it, to be honest. What other options are available?
Thanks!
There are two official SDKs you can try out:
Fabric Java SDK
Node JS SDK
As correctly mentioned by @Ajaya Mandal, you can use the SDKs to automate the invoking process. For example, you can start the Node app as written in app.js of the balance transfer example and hit the API as shown in the ./testAPI.sh file.
echo "POST invoke chaincode on peers of Org1 and Org2"
echo
VALUES=$(curl -s -X POST \
http://localhost:4000/channels/mychannel/chaincodes/mycc \
-H "authorization: Bearer $ORG1_TOKEN" \
-H "content-type: application/json" \
-d "{
\"peers\": [\"peer0.org1.example.com\",\"peer0.org2.example.com\"],
\"fcn\":\"move\",
\"args\":[\"a\",\"b\",\"10\"]
}")
Here you can add your arguments and pass them as you wish. You can use this thread to see how to send an HTTP request from C++.
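For reference, this is what that invoke request looks like as a standalone script. A minimal Python sketch of the same HTTP call, assuming the balance-transfer app is running on localhost:4000 and you already obtained a bearer token from its enroll endpoint; from C++, the equivalent request can be made with a library such as libcurl:

import requests

ORG1_TOKEN = "..."  # placeholder: token returned by the app's enroll endpoint

url = "http://localhost:4000/channels/mychannel/chaincodes/mycc"
headers = {"authorization": f"Bearer {ORG1_TOKEN}"}
body = {
    "peers": ["peer0.org1.example.com", "peer0.org2.example.com"],
    "fcn": "move",
    "args": ["a", "b", "10"],
}

# Same request the testAPI.sh curl call makes.
resp = requests.post(url, headers=headers, json=body)
print(resp.status_code, resp.text)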

Google Vision API request size limitation (text detection)

I'm using the Google Vision API via curl (the image is sent as a base64-encoded payload within JSON). I only get correct results back when the request sent via curl is under 16k or so. As soon as it's over ~16k I get no response at all.
Exactly the same request with a smaller image works fine.
I have added the request over 16k to pastebin; its structure is:
{
  "requests": [
    {
      "image": {
        "content": "...base64..."
      },
      ....
    }
  ]
}
Failing request is here:
https://pastebin.com/dl/vL4Ahfw7
I could only find a 20MB limitation in the docs (https://cloud.google.com/vision/docs/supported-files?hl=th), but nothing that explains the weird issue I'm seeing. Thanks.

SOAP vs REST in a non-CRUD and stateless environment

Pretend I am building a simple image-processing API. This API is completely stateless and only needs three items in the request: an image, the image format, and an authentication token.
Upon receipt of the image, the server merely processes the image and returns a set of results.
Ex: I see five faces in this image.
Would this still work with a REST-based API? Should it be used with a REST-based API?
Most of the examples I have seen comparing REST and SOAP have been purely CRUD-based, so I am slightly confused about how the two compare in a scenario such as this.
Any help would be greatly appreciated; although this question seems quite broad, I have yet to find a good answer explaining this.
REST is not about CRUD. It is about resources. So you should ask yourself:
What are my resources?
One answer could be:
An image processing job is a resource.
Create a new image processing job
To create a new image processing job, make an HTTP POST to a collection of jobs.
POST /jobs/facefinderjobs
Content-Type: image/jpeg
The body of this POST would be the image.
The server would respond:
201 Created
Location: /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
Here 03125EDA-5044-11E4-98C5-26218ABEA664 is the ID of the job assigned by the server.
Retrieve the status of the job
The client now wants to get the status of the job:
GET /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
If the job is not finished, the server could respond:
200 OK
Content-Type: application/json
{
  "id": "03125EDA-5044-11E4-98C5-26218ABEA664",
  "status": "processing"
}
Later, the client asks again:
GET /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
Now the job is finished and the response from the server is:
200 OK
Content-Type: application/json
{
  "id": "03125EDA-5044-11E4-98C5-26218ABEA664",
  "status": "finished",
  "faces": 5
}
The client would parse the JSON and check the status field. Once it is finished, it can read the number of detected faces from the faces field.
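To make the flow concrete, a client for this hypothetical API could look like the following Python sketch (the host name is invented; the paths, headers, and field names are the ones defined above):

import time

import requests

BASE = "https://api.example.com"  # placeholder host

# Create the job: POST the raw image bytes as the request body.
with open("photo.jpg", "rb") as f:
    resp = requests.post(
        f"{BASE}/jobs/facefinderjobs",
        headers={"Content-Type": "image/jpeg"},
        data=f,
    )
assert resp.status_code == 201
job_url = BASE + resp.headers["Location"]  # e.g. /jobs/facefinderjobs/<id>

# Poll the job resource until the server reports it finished.
while True:
    job = requests.get(job_url).json()
    if job["status"] == "finished":
        print("Faces found:", job["faces"])
        break
    time.sleep(1)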