I am using a microphone that records sound through a browser, converts it into a file and sends the file to a Java server. My Java server then sends the file to the Cloud Speech API and gives me the transcription. The problem is that the transcription takes a long time (around 3.7 seconds for 2 seconds of dialog).
So I would like to speed up the transcription. The first thing to do would be to stream the data, so that the transcription starts at the beginning of the recording. The problem is that I don't really understand the API. For instance, if I want to transcribe my audio stream from the source (browser/microphone), I would need some kind of JS API, but I can't find anything I can use in a browser (we can't use Node like this, can we?).
Otherwise I would need to stream my data from my JS to my Java server (not sure how to do that without corrupting the data...) and then push it through streamingRecognizeFile from here: https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/speech/cloud-client/src/main/java/com/example/speech/Recognize.java
But it takes a file as input, so how am I supposed to use it? I cannot really tell the system whether I have finished the recording or not... How will it know that it has reached the end of the stream?
I would like to create something in my web browser just like the Google demo here:
https://cloud.google.com/speech/
I think there is some fundamental stuff I do not understand about the way to use the streaming API. If someone could explain a bit how I should go about this, it would be awesome.
Thank you.
Google "Speech-to-Text typically processes audio faster than real-time, processing 30 seconds of audio in 15 seconds on average" [1]. You can use Google APIs Explorer to test exactly how long your each request would take [2].
To speed up the transcribing you may try to add recognition metadata to your request [3]. You can provide phrase hints if you are aware of the context of the speech [4]. Or use enhanced models to use special set of machine learning models [5]. All these suggestions would improve the accuracy and might have effects on transcribing speed.
When using the streaming recognition, in config you can set singleUtterance option to True. This will detect if user pause speaking and cease the recognition. If not streaming request will continue until to the content limit, which is 1 minute of audio length for streaming request [6].
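For reference, here is a minimal sketch of a streaming request with the Node.js client (@google-cloud/speech) that combines phrase hints and singleUtterance; the same options exist in the Java client linked in the question. The encoding, sample rate, phrase list and the someAudioSource stream are assumptions for illustration only.

const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

const request = {
  config: {
    encoding: 'LINEAR16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
    speechContexts: [{ phrases: ['help', 'support'] }], // hypothetical phrase hints
  },
  interimResults: true,   // get partial transcripts while the user is speaking
  singleUtterance: true,  // stop after the speaker pauses
};

const recognizeStream = client
  .streamingRecognize(request)
  .on('error', console.error)
  .on('data', (data) => {
    if (data.results[0] && data.results[0].alternatives[0]) {
      console.log('Transcript:', data.results[0].alternatives[0].transcript);
    }
  });

// someAudioSource is a placeholder for the raw audio forwarded from the browser
someAudioSource.pipe(recognizeStream);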
I want to stream the microphone audio from the web browser to AWS S3.
I got it working with:
this.recorder = new window.MediaRecorder(...); // the MediaStream from getUserMedia goes here
this.recorder.addEventListener('dataavailable', (e) => {
  this.chunks.push(e.data); // collect each recorded chunk in memory
});
and then, when the user clicks stop, I upload the chunks as new Blob(this.chunks, { type: 'audio/wav' }) in multiple parts to AWS S3.
But the problem is that if the recording is 2-3 hours long, the upload might take exceptionally long, and the user might close the browser before the recording finishes uploading.
Is there a way to stream the web audio directly to S3 while the recording is still in progress?
Things I tried but couldn't get a working example for:
Kinesis Video Streams: it looks like it is only for real-time streaming between multiple clients, and I would have to write my own consumer that then saves the stream to S3.
I thought about using Kinesis Data Firehose, but I couldn't find any client-side data producer for the browser.
I even tried to find resources on AWS Lex or AWS IVS, but I think they would be over-engineering for my use case.
Any help will be appreciated.
You can set the timeslice parameter when calling start() on the MediaRecorder. The MediaRecorder will then emit chunks which roughly match the length of the timeslice parameter.
You could upload those chunks using S3's multipart upload feature as you already mentioned.
Please note that you need a library like extendable-media-recorder if you want to record a WAV file since no browser supports that out of the box.
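Here is a minimal sketch of that combination, assuming the AWS SDK for JavaScript v3 in the browser, a hypothetical bucket and key, credentials already configured (e.g. via Cognito), and an async context. S3 multipart uploads require every part except the last to be at least 5 MB, so the MediaRecorder chunks are buffered before each UploadPart call.

import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' }); // plus credentials, e.g. from Cognito
const Bucket = 'my-recordings-bucket';   // hypothetical bucket
const Key = 'recordings/session-1.wav';  // hypothetical key

const { UploadId } = await s3.send(new CreateMultipartUploadCommand({ Bucket, Key }));

const MIN_PART_SIZE = 5 * 1024 * 1024; // S3 requires >= 5 MB for all parts except the last
const parts = [];
let buffered = [];
let bufferedBytes = 0;
let partNumber = 1;

async function flushPart() {
  if (bufferedBytes === 0) return;
  const { ETag } = await s3.send(new UploadPartCommand({
    Bucket, Key, UploadId,
    PartNumber: partNumber,
    Body: new Blob(buffered),
  }));
  parts.push({ ETag, PartNumber: partNumber });
  partNumber += 1;
  buffered = [];
  bufferedBytes = 0;
}

// recorder is the MediaRecorder created as in the question (this.recorder)
recorder.addEventListener('dataavailable', async (e) => {
  buffered.push(e.data);
  bufferedBytes += e.data.size;
  if (bufferedBytes >= MIN_PART_SIZE) await flushPart();
});

recorder.addEventListener('stop', async () => {
  await flushPart(); // the final part may be smaller than 5 MB
  await s3.send(new CompleteMultipartUploadCommand({
    Bucket, Key, UploadId,
    MultipartUpload: { Parts: parts },
  }));
});

recorder.start(10000); // emit a chunk roughly every 10 seconds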
I am new to AWS services, and we want to build a simple demo that detects a special word and then (1) triggers an action and (2) responds (as speech during the call).
For example, if the user says "Help", I want to reply "OK" and perform an operation (an AWS Lambda function).
We're using Twilio, and Twilio should stream the audio.
As I understand it, I have two options, Amazon Lex and Transcribe, where Lex is for bots and Transcribe just transcribes the speech and can't get involved in the conversation.
So the questions are:
What services should I use to trigger an action when the special word is recognized AND to get involved in the conversation?
Can I stream the call directly to an AWS service via Twilio?
Edit
To be clearer: the communication will be between two people in real time, and I want to interject during their call. When someone says "Help", I want to add a bot voice to the conversation that says "OK", for example:
[Person 1]: Hi, how are you
[Person 2]: HELP ...
[BOT]: OK (like a third person in a conference call..).
I am not fully clear on the interaction taking place with the user before they interject with "Help". Are they listening to a bot, a media file, TTS, or communicating with another person in real time?
For real-time analysis, you would need to use Twilio Media Streams, which streams the voice conversation to a service that can then convert the speech to text in near real time, look for keywords, and programmatically perform some action based on those keywords.
An example of using Twilio Media Streams with Lex:
Use Amazon Lex as a conversational interface with Twilio Media Streams
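To make that architecture more concrete, here is a minimal sketch of the receiving side of a Twilio Media Stream, using the Node.js ws package; the port and the keyword handling are assumptions. Twilio sends JSON messages over the WebSocket, and the "media" events carry base64-encoded 8 kHz mu-law audio that you would forward to Lex/Transcribe and scan for the keyword.

const WebSocket = require('ws');

// Twilio's <Stream> TwiML verb is pointed at this WebSocket endpoint
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    const msg = JSON.parse(message);
    if (msg.event === 'start') {
      console.log('Media stream started for call', msg.start.callSid);
    } else if (msg.event === 'media') {
      // base64-encoded 8 kHz mu-law audio chunk
      const audioChunk = Buffer.from(msg.media.payload, 'base64');
      // forward audioChunk to the speech service here; when "help" is
      // detected, invoke the Lambda / inject the bot response into the call
    } else if (msg.event === 'stop') {
      console.log('Media stream ended');
    }
  });
});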
I’ve created a Lex bot that is integrated with an Amazon Connect workflow. The bot is invoked when the user calls the phone number specified in the Connect instance, and the bot itself invokes a Lambda function for initialisation & validation and fulfilment. The bot asks several questions that require the caller to provide simple responses. It all works OK; so far so good.
I would like to add a final question that asks the caller for their comments. This could be any spoken text, including non-English words. I would like to be able to capture this Comment slot value as an audio stream or file, perhaps for storage in S3, with the goal of emailing a call centre administrator and providing the audio file as an MP3 or WAV attachment. Is there any way of doing this in Lex?
I’ve seen mention of ‘User utterance storage’ here: https://aws.amazon.com/blogs/contact-center/amazon-connect-with-amazon-lex-press-or-say-input/, but there’s no such setting visible in my Lex console.
I’m aware that Connect can be configured to store a recording in S3, but I need to be able to access the recording for the current phone call from within the Lambda function in order to attach it to an email. Any advice on how to achieve this, or suggestions for a workaround, would be much appreciated.
Thanks
Amazon Connect call recording only records conversations once an agent accepts the call; currently Connect cannot record voice within contact flows. So getting the raw audio from Connect is not possible.
However, it looks like you can get it from Lex if you develop an external application (it could be a Lambda function) that retrieves utterances: https://docs.aws.amazon.com/lex/latest/dg/API_GetUtterancesView.html
I also do not see an option to enable or disable user utterance storage in Lex, but that makes me think that, by default, all utterances are recorded: https://docs.aws.amazon.com/lex/latest/dg/API_DeleteUtterances.html
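As a rough sketch of such an external application (here with the AWS SDK for JavaScript v2; the bot name and region are assumptions), the GetUtterancesView call would look roughly like this. Note that it returns aggregated utterance statistics per bot version rather than a per-call recording.

const AWS = require('aws-sdk');
const lexModels = new AWS.LexModelBuildingService({ region: 'us-east-1' });

const params = {
  botName: 'MyConnectBot',   // hypothetical bot name
  botVersions: ['$LATEST'],
  statusType: 'Detected',    // utterances the bot understood ('Missed' for the rest)
};

lexModels.getUtterancesView(params).promise()
  .then((data) => {
    // one utterance list per bot version, each with what users actually said
    data.utterances.forEach((list) => {
      console.log(list.botVersion, list.utterances);
    });
  })
  .catch(console.error);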
I'm testing a new SWF workflow, and I've got an activity that makes a RESTful call out to another service. The problem is, I can see through logging that the actual call takes less than a second to complete, but the activity always times out in SWF (START_TO_CLOSE of 5 minutes). To be more specific, the RESTful call is a list call, and when I limit the batch size to a small number, the activity completes and moves on very quickly. But at some seemingly arbitrary threshold, it chokes completely.
Does anyone have any insight into this? I've read that SWF calls have a size limitation of 1 MB; does anyone know how to find the size of the data my workers are trying to pass to SWF?
After some remote debugging, it turns out the response from the task is too big and the activity is failing silently. The failure occurs when the framework tries to report the response back to SWF, and the SDK calls RespondActivityTaskCompleted. That API has a length restriction on the internal result param:
Length Constraints: Maximum length of 32768.
This results in a validation error; the exception is swallowed internally, so the Activity simply times out.
I wouldn't recommend using activity input and output parameters for passing large data sets. SWF is an orchestration technology, not a data-passing one. The standard workarounds are:
Store the result in a separate store (S3, for example) and pass a reference to it (see the sketch after this list).
Cache the result locally on a machine and route all subsequent activities to the same host so that they have access to the cached result. See the fileprocessing sample for the details of the routing approach.
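A minimal sketch of the first workaround (shown here in JavaScript with the AWS SDK v3, though the same idea applies to the Java Flow Framework; the bucket, key scheme and helper names are hypothetical): the activity writes the large result to S3 and returns only a small reference, which easily stays under the 32768-character limit.

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

// Hypothetical activity implementation
async function listItemsActivity(input) {
  const items = await callRestService(input); // the large REST response (placeholder helper)

  const key = `swf-results/${input.workflowId}/list-items.json`; // hypothetical key scheme
  await s3.send(new PutObjectCommand({
    Bucket: 'my-workflow-results',  // hypothetical bucket
    Key: key,
    Body: JSON.stringify(items),
  }));

  // Only this small reference is reported back to SWF
  return { resultS3Key: key };
}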
By the way, have you checked out Cadence, which is an open-source version of SWF with much better client-side libraries?
I'm a newbie to GCP.
While reading the documentation of the Google Speech API, it says that "Asynchronous Recognition (REST and gRPC) sends audio data to the Speech API and initiates a Long Running Operation. Using this operation, you can periodically poll for recognition results."
But what does "a Long Running Operation" actually mean? And what's the difference between the processes of synchronous and asynchronous recognition?
I've searched on the internet and found an answer about this: https://www.quora.com/What-is-the-difference-between-synchronous-and-asynchronous-speech-recognition
But I still can't get the idea. Can anyone explain more specifically?
I'd really appreciate your answers :)
Asynchronous cloud requests usually return an ID indicating that the request has been enqueued for processing; later you can use that ID to check the status and retrieve the results when it is done.
Synchronous requests return the results as part of the response, but they may block for longer amounts of time.
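The same distinction is visible in the client libraries; here is a minimal sketch with the Node.js client (@google-cloud/speech), assuming a hypothetical gs:// URI, FLAC audio, and an async context.

const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

const request = {
  audio: { uri: 'gs://my-bucket/audio.flac' }, // hypothetical file
  config: { encoding: 'FLAC', sampleRateHertz: 16000, languageCode: 'en-US' },
};

// Synchronous: blocks until the transcript comes back (audio up to ~60 s)
const [syncResponse] = await client.recognize(request);

// Asynchronous: immediately returns a long-running operation...
const [operation] = await client.longRunningRecognize(request);
// ...which you poll (or simply await) for the final result
const [asyncResponse] = await operation.promise();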
You can use the gcloud command-line tool to try both. Synchronous requests work for audio shorter than 60 seconds:
gcloud ml speech recognize AUDIO_FILE ...
and asynchronous requests for audio that is longer than 60 seconds:
gcloud ml speech recognize-long-running AUDIO_FILE ...
The latter, instead of a transcript, will return an OPERATION_ID, with which you can later run
gcloud ml speech operations describe OPERATION_ID
to obtain results.
TIP: You can add the --log-http flag to see which API requests gcloud is making, to get more insight into what is going on at the API level.