How to use speech recognition on a video file? - C++

How can I code a speech recognition engine (using the Microsoft Speech SDK) to "listen" to a video file and save the detected text to a file?

This is very similar to this question and has a very similar answer. You need to separate out the audio portion, convert it to WAV format, and send it to an inproc recognizer.
However, it has the same problems that I described before (requires training, assumes a single voice, and assumes the microphone is close to the speaker). If those assumptions hold, then you can likely get reasonably good results. If they don't (e.g., you're trying to transcribe a TV show, or worse, some sort of camcorder audio), then the results will likely be unsatisfactory.
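For illustration, here is a minimal sketch of that pipeline with SAPI 5.x in C++, assuming the audio has already been extracted to a PCM WAV file (e.g., with ffmpeg); the file names are placeholders and error handling is omitted:

    // Feed a WAV file to an in-process SAPI recognizer and dump dictation
    // results to a text file. Sketch only: check HRESULTs in real code.
    #include <sapi.h>
    #include <sphelper.h>   // SpBindToFile, CSpEvent, CSpDynamicString
    #include <atlbase.h>    // CComPtr
    #include <cstdio>

    int main()
    {
        ::CoInitialize(NULL);
        {
            CComPtr<ISpRecognizer>  recognizer;
            CComPtr<ISpRecoContext> context;
            CComPtr<ISpRecoGrammar> grammar;
            CComPtr<ISpStream>      stream;

            recognizer.CoCreateInstance(CLSID_SpInprocRecognizer);

            // Bind the extracted WAV file as the recognizer's audio input.
            SpBindToFile(L"audio.wav", SPFM_OPEN_READONLY, &stream);
            recognizer->SetInput(stream, TRUE);

            recognizer->CreateRecoContext(&context);
            context->SetNotifyWin32Event();
            context->SetInterest(SPFEI(SPEI_RECOGNITION) | SPFEI(SPEI_END_SR_STREAM),
                                 SPFEI(SPEI_RECOGNITION) | SPFEI(SPEI_END_SR_STREAM));

            // A plain dictation grammar gives free-form transcription.
            context->CreateGrammar(0, &grammar);
            grammar->LoadDictation(NULL, SPLO_STATIC);
            grammar->SetDictationState(SPRS_ACTIVE);

            // Pump recognition events until the stream ends.
            FILE* out = fopen("transcript.txt", "w");
            bool done = false;
            while (!done && context->WaitForNotifyEvent(INFINITE) == S_OK) {
                CSpEvent evt;
                while (evt.GetFrom(context) == S_OK) {
                    if (evt.eEventId == SPEI_RECOGNITION) {
                        CSpDynamicString text;
                        evt.RecoResult()->GetText(SP_GETWHOLEPHRASE, SP_GETWHOLEPHRASE,
                                                  TRUE, &text, NULL);
                        fwprintf(out, L"%s\n", (LPCWSTR)text);
                    } else if (evt.eEventId == SPEI_END_SR_STREAM) {
                        done = true;
                    }
                }
            }
            fclose(out);
        }
        ::CoUninitialize();
        return 0;
    }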

Related

YouTube's auto captioning produces better results than the Google Speech to Text API (model: video, useEnhanced: true). How is this possible?

Here are my settings for the Google Speech to Text API.
Here is the output file of the Speech to Text API: https://justpaste.it/speechtotext2
Here is the output file of YouTube's auto captioning: https://justpaste.it/ytautotranslate
This is the video link: https://www.youtube.com/watch?v=IOMO-kcqxJ8&ab_channel=SoftwareEngineeringCourses-SECourses
This is the audio file of the video provided to the Google Speech API: https://storage.googleapis.com/text_speech_furkan/machine_learning_lecture_1.flac
Here are the time-aligned SRT files:
YouTube's SRT: https://drive.google.com/file/d/1yPA1m0hPr9VF7oD7jv5KF7n1QnV3Z82d/view?usp=sharing
Google Speech to Text API's SRT (timing assigned by YouTube): https://drive.google.com/file/d/1AGzkrxMEQJspYenCbohUM4iuXN7H89wH/view?usp=sharing
I compared some sentences, and YouTube's auto captioning is definitely better.
For example:
Google Speech to Text : Represent the **doctor** representation is one of the hardest part of computer AI you will learn about more about that in the future lessons.
What does this mean? Do you think this means that we are not just focused on behavior and **into doubt**. It is more about the reasoning when a human takes an action. There is a reasoning behind it.
YouTube's auto captioning : represent the **data** representation is one of the hardest part of computer ai you will we will learn more about that in the future lessons
what does this mean do you think this means that we are not just focused on behavior and **input** it is more about the reasoning when a human takes an action there is a reasoning behind it
I checked many cases, and YouTube is much better at guessing the correct words. How is this even possible?
This is the command I used to extract the audio from the video: ffmpeg -i "input.mkv" -af aformat=s16:48000 output.flac
Both YouTube's automatic captions and the transcriptions of the Speech to Text API are generated by machine learning algorithms, so the quality of the transcription may vary according to several factors.
It is important to note that the Speech to Text API uses machine learning models for its transcriptions; these are improved over time, and results can vary according to the input file and the request configuration. One way of helping Google's transcription models is by enabling data logging: this allows Google to collect data from your audio transcription requests, which helps improve the machine learning models it uses for recognizing speech, including the enhanced models.
Additionally, in the request configuration of the Speech to Text API, you can specify the RecognitionConfig settings. This parameter contains the encoding, sampleRateHertz, languageCode, maxAlternatives, profanityFilter and speechContext fields, and every one of them plays an important role in the accuracy of the transcription of the file.
Specifically for FLAC audio files, lossless compression helps the quality of the audio provided, since there is no degradation in quality of the original digital samples. FLAC uses a compression level parameter from 0 (fastest) to 8 (smallest file size).
Also, the Speech to Text API offers different ways to improve the accuracy of the transcription, such as:
- Speech adaptation: this feature allows you to specify words and/or phrases that STT should recognize more frequently in your audio data.
- Speech adaptation boost: this feature allows you to add numerical weights to words and/or phrases according to how frequently they should be recognized in your audio data.
- Phrase hints: send a list of words and phrases that provide hints to the speech recognition task.
These features might help the Speech to Text API recognize your audio files more accurately.
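For illustration, a recognize request that combines these settings might look like the sketch below (the bucket URI and phrases are placeholders, and the boost value needs tuning for your audio):

    {
      "config": {
        "encoding": "FLAC",
        "sampleRateHertz": 48000,
        "languageCode": "en-US",
        "model": "video",
        "useEnhanced": true,
        "speechContexts": [{
          "phrases": ["data representation", "machine learning"],
          "boost": 10.0
        }]
      },
      "audio": {
        "uri": "gs://your-bucket/machine_learning_lecture_1.flac"
      }
    }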
Finally, please refer to the Speech to Text best practices to improve the transcription of your audio files; these recommendations are designed for greater efficiency and accuracy, as well as reasonable response times from the API.

Creating custom voice commands (GNU/Linux)

I'm looking for advice for a personal project.
I'm attempting to create software for defining customized voice commands. The goal is to let the user (me) record a short audio clip (2-3 seconds) to define a command/macro. Then, when the user speaks the same phrase again, the command/macro is executed.
The software must be able to detect a command in less than 1 second of processing time on a low-cost computer (a Raspberry Pi, for example).
I have already searched in two directions:
- Speech recognition (CMU Sphinx, Julius, Simon): there are good open-source solutions, but they often need large database files, and speech recognition is not really what I'm attempting to do; it could also consume too much processing power for such a small feature.
- Audio fingerprinting (Chromaprint -> http://acoustid.org/chromaprint): this seems to be almost what I'm looking for. The principle is to create a fingerprint from raw audio data, then compare fingerprints to determine whether they match. However, this kind of software/library seems to be designed for song identification (like the well-known smartphone apps): I'm trying to configure a good "comparator", but I think I'm going down the wrong path.
Do you know of any dedicated software, or piece of code, that does something similar?
Any suggestion would be appreciated.
I had a more or less similar project in which I intended to send voice commands to a robot. Speech recognition software is too complicated for such a task. I used an FFT implementation in C++ to extract the Fourier components of the sampled voice, and then created a histogram of the major frequencies (the frequencies at which the target voice command has the highest amplitudes). I tried two approaches:
- Comparing the histogram of the given voice command with those saved in memory to identify the most probable command.
- Using a Support Vector Machine (SVM) to train a classifier to distinguish voice commands. I used LibSVM, and the results were considerably better than with the first approach. However, one problem with the SVM method is that you need a rather large data set for training. Another problem is that when an unknown voice is given, the classifier will output a command anyway (which is obviously a wrong detection). This can be avoided with the first approach, where I had a threshold on the similarity measure; a sketch of that threshold-based matching follows below.
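For illustration, here is a minimal C++ sketch of the first approach, assuming you have already computed a normalized histogram of dominant-frequency magnitudes for each recording (the bin count, similarity metric and threshold are made-up choices to tune on real data):

    // Compare an input histogram against stored command templates and
    // reject anything that does not clear a similarity threshold.
    #include <cmath>
    #include <string>
    #include <vector>

    const int    kBins      = 64;     // histogram resolution (assumption)
    const double kThreshold = 0.85;   // reject below this; tune on real data

    // Cosine similarity between two histograms of length kBins.
    double similarity(const std::vector<double>& a, const std::vector<double>& b)
    {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < kBins; ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12);
    }

    struct Command { std::string name; std::vector<double> histogram; };

    // Returns the best-matching command, or "" when no template is close
    // enough; this rejection step is what the SVM could not do on its own.
    std::string match(const std::vector<double>& input,
                      const std::vector<Command>& commands)
    {
        std::string best;
        double bestScore = kThreshold;
        for (size_t i = 0; i < commands.size(); ++i) {
            double s = similarity(input, commands[i].histogram);
            if (s > bestScore) { bestScore = s; best = commands[i].name; }
        }
        return best;
    }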
I hope this helps you to implement your own voice activated software.
Song fingerprinting is not a good idea for this task, because command timings can vary and fingerprinting expects an exact time match. However, it's very easy to implement matching with the DTW algorithm over time series of features extracted with Sphinxbase, the CMUSphinx base library. See the Wikipedia entry about DTW for details; a minimal DTW sketch follows the links below.
http://en.wikipedia.org/wiki/Dynamic_time_warping
http://cmusphinx.sourceforge.net/wiki/download
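For illustration, a minimal C++ sketch of DTW over extracted feature frames (e.g., MFCC vectors produced with Sphinxbase; the feature extraction itself is assumed done, and the length normalization is just one common choice):

    // Classic O(n*m) dynamic time warping distance between two sequences
    // of feature vectors; smaller means more similar.
    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    typedef std::vector<std::vector<double> > Sequence;  // frames x features

    // Euclidean distance between two feature frames.
    static double frameDist(const std::vector<double>& a,
                            const std::vector<double>& b)
    {
        double d = 0.0;
        for (size_t i = 0; i < a.size(); ++i) {
            double diff = a[i] - b[i];
            d += diff * diff;
        }
        return std::sqrt(d);
    }

    double dtwDistance(const Sequence& s, const Sequence& t)
    {
        const size_t n = s.size(), m = t.size();
        const double INF = std::numeric_limits<double>::infinity();
        std::vector<std::vector<double> > D(n + 1,
                                            std::vector<double>(m + 1, INF));
        D[0][0] = 0.0;

        for (size_t i = 1; i <= n; ++i)
            for (size_t j = 1; j <= m; ++j)
                D[i][j] = frameDist(s[i - 1], t[j - 1]) +
                          std::min(std::min(D[i - 1][j], D[i][j - 1]),
                                   D[i - 1][j - 1]);

        return D[n][m] / (n + m);  // normalize so lengths stay comparable
    }

A command is then recognized by computing dtwDistance between the input and each stored template and taking the smallest value, again with a rejection threshold for unknown input.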

Convert Movie to OpenNI *.oni video

The Kinect OpenNI library uses a custom video file format to store videos that contain rgb+d information. These videos have the extension *.oni. I am unable to find any information or documentation whatsoever on the ONI video format.
I'm looking for a way to convert a conventional RGB video to a *.oni video. The depth channel can be left blank (i.e., zeroed out). For example purposes, I have an MPEG-4 encoded .mov file with audio and video channels.
There are no restrictions on how this conversion must be made, I just need to convert it somehow! E.g., ImageMagick, ffmpeg and mencoder are all OK, as is custom conversion code in C/C++ etc.
So far, all I can find is one C++ conversion utility in the OpenNI sources. From the looks of it, though, this converts from one *.oni file to another. I've also managed to find a C++ program by a PhD student that converts images from an academic database into a *.oni file. Unfortunately the code is in Spanish, not one of my native languages.
Any help or pointers much appreciated!
EDIT: As my use case is a little odd, some explanation may be in order. The OpenNI drivers (in my case I'm using the excellent Kinect for Matlab library) allow you to specify a *.oni file when creating the Kinect context. This allows you to emulate having a real Kinect attached that is receiving video data - useful when you're testing/developing code (you don't need to have the Kinect attached to do this). In my particular case, we will be using a Kinect in the production environment (process control in a factory setting), but during development all I have is a video file :) Hence wanting to convert it to a *.oni file. We aren't using the depth channel at the moment, hence not caring about it.
I don't have a complete answer for you, but take a look at the NiRecordRaw and NiRecordSynthetic examples in OpenNI/Samples. They demonstrate how to create an ONI with arbitrary or modified data. See how MockDepthGenerator is used in NiRecordSynthetic -- in your case you will need MockImageGenerator.
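For illustration, a rough sketch of that pattern with the OpenNI 1.x C++ wrapper is below. The resolution, codec, frame-decoding step and exact mock-node calls are assumptions; cross-check them against the NiRecordSynthetic sample before relying on it:

    // Push self-decoded RGB24 frames through a mock image node into an ONI
    // recording. The depth channel is simply never added.
    // NOTE: sketch only; the SetData signature may need checking.
    #include <XnCppWrapper.h>
    #include <vector>

    int main()
    {
        xn::Context context;
        context.Init();

        // Mock nodes accept arbitrary data instead of reading a sensor.
        xn::MockImageGenerator mockImage;
        mockImage.Create(context, "MockImage");

        XnMapOutputMode mode = { 640, 480, 30 };  // XRes, YRes, FPS (assumed)
        mockImage.SetMapOutputMode(mode);
        mockImage.SetPixelFormat(XN_PIXEL_FORMAT_RGB24);

        xn::Recorder recorder;
        recorder.Create(context);
        recorder.SetDestination(XN_RECORD_MEDIUM_FILE, "output.oni");
        recorder.AddNodeToRecording(mockImage, XN_CODEC_JPEG);

        const XnUInt32 frameSize = 640 * 480 * 3;
        const XnUInt32 numFrames = 300;           // however long your video is
        std::vector<XnUInt8> frame(frameSize);

        for (XnUInt32 i = 0; i < numFrames; ++i) {
            // ...decode frame i of your movie into `frame` as RGB24 here
            // (e.g., with libavcodec)...
            mockImage.SetData(i + 1, (XnUInt64)i * 33333, frameSize, &frame[0]);
            recorder.Record();                    // write one recorded frame
        }

        recorder.Release();
        context.Release();
        return 0;
    }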
For more details you may want to ask in the openni-dev google group.
Did you look into this command and its associated documentation?
NiConvertXToONI --
NiConvertXToONI opens any recording, takes every node within it, and records it to a new ONI recording. It receives both the input file and the output file from the command line.
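Usage is simply the input and output files on the command line (the file names here are placeholders):

    NiConvertXToONI input_recording.oni output.oni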

Analysing audio data for attributes at time intervals

I've been wanting to play around with audio parsing for a while now but I haven't really been able to find the correct library for what I want to do.
I basically just want to parse through a sound file and get amplitudes/frequencies and other relevant information at certain times during the song (say, every 10 ms or so), so I can graph the data and see, for example, where the song speeds up a lot and where it gets really loud.
I've looked at OpenAL quite a bit, but it doesn't look like it provides this ability; other than that, I have not had much luck finding out where to start. If anyone has done this or used a library that can do it, a point in the right direction would be greatly appreciated. Thanks!
For parsing and decoding audio files I had good results with libsndfile, which runs on Windows/OSX/Linux and is open source (LGPL license). This library does not support mp3 (the author wants to avoid licensing issues), but it does support FLAC and Ogg/Vorbis.
If working with closed source libraries is not a problem for you, then an interesting option could be the Quicktime SDK from Apple. This SDK is available for OSX and Windows and is free for registered developers (you can register as an Apple developer for free as well). With the QT SDK you can parse all the file formats that the Quicktime Player supports, and that includes .mp3. The SDK gives you access to all the codecs installed by QuickTime, so you can read .mp3 files and have them decoded to PCM on the fly. Note that to use this SDK you have to have the free QuickTime Player installed.
As far as signal processing libraries go, I honestly can't recommend any, as I have written my own functions (for speech recognition, in case you are curious). There are a few open-source projects that seem interesting listed on this page.
I recommend that you start simple, for example by analyzing amplitude data, which is readily available from the PCM samples without any extra processing. Being able to visualize the data is very useful; I have found Audacity to be an excellent visualization tool, and since it is open source you can build your own tests inside it.
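As a concrete starting point, here is a minimal C++ sketch using libsndfile that prints the RMS amplitude of every 10 ms window, ready to be piped into a plotting tool (the window size and tab-separated output format are just assumptions):

    // Print "time<TAB>rms" for each 10 ms window of an audio file.
    // Works with any format libsndfile can read (WAV, FLAC, Ogg/Vorbis...).
    #include <cmath>
    #include <cstdio>
    #include <vector>
    #include <sndfile.h>

    int main(int argc, char** argv)
    {
        if (argc < 2) { std::fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        SF_INFO info = {};
        SNDFILE* file = sf_open(argv[1], SFM_READ, &info);
        if (!file) { std::fprintf(stderr, "cannot open %s\n", argv[1]); return 1; }

        const sf_count_t window = info.samplerate / 100;   // 10 ms of frames
        std::vector<float> buf(window * info.channels);

        double t = 0.0;
        sf_count_t n;
        while ((n = sf_readf_float(file, &buf[0], window)) > 0) {
            double sum = 0.0;
            for (sf_count_t i = 0; i < n * info.channels; ++i)
                sum += buf[i] * buf[i];
            std::printf("%.3f\t%.6f\n", t, std::sqrt(sum / (n * info.channels)));
            t += (double)n / info.samplerate;
        }

        sf_close(file);
        return 0;
    }

Frequency content takes one more step (an FFT over each window, e.g. with FFTW), but the windowing loop stays exactly the same.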
Good luck!

Video and audio file

Duplicate: audio and video file compressor
I would like to compress a WMV file of 2 MB or larger down to a 250 KB 3GP file for mobile devices.
Are there any great compressors for video or audio?
I'm a huge fan of ffmpeg. Find out what codec and resolution your mobile device wants. If you're lucky, H.264 will be supported.
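For example, a rough starting point might be the command below (the codec choices and bitrates are guesses; adjust them to what your device supports and to hit your size target):

    ffmpeg -i input.wmv -s qcif -vcodec h263 -b:v 128k -acodec aac -b:a 24k output.3gp

If the device does support H.264, swap in -vcodec libx264 and you can usually hold better quality at the same bitrate.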
You might have some trouble here. WMV is a container, not a codec, so we can't tell specifically the level of compression we're dealing with and what needs to be changed where, but it may be difficult to get such a dramatic reduction in filesize without making huge compromises, like decreasing the resolution of the video by several orders of magnitude. These compromises may be acceptable for mobile viewing, but there's no guarantee you'll be able to get that filesize down, especially if your file is encoded in a modern codec like H.264 or VC-1.
My first piece of advice is to try to locate a good wizard-like transcoder with a nice non-developer interface. Video compression is intense work; the power tools behind it (the same tools these wizard-like applications use to actually perform their work) are very complex, usually restricted to command lines, and take lots of practice and tweaking to get right. If your mobile device's vendor provides such utilities, for instance, you'll be much better off using them.
If you aren't able to locate such a utility, godspeed, and spend lots of time with mencoder's and ffmpeg's man pages and IRC rooms. It's not difficult per se; it just takes a lot of study and reading to get acceptable output, especially when you're going after the reductions you've mentioned. Good luck.