C++ video processing loop

I want to write an application that takes a video stream and, in a loop, processes video frames and fragments of audio.
I saw such a loop here:
I also saw some solutions on the forums using pipes, here:
Is there any library wrapper around ffmpeg/avlib where I could just write my callback, without tons of spaghetti code around it like here?

I would look into using OpenCV for this. It takes care of a lot of the problems with video streams for you and lets you process the individual frames.
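As a rough illustration (my own sketch, not part of the original answer), a minimal OpenCV capture-and-process loop could look like this; the file name is hypothetical, and note that OpenCV only hands you the video frames, not the audio:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.mp4");      // or cv::VideoCapture cap(0); for a camera
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))                 // grabs and decodes the next frame
    {
        // per-frame "callback" goes here, e.g. convert to grayscale
        cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
    }
    return 0;
}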

Here are some simple and easy-to-follow examples:
https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples
I'm assuming you are an experienced C++ programmer, so these examples will give you a good start (note that the examples are in C).

Related

adding "read aloud" feature to book app written in Cocos2D

I created a book app using Cocos2D and the Chipmunk physics engine, and I would like to add a "read aloud" feature to it.
So far I have found instructions, books, and tutorials on how to add a read-aloud feature when the book is created with iBooks Author (which I couldn't use due to some limitations), using EPUB3 and SMIL.
I also found a good tutorial from J. Shapiro on how to make a narrated book using AVSpeechSynthesizer. This helps, except that I would like to use a recorded voice rather than synthesized speech. I don't know whether this approach can be modified to do so.
I also know how it can be done in the Sprite Kit framework.
The only thing I couldn't find is how to add a "read aloud" feature to an app written using Cocos2D. Can it be done with SimpleAudioEngine, or can it be combined with some other engine (possibly from the Sprite Kit framework)?
I would very much appreciate any references, pointers, or tutorial links on how to add this feature.
Thanks in advance.
I would like to use recorded voice, rather than synthesized sound
Good. Add your voice-recording audio files (in caf, wav, or mp3 format) to the project and play them back at the appropriate time using:
[[SimpleAudioEngine sharedEngine] playEffect:@"someVoiceRecordingFile.wav"];
Define what "read aloud" means to you, because I find that a lot of terms, especially semi-vague ones like this, are used differently depending on who is using them.
When you say "read aloud" book, do you essentially mean a digital storybook that reads the story to you by simply playing narration audio? I've created dozens of these, and what you are asking for involves multiple steps depending on which features you want in your book. If you mean simply playing audio and nothing more, then yes, you can do that in Cocos2D using SimpleAudioEngine (as one option), but I assume you already knew that, which is why this question is a tad vague.
Either way, you probably wouldn't want to play narration as an effect, but rather stream it. To do that alongside background music, you'd stream the background music via the left channel and the narration via the right. You can easily add a method to SimpleAudioEngine to make this nice and neat. To get you started, something like this can be used to access the right channel:
CDLongAudioSource* sound = [[CDAudioManager sharedManager] audioSourceForChannel:kASC_Right];
if ([sound isPlaying])
{
[sound stop];
}
[sound load:fileName];
Also, use the proper settings and recommended formats for streaming audio, such as aifc (really, for all audio in general). Although I believe you can stream an mp3 without decompressing it first, the problem is timing. If you are using highlighted text or looping audio, then aifc is the better option. Personally, I've never had a reason to use mp3. Wav for narration is something I'd avoid, if only because of the file-size increase. If the mp3 is decompressed even for streaming (which I'm not sure about off the top of my head), then you'd have a huge spike in memory, which would be both highly unwanted and at times downright bad.
There are many other things that can go into it, but those are the basic first steps. If you want things like highlighted text, per-word animations, and so on, that will of course take more work, and you'd need to be comfortable with Cocos2D, Sprite Kit, or whatever you decide to use. I'll be doing a tutorial series on this one day soon, so I'll cover all of that there.
On the other hand, if you are talking about recording someone's voice and playing it back, i.e. a mother recording herself reading the story so her child can hear her voice whenever they use your app, then you'd simply record the audio as you would any other audio, save it to the device, and play it back when the page is displayed in the proper reading mode (or whatever you personally call it). One place to look is AVAudioRecorder, which is part of the AVFoundation framework. Simply Google "iOS audio recording" for examples if you need them.

How to add sound effects to PCM buffered audio in C++

I have an int16_t[] buffer with raw PCM audio data, and I want to apply some effects to it (like echo, reverb, gain...).
I thought SoX or something similar could do the trick for me, but SoX only works with files, and other libraries that support adding sound effects seem to apply them only while the sound is being played. My problem is that I want to apply the effects to the samples in my buffer without playing them.
I have never worked with audio, but reading about PCM data I have learned that I can apply gain by multiplying each sample value, for example. But I'm looking for a library, or relatively simple algorithms, that I can use directly on my buffer to apply sound effects.
I'm sure there are a lot of solutions to my problem out there if you know what to look for, but this is my first time doing audio "processing" and I'm lost, as you can see.
For everyone like me who is interested in learning about DSP for audio processing in C++, I want to share my little research results and opinions, and perhaps save you some time :)
After trying several DSP libraries, I finally found The Synthesis ToolKit in C++ (STK), an open-source library that offers easy, clear interfaces and easy-to-understand code that you can dive into to learn various basic DSP algorithms.
So, I recommend that anyone who is starting out with no previous experience take a look at this library.
Your int16_t[] buffer contains a sequence of samples. They represent instantaneous amplitude levels; think of them as the voltage to apply to the speaker at the corresponding instant in time. They are signed numbers with values in the range [-32768, 32767]. A stream of constant zeros means silence. A stream of constant -32000 (for example) also means silence, but it will eventually burn out your speaker coil. The position in the array represents time, and the value of each sample represents voltage.
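As a concrete illustration of the gain idea mentioned in the question (my own sketch, not part of the original answer), applying gain is just a per-sample multiply, clamped back into the int16_t range; the default factor of 1.5 is arbitrary:

#include <cstdint>
#include <cstddef>
#include <algorithm>

// Multiply every sample by `gain` and clamp the result to the valid int16_t range.
void applyGain(int16_t* buffer, std::size_t numSamples, float gain = 1.5f)
{
    for (std::size_t i = 0; i < numSamples; ++i)
    {
        int scaled = static_cast<int>(buffer[i] * gain);
        buffer[i] = static_cast<int16_t>(std::min(32767, std::max(-32768, scaled)));
    }
}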
If you want to mix two sample streams together, for example to add a chirp, you get yourself a sample stream with the chirp in it (record a bird or something). You then add the two sounds together, sample by sample.
You can do a super-cheesy reverb effect by taking your original sound buffer, lowering its volume (perhaps by dividing all the samples by a constant), and adding it back into your original stream, shifted by a tenth of a second's worth of array positions.
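To make those two paragraphs concrete, here is a rough sketch (mine, not the answerer's) of mixing and of that super-cheesy echo, assuming mono 16-bit samples at 44100 Hz:

#include <cstdint>
#include <cstddef>
#include <vector>
#include <algorithm>

static int16_t clampSample(int value)
{
    return static_cast<int16_t>(std::min(32767, std::max(-32768, value)));
}

// Mix stream b into stream a, sample by sample.
void mix(std::vector<int16_t>& a, const std::vector<int16_t>& b)
{
    const std::size_t n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i)
        a[i] = clampSample(a[i] + b[i]);
}

// Super-cheesy echo: add a quieter copy of the original signal, delayed by ~0.1 s.
void cheapEcho(std::vector<int16_t>& buffer, int sampleRate = 44100)
{
    const std::vector<int16_t> dry = buffer;                     // untouched copy of the original
    const std::size_t delay = static_cast<std::size_t>(sampleRate / 10);
    for (std::size_t i = delay; i < buffer.size(); ++i)
        buffer[i] = clampSample(buffer[i] + dry[i - delay] / 4);
}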
Those are the basics of audio processing. Things get very sophisticated indeed. This field is known as "digital signal processing" and there are plenty of books on the subject.
You can do it either by hacking the audio buffer and trying to create effects like gain and thresholding with simple math operations, or by doing it properly with real DSP algorithms. If you wish to do it properly, I would recommend using the Speex library. It's open source and well tested: www.speex.org. The code should compile on MSVC or Linux with minimal effort. This is the fastest way to get good audio code working with proper DSP techniques. Your code would look something like the following; please read the AEC example.
st = speex_echo_state_init(NN, TAIL);                            /* NN = frame size, TAIL = echo filter length */
den = speex_preprocess_state_init(NN, sampleRate);               /* preprocessor (denoise/AGC) state */
speex_echo_ctl(st, SPEEX_ECHO_SET_SAMPLING_RATE, &sampleRate);
speex_preprocess_ctl(den, SPEEX_PREPROCESS_SET_ECHO_STATE, st);  /* let the preprocessor see the echo state */
You need to set up the states; the testecho example code includes these.
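For completeness, the per-frame processing calls look roughly like this (my recollection of the testecho sample, so verify the names against the actual Speex example; NN, st, den, rec, and play are the values set up above):

#include <speex/speex_echo.h>
#include <speex/speex_preprocess.h>

/* rec: NN mic samples just captured, play: NN samples just sent to the speaker */
spx_int16_t out[NN];
speex_echo_cancellation(st, rec, play, out);   /* subtract the echo of `play` from `rec` */
speex_preprocess_run(den, out);                /* denoise/AGC the cleaned-up frame */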

video/audio encoding/decoding/playback

I've always wanted to try to make a media player, but I don't understand how. I found FFmpeg and GStreamer, and I seem to be favoring FFmpeg despite its worse documentation, even though I haven't written anything at all yet. That being said, I feel I would understand how things work better if I knew what they were actually doing. I have no idea how video/audio streams work, or about the various media types, so that doesn't help. At the end of the day, I'm just 'emulating' some of the code samples.
Where do I start learning how to encode/decode/play back video/audio streams without having to read hundreds of pages of several 'standards'? Perhaps even enough knowledge to play back media without relying on another API. Googling 'basic video audio decoding encoding' doesn't seem to help. :(
This seems to be a black art that nobody is out to tell anyone about.
The first part is extracting the streams from the container. From there, you need to decode the streams into raw media. I recommend finding a small Theora video and seeing how the pieces relate there.
You want us to write one answer that you read and then become a master of the multimedia domain..?!
Anyway, that cannot be done in one answer.
First of all, understand this terminology (Google it):
1. container -- muxer/demuxer
2. codec -- coder/decoder
If you like FFmpeg, then start with its basic video player application. It is well documented here: http://dranger.com/ffmpeg/ It shows how to demux a container and decode an elementary stream with the FFmpeg API. There is more about this at http://ffmpeg.org/ffplay.html
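To give a feel for that demux/decode flow, here is my own condensed sketch using the modern FFmpeg API (not taken from the tutorial; error handling is mostly omitted, and the calls should be checked against the current libavformat/libavcodec docs):

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

int main()
{
    // demuxing layer: open the container and read its stream info
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, "input.mp4", nullptr, nullptr) < 0)
        return 1;
    avformat_find_stream_info(fmt, nullptr);

    // pick the first video stream and open a decoder for it
    int stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    const AVCodec* codec = avcodec_find_decoder(fmt->streams[stream]->codecpar->codec_id);
    AVCodecContext* dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt->streams[stream]->codecpar);
    avcodec_open2(dec, codec, nullptr);

    // decoding loop: packets come out of the demuxer, frames come out of the decoder
    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0)
    {
        if (pkt->stream_index == stream)
        {
            avcodec_send_packet(dec, pkt);
            while (avcodec_receive_frame(dec, frame) == 0)
            {
                // raw decoded picture is now in `frame` (process/display it here)
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    avformat_close_input(&fmt);
    return 0;
}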
I like GStreamer more than FFmpeg; it has better documentation. It would be a good choice to start with GStreamer.

converting image sequence to video

I want to make a screen capture utility. So far I am able to capture the screen at regular intervals to get a numbered sequence of images, and now I want to encode them to a video format, preferably FLV (because of its good compression and web support).
...I tried ffmpeg.exe for that purpose, but for some strange reason it didn't work on my Vista Ultimate: only the first picture was encoded, and I don't know what happened to the rest.
Also, I would prefer to do the encoding programmatically (using a C/C++ library API, if there is one for this purpose) rather than using tools like ffmpeg.exe, and I am interested in encoding a picture sequence to video, not in capturing continuous video directly.
I searched the internet... there are lots of libraries and tutorials for converting between video formats, but I didn't find anything useful for my problem.
I am not very proficient with video formats and SDK libraries; I just need a quick way to encode some pictures to video, with some basic control (such as the time interval between two consecutive frames).
So can you help me with some pointers as to which library I should use and how (a code fragment and a short descriptive answer would help greatly)? And please don't recommend any .NET solution: I want to learn something from this and don't want to apply some brute-force approach to the problem.
Sorry for my English... and thanks in advance.
It appears that an .avi file can more or less be made directly from .jpg files:
An AVI file may carry audio/visual data inside the chunks in virtually any compression scheme, including Full Frame (Uncompressed), ..., Motion JPEG.
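If a Motion JPEG .avi is acceptable, one easy programmatic route (my suggestion, not part of the original answer) is OpenCV's VideoWriter; the file names, frame size, and frame rate below are just placeholders:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // the frame rate controls the interval between consecutive frames (2 fps = 0.5 s per picture)
    cv::VideoWriter writer("capture.avi",
                           cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                           2.0, cv::Size(1280, 720));

    for (int i = 0; ; ++i)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "frame_%04d.jpg", i);  // numbered screenshot sequence
        cv::Mat img = cv::imread(name);
        if (img.empty())
            break;                 // stop at the first missing image
        writer.write(img);         // frames must match the size given to the writer
    }
    return 0;
}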
Also, something very similar has been discussed here before.

Extracting raw audio/waveform from an MP3

This question has been on my mind for a few years and I never actually found the answer.
What I would like to do is extract the actual waveform/PCM from an MP3 file, so that I can play it using the sound card (of course).
Ideally, I would then experiment with some DSP effects.
My first step was to look into LAME, but I didn't find anything relevant about decoding MP3s in a program, or anything like that.
So I'm asking where I could find something like this.
What language should I use? I was thinking C, but maybe there are programming languages out there that would do the job more efficiently.
Thanks!
Guillaume.
The question boils down to: what are you trying to accomplish?
Your description of decoding an MP3 and playing it on the sound card makes it sound as if you are trying to make a media player.
However, if your intent is to play around with DSP effects, then the question is more about processing the sound than about decoding MP3s. If that's the case, looking into writing plug-ins for existing media players (such as Windows Media Player and Winamp) would probably be the easiest path to what you're trying to accomplish.
Frankly, writing your own decoder from scratch is not just a programming problem but a mathematical one, so using existing libraries is the way to go. Talking to the operating system or to libraries like DirectSound to output audio seems like unnecessary work, if anything. I feel that working on plug-ins for existing players would be the way to go, unless your goal is to make your own media player.
If what you really want is to play with audio data, then decoding an MP3 to uncompressed PCM using any MP3 decoder, and manipulating that in the language of your choice, would accomplish your goal of applying effects to sound.
The language choice will depend on whether you are going to interact directly with MP3 decoding libraries, or whether you can just take raw audio as input, which would let you use pretty much any language you like.
There was a similar question a while back, Getting started with programmatic audio, where I posted an answer on some basic ways to manipulate audio, such as amplification, changing playback speed, and doing some work with the FFT.
libmpg123 should do the trick.
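For reference, a minimal decode loop with libmpg123 looks roughly like this (my own sketch; see the mpg123 documentation for proper error handling and output-format negotiation):

#include <mpg123.h>
#include <vector>

int main()
{
    mpg123_init();
    int err = 0;
    mpg123_handle* mh = mpg123_new(nullptr, &err);
    mpg123_open(mh, "input.mp3");

    long rate; int channels, encoding;
    mpg123_getformat(mh, &rate, &channels, &encoding);    // e.g. 44100 Hz, 2 channels, signed 16-bit

    std::vector<unsigned char> buf(mpg123_outblock(mh));  // recommended output buffer size
    size_t done = 0;
    while (mpg123_read(mh, buf.data(), buf.size(), &done) == MPG123_OK)
    {
        // buf now holds `done` bytes of raw PCM: feed them to the sound card or your DSP code
    }

    mpg123_close(mh);
    mpg123_delete(mh);
    mpg123_exit();
    return 0;
}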
I have been using the Windows Media SDK, not for this purpose, but I am pretty sure there are hooks that let you intercept the audio stream or convert an MP3 to uncompressed WAV. I used C++.
Lots:
http://www.mp3-tech.org/programmer/decoding.html
Pick your poison...
Also, LAME does decode MP3s (check out the --decode option), so you might find something interesting in that source.
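For example, something along these lines should produce an uncompressed WAV from an MP3 (file names are just placeholders):
lame --decode input.mp3 output.wav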
-Adam
It really depends on what platform you are programming for and what you want to do with the code. If you are on Windows, you should look at the Windows Media Format SDK or DirectShow; both should be able to decode MP3 files into the raw waveform. On the Mac, I would expect QuickTime to have the same ability. Others have already suggested sources for Linux/open-source code.
I would recommend looking at Cubase and WaveLab, as both will convert MP3 to WAV etc. and allow you to play around with the waveform.