converting image sequence to video - C++

I want to make a screen capture utility. So far I am able to capture the screen at a regular interval to get a numbered sequence of images, and now I want to encode them to a video format, preferably FLV (because of its good compression and web support).
I tried ffmpeg.exe for that, but for some strange reason it didn't work on my Vista Ultimate: only the first picture is encoded, and I don't know what happened to the rest.
Also, I would prefer to do the encoding programmatically (using a C/C++ library API, if there is one for that purpose) rather than using tools like ffmpeg.exe, and I am interested in encoding a picture sequence to video, not in capturing continuous video directly.
I searched the internet; there are lots of libraries and tutorials for converting between video formats, but I didn't find anything useful for my problem.
I am not very proficient with video formats and SDK libraries; I just need a quick way to encode some pictures to video, with some basic control (such as the time interval between two consecutive frames).
So can you help me with some pointers as to which library I should use, and how (a code fragment and a short descriptive answer would greatly help)? And please don't recommend any .NET solution; I need to learn something out of this and don't want to apply some brute-force approach to the problem.
Sorry for my English, and thanks in advance.

It appears that an .avi file can more or less directly be made from .jpgs:
An AVI file may carry audio/visual data inside the chunks in virtually any compression scheme, including Full Frame (Uncompressed), ..., Motion JPEG.
Also, something very similar has been discussed here before.
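If you want to do it in code, one simple route (not covered in the answer above) is OpenCV's VideoWriter class, which can write a Motion-JPEG .avi. Here is a minimal sketch, assuming same-sized frames named capture0.jpg, capture1.jpg, and so on; the file names and frame rate are made up for the example:

    // Sketch: pack a numbered JPEG sequence into a Motion-JPEG .avi with OpenCV.
    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main()
    {
        const double fps = 5.0; // one frame every 200 ms on playback

        cv::Mat first = cv::imread("capture0.jpg");
        if (first.empty())
            return 1; // no input images found

        // OpenCV 3+ spelling; older versions use the CV_FOURCC macro instead.
        cv::VideoWriter writer("capture.avi",
                               cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                               fps, first.size());

        for (int i = 0; ; ++i) {
            char name[64];
            std::snprintf(name, sizeof(name), "capture%d.jpg", i);
            cv::Mat frame = cv::imread(name);
            if (frame.empty())
                break; // ran out of images
            writer.write(frame);
        }
        return 0; // the writer finalizes the file in its destructor
    }

The frame rate passed to the writer controls the interval between consecutive frames on playback (e.g. 5 fps for a 200 ms capture interval). Converting the result to FLV would still require a separate tool or library.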

Related

What is the path from BITMAP[+WAVE(s)] to RTSP (Twitch) via C/C++ in Windows?

So I'm trying to get a basic tool to output video/audio(s) to Twitch. I'm new to this side (AV) of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party libraries where that's not available.
What are the steps for getting raw bitmap and wave data into a codec, then into an RTSP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out, and will use that only as a last resort.
So I capture the monitor via Output Duplication, and also the sound on the system as a wave and the microphone as another wave. I'm trying to push this to Twitch. I know that there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there is no network code integrated into it? There's also the libav* collection in FFmpeg.
What are the basic steps of sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a reasonably short conceptual explanation, and I'll take it from there. Try to also cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it?).
Assume absolute noob level in this area (concept-wise, not code-wise).

Convert Movie to OpenNI *.oni video

The Kinect OpenNI library uses a custom video file format to store videos that contain RGB+D information. These videos have the extension *.oni. I am unable to find any information or documentation whatsoever on the ONI video format.
I'm looking for a way to convert a conventional RGB video to a *.oni video. The depth channel can be left blank (i.e. zeroed out). For example purposes, I have an MPEG-4 encoded .mov file with audio and video channels.
There are no restrictions on how this conversion must be made; I just need to convert it somehow! I.e., ImageMagick, ffmpeg, and mencoder are all OK, as is custom conversion code in C/C++, etc.
So far, all I can find is one C++ conversion utility in the OpenNI sources. From the looks of it, though, this converts from one *.oni file to another. I've also managed to find a C++ program by a PhD student that converts images from an academic database into a *.oni file. Unfortunately the code is in Spanish, not one of my native languages.
Any help or pointers much appreciated!
EDIT: As my use case is a little odd, some explanation may be in order. The OpenNI drivers (in my case I'm using the excellent Kinect for Matlab library) allow you to specify a *.oni file when creating the Kinect context. This allows you to emulate having a real Kinect attached that is receiving video data, which is useful when you're testing/developing code (you don't need to have the Kinect attached to do this). In my particular case, we will be using a Kinect in production (process control in a factory environment), but during development all I have is a video file :) Hence the desire to convert to a *.oni file. We aren't using the depth channel at the moment, hence not caring about it.
I don't have a complete answer for you, but take a look at the NiRecordRaw and NiRecordSynthetic examples in OpenNI/Samples. They demonstrate how to create an ONI with arbitrary or modified data. See how MockDepthGenerator is used in NiRecordSynthetic -- in your case you will need MockImageGenerator.
For more details you may want to ask in the openni-dev google group.
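To give a feel for the shape of it, here is a rough, untested sketch along those lines: decode your movie frames with whatever you like, push them into a MockImageGenerator, and record that node to an .oni file. The exact method signatures (especially SetData) should be verified against the NiRecordSynthetic sample and your OpenNI headers.

    // Untested sketch: record mock RGB frames to an .oni file.
    #include <XnCppWrapper.h>
    #include <vector>

    int main()
    {
        xn::Context context;
        context.Init();

        // Mock node we feed frames to instead of a real Kinect.
        xn::MockImageGenerator mockImage;
        mockImage.Create(context);
        mockImage.SetPixelFormat(XN_PIXEL_FORMAT_RGB24);

        XnMapOutputMode mode = { 640, 480, 30 }; // XRes, YRes, FPS
        mockImage.SetMapOutputMode(mode);

        // Record the mock node into an ONI file.
        xn::Recorder recorder;
        recorder.Create(context);
        recorder.SetDestination(XN_RECORD_MEDIUM_FILE, "output.oni");
        recorder.AddNodeToRecording(mockImage, XN_CODEC_JPEG);

        std::vector<XnUInt8> rgb(640 * 480 * 3, 0);
        for (XnUInt32 frame = 1; frame <= 100; ++frame)
        {
            // ... decode the next movie frame into 'rgb' here ...
            mockImage.SetData(frame, frame * 33333,  // frame ID, timestamp (us)
                              (XnUInt32)rgb.size(), &rgb[0]);
            recorder.Record();
        }

        context.Release();
        return 0;
    }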
Did you look into the NiConvertXToONI command and its associated documentation?
NiConvertXToONI opens any recording, takes every node within it, and records it to a new ONI recording. It receives both the input file and the output file from the command line.

video/audio encoding/decoding/playback

I've always wanted to try to make a media player, but I don't understand how. I found FFmpeg and GStreamer, and I seem to be favoring FFmpeg despite its worse documentation, even though I haven't written anything at all yet. That being said, I feel I would understand how things worked more if I knew what they were doing. I have no idea how video/audio streams work, or the several media types, so that doesn't help. At the end of the day, I'm just 'emulating' some of the code samples.
Where do I start learning how to encode/decode/play back video/audio streams without having to read hundreds of pages of several 'standards'? Perhaps, to a certain extent, enough knowledge to play back media without relying on another API. Googling 'basic video audio decoding encoding' doesn't seem to help. :(
This seems to be a black art that nobody is out to tell anyone about.
The first part is extracting streams from the container. From there, you need to decode the streams into media. I recommend finding a small Theora video and seeing how the pieces relate there.
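To make that concrete, here is a minimal sketch of the demux-then-decode flow using the FFmpeg libav* C API from C++ (error handling trimmed; the file name is made up, and API details vary between FFmpeg versions -- the dranger tutorial mentioned in the next answer uses an older variant of these calls):

    // Sketch: open a container, find the video stream, decode packets to frames.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main()
    {
        AVFormatContext* fmt = nullptr;

        // 1. Open the container and read its stream headers (demuxing setup).
        if (avformat_open_input(&fmt, "input.ogv", nullptr, nullptr) < 0)
            return 1;
        avformat_find_stream_info(fmt, nullptr);

        // 2. Find the video stream and open a decoder for its codec.
        int video = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        const AVCodec* codec =
            avcodec_find_decoder(fmt->streams[video]->codecpar->codec_id);
        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(ctx, fmt->streams[video]->codecpar);
        avcodec_open2(ctx, codec, nullptr);

        // 3. Demux packets and decode them into raw frames.
        AVPacket* pkt = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == video) {
                avcodec_send_packet(ctx, pkt);
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    // frame->data now holds raw (e.g. YUV) pixels ready for display.
                }
            }
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }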
You want us to write one answer, you read it, and you're a master of the multimedia domain? That can't be done in one answer.
First of all, understand this terminology by googling:
1. container -- muxer/demuxer
2. codec -- coder/decoder
If you like FFmpeg, then go with its basic video player application. It is well documented at http://dranger.com/ffmpeg/ which shows the method of demuxing a container and decoding an elementary stream with the FFmpeg API. More about this at http://ffmpeg.org/ffplay.html
I like GStreamer more than FFmpeg; it has good documentation. It would be a good choice to start with GStreamer (see the sketch below).
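For instance, playing a file with GStreamer takes only a few lines, because the playbin element assembles the whole demux/decode/output pipeline for you (a minimal sketch; the file URI is made up):

    // Sketch: play a media file with GStreamer's playbin element.
    #include <gst/gst.h>

    int main(int argc, char* argv[])
    {
        gst_init(&argc, &argv);

        // playbin builds the demuxer, decoders and outputs automatically.
        GstElement* pipeline = gst_parse_launch(
            "playbin uri=file:///path/to/movie.ogv", nullptr);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until an error occurs or the stream ends.
        GstBus* bus = gst_element_get_bus(pipeline);
        GstMessage* msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

        if (msg != nullptr)
            gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }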

combining separate audio and video files into one file C++

I am working on a C++ project with OpenCV. It is a simple webcam application with basic features like capturing pictures and videos. I have already been able to save video (without audio). Since OpenCV does not support audio processing, I was wondering if there is any way I can record audio separately in a different file and later combine the two to get one video file.
While searching the internet, I did hear something about using FFmpeg with OpenCV, but I just can't figure out how to do it exactly.
Can you guys help me? I would be very grateful. Thank you!
P.S. I have used OpenCV and Qt (for the GUI).
As you said, OpenCV doesn't deal with audio by itself. However, once you have separate audio and video files, you can combine them using a technique called muxing. There are many, many ways to do this. I use VirtualDub for most of my muxing needs, although it is Windows-only (not sure if that's a problem). I know ffmpeg is also capable of muxing (via the command-line interface), though I can't recall what the command is. There's also mplayer and a multitude of other programs out there to do this.
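For reference, the ffmpeg muxing invocation is along the lines of ffmpeg -i video.avi -i audio.wav -c copy combined.avi, which stream-copies both inputs into one container without re-encoding (assuming the container supports both codecs; the file names here are just placeholders).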
As far as I know, OpenCV is good for video/image processing. For audio processing you can use other libraries, e.g. PortAudio or Csound.

Extracting raw audio/waveform from an MP3

This question has been in my mind for a few years and I never actually found the answer for this.
What I would like to do is extract the actual waveform/PCM of an MP3 file, so that I can play it using the soundcard (of course).
Ideally I would like to experiment with some DSP effects.
My first step was to look into LAME, but I didn't find anything relevant there about decoding MP3s in a program, or anything like that.
So I'm asking where I could find something like this.
What language should I use? I was thinking C, but maybe there are programming languages out there that would do the job more efficiently.
Thanks!
Guillaume.
The question boils down to: what are you trying to accomplish?
The description of your question, decoding an MP3 and playing it on the sound card, makes it sound as if you are trying to make a media player.
However, if your intent is to play around with DSP effects, then the question is more about processing the sound than about decoding MP3s. If that's the case, looking into writing plug-ins for existing media players (such as Windows Media Player and Winamp) would probably be the easiest path to what you're trying to accomplish.
Frankly, learning to write your own decoder from scratch is not just a programming problem but a mathematical one, so using existing libraries is the way to go. Talking to the operating system or to libraries like DirectSound to output audio seems like unnecessary work, if anything. I feel that working on plug-ins for existing players would be the way to go, unless your goal is to make your own media player.
If what you really want to accomplish is playing with audio data, then probably decoding an MP3 to uncompressed PCM using any MP3 decoder, then manipulating it in the language of your choice would accomplish your goal of dealing with effects with sound.
The language choice is going to depend on whether you are going to interact directly with MP3 decoding libraries, or whether you can just use raw audio input, which would allow you to use pretty much any language of your choice.
There was a similar question a while back, Getting started with programmatic audio, where I posted an answer on some basic ways to manipulate audio, such as amplification, changing playback speed, and doing some work with FFT.
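To illustrate how simple the manipulation itself can be once you have raw PCM, here is a toy amplification example (assuming 16-bit signed samples; the function name is made up):

    // Toy example: apply a gain (amplification) to 16-bit PCM samples,
    // clamping the result so it doesn't wrap around on overflow.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    void apply_gain(std::vector<int16_t>& samples, float gain)
    {
        for (int16_t& s : samples) {
            float v = s * gain;
            s = static_cast<int16_t>(std::min(32767.0f, std::max(-32768.0f, v)));
        }
    }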
libmpg123 should do the trick.
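A minimal decode loop with libmpg123 looks something like this (error handling trimmed; the file name is a placeholder):

    // Sketch: decode an MP3 to raw interleaved PCM with libmpg123.
    #include <mpg123.h>
    #include <vector>

    int main()
    {
        mpg123_init();
        int err = 0;
        mpg123_handle* mh = mpg123_new(nullptr, &err);

        mpg123_open(mh, "song.mp3");

        long rate; int channels, encoding;
        mpg123_getformat(mh, &rate, &channels, &encoding);

        std::vector<unsigned char> buffer(mpg123_outblock(mh));
        size_t done = 0;
        while (mpg123_read(mh, &buffer[0], buffer.size(), &done) == MPG123_OK) {
            // 'buffer' now holds 'done' bytes of PCM; hand them to the
            // sound card or to your DSP code here.
        }

        mpg123_close(mh);
        mpg123_delete(mh);
        mpg123_exit();
        return 0;
    }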
I have been using the Windows Media SDK, not for this purpose, but I am pretty sure there are hooks that let you intercept the audio stream, or convert MP4 to uncompressed WAV. I used C++.
Lots:
http://www.mp3-tech.org/programmer/decoding.html
Pick your poison...
Also, LAME does decode MP3s (check out the --decode option), so you might find something interesting in that source.
-Adam
It really depends what platform you are programming on and what you want to do with the code. If you are on Windows, you should look at the Windows Media Format SDK or DirectShow; they should both have the ability to decode MP3 files into the raw waveform. On the Mac, I would expect QuickTime to have this same ability. Others have already suggested sources for Linux/open-source code.
I would recommend looking at Cubase and WaveLab, as both will convert MP3 to WAV etc. and allow you to play around with the waveform.