Qt C++ video libraries

I'm looking for a video library for Qt 4 (C++/Windows) that has:
1) Basic video playback functionality
It should play all the common video formats such as DVD VOB and MP4/MKV/AVI (H.264, Xvid, DivX). It should also be able to deinterlace the video automatically and display it at the correct display aspect ratio (DAR).
2) Cropping
It should have some basic functionality to remove black bars (with user-supplied arguments).
3) Snapshots
It should have functionality to take snapshots in memory.
4) Frame-by-frame seeking
It should have some basic functionality to do frame-by-frame seeking, e.g. prevFrame(), nextFrame(), jumpTo(frame) and getNumFrames().
I have tried the following libraries; here is the functionality I found they support (numbered as above):
Qt Phonon:
1) Playback: Yes. Plays all the needed formats and displays them correctly.
2) Cropping: No.
3) Snapshots: No. Not implemented (returns an empty image).
4) Frame-by-frame seeking: No.
QtFFmpegWrapper:
1) Playback: Partial. Does not deinterlace DVD VOBs and does not display them in DAR.
2) Cropping: No.
3) Snapshots: Yes.
4) Frame-by-frame seeking: Partial. Broken for MKV (H.264).
Qt VLC:
1) Playback: Yes. Plays all the needed formats and displays them correctly.
2) Cropping: Yes, though I have not tested whether it works.
3) Snapshots: Partial. Only to disk. Edit: QPixmap::grabWindow(player->videoWidget()->winId()) works (see the sketch below).
4) Frame-by-frame seeking: No. Only by milliseconds.
Now I'm looking at QVision, which seems to have all of those features except for cropping. Maybe implementing cropping isn't that difficult. But I'm wondering if there are any other libraries I should look into. Or perhaps I missed something and these features are possible with one of the libraries above. Thanks.
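For reference, the in-memory snapshot workaround mentioned above for Qt VLC boils down to something like the following sketch (the player type name and the videoWidget() accessor are assumptions about the wrapper's API; QPixmap::grabWindow is standard Qt 4):

    #include <QPixmap>
    #include <QImage>

    // Hedged sketch: grab the contents of the widget the video is rendered into
    // and keep it in memory as a QImage. "VlcMediaPlayer" and videoWidget() are
    // placeholders for whatever the Qt VLC wrapper actually exposes.
    QImage takeSnapshot(VlcMediaPlayer *player)
    {
        QPixmap frame = QPixmap::grabWindow(player->videoWidget()->winId());
        return frame.toImage();
    }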

You can consider Movie Player Gold SDK ActiveX 3.6 from ViscomSoft. I don't see cropping mentioned on their site, but memory snapshots and frame-by-frame stepping are among the supported features.
I used their VideoEdit and Screen2Video SDKs in Windows Qt software; they worked quite well.

Related

De-interlace captured video with DirectShow

(Add KSPROPERTY_CAMERACONTROL_SCANMODE into KSPROPERTY_VIDCAP_CAMERACONTROL?)
Working with a webcam, I'm getting interlaced video. Access to the webcam is done through DirectShow using the videoInput library (as part of OpenCV).
I cannot find a way to control that (interlaced) mode with the currently available options/enums in OpenCV, so I dug into MSDN and found the following:
The IAMCameraControl interface can be used to get/set various properties, as long as they are listed in PROPSETID_VIDCAP_CAMERACONTROL (http://msdn.microsoft.com/en-us/library/dd389145(v=vs.85).aspx#methods).
PROPSETID_VIDCAP_CAMERACONTROL lists 3 blocks of enum values (one for devices prior to the USB Video Class, one for UVC, and one for Windows 8), and among those values is the one I'm interested in: KSPROPERTY_CAMERACONTROL_SCANMODE (http://msdn.microsoft.com/en-us/library/ff567802(v=vs.85).aspx).
I have the Windows SDK 7.0A installed, and in the C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include\strmif.h file the enum only covers the "original" (pre-UVC) set, so it doesn't have the control for the interlaced mode. The enum in that file looks like this one from MSDN: http://msdn.microsoft.com/en-us/library/windows/desktop/dd318253(v=vs.85).aspx
Then I found a forum thread where someone claims "I had to update the CameraControlProperty enumeration" to add the needed properties, and it worked for him (http://sourceforge.net/p/directshownet/discussion/460697/thread/562ef6cf/).
My question is:
How? :) What does one do to add a needed value (or a bunch of values) that is supposedly supported by the system? What am I missing?
MSDN seems to be holding its cards close to its chest and not revealing much, so I'm asking the good people here for help.
I hope someone out there has figured out how to do that and can share the wisdom.
My ultimate goal is to use OpenCV's videoInput setVideoSettingCamera() to control the interlacing (so that the image I get doesn't have those black lines).
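For what it's worth, a hedged sketch of what "updating the enumeration" amounts to in plain C++/DirectShow: define the missing ID yourself and pass it straight to IAMCameraControl, whose property values are meant to line up with the KSPROPERTY_VIDCAP_CAMERACONTROL IDs. The numeric value 7 for SCANMODE and the 0 = interlaced / 1 = progressive meaning are assumptions taken from the newer ksmedia.h/MSDN documentation and should be verified against your SDK:

    #include <dshow.h>

    // KSPROPERTY_CAMERACONTROL_SCANMODE is missing from the Win7 SDK's strmif.h,
    // so define the ID manually (7 per the UVC block in newer ksmedia.h headers).
    static const long CameraControl_ScanMode = 7;

    HRESULT setProgressiveScan(IBaseFilter *captureFilter)
    {
        IAMCameraControl *camCtrl = NULL;
        HRESULT hr = captureFilter->QueryInterface(IID_IAMCameraControl,
                                                   (void **)&camCtrl);
        if (FAILED(hr))
            return hr;               // the driver exposes no camera control at all

        // 1 = progressive (assumed meaning); Manual forces the value on the driver.
        hr = camCtrl->Set(CameraControl_ScanMode, 1, CameraControl_Flags_Manual);
        camCtrl->Release();
        return hr;                   // fails if the driver doesn't support SCANMODE
    }

Whether this works depends entirely on the driver; if it rejects the ID, the property simply isn't supported through this path.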

Convert Movie to OpenNI *.oni video

The Kinect OpenNI library uses a custom video file format to store videos that contain RGB+D information. These videos have the extension *.oni. I am unable to find any information or documentation whatsoever on the ONI video format.
I'm looking for a way to convert a conventional RGB video to a *.oni video. The depth channel can be left blank (i.e. zeroed out). For example purposes, I have an MPEG-4 encoded .mov file with audio and video channels.
There are no restrictions on how the conversion is made; I just need to convert it somehow! ImageMagick, FFmpeg, and MEncoder are all OK, as is custom conversion code in C/C++ etc.
So far, all I can find is one C++ conversion utility in the OpenNI sources. From the looks of it, though, it converts from one *.oni file to another. I've also managed to find a C++ program by a PhD student that converts images from an academic database into a *.oni file. Unfortunately the code is in Spanish, not one of my native languages.
Any help or pointers much appreciated!
EDIT: As my use case is a little odd, some explanation may be in order. The OpenNI drivers (in my case I'm using the excellent Kinect for Matlab library) allow you to specify a *.oni file when creating the Kinect context. This lets you emulate having a real Kinect attached that is receiving video data, which is useful when you're testing or developing code (you don't need to have the Kinect attached to do this). In my particular case, we will be using a Kinect in the production environment (process control in a factory), but during development all I have is a video file :) Hence wanting to convert it to a *.oni file. We aren't using the depth channel at the moment, hence not caring about it.
I don't have a complete answer for you, but take a look at the NiRecordRaw and NiRecordSynthetic examples in OpenNI/Samples. They demonstrate how to create an ONI with arbitrary or modified data. See how MockDepthGenerator is used in NiRecordSynthetic -- in your case you will need MockImageGenerator.
For more details you may want to ask in the openni-dev google group.
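A very rough, untested sketch of that approach with the OpenNI 1.x C++ wrapper is below. The frame source is left as a stub (e.g. decode the .mov with OpenCV or FFmpeg and convert to RGB24), error checking and cleanup are omitted, and the exact MockImageGenerator::SetData overload should be checked against your OpenNI version:

    #include <XnCppWrapper.h>
    #include <vector>

    void writeOni(const char *outputPath, int width, int height, int frameCount)
    {
        xn::Context context;
        context.Init();

        // A mock (synthetic) image node that we feed manually, as in NiRecordSynthetic.
        xn::MockImageGenerator mockImage;
        mockImage.Create(context);

        XnMapOutputMode mode = { (XnUInt32)width, (XnUInt32)height, 30 };
        mockImage.SetMapOutputMode(mode);
        mockImage.SetPixelFormat(XN_PIXEL_FORMAT_RGB24);

        // Record the mock node into the .oni file.
        xn::Recorder recorder;
        recorder.Create(context);
        recorder.SetDestination(XN_RECORD_MEDIUM_FILE, outputPath);
        recorder.AddNodeToRecording(mockImage, XN_CODEC_JPEG);

        std::vector<XnUInt8> rgb(width * height * 3, 0);
        for (int i = 0; i < frameCount; ++i)
        {
            // TODO: fill 'rgb' with frame i of your movie, converted to RGB24.
            mockImage.SetData((XnUInt32)(i + 1),          // frame ID
                              (XnUInt64)i * 33333,        // timestamp in microseconds (~30 fps)
                              (XnUInt32)rgb.size(), &rgb[0]);
            recorder.Record();                            // writes one frame to the .oni
        }
    }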
Did you look into the NiConvertXToONI command and its associated documentation?
NiConvertXToONI opens any recording, takes every node within it, and records it to a new ONI recording. It receives both the input file and the output file from the command line.

C++ video compression library that supports many different compression algorithms?

For a scientific project I need to compress video data. The video doesn't contain natural footage, however, and the quality characteristics of the compression will be different than for natural footage (preservation of hard edges, for example, is more important than smooth gradients or color correctness).
I'm looking for a library that can be easily integrated into an existing C++ project and that lets me experiment with different video compression algorithms.
Any suggestions?
Look at FFmpeg. It is the most mature open source tool for video compression and decompression. It comes with a command line tool, and with libraries for codecs and muxers/demuxers that can be statically or dynamically linked.
As satuon already answered, FFmpeg is the go-to solution for all things multimedia. However, I just wanted to suggest an easier path for you than trying to hook your program up to its libraries. It would probably be far easier for you to generate a sequence of raw RGB images within your program, dump each out to disk (perhaps using a ridiculously simple format like PPM), and then use FFmpeg from the command line to compress them into a proper movie.
This workflow might cut down on your prototyping and development time.
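For example, the raw-frames-to-PPM part of that workflow is only a few lines of C++; afterwards a command along the lines of ffmpeg -framerate 25 -i frame_%05d.ppm -c:v libx264 out.mkv turns the sequence into a movie (codec choice aside):

    #include <cstdio>

    // Minimal sketch: dump one tightly packed RGB24 frame as a binary PPM (P6) file.
    bool writePPM(const char *path, const unsigned char *rgb, int width, int height)
    {
        std::FILE *f = std::fopen(path, "wb");
        if (!f)
            return false;
        std::fprintf(f, "P6\n%d %d\n255\n", width, height);    // PPM header
        std::fwrite(rgb, 1, (size_t)width * height * 3, f);    // pixel data, row by row
        std::fclose(f);
        return true;
    }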
As for the specific video codec you will want to use, you have a plethora of options available to you. One of the most important considerations will be: Who needs to be able to play your video and what software will they have available?

Combining separate audio and video files into one file (C++)

I am working on a C++ project with OpenCV. It is a simple webcam application with basic features like capturing pictures and videos. I have already been able to save video (without audio). Since OpenCV does not support audio processing, I was wondering if there is any way I can record audio separately in a different file and later combine the two to get one video file.
While searching on the internet, I did hear something about using FFmpeg with OpenCV, but I just can't figure out how to do it exactly.
Can you guys help me? I would be very grateful. Thank you!
P.S. I have used OpenCV and Qt (for the GUI).
As you said, OpenCV doesn't deal with audio by itself. However, once you have separate audio and video files, you can combine them using a technique called muxing. There are many ways to do this. I use VirtualDub for most of my muxing needs, although it is Windows-only (not sure if that's a problem). I know FFmpeg is also capable of muxing (via the command line interface), though I can't recall the exact command. There's also MPlayer and a multitude of other programs out there that can do this.
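Along those lines, if the ffmpeg command-line tool is installed, you can even drive it from the C++ program itself once both files exist. A hedged sketch (the paths are placeholders; the video stream is copied as-is and the audio is encoded to AAC):

    #include <cstdlib>
    #include <string>

    // Mux an existing video file and audio file into one output file by shelling
    // out to ffmpeg (which must be on the PATH). Returns true on success.
    bool muxAudioVideo(const std::string &videoPath,
                       const std::string &audioPath,
                       const std::string &outPath)
    {
        std::string cmd = "ffmpeg -y -i \"" + videoPath + "\" -i \"" + audioPath +
                          "\" -c:v copy -c:a aac \"" + outPath + "\"";
        return std::system(cmd.c_str()) == 0;
    }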
As far as I know, OpenCV is good for video/image processing. To support audio processing, you can use other libraries, e.g. PortAudio or Csound.

Video mixer filter

I need to find a video filter in order to mix multiple video streams (let's say a maximum of 4).
I've found a video mixer filter from MediaLooks and it is OK, but the problem is that I'm trying to use it in a school project (for the entire semester), so the 30-day trial is kind of unacceptable.
So my question is: are you aware of a free DirectShow filter that could help? If not, then I'll have to write one, and the problem there is that I don't know where to start.
If you need output to the display, you can use the VMR. If you need output to a file, then I think you will need to write something. The standard solution is to write an allocator/presenter plugin for the VMR that allows you to get the mixed video back and then save it somewhere. This is more efficient than a fully software-only mixer filter.
I finally ended up implementing my own filter.
The Video Mixing Renderer 9 (and 7) will do the trick for you. You can set the opacity and area of each video stream going into the VMR-9. I suggest playing with it from within GraphEdit.
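To illustrate the VMR-9 route in code: put the VMR-9 into mixing mode by asking for the number of input streams you need before connecting anything, then use IVMRMixerControl9 to set per-stream alpha and output rectangles. A hedged sketch with error handling trimmed (here each of the 4 streams is placed in its own quadrant):

    #include <dshow.h>
    #include <d3d9.h>
    #include <vmr9.h>

    HRESULT setupMixer(IGraphBuilder *graph, IBaseFilter **vmrOut)
    {
        IBaseFilter *vmr = NULL;
        HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, NULL,
                                      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                      (void **)&vmr);
        if (FAILED(hr))
            return hr;
        graph->AddFilter(vmr, L"VMR-9");

        // Requesting more than one stream switches the VMR-9 into mixing mode.
        IVMRFilterConfig9 *config = NULL;
        vmr->QueryInterface(IID_IVMRFilterConfig9, (void **)&config);
        config->SetNumberOfStreams(4);
        config->Release();

        // Per-stream blending and positioning.
        IVMRMixerControl9 *mixer = NULL;
        vmr->QueryInterface(IID_IVMRMixerControl9, (void **)&mixer);
        for (DWORD stream = 0; stream < 4; ++stream)
        {
            VMR9NormalizedRect r;                  // coordinates are normalized 0..1
            r.left   = (stream % 2) * 0.5f;
            r.top    = (stream / 2) * 0.5f;
            r.right  = r.left + 0.5f;
            r.bottom = r.top + 0.5f;
            mixer->SetOutputRect(stream, &r);
            mixer->SetAlpha(stream, 1.0f);         // fully opaque
        }
        mixer->Release();

        *vmrOut = vmr;    // caller connects the source filters and runs the graph
        return S_OK;
    }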
I would also like to suggest skipping that altogether. If you use WPF, you will get far more media capabilities, much more easily.
If you want low-level DirectShow support, you can try my project, WPF MediaKit. I have a control called MediaUriElement that is similar to WPF's MediaElement.