cvCapture and DirectShow - C++

I need to read an avi file and process data from it in OpenCV.
Should I use cvCapture or DirectShow?
What are the criteria?
What are the advantages of each?
Thanks

Regarding OpenCV, the most obvious advantages are:
It provides a cross-platform solution for retrieving frames from the camera;
cvCapture is incredibly easy to use.
Disadvantages:
OpenCV supports a smaller number of camera devices.
Check cv::VideoCapture if you want to use the C++ interface of OpenCV.
If you will use OpenCV to process data, it makes no sense to add DirectShow to your project just to grab frames from the camera if your camera is supported by OpenCV.
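For reference, a minimal sketch of reading an AVI file with the C++ interface might look like the following (the file name and the per-frame processing are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture("input.avi");   // a camera index (e.g. 0) works here as well
    if (!capture.isOpened())
        return -1;

    cv::Mat frame;
    while (capture.read(frame))              // grabs and decodes the next frame; returns false at the end of the file
    {
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // example per-frame processing
    }
    return 0;
}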

Related

Use OpenWebrtc to stream OpenCV video

I'm trying to stream a video generated with OpenCV (using the webcam and doing some image processing). To make things more challenging, we've decided to use OpenWebRTC. The OpenWebRTC examples are great, but they all use the webcam directly (I know that is how WebRTC is intended to be used), whereas we want to send Mat objects from inside a while loop (very OpenCV style).
Has anyone, by chance, accomplished this or have any ideas?
Thanks in advance,
—N

OpenCV: what makes a device recognisable by OpenCV?

I would like to know how OpenCV finds the devices it can read from.
Can OpenCV only read from USB devices?
I have a video acquisition card (BlackMagic Intensity), and until now I have used the BlackMagic SDK to retrieve the stream coming from the input port on the card and convert it to a cv::Mat that I then use in my code.
But couldn't I develop some kind of C++ plugin for OpenCV so that I can use VideoCapture to access my BlackMagic stream? Wouldn't that be a cleaner way of doing this?
Hope you can help
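For reference, wrapping a raw frame buffer from an acquisition SDK in a cv::Mat header is usually enough to get the data into OpenCV without copying. A minimal sketch, assuming the card delivers 8-bit BGRA frames with a known row stride (the function name and pixel format are assumptions for illustration, not the actual DeckLink API):

#include <opencv2/opencv.hpp>

// Hypothetical callback; pixels, width, height and rowBytes come from the SDK's frame object.
void onFrame(void* pixels, int width, int height, int rowBytes)
{
    cv::Mat frame(height, width, CV_8UC4, pixels, rowBytes);   // header over the SDK buffer, no copy

    cv::Mat bgr;
    cv::cvtColor(frame, bgr, cv::COLOR_BGRA2BGR);               // copy/convert before the SDK releases the buffer
    // ... hand bgr to the rest of the OpenCV pipeline ...
}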

Stereo image acquisition using bumblebee2

I am using the Bumblebee2 camera and I am having trouble acquiring stereo images from it. When I attempt to access the camera using MATLAB, the program crashes.
Does anyone know how I can acquire the stereo images using FlyCapture?
MATLAB cannot read the Bumblebee2 output directly. To do that, you'll have to record the stream and process it offline. I wrote a proprietary recorder based on the code samples in the SDK. You can split the left/right images and record each one in a separate video container (e.g. using OpenCV to write a compressed AVI file). Later, you can load these images into memory and use Triclops to compute disparity maps (or, alternatively, use OpenCV to run other algorithms, like semi-global block matching).
FlyCapture can capture image series or video clips, but you have less control over what you get. I suggest you use the code samples to write a simple recorder and then load your output into MATLAB in standard ways. Consult Point Grey tech support if needed.
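As a loose sketch of the record-then-process idea above (frame source, resolution and codec are assumptions; this uses the OpenCV 2.x StereoSGBM API, which is called through its function call operator):

#include <opencv2/opencv.hpp>

// Record the de-interleaved left/right images into two separate AVI containers.
void recordStereo(const std::vector<cv::Mat>& leftFrames,
                  const std::vector<cv::Mat>& rightFrames)
{
    cv::Size size = leftFrames[0].size();
    int fourcc = CV_FOURCC('M', 'J', 'P', 'G');                          // motion-JPEG keeps the recorder simple
    cv::VideoWriter leftWriter("left.avi", fourcc, 15.0, size, false);   // false = grayscale frames
    cv::VideoWriter rightWriter("right.avi", fourcc, 15.0, size, false);
    for (size_t i = 0; i < leftFrames.size(); ++i)
    {
        leftWriter.write(leftFrames[i]);
        rightWriter.write(rightFrames[i]);
    }
}

// Offline, after loading a left/right pair back into memory.
cv::Mat computeDisparity(const cv::Mat& left, const cv::Mat& right)
{
    cv::StereoSGBM sgbm(0, 64, 11);     // minDisparity, numDisparities, SADWindowSize
    cv::Mat disparity;
    sgbm(left, right, disparity);       // 16-bit fixed-point disparity map
    return disparity;
}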

OpenCV IplImage save/read to video

I'm trying to save a video to analyse later with OpenCV algorithms.
I'm using the camera's C++ library to obtain the frames.
IplImage *iplImageInput = QueryFrame(); //runs every 30 ms
std::vector <cv::Mat> splittedVector;
cv::split(cv::Mat(iplImageInput), splittedVector); //stereo vision camera
// splittedVector buffer is used by the algorithms
So I would like to replace the QueryFrame() call with data that was saved earlier.
I have already tried some things with cv::VideoWriter/cv::VideoCapture, but with no luck.
Do you have any hints on how to eliminate the need for the camera while testing the algorithms?
How should I implement a writer and reader to save around 150 frames?
Thanks a lot.
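One common way to do this is to dump the frames once with cv::VideoWriter and then point the processing code at a cv::VideoCapture opened on the resulting file instead of at QueryFrame(). A minimal sketch, assuming the 2.x API from the snippet above (file name, codec and frame rate are placeholders; a lossless codec is preferable if exact pixel values matter):

#include <opencv2/opencv.hpp>

IplImage* QueryFrame();   // provided by the camera's C++ library, as in the question

// Recording: run once while the camera is still attached (e.g. count = 150).
void recordFrames(int count)
{
    cv::VideoWriter writer;
    for (int i = 0; i < count; ++i)
    {
        IplImage* iplImageInput = QueryFrame();
        cv::Mat frame(iplImageInput);                     // header over the IplImage data, no copy
        if (!writer.isOpened())
            writer.open("recorded.avi", CV_FOURCC('M', 'J', 'P', 'G'),   // MJPG is lossy; swap in a lossless codec if needed
                        30.0, frame.size(), frame.channels() > 1);
        writer.write(frame);
    }
}

// Playback: replaces QueryFrame() while testing the algorithms.
void processRecording()
{
    cv::VideoCapture capture("recorded.avi");
    cv::Mat frame;
    while (capture.read(frame))
    {
        std::vector<cv::Mat> splittedVector;
        cv::split(frame, splittedVector);                 // same buffer the algorithms already consume
        // ... run the algorithms on splittedVector ...
    }
}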

FireWire camera with OpenCV 2.4 not working

I am using OpenCV 2.4 with C++ for image processing and video streaming. I would like to know how we can use a FireWire camera, such as a Pixelink, to capture frames. I tried the VideoCapture class, but it seems to work only with USB cameras, not FireWire ones. If someone has done the same thing with a FireWire camera, could you give some guidance on how to do it?
You can capture images using the FireWire SDK, or you can use the libdc1394 API. I found libdc1394 to be more reliable and easier to use, and there are a couple of examples available to get you started.
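For what it's worth, a bare-bones libdc1394 (version 2) capture loop looks roughly like the sketch below; error handling is omitted and the video mode is an assumption, so treat it as a starting point rather than a drop-in solution. The DMA buffer can be wrapped in a cv::Mat for the OpenCV side of the processing.

#include <dc1394/dc1394.h>
#include <opencv2/opencv.hpp>

int main()
{
    dc1394_t* ctx = dc1394_new();
    dc1394camera_list_t* list = NULL;
    dc1394_camera_enumerate(ctx, &list);                  // find attached IEEE 1394 cameras
    if (list == NULL || list->num == 0)
        return -1;

    dc1394camera_t* camera = dc1394_camera_new(ctx, list->ids[0].guid);
    dc1394_camera_free_list(list);

    dc1394_video_set_iso_speed(camera, DC1394_ISO_SPEED_400);
    dc1394_video_set_mode(camera, DC1394_VIDEO_MODE_640x480_MONO8);   // assumed mode; query the camera for supported ones
    dc1394_capture_setup(camera, 4, DC1394_CAPTURE_FLAGS_DEFAULT);    // 4 DMA buffers
    dc1394_video_set_transmission(camera, DC1394_ON);

    for (int i = 0; i < 100; ++i)
    {
        dc1394video_frame_t* frame = NULL;
        dc1394_capture_dequeue(camera, DC1394_CAPTURE_POLICY_WAIT, &frame);

        cv::Mat img(frame->size[1], frame->size[0], CV_8UC1, frame->image);   // header over the DMA buffer
        cv::Mat copy = img.clone();                       // clone before giving the buffer back
        // ... process copy with OpenCV ...

        dc1394_capture_enqueue(camera, frame);
    }

    dc1394_video_set_transmission(camera, DC1394_OFF);
    dc1394_capture_stop(camera);
    dc1394_camera_free(camera);
    dc1394_free(ctx);
    return 0;
}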