I would like to know how OpenCV finds the devices that it can read from.
Can OpenCV only read from USB devices?
I have a video acquisition card (a BlackMagic Intensity). Until now I have used the BlackMagic SDK to retrieve the stream coming from the input port on the card and converted it to a cv::Mat that I then use in my code.
But couldn't I develop some kind of C++ plugin for OpenCV so that I can use VideoCapture to access my BlackMagic stream? Wouldn't that be a cleaner way of doing this?
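Roughly speaking, this is how I do the conversion today (a simplified sketch; the BGRA pixel format and the helper signature are just for illustration, the real SDK callback looks different):

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Wrap a raw frame buffer handed over by the capture-card SDK into a
    // cv::Mat header (no pixel copy), then convert to 3-channel BGR.
    cv::Mat wrapSdkFrame(void *pixels, int width, int height, size_t rowBytes)
    {
        cv::Mat bgra(height, width, CV_8UC4, pixels, rowBytes); // assumes 8-bit BGRA
        cv::Mat bgr;
        cv::cvtColor(bgra, bgr, cv::COLOR_BGRA2BGR);            // copies into owned memory
        return bgr;
    }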
Hope you can help
Related
I'm trying to stream a video generated with OpenCV (using the webcam and doing some image processing). To enhance the challenge, we've decided to use OpenWebRTC. The OpenWebRTC examples are amazing, but they all use the webcam (I know, this is how webrtc is intended, to use the webcam), but we want to send Mat objects inside a while loop (very OpenCV style).
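The processing loop we want to feed into WebRTC is essentially this (a minimal sketch of the OpenCV side only; the grayscale conversion stands in for our real processing):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                // the webcam
        if (!cap.isOpened()) return 1;

        cv::Mat frame, processed;
        while (cap.read(frame)) {
            cv::cvtColor(frame, processed, cv::COLOR_BGR2GRAY); // placeholder processing

            // This is where we would like to hand 'processed' to OpenWebRTC
            // instead of just displaying it locally.
            cv::imshow("preview", processed);
            if (cv::waitKey(1) == 27) break;    // Esc to quit
        }
        return 0;
    }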
By chance, has anyone accomplished this or has any idea?
Thanks in advance,
—N
I have been trying to access my analog camera via an EasyCap video capture device. Any code I try only picks up the USB webcam or the internal webcam. I guess that since the video capture device is a video controller, OpenCV doesn't recognize it as an imaging device.
Can anyone confirm whether it is possible to access analog cameras with OpenCV via a video capture device?
If not, what other method can be used?
I struggled with the same problem (in my case in Python instead of C++, although I am certain it has the same root cause) and hope this helps!
the original thread + ANSWER
also relevant XKCD
Even for digital devices, OpenCV isn't good at reading them; it is good at processing them. The library has support for generic webcams, of course, but it does not support most commercial or industrial cameras.
In short, to decode the stream you should try the Video4Linux or Video for Windows libraries, or the device's own SDK. Since you are using a video-to-USB converter, you shouldn't have any problem accessing the analog camera through this software.
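For example, on Linux you can ask OpenCV for the Video4Linux backend explicitly and probe a few device indices, since a USB capture dongle usually shows up as an extra /dev/videoN node (a sketch assuming OpenCV 3.4 or newer; the indices are illustrative):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        for (int idx = 0; idx < 4; ++idx) {
            cv::VideoCapture cap(idx, cv::CAP_V4L2); // force the V4L2 backend
            if (!cap.isOpened()) continue;

            cv::Mat frame;
            if (cap.read(frame) && !frame.empty())
                std::cout << "index " << idx << ": "
                          << frame.cols << "x" << frame.rows << std::endl;
        }
        return 0;
    }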
I am using OpenCV 2.4 with C++ for image processing and video streaming. I would like to know how we can use a FireWire camera like a PixeLINK to capture frames. I tried the VideoCapture class, but it seems to work only with USB cameras, unlike FireWire ones. If someone has done the same thing with a FireWire camera, could you kindly give some guidance on how to do that?
You can capture images using the FireWire SDK, or you can also use the libdc1394 API. I found libdc1394 to be more reliable and easier to use, and there are a couple of examples available to get started.
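To give an idea, grabbing a single frame with libdc1394 (version 2) and wrapping it in a cv::Mat looks roughly like this; the 640x480 MONO8 video mode is an assumption that depends on your camera, and error checking is omitted:

    #include <dc1394/dc1394.h>
    #include <opencv2/core.hpp>

    int main()
    {
        dc1394_t *d = dc1394_new();
        dc1394camera_list_t *list = nullptr;
        dc1394_camera_enumerate(d, &list);
        if (!list || list->num == 0) return 1;

        dc1394camera_t *cam = dc1394_camera_new(d, list->ids[0].guid);
        dc1394_camera_free_list(list);

        dc1394_video_set_mode(cam, DC1394_VIDEO_MODE_640x480_MONO8); // camera dependent
        dc1394_capture_setup(cam, 4, DC1394_CAPTURE_FLAGS_DEFAULT);
        dc1394_video_set_transmission(cam, DC1394_ON);

        dc1394video_frame_t *frame = nullptr;
        dc1394_capture_dequeue(cam, DC1394_CAPTURE_POLICY_WAIT, &frame);

        // Wrap the DMA buffer, then clone so the data survives re-enqueueing.
        cv::Mat img = cv::Mat((int)frame->size[1], (int)frame->size[0],
                              CV_8UC1, frame->image).clone();
        dc1394_capture_enqueue(cam, frame);

        dc1394_video_set_transmission(cam, DC1394_OFF);
        dc1394_capture_stop(cam);
        dc1394_camera_free(cam);
        dc1394_free(d);
        return 0;
    }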
I want to decode the MPEG motion vectors using OpenCV in C++.
Is there any function in OpenCV through which we can get this?
Brightness may not be constant throughout the video in my case.
I am referring to the paper Efficient camera motion characterization for MPEG video indexing.
It says to use partial decoding to get the motion vectors from the MPEG-compressed video sequence.
But I am unable to determine how to do this using OpenCV.
How to proceed?
OpenCV uses FFmpeg, Video4Linux, or QuickTime as its backend video encoder/decoder. It cannot access internal data or partial decoding results, because it is just a wrapper over other libraries. All it does is take the decoded frames from the backend and convert them to IplImage or cv::Mat.
If you want to access the internal data, you will have to work with the FFmpeg code directly.
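For reference, this is roughly what that looks like with the FFmpeg libraries directly (not through OpenCV): a sketch assuming an FFmpeg version that supports the export_mvs decoder flag, which makes the decoder attach the motion vectors to each frame as side data.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/motion_vector.h>
    }
    #include <cstdio>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;

        AVFormatContext *fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        avformat_find_stream_info(fmt, nullptr);

        int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (vidx < 0) return 1;

        const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);

        // Ask the decoder to export motion vectors as per-frame side data.
        AVDictionary *opts = nullptr;
        av_dict_set(&opts, "flags2", "+export_mvs", 0);
        avcodec_open2(ctx, dec, &opts);
        av_dict_free(&opts);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) >= 0) {
                while (avcodec_receive_frame(ctx, frame) >= 0) {
                    AVFrameSideData *sd =
                        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
                    if (sd) {
                        const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
                        size_t n = sd->size / sizeof(*mvs);
                        for (size_t i = 0; i < n; ++i)
                            std::printf("%dx%d block: (%d,%d) -> (%d,%d)\n",
                                        mvs[i].w, mvs[i].h,
                                        mvs[i].src_x, mvs[i].src_y,
                                        mvs[i].dst_x, mvs[i].dst_y);
                    }
                    av_frame_unref(frame);
                }
            }
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }

FFmpeg ships a similar extract_mvs example in its source tree, which is worth cross-checking for your version.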
I need to read an avi file and process data from it in OpenCV.
Should I use cvCapture or DirectShow?
What are the criteria?
What are the advantages of each?
Thanks
Regarding OpenCV, the most obvious advantages are:
It provides a cross-platform solution for retrieving frames from the camera;
cvCapture is incredibly easy to use.
Disadvantages:
OpenCV supports a smaller number of camera devices.
Check cv::VideoCapture if you want to use the C++ interface of OpenCV.
If you are going to use OpenCV to process the data anyway, it makes no sense to add DirectShow to your project just to grab frames, as long as your camera or file is supported by OpenCV.
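For the AVI case specifically, the cv::VideoCapture route is just a few lines (a minimal sketch; "input.avi" is a placeholder, and it assumes your OpenCV build has FFmpeg support for the file's codec):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture cap("input.avi");   // placeholder file name
        if (!cap.isOpened()) {
            std::cerr << "Could not open the file" << std::endl;
            return 1;
        }

        cv::Mat frame;
        while (cap.read(frame)) {
            // ... process 'frame' here ...
        }
        return 0;
    }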