How can I retrieve frames using OpenCV in iOS - C++

I found the retrieve function in OpenCV for an OpenNI camera:
https://github.com/BelBES/HandDetector/blob/master/main.cpp
cv::VideoCapture cap(CV_CAP_OPENNI);
cap.retrieve(mat, CV_16UC1);
Can I retrieve it the same way in iOS?
I tried the following, but it just shows black.
mat.convertTo(depthMap, CV_8UC1, 1.0/255);
Any idea?
Or, how can I convert this main.cpp for iOS?

In the iOS opencv2.framework there is a CvCapture type, which corresponds to cv::VideoCapture in C++.
Try it.

Related

Convert a PXCImage into an OpenCV Mat (PIXEL_FORMAT_YUY2)

I'm working with Intel's RealSense SDK and I have to convert a PXCImage into an OpenCV format.
I saw this solution in the forum (Convert a PXCImage into an OpenCV Mat), but that code doesn't work for the "PIXEL_FORMAT_YUY2" type.
Does anyone know how to change it?
Thanks in advance
I don't know the Intel RealSense SDK, as I have only used the librealsense API.
The documentation for the Intel RealSense SDK should be here.
I don't know how it works with the SDK, but with librealsense you can directly select the appropriate color format (for an OpenCV Mat it should be bgr8).
If you don't have this option with the SDK, you can see here how librealsense unpacks the yuy2 format.
Or maybe you can try to copy the data directly into a Mat (you will have to figure out the right values for cvDataType and cvDataWidth) and then use cvtColor() with the appropriate conversion if you want to be able to access the pixel values as RGB triplets.
Hope it helps.

Getting a video stream from Baumer GigE cameras and using it in OpenCV

I'm trying to get a video stream from a Baumer txg12 GigE camera to use in my OpenCV app, but I don't know how to achieve it.
Does anybody have experience with using the Baumer SDK with OpenCV?
How can I get my video stream?

How to capture video from an external camera using OpenCV on Ubuntu

I am a beginner with Ubuntu. I am trying to drive an external camera on Ubuntu 15.04, and I want to know how to combine the OpenCV library with the camera driver so that I can capture video with code like
VideoCapture cap;
cap.open(0); // 0, 1, 2...
Does anyone have some idea? Looking forward to your reply!
It depends on the camera you use: for some cameras you need to do nothing, while others come with their own API that you use to grab the video before handing it to OpenCV for processing.

Firewire camera with OpenCV 2.4 not working

I am using OpenCV 2.4 with C++ for image processing and video streaming. I would like to know how we can use a firewire camera like pixilink to capture frames. I tried the VideoCapture class, but it seems to work only with USB cameras, not firewire ones. If someone has done the same thing with a firewire camera, kindly give some guidance on how to do that.
You can capture images using the firewire SDK, or you can also use the libdc1394 API. I found libdc1394 to be more reliable and easier to use, and there are a couple of examples available to get started.

Setting video capture properties no longer works in OpenCV 2.2?

Prior to OpenCV 2.2, I was able to do
VideoCapture capture(0);
capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
in order to modify the frame size. But after I compiled my application against OpenCV 2.2, setting the properties no longer works (the video is displayed correctly, though). If I do a get on these values, 0 is returned. And if I look at the size of the captured frame, it is 160 x 120.
I searched online but most of the posts were about the problem in Linux whereas I am running Windows 7 64-bit. My webcam is a Logitech QuickCam Ultra Vision.
Is there anyone experiencing the same problem? Or no problem at all?
Thanks in advance!
This problem has been solved in OpenCV 2.3, even with my old Logitech QuickCam Ultra Vision webcam.
Maybe you should try VideoInput, which is also supported by OpenCV and included in OpenCV 2.0.3.
See an example at http://opencv.willowgarage.com/wiki/CameraCapture