uEye camera not detected with VideoCapture - C++

I'm pretty stuck on a problem with my uEye camera. Using my laptop camera (id 0) or a USB webcam (id 1), this line works perfectly: TheVideoCapturer.open(1); (TheVideoCapturer is an instance of OpenCV's VideoCapture class).
Unfortunately, when I try to do the same with my uEye camera, it can't be found. I checked the camera ID in the uEye Camera Manager, and it's 1 (or 35, in some expert mode). I'd like to use it the same way as the cameras mentioned above.
I've got the drivers installed, because, well, the uEye Camera Manager works and gives me a stream, and the ROS node ueye_cam works fine as well.
Any sort of advice would be greatly appreciated.

Even though you have probably already figured it out: as far as I know, you cannot use VideoCapture directly with uEye cameras. You have to use their own SDK to access the video stream (or to take a single snapshot, depending on your case). After that, you can use memcpy() to copy the memory pointed to by the void pointer filled by is_GetImageMem(...) into the Mat object (via cv::Mat::ptr()). If you look closely at what the ROS node for uEye does, it actually uses the functions provided by the uEye SDK to configure and access the camera. ROS also has its own image format, which is why an interface (called cv_bridge) is implemented to convert ROS images to OpenCV images. Overall it's a ridiculous salad of data copying and conversion, but since this is how things currently are, you don't have much of a choice there.
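In case it helps, here is a minimal, untested sketch of that approach with the uEye SDK. The 1280x1024 size is a placeholder (you would normally query it via is_GetSensorInfo or is_AOI), and error handling is mostly omitted:

#include <ueye.h>
#include <opencv2/core.hpp>
#include <cstring>

int main() {
    HIDS hCam = 0;                                 // 0 = first available camera
    if (is_InitCamera(&hCam, NULL) != IS_SUCCESS) return 1;

    const int width = 1280, height = 1024;         // placeholder sensor size
    is_SetColorMode(hCam, IS_CM_BGR8_PACKED);      // BGR matches OpenCV's channel order

    char* pMem = NULL;
    int memId = 0;
    is_AllocImageMem(hCam, width, height, 24, &pMem, &memId);
    is_SetImageMem(hCam, pMem, memId);

    if (is_FreezeVideo(hCam, IS_WAIT) == IS_SUCCESS) {   // take a single snapshot
        void* pImg = NULL;
        is_GetImageMem(hCam, &pImg);               // void pointer to the active image memory
        cv::Mat frame(height, width, CV_8UC3);
        std::memcpy(frame.ptr(), pImg, (size_t)width * height * 3);
        // ... process frame with OpenCV ...
    }

    is_FreeImageMem(hCam, pMem, memId);
    is_ExitCamera(hCam);
    return 0;
}

(Instead of memcpy you could also wrap the buffer without copying, cv::Mat(height, width, CV_8UC3, pImg), as long as the uEye image memory outlives the Mat.)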

Related

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom Nvidia Jetson TX2 board, and it is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them, and feed them into OpenPose on a per-picture basis; this works fine, but 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore things like tracking, which is available for streams, since I'm trying to maximize the fps. I believe the stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence, which would be addressed later: one frame goes through pose estimation, and the 3-4 subsequent frames just track the object with decreasing confidence levels. I tried that with a plug-and-play camera and an OpenPose example, and the results were somewhat satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to provide that capture to OpenPose for processing (one possible wiring is sketched after the list below).
If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described, or different ideas, are welcome.
Things I already tried:
Lower the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Use CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, check for changes between frames, crop the image down to the area that changed, and run pose estimation on that section (10% speedup, high risk of missing something)
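For reference, a rough sketch of one way to wire this up, assuming OpenCV was built with GStreamer support and a recent OpenPose (1.7.x) built with its C++ API; the pipeline string is a placeholder for the custom source, and older OpenPose versions accepted cv::Mat directly instead of op::Matrix:

#include <openpose/headers.hpp>
#include <opencv2/opencv.hpp>

int main() {
    // Start OpenPose in asynchronous mode so frames can be pushed manually
    op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start();

    // Placeholder pipeline; substitute your custom GStreamer source here
    cv::VideoCapture cap("yoursrc ! videoconvert ! video/x-raw,format=BGR ! appsink",
                         cv::CAP_GSTREAMER);

    cv::Mat frame;
    while (cap.read(frame)) {
        const op::Matrix opFrame = OP_CV2OPCONSTMAT(frame);
        auto datum = opWrapper.emplaceAndPop(opFrame);       // blocking push/pop
        if (datum != nullptr && !datum->empty()) {
            const auto& keypoints = datum->at(0)->poseKeypoints;
            // ... use keypoints, e.g. hand off to a tracker for the next 3-4 frames ...
        }
    }
    return 0;
}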

Publishing Left and Right video feeds in ROS using a MYNTEYE stereo camera

I'm new to ROS and OpenCV and am trying to figure things out. It seems that to do anything with vision, the camera needs to be calibrated. I found what looks to be a simple method of calibration in the ROS tutorials here: https://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration
However, one of the tutorial's assumptions is "a stereo camera publishing left and right images over ROS." I have no idea how to do this. Thanks to anyone who can help.
This might be a separate issue, but when I use the stereo camera in OpenCV, it is only recognized at one index (my built-in laptop webcam would be 0 and the MYNTEYE would be 1). When it does display, it puts both lens views in the same window, so it looks like the camera is cross-eyed.
Use the MYNT-EYE-ROS-Wrapper:
https://github.com/slightech/MYNT-EYE-ROS-Wrapper
Call
roslaunch mynteye_ros_wrapper mynt_camera_display.launch
to run the wrapper
Then modify the topics published by the mynteye node to feed into the stereo calibration node accordingly.
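Once the wrapper is publishing, the calibration tutorial's command can be remapped to those topics. A hedged example (the exact topic names depend on the wrapper version, so check rostopic list first, and the --size/--square values must match your checkerboard):

rosrun camera_calibration cameracalibrator.py --approximate 0.1 --size 8x6 --square 0.024 right:=/mynteye/right/image_raw left:=/mynteye/left/image_raw right_camera:=/mynteye/right left_camera:=/mynteye/left

As for the cross-eyed OpenCV window: the camera delivers both lens views side by side in one frame, so outside ROS you can split each frame into left and right halves (e.g. with cv::Rect ROIs) before using them.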

Stereo image acquisition using bumblebee2

I am using the Bumblebee2 camera and I am having trouble with acquiring stereo images from it. When I attempt to access the camera using MATLAB, the program crashes.
Does anyone know how I can acquire the stereo images using FlyCapture?
Matlab cannot read the Bumblebee 2 output directly. To do that, you'll have to record the stream and process it offline. I wrote a proprietary recorder based on the code samples in the SDK. You can split the left/right images and record each one in a separate video container (e.g., using OpenCV to write a compressed AVI file). Later, you can load these images into memory and use Triclops to compute disparity maps (or, alternatively, use OpenCV to run other algorithms, like semi-global block matching).
FlyCapture can capture image series or video clips, but you have less control over what you get. I suggest you use the code samples to write a simple recorder, and then load your output into Matlab in standard ways. Consult Point Grey tech support.
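If you take the OpenCV route for the offline step, a minimal sketch of the disparity part (OpenCV 3+ API; the file names are placeholders for the two streams recorded earlier, and the SGBM parameters are illustrative and need tuning for your baseline and resolution):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capL("left.avi"), capR("right.avi");  // placeholder file names

    // numDisparities must be divisible by 16; blockSize should be odd
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);

    cv::Mat left, right, grayL, grayR, disparity;
    while (capL.read(left) && capR.read(right)) {
        cv::cvtColor(left, grayL, cv::COLOR_BGR2GRAY);
        cv::cvtColor(right, grayR, cv::COLOR_BGR2GRAY);
        sgbm->compute(grayL, grayR, disparity);   // 16-bit output, fixed point scaled by 16
        // ... normalize/convert disparity for display, or save it for Matlab ...
    }
    return 0;
}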

Can OpenCV access an analog camera using a video capture device?

I have been trying to access my analog camera via an EasyCap video capture device. Any code I try only picks up the USB webcam or the internal webcam. I guess that since the video capture device is a video controller, OpenCV doesn't recognize it as an imaging device.
Can anyone confirm that you cannot access analog cameras with OpenCV via a video capture device?
If so, then what other method can be used?
I struggled with the same problem (in my case in Python instead of C++, although I am certain it has the same root cause).
Even for digital devices, OpenCV isn't good at reading them; it is good at processing them. The library has support for generic webcams, of course; however, it does not support most commercial or industrial cameras.
In short, to decode the stream you should try using the Video4Linux or Video for Windows libraries, or the device's SDK itself. Since you are using a video-to-USB converter, you shouldn't have any problem accessing the analog camera through this software.
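A quick way to check what OpenCV can actually see is to probe the first few device indices; the EasyCap usually appears as an extra index once its driver exposes it through V4L2 (Linux) or DirectShow (Windows). A small sketch:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    for (int i = 0; i < 5; ++i) {
        cv::VideoCapture cap(i);     // with OpenCV 3.4+ you can also force a backend,
                                     // e.g. cv::VideoCapture cap(i, cv::CAP_V4L2)
        if (cap.isOpened()) {
            std::cout << "Device " << i << ": "
                      << cap.get(cv::CAP_PROP_FRAME_WIDTH) << "x"
                      << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << std::endl;
        }
    }
    return 0;
}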

Firewire camera with OpenCV 2.4 not working

I am using OpenCV 2.4 with C++ for image processing and video streaming. I would like to know how to use a FireWire camera, like a PixeLINK one, to capture frames. I tried the VideoCapture class, but it seems to work only with USB cameras, unlike FireWire ones. If someone has done the same thing with a FireWire camera, kindly give some guidance on how to do that.
You can capture images using the FireWire SDK, or you can also use the libdc1394 API. I found libdc1394 to be more reliable and easier to use, and there are a couple of examples available to get started.
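For reference, a minimal libdc1394 (version 2) capture sketch, adapted from the library's grab examples and untested here; the video mode is an assumption you would match to your camera's capabilities:

#include <dc1394/dc1394.h>
#include <opencv2/opencv.hpp>

int main() {
    dc1394_t* d = dc1394_new();
    dc1394camera_list_t* list;
    dc1394_camera_enumerate(d, &list);
    if (list->num == 0) return 1;                        // no FireWire camera found

    dc1394camera_t* camera = dc1394_camera_new(d, list->ids[0].guid);
    dc1394_camera_free_list(list);

    // Assumed mode; query supported modes with dc1394_video_get_supported_modes
    dc1394_video_set_iso_speed(camera, DC1394_ISO_SPEED_400);
    dc1394_video_set_mode(camera, DC1394_VIDEO_MODE_640x480_MONO8);
    dc1394_capture_setup(camera, 4, DC1394_CAPTURE_FLAGS_DEFAULT);
    dc1394_video_set_transmission(camera, DC1394_ON);

    dc1394video_frame_t* frame;
    dc1394_capture_dequeue(camera, DC1394_CAPTURE_POLICY_WAIT, &frame);
    // Wrap the DMA buffer in a Mat header and clone it before releasing the buffer
    cv::Mat img = cv::Mat(frame->size[1], frame->size[0], CV_8UC1, frame->image).clone();
    dc1394_capture_enqueue(camera, frame);

    // ... process img with OpenCV ...

    dc1394_video_set_transmission(camera, DC1394_OFF);
    dc1394_capture_stop(camera);
    dc1394_camera_free(camera);
    dc1394_free(d);
    return 0;
}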