Basically, I have a prototype ready with OpenCV that captures images from connected webcams. We need to ship it to customers, some of whom use the Surface Pro, which has an integrated rear camera.
I am not sure whether my code would detect the integrated Surface Pro camera on that device, and we currently do not have access to such a machine.
So, is there a way to validate this? I can think of two options:
Is there any emulator available for Surface Pro camera?
Does OpenCV provide a list of cameras which it supports?
Would really appreciate any form of assistance here!
OpenCV works via the OS camera drivers. If the Surface Pro camera appears as a normal camera to Windows, OpenCV should see it as just another device.
For the record, this Stack Overflow answer gives code for iterating over the available devices.
And I can personally verify that OpenCV works with the Surface Pro cameras (front and rear). We are using the EMGU port of it.
Related
In the AR.js demo,
Android phones with multiple rear cameras tend to use the wrong lens, such as a telephoto lens. For example, the Huawei Mate 20 Pro uses the 3x lens.
How do I select the right camera to use?
I'm facing the same issue. I tried using Opera; it detects all cameras and lets you choose the one you want. But in my case (Samsung Note 10+) the zoom cameras and the front camera work, yet the main one doesn't.
Check whether it works on your phone.
I'm new to ROS and OpenCV and am trying to figure things out. It seems that to do anything with vision, the camera needs to be calibrated. I found what looks to be a simple calibration method in the ROS tutorials here: https://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration
However, one of the tutorial's assumptions is "a stereo camera publishing left and right images over ROS." I have no idea how to do this. Thanks to anyone who can help me.
This might be a separate issue, but when I use the stereo camera in OpenCV it is only recognized at one index (my laptop's built-in webcam is 0 and the MYNT EYE is 1). When it displays, it puts both lens views in the same window, so it looks like the camera is cross-eyed.
Use the MYNT-EYE-ROS-Wrapper:
https://github.com/slightech/MYNT-EYE-ROS-Wrapper
Call
roslaunch mynteye_ros_wrapper mynt_camera_display.launch
to run the wrapper
Then modify the topic published by the mynteye node to feed into the stereo calibration node accordingly.
I'm working on a project requiring real-time access to the webcam, and have problems with getting a suitable camera stream under Windows 10 for processing the frames with OpenCV.
I'm able to access the camera just fine under Windows 8.1 using either
OpenCV 2.4.9 with Evgeny Pereguda's VideoInput library (http://www.codeproject.com/Articles/776058/Capturing-Live-video-from-Web-camera-on-Windows-an) for accessing the camera through Windows Media Foundation, or
OpenCV 3.0 without any additional libraries
These allow capturing the webcam stream at a high frame rate (~30 fps) and setting the webcam resolution with e.g.
cvCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cvCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
Under Windows 10, however, both solutions above result in problems:
the solution using OpenCV 2.4.9 and the VideoInput library allows setting the resolution to 640x480, but the frame rate is around 1 FPS (or worse?!), and the picture is very dark
the solution using OpenCV 3.0 gives me a nice 1920x1080 image at a good frame rate, but I'm unable to set the resolution for the stream
I even tried opening the camera stream with:
cv::VideoCapture cvCapture( CV_CAP_DSHOW + camnum );
cv::VideoCapture cvCapture ( CV_CAP_MSMF + camnum );
The first one works (as far as opening the stream goes, but with the same problems as above); the MSMF (Microsoft Media Foundation) one results in cvCapture.isOpened() returning false.
Handling the FullHD stream in real time isn't feasible for the image processing algorithms, nor is resizing the resulting frame down with OpenCV.
The Windows 8.1 version is running on a Surface Pro 3 (Core i7), and the Windows 10 version on a Surface Pro 4 (Core i7). Could this be a hardware / camera driver problem? I tried finding updated drivers for the Surface 4, but to no avail.
Has anyone had similar problems? Is there an obvious solution I'm overlooking?
I think that your problem with videoInput on Windows 10 is related to selecting the correct mediaType of the web cameras. The fact is that OpenCV uses DirectShow by default, and videoInput on Media Foundation is only optional.
I advise you to check these variables:
float MF_MT_FRAME_RATE_RANGE_MAX;
float MF_MT_FRAME_RATE;
float MF_MT_FRAME_RATE_RANGE_MIN;
in
// Structure of info MediaType
struct MediaType
I can also recommend Capture Manager Topology Editor, a free tool for working with web cameras via Media Foundation. It lets you verify which Media Foundation features are accessible on your Surface Pro 4 (Core i7).
With best regards,
Evgeny Pereguda
I am doing a project regarding image processing and multiple person counting and was wondering, how exactly can I plug my ION AIR PRO PLUS video recording device (similar to a goPro), and use it as my 'webcam'? Basically, I want to plug it in and then access it via a live feed using Microsoft Visual Studio 2010 and OpenCV, and then do real time tracking of people walking.
What I am struggling with is accessing the external camera from my program. Anyone know how to do this?
The video camera has no wifi, only an hdmi output, RGB cable output and a USB.
Attach the USB cable and instantiate cv::VideoCapture(0). On Linux, local cameras get numeric indices; I think it should be the same on Windows.
Peter, in the past I have worked on camera products on Windows XP and 7. On Windows, USB cameras can be accessed using DirectShow.
You can implement a DirectShow filter for the people-tracking algorithm and fit it into the DirectShow pipeline right after your camera plugin.
Here is a link to an application stack that may suit your use case (to give you an idea):
http://www.e-consystems.com/blog/camera/?p=1302
Recent Windows operating systems that run WinRT use a newer framework called Media Foundation. As it's very new, we found some limitations when we tried to build similar applications.
To quickly see a preview from your camera, search for "GraphEdit" and install it on a Windows 7 PC. It's a fairly simple tool. You can drag and drop your camera; if DirectShow is supported, render its output pin and play for a preview.
I do not think this is a standard webcam; it appears to work only as a mass storage device. One thing you can try is removing the micro SD card and connecting it to the computer. This works on some cameras.
Assuming this does not work, one option would be to purchase an HDMI capture card. The YK762H PCI-E card costs around $40, and will allow you to use the camera with OpenCV, the videoInput library, or DirectShow.
The other option is to use the WiFi live preview. You would have to figure out the commands sent to the camera. This has already been done with the SJCam wifi cameras, the GoPro models, and Sony cameras such as the QX-10 and HDR-AS100V.
I am trying to view the depth images from Kinect for Windows and SoftKinetic DepthSense 325 at the same time. Later, I plan to write a program to grab and write the depth images from both devices.
For viewing depth images from Kinect for Windows, I am using the DepthBasics-D2D program from the Kinect SDK. For viewing depth images from SoftKinetic camera, I am using the DepthSenseViewer that ships with the driver.
I find that these two devices cannot be plugged in and used at the same time!
If I have SoftKinetic plugged in and DepthSenseViewer displaying the depth and then I plug in the Kinect, then the DepthBasics program reports that no Kinect could be found.
If I have the Kinect plugged in and the DepthBasics program displaying the depth, and then I run the DepthSenseViewer and try to register the depth node, it reports an error: couldn't start streaming (error 0x3704).
Why can't I view depth images from both the Kinect and the SoftKinetic simultaneously? Is there a way I can grab depth images from both devices?
Check that the two devices aren't trying to run on the same USB hub. To resolve the problem, you might try one device on a USB 2 port and the other on a USB 3 port.
(Egad this is an old post. Anyway, it will still be helpful to someone.)