For several days now, I have been encountering a problem with depth frame acquisition using my Asus Xtion Pro Live, OpenNI 2, and Qt. My application works fine when the camera is connected to a USB 2.0 port, but when I connect the camera to a USB 3.0 port, I cannot display images from the depth stream.
To isolate the problem, I wrote a very basic console application that just acquires color and depth frames and, for each frame, writes the timestamp and frame index to a file. It turns out that the number of depth frames received is very small compared to the number of color frames (1784 color frames versus 464 depth frames over a 1-minute acquisition). With the camera connected to a USB 2.0 port, I get the full 1784 color frames and 1784 depth frames.
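For reference, the depth half of that logging loop looks roughly like this (a minimal OpenNI 2 sketch with most error handling omitted; the color stream is handled the same way with SENSOR_COLOR):

// Minimal OpenNI 2 depth-frame logger: writes each frame's
// timestamp (microseconds) and index to a file.
#include <OpenNI.h>
#include <cstdio>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return 1;

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    FILE* log = fopen("depth_frames.txt", "w");
    openni::VideoFrameRef frame;
    for (int i = 0; i < 1800; ++i)   // roughly 1 minute at 30 fps
    {
        if (depth.readFrame(&frame) != openni::STATUS_OK)
            break;
        fprintf(log, "%llu %d\n",
                (unsigned long long)frame.getTimestamp(),
                frame.getFrameIndex());
    }
    fclose(log);

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}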
I also noticed that when using QApplication instead of QCoreApplication, the number of depth frames drops even further (44 depth frames for a 1-minute acquisition).
Do you think the problem comes from Qt, or from the camera and its drivers? I read on the Asus support site that there were some problems with the Asus Xtion Pro Live and USB 3.0. I downloaded a patch from http://reconstructme.net/2012/10/13/asus-xtion-usb-3-0-hotfix-2/ but it doesn't fix my problem.
Thanks!
Last week, I decided to take my courage in both hands and tried to solve my problem with the Asus Xtion Pro Live and Qt. I am sorry to resurrect this topic, but I recently came across a website that solves my problem: http://www.qimaging.com/support/chipset.php. As they say, the problem comes from the USB controller:
"Intel released a new version of their mother board chipsets (Series 7/C216, Series 8/C220 and later) with a native Intel based USB 3.0 host controller (USB3 extensible host controller, xHCI in the Device Manager). On these newer PCs, the Intel USB 3.0 host controller does not communicate properly with some traditional USB 2.0 chipsets to the extent where data is not properly delivered"
I followed their advice and bought a StarTech 2 Port PCI Express Card Adapter, and now I can acquire the depth and color streams from the camera without any problem.
I'm working on a project requiring real-time access to the webcam, and I'm having problems getting a suitable camera stream under Windows 10 for processing the frames with OpenCV.
I'm able to access the camera just fine under Windows 8.1 using either:

- OpenCV 2.4.9 with Evgeny Pereguda's VideoInput library (http://www.codeproject.com/Articles/776058/Capturing-Live-video-from-Web-camera-on-Windows-an) for accessing the camera through Windows Media Foundation, or
- OpenCV 3.0 without any additional libraries
These allow capturing the webcam stream at a high frame rate (~30 fps) and setting the webcam resolution with, e.g.:
cvCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cvCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
Under Windows 10, however, both solutions above result in problems:
- the solution using OpenCV 2.4.9 and the VideoInput library allows setting the resolution to 640x480, but the frame rate is around 1 fps (or worse?!), and the picture is very dark
- the solution using OpenCV 3.0 gives me a nice 1920x1080 image at a good frame rate, but I'm unable to set the resolution of the stream
I even tried opening the camera stream with:
cv::VideoCapture cvCapture( CV_CAP_DSHOW + camnum );
cv::VideoCapture cvCapture ( CV_CAP_MSMF + camnum );
The first one works (as far as opening the stream goes, with the same problems as above); the MSMF (Microsoft Media Foundation) version results in cvCapture.isOpened() returning false.
Handling the Full HD stream in real time isn't feasible for the image processing algorithms, and neither is resizing the resulting frames down with OpenCV.
The Windows 8.1 version is running on a Surface Pro 3 (Core i7), and the Windows 10 version on a Surface Pro 4 (Core i7). Could this be a hardware or camera driver problem? I tried finding updated drivers for the Surface Pro 4, but to no avail.
Has anyone had similar problems? Is there an obvious solution I'm overlooking?
I think your problem with videoInput on Windows 10 is related to selecting the correct media type of the web camera. The fact is that OpenCV uses DirectShow by default, and videoInput via Media Foundation is only optional.
I advise you to check these variables:
float MF_MT_FRAME_RATE_RANGE_MAX;
float MF_MT_FRAME_RATE;
float MF_MT_FRAME_RATE_RANGE_MIN;
in

// Structure of MediaType info
struct MediaType
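As a quick cross-check from the OpenCV side, you can also ask the capture backend which frame rate and frame size it negotiated (a small sketch using the OpenCV 2.x constants; some backends report 0 for properties they don't expose, so treat the values as indicative only):

// Query the negotiated capture parameters; indicative only,
// since some backends return 0 for unsupported properties.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);   // device index 0 is an assumption
    std::cout << "reported FPS: " << cap.get(CV_CAP_PROP_FPS) << std::endl;
    std::cout << "frame size:   " << cap.get(CV_CAP_PROP_FRAME_WIDTH)
              << "x" << cap.get(CV_CAP_PROP_FRAME_HEIGHT) << std::endl;
    return 0;
}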
I can also recommend the Capture Manager Topology Editor, a free tool for working with web cameras via Media Foundation. It lets you verify which Media Foundation features are accessible on your Surface Pro 4 (Core i7).
With best regards,
Evgeny Pereguda
I am doing a project on image processing and multiple-person counting, and was wondering: how exactly can I plug in my ION AIR PRO PLUS video recording device (similar to a GoPro) and use it as my 'webcam'? Basically, I want to plug it in and then access it via a live feed using Microsoft Visual Studio 2010 and OpenCV, and then do real-time tracking of people walking.
What I am struggling with is accessing the external camera from my program. Does anyone know how to do this?
The video camera has no WiFi, only an HDMI output, an RGB cable output, and a USB port.
Attach the USB cable and instantiate cv::VideoCapture(0). On Linux, local cameras get numeric indices; I think it should be the same on Windows.
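A minimal capture loop along those lines (the device index 0 is an assumption; if the camera is not the first video device, try 1, 2, and so on):

// Open the first local camera and show a live preview.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);          // first local camera
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (true)
    {
        if (!cap.read(frame))         // grab and decode one frame
            break;
        cv::imshow("preview", frame); // replace with your tracking code
        if (cv::waitKey(30) == 27)    // Esc quits
            break;
    }
    return 0;
}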
Peter, in the past I have worked on camera products on Windows XP and 7. On Windows, USB cameras can be accessed using DirectShow.
You can implement a DirectShow filter for your people-tracking algorithm and fit it into the DirectShow pipeline right after your camera's capture filter.
Here is a link to an application stack that may suit your use case (to give you an idea):
http://www.e-consystems.com/blog/camera/?p=1302
Recent Windows operating systems that run WinRT use a newer framework called Media Foundation. As it is very new, we found some limitations when we tried to build similar applications with it.
To quickly see a preview from your camera, please search for "GraphEdit" and install it on a Windows 7 PC. It's a fairly simple tool: you can drag and drop your camera, and if DirectShow is supported, render its output pin and hit play for a preview.
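On the programmatic side, listing the DirectShow capture devices (the same entries GraphEdit shows under "Video Capture Sources") goes roughly like this with the standard COM APIs (a sketch, error handling trimmed):

// Enumerate DirectShow video capture devices and print their names.
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids")
#pragma comment(lib, "ole32")
#pragma comment(lib, "oleaut32")

int main()
{
    CoInitialize(NULL);

    ICreateDevEnum* devEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&devEnum);

    IEnumMoniker* monikers = NULL;
    if (devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                       &monikers, 0) == S_OK)
    {
        IMoniker* moniker = NULL;
        while (monikers->Next(1, &moniker, NULL) == S_OK)
        {
            IPropertyBag* props = NULL;
            if (SUCCEEDED(moniker->BindToStorage(0, 0, IID_IPropertyBag,
                                                 (void**)&props)))
            {
                VARIANT name;
                VariantInit(&name);
                if (SUCCEEDED(props->Read(L"FriendlyName", &name, 0)))
                    wprintf(L"%s\n", name.bstrVal);   // camera's display name
                VariantClear(&name);
                props->Release();
            }
            moniker->Release();
        }
        monikers->Release();
    }
    devEnum->Release();
    CoUninitialize();
    return 0;
}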
I do not think this is a standard webcam; it appears to work only as a mass storage device. One thing you can try is removing the micro SD card and connecting it to the computer. This works on some cameras.
Assuming this does not work, one option would be to purchase an HDMI capture card. The YK762H PCI-E card costs around $40, and will allow you to use the camera with OpenCV, the videoInput library, or DirectShow.
The other option is to use the WiFi live preview. You would have to figure out the commands sent to the camera. This has already been done with the SJCam wifi cameras, the GoPro models, and Sony cameras such as the QX-10 and HDR-AS100V.
I am trying to view the depth images from Kinect for Windows and SoftKinetic DepthSense 325 at the same time. Later, I plan to write a program to grab and write the depth images from both devices.
For viewing depth images from Kinect for Windows, I am using the DepthBasics-D2D program from the Kinect SDK. For viewing depth images from SoftKinetic camera, I am using the DepthSenseViewer that ships with the driver.
I find that these two devices cannot be plugged in and used at the same time!
If I have the SoftKinetic plugged in with DepthSenseViewer displaying depth and then plug in the Kinect, the DepthBasics program reports that no Kinect could be found.
If I have the Kinect plugged in with the DepthBasics program displaying depth and then run the DepthSenseViewer and try to register the depth node, it reports an error: couldn't start streaming (error 0x3704).
Why can't I view depth images from both the Kinect and the SoftKinetic simultaneously? Is there a way I can grab depth images from both devices?
Check that the two devices aren't trying to run on the same USB hub. To try to resolve the problem, you might put one device on a USB 2.0 port and the other on a USB 3.0 port.
(Egad this is an old post. Anyway, it will still be helpful to someone.)
I have created a C++ DLL that displays a live camera stream on a panel handle using the DirectShow API. My camera is a Logitech C920 webcam, which offers the H.264 codec on its third output pin.
When I use GraphEdit, if I connect the Logitech webcam's third output pin to the first input pin of Video Mixing Renderer 9, it automatically adds the Microsoft DTV-DVD decoder between them, like this:
Logitech HD Pro Webcam C920 [Capturer] => [VMR Input0] Video Mixing Renderer 9
(when connected, this becomes)
Logitech HD Pro Webcam C920 [Capturer] => [video Input1] Microsoft DTV-DVD Video Decoder [video Output 1] => [VMR Input0] Video Mixing Renderer 9
The quality is very nice and I get fast video streaming rates in the ActiveMovie window.
Here is where it goes wrong. In my code, I connect the third pin of the capture source directly to the VMR-7 input pin (without adding the DTV-DVD decoder). I have also set the video format to 1600x896 with the H.264 media type via IAMStreamConfig.
I read on MSDN (if I understood correctly) that DirectShow will automatically insert the necessary filters between two connected pins. It works, but the video quality is terrible; it looks like a lot of pixels are mixed up or corrupted. I also don't have any evidence that the filter was actually added. Is it because I haven't programmatically put the decoder between them? And if so, how do I add this filter?
Thanks in advance, and sorry for my English.
The problem is here:
if I connect the Logitech Webcam 3rd output Pin to Video Mixing Render 9
versus
I have directly connected the 3rd pin of capture source to vmr7 input pin
VMR-9 vs. VMR-7. The former is backed by Direct3D surfaces, with frames scaled smoothly by hardware. The latter, by contrast, uses DirectDraw surfaces, whose scaling has been unavailable since Windows Vista, so the picture quality is terrible.
Use the EVR (or VMR-9) as the video renderer to get the best picture quality.
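For illustration, swapping the renderer for the EVR looks roughly like this (a sketch; graph and videoWindow stand for your existing IGraphBuilder and panel handle, and you still connect the capture/decoder output to the EVR afterwards):

// Add the Enhanced Video Renderer to an existing graph and point it
// at the window that should display the video.
#include <dshow.h>
#include <mfidl.h>
#include <evr.h>
#pragma comment(lib, "strmiids")
#pragma comment(lib, "mfuuid")

HRESULT UseEvr(IGraphBuilder* graph, HWND videoWindow)
{
    IBaseFilter* evr = NULL;
    HRESULT hr = CoCreateInstance(CLSID_EnhancedVideoRenderer, NULL,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  (void**)&evr);
    if (FAILED(hr)) return hr;

    hr = graph->AddFilter(evr, L"EVR");
    if (FAILED(hr)) { evr->Release(); return hr; }

    // Tell the EVR which window to draw into.
    IMFGetService* services = NULL;
    hr = evr->QueryInterface(IID_IMFGetService, (void**)&services);
    if (SUCCEEDED(hr))
    {
        IMFVideoDisplayControl* display = NULL;
        hr = services->GetService(MR_VIDEO_RENDER_SERVICE,
                                  IID_IMFVideoDisplayControl,
                                  (void**)&display);
        if (SUCCEEDED(hr))
        {
            display->SetVideoWindow(videoWindow);
            display->Release();
        }
        services->Release();
    }
    evr->Release();   // the graph keeps its own reference
    return hr;
}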
I want to build a 3D view of my scene with input from at least 3 webcams. Presently I am using OpenCV (with C/C++) on a Windows 7 32-bit platform, and it gives me a maximum of 2 parallel webcam views. I have generated a 3D view with 2 webcams, but it is not up to the mark, so I have the following questions:
If I use 3 webcams (USB 2.0) on a USB hub, is it possible to access the USB video frames without OpenCV? If yes, then how? To keep it simple, I can skip the USB hub, because my laptop has 3 USB ports. Is it possible then?
I have read about the libusb library but have not used it yet. Is it possible to access webcam video frames with this kind of USB library?
How safe is libusb? I have read on some forums that if it is not configured and used correctly, blue screens pop up very frequently, and it can even damage the USB driver. Is there another USB library I can use safely?
If anyone has worked on similar multiple-USB-webcam access or has any ideas on this, please guide me. Any suggestion is welcome.
- You can run as many webcams as USB bandwidth allows.
- If the USB bandwidth limit is hit, you are unlikely to resolve that by switching to a particular software library.
- It might help to lower the resolution, or to turn on on-camera compression if such an option exists, since both reduce USB traffic (see the sketch below).
- More links on the USB bandwidth constraint: 2 usb cameras not working with opencv
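Concretely, the two mitigations above might look like this with OpenCV 2.x (a sketch; whether the MJPG FOURCC request is honored depends on the camera and the backend):

// Open three cameras at reduced resolution and request on-camera
// MJPEG compression to cut per-camera USB bandwidth.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cams[3];
    for (int i = 0; i < 3; ++i)
    {
        cams[i].open(i);
        cams[i].set(CV_CAP_PROP_FRAME_WIDTH, 320);   // lower resolution
        cams[i].set(CV_CAP_PROP_FRAME_HEIGHT, 240);
        cams[i].set(CV_CAP_PROP_FOURCC,              // ask for compressed frames
                    CV_FOURCC('M', 'J', 'P', 'G'));
    }

    cv::Mat frame;
    while (true)
    {
        for (int i = 0; i < 3; ++i)
        {
            if (!cams[i].read(frame))
                continue;
            char name[8];
            sprintf(name, "cam%d", i);
            cv::imshow(name, frame);   // replace with your 3D reconstruction
        }
        if (cv::waitKey(1) == 27)      // Esc quits
            break;
    }
    return 0;
}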