I am using the QCamera class to show the camera output in a QCameraViewfinder.
There are two cameras: one is HD and the other is normal.
Switching to the normal camera is fast, but switching to the HD camera takes a few seconds and the machine hangs for a while.
I am using Ubuntu 14.04.
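For reference, a stripped-down sketch of this kind of setup looks roughly like the following (not my exact code; it assumes Qt 5.3+ QtMultimedia, and the device order in availableCameras() is hypothetical):

#include <QApplication>
#include <QCamera>
#include <QCameraInfo>
#include <QCameraViewfinder>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QCameraViewfinder viewfinder;
    viewfinder.show();

    const QList<QCameraInfo> cameras = QCameraInfo::availableCameras();
    if (cameras.size() < 2)
        return 1;

    // Switching cameras means stopping the current QCamera and starting a
    // new one on the same viewfinder; start() is where the HD camera stalls.
    QCamera camera(cameras.at(1));
    camera.setViewfinder(&viewfinder);
    camera.start();

    return app.exec();
}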
I have an i7-3770 with Intel® HD Graphics 4000. I want to run games in a virtual machine. Most of them show errors with DirectX and Direct3D, but the same games work fine on the host machine.
I have tried VirtualBox, VMware, and Hyper-V.
I also tried 3D acceleration and RemoteFX.
You might be able to do it with ESXi; check out this KB: https://kb.vmware.com/s/article/1010789
I'm working on a project requiring real-time access to the webcam, and I'm having problems getting a suitable camera stream under Windows 10 for processing the frames with OpenCV.
I'm able to access the camera just fine under Windows 8.1 using either:
- OpenCV 2.4.9 with Evgeny Pereguda's VideoInput library (http://www.codeproject.com/Articles/776058/Capturing-Live-video-from-Web-camera-on-Windows-an) for accessing the camera through Windows Media Foundation, or
- OpenCV 3.0 without any additional libraries.
These allow for capturing the webcam stream at a high frame rate (~30 fps) and setting the webcam resolution with, e.g.:
cvCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cvCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
Under Windows 10, however, both solutions above result in problems:
- the solution using OpenCV 2.4.9 and the VideoInput library allows setting the resolution to 640x480, but the frame rate is around 1 fps (or worse?!) and the picture is very dark
- the solution using OpenCV 3.0 gives me a nice 1920x1080 image at a good frame rate, but I'm unable to set the resolution for the stream
I even tried opening the camera stream with:
cv::VideoCapture cvCapture( CV_CAP_DSHOW + camnum );
cv::VideoCapture cvCapture ( CV_CAP_MSMF + camnum );
The first one works (as far as opening the stream goes, but with the same problems as above); the MSMF (Microsoft Media Foundation) version results in cvCapture.isOpened() returning false.
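Put together, the open-and-configure attempt looks roughly like this (a sketch of the steps described above, not a fix; the CV_CAP_PROP_FPS request is an extra guess of mine and may be ignored by the driver):

#include <opencv2/highgui/highgui.hpp>

// Sketch: try Media Foundation first, fall back to DirectShow, then request
// 640x480 (OpenCV 2.4-era constants, as used above).
cv::VideoCapture openCamera(int camnum)
{
    cv::VideoCapture cvCapture(CV_CAP_MSMF + camnum);  // isOpened() comes back false here
    if (!cvCapture.isOpened())
        cvCapture.open(CV_CAP_DSHOW + camnum);         // opens, but ~1 fps and a dark image
    if (cvCapture.isOpened())
    {
        cvCapture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
        cvCapture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
        cvCapture.set(CV_CAP_PROP_FPS, 30);            // guess; may be ignored
    }
    return cvCapture;
}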
Handling the Full HD stream in real time isn't feasible for the image processing algorithms, nor is resizing the resulting frames down with OpenCV.
The Windows 8.1 version is running on a Surface Pro 3 (Core i7), and the Windows 10 version on a Surface Pro 4 (Core i7). Could this be a hardware or camera driver problem? I tried finding updated drivers for the Surface Pro 4, but to no avail.
Has anyone had similar problems? Is there an obvious solution I'm overlooking?
I think that your problem with videoInput on Windows 10 is related to selecting the correct media type of the web camera. The fact is that OpenCV uses DirectShow by default, and videoInput via Media Foundation is only optional.
I advise you to check the variables:
float MF_MT_FRAME_RATE_RANGE_MAX;
float MF_MT_FRAME_RATE;
float MF_MT_FRAME_RATE_RANGE_MIN;
in
// Structure of info MediaType
struct MediaType
I can also advise visiting the site of Capture Manager Topology Editor - free software for working with web cameras via Media Foundation. It allows you to verify the accessible Media Foundation features on your Surface Pro 4 (Core i7).
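As an illustration, a raw Media Foundation sketch for listing each native media type's MF_MT_FRAME_SIZE and MF_MT_FRAME_RATE could look like this (this is not code from videoInput itself; error handling and cleanup are trimmed, and you need to link mfplat, mf, mfreadwrite, mfuuid, and ole32):

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstdio>

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    // Ask Media Foundation for all video capture devices.
    IMFAttributes *attrs = nullptr;
    MFCreateAttributes(&attrs, 1);
    attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                   MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    IMFActivate **devices = nullptr;
    UINT32 count = 0;
    MFEnumDeviceSources(attrs, &devices, &count);
    if (count == 0)
        return 1;

    IMFMediaSource *source = nullptr;
    devices[0]->ActivateObject(IID_PPV_ARGS(&source));

    IMFSourceReader *reader = nullptr;
    MFCreateSourceReaderFromMediaSource(source, nullptr, &reader);

    // Walk every native media type of the first video stream.
    for (DWORD i = 0; ; ++i)
    {
        IMFMediaType *type = nullptr;
        if (FAILED(reader->GetNativeMediaType(
                (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, i, &type)))
            break;
        UINT32 w = 0, h = 0, num = 0, den = 1;
        MFGetAttributeSize(type, MF_MT_FRAME_SIZE, &w, &h);
        MFGetAttributeRatio(type, MF_MT_FRAME_RATE, &num, &den);
        printf("type %lu: %ux%u @ %.2f fps\n", (unsigned long)i, w, h,
               den ? (double)num / den : 0.0);
        type->Release();
    }

    // In real code: release reader/source/devices and call MFShutdown().
    return 0;
}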
With best regards,
Evgeny Pereguda
For several days now, I have been encountering a problem with depth frame acquisition using my Asus Xtion Pro Live, OpenNI 2, and Qt. My application works fine when the camera is connected to a USB 2.0 port, but when I connect the camera to a USB 3.0 port, I cannot display images from the depth stream.
To investigate the problem, I wrote a very basic console application that just acquires color and depth frames and, for each frame, writes the timestamp and index to a file, as sketched below. It appears that the number of depth frames received becomes very small compared to the number of color frames (1784 color frames against 464 depth frames for a one-minute acquisition). Connecting the camera to a USB 2.0 port, I get the full 1784 color frames and 1784 depth frames.
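The depth side of that test boils down to this (a simplified sketch; the color stream is handled identically with SENSOR_COLOR, and the file name and frame count here are arbitrary):

#include <OpenNI.h>
#include <cstdio>

int main()
{
    using namespace openni;
    OpenNI::initialize();

    Device device;
    if (device.open(ANY_DEVICE) != STATUS_OK)
        return 1;

    VideoStream depth;
    depth.create(device, SENSOR_DEPTH);
    depth.start();

    // Log each frame's timestamp and index so drops can be counted per port.
    FILE *log = fopen("depth_frames.txt", "w");
    for (int i = 0; i < 1800; ++i)  // ~1 minute at a nominal 30 fps
    {
        VideoFrameRef frame;
        if (depth.readFrame(&frame) != STATUS_OK)
            break;
        fprintf(log, "%llu %d\n",
                (unsigned long long)frame.getTimestamp(),  // microseconds
                frame.getFrameIndex());
    }
    fclose(log);

    depth.stop();
    depth.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}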
I noticed that when using QApplication instead of QCoreApplication, the number of depth frames decreases even further (44 depth frames for a one-minute acquisition).
Do you think the problem comes from Qt or from the camera and its drivers? I read on Asus support that there were some problems with the Asus Xtion Pro Live and USB 3.0. I downloaded a patch from http://reconstructme.net/2012/10/13/asus-xtion-usb-3-0-hotfix-2/ but it doesn't fix my problem.
Thanks!
Last week, I plucked up the courage to tackle my problem with the Asus Xtion Pro Live and Qt again. I am sorry to resurrect this topic, but I recently came across a website that solves my problem: http://www.qimaging.com/support/chipset.php. As they say, the problem comes from the USB controller:
"Intel released a new version of their mother board chipsets (Series 7/C216, Series 8/C220 and later) with a native Intel based USB 3.0 host controller (USB3 extensible host controller, xHCI in the Device Manager). On these newer PCs, the Intel USB 3.0 host controller does not communicate properly with some traditional USB 2.0 chipsets to the extent where data is not properly delivered"
I followed their advice and bought a StarTech 2-port PCI Express USB card adapter, and now I can acquire the depth and color streams from the camera without any problem.
Currently I’m working on a project to mirror a camera for a blind spot.
The camera outputs a 640 x 480 NTSC signal.
The output screen is 854 x 480 NTSC.
I grab the camera feed with an EasyCAP video grabber.
On the Banana Pi I installed OpenCV 2.4.9.
The critical point of this project is that the video on the display needs to be real time.
Whenever I comment out the line that puts the window into fullscreen, a small window pops up and the footage runs without delay or lag.
But when I set the video to fullscreen, the footage becomes slow and lags.
Part of the code:
namedWindow("window",0);
setWindowProperty("window",CV_WND_PROP_FULLSCREEN,CV_WINDOW_FULLSCREEN);
while(1){
cap>>image;
flip(image, destination,1);
imshow("window",destination);
waitKey(33); //delay 33 ms
}
How can I fill the screen with the camera footage without losing speed and frames?
Is it possible to output the footage directly to the composite output?
The problem is that the upscaling and drawing are done in software here. The Banana Pi's processor is not powerful enough to sustain the needed throughput at 30 frames per second.
This is an educated guess on my side, as even desktop systems can run into lag problems when processing and simultaneously displaying video.
A common solution in the computer vision community for this problem is to use OpenGL for display. Here, the upscaling and display are offloaded to the graphics processor. You can do the same thing on a Banana Pi.
If you compiled OpenCV with OpenGL support, you can try it like this:
namedWindow("window", WINDOW_OPENGL);
imshow("window", destination);
Note that if you use OpenGL, you can also save the flip operation by using an appropriate modelview matrix. For this, however, you will probably need to dive into GL code yourself instead of using imshow.
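For example, a rough sketch of that idea with the fixed-function pipeline could look like this (assuming OpenCV 2.4 built with OpenGL support and an 854x480 output as in the question; the negative pixel zoom performs the mirror during drawing instead of cv::flip):

#include <opencv2/opencv.hpp>
#include <GL/gl.h>

static cv::Mat g_frame;            // latest frame, shared with the callback

static void drawMirrored(void*)
{
    if (g_frame.empty())
        return;
    glClear(GL_COLOR_BUFFER_BIT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Anchor at the top-right corner; the negative zooms draw right-to-left
    // (horizontal mirror) and top-to-bottom (OpenCV rows are top-down).
    glRasterPos2f(1.f, 1.f);
    glPixelZoom(-854.f / g_frame.cols, -480.f / g_frame.rows);
    glDrawPixels(g_frame.cols, g_frame.rows,
                 GL_BGR, GL_UNSIGNED_BYTE, g_frame.data);
}

int main()
{
    cv::VideoCapture cap(0);       // EasyCAP device index assumed
    cv::namedWindow("window", cv::WINDOW_OPENGL);
    cv::setOpenGlDrawCallback("window", drawMirrored);
    while (true)
    {
        cap >> g_frame;
        cv::updateWindow("window");  // triggers the draw callback
        if (cv::waitKey(33) >= 0)
            break;
    }
    return 0;
}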
I fixed the whole problem by using:
namedWindow("window", 1);
Flag 1 stands for WINDOW_AUTOSIZE.
The footage is much closer to real time now.
I'm using a small monitor, so the window size is nearly the same as the monitor's.
I have a multiple camera setup and an OpenCV application that has been working with two cameras. One camera is a Logitech C310, and the other is the built-in camera on my MacBook Pro. On initialization I call cvCaptureFromCAM(), once for each camera. However, hooking up a second Logitech C310 (a total of 3 cameras) causes the call to cvCaptureFromCAM() to hang for my MacBook Pro camera.
Both Logitech cameras work together just fine. Once I call cvQueryFrame() for my MacBook camera, it hangs; eventually, after maybe 2-3 minutes, it returns what appears to be a valid pointer (i.e. not NULL), but the camera is not initialized. I receive no frames from it, and the light that is typically green when the camera is in use is not lit.
Here is a snippet of my code:
for( size_t i = 0; i < NUM_CAMERAS; i++ )  // works fine when NUM_CAMERAS is 2, but hangs when it is 3
{
    capture[i] = cvCaptureFromCAM( i );
    if( capture[i] != NULL )
    {
        // Start a thread for each camera
    }
}
// Threads manage calling cvQueryFrame() for each camera.
I am certain this is not an issue with multiple threads, because cvQueryFrame() always hangs, even if I do not start any new threads.
Thanks for any help - I'm having trouble finding anyone else with a similar issue.
OpenCV 2.4.5
OSX 10.8.4
gcc 4.2
Qt 5.0.2
Boost 1.53.0
2.4 GHz Intel Core i7, Retina MacBook Pro, 8GB RAM
2 x Logitech C310, 1 x MacBook Pro Camera
1 x Frustrated Dude
It turns out this is a limitation of my MacBook Pro. When running two USB cameras as well as the built-in iSight (which I believe is also connected over USB internally), initialization of the third device hangs. I can't provide a technical reason, but it appears to be a USB bandwidth issue.
I resolved this by using a Thunderbolt dock (http://www.belkin.com/us/p/P-F4U055), which has additional USB ports. Now my application works fine and initializes all cameras as it should. I also verified that an Apple Thunderbolt Display (which has a built-in USB hub) works.
Cheers!