I'm learning OpenCV because I want to build and program a 3D Scanner over the summer.
I bought three webcams for this purpose (two for the actual stereo images and one for texture [or as a backup]).
I tried to get a webcam's video stream with OpenCV. However, this does not work: I ended up with a black screen instead of video.
I then tried the same code with my grandmother's webcam. It worked fine.
However, I have already bought three webcams of the type I was planning to use to build my scanner: http://www.amazon.com/Webcam-Camera-Vision-Meeting-compatible/dp/B0015TJNEY/ref=pd_bxgy_e_img_b
I don't want to buy any new webcams.
Does anybody have any idea of why my webcams don't work with OpenCV (they work with other programs)?
How can I get OpenCV to accept my webcams?
Any suggestions would be appreciated!
Thanks
First, make sure the capture was actually created:
CvCapture *capture;
capture = cvCaptureFromCAM(0);
if (!capture)
{
    printf("Error at capture");
    return 1;
}
If your program passes this step, you should try a different number in cvCaptureFromCAM(): 0 is the first webcam, but yours may be registered as 1, 2 or 3. You can also try -1, which tells OpenCV to pick the first available camera.
Related
I recently started using OpenCV for a project that involves reading videos. I followed online tutorials for video reading, and the video seems to be read with no problems. However, when I display any frame from the video, the far-right column appears to be corrupted. Here is the code I used for reading and displaying the first frame:
VideoCapture cap("6.avi");
Mat frame;
cap >> frame;
imshow("test", frame);
waitKey(0);
This resulted in a frame that looks good for the most part except the far right column. See here.
I am making no modifications to the video or frames before displaying it. Can anyone help figure out why this is happening?
Note: I'm running Ubuntu 14.04, OpenCV version 2.4.8
Full video can be found here.
Your code looks fine to me. Are you certain the frame is corrupted? Resize, maximize, or minimize the "test" GUI window to see if the right edge is still corrupted; while displaying very small images, I've sometimes seen the right edge of the GUI window display incorrectly even though the frame itself is correct. You could also try imwrite("test.png", frame) to see whether the saved image is still corrupted.
If this doesn't help, it sounds like a codec problem. Make sure you have the latest versions of OpenCV and FFmpeg.
If this still doesn't help, the video itself may be corrupted. You could try converting it into another format using FFmpeg.
I'm working on a project in which many webcams each capture a single still image using OpenCV in C++.
As in other questions, multiple HD webcams may use too much bandwidth and exceed the USB limit.
Unlike the others, what I need is only a still image (a single frame) from each webcam. Say I have 15 webcams connected to a PC, and every 10 seconds I would like to get the still images (one image per webcam, 15 images in total) within 5 seconds. The images are then analysed and a result is sent to an Arduino.
Approach 1: Open all webcams all the time and capture images every 10 seconds.
Problem: The bandwidth of USB is not enough.
Approach 2: Open only one webcam at a time, capture an image, close it, and open the next one.
Problem: Switching from one webcam to the next takes at least 5 seconds per switch.
What I need is only a single frame of an image from each webcam and not a video.
Are there any suggestions for this problem, besides load-balancing the USB bus and adding USB PCI cards?
Thank you.
In OpenCV you deal with a webcam as a stream, which means it runs as video. However, I think this kind of problem is better solved with the webcam's own API, if one is available. There is usually one way or another to take a still image and return it to your program as data, so it may be worth searching the camera manufacturer's website.
I'm pretty stuck on a problem with my uEye camera. Using my laptop camera (id 0) or an internet camera on USB (id 1), this line works perfectly: TheVideoCapturer.open(1); (TheVideoCapturer is of the OpenCV VideoCapture class).
Unfortunately, when I try to do the same with my uEye camera, it can't be found. I checked the camera ID in the ueyecameramanager, and it's 1 (or 35, in some expert mode). I'd like to use it the same way as the cameras mentioned above.
I've got the drivers installed, because the ueyecameramanager works and gives me a stream, and the ROS node ueye_cam works fine as well.
Any sort of advice would be gladly appreciated.
Even though you have probably already figured it out: as far as I know, you cannot use VideoCapture directly with uEye cameras. You have to use their own SDK to access the video stream (or to take a single snapshot, depending on your case). After that, you can use memcpy() to copy the memory pointed to by the void pointer filled by is_GetImageMem(...) into the Mat object (via cv::Mat::ptr()). If you look closely at what the ROS node for uEye does, it actually uses the functions provided by the uEye SDK to configure and access the camera. ROS also has its own image format, which is why an interface (called cv_bridge) is implemented to convert ROS images to OpenCV images. Overall it's a ridiculous salad of data copying and conversion, but since this is how things currently are, you don't have much of a choice.
I am trying to get video stream from analog camera connected to usb easycap - in OpenCV C++.
Using MATLAB, I can get the stream with the same approach as for the laptop webcam (changing the index from 1 to 2).
With OpenCV, I can get the stream from the laptop webcam with index 0.
But when I try to capture from the camera connected to the EasyCAP (using index 1), the laptop crashes with a blue screen.
Anyone have done this before?
Thanks
I work with the same device and I also got some BSODs with it.
Do you plug it in with the USB extension provided? If so, try not using it.
If the problem still happens, it's probably because, like me, you were using a low-quality Chinese fake EasyCAP. I bought a real one and I haven't had problems since.
If you want to keep your device, you can use it with VideoCapture in Python; it works very well and there are no more BSODs.
Try using Linux. I tested my code with a fake EasyCAP on Windows and got many BSODs; then I built and ran the same code on Linux and it worked.
Linux is driver friendly.
I am trying to build an application to simply get, save and show some frames from my camera, a DMK 41BU02 (you can consult the specifications of the device in the following link: datasheet)
My code is as simple as that:
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

int main(int, char**)
{
    String path = "~/proof.jpg";
    VideoCapture cap(1); // /dev/video0 is the integrated webcam of my laptop, while /dev/video1 is the DMK41BU02 camera
    cvNamedWindow("Video", CV_WINDOW_AUTOSIZE);
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat frame;
    cap >> frame;
    imwrite(path, frame);
    imshow("Video", frame);
    waitKey(0);
    return 0;
}
The code compiles and executes without any problem, but the error arrives when the image is shown in the window or saved to the jpg file, because I get something like the following, where the image is triplicated in the frame:
Resulting image of the code shown above
Some aspects to remark:
The code executes normally and returns normal images when working with the integrated webcam of my laptop.
The DMK41BU02 camera works normally and returns normal images when working with another application, such as fswebcam or VLC.
The camera datasheet says it is compatible with OpenCV.
I have also tried the code with an infinite loop, as I know the first frame grabbed can be blank or with some type of error, but the problem is still there.
I have had some issues installing the camera drivers, but I think they're all resolved.
The laptop is a 32-bit machine with Ubuntu installed on it. Here you can see the output of uname -a: Linux AsusPC 3.11.0-18-generic #32~precise1-Ubuntu SMP Thu Feb 20 17:54:21 UTC 2014 i686 i686 i386 GNU/Linux
I have no idea of how to debug this problem and, of course, I don't know where the error could be. Could you give me any hint, please?
Thank you very much.
UPDATE: I forgot to post the weird output that the application writes to the terminal at the very beginning of the program:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
libv4l2: error set_fmt gave us a different result then try_fmt!
HIGHGUI ERROR: libv4l unable convert to requested pixfmt
libv4l2: error set_fmt gave us a different result then try_fmt!
init done
opengl support available
I've had the exact same problem. The issue is within OpenCV itself, or more precisely, in how cap_v4l.hpp (in the highgui module) and cap_libv4l.hpp are implemented.
The issue here is that OpenCV apparently uses a wrong video type or channel type to read the data. Try playing around with the different types (YUYV variants, etc.) inside the OpenCV library.
For some magical reason cap_v4l.hpp is the code that's actually used by OpenCV, while the code in cap_libv4l.hpp is not used, but seems to support more video formats (it could be the other way around, I'm not sure about that).
Switching these files and recompiling OpenCV did improve things for me.
Since after the call to cap >> frame you get three channels (type=16), your capture is unaware that your camera is monochrome. Use grab-retrieve pairs instead, since retrieve specifies the number of channels:
bool VideoCapture::grab()
bool VideoCapture::retrieve(Mat& image, int channel=0)
Here is example code that also shows how to set camera parameters. You can also try to set some camera parameters that explicitly declare monochrome mode. If everything else fails, you can always cut one image out of your triplet with
Rect rect(0, 0, frame.cols/3, frame.rows);
Mat true_img = frame(rect).clone();
However, I kind of like what happens in your case: you have a natural frame queue and can analyze motion, and possibly structure, by looking at what happens in three consecutive frames.