OpenCV not initializing USB camera - C++

I am trying to capture video from a USB camera using OpenCV.
#include <opencv2/highgui/highgui.hpp> // C++ API header; <highgui.h> is the legacy C header
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap(-1);
if (!cap.isOpened())
cout << "Cam initialize failed" << endl;
else
cout << "Cam initialized" << endl;
return 0;
}
It is failing to initialize the camera; cap.isOpened() is returning zero.
The same program, with the same version of OpenCV and the same USB camera, runs correctly on my friend's machine. I am running Fedora 16. The camera works properly in other applications (for example, Cheese).
I searched Google and Stack Overflow but found no useful help.
Any ideas?

Try running your program as root. You may not have permission to access the device, and OpenCV doesn't tell you if that's the reason cap.isOpened() failed.

The manual here says that the VideoCapture::VideoCapture(int device) accepts
device: id of the opened video capturing device (i.e. a camera index). If there is a single camera connected, just pass 0.
I think you should change the -1 to 0 if you have one camera in your system.
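If you are not sure which index the camera got, a small sketch of that idea is to probe the first few indices until one opens (untested against any particular camera, and the upper bound of 4 is an arbitrary choice):

```cpp
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap;
    int found = -1;
    for (int i = 0; i < 4; ++i) // try device indices 0..3
    {
        cap.open(i);
        if (cap.isOpened())
        {
            found = i;
            break;
        }
    }
    if (found < 0)
        std::cout << "No camera could be opened" << std::endl;
    else
        std::cout << "Camera opened at index " << found << std::endl;
    return 0;
}
```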

Related

Can't change OpenCV video capture resolution

The problem I am having is that I am unable to change the resolution of an OpenCV video capture. The resolution is always 640x480, no matter what. The code I'm using is written in C++ and I'm using OpenCV 3.4.8. I've created a super simple program to do this, and it just doesn't seem to work no matter what I try.
Here is the code in its entirety:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if (!cap.open(0))
return 0;
for (;;)
{
Mat frame;
cap.read(frame);
if (frame.empty()) break; // end of video stream
imshow("this is you, smile! :)", frame);
if (waitKey(10) == 27) break; // stop capturing by pressing ESC
}
// the camera will be closed automatically upon exit
// cap.close();
return 0;
}
When I run the above code frame is always 640x480.
I've tried changing the resolution with cap.set() to smaller and higher resolutions. I am using an ImageSource camera and I know that the resolutions I am attempting to use are supported by the camera and I can view video at those resolutions in another program.
I've tried using different cameras/webcams.
I've tried explicitly changing the backend API when I create the VideoCapture object - i.e. VideoCapture cap(0, CAP_DSHOW). I tried DSHOW, FFMPEG, IMAGES, etc.
I've tried running the same program on different computers.
The result is always the same 640x480 resolution.
Is there something simple I am missing? Every other post I can find on SO just points toward using cap.set() to change the width and height.
It depends on what your camera backend is. As the documentation says:
Each backend supports devices properties (cv::VideoCaptureProperties)
in a different way or might not support any property at all.
Also mentioned in this documentation:
Reading / writing properties involves many layers. Some unexpected result might happen along this chain. Effective behaviour depends on device hardware, driver and API backend.
It seems your camera backend is not supported by the OpenCV Video I/O module.
Note: I have also come across cameras like this, where different resolutions are available under different device numbers. For example, you may get the desired resolution by trying VideoCapture(-1), VideoCapture(1), VideoCapture(2), and so on.
Turns out the error was in the "if (!cap.open(0))" line that I was using to check whether cap had successfully initialized.
I was under the impression that open() simply returned true if the video capture object was open and false otherwise. But it actually releases the video capture object if it is already open and then re-opens it.
Long story short, that means the cap.set() calls I was using to change the resolution were being erased when the object was re-opened with cap.open(0), at which point the resolution was set back to the default of 640x480.
The method I was looking for is cap.isOpened(), which simply returns true or false depending on whether the object is open. A simple, silly mistake.
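In other words: open (or construct) the capture once, check it with isOpened(), and only then call set(). A sketch of that corrected flow, assuming OpenCV 3.x constant names and the default camera at index 0:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    VideoCapture cap(0);   // open the default camera once
    if (!cap.isOpened())   // check state without re-opening (open() would reset properties)
        return 0;
    // Request the resolution *after* opening; a second open(0) would erase it.
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
    // Read back what the backend actually granted, since set() may be ignored.
    std::cout << cap.get(CAP_PROP_FRAME_WIDTH) << "x"
              << cap.get(CAP_PROP_FRAME_HEIGHT) << std::endl;
    return 0;
}
```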

How can I set the exposure time for a DMM 27UJ003-ML camera using OpenCV?

I'm using a camera called the DMM 27UJ003-ML, and its documents are available via this link. This camera has some features, such as brightness, which can be set in OpenCV; see the following code for instance:
//Header
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
VideoCapture cap(0); //Access to camera with ID = 0
double brightness = cap.get(CV_CAP_PROP_BRIGHTNESS); // get value of brightness
cout << brightness << endl; // print brightness value in console
}
The result is 0.5, which is fine, and I can set the brightness as well. But if I want to change the exposure time, a problem appears (exposure time is another camera property that should be adjustable):
int main()
{
VideoCapture cap(0);
cap.set(CV_CAP_PROP_EXPOSURE,0.1);
}
But the exposure time can't be set this way, and if I use the get method to find out what the exposure time was actually set to, the result is strange:
VideoCapture cap(0);
double Exposure = cap.get(CV_CAP_PROP_EXPOSURE);
cout << Exposure << endl;
The result for Exposure is inf, and the camera doesn't respond to the outside environment (it seems the exposure time actually is infinite). So far the only way to reset the exposure time is the software the company gave me, and I don't know how I can set this feature in OpenCV.
Thanks for your help.
Add the following code at the beginning:
cap.set(CV_CAP_PROP_AUTO_EXPOSURE,0.25);
0.25 means 'manual mode'.
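Putting the two calls together, a sketch of the full sequence (the 0.1 exposure value is only a placeholder; the units and valid range are backend- and driver-dependent, so check what get() returns on your device):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    VideoCapture cap(0); // access the camera with ID = 0
    if (!cap.isOpened())
        return -1;
    // Disable auto-exposure first; 0.25 selects manual mode on V4L2 backends.
    cap.set(CV_CAP_PROP_AUTO_EXPOSURE, 0.25);
    // Then set the exposure itself; 0.1 is a placeholder value, units vary by driver.
    cap.set(CV_CAP_PROP_EXPOSURE, 0.1);
    // Read back the value the driver actually accepted.
    std::cout << "exposure now: " << cap.get(CV_CAP_PROP_EXPOSURE) << std::endl;
    return 0;
}
```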
If you use a Linux-based machine, you can install a package that will help you with this, named v4l2ucp. It can be installed with the command below on Ubuntu:
sudo apt install v4l2ucp
This package gives you graphical control over the camera using the excellent v4l2 package (by installing v4l2ucp there is no need to install v4l2 separately). If you can change the exposure time in v4l2ucp, then you can use v4l2 inside your program.
You can get full information about your camera with the command below in an Ubuntu terminal:
v4l2-ctl --all
After finding out which parameters are available using the above command, you can change the value of a parameter. For example, my output looks like this:
brightness (int) : min=-10 max=10 step=1 default=0 value=0
You can see there is a camera control named brightness whose default value is 0, with bounds on the value (min=-10 and max=10). So how can I set this value to, say, 10? You can do it with the command below (test it with the camera open):
v4l2-ctl --set-ctrl brightness=10
After running that in the terminal, you can see the brightness change in the camera.
So how can we use a v4l2 command inside a Qt program? By using the QProcess class, which lets you run terminal commands from inside a Qt program. Here is a simple example:
#include <QProcess>
int main()
{
QProcess process;
process.start("v4l2-ctl", QStringList() << "--set-ctrl" << "brightness=10");
process.waitForFinished(-1); // block until the command completes
}

Take high resolution image from ip camera using opencv

I have an IP camera. It supports two different encoding types, H.264 and MJPEG, and its best resolution is 1920x1080.
I used the iSpy software to find the URL address of my camera. It works and takes a photo, but its resolution is 640x360.
Here is my code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
int main()
{
cv::VideoCapture vcap;
const std::string videoStreamAddress = "rtsp://admin:admin#192.168.0.120/snl/live/1/2/stream1.cgi";
if (!vcap.open(videoStreamAddress))
{
printf("camera is null\n");
return -1;
}
else
{
vcap.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
vcap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
cv::Mat image;
vcap.read(image);
cv::imshow("image",image);
cv::imwrite("image.jpg", image);
}
cv::waitKey(1000);
return 0;
}
How can I take an image with higher quality? I don't know whether the problem is my camera, my URL, or my code.
I work with OpenCV 2.4 on Windows 7.
Any help would be appreciated.
I'm answering my own question.
At first I thought the problem was rooted in OpenCV, since I'd found many threads about failures to set camera parameters. It seems that vcap.set(CV_CAP_PROP_FRAME_WIDTH, 1920); vcap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080); doesn't work well here.
Anyway, I checked my camera's development document on its website and found another URL for the H.264 video stream. I changed my URL, and it works: it now takes 1920x1080 images.
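For reference, with an RTSP source the frame size is determined by the encoded stream itself, so the fix is to select the stream URL that carries the full resolution rather than to call set(). A sketch of that, where the URL is only a placeholder for whatever your camera's development document gives:

```cpp
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <string>

int main()
{
    // Placeholder URL: substitute the H.264 main-stream address from your
    // camera's development documentation.
    const std::string url = "rtsp://user:password@192.168.0.120/h264/stream1";
    cv::VideoCapture vcap(url);
    if (!vcap.isOpened())
    {
        std::printf("could not open stream\n");
        return -1;
    }
    cv::Mat image;
    vcap.read(image);
    // The dimensions below come from the stream, not from any set() call.
    std::printf("got %dx%d frame\n", image.cols, image.rows);
    return 0;
}
```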

OpenCV doesn't detect FireWire camera on Linux

I have connected a camera through FireWire and tried to access it using OpenCV. The camera is detected in coriander and I am able to get a video stream there. Below is the code I used:
#include <opencv2/opencv.hpp> // replaces the absolute-path cv.h/highgui.h/cxcore.h includes
#include <iostream>
using namespace cv;
using namespace std;
int main(int,char**)
{
VideoCapture cap(0);
if(!cap.isOpened())
cout<<"Camera not detected"<<endl;
while(1)
{
Mat frame;
namedWindow("display",1);
cap >> frame;
imshow("display",frame);
waitKey(0);
}
}
When I run this code, the video is streamed from the built-in webcam instead of my FireWire camera. I tried the same code on my friend's system, and there the FireWire camera was detected. I checked the settings using commands such as testlibraw and lsmod, and they are all the same. Even the OpenCV version (2.4.9) and Ubuntu 12.04 are the same. This is really bizarre, and I've been at it for 2 days. Can anyone tell me what the difference could be? How can I get the external camera detected in OpenCV? Thanks in advance.
Note: Does this have something to do with setting the default camera? Thanks.
Update 1: VideoCapture cap(1) gives the following error:
HIGHGUI ERROR: V4L: index 1 is not correct!
Does this mean the camera is not recognized?
First, you should make sure the camera is recognized by your OS:
unplug the camera and wait a few seconds;
open a terminal and type:
watch dmesg
lspci | grep -E -i "(1394|firewire)" # this could give you something
plug in your device and read the new entries in the terminal.
If your device is recognized, you can launch a command like this:
mplayer tv:// -tv driver=v4l2:width=352:height=288
The possible problem could be that the camera connected through FireWire is not recognized by the system.
First, try to view the camera output using AMCap or some other webcam software and check whether you can see it.
If you cannot see the video in AMCap, it means the drivers for that particular camera are missing.
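Since coriander (which talks to the camera over libdc1394) does see the device, another thing worth trying is asking OpenCV explicitly for the IEEE 1394 backend instead of letting it default to V4L. In OpenCV 2.4 this is done by adding the CV_CAP_IEEE1394 domain constant to the device index; a sketch, untested against this particular camera:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    // CV_CAP_IEEE1394 selects the DC1394/FireWire backend; the first
    // FireWire camera is then index CV_CAP_IEEE1394 + 0.
    VideoCapture cap(CV_CAP_IEEE1394 + 0);
    if (!cap.isOpened())
    {
        std::cout << "FireWire camera not detected" << std::endl;
        return -1;
    }
    std::cout << "FireWire camera opened" << std::endl;
    return 0;
}
```

This requires OpenCV to have been built with libdc1394 support; if it wasn't, the open will fail regardless of the index.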

Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create Stereo Vision using 2 logitech C310 webcams.
But the result is not good enough. One of the videos is lagging as compared to the other one.
Here is my OpenCV program, using VC++ 2010:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
try
{
VideoCapture cap1;
VideoCapture cap2;
cap1.open(0);
cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);
cap2.open(1);
cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);
for (;;)
{
Mat frame;
cap1 >> frame;
Mat frame1;
cap2 >> frame1;
transpose(frame, frame);
flip(frame, frame, 1);
transpose(frame1, frame1);
flip(frame1, frame1, 1);
imshow("Img1", frame);
imshow("Img2", frame1);
if (waitKey(1) == 'q')
break;
}
cap1.release();
cap2.release();
return 0;
}
catch (cv::Exception & e)
{
cout << e.what() << endl;
}
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try plugging one camera in the front and the other in the back (in the hope of landing on different buses),
or reduce the frame size / FPS to generate less traffic.
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing: it uses the simplest underlying operating-system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old, but I had to deal with the same problem recently, so...
I don't think you were saturating the USB bus. If you were, you should have seen an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, AND THEN you start getting your frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.
What you should do, if you really want to use OpenCV for this, is add some threads: one dealing with left frames and the other with right frames, each doing nothing but saving the last frame received (so you'll always deal with the newest frames only). When you want frames, you just ask these threads for them.
I've put together a small example here if you need it.
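A minimal sketch of that threaded pattern with C++11 threads (device indices 0 and 1 are assumed; each mutex protects one shared "latest frame"):

```cpp
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

// One grabber per camera: the thread keeps overwriting its latest frame,
// so the main loop always reads the newest image from each device.
struct Grabber
{
    cv::VideoCapture cap;
    cv::Mat latest;
    std::mutex mtx;
    std::atomic<bool> running{true};

    void run()
    {
        cv::Mat frame;
        while (running)
        {
            if (!cap.read(frame))
                continue;
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
    }
};

int main()
{
    Grabber left, right;
    left.cap.open(0);  // assumed device indices; adjust for your setup
    right.cap.open(1);
    if (!left.cap.isOpened() || !right.cap.isOpened())
        return -1;

    std::thread tl(&Grabber::run, &left);
    std::thread tr(&Grabber::run, &right);

    for (;;)
    {
        cv::Mat l, r;
        { // copy out the newest frames under their locks
            std::lock_guard<std::mutex> lock(left.mtx);
            left.latest.copyTo(l);
        }
        {
            std::lock_guard<std::mutex> lock(right.mtx);
            right.latest.copyTo(r);
        }
        if (!l.empty()) cv::imshow("left", l);
        if (!r.empty()) cv::imshow("right", r);
        if (cv::waitKey(1) == 'q')
            break;
    }
    left.running = false;
    right.running = false;
    tl.join();
    tr.join();
    return 0;
}
```

The grab threads never block the display loop, so neither camera's frames queue up behind the other; what remains is only the (much smaller) hardware-level skew between the two sensors.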