OpenCV 2.4: Setting Camera Parameters - C++

I'm trying to set the camera parameters using the following code, and it is not working at all.
#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char *argv[])
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    bool value = cap.set(CV_CAP_PROP_FRAME_WIDTH, 10);

    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from the camera
        imshow("frame", frame);
        unsigned char *dad = (unsigned char*)frame.data;
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}

OpenCV tries to set this size directly in the camera, so it doesn't need to resize the frame.
The problem with this approach is that if your camera doesn't support that size natively, OpenCV will fail to set the value, leaving you with the task of resizing the frame after it is retrieved.
cap.set() returns whether the call succeeded, so I suggest you check its return value.
I recommend taking a look at another thread: how to change the capture resolution in OpenCV.
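For example, here is a minimal sketch of that idea (the 320x240 target is just an illustrative value, not something from the question); it checks both the return value of cap.set() and the actual frame size, and falls back to cv::resize() when the camera refuses the request:
#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;

    // Ask the driver for 320x240; set() reports whether the request was accepted.
    bool ok = cap.set(CV_CAP_PROP_FRAME_WIDTH, 320)
           && cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break;

        // If the driver refused or silently ignored the request, resize in software.
        if (!ok || frame.cols != 320 || frame.rows != 240)
            resize(frame, frame, Size(320, 240));

        imshow("frame", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}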

OpenCV uses DirectShow for video capture on Windows. However, your camera supports only a few resolutions, such as 480x320, 640x480, 720p, or 1080p. If you set anything else, it will not work at all.
If you want to check which resolutions your camera supports, download GraphEdit and check the capture pin properties.
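As a quick sanity check from code (a small sketch; the 640x480 request is only an example), you can also ask OpenCV what the driver actually applied after a set() call:
#include "opencv2/opencv.hpp"
#include <cstdio>

using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;

    // Request a resolution, then read back what the driver actually accepted.
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    printf("Capture resolution in use: %.0f x %.0f\n",
           cap.get(CV_CAP_PROP_FRAME_WIDTH),
           cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    return 0;
}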

The above code is not for changing the camera parameters; it is only useful for showing the video on your machine. Maybe this link is useful to you: http://opencv.willowgarage.com/wiki/CameraCapture

Related

OpenCV: isOpened() always fails

I am taking my first steps with OpenCV and I am trying to run this piece of code. It is supposed to open the specified video in a new window and wait for the user to press ESC. I tried passing both the relative and absolute path to VideoCapture but VideoCapture::isOpened() always fails. Why is this happening?
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a nice little window.
Note that I am using VS15 and OpenCV 2.4 (with the x86 libs)
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(path_to_video); // open the video file
// VideoCapture cap(0);
if(!cap.isOpened()) // check if we succeeded
return -1;
namedWindow("Video",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("Video", frame);
if(waitKey(30) >= 0) break;
}
return 0;
}
EDIT: I solved this by reinstalling OpenCV and creating a new Visual Studio project. The above code miraculously started working.
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a
nice little window. Why is this happening?
Because the VideoCapture class has two different constructors. The one that takes a string attempts to read from a file. The one that takes an integer attempts to read from a device. Passing 0 to the second version specifies the default device / camera.
VideoCapture::open: Open a video file or a capturing device for video capturing.
C++: bool VideoCapture::open(const string& filename)
C++: bool VideoCapture::open(int device)
Parameters:
filename – name of the opened video file (e.g. video.avi) or an image sequence (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
device – id of the opened video capturing device (i.e. a camera index).
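To make the distinction concrete, here is a minimal sketch ("video.avi" is only a placeholder file name) showing both overloads and the isOpened() check:
#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    // String constructor/open(): read from a video file.
    VideoCapture fileCap("video.avi"); // placeholder path
    if (!fileCap.isOpened())
        return -1; // file not found or codec not supported

    // Integer constructor/open(): read from a device; 0 is the default camera.
    VideoCapture camCap(0);
    if (!camCap.isOpened())
        return -1; // no camera available

    return 0;
}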

OpenCV cheap stereo camera can't load both streams at once

I have purchased an ELP-1MP2CAM001, which shows up as two webcam devices on Windows. If I open the default Windows "Camera" app and Skype, I can display the feeds from both the left and right cameras at the same time, so I don't think it is a USB bandwidth issue with two cameras coming into the same port.
I'm using fairly standard code (shown below) to open both of these feeds and it works successfully if I use two standard Microsoft HD3000 webcams instead of the single stereo camera.
I've tried a range of numbers inside the cap2() arguments so I don't think it's hiding at number 10 or anything weird like that.
My questions are:
There must be some sort of on board hub for the ELP cameras, do I need to do something different in OpenCV?
Could it be that both frames are accessible through cap(0)? This seems unlikely to me.
This question says I don't need to do anything special, but obviously I'm missing something.
Any help on this would be great.
Code:
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);  // open the first camera
    VideoCapture cap2(1); // open the second camera
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 240);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 240);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 120);

    if (!cap.isOpened()) // check if we succeeded
        return -1;
    if (!cap2.isOpened()) // check if we succeeded
        return -1;

    namedWindow("Frame", 1);
    namedWindow("Frame2", 1);

    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from the first camera
        imshow("Frame", frame);

        Mat frame2;
        cap2 >> frame2; // get a new frame from the second camera
        imshow("Frame2", frame2);

        if (waitKey(30) >= 0) break; // finish on any key press
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
I have the same camera as this, and I ran into the same problem before. Try changing the order of your code as below:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 240);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
VideoCapture cap2(1);
cap2.set(CV_CAP_PROP_FRAME_WIDTH, 240);
cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
I think it's a problem with USB bandwidth. In your code, you opened both cameras at full resolution at the beginning and only then changed their resolutions.
When you call VideoCapture cap(0);, cap opens at a resolution of 1280x720 and already occupies most of the bandwidth, so VideoCapture cap2(1); won't open camera cap2 successfully.
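Putting the reordering together with the capture loop, a minimal sketch (using the 240x120 values from the question) might look like this:
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**)
{
    // Open the first camera and lower its resolution BEFORE opening the second,
    // so the first stream is no longer holding the full USB bandwidth.
    VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 240);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);

    VideoCapture cap2(1);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 240);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 120);

    if (!cap.isOpened() || !cap2.isOpened())
        return -1;

    for (;;)
    {
        Mat frame, frame2;
        cap >> frame;
        cap2 >> frame2;
        imshow("Frame", frame);
        imshow("Frame2", frame2);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}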
Hope it helps.
According to the VideoCapture documentation, there is a function called cv::VideoCapture::grab:
The primary use of the function is in multi-camera environments, especially when the cameras do not have hardware synchronization. That is, you call VideoCapture::grab() for each camera and after that call the slower method VideoCapture::retrieve() to decode and get frame from each camera
You can try that with:
cap.grab();
cap.retrieve(...);
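A minimal sketch of that grab-then-retrieve pattern for the two streams (reusing the cap and cap2 names from the question) could look like this:
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);  // left camera
    VideoCapture cap2(1); // right camera
    if (!cap.isOpened() || !cap2.isOpened())
        return -1;

    for (;;)
    {
        // Grab both sensors as close together in time as possible...
        cap.grab();
        cap2.grab();

        // ...then do the slower decode step for each stream.
        Mat frame, frame2;
        cap.retrieve(frame);
        cap2.retrieve(frame2);

        imshow("Frame", frame);
        imshow("Frame2", frame2);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}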

OpenCV "stuck" frames with VideoCapture

I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;

cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }

while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, the code does not seem to capture and show the current frame; instead it shows all the frames captured at the previous position and only then the ones from the new position.
So when I film the wall, it shows the correct frames (the wall itself) with the correct delay, but when I turn the camera toward my computer, I first see about 3 frames of the wall and only then the computer; it seems that the frames are stuck.
I've tried to use the videoCapture.set() functions to set the FPS to 1, and I tried switching the capture method to cam >> matTestingNumbers (changing the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
BTW, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed one more of the previous frames that had been captured at the other spot.
It is just like trying to photograph one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to properly freeze the window for X amount of milliseconds. Get rid of usleep()!

Playing video to correct speed with OpenCV

I have a problem with playing a video file: why is it in slow motion?
How can I make it play at normal speed?
#include"opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("eye.mp4");
// open the default camera
if (!cap.isOpened())
// check if we succeeded
return -1;
namedWindow("Video", 1);
while (1)
{
Mat frame;
cap >> frame;
imshow("Video", frame);
if (waitKey(10) == 'c')
break;
}
return 0;
}
VideoCapture isn't built for playback; it's just a way to grab frames from a video file or camera. Libraries that do support playback, such as GStreamer or DirectShow, set a clock that controls the playback, so it can be configured to play as fast as possible or at the original framerate.
In your snippet, the interval between frames comes from the time it takes to read a frame plus the waitKey(10) delay. Try using waitKey(1); it should at least play faster. Ideally, you would derive the delay from the video's framerate, i.e. waitKey(1000/fps), since waitKey() takes milliseconds.
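For instance, a minimal sketch (assuming the file reports a valid FPS value via CV_CAP_PROP_FPS; some containers do not) that derives the per-frame delay from the video itself:
#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap("eye.mp4"); // open the video file
    if (!cap.isOpened())
        return -1;

    // Ask the container for its framerate and fall back to 30 fps if it is unknown.
    double fps = cap.get(CV_CAP_PROP_FPS);
    if (fps <= 0) fps = 30.0;
    int delayMs = (int)(1000.0 / fps); // waitKey() expects milliseconds

    namedWindow("Video", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break; // end of file
        imshow("Video", frame);
        if (waitKey(delayMs) == 'c')
            break;
    }
    return 0;
}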

Camera connection and disconnection catch

Hey guys,
I'm using OpenCV with the C++ API, and in order to make my project more reliable I need proper camera connection/disconnection handling.
I have searched for how-to's, but I could only find answers that require an ugly hack in order to do so.
Can you suggest a cleaner way to do it?
Thnx
Detecting camera connection/disconnection might require some tricks.
I suggest that you start another thread to check the success of cvCreateCameraCapture() in a loop, while your application is running.
Something like the following:
while (run_detection_thread) // global variable controlled by the main thread
{
    CvCapture* capture = cvCreateCameraCapture(-1); // -1 or whatever number works for you
    if (capture) // camera is connected
    {
        cvReleaseCapture(&capture); // release it again so the device stays available
        sleep(1);
    }
    else
    {
        // camera was disconnected
    }
}
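Since you are on the C++ API, a rough equivalent of the same polling idea with cv::VideoCapture might look like the sketch below (it assumes C++11 threads; the one-second poll interval is arbitrary, and note that the probe briefly opens the device, so it will conflict with a capture that is already open elsewhere):
#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> run_detection_thread(true); // cleared by the main thread on shutdown
std::atomic<bool> camera_connected(false);    // read by the main thread

void detectionLoop()
{
    while (run_detection_thread)
    {
        cv::VideoCapture probe(0);           // try to open the default camera
        camera_connected = probe.isOpened(); // report the result to the main thread
        probe.release();                     // free the device again

        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

int main()
{
    std::thread detector(detectionLoop);

    // ... main application work; check camera_connected wherever needed ...
    std::this_thread::sleep_for(std::chrono::seconds(5));

    run_detection_thread = false;
    detector.join();
    return 0;
}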
I think I have a good workaround for this problem. I create an auxiliary Mat of zeros with the same resolution as the output from the camera. I assign it to the Mat that the captured frame is written to just afterwards, and at the end I check the norm of that Mat. If it equals zero, it means that no new frame was captured from the camera.
VideoCapture cap(0);
if (!cap.isOpened()) return -1;

Mat frame;
cap >> frame;

// Auxiliary all-zero image with the same resolution as the camera output.
Mat emptyFrame = Mat::zeros((int)cap.get(CV_CAP_PROP_FRAME_HEIGHT),
                            (int)cap.get(CV_CAP_PROP_FRAME_WIDTH), CV_32F);

for (;;)
{
    frame = emptyFrame; // reset to zeros before grabbing
    cap >> frame;       // if the camera is gone, frame stays all zeros (or empty)
    if (norm(frame) == 0) break;
}