I'm trying to access multiple USB cameras in OpenCV on macOS 10.11.
My goal is to connect up to 20 cameras to the PC via quad-channel USB extensions and take single images. I do not need live streaming.
I tried the following code, and I can take a single image from all cameras (currently only 3, via one USB controller).
The question is: does OpenCV stream live video from the USB cameras all the time, or does grab() store an image on the camera that can then be fetched with retrieve()?
I couldn't find information on whether OpenCV executes grab() against its internal video buffer or against the camera itself.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main(int argument_number, char* argument[])
{
    std::vector<int> cameraIDs{0, 1, 2};
    std::vector<cv::VideoCapture> cameraCaptures;
    std::vector<std::string> nameCaptures{"a", "b", "c"};

    // Load all cameras
    for (int i = 0; i < cameraIDs.size(); i++)
    {
        cv::VideoCapture camera(cameraIDs[i]);
        if (!camera.isOpened()) return 1;
        camera.set(CV_CAP_PROP_FRAME_WIDTH, 640);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
        cameraCaptures.push_back(camera);
    }

    cv::namedWindow("a");

    while (true)
    {
        int c = cv::waitKey(2);
        if (27 == char(c)) // if ESC pressed: grab new images and display them
        {
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                (*it).grab();
            }
            int i = 0;
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                cv::Mat3b frame;
                (*it).retrieve(frame);
                cv::imshow(nameCaptures[i++], frame);
            }
        }
    }
    return 0;
}
Could you please make the question clearer: do you only want single frames from the feed, or do you want the streams to stay connected all the time?
OpenCV camera capture is always in running mode unless you release the capture device. So if you only want one frame from a device, it's better to release that device once you have retrieved the frame.
Another point: instead of using grab() and retrieve() in a multi-camera environment, it's better to use read(), which combines both methods and reduces the overhead of decoding the streams. So if you want, say, the frame at the 2-second position from each of the cameras, they will be captured quite close together in the time domain: frame x from cam1 at 2 s, frame x from cam2 at 2.00001 s, and frame x from cam3 at 2.00015 s (time multiplexing / multithreading, handled internally by OpenCV).
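A minimal sketch of that "read one frame, then release" pattern (the camera indices come from the question; error handling is kept to a bare minimum, and this is only an illustration, not the poster's actual code):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<int> cameraIDs{0, 1, 2};   // hypothetical indices; adjust to your setup
    std::vector<cv::Mat> snapshots;

    for (int id : cameraIDs)
    {
        cv::VideoCapture cap(id);
        if (!cap.isOpened())
            continue;                      // skip cameras that fail to open

        cv::Mat frame;
        if (cap.read(frame))               // read() = grab() + retrieve() in one call
            snapshots.push_back(frame.clone());

        cap.release();                     // free the device as soon as the frame is taken
    }
    return 0;
}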
I hope the explanation is clear.
Thanks
Related
I'm using OpenCV 4 to read from a camera, similar to a webcam. It works great; the code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH, 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);

while (true)
{
    cv::Mat mat;

    // wait for some external event here so I know it is time to take a picture...

    cap >> mat;
    process_image(mat);
}
The problem is that this gives me many video frames, not a single image. This matters because in my case I don't want or need to process 30 FPS. I actually have specific physical events that trigger reading the image from the camera at certain times. Because OpenCV expects the caller to want video (not surprising, considering the class is called cv::VideoCapture), it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
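For reference, the workarounds most often suggested for this kind of buffering, applied to the loop from the question above (a sketch only: CAP_PROP_BUFFERSIZE is honoured by only some capture backends, and the number of grab() calls used to drain stale frames is a tuning assumption, not a fixed rule):

cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH, 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
cap.set(cv::CAP_PROP_BUFFERSIZE, 1);    // shrink the internal buffer; backend-dependent

while (true)
{
    // wait for the external event here...

    // Discard whatever stale frames are sitting in the buffer,
    // then read the frame that corresponds to "now".
    const int staleFrames = 4;          // assumption: tune for your camera/backend
    for (int i = 0; i < staleFrames; ++i)
        cap.grab();

    cv::Mat mat;
    cap >> mat;
    process_image(mat);                 // same placeholder as in the question
}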
I've been having some issues regarding capturing video from a live stream.
I open the video with the open() function of cv::VideoCapture. However, I have to ask manually whether a frame is ready, in something like this:
while (true)
{
    cv::Mat frame;
    if (videoCapture.read(frame))
    {
        // Do stuff ...
    }
    else
    {
        // Video is done.
    }
}
The problem with this code is that it will definitely process a single frame multiple times, the number of times depending on the camera's FPS. This is because the read() function will only return false if the camera is disconnected, according to the documentation.
So my question is, how can I know if there is a NEW frame available? That I'm not just getting the old one again?
I am currently working on a project that captures video from a webcam and sends the encoded stream via UDP to do real-time streaming.
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
VideoCapture cap(0); // open the video camera no. 0
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
while (1)
{
Mat frame;
bool bSuccess = cap.read(frame); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
return 0;
}
Some people say that the frame obtained from cap.read(frame) is already the decoded frame; I have no idea how and when that happens. What I want is the encoded frame or stream. What should I do to get it? Should I encode it again?
According to the docs, calling VideoCapture::read() is equivalent to calling VideoCapture::grab() then VideoCapture::retrieve().
The docs for the Retrieve function say it does indeed decode the frame.
Why not just use the decoded frame; presumably you'd be decoding it at the far end in any case?
OpenCV API does not give access to the encoded frames.
You will have to use a more low-level library, probably device- and platform-dependent. If your OS is Linux, Video4Linux2 may be an option; there must be equivalent libraries for Windows/macOS. You may also have a look at mjpg-streamer, which does something very similar to what you want to achieve (on Linux only).
Note that the exact encoding of the image will depend on your webcam: some USB webcams support MJPEG compression (or even H.264), but others are only able to send raw data (usually in a YUV colorspace).
Another option is to grab the decoded image with OpenCV and re-encode it, for example with imencode(). This has the advantages of simplicity and portability, but re-encoding the image will use more resources.
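A sketch of that re-encoding route, assuming OpenCV 3 or newer (on 2.x the constant is CV_IMWRITE_JPEG_QUALITY); the JPEG quality value is an arbitrary assumption and the UDP send itself is omitted:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (cap.read(frame))
    {
        std::vector<uchar> encoded;                               // JPEG bytes end up here
        std::vector<int> params{cv::IMWRITE_JPEG_QUALITY, 80};    // assumed quality setting
        if (cv::imencode(".jpg", frame, encoded, params))
        {
            // send encoded.data(), encoded.size() over UDP (omitted here)
        }
    }
    return 0;
}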
I'm trying to create stereo vision using two Logitech C310 webcams.
But the result is not good enough: one of the videos lags behind the other.
Here is my OpenCV program, using VC++ 2010:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
try
{
VideoCapture cap1;
VideoCapture cap2;
cap1.open(0);
cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);
cap2.open(1);
cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);
Mat frame,frame1;
for (;;)
{
Mat frame;
cap1 >> frame;
Mat frame1;
cap2 >> frame1;
transpose(frame, frame);
flip(frame, frame, 1);
transpose(frame1, frame1);
flip(frame1, frame1, 1);
imshow("Img1", frame);
imshow("Img2", frame1);
if (waitKey(1) == 'q')
break;
}
cap1.release();
return 0;
}
catch (cv::Exception & e)
{
cout << e.what() << endl;
}
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try to plug one camera into the front and the other into the back (in the hope of landing on different buses),
or reduce the frame size / FPS to generate less traffic.
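For example, something along these lines in the code above (whether CAP_PROP_FPS is honoured depends on the backend and camera, so treat the values as assumptions to tune):

cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640);   // smaller frames, less USB traffic
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
cap1.set(CV_CAP_PROP_FPS, 15);            // honoured only by some backends/cameras
cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
cap2.set(CV_CAP_PROP_FPS, 15);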
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing; it uses the simplest underlying operating-system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old but I had to deal with the same problem recently so...
I don't think you were saturating the USB bus; if you were, you would have seen an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, AND THEN you start getting frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.
What you should do if you really want to use OpenCV for this is to add some threads: one dealing with left frames and the other with right frames, both doing nothing but saving the last frame received (so you'll always deal with the newest frames only). If you want to get your frames, you'll just have to get them from these threads, roughly as sketched below.
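A rough sketch of that idea (the class and member names are my own invention, and error handling / clean shutdown are omitted for brevity):

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

// Keeps only the most recent frame delivered by one camera.
class LatestFrameGrabber
{
public:
    explicit LatestFrameGrabber(int deviceId)
        : cap_(deviceId), running_(true), worker_([this] { loop(); }) {}

    ~LatestFrameGrabber()
    {
        running_ = false;
        worker_.join();
    }

    // Returns a copy of the newest frame seen so far (may be empty at startup).
    cv::Mat latest()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return latest_.clone();
    }

private:
    void loop()
    {
        cv::Mat frame;
        while (running_ && cap_.isOpened())
        {
            if (cap_.read(frame))              // blocks until the camera delivers a frame
            {
                std::lock_guard<std::mutex> lock(mutex_);
                frame.copyTo(latest_);         // overwrite, so only the newest frame survives
            }
        }
    }

    cv::VideoCapture cap_;
    std::atomic<bool> running_;
    cv::Mat latest_;
    std::mutex mutex_;
    std::thread worker_;
};

Usage would then be along the lines of constructing one LatestFrameGrabber per camera (e.g. LatestFrameGrabber left(0), right(1);) and calling latest() on both whenever you need a pair of current frames.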
I've put together a little something here, if you need it.
I'm using OpenCV to get some video frames. This is how the camera capture is initialised:
VideoCapture capture;
capture.open(0); //Read from camera #0
If I wanted to switch to different camera, I'd do this:
capture.release(); //Release the stream
capture.open(1); //Open different stream
Imagine you had a few cameras connected to your computer and you wanted to loop through them using two buttons Previous camera and Next camera. Without saving the current camera ID to a variable, I need to get the actual value from the VideoCapture object.
So is there a way to find out the ID of the currently used device?
Pseudocode:
int current = capture.deviceId;
capture.release();
capture.open(current + 1);
So is there a way to find out the ID of the currently used device?
There's no way to do this, because the VideoCapture class doesn't contain such a variable or method. It does contain a protected pointer to CvCapture (take a look at highgui.h), so you could try to play with that, but you don't have access to this field.
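Since the API doesn't expose the index, the practical fallback is to track it yourself, for example with a thin wrapper (a sketch only; the class name and layout are my own):

#include <opencv2/opencv.hpp>

// Thin wrapper that remembers which device index is currently open,
// since cv::VideoCapture itself does not expose it.
class TrackedCapture
{
public:
    bool open(int deviceId)
    {
        capture_.release();
        deviceId_ = deviceId;
        return capture_.open(deviceId);
    }

    int deviceId() const { return deviceId_; }

    cv::VideoCapture& capture() { return capture_; }

private:
    cv::VideoCapture capture_;
    int deviceId_ = -1;   // -1 means no device opened yet
};

The Previous/Next camera buttons would then call something like tracked.open(tracked.deviceId() + 1) or tracked.open(tracked.deviceId() - 1).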