I have several cameras connected to my PC. In my application I need to choose one of them, but I couldn't find out how to do this. Is there any function for listing all connected cameras?
CvCapture* capture;
capture = cvCaptureFromCAM( 0 );
cvCaptureFromCAM will let you choose between different cameras; the indices start at 0. Then just take notes regarding which camera corresponds to which index.
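Since OpenCV has no dedicated enumeration function, a common workaround is to probe indices until opening fails. A minimal sketch (the upper bound of 10 is an arbitrary assumption):

// Probe camera indices until cvCaptureFromCAM fails; 10 is an arbitrary upper bound.
for (int i = 0; i < 10; i++)
{
    CvCapture *cap = cvCaptureFromCAM(i);
    if (!cap)
        break;                    // no camera at this index
    printf("Camera %d is available\n", i);
    cvReleaseCapture(&cap);       // release it so it can be reopened later
}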
My app may run on a computer with anywhere from 0 to 100 cameras connected. I need code that switches to the next camera until the computer has no more cameras to use; at that point the source should wrap back to 0. To implement that, I have used the following code:
CvCapture * capture = cvCaptureFromCAM(_source);
// Try to open capture and if it fails go to first camera
if(!capture){
_source = 0;
capture = cvCaptureFromCAM(_source);
}
With this code I want to try one source (for example 3) and, if the computer does not have that many cameras, fall back to the first camera (source 0). The issue is that, even when the source is 5, cvCaptureFromCAM always returns a valid capture, pointing at the last camera used; it never returns NULL, so I can never switch back to 0 and grab from camera 0. Any idea how to implement this "circular" switch?
One option would be to get the number of cameras and apply a modulo operation over that range, but as far as I know OpenCV doesn't have a method to get the count of available cameras.
"Last camera used" suggests that you did not in fact release that camera. Try releasing the old camera before switching to a new camera.
I'm looking to make a program that, once run, will continuously look for a template image (stored in the program's directory) to match in real time against the screen. Once found, it will click on the image (i.e. the center of the coordinates of the best match). The images will be exact copies (size/color), so finding the match should not be very hard.
This process then continues with many other images and finally resets to start again with the first image, but once I have the first part working I can just copy the code.
I have downloaded the OpenCV library, as it has image matching tools, but I am lost. Any help with writing some stub code or pointing me to a helpful resource is much appreciated. I have checked a lot of the OpenCV docs with no luck.
Thank you.
If you think the template image will not look very different in the current frame, then you should use matchTemplate() from OpenCV. It's very easy to use and will give you good results.
Have a look here for a complete explanation: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
void start()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return;
    // Load your template image here ("template.png" is just an example path)
    Mat templ = imread("template.png");
    const char* wndname = "matches";
    namedWindow(wndname, 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        // Perform the template matching between the template image and the frame,
        // storing the correlation map in a result image
        Mat result;
        matchTemplate(frame, templ, result, CV_TM_CCOEFF_NORMED);
        // The best match is at the location of the maximum correlation
        double maxVal;
        Point maxLoc;
        minMaxLoc(result, 0, &maxVal, 0, &maxLoc);
        rectangle(frame, maxLoc, maxLoc + Point(templ.cols, templ.rows), Scalar(0, 255, 0));
        imshow(wndname, frame);
        char c = (char)waitKey(33);
        if( c == 27 ) break;
    }
}
Some people in this Q & A site suggested I use findContour to imitate what bwlabel in Matlab. But I am not sure because I think a contour is closed shape of detected edges and element from bwlabel is a connected shape. I guess they might be logically the same. What about them in practice? Are they really same?
Use either of these two libraries: cvBlobslib or cvblob. You will get many features of the connected components, such as size, contour, ellipticity and bounding box, and you can filter blobs or merge two or more blobs together. Try it. Under the hood, bwlabel uses a two-scan connected-component algorithm, whereas cvblob and cvBlobslib use a one-scan algorithm.
bwlabel will give you the image's connected components, i.e. a different label for each connected object against the background.
Probably what you mean is what the combination of im2bw and imcontour provides, i.e. binarizing the image and then trivially finding the single contour (boundary) per retained object in the output.
Consider the following example:
I = imread('coins.png'); % grayscale
level = graythresh(I); % find threshold
BW = im2bw(I, level); % threshold image
imcontour(BW, 1); % plot single contour
For a grayscale image you can increase the number of requested contours with imcontour, whereas OpenCV's findContours operates only on binary images.
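For comparison, a rough OpenCV counterpart of that MATLAB snippet might look like this (old-style constants; "coins.png" is just the example file name from above):

Mat I = imread("coins.png", CV_LOAD_IMAGE_GRAYSCALE);        // grayscale
Mat BW;
threshold(I, BW, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU); // Otsu plays the role of graythresh
vector<vector<Point> > contours;
findContours(BW, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // one outer contour per object
drawContours(I, contours, -1, Scalar(255));                  // plot the contours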
I found an article covering exactly this. The quick answer is: yes, their eventual output will be the same. So I might go with findContours after all, considering cvBlob still uses the old C-style API and has its own implementation of contour finding.
How can I retrieve the current frame number of a video using OpenCV? Does OpenCV have any built-in function for getting the current frame number, or do I have to do it manually?
You can use the "get" method of your capture object, like below:
capture.get(CV_CAP_PROP_POS_FRAMES); // retrieves the current frame number
and also:
capture.get(CV_CAP_PROP_FRAME_COUNT); // returns the number of total frames
Btw, these methods return a double value.
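For example, a short read loop might look like this ("video.avi" is only a placeholder file name):

VideoCapture capture("video.avi");
Mat frame;
while (capture.read(frame))
{
    // CV_CAP_PROP_POS_FRAMES is the 0-based index of the frame to be decoded next
    int pos = (int)capture.get(CV_CAP_PROP_POS_FRAMES);
    printf("frame %d\n", pos);
}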
You can also use the cvGetCaptureProperty function (if you use the old C interface).
cvGetCaptureProperty(CvCapture* capture, int property_id);
The property_id options are listed below with their definitions:
CV_CAP_PROP_POS_MSEC 0
CV_CAP_PROP_POS_FRAMES 1
CV_CAP_PROP_POS_AVI_RATIO 2
CV_CAP_PROP_FRAME_WIDTH 3
CV_CAP_PROP_FRAME_HEIGHT 4
CV_CAP_PROP_FPS 5
CV_CAP_PROP_FOURCC 6
CV_CAP_PROP_FRAME_COUNT 7
POS_MSEC is the current position in a video file, measured in
milliseconds.
POS_FRAMES is the position of the current frame in the video (e.g. the 55th frame of the video).
POS_AVI_RATIO is the current position given as a number between 0 and 1
(this is actually quite useful when you want to position a trackbar
to allow folks to navigate around your video).
FRAME_WIDTH and FRAME_HEIGHT are the dimensions of the individual
frames of the video to be read (or to be captured at the camera’s
current settings).
FPS is specific to video files and indicates the number of frames
per second at which the video was captured. You will need to know
this if you want to play back your video and have it come out at the
right speed.
FOURCC is the four-character code for the compression codec to be
used for the video you are currently reading.
FRAME_COUNT should be the total number of frames in video, but
this figure is not entirely reliable.
(from the Learning OpenCV book)
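A small usage sketch with the C interface (assuming a file capture; "video.avi" is only a placeholder name):

CvCapture *capture = cvCreateFileCapture("video.avi");
if (capture)
{
    double frameNo = cvGetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES);  // current frame index
    double total   = cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT); // total number of frames
    cvReleaseCapture(&capture);
}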
In OpenCV version 3.4, the correct flag is:
cap.get(cv2.CAP_PROP_POS_FRAMES)
The way of doing it in OpenCV Python is like this:
import cv2
cam = cv2.VideoCapture(<filename>)
print cam.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
I'm using OpenCV to capture images from an A4Tech camera. When I try to decrease the image resolution, the image assertion fails:
CvCapture *camera = cvCreateCameraCapture(1); // 0 is index of Laptop integrated camera
cvSetCaptureProperty( camera, CV_CAP_PROP_FRAME_WIDTH, 160 );
cvSetCaptureProperty( camera, CV_CAP_PROP_FRAME_HEIGHT, 140 );
assert(camera); // This is passed
while(true)
{
// ....
IplImage * image=cvQueryFrame(camera);
assert(image); // This fails. (Line 71 is here)
// ....
}
Output is:
HIGHGUI ERROR: V4L: Initial Capture Error: Unable to load initial memory buffers.
udpits: main.cpp:71: int main(int, char**): Assertion `image' failed.
Aborted
You seem to be doing it the right way, but OpenCV is known to have issues with this kind of thing. Are you sure your camera supports 160x140? Cheese says my camera supports 160x120, but when I select that format nothing seems to change.
One thing though: it's best to always check the return value of OpenCV calls, for instance:
CvCapture *camera = cvCreateCameraCapture(1);
if (!camera)
{
// print error and exit
}
One thing you could do is resize the captured frame to the resolution you want. It's not ideal, I know: your CPU will be doing this work, so there will be some performance cost to your application. But if you need the image to be a certain size and OpenCV is not playing nice, there isn't much you can do unless you are willing to perform surgery on OpenCV.
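A minimal sketch of that workaround, assuming image is the frame returned by cvQueryFrame and 160x140 is the size you are after:

IplImage *small = cvCreateImage(cvSize(160, 140), image->depth, image->nChannels);
cvResize(image, small, CV_INTER_LINEAR); // downscale the captured frame on the CPU
// ... use 'small' instead of 'image', then release it when done:
cvReleaseImage(&small);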
EDIT:
One thing you should do is check whether these values were really set. So after setting them, retrieve them with cvGetCaptureProperty().
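For example, right after the two cvSetCaptureProperty calls:

double w = cvGetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH);
double h = cvGetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT);
printf("actual capture size: %.0f x %.0f\n", w, h); // may differ from the requested 160x140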