OpenCV - Circular switch cameras with cvCaptureFromCAM

My app could be executed on a computer with anywhere from 0 to 100 cameras connected. I need to write code that switches cameras until the computer has no more cameras to use; at that point, the source should be 0 again. To implement that, I have used the following code:
CvCapture *capture = cvCaptureFromCAM(_source);
// Try to open the capture and, if it fails, go back to the first camera
if (!capture)
{
    _source = 0;
    capture = cvCaptureFromCAM(_source);
}
With this code, I want to try one source (for example 3) and, if the computer does not have that many cameras, fall back to the first camera (source 0). The issue is that, even when the source is 5, cvCaptureFromCAM always returns a valid capture, namely a capture for the last camera used; it never returns NULL, so I can never switch back to source 0. Any idea how to implement this "circular" switch?
One option would be to get the number of cameras and apply a modulo operation over that range, but as far as I know OpenCV doesn't provide a method to count the available cameras.

"Last camera used" suggests that you did not in fact release that camera. Try releasing the old camera before switching to a new camera.

Related

Match object between different video frames

I am trying to use OpenCV to detect the shift between consecutive video frames when the camera is unstable and moving in real time, as shown in the picture. To compensate for the effect of shaking or a changing angle, I want to match some objects in the image, for example the clock, and from the center of the same object in consecutive frames I can detect the shift value and compensate for its effect. I don't know how to do this in real time, or how many accurate ways there are to do it.
Thank you in advance and I hope my question is clear.
This is a fairly standard operation, as it's actively used in MPEG-4 compression. It's called "motion estimation", and you don't do it on objects (that's too hard; it requires image segmentation). In OpenCV, it's covered under Video Stabilization.
If you want to try writing the code yourself, then one method is to first crop the frame to produce a sub-image slightly smaller than the actual image along each dimension. This will give you some room to move.
Next you want to be able to find and track shapes in OpenCV; an example of code is here: http://opencv-srf.blogspot.co.uk/2011/09/object-detection-tracking-using-contours.html. Play around until you get a few geometric primitive shapes coming up on each frame.
Next you want to build vectors between the centres of the shapes; these are what will determine the movement of the camera. If, in the next frame, most of the vectors are displaced but parallel, that is a good indicator that the camera has moved.
The last step is to calculate the displacement, which should be a matter of measuring the distance between the detected parallel vectors. If this is smaller than your sub-image cropping margin, then you can crop the original image to negate the displacement.
The pseudocode for each iteration would be something like this:
// Variables
image wholeFrame1, wholeFrame2, subImage, shapesFrame1, shapesFrame2
vectorArray vectorsFrame1, vectorsFrame2, parallelVectorList
vector cameraDisplacement = [0, 0]

// Display the current sub-image
subImage = cropImage(wholeFrame1, cameraDisplacement)
display(subImage)

// Find shapes to track in both frames
shapesFrame1 = findShapes(wholeFrame1)
shapesFrame2 = findShapes(wholeFrame2)

// Store a list of parallel vectors between the two sets of shapes
parallelVectorList = detectParallelVectors(shapesFrame1, shapesFrame2)

// Find the mean displacement across the pairs of parallel vectors
cameraDisplacement = meanDisplacement(parallelVectorList)

// Crop the next image accounting for the camera displacement
subImage = cropImage(wholeFrame2, cameraDisplacement)
There are better ways of doing it, but this would be easy enough for a first attempt at image stabilisation by someone with experience of OpenCV.
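If you would rather lean on OpenCV directly instead of tracking shapes, here is a minimal sketch of per-frame global shift estimation with cv::phaseCorrelate (the camera index and the warp-based compensation are assumptions of this sketch, not part of the answer above):
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                      // assumed camera index
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prevGray;
    cap >> frame;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    prevGray.convertTo(prevGray, CV_32F);         // phaseCorrelate wants float images

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);

        // Estimate the global translation between consecutive frames
        cv::Point2d shift = cv::phaseCorrelate(prevGray, gray);

        // Warp the frame back by the estimated shift to cancel the motion
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, -shift.x,
                                               0, 1, -shift.y);
        cv::Mat stabilized;
        cv::warpAffine(frame, stabilized, M, frame.size());

        cv::imshow("stabilized", stabilized);
        if (cv::waitKey(1) == 27) break;          // Esc quits
        prevGray = gray.clone();
    }
    return 0;
}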

Kinect as Motion Sensor

I'm planning on creating an app that does something like this: http://www.zonetrigger.com/articles/Kinect-software/
That means I want to be able to set up "trigger zones" using the Kinect and its 3D image. Now, I know that Microsoft states that the Kinect can detect the skeletons of up to 6 people.
For me however, it would be enough to detect whether something is entering a trigger zone and where.
Does anyone know if the Kinect can be programmed to function as a simple Motion Sensor, so it can detect more than 6 entries?
It is well known that the Kinect cannot detect more than 5 entries (just kidding). All you need to do is get a depth map (z-map) from the Kinect and then convert it into a 3D map using these formulas:
X = ((col - cap_width) * Z) / focal_length_X;
Y = ((row - cap_height) * Z) / focal_length_Y;
Z = Z;
where row and col are measured from the image center (not the upper-left corner!) and focal_length_X/focal_length_Y are the focal length of the Kinect in pixels (~570). Now you can specify the exact locations in 3D where, if pixels appear, you can do whatever you want to do.
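A minimal sketch of this back-projection (depth16 as a CV_16U depth map and the single ~570 px focal length are assumptions of the sketch):
const float f = 570.0f;                        // assumed Kinect focal length in pixels
const float cx = depth16.cols / 2.0f;          // image center, x
const float cy = depth16.rows / 2.0f;          // image center, y
for (int r = 0; r < depth16.rows; ++r)
{
    for (int c = 0; c < depth16.cols; ++c)
    {
        float Z = depth16.at<unsigned short>(r, c);
        if (Z == 0) continue;                  // no depth reading at this pixel
        float X = (c - cx) * Z / f;            // col measured from the center
        float Y = (r - cy) * Z / f;            // row measured from the center
        // test (X, Y, Z) against your trigger-zone volume here
    }
}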
Here are a few more pointers. You can use OpenCV for ease of visualization. To read a frame from the Kinect after it has been initialized, you just need something like this:
Mat inputMat = Mat(h, w, CV_16U, (void*) depth_gen.GetData());
You can easily visualize depth maps using histogram equalization (it will optimally spread the 10,000 Kinect levels among your available 255 levels of grey); see the sketch after this list.
It is sometimes desirable to do object segmentation, grouping spatially close pixels with similar depths together. I did this several years ago (see this), but I had to delete the floor and/or the common surface on which objects stood; otherwise all the objects were connected and extracted as a single large segment.
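Here is a small sketch of the visualization pointer above, assuming inputMat is the CV_16U depth frame from the earlier snippet and raw values up to roughly 10000:
cv::Mat depth8;
inputMat.convertTo(depth8, CV_8U, 255.0 / 10000.0); // squeeze ~10000 raw levels into 8 bits
cv::equalizeHist(depth8, depth8);                   // spread the occupied levels over 0..255
cv::imshow("depth", depth8);
cv::waitKey(1);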

Choosing of a certain camera in OpenCV?

I have several cameras connected to my PC. In my application I need to choose one of them, but I couldn't find out how to do this. Is there any function for listing all connected cameras?
CvCapture* capture = cvCaptureFromCAM(0);
cvCaptureFromCAM lets you choose among the different cameras: start at 0 and count up. Then just take notes on which camera corresponds to which number.
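There is no official listing API, but a sketch like the following probes indices until opening fails (MAX_CAMERAS is an assumed bound, and, as the first question on this page shows, some backends may not return NULL for invalid indices, so treat this as a heuristic):
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    const int MAX_CAMERAS = 10;        // assumed upper bound on indices to probe
    for (int i = 0; i < MAX_CAMERAS; ++i)
    {
        CvCapture *cap = cvCaptureFromCAM(i);
        if (!cap)
            break;                     // no camera at this index
        std::printf("camera %d is available\n", i);
        cvReleaseCapture(&cap);        // release so it can be reopened later
    }
    return 0;
}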

Retrieving the current frame number in OpenCV

How can I retrieve the current frame number of a video using OpenCV? Does OpenCV have a built-in function for getting the current frame number, or do I have to track it manually?
You can use the "get" method of your capture object, like below:
capture.get(CV_CAP_PROP_POS_FRAMES); // retrieves the current frame number
and also:
capture.get(CV_CAP_PROP_FRAME_COUNT); // returns the total number of frames
Btw, these methods return a double value.
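So if you need an integer frame index, cast the result; a small sketch:
int currentFrame = static_cast<int>(capture.get(CV_CAP_PROP_POS_FRAMES));
int totalFrames  = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_COUNT));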
You can also use the cvGetCaptureProperty method (if you use the old C interface):
cvGetCaptureProperty(CvCapture* capture, int property_id);
The property_id options are listed below, with definitions:
CV_CAP_PROP_POS_MSEC      0
CV_CAP_PROP_POS_FRAMES    1
CV_CAP_PROP_POS_AVI_RATIO 2
CV_CAP_PROP_FRAME_WIDTH   3
CV_CAP_PROP_FRAME_HEIGHT  4
CV_CAP_PROP_FPS           5
CV_CAP_PROP_FOURCC        6
CV_CAP_PROP_FRAME_COUNT   7
POS_MSEC is the current position in a video file, measured in milliseconds.
POS_FRAMES is the position of the current frame in the video (e.g. the 55th frame of the video).
POS_AVI_RATIO is the current position given as a number between 0 and 1 (this is actually quite useful when you want to position a trackbar to allow folks to navigate around your video).
FRAME_WIDTH and FRAME_HEIGHT are the dimensions of the individual frames of the video to be read (or to be captured at the camera's current settings).
FPS is specific to video files and indicates the number of frames per second at which the video was captured. You will need to know this if you want to play back your video and have it come out at the right speed.
FOURCC is the four-character code for the compression codec to be used for the video you are currently reading.
FRAME_COUNT should be the total number of frames in the video, but this figure is not entirely reliable.
(from the Learning OpenCV book)
In OpenCV version 3.4, the correct flag is:
cap.get(cv2.CAP_PROP_POS_FRAMES)
The way of doing it in OpenCV Python is like this:
import cv2
cam = cv2.VideoCapture(<filename>)
print cam.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)

Decrease image resolution in OpenCV

I'm using OpenCV to capture images from an A4Tech camera. When I try to decrease the image resolution, an image assertion fails:
CvCapture *camera = cvCreateCameraCapture(1); // 0 is the index of the laptop's integrated camera
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 160);
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 140);
assert(camera); // This passes

while (true)
{
    // ....
    IplImage *image = cvQueryFrame(camera);
    assert(image); // This fails. (Line 71 is here)
    // ....
}
Output is:
HIGHGUI ERROR: V4L: Initial Capture Error: Unable to load initial memory buffers.
udpits: main.cpp:71: int main(int, char**): Assertion `image' failed.
Aborted
You seem to be doing it the right way, but OpenCV is known to have issues with this kind of thing. Are you sure your camera supports 160x140? Cheese says my camera supports 160x120, but when I select that format nothing seems to change.
One thing though, it's best to always check the return of OpenCV calls, for instance:
CvCapture *camera = cvCreateCameraCapture(1);
if (!camera)
{
    // print error and exit
}
One thing you could do is resize the captured frame to the resolution you want. It's not ideal, I know: your CPU will be doing the work, so there will be some performance cost to your application. But if you need the image to be a certain size and OpenCV is not playing nice, there isn't much else you can do, unless you are willing to perform surgery on OpenCV.
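A minimal sketch of that workaround with the old C interface (the 160x140 target comes from the question; everything else is an assumption of the sketch):
IplImage *frame = cvQueryFrame(camera);        // frames from cvQueryFrame must not be released
IplImage *small = cvCreateImage(cvSize(160, 140), frame->depth, frame->nChannels);
cvResize(frame, small, CV_INTER_LINEAR);       // CPU-side downscale
// ... use `small` ...
cvReleaseImage(&small);                        // we own this image, so release it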
EDIT:
One thing you should do is check whether these values were actually set: after setting them, read them back with cvGetCaptureProperty(), as in the sketch below.
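For example (a sketch; the driver may silently substitute the nearest supported resolution):
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 160);
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 140);
double w = cvGetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH);
double h = cvGetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT);
printf("driver reports %.0fx%.0f\n", w, h);    // may differ from what was requested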