I'm using OpenCV to capture images from an A4Tech camera. When I try to decrease the image resolution, the image assertion fails:
CvCapture *camera = cvCreateCameraCapture(1); // 0 is the index of the laptop's integrated camera
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 160);
cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 140);
assert(camera); // This passes
while(true)
{
    // ....
    IplImage *image = cvQueryFrame(camera);
    assert(image); // This fails. (Line 71 is here)
    // ....
}
Output is:
HIGHGUI ERROR: V4L: Initial Capture Error: Unable to load initial memory buffers.
udpits: main.cpp:71: int main(int, char**): Assertion `image' failed.
Aborted
You seem to be doing it the right way, but OpenCV is known to have issues with this kind of thing. Are you sure your camera supports 160x140? Cheese says my camera supports 160x120, but when I select that format nothing seems to change.
One thing though, it's best to always check the return of OpenCV calls, for instance:
CvCapture *camera = cvCreateCameraCapture(1);
if (!camera)
{
    // print error and exit
}
One thing you could do is resize the captured frame to the resolution you want. It's not ideal, I know: your CPU will be doing this work, so there will be some performance cost to your application. But if you need the image to be a certain size and OpenCV is not playing nice, there isn't much you can do, unless you are willing to perform surgery on OpenCV itself.
EDIT:
One thing you should do is check whether these values were actually set: after setting them, retrieve them with cvGetCaptureProperty(). Drivers silently ignore modes the camera doesn't support.
I am trying to read a video file, process it, and write the processed frames as an output video file. However, I get the following error:
OpenCV Error: Assertion failed (img.cols == width && img.rows == height && channels == 3) in write, file /.../opencv-cpp/modules/videoio/src/cap_mjpeg_encoder.cpp, line 829
terminate called after throwing an instance of 'cv::Exception'
what(): /.../opencv-cpp/modules/videoio/src/cap_mjpeg_encoder.cpp:829: error: (-215) img.cols == width && img.rows == height && channels == 3 in function write
I am sure I have 3 channels (I checked with .channels()) before the write() call takes place:
//generate video
cout<<finalOutputRGB.channels()<<endl;
outputCapRGB.write(finalOutputRGB);
So the problem is not there. Perhaps it's the way I initialized the writer?
// Setup output videos
VideoWriter outputCapRGB(rgbVideoOutputPath, captureRGB.get(CV_CAP_PROP_FOURCC), captureRGB.get(CV_CAP_PROP_FPS),
Size(captureRGB.get(CV_CAP_PROP_FRAME_WIDTH), captureRGB.get(CV_CAP_PROP_FRAME_HEIGHT)));
What could it be? One thing that came to my mind is that during the processing the frames are being cropped, so the resolutions are not the same. Maybe this could be the reason. But then again, it would be stupid for OpenCV to not allow any modified video to be recorded.
So I tried to create the VideoWriter objects with the cropped sizes of my frames, as follows:
// Sizes of the videos to be written (after the processing)
Size irFrameSize = Size(449, 585);
Size rgbFrameSize = Size(488, 694);
// Setup output videos
VideoWriter outputCapRGB(rgbVideoOutputPath, captureRGB.get(CV_CAP_PROP_FOURCC), captureRGB.get(CV_CAP_PROP_FPS), rgbFrameSize);
VideoWriter outputCapIR(irVideoOutputPath, captureIR.get(CV_CAP_PROP_FOURCC), captureIR.get(CV_CAP_PROP_FPS), irFrameSize);
However, I still get the same damn error.
Alternatively, I would also appreciate any suggestions for software that can crop video files conveniently on Ubuntu. That would also solve the problem: I would crop the videos and feed them in.
Any thoughts?
The exception says the image frame that you want to write has a different size from the size in your VideoWriter. You should check that every frame you write has the same width/height as your VideoWriter.
Just a shot in the dark: from the error, since you're sure that you have 3 channels, is it possible that you inverted width and height? In OpenCV, a Size is defined as (width, height), whereas a Mat is defined as (rows, cols), which effectively corresponds to (height, width).
Maybe I'm too late, but this solution worked for me.
Resize the Mat using cv::resize(finalOutputRGB, finalOutputRGB, cv::Size(width, height * 3)). Use the height of the image times 3 for the height.
That's it. It solved my problem. Hope that helps.
I'm having a problem with detectMultiScale returning rectangles outside the bounds of the input Mat.
So what I'm doing is an optimization technique where the first frame of a video feed is passed to detectMultiScale in its entirety.
If an object was detected, I create a temp Mat by cloning the previous frame's detected object's rect out of the current full frame.
Then I pass this temp Mat to detectMultiScale, so only the area around the rectangle where the previous frame detected an object is searched.
The problem I'm having is that the results from detectMultiScale, when passing the temp Mat, are rectangles outside the bounds of the input temp Mat.
Mostly I would just like to know exactly what is going on here. I have two ideas of what could be happening, but I can't figure it out for sure.
Either the clone operation, when cloning a rect from the full frame into the temp Mat, is recording somewhere inside the Mat object that the cloned area sits at the rows and columns of the full frame. For example: I have a full frame of 100x100 and I clone a 10x10 rectangle from it at position 80x80. The resulting Mat will be size 10x10, but maybe inside the Mat somewhere it says the Mat starts at 80x80?
Or the CascadeClassifier is keeping state somewhere about the full frame I had passed to it previously?
I don't know for sure what is happening here, but I was hoping someone could shed some light.
Here's a little code example of what I'm trying to do, with comments explaining the results I'm seeing:
std::vector<cv::Rect> DetectObjects(cv::Mat fullFrame, bool useFullFrame, cv::Rect detectionRect)
{
    // fullFrame is 100x100
    // detectionRect is 10x10 at position 80x80, e.g. cv::Rect(80,80,10,10)
    // useFullFrame is false
    std::vector<cv::Rect> results;
    if(useFullFrame)
    {
        object_cascade.detectMultiScale(fullFrame,
            results,
            m_ScaleFactor,
            m_Neighbors,
            0 | cv::CASCADE_SCALE_IMAGE | cv::CASCADE_DO_ROUGH_SEARCH | cv::CASCADE_DO_CANNY_PRUNING,
            m_MinSize,
            m_MaxSize);
    }
    else
    {
        // useFullFrame is false, so we run this block
        cv::Mat tmpMat = fullFrame(detectionRect).clone();
        // tmpMat is size 10x10
        object_cascade.detectMultiScale(tmpMat,
            results,
            m_ScaleFactor,
            m_Neighbors,
            0 | cv::CASCADE_SCALE_IMAGE | cv::CASCADE_DO_ROUGH_SEARCH | cv::CASCADE_DO_CANNY_PRUNING,
            m_MinSize,
            m_MaxSize);
    }
    if(results.size() > 0)
    {
        // this is the weird part: when looking at the first element of
        // results (results[0]), it is at position 80,80, size 10,10,
        // i.e. cv::Rect(80,80,10,10), even though the second
        // detectMultiScale was run with a Mat of 10x10
        // do stuff
    }
    return results;
}
This is pretty darn close to what I have in code, except for the actual example values I mention above in the comments; I used values that were easy rather than full-frame values like 1920x1080 and actual results, something like 367x711 for example.
So why am I getting results from detectMultiScale that are outside the bounds of the input Mat?
EDIT:
I had written this program originally for an embedded Linux distribution, where this problem does not arise (I've always gotten the expected results). The problem is happening with a Windows build of OpenCV, so I'm currently going through the OpenCV code to see if anything related to this stands out.
I believe this is a simple logic error. This:
if(fullFrame)
should be this:
if(useFullFrame)
My app could be run on a computer with anywhere from 0 to 100 cameras connected. I need code that switches cameras until the computer has no more cameras to use; in that case, the source should go back to 0. To implement that, I have used the following code:
CvCapture *capture = cvCaptureFromCAM(_source);
// Try to open the capture and, if it fails, fall back to the first camera
if (!capture) {
    _source = 0;
    capture = cvCaptureFromCAM(_source);
}
With this code, I want to try one source (for example 3) and, if the computer doesn't have 3 cameras, go to the first camera (source 0). The issue is that even when the source is 5, cvCaptureFromCAM always returns a valid capture, pointing at the last camera used, and never NULL, so I can never switch to 0 and get the capture from camera 0. Any idea how to implement this "circular" switch?
One option would be to get the count of cameras and take the source modulo that count, but as far as I know OpenCV doesn't have a method to get the count of available cameras.
"Last camera used" suggests that you did not in fact release that camera. Try releasing the old camera before switching to a new camera.
I am trying to implement a face detection program with webcam input using the Viola-Jones face detector in OpenCV, and it works fine except that it becomes about 10 times slower when no face is detected in the frame.
This is really weird, because if there is no face in the frame, most of the windows will be rejected in the earlier stages of the cascade, so it should be slightly faster, I would guess (NOT SLOWER!).
I am using the detectMultiScale function (not the cvHaarDetectObjects function) for various reasons, but I don't think this should matter in any way.
Can anyone give me an advice on this problem please?
Thanks in advance.
Did you try adding the min and max size of the face rectangle to be detected?
You can also check your pyramid scale value; it must be > 1, and if detection is too slow, try a higher value: the detection will not be as good, but it will be faster.
cv::CascadeClassifier cascade;
// ... init classifier
cv::Mat grayImage;
// ... set image
std::vector<cv::Rect> results;
cv::Size minSize(60, 60);
cv::Size maxSize(80, 80);
int minNeighbors = 3;
float pyramidScale = 1.1f;
cascade.detectMultiScale(grayImage, results, pyramidScale, minNeighbors,
    0, minSize, maxSize);
I'm looking to make a program that, once run, will continuously look for a template image (stored in the program's directory) to match in real time against the screen. Once found, it will click on the image (i.e. the center of the coordinates of the best match). The images will be exact copies (size/color), so finding the match should not be very hard.
This process then continues with many other images and then resets to start again with the first image, but once I have the first part working I can just copy the code.
I have downloaded the OpenCV library as it has image matching tools but I am lost. Any help with writing some stub code or pointing me to a helpful resource is much appreciated. I have checked a lot of the OpenCV docs with no luck.
Thank you.
If you think the template image would not look very different in the current frame, then you should use matchTemplate() from OpenCV. It's very easy to use and will give you good results.
Have a look here for a complete explanation: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
// Hypothetical window title; the original snippet assumed wndname was defined elsewhere
const char* wndname = "template match";

void start()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return;

    // Load your template image here, once, outside the loop
    Mat templ = imread("template.png"); // path to your stored template

    namedWindow(wndname, 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera

        // Result image: one correlation score per possible top-left position
        Mat result;
        matchTemplate(frame, templ, result, CV_TM_CCORR_NORMED);

        // The best match is the maximum of the correlation map
        double maxVal; Point maxLoc;
        minMaxLoc(result, 0, &maxVal, 0, &maxLoc);
        Point center(maxLoc.x + templ.cols / 2, maxLoc.y + templ.rows / 2); // point to click

        imshow(wndname, frame);
        char c = waitKey(33);
        if (c == 27) break;
    }
}