I tried to use the following function in OpenCV (C++)
calcOpticalFlowPyrLK(prev_frame_gray, frame_gray, points[0], points[1], status, err, winSize, 3, termcrit, 0, 0.001);
and I get this error
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in calcOpticalFlowPyrLK,
file /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp, line 845
terminate called after throwing an instance of 'cv::Exception'
what(): /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp:845:
error: (-215) (npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0 in function calcOpticalFlowPyrLK
Both of the following return -1
frame_gray.checkVector(2, CV_32F, true)
prev_frame_gray.checkVector(2, CV_32F, true)
I want to know what checkVector actually does, because it is what triggers the assertion error shown above.
The official OpenCV documentation says:
cv::Mat::checkVector() returns N if the matrix is 1-channel (N x
ptdim) or ptdim-channel (1 x N) or (N x 1); negative number otherwise
OpenCV considers some data representations equivalent for certain functions; e.g., the objectPoints argument of cv::solvePnP() can be any of:
1xN/Nx1 1-channel cv::Mat
3xN/Nx3 3-channel cv::Mat
std::vector<cv::Point3f>
With checkVector you can make sure that you are passing the correct representation of your data.
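For illustration (this snippet is mine, not from the question), here is a minimal sketch of how checkVector reacts to different representations; a point vector wrapped in a cv::Mat passes, while a plain grayscale image does not:

#include <opencv2/core/core.hpp>
#include <iostream>
#include <vector>

int main()
{
    // An N x 1, 2-channel matrix built from a point vector: a valid list of 2D points
    std::vector<cv::Point2f> pts;
    pts.push_back(cv::Point2f(1, 2));
    pts.push_back(cv::Point2f(3, 4));
    cv::Mat m(pts);
    std::cout << m.checkVector(2, CV_32F, true) << std::endl;   // prints 2, the number of points

    // A grayscale image is not a 2D point list (wrong depth and layout), so the check fails
    cv::Mat img(480, 640, CV_8UC1);
    std::cout << img.checkVector(2, CV_32F, true) << std::endl; // prints -1
    return 0;
}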
I had a similar issue with the cv2.projectPoints function (-215: Assertion failed) because OpenCV was expecting an Nx3 matrix and I was passing a 1D array of length 3. Try:
points[0].reshape(-1,3)
as the argument to the function. It changes the shape (3,) to shape (1,3).
Related
I'm trying to fill a triangle in a mask using the fillConvexPoly function.
But I get the following error.
OpenCV Error: Assertion failed (points.checkVector(2, CV_32S) >= 0) in fillConvexPoly, file /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp, line 2256
terminate called after throwing an instance of 'cv::Exception'
what(): /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp:2256: error: (-215) points.checkVector(2, CV_32S) >= 0 in function fillConvexPoly
I call the function like so:
cv::Mat mask = cv::Mat::zeros(r2.size(), CV_32FC3);
cv::fillConvexPoly(mask, trOutCroppedInt, cv::Scalar(1.0, 1.0, 1.0), 16, 0);
where trOutCroppedInt is defined like so:
std::vector<cv::Point> trOutCroppedInt;
and I push 3 points into the vector:
[83, 46; 0, 48; 39, 0]
How should I correct this error?
When the assertion points.checkVector(2, CV_32S) >= 0 fails
This error may occur when the data type is more complex than CV_32S or the points have more than two dimensions; for example, a type like vector<Point2f> can cause the problem (a conversion sketch is shown after these steps). With that in mind, fillConvexPoly can be used according to the following steps:
1. Read an image:
cv::Mat src = cv::imread("what/ever/directory");
2. Determine the points
You must determine your points, for example the four corners of the region you want to fill (the original answer illustrates them in a graphic).
The code for these points is:
vector<cv::Point> point;
point.push_back(Point(163,146)); //point1
point.push_back(Point(100,148)); //point2
point.push_back(Point(100,110)); //point3
point.push_back(Point(139,110)); //point4
3. Use the cv::fillConvexPoly function
To draw the polygon defined by these points on the image src, the code would be as follows:
cv::fillConvexPoly(src,               // image to be drawn on
                   point,             // the vector of points
                   Scalar(255, 0, 0), // color, in BGR order
                   CV_AA,             // line type: 4, 8, or CV_AA (anti-aliased)
                   0);                // number of fractional bits in the point coordinates
(The output images showed the original on the left and the result with the filled polygon on the right.)
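Relating this back to the question above: if the points end up stored as floating-point values (for example in a vector<cv::Point2f> coming out of a warp or crop step), converting them to integer cv::Point first satisfies the points.checkVector(2, CV_32S) assertion. A minimal sketch, where trOutCropped is a hypothetical float-point vector (not a name from the question):

// Hypothetical float-point input
std::vector<cv::Point2f> trOutCropped;
trOutCropped.push_back(cv::Point2f(83.4f, 46.2f));
trOutCropped.push_back(cv::Point2f(0.1f, 48.7f));
trOutCropped.push_back(cv::Point2f(39.9f, 0.3f));

// Round each point to integer coordinates so the point type becomes CV_32S
std::vector<cv::Point> trOutCroppedInt;
for (size_t i = 0; i < trOutCropped.size(); i++)
    trOutCroppedInt.push_back(cv::Point(cvRound(trOutCropped[i].x),
                                        cvRound(trOutCropped[i].y)));

cv::Mat mask = cv::Mat::zeros(100, 100, CV_32FC3);  // use r2.size() as in the question
cv::fillConvexPoly(mask, trOutCroppedInt, cv::Scalar(1.0, 1.0, 1.0), 16, 0);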
I am trying to access a Mat element of angl (the gradient angle) computed from the image skelt.tif. However, when I use the .at statement, it throws an error.
I have already checked for an empty image (angl.data == NULL); it is not empty.
Here is the code:
Mat img = imread("skelt.tif"); // this is a binary image
Mat grad_x(img.rows, img.cols, CV_16U);
Mat grad_y(img.rows, img.cols, CV_16U);
Mat angl;
Sobel(img, grad_x, CV_32F, 1, 0, 3);// Gradient X
Sobel(img,grad_y, CV_32F, 0, 1, 3); // Gradient Y
phase(grad_x, grad_y, angl,true);
cout << angl.at<float>(51, 5) << endl; // the dimensions are randomly chosen and are within the image
cout<<angl.ptr(5)[4];
The error occurs in the places where the .at operator is used. The error is:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)pt.y < (unsigned)size.p[0] && (unsigned)(pt.x * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\opencv\build\include\opencv2/core/mat.inl.hpp, line 912
I am unable to debug this error.
Your image angl is not the size or type you think it is, or is empty.
Try checking the type with angl.type() and the size with angl.rows and angl.cols.
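For example, a quick sanity check just before the .at call could look like this (a minimal sketch using the question's variable names):

std::cout << "empty: "      << angl.empty()
          << ", type: "     << angl.type()      // at<float> expects CV_32FC1, which is 5
          << ", channels: " << angl.channels()
          << ", size: "     << angl.rows << " x " << angl.cols << std::endl;
CV_Assert(!angl.empty() && angl.type() == CV_32FC1 && angl.rows > 51 && angl.cols > 5);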
If there is a problem, it is probably in the phase() call, which we cannot reproduce here. Are you sure you are modifying the angl image and not a copy?
In the worst case, you can debug into OpenCV to check the variables at that assert call which throws the error. To do that, you need to download the source of your OpenCV version and tell the debugger where the source files are located.
Once you have set up the debugging of the OpenCV source, you can simply jump into the .at call with your debugger to see which part of the assert condition fails.
I am attempting to calculate a histogram of a ROI in order to find that ROI in an image using back projection. My code is
cvtColor(frame, hsvFrame, CV_BGR2HSV);
cvtColor(ROI, hsvROI, CV_BGR2HSV);
float hRanges[] = {0, 180};
float sRanges[] = {0, 256};
float vRanges[] = {0, 256};
const float* ranges[] = { hRanges, sRanges, vRanges};
int histSize = 256;
int channels[] = {0,1,2};
calcHist(&hsvROI, 1, channels, Mat(), ROIhist, 3, &histSize, ranges);
calcBackProject(&hsvFrame, 1, channels, ROIhist, backProj, ranges, true);
imshow("display", backProj);
First, please assume all Mats have already been declared (this is only a snippet). In my understanding, the more dimensions/channels I use, the more accurate the back projection should be, so I have decided to include all 3 channels of the HSV ROI, and therefore of course an HSV image (is this necessary? is there a better way?). In the example above, I get this error:
OpenCV Error: Assertion failed (s >= 0) in setSize, file opencv/modules/core/src/matrix.cpp, line 293
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: opencv/modules/core/src/matrix.cpp:293: error: (-215) s >= 0 in function setSize
which gets triggered on the calcHist line. My solution was to make sure ROIhist is the same size as hsvROI, so I then put this line just before declaring the ranges:
Mat ROIhist(hsvROI.rows,hsvROI.cols, CV_8UC3, Scalar(0,0,0));
I think that worked, as on the next run I got a different error, this time triggered on the calcBackProject line:
OpenCV Error: Assertion failed (dims > 0 && !hist.empty()) in calcBackProject, file /opencv/modules/imgproc/src/histogram.cpp, line 1887
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: opencv/modules/imgproc/src/histogram.cpp:1887: error: (-215) dims > 0 && !hist.empty() in function calcBackProject
which I really do not understand. In fact, I also feel that my previous fix of adding the Mat constructor shouldn't be necessary anyway...?
I essentially need to be able to calculate this back projection as accurately as possible while using both functions in an ideally simple and standard way. My attempt clearly has a logic flaw somewhere, and I would appreciate an explanation or a suggestion on how I should go about this properly to get the best results. Thanks in advance!
histSize in your case should be the number of bins per dimension of your 3D histogram (so an int array, not just an int). I'd also use lower numbers, so that more pixels fall into each bin, which makes your back projection more meaningful:
int histSize[] = {32, 32, 32};
...
calcHist(&hsvROI, 1, channels, Mat(), ROIhist, 3, histSize, ranges);
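Putting it together, a minimal sketch of both calls using the question's variable names (letting calcHist allocate ROIhist itself instead of pre-constructing it, and with an optional normalize so the back projection spans the visible range):

int histSize[] = {32, 32, 32};                                    // bins per channel (H, S, V)
calcHist(&hsvROI, 1, channels, Mat(), ROIhist, 3, histSize, ranges);
normalize(ROIhist, ROIhist, 0, 255, NORM_MINMAX);                 // optional scaling of the histogram
calcBackProject(&hsvFrame, 1, channels, ROIhist, backProj, ranges, 1, true);
imshow("display", backProj);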
I am trying to compute an average background, but something is wrong because it crashes the app.
cv::Mat firstFrame;
cv::Mat averageBackground;
int frameCounter=0;
// this function is called for every frame of the camera
- (void)processImage:(Mat&)image {
cv::Mat diffFrame;
cv::Mat currentFrame;
cv::Mat colourCopy;
cvtColor(image, currentFrame, COLOR_BGR2GRAY);
averageBackground = cv::Mat::zeros(image.size(), CV_32FC3);
cv::accumulateWeighted(currentFrame, averageBackground, 0.01);
cvtColor(image, colourCopy, COLOR_BGR2RGB);
This is what I see in the crash logs:
OpenCV Error: Assertion failed (_src.sameSize(_dst) && dcn == scn) in accumulateWeighted, file /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp, line 1108
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp:1108: error: (-215) _src.sameSize(_dst) && dcn == scn in function accumulateWeighted
In cv::accumulateWeighted the input and the output image must have the same number of channels. In your case currentFrame has only one channel because of the COLOR_BGR2GRAY you did before, and averageBackground has three channels.
Also be careful with averageBackground = cv::Mat::zeros(image.size(), CV_32FC3); because with this line you re-initialize the result image on every frame (so you delete the previous values that allow you to calculate the average). You must initialize this image only once, at the beginning of your program.
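A minimal sketch of both fixes, keeping the structure and names from the question (the accumulator is created once and its channel count matches the grayscale input):

cv::Mat averageBackground;   // created once, outside processImage

- (void)processImage:(Mat&)image {
    cv::Mat currentFrame;
    cvtColor(image, currentFrame, COLOR_BGR2GRAY);
    if (averageBackground.empty())
        averageBackground = cv::Mat::zeros(image.size(), CV_32FC1);  // one channel, like currentFrame
    cv::accumulateWeighted(currentFrame, averageBackground, 0.01);
}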
I'm trying to learn OpenCV (Using version 3.0.0).
Right now I'm trying to see what the point operations do to various images. Everything was going fine until I tried the magnitude operation, which requires inputs in the form of
magnitude(InputArray x, InputArray y, OutputArray magnitude)
It also says that x and y should be floating-point arrays of the x/y-coordinates of the vectors, and that they must be the same size.
I've tried making a vector of Mats, splitting the input image into these vectors, and then applying the magnitude operation to them, but this didn't work. So I think I need to pass the arguments as columns and rows, but now I'm getting the error
OpenCV Error: Assertion failed (src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F)) in magnitude, file /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp, line 521
terminate called after throwing an instance of 'cv::Exception'
what(): /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp:521: error: (-215) src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F) in function magnitude
Aborted (core dumped)
And I'm not sure why, because I am clearly converting the input Mats to CV_64F types.
Am I using the magnitude function wrong? Or just passing it the wrong data?
void Magnitude(Mat img, Mat out)
{
img.convertTo(img, CV_64F);
out.convertTo(out, CV_64F);
for(int i = 0 ; i < img.rows ; i ++)
for(int j = 0 ; j < img.cols ; j++)
cv::magnitude(img.row(i), img.col(j), out.at<cv::Vec2f>(i,j));
cv::normalize(out,out,0,255,cv::NORM_MINMAX);
cv::convertScaleAbs(out,out);
cv::imshow("Magnitude", out);
waitKey();
}
void magnitude(InputArray x, InputArray y, OutputArray magnitude)
where x, y, and magnitude must all have the same size. In your case that means your image would have to be square. Is that right?
A sample usage:
cv::Sobel(img, gx, CV_32F, 1, 0, 3);   // x gradient; float depth, as magnitude() requires
cv::Sobel(img, gy, CV_32F, 0, 1, 3);   // y gradient; same size and type as gx
cv::Mat mag(gx.size(), gx.type());     // output with the same size and type
cv::magnitude(gx, gy, mag);            // per-element sqrt(gx^2 + gy^2)
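To display the result the way the question's Magnitude() function intends, the same normalize / convertScaleAbs / imshow steps from the question can follow:

cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);  // stretch values to the displayable 0-255 range
cv::convertScaleAbs(mag, mag);                     // convert to 8-bit for display
cv::imshow("Magnitude", mag);
cv::waitKey();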