I am trying to access a Mat element of angl (the gradient angle matrix). However, when I use the .at statement, it throws an error.
I have already checked for a NULL image (angl.data == NULL); this is not a NULL image.
Here is the code:
Mat img = imread("skelt.tif"); // this is a binary image
Mat grad_x(img.rows, img.cols, CV_16U);
Mat grad_y(img.rows, img.cols, CV_16U);
Mat angl;
Sobel(img, grad_x, CV_32F, 1, 0, 3); // Gradient X
Sobel(img, grad_y, CV_32F, 0, 1, 3); // Gradient Y
phase(grad_x, grad_y, angl, true);
cout << angl.at<float>(51, 5) << endl; // the indices are chosen arbitrarily and are within the image
cout << angl.ptr(5)[4];
The error occurs wherever the .at operator is used. The error is:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)pt.y < (unsigned)size.p[0] && (unsigned)(pt.x * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\opencv\build\include\opencv2/core/mat.inl.hpp, line 912
I am unable to debug this error.
Your image angl is not the size or type you think it is, or is empty.
Try checking the type with angl.type() and the size with angl.rows and angl.cols.
If there is a problem, it is probably in the phase() call. Are you sure you are modifying the angl image and not a copy?
In the worst case, you can debug into OpenCV to check the variables at that assert call which throws the error. To do that, you need to download the source of your OpenCV version and tell the debugger where the source files are located.
Once you have set up the debugging of the OpenCV source, you can simply jump into the .at call with your debugger to see which part of the assert condition fails.
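As a quick first check before reaching for the debugger, a minimal sketch along these lines (dumpMatInfo is just an illustrative helper name, not part of the original code) prints the properties that the failing assertion depends on:

#include <opencv2/opencv.hpp>
#include <iostream>

// Print the Mat properties that cv::Mat::at() asserts on.
void dumpMatInfo(const cv::Mat& angl)
{
    std::cout << "empty:    " << angl.empty()    << std::endl;
    std::cout << "rows:     " << angl.rows       << std::endl;
    std::cout << "cols:     " << angl.cols       << std::endl;
    std::cout << "type:     " << angl.type()     << std::endl; // CV_32FC1 == 5, CV_32FC3 == 21
    std::cout << "channels: " << angl.channels() << std::endl;
    std::cout << "depth:    " << angl.depth()    << std::endl; // CV_32F == 5
}

If the printed type is not what you expect, or rows/cols are smaller than the indices you pass to at<float>(), that mismatch is what trips the assertion.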
I'm trying to fill a triangle in a mask using the fillConvexPoly function.
But I get the following error.
OpenCV Error: Assertion failed (points.checkVector(2, CV_32S) >= 0) in fillConvexPoly, file /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp, line 2256
terminate called after throwing an instance of 'cv::Exception'
what(): /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp:2256: error: (-215) points.checkVector(2, CV_32S) >= 0 in function fillConvexPoly
I call the function like so,
cv::Mat mask = cv::Mat::zeros(r2.size(), CV_32FC3);
cv::fillConvexPoly(mask, trOutCroppedInt, cv::Scalar(1.0, 1.0, 1.0), 16, 0);
where trOutCroppedInt is defined like so:
std::vector<cv::Point> trOutCroppedInt;
and I push 3 points into the vector:
[83, 46; 0, 48; 39, 0]
How should I correct this error?
When points.checkVector(2, CV_32S) >= 0 is encountered
This error may occur when the point data type is more complex than CV_32S or the dimension is greater than two; for example, any type like vector<Point2f> can cause the problem. As a result, we can use fillConvexPoly according to the following steps:
1. Reading an Image with
cv::Mat src=cv::imread("what/ever/directory");
2. Determine the points
You must determine the points of your polygon. Thus, our code for these points is:
vector<cv::Point> point;
point.push_back(Point(163,146)); //point1
point.push_back(Point(100,148)); //point2
point.push_back(Point(100,110)); //point3
point.push_back(Point(139,110)); //point4
3. Use the cv::fillConvexPoly function
Consider the image src and draw a polygon with the points on this image; the code would be as follows:
cv::fillConvexPoly(src,                 // image to be drawn on
                   point,               // vector of polygon vertices
                   Scalar(255, 0, 0),   // color, in BGR order
                   CV_AA,               // line type (4, 8, or CV_AA for anti-aliased)
                   0);                  // number of fractional bits in the vertex coordinates
(The output was shown as a before/after comparison: the original image on the left, the image with the filled polygon on the right.)
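If the points come out of an earlier computation as floats (the vector<Point2f> case mentioned above), a minimal sketch of the conversion is shown below; floatPts and intPts are illustrative names, not variables from the question:

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical float points, e.g. produced by an earlier transform.
    std::vector<cv::Point2f> floatPts = { {83.2f, 46.1f}, {0.4f, 48.0f}, {39.7f, 0.3f} };

    // fillConvexPoly asserts points.checkVector(2, CV_32S) >= 0, i.e. it expects
    // integer points, so round the float coordinates into cv::Point first.
    std::vector<cv::Point> intPts;
    for (const cv::Point2f& p : floatPts)
        intPts.push_back(cv::Point(cvRound(p.x), cvRound(p.y)));

    cv::Mat mask = cv::Mat::zeros(100, 100, CV_8UC3);
    cv::fillConvexPoly(mask, intPts, cv::Scalar(255, 255, 255), 16, 0);
    return 0;
}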
I'm currently playing around with a 360° camera and want to use OpenCV's spherical warper for that. However, each time I try to run a simple program that makes use of the stitcher functionality, it fails to return a stitched image. I'm basically just taking the 360° picture, dividing it into two separate pictures (front and rear lens), and trying to stitch them back together.
Here's the code:
Mat srcImage = imread("assets/360_0043.JPG");
Mat frontLensImage(srcImage, Rect(0, 0, srcImage.cols / 2, srcImage.rows));
Mat rearLensImage(srcImage, Rect(srcImage.rows, 0, srcImage.cols / 2, srcImage.rows));
vector<Mat> imagesToStitch;
imagesToStitch.push_back(frontLensImage);
imagesToStitch.push_back(rearLensImage);
Mat panorama;
Stitcher stitcher = Stitcher::createDefault();
if(!imagesToStitch.empty()){
    stitcher.stitch(imagesToStitch, panorama);
    imshow("test", panorama);
    waitKey(0);
}
else{
    cout << "ERROR: Image array empty" << endl;
}
return 0;
When I try to run it, it returns this error:
OpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/modules/imgproc/src/imgwarp.cpp, line 1834
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/modules/imgproc/src/imgwarp.cpp:1834: error: (-215) ssize.area() > 0 in function resize
When debugging, panorama is an empty object even though I pass it as the OutputArray to stitcher.stitch. I searched the web thoroughly and couldn't find a solution, so any help would be greatly appreciated!
Kinda solved it. Apparently, OpenCV's memory management doesn't like you referencing the same address all the time. Since both of my images are dependent on srcImage, I assume this is where the error was. I did a quick workaround which looks like this:
Mat unprocessedFrontLensImage(srcImage, Rect(0, 0, 3 * srcImage.cols / 4, srcImage.rows));
Mat unprocessedRearLensImage(srcImage, Rect(srcImage.cols / 4, 0, 3 * srcImage.cols / 4, srcImage.rows));
imwrite("left.jpg", unprocessedFrontLensImage);
imwrite("right.jpg", unprocessedRearLensImage);
Mat frontLensImage = imread("left.jpg");
Mat rearLensImage = imread("right.jpg");
Works like a charm. Don't teach me about redundancy, I know. I'm gonna clean up and refactor it, this is just my workaround for now.
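As an alternative, a minimal sketch (my assumption, not part of the original workaround) that avoids the write/read round trip is to clone() each region of interest, so the two Mats own separate buffers instead of both referencing srcImage's memory:

// clone() copies the pixel data, so the cropped images no longer share memory with srcImage.
Mat frontLensImage = Mat(srcImage, Rect(0, 0, 3 * srcImage.cols / 4, srcImage.rows)).clone();
Mat rearLensImage = Mat(srcImage, Rect(srcImage.cols / 4, 0, 3 * srcImage.cols / 4, srcImage.rows)).clone();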
I am using OpenCV 3.1.0 (I have tried 2.4.9, with the same problem). I want to output an HSV Mat to JPEG:
// .. Getting JPEG content into memory
// JPEG to mat
Mat imgBuf = Mat(1, jpegSize, CV_8UC1, jpegContent); // wrap the raw JPEG bytes for imdecode
Mat imgMat=imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
free(jpegContent);
if(imgMat.data == NULL) {
// Some error handling
}
// Now the JPEG is decoded and reside in imgMat
cvtColor(imgMat, imgMat, CV_BGR2HSV); // Converting to HSV
Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp); // Problem goes here
cvtColor(tmp, imgMat, CV_HSV2BGR);
// Mat to JPEG
vector<uchar> buf;
imencode(".jpg", imgMat, buf, std::vector<int>());
outputJPEG=(unsigned char*)malloc(buf.size());
memcpy(outputJPEG, &buf[0], buf.size());
// ... Output JPEG
The problem is, when I do cvtColor(tmp, imgMat, CV_HSV2BGR) after inRange, my program fails with:
OpenCV Error: Assertion failed (scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F)) in cvtColor, file /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp, line 8176
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp:8176: error: (-215) scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F) in function cvtColor
If I remove inRange, the program works just fine. I have also tried removing the cvtColor call and letting imencode do its job, converting HSV to BGR and then to JPEG automatically. This time there is no assertion failure, but I get a corrupted JPEG image, as GStreamer complains:
gstrtpjpegpay.c(581): gst_rtp_jpeg_pay_read_sof ():
/GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0
WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Wrong SOF length 11.
Again, removing inRange also solves this issue and produces good JPEG data.
So, am I invoking inRange improperly, and is that what causes the corrupted image data? If yes, what is the correct way to use inRange?
inRange produces a single channel binary matrix, i.e. a CV_8UC1 matrix with values either 0 or 255.
So you cannot convert tmp with HSV2BGR, because the source image tmp doesn't have 3 channels.
OpenCV is telling you exactly this: scn (source channels) is not 3.
Since you probably want to keep, and then convert to BGR, only the part of the image within your range, you can (see the sketch after this list):
set to black everything outside the range: imgMat.setTo(Scalar(0,0,0), ~tmp);
convert the resulting image to BGR: cvtColor(imgMat, imgMat, CV_HSV2BGR);
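Putting those two steps together, a minimal sketch, assuming imgMat already holds the decoded HSV image from the question:

Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp); // tmp is CV_8UC1, values 0 or 255

imgMat.setTo(Scalar(0, 0, 0), ~tmp);   // black out everything outside the range
cvtColor(imgMat, imgMat, CV_HSV2BGR);  // imgMat still has 3 channels, so this conversion is valid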
I tried to use the following function in OpenCV (C++)
calcOpticalFlowPyrLK(prev_frame_gray, frame_gray, points[0], points[1], status, err, winSize, 3, termcrit, 0, 0.001);
and I get this error
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in calcOpticalFlowPyrLK,
file /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp, line 845
terminate called after throwing an instance of 'cv::Exception'
what(): /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp:845:
error: (-215) (npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0 in function calcOpticalFlowPyrLK
Both of the following return -1
frame_gray.checkVector(2, CV_32F, true)
prev_frame_gray.checkVector(2, CV_32F, true)
I wanted to know what checkVector actually does, because it is leading to the assertion error you can see above.
The official OpenCV documentation says:
cv::Mat::checkVector() returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N) or (N x 1); a negative number otherwise
OpenCV considers some data types equivalent for some functions, e.g. objectPoints of cv::solvePnP() can be:
1xN/Nx1 1-channel cv::Mat
3xN/Nx3 3-channel cv::Mat
std::vector<cv::Point3f>
With checkVector you can make sure that you are passing the correct representation of your data.
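A small sketch with illustrative data (not the original program) of what checkVector reports for a point container versus a plain grayscale frame, which matches the -1 the question observes for frame_gray:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Point data: wrapping a vector<Point2f> gives an N x 1, 2-channel CV_32F matrix,
    // so checkVector(2, CV_32F) returns N.
    std::vector<cv::Point2f> pts = { {1.f, 2.f}, {3.f, 4.f}, {5.f, 6.f} };
    cv::Mat ptsMat(pts);
    std::cout << ptsMat.checkVector(2, CV_32F, true) << std::endl;     // prints 3

    // A grayscale frame is a rows x cols, 1-channel CV_8U matrix: not a valid
    // list of 2-element CV_32F points, so checkVector returns -1.
    cv::Mat frame_gray(480, 640, CV_8UC1);
    std::cout << frame_gray.checkVector(2, CV_32F, true) << std::endl; // prints -1
    return 0;
}

As the quoted assertion shows, calcOpticalFlowPyrLK runs this check on the prevPts argument (points[0] in the call above), not on the frames, so it is points[0] that needs to be a non-empty vector of Point2f (or an equivalent N x 1 CV_32FC2 Mat).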
I had a similar issue with the cv2.projectPoints function (-215: Assertion failed) because OpenCV was expecting an n x 3 matrix and I was passing a 1D array of length 3. Try:
points[0].reshape(-1,3)
as the argument to the function. It changes the shape (3,) to shape (1, 3).
I loaded an Image to a Mat:
Mat Mask = cvLoadImage(filename);
It's a 3744 x 5616 RGB image. In the next step I convert it to grayscale:
cvtColor(Mask,Mask,CV_BGR2GRAY);
After this I normalize it so that the full grayscale range can be used later:
normalize(Mask,Mask,0,255,NORM_MINMAX,CV_8U);
Now I need the specific grayscale values, and I get an error on some values:
for(int i = 0; i < Picture.rows; i++)
{
    for(int j = 0; j < Picture.cols; j++)
    {
        Vec3b masked = Mask.at<Vec3b>(i,j);
        //some stuff
    }
}
I'm getting the following error on some pixels:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file c:\opencv\build\include\opencv2\core\mat.hpp, line 537
Can anyone tell me what I did wrong? It's strange that it appears only for some pixel values.
Edit:
Additional Information:
If I load my Mask as grayscale, everything works fine. But when I use cvtColor() or Mat Mask = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); on the image, the error appears. Very strange...
I think your problem is that you are accessing a single-channel grayscale image with .at<Vec3b>(i,j). Instead you want to access each pixel with .at<uchar>(i,j). cvtColor(Mask, Mask, CV_BGR2GRAY); changes the 3-channel BGR image to a one-channel grayscale image, so .at<Vec3b>(i,j) tries to read a 3-channel element and will eventually run past the end of the image data in memory, causing problems or tripping those assertions.
The inner part of your for loop should look like this:
unsigned char masked = Mask.at<uchar>(i,j);
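For completeness, a minimal sketch of the corrected loop, assuming Mask is the normalized single-channel image from the question (iterating over Mask itself keeps the indices in range):

for(int i = 0; i < Mask.rows; i++)
{
    for(int j = 0; j < Mask.cols; j++)
    {
        // One channel per pixel after cvtColor/normalize, so read a single uchar.
        unsigned char masked = Mask.at<uchar>(i, j);
        //some stuff with the grayscale value in masked
    }
}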