I am using OpenCV version 4.0.0. I am trying to stitch some images together and then trim the resulting image. The stitching works, but the trimming does not.
My program keeps aborting with the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.0.0) /Users/RAR/opencv/modules/core/src/umatrix.cpp:545: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'UMat'
Abort trap: 6
The error occurs at the line stitched = stitched(cv::boundingRect(c)); in the code below.
while (cv::countNonZero(sub) > 0) {
cv::erode(minRect, minRect, cv::Mat()); // Erode the minimum rectangular mask
cv::subtract(minRect, thresh, sub); // Subtract the thresholded image from the minimum rectangular mask (to check whether any non-zero pixels remain)
std::vector<std::vector<cv::Point>> cnts4;
cv::findContours(minRect.clone(), cnts4, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
c = cnts4[0];
for (auto iter = cnts4.begin(); iter != cnts4.end(); ++iter) {
if (cv::contourArea(*iter) > cv::contourArea(c)) { // Finds the largest contour (the contour/outline of the stitched image)
c = *iter;
}
}
stitched = stitched(cv::boundingRect(c)); // Extract the bounding box and use the bounding box coordinates to extract the final stitched images
}
Why am I getting this error?
From OP's comments:
stitched: cols: 4295 rows: 2867 bounding rect[4274 x 2845 from (11, 12)]
stitched: cols: 4274 rows: 2845 bounding rect[4272 x 2843 from (12, 13)]
In the first case, the rectangle is trying to extract a size of (4274, 2845) from (11, 12) in the stitched image. This means that it is taking pixels from (11, 12) to (4285, 2857), which is within the bounds of the stitched image since the stitched image has a size of (4295, 2867). No problem.
In the second case, the rectangle is trying to extract a size of (4272, 2843) from (12, 13) in the stitched image. This means that it is taking pixels from (12, 13) to (4284, 2856), which is out of bounds of the stitched image since the stitched image has a size of (4274, 2845). Problem.
The sub-image you are trying to extract extends beyond the bounds of the stitched image.
(-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows
The error message also indicates this: roi refers to the rectangle you are trying to extract, computed by cv::boundingRect(c), and m is the stitched image. The coordinates of this rectangle extend beyond the bounds of the stitched image.
You can test this by setting the coordinates of the rectangle manually.
You should not get an error with stitched(cv::Rect(cv::Point(11, 12), cv::Size(4274, 2845))).
You will get the error with stitched(cv::Rect(cv::Point(12, 13), cv::Size(4272, 2843))).
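As a defensive fix, a minimal sketch (my addition, not part of the original code) is to clamp the bounding rectangle to the image bounds before cropping, using cv::Rect's intersection operator:

cv::Rect roi = cv::boundingRect(c);
roi &= cv::Rect(0, 0, stitched.cols, stitched.rows); // intersect with the image bounds
if (roi.area() > 0) {
    stitched = stitched(roi); // the clipped rectangle is guaranteed to fit
}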
The last iteration is the problem, as it won't find any contours.
Maybe you can try something like this:
int nonZeroCount = 1;
while (nonZeroCount)
{
cv::erode(minRect, minRect, cv::Mat());
cv::subtract(minRect, thresh, sub);
nonZeroCount = cv::countNonZero(sub);
if (nonZeroCount)
{
std::vector< std::vector<cv::Point> > cnts4;
cv::findContours(minRect.clone(), cnts4, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
c = cnts4[0];
for (auto iter = cnts4.begin(); iter != cnts4.end(); ++iter)
{
if (cv::contourArea(*iter) > cv::contourArea(c))
{
c = *iter;
}
}
stitched = stitched(cv::boundingRect(c));
}
}
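One caveat with this sketch: findContours can still return an empty vector, and cnts4[0] would then throw. A small extra guard (my addition, not part of the original suggestion) makes that case explicit:

cv::findContours(minRect.clone(), cnts4, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
if (cnts4.empty())
{
    break; // nothing left to trim; keep the current stitched image
}
c = cnts4[0];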
Related
I'm trying to get a sub-image from an RGB image in OpenCV and C++. I've seen the other threads on this topic, but their solutions didn't work for me.
This is the code that I use:
Mat src = imread("Images/00011_00025.ppm");
Rect crop(1, 1, 64, 67);
Mat rez = src(crop);
The image has dimensions 64x67, so I don't understand why I get the following error in the console:
Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows)
Any ideas of what is the cause of this error?
Rect crop(1, 1, 64, 67);
The rectangle's top-left corner is at position (1,1) and its size is set to 64x67.
Mat rez = src(crop);
When using this rectangle to crop the image, you run out of bounds, since the rectangle has an offset of one pixel but the same size as the image to crop.
You could either manually account for the offset in the width and height, or (my preferred solution for cropping) make use of a cv::Range.
With ranges you define a row span and a column span to perform the cropping (note that the end of a Range is exclusive):
cv::Range rows(1, src.rows); // rows 1 .. 66 of the 67-row image
cv::Range cols(1, src.cols); // cols 1 .. 63 of the 64-column image
Mat rez = src(rows, cols);
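For completeness, the manual alternative mentioned above shrinks the width and height by the offset. A sketch, assuming the 64x67 image from the question:

cv::Rect crop(1, 1, src.cols - 1, src.rows - 1); // 1-pixel offset, size reduced to match
cv::Mat rez = src(crop);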
The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform.
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector <Point2f>.
Mat H = estimateRigidTransform(objPoints, scePoints, false);
Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. estimateRigidTransform, however, returns only a 2x3 matrix, so this method cannot be used directly.
How would I transform the values of the corners, represented as vector<Point2f>, with this 2x3 matrix? I am just looking to perform the same steps as the tutorial, but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.
EDIT:
I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.
Mat row = (Mat_<double>(1,3) << 0, 0, 1);
H.push_back(row);
However, this gives the following error when using perspectiveTransform:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size of 1 x 5 is the size of objCorners and sceCorners.
warpAffine(objCorners, sceCorners, H, Size(1,4));
This gives the error below, which suggests a wrong type. objCorners and sceCorners are vector<Point2f> representing the 4 corners. I thought warpAffine would accept Mat images, which may explain the error.
OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280
I've done it this way in the past (this snippet sits inside a loop over point sets, hence the continue):
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
if(R.cols == 0) // estimateRigidTransform returns an empty Mat when it fails
{
    continue;
}
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
cv::Mat warped;
cv::warpPerspective(img1,warped,H,img1.size());
which is the same as David Nilosek suggested: add a [0 0 1] row at the end of the matrix.
This code warps the IMAGES with a rigid transformation.
If you want to warp/transform the points, you must use the perspectiveTransform function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )
tutorial here:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
or you can do it manually by looping over your vector and
cv::Point2f result;
result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
Hope that helps.
Remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion needed there!
Edit: this is the full (tested) code:
// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));
// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);
for(unsigned int i=0; i<result.size(); ++i)
std::cout << result[i] << std::endl;
which gives output as expected:
[1, 1]
[2, 1]
[1, 2]
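As a side note (my addition, not part of the original answer): for point vectors, cv::transform applies a 2x3 affine matrix directly, so the 3x3 extension can be skipped entirely:

std::vector<cv::Point2f> result2;
cv::transform(p1, result2, R); // applies the 2x3 affine matrix to every point

This gives the same output as the perspectiveTransform call above.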
The affine transformations (the result of cv::estimateRigidTransform) are applied to the image with the function cv::warpAffine.
The 3x3 homography form of a rigid transform is:
a1   a2   b1
-a2  a1   b2
0    0    1
(With fullAffine=false, estimateRigidTransform returns a 2x3 matrix of the form [a1 a2 b1; -a2 a1 b2], so the diagonal entries are equal.) So when using estimateRigidTransform, you can add [0 0 1] as the third row if you want the 3x3 matrix.
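If you prefer something more compact than copying the elements one by one, as in the answer above, a sketch of the same construction using cv::Mat::eye and copyTo:

cv::Mat H = cv::Mat::eye(3, 3, R.type()); // start from the 3x3 identity, so the last row is already [0 0 1]
R.copyTo(H(cv::Rect(0, 0, 3, 2)));        // copy the 2x3 rigid transform into the top two rows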
I am trying to split a Matrix into segments, perform some manipulation of the segments and then merge the segments into a complete matrix again.
To split, I'm doing:
for(int j = 0; j < segments; ++j)
{
Rect r;
if(j == lastSegment)
{
r = Rect(j * segmentWidth, 0, lastWidth, origHei);
}
else
{
r = Rect(j * segmentWidth, 0, segmentWidth - 1, origHei);
}
tmpFrame(r).copyTo(segmentMats[j]);
}
Then to merge them I try:
Mat fullFrame;
for(int i = 0; i < maxSegments; ++i)
{
int segNum = i % segments;
Rect r;
if( segNum == 0) // 1st segment of frame
{
fullFrame.create(origWid, origHei, segmentMats[i].type());
r = Rect(0, 0, segmentWidth - 1, origHei);
}
else if (segNum == lastSegment)
{
r = Rect((i * segmentWidth), 0, lastWidth, origHei);
}
else
{
r = Rect((i * segmentWidth), 0, segmentWidth - 1, origHei);
}
segmentMats[i].copyTo(fullFrame(r));
}
But I keep getting a failed assertion,
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat
I don't see how this code could set borders outside the assertion values. Can someone see my error?
Thanks.
Edit:
Thanks for the replies. To clarify my variables, I've listed how they are computed below.
origWid and origHei are the width and height of the entire frame
segments = the number of vertical segments a frame is divided into. So segments = 2 means the frame is divided in half vertically.
lastSegment = segments - 1; since segments are zero-indexed, the last segment has this index
segmentWidth = origWid / segments; this floors in case origWid is not evenly divisible by segments
lastWidth = origWid - (lastSegment * segmentWidth); this takes care of the case that origWid is not evenly divisible by segments and captures the number of columns left over in the last segment
segmentMats is an array of segment Mats
segNum = the order of the segment. So if segments == 2, segNum == 0 is the left half of the frame and segNum == 1 is the right half of the frame
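Given those definitions, two likely culprits stand out (my reading of the code above, not a confirmed answer): Mat::create takes (rows, cols), so create(origWid, origHei, ...) swaps the dimensions, and the merge loop offsets by i * segmentWidth where it should use segNum * segmentWidth. The rectangles also use segmentWidth - 1, which drops a column per segment. A corrected sketch of the merge loop under those assumptions (the split loop would need the same segmentWidth fix so segment sizes match):

Mat fullFrame;
for (int i = 0; i < maxSegments; ++i)
{
    int segNum = i % segments;
    if (segNum == 0) // 1st segment of a frame
    {
        fullFrame.create(origHei, origWid, segmentMats[i].type()); // rows first, then cols
    }
    int w = (segNum == lastSegment) ? lastWidth : segmentWidth; // full segment width, no -1
    Rect r(segNum * segmentWidth, 0, w, origHei); // offset by segNum, not i
    segmentMats[i].copyTo(fullFrame(r));
}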
I'm having trouble specifying the regions of interest for feature finding in the image stitching method (Stitcher::stitch). I get the following error:
"OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/matrix.cpp, line 308
terminate called throwing an exception"
but when I checked the regions and the image cols and rows, everything seems to be fine. Any help or suggestions would be appreciated.
OpenCV 2.4.0 has a bug in the Stitcher::matchImages() method (stitcher.cpp):
the algorithm scales the input images, but the input ROIs remain unchanged.
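A possible workaround, as a sketch only: pre-scale each ROI by the same factor the stitcher applies to the images. This assumes matchImages() uses the usual work-scale formula found in stitcher.cpp (verify against your OpenCV version); stitcher, img, and roi stand for your cv::Stitcher instance, one input image, and its ROI.

// Assumed to mirror the internal work scale computed from registrationResol()
double work_scale = std::min(1.0, std::sqrt(stitcher.registrationResol() * 1e6 / img.size().area()));
cv::Rect scaled_roi(cvRound(roi.x * work_scale), cvRound(roi.y * work_scale),
                    cvRound(roi.width * work_scale), cvRound(roi.height * work_scale));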
I have an image that contains a square, and I need to extract the area contained in that square.
After applying the squares.c script (available in the samples of every OpenCV distribution) I obtain a vector of squares; I then need to save an image for each of them.
The user karlphillip suggested this:
for (size_t x = 0; x < squares.size(); x++)
{
Rect roi(squares[x][0].x, squares[x][0].y,
squares[x][1].x - squares[x][0].x,
squares[x][3].y - squares[x][0].y);
Mat subimage(image, roi);
}
in order to generate a new Mat called subimage for each of the squares detected in the original image.
As karl reminded me, the points detected in the image may not represent a perfect square, but the code he suggested assumes they do.
In fact I get this error:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width &&
roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height &&
roi.y + roi.height <= m.rows) in Mat, file /usr/include/opencv/cxmat.hpp,
line 187
terminate called after throwing an instance of 'cv::Exception'
what(): /usr/include/opencv/cxmat.hpp:187: error: (-215) 0 <= roi.x &&
0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y &&
0 <= roi.height && roi.y + roi.height <= m.rows in function Mat
Aborted
Any suggestions to make the script also accept non-perfect squares?
I feel like I need to clarify a few things about that code.
First, it assumes that the detected region is a perfect square, because it ignores some of the points inside squares[x] when creating the new Mat.
Second, it also assumes that the points that make the region were detected in the clockwise direction, starting with p0 in the top-left corner of the image:
(p0) 1st----2nd (p1)
| |
| |
(p3) 4th----3rd (p2)
which might not be true for all the regions detected. That means that this code:
Rect roi(squares[x][0].x, squares[x][0].y,
squares[x][1].x - squares[x][0].x,
squares[x][3].y - squares[x][0].y);
probably will generate a ROI with invalid dimensions, such as negative width and height values, and that's why OpenCV throws a cv::Exception at you on Mat subimage(image, roi);.
What you should do is write code that identifies the top-left point of the region and calls it p0, then its nearest neighbor on the right side, p1, then finds the bottom-right point of the region and calls it p2, and then what's left is p3. After this, assembling the ROI is easy:
Rect roi(p0.x, p0.y,
p1.x - p0.x,
p3.y - p0.y);
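A sketch of that ordering step (my own illustration, assuming the four points form a roughly axis-aligned quadrilateral as in the diagram above):

std::vector<cv::Point> pts = squares[x];
// sort by y to separate the top pair from the bottom pair
std::sort(pts.begin(), pts.end(), [](const cv::Point& a, const cv::Point& b) { return a.y < b.y; });
cv::Point p0 = pts[0].x < pts[1].x ? pts[0] : pts[1]; // top-left
cv::Point p1 = pts[0].x < pts[1].x ? pts[1] : pts[0]; // top-right
cv::Point p3 = pts[2].x < pts[3].x ? pts[2] : pts[3]; // bottom-left
cv::Point p2 = pts[2].x < pts[3].x ? pts[3] : pts[2]; // bottom-right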
EDIT:
I found an excellent solution while reading the documentation of OpenCV v2.3. It automates the process I described earlier and makes things much easier and cleaner. You can use this trick to order the 4 points in the vector into a meaningful Rect structure:
// Data returned and filled by findSquares(). Check the example squares.cpp for more info on this function.
vector<vector<Point> > squares;
for (size_t i = 0; i < squares.size(); i++)
{
Rect rectangle = boundingRect(Mat(squares[i]));
cout << "#" << i << " rectangle x:" << rectangle.x << " y:" << rectangle.y << " " << rectangle.width << "x" << rectangle.height << endl;
}
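To then extract each sub-image safely inside that loop (my addition, not part of karl's original answer), you can combine this with cv::Rect's intersection operator, so a rectangle that pokes outside the image never reaches the Mat constructor:

Rect bounded = rectangle & Rect(0, 0, image.cols, image.rows); // clip to the image bounds
Mat subimage = image(bounded); // safe: bounded is guaranteed to lie inside the image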