Access element of float cv::Mat - c++

I don't understand why I can't get this code to work:
cv::Mat M(2, 3, CV_32FC1);
cv::Point2f center(20, 20);
M = cv::getRotationMatrix2D(center, 20, 1.0);
float test;
test = M.at<float>(1, 0);
test = M.at<float>(0, 1);
test = M.at<float>(1, 1);
The code fails when accessing the elements with M.at. The following assertion comes up:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file C:\OpenCV2.2\include\opencv2/core/mat.hpp, line 517

To quote Good Will Hunting, "It's not your fault!"
M has been overwritten with a CV_64FC1 (double) rotation matrix, and that's why M.at<float>(i,j) fails.
So don't bother initializing M; cv::getRotationMatrix2D will take care of it and return a CV_64F matrix, which can (of course) be accessed with M.at<double>(i,j).
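A minimal sketch of the fix (assuming you simply want the rotation matrix values; getRotationMatrix2D returns a 2x3 CV_64F matrix):
cv::Point2f center(20, 20);
cv::Mat M = cv::getRotationMatrix2D(center, 20, 1.0);  // 2x3, CV_64F
double test = M.at<double>(1, 0);                      // matches the actual element type
// or, if a CV_32F matrix is really needed:
cv::Mat Mf;
M.convertTo(Mf, CV_32F);
float testf = Mf.at<float>(1, 0);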

I don't know anything about the cv namespace, but I would put a breakpoint at the first call to M.at() and look at the members of M. One of these conditions must be failing:
dims <= 2
data == 0
i0 < size.p[0]
i1*DataType<_Tp>::channels < size.p[1]*channels()
((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1() //sure hope it isn't this one
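If a debugger is not handy, a quick print (a debugging sketch, not part of the original answer) shows which precondition is violated; for the matrix returned by cv::getRotationMatrix2D, type() reports 6 (CV_64F) rather than 5 (CV_32F):
std::cout << "dims: " << M.dims << " data: " << (void*)M.data
          << " rows: " << M.rows << " cols: " << M.cols
          << " type: " << M.type()            // 6 == CV_64F, 5 == CV_32F
          << " elemSize1: " << M.elemSize1()  // 8 bytes per double element
          << std::endl;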

How to fix error with cv::boundingRect in OpenCV

I am using OpenCV version 4.0.0. I am trying to stitch some images together and trim the resulting image and while I am able to stitch the images, I am not able to trim the resulting image.
My program keeps aborting with the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.0.0) /Users/RAR/opencv/modules/core/src/umatrix.cpp:545: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'UMat'
Abort trap: 6
The error occurs at the line stitched = stitched(cv::boundingRect(c)); in the code below.
while (cv::countNonZero(sub) > 0) {
    cv::erode(minRect, minRect, cv::Mat());  // Erode the minimum rectangular mask
    cv::subtract(minRect, thresh, sub);      // Subtract the thresholded image from the minimum rectangular mask (check if any non-zero pixels remain)
    std::vector<std::vector<cv::Point>> cnts4;
    cv::findContours(minRect.clone(), cnts4, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    c = cnts4[0];
    for (auto iter = cnts4.begin(); iter != cnts4.end(); ++iter) {
        if (cv::contourArea(*iter) > cv::contourArea(c)) {  // Find the largest contour (the outline of the stitched image)
            c = *iter;
        }
    }
    stitched = stitched(cv::boundingRect(c));  // Crop to the bounding box of the largest contour
}
Why am I getting this error?
From OP's comments:
stitched: cols: 4295 rows: 2867 bounding rect[4274 x 2845 from (11, 12)]
stitched: cols: 4274 rows: 2845 bounding rect[4272 x 2843 from (12, 13)]
In the first case, the rectangle is trying to extract a size of (4274, 2845) from (11, 12) in the stitched image. This means that it is taking pixels from (11, 12) to (4285, 2857), which is within the bounds of the stitched image since the stitched image has a size of (4295, 2867). No problem.
In the second case, the rectangle is trying to extract a size of (4272, 2843) from (12, 13) in the stitched image. This means that it is taking pixels from (12, 13) to (4284, 2856), which is out of bounds of the stitched image since the stitched image has a size of (4274, 2845). Problem.
The sub-image you are trying to extract extends beyond the bounds of the stitched image.
(-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows
The error message also indicates this. roi in the error message refers to the sub-image you are trying to extract using cv::boundingRect(c) and m is the stitched image. The coordinates of this rectangle are beyond the size of the stitched image.
You can test this by setting the coordinates of the rectangle manually.
You should not get an error with stitched(cv::Rect(11, 12, 4274, 2845)).
You will get the error with stitched(cv::Rect(12, 13, 4272, 2843)).
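One defensive option (a sketch, not from the original answers) is to clamp the bounding rectangle to the image bounds before cropping; cv::Rect supports intersection with operator &:
cv::Rect roi = cv::boundingRect(c);
roi &= cv::Rect(0, 0, stitched.cols, stitched.rows);  // intersect the ROI with the full-image rectangle
if (roi.area() > 0) {
    stitched = stitched(roi);
}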
The last iteration is the problem, as it won't find any contours.
Maybe you can try something like this:
int nonZeroCount = 1;
while (nonZeroCount)
{
    cv::erode(minRect, minRect, cv::Mat());
    cv::subtract(minRect, thresh, sub);
    nonZeroCount = cv::countNonZero(sub);
    if (nonZeroCount)
    {
        std::vector<std::vector<cv::Point>> cnts4;
        cv::findContours(minRect.clone(), cnts4, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        c = cnts4[0];
        for (auto iter = cnts4.begin(); iter != cnts4.end(); ++iter)
        {
            if (cv::contourArea(*iter) > cv::contourArea(c))
            {
                c = *iter;
            }
        }
        stitched = stitched(cv::boundingRect(c));
    }
}

OpenCV: Accessing elements of 5D Matrix

I have a problem accessing elements of a 5D Matrix in OpenCV. I create my Matrix using
int sizes[5] = { height_, width_, range_, range_, range_ };
Mat w_i_ = Mat(2 + channels, sizes, CV_16UC(channels), Scalar(0));
where channels = 3. Then I'm trying to access and modify the matrix elements using for loops:
for (UINT Y = 0; Y < height; ++Y) {
    for (UINT X = 0; X < width; ++X) {
        // a) Compute the homogeneous vector (wi,w)
        Vec3b wi = image.at<Vec3b>(Y, X);
        // b) Compute the downsampled coordinates
        UINT y = round(Y / sigmaSpatial);
        UINT x = round(X / sigmaSpatial);
        Vec3b zeta = round((image.at<Vec3b>(Y, X) - min) / sigmaRange);
        // round() here is overloaded for vectors
        // c) Update the downsampled S×R space
        int idx[5] = { y, x, zeta[0], zeta[1], zeta[2] };
        w_i_.at<Vec3b>(idx) = wi;
    }
}
I am getting an assertion failed error produced by Mat::at() when I run the code. Specifically the message I get is:
OpenCV Error: Assertion failed (elemSize() == (((((DataType<_Tp>::type) & ((512 - 1) << 3)) >> 3) + 1) << ((((sizeof(size_t)/4+1)*16384|0x3a50) >> ((DataType<_Tp>::type) & ((1 << 3) - 1))*2) & 3))) in cv::Mat::at, file c:\opencv\build\include\opencv2\core\mat.inl.hpp, line 1003
I have searched the web but I can't seem to find any topics on 5D Matrices (similar topics proved of no help).
Thanks in advance
You initialize the zeta variable but do not check its values.
Most likely you get an out-of-range value for the zeta[0], zeta[1], or zeta[2] indices, and thus the internal range checking in the at() function fails.
To prevent such crashes, at least add some manual range checking before calling at():
bool inRange = true;
for (int i = 0; i < 3; i++)
    if (zeta[i] < 0 || zeta[i] >= range_)
        inRange = false;
if (!inRange)
    continue;  // skip pixels whose zeta falls outside the grid
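As an aside (an observation on the posted code, not part of the original answer): the quoted assertion is Mat::at's element-size check, and w_i_ was created as CV_16UC(channels), so reading it with at<Vec3b> (8-bit elements) would trip that same assertion even with valid indices. A minimal sketch with matching 16-bit elements and the range check in place:
bool inRange = true;
for (int i = 0; i < 3; i++)
    if (zeta[i] >= range_)  // zeta's components are unsigned, so only the upper bound can fail
        inRange = false;
if (inRange) {
    int idx[5] = { (int)y, (int)x, zeta[0], zeta[1], zeta[2] };
    w_i_.at<Vec3w>(idx) = Vec3w(wi[0], wi[1], wi[2]);  // Vec3w (16-bit) matches CV_16UC3
}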

Using estimateRigidTransform instead of findHomography

The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform.
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector <Point2f>.
Mat H = estimateRigidTransform(objPoints, scePoints, false);
Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. estimateRigidTransform only returns a 2x3 matrix, so this method cannot be used.
How would I transform the values of the corners, represented as vector <Point2f>, with this 2x3 matrix? I am just looking to perform the same functions as the tutorial but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.
EDIT:
I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.
Mat row = (Mat_<double>(1,3) << 0, 0, 1);
H.push_back(row);
However, this gives the following error when using perspectiveTransform:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size 1 x 4 is the size of objCorners and sceCorners.
warpAffine(objCorners, sceCorners, H, Size(1,4));
This gives the error below, which suggests the wrong type. objCorners and sceCorners are vector <Point2f> representing the 4 corners. I thought warpAffine accepts Mat images, which may explain the error.
OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280
I've done it this way in the past:
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
if (R.cols == 0)
{
    continue;
}
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
cv::Mat warped;
cv::warpPerspective(img1,warped,H,img1.size());
which is the same as what David Nilosek suggested: add a [0 0 1] row at the end of the matrix.
This code warps the IMAGES with a rigid transformation.
If you want to warp/transform the points, you must use the perspectiveTransform function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )
tutorial here:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
or you can do it manually by looping over your vector and applying the 2x3 matrix to each point:
std::vector<cv::Point2f> result;
for (const cv::Point2f& point : p1)
{
    cv::Point2f r;
    r.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
    r.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
    result.push_back(r);
}
hope that helps.
remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion is needed there!
edit: this is the full (tested) code:
// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));
// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);
for (unsigned int i = 0; i < result.size(); ++i)
    std::cout << result[i] << std::endl;
which gives output as expected:
[1, 1]
[2, 1]
[1, 2]
The affine transformations (the result of cv::estimateRigidTransform) are applied to the image with the function cv::warpAffine.
The 3x3 homography form of a rigid transform is:
 a1  a2  b1
-a2  a1  b2
  0   0   1
So when using estimateRigidTransform you could add [0 0 1] as the third row if you want the 3x3 matrix.
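As a side note (a sketch, assuming R is the 2x3 matrix returned by cv::estimateRigidTransform): cv::transform applies a 2x3 matrix to a vector of points directly, so the 3x3 extension can be skipped entirely when you only need to transform points rather than images:
std::vector<cv::Point2f> transformed;
cv::transform(p1, transformed, R);  // applies the 2x3 affine matrix to every point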

splitting/merging matrices in OpenCV

I am trying to split a Matrix into segments, perform some manipulation of the segments and then merge the segments into a complete matrix again.
To split, I'm doing:
for (int j = 0; j < segments; ++j)
{
    Rect r;
    if (j == lastSegment)
    {
        r = Rect(j * segmentWidth, 0, lastWidth, origHei);
    }
    else
    {
        r = Rect(j * segmentWidth, 0, segmentWidth - 1, origHei);
    }
    tmpFrame(r).copyTo(segmentMats[j]);
}
Then to merge them I try:
Mat fullFrame;
for (int i = 0; i < maxSegments; ++i)
{
    int segNum = i % segments;
    Rect r;
    if (segNum == 0) // 1st segment of frame
    {
        fullFrame.create(origWid, origHei, segmentMats[i].type());
        r = Rect(0, 0, segmentWidth - 1, origHei);
    }
    else if (segNum == lastSegment)
    {
        r = Rect((i * segmentWidth), 0, lastWidth, origHei);
    }
    else
    {
        r = Rect((i * segmentWidth), 0, segmentWidth - 1, origHei);
    }
    segmentMats[i].copyTo(fullFrame(r));
}
But I keep getting a failed assertion:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat
I don't see how this code could produce a ROI outside the bounds the assertion checks. Can someone see my error?
Thanks.
Edit:
Thanks for the replies. To clarify my variables, I've listed how they are computed below.
origWid and origHei are the width and height of the entire frame.
segments = the number of vertical segments a frame is divided into, so segments = 2 means the frame is divided in half vertically.
lastSegment = segments - 1; since segments are zero-indexed, the last segment has this index.
segmentWidth = origWid / segments; this floors in case origWid is not evenly divisible by segments.
lastWidth = origWid - (lastSegment * segmentWidth); this handles the case where origWid is not evenly divisible by segments and captures the number of columns left over in the last segment.
segmentMats is an array of segment Mats.
segNum = the index of the segment within its frame, so if segments == 2, segNum == 0 is the left half of the frame and segNum == 1 is the right half.
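An observation on the code as posted (no answer is quoted in this thread, so treat this as a sketch): Mat::create takes (rows, cols, type), so fullFrame.create(origWid, origHei, ...) swaps the two dimensions; the merge loop offsets each rectangle by i * segmentWidth, where i runs over every segment of every frame, instead of segNum * segmentWidth; and using segmentWidth - 1 as the Rect width drops one column per segment, since a Rect width is a count of columns, not an end index. A corrected merge body under those assumptions:
int segNum = i % segments;
Rect r;
if (segNum == 0) // 1st segment of a frame
{
    fullFrame.create(origHei, origWid, segmentMats[i].type());  // create(rows, cols, type)
    r = Rect(0, 0, segmentWidth, origHei);
}
else if (segNum == lastSegment)
{
    r = Rect(segNum * segmentWidth, 0, lastWidth, origHei);  // offset by segNum, not i
}
else
{
    r = Rect(segNum * segmentWidth, 0, segmentWidth, origHei);
}
segmentMats[i].copyTo(fullFrame(r));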

OpenCV extract area of an image from a vector of squares

I have an image that contains a square, and I need to extract the area contained in that square.
After applying the squares.c script (available in the samples of every OpenCV distribution), I obtain a vector of squares; then I need to save an image for each of them.
The user karlphillip suggested this:
for (size_t x = 0; x < squares.size(); x++)
{
    Rect roi(squares[x][0].x, squares[x][0].y,
             squares[x][1].x - squares[x][0].x,
             squares[x][3].y - squares[x][0].y);
    Mat subimage(image, roi);
}
in order to generate a new Mat called subimage for each of the squares detected in the original image.
As karl reminded me, the points detected in the image may not represent a perfect square (as you can see in the image above), but the code suggested above assumes they do.
In fact, I get this error:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /usr/include/opencv/cxmat.hpp, line 187
terminate called after throwing an instance of 'cv::Exception'
what(): /usr/include/opencv/cxmat.hpp:187: error: (-215) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function Mat
Aborted
Any suggestions to make the script also accept non-perfect squares?
I feel like I need to clarify a few things about that code.
First, it assumes that the region detected is a perfect square because it ignores some of the points inside squares[x] to create a new Mat.
Second, it also assumes that the points that make up the region were detected in clockwise order, starting with p0 in the top-left corner of the image:
(p0) 1st----2nd (p1)
| |
| |
(p3) 4th----3rd (p2)
which might not be true for all the regions detected. That means that this code:
Rect roi(squares[x][0].x, squares[x][0].y,
         squares[x][1].x - squares[x][0].x,
         squares[x][3].y - squares[x][0].y);
probably will generate an ROI with invalid dimensions, such as negative width and height values, and that's why OpenCV throws a cv::Exception at you on Mat subimage(image, roi);.
What you should do is write code that identifies the top-left point of the region and calls it p0, then its nearest neighbor on the right side, p1; then find the bottom-right point of the region and call it p2, and what's left is p3. After this, assembling the ROI is easy:
Rect roi(p0.x, p0.y,
         p1.x - p0.x,
         p3.y - p0.y);
EDIT:
I found an excellent solution while reading the OpenCV v2.3 documentation. It automates the process I described earlier and makes things much easier and cleaner. You can use this trick to order the 4 points in the vector into a meaningful Rect structure:
// Data returned and filled by findSquares(). Check the example squares.cpp for more info on this function.
vector<vector<Point> > squares;
for (size_t i = 0; i < squares.size(); i++)
{
    Rect rectangle = boundingRect(Mat(squares[i]));
    cout << "#" << i << " rectangle x:" << rectangle.x << " y:" << rectangle.y
         << " " << rectangle.width << "x" << rectangle.height << endl;
}