I need to calculate the area of a blob/object in a grayscale picture (loading it as a Mat, not as an IplImage) using OpenCV.
I thought it would be a good idea to get the coordinates of the edges (the number of edges changes from object to object), or to get all coordinates of the contour, and then use contourArea() to calculate the area of my object.
I removed all the noise and got some nice, satisfying contours by using findContours() (programming in C++).
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point());
Now, as I understand it, the parameter contours already holds the coordinates of all contours of my object. Did I get that right?
If yes, is there a way to access them?
And if no, how do I get the coordinates of the contour anyway?
contours is actually defined as
vector<vector<Point> > contours;
And now I think it's clear how to access its points.
The contour area is calculated by a function nicely called contourArea():
for (unsigned int i = 0; i < contours.size(); i++)
{
    std::cout << "# of contour points: " << contours[i].size() << std::endl;
    for (unsigned int j = 0; j < contours[i].size(); j++)
    {
        std::cout << "Point(x,y)=" << contours[i][j] << std::endl;
    }
    std::cout << " Area: " << contourArea(contours[i]) << std::endl;
}
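For completeness, here is a minimal end-to-end sketch (assuming OpenCV 3.x and a hypothetical input file blob.png that is already close to a clean binary mask) of how the pieces fit together: threshold, findContours(), then contourArea() per contour:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    // load the grayscale image (file name is only an example)
    cv::Mat img = cv::imread("blob.png", cv::IMREAD_GRAYSCALE);
    if (img.empty())
        return 1;

    // binarize so findContours() gets a clean foreground/background mask
    cv::Mat bin;
    cv::threshold(img, bin, 127, 255, cv::THRESH_BINARY);

    // extract only the outer contours of the blobs
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // print the enclosed area of each contour
    for (size_t i = 0; i < contours.size(); ++i)
        std::cout << "Contour " << i << " area: " << cv::contourArea(contours[i]) << std::endl;

    return 0;
}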
I am trying to project 3D (x, y, z) axes via the OpenCV projectPoints function onto a chessboard after calibration, but every time I run my code, the axes all point to the same specific projected image point on the screen.
example output image
The cameraMatrix and distCoeffs are read in from data that was written to a file during calibration. They are:
cameraMatrix = [1372.852997982289, 0, 554.2708806543288;
                0, 1372.852997982289, 906.4327368600385;
                0, 0, 1]
distCoeffs = [0.02839203221556521;
              0.442572399014994;
              -0.01755006951285373;
              -0.0008989327508155589;
              -1.836490953232962]
The rotation and translation values are computed in real time via solvePnP every time a bool is toggled on via a keypress. An example of their output values is:
R =
[-0.9065211432378315;
0.3787201875924527;
-0.2788943269946833]
T =
[-0.4433059282649063;
-0.6745750872705997;
1.13753594660495]
While solvePnP is being computed, I press another key to draw the 3D axes from the origin, as shown in the code below, and the rotation and translation values are passed into the projectPoints function. However, the image points written to axesProjectedPoints for each axis are always very similar and in the range of:
[100.932, 127.418]
[55.154, 157.192]
[70.3054, 162.585]
Note that axesProjectedPoints is initialized outside the loop as vector<Point2f> axesProjectedPoints.
The reprojection error is fairly good: under 1 pixel.
The projectPoints code:
if (found) {
    // read calibration data -- function that reads calibration data saved to a file
    readCameraConfig(cameraMatrix, distCoeffs);

    // draw corners
    drawChessboardCorners(convertedImage, patternsize, corners, found);

    // draw 3D axes using projectPoints
    // used 0.04 because the chessboard square is 0.01778 m
    std::vector<Point3f> axis;
    axis.push_back(cv::Point3f(0.04, 0, 0));
    axis.push_back(cv::Point3f(0, 0.04, 0));
    axis.push_back(cv::Point3f(0, 0, 0.04));

    // rotation_values and translation_values are outputs from the OpenCV solvePnP function,
    // which is computed separately in real time every time I press a key
    projectPoints(axis, rotation_values, translation_values, cameraMatrix, distCoeffs, axesProjectedPoints);

    cout << "image points" << endl;
    for (auto &n : axesProjectedPoints) {
        cout << n << endl;
    }

    cv::line(convertedImage, corners[0], axesProjectedPoints[0], {255,0,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[1], {0,255,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[2], {0,0,255}, 5);
}
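For reference, here is a minimal, self-contained sketch (with hypothetical intrinsics and pose values, not the data above) of how projectPoints maps 3D points expressed in the chessboard frame into pixel coordinates for a given rvec/tvec pair; the board origin is projected alongside the three axis endpoints so the axis lines can be drawn from its projection:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>

int main()
{
    // hypothetical intrinsics and pose, only to illustrate the call
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 640,
                                           0, 1000, 360,
                                           0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0, 0, 0);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << 0, 0, 1.0);

    // board origin plus the three axis endpoints, in the board frame (metres)
    std::vector<cv::Point3f> axis = {
        {0, 0, 0}, {0.04f, 0, 0}, {0, 0.04f, 0}, {0, 0, 0.04f}
    };

    std::vector<cv::Point2f> imagePts;
    cv::projectPoints(axis, rvec, tvec, K, dist, imagePts);

    // imagePts[0] is where the board origin lands; the axis lines would run
    // from imagePts[0] to imagePts[1], imagePts[2] and imagePts[3]
    for (const auto &p : imagePts)
        std::cout << p << std::endl;

    return 0;
}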
The solvePnP part of the code:
/* calculate the board's pose (rotation and translation) */
bool flag = false;   // toggled on by a keypress in the full program
if (flag) {
    printf("Calculating board's pose (rotation and translation) ...\n");

    // read calibration data
    readCameraConfig(cameraMatrix, distCoeffs);

    // create undistorted corners or image points
    undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);

    //cout << "POINTS" << endl;
    std::vector<Point3d> objp;
    for (auto &i : points) {
        objp.push_back(i);
        //cout << i << endl;
    }

    //cout << "CORNERS" << endl;
    std::vector<Point2d> imagep;
    for (auto &j : imagePoints) {
        imagep.push_back(j);
        //cout << j << endl;
    }

    cout << "point size" << endl;
    cout << objp.size() << endl;

    // calculate pose
    solvePnP(objp, imagep, cameraMatrix, distCoeffs, rotation_values, translation_values, true, SOLVEPNP_ITERATIVE);

    // print rotation and translation values
    cout << "R = " << endl << " " << rotation_values << endl << endl;
    cout << "T = " << endl << " " << translation_values << endl << endl;
}
I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
So, this is the code I have so far:
printf("Calculating FAST+BRIEF features...\n");
Ptr<FastFeatureDetector> FASTdetector = FastFeatureDetector::create();
Ptr<BriefDescriptorExtractor> BRIEFdescriptor = BriefDescriptorExtractor::create();
std::vector<cv::KeyPoint> FASTkeypoints_1, FASTkeypoints_2, FASTkeypoints_3;
Mat BRIEFdescriptors_1, BRIEFdescriptors_2, BRIEFdescriptors_3;
FASTdetector->detect(left08, FASTkeypoints_1);
FASTdetector->detect(right08, FASTkeypoints_2);
FASTdetector->detect(left10, FASTkeypoints_3);
BRIEFdescriptor->compute(left08, FASTkeypoints_1, BRIEFdescriptors_1);
BRIEFdescriptor->compute(right08, FASTkeypoints_2, BRIEFdescriptors_2);
BRIEFdescriptor->compute(left10, FASTkeypoints_3, BRIEFdescriptors_3);
Mat FAST_left08, FAST_right08, FAST_left10;
drawKeypoints(left08, FASTkeypoints_1, FAST_left08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left08.png", FAST_left08);
drawKeypoints(right08, FASTkeypoints_2, FAST_right08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_right08.png", FAST_right08);
drawKeypoints(left10, FASTkeypoints_3, FAST_left10, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left10.png", FAST_left10);
printf("FAST+BRIEF done. \n");
The code so far works perfectly fine; however, I don't get rich keypoints, only standard ones. If I understand correctly, this is because I need to somehow get the descriptor information into the keypoints first, right?
I have done the same implementation with SIFT, SURF and ORB before, but there I used the detectAndCompute function directly, which gives me keypoints that I can draw with the DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag.
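For comparison, this is roughly the detectAndCompute path I used for ORB (a sketch, assuming the already-loaded grayscale Mat left08 from above; the colour constant here is chosen for illustration rather than taken from my code):

Ptr<ORB> ORBdetector = ORB::create();
std::vector<cv::KeyPoint> ORBkeypoints_1;
Mat ORBdescriptors_1;

// detection and description in one call; ORB fills in keypoint size and angle,
// so DRAW_RICH_KEYPOINTS has per-keypoint scale and orientation to draw
ORBdetector->detectAndCompute(left08, noArray(), ORBkeypoints_1, ORBdescriptors_1);

Mat ORB_left08;
drawKeypoints(left08, ORBkeypoints_1, ORB_left08, Scalar(0, 255, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/ORB_left08.png", ORB_left08);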
I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
Yes, that is correct.
If I understand correctly, this is because I need to somehow get the descriptor information into the keypoints first, right?
No, keypoints are detected using different methods. You can use SIFT, FAST, HarrisDetector, SURF etc. just to detect keypoints at first. Then there are different methods to describe the detected keypoints (e.g. a 128-dimensional float vector descriptor for SIFT) and to match them afterwards.
A keypoint in OpenCV is described by several attributes such as angle, size, octave and so on: https://docs.opencv.org/3.4.2/d2/d29/classcv_1_1KeyPoint.html
For SIFT, every KeyPoint attribute is filled with a value that can later be drawn with the DRAW_RICH_KEYPOINTS flag. For FAST, only default values are assigned to these attributes, so the keypoints can still be drawn with that flag, but the size, octave and angle do not vary. Thus, every drawn KeyPoint looks similar.
Here is a small code sample as proof (I only use the ->detect functions):
#include <cstdlib>
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

int main(int argc, char** argv)
{
    // Load image
    cv::Mat img = cv::imread("MT189.jpg", cv::IMREAD_GRAYSCALE);
    if (!img.data) {
        std::cout << "Error reading image" << std::endl;
        return EXIT_FAILURE;
    }

    cv::Mat output;

    // Detect FAST keypoints
    std::vector<cv::KeyPoint> keypoints_fast, keypoints_sift;
    cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
    fast->detect(img, keypoints_fast);

    // Print the attributes of (at most) the first 100 FAST keypoints
    for (size_t i = 0; i < keypoints_fast.size() && i < 100; ++i) {
        std::cout << "FAST Keypoint #:" << i;
        std::cout << " Size " << keypoints_fast[i].size << " Angle " << keypoints_fast[i].angle
                  << " Response " << keypoints_fast[i].response << " Octave " << keypoints_fast[i].octave << std::endl;
    }

    // Detect SIFT keypoints
    cv::Ptr<cv::xfeatures2d::SiftFeatureDetector> sift = cv::xfeatures2d::SiftFeatureDetector::create();
    sift->detect(img, keypoints_sift);

    // Print the attributes of (at most) the first 100 SIFT keypoints
    for (size_t i = 0; i < keypoints_sift.size() && i < 100; ++i) {
        std::cout << "SIFT Keypoint #:" << i;
        std::cout << " Size " << keypoints_sift[i].size << " Angle " << keypoints_sift[i].angle
                  << " Response " << keypoints_sift[i].response << " Octave " << keypoints_sift[i].octave << std::endl;
    }

    // Draw SIFT keypoints
    cv::drawKeypoints(img, keypoints_sift, output, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imshow("Output", output);
    cv::waitKey(0);

    return EXIT_SUCCESS;
}
I am working with depth data in 16UC1 format. I want to find the minimum value (greater than 0) and its location in the image. I am using the minMaxLoc function but I am getting an error, which may be because of the short values. It would be great if you could suggest a way to do this.
int main()
{
    Mat abc = imread("depth272.tiff");
    cout << abc.size() << endl;
    imshow("depth_image", abc);

    Mat xyz = abc > 0;
    cout << "abc type: " << abc.type() << " xyz type: " << xyz.type() << endl;

    double rmin, rmax;
    Point rMinPoint, rMaxPoint;
    minMaxLoc(abc, &rmin, &rmax, &rMinPoint, &rMaxPoint, xyz);

    // Point::x is the column and Point::y is the row
    int row = rMinPoint.y;
    int col = rMinPoint.x;

    waitKey(0);
    return 0;
}
The image is loaded as a 3-channel 8UC3 image.
The function minMaxLoc() only works on single channel images.
As #Miki suggests, you should use imread(..., IMREAD_UNCHANGED) to load as CV_16UC1.
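Putting that together, a minimal sketch (assuming the TIFF really stores 16-bit depth, so it loads as CV_16UC1) of the masked minMaxLoc call:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    // load the depth image as stored on disk instead of converting it to 8UC3
    cv::Mat depth = cv::imread("depth272.tiff", cv::IMREAD_UNCHANGED);
    if (depth.empty() || depth.type() != CV_16UC1) {
        std::cout << "Expected a single-channel 16-bit image" << std::endl;
        return 1;
    }

    // mask out zero-depth pixels so the minimum is the smallest value > 0
    cv::Mat mask = depth > 0;

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(depth, &minVal, &maxVal, &minLoc, &maxLoc, mask);

    std::cout << "min " << minVal << " at (row " << minLoc.y << ", col " << minLoc.x << ")" << std::endl;
    return 0;
}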
I've used OpenCV to calibrate my camera from different views, and I obtained intrinsics, rvec and tvec with a reprojection error of 0.03 px (so I think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection in the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;

    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for: " << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}
void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
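For context, this is roughly how the two functions are meant to be used (a sketch, assuming hypothetical Camera instances cam for the canonical view and otherCam for another view, plus a hypothetical clicked pixel):

// back-project a clicked pixel from the canonical view to a 3D direction,
// then project that 3D point into another view
std::vector<cv::Point2f> clicked = { cv::Point2f(412.f, 305.f) };   // hypothetical click
std::vector<cv::Point3f> ray;
cam.project(clicked, ray);               // 2D -> 3D (unit-norm direction)

std::vector<cv::Point2f> reprojected;
otherCam.project(ray, reprojected);      // 3D -> 2D in the other view
std::cout << reprojected[0] << std::endl;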
Now I have mixed feelings about the output. When I draw the projected point on each view, they all correspond, BUT no matter where I click in the "canonical view", the projected point is always the same.
I am getting a segfault when cloning a cv::Mat. Two functions are called; both work on m_mask, a member variable (not a pointer) of my class:
Set the mask:
void SetMask(QImage mask)
{
    if (!mask.isNull() && mask.depth() == 1)
    {
        std::cout << "Mask width: " << mask.width() << " and mask height: " << mask.height() << std::endl << std::flush;

        if (mask.width() != m_mask.cols || mask.height() != m_mask.rows)
            m_mask.create(mask.height(), mask.width(), CV_8UC1);

        if (m_mask.data == 0)
            std::cout << "MALLOC FAILED" << std::endl << std::flush;

        //Copy data here

        cv::imshow("OpenCV Image", m_mask);
    }
    else
        m_mask = cv::Scalar(0);
}
Then use the mask:
QString MaskToXML()
{
    QString xml_out;

    if (!m_mask.empty())
    {
        cv::Mat workspace = m_mask.clone(); //Clone our mask - SEGFAULT HERE

        //Run the contour code
        std::vector< std::vector<cv::Point> > contours;
        cv::findContours(workspace, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

        //do stuff
    }

    return xml_out;
}
I had heap corruption... general rule of thumb for me from now on: if cv::Mat is segfaulting, I corrupted the heap somewhere.
Edit: By "somewhere", I meant you can safely assume that cv::Mat is correct and that the functions it uses are correct. You can safely assume that YOU are corrupting memory somewhere on your own, probably at one of your pointers or data structures.
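As an illustration of the kind of copy that can corrupt the heap in code like this (a sketch, not the original //Copy data here code): a 1-bit QImage packs eight pixels per byte and pads each row, so copying its raw bits straight into a CV_8UC1 Mat reads or writes the wrong number of bytes. One safe way, assuming Qt 5.5+ for QImage::Format_Grayscale8:

cv::Mat QImageMaskToMat(const QImage &mask)
{
    // expand 1 bit/pixel to 8 bits/pixel so the pixel layout matches CV_8UC1
    QImage gray = mask.convertToFormat(QImage::Format_Grayscale8);

    // wrap the QImage buffer; bytesPerLine() may include row padding, so pass it as the step
    cv::Mat wrapped(gray.height(), gray.width(), CV_8UC1,
                    const_cast<uchar*>(gray.constBits()),
                    static_cast<size_t>(gray.bytesPerLine()));

    // deep copy so the Mat owns its data after the temporary QImage goes away
    return wrapped.clone();
}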