cv::Mat clone Segfault - c++

I am getting a segfault when cloning a cv::Mat. Two functions are called, and both operate on m_mask, a member variable (not a pointer) of my class:
Set the mask:
void SetMask(QImage mask)
{
if(!mask.isNull() && mask.depth() == 1)
{
std::cout << "Mask width: " << mask.width() << " and mask height: " << mask.height() << std::endl << std::flush;
if(mask.width() != m_mask.cols || mask.height() != m_mask.rows)
m_mask.create(mask.height(), mask.width(), CV_8UC1);
if(m_mask.data == 0)
std::cout << "MALLOC FAILED" << std::endl << std::flush;
//Copy data here
cv::imshow("OpenCV Image", m_mask);
}
else
m_mask = cv::Scalar(0);
}
Then use the mask:
QString MaskToXML()
{
QString xml_out;
if(!m_mask.empty())
{
cv::Mat workspace = m_mask.clone(); //Clone our mask - SEGFAULT HERE
//Run the contour code
std::vector< std::vector<cv::Point> > contours;
cv::findContours(workspace, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
//do stuff
}
return xml_out;
}

It turned out I had heap corruption... general rule of thumb for me from now on: if cv::Mat is segfaulting, I corrupted the heap somewhere.
Edit: By "somewhere", I meant you can safely assume that cv::Mat is correct and that the functions it uses are correct. You can safely assume that YOU are corrupting memory somewhere on your own, probably at one of your pointers or data structures.

Related

What does cv::Mat's UMatData being 0x0(nullptr) mean?

In my code I'm encountering a situation where it runs for a few hours but crashes at random points after that (SIGSEGV, segmentation fault). Whenever the crash happens, the cv::Mat involved (often a different one) has its u member as a nullptr. (Link to what u is: https://docs.opencv.org/3.4/d3/d63/classcv_1_1Mat.html#a2742469fe595e1b9036f60d752d08461)
So I'm wondering what u being a nullptr actually means and whether this is the cause of the crash. This is confusing because u is 0x0 or nullptr at other points during execution as well, not just when it crashes.
Example code showing how I'm using the Mats and causing u to become 0x0:
#include <opencv2/opencv.hpp>
#include <cstring> // memcpy
#include <iostream>
int main()
{
cv::Mat main_data = cv::Mat::zeros(10, 10, CV_8UC3);
cv::Mat buffer = cv::Mat::zeros(100, 100, CV_8UC3);
cv::Mat valid_data = buffer;
std::cout << "before: " << valid_data.u << std::endl;
memcpy(buffer.data, main_data.data, main_data.rows*main_data.step);
valid_data = cv::Mat(main_data.rows, main_data.cols, main_data.type(), buffer.data, main_data.step[0]);
std::cout << "after: " << valid_data.u << std::endl;
return 0;
}
I have multiple threads and I'm using the above along with locks for sharing data between them.
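For what it's worth (a hedged reading, based on the constructor used above): u is the UMatData bookkeeping record a Mat uses to reference-count a buffer it owns. A Mat constructed over an external data pointer, like the last line of the example, neither allocates nor reference-counts, so its u stays nullptr by design. The nullptr itself is harmless; the danger is that such a Mat dies with the buffer it wraps, and with multiple threads that is a classic use-after-free. A minimal sketch of the distinction:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat owner = cv::Mat::zeros(10, 10, CV_8UC3); // allocates: owner.u != nullptr
    cv::Mat view(10, 10, CV_8UC3, owner.data);       // wraps: view.u == nullptr, no refcount
    std::cout << owner.u << " " << view.u << std::endl;
    // If owner is released or reallocated on another thread while view is
    // still in use, view.data dangles; clone() the wrapper if it must
    // outlive the source buffer.
    return 0;
}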

Segmentation fault (core dumped) with OpenCV

I am trying to code a program that eliminates some of the connected components and keeps the rest.
However, at some point in the code, the program exits with the error message "Segmentation fault (core dumped)".
I have narrowed down the error to the statement: "destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column);" using the checkpoints you'll find in the code below.
I have tried all the solutions I found, especially this one, with no luck.
Please help!
One more thing: the program reads the image correctly but does not show the original image as the code instructs. Instead, it prints the message "init done
opengl support available". Is this normal? Does the imshow call only take effect at the end of the program if there are no errors?
/* Goal is to find all related components, eliminate secondary objects*/
#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
//Declaring variables
Mat originalImage;
int conComponentsCount;
int primaryComponents;
//Declaring constants
const char* keys =
{
"{#image|../data/sample.jpg|image for converting to a grayscale}"
};
//Functions prototypes, used to be able to define functions AFTER the "main" function
Mat BinarizeImage (Mat &, int thresh);
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize);
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea);
//Program mainstream============================================
int main (int argc, const char **argv)
{
//Waiting for user to enter the required path, default path is defined in "keys" string
CommandLineParser parser(argc, argv, keys);
string inputImage = parser.get<string>(0);
//Reading original image
//NOTE: the program MUST terminate or loop back if the image was not loaded; functions below use reference to matrices and references CANNOT be null or empty.
originalImage = imread(inputImage.c_str(), IMREAD_GRAYSCALE);// or: imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE)
cout << " 1) Loading image done!" << endl;//CHECKPOINT
if (originalImage.empty())
{
cout << "Nothing was loaded!";
return -1; //terminating program with error feedback
}
cout << " 2) Checking for null Image done!" << endl;//CHECKPOINT
namedWindow("Original Image", 0);
imshow("Original Image", originalImage);
cout << " 3) Showing ORIGINAL image done!" << endl;//CHECKPOINT
//Image Binarization; connectedcomponents function only accepts binary images.
int threshold=100; //Value chosen empirically.
Mat binImg = BinarizeImage(originalImage, threshold);
cout << " 4) Binarizing image done!" << endl;//CHECKPOINT
//Finding the number of connected components and generating the labeled image.
Mat labeledImage; //Image with connected components labeled.
Mat stats, centroids; //Statistics of connected image's components.
conComponentsCount = connectedComponentsWithStats(binImg, labeledImage, stats, centroids, 4, CV_16U);
cout << " 5) Connecting pixels done!" << endl;//CHECKPOINT
//Creating a new matrix to include the final image (without secondary objects)
Mat destinationImage(labeledImage.size(), CV_16U);
//Calculating the average of the labeled image components areas
int ccSizeIncluded = 1000;
int avgComponentArea = AverageCCArea(stats, conComponentsCount, ccSizeIncluded);
cout << " 6) Calculating components avg area done!" << endl;//CHECKPOINT
//Criteria for component sizes
for (int row = 0; row <= labeledImage.rows; row++)
{
cout << " 6a) Starting rows loop iteration # " << row+1 << " done!" << endl;//CHECKPOINT
for (int column = 0; column <= labeledImage.cols; column++)
{
//Criteria for component sizes
int labelValue = labeledImage.at<int>(row, column);
if (ComponentIsIncludedCheck (stats.at<int>(labelValue, CC_STAT_AREA), avgComponentArea))
{
//Setting pixel value to the "destinationImage"
destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column);
cout << " 6b) Setting pixel (" << row << "," << column << ") done!" << endl;//CHECKPOINT
}
else
cout << " 6c) Pixel (" << row << "," << column << ") Skipped!" << endl;//CHECKPOINT
}
cout << " 6d) Row " << row << " done!" << endl;//CHECKPOINT
}
cout << " 7) Showing FINAL image done!" << endl;//CHECKPOINT
namedWindow("Final Image", 0);
imshow("Final Image", destinationImage);
cout << " 8) Program done!" << endl;//CHECKPOINT
waitKey (0);
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
Mat BinarizeImage (Mat & originalImg, int threshold=100) //default value of threshold of grey content.
{
// Binarization of image to be used in connectedcomponents function.
Mat bw = threshold < 128 ? (originalImg < threshold) : (originalImg > threshold);
return bw;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize) //calculates the average area of connected components without components smaller than minCCSize pixels..... reference is used to improve performance, passing-by-reference does not require copying the matrix to this function.
{
int average;
for (int i=1; i<=numOfLabels; i++)
{
int sum = 0;
int validComponentsCount = numOfLabels - 1;
if (CCLabelsStats.at<int>(i, CC_STAT_AREA) >= minCCSize)
{
sum += CCLabelsStats.at<int>(i, CC_STAT_AREA);
}
else
{
validComponentsCount--;
}
average = sum / (validComponentsCount);
}
return average;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea)
{
if (ccArea >= referenceCCArea)
{
return true; //Component should be included in the destination image
}
else
{
return false; //Component should NOT be included in the destination image
}
}
change this:
for (int row = 0; row <= labeledImage.rows; row++)
to this:
for (int row = 0; row < labeledImage.rows; row++)
and this:
for (int column = 0; column <= labeledImage.cols; column++)
to this:
for (int column = 0; column < labeledImage.cols; column++)
any good?
(remember that in C++ we start counting from 0, so if e.g. labeledImage.cols == 10, the last column is the one with the index 9)
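One hedged aside beyond the loop bounds: connectedComponentsWithStats was asked for CV_16U labels, so reading them back with at<int> indexes the wrong element size anyway. A minimal sketch of the corrected loop under that assumption:

// Indices run 0 .. rows-1 and 0 .. cols-1, and the element type passed to
// at<>() must match the Mat's type: CV_16U pairs with ushort (request CV_32S
// labels instead if you want to keep at<int>).
for (int row = 0; row < labeledImage.rows; row++)
{
    for (int column = 0; column < labeledImage.cols; column++)
    {
        ushort labelValue = labeledImage.at<ushort>(row, column);
        destinationImage.at<ushort>(row, column) = labelValue;
    }
}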

What is the best way to find the closest match to a complex shape, using opencv and c++?

Alright, here is my source code. This code will take an image in one folder and compare it against a list of images in another folder. In the folder of images you must include a .txt file containing the names of all of the images you are trying to compare. The problem I'm having is that these two images are very similar but are not exactly the same. I need a method to refine these matches further, perhaps even an entirely new way to compare these two shapes (in larger chunks, blobs, etc.). One way I was considering is actually making an entire keypoint map, and only comparing keypoints if they are at or near a certain point that corresponds in both images. I.e.: compare keypoints at point (12,200), +-10 pixels from (x, y), and see if there are similar keypoints in the other image (a sketch of this filter appears after the code below).
All I need is a way to get the best matches possible from: ActualImplant and XrayOfThatSameImplantButASlightlyDifferentSize. Please and thank you!
PS: You will see commented-out sections where I was experimenting with Sobel derivatives and other such things. I ended up just adjusting contrast and brightness on the X-ray for the best outline. The same has to be done to the image of the implant before it is used to try to match anything.
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\features2d\features2d.hpp"
#include "opencv2\imgproc.hpp"
#include <iostream>
#include <fstream>
#include <ctime>
using namespace cv;
using namespace std;
const string defaultDetector = "ORB";
const string defaultDescriptor = "ORB";
const string defaultMatcher = "BruteForce-Hamming";
const string defaultXrayImagePath = "../../xray.png";
const string defaultImplantImagesTextListPath = "../../implantImage.txt";
const string defaultPathToResultsFolder = "../../results";
static void printIntro(const string& appName)
{
cout << "/* *\n"
<< " * Created by: Alex Gatz. 1/11/12. Created for: Xray Implant Identification *\n"
<< " * This code was created to scan a file full of images of differnt implants, generate keypoint maps *\n"
<< " * for each image, and identifywhich image most closely matches a chosen image in another folder *\n"
<< " */ *\n"
<< endl;
cout << endl << "Format:\n" << endl;
cout << "./" << appName << " [detector] [descriptor] [matcher] [xrayImagePath] [implantImagesTextListPath] [pathToSaveResults]" << endl;
cout << endl;
cout << "\nExample:" << endl
<< "./" << appName << " " << defaultDetector << " " << defaultDescriptor << " " << defaultMatcher << " "
<< defaultXrayImagePath << " " << defaultImplantImagesTextListPath << " " << defaultPathToResultsFolder << endl;
}
static void maskMatchesByImplantImgIdx(const vector<DMatch>& matches, int trainImgIdx, vector<char>& mask)
{
mask.resize(matches.size());
fill(mask.begin(), mask.end(), 0);
for (size_t i = 0; i < matches.size(); i++)
{
if (matches[i].imgIdx == trainImgIdx)
mask[i] = 1;
}
}
static void readImplantFilenames(const string& filename, string& dirName, vector<string>& implantFilenames)
{
implantFilenames.clear();
ifstream file(filename.c_str());
if (!file.is_open())
return;
size_t pos = filename.rfind('\\');
char dlmtr = '\\';
if (pos == String::npos)
{
pos = filename.rfind('/');
dlmtr = '/';
}
dirName = pos == string::npos ? "" : filename.substr(0, pos) + dlmtr;
while (!file.eof())
{
string str; getline(file, str);
if (str.empty()) break;
implantFilenames.push_back(str);
}
file.close();
}
static bool createDetectorDescriptorMatcher(const string& detectorType, const string& descriptorType, const string& matcherType,
Ptr<FeatureDetector>& featureDetector,
Ptr<DescriptorExtractor>& descriptorExtractor,
Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Creating feature detector, descriptor extractor and descriptor matcher ..." << endl;
featureDetector = ORB::create( //All of these are parameters that can be adjusted to effect match accuracy and process time.
10000, //int nfeatures = Maximum number of features to retain; max value unknown, higher number takes longer to process. Default: 500
1.4f, //float scaleFactor= Pyramid decimation ratio; between 1.00 - 2.00. Default: 1.2f
6, //int nlevels = Number of pyramid levels used; more levels more time taken to process, but more accurate results. Default: 8
40, //int edgeThreshold = Size of the border where the features are not detected. Should match patchSize roughly. Default: 31
0, //int firstLevel = Should remain 0 for now. Default: 0
4, //int WTA_K = Should remain 2. Default: 2
ORB::HARRIS_SCORE, //int scoreType = ORB::HARRIS_SCORE is the most accurate ranking possible for ORB. Default: HARRIS_SCORE
33 //int patchSize = size of patch used by the oriented BRIEF descriptor. Should match edgeThreshold roughly. Default: 31
);
//featureDetector = ORB::create(); // <-- Uncomment this and comment the featureDetector above for default detector-
//OpenCV 3.1 got rid of the dynamic naming of detectors and extractors.
//These two are one in the same when using ORB, some detectors and extractors are separate
// in which case you would set "descriptorExtractor = descriptorType::create();" or its equivalent.
descriptorExtractor = featureDetector;
descriptorMatcher = DescriptorMatcher::create(matcherType);
cout << ">" << endl;
bool isCreated = !(featureDetector.empty() || descriptorExtractor.empty() || descriptorMatcher.empty());
if (!isCreated)
cout << "Can not create feature detector or descriptor extractor or descriptor matcher of given types." << endl << ">" << endl;
return isCreated;
}
static void manipulateImage(Mat& image) //Manipulates images into only showing an outline!
{
//Sobel derivative edge finder
//int scale = 1;
//int delta = 0;
//int ddepth = CV_16S;
////equalizeHist(image, image); //This will equalize the lighting levels in each image.
//GaussianBlur(image, image, Size(3, 3), 0, 0, BORDER_DEFAULT);
//Mat grad_x, grad_y;
//Mat abs_grad_x, abs_grad_y;
////For x
//Sobel(image, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_x, abs_grad_x);
////For y
//Sobel(image, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_y, abs_grad_y);
//addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, image);
//Specific Level adjustment (very clean)
double alpha = 20; //Best Result: 20
int beta = -300; //Best Result: -300
image.convertTo(image, -1, alpha, beta);
}
static bool readImages(const string& xrayImageName, const string& implantFilename,
Mat& xrayImage, vector <Mat>& implantImages, vector<string>& implantImageNames)
{
//TODO: Add a function call to automatically adjust all images loaded to best settings for matching.
cout << "< Reading the images..." << endl;
xrayImage = imread(xrayImageName, CV_LOAD_IMAGE_GRAYSCALE); //Turns the image gray while loading.
manipulateImage(xrayImage); //Runs image manipulations
if (xrayImage.empty())
{
cout << "Xray image can not be read." << endl << ">" << endl;
return false;
}
string trainDirName;
readImplantFilenames(implantFilename, trainDirName, implantImageNames);
if (implantImageNames.empty())
{
cout << "Implant image filenames can not be read." << endl << ">" << endl;
return false;
}
int readImageCount = 0;
for (size_t i = 0; i < implantImageNames.size(); i++)
{
string filename = trainDirName + implantImageNames[i];
Mat img = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); //Turns images gray while loading.
//manipulateImage(img); //Runs the Sobel derivative on the implant image.
if (img.empty())
{
cout << "Implant image " << filename << " can not be read." << endl;
}
else
{
readImageCount++;
}
implantImages.push_back(img);
}
if (!readImageCount)
{
cout << "All implant images can not be read." << endl << ">" << endl;
return false;
}
else
cout << readImageCount << " implant images were read." << endl;
cout << ">" << endl;
return true;
}
static void detectKeypoints(const Mat& xrayImage, vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantKeypoints,
Ptr<FeatureDetector>& featureDetector)
{
cout << endl << "< Extracting keypoints from images..." << endl;
featureDetector->detect(xrayImage, xrayKeypoints);
featureDetector->detect(implantImages, implantKeypoints);
cout << ">" << endl;
}
static void computeDescriptors(const Mat& xrayImage, vector<KeyPoint>& implantKeypoints, Mat& implantDescriptors,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantImageKeypoints, vector<Mat>& implantImageDescriptors,
Ptr<DescriptorExtractor>& descriptorExtractor)
{
cout << "< Computing descriptors for keypoints..." << endl;
descriptorExtractor->compute(xrayImage, implantKeypoints, implantDescriptors);
descriptorExtractor->compute(implantImages, implantImageKeypoints, implantImageDescriptors);
int totalTrainDesc = 0;
for (vector<Mat>::const_iterator tdIter = implantImageDescriptors.begin(); tdIter != implantImageDescriptors.end(); tdIter++)
totalTrainDesc += tdIter->rows;
cout << "Query descriptors count: " << implantDescriptors.rows << "; Total train descriptors count: " << totalTrainDesc << endl;
cout << ">" << endl;
}
static void matchDescriptors(const Mat& xrayDescriptors, const vector<Mat>& implantDescriptors,
vector<DMatch>& matches, Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Set implant image descriptors collection in the matcher and match xray descriptors to them..." << endl;
//time_t timerBegin, timerEnd;
//time(&timerBegin);
descriptorMatcher->add(implantDescriptors);
descriptorMatcher->train();
//time(&timerEnd);
//double buildTime = difftime(timerEnd, timerBegin);
//time(&timerBegin);
descriptorMatcher->match(xrayDescriptors, matches);
//time(&timerEnd);
//double matchTime = difftime(timerEnd, timerBegin);
CV_Assert(xrayDescriptors.rows == (int)matches.size() || matches.empty());
cout << "Number of imageMatches: " << matches.size() << endl;
//cout << "Build time: " << buildTime << " ms; Match time: " << matchTime << " ms" << endl;
cout << ">" << endl;
}
static void saveResultImages(const Mat& xrayImage, const vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImage, const vector<vector<KeyPoint> >& implantImageKeypoints,
const vector<DMatch>& matches, const vector<string>& implantImagesName, const string& resultDir)
{
cout << "< Save results..." << endl;
Mat drawImg;
vector<char> mask;
for (size_t i = 0; i < implantImage.size(); i++)
{
if (!implantImage[i].empty())
{
maskMatchesByImplantImgIdx(matches, (int)i, mask);
drawMatches(xrayImage, xrayKeypoints, implantImage[i], implantImageKeypoints[i],
matches, drawImg, Scalar::all(-1), Scalar(0, 0, 255), mask, 4);
string filename = resultDir + "/result_" + implantImagesName[i];
if (!imwrite(filename, drawImg))
cout << "Image " << filename << " can not be saved (may be because directory " << resultDir << " does not exist)." << endl;
}
}
cout << ">" << endl;
//After all results have been saved, another function will scan and place the final result in a separate folder.
//For now this save process is required to manually access each result and determine if the current settings are working well.
}
int main(int argc, char** argv)
{
//Initialize variables to global defaults.
string detector = defaultDetector;
string descriptor = defaultDescriptor;
string matcher = defaultMatcher;
string xrayImagePath = defaultXrayImagePath;
string implantImagesTextListPath = defaultImplantImagesTextListPath;
string pathToSaveResults = defaultPathToResultsFolder;
//As long as you have 7 arguments, you can proceed.
if (argc != 7 && argc != 1)
{
//This will be called if the incorrect number of arguments is used to start the program.
printIntro(argv[0]);
system("PAUSE");
return -1;
}
//As long as you still have 7 arguments, I will set the variables for this
// to the arguments you decided on.
//If testing using XrayID --> Properties --> Debugging --> Command Arguments, remember to start with [detector] as the first command
// C++ includes the [appName] command as the first argument automatically.
if (argc != 1) //I suggest placing a break here and stepping through this to ensure the proper commands were sent in. With a
// GUI this would not matter because the GUI would structure the input and use a default if no input was used.
{
detector = argv[1];
descriptor = argv[2];
matcher = argv[3];
xrayImagePath = argv[4];
implantImagesTextListPath = argv[5];
pathToSaveResults = argv[6];
}
//Set up cv::Ptr's for tools.
Ptr<FeatureDetector> featureDetector;
Ptr<DescriptorExtractor> descriptorExtractor;
Ptr<DescriptorMatcher> descriptorMatcher;
//Check to see if tools are created, if not true print intro and close program.
if (!createDetectorDescriptorMatcher(detector, descriptor, matcher, featureDetector, descriptorExtractor, descriptorMatcher))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
Mat testImage;
vector<Mat> implantImages;
vector<string> implantImagesNames;
//Check to see if readImages completes properly, if not true print intro and close program.
if (!readImages(xrayImagePath, implantImagesTextListPath, testImage, implantImages, implantImagesNames))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
vector<KeyPoint> xrayKeypoints;
vector<vector<KeyPoint> > implantKeypoints;
detectKeypoints(testImage, xrayKeypoints, implantImages, implantKeypoints, featureDetector);
Mat xrayDescriptors;
vector<Mat> implantTestImageDescriptors;
computeDescriptors(testImage, xrayKeypoints, xrayDescriptors, implantImages, implantKeypoints, implantTestImageDescriptors,
descriptorExtractor);
vector<DMatch> imageMatches;
matchDescriptors(xrayDescriptors, implantTestImageDescriptors, imageMatches, descriptorMatcher);
saveResultImages(testImage, xrayKeypoints, implantImages, implantKeypoints, imageMatches, implantImagesNames, pathToSaveResults);
system("PAUSE");
return 0;
}
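(Editorial sketch, not part of the original post: the position-constrained idea described in the question, i.e. only accept a match whose two keypoints land within roughly +-10 pixels of the same spot, can be written as a small filter. All names below are illustrative, and it assumes the two images are already roughly aligned and at the same scale.)

// Keep only matches whose keypoints are spatially close in both images.
static vector<DMatch> filterMatchesByPosition(const vector<DMatch>& matches,
    const vector<KeyPoint>& queryKeypoints,
    const vector<KeyPoint>& trainKeypoints,
    float radius = 10.0f) // +-10 px, as suggested in the question
{
    vector<DMatch> kept;
    for (size_t i = 0; i < matches.size(); i++)
    {
        Point2f d = queryKeypoints[matches[i].queryIdx].pt
                  - trainKeypoints[matches[i].trainIdx].pt;
        if (d.dot(d) <= radius * radius) // squared distance, avoids sqrt
            kept.push_back(matches[i]);
    }
    return kept;
}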
Try the code below. Hope this will help you.
#include <opencv2/opencv.hpp> //this snippet uses the OpenCV 2.4-style factory API
#include <iostream>
#include <ctime>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int argc, const char *argv[])
{
double ratio = 0.9;
clock_t begin = clock(); //start timing, for the elapsed-seconds report at the end
Mat image1 = imread("Image1_path");
Mat image2 = imread("Image2_path");
Ptr<FeatureDetector> detector;
Ptr<DescriptorExtractor> extractor;
// TODO default is 500 keypoints..but we can change
detector = FeatureDetector::create("ORB");
extractor = DescriptorExtractor::create("ORB");
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(image1, keypoints1);
detector->detect(image2, keypoints2);
cout << "# keypoints of image1 :" << keypoints1.size() << endl;
cout << "# keypoints of image2 :" << keypoints2.size() << endl;
Mat descriptors1,descriptors2;
extractor->compute(image1,keypoints1,descriptors1);
extractor->compute(image2,keypoints2,descriptors2);
cout << "Descriptors size :" << descriptors1.cols << ":"<< descriptors1.rows << endl;
vector< vector<DMatch> > matches12, matches21;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
matcher->knnMatch( descriptors1, descriptors2, matches12, 2);
matcher->knnMatch( descriptors2, descriptors1, matches21, 2);
//BFMatcher bfmatcher(NORM_L2, true);
//vector<DMatch> matches;
//bfmatcher.match(descriptors1, descriptors2, matches);
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors1.rows; i++)
{
double dist = matches12[i][0].distance;
if(dist < min_dist)
min_dist = dist;
if(dist > max_dist)
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
cout << "Matches1-2:" << matches12.size() << endl;
cout << "Matches2-1:" << matches21.size() << endl;
std::vector<DMatch> good_matches1, good_matches2;
for(int i=0; i < matches12.size(); i++)
{
if(matches12[i][0].distance < ratio * matches12[i][1].distance)
good_matches1.push_back(matches12[i][0]);
}
for(int i=0; i < matches21.size(); i++)
{
if(matches21[i][0].distance < ratio * matches21[i][1].distance)
good_matches2.push_back(matches21[i][0]);
}
cout << "Good matches1:" << good_matches1.size() << endl;
cout << "Good matches2:" << good_matches2.size() << endl;
// Symmetric Test
std::vector<DMatch> better_matches;
for(int i=0; i<good_matches1.size(); i++)
{
for(int j=0; j<good_matches2.size(); j++)
{
if(good_matches1[i].queryIdx == good_matches2[j].trainIdx && good_matches2[j].queryIdx == good_matches1[i].trainIdx)
{
better_matches.push_back(DMatch(good_matches1[i].queryIdx, good_matches1[i].trainIdx, good_matches1[i].distance));
break;
}
}
}
cout << "Better matches:" << better_matches.size() << endl;
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << "Elapsed time: " << elapsed_secs << " s" << endl;
// show it on an image
Mat output;
drawMatches(image1, keypoints1, image2, keypoints2, better_matches, output);
imshow("Matches result",output);
waitKey(0);
return 0;
}
That image looks rather like an artificial hip. If you're dealing with medical images, you should definitely check out The Insight Toolkit (ITK) which has many special features designed for the particular needs of this domain. You could do a simple Model-Image Registration between your real-world image and your template data to find the best result. I think you would get much better results with this approach than with the point-based testing described above.
This sort of registration performs an iterative optimisation of a set of parameters (in this case, an affine transform) which seeks to find the best mapping of the model to the image data.
ITK Affine Registration example
The example above takes a fixed image and attempts to find a transform that maps the moving image onto it. The transform is a 2D affine transform (rotation and translation in this case) and its parameters are the result of running the optimiser which maximises the matching metric. The metric measures how well the fixed image and the transformed moving image match. The interpolator is what takes the moving image and applies the transform to map it onto the fixed image.
In your sample images, fixed image could be the original X-ray and the moving image the actual implant. You will probably need to add scaling to make a full affine transform since the size of the two differs.
The metric is a measure of how well the transformed moving image matches the fixed image, so you would need to determine a tolerance or minimum metric for a match to be valid. If the images are very different, the metric would be very low and can be rejected.
The output is a set of transformation parameters and the output image is the final optimal transform applied to the moving image (not a combination of the images). The result is basically telling you where the implant is found in the X-ray.
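As an aside, if you would rather stay inside OpenCV, cv::findTransformECC (OpenCV 3.x, video module) performs a comparable iterative intensity-based registration, and the ECC coefficient it returns can serve as the acceptance metric described above. A minimal hedged sketch with placeholder file names:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat fixed  = cv::imread("xray.png",    cv::IMREAD_GRAYSCALE); // placeholder paths
    cv::Mat moving = cv::imread("implant.png", cv::IMREAD_GRAYSCALE);

    cv::Mat warp = cv::Mat::eye(2, 3, CV_32F); // 2x3 affine warp, identity start
    cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-6);

    // Returns the final correlation coefficient; values near 1.0 indicate a
    // good match, so a threshold on it plays the role of the metric tolerance.
    double ecc = cv::findTransformECC(fixed, moving, warp, cv::MOTION_AFFINE, criteria);
    std::cout << "ECC: " << ecc << std::endl;

    cv::Mat registered; // moving image mapped into the fixed image's frame
    cv::warpAffine(moving, registered, warp, fixed.size(),
                   cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
    return 0;
}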

How to determine if a cv::Mat is a zero matrix?

I have a matrix that is dynamically being changed according to the following code;
for( It=all_frames.begin(); It != all_frames.end(); ++It)
{
ItTemp = *It;
subtract(ItTemp, Base, NewData);
cout << "The size of the new data for ";
cout << " is \n" << NewData.rows << "x" << NewData.cols << endl;
cout << "The New Data is: \n" << NewData << endl << endl;
NewData_Vector.push_back(NewData.clone());
}
What I want to do is determine the frames at which the cv::Mat NewData is a zero matrix.
I've tried comparing it to a zero matrix of the same size, using both the cv::compare() function and simple operators (i.e. NewData == NoData), but I can't even compile the program.
Is there a simple way of determining when a cv::Mat is populated by zeroes?
I used
if (countNonZero(NewData) < 1)
{
cout << "Eye contact occurs in this frame" << endl;
}
This is a pretty simple (if perhaps not the most elegant) way of doing it.
To check whether the Mat is empty, use empty(): if NewData is a cv::Mat, NewData.empty() returns true if there are no elements in NewData.
To check if it's all zero, simply compare: NewData == Mat::zeros(NewData.size(), NewData.type()).
Update:
After checking the OpenCV source code, you can actually write NewData == 0 to compare every element against 0.
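Either comparison yields an element-wise 0/255 mask rather than a single bool, which is likely why using it directly as an if-condition does not compile. A hedged one-liner to collapse the test (reshape(1) folds the channels together, since countNonZero expects a single-channel array):

// true if every element (every channel) of NewData is zero; assumes NewData
// is continuous, which a freshly allocated result of subtract() is
bool allZero = countNonZero(NewData.reshape(1)) == 0;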
countNonZero(mat) will give you the number of non-zero elements in a single-channel mat.
How about this..
Mat img = Mat::zeros(Size(1024, 1024), CV_8UC3);
bool flag = true;
// the iterator's element type must match the Mat's type: CV_8UC3 pairs with Vec3b
MatConstIterator_<Vec3b> it = img.begin<Vec3b>();
MatConstIterator_<Vec3b> it_end = img.end<Vec3b>();
for(; it != it_end; ++it)
{
if(*it != Vec3b(0, 0, 0))
{
flag = false;
break;
}
}
The Mat object has an empty() method, so you can just ask the Mat whether it holds any data; the result will be either true or false. Note that empty() tests whether the matrix has no elements at all, not whether its elements are all zero.
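A short illustration of that distinction:

cv::Mat m;                                  // default-constructed: no data
bool a = m.empty();                         // true
cv::Mat z = cv::Mat::zeros(3, 3, CV_8UC1);  // allocated, all elements zero
bool b = z.empty();                         // false: empty() is not an all-zero test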

Calculate the area of an object with OpenCV

I need to calculate the area of a blob/an object in a grayscale picture (loading it as Mat, not as IplImage) using OpenCV.
I thought it would be a good idea to get the coordinates of the edges (the number of edges changes from object to object) or to get all coordinates of the contour and then use contourArea() to calculate the area of my object.
I deleted all noise and got some nice and satisfying contours by using findContours() (programming in C++).
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy,int mode, int method, Point offset=Point());
Now I understand that the param contours already holds the coordinates of all contours of my object. Did I get that right?
If yes, is there a way to access them?
And if no, how do I get the coordinates of the contour anyway?
contours is actually defined as
vector<vector<Point> > contours;
And now I think it's clear how to access its points.
The contour area is calculated by a function nicely called contourArea():
for (unsigned int i = 0; i < contours.size(); i++)
{
std::cout << "# of contour points: " << contours[i].size() << std::endl;
for (unsigned int j=0; j<contours[i].size(); j++)
{
std::cout << "Point(x,y)=" << contours[i][j] << std::endl;
}
std::cout << " Area: " << contourArea(contours[i]) << std::endl;
}
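For completeness, a minimal end-to-end sketch of the whole pipeline (the input file name is a placeholder, and the image is assumed to binarize cleanly at a fixed threshold):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("blob.png", cv::IMREAD_GRAYSCALE); // placeholder path
    cv::Mat bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY); // findContours wants binary input

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
        std::cout << "Contour " << i << " area: "
                  << cv::contourArea(contours[i]) << std::endl;
    return 0;
}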