Segmentation fault (core dumped) with OpenCV - C++

I am trying to write a program that eliminates some of the connected components in an image and keeps the rest.
However, at some point the program exits with the error message "Segmentation fault (core dumped)".
Using the checkpoints you'll find in the code below, I have narrowed the error down to the statement: "destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column);".
I have tried all the solutions I found, especially this one, with no luck.
Please help!
One more thing: the program reads the image correctly but does not show the original image as the code instructs. Instead, it prints the message "init done
opengl support available". Is this normal? Does imshow only take effect at the end of the program, once it finishes without errors?
/* Goal is to find all related components, eliminate secondary objects*/
#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
//Declaring variables
Mat originalImage;
int conComponentsCount;
int primaryComponents;
//Declaring constants
const char* keys =
{
"{@image|../data/sample.jpg|image for converting to a grayscale}"
};
//Function prototypes, used to be able to define functions AFTER the "main" function
Mat BinarizeImage (Mat &, int thresh);
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize);
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea);
//Program mainstream============================================
int main (int argc, const char **argv)
{
//Waiting for user to enter the required path, default path is defined in "keys" string
CommandLineParser parser(argc, argv, keys);
string inputImage = parser.get<string>(0);
//Reading original image
//NOTE: the program MUST terminate or loop back if the image was not loaded; functions below use reference to matrices and references CANNOT be null or empty.
originalImage = imread(inputImage.c_str(), IMREAD_GRAYSCALE);// or: imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE)
cout << " 1) Loading image done!" << endl;//CHECKPOINT
if (originalImage.empty())
{
cout << "Nothing was loaded!";
return -1; //terminating program with error feedback
}
cout << " 2) Checking for null Image done!" << endl;//CHECKPOINT
namedWindow("Original Image", 0);
imshow("Original Image", originalImage);
cout << " 3) Showing ORIGINAL image done!" << endl;//CHECKPOINT
//Image Binarization; connectedcomponents function only accepts binary images.
int threshold=100; //Value chosen empirically.
Mat binImg = BinarizeImage(originalImage, threshold);
cout << " 4) Binarizing image done!" << endl;//CHECKPOINT
//Finding the number of connected components and generating the labeled image.
Mat labeledImage; //Image with connected components labeled.
Mat stats, centroids; //Statistics of connected image's components.
conComponentsCount = connectedComponentsWithStats(binImg, labeledImage, stats, centroids, 4, CV_16U);
cout << " 5) Connecting pixels done!" << endl;//CHECKPOINT
//Creating a new matrix to include the final image (without secondary objects)
Mat destinationImage(labeledImage.size(), CV_16U);
//Calculating the average of the labeled image components areas
int ccSizeIncluded = 1000;
int avgComponentArea = AverageCCArea(stats, conComponentsCount, ccSizeIncluded);
cout << " 6) Calculating components avg area done!" << endl;//CHECKPOINT
//Criteria for component sizes
for (int row = 0; row <= labeledImage.rows; row++)
{
cout << " 6a) Starting rows loop iteration # " << row+1 << " done!" << endl;//CHECKPOINT
for (int column = 0; column <= labeledImage.cols; column++)
{
//Criteria for component sizes
int labelValue = labeledImage.at<int>(row, column);
if (ComponentIsIncludedCheck (stats.at<int>(labelValue, CC_STAT_AREA), avgComponentArea))
{
//Setting pixel value to the "destinationImage"
destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column);
cout << " 6b) Setting pixel (" << row << "," << column << ") done!" << endl;//CHECKPOINT
}
else
cout << " 6c) Pixel (" << row << "," << column << ") Skipped!" << endl;//CHECKPOINT
}
cout << " 6d) Row " << row << " done!" << endl;//CHECKPOINT
}
cout << " 7) Showing FINAL image done!" << endl;//CHECKPOINT
namedWindow("Final Image", 0);
imshow("Final Image", destinationImage);
cout << " 8) Program done!" << endl;//CHECKPOINT
waitKey (0);
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
Mat BinarizeImage (Mat & originalImg, int threshold=100) //default value of threshold of grey content.
{
// Binarization of image to be used in connectedcomponents function.
Mat bw = threshold < 128 ? (originalImg < threshold) : (originalImg > threshold);
return bw;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize) //calculates the average area of connected components, excluding components smaller than minCCSize pixels. A reference is used to improve performance; passing by reference avoids copying the matrix into this function.
{
int average;
for (int i=1; i<=numOfLabels; i++)
{
int sum = 0;
int validComponentsCount = numOfLabels - 1;
if (CCLabelsStats.at<int>(i, CC_STAT_AREA) >= minCCSize)
{
sum += CCLabelsStats.at<int>(i, CC_STAT_AREA);
}
else
{
validComponentsCount--;
}
average = sum / (validComponentsCount);
}
return average;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea)
{
if (ccArea >= referenceCCArea)
{
return true; //Component should be included in the destination image
}
else
{
return false; //Component should NOT be included in the destination image
}
}

change this:
for (int row = 0; row <= labeledImage.rows; row++)
to this:
for (int row = 0; row < labeledImage.rows; row++)
and this:
for (int column = 0; column <= labeledImage.cols; column++)
to this:
for (int column = 0; column < labeledImage.cols; column++)
any good?
(remember that in C++ we start counting from 0, so if e.g. labeledImage.cols == 10, the last column is the one with the index 9)
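For reference, the loop with the corrected bounds looks like the sketch below. One extra caveat beyond the bounds fix (flagged here as an observation, not part of the answer above): the labels were requested as CV_16U from connectedComponentsWithStats, so at<ushort> is the accessor matching that element type; at<int> on a CV_16U Mat reads the wrong element size and can also run past the buffer.
for (int row = 0; row < labeledImage.rows; row++) // '<' instead of '<='
{
for (int column = 0; column < labeledImage.cols; column++) // '<' instead of '<='
{
//CV_16U labels: access as ushort, not int
ushort labelValue = labeledImage.at<ushort>(row, column);
if (ComponentIsIncludedCheck(stats.at<int>(labelValue, CC_STAT_AREA), avgComponentArea)) //stats is CV_32S, so at<int> is correct there
destinationImage.at<ushort>(row, column) = labelValue;
}
}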

Related

Segmentation Fault in creating a cv::Mat from unsigned char array

I cannot understand why the following minimal code produces a segmentation fault and the cv::Mat values are not printed correctly:
#include <opencv2/opencv.hpp>
int main()
{
unsigned char out[1280*720*3/2] = {100};
cv::Mat dummy_query = cv::Mat(1, 1280*720*3/2*sizeof(unsigned char), CV_8UC1, (void *)out);
cv::Size s = dummy_query.size();
std::cout << s << "\r\n";
for(int i = 0; i < 1280*720*3/2; i++)
{
std::cout << i << "ss" << int(out[i]) << ":";
std::cout << dummy_query.at<int>(0,i) << " ";
}
}
You defined the cv::Mat with uchar data (CV_8UC1) but you access it at this line
as int: std::cout << dummy_query.at<int>(0,i) << " ";
so your program will likely crash near the end of the loop.
e.g.
// create a 100x100 8-bit matrix
Mat M(100,100,CV_8U);
// this compiles fine; no data conversion is done.
Mat_<float>& M1 = (Mat_<float>&)M;
// the program is likely to crash at the statement below
M1(99,99) = 1.f;
Check this OpenCV reference.
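For the question's CV_8UC1 matrix, a type-correct access reads each element as uchar and casts only for printing, e.g.:
std::cout << int(dummy_query.at<uchar>(0, i)) << " ";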

How to read and write an image using C++ in Visual Studio with ITK configured

I am a beginner to ITK and c++. I have the following code where I can get the height and width of an image. Instead of giving the input image in the console, I want to do it in the code itself. How do I directly give the input image to this code?
#include "itkImage.h"
#include "itkImageFileReader.h"
int main()
{
mat m("filename");
imshow("windowname", m);
}
// verify command line arguments
if( argc < 2 )
{
std::cout << "usage: " << std::endl;
std::cerr << argv[0] << " inputImageFile" << std::endl;
return EXIT_FAILURE;
}
typedef itk::Image<float, 2> ImageType;
typedef itk::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName( argv[1] );
reader->Update();
std::cout << reader->GetOutput()->GetLargestPossibleRegion().GetSize()[0] << " "
<< reader->GetOutput()->GetLargestPossibleRegion().GetSize()[1] << std::endl;
// an example image had W = 200 and H = 100 (it is wider than it is tall). The above outputs
// 200 100
// so W = GetSize()[0]
// and H = GetSize()[1]
// a pixel inside the region
itk::Index<2> indexInside;
indexInside[0] = 150;
indexInside[1] = 50;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexInside) << std::endl;
// a pixel outside the region
itk::Index<2> indexOutside;
indexOutside[0] = 50;
indexOutside[1] = 150;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexOutside) << std::endl;
// this means that the [0] component of the index references the left-to-right (column) index
// and the [1] component of the index references the top-to-bottom (row) index
return EXIT_SUCCESS;
}
Change the line reader->SetFileName( argv[1] ); to reader->SetFileName( "C:/path/to/file.png" );
I assume that
mat m("filename");
imshow("windowname", m);
sneaked in from some unrelated code? Otherwise the example would not compile.
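For completeness, a minimal sketch of the reader with the filename hardcoded (the path is the same placeholder as above; point it at a real image on disk):
#include "itkImage.h"
#include "itkImageFileReader.h"
#include <cstdlib>
#include <iostream>
int main()
{
typedef itk::Image<float, 2> ImageType;
typedef itk::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName( "C:/path/to/file.png" ); // hardcoded input instead of argv[1]
reader->Update();
ImageType::SizeType size = reader->GetOutput()->GetLargestPossibleRegion().GetSize();
std::cout << "width = " << size[0] << ", height = " << size[1] << std::endl;
return EXIT_SUCCESS;
}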

C++ OpenCV: Mat::zeros gives the wrong shape

I defined and initialized a Mat variable using Mat::zeros, but when I print its shape, i.e. rows, cols, and channels, I get wrong values.
My code is shown as follows:
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char const *argv[])
{
int n_Channel = 3;
int mySizes[3] = {100, 200, n_Channel};
Mat M = Mat::zeros(n_Channel, mySizes, CV_64F);
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
return 0;
}
The printed message is :
-1,-1,1
What's wrong with this?
I also find that if I declare a Mat using the following code:
int n_Channel = 3;
Mat M(Size(100, 200), CV_32FC(n_Channel));
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
the outcome is correct:
200,100,3
I'm confused about this. Thank you all for helping me!
You want to use a very special overloaded version of the cv::Mat::zeros method.
Let's have a look at the following code:
// Number of channels.
const int n_Channel = 3;
// Number of dimensions; must be 1 or 2?
const int n_Dimensions = 2;
// Create empty Mat using zeros, and output dimensions.
int mySizes[n_Dimensions] = { 200, 100 };
cv::Mat M1 = cv::Mat::zeros(n_Dimensions, mySizes, CV_64FC(n_Channel));
std::cout << "M1: " << M1.rows << "," << M1.cols << "," << M1.channels() << std::endl;
// Create empty Mat using constructor, and output dimensions.
cv::Mat M2 = cv::Mat(cv::Size(100, 200), CV_64FC(n_Channel), cv::Scalar(0, 0, 0));
std::cout << "M2: " << M2.rows << "," << M2.cols << "," << M2.channels() << std::endl;
which gives the following output:
M1: 200,100,3
M2: 200,100,3
So, basically you have to move the "channel number info" from mySizes to the cv::Mat::zeros method. Also, you have to pay attention to the order of the image dimensions provided in mySizes, since it seems to differ from the constructor using cv::Size. I guess the latter one is width x height, whereas the first one is number of rows x number of cols.
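As a side note, for a Mat with more than two dimensions OpenCV sets rows and cols to -1 by design, which is exactly the -1,-1 printed above; the shape then lives in dims and size. A small sketch to inspect such a Mat:
int mySizes[3] = {100, 200, 3};
cv::Mat M = cv::Mat::zeros(3, mySizes, CV_64F); // a genuinely 3-dimensional Mat
std::cout << M.dims << std::endl; // prints 3
std::cout << M.rows << "," << M.cols << std::endl; // prints -1,-1 for dims > 2
for (int i = 0; i < M.dims; i++)
std::cout << M.size[i] << " "; // prints 100 200 3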
How to init a cv::Mat:
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64F);
As you can see, the first parameter is the Size; cf.:
https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html

Find the minimum value and its location in a depth image in OpenCV C++

I am working with depth data in 16UC1 format. I want to find the minimum value (greater than 0) and its location in the image. I am using the minMaxLoc function but I am getting an error, probably because of the short (16-bit) values. It would be great if you could suggest a way.
int main()
{
Mat abc = imread("depth272.tiff");
cout << abc.size() << endl;
imshow("depth_image",abc);
Mat xyz = abc > 0;
cout << "abc type: " << abc.type() << "xyz type " << xyz.type() << endl;
double rmin, rmax;
Point rMinPoint, pMaxPoint;
minMaxLoc(abc, &rmin, &rmax, &rMinPoint, &pMaxPoint, xyz);
int row = rMinPoint.x;
int col = rMinPoint.y;
waitKey(0);
return 0;
}
The image is loaded as a 3-channel 8UC3 image.
The function minMaxLoc() only works on single channel images.
As #Miki suggests, you should use imread(..., IMREAD_UNCHANGED) to load as CV_16UC1.
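Putting the suggestions together, a minimal sketch (reusing the question's filename; note also that Point::x is the column and Point::y the row, the opposite of what the question's code assumes):
Mat abc = imread("depth272.tiff", IMREAD_UNCHANGED); // keeps the 16-bit single channel intact
CV_Assert(abc.type() == CV_16UC1);
Mat xyz = abc > 0; // mask that excludes invalid (zero) depth values
double rmin, rmax;
Point rMinPoint, rMaxPoint;
minMaxLoc(abc, &rmin, &rmax, &rMinPoint, &rMaxPoint, xyz);
int row = rMinPoint.y; // Point::y is the row
int col = rMinPoint.x; // Point::x is the column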

What is the best way to find the closest match to a complex shape, using opencv and c++?

Alright, here is my source code. This code takes an image from a file and compares it against a list of images in another folder. In the folder of images you must include a .txt file containing the names of all the images you want to compare against. The problem I'm having is that the two images are very similar but not exactly the same. I need a method to refine these matches further, or perhaps an entirely new way to compare the two shapes (in larger chunks, blobs, etc.). One way I was considering is building an entire keypoint map and only comparing keypoints if they are at or near a point that corresponds to both images, i.e.: compare keypoints at point (12,200), +-10 pixels from (x, y), and see if there are similar keypoints in the other image.
All I need is a way to get the best matches possible from: ActualImplant and XrayOfThatSameImplantButASlightlyDifferentSize. Please and thank you!
PS: You will see commented-out sections where I was experimenting with Sobel derivatives and other such things. I ended up just adjusting contrast and brightness on the X-ray to get the best outline. The same has to be done to the image of the implant before it is used to try to match anything.
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\features2d\features2d.hpp"
#include "opencv2\imgproc.hpp"
#include <iostream>
#include <fstream>
#include <ctime>
using namespace cv;
using namespace std;
const string defaultDetector = "ORB";
const string defaultDescriptor = "ORB";
const string defaultMatcher = "BruteForce-Hamming";
const string defaultXrayImagePath = "../../xray.png";
const string defaultImplantImagesTextListPath = "../../implantImage.txt";
const string defaultPathToResultsFolder = "../../results";
static void printIntro(const string& appName)
{
cout << "/* *\n"
<< " * Created by: Alex Gatz. 1/11/12. Created for: Xray Implant Identification *\n"
<< " * This code was created to scan a file full of images of different implants, generate keypoint maps *\n"
<< " * for each image, and identify which image most closely matches a chosen image in another folder *\n"
<< " */ *\n"
<< endl;
cout << endl << "Format:\n" << endl;
cout << "./" << appName << " [detector] [descriptor] [matcher] [xrayImagePath] [implantImagesTextListPath] [pathToSaveResults]" << endl;
cout << endl;
cout << "\nExample:" << endl
<< "./" << appName << " " << defaultDetector << " " << defaultDescriptor << " " << defaultMatcher << " "
<< defaultXrayImagePath << " " << defaultImplantImagesTextListPath << " " << defaultPathToResultsFolder << endl;
}
static void maskMatchesByImplantImgIdx(const vector<DMatch>& matches, int trainImgIdx, vector<char>& mask)
{
mask.resize(matches.size());
fill(mask.begin(), mask.end(), 0);
for (size_t i = 0; i < matches.size(); i++)
{
if (matches[i].imgIdx == trainImgIdx)
mask[i] = 1;
}
}
static void readImplantFilenames(const string& filename, string& dirName, vector<string>& implantFilenames)
{
implantFilenames.clear();
ifstream file(filename.c_str());
if (!file.is_open())
return;
size_t pos = filename.rfind('\\');
char dlmtr = '\\';
if (pos == String::npos)
{
pos = filename.rfind('/');
dlmtr = '/';
}
dirName = pos == string::npos ? "" : filename.substr(0, pos) + dlmtr;
while (!file.eof())
{
string str; getline(file, str);
if (str.empty()) break;
implantFilenames.push_back(str);
}
file.close();
}
static bool createDetectorDescriptorMatcher(const string& detectorType, const string& descriptorType, const string& matcherType,
Ptr<FeatureDetector>& featureDetector,
Ptr<DescriptorExtractor>& descriptorExtractor,
Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Creating feature detector, descriptor extractor and descriptor matcher ..." << endl;
featureDetector = ORB::create( //All of these are parameters that can be adjusted to effect match accuracy and process time.
10000, //int nfeatures = Maximum number of features to retain; max value unknown, higher number takes longer to process. Default: 500
1.4f, //float scaleFactor= Pyramid decimation ratio; between 1.00 - 2.00. Default: 1.2f
6, //int nlevels = Number of pyramid levels used; more levels more time taken to process, but more accurate results. Default: 8
40, //int edgeThreshold = Size of the border where the features are not detected. Should match patchSize roughly. Default: 31
0, //int firstLevel = Should remain 0 for now. Default: 0
4, //int WTA_K = Should remain 2. Default: 2
ORB::HARRIS_SCORE, //int scoreType = ORB::HARRIS_SCORE is the most accurate ranking possible for ORB. Default: HARRIS_SCORE
33 //int patchSize = size of patch used by the oriented BRIEF descriptor. Should match edgeThreshold. Default: 31
);
//featureDetector = ORB::create(); // <-- Uncomment this and comment the featureDetector above for default detector-
//OpenCV 3.1 got rid of the dynamic naming of detectors and extractors.
//These two are one in the same when using ORB, some detectors and extractors are separate
// in which case you would set "descriptorExtractor = descriptorType::create();" or its equivalent.
descriptorExtractor = featureDetector;
descriptorMatcher = DescriptorMatcher::create(matcherType);
cout << ">" << endl;
bool isCreated = !(featureDetector.empty() || descriptorExtractor.empty() || descriptorMatcher.empty());
if (!isCreated)
cout << "Can not create feature detector or descriptor extractor or descriptor matcher of given types." << endl << ">" << endl;
return isCreated;
}
static void manipulateImage(Mat& image) //Manipulates images into only showing an outline!
{
//Sobel Derivative edge finder
//int scale = 1;
//int delta = 0;
//int ddepth = CV_16S;
////equalizeHist(image, image); //This will equalize the lighting levels in each image.
//GaussianBlur(image, image, Size(3, 3), 0, 0, BORDER_DEFAULT);
//Mat grad_x, grad_y;
//Mat abs_grad_x, abs_grad_y;
////For x
//Sobel(image, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_x, abs_grad_x);
////For y
//Sobel(image, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_y, abs_grad_y);
//addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, image);
//Specific Level adjustment (very clean)
double alpha = 20; //Best Result: 20
int beta = -300; //Best Result: -300
image.convertTo(image, -1, alpha, beta);
}
static bool readImages(const string& xrayImageName, const string& implantFilename,
Mat& xrayImage, vector <Mat>& implantImages, vector<string>& implantImageNames)
{
//TODO: Add a function call to automatically adjust all loaded images to the best settings for matching.
cout << "< Reading the images..." << endl;
xrayImage = imread(xrayImageName, CV_LOAD_IMAGE_GRAYSCALE); //Turns the image gray while loading.
manipulateImage(xrayImage); //Runs image manipulations
if (xrayImage.empty())
{
cout << "Xray image can not be read." << endl << ">" << endl;
return false;
}
string trainDirName;
readImplantFilenames(implantFilename, trainDirName, implantImageNames);
if (implantImageNames.empty())
{
cout << "Implant image filenames can not be read." << endl << ">" << endl;
return false;
}
int readImageCount = 0;
for (size_t i = 0; i < implantImageNames.size(); i++)
{
string filename = trainDirName + implantImageNames[i];
Mat img = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); //Turns images gray while loading.
//manipulateImage(img); //Runs Sobel Derivative on implant image.
if (img.empty())
{
cout << "Implant image " << filename << " can not be read." << endl;
}
else
{
readImageCount++;
}
implantImages.push_back(img);
}
if (!readImageCount)
{
cout << "None of the implant images could be read." << endl << ">" << endl;
return false;
}
else
cout << readImageCount << " implant images were read." << endl;
cout << ">" << endl;
return true;
}
static void detectKeypoints(const Mat& xrayImage, vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantKeypoints,
Ptr<FeatureDetector>& featureDetector)
{
cout << endl << "< Extracting keypoints from images..." << endl;
featureDetector->detect(xrayImage, xrayKeypoints);
featureDetector->detect(implantImages, implantKeypoints);
cout << ">" << endl;
}
static void computeDescriptors(const Mat& xrayImage, vector<KeyPoint>& implantKeypoints, Mat& implantDescriptors,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantImageKeypoints, vector<Mat>& implantImageDescriptors,
Ptr<DescriptorExtractor>& descriptorExtractor)
{
cout << "< Computing descriptors for keypoints..." << endl;
descriptorExtractor->compute(xrayImage, implantKeypoints, implantDescriptors);
descriptorExtractor->compute(implantImages, implantImageKeypoints, implantImageDescriptors);
int totalTrainDesc = 0;
for (vector<Mat>::const_iterator tdIter = implantImageDescriptors.begin(); tdIter != implantImageDescriptors.end(); tdIter++)
totalTrainDesc += tdIter->rows;
cout << "Query descriptors count: " << implantDescriptors.rows << "; Total train descriptors count: " << totalTrainDesc << endl;
cout << ">" << endl;
}
static void matchDescriptors(const Mat& xrayDescriptors, const vector<Mat>& implantDescriptors,
vector<DMatch>& matches, Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Set implant image descriptors collection in the matcher and match xray descriptors to them..." << endl;
//time_t timerBegin, timerEnd;
//time(&timerBegin);
descriptorMatcher->add(implantDescriptors);
descriptorMatcher->train();
//time(&timerEnd);
//double buildTime = difftime(timerEnd, timerBegin);
//time(&timerBegin);
descriptorMatcher->match(xrayDescriptors, matches);
//time(&timerEnd);
//double matchTime = difftime(timerEnd, timerBegin);
CV_Assert(xrayDescriptors.rows == (int)matches.size() || matches.empty());
cout << "Number of imageMatches: " << matches.size() << endl;
//cout << "Build time: " << buildTime << " ms; Match time: " << matchTime << " ms" << endl;
cout << ">" << endl;
}
static void saveResultImages(const Mat& xrayImage, const vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImage, const vector<vector<KeyPoint> >& implantImageKeypoints,
const vector<DMatch>& matches, const vector<string>& implantImagesName, const string& resultDir)
{
cout << "< Save results..." << endl;
Mat drawImg;
vector<char> mask;
for (size_t i = 0; i < implantImage.size(); i++)
{
if (!implantImage[i].empty())
{
maskMatchesByImplantImgIdx(matches, (int)i, mask);
drawMatches(xrayImage, xrayKeypoints, implantImage[i], implantImageKeypoints[i],
matches, drawImg, Scalar::all(-1), Scalar(0, 0, 255), mask, 4);
string filename = resultDir + "/result_" + implantImagesName[i];
if (!imwrite(filename, drawImg))
cout << "Image " << filename << " can not be saved (may be because directory " << resultDir << " does not exist)." << endl;
}
}
cout << ">" << endl;
//After all results have been saved, another function will scan and place the final result in a separate folder.
//For now this save process is required to manually access each result and determine if the current settings are working well.
}
int main(int argc, char** argv)
{
//Initialize variables to global defaults.
string detector = defaultDetector;
string descriptor = defaultDescriptor;
string matcher = defaultMatcher;
string xrayImagePath = defaultXrayImagePath;
string implantImagesTextListPath = defaultImplantImagesTextListPath;
string pathToSaveResults = defaultPathToResultsFolder;
//As long as you have 7 arguments, you can proceed
if (argc != 7 && argc != 1)
{
//This will be called if the incorrect amount of commands are used to start the program.
printIntro(argv[0]);
system("PAUSE");
return -1;
}
//As long as you still have 7 arguments, I will set the variables for this
// to the arguments you decided on.
//If testing using XrayID --> Properties --> Debugging --> Command Arguments, remember to start with [detector] as the first command
// C++ includes the [appName] command as the first argument automatically.
if (argc != 1) //I suggest placing a break here and stepping through this to ensure the proper commands were sent in. With a
// GUI this would not matter because the GUI would structure the input and use a default if no input was used.
{
detector = argv[1];
descriptor = argv[2];
matcher = argv[3];
xrayImagePath = argv[4];
implantImagesTextListPath = argv[5];
pathToSaveResults = argv[6];
}
//Set up cv::Ptr's for tools.
Ptr<FeatureDetector> featureDetector;
Ptr<DescriptorExtractor> descriptorExtractor;
Ptr<DescriptorMatcher> descriptorMatcher;
//Check to see if tools are created, if not true print intro and close program.
if (!createDetectorDescriptorMatcher(detector, descriptor, matcher, featureDetector, descriptorExtractor, descriptorMatcher))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
Mat testImage;
vector<Mat> implantImages;
vector<string> implantImagesNames;
//Check to see if readImages completes properly, if not true print intro and close program.
if (!readImages(xrayImagePath, implantImagesTextListPath, testImage, implantImages, implantImagesNames))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
vector<KeyPoint> xrayKeypoints;
vector<vector<KeyPoint> > implantKeypoints;
detectKeypoints(testImage, xrayKeypoints, implantImages, implantKeypoints, featureDetector);
Mat xrayDescriptors;
vector<Mat> implantTestImageDescriptors;
computeDescriptors(testImage, xrayKeypoints, xrayDescriptors, implantImages, implantKeypoints, implantTestImageDescriptors,
descriptorExtractor);
vector<DMatch> imageMatches;
matchDescriptors(xrayDescriptors, implantTestImageDescriptors, imageMatches, descriptorMatcher);
saveResultImages(testImage, xrayKeypoints, implantImages, implantKeypoints, imageMatches, implantImagesNames, pathToSaveResults);
system("PAUSE");
return 0;
}
Try the code below. Hope this helps.
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <dirent.h>
#include <ctime>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int argc, const char *argv[])
{
double ratio = 0.9;
clock_t begin = clock(); //start timing; paired with the elapsed-time calculation at the end
Mat image1 = imread("Image1_path");
Mat image2 = imread("Image2_path");
Ptr<FeatureDetector> detector;
Ptr<DescriptorExtractor> extractor;
// TODO: default is 500 keypoints, but we can change it
detector = FeatureDetector::create("ORB");
extractor = DescriptorExtractor::create("ORB");
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(image1, keypoints1);
detector->detect(image2, keypoints2);
cout << "# keypoints of image1 :" << keypoints1.size() << endl;
cout << "# keypoints of image2 :" << keypoints2.size() << endl;
Mat descriptors1,descriptors2;
extractor->compute(image1,keypoints1,descriptors1);
extractor->compute(image2,keypoints2,descriptors2);
cout << "Descriptors size :" << descriptors1.cols << ":"<< descriptors1.rows << endl;
vector< vector<DMatch> > matches12, matches21;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
matcher->knnMatch( descriptors1, descriptors2, matches12, 2);
matcher->knnMatch( descriptors2, descriptors1, matches21, 2);
//BFMatcher bfmatcher(NORM_L2, true);
//vector<DMatch> matches;
//bfmatcher.match(descriptors1, descriptors2, matches);
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors1.rows; i++)
{
double dist = matches12[i][0].distance;
if(dist < min_dist)
min_dist = dist;
if(dist > max_dist)
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
cout << "Matches1-2:" << matches12.size() << endl;
cout << "Matches2-1:" << matches21.size() << endl;
std::vector<DMatch> good_matches1, good_matches2;
for(int i=0; i < matches12.size(); i++)
{
if(matches12[i][0].distance < ratio * matches12[i][1].distance)
good_matches1.push_back(matches12[i][0]);
}
for(int i=0; i < matches21.size(); i++)
{
if(matches21[i][0].distance < ratio * matches21[i][1].distance)
good_matches2.push_back(matches21[i][0]);
}
cout << "Good matches1:" << good_matches1.size() << endl;
cout << "Good matches2:" << good_matches2.size() << endl;
// Symmetric Test
std::vector<DMatch> better_matches;
for(int i=0; i<good_matches1.size(); i++)
{
for(int j=0; j<good_matches2.size(); j++)
{
if(good_matches1[i].queryIdx == good_matches2[j].trainIdx && good_matches2[j].queryIdx == good_matches1[i].trainIdx)
{
better_matches.push_back(DMatch(good_matches1[i].queryIdx, good_matches1[i].trainIdx, good_matches1[i].distance));
break;
}
}
}
cout << "Better matches:" << better_matches.size() << endl;
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << "Elapsed time: " << elapsed_secs << " s" << endl;
// show it on an image
Mat output;
drawMatches(image1, keypoints1, image2, keypoints2, better_matches, output);
imshow("Matches result",output);
waitKey(0);
return 0;
}
That image looks rather like an artificial hip. If you're dealing with medical images, you should definitely check out The Insight Toolkit (ITK) which has many special features designed for the particular needs of this domain. You could do a simple Model-Image Registration between your real-world image and your template data to find the best result. I think you would get much better results with this approach than with the point-based testing described above.
This sort of registration performs an iterative optimisation of a set of parameters (in this case, an affine transform) which seeks to find the best mapping of the model to the image data.
ITK Affine Registration example
The example above takes a fixed image and attempts to find a transform that maps the moving image onto it. The transform is a 2D affine transform (rotation and translation in this case) and its parameters are the result of running the optimiser which maximises the matching metric. The metric measures how well the fixed image and the transformed moving image match. The interpolator is what takes the moving image and applies the transform to map it onto the fixed image.
In your sample images, the fixed image could be the original X-ray and the moving image the actual implant. You will probably need to add scaling to make a full affine transform, since the sizes of the two differ.
The metric is a measure of how well the transformed moving image matches the fixed image, so you would need to determine a tolerance or minimum metric for a match to be valid. If the images are very different, the metric would be very low and can be rejected.
The output is a set of transformation parameters and the output image is the final optimal transform applied to the moving image (not a combination of the images). The result is basically telling you where the implant is found in the X-ray.
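To make that concrete, here is a rough skeleton of a 2D affine registration with the ITK v4 API. The file names, image types, and optimizer settings are illustrative assumptions, not tested values; treat it as a sketch of the approach, not a finished implementation:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegistrationMethodv4.h"
#include "itkMeanSquaresImageToImageMetricv4.h"
#include "itkRegularStepGradientDescentOptimizerv4.h"
#include "itkAffineTransform.h"
#include <cstdlib>
#include <iostream>
int main(int argc, char* argv[])
{
typedef itk::Image<float, 2> ImageType;
typedef itk::AffineTransform<double, 2> TransformType;
typedef itk::RegularStepGradientDescentOptimizerv4<double> OptimizerType;
typedef itk::MeanSquaresImageToImageMetricv4<ImageType, ImageType> MetricType;
typedef itk::ImageRegistrationMethodv4<ImageType, ImageType, TransformType> RegistrationType;
// fixed = X-ray, moving = implant image (paths passed on the command line)
itk::ImageFileReader<ImageType>::Pointer fixedReader = itk::ImageFileReader<ImageType>::New();
fixedReader->SetFileName(argv[1]);
itk::ImageFileReader<ImageType>::Pointer movingReader = itk::ImageFileReader<ImageType>::New();
movingReader->SetFileName(argv[2]);
TransformType::Pointer initialTransform = TransformType::New();
initialTransform->SetIdentity(); // start from identity; a centering initializer would be better
OptimizerType::Pointer optimizer = OptimizerType::New();
optimizer->SetLearningRate(1.0); // illustrative values; tune per data
optimizer->SetMinimumStepLength(0.001);
optimizer->SetNumberOfIterations(300);
RegistrationType::Pointer registration = RegistrationType::New();
registration->SetMetric(MetricType::New());
registration->SetOptimizer(optimizer);
registration->SetInitialTransform(initialTransform);
registration->SetFixedImage(fixedReader->GetOutput());
registration->SetMovingImage(movingReader->GetOutput());
try
{
registration->Update(); // runs the iterative optimisation
}
catch (itk::ExceptionObject& err)
{
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
// Lower mean-squares metric = better match; use it to rank candidate implants.
std::cout << "Final metric value: " << optimizer->GetValue() << std::endl;
std::cout << "Transform parameters: " << registration->GetTransform()->GetParameters() << std::endl;
return EXIT_SUCCESS;
}
Running one such registration per candidate implant and keeping the lowest final metric value would give the ranking described above.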