OpenCV findHomography assertion failed error - c++

I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example just in case I did screw something up:
/*
* matching_test.cpp
*
* Created on: Oct 17, 2010
* Author: ethan
*/
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;
//Copy (x,y) location of descriptor matches found from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
pts_train.clear();
pts_query.clear();
pts_train.reserve(matches.size());
pts_query.reserve(matches.size());
for (size_t i = 0; i < matches.size(); i++)
{
const DMatch& match = matches[i];
pts_query.push_back(kpts_query[match.queryIdx].pt);
pts_train.push_back(kpts_train[match.trainIdx].pt);
}
}
static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
const Mat& train, const Mat& query, vector<DMatch>& matches)
{
double t = (double)getTickCount();
matcher.match(query, train, matches); //Using features2d
return ((double)getTickCount() - t) / getTickFrequency();
}
static void help()
{
cout << "This program shows how to use BRIEF descriptor to match points in features2d" << endl <<
"It takes in two images, finds keypoints and matches them displaying matches and final homography warped results" << endl <<
"Usage: " << endl <<
"image1 image2 " << endl <<
"Example: " << endl <<
"box.png box_in_scene.png " << endl;
}
const char* keys =
{
"{1| |box.png |the first image}"
"{2| |box_in_scene.png|the second image}"
};
int main(int argc, const char ** argv)
{
Mat outimg;
help();
CommandLineParser parser(argc, argv, keys);
string im1_name = parser.get<string>("1");
string im2_name = parser.get<string>("2");
Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
if (im1.empty() || im2.empty())
{
cout << "could not open one of the images..." << endl;
cout << "the cmd parameters have next current value: " << endl;
parser.printParams();
return 1;
}
double t = (double)getTickCount();
FastFeatureDetector detector(15);
BriefDescriptorExtractor extractor(32); //this is really 32 x 8 matches since they are binary matches packed into bytes
vector<KeyPoint> kpts_1, kpts_2;
detector.detect(im1, kpts_1);
detector.detect(im2, kpts_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl << "fount " << kpts_2.size()
<< " keypoints in " << im2_name << endl << "took " << t << " seconds." << endl;
drawKeypoints(im1, kpts_1, outimg, 200);
imshow("Keypoints - Image1", outimg);
drawKeypoints(im2, kpts_2, outimg, 200);
imshow("Keypoints - Image2", outimg);
Mat desc_1, desc_2;
cout << "computing descriptors..." << endl;
t = (double)getTickCount();
extractor.compute(im1, kpts_1, desc_1);
extractor.compute(im2, kpts_2, desc_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "done computing descriptors... took " << t << " seconds" << endl;
//Do matching using features2d
cout << "matching with BruteForceMatcher<Hamming>" << endl;
BFMatcher matcher_popcount(NORM_HAMMING);
vector<DMatch> matches_popcount;
double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;
vector<Point2f> mpts_1, mpts_2;
cout << "matches2points ... ";
matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); //Extract a list of the (x,y) location of the matches
cout << "done" << endl;
vector<char> outlier_mask;
cout << "findHomography ... ";
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
cout << "done" << endl;
cout << "drawMatches ... ";
drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
cout << "done" << endl;
imshow("matches - popcount - outliers removed", outimg);
Mat warped;
Mat diff;
warpPerspective(im2, warped, H, im1.size());
imshow("warped", warped);
absdiff(im1,warped,diff);
imshow("diff", diff);
waitKey();
return 0;
}

I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs, otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
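For example, a minimal guard using the variable names from your code:
if (matches_popcount.size() >= 4)
{
    Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
    // ... proceed with drawMatches / warpPerspective ...
}
else
{
    cout << "not enough matches to estimate a homography" << endl;
}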
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say: maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing them to findHomography.
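That idea, sketched with the question's variable names (whether this alone resolves the assertion is untested):
Mat H = findHomography(Mat(mpts_2), Mat(mpts_1), RANSAC, 1, outlier_mask);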

It's an internal OpenCV type problem. findHomography() wants vector<unsigned char> as the last parameter, but drawMatches() requires vector<char> as its last one.
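A minimal sketch of one way to satisfy both signatures, using the question's variable names: let findHomography() fill a vector<uchar>, then copy it into the vector<char> that drawMatches() expects.
vector<uchar> inlier_mask;
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, inlier_mask);
//drawMatches() wants vector<char>; the 0/1 values fit either type
vector<char> outlier_mask(inlier_mask.begin(), inlier_mask.end());
drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg,
    Scalar::all(-1), Scalar::all(-1), outlier_mask);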

I think this page explains a lot about brief_match_test.cpp and ways to correct it.

You can do it like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);

Related

Thin Plate Spline shape transformation run-time error [exited with code -1073741819]

I have been trying to warp an image using the OpenCV 3.1.0 shape transformation class, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from Shape Transformers and Interfaces OpenCV3.0.)
But the problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out the code that caused the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
Searching for the runtime error suggested it could be a DLL problem, but I have no problem running OpenCV in general. I only get the error when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/shape/shape_transformer.hpp"
using namespace std;
using namespace cv;
int main()
{
Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");
std::vector<cv::Point2f> sourcePoints, targetPoints;
sourcePoints.push_back(cv::Point2f(0, 0));
sourcePoints.push_back(cv::Point2f(399, 0));
sourcePoints.push_back(cv::Point2f(0, 399));
sourcePoints.push_back(cv::Point2f(399, 399));
targetPoints.push_back(cv::Point2f(100, 0));
targetPoints.push_back(cv::Point2f(399, 0));
targetPoints.push_back(cv::Point2f(0, 399));
targetPoints.push_back(cv::Point2f(399, 399));
Mat source(sourcePoints, CV_32FC1);
Mat target(targetPoints, CV_32FC1);
Mat respic, resmat;
std::vector<cv::DMatch> matches;
for (unsigned int i = 0; i < sourcePoints.size(); i++)
matches.push_back(cv::DMatch(i, i, 0));
Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
tps->estimateTransformation(source, target, matches);
std::vector<cv::Point2f> transPoints;
tps->applyTransformation(source, target);
cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
//cout << "transPos = " << endl << " " << transPoints << endl << endl;
cout << img1.size() << endl;
imshow("img1", img1); // Just to see if I have a good picture
tps->warpImage(img1, respic);
imshow("Tranformed", respic); //Always completley grey ?
waitKey(0);
return 0;
}
I just want to be able to run the algorithm so that I can check if it is the algorithm that I want.
Please help.
Thank you.
opencv-version 3.1.0
IDE: Visual Studio 2015
OS : Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation().
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.
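For context, here is a sketch of how the fix slots into the question's code, using the same variable names. The plain vector constructor gives an N x 1 two-channel Mat, which the transpose turns into the 1 x N layout the transformer expects (note that the CV_32FC1 argument in the question's constructor actually binds to the copyData flag, so it is dropped here):
Mat source(sourcePoints); //N x 1, CV_32FC2
Mat target(targetPoints);
transpose(source, source); //now 1 x N
transpose(target, target);
tps->estimateTransformation(source, target, matches);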

C++ OpenCV: Mat::zeros gives wrong shape

I defined and initialized a Mat variable using Mat::zeros, but when I print its shape, i.e. rows, cols, channels, I seem to get wrong values.
My code is shown as follows:
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char const *argv[])
{
int n_Channel = 3;
int mySizes[3] = {100, 200, n_Channel};
Mat M = Mat::zeros(n_Channel, mySizes, CV_64F);
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
return 0;
}
The printed message is:
-1,-1,1
What's wrong with this?
I also find that if I declare a Mat using the following code:
int n_Channel = 3;
Mat M(Size(100, 200), CV_32FC(n_Channel));
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
the outcome is correct:
200,100,3
I'm confused about this. Thank you all for helping me!
You want to use a very special overloaded version of the cv::Mat::zeros method.
Let's have a look at the following code:
// Number of channels.
const int n_Channel = 3;
// Number of dimensions; must be 1 or 2?
const int n_Dimensions = 2;
// Create empty Mat using zeros, and output dimensions.
int mySizes[n_Dimensions] = { 200, 100 };
cv::Mat M1 = cv::Mat::zeros(n_Dimensions, mySizes, CV_64FC(n_Channel));
std::cout << "M1: " << M1.rows << "," << M1.cols << "," << M1.channels() << std::endl;
// Create empty Mat using constructor, and output dimensions.
cv::Mat M2 = cv::Mat(cv::Size(100, 200), CV_64FC(n_Channel), cv::Scalar(0, 0, 0));
std::cout << "M2: " << M2.rows << "," << M2.cols << "," << M2.channels() << std::endl;
which gives the following output:
M1: 200,100,3
M2: 200,100,3
So basically you have to move the "channel number info" from mySizes into the type argument of the cv::Mat::zeros method. Also, you have to pay attention to the order of the image dimensions provided in mySizes, since it seems to differ from the constructor using cv::Size: the latter is width x height, whereas the former is number of rows x number of cols.
How to initialize a cv::Mat:
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64F);
As you can see, the first parameter is the Size; cf.
https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html
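Putting the two answers together, here is a small comparison sketch: three equivalent ways to get a zeroed 200-row x 100-column, 3-channel matrix.
cv::Mat a = cv::Mat::zeros(200, 100, CV_64FC3); //rows, cols, type
cv::Mat b = cv::Mat::zeros(cv::Size(100, 200), CV_64FC3); //Size(width, height), type
int sizes[2] = { 200, 100 };
cv::Mat c = cv::Mat::zeros(2, sizes, CV_64FC3); //ndims, sizes[], type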

TIFF files garbled by ArrayFire (C++)

I notice that this simple ArrayFire program is causing loaded TIFF images to be heavily distorted:
#include <iostream>
#include <arrayfire.h>
int main( int argc, char** argv ) {
af::array img = af::loadImage( argv[1] );
double mn, mx;
unsigned idxn, idxx;
af::min( &mn, &idxn, img );
af::max( &mx, &idxx, img );
std::cout << "Image size = " << img.dims()[0] << ", " << img.dims()[1] << '\n';
std::cout << "Data type = " << img.type() << '\n';
std::cout << "Min = " << mn << " (at " << idxn << ")\n";
std::cout << "Max = " << mx << " (at " << idxx << ")\n";
af::saveImage( argv[2], img );
return 0;
}
I then compile and run on a simple (monochrome) image:
./a.out orig.tif out.tif
with the following output:
Image size = 256, 256
Data type = 0
Min = 0 (at 65535)
Max = 81.5025 (at 31356)
When I visualize these images, the output is heavily distorted, which of course is not what ArrayFire is expected to do; I would expect it to dump the exact same image out, since I didn't make any changes to it. Unfortunately I don't know enough about the TIFF image format or the graphics backend of ArrayFire to understand what is going on. Am I doing something wrong while loading the image? (I followed the ArrayFire documentation for loadImage and saveImage.)
I also tried using loadImageNative and saveImageNative alternatively, but the latter returns a 4-layer TIFF image while the original image is only a 1-layer TIFF.
Any help at all from ArrayFire experts would be great.
Thanks!

What is the best way to find the closest match to a complex shape, using OpenCV and C++?

Alright, here is my source code. This code takes an image from a file and compares it against a list of images in another folder. In the folder of images you must include a .txt file containing the names of all of the images you are trying to compare. The problem I'm having is that these two images are very similar but are not exactly the same, and I need a method to refine the matches further, perhaps even an entirely new way to compare the two shapes (in larger chunks, blobs, etc.). One way I was considering is making an entire keypoint map and only comparing keypoints if they are at or near a certain point that corresponds to both images, i.e. compare keypoints at point (12, 200), ±10 pixels from (x, y), and see if there are similar keypoints in the other image.
All I need is a way to get the best matches possible from: ActualImplant and XrayOfThatSameImplantButASlightlyDifferentSize. Please and thank you!
PS: you will see commented-out sections where I was experimenting with Sobel derivatives and other such things. I ended up just adjusting contrast and brightness on the X-ray for the best outline. The same has to be done to the image of the implant before it is used to try to match anything.
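As an aside, the position-gated comparison idea from above could be prototyped roughly like this (a sketch only: filterMatchesByPosition is a hypothetical helper, and it assumes both images share a common coordinate frame):
//Hypothetical spatial gate: keep a match only if its two keypoints lie
//within `radius` pixels of each other in image coordinates.
static void filterMatchesByPosition(const vector<KeyPoint>& kptsQuery,
    const vector<KeyPoint>& kptsTrain, const vector<DMatch>& matches,
    float radius, vector<DMatch>& filtered)
{
    filtered.clear();
    for (size_t i = 0; i < matches.size(); i++)
    {
        Point2f d = kptsQuery[matches[i].queryIdx].pt - kptsTrain[matches[i].trainIdx].pt;
        if (d.x * d.x + d.y * d.y <= radius * radius)
            filtered.push_back(matches[i]);
    }
}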
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\features2d\features2d.hpp"
#include "opencv2\imgproc.hpp"
#include <iostream>
#include <fstream>
#include <ctime>
const string defaultDetector = "ORB";
const string defaultDescriptor = "ORB";
const string defaultMatcher = "BruteForce-Hamming";
const string defaultXrayImagePath = "../../xray.png";
const string defaultImplantImagesTextListPath = "../../implantImage.txt";
const string defaultPathToResultsFolder = "../../results";
static void printIntro(const string& appName)
{
cout << "/* *\n"
<< " * Created by: Alex Gatz. 1/11/12. Created for: Xray Implant Identification *\n"
<< " * This code was created to scan a file full of images of differnt implants, generate keypoint maps *\n"
<< " * for each image, and identifywhich image most closely matches a chosen image in another folder *\n"
<< " */ *\n"
<< endl;
cout << endl << "Format:\n" << endl;
cout << "./" << appName << " [detector] [descriptor] [matcher] [xrayImagePath] [implantImagesTextListPath] [pathToSaveResults]" << endl;
cout << endl;
cout << "\nExample:" << endl
<< "./" << appName << " " << defaultDetector << " " << defaultDescriptor << " " << defaultMatcher << " "
<< defaultXrayImagePath << " " << defaultImplantImagesTextListPath << " " << defaultPathToResultsFolder << endl;
}
static void maskMatchesByImplantImgIdx(const vector<DMatch>& matches, int trainImgIdx, vector<char>& mask)
{
mask.resize(matches.size());
fill(mask.begin(), mask.end(), 0);
for (size_t i = 0; i < matches.size(); i++)
{
if (matches[i].imgIdx == trainImgIdx)
mask[i] = 1;
}
}
static void readImplantFilenames(const string& filename, string& dirName, vector<string>& implantFilenames)
{
implantFilenames.clear();
ifstream file(filename.c_str());
if (!file.is_open())
return;
size_t pos = filename.rfind('\\');
char dlmtr = '\\';
if (pos == String::npos)
{
pos = filename.rfind('/');
dlmtr = '/';
}
dirName = pos == string::npos ? "" : filename.substr(0, pos) + dlmtr;
while (!file.eof())
{
string str; getline(file, str);
if (str.empty()) break;
implantFilenames.push_back(str);
}
file.close();
}
static bool createDetectorDescriptorMatcher(const string& detectorType, const string& descriptorType, const string& matcherType,
Ptr<FeatureDetector>& featureDetector,
Ptr<DescriptorExtractor>& descriptorExtractor,
Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Creating feature detector, descriptor extractor and descriptor matcher ..." << endl;
featureDetector = ORB::create( //All of these are parameters that can be adjusted to effect match accuracy and process time.
10000, //int nfeatures = Maximum number of features to retain; higher values take longer to process. Default: 500
1.4f, //float scaleFactor = Pyramid decimation ratio; between 1.0 and 2.0. Default: 1.2f
6, //int nlevels = Number of pyramid levels used; more levels take more time to process but give more accurate results. Default: 8
40, //int edgeThreshold = Size of the border where features are not detected. Should roughly match patchSize. Default: 31
0, //int firstLevel = Should remain 0 for now. Default: 0
4, //int WTA_K = Should remain 2. Default: 2
ORB::HARRIS_SCORE, //int scoreType = ORB::HARRIS_SCORE is the most accurate ranking possible for ORB. Default: HARRIS_SCORE
33 //int patchSize = Size of patch used by the oriented BRIEF descriptor. Should match edgeThreshold. Default: 31
);
//featureDetector = ORB::create(); // <-- Uncomment this and comment the featureDetector above for default detector-
//OpenCV 3.1 got rid of the dynamic naming of detectors and extractors.
//These two are one and the same when using ORB; some detectors and extractors are separate,
// in which case you would set "descriptorExtractor = descriptorType::create();" or its equivalent.
descriptorExtractor = featureDetector;
descriptorMatcher = DescriptorMatcher::create(matcherType);
cout << ">" << endl;
bool isCreated = !(featureDetector.empty() || descriptorExtractor.empty() || descriptorMatcher.empty());
if (!isCreated)
cout << "Can not create feature detector or descriptor extractor or descriptor matcher of given types." << endl << ">" << endl;
return isCreated;
}
static void manipulateImage(Mat& image) //Manipulates images into only showing an outline!
{
//Sobel derivative edge finder
//int scale = 1;
//int delta = 0;
//int ddepth = CV_16S;
////equalizeHist(image, image); //This will equalize the lighting levels in each image.
//GaussianBlur(image, image, Size(3, 3), 0, 0, BORDER_DEFAULT);
//Mat grad_x, grad_y;
//Mat abs_grad_x, abs_grad_y;
////For x
//Sobel(image, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_x, abs_grad_x);
////For y
//Sobel(image, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
//convertScaleAbs(grad_y, abs_grad_y);
//addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, image);
//Specific Level adjustment (very clean)
double alpha = 20; //Best Result: 20
int beta = -300; //Best Result: -300
image.convertTo(image, -1, alpha, beta);
}
static bool readImages(const string& xrayImageName, const string& implantFilename,
Mat& xrayImage, vector <Mat>& implantImages, vector<string>& implantImageNames)
{
//TODO: Add a function call to automatically adjust all loaded images to the best settings for matching.
cout << "< Reading the images..." << endl;
xrayImage = imread(xrayImageName, CV_LOAD_IMAGE_GRAYSCALE); //Turns the image gray while loading.
manipulateImage(xrayImage); //Runs image manipulations
if (xrayImage.empty())
{
cout << "Xray image can not be read." << endl << ">" << endl;
return false;
}
string trainDirName;
readImplantFilenames(implantFilename, trainDirName, implantImageNames);
if (implantImageNames.empty())
{
cout << "Implant image filenames can not be read." << endl << ">" << endl;
return false;
}
int readImageCount = 0;
for (size_t i = 0; i < implantImageNames.size(); i++)
{
string filename = trainDirName + implantImageNames[i];
Mat img = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); //Turns images gray while loading.
//manipulateImage(img); //Runs Sobel derivative on implant image.
if (img.empty())
{
cout << "Implant image " << filename << " can not be read." << endl;
}
else
{
readImageCount++;
}
implantImages.push_back(img);
}
if (!readImageCount)
{
cout << "All implant images can not be read." << endl << ">" << endl;
return false;
}
else
cout << readImageCount << " implant images were read." << endl;
cout << ">" << endl;
return true;
}
static void detectKeypoints(const Mat& xrayImage, vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantKeypoints,
Ptr<FeatureDetector>& featureDetector)
{
cout << endl << "< Extracting keypoints from images..." << endl;
featureDetector->detect(xrayImage, xrayKeypoints);
featureDetector->detect(implantImages, implantKeypoints);
cout << ">" << endl;
}
static void computeDescriptors(const Mat& xrayImage, vector<KeyPoint>& implantKeypoints, Mat& implantDescriptors,
const vector<Mat>& implantImages, vector<vector<KeyPoint> >& implantImageKeypoints, vector<Mat>& implantImageDescriptors,
Ptr<DescriptorExtractor>& descriptorExtractor)
{
cout << "< Computing descriptors for keypoints..." << endl;
descriptorExtractor->compute(xrayImage, implantKeypoints, implantDescriptors);
descriptorExtractor->compute(implantImages, implantImageKeypoints, implantImageDescriptors);
int totalTrainDesc = 0;
for (vector<Mat>::const_iterator tdIter = implantImageDescriptors.begin(); tdIter != implantImageDescriptors.end(); tdIter++)
totalTrainDesc += tdIter->rows;
cout << "Query descriptors count: " << implantDescriptors.rows << "; Total train descriptors count: " << totalTrainDesc << endl;
cout << ">" << endl;
}
static void matchDescriptors(const Mat& xrayDescriptors, const vector<Mat>& implantDescriptors,
vector<DMatch>& matches, Ptr<DescriptorMatcher>& descriptorMatcher)
{
cout << "< Set implant image descriptors collection in the matcher and match xray descriptors to them..." << endl;
//time_t timerBegin, timerEnd;
//time(&timerBegin);
descriptorMatcher->add(implantDescriptors);
descriptorMatcher->train();
//time(&timerEnd);
//double buildTime = difftime(timerEnd, timerBegin);
//time(&timerBegin);
descriptorMatcher->match(xrayDescriptors, matches);
//time(&timerEnd);
//double matchTime = difftime(timerEnd, timerBegin);
CV_Assert(xrayDescriptors.rows == (int)matches.size() || matches.empty());
cout << "Number of imageMatches: " << matches.size() << endl;
//cout << "Build time: " << buildTime << " ms; Match time: " << matchTime << " ms" << endl;
cout << ">" << endl;
}
static void saveResultImages(const Mat& xrayImage, const vector<KeyPoint>& xrayKeypoints,
const vector<Mat>& implantImage, const vector<vector<KeyPoint> >& implantImageKeypoints,
const vector<DMatch>& matches, const vector<string>& implantImagesName, const string& resultDir)
{
cout << "< Save results..." << endl;
Mat drawImg;
vector<char> mask;
for (size_t i = 0; i < implantImage.size(); i++)
{
if (!implantImage[i].empty())
{
maskMatchesByImplantImgIdx(matches, (int)i, mask);
drawMatches(xrayImage, xrayKeypoints, implantImage[i], implantImageKeypoints[i],
matches, drawImg, Scalar::all(-1), Scalar(0, 0, 255), mask, 4);
string filename = resultDir + "/result_" + implantImagesName[i];
if (!imwrite(filename, drawImg))
cout << "Image " << filename << " can not be saved (may be because directory " << resultDir << " does not exist)." << endl;
}
}
cout << ">" << endl;
//After all results have been saved, another function will scan and place the final result in a separate folder.
//For now this save process is required to manually access each result and determine if the current settings are working well.
}
int main(int argc, char** argv)
{
//Initialize variables to global defaults.
string detector = defaultDetector;
string descriptor = defaultDescriptor;
string matcher = defaultMatcher;
string xrayImagePath = defaultXrayImagePath;
string implantImagesTextListPath = defaultImplantImagesTextListPath;
string pathToSaveResults = defaultPathToResultsFolder;
//As long as you have 7 arguments, you can proceed.
if (argc != 7 && argc != 1)
{
//This will be called if an incorrect number of arguments is used to start the program.
printIntro(argv[0]); //argv[0] is the application name; argv[1] may not exist here.
system("PAUSE");
return -1;
}
//As long as you still have 7 arguments, I will set the variables for this
// to the arguments you decided on.
//If testing using XrayID --> Properties --> Debugging --> Command Arguments, remember to start with [detector] as the first command;
// C++ includes the [appName] command as the first argument automatically.
if (argc != 1) //I suggest placing a break here and stepping through this to ensure the proper commands were sent in. With a
// GUI this would not matter because the GUI would structure the input and use a default if no input was used.
{
detector = argv[1];
descriptor = argv[2];
matcher = argv[3];
xrayImagePath = argv[4];
implantImagesTextListPath = argv[5];
pathToSaveResults = argv[6];
}
//Set up cv::Ptr's for tools.
Ptr<FeatureDetector> featureDetector;
Ptr<DescriptorExtractor> descriptorExtractor;
Ptr<DescriptorMatcher> descriptorMatcher;
//Check to see if tools are created, if not true print intro and close program.
if (!createDetectorDescriptorMatcher(detector, descriptor, matcher, featureDetector, descriptorExtractor, descriptorMatcher))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
Mat testImage;
vector<Mat> implantImages;
vector<string> implantImagesNames;
//Check to see if readImages completes properly, if not true print intro and close program.
if (!readImages(xrayImagePath, implantImagesTextListPath, testImage, implantImages, implantImagesNames))
{
printIntro(argv[0]);
system("PAUSE");
return -1;
}
vector<KeyPoint> xrayKeypoints;
vector<vector<KeyPoint> > implantKeypoints;
detectKeypoints(testImage, xrayKeypoints, implantImages, implantKeypoints, featureDetector);
Mat xrayDescriptors;
vector<Mat> implantTestImageDescriptors;
computeDescriptors(testImage, xrayKeypoints, xrayDescriptors, implantImages, implantKeypoints, implantTestImageDescriptors,
descriptorExtractor);
vector<DMatch> imageMatches;
matchDescriptors(xrayDescriptors, implantTestImageDescriptors, imageMatches, descriptorMatcher);
saveResultImages(testImage, xrayKeypoints, implantImages, implantKeypoints, imageMatches, implantImagesNames, pathToSaveResults);
system("PAUSE");
return 0;
}
Try the code below. Hope this will help you.
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <iostream>
#include <dirent.h>
#include <ctime>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int argc, const char *argv[])
{
double ratio = 0.9;
clock_t begin = clock(); //start timing; used for elapsed_secs below
Mat image1 = imread("Image1_path");
Mat image2 = imread("Image2_path");
Ptr<FeatureDetector> detector;
Ptr<DescriptorExtractor> extractor;
// TODO default is 500 keypoints..but we can change
detector = FeatureDetector::create("ORB");
extractor = DescriptorExtractor::create("ORB");
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(image1, keypoints1);
detector->detect(image2, keypoints2);
cout << "# keypoints of image1 :" << keypoints1.size() << endl;
cout << "# keypoints of image2 :" << keypoints2.size() << endl;
Mat descriptors1,descriptors2;
extractor->compute(image1,keypoints1,descriptors1);
extractor->compute(image2,keypoints2,descriptors2);
cout << "Descriptors size :" << descriptors1.cols << ":"<< descriptors1.rows << endl;
vector< vector<DMatch> > matches12, matches21;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
matcher->knnMatch( descriptors1, descriptors2, matches12, 2);
matcher->knnMatch( descriptors2, descriptors1, matches21, 2);
//BFMatcher bfmatcher(NORM_L2, true);
//vector<DMatch> matches;
//bfmatcher.match(descriptors1, descriptors2, matches);
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors1.rows; i++)
{
double dist = matches12[i][0].distance;
if(dist < min_dist)
min_dist = dist;
if(dist > max_dist)
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
cout << "Matches1-2:" << matches12.size() << endl;
cout << "Matches2-1:" << matches21.size() << endl;
std::vector<DMatch> good_matches1, good_matches2;
for(int i=0; i < matches12.size(); i++)
{
if(matches12[i][0].distance < ratio * matches12[i][1].distance)
good_matches1.push_back(matches12[i][0]);
}
for(int i=0; i < matches21.size(); i++)
{
if(matches21[i][0].distance < ratio * matches21[i][1].distance)
good_matches2.push_back(matches21[i][0]);
}
cout << "Good matches1:" << good_matches1.size() << endl;
cout << "Good matches2:" << good_matches2.size() << endl;
// Symmetric Test
std::vector<DMatch> better_matches;
for(int i=0; i<good_matches1.size(); i++)
{
for(int j=0; j<good_matches2.size(); j++)
{
if(good_matches1[i].queryIdx == good_matches2[j].trainIdx && good_matches2[j].queryIdx == good_matches1[i].trainIdx)
{
better_matches.push_back(DMatch(good_matches1[i].queryIdx, good_matches1[i].trainIdx, good_matches1[i].distance));
break;
}
}
}
cout << "Better matches:" << better_matches.size() << endl;
clock_t end = clock(); //stop timing
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << "Elapsed time: " << elapsed_secs << " s" << endl;
// show it on an image
Mat output;
drawMatches(image1, keypoints1, image2, keypoints2, better_matches, output);
imshow("Matches result",output);
waitKey(0);
return 0;
}
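To actually rank a folder of candidates with this, one simple score is the number of symmetric ratio-test survivors per candidate. A hypothetical sketch, assuming a matchPair() helper that runs the pipeline above on one image pair and returns better_matches.size():
int bestIdx = -1;
size_t bestScore = 0;
for (size_t i = 0; i < candidateImages.size(); i++)
{
    size_t score = matchPair(xrayImage, candidateImages[i]); //hypothetical helper
    if (score > bestScore)
    {
        bestScore = score;
        bestIdx = (int)i;
    }
}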
That image looks rather like an artificial hip. If you're dealing with medical images, you should definitely check out The Insight Toolkit (ITK) which has many special features designed for the particular needs of this domain. You could do a simple Model-Image Registration between your real-world image and your template data to find the best result. I think you would get much better results with this approach than with the point-based testing described above.
This sort of registration performs an iterative optimisation of a set of parameters (in this case, an affine transform) which seeks to find the best mapping of the model to the image data.
ITK Affine Registration example
The example above takes a fixed image and attempts to find a transform that maps the moving image onto it. The transform is a 2D affine transform (rotation and translation in this case) and its parameters are the result of running the optimiser which maximises the matching metric. The metric measures how well the fixed image and the transformed moving image match. The interpolator is what takes the moving image and applies the transform to map it onto the fixed image.
In your sample images, fixed image could be the original X-ray and the moving image the actual implant. You will probably need to add scaling to make a full affine transform since the size of the two differs.
The metric is a measure of how well the transformed moving image matches the fixed image, so you would need to determine a tolerance or minimum metric for a match to be valid. If the images are very different, the metric would be very low and can be rejected.
The output is a set of transformation parameters and the output image is the final optimal transform applied to the moving image (not a combination of the images). The result is basically telling you where the implant is found in the X-ray.
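If you want to prototype this registration idea without pulling in ITK, OpenCV's findTransformECC performs a comparable intensity-based iterative alignment. A minimal sketch (not the ITK example; the file names are placeholders, and both images are assumed to be single-channel):
#include <opencv2/opencv.hpp>
#include <opencv2/video/tracking.hpp> //findTransformECC

int main()
{
    cv::Mat fixed = cv::imread("xray.png", cv::IMREAD_GRAYSCALE); //fixed image
    cv::Mat moving = cv::imread("implant.png", cv::IMREAD_GRAYSCALE); //moving image
    cv::Mat warp = cv::Mat::eye(2, 3, CV_32F); //2x3 affine warp, identity start
    //Iteratively optimise the warp to maximise the ECC similarity metric.
    double ecc = cv::findTransformECC(fixed, moving, warp, cv::MOTION_AFFINE,
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-6));
    //ecc is the final correlation coefficient; threshold it to accept or reject.
    cv::Mat registered;
    cv::warpAffine(moving, registered, warp, fixed.size(),
        cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
    cv::imwrite("registered.png", registered);
    return 0;
}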

Exception under opencv 3.0 and mingw under windows 7 when using AKAZE

I want to use AKAZE, which is integrated in OpenCV 3.0.
For this I've tested the following code:
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <qcoreapplication.h>
#include <QDebug>
using namespace std;
using namespace cv;
const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float nn_match_ratio = 0.8f; // Nearest neighbor matching ratio
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
Mat img1 = cv::imread("img1.jpg",IMREAD_GRAYSCALE);
Mat img2 = imread("img2.jpg", IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs("H1to3p.xml", FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
Ptr<AKAZE> akaze = AKAZE::create();
//ERROR after detectAndCompute(...)
akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, noArray(), kpts2, desc2);
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
vector<KeyPoint> matched1, matched2, inliers1, inliers2;
vector<DMatch> good_matches;
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if(dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
for(unsigned i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
double dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if(dist < inlier_threshold) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("res.png", res);
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
cout << "A-KAZE Matching Results" << endl;
cout << "*******************************" << endl;
cout << "# Keypoints 1: \t" << kpts1.size() << endl;
cout << "# Keypoints 2: \t" << kpts2.size() << endl;
cout << "# Matches: \t" << matched1.size() << endl;
cout << "# Inliers: \t" << inliers1.size() << endl;
cout << "# Inliers Ratio: \t" << inlier_ratio << endl;
cout << endl;
return a.exec();
}
After line akaze->detectAndCompute(img1, noArray(), kpts1, desc1); the following exception was thrown:
OpenCV Error: Insufficient memory (Failed to allocate 72485160 bytes) in OutOfMemoryError, file C:\opencv\sources\modules\core\src\alloc.cpp, line 52.
OpenCV Error: Assertion failed (u != 0) in create, file C:\opencv\sources\modules\core\src\matrix.cpp, line 411 terminate called after throwing an instance of 'cv::Exception'
what(): C:\opencv\sources\modules\core\src\matrix.cpp:411: error: (-215) u != 0
I've compiled OpenCV with MinGW 4.9.2 under Windows 7.
Does anybody have an answer?
Thank you.
More of a comment than an answer, but I am unable to comment.
As the error states, you seem to be running out of memory while processing the A-KAZE detection. In one of my tests (although my images were 4160x2340), processing three detection modules one after the other easily took around 7-8 GB of memory. What resolution are your images at, and how much RAM do you have?
Also, if you compile this application as 32-bit, it will not be able to allocate more than 4 GB (2 GB if you yourself are on a 32-bit OS). Are you on 32-bit or 64-bit, and if the latter, are you compiling it as a 64-bit application? One possible solution would be to just resize your image so that it has fewer pixels and requires less memory:
cv::resize(sourceImage, destinationImage, Size(), 0.5, 0.5, interpolation); // Halves the resolution
But this is a last resort, because higher resolution means more features and precision.
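For instance, a sketch using the question's variable names, downscaling before detection (note that the ground-truth homography in H1to3p.xml would then no longer match the new scale):
cv::resize(img1, img1, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
cv::resize(img2, img2, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
akaze->detectAndCompute(img1, cv::noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, cv::noArray(), kpts2, desc2);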