I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
So, this is the code I have so far:
printf("Calculating FAST+BRIEF features...\n");
Ptr<FastFeatureDetector> FASTdetector = FastFeatureDetector::create();
Ptr<BriefDescriptorExtractor> BRIEFdescriptor = BriefDescriptorExtractor::create();
std::vector<cv::KeyPoint> FASTkeypoints_1, FASTkeypoints_2, FASTkeypoints_3;
Mat BRIEFdescriptors_1, BRIEFdescriptors_2, BRIEFdescriptors_3;
FASTdetector->detect(left08, FASTkeypoints_1);
FASTdetector->detect(right08, FASTkeypoints_2);
FASTdetector->detect(left10, FASTkeypoints_3);
BRIEFdescriptor->compute(left08, FASTkeypoints_1, BRIEFdescriptors_1);
BRIEFdescriptor->compute(right08, FASTkeypoints_2, BRIEFdescriptors_2);
BRIEFdescriptor->compute(left10, FASTkeypoints_3, BRIEFdescriptors_3);
Mat FAST_left08, FAST_right08, FAST_left10;
drawKeypoints(left08, FASTkeypoints_1, FAST_left08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left08.png", FAST_left08);
drawKeypoints(right08, FASTkeypoints_2, FAST_right08, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_right08.png", FAST_right08);
drawKeypoints(left10, FASTkeypoints_3, FAST_left10, FASTBRIEFfeatcol_YELLOW, DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite("../Results/FASTBRIEF_left10.png", FAST_left10);
printf("FAST+BRIEF done. \n");
The code so far works perfectly fine; however, I don't get rich keypoints, only standard ones. If I understand correctly, this is because I need to somehow get the descriptor information into the keypoints first, right?
I have done the same implementation with SIFT, SURF and ORB before, but there I use the detectAndCompute function directly, which gives me keypoints that I can draw with the DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag.
I have to implement a feature detector using FAST+BRIEF (which is the manual implementation of ORB if I understand correctly).
Yes, that is correct.
If I understand correctly, this is because I need to somehow get the descriptor information to the keypoints first, right?
No. Keypoints are detected using different methods: you can use SIFT, FAST, the Harris detector, SURF, etc. just to detect keypoints at first. Then there are different methods to describe the detected keypoints (e.g. a 128-dimensional float vector for SIFT) and to match them afterwards.
A keypoint in OpenCV is described by attributes such as angle, size, octave and so on: https://docs.opencv.org/3.4.2/d2/d29/classcv_1_1KeyPoint.html
For SIFT every KeyPoint attribute is filled with a meaningful value that DRAW_RICH_KEYPOINTS can later visualize. For FAST only default values are assigned to these attributes, so the keypoints can be drawn with the mentioned flag, but the size, octave and angle do not vary. Thus, every drawn KeyPoint looks the same.
Here is a small code sample as proof (I only use the ->detect functions):
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>
int main(int argc, char** argv)
{
// Load image
cv::Mat img = cv::imread("MT189.jpg", cv::IMREAD_GRAYSCALE);
if (!img.data) {
std::cout << "Error reading image" << std::endl;
return EXIT_FAILURE;
}
cv::Mat output;
// Detect FAST keypoints
std::vector<cv::KeyPoint> keypoints_fast, keypoints_sift;
cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
fast->detect(img, keypoints_fast);
for (size_t i = 0; i < keypoints_fast.size() && i < 100; ++i) {
std::cout << "FAST Keypoint #:" << i;
std::cout << " Size " << keypoints_fast[i].size << " Angle " << keypoints_fast[i].angle << " Response " << keypoints_fast[i].response << " Octave " << keypoints_fast[i].octave << std::endl;
}
// Detect SIFT keypoints
cv::Ptr<cv::xfeatures2d::SiftFeatureDetector> sift = cv::xfeatures2d::SiftFeatureDetector::create();
sift->detect(img, keypoints_sift);
for (size_t i = 0; i < keypoints_sift.size() && i < 100; ++i) {
std::cout << "SIFT Keypoint #:" << i;
std::cout << " Size " << keypoints_sift[i].size << " Angle " << keypoints_sift[i].angle << " Response " << keypoints_sift[i].response << " Octave " << keypoints_sift[i].octave << std::endl;
}
// Draw SIFT keypoints
cv::drawKeypoints(img, keypoints_sift, output, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
cv::imshow("Output", output);
cv::waitKey(0);
}
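As a side note, and purely as an assumption about what you may ultimately want: if the goal is rich keypoints whose size and orientation actually vary while keeping BRIEF descriptors, one option is to let ORB do the detection (it fills in angle, size and octave) and then describe those keypoints with BRIEF. A minimal sketch, reusing the variable names from your code:
cv::Ptr<cv::ORB> orb = cv::ORB::create();
std::vector<cv::KeyPoint> keypoints;
orb->detect(left08, keypoints); // these keypoints carry varying angle, size and octave
cv::Mat descriptors, out;
BRIEFdescriptor->compute(left08, keypoints, descriptors);
cv::drawKeypoints(left08, keypoints, out, cv::Scalar(0, 255, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
This is only a sketch; whether mixing ORB detection with plain BRIEF description is acceptable for your assignment is for you to decide.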
I have been trying to warp an image using the OpenCV 3.1.0 shape transformation classes, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from "Shape Transformers and Interfaces OpenCV3.0".)
But the problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out that the code causing the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
I searched for the runtime error and found suggestions that it could be a DLL problem, but I have no problem running OpenCV in general. I only get the error when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2\opencv.hpp>
#include <opencv2\imgproc.hpp>
#include "opencv2\shape\shape_transformer.hpp"
using namespace std;
using namespace cv;
int main()
{
Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");
std::vector<cv::Point2f> sourcePoints, targetPoints;
sourcePoints.push_back(cv::Point2f(0, 0));
sourcePoints.push_back(cv::Point2f(399, 0));
sourcePoints.push_back(cv::Point2f(0, 399));
sourcePoints.push_back(cv::Point2f(399, 399));
targetPoints.push_back(cv::Point2f(100, 0));
targetPoints.push_back(cv::Point2f(399, 0));
targetPoints.push_back(cv::Point2f(0, 399));
targetPoints.push_back(cv::Point2f(399, 399));
Mat source(sourcePoints, CV_32FC1);
Mat target(targetPoints, CV_32FC1);
Mat respic, resmat;
std::vector<cv::DMatch> matches;
for (unsigned int i = 0; i < sourcePoints.size(); i++)
matches.push_back(cv::DMatch(i, i, 0));
Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
tps->estimateTransformation(source, target, matches);
std::vector<cv::Point2f> transPoints;
tps->applyTransformation(source, target);
cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
//cout << "transPos = " << endl << " " << transPoints << endl << endl;
cout << img1.size() << endl;
imshow("img1", img1); // Just to see if I have a good picture
tps->warpImage(img1, respic);
imshow("Tranformed", respic); //Always completley grey ?
waitKey(0);
return 0;
}
I just want to be able to run the algorithm so that I can check if it is the algorithm that I want.
Please help.
Thank you.
OpenCV version: 3.1.0
IDE: Visual Studio 2015
OS: Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation().
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.
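For context, here is a minimal sketch of how the call site from the question would look with that change (the assumption being that estimateTransformation expects the points laid out as a 1 x N two-channel matrix rather than N x 1):
Mat source(sourcePoints, CV_32FC1); // N x 1 matrix of Point2f
Mat target(targetPoints, CV_32FC1);
transpose(source, source); // now 1 x N (assumed to be the layout estimateTransformation wants)
transpose(target, target);
tps->estimateTransformation(source, target, matches);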
I want to use AKAZE, which is integrated into OpenCV 3.0.
For this I've tested the following code:
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <qcoreapplication.h>
#include <QDebug>
using namespace std;
using namespace cv;
const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float nn_match_ratio = 0.8f; // Nearest neighbor matching ratio
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
Mat img1 = cv::imread("img1.jpg",IMREAD_GRAYSCALE);
Mat img2 = imread("img2.jpg", IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs("H1to3p.xml", FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
Ptr<AKAZE> akaze = AKAZE::create();
//ERROR after detectAndCompute(...)
akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, noArray(), kpts2, desc2);
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
vector<KeyPoint> matched1, matched2, inliers1, inliers2;
vector<DMatch> good_matches;
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if(dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
for(unsigned i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
double dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if(dist < inlier_threshold) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("res.png", res);
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
cout << "A-KAZE Matching Results" << endl;
cout << "*******************************" << endl;
cout << "# Keypoints 1: \t" << kpts1.size() << endl;
cout << "# Keypoints 2: \t" << kpts2.size() << endl;
cout << "# Matches: \t" << matched1.size() << endl;
cout << "# Inliers: \t" << inliers1.size() << endl;
cout << "# Inliers Ratio: \t" << inlier_ratio << endl;
cout << endl;
return a.exec();
}
After the line akaze->detectAndCompute(img1, noArray(), kpts1, desc1); the following exception is thrown:
OpenCV Error: Insufficient memory (Failed to allocate 72485160 bytes) in OutOfMemoryError, file C:\opencv\sources\modules\core\src\alloc.cpp, line 52.
OpenCV Error: Assertion failed (u != 0) in create, file C:\opencv\sources\modules\core\src\matrix.cpp, line 411 terminate called after throwing an instance of 'cv::Exception'
what(): C:\opencv\sources\modules\core\src\matrix.cpp:411: error: (-215) u != 0
I've compiled OpenCV with MinGW 4.9.2 under Windows 7.
Does anybody have an answer?
Thank you.
More of a comment than an answer, but I am unable to comment.
As the error states, you seem to be running out of memory while processing the A-KAZE detection. In one of my tests (although my images were 4160x2340), processing three detection modules one after the other easily took around 7-8 GB of memory. What resolution are your images, and how much RAM do you have?
Also, if you compile this application as 32-bit, it will not be able to allocate more than 4 GB (2 GB if you are on a 32-bit OS). Are you on 32-bit or 64-bit, and if the latter, are you compiling it as a 64-bit application? One possible solution would be to resize your image so that it has fewer pixels and requires less memory:
cv::resize(sourceImage, destinationImage, cv::Size(), 0.5, 0.5, cv::INTER_AREA); // halves the resolution in each dimension
But this is a last resort, because a higher resolution means more features and better precision.
I have raw pixel data that I want to output via the opencv cvShowImage() function.
I have the following code:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cvShowImage("result",&mat);
Which outputs:
1124024336, 2, 480, 640
to the console, but fails to display the image with cvShowImage(). Instead it throws an exception with the message:
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat
I suspect the problem is in the way I create the mat object, but I am having a very hard time finding any more specific information on how I am supposed to do that.
I don't think CV_8UC3 is enough of a description for it to render the array of data. Doesn't it have to know whether the data is RGB or YUY2, etc.? How do I set that?
Try cv::imshow("result", mat) instead of mixing the old C and the new C++ API. Passing the address of a cv::Mat where a CvArr* (an IplImage* or CvMat*) is expected is most likely the source of the problem.
So, something like this:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cv::imshow("result", mat);
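One thing your question raises that the fix above doesn't address: CV_8UC3 only declares the memory layout (three 8-bit channels per pixel); imshow then assumes the channels are in BGR order. If pdata is actually RGB (an assumption on my part, since the source of the buffer isn't shown), convert before displaying; packed YUV formats such as YUY2 would likewise need an explicit cvtColor conversion first.
// needs #include <opencv2/imgproc/imgproc.hpp>
// Assumption: the raw buffer is packed RGB; OpenCV's display functions expect BGR.
cv::cvtColor(mat, mat, cv::COLOR_RGB2BGR);
cv::imshow("result", mat);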
I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example just in case I did screw something up:
/*
* matching_test.cpp
*
* Created on: Oct 17, 2010
* Author: ethan
*/
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;
//Copy (x,y) location of descriptor matches found from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
pts_train.clear();
pts_query.clear();
pts_train.reserve(matches.size());
pts_query.reserve(matches.size());
for (size_t i = 0; i < matches.size(); i++)
{
const DMatch& match = matches[i];
pts_query.push_back(kpts_query[match.queryIdx].pt);
pts_train.push_back(kpts_train[match.trainIdx].pt);
}
}
static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
const Mat& train, const Mat& query, vector<DMatch>& matches)
{
double t = (double)getTickCount();
matcher.match(query, train, matches); //Using features2d
return ((double)getTickCount() - t) / getTickFrequency();
}
static void help()
{
cout << "This program shows how to use BRIEF descriptor to match points in features2d" << endl <<
"It takes in two images, finds keypoints and matches them displaying matches and final homography warped results" << endl <<
"Usage: " << endl <<
"image1 image2 " << endl <<
"Example: " << endl <<
"box.png box_in_scene.png " << endl;
}
const char* keys =
{
"{1| |box.png |the first image}"
"{2| |box_in_scene.png|the second image}"
};
int main(int argc, const char ** argv)
{
Mat outimg;
help();
CommandLineParser parser(argc, argv, keys);
string im1_name = parser.get<string>("1");
string im2_name = parser.get<string>("2");
Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
if (im1.empty() || im2.empty())
{
cout << "could not open one of the images..." << endl;
cout << "the cmd parameters have next current value: " << endl;
parser.printParams();
return 1;
}
double t = (double)getTickCount();
FastFeatureDetector detector(15);
BriefDescriptorExtractor extractor(32); //this is really 32 x 8 matches since they are binary matches packed into bytes
vector<KeyPoint> kpts_1, kpts_2;
detector.detect(im1, kpts_1);
detector.detect(im2, kpts_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl << "fount " << kpts_2.size()
<< " keypoints in " << im2_name << endl << "took " << t << " seconds." << endl;
drawKeypoints(im1, kpts_1, outimg, 200);
imshow("Keypoints - Image1", outimg);
drawKeypoints(im2, kpts_2, outimg, 200);
imshow("Keypoints - Image2", outimg);
Mat desc_1, desc_2;
cout << "computing descriptors..." << endl;
t = (double)getTickCount();
extractor.compute(im1, kpts_1, desc_1);
extractor.compute(im2, kpts_2, desc_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "done computing descriptors... took " << t << " seconds" << endl;
//Do matching using features2d
cout << "matching with BruteForceMatcher<Hamming>" << endl;
BFMatcher matcher_popcount(NORM_HAMMING);
vector<DMatch> matches_popcount;
double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;
vector<Point2f> mpts_1, mpts_2;
cout << "matches2points ... ";
matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); //Extract a list of the (x,y) location of the matches
cout << "done" << endl;
vector<char> outlier_mask;
cout << "findHomography ... ";
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
cout << "done" << endl;
cout << "drawMatches ... ";
drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
cout << "done" << endl;
imshow("matches - popcount - outliers removed", outimg);
Mat warped;
Mat diff;
warpPerspective(im2, warped, H, im1.size());
imshow("warped", warped);
absdiff(im1,warped,diff);
imshow("diff", diff);
waitKey();
return 0;
}
I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs, otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say, maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing them to findHomography.
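A minimal sketch of both suggestions, using the names from your code (pts_1_mat and pts_2_mat are just names I made up, and I've left the outlier mask out here; whether this alone resolves the assertion is only my guess):
if (matches_popcount.size() < 4)
{
    cout << "need at least 4 matches to estimate a homography" << endl;
    return 1;
}
// explicit cv::Mat wrappers around the Point2f vectors
Mat pts_1_mat(mpts_1), pts_2_mat(mpts_2);
Mat H = findHomography(pts_2_mat, pts_1_mat, RANSAC, 1);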
It's an internal OpenCV types problem: findHomography() wants a vector<unsigned char> as the last (mask) parameter, but drawMatches() requires a vector<char> as its last one.
I think that on this page a lot of things are explained about brief_match_test.cpp and the ways to correct it.
You can do like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);
I need to calculate the area of a blob/an object in a grayscale picture (loading it as Mat, not as IplImage) using OpenCV.
I thought it would be a good idea to get the coordinates of the edges (the number of edges changes from object to object), or to get all coordinates of the contour and then use contourArea() to calculate the area of my object.
I removed all the noise and got some nice, satisfying contours by using findContours() (programming in C++).
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy,int mode, int method, Point offset=Point());
Now, as I understand it, the contours parameter already holds the coordinates of all contours of my object. Did I get that right?
If yes, is there a way to access them?
And if no, how do I get the coordinates of the contour anyway?
contours is actually defined as
vector<vector<Point> > contours;
And now I think it's clear how to access its points.
The contour area is calculated by a function nicely called contourArea():
for (unsigned int i = 0; i < contours.size(); i++)
{
std::cout << "# of contour points: " << contours[i].size() << std::endl;
for (unsigned int j=0; j<contours[i].size(); j++)
{
std::cout << "Point(x,y)=" << contours[i][j] << std::endl;
}
std::cout << " Area: " << contourArea(contours[i]) << std::endl;
}
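If only one blob is of interest, a small follow-up sketch (assuming the object you want is simply the contour with the largest area):
double maxArea = 0.0;
int maxIdx = -1;
for (unsigned int i = 0; i < contours.size(); i++)
{
    // keep track of the contour enclosing the largest area
    double area = contourArea(contours[i]);
    if (area > maxArea)
    {
        maxArea = area;
        maxIdx = static_cast<int>(i);
    }
}
std::cout << "Largest blob: contour #" << maxIdx << " with area " << maxArea << std::endl;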