How to find the Euclidean distance between keypoints of a single image in OpenCV (C++)

I want to get a distance vector d for each key point in the image. The distance vector should consist of distances from that keypoint to all other keypoints in that image.
Note: Keypoints are found using SIFT.
I'm pretty new to OpenCV. Is there a library function in C++ that can make this task easy?

If you aren't interested in the position distance but in the descriptor distance, you can use this:
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
    cv::Mat selfDistances = cv::Mat::zeros(descr.rows, descr.rows, CV_64FC1);
    for (int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
    {
        for (int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
        {
            double euclideanDistance = 0;
            for (int descrDim = 0; descrDim < descr.cols; ++descrDim)
            {
                double tmp = descr.at<float>(keyptNr, descrDim) - descr.at<float>(keyptNr2, descrDim);
                euclideanDistance += tmp * tmp;
            }
            euclideanDistance = sqrt(euclideanDistance);
            selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
        }
    }
    return selfDistances;
}
This gives you an N x N matrix (N = number of keypoints) where Mat(i,j) is the Euclidean distance between the descriptors of keypoints i and j.
With this input image I get these outputs: an image where keypoints with a descriptor distance of less than 0.05 are marked, and an image that corresponds to the matrix (white pixels are dist < 0.05).
REMARK: you can optimize many things in the computation of the matrix, since distances are symmetric!
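As a minimal sketch of that optimization (assuming the same float descriptor matrix as above), you could compute only the upper triangle with cv::norm and mirror it:

cv::Mat SelfDescriptorDistancesSym(const cv::Mat& descr)
{
    cv::Mat selfDistances = cv::Mat::zeros(descr.rows, descr.rows, CV_64FC1);
    for (int i = 0; i < descr.rows; ++i)
    {
        for (int j = i + 1; j < descr.rows; ++j)   // upper triangle only
        {
            // cv::norm with NORM_L2 gives the Euclidean distance between the two descriptor rows
            double d = cv::norm(descr.row(i), descr.row(j), cv::NORM_L2);
            selfDistances.at<double>(i, j) = d;
            selfDistances.at<double>(j, i) = d;    // mirror, since distances are symmetric
        }
    }
    return selfDistances;
}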
UPDATE:
Here is another way to do it:
From your chat I know that you would need about 13 GB of memory to hold that distance information for the 41381 keypoints you tried. If you instead want only the N best matches, try this code:
// choose double here if you are worried about precision!
#define intermediatePrecision float
//#define intermediatePrecision double
//
void NBestMatches(cv::Mat descriptors1, cv::Mat descriptors2, unsigned int n, std::vector<std::vector<float> > & distances, std::vector<std::vector<int> > & indices)
{
    // TODO: check whether descriptor dimensions and types are the same for both!

    // clear the vectors and get enough space to hold the n best matches
    distances.clear();
    distances.resize(descriptors1.rows);
    indices.clear();
    indices.resize(descriptors1.rows);

    for (int i = 0; i < descriptors1.rows; ++i)
    {
        // references to current elements:
        std::vector<float> & cDistances = distances.at(i);
        std::vector<int>   & cIndices   = indices.at(i);
        // initialize:
        cDistances.resize(n, FLT_MAX);
        cIndices.resize(n, -1); // -1 = "no match found"

        // now find the n best matches for descriptor i:
        for (int j = 0; j < descriptors2.rows; ++j)
        {
            intermediatePrecision euclideanDistance = 0;
            for (int dim = 0; dim < descriptors1.cols; ++dim)
            {
                intermediatePrecision tmp = descriptors1.at<float>(i, dim) - descriptors2.at<float>(j, dim);
                euclideanDistance += tmp * tmp;
            }
            euclideanDistance = sqrt(euclideanDistance);

            float tmpCurrentDist = euclideanDistance;
            int tmpCurrentIndex = j;

            // update the current best n matches:
            for (unsigned int k = 0; k < n; ++k)
            {
                if (tmpCurrentDist < cDistances.at(k))
                {
                    int tmpI2 = cIndices.at(k);
                    float tmpD2 = cDistances.at(k);

                    // update the current k-th best match
                    cDistances.at(k) = tmpCurrentDist;
                    cIndices.at(k) = tmpCurrentIndex;

                    // the previous k-th best must now compete for the (k+1)-th slot
                    // TODO: a simple memcpy would probably be faster.
                    tmpCurrentDist = tmpD2;
                    tmpCurrentIndex = tmpI2;
                }
            }
        }
    }
}
It computes the N best matches for each keypoint of the first descriptor set against the second descriptor set. So if you want to do that within the same image, you'll pass descriptors1 = descriptors2 in your call, as shown below. Remember: the function doesn't know that both descriptor sets are identical, so one of the best matches will always be the keypoint itself with distance 0! Keep that in mind when using the results.
Here's sample code to generate an image similar to the one above:
int main()
{
    cv::Mat input = cv::imread("../inputData/MultiLena.png");
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);

    cv::SiftFeatureDetector detector( 7500 );
    cv::SiftDescriptorExtractor describer;

    std::vector<cv::KeyPoint> keypoints;
    detector.detect( gray, keypoints );

    // draw keypoints
    cv::drawKeypoints(input, keypoints, input);

    cv::Mat descriptors;
    describer.compute(gray, keypoints, descriptors);

    int n = 4;
    std::vector<std::vector<float> > dists;
    std::vector<std::vector<int> > indices;

    // compute the N best matches between the descriptors and themselves.
    // REMEMBER: ONE best match will always be the keypoint itself in this setting!
    NBestMatches(descriptors, descriptors, n, dists, indices);

    for (unsigned int i = 0; i < dists.size(); ++i)
    {
        for (unsigned int j = 0; j < dists.at(i).size(); ++j)
        {
            if (dists.at(i).at(j) < 0.05)
                cv::line(input, keypoints[i].pt, keypoints[indices.at(i).at(j)].pt, cv::Scalar(255, 255, 255));
        }
    }

    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}

Create a 2D vector (its size would be N x N) -->
std::vector< std::vector< float > > item;
Create 2 for loops to go over the number of keypoints (N) you have.
Calculate the distances as suggested by a-Jays:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
Add this to the vector using push_back for each keypoint --> N times.
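Putting those steps together, a minimal sketch (assuming keypoints is your std::vector<cv::KeyPoint> from SIFT) might look like:

std::vector<std::vector<float> > item(keypoints.size());
for (size_t i = 0; i < keypoints.size(); ++i)
{
    for (size_t j = 0; j < keypoints.size(); ++j)
    {
        cv::Point2f diff = keypoints[i].pt - keypoints[j].pt;
        float dist = std::sqrt(diff.x * diff.x + diff.y * diff.y);
        item[i].push_back(dist);   // row i holds distances from keypoint i to all others
    }
}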

The keypoint class has a member called pt which in turn has x and y [the (x,y) location of the point] as its own members.
Given two keypoints kp1 and kp2, it's then easy to calculate the euclidean distance as:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
In your case, it is going to be a double loop iterating over all the keypoints.

Related

Affine transform in C++

I am currently working on a school project on image processing in Visual Studio 2013, using OpenCV 3.1. My goal (for now) is to transform an image using an affine transform so that the trapezoidal board is transformed into a rectangle.
To do that I have subtracted certain channels and thresholded the image, so that now I have a binary image with white blocks in the corners of the board.
Now I need to pick 4 white points that are closest to each corner and (using affine transform) set them as corners of the transformed image.
And since this is my first time using OpenCV, I am stuck.
Here's my code:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <stdlib.h>
#include <stdio.h>
#include <vector>

int main()
{
    double dist;

    cv::Mat image;
    image = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imagebin;
    imagebin = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imageerode;
    //cv::imshow("Test", image);

    cv::Mat src = cv::imread("C:\\Users\\...\\ideal.png");
    std::vector<cv::Mat> img_rgb;
    cv::split(src, img_rgb);
    //cv::imshow("ideal.png", img_rgb[2] - img_rgb[1]);

    cv::threshold(img_rgb[2] - 0.5 * img_rgb[1], imagebin, 20, 255, CV_THRESH_BINARY);
    cv::erode(imagebin, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);
    cv::erode(imageerode, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);

    // cv::Point2f array[4];
    // std::vector<cv::Point2f> array;

    for (int i = 0; i < imageerode.cols; i++)
    {
        for (int j = 0; j < imageerode.rows; j++)
        {
            if (imageerode.at<uchar>(i, j) > 0)
            {
                dist = std::min(dist, i + j);
            }
        }
    }

    //cv::imshow("Test binary", imagebin);
    cv::namedWindow("Test", CV_WINDOW_NORMAL);
    cv::imshow("Test", imageerode);
    cv::waitKey(0);

    std::cout << "Hello world!";
    return 0;
}
As you can see I don't know how to loop over each white pixel using image.at and save the distance to each corner.
I would appreciate some help.
Also: I don't just want the solution. I really want to learn how to do this, but I'm currently a bit stuck.
Thank you
EDIT:
I think I'm done with finding the coordinates of the 4 points, but I can't really wrap my head around the warpAffine syntax.
Code:
for (int i = 0; i < imageerode.cols; i++)
{
    for (int j = 0; j < imageerode.rows; j++)
    {
        if (imageerode.at<uchar>(i, j) > 0)
        {
            if (i + j < distances[0])
            {
                distances[0] = i + j;
                coordinates[0] = i;
                coordinates[1] = j;
            }
            if (i + imageerode.cols - j < distances[1])
            {
                distances[1] = i + imageerode.cols - j;
                coordinates[2] = i;
                coordinates[3] = j;
            }
            if (imageerode.rows - i + j < distances[2])
            {
                distances[2] = imageerode.rows - i + j;
                coordinates[4] = i;
                coordinates[5] = j;
            }
            if (imageerode.rows - i + imageerode.cols - j < distances[3])
            {
                distances[3] = imageerode.rows - i + imageerode.cols - j;
                coordinates[6] = i;
                coordinates[7] = j;
            }
        }
    }
}
Here I initialize all of the distances values to imageerode.cols + imageerode.rows, since that's the maximum value they can take.
Also: note that I'm using taxicab geometry. I was told it's faster and the results are pretty much the same.
If anyone could help me with warpAffine it would be great. I don't understand where I should put the coordinates I have found.
Thank you
I am not sure what your "trapezoidal board" looks like, but if it shows perspective distortion, as when you capture a rectangle with a camera, then an affine transform is not enough: use a perspective transform. I think the tutorial "Features2D + Homography to find a known object" is very close to what you want to do.
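For reference, a minimal sketch of the perspective route (corners, outW and outH are placeholders for your four detected corner points and the desired output size; the points must be ordered consistently, e.g. top-left, top-right, bottom-right, bottom-left):

int outW = 640, outH = 480;                  // pick the size you want for the rectified board
std::vector<cv::Point2f> corners;            // fill with your 4 detected corner points, ordered as above
std::vector<cv::Point2f> target;
target.push_back(cv::Point2f(0.f, 0.f));
target.push_back(cv::Point2f(outW - 1.f, 0.f));
target.push_back(cv::Point2f(outW - 1.f, outH - 1.f));
target.push_back(cv::Point2f(0.f, outH - 1.f));

cv::Mat H = cv::getPerspectiveTransform(corners, target);   // 3x3 perspective matrix
cv::Mat rectified;
cv::warpPerspective(image, rectified, H, cv::Size(outW, outH));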

Merging Overlapping Rectangle in OpenCV

I'm using OpenCV 3.0. I've made a car detection program and I keep running into the problem of overlapping bounding boxes:
Is there a way to merge overlapping bounding boxes as described on the images below?
I've used rectangle(frame, Point(x1, y1), Point(x2, y2), Scalar(255,255,255)); to draw those bounding boxes. I've searched for answers in similar threads but didn't find them helpful. I'd like to form a single outer bounding rectangle after merging those bounding boxes.
Problem
It seems as if you are displaying every contour you get. You don't have to do that. Follow the algorithm and code given below.
Algorithm
In this case, iterate through each contour that you detect and keep only the bounding rectangle of the biggest one; you don't have to display every contour you detect.
Here is code that you can use.
Code
for (int i = 0; i < contours.size(); i++)   // iterate through each contour
{
    double a = contourArea(contours[i], false);       // find the area of the contour
    if (a > largest_area)
    {
        largest_area = a;
        largest_contour_index = i;                    // store the index of the largest contour
        bounding_rect = boundingRect(contours[i]);    // bounding rectangle for the biggest contour
    }
}
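After the loop, bounding_rect holds the rectangle of the biggest contour, so you draw just that single box instead of one per contour. A minimal sketch, assuming the variables are initialized before the loop:

double largest_area = 0;
int largest_contour_index = -1;
Rect bounding_rect;

// ... run the loop above ...

if (largest_contour_index >= 0)
    rectangle(frame, bounding_rect.tl(), bounding_rect.br(), Scalar(255, 255, 255));   // one box around the biggest contour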
Regards
As I've mentioned in a similar post here, this is a problem best solved by non-maximum suppression (NMS).
Although your code is in C++, have a look at this pyimagesearch article (Python) to get an idea of how this works.
I've translated this code from Python to C++:
struct detection_box
{
    cv::Rect box;               /*!< Bounding box */
    double svm_val;             /*!< SVM response at that detection */
    cv::Size res_of_detection;  /*!< Image resolution at which the detection occurred */
};

/*!
    \brief Applies the Non Maximum Suppression algorithm on the detections to find the detections that do not overlap
    The svm response is used to sort the detections. Translated from http://www.pyimagesearch.com/2014/11/17/non-maximum-suppression-object-detection-python/
    \param boxes list of detections that are the input for the NMS algorithm
    \param overlap_threshold the area threshold for the overlap between detection boxes; boxes whose overlapping area is above the threshold are discarded
    \returns list of final detections that are no longer overlapping
*/
std::vector<detection_box> nonMaximumSuppression(std::vector<detection_box> boxes, float overlap_threshold)
{
    std::vector<detection_box> res;
    std::vector<float> areas;

    // if there are no boxes, return empty
    if (boxes.size() == 0)
        return res;

    for (int i = 0; i < boxes.size(); i++)
        areas.push_back(boxes[i].box.area());

    std::vector<int> idxs = argsort(boxes);   // indices sorted by ascending svm_val

    std::vector<int> pick;    // indices of final detection boxes

    while (idxs.size() > 0)   // while indices are still left to analyze
    {
        int last = idxs.size() - 1;   // last element in the list, i.e. the detection with the highest SVM response
        int i = idxs[last];
        pick.push_back(i);            // add the highest SVM response to the list of final detections

        std::vector<int> suppress;
        suppress.push_back(last);

        for (int pos = 0; pos < last; pos++)   // for every other element in the list
        {
            int j = idxs[pos];

            // find the overlapping area between boxes
            int xx1 = std::max(boxes[i].box.x, boxes[j].box.x);             // max top-left corners
            int yy1 = std::max(boxes[i].box.y, boxes[j].box.y);             // max top-left corners
            int xx2 = std::min(boxes[i].box.br().x, boxes[j].box.br().x);   // min bottom-right corners
            int yy2 = std::min(boxes[i].box.br().y, boxes[j].box.br().y);   // min bottom-right corners
            int w = std::max(0, xx2 - xx1 + 1);   // width
            int h = std::max(0, yy2 - yy1 + 1);   // height

            float overlap = float(w * h) / areas[j];
            if (overlap > overlap_threshold)      // if the boxes overlap too much, add it to the discard pile
                suppress.push_back(pos);
        }

        for (int p = 0; p < suppress.size(); p++)   // for graceful deletion
        {
            idxs[suppress[p]] = -1;
        }
        for (int p = 0; p < idxs.size();)
        {
            if (idxs[p] == -1)
                idxs.erase(idxs.begin() + p);
            else
                p++;
        }
    }

    for (int i = 0; i < pick.size(); i++)   // extract final detections from the input array
        res.push_back(boxes[pick[i]]);
    return res;
}
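Note: argsort just has to return the indices of boxes sorted by ascending svm_val, so that the last index is the strongest detection. A minimal sketch, assuming C++11:

#include <algorithm>
#include <numeric>

std::vector<int> argsort(const std::vector<detection_box>& boxes)
{
    std::vector<int> idxs(boxes.size());
    std::iota(idxs.begin(), idxs.end(), 0);                        // 0, 1, 2, ...
    std::sort(idxs.begin(), idxs.end(), [&boxes](int a, int b) {
        return boxes[a].svm_val < boxes[b].svm_val;                // ascending, strongest last
    });
    return idxs;
}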

How to find the pixel value that corresponds to a specific number of pixels?

Assume that I have a grayscale image in OpenCV.
I want to find a value so that 5% of pixels in the images have a value greater than it.
I can iterate over the pixels, count how many pixels have each value, and from that find the value that 5% of pixels are above, but I am looking for a faster way to do this. Is there such a technique in OpenCV?
I think a histogram would help, but I am not sure how I can use it.
You need to:
Compute the cumulative histogram of your pixel values
Find the bin whose value is greater than 95% (100 - 5) of the total number of pixels.
Given a uniformly random generated image, you get a histogram like:
and a cumulative histogram like this (you need to find the first bin whose value is over the blue line):
Then you need to find the proper bin. You can use the std::lower_bound function to find the correct value, and std::distance to find the corresponding bin number (aka the value you want to find). (Please note that with lower_bound you'll find the first element whose value is greater than or equal to the given value. You can use upper_bound to find the first element whose value is strictly greater than the given value.)
In this case the result is 242, which makes sense for a uniform distribution from 0 to 255, since 255*0.95 = 242.25.
Check the full code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

using namespace std;
using namespace cv;

void drawHist(const vector<int>& data, Mat3b& dst, int binSize = 3, int height = 0, int ref_value = -1)
{
    int max_value = *max_element(data.begin(), data.end());
    int rows = 0;
    int cols = 0;
    float scale = 1;
    if (height == 0) {
        rows = max_value + 10;
    }
    else {
        rows = height;
        scale = float(height) / (max_value + 10);
    }
    cols = data.size() * binSize;
    dst = Mat3b(rows, cols, Vec3b(0, 0, 0));

    for (int i = 0; i < data.size(); ++i)
    {
        int h = rows - int(scale * data[i]);
        rectangle(dst, Point(i * binSize, h), Point((i + 1) * binSize - 1, rows), (i % 2) ? Scalar(0, 100, 255) : Scalar(0, 0, 255), CV_FILLED);
    }

    if (ref_value >= 0)
    {
        int h = rows - int(scale * ref_value);
        line(dst, Point(0, h), Point(cols, h), Scalar(255, 0, 0));
    }
}

int main()
{
    Mat1b src(100, 100);
    randu(src, Scalar(0), Scalar(255));

    int percent = 5;   // percent % of pixel values are above val
    int val;           // I need to find this value

    int n = src.rows * src.cols;                     // total number of pixels
    int th = cvRound((100 - percent) / 100.f * n);   // number of pixels below val

    // Histogram
    vector<int> hist(256, 0);
    for (int r = 0; r < src.rows; ++r) {
        for (int c = 0; c < src.cols; ++c) {
            hist[src(r, c)]++;
        }
    }

    // Cumulative histogram
    vector<int> cum = hist;
    for (int i = 1; i < hist.size(); ++i) {
        cum[i] = cum[i - 1] + hist[i];
    }

    // lower_bound returns an iterator pointing to the first element
    // that is not less than (i.e. greater or equal to) th.
    val = distance(cum.begin(), lower_bound(cum.begin(), cum.end(), th));

    // Plot histograms
    Mat3b plotHist, plotCum;
    drawHist(hist, plotHist, 3, 300);
    drawHist(cum, plotCum, 3, 300, *lower_bound(cum.begin(), cum.end(), th));

    cout << "Value: " << val;

    imshow("Hist", plotHist);
    imshow("Cum", plotCum);
    waitKey();

    return 0;
}
Note
The histogram drawing function is an upgrade from a former version I posted here
You can use calcHist to compute the histograms, but I personally find it easier to use the aforementioned method for 1D histograms.
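For completeness, the calcHist equivalent of the manual counting loop would look roughly like this (assuming src is a single-channel 8-bit image, as in the code above):

int channels[] = { 0 };
int histSize[] = { 256 };
float range[] = { 0, 256 };
const float* ranges[] = { range };
cv::Mat histMat;
cv::calcHist(&src, 1, channels, cv::Mat(), histMat, 1, histSize, ranges);
// histMat is 256x1, CV_32F: histMat.at<float>(i) is the number of pixels with value i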
1) Determine the height and the width of the image, h and w.
2) Determine what 5% of the total number of pixels is (X):
X = int(h * w * 0.05)
3) Start at the brightest bin in the histogram. Set a running total T = 0.
4) Add the number of pixels in this bin to your total T. If T is greater than X, you are finished, and the value you want is the lower limit of the range of the current histogram bin.
5) Move to the next darker bin in your histogram. Go to 4.
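A minimal sketch of that walk, assuming hist is the 256-bin vector<int> histogram from the answer above:

int X = int(h * w * 0.05);   // 5% of all pixels
int T = 0;
int val = 255;
for (int bin = 255; bin >= 0; --bin)   // start at the brightest bin
{
    T += hist[bin];
    if (T > X) { val = bin; break; }   // lower limit of the current bin
}
// val is now the value such that roughly 5% of the pixels are above it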

3D reconstruction from multiple images with one camera

So, I've been trying to get a 3D point cloud from a sequence of images of an object. I have successfully obtained a decent point cloud from two images: I matched features on both images, found the fundamental matrix and, from that, extracted P' (the camera matrix for the second view). For the first view, I set P = K(I | 0), where K is the matrix of the camera intrinsics.

But I haven't been able to extend this approach to several images. My idea was to slide a two-image window through the sequence (e.g. match image1 with image2, find 3D points, then match image2 with image3 and find more 3D points, and so on). For the following image pairs, P would be made of a cumulative rotation matrix and a cumulative translation vector (this would allow me to keep bringing the points into the first camera's coordinate system). But this is not working at all. I'm using OpenCV. What I want to know is whether this approach makes sense at all.
In the code, P_prev is P and Pl is P'. This is just the part that I think is relevant.
Mat combinedPointCloud;
Mat P_prev;
P_prev = (Mat_<double>(3,4) << cameraMatrix.at<double>(0,0), cameraMatrix.at<double>(0,1), cameraMatrix.at<double>(0,2), 0,
                               cameraMatrix.at<double>(1,0), cameraMatrix.at<double>(1,1), cameraMatrix.at<double>(1,2), 0,
                               cameraMatrix.at<double>(2,0), cameraMatrix.at<double>(2,1), cameraMatrix.at<double>(2,2), 0);

for(int i = 1; i < images.size(); i++) {
    Mat points3D;
    image1 = images[i-1];
    image2 = images[i];

    matchTwoImages(image1, image2, imgpts1, imgpts2);
    P = findSecondProjectionMatrix(cameraMatrix, imgpts1, imgpts2);

    P.col(0).copyTo(R.col(0));
    P.col(1).copyTo(R.col(1));
    P.col(2).copyTo(R.col(2));
    P.col(3).copyTo(t.col(0));

    if(i == 1) {
        Pl = P;
        triangulatePoints(P_prev, Pl, imgpts1, imgpts2, points3D); // points3D is 4xN

        // Transforming to euclidean by hand, because I couldn't make
        // opencv's convertFromHomogeneous work
        aux.create(3, points3D.cols, CV_64F); // aux is 3xN
        for(int i = 0; i < points3D.cols; i++) {
            aux.at<float>(0, i) = points3D.at<float>(0, i)/points3D.at<float>(3, i);
            aux.at<float>(1, i) = points3D.at<float>(1, i)/points3D.at<float>(3, i);
            aux.at<float>(2, i) = points3D.at<float>(2, i)/points3D.at<float>(3, i);
        }
        points3D.create(3, points3D.cols, CV_64F);
        aux.copyTo(points3D);
    }
    else {
        R_aux = R_prev * R;
        t_aux = t_prev + t;

        R_aux.col(0).copyTo(Pl.col(0));
        R_aux.col(1).copyTo(Pl.col(1));
        R_aux.col(2).copyTo(Pl.col(2));
        t_aux.col(0).copyTo(Pl.col(3));

        triangulatePoints(P_prev, Pl, imgpts1, imgpts2, points3D);

        // Transforming to euclidean by hand, because I couldn't make
        // opencv's convertFromHomogeneous work
        aux.create(3, points3D.cols, CV_64F); // aux is 3xN
        for(int i = 0; i < points3D.cols; i++) {
            aux.at<float>(0, i) = points3D.at<float>(0, i)/points3D.at<float>(3, i);
            aux.at<float>(1, i) = points3D.at<float>(1, i)/points3D.at<float>(3, i);
            aux.at<float>(2, i) = points3D.at<float>(2, i)/points3D.at<float>(3, i);
        }
        points3D.create(3, points3D.cols, CV_64F);
        aux.copyTo(points3D);
    }

    Pl.col(0).copyTo(R_prev.col(0));
    Pl.col(1).copyTo(R_prev.col(1));
    Pl.col(2).copyTo(R_prev.col(2));
    Pl.col(3).copyTo(t_prev.col(0));
    P_prev = Pl;

    if(i == 1) {
        points3D.copyTo(combinedPointCloud);
    } else {
        hconcat(combinedPointCloud, points3D, combinedPointCloud);
    }
}
show3DCloud(combinedPointCloud);

OpenCV templates in 2D point data set

I was wondering what the best approach would be for detecting 'figures' in an array of 2D points.
In this example I have two 'templates' (figure 1 and figure 2).
Each of these templates exists only as a vector of points with x,y coordinates.
Let's say we have a third vector of points with x,y coordinates.
What would be the best way to find and isolate the points in the third vector that match one of the first two arrays (including scaling and rotation)?
I have been trying nearest neighbours (FlannBasedMatcher) and even an SVM implementation, but it doesn't seem to get me any result, and template matching doesn't seem to be the way to go either, I think. I am not working on images but only on 2D points in memory...
Especially because the input vector always has more points than the original data set to be compared with.
All it needs to do is find points in array that match a template.
I am not a 'specialist' in machine learning or OpenCV. I guess I am overlooking something from the beginning...
Thank you very much for your help/suggestions.
just for fun I tried this:
Choose two points of the point dataset and compute the transformation mapping the first two pattern points to those points.
Test whether all transformed pattern points can be found in the data set.
This approach is very naive and has a complexity of O(m*n²) with n data points and a single pattern of size m (points). This complexity might be increased by some nearest neighbor search methods. So you have to consider whether it's efficient enough for your application.
Some improvements could include a heuristic to avoid trying all n² combinations of points, but for that you need background information such as the maximal pattern scaling or something like that.
For evaluation I first created a pattern:
Then I created random points and added the pattern somewhere within (scaled, rotated and translated):
After some computation this method recognizes the pattern. The red line shows the chosen points for the transformation computation.
Here's the code:
// draw a set of points on a given destination image
void drawPoints(cv::Mat & image, std::vector<cv::Point2f> points, cv::Scalar color = cv::Scalar(255,255,255), float size=10)
{
    for(unsigned int i=0; i<points.size(); ++i)
    {
        cv::circle(image, points[i], 0, color, size);
    }
}

// assumes a 2x3 (affine) transformation (CV_32FC1). does not change the input points
std::vector<cv::Point2f> applyTransformation(std::vector<cv::Point2f> points, cv::Mat transformation)
{
    for(unsigned int i=0; i<points.size(); ++i)
    {
        const cv::Point2f tmp = points[i];
        points[i].x = tmp.x * transformation.at<float>(0,0) + tmp.y * transformation.at<float>(0,1) + transformation.at<float>(0,2);
        points[i].y = tmp.x * transformation.at<float>(1,0) + tmp.y * transformation.at<float>(1,1) + transformation.at<float>(1,2);
    }
    return points;
}

const float PI = 3.14159265359;

// a similarity transformation uses the same scaling along both axes, a rotation and a translation part
cv::Mat composeSimilarityTransformation(float s, float r, float tx, float ty)
{
    cv::Mat transformation = cv::Mat::zeros(2,3,CV_32FC1);

    // compute rotation matrix and scale entries
    float rRad = PI*r/180.0f;
    transformation.at<float>(0,0) = s*cosf(rRad);
    transformation.at<float>(0,1) = s*sinf(rRad);
    transformation.at<float>(1,0) = -s*sinf(rRad);
    transformation.at<float>(1,1) = s*cosf(rRad);

    // translation
    transformation.at<float>(0,2) = tx;
    transformation.at<float>(1,2) = ty;

    return transformation;
}
// create random points
std::vector<cv::Point2f> createPointSet(cv::Size2i imageSize, std::vector<cv::Point2f> pointPattern, unsigned int nRandomDots = 50)
{
    // subtract the center of gravity to allow more intuitive rotation
    cv::Point2f centerOfGravity(0,0);
    for(unsigned int i=0; i<pointPattern.size(); ++i)
    {
        centerOfGravity.x += pointPattern[i].x;
        centerOfGravity.y += pointPattern[i].y;
    }
    centerOfGravity.x /= (float)pointPattern.size();
    centerOfGravity.y /= (float)pointPattern.size();
    pointPattern = applyTransformation(pointPattern, composeSimilarityTransformation(1,0,-centerOfGravity.x, -centerOfGravity.y));

    // create random points
    //unsigned int nRandomDots = 0;
    std::vector<cv::Point2f> pointset;
    srand (time(NULL));
    for(unsigned int i=0; i<nRandomDots; ++i)
    {
        pointset.push_back( cv::Point2f(rand()%imageSize.width, rand()%imageSize.height) );
    }

    cv::Mat image = cv::Mat::ones(imageSize,CV_8UC3);
    image = cv::Scalar(255,255,255);
    drawPoints(image, pointset, cv::Scalar(0,0,0));
    cv::namedWindow("pointset"); cv::imshow("pointset", image);

    // add the point pattern at a random location
    float scaleFactor = rand()%30 + 10.0f;
    float translationX = rand()%(imageSize.width/2) + imageSize.width/4;
    float translationY = rand()%(imageSize.height/2) + imageSize.height/4;
    float rotationAngle = rand()%360;
    std::cout << "s: " << scaleFactor << " r: " << rotationAngle << " t: " << translationX << "/" << translationY << std::endl;

    std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(scaleFactor,rotationAngle,translationX,translationY));
    //std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,trans);
    drawPoints(image, transformedPattern, cv::Scalar(0,0,0));
    drawPoints(image, transformedPattern, cv::Scalar(0,255,0),3);
    cv::imwrite("dataPoints.png", image);
    cv::namedWindow("pointset + pattern"); cv::imshow("pointset + pattern", image);

    for(unsigned int i=0; i<transformedPattern.size(); ++i)
        pointset.push_back(transformedPattern[i]);

    return pointset;
}
void programDetectPointPattern()
{
    cv::Size2i imageSize(640,480);

    // create a point pattern, this can be in any scale and any relative location
    std::vector<cv::Point2f> pointPattern;
    pointPattern.push_back(cv::Point2f(0,0));
    pointPattern.push_back(cv::Point2f(2,0));
    pointPattern.push_back(cv::Point2f(4,0));
    pointPattern.push_back(cv::Point2f(1,2));
    pointPattern.push_back(cv::Point2f(3,2));
    pointPattern.push_back(cv::Point2f(2,4));

    // transform the pattern so it can be drawn
    cv::Mat trans = cv::Mat::ones(2,3,CV_32FC1);
    trans.at<float>(0,0) = 20.0f; // scale x
    trans.at<float>(1,1) = 20.0f; // scale y
    trans.at<float>(0,2) = 20.0f; // translation x
    trans.at<float>(1,2) = 20.0f; // translation y

    // draw the pattern
    cv::Mat drawnPattern = cv::Mat::ones(cv::Size2i(128,128),CV_8U);
    drawnPattern *= 255;
    drawPoints(drawnPattern,applyTransformation(pointPattern, trans), cv::Scalar(0),5);

    // display and save the pattern
    cv::imwrite("patternToDetect.png", drawnPattern);
    cv::namedWindow("pattern"); cv::imshow("pattern", drawnPattern);

    // draw the points and the included pattern
    std::vector<cv::Point2f> pointset = createPointSet(imageSize, pointPattern);
    cv::Mat image = cv::Mat(imageSize, CV_8UC3);
    image = cv::Scalar(255,255,255);
    drawPoints(image,pointset, cv::Scalar(0,0,0));

    // normally we would have to use some nearest neighbor distance computation, but to make it easier here,
    // we create a small area around every point, which allows us to test for point existence in a small neighborhood very efficiently (for small images)
    // in the real application this "inlier" check should be performed by a k-nearest neighbor search with a distance threshold,
    // efficiently evaluated by a kd-tree
    cv::Mat pointImage = cv::Mat::zeros(imageSize,CV_8U);
    float maxDist = 3.0f; // how exactly must the pattern be recognized? can there be some "noise" in the positions of the data points?
    drawPoints(pointImage, pointset, cv::Scalar(255),maxDist);
    cv::namedWindow("pointImage"); cv::imshow("pointImage", pointImage);

    // choose two points from the pattern (can be arbitrary, so just take the first two)
    cv::Point2f referencePoint1 = pointPattern[0];
    cv::Point2f referencePoint2 = pointPattern[1];
    cv::Point2f diff1; // difference vector
    diff1.x = referencePoint2.x - referencePoint1.x;
    diff1.y = referencePoint2.y - referencePoint1.y;
    float referenceLength = sqrt(diff1.x*diff1.x + diff1.y*diff1.y);
    diff1.x = diff1.x/referenceLength; diff1.y = diff1.y/referenceLength;

    std::cout << "reference: " << std::endl;
    std::cout << referencePoint1 << std::endl;

    // now try to find the pattern
    for(unsigned int j=0; j<pointset.size(); ++j)
    {
        cv::Point2f targetPoint1 = pointset[j];

        for(unsigned int i=0; i<pointset.size(); ++i)
        {
            cv::Point2f targetPoint2 = pointset[i];
            cv::Point2f diff2;
            diff2.x = targetPoint2.x - targetPoint1.x;
            diff2.y = targetPoint2.y - targetPoint1.y;
            float targetLength = sqrt(diff2.x*diff2.x + diff2.y*diff2.y);
            diff2.x = diff2.x/targetLength; diff2.y = diff2.y/targetLength;

            // with a nearest-neighborhood search this line will be similar, or the maximal neighbor distance must be relative to targetLength!
            if(targetLength < maxDist) continue;

            // scale:
            float s = targetLength/referenceLength;
            // rotation:
            float r = -180.0f/PI*(atan2(diff2.y,diff2.x) + atan2(diff1.y,diff1.x));

            // scale and rotate the reference point to compute the translation needed
            std::vector<cv::Point2f> origin;
            origin.push_back(referencePoint1);
            origin = applyTransformation(origin, composeSimilarityTransformation(s,r,0,0));

            // compute the translation which maps the two reference points onto the two target points
            float tx = targetPoint1.x - origin[0].x;
            float ty = targetPoint1.y - origin[0].y;

            std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(s,r,tx,ty));

            // now test whether all transformed pattern points can be found in the dataset
            bool found = true;
            for(unsigned int i=0; i<transformedPattern.size(); ++i)
            {
                cv::Point2f curr = transformedPattern[i];
                // here we check whether there is a point drawn in the image. If you have no image you will have to perform a nearest neighbor search.
                // this can be done with a balanced kd-tree in O(log n) time
                // building such a balanced kd-tree has to be done once for the whole dataset and needs O(n*(log n)) afair
                if((curr.x >= 0)&&(curr.x <= pointImage.cols-1)&&(curr.y >= 0)&&(curr.y <= pointImage.rows-1))
                {
                    if(pointImage.at<unsigned char>(curr.y, curr.x) == 0) found = false;
                    // if working with a kd-tree: if nearest neighbor distance > maxDist => found = false;
                }
                else found = false;
            }

            if(found)
            {
                std::cout << composeSimilarityTransformation(s,r,tx,ty) << std::endl;

                cv::Mat currentIteration;
                image.copyTo(currentIteration);
                cv::circle(currentIteration,targetPoint1,5, cv::Scalar(255,0,0),1);
                cv::circle(currentIteration,targetPoint2,5, cv::Scalar(255,0,255),1);
                cv::line(currentIteration,targetPoint1,targetPoint2,cv::Scalar(0,0,255));
                drawPoints(currentIteration, transformedPattern, cv::Scalar(0,0,255),4);
                cv::imwrite("detectedPattern.png", currentIteration);
                cv::namedWindow("iteration"); cv::imshow("iteration", currentIteration); cv::waitKey(-1);
            }
        }
    }
}