I want to use an array as the input to the k-means algorithm. The array holds the x and y displacements produced by Lucas-Kanade optical flow estimation. The code is the following:
EDITED :
int number_of_features=150;
// Lucas Kanade optical flow
cvCalcOpticalFlowPyrLK(frame1_1C,frame2_1C,pyramid1,pyramid2,frame1_features,frame2_features,number_of_features,optical_flow_window, 5,optical_flow_found_feature, optical_flow_feature_error,optical_flow_termination_criteria, 0 );
float Dx[150],Dy[150]; // displacement matrices
float Dis[150][2]; // total displacement matrix
int K=2; // clusters selected
Mat bestLabels, centers;
for(int i = 0; i < number_of_features; i++)
{
    CvPoint p, q;
    p.x = (int) frame1_features[i].x;
    p.y = (int) frame1_features[i].y;
    q.x = (int) frame2_features[i].x;
    q.y = (int) frame2_features[i].y;
    // displacements
    Dx[i] = p.x - q.x;
    Dy[i] = p.y - q.y;
    Dis[i][0] = Dx[i];
    Dis[i][1] = Dy[i];
}
// k means algorithm
// Creating Mat for Input data
cv::Mat flt_Dis(150, 2, CV_32F, Dis);
cv::kmeans(flt_Dis, K, bestLabels,TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0),3, KMEANS_PP_CENTERS, centers);
I have solved my previous problem; now I want to show the clustered image. I guess bestLabels stores the cluster index for each element, e.g. whether it was assigned to the 0th or 1st cluster. Am I right? How can I show the clustered image?
K-means can be implemented to work on integers; you just have to do it yourself. The resulting cluster centers will not be integers, though.
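Yes, you are right: bestLabels is an N x 1 integer matrix that stores, for each row of flt_Dis, the index of the cluster it was assigned to (0 or 1 here). One way to show the clustered result is to draw every tracked feature in the colour of its cluster. A minimal sketch, assuming the first frame is also available as a cv::Mat named frame1_bgr (not a name from your code; convert the IplImage with cv::cvarrToMat if needed):
cv::Mat vis = frame1_bgr.clone();
cv::Scalar palette[2] = { cv::Scalar(0, 0, 255), cv::Scalar(0, 255, 0) }; // one colour per cluster (K = 2)
for (int i = 0; i < number_of_features; i++)
{
    int label = bestLabels.at<int>(i, 0); // cluster index of the i-th displacement vector
    cv::Point p((int)frame1_features[i].x, (int)frame1_features[i].y);
    cv::circle(vis, p, 3, palette[label], -1);
}
cv::imshow("clustered features", vis);
cv::waitKey(0);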
I am trying to manually implement a fundamental matrix estimation function for corresponding points (based on similarities between two images). The corresponding points are obtained after performing ORB feature detection, extraction, matching, and a ratio test.
There is a lot of literature available from good sources on this topic. However, none of them appears to give good pseudo-code for the process. I went through various chapters of the Multiple View Geometry book and many online sources.
This source appears to give a formula for doing the normalization, and I followed the formula mentioned on page 13 of this source.
Based on this formula, I created the following algorithm. I am not sure if I am doing it the right way, though!
Normalization.hpp
class Normalization {
    typedef std::vector<cv::Point2f> intercepts;
    typedef std::vector<cv::Mat> matVec;
public:
    Normalization() {}
    ~Normalization() {}
    void makeAverage(intercepts pointsVec);
    std::tuple<cv::Mat, cv::Mat> normalize(intercepts pointsVec);
    matVec getNormalizedPoints(intercepts pointsVec);
private:
    double xAvg = 0;
    double yAvg = 0;
    double count = 0;
    matVec normalizedPts;
    double distance = 0;
    matVec matVecData;
    cv::Mat forwardTransform;
    cv::Mat reverseTransform;
};
Normalization.cpp
#include "Normalization.hpp"
typedef std::vector <cv::Point2f> intercepts;
typedef std::vector<cv::Mat> matVec;
/*******
 * @brief  : The makeAverage function receives the input 2D coordinates (x, y)
 *           and computes the average of x and y
 * @params : The input parameter is a set of all matches (x, y pairs) in image A
 ************/
void Normalization::makeAverage(intercepts pointsVec) {
    count = pointsVec.size();
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;
}
/*******
 * @brief  : The normalize function uses the averages calculated in the previous
 *           step and computes the forward and inverse transformation matrices
 * @params : The input to this function is a vector of corresponding points in the given image
 * @return : The returned data is a tuple of forward and inverse transformation matrices
 *************/
std::tuple<cv::Mat, cv::Mat> Normalization::normalize(intercepts pointsVec) {
    for (auto& member : pointsVec) {
        // Accumulate the distance for every point
        distance += ((1 / (count * std::sqrt(2))) *
                     (std::sqrt(std::pow((member.x - xAvg), 2)
                                + std::pow((member.y - yAvg), 2))));
    }

    forwardTransform = (cv::Mat_<double>(3, 3) << (1 / distance),
                        0, -(xAvg / distance), 0, (1 / distance),
                        -(yAvg / distance), 0, 0, 1);

    reverseTransform = (cv::Mat_<double>(3, 3) << distance, 0, xAvg,
                        0, distance, yAvg, 0, 0, 1);

    return std::make_tuple(forwardTransform, reverseTransform);
}
/*******
 * @brief  : The getNormalizedPoints function transforms the raw image coordinates into
 *           normalized coordinates using the forwardTransform matrix estimated in the previous step
 * @params : The input to this function is a vector of corresponding points in the given image
 * @return : The returned data is a vector of transformed coordinates
 *************/
matVec Normalization::getNormalizedPoints(intercepts pointsVec) {
    cv::Mat triplet;
    for (auto& member : pointsVec) {
        triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        matVecData.emplace_back(forwardTransform * triplet);
    }
    return matVecData;
}
Is this the right way? Are there other ways of doing the normalization?
I think you can do it your way but in "Multiple View Geometry in Computer Vision" Hartley and Zisserman recommend isotropic scaling (p. 107):
Isotropic scaling. As a first step of normalization, the coordinates in each image are translated (by a different translation for each image) so as to bring the centroid of the set of all points to the origin. The coordinates are also scaled so that on the average a point x is of the form x = (x, y, w)^T, with each of x, y and w having the same average magnitude. Rather than choose different scale factors for each coordinate direction, an isotropic scaling factor is chosen so that the x and y-coordinates of a point are scaled equally. To this end, we choose to scale the coordinates so that the average distance of a point x from the origin is equal to √2. This means that the "average" point is equal to (1, 1, 1)^T. In summary the transformation is as follows:
(i) The points are translated so that their centroid is at the origin.
(ii) The points are then scaled so that the average distance from the origin is equal to √2.
(iii) This transformation is applied to each of the two images independently.
They state that it is important for the direct linear transformation (DLT), but even more important for the calculation of a fundamental matrix, which is what you want to do.
The algorithm you chose normalized the point coordinates to (1, 1, 1) but did not apply a scaling so that the average distance from the origin is equal to √2.
Here is some code for this type of normalization. The averaging step stayed the same:
std::vector<cv::Mat> normalize(std::vector<cv::Point2d> pointsVec) {
    // Averaging
    double count = (double) pointsVec.size();
    double xAvg = 0;
    double yAvg = 0;
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;

    // Normalization
    std::vector<cv::Mat> points3d;
    std::vector<double> distances;
    for (auto& member : pointsVec) {
        double distance = (std::sqrt(std::pow((member.x - xAvg), 2) + std::pow((member.y - yAvg), 2)));
        distances.push_back(distance);
    }
    // Mean distance to the centroid (std::accumulate requires <numeric>)
    double xy_norm = std::accumulate(distances.begin(), distances.end(), 0.0) / distances.size();

    // Create a matrix transforming the points into having mean (0,0) and mean distance to the center equal to sqrt(2)
    cv::Mat_<double> Normalization_matrix(3, 3);
    double diagonal_element = sqrt(2) / xy_norm;
    double element_13 = -sqrt(2) * xAvg / xy_norm;
    double element_23 = -sqrt(2) * yAvg / xy_norm;
    Normalization_matrix << diagonal_element, 0, element_13,
                            0, diagonal_element, element_23,
                            0, 0, 1;

    // Multiply the original points with the normalization matrix
    for (auto& member : pointsVec) {
        cv::Mat triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        points3d.emplace_back(Normalization_matrix * triplet);
    }
    return points3d;
}
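For fundamental-matrix estimation you would call this once for each image's points, since the transformation is applied to each of the two images independently (step (iii) above). A hedged usage sketch; pts_left and pts_right are placeholder names, and it assumes you also keep the two Normalization_matrix instances (here T_left and T_right) so the matrix estimated from normalized points can be denormalized afterwards:
std::vector<cv::Point2d> pts_left, pts_right;           // matched points, filled elsewhere
std::vector<cv::Mat> norm_left  = normalize(pts_left);  // normalized homogeneous points of image 1
std::vector<cv::Mat> norm_right = normalize(pts_right); // normalized homogeneous points of image 2
// If F_hat is estimated from the normalized correspondences, undo the normalization with
//     cv::Mat F = T_right.t() * F_hat * T_left;
// so that F relates the original pixel coordinates again.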
I'm trying to use cv::findFundamentalMat, but when I read its mask output argument, it only gives me 0's. According to the documentation, that argument should be:
"Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1's."
I'm using siftGPU to generate the keypoints (x,y) that are used in the function.
My code :
/*
... Use siftgpu
*/
std::vector<int(*)[2]> match_bufs; // contains the (x, y) index pairs of matched keypoints from the 2 images

// (presumably inside a loop over matches; the loop itself is omitted in this excerpt)
SiftGPU::SiftKeypoint & key1 = keys[match_bufs[i][0]];
SiftGPU::SiftKeypoint & key2 = keys[match_bufs[i][1]];

float x_l, y_l, x_r, y_r; // (x, y) of the left and right images
x_l = key1.x; y_l = key1.y;
x_r = key2.x; y_r = key2.y;

vec1.push_back(x_l); vec1.push_back(y_l);
vec2.push_back(x_r); vec2.push_back(y_r);

std::vector<uchar> results;
int size = vec1.size();
results.resize(size);

std::vector<cv::Point2f> points1;
std::vector<cv::Point2f> points2;
for (int i = 0; i < size; i += 2) {
    points1.push_back(cv::Point2f(vec1[i], vec1[i + 1]));
    points2.push_back(cv::Point2f(vec2[i], vec2[i + 1]));
}

cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, results);
then,
std::cout << std::endl << fund << std::endl;
for (int j = 0; j < results.size(); ++j) {
std::cout << (int)results[j];
}
fund is :
0, -0.001, 0.6
0, 0, -0.3
-0.4, 0.2, 0
and results contains only 0's.
Maybe I'm fooling myself, because the findFundamentalMat documentation says:
"Array of N points from the first image. The point coordinates should be floating-point (single or double precision)."
Since I'm not a native English speaker, maybe there is something I'm missing... My (x, y) values are like (350.0, 560.0), which are floating point. But do I have to normalize them to [0, 1], and is that what floating-point means?
Or am I missing something else?
Thanks!
(EDIT: I tried to normalize my points (dividing by the height and width of the respective images), but the results are still all 0's.)
The answer is quite simple: I have to use the right type in the template and cast it properly.
So :
((int)results.at<uchar>(i, 0) == 1)
works :)
If it can help someone.
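For anyone else landing here, a small sketch of how the mask can then be used to keep only the RANSAC inliers; it assumes the mask was requested as a cv::Mat and reuses the variable names from the question:
cv::Mat mask;
cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, mask);

std::vector<cv::Point2f> inliers1, inliers2;
for (int i = 0; i < mask.rows; ++i) {
    if ((int)mask.at<uchar>(i, 0) == 1) { // 1 = inlier, 0 = rejected by RANSAC
        inliers1.push_back(points1[i]);
        inliers2.push_back(points2[i]);
    }
}
std::cout << inliers1.size() << " of " << points1.size() << " matches are inliers" << std::endl;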
I'm using this article: nonlingr as a source to understand non-linear transformations. In the section GLYPHS ALONG A PATH the author explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());
//get the control points...
wVector p0(values[vindex], values[vindex+1], 1);
wVector p1(values[vindex+2], values[vindex+3], 1);
wVector p2(values[vindex+4], values[vindex+5], 1);
wVector p3(values[vindex+6], values[vindex+7], 1);
//this is to calculate t based on x
double trange = 1 / (OUT.width()-1);
//curve coefficients
double A = (-p0[0] + 3*p1[0] - 3*p2[0] + p3[0]);
double B = (3*p0[0] - 6*p1[0] + 3*p2[0]);
double C = (-3*p0[0] + 3*p1[0]);
double D = p0[0];
double E = (-p0[1] + 3*p1[1] - 3*p2[1] + p3[1]);
double F = (3*p0[1] - 6*p1[1] + 3*p2[1]);
double G = (-3*p0[1] + 3*p1[1]);
double H = p0[1];
//apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        //t = x / width
        double t = trange * j;
        //apply the article given formulas
        double x_path_d = 3*t*t*A + 2*t*B + C;
        double y_path_d = 3*t*t*E + 2*t*F + G;
        double angle = 3.14159265/2.0 + std::atan(y_path_d / x_path_d);
        mapped_point.Set((t*t*t)*A + (t*t)*B + t*C + D + i*std::cos(angle),
                         (t*t*t)*E + (t*t)*F + t*G + H + i*std::sin(angle),
                         1);
        //test if the point is inside the image
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= IN.height())
            continue;
        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black screen, no matter what control points I use. It is hard to find information about this kind of transformation; searching for parametric curves, all I find is how to draw them, not how to apply them to images. Can someone help me transform an image with a Bezier curve?
IMHO applying a curve to an image sounds like using a LUT. You look up the value of the curve for each possible image value and then replace the image value with the one on the curve. So, create a look-up table covering every possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit gray-value image); that is a 2x256 matrix, where the first column has the values from 0 to 255 and the second one has the corresponding value of the curve.
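A minimal sketch of that idea for an 8-bit grayscale image, using cv::LUT; gray_image is an assumed CV_8UC1 input, and the square-root mapping is only a stand-in for whatever curve you actually use:
cv::Mat lut(1, 256, CV_8U);
for (int v = 0; v < 256; ++v)
    lut.at<uchar>(0, v) = cv::saturate_cast<uchar>(255.0 * std::sqrt(v / 255.0)); // sample the curve at value v
cv::Mat remapped;
cv::LUT(gray_image, lut, remapped); // replace every pixel value by its table entry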
I'm trying to use the LogPolar transform to obtain the scale and the rotation angle from two images. Below are two 300x300 sample images. The first rectangle is 100x100, and the second rectangle is 150x150, rotated by 45 degrees.
The algorithm:
Convert both images to LogPolar.
Find the translational shift using Phase Correlation.
Convert the translational shift to scale and rotation angle (how to do this?).
My code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgproc/imgproc_c.h>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
int main()
{
    cv::Mat a = cv::imread("rect1.png", 0);
    cv::Mat b = cv::imread("rect2.png", 0);
    if (a.empty() || b.empty())
        return -1;

    cv::imshow("a", a);
    cv::imshow("b", b);

    cv::Mat pa = cv::Mat::zeros(a.size(), CV_8UC1);
    cv::Mat pb = cv::Mat::zeros(b.size(), CV_8UC1);
    IplImage ipl_a = a, ipl_pa = pa;
    IplImage ipl_b = b, ipl_pb = pb;
    cvLogPolar(&ipl_a, &ipl_pa, cvPoint2D32f(a.cols >> 1, a.rows >> 1), 40);
    cvLogPolar(&ipl_b, &ipl_pb, cvPoint2D32f(b.cols >> 1, b.rows >> 1), 40);

    cv::imshow("logpolar a", pa);
    cv::imshow("logpolar b", pb);

    cv::Mat pa_64f, pb_64f;
    pa.convertTo(pa_64f, CV_64F);
    pb.convertTo(pb_64f, CV_64F);

    cv::Point2d pt = cv::phaseCorrelate(pa_64f, pb_64f);

    std::cout << "Shift = " << pt
              << "Rotation = " << cv::format("%.2f", pt.y*180/(a.cols >> 1))
              << std::endl;
    cv::waitKey(0);
    return 0;
}
The log polar images:
For the sample images above, the translational shift is (16.2986, 36.9105). I have successfully obtained the rotation angle, which is 44.29. But I have difficulty calculating the scale. How do I convert the given translational shift to obtain the scale?
You have two images f1, f2 with f1(m, n) = f2(m/a, n/a), that is, f1 is scaled by a factor a.
In logarithmic coordinates this is equivalent to f1(log m, log n) = f2(log m − log a, log n − log a), where log a is the shift in your phase-correlated image.
Compare B. S. Reddy, B. N. Chatterji: "An FFT-Based Technique for Translation, Rotation, and Scale-Invariant Image Registration", IEEE Transactions on Image Processing, Vol. 5, No. 8, IEEE, 1996.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.4387&rep=rep1&type=pdf
Here is a Python version, which computes:
ir = abs(ifft2((f0 * f1.conjugate()) / r0))
i0, i1 = numpy.unravel_index(numpy.argmax(ir), ir.shape)
angle = 180.0 * i0 / ir.shape[0]
scale = log_base ** i1
The value for the scale factor is indeed obtained from the exponential of the x-shift. However, since you used a "magnitude scale parameter" of 40 for the cvLogPolar function, you need to divide pt.x by 40 to get the correct value for the scale:
Scale = exp(pt.x / 40) = exp(16.2986 / 40) = 1.503
The value of the "magnitude scale parameter" for the cvLogPolar function does not affect the displacement produced by the rotation angle, pt.y, because, according to the math, it cancels out. For that reason, your formula for the rotation gives the correct value.
On another note, I believe the formula for the rotation should actually be:
Rotation = pt.y*360/(a.cols)
But, for some strange reason, the ">> 1" that you added causes the result to be multiplied by 2 (which I believe you compensated for by multiplying by 180 instead of 360?). Remove it and you'll see what I mean.
>1" is causing">
Also, ">> 1" causes a division by 2 in:
cvPoint2D32f(a.cols >> 1, a.rows >> 1)
If you set the center parameter of the cvLogPolar function to the center of the image (which is what you want):
cvPoint2D32f(a.cols/2, a.rows/2)
and
cvPoint2D32f(b.cols/2, b.rows/2)
then you'll also get the correct value for the rotation (i.e. the same value that you got) and for the scale.
This thread was helpful in getting me started on rotation-invariant phase correlation, so I hope my input will help resolve any lingering issues.
We aim to calculate the scale and rotation (which is incorrectly calculated in the code). Let's start by gathering the equations from the logPolar docs. There they state the following:
(1) I = (dx,dy) = (x-center.x, y-center.y)
(2) rho = M * ln(magnitude(I))
(3) phi = Ky * angle(I)_0..360
Note: rho is pt.x and phi is pt.y in the code above
We also know that
(4) M = src.cols/ln(maxRadius)
(5) Ky = src.rows/360
First, let's solve for scale. Solving for magnitude(I) (i.e. scale) in equation 2, we get
(6) magnitude(I) = scale = exp(rho/M)
Then we substitute for M and simplify to get
(7) magnitude(I) = scale = exp(rho*ln(maxRadius)/src.cols) = pow(maxRadius, rho/src.cols)
Now let's solve for rotation. Solving for angle(I) (i.e. rotation) in equation 3, we get
(8) angle(I) = rotation = phi/Ky
Then we substitute for Ky and simplify to get
(9) angle(I) = rotation = phi*360/src.rows
So, scale and rotation can be calculated using equations 7 and 9, respectively. It might be worth noting that you should use equation 4 for calculating M, and Point2f center( (float)a.cols/2, (float)a.rows/2 ) for calculating the center, as opposed to what is in the code above. There are good bits of info in this logpolar example opencv code.
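Put into code, a hedged sketch of equations (7) and (9): pt is the shift returned by cv::phaseCorrelate on the two log-polar images, src is one of those log-polar images, and maxRadius is an assumed variable holding the maximum radius the log-polar mapping was built with (requires <cmath>):
double rho = pt.x; // shift along the log-radius axis
double phi = pt.y; // shift along the angle axis
double scale    = std::pow(maxRadius, rho / src.cols); // equation (7)
double rotation = phi * 360.0 / src.rows;              // equation (9)
std::cout << "scale = " << scale << ", rotation = " << rotation << " degrees" << std::endl;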
From the values given by phase correlation, the coordinates are rectangular coordinates, hence (16.2986, 36.9105) is (x, y). The scale is calculated as
scale = log((x^2 + y^2)^0.5), which is approximately 1.6 (close to 1.5).
When we calculate the angle using the formula theta = arctan(y/x), we get about 66.
That theta value is way off the real value (45 in this case).
I am working on a form of autocalibration for an optics device; the calibration is currently performed manually. The first part of the calibration is to determine whether a light beam has illuminated the set of 'calibration' points.
I am using OpenCV and have thresholded and cropped the image to leave only the possibly relevant points. I now want to determine if these points lie along a straight (horizontal) line; if a sufficient number do, the beam is in the correct position! (The points lie in a straight line, but the beam is often bent, so hitting most of the points suffices; there are 21 points, which show up as white circles when thresholded.)
I have tried using a histogram, but on the thresholded image the results are not correct, and I am now looking at Hough lines; but that detects straight lines from edges, whereas I want to establish whether detected points lie on a line.
This is the threshold code I use:
cvThreshold(output, output, 150, 256, CV_THRESH_BINARY);
The histogram results, with anywhere from 1 to 640 bins (the image width), are two lines at 0 and about two thirds of the way through, of near maximum value. Not the distribution expected or obtained without thresholding.
Some pictures to try to illustrate the point (note the 'noisy' light spots, which are a feature of the system setup and cannot be overcome):
12 points in a straight line next to one another (beam in correct position)
The sort of output wanted (for illustration; if the points are on the line, this is all I need to know!)
Any help would be greatly appreciated. One thought was to extract the coordinates of the points and compare them, but I don't know how to do this.
In case it helps anyone, here is a very basic first draft of some simple linear regression code I used.
// Calculate the averages of arrays x and y
double xa = 0, ya = 0;
for(int i = 0; i < n; i++)
{
    xa += x[i];
    ya += y[i];
}
xa /= n;
ya /= n;

// Summation of all X and Y values
double sumX = 0;
double sumY = 0;
// Summation of all X*Y values
double sumXY = 0;
// Summation of all X^2 and Y^2 values
double sumXs = 0;
double sumYs = 0;
for(int i = 0; i < n; i++)
{
    sumX = sumX + x[i];
    sumY = sumY + y[i];
    sumXY = sumXY + (x[i] * y[i]);
    sumXs = sumXs + (x[i] * x[i]);
    sumYs = sumYs + (y[i] * y[i]);
}

// Squares of the sums: (sum X)^2 and (sum Y)^2
double Xs = sumX * sumX;
double Ys = sumY * sumY;

// Calculate slope, m
slope = (n * sumXY - sumX * sumY) / (n * sumXs - Xs);

// Calculate intercept
intercept = ceil((sumY - slope * sumX) / n);

// Calculate the correlation coefficient r (square it for r^2)
double r_top = (n * sumXY - sumX * sumY);
double r_bottom = sqrt((n * sumXs - Xs) * (n * sumYs - Ys));
double r = 0;
// Guard against a zero numerator or denominator (perfectly vertical or horizontal line)
if(r_top == 0 || r_bottom == 0)
    r = 0;
else
    r = r_top / r_bottom;
There are more efficient ways of doing this (see CodeCogs or AGLIB), but as a quick fix this code seems to work.
To detect circles in OpenCV I dropped the Hough transform and adapted code from this post:
Detection of coins (and fit ellipses) on an image
It is then a case of refining the coordinates (removing any outliers, etc.) to determine whether the circles lie on a horizontal line, using the slope and intercept values of the regression.
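A rough sketch of that refinement step, reusing slope, intercept and the x, y arrays from the regression code above; the pixel tolerance and the required point count are made-up values to tune for your setup (requires <cmath> for std::fabs):
int onLine = 0;
for (int i = 0; i < n; i++)
{
    double predicted = slope * x[i] + intercept; // y predicted by the fitted line
    if (std::fabs(y[i] - predicted) < 5.0)       // within 5 pixels of the line
        onLine++;
}
bool horizontal = std::fabs(slope) < 0.05;       // nearly flat line
bool beamOk = horizontal && onLine >= 12;        // "most" of the 21 points hit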
Obtain the x,y coordinates of the thresholded points, then perform a linear regression to find a best-fit line. With that line, you can determine the r^2 value which effectively gives you the quality of fit. Based on that fitness measure, you can determine your calibration success.
Here is a good discussion.
You could do something like this, although it is an approximation:
var dw = decide a medium dot width in pixels
maxdots = 0;
for each line of the image {
    var dots = 0;
    scan by incrementing x by dw {
        if (color == dotcolor) dots++;
    }
    if (dots > maxdots) maxdots = dots;
}
maxdots would be the best result...
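For reference, a literal C++/OpenCV rendition of that pseudocode could look like this; binary is assumed to be the thresholded CV_8UC1 image with white dots, and dw is the assumed dot width:
int dw = 5;       // assumed medium dot width in pixels
int maxdots = 0;
for (int y = 0; y < binary.rows; ++y)
{
    int dots = 0;
    for (int x = 0; x < binary.cols; x += dw)
    {
        if (binary.at<uchar>(y, x) == 255) // pixel belongs to a white dot
            ++dots;
    }
    if (dots > maxdots)
        maxdots = dots;
}
// maxdots is the largest number of dot samples found on any single image row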