Putting a vector of points through cartToPolar in OpenCV - C++

I have a vector<Point2f> with points that I need to convert to polar. The only function I see to do this is cartToPolar(), which, for some reason, doesn't accept a vector of points but instead requires two separate input arrays, one holding the x coordinates and one holding the y coordinates.
Is there any graceful solution to this or do I have to manually make these other two vectors?

Since cv::Mat has a constructor that takes a data pointer and a step, you can create two Mat headers over the interleaved x and y components of the Point2f elements; a std::vector stores its data contiguously, so this is safe.
Code
#include "opencv2/opencv.hpp"
int main(int argc,const char* argv[])
{
std::vector<cv::Point2f> sides;
sides.push_back(cv::Point2f(3, 4));
sides.push_back(cv::Point2f(6, 8));
sides.push_back(cv::Point2f(1, 1));
cv::Mat xpts(sides.size(), 1, CV_32F, &sides[0].x, 2 * sizeof(float));
cv::Mat ypts(sides.size(), 1, CV_32F, &sides[0].y, 2 * sizeof(float));
std::cout << "x: " << xpts.t() << std::endl;
std::cout << "y: " << ypts.t() << std::endl;
cv::Mat magnitude, angle;
cv::cartToPolar(xpts, ypts, magnitude, angle);
std::cout << "\nmagnitude: " << magnitude.t();
std::cout << "\nangle: " << angle.t() *180. / CV_PI << std::endl;
return 0;
}
The key is the step, 2 * sizeof(float): it tells each Mat to skip over the other component of every Point2f. The data is not copied unless you call clone().
Output
x: [3, 6, 1]
y: [4, 8, 1]
magnitude: [5, 10, 1.4142135]
angle: [53.136284, 53.136284, 44.990456]
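To illustrate the no-copy behaviour, here is a small check of my own (not from the original answer), reusing the sides and xpts names from the snippet above:
// xpts is only a header over the vector's memory, so a change made through the
// vector is immediately visible through the Mat (and vice versa).
sides[0].x = 30.f;
std::cout << xpts.at<float>(0) << std::endl;  // prints 30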

Related

Eigen 3x3 matrix inverse wrong result

Problem description
I'm using Eigen for some matrix work. Say I have a matrix A of size 4x3; its transpose A^T is then 3x4, A^T * A is 3x3, and so the inverse (A^T * A)^(-1) is also 3x3.
I would like to get (A^T * A)^(-1). Computing it via the formula above, and computing it from a manually defined A^T * A matrix, I got different results for (A^T * A)^(-1). I am curious why these two results mismatch and differ so much.
Reproduce
Eigen version: git commit 972cf0c28a8d2ee0808c1277dea2c5c206591ce6
System: Ubuntu 20.04
Compiler: Clang++, 10.0.0
Code:
#include <cstdio>
#include <iostream>
#include <Eigen/Dense>

int main() {
    {
        printf("----- A^T * A by calculate\n");
        Eigen::MatrixXd A(4, 3);
        A << 1.70023e+06, 1303.93, 1,
             1.99113e+06, 1411.07, 1,
             1.97211e+06, 1404.32, 1,
             1.69433e+06, 1301.67, 1;
        std::cout << "A:" << std::endl << A << std::endl;
        Eigen::MatrixXd AT = A.transpose();
        std::cout << "A^T:" << std::endl << AT << std::endl;
        Eigen::MatrixXd ATA = AT * A;
        std::cout << "A^T * A:" << std::endl << ATA << std::endl;
        std::cout << "(A^T * A)^(-1):" << std::endl << ATA.inverse() << std::endl;
    }
    {
        printf("----- A^T * A by define\n");
        Eigen::MatrixXd ATA(3, 3);
        ATA << 1.36154e+13, 1.00015e+10, 7.3578e+06,
               1.00015e+10, 7.3578e+06,  5420.99,
               7.3578e+06,  5420.99,     4;
        std::cout << "A^T * A:" << std::endl << ATA << std::endl;
        std::cout << "(A^T * A)^(-1):" << std::endl << ATA.inverse() << std::endl;
    }
    return 0;
}
The actual outputs:
----- A^T * A by calculate
A:
1.70023e+06 1303.93 1
1.99113e+06 1411.07 1
1.97211e+06 1404.32 1
1.69433e+06 1301.67 1
A^T:
1.70023e+06 1.99113e+06 1.97211e+06 1.69433e+06
1303.93 1411.07 1404.32 1301.67
1 1 1 1
A^T * A:
1.36154e+13 1.00015e+10 7.3578e+06
1.00015e+10 7.35781e+06 5420.99
7.3578e+06 5420.99 4
(A^T * A)^(-1):
3.49282e-06 -0.00946871 6.40758
-0.00946871 25.6689 -17370.5
6.40758 -17370.5 1.17549e+07
----- A^T * A by define
A^T * A:
1.36154e+13 1.00015e+10 7.3578e+06
1.00015e+10 7.3578e+06 5420.99
7.3578e+06 5420.99 4
(A^T * A)^(-1):
6.1435e-09 -1.66513e-05 0.0112659
-1.66513e-05 0.0452221 -30.658
0.0112659 -30.658 20826.4
This is just a matter of numerical accuracy. As pointed out by @Damien in the comments, the matrix is ill-conditioned, so a small difference in the input can lead to a large change in the result. By copying only the first few significant digits from the printed output and using them to define the second matrix manually, you discard a significant part of each value that is carried internally but not displayed at std::cout's default precision.
Take for instance the entry ATA(0,0). By writing ATA << 1.36154e+13, ..., every contribution of the order of 1e7 or below is dropped. That, however, is exactly the order of magnitude of the other entries in the matrix.
In short, both results are correct, but your manually defined matrix ATA is not the same as the one that is calculated in the first part. (You can take the difference between the two to verify this).
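To see how far apart the two matrices really are, you can print their difference and ATA's condition number. The snippet below is my own illustration (ATA_manual stands for the manually typed matrix from the second block; rename it to whatever you use), not part of the original code:
// Condition number of ATA: ratio of the largest to the smallest singular value.
// The larger it is compared to 1/eps (~1e16 for double), the more the inverse
// amplifies tiny changes in the entries.
Eigen::JacobiSVD<Eigen::MatrixXd> svd(ATA);
const Eigen::VectorXd sv = svd.singularValues();
std::cout << "condition number: " << sv(0) / sv(sv.size() - 1) << std::endl;

// Entry-wise difference between the computed ATA and the hand-typed ATA_manual.
std::cout << "difference:" << std::endl << (ATA - ATA_manual) << std::endl;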
My current practice for inverting a 3x3 matrix is to use
template<typename Derived>
template<typename ResultType>
inline void MatrixBase<Derived>::computeInverseAndDetWithCheck(
    ResultType& inverse,
    typename ResultType::Scalar& determinant,
    bool& invertible,
    const RealScalar& absDeterminantThreshold
) const
If you do not set absDeterminantThreshold manually, the defaults are:
1) for float: 1e-5
2) for double: 1e-12
3) for long double: 1e-15
For example, with your code:
Eigen::Matrix3d ATA_inv;
bool invertible = false;
double determinant = 0.;
// Note: computeInverseAndDetWithCheck is only provided for fixed-size matrices
// up to 4x4, so ATA must be an Eigen::Matrix3d here.
double Threshold = std::abs(ATA.diagonal().prod()); // or any other problem-specific test
ATA.computeInverseAndDetWithCheck(ATA_inv, determinant, invertible,
                                  Threshold < 1e-12 ? Threshold * 0.1 : 1e-12);
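For completeness, a self-contained sketch of this approach (my own assembly, not from the original post; 1e-12 is Eigen's default threshold for double):
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix3d ATA;
    ATA << 1.36154e+13, 1.00015e+10, 7.3578e+06,
           1.00015e+10, 7.35781e+06, 5420.99,
           7.3578e+06,  5420.99,     4;

    Eigen::Matrix3d ATA_inv;
    double determinant = 0.;
    bool invertible = false;

    // Inverse, determinant and an invertibility flag in one call; the last
    // argument is the absolute determinant threshold below which the matrix
    // is treated as non-invertible.
    ATA.computeInverseAndDetWithCheck(ATA_inv, determinant, invertible, 1e-12);

    std::cout << "determinant: " << determinant
              << ", invertible: " << std::boolalpha << invertible << std::endl;
    if (invertible)
        std::cout << ATA_inv << std::endl;
    return 0;
}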

Use .resize() without losing the elements

I want to build a dynamic matrix and assign values to specific indices, so I have to set the size of the matrix before accessing those indices; please see the following code:
void Visual_Servoing::Pose_callback(const geometry_msgs::PoseStamped::ConstPtr &msg)
{
    Rotations.resize(index + 3, 3);
    Translations.resize(3, iterator + 1);
    std::cout << " #### I'm in Pose_Callback #### " << std::endl;
    this->hole_detection(frame);
    this->generate_line();
    Translation << msg->pose.position.x, msg->pose.position.y, msg->pose.position.z;
    Translations.col(iterator) = Translation;
    std::cout << "iterator =" << iterator << std::endl;
    QuatX = msg->pose.orientation.x;
    QuatY = msg->pose.orientation.y;
    QuatZ = msg->pose.orientation.z;
    QuatW = msg->pose.orientation.w;
    Rotation << (1 - (2*pow(QuatY,2)) - (2*pow(QuatZ,2))), (2*QuatX*QuatY + 2*QuatW*QuatZ),           (2*QuatX*QuatZ - 2*QuatW*QuatY),
                (2*QuatX*QuatY - 2*QuatW*QuatZ),           (1 - (2*pow(QuatX,2)) - (2*pow(QuatZ,2))), (2*QuatY*QuatZ + 2*QuatW*QuatX),
                (2*QuatX*QuatZ + 2*QuatW*QuatY),           (2*QuatY*QuatZ - 2*QuatW*QuatX),           (1 - (2*pow(QuatX,2)) - (2*pow(QuatY,2)));
    Rotations.block(index, 0, 3, 3) = Rotation;
    std::cout << Translation << std::endl;
    std::cout << Translations << std::endl;
    std::cout << Rotation << std::endl;
    std::cout << Rotations << std::endl;
    if (iterator > 10)
    {
        Eigen::MatrixXf VectorsCam(3, theta1.size()); // Matrix containing the vectors in the camera frame
        VecsinInertial.resize(3, theta1.size());
        CirclePosinFrame();
        for (int i = 0; i < theta1.size(); i++)
        {
            // Each column represents a vector in the inertial frame
            VecsinInertial.col(i) = Translations.col(i) + Rotations.block(index, 0, 3, 3) * VectorsCam.col(i);
        }
        VectorCam = Eigen::MatrixXf::Zero(3, 1);
        VectorsCam = Eigen::MatrixXf::Zero(3, theta1.size());
        theta1.clear();
        theta2.clear();
        iterator = 0;
    }
    iterator = iterator + 1;
    index = index + 3;
}
How can I update the sizes of the Translations and Rotations matrices without losing the values already stored in them? I have already declared Translations and Rotations in the header file as:
Eigen::MatrixXf Translations; // Matrix of Camera Translations Vectors
Eigen::MatrixXf Rotations;
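Eigen's conservativeResize() is intended for exactly this: unlike resize(), which leaves the coefficients uninitialized, it keeps the values in the overlapping block. A minimal sketch of my own (not from the post), using a matrix shaped like Translations:
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXf Translations(3, 1);
    Translations.col(0) << 1.f, 2.f, 3.f;

    // Grow by one column; the existing column keeps its values and only the
    // newly added column has to be filled in.
    Translations.conservativeResize(Eigen::NoChange, Translations.cols() + 1);
    Translations.col(1) << 4.f, 5.f, 6.f;

    std::cout << Translations << std::endl;
    return 0;
}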

C++ OpenCV: Mat::zeros gives wrong shape

I defined and initialized a Mat variable using Mat::zeros, but when I print its shape (rows, cols, channels), I seem to get wrong values.
My code is shown as follows:
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char const *argv[])
{
int n_Channel = 3;
int mySizes[3] = {100, 200, n_Channel};
Mat M = Mat::zeros(n_Channel, mySizes, CV_64F);
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
return 0;
}
The printed message is:
-1,-1,1
What's wrong with this?
I also find that if I declare a Mat using the following code:
int n_Channel = 3;
Mat M(Size(100, 200), CV_32FC(n_Channel));
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
the outcome is correct:
200,100,3
I'm confused about this. Thank you all for helping me!
You want to use a specific overload of the cv::Mat::zeros method.
Let's have a look at the following code:
// Number of channels.
const int n_Channel = 3;
// Number of dimensions; must be 1 or 2?
const int n_Dimensions = 2;
// Create empty Mat using zeros, and output dimensions.
int mySizes[n_Dimensions] = { 200, 100 };
cv::Mat M1 = cv::Mat::zeros(n_Dimensions, mySizes, CV_64FC(n_Channel));
std::cout << "M1: " << M1.rows << "," << M1.cols << "," << M1.channels() << std::endl;
// Create empty Mat using constructor, and output dimensions.
cv::Mat M2 = cv::Mat(cv::Size(100, 200), CV_64FC(n_Channel), cv::Scalar(0, 0, 0));
std::cout << "M2: " << M2.rows << "," << M2.cols << "," << M2.channels() << std::endl;
which gives the following output:
M1: 200,100,3
M2: 200,100,3
So, basically, you have to move the "channel number" information from mySizes into the type passed to the cv::Mat::zeros method. Also, you have to pay attention to the order of the image dimensions given in mySizes, since it seems to differ from the constructor that takes a cv::Size: the latter is width x height, whereas mySizes is number of rows x number of columns.
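As a side note of my own (not part of the answer above): the -1,-1 in the question is not garbage but the documented behaviour of a Mat with more than two dimensions; its extents are reported through Mat::dims and Mat::size rather than rows/cols:
// If you really want a 3-dimensional Mat, rows and cols are -1 by design;
// query Mat::dims and Mat::size[] instead.
int mySizes3[3] = {100, 200, 3};
cv::Mat M3 = cv::Mat::zeros(3, mySizes3, CV_64F);
std::cout << "dims: " << M3.dims
          << ", size: " << M3.size[0] << "x" << M3.size[1] << "x" << M3.size[2]
          << ", channels: " << M3.channels() << std::endl;  // dims: 3, size: 100x200x3, channels: 1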
How to initialize a cv::Mat:
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64F);
As you can see, the first parameter is the Size; cf.
https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html

How can I get accurate center points?

How can I get accurate center points with the OpenCV Hough Circle Transform? I need more decimal places of accuracy in the x, y coordinates.
(By accurate I mean something like this.)
I got these center coordinates with MATLAB:
x: 107.775526315904  y: 112.963480232638
x: 469.550463441518  y: 208.309866770404
x: 217.386414755458  y: 490.882895036058
Here is my code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
#include <stdio.h>
#include <string.h>   // strcmp
#include <stdlib.h>   // system

using namespace cv;
using namespace std;

int main()
{
    Mat src, src_gray, src0;
    printf("Give the name of the picture\n");
    char ImageName[20];
    scanf("%19s", ImageName);

    /// Read the image
    src = imread(ImageName, 1);
    if (!src.data)
    {
        printf("no image data\n");
        return main();
    }
    // src0 = src;
    // imshow("orignal", src0);

    /// Convert it to gray
    cvtColor(src, src_gray, CV_BGR2GRAY);
    GaussianBlur(src_gray, src_gray, Size(9, 9), 2, 2);

    /// Apply the Hough Transform to find the circles
    vector<Vec3f> circles;
    HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 0.92, src_gray.rows / 10, 20, 10, 55, 60);

    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // printf("%d center x %d y %d\n", i, cvRound(circles[i][0]), cvRound(circles[i][1]));
        // printf("rad: %d\n", cvRound(circles[i][2]));
        cout << i + 1 << " centerpoint " << " x: " << circles[i][0] << " y: " << circles[i][1] << "rad: " << circles[i][2] << endl;
        // cout << i + 1 << " othermethod" << "x : " << center.x << endl;

        // circle center
        circle(src, center, 1, Scalar(0, 255, 0), -1, 8, 0);
        // circle outline
        circle(src, center, radius, Scalar(255, 0, 0), 1, 8, 0);
    }

    imshow("circlefinder", src);
    waitKey(1);
    system("PAUSE");
    cvDestroyWindow("orignal");
    cvDestroyWindow("circlefinder");

    printf("new picture ?(y/n)");
    char ans[4];
    scanf("%3s", ans);
    if (strcmp(ans, "y"))
    {
        return 0;
    }
    else if (strcmp(ans, "n"))
    {
        return main();
    }
}
Here is the output:
Give the name of the picture
as2.jpg
145.5test x
1 centerpoint x: 105.5 y: 112.5rad: 55.5563
2 centerpoint x: 235.5 y: 427.5rad: 56.626
3 centerpoint x: 466.5 y: 196.5rad: 55.5203
Press any key to continue . . .
new picture ?
It's not that your data is lost; it's just the print statement you wrote that is skipping some digits.
Try std::setprecision:
std::cout << 432.123456789f << " " << std::setprecision(10) << 432.123456789f << std::endl;
Don't forget:
#include <iomanip>
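Applied to the loop in the question, this would look roughly like the following (my own adaptation, reusing the circles vector and the loop index i from the code above):
// Print the raw float values returned by HoughCircles with a fixed number of
// decimals instead of cout's default 6 significant digits.
std::cout << std::fixed << std::setprecision(6)
          << i + 1 << " centerpoint x: " << circles[i][0]
          << " y: " << circles[i][1]
          << " rad: " << circles[i][2] << std::endl;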
Anyway, I don't think you really need that precision. Your measurement error in image processing is usually quite large, and the last digits don't give you any real information.

Pairwise differences between two matrices in Eigen

In MATLAB/Octave, pairwise distances between matrices, as required e.g. for k-means, are calculated with one function call (see cvKmeans.m): distFunc(Codebook, X), whose arguments are two matrices of dimensions KxD.
In Eigen this can be done for a matrix and one vector by using broadcasting, as explained on eigen.tuxfamily.org:
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
However, in this case v is not just a vector, but a matrix. What's the equivalent oneliner in Eigen to calculate such pairwise (Euclidean) distances across all entries between two matrices?
I think the appropriate solution is to abstract this functionality into a function. That function may well be templated; and it may well use a loop - the loop will be really short, after all. Many matrix operations are implemented using loops - that's not a problem.
For example, given your example of...
MatrixXd p0(2, 4);
p0 <<
1, 23, 6, 9,
3, 11, 7, 2;
MatrixXd p1(2, 2);
p1 <<
2, 20,
3, 10;
then we can construct a matrix D such that D(i,j) = |p0(i) - p1(j)|^2
MatrixXd D(p0.cols(), p1.cols());
for (int i = 0; i < p1.cols(); i++)
    D.col(i) = (p0.colwise() - p1.col(i)).colwise().squaredNorm().transpose();
I think this is fine - we can use some broadcasting to avoid 2 levels of nesting: we iterate over p1's points, but not over p0's points, nor over their dimensions.
However, you can make a oneliner if you observe that |p0(i) - p1(j)|^2 = |p0(i)|^2 + |p1(j)|^2 - 2 p0(i)^T p1(j). In particular, the last component is just matrix multiplication, so D = -2 p0^T p1 + ...
The blank left to be filled is composed of a component that only depends on the row; and a component that only depends on the column: these can be expressed using rowwise and columnwise operations.
The final "oneliner" is then:
D = ( (p0.transpose() * p1 * -2
      ).colwise() + p0.colwise().squaredNorm().transpose()
    ).rowwise() + p1.colwise().squaredNorm();
You could also replace the rowwise/colwise trickery with an (outer) product with a vector of ones.
Both methods result in the following (squared) distances:
1 410
505 10
32 205
50 185
You'd have to benchmark which is fastest, but I wouldn't be surprised to see the loop win, and I expect that's more readable too.
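For reference, here is a self-contained sketch of my own that assembles the fragments above into a compilable program and computes the same 4x2 distance matrix both ways:
#include <iostream>
#include <Eigen/Dense>

using Eigen::MatrixXd;

int main() {
    MatrixXd p0(2, 4), p1(2, 2);
    p0 << 1, 23, 6, 9,
          3, 11, 7, 2;
    p1 << 2, 20,
          3, 10;

    // Loop version: D(i, j) = |p0.col(i) - p1.col(j)|^2.
    MatrixXd D(p0.cols(), p1.cols());
    for (int j = 0; j < p1.cols(); j++)
        D.col(j) = (p0.colwise() - p1.col(j)).colwise().squaredNorm().transpose();

    // "Oneliner" version, based on |a - b|^2 = |a|^2 + |b|^2 - 2 a^T b.
    MatrixXd E = ((p0.transpose() * p1 * -2).colwise()
                  + p0.colwise().squaredNorm().transpose()).rowwise()
                 + p1.colwise().squaredNorm();

    std::cout << D << std::endl << std::endl << E << std::endl;  // both print the same 4x2 matrix
    return 0;
}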
Eigen is more of a headache than I thought at first sight.
There is no reshape() functionality, for example (and conservativeResize is something else).
It also seems (I'd like to be corrected) that Map does not just offer a view on the data; assignments to temporary variables seem to be required.
The minCoeff function after the colwise operator cannot return both a minimum element and the index of that element.
It is unclear to me whether replicate actually allocates duplicates of the data. The whole point of broadcasting is that this should not be required.
// Assuming, for this snippet, typedefs along the lines of:
//   typedef Eigen::MatrixXd matrix_t;
//   typedef Eigen::VectorXd column_vector_t;
matrix_t data(2, 4);
matrix_t means(2, 2);
// data points
data << 1, 23, 6, 9,
        3, 11, 7, 2;
// means
means << 2, 20,
         3, 10;

std::cout << "Data: " << std::endl;
std::cout << data.replicate(2, 1) << std::endl;

column_vector_t temp1(4);
temp1 = Eigen::Map<column_vector_t>(means.data(), 4);
std::cout << "Means: " << std::endl;
std::cout << temp1.replicate(1, 4) << std::endl;

matrix_t temp2(4, 4);
temp2 = (data.replicate(2, 1) - temp1.replicate(1, 4));
std::cout << "Differences: " << std::endl;
std::cout << temp2 << std::endl;

matrix_t temp3(2, 8);
temp3 = Eigen::Map<matrix_t>(temp2.data(), 2, 8);
std::cout << "Remap to 2xF: " << std::endl;
std::cout << temp3 << std::endl;

matrix_t temp4(1, 8);
temp4 = temp3.colwise().squaredNorm();
std::cout << "Squared norm: " << std::endl;
std::cout << temp4 << std::endl; //.minCoeff(&index);

matrix_t temp5(2, 4);
temp5 = Eigen::Map<matrix_t>(temp4.data(), 2, 4);
std::cout << "Squared norm result, the distances: " << std::endl;
std::cout << temp5.transpose() << std::endl;

//matrix_t::Index x, y;
std::cout << "Cannot get the indices: " << std::endl;
std::cout << temp5.transpose().colwise().minCoeff() << std::endl; // .minCoeff(&x, &y);
This is not a nice oneliner, and it seems overkill just to compare every column in data with every column in means and return a matrix with their differences. However, Eigen does not seem to be versatile enough to let this be written much shorter.