offset converting float to uchar in cv::Mat - c++

In the process of speeding up some operations (can't name them, sorry), I tried to create a
cv::Mat_<uchar> discretization;
Now, when I get a depth map in float,
cv::Mat_<float> depth_map;
discretization = depth_map / resolution_mtr;
where resolution_mtr is a float, currently 0.1.
When I do this, for a value of, say, 0.48 in the depth map, I get a discretization value of 5. My understanding says it should be 4. I guess the result is rounded off to the nearest uchar. Is there a way around this without resorting to a for loop?
Basically, I want floor values in the discretization, not rounded ones.

Why not define an inherited class CvNoRoundMat and override its operator+?

You can just subtract 0.5 from the result.
This code
float resolution_mtr = 0.1;
float vals[] = {0.48, 0.4, 0.38, 0.31};
cv::Mat_<float> depth_map(1,4,vals);
cv::Mat_<uchar> discretization( depth_map / resolution_mtr - 0.5);
std::cout << "depth_map: " << depth_map << std::endl;
std::cout << "discretization: " << discretization << std::endl;
will give you the following results:
depth_map: [0.47999999, 0.40000001, 0.38, 0.31]
discretization: [4, 4, 3, 3]
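If you would rather not rely on the -0.5 trick, another option is to floor the scaled values explicitly before the uchar conversion. A minimal sketch, reusing the data above; it assumes OpenCV 3 or newer, since it uses Mat::forEach to avoid an explicit for loop:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main()
{
    float resolution_mtr = 0.1f;
    float vals[] = {0.48f, 0.4f, 0.38f, 0.31f};
    cv::Mat_<float> depth_map(1, 4, vals);

    // Scale, floor every element in place, then let the uchar conversion happen.
    cv::Mat_<float> scaled = depth_map / resolution_mtr;
    scaled.forEach([](float &p, const int * /*pos*/) { p = std::floor(p); });
    cv::Mat_<uchar> discretization(scaled);

    std::cout << "discretization: " << discretization << std::endl;  // [4, 4, 3, 3]
    return 0;
}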

Related

SiftGPU and opencv::FundamentalMat

I'm trying to use cv::findFundamentalMat, but when I try to get the 4th argument (which should be:
Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.
)
it only gives me 0's.
I'm using siftGPU to generate the keypoints (x,y) that are used in the function.
My code:
/*
... Use siftgpu
*/
std::vector<int(*)[2]> match_bufs; // Pairs of matched keypoint indices from the two images
// (the lines below run inside a loop over the matches, with loop index i)
SiftGPU::SiftKeypoint & key1 = keys[match_bufs[i][0]];
SiftGPU::SiftKeypoint & key2 = keys[match_bufs[i][1]];
float x_l, y_l, x_r, y_r; //(x,y of left and right images)
x_l = key1.x; y_l = key1.y;
x_r = key2.x; y_r = key2.y;
vec1.push_back(x_l); vec1.push_back(y_l);
vec2.push_back(x_r); vec2.push_back(y_r);
std::vector<uchar> results;
int size = vec1.size();
results.resize(size);
std::vector<cv::Point2f> points1;
std::vector<cv::Point2f> points2;
for (int i = 0; i < size; i+=2) {
points1.push_back(cv::Point2f(vec1[i], vec1[i + 1]));
points2.push_back(cv::Point2f(vec2[i], vec2[i + 1]));
}
cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, results);
then,
std::cout << std::endl << fund << std::endl;
for (int j = 0; j < results.size(); ++j) {
std::cout << (int)results[j];
}
fund is:
0, -0.001, 0.6
0, 0, -0.3
-0.4, 0.2, 0
and results contains only 0's.
I may be fooling myself, because the findFundamentalMat documentation says:
Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
Since I'm not a native English speaker, maybe there is something I'm missing... My (x,y) values are like (350.0, 560.0) (which are floating-point). Do I have to normalize them to [0,1], and is that what floating-point means?
Or am I missing something else?
Thanks!
(EDIT: I tried to normalize my points (dividing by the height and width of the respective images), but the results are still all 0's.)
The answer is quite simple: I have to use the right type for the template and cast it properly.
So:
((int)results.at<uchar>(i, 0) == 1)
works :)
In case it helps someone.
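To spell that out (this is my reading of the fix, not the poster's exact code): when the mask is requested as a cv::Mat instead of a std::vector<uchar>, it comes back as an N x 1 CV_8U matrix, so it has to be read with at<uchar> and cast before comparing. A minimal sketch with made-up point coordinates; CV_FM_RANSAC follows the constant used in the question (newer OpenCV versions also spell it cv::FM_RANSAC):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical matched points in plain pixel coordinates (no [0,1] normalization needed).
    std::vector<cv::Point2f> points1 = { {350.0f, 560.0f}, {102.5f, 44.0f}, {215.0f, 300.0f}, {400.0f, 120.0f},
                                         {50.0f, 75.0f}, {310.0f, 410.0f}, {280.0f, 90.0f}, {130.0f, 220.0f} };
    std::vector<cv::Point2f> points2 = { {352.0f, 558.0f}, {104.0f, 45.5f}, {216.0f, 298.0f}, {401.0f, 121.0f},
                                         {52.0f, 76.0f}, {312.0f, 408.0f}, {281.0f, 92.0f}, {131.0f, 219.0f} };

    cv::Mat mask; // inlier/outlier mask, filled by findFundamentalMat (CV_8U, one row per point)
    cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, mask);

    for (int i = 0; i < mask.rows; ++i) {
        if ((int)mask.at<uchar>(i, 0) == 1)           // cast the uchar before comparing/printing
            std::cout << "point " << i << " is an inlier\n";
    }
    return 0;
}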

OpenCV why does setting a Mat equal to a decimal less than 1 not cause all of the values in the Mat to become 0?

I need help figuring out how OpenCV handles setting a matrix equal to something.
I have an 8-bit Mat called radiance that I want to tone map. Here is working code that accomplishes this for me, with K being the constant 450.
cv::cvtColor(radiance, radiance, CV_BGR2XYZ);
radiance = (K * radiance)/(1 + (K * radiance));
cv::cvtColor(radiance, radiance, CV_XYZ2BGR);
This does not seem like it should work, but it does: it creates a fully tone-mapped image that looks great. However, if you try to apply this method to an individual pixel, the value becomes a decimal between 0 and 1, which truncates to 0. Here is an example of this:
cv::cvtColor(radiance, radiance, CV_BGR2XYZ);
int x = radiance.at<cv::Vec3b>(500, 500)[0];
x = (K * x)/(1 + (K * x));
std::cout << x << "\n";
The output of this is exactly what I would expect
0
I understand why the second snippet of code prints out a zero, but what is going on in the first part that allows it to tone map the image properly, and how can I recreate this on the individual pixel level?
Can't you just define radiance as a float matrix? For a 3-channel BGR image that would be
Mat radiance(m, n, CV_32FC3);
So you can get a float
cv::cvtColor(radiance, radiance, CV_BGR2XYZ);
float x = radiance.at<cv::Vec3f>(500, 500)[0]; // Vec3f, since the matrix now holds floats
x = (K*x)/(1 + (K*x));
std::cout << x << "\n";
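To recreate the matrix-expression result at the pixel level, one option is to work on a float copy of the whole image. A minimal sketch, assuming the image should be scaled to [0,1] before tone mapping and back to 8-bit afterwards (that scaling is my assumption, not something stated in the question):
#include <opencv2/opencv.hpp>

void toneMapPerPixel(cv::Mat &radiance, float K = 450.0f)
{
    cv::Mat xyz;
    radiance.convertTo(xyz, CV_32FC3, 1.0 / 255.0);   // 8-bit BGR -> float in [0,1]
    cv::cvtColor(xyz, xyz, CV_BGR2XYZ);

    for (int r = 0; r < xyz.rows; ++r) {
        for (int c = 0; c < xyz.cols; ++c) {
            cv::Vec3f &p = xyz.at<cv::Vec3f>(r, c);
            for (int ch = 0; ch < 3; ++ch)
                p[ch] = (K * p[ch]) / (1.0f + K * p[ch]);  // stays in [0,1), nothing truncates
        }
    }

    cv::cvtColor(xyz, xyz, CV_XYZ2BGR);
    xyz.convertTo(radiance, CV_8UC3, 255.0);           // back to 8-bit for display
}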

How to get the scale factor of getPerspectiveTransform in opencv?

I have image A and I want to get the bird's-eye view of image A, so I used the getPerspectiveTransform method to get the transform matrix. The result is a 3x3 matrix; see my code. In my case I want to know the scale factor of that 3x3 matrix. I have looked at the OpenCV documentation, but I cannot find details of the transform matrix and I don't know how to get the scale. I have also read some papers, which said we can get scaling, shearing and rotation from a11, a12, a21, a22. See the pic. So how can I get the scale factor? Can you give me some advice? And can you explain the getPerspectiveTransform output matrix? Thank you!
Points[0] = Point2f(..., ...);
Points[1] = Point2f(..., ...);
Points[2] = Point2f(..., ...);
Points[3] = Point2f(..., ...);
dst[0] = Point2f(..., ...);
dst[1] = Point2f(..., ...);
dst[2] = Point2f(..., ...);
dst[3] = Point2f(..., ...);
Mat trans = getPerspectiveTransform(Points, dst); // I want to know the scale of trans
warpPerspective(A, B, trans, img.size());
When I change the camera position, the trapezium size and position change. Currently we map it to a rectangle whose width/height are known. But I think that with the camera at a different height, the rectangle size should change, because if we map to a rectangle of the same size, the rectangle may have a different level of detail. That's why I want to know the scale from the 3x3 transform matrix. For example, if trapezium1 and trapezium2 have transform scales s1 and s2, then we can set rectangle1(width, height) = s2/s1 * rectangle2(width, height).
Ok, here you go:
H is the homography
H = T*R*S*L with
T = [1,0,tx; 0,1,ty; 0,0,1]
R = [cos(a),sin(a),0; -sin(a),cos(a),0; 0,0,1]
S = [sx,shear,0; 0,sy,0; 0,0,1]
L = [1,0,0; 0,1,0; lx,ly,1]
where tx/ty are the translation, a is the rotation angle, sx/sy are the scales, shear is the shearing factor, and lx/ly are the perspective foreshortening parameters.
If I understood right, you want to compute sx and sy.
Now, if lx and ly were both 0, it would be easy to compute sx and sy: decompose the upper-left part of H by QR decomposition, resulting in Q*R, where Q is an orthogonal matrix (= rotation matrix) and R is an upper triangular matrix ([sx, shear; 0, sy]).
h1 h2 h3
h4 h5 h6
0 0 1
=> Q*R = [h1,h2; h4,h5]
But lx and ly break this easy route, so you have to work out what the upper-left part of the matrix would look like without the influence of lx and ly.
If your whole homography is:
h1 h2 h3
h4 h5 h6
h7 h8 1
then you'll have:
Q*R =
h1-(h7*h3) h2-(h8*h3)
h4-(h7*h6) h5-(h8*h6)
So if you compute Q and R from this matrix, you can compute rotation, scale and shear easily.
I've tested this with a small C++ program:
double scaleX = (rand()%200) / 100.0;
double scaleY = (rand()%200) / 100.0;
double shear = (rand()%100) / 100.0;
double rotation = CV_PI*(rand()%360)/180.0;
double transX = rand()%100 - 50;
double transY = rand()%100 - 50;
double perspectiveX = (rand()%100) / 1000.0;
double perspectiveY = (rand()%100) / 1000.0;
std::cout << "scale: " << "(" << scaleX << "," << scaleY << ")" << "\n";
std::cout << "shear: " << shear << "\n";
std::cout << "rotation: " << rotation*180/CV_PI << " degrees" << "\n";
std::cout << "translation: " << "(" << transX << "," << transY << ")" << std::endl;
cv::Mat ScaleShearMat = (cv::Mat_<double>(3,3) << scaleX, shear, 0, 0, scaleY, 0, 0, 0, 1);
cv::Mat RotationMat = (cv::Mat_<double>(3,3) << cos(rotation), sin(rotation), 0, -sin(rotation), cos(rotation), 0, 0, 0, 1);
cv::Mat TranslationMat = (cv::Mat_<double>(3,3) << 1, 0, transX, 0, 1, transY, 0, 0, 1);
cv::Mat PerspectiveMat = (cv::Mat_<double>(3,3) << 1, 0, 0, 0, 1, 0, perspectiveX, perspectiveY, 1);
cv::Mat HomographyMatWithoutPerspective = TranslationMat * RotationMat * ScaleShearMat;
cv::Mat HomographyMat = HomographyMatWithoutPerspective * PerspectiveMat;
std::cout << "Homography:\n" << HomographyMat << std::endl;
cv::Mat DecomposedRotaScaleShear(2,2,CV_64FC1);
DecomposedRotaScaleShear.at<double>(0,0) = HomographyMat.at<double>(0,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(0,2));
DecomposedRotaScaleShear.at<double>(0,1) = HomographyMat.at<double>(0,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(0,2));
DecomposedRotaScaleShear.at<double>(1,0) = HomographyMat.at<double>(1,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(1,2));
DecomposedRotaScaleShear.at<double>(1,1) = HomographyMat.at<double>(1,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(1,2));
std::cout << "Decomposed submat: \n" << DecomposedRotaScaleShear << std::endl;
Now you can test the result using the QR matrix decomposition at http://www.bluebit.gr/matrix-calculator/
First you can try setting perspectiveX and perspectiveY to zero. You'll see that you can decompose the upper-left part of the matrix into the input values of rotation angle, shear and scale.
But if you don't set perspectiveX and perspectiveY to zero, you can take "DecomposedRotaScaleShear" and decompose it with QR.
You'll get a result page with
Q:
a a
-a a
here you can compute acos(a) to get the angle
R:
sx shear
0 sy
here you can read sx and sy directly.
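If you would rather do that last QR step in code instead of with the online calculator, here is a minimal sketch for the 2x2 case, using a single rotation chosen so that the lower-left entry of R becomes zero; the helper decompose2x2 is my addition, not part of the test program above, and it assumes sx > 0 (true for the random scales above):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

// Decompose M = Q*R with Q = [cos(a), sin(a); -sin(a), cos(a)] (the rotation
// convention used above) and R = [sx, shear; 0, sy].
void decompose2x2(const cv::Mat &M, double &angle, double &sx, double &sy, double &shear)
{
    const double m00 = M.at<double>(0,0), m01 = M.at<double>(0,1);
    const double m10 = M.at<double>(1,0), m11 = M.at<double>(1,1);

    angle = std::atan2(-m10, m00);            // chosen so that (Q^T * M)(1,0) == 0
    const double c = std::cos(angle), s = std::sin(angle);

    sx    = c * m00 - s * m10;                // R(0,0)
    shear = c * m01 - s * m11;                // R(0,1)
    sy    = s * m01 + c * m11;                // R(1,1)
}

// Usage with the test program above:
//   double angle, sx, sy, shear;
//   decompose2x2(DecomposedRotaScaleShear, angle, sx, sy, shear);
//   std::cout << "angle: " << angle*180/CV_PI << " sx: " << sx
//             << " sy: " << sy << " shear: " << shear << std::endl;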
Hope this helps and I hope there is no error ;)

About EulerAngles Conversion from Eigen C++ Library

Suppose that I have a 3-dimensional frame with rotation roll = 0, pitch = 0 and yaw = 0 about the x, y and z axes respectively.
I want the frame to rotate about the x-axis by 3.14159 (Pi), i.e. roll = Pi.
Below is the code for that situation.
The problem is that when I convert the rotation matrix back to roll, pitch and yaw, the code gives a different answer.
Instead of roll = Pi, the result is roll = 0, pitch = Pi, and yaw = Pi.
I think the RVC toolbox by Peter Corke for Matlab gives the correct answer.
Maybe something is not right with my program, or eulerAngles in Eigen works differently? Please help.
Code:
#include <iostream>
#include <Eigen/Dense>
const double PI = 3.14159265359;
int main()
{
using ::Eigen::AngleAxisd;
using ::Eigen::Matrix3d;
using ::Eigen::Vector3d;
using ::std::cout;
using ::std::endl;
Matrix3d R,Rx;
R = AngleAxisd(PI, Vector3d::UnitX())
* AngleAxisd(0, Vector3d::UnitY())
* AngleAxisd(0, Vector3d::UnitZ());
Rx = AngleAxisd(PI, Vector3d::UnitX());
cout << R << endl << endl;
cout << Rx << endl << endl;
Vector3d ea = R.eulerAngles(0,1,2);
Vector3d eax = Rx.eulerAngles(0,1,2);
cout << ea << endl << endl;
cout << eax << endl << endl;
std::cin.ignore();
return 0;
}
Output (I rounded off numbers that are very small to zero):
1 0 0
0 -1 0
0 0 -1
1 0 0
0 -1 0
0 0 -1
0
3.14159
3.14159
0
3.14159
3.14159
Euler angles are not unique. In your XYZ convention, both (0, pi, pi) and (pi, 0, 0) represent the same rotation, and both are correct. The Eigen eulerAngles method consistently chooses the solution that minimizes the first angles.
Please refer to the documentation of Eigen::eulerAngles. The various conventions of Euler angles are well documented on Wikipedia and MathWorld.
Edit:
You will get exact results if you use M_PI, which is internally defined, instead of a truncated value of PI.
The Euler-angle representation suffers from singularities, and the test case that you are trying to compare is a singular position.
You may want to use quaternions or axis-angle representation if you wish to overcome the singularities.
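For example, a minimal sketch of the axis-angle route, reusing the rotation from the question; it directly recovers a rotation of Pi about the x-axis instead of the (0, pi, pi) Euler triple:
#include <iostream>
#include <Eigen/Dense>

int main()
{
    const double PI = 3.14159265359;
    Eigen::Matrix3d R;
    R = Eigen::AngleAxisd(PI, Eigen::Vector3d::UnitX());

    // Convert the rotation matrix back to axis-angle instead of Euler angles.
    Eigen::AngleAxisd aa(R);
    std::cout << "angle: " << aa.angle() << std::endl;   // ~3.14159
    std::cout << "axis:\n" << aa.axis() << std::endl;    // ~(1, 0, 0)
    return 0;
}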
Euler angles in different orders (roll1, pitch1, yaw1 or pitch2, yaw2, roll2, ...) can result in the same rotation matrix.
Actually, the Eigen documentation gives the answer.
Read the function declaration in the Eigen documentation more carefully, and you will get the answer.
Matrix<typename MatrixBase<Derived>::Scalar, 3, 1> Eigen::MatrixBase<Derived>::eulerAngles(Index a0, Index a1, Index a2) const
Each of the three parameters a0,a1,a2 represents the respective rotation axis as an integer in {0,1,2}. For instance, in:
Vector3f ea = mat.eulerAngles(2, 0, 2);
"2" represents the z axis and "0" the x axis, etc

Is there a way to prevent rounding in OpenCV matrix division

I have an integer matrix and I want to perform an integer division on it, but OpenCV always rounds the result.
I know I can divide each element manually, but I want to know whether there is a better way to do this.
Mat c = (Mat_<int>(1,3) << 80, 71, 64);
cout << c/8 << endl;
// result
//[10, 9, 8]
// desired result
//[10, 8, 8]
Similar to @GPPK's option 2, you can hack it by:
Mat tmp, dst;
c.convertTo(tmp, CV_64F);
tmp = tmp / 8 - 0.5; // subtract 0.5 so the round-to-nearest in convertTo acts like floor
tmp.convertTo(dst, CV_32S);
cout << dst;
The problem is with using ints: you can't have decimal points with ints, so I'm not sure how you are expecting not to get rounding.
You really have two options here; I do not think you can do this without using one of them:
You accept the mathematically correct int matrix division: [10, 9, 8]
You spin up your own divide function in order to give you the result you want.
Option 2:
Pseudocode:
Create a double matrix
perform the division to get the output [10.0, 8.875, 8.0]
strip away any numbers after a decimal point [10.0, 8.0, 8.0]
(optional) write these values back to an int matrix
(result) [10, 8, 8]
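A minimal sketch of that option in code (variable names are mine), assuming a small truncation loop is acceptable once the division itself is done in floating point:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat c = (cv::Mat_<int>(1, 3) << 80, 71, 64);

    // Divide in floating point so nothing is rounded yet.
    cv::Mat d;
    c.convertTo(d, CV_64F);
    d = d / 8;                                   // [10.0, 8.875, 8.0]

    // Strip everything after the decimal point and write back to an int matrix.
    cv::Mat_<int> result(d.size());
    for (int r = 0; r < d.rows; ++r)
        for (int col = 0; col < d.cols; ++col)
            result(r, col) = static_cast<int>(d.at<double>(r, col));  // truncates toward zero

    std::cout << result << std::endl;            // [10, 8, 8]
    return 0;
}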