I have the following MATLAB code which I want to port to C++.
Assume Gr is a 2D matrix and 1/newScale == 0.5:
Gr = imresize(Gr, 1 / newScale);
From the MATLAB documentation:
B = imresize(A, scale) returns image B that is scale times the size of
A. The input image A can be a grayscale, RGB, or binary image. If
scale is between 0 and 1.0, B is smaller than A. If scale is greater
than 1.0, B is larger than A.
So this means I will get a 2D matrix of size matrix_width/2 by matrix_height/2.
How do I calculate the values? According to the docs, the default comes from bicubic interpolation over the nearest 4x4 neighborhood.
I can't find sample C++ code that does the same. Can you please provide a link to such code?
I also found this OpenCV function, resize.
Does it do the same as the MATLAB one?
Yes, just be aware that MATLAB's imresize has anti-aliasing enabled by default:
imresize(A,scale,'bilinear')
vs. what you would get with cv::resize(), which does not have anti-aliasing:
imresize(A,scale,'bilinear','AntiAliasing',false)
And as Amro mentioned, the default in MATLAB is bicubic, so be sure to specify.
Bilinear
No code modifications are necessary to get matching results with bilinear interpolation.
Example OpenCV snippet:
cv::Mat src(4, 4, CV_32F);
for (int i = 0; i < 16; ++i)
    src.at<float>(i) = static_cast<float>(i); // fill 0..15 (valid because src is continuous)
std::cout << src << std::endl;
cv::Mat dst;
cv::resize(src, dst, cv::Size(0, 0), 0.5, 0.5, cv::INTER_LINEAR);
std::cout << dst << std::endl;
Output (OpenCV)
[0, 1, 2, 3;
4, 5, 6, 7;
8, 9, 10, 11;
12, 13, 14, 15]
[2.5, 4.5;
10.5, 12.5]
MATLAB
>> M = reshape(0:15,4,4).';
>> imresize(M,0.5,'bilinear','AntiAliasing',true)
ans =
3.125 4.875
10.125 11.875
>> imresize(M,0.5,'bilinear','AntiAliasing',false)
ans =
2.5 4.5
10.5 12.5
Note that the OpenCV result matches MATLAB's only with anti-aliasing turned off.
Bicubic Difference
However, between 'bicubic' and INTER_CUBIC the results differ on account of the weighting scheme! See here for details on the mathematical difference. The issue is in the interpolateCubic() function that computes the cubic interpolant's coefficients, where a constant of a = -0.75 is used rather than a = -0.5 as in MATLAB. However, if you edit imgwarp.cpp and change the code:
static inline void interpolateCubic( float x, float* coeffs )
{
const float A = -0.75f;
...
to:
static inline void interpolateCubic( float x, float* coeffs )
{
const float A = -0.50f;
...
and rebuild OpenCV (tip: disable CUDA and the gpu module for short compile time), then you get the same results:
MATLAB
>> imresize(M,0.5,'bicubic','AntiAliasing',false)
ans =
2.1875 4.3125
10.6875 12.8125
OpenCV
[0, 1, 2, 3;
4, 5, 6, 7;
8, 9, 10, 11;
12, 13, 14, 15]
[2.1875, 4.3125;
10.6875, 12.8125]
More about cubic HERE.
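For reference, here is a sketch of the coefficient computation in the form OpenCV uses (the Keys cubic convolution kernel), with the constant lifted into a parameter so the two conventions are easy to compare. The function name is my own; the arithmetic mirrors interpolateCubic() in imgwarp.cpp:
// Cubic convolution weights for a fractional offset x in [0, 1).
// A = -0.75f reproduces OpenCV's INTER_CUBIC; A = -0.5f matches MATLAB's 'bicubic'.
static inline void interpolateCubicA(float x, float* coeffs, float A)
{
    coeffs[0] = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A;
    coeffs[1] = ((A + 2)*x - (A + 3))*x*x + 1;
    coeffs[2] = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1;
    coeffs[3] = 1.f - coeffs[0] - coeffs[1] - coeffs[2]; // the four weights sum to 1
}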
In OpenCV, the call would be:
cv::Mat dst;
cv::resize(src, dst, cv::Size(0, 0), 0.5, 0.5, cv::INTER_CUBIC);
You might then have to do some smoothing/blurring to emulate the anti-aliasing which MATLAB also performs by default (see @chappjc's answer).
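For instance, a rough approximation (an assumption on my part, not MATLAB's exact anti-aliasing filter) is to low-pass filter before downsampling; the sigma here is a heuristic tied to the scale factor:
cv::Mat blurred, dst;
// Heuristic: for a 0.5x downscale, a sigma around 1.0 tames most aliasing.
cv::GaussianBlur(src, blurred, cv::Size(0, 0), 1.0);
cv::resize(blurred, dst, cv::Size(0, 0), 0.5, 0.5, cv::INTER_CUBIC);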
Related
I am working on a C++ codebase right now which uses a matrix library to calculate various things. One of those things is calculating the inverse of a matrix. It uses Gaussian elimination to achieve that. But the result is very inaccurate, so much so that multiplying the inverse matrix with the original matrix isn't even close to the identity matrix.
Here is the code that is used to calculate the inverse, the matrix is templated on a numerical type and the rows and columns:
/// \brief Take the inverse of the matrix.
/// \return A new matrix which is the inverse of the current one.
matrix<T, M, M> inverse() const
{
static_assert(M == N, "Inverse matrix is only defined for square matrices.");
// Augment the current matrix with the identity matrix.
auto augmented = this->augment(matrix<T, M, M>::get_identity());
for (std::size_t i = 0; i < M; i++)
{
// divide the current row by the diagonal element.
auto divisor = augmented[i][i];
for (std::size_t j = 0; j < 2 * M; j++)
{
augmented[i][j] /= divisor;
}
// For the currently selected diagonal element, zero out every other
// element in its column by subtracting multiples of the selected row.
for (std::size_t j = 0; j < M; j++)
{
if (i == j)
{
continue;
}
auto multiplier = augmented[j][i];
for (std::size_t k = 0; k < 2 * M; k++)
{
augmented[j][k] -= multiplier * augmented[i][k];
}
}
}
// Slice off the inverse, which now occupies the right half of the augmented matrix.
return augmented.template slice<0, M, M, M>();
}
Now I have made a unit test which checks the inverse against precomputed values. I try two matrices, one 3x3 and one 4x4. I used this website to compute the inverses: https://matrix.reshish.com/ and they do match to a certain degree, since the unit test succeeds. But once I calculate the original matrix times the inverse, nothing even resembling an identity matrix comes out. See the comments in the code below.
BOOST_AUTO_TEST_CASE(matrix_inverse)
{
auto m1 = matrix<double, 3, 3>({
{7, 8, 9},
{10, 11, 12},
{13, 14, 15}
});
auto inverse_result1 = matrix<double,3, 3>({
{264917625139441.28, -529835250278885.3, 264917625139443.47},
{-529835250278883.75, 1059670500557768, -529835250278884.1},
{264917625139442.4, -529835250278882.94, 264917625139440.94}
});
auto m2 = matrix<double, 4, 4>({
{7, 8, 9, 23},
{10, 11, 12, 81},
{13, 14, 15, 11},
{1, 73, 42, 65}
});
auto inverse_result2 = matrix<double, 4, 4>({
{-0.928094660194201, 0.21541262135922956, 0.4117111650485529, -0.009708737864078209},
{-0.9641231796116679, 0.20979975728155775, 0.3562651699029188, 0.019417475728154842},
{1.7099261731391882, -0.39396237864078376, -0.6169346682848 , -0.009708737864076772 },
{-0.007812499999999244, 0.01562499999999983, -0.007812500000000278, 0}
});
// std::cout << (m1.inverse() * m1) << std::endl;
// results in
// 0.500000000 1.000000000 -0.500000000
// 1.000000000 0.000000000 0.500000000
// 0.500000000 -1.000000000 1.000000000
// std::cout << (m2.inverse() * m2) << std::endl;
// results in
// 0.396541262 -0.646237864 -0.689016990 -2.162317961
// 1.206917476 2.292475728 1.378033981 3.324635922
// -0.884708738 -0.958737864 -0.032766990 -3.756067961
// -0.000000000 -0.000000000 -0.000000000 1.000000000
BOOST_REQUIRE_MESSAGE(
m1.inverse().fuzzy_equal(inverse_result1, 0.1) == true,
"3x3 inverse is not the expected result."
);
BOOST_REQUIRE_MESSAGE(
m2.inverse().fuzzy_equal(inverse_result2, 0.1) == true,
"4x4 inverse is not the expected result."
);
}
I am at my wits' end. I am by no means a specialist in matrix math, since I had to learn it all on the job, but this really is stumping me.
The complete code matrix class is available at:
https://codeshare.io/johnsmith
Line 404 is where the inverse function is located.
Any help is appreciated.
As already established in the comments, the matrix of interest is singular, and thus there is no inverse.
Great, your testing already found the first issue in the code: this case isn't handled properly, and no error is raised.
The bigger problem is that this is not easy to detect. If there were no rounding errors, it would be a piece of cake: just test that divisor isn't 0! But floating-point operations do have rounding errors, so divisor will be a very small nonzero number.
And there is no way to tell whether this nonzero value is due to rounding errors or due to the matrix being nearly singular (but not singular). However, if the matrix is nearly singular it is poorly conditioned, and the results cannot be trusted anyway.
So ideally, the algorithm should not only calculate the inverse but also estimate the condition of the original matrix, so the caller can react to a bad condition.
It is probably wise to use well-known and well-tested libraries for this kind of calculation; there is a lot to consider and a lot that can go wrong. A sketch of the minimal safeguards follows.
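To illustrate those safeguards (a sketch only, not the asker's matrix class; the pivot threshold 1e-12 is an arbitrary assumption, and a production implementation should use a tested library such as Eigen or LAPACK with a proper condition estimate): Gauss-Jordan elimination with partial pivoting that refuses (near-)singular input instead of dividing by a tiny pivot.
#include <array>
#include <cmath>
#include <cstddef>
#include <optional>

template <typename T, std::size_t M>
std::optional<std::array<std::array<T, M>, M>>
safe_inverse(std::array<std::array<T, M>, M> a)
{
    // Start from the identity; the row operations turn it into the inverse.
    std::array<std::array<T, M>, M> inv{};
    for (std::size_t i = 0; i < M; ++i)
        inv[i][i] = T(1);
    for (std::size_t i = 0; i < M; ++i)
    {
        // Partial pivoting: swap in the row with the largest |pivot|.
        std::size_t p = i;
        for (std::size_t r = i + 1; r < M; ++r)
            if (std::abs(a[r][i]) > std::abs(a[p][i]))
                p = r;
        std::swap(a[i], a[p]);
        std::swap(inv[i], inv[p]);
        // Bail out on a (near-)zero pivot instead of dividing by it.
        if (std::abs(a[i][i]) < T(1e-12))
            return std::nullopt;
        const T divisor = a[i][i];
        for (std::size_t j = 0; j < M; ++j)
        {
            a[i][j] /= divisor;
            inv[i][j] /= divisor;
        }
        // Eliminate the pivot column from all other rows.
        for (std::size_t r = 0; r < M; ++r)
        {
            if (r == i)
                continue;
            const T mult = a[r][i];
            for (std::size_t j = 0; j < M; ++j)
            {
                a[r][j] -= mult * a[i][j];
                inv[r][j] -= mult * inv[i][j];
            }
        }
    }
    return inv;
}
Applied to the singular 3x3 matrix from the test, this returns std::nullopt instead of the garbage values the test currently accepts.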
I'm trying to implement a simple neural network example in OpenCV version 3.0.0, according to the latest reference. To keep things simple, I use 15 random examples from the iris data set for training. I have also reduced the output species to 2, just to make things much simpler.
where trainData, and trainLabels are declared as:
Mat trainData(15, 4, CV_32FC1); //15 examples with 4 features each
Mat trainLabels(15, 1, CV_32FC1);
trainData:
[5.5, 3.5, 1.3, 0.2;
6.5, 2.8, 4.5999999, 1.5;
6.3000002, 2.3, 4.4000001, 1.3;
6, 2.2, 4, 1;
4.5999999, 3.0999999, 1.5, 0.2;
5, 3.2, 1.2, 0.2;
7.4000001, 2.8, 6.0999999, 1.9;
6, 2.9000001, 4.5, 1.5;
5, 3.4000001, 1.5, 0.2;
6.4000001, 2.9000001, 4.3000002, 1.3;
7.1999998, 3.5999999, 6.0999999, 2.5;
5.0999999, 3.3, 1.7, 0.5;
7.1999998, 3, 5.8000002, 1.6;
6.0999999, 2.8, 4, 1.3;
5.8000002, 2.7, 4.0999999, 1]
trainLabels:
[0;
0;
0;
0;
0;
0;
1;
0;
0;
0;
1;
0;
1;
0;
0]
The neural network code compiles and runs without error up to predict. Here is the snippet:
Ptr< ANN_MLP > nn = ANN_MLP::create();
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);
nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, (int)100000, 1e-6));
//setting the NN layer size
cv::Mat layers = cv::Mat(4, 1, CV_32SC1);
layers.row(0) = cv::Scalar(4);
layers.row(1) = cv::Scalar(4);
layers.row(2) = cv::Scalar(4);
layers.row(3) = cv::Scalar(1);
nn->setLayerSizes(layers);
nn->train(trainData, ROW_SAMPLE, trainLabels);
But whenever I try to do "predict", I get a "Segmentation fault (core dumped)" error:
nn->predict(trainData.row(1));
What is the problem here, and how can I fix it? Thank you.
For reference in Python, I use: trainData.getTestResponses(), where trainData is the entire struct of my input data.
I hope this helps... OpenCV 3's new structure confused me at first, but I've come to appreciate how things are done now.
Try switching the order of the setActivationFunction() and setLayerSizes() calls: you should call setLayerSizes() before setActivationFunction(). That was the solution in my case (OpenCV 3.1). A corrected ordering is sketched below.
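For illustration, a minimal sketch of the asker's setup with the calls reordered (assuming the same trainData and trainLabels as above):
cv::Ptr<cv::ml::ANN_MLP> nn = cv::ml::ANN_MLP::create();
// Define the topology first ...
cv::Mat layers = (cv::Mat_<int>(4, 1) << 4, 4, 4, 1);
nn->setLayerSizes(layers);
// ... then the activation function and training parameters.
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);
nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100000, 1e-6));
nn->train(trainData, cv::ml::ROW_SAMPLE, trainLabels);
cv::Mat response;
nn->predict(trainData.row(1), response); // no longer segfaults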
I have an integer matrix and I want to perform an integer division on it, but OpenCV always rounds the result.
I know I can divide each element manually, but I want to know whether there is a better way.
Mat c = (Mat_ <int> (1,3) << 80,71,64 );
cout << c/8 << endl;
// result
//[10, 9, 8]
// desired result
//[10, 8, 8]
Similar to @GPPK's Option 2, you can hack it by:
Mat tmp, dst;
c.convertTo(tmp, CV_64F);
tmp = tmp / 8 - 0.5; // subtract 0.5 so the round-to-nearest in convertTo truncates instead
tmp.convertTo(dst, CV_32S);
cout << dst;
The problem is with using ints: you can't have decimal points with ints, so I'm not sure how you are expecting not to get rounding.
You really have two options here; I do not think you can do this without using one of them:
You accept the rounded int matrix division: [10, 9, 8]
You spin up your own divide function in order to give you the result you want.
Option 2:
Pseudocode:
Create a double matrix
perform the division to get the output [10.0, 8.875, 8.0]
strip away any numbers after a decimal point [10.0, 8.0, 8.0]
(optional) write these values back to a int matrix
(result) [10, 8, 8]
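A minimal sketch of that pseudocode with OpenCV types (the std::trunc loop is one possible way to drop the fractional part):
#include <opencv2/core.hpp>
#include <cmath>
#include <iostream>

int main()
{
    cv::Mat c = (cv::Mat_<int>(1, 3) << 80, 71, 64);
    cv::Mat tmp;
    c.convertTo(tmp, CV_64F); // [80, 71, 64] as doubles
    tmp /= 8.0;               // [10, 8.875, 8]
    // Strip everything after the decimal point.
    for (int i = 0; i < tmp.rows; ++i)
        for (int j = 0; j < tmp.cols; ++j)
            tmp.at<double>(i, j) = std::trunc(tmp.at<double>(i, j));
    cv::Mat dst;
    tmp.convertTo(dst, CV_32S); // [10, 8, 8]
    std::cout << dst << std::endl;
    return 0;
}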
I am using C++ and OpenCV. I have to obtain a transformation matrix by multiplying a matrix A with another matrix B. But matrix B needs to change before multiplying it with A. If B is a 2x3 matrix, it needs to become a 3x3 matrix whose first 2 rows contain the same elements as the original B, and whose last row is all 1's. More simply put, I need to append a row of 1's to the original B matrix. I want to know whether I can achieve this with any specific Mat operation. Thank you.
You need to use Mat::push_back, which adds rows to the bottom of the matrix.
For example
Mat A = (Mat_<uchar>(3,4) << 1, 2, 3, 4,
                             5, 6, 7, 8,
                             9, 10, 11, 12); // 3x4 matrix
Mat B = (Mat_<uchar>(1,4) << 13, 14, 15, 16); // 1x4 matrix
A.push_back(B); // Now A becomes a 4x4 matrix
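Applied to the question (a sketch, assuming B holds floats): appending a row of ones turns a 2x3 B into the required 3x3 matrix:
Mat B = (Mat_<float>(2,3) << 1, 2, 3,
                             4, 5, 6);
B.push_back(Mat(Mat::ones(1, 3, CV_32F))); // B is now 3x3 with a last row of 1's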
A straightforward way, but probably not the fastest or prettiest:
Mat B_new(3,3,CV_32F);
B.row(0).copyTo(B_new.row(0)); // note: plain "B_new.row(0) = B.row(0)" would only rebind the header, not copy data
B.row(1).copyTo(B_new.row(1));
B_new.row(2).setTo(1); // last row of ones
You should take a look at the Mat type documentation.
I'm trying to use the LogPolar transform to obtain the scale and the rotation angle between two images. Below are two 300x300 sample images. The first rectangle is 100x100, and the second rectangle is 150x150, rotated by 45 degrees.
The algorithm:
Convert both images to LogPolar.
Find the translational shift using Phase Correlation.
Convert the translational shift to scale and rotation angle (how to do this?).
My code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgproc/imgproc_c.h>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
int main()
{
cv::Mat a = cv::imread("rect1.png", 0);
cv::Mat b = cv::imread("rect2.png", 0);
if (a.empty() || b.empty())
return -1;
cv::imshow("a", a);
cv::imshow("b", b);
cv::Mat pa = cv::Mat::zeros(a.size(), CV_8UC1);
cv::Mat pb = cv::Mat::zeros(b.size(), CV_8UC1);
IplImage ipl_a = a, ipl_pa = pa;
IplImage ipl_b = b, ipl_pb = pb;
cvLogPolar(&ipl_a, &ipl_pa, cvPoint2D32f(a.cols >> 1, a.rows >> 1), 40);
cvLogPolar(&ipl_b, &ipl_pb, cvPoint2D32f(b.cols >> 1, b.rows >> 1), 40);
cv::imshow("logpolar a", pa);
cv::imshow("logpolar b", pb);
cv::Mat pa_64f, pb_64f;
pa.convertTo(pa_64f, CV_64F);
pb.convertTo(pb_64f, CV_64F);
cv::Point2d pt = cv::phaseCorrelate(pa_64f, pb_64f);
std::cout << "Shift = " << pt
<< "Rotation = " << cv::format("%.2f", pt.y*180/(a.cols >> 1))
<< std::endl;
cv::waitKey(0);
return 0;
}
The log polar images:
For the sample images above, the translational shift is (16.2986, 36.9105). I have successfully obtained the rotation angle, which is 44.29. But I have difficulty in calculating the scale. How do I convert the given translational shift to obtain the scale?
You have two images f1, f2 with f1(m, n) = f2(m/a, n/a), that is, f1 is scaled by factor a.
In logarithmic coordinates this is equivalent to f1(log m, log n) = f2(log m − log a, log n − log a), where log a is the shift in your phase-correlated image.
Compare B. S. Reddy, B. N. Chatterji: "An FFT-Based Technique for Translation, Rotation and Scale-Invariant Image Registration", IEEE Transactions on Image Processing, Vol. 5, No. 8, IEEE, 1996.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.4387&rep=rep1&type=pdf
Here is a Python version, which computes:
ir = abs(ifft2((f0 * f1.conjugate()) / r0))
i0, i1 = numpy.unravel_index(numpy.argmax(ir), ir.shape)
angle = 180.0 * i0 / ir.shape[0]
scale = log_base ** i1
The value for the scale factor is indeed exponential in the shift, but it comes from pt.x (the radial axis), not pt.y. Since you used a "magnitude scale parameter" of 40 for the cvLogPolar function, you need to divide pt.x by 40 to get the correct value for the displacement:
Scale = exp( pt.x / 40) = exp(16.2986 / 40) = 1.503
The value of the "magnitude scale parameter" for the cvLogPolar function does not affect the displacement produced by the rotation angle (pt.y), because according to the math it cancels out. For that reason, your formula for the rotation gives the correct value.
On another note, I believe the formula for the rotation should actually be:
Rotation = pt.y*360/(a.cols)
But, for some strange reason, the ">> 1" that you added causes the result to be multiplied by 2 (which I believe you compensated for by multiplying by 180 instead of 360?). Remove it and you'll see what I mean.
Also, ">>1" is causing a division by 2 in:
cvPoint2D32f(a.cols >> 1, a.rows >> 1)
If you set the center parameter of the cvLogPolar function to the center of the image (which is what you want):
cvPoint2D32f(a.cols/2, a.rows/2)
and
cvPoint2D32f(b.cols/2, b.rows/2)
then, you'll also get the correct value for the rotation (i.e. the same value that you got), and for the scale.
This thread was helpful in getting me started on rotation-invariant phase correlation, so I hope my input will help resolve any lingering issues.
We aim to calculate the scale and rotation (which the code above calculates incorrectly). Let's start by gathering the equations from the logPolar docs, which state the following:
(1) I = (dx,dy) = (x-center.x, y-center.y)
(2) rho = M * ln(magnitude(I))
(3) phi = Ky * angle(I)_0..360
Note: rho is pt.x and phi is pt.y in the code above
We also know that
(4) M = src.cols/ln(maxRadius)
(5) Ky = src.rows/360
First, let's solve for scale. Solving for magnitude(I) (i.e. scale) in equation 2, we get
(6) magnitude(I) = scale = exp(rho/M)
Then we substitute for M and simplify to get
(7) magnitude(I) = scale = exp(rho*ln(maxRadius)/src.cols) = pow(maxRadius, rho/src.cols)
Now let's solve for rotation. Solving for angle(I) (i.e. rotation) in equation 3, we get
(8) angle(I) = rotation = phi/Ky
Then we substitute for Ky and simplify to get
(9) angle(I) = rotation = phi*360/src.rows
So, scale and rotation can be calculated using equations 7 and 9, respectively. It might be worth noting that you should use equation 4 for calculating M, and Point2f center( (float)a.cols/2, (float)a.rows/2 ) for calculating the center, as opposed to what is in the code above. There are good bits of info in this logpolar example opencv code. A worked sketch follows.
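For concreteness, a minimal sketch plugging the question's numbers into equations 6 and 9 (M = 40 as passed to cvLogPolar, 300x300 images, pt = (16.2986, 36.9105)):
#include <cmath>
#include <cstdio>

int main()
{
    const double M = 40.0;                     // magnitude scale parameter
    const int rows = 300;                      // src.rows
    const double rho = 16.2986, phi = 36.9105; // pt.x and pt.y from phaseCorrelate
    double scale    = std::exp(rho / M);       // eq. (6): ~1.503
    double rotation = phi * 360.0 / rows;      // eq. (9): ~44.29 degrees
    std::printf("scale = %.3f, rotation = %.2f\n", scale, rotation);
    return 0;
}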
From the values returned by phase correlation, the coordinates are rectangular, hence (16.2986, 36.9105) is (x, y). The scale is calculated as
scale = log10((x^2 + y^2)^0.5), which is approximately 1.6 (near to 1.5).
When we calculate the angle using the formula theta = arctan(y/x), we get approximately 66.
The theta value is way off the real value (45 in this case).