I have converted some code from the old OpenCV C API to the C++ version, and I get an error at matrix multiplication.
OpenCV Error: Sizes of input arguments do not match (The operation is neither
'array op array' (where arrays have the same size and the same number of channels),
nor 'array op scalar', nor 'scalar op array')
On the web, this error seems to be associated with having a different number of channels - mine are all 1.
What I did find different, though, is the "step" - on one it is 24, on the other it is 32.
Where does this step come from?
I created both input matrices using
cv::Mat YYY(3, 4, CV_64FC1); // step 32
cv::Mat XXX(3, 3, CV_64FC1); // step 24
Yet they seem to have different steps?
Could this be the culprit for the error in cv::multiply(XXX, YYY, DDD)?
Is it not possible to perform operations (like a mask) between different types?
Thank you
cv::multiply() performs element-wise multiplication of two matrices. As the error states, your matrices are not the same size.
You may be looking for matrix multiplication, which is accomplished via the * operator. Thus
cv::Mat DDD = XXX * YYY;
will compile and run correctly.
For the record, this has nothing (directly) to do with the step field, which for your matrices is the number of columns times sizeof(double), since your matrices are of type CV_64FC1. Things get more complicated if the matrices are not continuous, but that is not the case for you.
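A minimal sketch illustrating both points, using the sizes from the question (the scalar initializers are just placeholder values):
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat XXX(3, 3, CV_64FC1, cv::Scalar(1.0));
    cv::Mat YYY(3, 4, CV_64FC1, cv::Scalar(2.0));

    // step is the number of bytes per row:
    // 3 cols * sizeof(double) = 24, 4 cols * sizeof(double) = 32
    std::cout << XXX.step << " " << YYY.step << std::endl;

    cv::Mat DDD = XXX * YYY;        // matrix product: (3x3) * (3x4) -> 3x4
    // cv::multiply(XXX, YYY, DDD); // would abort: element-wise needs equal sizes
    return 0;
}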
I have these Eigen complex matrices:
Eigen::MatrixXcd matrix;
Eigen::VectorXcd rhs;
Eigen::VectorXcd solution;
I can successfully load values and compute the solution, but if I try:
rhs = matrix*solution;
I get compiler errors related to the overloaded "+=" operator and a double/complex conversion in the Eigen files GeneralProduct.h and CxRealAbs.h.
I get similar issues when trying to compute a residual.
Is there an explanation for this?
Help??
thanks
Kevin
According to their documentation, Eigen checks the validity of operations:
Validity of operations
Eigen checks the validity of the operations that you perform. When possible, it checks them at compile-time, producing compilation errors. These error messages can be long and ugly, but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
Matrix3f m;
Vector4f v;
v = m*v; // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time. Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off.
MatrixXf m(3,3);
VectorXf v(4);
v = m * v; // Run-time assertion failure here: "invalid matrix product"
Your RHS is a vector, and you are trying to assign to it the result of a matrix-vector product. Even from a mathematical perspective, you should ask yourself whether this is a valid operation.
Consider that M and N are matrices and A and B are vectors; what are the results of the following operations?
M = AB // outer product, not a dot product
M = AM, MA, BM, MB // yields a transformed matrix using the original matrix
M = AN, NA, BN, NB // yields a transformed matrix using a supplied matrix
A = AA, AB, BA // cross product, not the dot product
A = AM, MA, AN, NA, BM, MB, BN, NB // yields what?
So Eigen checks the validity of your operators at compile time, and it is telling you that += is not defined for these operands and that you have not provided an overloaded version of it. It simply doesn't know how to perform the intended operation. It shouldn't matter what the underlying type of the matrix and vector classes is; it pertains to the fact that the operators you are trying to use are not defined somewhere.
Edit
Here is a link that describes matrix and vector multiplication. Intuitively we would assume that it yields a vector, but from a matrix perspective that vector can be a row or a column, so the returned vector could be either a 1xM or an Mx1 matrix.
Here is a link that shows the different interpretations, even though they return the same result; this can be found in section 3.2.3 of the documentation: Matrix-Vector Product.
The number of operations differs between the two interpretations, although they provide the same final result. Without explicitly stating whether it should be a row or column product, this could lead to some ambiguity. I don't know how Eigen determines which method to use; I would assume it chooses the one with the fewest operations.
This isn't the main issue of your problem, though. At the end of the day, the compiler is still generating an error message that an overloaded operator is not defined, and I don't know where this is coming from within your code. It could be within the library itself when trying to perform the operations on its version of the complex types. Without being able to compile and run your full source code, I cannot easily determine what exactly is generating this compiler error, and I also don't know which compiler you are using.
The problem was mine. I had overloaded the * operator for std::complex * real because some earlier versions of std were incomplete. The first error listed pointed into Eigen, which led me astray.
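For illustration, the problematic overload presumably looked something like this (the exact signature is my guess, not the poster's code):
#include <complex>

// Hypothetical workaround overload for an old, incomplete std::complex;
// a global operator like this competes with the std:: one during overload
// resolution inside Eigen's product code.
std::complex<double> operator*(const std::complex<double>& a, double b)
{
    return std::complex<double>(a.real() * b, a.imag() * b);
}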
GeneralProduct.h(287): no viable overloaded '+='
CxRealAbs.h(111): cannot convert 'const Eigen::Map<Eigen::Matrix<std::complex<double>, -1, 1, 0, -1, 1>, 1, Eigen::Stride<0, 0> >' to 'double' without a conversion
Removing my overload, it compiles and runs OK. No problem with Eigen dynamic matrices.
Thanks for the replies.
Kevin
I'm using Eigen v3.2.7.
I have a medium-sized rectangular matrix X (170x17) and a column vector Y (170x1), and I'm trying to solve the system using Eigen. Octave solves this problem fine using X\Y, but Eigen returns incorrect values for these matrices (though not for smaller ones) - however, I suspect it's how I'm using Eigen rather than Eigen itself.
auto X = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>{170, 17};
auto Y = Eigen::Matrix<T, Eigen::Dynamic, 1>{170};
// Assign their values...
const auto theta = X.colPivHouseholderQr().solve(Y).eval(); // Wrong!
According to the Eigen documentation, the ColPivHouseholderQR solver is for general matrices and pretty robust, but to make sure I've also tried the FullPivHouseholderQR. The results were identical.
Is there some special magic that Octave's mldivide does that I need to implement manually for Eigen?
Update
This spreadsheet has the two input matrices, plus Octave's and my result matrices.
Replacing auto doesn't make a difference, nor would I expect it to, because construction cannot be a lazy operation. I have to call .eval() on the solve result because the next thing I do with the result matrix is access the raw data (using .data()) of tail and head operations; the expression-template versions of those block operations do not have a .data() member, so I have to force evaluation beforehand. In other words, theta is already the concrete type, not an expression template.
The result for (X*theta-Y).norm()/Y.norm() is:
2.5365e-007
And the result for (X.transpose()*X*theta-X.transpose()*Y).norm() / (X.transpose()*Y).norm() is:
2.80096e-007
As I'm currently using single precision float for my basic numerical type, that's pretty much zero for both.
According to your verifications, the solution you get is perfectly fine. If you want more accuracy, then use double-precision floating point numbers. Note that MATLAB/Octave use double precision by default.
Moreover, it may well be that your problem is not full rank, in which case it admits an infinite number of solutions. ColPivHouseholderQR picks one, somewhat arbitrarily. On the other hand, mldivide picks the minimal-norm one, which you can also obtain with Eigen::BDCSVD (Eigen 3.3) or the slower Eigen::JacobiSVD.
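A minimal sketch of the SVD route (shapes taken from the question; Random fills stand in for the real data, and double precision is used as advised above):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(170, 17);
    Eigen::VectorXd Y = Eigen::VectorXd::Random(170);

    // BDCSVD (Eigen 3.3+) yields the minimal-norm least-squares solution,
    // matching what mldivide returns for rank-deficient problems.
    Eigen::VectorXd theta =
        X.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(Y);

    std::cout << (X * theta - Y).norm() / Y.norm() << std::endl;
    return 0;
}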
I'm attempting to run the Halide FFT implementation found here to benchmark it against FFTW. I'm able to run the implementation as-is, but I've encountered some issues when digging a little deeper. The routine fails with errors for different values of H and W (the height and width of the random input image). For example, I get the following error with H=W=5:
Error at ./fft.cpp:603:
Cannot vectorize dimension n0 of function v_S1_R5$6 because the function is scheduled inline.
Aborted (core dumped)
I've been attempting to test on small image sizes (e.g. 5x5) to compare the results of the algorithms, but I can't get the algorithm to complete for any values less than 16, and even at that size checking the values is a long task. The FFT also fails for values greater than 32, seemingly not working for any non-power of 2.
Has anyone run into this issue before? Are there any other implementations of FFT in halide that work for different sized images?
For reference, I'm running the code on RHEL7 using gcc 4.8.3.
I think there are a few issues going on. First, there looks to be a bug for very small FFTs that only use one pass. I think that's what you hit in your first case.
The second issue is that W and H need to be a multiple of the vector size of your target, not necessarily that W and H need to be a power of 2. For example, W = 48, H = 32 seems to work for me. There's a further complication that for real FFTs, one dimension gets internally cut in half (this is how efficient real FFTs are implemented), so if you are on an AVX machine, that dimension must be a multiple of 16 (2x the vector width of 8 floats).
If you want to run really small FFTs, you could remove the vectorize scheduling directives; then it should work, at least for learning purposes.
However, I would point out that running 5x5 won't be very interesting, because it will be done in just one radix 5 pass, i.e. just a plain old DFT (this also appears to be broken, as you've found). 4x4 (factored into 2 radix 2 passes) will be the smallest interesting FFT. When debugging it, I often used 8x8 FFTs (radix 4, radix 2).
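Incidentally, the specific message in the question is Halide's complaint about vectorizing a Func that is scheduled inline. A tiny standalone reproduction (hypothetical Funcs, not taken from fft.cpp) looks like this:
#include "Halide.h"
using namespace Halide;

int main() {
    Var x;
    Func f("f"), g("g");
    f(x) = x * 2;
    g(x) = f(x) + 1;

    // f is inlined into g by default; vectorizing an inlined Func
    // triggers "Cannot vectorize dimension x of function f because
    // the function is scheduled inline."
    f.vectorize(x, 8);

    g.realize(16);
    return 0;
}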
I came across the function addWeighted in OpenCV, where it was mentioned that it:
Calculates the weighted sum of two arrays.
Does that mean we multiply the pixels in the first array by some weight, likewise the pixels in the second array, and then simply sum the corresponding pixel values together?
Thanks.
From the OpenCV documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html
Your answer is not completely correct (unless your gamma is 0), because you also have to add the gamma value.
Yes, as it says there in the docs:
The function addWeighted calculates the weighted sum of two arrays
as follows:
dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma)
where I is a multi-dimensional index of array elements. In case of
multi-channel arrays, each channel is processed independently.
The function can be replaced with a matrix expression:
dst = src1*alpha + src2*beta + gamma;
where saturate is the saturate_cast<>() conversion function (which performs saturation as opposed to modular arithmetic that wraps around)
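A small sketch of the formula in action, with values chosen so the second call saturates (the inputs are arbitrary placeholders):
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat a(2, 2, CV_8UC1, cv::Scalar(100));
    cv::Mat b(2, 2, CV_8UC1, cv::Scalar(200));
    cv::Mat dst;

    // dst(I) = saturate(100*0.7 + 200*0.5 + 10) = 180
    cv::addWeighted(a, 0.7, b, 0.5, 10.0, dst);

    // dst(I) = saturate(100*1.0 + 200*1.0 + 0) = 255 (clamped for CV_8U)
    cv::addWeighted(a, 1.0, b, 1.0, 0.0, dst);
    std::cout << static_cast<int>(dst.at<uchar>(0, 0)) << std::endl; // 255
    return 0;
}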
You can always check the source as well:
https://github.com/Itseez/opencv/blob/2.4/modules/core/src/arithm.cpp#L2114
The function has multiple execution paths depending on how you build it (what optimizations are available: SSE2, NEON, unrolled version, and then finally a fallback implementation) and the data types involved.
I need to calculate the rank of a 4096x4096 sparse matrix, and I use C/C++ code.
I found some libraries (like Armadillo) that do it, but they're too slow (almost 5 minutes).
I've also tried two open-source alternatives to MATLAB (FreeMat and Octave), but both crashed when I ran a test script.
Five minutes isn't that much, but I must compute the rank of something like a million matrices, so the faster the better.
Does anyone know a fast library for rank computation?
The Eigen library supports sparse matrices, try it out.
Computing the algebraic rank is O(n^3), where n is the matrix size, so it's inherently slow. You need, e.g., to perform pivoting, which is slow and inaccurate if your matrix is not well conditioned (for n = 4096, a typical matrix is very ill-conditioned).
Now, what is the rank? It is the dimension of the image. It is very difficult to compute when n is large, and it will be spoiled by any small numerical inaccuracy in the input. For n = 4096, unless you happen to have particularly well-conditioned matrices, this will prevent you from doing anything useful with a pivoting algorithm.
The best way is in fact to fix a cutoff epsilon, compute the singular values s_1 >= ... >= s_n, and take as the rank the smallest integer r such that sum(s_i^2, i > r) < epsilon^2 * sum(s_i^2).
You thus need a sparse SVD routine, e.g. from there.
This may not be faster, but at the very least it will be correct.
You can ask for only as many singular values as you need, to speed things up. This is a tough problem, and with no info on the background and how you got these matrices, there is nothing more we can do.
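As a dense illustration of the cutoff idea (Eigen's SVD classes expose a threshold-based rank(), though their criterion compares singular values against the largest one rather than using the energy sum above):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(100, 100);
    A.col(1) = A.col(0); // force a rank deficiency

    Eigen::BDCSVD<Eigen::MatrixXd> svd(A);
    svd.setThreshold(1e-10); // the cutoff epsilon
    std::cout << svd.rank() << std::endl; // prints 99
    return 0;
}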
Try the following code (the documentation is here). It is an example of calculating the rank of the matrix A with the Eigen library:
#include <Eigen/Dense>
using namespace Eigen;

MatrixXd A(2, 2);
A << 1, 0,
     1, 0;
FullPivLU<MatrixXd> luA(A);
int rank = luA.rank(); // 1: the second column is all zeros