I am trying to do matrix operations using C++ STL containers.
There are two vectors, Y and X, of sizes m and n (m > n). I want to multiply X by a scalar and add the result to Y starting from a given index. In this process I don't want to modify X (and don't want to use std::transform on it). In fact, the X's are columns in a matrix DPtr. One version I tried is given below.
std::vector<double> D(&DPtr[index1], &DPtr[index1] + size_C);
std::transform(D.begin(), D.end(), D.begin(),
               std::bind2nd(std::multiplies<double>(), val1 * val2));
std::transform(D.begin(), D.end(), YPtr.begin() + index2,
               YPtr.begin() + index2, std::plus<double>());
Here I am copying the column into a temporary vector and doing the operations on it. Can someone help me rewrite the code in a simpler manner, where I need not copy columns into another vector? I am guessing I have to use std::for_each with a lambda expression or a function call, but I am new to C++.
Just to give a lead, I want to write it as

std::for_each(YPtr.begin() + index2, YPtr.begin() + index2 + (size_c - 1), scalarAdd);

using a function scalarAdd or a lambda expression from which I can access DPtr directly.

Also, can I write

YPtr.begin() + index2 + (size_c - 1)

as the second argument? Is it valid?
Also, imagine I made the matrix a C++ vector, where all the columns of the DPtr matrix are stored in one single std::vector D.
Visual Representation of my question
May I suggest you use a dedicated linear-algebra library like Eigen? With that you could simply write Y += X * a, and the library+compiler will figure out the best implementation for you.
Since you are using C++11, you can use std::transform together with a lambda (std::for_each is not recommended, since you need to transform your Y). Note that the lambda must capture the scalar a:

std::transform(DPtr.begin() + index1, DPtr.begin() + index1 + size_c,  // X
               YPtr.begin() + index2,                                  // Y_in
               YPtr.begin() + index2,                                  // Y_out
               [a](double x, double y_in) { return a * x + y_in; });
// y_out = a*x + y_in, for every entry.
Consider the following code:
#include <Eigen/Core>
using Matrix = Eigen::Matrix<float, 2, 2>;
Matrix func1(const Matrix& mat) { return mat + 0.5; }
Matrix func2(const Matrix& mat) { return mat / 0.5; }
func1() does not compile; you need to replace mat with mat.array() in the function body to fix it ([1]). However, func2() does compile as-is.
My question has to do with why the API is designed this way. Why are addition-with-scalar and division-by-scalar treated differently? What problems would arise if the following method were added to the Matrix class, and why haven't those problems arisen already for the operator/ method?
auto operator+(Scalar s) const { return this->array() + s; }
From a mathematics perspective, a scalar added to a matrix "should" be the same as adding the scalar only to the diagonal. That is, a math text would usually use M + 0.5 to mean M + 0.5I, for I the identity matrix. There are many ways to justify this. For example, you can appeal to the analogy I = 1, or you can appeal to the desire to say Mx + 0.5x = (M + 0.5)x whenever x is a vector, etc.
Alternatively, you could take M + 0.5 to add 0.5 to every element. This is what you think is right if you don't think of matrices from a "linear algebra mindset" and treat them as just collections (arrays) of numbers, where it is natural to just "broadcast" scalar operations.
Since there are multiple "obvious" ways to handle + between a scalar and a matrix, where someone expecting one may be blindsided by the other, it is a good idea to do as Eigen does and ban such expressions. You are then forced to signify what you want in a less ambiguous way.
The natural definition of / from the algebra perspective coincides with the array perspective, so there is no reason to ban it.
I have two Eigen::Array which have the same number of columns. One of them, a, has one row, and the other, b, has two rows.
What I want to do, is to multiply every column of b with the entry in the respective column in a, so that it behaves like this:
ArrayXXd result;
result.resizeLike(b);
for (int i = 0; i < a.cols(); ++i)
    result.col(i) = a.col(i)[0] * b.col(i);
However, it's part of a rather long expression with several of such multiplications, and I don't want to have to evaluate intermediate results in temporaries. Therefore, I'd rather get an Eigen expression of the above, like
auto expr = a * b;
This, of course, triggers an assertion, because a.rows() != b.rows().
What I tried, which works, is:
auto expr = a.replicate(2,1) * b;
However, the resulting code is very slow, so I hope there's a better option.
Eigen provides the possibility to use broadcasting for such cases. However, the one-dimensional array should first be converted into a Vector:
broadcasting operations can only be applied with an object of type Vector
This will work in your case:
RowVectorXd av = a;
ArrayXXd expr = b.rowwise() * av.array();
Edit
To avoid a copy of the data into a new vector one can use Map:
ArrayXXd expr = b.rowwise() * RowVectorXd::Map(&a(0), a.cols()).array();
I have posted the same solution to your previous question, but here is my answer again:
Define your arrays with a fixed number of rows and a dynamic number of columns; the ArrayXXd type yields an array where both the number of rows and the number of columns are dynamic.
Use the fixed-size versions of operations. This should typically give faster code.
I have a vector containing complex values (either defined as std::vector<std::complex<double>> or arma::cx_vec) and would like to convert it into a vector containing double values of twice the size. Afterwards I would like to convert it back again. Currently I use two loops (here, going from double vectors to complex vectors and back):
// x and dx are vectors containing real values, with a size of 2 * dim
arma::cx_colvec local_x(dim), local_dx(dim);
for (size_t i = 0; i < x.size(); i += 2) {
    local_x(i / 2) = std::complex<double>(x(i), x(i + 1));
}
// Do something with local_x and local_dx
for (size_t i = 0; i < 2 * dim; i += 2) {
    dx(i) = local_dx(i / 2).real();
    dx(i + 1) = local_dx(i / 2).imag();
}
// Send dx back
I can imagine that this might be rather slow. Are there other possibilities for reshaping those vectors from complex to double and back, ideally involving iterators (so that I can use algorithms such as std::transform()) instead of a loop over the size?
Background to this question is: I have complex input data which has to be put into a function A which I can not modify, but which calls a user-supplied function again (called U). This function does not support complex data types, only real types. Therefore, my intention was to flatten the vector before putting it into A, unflatten it in U, do the calculations on it, reflatten it and send it back again.
std::complex<double> is explicitly called out as something that can be treated as a double[2]:
Array-oriented access
For any object z of type complex<T>, reinterpret_cast<T(&)[2]>(z)[0] is the real part of z and reinterpret_cast<T(&)[2]>(z)[1] is the imaginary part of z.

For any pointer to an element of an array of complex<T> named p and any valid array index i, reinterpret_cast<T*>(p)[2*i] is the real part of the complex number p[i], and reinterpret_cast<T*>(p)[2*i + 1] is the imaginary part of the complex number p[i].

The intent of this requirement is to preserve binary compatibility between the C++ library complex number types and the C language complex number types (and arrays thereof), which have an identical object representation requirement.
So you can use std::vector::data() to obtain a complex<double> *, and reinterpret it as a double * with twice as many elements.
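A minimal sketch of that, relying only on the array-oriented-access guarantee quoted above (the helper name is illustrative):

```cpp
#include <complex>
#include <vector>

// View the storage of a vector<complex<double>> as a contiguous array of
// 2*N doubles. No copy is made; writes through the pointer modify the
// complex values, which is exactly what the guarantee above permits.
double* as_double_view(std::vector<std::complex<double>>& v) {
    return reinterpret_cast<double*>(v.data());
}
```

With this, the user-supplied function U can operate on the double view in place, and no flatten/unflatten loops are needed at all.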
I have a dense matrix A of size 2N*N that has to be multiplied by a matrix B, of size N*2N.
Matrix B is actually a horizontal concatenation of 2 sparse matrices, X and Y. B requires only a read-only access.
Unfortunately for me, there doesn't seem to be a concatenate operation for sparse matrices. Of course, I could simply create a matrix of size N*2N and populate it with the data, but this seems rather wasteful. It seems like there could be a way to group X and Y into some sort of matrix view.
Additional simplification in my case is that either X or Y is a zero matrix.
For your specific case, it is sufficient to multiply A by either X or Y - depending on which one is nonzero. The result will be exactly the same as the multiplication by B (simple matrix algebra).
If your result matrix is column major (the default), you can assign partial results to vertical sub-blocks like so (if X or Y is structurally zero, the corresponding sub-product is calculated in O(1)):
typedef Eigen::SparseMatrix<float> SM;

void foo(SM& out, SM const& A, SM const& X, SM const& Y)
{
    assert(X.rows() == Y.rows() && X.rows() == A.cols());
    out.resize(A.rows(), X.cols() + Y.cols());
    out.leftCols(X.cols())  = A * X;
    out.rightCols(Y.cols()) = A * Y;
}
If you really want to, you could write a wrapper class which holds references to two sparse matrices (X and Y) and implement operator*(SparseMatrix, YourWrapper) -- but depending on how you use it, it is probably better to make an explicit function call.
I am trying to find the equivalent of the tf.maximum(X, Y) function in C++. Basically, what tf.maximum() does is "Returns the max of x and y (i.e. x > y ? x : y) element-wise."
If there is a library or a built-in function, I would like to use it.
I really want to find the fastest option for that, short of looping over all the elements in a matrix or vector myself.
The main reason I want a max function is that I would like to replace any negative value with 0 in a matrix or vector in C++.
Any suggestions? Thank you in advance.
The normal C++ way of replacing values with other values is std::transform (from <algorithm>) and a function object. Note that both arguments to std::max must have the same type, so use 0.0f rather than 0 for a float:

std::transform(thing.begin(), thing.end(), thing.begin(),
               [](float value) { return std::max(0.0f, value); });