I am just getting familiar with Boost GIL (and image processing in general) and suspect that this is simple, but I haven't found the relevant documentation.
I have a set of image views that I would like to combine with an arbitrary function. For simplicity, let's say the images are aligned (same size and locator type) and I just want to add the pixel values together. One approach would be to create a combining iterator from a zip_iterator and a transform_iterator, but I'm guessing that there are image processing algorithms that are conveniently abstracted for this purpose.
The Mandelbrot example in the documentation is probably relevant, because it computes pixel values from a function, but I'm getting lost in the details and having trouble adapting it to my case.
The only binary channel-level algorithm I can find is channel_multiply.
The algorithm you're probably looking for, though, is transform_pixels, which has a binary variant that combines two source views.
Here's the simplest example I could make.
#include <boost/gil/gil_all.hpp>
#include <boost/gil/extension/io/png_io.hpp>
#include <cassert>

namespace gil = boost::gil;

int main() {
    using Img = gil::rgba8_image_t;
    using Pix = Img::value_type;

    Img a, b;
    gil::png_read_image("/tmp/a.png", a);
    gil::png_read_image("/tmp/b.png", b);
    assert(a.dimensions() == b.dimensions());

    Img c(a.dimensions());
    // Combine the two views pixel by pixel: the lambda receives one pixel
    // from each source view and returns the pixel for the destination.
    gil::transform_pixels(view(a), view(b), view(c),
        [](gil::rgba8_ref_t pa, gil::rgba8_ref_t pb) {
            gil::red_t R;
            gil::green_t G;
            gil::blue_t B;
            gil::alpha_t A;
            return Pix(
                get_color(pa, R) + get_color(pb, R),
                get_color(pa, G) + get_color(pb, G),
                get_color(pa, B) + get_color(pb, B),
                get_color(pa, A) + get_color(pb, A)
            );
        });
    gil::png_write_view("/tmp/c.png", view(c));
}
(The original answer displayed a.png and b.png inline, both with transparent regions, and the resulting c.png; note how the transparencies carry through to the output.)
You will want to fine-tune the transformation function to do something more useful, of course.
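If you'd rather not spell out every channel, here is an untested sketch that replaces the lambda above with one written around gil::static_transform, which applies a single channel-wise operation across the pixel; it also clamps the sums, since with rgba8 the per-channel additions above can overflow:

    gil::transform_pixels(view(a), view(b), view(c),
        [](gil::rgba8_ref_t pa, gil::rgba8_ref_t pb) {
            Pix out;
            // Apply the same binary operation to each pair of channels.
            gil::static_transform(pa, pb, out, [](int ca, int cb) {
                int sum = ca + cb;
                return sum > 255 ? 255 : sum;  // clamp to the uint8 range
            });
            return out;
        });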
Consider the following code:
#include <Eigen/Core>
using Matrix = Eigen::Matrix<float, 2, 2>;
Matrix func1(const Matrix& mat) { return mat + 0.5; }
Matrix func2(const Matrix& mat) { return mat / 0.5; }
func1() does not compile; you need to replace mat with mat.array() in the function body to fix it. However, func2() does compile as-is.
My question has to do with why the API is designed this way. Why are addition-with-a-scalar and division-by-a-scalar treated differently? What problems would arise if the following method were added to the Matrix class, and why haven't those problems already arisen for the operator/ method?:
auto operator+(Scalar s) const { return this->array() + s; }
From a mathematics perspective, a scalar added to a matrix "should" be the same as adding the scalar only to the diagonal. That is, a math text would usually use M + 0.5 to mean M + 0.5I, for I the identity matrix. There are many ways to justify this. For example, you can appeal to the analogy I = 1, or you can appeal to the desire to say Mx + 0.5x = (M + 0.5)x whenever x is a vector, etc.
Alternatively, you could take M + 0.5 to add 0.5 to every element. This is the natural reading if you don't approach matrices with a "linear algebra mindset" and instead treat them as plain collections (arrays) of numbers, where it is natural to "broadcast" scalar operations.
Since there are multiple "obvious" ways to handle + between a scalar and a matrix, where someone expecting one may be blindsided by the other, it is a good idea to do as Eigen does and ban such expressions. You are then forced to signify what you want in a less ambiguous way.
The natural definition of / from an algebra perspective coincides with the array perspective (dividing by a scalar scales every entry), so there is no reason to ban it.
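To make the two readings concrete, here is a small sketch (the function names are mine, purely for illustration):

    #include <Eigen/Core>

    using Matrix = Eigen::Matrix<float, 2, 2>;

    // Element-wise reading: broadcast the scalar over every entry.
    Matrix addEverywhere(const Matrix& mat) {
        return mat.array() + 0.5f;  // array expressions convert back on return
    }

    // Linear-algebra reading: M + 0.5 taken to mean M + 0.5 * I.
    Matrix addToDiagonal(const Matrix& mat) {
        return mat + 0.5f * Matrix::Identity();
    }

The two disagree everywhere off the diagonal: the first changes all four entries, the second only the two diagonal ones.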
I have a question on why matrix multiplication is %*% in R but just * in C++.
Example:
in R script:
FunR <- function(mX, mY) {
    mZ = mX %*% mY
    mZInv = solve(mZ)
    return(mZInv)
}
in C++ script:
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

using namespace Rcpp;
using namespace arma;

// [[Rcpp::export]]
mat FunC(mat mX, mat mY) {
    mat mZ = mX * mY;
    mat mZInv = mZ.i();
    return mZInv;
}
I ask because C++ can be easily incorporated into R documents.
Also, the "*" character is used to multiply matrices in R but it is not the standard matrix product as we know it. How are you supposed to know this stuff?
R and C++ are different languages. There is no reason to expect them to share syntax. You should be more surprised when the syntax matches than when it differs.
That being said, when you have a package like Rcpp that integrates languages, there usually is some attempt to make the syntax consistent. So why not use the same operator as R in this case? Because it is not possible. The set of operators in C++ is fixed, and %*% is not in that set. The operator * is, though, so that operator was chosen. It is always better to pick something that can be chosen than to have nothing at all. :)
(In case it got missed along the way: C++ has no native support for matrix operations. There is no matrix multiplication "in C++", only in specific libraries, such as Armadillo.)
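To see why, here is a toy sketch (nothing to do with Armadillo's actual implementation): a library can overload an operator that already exists, such as *, but there is no token %*% to overload.

    #include <array>
    #include <cstddef>

    // A toy 2x2 matrix type, only to illustrate operator overloading.
    struct Mat2 {
        std::array<double, 4> v{};  // row-major storage: v[r * 2 + c]
    };

    // Giving '*' the meaning of the standard matrix product:
    Mat2 operator*(const Mat2& a, const Mat2& b) {
        Mat2 c;  // zero-initialized
        for (std::size_t r = 0; r < 2; ++r)
            for (std::size_t col = 0; col < 2; ++col)
                for (std::size_t k = 0; k < 2; ++k)
                    c.v[r * 2 + col] += a.v[r * 2 + k] * b.v[k * 2 + col];
        return c;
    }

    // 'Mat2 operator%*%(const Mat2&, const Mat2&)' simply does not parse.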
I am trying to compute colsum(N * P), where N is a sparse, 1M by 2500 matrix, and P is a dense 2500 by 1.5M matrix. I am using the Eigen C++ library with Intel's MKL library. The issue is that the matrix N*P can't actually exist in memory, it's way too big (~10 TB). My question is whether Eigen will be able to handle this computation through some combination of lazy evaluation and parallelism? It says here that Eigen won't make temporary matrices unnecessarily: http://eigen.tuxfamily.org/dox-devel/TopicLazyEvaluation.html
But does Eigen know to compute N * P in piecewise chunks that will actually fit in memory? I.e., it would have to do something like colsum(N * P_1) ++ colsum(N * P_2) ++ .. ++ colsum(N * P_n), where P is split into n different submatrices column-wise and "++" is concatenation.
I am working with 128 GB RAM.
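For reference, here is an untested sketch of the manual chunking I have in mind (colsumChunked and cols_per_block are made-up names; the block width would need tuning so one block product fits in RAM):

    #include <Eigen/Dense>
    #include <Eigen/Sparse>
    #include <algorithm>

    Eigen::RowVectorXd colsumChunked(const Eigen::SparseMatrix<double>& N,
                                     const Eigen::MatrixXd& P,
                                     Eigen::Index cols_per_block = 256) {
        Eigen::RowVectorXd result(P.cols());
        for (Eigen::Index c = 0; c < P.cols(); c += cols_per_block) {
            const Eigen::Index w = std::min(cols_per_block, P.cols() - c);
            // One block product is only N.rows() x w entries.
            Eigen::MatrixXd block = N * P.middleCols(c, w);
            result.segment(c, w) = block.colwise().sum();
        }
        return result;
    }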
I gave it a try but ended up with a bad malloc (I'm only running with 8 GB on Win8). I set up my main() and used a non-inlined colsum function I wrote.
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

using namespace Eigen;

// Declared here; the definitions tried below follow after main().
VectorXd colsum(const Eigen::SparseMatrix<double> &sparse,
                const Eigen::MatrixXd &dense);

int main(int argc, char *argv[])
{
    Eigen::MatrixXd dense = Eigen::MatrixXd::Random(1000, 100000);
    Eigen::SparseMatrix<double> sparse(100000, 1000);
    typedef Triplet<double> Trip;  // triplet value type matches the matrix scalar
    std::vector<Trip> trps(dense.rows());
    for (int i = 0; i < dense.rows(); i++)
    {
        trps[i] = Trip(20 * i, i, 2);
    }
    sparse.setFromTriplets(trps.begin(), trps.end());

    VectorXd res = colsum(sparse, dense);
    std::cout << res;
    std::cin >> argc;  // keep the console window open
    return 0;
}
The attempt was simply:
__declspec(noinline) VectorXd
colsum(const Eigen::SparseMatrix<double> &sparse, const Eigen::MatrixXd &dense)
{
    // Evaluates the full 100000 x 100000 product before the reduction.
    return (sparse * dense).colwise().sum();
}
That had a bad malloc. So it looks like you have to split it up manually on your own (unless someone else has a better solution).
EDIT
I improved the function a bit, but I get the same bad malloc:
__declspec(noinline) VectorXd
colsum(const Eigen::SparseMatrix<double> &sparse, const Eigen::MatrixXd &dense)
{
    // Taking topRows first doesn't help: the whole product is still evaluated.
    return (sparse * dense).topRows(4).colwise().sum();
}
EDIT 2
Another option would be to make the sparse matrix dense and force lazy evaluation. I don't think lazyProduct would work with a sparse matrix directly (oh well).
__declspec(noinline) VectorXd
colsum(const Eigen::SparseMatrix<double> &sparse, const Eigen::MatrixXd &dense)
{
    Eigen::MatrixXd denseSparse(sparse);  // densify so lazyProduct is available
    return denseSparse.lazyProduct(dense).colwise().sum();
}
This doesn't give me the bad malloc, but computes a lot of pointless 0*x_i expressions.
To answer your question: especially when products are involved, Eigen often evaluates parts of expressions into temporaries. In some situations this could be optimized away but is not implemented yet; in other cases, evaluating into a temporary is essentially the most efficient way to do it.
However, in your case you can simply calculate the colsum of N (a 1 x 2500 row vector) and multiply that by P, since colsum(N * P) = ones^T (N P) = (ones^T N) P = colsum(N) * P.
Maybe future versions of Eigen will be able to make this kind of optimization themselves, but most of the time it is a good idea to make problem-specific optimizations oneself before letting the computer do the rest of the work.
Btw: I'm afraid sparse.colwise() is not implemented yet, so you must compute the column sums manually. If you are lazy, you can instead compute Eigen::RowVectorXd Nsum = Eigen::RowVectorXd::Ones(N.rows()) * N; and then take Nsum * P (I have not checked it, but this might actually get optimized to near-optimal code with the most recent versions of Eigen).
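In code, an unchecked sketch of that suggestion (reusing N and P from the question; the grouping of the products is the whole point):

    #include <Eigen/Dense>
    #include <Eigen/Sparse>

    Eigen::RowVectorXd colsumViaOnes(const Eigen::SparseMatrix<double>& N,
                                     const Eigen::MatrixXd& P) {
        // ones^T * (N * P) == (ones^T * N) * P, so form the left product first.
        Eigen::RowVectorXd Nsum = Eigen::RowVectorXd::Ones(N.rows()) * N;  // 1 x 2500
        return Nsum * P;  // 1 x 1.5M -- the 1M x 1.5M product is never materialized
    }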
I am new to the Eigen library and am trying to solve a generalized eigenvalue problem. As per the documentation of the GeneralizedEigenSolver template class in the Eigen library here, I am able to get the eigenvalues but not the eigenvectors. It seems the eigenvectors() member function is not implemented. Is there any other way I can generate the eigenvectors once I know the eigenvalues? I am using Eigen 3.2.4.
It's strange that this isn't implemented; the docs suggest that it is. It's definitely worth asking on the Eigen mailing list or filing a ticket; maybe somebody is working on this and it's in the latest repository.
I have in the past used the GeneralizedSelfAdjointEigenSolver and it definitely produces eigenvectors. So if you know that both your matrices are symmetric, you can use that.
If your matrices are very small, as a quick fix you could apply the standard EigenSolver to M^{-1} A since
A x = lambda * M x <==> M^{-1} A x = lambda * x,
but obviously this requires you to compute the inverse of your right-hand side matrix which is very expensive, so this is really a last resort.
If all else fails, you could pull in a dedicated eigensolver library, say, FEAST, or use the LAPACK routines.
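For what it's worth, a minimal sketch of that last-resort reduction (small sizes, M assumed invertible):

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Generalized problem A x = lambda * M x with small random matrices.
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
        Eigen::MatrixXd M = Eigen::MatrixXd::Random(4, 4);

        // Reduce to the standard problem M^{-1} A x = lambda x.
        Eigen::EigenSolver<Eigen::MatrixXd> es(M.inverse() * A);
        std::cout << "eigenvalues:\n" << es.eigenvalues() << "\n";
        std::cout << "eigenvectors:\n" << es.eigenvectors() << std::endl;
        return 0;
    }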
It doesn't appear to be implemented yet. At the end of the compute function there is:
m_eigenvectorsOk = false;//computeEigenvectors;
indicating that they're not actually calculated. Additionally, the eigenvectors() function is commented out and looks like (note the TODO):
//template<typename MatrixType>
//typename GeneralizedEigenSolver<MatrixType>::EigenvectorsType GeneralizedEigenSolver<MatrixType>::eigenvectors() const
//{
// eigen_assert(m_isInitialized && "EigenSolver is not initialized.");
// eigen_assert(m_eigenvectorsOk && "The eigenvectors have not been computed together with the eigenvalues.");
// Index n = m_eivec.cols();
// EigenvectorsType matV(n,n);
// // TODO
// return matV;
//}
If you want eigenvalues and eigenvectors for a single matrix (the ordinary, non-generalized problem), you could use EigenSolver like this:
#include <Eigen/Dense>
#include <iostream>

int main(int argc, char *argv[]) {
    Eigen::EigenSolver<Eigen::MatrixXf> es;
    Eigen::MatrixXf A = Eigen::MatrixXf::Random(4, 4);
    es.compute(A);
    std::cout << es.eigenvectors() << std::endl;
    return 0;
}
I'm creating a circuit analysis library in C++ (also to learn C++, so I'm very new to it).
After getting familiar with Eigen, I'd like to have a matrix where each cell hosts a 3x3 complex matrix.
So far I've tried this very simple proof of principle:
typedef Eigen::MatrixXcd cx_mat;
typedef Eigen::SparseMatrix<cx_mat> sp_mat_mat;

void test(cx_mat Z1) {
    sp_mat_mat Y(2, 2);
    Y(0, 0) = Z1;
    Y(2, 2) = Z1;
    cout << "\n\nY:\n" << Y << endl;
}
Testing this simple example fails, presumably because Eigen expects a number rather than a structure as its scalar type.
As a matter of fact, the matrix of matrices is likely to be sparse, hence the sparse matrix structure.
Is there any way to make this work?
Any help is appreciated.
I don't believe Eigen will give you a way to make this work. If you think about the other functions which are connected to Matrix or SparseMatrix, like:
inverse()
norm()
m.row()*m.col()
what should Eigen do when a matrix element number is replaced by a matrix?
What I can understand is that you want to have a data structure that stores your Eigen::MatrixXcd in a memory-efficient way.
You could also realize this using the map container:
#include <map>
#include <Eigen/Dense>

typedef Eigen::MatrixXcd cx_mat;

cx_mat Z1;
// The key encodes (row, col) as row * cols + col; missing keys act as zero blocks.
std::map<int, Eigen::MatrixXcd> sp_mat_mat;
int cols = 2;
sp_mat_mat[0 * cols + 0] = Z1;
sp_mat_mat[2 * cols + 2] = Z1;
Less memory-efficient, but perhaps easier to access, would be the vector container:
#include <vector>
std::vector<std::vector<Eigen::MatrixXcd>> mat_mat;
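For example (sizes are illustrative), a dense 2x2 grid of 3x3 complex blocks could be set up like this; note that every cell is stored, so the sparsity savings are gone:

    #include <vector>
    #include <Eigen/Dense>

    int main() {
        // A 2x2 grid in which every cell is a 3x3 complex matrix, zero-initialized.
        std::vector<std::vector<Eigen::MatrixXcd>> mat_mat(
            2, std::vector<Eigen::MatrixXcd>(2, Eigen::MatrixXcd::Zero(3, 3)));

        Eigen::MatrixXcd Z1 = Eigen::MatrixXcd::Random(3, 3);
        mat_mat[0][0] = Z1;  // direct indexed access, unlike the map version
        return 0;
    }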
Have you found a way to create a matrix of matrices?
I see that we can use a 2-D array to create a matrix of matrices.
For example,
#include <Eigen/Dense>
using namespace Eigen;

MatrixXd A, B, C;
A = MatrixXd::Random(3, 3);
B = MatrixXd::Random(3, 4);
C = MatrixXd::Random(4, 4);

// A plain 2-D array of MatrixXd: each cell holds its own block.
MatrixXd D[2][2];
D[0][0] = A;
D[0][1] = B;
D[1][0] = B.transpose();
D[1][1] = C;
I don't know if this way is memory-efficient or not. Let's check it out.
You asked "sparse matrix structure. Is there any way to make this work?" I would say no, because it is not easy to translate a circuit design into a "matrix of matrices" in the first place. If you want to simulate something, you choose a representation close to it. In the case of an electronic circuit diagram, the schema in memory should IMHO be a directed graph with linked-list items. At each node/junction there is a matrix representing the input-to-output transfer behaviour of a particular component (e.g. resistor, capacitor, transistor), and you propagate the signal through the matrices assigned to each component. The transformed signal eventually arrives at an output through the connections in your graph. In software, it should work similarly. Suggested further reading: https://core.ac.uk/download/pdf/53745212.pdf