I'm using Eigen to provide some convenient array operations on some data that also interfaces with some C libraries (particularly FFTW). Essentially I need to FFT the x, y, and z components of a (large) collection of vectors. Because of this, I'm trying to decide if it makes more sense to use
array<Eigen::Vector3d, n> vectors;
(which will make the FFTW calls a little more cumbersome as I'd need a pointer to the very first double) or
array<double, 3 * n> data;
(which will make the linear algebra/vector products more cumbersome, as I'd have to wrap chunks of data with Eigen::Map<Vector3d>). A bit of empirical testing shows that constructing the Map is about 30% slower than constructing an Eigen::Vector3d, but I could easily have oversimplified my use case in exploring this. Is there a good reason to prefer one design over the other? I'm inclined to use the second, because data has less abstraction in the way of treating its contents as "a collection of doubles that I can interpret however I want."
Edit: There's probably something to be said for implicit Eigen::Vector3d alignment, but I'm assuming vectors and data align the underlying doubles the same way.
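For reference, here is a minimal sketch of the two layouts side by side; the size n, the function name, and the dot product are illustrative, not taken from the original code.

#include <array>
#include <cstddef>
#include <Eigen/Dense>

constexpr std::size_t n = 1024;              // illustrative size

// Layout 1: one Eigen vector per element.
std::array<Eigen::Vector3d, n> vectors;

// Layout 2: a flat buffer of doubles; FFTW would see data.data(),
// Eigen sees temporary Maps over individual triples.
std::array<double, 3 * n> data;

double dot_with_first(std::size_t i) {
    // Map wraps the i-th triple in place, with no copy.
    Eigen::Map<const Eigen::Vector3d> vi(data.data() + 3 * i);
    Eigen::Map<const Eigen::Vector3d> v0(data.data());
    return vi.dot(v0);
}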
I have a (kind of) performance problem in my code, that roots in the chosen architecture.
I will use multidimensional tensors (basically matrices with more dimensions) in the form of cubes to store my data.
Since the dimension is not known at compile time, I can't use Boost's MultiArray (IIRC), but have to come up with my own solution.
Right now, I store each dimension on its own. I have a tensor of dimension (let's say) 3 that holds a lot of tensors of dimension 2 (in a std::vector), each of which has a std::vector of tensors of dimension 1, each of which holds a std::vector of (numerical) data. I use an abstract base class for my tensor, so everything in there is a pointer to the abstract class while (secretly) being multi- or one-dimensional.
I extract a single numerical data point by giving a std::list of indices to a tensor, which takes the first element, looks up the corresponding sub-tensor, and passes the rest of the list to that tensor in a (kind of) recursive call.
I now have to do a multi-dimensional Fast Fourier Transform on that data. I use a thread pool and job objects; each job copies data from a tensor along one dimension, does an FFT, and writes the data back.
I already have logic to implement ThreadPool and organize the dimensions to FFT along, but there is one problem:
My data structure is about the most cache-unfriendly beast one can think of. Data copying along the first dimension (the one whose data sits in a single 1-D tensor) is reasonably fast, but in the other directions I need to copy my data from all over the place.
Since there are no race conditions (I make sure every concurrent FFT works on distinct data points), I thought I would skip a mutex guard and let everybody copy at the same time. However, this heavily slows down the process ("I copy my data now!" - "No, I copy my data now!" - "But it's my turn now!"...).
Guarding the copy process with a mutex does not increase speed either. The FFT of a vector with 1024 elements is way faster than the copy process needed to gather those elements, so nearly all of my threads end up waiting while one is copying.
Long story short:
Is there any kind of multi-dimensional data structure that does not need its dimension set at compile time and that allows me to traverse quickly along all axes? I searched for a while now, but nothing came up besides Boost MultiArray. Flattening everything into one vector also does not work, since the indices would grow too large to hold in the usual int types.
I can't think of how to present code examples here, since most of that code is rather simple, but if needed, I can add it.
Eigen has multi-dimensional tensor support (nominally unsupported, but written by the DeepMind people, so "somewhat" supported?), and FFTW has 1d to 3d FFTs. Using external libraries with a set of 1D to 3D FFTs would outsource most of the hard work.
Edit: Actually, FFTW has support for threaded n-dimensional FFTs
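For context, here is a minimal sketch of a threaded 3-D FFT with FFTW's C API; the sizes and thread count are placeholders, and you would link against fftw3_threads in addition to fftw3.

#include <complex>
#include <cstddef>
#include <vector>
#include <fftw3.h>

int main() {
    const int n0 = 64, n1 = 64, n2 = 64;           // placeholder dimensions
    std::vector<std::complex<double>> in(static_cast<std::size_t>(n0) * n1 * n2), out(in.size());

    fftw_init_threads();                           // enable FFTW's internal threading
    fftw_plan_with_nthreads(4);                    // threads used by subsequently created plans

    // A single 3-D transform over one contiguous buffer; FFTW handles the
    // traversal along every axis internally.
    fftw_plan plan = fftw_plan_dft_3d(
        n0, n1, n2,
        reinterpret_cast<fftw_complex*>(in.data()),
        reinterpret_cast<fftw_complex*>(out.data()),
        FFTW_FORWARD, FFTW_ESTIMATE);

    fftw_execute(plan);
    fftw_destroy_plan(plan);
    fftw_cleanup_threads();
}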
I'm new to C++ and I think a good way for me to jump in is to build some basic models that I've built in other languages. I want to start with just Linear Regression solved using first order methods. So here's how I want things to be organized (in pseudocode).
class LinearRegression
LinearRegression:
tol = <a supplied tolerance or defaulted to 1e-5>
max_ite = <a supplied max iter or default to 1k>
fit(X, y):
// model learns weights specific to this data set
_gradient(X, y):
// compute the gradient
score(X,y):
// model uses weights learned from fit to compute accuracy of
// y_predicted to actual y
My question: when I use the fit, score, and gradient methods I don't actually need to pass around copies of the arrays (X and y) or store them anywhere, so I want to use a reference or a pointer to those structures. My problem is that if a method accepts a pointer to a 2D array, I need to supply the second dimension size ahead of time or use templating. If I use templating, I now have something like this for every method that accepts a 2D array:
template<std::size_t rows, std::size_t cols>
void fit(double (&X)[rows][cols], double (&y)[rows]) { ... }
It seems there is likely a better way. I want my regression class to work with input of any size. How is this done in industry? I know that in some situations the array is just flattened into row- or column-major format and a pointer to the first element is passed, but I don't have enough experience to know what people actually use in C++.
You raised quite a few points in your question, so here are some points addressing them:
Contemporary C++ discourages working directly with heap-allocated data that you need to manually allocate or deallocate. You can use, e.g., std::vector<double> to represent vectors, and std::vector<std::vector<double>> to represent matrices. Even better would be to use a matrix class, preferably one that is already in mainstream use.
Once you use such a class, you can easily get the dimension at runtime. With std::vector, for example, you can use the size() method. Other classes have other methods. Check the documentation for the one you choose.
You probably really don't want to use templates for the dimensions.
a. If you do so, you will need to recompile each time you get a different input. Your code will be duplicated (by the compiler) to the number of different dimensions you simultaneously use. Lots of bad stuff, with little gain (in this case). There's no real drawback to getting the dimension at runtime from the class.
b. Templates (in your setting) are fitting for the type of the matrix (e.g., is it a matrix of doubles or floats), or possibly the number of dimensions (e.g., for specifying tensors).
Your regressor doesn't need to store the matrix and/or vector. Pass them by const reference. Your interface looks like that of sklearn; if you like, check the source code there. Calling fit just causes the object to store the fitted parameter vector β. It doesn't copy or store the input matrix and/or vector.
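As a minimal sketch of that interface, assuming a plain std::vector-based matrix for illustration (a real implementation would more likely use an established matrix class):

#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;   // one inner vector per sample
using Vector = std::vector<double>;

class LinearRegression {
public:
    explicit LinearRegression(double tol = 1e-5, std::size_t max_iter = 1000)
        : tol_(tol), max_iter_(max_iter) {}

    // X and y are taken by const reference: nothing is copied or stored;
    // fit() only updates the learned weights.
    void fit(const Matrix& X, const Vector& y) {
        weights_.assign(X.empty() ? 0 : X.front().size(), 0.0);
        // ... gradient-descent loop over X and y, using tol_ and max_iter_ ...
    }

    double score(const Matrix& X, const Vector& y) const {
        // ... compute e.g. R^2 of the predictions against y ...
        return 0.0;                                // placeholder
    }

private:
    double tol_;
    std::size_t max_iter_;
    Vector weights_;                               // the only state kept after fit()
};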
I am using large matrices of doubles in C++. I need to get rows or columns from these matrices and pass them to a function. What is the fastest way I can do this?
One way is to write a function that returns a copy of the desired row or column as an std::vector.
Another way is to pass the whole thing as a reference and modify the function to be able to read the desired values.
Are there any other options? Which one do you recommend?
BTW, how do you recommend I store the data in the matrix class? I am using std::vector< std::vector< double > > right now.
EDIT
I should have mentioned that the matrices might have more than two dimensions, so using Boost or arma::mat here is out of the question. I am, however, using Armadillo in other parts of the library.
If a variable number of dimensions above 2 is a key requirement, take a look at Boost's multidimensional array library, Boost.MultiArray. It has efficient (copy-free) "views" you can use to reference lower-dimensional "slices" of the full matrix.
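A minimal sketch of such a view, with placeholder extents; the slice references the original storage rather than copying it:

#include <boost/multi_array.hpp>

int main() {
    using array3 = boost::multi_array<double, 3>;
    array3 a(boost::extents[4][5][6]);               // placeholder sizes

    using range = boost::multi_array_types::index_range;
    // A 2-D view of the plane j == 2: all i, all k, no copy made.
    array3::array_view<2>::type slice =
        a[boost::indices[range()][2][range()]];

    slice[0][0] = 1.0;                               // writes through to a[0][2][0]
}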
The details of what's "fastest" for this sort of thing depends an awful lot on what exactly you're doing, and how the access patterns/working set "footprint" fit to your HW's various levels of cache and memory latency; in practice it can be worth copying to more compact representations to get more cache coherent access, in preference to making sparse strided accesses which just waste a lot of a cache line. Alternatives are Morton-order accessing schemes which can at least amortize "bad axis" effects over all axes. Only your own benchmarking on your own code and use-cases on your HW can really answer that though.
(Note that I wouldn't use Boost.MultiArray for 2 dimensional arrays - there are faster, better options for linear algebra / image processing applications - but for 3+ it's worth considering.)
I would use a library like Armadillo (http://arma.sourceforge.net/), because you not only get a way to store the matrix, you also get functions that can do operations on it.
Efficient (multi)linear algebra is a surprisingly deep subject; there are no easy one-size-fits-all answers. The principal challenge is data locality: the memory hardware of your computer is optimized for accessing contiguous regions of memory, and probably cannot operate on anything other than a cache line at a time (and even if it could, the efficiency would go down).
The size of a cache line varies, but think 64 or 128 bytes.
Because of this, it is a non-trivial challenge to lay out the data in a matrix so that it can be accessed efficiently in multiple directions; even more so for higher rank tensors.
And furthermore, the best choices will probably depend heavily on exactly what you're doing with the matrix.
Your question really isn't one that can be satisfactorily answered in a Q&A format like this.
But to at least get you started on researching, here are two key phrases that may be worth looking into (a small blocked-transpose sketch follows the list):
block matrix
fast transpose algorithm
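As a rough illustration of the cache-blocking idea behind both phrases, here is a hedged sketch of a tiled transpose; the tile size of 32 is only a starting point to tune against your cache sizes.

#include <algorithm>
#include <cstddef>
#include <vector>

// Transpose an n x n row-major matrix tile by tile, so that both the reads
// and the writes of each tile stay resident in cache.
void blocked_transpose(const std::vector<double>& src, std::vector<double>& dst,
                       std::size_t n, std::size_t tile = 32) {
    for (std::size_t ii = 0; ii < n; ii += tile)
        for (std::size_t jj = 0; jj < n; jj += tile)
            for (std::size_t i = ii; i < std::min(ii + tile, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + tile, n); ++j)
                    dst[j * n + i] = src[i * n + j];
}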
You may well do better to use a library rather than trying to roll your own; e.g. blitz++. (disclaimer: I have not used blitz++)
vector<vector<...>> will be slow to allocate, slow to free, and slow to access because it will have more than one dereference (not cache-friendly).
I would recommend it only if your rows (or columns) don't all have the same size (jagged arrays).
For a "normal" matrix, you could go for something like:
template <class T, size_t nDim> struct tensor {
    size_t dims[nDim];        // extent along each dimension
    std::vector<T> vect;      // all elements in one contiguous buffer
};
and overload operator()(size_t i, size_t j, ...) to access elements.
operator() will have to do the index calculation (you have to choose between row-major and column-major order). For nDim > 2 it becomes somewhat complicated, and it can benefit from caching some of the indexing computations.
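For the 2-D, row-major case, that could look like the following sketch (it repeats the struct above for completeness; going to nDim > 2 means accumulating a stride per dimension):

#include <cstddef>
#include <vector>

template <class T, size_t nDim> struct tensor {
    size_t dims[nDim];           // extent along each dimension
    std::vector<T> vect;         // all elements in one contiguous buffer

    // Row-major indexing for nDim == 2: element (i, j) sits at i * dims[1] + j.
    T& operator()(size_t i, size_t j) { return vect[i * dims[1] + j]; }
    const T& operator()(size_t i, size_t j) const { return vect[i * dims[1] + j]; }
};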
To return a row or a column, you could then define sub types.
template <class T, size_t nDim> struct row /* or column */ {
    tensor<T, nDim>& parent;   // the tensor this view refers to
    size_t iStart;             // offset of the first element
    size_t stride;             // distance between consecutive elements
};
Then define an operator()(size_t i) that returns parent.vect[iStart + i*stride].
The stride value will depend on whether it is a row or a column, and on your (row-major or column-major) ordering choice.
stride will be 1 for one of the sub types. Note that for that sub type, iterating will probably be much faster, because it is cache-friendly. For the other sub type, unfortunately, iteration will probably be rather slow, and there is not much you can do about it.
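A minimal sketch of such a view, simplified to hold a raw pointer into the tensor's buffer instead of a reference to the tensor itself:

#include <cstddef>

// Non-owning view of one row or one column of a row-major 2-D tensor:
// data points at the first element; stride is 1 for a row, dims[1] for a column.
template <class T> struct strided_view {
    T*     data;
    size_t length;
    size_t stride;

    T& operator()(size_t i) { return data[i * stride]; }
};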
See other SO questions about why iterating over the rows and then the columns can make a huge performance difference compared to iterating over the columns and then the rows.
I recommend you pass it by reference as copying might be a slow process depending on the size. std::vector is fine if you want the ability to expand and contract the container.
I have inherited some code which makes extensive use of double pointers to represent 2D arrays. I have little experience using Eigen but it seems easier to use and more robust than double pointers.
Does anyone have insight as to which would be preferable?
Both Eigen and Boost.uBLAS define expression hierarchies and abstract matrix data structures that can use any storage class that satisfies certain constraints. These libraries are written so that linear algebra operations can be clearly expressed and efficiently evaluated at a very high level. Both libraries use expression templates heavily, and are capable of doing pretty complicated compile-time expression transformations. In particular, Eigen can also use SIMD instructions, and is very competitive on several benchmarks.
For dense matrices, a common approach is to use a single pointer and keep track of additional row, column, and stride variables (you may need the third because, for alignment reasons, you may have allocated more memory than the x * y * sizeof(value_type) bytes you strictly need). However, you have no mechanisms in place to check for out-of-range accesses, and nothing in the code to help you debug. You would only want to use this sort of approach if, for example, you need to implement some linear algebra operations for educational purposes. (Even if this is the case, I advise that you first consider which algorithms you would like to implement, and then take a look at std::unique_ptr, std::move, std::allocator, and operator overloading).
Remember that Eigen has a Map capability that allows you to view a contiguous array of data as an Eigen matrix. If it's difficult to completely change the code you have inherited, mapping things to an Eigen matrix at least might make interoperating with raw pointers easier.
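A minimal sketch of that, assuming the legacy storage is one contiguous, row-major block (a double** that points at separately allocated rows cannot be mapped this way):

#include <Eigen/Dense>

void use_legacy_buffer(double* raw, int rows, int cols) {
    // Wrap the existing storage without copying; RowMajor is assumed here
    // because C-style 2-D code is usually laid out row by row.
    Eigen::Map<Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>
        m(raw, rows, cols);

    m *= 2.0;   // Eigen expressions now read and write the original buffer
}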
Yes definitely, for modern C++ you should be using a container rather than raw pointers.
Eigen
When using Eigen, take note that its fixed size classes (like Vector3d) use optimizations that require them to be properly aligned. This requires special care if you include those fixed size Eigen values as members in structures or classes. You also can't pass them by value, only by reference.
If you don't care about such optimizations, it's trivial enough to disable it: simply add
#define EIGEN_DONT_ALIGN
as the first line of all source files (.h, .cpp, ...) that use Eigen.
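Alternatively, if you want to keep the alignment-based optimizations, Eigen provides a macro for classes that contain fixed-size, alignment-sensitive members; a minimal sketch (the struct and its members are illustrative):

#include <Eigen/Core>

struct State {
    Eigen::Vector4d value;             // fixed-size, alignment-sensitive member
    double weight;

    EIGEN_MAKE_ALIGNED_OPERATOR_NEW    // makes `new State` return suitably aligned memory
};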
The other two options are:
Boost Matrix
#include <boost/numeric/ublas/matrix.hpp>
boost::numeric::ublas::matrix<double> m (3, 3);
std::vector
#include <vector>
std::vector<std::vector<double> > m(3, std::vector<double>(3));
This is my little big question about containers, in particular, arrays.
I am writing a physics code that mainly manipulates a big (> 1 000 000) set of "particles" (with 6 double coordinates each). I am looking for the best way (in terms of performance) to implement a class that will contain a container for these data and that will provide manipulation primitives for them (e.g. instantiation, operator[], etc.).
There are a few restrictions on how this set is used:
its size is read from a configuration file and won't change during execution
it can be viewed as a big two dimensional array of N (e.g. 1 000 000) lines and 6 columns (each one storing the coordinate in one dimension)
the array is manipulated in a big loop, each "particle / line" is accessed and computation takes place with its coordinates, and the results are stored back for this particle, and so on for each particle, and so on for each iteration of the big loop.
no new elements are added or deleted during the execution
First conclusion: as the elements are essentially accessed one by one with [], I think that I should use a normal dynamic array.
I have explored a few things, and I would like to have your opinion on the one that can give me the best performances.
As I understand it, there is no advantage to using a dynamically allocated array instead of a std::vector, so things like double** array2d = new ..., a loop of news, etc. are ruled out.
So is it a good idea to use std::vector<double> ?
If I use a std::vector, should I create a two-dimensional array like std::vector<std::vector<double> > my_array that can be indexed like my_array[i][j], or is it a bad idea and would it be better to use std::vector<double> other_array and access it with other_array[6*i+j]?
Maybe this can give better performance, especially as the number of columns is fixed and known from the beginning.
If you think that this is the best option, would it be possible to wrap this vector in a way that it can be accessed with an index operator defined as other_array[i,j] // same as other_array[6*i+j], without overhead (like a function call at each access)?
Another option, the one that I am using so far, is to use Blitz++, in particular blitz::Array:
typedef blitz::Array<double,TWO_DIMENSIONS> store_t;
store_t my_store;
Where my elements are accessed like that: my_store(line, column);.
I think there is not much advantage to using Blitz in my case, because I am accessing each element one by one; Blitz would be interesting if I were using operations directly on arrays (like matrix multiplication), which I am not.
Do you think that Blitz is OK, or is it useless in my case ?
These are the possibilities I have considered so far, but maybe the best one is still another one, so don't hesitate to suggest other things.
Thanks a lot for your help on this problem!
Edit:
From the very interesting answers and comments below, a good solution seems to be the following:
Use a structure particle (containing 6 doubles) or a static array of 6 doubles (this avoids the use of two-dimensional dynamic arrays)
Use a vector or a deque of this particle structure or array. It is then good to traverse them with iterators, which will make it easy to switch from one to the other later.
In addition I can also use a Blitz::TinyVector<double,6> instead of a structure.
So is it a good idea to use std::vector<double> ?
Usually, a std::vector should be the first choice of container. You could use either std::vector<>::reserve() or std::vector<>::resize() to avoid reallocations while populating the vector. Whether any other container is better can be found by measuring. And only by measuring. But first measure whether anything the container is involved in (populating, accessing elements) is worth optimizing at all.
If I use a std::vector, should I create a two dimensional array like std::vector<std::vector<double> > [...]?
No. IIUC, you are accessing your data per particle, not per row. If that's the case, why not use a std::vector<particle>, where particle is a struct holding six values? And even if I understood incorrectly, you should rather write a two-dimensional wrapper around a one-dimensional container. Then align your data either in rows or columns, whatever is faster with your access patterns.
Do you think that Blitz is OK, or is it useless in my case?
I have no practical knowledge about blitz++ and the areas it is used in. But isn't blitz++ all about expression templates to unroll loop operations and optimize away temporaries when doing matrix manipulations? ICBWT.
First of all, you don't want to scatter the coordinates of one given particle all over the place, so I would begin by writing a simple struct:
struct Particle { /* coords */ };
Then we can make a simple one dimensional array of these Particles.
I would probably use a deque, because that's the default container, but you may wish to try a vector; it's just that 1,000,000 particles means roughly a single chunk of a few MB. It should hold, but it might strain your system if this ever grows, while the deque will allocate several chunks.
WARNING:
As Alexandre C remarked, if you go the deque road, refrain from using operator[] and prefer to use iteration style. If you really need random access and it's performance sensitive, the vector should prove faster.
The first rule when choosing from containers is to use std::vector. Then, only after your code is complete and you can actually measure performance, you can try other containers. But stick to vector first. (And use reserve() from the start)
Then, you shouldn't use an std::vector<std::vector<double> >. You know the size of your data: it's 6 doubles. No need for it to be dynamic; it is constant and fixed. You can define a struct to hold your particle members (the six doubles), or use a fixed-size array such as std::array<double, 6> (a raw typedef double particle[6] can't be used as a vector element). Then, use a vector of particles: std::vector<particle>.
Furthermore, as your program uses the particle data contained in the vector sequentially, you will take advantage of the modern CPU cache read-ahead feature at its best performance.
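A minimal sketch of that layout; the field names and the update are illustrative:

#include <cstddef>
#include <vector>

struct Particle {
    double x, y, z;      // position
    double vx, vy, vz;   // velocity (illustrative names for the six coordinates)
};

int main() {
    const std::size_t n = 1000000;        // in practice, read from the configuration file
    std::vector<Particle> particles;
    particles.reserve(n);                 // one allocation up front, since the size is known
    particles.resize(n);

    for (Particle& p : particles) {       // contiguous, cache-friendly traversal
        p.x += p.vx;                      // placeholder per-particle update
    }
}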
You could go several ways. But in your case, don't declare a std::vector<std::vector<double> >. You'd be allocating a vector (and copying it around) for every 6 doubles. That's way too costly.
If you think that this is the best option, would it be possible to wrap this vector in a way that it can be accessed with a index operator defined as other_array[i,j] // same as other_array[6*i+j] without overhead (like function call at each access) ?
(other_array[i,j] won't work too well, as i,j employs the comma operator: it evaluates "i", discards the result, then evaluates and returns "j", so it's equivalent to other_array[j].)
You will need to use one of:
other_array[i][j]
other_array(i, j) // if other_array implements operator()(int, int),
// but std::vector<> et al don't.
other_array[i].identifier // identifier is a member variable
other_array[i].identifier() // member function getting value
other_array[i].identifier(double) // member function setting value
You may or may not prefer to put get_ and set_ or similar prefixes on the last two functions, should you find them useful, but from your question I think you won't: functions are preferred in APIs between parts of large systems involving many developers, or when the data items may vary and you want the algorithms working on the data to be independent of them.
So, a good test: if you find yourself writing code like other_array[i][3], where you've decided "3" is the double with the speed in it, and other_array[i][5] because "5" is the acceleration, then stop doing that and give them proper identifiers so you can say other_array[i].speed and .acceleration. Then other developers can read and understand it, and you're much less likely to make accidental mistakes. On the other hand, if you are iterating over those 6 elements doing exactly the same things to each, then you probably do want Particle to hold a double[6], or to provide an operator[](int). There's no problem doing both:
struct Particle
{
double x[6];
double& speed() { return x[3]; }
double speed() const { return x[3]; }
double& acceleration() { return x[5]; }
...
};
BTW, the reason that vector<vector<double> > may be too costly is that each set of 6 doubles will be allocated on the heap, and for fast allocation and deallocation many heap implementations use fixed-size buckets, so your small request will be rounded up to the next bucket size: that may be a significant overhead. The outer vector will also need to record an extra pointer to that memory. Further, heap allocation and deallocation are relatively slow; in your case you'd only be doing it at startup and shutdown, but there's no particular point in making your program slower for no reason. Even more importantly, the blocks on the heap may be scattered around in memory, so your operator[] may take cache misses, pulling in more distinct memory pages than necessary and slowing the entire program. Put another way, vectors store their elements contiguously, but the pointed-to vectors may not be contiguous with each other.