So I am writing a class that has 1D arrays and 2D arrays, which I dynamically allocate in the constructor:
class Foo {
    int** array2d; // identifiers may not start with a digit
    int* array1d;
public:
    Foo(int num1, int num2);
};
Foo::Foo(int num1, int num2) {
    array2d = new int*[num1];
    for (int i = 0; i < num1; i++)
    {
        array2d[i] = new int[num2];
    }
    array1d = new int[num1];
}
Then I will have to delete every 1d-array and every array in the 2d array in the destructor, right?
I want to use std::vector to avoid having to do this. Is there any downside to doing this (slower compilation, etc.)?
TL;DR: when to use std::vector for dynamically allocated arrays, which do NOT need to be resized during runtime?
vector is fine for the vast majority of uses. Hand-tuned scenarios should first attempt to tune the allocator1, and only then modify the container. Correctness of memory management (and your program in general) is worth much, much more than any compilation time gains.
In other words, vector should be your starting point, and until you find it unsatisfactory, you shouldn't care about anything else.
As an additional improvement, consider using a 1-dimensional vector as backend storage and only providing a 2-dimensional indexed view. This layout can improve cache locality and overall performance, while also making some operations, like copying the whole structure, much easier.
1 the second of two template parameters that vector accepts, which defaults to a standard allocator for a given type.
There should not be any drawbacks, since vector guarantees contiguous memory. But if the size is fixed and C++11 is available, a std::array may be an option, among others:
it doesn't allow resizing
it prevents reallocations (which a vector may perform, depending on how it is initialized)
its size is hardcoded in the instructions (it is a template argument). See Ped7g's comment for a more detailed description
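For illustration, a fixed-size alternative might look like this (the 4x5 shape is just an example):

```cpp
#include <array>

// a fixed-size 4x5 grid: both dimensions are template arguments, so the
// size is part of the type, known at compile time, and no heap
// allocation takes place
std::array<std::array<int, 5>, 4> grid{}; // value-initialized to zeros
```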
A 2D array is not an array of pointers.
If you define it this way, each row/column can have a different size.
Furthermore, the elements won't be sequential in memory.
This might lead to poor performance, as the prefetcher won't be able to predict your access patterns very well.
Therefore it is not advisable to nest std::vectors inside each other to model multi-dimensional arrays.
A better approach is to map a contiguous chunk of memory onto a multi-dimensional space by providing custom access methods.
You can test it in the browser: http://fiddle.jyt.io/github/3389bf64cc6bd7c2218c1c96f62fa203
#include<vector>
template<class T>
struct Matrix {
Matrix(std::size_t n=1, std::size_t m=1)
: n{n}, m{m}, data(n*m)
{}
Matrix(std::size_t n, std::size_t m, std::vector<T> const& data)
: n{n}, m{m}, data{data}
{}
//Matrix M(2,2, {1,1,1,1});
T const& operator()(size_t i, size_t j) const {
return data[i*m + j];
}
T& operator()(size_t i, size_t j) {
return data[i*m + j];
}
size_t n;
size_t m;
std::vector<T> data;
using ScalarType = T;
};
You can implement operator[] by returning a VectorView which has access to the data, an index, and the dimensions.
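A minimal sketch of what such a view might look like (the name VectorView and its members are hypothetical, matching the Matrix above only in spirit):

```cpp
#include <cstddef>
#include <vector>

// hypothetical row view: refers to one row of a row-major matrix
template<class T>
struct VectorView {
    T* row;            // pointer to the first element of the row
    std::size_t cols;  // number of columns (useful for bounds checks)
    T& operator[](std::size_t j) { return row[j]; }
};

// sketch: a row-major Matrix<T> could then hand these out as
// VectorView<T> operator[](std::size_t i) { return {data.data() + i*m, m}; }
```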
Related
I have a 2D vector vector<vector<int>> data. I want to know the total memory it is taking after a while. I can iterate through the vector and add the size of each inner vector but is there a better way of doing so?
Follow up: if I have a vector<string> data. Is there a way to get the memory it is taking after a while without iterating through the vector?
If you're using the vector class, you'll have to iterate. Or maybe there is some function that will do this with some magic syntax, but there will still be an iteration going on under the hood.
So if you want to avoid the iteration, then you probably have to write a new container class that supports what you want. Or at least some wrapper class.
In practice, I see two possible ways of calculating the size:
Iterate every element like you're doing now
Recalculate the size every time you add or remove an element
Here is an (incomplete) example that shows the idea:
class MyContainer {
private:
    std::vector<std::vector<int>> data;
    int total_size;
public:
    MyContainer(int rows) : data(rows), total_size(0) {}
    void push_back(int pos, int val) {
        data[pos].push_back(val);
        total_size++;
    }
    int size() const {
        return total_size;
    }
};
Remember though that this will keep track of the number of elements. If you multiply that by sizeof(int) you'll get an approximation of the memory usage, but it will not include the overhead for storing things. A bigger factor is that classes like vector typically reserve more memory than needed, in order to avoid having to allocate each time you add a new element.
So if you want to use the vector class, you would get a better approximation of memory usage by iterating over capacity instead of size. But then you're back to iterating, because you cannot hook the allocation as easily as you can with push_back. But it's possible. Here is a rough idea for that:
class MyContainer {
private:
    std::vector<std::vector<int>> data;
    std::vector<size_t> capacities; // last known capacity of each inner vector
    size_t total_capacity;
public:
    MyContainer(int rows) : data(rows), capacities(rows, 0), total_capacity(0) {}
    void push_back(int pos, int val) {
        data[pos].push_back(val);
        total_capacity += data[pos].capacity() - capacities[pos];
        capacities[pos] = data[pos].capacity();
    }
    size_t capacity() const {
        return total_capacity;
    }
};
This does not take the size of the bookkeeping members themselves into account, but you could modify the code to do so.
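For comparison, the iterate-over-capacity estimate mentioned above could be sketched like this (allocator padding and bookkeeping are ignored, so treat the result as a lower bound):

```cpp
#include <cstddef>
#include <vector>

// approximate bytes held by a vector-of-vectors; counts capacity, not
// size, but still ignores per-allocation overhead inside the allocator
std::size_t approx_bytes(const std::vector<std::vector<int>>& v) {
    std::size_t total = sizeof(v) + v.capacity() * sizeof(std::vector<int>);
    for (const auto& inner : v)
        total += inner.capacity() * sizeof(int);
    return total;
}
```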
I want to load N-dimensional matrices from disk (HDF5) into std::vector objects.
I know their rank beforehand, just not the shape. For instance, one of the matrices is rank 4: std::vector<std::vector<std::vector<std::vector<float>>>> data;
I want to use vectors to store the values because they are standard and not as ugly as C-arrays (mostly because they are aware of their length).
However, the way to load them is using a loading function that takes a void *, which works fine for rank-1 vectors where I can just resize them and then access their data pointer (vector.data()). For higher ranks, vector.data() will just point to vectors, not the actual data.
Worst case scenario I just load all the data to an auxiliary c-array and then copy it manually but this could slow it down quite a bit for big matrices.
Is there a way to have contiguous multidimensional data in vectors and then get a single address to it?
If you are concerned about performance, please don't use a vector of vectors of vectors....
Here is why. I think the answer by @OldPeculier is worth reading.
The reason that it's both fat and slow is actually the same. Each "row" in the matrix is a separately allocated dynamic array. Making a heap allocation is expensive both in time and space. The allocator takes time to make the allocation, sometimes running O(n) algorithms to do it. And the allocator "pads" each of your row arrays with extra bytes for bookkeeping and alignment. That extra space costs...well...extra space. The deallocator will also take extra time when you go to deallocate the matrix, painstakingly free-ing up each individual row allocation. Gets me in a sweat just thinking about it.
There's another reason it's slow. These separate allocations tend to live in discontinuous parts of memory. One row may be at address 1,000, another at address 100,000—you get the idea. This means that when you're traversing the matrix, you're leaping through memory like a wild person. This tends to result in cache misses that vastly slow down your processing time.
So, if you absolutely must have your cute [x][y] indexing syntax, use that solution. If you want quickness and smallness (and if you don't care about those, why are you working in C++?), you need a different solution.
Your plan is not a wise one. Vectors of vectors of vectors are inefficient and only really useful for dynamic jagged arrays, which you don't have.
Instead of your plan, load into a flat vector.
Next, wrap it with a multidimensional view.
template<class T, size_t Dim>
struct dimensional{
size_t const* strides;
T* data;
dimensional<T, Dim-1> operator[](size_t i)const{
return {strides+1, data+i* *strides};
}
};
template<class T>
struct dimensional<T,0>{
size_t const* strides; // not valid to dereference
T* data;
T& operator[](size_t i)const{
return data[i];
}
};
where strides points at an array of array-strides for each dimension (the product of the sizes of all later dimensions).
So my_data.access()[3][5][2] gets a specific element.
This sketch of a solution leaves everything public, and doesn't support for(:) iteration. A production-quality version would have proper encapsulation and support C++11-style for loops.
I am unaware of the name of a high quality multi-dimensional array view already written for you, but there is almost certainly one in boost.
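For what it's worth, here is a hypothetical usage of the sketch above for a 4x6x3 array; the struct is repeated so the snippet compiles on its own:

```cpp
#include <cstddef>
#include <vector>

// the dimensional view from above, repeated for self-containment
template<class T, std::size_t Dim>
struct dimensional {
    std::size_t const* strides;
    T* data;
    dimensional<T, Dim - 1> operator[](std::size_t i) const {
        return {strides + 1, data + i * *strides};
    }
};
template<class T>
struct dimensional<T, 0> {
    std::size_t const* strides; // not valid to dereference
    T* data;
    T& operator[](std::size_t i) const { return data[i]; }
};

// usage for a 4x6x3 array: each stride is the product of the later
// dimensions (6*3 = 18 for the outermost index, then 3)
std::vector<float> flat(4 * 6 * 3);
std::size_t strides[] = {6 * 3, 3};
dimensional<float, 2> view{strides, flat.data()};
```

view[3][5][2] then addresses flat[3*18 + 5*3 + 2] in the single contiguous block.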
For a bi-dimensional matrix, you could use an ugly C-array like this:
float* data = new float[w * h]; //width, height
data[(y * w) + x] = 0; //access (x,y) element
For a tri-dimensional matrix:
float* data = new float[w * h * d]; //width, height, depth
data[((z * h) + y) * w + x] = 0; //access (x,y,z) element
And so on. To load data from, let's say, a file,
float *data = yourProcToLoadData(); //works for any dimension
That's not very scalable, but you are dealing with a known number of dimensions. This way your data is contiguous and you have a single address.
I'm hesitating on how to organize the memory layout of my 2D data.
Basically, what I want is an N*M 2D double array, where N ~ M are in the thousands (and are derived from user-supplied data)
The way I see it, I have 2 choices :
double *data = new double[N*M];
or
double **data = new double*[N];
for (size_t i = 0; i < N; ++i)
data[i] = new double[M];
The first choice is what I'm leaning to.
The main advantages I see are shorter new/delete syntax, a contiguous memory layout (which implies adjacent memory accesses at runtime if I arrange my accesses correctly), and possibly better performance for vectorized code (auto-vectorized or using vector libraries such as vDSP or vecLib).
On the other hand, it seems to me that allocating one big chunk of contiguous memory could fail/take more time than allocating a bunch of smaller ones. And the second method also has the advantage of the shorter syntax data[i][j] compared to data[i*M+j].
What would be the most common / better way to do this, mainly if I view it from a performance standpoint? (Even though those are going to be small improvements, I'm curious to see which would be more performant.)
Between the first two choices, for reasonable values of M and N, I would almost certainly go with choice 1. You skip a pointer dereference, and you get nice caching if you access data in the right order.
In terms of your concerns about size, we can do some back-of-the-envelope calculations.
Since M and N are in the thousands, suppose each is 10000 as an upper bound. Then your total memory consumed is
10000 * 10000 * sizeof(double) = 8 * 10^8
This is roughly 800 MB, which while large, is quite reasonable given the size of memory in modern day machines.
If N and M are constants, it is better to just statically declare the memory you need as a two dimensional array. Or, you could use std::array.
std::array<std::array<double, M>, N> data;
If only M is a constant, you could use a std::vector of std::array instead.
std::vector<std::array<double, M>> data(N);
If M is not constant, you need to perform some dynamic allocation. But, std::vector can be used to manage that memory for you, so you can create a simple wrapper around it. The wrapper below returns a row intermediate object to allow the second [] operator to actually compute the offset into the vector.
template <typename T>
class matrix {
    const size_t N;
    const size_t M;
    std::vector<T> v_;
    struct row {
        matrix &m_;
        const size_t r_;
        row (matrix &m, size_t r) : m_(m), r_(r) {}
        T & operator [] (size_t c) { return m_.v_[r_ * m_.M + c]; }
        T operator [] (size_t c) const { return m_.v_[r_ * m_.M + c]; }
    };
    struct const_row {
        const matrix &m_;
        const size_t r_;
        const_row (const matrix &m, size_t r) : m_(m), r_(r) {}
        T operator [] (size_t c) const { return m_.v_[r_ * m_.M + c]; }
    };
public:
    matrix (size_t n, size_t m) : N(n), M(m), v_(N*M) {}
    row operator [] (size_t r) { return row(*this, r); }
    const_row operator [] (size_t r) const { return const_row(*this, r); }
};
matrix<double> data(10,20);
data[1][2] = .5;
std::cout << data[1][2] << '\n';
In addressing your particular concern about performance: Your rationale for wanting a single memory access is correct. You should want to avoid doing new and delete yourself, however (which is something this wrapper provides), and if the data is more naturally interpreted as multi-dimensional, then showing that in the code will make the code easier to read as well.
Multiple allocations as shown in your second technique are inferior because they will take more time, but their advantage is that they may succeed more often if your system's memory is fragmented (the free memory consists of smaller holes, and you do not have a free chunk large enough to satisfy the single allocation request). But multiple allocations have another downside in that some extra memory is needed for the pointers to each row.
My suggestion provides the single-allocation technique without needing to explicitly call new and delete, as the memory is managed by vector. At the same time, it allows the data to be addressed with the 2-dimensional syntax [x][y]. So it provides all the benefits of a single allocation with all the benefits of the multi-allocation, provided you have enough memory to fulfill the allocation request.
Consider using something like the following:
// array of pointers to doubles to point the beginning of rows
double ** data = new double*[N];
// allocate enough doubles in the first row to hold the entire matrix
data[0] = new double[N * M];
// distribute pointers to individual rows as well
for (size_t i = 1; i < N; i++)
data[i] = data[0] + i * M;
I'm not sure if this is a general practice or not, I just came up with it. Some downsides still apply to this approach, but I think it eliminates most of them, while keeping niceties like being able to access the individual doubles as data[i][j].
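One thing the snippet leaves out is cleanup; since there were only two allocations, two deletes suffice. A self-contained round trip might look like:

```cpp
#include <cstddef>

// round-trip of the scheme above: two allocations, hence two deletes
double demo(std::size_t N, std::size_t M) {
    double** data = new double*[N];   // row pointers
    data[0] = new double[N * M];      // one block holding every element
    for (std::size_t i = 1; i < N; i++)
        data[i] = data[0] + i * M;

    data[2][1] = 7.0;                 // ordinary 2D indexing works
    double flat = data[0][2 * M + 1]; // same element via the flat block

    delete[] data[0];                 // free the element block...
    delete[] data;                    // ...then the row pointers
    return flat;
}
```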
I currently use
std::vector<std::vector<std::string> > MyStringArray;
But I have read several comments here on SO that discourage the use of nested vectors on efficiency grounds.
Unfortunately, I have yet to see examples of alternatives to nested vectors for a situation like this.
Here's a simple dynamic 2D array with runtime-configurable column number:
class TwoDArray
{
size_t NCols;
std::vector<std::string> data;
public:
explicit TwoDArray(size_t n) : NCols(n) { }
std::string & operator()(size_t i, size_t j) { return data[i * NCols + j]; }
const std::string & operator()(size_t i, size_t j) const { return data[i * NCols + j]; }
void set_number_of_rows(size_t r) { data.resize(NCols * r); }
void add_row(const std::vector<std::string> & row)
{
assert(row.size() == NCols);
data.insert(data.end(), row.begin(), row.end());
}
};
Usage:
TwoDArray arr(5); // five columns per row
arr.set_number_of_rows(20);
arr(0, 3) = "hello";
arr(17,2) = "world";
This is just a completely arbitrary and random example. Your real class would obviously have to contain interface methods that are suitable to what you're doing; or you might decide not to have a wrapping class at all and address the naked vector directly.
The key feature is the two-dimensional accessor operator via (i,j), which replaces the nested vectors' [i][j].
It seems a reasonable design to me, given your stated design goals. Note that you should avoid operations which resize the outer vector; these may result in a deep copy of all the data in the overall structure (this may be mitigated somewhat with a C++0x STL implementation).
The most efficient way is probably to have the strings be contiguous in memory (separated by null terminators), have a contiguous array of references to each string, and another contiguous array of references to each array.
This is to maintain locality and help use the cache effectively, but it really ultimately depends on how you access the data.
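A minimal sketch of that layout, assuming strings are only ever appended (the names here are made up):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// all characters live in one contiguous buffer; offsets index into it
struct PackedStrings {
    std::string buffer;               // strings back-to-back, '\0'-separated
    std::vector<std::size_t> offsets; // start of each string within buffer

    void add(const std::string& s) {
        offsets.push_back(buffer.size());
        buffer += s;
        buffer += '\0';               // keep C-style separators
    }
    const char* operator[](std::size_t i) const {
        return buffer.c_str() + offsets[i];
    }
    std::size_t size() const { return offsets.size(); }
};
```

Note this trades mutability for locality: replacing or removing a string in the middle would require rebuilding the buffer.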
I know that the standard does not force std::vector to allocate contiguous memory blocks, but all implementations obey this nevertheless.
Suppose I wish to create a vector of a multidimensional, static array. Consider 2 dimensions for simplicity, and a vector of length N. That is I wish to create a vector with N elements of, say, int[5].
Can I be certain that all N*5 integers are now contiguous in memory? So that I in principle could access all of the integers simply by knowing the address of the first element? Is this implementation dependent?
For reference the way I currently create a 2D array in a contiguous memory block is by first making a (dynamic) array of float* of length N, allocating all N*5 floats in one array and then copying the address of every 5th element into the first array of float*.
The standard does require the memory of an std::vector to be
contiguous. On the other hand, if you write something like:
std::vector<std::vector<double> > v;
the global memory (all of the v[i][j]) will not be contiguous. The
usual way of creating 2D arrays is to use a single
std::vector<double> v;
and calculate the indexes, exactly as you suggest doing with float.
(You can also create a second std::vector<float*> with the addresses
if you want. I've always just recalculated the indexes, however.)
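The second-vector-of-addresses variant could be sketched like this (note the row pointers are invalidated if the backing vector ever reallocates):

```cpp
#include <cstddef>
#include <vector>

// one contiguous block plus an optional row-pointer index into it
struct Grid {
    std::vector<float> v;      // all N * cols elements, contiguous
    std::vector<float*> rows;  // rows[i] points at the start of row i
    Grid(std::size_t n, std::size_t cols) : v(n * cols), rows(n) {
        for (std::size_t i = 0; i < n; ++i)
            rows[i] = v.data() + i * cols;
    }
};
```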
Elements of a vector are guaranteed to be contiguous as per the C++ standard.
Quotes from the standard are as follows:
From n2798 (draft of C++0x):
23.2.6 Class template vector [vector]
1 A vector is a sequence container that supports random access iterators. In addition, it supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().
C++03 standard (23.2.4.1):
The elements of a vector are stored contiguously, meaning that if v is a vector where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().
Also, see Herb Sutter's views on the same topic here.
As @Als already pointed out, yes, std::vector (now) guarantees contiguous allocation. I would not, however, simulate a 2D matrix with an array of pointers. Instead, I'd recommend one of two approaches. The simpler (by far) is to just use operator() for subscripting, and do a multiplication to convert the 2D input to a linear address into your vector:
template <class T>
class matrix2D {
std::vector<T> data;
int columns;
public:
T &operator()(int x, int y) {
return data[y * columns + x];
}
matrix2D(int x, int y) : data(x*y), columns(x) {}
};
If, for whatever reason, you want to use matrix[a][b] style addressing, you can use a proxy class to handle the conversion. Though it was for a 3D matrix instead of 2D, I posted a demonstration of this technique in a previous answer.
For reference the way I currently create a 2D array in a contiguous memory block is by first making a (dynamic) array of float* of length N, allocating all N*5 floats in one array and then copying the address of every 5th element into the first array of float*.
That's not a 2D array, that's an array of pointers. If you want a real 2D array, this is how it's done:
float (*p)[5] = new float[N][5];
p [0] [0] = 42; // access first element
p[N-1][4] = 42; // access last element
delete[] p;
Note there is only a single allocation. May I suggest reading more about using arrays in C++?
Under the hood, a vector may look approximately like this (pseudo-code):
class vector<T> {
T *data;
size_t s;
};
Now if you make a vector<vector<T> >, there will be a layout like this
vector<vector<T>> --> data {
vector<T>,
vector<T>,
vector<T>
};
or in "inlined" form
vector<vector<T>> --> data {
{data0, s0},
{data1, s1},
{data2, s2}
};
Yes, the vector of vectors therefore uses contiguous memory, but no, not in the way you'd like. It most probably stores a contiguous array of vector objects (pointers plus some other variables) that refer to external places.
The standard only requires that the data of a vector is contiguous, but not the vector as a whole.
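That guarantee, and its limit, can be checked directly: within one inner vector the addresses are contiguous, but nothing relates one row's block to the next. A small sketch:

```cpp
#include <vector>

// true iff the inner vector's elements are contiguous, which the
// standard guarantees; no similar relation holds between different
// rows, which are separate heap allocations at unrelated addresses
bool inner_is_contiguous(const std::vector<std::vector<int>>& vv) {
    const std::vector<int>& row = vv[0];
    return &row[3] == &row[0] + 3; // guaranteed by the standard
}
```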
A simple class to create, as you call it, a 2D array, would be something like:
template <class T>
class Array2D {
private:
    T *m_data;
    int m_stride;
public:
    Array2D(int dimY, int dimX) : m_data(new T[dimX * dimY]), m_stride(dimX) {}
    ~Array2D() { delete[] m_data; }
    T* operator[](int row) { return m_data + m_stride * row; }
};
It's possible to use this like:
Array2D<int> myArray(30, 20);
for (int i = 0; i < 30; i++)
    for (int j = 0; j < 20; j++)
        myArray[i][j] = i + j;
Or even pass &myArray[0][0] as address to low-level functions that take some sort of "flat buffers".
But as you can see, it turns naive expectations around in that the indexing is myArray[y][x].
Generically, if you interface with code that requires some sort of classical C-style flat array, then why not just use that?
Edit: As said, the above is simple. No bounds check attempts whatsoever. Just like, "an array".