2D complex array in C++

I am new to C++ programming, so I need some help with 2D arrays. Is it possible to create a complex array from two real arrays with two for loops? I was trying to do that in my code, but I do not know how to do it.
Thanks for the help!
This is my code:
#include <iostream>
#include <fstream>
#include <complex>
#include <cmath>
using namespace std;
int const BrGr = 15, BrCv = BrGr + 1, BrSat = 24;
//(BrCv=number of nodes,BrSat=number of hours)
int main()
{
// Every array must be a dynamic array. It is a task requirement. Is this the correct way?
auto *Ipot = new double[BrCv - 1][BrSat];
auto *cosfi = new double[BrCv - 1][BrSat];
auto *S_pot = new complex<double>[BrCv - 1][BrSat];
auto *I_inj = new complex<double>[BrCv - 1][BrSat];
auto *V_cvo = new complex<double>[BrCv][BrSat];
ifstream reader("Input.txt");
if (reader.is_open())
{
for (int i = 0;i < BrCv - 1;i++)
{
for (int j = 0;j < BrSat;j++)
{
reader >> Ipot[i][j];
}
}
for (int i = 0;i < BrCv - 1;i++)
{
for (int j = 0;j < BrSat;j++)
{
reader >> cosfi[i][j];
}
}
}
else cout << "Error!" << endl;
reader.close();
// Here I want to create a 2D array of complex numbers - is this the correct way?
// Also, in the same process, I want to calculate the value of S_pot in every node for every hour
for (int i = 0;i < BrCv - 1;i++)
{
for (int j = 0;j < BrSat;j++)
{
S_pot[i][j] = complex<double>(Ipot[i][j]*cosfi[i][j], Ipot[i][j]*sqrt(1 - pow(cosfi[i][j], 2)));
}
}
// Here I give a value for V_cvo in nodes for every single hour
for (int i = 0;i < BrCv;i++)
{
for (int j = 0;j < BrSat;j++)
{
V_cvo[i][j] = 1;
}
}
// Here I want to calculate the value of I_inj in every node for every hour
for (int i = 0;i < BrCv - 1;i++)
{
for (int j = 0;j < BrSat;j++)
{
I_inj[i][j] = conj(S_pot[i][j] / V_cvo[i][j]);
}
}
// Here I want to delete all arrays
delete[] Ipot, cosfi, S_pot, I_inj, V_cvo;
system("pause");
return 0;
}

Note: I'm using double throughout these examples, but you can replace double with any type.
To be honest, you probably don't want to use a 2D array.
Creating a 2D dynamically-sized array in C++ is a multi-stage operation. You can't just
double twoDArray [nrRows][nrColumns];
or
auto twoDArray = new double[nrRows][nrColumns];
There are a couple of things wrong with this, but the most important is that the row and column counts are not constant, compile-time values. Some compilers allow the first, but this cannot be guaranteed. I don't know of any compiler that allows the second.
Instead, first you create an array of row pointers, then you separately create each row of columns. Yuck.
Here's the set up:
double ** arr = new double*[nrRows]; // create rows to point at columns
for (size_t index = 0; index < nrRows; index++)
{
arr[index] = new double[nrColumns]; // create columns
}
And here's clean-up
for (size_t index = 0; index < nrRows; index++)
{
delete[] arr[index]; // delete all columns
}
delete[] arr; // delete rows
For your efforts you get crappy spatial locality and the performance hit (cache misses) that it causes, because your many arrays could be anywhere in RAM, and you get crappy memory management issues. One screw-up, one unexpected exception and you have a memory leak.
This next option has better locality because there is one big data array to read from instead of many, but still the same leakage problems.
double ** arr2 = new double*[nrRows]; // create rows to point at columns
double * holder = new double[nrRows * nrColumns]; // create all columns at once
for (size_t index = 0; index < nrRows; index++)
{
arr2[index] = &holder[index * nrColumns]; // attach columns to rows
}
and clean up:
delete[] arr2;
delete[] holder;
In C++, the sane person chooses std::vector over a dynamically-sized array unless given a very, very compelling reason not to. Why has been documented to death all over SO and the Internet at large, and the proof litters the Internet with hijacked computers serving up heaping dollops of spam and other nastiness.
std::vector<std::vector<double>> vec(nrRows, std::vector<double>(nrColumns));
Usage is exactly what array users are used to:
vec[i][j] = somevalue;
This has effectively no memory problems, but is back to crappy locality because the vectors could be anywhere.
But...!
There is a better method still: Use a One Dimensional array and wrap it in a simple class to make it look 2D.
template <class TYPE>
class TwoDee
{
private:
size_t mNrRows;
size_t mNrColumns;
vector<TYPE> vec;
public:
TwoDee(size_t nrRows, size_t nrColumns):
mNrRows(nrRows), mNrColumns(nrColumns), vec(mNrRows*mNrColumns)
{
}
TYPE & operator()(size_t row, size_t column)
{
return vec[row* mNrColumns + column];
}
TYPE operator()(size_t row, size_t column) const
{
return vec[row* mNrColumns + column];
}
};
This little beastie will do most of what you need a 2D vector to do. You can copy it, you can move it. You can crunch all you want. Jay Leno will make more.
I jumped directly to the templated version because I'm stumped for a good reason to explain class TwoDee twice.
The constructor is simple. You give it the dimensions of the array and it builds a nice, safe 1D vector. No muss, no fuss, and No Zayn required.
The operator() functions take the row and column indices, do a simple bit of arithmetic to turn the indices into a single index and then either return a reference to the indexed value to allow modification or a copy of the indexed value for the constant case.
If you're feeling like you need extra safety, add in range checking.
TYPE & operator()(size_t row, size_t column)
{
if (row < mNrRows && column < mNrColumns)
{
return vec[row* mNrColumns + column];
}
throw std::out_of_range("Bad indices");
}
OK. How does the OP use this?
TwoDee<complex<double>> spot(BrCv - 1, BrSat);
Created and ready to go. And to load it up:
for (int i = 0;i < BrCv - 1;i++)
{
for (int j = 0;j < BrSat;j++)
{
spot(i,j) = complex<double>(7.8*Ipot(i,j), 2.3*cosfi(i,j));
}
}
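The Ipot(i,j) and cosfi(i,j) accesses above assume the input arrays were also switched over to TwoDee<double>. A minimal sketch of that assumption, reusing the question's reader loops (this is not part of the original answer):
TwoDee<double> Ipot(BrCv - 1, BrSat);   // sketch: input arrays as TwoDee<double>
TwoDee<double> cosfi(BrCv - 1, BrSat);
for (int i = 0; i < BrCv - 1; i++)
    for (int j = 0; j < BrSat; j++)
        reader >> Ipot(i, j);           // operator() returns a reference, so >> works
for (int i = 0; i < BrCv - 1; i++)
    for (int j = 0; j < BrSat; j++)
        reader >> cosfi(i, j);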

Declaring a dynamic 2D array of a primitive type works the same as for std::complex<T>.
Jagged array:
complex<int> **ary = new complex<int>*[sizeY];
//run loop to initialize
for (int i = 0; i < sizeY; ++i)
{
ary[i] = new complex<int>[sizeX];
}
//clean up (you could wrap this in a class and write this in its destructor)
for (int i = 0; i < sizeY; ++i)
{
delete[] ary[i];
}
delete[] ary;
//access with
ary[i][j];
//assign index with
ary[i][j] = complex<int>(int,int);
It's a little heavier weight than it needs to be, and it allocates more blocks than you need.
Multidimensional arrays only need one block of memory, they don't need one block per row.
Rectangular array:
complex<int> *ary = new complex<int>[sizeX * sizeY];
//access with:
ary[y*sizeX + x]
//assign with
ary[y*sizeX+x] = complex<int>(int,int);
//clean up
delete[] ary;
Allocating just a single contiguous block is the way to go (less impact on the allocator, better locality, etc.). But you have to sacrifice clean and nice subscripting.
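If you want the single block but miss the two-index syntax, a tiny helper can hide the offset arithmetic. A sketch under the same sizeX/sizeY assumptions as above (the at() helper is invented here, not standard):
// Sketch: readable two-index access over the flat block.
inline complex<int>& at(complex<int>* ary, int sizeX, int x, int y)
{
    return ary[y * sizeX + x];   // same row-major offset as above
}
// usage:
// at(ary, sizeX, x, y) = complex<int>(1, 2);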

Related

How to delete and reassign a dynamically allocated pointer

I'm taking a C++ programming course (we are still mostly using C) and we just got to dynamic memory allocation. For one of my homework assignments, I'm asked to create a function that transposes any given matrix. This function is given the following arguments as inputs: a pointer to where the matrix elements are stored, the number of rows, and the number of columns. I would like this to be a void function that changes the order of the stored elements without returning any new pointer.
I tried creating a new pointer, in which I save the elements in the correct order (using 2 for loops). Then what I would like to do is deallocate the original pointer (using delete), assign it to the new pointer, and finally delete the new pointer.
This unfortunately does not work (some elements turn out to be random numbers), but I don't understand why.
I hope my code is more precise and clear than my explanation:
void Traspose(float *matrix, const int rows, const int cols ){
auto *tras = new float [rows * cols];
int k = 0;
for(int i = 0; i < cols; i++){
for(int j = 0; j < rows * cols; j += cols){
tras[k] = matrix[j + i];
k++;
}
}
delete[] matrix;
matrix = tras;
delete[] tras;
}
All those lines are wrong:
delete[] matrix;
matrix = tras;
delete[] tras;
You didn't allocate matrix, so you don't want to delete it.
You assign tras to matrix and then you delete tras; after that, tras points nowhere, and so does matrix.
matrix = tras is pointless anyway, because matrix is a local variable, and any changes to local variables are lost after the function ends.
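As an aside (a sketch, not the recommended fix): if the assignment to matrix really were meant to stick, the pointer itself would have to be passed by reference, and then the only delete would be of the old buffer, assuming the caller allocated it with new[]:
// Sketch only: pass the pointer by reference so the caller sees the new buffer.
void Traspose(float*& matrix, const int rows, const int cols)
{
    float* tras = new float[rows * cols];
    int k = 0;
    for (int i = 0; i < cols; i++)
        for (int j = 0; j < rows * cols; j += cols)
            tras[k++] = matrix[j + i];
    delete[] matrix;   // free the caller's old buffer (must have come from new[])
    matrix = tras;     // the caller's pointer now owns the transposed data
}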
You're inventing a problem where none should exist.
A matrix that is AxB in dimension will transpose to a matrix that is BxA in size. While the dimensional difference is obvious, the storage requirement might not be so obvious: the storage is identical.
Per the function signature, the change must be done in the same memory allocated to matrix. E.g., the results should be stored back into matrix memory. So, don't delete that memory; leave it alone. It is both large enough to hold the transposition, and owned by the caller regardless.
Rather, do this:
void Traspose(float *matrix, const int rows, const int cols)
{
float *tras = new float[ rows * cols ];
int k = 0;
for (int i = 0; i < cols; i++)
{
for (int j = 0; j < rows * cols; j += cols)
tras[k++] = matrix[j + i];
}
for (int i=0; i<k; ++i)
matrix[i] = tras[i];
delete [] tras;
}
Note this gets quite a bit simpler (and safer) if the option to use the standard library algorithms and containers is on the table:
void Traspose(float *matrix, const int rows, const int cols)
{
std::vector<float> tras;
tras.reserve(rows*cols);
for (int i = 0; i < cols; i++)
{
for (int j = 0; j < rows * cols; j += cols)
tras.emplace_back(matrix[j + i]);
}
std::copy(tras.begin(), tras.end(), matrix);
}
Finally, probably worth investigating in your spare time, there are algorithms to do this, even for non-square matrices, in place without temporary storage using permutation chains. I'll leave researching those as an exercise to the OP.
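For the square special case the in-place version needs no research at all; a minimal sketch (the general non-square, permutation-cycle version is the exercise mentioned above):
#include <utility>   // std::swap

// Sketch: in-place transpose, square matrices (rows == cols == n) only.
void TrasposeSquare(float* matrix, const int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            std::swap(matrix[i * n + j], matrix[j * n + i]);
}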

Way to ensure a dynamically allocated matrix is square?

I would like to determine if there is a way to determine whether a dynamically allocated matrix is square (nxn).
The first thing that came to mind was to see if there is a way to find out whether a pointer is about to point to an invalid memory location. But according to these posts:
C++ Is it possible to determine whether a pointer points to a valid object?
Testing pointers for validity (C/C++)
This cannot be done.
The next idea I came up with was to somehow use the sizeof operator to find a pattern with square matrices, but using sizeof on a pointer will always yield the same value.
I start off by creating a dynamically allocated array to be of size nxn:
int **array = new int*[n]
for(int i = 0; i < n; i++)
array[i] = new int[n];
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++){
array[i][j] = 0;
}
}
Now I have a populated square matrix of size nxn. Let's say I'm implementing a function to print a square 2D array, but a user has inadvertently created and passed a 2D array of size mxn into my function (accomplished by the code above, except there are more row pointers than elements that comprise the columns, or vice versa), and we're also not sure whether the user has passed a value of n corresponding to n rows or n columns:
bool print(int **arr, int n){
for(int rows = 0; rows < n; rows++){
for(int cols = 0; cols < n; cols++){
cout << *(*(arr + rows) + cols) << " ";
// Is our next column value encroaching on unallocated memory?
}
cout << endl;
// Is our next row value out of bounds?
}
}
Is there any way to inform this user (before exiting with a segmentation fault), that this function is for printing square 2D arrays only?
Edit: corrected 3rd line from
array[i] = new int[i]
to
array[i] = new int[n]
There is NO way to find out information about an allocation. The ONLY way you can do that, is to store the information about the matrix dimensions somewhere. Pointers are just pointers. Nothing more, nothing less. If you need something more than a pointer, you'll need to define a type that encapsulates all of that information.
class Matrix2D
{
public:
Matrix2D(int N, int M)
: m_N(N), m_M(M), m_data(new int[N*M]) {}
int N() const { return this->m_N; }
int M() const { return this->m_M; }
int* operator[] (int index) const
{ return m_data + m_M * index; }
private:
int m_N;
int m_M;
int* m_data;
};
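With the dimensions carried by the object, the original question answers itself: the print function can simply compare N() and M(). A usage sketch (printSquare is a name made up here, not part of the answer above):
#include <iostream>

// Sketch: refuse to print anything that isn't square.
bool printSquare(const Matrix2D& m)
{
    if (m.N() != m.M())
        return false;                    // not square; let the caller know
    for (int i = 0; i < m.N(); i++)
    {
        for (int j = 0; j < m.M(); j++)
            std::cout << m[i][j] << " ";
        std::cout << std::endl;
    }
    return true;
}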

How to heap allocate a 2D array in C++? [duplicate]

This question already has answers here:
How do I declare a 2d array in C++ using new?
(29 answers)
Closed 5 years ago.
I am trying to do something like this:
std::string* Plane = new std::string[15][60];
However this code seems not to compile.
Is there any other way to accomplish the same result?
Thanks for any potential help.
There are three ways of doing this.
The first is to allocate it as an 'array of arrays' structure (I'm converting your code to std::vector, because it's way safer than dealing with raw pointers). This is ideal if you need each row to have its own length, but eats up extra memory:
std::vector<std::vector<std::string>> Plane(15);
for(size_t index = 0; index < 15; index++)
Plane[index].resize(60);
for(size_t i = 0; i < 15; i++)
for(size_t j = 0; j < 60; j++)
Plane[i][j] = "This is a String!";
The second is to allocate it as a flat structure, which dramatically improves performance at the cost of some flexibility:
std::vector<std::string> Plane(15 * 60);
for(size_t i = 0; i < 15; i++)
for(size_t j = 0; j < 60; j++)
Plane[i* 60 + j] = "This is a String!";
The third, which I consider the best option by far because of its extensibility, is to roll a Matrix class which abstracts away these details for you, making it less likely you'll make a mistake in your coding:
template<typename T>
class Matrix {
std::vector<T> _data;
size_t rows, columns;
public:
Matrix(size_t rows, size_t columns) : rows(rows), columns(columns), _data(rows * columns) {}
T & operator()(size_t row, size_t column) {
return _data[row * columns + column];
}
T const& operator()(size_t row, size_t column) const {
return _data[row * columns + column];
}
};
Matrix<std::string> Plane(15, 60);
for(size_t i = 0; i < 15; i++)
for(size_t j = 0; j < 60; j++)
Plane(i, j) = "This is a String!";
Of course, that's an extremely simplified implementation; you'd probably want to add a bunch of STL-like functionality like rows(), columns(), at(), begin(), end(), etc.
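As an illustration, here is one way a bounds-checked at() and size accessors might be bolted on (a sketch only; since the data members are already named rows and columns, the accessors get made-up names here):
#include <cstddef>
#include <stdexcept>
#include <vector>

// Sketch: the same Matrix, extended with at() and size accessors.
template<typename T>
class Matrix {
    std::vector<T> _data;
    size_t rows, columns;
public:
    Matrix(size_t rows, size_t columns) : _data(rows * columns), rows(rows), columns(columns) {}
    size_t row_count() const { return rows; }
    size_t column_count() const { return columns; }
    T& at(size_t row, size_t column) {
        if (row >= rows || column >= columns)
            throw std::out_of_range("Matrix::at: index out of range");
        return _data[row * columns + column];
    }
};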
When using new[] to allocate a multi-dimensional array, you have to allocate each dimension separately, e.g.:
std::string** Plane = new std::string*[15];
for(int i = 0; i < 15; ++i)
Plane[i] = new std::string[60];
...
for(int i = 0; i < 15; ++i)
delete[] Plane[i];
delete[] Plane;
To access a string at a given row/column pair, you can use Plane[row][column] syntax.
Otherwise, flatten it into a 1-dimensional array instead:
std::string* Plane = new std::string[15*60];
...
delete[] Plane;
To access a string at a given row/column pair, you can use Plane[(row*60)+column] syntax.
That being said, you should stay away from using raw pointers like this. Use std::vector or std::array instead:
typedef std::vector<std::string> string_vec;
// or, in C++11 and later:
// using string_vec = std::vector<std::string>;
std::vector<string_vec> Planes(15, string_vec(60));
// C++11 and later only...
std::vector<std::array<std::string, 60>> Planes(15);
// C++11 and later only...
using Plane_60 = std::array<std::string, 60>;
std::unique_ptr<Plane_60[]> Planes(new Plane_60[15]);
// C++14 and later only..
using Plane_60 = std::array<std::string, 60>;
std::unique_ptr<Plane_60[]> Planes = std::make_unique<Plane_60[]>(15);
Any of these will let you access strings using Planes[row][column] syntax, while managing the array memory for you.
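For example, with the first vector-of-vector form, filling the grid looks just like the raw-pointer version (a trivial sketch):
// Sketch: same row/column access, but the containers own the memory.
for (size_t row = 0; row < Planes.size(); ++row)
    for (size_t col = 0; col < Planes[row].size(); ++col)
        Planes[row][col] = "This is a String!";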

Define an array for which the number of rows (cols) is unknown in C++

I have a 2048x2048 matrix of a grayscale image. I want to find the points whose values are > 0 and store their positions in an array of 2 columns and n rows (n is also the number of found points). Here is my algorithm:
int icount;
icount = 0;
for (int i = 0; i < 2048; i++)
{
for (int j = 0; j < 2048; j++)
{
if (iout.at<double>(i, j) > 0)
{
icount++;
temp[icount][1] = i;
temp[icount][2] = j;
}
}
}
I have 2 problems :
temp is an array for which the number of rows is unknown, because the number of rows increases after each loop iteration, so how can I define the temp array? I need the exact number of rows for another implementation later, so I can't just give it some arbitrary size.
My algorithm above doesn't work; the result is
temp[1][1]=0 , temp[1][2]=0 , temp[2][1]=262 , temp[2][2]=655
which is completely wrong. The right one is:
temp[1][1]=1779 , temp[1][2]=149 , temp[2][1]=1780 , temp[2][2]=149
I know the right result because I implemented it in Matlab; it is
[a,b]=find(iout>0);
How about a std::vector of std::pair:
std::vector<std::pair<int, int>> temp;
Then add (i, j) pairs to it using push_back. No size needed to be known in advance:
temp.push_back(std::make_pair(i, j));
We'll need to know more about your problem and your code to be able to tell what's wrong with the algorithm.
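Putting that together, a minimal sketch (iout and its at<double>(i, j) access are taken from the question; the exact row count you need later is just temp.size()):
#include <vector>
#include <utility>

// Sketch: collect the (i, j) positions of every value > 0.
std::vector<std::pair<int, int>> temp;
for (int i = 0; i < 2048; i++)
{
    for (int j = 0; j < 2048; j++)
    {
        if (iout.at<double>(i, j) > 0)            // iout is the question's image matrix
            temp.push_back(std::make_pair(i, j));
    }
}
// temp.size() is the exact number of found points.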
When you define a variable of pointer type, you need to allocate memory and have the pointer point to that memory address. In your case, you have a multidimensional pointer so it requires multiple allocations. For example:
int **temp = new int *[100]; // This means you have room for 100 arrays (in the 2nd dimension)
int icount = 0;
for(int i = 0; i < 2048; i++) {
for(int j = 0; j < 2048; j++) {
if(iout.at<double>(i, j) > 0) {
temp[icount] = new int[2]; // only 2 variables needed at this dimension
temp[icount][0] = i;
temp[icount][1] = j;
icount++;
}
}
}
This will work for you, but it's only good if you know for sure you're not going to need any more than the pre-allocated array size (100 in this example). If you know exactly how much you need, this method is ok. If you know the maximum possible, it's also ok, but could be wasteful. If you have no idea what size you need in the first dimension, you have to use a dynamic collection, for example std::vector as suggested by IVlad. In case you do use the method I suggested, don't forget to free the allocated memory using delete []temp[i]; and delete []temp;

slow performance for 3D array delete C++

int newHeight = _height/2;
int newWidth = _width/2;
double*** imageData = new double**[newHeight];
for (int i = 0; i < newHeight; i++)
{
imageData[i] = new double*[newWidth];
for (int j = 0; j < newWidth; j++)
{
imageData[i][j] = new double[4];
}
}
I have dynamically allocated this 3D matrix.
What is the fastest and safest way to free the memory here?
Here is what I have done, but it takes a few seconds; my matrix is big (1500, 2000, 4):
for (int i = 0; i != _height/2; i++)
{
for (int j = 0; j != _width/2; j++)
{
delete[] imageData[i][j];
}
delete[] imageData[i];
}
delete[] imageData;
Update
As suggested I have chosen this solution:
std::vector<std::vector<std::array<double,4>>>
the performance is great for my case
Allocate the entire image data as one block so you can free it as one block, i.e. double* imageData = new double[width*height*4]; delete [] imageData; and index into it using offsets. Right now you are making 3 million separate allocations, which is thrashing your heap.
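A hedged sketch of that idea, using the dimensions from the question (the indexing formula is the usual row-major layout, not code from the answer):
// Sketch: one flat block instead of ~3 million tiny ones.
double* imageData = new double[newHeight * newWidth * 4];

// element (i, j, k), with 0 <= k < 4:
// imageData[(i * newWidth + j) * 4 + k] = value;

delete[] imageData;   // one allocation, one deallocation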
I agree with qartar's answer right up until he said "index into it using offsets". That isn't necessary. You can have your single allocation and multiple subscript access (imageData[i][j][k]) too. I previously showed this method here, it's not difficult to adapt it for the 3-D case:
allocation code as follows:
double*** imageData;
imageData = new double**[width];
imageData[0] = new double*[width * height];
imageData[0][0] = new double[width * height * 4];
for (int i = 0; i < width; i++) {
if (i > 0) {
imageData[i] = imageData[i-1] + height;
imageData[i][0] = imageData[i-1][0] + height * 4;
}
for (int j = 1; j < height; j++) {
imageData[i][j] = imageData[i][j-1] + 4;
}
}
Deallocation becomes simpler:
delete[] imageData[0][0];
delete[] imageData[0];
delete[] imageData;
Of course, you can and should use std::vector to do the deallocation automatically:
std::vector<double**> imageData(width);
std::vector<double*> imageDataRows(width * height);
std::vector<double> imageDataCells(width * height * 4);
for (int i = 0; i < width; i++) {
imageData[i] = &imageDataRows[i * height];
for (int j = 0; j < height; j++) {
imageData[i][j] = &imageDataCells[(i * height + j) * 4];
}
}
and deallocation is completely automatic.
See my other answer for more explanation.
Or use std::array<double,4> for the last subscript, and use 2-D dynamic allocation via this method.
A slight variation on the first idea of Ben Voigt's answer:
double ***imagedata = new double**[height];
double **p = new double*[height * width];
double *q = new double[height * width * length];
for (int i = 0; i < height; ++i, p += width) {
imagedata[i] = p;
for (int j = 0; j < width; ++j, q += length) {
imagedata[i][j] = q;
}
}
// ...
delete[] imagedata[0][0];
delete[] imagedata[0];
delete[] imagedata;
It is possible to do the whole thing with a single allocation, but that would introduce a bit of complexity that you might not want to pay.
Now, given that each table lookup involves a couple of back-to-back reads of pointers from memory, this solution will pretty much always be quite inferior to allocating a flat array and doing index calculations to convert a triple of indices into one flat index (and you should write a wrapper class that does those index calculations for you).
The main reason to use arrays of pointers to arrays of pointers to arrays is when your array is ragged — that is, imagedata[a][b] and imagedata[c][d] have different lengths — or maybe for swapping rows around, such as swap(imagedata[a][b], imagedata[c][d]). And under these circumstances, vector as you've used it is preferable to use until proven otherwise.
The primary portion of your algorithm that is killing performance is the granularity and sheer number of allocations you're making. In total you're performing 3,001,501 allocations, broken down as:
1 allocation for 1500 double**
1500 allocations, each of which obtains 2000 double*
3000000 allocations each of which obtains double[4]
This can be considerably reduced. You can certainly do as others suggest and simply allocate one massive array of double, leaving the index calculation to accessor functions. Of course, if you do that, you need to ensure you bring the sizes along for the ride. The result, however, will easily deliver the fastest allocation time and the best access performance. Using a std::vector<double> arr(d1*d2*4); and doing the offset math as needed will serve very well.
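A sketch of that offset math (the idx helper is only an illustration; d1 and d2 are the 1500 and 2000 from the question):
#include <cstddef>
#include <vector>

// Sketch: flat storage plus one index function.
inline std::size_t idx(std::size_t i, std::size_t j, std::size_t k, std::size_t d2)
{
    return (i * d2 + j) * 4 + k;      // row-major, innermost dimension of 4
}

// usage:
// std::vector<double> arr(d1 * d2 * 4);
// arr[idx(i, j, k, d2)] = 1.0;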
Another Way
If you are dead set on using a pointer-array approach, you can eliminate the 3000000 allocations by obtaining both of the inferior dimensions in single allocations. Your most-inferior dimension is fixed (4), so you could do this (though you'll see in a moment there is a much more C++-centric mechanism):
double (**allocPtrsN(size_t d1, size_t d2))[4]
{
typedef double (*Row)[4];
Row *res = new Row[d1];
for (size_t i=0; i<d1; ++i)
res[i] = new double[d2][4];
return res;
}
and simply invoke as:
double (**arr3D)[4] = allocPtrsN(d1,d2);
where d1 and d2 are your two superior dimensions. This produces exactly d1 + 1 allocations, the first being d1 pointers, the remaining being d1 allocations, one for each double[d2][4].
Using C++ Standard Containers
The prior code is obviously tedious, and frankly prone to considerable error. C++ offers a tidy solution to this using a vector of vector of fixed array, doing this:
std::vector<std::vector<std::array<double,4>>> arr(1500, std::vector<std::array<double,4>>(2000));
Ultimately this will do nearly the same allocation technique as the rather obtuse code shown earlier, but provide you all the lovely benefits of the standard library while doing it. You get all those handy members of the std::vector and std::array templates, and RAII features as an added bonus.
However, there is one significant difference. The raw pointer method shown earlier will not value-initialize each allocated entity; the vector of vector of array method will. If you think it doesn't make a difference...
#include <iostream>
#include <vector>
#include <array>
#include <chrono>
using Quad = std::array<double, 4>;
using Table = std::vector<Quad>;
using Cube = std::vector<Table>;
Cube allocCube(size_t d1, size_t d2)
{
return Cube(d1, Table(d2));
}
double ***allocPtrs(size_t d1, size_t d2)
{
double*** ptrs = new double**[d1];
for (size_t i = 0; i < d1; i++)
{
ptrs[i] = new double*[d2];
for (size_t j = 0; j < d2; j++)
{
ptrs[i][j] = new double[4];
}
}
return ptrs;
}
void freePtrs(double***& ptrs, size_t d1, size_t d2)
{
for (size_t i=0; i<d1; ++i)
{
for (size_t j=0; j<d2; ++j)
delete [] ptrs[i][j];
delete [] ptrs[i];
}
delete [] ptrs;
ptrs = nullptr;
}
double (**allocPtrsN(size_t d1, size_t d2))[4]
{
typedef double (*Row)[4];
Row *res = new Row[d1];
for (size_t i=0; i<d1; ++i)
res[i] = new double[d2][4];
return res;
}
void freePtrsN(double (**p)[4], size_t d1, size_t d2)
{
for (size_t i=0; i<d1; ++i)
delete [] p[i];
delete [] p;
}
template<class C>
void print_duration(const std::chrono::time_point<C>& beg,
const std::chrono::time_point<C>& end)
{
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - beg).count() << "ms\n";
}
int main()
{
using namespace std::chrono;
time_point<system_clock> tp;
volatile double vd;
static constexpr size_t d1 = 1500, d2 = 2000;
tp = system_clock::now();
for (int i=0; i<10; ++i)
{
double ***cube = allocPtrs(d1,d2);
cube[d1/2][d2/21][1] = 1.0;
vd = cube[d1/2][d2/2][3];
freePtrs(cube, 1500, 2000);
}
print_duration(tp, system_clock::now());
tp = system_clock::now();
for (int i=0; i<10; ++i)
{
Cube cube = allocCube(1500,2000);
cube[d1/2][d2/21][1] = 1.0;
vd = cube[d1/2][d2/2][3];
}
print_duration(tp, system_clock::now());
tp = system_clock::now();
for (int i=0; i<10; ++i)
{
auto cube = allocPtrsN(d1,d2);
cube[d1/2][d2/21][1] = 1.0;
vd = cube[d1/2][d2/21][1];
freePtrsN(cube, d1, d2);
}
print_duration(tp, system_clock::now());
}
Output
5328ms
418ms
95ms
Thus, if you're planning on loading up every element with something besides zero anyway, it is something to keep in mind.
Conclusion
If performance were critical I would use the 24MB (on my implementation, anyway) single allocation, likely in a std::vector<double> arr(d1*d2*4);, and do the offset calculations as needed using one form of secondary indexing or another. Other answers proffer up interesting ideas on this, notably Ben's, which radically reduces the allocation count to a mere three blocks (data, and two secondary pointer arrays). Sorry, I didn't have time to bench it, but I would suspect the performance would be stellar. But if you really want to keep your existing technique, consider doing it in a C++ container as shown above. If the extra cycles spent value-initializing the world aren't too heavy a price to pay, it will be much easier to manage (and obviously less code to deal with in comparison to raw pointers).
Best of luck.