I take the number of rows and the number of columns from the user.
I want to make a 2D array (matrix) of size n x m, initialize it, and do some work on it.
#include <iostream>
using namespace std;

int main()
{
int m,n;
cin>>m>>n;
const int grow=m;
const int gcol=n;
auto G = new double[grow][gcol](); //GIVES ERROR that grow and gcol must be const
/*int** G = new int*[n];
for (int i = 0; i < n; ++i)
G[i] = new int[n];*/
}
You can always index into a one-dimensional array with y * gcol + x to make it effectively work as a two-dimensional one. That way you can use dynamic memory, e.g. with a std::vector<double>.
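A minimal sketch of that flat layout, assuming m rows and n columns are read at runtime as in the question (so n plays the role of gcol):

#include <iostream>
#include <vector>

int main()
{
    int m, n;
    std::cin >> m >> n;                  // rows, then columns, at runtime

    std::vector<double> G(m * n, 0.0);   // one flat block of m*n doubles

    int y = 1, x = 2;                    // some row y and column x
    G[y * n + x] = 3.14;                 // the element a 2D view would call (y, x)

    std::cout << G[y * n + x] << '\n';
}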
//GIVES ERROR that grow and gcol must be const
No, it does not. Unless your compiler is bad. Read the error again.
It gives an error that gcol must be a constant expression.
You cannot have dynamic arrays of dynamic arrays. It's simply not possible in C++. You can only have dynamic arrays of things that have a static size, known at compile time.
Therefore, you cannot have a 2D array where both dimensions are determined at runtime.
You have two alternatives:
Use a dynamic array of pointers to dynamic arrays, which is what you have there, commented out. A dynamic array of vectors works too.
Use a flat, one dimensional array that contains the rows in succession.
In either case, I recommend using a class to manage the memory. std::vector, perhaps.
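For instance, a minimal sketch of the first alternative using std::vector to manage the memory (m rows and n columns read at runtime, as in the question):

#include <iostream>
#include <vector>

int main()
{
    int m, n;
    std::cin >> m >> n;                   // rows, then columns

    // A vector of m row vectors, each with n doubles, all zero-initialized
    std::vector<std::vector<double>> G(m, std::vector<double>(n, 0.0));
    G[2][3] = 1.5;                        // row 2, column 3 (assumes m > 2 and n > 3)
}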
Array size is part of the type and needs to be known at compile time. You get it at runtime. Use vectors instead.
I've got some data structured as a multi-dimensional array, i.e. double[][], and I need to pass it to a function that expects a single linear array of double[] along with dimensional metadata for the multi-dimensional representation.
For example, I might have a 3 x 5 multidimensional array, which I need to pass as a 15-element flat array along with height and width parameters so that the function knows it is a 3x5 array rather than a 5x3 array.
The function will then return a flat array and size metadata, which I need to use to convert the data back into a multidimensional type.
I believe the data layout in memory is exactly the same for both the flat and multi-dimensional representations; the only difference is how the indexing operations are performed. So I'd like to do the "conversion" with typecasting rather than copying the array values.
What's the most correct and readable way to typecast between multidimensional and flat arrays of the same total size?
I actually know what the dimensions of the multi-dimensional array will be at compile time. The array sizes aren't dynamic.
The most correct way has been given by @Maxim Egorushkin and @ypnos: double *flat = &multi[0][0];. And it will work fine with any decent compiler. But unfortunately it is not valid C++ code and invokes Undefined Behaviour.
The problem is that for an array double multi[N][M]; (N and M being compile-time constant expressions), &multi[0][0] is the address of the first element of an array of size M. So it is legal to do pointer arithmetic only up to M. See this other question of mine for more details.
What's the most correct and readable way to typecast between multidimensional and flat arrays of the same total size?
The address of the first array element coincides with the address of the array. You can pass around the address of the first element, no casting is necessary.
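A minimal sketch of that; process and its parameter names are hypothetical, used only for illustration:

#include <cstddef>

// Hypothetical consumer that expects a flat array plus its dimensions
void process(double *data, std::size_t height, std::size_t width)
{
    // data[i * width + j] addresses element (i, j)
}

int main()
{
    double multi[3][5] = {};
    process(&multi[0][0], 3, 5);   // no cast needed; the elements are contiguous
}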
I would assume the most popular way to do it is:
double *flat = &multi[0][0];
This is how it is done in C, and you do operate with simple C arrays.
You could also have a look at std::array in your use case (dimensions known at compile time), but that one is not multi-dimensional, so if you were to nest it, you would lose the guaranteed contiguous layout.
You can cast to a reference to an array. This requires some fancy C++ type syntax, but in return it allows you to use all the features that work on arrays, such as the range-based for loop.
#include <iostream>
using namespace std;

int main()
{
    static constexpr size_t x = 5, y = 3;
    unsigned multiArray[x][y];

    // Fill the 2D array
    for (size_t i = 0; i != x; ++i)
        for (size_t j = 0; j != y; ++j)
            multiArray[i][j] = i * j;

    static constexpr size_t z = x * y;

    // Cast to a reference to a flat array of the same total size
    unsigned (&singleArray)[z] = (unsigned (&)[z])multiArray[0][0];

    // Range-based for works because singleArray has a real array type
    for (const unsigned value : singleArray)
        cout << value << ' ';
    cout << endl;

    return 0;
}
Take into account that this and other methods based on casts work only with real multi-dimensional arrays. If it is an array of arrays (like unsigned **multiArray;), it isn't allocated in one contiguous block of memory and a cast cannot bypass that.
For a project using Tensorflow's C API I have to pass a void pointer (void*) to a method of Tensorflow. In the examples the void* points to a 2D array, which also worked for me. However, now I have array dimensions which do not allow me to use the stack, which is why I have to use a dynamic array or a vector.
I managed to create a dynamic array with the same entries like this:
float** normalizedInputs;
normalizedInputs = new float* [noCellsPatches];
for(int i = 0; i < noCellsPatches; ++i)
{
    normalizedInputs[i] = new float[no_input_sizes];
}
for(int i = 0; i < noCellsPatches; i++)
{
    for(int j = 0; j < no_input_sizes; j++)
    {
        normalizedInputs[i][j] = inVals.at(no_input_sizes*i+j);
        //normalizedInputs[i][j] = (inVals.at(no_input_sizes*i+j)-inputMeanValues.at(j))/inputVarValues.at(j);
    }
}
The function call needing the void* looks like this:
TF_Tensor* input_value = TF_NewTensor(TF_FLOAT,in_dims_arr,2,normalizedInputs,num_bytes_in,&Deallocator, 0);
In argument 4 you see the "normalizedInputs" array. When I run my program now, the calculated results are totally wrong. When I go back to the static array they are right again. What do I have to change?
Greets and thanks in advance!
Edit: I also noted that the TF_Tensor* input_value holds totally different values for both cases (for dynamic it has many 0 and nan entries). Is there a way to solve this by using a std::vector<std::vector<float>>?
Or rather: is there any valid way to pass a contiguous dynamic 2D data structure to a function as void*?
In argument 4 you see the "normalizedInputs" array. When I run my program now, the calculated results are totally wrong.
The reason this doesn't work is that you are passing the array of pointers as data. In this case you would have to use normalizedInputs[0] or the equivalent, more explicit expression &normalizedInputs[0][0]. However, there is another, bigger problem with this code.
Since you are using new inside a loop, you won't have the contiguous data which TF_NewTensor expects. There are several solutions to this.
If you really need a 2d-array you can get away with two allocations. One for the pointers and one for the data. Then set the pointers into the data array appropriately.
float **normalizedInputs = new float* [noCellsPatches]; // allocate pointers
normalizedInputs[0] = new float [noCellsPatches*no_input_sizes]; // allocate data
// set pointers
for (int i = 1; i < noCellsPatches; ++i) {
normalizedInputs[i] = &normalizedInputs[i-1][no_input_sizes];
}
Then you can use normalizedInputs[i][j] as normal in C++ and the normalizedInputs[0] or &normalizedInputs[0][0] expression for your TF_NewTensor call.
Here is a mechanically simpler solution: just use a flat 1D array.
float * normalizedInputs = new float [noCellsPatches*no_input_sizes];
You access the i,j-th element by normalizedInputs[i*no_input_sizes+j] and you can use it directly in the TF_NewTensor call without worrying about any addresses.
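A sketch of that flat variant, reusing the question's own variables and its original TF_NewTensor call (so noCellsPatches, no_input_sizes, inVals, in_dims_arr, num_bytes_in and Deallocator are assumed to be defined as in the question):

float *normalizedInputs = new float[noCellsPatches * no_input_sizes];
for (int i = 0; i < noCellsPatches; i++)
{
    for (int j = 0; j < no_input_sizes; j++)
    {
        // element (i, j) lives at offset i*no_input_sizes + j in the flat block
        normalizedInputs[i * no_input_sizes + j] = inVals.at(no_input_sizes * i + j);
    }
}

// The data is one contiguous block, so the pointer itself can be passed:
TF_Tensor *input_value = TF_NewTensor(TF_FLOAT, in_dims_arr, 2,
                                      normalizedInputs, num_bytes_in,
                                      &Deallocator, 0);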
The C++ standard does its best to discourage programmers from using raw arrays, specifically multi-dimensional ones.
From your comment, your statically declared array is declared as:
float normalizedInputs[noCellsPatches][no_input_sizes];
If noCellsPatches and no_input_sizes are both compile-time constants, you have a correct program declaring a true 2D array. If they are not constants, you are declaring a 2D Variable Length Array... which does not exist in the C++ standard. Fortunately, gcc allows it as an extension, but neither MSVC nor clang does.
If you want to declare a dynamic 2D array with non constant rows and columns, and use gcc, you can do that:
int (*arr0)[cols] = (int (*) [cols]) new int [rows*cols];
(the naive int (*arr0)[cols] = new int [rows][cols]; was rejected by my gcc 5.4.0)
It is definitely not correct C++ but is accepted by gcc and does what is expected.
The trick is that we all know that the size of an array of n elements is n times the size of one element. A 2D array of rows rows of columns columns is then rows times the size of one row, which is columns when measured in underlying elements (here int). So we ask gcc to allocate a 1D array of the size of the 2D array and take enough liberties with the strict aliasing rule to process it as the 2D array we wanted. As previously said, it violates the strict aliasing rule and uses VLAs in C++, but gcc accepts it.
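A usage sketch of that trick (again gcc-only, not standard C++, and it takes the same liberties with strict aliasing):

int main()
{
    int rows = 3, cols = 4;                               // runtime values
    int (*arr0)[cols] = (int (*)[cols]) new int[rows * cols];

    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            arr0[i][j] = i * cols + j;                    // natural 2D indexing over one allocation

    delete[] (int *)arr0;                                 // free through the original element type
}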
First of all, happy new year!
So, I'd like to ask if I could use some input values as the size of a bidimensional array, for example:
I'd like to know, if instead of doing this:
const int N = 10;
const int M = 10;
typedef int IntMatrix[N][M];
Let's say that would be the maximum size of the array I could create, but then the user inputs that the size must be 5x5. I know I could then use 5x5 as a limit when doing stuff, but could I instead use the input values as the dimensions of the matrix?
Something like:
cin >> N >> M;
And then use that as the MAX size of each dimension.
Thanks for your help!
No. The size of an array must be known at compile time and cannot be determined at runtime, as described in this tutorial for example. Therefore, the size of the array cannot depend on user input.
What you can do is allocate an array dynamically and store its address in a pointer. The size of a dynamic array can be determined at runtime. However, there is a problem: only the outermost dimension of a dynamically allocated 2D array may be determined at runtime. You have two options: either you allocate a flat array of size NxM where the rows are stored contiguously one after the other and you calculate the index yourself, or you use an array of pointers and assign each pointer a dynamically allocated row. The first option is more efficient.
There is another problem. Dynamic memory management is hard, and it's never a good idea to do it manually even if you know what you're doing. Much less if you don't. There is a container class in the standard library which takes care of memory management of dynamic arrays. It's std::vector. Always use it when you need a dynamic array. Your options stay similar. Either use a flat, NxM size vector, or a vector of vectors.
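A minimal sketch of both options, with the sizes coming from user input as in the question:

#include <iostream>
#include <vector>

int main()
{
    int N, M;
    std::cin >> N >> M;                   // sizes known only at runtime

    // Option 1: flat N*M vector, index computed by hand
    std::vector<int> flat(N * M, 0);
    flat[2 * M + 3] = 7;                  // element (2, 3), assuming N > 2 and M > 3

    // Option 2: vector of vectors, natural 2D indexing
    std::vector<std::vector<int>> nested(N, std::vector<int>(M, 0));
    nested[2][3] = 7;
}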
The array should be dynamically allocated, because a plain array's size must be known at compile time. You can do it this way:
int N, M;          // Dimensions
int** intMatrix;   // Array of arrays
std::cin >> N >> M;
intMatrix = new int*[N];          // Allocate the N row pointers
for(int i = 0; i < N; i++){
    intMatrix[i] = new int[M];    // For each row, allocate the M columns
}
// ...and don't forget to free the memory, like this:
for(int i = 0; i < N; i++){
    delete [] intMatrix[i];
}
delete [] intMatrix;
I'm trying to make a dynamically allocated bidimensional array with variable size, but I don't know why it won't compile if I create my own constant value:
const int oConstanta=N+1;
int (*m)[oConstanta]=new int[oConstanta][oConstanta];
But when I use a normal constant such as 1000 between the brackets it compiles successfully.
const int oConstanta=N+1;
int (*m)[1000]=new int[1000][1000];
Does anyone know the reason for this?
PS: I know that:
int **m=new int*[oConstanta];
for(int i=1;i<=N;i++)
{
m[i]=new int[oConstanta];
init(m[i]);
}
will solve my problems but I want to learn why my former method was a bad idea.
Unless N is a compile-time constant expression, oConstanta is not a compile-time constant either.
The best way of making a two-dimensional array in C++ is using std::vector of std::vectors, for example, like this:
#include <vector>
std::vector<std::vector<int> > m(N+1, std::vector<int>(N+1, 0));
Ultimately the reason is that you can't create static arrays of variable length.
In your code you are trying to create a static array of dynamic arrays, both of variable length.
Now, static arrays live in the stack, while dynamic arrays live in the heap. While the memory management of the heap is "flexible", the stack is different: the compiler needs to be able to determine the size of each frame in the stack. This is clearly not possible if you use an array of variable length.
On the other hand, if you use a pointer the size of the stack frame is known (a pointer has a known size) and everything is fine.
If you want to try, this should compile fine:
int (*m)[1000] = new int[oConstanta][1000];
since only the outermost dimension (here oConstanta) may be a runtime value: it dynamically allocates oConstanta entries, each of which is a fixed-size array of 1000 ints.
In short: whenever the size of an object is not known at compile time, that object cannot be in the stack, it has to be dynamically allocated.
To make a dynamically sized, 2D matrix with contiguous elements and a single allocation:
std::vector<int> matrix(Rows*Columns);
Access an element in the i th row and j th column:
matrix[Columns*i + j] = 1;
You can wrap this all up in a class. Here's a very basic example:
struct Matrix {
    size_t rows, columns;
    std::vector<int> m;

    Matrix(size_t rows, size_t columns)
        : rows(rows)
        , columns(columns)
        , m(rows * columns)   // one contiguous allocation for all elements
    {}

    int &at(size_t i, size_t j) {
        return m.at(i * columns + j);   // row-major indexing, bounds-checked
    }
};
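It could then be used like this (a small usage sketch; the names are just for illustration):

Matrix mat(3, 5);          // 3 rows, 5 columns, elements zero-initialized
mat.at(2, 4) = 42;         // row 2, column 4
int v = mat.at(2, 4);      // at() is bounds-checked via std::vector::at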
int *x = new int[5]();
With the above mentality, how should the code be written for a 2-dimensional array - int[][]?
int **x = new int[5][5] () //cannot convert from 'int (*)[5]' to 'int **'
In the first statement I can use:
x[0]= 1;
But the second is more complex and I could not figure it out.
Should I use something like:
x[0][1] = 1;
Or, calculate the real position then get the value
for the row with index 4 and column index 1
x[4*5+1] = 1;
I prefer doing it this way:
int *i = new int[5*5];
and then I just index the array by 5 * row + col.
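A short usage sketch of that flat layout (the array is named p here instead of i, just for clarity):

int *p = new int[5 * 5]();       // 25 ints, value-initialized to 0
int row = 4, col = 1;
p[5 * row + col] = 1;            // the element a 2D view would call (4, 1)
delete[] p;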
You can do the initializations separately:
int **x = new int*[5];
for(unsigned int i = 0; i < 5; i++)
x[i] = new int[5];
There is no new[][] operator in C++. You will first have to allocate an array of pointers to int:
int **x = new int*[5];
Then iterate over that array. For each element, allocate an array of ints:
for (std::size_t i = 0; i < 5; ++i)
x[i] = new int[5];
Of course, this means you will have to do the inverse when deallocating: delete[] each element, then delete[] the larger array as a whole.
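A sketch of that deallocation, mirroring the allocation loops above:

for (std::size_t i = 0; i < 5; ++i)
    delete[] x[i];   // free each inner array
delete[] x;          // then free the array of pointers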
This is how you do it:
int (*x)[5] = new int[7][5];
I made the two dimensions different so that you can see which one you have to use on the lhs.
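A short usage sketch for that pointer (7 rows of 5 ints, all in one allocation, freed with a single delete[]):

for (int r = 0; r < 7; ++r)
    for (int c = 0; c < 5; ++c)
        x[r][c] = r * 5 + c;

delete[] x;   // one delete[] frees the whole block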
If the array has a predefined size you can simply write:
int x[5][5];
It compiles.
If not, why not use a vector?
There are several ways to accomplish this:
Using gcc's support for flat multidimensional arrays (TonyK's answer, the most relevant to the question IMO). Note that you must preserve the bounds in the array's type everywhere you use it (i.e. all the array sizes, except possibly the first one), and that includes functions that you call, because the produced code will assume a single array. The allocation new int[7][5] causes a single array to be allocated in memory, indexed by the compiler (you can easily write a little program and print the addresses of the slots to convince yourself).
Using arrays of pointers to arrays. The problem with that approach is having to allocate all the inner arrays manually (in loops).
Some people will suggest using std::vectors of std::vectors, but this is inefficient, due to the memory allocation and copying that has to occur when the vectors resize.
Boost has a more efficient version of vectors of vectors in its multi_array lib.
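For completeness, a minimal boost::multi_array sketch, assuming Boost is available:

#include <boost/multi_array.hpp>

int main()
{
    int rows = 3, cols = 5;                                // runtime sizes
    boost::multi_array<int, 2> a(boost::extents[rows][cols]);
    a[2][4] = 42;                                          // natural 2D indexing
    // a.data() exposes the contiguous underlying storage
}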
In any case, this question is better answered here:
How do I use arrays in C++?