Static Multi-Dimensional Arrays (C/C++)

Is memory allocated for multidimensional arrays in C or C++ always contiguous, or is the storage dependent on the compiler? If it is guaranteed to be contiguous, is there a standard on it somewhere for reference? For example:
int x[2][2] = { { 1, 2 }, { 5, 10 } };
Are the integers 1, 2, 5, 10 in sequence in memory?

Arrays are guaranteed contiguous. What we have here is an array of arrays - each layer of which is contiguous. The innermost arrays we know must be {1, 2} and {5, 10}, and the outermost array must also be contiguous. Therefore, {{1,2},{5,10}} must be 1, 2, 5, 10 sequentially in memory.

Yes. Arrays are always allocated in contiguous memory locations. It doesn't matter whether it's a single- or multi-dimensional array.

Related

Why are some ARRAYS written like this in C++?

I just wanted to know why are some arrays written like this?
int arr[] = {3, 4, 6, 9, 11};
Why can't it just be written like this instead?
int arr[5] = {3, 4, 6, 9, 11};
What's the benefit?
Why can't it just be written like this instead?
The premise is wrong. It can be written both ways.
What's the benefit?
The size is redundant information. We've already implicitly given the size 5 by providing 5 initialisers. Providing redundant information is a potential source of bugs during development when that information accidentally goes out of sync.
For example, if the programmer later decides that the last 11 wasn't supposed to be there and removes it, leaving 4 initialisers, they might not notice that the size also has to change. The last element is then not removed as intended but replaced with a value-initialised element (0) instead.
If the size of the array is supposed to be the same as the number of initialisers, then it is safer to not specify that size explicitly. To not specify the size explicitly is to follow the "Don't Repeat Yourself" principle.
On the other hand, if the size of the array is always supposed to be 5 regardless of the number of initialisers, then specifying the size explicitly achieves that. I suspect that this case is rarer in practice (except when there are no initialisers at all). Note that you should probably use a constant variable instead of a magic number for the size.
When you don't write a number inside the brackets, the array's size equals the number of elements it is initialised with.
When you do put a number there, you are giving the array that size explicitly.
int arr[] = {1, 7, 5}; // Size of the array is 3
int arr[3] = {1, 7, 5}; // Size of the array is 3
int arr[5] = {1, 7, 5}; // Size of the array is 5
Declaring an array with an explicit size is useful when you know how many items you will need but don't have the items yet.
Omitting the size is simply less to write.

Memory allocation in C++ STL for dynamic containers

When you declare 2D arrays, they are stored in contiguous memory locations, and this is straightforward because the number of rows is fixed at declaration.
But when we declare a 2D vector, vector<vector<int>> v, how does it work? The number of rows is not fixed at all. My first guess was that the vectors you push_back into it are allocated at arbitrary locations, but even then they remain randomly accessible.
My guess is that the vectors of int are allocated anywhere in memory and their addresses are stored in another vector of addresses.
eg
vector<vector<int>> vmain;
vector<int> a = {1, 2, 3};
vector<int> b = {1, 2, 3};
vector<int> c = {1, 2, 3};
vmain.push_back(a);
vmain.push_back(b);
vmain.push_back(c);
is stored something similar to
vector<&vector<int>> vmain; //vector of pointer to vector
vector<int> a = {1, 2, 3};
vector<int> b = {1, 2, 3};
vector<int> c = {1, 2, 3};
vmain.push_back(&a);
vmain.push_back(&b);
vmain.push_back(&c);
Please tell me if this is the correct way.
And also, what about a vector of maps or sets, vector<map<int, int>> v1 and vector<set<int>> v2? The sizes of maps and sets are not fixed either.
The vector object doesn't store the elements. It stores a pointer to a contiguous chunk of memory containing the elements. When you have std::vector<std::vector<int>> the outer vector contains a pointer to a contiguous chunk of memory containing vector objects, each of which have a pointer to a contiguous chunk of memory containing the ints.
std::map and std::set also don't store the elements in the object itself. Each object holds pointers into a balanced binary search tree (typically a red-black tree) whose nodes are allocated individually.

Resizing array, does this code influence it badly in any way?

I've seen a bunch of tutorials and threads, but no one resizes an array this way. My question is whether this has any bad effect on my program, or whether there is a better way to resize it.
//GOAL:Array to become {1,2,4,5,6,7,8,9}
int size=9;
int array[size] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
for (int i = 2; i < 8; ++i)
array[i] = array[i + 1];
//ARRAY IS NOW{1, 2, 4, 5, 6, 7, 8, 9, 9}
//GOAL: TO DELETE THAT LAST 9 FROM ARRAY
size=8;
array[size];
//IT SHOULD BE {1,2,4,5,6,7,8,9} now, but does it effect my program in any negative context?
int array[size] declares an array of size elements.
Outside of a declaration, array[size] access element size in array. It does not resize the array. If you do something like that without changing the value of size, it actually tries to access the element after the last element in the array; not a good idea. In this case, since you changed size to be one less than the original, it accesses the last element of the array, which is safe but does not do what you want.
You cannot resize an array in C/C++ that is declared on the stack. (One allocated on the heap with malloc could be reallocated to a different size with realloc, but the resized block may end up at a completely different memory location; doing it by hand means saving the old one, allocating a new one of the new size, copying the elements you want, and then freeing the old one.)
If you want something resizeable, you are in C++; use a container (vector, for example, but pick the one that most suits your needs).
And....I just saw arnav-borborah's comment; don't know how I missed that. You can't even declare the array like that, as size is not a compile time constant.
Unless the size variable is constexpr, this
int size=9;
int array[size] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
is a variable-length array (VLA), which is not part of the C++ standard; it is only an extension of some compilers.
Also, automatic arrays are not resizeable; they have a fixed size from declaration until they go out of scope.
You should use an STL container, like std::array or std::vector.
std::array needs to know its size at compile time, so the best approach here is std::vector, which is easy to use and resizeable.
#include <vector>
std::vector<int> array { 1,2,3,4,5,6,7,8,9 }; // Uniform initialization
// Remove last element
array.pop_back(); // 'array' has now only 8 elements (1..8)
EDIT
As mentioned in comments, if you want to remove n-th element in vector, you may do
array.erase(array.begin()+n);
and job is done.
Hugely. Say you have a payload of 1 GB and you Array.Resize the destination array in chunks of 10 k; then most of your application's CPU time and wait states will be spent resizing that array.
If you pre-allocate the array to 1 GB, populating it will be orders of magnitude faster. This is because every time you use Array.Resize,
the computer needs to move that memory in its entirety to another location just to add the extra length you resized it by.
But of course, if you are dealing with very small arrays, this effect is not noticeable.

C++ 3D vector preserving "blocks"

Suppose I require an undetermined number of 3-by-4 matrices. (Or a sequence of any other fixed m-by-n-dimensional matrices.) My first thought is to store these matrices in a std::vector, where each matrix is itself a std::vector<std::vector<double> >. How can I use std::vector::reserve() to preallocate space for a number, say x, of these matrices? Because I know two of the dimensions, I ought (or I'd like) to be able to reserve x times the size of these blocks.
I know how to implement this object in a 1D std::vector, but I'd like to know how to do it in a 3D std::vector, if for no other reason than to better learn how to use the std::vector class.
Storing matrices as vectors-of-vectors is probably pretty inefficient, but if you must, go for it. Reserving space is the same as always:
typedef std::vector<std::vector<int>> matrix_type;
std::vector<matrix_type> collection;
collection.reserve(100); // set capacity for 100 "matrices"
// make 10 4x3-matrices; `collection` won't reallocate
collection.resize(10, matrix_type(4, std::vector<int>(3)));
For your base type you might be better off with a single vector of m * n elements, accessed in strides, i.e. the (i,j)th element would be at position i * n + j. Each vector is itself a dynamic container, and you probably don't want all that many dynamic allocations all over the place.
In the same vein, the above reserve call probably doesn't do what you think, as it only reserves memory for the inner vector's bookkeeping data (typically three words per vector, i.e. 300 words), and not for the actual data.
In that light, you might even like to consider an std::array<int, m*n> as your matrix type (and access it in strides); now you can actually reserve space for actual matrices up-front - but m and n now have to be compile-time constants.
A better approach would be to provide a class interface and use a single linear block of memory for the whole matrix. Then you can implement that interface in different ways, ranging from an internal array of the appropriate sizes (if the sizes are part of the type) to a single std::vector<int> with index arithmetic (pos = row*cols + col).
In the std::vector< std::vector<int> > approach the outer vector will allocate memory to store the inner vectors, and each one of those will allocate memory to hold its own elements. Using raw pointers, it is similar in memory layout to:
int **array = new int*[ N ];
for ( int i = 0; i < N; ++i )
array[i] = new int[ M ];
That is:
[ 0 ] -------> [ 0, 1, 2, ... ]
[---]
[ 1 ] -------> [ 0, 1, 2, ... ]
[ . ]
[ . ]
Or basically N+1 separate blocks of memory.

Simple Deque initialization question

I have used the following code to insert some data in a deque.
int data[] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
deque<int> rawData (data, data + sizeof(data) / sizeof(int));
But I dont understand this part of the code,
data + sizeof(data) / sizeof(int)
What does it mean?
Let's take that bit by bit.
data is the iterator showing where to start. It's an array, but in C and C++ arrays decay to pointers on any provocation, so it's used as a pointer. Start taking in data from data on, and continue until the end iterator.
The end iterator is a certain amount past the start iterator, so it can be expressed as data + <something>, where <something> is the length. The start iterator is an int [] that is treated as an int *, so we want to find the length in ints. (In C and C++, pointers increment by the length of the pointed-to type.)
Therefore, sizeof(data) / sizeof(int) should be the length of the array. sizeof(data) is the total size of the array in bytes. (This is one of the differences between arrays and pointers: arrays have a defined size, while pointers point to what might be the start of an array of unknown size.) sizeof(int) is the total size of an int in bytes, and so the quotient is the total size of array in ints.
We want the size of array in ints because array decays into an int *, and so data + x points to the memory location x ints past data. From a beginning and a total size, we find the end of data, and so we copy everything in data from the beginning to the end.
That's a pointer to the imaginary element beyond the last element of the array. The sizeof(data)/sizeof(data[0]) yields the number of elements in data array. deque constructor accepts "iterator to the first element" and "iterator beyond the last element" (that's what end() iterator yields). This construct effectively computes the same as what .end() iterator would yield.