1D dynamic array, parentheses - c++

Let's say I have a 1D dynamic array, that I want to fill with fibonacci numbers.
User enters size as 15, so I want first 15 fibonacci numbers.
So my question is this:
int* arr = new int [size];
and this
int* arr = new int [size]{};
What does {} do, and what is the difference?

Effectively, the {} version ensures that all values are initialized (ints are initialized to zero), while otherwise the array might be filled with memory garbage: plain new just hands you a memory block without doing anything to it. As a result, the initializing version might be a tad slower, but that will only matter if you call this very often or with megabyte-sized arrays.
Note that in debug mode you will often get a clean array anyway.
More details:
Whether it's () or {} does not matter in this case, as both can be used for value-initializing types or classes that have no custom constructor.
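A minimal sketch putting the two forms side by side (the Fibonacci fill is my addition, matching the question's size of 15):

#include <iostream>

int main() {
    int size = 15;

    int* a = new int[size];   // default-initialized: the ints hold indeterminate values
    int* b = new int[size]{}; // value-initialized: every int starts at zero

    // Fill with the first `size` Fibonacci numbers; b[0] is already 0.
    if (size > 1) b[1] = 1;
    for (int i = 2; i < size; ++i)
        b[i] = b[i - 1] + b[i - 2];

    for (int i = 0; i < size; ++i)
        std::cout << b[i] << ' ';
    std::cout << '\n'; // prints 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

    delete[] a; // reading `a` before writing to it would be undefined behavior
    delete[] b;
}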


3D pointer seg error

I want to stitch 2D arrays together into a 3D array. First I defined the 3D array in the following way:
int ***grid;
grid=new int **[number];
then I want to assign the 2D arrays to the 3D construct
for(i=0;i<number;i++)
grid[i]=rk4tillimpact2dens(...);
with
int** rk4tillimpact2dens(...
...
static int** grid;
grid=new int*[600];
for(i=0;i<600;i++)
grid[i]=new int[600];
memset(grid,0x0,sizeof(grid));
...
return(grid);
}
So far no problem, everything works fine, but when I want to access the 3D array afterwards I get a seg fault, e.g. like this:
printf("%d",grid[1][1][1]);
What is my mistake?
Best,
Hannes
Oh, sorry, it was a typo in my question. I did
printf("%d",grid[1][1][1]);
it's not working :(. But even
printf("%d",&grid[1][1][1]);
or
printf("%d",*grid[1][1][1]);
would not work. The strange thing is that there are no errors unless I try to access the array.
First, you wipe out the very first row pointer of each matrix with that memset (the actual row is leaked), since sizeof(grid) is only the size of a pointer, not of the matrix. While technically grid[1][1][1] should still be readable, the corruption probably shows up in some other place.
Can you provide a minimal verifiable example? This is likely to solve your problem.
To clear out the memory allocated for grid, you can't do the whole NxN matrix with one memset; it isn't contiguous memory. Since each row is allocated as a separate memory block, you need to clear them individually.
for(i = 0; i < 600; i++) {
    grid[i] = new int[600];
    memset(grid[i], 0, sizeof(int) * 600);
}
The 600 value should be a named constant, and not a hardcoded number.
And grid does not need to be a static variable.
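Putting those suggestions together, a minimal sketch of the corrected allocation (the names GRID_DIM and allocate2dGrid are mine, not from the original code):

#include <cstring>

const int GRID_DIM = 600; // named constant instead of the hardcoded 600

int** allocate2dGrid() {
    int** grid = new int*[GRID_DIM];      // no static needed
    for (int i = 0; i < GRID_DIM; i++) {
        grid[i] = new int[GRID_DIM];      // each row is its own block...
        std::memset(grid[i], 0, sizeof(int) * GRID_DIM); // ...so zero it individually
    }
    return grid;
}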
Printing out the address
printf("%p",&grid[1][1][1]);
You are printing the address here. That's why you don't see the value you expect.
printf("%d",grid[1][1][1]);
This will print the array element.
And to read input from stdin you use scanf(), which requires you to pass the address of a variable.
scanf("%d",&grid[1][1][1]);
Zeroing out the allocated memory
Also, you can't get the size of a dynamically allocated array using sizeof. So to initialize it with 0, you use memset on each chunk that was allocated at once with a new.
In your case, as 1201ProgramAlarm pointed out, that would be:
for(int i = 0; i < 600; i++){
    ...
    memset(grid[i], 0, sizeof(int) * 600);
}
There is another way you can initialize allocated memory in C++:
grid[i]=new int[600]();
For example:
int** rk4tillimpact2dens(...
...
static int** grid;
grid=new int*[600];
for(i=0;i<600;i++)
grid[i]=new int[600]();
...
return(grid);
}
Do you expect memset(grid,0x0,sizeof(grid)); not to zero the pointer values you've just assigned to grid[0] through to grid[599]? If so, you should test that theory by inspecting the pointer values of grid[0] through to grid[599] before and after that call to memset, to find out what memset does to true (more on that later) arrays.
Your program is dereferencing a null pointer which results directly from that line of code. Typically, a crash can be expected when you attempt to dereference a null pointer, because null pointers don't reference any objects. This explains your observation of a crash, and your observation of the crash disappearing when you comment out that call to memset. You can't expect good things to happen if you try to use the value of something which isn't an object, such as grid[1][... where grid[1] is a pointer consisting entirely of zero bits.
The term 3D array doesn't mean what you think it means, by the way. Arrays in C and C++ are a single allocation, whereas what your code produces is multiple allocations associated in a hierarchical form; you've allocated a tree as opposed to an array, and memset isn't appropriate to zero a tree. Perhaps your experiments could be better guided from this point on by a book on algorithms, such as Algorithms in C, Parts 1-4 by Robert Sedgewick.
For the meantime, in C, the following will get you a pointer to a 2D array which you can mostly use as though it's a 3D array:
void *make_grid(size_t x, size_t y, size_t z) {
int (*grid)[y][z] = malloc(x * sizeof *grid);
/* XXX: use `grid` as though it's a 3D array here.
* i.e. grid[0][1][2] = 42;
*/
return grid;
}
Assuming make_grid returns something non-null, you can zero the entire array it points to with a single call to memset, because a single call to malloc is matched by a single call to memset. Otherwise, if you want to zero a tree, you'll probably want to call memset n times for n items.
In C++, I don't think you'll find many who discourage the use of std::vector in place of arrays. You might want to at least consider that option, as well as the others you have. It seems like you want a tree, which is fine: trees have perfectly appropriate use cases that arrays aren't valid for, and you haven't given us enough context to tell which would be most appropriate for you.
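As a minimal sketch of the std::vector route, using one flat, contiguous allocation with manual indexing (Grid3D and its members are hypothetical names, not from the question):

#include <cstddef>
#include <vector>

// One contiguous, zero-initialized allocation indexed as i/j/k.
struct Grid3D {
    std::size_t nx, ny, nz;
    std::vector<int> data;

    Grid3D(std::size_t x, std::size_t y, std::size_t z)
        : nx(x), ny(y), nz(z), data(x * y * z, 0) {}

    int& at(std::size_t i, std::size_t j, std::size_t k) {
        return data[(i * ny + j) * nz + k];
    }
};

// Usage: Grid3D grid(number, 600, 600);
//        grid.at(1, 1, 1) = 42;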

How do I reinitialize a multidimensional array efficiently in Qt/C++?

In Qt, I have a 3-D int-array (say ID[x][y][z]) which I need to set back to 0 during computation.
Is there an efficient way to do it without using a loop?
I need the reinitialization because I am running a specific algorithm with a simple cost-function to get an estimation for the following more detailed computation, and I want to use the same data structure. Simply overwriting the array is not an option, because the algorithm reads and checks entries before writing them.
Sooner or later there's going to be a loop, but you can delegate it to another, more optimized function.
Also, if the "3D array" is really an array of arrays of arrays of some basic type (like int or char), then all the memory is contiguous, so you can clear all of it with a single function call.
Now, which function to use? In C++ there are basically two options: the old C memset function and the C++ std::fill function. Both should work fine, with proper casting and size, to set all of the data to a specific value.
Under the hood it will (almost) always be a loop. Don't worry, a loop without branches is quite efficient.
If you want something more readable, you can use a function like memset or std::fill, which hides the loop.
Have a look at the answers to this question: What's the safe way to fill multidimensional array using std::fill?
The users #JoachimPileborg and #GeorgSchölly have already explained in their answers which functions can be used to reinitialize an array. However, I'd like to add some sample code and explain a difference between std::fill() and memset().
I assume that you want to (re)initialize your array with 0. In this case you can do it as shown in the following code:
#include <algorithm> // For std::fill().
#include <cstring>   // For memset().

int main(int argc, char* argv[])
{
    const int x = 2;
    const int y = 3;
    const int z = 5;
    const int arraySize = x * y * z;

    // Initialize array with 0.
    int ID[x][y][z] = {};

    // Use the array...

    // Either reinitialize array with 0 using std::fill()...
    std::fill(&ID[0][0][0], &ID[0][0][0] + arraySize, 0);

    // ...or reinitialize array with 0 using memset().
    memset(&ID[0][0][0], 0, sizeof(ID));

    // Reuse the array...

    return 0;
}
However, if you want to initialize your array with another value (for example, 1), then you have to be aware of a crucial difference between std::fill() and memset(): the function std::fill() sets each element of your array to the specified value, whereas memset() sets each byte of your array to the specified value. This is why memset() takes the array size as a number of bytes (sizeof(ID)).
If you initialize your array with 0, this difference doesn't cause a problem.
But, if you want to initialize each element of your array with a non-zero value, then you have to use std::fill(), because memset() will yield the wrong result.
Let's assume you want to set all elements of your array to 1, then the following line of code will yield the expected result:
std::fill(&ID[0][0][0], &ID[0][0][0] + arraySize, 1);
However, if you use memset() in the following way, then you get a different result:
memset(&ID[0][0][0], 1, sizeof(ID));
As explained above, this line of code sets every byte to 1. Therefore, each integer element of your array will be set to 16843009, because this equals the hexadecimal value 0x01010101.
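You can verify this with a single int; a minimal sketch, assuming a 4-byte int:

#include <cstdio>
#include <cstring>

int main() {
    int x;
    std::memset(&x, 1, sizeof(x)); // every byte of x becomes 0x01
    std::printf("%d\n", x);        // prints 16843009 (0x01010101) where int is 4 bytes
}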

What should I do to get the size of a 'dynamic' array? [duplicate]

I have this code.
int x[5];
printf("%d\n",sizeof(x) );
int *a;
a = new int[3];
printf("%d\n",sizeof(*a));
When I pass a 'static' array to sizeof(), it returns the dimension of the declared array multiplied by the number of bytes that the datatype uses in memory. However, a dynamic array seems to be different. My question is: what should I do to get the size of a 'dynamic' array?
PS: Could it be related to the following?
int *a;
a=new int[3];
a[0]=3;
a[1]=4;
a[2]=5;
a[3]=6;
Why can I modify position 3 if I supposedly set a 'limit' with "a=new int[3]"?
When I pass a 'static' array to sizeof(), it returns the dimension of the declared array multiplied by the number of bytes that the datatype uses in memory.
Correct, that is how the size of the entire array is computed.
However, a dynamic array seems to be different.
This is because you are not passing a dynamic array; you are passing a pointer. A pointer is a data type whose size is independent of the size of the memory block to which it may point, hence you always get a constant value. When you allocate memory for your dynamically sized memory block, you need to store the size of the allocation for future reference:
size_t count = 123; // <<== You can compute this count dynamically
int *array = new int[count];
cout << "Array size: " << (sizeof(*array) * count) << endl;
Runtime-sized arrays were proposed for C++14, but the proposal was not adopted; standard C++ has no variable-length arrays, so sizeof will never report a runtime size.
Could it be related to the following? [...]
No, it is unrelated. Your code snippet shows undefined behavior (writing past the end of the allocated block of memory), meaning that your code is invalid. It could crash right away, lead to a crash later on, or exhibit other arbitrary behavior.
In C++ arrays do not have any intrinsic size at runtime.
At compile time one can use sizeof as you showed in order to obtain the size known to the compiler, but if the actual size is not known until runtime then it is the responsibility of the program to keep track of the length.
Two popular strategies are:
Keep a separate variable that contains the current length of the array.
Add an extra element to the end of the array that contains some sort of marker value indicating that it's the last element. For example, if your array is known to contain only positive integers, then you could use -1 as your marker (a short sketch of this follows below).
If you do not keep track of the end of your array and you write beyond what you allocated then you risk overwriting other data stored adjacent to the array in memory, which could cause crashes or other undefined behavior.
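A minimal sketch of the marker-value strategy from the list above (the values are illustrative):

#include <iostream>

int main() {
    // One extra slot; -1 marks the end (valid because the data is positive).
    int* arr = new int[3 + 1]{3, 4, 5, -1};

    int length = 0;
    while (arr[length] != -1) ++length; // recover the length at runtime
    std::cout << "length = " << length << '\n'; // prints 3

    delete[] arr;
}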
Other languages tend to use the former strategy and provide a mechanism for obtaining the current record of the length. Some languages also allow the array to be dynamically resized after it's created, which usually involves creating a new array and copying over all of the data before destroying the original.
The vector type in the standard library provides an abstraction over arrays that can be more convenient when the size of the data is not known until runtime. It keeps track of the current array size, and allows the array to grow later. For example:
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a;
    a.push_back(3);
    a.push_back(4);
    a.push_back(5);
    a.push_back(6);
    printf("%d\n", a.size());
    return 0;
}
As a side-note, since a.size() (and sizeof(...)) returns a size_t, which isn't necessarily the same size as an int (though it happens to be on some platforms), using printf with %d is not portable. Instead, one can use iostream, which is also more idiomatic C++:
#include <iostream>
std::cout << a.size() << '\n';
You do not, at least not in standard C++. You have to keep track of it yourself, use an alternative to raw pointers such as std::vector that keeps track of the allocated size for you, or use a non-standard function such as _msize (https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/msize?view=msvc-160) on Microsoft Windows or malloc_size (https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/malloc_size.3.html) on Mac OS X.
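For the keep-track-of-it-yourself route, a minimal sketch (SizedArray and makeArray are hypothetical names of mine):

#include <cstddef>

// The pointer alone carries no size information, so pair the two yourself.
struct SizedArray {
    int*        data;
    std::size_t size;
};

SizedArray makeArray(std::size_t n) {
    return SizedArray{new int[n](), n};
}

// Usage: SizedArray a = makeArray(3);
//        ... a.data[0] through a.data[a.size - 1] ...
//        delete[] a.data;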

Defining Array C/C++

What is the difference between these two array definitions, which one is more correct, and why?
#include <stdio.h>
#define SIZE 20
int main() {
    // definition method 1:
    int a[SIZE];
    // end definition method 1.

    // definition method 2:
    int n;
    scanf("%d", &n);
    int b[n];
    // end definition method 2.

    return 0;
}
I know that if we read the size n from stdin, it is considered more correct to define our array (the block of memory we'll be using) as a pointer and use stdlib.h with array = malloc(n * sizeof(int)), rather than declaring it as int array[n], but again: why?
It's not "more correct" or "less correct". It either is xor isn't correct. In particular, this works in C, but not in C++.
You are declaring a variable-length array there. A better way to declare a dynamic array is:
int *arr; // int * type is just for simplicity
arr = malloc(n * sizeof(int)); /* note: sizeof(int), not sizeof(int*) */
This is because variable-length arrays are only allowed in C99; you can't use them in C89/C90.
In (pre-C99) C and C++, all types are statically sized. This means that arrays must be declared with a size that is both constant and known to the compiler.
Now, many C++ compilers offer dynamically sized arrays as a nonstandard extension, and C99 explicitly permits them. So int b[n] will most likely work if you try it. But in some cases, it will not, and the compiler is not wrong in those cases.
If you know SIZE at compile-time:
int ar[SIZE];
If you don't:
std::vector<int> ar;
I don't want to see malloc anywhere in your C++ code. However, you are fundamentally correct and for C that's just what you'd do:
int* ptr = malloc(sizeof(int) * SIZE);
/* ... */
free(ptr);
Variable-length arrays are a GCC extension that allow you to do:
int ar[n];
but I've had issues where VLAs were disabled but GCC didn't successfully detect that I was trying to use them. Chaos ensues. Just avoid it.
Q1: The first definition is a static array declaration, perfectly correct. It is for when you know the size up front, so there is no comparison with a VLA or malloc().
Q2: Which is better when taking the size as input from the user: a VLA or malloc?
VLA: Limited by the environment's bounds on the size of automatic allocation, and automatic variables are usually allocated on the stack, which is relatively small; the limit is platform specific. Also, VLAs exist only in C99 and above. They do make declaring multidimensional arrays a little easier.
malloc: Allocates from the heap, so it is definitely better for large sizes. For multidimensional arrays, pointers are involved, so the implementation is a bit more complex.
Check http://bytes.com/topic/c/answers/578354-vla-feature-c99-vs-malloc
I think that method 1 could be a little bit faster, but both of them are correct in C.
In C++ the first will work, but if you want to use the second you should use:
int size = 5;
int * array = new int[size];
and remember to delete it:
delete [] array;
I think malloc gives you more options while coding.
If you use malloc or another form of dynamic allocation, you get a pointer; you can still index it like an array (p[n], or equivalently *(p + n)), but you must remember to free it. An automatic array does not need to be freed.
And in C++ we can define a user-defined class to do such things; in the STL, std::vector does the array things, and much more.
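A minimal sketch of those differences in access and cleanup (in C++ you would normally prefer new[]/delete[] or std::vector over malloc):

#include <cstdlib>

int main() {
    int n = 5;

    int* p = (int*)std::malloc(n * sizeof(int));
    if (p != nullptr) {
        p[2] = 42;      // indexing works on the pointer too
        *(p + 3) = 7;   // equivalent pointer arithmetic
        std::free(p);   // dynamic allocation must be freed by you
    }

    int arr[5];
    arr[2] = 42;        // automatic array: released when it goes out of scope
}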
Both are correct. The declaration you use depends on your code.
The first declaration, i.e. int a[SIZE];, creates an array with a fixed size of 20 elements.
It is helpful when you know the exact size of the array that will be used in the code, for example when generating the multiplication table of a number n up to its 20th multiple.
The second declaration allows you to make an array of the size that you desire.
It is helpful when you need an array of a different size each time the code is executed, for example when you want to generate the Fibonacci series up to n: the array must then hold n elements for each value of n. Say n = 5; int a[20] wastes memory, because only the first five slots are used for the Fibonacci series and the rest sit empty. Similarly, if n = 25, int a[20] becomes too small.
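For instance, a minimal sketch of the Fibonacci case sized exactly to n (in C++; in C you would use malloc/free instead of new/delete):

#include <iostream>

int main() {
    int n;
    if (!(std::cin >> n) || n <= 0) return 1;

    int* fib = new int[n]{}; // exactly n slots, value-initialized to zero
    if (n > 1) fib[1] = 1;
    for (int i = 2; i < n; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];

    for (int i = 0; i < n; ++i)
        std::cout << fib[i] << ' ';
    std::cout << '\n';

    delete[] fib;
}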
The difference when you define the array using malloc is that you can choose the size dynamically, i.e. at run time, from a value your program only receives while running.
One more difference is that arrays created using malloc are allocated space on the heap, so they are preserved across function calls, unlike automatic arrays on the stack.
Example:
#include<stdio.h>
#include<stdlib.h>

int main()
{
    int n;
    int *a;
    scanf("%d", &n);
    a = (int *)malloc(n * sizeof(int));
    /* ... use a ... */
    free(a); /* release the allocation when done */
    return 0;
}

C++ Why is this passed-by-reference array generating a runtime error?

void pushSynonyms (string synline, char matrizSinonimos [1024][1024]){
    stringstream synstream(synline);
    vector<int> synsAux;
    int num;
    while (synstream >> num) { synsAux.push_back(num); }
    int index = 0;
    while (index < (synsAux.size() - 1)) {
        int primerSinonimo = synsAux[index];
        int segundoSinonimo = synsAux[++index];
        matrizSinonimos[primerSinonimo][segundoSinonimo] = 'S';
        matrizSinonimos[segundoSinonimo][primerSinonimo] = 'S';
    }
}
and the call:
char matrizSinonimos[1024][1024];
pushSynonyms("1 7", matrizSinonimos);
It's important for me to pass matrizSinonimos by reference.
Edit: took away the & from &matrizSinonimos.
Edit: the runtime error is:
An unhandled win32 exception occurred in program.exe [2488].
What's wrong with it
The code as you have it there: I can't find a bug. The only problem I spot is that if you provide no numbers at all, then this part will cause harm:
(synsAux.size()-1)
It will subtract one from 0u, and that will wrap around, because size() returns an unsigned integer type. You will end up with a very big value, close to 2^32 or 2^64. You should change the whole while condition to
while ((index+1) < synsAux.size())
You can try looking for a bug around the call side. Often it happens there is a buffer overflow or heap corruption somewhere before that, and the program crashes at a later point in the program as a result of that.
The argument and parameter stuff in it
Concerning the array and how it's passed, I think you are doing it right, although you don't actually pass the array by value. Maybe you already know it, but I will repeat it: you really pass a pointer to the first element of this array:
char matrizSinonimos[1024][1024];
A 2D array really is an array of arrays. The first element of that array is an array, and a pointer to it is a pointer to an array. In that case, it is
char (*)[1024]
Even though in the parameter list you said that you accept an array of arrays, the compiler, as always, adjusts that and make it a pointer to the first element of such an array. So in reality, your function has the prototype, after the adjustments of the argument types by the compiler are done:
void pushSynonyms (string synline, char (*matrizSinonimos)[1024]);
Although often suggested, you cannot pass that array as a char**, because the called function needs the size of the inner dimension to correctly address sub-dimensions at the right offsets. If you work with a char** in the called function and then write something like matrizSinonimos[0][1], it will try to interpret the first sizeof(char**) characters of that array as a pointer and dereference a random memory location, then do that a second time, if it didn't crash in between. Don't do that. It's also not relevant which size you wrote in the outer dimension of that array; it is rationalized away. Now, it's not really important to pass the array by reference. But if you want to, you have to change the whole thing to
void pushSynonyms (string synline, char (&matrizSinonimos)[1024][1024]);
Passing by reference does not pass a pointer to the first element: All sizes of all dimensions are preserved, and the array object itself, rather than a value, is passed.
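A minimal sketch showing what each form preserves (byPointer and byReference are my names; the 1024s match the question):

#include <cstdio>

void byPointer(char (*m)[1024]) {           // what the question's signature decays to
    std::printf("%zu\n", sizeof(*m));       // 1024: only the inner dimension is known
}

void byReference(char (&m)[1024][1024]) {   // the full array type is preserved
    std::printf("%zu\n", sizeof(m));        // 1048576: the whole array
}

int main() {
    static char matrizSinonimos[1024][1024]; // static so it doesn't eat the stack
    byPointer(matrizSinonimos);
    byReference(matrizSinonimos);
}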
Arrays are passed as pointers - there's no need to do a pass-by-reference to them. If you declare your function to be:
void pushSynonyms(string synline, char matrizSinonimos[][1024]);
Your changes to the array will persist - arrays are never passed by value.
The exception is probably 0xC00000FD, or a stack overflow!
The problem is that you are creating a 1 MB array on the stack, which probably is too big.
try declaring it as:
void pushSynonyms (const string & synline, char *matrizSinonimos[1024] )
I believe that will do what you want to do. The way you have it, as others have said, creates a 1MB array on the stack. Also, changing synline from string to const string & eliminates pushing a full string copy onto the stack.
Also, I'd use some sort of class to encapsulate matrizSinonimos. Something like:
class ms
{
    char m_matrix[1024][1024];
public:
    void pushSynonyms(const string& synline);
};
then you don't have to pass it at all.
I'm at a loss for what's wrong with the code above, but if you can't get the array syntax to work, you can always do this:
void pushSynonyms (string synline, char *matrizSinonimos, int rowsize, int colsize)
{
    // given indices a and b, the code below is equivalent to
    // char c = matrizSinonimos[a][b];
    char c = matrizSinonimos[a * rowsize + b];
    // you could also Assert( a < rowsize && b < colsize );
}
pushSynonyms("1 7", &matrizSinonimos[0][0], 1024, 1024);
You could also replace rowsize and colsize with a #define SYNONYM_ARRAY_DIMENSION 1024 if it's known at compile time, which will make the multiplication step faster.
(edit 1) I forgot to answer your actual question. Well: after you've corrected the code to pass the array in the correct way (no incorrect indirection anymore), it seems most probable to me that you did not check your inputs correctly. You read from a stream and save the numbers into a vector, but you never check whether they are actually in the correct range. (end edit 1)
First:
Using raw arrays may not be what you actually want. There are std::vector and boost::array. The latter is a compile-time fixed-size array like a raw array, but it provides the C++ collection typedefs and methods, which is practical for generic (read: templatized) code.
And using those classes, there is less room for confusion about type-safety and about passing by reference, by value, or via pointer.
Second:
Arrays are passed as pointers; the pointer itself is passed by value.
Third:
You should allocate such big objects on the heap. The overhead of the heap-allocation is in such a case insignificant, and it will reduce the chance of running out of stack-space.
Fourth:
void someFunction(int array[10][10]);
really is:
(edit 2) Thanks to the comments: not
void someFunction(int** array);
but
void someFunction(int (*array)[10]);
Hopefully I didn't screw up elsewhere....
(end edit 2)
The type information that it is a 10x10 array is lost. To get what you probably meant, you need to write:
void someFunction(int (&array)[10][10]);
This way the compiler can check that on the caller side the array is actually a 10x10 array. You can then call the function like this:
int main() {
    int array[10][10] = { 0 };
    someFunction(array);
    return 0;
}