Why can't I do something like this:
int size = menu.size;
int list[size];
Is there any way around this other than using a vector? (Arrays are faster, so I wanted to use arrays.)
Thanks.
The size must be known at compile time, since the compiler needs to know how much stack space to set aside for the array. (Edit: I stand corrected. In C, variable-length arrays can be allocated on the stack; C++, however, does not allow them.)
But, you can create arrays on the heap at run-time:
int* list = new int[size];
Just make sure you free the memory when you're done, or you'll get a memory leak:
delete [] list;
Note that it's very easy to accidentally create memory leaks, and a vector is almost certainly easier to use and maintain. Vectors are quite fast (especially if you reserve() them to the right size first), and I strongly recommend a vector over manual memory management.
In general, it's a better idea to profile your code and find the real bottlenecks than to micro-optimize up front (because the optimizations are not always optimizations).
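For illustration, here is a minimal sketch of the vector approach (the function name example is just illustrative, and size stands in for menu.size from the question):
#include <vector>

void example(int size)       // size comes from menu.size in the question
{
    std::vector<int> list;
    list.reserve(size);      // one allocation up front, no reallocation later
    for (int i = 0; i < size; ++i)
        list.push_back(i);
}   // memory is freed automatically when list goes out of scope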
As others have said, the C++ language designers chose not to allow variable-length arrays (VLAs), even though they are available in C99. However, if you are prepared to do a bit more work yourself, and you are simply desperate to allocate memory on the stack, you can use alloca().
That said, I personally would use std::vector. It is simpler, safer, more maintainable and likely fast enough.
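For the curious, a minimal sketch of the alloca() route. Note that alloca is non-standard and the header varies by platform (e.g. <alloca.h> with glibc, _alloca in <malloc.h> on MSVC):
#include <alloca.h>   // non-standard; platform-specific

void example(int size)
{
    // the memory lives in this function's stack frame and vanishes on return;
    // do NOT delete or free it, and beware of stack overflow for large sizes
    int* list = static_cast<int*>(alloca(size * sizeof(int)));
    for (int i = 0; i < size; ++i)
        list[i] = 0;
}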
C++ does not allow variable length arrays. The size must be known at compile-time. So you can't do that.
You can either use vectors or new.
vector<int> list;
or
int *list = new int[size];
If you go with the latter, you need to free it later on:
delete[] list;
arrays are faster, so I wanted to use arrays
This is not true. Where did you hear this from?
In C++, the length of an array needs to be known at compile time, in order to allocate it on the stack.
If you need to allocate an array whose size you do not know at compile time, you'll need to allocate it on the heap, using operator new[]
int size = menu.size;
int *list = new int[size];
But since you've new'd memory on the heap, you need to ensure you properly delete it when you are done with it.
delete[] list;
AFAIK, in C++03 there are no variable-length arrays (VLAs):
you probably want to do this:
const int size = menu.size;
int list[size]; // works only if menu.size is itself a compile-time constant
or
int *list = new int[size]; // creates the array at runtime; size may be non-const
delete[] list;             // don't forget to free it later
First of all, arrays aren't significantly faster than vectors.
As for the reason why you can't do something like this:
the code would allocate the array on the stack, and compilers have to know its size up front so they can account for it. Hence you can only use compile-time constants with that syntax.
An easy way around this is to create the array on the heap: int* list = new int[size]; Don't forget to delete[] it later on.
However, if you use a vector, reserve the correct size upfront, and compile with optimizations, there should be little to absolutely no overhead.
Since list is allocated on the stack, its size has to be known at compile time. But here, size is not known until runtime, so the size of list is not known at compile time.
Related
I'm new to C++, trying to learn by myself (I've got Java background).
There's this concept of dynamic memory allocation that I can assign to an array (for example) using new.
In C (and also in C++) I've got malloc and realloc that do that. In C++ they've added new for some reason I can't understand.
I've read a lot about the difference between a normal array that goes to the stack while the dynamic allocated array goes to the heap.
So what I understand is that by using new I'm allocating space in the heap which will not be deleted automatically when finished a function let's say, but will remain where it is until I finally, manually free it.
I couldn't find practical examples of using the dynamic memory allocation over the normal memory.
It's said that I can't allocate memory at runtime when using a normal array. Well, I probably didn't understand that right, because when I tried to create a normal array (without new) with a capacity given as input by the user (like arr[input]), it worked fine.
here is what I mean:
int whatever;
cin >> whatever;
int arr2[whatever]; // compiles only as a compiler extension (a C99-style VLA); see the answer below
for (int i = 0; i < whatever; i++) {
    arr2[i] = whatever;
    cout << arr2[i];
}
I didn't really understand why it's called dynamic when the only way of extending the capacity of an array is to copy it to a new, larger array.
I understood that the Vector class (which I haven't learned yet) is much better to use. But still, I can't just leave that gap of knowledge behind, and I must understand why exactly it's called dynamic and why I should use it instead of a normal array.
Why should I bother freeing memory manually when I can't really extend it but only copy it to a new array?
When you know the size of an array at compile time you can declare it like this and it will live on the stack:
int arr[42];
But if you don't know the size at compile time, only at runtime, then you cannot say:
int len = get_len();
int arr[len];
In this case you must allocate the array at runtime, and the array will live on the heap.
int len = get_len();
int* arr = new int[len];
When you no longer need that memory you need to do a delete [] arr.
std::vector is a variable size container that allows you to allocate and reallocate memory at runtime without having to worry about explicitly allocating and freeing it.
int len = get_len();
std::vector<int> v(len); // v has len elements
v.resize(len + 10); // add 10 more elements to the vector
For static allocation, you must specify the size as a constant:
MyObj arrObject[5];
For dynamic allocation, that can be varied at run-time:
MyObj *arrObject = new MyObj[n];
The difference between new and malloc is that new will call the constructor for all the objects in the array, while malloc just gives you raw memory.
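A small sketch of that difference (MyObj here just prints from its constructor so the difference is visible):
#include <cstdlib>
#include <iostream>

struct MyObj {
    MyObj() { std::cout << "constructed\n"; }
};

int main()
{
    MyObj* a = new MyObj[3];  // prints "constructed" three times
    MyObj* b = static_cast<MyObj*>(std::malloc(3 * sizeof(MyObj)));  // prints nothing: raw memory
    delete[] a;    // also runs the destructors
    std::free(b);  // no destructors were ever run
}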
If you want to use an array and you don't know the exact size at compile time, that's when dynamic memory allocation steps in. See the example below:
int a[3] = {1,2,3}; //<= valid in terms of syntax;
however,
int size = 3;
int a[size] = {1,2,3} //<= compile error
in order to fix this,
int* ArrayPtr = new int[size];
Also, when freeing it, call delete[] ArrayPtr; instead of plain delete, because we are freeing a whole block of memory here.
In C (and also in C++) I've got malloc and realloc that do that. In C++ they've added new for some reason I can't understand.
malloc and realloc take the number of bytes to allocate instead of the type you want to allocate, and also don't call any constructors (again, they only know about the size to allocate). This works fine in C (as it really has more of a size system than a type system), but with C++'s much more involved type system, it falls short. In contrast, new is type safe (it doesn't return a void* as malloc does) and constructs the object allocated for you before returning.
It's said that I can't allocate memory at runtime when using a normal array. Well, I probably didn't understand that right, because when I tried to create a normal array (without new) with a capacity given as input by the user (like arr[input]), it worked fine.
This is a compiler extension (and part of C99); it is NOT standard C++. The standard requires that a 'normal' array have a bound which is known at compile time. However, it seems your compiler decided to support variable-length 'normal' arrays anyway.
I didn't really understand why it's called dynamic when the only way of extending the capacity of an array is to copy it to a new, larger array.
It's dynamic in that you don't know the size until run time (and thus it can differ across different invocations). Compile time vs. run time is a distinction you don't often run across in other languages (in my experience, at least), but it is crucial to understanding C++.
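A minimal sketch of that distinction:
#include <iostream>

int main()
{
    const int fixed = 10;
    int a[fixed];        // OK: size is a compile-time constant
    a[0] = 0;

    int n;
    std::cin >> n;
    // int b[n];         // ill-formed in standard C++: size only known at run time
    int* b = new int[n]; // OK: dynamic allocation decides the size at run time
    delete[] b;
}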
int* array = new int[ 10 ]( );
Is this the proper usage of the new operator? To my knowledge the previous code will initialize each element in the array to 0.
int* array = new int[ 10 ];
Does the second line of code just initialize the array, but not set the values to zero?
The proper way to use the new operator depends on whatever you are going to do next after allocating the memory.
int* array = new int[10](); will zero out the memory that you are allocating because it is running the int initializer for each int in the array.
int* array = new int[10]; will not initialize the memory, so the value of each int in the array will be whatever was at the memory address you get from new. It might be zeros if you're lucky, but more than likely it's garbage left there from some other memory request/release.
Generally speaking, you need to treat uninitialized variables as garbage values and not use them before assigning them a value. That is unless you are using it as entropy in a random number generator, but even then it might not be random enough if the memory happens to be too clean. Another rare use case might be snooping on what another program left in memory after closing. Both of these examples are exceptions to the rule.
Usually the best reason NOT to initialize is speed. Setting each item in the array to 0 has a cost, and although it might be small, it can be noticeable if your array is huge or you execute this code frequently. When you KNOW you will be setting those values before you use them, you can save yourself the cost of initializing them needlessly.
Now having said all that, I also agree with the comments that std::vector<int> is usually the better way to go, if for nothing more than the advantage that you don't have to worry about memory leaks (which can cost a lot of debug/development time and should not be underestimated), and you also get a lot of other benefits. Not to mention you can do all the same things with vectors that you could with a regular array - this is because vectors allocate contiguous memory.
std::vector<int> safeArray(10);
int* array = &safeArray[0]; // array now points to the 0th element in safeArray
One thing you lose with std::vector is that you no longer have a choice on whether or not you initialize.
I am using Dev C++ to write a simulation program. For it, I need to declare a single dimensional array with the data type double. It contains 4200000 elements - like double n[4200000].
The compiler shows no error, but the program exits on execution. I have checked, and the program executes just fine for an array having 5000 elements.
Now, I know that declaring such a large array on the stack is not recommended. However, the thing is that the simulation requires me to call specific elements from the array multiple times - for example, I might need the value of n[234] or n[46664] for a given calculation. Therefore, I need an array in which it is easier to sift through elements.
Is there a way I can declare this array on the stack?
No, there is no (we'll say "reasonable") way to declare this array on the stack. You can, however, declare the pointer on the stack and set aside a bit of memory on the heap.
double *n = new double[4200000];
Accessing n[234] of this should be no quicker than accessing n[234] of an array that you declared like this:
double n[500];
Or even better, you could use vectors
std::vector<double> someElements(4200000);
someElements[234]; // equally fast as n[234] from the other examples if you optimize (-O3); the difference on small programs is negligible if you don't (~5%)
Which, if you optimize with -O3, is just as fast as an array, and much safer. With the
double *n = new double[4200000];
solution you will leak memory unless you do this:
delete[] n;
And in the presence of exceptions and early returns, that is a very unsafe way of doing things.
You can increase your stack size. Try adding these options to your link flags:
-Wl,--stack,36000000
That might be too large, though (I'm not sure whether Windows places an upper limit on stack size). But in reality you shouldn't do this even if it works. Use dynamic memory allocation, as pointed out in the other answers.
(Weird, writing an answer and hoping it won't get accepted... :-P)
Yes, you can declare this array on the stack (with a little extra work), but it is not wise.
There is no justifiable reason why the array has to live on the stack.
The overhead of dynamically allocating a single array once is negligible (you could say "zero"), and a smart pointer will safely take care of not leaking the memory, if that is your concern.
Stack allocated memory is not in any way different from heap allocated memory (apart from some caching effects for small objects, but these do not apply here).
Insofar, just don't do it.
If you insist that you must allocate the array on the stack, you will need to reserve 32 megabytes of stack space first (preferably a bit more). With Dev-C++ (which presumes Windows + MinGW), you will either need to set the reserved stack size for your executable using linker flags such as -Wl,--stack,34000000 (this reserves somewhat more than 32 MiB), or create a thread (which lets you specify a reserved stack size for that thread).
But really, again, just don't do that. There's nothing wrong with allocating a huge array dynamically.
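For illustration, a minimal sketch of the smart-pointer variant mentioned above (assuming C++14 for std::make_unique):
#include <memory>

int main()
{
    // heap allocation with automatic cleanup; no delete[] needed
    auto n = std::make_unique<double[]>(4200000);
    n[234] = 1.0;
    n[46664] = 2.0;
}   // memory is released here automatically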
Are there any reasons you want this on the stack specifically?
I'm asking because the following will give you a construct that can be used in a similar way (especially accessing values using array[index]), but it is a lot less limited in size (the total maximum depends on the 32-bit/64-bit memory model and on available memory, RAM plus swap) because it is allocated from the heap.
int arraysize = 4200000;
int *heaparray = new int[arraysize];
...
k = heaparray[456];
...
delete [] heaparray;
return;
For instance:
int* pArray;
pArray = new array[];
instead of:
int* pArray;
pArray = new array[someNumber];
Since pointers are able to dynamically change the size of an array at run time, and the name of the pointer points to the first element of an array, shouldn't the default size be [1]? Does anyone know what's happening behind the scenes?
Since pointers are able to dynamically change the size of an array at run time
This is not true. They can't change the size unless you allocate a new array with the new size.
If you want to have an array-like object that dynamically changes the size you should use the std::vector.
#include <vector>
#include <iostream>
...
std::vector<int> array;
array.push_back(1);
array.push_back(2);
array.push_back(3);
array.push_back(4);
std::cout << array.size() << std::endl; // should be 4
When you create an array with new, you are allocating a specific amount of memory for that array. You need to tell it how many items are to be stored so it can allocate enough memory.
When you "resize" the array, you are creating a new array (one with even more memory) and copying the items over before deleting the old array (or else you have a memory leak).
Quite simply, C++ arrays have no facility to change their size automatically. Therefore, when allocating an array you must specify its size.
Pointers cannot change an array. They can be made to point to different arrays at runtime, though.
However, I suggest you stay away from anything involving new until you have learned more about the language. For arrays changing their size dynamically use std::vector.
Pointers point to dynamically allocated memory. The memory is on the heap rather than the stack. It is dynamic because you can call new and delete on it, adding to it and removing from it at run time (in simple terms). The pointer has nothing to do with that - a pointer can point to anything and in this case, it just happens to point to the beginning of your dynamic memory. The resizing and management of that memory is completely your responsibility (or the responsibility of the container you may use, e.g. std::vector manages dynamic memory and acts as a dynamic array).
They cannot change the size dynamically. You can get the pointer to point to a new allocation of memory from the heap.
Behind the scenes there is memory allocated; a little chunk of silicon somewhere in your machine is now dedicated to the array you just newed.
When you want to "resize" your array, it is only possible to do so in place if the chunk of silicon has some free space around it. Most of the time, it is instead necessary to reserve another, bigger chunk and copy the data that were in the first... and obviously relinquish the first (otherwise you have a memory leak).
This is done automatically by STL containers (like std::vector or std::deque), but manually when you yourself call new. Therefore, the best solution to avoid leaks is to use the Standard Library instead of trying to emulate it yourself.
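For illustration, a minimal sketch of that grow-and-copy step, which is roughly what std::vector does for you internally (the helper name grow is just illustrative):
#include <algorithm>

int* grow(int* old, int oldSize, int newSize)
{
    int* bigger = new int[newSize];         // reserve another, bigger chunk
    std::copy(old, old + oldSize, bigger);  // copy the data that were in the first
    delete[] old;                           // relinquish the first chunk
    return bigger;
}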
int *pArray = new int; can be considered an array of size 1, and it kind of does what you want "by default".
But what if I need an array of 10 elements?
Pointers do not have any magical abilities; they just point to memory. Therefore:
pArray[5] = 10; is undefined behaviour and will just yield a run-time error (if you are lucky).
That is why there is a way to allocate an array of the needed size by calling new type[size].
I was always wondering if there is an operator for deleting multi-dimensional arrays in the standard C++ language.
If we have created a pointer to a single dimensional array
int *array = new int[size];
the delete looks like:
delete [] array;
That's great. But if we have a two-dimensional array, we cannot do
delete [][] twoDimensionalArray;
Instead, we should loop and delete the items, like in this example.
Can anybody explain why?
Technically, there aren't two dimensional arrays in C++. What you're using as a two dimensional array is a one dimensional array with each element being a one dimensional array. Since it doesn't technically exist, C++ can't delete it.
Because there is no way to call
int **array = new int[dim1][dim2];
All news/deletes must be balanced, so there's no point to a delete [][] operator.
new int[dim1][dim2] returns a pointer to the first of dim1 elements, each of type int[dim2]. So dim2 must be a compile-time constant. This is similar to allocating multi-dimensional arrays on the stack.
The reason delete is called multiple times in that example is because new is called multiple times too. Delete must be called for each new.
For example, if I allocate 1,000,000 bytes of memory, I cannot later delete just the bytes from 200,000 - 300,000; it was allocated as one whole chunk and must be freed as one whole chunk.
The reason you have to loop, like in the example you mention, is that the number of arrays that needs to be deleted is not known to the compiler / allocator.
When you allocated your two-dimensional array, you really created N one-dimensional arrays. Now each of those has to be deleted, but the system does not know how many of them there are. The size of the top-level array, i.e. the array of pointers to your second-level arrays, is just like any other array in C: its size is not stored by the system.
Therefore, there is no way to implement delete [][] as you describe (without changing the language significantly).
I'm not sure of the exact reason from a language-design perspective; I'm guessing it has something to do with the fact that when allocating the memory you are creating an array of arrays, and each one needs to be deleted.
int** mArr = new int*[10];
for (int i = 0; i < 10; i++)
{
    mArr[i] = new int[10];
}
My C++ is rusty, so I'm not sure if that's syntactically correct, but I think it's close.
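And for completeness, the matching cleanup for the snippet above (one delete[] per new, so eleven in total):
for (int i = 0; i < 10; i++)
{
    delete[] mArr[i];  // free each inner array first
}
delete[] mArr;         // then free the array of pointers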
While all these answers are relevant, I will try to explain where the expectation comes from that something like delete[][] array; might work on dynamically allocated arrays, and why it's not possible:
The syntax int array[ROWS][COLS]; allowed for statically allocated arrays is just an abstraction for programmers; in reality it creates a one-dimensional array int array[ROWS*COLS];. During compilation (when the dimension sizes COLS and ROWS must, by the standard, be constants) the compiler also remembers the sizes of those dimensions, which are needed later to address elements using syntax such as array[x][y] = 45. Knowing these sizes, the compiler replaces [x][y] with the corresponding index into the one-dimensional array, using simple math: [COLS*x + y].
On the other hand, this is not the case with dynamically allocated arrays, if you want the same multi-dimensional functionality (in fact, the same notation). Since their sizes can be determined at runtime, they would have to remember the size of each additional dimension for later use as well - and remember it for the whole lifetime of the array. Moreover, the system would have to be changed to treat such arrays as genuinely multi-dimensional, keeping the [x][y] access notation meaningful at runtime instead of replacing it with one-dimensional indexing during compilation.
Therefore the absence of array = new int[ROWS][COLS] implies no need for delete[][] array;. And as already mentioned, it couldn't be used on your example anyway, because your sub-arrays (the additional dimensions) are allocated separately (using separate new calls), so they are independent of the top-level array (array_2D) that contains them, and they can't all be deleted at once.
delete[] applies to any non-scalar (array).
You can use a wrapper class to do all those things for you.
Working with "primitive" data types is usually not a good solution (the arrays should be encapsulated in a class); std::vector is a very good example of a class that does this.
delete should be called exactly as many times as new is called. Because you cannot write a = new X[a][b], you also cannot write delete [][] a.
Technically it's a good design decision preventing the appearance of weird initialization of an entire n-dimensional matrix.
Well, I think it would be easy to implement, but too dangerous. It is easy to tell whether a pointer was created by new[], but hard to tell for new[]...[] (if that were allowed).