I want to create an array in Visual Studio as it is possible in C99:
int function(int N)
{
int x[N];
return 1;
}
But even VS2013 doesn't support this. Is there another way (except using new and delete) to create x without the need to fix N at translation time?
Thank you very much!
The idiomatic way of doing this in C++ is to use std::vector:
#include <vector>
int function(int N)
{
std::vector<int> x(N);
return 1;
}
If you know that N won't be enormous (and since you seem to want this memory on the stack anyway), you can use _alloca, a non-standard CRT function; see Microsoft's documentation for it.
#include <malloc.h>
int function(int N)
{
int *x = static_cast<int*>(_alloca(sizeof(int) * N));
return 1;
}
This isn't really recommended, and has been superseded by _malloca, which requires a call to _freea after you're done with the memory.
Note that both versions will free the memory when the function exits, but the first version will allocate memory on the heap, whereas the second will allocate on the stack (and therefore has a much stricter upper limit on N). Besides the convenience and low-overhead of the std::vector class, using C-style pointers in a C++ program is kind of a downer anyway. :)
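For reference, the _malloca/_freea pairing mentioned above would look roughly like this (MSVC-specific sketch; whether the memory actually lands on the stack depends on the requested size):
#include <malloc.h>
int function(int N)
{
    int *x = static_cast<int*>(_malloca(sizeof(int) * N));
    // ... use x ...
    _freea(x);   // required with _malloca, unlike _alloca which needs no explicit free
    return 1;
}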
In order for an array in C++ to be defined on the stack (i.e. without new), its size must be known at compile time, for example a const int or a preprocessor macro. Since that is not the case here, you can use the following instead:
#include <vector>
using namespace std;
// ...
int function(int n)
{
vector<int> x (n);
return 1;
}
There is no way to define a variable-length array on the stack in standard C++. But you can use std::vector instead, though it allocates its memory on the heap.
Related
I need to make a big array in one task (more than 10^7 elements).
What I found is that if I declare it inside main, the code doesn't work (the program exits before reaching the cout with "Process returned -1073741571 (0xC00000FD)").
If I declare it outside main, everything works.
(I am using Code::Blocks 17.12)
// doesn't work
#include <bits/stdc++.h>
using namespace std;
const int N = 1e7;
int main() {
int a[N];
cout << 1;
return 0;
}
// will work
#include <bits/stdc++.h>
using namespace std;
const int N = 1e7;
int a[N];
int main() {
cout << 1;
return 0;
}
So I have questions:
- Why does this happen?
- What can I do to define the array inside int main()? (Actually, if I use a vector of the same size inside int main(), everything works, which seems strange.)
There are four main types of memory which are interesting to C++ programmers: the stack, the heap, static memory, and registers.
In
const int N = 1e7;
int main(){int a[N];}
stack memory is used.
This type of memory is usually much more limited in size than the heap or static memory. For that reason, the error code is returned.
The operator new (or another function that allocates memory on the heap) is needed in order to use the heap:
const int N = 1e7;
int main(){int* a = new int[N]; delete[] a;}
Usually, the operator new is not used explicitly.
std::vector uses the heap (i.e. it uses new or something lower-level underneath), as opposed to std::array or a C-style array such as int[N]. Because of that, std::vector is usually capable of holding bigger chunks of data than std::array or a C-style array.
If you do
const int N = 1e7;
int a[N];
int main(){}
static memory is utilized. It's usually less limited in size than the stack memory.
To wrap up, you used stack in int main(){int a[N];}, static memory in int a[N]; int main(){}, and heap in int main(){std::vector<int> v(N);}, and, because of that, received different results.
Use heap for big arrays (via the std::vector or the operator new, examples are given above).
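A minimal sketch of the vector approach for a big array, matching the N from the question:
#include <iostream>
#include <vector>
const int N = 1e7;
int main() {
    std::vector<int> a(N);   // elements live on the heap, so the stack limit does not apply
    std::cout << 1;
    return 0;
}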
The problem is that your array is actually very big. Assuming that int is 4 bytes, 10,000,000 integers take 40,000,000 bytes, which is about 40 MB. On Windows the default maximum stack size is 1 MB, and on modern Linux it is 8 MB. Local variables live on the stack, so you are placing your 40 MB array into a 1 MB or 8 MB stack (on Windows or Linux respectively), and your program runs out of stack space. A global array is fine, because global variables are placed in the bss/data segment of the program, which has a fixed size and does not change. And with std::vector your array is allocated in dynamic memory, i.e. on the heap, which is why your program doesn't crash there. If you don't want to use std::vector, you can dynamically allocate an array on the heap like this:
int* arrayPtr = new int[N];
Then you need to release the dynamically allocated memory with the delete[] operator when you're done:
delete[] arrayPtr;
But in this case you need to know how to work with pointers. Alternatively, if you want to avoid dynamic allocation and keep the array inside main, you can make it static (I'm about 99.9% sure this will work 😅, give it a try) like this:
int main() {static int arr[N];return 0;}
which will be located in the data segment (like a global variable).
I have some code using a variable length array (VLA), which compiles fine in gcc and clang, but does not work with MSVC 2015.
class Test {
public:
Test() {
P = 5;
}
void somemethod() {
int array[P];
// do something with the array
}
private:
int P;
};
There seem to be two possible solutions:
using alloca(), taking the risks of alloca into account by making absolutely sure not to access elements outside of the array.
using a vector member variable (assuming that the overhead between vector and c array is not the limiting factor as long as P is constant after construction of the object)
The vector would be more portable (less #ifdef testing of which compiler is used), but I suspect alloca() to be faster.
The vector implementation would look like this:
#include <vector>

class Test {
public:
Test() {
P = 5;
init();
}
void init() {
array.resize(P);
}
void somemethod() {
// do something with the array
}
private:
int P;
std::vector<int> array;
};
Another consideration: when I only change P outside of the function, is having an array on the heap which isn't reallocated even faster than having a VLA on the stack?
Maximum P will be about 400.
You could and probably should use some dynamically allocated heap memory, such as managed by a std::vector (as answered by Peter). You could use smart pointers, or plain raw pointers (new, malloc,....) that you should not forget to release (delete,free,....). Notice that heap allocation is probably faster than what you believe (practically, much less than a microsecond on current laptops most of the time).
Sometimes you can move the allocation out of some inner loop, or grow it only occasionally (so for a realloc-like thing, better to use unsigned newsize=5*oldsize/4+10; than unsigned newsize=oldsize+1;, i.e. have some geometric growth). If you can't use vectors, be sure to keep a separate allocated size and used length (as std::vector does internally).
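A minimal sketch of that kind of geometric growth with a separate capacity and used length (the names IntBuffer and push, and the abort-on-failure handling, are illustrative):
#include <cstdlib>

struct IntBuffer {
    int*     data = nullptr;
    unsigned used = 0;       // number of elements actually in use
    unsigned capacity = 0;   // number of elements allocated
};

void push(IntBuffer& b, int value) {
    if (b.used == b.capacity) {
        unsigned newsize = 5 * b.capacity / 4 + 10;   // geometric growth, not capacity + 1
        int* grown = static_cast<int*>(std::realloc(b.data, newsize * sizeof(int)));
        if (!grown) std::abort();                     // simplified failure handling
        b.data = grown;
        b.capacity = newsize;
    }
    b.data[b.used++] = value;
}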
Another strategy would be to special-case small sizes vs. bigger ones, e.g. for an array of fewer than 30 elements, use the call stack; for bigger ones, use the heap.
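A rough sketch of that small-vs-large dispatch (the threshold of 30 comes from the text; sum_first_n is just an illustrative use of the buffer):
#include <numeric>
#include <vector>

long sum_first_n(int n) {
    if (n < 30) {
        int small[30] = {};                           // small case: cheap stack storage
        std::iota(small, small + n, 1);               // fill with 1..n
        return std::accumulate(small, small + n, 0L);
    }
    std::vector<int> big(n);                          // large case: heap storage
    std::iota(big.begin(), big.end(), 1);
    return std::accumulate(big.begin(), big.end(), 0L);
}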
If you insist on allocating on the call stack (using VLAs, a commonly available extension to standard C++11, or alloca), be wise to limit your call frame to a few kilobytes. The total call stack is limited to some implementation-specific limit (e.g. often about a megabyte, or a few of them, on many laptops). On some OSes you can raise that limit (see also setrlimit(2) on Linux).
Be sure to benchmark before hand-tuning your code. Don't forget to enable compiler optimization (e.g. g++ -O2 -Wall with GCC) before benchmarking. Remember that cache misses are generally much more expensive than heap allocation. Don't forget that developer time also has a cost (which is often comparable to cumulated hardware costs).
Notice that using static variables or data also has issues (it is not reentrant, not thread-safe, not async-signal-safe; see signal-safety(7)), and is less readable and less robust.
First of all, you're getting lucky if your code compiles with ANY C++ compiler as is. VLAs are not standard C++. Some compilers support them as an extension.
Using alloca() is also not standard, so is not guaranteed to work reliably (or even at all) when using different compilers.
Using a static vector is inadvisable in many cases. In your case, it gives behaviour that is potentially not equivalent to the original code.
A third option you may wish to consider is
// in definition of class Test
void somemethod()
{
std::vector<int> array(P); // assume preceding #include <vector>
// do something with array
}
A vector is essentially a dynamically allocated array, but will be cleaned up properly in the above when the function returns.
The above is standard C++. Unless you perform rigorous testing and profiling that provides evidence of a performance concern this should be sufficient.
Why don't you make the array a private member?
#include <vector>
class Test
{
public:
Test()
{
data_.resize(5);
}
void somemethod()
{
// do something with data_
}
private:
std::vector<int> data_;
};
As you've specified a likely maximum size of the array, you could also look at something like boost::small_vector, which could be used like:
#include <boost/container/small_vector.hpp>
#include <cstddef>

namespace boc = boost::container;

class Test
{
public:
Test()
{
data_.resize(5);
}
void somemethod()
{
// do something with data_
}
private:
static constexpr std::size_t preset_capacity_ = 400;
boc::small_vector<int, preset_capacity_> data_;
};
You should profile to see if this is actually better, and be aware this will likely use more memory, which could be an issue if there are many Test instances.
I made a program for a little school contest. I have:
#include <iostream>
#include <vector>
int main(){
int n;
std::cin >> n;
int c[n];
std::vector<int> r(n);
std::cout << "I made it through";
//rest of program
}
When I type in 1000000 and press enter, the program crashes without a word, exit code (0xC00000FD). I presumed it happens when I initialize the vector, but I thought it could handle a lot more. Am I doing something wrong? I know I can use a pointer, but I'd prefer not to touch things that are already working.
The stack is a very restricted resource.
Even though your compiler seems to implement C-style VLAs in C++ (int c[n];), it does not magically gain more memory.
Before executing the statement that allocates that stack array, test 1) that reading n succeeded, and 2) that n is not out of bounds for your use.
Default stack size for Windows: 1 MB; sizeof(int): 4; thus, about 250,000 fit.
Default stack size for Linux: 8 MB; sizeof(int): 4; thus, about 2,000,000 fit.
Workaround: Use dynamic allocation for the ints, like with a std::vector:
std::vector<int> c(n);
Alternatively, at least use a smart-pointer:
std::unique_ptr<int[]> c(new int[n]);
The problem isn't the vector (which uses the heap for its underlying expanding storage in most STL implementations), but int c[n], which will allocate 1,000,000 4-byte integers on the stack; that's almost 4 MB. On Win32 the stack is by default around 1 MB, hence the overflow.
If you really need to use an array, then change your c array to be allocated on the heap by using new, but don't forget to delete[] it; otherwise vector is preferred for most expanding storage scenarios. If you need a fixed-length array, then consider std::array<T, N> (new in C++11), which offers bounds checking through at().
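For completeness, a minimal sketch of the new/delete[] variant described above, applied to the code in the question (the input check is illustrative):
#include <iostream>

int main() {
    int n;
    if (!(std::cin >> n) || n <= 0) return 1;   // illustrative input validation
    int *c = new int[n];                        // heap allocation, not limited by the stack
    std::cout << "I made it through";
    delete[] c;                                 // don't forget to release it
    return 0;
}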
You probably just need to allocate the array dynamically. In C++
#include <iostream>
#include <new>
#include <vector>
int main()
{
int n;
std::cin >> n;
int *c = new (std::nothrow) int[n]; // nothrow: returns nullptr on failure instead of throwing
if(nullptr == c) {
std::cerr << "Runtime error" << std::endl;
return 1;
}
std::vector<int> r(c, c + n);
std::cout << "I made it through";
delete[] c;
return 0;
}
Also, once you have allocated c dynamically, you can construct the vector from the pointer range c, c + n as shown above.
What is the difference between these two array definitions, which one is more correct, and why?
#include <stdio.h>
#define SIZE 20
int main() {
// definition method 1:
int a[SIZE];
// end definition method 1.
// definition method 2:
int n;
scanf("%d", &n);
int b[n];
// end definition method 2.
return 0;
}
I know that if we read the size, variable n, from stdin, it's more correct to define our array (the block of memory we'll be using) as a pointer and use stdlib.h with array = malloc(n * sizeof(int)), rather than declaring it as int array[n], but again, why?
It's not "more correct" or "less correct". It either is xor isn't correct. In particular, this works in C, but not in C++.
You are declaring a dynamic array. A better way to declare a dynamic array is:
int *arr; // int * type is just for simplicity
arr = malloc(n*sizeof(int));
This is because variable-length arrays are only allowed in C99; you can't use them in C89/C90.
In (pre-C99) C and C++, all types are statically sized. This means that arrays must be declared with a size that is both constant and known to the compiler.
Now, many C++ compilers offer dynamically sized arrays as a nonstandard extension, and C99 explicitly permits them. So int b[n] will most likely work if you try it. But in some cases, it will not, and the compiler is not wrong in those cases.
If you know SIZE at compile-time:
int ar[SIZE];
If you don't:
std::vector<int> ar;
I don't want to see malloc anywhere in your C++ code. However, you are fundamentally correct and for C that's just what you'd do:
int* ptr = malloc(sizeof(int) * SIZE);
/* ... */
free(ptr);
Variable-length arrays are standard in C99 but only a GCC extension in C++; they allow you to do:
int ar[n];
but I've had issues where VLAs were disabled but GCC didn't successfully detect that I was trying to use them. Chaos ensues. Just avoid it.
Q1: The first definition is a fixed-size array declaration. Perfectly correct.
It is used when the size is known in advance, so there is no comparison with VLA or malloc().
Q2: Which is better when taking the size as input from the user: VLA or malloc?
VLA: They are limited by the environment's bounds on the size of automatic
allocation, and automatic variables are usually allocated on the stack, which is relatively
small. The limit is platform-specific. Also, VLAs exist in C99 and above only. They do give some ease of use when declaring multidimensional arrays.
malloc: Allocates from the heap, so it is definitely better for large sizes. For multidimensional arrays, pointers are involved, so the implementation is a bit more complex.
Check http://bytes.com/topic/c/answers/578354-vla-feature-c99-vs-malloc
I think that method 1 could be a little bit faster, but both of them are correct in C.
In C++ the first will work, but if you want to use the second you should use:
int size = 5;
int * array = new int[size];
and remember to delete it:
delete [] array;
I think it gives you more options to use while coding.
If you use malloc or another form of dynamic allocation, you get a pointer and access the elements through it, e.g. *(p + n) or p[n]; with an array you use array[n]. Also, a pointer you allocate must be freed, while an array does not need to be freed.
And in C++, we could define a user-defined class to do such things; in the standard library there is std::vector, which does the array things and much more.
Both are correct. The declaration you use depends on your code.
The first declaration, i.e. int a[SIZE];, creates an array with a fixed size of 20 elements.
It is helpful when you know the exact size of the array that will be used in the code, for example when you are generating the
multiplication table of a number n up to its 20th multiple.
The second declaration allows you to make an array of the size that you desire.
It is helpful when you need an array of a different size each time the code is executed, for example when you want to generate the Fibonacci series up to n. In that case the size of the array must be n for each value of n. So say you have n = 5: here int a[20] will waste memory, because only the first five slots will be used for the Fibonacci series and the rest will be empty. Similarly, if n = 25, then your array int a[20] becomes too small.
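A small illustration of the Fibonacci case just described, sizing the array at run time (shown here with new[]/delete[]; in C you would use malloc/free instead):
#include <iostream>

int main() {
    int n;
    if (!(std::cin >> n) || n <= 0) return 1;
    int* fib = new int[n];          // sized by the value read at run time
    fib[0] = 0;
    if (n > 1) fib[1] = 1;
    for (int i = 2; i < n; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];
    for (int i = 0; i < n; ++i)
        std::cout << fib[i] << ' ';
    delete[] fib;
    return 0;
}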
The difference if you define the array using malloc is that you can set the size of the array dynamically, i.e. at run time, from a value your program only obtains while running.
One more difference is that arrays created using malloc are allocated space on the heap, so they are preserved across function calls, unlike automatic (stack) arrays.
example-
#include<stdio.h>
#include<stdlib.h>
int main()
{
int n;
int *a;
scanf("%d",&n);
a=(int *)malloc(n*sizeof(int));
free(a);
return 0;
}
I have a struct like this:
struct process {int PID;int myMemory[];};
However, when I try to use it:
process p;
int memory[2];
p.myMemory = memory;
I get a cryptic error from Eclipse saying int[0] is not compatible with int[2].
What am I doing wrong?
Thanks!
Don't use static arrays, malloc, or even new if you're using C++. Use std::vector which will ensure correct memory management.
#include <vector>
struct Process {
int pid;
std::vector<int> myMemory;
};
Process p;
p.myMemory.reserve(2); // allocates enough space on the heap to store 2 ints
p.myMemory.push_back( 4815 ); // add an index-zero element of 4815
p.myMemory.push_back( 162342 ); // add an index-one element of 162342
I might also suggest creating a constructor so that pid does not initially have an undefined value:
struct Process {
Process() : pid(-1), myMemory() {
}
int pid;
std::vector<int> myMemory;
};
I think you should declare myMemory as an int*, then malloc() it when you know its size. After this it can be used like a normal array. int[0] seems to mean "array with no dimension specified".
EXAMPLE:
int *a;           // suppose you'd like to have an array with user-specified length
int d;
scanf("%d", &d);  // get the dimension
a = (int *) malloc(d * sizeof(int));
// now you can forget a is a pointer:
a[0] = 5;
a[2] = 1;
free((void *) a); // don't forget this!
All these answers about vector or whatever are confused :) Using a dynamically allocated pointer opens up a memory-management problem; using vector opens up a performance problem, as well as making the data type non-POD and preventing memcpy() from working.
The right answer is to use
Array<int,2>
where Array is a template the C++ committee didn't bother to put in C++98 but which is in C++0x (although I'm not sure of the name). This is an inline (no memory management or performance issues) first-class array which is a wrapper around a C array. I guess Boost has something already.
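For reference, the template being described ended up as std::array in C++11 (Boost already had boost::array); a minimal sketch of how it could be used here:
#include <array>
#include <cstring>

struct process {
    int PID;
    std::array<int, 2> myMemory;    // inline storage, no heap allocation
};

int main() {
    process p{};
    p.myMemory = {4, 8};            // whole-array assignment works, unlike a C array
    process q{};
    std::memcpy(&q, &p, sizeof p);  // the type stays trivially copyable, so memcpy keeps working
    return 0;
}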
In C++, an array behaves much like a constant pointer in this respect: its address cannot be changed, while the values it holds can be. As a consequence, you cannot copy the elements of one array into another with the assignment operator. You have to go through the arrays and copy the elements one by one, checking the boundary conditions yourself.
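For example (a minimal sketch): copying one array into another has to be done element by element (or with std::copy); direct assignment does not compile.
#include <algorithm>

int main() {
    int a[3] = {1, 2, 3};
    int b[3];
    // b = a;               // error: arrays cannot be assigned
    std::copy(a, a + 3, b); // copy the elements instead
    return 0;
}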
The syntax ...
struct process {int PID;int myMemory[];};
... is not valid C++, but it may be accepted by some compilers as a language extension. In particular, as I recall, g++ accepts it. It's there to support the C "struct hack", which is unnecessary in C++.
In C++, if you want a variable length array in a struct, use std::vector or some other array-like class, like
#include <vector>
struct Process
{
int pid;
std::vector<int> memory;
};
By the way, it's a good idea to reserve use of UPPERCASE IDENTIFIERS for macros, so as to reduce the probability of name collisions with macros, and not make people reading the code deaf (it's shouting).
Cheers & hth.,
You cannot make an array (defined using []) point to another array, because the array identifier behaves like a const pointer: you can change the values it points to, but you cannot change the pointer itself. Think of "int array[]" as "int* const array".
The only time you can do that is during initialization.
// OK
int array[] = {1, 2, 3};
// NOT OK
int array[];
array = [1, 2, 3]; // this is no good.
int x[] is normally understood as int * x.
In this case it is not, so if you want an array of integers whose size is not fixed in advance, change your declaration to:
struct process {int PID;int * myMemory;};
You should change your initialization to:
p.myMemory = new int[2];
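A slightly fuller sketch of this pointer-member approach, under the assumption that the code owning the struct also releases the memory with delete[]:
struct process {
    int  PID;
    int* myMemory;
};

int main() {
    process p;
    p.PID = 1;
    p.myMemory = new int[2];   // size chosen at run time
    p.myMemory[0] = 10;
    p.myMemory[1] = 20;
    delete[] p.myMemory;       // release it when done
    return 0;
}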