If I have a fixed number of elements of class MyClass, should I use an array or a vector? I.e.:
MyClass* myArray[];
or
std::vector<MyClass*> myVector;
?
Use std::array or a raw array for a small, statically sized number of elements.
If you have a lot of data (more than, say, 100 KB), you hog the stack and are asking for stack overflow. In that case, or if the number of elements is only known at runtime, use std::vector.
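For example, a rough sketch of both cases (the names and sizes here are just placeholders):
#include <array>
#include <cstddef>
#include <vector>

int main() {
    std::array<int, 64> small{};  // small, size known at compile time: fine on the stack

    std::size_t n = 1000000;      // size only known at runtime (or simply large)
    std::vector<int> big(n);      // lives on the heap, managed automatically
}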
If you know the number at compile time, use a static array.
If the number is dynamic (e.g. obtained from the user), a vector is much better, as it saves you the hassle of managing the memory yourself.
"Fixed" has two meanings in this context. The usual one is set once, never change, such as a value read from input. This value is known at runtime and requires dynamic allocation on the heap. Your options are a C-style array with new or a vector; it is highly recommended you use a vector.
#include <vector>
#include <iostream>

int main() {
    int size;
    std::cin >> size;

    int *myArray = new int[size];    // manual allocation: you must remember to delete[] it
    std::vector<int> myVector(size); // the vector manages its memory for you

    delete[] myArray;
}
"Fixed" can also mean a compile-time constant, meaning it is constant for any run of the program. You can use a C-style array or a C++ array (automatic memory allocation on the stack).
#include <array>

int main() {
    const int size = 50;
    int myCArray[size];               // C-style array
    std::array<int, size> myStdArray; // std::array
}
These are faster, but your program needs to have access to sufficient stack memory, which is something you can change in your project settings. See this topic for more info. If the array is really big, you may want to consider allocating on the heap anyway (i.e. use a vector).
Related
I want to malloc an array in my code, and its size should be determined at runtime.
I tried this:
#include <iostream>
#include <array>
int main(){
int M=4,N=3,P=5;
M=N+P;
std::array<std::array<double,M>,N> arr;
}
But MSVC told me:
a variable with non-static storage duration cannot be used as a non-type argument
I can't find the answer to this on Stack Overflow (the existing questions don't seem to solve my problem...).
How to dynamically allocate a 2D std::array in C++?
I know I could use std::vector to solve this, but then I would need to manage the vector's memory size myself, and this will be used many times in my project. I also want to write C++-style code rather than C-style code... Maybe there is a way to turn a C-style 2D array into a std::array, but I can't find it on Google...
So I ask this question...
I mean that M and N should be obtained dynamically (they don't change afterwards, but I only know them at runtime...), like:
#include <iostream>

int main(){
    int a = 3;
    int b = 4;
    int rowCount = a + b;
    int colCount = b - a;
    int** arr = new int*[rowCount];
    for(int i = 0; i < rowCount; ++i)
    {
        arr[i] = new int[colCount];
    }
}
I see where my mistake is now; I had confused myself... If I don't use push_back, the vector works well. If I do use it, it doesn't work either.
I think the capacity of the vector can end up bigger than its size, and I want to avoid that. But another question, How to limit the capacity of std::vector to the number of elements, suggests I should use my own allocator or std::vector::shrink_to_fit() to avoid it... (there is no such guarantee in C++17 if you only use reserve(n)).
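For reference, a minimal sketch of the shrink_to_fit() approach (the standard only treats it as a non-binding request):
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(100);          // capacity is now at least 100
    for (int i = 0; i < 10; ++i)
        v.push_back(i);      // size is 10, capacity may still be >= 100
    v.shrink_to_fit();       // non-binding request to make capacity match size
}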
The dynamically allocated array container in C++ is std::vector. std::array is specifically for compile-time fixed-length arrays.
https://cppreference.com is your friend!
But the vector memory size needs to be organized by myself
Not quite sure what you mean by that, but you can specify the size of your std::vector using its constructor:
std::vector<std::vector<int>> arr(N);
If you need some special allocator (not just new/malloc), then you can also specify a custom allocator.
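For example, a minimal allocator that just forwards to ::operator new/delete might look roughly like this (a sketch; MyAllocator is a made-up name, and a real custom allocator would presumably do something more interesting):
#include <cstddef>
#include <new>
#include <vector>

template <typename T>
struct MyAllocator {
    using value_type = T;

    MyAllocator() = default;
    template <typename U> MyAllocator(const MyAllocator<U>&) {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) {
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const MyAllocator<T>&, const MyAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MyAllocator<T>&, const MyAllocator<U>&) { return false; }

int main() {
    std::vector<int, MyAllocator<int>> v(10); // vector using the custom allocator
}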
The whole program you propose is not good C++. A C++ solution would look like this:
#include <vector>
int main() {
int a = 3;
int b = 4;
unsigned int rowCount = a + b;
unsigned int colCount = b - a;
std::vector<std::vector<int>> matrix(rowCount);
for (auto& row : matrix) {
row.resize(colCount);
}
}
std::array, like an actual array in C++, requires a constant size. It's what gives it any advantage at all over std::vector.
For a technical explanation as to how that requirement is implemented, remember that template parameters are required to be compile-time constants (since it changes how the code is generated, again at compile-time).
Anyway, you want to use std::vector here. If you know the size you want, give it as a constructor parameter.
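For the 2D case from the question above, that might look something like this (a sketch; the sizes are only known at runtime):
#include <vector>

int main() {
    int N = 3, P = 5;
    int M = N + P;   // only known at runtime
    std::vector<std::vector<double>> arr(N, std::vector<double>(M));
    arr[0][0] = 1.0; // used just like a 2D array
}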
#include <iostream>
#include <fstream>
#include <cmath>
#include <math.h>
#include <iomanip>
using std::ifstream;
using namespace std;
int main (void)
{
int count=0;
float sum=0;
float maximum=-1000000;
float sumOfX;
float sumOfY;
int size;
int negativeY=0;
int positiveX=0;
int negativeX=0;
ifstream points; //the points to be imported from file
//points.open( "data.dat");
//points>>size;
//cout<<size<<endl;
size=100;
float x[size][2];
while (count<size) {
points>>(x[count][0]);
//cout<<"x= "<<(x[count][0])<<" ";//read in x value
points>>(x[count][1]);
//cout<<"y= "<<(x[count][1])<<endl;//read in y value
count++;
}
This program is giving me expected constant expression error on the line where I declare float x[size][2]. Why?
float x[size][2];
That doesn't work because declared arrays can't have runtime sizes. Try a vector:
std::vector< std::array<float, 2> > x(size);
Or use new
// identity<float[2]>::type *px = new float[size][2];
float (*px)[2] = new float[size][2];
// ... use and then delete
delete[] px;
If you don't have C++11 available, you can use boost::array instead of std::array.
If you don't have boost available, make your own array type you can stick into vector
template<typename T, size_t N>
struct array {
T data[N];
T &operator[](ptrdiff_t i) { return data[i]; }
T const &operator[](ptrdiff_t i) const { return data[i]; }
};
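Usage then looks the same as with std::array or boost::array (a sketch, reusing size from the question):
std::vector< array<float, 2> > x(size);
x[0][0] = 1.0f; // x coordinate of the first point
x[0][1] = 2.0f; // y coordinate of the first point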
For easing the syntax of new, you can use an identity template which effectively is an in-place typedef (also available in boost)
template<typename T>
struct identity {
typedef T type;
};
If you want, you can also use a vector of std::pair<float, float>
std::vector< std::pair<float, float> > x(size);
// syntax: x[i].first, x[i].second
The size of the array must be fixed at compile time, and since size is not a constant expression, the compiler cannot determine its value.
You cannot have variable length arrays (as they are called in C99) in C++. You need to use a dynamically allocated array (if the size varies) or an integral constant expression for the size.
The line float x[size][2] won't work, because the size of an automatic array has to be known at compile time (with a few compiler-specific exceptions). If you want to be able to easily change the size of the array x at compile time, you can do this:
#define SIZE 100
float x[SIZE][2];
If you really want to allocate the array based on information you only have at runtime, you need to allocate the array dynamically with malloc or new.
The size of an automatic array must be a compile-time constant.
const int size = 100;
float x[size][2];
If the size weren't known at compile time (e.g. entered by the user, or determined from the contents of the file), you'd need to use dynamic allocation, for example:
std::vector<std::pair<float, float> > x(somesize);
(Instead of a pair, a dedicated Point struct/class would make perfect sense.)
Because it expected a constant expression!
Array dimensions in C (ignoring C99's VLAs) and C++ must be quantities known at compile time. That doesn't mean merely labelled const: the value has to be available to the compiler.
Use dynamic allocation or std::vector (which is a wrapper around dynamic array allocation) to determine array sizes at run-time.
It is a restriction of the language. Array sizes must be constant expressions. Here's a partial justification from cplusplus.com:
NOTE: The elements field within brackets [] which represents the number of elements the array is going to hold, must be a constant value, since arrays are blocks of non-dynamic memory whose size must be determined before execution. In order to create arrays with a variable length dynamic memory is needed, which is explained later in these tutorials.
You haven't assigned any value to size; thus the compiler cannot allocate the memory for the array. (An array of null size? What?)
Additionally, you'd need to make size a constant, and not a variable.
EDIT: Unfortunately, this response no longer makes sense since the poster has changed their question.
The standard way of allocating an array using new int is:
int* arr = new int[50];
When declared this way, the memory allocation is contiguous and there is a single pointer variable on the stack.
If instead I want to declare it as 50 different pointers, so that each pointer has its own, not necessarily contiguous, memory address, the most obvious way to go about it is like this:
int * arr[50];
But in this case, what would the code for assigning memory (i.e. via new int) look like, and what are the downsides and advantages of each way of declaring it?
The obvious way would be to iterate over all the elements and allocate memory for them:
for (int i = 0; i < 50; i++){
arr[i] = new int;
}
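Since each int was allocated individually, each one also has to be freed individually when you are done (sketch):
for (int i = 0; i < 50; i++){
    delete arr[i]; // matches the individual new int above
}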
The downside of non-contiguous memory chunks is cache misses.
You can read more on that here.
How to assign the memory is already covered in this answer, so I won't repeat it.
For allocating single ints, the line below is overkill:
int* arr[50]; // all downsides only
Instead of that, you should use simple integers:
int arr[50];
Better still, use the facilities offered by the standard containers, such as:
std::vector<int> vi;    // if the number of ints is dynamic
std::array<int, 50> ai; // if the number of ints is fixed at compile time
Finally, from this answer,
"Avoid pointers until you can't... So the rule of thumb is to use pointers only if there is no other choice."
int * a;
a = new int[10];
cout << sizeof(a)/sizeof(int);
If I used a normal array, the answer would be 10.
Alas, the lucky number printed was 1, because sizeof(int) is 4 and sizeof(int*) is 4 too. How do I overcome this? In my case, keeping the size in memory is a complicated option. How do I get the size in code?
My best guess would be to iterate through the array and search for its end, and the end is 0, right? Any suggestions?
--edit
Well, what I fear about vectors is that they will reallocate while pushing back; but you got the point, I can just preallocate the memory. However, I can't change the structure, the rest of the code depends on it. Thanks for the answers; I see there's no way around it, so I'll just look for a way to store the size in memory.
What I asked was not what kind of structure to use.
Simple.
Use std::vector<int> or std::array<int, N> (where N is a compile-time constant).
If you know the size of your array at compile time and it doesn't need to grow at runtime, then use std::array. Otherwise, use std::vector.
These are called sequence containers; they define a member function size() which returns the number of elements in the container. You can use that whenever you need to know the size. :-)
Read the documentation:
std::array with example
std::vector with example
When you use std::vector, you should consider using reserve() if you've some vague idea of the number of elements the container is going to hold. That will give you performance benefit.
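For instance (a rough sketch; the element count is just an example):
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000);               // one up-front allocation instead of repeated growth
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);
    // v.size() is now 1000, v.capacity() is at least 1000
}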
If you worry about performance of std::vector vs raw-arrays, then read the accepted answer here:
Is std::vector so much slower than plain arrays?
It explains why the code in the question is slow, which has nothing to do with std::vector itself, rather its incorrect usage.
If you cannot use either of them, and are forced to use int*, then I would suggest these two alternatives. Choose whatever suits your need.
#include <cstddef>

struct array
{
    int *elements;    // pointer to the allocated elements
    std::size_t size; // number of elements
};
That is self-explanatory.
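Usage might look like this (a sketch; you manage the allocation yourself):
array a;
a.size = 10;
a.elements = new int[a.size];
// ... use a.elements[i] for 0 <= i < a.size ...
delete[] a.elements;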
The second one is this: allocate memory for one more element and store the size in the first element as:
int N = howManyElements();
int *array = new int[N+1]; //allocate memory for size storage also!
array[0] = N; //store N in the first element!
//your code : iterate i=1 to i<=N
//must delete it once done
delete []array;
sizeof(a) is going to be the size of the pointer, not the size of the allocated array.
There is no way to get the size of the array after you've allocated it. The sizeof operator is evaluated at compile time.
How would the compiler know how big the array was in this function?
#include <iostream>

void foo(int size)
{
    int * a;
    a = new int[size];
    std::cout << sizeof(a)/sizeof(int); // prints sizeof(int*)/sizeof(int), not the array size
    delete[] a;
}
It couldn't. So it's not possible for the sizeof operator to return the size of an allocated array. And, in fact, there is no reliable way to get the size of an array you've allocated with new. Let me repeat this: there is no reliable way to get the size of an array you've allocated with new. You have to store the size someplace.
Luckily, this problem has already been solved for you, and it's guaranteed to be there in any implementation of C++. If you want a nice array that stores the size along with the array, use ::std::vector. Particularly if you're using new to allocate your array.
#include <iostream>
#include <vector>

void foo(int size)
{
    ::std::vector<int> a(size);
    ::std::cout << a.size();
}
There you go. Notice how you no longer have to remember to delete it. As a further note, using ::std::vector in this way has no performance penalty over using new in the way you were using it.
If you are unable to use std::vector and std::array as you have stated, then your only remaining option is to keep track of the size of the array yourself.
I still suspect that your reasons for avoiding std::vector are misguided. Even for performance monitoring software, intelligent uses of vector are reasonable. If you are concerned about resizing you can preallocate the vector to be reasonably large.
I'm trying to make a dynamically allocated two-dimensional array of variable size, but I don't understand why it won't compile when I use my own constant value:
const int oConstanta=N+1;
int (*m)[oConstanta]=new int[oConstanta][oConstanta];
But when I use a normal constant such as 1000 between the brackets it compiles successfully.
const int oConstanta=N+1;
int (*m)[1000]=new int[1000][1000];
Does anyone know the reason for this?
PS: I know that:
int **m=new int*[oConstanta];
for(int i=1;i<=N;i++)
{
m[i]=new int[oConstanta];
init(m[i]);
}
will solve my problems but I want to learn why my former method was a bad idea.
Unless N is a compile-time constant expression, oConstanta is not a compile-time constant either.
The best way of making a two-dimensional array in C++ is using std::vector of std::vectors, for example, like this:
#include <vector>
std::vector<std::vector<int> > m(N+1, std::vector<int>(N+1, 0));
Ultimately the reason is that you can't create static arrays of variable length.
In your code you are trying to create a static array of dynamic arrays, both of variable length.
Now, static arrays live on the stack, while dynamic arrays live on the heap. While the memory management of the heap is "flexible", the stack is different: the compiler needs to be able to determine the size of each frame on the stack. That is clearly not possible if you use an array of variable length.
On the other hand, if you use a pointer, the size of the stack frame is known (a pointer has a known size) and everything is fine.
If you want to try, this should compile fine:
int (*m)[1000] = new int[oConstanta][1000];
since it dynamically allocates a variable number (oConstanta) of rows, each of which is a fixed-size array of 1000 ints; only the first dimension of a new[] expression may be a runtime value.
In short: whenever the size of an object is not known at compile time, that object cannot be in the stack, it has to be dynamically allocated.
To make a dynamically sized, 2D matrix with contiguous elements and a single allocation:
std::vector<int> matrix(Rows*Columns);
Access an element in the i-th row and j-th column:
matrix[Columns*i + j] = 1;
You can wrap this all up in a class. Here's a very basic example:
#include <cstddef>
#include <vector>

struct Matrix {
    std::size_t rows, columns;
    std::vector<int> m; // row-major storage: element (i,j) is at m[i*columns + j]

    Matrix(std::size_t rows, std::size_t columns)
        : rows(rows)
        , columns(columns)
        , m(rows*columns)
    {}

    int &at(std::size_t i, std::size_t j) {
        return m.at(i*columns + j);
    }
};
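Usage might then look like this (sketch):
Matrix m(3, 4);         // 3 rows, 4 columns
m.at(1, 2) = 42;        // set the element in row 1, column 2
int value = m.at(1, 2); // read it back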