Type aliasing and dynamically allocated arrays - c++

I'm trying to facilitate automatic vectorization by the compiler in the blitz++ array library. For this reason, I'd like to present a view of the array data that is in chunks of fixed-length vectors, which are already vectorized well. However, I can't figure out what the type aliasing rules imply in conjunction with dynamically allocated arrays.
Here's the idea. An array currently consists of
T_numtype* restrict data_;
Operations are done by looping over these data. What I would like to do is present an alternative view of this array as an array of TinyVector<T_numtype, N>, which is a fixed-length vector whose operations are totally vectorized using the expression template machinery. The idea would be that an L-length array should be viewable either as T_numtype[L] or as TinyVector<T_numtype, N>[L/N]. Is there a way to accomplish this without running afoul of the type aliasing rules?
For a statically allocated array, one would do
union {
    T_numtype data_[L];
    TinyVector<T_numtype, N> vdata_[L/N];
};
The closest I could think of is to define
typedef union {
    T_numtype data_[N];
    TinyVector<T_numtype, N> v;
} u;
u* data_;
and then allocate it with
data_ = new u[L/N];
But it seems that now I have given up my right to address the entire array as a flat array of T_numtype, so to access a particular element I would need to do data_[i/N].data_[i%N], which is a lot more complicated.
So, is there a way to legally create a union of T_numtype data_[L] and TinyVector<T_numtype, N>[L/N] where L is a dynamically determined size?
(I'm aware that there are additional alignment concerns, i.e. N must be a value that is the same as the alignment of the TinyVector member, otherwise there will be holes in the array.)

Aliasing is hard to make legal. However, if the operations "are done by looping over these data", do those operations really require that the data be exactly an array of T_numtype?
It may be better to wrap the data in a class with one data member of type TinyVector<T_numtype, N>[L/N] or even std::vector<TinyVector<T_numtype, N> > since that L is apparently determined at runtime, and expose a pair of iterators for those operations that want to loop over the entire data as a single sequence.
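A minimal sketch of that suggestion, assuming a simplified TinyVector stand-in (the chunked_array name and the flat operator() are illustrative, not blitz++ API); a pair of flat iterators could be layered on top of operator() in the same way:

#include <cstddef>
#include <vector>

// Hypothetical stand-in for blitz++'s TinyVector: a fixed-length,
// vectorizable chunk of N elements.
template <typename T, std::size_t N>
struct TinyVector {
    T elem[N];
    T&       operator[](std::size_t i)       { return elem[i]; }
    const T& operator[](std::size_t i) const { return elem[i]; }
};

// Sketch of a wrapper that stores the data only as chunks, so no
// union/aliasing trick is needed; flat access is computed from the
// chunk index and the offset within the chunk.
template <typename T, std::size_t N>
class chunked_array {
public:
    explicit chunked_array(std::size_t length)
        : chunks_((length + N - 1) / N), length_(length) {}

    // Flat, element-wise view (what the old T* loops used).
    T&       operator()(std::size_t i)       { return chunks_[i / N][i % N]; }
    const T& operator()(std::size_t i) const { return chunks_[i / N][i % N]; }
    std::size_t size() const { return length_; }

    // Chunked view for the vectorized expression-template kernels.
    TinyVector<T, N>&       chunk(std::size_t c)       { return chunks_[c]; }
    const TinyVector<T, N>& chunk(std::size_t c) const { return chunks_[c]; }
    std::size_t chunk_count() const { return chunks_.size(); }

private:
    std::vector<TinyVector<T, N> > chunks_;
    std::size_t length_;
};

Because every element access goes through the TinyVector member, there is only one declared type for the storage and the aliasing question disappears, at the cost of the i/N, i%N arithmetic in the flat view.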

Related

Array of class holding an array memory layout

If we have a class which holds an array, let's call it vector and hold the values in a simple array called data:
class vector
{
public:
double data[3];
<...etc..>
};
Note: it is called vector here only to make the explanation clearer; it is not std::vector!
So my question is: if I keep only typedefs and some constexpr members alongside this array inside the class, am I correct that the class will be laid out as just 3 doubles one after another in memory?
And then if I create an array of vectors like:
vector vl[3];
Note: the size of the array is not always known at compile time; 3 is just used for the example.
then in memory it'll be just 9 doubles one after another, right?
So vl[0].data[3] will always return the 2nd vector's 1st element? And in this case is it guaranteed that the layout will always be like a simple flat array in memory?
I found only cases with arrays of arrays, but not with arrays of classes holding an array, and I'm not sure it is exactly the same in the end. I made some tests and it seems to work as I expected, but I don't know if it is always true.
Thank you!
Mostly, yes.
The standard doesn't promise that there never is anything after data in the representation of a vector, but all the implementations that I know of won't add any padding in this case.
What is promised is that there is no padding before data in the representation of vector, because it is a StandardLayout type.
You are right with your first example: The class layout is like a C struct. The first member resides at the address of the struct itself, and if it is an array, all the array's members are adjacent.
Between struct members, however, may be padding; so there is no guarantee that the size of a struct is the sum of all member sizes. I'd have to dig into the standard but I assume this includes padding at the end. This answer affirms that; assert(sizeof(vector) == 3*sizeof(double)) may not hold. In reality I'd assume that an implementation may pad a struct containing three chars so that the struct aligns at word boundaries in an array, but not three doubles which are typically the type with the strongest alignment requirements. But there is no guarantee between implementations, architectures and compiler options: Imagine we switch to 128 bit CPUs.
With respect to your second example: The above applies recursively, so the standard gives no guarantee that the 9 doubles will be adjacent. On the other hand, I bet they will be, and the program can assert it with a simple compile-time static_assert.
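For instance, with C++11 the check could look like this (assuming only the data member from the stripped-down example above):

#include <cstddef>

class vector {              // the class from the question, not std::vector
public:
    double data[3];
};

vector vl[3];

// Compile-time checks: no padding before, inside, or after the array on this
// implementation, so vl really is 9 doubles one after another.
static_assert(offsetof(vector, data) == 0, "data must start at offset 0");
static_assert(sizeof(vector) == 3 * sizeof(double), "vector has trailing padding");
static_assert(sizeof(vl) == 9 * sizeof(double), "vl is not 9 packed doubles");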

Are there any advantages of C++ Arrays over Vectors? [duplicate]

What are the differences between an array and a vector in C++? An example of the differences might be included libraries, symbolism, abilities, etc.
Array
Arrays contain a specific number of elements of a particular type. So that the compiler can reserve the required amount of space when the program is compiled, you must specify the type and number of elements that the array will contain when it is defined. The compiler must be able to determine this value when the program is compiled. Once an array has been defined, you use the identifier for the array along with an index to access specific elements of the array. [...] arrays are zero-indexed; that is, the first element is at index 0. This indexing scheme is indicative of the close relationship in C++ between pointers and arrays and the rules that the language defines for pointer arithmetic.
— C++ Pocket Reference
Vector
A vector is a dynamically-sized sequence of objects that provides array-style operator[] random access. The member function push_back copies its arguments via copy constructor, adds that copy as the last item in the vector and increments its size by one. pop_back does the exact opposite, by removing the last element. Inserting or deleting items from the end of a vector takes amortized constant time, and inserting or deleting from any other location takes linear time. These are the basics of vectors. There is a lot more to them. In most cases, a vector should be your first choice over a C-style array. First of all, they are dynamically sized, which means they can grow as needed. You don't have to do all sorts of research to figure out an optimal static size, as in the case of C arrays; a vector grows as needed, and it can be resized larger or smaller manually if you need to. Second, vectors offer bounds checking with the at member function (but not with operator[]), so that you can do something if you reference a nonexistent index instead of simply watching your program crash or worse, continuing execution with corrupt data.
— C++ Cookbook
arrays:
are a builtin language construct;
come almost unmodified from C89;
provide just a contiguous, indexable sequence of elements; no bells and whistles;
are of fixed size; you can't resize an array in C++ (unless it's an array of POD and it's allocated with malloc);
their size must be a compile-time constant unless they are allocated dynamically;
they take their storage space depending from the scope where you declare them;
if dynamically allocated, you must explicitly deallocate them;
if they are dynamically allocated, you just get a pointer, and you can't determine their size; otherwise, you can use sizeof (hence the common idiom sizeof(arr)/sizeof(*arr), that however fails silently when used inadvertently on a pointer);
automatically decay to pointers in most situations; in particular, this happens when passing them to a function, which usually requires passing a separate parameter for their size;
can't be returned from a function; (Unless it is std::array)
can't be copied/assigned directly;
dynamically allocated arrays of objects require a default constructor, since all their elements must be constructed first;
std::vector:
is a template class;
is a C++ only construct;
is implemented as a dynamic array;
grows and shrinks dynamically;
automatically manages its memory, which is freed on destruction;
can be passed to/returned from functions (by value);
can be copied/assigned (this performs a deep copy of all the stored elements);
doesn't decay to pointers, but you can explicitly get a pointer to their data (&vec[0] is guaranteed to work as expected);
always brings along with the internal dynamic array its size (how many elements are currently stored) and capacity (how many elements can be stored in the currently allocated block);
the internal dynamic array is not allocated inside the object itself (which just contains a few "bookkeeping" fields), but is allocated dynamically by the allocator specified in the relevant template parameter; the default one gets the memory from the freestore (the so-called heap), independently from where the actual object itself is allocated;
for this reason, they may be less efficient than "regular" arrays for small, short-lived, local arrays;
when reallocating, the objects are copied (moved, in C++11);
does not require a default constructor for the objects being stored;
is better integrated with the rest of the so-called STL (it provides the begin()/end() methods, the usual STL typedefs, ...)
Also consider the "modern alternative" to arrays - std::array; I already described in another answer the difference between std::vector and std::array, you may want to have a look at it.
I'll add that arrays are very low-level constructs in C++ and you should try to stay away from them as much as possible when "learning the ropes" -- even Bjarne Stroustrup recommends this (he's the designer of C++).
Vectors come very close to the same performance as arrays, but with a great many conveniences and safety features. You'll probably start using arrays when interfacing with API's that deal with raw arrays, or when building your own collections.
Those references pretty much answered your question. Simply put, vectors' lengths are dynamic while arrays have a fixed size.
When using an array, you specify its size upon declaration:
int myArray[100];
myArray[0]=1;
myArray[1]=2;
myArray[2]=3;
For vectors, you just declare one and add elements:
vector<int> myVector;
myVector.push_back(1);
myVector.push_back(2);
myVector.push_back(3);
...
At times you won't know the number of elements needed, so a vector is ideal for such a situation.

Why aren't built-in arrays safe?

The book C++ Primer, 5th edition by Stanley B. Lippman (ISBN 0-321-71411-3/978-0-321-71411-4) mentions:
An [std::]array is a safer, easier-to-use alternative to built-in arrays.
What's wrong with built-in arrays?
A built-in array is a contiguous block of bytes, usually on the stack. You really have no decent way to keep useful information about the array, its boundaries or its state. std::array keeps this information.
Built-in arrays decay into pointers when passed to or from functions. This may cause:
When passing a built-in array, you pass a raw pointer. A pointer doesn't keep any information about the size of the array. You will have to pass along the size of the array and thus uglify the code. std::array can be passed as reference, copy or move.
There is no way of returning a built-in array; you will end up returning a pointer to a local variable if the array was declared in that function's scope.
std::array can be returned safely, because it's an object and its lifetime is managed automatically.
You can't really do useful stuff on built-in arrays such as assigning, moving or copying them. You'll end up writing a customized function for each built-in array (possibly using templates). std::array can be assigned.
By accessing an element which is out of the array boundaries, you are triggering undefined behaviour. std::array::at will perform bounds checking and throw a regular C++ exception if the check fails.
Better readability: built-in arrays involve pointer arithmetic. std::array implements useful functions like front, back, begin and end to avoid that.
Let's say I want to sort a built-in array, the code could look like:
int arr[7] = {/*...*/};
std::sort(arr, arr+7);
This is not the most robust code ever. By changing 7 to a different number, the code breaks.
With std::array:
std::array<int,7> arr{/*...*/};
std::sort(arr.begin(), arr.end());
The code is much more robust and flexible.
Just to make things clear, built-in arrays can sometimes be easier. For example, many Windows as well as UNIX API functions/syscalls require some (small) buffers to fill with data. I wouldn't go with the overhead of std::array instead of a simple char[MAX_PATH] that I may be using.
It's hard to gauge what the author meant, but I would guess they are referring to the following facts about native arrays:
they are raw
There is no .at member function you can use for element access with bounds checking, though I'd counter that you usually don't want that anyway. Either you're accessing an element you know exists, or you're iterating (which you can do equally well with std::array and native arrays); if you don't know the element exists, a bounds-checking accessor is already a pretty poor way to ascertain that, as it is using the wrong tool for code flow and it comes with a substantial performance penalty.
they can be confusing
Newbies tend to forget about array name decay, passing arrays into functions "by value" then performing sizeof on the ensuing pointer; this is not generally "unsafe", but it will create bugs.
they can't be assigned
Again, not inherently unsafe, but it leads to silly people writing silly code with multiple levels of pointers and lots of dynamic allocation, then losing track of their memory and committing all sorts of UB crimes.
Assuming the author is recommending std::array, that would be because it "fixes" all of the above things, leading to generally better code by default.
But are native arrays somehow inherently "unsafe" by comparison? No, I wouldn't say so.
How is std::array safer and easier-to-use than a built-in array?
It's easy to mess up with built-in arrays, especially for programmers who aren't C++ experts and programmers who sometimes make mistakes. This causes many bugs and security vulnerabilities.
With a std::array a1, you can access an element with bounds checking, a1.at(i), or without bounds checking, a1[i]. With a built-in array, it's always your responsibility to diligently avoid out-of-bounds accesses. Otherwise the code can smash some memory that goes unnoticed for a long time and becomes very difficult to debug. Even just reading outside an array's bounds can be exploited for security holes like the Heartbleed bug that divulged private encryption keys.
C++ tutorials may pretend that array-of-T and pointer-to-T are the same thing, then later tell you about various exceptions where they are not the same thing. E.g. an array-of-T in a struct is embedded in the struct, while a pointer-to-T in a struct is a pointer to memory that you'd better allocate. Or consider an array of arrays (such as a raster image). Does incrementing a pointer into it advance to the next pixel or to the next row? Or consider an array of objects where an object pointer coerces to its base class pointer. All this is complicated and the compiler doesn't catch mistakes.
With a std::array a1, you can get its size a1.size(), compare its contents to another std::array a1 == a2, and use other standard container methods like a1.swap(a2). With built-in arrays, these operations take more programming work and are easier to mess up. E.g. given int b1[] = {10, 20, 30}; to get its size without hard-coding 3, you must do sizeof(b1) / sizeof(b1[0]). To compare its contents, you must loop over those elements.
You can pass a std::array to a function by reference (or by pointer, f(&a1)) or by value f(a1) [i.e. by copy]. Passing a built-in array only goes "by reference" and confounds it with a pointer to the first element. That's not the same thing, and the compiler doesn't pass the array size.
You can return a std::array from a function by value, return a1. Returning a built-in array return b1 returns a dangling pointer, which is broken.
You can copy a std::array in the usual way, a1 = a2, even if it contains objects with constructors. If you try that with built-in arrays, b1 = b2, it'll just copy the array pointer (or fail to compile, depending on how b2 is declared). You can get around that using memcpy(b1, b2, sizeof(b1)), but this is broken if the arrays have different sizes or if they contain elements with constructors.
You can easily change code that uses std::array to use another container like std::vector or std::map.
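A compact sketch contrasting the two (the values are made up for illustration):

#include <algorithm>
#include <array>
#include <cstddef>
#include <cstring>

int main() {
    std::array<int, 3> a1 = {10, 20, 30};
    std::array<int, 3> a2 = a1;                  // direct copy
    bool arrays_equal = (a1 == a2);              // element-wise comparison
    std::size_t n = a1.size();                   // size always available

    int b1[] = {10, 20, 30};
    int b2[3];
    std::size_t m = sizeof(b1) / sizeof(b1[0]);  // manual size computation
    std::memcpy(b2, b1, sizeof(b1));             // raw byte copy (only OK for
                                                 // trivially copyable elements)
    bool raw_equal = std::equal(b1, b1 + m, b2); // manual comparison

    return (arrays_equal && raw_equal && n == m) ? 0 : 1;
}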
See the C++ FAQ Why should I use container classes rather than simple arrays? to learn more, e.g. the perils of built-in arrays containing C++ objects with destructors (like std::string) or inheritance.
Don't Freak Out About Performance
Bounds-checking access a1.at(i) requires a few more instructions each time you fetch or store an array element. In some inner loop code that jams through a large array (e.g. an image processing routine that you call on every video frame), this cost might add up enough to matter. In that rare case it makes sense to use unchecked access a[i] and carefully ensure that the loop code takes care with bounds.
In most code you're either offloading the image processing code to the GPU, or the bounds-checking cost is a tiny fraction of the overall run time, or the overall run time is not at issue. Meanwhile the risk of array access bugs is high, starting with the hours it takes you to debug it.
The only benefit of a built-in array would be slightly more concise declaration syntax. But the functional benefits of std::array blow that out of the water.
I would also add that it really doesn't matter that much. If you have to support older compilers, then you don't have a choice, of course, since std::array is only available from C++11. Otherwise, you can use whichever you like, but unless you make only trivial use of the array, you should prefer std::array just to keep things in line with the other STL containers. For example, if you later decide to make the size dynamic and use std::vector instead, you will be happy that you used std::array, because all you will probably have to change is the array declaration itself; the rest will stay the same, especially if you use auto and other type-inference features of C++11.
std::array is a template class that encapsulates a statically-sized array, stored inside the object itself, which means that, if you instantiate the class on the stack, the array itself will be on the stack. Its size has to be known at compile time (it's passed as a template parameter), and it cannot grow or shrink.
Arrays are used to store a sequence of objects
Check the tutorial: http://www.cplusplus.com/doc/tutorial/arrays/
A std::vector does the same, but it's better than a built-in array (e.g. in general, a vector is not much less efficient than a built-in array when accessing elements via operator[]): http://www.cplusplus.com/reference/stl/vector/
The built-in arrays are a major source of errors – especially when they are used to build multidimensional arrays.
For novices, they are also a major source of confusion. Wherever possible, use vector, list, valarray, string, etc.
STL containers don't have the same problems as built in arrays
So, there is no reason in C++ to persist in using built-in arrays. Built-in arrays are in C++ mainly for backwards compatibility with C.
If the OP really wants an array, C++11 provides a wrapper for the built-in array, std::array. Using std::array is very similar to using a built-in array, has no effect on run-time performance, and offers many more features.
Unlike with the other containers in the Standard Library, swapping two array containers is a linear operation that involves swapping all the elements in the ranges individually, which generally is a considerably less efficient operation. On the other side, this allows the iterators to elements in both containers to keep their original container association.
Another unique feature of array containers is that they can be treated as tuple objects: the <array> header overloads the get function to access the elements of the array as if it were a tuple, and also provides specialized tuple_size and tuple_element types.
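For example, the tuple-style access might look like this (a small sketch):

#include <array>
#include <cstddef>
#include <tuple>

int tuple_style_access() {
    std::array<int, 3> a = {1, 2, 3};

    // <array> overloads get and specializes tuple_size / tuple_element.
    int first = std::get<0>(a);                                            // 1
    constexpr std::size_t n = std::tuple_size<std::array<int, 3> >::value; // 3
    using elem_t = std::tuple_element<0, std::array<int, 3> >::type;       // int

    return first + static_cast<int>(n) + elem_t(0);
}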
Anyway, built-in arrays are always passed "by reference" in the sense that, when you pass an array to a function as an argument, a pointer to its first element is what actually gets passed.
When you write void f(T array[]), the compiler turns it into void f(T* array).
When it comes to strings: C-style strings (i.e. null-terminated character sequences) are always passed by reference too, since they are char arrays.
STL strings are not passed by reference by default. They act like normal variables.
No special syntax is needed to make the parameter pass by reference; arrays are always passed this way automatically.
vector<vector<double>> G1=connectivity( current_combination,M,q2+1,P );
vector<vector<double>> G2=connectivity( circshift_1_dexia(current_combination),M,q1+1,P );
This could also be copying vectors since connectivity returns a vector by value. In some cases, the compiler will optimize this out. To avoid this for sure though, you can pass the vector as non-const reference to connectivity rather than returning them. The return value of maxweight is a 3-dimensional vector returned by value (which may make a copy of it).
Vectors are only efficient for insert or erase at the end, and it is best to call reserve() if you are going to push_back a lot of values. You may be able to re-write it using list if you don't really need random access; with list you lose the subscript operator, but you can still make linear passes through, and save iterators to elements, rather than subscripts.
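For example, a sketch of the reserve() pattern (the build function and its contents are made up for illustration):

#include <cstddef>
#include <vector>

std::vector<double> build(std::size_t n) {
    std::vector<double> v;
    v.reserve(n);                              // one up-front allocation instead of several
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<double>(i));   // never reallocates while size() <= n
    return v;
}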
With some compilers, it can be faster to use pre-increment, rather than post-increment. Prefer ++i to i++ unless you actually need to use the post-increment. They are not the same.
Anyway, vector is going to be horribly slow if you are not compiling with optimization on. With optimization, it is close to built-in arrays. Built-in arrays can be quite slow without optimization on also, but not as bad as vector.
std::array has the at member function which is safe. It also has begin, end and size, which you can use to make your code safer.
Raw arrays don't have that. (In particular, when raw arrays decay to pointers, e.g. when passed as arguments, you lose any size information; that information is kept in the std::array type, since it is a template with the size as an argument.)
And a good optimizing C++11 compiler will handle std::array (or references to them) as efficiently as raw arrays.
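A small sketch of why: since the size is part of the type, a function can take the array by reference and recover N at compile time, with no separate size parameter and no decay:

#include <array>
#include <cstddef>

// N is deduced from the argument's type; no separate size parameter
// is needed and nothing decays to a pointer.
template <typename T, std::size_t N>
T sum(const std::array<T, N>& a) {
    T total = T();
    for (std::size_t i = 0; i < N; ++i)
        total += a[i];
    return total;
}

int main() {
    std::array<int, 4> values = {1, 2, 3, 4};
    return sum(values) == 10 ? 0 : 1;
}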
Built-in arrays are not inherently unsafe, if used correctly. But it is easier to use built-in arrays incorrectly than it is to misuse the alternatives, such as std::array, and these alternatives usually offer better debugging features to help you detect when they have been used incorrectly.
Built-in arrays are subtle. There are lots of aspects that behave unexpectedly, even to experienced programmers.
std::array<T, N> is essentially a wrapper around a T[N], but many of the already-mentioned problem areas are sorted out for free, which is exactly what you want.
These are some I haven't read:
Size: For both, N must be a constant expression; it cannot be a variable. With built-in arrays, however, there are VLAs (Variable Length Arrays) that do allow a runtime size. Officially, only C99 supports them, but many compilers also allow them in earlier versions of C and in C++ as an extension. Therefore, you may have
int n; std::cin >> n;
int array[n]; // ill-formed but often accepted
that compiles fine. Had you used std::array, this could never work because N is required and checked to be an actual constant expression!
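The portable way to get a runtime-sized array in C++ is a dynamically allocated container, typically std::vector (a minimal sketch):

#include <iostream>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<int> array(n);   // well-formed for any runtime n >= 0,
                                 // unlike the VLA, and fully portable
    return static_cast<int>(array.size());
}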
Expectations for arrays: A common but flawed expectation is that the size of the array is carried along with the array even when the array isn't really an array anymore, an impression reinforced by the misleading C syntax in:
void foo(int array[])
{
// iterate over the elements
for (int i = 0; i < sizeof(array); ++i)
array[i] = 0;
}
but this is wrong because array has already decayed to a pointer, which carries no information about the size of the pointed-to area. Apart from being logically wrong, the loop triggers undefined behavior if the array has fewer than sizeof(int*) elements, typically 8.
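Two common ways to write foo correctly are to pass the element count explicitly or to take the array by reference so it cannot decay (a sketch):

#include <cstddef>

// Option 1: pass the element count alongside the (decayed) pointer.
void foo(int* array, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        array[i] = 0;
}

// Option 2: take a reference to the array itself, so the size N is deduced
// at compile time and no decay happens.
template <std::size_t N>
void foo(int (&array)[N]) {
    for (std::size_t i = 0; i < N; ++i)
        array[i] = 0;
}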
Crazy uses: Even further on that, there are some quirks arrays have:
Whether you have array[i] or i[array], there is no difference. This is not true for std::array, because calling an overloaded operator[] is effectively a function call and the order of the operands matters.
Zero-sized arrays: N must be greater than zero, but zero is still allowed as an extension and, as before, often not warned about unless more pedantry is requested. Further information here.
std::array has different semantics:
There is a special case for a zero-length array (N == 0). In that case, array.begin() == array.end(), which is some unique value. The effect of calling front() or back() on a zero-sized array is undefined.

C++ alignment of multidimensional array structure

In my code, I have to consider an array of arrays, where the inner arrays are of a fixed dimension. In order to make use of STL algorithms, it is useful to actually store the data as array of arrays, but I also need to pass that data to a C library, which takes a flattened C-style array.
It would be great to be able to convert (i.e. flatten) the multi-dimensional array cheaply and in a portable way. I will stick to a very simple case, the real problem is more general.
struct my_inner_array { int data[3]; };
std::vector<my_inner_array> x(15);
Is
&(x[0].data[0])
a pointer to a contiguous block of memory of size 45*sizeof(int) containing the same entries as x? Or do I have to worry about alignment? I am afraid that this will work for me (at least for certain data types and inner array sizes) but that it is not portable.
Is this code portable?
If not, is there a way to make it work?
If not, do you have any suggestions what I could do?
Does it change anything at all if my_inner_array is not a POD struct, but contains some methods (as long as the class does not contain any virtual methods)?
1. Theoretically, no: the compiler may decide to add padding to my_inner_array. In practice, I don't see a reason why a compiler would add padding to a struct that contains only an array, and in that case there is no alignment problem in creating an array of such structs. You can use a compile-time assert:
typedef int my_inner_array_array[3];
BOOST_STATIC_ASSERT(sizeof(my_inner_array) == sizeof(my_inner_array_array));
4. If there are no virtual methods, it shouldn't make any difference.
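With C++11 the same check can be written with static_assert; a sketch of the flattened call might look like this (c_library_function is a placeholder name, and the flat traversal is still subject to the caveat in point 1):

#include <cstddef>
#include <vector>

struct my_inner_array { int data[3]; };

// C++11 equivalent of the BOOST_STATIC_ASSERT above.
static_assert(sizeof(my_inner_array) == 3 * sizeof(int),
              "padding would break the flattened view");

// Placeholder for the C library function that expects a flat int array.
extern "C" void c_library_function(const int* flat, int count);

void pass_flattened(std::vector<my_inner_array>& x) {
    // 15 inner arrays of 3 ints -> 45 contiguous ints, provided the
    // static_assert above holds on this implementation.
    c_library_function(&x[0].data[0], static_cast<int>(x.size() * 3));
}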

std::vector versus std::array in C++

What are the differences between a std::vector and a std::array in C++? When should one be preferred over the other? What are the pros and cons of each? All my textbook does is list how they are the same.
std::vector is a template class that encapsulates a dynamic array (1), stored on the heap, that grows and shrinks automatically if elements are added or removed. It provides all the hooks (begin(), end(), iterators, etc.) that make it work fine with the rest of the STL. It also has several useful methods that let you perform operations that would be cumbersome on a normal array, e.g. inserting elements in the middle of a vector (it handles all the work of moving the following elements behind the scenes).
Since it stores the elements in memory allocated on the heap, it has some overhead with respect to static arrays.
std::array is a template class that encapsulates a statically-sized array, stored inside the object itself, which means that, if you instantiate the class on the stack, the array itself will be on the stack. Its size has to be known at compile time (it's passed as a template parameter), and it cannot grow or shrink.
It's more limited than std::vector, but it's often more efficient, especially for small sizes, because in practice it's mostly a lightweight wrapper around a C-style array. However, it's more secure, since the implicit conversion to pointer is disabled, and it provides much of the STL-related functionality of std::vector and of the other containers, so you can use it easily with STL algorithms & co. Anyhow, because of that very limitation of fixed size, it's much less flexible than std::vector.
For an introduction to std::array, have a look at this article; for a quick introduction to std::vector and to the operations that are possible on it, you may want to look at its documentation.
(1) Actually, I think that in the standard they are described in terms of maximum complexity of the different operations (e.g. random access in constant time, iteration over all the elements in linear time, adding and removing elements at the end in constant amortized time, etc.), but AFAIK there's no way to fulfill such requirements other than using a dynamic array. As stated by @Lucretiel, the standard actually requires that the elements are stored contiguously, so it is a dynamic array, stored wherever the associated allocator puts it.
To emphasize a point made by @MatteoItalia, the efficiency difference lies in where the data is stored. Heap memory (required with vector) requires a call to the system to allocate memory, and this can be expensive if you are counting cycles. Stack memory (possible for array) is virtually "zero-overhead" in terms of time, because the memory is allocated by just adjusting the stack pointer and it is done just once on entry to a function. The stack also avoids memory fragmentation. To be sure, std::array won't always be on the stack; it depends on where you allocate it, but it will still involve one less memory allocation from the heap compared to vector. If you have a
small "array" (under 100 elements say) - (a typical stack is about 8MB, so don't allocate more than a few KB on the stack or less if your code is recursive)
the size will be fixed
the lifetime is in the function scope (or is a member value with the same lifetime as the parent class)
you are counting cycles,
definitely use a std::array over a vector. If any of those requirements is not true, then use a std::vector.
If you are considering using multidimensional arrays, then there is one additional difference between std::array and std::vector. A multidimensional std::array will have the elements packed in memory in all dimensions, just as a C-style array is. A multidimensional std::vector will not be packed in all dimensions.
Given the following declarations:
int cConc[3][5];
std::array<std::array<int, 5>, 3> aConc;
int **ptrConc; // initialized to [3][5] via new and destructed via delete
std::vector<std::vector<int>> vConc; // initialized to [3][5]
A pointer to the first element in the C-style array (cConc) or the std::array (aConc) can be walked through the entire array by repeatedly adding 1 to it; the elements are tightly packed.
A pointer to the first element in the vector array (vConc) or the pointer array (ptrConc) can only be iterated through the first 5 (in this case) elements, and then there are 12 bytes (on my system) of overhead for the next vector.
This means that a std::vector<std::vector<int>> initialized as a [3][1000] array will be much smaller in memory than one initialized as a [1000][3] array, and both will be larger in memory than a std::array allocated either way.
This also means that you can't simply pass a multidimensional vector (or pointer) array to, say, OpenGL without accounting for the memory overhead, but you can naively pass a multidimensional std::array to OpenGL and have it work out.
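The packing claim can even be checked at compile time on a given implementation (a sketch; the standard does not strictly promise the std::array case, which is what the assertion verifies):

#include <array>

int cConc[3][5];
std::array<std::array<int, 5>, 3> aConc;

// On an implementation where std::array adds no padding (checked here),
// both objects are exactly 15 contiguous ints; a vector<vector<int>> can
// never satisfy such an assertion.
static_assert(sizeof(cConc) == 15 * sizeof(int), "C array is packed");
static_assert(sizeof(aConc) == 15 * sizeof(int), "std::array is packed here");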
Summarizing the above discussion in a table for quick reference:

|                     | C-Style Array                                      | std::array                                                   | std::vector                                              |
|---------------------|----------------------------------------------------|--------------------------------------------------------------|----------------------------------------------------------|
| Size                | Fixed/Static                                       | Fixed/Static                                                 | Dynamic                                                  |
| Memory efficiency   | More efficient                                     | More efficient                                               | Less efficient (may double its size on a new allocation) |
| Copying             | Iterate over elements or use std::copy()           | Direct copy: a2 = a1;                                        | Direct copy: v2 = v1;                                    |
| Passing to function | Passed by pointer (size not available in function) | Passed by value                                              | Passed by value (size available in that function)        |
| Getting the size    | sizeof(a1) / sizeof(a1[0])                         | a1.size()                                                    | v1.size()                                                |
| Use case            | Quick access, when insertions/deletions are not frequently needed | Same as classic array but safer and easier to pass and copy | When frequent additions or deletions might be needed |
Using the std::vector<T> class:
...is just as fast as using built-in arrays, assuming you are doing only the things built-in arrays allow you to do (read and write to existing elements).
...automatically resizes when new elements are inserted.
...allows you to insert new elements at the beginning or in the middle of the vector, automatically "shifting" the rest of the elements "up" (does that make sense?). It allows you to remove elements from anywhere in the std::vector, too, automatically shifting the rest of the elements down.
...allows you to perform a range-checked read with the at() method (you can always use the indexers [] if you don't want this check to be performed).
There are three main caveats to using std::vector<T>:
You don't have reliable access to the underlying pointer, which may be an issue if you are dealing with third-party functions that demand the address of an array (but see the sketch after this list).
The std::vector<bool> class is silly. It's implemented as a condensed bitfield, not as an array. Avoid it if you want an array of bools!
During usage, std::vector<T>s are going to be a bit larger than a C++ array with the same number of elements. This is because they need to keep track of a small amount of other information, such as their current size, and because whenever std::vector<T>s resize, they reserve more space than they need. This is to prevent them from having to resize every time a new element is inserted. This behavior can be changed by providing a custom allocator, but I never felt the need to do that!
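Regarding the first caveat: the underlying buffer of a (non-bool) vector can in fact be handed to such a function, since its storage is guaranteed to be contiguous (my_c_function is a placeholder name):

#include <cstddef>
#include <vector>

// Placeholder for a third-party function that wants a raw array.
void my_c_function(double* data, std::size_t count);

void call_it(std::vector<double>& v) {
    if (!v.empty())
        my_c_function(&v[0], v.size());   // or v.data() in C++11 and later;
                                          // vector storage is contiguous
}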
Edit: After reading Zud's reply to the question, I felt I should add this:
The std::array<T, N> class is not the same as a built-in C++ array. std::array<T, N> is a very thin wrapper around built-in arrays, with the primary purpose of hiding the pointer from the user of the class (in C++, arrays are implicitly converted to pointers, often to dismaying effect). The std::array<T, N> class also stores its size (length), which can be very useful.
A vector is a container class, while an array is allocated memory.