Performance comparison of STL sort on vector of strings vs. vector of string pointers - c++

I tried to compare the performance of STL sort on vector of strings and vector of pointers to strings.
I expected the pointers version to outperform, but the actual results for 5 million randomly generated strings are
vector of strings : 12.06 seconds
vector of pointers to strings : 16.75 seconds
What explains this behavior? I expected swapping pointers to strings should be faster than swapping string objects.
The 5 million strings were generated by converting random integers.
Compiled with (gcc 4.9.3): g++ -std=c++11 -Wall
CPU: Xeon X5650
// sort vector of strings
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main(int argc, char *argv[])
{
    const int numElements = 5000000;
    srand(time(NULL));
    vector<string> vec(numElements);
    for (int i = 0; i < numElements; i++)
        vec[i] = std::to_string(rand() % numElements);
    unsigned before = clock();
    sort(vec.begin(), vec.end());
    cout << "Time to sort: " << clock() - before << endl;
    for (int i = 0; i < numElements; i++)
        cout << vec[i] << endl;
    return 0;
}
// sort vector of pointers to strings
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

bool comparePtrToString(string *s1, string *s2)
{
    return *s1 < *s2;
}

int main(int argc, char *argv[])
{
    const int numElements = 5000000;
    srand(time(NULL));
    vector<string *> vec(numElements);
    for (int i = 0; i < numElements; i++)
        vec[i] = new string(to_string(rand() % numElements));
    unsigned before = clock();
    sort(vec.begin(), vec.end(), comparePtrToString);
    cout << "Time to sort: " << clock() - before << endl;
    for (int i = 0; i < numElements; i++)
        cout << *vec[i] << endl;
    return 0;
}

This is because all the operations that sort performs on strings are moves and swaps. Both move and swap for a std::string are constant-time operations, meaning they only involve exchanging a few pointers (plus the size and capacity fields).
Therefore, moving the data around has the same cost for both sorts. In the pointer version, however, you pay an extra cost to dereference the pointers on every comparison, which makes it noticeably slower.
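As a rough illustration of why the swaps are cheap (a minimal sketch, assuming a typical implementation where strings longer than the small-string buffer are heap-allocated), swapping two long strings just exchanges ownership of their character buffers; no character data is copied:

#include <iostream>
#include <string>
#include <utility>

int main() {
    // Long enough to defeat the small-string optimization on common implementations.
    std::string a(64, 'a');
    std::string b(64, 'b');

    const void *pa = a.data(), *pb = b.data();
    std::swap(a, b);  // constant time: the internal buffers change owners

    // On typical implementations the buffer addresses are simply exchanged.
    std::cout << std::boolalpha
              << "a now uses b's old buffer: " << (a.data() == pb) << '\n'
              << "b now uses a's old buffer: " << (b.data() == pa) << '\n';
}

For short strings kept in the in-object buffer, the bytes themselves are swapped, but that is still a small, constant amount of work.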

In the first case only the strings' internal pointers to their character data are swapped; the character data itself is not copied.
You should not expect any benefit from the implementation with pointers. In fact it is slower, since the pointers additionally have to be dereferenced to perform each comparison.

What explains this behavior? I expected swapping pointers to strings
should be faster than swapping string objects.
There's various things going on here which could impact performance.
Swapping is relatively cheap both ways. Swapping strings tends to always be a shallow operation (just swapping PODs like pointers and integrals) for large strings and possibly deep for small strings (but still quite cheap -- implementation-dependent). So swapping strings tends to be pretty cheap overall, and typically not much more expensive than simply swapping pointers to them*.
[sizeof(string) is certainly bigger than sizeof(string*), but it's not an astronomical difference, since the operation still occurs in constant time, and it's quite a bit cheaper in this context when the string fields already have to be fetched into a faster form of memory for the comparator, giving us temporal locality with respect to those fields.]
String contents must be accessed anyway both ways. Even the pointer version of your comparator has to examine the string contents (including the fields designating size and capacity). As a result, we end up paying the memory cost of fetching the data for the string contents regardless. Naturally if you just sorted the strings by pointer address (ex: without using a comparator) instead of a lexicographical comparison of the string contents, the performance edge should shift towards the pointer version since that would reduce the amount of data accessed considerably while improving spatial locality (more pointers can fit in a cache line than strings, e.g.).
The pointer version is scattering (or at least increasing the stride of) the string fields in memory. For the pointer version, you're allocating each string on the free store (in addition to the string contents which may or may not be allocated on the free store). That can disperse the memory and reduce locality of reference, so you're potentially incurring a greater cost in the comparator that way with increased cache misses. Even if a sequential allocation of this sort results in a very contiguous set of pages being allocated (ideal scenario), the stride to get from one string's fields to the next would tend to get at least a little larger because of the allocation metadata/alignment overhead (not all allocators require metadata to be stored directly in a chunk, but typically they will at least add some small overhead to the chunk size).
It might be simpler to attribute this to the cost of dereferencing the pointer but it's not so much the cost of the mov/load instruction doing the memory addressing that's expensive (in this relative context) as loading from slower/bigger forms of memory that aren't already cached/paged to faster, smaller memory. Allocating each string individually on the free store will typically increase this cost whether it's due to a loss of contiguity or a larger constant stride between each string entry (in an ideal case).
Even at a basic level without trying too hard to diagnose what's happening at the memory level, this increases the total size of the data that the machine has to look at (string contents/fields + pointer address) in addition to reduced locality/larger or variable strides (typically if you increase the amount of data accessed, it has to at least have improved locality to have a good chance of being beneficial). You might start to see more comparable times if you just sorted pointers to strings that were allocated contiguously (not in terms of the string contents which we have no control over, but just contiguous in terms of the adjacent string objects themselves -- effectively pointers to strings stored in an array). Then you'd get back the spatial locality at least for the string fields in addition to packing the data associated more tightly within a contiguous space.
Swapping smaller data types like indices or pointers can sometimes offer a benefit but they typically need to avoid examining the original contents of the data they refer to or provide a significantly cheaper swap/move behavior (in this case string is already cheap and becomes cheaper in this context considering temporal locality) or both.
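To illustrate the point about keeping the string objects themselves contiguous, here is a minimal sketch (the function name and the idea of sorting indices are just one possible way to do it) that sorts indices into a single vector<string> rather than pointers to individually allocated strings, so the string objects stay packed in one array:

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <string>
#include <vector>

// Sort a permutation of indices by the strings they refer to.
// The string objects themselves stay contiguous inside 'strings'.
std::vector<std::size_t> sortedOrder(const std::vector<std::string>& strings) {
    std::vector<std::size_t> order(strings.size());
    std::iota(order.begin(), order.end(), 0);   // 0, 1, 2, ...
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return strings[a] < strings[b]; });
    return order;
}

The comparator still has to read the string contents, so this won't beat sorting the strings directly, but it avoids the scattered per-string allocations of the pointer version.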

Well, a std::string is typically about 3-4 times as big as a std::string*.
So just straight-up swapping two of the former shuffles that much more memory around.
But that is dwarfed by the following effects:
Locality of reference. You need to follow one more pointer to a random position to read the string.
More memory-usage: A pointer plus bookkeeping per allocation of each std::string.
Both put extra demand on caching, and the former cannot even be prefetched.

Swapping two strings changes just the containers' internal contents; for a string that is essentially the pointer to its character buffer (plus size and capacity), not the whole character data.
In the vector-of-pointers case you perform one additional step on every comparison: dereferencing the pointers.

Related

What does std::vector look like in memory?

I read that std::vector should be contiguous. My understanding is, that its elements should be stored together, not spread out across the memory. I have simply accepted the fact and used this knowledge when for example using its data() method to get the underlying contiguous piece of memory.
However, I came across a situation, where the vector's memory behaves in a strange way:
std::vector<int> numbers;
std::vector<int*> ptr_numbers;
for (int i = 0; i < 8; i++) {
numbers.push_back(i);
ptr_numbers.push_back(&numbers.back());
}
I expected this to give me a vector of some numbers and a vector of pointers to these numbers. However, when listing the contents of the ptr_numbers pointers, there are different and seemingly random numbers, as though I am accessing wrong parts of memory.
I have tried to check the contents every step:
for (int i = 0; i < 8; i++) {
numbers.push_back(i);
ptr_numbers.push_back(&numbers.back());
for (auto ptr_number : ptr_numbers)
std::cout << *ptr_number << std::endl;
std::cout << std::endl;
}
The result looks roughly like this:
1
some random number
2
some random number
some random number
3
So it seems as though when I push_back() to the numbers vector, its older elements change their location.
So what does it exactly mean, that std::vector is a contiguous container and why do its elements move? Does it maybe store them together, but moves them all together, when more space is needed?
Edit: Is std::vector contiguous only since C++17? (Just to keep the comments on my previous claim relevant to future readers.)
It roughly looks like this (excuse my MS Paint masterpiece):
The std::vector instance you have on the stack is a small object containing a pointer to a heap-allocated buffer, plus some extra variables to keep track of the size and capacity of the vector.
So it seems as though when I push_back() to the numbers vector, its older elements change their location.
The heap-allocated buffer has a fixed capacity. When you reach the end of the buffer, a new buffer will be allocated somewhere else on the heap and all the previous elements will be moved into the new one. Their addresses will therefore change.
Does it maybe store them together, but moves them all together, when more space is needed?
Roughly, yes. Iterator and address stability of elements is guaranteed with std::vector only if no reallocation takes place.
I am aware, that std::vector is a contiguous container only since C++17
The memory layout of std::vector hasn't changed since its first appearance in the Standard. ContiguousContainer is just a "concept" that was added to differentiate contiguous containers from others at compile-time.
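Applied to the code in the question, the usual fix is to reserve the final size up front so no reallocation happens during the loop and the stored addresses stay valid (a minimal sketch):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> numbers;
    std::vector<int*> ptr_numbers;

    numbers.reserve(8);  // guarantees no reallocation for the next 8 push_backs
    for (int i = 0; i < 8; i++) {
        numbers.push_back(i);
        ptr_numbers.push_back(&numbers.back());
    }

    for (auto ptr_number : ptr_numbers)
        std::cout << *ptr_number << std::endl;  // prints 0..7; the pointers are still valid
}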
The Answer
It's a single contiguous storage (a 1d array).
Each time it runs out of capacity it gets reallocated and stored objects are moved to the new larger place — this is why you observe addresses of the stored objects changing.
It has always been this way, not since C++17.
TL;DR
The storage grows geometrically to ensure the amortized O(1) requirement of push_back(). The growth factor is 2 (Cap(n+1) = Cap(n) + Cap(n)) in most implementations of the C++ Standard Library (GCC, Clang, STLPort) and 1.5 (Cap(n+1) = Cap(n) + Cap(n) / 2) in the MSVC variant.
If you pre-allocate it with vector::reserve(N) and a sufficiently large N, then the addresses of the stored objects won't change when you add new ones.
In most practical applications it is usually worth pre-allocating at least 32 elements to skip the first few reallocations, which otherwise follow one another in quick succession (0→1→2→4→8→16).
It is also sometimes practical to slow it down, switch to an arithmetic growth policy (Cap(n+1) = Cap(n) + Const), or stop growing entirely after some reasonably large size to ensure the application does not waste or run out of memory.
Lastly, in some practical applications, like column-based object storages, it may be worth giving up the idea of contiguous storage completely in favor of a segmented one (same as what std::deque does but with much larger chunks). This way the data may be stored reasonably well localized for both per-column and per-row queries (though this may need some help from the memory allocator as well).
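If you want to observe the growth policy and the address changes on your own implementation, a quick sketch like the following prints the capacity and the buffer address every time a reallocation happens (the exact capacities are implementation-defined):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_capacity = 0;
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) {   // a reallocation just happened
            last_capacity = v.capacity();
            std::cout << "size " << v.size() << " -> capacity " << v.capacity()
                      << " at " << static_cast<const void*>(v.data()) << '\n';
        }
    }
}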
std::vector being a contiguous container means exactly what you think it means.
However, many operations on a vector can re-locate that entire piece of memory.
One common case is adding an element: if the vector has to grow, it may re-allocate and copy (or move) all elements to another contiguous piece of memory.
So what does it exactly mean, that std::vector is a contiguous container and why do its elements move? Does it maybe store them together, but moves them all together, when more space is needed?
That's exactly how it works and why appending elements does indeed invalidate all iterators as well as memory locations when a reallocation takes place¹. This is not only valid since C++17, it has been the case ever since.
There are a couple of benefits from this approach:
It is very cache-friendly and hence efficient.
The data() method can be used to pass the underlying raw memory to APIs that work with raw pointers.
The cost of allocating new memory upon push_back, reserve or resize boils down to amortized constant time, because the geometric growth amortizes over time (whenever the capacity is exhausted it is doubled in libc++ and libstdc++, and grown by a factor of approximately 1.5 in MSVC).
It allows for the least restricted (i.e., most capable) iterator category, random access iterators, because classical pointer arithmetic works out well when the data is contiguously stored.
Move construction of a vector instance from another one is very cheap.
These implications can be considered the downside of such a memory layout:
All iterators and pointers to elements are invalidated upon modifications of the vector that imply a reallocation. This can lead to subtle bugs, e.g. when erasing elements while iterating over the vector.
Operations like push_front (as std::list or std::deque provide) aren't provided (insert(vec.begin(), element) works, but is possibly expensive¹), as well as efficient merging/splicing of multiple vector instances.
¹ Thanks to #FrançoisAndrieux for pointing that out.
In terms of the actual structure, an std::vector looks something like this in memory:
struct vector { // Simple C struct as example (T is the type supplied by the template)
    T *begin;        // vector::begin() probably returns this value
    T *end;          // vector::end() probably returns this value
    T *end_capacity; // First non-valid address
    // Allocator state might be stored here (most allocators are stateless)
};
Relevant code snippet from the libc++ implementation as used by LLVM
Printing the raw memory contents of an std::vector:
(Don't do this if you don't know what you're doing!)
#include <iostream>
#include <vector>

struct vector {
    int *begin;
    int *end;
    int *end_capacity;
};

int main() {
    union vecunion {
        std::vector<int> stdvec;
        vector myvec;
        ~vecunion() { /* do nothing */ }
    } vec = { std::vector<int>() };

    union veciterator {
        std::vector<int>::iterator stditer;
        int *myiter;
        ~veciterator() { /* do nothing */ }
    };

    vec.stdvec.push_back(1); // Add something so we don't have an empty vector

    std::cout
        << "vec.begin          = " << vec.myvec.begin << "\n"
        << "vec.end            = " << vec.myvec.end << "\n"
        << "vec.end_capacity   = " << vec.myvec.end_capacity << "\n"
        << "vec's size         = " << vec.myvec.end - vec.myvec.begin << "\n"
        << "vec's capacity     = " << vec.myvec.end_capacity - vec.myvec.begin << "\n"
        << "vector::begin()    = " << (veciterator { vec.stdvec.begin() }).myiter << "\n"
        << "vector::end()      = " << (veciterator { vec.stdvec.end() }).myiter << "\n"
        << "vector::size()     = " << vec.stdvec.size() << "\n"
        << "vector::capacity() = " << vec.stdvec.capacity() << "\n"
        ;
}

multi dimensional array allocation in chunks

I was poking around with multidimensional arrays today, and I came across a blog which distinguishes rectangular arrays from jagged arrays. Usually I would do this for both jagged and rectangular:
Object** obj = new Object*[5];
for (int i = 0; i < 5; ++i)
{
    obj[i] = new Object[10];
}
but that blog said that if I know the 2D array is rectangular, I'm better off allocating the entire thing as a single 1D array and using an improvised way of accessing the elements, something like this:
Object* obj = new Object[rows * cols];
obj[x * cols + y];
// which would have been obj[x][y] in the previous implementation
I have a hunch that allocating one contiguous memory chunk is good, but I don't really understand how big a difference it makes. Can somebody explain?
First and less important, when you allocate and free your object you only need to do a single allocation/deallocation.
More important: when you use the array you basically get to trade a multiplication against a memory access. On modern computers, memory access is much much much slower than arithmetic.
That's a bit of a lie, because much of the slowness of memory accesses gets hidden by caches -- regions of memory that are being accessed frequently get stored in fast memory inside, or very near to, the CPU and can be accessed faster. But these caches are of limited size, so (1) if your array isn't being used all the time then the row pointers may not be in the cache and (2) if it is being used all the time then they may be taking up space that could otherwise be used by something else.
Exactly how it works out will depend on the details of your code, though. In many cases it will make no discernible difference one way or the other to the speed of your program. You could try it both ways and benchmark.
[EDITED to add, after being reminded of it by Peter Schneider's comment:] Also, if you allocate each row separately they may end up all being in different parts of memory, which may make your caches a bit less effective -- data gets pulled into cache in chunks, and if you often go from the end of one row to the start of the next then you'll benefit from that. But this is a subtle one; in some cases having your rows equally spaced in memory may actually make the cache perform worse, and if you allocate several rows in succession they may well end up (almost) next to one another in memory anyway, and in any case it probably doesn't matter much unless your rows are quite short.
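As a concrete illustration of the single-allocation approach, here is a minimal sketch of a row-major wrapper (the class and its names are just one possible design, not taken from the question) that keeps the whole grid in one contiguous buffer and does the index arithmetic in one place:

#include <cstddef>
#include <vector>

// Minimal row-major 2D grid backed by one contiguous allocation.
template <typename T>
class Grid2D {
public:
    Grid2D(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    T&       at(std::size_t r, std::size_t c)       { return data_[r * cols_ + c]; }
    const T& at(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<T> data_;  // single contiguous chunk, freed automatically
};

A single allocation also means a single deallocation, and the row pointers of the jagged version simply don't exist, so there is nothing extra competing for cache space.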
Allocating a 2D array as one big chunk permits the compiler to generate more efficient code than doing it in multiple chunks. At the least, the one-chunk approach needs only a single pointer dereference per element access. BTW, declaring the 2D array like this:
Object obj[rows][cols];
obj[x][y];
is equivalent to:
Object* obj = new Object[rows * cols];
obj[x * cols + y];
in terms of speed. But the first one is not dynamic (you need to specify the values of "rows" and "cols" at compile time).
By having one large contiguous chunk of memory, you may get improved performance because there is more chance that memory accesses are already in the cache. This idea is called cache locality. We say the large array has better cache locality. Modern processors have several levels of cache. The lowest level is generally the smallest and the fastest.
It still pays to access the array in meaningful ways. For example, if data is stored in row-major order and you access it in column-major order, you are scattering your memory accesses. At certain sizes, this access pattern will negate the advantages of caching.
Having good cache performance is far preferable to any concerns you may have about multiplying values for indexing.
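For example (a sketch of the access-pattern point, with illustrative function names), both functions below sum the same flat row-major array, but the first walks memory sequentially while the second strides by a whole row between accesses, which is much less cache-friendly for large arrays:

#include <cstddef>
#include <vector>

// Row-major traversal: consecutive memory addresses, cache-friendly.
long long sumRowMajor(const std::vector<int>& a, std::size_t rows, std::size_t cols) {
    long long sum = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            sum += a[r * cols + c];
    return sum;
}

// Column-major traversal of the same data: a stride of 'cols' elements between accesses.
long long sumColMajor(const std::vector<int>& a, std::size_t rows, std::size_t cols) {
    long long sum = 0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            sum += a[r * cols + c];
    return sum;
}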
If one of the dimensions of your array is a compile time constant you can allocate a "truly 2-dimensional array" in one chunk dynamically as well and then index it the usual way. Like all dynamic allocations of arrays, new returns a pointer to the element type. In this case of a 2-dimensional array the elements are in turn arrays -- 1-dimensional arrays. The syntax of the resulting element pointer is a bit cumbersome, mostly because the dereferencing operator*() has a lower precedence than the indexing operator[](). One possible allocation statement could be int (*arr7x11)[11] = new int[7][11];.
Below is a complete example. As you see, the innermost index in the allocation can be a run-time value; it determines the number of elements in the allocated array. The other indices determine the element type (and hence element size as well as overall size) of the dynamically allocated array, which of course must be known to perform the allocation. As discussed above, the elements are themselves arrays, here 1-dimensional arrays of 11 ints.
#include <cstdio>
using namespace std;

int main(int argc, char **argv)
{
    constexpr int cols = 11;
    int rows = 7;

    // overwrite with cmd line arg if present.
    // if scanf fails, default is retained.
    if (argc >= 2) { sscanf(argv[1], "%d", &rows); }

    // The actual allocation of "rows" elements of
    // type "array of 'cols' ints". Note the brackets
    // around *arr7x11 in order to force operator
    // evaluation order. arr7x11 is a pointer to array,
    // not an array of pointers.
    int (*arr7x11)[cols] = new int[rows][cols];

    for (int row = 0; row < rows; row++)
    {
        for (int col = 0; col < cols; col++)
        {
            arr7x11[row][col] = (row + 1) * 1000 + col + 1;
        }
    }

    for (int row = 0; row < rows; row++)
    {
        for (int col = 0; col < cols; col++)
        {
            printf("%6d", arr7x11[row][col]);
        }
        putchar('\n');
    }
    return 0;
}
A sample session:
g++ -std=c++14 -Wall -o 2darrdecl 2darrdecl.cpp && ./2darrdecl 3
1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011
2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011
3001 3002 3003 3004 3005 3006 3007 3008 3009 3010 3011

hashtable needs lots of memory

I've declared and defined the following HashTable class. Note that I needed a hashtable of hashtables so my HashEntry struct contains a HashTable pointer. The public part is not a big deal it has the traditional hash table functions so I removed them for simplicity.
enum Status{ACTIVE, DELETED, EMPTY};
enum Type{DNS_ENTRY, URL_ENTRY};

class HashTable{
private:
    struct HashEntry{
        std::string key;
        Status current_status;
        std::string ip;
        int access_count;
        Type entry_type;
        HashTable *table;
        HashEntry(
            const std::string &k = std::string(),
            Status s = EMPTY,
            const std::string &u = std::string(),
            const int &a = int(),
            Type e = DNS_ENTRY,
            HashTable *t = NULL
        ): key(k), current_status(s), ip(u), access_count(a), entry_type(e), table(t){}
    };
    std::vector<HashEntry> array;
    int currentSize;
public:
    HashTable(int size = 1181, int csz = 0): array(size), currentSize(csz){}
};
I am using quadratic probing and I double the size of the vector in my rehash function when I hit array.size()/2. The following list is used when a larger table size is needed.
int a[16] = {49663, 99907, 181031, 360461,...}
My problem is that this class consumes so much memory. I've just profiled it with massif and found out that it needs 33 MB (33 million bytes!) of memory for 125,000 insertions. To be clear, here is how memory usage grows:
1 insertion -> 47,352 bytes
8 insertions -> 48,376 bytes
512 insertions -> 76.27 KB
1,000 insertions -> 2 MB (array size increased to 49663 here)
27,000 insertions -> 8 MB (array size increased to 99907 here)
64,000 insertions -> 16 MB (array size increased to 181031 here)
125,000 insertions -> 33 MB (array size increased to 360461 here)
These may be unnecessary but I just wanted to show you how memory usage changes with the input. As you can see, when rehashing is done, memory usage doubles. For example, our initial array size was 1181. And we have just seen that 125000 elements -> 33MB.
To debug the problem, I changed the initial size to 360461. Now 127,000 insertions do not need rehashing, and I see that 20 MB of memory is used with this initial value. That is still huge, but I think it suggests there is a problem with rehashing. The following is my rehash function.
void HashTable::rehash(){
    std::vector<HashEntry> oldArray = array;
    array.resize(nextprime(array.size()));
    for(int j = 0; j < array.size(); j++){
        array[j].current_status = EMPTY;
    }
    for(int i = 0; i < oldArray.size(); i++){
        if(oldArray[i].current_status == ACTIVE){
            insert(oldArray[i].key);
            int pos = findPos(oldArray[i].key);
            array[pos] = oldArray[i];
        }
    }
}

int nextprime(int arraysize){
    int a[16] = {49663, 99907, 181031, 360461, 720703, 1400863, 2800519, 5600533, 11200031, 22000787, 44000027};
    int i = 0;
    while(arraysize >= a[i]){i++;}
    return a[i];
}
This is the insert function used in rehashing and everywhere else.
bool HashTable::insert(const std::string &k){
    int currentPos = findPos(k);
    if(isActive(currentPos)){
        return false;
    }
    array[currentPos] = HashEntry(k, ACTIVE);
    if(++currentSize > array.size() / 2){
        rehash();
    }
    return true;
}
What am I doing wrong here? Even if it's caused by rehashing, when no rehashing is done it is still 20MB and I believe 20MB is way too much for 100k items. This hashtable is supposed to contain like 8 million elements.
The fact that 360,461 HashEntry's take 20 MB is hardly surprising. Did you try looking at sizeof(HashEntry)?
Each HashEntry includes two std::strings, a pointer, and three int's. As the old joke has it, it's not easy to answer the question "How long is a string?", in this case because there are a large variety of string implementations and optimizations, so you might find that sizeof(std::string) is anywhere between 4 and 32 bytes. (It would only be 4 bytes on a 32-bit architecture.) In practice, a string requires three pointers and the string itself unless it happens to be empty. If sizeof(std::string) is the same as sizeof(void*), then you've probably got a not-too-recent GNU standard library, in which the std::string is an opaque pointer to a block containing two pointers, a reference count, and the string itself. If sizeof(std::string) is 32 bytes, then you might have a recent GNU standard library implementation in which there is a bit of extra space in the string structure for the short-string optimization. See the answer to Why does libc++'s implementation of std::string take up 3x memory as libstdc++? for some measurements. Let's just say 32 bytes per string, and ignore the details; it won't be off by much.
So two strings (32 bytes each) plus a pointer (8 bytes) plus three ints (another 12 bytes) and four bytes of padding because one of the ints is between two 8-byte aligned objects, and that's a total of 88 bytes per HashEntry. And if you have 360,461 hash entries, that would be 31,720,568 bytes, about 30 MB. The fact that you're "only" using 20MB is probably because you're using the old GNU standard library, which optimizes empty strings to a single pointer, and the majority of your strings are empty strings because half the slots have never been used.
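You can check the estimate on your own platform with a small sketch like this (the struct mirrors the member order of the HashEntry in the question, with a plain pointer standing in for HashTable*; the 88-byte figure assumes a 64-bit build with 32-byte std::string):

#include <iostream>
#include <string>

enum Status { ACTIVE, DELETED, EMPTY };
enum Type { DNS_ENTRY, URL_ENTRY };

// Same member order as the HashEntry in the question.
struct HashEntry {
    std::string key;
    Status current_status;
    std::string ip;
    int access_count;
    Type entry_type;
    void *table;  // stand-in for HashTable*
};

int main() {
    std::cout << "sizeof(std::string) = " << sizeof(std::string) << '\n'
              << "sizeof(HashEntry)   = " << sizeof(HashEntry) << '\n'
              << "360461 slots        = " << 360461 * sizeof(HashEntry) << " bytes\n";
}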
Now, let's take a look at rehash. Reduced to its essentials:
void rehash() {
    std::vector<HashEntry> tmp = array;  /* Copy the entire array */
    array.resize(new_size());            /* Internally does another copy */
    for (auto const& entry : tmp)
        if (entry.used()) array.insert(entry); /* Yet another copy */
}
At peak, we had two copies of the smaller array as well as the new big array. Even if the new array is only 20 MB, it's not surprising that peak memory usage was almost twice that. (Indeed, this is again surprisingly small, not surprisingly big. Possibly it was not actually necessary to change the address of the new vector because it was at the end of the current allocated memory space, which could just be extended.)
Note that we did two copies of all that data, and array.resize() potentially did another one. Let's see if we can do better:
void rehash() {
    std::vector<HashEntry> tmp(new_size()); /* Make an array of default objects */
    for (auto const& entry : array)
        if (entry.used()) tmp.insert(entry); /* Copy into the new array */
    std::swap(tmp, array);                  /* Not a copy, just swap three pointers */
}
This way, we only do one copy. Instead of a (possible) internal copy by resize, we do a bulk construction of the new elements, which should be similar. (It's just zeroing out the memory.)
Also, in the new version we only copy the actual strings once each, instead of twice each, which is the fiddliest part of the copy and thus probably quite a large saving.
Proper string management could reduce that overhead further. rehash doesn't actually need to copy the strings, since they are not changed. So we could keep the strings elsewhere, say in a vector of strings, and just store the index into that vector in the HashEntry. Since you are not expecting to hold billions of strings, only millions, the index could be a four-byte int. By also shuffling the HashEntry fields around and reducing the enums to a byte instead of four bytes (in C++11, you can specify the underlying integer type of an enum), the HashEntry could be reduced to 24 bytes, and there wouldn't be a need to leave space for as many string descriptors.
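A sketch of what such a compacted entry could look like (the field names are illustrative and the exact size depends on the ABI; on a typical 64-bit platform this lands around 20-24 bytes instead of 88):

#include <cstdint>

enum class Status : std::uint8_t { ACTIVE, DELETED, EMPTY };
enum class Type   : std::uint8_t { DNS_ENTRY, URL_ENTRY };

// Strings live once in a shared pool (e.g. a std::vector<std::string>);
// entries hold 4-byte indices into that pool instead of owning the strings.
struct CompactHashEntry {
    std::uint32_t key_index;    // index of the key in the string pool
    std::uint32_t ip_index;     // index of the ip in the string pool
    std::int32_t  access_count;
    std::uint32_t subtable;     // index of a nested table, or a sentinel for "none"
    Status        current_status;
    Type          entry_type;
};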
Since you are using open addressing, half your hash slots have to be empty. Since HashEntry is quite large, storing a full HashEntry in each empty slot is terribly wasteful.
You should store your HashEntry structs somewhere else and put HashEntry* in your hash table, or switch to chaining with a much denser load factor. Either one will reduce this waste.
Also, if you're going to move HashEntry objects around, swap instead of copying, or use move semantics so you don't have to copy so many strings. Be sure to clear out the strings in any entries you're no longer using.
Also, even though you say you need HashTables of HashTables, you don't really explain why. It's usually more efficient to use one hash table with efficiently represented compound keys if small hash tables are not memory-efficient.
I have changed my structure a little bit just as you all suggested, but there is this one thing that nobody has noticed.
When rehashing/resizing is done, my rehash function calls insert. In that insert function I increment currentSize, which holds how many elements the hashtable has. So each time a resize happened, currentSize doubled even though it should have stayed the same. I removed that line and wrote proper code for rehashing, and now I think I'm okay.
I am using two different structs now, and the program consumes 1.6 GB of memory for 8 million elements, which is what I expected given the multi-byte strings and integers it stores. That number was more like 7-8 GB before.

Why Maintaining Sorted Array is faster than Vector in C++

I am creating an array and a vector of size 100, generating random values, and trying to keep both the array and the vector sorted.
Here is my code for the same
vector<int> myVector;
int arr[SIZE];
clock_t start, finish;
int random;

for(int i = 0; i < SIZE; i++)
{
    myVector.push_back(0);
    arr[i] = 0;
}

// testing for Array
start = clock();
for(int i = 0; i < MAX; ++i)
{
    random = getRandom(); // returns rand() % 100
    for(int j = 0; j < SIZE; ++j){
        if(random > arr[j])
        {
            for(int k = SIZE - 1; k > j; --k)
            {
                arr[k] = arr[k-1];
            }
            arr[j] = random;
            break;
        }
    }
}
finish = clock();
cout << "Array Time " << finish - start << endl;

// Vector Processing
start = clock();
for(int i = 0; i < MAX; ++i)
{
    random = getRandom(); // returns rand() % 100
    for(int j = 0; j < SIZE; ++j){
        if(random > myVector[j])
        {
            for(int k = SIZE - 1; k > j; --k)
            {
                myVector[k] = myVector[k-1];
            }
            myVector[j] = random;
            break;
        }
    }
}
finish = clock();
cout << "Vector Time " << finish - start << endl;
The output is as follows:
Array Time : 5
Vector Time: 83
I am not able to understand why vector is so slow compared to array in this case?
Doesn't this contradict the thumb-rule of preferring Vector over Array.
Please Help !
First of all: many rules of thumb in programming are not about gaining a few milliseconds of performance, but about managing complexity and therefore avoiding bugs. In this case, it's about the range checks which most vector implementations perform in debug mode and which arrays don't. It's also about memory management for dynamic arrays: a vector manages its memory itself, while you have to do it manually with arrays, at the risk of introducing memory leaks (ever forgotten a delete[] or used delete instead? I bet you have!). And it's about ease of use, e.g. resizing the vector or inserting an element in the middle, which is tedious work with manually managed arrays.
In other words, performance measurements can never contradict a rule of thumb, because a rule of thumb never targets performance. Performance measurements can only be one of the few possible reasons not to obey a coding guideline.
At first sight I'd guess you have not enabled optimizations. The main source of performance loss for the vector would then be the index checks that many vector implementations enable for debug builds. Those won't kick in in optimized builds, so that should be your first concern. Rule of thumb: performance measurements without optimizations enabled are meaningless.
If enabling optimizations still shows better performance for the array, there's another difference:
The array is stored on the stack, so the compiler can use the addresses directly and calculate address offsets at compile time, while the vector elements are stored on the heap and the compiler has to dereference the pointer stored in the vector. I'd expect the optimizer to dereference that pointer once and calculate the address offsets from that point on. Still, there might be a small performance penalty compared to compile-time-calculated address offsets, especially if the optimizer can unroll the loop a bit. This still does not contradict the rule of thumb, because you are comparing apples with pears here. The rule of thumb says,
Prefer std::vector over dynamic arrays, and prefer std::array over fixed arrays.
So either use a dynamically allocated array (including some kind of delete[], please) or compare the fixed-size array to a std::array. At the time, C++14 was expected to bring new candidates into the game, namely std::dynarray and runtime-sized arrays comparable to C's VLAs (though neither ultimately made it into the standard).
Update:
As was pointed out in the comments, optimizers are good at identifying code that has no side effects, like the operations on an array that you never read from. std::vector implementations are complicated enough that optimizers typically won't see through those several layers of indirection and optimize away all the inserts, so you may get zero time for the array compared to some time for the vector. Reading the array contents after the loop will disable such rude optimizations.
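For example, appended to the benchmark snippet from the question (reusing its arr, myVector and SIZE), folding the results into a value that actually gets printed keeps the compiler from discarding the insertion work as dead code:

// After the timing loops: consume the results so the optimizer
// cannot throw the insertion work away as dead code.
long long checksum = 0;
for (int j = 0; j < SIZE; ++j)
    checksum += arr[j] + myVector[j];
cout << "checksum: " << checksum << endl;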
The vector class has to grow its memory dynamically, which may involve copying the whole thing from time to time.
It also has to call internal functions for many operations, like reallocating.
It may also include safety functionality such as bounds checks.
Meanwhile your array is preallocated, and your operations probably don't call any internal functions.
That is the overhead price for more functionality.
And who said that vectors should be faster than arrays in all cases?
Your array does not need to grow; that's a special case where arrays are indeed faster!
Because arrays are a built-in (native) type, the compiler can address their elements directly; they are handled entirely by the compiled executable.
A vector, on the other hand, is more like a class, a template as I read, and it brings in additional code from other header files and libraries.
A native type can be manipulated without pulling in any of that extra machinery, which makes it easier for the program to work with. The overhead in the vector's time comes from the program having to go through that additional code and use the methods related to the vector type.
Every time you add more code to your app and operate through it, your app's performance will drop.

Benefits of using reserve() in a vector - C++

What is the benefit of using reserve when dealing with vectors? When should I use it? I couldn't find a clear-cut answer on this, but I assume it is faster if you reserve in advance before using them.
What say you people smarter than I?
It's useful if you have an idea how many elements the vector will ultimately hold - it can help the vector avoid repeatedly allocating memory (and having to move the data to the new memory).
In general it's probably a potential optimization that you shouldn't need to worry about, but it's not harmful either (at worst you end up wasting memory if you overestimate).
One area where it can be more than an optimization is when you want to ensure that existing iterators do not get invalidated by adding new elements.
For example, a push_back() call may invalidate existing iterators to the vector (if a reallocation occurs). However if you've reserved enough elements you can ensure that the reallocation will not occur. This is a technique that doesn't need to be used very often though.
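A small sketch of that technique: after reserving, pointers and iterators obtained before the push_backs stay valid as long as size() never exceeds the reserved capacity.

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(100);               // no reallocation until size() would exceed 100

    v.push_back(42);
    int* first = &v[0];           // pointer into the vector's buffer

    for (int i = 0; i < 99; ++i)  // stays within the reserved capacity
        v.push_back(i);

    assert(first == &v[0]);       // still valid: no reallocation happened
    assert(*first == 42);
}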
It can be ... especially if you are going to be adding a lot of elements to your vector over time and you want to avoid the automatic memory expansion the container performs when it runs out of available slots.
For instance, back-insertions (i.e., std::vector::push_back) are considered an amortized O(1), or constant-time, process, but that is because if an insertion at the back of a vector is made and the vector is out of space, it must reallocate memory for a new array of elements, copy the old elements into the new array, and only then copy in the element you were trying to insert. That process is O(N), linear-time complexity, and for a large vector it can take quite a bit of time. Using the reserve() method allows you to pre-allocate memory for the vector if you know it's going to be at least some certain size, and to avoid reallocating memory every time space runs out, especially if you are doing back-insertions in performance-critical code where you want the insertion to remain an actual O(1) operation that doesn't incur a hidden reallocation of the array. Granted, your copy constructor would have to be O(1) as well to get true O(1) complexity for the entire back-insertion, but as far as the container's own back-insertion algorithm is concerned, its complexity stays predictable if the memory for the slot is already pre-allocated.
This excellent article explains the differences between the deque and vector containers in depth. The section "Experiment 2" shows the benefits of vector::reserve().
If you know the eventual size of the vector then reserve is worth using.
Otherwise whenever the vector runs out of internal room it will re-size the buffer. This usually involves doubling (or 1.5 * current size) the size of the internal buffer (can be expensive if you do this a lot).
The real expensive bit is invoking the copy constructor on each element to copy it from the old buffer to the new buffer, followed by calling the destructor on each element in the old buffer.
If the copy constructor is expensive then it can be a problem.
Faster and saves memory
If you push_back another element, then a full vector will typically allocate double the memory it's currently using - since allocate + copy is expensive
Don't know about people smarter than you, but I would say that you should call reserve in advance if you are going to perform lots of insertion operations and you already know or can estimate the total number of elements, at least the order of magnitude. It can save you a lot of reallocations in good circumstances.
Although it's an old question, here is my demonstration of the difference.
#include <iostream>
#include <chrono>
#include <vector>
using namespace std;

int main(){
    vector<int> v1;
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    for(int i = 0; i < 1000000; ++i){
        v1.push_back(1);
    }
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_first = chrono::duration_cast<chrono::duration<double>>(t2-t1);
    cout << "Time for 1000000 insertions without reserve: " << time_first.count() * 1000 << " milliseconds." << endl;

    vector<int> v2;
    v2.reserve(1000000);
    chrono::steady_clock::time_point t3 = chrono::steady_clock::now();
    for(int i = 0; i < 1000000; ++i){
        v2.push_back(1);
    }
    chrono::steady_clock::time_point t4 = chrono::steady_clock::now();
    chrono::duration<double> time_second = chrono::duration_cast<chrono::duration<double>>(t4-t3);
    cout << "Time for 1000000 insertions with reserve: " << time_second.count() * 1000 << " milliseconds." << endl;
    return 0;
}
When you compile and run this program, it outputs:
Time for 1000000 insertions without reserve: 24.5573 milliseconds.
Time for 1000000 insertions with reserve: 17.1771 milliseconds.
There seems to be some improvement with reserve, but not all that much. I think the improvement would be bigger for complex objects, but I am not sure. Any suggestions, changes and comments are welcome.
It's always good to know the final total space needed before requesting any space from the system, so that you only have to request space once. Otherwise the system may have to move you to a larger free zone (this is optimized, but it is not always a free operation, because it requires copying the whole data). Even the compiler will try to help you, but the best thing is to tell it what you know (to reserve the total space required by your process). That's what I think. Greetings.
There is one more advantage of reserve that is not much related to performance but instead to code style and code cleanliness.
Imagine I want to create a vector by iterating over another vector of objects. Something like the following:
std::vector<int> result;
for (const auto& object : objects) {
result.push_back(object.foo());
}
Now, apparently the size of result is going to be the same as objects.size() and I decide to pre-define the size of result.
The simplest way to do it is in the constructor.
std::vector<int> result(objects.size());
But now the rest of my code is invalidated because the size of result is not 0 anymore; it is objects.size(). The subsequent push_back calls are going to increase the size of the vector. So, to correct this mistake, I now have to change how I construct my for-loop. I have to use indices and overwrite the corresponding memory locations.
std::vector<int> result(objects.size());
for (int i = 0; i < objects.size(); ++i) {
result[i] = objects[i].foo();
}
And I don't like it. Indices are everywhere in the code. This is also more vulnerable to making accidental copies because of the [] operator. This example uses integers and directly assigns values to result[i], but in a more complex for-loop with complex data structures, it could be relevant.
Coming back to the main topic, it is very easy to adjust the first code by using reserve. reserve does not change the size of the vector but only the capacity. Hence, I can leave my nice for loop as it is.
std::vector<int> result;
result.reserve(objects.size());
for (const auto& object : objects) {
result.push_back(object.foo());
}