Insert or push_back to end of a std::vector? - c++

Is there any difference in performance between the two methods below to insert new elements to the end of a std::vector:
Method 1
std::vector<int> vec = { 1 };
vec.push_back(2);
vec.push_back(3);
vec.push_back(4);
vec.push_back(5);
Method 2
std::vector<int> vec = { 1 };
int arr[] = { 2,3,4,5 };
vec.insert(std::end(vec), std::begin(arr), std::end(arr));
Personally, I like method 2 because it is nice and concise and inserts all the new elements from an array in one go. But is there any difference in performance?
After all, they do the same thing. Don't they?
Update
The reason why I am not initializing the vector with all the elements, to begin with, is that in my program I am adding the remaining elements based on a condition.

After all, they do the same thing. Don't they?
No. They are different. The first method, using std::vector::push_back, may undergo several reallocations, whereas std::vector::insert typically does not.
insert will internally allocate enough memory up front, based on the size of the range and the current std::vector::capacity, before copying the range. See the following discussion for more:
Does std::vector::insert reserve by definition?
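As a rough illustration (a sketch of my own, not from the linked discussion), printing capacity() after each step makes the difference visible. The exact numbers are implementation-defined, but the push_back version typically reports several capacity jumps, while the single range insert reports at most one:
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    int arr[] = { 2, 3, 4, 5 };

    std::vector<int> a = { 1 };
    std::cout << "push_back capacities: ";
    for (int x : arr)
    {
        a.push_back(x);
        std::cout << a.capacity() << ' ';  // may report several capacity jumps
    }
    std::cout << '\n';

    std::vector<int> b = { 1 };
    b.insert(std::end(b), std::begin(arr), std::end(arr));
    std::cout << "insert capacity: " << b.capacity() << '\n';  // usually a single growth to >= 5
}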
But is there any difference in performance?
Due to the reason explained above, the second method shows a slight performance improvement. For instance, see the quick benchmark below, using http://quick-bench.com:
See the online benchmark
Or write a test program to measure the performance (as #Some programmer dude mentioned in the comments). Following is a sample test program:
#include <iostream>
#include <chrono>
#include <algorithm>
#include <vector>

using namespace std::chrono;

class Timer final
{
private:
    time_point<high_resolution_clock> _startTime;

public:
    Timer() noexcept
        : _startTime{ high_resolution_clock::now() }
    {}
    ~Timer() noexcept { Stop(); }

    void Stop() noexcept
    {
        const auto endTime = high_resolution_clock::now();
        const auto start = time_point_cast<microseconds>(_startTime).time_since_epoch();
        const auto end = time_point_cast<microseconds>(endTime).time_since_epoch();
        const auto durationTaken = end - start;
        const auto duration_ms = durationTaken * 0.001;
        std::cout << durationTaken.count() << "us (" << duration_ms.count() << "ms)\n";
    }
};

// Method 1: push_back
void push_back()
{
    std::cout << "push_backing: ";
    Timer time{};
    for (auto i{ 0ULL }; i < 1'000'000; ++i)
    {
        std::vector<int> vec = { 1 };
        vec.push_back(2);
        vec.push_back(3);
        vec.push_back(4);
        vec.push_back(5);
    }
}

// Method 2: insert_range
void insert_range()
{
    std::cout << "range-inserting: ";
    Timer time{};
    for (auto i{ 0ULL }; i < 1'000'000; ++i)
    {
        std::vector<int> vec = { 1 };
        int arr[] = { 2,3,4,5 };
        vec.insert(std::end(vec), std::cbegin(arr), std::cend(arr));
    }
}

int main()
{
    push_back();
    insert_range();
    return 0;
}
Release build on my system (MSVS 2019: /Ox /std:c++17, AMD Ryzen 7 2700X (8-core, 3.70 GHz), x64 Windows 10):
// Build - 1
push_backing: 285199us (285.199ms)
range-inserting: 103388us (103.388ms)
// Build - 2
push_backing: 280378us (280.378ms)
range-inserting: 104032us (104.032ms)
// Build - 3
push_backing: 281818us (281.818ms)
range-inserting: 102803us (102.803ms)
This shows that, for the given scenario, std::vector::insert is about 2.7 times faster than repeated std::vector::push_back.
See what other compilers (clang 8.0 and GCC 9.2) have to say, according to their implementations: https://godbolt.org/z/DQrq51

There may be a difference between the two approaches if the vector needs to reallocate.
Your second method, calling the insert() member function once with an iterator range:
vec.insert(std::end(vec), std::begin(arr), std::end(arr));
is able to allocate all the memory needed for the insertion in one go, since insert() receives random access iterators: it takes constant time to know the size of the range, so the whole memory allocation can be done before copying the elements, and no reallocations follow during the call.
Your first method, individual calls to the push_back() member function, may trigger several reallocations, depending on the number of elements to insert and the memory initially reserved for the vector.
Note that the optimisation explained above may not be available for forward or bidirectional iterators since it would take linear time in the size of the range to know the number of elements to be inserted. However, the time needed for multiple memory allocations likely dwarfs the time needed to calculate the length of the range for these cases, so probably they still implement this optimisation. For input iterators, this optimisation is not even possible since they are single-pass iterators.
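If you do need individual push_back calls, for example because each element is added conditionally as in the question's update, you can still get the single-allocation behaviour by reserving an upper bound first. A minimal sketch (append_even and the evenness test are made up for illustration):
#include <cstddef>
#include <vector>

// Appends the even values from [first, last) to vec with at most one reallocation.
void append_even(std::vector<int>& vec, const int* first, const int* last)
{
    vec.reserve(vec.size() + static_cast<std::size_t>(last - first));  // upper bound
    for (const int* p = first; p != last; ++p)
    {
        if (*p % 2 == 0)        // arbitrary example condition
            vec.push_back(*p);  // cannot reallocate inside the loop
    }
}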

The major contributing factor is going to be the re-allocations. vector has to make space for new elements.
Consider these three snippets.
//pushback
std::vector<int> vec = {1};
vec.push_back(2);
vec.push_back(3);
vec.push_back(4);
vec.push_back(5);
//insert
std::vector<int> vec = {1};
int arr[] = {2,3,4,5};
vec.insert(std::end(vec), std::begin(arr), std::end(arr));
//construct
std::vector<int> vec = {1,2,3,4,5};
To confirm that the reallocations are what makes the difference, add a vec.reserve(5) call to the push_back and insert versions and measure again.
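For concreteness, the reserved push_back variant would look like this (a sketch):
// push_back with the final size reserved up front
std::vector<int> vec = { 1 };
vec.reserve(5);  // room for all five elements, so the push_backs below cannot reallocate
vec.push_back(2);
vec.push_back(3);
vec.push_back(4);
vec.push_back(5);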

push_back inserts a single element, hence in the worst case you may encounter multiple reallocations.
For the sake of the example, consider the case where the initial capacity is 2 and increases by a factor of 2 on each reallocation. Then
std::vector<int> vec = { 1 };
vec.push_back(2);
vec.push_back(3); // need to reallocate, capacity is 4
vec.push_back(4);
vec.push_back(5); // need to reallocate, capacity is 8
You can of course prevent unnecessary reallocations by calling
vec.reserve(num_elements_to_push);
Though, if you are inserting from an array anyway, the more idiomatic way is to use insert.

Related

How do I obtain the subset of unique_ptrs in a vector which satisfy a predicate?

I have a vector of std::unique_ptr<Foo> objects. I want to get a collection of all vector items that match some condition.
I see the std functions but they all seem to test for a predicate (and return bool) or return a single element.
Is there a built-in mechanism to get a collection that's a subset of a vector? If not, is there a way to construct an iterator that tests items against an arbitrary predicate (to identify ones that meet my condition) and a mechanism to return all items that meet that predicate?
Be warned, since you've got a vector of unique_ptr, those elements can only be moved around, i.e. once you have got the subset, the original vector will not be the same anymore.
The least destructive method is to use std::stable_partition to divide the vector into two groups, while keeping everything in the same vector:
auto sep = std::stable_partition(vec.begin(), vec.end(), [](const auto& foo) {
return foo->is_good();
});
// the part `vec.begin() .. sep` contains all "good" foos.
// the part `sep .. vec.end()` contains all "bad" foos.
If order is not important, use std::partition instead. The usage is the same.
If you want to split the bad foos into another vector, you could use std::copy_if + std::make_move_iterator to move the objects out. Note that this will leave holes everywhere. Use std::remove to clean them up.
decltype(vec) bad_vec;
std::copy_if(std::make_move_iterator(vec.begin()),
             std::make_move_iterator(vec.end()),
             std::back_inserter(bad_vec),
             [](const auto& p) { return !p->is_good(); });
auto new_end = std::remove(vec.begin(), vec.end(), nullptr);
vec.erase(new_end, vec.end());
If you no longer care about the "bad" objects, use std::remove_if:
auto new_end = std::remove_if(vec.begin(), vec.end(), [](const auto& foo) {
return !foo->is_good();
});
vec.erase(new_end, vec.end());
// now `vec` only contains "good" foos.
If you just want to get the raw pointers, instead of the unique_ptr itself, you could use std::transform to fill up a vector<Foo*> and then remove_if to filter it... But at this point it is probably just easier to write the for loop.
std::vector<Foo*> good_vec;
for (const auto& foo : vec) {
    if (foo->is_good()) {
        good_vec.push_back(foo.get());
    }
}
Since your vector holds unique_ptr's (which we don't make copies of) - I'd recommend the second option you inquired about: An iterator which only iterates those elements matching your predicate. This is exactly boost::filter_iterator.
Sort-of-an example:
bool points_to_positive(const std::unique_ptr<int>& ptr) {
    // The predicate receives the element type, i.e. a reference to the unique_ptr.
    return ptr != nullptr and *ptr > 0;
}
// ...
std::vector<std::unique_ptr<int>> vec;
// ...
auto iterator = boost::make_filter_iterator(
    &points_to_positive, std::begin(vec), std::end(vec)
);
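To actually iterate, you also need a matching end filter iterator built over the same range. A complete minimal sketch, assuming Boost is available (the header, sample values and output are mine):
#include <boost/iterator/filter_iterator.hpp>
#include <iostream>
#include <memory>
#include <vector>

bool points_to_positive(const std::unique_ptr<int>& p)
{
    return p != nullptr && *p > 0;
}

int main()
{
    std::vector<std::unique_ptr<int>> vec;
    vec.push_back(std::make_unique<int>(-1));
    vec.push_back(std::make_unique<int>(2));
    vec.push_back(std::make_unique<int>(3));

    auto first = boost::make_filter_iterator(&points_to_positive, vec.begin(), vec.end());
    auto last  = boost::make_filter_iterator(&points_to_positive, vec.end(), vec.end());
    for (; first != last; ++first)
        std::cout << **first << '\n';  // visits only the elements satisfying the predicate
}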
If, however, you plan on making that iteration multiple times and do not want to trade time for space, you would probably be better served by just copying out the actual pointers, like in #kennytm's last suggested option.
What you asked for is std::copy_if from <algorithm>. For unique_ptr elements, which cannot be copied, this is not what you want. Sample code:
#include <algorithm>
#include <array>
#include <cstdlib>
#include <experimental/array>
#include <iostream>
#include <type_traits>
#include <vector>
using std::cout;
using std::endl;
using std::size_t;
bool is_even( const int n )
{
    // True iff n is even.
    return n % 2 == 0;
}

std::ostream& operator<< ( std::ostream& os, const std::vector<int>& container )
{
    // Boilerplate instrumentation.
    for ( const int& x : container )
        os << x << ' ';
    return os;
}

int main(void)
{
    // Our input array, raw:
    constexpr int digits[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    // The number of input elements:
    constexpr size_t ndigits = std::extent<decltype(digits)>();
    // Container wrapping our input array:
    constexpr std::array<int, ndigits> numbers =
        std::experimental::to_array(digits);

    std::vector<int> even_numbers;
    even_numbers.reserve(ndigits); // Upper bound on output size.
    std::copy_if( numbers.cbegin(),
                  numbers.cend(),
                  std::back_inserter(even_numbers),
                  is_even );
    even_numbers.shrink_to_fit();

    // Correct output is "2 4 6 8 "
    cout << even_numbers << endl;
    return EXIT_SUCCESS;
}
However, your vector contains unique_ptr objects, which can’t be copied. Several answers have other good suggestions to get equivalent results. If you want to copy the references meeting the requirements to a different collection, though, you could also change unique_ptr to shared_ptr or weak_ptr, which can be copied.
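For example, with shared_ptr the copy_if approach carries over directly. A sketch, where Foo and is_good() stand in for the asker's actual types:
#include <algorithm>
#include <iterator>
#include <memory>
#include <vector>

struct Foo
{
    explicit Foo(bool g) : good(g) {}
    bool is_good() const { return good; }
    bool good;
};

int main()
{
    std::vector<std::shared_ptr<Foo>> vec;
    vec.push_back(std::make_shared<Foo>(true));
    vec.push_back(std::make_shared<Foo>(false));
    vec.push_back(std::make_shared<Foo>(true));

    std::vector<std::shared_ptr<Foo>> good_ones;
    std::copy_if(vec.begin(), vec.end(), std::back_inserter(good_ones),
                 [](const std::shared_ptr<Foo>& p) { return p->is_good(); });
    // good_ones now shares ownership of the two "good" Foo objects with vec.
}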

How to efficiently delete elements from a vector given another vector

What is the best way to delete elements from a vector given another vector?
I have come up with the following code:
#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

void remove_elements(vector<int>& vDestination, const vector<int>& vSource)
{
    if(!vDestination.empty() && !vSource.empty())
    {
        for(auto i: vSource) {
            vDestination.erase(std::remove(vDestination.begin(), vDestination.end(), i), vDestination.end());
        }
    }
}

int main()
{
    vector<int> v1={1,2,3};
    vector<int> v2={4,5,6};
    vector<int> v3={1,2,3,4,5,6,7,8,9};
    remove_elements(v3,v1);
    remove_elements(v3,v2);
    for(auto i:v3)
        cout << i << endl;
    return 0;
}
Here the output will be:
7
8
9
My version is the following: I only apply erase after all elements from vSource have been shifted to the end by std::remove, and I keep track of the iterator to the new logical end of vDestination so I don't iterate over the removed tail for nothing.
void remove_elements(vector<int>& vDestination, const vector<int>& vSource)
{
    auto last = std::end(vDestination);
    std::for_each(std::begin(vSource), std::end(vSource), [&](const int & val) {
        last = std::remove(std::begin(vDestination), last, val);
    });
    vDestination.erase(last, std::end(vDestination));
}
See on coliru : http://coliru.stacked-crooked.com/a/6e86893babb6759c
Update
Here is a template version, so you don't have to care about the container type:
template <class ContainerA, class ContainerB>
void remove_elements(ContainerA & vDestination, const ContainerB & vSource)
{
    auto last = std::end(vDestination);
    std::for_each(std::begin(vSource), std::end(vSource), [&](typename ContainerB::const_reference val) {
        last = std::remove(std::begin(vDestination), last, val);
    });
    vDestination.erase(last, std::end(vDestination));
}
Note
This version works for vectors without any constraints. If your vectors are sorted, you can take some shortcuts and avoid iterating over the vector again and again to delete each element.
I assume that by best you mean fastest that works. Since it's a question about efficiency, I performed a simple benchmark to compare efficiency of several algorithms. Note that they differ a little, since the problem is a bit underspecified - the questions that arise (and assumptions taken for benchmark) are:
is it guaranteed that vDestination contains all elements from vSource ? (assumption: no)
are duplicates allowed in either vDestination or vSource ? (assumption: yes, in both)
does the order of the elements in the result vector matter? (algorithms for both cases tested)
should every element from vDestination be removed if it is equal to any element from vSource, or only one-for-one? (the compared algorithms differ on this point)
are sizes of vDestination and vSource somehow bounded? Is one of them always bigger or much bigger? (several cases tested)
in the comments it's already explained that vectors don't need to be sorted, but I've included this point, as it's not immediately visible from the question (no sorting assumed in either of vectors)
As you can see, there are a few points on which the algorithms differ and consequently, as you can guess, the best algorithm will depend on your use case. Compared algorithms include:
original one (proposed in question) - baseline
proposed in #dkg answer
proposed in #Revolver_Ocelot answer + additional sorting (required by the algorithm) and pre-reservation of space for the result vector
proposed in #Jarod42 answer
set-based algorithm (presented below - mostly optimization of #Jarod42 algorithm)
counting algorithm (presented below)
set-based algorithm:
std::unordered_set<int> elems(source.begin(), source.end());
auto i = destination.begin();
auto target = destination.end();
while(i < target) {
    if(elems.count(*i) > 0)
        std::swap(*i, *(--target));
    else
        i++;
}
destination.erase(target, destination.end());
counting algorithm:
std::unordered_map<int, int> counts;
counts.max_load_factor(0.3);
counts.reserve(destination.size());
for(auto v: destination) {
    counts[v]++;
}
for(auto v: source) {
    counts[v]--;
}
auto i = destination.begin();
for(auto k: counts) {
    if(k.second < 1) continue;
    i = std::fill_n(i, k.second, k.first);
}
destination.resize(std::distance(destination.begin(), i));
Benchmarking procedure was executed using Celero library and was the following:
Generate n pseudo-random ints (n in set {10,100,1000,10000, 20000, 200000}) and put them to a vector
Copy a fraction (m) of these ints to second vector (fractions from set {0.01, 0.1, 0.2, 0.4, 0.6, 0.8}, min. 1 element)
Start timer
Execute removal procedure
Stop timer
Only algorithms 3, 5 and 6 were executed on datasets larger than 10 000 elements, as the rest of them took too long for me to comfortably measure (feel free to do it yourself).
Long story short: if your vectors contain fewer than 1000 elements, pick whichever you prefer. If they are longer, rely on the size of vSource. If it is less than 50% of vDestination, choose the set-based algorithm; if it is more, sort them and pick #Revolver_Ocelot's solution (they tie around 60%, with the set-based one being over 2x faster when vSource is 1% the size of vDestination). Please don't rely on order, or provide vectors that are sorted from the beginning - the requirement that ordering remain the same slows the process down dramatically. Benchmark on your use case, your compiler, your flags and your hardware. I've attached a link to my benchmarks, in case you want to reproduce them.
Complete results (file vector-benchmarks.csv) are available on GitHub together with benchmarking code (file tests/benchmarks/vectorRemoval.cpp) here.
Please keep in mind that these are results that I've obtained on my computer, my compiler etc. - in your case they will differ (especially when it comes to point in which one algorithm is better than another).
I've used GCC 6.1.1 with -O3 on Fedora 24, on top of VirtualBox.
If your vectors are always sorted, you can use set_difference:
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

void remove_elements(std::vector<int>& vDestination, const std::vector<int>& vSource)
{
    std::vector<int> result;
    std::set_difference(vDestination.begin(), vDestination.end(), vSource.begin(), vSource.end(), std::back_inserter(result));
    vDestination.swap(result);
}

int main()
{
    std::vector<int> v1={1,2,3};
    std::vector<int> v2={4,5,6};
    std::vector<int> v3={1,2,3,4,5,6,7,8,9};
    remove_elements(v3,v1);
    remove_elements(v3,v2);
    for(auto i:v3)
        std::cout << i << '\n';
}
If not for the requirement that the output range must not overlap with any input range, we could even avoid the additional vector. Potentially you can roll your own version of set_difference that is allowed to output into a range starting at vDestination.begin(), but that is outside the scope of this answer.
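For the curious, a hand-rolled sketch of that idea, assuming (as the rest of this answer does) that both vectors are sorted; it compacts the surviving elements to the front of vDestination and then truncates:
// Sketch of an in-place set difference over sorted vectors.
void remove_elements_inplace(std::vector<int>& vDestination, const std::vector<int>& vSource)
{
    auto out = vDestination.begin();   // write position, never ahead of the read position
    auto src = vSource.begin();
    for (auto it = vDestination.begin(); it != vDestination.end(); ++it)
    {
        while (src != vSource.end() && *src < *it)
            ++src;                     // skip vSource values smaller than *it
        if (src != vSource.end() && *src == *it)
            ++src;                     // *it matches: drop it, consume one vSource element
        else
            *out++ = *it;              // no match: keep *it
    }
    vDestination.erase(out, vDestination.end());
}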
Can be written with STL as:
void remove_elements(vector<int>& vDestination, const vector<int>& vSource)
{
    const auto isInSource = [&](int e) {
        return std::find(vSource.begin(), vSource.end(), e) != vSource.end();
    };
    vDestination.erase(
        std::remove_if(vDestination.begin(), vDestination.end(), isInSource),
        vDestination.end());
}
If vSource is sorted, you may replace std::find with std::binary_search, as in the sketch below.
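That variant could look like this (a sketch; remove_elements_sorted is a made-up name, and vSource must be sorted for std::binary_search to be valid):
void remove_elements_sorted(vector<int>& vDestination, const vector<int>& vSource)
{
    const auto isInSource = [&](int e) {
        return std::binary_search(vSource.begin(), vSource.end(), e);  // O(log n) per lookup
    };
    vDestination.erase(
        std::remove_if(vDestination.begin(), vDestination.end(), isInSource),
        vDestination.end());
}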

Why does adding to a vector not work while using an iterator?

I have two code samples which do exactly the same thing, one in C++11 and one in C++03.
C++ 11
int main()
{
    vector<int> v = {1,2,3};
    int count = 0;
    for (auto it : v)
    {
        cout << it << endl;
        if (count == 0)
        {
            count++;
            v.push_back(4); // adding value to vector
        }
    }
    return 0;
}
C++ 03
int main()
{
    vector<int> v = {1,2,3};
    int count = 0;
    for (vector<int>::iterator it = v.begin(); it != v.end(); it++)
    {
        cout << *it << endl;
        if (count == 0)
        {
            count++;
            v.push_back(4); // adding value to vector
        }
    }
    return 0;
}
Both programs fail at runtime with an invalid-iterator error.
Now, when I look at the vector::end() implementation:
iterator end() _NOEXCEPT
{
    // return iterator for end of mutable sequence
    return (iterator(this->_Mylast, this));
}
Here, the inline function clearly uses _Mylast to compute end. So when I add an element, the pointer should simply be advanced to the next location, like _Mylast++. Why am I getting this error?
Thanks.
A vector stores its elements in contiguous memory. If that memory block needs to be reallocated, iterators become invalid.
If you need to modify the vector's size while iterating, iterate by index instead of iterator.
Another option is to use a different container with a different iterator behavior, for example a list will allow you to continue iterating as you insert items.
And finally, (dare I suggest this?) if you know the maximum size your vector will grow to, .reserve() it before iterating over it. This will ensure it doesn't get reallocated during your loop. I am not sure if this behavior is guaranteed by the standard though (maybe someone can chime in); I would definitely not do it, considering iterating by index is perfectly safe.
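A minimal sketch of the index-based approach mentioned above; the loop body may grow the vector freely, because no iterator is held across the push_back and v.size() is re-read on every iteration:
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3 };
    for (std::size_t i = 0; i < v.size(); ++i)  // size() is re-evaluated each time around
    {
        std::cout << v[i] << '\n';
        if (i == 0)
            v.push_back(4);  // safe: indices stay valid even if the buffer is reallocated
    }
}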
Your push_back is invalidating the iterator you're using in the for loop, because the vector is reallocating its memory, which invalidates all iterators to elements of the vector.
The idiomatic solution for this is to use an insert iterator, like the one you get from calling std::back_inserter on the vector. Then you can do:
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> v;
    auto inserter = std::back_inserter(v);
    for(int i=0; i<100; ++i)
        inserter = i;
    for(const auto item : v)
        std::cout << item << '\n';
}
And it will ensure its own validity even through reallocation calls of the underlying container.
Live demo here.

C++ vector's insert & push_back difference

I want to know what the difference(s) between vector's push_back and insert functions are.
Is there a structural difference(s)?
Is there a really big performance difference(s)?
The biggest difference is their functionality. push_back always puts a new element at the end of the vector, while insert allows you to select the new element's position. This impacts performance: vector elements are moved in memory only when the buffer has to grow because too little memory was allocated for it, whereas insert forces all elements after the selected position to be moved to make a place for the new one. This is why insert can often be less efficient than push_back.
The functions have different purposes. vector::insert allows you to insert an object at a specified position in the vector, whereas vector::push_back will just stick the object on the end. See the following example:
using namespace std;
vector<int> v = {1, 3, 4};
v.insert(next(begin(v)), 2);
v.push_back(5);
// v now contains {1, 2, 3, 4, 5}
You can use insert to perform the same job as push_back with v.insert(v.end(), value).
Besides the fact that push_back(x) does the same as insert(end(), x) (maybe with slightly better performance), there are several important things to know about these functions:
push_back exists only on BackInsertionSequence containers - so, for example, it doesn't exist on set. It couldn't, because push_back() guarantees that it will always append at the end.
Some containers can also satisfy FrontInsertionSequence, and they have push_front. This is satisfied by deque, but not by vector.
The insert(ITERATOR, x) is from InsertionSequence, which is common to set and vector. This way you can use either set or vector as a target for multiple insertions. However, set additionally has insert(x), which does practically the same thing (in set, the iterator in insert(ITERATOR, x) serves only as a hint to speed up the search for the appropriate place - a feature not used in this case).
Note about the last case that if you are going to add elements in a loop, then container.push_back(x) and container.insert(container.end(), x) will do effectively the same thing. However, this won't be true if you obtain container.end() once and then use it throughout the whole loop.
For example, you could risk the following code:
auto pe = v.end();
for (auto& s: a)
    v.insert(pe, s);
This will effectively copy the whole of a into the v vector, in reverse order, and only if you are lucky enough not to get the vector reallocated for extension (you can prevent this by calling reserve() first); if you are not so lucky, you'll get so-called UndefinedBehavior(tm). Theoretically this isn't allowed because vector's iterators are considered invalidated every time a new element is added.
If you do it this way:
copy(a.begin(), a.end(), back_inserter(v));
it will copy a at the end of v in the original order, and this doesn't carry a risk of iterator invalidation.
[EDIT] I made previously this code look this way, and it was a mistake because inserter actually maintains the validity and advancement of the iterator:
copy(a.begin(), a.end(), inserter(v, v.end()));
So this code will also add all elements in the original order without any risk.
I didn't see it in any of the comments above, but it is important to know:
If we wish to add a new element to a given vector and the new size of the vector (including the new element) surpasses the current capacity, it will cause an automatic reallocation of the allocated storage space.
And because memory allocation is an action we wish to minimize, both push_back and insert will increase the capacity in the same way (for a vector with n elements this adds about n/2 on my implementation; the exact growth factor is implementation-defined).
So in terms of memory efficiency it is safe to say: use whichever you like best.
for example:
std::vector<int> test_Insert = { 1,2,3,4,5,6,7 };
std::vector<int> test_Push_Back = { 1,2,3,4,5,6,7 };
std::cout << test_Insert.capacity() << std::endl;
std::cout << test_Push_Back.capacity() << std::endl;
test_Insert.insert(test_Insert.end(), 8);
test_Push_Back.push_back(8);
std::cout << test_Insert.capacity() << std::endl;
std::cout << test_Push_Back.capacity() << std::endl;
On my implementation, this code prints:
7
7
10
10
Since there's no actual performance data, I reluctantly wrote some code to produce it. Keep in mind that I wrote this code because I wondered "Should I push_back multiple single elements, or use insert?".
#include <iostream>
#include <vector>
#include <cassert>
#include <chrono>

using namespace std;

vector<float> pushBackTest()
{
    vector<float> v;
    for (int i = 0; i < 10000000; i++)
    {
        // Using a for-loop took 200ms more (in the output)
        v.push_back(0);
        v.push_back(1);
        v.push_back(2);
        v.push_back(3);
        v.push_back(4);
        v.push_back(5);
        v.push_back(6);
        v.push_back(7);
        v.push_back(8);
        v.push_back(9);
    }
    return v;
}

vector<float> insertTest()
{
    vector<float> v;
    for (int i = 0; i < 10000000; i++)
    {
        v.insert(v.end(), {0,1,2,3,4,5,6,7,8,9});
    }
    return v;
}

int main()
{
    std::chrono::steady_clock::time_point start = chrono::steady_clock::now();
    vector<float> a = pushBackTest();
    cout << "pushBackTest: " << chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - start).count() << "ms" << endl;

    start = std::chrono::steady_clock::now();
    vector<float> b = insertTest();
    cout << "insertTest: " << chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - start).count() << "ms" << endl;

    assert(a==b);
    return 0;
}
Output:
pushBackTest: 5544ms
insertTest: 3402ms
Since curiosity killed my time, I ran a similar test, but adding a single number instead of multiple ones.
So, the two new functions are:
vector<float> pushBackTest()
{
    vector<float> v;
    for (int i = 0; i < 10000000; i++)
    {
        v.push_back(1);
    }
    return v;
}

vector<float> insertTest()
{
    vector<float> v;
    for (int i = 0; i < 10000000; i++)
    {
        v.insert(v.end(), 1);
    }
    return v;
}
Output:
pushBackTest: 452ms
insertTest: 615ms
So, if you want to add a batch of elements, insert is faster; otherwise push_back wins. Also, keep in mind that push_back can only push... back.
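If you want to separate allocation cost from per-call overhead, another variant worth measuring (a sketch only, not timed above; it is meant to slot into the test program alongside pushBackTest and insertTest) reserves the final size before pushing back:
vector<float> pushBackReserveTest()
{
    vector<float> v;
    v.reserve(10000000u * 10u);  // final size is known up front, so no reallocations occur
    for (int i = 0; i < 10000000; i++)
    {
        for (int k = 0; k < 10; k++)
            v.push_back(k);
    }
    return v;
}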

How to have valid .begin() and .end() without .resize() for std::vector?

I want to use a factory function to fill my vector and then use its iterators, without calling resize(), which wipes out my previously generated values.
Is that possible, or am I missing a point of the STL design?
#include <vector>
#include <algorithm>
#include <iostream>

struct A
{
    A():_x(42){}
    A(double x):_x(x){}
    double _x;
};

struct factory
{
    A operator()()
    {
        return A(3.14);
    }
};

int main()
{
    std::vector<A> v;
    int nbr = 3;
    v.reserve(nbr);
    std::generate_n(v.begin(), nbr, factory());

    std::cout << "Good values" << std::endl;
    for(int i = 0 ; i < nbr ; ++i)
        std::cout << v[i]._x << std::endl;

    v.resize(nbr); // How can I have the syntax below without the resize, which wipes out my previous values?
    std::cout << "resize has been called so values are bad (i.e. default ctor)" << std::endl;
    for(std::vector<A>::iterator it = v.begin() ; it != v.end() ; ++it)
        std::cout << (*it)._x << std::endl;
}
Thanks :)
Either I did not quite understand your concern, or else you have been misled. resize() does not modify any of the existing elements in the container (other than those removed if you resize to a smaller size).
Now, your actual problem is that you have undefined behavior in your program. The vector has capacity() == nbr but size() == 0 when you call generate_n, and that is writing beyond the end of the container. There are two solutions for this: first, you can resize before calling generate_n:
std::vector<A> v;
int nbr = 3;
v.resize(nbr);
std::generate_n(v.begin(), nbr, factory());
Or else you can change the type of the iterator:
std::vector<A> v;
int nbr = 3;
v.reserve(nbr);
std::generate_n(std::back_inserter(v), nbr, factory());
v.reserve(nbr);
std::generate_n(v.begin(), nbr, factory());
This is an error. reserve != resize; reserve only allocates memory we might need later, it does not create elements.
Why do you use resize just to print the vector? resize is the function that actually changes the vector's size; begin()/end() simply reflect that size, they do not depend on reserve...
Your generate_n is not generating values into the vector properly. The size of the vector is 0, so while it may appear to work correctly, you're just getting lucky when writing beyond the end of the vector. You really do need to use resize or similar. Alternatively (and possibly more performant) you can use a back_inserter: std::generate_n(std::back_inserter(v), nbr, factory());
The first part of your code is already broken. In order to create vector elements you have to call resize, not reserve. reserve can only reserve future vector capacity by allocating raw memory, but it does not create (construct) real vector elements. You are generally not allowed to access vector elements that reside between vector's size and vector's capacity.
You called reserve and then you tried to use your vector as if the elements had already been constructed: you assign values to them and you attempt to read and print these values. In the general case this is illegal and leads to undefined behavior. Meanwhile, the size of your vector remained 0, which is what you tried to compensate for with that strange call to resize later.
You need to call resize at the very beginning. Create a vector with the proper number of elements right from the start. (This can also be done by passing the initial size to vector's constructor).
For example, just do
int nbr = 3;
std::vector<A> v(nbr);
std::generate_n(v.begin(), nbr, factory());
or
std::vector<A> v;
int nbr = 3;
v.resize(nbr);
std::generate_n(v.begin(), nbr, factory());
and you are done. Forget about reserve - you don't need it in this case.