Reduce the capacity of an STL vector - C++

Is there a way to reduce the capacity of a vector?
My code inserts values into a vector (not knowing their number beforehand), and
when this finishes, the vectors are used only for read operations.
I guess I could create a new vector, do a .reserve() with the size and copy
the items, but I don't really like the extra copy operation.
PS: I don't care for a portable solution, as long as it works for gcc.

std::vector<T>(v).swap(v);
Swapping the contents with another vector swaps the capacity.
std::vector<T>(v).swap(v); ==> is equivalent to
std::vector<T> tmp(v); // copy elements into a temporary vector
v.swap(tmp); // swap internal vector data
swap() only exchanges the vectors' internal data (pointers, size and capacity), so it is cheap.
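Here is a minimal, self-contained demonstration of the trick (my own sketch, not part of the original answer; the exact capacity values printed are implementation-dependent):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);
    v.resize(10);                                 // size is now 10, capacity stays >= 1000
    std::cout << v.size() << ' ' << v.capacity() << '\n';

    std::vector<int>(v).swap(v);                  // copy into a right-sized temporary, then swap buffers
    std::cout << v.size() << ' ' << v.capacity() << '\n';  // capacity is now close to 10
}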

With C++11, you can call the member function shrink_to_fit(). The draft standard section 23.2.6.2 says:
shrink_to_fit is a non-binding request to reduce capacity() to size(). [Note: The request is non-binding to allow latitude for implementation-specific optimizations. —end note]
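For illustration, a small C++11 example (my sketch; whether capacity() actually drops is up to the implementation, precisely because the request is non-binding):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000);
    v.resize(10);
    std::cout << v.capacity() << '\n';  // still at least 1000
    v.shrink_to_fit();                  // non-binding request
    std::cout << v.capacity() << '\n';  // typically 10 with gcc/libstdc++
}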

Go look at Scott Meyers' Effective STL, item 17.
Basically you can't directly reduce the storage size of a std::vector. resize() and reserve() will never reduce the actual memory footprint of a container. The "trick" is to create a new container of the right size, copy the data and swap that with the current container. If we would like to clear a container out, this is simply:
std::vector<T>().swap(v);
If we have to copy the data over then we need to do the copy:
std::vector<T>(v).swap(v);
What this does is create a new vector with the data from the old one, performing the copy that would be required by any operation with the effect you need. Calling swap() then just exchanges the internal buffers between the two objects. At the end of the statement the temporary vector is destroyed, taking the old vector's oversized buffer with it, while the old vector is left holding the new copy's buffer, which is exactly the size we need.

The idiomatic solution is to swap with a newly constructed vector.
vector<int>().swap(v);
Edit: I misread the question. The code above will clear the vector. OP wants to keep the elements untouched, only shrink capacity() to size().
It is difficult to say if aJ's code will do that. I doubt there's a portable solution. For gcc, you'll have to take a look at their particular implementation of vector.
Edit: So I've peeked at the libstdc++ implementation. It seems that aJ's solution will indeed work.
vector<int>(v).swap(v);
See the source, line 232.

No, you cannot reduce the capacity of a vector without copying. However, you can control how much new allocation happens by checking capacity() and calling reserve() every time you insert something. The default behavior for std::vector is to grow its capacity by a constant factor (commonly 2, though this is implementation-defined) every time new capacity is needed. You can grow it by your own magic ratio:
template <typename T>
void myPushBack(std::vector<T>& vec, const T& val) {
    if (vec.size() == vec.capacity()) {
        // grow by our own ratio before push_back triggers the default growth
        vec.reserve(static_cast<std::size_t>(vec.size() * my_magic_ratio) + 1);
    }
    vec.push_back(val);
}
If you're into somewhat hacky techniques, you can always pass in your own allocator and do whatever you need to reclaim the unused capacity.

I'm not saying that GCC couldn't have some method for doing what you want without a copy, but it would be tricky to implement, because vectors must use an Allocator object to allocate and deallocate memory, and the Allocator interface doesn't include a reallocate() method. I don't think it would be impossible to do, but it would be tricky.

If you're worried about the overhead of your vector then maybe you should be looking at another type of data structure. You mentioned that once your code is done initializing the vector it becomes a read-only process. I would suggest going with a plain array whose capacity is decided at compile time. Or perhaps a linked list would be more suitable to your needs.
Lemme know if I completely misunderstood what you were getting at.
-UBcse

Old thread, I know, but in case anyone is viewing this in the future: there's shrink_to_fit() in C++11, but since it is a non-binding request, the behaviour will depend on the implementation.
See: http://en.cppreference.com/w/cpp/container/vector/shrink_to_fit

I'm not an expert in C++, but it seems this solution works (at least it compiles with g++):
std::vector<int> some_vector(20); // size 20, capacity at least 20
// first resize the vector down to the elements you want to keep;
some_vector.resize(10);
// then you can shrink to fit;
some_vector.shrink_to_fit();
// new capacity is typically 10 (implementation-dependent);

This also works:
v = std::vector<T>(v); // if we need to keep same data
v = std::vector<T>(); // if we need to clear
This calls the && overload of operator= (move assignment), so the vector takes over the right-sized temporary's buffer instead of copying it.

Get the "Effective STL" book by Scott Myers. It has a complete item jus on reducing vector's capacity.

Related

How to deallocate excess memory allocated to a object (vector) [duplicate]

Faster alternative to push_back(size is known)

I have a float vector. As I process certain data, I push it back. I always know what the size will be while declaring the vector.
For the largest case, it is 172,490,752 floats. This takes about eleven seconds just to push_back everything.
Is there a faster alternative, like a different data structure or something?
If you know the final size, then reserve() that size after you declare the vector. That way it only has to allocate memory once.
Also, you may experiment with using emplace_back() although I doubt it will make any difference for a vector of float. But try it and benchmark it (with an optimized build of course - you are using an optimized build - right?).
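A sketch of the reserve-then-push_back pattern with the numbers from the question (the literal 0.5f just stands in for the real data):

#include <cstddef>
#include <vector>

int main() {
    const std::size_t n = 172490752;
    std::vector<float> v;
    v.reserve(n);                      // a single allocation up front
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(0.5f);             // never reallocates, never copies old data
}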
The usual way of speeding up a vector when you know the size beforehand is to call reserve on it before using push_back. This eliminates the overhead of reallocating memory and copying the data every time the previous capacity is filled.
Sometimes for very demanding applications this won't be enough. Even though push_back won't reallocate, it still needs to check the capacity every time. There's no way to know how bad this is without benchmarking, since modern processors are amazingly efficient when a branch is always/never taken.
You could try resize instead of reserve and use array indexing, but the resize forces a default initialization of every element; this is a waste if you know you're going to set a new value into every element anyway.
An alternative would be to use std::unique_ptr<float[]> and allocate the storage yourself.
::boost::container::stable_vector. Notice that allocating a contiguous block of about 172 million * 4 bytes ≈ 690 MB might easily fail and requires quite a lot of page juggling. A stable_vector is essentially a list of smaller vectors or arrays of reasonable size. You may also want to populate it in parallel.
You could use a custom allocator which avoids default initialisation of all elements, as discussed in this answer, in conjunction with ordinary element access:
const size_t N = 172490752;
std::vector<float, uninitialised_allocator<float> > vec(N);
for(size_t i=0; i!=N; ++i)
vec[i] = the_value_for(i);
This avoids (i) default initializing all elements, (ii) checking for capacity at every push, and (iii) reallocation, but at the same time preserves all the convenience of using std::vector (rather than std::unique_ptr<float[]>). However, the allocator template parameter is unusual, so you will need to use generic code rather than std::vector-specific code.
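The linked answer isn't reproduced here, but the idea is roughly the following (a sketch of my own, only illustrative: an allocator that performs default-initialisation instead of value-initialisation, so trivial types such as float are left uninitialised; the name uninitialised_allocator simply matches the snippet above):

#include <memory>
#include <utility>

template <typename T>
struct uninitialised_allocator : std::allocator<T> {
    template <typename U>
    struct rebind { using other = uninitialised_allocator<U>; };

    uninitialised_allocator() = default;
    template <typename U>
    uninitialised_allocator(const uninitialised_allocator<U>&) noexcept {}

    // default-initialise: for float this leaves the memory untouched
    template <typename U>
    void construct(U* p) { ::new (static_cast<void*>(p)) U; }

    // forward every other construction unchanged
    template <typename U, typename... Args>
    void construct(U* p, Args&&... args) {
        ::new (static_cast<void*>(p)) U(std::forward<Args>(args)...);
    }
};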
I have two answers for you:
As previous answers have pointed out, using reserve to allocate the storage beforehand can be quite helpful, but:
push_back (or emplace_back) themselves have a performance penalty: during every call they have to check whether the vector has to be reallocated. If you already know the number of elements you will insert, you can avoid this penalty by setting the elements directly using the access operator [].
So the most efficient way I would recommend is:
Initialize the vector with the 'fill'-constructor:
std::vector<float> values(172490752, 0.0f);
Set the entries directly using the access operator:
values[i] = some_float;
++i;
The reason push_back is slow is that it will need to copy all the data several times as the vector grows, and even when it doesn’t need to copy data it needs to check. Vectors grow quickly enough that this doesn’t happen often, but it still does happen. A rough rule of thumb is that every element will need to be copied on average once or twice; the earlier elements will need to be copied a lot more, but almost half the elements won’t need to be copied at all.
You can avoid the copying, but not the checks, by calling reserve on the vector when you create it, ensuring it has enough space. You can avoid both the copying and the checks by creating it with the right size from the beginning, by giving the number of elements to the vector constructor, and then inserting using indexing as Tobias suggested; unfortunately, this also goes through the vector an extra time initializing everything.
If you know the number of floats at compile time and not just runtime, you could use an std::array, which avoids all these problems. If you only know the number at runtime, I would second Mark’s suggestion to go with std::unique_ptr<float[]>. You would create it with
size_t size = /* Number of floats */;
auto floats = std::unique_ptr<float[]>{new float[size]};
You don’t need to do anything special to delete this; when it goes out of scope it will free the memory. In most respects you can use it like a vector, but it won’t automatically resize.

Is shrink_to_fit the proper way of reducing the capacity of a `std::vector` to its size?

In C++11, shrink_to_fit was introduced to complement certain STL containers (e.g., std::vector, std::deque, std::string).
In short, its main functionality is to request that the container it is associated with reduce its capacity to fit its size. However, this request is non-binding, and the container implementation is free to optimize otherwise and leave the vector with a capacity greater than its size.
Furthermore, in a previous SO question the OP was discouraged from using shrink_to_fit to reduce the capacity of his std::vector to its size. The reasons not to do so are quoted below:
shrink_to_fit does nothing or it gives you cache locality issues and it's O(n) to execute (since you have to copy each item to their new, smaller home). Usually it's cheaper to leave the slack in memory. #Massa
Could someone be so kind as to address the following questions:
Do the arguments in the quotation hold?
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector).
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after-all?
Do the arguments in the quotation hold?
Measure and you will know. Are you constrained in memory? Can you figure out the correct size up front? It will be more efficient to reserve than it will be to shrink after the fact. In general I am inclined to agree on the premise that most uses are probably fine with the slack.
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector).
The comment does not only apply to shrink_to_fit, but to any other way of shrinking. Given that you cannot realloc in place, it involves acquiring a different chunk of memory and copying over there regardless of what mechanism you use for shrinking.
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after-all?
The request is non-binding, but the alternatives don't have better guarantees. The question is whether shrinking makes sense: if it does, then it makes sense to provide a shrink_to_fit operation that can take advantage of the fact that the objects are being moved to a new location. I.e., if the type T has a noexcept(true) move constructor, it will allocate the new memory and move the elements.
While you can achieve the same externally, this interface simplifies the operation. The equivalent to shrink_to_fit in C++03 would have been:
std::vector<T>(current).swap(current);
But the problem with this approach is that when the copy is done to the temporary it does not know that current is going to be replaced, there is nothing that tells the library that it can move the held objects. Note that using std::move(current) would not achieve the desired effect as it would move the whole buffer, maintaining the same capacity().
Implementing this externally would be a bit more cumbersome:
{
    std::vector<T> copy;
    if (noexcept(T(std::move(std::declval<T>())))) {
        copy.assign(std::make_move_iterator(current.begin()),
                    std::make_move_iterator(current.end()));
    } else {
        copy.assign(current.begin(), current.end());
    }
    copy.swap(current);
}
Assuming that I got the if condition right... which is probably not what you want to write every time that you want this operation.
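One possible way to package that intent without writing the condition by hand (again a sketch of mine, not code from the answer, and shrink_to_size is just an illustrative name) is to let std::move_if_noexcept choose between moving and copying each element:

#include <utility>
#include <vector>

template <typename T, typename Alloc>
void shrink_to_size(std::vector<T, Alloc>& v) {
    std::vector<T, Alloc> tmp(v.get_allocator());
    tmp.reserve(v.size());                         // exactly-sized buffer
    for (T& x : v)
        tmp.push_back(std::move_if_noexcept(x));   // moves only when the move cannot throw
    tmp.swap(v);                                   // v now owns the right-sized buffer
}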
Will the arguments hold?
As the arguments are originally mine, don't mind if I defend them, one by one:
Either shrink_to_fit does nothing (...)
As was mentioned, the standard says (many times, but in the case of vector it's section 23.3.7.3...) that the request is non-binding to allow an implementation latitude for optimizations. This means that the implementation can define shrink_to_fit as a no-op.
(...) or it gives you cache locality issues
In the case that shrink_to_fit is not implemented as a no-op, you have to allocate a new underlying buffer with capacity size(), copy (or, in the best case, move) construct all your N = size() new items from the old ones, destruct all the old ones (in the move case this should be optimized, but it's possible that this involves a loop again over the old container) and then destruct the old storage itself. This is done, in libstdc++-4.9, exactly as David Rodriguez has described, by
_Tp(__make_move_if_noexcept_iterator(__c.begin()),
__make_move_if_noexcept_iterator(__c.end()),
__c.get_allocator()).swap(__c);
and in libc++-3.5, by a function in __alloc_traits that does approximately the same.
Oh, and an implementation absolutely cannot rely on realloc (even if it uses malloc inside ::operator new for its memory allocations) because realloc, if it cannot shrink in-place, will either leave the memory alone (no-op case) or make a bitwise copy (and miss the opportunity for readjusting pointers, etc. that the proper C++ copying/moving constructors would give).
Sure, one can write a shrinkable memory allocator, and use it in the constructor of its vectors.
In the easy case where the vectors are larger than the cache lines, all that movement puts pressure on the cache.
and it's O(n)
If n = size(), I think it was established above that, at the very least, you have to do one n sized allocation, n copy or move constructions, n destructions, and one old_capacity sized deallocation.
usually it's cheaper just to leave the slack in memory
Obviously, unless you are really pressed for free memory (in which case it might be wiser to save your data to the disk and re-load it later on demand...)
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector).
The proper way is still shrink_to_fit... you just have to either not rely on it or know very well your implementation!
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after-all?
There is no better way, but the reason for the existence of shrink_to_fit is, AFAICT, that sometimes your program might feel memory pressure and it's one way of treating it. Not a very good way, but still.
HTH!
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector).
The 'swap trick' will trim a vector to the exact size required (from Effective STL):
vector<Person>(persons).swap(persons);
Particularly useful when the vector is empty, to release all memory:
vector<Person>().swap(persons);
Vectors were constantly tripping my unit tester's memory leak detection code because of retained allocations of unused space, and this sorted them out perfectly.
This is the kind of example where I really don't care about runtime efficiency (size or speed), but I do care about exact memory usage.
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after-all?
I really don't know what the point of providing a function that can legally do absolutely nothing is.
I cheered when I saw it had been introduced, then despaired when I found it couldn't be relied upon.
Perhaps we'll see maybe_sort() in the next version.

Efficient Array Reallocation in C++

How would I efficiently resize an array allocated using some standards-conforming C++ allocator? I know that no facilities for reallocation are provided in the C++ allocator interface, but did the C++11 revision enable us to work with them more easily? Suppose that I have a class foo with a copy-assignment operator foo& operator=(const foo& x) defined. If x.size() > this->size(), I'm forced to:
1. Call allocator.destroy() on all elements in the internal storage of foo.
2. Call allocator.deallocate() on the internal storage of foo.
3. Reallocate a new buffer with enough room for x.size() elements.
4. Use std::uninitialized_copy to populate the storage.
Is there some way that I more easily reallocate the internal storage of foo without having to go through all of this? I could provide an actual code sample if you think that it would be useful, but I feel that it would be unnecessary here.
Based on a previous question, the approach that I took for handling large arrays that could grow and shrink with reasonable efficiency was to write a container similar to a deque that broke the array down into multiple pages of smaller arrays. So for example, say we have an array of n elements, we select a page size p, and create 1 + n/p arrays (pages) of p elements. When we want to re-allocate and grow, we simply leave the existing pages where they are, and allocate the new pages. When we want to shrink, we free the totally empty pages.
The downside is that array access is slightly slower, in that given an index i, you need the page = i / p, and the offset into the page = i % p, to get the element. I find this is still very fast however and provides a good solution. Theoretically, std::deque should do something very similar, but for the cases I tried with large arrays it was very slow. See comments and notes on the linked question for more details.
There is also a memory inefficiency in that given n elements, we are always holding p - n % p elements in reserve. i.e. we only ever allocate or deallocate complete pages. This was the best solution I could come up with in the context of large arrays with the requirement for re-sizing and fast access, while I don't doubt there are better solutions I'd love to see them.
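A bare-bones illustration of the page/offset arithmetic described above (my sketch, not the author's container; it assumes T is default-constructible and grows page by page, indexing with i / p and i % p):

#include <cstddef>
#include <vector>

template <typename T>
class paged_array {
public:
    explicit paged_array(std::size_t page_size) : p_(page_size) {}

    void push_back(const T& value) {
        if (n_ % p_ == 0)                 // current page is full (or there is none yet)
            pages_.emplace_back(p_);      // allocate a new page; existing pages stay in place
        pages_[n_ / p_][n_ % p_] = value;
        ++n_;
    }

    T& operator[](std::size_t i) { return pages_[i / p_][i % p_]; }
    std::size_t size() const { return n_; }

private:
    std::size_t p_;                       // elements per page
    std::size_t n_ = 0;                   // total number of elements stored
    std::vector<std::vector<T>> pages_;   // each inner vector is one fixed-size page
};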
A similar problem also arises if x.size() > this->size() in foo& operator=(foo&& x).
No, it doesn't. You just swap.
There is no function that will resize in place or return 0 on failure (to resize). I don't know of any operating system that supports that kind of functionality beyond telling you how big a particular allocation actually is.
All operating systems do, however, have support for implementing realloc, which does a copy if it cannot resize in place.
So, you can't have it because the C++ language would not be implementable on most current operating systems if you had to add a standard function to do it.
There are the C++11 rvalue reference and move constructors.
There's a great video talk on them.
Even if reallocate existed, you could actually only avoid step #2 from your question, and only in a copy constructor. However, in the case of growing the internal buffer, reallocate can save all four operations.
Is the internal buffer of your array contiguous? If so, see the answer in your link.
If not, a hashed array tree or an array list may be your choice to avoid reallocation.
Interestingly, the default allocator for g++ is smart enough to use the same address for consecutive deallocations and allocations of larger sizes, as long as there is enough unused space after the end of the initially-allocated buffer. While I haven't tested what I'm about to claim, I doubt that there is much of a time difference between malloc/realloc and allocate/deallocate/allocate.
This leads to a potentially very dangerous, nonstandard shortcut that may work if you know that there is enough room after the current buffer so that a reallocation would not result in a new address. (1) Deallocate the current buffer without calling alloc.destroy() (2) Allocate a new, larger buffer and check the returned address (3) If the new address equals the old address, proceed happily; otherwise, you lost your data (4) Call allocator.construct() for elements in the newly-allocated space.
I wouldn't advocate using this for anything other than satisfying your own curiosity, but it does work on g++ 4.6.

resize versus push_back in std::vector: does it avoid an unnecessary copy assignment?

When invoking the method push_back from std::vector, its size is incremented by one, implying the creation of a new instance, and then the parameter you pass will be copied into this recently created element, right? Example:
myVector.push_back(MyVectorElement());
Well then, if I want to increase the size of the vector with an element simply using its default values, wouldn't it be better to use the resize method instead? I mean like this:
myVector.resize(myVector.size() + 1);
As far as I can see, this would accomplish exactly the same thing but would avoid the totally unnecessary assignment copy of the attributes of the element.
Is this reasoning correct or am I missing something?
At least with GCC, it doesn't matter which you use (results below). However, if you get to the point where you are having to worry about it, you should be using pointers or (even better) some form of smart pointers; I would of course recommend the ones in the boost library.
If you wanted to know which was better to use in practice, I would suggest either push_back or reserve as resize will resize the vector every time it is called unless it is the same size as the requested size. push_back and reserve will only resize the vector if needed. This is a good thing as if you want to resize the vector to size+1, it may already be at size+20, so calling resize would not provide any benefit.
Test Code
#include <iostream>
#include <vector>

class Elem{
public:
    Elem(){
        std::cout << "Construct\n";
    }
    Elem(const Elem& e){
        std::cout << "Copy\n";
    }
    ~Elem(){
        std::cout << "Destruct\n";
    }
};

int main(int argc, char* argv[]){
    {
        std::cout << "1\n";
        std::vector<Elem> v;
        v.push_back(Elem());
    }
    {
        std::cout << "\n2\n";
        std::vector<Elem> v;
        v.resize(v.size()+1);
    }
}
Test Output
1
Construct
Copy
Destruct
Destruct
2
Construct
Copy
Destruct
Destruct
I find myVector.push_back(MyVectorElement()); much more direct and easier to read.
The thing is, resize doesn't just resize the array and default-construct elements on those places; that's just what it defaults to. It actually takes a second parameter which is what each new element will be made a copy of, and this defaults to T(). In essence, your two code samples are exactly the same.
A C++0x perspective concerning the test code of Yacobi's accepted answer:
Add a move constructor to the class:
Elem(Elem&& e) { std::cout << "Move\n"; }
With gcc I get "Move" instead of "Copy" as output for push_back, which is far more efficient in general.
Even slightly better with emplace operations (they take the same arguments as the constructor):
v.emplace_back()
Test Output:
1
Construct
Destruct
2
Construct
Copy
Destruct
Destruct
At EA (Electronic Arts) this was considered such a big problem that they wrote their own version of the STL, EASTL, which among many other things includes a push_back(void) in their vector class.
You are right that push_back cannot avoid at least one copy, but I think you are worrying about the wrong thing; resize will not necessarily perform any better either (it copies the value of its second parameter, which defaults to a default-constructed temporary anyway).
vector is not the right container for objects which are expensive to copy. (Almost) any push_back or resize could potentially cause every current member of the vector to be copied as well as any new member.
When you do push_back() the method checks the underlying storage area to see if space is needed. If space is needed then it will allocate a new contiguous area for all elements and copy the data to the new area.
BUT: The size of the newly allocated space is not just one element bigger. It uses a nifty little algorithm for increasing space (I don't think the algorithm is defined as part of the standard but it usually doubles the allocated space). Thus if you push a large number of elements only a small percentage of them actually cause the underlying space to be re-allocated.
To actually increase the allocated space manually you have two options:
reserve()
This increases the underlying storage space without adding elements to the vector. Thus making it less likely that future push_back() calls will require the need to increase the space.
resize()
This actually adds/removes elements to the vector to make it the correct size.
capacity()
Is the total number of elements that can be stored before the underlying storage needs to be re-allocated. Thus if capacity() > size() a push_back will not cause the vector storage to be reallocated.
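To make the distinction concrete (a small sketch; the exact capacity values are implementation-defined, only the sizes are guaranteed):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> a, b;
    a.reserve(100);   // allocates storage, adds no elements
    b.resize(100);    // allocates storage and adds 100 value-initialised ints

    std::cout << a.size() << ' ' << a.capacity() << '\n';  // 0 100 (capacity at least 100)
    std::cout << b.size() << ' ' << b.capacity() << '\n';  // 100 100
}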
myVector.resize(myVector.size() + 1);
will call the default constructor of MyVectorElement. What are you trying to accomplish? In order to reserve space in a vector (and save memory allocation overhead) there's the reserve() method. You cannot avoid the constructor.
When you call push_back, assuming that no resizing of the underlying storage is needed, the vector class will use the "placement new" operator to copy-construct the new elements in-place. The elements in the vector will not be default-constructed before being copy-constructed.
When you call resize, almost the exact same sequence occurs. vector allocates storage and then copies the default value via placement new into each new location.
The construction looks like this:
::new (p) _T1(_Val);
Where p is the pointer to the vector storage, _T1 is the type being stored in the vector, and _Val is the "default value" parameter (which defaults to _T1()).
In short, resize and push_back do the same things under the covers, and the speed difference would be due to multiple internal allocations, multiple array bounds checks and function call overhead. The time and memory complexity would be the same.
Obviously you're worried about efficiency and performance.
std::vector is actually a very good performer. Use the reserve method to preallocate space if you know roughly how big it might get. Obviously this is at the expense of potentially wasted memory, but it can have quite a performance impact if you're using push_back a lot.
I believe it's implementation dependent as to how much memory is reserved up front for a vector if any at all, or how much is reserved for future use as elements are added. Worst case scenario is what you're stating - only growing by one element at a time.
Try some performance testing in your app by comparing without the reserve and with it.
push_back: you create the object and it gets copied in the vector.
resize: the vector creates the object with the default constructor and copies it in the vector.
Speed difference: You will have to test your implementation of the STL and your compiler, but I think it won't matter.
I suspect the actual answer is strongly a function of the STL implementation and compiler used, however, the "resize" function has the prototype (ref)
void resize( size_type num, TYPE val = TYPE() );
which implies val is default constructed, and copied into the newly allocated (or possibly previously allocated but unused) space via placement new and the copy-constructor. As such both operations require the same sequence of actions:
Call default constructor
Allocate space
Initialise via a copy constructor
It's probably better to defer to the clearer and more generic (in terms of STL containers) push_back, rather than apply premature optimisation - if a profiler is highlighting a push_back as a hotspot then the most likely cause is the memory allocation which is best resolved through judicious use of reserve.