I remember that, since the beginning of time, the most popular approach to implementing std::list<>::sort() was the classic Merge Sort algorithm implemented in bottom-up fashion (see also What makes the gcc std::list sort implementation so fast?).
I remember seeing someone aptly refer to this strategy as an "onion chaining" approach.
At least that's the way it is in GCC's implementation of the C++ standard library (see, for example, here). And this is how it was in the old Dinkumware STL in MSVC's version of the standard library, as well as in all versions of MSVC all the way up to VS2013.
However, the standard library supplied with VS2015 suddenly no longer follows this sorting strategy. The library shipped with VS2015 uses a rather straightforward recursive implementation of top-down Merge Sort. This strikes me as strange, since the top-down approach requires access to the mid-point of the list in order to split it in half. Since std::list<> does not support random access, the only way to find that mid-point is to literally iterate through half of the list. Also, at the very beginning it is necessary to know the total number of elements in the list (which was not necessarily an O(1) operation before C++11).
Nevertheless, std::list<>::sort() in VS2015 does exactly that. Here's an excerpt from that implementation that locates the mid-point and performs recursive calls:
...
iterator _Mid = _STD next(_First, _Size / 2);
_First = _Sort(_First, _Mid, _Pred, _Size / 2);
_Mid = _Sort(_Mid, _Last, _Pred, _Size - _Size / 2);
...
As you can see, they just nonchalantly use std::next to walk through the first half of the list and arrive at _Mid iterator.
What could be the reason behind this switch, I wonder? All I see is a seemingly obvious inefficiency of repetitive calls to std::next at each level of recursion. Naive logic says that this is slower. If they are willing to pay this kind of price, they probably expect to get something in return. What are they getting then? I don't immediately see this algorithm as having better cache behavior (compared to the original bottom-up approach). I don't immediately see it as behaving better on pre-sorted sequences.
Granted, since C++11 std::list<> is basically required to store its element count, which makes the above slightly more efficient, since we always know the element count in advance. But that still does not seem to be enough to justify the sequential scan on each level of recursion.
(Admittedly, I haven't tried to race the implementations against each other. Maybe there are some surprises there.)
Note: this answer has been updated to address all of the issues mentioned in the comments below and after the question. It makes the same change from an array of lists to an array of iterators, while retaining the faster bottom-up merge sort algorithm and eliminating the small chance of stack overflow due to recursion that comes with the top-down merge sort algorithm.
Initially I assumed that Microsoft would not have switched to a less efficient top down merge sort when it switched to using iterators unless it was necessary, so I was looking for alternatives. It was only when I tried to analyze the issues (out of curiosity) that I realized that the original bottom up merge sort could be modified to work with iterators.
In #sbi's comment, he asked the author of the top-down approach, Stephan T. Lavavej, why the change to iterators was made. Stephan's response was "to avoid memory allocation and default constructing allocators". VS2015 introduced non-default-constructible and stateful allocators, which made the prior version's array of lists an issue: each instance of a list allocates a dummy node, so a change would have been needed to handle the case of no default allocator.
Lavavej's solution was to switch to using iterators to keep track of run boundaries within the original list instead of an internal array of lists. The merge logic was changed to use three iterator parameters: the first is an iterator to the start of the left run, the second is an iterator to the end of the left run (which is also the start of the right run), and the third is an iterator to the end of the right run. The merge process uses std::list::splice to move nodes within the original list during merge operations. This has the added benefit of being exception safe: if a caller's compare function throws an exception, the list will be re-ordered, but no loss of data will occur (assuming splice can't fail). With the prior scheme, some (or most) of the data would be in the internal array of lists if an exception occurred, and that data would be lost from the original list.
I changed bottom up merge sort to use an array of iterators instead of an array of lists, where array[i] is an iterator to the start of a sorted run with 2^i nodes, or it is empty (using std::list::end to indicate empty, since iterators can't be null). Similar to the top down approach, the array of iterators is only used to keep track of sorted run boundaries within the original linked list, with the same merge logic as top down that uses std::list::splice to move nodes within the original linked list.
A single scan of the list is done, building up sorted runs to the left of the current scan position according to the sorted run boundaries in the array, until all nodes are merged into the sorted runs. Then the sorted runs are merged, resulting in a sorted list.
For example, for a list with 7 nodes, after the scan:
2 1 0 array index
run0->run0->run0->run0->run1->run1->run2->end
Then the 3 sorted runs are merged right to left via merge(left, right), so that the sort is stable.
If a linked list is large and the nodes are scattered, there will be a lot of cache misses, and top down will be about 40% to 50% slower than bottom up depending on the processor. Then again, if there's enough memory, it would usually be faster to move the list to an array or vector, sort the array or vector, then create a new list from the sorted array or vector.
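For instance, a minimal sketch of that vector detour might look like this (the helper name sort_via_vector is mine; it assumes T is comparable with operator<):

#include <algorithm>
#include <list>
#include <vector>

// Sketch: copy the list into a vector, sort that, then rebuild the list.
// With scattered nodes this trades the list's cache misses for one
// contiguous sort plus two linear passes.
template <typename T>
void sort_via_vector(std::list<T> &lst)
{
    std::vector<T> v(lst.begin(), lst.end());   // O(n) copy out
    std::sort(v.begin(), v.end());              // cache-friendly sort
    lst.assign(v.begin(), v.end());             // O(n) copy back
}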
Example C++ code:
template <typename T>
typename std::list<T>::iterator Merge(std::list<T> &ll,
                        typename std::list<T>::iterator li,
                        typename std::list<T>::iterator ri,
                        typename std::list<T>::iterator ei);

// iterator array size
#define ASZ 32

template <typename T>
void SortList(std::list<T> &ll)
{
    if (ll.size() < 2)                          // return if nothing to do
        return;
    typename std::list<T>::iterator ai[ASZ];    // array of iterators (bgn lft)
    typename std::list<T>::iterator ri;         // right iterator (end lft, bgn rgt)
    typename std::list<T>::iterator ei;         // end iterator (end rgt)
    size_t i;
    for (i = 0; i < ASZ; i++)                   // "clear" array
        ai[i] = ll.end();
    // merge nodes into array of runs
    for (ei = ll.begin(); ei != ll.end();) {
        ri = ei++;
        for (i = 0; (i < ASZ) && ai[i] != ll.end(); i++) {
            ri = Merge(ll, ai[i], ri, ei);
            ai[i] = ll.end();
        }
        if (i == ASZ)
            i--;
        ai[i] = ri;
    }
    // merge array of runs into single sorted list
    // ei = ll.end();
    for (i = 0; (i < ASZ) && ai[i] == ei; i++);
    ri = ai[i++];
    while (1) {
        for ( ; (i < ASZ) && ai[i] == ei; i++);
        if (i == ASZ)
            break;
        ri = Merge(ll, ai[i++], ri, ei);
    }
}

template <typename T>
typename std::list<T>::iterator Merge(std::list<T> &ll,
                        typename std::list<T>::iterator li,
                        typename std::list<T>::iterator ri,
                        typename std::list<T>::iterator ei)
{
    typename std::list<T>::iterator ni;
    ni = (*ri < *li) ? ri : li;                 // ni = start of merged run
    while (1) {
        if (*ri < *li) {                        // right < left: splice right node
            ll.splice(li, ll, ri++);
            if (ri == ei)
                return ni;
        } else {                                // otherwise advance left
            if (++li == ri)
                return ni;
        }
    }
}
Example replacement code for VS2019's std::list::sort(), in the include file <list>. The merge logic was made into a separate internal function, since it's now used in two places. The call to _Sort from std::list::sort() is _Sort(begin(), end(), _Pred, this->_Mysize()), where _Pred is a pointer to the compare function (which defaults to std::less()).
private:
    template <class _Pr2>
    iterator _Merge(_Pr2 _Pred, iterator _First, iterator _Mid, iterator _Last) {
        iterator _Newfirst = _First;
        for (bool _Initial_loop = true;;
             _Initial_loop = false) { // [_First, _Mid) and [_Mid, _Last) are sorted and non-empty
            if (_DEBUG_LT_PRED(_Pred, *_Mid, *_First)) { // consume _Mid
                if (_Initial_loop) {
                    _Newfirst = _Mid; // update return value
                }
                splice(_First, *this, _Mid++);
                if (_Mid == _Last) {
                    return _Newfirst; // exhausted [_Mid, _Last); done
                }
            }
            else { // consume _First
                ++_First;
                if (_First == _Mid) {
                    return _Newfirst; // exhausted [_First, _Mid); done
                }
            }
        }
    }

    template <class _Pr2>
    void _Sort(iterator _First, iterator _Last, _Pr2 _Pred,
        size_type _Size) { // order [_First, _Last), using _Pred
        // _Size must be distance from _First to _Last
        if (_Size < 2) {
            return; // nothing to do
        }
        const size_t _ASZ = 32;             // array size
        iterator _Ai[_ASZ];                 // array of iterators to runs (bgn lft)
        iterator _Mi;                       // middle iterator to run (end lft, bgn rgt)
        iterator _Li;                       // last (end) iterator to run (end rgt)
        size_t _I;                          // index to _Ai
        for (_I = 0; _I < _ASZ; _I++)       // "empty" array
            _Ai[_I] = _Last;                // _Ai[] == _Last => empty entry
        // merge nodes into array of runs
        for (_Li = _First; _Li != _Last;) {
            _Mi = _Li++;
            for (_I = 0; (_I < _ASZ) && _Ai[_I] != _Last; _I++) {
                _Mi = _Merge(_Pass_fn(_Pred), _Ai[_I], _Mi, _Li);
                _Ai[_I] = _Last;
            }
            if (_I == _ASZ)
                _I--;
            _Ai[_I] = _Mi;
        }
        // merge array of runs into single sorted list
        for (_I = 0; _I < _ASZ && _Ai[_I] == _Last; _I++);
        _Mi = _Ai[_I++];
        while (1) {
            for (; _I < _ASZ && _Ai[_I] == _Last; _I++);
            if (_I == _ASZ)
                break;
            _Mi = _Merge(_Pass_fn(_Pred), _Ai[_I++], _Mi, _Last);
        }
    }
The remainder of this answer is historical and is only left in place for the sake of the historical comments; otherwise it is no longer relevant.
I was able to reproduce the issue (old sort fails to compile, new one works) based on a demo from #IgorTandetnik:
#include <iostream>
#include <list>
#include <memory>

template <typename T>
class MyAlloc : public std::allocator<T> {
public:
    MyAlloc(T) {} // suppress default constructor
    template <typename U>
    MyAlloc(const MyAlloc<U>& other) : std::allocator<T>(other) {}
    template <class U> struct rebind { typedef MyAlloc<U> other; };
};

int main()
{
    std::list<int, MyAlloc<int>> l(MyAlloc<int>(0));
    l.push_back(3);
    l.push_back(0);
    l.push_back(2);
    l.push_back(1);
    l.sort();
    return 0;
}
I noticed this change back in July 2016 and emailed P.J. Plauger about it on August 1, 2016. A snippet of his reply:
Interestingly enough, our change log doesn't reflect this change. That
probably means it was "suggested" by one of our larger customers and
got by me on the code review. All I know now is that the change came
in around the autumn of 2015. When I reviewed the code, the first
thing that struck me was the line:
iterator _Mid = _STD next(_First, _Size / 2);
which, of course, can take a very long time for a large list.
The code looks a bit more elegant than what I wrote in early 1995(!),
but definitely has worse time complexity. That version was modeled
after the approach by Stepanov, Lee, and Musser in the original STL.
They are seldom found to be wrong in their choice of algorithms.
I'm now reverting to our latest known good version of the original code.
I don't know if P.J. Plauger's reversion to the original code dealt with the new allocator issue, or if or how Microsoft interacts with Dinkumware.
For a comparison of the top-down versus bottom-up methods, I created a linked list with 4 million elements, each consisting of one 64-bit unsigned integer, assuming I would end up with a doubly linked list of nearly sequentially ordered nodes (even though they were dynamically allocated), filled them with random numbers, then sorted them. The nodes don't move; only the linkage is changed, but now traversing the list accesses the nodes in random order. I then filled those randomly ordered nodes with another set of random numbers and sorted them again. I compared the 2015 top-down approach with the prior bottom-up approach, modified to match the other changes made for 2015 (sort() now calls sort() with a predicate compare function, rather than having two separate functions). These are the results. Update: I added a node-pointer-based version, and also noted the time for simply creating a vector from the list, sorting the vector, and copying back.
sequential nodes: 2015 version 1.6 seconds, prior version 1.5 seconds
random nodes: 2015 version 4.0 seconds, prior version 2.8 seconds
random nodes: node pointer based version 2.6 seconds
random nodes: create vector from list, sort, copy back 1.25 seconds
For sequential nodes, the prior version is only a bit faster, but for random nodes the prior version is 30% faster, the node-pointer-based version is 35% faster, and creating a vector from the list, sorting the vector, then copying back is 69% faster.
Below is the first replacement code for std::list::sort() that I used to compare the prior bottom-up-with-small-array (_Binlist[]) method versus VS2015's top-down approach. I wanted the comparison to be fair, so I modified a copy of <list>.
void sort()
{   // order sequence, using operator<
    sort(less<>());
}

template <class _Pr2>
void sort(_Pr2 _Pred)
{   // order sequence, using _Pred
    if (2 > this->_Mysize())
        return;
    const size_t _MAXBINS = 25;
    _Myt _Templist, _Binlist[_MAXBINS];
    while (!empty())
    {
        // _Templist = next element
        _Templist._Splice_same(_Templist.begin(), *this, begin(),
            ++begin(), 1);
        // merge with array of ever larger bins
        size_t _Bin;
        for (_Bin = 0; _Bin < _MAXBINS && !_Binlist[_Bin].empty(); ++_Bin)
            _Templist.merge(_Binlist[_Bin], _Pred);
        // don't go past end of array
        if (_Bin == _MAXBINS)
            _Bin--;
        // update bin with merged list, empty _Templist
        _Binlist[_Bin].swap(_Templist);
    }
    // merge bins back into caller's list
    for (size_t _Bin = 0; _Bin < _MAXBINS; _Bin++)
        if (!_Binlist[_Bin].empty())
            this->merge(_Binlist[_Bin], _Pred);
}
I made some minor changes. The original code kept track of the actual maximum bin in a variable named _Maxbin, but the overhead in the final merge is small enough that I removed the code associated with _Maxbin. During the array build, the original code's inner loop merged into a _Binlist[] element, followed by a swap into _Templist, which seemed pointless. I changed the inner loop to just merge into _Templist, only swapping once an empty _Binlist[] element is found.
Below is a node-pointer-based replacement for std::list::sort() that I used for yet another comparison. This eliminates allocation-related issues. If a compare exception is possible and occurs, all the nodes in the array and temp list (pNode) would have to be appended back to the original list, or possibly a compare exception could be treated as a less-than compare.
void sort()
{   // order sequence, using operator<
    sort(less<>());
}

template <class _Pr2>
void sort(_Pr2 _Pred)
{   // order sequence, using _Pred
    const size_t _NUMBINS = 25;
    _Nodeptr aList[_NUMBINS];               // array of lists
    _Nodeptr pNode;
    _Nodeptr pNext;
    _Nodeptr pPrev;
    if (this->size() < 2)                   // return if nothing to do
        return;
    this->_Myhead()->_Prev->_Next = 0;      // set last node ->_Next = 0
    pNode = this->_Myhead()->_Next;         // set ptr to start of list
    size_t i;
    for (i = 0; i < _NUMBINS; i++)          // zero array
        aList[i] = 0;
    while (pNode != 0)                      // merge nodes into array
    {
        pNext = pNode->_Next;
        pNode->_Next = 0;
        for (i = 0; (i < _NUMBINS) && (aList[i] != 0); i++)
        {
            pNode = _MergeN(_Pred, aList[i], pNode);
            aList[i] = 0;
        }
        if (i == _NUMBINS)
            i--;
        aList[i] = pNode;
        pNode = pNext;
    }
    pNode = 0;                              // merge array into one list
    for (i = 0; i < _NUMBINS; i++)
        pNode = _MergeN(_Pred, aList[i], pNode);
    this->_Myhead()->_Next = pNode;         // update sentinel node links
    pPrev = this->_Myhead();                //  and _Prev pointers
    while (pNode)
    {
        pNode->_Prev = pPrev;
        pPrev = pNode;
        pNode = pNode->_Next;
    }
    pPrev->_Next = this->_Myhead();
    this->_Myhead()->_Prev = pPrev;
}

template <class _Pr2>
_Nodeptr _MergeN(_Pr2 &_Pred, _Nodeptr pSrc1, _Nodeptr pSrc2)
{
    _Nodeptr pDst = 0;                      // destination head ptr
    _Nodeptr *ppDst = &pDst;                // ptr to head or prev->_Next
    if (pSrc1 == 0)
        return pSrc2;
    if (pSrc2 == 0)
        return pSrc1;
    while (1)
    {
        if (_DEBUG_LT_PRED(_Pred, pSrc2->_Myval, pSrc1->_Myval))
        {
            *ppDst = pSrc2;
            pSrc2 = *(ppDst = &pSrc2->_Next);
            if (pSrc2 == 0)
            {
                *ppDst = pSrc1;
                break;
            }
        }
        else
        {
            *ppDst = pSrc1;
            pSrc1 = *(ppDst = &pSrc1->_Next);
            if (pSrc1 == 0)
            {
                *ppDst = pSrc2;
                break;
            }
        }
    }
    return pDst;
}
#sbi asked Stephan T. Lavavej, MSVC's standard library maintainer, who responded:
I did that to avoid memory allocation and default constructing
allocators.
To this I'll add "free basic exception safety".
To elaborate: the pre-VS2015 implementation suffers from several defects:
_Myt _Templist, _Binlist[_MAXBINS]; creates a bunch of intermediate lists (_Myt is simply a typedef for the current instantiation of list; a less confusing spelling for that is, well, list) to hold the nodes during sorting, but these lists are default constructed, which leads to a multitude of problems:
1. If the allocator used is not default constructible (and there is no requirement that allocators be default constructible), this simply won't compile, because the default constructor of list will attempt to default construct its allocator.
2. If the allocator used is stateful, then a default-constructed allocator may not compare equal to this->get_allocator(), which means that the later splices and merges are technically undefined behavior and may well break in debug builds. ("Technically", because the nodes are all merged back in the end, so you don't actually deallocate with the wrong allocator if the function successfully completes.)
3. Dinkumware's list uses a dynamically allocated sentinel node, which means that the above will perform _MAXBINS + 1 dynamic allocations. I doubt that many people expect sort to potentially throw bad_alloc. If the allocator is stateful, then these sentinel nodes may not even be allocated from the right place (see #2).
4. The code is not exception safe. In particular, the comparison is allowed to throw, and if it throws while there are elements in the intermediate lists, those elements are simply destroyed with the lists during stack unwinding. Users of sort don't expect the list to be sorted if sort throws an exception, of course, but they probably also don't expect the elements to go missing. This interacts very poorly with #2 above, because now it's not just technical undefined behavior: the destructor of those intermediate lists will be deallocating and destroying the nodes spliced into them with the wrong allocator.
Are those defects fixable? Probably. #1 and #2 can be fixed by passing get_allocator() to the constructor of the lists:
_Myt _Templist(get_allocator());
_Myt _Binlist[_MAXBINS] = { _Myt(get_allocator()), _Myt(get_allocator()),
_Myt(get_allocator()), /* ... repeat _MAXBINS times */ };
The exception safety problem can be fixed by surrounding the loop with a try-catch that splices all the nodes in the intermediate lists back into *this without regard to order if an exception is thrown.
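As a rough illustration of that shape (not Dinkumware's actual code; temp.sort here merely stands in for the real bin-merge loop):

#include <list>

// Sketch of the rollback pattern described above, with hypothetical names.
template <class T, class Pred>
void sort_with_rollback(std::list<T>& lst, Pred pred)
{
    std::list<T> temp(lst.get_allocator()); // same allocator: addresses #1 and #2
    temp.splice(temp.begin(), lst);         // park all nodes; splice can't throw here
    try {
        temp.sort(pred);                    // stand-in for the actual merge loop
    } catch (...) {
        lst.splice(lst.end(), temp);        // rollback: nodes return, order unspecified
        throw;
    }
    lst.splice(lst.begin(), temp);          // success: hand the sorted nodes back
}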
Fixing #3 is harder, because that means not using list at all as the holder of nodes, which probably requires a decent amount of refactoring, but it's doable.
The question is: is it worth jumping through all these hoops to improve the performance of a container that has reduced performance by design? After all, someone who really cares about performance probably won't be using list in the first place.
I have a std::vector<int> and I want to throw away the first x and the last y elements. Just copying the remaining elements is not an option, since that is O(n).
Is there something like vector.begin() += x to make the vector simply start later and end earlier?
I also tried
items = std::vector<int> (&items[x+1],&items[0]+items.size()-y);
where items is my vector, but this gave me a bad_alloc.
C++ standard algorithms work on ranges, not on actual containers, so you don't need to extract anything: you just need to adjust the iterator range you're working with.
template <typename T>
void foo(const std::vector<T>& vec, const size_t start, const size_t end)
{
    assert(start <= end && end <= vec.size()); // the range must be valid
    auto it1 = vec.begin() + start;
    auto it2 = vec.begin() + end;
    std::whatever(it1, it2);
}
I don't see why it needs to be any more complicated than that.
If you only need a range of values, you can represent that as a pair of iterators from first to last element of the range. These can be acquired in constant time.
Edit: According to the description in the comments, this seems like the most sensible solution. If your functions expect a vector reference, then you'll need to refactor a bit.
Other solutions:
If you don't need the original vector (and can therefore modify it), and the order of elements is not relevant, you can swap the first x elements with the elements at positions n-x-y...n-y and then remove the last x+y elements. This can be done in O(x+y) time; see the sketch after this list.
If appropriate, you could choose to use std::list for which what you're asking can be done in constant time if you have iterators to the first and last node of the sublist. This also requires that you can modify the original list but the order of elements won't change.
If those are not options, then you need to copy and are stuck with O(n).
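For the first option above, a minimal sketch (the helper name is mine; it assumes the two swapped ranges don't overlap, i.e. x <= n - x - y):

#include <algorithm>
#include <cassert>
#include <vector>

// Keeps the elements originally at [x, n - y), scrambling their order,
// in O(x + y) time.
void discard_ends_unordered(std::vector<int>& v, std::size_t x, std::size_t y)
{
    const std::size_t n = v.size();
    assert(x + y <= n);
    assert(x <= n - x - y);  // assumption: swapped ranges don't overlap
    // Swap the doomed first x elements with the last x survivors,
    // which sit at positions [n - y - x, n - y).
    std::swap_ranges(v.begin(), v.begin() + x, v.begin() + (n - y - x));
    // Everything past n - x - y is now disposable.
    v.resize(n - x - y);
}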
The other answers are correct: usually iterators will do.
Nevertheless, you can also write a vector view. Here is a sketch:
#include <cstddef>
#include <vector>

template <typename T>
struct vector_view
{
    vector_view(std::vector<T> const& v, size_t ind_begin, size_t ind_end)
        : _v(v)
        , _size(ind_end - ind_begin)   // size of the viewed range
        , _ind_begin(ind_begin) {}

    auto size() const { return _size; }

    auto const& operator[](size_t i) const
    {
        // possibly check for input outside range
        return _v[i + _ind_begin];
    }

    // conversion of view to std::vector
    operator std::vector<T>() const
    {
        std::vector<T> ret(_size);
        for (size_t i = 0; i < _size; ++i)   // fill it from the viewed range
            ret[i] = (*this)[i];
        return ret;
    }

private:
    std::vector<T> const& _v;
    size_t _size;
    size_t _ind_begin;
};
Expose further methods as required (some iterator stuff might be appropriate when you want to use that with the standard library algorithms).
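For example, a quick usage sketch of the view as defined above:

#include <iostream>

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    vector_view<int> view(v, 1, 4);           // views {2, 3, 4}
    for (size_t i = 0; i < view.size(); ++i)
        std::cout << view[i] << ' ';          // prints: 2 3 4
    std::vector<int> w = view;                // materialize via the conversion
}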
Further, take care regarding the validity of the const reference std::vector<T> const& _v; if dangling could be an issue, it would be better to work with shared pointers.
One can also think of more general approaches here, for example, use strides or similar things.
I have code that creates several object instances (each instance having a fitness value, among other things) from which I want to sample N unique objects using weighted selection based on their fitness values. All objects not sampled are then discarded (but they need to be initially created to determine their fitness value).
My current code looks something like this:
vector<Item> getItems(..) {
    std::vector<Item> items;
    ..  // generate N values for items
    int N = items.size();

    std::vector<double> fitnessVals;
    for (auto it = items.begin(); it != items.end(); ++it)
        fitnessVals.push_back(it->getFitness());

    std::mt19937& rng = getRng();
    for (int i = 0; i < N; ++i) {
        std::discrete_distribution<int> dist(fitnessVals.begin() + i, fitnessVals.end());
        unsigned int pick = dist(rng);
        std::swap(fitnessVals.at(i), fitnessVals.at(pick));
        std::swap(items.at(i), items.at(pick));
    }
    items.erase(items.begin() + N, items.end());
    return items;
}
Typically ~10,000 instances are initially created, with N being ~200. The fitness value is non-negative, usually valued at ~70. It could go as high as ~3000, but higher values are increasingly more unlikely.
Is there an elegant way to get rid of the fitnessVals vector? Or perhaps a better way to do this in general? Efficiency is important, but I'm also wondering about good C++ coding practices.
If you're asking whether you can do this just with the items in your items vector, the answer is yes. The following is a rather hideous but nonetheless effective way to do that; I apologize in advance for the density.
This wraps the unsuspecting container iterator in another iterator of our own devising, one that pairs it with a member function of your choice. You may have to dance with const in this to get it to work correctly with your member function choice. That task I leave to you.
template <typename Iter, typename R>
struct memfn_iterator_s :
    public std::iterator<std::input_iterator_tag, R>
{
    using value_type = typename std::iterator_traits<Iter>::value_type;

    memfn_iterator_s(Iter it, R (value_type::*fn)())
        : m_it(it), mem_fn(fn) {}

    R operator*()
    {
        return ((*m_it).*mem_fn)();
    }
    bool operator==(const memfn_iterator_s& arg) const
    {
        return m_it == arg.m_it;
    }
    bool operator!=(const memfn_iterator_s& arg) const
    {
        return m_it != arg.m_it;
    }
    memfn_iterator_s& operator++() { ++m_it; return *this; }

private:
    R (value_type::*mem_fn)();
    Iter m_it;
};
A generator function follows to create the above monstrosity:
template <typename Iter, typename R>
memfn_iterator_s<Iter, R> memfn_iterator(
    Iter it,
    R (std::iterator_traits<Iter>::value_type::*fn)())
{
    return memfn_iterator_s<Iter, R>(it, fn);
}
What this buys you is the ability to do this:
auto it_end = memfn_iterator(items.end(), &Item::getFitness);
for (unsigned int i = 0; i < N; ++i)
{
    auto it_begin = memfn_iterator(items.begin() + i, &Item::getFitness);
    std::discrete_distribution<unsigned int> dist(it_begin, it_end);
    std::swap(items.at(i), items.at(i + dist(rng)));
}
items.erase(items.begin() + N, items.end());
No temporary array is required. The member function is called for the respective item when required by the discrete distribution (which usually keeps its own vector of weights, so replicating that effort would be redundant).
Dunno if you'll get anything helpful or useful out of that, but it was fun to think about.
It's pretty nice that they have a discrete distribution in STL. As far as I know, the most efficient algorithm for sampling from a set of weighted objects (i.e., with probability proportional to weights) is the alias method. There's a Java implementation here: http://www.keithschwarz.com/interesting/code/?dir=alias-method
I suspect that's what the STL discrete_distribution uses anyway. If you're going to be calling your getItems function frequently, you might want to create a "FitnessSet" class or something so that you don't have to build your distribution every time you want to sample from the same set.
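For instance, such a class might just cache the distribution (a sketch; all names are illustrative, not from any library):

#include <random>
#include <vector>

// Build the distribution once, sample from it many times.
class FitnessSet {
public:
    explicit FitnessSet(const std::vector<double>& weights)
        : dist_(weights.begin(), weights.end()) {}

    // Returns the index of the chosen object.
    int sample(std::mt19937& rng) { return dist_(rng); }

private:
    std::discrete_distribution<int> dist_;
};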
EDIT: Another suggestion... If you want to be able to delete items, you could instead store your objects in a binary tree. Each node would contain the sum of the weights in the subtree beneath it, and the objects themselves could be in the leaves. You could select an object through a series of log(N) coin tosses: at a given node, choose a random number between 0 and node.subtreeweight. If it's less than node.left.subtreeweight, go left; otherwise go right. Continue recursively until you reach a leaf.
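A rough sketch of that tree (hypothetical structure; deleting an item would additionally subtract its weight on the path back to the root):

#include <memory>
#include <random>

// Internal nodes cache the total weight of their subtree; the objects
// themselves live in the leaves.
struct WeightNode {
    double subtreeweight = 0.0;
    int item = -1;                          // leaf payload; unused internally
    std::unique_ptr<WeightNode> left, right;
};

// log(N) coin tosses: at each node, go left with probability
// left.subtreeweight / node.subtreeweight.
int select(const WeightNode& node, std::mt19937& rng)
{
    if (!node.left)                         // reached a leaf
        return node.item;
    std::uniform_real_distribution<double> coin(0.0, node.subtreeweight);
    if (coin(rng) < node.left->subtreeweight)
        return select(*node.left, rng);
    return select(*node.right, rng);
}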
I would try something like the following (see code comments):
#include <algorithm> // For std::swap and std::transform
#include <functional> // For std::mem_fun_ref
#include <random> // For std::discrete_distribution
#include <vector> // For std::vector
size_t
get_items(std::vector<Item>& results, const std::vector<Item>& items)
{
    // Copy the items to the results vector. All operations should be
    // done on it, rather than the original items vector.
    results.assign(items.begin(), items.end());

    // Create the fitness values vector, immediately allocating
    // the number of doubles required to match the size of the
    // input item vector.
    std::vector<double> fitness_vals(results.size());

    // Use some STL "magic" ...
    // This will iterate over the items vector, calling the
    // getFitness() method on each item, and storing the result
    // in the fitness_vals vector.
    std::transform(results.begin(), results.end(),
                   fitness_vals.begin(),
                   std::mem_fun_ref(&Item::getFitness));

    std::mt19937& rng = getRng();
    for (size_t i = 0; i < results.size(); ++i) {
        std::discrete_distribution<int> dist(fitness_vals.begin() + i, fitness_vals.end());
        unsigned int pick = dist(rng);
        std::swap(fitness_vals[i], fitness_vals[pick]);
        std::swap(results[i], results[pick]);
    }
    return results.size();
}
Instead of returning the results vector, the caller provides a vector into which the results should be added. Also, the original vector (passed as the second parameter) remains unchanged. If this is not something that concerns you, you can always pass just the one vector and work with it directly.
I don't see a way to not have the fitness values vector; the discrete_distribution constructor needs to have the begin and end iterators, so from what I can tell, you will need to have this vector.
The rest of it is basically the same, with the return value being the number of items in the result vector, rather than the vector itself.
This example makes use of a number of STL features (algorithms, containers, functors) which I have found to be useful and part of my day-to-day development.
Edit: the call to items.erase() is superfluous; items.begin() + N where N == items.size() is equivalent to items.end(). The call to items.erase() would equate to a no-op.
I've seen some special cases where std::rotate could be used, or a combination with one of the search algorithms, but generally: when one has a vector of N items and wants to code a function like:
void move( int from, int count, int to, std::vector<int>& numbers );
I've been thinking about creating a new vector plus std::copy, or a combination of insert/erase, but I can't say I ended up with a nice and elegant solution.
It's always important to profile before jumping to any conclusions. The contiguity of vector's data memory may offer significant caching benefits that node-based containers don't. So, perhaps you could give the direct approach a try:
template <typename T>
void move_range(size_t start, size_t length, size_t dst, std::vector<T>& v)
{
    const size_t final_dst = dst > start ? dst - length : dst;

    std::vector<T> tmp(v.begin() + start, v.begin() + start + length);
    v.erase(v.begin() + start, v.begin() + start + length);
    v.insert(v.begin() + final_dst, tmp.begin(), tmp.end());
}
In C++11, you'd wrap the iterators in the first and third line into std::make_move_iterator.
(The requirement is that dst not lie within [start, start + length), or otherwise the problem is not well-defined.)
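Concretely, the C++11 wrapping mentioned above might look like this (a sketch; the name move_range_m and the includes are mine):

#include <cstddef>
#include <iterator>
#include <vector>

template <typename T>
void move_range_m(size_t start, size_t length, size_t dst, std::vector<T>& v)
{
    const size_t final_dst = dst > start ? dst - length : dst;

    // Same three steps as above, but moving elements instead of copying them.
    std::vector<T> tmp(std::make_move_iterator(v.begin() + start),
                       std::make_move_iterator(v.begin() + start + length));
    v.erase(v.begin() + start, v.begin() + start + length);
    v.insert(v.begin() + final_dst,
             std::make_move_iterator(tmp.begin()),
             std::make_move_iterator(tmp.end()));
}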
Depending on the size of the vector and the ranges involved, this might be less expensive than performing copy/erase/insert.
template <typename T>
void move_range(size_t start, size_t length, size_t dst, std::vector<T>& v)
{
    typename std::vector<T>::iterator first, middle, last;
    if (start < dst)
    {
        first  = v.begin() + start;
        middle = first + length;
        last   = v.begin() + dst;
    }
    else
    {
        first  = v.begin() + dst;
        middle = v.begin() + start;
        last   = middle + length;
    }
    std::rotate(first, middle, last);
}
(This assumes the ranges are valid and they don't overlap.)
Pre-C++11 (although the following remains valid), you could get more efficient "moves" for contained types which specialise/overload std::swap. To take advantage of this, you would need to do something like:
std::vector<Foo> new_vec;
Foo tmp;
for (/* each Foo& f in old_vec, first section */) {
    swap(f, tmp);
    new_vec.push_back(tmp);
}
for (/* each Foo& f in old_vec, second section */) {
    swap(f, tmp);
    new_vec.push_back(tmp);
}
for (/* each Foo& f in old_vec, third section */) {
    swap(f, tmp);
    new_vec.push_back(tmp);
}
swap(new_vec, old_vec);
The above may also give good results for C++11 if Foo has a move-operator but hasn't specialised swap.
Linked lists or some clever sequence type might work out better if Foo doesn't have move semantics or an otherwise-optimised swap.
Note also that if the above is in a function
std::vector<Foo> move (std::vector<Foo> old_vec, ...)
then you might be able to perform the whole operation without copying anything, even in C++98. But for this to work you will need to pass by value and not by reference, which goes against the conventional prefer-pass-by-reference wisdom.