Why is there no std::inplace_merge_unique?

I tried looking for an algorithm that does what std::inplace_merge
followed by std::unique would do. It seems more efficient to do it in one pass than in two,
but I could not find it in the standard library or by googling.
So is there an implementation somewhere, perhaps in Boost under a different name?
Is such an algorithm even possible (in the sense that it has the same complexity guarantees as a normal inplace_merge)?

It doesn't operate in-place, but assuming that neither range contains duplicates beforehand, std::set_union will find the same result as merge followed by unique.
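For illustration, here is a minimal non-in-place sketch of that suggestion (the names merge_unique, a and b are mine, not from any library); it assumes both inputs are sorted and individually free of duplicates:
#include <algorithm>
#include <iterator>
#include <vector>

std::vector<int> merge_unique(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> out;
    out.reserve(a.size() + b.size());
    // set_union emits each equivalent element only once, which is exactly
    // merge-then-unique when neither input range has internal duplicates.
    std::set_union(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    return out;
}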

There are many interesting algorithms missing from the algorithms section. The original submission of STL was incomplete from Stepanov's view and some algorithms were even removed. The proposal by Alexander Stepanov and Meng Lee doesn't seem to include an algorithm inplace_merge_unique() or any variation thereof.
One potential reason why there is no such algorithm is that it isn't clear which of the elements should be dropped: since the comparison is only a strict weak ordering, the choice of element matters. One approach to implementing inplace_merge_unique() is to:
1. Use std::remove_if() to remove from the second range any element that is a duplicate of an element in the first range.
2. Use inplace_merge() to do the actual merge.
The predicate passed to std::remove_if() tracks the current position in the first part of the sequence to be merged. The code below isn't tested, but something like this should work:
template <typename BiDirIt, typename Comp>
BiDirIt inplace_merge_unique(BiDirIt begin, BiDirIt middle, BiDirIt end, Comp comp) {
    using reference = typename std::iterator_traits<BiDirIt>::reference;
    // Remove from [middle, end) every element equivalent to one in [begin, middle).
    // The mutable copy of begin advances as the sorted second range is scanned,
    // so the first range is traversed at most once overall.
    BiDirIt result = std::remove_if(middle, end, [=](reference other) mutable -> bool {
        begin = std::find_if(begin, middle, [=](reference arg) -> bool {
            return !comp(arg, other);
        });
        return begin != middle && !comp(other, *begin);
    });
    std::inplace_merge(begin, middle, result, comp);
    return result;
}
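A quick usage sketch (untested, in the same spirit as the above; both halves must each be sorted, and duplicates are removed only across the two halves, not within either one):
#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> v{1, 3, 5, 2, 3, 4};   // two sorted halves: {1, 3, 5} and {2, 3, 4}
    auto new_end = inplace_merge_unique(v.begin(), v.begin() + 3, v.end(), std::less<int>());
    v.erase(new_end, v.end());
    for (int x : v) std::cout << x << ' ';  // prints: 1 2 3 4 5
}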

Related

Is there a std::unique-style library algorithm that has user-defined collision handler?

I have a basic std::vector of key/value pairs. It is sorted by key. I would like to reduce all of the adjacent duplicate key entries using a user-defined binary operator while compacting the vector.
This is basically a std::unique application where the user can decide how to handle the collision rather than just keeping the first entry.
Is there a library algorithm that satisfies this requirement? I can write my own but I would prefer to rely on something that an expert has written.
The map-as-sorted-vector is core to other parts of the algorithm and can't be changed. I am limited to C++14.
I can't think of a standard algo for this. std::unique almost satisfies the requirement, but unfortunately the BinaryPredicate you supply to compare elements isn't allowed to modify them ("binary_pred shall not apply any non-constant function through the dereferenced iterators." - [algorithms.requirements] paragraph 7 in the C++17 Standard) - a requirement that lets the implementation optimise more freely (e.g. parallel processing of different parts of the vector).
An implementation's not too hard though...
template <typename Iterator, typename BinaryPredicate, typename Compaction>
Iterator compact(Iterator begin, Iterator end, BinaryPredicate equals, Compaction compaction)
{
    if (begin == end) return begin;
    Iterator compact_to = begin;
    while (++begin != end)
        if (equals(*begin, *compact_to))
            compaction(*compact_to, *begin);  // fold *begin into the element being kept
        else
            *++compact_to = *begin;           // start a new run
    return ++compact_to;
}
The return value will be the new "end" for the compacted vector - you can erase from it to the container's end(), just as you would after remove_if.
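For example, a hypothetical use for the question's sorted key/value vector, summing the values of adjacent entries that share a key (C++14 generic lambdas):
#include <utility>
#include <vector>

int main()
{
    std::vector<std::pair<int, int>> kv{{1, 2}, {1, 3}, {2, 7}, {2, 1}, {3, 4}};

    auto new_end = compact(kv.begin(), kv.end(),
        [](const auto& a, const auto& b) { return a.first == b.first; },    // equals
        [](auto& into, const auto& from) { into.second += from.second; });  // compaction

    kv.erase(new_end, kv.end());
    // kv is now {{1, 5}, {2, 8}, {3, 4}}
}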

Inplace versions of set_difference, set_intersection and set_union

I implemented versions of set_union, set_intersection and set_difference that take a sorted container and a sorted range (that must not be within the container), and write the result of the operation into the container.
template<class Container, class Iter>
void assign_difference(Container& cont, Iter first, Iter last)
{
    auto new_end = std::set_difference(              // (1)
        cont.begin(), cont.end(), first, last, cont.begin());
    cont.erase(new_end, cont.end());
}

template<class Container, class Iter>
void assign_intersection(Container& cont, Iter first, Iter last)
{
    auto new_end = std::set_intersection(            // (2)
        cont.begin(), cont.end(), first, last, cont.begin());
    cont.erase(new_end, cont.end());
}

template<class Container, class Iter>
void assign_union(Container& cont, Iter first, Iter last)
{
    auto insert_count = last - first;
    cont.resize(cont.size() + insert_count);         // T must be default-constructible
    auto rfirst1 = cont.rbegin() + insert_count, rlast1 = cont.rend();
    auto rfirst2 = std::make_reverse_iterator(last);
    auto rlast2 = std::make_reverse_iterator(first);
    rlast1 = std::set_union(                         // (3)
        rfirst1, rlast1, rfirst2, rlast2, cont.rbegin(), std::greater<>());
    cont.erase(std::copy(rlast1.base(), cont.end(), cont.begin()), cont.end());
}
The goal was:
No allocation is performed if the container has enough capacity to hold the result.
Otherwise exactly one allocation is performed to give the container the capacity to hold the result.
As you can see in the lines marked (1), (2) and (3), the same container is used as input and output for those STL algorithms. Assuming a usual implementation of those STL algorithms, this code works, since it only writes to parts of the container that have already been processed.
As pointed out in the comments, it's not guaranteed by the standard that this works. set_union, set_intersection and set_difference require that the resulting range doesn't overlap with one of the input ranges.
However, can there be an STL implementation that breaks this code?
If your answer is yes, please provide a conforming implementation of one of the three STL algorithms used that breaks the code.
A conforming implementation could check whether arguments 1 and 5 of set_intersection are equal and, if they are, format your hard drive.
If you violate the requirements, the behaviour of your program is not constrained by the standard in any way; it is simply undefined.
There are situations where UB may be worth the risk and cost (auditing all compiler changes and assembly output). I do not see the point here; write your own. Any fancy optimizations that the std library comes up with could cause problems when you violate requirements as you are doing, and as you have noted the naive implementation is simple.
As a rule of thumb, I don't write to a container that I'm iterating over; anything can happen, and in general it's asking for trouble.
As @Yakk said, it sounds ill. That's it: something to remove from your code base so you can sleep peacefully.
If you really need those functions, I would suggest writing the inner loop yourself (e.g., the inner loop of std::set_intersection) so that it handles the constraints your algorithm needs.
I don't think that looking for an STL implementation on which it happens to work is the right approach; it isn't a long-term solution. For the long term, the standard should be your reference, and as someone already pointed out, your solution doesn't properly respect it.
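For instance, following the suggestion to write the inner loop yourself, here is a minimal sketch of a hand-written in-place intersection loop under the question's assumptions (both ranges sorted, comparison via operator<); the name assign_intersection_safe is mine:
#include <utility>

template<class Container, class Iter>
void assign_intersection_safe(Container& cont, Iter first, Iter last)
{
    auto read = cont.begin();
    auto write = cont.begin();
    while (read != cont.end() && first != last) {
        if (*read < *first)
            ++read;                         // only in cont: drop it
        else if (*first < *read)
            ++first;                        // only in the other range: skip it
        else {
            *write++ = std::move(*read++);  // in both: keep it
            ++first;
        }
    }
    // everything at or after 'write' is no longer part of the result
    cont.erase(write, cont.end());
}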
My 2 cents

Why does std::binary_search return bool?

According to draft N4431, the function std::binary_search in the algorithms library returns a bool, [binary.search]:
template<class ForwardIterator, class T>
bool binary_search(ForwardIterator first, ForwardIterator last,
                   const T& value);

template<class ForwardIterator, class T, class Compare>
bool binary_search(ForwardIterator first, ForwardIterator last,
                   const T& value, Compare comp);
Requires: The elements e of [first,last) are partitioned with respect to the expressions e < value and !(value < e) or comp(e, value) and !comp(value, e). Also, for all elements e of [first,last), e < value implies !(value < e) or comp(e, value) implies !comp(value, e).
Returns: true if there is an iterator i in the range [first,last) that satisfies the corresponding conditions: !(*i < value) && !(value < *i) or comp(*i, value) == false && comp(value, *i) == false.
Complexity: At most log2(last - first) + O(1) comparisons.
Does anyone know why this is the case?
Most other generic algorithms either return an iterator to the element or an iterator that is equivalent to the iterator denoting the end of the sequence of elements (i.e., one after the last element to be considered in the sequence), which is what I would have expected.
The name of this function in the 1994 version of the STL was isMember. I think you'd agree that a function with that name should return bool.
http://www.stepanovpapers.com/Stepanov-The_Standard_Template_Library-1994.pdf
It was split into multiple functions in C++; as for the reasoning, it's nearly impossible to tell why someone designed something one way rather than another. binary_search will tell you whether such an element exists. If you need to know its location, use lower_bound and upper_bound, which give you the begin and end iterator of the matching range respectively; there's also equal_range, which gives you both at once.
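A small example illustrating the difference (values chosen arbitrarily; this is just a sketch):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 4, 4, 4, 7};

    bool found = std::binary_search(v.begin(), v.end(), 4);  // true: a 4 exists
    auto lo = std::lower_bound(v.begin(), v.end(), 4);       // first 4 (index 2)
    auto hi = std::upper_bound(v.begin(), v.end(), 4);       // one past the last 4 (index 5)
    auto range = std::equal_range(v.begin(), v.end(), 4);    // {lo, hi} in one call

    std::cout << found << ' ' << (lo - v.begin()) << ' ' << (hi - v.begin()) << ' '
              << (range.second - range.first) << '\n';       // prints: 1 2 5 3
}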
Since others seem to think it's obvious why it was created that way, I'll argue why it's hard, if not impossible, to answer unless you are Alexander Stepanov or someone who worked with him.
Sadly the SGI STL FAQ doesn't mention binary_search at all. It explains reasoning for list<>::size being linear time or pop returning void. It doesn't seem like they deemed binary_search special enough to document it.
Let's look at the possible performance improvement mentioned by @user2899162:
You can find the original implementation of the SGI STL algorithm binary_search in the SGI STL sources. Looking at it, one can pretty much simplify it (we all know how awful the internal names in the standard library are) to:
template <class ForwardIter, class V>
bool binary_search(ForwardIter first, ForwardIter last, const V& value) {
    ForwardIter it = lower_bound(first, last, value);
    return it != last && !(value < *it);
}
As you can see, it was implemented in terms of lower_bound and has exactly the same performance. If they had really wanted it to take advantage of possible performance improvements, they wouldn't have implemented it in terms of lower_bound, so that doesn't seem to have been the reason they did it this way.
Now let's look at it simply being a convenience function
It being simply a convenience function seems more likely, but looking through the STL you'll find numerous other algorithms where this would have been possible too. Looking at the implementation above, you'll see that it's only trivially more than std::find(begin, end, value) != end; yet we write that all the time and don't have a convenience function returning bool. Why exactly here and not for all the other algorithms as well? It's not really obvious and can't simply be explained.
In conclusion I find it far from obvious and don't really know if I could confidently and honestly answer it.
The binary search algorithm relies on a strict weak ordering, meaning that the elements are partitioned according to operator < or to a custom comparator with the same guarantees. This means there isn't necessarily only one element that could be found for a given query, so you need the lower_bound, upper_bound and equal_range functions to retrieve iterators.
The standard library contains variants of binary search algorithm that return iterators. They are called std::lower_bound and std::upper_bound. I think the rationale behind std::binary_search returning bool is that it wouldn't be clear what iterator to return in case of equivalent elements, while in case of std::lower_bound and std::upper_bound it is clear.
There might have been performance considerations as well, because in theory std::binary_search could be implemented to perform better in case of multiple equivalent elements and certain types. However, at least one popular implementation of the standard library (libstdc++) implements std::binary_search using std::lower_bound and, moreover, they have the same theoretical complexity.
If you want to get an iterator to a value, you can use std::equal_range, which returns two iterators: one at the lower bound and one at the upper bound of the range of values equal to the one you're looking for.
Since the only requirement is that the values are sorted, not unique, there is no simple "find" that would return an iterator to the one element you're looking for. If there is only one element equal to the value you're looking for, the two iterators will differ by exactly 1.
Here's a C++20 binary-search alternative that returns an iterator:
#include <cassert>
#include <compare>
#include <concepts>
#include <cstddef>
#include <iterator>

template<typename RandomIt, typename T, typename Pred>
inline
RandomIt xbinary_search( RandomIt begin, RandomIt end, T const &key, Pred pred )
    requires std::random_access_iterator<RandomIt>
    &&
    requires( Pred pred, typename std::iterator_traits<RandomIt>::value_type &elem, T const &key )
    {
        { pred( elem, key ) } -> std::convertible_to<std::strong_ordering>;
    }
{
    using namespace std;
    size_t lower = 0, upper = end - begin;
    while( lower != upper )
    {
        size_t mid = lower + (upper - lower) / 2;
        // pred yields a std::strong_ordering of the element relative to the key
        strong_ordering so = pred( begin[mid], key );
        if( so == 0 )
        {
            // these asserts document the precondition that the matching value is unique
            assert(mid == 0 || pred( begin[mid - 1], key ) < 0);
            assert(begin + mid + 1 == end || pred( begin[mid + 1], key ) > 0);
            return begin + mid;
        }
        if( so > 0 )
            upper = mid;
        else
            lower = mid + 1;
    }
    return end;
}
This code only works correctly if there is at most one value between begin and end that matches the key. If that precondition is violated in a build where NDEBUG is not defined, the asserts fire and stop you in your debugger.

Obtaining `std::priority_queue` elements in reverse order?

I've written some K-nearest-neighbor query methods which build a list of points that are nearest to a given query point. To maintain that list of neighbors, I use the std::priority_queue such that the top element is the farthest neighbor to the query point. This way I know if I should push the new element that is currently being examined (if at a lesser distance than the current farthest neighbor) and can pop() the farthest element when my priority-queue has more than K elements.
So far, all is well. However, when I output the elements, I would like them ordered from closest to farthest. Currently, I simply pop all the elements from the priority-queue and put them into the output container (through an iterator), which yields a sequence of points ordered from farthest to closest, and then I call std::reverse on the output iterator range.
As a simple example, here is a linear-search that uses the priority-queue (obviously, the actual nearest-neighbor query methods I use are far more complicated):
template <typename DistanceValue,
          typename ForwardIterator,
          typename OutputIterator,
          typename GetDistanceFunction,
          typename CompareFunction>
inline
OutputIterator min_dist_linear_search(ForwardIterator first,
                                      ForwardIterator last,
                                      OutputIterator output_first,
                                      GetDistanceFunction distance,
                                      CompareFunction compare,
                                      std::size_t max_neighbors = 1,
                                      DistanceValue radius = std::numeric_limits<DistanceValue>::infinity()) {
  if(first == last)
    return output_first;
  typedef std::priority_queue< std::pair<DistanceValue, ForwardIterator>,
                               std::vector< std::pair<DistanceValue, ForwardIterator> >,
                               detail::compare_pair_first<DistanceValue, ForwardIterator, CompareFunction> > PriorityQueue;
  PriorityQueue output_queue = PriorityQueue(detail::compare_pair_first<DistanceValue, ForwardIterator, CompareFunction>(compare));
  for(; first != last; ++first) {
    DistanceValue d = distance(*first);
    if(!compare(d, radius))
      continue;
    output_queue.push(std::pair<DistanceValue, ForwardIterator>(d, first));
    while(output_queue.size() > max_neighbors)
      output_queue.pop();
    if(output_queue.size() == max_neighbors)
      radius = output_queue.top().first;
  }
  OutputIterator it = output_first;
  while( !output_queue.empty() ) {
    *it = *(output_queue.top().second);
    output_queue.pop();
    ++it;
  }
  std::reverse(output_first, it);
  return it;
}
The above is all dandy except for one thing: it requires the output-iterator type to be bidirectional and essentially be pointing to a pre-allocated container. Now, this practice of storing the output in a range prescribed by some output iterator is great and pretty standard too (e.g. std::copy and other STL algorithms are good examples of that). However, in this case I would like to be able to only require a forward output-iterator type, which would make it possible to use back-inserter iterators like those provided for STL containers and iostreams.
So, this boils down to reversing the priority-queue before dumping its contents into the output iterator. These are the best options I've been able to come up with:
Create a std::vector, dump the priority-queue content in it, and dump the elements into the output-iterator using a reverse-iterator on the vector.
Replace the std::priority_queue with a sorted container (e.g. std::multimap), and then dump the content into the output-iterator using the appropriate traversal order.
Are there any other reasonable options?
I used a std::multimap in a previous implementation of this algorithm and others, as in my second option above. However, when I switched to std::priority_queue, the performance gain was significant, so I'd rather not go back to the second option; using a priority-queue to maintain the list of neighbors really does seem much better than relying on a sorted container. By the way, I also tried a std::vector kept sorted with std::inplace_merge, which was better than the multimap but didn't match the priority-queue.
As for the first option, which is my best option at this point, it just seems wasteful to have to do this double transfer of data (queue -> vector -> output). I'm inclined to think that there must be a simpler way to do this... something that I'm missing.
The first option really isn't that bad in this application (considering the complexity of the algorithm that precedes it), but if there is a trick to avoid this double memory transfer, I'd like to know about it.
Problem solved!
I'm such an idiot... I knew I was missing something obvious. In this case, it's the std::sort_heap() function. The reference page even has an example that does exactly what I need, and since std::priority_queue is just implemented in terms of a random-access container and the heap functions (pop_heap, push_heap, make_heap), it makes no real difference to use those functions directly in place of the std::priority_queue class. I don't know how I could have missed that.
Anyway, I hope this helps anyone who has the same problem.
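For reference, a minimal sketch of that sort_heap approach (the names heap, comp and drain_heap_sorted are illustrative, not from the original code):
#include <algorithm>
#include <vector>

// 'heap' must already satisfy the heap property with respect to comp,
// i.e. it was maintained with std::push_heap / std::pop_heap.
template <typename T, typename Compare, typename OutputIterator>
OutputIterator drain_heap_sorted(std::vector<T>& heap, Compare comp, OutputIterator out)
{
    std::sort_heap(heap.begin(), heap.end(), comp);  // sorts ascending under comp
    for (typename std::vector<T>::const_iterator it = heap.begin(); it != heap.end(); ++it)
        *out++ = *it;                                // works with std::back_inserter
    return out;
}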
One dirty idea, which would nevertheless be guaranteed to work, would be the following:
std::priority_queue<int, std::vector<int>, std::less<int> > queue;
queue.push(3);
queue.push(5);
queue.push(9);
queue.push(2);

// Prints in ascending order, i.e. the reverse of the pop order.
int* front = const_cast<int*>(&queue.top());
int* back = front + queue.size();
std::sort(front, back);
while (front < back) {
    printf("%i ", *front);
    ++front;
}
Note that the in-place sorting breaks the heap invariant, so the queue should not be used afterwards.
Why don't you just specify the opposite comparison function in the declaration?
#include <iostream>
#include <queue>
#include <vector>
#include <functional>

int main() {
    std::priority_queue<int, std::vector<int>, std::greater<int> > pq;
    pq.push(1);
    pq.push(10);
    pq.push(15);
    std::cout << pq.top() << std::endl;
}

What's wrong with my vector<T>::erase here?

I have two vector<T>s in my program, called active and non_active respectively. The names refer to whether the objects they contain are in use or not.
I have some code that loops over the active vector and checks for any objects that might have become non-active; I add these to a temp_list inside the loop.
Then, after the loop, I insert all the elements of temp_list into non_active.
After that, I call erase on my active vector and pass it the range of temp_list to erase.
For some reason, however, the erase crashes.
This is the code:
non_active.insert(non_active.begin(), temp_list.begin(), temp_list.end());
active.erase(temp_list.begin(), temp_list.end());
I get this assertion:
Expression:("_Pvector == NULL || (((_Myvec*)_Pvector)->_Myfirst <= _Ptr && _Ptr <= ((_Myvect*)_Pvector)->_Mylast)",0)
I've looked online and seen that there is an erase-remove idiom; however, I'm not sure how I'd apply that to removing a range of elements from a vector<T>.
I'm not using C++11.
erase expects a range of iterators passed to it that lie within the current vector. You cannot pass iterators obtained from a different vector to erase.
Here is a possible, but inefficient, C++11 solution using a lambda:
active.erase(std::remove_if(active.begin(), active.end(), [&](const T& x)
{
    return std::find(temp_list.begin(), temp_list.end(), x) != temp_list.end();
}), active.end());
And here is the equivalent C++03 solution without the lambda:
template<typename Container>
class element_of
{
    Container& container;
public:
    explicit element_of(Container& container) : container(container) {}

    template<typename T>
    bool operator()(const T& x) const
    {
        return std::find(container.begin(), container.end(), x)
               != container.end();
    }
};

// ...
active.erase(std::remove_if(active.begin(), active.end(),
                            element_of<std::vector<T> >(temp_list)),
             active.end());
If you replace temp_list with a std::set and the std::find with the set's own find member function, the performance should be acceptable.
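A sketch of that std::set variant (the names member_of_set and move_to_inactive are mine, and T must be usable as a std::set key, i.e. ordered by operator<):
#include <algorithm>
#include <set>
#include <vector>

// O(log n) membership test against a std::set.
template<typename T>
class member_of_set
{
    const std::set<T>& s;
public:
    explicit member_of_set(const std::set<T>& s) : s(s) {}
    bool operator()(const T& x) const { return s.find(x) != s.end(); }
};

// Append every element of to_move to non_active and remove it from active.
template<typename T>
void move_to_inactive(std::vector<T>& active, std::vector<T>& non_active,
                      const std::set<T>& to_move)
{
    non_active.insert(non_active.end(), to_move.begin(), to_move.end());
    active.erase(std::remove_if(active.begin(), active.end(),
                                member_of_set<T>(to_move)),
                 active.end());
}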
The erase method is intended to accept iterators into the same container object. You're trying to pass iterators into temp_list to erase elements from active, which is not allowed, for good reason: a sequence's range erase specifies a range within that sequence to remove. It matters that the iterators belong to that sequence, because otherwise you would be specifying a range of values to erase rather than a range within the container, which is a much more costly operation.
The kind of logic you're performing suggests that a set or list might be better suited to the purpose. That is, you're erasing elements that match a certain condition from the middle of a container and transferring them to another container, and this way you could eliminate the need for temp_list entirely.
With list, for example, it could be as easy as this:
for (ActiveList::iterator it = active.begin(); it != active.end();)
{
    if (it->no_longer_active())
    {
        inactive.push_back(*it);
        it = active.erase(it);
    }
    else
        ++it;
}
However, sometimes vector can outperform these solutions, and maybe you have need for vector for other reasons (like ensuring contiguous memory). In that case, std::remove_if is your best bet.
Example:
bool not_active(const YourObjectType& obj);

active_list.erase(
    remove_if(active_list.begin(), active_list.end(), not_active),
    active_list.end());
More info on this can be found under the topic 'erase-remove idiom'; you may need predicate function objects, depending on what external state is required to determine whether an object is no longer active.
You can actually make the erase/remove idiom usable for your case. You just need to move the value over to the other container before std::remove_if possibly shuffles it around: in the predicate.
template<class OutIt, class Pred>
struct copy_if_predicate{
    copy_if_predicate(OutIt dest, Pred p)
        : dest(dest), pred(p) {}

    template<class T>
    bool operator()(T const& v){
        if(pred(v)){
            *dest++ = v;   // copy the element out before remove_if overwrites it
            return true;   // report it as "to be removed"
        }
        return false;
    }

    OutIt dest;
    Pred pred;
};

template<class OutIt, class Pred>
copy_if_predicate<OutIt,Pred> copy_if_pred(OutIt dest, Pred pred){
    return copy_if_predicate<OutIt,Pred>(dest,pred);
}
(I used bools directly to make the code shorter, not bothering with output and the like.)
The function std::vector::erase requires the iterators to be iterators into this vector, but you are passing iterators from temp_list. You cannot erase elements from a container that are in a completely different container.
active.erase(temp_list.begin(), temp_list.end());
You are trying to erase elements from one vector, but you are using iterators that belong to a different one; iterators from the first vector are not interchangeable with those from the second.
I would like to suggest that this is an example of where std::list should be used. You can splice members from one list to another; look at std::list::splice() for this.
Do you need random access? If not, then you don't need a std::vector.
Note that with list, when you splice, your iterators and references to the objects in the list remain valid.
If you don't mind making the implementation "intrusive", your objects can store their own iterator value, so they know where they are. Then, when they change state, they can automate their own "moving" from one list to the other, and you don't need to traverse the whole list to find them. (If you want this sweep to happen later, you can have them "register" themselves for later moving.)
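A rough sketch of that intrusive idea (all names here - Object, ObjectList, deactivate - are hypothetical):
#include <list>

struct Object;
typedef std::list<Object*> ObjectList;

struct Object {
    ObjectList::iterator self;  // this object's position in whichever list currently owns it;
                                // whoever inserts the object must store the returned iterator here
    bool active;
};

void deactivate(Object& obj, ObjectList& active, ObjectList& inactive)
{
    // splice moves the node in O(1) without copying; obj.self stays valid
    // and now refers to the element inside 'inactive'.
    inactive.splice(inactive.end(), active, obj.self);
    obj.active = false;
}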
Here is an algorithm that runs through one collection and, wherever a condition holds, performs the equivalent of std::remove_if while also copying each matching element to your inserter.
// FwdIterator must be writable (a mutable forward iterator).
template< typename FwdIterator, typename OutputIterator, typename Pred >
FwdIterator copy_and_remove_if( FwdIterator inp, FwdIterator end, OutputIterator outp, Pred pred )
{
    for( FwdIterator test = inp; test != end; ++test )
    {
        if( pred(*test) ) // matches: copy it to the output
        {
            *outp = *test;
            ++outp;
        }
        else // keep: shift it down over the removed elements
        {
            if( test != inp )
            {
                *inp = *test;
            }
            ++inp;
        }
    }
    return inp;
}
This is a bit like std::remove_if, but it copies the elements being removed into an alternative collection. You would invoke it like this (for a vector), where isInactive is a predicate indicating that an element should be moved:
active.erase( copy_and_remove_if( active.begin(), active.end(), std::back_inserter(inactive), isInactive ), active.end() );
The iterators you pass to erase() should point into the vector itself; the assertion is telling you that they don't. This version of erase() is for erasing a range out of the vector.
You need to iterate over temp_list yourself and, at each step, erase the corresponding element from active using an iterator that actually refers into active.