Is There an Alternative to insert and then sort - C++

If I have vector<int> foo and vector<int> bar both of which are sorted, and I want to merge them into foo such that the final result is sorted, does the standard provide me a method for doing this?
Obviously I can do:
foo.insert(foo.end(), bar.begin(), bar.end());
sort(foo.begin(), foo.end());
But I was hoping there was a one step algorithm to accomplish this.

It might be faster to use std::inplace_merge instead of std::sort. If additional memory is available, it has linear complexity; otherwise it falls back to O(N log N).
auto middle = foo.insert(foo.end(), bar.begin(), bar.end());
std::inplace_merge(foo.begin(), middle, foo.end());

To elaborate on Mat's comment, your code could look like this using std::merge:
std::vector<int> result;
std::merge(
    foo.begin(), foo.end(),
    bar.begin(), bar.end(),
    std::back_inserter(result));
foo = result; // if desired

If you need this kind of merge, why not make one yourself?
template <class Vector>
void insert_sorted(Vector& where, const Vector& what)
{
    typename Vector::const_iterator src = what.begin();
    typename Vector::const_iterator src_end = what.end();
    size_t index = 0;
    while (src != src_end)
    {
        // Append once 'where' is exhausted; otherwise insert before the
        // first element that is not smaller than *src.
        if (index == where.size() || *src < where[index])
        {
            where.insert(where.begin() + index, *src);
            ++src;
        }
        ++index;
    }
}
Sample usage:
vector<int> foo{ 0, 5, 7, 9, 11, 14 };
vector<int> bar{ 1, 2, 4, 8, 10, 12 };
insert_sorted(foo, bar);
for(vector<int>::iterator i = foo.begin(); i != foo.end(); ++i)
    cout << *i << " ";
Output:
0 1 2 4 5 7 8 9 10 11 12 14

So after looking through all the standard algorithms I can confirm that there is no alternative to insert and sort. While searching the standard algorithms I did note that all the copying algorithms take input iterators and an output iterator; the only time input-output iterators are used is when a single range is being operated on. (For example, sort uses input-output iterators, but any copy uses input iterators and an output iterator.)
I'd like to give an illustration of my point. So let's make an example of what an insertion merge algorithm with an input-output iterator would look like:
template <class BidirectionalIterator, class InputIterator>
void func(BidirectionalIterator first1, BidirectionalIterator last1,
          InputIterator first2, InputIterator last2) {
    bool is1Empty = first1 == last1;
    bool is2Empty = first2 == last2;
    BidirectionalIterator end = std::next(last1, std::distance(first2, last2));

    if (!is1Empty) {
        --last1;
    }
    if (!is2Empty) {
        --last2;
    }
    while (!is1Empty || !is2Empty) {
        --end;
        if (!is1Empty) {
            if (!is2Empty && *last2 > *last1) {
                *end = *last2;
                if (last2 == first2) {
                    is2Empty = true;
                } else {
                    --last2;
                }
            } else {
                *end = *last1;
                if (last1 == first1) {
                    is1Empty = true;
                } else {
                    --last1;
                }
            }
        } else {
            *end = *last2;
            if (last2 == first2) {
                is2Empty = true;
            } else {
                --last2;
            }
        }
    }
}
Two things should be noted about this func algorithm:
It doesn't respect last1; it is assumed that sufficient space is allocated beyond last1 to also contain all the elements in the input range
Unlike the output-only range of other standard algorithms, func's input-output range cannot be written through a back_inserter
Because of this, even func cannot be a "one step algorithm". It must be called like this:
foo.resize(foo.size() + bar.size());
func(foo.begin(), std::next(foo.begin(), foo.size() - bar.size()), bar.begin(), bar.end());
Note that Blastfurnace's answer takes advantage of the knowledge that it is merging two sorted ranges, and as such is of equivalent speed to func:
auto middle = foo.insert(foo.end(), bar.begin(), bar.end());
std::inplace_merge(foo.begin(), middle, foo.end());
The only actual "one step algorithm" is to roll Blastfurnace's answer into a function that you could call by passing in the containers to be merged.
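For instance, a minimal sketch of such a wrapper (the name merge_into is made up here):
#include <algorithm>
#include <vector>

// Merge sorted 'src' into sorted 'dst', keeping 'dst' sorted.
template <class T>
void merge_into(std::vector<T>& dst, const std::vector<T>& src)
{
    auto middle = dst.insert(dst.end(), src.begin(), src.end());
    std::inplace_merge(dst.begin(), middle, dst.end());
}
With that, the call site really is one step: merge_into(foo, bar);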


check if all items in array are equal [duplicate]

If I have a vector of values and want to check that they are all the same, what is the best way to do this in C++ efficiently? If I were programming in some other language like R, one way my mind jumps to is to return only the unique elements of the container; then, if the length of the unique elements is more than 1, I know the elements cannot all be the same. In C++ this can be done like this:
// myvector is a populated std::vector<int>
std::sort(myvector.begin(), myvector.end());
std::vector<int>::iterator it;
// Use the unique algorithm to keep only the unique values.
it = std::unique(myvector.begin(), myvector.end());
myvector.resize(std::distance(myvector.begin(), it));
if (myvector.size() > 1) {
    std::cout << "All elements are not the same!" << std::endl;
}
However, reading on the internet and SO, I see other answers such as using a set or the find_if algorithm. So what is the most efficient way of doing this and why? I imagine mine is not the best way since it involves sorting every element and then resizing the vector - but maybe I'm wrong.
You need not use std::sort. It can be done in a simpler way:
if ( std::adjacent_find( myvector.begin(), myvector.end(), std::not_equal_to<>() ) == myvector.end() )
{
    std::cout << "All elements are equal to each other" << std::endl;
}
You can use std::equal.
Version 1:
// assuming v has at least 1 element
if ( std::equal(v.begin() + 1, v.end(), v.begin()) )
{
    // all equal
}
This will compare each element with the previous one.
Version 2:
// assuming v has at least 1 element
int e = v[0]; // preferably "const auto& e" instead
bool all_equal = true;
for (std::size_t i = 1, s = v.size(); i < s && all_equal; i++)
    all_equal = e == v[i];
Edit:
Regarding performance, after testing with 100m elements I found out that in Visual Studio 2015, version 1 is about twice as fast as version 2. This is because the vs2015 compiler uses SSE instructions in the C++ standard library implementations when you use ints, floats, etc.
If you use _mm_testc_si128 you will get performance similar to std::equal.
Using std::all_of and a C++11 lambda:
if (all_of(values.begin(), values.end(), [&](int i) { return i == values[0]; })) {
    // all are the same
}
Given no constraints on the vector, you have to iterate through the vector at least once, no matter the approach. So just pick the first element and check that all others are equal to it.
While the asymptotic complexity of std::unique is linear, the actual cost of the operation is probably much larger than you need, and it is an in-place algorithm (it will modify the data as it goes).
The fastest approach is to assume that if the vector contains a single element, it is unique by definition. If the vector contains more elements, then you just need to check whether all of them are exactly equal to the first. For that you only need to find the first element that differs from the first, starting the search from the second. If there is such an element, the elements are not unique.
if (v.size() < 2) return true;
auto different = std::find_if(v.begin() + 1, v.end(),
                              [&v](auto const& x) { return x != v[0]; });
return different == v.end();
That is using C++14 syntax; in a C++11 toolchain you can spell out the correct type in the lambda. In C++03 you could use a combination of std::not1, std::bind1st/std::bind2nd and std::equal_to in place of the lambda.
The cost of this approach is distance(start, different element) comparisons and no copies, with linear expected and worst-case cost in the number of comparisons.
Sorting is an O(N log N) task.
This is easily solvable in O(N), so your current method is poor.
A simple O(N) way, as Luchian Grigore suggests, is to iterate over the vector just once, comparing every element to the first element:
if (std::all_of(myvector.begin() + 1, myvector.end(),
                std::bind(std::equal_to<int>(),
                          std::placeholders::_1, myvector.front()))) {
    // all members are equal
}
You can use FunctionalPlus (https://github.com/Dobiasd/FunctionalPlus):
std::vector<std::string> things = {"same old", "same old"};
if (fplus::all_the_same(things))
    std::cout << "All things being equal." << std::endl;
Maybe something like this. It traverses the vector just once and does not mess with its content.
std::vector<int> values { 5, 5, 5, 4 };
bool equal = std::count_if(values.begin(), values.end(),
                           [&values](auto item) { return item == values[0]; }) == values.size();
If the elements of the vector are of a non-fundamental type, that type has to implement the equality operator.
After taking into account underscore_d's remarks, I'm changing the possible solution to:
std::vector<int> values { 5, 5, 5, 4 };
bool equal = std::all_of(values.begin(), values.end(),
                         [&values](auto item) { return item == values[0]; });
In your specific case, iterating over the vector's elements and looking for an element different from the first one would be enough. You may even be lucky enough to stop before evaluating all the elements in your vector. (A while loop could be used, but I stuck with a for loop for readability reasons.)
bool uniqueElt = true;
int firstItem = *myvector.begin();
for (std::vector<int>::const_iterator it = myvector.begin() + 1; it != myvector.end(); ++it) {
    if (*it != firstItem) {
        uniqueElt = false;
        break;
    }
}
In case you want to know how many different values your vector contains, you could build a set and check its size to see how many different values are inside:
std::set<int> mySet(myvector.begin(), myvector.end());
// mySet.size() now equals the number of distinct values
You can simply use std::count to count all the elements that match the starting element:
std::vector<int> numbers = { 5, 5, 5, 5, 5, 5, 5 };
if (std::count(std::begin(numbers), std::end(numbers), numbers.front()) == numbers.size())
{
    std::cout << "Elements are all the same" << std::endl;
}
LLVM provides some independently usable headers+libraries:
#include <llvm/ADT/STLExtras.h>
if (llvm::is_splat(myvector))
    std::cout << "All elements are the same!" << std::endl;
https://godbolt.org/z/fQX-jc
For the sake of completeness, and although it still isn't the most efficient option: you can use std::unique in a more efficient way to decide whether all members are the same, but beware that after using std::unique this way the container is unusable:
#include <algorithm>
#include <iterator>
if (std::distance(cntnr.begin(), std::unique(cntnr.begin(), cntnr.end())) == 1)
{
    // all members were the same, but the container's contents are now unusable
}
Another approach using C++14:
bool allEqual = accumulate(v.begin(), v.end(), true, [first = v[0]](bool acc, int b) {
    return acc && (b == first);
});
This is also O(N).
Here is a readable C++17 solution which might remind students of the other constructors of std::vector:
if (v == std::vector(v.size(), v[0])) {
    // you guys are all the same
}
...before C++17, the std::vector rvalue would need its type provided explicitly:
if (v == std::vector<typename decltype(v)::value_type>(v.size(), v[0])) {
    // you guys are all the same
}
The C++ function std::all_of() is defined in the <algorithm> header of the standard library. It operates on a whole range of elements and saves you running a loop to check each element one by one. It checks a given property for every element and returns true when each element in the range satisfies the property, else returns false.
// C++ code to demonstrate working of all_of()
#include <vector>
#include <algorithm>
#include <iostream>
int main()
{
    std::vector<int> v(10, 2);
    // illustrate all_of
    if (std::all_of(v.cbegin(), v.cend(), [](int i){ return i % 2 == 0; }))
    {
        std::cout << "All numbers are even\n";
    }
}

Inserting multiple values into a vector at specific positions

Say I have a vector of integers like this std::vector<int> _data;
I know that if I want to remove multiple items from _data, then I can simply call
_data.erase( std::remove_if( _data.begin(), _data.end(), [condition] ), _data.end() );
Which is much faster than erasing the elements one at a time, as less movement of data is required within the vector. I'm wondering if there's something similar for insertions.
For example, if I have the following pairs
auto pair1 = std::make_pair( _data.begin() + 5, 5 );
auto pair2 = std::make_pair( _data.begin() + 12, 12 );
Can I insert both of these in one iteration using some existing std function? I know I can do something like:
_data.insert( pair2.first, pair2.second );
_data.insert( pair1.first, pair1.second );
But this is (very) slow for large vectors (talking 100,000+ elements).
EDIT: Basically, I have a custom set (and map) which use a vector as the underlying containers. I know I can just use std::set or std::map, but the number of traversals I do far outweighs the insertion/removals. Switching from a set and map to this custom set/map already cut 20% of run-time off. Currently though, insertions take approximately 10% of the remaining run time, so reducing that is important.
The order is also required, unfortunately. As much as possible, I use the unordered_ versions, but in some places the order does matter.
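(For context, here is a rough sketch of the kind of sorted-vector set this edit describes; the VectorSet name and all details are hypothetical, not the actual code:)
#include <algorithm>
#include <vector>

template <class T>
class VectorSet {
    std::vector<T> data_; // kept sorted at all times
public:
    bool contains(const T& v) const {
        return std::binary_search(data_.begin(), data_.end(), v);
    }
    void insert(const T& v) {
        auto it = std::lower_bound(data_.begin(), data_.end(), v);
        if (it == data_.end() || *it != v)
            data_.insert(it, v); // the O(N) shift that batched insertion would amortise
    }
};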
One way is to create another vector, with capacity equal to the original size plus the number of elements being inserted, and then do an insert loop with no reallocations, for O(N) complexity:
template<class T>
std::vector<T> insert_elements(std::vector<T> const& v, std::initializer_list<std::pair<std::size_t, T>> new_elements) {
    std::vector<T> u;
    u.reserve(v.size() + new_elements.size());
    auto src = v.begin();
    size_t copied = 0;
    for (auto const& element : new_elements) {
        auto to_copy = element.first - copied;
        auto src_end = src + to_copy;
        u.insert(u.end(), src, src_end);
        src = src_end;
        copied += to_copy;
        u.push_back(element.second);
    }
    u.insert(u.end(), src, v.end());
    return u;
}

int main() {
    std::vector<int> v{1, 3, 5};
    for (auto e : insert_elements(v, {{1, 2}, {2, 4}}))
        std::cout << e << ' ';
    std::cout << '\n';
}
Output:
1 2 3 4 5
Ok, we need some assumptions. Let old_end be a reverse iterator to the last element of your vector. Assume that your _data has been resized to exactly fit both its current content and what you want to insert. Assume that inp is a container of std::pair containing your data to be inserted that is ordered reversely (so first the element that is to be inserted at the hindmost position and so on). Then we can do:
std::merge(old_end, _data.rend(), inp.begin(), inp.end(), _data.rbegin(), // write backwards from the back
    [&, i = int(inp.size()) - 1](const T& t, const std::pair<Iter, T>& p) mutable {
        if (std::distance(_data.begin(), p.first) == i) {
            --i;
            return false;
        }
        return true;
    });
But I think that is not clearer than using a good old for. The problem with the STL algorithms is that the predicates work on values and not on iterators, which is a bit annoying for this problem.
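For comparison, a sketch of that "good old for" (assuming, unlike above, that the insertions are given as (pre-insertion index, value) pairs sorted ascending by index, e.g. a std::vector<std::pair<std::size_t, int>> named ins):
std::size_t old_size = _data.size();
_data.resize(old_size + ins.size());
auto out = _data.end();              // one past the last slot of the grown vector
auto src = _data.begin() + old_size; // one past the last original element
for (auto it = ins.rbegin(); it != ins.rend(); ++it) {
    // Shift the original elements that belong after this insertion point.
    while (src != _data.begin() + it->first)
        *--out = std::move(*--src);
    *--out = it->second;             // place the inserted value
}
Each element is moved at most once, so this is O(N) like the approaches below.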
Here's my take:
template<class Key, class Value>
class LinearSet
{
public:
    using Node = std::pair<Key, Value>;

    template<class F>
    void insert_at_multiple(F&& f)
    {
        std::queue<Node> queue;
        std::size_t index = 0;
        for (auto it = _kvps.begin(); it != _kvps.end(); ++it)
        {
            // The container size is left untouched here, no iterator invalidation.
            if (std::optional<Node> toInsert = f(index))
            {
                queue.push(*it);
                *it = std::move(*toInsert);
            }
            else
            {
                ++index;
                // Replace current node with queued one.
                if (!queue.empty())
                {
                    queue.push(std::move(*it));
                    *it = std::move(queue.front());
                    queue.pop();
                }
            }
        }
        // We now have as many displaced items in the queue as were inserted,
        // add them to the end.
        while (!queue.empty())
        {
            _kvps.emplace_back(std::move(queue.front()));
            queue.pop();
        }
    }

private:
    std::vector<Node> _kvps;
};
https://godbolt.org/z/EStKgQ
This is a linear time algorithm that doesn't need to know the number of inserted elements a priori. For each index, it asks for an element to insert there. If it gets one, it pushes the corresponding existing vector element to a queue and replaces it with the new one. Otherwise, it extracts the current item to the back of the queue and puts the item at the front of the queue into the current position (noop if no elements were inserted yet). Note that the vector size is left untouched during all this. Only at the end do we push back all items still in the queue.
Note that the indices we use for determining inserted item locations here are all pre-insertion. I find this a point of potential confusion (and it is a limitation - you can't add an element at the very end with this algorithm. Could be remedied by calling f during the second loop too, working on that...).
Here's a version that allows inserting arbitrarily many elements at the end (and everywhere else). It passes post-insertion indices to the functor!
template<class F>
void insert_at_multiple(F&& f)
{
    std::queue<Node> queue;
    std::size_t index = 0;
    for (auto it = _kvps.begin(); it != _kvps.end(); ++it)
    {
        if (std::optional<Node> toInsert = f(index))
            queue.push(std::move(*toInsert));
        if (!queue.empty())
        {
            queue.push(std::move(*it));
            *it = std::move(queue.front());
            queue.pop();
        }
        ++index;
    }
    // We now have as many displaced items in the queue as were inserted,
    // add them to the end.
    while (!queue.empty())
    {
        if (std::optional<Node> toInsert = f(index))
        {
            queue.push(std::move(*toInsert));
        }
        _kvps.emplace_back(std::move(queue.front()));
        queue.pop();
        ++index;
    }
}
https://godbolt.org/z/DMuCtJ
Again, this leaves potential for confusion over what it means to insert at indices 0 and 1 (do you end up with an original element in between the two? In the first snippet you would, in the second you wouldn't). Can you insert at the same index multiple times? With pre-insertion indices that makes sense, with post-insertion indices it doesn't. You could also write this in terms of passing the current *it (i.e. key value pair) to the functor, but that alone seems not too useful...
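For illustration, a call to this second version might look like the following (hypothetical usage; f receives post-insertion indices and returns the node to place there, or std::nullopt):
LinearSet<int, std::string> set; // assume it already holds some nodes
set.insert_at_multiple([](std::size_t index)
    -> std::optional<std::pair<int, std::string>> {
    if (index == 1)
        return std::make_pair(42, std::string("inserted"));
    return std::nullopt;
});
After the call, the pair (42, "inserted") sits at index 1 of the underlying vector, and everything from the old index 1 onwards has shifted one slot back.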
This is an attempt I made, which inserts in reverse order. I did get rid of the iterators/indices for this.
template<class T>
void insert( std::vector<T> &vector, const std::vector<T> &values ) {
    size_t last_index = vector.size() - 1;
    vector.resize( vector.size() + values.size() ); // relies on T being default constructible
    size_t move_position = vector.size() - 1;
    size_t last_value_index = values.size() - 1;
    size_t values_size = values.size();
    bool isLastIndex = false;

    while ( !isLastIndex && values_size ) {
        if ( values[last_value_index] > vector[last_index] ) {
            vector[move_position] = std::move( values[last_value_index--] );
            --values_size;
        } else {
            isLastIndex = last_index == 0;
            vector[move_position] = std::move( vector[last_index--] );
        }
        --move_position;
    }

    if ( isLastIndex && values_size ) {
        while ( values_size ) {
            vector[move_position--] = std::move( values[last_value_index--] );
            --values_size;
        }
    }
}
Tried with ICC, Clang, and GCC on Godbolt, and vector's insert was faster (for 5 numbers inserted). On my machine, with MSVC, the result was the same but less severe. I also compared with Maxim's version from his answer. I realize using Godbolt isn't a good method for comparison, but I don't have access to the other three compilers on my current machine.
https://godbolt.org/z/vjV2wA
Results from my machine:
My insert: 659us
Maxim insert: 712us
Vector insert: 315us
Godbolt's ICC
My insert: 470us
Maxim insert: 139us
Vector insert: 127us
Godbolt's GCC
My insert: 815us
Maxim insert: 97us
Vector insert: 97us
Godbolt's Clang:
My insert: 477us
Maxim insert: 188us
Vector insert: 96us

What is the fastest way to return a range of numbers from a sorted array of numbers?

For the C++ language, what's the fastest way at run time (on multi-core processors), from an algorithm design viewpoint, to search for numbers within a certain range (e.g. between 100 and 1000) in an array (or splice or whatever faster data structure suits the purpose) and return at most 10 of them? e.g. pseudocode in golang:
var listofnums := []uint64
var numcounter := 1
// splice of [1,2,3,4,5,31,32 .. 932536543] this list has 1 billion numeric items.
// the listofnums are already sorted each time an item is added but we do not know the lower_bound or upper_bound of the item list.
// I know I can use binary search to find listofnums[i] where it is smallest at [i] too... I'm asking for suggestions.
for i := uint(0); i < len(listofnums); i++ {
    if listofnums[i] > 100 && listofnums[i] < 1000 {
        if listofnums[i] > 1000 || numcounter == 10 {
            return
        }
        fmt.Println("%d", listofnums[i])
        numcounter++
    }
}
Is this the fastest way? I saw bitmap structures in C++ but I'm not sure whether they can be applied here.
I've come across this question, which seems perfectly fine for veteran programmers to ask, but I have no idea why it's downvoted:
What is the fastest search method for array?
Can someone please not remove this question but let me rephrase it? Thanks in advance. I hope to find the most efficient way to return a range of numbers from a large array of numeric items.
If I understand your problem correctly, you need to find two positions in your array: the first position from which onwards all numbers are greater than or equal to 100, and the last position up to which all numbers are less than or equal to 1000.
The functions std::lower_bound and std::upper_bound do binary searches designed to find such a range.
For arrays, in C++ we usually use a std::vector and denote the beginning and end of ranges using a pair of iterators.
So something like this may be what you need:
std::pair<std::vector<int>::iterator, std::vector<int>::iterator>
find_range(std::vector<int>& v, int min, int max)
{
    auto begin = std::lower_bound(std::begin(v), std::end(v), min);
    // start searching after the previously found value
    auto end = std::upper_bound(begin, std::end(v), max);
    return {begin, end};
}
You can iterate over that range like this:
auto range = find_range(v, 100, 1000);
for(auto i = range.first; i != range.second; ++i)
    std::cout << *i << '\n';
You can create a new vector from the range (slow) like this:
std::vector<int> selection{range.first, range.second};
My first attempt.
Features:
logN time complexity
creates an array slice, no copying of data
second binary search minimises the search space on the basis of the first
possible improvements:
if n is small, the second binary search would be a pessimisation. Better to simply count forward up to n times.
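A sketch of that count-forward variant (reusing the names from the full listing that follows):
auto lower = std::lower_bound(begin(vec), end(vec), from);
std::size_t size = 0;
for (auto it = lower; it != end(vec) && *it <= to && size < n; ++it)
    ++size; // stop after n hits or once past 'to'
return range(lower, size);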
#include <vector>
#include <cstdint>
#include <algorithm>
#include <iterator>
#include <iostream>

template <class Iter> struct range
{
    range(Iter first, std::size_t size) : begin_(first), end_(first + size) {}
    auto begin() const { return begin_; }
    auto end() const { return end_; }
    Iter begin_, end_;
};

template<class Iter> range(Iter, std::size_t) -> range<Iter>;

auto find_first_n_between(std::vector<std::int64_t>& vec,
                          std::size_t n,
                          std::int64_t from, std::int64_t to)
{
    auto lower = std::lower_bound(begin(vec), end(vec), from);
    auto upper = std::upper_bound(lower, end(vec), to);
    auto size = std::min(n, std::size_t(std::distance(lower, upper)));
    return range(lower, size);
}

int main()
{
    std::vector<std::int64_t> vec { 1, 2, 3, 4, 5, 6, 7, 8, 15, 17, 18, 19, 20 };
    auto slice = find_first_n_between(vec, 5, 6, 15);
    std::copy(std::begin(slice), std::end(slice), std::ostream_iterator<std::int64_t>(std::cout, ", "));
}

Loop over all (unordered) pairs of elements in a vector

I have a std::vector of some data (Points in my case) and I want to loop over all distinct pairs of elements. The order of the pair is not important (as I am only interested in the distance of the points). With a classic for loop, what I want to do looks something like:
std::vector<double> vec{-1., 3., 5., -8., 123., ...};
for (std::vector<double>::size_type first = 0; first < vec.size(); ++first) {
    for (std::vector<double>::size_type second = first + 1; second < vec.size(); ++second) {
        // Compute something using std::fabs(vec.at(first) - vec.at(second))
    }
}
My question is now if one can achieve this more elegantly using range based loops.
I wouldn't attempt to coerce it into a range based loop (since contriving the start of the inner loop will be tricky), but I would work directly with the iterators to clarify the loop body and to make the code less dependent on the specific container you're using:
for (auto first = vec.begin(); first != vec.end(); ++first) {
    for (auto second = first + 1; second != vec.end(); ++second) {
        // Your vec.at(first) is now simply *first.
    }
}
Note that first + 1 is always valid since first is never vec.end() when first + 1 is evaluated.
std::vector::at is also required by the C++ standard to check that the supplied index is in the bounds of the vector (and throw a std::out_of_range exception if it isn't within the bounds), which is an unnecessary overhead in your case.
I provide this answer only because the OP wants a way of doing this with range-based for loops. It isn't more elegant than ordinary loops.
If your vector doesn't have duplicate numbers, you can use reverse iteration in the second loop instead of beginning from a specific point, which lets you use range-based for in both loops.
For reverse iteration with range-based for loops you want an adapter class:
template <typename It>
class reverse_adapter
{
public:
    reverse_adapter(It rbegin, It rend)
        : _rbegin(rbegin), _rend(rend)
    {}

    It begin() const { return _rbegin; }
    It end() const { return _rend; }

private:
    It _rbegin;
    It _rend;
};

template<typename Container>
reverse_adapter<typename Container::reverse_iterator> make_reverse(Container& container)
{
    reverse_adapter<typename Container::reverse_iterator> adapter(std::rbegin(container), std::rend(container));
    return adapter;
}
And use this adapter for reverse iteration in the second loop:
for (auto val : vec)
{
    for (auto second_val : make_reverse(vec)) // from the last item down to the current item of the first loop
    {
        if (val == second_val) break; // instead of first + 1 in the second loop
        auto dif = val - second_val;
    }
}

Combining arrays/lists in an specific fashion

I'm trying to find a sensible algorithm to combine multiple lists/vectors/arrays as defined below.
Each element contains a float declaring the start of its range of validity and a constant that is used over this range. Where ranges from different lists overlap, their constants need to be added to produce one global list.
I've made an attempt at an illustration below to try to give a good idea of what I mean:
First List:
0.5---------------2------------3.2--------4
         a1             a2          a3

Second List:
1----------2----------3---------------4.5
     b1         b2             b3

Desired Output:
0.5----1----------2----------3-3.2--------4--4.5
  a1      a1+b1      a2+b2   ^     a3+b3     b3
                           b3+a2
I can't think of a sensible way of going about this in the case of n lists; just two is quite easy to brute force.
Any hints or ideas would be welcome. Each list is represented as a C++ std::vector (so feel free to use standard algorithms) and is sorted by start-of-range value.
Cheers!
Edit: Thanks for the advice; I've come up with a naive implementation, and I'm not sure why I couldn't get here on my own first. To my mind the obvious improvement would be to store an iterator for each vector, since they're already sorted, and avoid re-traversing each vector for each point. Given that most vectors will contain fewer than 100 elements, but there may be many vectors, this may or may not be worthwhile. I'd have to profile to see.
Any thoughts on this?
#include <vector>
#include <iostream>
#include <algorithm>
#include <iterator>

struct DataType
{
    double intervalStart;
    int data;
    // More data here, the data is not just a single int, but that
    // works for our demonstration
};

int main(void)
{
    // The final "data" of each vector is meaningless as it refers to
    // the coming range which won't be used as this is only for
    // bounded ranges
    std::vector<std::vector<DataType> > input = {{{0.5, 1}, {2.0, 3}, {3.2, 3}, {4.0, 4}},
                                                 {{1.0, 5}, {2.0, 6}, {3.0, 7}, {4.5, 8}},
                                                 {{-34.7895, 15}, {-6.0, -2}, {1.867, 5}, {340, 7}}};

    // Setup output vector
    std::vector<DataType> output;
    std::size_t inputSize = 0;
    for (const auto& internalVec : input)
        inputSize += internalVec.size();
    output.reserve(inputSize);

    // Fill output vector
    for (const auto& internalVec : input)
        std::copy(internalVec.begin(), internalVec.end(), std::back_inserter(output));

    // Sort output vector by intervalStart points
    std::sort(output.begin(), output.end(),
              [](const DataType& data1, const DataType& data2)
              {
                  return data1.intervalStart < data2.intervalStart;
              });

    // Remove DataTypes with same intervalStart - each interval can only start once
    output.erase(std::unique(output.begin(), output.end(),
                             [](const DataType& dt1, const DataType& dt2)
                             {
                                 return dt1.intervalStart == dt2.intervalStart;
                             }), output.end());

    // Output now contains all the right intersections, just not with the right data

    // Lambda to find the data value associated with an intervalStart
    // value in a vector
    auto FindDataValue = [&](const std::vector<DataType>& v, double startValue)
    {
        auto iter = std::find_if(v.begin(), v.end(), [startValue](const DataType& data)
        {
            return data.intervalStart > startValue;
        });
        if (iter == v.begin() || iter == v.end())
        {
            return 0;
        }
        return (iter - 1)->data;
    };

    // For each interval in the output traverse the input and sum the
    // data constants
    for (auto& val : output)
    {
        int sectionData = 0;
        for (const auto& iv : input)
            sectionData += FindDataValue(iv, val.intervalStart);
        val.data = sectionData;
    }

    for (const auto& i : output)
        std::cout << "loc: " << i.intervalStart << " data: " << i.data << std::endl;

    return 0;
}
Edit2: @Stas's code is a very good way to approach this problem. I've just tested it on all the edge cases I could think of.
Here's my merge_intervals implementation in case anyone is interested. The only slight change I've had to make to the snippets Stas provided is:
for (auto& v : input)
    v.back().data = 0;
Before combining the vectors as suggested. Thanks!
template<class It1, class It2, class OutputIt>
OutputIt merge_intervals(It1 first1, It1 last1,
                         It2 first2, It2 last2,
                         OutputIt destBegin)
{
    const auto begin1 = first1;
    const auto begin2 = first2;

    auto CombineData = [](const DataType& d1, const DataType& d2)
    {
        return DataType{d1.intervalStart, (d1.data + d2.data)};
    };

    for (; first1 != last1; ++destBegin)
    {
        if (first2 == last2)
        {
            return std::copy(first1, last1, destBegin);
        }
        if (first1->intervalStart == first2->intervalStart)
        {
            *destBegin = CombineData(*first1, *first2);
            ++first1; ++first2;
        }
        else if (first1->intervalStart < first2->intervalStart)
        {
            if (first2 > begin2)
                *destBegin = CombineData(*first1, *(first2 - 1));
            else
                *destBegin = *first1;
            ++first1;
        }
        else
        {
            if (first1 > begin1)
                *destBegin = CombineData(*first2, *(first1 - 1));
            else
                *destBegin = *first2;
            ++first2;
        }
    }
    return std::copy(first2, last2, destBegin);
}
Unfortunately, your algorithm is inherently slow, so it doesn't make sense to profile or apply C++-specific tweaks; they won't help. The algorithm will never finish in reasonable time even on pretty small inputs, like merging 1000 lists of 10000 elements each.
Let's try to evaluate the time complexity of your algorithm. For the sake of simplicity, let's merge only lists of the same length.
L - length of a list
N - number of lists to be merged
T = L * N - length of the whole concatenated list
Complexity of your algorithm steps:
create output vector - O(T)
sort output vector - O(T*log(T))
filter output vector - O(T)
fix data in output vector - O(T*T)
As you can see, the last step determines the whole algorithm's complexity: O(T*T) = O(L^2*N^2). That is not acceptable for practical applications: to merge 1000 lists of 10000 elements each, the algorithm would have to run about 10^14 cycles.
Actually, the task is pretty complex, so do not try to solve it in one step. Divide and conquer!
Write an algorithm that merges two lists into one
Use it to merge a list of lists
Merging two lists into one
This is relatively easy to implement (but be careful with corner cases). The algorithm should have linear time complexity: O(2*L). Take a look at how std::merge is implemented. You just need to write your custom variant of std::merge, let's call it merge_intervals.
Applying a merge algorithm to a list of lists
This is a little bit tricky, but again, divide and conquer! The idea is to do a recursive merge: split the list of lists into two halves and merge them.
template<class It, class Combine>
auto merge_n(It first, It last, Combine comb)
    -> typename std::remove_reference<decltype(*first)>::type
{
    if (first == last)
        throw std::invalid_argument("Empty range");
    auto count = std::distance(first, last);
    if (count == 1)
        return *first;
    auto it = first;
    std::advance(it, count / 2);
    auto left = merge_n(first, it, comb);
    auto right = merge_n(it, last, comb);
    return comb(left, right);
}
Usage:
auto combine = [](const std::vector<DataType>& a, const std::vector<DataType>& b)
{
    std::vector<DataType> result;
    merge_intervals(a.begin(), a.end(), b.begin(), b.end(),
                    std::back_inserter(result));
    return result;
};
auto output = merge_n(input.begin(), input.end(), combine);
The nice property of such a recursive approach is its time complexity: O(L*N*log(N)) for the whole algorithm. So, to merge 1000 lists of 10000 elements each, the algorithm runs 10000 * 1000 * 9.966 ≈ 99,660,000 cycles. That is 1,000,000 times faster than the original algorithm.
Moreover, such an algorithm is inherently parallelizable. It is not a big deal to write a parallel version of merge_n and run it on a thread pool.
I know I'm a bit late to the party, but when I started writing this there wasn't a suitable answer yet, and my solution should have a relatively good time complexity, so here you go:
I think the most straightforward way to approach this is to see each of your sorted lists as a stream of events: At a given time, the value (of that stream) changes to a new value:
template<typename T>
struct Point {
    using value_type = T;
    float time;
    T value;
};
You want to superimpose those streams into a single stream (i.e. having their values summed up at any given point). For that you take the earliest event from all streams, and apply its effect on the result stream. Therefore, you need to first "undo" the effect that the previous value from that stream made on the result stream, and then add the new value to the current value of the result stream.
To be able to do that, you need to remember for each stream the last value, the next value (and when the stream is empty):
std::vector<std::tuple<Value, StreamIterator, StreamIterator>> streams;
The first element of the tuple is the last effect of that stream onto the result stream, the second is an iterator pointing to the streams next event, and the last is the end iterator of that stream:
transform(from, to, inserter(streams, begin(streams)),
          [] (auto & stream) {
              return make_tuple(static_cast<Value>(0), begin(stream), end(stream));
          });
To be able to always get the earliest event of all the streams, it helps to keep the (information about the) streams in a (min) heap, where the top element is the stream with the next (earliest) event. That's the purpose of the following comparator:
auto heap_compare = [] (auto const & lhs, auto const & rhs) {
    bool less = (*get<1>(lhs)).time < (*get<1>(rhs)).time;
    return (not less);
};
Then, as long as there are still some events (i.e. some stream that is not empty), first (re)build the heap, take the top element and apply its next event to the result stream, and then remove that element from the stream. Finally, if the stream is now empty, remove it.
// The current value of the result stream.
Value current = 0;
while (streams.size() > 0) {
    // Reorder the stream information to get the one with the earliest next
    // value into top ...
    make_heap(begin(streams), end(streams), heap_compare);
    // .. and select it.
    auto & earliest = streams[0];
    // New value is the current one, minus the previous effect of the selected
    // stream plus the new value from the selected stream
    current = current - get<0>(earliest) + (*get<1>(earliest)).value;
    // Store the new time point with the new value and the time of the used
    // time point from the selected stream
    *out++ = Point<Value>{(*get<1>(earliest)).time, current};
    // Update the effect of the selected stream
    get<0>(earliest) = (*get<1>(earliest)).value;
    // Advance selected stream to its next time point
    ++(get<1>(earliest));
    // Remove stream if empty
    if (get<1>(earliest) == get<2>(earliest)) {
        swap(streams[0], streams[streams.size() - 1u]);
        streams.pop_back();
    }
}
This will return a stream where there might be multiple points with the same time, but a different value. This occurs when there are multiple "events" at the same time. If you only want the last value, i.e. the value after all these events happened, then one needs to combine them:
merge_point_lists(begin(input), end(input), inserter(merged, begin(merged)));
// returns points with the same time, but with different values. remove these
// duplicates, by first making them REALLY equal, i.e. setting their values
// to the last value ...
for (auto write = begin(merged), read = begin(merged), stop = end(merged);
     write != stop;) {
    for (++read; (read != stop) and (read->time == write->time); ++read) {
        write->value = read->value;
    }
    for (auto const cached = (write++)->value; write != read; ++write) {
        write->value = cached;
    }
}
// ... and then removing them.
merged.erase(
    unique(begin(merged), end(merged),
           [](auto const & lhs, auto const & rhs) {
               return (lhs.time == rhs.time);}),
    end(merged));
Concerning the time complexity: this iterates over all "events", so it depends on the number of events e. The very first make_heap call has to build a complete new heap; this has worst-case complexity of 3 * s, where s is the number of streams the function has to merge. On subsequent calls, make_heap only has to correct the very first element, which has worst-case complexity of log(s'). I write s' because the number of streams (that need to be considered) will decrease to zero. This gives
3s + (e - 1) * log(s')
as the complexity. Assuming the worst case, where s' decreases slowly (this happens when the events are evenly distributed across the streams, i.e. all streams have the same number of events), that becomes
3s + (e - 1 - s) * log(s) + (sum of log(i) for i = 1 to s)
Do you really need a data structure as the result? I don't think so. Actually you're defining several functions that can be added. The examples you give are encoded using a 'start, value(, implicit end)' tuple. The basic building block is a function that looks up its value at a certain point:
// assuming edge has members 'x' (start of range) and 'value'
double valueAt(const vector<edge> &starts, float point) {
    auto it = std::adjacent_find(begin(starts), end(starts),
                                 [&](edge e1, edge e2) {
                                     return e1.x <= point && point < e2.x;
                                 });
    return it->value;
}
The function value for a point is the sum of the function values of all component series.
If you really need a list in the end, you can join and sort all edge.x values of all series, and create the list from that.
Unless performance is an issue :)
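If you do need that list, here is a rough sketch of the join-and-sort step (assuming edge is an aggregate with members x and value, matching valueAt above, plus a hypothetical allSeries collection of vector<edge>):
// Collect every breakpoint across all series, deduplicate, then evaluate
// the summed function at each breakpoint.
std::vector<float> xs;
for (const auto& series : allSeries)
    for (const auto& e : series)
        xs.push_back(e.x);
std::sort(xs.begin(), xs.end());
xs.erase(std::unique(xs.begin(), xs.end()), xs.end());

std::vector<edge> combined;
for (float x : xs) {
    double sum = 0;
    for (const auto& series : allSeries)
        sum += valueAt(series, x); // see valueAt above; boundary cases ignored in this sketch
    combined.push_back({x, sum});
}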
If you can combine two of these structures, you can combine many.
First, encapsulate your std::vector into a class. Implement what you know as operator+= (and define operator+ in terms of this if you want). With that in place, you can combine as many as you like, just by repeated addition. You could even use std::accumulate to combine a collection of them.
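A minimal sketch of that wrapper, reusing DataType and merge_intervals from the snippets above (the StepFunction name is made up here):
#include <iterator>
#include <numeric>
#include <utility>
#include <vector>

struct StepFunction {
    std::vector<DataType> points; // sorted by intervalStart

    StepFunction& operator+=(const StepFunction& other) {
        std::vector<DataType> merged;
        merge_intervals(points.begin(), points.end(),
                        other.points.begin(), other.points.end(),
                        std::back_inserter(merged));
        points = std::move(merged);
        return *this;
    }

    friend StepFunction operator+(StepFunction lhs, const StepFunction& rhs) {
        lhs += rhs;
        return lhs;
    }
};

// Combining a whole collection of them:
// StepFunction total = std::accumulate(funcs.begin(), funcs.end(), StepFunction{});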