How to do an average operation on an STL iterator in C++

I am trying to write a merge algorithm based on an STL vector and iterators. I know there is also std::merge, but I would like to make my own version. I want to average two iterators to obtain the middle position, as I would with plain integer indices: iMiddle = (iEnd + iBegin) / 2. However, that never works (two iterators cannot be added together), so I then tried computing the midpoint from the vector size:
iMiddle = iBegin + A.size() / 2;
but this is a clumsy way to do it, and it also runs into problems when the range is split more than once. Does anyone have a better solution?
void TopDownMergeSort(int n, vector<int> A, vector<int> B)
{
    TopDownSplitMerge(A, A.begin(), A.end(), B);
}

void TopDownSplitMerge(vector<int> A, vector<int>::iterator iBegin, vector<int>::iterator iEnd, vector<int> B)
{
    vector<int>::iterator iMiddle;
    // if run size == 1
    if ((iEnd - iBegin) < 2) {
        cout << "merge finished!" << endl;
        return; // consider it sorted
    }
    iMiddle = iBegin + A.size() / 2;
    TopDownSplitMerge(A, iBegin, iMiddle, B); // split / merge left half
}
UPDATE
Using
iMiddle = iBegin + distance(iBegin, iEnd) / 2;
everything works fine!

middle = begin + std::distance(begin, end) / 2;
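For illustration, a minimal generic sketch (the helper name is mine): std::distance works for any iterator category, and for random-access iterators such as vector's it is O(1).
#include <iterator>

// Midpoint of an arbitrary iterator range [begin, end).
template <class It>
It midpoint_of(It begin, It end)
{
    It mid = begin;
    std::advance(mid, std::distance(begin, end) / 2);
    return mid;
}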

Related

C++ insert element at the beginning of a vector using insert()

Below I have a function which is supposed to extract the odd numbers from a vector into another vector, while inserting the even numbers at the beginning of the old vector so that I can resize it later. I know it is not really an efficient way, but it is a problem from a book to test the speed of the resize variant against erasing, which I think would have the same speed anyway. My problem is in the else branch: when I want to insert an element that is even, it does not work this way. What am I doing wrong in the insert call? Am I passing the iterator in a wrong way?
std::vector<int> extract(std::vector<int>& even)
{
    std::vector<int> odd;
    std::vector<int>::size_type size = even.size();
    std::vector<int>::const_iterator a = even.begin();
    std::vector<int>::const_iterator b = even.end();
    while (a != b)
    {
        if ((*a) % 2 != 0)
        {
            odd.push_back((*a));
        }
        else
        {
            even.insert(even.begin(), (*a));
        }
        a++;
    }
    //even.resize(size);
    return odd;
}
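For reference, a minimal sketch of one safe way to do this (an assumption about the intent, not the book's variant): inserting into even while iterating over it both shifts the elements being walked and can reallocate the buffer, invalidating a and b. Building the kept elements in a separate vector sidesteps both problems:
#include <utility>
#include <vector>

std::vector<int> extract(std::vector<int>& even)
{
    std::vector<int> odd;
    std::vector<int> kept; // the even numbers, in their original order
    for (int x : even)
        (x % 2 != 0 ? odd : kept).push_back(x);
    even = std::move(kept); // `even` now holds only the even numbers
    return odd;
}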

How to merge sorted vectors into a single vector in C++

I have 10,000 vector<pair<unsigned,unsigned>>s and I want to merge them into a single vector that is lexicographically sorted and contains no duplicates. To do so I wrote the following code. However, to my surprise, it takes a lot of time. Can someone please suggest how I can reduce its running time?
using obj = pair<unsigned, unsigned>;
vector< vector<obj> > vecOfVec; // 10,000 vector<obj>, each sorted, with size() = 10M
vector<obj> result;
for (auto it = vecOfVec.begin(), l = vecOfVec.end(); it != l; ++it)
{
    // append vectors
    result.insert(result.end(), it->begin(), it->end());
    // sort result
    std::sort(result.begin(), result.end());
    // remove duplicates from result
    result.erase(std::unique(result.begin(), result.end()), result.end());
}
I think you should use the fact that the vectors in vecOfVec are sorted.
So: detect the minimal front value across the individual vectors, push_back() it into result, and then remove every front value matching that minimum from the vectors (this avoids duplicates in result).
If you can consume the vecOfVec variable, something like this works (caution: code not tested, it's just to give the idea):
while ( vecOfVec.size() )
{
    // detect the minimal front value
    auto itc = vecOfVec.cbegin();
    auto lc = vecOfVec.cend();
    auto valMin = itc->front();
    while ( ++itc != lc )
        valMin = std::min(valMin, itc->front());
    // push_back() the minimal front value in result
    result.push_back(valMin);
    for ( auto it = vecOfVec.begin() ; it != vecOfVec.end() ; )
    {
        // remove all the front values equal to valMin (this removes the
        // duplicates from result)
        while ( (false == it->empty()) && (valMin == it->front()) )
            it->erase(it->begin());
        // when a vector is empty, it is removed
        it = ( it->empty() ? vecOfVec.erase(it) : ++it );
    }
}
If you can, I suggest switching vecOfVec from a vector< vector<obj> > to something that permits efficient removal from the front of the individual containers (stacks?) and efficient removal of the single containers themselves (a list?).
If there are lots of duplicates, you should use a set rather than a vector for your result, as a set is the most natural container for storing things without duplicates:
set< pair<unsigned,unsigned> > resultSet;
for (auto it = vecOfVec.begin(); it != vecOfVec.end(); ++it)
    resultSet.insert(it->begin(), it->end());
If you need to turn it into a vector, you can write
vector< pair<unsigned,unsigned> > resultVec(resultSet.begin(), resultSet.end());
Note that since your code handles roughly 100 billion elements in total (10,000 vectors of 10M each), it would still take a lot of time, no matter what. At least hours, if not days.
Other ideas are:
recursively merge the vectors (10000 -> 5000 -> 2500 -> ... -> 1); one pairwise step of this is sketched below
to merge the 10,000 vectors directly, store 10,000 iterators in a heap structure
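A minimal sketch of one pairwise merge step (my own illustration, assuming the obj alias from the question and two already sorted, duplicate-free inputs):
#include <algorithm>
#include <iterator>
#include <vector>

// Merge two sorted, duplicate-free vectors into one sorted,
// duplicate-free vector. Applied pairwise, this halves the number
// of vectors per round: 10000 -> 5000 -> 2500 -> ... -> 1.
std::vector<obj> merge_two(const std::vector<obj>& a, const std::vector<obj>& b)
{
    std::vector<obj> out;
    out.reserve(a.size() + b.size());
    std::merge(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    out.erase(std::unique(out.begin(), out.end()), out.end());
    return out;
}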
One problem with your code is the excessive use of std::sort. Unfortunately, the quicksort algorithm (which is usually the workhorse behind std::sort) is not particularly faster on an already (mostly) sorted array.
Moreover, you're not exploiting the fact that your initial vectors are already sorted. This can be exploited by keeping a heap of their next values, in which case you never need to call sort again. This may be coded as follows (code tested using obj=int), but perhaps it can be made more concise.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// represents the next unused entry in one vector<obj>
template<typename obj>
struct feed
{
    typename std::vector<obj>::const_iterator current, end;
    feed(std::vector<obj> const&v)
        : current(v.begin()), end(v.end()) {}
    friend bool operator> (feed const&l, feed const&r)
    { return *(l.current) > *(r.current); }
};

// - returns the smallest element
// - sets the corresponding feed to its next value and re-establishes the heap
template<typename obj>
obj get_next(std::vector<feed<obj>>&heap)
{
    auto&f = heap[0];
    auto x = *(f.current++);
    if(f.current == f.end) {
        std::pop_heap(heap.begin(), heap.end(), std::greater<feed<obj>>{});
        heap.pop_back();
    } else
        std::make_heap(heap.begin(), heap.end(), std::greater<feed<obj>>{});
    return x;
}

template<typename obj>
std::vector<obj> merge(std::vector<std::vector<obj>>const&vecOfvec)
{
    // create min heap of feed<obj> and count total number of objects
    std::vector<feed<obj>> input;
    input.reserve(vecOfvec.size());
    size_t num_total = 0;
    for(auto const&v : vecOfvec)
        if(v.size()) {
            num_total += v.size();
            input.emplace_back(v);
        }
    std::make_heap(input.begin(), input.end(), std::greater<feed<obj>>{});
    // append values in ascending order, avoiding duplicates
    std::vector<obj> result;
    result.reserve(num_total);
    while(!input.empty()) {
        auto x = get_next(input);
        result.push_back(x);
        while(!input.empty() &&
              !(*(input[0].current) > x)) // skip duplicates
            get_next(input);
    }
    return result;
}
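For reference, a minimal usage sketch (my own, with obj = int, which the answer says the code was tested with):
#include <iostream>

int main()
{
    std::vector<std::vector<int>> vecOfVec = {{1, 3, 5}, {2, 3, 6}, {1, 4, 6}};
    std::vector<int> result = merge<int>(vecOfVec);
    for (int x : result)
        std::cout << x << ' '; // prints: 1 2 3 4 5 6
    std::cout << '\n';
}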

How do I decrease the count of an element in a multiset in C++?

I am using a multiset in C++, which I believe stores an element together with its count when it is inserted.
When I want to delete an element, I just want to decrease its count in the set by 1, as long as the count stays greater than 0.
Example C++ code:
multiset<int> mset;
mset.insert(2);
mset.insert(2);
printf("%d ", mset.count(2)); // this prints 2
// here I need an O(1) constant-time function (built-in or whatever)
// to decrease the count of 2 in the set without deleting it
// Remember: constant time only
-> Function and its specification goes here
printf("%d ", mset.count(2)); // it should print 1 now
Is there any way to achieve this, or should I delete the element entirely and re-insert 2 the required (count-1) times?
... I am using a multi-set in c++, which stores an element and the respective count of it ...
No, you aren't. You're using a multiset, which stores n copies of a value that was inserted n times.
If you want to store something relating a value to a count, use an associative container like std::map<int, int>, and use map[X]++ to increment the number of Xs.
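A minimal sketch of that approach (names are illustrative):
#include <cstdio>
#include <map>

int main() {
    std::map<int, int> count;
    count[2]++;                    // count of 2 is now 1
    count[2]++;                    // count of 2 is now 2
    if (--count[2] == 0)           // decrement; erase once it reaches zero
        count.erase(2);
    std::printf("%d\n", count[2]); // prints 1
}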
... I need an O(1) constant time function ... to decrease the count ...
Both map and set have O(log N) complexity just to find the element you want to alter, so O(1) is impossible with them. Use std::unordered_map/std::unordered_set to get O(1) average complexity.
... I just want to decrease the count of that element in the set by 1 till it is >0
I'm not sure what that means.
with a set:
to remove all copies of an element from the set, use equal_range to get a range (pair of iterators), and then erase that range
to remove all-but-one copies of a non-empty range, just increment the first iterator of the pair, check it is still not equal to the second iterator, and erase the resulting range
both have an O(log N) lookup (equal_range) step followed by a linear-time erase step (linear in the number of elements sharing the key, not in N); both are sketched below
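A sketch of the two erase variants (ms and key are illustrative names):
// ms is a std::multiset<int>; key is the value whose copies we trim
auto range = ms.equal_range(key);                   // O(log N) lookup
if (range.first != range.second) {
    // keep exactly one copy: erase everything after the first
    ms.erase(std::next(range.first), range.second);
    // or drop all copies instead:
    // ms.erase(range.first, range.second);
}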
with a map:
to remove the count from a map, just erase the key
to set the count to one, just use map[key]=1;
both of these have an O(log N) lookup followed by a constant-time erase
with an unordered map ... for your purposes it's identical to the map above, except with O(1) average complexity.
Here's a quick example using unordered_map:
#include <unordered_map>

template <typename Key>
class Counter {
    std::unordered_map<Key, unsigned> count_;
public:
    // increment the count of k by delta, returning the new count
    unsigned inc(Key k, unsigned delta = 1) {
        auto result = count_.emplace(k, delta);
        if (result.second) {         // k was not present: its count is delta
            return delta;
        } else {                     // k was present: bump its count
            unsigned& current = result.first->second;
            current += delta;
            return current;
        }
    }
    // decrement the count of k by delta, erasing it when it reaches zero
    unsigned dec(Key k, unsigned delta = 1) {
        auto iter = count_.find(k);
        if (iter == count_.end()) return 0;
        unsigned& current = iter->second;
        if (current > delta) {
            current -= delta;
            return current;
        }
        // else current <= delta means zero
        count_.erase(iter);
        return 0;
    }
    unsigned get(Key k) const {
        auto iter = count_.find(k);
        if (iter == count_.end()) return 0;
        return iter->second;
    }
};
and use it like so:
#include <cassert>

int main() {
    Counter<int> c;
    // test increment
    assert(c.inc(1) == 1);
    assert(c.inc(2) == 1);
    assert(c.inc(2) == 2);
    // test lookup
    assert(c.get(0) == 0);
    assert(c.get(1) == 1);
    // test simple decrement
    assert(c.get(2) == 2);
    assert(c.dec(2) == 1);
    assert(c.get(2) == 1);
    // test erase and underflow
    assert(c.dec(2) == 0);
    assert(c.dec(2) == 0);
    assert(c.dec(1, 42) == 0);
}

Combining arrays/lists in a specific fashion

I'm trying to find a sensible algorithm to combine multiple lists/vectors/arrays as defined below.
Each element contains a float declaring the start of its range of validity and a constant that is used over this range. Where ranges from different lists overlap, their constants need to be added to produce one global list.
I've made an attempt at an illustration below to give a good idea of what I mean:
First list:
0.5---------------2------------3.2--------4
        a1              a2           a3
Second list:
1----------2----------3---------------4.5
     b1         b2            b3
Desired output:
0.5----1----------2----------3--3.2--------4--4.5
   a1    a1+b1       a2+b2    ^      a3+b3    b3
                              ^ = a2+b3
I can't think of a sensible way of going about this in the case of n lists; just 2 is quite easy to brute force.
Any hints or ideas would be welcome. Each list is represented as a C++ std::vector (so feel free to use standard algorithms) and is sorted by start-of-range value.
Cheers!
Edit: Thanks for the advice; I've come up with a naive implementation, and I'm not sure why I couldn't get there on my own first. To my mind the obvious improvement would be to keep an iterator into each vector, since they're already sorted, rather than re-traversing each vector for each point. Given that most vectors will contain fewer than 100 elements, but there may be many vectors, this may or may not be worthwhile; I'd have to profile to see.
Any thoughts on this?
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

struct DataType
{
    double intervalStart;
    int data;
    // More data here; the real data is not just a single int, but that
    // works for our demonstration
};

int main(void)
{
    // The final "data" of each vector is meaningless as it refers to
    // the coming range, which won't be used since this is only for
    // bounded ranges
    std::vector<std::vector<DataType> > input = {{{0.5, 1}, {2.0, 3}, {3.2, 3}, {4.0, 4}},
                                                 {{1.0, 5}, {2.0, 6}, {3.0, 7}, {4.5, 8}},
                                                 {{-34.7895, 15}, {-6.0, -2}, {1.867, 5}, {340, 7}}};
    // Set up output vector
    std::vector<DataType> output;
    std::size_t inputSize = 0;
    for (const auto& internalVec : input)
        inputSize += internalVec.size();
    output.reserve(inputSize);
    // Fill output vector
    for (const auto& internalVec : input)
        std::copy(internalVec.begin(), internalVec.end(), std::back_inserter(output));
    // Sort output vector by intervalStart points
    std::sort(output.begin(), output.end(),
              [](const DataType& data1, const DataType& data2)
              {
                  return data1.intervalStart < data2.intervalStart;
              });
    // Remove DataTypes with the same intervalStart - each interval can only start once
    output.erase(std::unique(output.begin(), output.end(),
                             [](const DataType& dt1, const DataType& dt2)
                             {
                                 return dt1.intervalStart == dt2.intervalStart;
                             }), output.end());
    // Output now contains all the right intersections, just not with the right data
    // Lambda to find the data value associated with an intervalStart value in a vector
    auto FindDataValue = [](const std::vector<DataType>& v, double startValue)
    {
        auto iter = std::find_if(v.begin(), v.end(), [startValue](const DataType& data)
        {
            return data.intervalStart > startValue;
        });
        if (iter == v.begin() || iter == v.end())
        {
            return 0;
        }
        return (iter - 1)->data;
    };
    // For each interval in the output, traverse the input and sum the data constants
    for (auto& val : output)
    {
        int sectionData = 0;
        for (const auto& iv : input)
            sectionData += FindDataValue(iv, val.intervalStart);
        val.data = sectionData;
    }
    for (const auto& i : output)
        std::cout << "loc: " << i.intervalStart << " data: " << i.data << std::endl;
    return 0;
}
Edit 2: @Stas's code is a very good way to approach this problem; I've tested it on all the edge cases I could think of.
Here's my merge_intervals implementation in case anyone is interested. The only slight change I had to make to the snippets Stas provided is to run
for (auto& v : input)
    v.back().data = 0;
before combining the vectors as suggested. Thanks!
template<class It1, class It2, class OutputIt>
OutputIt merge_intervals(It1 first1, It1 last1,
                         It2 first2, It2 last2,
                         OutputIt destBegin)
{
    const auto begin1 = first1;
    const auto begin2 = first2;
    auto CombineData = [](const DataType& d1, const DataType& d2)
    {
        return DataType{d1.intervalStart, (d1.data + d2.data)};
    };
    for (; first1 != last1; ++destBegin)
    {
        if (first2 == last2)
        {
            return std::copy(first1, last1, destBegin);
        }
        if (first1->intervalStart == first2->intervalStart)
        {
            *destBegin = CombineData(*first1, *first2);
            ++first1; ++first2;
        }
        else if (first1->intervalStart < first2->intervalStart)
        {
            if (first2 > begin2)
                *destBegin = CombineData(*first1, *(first2 - 1));
            else
                *destBegin = *first1;
            ++first1;
        }
        else
        {
            if (first1 > begin1)
                *destBegin = CombineData(*first2, *(first1 - 1));
            else
                *destBegin = *first2;
            ++first2;
        }
    }
    return std::copy(first2, last2, destBegin);
}
Unfortunately, your algorithm is inherently slow. It doesn't make sense to profile it or apply C++-specific tweaks; that won't help. It will never finish on even fairly small inputs, such as merging 1000 lists of 10,000 elements each.
Let's evaluate the time complexity of your algorithm. For the sake of simplicity, let's only merge lists of equal length.
L - length of a list
N - number of lists to be merged
T = L * N - length of a whole concatenated list
Complexity of your algorithm steps:
create output vector - O(T)
sort output vector - O(T*log(T))
filter output vector - O(T)
fix data in output vector - O(T*T)
See, the last step determines the whole algorithm's complexity: O(T*T) = O(L^2*N^2). That is not acceptable for practical applications: to merge 1000 lists of 10,000 elements each, the algorithm would have to run about 10^14 cycles.
Actually, the task is pretty complex, so do not try to solve it in one step. Divide and conquer!
Write an algorithm that merges two lists into one
Use it to merge a list of lists
Merging two lists into one
This is relatively easy to implement (but be careful with corner cases). The algorithm should have linear time complexity: O(2*L). Take a look at how std::merge is implemented; you just need to write your own variant of std::merge, let's call it merge_intervals.
Applying a merge algorithm to a list of lists
This is a little bit tricky, but again: divide and conquer! The idea is to merge recursively: split the list of lists into two halves and merge them.
#include <iterator>
#include <stdexcept>
#include <type_traits>

template<class It, class Combine>
auto merge_n(It first, It last, Combine comb)
    -> typename std::remove_reference<decltype(*first)>::type
{
    if (first == last)
        throw std::invalid_argument("Empty range");
    auto count = std::distance(first, last);
    if (count == 1)
        return *first;
    auto it = first;
    std::advance(it, count / 2);
    auto left = merge_n(first, it, comb);
    auto right = merge_n(it, last, comb);
    return comb(left, right);
}
Usage:
auto combine = [](const std::vector<DataType>& a, const std::vector<DataType>& b)
{
    std::vector<DataType> result;
    merge_intervals(a.begin(), a.end(), b.begin(), b.end(),
                    std::back_inserter(result));
    return result;
};
auto output = merge_n(input.begin(), input.end(), combine);
The nice property of such a recursive approach is its time complexity: O(L*N*log(N)) for the whole algorithm. So, to merge 1000 lists of 10,000 elements each, the algorithm runs about 10000 * 1000 * 9.966 ≈ 99,660,000 cycles. That is 1,000,000 times faster than the original algorithm.
Moreover, such an algorithm is inherently parallelizable. It is not a big deal to write a parallel version of merge_n and run it on a thread pool.
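For illustration, a minimal sketch of that idea (my own, untested assumption, using std::async rather than a hand-written thread pool): run the left half asynchronously and the right half on the current thread, with a depth cap so the number of spawned threads stays bounded.
#include <future>

template<class It, class Combine>
auto merge_n_par(It first, It last, Combine comb, int depth = 4)
    -> typename std::remove_reference<decltype(*first)>::type
{
    if (first == last)
        throw std::invalid_argument("Empty range");
    auto count = std::distance(first, last);
    if (count == 1)
        return *first;
    auto mid = first;
    std::advance(mid, count / 2);
    if (depth <= 0) // depth cap reached: fall back to the sequential merge_n
        return comb(merge_n(first, mid, comb), merge_n(mid, last, comb));
    // left half on another thread, right half on this one
    auto left = std::async(std::launch::async, [=] {
        return merge_n_par(first, mid, comb, depth - 1);
    });
    auto right = merge_n_par(mid, last, comb, depth - 1);
    return comb(left.get(), right);
}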
I know I'm a bit late to the party, but when I started writing this there wasn't a suitable answer yet, and my solution has relatively good time complexity, so here you go:
I think the most straightforward way to approach this is to see each of your sorted lists as a stream of events: at a given time, the value (of that stream) changes to a new value:
template<typename T>
struct Point {
    using value_type = T;
    float time;
    T value;
};
You want to superimpose those streams into a single stream (i.e. having their values summed up at any given point). For that, you take the earliest event from all streams and apply its effect to the result stream. To do so, you first need to "undo" the effect that the previous value from that stream had on the result stream, and then add the new value to the current value of the result stream.
To be able to do that, you need to remember for each stream its last value and its next value (and whether the stream is empty):
std::vector<std::tuple<Value, StreamIterator, StreamIterator>> streams;
The first element of the tuple is the last effect of that stream on the result stream, the second is an iterator pointing to the stream's next event, and the last is the end iterator of that stream:
transform(from, to, inserter(streams, begin(streams)),
          [] (auto & stream) {
              return make_tuple(static_cast<Value>(0), begin(stream), end(stream));
          });
To be able to always get the earliest event across all streams, it helps to keep the (information about the) streams in a min-heap, where the top element is the stream with the next (earliest) event. That's the purpose of the following comparator:
auto heap_compare = [] (auto const & lhs, auto const & rhs) {
    // min-heap: the stream whose next event is earliest ends up on top
    // (the comparator must be a strict ordering, hence > rather than >=)
    return (*get<1>(lhs)).time > (*get<1>(rhs)).time;
};
Then, as long as there are still some events (i.e. some stream that is not empty), first (re)build the heap, take the top element, apply its next event to the result stream, and then remove that event from the stream. Finally, if the stream is now empty, remove it.
// The current value of the result stream.
Value current = 0;
while (streams.size() > 0) {
    // Reorder the stream information to get the one with the earliest next
    // value to the top ...
    make_heap(begin(streams), end(streams), heap_compare);
    // ... and select it.
    auto & earliest = streams[0];
    // The new value is the current one, minus the previous effect of the
    // selected stream, plus the new value from the selected stream
    current = current - get<0>(earliest) + (*get<1>(earliest)).value;
    // Store a new time point with the new value and the time of the used
    // time point from the selected stream
    *out++ = Point<Value>{(*get<1>(earliest)).time, current};
    // Update the effect of the selected stream
    get<0>(earliest) = (*get<1>(earliest)).value;
    // Advance the selected stream to its next time point
    ++(get<1>(earliest));
    // Remove the stream if it is empty
    if (get<1>(earliest) == get<2>(earliest)) {
        swap(streams[0], streams[streams.size() - 1u]);
        streams.pop_back();
    }
}
This will return a stream where there may be multiple points with the same time but different values. This occurs when there are multiple "events" at the same time. If you only want the last value, i.e. the value after all these events have happened, you need to combine them:
merge_point_lists(begin(input), end(input), inserter(merged, begin(merged)));
// returns points with the same time but with different values. remove these
// duplicates, by first making them REALLY equal, i.e. setting their values
// to the last value ...
for (auto write = begin(merged), read = begin(merged), stop = end(merged);
     write != stop;) {
    for (++read; (read != stop) and (read->time == write->time); ++read) {
        write->value = read->value;
    }
    for (auto const cached = (write++)->value; write != read; ++write) {
        write->value = cached;
    }
}
// ... and then removing them.
merged.erase(
    unique(begin(merged), end(merged),
           [](auto const & lhs, auto const & rhs) {
               return (lhs.time == rhs.time); }),
    end(merged));
Concerning the time complexity: the algorithm iterates over all "events", so it depends on the number of events e. The very first make_heap call has to build a complete new heap; this has worst-case complexity 3 * s, where s is the number of streams the function has to merge. On subsequent calls, make_heap only has to correct the very first element, which has worst-case complexity log(s'). I write s' because the number of streams (that still need to be considered) decreases to zero over time. This gives
3s + (e - 1) * log(s')
as the complexity. Assuming the worst case, where s' decreases slowly (this happens when the events are evenly distributed across the streams, i.e. all streams have the same number of events), this becomes
3s + (e - 1 - s) * log(s) + (sum of log(i) for i = 1 to s)
Do you really need a data structure as the result? I don't think so. Actually, you're defining several functions that can be added. The examples you give are encoded as (start, value(, implicit end)) tuples. The basic building block is a function that looks up its value at a certain point:
// edge is assumed to be: struct edge { float x; double value; };
double valueAt(const vector<edge> &starts, float point) {
    auto it = std::adjacent_find(begin(starts), end(starts),
                                 [&](edge e1, edge e2) {
                                     return e1.x <= point && point < e2.x;
                                 });
    // not found: point lies before the first edge (or in the last,
    // implicitly ended range), so this series contributes nothing
    return it == end(starts) ? 0.0 : it->value;
}
The function value at a point is then the sum of the function values of all source series.
If you really need a list in the end, you can join and sort all edge.x values from all series, and create the list from that.
Unless performance is an issue :)
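For illustration, a minimal sketch of that summation (my names; series holds one vector<edge> per input list):
double combinedValueAt(const std::vector<std::vector<edge>>& series, float point)
{
    double sum = 0.0;
    for (const auto& s : series)
        sum += valueAt(s, point); // contribution of each input list at `point`
    return sum;
}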
If you can combine two of these structures, you can combine many.
First, encapsulate your std::vector in a class. Implement what you know as operator+= (and define operator+ in terms of it if you want). With that in place, you can combine as many as you like just by repeated addition. You could even use std::accumulate to combine a collection of them.
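A minimal sketch of that wrapper (my own, reusing DataType and the merge_intervals from the question's Edit 2 above):
#include <iterator>
#include <utility>
#include <vector>

class StepFunction {
    std::vector<DataType> points_; // sorted by intervalStart
public:
    StepFunction() = default;
    explicit StepFunction(std::vector<DataType> points)
        : points_(std::move(points)) {}
    StepFunction& operator+=(const StepFunction& other) {
        std::vector<DataType> combined;
        merge_intervals(points_.begin(), points_.end(),
                        other.points_.begin(), other.points_.end(),
                        std::back_inserter(combined));
        points_ = std::move(combined);
        return *this;
    }
    friend StepFunction operator+(StepFunction lhs, const StepFunction& rhs) {
        lhs += rhs;
        return lhs;
    }
};
With this in place, combining a whole collection is std::accumulate(funcs.begin(), funcs.end(), StepFunction{}) (which needs <numeric>).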

How to extract subvector from a vector?

I'm writing a recursive solution for finding the minimum element in a rotated sorted array. The input to the function is a const vector, and I have to take a subarray of it for the recursion.
int findMin(const vector<int> &A) {
    int l = 0, r = A.size() - 1;
    if (r < l) return A[0];
    if (r == l) return A[l];
    int m = (l + r) / 2;
    if ((m < r) && (A[m+1] < A[m]))
        return A[m+1];
    if ((m < r) && (A[m] < A[m-1]))
        return A[m];
    if (A[m] < A[r]) {
        const vector<int> &B(&A[0], &A[m-1]);
        return findMin(B);
    }
    const vector<int> &C(&A[m+1], &A[r]);
    return findMin(C);
}
The compiler error is about the sub-vectors B and C.
Vectors store and own their data; they are not views into it. A vector does not have a "subvector", as there are no other objects that own parts of the vector's data.
You can copy data from your vector to another vector, but calling that a "subvector" is misleading.
The easiest solution is to rewrite your function to work with an iterator pair (start and finish) instead of a container. You can keep your existing interface and have it call the two-iterator version, to maintain the API.
The harder solution is to write an array_view<T> class that stores two T* and behaves like a range with the interface you want, including an implicit conversion from vector. Replace your const vector<int>& B (and similarly C, as well as A) with a properly written array_view<int const>, and (assuming no other errors in your code) you are done.
I have written an array_view along these lines before (the original answer links to it); there is also one in the process of being added to the standard.
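For illustration, a minimal read-only sketch of such a view type (my own, not the linked implementation; the standardized descendant of such proposals is std::span in C++20):
#include <cstddef>
#include <vector>

template <class T>
class array_view {
    const T* first_;
    const T* last_;
public:
    // implicit conversion from vector, as described above
    array_view(const std::vector<T>& v)
        : first_(v.data()), last_(v.data() + v.size()) {}
    array_view(const T* first, const T* last) : first_(first), last_(last) {}
    std::size_t size() const { return static_cast<std::size_t>(last_ - first_); }
    const T& operator[](std::size_t i) const { return first_[i]; }
    const T* begin() const { return first_; }
    const T* end() const { return last_; }
    // the "subvector" operation: a view of [from, to), no copying involved
    array_view subview(std::size_t from, std::size_t to) const
    { return array_view(first_ + from, first_ + to); }
};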
const vector<int> &C(&A[m+1],&A[r]);
The error you get here is that you are trying to bind a reference, but you are using the syntax for initializing a vector.
The correct way to obtain a sub-vector is
const vector<int> C(&A[m+1],&A[r]);
note the missing &. This, however, makes a copy of the given range, which is undesirable.
As advised in the comments above, change your function parameters to be the vector plus a pair of indices, or (perhaps better) a pair of iterators.
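For illustration, a minimal sketch of the iterator-pair version (my own; iterative rather than recursive, and assuming a non-empty range of distinct elements):
int findMin(std::vector<int>::const_iterator first,
            std::vector<int>::const_iterator last) // half-open range [first, last)
{
    while (last - first > 1) {
        auto mid = first + (last - first - 1) / 2;
        if (*mid > *(last - 1))
            first = mid + 1; // the minimum lies to the right of mid
        else
            last = mid + 1;  // the minimum is at mid or to its left
    }
    return *first;
}
The original interface can then simply forward: return findMin(A.begin(), A.end());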
For this type of recursive function, using a helper function would be more appropriate, as illustrated below:
int findMinHelper(std::vector<int> const& rotatedSorted, int left, int right)
{
    if (right == left) return rotatedSorted[left];
    int mid = (right + left) / 2;
    if (mid < right && rotatedSorted[mid + 1] < rotatedSorted[mid])
        return rotatedSorted[mid + 1];
    if (mid > left && rotatedSorted[mid] < rotatedSorted[mid - 1])
        return rotatedSorted[mid];
    if (rotatedSorted[right] > rotatedSorted[mid])
        return findMinHelper(rotatedSorted, left, mid - 1);
    return findMinHelper(rotatedSorted, mid + 1, right);
}

int findMin(std::vector<int> const& rotatedSorted)
{
    // proper handling of the case when rotatedSorted.empty() is true
    // must be added here
    return findMinHelper(rotatedSorted, 0, rotatedSorted.size() - 1);
}
Then, you can call:
std::vector<int> input{10, 21, 4, 5, 8};
int minValue = findMin(input);
std::cout << minValue << std::endl;
which will print 4 as expected.