Let us say I've got a collection of items and a score function on them:
struct Item { /* some data */ };
std::vector<Item> items;
double score(Item);
I'd like to find the item from that collection whose score is the lowest. An easy way to write this is:
const auto argmin = std::min_element(begin(items), end(items), [](Item a, Item b) {
    return score(a) < score(b);
});
But if score is expensive to compute, the fact that std::min_element actually calls it multiple times on some items may be worrying. And this is expected, because the compiler cannot know that score is a pure function.
How could I find the argmin, but with score being called only once per item? Memoization is one possibility; anything else?
My objective is to write a code snippet which is easy to read, in a dream world as obvious as calling std::min_element on the collection is.
As I commented above, if the vector is not too big, you can use std::transform to store all scores first, then apply std::min_element.
However, if you want to benefit from lazy evaluation and still want to use C++'s STL, there are some tricks to work it out.
The point is that std::accumulate can be regarded as a general reduce or fold operation (like foldl in Haskell). With C++17's structured bindings for std::tuple, we can write something like:
auto [min_ind, _, min_value] = std::accumulate(items.begin(), items.end(),
    std::make_tuple(-1LU, 0LU, std::numeric_limits<double>::max()),
    [](std::tuple<std::size_t, std::size_t, double> accu, const Item &s) {
        // up to this point: the index of the min, the current index,
        // and the minimal value so far
        auto [min_ind, cur_ind, prev_min] = accu;
        double r = score(s);
        if (r < prev_min) {
            return std::make_tuple(cur_ind, cur_ind + 1, r);
        } else {
            return std::make_tuple(min_ind, cur_ind + 1, prev_min);
        }
    });
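A small usage note: the structured binding above leaves the minimiser's index in min_ind (equal to -1LU, i.e. SIZE_MAX, when the collection is empty), so the item itself can be recovered with:
if (min_ind != -1LU) {
    const Item& best = items[min_ind];  // the lowest-scoring item
    // min_value holds its score
}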
Here's a function that does what you want--even going beyond the intuitive "call score exactly once per element" by realizing that there's nothing smaller than negative infinity!
const Item* smallest(const std::vector<Item>& items)
{
    double min_score = items.empty() ? NAN : INFINITY;
    const Item* min_item = items.empty() ? nullptr : &*begin(items);
    for (const auto& item : items) {
        double item_score = score(item);
        if (item_score < min_score) {
            min_score = item_score;
            min_item = &item;
            if (item_score == -INFINITY) {
                break;
            }
        }
    }
    return min_item;
}
As suggested by user @liliscent, one could:
generate a collection of precalculated scores,
find the minimum score from it,
and infer the position of the minimizing item from the position of the minimum score.
This is my reading of their suggestion:
template<class InputIt, class Scoring>
auto argmin(InputIt first, InputIt last, Scoring scoring)
{
    using score_type = std::result_of_t<Scoring(typename std::iterator_traits<InputIt>::value_type)>;
    std::vector<score_type> scores(std::distance(first, last));
    std::transform(first, last, begin(scores), scoring);
    const auto scoremin = std::min_element(begin(scores), end(scores));
    return first + std::distance(begin(scores), scoremin);
}
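Usage with the question's own items and score is then a one-liner; for an empty collection the helper returns end(items):
const auto it = argmin(begin(items), end(items), score);
if (it != end(items)) {
    const Item& best = *it;  // the item with the lowest score
}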
Related
For C++, what's the fastest way at run time (on multi-core processors), from an algorithm-design viewpoint, to search for numbers within a given range (e.g. between 100 and 1000) in an array (or slice or whatever faster data structure suits the purpose), returning at most 10 items? e.g. pseudocode in golang:
var listofnums []uint64
numcounter := 1
// splice of [1,2,3,4,5,31,32 .. 932536543]; this list has 1 billion numeric items.
// the listofnums are already sorted each time an item is added, but we do not
// know the lower_bound or upper_bound of the item list.
// I know I can use binary search to find where listofnums[i] is smallest too...
// I'm asking for suggestions.
for i := uint(0); i < uint(len(listofnums)); i++ {
    if listofnums[i] > 100 && listofnums[i] < 1000 {
        if listofnums[i] > 1000 || numcounter == 10 {
            return
        }
        fmt.Println("%d", listofnums[i])
        numcounter++
    }
}
Is this the fastest way? I saw bitmap structures in C++ but I'm not sure if they can be applied here.
I've come across this question, which is perfectly fine for veteran programmers to ask, but I have no idea why it was downvoted:
What is the fastest search method for array?
Can someone please not remove this question but let me rephrase it? Thanks in advance. I hope to find the most optimal way to return a range of numbers from a large array of numeric items.
If I understand your problem correctly, you need to find two positions in your array: the first position at or after which all numbers are greater than or equal to 100, and the position just past the last number that is less than or equal to 1000.
The functions std::lower_bound and std::upper_bound do binary searches designed to find such a range.
For arrays, in C++ we usually use a std::vector and denote the beginning and end of ranges using a pair of iterators.
So something like this may be what you need:
std::pair<std::vector<int>::iterator, std::vector<int>::iterator>
find_range(std::vector<int>& v, int min, int max)
{
    auto begin = std::lower_bound(std::begin(v), std::end(v), min);
    // start searching after the previously found value
    auto end = std::upper_bound(begin, std::end(v), max);
    return {begin, end};
}
You can iterate over that range like this:
auto range = find_range(v, 100, 1000);

for(auto i = range.first; i != range.second; ++i)
    std::cout << *i << '\n';
You can create a new vector from the range (slow) like this:
std::vector<int> selection{range.first, range.second};
My first attempt.
Features:
logN time complexity
creates an array slice, no copying of data
second binary search minimises the search space on the basis of the first
possible improvements:
if n is small, the second binary search would be a pessimisation. Better to simply count forward up to n times (a sketch of that variant follows the code below).
#include <vector>
#include <cstdint>
#include <algorithm>
#include <iterator>
#include <iostream>

template <class Iter> struct range
{
    range(Iter first, std::size_t size) : begin_(first), end_(first + size) {}
    auto begin() const { return begin_; }
    auto end() const { return end_; }
    Iter begin_, end_;
};

template<class Iter> range(Iter, std::size_t) -> range<Iter>;

auto find_first_n_between(std::vector<std::int64_t>& vec,
                          std::size_t n,
                          std::int64_t from, std::int64_t to)
{
    auto lower = std::lower_bound(begin(vec), end(vec), from);
    auto upper = std::upper_bound(lower, end(vec), to);
    auto size = std::min(n, std::size_t(std::distance(lower, upper)));
    return range(lower, size);
}

int main()
{
    std::vector<std::int64_t> vec { 1,2,3,4,5,6,7,8,15,17,18,19,20 };
    auto slice = find_first_n_between(vec, 5, 6, 15);
    std::copy(std::begin(slice), std::end(slice),
              std::ostream_iterator<std::int64_t>(std::cout, ", "));
}
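For reference, here is a sketch of the "count forward" variant mentioned in the possible improvements: for small n it replaces the second binary search with a walk of at most n elements (find_first_n_between_linear is a hypothetical name; range is the helper above):
// Walk forward from the lower bound, stopping after n elements,
// the end of the vector, or the first value above `to`.
auto find_first_n_between_linear(std::vector<std::int64_t>& vec,
                                 std::size_t n,
                                 std::int64_t from, std::int64_t to)
{
    auto lower = std::lower_bound(begin(vec), end(vec), from);
    std::size_t size = 0;
    for (auto it = lower; size < n && it != end(vec) && *it <= to; ++it)
        ++size;
    return range(lower, size);
}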
I'm trying to find a sensible algorithm to combine multiple lists/vectors/arrays as defined below.
Each element contains a float declaring the start of its range of validity and a constant that is used over this range. Where ranges from different lists overlap, their constants need to be added to produce one global list.
I've done an attempt at an illustration below to try and give a good idea of what I mean:
First List:
0.5---------------2------------3.2--------4
        a1               a2         a3

Second List:
      1----------2----------3---------------4.5
           b1         b2           b3

Desired Output:
0.5----1----------2----------3-3.2--------4--4.5
  a1     a1+b1       a2+b2   ^     a3+b3    b3
                           b3+a2
I can't think of a sensible way of going about this in the case of n lists; Just 2 is quite easy to brute force.
Any hints or ideas would be welcome. Each list is represented as a C++ std::vector (so feel free to use standard algorithms) and is sorted by start-of-range value.
Cheers!
Edit: Thanks for the advice, I've come up with a naive implementation; not sure why I couldn't get here on my own first. To my mind the obvious improvement would be to store an iterator into each vector, since they're already sorted, rather than re-traversing each vector for each point. Given that most vectors will contain fewer than 100 elements, but there may be many vectors, this may or may not be worthwhile. I'd have to profile to see.
Any thoughts on this?
#include <algorithm>
#include <iostream>
#include <vector>

struct DataType
{
    double intervalStart;
    int data;
    // More data here, the data is not just a single int, but that
    // works for our demonstration
};

int main(void)
{
    // The final "data" of each vector is meaningless as it refers to
    // the coming range which won't be used as this is only for
    // bounded ranges
    std::vector<std::vector<DataType>> input = {{{0.5, 1}, {2.0, 3}, {3.2, 3}, {4.0, 4}},
                                                {{1.0, 5}, {2.0, 6}, {3.0, 7}, {4.5, 8}},
                                                {{-34.7895, 15}, {-6.0, -2}, {1.867, 5}, {340, 7}}};

    // Setup output vector
    std::vector<DataType> output;
    std::size_t inputSize = 0;
    for (const auto& internalVec : input)
        inputSize += internalVec.size();
    output.reserve(inputSize);

    // Fill output vector
    for (const auto& internalVec : input)
        std::copy(internalVec.begin(), internalVec.end(), std::back_inserter(output));

    // Sort output vector by intervalStart points
    std::sort(output.begin(), output.end(),
              [](const DataType& data1, const DataType& data2)
              {
                  return data1.intervalStart < data2.intervalStart;
              });

    // Remove DataTypes with same intervalStart - each interval can only start once
    output.erase(std::unique(output.begin(), output.end(),
                 [](const DataType& dt1, const DataType& dt2)
                 {
                     return dt1.intervalStart == dt2.intervalStart;
                 }), output.end());

    // Output now contains all the right intersections, just not with the right data

    // Lambda to find the data value associated with an
    // intervalStart value in a vector
    auto FindDataValue = [&](const std::vector<DataType>& v, double startValue)
    {
        auto iter = std::find_if(v.begin(), v.end(), [startValue](const DataType& data)
        {
            return data.intervalStart > startValue;
        });
        if (iter == v.begin() || iter == v.end())
        {
            return 0;
        }
        return (iter-1)->data;
    };

    // For each interval in the output traverse the input and sum the
    // data constants
    for (auto& val : output)
    {
        int sectionData = 0;
        for (const auto& iv : input)
            sectionData += FindDataValue(iv, val.intervalStart);
        val.data = sectionData;
    }

    for (const auto& i : output)
        std::cout << "loc: " << i.intervalStart << " data: " << i.data << std::endl;

    return 0;
}
Edit2: @Stas's code is a very good way to approach this problem. I've just tested it on all the edge cases I could think of.
Here's my merge_intervals implementation in case anyone is interested. The only slight change I've had to make to the snippets Stas provided is:
for (auto& v : input)
    v.back().data = 0;
Before combining the vectors as suggested. Thanks!
template<class It1, class It2, class OutputIt>
OutputIt merge_intervals(It1 first1, It1 last1,
                         It2 first2, It2 last2,
                         OutputIt destBegin)
{
    const auto begin1 = first1;
    const auto begin2 = first2;

    auto CombineData = [](const DataType& d1, const DataType& d2)
    {
        return DataType{d1.intervalStart, (d1.data + d2.data)};
    };

    for (; first1 != last1; ++destBegin)
    {
        if (first2 == last2)
        {
            return std::copy(first1, last1, destBegin);
        }
        if (first1->intervalStart == first2->intervalStart)
        {
            *destBegin = CombineData(*first1, *first2);
            ++first1; ++first2;
        }
        else if (first1->intervalStart < first2->intervalStart)
        {
            if (first2 > begin2)
                *destBegin = CombineData(*first1, *(first2 - 1));
            else
                *destBegin = *first1;
            ++first1;
        }
        else
        {
            if (first1 > begin1)
                *destBegin = CombineData(*first2, *(first1 - 1));
            else
                *destBegin = *first2;
            ++first2;
        }
    }
    return std::copy(first2, last2, destBegin);
}
Unfortunately, your algorithm is inherently slow. It doesn't make sense to profile it or apply C++-specific tweaks; they won't help. It will never finish on even pretty small inputs, like merging 1000 lists of 10000 elements each.
Let's try to evaluate time complexity of your algo. For the sake of simplicity, let's merge only lists of the same length.
L - length of a list
N - number of lists to be merged
T = L * N - length of a whole concatenated list
Complexity of your algorithm steps:
create output vector - O(T)
sort output vector - O(T*log(T))
filter output vector - O(T)
fix data in output vector - O(T*T)
The last step defines the whole algorithm's complexity: O(T*T) = O(L^2*N^2). That is not acceptable for a practical application. To merge 1000 lists of 10000 elements each, the algorithm would have to run about 10^14 cycles.
Actually, the task is pretty complex, so do not try to solve it in one step. Divide and conquer!
Write an algorithm that merges two lists into one
Use it to merge a list of lists
Merging two lists into one
This is relatively easy to implement (but be careful with corner cases). The algorithm should have linear time complexity: O(2*L). Take a look at how std::merge is implemented. You just need to write your custom variant of std::merge, let's call it merge_intervals.
Applying a merge algorithm to a list of lists
This is a little bit tricky, but again, divide and conquer! The idea is to do recursive merge: split a list of lists on two halves and merge them.
template<class It, class Combine>
auto merge_n(It first, It last, Combine comb)
    -> typename std::remove_reference<decltype(*first)>::type
{
    if (first == last)
        throw std::invalid_argument("Empty range");

    auto count = std::distance(first, last);
    if (count == 1)
        return *first;

    auto it = first;
    std::advance(it, count / 2);

    auto left = merge_n(first, it, comb);
    auto right = merge_n(it, last, comb);
    return comb(left, right);
}
Usage:
auto combine = [](const std::vector<DataType>& a, const std::vector<DataType>& b)
{
    std::vector<DataType> result;
    merge_intervals(a.begin(), a.end(), b.begin(), b.end(),
                    std::back_inserter(result));
    return result;
};

auto output = merge_n(input.begin(), input.end(), combine);
The nice property of such a recursive approach is its time complexity: O(L*N*log(N)) for the whole algorithm. So, to merge 1000 lists of 10000 elements each, the algorithm runs about 10000 * 1000 * 9.966 ≈ 99,660,000 cycles. That is 1,000,000 times faster than the original algorithm.
Moreover, such an algorithm is inherently parallelizable. It is not a big deal to write a parallel version of merge_n and run it on a thread pool.
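For illustration, a sketch of one such parallel version using std::async instead of an explicit thread pool; it has the same divide-and-conquer shape as merge_n above, and the depth cut-off is an assumption added to bound the number of spawned threads:
#include <future>
#include <iterator>
#include <stdexcept>
#include <type_traits>

template<class It, class Combine>
auto parallel_merge_n(It first, It last, Combine comb, int depth = 3)
    -> typename std::remove_reference<decltype(*first)>::type
{
    if (first == last)
        throw std::invalid_argument("Empty range");

    auto count = std::distance(first, last);
    if (count == 1)
        return *first;

    auto it = first;
    std::advance(it, count / 2);

    if (depth <= 0)  // deep enough: fall back to the serial merge_n above
        return comb(merge_n(first, it, comb), merge_n(it, last, comb));

    // merge the left half on another thread while this one handles the right
    auto left = std::async(std::launch::async, [=] {
        return parallel_merge_n(first, it, comb, depth - 1);
    });
    auto right = parallel_merge_n(it, last, comb, depth - 1);
    return comb(left.get(), right);
}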
I know I'm a bit late to the party, but when I started writing this you didn't have a suitable answer yet, and my solution should have relatively good time complexity, so here you go:
I think the most straightforward way to approach this is to see each of your sorted lists as a stream of events: At a given time, the value (of that stream) changes to a new value:
template<typename T>
struct Point {
    using value_type = T;
    float time;
    T value;
};
You want to superimpose those streams into a single stream (i.e. having their values summed up at any given point). For that you take the earliest event from all streams, and apply its effect on the result stream. Therefore, you need to first "undo" the effect that the previous value from that stream made on the result stream, and then add the new value to the current value of the result stream.
To be able to do that, you need to remember for each stream the last value, the next value (and when the stream is empty):
std::vector<std::tuple<Value, StreamIterator, StreamIterator>> streams;
The first element of the tuple is the last effect of that stream onto the result stream, the second is an iterator pointing to the streams next event, and the last is the end iterator of that stream:
transform(from, to, inserter(streams, begin(streams)),
          [] (auto & stream) {
              return make_tuple(static_cast<Value>(0), begin(stream), end(stream));
          });
To be able to always get the earliest event of all the streams, it helps to keep the (information about the) streams in a (min) heap, where the top element is the stream with the next (earliest) event. That's the purpose of the following comparator:
auto heap_compare = [] (auto const & lhs, auto const & rhs) {
    bool less = (*get<1>(lhs)).time < (*get<1>(rhs)).time;
    return (not less);
};
Then, as long as there are still some events (i.e. some stream that is not empty), first (re)build the heap, take the top element and apply its next event to the result stream, and then remove that element from the stream. Finally, if the stream is now empty, remove it.
// The current value of the result stream.
Value current = 0;
while (streams.size() > 0) {
    // Reorder the stream information to get the one with the earliest next
    // value into top ...
    make_heap(begin(streams), end(streams), heap_compare);
    // .. and select it.
    auto & earliest = streams[0];
    // New value is the current one, minus the previous effect of the selected
    // stream, plus the new value from the selected stream
    current = current - get<0>(earliest) + (*get<1>(earliest)).value;
    // Store the new time point with the new value and the time of the used
    // time point from the selected stream
    *out++ = Point<Value>{(*get<1>(earliest)).time, current};
    // Update the effect of the selected stream
    get<0>(earliest) = (*get<1>(earliest)).value;
    // Advance selected stream to its next time point
    ++(get<1>(earliest));
    // Remove stream if empty
    if (get<1>(earliest) == get<2>(earliest)) {
        swap(streams[0], streams[streams.size() - 1u]);
        streams.pop_back();
    }
}
This will return a stream where there might be multiple points with the same time, but a different value. This occurs when there are multiple "events" at the same time. If you only want the last value, i.e. the value after all these events happened, then one needs to combine them:
merge_point_lists(begin(input), end(input), inserter(merged, begin(merged)));
// returns points with the same time, but with different values. remove these
// duplicates, by first making them REALLY equal, i.e. setting their values
// to the last value ...
for (auto write = begin(merged), read = begin(merged), stop = end(merged);
     write != stop;) {
    for (++read; (read != stop) and (read->time == write->time); ++read) {
        write->value = read->value;
    }
    for (auto const cached = (write++)->value; write != read; ++write) {
        write->value = cached;
    }
}
// ... and then removing them.
merged.erase(
    unique(begin(merged), end(merged),
           [](auto const & lhs, auto const & rhs) {
               return (lhs.time == rhs.time); }),
    end(merged));
Concerning the time complexity: this iterates over all "events", so it depends on the number of events e. The very first make_heap call has to build a complete heap; this has worst-case complexity 3*s, where s is the number of streams to merge. On subsequent calls, make_heap only has to correct the very first element, which has worst-case complexity log(s'). I write s' because the number of streams (that need to be considered) decreases towards zero. This gives

3s + (e - 1) * log(s')

as the total complexity. The worst case is when s' decreases slowly, which happens when the events are evenly distributed across the streams (i.e. all streams have the same number of events):

3s + (e - 1 - s) * log(s) + (sum of log(i) for i = 1 to s)
Do you really need a data structure as the result? I don't think so. Actually you're defining several functions that can be added. The examples you give are encoded using a 'start, value(, implicit end)' tuple. The basic building block is a function that looks up its value at a certain point:
// assuming an edge type along the lines of: struct edge { float x; double value; };
double valueAt(const std::vector<edge> &starts, float point) {
    auto it = std::adjacent_find(begin(starts), end(starts),
        [&](edge e1, edge e2) {
            return e1.x <= point && point < e2.x;
        });
    if (it == end(starts))  // point lies outside every range
        return 0.0;
    return it->value;
};
The function value for a point is the sum of the function values of all the series.
If you really need a list in the end, you can join and sort all edge.x values for all series, and create the list from that.
Unless performance is an issue :)
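A minimal sketch of that superposition, assuming the edge type above and simply summing valueAt over every series (combinedValueAt is a hypothetical name):
// The combined function value at `point` is just the sum of the
// per-series lookups (valueAt as defined above).
double combinedValueAt(const std::vector<std::vector<edge>>& series, float point) {
    double sum = 0.0;
    for (const auto& starts : series)
        sum += valueAt(starts, point);
    return sum;
}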
If you can combine two of these structures, you can combine many.
First, encapsulate your std::vector into a class. Implement what you know as operator+= (and define operator+ in terms of this if you want). With that in place, you can combine as many as you like, just by repeated addition. You could even use std::accumulate to combine a collection of them.
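A minimal sketch of that wrapper, reusing the asker's DataType and merge_intervals from above (StepFunction is a hypothetical name):
#include <iterator>
#include <numeric>
#include <vector>

class StepFunction {
public:
    StepFunction() = default;
    explicit StepFunction(std::vector<DataType> points) : points_(std::move(points)) {}

    StepFunction& operator+=(const StepFunction& rhs) {
        std::vector<DataType> merged;
        merge_intervals(points_.begin(), points_.end(),
                        rhs.points_.begin(), rhs.points_.end(),
                        std::back_inserter(merged));
        points_ = std::move(merged);
        return *this;
    }

    friend StepFunction operator+(StepFunction lhs, const StepFunction& rhs) {
        lhs += rhs;  // operator+ defined in terms of operator+=
        return lhs;
    }

private:
    std::vector<DataType> points_;
};

// Combining a whole collection with std::accumulate:
// StepFunction total = std::accumulate(funcs.begin(), funcs.end(), StepFunction{});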
I have a big vector of items that are sorted based on one of their fields, e.g. a cost attribute, and I want to do a bit of processing on each of these items to find the maximum value of a different attribute... The constraint here is that we cannot use an item to calculate a maximum value if that item's cost exceeds some arbitrary price.
The single threaded for-loop looks like this:
auto maxValue = -MAX_FLT;

for(const auto& foo : foos) {
    // Break if the cost is too high.
    if(foo.cost() > 46290) {
        break;
    }
    maxValue = max(maxValue, foo.value());
}
I've been able to somewhat convert this into a parallel_for_each. (Disclaimer: I'm new to PPL.)
combinable<float> localMaxValue([]{ return -MAX_FLT; });

parallel_for_each(begin(foos), end(foos), [&](const auto& foo) {
    // Attempt to early out if the cost is too high.
    if(foo.getCost() > 46290) {
        return;
    }
    localMaxValue.local() = max(localMaxValue.local(), foo.getValue());
});

auto maxValue = localMaxValue.combine(
    [](const auto& first, const auto& second) {
        return max<float>(first, second);
    });
The return statement inside the parallel_for_each feels inefficient, since it still executes for every item, and in this case it's quite possible that the parallel_for_each ends up iterating over multiple portions of the vector whose cost is too high.
How can I take advantage of the fact that the vector is already sorted by cost?
I looked into using a cancellation token, but that approach seems incorrect, as it would cause all subtasks of the parallel_for to be cancelled, which means I may get the wrong maximum value.
Is there something like a cancellation token that could cancel only a specific subtask of the parallel_for, or is there a better tool than parallel_for in this case?
If the vector is sorted by cost, then you can iterate over only the items whose cost is lower than the cost limit.
If the cost limit is x:
find the first iterator at which the item's cost is equal to or larger than x; you can use std::lower_bound for this.
Then run your parallel_for_each from the beginning of the vector up to that iterator.
combinable<float> localMaxValue([]{ return -MAX_FLT; });

// I'm assuming foos is a std::vector.
int cost_limit = 46290;
auto it_end = std::lower_bound(foos.begin(), foos.end(), cost_limit,
    [](const auto& foo, int cost_limit)
    {
        return foo.getCost() < cost_limit;
    });

// Only run over the prefix whose cost is below the limit.
parallel_for_each(foos.begin(), it_end, [&](const auto& foo) {
    localMaxValue.local() = max(localMaxValue.local(), foo.getValue());
});

auto maxValue = localMaxValue.combine(
    [](const auto& first, const auto& second) {
        return max<float>(first, second);
    });
From this, we know the method for intersecting two sorted arrays. So how do we get the intersection of multiple sorted arrays? Based on the answers for two sorted arrays, we can apply the same idea to multiple arrays. Here is the code:
vector<int> intersectionVector(vector<vector<int>> vectors){
    int vec_num = vectors.size();
    vector<int> vec_pos(vec_num); // hold the current position for every vector
    vector<int> inter_vec;        // collection of intersection elements

    while (true){
        int max_val = INT_MIN;
        for (int index = 0; index < vec_num; ++index){
            // reached the end of one array: return the intersection collection
            if (vec_pos[index] == (int)vectors[index].size()){
                return inter_vec;
            }
            max_val = max(max_val, vectors[index].at(vec_pos[index]));
        }

        bool bsame = true;
        for (int index = 0; index < vec_num; ++index){
            // advance the position while it is less than the max value
            // (the bounds check keeps .at() from going past the end)
            while (vec_pos[index] < (int)vectors[index].size()
                   && vectors[index].at(vec_pos[index]) < max_val){
                vec_pos[index]++;
                bsame = false;
            }
        }

        // found the same element in all vectors
        if (bsame){
            inter_vec.push_back(vectors[0].at(vec_pos[0]));
            // advance the positions of all vectors
            for (int index = 0; index < vec_num; ++index){
                vec_pos[index]++;
            }
        }
    }
}
Is there any better approach to solve it?
Update1
From those two topics 1 and 2, it seems that a hash set is a more efficient way to do that.
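For the record, a sketch of that hash-based idea (intersectionWithHash is a hypothetical helper; it counts occurrences, so it assumes values are unique within each individual vector):
#include <unordered_map>
#include <vector>

std::vector<int> intersectionWithHash(const std::vector<std::vector<int>>& vectors) {
    // count how many vectors each value appears in
    std::unordered_map<int, std::size_t> counts;
    for (const auto& v : vectors)
        for (int x : v)
            ++counts[x];

    // a value seen in every vector is in the intersection
    std::vector<int> result;
    for (const auto& [value, count] : counts)
        if (count == vectors.size())
            result.push_back(value);
    return result;  // note: unordered; sort it if a sorted result is needed
}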
Update2
To improve the performance, maybe a min-heap can be used instead of vec_pos in my code above, with the variable max_val holding the current maximum value over all vectors. Then just compare the root value with max_val; if they are the same, this element can be put into the intersection list.
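A sketch of that min-heap idea (intersectionWithHeap is a hypothetical helper; like the hash sketch above, it assumes values don't repeat within a single vector):
#include <algorithm>
#include <climits>
#include <functional>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

vector<int> intersectionWithHeap(const vector<vector<int>>& vectors){
    using Cursor = pair<int, size_t>; // (current value, which vector)
    priority_queue<Cursor, vector<Cursor>, greater<Cursor>> heap; // min-heap
    vector<size_t> pos(vectors.size(), 0); // cursor position per vector

    int max_val = INT_MIN;
    for (size_t i = 0; i < vectors.size(); ++i){
        if (vectors[i].empty()) return {}; // empty input => empty intersection
        heap.push({vectors[i][0], i});
        max_val = max(max_val, vectors[i][0]);
    }

    vector<int> inter_vec;
    while (true){
        auto [val, idx] = heap.top();
        heap.pop();
        // the root is the minimum; if it equals the maximum,
        // every cursor currently points at the same value
        if (val == max_val){
            inter_vec.push_back(val);
        }
        if (++pos[idx] == vectors[idx].size()) return inter_vec;
        int next = vectors[idx][pos[idx]];
        max_val = max(max_val, next);
        heap.push({next, idx});
    }
}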
To get the intersection of two sorted ranges, std::set_intersection can be used:
std::vector<int> intersection (const std::vector<std::vector<int>> &vecs) {
    auto last_intersection = vecs[0];
    std::vector<int> curr_intersection;

    for (std::size_t i = 1; i < vecs.size(); ++i) {
        std::set_intersection(last_intersection.begin(), last_intersection.end(),
                              vecs[i].begin(), vecs[i].end(),
                              std::back_inserter(curr_intersection));
        std::swap(last_intersection, curr_intersection);
        curr_intersection.clear();
    }
    return last_intersection;
}
This looks a lot cleaner than your solution, which is too confusing to check for correctness.
It also has optimal complexity.
The standard library algorithm set_intersection may be implemented in any way that uses
at most 2·(N1+N2-1) comparisons, where N1 = std::distance(first1, last1) and N2 = std::distance(first2, last2).
first1 etc. are the iterators defining the input ranges. You can check out the actual implementation in the source code of your standard library if it is open source (like libstdc++ or libc++).
This assumes you know the number of containers you are intersecting:
template<class Output, class... Cs>
Output intersect( Output out, Cs const&... cs ) {
    using std::begin; using std::end;
    auto its = std::make_tuple( begin(cs)... );
    const auto ends = std::make_tuple( end(cs)... );
    while( !at_end( its, ends ) ) {
        if ( all_same( its ) ) {
            *out++ = *std::get<0>(its);
            advance_all( its );
        } else {
            advance_least( its );
        }
    }
    return out;
}
To complete it, simply implement:
bool at_end( std::tuple<Iterators...> const& its, std::tuple<Iterators...> const& ends );
bool all_same( std::tuple<Iterators...> const& its );
void advance_all( std::tuple<Iterators...>& its );
void advance_least( std::tuple<Iterators...>& its );
The first is easy (use the index-sequence trick, compare pairwise, and make sure you return true for empty tuples).
The second is similar. I think it is easier to compare std::get<i>(its) == std::get<i+1>(its) pairwise rather than comparing everything against element zero. A special case for empty tuples might be required.
advance_all is even easier.
The last is the tricky one. The requirements are that you advance at least one iterator, you never advance the iterator that dereferences to the greatest value, you advance each iterator at most once, and you advance as far as you efficiently can.
I suppose the easiest method is to find the greatest element, then advance every iterator whose element is less than that. A sketch of all four helpers follows.
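A C++17 sketch of those four helpers, assuming at least one container and a common, comparable value type (the empty-tuple corner cases mentioned above are left out):
#include <algorithm>
#include <tuple>
#include <utility>

template<class Tuple, std::size_t... I>
bool at_end_impl(Tuple const& its, Tuple const& ends, std::index_sequence<I...>) {
    // true as soon as any range is exhausted
    return ((std::get<I>(its) == std::get<I>(ends)) || ...);
}

template<class... Iterators>
bool at_end(std::tuple<Iterators...> const& its,
            std::tuple<Iterators...> const& ends) {
    return at_end_impl(its, ends, std::index_sequence_for<Iterators...>{});
}

template<class Tuple, std::size_t... I>
bool all_same_impl(Tuple const& its, std::index_sequence<I...>) {
    // pairwise: element I equals element I+1
    return ((*std::get<I>(its) == *std::get<I + 1>(its)) && ...);
}

template<class... Iterators>
bool all_same(std::tuple<Iterators...> const& its) {
    return all_same_impl(its, std::make_index_sequence<sizeof...(Iterators) - 1>{});
}

template<class... Iterators>
void advance_all(std::tuple<Iterators...>& its) {
    std::apply([](auto&... it) { (++it, ...); }, its);
}

template<class... Iterators>
void advance_least(std::tuple<Iterators...>& its) {
    // find the greatest current value, then advance every iterator whose
    // value is smaller (each at most once, and never the greatest one)
    auto const greatest = std::apply(
        [](auto const&... it) { return std::max({ *it... }); }, its);
    std::apply([&](auto&... it) {
        ((*it < greatest ? void(++it) : void()), ...);
    }, its);
}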
If you don't know the number of containers you are intersecting, the above can be refactored to use dynamic storage for the iteration. This will look similar to your own solution, except with the details factored out into sub functions.
I have a class (object), User. This user has 2 private attributes, "name" and "popularity". I store the objects in a vector (container).
From the container, I need to find the top 5 most popular users. How do I do that? (I have some ugly code, which I will post here; if you have a better approach, please let me know. Feel free to use another container if you think vector is not a good choice, but please use only map or multimap, list, vector or array, because those are the only ones I know how to use.) My current code is:
int top5 = 0, top4 = 0, top3 = 0, top2 = 0, top1 = 0;
vector<User>::iterator it;
for (it = user.begin(); it != user.end(); ++it)
{
    if( it->getPopularity() > top5){
        if(it->getPopularity() > top4){
            if(it->getPopularity() > top3){
                if(it->getPopularity() > top2){
                    if(it->getPopularity() > top1){
                        top1 = it->getPopularity();
                        continue;
                    } else {
                        top2 = it->getPopularity();
                        continue;
                    }
                } else {
                    top3 = it->getPopularity();
                    continue;
                }
            }
        } else {
            top4 = it->getPopularity();
            continue;
        }
    } else {
        top5 = it->getPopularity();
        continue;
    }
}
I know the code is ugly and might be prone to error, so if you have better code, please do share it with us (us == cpp newbies). Thanks
You can use the std::partial_sort algorithm to sort your vector so that the first five elements are sorted and the rest remains unsorted. Something like this (untested code):
bool compareByPopularity( const User& a, const User& b ) {
    return a.GetPopularity() > b.GetPopularity();
}

// Take users by value so we can reorder our own copy, and guard against
// num being larger than the number of users.
vector<User> getMostPopularUsers( vector<User> users, size_t num ) {
    num = min( num, users.size() );
    if ( users.size() <= num ) {
        sort( users.begin(), users.end(), compareByPopularity );
    } else {
        partial_sort( users.begin(), users.begin() + num, users.end(),
                      compareByPopularity );
    }
    return vector<User>( users.begin(), users.begin() + num );
}
Why don't you sort (std::sort or your own implementation of Quick Sort) the vector based on popularity and take the first 5 values?
Example:
bool UserCompare(User a, User b) { return a.getPopularity() > b.getPopularity(); }
...
std::sort(user.begin(), user.end(), UserCompare);
// Print first 5 users
If you just want the top 5 popular users, then use std::partial_sort().
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
using namespace std;

class User
{
private:
    string name_m;
    int popularity_m;

public:
    User(const string& name, int popularity) : name_m(name), popularity_m(popularity) { }

    friend ostream& operator<<(ostream& os, const User& user)
    {
        return os << "name:" << user.name_m << "|popularity:" << user.popularity_m << "\n";
    }

    int Popularity() const
    {
        return popularity_m;
    }
};

bool Compare(const User& lhs, const User& rhs)
{
    return lhs.Popularity() > rhs.Popularity();
}

int main()
{
    // some sample users to make the snippet self-contained
    vector<User> users = { {"a", 1}, {"b", 7}, {"c", 3}, {"d", 9}, {"e", 5}, {"f", 2} };

    // c++0x. ignore if you don't want it.
    auto compare = [](const User& lhs, const User& rhs) -> bool
        { return lhs.Popularity() > rhs.Popularity(); };

    partial_sort(users.begin(), users.begin() + 5, users.end(), Compare);
    copy(users.begin(), users.begin() + 5, ostream_iterator<User>(std::cout, "\n"));
}
First off, cache that it->getPopularity() so you don't have to keep repeating it.
Secondly (and this is much more important): Your algorithm is flawed. When you find a new top1 you have to push the old top1 down to the #2 slot before you save the new top1, but before you do that you have to push the old top2 down to the #3 slot, etc. And that is just for a new top1. You are going to have to do something similar for a new top2, a new top3, etc. The only one you can paste in without worrying about pushing things down the list is when you get a new top5. The correct algorithm is hairy. That said, the correct algorithm is much easier to implement when your topN is an array rather than a bunch of separate values.
Thirdly (and this is even more important than the second point): You shouldn't care about performance, at least not initially. The easy way to do this is to sort the entire list and pluck off the first five off the top. If this suboptimal but simple algorithm doesn't affect your performance, done. Don't bother with the ugly but fast first N algorithm unless performance mandates that you toss the simple solution out the window.
Finally (and this is the most important point of all): That fast first N algorithm is only fast when the number of elements in the list is much, much larger than five. The default sort algorithm is pretty dang fast. It has to be wasting a lot of time sorting the dozens / hundreds of items you don't care about before a pushdown first N algorithm becomes advantageous. In other words, that pushdown insertion sort algorithm may well be a case of premature disoptimization.
Sort your objects, with the standard library if that is allowed, and then simply select the first 5 elements. If your container gets too big you could probably use a std::list for the job.
Edit: @itsik you beat me by a sec :)
Do this pseudocode.
Declare top5 as an array of int[5] // or use a min-heap
Initialize top5 to five copies of -INF
For each element A
    if A > top5[4] // or A > root-of-top5
        Remove top5[4] from top5 // or pop the min element from the heap
        Insert A into top5, keeping it sorted // or insert A into the heap
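A C++ sketch of that pseudocode: std::priority_queue with std::greater keeps the smallest of the retained popularities on top, so the helper below keeps the 5 largest values seen so far (top5Popularities is a hypothetical name, and it assumes the question's getPopularity() is const):
#include <functional>
#include <queue>
#include <vector>

std::vector<int> top5Popularities(const std::vector<User>& users) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for (const User& u : users) {
        int p = u.getPopularity();
        if (heap.size() < 5) {
            heap.push(p);             // fewer than 5 kept: always insert
        } else if (p > heap.top()) {  // beats the weakest of the current top 5
            heap.pop();
            heap.push(p);
        }
    }
    std::vector<int> top;             // ascending: least popular first
    for (; !heap.empty(); heap.pop())
        top.push_back(heap.top());
    return top;
}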
Well, I advise you to improve your code by using an array or list or vector to store the top five, like this:
struct TopRecord
{
    int index;
    int pop;
} Top5[5];

for(int i = 0; i < 5; i++)
{
    Top5[i].index = -1;
    // Set pop to a value low enough
    Top5[i].pop = -1;
}

for(int i = 0; i < (int)users.size(); i++)
{
    int currentpop = users[i].getPopularity();
    int currentindex = i;

    // find the first slot whose popularity is beaten by the current user
    int j = 0;
    while(j < 5 && Top5[j].pop >= currentpop)
        j++;

    // insert there and shift the smaller entries one place down,
    // dropping the last one off the end of the array
    for(; j < 5; j++)
    {
        int temp = Top5[j].pop;
        Top5[j].pop = currentpop;
        currentpop = temp;

        temp = Top5[j].index;
        Top5[j].index = currentindex;
        currentindex = temp;
    }
}
You may also consider using Randomized Select if your aim is performance, since Randomized Select is good enough for order statistics and runs in linear time; you just need to run it 5 times. Or use the partial_sort solution provided above; either way counts, depending on your aim.
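For what it's worth, std::nth_element is the standard library's selection algorithm (a close relative of Randomized Select), and a single call suffices instead of five. A sketch, reusing compareByPopularity from the partial_sort answer above (top5ByNthElement is a hypothetical name):
#include <algorithm>
#include <vector>

std::vector<User> top5ByNthElement(std::vector<User> users) {
    if (users.size() > 5)
        // linear time on average: moves the 5 most popular users to the
        // front (unordered among themselves)
        std::nth_element(users.begin(), users.begin() + 4, users.end(),
                         compareByPopularity);
    users.resize(std::min<std::size_t>(5, users.size()));
    return users;
}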