How to sum up elements of a C++ vector? - c++

What are the good ways of finding the sum of all the elements in a std::vector?
Suppose I have a std::vector<int> vector with a few elements in it. Now I want to find the sum of all the elements. What are the different ways of doing this?

Actually there are quite a few methods.
int sum_of_elems = 0;
C++03
Classic for loop:
for(std::vector<int>::iterator it = vector.begin(); it != vector.end(); ++it)
    sum_of_elems += *it;
Using a standard algorithm:
#include <numeric>
sum_of_elems = std::accumulate(vector.begin(), vector.end(), 0);
Important Note: The last argument's type is used not just for the initial value, but for the type of the result as well. If you put an int there, it will accumulate ints even if the vector has float. If you are summing floating-point numbers, change 0 to 0.0 or 0.0f (thanks to nneonneo). See also the C++11 solution below.
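For instance, a minimal sketch of the pitfall (hypothetical values):
#include <numeric>
#include <vector>

std::vector<float> fv{0.5f, 0.5f, 0.5f};
// int initial value: every partial sum is truncated back to int, result is 0
int wrong = std::accumulate(fv.begin(), fv.end(), 0);
// float initial value: accumulates in float, result is 1.5f
float right = std::accumulate(fv.begin(), fv.end(), 0.0f);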
C++11 and higher
Automatically keeping track of the vector type even in case of future changes:
#include <numeric>
sum_of_elems = std::accumulate(vector.begin(), vector.end(),
                               decltype(vector)::value_type(0));
Using std::for_each:
std::for_each(vector.begin(), vector.end(), [&] (int n) {
    sum_of_elems += n;
});
Using a range-based for loop (thanks to Roger Pate):
for (auto& n : vector)
    sum_of_elems += n;
C++17 and above
Using std::reduce, which also takes care of the result type, e.g. if you have std::vector<int>, you get int as the result. If you have std::vector<float>, you get float. Or if you have std::vector<std::string>, you get std::string (all strings concatenated). Interesting, isn't it? (One caveat: std::reduce assumes the operation is associative and commutative, so for a non-commutative operation such as string concatenation the combination order is not guaranteed; use std::accumulate if the order matters.)
auto result = std::reduce(v.begin(), v.end());
There are other overloads of this function which you can even run in parallel, in case you have a large collection and you want the result quickly.

The easiest way is to use std::accumulate of a vector<int> A:
#include <numeric>
cout << accumulate(A.begin(), A.end(), 0);

Prasoon has already offered up a host of different (and good) ways to do this, none of which need repeating here. I'd like to suggest an alternative approach for speed however.
If you're going to be doing this quite a bit, you may want to consider "sub-classing" your vector so that a sum of elements is maintained separately (not actually sub-classing vector which is iffy due to the lack of a virtual destructor - I'm talking more of a class that contains the sum and a vector within it, has-a rather than is-a, and provides the vector-like methods).
For an empty vector, the sum is set to zero. On every insertion to the vector, add the element being inserted to the sum. On every deletion, subtract it. Basically, anything that can change the underlying vector is intercepted to ensure the sum is kept consistent.
That way, you have a very efficient O(1) method for "calculating" the sum at any point in time (just return the sum currently calculated). Insertion and deletion will take slightly longer as you adjust the total and you should take this performance hit into consideration.
Vectors where the sum is needed more often than the vector is changed are the ones likely to benefit from this scheme, since the cost of calculating the sum is amortised over all accesses. Obviously, if you only need the sum every hour and the vector is changing three thousand times a second, it won't be suitable.
Something like this would suffice:
#include <cstddef>
#include <vector>

class UberVector {
public:
    int getSum() const { return sum; }
    void add(int val) {
        vec.push_back(val);
        sum += val;
    }
    void delindex(std::size_t idx) {
        if (idx < vec.size()) {
            sum -= vec[idx];
            vec.erase(vec.begin() + idx);
        }
    }
private:
    std::vector<int> vec;
    int sum = 0;
};
Obviously, that's a bare-bones sketch and you may want to have a little more functionality, but it shows the basic concept.

Why perform the summation forwards when you can do it backwards? Given:
std::vector<int> v; // vector to be summed
int sum_of_elements(0); // result of the summation
We can use subscripting, counting backwards:
for (int i(v.size()); i > 0; --i)
    sum_of_elements += v[i-1];
We can use range-checked "subscripting," counting backwards (just in case):
for (int i(v.size()); i > 0; --i)
    sum_of_elements += v.at(i-1);
We can use reverse iterators in a for loop:
for(std::vector<int>::const_reverse_iterator i(v.rbegin()); i != v.rend(); ++i)
    sum_of_elements += *i;
We can use forward iterators, iterating backwards, in a for loop (oooh, tricky!):
for(std::vector<int>::const_iterator i(v.end()); i != v.begin(); --i)
    sum_of_elements += *(i - 1);
We can use accumulate with reverse iterators:
sum_of_elements = std::accumulate(v.rbegin(), v.rend(), 0);
We can use for_each with a lambda expression using reverse iterators:
std::for_each(v.rbegin(), v.rend(), [&](int n) { sum_of_elements += n; });
So, as you can see, there are just as many ways to sum the vector backwards as there are to sum the vector forwards, and some of these are much more exciting and offer far greater opportunity for off-by-one errors.

#include<boost/range/numeric.hpp>
int sum = boost::accumulate(vector, 0);

One can also use std::valarray<T> like this
#include<iostream>
#include<vector>
#include<valarray>
int main()
{
    std::vector<int> seq{ 1,2,3,4,5,6,7,8,9,10 };
    std::valarray<int> seq_add{ seq.data(), seq.size() };
    std::cout << "sum = " << seq_add.sum() << "\n";
    return 0;
}
Some may not find this way efficient, since the size of the valarray needs to be as big as the size of the vector, and initializing the valarray also takes time.
In that case, don't use it, and take it as yet another way of summing up the sequence.

C++0x only:
vector<int> v; // and fill with data
int sum {}; // or = 0 ... :)
for (int n : v) sum += n;
This is similar to the BOOST_FOREACH mentioned elsewhere and has the same benefit of clarity in more complex situations, compared to stateful functors used with accumulate or for_each.
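For comparison, a minimal sketch of the stateful functor that the range-based loop (or a lambda) replaces:
#include <algorithm>
#include <iostream>
#include <vector>

// Carries the running total through std::for_each, which returns the functor.
struct Summer {
    int sum = 0;
    void operator()(int n) { sum += n; }
};

int main() {
    std::vector<int> v{1, 2, 3, 4};
    int total = std::for_each(v.begin(), v.end(), Summer()).sum;
    std::cout << total << '\n'; // 10
}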

I'm a Perl user, and a game we have is to find every different way to increment a variable... that's not really different here. The answer to how many ways there are to find the sum of the elements of a vector in C++ is probably infinite...
My 2 cents:
Using BOOST_FOREACH, to get rid of the ugly iterator syntax:
sum = 0;
BOOST_FOREACH(int & x, myvector){
    sum += x;
}
Iterating on indices (really easy to read):
int i, sum = 0;
for (i=0; i<myvector.size(); i++){
    sum += myvector[i];
}
This other one is destructive, accessing the vector like a stack:
while (!myvector.empty()){
    sum += myvector.back();
    myvector.pop_back();
}

#include<iostream>
#include<vector>
#include<numeric>
using namespace std;

int main() {
    vector<int> v = {2,7,6,10};
    cout << "Sum of all the elements is:" << endl;
    cout << accumulate(v.begin(), v.end(), 0);
}

Using inclusive_scan (C++17 and above):
The advantage is you can get sums of the first "N" elements in a vector. Below is the code. Explanation in comments.
To use inclusive_scan, you need to include the <numeric> header.
//INPUT VECTOR
std::vector<int> data{ 3, 1, 4, 1, 5, 9, 2, 6 };
//OUTPUT VECTOR WITH SUMS
//FIRST ELEMENT - 3
//SECOND ELEMENT - 3 + 1
//THIRD ELEMENT - 3 + 1 + 4
//FOURTH ELEMENT - 3 + 1 + 4 + 1
// ..
// ..
//LAST ELEMENT - 3 + 1 + 4 + 1 + 5 + 9 + 2 + 6
std::vector<int> sums(data.size());
//SUM ALL NUMBERS IN A GIVEN VECTOR.
std::inclusive_scan(data.begin(), data.end(),
                    sums.begin());
//SUM OF FIRST 5 ELEMENTS.
std::cout << "Sum of first 5 elements :: " << sums[4] << std::endl;
//SUM OF ALL ELEMENTS
std::cout << "Sum of all elements :: " << sums[data.size() - 1] << std::endl;
There is also an overload where the execution policy can be specified: sequential execution or parallel execution. You need to include the <execution> header.
//SUM ALL NUMBERS IN A GIVEN VECTOR.
std::inclusive_scan(std::execution::par, data.begin(), data.end(),
                    sums.begin());
Using reduce:
One more option which I did not notice in the other answers is using std::reduce, which was introduced in C++17.
You may notice that many compilers do not support it yet (GCC 10 and above should be fine), but eventually the support will come.
With std::reduce, the advantage comes when using the execution policies. Specifying an execution policy is optional. When the execution policy specified is std::execution::par, the algorithm may use hardware parallel processing capabilities. The gain is clearer when using large vectors.
Example:
//SAMPLE
std::vector<int> vec = {2,4,6,8,10,12,14,16,18};
//WITHOUT EXECUTION POLICY
int sum = std::reduce(vec.begin(),vec.end());
//TAKING THE ADVANTAGE OF EXECUTION POLICIES
int sum2 = std::reduce(std::execution::par,vec.begin(),vec.end());
std::cout << "Without execution policy " << sum << std::endl;
std::cout << "With execution policy " << sum2 << std::endl;
You need the <numeric> header for std::reduce, and <execution> for the execution policies.

std::accumulate can have overflow issues, so the best approach may be to do range-based accumulation into a variable of a bigger data type to avoid them.
long long sum = 0;
for (const auto &n : vector)
    sum += n;
And then downcast to the appropriate data type using static_cast<>.
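Equivalently, since std::accumulate takes its accumulator type from the initial value (see the note above), a minimal sketch is to pass a wider initial value:
#include <numeric>
#include <vector>

std::vector<int> vec = {1, 2, 3};
// 0LL makes the accumulator a long long, avoiding int overflow
long long sum = std::accumulate(vec.begin(), vec.end(), 0LL);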

Nobody seems to address the case of summing elements of a vector that can have NaN values in it, e.g. std::numeric_limits<double>::quiet_NaN().
I usually loop through the elements and bluntly check (the x == x test below works because NaN is the only value that compares unequal to itself).
vector<double> x;
//...
size_t n = x.size();
double sum = 0;
for (size_t i = 0; i < n; i++){
    sum += (x[i] == x[i] ? x[i] : 0);
}
It's not fancy at all, i.e. no iterators or any other tricks, but this is how I do it. Sometimes, if there are other things to do inside the loop and I want the code to be more readable, I write
double val = x[i];
sum += (val == val ? val : 0);
//...
inside the loop and re-use val if needed.
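A sketch of the same idea using std::accumulate and std::isnan from <cmath> (assuming C++11), in case you prefer the algorithm form:
#include <cmath>
#include <numeric>
#include <vector>

std::vector<double> x = { 1.0, std::nan(""), 2.5 };
// skip NaNs instead of letting them poison the sum
double sum = std::accumulate(x.begin(), x.end(), 0.0,
    [](double acc, double v) { return std::isnan(v) ? acc : acc + v; });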

It is easy. C++11 provides an easy way to sum up the elements of a vector.
int sum = 0;
vector<int> vec = {1,2,3,4,5 /* ... */};
for (auto i : vec)
    sum += i;
cout << " The sum is :: " << sum << endl;

Related

Find duplicate in unsorted array with best time complexity

I know there were similar questions, but none of such specificity.
Input: an n-element array of unsorted elements with values from 1 to (n-1);
one of the values is duplicated (e.g. n=5, tab[5] = {3,4,2,4,1}).
Task: find the duplicate with the best complexity.
I wrote an algorithm:
int tab[] = { 1,6,7,8,9,4,2,2,3,5 };
int arrSize = sizeof(tab)/sizeof(tab[0]);
for (int i = 0; i < arrSize; i++) {
    tab[tab[i] % arrSize] = tab[tab[i] % arrSize] + arrSize;
}
for (int i = 0; i < arrSize; i++) {
    if (tab[i] >= arrSize * 2) {
        std::cout << i;
        break;
    }
}
but I don't think it has the best possible complexity.
Do you know a better method/algorithm? I can use any C++ library, but I don't have any idea.
Is it possible to get better complexity than O(n)?
In terms of big-O notation, you cannot beat O(n) (same as your solution here). But you can have better constants and a simpler algorithm, by using the property that the sum of the elements 1,...,n-1 is well known.
int sum = 0;
for (int x : tab) {
    sum += x;
}
int duplicate = sum - (n*(n-1)/2);
The constants here will be significantly better, as each array index is accessed exactly once, which is much more cache friendly and efficient on modern architectures.
(Note, this solution ignores integer overflow, but it's easy to account for it by using twice as many bits in sum as there are in the array's elements.)
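For instance, a minimal sketch of that overflow-safe variant, accumulating into 64 bits:
long long sum = 0;
for (int x : tab) {
    sum += x;
}
int duplicate = (int)(sum - (long long)n * (n - 1) / 2);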
Adding the classic answer because it was requested. It is based on the idea that if you XOR a number with itself you get 0. So if you XOR all numbers from 1 to n-1 together with all numbers in the array, you will end up with the duplicate.
int duplicate = arr[0];
for (int i = 1; i < n; i++) {
    duplicate = duplicate ^ arr[i] ^ i;
}
Don't focus too much on asymptotic complexity. In practice the fastest algorithm is not necessarily the one with the lowest asymptotic complexity. That is because constants are not taken into account: O(huge_constant * N) == O(N) == O(tiny_constant * N).
You cannot inspect N values in less than O(N) time. Though you do not need a full pass through the array; you can stop once you find the duplicate:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vals{1,2,4,6,5,3,2};
    std::vector<bool> present(vals.size());
    for (const auto& e : vals) {
        if (present[e]) {
            std::cout << "duplicate is " << e << "\n";
            break;
        }
        present[e] = true;
    }
}
In the "lucky case" the duplicate is at index 2. In the worst case the whole vector has to be scanned. On average it is again O(N) time complexity. Further it uses O(N) additional memory while yours is using no additional memory. Again: Complexity alone cannot tell you which algorithm is faster (especially not for a fixed input size).
No matter how hard you try, you won't beat O(N), because no matter in what order you traverse the elements (and remember already found elements), the best and worst cases stay the same: either the duplicate is among the first elements you inspect or it's the last, and on average it will be O(N).

Picking 6 random unique numbers

I have a problem trying to get this to work. I am meant to be picking 6 unique numbers between 1 and 49. I have a function doing this, but I am struggling to check the array for duplicates and replace them.
srand(static_cast<unsigned int>(time(NULL))); // Seeds a random number
int picked[6];
int number, i, j;
const int MAX_NUMBERS = 6;
for (i = 0; i < MAX_NUMBERS; i++)
{
    number = numberGen();
    for (int j = 0; j < MAX_NUMBERS; j++)
    {
        if (picked[i] == picked[j])
        {
            picked[j] = numberGen();
        }
    }
}
My number generator just creates a random number between 1 and 49, which I think works OK. I have just started with C++ and any help would be great.
int numberGen()
{
    int number = rand();
    int target = (number % 49) + 1;
    return target;
}
C++17 sample
C++17 provides an algorithm for exactly this (go figure):
std::sample
template< class PopulationIterator, class SampleIterator,
          class Distance, class UniformRandomBitGenerator >
SampleIterator sample( PopulationIterator first, PopulationIterator last,
                       SampleIterator out, Distance n,
                       UniformRandomBitGenerator&& g );
(since C++17)
Selects n elements from the sequence [first; last) such that each
possible sample has equal probability of appearance, and writes those
selected elements into the output iterator out. Random numbers are
generated using the random number generator g. [...]
constexpr int min_value = 1;
constexpr int max_value = 49;
constexpr int picked_size = 6;
constexpr int size = max_value - min_value + 1;
// fill array with [min_value, max_value] sequence
std::array<int, size> numbers{};
std::iota(numbers.begin(), numbers.end(), min_value);
// select 6 random
std::array<int, picked_size> picked{};
std::sample(numbers.begin(), numbers.end(), picked.begin(), picked_size,
            std::mt19937{std::random_device{}()});
C++11 shuffle
If you can't use C++17 yet then the way to do this is to generate all the numbers in an array, shuffle the array and then pick the first 6 numbers in the array:
// fill array with [min_value, max_value] sequence
std::array<int, size> numbers{};
std::iota(numbers.begin(), numbers.end(), min_value);
// shuffle the array
std::random_device rd;
std::mt19937 e{rd()};
std::shuffle(numbers.begin(), numbers.end(), e);
// (optional) copy the picked ones:
std::array<int, picked_size> picked{};
std::copy(numbers.begin(), numbers.begin() + picked_size, picked.begin());
A side note: please use the new C++11 random library. And prefer std::array to bare C arrays. They don't decay to pointers and provide begin, end, size etc. methods.
Let's break this code down.
for (i = 0; i < MAX_NUMBERS; i++)
We're doing a for-loop with 6 iterations.
number = numberGen();
We're generating a new number, and storing it into the variable number. This variable isn't used anywhere else.
for (int j = 0; j < MAX_NUMBERS; j++)
We're looping through the array again...
if (picked[i] == picked[j])
Checking to see if the two values match (fyi, picked[n] == picked[n] will always match)
picked[j] = numberGen();
And assigning a new random number to the existing value if they do match.
A better approach here would be to eliminate a duplicate value if one exists, then assign it to your array. For example:
for (i = 0; i < MAX_NUMBERS; i++)
{
    bool isDuplicate;
    do
    {
        isDuplicate = false;  // reset on every attempt
        number = numberGen(); // Generate the number
        // Check for duplicates among the numbers picked so far
        for (int j = 0; j < i; j++)
        {
            if (number == picked[j])
            {
                isDuplicate = true;
                break; // Duplicate detected
            }
        }
    }
    while (isDuplicate); // equivalent to while(isDuplicate == true)
    picked[i] = number;
}
Here, we run a do-while loop. The first iteration of the loop will generate a random number, and checks to see if it's a duplicate already in the array. If it is, it re-runs the loop until a non-duplicate is found. Once the loop breaks, we have a valid, non-duplicate number available, and then we assign it to the array.
There are going to be better solutions available as you progress through your course.
Efficient approach: Limited Fisher–Yates shuffle
For drawing n numbers from a pool of m you need n calls to random for this approach (6 in your case) instead of the m-1 (48 in your case) used when simply shuffling the whole array or vector. So the approach shown below is much more efficient than simply shuffling the whole array, and it does not require any duplicate checking.
Random numbers can get really expensive, so I thought it might be a good idea never to generate more random numbers than necessary. Simply running rand() multiple times until a fitting number comes out does not seem like a good idea.
Repeated draw-and-check-for-duplicates gets especially expensive in the case that nearly all of the available numbers need to be drawn.
I wanted to do it statefully, so it doesn't matter how many of the 49 numbers you actually request.
The solution below does not do any duplicate checking and calls rand() exactly n times for n random numbers. A slight modification of your numberGen was therefore necessary. Albeit you really should use the <random> library functions instead of rand().
The code below draws all numbers, just to verify that everything works fine, but it's easy to see how you would draw only 6 numbers :-)
If you need repeated draws you can simply add a reset() member function that sets drawn = 0 again. The vector is in a shuffled state then, but that doesn't do any harm.
If you can't afford the range checking in std::vector::at() you can of course easily replace it by the index access operator[]. But I thought that for experimenting with the code at() is a better choice, and this way you get error checking for the case that too many numbers are drawn.
Usage:
Create a class instance of n_out_of_m using the constructor, which takes the number of available numbers as an argument.
Call draw() repeatedly to draw numbers.
If you call draw() more often than there are numbers available, std::vector::at() will throw an out_of_range exception; if you don't like that, you need to add a check for that case.
I hope someone likes this approach.
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>

size_t numberGen(size_t limit)
{
    size_t number = rand();
    size_t target = (number % limit) + 1;
    return target;
}

class n_out_of_m {
public:
    n_out_of_m(int m) { numbers.reserve(m); for (int i = 1; i <= m; ++i) numbers.push_back(i); }
    int draw();
private:
    std::vector<int> numbers;
    size_t drawn = 0;
};

int n_out_of_m::draw()
{
    size_t index = numberGen(numbers.size() - drawn) - 1;
    std::swap(numbers.at(index), numbers.at(numbers.size() - drawn - 1));
    drawn++;
    return numbers.at(numbers.size() - drawn);
}

int main(int argc, const char * argv[]) {
    n_out_of_m my_gen(49);
    for (int n = 0; n < 49; ++n)
        std::cout << n << "\t" << my_gen.draw() << "\n";
    return 0;
}

Sets and Vectors. Are sets fast in C++?

Please read the question here - http://www.spoj.com/problems/MRECAMAN/
The question was to compute Recamán's sequence, where a(0) = 0 and a(i) = a(i-1) - i if a(i-1) - i > 0 and has not appeared in the sequence before; otherwise a(i) = a(i-1) + i.
Now when I use vectors to store the sequence and use the find function, the program times out. But when I use an array and a set to see if the element exists, it gets accepted (very fast). Is using a set faster?
Here are the codes:
Vector implementation
vector<int> sequence;
sequence.push_back(0);
for (int i = 1; i <= 500000; i++)
{
    a = sequence[i - 1] - i;
    b = sequence[i - 1] + i;
    if (a > 0 && find(sequence.begin(), sequence.end(), a) == sequence.end())
        sequence.push_back(a);
    else
        sequence.push_back(b);
}
Set Implementation
int a[500001];
set<int> exists;
a[0] = 0;
for (int i = 1; i <= MAXN; ++i)
{
    if (a[i - 1] - i > 0 && exists.find(a[i - 1] - i) == exists.end()) a[i] = a[i - 1] - i;
    else a[i] = a[i - 1] + i;
    exists.insert(a[i]);
}
Lookup in an std::vector:
find(sequence.begin(), sequence.end(), a)==sequence.end()
is an O(n) operation (n being the number of elements in the vector).
Lookup in an std::set (which is a balanced binary search tree):
exists.find(a[i-1] - i) == exists.end()
is an O(log n) operation.
So yes, lookup in a set is (asymptotically) faster than a linear lookup in vector.
If you can sort the vector, the lookup is faster in most cases than in a set, because it is much more cache friendly.
There is only one valid answer to most "Is XY faster than UV in C++" questions:
Use a profiler.
While most algorithms (including container insertions, searches etc.) have a guaranteed complexity, these complexities can only tell you about the approximate behavior for large amounts of data. The performance for any given smaller set of data cannot be easily compared, and the optimizations that a compiler can apply cannot be reasonably guessed by humans. So use a profiler and see what is faster. If it matters at all. To see if performance matters in that special part of your program, use a profiler.
However, in your case it might be a safe bet that searching a set of ~250k elements is faster than searching an unsorted vector of that size. However, if you use the vector only for storing the inserted values and keep sequence[i-1] in a separate variable, you can keep the vector sorted and use an algorithm for sorted ranges like binary_search, which can be way faster than the set.
A sample implementation with a sorted vector:
const static size_t NMAX = 500000;
vector<int> values = {0};
values.reserve(NMAX);
int lastInserted = 0;
for (int i = 1; i <= NMAX; ++i) {
    auto a = lastInserted - i;
    auto b = lastInserted + i;
    auto iter = lower_bound(begin(values), end(values), a);
    //a is always less than the last inserted value, so iter can't be end(values)
    if (a > 0 && a < *iter) {
        lastInserted = a;
    }
    else {
        //b > a => lower_bound(b) >= lower_bound(a)
        iter = lower_bound(iter, end(values), b);
        lastInserted = b;
    }
    values.insert(iter, lastInserted);
}
I hope I did not introduce any bugs...
For the task at hand, set is faster than vector because it keeps its contents sorted and does a binary search to find a specified item, giving logarithmic complexity instead of linear complexity. When the set is small, that difference is also small, but when the set gets large the difference grows considerably. I think you can improve things a bit more than just that though.
First, I'd avoid the clumsy lookup to see if an item is already present by just attempting to insert an item, then see if that succeeded:
if (b > 0 && exists.insert(b).second)
    a[i] = b;
else {
    a[i] = c;
    exists.insert(c);
}
This avoids looking up the same item twice, once to see if it was already present, and again to insert the item. It only does a second lookup when the first one was already present, so we're going to insert some other value.
Second, and even more importantly, you can use std::unordered_set to improve the complexity from logarithmic to (expected) constant. Since unordered_set uses (mostly) the same interface as std::set, this substitution is easy to make (including the optimization above).
Here's some code to compare the three methods:
#include <iostream>
#include <string>
#include <set>
#include <unordered_set>
#include <vector>
#include <numeric>
#include <chrono>

static const int MAXN = 500000;

unsigned original() {
    static int a[MAXN+1];
    std::set<int> exists;
    a[0] = 0;
    for (int i = 1; i <= MAXN; ++i)
    {
        if (a[i - 1] - i > 0 && exists.find(a[i - 1] - i) == exists.end()) a[i] = a[i - 1] - i;
        else a[i] = a[i - 1] + i;
        exists.insert(a[i]);
    }
    return std::accumulate(std::begin(a), std::end(a), 0U);
}

template <class container>
unsigned reduced_lookup() {
    container exists;
    std::vector<int> a(MAXN + 1);
    a[0] = 0;
    for (int i = 1; i <= MAXN; ++i) {
        int b = a[i - 1] - i;
        int c = a[i - 1] + i;
        if (b > 0 && exists.insert(b).second)
            a[i] = b;
        else {
            a[i] = c;
            exists.insert(c);
        }
    }
    return std::accumulate(std::begin(a), std::end(a), 0U);
}

template <class F>
void timer(F f) {
    auto start = std::chrono::high_resolution_clock::now();
    std::cout << f() << "\t";
    auto stop = std::chrono::high_resolution_clock::now();
    std::cout << "Time: " << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count() << " ms\n";
}

int main() {
    timer(original);
    timer(reduced_lookup<std::set<int>>);
    timer(reduced_lookup<std::unordered_set<int>>);
}
Note how std::set and std::unordered_set provide similar enough interfaces that I've written the code as a single template that can use either type of container, then for timing just instantiated that for both set and unordered_set.
Anyway, here are some results from g++ (version 4.8.1, compiled with -O3):
212972756 Time: 137 ms
212972756 Time: 101 ms
212972756 Time: 63 ms
Changing the lookup strategy improves speed by about 30% [1], and using unordered_set with the improved lookup strategy better than doubles the speed compared to the original--not bad, especially when the result actually looks cleaner, at least to me. You might not agree that it's cleaner looking, but I think we can at least agree that I didn't write code that was a lot longer or more complex to get the speed improvement.
[1] Simplistic analysis indicates that it should be around 25%. Specifically, if we assume there are even odds of a given number being in the set already, then this eliminates half the lookups about half the time, or about 1/4th of the lookups.
The set is a huge speedup because it's faster to look up. (Btw, exists.count(a) == 0 is prettier than using find.)
That doesn't have anything to do with vector vs array though. Adding the set to the vector version should work just as fine.
It is a classic space-time tradeoff. When you use only the vector, your program uses minimal memory, but you have to search for existing numbers at every step, which is slow. When you use an additional index data structure (like a set in your case) you dramatically speed up your code, but your code now takes at least twice the memory. More about the tradeoff here.

Time-efficient way to count number of distinct numbers

get_number() returns an integer. I'm going to call it 30 times and count the number of distinct integers returned. My plan is to put these numbers into an std::array<int,30>, sort it and then use std::unique.
Is that a good solution? Is there a better one? This piece of code will be the bottleneck of my program.
I'm thinking there should be a hash-based solution, but maybe its overhead would be too much when I've only got 30 elements?
Edit: I changed unique to distinct. Example:
{1,1,1,1} => 1
{1,2,3,4} => 4
{1,3,3,1} => 2
I would use std::set<int> as it's simpler:
std::set<int> s;
for (int i = 0; i < 30; ++i) // loop 30 times
{
    s.insert(get_number());
}
std::cout << s.size() << std::endl; // count of distinct numbers
If you want to count how many times each distinct number was returned, I'd suggest a map:
std::map<int, int> s;
for (int i = 0; i < 30; i++)
{
    s[get_number()]++;
}
std::cout << s.size() << std::endl; // total count of distinct numbers returned
for (auto it : s)
{
    std::cout << it.first << " " << it.second << std::endl; // each number and its count
}
The simplest solution would be to use a std::map:
std::map<int, size_t> counters;
for (size_t i = 0; i != 30; ++i) {
    counters[getNumber()] += 1;
}
std::vector<int> uniques;
for (auto const& pair : counters) {
    if (pair.second == 1) { uniques.push_back(pair.first); }
}
// uniques now contains the items that only appeared once.
Using a std::map, std::set or the std::sort algorithm will give you O(n*log(n)) complexity. For a small to large number of elements it is perfectly correct. But you use a known integer range, and this opens the door to a lot of optimizations.
As you say (in a comment) that the range of your integers is known and short ([0..99]), I would recommend implementing a modified counting sort. See: http://en.wikipedia.org/wiki/Counting_sort
You can count the number of distinct items while doing the sort itself, removing the need for the std::unique call. The whole complexity would be O(n). Another advantage is that the memory needed is independent of the number of input items. If you had 30,000,000,000 integers to sort, it would not need a single supplementary byte to count the distinct items.
Even if the range of allowed integer values is large, say [0..10,000,000], the memory consumed would be quite low. Indeed, an optimized version could consume as little as 1 bit per allowed integer value. That is less than 2 MB of memory, or 1/1000th of a laptop's RAM.
Here is a short example program:
#include <cstdlib>
#include <algorithm>
#include <iostream>
#include <vector>

// A function returning an integer between [0..99]
int get_number()
{
    return rand() % 100;
}

int main(int argc, char* argv[])
{
    // reserve one bucket for each possible integer
    // and initialize to 0
    std::vector<int> cnt_buckets(100, 0);
    int nb_distincts = 0;
    // Get 30 numbers and count distincts
    for (int i = 0; i < 30; ++i)
    {
        int number = get_number();
        std::cout << number << std::endl;
        if (0 == cnt_buckets[number])
            ++nb_distincts;
        // We could optimize by doing this only the first time
        ++cnt_buckets[number];
    }
    std::cerr << "Total distinct numbers: " << nb_distincts << std::endl;
}
You can see it working:
$ ./main | sort | uniq | wc -l
Total distinct numbers: 26
26
The simplest way is just to use std::set.
std::set<int> s;
int uniqueCount = 0;
for (int i = 0; i < 30; ++i)
{
    int n = get_number();
    if (s.find(n) != s.end()) {
        continue; // already counted
    }
    s.insert(n);
    ++uniqueCount;
}
// now s contains the distinct numbers
// and uniqueCount contains how many distinct integers were returned
Using an array and sort seems good, but unique may be a bit overkill if you just need to count distinct values. The following function returns the number of distinct values in a sorted range.
template<typename ForwardIterator>
size_t distinct(ForwardIterator begin, ForwardIterator end) {
    if (begin == end) return 0;
    size_t count = 1;
    ForwardIterator prior = begin;
    while (++begin != end)
    {
        if (*prior != *begin)
            ++count;
        prior = begin;
    }
    return count;
}
In contrast to the set- or map-based approaches, this one does not need any heap allocation, and elements are stored contiguously in memory; therefore it should be much faster. Asymptotic time complexity is O(N log N), which is the same as when using an associative container. I bet that even your original solution of using std::sort followed by std::unique would be much faster than using std::set.
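For example, a hypothetical usage sketch (assuming the get_number() from the question and the distinct() template above):
#include <algorithm>
#include <array>
#include <iostream>

int main() {
    std::array<int, 30> nums;
    for (auto& n : nums) n = get_number(); // the 30 calls
    std::sort(nums.begin(), nums.end());   // distinct() needs a sorted range
    std::cout << distinct(nums.begin(), nums.end()) << '\n';
}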
Try a set, try an unordered set, try sort and unique, try something else that seems fun.
Then MEASURE each one. If you want the fastest implementation, there is no substitute for trying out real code and seeing what it really does.
Your particular platform and compiler and other particulars will surely matter, so test in an environment as close as possible to where it will be running in production.

Find the biggest 3 numbers in a vector

I'm trying to make a function to get the 3 biggest numbers in a vector. For example:
Numbers: 1 6 2 5 3 7 4
Result: 5 6 7
I figured I could sort them DESC, get the 3 numbers at the beginning, and after that re-sort them ASC, but that would be a waste of memory allocation and execution time. I know there is a simpler solution, but I can't figure it out. And another problem is, what if I have only two numbers...
BTW: I use Borland C++ 3.1 as my compiler (I know, very old, but that's what I'll use at the exam...)
Thanks guys.
Later edit: If anyone wants to know more about what I'm trying to accomplish, you can check the code:
#include<fstream.h>
#include<conio.h>

int v[1000], n;
ifstream f("bac.in");

void citire();
void afisare_a();
int ultima_cifra(int nr);
void sortare(int asc);

void main() {
    clrscr();
    citire();
    sortare(2);
    afisare_a();
    getch();
}

void citire() {
    f >> n;
    for (int i = 0; i < n; i++)
        f >> v[i];
    f.close();
}

void afisare_a() {
    for (int i = 0; i < n; i++)
        if (ultima_cifra(v[i]) == 5)
            cout << v[i] << " ";
}

int ultima_cifra(int nr) {
    return nr - 10 * (nr / 10);
}

void sortare(int asc) {
    int aux, s;
    if (asc == 1)
        do {
            s = 0;
            for (int i = 0; i < n-1; i++)
                if (v[i] > v[i+1]) {
                    aux = v[i];
                    v[i] = v[i+1];
                    v[i+1] = aux;
                    s = 1;
                }
        } while (s == 1);
    else
        do {
            s = 0;
            for (int i = 0; i < n-1; i++)
                if (v[i] < v[i+1]) {
                    aux = v[i];
                    v[i] = v[i+1];
                    v[i+1] = aux;
                    s = 1;
                }
        } while (s == 1);
}
Citire = Read
Afisare = Display
Ultima Cifra = Last digit of number
Sortare = Bubble Sort
If you were using a modern compiler, you could use std::nth_element to find the top three. As is, you'll have to scan through the array keeping track of the three largest elements seen so far at any given time, and when you get to the end, those will be your answer.
For three elements that's a trivial thing to manage. If you had to do the N largest (or smallest) elements when N might be considerably larger, then you'd almost certainly want to use Hoare's select algorithm, just like std::nth_element does.
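For reference, a minimal sketch of the std::nth_element approach (using the sample numbers from the question):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 6, 2, 5, 3, 7, 4};
    // Partition so the 3 largest elements occupy the last 3 positions
    // (in unspecified order among themselves).
    std::nth_element(v.begin(), v.end() - 3, v.end());
    for (auto it = v.end() - 3; it != v.end(); ++it)
        std::cout << *it << ' '; // prints 5 6 7 in some order
}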
You could do this without needing to sort at all, it's doable in O(n) time with linear search and 3 variables keeping your 3 largest numbers (or indexes of your largest numbers if this vector won't change).
Why not just step through it once and keep track of the 3 highest numbers encountered?
EDIT: The range of the input is important in how you want to keep track of the 3 highest numbers.
Use std::partial_sort to sort, in descending order, the first c elements that you care about. It runs in O(n log c) time, which is effectively linear for a fixed small number of desired elements.
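A minimal sketch of that (again with the question's sample numbers):
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 6, 2, 5, 3, 7, 4};
    // Sort only the first 3 positions, largest first; the rest stays unordered.
    std::partial_sort(v.begin(), v.begin() + 3, v.end(), std::greater<int>());
    std::cout << v[0] << ' ' << v[1] << ' ' << v[2] << '\n'; // 7 6 5
}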
If you can't use std::nth_element write your own selection function.
You can read about them here: http://en.wikipedia.org/wiki/Selection_algorithm#Selecting_k_smallest_or_largest_elements
Sort them normally and then iterate from the back using rbegin(), for as many as you wish to extract (no further than rend() of course).
sort will happen in place whether ASC or DESC by the way, so memory is not an issue since your container element is an int, thus has no encapsulated memory of its own to manage.
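For instance, a small sketch of that approach:
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 6, 2, 5, 3, 7, 4};
    std::sort(v.begin(), v.end());                           // ascending, in place
    std::size_t count = std::min<std::size_t>(3, v.size());  // handles fewer than 3 elements
    for (auto it = v.rbegin(); count--; ++it)                // walk back from the largest
        std::cout << *it << ' ';                             // 7 6 5
}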
Yes, sorting is good, especially for long or variable length lists.
Why are you sorting it twice, though? The second sort might actually be very inefficient (depends on the algorithm in use). A reverse would be quicker, but why even do that? If you want them in ascending order at the end, then sort them into ascending order first (and fetch the numbers from the end).
I think you have the choice between scanning the vector for the three largest elements or sorting it (either using sort in a vector or by copying it into an implicitly sorted container like a set).
If you can control the array filling, maybe you could add the numbers in order and then choose the first 3; otherwise you can use a binary tree to perform the search, or just use a linear search as birryree says...
Thanks to #nevets1219 for pointing out that the code below only deals with positive numbers.
I haven't tested this code enough, but it's a start:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> nums;
    nums.push_back(1);
    nums.push_back(6);
    nums.push_back(2);
    nums.push_back(5);
    nums.push_back(3);
    nums.push_back(7);
    nums.push_back(4);

    int first = 0;
    int second = 0;
    int third = 0;
    for (int i = 0; i < nums.size(); i++)
    {
        if (nums.at(i) > first)
        {
            third = second;
            second = first;
            first = nums.at(i);
        }
        else if (nums.at(i) > second)
        {
            third = second;
            second = nums.at(i);
        }
        else if (nums.at(i) > third)
        {
            third = nums.at(i);
        }
        std::cout << "1st: " << first << " 2nd: " << second << " 3rd: " << third << std::endl;
    }
    return 0;
}
The following solution finds the three largest numbers in O(n) and preserves their relative order:
std::vector<int>::iterator p = std::max_element(vec.begin(), vec.end());
int x = *p;
*p = std::numeric_limits<int>::min();
std::vector<int>::iterator q = std::max_element(vec.begin(), vec.end());
int y = *q;
*q = std::numeric_limits<int>::min();
int z = *std::max_element(vec.begin(), vec.end());
*q = y; // restore original value
*p = x; // restore original value
A general solution for the top N elements of a vector:
1. Create an array or vector topElements of length N for your top N elements.
2. Initialise each element of topElements to the value of the first element in your vector.
3. Select the next element in the vector, or finish if no elements are left.
4. If the selected element is greater than topElements[0], replace topElements[0] with the value of the element. Otherwise, go to 3.
5. Starting with i = 0, swap topElements[i] with topElements[i + 1] if topElements[i] is greater than topElements[i + 1].
6. While i is less than N - 1, increment i and go to 5.
7. Go to 3.
This should result in topElements containing your top N elements in reverse order of value - that is, the largest value is in topElements[N - 1].
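A minimal sketch of those steps in C++ (the function name topN is hypothetical, and a non-empty input is assumed):
#include <cstddef>
#include <utility>
#include <vector>

std::vector<int> topN(const std::vector<int>& input, std::size_t N) {
    std::vector<int> topElements(N, input.at(0));  // steps 1-2
    for (int value : input) {                      // step 3
        if (value <= topElements[0]) continue;     // step 4
        topElements[0] = value;
        for (std::size_t i = 0; i + 1 < N; ++i) {  // steps 5-6: bubble upwards
            if (topElements[i] > topElements[i + 1])
                std::swap(topElements[i], topElements[i + 1]);
            else
                break;
        }
    }
    return topElements; // largest value ends up in topElements[N - 1]
}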