Has anybody written a C++ STL-compliant algorithm that combines std::transform and std::accumulate into a single-pass algorithm supporting the unary, binary, and perhaps even (n-ary!) variants, say std::transformed_accumulate? I want this because I have found this pattern highly reusable, for example in linear algebra, say in (l1-)norm calculations. The l1-norm calculates the sum of the absolute values of the elements.
Uhm... My bet is that you can do that by embedding your transformation into the binary functor: transform the element and accumulate after the transformation.
struct times2accumulator {
int operator()( int oldvalue, int newvalue ) const {
return oldvalue + 2*newvalue;
}
};
int r = std::accumulate( v.begin(), v.end(), 0, times2accumulator() );
That functor would be equivalent to:
struct times2 {
int operator()( int x ) {
return 2*x;
}
};
std::vector<int> tmp; tmp.reserve( v.size() );
std::transform( v.begin(), v.end(), std::back_inserter(tmp), times2() );
int r = std::accumulate( tmp.begin(), tmp.end(), 0 );
Of course this could be made generic, just pass the transformation functor to a generic base functor:
template <typename Transform>
struct transform_accumulator_t {
Transform t;
transform_accumulator_t( Transform t ) : t(t) {}
int operator()( int oldvalue, int newvalue ) const {
return oldvalue + t(newvalue);
}
};
// syntactic sugar:
template <typename T>
transform_accumulator_t<T> transform_accumulator( T t ) {
return transform_accumulator_t<T>(t);
}
int r = std::accumulate(v.begin(), v.end(), 0, transform_accumulator(times2()));
And you could also generalize on the type in the container... or even create a more generic transform_accumulator that takes both an accumulator and a transformation functor and applies them in order. The actual implementation is left as an exercise for the reader.
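For what it's worth, here is a minimal sketch of that exercise (my own naming, not part of the answer above): a functor that takes both an accumulation functor and a transformation functor and applies them in order.
template <typename Accumulate, typename Transform>
struct transform_accumulator2_t {
    Accumulate acc;
    Transform t;
    transform_accumulator2_t( Accumulate acc, Transform t ) : acc(acc), t(t) {}
    template <typename T, typename U>
    T operator()( T oldvalue, U newvalue ) const {
        return acc( oldvalue, t(newvalue) );
    }
};
// syntactic sugar:
template <typename A, typename T>
transform_accumulator2_t<A, T> transform_accumulator2( A acc, T t ) {
    return transform_accumulator2_t<A, T>(acc, t);
}
// e.g. an l1-norm over a std::vector<double> v (needs <numeric>, <functional>, <cmath>):
// double l1 = std::accumulate(v.begin(), v.end(), 0.0,
//     transform_accumulator2(std::plus<double>(), static_cast<double(*)(double)>(std::abs)));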
Although it may not exactly fit the original intent, std::inner_product is basically your binary version. You pass it an initial value, two ranges, and two functors, and it applies them as:
T acc = initial_value;
while (begin1 != end1) {
    acc = binary_op1(acc, binary_op2(*begin1, *begin2));
    ++begin1;
    ++begin2;
}
return acc;
So, for your l1-norm you'd do something on this general order:
norm = std::inner_product(input1.begin(), input1.end(),
                          input2.begin(), 0,
                          std::plus<int>(), std::abs);
Only that doesn't quite work -- right now, it's trying to pass std::abs where you really need a binary function that combines the two inputs, but I'm not sure how the two inputs are really supposed to be combined.
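For what it's worth, one way to coax std::inner_product into computing the l1-norm of a single range is to pass the range against itself with a binary op that takes the absolute value of its first argument and ignores the second; a small sketch (not from the answer above):
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v = {1, -2, 3, -4, 5, -6};
    double norm = std::inner_product(
        v.begin(), v.end(), v.begin(), 0.0,
        std::plus<double>(),
        [](double x, double) { return std::abs(x); });  // second argument ignored
    // norm == 21
}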
std::partial_sum is fairly close to your unary version, except that along with accumulating a result, it (attempts to) write out each intermediate result, not just the final result. To just get the final result, you'd have to write (and pass an instance of) a kind of do-nothing iterator that just holds a single value:
template<class T, class Dist=size_t, class Ptr = T*, class Ref = T&>
class unique_it : public std::iterator<std::random_access_iterator_tag, T, Dist, Ptr, Ref> {
T &value;
public:
unique_it(T &v) : value(v) {}
T &operator*() { return value; }
unique_it &operator++() { return *this; }
unique_it &operator+(size_t) { return *this; }
unique_it &operator++(int) { return *this; }
};
template <class T>
unique_it<T> make_res(T &v) { return unique_it<T>(v); }
With this, your l1-norm calculation would look something like this:
int main(){
double result=0.0;
double inputs[] = {1, -2, 3, -4, 5, -6};
std::partial_sum(
inputs, inputs+6,
make_res(result),
[](double acc, double v) {return acc + std::abs(v);});
std::cout << result << "\t";
return 0;
}
If you want to use some parallelism, I made a quick version using OpenMP:
template <class T,
class InputIterator,
class MapFunction,
class ReductionFunction>
T MapReduce_n(InputIterator in,
unsigned int size,
T baseval,
MapFunction mapper,
ReductionFunction reducer)
{
T val = baseval;
#pragma omp parallel
{
T map_val = baseval;
#pragma omp for nowait
for (auto i = 0U; i < size; ++i)
{
map_val = reducer(map_val, mapper(*(in + i)));
}
#pragma omp critical
val = reducer(val, map_val);
}
return val;
}
It is fast, but there is certainly room for optimisation, especially around for (auto i = 0U; i < size; ++i), I think. (But I couldn't figure out how to make an iterator-only version with OpenMP; any help would be appreciated!)
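As one possible answer to that parenthetical, here is a hedged, untested sketch of an iterator-pair version: it computes the size once with std::distance and fetches each element with std::next. For random-access iterators std::next is O(1), so it is equivalent to the pointer arithmetic above; for weaker iterator categories it would degrade badly, so treat it as a starting point only.
#include <iterator>

template <class T,
          class InputIterator,
          class MapFunction,
          class ReductionFunction>
T MapReduce_range(InputIterator first,
                  InputIterator last,
                  T baseval,
                  MapFunction mapper,
                  ReductionFunction reducer)
{
    T val = baseval;
    const long size = static_cast<long>(std::distance(first, last));
    #pragma omp parallel
    {
        T map_val = baseval;
        #pragma omp for nowait
        for (long i = 0; i < size; ++i)
        {
            // std::next is constant time for random-access iterators only
            map_val = reducer(map_val, mapper(*std::next(first, i)));
        }
        #pragma omp critical
        val = reducer(val, map_val);
    }
    return val;
}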
On a quick test with a 1,000,000-element array, with the computation iterated 1000 times to get a mean value, I made some comparisons.
Version 1 :
for (auto i = 0U; i < size; ++i)
val += std::pow(in[i][0], 2) + std::pow(in[i][1], 2);
Scores when compiled with:
g++ : 30 seconds
g++ -O3 : 2.6 seconds
Version 2 :
This version is the most optimized for this computation, I think (it gives the best result).
#pragma omp parallel reduction( + : val )
{
double map_val = 0.0;
#pragma omp for
for (int i=0; i < size; ++i)
{
map_val += std::pow(in[i][0], 2) + std::pow(in[i][1], 2);
}
val += map_val;
}
g++ -O3 : 0.2 seconds (it's the best one)
Version 3
This version uses the MapReduce_n function template I showed earlier:
double val = MapReduce_n(in, size, 0.0, [] (fftw_complex val)
{
return std::pow(val[0], 2.0) + std::pow(val[1], 2.0);
}, std::plus<double>());
g++ -O3 : 0.4 seconds, so there is a slight overhead for not using the OpenMP reduction directly. However, the built-in reduction doesn't allow custom operators, so at some point you (sadly) have to trade speed for genericity.
I am surprised no one said how to do this with Boost.Range:
accumulate(v | transformed((int(*)(int))&std::abs), 0);
where v is a Single Pass Range (i.e., any STL container). The abs overload has to be specified, otherwise this would be as elegant as Haskell.
As of C++17 there is also std::transform_reduce, which also has the benefit of being parallelizable.
https://en.cppreference.com/w/cpp/algorithm/transform_reduce
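A minimal sketch of the l1-norm from the question with std::transform_reduce (C++17; pass std::execution::par from <execution> as the first argument if you want the parallel overload):
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v = {1, -2, 3, -4, 5, -6};
    double l1 = std::transform_reduce(v.begin(), v.end(), 0.0,
                                      std::plus<>(),                          // reduce step
                                      [](double x) { return std::abs(x); });  // transform step
    // l1 == 21
}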
Related
I have an algorithm that uses iterators, but there is a problem with transforming values when we need more than a single source value.
All transform iterators just take one argument and transform it. (See a similar question from the past.)
Code example:
#include <iostream>
#include <vector>

template<typename ForwardIt>
double some_algorithm(ForwardIt begin, ForwardIt end) {
double result = 0;
for (auto it = begin; it != end; ++it) {
double t = *it;
/*
do some calculations..
*/
result += t;
}
return result;
}
int main() {
{
std::vector<double> distances{ 1, 2, 3, 4 };
double t = some_algorithm(distances.begin(), distances.end());
std::cout << t << std::endl;
/* works great */
}
{
/* lets now work with vector of points.. */
std::vector<double> points{ 1, 2, 4, 7, 11 };
/* convert to distances.. */
std::vector<double> distances;
distances.resize(points.size() - 1);
for (size_t i = 0; i + 1 < points.size(); ++i)
distances[i] = points[i + 1] - points[i];
/* invoke algorithm */
double t = some_algorithm(distances.begin(), distances.end());
std::cout << t << std::endl;
}
}
Is there a way (especially using std) to create such an iterator wrapper to avoid explicitly generating the distances vector?
It would be fine to do something like this:
template<typename BaseIterator, typename TransformOperator>
struct GenericTransformIterator {
GenericTransformIterator(BaseIterator it, TransformOperator op) : it(it), op(op) {}
auto operator*() {
return op(it);
}
GenericTransformIterator& operator++() {
++it;
return *this;
}
BaseIterator it;
TransformOperator op;
friend bool operator!=(GenericTransformIterator a, GenericTransformIterator b) {
return a.it != b.it;
}
};
and use like:
{
/* lets now work with vector of points.. */
std::vector<double> points{ 1, 2, 4, 7, 11 };
/* use generic transform iterator.. */
/* invoke algorithm */
auto distance_op = [](auto it) {
auto next_it = it;
++next_it;
return *next_it - *it;
};
double t = some_algorithm(
    GenericTransformIterator(points.begin(), distance_op),
    GenericTransformIterator(points.end() - 1, distance_op));
std::cout << t << std::endl;
}
So the general idea is that the transform function is invoked not on the underlying object but on the iterator (or at least on some index value; then the lambda can capture the whole container and access it via the index).
I used to use Boost, which has lots of iterator-wrapping classes.
But since C++20 and ranges, I'm curious whether there is a way to use something existing from std:: rather than writing my own wrappers.
With C++23, use std::views::pairwise.
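A hedged sketch of what that could look like (assuming a C++23 standard library; pairwise yields a tuple of two adjacent elements):
#include <ranges>
#include <tuple>
#include <vector>

int main() {
    std::vector<double> points{1, 2, 4, 7, 11};
    auto distances = points
        | std::views::pairwise
        | std::views::transform([](auto p) { return std::get<1>(p) - std::get<0>(p); });
    // distances is the lazy sequence 1, 2, 3, 4
}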
In the meantime, you can use iota_view. Here's a solution which will work with any bidirectional iterators (e.g. points could be a std::list):
auto distances =
std::views::iota(points.cbegin(), std::prev(points.cend()))
| std::views::transform([](auto const &it) { return *std::next(it) - *it; });
This can also be made to work with any forward iterators. Example:
std::forward_list<double> points{1, 2, 4, 7, 11};
auto distances =
std::views::iota(points.cbegin())
| std::views::take_while([end = points.cend()](auto const &it) { return std::next(it) != end; })
| std::views::transform([](auto const &it) { return *std::next(it) - *it; })
| std::views::common;
Note that both of these snippets have undefined behaviour if points is empty.
I'm not sure this addresses your problem (let me know if it doesn't and I'll remove the answer), but you may be able to achieve that with ranges (unfortunately, not with standard ranges yet, but Eric Niebler's range-v3).
The code below:
groups the points vector in pairs,
calculates the difference between the second and the first element of each pair, and then
sums all those differences up.
auto t{ accumulate(
points | views::sliding(2) | views::transform([](const auto& v) { return v[1] - v[0]; }),
0.0
)};
Sort Integers by The Number of 1 Bits
LeetCode: problem link
Example Testcase :
Example 1:
Input: arr = [0,1,2,3,4,5,6,7,8]
Output: [0,1,2,4,8,3,5,6,7]
Explanation: [0] is the only integer with 0 bits.
[1,2,4,8] all have 1 bit.
[3,5,6] have 2 bits.
[7] has 3 bits.
The sorted array by bits is [0,1,2,4,8,3,5,6,7]
Example 2:
Input: arr = [1024,512,256,128,64,32,16,8,4,2,1]
Output: [1,2,4,8,16,32,64,128,256,512,1024]
Explanation: All integers have 1 bit in the binary representation, so you should just sort them in ascending order.
My solution:
class Solution {
public:
unsigned int setBit(unsigned int n){
unsigned int count = 0;
while(n){
count += n & 1;
n >>= 1;
}
return count;
}
vector<int> sortByBits(vector<int>& arr) {
map<int,vector<int>>mp;
for(auto it:arr){
mp[setBit(it)].push_back(it);
}
for(auto it:mp){
vector<int>vec;
vec=it.second;
sort(vec.begin(),vec.end()); //This Sort Function of vector is not working
}
vector<int>ans;
for(auto it:mp){
for(auto ele:it.second){
ans.push_back(ele);
}
}
return ans;
}
};
Why is the sort function not working in my code?
[1024,512,256,128,64,32,16,8,4,2,1]
For the above test case the output is [1024,512,256,128,64,32,16,8,4,2,1] because the sort function is not working. The correct output is [1,2,4,8,16,32,64,128,256,512,1024].
Note: In the above example test case, every element has only one set bit (1).
As your iteration at //This sort function ... refers to a copy of the value inside the map, the sort function sorts not the vector inside the map but a copy of it, which does not affect the original vector<int> inside mp. Therefore, there is no visible effect. You should refer to the vector inside the map by reference, like this:
class Solution {
public:
unsigned int setBit(unsigned int n) {
unsigned int count = 0;
while (n) {
count += n & 1;
n >>= 1;
}
return count;
}
vector<int> sortByBits(vector<int>& arr) {
map<int, vector<int>>mp;
for (auto it : arr) {
mp[setBit(it)].push_back(it);
}
for (auto& it : mp) {
sort(it.second.begin(), it.second.end()); //Now the sort function works
}
vector<int>ans;
for (auto it : mp) {
for (auto ele : it.second) {
ans.push_back(ele);
}
}
return ans;
}
};
Although there are more design problems in your solution, this is a fix with minimal modification.
vector<int> vec is a copy of a copy of the one in the map, which is then discarded. Try:
for(auto& entry:mp){
vector<int>&vec=entry.second;
sort(vec.begin(),vec.end());
}
Your other for loops should also use references for efficiency but it won't affect the behaviour.
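For instance, the final collection loop could bind by reference like this (same behaviour, just avoiding copies of the vectors):
for (const auto& it : mp) {
    for (int ele : it.second) {
        ans.push_back(ele);
    }
}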
I assume the OP is just learning, so fiddling with various data structures etc. can carry some educational value. Still, only one of the comments pointed out that the starting approach to the problem is wrong, and the whole point of the exercise is to find a custom method of comparing the numbers: by number of bits first, then by value.
Provided std::sort is allowed (the OP uses it), I guess the whole solution conceptually comes down to something like this (but I haven't verified it against LeetCode):
template <typename T>
struct Comp
{
std::size_t countBits(T number) const
{
size_t count = 0;
while(number) {
count += number & 1;
number>>=1;
}
return count;
}
bool operator()(T lhs, T rhs) const
{
/*
auto lb{countBits(lhs)};
auto rb{countBits(rhs)};
return lb==rb ? lhs < rhs : lb < rb;
* The code above is the manual implementation of the line below
* that utilizes the standard library
*/
return std::tuple{countBits(lhs), lhs} < std::tuple{countBits(rhs), rhs};
}
};
class Solution {
public:
vector<int> sortByBits(vector<int>& arr) {
std::sort(begin(arr), end(arr), Comp<int>{});
return arr;
}
};
Probably it can be improved even further, but I'd take it as a starting point for analysis.
Here is a memory-efficient and fast solution. I don't know why you are using a map and an extra vector; we can solve this question efficiently without any extra memory. We just have to write a comparator function which sorts elements according to our own requirements. Please let me know in the comments if you require further help with the code (or if you find my code difficult to understand). I am using the __builtin_popcount() function, which returns the number of set bits in a number.
bool sortBits(const int a, const int b){ //Comparator function to sort elements according to number of set bits
int numOfBits1 = __builtin_popcount(a);
int numOfBits2 = __builtin_popcount(b);
if(numOfBits1 == numOfBits2){ //if number of set bits are same, then sorting the elements according to magnitude of element (greater/smaller element)
return a < b;
}
return (numOfBits1 < numOfBits2); //if number of set bits are not same, then sorting the elements according to number of set bits in element
}
class Solution {
public:
vector<int> sortByBits(vector<int>& arr) {
sort(arr.begin(),arr.end(), sortBits);
return arr;
}
};
The problem has already been evaluated and the fix already explained.
I want to give 2 additional/alternative solution proposals.
std::bitset has a count() member function: https://en.cppreference.com/w/cpp/utility/bitset/count
And in C++20 we have the std::popcount function directly: https://en.cppreference.com/w/cpp/numeric/popcount
(Elderly and grey-haired people like me will also find five additional, highly efficient solutions in the book "Hacker's Delight".)
Both variants lead to a one-statement solution using std::sort with a lambda.
Please see:
#include <algorithm>
#include <vector>
#include <iostream>
#include <bitset>
// Solution
class Solution {
public:
std::vector<int> sortByBits(std::vector<int>& arr) {
std::sort(arr.begin(), arr.end(), [](const unsigned int i1, const unsigned int i2)
{ size_t c1{ std::bitset<14>(i1).count() }, c2{ std::bitset<14>(i2).count() }; return c1 == c2 ? i1 < i2 : c1 < c2; });
//{ int c1=std::popcount(i1), c2=std::popcount(i2); return c1 == c2 ? i1 < i2 : c1 < c2; });
return arr;
}
};
// Test
int main() {
std::vector<std::vector<int>> testData{
{0,1,2,3,4,5,6,7,8},
{1024,512,256,128,64,32,16,8,4,2,1}
};
Solution s;
for (std::vector<int>& test : testData) {
for (const int i : s.sortByBits(test)) std::cout << i << ' ';
std::cout << '\n';
}
}
I've often seen it said that you can replace all handwritten/raw loops with STL algorithms. Just to improve my C++ knowledge, I've been trying just that.
To populate a std::vector with data, I use a for loop and the loop's index.
unsigned int buffer_size = (format.getBytesPerSecond() * playlen) / 1000;
// pcm data stored in a 'short type' vector
vector<short> pcm_data;
for (unsigned int i = 0; i < buffer_size; ++i)
{
pcm_data.push_back( static_cast<short>(amplitude * sin((2 * M_PI * i * frequency) / format.SampleRate)) );
}
The above code works fine; as you can see, I use the for loop's index 'i' for the algorithm to be correct.
How can someone replace that for loop with something from the standard?
The only functions I've seen that almost allow me to do it are std::transform and std::generate, but neither works because I require an incrementing index value for the code.
E.g.:
generate_n(begin(pcm_data), buffer_size, [] ()
{
return static_cast<short>(amplitude * sin((2 * M_PI * i * frequency) / format.SampleRate)); //what is i??
});
transform(begin(pcm_data), end(pcm_data), begin(pcm_data), [] (???)
{
return static_cast<short>(amplitude * sin((2 * M_PI * i * frequency) / format.SampleRate)); //what is i??
});
Or am I simply going too far into the idea of "no raw loops"?
The real solution here would be to define an appropriate
iterator, something like:
class PcmIter : public std::iterator<std::forward_iterator_tag, short>
{
int myIndex;
double myAmplitude;
double myFrequency;
short myValue;
void calculate()
{
myValue = myAmplitude * std::sin( 2 * M_PI * myIndex * myFrequency );
}
public:
PcmIter( int index, double amplitude = 0.0, double frequency = 0.0 )
: myIndex( index )
, myAmplitude( amplitude )
, myFrequency( frequency )
{
calculate();
}
bool operator==( PcmIter const& other ) const
{
return myIndex == other.myIndex;
}
bool operator!=( PcmIter const& other ) const
{
return myIndex != other.myIndex;
}
const short& operator*() const
{
return myValue;
}
PcmIter& operator++()
{
++ myIndex;
calculate();
return *this;
}
PcmIter operator++( int )
{
PcmIter results( *this );
operator++();
return results;
}
};
In practice, I suspect that you could get by with having
operator* return a value, which you calculate at that point,
and not having a myValue member.
To use:
std::vector<short> pcmData(
PcmIter( 0, amplitude, frequency),
PcmIter( buffer_size ) );
(The amplitude and the frequency are irrelevant for the end
iterator, since it will never be dereferenced.)
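Coming back to the remark above about computing in operator*, a minimal sketch of that variant (my wording, not the answer's): drop the myValue member and return by value, which also means the iterator no longer hands out a reference.
short operator*() const
{
    return static_cast<short>(
        myAmplitude * std::sin( 2 * M_PI * myIndex * myFrequency ) );
}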
Ideally, this would be a random_access_iterator, so that the
constructor to vector will calculate the number of elements, and
pre-allocate them. This involves implementing a lot more
functions, however.
If you're courageous, and have to do similar things a lot, you
could consider making the iterator a template, to be
instantiated over the function you're interested in.
And while I've not had a chance to play with them lately, if
you're using Boost, you might consider chaining
a transform_iterator and a counting_iterator. It's still
a bit wordy, but the people who did the iterators at Boost did
the best they could, given the somewhat broken design of STL
iterators.
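A hedged sketch of that chaining (amplitude, frequency and sample_rate are stand-ins for the OP's values; needs Boost.Iterator):
#include <boost/iterator/counting_iterator.hpp>
#include <boost/iterator/transform_iterator.hpp>
#include <cmath>
#include <vector>

int main()
{
    const double pi = 3.141592653589793;
    const double amplitude = 1000.0, frequency = 440.0, sample_rate = 44100.0;
    const unsigned buffer_size = 1024;

    // counting_iterator supplies the index, transform_iterator maps it to a sample.
    auto sample = [=](unsigned i) {
        return static_cast<short>(amplitude * std::sin((2 * pi * i * frequency) / sample_rate));
    };
    std::vector<short> pcm_data(
        boost::make_transform_iterator(boost::make_counting_iterator(0u), sample),
        boost::make_transform_iterator(boost::make_counting_iterator(buffer_size), sample));
}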
You can simply declare a variable in the scope enclosing your generate_n call and capture it in the lambda:
unsigned int i = 0;
generate_n(begin(pcm_data), buffer_size, [&] ()
{
return static_cast<short>(amplitude * sin((2 * M_PI * (i++) * frequency) / format.SampleRate));
});
I would recommend counting_iterator from the Boost library. A pair of counting iterators gives you a range of integers. Obviously, there is no underlying container; it provides the integers "lazily". The library provides the factory function make_counting_iterator for creating one.
back_insert_iterator (with the factory function back_inserter) from the Standard Library (header <iterator>) effectively calls the container's push_back member.
With these ingredients, you can use transform with the "index".
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
using namespace std;
#include <boost/iterator/counting_iterator.hpp>
int main(int argc, char* argv[])
{
// Create a pair of counting iterators
auto first = boost::make_counting_iterator(0);
auto last = boost::make_counting_iterator(10);
vector<int> vi;
// Construct a vector of a few even numbers, as an example.
transform(first, last, back_inserter(vi), [](int i){ return 2 * i; });
// Print the result for check
copy(vi.begin(), vi.end(), ostream_iterator<int>{cout, " "});
return 0;
}
The print-out:
0 2 4 6 8 10 12 14 16 18
Not necessarily better, but a solution with the STL:
struct generate_value {
short operator() () {return amplitude * sin((2 * M_PI * i++ * frequency) / format.SampleRate);}
private:
unsigned i = 0;
};
generate_n(back_inserter(pcm_data), buffer_size, generate_value{});
I see a couple of possibilities I haven't seen mentioned yet. One would start with an iterator for a range of numbers:
template <class T>
class xrange_t {
T start;
T stop;
public:
xrange_t(T start, T stop) : start(start), stop(stop) {}
class iterator : public std::iterator<std::forward_iterator_tag, T> {
T current;
public:
iterator(T t) : current(t) {}
T operator *() { return current; }
iterator &operator++() { ++current; return *this; }
bool operator!=(iterator const &other) const { return current != other.current; }
bool operator==(iterator const &other) const { return current == other.current; }
};
iterator begin() { return iterator(start); }
iterator end() { return iterator(stop); }
};
template <class T>
xrange_t<T> xrange(T start, T stop) {
return xrange_t<T>(start, stop);
}
Then you'd use this with a ranged-for loop to do the real work:
#include "xrange"
for (auto i : xrange(0, buffer_size))
pcm_data.push_back( static_cast<short>(amplitude * sin((2 * M_PI * i * frequency) / format.SampleRate)) );
Another possibility would be to carry out the job in a couple of steps:
std::vector<short> pcm_data(buffer_size);
std::iota(pcm_data.begin(), pcm_data.end(), 0);
std::transform(pcm_data.begin(), pcm_data.end(), pcm_data.begin(),
[](short i) {
return static_cast<short>(amplitude * sin((2 * M_PI * i * frequency) / format.SampleRate));
}
);
This starts by filling the array with the successive values of i (i.e., the inputs to the function) then transforms each of those inputs to the matching output value.
This has two potential shortcomings though:
If the value of i might exceed the value that can be stored in a short, it might truncate the input value during the initial storage phase. It's not clear whether your use of int for i reflects the possibility that it might have a larger magnitude, or just using int by default.
It traverses the result vector twice. If the vector is large (especially if it's too large to fit in cache) this could be substantially slower.
I'm using Visual Studio 2012 so C++11 is mostly OK...
Boost is also fine, but I would prefer to avoid other libraries, at least ones that are not widely used.
I want to create a forward-only iterator that returns an infinite sequence, in the most elegant way possible. For example, a sequence of all the natural numbers.
Basically I want the C++ equivalent of this F# code:
let nums =
seq { while true do
yield 1
yield 2
}
The above code basically creates an enumerator that returns [1;2;1;2...].
I know I could do this by writing a class, but there's got to be a shorter way with all the new lambdas and all...
Is this what you want:
#include <iostream>
#include <vector>
int main()
{
auto nums = []
{
static unsigned x = 2;
return ( x++ % 2 ) + 1;
};
std::vector< int > v{ nums(), nums(), nums(), nums(), nums() };
for( auto i : v )
{
std::cout << i;
}
return 0;
}
Or have I misunderstood the question?
The simplest thing, if you can depend on Boost, is to write something like this:
int i = 0;
auto gen = boost::make_generator_iterator([=]() mutable { return i++; });
C++14 version:
auto gen = boost::make_generator_iterator([i=0]() mutable { return i++; });
Documentation is here.
P.S.: I'm not sure if it will work without a result_type member, which a C++03 functor would need.
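In case it doesn't, here is a hedged sketch of a hand-written C++03-style generator with that result_type member (the counter name is mine; check the exact make_generator_iterator signature against your Boost version, as it expects an lvalue generator):
#include <boost/iterator/generator_iterator.hpp>

struct counter {
    typedef int result_type;   // what the old generator_iterator protocol looks for
    int i;
    counter() : i(0) {}
    int operator()() { return i++; }
};

int main() {
    counter c;
    auto gen = boost::make_generator_iterator(c);
    int first = *gen;   // first generated value
    ++gen;              // advances by invoking the generator again
    (void)first;
}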
I've written a library called Pipeline with which you can write such things easily:
auto infinite_seq = generate(1, [](int i) { return (i % 2) + 1; });
Now infinite_seq is a deferred range, which means it will generate the values and give them to you when you ask for them. If you ask for 10 values, it will generate exactly 10 values — this can be expressed as:
auto values = infinite_seq | take(10);
Or you can write this:
auto values = generate(1, [](int i) { return (i % 2) + 1; }) | take(10);
for(auto i : values)
//working with i
Have a look at the documentation of generate.
Standard C++ has no real iterator generators which help you avoid writing the class manually. You can take a look at my range library for such an iterator generator to get going. This code essentially allows you to write
for (auto i : range(1))
…
which generates the infinite sequence 1, 2, 3, …. Boost.Iterator contains tools for transforming one iterator output into another, related output. You could use that to repeatedly cycle over elements from a two-item container (containing the elements 1 and 2, in your case).
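A hedged sketch of that cycling idea with Boost.Iterator (my own example, not from the answer): a counting_iterator provides 0, 1, 2, … and a transform_iterator maps each index onto one of the two stored elements, yielding 1, 2, 1, 2, … lazily.
#include <boost/iterator/counting_iterator.hpp>
#include <boost/iterator/transform_iterator.hpp>
#include <array>
#include <iostream>

int main()
{
    std::array<int, 2> pattern = {{1, 2}};
    auto pick = [pattern](int i) { return pattern[i % 2]; };

    auto it = boost::make_transform_iterator(boost::make_counting_iterator(0), pick);
    for (int n = 0; n < 6; ++n, ++it)
        std::cout << *it << ' ';   // prints: 1 2 1 2 1 2
    std::cout << '\n';
}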
When you have a hammer in your hand, everything around looks like a nail. Lambdas and other C++11 features are sure cool, but you should choose valid tools for problems. In this case I see nothing simpler or more elegant than a short class with overloaded operators:
#include <cstdio>

class MyInfiniteIter
{
private:
unsigned int i;
public:
MyInfiniteIter()
{
i = 0;
}
int operator *() {
return i;
}
int operator++ () {
i++;
if (i == 10)
i = 0;
return i;
}
};
int main(int argc, char * argv[])
{
for (MyInfiniteIter i;; ++i)
{
printf("%d\n", *i);
}
}
Here C++14's index_sequence comes to the rescue:
#include <iostream>
#include <vector>
namespace std
{
template< int ...i> struct index_sequence{};
template< int N, int ...i>
struct make_seq_impl : make_seq_impl< N-1, N-1,i...> {};
template< int ...i>
struct make_seq_impl<0,i...>{ typedef index_sequence<i...> type; };
template< int N >
using make_index_sequence = typename make_seq_impl<N>::type;
} // namespace std
typedef std::vector<int> integer_list;
template< typename F, int ...i >
integer_list make_periodic_list_impl(F f, std::index_sequence<i...> )
{
// { 1 2 1 2 1 2... }
return { f(i) ... };
}
template< int N , typename F>
integer_list make_periodic_list(F f)
{
return make_periodic_list_impl(f, std::make_index_sequence<N>{} );
}
int main()
{
std::vector<int> v = make_periodic_list<20>([](int i){return 1 + (i&1);});
for( auto e : v ) std::cout << e << ' ';
}
I have an assignment to read a file and output the average test scores.
It is pretty simple but I don't like how the average is done.
average = (test1 + test2 + test3 + test4 + test5) / 5.0;
Is there a way to just have it divide by the number of test scores? I couldn't find anything like this in the book or from Google. Something like
average = (test + test + test + test) / ntests;
If you have the values in a vector or an array, just use std::accumulate from <numeric>:
std::vector<double> vec;
// ... fill vec with values (do not use 0; use 0.0)
double average = std::accumulate(vec.begin(), vec.end(), 0.0) / vec.size();
Step 1. Via iteration (if you want to get it done) or recursion (if you want to be brave), place all test scores into an array (if you want simplicity and speed) or a linked list (if you want flexibility, but slower).
Step 2. Iterate through the array/list until you reach the end, adding the contents of each cell/node as you go. Also keep a count of which cell/node you are currently at.
Step 3. Take the sum from the first variable and divide it by the second variable that kept track of where you were. This will yield the mean.
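A minimal sketch of those three steps (array version, with placeholder scores):
#include <iostream>

int main() {
    double scores[] = {90.0, 85.5, 77.0, 92.5, 68.0};   // step 1: scores in an array
    const int ntests = sizeof(scores) / sizeof(scores[0]);

    double sum = 0.0;
    int count = 0;
    for (int i = 0; i < ntests; ++i) {                  // step 2: walk the array,
        sum += scores[i];                               //         summing as we go
        ++count;                                        //         and counting
    }

    double average = sum / count;                       // step 3: divide sum by count
    std::cout << "average = " << average << '\n';
}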
I wonder why no one mentioned boost::accumulators. It is not the shortest of the already-posted solutions, but it can be extended more easily to more general statistical values, like the standard deviation or higher moments.
#include <iostream>
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/stats.hpp>
#include <boost/accumulators/statistics/mean.hpp>
#include <algorithm>
#include <vector>
double mean(const std::vector<double>& values) {
namespace bo = boost::accumulators;
if (values.empty()) return 0.;
bo::accumulator_set<double, bo::stats<bo::tag::mean>> acc;
acc=std::for_each(values.begin(), values.end(), acc);
return bo::mean(acc);
}
int main()
{
std::vector<double> test = { 2.,6.,4.,7. };
std::cout << "Mean: " << mean(test) << std::endl;
std::cout << "Mean: " << mean({}) << std::endl;
return 0;
}
Here is my generalization for getting the average of the elements of a container by specifying a lambda function to obtain each value and then adding them up:
template <typename ForwardIterator, typename F>
double inline averageOf (ForwardIterator first, ForwardIterator last, F function) {
std::vector<typename std::result_of<F(typename ForwardIterator::value_type)>::type> values;
while (first != last) {
values.emplace_back (function(*first));
++first;
}
return static_cast<double>(std::accumulate (values.begin(), values.end(), 0)) / values.size();
}
The client code I tested it with goes like
const std::list<CharmedObserver*> devotees =
charmer->getState<CharmerStateBase>(CHARMER)->getDevotees();
const int averageHitPointsOfDevotees = averageOf (devotees.begin(), devotees.end(),
[](const CharmedObserver* x)->int {return x->getCharmedBeing()->getHitPoints();});
C++11 gives a nice solution:
constexpr auto countArguments() -> size_t
{
return 0;
}
template<class T1, class ... Ti>
constexpr auto countArguments(T1, Ti ...xi) -> size_t
{
return 1 + countArguments(xi...);
}
template<class T>
constexpr auto sumArguments(T x) -> double
{
return x;
}
template<class T1, class ... Ti>
constexpr auto sumArguments(T1 x1, Ti ...xi) -> double // decltype(x1 + sumArguments(xi...))
{
return x1 + sumArguments(xi...);
}
template<class...T>
constexpr auto average(T...xi) -> double
{
return sumArguments(xi...) / countArguments(xi...);
}
I was unable to write it so that it auto-deduces the return type.
When I tried, I got a weird result for average(-2).
https://wandbox.org/permlink/brssPjggn64lBGVq
You can also calculate the average using a variable number of arguments. The principle is that the unknown number of arguments is stored on the stack and we can read them from there through a pointer (note that walking the stack directly like this relies on the calling convention and is not portable; <cstdarg> is the portable mechanism).
double average(int n, ...) // where n - count of argument (number)
{
int *p = &n; // get pointer on list of number in stack
p++; // get first number
double *pp = (double *)p; // transformation of the pointer type
double sum = 0;
for ( int i = 0; i < n; pp++, i++ ) //looking all stack
sum+=(*pp); // summarize
return sum/n; //return average
}
And you can use this function like this:
double av1 = average( 5, 3.0, 1.5, 5.0, 1.0, 2.0 );
double av2 = average( 2, 3.0, 1.5 );
But the number of arguments must match n.