What is good coding idiom to decompose a complicated expression? - c++

My question regards coding style and the decomposition of complicated expressions in C++.
My program has a complicated hierarchy of classes composed of classes composed of classes, etc. Many of the program's objects hold pointers to, or indices into, other objects. There are a few namespaces. As I said, it's complicated—by which I mean that it is a pretty typical 10,000-line C++ program.
A problem of scale emerges. My code is starting to have lots of unreadable expressions like p->a(b.c(r).d()).q(s)->f(g(h.i())). (As I said, it's unreadable. I have trouble reading it, and I was the one who wrote it! You can just look at it to catch the mood.) I have tried rewriting such expressions as
{
const I ii = h.i();
const G &gg = g(ii);
const C &cc = b.c(r);
// ..
T *const qq = aa.q(s);
qq->f(gg);
}
All those locally scoped symbols arguably make the code more readable, but I admit that I do not care for the overall style. After all, some of the local symbols are const & references, while others represent copies of data. What if I accidentally omitted one of the &, thereby invoking an unnecessary copy of some large object? My compiler would hardly warn me.
Besides, the version with the local symbols is verbose.
Neither solution suits. Does there not exist a more idiomatic, less bug-prone way to decompose unreadable expressions in C++?
ILLUSTRATION
If a minimal, compilable illustration of the problem helps, then here is one.
#include <iostream>
class A {
private:
int m1;
int n1;
public:
int m() const { return m1; }
int n() const { return n1; }
A(const int m0, const int n0) : m1(m0), n1(n0) {}
};
class B {
private:
A a1;
public:
const A &a() const { return a1; }
B(const A &a0) : a1(a0) {}
};
B f(int k) {
return B(A(k, -k));
}
int main() {
const B &my_b = f(15);
{
// Here is a local scope in which to decompose an expression
// like my_b.a().m() via intermediate symbols.
const A &aa = my_b.a();
const int mm = aa.m();
const int nn = aa.n();
std::cout << "m == " << mm << ", n == " << nn << "\n";
}
return 0;
}
ADDITIONAL INFORMATION
I doubt that it is relevant to the question, but in case it is: My program defines several templates, but does not presently use any run-time polymorphism.
AN ACTUAL EXAMPLE
One of the commenters has kindly requested an actual example out of my code. Here it is:
bool Lgl::is_order_legal_for_movement(
const Mb::Mapboard &mb, const size_t ix, Chains *const p_chns1
) {
if (!mb.accepts_orders()) return false;
const Basic_mapboard &bmb = mb.basic();
if (!is_location_valid(bmb, ix, false)) return false;
const Space &x = mb.spc(ix);
if (!x.p_unit()) return true;
const size_t cx = x.i_cst_occu();
const Basic_space &bx = x.basic();
const Unit &u = x.unit();
const bool army = u.is_army();
const bool fleet = u.is_fleet();
const Order order = u.order();
const size_t is = u.source();
const Location lt = u.target_loc();
const size_t it = lt.i_spc;
const size_t ct = lt.i_cst;
// ...
{
const Space &s = mb.spc(is);
const Basic_space &bs = s.basic();
result = (
(army_s && (
bs.nbor_land(it) || count_chains_if(
Does_chain_include(ix), chns_s, false
)
)) || (fleet_s && (
// By rule, support for a fleet can name a coast,
// but need not.
ct == NO_CST
? is_nbor_sea_no_cst(bs, cs, it)
: is_nbor_sea (bs, cs, lt)
))
) && is_nbor_no_cst(army, fleet, bx, cx, it);
}
// ...
}

For your actual code example, I can see why you'd like to make it more readable. I'd probably recode it something like this:
if (army_s) {
result = bs.nbor_land(it) ||
count_chains_if(Does_chain_include(ix), chns_s, false);
} else if (fleet_s) {
// By rule, support for a fleet can name a coast,
// but need not.
if (ct == NO_CST)
result = is_nbor_sea_no_cst(bs, cs, it);
else
result = is_nbor_sea(bs, cs, lt);
}
result &= is_nbor_no_cst(army, fleet, bx, cx, it);
It executes essentially the same logic and is a little better for human interpretation, I think. (Two caveats: result should be initialized to false beforehand so the case where neither army_s nor fleet_s holds still comes out false, and result &= always evaluates its right-hand side, unlike the original &&.) I have also encountered compilers that generate better code with this style than with the very complex compound expression the original code contained.
Hope that helps.
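One further option, offered only as a sketch: declare every intermediate as const auto&. A const reference binds to lvalues and to temporaries alike (with lifetime extension for the latter), so you can apply one uniform rule instead of deciding per symbol whether it should be a copy or a reference, and you never introduce an accidental copy of a large object. Using the illustration's my_b object:
{
    const auto& aa = my_b.a();   // reference into my_b, no copy
    const auto& mm = aa.m();     // binds to the returned int temporary (lifetime-extended)
    const auto& nn = aa.n();
    std::cout << "m == " << mm << ", n == " << nn << "\n";
}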

Related

What does Obj::* mean? [duplicate]

I came across this strange code snippet which compiles fine:
class Car
{
public:
int speed;
};
int main()
{
int Car::*pSpeed = &Car::speed;
return 0;
}
Why does C++ have this pointer to a non-static data member of a class? What is the use of this strange pointer in real code?
It's a "pointer to member" - the following code illustrates its use:
#include <iostream>
using namespace std;
class Car
{
public:
int speed;
};
int main()
{
int Car::*pSpeed = &Car::speed;
Car c1;
c1.speed = 1; // direct access
cout << "speed is " << c1.speed << endl;
c1.*pSpeed = 2; // access via pointer to member
cout << "speed is " << c1.speed << endl;
return 0;
}
As to why you would want to do that, well it gives you another level of indirection that can solve some tricky problems. But to be honest, I've never had to use them in my own code.
Edit: I can't think off-hand of a convincing use for pointers to member data. Pointers to member functions can be used in pluggable architectures, but once again producing an example in a small space defeats me. The following is my best (untested) try - an Apply function that would do some pre- and post-processing before applying a user-selected member function to an object:
void Apply( SomeClass * c, void (SomeClass::*func)() ) {
// do hefty pre-call processing
(c->*func)(); // call user specified function
// do hefty post-call processing
}
The parentheses around c->*func are necessary because the ->* operator has lower precedence than the function call operator.
This is the simplest example I can think of that conveys the rare cases where this feature is pertinent:
#include <iostream>
class bowl {
public:
int apples;
int oranges;
};
int count_fruit(bowl * begin, bowl * end, int bowl::*fruit)
{
int count = 0;
for (bowl * iterator = begin; iterator != end; ++ iterator)
count += iterator->*fruit;
return count;
}
int main()
{
bowl bowls[2] = {
{ 1, 2 },
{ 3, 5 }
};
std::cout << "I have " << count_fruit(bowls, bowls + 2, & bowl::apples) << " apples\n";
std::cout << "I have " << count_fruit(bowls, bowls + 2, & bowl::oranges) << " oranges\n";
return 0;
}
The thing to note here is the pointer passed in to count_fruit. This saves you having to write separate count_apples and count_oranges functions.
Another application is intrusive lists. The element type can tell the list what its next/prev pointers are, so the list does not use hard-coded names but can still use existing pointers:
// say this is some existing structure. And we want to use
// a list. We can tell it that the next pointer
// is apple::next.
struct apple {
int data;
apple * next;
};
// simple example of a minimal intrusive list. Could specify the
// member pointer as template argument too, if we wanted:
// template<typename E, E *E::*next_ptr>
template<typename E>
struct List {
List(E *E::*next_ptr):head(0), next_ptr(next_ptr) { }
void add(E &e) {
// access its next pointer by the member pointer
e.*next_ptr = head;
head = &e;
}
E * head;
E *E::*next_ptr;
};
int main() {
List<apple> lst(&apple::next);
apple a;
lst.add(a);
}
Here's a real-world example I am working on right now, from signal processing / control systems:
Suppose you have some structure that represents the data you are collecting:
struct Sample {
time_t time;
double value1;
double value2;
double value3;
};
Now suppose that you stuff them into a vector:
std::vector<Sample> samples;
... fill the vector ...
Now suppose that you want to calculate some function (say the mean) of one of the variables over a range of samples, and you want to factor this mean calculation into a function. The pointer-to-member makes it easy:
double Mean(std::vector<Sample>::const_iterator begin,
std::vector<Sample>::const_iterator end,
double Sample::* var)
{
float mean = 0;
int samples = 0;
for(; begin != end; begin++) {
const Sample& s = *begin;
mean += s.*var;
samples++;
}
mean /= samples;
return mean;
}
...
double mean = Mean(samples.begin(), samples.end(), &Sample::value2);
Note Edited 2016/08/05 for a more concise template-function approach
And, of course, you can template it to compute a mean for any forward-iterator and any value type that supports addition with itself and division by size_t:
template<typename Titer, typename S>
S mean(Titer begin, const Titer& end, S std::iterator_traits<Titer>::value_type::* var) {
using T = typename std::iterator_traits<Titer>::value_type;
S sum = 0;
size_t samples = 0;
for( ; begin != end ; ++begin ) {
const T& s = *begin;
sum += s.*var;
samples++;
}
return sum / samples;
}
struct Sample {
double x;
};
std::vector<Sample> samples { {1.0}, {2.0}, {3.0} };
double m = mean(samples.begin(), samples.end(), &Sample::x);
EDIT - The above code has performance implications
You should note, as I soon discovered, that the code above has some serious performance implications. The summary is that if you're calculating a summary statistic on a time series, or calculating an FFT etc, then you should store the values for each variable contiguously in memory. Otherwise, iterating over the series will cause a cache miss for every value retrieved.
Consider the performance of this code:
struct Sample {
float w, x, y, z;
};
std::vector<Sample> series = ...;
float sum = 0;
int samples = 0;
for(auto it = series.begin(); it != series.end(); it++) {
sum += it->x;
samples++;
}
float mean = sum / samples;
On many architectures, a cache line holds only a handful of Sample instances. So on each iteration of the loop a whole Sample is pulled from memory into the cache, only 4 bytes of it are used and the rest is thrown away, and every few iterations you take another cache miss, another memory access, and so on.
Much better to do this:
struct Samples {
std::vector<float> w, x, y, z;
};
Samples series = ...;
float sum = 0;
float samples = 0;
for(auto it = series.x.begin(); it != series.x.end(); it++) {
sum += *it;
samples++;
}
float mean = sum / samples;
Now when the first x value is loaded from memory, the next three will also be loaded into the cache (supposing suitable alignment), meaning you don't need any values loaded for the next three iterations.
The above algorithm can be improved somewhat further through the use of SIMD instructions on eg SSE2 architectures. However, these work much better if the values are all contiguous in memory and you can use a single instruction to load four samples together (more in later SSE versions).
YMMV - design your data structures to suit your algorithm.
You can later access this member, on any instance:
int main()
{
int Car::*pSpeed = &Car::speed;
Car myCar;
Car yourCar;
int mySpeed = myCar.*pSpeed;
int yourSpeed = yourCar.*pSpeed;
assert(mySpeed > yourSpeed); // ;-)
return 0;
}
Note that you do need an instance to call it on, so it does not work like a delegate.
It is used rarely, I've needed it maybe once or twice in all my years.
Normally using an interface (i.e. a pure base class in C++) is the better design choice.
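If you do want delegate-like behaviour, the usual workaround (sketched here; Button and its click member are invented for illustration) is to package the instance together with the member-function pointer in a callable:
#include <functional>
#include <iostream>

struct Button {
    void click() { std::cout << "clicked\n"; }
};

int main() {
    Button b;
    void (Button::*pmf)() = &Button::click;
    // Capture both the object and the member-function pointer;
    // the resulting callable behaves like a delegate bound to b.
    std::function<void()> delegate = [&b, pmf] { (b.*pmf)(); };
    delegate();
    return 0;
}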
IBM has some more documentation on how to use this. Briefly, you're using the pointer as an offset into the class. You can't use these pointers apart from the class they refer to, so:
int Car::*pSpeed = &Car::speed;
Car mycar;
mycar.*pSpeed = 65;
It seems a little obscure, but one possible application is if you're trying to write code for deserializing generic data into many different object types, and your code needs to handle object types that it knows absolutely nothing about (for example, your code is in a library, and the objects into which you deserialize were created by a user of your library). The member pointers give you a generic, semi-legible way of referring to the individual data member offsets, without having to resort to typeless void * tricks the way you might for C structs.
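A stripped-down sketch of that idea (the Record struct and its fields are invented purely for illustration): a table of pointers to data members describes where each incoming value should land, so the parsing loop never names a field directly.
#include <iostream>
#include <sstream>
#include <vector>

struct Record {
    int id;
    int score;
};

int main() {
    // Layout description: column order in the input mapped onto data members.
    std::vector<int Record::*> layout = { &Record::id, &Record::score };

    std::istringstream input("7 42");
    Record r{};
    for (int Record::*field : layout)
        input >> r.*field;           // generic: no member names hard-coded here

    std::cout << r.id << " " << r.score << "\n";   // prints: 7 42
    return 0;
}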
It makes it possible to bind member variables and functions in a uniform manner. The following is an example with your Car class. More common usage would be binding std::pair::first and ::second when using STL algorithms and Boost on a map.
#include <list>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/bind.hpp>
class Car {
public:
Car(int s): speed(s) {}
void drive() {
std::cout << "Driving at " << speed << " km/h" << std::endl;
}
int speed;
};
int main() {
using namespace std;
using namespace boost::lambda;
list<Car> l;
l.push_back(Car(10));
l.push_back(Car(140));
l.push_back(Car(130));
l.push_back(Car(60));
// Speeding cars
list<Car> s;
// Binding a value to a member variable.
// Find all cars with speed over 60 km/h.
remove_copy_if(l.begin(), l.end(),
back_inserter(s),
bind(&Car::speed, _1) <= 60);
// Binding a value to a member function.
// Call a function on each car.
for_each(s.begin(), s.end(), bind(&Car::drive, _1));
return 0;
}
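For the std::pair::first/::second case mentioned above, the same idea also works with plain standard library tools; a sketch (the container and its contents are made up):
#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <vector>

int main() {
    std::map<std::string, int> speeds{ {"slow", 10}, {"fast", 140} };
    std::vector<int> values;
    // std::mem_fn turns the pointer to the data member `second`
    // into a callable usable with standard algorithms.
    std::transform(speeds.begin(), speeds.end(), std::back_inserter(values),
                   std::mem_fn(&std::pair<const std::string, int>::second));
    for (int v : values) std::cout << v << "\n";   // 140, then 10 (map iterates in key order)
    return 0;
}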
You can use an array of pointer to (homogeneous) member data to enable a dual, named-member (i.e. x.data) and array-subscript (i.e. x[idx]) interface.
#include <cassert>
#include <cstddef>
struct vector3 {
float x;
float y;
float z;
float& operator[](std::size_t idx) {
static float vector3::*component[3] = {
&vector3::x, &vector3::y, &vector3::z
};
return this->*component[idx];
}
};
int main()
{
vector3 v = { 0.0f, 1.0f, 2.0f };
assert(&v[0] == &v.x);
assert(&v[1] == &v.y);
assert(&v[2] == &v.z);
for (std::size_t i = 0; i < 3; ++i) {
v[i] += 1.0f;
}
assert(v.x == 1.0f);
assert(v.y == 2.0f);
assert(v.z == 3.0f);
return 0;
}
One way I've used it is if I have two implementations of how to do something in a class and I want to choose one at run-time without having to continually go through an if statement i.e.
class Algorithm
{
public:
Algorithm() : m_impFn( &Algorithm::implementationA ) {}
void frequentlyCalled()
{
// Avoid if ( using A ) else if ( using B ) type of thing
(this->*m_impFn)();
}
private:
void implementationA() { /*...*/ }
void implementationB() { /*...*/ }
typedef void ( Algorithm::*IMP_FN ) ();
IMP_FN m_impFn;
};
Obviously this is only practically useful if you feel the code is being hammered enough that the if statement is slowing things down, e.g. deep in the guts of some intensive algorithm somewhere. I still think it's more elegant than the if statement even in situations where it has no practical use, but that's just my opinion.
Pointers to members are not real pointers; a class is a logical construct and has no physical existence in memory. However, when you construct a pointer to a member of a class, it gives an offset into an object of the member's class at which that member can be found. This gives an important conclusion: since static members are not associated with any object, a pointer to member CANNOT point to a static member (data or function) whatsoever.
Consider the following:
#include <iostream>
using namespace std;
class x {
public:
int val;
x(int i) { val = i;}
int get_val() { return val; }
int d_val(int i) {return i+i; }
};
int main() {
int (x::* data) = &x::val; //pointer to data member
int (x::* func)(int) = &x::d_val; //pointer to function member
x ob1(1), ob2(2);
cout <<ob1.*data;
cout <<ob2.*data;
cout <<(ob1.*func)(ob1.*data);
cout <<(ob2.*func)(ob2.*data);
return 0;
}
Source: The Complete Reference C++ - Herbert Schildt 4th Edition
Here is an example where pointer to data members could be useful:
#include <iostream>
#include <list>
#include <string>
template <typename Container, typename T, typename DataPtr>
typename Container::value_type searchByDataMember (const Container& container, const T& t, DataPtr ptr) {
for (const typename Container::value_type& x : container) {
if (x->*ptr == t)
return x;
}
return typename Container::value_type{};
}
struct Object {
int ID, value;
std::string name;
Object (int i, int v, const std::string& n) : ID(i), value(v), name(n) {}
};
std::list<Object*> objects { new Object(5,6,"Sam"), new Object(11,7,"Mark"), new Object(9,12,"Rob"),
new Object(2,11,"Tom"), new Object(15,16,"John") };
int main() {
const Object* object = searchByDataMember (objects, 11, &Object::value);
std::cout << object->name << '\n'; // Tom
}
Suppose you have a structure. Inside of that structure are
* some sort of name
* two variables of the same type but with different meaning
struct foo {
std::string a;
std::string b;
};
Okay, now let's say you have a bunch of foos in a container:
// key: some sort of name, value: a foo instance
std::map<std::string, foo> container;
Okay, now suppose you load the data from separate sources, but the data is presented in the same fashion (eg, you need the same parsing method).
You could do something like this:
void readDataFromText(std::istream & input, std::map<std::string, foo> & container, std::string foo::*storage) {
std::string line, name, value;
// while lines are successfully retrieved
while (std::getline(input, line)) {
std::stringstream linestr(line);
if ( line.empty() ) {
continue;
}
// retrieve name and value
linestr >> name >> value;
// store value into correct storage, whichever one is correct
container[name].*storage = value;
}
}
std::map<std::string, foo> readValues() {
std::map<std::string, foo> foos;
std::ifstream a("input-a");
readDataFromText(a, foos, &foo::a);
std::ifstream b("input-b");
readDataFromText(b, foos, &foo::b);
return foos;
}
At this point, calling readValues() will return a container with a union of "input-a" and "input-b"; all keys will be present, and the foos will have either a or b or both.
Just to add some use cases for #anon's & #Oktalist's answers, here's some great reading material about pointer-to-member-function and pointer-to-member-data.
https://www.dre.vanderbilt.edu/~schmidt/PDF/C++-ptmf4.pdf
With pointers to members, we can write generic code like this:
template<typename T, typename U>
struct alpha{
T U::*p_some_member;
};
struct beta{
int foo;
};
int main()
{
beta b{};
alpha<int, beta> a{&beta::foo};
b.*(a.p_some_member) = 4;
return 0;
}
I love the * and & operators:
struct X
{
int a {0};
int *ptr {nullptr};
int &fa() { return a; }
int *&fptr() { return ptr; }
};
int main(void)
{
X x;
int X::*p1 = &X::a; // pointer-to-member 'int X::a'. Type of p1 = 'int X::*'
x.*p1 = 10;
int *X::*p2 = &X::ptr; // pointer-to-member-pointer 'int *X::ptr'. Type of p2 = 'int *X::*'
x.*p2 = nullptr;
X *xx = &x;  // must point at a real object before it is used below
xx->*p2 = nullptr;
int& (X::*p3)() = &X::fa; // pointer-to-member-function 'X::fa'. Type of p3 = 'int &(X::*)()'
(x.*p3)() = 20;
(xx->*p3)() = 30;
int *&(X::*p4)() = &X::fptr; // pointer-to-member-function 'X::fptr'. Type of p4 = 'int *&(X::*)()'
(x.*p4)() = nullptr;
(xx->*p4)() = nullptr;
}
Indeed, all of this works as long as the members are public; for static members you would use an ordinary pointer instead.
I think you'd only want to do this if the member data was pretty large (e.g., an object of another pretty hefty class), and you have some external routine which only works on references to objects of that class. You don't want to copy the member object, so this lets you pass it around.
A real-world example of a pointer-to-member could be a narrower aliasing constructor for std::shared_ptr:
template <typename T>
template <typename U>
shared_ptr<T>::shared_ptr(const shared_ptr<U>, T U::*member);
What that constructor would be good for:
Assume you have a struct foo:
struct foo {
int ival;
float fval;
};
If you have a shared_ptr to a foo, you could then retrieve shared_ptrs to its members ival or fval using that constructor:
auto foo_shared = std::make_shared<foo>();
auto ival_shared = std::shared_ptr<int>(foo_shared, &foo::ival);
This would be useful if you want to pass the pointer foo_shared->ival to some function which expects a shared_ptr.
https://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr
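For comparison (and this part is the standard library as it exists, not the proposal above), the existing aliasing constructor already covers this use case; it just takes a raw pointer to the member instead of a pointer-to-member:
auto foo_shared = std::make_shared<foo>();
// Standard aliasing constructor: ival_shared shares ownership of the
// whole foo, but its get() points at the ival member.
std::shared_ptr<int> ival_shared(foo_shared, &foo_shared->ival);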
Pointers to members are C++'s type-safe equivalent of C's offsetof(), which is defined in stddef.h: both tell you where a certain field is located within a class or struct. While offsetof() may also be used with certain simple enough classes in C++, it fails miserably for the general case, especially with virtual base classes. So pointers to members were added to the standard. They also provide easier syntax to reference an actual field:
struct C { int a; int b; } c;
int C::* intptr = &C::a; // or &C::b, depending on the field wanted
c.*intptr += 1;
is much easier than:
struct C { int a; int b; } c;
int intoffset = offsetof(struct C, a);
* (int *) (((char *) (void *) &c) + intoffset) += 1;
As to why one wants to use offsetof() (or pointer to members), there are good answers elsewhere on stackoverflow. One example is here: How does the C offsetof macro work?

What does it mean for a lambda to be static?

Say I have a binary search function which initializes and uses a lambda:
bool custom_binary_search(std::vector<int> const& search_me, int search_value)
{
auto comp = [](int const a, int const b)
{
return a < b;
};
return std::binary_search(search_me.begin(), search_me.end(), search_value, comp);
}
Without pointing out that this is completely redundant and just focusing on the lambda; is it expensive to be declaring and defining that lambda object every time? Should it be static? What would it mean for a lambda to be static?
The variable comp, with type <some anonymous lambda class>, can be made static pretty much like any other local variable, i.e. it is the same variable, pointing to the same memory address, every time this function is run.
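For the capture-less comparator from the question that looks like this (a sketch; the closure object is simply constructed once, on the first call):
#include <algorithm>
#include <vector>

bool custom_binary_search(std::vector<int> const& search_me, int search_value)
{
    // The closure type is stateless here, so making it static only means
    // the (empty) comparator object is created once instead of on every call.
    static const auto comp = [](int const a, int const b) { return a < b; };
    return std::binary_search(search_me.begin(), search_me.end(), search_value, comp);
}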
However, beware of using closures, which will lead to subtle bugs (pass by value) or runtime errors (pass by reference), since the closure objects are also initialized only once:
bool const custom_binary_search(std::vector<int> const& search_me, int search_value, int max)
{
static auto comp_only_initialized_the_first_time = [max](int const a, int const b)
{
return a < b && b < max;
};
auto max2 = max;
static auto comp_error_after_first_time = [&max2](int const a, int const b)
{
return a < b && b < max2;
};
bool incorrectAfterFirstCall = std::binary_search(std::begin(search_me), std::end(search_me), search_value, comp_only_initialized_the_first_time);
bool errorAfterFirstCall = std::binary_search(std::begin(search_me), std::end(search_me), search_value, comp_error_after_first_time);
return false; // does it really matter at this point ?
}
Note that the 'max' parameter is just there to introduce a variable that you might want to capture in your comparator, and the functionality this "custom_binary_search" implements is probably not very useful.
The following code compiles and runs OK in Visual Studio 2013:
bool const test(int & value)
{
static auto comp = [&](int const a, int const b)
{
return a < (b + value);
};
return comp(2,1);
}
And later:
int q = 1;
cout << test(q); // prints 0 //OK
q++;
cout << test(q); // prints 1 //OK
The compiler transforms any lambda expression into an unnamed class with an operator(), and this happens at compile time; the definition in the test function just constructs an instance of that class (for a capture-less lambda this is effectively equivalent to a pointer to a plain function) and assigns it to the comp variable.
Closures are generally the same, but they will work OK only while the things they capture are still in scope. Used beyond that, they will fail or generate a memory corruption bug.
Defining comp static would only improve the performance insignificantly or not at all.
Hope this helps:
Razvan.

Loop from one integer to another regardless of direction, with minimal overhead

Assume I'm given two unsigned integers:
size_t A, B;
They're loaded out with some random numbers, and A may be larger, equal, or smaller than B. I want to loop from A to B. However, the comparison and increment both depend on which is larger.
for (size_t i = A; i <= B; ++i) //A <= B
for (size_t i = A; i >= B; --i) //A >= B
The obvious brute force solution is to embed these in if statements:
if (A <= B)
{
for (size_t i = A; i <= B; ++i) ...
}
else
{
for (size_t i = A; i >= B; --i) ...
}
Note that I must loop from A to B, so I can't have two intermediate integers and toss A and B into the right slots then have the same comparison and increment. In the "A is larger" case I must decrement, and the opposite must increment.
I'm going to have potentially many nested loops that require this same setup, which means every if/else will have a function call, which I have to pass lots of variables through, or another if/else with another if/else etc.
Is there any tricky shortcut to avoid this without sacrificing much speed? Function pointers and stuff in a tight, often repeated loop sound extremely painful to me. Is there some crazy templates solution?
My mistake, originally misinterpreting the question.
To make an inclusive loop from A to B, you have a tricky situation. You need to loop one past B. So you work out that value prior to your loop. I've used the comma operator inside the for loop, but you can always put it outside for clarity.
int direction = (A < B) ? 1 : -1;
for( size_t i = A, iEnd = B+direction; i != iEnd; i += direction ) {
...
}
If you don't mind modifying A and B, you can do this instead (using A as the loop variable):
for( B += direction; A != B; A += direction ) {
}
And I had a play around... Don't know what the inlining rules are when it comes to function pointers, or whether this is any faster, but it's an exercise in any case. =)
#include <iostream>
inline const size_t up( size_t& val ) { return val++; }
inline const size_t down( size_t& val ) { return val--; }
typedef const size_t (*FnIncDec)( size_t& );
inline FnIncDec up_or_down( size_t A, size_t B )
{
return (A <= B) ? up : down;
}
int main( void )
{
size_t A = 4, B = 1;
FnIncDec next = up_or_down( A, B );
for( next(B); A != B; next(A) ) {
std::cout << A << std::endl;
}
return 0;
}
In response to this:
This won't work for case A = 0, B = UINT_MAX (and vice versa)
That is correct. The problem is that the initial values of i and iEnd become the same due to overflow. To handle that, you would instead use a do-while loop. That removes the initial test, which is redundant because you will always execute the loop body at least once... By removing that first (spuriously true) test, you get past the terminating condition the first time around.
size_t i = A;
size_t iEnd = B+direction;
do {
// ...
i += direction;
} while( i != iEnd );
size_t const delta = size_t(A < B? 1 : -1);
size_t i = A;
for( ;; )
{
// blah
if( i == B ) { break; }
i += delta;
}
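The same test-before-stepping idea can be wrapped in a small template helper so the call sites stay free of if/else (a sketch only; for_range is a name invented here):
#include <cstddef>
#include <iostream>

template <typename F>
void for_range(std::size_t a, std::size_t b, F f)
{
    const std::size_t step = (a <= b) ? 1 : std::size_t(-1);
    for (std::size_t i = a; ; i += step) {
        f(i);
        if (i == b) break;   // test before stepping: safe at 0 and at SIZE_MAX
    }
}

int main()
{
    for_range(3, 0, [](std::size_t i) { std::cout << i << ' '; });   // prints 3 2 1 0
    std::cout << '\n';
    return 0;
}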
What are you going to do with the iterated value?
If this is going to be some index in an array, you should use the relevant iterator or reverse_iterator class, and implement your algorithms around these. Your code will be more robust, and easier to maintain or evolve. Besides, a lot of tools in the standard library are built using these interfaces.
Actually, even if you don't, you may implement an iterator class which returns its own index.
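A minimal sketch of that suggestion, assuming the values index into a std::vector: forward traversal uses ordinary iterators, backward traversal uses reverse iterators, and both loops have exactly the same shape.
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> data{10, 20, 30, 40};

    for (auto it = data.begin(); it != data.end(); ++it)     // forwards
        std::cout << *it << ' ';
    std::cout << '\n';

    for (auto it = data.rbegin(); it != data.rend(); ++it)   // backwards, same ++/!= shape
        std::cout << *it << ' ';
    std::cout << '\n';
    return 0;
}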
You can also use a little bit of metaprogramming magic to define how your iterator will behave according to the order of A and B.
Before going further, please consider that this would only work on constant values of A and B.
#include <iostream>
#include <iterator>
using namespace std;
template <int A,int B>
struct ordered {
static const bool value = A > B ? false: true;
};
template <bool B>
int pre_incr(int &v){
return ++v;
}
template <>
int pre_incr<false>(int &v){
return --v;
}
template <int A, int B>
class const_int_iterator : public iterator<input_iterator_tag, const int>
{
int p;
public:
typedef const_int_iterator<A,B> self_type;
const_int_iterator() : p(A) {}
const_int_iterator(int s) : p(s) {}
const_int_iterator(const self_type& mit) : p(mit.p) {}
self_type& operator++() {pre_incr< ordered<A,B>::value >(p);return *this;}
self_type operator++(int) {self_type tmp(*this); operator++(); return tmp;}
bool operator==(const self_type& rhs) {return p==rhs.p;}
bool operator!=(const self_type& rhs) {return p!=rhs.p;}
const int& operator*() {return p;}
};
template <int A, int B>
class iterator_factory {
public:
typedef const_int_iterator<A,B> iterator_type;
static iterator_type begin(){
return iterator_type();
}
static iterator_type end(){
return iterator_type(B);
}
};
In the code above, I defined a barebones iterator class going across the values from A to B. There's a simple metaprogramming test to determine whether A and B are in ascending order, and to pick the correct operator (++ or --) to go through the values.
Finally, I also defined a simple factory class holding the begin and end iterator methods. Using this class lets you have a single point of declaration for your dependent type values A and B (I mean here that you only need to use A and B once for this container, and the iterators generated from there will depend on these same A and B, thus simplifying the code somewhat).
Here I provide a simple test program, outputting values from 20 to 11.
#define A 20
#define B 10
typedef iterator_factory<A,B> factory;
int main(){
auto it = factory::begin();
for (;it != factory::end();it++)
cout << "iterator is : " << *it << endl;
}
There might be better ways of doing this with the standard library, though.
The issue of using 0 and UINT_MAX for A and B was brought up. I think it should be possible to handle these cases by overloading the templates for these particular values (left as an exercise for the reader).
size_t A, B;
if (A > B) swap(A, B); // ensure A <= B by swapping when necessary
for (size_t i = A; i <= B; ++i) ...

Is there some ninja trick to make a variable constant after its declaration?

I know the answer is 99.99% no, but I figured it was worth a try, you never know.
void SomeFunction(int a)
{
// Here some processing happens on a, for example:
a *= 50;
a %= 10;
if(example())
a = 0;
// From this point on I want to make "a" const; I don't want to allow
// any code past this comment to modify it in any way.
}
I can do something somewhat similar with const int b = a;, but it's not really the same and it creates a lot of confusion. A C++0x-only solution is acceptable.
EDIT: another less abstracted example, the one that made me ask this question:
void OpenFile(string path)
{
boost::to_lower(path);
// I want path to be constant now
ifstream ...
}
EDIT: another concrete example: Recapture const-ness on variables in a parallel section.
One solution would be to factor all of the mutation code into a lambda expression. Do all of the mutation in the lambda expression and assign the result out to a const int in the method scope. For example
void SomeFunction(const int p1) {
auto calcA = [&]() {
int a = p1;
a *= 50;
a %= 10;
if(example())
a = 0;
..
return a;
};
const int a = calcA();
...
}
or even
void SomeFunction(const int p1) {
const int a = [&]() {
int a = p1;
a *= 50;
a %= 10;
if(example())
a = 0;
..
return a;
}();
...
}
You could move the code to generate a into another function:
int ComputeA(int a) {
a *= 50;
a %= 10;
if (example())
a = 0;
return a;
}
void SomeFunction(const int a_in) {
const int a = ComputeA(a_in);
// ....
}
Otherwise, there's no nice way to do this at compile time.
A pattern I used to use is to "hide" the argument with an _, so the code becomes
void SomeFunction(int _a)
{
// Here some processing happens on a, for example:
_a *= 50;
_a %= 10;
if(example())
_a = 0;
const int a = _a;
// From this point on I want to make "a" const; I don't want to allow
// any code past this comment to modify it in any way.
}
You could also use only const variables and write a function to compute the new value of a, if necessary. I tend more and more not to "reuse" variables and to make my variables immutable as much as possible: if you change the value of something, then give it a new name.
void SomeFunction(const int _a)
{
const int a = preprocess(_a);
....
}
Why not refactor your code in to two separate functions. One that returns a modified a and another that works on this value (without ever changing it).
You could possibly wrap your object too around a holder class object and work with this holder.
template <class T>
struct Constify {
Constify(T val) : v_( val ) {}
const T& get() const { return v_; }
};
void SomeFunction() {
Constify<int> ci( Compute() ); // Compute returns `a`
// process with ci
}
Your example has an easy fix: Refactoring.
// expect a lowercase path or use a case insensitive comparator for basic_string
void OpenFile(string const& path)
{
// I want path to be constant now
ifstream ...
}
OpenFile( boost::to_lower_copy(path) ); // temporaries can bind to const&
This might be one way to do it, if you are just trying to avoid another name. I suggest you think twice before using this.
int func ()
{
int a;
a %= 10;
const int const_a = a;
#define a const_a
a = 10; // this will cause an error, as needed.
#undef a
}
I don't actually suggest doing this, but you could use creative variable shadowing to simulate something like what you want:
void SomeFunction(int a)
{
// Here some processing happens on a, for example:
a *= 50;
a %= 10;
if(example())
a = 0;
{
const int b = a;
const int a = b; // New a, shadows the outside one.
// Do whatever you want inside these nested braces, "a" is now const.
}
}
The answers were pretty solid, but honestly I can't really think of a GOOD situation to use this in. However, in the event you want to pre-calculate a constant, which is basically what you are doing, you have a few main ways you can do this.
First, we can do the following, so the compiler will simply set CompileA# for us; in this case they are 50, 100, and 150. (EarlyCalc has to be declared before it is used to initialize the constants.)
int EarlyCalc(int a)
{
a *= 50;
return a;
}
const int CompileA1 = EarlyCalc(1);
const int CompileA2 = EarlyCalc(2);
const int CompileA3 = EarlyCalc(3);
Now anything beyond that there's so many ways you can handle this. I liked the suggestion as someone else had mentioned of doing.
void SomeFunc(int a)
{
const int A = EarlyCalc(a);
//We Can't edit A.
}
But another way could be...
SomeFunc(EarlyCalc(a));
void SomeFunc(const int A)
{
//We can't edit A.
}
Or even..
void SomeFunction(int a)
{
a *= 50;
ActualFunction(a);
}
void ActualFunction(const int A)
{
//We can't edit A.
}
Sure, there is no way to do it using the same variable name in C++.

Lazy evaluation in C++

C++ does not have native support for lazy evaluation (as Haskell does).
I'm wondering if it is possible to implement lazy evaluation in C++ in a reasonable manner. If yes, how would you do it?
EDIT: I like Konrad Rudolph's answer.
I'm wondering if it's possible to implement it in a more generic fashion, for example by using a parametrized class lazy that essentially works for T the way matrix_add works for matrix.
Any operation on T would return lazy instead. The only problem is to store the arguments and operation code inside lazy itself. Can anyone see how to improve this?
I'm wondering if it is possible to implement lazy evaluation in C++ in a reasonable manner. If yes, how would you do it?
Yes, this is possible and quite often done, e.g. for matrix calculations. The main mechanism to facilitate this is operator overloading. Consider the case of matrix addition. The signature of the function would usually look something like this:
matrix operator +(matrix const& a, matrix const& b);
Now, to make this function lazy, it's enough to return a proxy instead of the actual result:
struct matrix_add;
matrix_add operator +(matrix const& a, matrix const& b) {
return matrix_add(a, b);
}
Now all that needs to be done is to write this proxy:
struct matrix_add {
matrix_add(matrix const& a, matrix const& b) : a(a), b(b) { }
operator matrix() const {
matrix result;
// Do the addition.
return result;
}
private:
matrix const& a, b;
};
The magic lies in the method operator matrix() which is an implicit conversion operator from matrix_add to plain matrix. This way, you can chain multiple operations (by providing appropriate overloads of course). The evaluation takes place only when the final result is assigned to a matrix instance.
EDIT I should have been more explicit. As it is, the code makes no sense because although evaluation happens lazily, it still happens in the same expression. In particular, another addition will evaluate this code unless the matrix_add structure is changed to allow chained addition. C++0x greatly facilitates this by allowing variadic templates (i.e. template lists of variable length).
However, one very simple case where this code would actually have a real, direct benefit is the following:
int value = (A + B)(2, 3);
Here, it is assumed that A and B are two-dimensional matrices and that dereferencing is done in Fortran notation, i.e. the above calculates one element out of a matrix sum. It's of course wasteful to add the whole matrices. matrix_add to the rescue:
struct matrix_add {
// … yadda, yadda, yadda …
int operator ()(unsigned int x, unsigned int y) {
// Calculate *just one* element:
return a(x, y) + b(x, y);
}
};
Other examples abound. I've just remembered that I have implemented something related not long ago. Basically, I had to implement a string class that should adhere to a fixed, pre-defined interface. However, my particular string class dealt with huge strings that weren't actually stored in memory. Usually, the user would just access small substrings from the original string using a function infix. I overloaded this function for my string type to return a proxy that held a reference to my string, along with the desired start and end position. Only when this substring was actually used did it query a C API to retrieve this portion of the string.
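A stripped-down sketch of that lazy-substring idea (all names here are invented, and a std::string stands in for the C API that the real class queried):
#include <cstddef>
#include <iostream>
#include <string>

class BigString;

// Proxy returned by infix(): remembers where the substring lives,
// but fetches nothing until it is actually converted to a std::string.
class SubstringProxy {
public:
    SubstringProxy(const BigString& s, std::size_t from, std::size_t to)
        : source(s), from(from), to(to) {}
    operator std::string() const;           // the lazy part: the fetch happens here
private:
    const BigString& source;
    std::size_t from, to;
};

class BigString {
public:
    explicit BigString(std::string text) : text(std::move(text)) {}
    SubstringProxy infix(std::size_t from, std::size_t to) const {
        return SubstringProxy(*this, from, to);   // cheap, nothing copied yet
    }
    std::string fetch(std::size_t from, std::size_t to) const {
        return text.substr(from, to - from);      // stand-in for the C API call
    }
private:
    std::string text;                             // stand-in for the huge external string
};

SubstringProxy::operator std::string() const {
    return source.fetch(from, to);
}

int main() {
    BigString s("a very long string living somewhere else");
    auto piece = s.infix(2, 6);          // nothing fetched yet
    std::string result = piece;          // the data is retrieved only now
    std::cout << result << "\n";         // prints: very
    return 0;
}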
Boost.Lambda is very nice, but Boost.Proto is exactly what you are looking for. It already has overloads of all C++ operators, which by default perform their usual function when proto::eval() is called, but can be changed.
What Konrad already explained can be taken further to support nested invocations of operators, all executed lazily. In Konrad's example, he has an expression object that can store exactly two arguments, for exactly two operands of one operation. The problem is that it will only execute one subexpression lazily, which nicely explains the concept of lazy evaluation in simple terms, but doesn't improve performance substantially. The other example also shows well how one can apply operator() to add only some elements using that expression object. But to evaluate arbitrarily complex expressions, we need some mechanism that can store the structure of those too. We can't get around templates to do that, and the name for that is expression templates. The idea is that one templated expression object can store the structure of an arbitrary sub-expression recursively, like a tree, where the operations are the nodes and the operands are the child nodes. For a very good explanation I just found today (some days after I wrote the code below), see here.
template<typename Lhs, typename Rhs>
struct AddOp {
Lhs const& lhs;
Rhs const& rhs;
AddOp(Lhs const& lhs, Rhs const& rhs):lhs(lhs), rhs(rhs) {
// empty body
}
Lhs const& get_lhs() const { return lhs; }
Rhs const& get_rhs() const { return rhs; }
};
That will store any addition operation, even nested one, as can be seen by the following definition of an operator+ for a simple point type:
struct Point { int x, y; };
// add expression template with point at the right
template<typename Lhs, typename Rhs> AddOp<AddOp<Lhs, Rhs>, Point>
operator+(AddOp<Lhs, Rhs> const& lhs, Point const& p) {
return AddOp<AddOp<Lhs, Rhs>, Point>(lhs, p);
}
// add expression template with point at the left
template<typename Lhs, typename Rhs> AddOp< Point, AddOp<Lhs, Rhs> >
operator+(Point const& p, AddOp<Lhs, Rhs> const& rhs) {
return AddOp< Point, AddOp<Lhs, Rhs> >(p, rhs);
}
// add two points, yield a expression template
AddOp< Point, Point >
operator+(Point const& lhs, Point const& rhs) {
return AddOp<Point, Point>(lhs, rhs);
}
Now, if you have
Point p1 = { 1, 2 }, p2 = { 3, 4 }, p3 = { 5, 6 };
p1 + (p2 + p3); // returns AddOp< Point, AddOp<Point, Point> >
You now just need to overload operator= and add a suitable constructor for the Point type and accept AddOp. Change its definition to:
struct Point {
int x, y;
Point(int x = 0, int y = 0):x(x), y(y) { }
template<typename Lhs, typename Rhs>
Point(AddOp<Lhs, Rhs> const& op) {
x = op.get_x();
y = op.get_y();
}
template<typename Lhs, typename Rhs>
Point& operator=(AddOp<Lhs, Rhs> const& op) {
x = op.get_x();
y = op.get_y();
return *this;
}
int get_x() const { return x; }
int get_y() const { return y; }
};
And add the appropriate get_x and get_y into AddOp as member functions:
int get_x() const {
return lhs.get_x() + rhs.get_x();
}
int get_y() const {
return lhs.get_y() + rhs.get_y();
}
Note how we haven't created any temporaries of type Point. It could have been a big matrix with many fields. But at the time the result is needed, we calculate it lazily.
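A quick usage sketch, assuming the definitions above (including the get_x/get_y members just added to AddOp) plus <iostream> for the output:
Point p1{1, 2}, p2{3, 4}, p3{5, 6};
// Builds AddOp< AddOp<Point, Point>, Point > -- no intermediate Point objects.
Point total = p1 + p2 + p3;
std::cout << total.get_x() << ", " << total.get_y() << "\n";   // prints: 9, 12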
I have nothing to add to Konrad's post, but you can look at Eigen for an example of lazy evaluation done right, in a real world app. It is pretty awe inspiring.
I'm thinking about implementing a template class that uses std::function. The class should, more or less, look like this:
#include <functional>
template <typename Value>
class Lazy
{
public:
Lazy(std::function<Value()> function) : _function(function), _evaluated(false) {}
Value &operator*() { Evaluate(); return _value; }
Value *operator->() { Evaluate(); return &_value; }
private:
void Evaluate()
{
if (!_evaluated)
{
_value = _function();
_evaluated = true;
}
}
std::function<Value()> _function;
Value _value;
bool _evaluated;
};
For example usage:
#include <iostream>
class Noisy
{
public:
Noisy(int i = 0) : _i(i)
{
std::cout << "Noisy(" << _i << ")" << std::endl;
}
Noisy(const Noisy &that) : _i(that._i)
{
std::cout << "Noisy(const Noisy &)" << std::endl;
}
~Noisy()
{
std::cout << "~Noisy(" << _i << ")" << std::endl;
}
void MakeNoise()
{
std::cout << "MakeNoise(" << _i << ")" << std::endl;
}
private:
int _i;
};
int main()
{
Lazy<Noisy> n([] () { return Noisy(10); });
std::cout << "about to make noise" << std::endl;
n->MakeNoise();
(*n).MakeNoise();
auto &nn = *n;
nn.MakeNoise();
}
Above code should produce the following message on the console:
Noisy(0)
about to make noise
Noisy(10)
~Noisy(10)
MakeNoise(10)
MakeNoise(10)
MakeNoise(10)
~Noisy(10)
Note that the constructor printing Noisy(10) will not be called until the variable is accessed.
This class is far from perfect, though. The first thing would be the default constructor of Value will have to be called on member initialization (printing Noisy(0) in this case). We can use pointer for _value instead, but I'm not sure whether it would affect the performance.
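One way to get rid of that default construction (again just a sketch, requiring C++17) is to keep the value in a std::optional, which stays empty until Evaluate() runs and which also replaces the separate _evaluated flag:
#include <functional>
#include <optional>

template <typename Value>
class Lazy
{
public:
    Lazy(std::function<Value()> function) : _function(std::move(function)) {}
    Value &operator*() { Evaluate(); return *_value; }
    Value *operator->() { Evaluate(); return &*_value; }
private:
    void Evaluate()
    {
        if (!_value)                // empty optional doubles as "not evaluated yet"
            _value = _function();   // Value is constructed only here
    }
    std::function<Value()> _function;
    std::optional<Value> _value;    // no default-constructed Value
};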
Johannes' answer works, but when it comes to more parentheses it doesn't work as desired. Here is an example.
Point p1 = { 1, 2 }, p2 = { 3, 4 }, p3 = { 5, 6 }, p4 = { 7, 8 };
(p1 + p2) + (p3 + p4); // it works, but it is not lazy enough
Because the three overloaded + operators don't cover the case
AddOp<Llhs, Lrhs> + AddOp<Rlhs, Rrhs>
the compiler has to convert either (p1 + p2) or (p3 + p4) to Point, and that's not lazy enough. And when the compiler decides which one to convert, it complains, because neither is better than the other.
Here comes my extension: add yet another overloaded operator +:
template <typename LLhs, typename LRhs, typename RLhs, typename RRhs>
AddOp<AddOp<LLhs, LRhs>, AddOp<RLhs, RRhs>> operator+(const AddOp<LLhs, LRhs> & leftOperand, const AddOp<RLhs, RRhs> & rightOperand)
{
return AddOp<AddOp<LLhs, LRhs>, AddOp<RLhs, RRhs>>(leftOperand, rightOperand);
}
Now the compiler can handle the case above correctly, with no implicit conversion. Voilà!
As it's going to be done in C++0x, by lambda expressions.
Anything is possible.
It depends on exactly what you mean:
class X
{
public: static X& getObjectA()
{
static X instanceA;
return instanceA;
}
};
Here we have the effect of a global variable that is lazily evaluated at the point of first use.
As newly requested in the question.
And stealing Konrad Rudolph's design and extending it.
The Lazy object:
template<typename O,typename T1,typename T2>
struct Lazy
{
Lazy(T1 const& l,T2 const& r)
:lhs(l),rhs(r) {}
typedef typename O::Result Result;
operator Result() const
{
O op;
return op(lhs,rhs);
}
private:
T1 const& lhs;
T2 const& rhs;
};
How to use it:
namespace M
{
class Matrix
{
};
struct MatrixAdd
{
typedef Matrix Result;
Result operator()(Matrix const& lhs,Matrix const& rhs) const
{
Result r;
return r;
}
};
struct MatrixSub
{
typedef Matrix Result;
Result operator()(Matrix const& lhs,Matrix const& rhs) const
{
Result r;
return r;
}
};
template<typename T1,typename T2>
Lazy<MatrixAdd,T1,T2> operator+(T1 const& lhs,T2 const& rhs)
{
return Lazy<MatrixAdd,T1,T2>(lhs,rhs);
}
template<typename T1,typename T2>
Lazy<MatrixSub,T1,T2> operator-(T1 const& lhs,T2 const& rhs)
{
return Lazy<MatrixSub,T1,T2>(lhs,rhs);
}
}
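A usage sketch, assuming the definitions above:
M::Matrix a, b, c;
// Each operator builds a Lazy proxy; the actual MatrixAdd/MatrixSub work
// only runs when the result is converted back to a Matrix.
M::Matrix d = (a + b) - c;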
In C++11 lazy evaluation similar to hiapay's answer can be achieved using std::shared_future. You still have to encapsulate calculations in lambdas but memoization is taken care of:
std::shared_future<int> a = std::async(std::launch::deferred, [](){ return 1+1; });
Here's a full example:
#include <iostream>
#include <future>
#define LAZY(EXPR, ...) std::async(std::launch::deferred, [__VA_ARGS__](){ std::cout << "evaluating "#EXPR << std::endl; return EXPR; })
int main() {
std::shared_future<int> f1 = LAZY(8);
std::shared_future<int> f2 = LAZY(2);
std::shared_future<int> f3 = LAZY(f1.get() * f2.get(), f1, f2);
std::cout << "f3 = " << f3.get() << std::endl;
std::cout << "f2 = " << f2.get() << std::endl;
std::cout << "f1 = " << f1.get() << std::endl;
return 0;
}
C++0x is nice and all.... but for those of us living in the present you have Boost lambda library and Boost Phoenix. Both with the intent of bringing large amounts of functional programming to C++.
Let's take Haskell as our inspiration - it being lazy to the core.
Also, let's keep in mind how Linq in C# uses Enumerators in a monadic (urgh - here is the word - sorry) way.
Last but not least, let's keep in mind what coroutines are supposed to provide to programmers, namely the decoupling of computational steps (e.g. producer/consumer) from each other.
And let's try to think about how coroutines relate to lazy evaluation.
All of the above appears to be somehow related.
Next, let's try to extract our personal definition of what "lazy" comes down to.
One interpretation is: We want to state our computation in a composable way, before executing it. Some of those parts we use to compose our complete solution might very well draw upon huge (sometimes infinite) data sources, with our full computation also either producing a finite or infinite result.
Let's get concrete and into some code. We need an example for that! Here, I choose the fizzbuzz "problem" as an example, just for the reason that there is some nice, lazy solution to it.
In Haskell, it looks like this:
module FizzBuzz
( fb
)
where
fb n =
fmap merge fizzBuzzAndNumbers
where
fizz = cycle ["","","fizz"]
buzz = cycle ["","","","","buzz"]
fizzBuzz = zipWith (++) fizz buzz
fizzBuzzAndNumbers = zip [1..n] fizzBuzz
merge (x,s) = if length s == 0 then show x else s
The Haskell function cycle creates an infinite list (lazy, of course!) from a finite list by simply repeating the values in the finite list forever. In an eager programming style, writing something like that would ring alarm bells (memory overflow, endless loops!). But not so in a lazy language. The trick is that lazy lists are not computed right away - maybe never, and normally only as much as subsequent code requires.
The third line in the where block above creates another lazy(!) list, by combining the infinite lists fizz and buzz with the simple two-element recipe "concatenate a string element from either input list into a single string". Again, if this were evaluated immediately, we would have to wait for our computer to run out of resources.
In the 4th line, we create tuples of the members of the finite lazy list [1..n] with our infinite lazy list fizzBuzz. The result is still lazy.
Even in the main body of our fb function, there is no need to get eager. The whole function returns a list with the solution, which itself is -again- lazy. You could as well think of the result of fb 50 as a computation which you can (partially) evaluate later. Or combine with other stuff, leading to an even larger (lazy) evaluation.
So, in order to get started with our C++ version of "fizzbuzz", we need to think of ways how to combine partial steps of our computation into larger bits of computations, each drawing data from previous steps as required.
You can see the full story in a gist of mine.
Here the basic ideas behind the code:
Borrowing from C# and Linq, we "invent" a stateful, generic type Enumerator, which holds
- The current value of the partial computation
- The state of a partial computation (so we can produce subsequent values)
- The worker function, which produces the next state, the next value and a bool which states if there is more data or if the enumeration has come to an end.
In order to be able to compose Enumerator<T,S> instances by means of the power of the . (dot), this class also contains functions borrowed from Haskell type classes such as Functor and Applicative.
The worker function for an enumerator is always of the form S -> std::tuple<bool, S, T>, where S is the generic type variable representing the state and T is the generic type variable representing a value - the result of a computation step.
All this is already visible in the first lines of the Enumerator class definition.
template <class T, class S>
class Enumerator
{
public:
typedef S State_t;
typedef T Value_t;
typedef std::function<
std::tuple<bool, State_t, Value_t>
(const State_t&
)
> Worker_t;
Enumerator(Worker_t worker, State_t s0)
: m_worker(worker)
, m_state(s0)
, m_value{}
{
}
// ...
};
So, to create a specific enumerator instance, all we need is a worker function and an initial state; we then create an instance of Enumerator with those two arguments.
Here an example - function range(first,last) creates a finite range of values. This corresponds to a lazy list in the Haskell world.
template <class T>
Enumerator<T, T> range(const T& first, const T& last)
{
auto finiteRange =
[first, last](const T& state)
{
T v = state;
T s1 = (state < last) ? (state + 1) : state;
bool active = state != s1;
return std::make_tuple(active, s1, v);
};
return Enumerator<T,T>(finiteRange, first);
}
And we can make use of this function, for example like this: auto r1 = range(size_t{1},10); - We have created ourselves a lazy list with 10 elements!
Now, all that is missing for our "wow" experience is to see how we can compose enumerators.
Coming back to Haskell's cycle function, which is kind of cool. How would it look in our C++ world? Here it is:
template <class T, class S>
auto
cycle
( Enumerator<T, S> values
) -> Enumerator<T, S>
{
auto eternally =
[values](const S& state) -> std::tuple<bool, S, T>
{
auto[active, s1, v] = values.step(state);
if (active)
{
return std::make_tuple(active, s1, v);
}
else
{
return std::make_tuple(true, values.state(), v);
}
};
return Enumerator<T, S>(eternally, values.state());
}
It takes an enumerator as input and returns an enumerator. The local (lambda) function eternally simply resets the input enumeration to its start value whenever it runs out of values, and voilà - we have an infinite, ever repeating version of the list we gave as an argument: auto foo = cycle(range(size_t{1},3));. And we can already shamelessly compose our lazy "computations".
zip is a good example, showing that we can also create a new enumerator from two input enumerators. The resulting enumerator yields as many values as the smaller of the two input enumerators (tuples with 2 elements, one from each input enumerator). I have implemented zip inside class Enumerator itself. Here is how it looks:
// member function of class Enumerator<S,T>
template <class T1, class S1>
auto
zip
( Enumerator<T1, S1> other
) -> Enumerator<std::tuple<T, T1>, std::tuple<S, S1> >
{
auto worker0 = this->m_worker;
auto worker1 = other.worker();
auto combine =
[worker0,worker1](std::tuple<S, S1> state) ->
std::tuple<bool, std::tuple<S, S1>, std::tuple<T, T1> >
{
auto[s0, s1] = state;
auto[active0, newS0, v0] = worker0(s0);
auto[active1, newS1, v1] = worker1(s1);
return std::make_tuple
( active0 && active1
, std::make_tuple(newS0, newS1)
, std::make_tuple(v0, v1)
);
};
return Enumerator<std::tuple<T, T1>, std::tuple<S, S1> >
( combine
, std::make_tuple(m_state, other.state())
);
}
Please note how the "combining" also ends up combining the states of both sources and the values of both sources.
As this post is already TL;DR for many, here is the...
Summary
Yes, lazy evaluation can be implemented in C++. Here, I did it by borrowing the function names from Haskell and the paradigm from C# enumerators and Linq. There might be similarities to Python's itertools, btw. I think they followed a similar approach.
My implementation (see the gist link above) is just a prototype - not production code, btw. So no warranties whatsoever from my side. It serves well as demo code to get the general idea across, though.
And what would this answer be without the final C++ version of fizzbuzz, eh? Here it is:
std::string fizzbuzz(size_t n)
{
typedef std::vector<std::string> SVec;
// merge (x,s) = if length s == 0 then show x else s
auto merge =
[](const std::tuple<size_t, std::string> & value)
-> std::string
{
auto[x, s] = value;
if (s.length() > 0) return s;
else return std::to_string(x);
};
SVec fizzes{ "","","fizz" };
SVec buzzes{ "","","","","buzz" };
return
range(size_t{ 1 }, n)
.zip
( cycle(iterRange(fizzes.cbegin(), fizzes.cend()))
.zipWith
( std::function(concatStrings)
, cycle(iterRange(buzzes.cbegin(), buzzes.cend()))
)
)
.map<std::string>(merge)
.statefulFold<std::ostringstream&>
(
[](std::ostringstream& oss, const std::string& s)
{
if (0 == oss.tellp())
{
oss << s;
}
else
{
oss << "," << s;
}
}
, std::ostringstream()
)
.str();
}
And... to drive the point home even further - here a variation of fizzbuzz which returns an "infinite list" to the caller:
typedef std::vector<std::string> SVec;
static const SVec fizzes{ "","","fizz" };
static const SVec buzzes{ "","","","","buzz" };
auto fizzbuzzInfinite() -> decltype(auto)
{
// merge (x,s) = if length s == 0 then show x else s
auto merge =
[](const std::tuple<size_t, std::string> & value)
-> std::string
{
auto[x, s] = value;
if (s.length() > 0) return s;
else return std::to_string(x);
};
auto result =
range(size_t{ 1 })
.zip
(cycle(iterRange(fizzes.cbegin(), fizzes.cend()))
.zipWith
(std::function(concatStrings)
, cycle(iterRange(buzzes.cbegin(), buzzes.cend()))
)
)
.map<std::string>(merge)
;
return result;
}
It is worth showing, since you can learn from it how to dodge the question of what the exact return type of that function is (as it depends on the implementation of the function alone, namely on how the code combines the enumerators).
Also it demonstrates that we had to move the vectors fizzes and buzzes outside the scope of the function so they are still around when eventually on the outside, the lazy mechanism produces values. If we had not done that, the iterRange(..) code would have stored iterators to the vectors which are long gone.
Using a very simple definition of lazy evaluation - the value is not evaluated until needed - I would say that one could implement this through the use of a pointer and macros (for syntax sugar).
#include <stdatomic.h>
#define lazy(var_type) lazy_ ## var_type
#define def_lazy_type( var_type ) \
typedef _Atomic var_type _atomic_ ## var_type; \
typedef _atomic_ ## var_type * lazy(var_type); //pointer to atomic type
#define def_lazy_variable(var_type, var_name ) \
_atomic_ ## var_type _ ## var_name; \
lazy_ ## var_type var_name = & _ ## var_name;
#define assign_lazy( var_name, val ) atomic_store( & _ ## var_name, val )
#define eval_lazy(var_name) atomic_load( &(*var_name) )
#include <stdio.h>
def_lazy_type(int)
void print_power2 ( lazy(int) i )
{
printf( "%d\n", eval_lazy(i) * eval_lazy(i) );
}
typedef struct {
int a;
} simple;
def_lazy_type(simple)
void print_simple ( lazy(simple) s )
{
simple temp = eval_lazy(s);
printf("%d\n", temp.a );
}
#define def_lazy_array1( var_type, nElements, var_name ) \
_atomic_ ## var_type _ ## var_name [ nElements ]; \
lazy(var_type) var_name = _ ## var_name;
int main ( )
{
//declarations
def_lazy_variable( int, X )
def_lazy_variable( simple, Y)
def_lazy_array1(int,10,Z)
simple new_simple;
//first the lazy int
assign_lazy(X,111);
print_power2(X);
//second the lazy struct
new_simple.a = 555;
assign_lazy(Y,new_simple);
print_simple ( Y );
//third the array of lazy ints
for(int i=0; i < 10; i++)
{
assign_lazy( Z[i], i );
}
for(int i=0; i < 10; i++)
{
int r = eval_lazy( &Z[i] ); //must pass with &
printf("%d\n", r );
}
return 0;
}
You'll notice in the function print_power2 there is a macro called eval_lazy which does nothing more than dereference a pointer to get the value just prior to when it's actually needed. The lazy type is accessed atomically, so it's completely thread-safe.