Passing result of expression by reference - c++

I have a function:
int f(std::atomic<bool>& flag);
That needs to check the value of flag, an atomic bool which gets changed by another thread. This function works as expected (f sees changes to the flag made by the other thread) if I call it like so:
f(m_flag1);
What I am trying to achieve now, is slightly more complicated, since I don't want to pass a reference to a single atomic bool, but to an expression instead, like:
std::atomic<bool> x = m_flag1 && m_flag2;
f(x);
As far as I understood, it would be wrong to pass the expression directly to the function, since that would create a temporary value that gets destroyed once the call ends. It also seems that rvalues cannot be bound to a non-const reference.
Having a reference to x does not help much, though, since I think the expression is only evaluated once, so f is not actually seeing m_flag1 && m_flag2.
What is a clean way to get a reference to an expression? Should my other thread continuously evaluate x = m_flag1 && m_flag2; or is there a cleaner way?

From question comments:
"[...]that fuction needs to check a boolean expression[...]"
emphasis mine
If your function is not supposed to modify the boolean flags but only evaluate the expression's value, then you can just take the parameter by const reference instead.
int f(const std::atomic<bool>& flag)
{
    // ...
}
And the call may be like:
int result = f(flag1 && flag2);
Note: If you need to evaluate both flags "at the same time" (which is most probably the case), you'll need to add a synchronization mechanism such as a mutex, because the evaluation of the expression flag1 && flag2 is not atomic even if flag1 and flag2 individually are.
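As a minimal sketch of such a mechanism (the mutex name is invented for illustration; every writer of the flags would have to lock the same mutex for this to actually help):

#include <atomic>
#include <mutex>

std::atomic<bool> m_flag1{false}, m_flag2{false};
std::mutex m_flagsMutex; // hypothetical mutex; writers must lock it too

// Reads both flags as a single unit with respect to anyone holding the mutex.
bool both_flags_set()
{
    std::lock_guard<std::mutex> lock(m_flagsMutex);
    return m_flag1 && m_flag2;
}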
Now, if your function may modify the flags, making a const reference impossible, you don't have any choice other than to pass the flags as two separate arguments and defer the evaluation into the function.
From question comments again:
"The thing is, that function is usually called with one flag only, this is a special case in which I need two flags"
Considering this, you can overload the function to handle the cases with one or two parameters, something as follows:
int f(std::atomic<bool>& flag)
{
    // ...
}

int f(std::atomic<bool>& flag1, std::atomic<bool>& flag2)
{
    auto expr_result = flag1 && flag2; // evaluation of the expression deferred into the function
    // ...
}
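With those overloads in place, call sites might look like this (flag names are taken from the question; this is only an illustration):

#include <atomic>

int f(std::atomic<bool>& flag);                            // single-flag overload
int f(std::atomic<bool>& flag1, std::atomic<bool>& flag2); // two-flag overload

std::atomic<bool> m_flag1{false}, m_flag2{false};

void caller()
{
    int a = f(m_flag1);          // the usual single-flag case
    int b = f(m_flag1, m_flag2); // the special two-flag case
}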
The note above about needing a synchronization mechanism (such as a mutex) if you want to make the evaluation of the expression atomic still applies here.

Related

infinite for loops in c++

I am playing around a little with for loops. I tried the following code and got an infinite loop.
#include <iostream>

int main() {
    int i {0};
    bool condition = i < 5;
    for (; condition;) {
        std::cout << "Hello World!" << std::endl;
        i++;
    }
}
Can someone explain why ?
bool condition = i < 5;
This line defines a variable named condition which has the value true from this line onwards.
It does not bind the expression from the right side, but only copies the result at the time of assignment.
What you intended is more complicated:
auto condition = [&i](){ return i < 5; };
for ( ; condition() ; )
Now condition is a function object which can be evaluated repeatedly.
The right-hand side of the assignment is called a lambda expression, and follows the form [capture list](parameters){ body with return statement }.
In the capture list, you can list variables either by value (without &), in which case they get copied once when the lambda is created, or by reference (with a leading &), in which case they are not copied but the variable inside the lambda is a reference to the variable of the same name outside the lambda. There is also the short form [&], which captures all variables in the enclosing scope by reference, and [=], which captures everything by value.
auto can be used for brevity in combined declarations + assignments, and automatically resolves the type of the variable from the right hand side.
The closest compatible type you could explicitly specify would be std::function<bool()> (a generic container for callables with that signature); the actual type of the object is some unnamed internal type representing the naked lambda, which you can't write explicitly. So since you can't know the exact type, and if you don't want to use the generic container type, auto is occasionally even necessary.
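Putting it all together, a corrected version of the program from the question, which terminates after five iterations, could look like this:

#include <iostream>

int main() {
    int i {0};
    // The lambda captures i by reference, so every call re-reads its current value.
    auto condition = [&i]() { return i < 5; };
    for (; condition();) {
        std::cout << "Hello World!" << std::endl;
        i++;
    }
}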
Variables in C++ store values. You seem to be under the impression that condition somehow remains connected to the expression i < 5. It does not. Once you set the value, which is true at the time of the assignment, it keeps that value until you change it. You never change it again, so the value of condition is forever true.
Citing cppreference:
Executes init-statement once, then executes statement and iteration-expression repeatedly, until the value of condition becomes false. The test takes place before each iteration.
Now, your condition evaluates to true:
bool condition = i < 5; // true
Hence, the loop will continue until condition becomes false, which never happens in the loop body of your code; condition is not written to after initialization, and therefore the loop will not stop.
The for loop runs on a condition, such as "while i is below a specified value", and keeps running until that condition is no longer satisfied. In your case the condition is always satisfied, guaranteeing an infinite loop.

C++ constexpr - Value can be evaluated at compile time?

I was reading about constexpr here:
The constexpr specifier declares that it is possible to evaluate the value of the function or variable at compile time.
When I first read this sentence, it made perfect sense to me. However, recently I've come across some code that completely threw me off. I've reconstructed a simple example below:
#include <iostream>

void MysteryFunction(int *p);

constexpr int PlusOne(int input) {
    return input + 1;
}

int main() {
    int i = 0;
    MysteryFunction(&i);
    std::cout << PlusOne(i) << std::endl;
    return 0;
}
Looking at this code, there is no way for me to say what the result of PlusOne(i) should be, however it actually compiles! (Of course linking will fail, but g++ -std=c++11 -c succeeds without error.)
What would be the correct interpretation of "possible to evaluate the value of the function at compile time"?
The quoted wording is a little misleading in a sense. If you just take PlusOne in isolation, and observe its logic, and assume that the inputs are known at compile-time, then the calculations therein can also be performed at compile-time. Slapping the constexpr keyword on it ensures that we maintain this lovely state and everything's fine.
But if the input isn't known at compile-time then it's still just a normal function and will be called at runtime.
So constexpr is a property of the function ("possible to evaluate at compile time" for some input, not for all input), not of your function/input combination in this specific case (so not for this particular input either).
It's a bit like how a function could take a const int& but that doesn't mean the original object had to be const. Here, similarly, constexpr adds constraints onto the function, without adding constraints onto the function's input.
Admittedly it's all a giant, confusing, nebulous mess (C++! Yay!). Just remember, your code describes the meaning of a program! It's not a direct recipe for machine instructions at different phases of compilation.
(To really enforce this you'd have the integer be a template argument.)
A constexpr function may be called within a constant expression, provided that the other requirements for the evaluation of the constant expression are met. It may also be called within an expression that is not a constant expression, in which case it behaves the same as if it had not been declared with constexpr. As the code in your question demonstrates, the result of calling a constexpr function is not automatically a constant expression.
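A minimal sketch of both situations, reusing the PlusOne function from the question:

constexpr int PlusOne(int input)
{
    return input + 1;
}

// Constant expression context: forced to be evaluated at compile time.
static_assert(PlusOne(1) == 2, "evaluated at compile time");

int at_runtime(int n)
{
    return PlusOne(n); // n is only known at run time, so this is an ordinary call
}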
What would be the correct interpretation of "possible to evaluate the value of the function at compile time"?
If all the arguments to the function are evaluatable at compile time, then the return value of the function can be evaluated at compile time.
However, if the values of the arguments to the function are known only at run time, then the return value of the function can only be evaluated at run time.
Hence, it is possible to evaluate the value of the function at compile time, but it is not a requirement.
All the answers are correct, but I want to give one short example that I use to explain how ridiculously unintuitive constexpr is.
#include <cstdlib>

constexpr int fun(int i) {
    if (i == 42) {
        return 47;
    } else {
        return rand();
    }
}

int main()
{
    int arr[fun(42)];
}
As a side note:
Some people find this constexpr status quo unsatisfying, so they proposed adding a constexpr! keyword to the language (the idea later landed in C++20 as consteval).

If you include const in a function, is & redundant?

Will the two function specifications below always compile to the same thing? I can't see that a copy would be needed if you're using const. If they aren't the same, why?
void f(const int y);
void f(const int& y);
Not the same. If the argument changes after it's passed (e.g. because it's changed by another thread), the first version is unaffected because it has a copy. In the second variant, the called function may not change the argument itself, but it will observe changes made to y through other names. With threads, this might mean it requires a mutex lock.
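You can see this effect even without threads, through aliasing. In the following sketch (the function name is made up), the reference parameter aliases a global variable, so a write to the global is visible through the parameter; with a by-value parameter the second print would still show 0:

#include <iostream>

int g = 0;

void observe(const int& y)
{
    std::cout << y << '\n'; // prints 0
    g = 42;                 // modifies the object y refers to
    std::cout << y << '\n'; // prints 42, because y aliases g
}

int main()
{
    observe(g);
}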
Without optimization ... this is not the same.
The first line of code gets a copy of the value passed into this function.
The second line of code takes a reference to the variable, and your function will always read the value directly from the caller's variable.
In both cases the const keyword informs the compiler that these variables must not be modified inside the function. Any attempt to modify them in the function will generate a compile error.
The & specifies that the object is passed by reference (similar to passing by pointer at least on the assembly level). Thus
void fval(type x)
{
    // x is a local copy of the data passed by the caller;
    // modifying x has no effect on the data held by the caller
}

type a;
fval(a); // a will not be changed

void fref(type &x)
{
    // x is a mere reference to an object;
    // changing x affects the data held by the caller
}

type b;
fref(b); // b may get changed.
Now adding the const keyword merely expresses that the function promises not to change the object.
void fcval(const type x)
{
    // x is a local copy of the data passed by the caller;
    // modifying x is not allowed
}

type a;
fcval(a); // a will not be changed

void fcref(const type &x)
{
    // x is a mere reference to an object;
    // changing x is not allowed
}

type b;
fcref(b); // b will not be changed
Since a copy may be expensive, it should be avoided when not needed. Therefore, the method of choice for passing a constant object is by const reference, a la fcref (except for builtin types, for which fval is just as fast).
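As a rough sketch of that guideline (the type choices are just illustrative):

#include <cstddef>
#include <string>

// Potentially expensive to copy: pass by const reference.
std::size_t length_by_ref(const std::string& s) { return s.size(); }

// Same behavior, but copies the whole string on every call.
std::size_t length_by_value(const std::string s) { return s.size(); }

// Builtin types are cheap to copy: plain pass-by-value is fine.
int plus_one(const int x) { return x + 1; }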

Effects of declaring a function as pure or const to GCC, when it isn't

GCC can suggest functions for attribute pure and attribute const with the flags -Wsuggest-attribute=pure and -Wsuggest-attribute=const.
The GCC documentation says:
Many functions have no effects except the return value and their return value depends only on the parameters and/or global variables. Such a function can be subject to common subexpression elimination and loop optimization just as an arithmetic operator would be. These functions should be declared with the attribute pure.
But what can happen if you attach __attribute__((__pure__)) to a function that doesn't match the above description, and does have side effects? Is it simply the possibility that the function will be called fewer times than you would want it to be, or is it possible to create undefined behaviour or other kinds of serious problems?
Similarly for __attribute__((__const__)) which is stricter again - the documentation states:
Basically this is just slightly more strict class than the pure attribute below, since function is not allowed to read global memory.
But what can actually happen if you attach __attribute__((__const__)) to a function that does access global memory?
I would prefer technical answers with explanations of actual possible scenarios within the scope of GCC / G++, rather than the usual "nasal demons" handwaving that appears whenever undefined behaviour gets mentioned.
But what can happen if you attach __attribute__((__pure__)) to a function that doesn't match the above description, and does have side effects?
Exactly. Here's a short example:
extern __attribute__((pure)) int mypure(const char *p);

int call_pure() {
    int x = mypure("Hello");
    int y = mypure("Hello");
    return x + y;
}
My version of GCC (4.8.4) is clever enough to remove the second call to mypure (the result becomes 2*mypure()). Now imagine if mypure were printf: the side effect of printing the string "Hello" would be lost.
Note that if I replace call_pure with
char s[1];

int call_pure() {
    int x = mypure("Hello");
    s[0] = 1;
    int y = mypure("Hello");
    return x + y;
}
both calls will be emitted (because the assignment to s[0] may change the return value of mypure).
Is it simply the possibility that the function will be called fewer times than you would want it to be, or is it possible to create undefined behaviour or other kinds of serious problems?
Well, it can cause UB indirectly. E.g. here
extern __attribute__((pure)) int get_index();

char a[1];
int i;

void foo() {
    i = get_index(); // Returns -1
    a[get_index()]; // Returns 0
}
The compiler will most likely drop the second call to get_index() and reuse the first returned value, -1, which will result in a buffer overflow (well, technically an underflow).
But what can actually happen if you attach __attribute__((__const__)) to a function that does access global memory?
Let's again take the above example with
int call_pure() {
    int x = mypure("Hello");
    s[0] = 1;
    int y = mypure("Hello");
    return x + y;
}
If mypure were annotated with __attribute__((const)), the compiler would again drop the second call and optimize the return to 2*mypure(...). If mypure actually reads s, this would produce a wrong result.
EDIT
I know you asked to avoid hand-waving, but here's some generic explanation. By default, a function call blocks a lot of optimizations inside the compiler, as it has to be treated as a black box which may have arbitrary side effects (modify any global variable, etc.). Annotating a function with const or pure instead allows the compiler to treat it more like an expression, which enables more aggressive optimization.
Examples are really too numerous to give. The one I gave above is common subexpression elimination, but we could just as easily demonstrate the benefits for loop-invariant code motion, dead code elimination, alias analysis, etc.
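For instance, here is a loop-invariant sketch (the function name is invented; whether GCC actually performs the hoist depends on version and optimization level):

extern __attribute__((pure)) unsigned my_length(const char *s);

unsigned count_chunks(const char *s, unsigned chunk)
{
    unsigned n = 0;
    // Because my_length is declared pure and nothing in the loop writes to
    // global memory, the compiler may hoist the call out of the loop and
    // evaluate it just once instead of on every iteration.
    for (unsigned i = 0; i < my_length(s); i += chunk)
        ++n;
    return n;
}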

Premature optimization or am I crazy?

I recently saw a piece of code at comp.lang.c++.moderated returning a reference to a static integer from a function. The code was something like this:
#include <iostream>

int& f()
{
    static int x;
    x++;
    return x;
}

int main()
{
    f() += 1;      // A
    f() = f() + 1; // B
    std::cout << f();
}
When I debugged the application using my cool Visual Studio debugger, I saw just one call to f() for statement A and, guess what, I was shocked. I always thought i += 1 was equal to i = i + 1, so f() += 1 would be equal to f() = f() + 1 and I would be seeing two calls to f(), but I saw only one. What the heck is this? Am I crazy, has my debugger gone crazy, or is this a result of premature optimization?
This is what the Standard says about += and friends:
5.17/7: The behavior of an expression of the form E1 op= E2 is equivalent to E1 = E1 op E2 except that E1 is evaluated only once. [...]
So the compiler is right on that.
i += 1 is functionally the same as i = i + 1. It may be implemented differently under the hood (basically, it is designed to take advantage of CPU-level optimization).
But essentially the left side is evaluated only once. It yields a non-const lvalue, which is all that is needed to read the value, add one, and write it back.
This is more obvious when you create an overloaded operator for a custom type. operator+= modifies the this instance; operator+ returns a new instance. It is generally recommended (in C++) to write operator+= first, and then write operator+ in terms of it, as sketched below.
(Note this applies only to C++; in C#, operator+= is exactly as you assumed: just shorthand for operator+, and you cannot create your own operator+=. It is automatically created for you from operator+.)
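The recommended pattern looks roughly like this (the Value type is invented for illustration):

struct Value {
    int v = 0;

    // operator+= modifies this instance in place and returns it by reference.
    Value& operator+=(const Value& rhs)
    {
        v += rhs.v;
        return *this;
    }
};

// operator+ is then written in terms of operator+=: copy the left operand,
// modify the copy, and return it as a new instance.
Value operator+(Value lhs, const Value& rhs)
{
    lhs += rhs;
    return lhs;
}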
Your thinking is logical but not correct.
i += 1;
// This is logically equivalent to:
i = i + 1;
But logically equivalent and identical are not the same.
The code should be looked at like this:
int& x = f();
x += 1;
// Now you can use logical equivalence.
int& x = f();
x = x + 1;
The compiler will not make two function calls unless you explicitly put two function calls into the code. If you have side effects in your functions (as you do here) and the compiler started adding extra, hard-to-see implicit calls, it would become very hard to follow the flow of the code, and thus very hard to maintain.
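One way to convince yourself without a debugger is to count the calls; the counter below is added purely for illustration:

#include <iostream>

static int calls = 0; // instrumentation, not part of the original code

int& f()
{
    static int x;
    ++calls;
    x++;
    return x;
}

int main()
{
    f() += 1;                   // statement A
    std::cout << calls << '\n'; // prints 1: f() ran once
    f() = f() + 1;              // statement B
    std::cout << calls << '\n'; // prints 3: f() ran twice more
}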
f() returns a reference to the static integer, and += 1 then adds one to that memory location; there's no need to call f() twice in statement A.
In every language I've seen which supports a += operator, the compiler evaluates the operand on the left-hand side once to yield some type of address, which is then used both to read the old value and to write the new one. The += operator is not just syntactic sugar; as you note, it can achieve expression semantics which would be awkward to achieve by other means.
Incidentally, the "With" statements in vb.net and Pascal both have a similar feature. A statement like:
' Assume Foo is an array of some type of structure, Bar is a function, and Boz is a variable.
With Foo(Bar(Boz))
    .Fnord = 9
    .Quack = 10
End With
will compute the address of Foo(Bar(Boz)), and then set two fields of that structure to the values nine and ten. It would be equivalent in C to
{
    FOOTYPE *tmp = &Foo[Bar(Boz)];
    tmp->Fnord = 9;
    tmp->Quack = 10;
}
but vb.net and Pascal do not expose the temporary pointer. While one could achieve the same effect in VB.net without "With", by introducing a temporary variable to hold the result of Bar(), using "With" allows one to avoid that temporary variable.