I have three functions, funt1(), funt2(), and funt3().
#include <iostream>
using namespace std;

int funt1()
{
cout<<"funt1 called"<<endl;
return 10;
}
int funt2()
{
cout<<"funt2 called"<<endl;
return 20;
}
void funt3(int x=funt1(), int y=funt2())
{
cout << x << y << endl;
}
My main function:
int main()
{
funt3();
return 0;
}
When I call funt3() in my main() function, why is funt1() called first, and then funt2()?
It depends on your compiler. Others may call funt2() first. Neither C nor C++ guarantees the order of evaluation of function arguments.
See Parameter evaluation order before a function calling in C
The C++ standard does not define the order, so it is entirely compiler-specific. That said, you should never rely on unspecified behaviour.
EDIT: if you really want to keep function invocations as default parameters, to reduce the number of parameters you have to pass each time, I suggest you do the following:
void funt3(int x, int y)
{
cout<<x<<y<<endl;
}
void funt3(int x)
{
funt3(x, funt2());
}
void funt3()
{
funt3(funt1());
}
The language does not require any particular order. The order used will be compiler dependent.
Compiler-specific. The compiler picks an evaluation order when it generates code, and it may pick differently for optimization reasons; for example, if it decides funt1 is much cheaper than funt2, it may schedule funt1 together with other operations first.
As the C++ standard doesn't define the order, it depends on the compiler.
You can simply try a few popular C++ compilers (GCC, VS2008/VS2010, etc.).
Then you may well see different results.
Compiler-dependent. It may be funt1 then funt2, or funt2 then funt1; either way, both run before the body of funt3.
As noted by everybody else, the order of evaluation of function parameters is unspecified by the C++ standard. This allows each compiler to choose an optimal order, whether that order is determined by convenience or efficiency.
You may even find that the order can change based on the optimization flags you give to your compiler.
In order to guarantee the sequence of the function calls, you need to introduce a sequence point between them. I accomplish this below by creating two versions of the function, so that only one function call appears as a default argument.
void funt3(int x, int y=funt2())
{
cout << x << y << endl;
}
void funt3()
{
int x = funt1();
funt3(x);
}
It is because funt3's parameters need to work out x and then y, i.e. funt1() and then funt2(), before the body of funt3 runs. (Note, however, that as other answers point out, this left-to-right order is not actually guaranteed by the standard.)
Related
#include <iostream>
using namespace std;
int main() {
int n,x;
int fact(int);
cin >> n;
x = fact(n);
cout << x;
return 0;
}
int fact(int n) {
if(n!=1)
return n*fact(n-1);
}
In the last case, when the argument passed to the function fact is 1, how is it able to calculate that fact(1) is equal to 1, without me specifying it in my code?
This program relies on undefined behavior. It is certainly not guaranteed to work, but you may have found a situation in which the parameter you send (1) is in a place where the calling code perceives it as the return value. Do not depend on this behavior.
Many C++ compilers would reject this code as having a semantic issue: Not all control paths return a value from fact()
int fact(int n);
This function signature returns an integer, but when n=1 is given there is no corresponding return statement in your implementation. In that case, the function may return any int value (i.e. garbage in memory) and the behaviour is undefined. You should not depend on this, even though your compiler allows it to run.
I'm quite certain you saw a warning when compiling your program. In my environment (g++ on Mac OSX), the compiler issued the following warning:
warning: control may reach end of non-void function [-Wreturn-type]
Personally, I don't think there is any good reason for a compiler to allow this kind of bug (i.e. it should fail to compile).
Reference: A similar question is found below:
C++ return value without return statement
I would like to know the difference between functions that return a value and functions that write a value through a pointer parameter, for example:
Pointer:
int myVal;
ptr_multiply(5, 5, &myVal);
cout << myVal << endl; //Output: 25
Return:
int myVal = rtn_multiply(5, 5);
cout << myVal << endl; //Output: 25
My question focuses on which method to choose when creating functions and why, as well as what advantages/disadvantages we can find in each of them.
Readability is the main reason functions typically return what you expect them to return. However, as this is highly subjective, I suggest you strive for consistency within your project.
When you want a function to return multiple things, and you don't want to group them into a structure just so you can return them, an alternative is to return them through output parameters.
If you have big pieces of data to return, you might run into performance bottlenecks, because the returned value may have to be copied.
In most cases, though, you don't have to worry about that, because modern compilers elide such copies (return-value optimization) or move the result.
Side note: in C++, try to avoid raw pointers as much as possible (use references and smart pointers).
One advantage of the "pointer" method is that you can have multiple "return" values by passing in non-const references or pointers. For example:
int output1;
int output2;
get_values("input", &output1, &output2);
You can return a success/failure:
int v = -1;
if(ToInt("wibble",&v)){
// do something with v
}
This would fail, as "wibble" can't be converted to an int.
The other useful thing is that the function doesn't have to change v:
int v = previousValue;
UpdateIfNewValueFound( &v );
Return the value. With modern compilers there should be almost no difference in performance between the two, but returning the value has several advantages:
By definition, it's what a function should do (i.e. a function maps from a domain to a codomain).
It's self-documenting; if I'm using an unfamiliar API, unless there is good documentation, it can get confusing what is input/output if values are returned in the parameter set. With a returned value there is no ambiguity, and requires less documenting.
It's easier to use and can halve the number of lines needed! Compare
int r = f();
to
int r;
f(&r);
It could make you a better programmer; you have to put more effort into returning multiple values from a function (e.g. via a struct or pair, as opposed to adding another parameter). If you find you need to do this often, the additional effort may force you to think more carefully about your code design - it's usually better for a function to just do/return one thing.
Another difference is where the data lives.
A value returned by value comes back via the stack (or registers), whereas the object filled in through a pointer or reference lives wherever the caller allocated it, which need not be the heap.
For instance (senseless recursive code to demonstrate returning on the stack):
#include <array>

typedef std::array<int, 1000> KByte;  // ~4 KB of ints, despite the name
KByte tmp;                            // global scratch used by f()

KByte f(int nr) {
    if (nr == 1) { tmp[nr] = nr; return tmp; }
    else { tmp[nr] = nr; return f(nr - 1); }
}

void f2(int nr, KByte& ret) {
    if (nr == 1) { ret[1] = 1; }
    else { ret[nr] = nr; f2(nr - 1, ret); }
}
Calling
KByte t = f(999); /* stack overflow risk */
may give you a stack-size error, since nearly 1000 recursive calls, each returning a roughly 4 KB array by value, can consume several megabytes of stack.
Calling
KByte t2;
f2( 999, t2 );
should work without a stack problem. (It also uses recursion depth 999, but it does not place the return value on the stack.)
Does the C++ language allow the following code to print e.g. 1 instead of 16? According to other answers I would guess yes but this case specifically doesn't seem to have been covered.
#include <iostream>
#include <cstdlib>
using namespace std;
struct as_array {
double &a, &b;
as_array(double& A, double& B)
: a(A), b(B) {}
double& operator[](const int i) {
switch (i) {
case 0:
return this->a;
break;
case 1:
return this->b;
break;
default:
abort();
}
}
};
int main() {
cout << sizeof(as_array) << endl;
}
The Standard says under [dcl.ref]:
It is unspecified whether or not a reference requires storage
Also it is up to the compiler to decide what the size of an object is, so you could get any non-zero number here.
There is also the as-if rule (aka. permission to optimize). So it would be legal for the compiler to use storage for these references if and only if the way the references were used required it.
Having said all that; in the interests of having a stable ABI I would still expect that a compiler assigns storage to these references.
The way the compiler implements reference behaviours - including where and how they're stored - is not specified in the C++ Standard. Consequently, some compiler could "print e.g. 1 instead of 16" as you ask.
Separately, you don't need the break statements after return.
I believe that
cout << sizeof(as_array) << endl;
always returns the storage required for two pointers to double on the given machine, possibly extended with padding to satisfy alignment rules. Optimization does not mean reducing the declared size of data structures. Instead, the compiler can optimize your code away entirely in a real-world scenario. So code like this:
double a=100;
double b=200;
as_array arr(a, b);
std::cout << arr[0] << std::endl;
can result in the storage for the struct being optimized away entirely, because the compiler knows how the values are handled throughout your code. But printing sizeof(arr) will still give you the theoretical size of the struct.
Anyway: if you want better optimization results, you should write code the optimizer can exploit! Make methods const if they are const! If you use C++11, use constexpr where possible.
I like to refactor complicated conditionals like this:
if (foo(blah) || (bar(param1, param2, param3) && !obj.longMethodName())) ...
into this:
bool foo_true = foo(blah);
bool bar_true = bar(param1, param2, param3);
bool long_true = obj.longMethodName();
if (foo_true || (bar_true && !long_true)) ...
I think this makes the code a lot easier to understand, and it helps with debugging because you can see the intermediate results used to compute the final condition.
But: in the original code, due to short circuiting, bar will only be evaluated if foo returns true, and longMethodName only if bar returns true.
Assume that functions are "pure", methods are const, and the compiler can see the function bodies of everything. Are C++ optimizers allowed to defer evaluating my intermediate values until they are needed?
Of course. Provided the compiler can see enough to determine that foo, bar and obj.longMethodName() don't have any other impact on the observable behavior of your code.
Whether any compilers do is another question. I rather doubt it; it would require some very special logic, which isn't in the usual list of optimization techniques, for something that practically nobody does. (I suspect that most programmers would find the original version, formatted correctly, to be more readable than the one with a lot of extra variables.)
EDIT:
I wonder if it's worth pointing out that the compiler is allowed to call all three functions even if the if is written:
if ( foo( blah ) || (bar( p1, p2, p3 ) && ! obj.lMN() ) )
(Although I can't imagine one that would.) The standard makes no requirements with regards to which functions are called when; it only requires that the observable behavior be the same (same values and in the same order, with no guarantees with regard to time) "as if" the formal semantics of the program were followed. And the only things that count as observable behavior are IO (in some form) and accesses to volatile objects.
No. Your compiler is not allowed to make the optimisation because it cannot determine whether you meant the short circuit or whether you want a potential side effect of evaluating bar no matter what.
No. C++ has no concept of a pure method with no side effects, so there really isn't a way to optimize that.
The problem here is that foo and bar could be implemented in another compilation unit and C++ does not have the concept of function purity. This means that foo and bar might have side effects (changes to screen or global variables) and therefore must be evaluated in order for you to get expected behaviour.
Interestingly enough, with GCC, functions can be declared with the pure attribute. This tells the compiler that the function does not have any side effects and can therefore be called lazily. See here for more details.
I am not sure whether the assignment would already count as a side effect. To say the least, it is probably hard to determine whether it is safe to move the actual call.
But I would like to point out that in C++11 it is possible to achieve what the OP pursues, with nearly the same syntax as in the OP's examples, by using std::bind.
It is just that foo_true would not be defined as
bool foo_true = foo(blah);
but rather
auto foo_true = std::bind(foo, blah);
The if could then be checked as if (foo_true() || bar_true()).
Whether it is cleaner or not is a matter of personal taste, IMO. But I believe it behaves as both wanted and expected. Full code:
#include <iostream>
#include <functional>
using namespace std;
bool foo(int blah){
cout << "blah: " << blah << '\n';
return blah;
}
bool bar(bool negate_me){
cout << "negate_me: " << negate_me << '\n';
return !negate_me;
}
int main() {
bool test = true;
int param = 42;
auto foo_true = std::bind(foo, test);
auto bar_true = std::bind(bar, param);
if (foo_true() || bar_true() ) cout << "TEST\n";
return 0;
}
Output:
blah: 1
TEST
bar wasn't called. Change test to false and it will be.
This question already has answers here:
What is double evaluation and why should it be avoided?
reading combase.cpp code, I find following:
/* We have to ensure that we DON'T use a max macro, since these will typically */
/* lead to one of the parameters being evaluated twice. Since we are worried */
/* about concurrency, we can't afford to access the m_cRef twice since we can't */
/* afford to run the risk that its value having changed between accesses. */
template<class T> inline static T ourmax( const T & a, const T & b )
{
return a > b ? a : b;
}
I don't understand why "max macro leads to one of the parameters being evaluated twice"?
Consider a usage like this code sample:
#include <iostream>
using namespace std;

#define max(a,b) (a>b?a:b)

int main()
{
int a = 0;
int b = 1;
int c = max(a++, b++);
cout << a << endl << b << endl;
return 0;
}
The intention probably was to print 1 and 2, but the macro expands to:
int c = a++ > b++ ? a++ : b++;
b gets incremented twice, and the program prints 1 and 3.
Hence, in some cases, expressions passed as arguments to macros can be evaluated more than once.
Although Als has quite clearly explained the immediate issue, I see two
larger issues. The first is simple: max can't be a macro, since it is
a standard function template, defined in <algorithm>. (In the case of
VC++, you need to define NOMINMAX in order to use <algorithm>. But
since it's always preferable to use a standard function when it does the
job, you should practically always add NOMINMAX to your preprocessor
defines, and be done with it.)
The second is even more worrisome, since it shows a lack of
understanding concerning the code. The comments make reference to
"concurrency", and suggest that by using the function, there are no
concurrency issues. This is simply incorrect. If any other thread (or
process, in the case of shared memory) may modify either of the
arguments, the behavior is undefined. In particular, as written, the
compiler likely would read one of the values twice; the arguments are
references. But regardless of how you write it, the compiler is allowed
to reread the values; and even if it doesn't, there's nothing to ensure
that the accesses are atomic. Whoever wrote the comment does not
understand the basic principles of multithreaded code.