I have a custom stack class. Most of the code can be seen here:
Member functions of a templated class, that take a template type as argument
I fill the stack like so:
stack<int> Astack;
Astack.Push(1); Astack.Push(2); Astack.Push(3); Astack.Push(4);
Then I do this:
cout << Astack.Pop() << Astack.Pop() << Astack.Pop() << Astack.Pop() << endl;
and get this: 1234
However, if I do this:
cout << Astack.Pop(); cout << Astack.Pop(); cout << Astack.Pop(); cout << Astack.Pop();
I get this: 4321, which is obviously what I want.
So, what gives?
The order of evaluation of the function calls is unspecified. Your first expression basically boils down to this:
cout << a << b << c << d;
Each of a, b, c, and d is a call to Astack.Pop(). The compiler can generate code that evaluates those calls in any order it chooses.
You should avoid writing expressions that rely on a particular order of evaluation of parts of the expression. In general, it's not safe (and even when it is safe, it is usually quite confusing).
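If you need the pops in a specific order, here is a minimal sketch of the usual fix (reusing Astack from the question): make each Pop() its own statement, so each call is a full expression sequenced before the next.
int a = Astack.Pop(); // 4
int b = Astack.Pop(); // 3
int c = Astack.Pop(); // 2
int d = Astack.Pop(); // 1
cout << a << b << c << d << endl; // always prints 4321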
In the first version, your compiler happened to evaluate the arguments to cout from right to left. You never actually specify which order they should be evaluated in, so here the rightmost call was evaluated first, popping the 4, and so on.
The ISO C++ standard defines something known as unspecified behaviour, and your code snippet is an example of it.
When I use cout to print the value of my variable, I don't get the same answer when I combine two outputs into one statement as when I use two separate statements. Can you help me?
int a = 5;
cout << a << endl;
cout << a-- << endl;
cout << a << a-- << endl;
// it gives me a different answer, why?
// they are basically the same thing
cout << a << a-- << endl;
is translated as:
cout.operator<<(a).operator<<(a--).operator<<(endl);
In such a case, the language does not guarantee which of the arguments will be evaluated first. A compiler is free to choose whichever evaluation order makes sense to it. Please note that the order of the function calls is guaranteed, but not the evaluation order of the function arguments.
If you are able to use C++17, the standard has changed for the << operator. It now guarantees an evaluation order that makes sense, and you will get the expected result.
There are no guarantees of evaluation order prior to C++17 with operator<<, but since C++17, left-to-right order is guaranteed for operator<<.
See eval_order for more details.
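For example (a sketch; the exact pre-C++17 output depends on your compiler):
#include <iostream>

int main() {
    int a = 5;
    // Before C++17, it is unspecified whether a or a-- is evaluated
    // first, so this may print 55 or 45.
    // Since C++17, the operands of << are evaluated left to right,
    // so this is guaranteed to print 55: a is read as 5 first, then
    // a-- yields 5 and decrements a to 4.
    std::cout << a << a-- << std::endl;
}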
I feel I'm asking a very basic question, but I haven't been able to find an answer here or on Google. I recall we were taught this at school, but alas it has vanished over the years.
Why does cout.precision() (std::ios_base::precision()) affect the whole stream when called in the middle of the output list? I know the rule that std::setprecision() should be used to change precision on the fly, and that cout.precision() is going to spoil the output with the value it returns. But what are the mechanics of this? Is it due to buffering? The manuals state these "do the same thing", but empirically I can see that's not entirely true.
MCVE:
const double test = 1.2345;
cout.setf(ios::fixed);
cout.setf(ios::showpoint);
cout.precision(2);
cout << test << endl << cout.precision(3) << test << endl;
// Output:
// 1.234
// 21.234
// after the same init
cout.precision(2);
cout << test << endl << setprecision(3) << test << endl;
// Output:
// 1.23
// 1.234
Is this "implementation specific / not defined by the standard"?
Feel free to mark this as duplicate, for I haven't been able to find it on SO.
The order of function argument evaluation is unspecified. When you call std::cout.precision(n), the precision of std::cout is set at the point this call is evaluated. In your expression
cout << test << endl << cout.precision(3) << test << endl;
the cout.precision(3) is, apparently, evaluated first, which the compiler is entirely allowed to do. Remember that the compiler considers the above statement equivalent to
std::cout.operator<<(test)
.operator<<(std::endl)
.operator<<(std::cout.precision(3))
.operator<<(test)
.operator<< (std::endl);
In practice, it seems your compiler evaluates function arguments right to left; only then are the different shift operators executed. As a result, the precision gets changed before any output is done.
The use of manipulators like std::setprecision(n) avoids relying on the order subexpressions are evaluated: the temporary created from std::setprecision(n) is created whenever it is. The effect is then applied when the appropriate shift operator is called. Since the shift operators have to be called in the appropriate order, the use of the manipulator happens at a known place.
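To see why, here is a hedged sketch of how a setprecision-like manipulator can be written by hand (the name my_precision is invented for this example):
#include <iostream>

// Constructing my_precision has no effect on any stream;
// it merely carries the value.
struct my_precision {
    std::streamsize value;
};

// The precision only changes here, when this shift operator actually
// runs - and the chained shift operators must run left to right,
// because each call needs the stream returned by the previous one.
std::ostream& operator<<(std::ostream& os, my_precision p) {
    os.precision(p.value);
    return os;
}

int main() {
    const double test = 1.2345;
    std::cout.setf(std::ios::fixed);
    std::cout.precision(2);
    std::cout << test << '\n' << my_precision{3} << test << '\n';
    // Output, matching the setprecision version above:
    // 1.23
    // 1.234
}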
The exact time when cout.precision(3) is evaluated in your first example is not defined, because its value serves as an argument for a function call. OTOH with the second example, the time when setprecision(3) is inserted into the stream is very well defined - i.e. after endl is inserted and before test (the 2nd time). Therefore the first version will not produce reliable results (what you get may be different on different implementations), but the second version will.
See this question for a more detailed explanation. (The question there doesn't use operators, but the principle is the same - calling a member function on the return value of another function.)
I came across this rather strange behavior when messing around with code; here's the example:
#include <iostream>
using namespace std;
int print(void);
int main(void)
{
cout << "The Lucky " << print() << endl; //This line
return 0;
}
int print(void)
{
cout << "No : ";
return 3;
}
In my code, the statement with the comment //This line is supposed to print out The Lucky No : 3, but instead it printed No : The Lucky 3. What causes this behavior? Does this have to do with the C++ standard, or does the behavior vary from one compiler to another?
The order of evaluation of arguments to a function is unspecified. Your line looks like this to the compiler:
operator<<(operator<<(operator<<(cout, "The Lucky "), print()), endl);
The primary call in the statement is the one with endl as an argument. It is unspecified whether the second argument, endl, is evaluated first or the larger sub-expression:
operator<<(operator<<(cout, "The Lucky "), print())
And breaking that one down, it is unspecified whether the function print() is called first, or the sub-expression:
operator<<(cout, "The Lucky ")
So, to answer your question:
What causes this behavior? Does this have to do with the C++ standard, or does the behavior vary from one compiler to another?
It could vary from compiler to compiler.
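If you need the intended order, a small sketch of the usual fix: split the expression into separate statements, so each insertion (and the call to print()) is fully sequenced before the next.
cout << "The Lucky ";    // inserted first, guaranteed
cout << print() << endl; // print() writes "No : ", then 3 is inserted
// Output is always: The Lucky No : 3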
Let's call the operator << simply operator.
Now we can write
cout << "The Lucky"
as
operator(cout, "The Lucky")
The result of this operation is cout, and it is passed to next <<, so we can write
operator(operator(cout, "The Lucky"), print())
It is a function invocation with two parameters, and the standard doesn't say anything about the order of their evaluation.
So with some compilers you really may get
The Lucky No : 3
With my compiler, No : The Lucky 3 is the output, which means that the behaviour varies from compiler to compiler.
Possible Duplicate:
Modifying a const through a non-const pointer
I'm studying C++ and am very interested in pointers. I tried to change the value of a constant (my teacher called it a backdoor; please clarify if I'm wrong) like this:
const int i = 0;
const int* pi = &i;
int hackingAddress = (int)pi;
int *hackingPointer = (int*)pi;
*hackingPointer = 1;
cout << "Address:\t" << &i << "\t" << hackingPointer << endl;
cout << "Value: \t" << i << "\t\t" << *hackingPointer << endl;
system("PAUSE");
return 0;
However, the result is very strange. Although the two addresses are the same, the values are different.
How is my code executed? And where exactly are the values 0 and 1 stored?
You've discovered a little thing that C++ developers call undefined behavior. You told the compiler that "i is a constant with the value 0". So when you ask the compiler for the value of i, it tells you that it is 0.
Mucking around with trying to change the value of a constant violates the assumptions made by the compiler (that constants are going to be, well, constant), and so, the compiler is going to generate invalid or inconsistent code.
There are a lot of situations in C++ where it is possible to do something without the compiler catching it as an error, but the result is undefined. And if you do that, then you get results like what you're seeing. The compiler does something weird and unexpected.
Oh, and if your teacher is trying to teach you anything from an example such as this, he's wrong, and you should be very scared.
The only guarantee you get from code like this is this:
the compiler can do literally anything it likes
When you write code, you have an implicit contract with the compiler:
"If I write well-defined C++ code, then you convert it into an executable with the same effects as described by the C++ standard".
When you do something like this, you violate the contract. And then the compiler isn't obliged to follow it either. If you give the compiler code that is not well-defined according to the C++ standard, then it can't, and isn't going to, create an executable which does as the C++ standard specifies.
It seems that the compiler has optimized (inlined the const int value)
cout << "Value: \t" << i << "\t\t" << *hackingPointer << endl;
to
cout << "Value: \t" << 0 << "\t\t" << *(0x0044ff28) << endl;
Anyway, you have still succeeded in changing the value of the memory where i is stored. But do not try this at home :-)
It is not permitted to change the value of a constant; in fact, it's undefined behaviour, so your program could do anything as a result.
In this instance it looks like your compiler optimised the read away at compile time because it knew the value was fixed. Lots of implementations might just crash when you try to change it, but you cannot and should not ever bet or rely upon the result of any undefined behaviour.
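For contrast, a hedged sketch of the one case where casting away const is well-defined: when the pointed-to object was not itself declared const.
#include <iostream>
using namespace std;

int main()
{
    int i = 0;          // NOT declared const
    const int* pi = &i; // merely a read-only view of a non-const object

    int* hackingPointer = const_cast<int*>(pi);
    *hackingPointer = 1; // well-defined: the object i is not const

    cout << i << endl;  // prints 1, no surprises
    return 0;
}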
Our project uses a macro to make logging easy and simple in one-line statements, like so:
DEBUG_LOG(TRACE_LOG_LEVEL, "The X value = " << x << ", pointer = " << *x);
The macro translates the 2nd parameter into stringstream arguments, and sends it off to a regular C++ logger. This works great in practice, as it makes multi-parameter logging statements very concise. However, Scott Meyers has said, in Effective C++ 3rd Edition, "You can get all the efficiency of a macro plus all the predictable behavior and type safety of a regular function by using a template for an inline function" (Item 2). I know there are many issues with macro usage in C++ related to predictable behavior, so I'm trying to eliminate as many macros as possible in our code base.
My logging macro is defined similar to:
#define DEBUG_LOG(aLogLevel, aWhat) { \
    if (isEnabled(aLogLevel)) { \
        std::stringstream outStr; \
        outStr << __FILE__ << "(" << __LINE__ << ") [" << getpid() << "] : " << aWhat; \
        logger::log(aLogLevel, outStr.str()); \
    } \
}
I've tried several times to rewrite this into something that doesn't use macros, including:
inline void DEBUG_LOG(LogLevel aLogLevel, const std::stringstream& aWhat) {
...
}
And...
template<typename WhatT> inline void DEBUG_LOG(LogLevel aLogLevel, WhatT aWhat) {
... }
To no avail; neither of the above two rewrites will compile against logging statements like the one in the first example. Any other ideas? Can this be done? Or is it best to just leave it as a macro?
Logging remains one of the few places where you can't completely do away with macros, as you need call-site information (__LINE__, __FILE__, ...) that isn't available otherwise. See also this question.
You can, however, move the logging logic into a separate function (or object) and provide just the call-site information through a macro. You don't even need a template function for this.
#define DEBUG_LOG(Level, What) \
isEnabled(Level) && scoped_logger(Level, __FILE__, __LINE__).stream() << What
With this, the usage remains the same, which might be a good idea so you don't have to change a load of code. With the &&, you get the same short-circuit behaviour as you do with your if clause.
Now, the scoped_logger will be a RAII object that will actually log what it gets when it's destroyed, that is, in its destructor.
#include <sstream>   // std::stringstream
#include <unistd.h>  // getpid

struct scoped_logger
{
    scoped_logger(LogLevel level, char const* file, unsigned line)
        : _level(level)
    { _ss << file << "(" << line << ") [" << getpid() << "] : "; }

    std::stringstream& stream() { return _ss; }

    // The actual logging happens here, when the object is destroyed.
    ~scoped_logger() { logger::log(_level, _ss.str()); }

private:
    std::stringstream _ss;
    LogLevel _level;
};
Exposing the underlying std::stringstream object saves us the trouble of having to write our own operator<< overloads (which would be silly).
The need to actually expose it through a function is important: if the scoped_logger object is a temporary (an rvalue), so is its std::stringstream member, and only member overloads of operator<< will be found unless we somehow transform it to an lvalue (reference). You can read more about this problem here (note that it has been fixed in C++11 with rvalue stream inserters). This "transformation" is done by calling a member function that simply returns a normal reference to the stream.
Small live example on Ideone.
No, it is not possible to rewrite this exact macro as a template since you are using operators (<<) in the macro, which can't be passed as a template argument or function argument.
We had the same issue and solved it with a class based approach, using a syntax like
DEBUG_LOG(TRACE_LOG_LEVEL) << "The X value = " << x << ", pointer = " << *x << logger::flush;
This would indeed require rewriting the code (by using a regular expression) and introducing some class magic, but it gives the additional benefit of greater flexibility (delayed output, output options per log level (to file or stdout), and things like that).
The problem with converting that particular macro into a function is that things like "The X value = " << x are not valid expressions.
The << operator is left-associative, which means something in the form A << B << C is treated as (A << B) << C. The overloaded insertion operators for iostreams always return a reference to the same stream so you can do more insertions in the same statement. That is, if A is a std::stringstream, since A << B returns A, (A << B) << C; has the same effect as A << B; A << C;.
Now you can pass B << C into a macro just fine. The macro just treats it as a bunch of tokens, and doesn't worry about what they mean until all the substituting is done. At that point, the left-associative rule can kick in. But for any function argument, even if inlined and templated, the compiler needs to figure out what the type of the argument is and how to find its value. If B << C is invalid (because B is neither a stream nor an integer), compiler error. Even if B << C is valid, since function parameters are always evaluated before anything in the invoked function, you'll end up with the behavior A << (B << C), which is not what you want here.
If you're willing to change all the uses of the macro (say, use commas instead of << tokens, or something like #svenihoney's suggestion), there are ways to do something. If not, that macro just can't be treated like a function.
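For completeness, a hedged sketch of the comma-based alternative using C++11 variadic templates (debugLog and DEBUG_LOG2 are invented names; LogLevel, isEnabled, and logger::log are assumed to be the ones from the question):
#include <sstream>

template <typename... Args>
void debugLog(LogLevel aLogLevel, const char* file, unsigned line,
              Args&&... args) {
    if (!isEnabled(aLogLevel)) return;
    std::stringstream outStr;
    outStr << file << "(" << line << ") : ";
    // C++11 trick to insert every argument in order; with C++17 the
    // fold expression (outStr << ... << args) is cleaner.
    int dummy[] = {0, ((void)(outStr << args), 0)...};
    (void)dummy;
    logger::log(aLogLevel, outStr.str());
}

// The macro now only supplies the call-site information:
#define DEBUG_LOG2(Level, ...) debugLog(Level, __FILE__, __LINE__, __VA_ARGS__)

// Call sites use commas instead of <<:
// DEBUG_LOG2(TRACE_LOG_LEVEL, "The X value = ", x, ", pointer = ", *x);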
I'd say there's no harm in this macro though, as long as all the programmers who have to use it would understand why on a line starting with DEBUG_LOG, they might see compiler errors relating to std::stringstream and/or logger::log.
If you keep a macro, check out C++ FAQ answers 39.4 and 39.5 for tricks to avoid a few nasty ways macros like this can surprise you.