Does VS2010 pre-calculate values defined by #define? - c++

For Visual Studio 2010, if I define
#define PI 4.0f*atan(1.0f)
when PI is used somewhere later in the code, does the value need to be calculated again, or is 3.1415926... simply plugged in? Thanks.
EDIT:
I heard someone say that the compiler might optimize this by replacing it with 3.1415926..., depending on the compiler.

The #define does a direct text replacement. Because of that, everywhere you have PI it gets replaced with 4.0f*atan(1.0f). I would suspect the compiler optimizes this away during code generation, but the only real way to know is to compile it and check the assembly.
I found a little online tool that takes C++ code and generates the assembly output. If you turn on optimizations, you will see that the code generated to compute PI is gone and it is now just a constant that gets referenced.

#define is a "copy-paste" type of thing. If your code says std::cout << PI; then the compiler pretends you typed std::cout << 4.0f*atan(1.0f);.
The values of defines are not calculated until they're used, and they're theoretically recalculated every time they're used. However, most modern compilers will see std::cout << 4.0f*atan(1.0f); and do that calculation at compile time and will emit assembly for std::cout << 3.14159265f;, so the code is just as fast as if it were precalculated.
Unrelated, #include is also a copy-paste kind of thing, which is why we need include guards.
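As a rough sketch of what that means in practice (whether the folding happens is up to the optimizer, so checking the generated assembly, e.g. with g++ -O2 -S or an /FA listing in Visual C++, is the only way to be sure):
#include <cmath>
#include <iostream>

#define PI (4.0f * std::atan(1.0f))   // same expression as in the question, parenthesised

int main() {
    // After preprocessing, the next line reads:
    //     std::cout << (4.0f * std::atan(1.0f)) << '\n';
    // With optimizations enabled, mainstream compilers fold the expression to
    // the constant 3.14159265f at compile time, so nothing is computed at runtime.
    std::cout << PI << '\n';
}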

When the preprocessor runs, it will replace every instance of PI with 4.0f*atan(1.0f).

Related

How to ensure some code is optimized away?

tl;dr: Can it be ensured somehow (e.g. by writing a unit test) that some things are optimized away, e.g. whole loops?
The usual approach to making sure that something is not included in the production build is wrapping it with #if...#endif. But I prefer to stay with C++ mechanisms instead. Even there, instead of complicated template specializations, I like to keep implementations simple and argue "hey, the compiler will optimize this out anyway".
The context is embedded automotive software (binary size matters) with often poor compilers. They are certified in the sense of safety, but usually not good at optimization.
Example 1: In a container the destruction of elements is typically a loop:
for(size_t i = 0; i<elements; i++)
    buffer[i].~T();
This also works for built-in types such as int, as the standard allows the explicit call of the destructor for any scalar type (C++11 12.4-15). In that case the loop does nothing and should be optimized out. In GCC it is, but in another compiler (Aurix) it was not: I saw a literally empty loop in the disassembly! So that needed a template specialization to fix it, along the lines of the sketch below.
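A minimal sketch of how such a specialization might look (illustrative, not the actual Aurix fix): dispatch on std::is_trivially_destructible (C++11) so the loop is never even instantiated for scalar types.
#include <cstddef>
#include <type_traits>

template <typename T>
void destroy_all_impl(T* buffer, std::size_t elements, std::false_type) {
    for (std::size_t i = 0; i < elements; i++)
        buffer[i].~T();                  // non-trivial destructors: the loop is needed
}

template <typename T>
void destroy_all_impl(T*, std::size_t, std::true_type) {
    // trivially destructible (int, float, ...): no loop is generated at all
}

template <typename T>
void destroy_all(T* buffer, std::size_t elements) {
    destroy_all_impl(buffer, elements, std::is_trivially_destructible<T>{});
}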
Example 2: Code which is intended only for debugging, profiling, fault injection, etc.:
constexpr bool isDebugging = false; // somehow a global flag

void foo(int arg) {
    if( isDebugging ) {
        // Albeit 'dead' section, it may not appear in production binary!
        // (size, security, safety...)
        // 'if constexpr..' not an option (C++11)
        std::cout << "Arg was " << arg << std::endl;
    }
    // normal code here...
}
I can look at the disassembly, sure. But as this is upstream platform software, it's hard to control all the targets, compilers and options one might use. The fear is that, for whatever reason, a downstream project ends up with code bloat or a performance issue.
Bottom line: Is it possible to write the software in a way that certain code is known to be optimized away safely, as #if would do? Or a unit test which fails if the optimization is not as expected?
[Timing tests come to mind for the first problem, but being on bare metal I don't have convenient tools for that yet.]
There may be a more elegant way, and it's not a unit test, but if you're just looking for that particular string, and you can make it unique,
strings $COMPILED_BINARY | grep "Arg was"
should show you whether the string is being included.
if constexpr is the canonical C++ expression (since C++17) for this kind of test.
#include <iostream>

constexpr bool DEBUG = /*...*/;

int main() {
    if constexpr(DEBUG) {
        std::cerr << "We are in debugging mode!" << std::endl;
    }
}
If DEBUG is false, then the code that prints to the console is not generated at all. So if you have things like log statements that you need for checking the behaviour of your code, but which you don't want in production code, you can hide them inside if constexpr blocks to eliminate the code entirely once it moves to production.
Looking at your question, I see several (sub-)questions in it that require an answer. Not all answers might be possible with your bare-metal compilers as hardware vendors don't care that much about C++.
The first question is: how do I write code in a way that I'm sure it gets optimized? The obvious answer here is to put everything in a single compilation unit so the caller can see the implementation.
The second question is: how can I force a compiler to optimize? Here constexpr is a blessing. Depending on whether you have support for C++11, C++14, C++17 or even the upcoming C++20, you'll get different feature sets for what you can do in a constexpr function. For example:
constexpr char c = std::string_view{"my_very_long_string"}[7];
With the code above, c is defined as a constexpr variable. Because you apply constexpr to the variable, you require several things:
Your compiler should optimize the code so the value of c is known at compile time. This even holds true for -O0 builds!
All functions used to calculate c are constexpr and available (which, as a result, enforces the behaviour asked about in the first question).
No undefined behaviour is allowed to be triggered in the calculation of c. (For the given value)
The downside is that your input needs to be known at compile time.
C++17 also provides if constexpr, which has similar requirements: the condition needs to be evaluated at compile time. The result is that the discarded branch of the code is not compiled into the program at all (it can even contain constructs that don't work for the type you are using), as the sketch below illustrates.
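For illustration (a hypothetical helper, not from the question), the discarded branch below would not even compile for most types, yet the template is fine because if constexpr never instantiates it:
#include <cstddef>
#include <string>
#include <type_traits>

// Sketch: the std::string branch is discarded for every other T and never instantiated.
template <typename T>
std::size_t length_of(const T& value) {
    if constexpr (std::is_same_v<T, std::string>) {
        return value.size();   // only valid when T is std::string
    } else {
        return sizeof(T);      // fallback for all other types
    }
}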
Which then brings us to the question: how do I ensure sufficient optimization for my program to run fast enough, even if my compiler is not well behaved? Here the only relevant answer is: create benchmarks and compare the results. Take the effort to set up a CI job that automates this for you (and yes, you can even use external hardware, although that is not easy). In the end, you have some requirements: handling A should take less than X seconds. Do A several times and time it, for example as sketched below. Even if the benchmarks don't cover everything, as long as the timings stay within the requirements, it's fine.
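A minimal sketch of such a timing check, with illustrative names and assuming a hosted environment (on bare metal a hardware cycle counter would take the place of std::chrono):
#include <chrono>

// Returns true if `work` stays within `budget` per call, averaged over `repetitions`.
template <typename F>
bool runs_within(F&& work, std::chrono::microseconds budget, int repetitions = 1000) {
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < repetitions; i++)
        work();
    const auto elapsed = std::chrono::steady_clock::now() - start;
    return elapsed / repetitions <= budget;
}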
Note: As this is about debugging, you can most likely track the size of the executable as well. As soon as you start using streams, you get a lot of conversions to string, and your executable size will grow. (And you'll find this a blessing, as you will immediately spot commits which add 10% to the image size.)
And then the final question: my compiler is buggy and doesn't meet my requirements. Here the only answer is: replace it. In the end, you can use any compiler to compile your code for bare metal, as long as the linker scripts work. If you need a start, C++Now 2018: Michael Caisse “Modern C++ in Embedded Systems” gives you a very good idea of what you need in order to use a different compiler (like a recent Clang or GCC, against which you can even log bugs if the optimization isn't good enough).
Insert a reference to external data or an external function into the block that should be verified to be optimised away. Like this:
extern void nop();

constexpr bool isDebugging = false; // somehow a global flag

void foo(int arg) {
    if( isDebugging ) {
        nop();
        std::cout << "Arg was " << arg << std::endl; // may not appear in production binary!
    }
    // normal code here...
}
In debug builds, link with an implementation of nop() in an extra compilation unit nop.cpp:
void nop() {}
In release builds, don't provide an implementation.
Release builds will only link if the optimisable code has been eliminated.
(Credit for this idea: kisch.)
Here's another nice solution using inline assembly.
This uses assembler directives only, so it might even be kind of portable (checked with clang).
constexpr bool isDebugging = false; // somehow a global flag

void foo(int arg) {
    if( isDebugging ) {
        asm(".globl _marker\n_marker:\n");
        std::cout << "Arg was " << arg << std::endl; // may not appear in production binary!
    }
    // normal code here...
}
This would leave an exported linker symbol in the compiled executable, if the code isn't optimised away. You can check for this symbol using nm(1).
clang can even stop the compilation right away:
constexpr bool isDebugging = false; // somehow a global flag

void foo(int arg) {
    if( isDebugging ) {
        asm("_marker=1\n");
        std::cout << "Arg was " << arg << std::endl; // may not appear in production binary!
    }
    asm volatile (
        ".ifdef _marker\n"
        ".err \"code not optimised away\"\n"
        ".endif\n"
    );
    // normal code here...
}
This is not an answer to "How to ensure some code is optimized away?" but to your summary line "Can a unit test be written that e.g. whole loops are optimized away?".
First, the answer depends on how widely you draw the scope of unit testing: if you count performance tests, you might have a chance.
If in contrast you understand unit-testing as a way to test the functional behaviour of the code, you don't. For one thing, optimizations (if the compiler works correctly) shall not change the behaviour of standard-conforming code.
With incorrect code (code that has undefined behaviour) optimizers can do what they want. (Well, for code with undefined behaviour the compiler can do so in the non-optimizing case too, but sometimes only the deeper analyses performed during optimization make it possible for the compiler to detect that some code has undefined behaviour.) Thus, if you write unit tests for some piece of code with undefined behaviour, the test results may differ when you run the tests with and without optimization. But, strictly speaking, this only tells you that the compiler translated the code both times in a different way - it does not guarantee you that the code is optimized in the way you want it to be.
Here's another different way that also covers the first example.
You can verify (at runtime) that the code has been eliminated, by comparing two labels placed around it.
This relies on the GCC extension "Labels as Values" https://gcc.gnu.org/onlinedocs/gcc/Labels-as-Values.html
before:
    for(size_t i = 0; i<elements; i++)
        buffer[i].~T();
behind:
    if (intptr_t(&&behind) != intptr_t(&&before)) abort();
It would be nice if you could check this in a static_assert(), but sadly the difference of &&label expressions is not accepted as compile-time constant.
GCC insists on inserting a runtime comparison, even though both labels are in fact at the same address.
Interestingly, if you compare the addresses (type void*) directly, without casting them to intptr_t, GCC falsely optimises away the if() as "always true", whereas clang correctly optimises away the complete if() as "always false", even at -O1.

What's the simplest "unused" code that won't be optimized out?

Very frequently, I want to test the value of a variable in a function via a breakpoint. In many cases, that variable never actually gets referenced by any code that "does stuff", but that doesn't mean I don't still want to see it. Unfortunately, the optimizer is working against me and simply removing all such code when compiling, so I have to come up with convoluted grossness to fool the compiler into thinking those values actually matter so they don't get optimized away. I don't want to turn the optimizer off, as it's doing important stuff in other places, but just for this one block of code I'd like to temporarily disable it for the sake of debugging.
Code that produces observable behavior fits the requirements by definition. For example, printf("").
Access to a volatile variable also formally constitutes observable behavior, although I wouldn't be surprised if some compilers still discarded "unnecessary" volatile variables.
For this reason, a call to an I/O function appears to be the best candidate to me.
You can try the volatile keyword. Some intro is at http://en.wikipedia.org/wiki/Volatile_variable .
Generally speaking, the volatile keyword is intended to prevent the compiler from applying any optimizations on the code that assume values of variables cannot change "on their own."
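A small hypothetical sketch of how that is typically used for debugging (the function and variable names are illustrative):
void compute(int a, int b) {
    // volatile forces the compiler to keep the store, so the value stays
    // visible at a breakpoint even though it is never otherwise used.
    volatile int debug_sum = a + b;
    (void)debug_sum;
    // ... rest of the function ...
}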
Have you tried
#pragma optimize("",off)
?
MSVS specific I think - http://msdn.microsoft.com/en-us/library/chh3fb0k(v=vs.80).aspx
You do not specify your compiler, so I will just add that I have been using the volatile qualifier on any variables I use specifically for debugging. In my compiler (Embarcadero RAD Studio C++ Builder) I have been using this for a couple of years, and not once has such a variable been optimized out. Maybe you don't use this compiler, but if you do, I can say volatile has certainly always worked for me.
Here's an example of a trick I've used:
#include <ctime>
#include <iostream>

int main() {
    std::cout << "before\n";
    if (std::time(NULL) == 0) {
        std::cout << "This should not appear\n";
    }
    std::cout << "after\n";
}
The time() call should always return a positive value, but the compiler has no way of knowing that.
If the only purpose of the variable is to be viewed while in a breakpoint with a debugger, you can make the variable global. You could, for instance, maintain a global buffer:
#include <cstring>

#ifdef DEBUG
char dbg_buffer[512];

template <typename T>
void poke_dbg(const T& t) {
    std::memcpy(dbg_buffer, &t, sizeof(T));
}
#else
#define poke_dbg(x)
#endif
Then during debugging, you can inspect the contents of the dbg_buffer (with an appropriate cast if desired).

Alternatives to using "#define" in C++? Why is it frowned upon?

I have been developing C++ for less than a year, but in that time, I have heard multiple people talk about how horrible #define is. Now, I realize that it is interpreted by the preprocessor instead of the compiler, and thus, cannot be debugged, but is this really that bad?
Here is an example (untested code, but you get the general idea):
#define VERSION "1.2"

#include <string>

class Foo {
public:
    std::string getVersion() { return std::string("The current version is ") + VERSION; }
};
Why is this code bad?
Is there an alternative to using #define?
Why is this code bad?
Because VERSION can be overwritten and the compiler won't tell you.
Is there an alternative to using #define?
const char * VERSION = "1.2";
or
const std::string VERSION = "1.2";
The real problem is that defines are handled by a different tool from the rest of the language (the preprocessor). As a consequence, the compiler doesn’t know about it, and cannot help you when something goes wrong – such as reuse of a preprocessor name.
Consider the case of max which is sometimes implemented as a macro. As a consequence, you cannot use the identifier max anywhere in your code. Anywhere. But the compiler won’t tell you. Instead, your code will go horribly wrong and you have no idea why.
Now, with some care this problem can be minimised (if not completely eliminated). But for most uses of #define there are better alternatives anyway so the cost/benefit calculation becomes skewed: slight disadvantage for no benefit whatsoever. Why use a defective feature when it offers no advantage?
So here is a very simple diagram:
Need a constant? Use a constant (not a define)
Need a function? Use a function (not a define)
Need something that cannot be modelled using a constant or a function? Use a define, but do it properly.
Doing it “properly” is an art in itself but there are a few easy guidelines:
Use a unique name. All capitals, always prefixed by a unique library identifier. max? Out. VERSION? Out. Instead, use MY_COOL_LIBRARY_MAX and MY_COOL_LIBRARY_VERSION. For instance, Boost libraries, big users of macros, always use macros starting with BOOST_<LIBRARY_NAME>_.
Beware of evaluation. In effect, a parameter in a macro is just text that is replaced. As a consequence, #define MY_LIB_MULTIPLY(x) x * x is broken: it could be used as MY_LIB_MULTIPLY(2 + 5), resulting in 2 + 5 * 2 + 5. Not what we wanted. To guard against this, always parenthesise all uses of the arguments (unless you know exactly what you’re doing – spoiler: you probably don’t; even experts get this wrong alarmingly often).
The correct version of this macro would be:
#define MY_LIB_MULTIPLY(x) ((x) * (x))
But there are still plenty of ways of getting macros horribly wrong, and, to reiterate, the compiler won’t help you here.
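To make the difference concrete (illustrative names, not from the original answer):
#include <iostream>

#define MY_LIB_MULTIPLY_BROKEN(x) x * x       // unparenthesised
#define MY_LIB_MULTIPLY(x) ((x) * (x))        // parenthesised

int main() {
    std::cout << MY_LIB_MULTIPLY_BROKEN(2 + 5) << '\n'; // expands to 2 + 5 * 2 + 5, prints 17
    std::cout << MY_LIB_MULTIPLY(2 + 5) << '\n';        // expands to ((2 + 5) * (2 + 5)), prints 49
}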
#define isn't inherently bad, it's just easy to abuse. For something like a version string it works fine, although a const char* would be better, but many programmers use it for much more than that. Using #define as a typedef for example is silly when, in most cases, a typedef would be better. So there's nothing wrong with #define statements, and some things can't be done without them. They have to be evaluated on a case by case basis. If you can figure out a way to solve a problem without using the preprocessor, you should do it.
I would not use #define to define a constant; use the static keyword, or better yet:
const int kMajorVer = 1;
const int kMinorVer = 2;
OR
const std::string kVersion = "1.2";
Herb Sutter has an excellent article here detailing why #define is bad, with some examples of cases where there is really no other way to achieve the same thing: http://www.gotw.ca/gotw/032.htm.
Basically, like with many things, it's fine so long as you use it correctly, but it is easy to abuse, and macro errors are particularly cryptic and a bugger to debug.
I personally use them for conditional debug code and also variant data representations, which is detailed at the end of the Sutter article.
In general the preprocessor is bad because it creates a two-pass compilation process that is unsafe, creates difficult-to-decode error messages, and can lead to hard-to-read code. You should not use it if possible:
const char* VERSION = "1.2";
However there are cases where it is impossible to do what you want to do without the preprocessor:
#define Log(x) cout << #x << " = " << (x) << endl;
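For instance, the stringising operator #x in that Log macro has no non-macro equivalent; a quick usage sketch (hypothetical main, reusing the macro above):
#include <iostream>
using namespace std;

#define Log(x) cout << #x << " = " << (x) << endl;

int main() {
    int answer = 42;
    Log(answer + 1);   // prints: answer + 1 = 43
}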

How are macros handled by preprocessor?

I am reading Efficient C++ (an older edition) and have some doubts.
Here, for example, it says:
When you do something like this
#define ASPECT_RATIO 1.653
the symbolic name ASPECT_RATIO may never be seen by the compiler; it may be removed by the preprocessor before the source code ever gets compiled. As a result, ASPECT_RATIO may never get entered into the symbol table. It can be confusing if you get an error during compilation involving the constant, because the error message may refer to 1.653 and not ASPECT_RATIO.
I don't understand this paragraph. How can anything be removed by the preprocessor, just like that? What could be the reasons, and how feasible is this in the real world?
Thanks
I don't understand this paragraph. How can anything be removed by the preprocessor, just like that? What could be the reasons, and how feasible is this in the real world?
Basically, what it describes is exactly how the C and C++ preprocessor works. The reason is to replace macros/constants (those made with the #define directive) with their actual values, instead of repeating the same values over and over again. In C++ it is considered bad style to use C-style macros, but they're supported for C compatibility.
The preprocessor, as the name suggests, runs prior to the actual compilation, and basically changes the source code as directed by the preprocessor directives (those starting with #). This includes replacing macros with their values, including header files as directed by the #include directive, and so on.
This is used in order to avoid code repetitions, magic numbers, to share interfaces (header files) and many other useful things.
It's simply a global search and replace of "ASPECT_RATIO" with "1.653" in the file before passing it to the compiler
That's why macros are so dangerous. If you have #define max 123 and a variable declaration int max = 100, the compiler will see int 123 = 100 and you will get a confusing error message.
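Spelled out, that scenario looks like this:
#define max 123

int max = 100;   // after preprocessing this reads: int 123 = 100;
                 // the resulting error message mentions 123, not max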
The pre-processor will replace all instances of the token ASPECT_RATIO that appear in the code with the actual token 1.653 ... thus the compiler will never see the token ASPECT_RATIO. By the time it compiles the code, it only sees the literal token 1.653 that was substituted in by the pre-processor.
Basically the "problem" you will encounter with this approach is that ASPECT_RATIO will not be seen as a symbol by the compiler, thus in a debugger, etc., you can't query the value ASPECT_RATIO as if it were a variable. It's not a value that will have a memory address like a static const int may have (I say "may", because an optimizing compiler may decide to act like the pre-processor and optimize out the need for an explicit memory address to store the constant value, instead simply substituting the literal value wherever it appears in the code). In the case of a larger function macro, it also won't have an instruction address like an actual C/C++ function will have, thus you can't set break-points inside a function macro. But in a more general sense I'm not sure I would call this a "problem" unless you were intending to use the macro as a debug symbol, and/or set debugging break-points inside your macro. Otherwise the macro is doing its job.
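A small sketch of the contrast, with hypothetical names:
#define ASPECT_RATIO_MACRO 1.653           // gone before the compiler runs: no symbol, no address

static const double kAspectRatio = 1.653;  // a named C++ object (its address may still be optimized out)

double scale(double width) {
    // A debugger can usually show kAspectRatio by name; for the macro it only
    // ever sees the literal 1.653 that was substituted into the code.
    return width / kAspectRatio;
}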

Why is it that an int in C++ that isn't initialized (then used) doesn't cause an error?

I am new to C++ (just starting). I come from a Java background and I was trying out the following piece of code that would sum the numbers between 1 and 10 (inclusive) and then print out the sum:
/*
 * File: main.cpp
 * Author: omarestrella
 *
 * Created on June 7, 2010, 8:02 PM
 */

#include <cstdlib>
#include <iostream>

using namespace std;

int main() {
    int sum;
    for(int x = 1; x <= 10; x++) {
        sum += x;
    }
    cout << "The sum is: " << sum << endl;
    return 0;
}
When I ran it, it kept printing 32822 for the sum. I knew the answer was supposed to be 55 and realized that it was printing the max value of a short (32767) plus 55. Changing
int sum;
to
int sum = 0;
would work (as it should, since the variable needs to be initialized!). Why does this behavior happen, though? Why doesn't the compiler warn you about something like this? I know Java screams at you when something isn't initialized.
Thank you.
Edit:
I'm on Mac OS X and am using g++. Here is the output from g++ --version:
nom24837c:~ omarestrella$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5646)
Reading from an uninitialized variable results in undefined behavior and the compiler isn't required to diagnose the error.
Note that most modern compilers will warn you if you attempt to read an uninitialized variable. With gcc you can use the -Wuninitialized flag to enable that warning; Visual C++ will warn even at warning level 1.
Because it's not required by the C++ standard. The variable will just take whatever value is currently sitting in the memory it's assigned. This saves operations, which can sometimes be unnecessary because the variable will be assigned a value later anyway.
It's interesting to note, and very important for Java/.Net programmers to note when switching to C/C++: a program written in C++ is native and machine-level. It is not running on a VM or some other sort of framework. It is a collection of raw operations (for the most part). You do not have a virtual machine running in the background checking your variables and catching exceptions or segfaults for you. This is a big difference which can lead to a lot of confusion about the way C++ handles variables and memory, as opposed to Java or a .Net language. Hell, in .Net all your integers are implicitly initialised to 0!
Consider this code fragment:
int x;
if ( some_complicated_condition ) {
    x = foo;
} else if ( another_condition ) {
    // ...
    if ( yet_another_condition ) x = bar;
} else {
    return;
}
printf("%d\n", x);
Is x used uninitialized? How do you know? What if there are preconditions to the code?
These are hard questions to answer automatically, and enforcing initialization might be inefficient in some small way.
In point of fact modern compilers do a pretty good job of static analysis and can often tell if a variable is or might be used uninitialized, and they generally warn you in that case (at least if you turn the warning level up high enough). But C++ follows closely in the C tradition of expecting the programmer to know what he or she is doing.
It's how the spec is defined: uninitialized variables have no guarantees, and I believe the reason has to do with optimizations (though I may be wrong here...).
Many compilers will warn you, depending on the warning level used. I know by default VC++ 2010 warns for this case.
What compiler are you using? There is surely a warning level you can turn on via a command-line switch that would warn you of this. Try compiling with /W3 or /W4 on Windows and you'll get warning C4700 or C6001.
This is because C++ was designed as a superset of C, to allow easy upgrading of existing code. C works that way because it dates back to the 70s when CPU cycles were rare and precious things, so weren't wasted initialising variables when it might not be necessary (also, programmers were trusted to know that they'd have to do it themselves).
Obviously that wasn't really the case once Java appeared, so they found it a better tradeoff to spend a few CPU cycles to avoid that class of bugs. As others have noted though, modern C or C++ compilers will normally have warnings for this kind of thing.
Because C developers care about speed. Uselessly initializing a variable is a crime against performance.
Detecting an uninitialized variable is a Quality-of-Implementation (QoI) issue. It is not mandatory, since the language standard does not require it.
Most compilers I know will actually warn you about the potential problem with an uninitialized variable at compile time. On top of that, compilers like MS Visual Studio 2005 will actually trap the use of an uninitialized variable at run time in debug builds.
So, what compiler are you using?
Well, it depends on what compiler you use. Use a smart one. :)
http://msdn.microsoft.com/en-us/library/axhfhh6x.aspx
Initialising variables is one of the most important tenets of C/C++. Any type without a constructor should be initialised, period. The reason for the compiler not enforcing this is largely historical: it stems from the fact that sometimes it's not necessary to initialise something, and it would be wasteful to do so.
These days this sort of optimisation is best left to the compiler, and it's a good habit to always initialise everything. You can get the compiler to generate a warning for you, as others suggested. You can also make it treat warnings as errors to further simulate javac behaviour, for example as shown below.
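For example (the exact flags depend on your toolchain; these are the usual GCC/Clang and MSVC spellings):
g++ -Wall -Wuninitialized -Werror main.cpp
cl /W4 /WX main.cpp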
C++ is not a purely object-oriented programming language.
In C++ there is no implicit memory management or variable initialization.
If a variable is not initialized in C++, it may take any value at runtime, because reading an uninitialized variable is undefined behaviour and the compiler is not required to diagnose it.