I am building a program that needs to run on an ARM.
The processor has plenty of resources to run the program, so this question is not really about this particular processor; it concerns less powerful ones, where resources and computing power are limited.
To print debug information (or even to activate portions of code) I am using a header file where I define macros that I set to true or false, like this:
#define DEBUG_ADCS_OBC true
and in the main program:
if (DEBUG_ADCS_OBC == true) {
    printf("O2A ");
    for (j = 0; j < 50; j++) {
        printf("%x ", buffer_obc[j]);
    }
}
Is this a bad habit? Are there better ways to do this?
In addition, will having these if checks affect performance in a measurable way?
Or is it safe to assume that when the code is compiled, the ifs are somehow removed from the flow, since the comparison is made between two values that cannot change?
Since the expression DEBUG_ADCS_OBC == true can be evaluated at compile time, optimizing compilers will figure out that the branch is either always taken or is always bypassed, and eliminate the condition altogether. Therefore, there is zero runtime cost to the expression when you use an optimized compiler.
If you are compiling with all optimization turned off, use conditional compilation instead. This will do the same thing an optimizing compiler does with a constant expression, but at the preprocessor stage. Hence the compiler will not "see" the conditional even with optimization turned off.
Note 1: Since DEBUG_ADCS_OBC is boolean in meaning, write if (DEBUG_ADCS_OBC) without the == true for a somewhat cleaner look.
Note 2: Rather than defining the value in the body of your program, consider passing it on the command line, for example -DDEBUG_ADCS_OBC=true. This lets you change the debug setting without modifying your source code, simply by manipulating the makefile or one of its options.
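For example, you can give the header a fallback default so the command-line switch stays optional (a minimal sketch; the -D flag syntax is the standard one for GCC and Clang):

/* In the header: provide a default so that -DDEBUG_ADCS_OBC=true on the
   compiler command line can override it without touching the source. */
#ifndef DEBUG_ADCS_OBC
#define DEBUG_ADCS_OBC false
#endif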
The code you are using is evaluated every time your program reaches this line (unless the optimizer removes it). Since every change of DEBUG_ADCS_OBC requires a recompile anyway, you should use #ifdef/#ifndef expressions instead. Their advantage is that they are evaluated only once, by the preprocessor at compile time.
Your code segment could look like the following:
Header:
//Remove this line if debugging should be disabled
#define DEBUG_ADCS_OBC
Source:
#ifdef DEBUG_ADCS_OBC
printf("O2A ");
for (j = 0; j < 50; j++) {
    printf("%x ", buffer_obc[j]);
}
#endif
The problem with getting the compiler to do this is the unnecessary run-time test of a constant expression. An optimising compiler will remove it, but it may equally issue warnings about constant conditional expressions or, when the macro evaluates to false, about unreachable code.
It is not a matter of this being "bad in embedded programming"; it bears little merit in any programming domain.
The following is the more usual idiom. It will not include unreachable code in the final build, and an appropriately configured syntax-highlighting editor or IDE will generally show you which code sections are active and which are not.
#define DEBUG_ADCS_OBC
...
#if defined DEBUG_ADCS_OBC
printf("O2A ");
for (j = 0; j < 50; j++)
{
printf("%x ", buffer_obc[jj]);
}
#endif
I'll add one thing that I didn't see mentioned.
If optimizations are disabled on debug builds, then even if the runtime performance impact is insignificant, the code is still included. As a result, debug builds are usually bigger than release builds.
If you have very limited memory, you can run into a situation where the release build fits in the device memory and the debug build does not.
For this reason I prefer compile-time #if over runtime if. I can keep the memory usage of debug and release builds closer to each other, and it's easier to keep using the debugger at the end of the project.
The optimizer will solve the extra-resources problem, as mentioned in the other replies, but I want to add another point. From the code-readability point of view, this code will be repeated many times, so you can consider creating your own printing macros. Those macros are what should be enclosed by the debug enable/disable macro.
#ifdef DEBUG_ADCS_OBC
#define myCustomPrint(...) printf(__VA_ARGS__) /* your custom printing code */
#else
#define myCustomPrint(...)                     /* expands to nothing */
#endif
This also decreases the probability of the enable/disable macro being forgotten in some file, which would cause a real optimization problem.
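Call sites then stay short, and when debugging is disabled the calls expand to nothing, e.g.:

myCustomPrint("O2A ");
for (j = 0; j < 50; j++) {
    myCustomPrint("%x ", buffer_obc[j]);
}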
Related
I am building a somewhat larger C++ code base than I'm used to. I need both good logging and debugging, at least to the console, and also speed.
Generally, I like to do something like this
// Some header file
bool DEBUG = true;
And then in some other file
if (DEBUG) cout << "Some debugging information" << endl;
The issue with this (among others) is that the branching lowers the speed of the final executable. To fix it, I'd have to go through the files at the end and remove all these lines, and then I couldn't use them again later without saving them to some other file and putting them back in afterwards.
What is the most efficient solution to this quandary? Python decorators provide a nice approach that I'm not certain exists in C++.
Thanks!
The classic way is to make that DEBUG not a variable, but a preprocessor macro. Then you can have two builds: one with the macro defined to 1, the other with it defined to 0 (or not defined at all, depending on how you plan to use it). Then you can either do #ifdef to completely remove the debug code from being seen by the compiler, or just put it into a regular if, the optimizer will take care of removing the branch with the constant conditional.
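A minimal sketch of that approach (assuming the flag is passed as -DDEBUG=1 or -DDEBUG=0 at build time; the fallback below defaults it to off):

#include <iostream>

// Define DEBUG on the compiler command line; default to off when absent.
#ifndef DEBUG
#define DEBUG 0
#endif

int main() {
    // With a constant condition, the optimizer removes the branch entirely;
    // with #if DEBUG instead, the compiler would never even see the statement.
    if (DEBUG) std::cout << "Some debugging information" << std::endl;
    return 0;
}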
Say I have an assert() something like
assert( x < limit );
I took a look at the behaviour of the optimiser in GDC in release and debug builds with the following snippet of code:
uint cxx1( uint x )
{
    assert( x < 10 );
    return x % 10;
}

uint cxx1a( uint x )
in { assert( x < 10 ); }
body
{
    return x % 10;
}

uint cxx2( uint x )
{
    if ( !( x < 10 ))
        assert(0);
    return x % 10;
}
Now when I build in debug mode, the asserts have the very pleasing effect of triggering huge optimisation. GDC gets rid of the horrid modulo code entirely, because the assert's condition tells it the possible range of x. But in release mode the condition is discarded, so all of a sudden the horrid code comes back, and there is no longer any optimisation in cxx1() or even in cxx1a(). It is very ironic that release mode generates far worse code than debug mode. Of course, no one wants the executable code for the if-tests to be present in a release build, as we must lose all that overhead.
Now ideally, I would want to express the condition so as to communicate information to the compiler, regardless of release/debug build, about conditions that may always be assumed true, so that such assumptions can guide optimisation in very powerful ways.
I believe some C++ compilers have something called __assume() or some such (MSVC does, if memory serves). GCC has a __builtin_unreachable() built-in which might be usable to build an assume() feature. Basically, if I could build my own assume() directive, it would assert certain truths about known values or known ranges and expose these to the optimisation passes regardless of release/debug mode, without generating any actual code for the assume() condition in a release build, while in debug mode it would behave exactly like assert().
I tried an experiment, which you see in cxx2. It always triggers the optimisation, so good job there, but even in release mode it generates what is morally debug code for the assume()'s if-condition: a test and a conditional jump to an undefined instruction to halt the process.
Does anyone have any ideas about whether this is solvable? Or do you think this is a useful D compiler fantasy wish-list item?
As far as I know, __builtin_unreachable is the next best replacement for an assume-like function in GCC. In some cases the if condition might still not get optimized out, though: see "Assume" clause in gcc.
The GCC builtins are available in GDC by importing gcc.builtins. Here's an example how to wrap the __builtin_unreachable function:
import gcc.builtins;

void assume()(bool condition)
{
    if (!condition)
        __builtin_unreachable();
}

bool foo(int a)
{
    assume(a > 10);
    return a > 10;
}
There are two interesting details here:
We don't need string mixins or similarly complicated stuff. As long as you compile with -O, GDC will completely optimize the function call away.
For this to work, the assume function must get inlined. Unfortunately, inlining normal functions is not completely supported when assume is in a different module than the calling function. As a workaround we use a template with zero template arguments; this should make sure inlining always works.
You can test and modify this example here:
explore.dgnu.org
Now we (GDC developers) could easily rewrite assert(...) to if (...) __builtin_unreachable() in release mode. But this could break some code, so DMD should implement it first.
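For comparison, the same idea can be built by hand in C or C++ on GCC/Clang (a minimal sketch; ASSUME is a hypothetical name, and __builtin_unreachable is compiler-specific):

#include <cassert>

// Debug builds: check the condition at runtime.
// Release builds (NDEBUG defined): tell the optimizer the condition may be
// assumed true; GCC usually removes the test itself, leaving no code at all.
#ifdef NDEBUG
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
#define ASSUME(cond) assert(cond)
#endif

unsigned cxx3(unsigned x)
{
    ASSUME(x < 10);
    return x % 10; // the range information lets the optimizer drop the division
}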
OK, I don't really know what you want; cxx2 is the solution.
I was browsing through a project and came across this:
if(!StaticAnimatedEntities)
    int esko = esko = 2;
(The type of StaticAnimatedEntities here is a plain unsigned short.)
It struck me as very odd, so I grepped the project for esko and found other similar ifs with nothing but that line inside them, for example this:
if(ItemIDMap.find(ID) != ItemIDMap.end())
    int esko = esko = 2;
(Note that there are no other variables named esko outside those ifs.)
What is the meaning of this cryptic piece of code?
You can sometimes see code like this just to serve as an anchor location to put a breakpoint on in an interactive debugger in order to trap some "unusual" (most often - erroneous) conditions.
Debugger-provided conditional breakpoints are usually very slow and/or primitive, so people often deliberately plan ahead and provide such conditional branches in order to create a compiled-in location for a breakpoint. Such a compiled-in conditional breakpoint does not slow down program execution nearly as much as a debugger-provided conditional breakpoint would.
In many cases such code is surrounded by #ifndef NDEBUG/#endif to prevent it from getting into the production builds. In other cases people just leave it unprotected, believing that optimizing compiler will remove it anyway.
In order for this to work, the code under the if should generate some machine code in debug builds; otherwise it would be impossible to put a breakpoint on it. Different people have different preferences in that regard, but the code virtually always looks weird and meaningless.
It is the meaninglessness of that code that provides programmers with full freedom to write anything they want there. And I'd say that it often becomes a part of each programmer's signature style, a fingerprint of sorts. The guy in question does int esko = esko = 2;, apparently.
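A guarded version of such an anchor might look like this (a sketch based on the snippets above; the (void) cast merely silences unused-variable warnings):

#ifndef NDEBUG
if (!StaticAnimatedEntities)
{
    int esko = 2; // set a breakpoint on this line
    (void)esko;
}
#endif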
class CCtrl
{
    ...Other Members...
    RankCache m_stRankCache;
    uint32 m_uSyncListTime;
};

int CCtrl::UpdateList()
{
    uint32 tNow = GetNowTime();
    for (uint8 i = 0; i < uRankListNum; i++)
    {
        m_stRankCache.Append(i);
    }
    m_uSyncListTime = tNow;
    return 0;
}
Here are two weird things:
When I step into Append(), p this gives 0x7f3f467edfdc, but in UpdateList(), p &m_stRankCache gives 0x7f3f067edfdc; the two pointers are different.
tNow = 1418916316, but after executing m_uSyncListTime = tNow, m_uSyncListTime is still 0.
How could this happen? I've spent a whole day debugging, and I checked my code; there is no pack(1)/pack() mismatch.
The issue is more than likely that you're using your debugger on code that has been optimized. As your comment suggests, the code was compiled with the -O3 flag, which enables aggressive optimization.
Even though you're using gdb, Visual Studio and other debuggers have the same issue: debugging optimized code and having the debugger "work", in the sense that it follows along with the lines in the source code and the variables that have been declared.
A debugger assumes that the lines of the source code match up with the generated assembly code. With optimizations turned on, this can no longer be the case. Code and variables are eliminated, moved, etc. Therefore the lines in the code (including variable declarations) you believe should be there at a certain location may not be there in the final optimized build.
The debugger cannot discern these changes, thus you get erroneous values used for variables, or in some cases, you get "variable doesn't exist" errors reported by your debugger.
It may also serve as a good check to do a simple cout or log of the values in question if there is a problem with the debugging environment. There are situations where even debuggers get things wrong, so a backup verification system (logging, printf() or cout statements, etc.) should be used.
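If you must debug the optimized build itself, one common workaround (my suggestion, not part of the answer above) is to copy the value of interest into a volatile local, which the optimizer is not allowed to eliminate:

#include <cstdint>

void example(std::uint32_t tNow)
{
    // The volatile copy survives -O3, so it stays inspectable in gdb.
    volatile std::uint32_t tNowForDebug = tNow;
    (void)tNowForDebug;
}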
The title might be somewhat confusing, so I'll try to explain.
Is there a preprocessor directive with which I can encapsulate a piece of code, so that if that piece of code contains a compilation error, some other piece of code is compiled instead?
Here is an example to illustrate my motivation:
#compile_if_ok
int a = 5;
a += 6;
int b = 7;
b += 8;
#else
int a = 5;
int b = 7;
a += 6;
b += 8;
#endif
The above example is not the problem I am dealing with, so please do not suggest specific solutions.
UPDATE:
Thank you for all the negative comments down there.
Here is the exact problem, perhaps someone with a little less negative approach will have an answer:
I'm trying to decide during compile-time whether some variable a is an array or a pointer.
I've figured I can use the fact that, unlike a pointer, an array is not a modifiable lvalue, so assigning to it is a compilation error.
So in essence, the following code would yield a compilation error for an array but not for a pointer:
int a[10];
a = (int*)5;
Can I somehow "leverage" this compilation error in order to determine that a is an array and not a pointer, without stopping the compilation process?
Thanks
No.
It's not uncommon for large C++ (and other-language) projects to have a "configuration" stage designed into their build system. It attempts compilation of different snippets of code and generates a set of preprocessor definitions indicating which ones worked, so that the compilation of the project proper can use those definitions in #ifdef/#else/#endif directives to select between alternatives. For many UNIX/Linux software packages, running the "./configure" script coordinates this. You can read about the autoconf tool that helps create such scripts at http://www.gnu.org/software/autoconf/
This is not supported in standard C. However, many command shells make this fairly simple. For example, in bash, you can write a script such as:
#!/bin/bash
# Try to compile the program with Code0 defined.
if cc -o program -DCode0= "$@"; then
    # That worked, do nothing extra. (Need some command here due to bash syntax.)
    /bin/true
else
    # The first compilation failed, try without Code0 defined.
    cc -o program "$@"
fi
./program
Then your source code can test whether Code0 is defined:
#if defined Code0
foo bar;
#else
#include <stdio.h>
int main(void)
{
    printf("Hello, world.\n");
    return 0;
}
#endif
However, there are usually better ways to, in effect, make source code responsive to the environment or the target platform.
On the updated question:
If you're writing C++, use templates...
Specifically, to test the type of a variable you have helpers: std::enable_if, std::is_same, std::is_pointer, etc.
See the type support module: http://en.cppreference.com/w/cpp/types
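For instance, a minimal sketch (assuming C++11) that distinguishes arrays from pointers at compile time:

#include <cstdio>
#include <type_traits>

template <typename T>
void classify(const T&)
{
    // T keeps its array type because the parameter is a reference.
    if (std::is_array<T>::value)
        std::puts("array");
    else if (std::is_pointer<T>::value)
        std::puts("pointer");
}

int main()
{
    int a[10];
    int* p = a;
    classify(a); // prints "array"
    classify(p); // prints "pointer"
    return 0;
}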
C11 _Generic macros might be able to handle this. If not, though, you're screwed in C.
Not in the C++ preprocessor. In C++ you can easily use overload resolution or a template or even expression SFINAE or anything like that to execute a different function depending on if a is an array or not. That is still occurring after preprocessing though.
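A sketch of the overload-resolution version (the array-reference overload binds directly, so it wins over the pointer overload for arrays):

#include <cstddef>
#include <cstdio>

template <typename T, std::size_t N>
void check(T (&)[N]) { std::puts("array"); }   // bound directly for arrays

template <typename T>
void check(T*) { std::puts("pointer"); }       // reached via array-to-pointer decay

int main()
{
    int a[10];
    int* p = a;
    check(a); // "array"
    check(p); // "pointer"
    return 0;
}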
If you need one that is both valid C and valid C++, the best you can do is #ifdef __cplusplus and handle it that way. Their common subset (which is mostly C89) definitely does not have something that can handle this at any stage of compilation.