Preprocessor Conditionals - C++

I was wondering if it would be possible to have a define whose value changes at some point in the code and to use it in a conditional. Basically something like this:
//////////////////////////////////////////// SomeFile.cpp
#define SHUTDOWN false
while(window->isOpen())
{
if(SHUTDOWN)
window->close();
// Rest of the main loop
}
//////////////////////////////////////////// SomeOtherFile.cpp
if(Escape.isPressed())
{
#undef SHUTDOWN
#define SHUTDOWN true
}
Thus causing the app to close. If that's not possible, would having a function like
RenderWindow* getWindow()
{
return window;
}
and then calling
if(Escape.isPressed())
getWindow()->close();
be the best way to do it? I'd rather not go that route because the classes that handle the key event are members of the class controlling the main loop and the window, so I'd have to give those smaller classes pointers to the containing class just to call getWindow(), which seems like a more complicated approach. But if I can't do it with preprocessor directives, I'll just have to use pointers to the parent class.

You misunderstand the use of preprocessor symbols. Think of preprocessor code as separate code that is entangled with your C/C++ code. At compile time the preprocessor code is executed, and that execution results in source code which the compiler then converts into binary. You can't use preprocessor symbols at runtime because by then they no longer exist (they were compiled away). It sounds like you want a globally scoped variable for what you are doing.

A pre-processor conditional is only evaluated during the first, pre-processing, phase of compilation.
Once that phase has completed, every instance of the macro (the #define value) has effectively been replaced by its value.
When you write
if (SHUTDOWN)
then what the compiler sees is the value of "SHUTDOWN" at the time that pre-processing finished and compilation proper began. So
#define SHUTDOWN true
if (SHUTDOWN)
compiles to
if (true)
whereas
#define SHUTDOWN
if (SHUTDOWN)
fails to compile, as it becomes
if ()
If you were to #define SHUTDOWN to resolve to a variable, then yes, this is absolutely possible, because variables can change at run time. But constants don't.
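For illustration, a minimal sketch of that variable-based approach, mirroring the question's pseudo-code (the header name AppState.h and the name shutdown_requested are made up; window and Escape come from the question):
// AppState.h (hypothetical shared header)
extern bool shutdown_requested;

// SomeFile.cpp
#include "AppState.h"
bool shutdown_requested = false;   // the single definition of the flag

while (window->isOpen())
{
    if (shutdown_requested)
        window->close();
    // Rest of the main loop
}

// SomeOtherFile.cpp
#include "AppState.h"
if (Escape.isPressed())
    shutdown_requested = true;     // a runtime assignment, not a #define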

Related

Two versions of a code based on a #define

I'm working with a microcontroller, writing in C/C++, and I want to separate the stuff that's supposed to work only in the transmitter from the stuff that will work for the receiver. For this I thought about having a #define DEVICE 0, with 0 for the transmitter and 1 for the receiver.
How would I use this define to cancel other defines? I have multiple defines that should only work on one of the devices.
You have the following directives:
#if (DEVICE == 0)
...
#else
...
#endif
This makes sure the two code paths are mutually exclusive.
However, I recommend doing it dynamically: have a global boolean attribute or function parameter and execute code according to its value.
The unused code will be optimized out for a given target (even with the lowest optimization setting).
A single compilation is enough to check for compilation errors, instead of two compilations with a changed define.
Bear in mind you will still need a define to set the boolean value so you can test every case, but this can be exercised automatically with dynamic code analysis, which is not possible with a #define-only implementation.
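As a rough sketch of the dynamic variant (DEVICE, transmit_frame and receive_frame are placeholders standing in for your own code; constexpr assumes at least C++11):
#define DEVICE 0   // 0 = transmitter, 1 = receiver

constexpr bool is_transmitter = (DEVICE == 0);

void transmit_frame();   // placeholder: transmitter-only code
void receive_frame();    // placeholder: receiver-only code

void run()
{
    if (is_transmitter)       // compile-time constant: the other branch is
        transmit_frame();     // trivially optimized out, but still compiled
    else
        receive_frame();
}
Because both branches are always compiled, a single build is enough to catch compilation errors in either path.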

C++ Compile time check if a microcontroller pin is already initialized from another source file

Typically a microcontroller pin can be identified by its port number and pin number. Both are compile-time constants.
A pin can have multiple functions; in a big project, multiple source files can initialize the same pin and break functionality implemented in another module.
I want to implement a compile-time list which is initially empty; each time a pin is initialized, it checks whether the pin is already present in that list. If it is present, it gives a static assert; otherwise it inserts the pin information into the list. The list is not required at run time.
I don't have enough knowledge of metaprogramming. It would be great if someone could point me in the right direction to implement this. If there is already a library for this kind of purpose, please provide the links.
What you want is not possible. C++ metaprogramming does not have state; it's more akin to a functional language than an imperative one. So you cannot have a mutable list. The only state can be introduced by creating new types, but there's no available syntax to check whether a particular non-nested name is declared or defined.
Multiple source files (compilation units) are compiled independently, so there's certainly no "global state", which makes it even less feasible.
Also, note that what you are doing is inherently run-time. The compiler has no tools to check if you are calling the initialization function twice. These calls might be hidden behind some run-time if-else decisions. And simply writing HAL_GPIO_Init(); no matter how many times in the whole program is not an error.
The simplest thing I can think of is creating a C++ singleton class that is responsible for communicating with pins. You can have a dedicated int init_GPIO method using error codes, or exceptions if they are enabled. Instead of static_assert you will have to rely on tests: that the singleton works correctly and that the return value of init_GPIO is not ignored.
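A rough sketch of such a singleton, with a hypothetical error enum and the actual HAL call left as a comment (all names here are illustrative, not an existing API):
#include <cstdint>
#include <set>
#include <utility>

enum class gpio_error { ok, already_initialized };

class GpioManager {
public:
    static GpioManager& instance() {
        static GpioManager mgr;   // constructed once, on first use
        return mgr;
    }

    gpio_error init_GPIO(std::uint8_t port, std::uint8_t pin) {
        if (!initialized_.insert({port, pin}).second)
            return gpio_error::already_initialized;  // pin was claimed before
        // call the real HAL initialization for (port, pin) here
        return gpio_error::ok;
    }

private:
    GpioManager() = default;
    std::set<std::pair<std::uint8_t, std::uint8_t>> initialized_;
};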
If you really do not want to bother with singleton, this function template works too:
template<std::size_t GPIO, std::size_t port> int GPIO_init(GPIO_InitStruct& s){
static bool initialized=false;
if(initialized) return <already_called>;
initialized=true;
//Assuming that you want to propagate the return value.
return HAL_GPIO_Init(GPIO, port, s);// Replace with the correct call.
}
If you require thread-safe initialization then use:
#include <mutex> // for std::once_flag and std::call_once
template<std::size_t GPIO, std::size_t port> int GPIO_init(GPIO_InitStruct& s){
static std::once_flag initialized;
int ret_val = <already_called>;
auto call = [&](){ ret_val = HAL_GPIO_Init(GPIO, port, s); };
std::call_once(initialized, call);
return ret_val;
}
Assuming that every driver or HAL has a header file, and there's a main.cpp which includes all those headers, then you can do this with the pre-processor.
Optionally, make a project-wide header "pintype.h" with an enum such as this:
// pintype.h
typedef enum
{
PIN_GPIO,
PIN_PWM,
PIN_ADC,
PIN_UART,
...
} pin_t;
Then for every header file, write a pre-processor check, for example:
// pwm.h, header of the pwm driver or HAL
#include "pintype.h"
#ifdef PIN9
#error Pin 9 already taken
#else
#define PIN9 PIN_PWM
#endif
The #error is strictly speaking not needed: if each header simply wrote its #define unconditionally, a conflicting macro redefinition would already draw a compiler diagnostic in the same translation unit (that of main.cpp).
When the developer writing the driver gets an error message, they can go to the pre-processor definition of the pin and find out which other module in the project that has claimed it already, without digging inside the internal implementation of that driver.
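For illustration, a second, hypothetical driver header claiming the same pin would then stop the build:
// adc.h, another driver header (hypothetical)
#include "pintype.h"

#ifdef PIN9
#error Pin 9 already taken
#else
#define PIN9 PIN_ADC
#endif
// Including both pwm.h and adc.h from main.cpp now triggers the #error in
// whichever header is included second.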

Defining a code section within which a different code path is executed

Is it possible to define a section or scope in my code within which a different code path is executed, without using a global or passed-down state variable?
For debugging purposes, I want to be able to surround a section of faulty code with a scope or #define to temporarily switch on pre-defined debugging behavior within this section, e.g. use debug data, a more precise data type, an already validated algorithm, … This needs to work in a multi-threaded application in which multiple threads will likely execute the same shared code concurrently, but only some of them have called this code from within the defined section.
For example, here is some pseudo-code that is not working, but might illustrate what I'd like to do. A static expensive function that is called from several places concurrently:
Result Algorithms::foo()
{
#ifdef DEBUG_SECTION
return Algorithms::algorithmPrecise(dataPrecise);
#else
return Algorithms::algorithmOptimized(dataOptimized);
#endif
}
Three classes of which instances need to be updated frequently:
Result A::update()
{
return Algorithms::foo();
}
Result B::update()
{
Result result;
#define DEBUG_SECTION
...
result = a.update() + 1337;
...
#undef DEBUG_SECTION
return result;
}
Result C::update()
{
return a.update();
}
As you can see, class A directly calls foo(), whereas in class B, foo() is called indirectly by calling a.update() and some other stuff. Let us assume B::update() returns a wrong result, so I want to be able to use the debug implementation of foo() only from this location. In C::update(), the optimized version should still be used.
My conceptual idea is to define a DEBUG_SECTION around the faulty code which would use the debug implementation at this location. This, however, does not work in practice, as Algorithms::foo() is compiled once with DEBUG_SECTION not being defined. In my application, Algorithms, A, B, and C are located in separate libraries.
What I want is that, within a section defined in the code, a different code path is executed inside shared code. However, outside of this section I still want the original code to execute, which at runtime will happen concurrently, so I cannot simply use a state variable. I could add a debugFlag parameter to each call within the DEBUG_SECTION that is passed down in each recursive call and finally provided to Algorithms::foo(), but this is extremely prone to errors (you must not miss any calls, the section could be quite huge, spread over different files, …) and quite messy in a larger system. Is there any better way to do this?
I need a solution for C++11 and MSVC.
This might work by using a template:
template<bool pDebug>
Result Algorithms::foo()
{
if(pDebug)
return Algorithms::algorithmPrecise(dataPrecise);
else
return Algorithms::algorithmOptimized(dataOptimized);
}
On the other hand this means moving your function definition into a header (or forcing template instantiation, see these answers).
The downside is that changing the calls from Algorithms::foo<false>() to Algorithms::foo<true>() every time you want to switch between debugging and release might require effort. If you have multiple affected calls you could use a compile-time const variable to reduce the typing effort, but not knowing your code exactly I can't estimate whether this is a feasible solution.
If the majority of your code uses the optimized version of the function you can also set the template parameter to default to false (template<bool pDebug = false>) to avoid changing existing code that will not call the debug-version.
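For illustration, the call sites could then look roughly like this (simplified from the question's code: in reality the indirection through a.update() would have to carry the same template parameter down to foo):
Result B::update()
{
    // faulty section: force the precise/debug implementation here only
    return Algorithms::foo<true>() + 1337;
}

Result C::update()
{
    // unchanged behaviour: optimized implementation (or rely on the default)
    return Algorithms::foo<false>();
}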

How to detect background function being called in critical functions

I am working on a very large C++ project. It has a lot of time-critical real-time functions and also a lot of slow background functions. These background functions should not be called from the time-critical functions. Is there a way to detect background functions being called from critical functions? Compile time would be good, but I would at least like to detect it before these background functions run.
More info: both the slow and the critical functions are part of the same class and share the same header.
Some more information: the critical functions run on a really fast thread (>= 10 kHz); the slower ones run on a different, slower thread (<= 1 kHz). Class member variables are protected by critical sections in the slow functions, since both use the same class member variables. That's why calling slow functions from critical functions slows down overall system performance, and why I'd like to find all such calls automatically instead of checking manually.
Thanks....
You need to leverage the linker. Separate the "realtime" and slow functions into two modules, and link them in the correct order.
For example, split the files into two directories. Create a library from each directory (archive the object files together with ar/ranlib), then link your final application using:
c++ -o myapp main.o lib1/slowfns.a lib2/realtime.a
If you try to call anything from slowfns.a in realtime.a, depending on the compiler, it will fail to link (some compilers may need options to enforce this).
In addition, this lets you easily manage compile-time declarations too: make sure that the headers from the slowfns library aren't on the include path when compiling the "realtime" functions library, for added protection.
Getting a compile-time detection other than the one proposed by Nicholas Wilson will be extremely hard if not impossible, but assuming "background" really refers to the functions, and not to multiple threads (I saw no mention of threads in the question, so I assume it's just an odd wording) you could trivially use a global flag and a locker object, and either assert or throw an exception. Or, output a debug message. This will, of course, be runtime-only -- but you should be able to very quickly isolate the offenders. It will also be very low overhead for debug builds (almost guaranteed to run from L1 cache), and none for release builds.
Using CaptureStackBackTrace, one should be able to capture the offending function's address, which a tool like addr2line (or whatever the MS equivalent is) can directly translate to a line in your code. There is probably even a toolhelp function that can directly do this translation (though I wouldn't know).
So, something like this (untested!) might do the trick:
namespace global { int slow_flag = 0; }
struct slow_func_locker
{
slow_func_locker() { ++global::slow_flag; }
~slow_func_locker(){ --global::slow_flag; }
};
#ifndef NDEBUG
#define REALTIME if(global::slow_flag) \
{ \
void* backtrace; \
CaptureStackBackTrace(0, 1, &backtrace, 0); \
printf("RT function %s called from %08x\n", __FUNCTION__, backtrace); \
}
#define SLOW_FUNC slow_func_locker slow_func_locker_;
#else
#define REALTIME
#define SLOW_FUNC
#endif
void foo_class::some_realtime_function(...)
{
REALTIME;
//...
}
void foo_class::some_slow_function(...)
{
SLOW_FUNC;
//...
some_realtime_function(blah); // this will trigger
}
The only real downside (apart from not being compile-time) is you have to mark each and every slow and realtime function with either marker, but since the compiler cannot magically know which is what, there's not much of a choice anyway.
Note that the global "flag" is really a counter, not a flag. The reason for this is that a slow function could immediately call another slow function which, on returning, would clear a simple flag -- incorrectly suggesting that no slow function is in progress (the approach with critical sections suggested by xgbi might deadlock in this case!). A counter prevents this from happening. In the presence of threads, one might replace int with std::atomic_int, too.
EDIT:
As it is now clear that there are really two threads running, and it only matters that one of them (the "fast" thread) never calls a "slow" function, there is another simple, working solution (the example uses the Win32 API, but it could equally be done with POSIX):
When the "fast" thread starts up (the "slow" thread does not need to do this), store the thread ID somewhere, either as global variable, or as member of the object that contains all the fast/slow functions -- anywhere where it's accessible:
global::fast_thread_id = GetCurrentThreadId();
The macro to bail out on "unwelcome" function calls could then look like:
#define CHECK_FAST_THREAD assert(GetCurrentThreadId() != global::fast_thread_id)
This macro is then added to any "slow" function that should never be called from the "fast" thread. If the fast thread calls a function that it must not call, the assert triggers and it is known which function is called.
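Putting the two snippets together, a standalone sketch could look like this (fast_thread_entry and some_slow_function are illustrative names; global::fast_thread_id is the variable described above):
#include <cassert>
#include <windows.h>

namespace global { DWORD fast_thread_id = 0; }

#define CHECK_FAST_THREAD assert(GetCurrentThreadId() != global::fast_thread_id)

DWORD WINAPI fast_thread_entry(LPVOID)
{
    global::fast_thread_id = GetCurrentThreadId();  // remember the fast thread
    // ... run the real-time loop ...
    return 0;
}

void some_slow_function()
{
    CHECK_FAST_THREAD;  // fires if the fast thread ever ends up here
    // ... slow work ...
}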
Don't know how to do that at compile time, but for runtime, maybe use a mutex?
static Mutex critical_mutex;
#define CALL_SLOW( f ) if( critical_mutex.try_lock() == FAIL) \
printf("SLOW FUNCTION " #f " called while in CRITICAL\n"); \
else critical_mutex.unlock(); /* release the probe lock again */ \
f
#define ENTER_CRITICAL() critical_mutex.lock()
#define EXIT_CRITICAL() critical_mutex.unlock()
Whenever you use a slow function while in a critical section, the trylock will fail.
void slow_func(){
}
ENTER_CRITICAL();
CALL_SLOW( slow_func() );
EXIT_CRITICAL();
Will print:
SLOW FUNCTION slow_func() called while in CRITICAL
If you need speed, you can implement your own lightweight mutex with InterlockedIncrement on Windows or the __sync* builtins on Linux.
Preshing has an awesome set of blog posts about this.
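As a rough, portable sketch of such a lightweight lock, std::atomic can stand in for the platform intrinsics mentioned above (an illustration only, not a drop-in replacement):
#include <atomic>

class LightMutex {
    std::atomic<bool> locked{false};
public:
    bool try_lock() { return !locked.exchange(true, std::memory_order_acquire); }
    void lock()     { while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ } }
    void unlock()   { locked.store(false, std::memory_order_release); }
};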
If you're free to modify the code as you wish, there's a type-system-level solution that involves adding some boilerplate.
Basically, you create a new class, SlowFunctionToken. Every slow function in your program takes a reference to SlowFunctionToken. Next, you make SlowFunctionToken's default and copy constructors private.
Now only functions that already have a SlowFunctionToken can call slow functions. How do you get a SlowFunctionToken? Add friend declarations to SlowFunctionToken; specifically, friend the thread entry functions of the threads that are allowed to use slow functions. Then, create local SlowFunctionToken objects there and pass them down.
class SlowFunctionToken;
class Stuff {
public:
void FastThread();
void SlowThread();
void ASlowFunction(SlowFunctionToken& sft);
void AnotherSlowFunction(SlowFunctionToken& sft);
void AFastFunction();
};
class SlowFunctionToken {
SlowFunctionToken() {}
SlowFunctionToken(const SlowFunctionToken&) {}
friend void Stuff::SlowThread();
};
void Stuff::FastThread() {
AFastFunction();
//SlowFunctionToken sft; doesn't compile
//ASlowFunction(???); doesn't compile
}
void Stuff::SlowThread() {
SlowFunctionToken sft;
ASlowFunction(sft);
}
void Stuff::ASlowFunction(SlowFunctionToken& sft) {
AnotherSlowFunction(sft);
AFastFunction(); // works, but that function can't call slow functions
}

increase c++ code verbosity with macros

I'd like to have the possibility to increase the verbosity of my program for debug purposes. Of course I can do that using a switch/flag during runtime. But that can be very inefficient, due to all the 'if' statements I would have to add to my code.
So, I'd like to add a flag to be used during compilation in order to include optional, usually slow debug operations in my code, without affecting the performance/size of my program when not needed. Here's an example:
/* code */
#ifdef _DEBUG_
/* do debug operations here */
#endif
So, compiling with -D_DEBUG_ should do the trick. Without it, that part won't be included in my program.
Another option (at least for i/o operations) would be to define at least an i/o function, like
#ifdef _DEBUG_
#define LOG(x) std::clog << x << std::endl;
#else
#define LOG(x)
#endif
However, I strongly suspect this probably isn't the cleanest way to do that. So, what would you do instead?
I prefer to use #ifdef with real functions so that the function has an empty body if _DEBUG_ is not defined:
void log(std::string x)
{
#ifdef _DEBUG_
std::cout << x << std::endl;
#endif
}
There are three big reasons for this preference:
When _DEBUG_ is not defined, the function definition is empty and any modern compiler will completely optimize out any call to that function (the definition should be visible inside that translation unit, of course).
The #ifdef guard only has to be applied to a small localized area of code, rather than every time you call log.
You do not need to use lots of macros, avoiding pollution of your code.
You can use macros to change the implementation of the function (like in sftrabbit's solution). That way, no empty places will be left in your code, and the compiler will optimize the "empty" calls away.
You can also use two distinct files for the debug and release implementation, and let your IDE/build script choose the appropriate one; this involves no #defines at all. Just remember the DRY rule and make the clean code reusable in debug scenario.
I would say that this actually depends very much on the problem you are facing. Some problems will benefit more from the second solution, whilst simple code might be better off with simple defines.
Both snippets that you describe are correct ways of using conditional compilation to enable or disable the debugging through a compile-time switch. However, your assertion that checking the debug flags at runtime "can be very inefficient, due to all the 'if' statements I should add to my code" is mostly incorrect: in most practical cases a runtime check does not influence the speed of your program in a detectable way, so if keeping the runtime flag offers you potential advantages (e.g. turning the debugging on to diagnose a problem in production without recompiling) you should go for a run-time flag instead.
For the additional checks, I would rely on the assert (see the assert.h) which does exactly what you need: check when you compile in debug, no check when compiled for the release.
For the verbosity, a more C++ version of what you propose would use a simple Logger class with a boolean as template parameter. But the macro is fine as well if kept within the Logger class.
For commercial software, having SOME debug output that is available at runtime on customer sites is usually a valuable thing to have. I'm not saying everything has to be compiled into the final binary, but it's not at all unusual that customers do things to your code that you don't expect [or that causes the code to behave in ways that you don't expect]. Being able to tell the customer "Well, if you run myprog -v 2 -l logfile.txt and do you usual thing, then email me logfile.txt" is a very, very useful thing to have.
As long as the "if-statement to decide if we log or not" is not in the deepest, darkest jungle in Peru, eh, I mean in the deepest nesting levels of your tight, performance-critical loop, then it's rarely a problem to leave it in.
So, I personally tend to go for the "always there, not always enabled" approach. That's not to say that I don't find myself adding some extra logging in the middle of my tight loops sometimes - only to remove it later on when the bug is fixed.
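A minimal sketch of that "always there, not always enabled" style, assuming a hypothetical g_verbosity set from the command line (g_verbosity and log_msg are illustrative names):
#include <cstdarg>
#include <cstdio>

static int g_verbosity = 0;   // set from e.g. "-v 2" during startup

static void log_msg(int level, const char* fmt, ...)
{
    if (level > g_verbosity)  // the cheap runtime check discussed above
        return;
    va_list args;
    va_start(args, fmt);
    std::vfprintf(stderr, fmt, args);
    va_end(args);
}

// Usage: log_msg(2, "frame %d took %f ms\n", frame, ms);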
You can avoid the function-like macro when doing conditional compilation. Just define a regular or template function to do the logging and call it inside the:
#ifdef _DEBUG_
/* ... */
#endif
part of the code.
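A minimal sketch of that suggestion (the names debug_log and do_work are illustrative):
#include <iostream>

template <typename T>
void debug_log(const T& value)
{
    std::clog << value << std::endl;
}

void do_work()
{
#ifdef _DEBUG_
    debug_log("entering do_work");   // compiled only when _DEBUG_ is defined
#endif
    // ... actual work ...
}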
At least in the *Nix universe, the default define for this kind of thing is NDEBUG (read no-debug). If it is defined, your code should skip the debug code. I.e. you would do something like this:
#ifdef NDEBUG
inline void log(...) {}
#else
inline void log(...) { .... }
#endif
An example piece of code I use in my projects. This way, you can use a variable argument list, and if the DEBUG flag is not set, the related code is compiled out:
#ifdef DEBUG
#define PR_DEBUG(fmt, ...) \
        printf("[DBG] %s: " fmt, __func__, ## __VA_ARGS__)
#else
#define PR_DEBUG(fmt, ...)
#endif
Usage:
#define DEBUG
<..>
ret = do_smth();
PR_DEBUG("some kind of code returned %d", ret);
Output:
[DBG] some_func: some kind of code returned 0
Of course, printf() may be replaced by any output function you use. Furthermore, it can easily be modified so that additional information, such as a time stamp, is automatically added.
For me it depends from application to application.
I've had applications where I wanted to always log (for example, we had an application where in case of errors, clients would take all the logs of the application and send them to us for diagnostics). In such a case, the logging API should probably be based on functions (i.e. not macros) and always defined.
In cases when logging is not always necessary or you need to be able to completely disable it for performance/other reasons, you can define logging macros.
In that case I prefer a single-line macro like this:
#ifdef NDEBUG
#define LOGSTREAM if (false) std::clog // swallow the output; token-pasting /##/ into a comment is not portable
#else
#define LOGSTREAM std::clog
// or
// #define LOGSTREAM std::ofstream("output.log", std::ios::out|std::ios::app)
#endif
client code:
LOG << "Initializing chipmunk feeding module ...\n";
//...
LOG << "Shutting down chipmunk feeding module ...\n";
It's just like any other feature.
My assumptions:
No global variables
System designed to interfaces
For whatever you want verbose output from, create two implementations: one quiet, one verbose.
At application initialisation, choose the implementation you want.
It could be a logger, or a widget, or a memory manager, for example.
Obviously you don't want to duplicate code, so extract the minimum variation you want. If you know what the strategy pattern or the decorator pattern is, these point in the right direction. Follow the open/closed principle.
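A minimal sketch of that idea, assuming a hypothetical Logger interface chosen at startup (all names here are illustrative):
#include <iostream>
#include <memory>
#include <string>

struct Logger {
    virtual ~Logger() = default;
    virtual void log(const std::string& msg) = 0;
};

struct QuietLogger : Logger {
    void log(const std::string&) override {}                       // does nothing
};

struct VerboseLogger : Logger {
    void log(const std::string& msg) override { std::clog << msg << '\n'; }
};

// At application initialisation, pick one implementation and inject it
// into the components that need it.
std::unique_ptr<Logger> make_logger(bool verbose)
{
    if (verbose)
        return std::unique_ptr<Logger>(new VerboseLogger);
    return std::unique_ptr<Logger>(new QuietLogger);
}
The rest of the code only talks to the Logger interface, so there is no #define and no scattered if-checks; the cost of the unused variant is a single virtual call.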