Where is DEBUG defined in the code? - c++

I looked at some sample code to create a log system on my server... and I found this:
#if DEBUG
printf("something here");
#endif
I know what it does. It prints something only if DEBUG has been defined. But where is DEBUG defined? I looked at all the header files but I couldn't find DEBUG.
Also, could you please provide a good example or tutorial about designing a logging system?
Thanks in advance..

Do not use the DEBUG macro; it is not defined by the C++ standard. The C++ standard defines NDEBUG ("No DEBUG"), which controls the standard assert macro, and your logging code would go hand in hand with it. DEBUG is compiler dependent, whereas NDEBUG is guaranteed by the standard to be handled consistently. Applying it to your example:
#ifndef NDEBUG
printf("something here");
#endif
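A minimal sketch (the macro name LOG_DEBUG is my own) of how a debug-only logging macro can be keyed to NDEBUG, so that it is stripped in exactly the builds that also strip assert():
#include <cassert>
#include <cstdio>
#ifndef NDEBUG
#  define LOG_DEBUG(...) std::fprintf(stderr, __VA_ARGS__)   // active in debug builds
#else
#  define LOG_DEBUG(...) ((void)0)                           // compiled out when NDEBUG is defined
#endif
int main() {
    assert(2 + 2 == 4);            // removed when NDEBUG is defined
    LOG_DEBUG("something here\n"); // removed in the same builds
}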
But my opinion is: you should not design a logging library around the NDEBUG/DEBUG pair. A logging library should always be available, so that you can trace the application's behavior without recompiling, which would otherwise mean redeploying your code and possibly masking the buggy behavior. The following DDJ articles by Petru Marginean about the design of logging libraries describe how to achieve this very efficiently in C++:
Part 1: http://drdobbs.com/cpp/201804215
Part 2: http://drdobbs.com/cpp/221900468
Regarding the logging library take a look at the Boost Logging library at:
http://boost-log.sourceforge.net/libs/log/doc/html/index.html
I was downvoted because NDEBUG is said not to be set unless it is explicitly defined on the command line. I agree with that, but on the other hand I read the question as meaning that compilation in debug mode should also produce logging output, and that behavior is better enforced by tying it to the presence of NDEBUG.

Compilers, at least those I know of, have an option to define preprocessor macros from "the outside" of the compiled files.
For example, to define DEBUG with a Microsoft compiler, you'd go with something like
cl -DDEBUG file.cpp ...
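gcc and clang accept the same kind of switch (the file name here is just an example):
g++ -DDEBUG file.cpp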

An easy way to detect the position of the first definition is to add a #define yourself:
#include "someone.h"
#define DEBUG "dummy" // add this
#if DEBUG
printf("something here");
#endif
You'll get
foo.c:2:0: warning: "DEBUG" redefined
<command-line>:0:0: note: this is the location of the previous definition
Or
foo.c:30:0: warning: "DEBUG" redefined
someone.h:2:0: note: this is the location of the previous definition
Another answer: try ctags. Run ctags -R at the top of the project directory, then run vim /path/to/your/code.c, move the cursor to DEBUG, and type CTRL-].
This way you may find several definitions; you can list them all with :tselect on DEBUG.
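As a small shortcut once the tags file exists, you can also jump straight from the shell (a minimal sketch of the same workflow):
ctags -R .
vim -t DEBUG
which opens the file containing the first matching definition of DEBUG.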

In Visual Studio, you can set preprocessor symbols under Project Properties. As for a logging system, take a look at log4cpp.

Related

Bit definition error - IAR Workbench

I am a beginner with embedded programming and am using the IAR Workbench for a project of mine using the STM32F4Discovery. I am trying to compile existing code and get a few errors in a few places regarding bit definitions, like the following:
Error[Pe020]: identifier "GPIO_PIN_SET" is undefined
Now, GPIO_PIN_SET is defined in the file stm32f4xx_gpio_hal.h, and that file is already included in my project. When I looked online to resolve this issue, I found this solution. However, I don't have the System tab in the General Options in my IAR Workbench. I have a full version of IAR Workbench and am not sure why the System tab is missing.
I also tried defining
#define ENABLE_BIT_DEFINITIONS
as stated in this link in my main.c file but to no avail.
Trying to set
#define STM32F4XX
#define USE_STDPERIPH_DRIVER
in the main.c file or defining the symbols STM32F4XX, USE_STDPERIPH_DRIVER in the Preprocessor tab in General Options as mentioned here also didn't help.
The solution is probably something very simple that I am overlooking, but I am not able to figure out what I am missing. Any help would be appreciated.
Including a header file in a "project" is not enough; you should actually include it (directly or indirectly) in the source file where the declarations are to be used. It would be that simple in any halfway sane development kit, but we are stuck with ST, and they force us to do it their way.
Include the "master" header in your main.c
#include "stm32f429i_discovery.h"
This will in turn include stm32f4xx_hal.h, which includes stm32f4xx_hal_conf.h, which in turn includes stm32f4xx_hal_gpio.h if the right #defines are there.
You might not have stm32f4xx_hal_conf.h
If that's the case, then copy Drivers\STM32F4xx_HAL_Driver\Inc\stm32f4xx_hal_conf_template.h into your project and rename it to stm32f4xx_hal_conf.h. Otherwise just make sure that #define HAL_GPIO_MODULE_ENABLED is not commented out.
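As a rough sketch (exact contents differ between STM32CubeF4 versions), the module-enable section of stm32f4xx_hal_conf.h looks something like this, and the GPIO line must stay uncommented:
#define HAL_MODULE_ENABLED
#define HAL_GPIO_MODULE_ENABLED        /* needed for GPIO_PIN_SET and friends */
/* #define HAL_CRYP_MODULE_ENABLED */  /* modules you don't use stay commented out */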
Set the right #defines
New versions of STM32CubeF4 have been released since the tutorial you've linked was written, and a few things have apparently changed. As of version 1.6.0, define STM32F429xx in Preprocessor Options, and forget the ones above. Yes, I've noticed that there is a version 1.7.0 now, let's hope that compatibility lasts this time.

Source files in Release, header files in Debug

We appear to have developed a strange situation in our application. An ASSERT is being triggered which should only run if _DEBUG is defined, but it is being evaluated when the application is compiled in Release mode.
ASSERT is defined in a header file, and is being triggered from another header file, which is included into a source file.
On further inspection, the source file is indeed running in Release mode (_DEBUG is not defined, and NDEBUG is). However, the header files have _DEBUG defined, and not NDEBUG.
According to conventional wisdom, #including a header file is equal to cutting-and-pasting the lines of code into the source file. This would make the above behaviour impossible.
We are compiling a large, mixed language (Intel FORTRAN and C++) application in VS2010. This problem also occurs on our build server, though, so it doesn't seem to be just a VS2010 'feature'.
We have checked:
All projects are building in Release.
The affected cpp files do not have any unusual properties being set.
There are no files in our solution manually defining or undefining _DEBUG or NDEBUG.
We have established the above behaviour by including clauses such as:
bool is_debug = false;
#ifdef _DEBUG
is_debug = true;
#endif
and breaking on the point immediately afterwards.
We are running out of things to test - about the only things that I can even hypothesise are:
Some standard library or external include is redefining _DEBUG and NDEBUG, or
Something has overridden the #include macro (is this possible?).
EDIT ----------------------------------------------------------
Thanks in part to the #error trick (below), we've found the immediate problem: in several of the projects, NDEBUG and _DEBUG are no longer defined. All of these projects were meant to inherit something from the macro $(PreprocessorDefinitions) - but this is not defined anywhere.
This still leaves some awkward questions:
The source file that was causing the above behaviour does have NDEBUG defined in its project settings, and yet the header files it includes don't (although VS2010 does grey out the correct #ifdef blocks).
If the PreprocessorDefinitions macro is inherited by all C++ projects (which it appears to be), then why isn't it defined anywhere?
My usual approach to problems like this is to look where the symbol is defined or where an #ifdef uses it, and then put #error Some text inside. That way the compilation itself breaks, instead of you having to wait and run the program. Then you can see what really is defined.
You could also add such an #ifdef - #error combination right where the assert occurs, then you can be absolutely sure what the compiler thinks should be valid.
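A minimal sketch of that #ifdef/#error combination, dropped right next to the failing ASSERT:
#ifdef _DEBUG
#error _DEBUG is defined in this translation unit
#endif
#ifndef NDEBUG
#error NDEBUG is NOT defined in this translation unit
#endif
Whichever #error fires tells you which configuration the compiler actually sees at that point.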
From http://msdn.microsoft.com/en-us/library/9sb57dw4(v=vs.71).aspx:
The assert routine is available in both the release and debug versions of the C run-time libraries. Two other assertion macros, _ASSERT and _ASSERTE, are also available, but they only evaluate the expressions passed to them when the _DEBUG flag has been defined.
In other words: either use _ASSERT(...) or #define NDEBUG, so you don't get asserts in Release builds.
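A minimal sketch of the difference (MSVC-specific; _ASSERT comes from <crtdbg.h>):
#include <cassert>
#include <crtdbg.h>
void check(int x)
{
    assert(x > 0);   // removed only when NDEBUG is defined
    _ASSERT(x > 0);  // evaluated only when _DEBUG is defined
}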
OK, the problem turns out to be because NDEBUG and _DEBUG are missing from the Properties->C/C++->Preprocessor->Preprocessor Definitions on several projects. Whether they were always missing, or whether they had originally been included via the $(PreprocessorDefinitions) macro is unclear.
Thanks to #Lamza, #Devolus and #Werner Henze - all of their input was useful, and the eventual problem was depressingly mundane.

How can I get the compiler to tell me what file #defines a value?

My code links against several other libraries that are also developed at my company. One of these libraries redefines several values from errno.h, and I would like to fix this, but I am having trouble finding the exact file that redefines those values. I want to know if there is a way to make the compiler tell me when a file has defined a particular value.
You can probably do it by adding -include errno.h to the command line that builds the library in question. Here's a quick example. I have a C program called "file.c":
#define ESRCH 8
That's it - then I compile with:
cc -c -include errno.h file.c
And presto, a compiler warning:
file.c:1:1: warning: "ESRCH" redefined
In file included from /usr/include/errno.h:23,
from <command-line>:0:
/usr/include/sys/errno.h:84:1: warning: this is the location of the previous definition
That will tell you where your bad definitions are.
Have you tried searching with grep?
If you don't want to search through all your headers for the particular #define, you could use
#undef YOUR_MANIFEST_CONSTANT
after each #include in your source module and then start removing them from the bottom up and see where your definitions come from.
Also, your compiler may tell you that a #define has been redefined. Turn all your warnings on.
With GCC I did something similar with:
g++ input.cc -dD -E > cpp.out
-dD tells cpp to print all defines where they were defined. And in the cpp output there are also markers for the include file names and the line numbers.
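As a small follow-up, the preprocessed output also contains line markers of the form # <line> "<file>", so grepping for the macro and then looking at the nearest marker above the hit usually pinpoints the file (ESRCH here is just an example name):
g++ input.cc -dD -E | grep -n 'define ESRCH'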
It is possible that some environments, I'm thinking IDE's here, have configuration options tied into the "project settings" rather than using a configuration header. If you work with a lot of other developers in a place where this behavior is NOT frowned on then you might also check your tool settings.
Most compilers will tell you where the problem is; you just have to look at and think about what the diagnostic is telling you.
Short of that, grep/findstr on *nix/Windows is your friend.
If that yields nothing then check for tool settings in your build system.
Some IDE's will jump to the correct location if you right click on the usage and select 'go to definition'.
Another option if you're really stuck is a command line option on the compiler. Most compilers have an option to output the assembler they generate when compiling C++ code.
You can view this assembler (which has comments letting you know the relative line number in the C++ source file). You don't have to understand the assembler but you can see what value was used and what files and definitions were included when the compiler ran. Check your compiler's documentation for the exact option to use
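For reference, the usual switches are along these lines (check your compiler's documentation for the exact spelling):
g++ -S -fverbose-asm file.cpp   # writes file.s with comments referencing the source
cl /FAs file.cpp                # writes file.asm with interleaved source lines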

Using Boost.Thread headers with MSVC Language Extensions disabled

I just discovered that when Language Extensions are disabled in MSVC, you get this error if you try to include boost/thread/thread.hpp:
fatal error C1189: #error : "Threading support unavaliable: it has been explicitly disabled with BOOST_DISABLE_THREADS"
It seems that when Boost detects that language extensions are disabled (_MSC_EXTENSIONS isn't defined), they define BOOST_DISABLE_WIN32, to indicate that it is not safe to include windows.h (which won't compile without extensions enabled).
And as a consequence of that #define, BOOST_DISABLE_THREADS is defined, even though Boost.Thread isn't a header-only library, and windows.h is only included in the .cpp files. The headers should in principle be safe to use without language extensions. All the actual win32 calls are isolated in the compiled library (the .dll or .lib)
I can see here that they're aware of the problem, but as it's remained untouched for the last two years, it's probably naive to hope for a quick fix.
It seems like it should be a fairly simple case of modifying some of the #ifdef's and #defines in the various Boost configuration files, but there are a lot of them, and they define and use a lot of macros whose purpose isn't clear to me.
Does anyone know of a simple hack or workaround to allow inclusion of the Boost.Thread headers when language extensions are disabled?
I don't see any simple way to turn off the behavior.
You could wrap the block with your own #ifdef starting at boost\config\suffix.hpp(214):
#ifndef TEMP_HACK_DONT_DISABLE_WIN32_THREADS // XXX TODO FIXME
#if defined(BOOST_DISABLE_WIN32) && defined(_WIN32) \
&& !defined(BOOST_DISABLE_THREADS) && !defined(BOOST_HAS_PTHREADS)
# define BOOST_DISABLE_THREADS
#endif
#endif // ndef TEMP_HACK_DONT_DISABLE_WIN32_THREADS
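With that patch in place, define the escape macro (the placeholder name from the patch above) in your project settings or on the compile line, for example:
cl /Za /DTEMP_HACK_DONT_DISABLE_WIN32_THREADS /I path\to\boost main.cpp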
Not a perfect fix, but it should be temporary until you can get them to fix it upstream. The boost stuff is good, but it's not immutable in its perfection.
Of course, make some kind of tracking item so you don't lose track of your divergence from upstream.

How should I detect unnecessary #include files in a large C++ project?

I am working on a large C++ project in Visual Studio 2008, and there are a lot of files with unnecessary #include directives. Sometimes the #includes are just artifacts and everything will compile fine with them removed, and in other cases classes could be forward declared and the #include could be moved to the .cpp file. Are there any good tools for detecting both of these cases?
While it won't reveal unneeded include files, Visual studio has a setting /showIncludes (right click on a .cpp file, Properties->C/C++->Advanced) that will output a tree of all included files at compile time. This can help in identifying files that shouldn't need to be included.
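The same switch also works from the command line, e.g.:
cl /showIncludes /c file.cpp
Every included header is then reported on a "Note: including file:" line, indented to show nesting.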
You can also take a look at the pimpl idiom to let you get away with fewer header file dependencies to make it easier to see the cruft that you can remove.
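A minimal single-file sketch of the pimpl idea (names are illustrative); in a real project the two halves would live in widget.h and widget.cpp:
// widget.h part: the public class only needs a forward declaration
#include <memory>
class WidgetImpl;                       // forward-declared; no heavy header included here
class Widget {
public:
    Widget();
    ~Widget();
    void draw();
private:
    std::unique_ptr<WidgetImpl> impl;   // only a pointer, so the full definition isn't needed in the header
};
// widget.cpp part: the implementation details stay out of the header
#include <cstdio>
class WidgetImpl {                      // in real code this would come from a private header
public:
    void draw() { std::puts("drawing"); }
};
Widget::Widget() : impl(new WidgetImpl) {}
Widget::~Widget() = default;            // defined here, where WidgetImpl is complete
void Widget::draw() { impl->draw(); }
int main() { Widget w; w.draw(); }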
PC Lint works quite well for this, and it finds all sorts of other goofy problems for you too. It has command line options that can be used to create External Tools in Visual Studio, but I've found that the Visual Lint addin is easier to work with. Even the free version of Visual Lint helps. But give PC-Lint a shot. Configuring it so it doesn't give you too many warnings takes a bit of time, but you'll be amazed at what it turns up.
There's a new Clang-based tool, include-what-you-use, that aims to do this.
!!DISCLAIMER!! I work on a commercial static analysis tool (not PC Lint). !!DISCLAIMER!!
There are several issues with a simple non-parsing approach:
1) Overload Sets:
It's possible that an overloaded function has declarations that come from different files. It might be that removing one header file results in a different overload being chosen rather than a compile error! The result will be a silent change in semantics that may be very difficult to track down afterwards (see the first sketch after this list).
2) Template specializations:
Similar to the overload example, if you have partial or explicit specializations for a template you want them all to be visible when the template is used. It might be that specializations for the primary template are in different header files. Removing the header with the specialization will not cause a compile error, but may result in undefined behaviour if that specialization would have been selected. (See: Visibility of template specialization of C++ function)
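To make issues 1) and 2) concrete, here are two minimal single-file sketches (all names are made up). First the overload-set hazard; imagine the two declarations normally live in two different headers:
#include <cstdio>
void print(double) { std::puts("double overload"); }
void print(int)    { std::puts("int overload"); }   // pretend this one came from the header you removed
int main()
{
    print(42);   // picks print(int); without that declaration it would silently call print(double)
}
And the specialization hazard; in real code the specialization would sit in its own header:
#include <iostream>
template <typename T> struct Traits { static const int id = 0; };       // primary template
template <>           struct Traits<int> { static const int id = 1; };  // explicit specialization
int main()
{
    std::cout << Traits<int>::id << '\n';  // prints 1; drop the specialization's header and this quietly becomes 0
}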
As pointed out by 'msalters', performing a full analysis of the code also allows for analysis of class usage. By checking how a class is used through a specific path of files, it is possible that the definition of the class (and therefore all of its dependencies) can be removed completely, or at least moved to a level closer to the main source in the include tree.
I don't know of any such tools, and I have thought about writing one in the past, but it turns out that this is a difficult problem to solve.
Say your source file includes a.h and b.h; a.h contains #define USE_FEATURE_X and b.h uses #ifdef USE_FEATURE_X. If #include "a.h" is commented out, your file may still compile, but may not do what you expect. Detecting this programmatically is non-trivial.
Whatever tool does this would need to know your build environment as well. If a.h looks like:
#if defined( WINNT )
#define USE_FEATURE_X
#endif
Then USE_FEATURE_X is only defined if WINNT is defined, so the tool would need to know what directives are generated by the compiler itself as well as which ones are specified in the compile command rather than in a header file.
Like Timmermans, I'm not familiar with any tools for this. But I have known programmers who wrote a Perl (or Python) script to try commenting out each include line one at a time and then compile each file.
It appears that Eric Raymond now has a tool for this.
Google's cpplint.py has an "include what you use" rule (among many others), but as far as I can tell, no "include only what you use." Even so, it can be useful.
If you're interested in this topic in general, you might want to check out Lakos' Large Scale C++ Software Design. It's a bit dated, but goes into lots of "physical design" issues like finding the absolute minimum of headers that need to be included. I haven't really seen this sort of thing discussed anywhere else.
Give Include Manager a try. It integrates easily in Visual Studio and visualizes your include paths which helps you to find unnecessary stuff.
Internally it uses Graphviz but there are many more cool features. And although it is a commercial product it has a very low price.
You can build an include graph using C/C++ Include File Dependencies Watcher, and find unneeded includes visually.
If your header files generally start with
#ifndef __SOMEHEADER_H__
#define __SOMEHEADER_H__
// header contents
#endif
(as opposed to using #pragma once) you could change that to:
#ifndef __SOMEHEADER_H__
#define __SOMEHEADER_H__
// header contents
#else
#pragma message("Someheader.h superfluously included")
#endif
And since the compiler outputs the name of the cpp file being compiled, that would let you know at least which cpp file is causing the header to be brought in multiple times.
PC-Lint can indeed do this. One easy way to do this is to configure it to detect just unused include files and ignore all other issues. This is pretty straightforward - to enable just message 766 ("Header file not used in module"), just include the options -w0 +e766 on the command line.
The same approach can also be used with related messages such as 964 ("Header file not directly used in module") and 966 ("Indirectly included header file not used in module").
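Invocation details vary by installation, but a typical command line is roughly of this shape (the .lnt and source file names are placeholders):
lint-nt -w0 +e766 std.lnt myfile.cpp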
FWIW I wrote about this in more detail in a blog post last week at http://www.riverblade.co.uk/blog.php?archive=2008_09_01_archive.xml#3575027665614976318.
Adding one or both of the following #defines will exclude often-unnecessary header files and may substantially improve compile times, especially if the code does not use many Windows API functions.
#define WIN32_LEAN_AND_MEAN
#define VC_EXTRALEAN
See http://support.microsoft.com/kb/166474
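One detail worth noting: the defines only take effect if they are seen before windows.h is pulled in, so they usually go at the top of the precompiled header or before the first Windows include, roughly like this:
#define WIN32_LEAN_AND_MEAN   // trims rarely-used APIs (cryptography, DDE, shell, ...) from windows.h
#define VC_EXTRALEAN          // trims even more; mainly relevant in MFC projects
#include <windows.h>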
If you are looking to remove unnecessary #include files in order to decrease build times, your time and money might be better spent parallelizing your build process using cl.exe /MP, make -j, Xoreax IncrediBuild, distcc/icecream, etc.
Of course, if you already have a parallel build process and you're still trying to speed it up, then by all means clean up your #include directives and remove those unnecessary dependencies.
Start with each include file, and ensure that each include file only includes what is necessary to compile itself. Any include files that are then missing for the C++ files, can be added to the C++ files themselves.
For each include and source file, comment out each include file one at a time and see if it compiles.
It is also a good idea to sort the include files alphabetically, and where this is not possible, add a comment.
If you aren't already, using a precompiled header to include everything that you're not going to change (platform headers, external SDK headers, or static already completed pieces of your project) will make a huge difference in build times.
http://msdn.microsoft.com/en-us/library/szfdksca(VS.71).aspx
Also, although it may be too late for your project, organizing your project into sections and not lumping all local headers to one big main header is a good practice, although it takes a little extra work.
If you work with Eclipse CDT, you could try out http://includator.com to optimize your include structure. However, Includator might not know enough about VC++'s predefined includes, and setting up CDT to use VC++ with the correct includes is not built into CDT yet.
The latest JetBrains IDE, CLion, automatically shows (in gray) the includes that are not used in the current file.
It is also possible to get a list of all the unused includes (and also functions, methods, etc.) from the IDE.
Some of the existing answers state that it's hard. That's indeed true, because you need a full compiler to detect the cases in which a forward declaration would be appropriate. You can't parse C++ without knowing what the symbols mean; the grammar is simply too ambiguous for that. You must know whether a certain name names a class (which could be forward-declared) or a variable (which can't). Also, you need to be namespace-aware.
Maybe a little late, but I once found a WebKit Perl script that did just what you wanted. It'll need some adapting, I believe (I'm not well versed in Perl), but it should do the trick:
http://trac.webkit.org/browser/branches/old/safari-3-2-branch/WebKitTools/Scripts/find-extra-includes
(this is an old branch because trunk doesn't have the file anymore)
If there's a particular header that you think isn't needed anymore (say string.h), you can comment out that include, then put this below all the includes:
#ifdef _STRING_H_
# error string.h is included indirectly
#endif
Of course your interface headers might use a different #define convention to record their inclusion in the preprocessor's memory, or no convention at all, in which case this approach won't work.
Then rebuild. There are three possibilities:
1. It builds OK. string.h wasn't compile-critical, and the include for it can be removed.
2. The #error trips. string.h was included indirectly somehow. You still don't know whether string.h is required; if it is, you should #include it directly (see below).
3. You get some other compilation error. string.h was needed and isn't being included indirectly, so the include was correct to begin with.
Note that depending on indirect inclusion when your .h or .c directly uses another .h is almost certainly a bug: you are in effect promising that your code will only require that header as long as some other header you're using requires it, which probably isn't what you meant.
The caveats mentioned in other answers, about headers that modify behavior rather than declaring things (which would cause build failures), apply here as well.