NDEBUG macro is never defined - MSVC compiler. Is _DEBUG standard? [duplicate] - c++

Which preprocessor define should be used to specify debug sections of code?
Should I use #ifdef _DEBUG or #ifndef NDEBUG, or is there a better way to do it, e.g. #define MY_DEBUG?
I think _DEBUG is Visual Studio specific. Is NDEBUG standard?

Visual Studio defines _DEBUG when you specify the /MTd or /MDd option, and NDEBUG disables standard C assertions. Use them when appropriate, i.e. _DEBUG if you want your debugging code to be consistent with the MS CRT debugging techniques, and NDEBUG if you want to be consistent with assert().
If you define your own debugging macros (and you don't hack the compiler or C runtime), avoid starting names with an underscore, as these are reserved.
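A minimal sketch of the two conventions side by side (the compute function and its trace output are made up for illustration):
#include <cassert>
#include <iostream>

void compute(int value)
{
#ifdef _DEBUG       // MSVC convention: defined when building with /MTd or /MDd
    std::cerr << "compute(" << value << ")\n";
#endif
#ifndef NDEBUG      // standard convention: assert() is active unless NDEBUG is defined
    assert(value >= 0);
#endif
    // ... actual work ...
}

int main()
{
    compute(3);
}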

Is NDEBUG standard?
Yes, it is a standard macro with the semantics "Not Debug" in the C89, C99, C++98, C++03, C++11, and C++14 standards. There is no _DEBUG macro in any of these standards.
The C++03 standard (page 326, "17.4.2.1 Headers") refers the reader to Standard C,
saying that the behaviour of NDEBUG "is the same as the Standard C library."
In C89 (the standard that C programmers usually just call "Standard C"), the "4.2 DIAGNOSTICS" section says:
https://port70.net/~nsz/c/c89/c89-draft.html
If NDEBUG is defined as a macro name at the point in the source file
where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
If you look at the meaning of the _DEBUG macro in Visual Studio
https://learn.microsoft.com/en-us/cpp/preprocessor/predefined-macros
you will see that this macro is defined automatically based on your choice of runtime library version.
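In practice, that quoted rule means the same translation unit behaves differently depending on whether NDEBUG is defined at compile time; a small sketch (the divide function is made up for illustration):
// checks.cpp -- build with: g++ checks.cpp          (assert active)
//           or: g++ -DNDEBUG checks.cpp             (assert expands to ((void)0))
#include <cassert>

int divide(int a, int b)
{
    assert(b != 0);   // stripped entirely when NDEBUG is defined before <cassert>
    return a / b;
}

int main()
{
    return divide(10, 2);
}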

I rely on NDEBUG, because it's the only one whose behavior is standardized across compilers and implementations (see documentation for the standard assert macro). The negative logic is a small readability speedbump, but it's a common idiom you can quickly adapt to.
To rely on something like _DEBUG would be to rely on an implementation detail of a particular compiler and library implementation. Other compilers may or may not choose the same convention.
The third option is to define your own macro for your project, which is quite reasonable. Having your own macro gives you portability across implementations and it allows you to enable or disable your debugging code independently of the assertions. Though, in general, I advise against having different classes of debugging information that are enabled at compile time, as it causes an increase in the number of configurations you have to build (and test) for arguably small benefit.
With any of these options, if you use third party code as part of your project, you'll have to be aware of which convention it uses.
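A minimal sketch of that third option, using a made-up project-specific name (MYPROJ_DEBUG) so it cannot collide with the implementation or with third-party code:
#include <cstdio>

// MYPROJ_DEBUG is a hypothetical name, defined (or not) by your own build
// system, e.g. -DMYPROJ_DEBUG; it is deliberately independent of NDEBUG.
#ifdef MYPROJ_DEBUG
#define MYPROJ_LOG(msg) std::fprintf(stderr, "debug: %s\n", msg)
#else
#define MYPROJ_LOG(msg) ((void)0)
#endif

int main()
{
    MYPROJ_LOG("starting up");   // compiles to nothing unless MYPROJ_DEBUG is set
}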

The macro NDEBUG controls whether assert() statements are active or not.
In my view, that is separate from any other debugging - so I use something other than NDEBUG to control debugging information in the program. What I use varies, depending on the framework I'm working with; different systems have different enabling macros, and I use whatever is appropriate.
If there is no framework, I'd use a name without a leading underscore; those tend to be reserved to 'the implementation' and I try to avoid problems with name collisions - doubly so when the name is a macro.

Be consistent and it doesn't matter which one you pick. Also, if for some reason you must interoperate with another program or tool that uses a certain DEBUG identifier, it's easy to bridge:
#ifdef THEIRDEBUG
#define MYDEBUG
#endif //and vice-versa

Unfortunately, "debug" is heavily overloaded. For instance, it's recommended to always generate and save a .pdb file even for RELEASE builds, which means using one of the -Z debug-information compiler flags (-Z7, -Zi, or -ZI) together with the -DEBUG linker option. Meanwhile, _DEBUG selects the special debug versions of the runtime library, including the debug versions of malloc and free, and NDEBUG disables standard assertions.

Despite the name, NDEBUG has nothing to do with whether you are creating a debug build or not; it controls whether assertions (assert()) are active. I would not base anything else on it: you may want debug builds without assertions, or release builds with assertions, from time to time, and then you must set NDEBUG accordingly, but that doesn't mean you also want all your other code to switch between debug and release behaviour.
From the compiler's perspective, there is no such thing as a debug build. You tell the compiler to build code with a specific set of settings, and if you want to use different settings for different kinds of builds, then that is a convention you made up yourself; the compiler knows nothing about it. You may actually have 50 different build styles, not just release and debug (profile, test, deploy, etc.), so it's up to you how these styles are identified in your own code. If you need preprocessor flags for them, you define how they are named, and the same namespace rules apply as for everything else you define in your code.
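For example, a project might pass one of several of its own style flags; a minimal sketch, where the BUILD_STYLE_* names are invented project conventions that no compiler defines for you:
#include <cstdio>

// Passed by the build system, e.g. -DBUILD_STYLE_PROFILE; the compiler
// itself defines none of these names.
#ifdef BUILD_STYLE_PROFILE
constexpr bool kProfiling = true;
#else
constexpr bool kProfiling = false;
#endif

int main()
{
    if (kProfiling)
        std::puts("profiling instrumentation enabled");
}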

Related

What predefined macro can be used to detect debug build with clang?

MSVC defines _DEBUG in debug mode, gcc defines NDEBUG in release mode. What macro can I use in clang to detect whether the code is being compiled for release or debug?
If you look at the project settings of your IDE, you will see that those macros are actually manually defined there, they are not automatically defined by the compiler. In fact, there is no way for the compiler to actually know if it's building a "debug" or "release", it just builds depending on the flags provided to it by the user (or IDE).
You have to make your own macros and define them manually, just like the IDE does for you when creating the projects.
Compilers don't define those macros. Your IDE/Makefile/<insert build system here> does. This doesn't depend on the compiler, but on the environment/build helper program you use.
The convention is to define the DEBUG macro in debug mode and the NDEBUG macro in release mode.
You can use the __OPTIMIZE__ macro to determine whether optimization is taking place. That generally means it is not a debug build, since optimizations often rearrange the code sequence, and trying to step through optimized code can be confusing.
This is probably what those most interested in this question are really trying to figure out.
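A sketch of that approach; note that __OPTIMIZE__ is a GCC/Clang predefined macro (defined whenever optimization is enabled, e.g. -O1 and above, but not at -O0), and MSVC does not define it:
#include <cstdio>

int main()
{
#ifdef __OPTIMIZE__   // GCC/Clang only: defined when optimization is enabled
    std::puts("optimized build (probably not a debug build)");
#else
    std::puts("unoptimized build");
#endif
}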
There is no such thing as a debug mode in a command-line compiler. That is an IDE thing: the IDE just sets up some options to be sent to the compiler.
If you use clang from the command line, you can use whatever you want. The same is true for gcc, so if you use NDEBUG with gcc, you can use the very same with clang.

Is there a cross platform way to detect debug mode compilation?

Is there a cross-platform way to detect debug mode compilation? If not, how do I do it for the major compilers: MSVC, GCC & MinGW, Mac, Clang, Intel?
For example, with MSVC you can detect debug mode like the following:
#if defined _DEBUG
// debug related stuff here
#else
// release related stuff here
#endif
For many or most compilers, "debug" mode is a multifaceted concept that includes several orthogonal settings. For example, with gcc, you can add debugging symbols to the output code using -g, enable optimizations using -O, or disable assert() macros using -DNDEBUG (to define the NDEBUG macro). In my work, we have deployed production code with many combinations of these enabled or disabled. We have left -g on in order to attach to running processes and troubleshoot them using gdb (in which case we usually have to fight with the spaghetti -O produced), left assertions on to get more information about persistent errors across releases, and disabled optimizations for legacy codebases written under a more permissive interpretation of "undefined behavior" (until we could fix/replace it).
Since the NDEBUG macro actually affects the semantics of the generated code (and some libraries change their ABIs when the macro is defined or not), that's probably the most portable answer to your question. However, if you're using that macro to detect optimized builds portably, you'll probably have mixed success.
The code below should work:
#if defined(DEBUG) || defined(_DEBUG)
// debug related stuff here
#else
// release related stuff here
#endif

What is meant by debug build and release build, the difference, and uses [duplicate]

Possible Duplicate:
Debug/Release difference
I want to know what these two mean: debug build and release build, and what the difference between them is.
Which one should I use (I mean, what are the suitable conditions for each one),
and which build am I actually using if I create a simple C++ project in Visual Studio [if I do not change any project settings]?
I am asking this because I am trying to make a GUI using wxWidgets 2.9.4, and they give different cases for adding the required .lib files. These are:
release ANSI static
debug ANSI static
release Unicode static
debug Unicode static
Please give a detailed answer.
Debug build and release build are just names. They don't mean anything.
Depending on your application, you may build it in one, two or more
different ways, using different combinations of compiler and linker
options. Most applications should only be built in a single version:
you test and debug exactly the same program that the clients use. In
some cases, it may be more practical to use two different builds:
overall, client code needs optimization, for performance reasons, but
you don't want optimization when debugging. And then there are cases
where full debugging (i.e. iterator validation, etc.) may result in code
that is too slow even for algorithm debugging, so you'll have a build
with full debugging checks, one with no optimization, but no iterator
debugging, and one with optimization.
Anytime you start on an application, you have to decide what options you
need, and create the corresponding builds. You can call them whatever
you want.
With regards to external libraries (like wxwidgets): all compilers have
some incompatibilities when different options are used. So people who
deliver libraries (other than in source form) have to provide several
different versions, depending on a number of issues:
release vs. debug: the release version will have been compiled with a
set of more or less standard optimization options (and no iterator
debugging); the debug version without optimization, and with iterator
debugging. Whether iterator debugging is present or not is one thing
which typically breaks binary compatibility. The library vendor should
document which options are compatible with each version.
ANSI vs. Unicode: this probably means narrow char vs wide wchar_t
for character data. Use whichever one corresponds to what you use in
your application. (Note that the difference between these two is much
more than just some compiler switches. You often need radically
different code, and handling Unicode correctly in all cases is far from
trivial; an application which truly supports Unicode must be aware of
things like composing characters or bidirectional writing.)
static vs. dynamic: this determines how the library is linked and
loaded. Usually, you'll want static, at least if you count on deploying
your application on other machines than the one you develop it on. But
this also depends on licensing issues: if you need a license for each
machine where the library is deployed, it might make more sense to use
dynamic.
When doing a DEBUG build the project is set up to not optimize (or only very lightly optimize) the generated code, and to tell the compiler to add debug information (which includes information about functions, variables, and other information needed for debugging). The pre-processor is set up to define the _DEBUG macro.
A RELEASE build, on the other hand, has a higher level of optimization, and no debug information is saved. The pre-processor is set up to define the NDEBUG macro.
Another difference is that certain "system" macros, for example ASSERT-like macros, do different things depending on whether _DEBUG or NDEBUG is defined. ASSERT does nothing in a release build, but performs checks and aborts in debug builds.
The difference between Unicode and non-Unicode is mostly the UNICODE pre-processor macro, which tells header files if certain Unicode functionality should be enabled or not. One thing is that TCHAR will be defined to wchar_t in Unicode builds but as char in non-Unicode builds.
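A rough sketch of how such an ASSERT-like macro typically works; this mirrors the idea behind macros like MFC's ASSERT or the CRT's _ASSERTE, not their exact definitions:
#include <cstdio>
#include <cstdlib>

// Illustrative only: a check that vanishes in release builds.
#ifdef _DEBUG
#define MY_ASSERT(expr)                                                 \
    do {                                                                \
        if (!(expr)) {                                                  \
            std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",      \
                         #expr, __FILE__, __LINE__);                    \
            std::abort();                                               \
        }                                                               \
    } while (0)
#else
#define MY_ASSERT(expr) ((void)0)
#endif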
In the debug build you get a lot more error checking, so if something goes wrong you may get a more informative message (and it will run more slowly).
In the debug build you will get more information when you run it under the debugger.
You can tell if the build is debug build by looking at the preprocessor definitions of the project properties: _DEBUG will be defined.
You will send the release build to your clients. (The debug build uses the debug libraries, which are not present on most non-development machines.)
If you want to link a static library to a project, it needs to be compiled with the same settings that you use to compile your code. That's why there is a Debug and a Release version of the library. Additionally, you need to specify whether you want to use Unicode or ANSI. Here the answer is quite simple (in my opinion): just use Unicode.
What is different in Release compared to Debug, so that they can't mix? Mainly it's the memory management. The memory management in Debug does a lot of additional things to allow you to find errors early. As an example, there are guard values (canaries) that can be checked to detect overwrites, and uninitialized memory is filled with a specific pattern, etc. Additionally, there are a lot of optimizations in Release that are not used in Debug. This allows Release to run faster but makes it difficult to debug the code: methods might be optimized away and inlined instead, parameter passing may be optimized to use registers, and so on.
So in C++ you manage (at least) 2 configurations. One Debug configuration that you link with the debug library. This one is for developing & testing. And a Release configuration linked with the release library. This one is for delivery. But don't forget that you need to test Release as well as it might behave differently than the Debug configuration.
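As an illustration of what the debug memory management does, here is a tiny program to step through; the fill values in the comments (0xCD, 0xFD, 0xDD) are the patterns the MSVC debug CRT conventionally uses, and the snippet itself is just a demonstration:
// Build with a debug CRT (/MTd or /MDd) and inspect 'p' in the debugger's
// memory window right after the allocation and right after the delete.
#include <cstddef>

int main()
{
    unsigned char* p = new unsigned char[16];
    // With the debug CRT, the fresh block is typically filled with 0xCD
    // ("clean memory") and bracketed by 0xFD guard bytes ("no man's land").
    p[0] = 42;
    delete[] p;
    // After the delete, the debug CRT overwrites the freed block with 0xDD
    // ("dead memory"), which makes stale pointers easier to spot.
    return 0;
}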

Mixing debug and release library/binary - bad practice?

Is it a bad practice to use a release version of 3rd party library in debug binary?
I am using a 3rd party library and compiled a release .lib library. My exe is in debug mode development. Then I got:
error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in test1.obj
After some googling I found that this is because I am trying to mix release with debug, and I should probably compile the library in debug mode or otherwise muddle with the _ITERATOR_DEBUG_LEVEL macro. But I am just curious whether that is the recommended way and why. It just seems cumbersome that I need to compile and keep a record of both release and debug binaries for every 3rd party library I intend to use, which will be many very soon, while having no intention to debug into that code.
Mixing debug and release code is bad practice. The problem is that the different versions can depend on different fundamental parts of the C++ runtime library, such as how memory is allocated, structures for things like iterators might be different, extra code could be generated to perform operations (e.g. checked iterators).
It's the same as mixing library files built with any other different settings. Imagine a case where a header file contains a structure that is used by both application and library. The library is built with structure packing and alignment set to one value and the application built with another. There are no guarantees that passing the structure from the application into the library will work since they could vary in size and member positions.
Is it possible to build your 3rd party libraries as DLLs? Assuming the interface to any functions is cleaner and does not try to pass any STL objects you will be able to mix a debug application with release DLLs without problems.
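A sketch of what such a clean DLL boundary can look like: a flat C interface with opaque handles, so no STL types or CRT-allocated objects cross between the release DLL and the debug executable (all names here are made up for illustration):
// thirdparty_api.h -- hypothetical flat interface exported by the release DLL
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct tp_session tp_session;          // opaque handle, allocated by the DLL

tp_session* tp_open(const char* config_path);  // DLL allocates its own memory
int         tp_process(tp_session* s, const unsigned char* data, size_t len);
void        tp_close(tp_session* s);           // DLL frees what it allocated

#ifdef __cplusplus
}
#endif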
Mixing debug and release library/binary is good and very useful practice.
Debugging a large solution (100+ projects, for example) typically is not fast, or might not even be possible at all (for example, when not all projects can be built in debug). Previous commentators wrote that debug/release binaries could have different alignment or other differences; that's not true. All linking parameters are the same in debug and release binaries because they depend on the same architecture.
You have to remove all optimizations (/Od) from the selected project. Then assign a release C++ runtime.
The issue arose because you have _DEBUG defined in the project. Remove the macro from the definitions (Project->Properties->Preprocessor->Preprocessor Definitions).
If the macro isn't in the Preprocessor Definitions, then you have to add it to "UndefinePreprocessorDefinitions".
The fact that it doesn't compile should be sufficient to prove it's bad practice.
Regarding maintaining separate builds - you don't need to do that. Here's a workaround that previously worked for me:
#ifdef _DEBUG
#define DEBUG_WAS_DEFINED   // remember that this is a debug build
#undef _DEBUG               // pretend to be a release build for this header
#endif
#include <culprit>
#ifdef DEBUG_WAS_DEFINED
#define _DEBUG              // restore the macro afterwards
#endif
Let me know if this works for you.

Defining macros in Visual Studio - /D or #define?

Recently, when porting some STL code to VS2008 I wanted to disable warnings generated by std::copy by defining the new _SCL_SECURE_NO_WARNINGS flag. You can do this in two ways:
Using the /D compiler switch, which can be specified in the project properties. You need to ensure it is defined for both Release and Debug builds, which I often forget to do.
By defining it macro style before you include the relevant STL headers, or, for total coverage, in stdafx.h:
#define _SCL_SECURE_NO_WARNINGS
Both of these methods work fine but I wondered if there was any argument for favouring one over the other?
The /D option is generally used when you want to define it differently on different builds (so it can be changed in the makefile)
If you will "always" want it set the same way, use #define.
By putting them in your project file you maintain a close association between the platform specific warnings and the platform, which seems correct to me.
If they're in the code, they're always in the code whether or not it's appropriate for the platform. You don't need it for GCC or possibly future versions of Visual C++. On the other hand, by having it in the code, it's more obvious that it's there at all. If you move (copy) the code, it'll be easier to remember to move that define with it.
Pros and Cons each way. YMMV.
If you have a header that is included by all others (like that stdafx.h), you should put it there. The compiler command-line switch is usually used for build options that are not always set, like NDEBUG, UNICODE, and such things, while your macro would essentially always be set.
That might sound arbitrary, and indeed some might say otherwise. In the end, though, you have to decide what fits your situation.
If you do put them in your code, remember to ifdef them properly:
#ifdef _MSC_VER
#define _SCL_SECURE_NO_WARNINGS
#endif
This will keep your code portable.
In general I prefer putting #define's in the code as opposed to using the /D compiler switch for most things because it seems to be more intuitive to look for a #define than to check compiler settings.
/D isn't a valid flag for msbuild.exe (at least the version I'm using v2.0.50727).
The way this is done is:
/p:DefineConstants="MY_MACRO1;MY_MACRO2"
The result of doing this is:
Target CoreCompile:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Csc.exe /define:MY_MACRO1;MY_MACRO2 ...