Use of #if defined(_DEBUG) - c++

I am writing a C++ MFC-based application. Occasionally I will add #if defined(_DEBUG) statements in order to help me during the development process. My manager has demanded that I remove all such statements as he doesn't want two versions of the code. My feeling is that without the ability to use #if defined(_DEBUG), bugs are more likely to creep in undetected during the development process.
Does anyone else have any thoughts on this?

Well, the runtime library and MFC both come in debug and release versions, so there will always be two versions of the code.
Using #ifdef _DEBUG and assert() will help you in the debugging process.
BUT...
It is not recommended to add class/struct members inside an #ifdef clause, because the binary interface of the object will differ: if you serialize such a struct, or send it between debug and release builds, the two layouts will not match.
For example:
#include <assert.h>

class MyClass
{
    void SetA(int a)
    {
        assert(a < 10000); // this is recommended
    }

#ifdef _DEBUG
    int m_debugCounter; // this is not recommended
#endif
};
In this example, sizeof(MyClass) differs between the debug and release builds.
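One hedged mitigation (my own sketch, not from the answer above) is to pin the expected layout with a static_assert (C++11), so any drift becomes a compile-time error rather than a serialization surprise. The expected size of 1 assumes MyClass stays an empty class in release builds; the debug-only int member would trip the check and force a conscious decision:

static_assert(sizeof(MyClass) == 1,
              "debug-only members must not change the binary layout");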

There are pros and cons for having debug code compiled out of production code.
Some of the pros:
The production executables will be smaller
Resources are not wasted preparing logging messages, statistics, etc which are never used
You may be able to remove dependencies on external libraries, if they are only used by the debugging code
Some of the cons:
You end up with two different executables
It makes troubleshooting in the field more difficult because some of the debugging logs are not available in the production version
There is a risk that the behavior of the two versions is not the same. For example, bugs might appear in the production version which did not appear during testing, because the testing was done on the debug version
There is a risk of incompatibilities between the versions, such as files having slightly different formats (as indicated in eranb's answer).
In particular, if debug code is compiled out (even assert() calls), be very careful that it has no side effects. Otherwise you will create bugs which disappear when debugging is turned on.
You can achieve at least some of the pros by using a decent logging framework. For example, I have had some success using rLog - one of the things I like about it is that it is optimized to minimize the overhead of dormant logging statements. Other, more up-to-date logging frameworks provide similar functionality.
Having said that, every C++ environment I have worked with has at least some level of compiling debug code in or out. For example, assert() is compiled in or out depending on the NDEBUG macro. The ASSERT() / _DEBUG pair seems to be an MSVC-specific variation on that theme (although as far as I know MSVC also supports the more standard assert() / NDEBUG).
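As a hedged illustration (my own sketch; MY_VERIFY is a hypothetical name), the two conventions can be bridged with a single wrapper macro that is active only in debug builds:

#include <cassert>

// Standard assert() is disabled when NDEBUG is defined;
// MSVC/MFC-style checks are enabled when _DEBUG is defined.
#if defined(_DEBUG) && !defined(NDEBUG)
    #define MY_VERIFY(expr) assert(expr)
#else
    #define MY_VERIFY(expr) ((void)0)
#endif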

If the manager wants this, he doesn't understand what it takes to achieve code quality...
If the #if blocks have to be removed, you can also remove ASSERTs of any kind, because they just hide an #if _DEBUG.
And keep in mind that MFC and the CRT themselves are full of this genuinely useful #if _DEBUG code!
Without such special _DEBUG blocks I wouldn't be able to detect misuse of classes, or to trap internal problems.

Related

Is there a difference in source code for release and debug compiled program? [C/C++]

I've been getting more into C++ programming as of late and keep running into the whole 'debug vs release' compiled versions. Now I feel like I've got a pretty decent understanding of some of the differences between release and debug versions of compiled code. For the debug version of code, the compiler doesn't attempt to optimize the code, so that you can run a debugger and step through your program line by line. Essentially the compiled code closely resembles your source code in how it is executed. When compiling in release mode, the compiler attempts to optimize the program such that it has the same functionality, but is more efficient.
However, I'm curious as to whether or not there are instances where the source code between release and debug version can be different. That is, when we refer to debug vs release, are we always just talking about the compiled code, or can there exist differences in the source code?
This question arises due to me working in a proprietary programming language in which a formal, step by step debugger doesn't exist, yet serial monitors do exist. Thus a lot of our 'debug' vs 'release' code is implemented via #defines which look something like this:
#ifdef _DEBUG
check that error didn't occur...
SerialPrint("Error occurred")
#endif
So to summarize my question, depending on your IDE, are there often settings for implementing what I've illustrated? That is, when you attempt to compile to a debug version, can it be integrated with changes in the source code? Or does release vs debug typically just refer to the compiled binaries?
Thank you!
Is there a difference in source code for release and debug compiled program?
It depends on the source code, and the options used to compile the library or program. Below are a few differences I am aware of.
ASSERTS
The simplest of "debugging and diagnostics" is an assert. They are in effect when NDEBUG is not defined. Asserts create self-debugging code, and they snap when an unexpected condition is encountered. The trick is you have to assert everything. Everywhere you validate parameters and state, you should see an assert. Everywhere there's an assert, you should see an if to validate parameters and state.
I laugh when I see a code base without asserts. I kind of say to myself, the devs have too much time on their hands if they are wasting it under the debugger. I often ask why they don't use asserts, and they usually answer with the following...
Posix assert sucks because it calls abort. If you are debugging a program, then you usually want to step through the code to see how it handles the negative condition that caused the assert to fire. Terminating the program runs afoul of the "debugging and diagnostic" purpose. It has got to be one of the dumbest decisions in the history of C/C++. No one seems to recall the reasoning for the abort (a few years ago I tried to track down the pedigree on various C/C++ standards lists).
Usually you replace the useless Posix assert with something more useful, like an assert that raises a SIGTRAP on Linux or calls DebugBreak on Windows. See, for example, a sample trap.h. You replace the Posix assert with your assert to ensure libraries you are using get the updated behavior (if they have already been compiled, then its too late).
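A minimal sketch of such a replacement (my own approximation, not the referenced trap.h; MY_ASSERT and MY_TRAP are hypothetical names). It reports the failure and breaks into the debugger instead of calling abort:

#include <stdio.h>

#if defined(_WIN32)
    #include <windows.h>
    #define MY_TRAP() DebugBreak()      // break into the debugger on Windows
#else
    #include <signal.h>
    #define MY_TRAP() raise(SIGTRAP)    // stop under gdb on Linux
#endif

#ifndef NDEBUG
    #define MY_ASSERT(expr)                                         \
        do {                                                        \
            if (!(expr)) {                                          \
                fprintf(stderr, "assert failed: %s (%s:%d)\n",      \
                        #expr, __FILE__, __LINE__);                 \
                MY_TRAP();  /* break for the debugger, not abort */ \
            }                                                       \
        } while (0)
#else
    #define MY_ASSERT(expr) ((void)0)
#endif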
I also laugh when projects like ISC's BIND (the DNS server that powers the Internet) DoS themselves with their asserts (they have their own assert; they don't use Posix assert). There are a number of CVEs against BIND for its self-inflicted DoS. DoS'ing yourself is right up there with "let's abort a program being debugged".
For completeness, Microsoft Foundation Classes (MFC) used to have something like 16,000 or 20,000 asserts to help catch mistakes early. That was back in the late 1990s or mid 2000s. I don't know what the state is today.
APIs
Some APIs exist that are purposefully built for "debugging and diagnostics". Other APIs can be used for it even though they are not necessarily safe to use in production.
An example of the former (purposefully built) is a Logging and DebugPrint API. Apple successfully used it to egress a user's FileVault passwords and keys. Also see os x filevault debug print.
An example of the latter (not safe for production) is Windows' IsBadReadPointer and IsBadWritePointer. They are not safe for production because they suffer from race conditions. But they are usually fine for development, because you want the extra scrutiny.
When we perform security reviews and audits, we often ask/recommend removing all non-essential logging; and ensure the logging level cannot be changed at runtime. When an app goes production, the time for debugging is over. There's no reason to log everything.
Libraries
Sometimes there are special libraries to help with debugging and diagnostics. Linux's Electric Fence and Microsoft's CRT Library come to mind. Both are memory checkers with APIs. In this case, your link command will be different, too.
Options
Sometimes you need additional options or defines to help with debugging and diagnostics. Glibc++ and -D_GLIBCXX_DEBUG come to mind. Another one is concept checking, which used to be enabled by the define -D_GLIBCXX_CONCEPT_CHECKS. It's Boost code and it's broken, so you should not use it. In these cases, your compile flags will be different.
Another one I often laugh at is a Release build that lacks the NDEBUG define. That includes Debian and Ubuntu as a matter of policy. The NSA, GCHQ and other three-letter agencies thank them for taking sensitive information (like server keys), stripping the encryption (writing it to a file unprotected), and then egressing it (sending it via Windows Error Reporting, Apport Error Reporting, etc).
Initialization
Some development environments perform initialization with special bit patterns when a value is not explicitly initialized. It's really just a feature of the tools, like the compiler or linker. Microsoft's tools come to mind; see When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete? GCC had a feature request for it, but I don't think anything was ever done with it.
I often laugh when I disassemble a production DLL and see the Microsoft debug bit patterns, because I know they are shipping a Debug DLL. I laugh because it often indicates the Release DLL has a memory error that the dev team was not able to clear. Adobe is notorious for doing this (not surprisingly, Adobe supplies some of the most insecure software on the planet, even though they don't supply an operating system like Apple or Microsoft).
#ifdef _DEBUG
check that error didn't occur...
SerialPrint("Error occurred")
#endif
It makes me want to cry, but you still have to do this in 2016. GDB is (was?) broken under Aarch64, X32 and S/390, so you have to use printf's to debug your code.
The C++ standard supports a kind of debug versus release via the assert macro, whose behavior is governed by whether the NDEBUG macro symbol is defined. But this is not intended as an application-wide setting. The standard explicitly notes that each time <assert.h> or <cassert> is included, regardless of how many times it's already been included, it changes the effective definition of assert according to the current definitional state of NDEBUG.
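A minimal sketch of that per-inclusion behavior (assuming the translation unit is compiled without NDEBUG on the command line):

#include <cassert>      // NDEBUG not defined yet: assert is active here

void checked(int x)   { assert(x > 0); }   // fires on x <= 0

#define NDEBUG
#include <cassert>      // assert is redefined as a no-op from here on

void unchecked(int x) { assert(x > 0); }   // never fires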
The compiler vendor's implementation of the standard library may rely on other symbols.
And application frameworks may rely on yet other symbols, e.g. _DEBUG, which is a symbol defined by the Visual C++ compiler when you specify the (debug library) /MTd or /MDd option.
In regards to IDE settings, you're free to do what you want. Yes, some IDEs (like MS Visual Studio) or tools like CMake add the _DEBUG macro definition specifically for debug configurations, but you could also define it yourself if it's missing. Also, the _DEBUG name is not set in stone; you could define MY_PROJECT_DEBUG or whatever instead.
If release and debug versions stay identical in regards to their primary functionality, you're fine. You could add any code wrapped in #ifdef _DEBUG (or otherwise #ifndef _DEBUG) as long as the end result produced by the program is the same.
The usual mistake there is when debug code, which is considered optional, produces side effects. Consider the assert example given by others; an approximate implementation looks like this:
#ifdef NDEBUG
#define assert(x) ((void)0)
#else
#define assert(x) ((x) ? (void)0 : abort())
#endif
Notice how assert doesn't evaluate x in release mode (provided that NDEBUG is only defined in release mode). This means that if the condition passed as the macro argument has side effects, your code will behave differently in debug and release modes:
#include <assert.h>

int main()
{
    int x = 5;
    assert(x-- == 5);
    return x; // returns 5 in release mode, 4 in debug mode
}
The behavior above is not something you want, as it changes the end result. Real-world code may be more complex and its side effects less evident, e.g. assert(SomeFunctionCall()) and the like.
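The usual repair (a hedged sketch of the example above) is to hoist the side effect out of the assert so both build modes agree:

#include <assert.h>

int main()
{
    int x = 5;
    int observed = x--;       // the side effect now happens unconditionally
    assert(observed == 5);    // the check may vanish without affecting x
    return x;                 // returns 4 in both debug and release modes
}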
Note that asserts may not be the best example though, as some people like to have them enabled even in release builds.

Is there a way to run C code designed for an embedded micro-controller on a normal computer?

I have C code written for an ATmega16 chip, and it is full of keywords like:
flash, eeprom, bit
and macros(?) like
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
that come before function signatures.
Now what I want to do is write and run unit tests that verify the correctness of the logic of the controller unit and I want to be able to run these tests on any computer and not need to have the "device" that the code represents.
I searched a lot and came across "abstracting the hardware" and "replacing them with stubs" kind of solutions, but I'm not sure how I can abstract something like "interrupt [TIM1_OVF]" in the code!
I was wondering if there are any special tools that provide the environment for running this sort of code?
And also, if I am going at it wrong, can anybody point me in the right direction, given that changing or rewriting (!) the micro-controller's code might not be an option?
Thanks a bunch.
Your examples are not ISO C; they are compiler-specific extensions, and they are not even common across AVR compilers, let alone architectures. In many cases they can be worked around by defining macros that require little or no modification of the code. Doing so is a good idea anyway if you want your code to be portable, even across different vendors' AVR compilers, although a combination of techniques may be required.
Most compilers support an "always include" option that allows a header file to be included from the command line without an explicit #include directive in the source. Creating a header with your compatibility macros, and including it either implicitly as described or explicitly in the code, is a useful technique. For example, for the issues you have mentioned, you might have:
// compatibility.h
#if !defined COMPATIBILITY_INCLUDE
#define COMPATIBILITY_INCLUDE

#if defined __IAR_SYSTEMS_ICC__
    #define INTERRUPT( irq, handler ) __interrupt [irq] void handler(void)
#elif defined _WIN32
    #define INTERRUPT( irq, handler ) void handler(void)
    #define __flash const
    #define __eeprom const
    #define __bit char
#else
    #error Unknown toolchain/environment
#endif

#endif
That will remove the memory location qualifiers from the Win32 code, and define __bit as a char. The INTERRUPT macro will turn a handler into a regular function on Win32. It does require your code to be modified, but since every toolchain does this differently, that is perhaps no bad thing.
For example in this case you would change:
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
{
...
}
to
INTERRUPT( TIM1_OVF, timer1_ovf_isr )
{
...
}
Note that you should use appropriate target macros in the compatibility file - I have guessed at IAR for example; you may be using a different compiler. Your compiler documentation should specify the available predefined macros; alternatively, the Pre-defined Compiler Macros "project" on SourceForge is a useful resource.
Some of the transformations may change the code semantically, such as swapping __bit for char: if the bit is assigned a value greater than one and then compared with 1, the embedded target is likely to yield true, while the PC build will not. It might better be transformed to _Bool, but your compiler may give warnings about implicit conversions. My suggestions may not necessarily be the best possible transformation either - consult your compiler's manual for the precise semantics and decide how best to transform them to standard C for test builds.
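A hedged illustration of that gap, using the compatibility substitution from above (the exact truncation behavior of __bit is compiler-specific; this assumes it keeps only the low bit):

#define __bit char          // PC test-build substitution from compatibility.h

int main(void)
{
    __bit flag = 3;         // embedded target: stores only the low bit, i.e. 1
                            // PC build: stores 3
    return (flag == 1);     // embedded: true; PC: false
}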
An alternative that preserves the proprietary semantics is to run your unit tests in an instruction-set simulator, using debugger scripting (if available) to implement stubs for hardware interaction; however, that method makes it impossible to use off-the-shelf unit-testing frameworks such as CUnit.
Depending on your toolchain, you may already have an AVR simulator available, which would allow you to run your unit tests on any PC. For example, IAR provides "C-SPY", an AVR simulator that supports a terminal window, can show register values, can support generation of interrupts, etc. Assuming you keep your unit sizes reasonable, you do not need significant infrastructure or stubbed interfaces to make this work.
One large benefit of running unit tests on your target platform (with your target compiler) is that you can account for any particular behaviors that will be caused by the platform (endianness, word size, compiler or library peculiarities, etc), compared to running in a PC environment.

Best (cleanest) way for writing platform specific code

Say you have a piece of code that must be different depending on the operating system your program is running on.
There's the old school way of doing it:
#ifdef WIN32
// code for Windows systems
#else
// code for other systems
#endif
But there must be cleaner solutions than this one, right?
The typical approach I've seen first hand at a half-dozen companies over my career is the use of a Hardware Abstraction Layer (HAL).
The idea is that you put the lowest level stuff into a dedicated header plus statically linked library, which includes things like:
Fixed width integers (int64_t on Linux, __int64 on Windows, etc).
Common library functions (strtok_r() vs strtok_s() on Linux vs Windows).
A common data type setup (i.e. typedefs for all data types, such as xInt, xFloat, etc, used throughout the code), so that if the underlying type changes for a platform, or a new platform is suddenly supported, there is no need to re-write and re-test the code that depends on it, which can be extremely expensive in terms of labor. A sketch of such a header follows this list.
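A hedged sketch of what a small slice of such a HAL header might look like (file, type and macro names are hypothetical, not from any particular product):

// hal_types.h (hypothetical)
#ifndef HAL_TYPES_H
#define HAL_TYPES_H

#include <string.h>

#if defined(_WIN32)
    typedef __int64 xInt64;
    #define HAL_STRTOK(s, delim, ctx) strtok_s((s), (delim), (ctx))
#else
    #include <stdint.h>
    typedef int64_t xInt64;
    #define HAL_STRTOK(s, delim, ctx) strtok_r((s), (delim), (ctx))
#endif

#endif // HAL_TYPES_H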
The HAL itself is usually riddled with preprocessor directives like in your example, and that's just the reality of the matter. If you wrap it with run-time if/else statements instead, your compilation will fail due to unresolved symbols. Or worse, you could have extra symbols included which will increase the size of your output, and likely slow down your program if that code is executed frequently.
So long as the HAL has been well-written, the header and library for the HAL give you a common interface and set of data types to use in the rest of your code with minimal hassle.
The most beautiful aspect of this, from a professional standpoint, is that all of your other code never has to concern itself with architecture or operating system specifics. You'll have the same code-flow on various systems, which will, by extension, allow you to test the same code in a variety of different manners and find bugs you wouldn't normally expect or test for. From a company's perspective, this saves a ton of money in terms of labor, and in not losing clients due to them being angry with bugs in production software.
I've had to do a lot of this sort of stuff in my career, supporting code that builds and runs on an embedded device plus on Windows, and then also having it run on different ASICs and/or revisions of ASICs.
I tend to do what you suggest and then when things really diverge, move on to defining the interface I desire to be fixed between platforms and then having separate implementation files or even libraries. It can get really messy as the codebase gets older and more exceptions need to be added.
Sometimes you can hide this stuff in header files, so your code looks 'clean', but a lot of times that's just obfuscating what's going on behind a bunch of macro magic.
The only other thing I'd add is I tend to make the #ifdef/#else/#endif chain fail if none of the options are defined. This forces me to revisit the issue when a new revision comes along. Some folks prefer it to have a default, but I find that just hides potential failures.
Granted, I'm working in the embedded world where code space is paramount (since memory is small and fixed), and code cleanliness unfortunately has to take a back seat.
An adopted practice for non-trivial projects is to write platform-specific code in separate files (and in separate directories, where applicable), avoiding "localized" #ifdefs to the fullest possible extent.
Say you are developing a library called "Example" and example.hpp will be your library header:
example.hpp
#include "platform.hpp"
//
// here: platform-independent declarations, includes etc
//
// below: platform-specific includes
#if defined(WINDOWS)
#include "windows\win32_specific_code.hpp"
// other win32 headers
#elif defined(POSIX)
#include "posix/linux_specific_code.hpp"
// other linux headers
#endif
platform.hpp (simplified)
#if defined(WIN32) && !defined(UNIX)
#define WINDOWS
#elif defined(UNIX) && !defined(WIN32)
#define POSIX
#endif
win32_specific_code.hpp
void Function1();
win32_specific_code.cpp
#include "../platform.hpp"
#ifdef WINDOWS // We should not violate the One Definition Rule
#include "win32_specific_code.hpp"
#include <iostream>
void Function1()
{
std::cout << "You are on WINDOWS" << std::endl;
}
//...
#endif /* WINDOWS */
Of course, declare Function1() in your linux_specific_code.hpp file as well.
Then, when implementing it for Linux (in the linux_specific_code.cpp file), be sure to surround everything for conditional compilation as well, similar to what I did above (e.g. using #ifdef POSIX). Otherwise, the compiler will generate multiple definitions and you'll get a linker error.
Now, all a user of your library must do is #include <example.hpp> in his code, and place either #define WINDOWS or #define POSIX in his compiler's preprocessor definitions. In fact, the second step might not be necessary at all, assuming his environment already defines either one of the WIN32 or UNIX macros. This way, Function1() can already be used from the code in a cross-platform manner.
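A hedged usage sketch of the consumer side (the build command and file layout are illustrative):

// main.cpp - the library user's entire involvement
// Build, e.g.:  g++ -DWINDOWS main.cpp windows/win32_specific_code.cpp
#include "example.hpp"

int main()
{
    Function1();   // dispatches to the platform-specific implementation
    return 0;
}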
This approach is pretty much the one used by the Boost C++ Libraries. I personally find it clean and sensible. If, however, you don't like it, you can have a read at Chromium's conventions for multi-platform development for a somewhat different strategy.

Dos and Don'ts of Conditional Compile [closed]

When is doing conditional compilation a good idea and when is it a horribly bad idea?
By conditional compile I mean using #ifdefs to only compile certain bits of code in certain conditions. The #defines themselves may be in either a common header file or introduced via the -D compiler directive.
The good ideas:
header guards (you can't do much better for portability)
conditional implementation (juggling with platform differences)
debug specific checks (asserts, etc...)
per suggestion: extern "C" { and } so that the same headers may be used by the C++ implementation and by the C clients of the API
The bad idea:
changing the API between compile flags, since it forces the client to change its uses with the same compile flags... urk!
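A hedged sketch of that trap (the names are hypothetical): the signature itself changes with the flag, so every client must be built with exactly the same flags or calls will no longer match the compiled library:

struct Data;
struct Stats;

#ifdef ENABLE_STATS                      // hypothetical build flag
void Process(Data& d, Stats& out);       // the API clients see with the flag
#else
void Process(Data& d);                   // a different API without it
#endif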
Don't put ifdef in your code.
It makes the code really hard to read and understand. Please make the code as easy to read as possible for the maintainer (he knows where you live and owns an axe).
Hide the conditional code in separate functions and use the #ifdef to define which functions are being used.
DON'T use the #else part to supply a default definition. If you are doing that, you are saying that one platform is unique and all the others are the same. This is unlikely; what is more likely is that you know what happens on a couple of platforms, so you should use the #else section to place a #error, so that when the code is ported to a new platform a developer has to explicitly fix the condition for his platform.
x.h
#if defined(WINDOWS)
#define MyPlatformSleepSeconds(x) Sleep((x) * 1000)   /* Sleep() takes milliseconds */
#elif defined (UNIX)
#define MyPlatformSleepSeconds(x) sleep(x)            /* sleep() takes seconds */
#else
#error "Please define appropriate sleep for your platform"
#endif
Don't be tempted to expand a macro into multiple lines of code. That leads to madness.
p.h
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) doPartA(x); \
doPartB(y); \
couple(x,y)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
If you develop the code on Windows and then test on Solaris 3.1.1 later, you may find unexpected bugs when people do things like:
int loop;
for (loop = 0; loop < 10; ++loop)
    DO_SOME_TASK(loop, loop); // Windows: works fine.
                              // Solaris: only doPartA() is inside the loop;
                              // doPartB() and couple() run once, after the loop finishes.
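A common repair (my addition, not part of the original answer) is to wrap multi-statement macros in do { ... } while (0) so they behave as a single statement everywhere:

void doPartA(int x);                 // as declared for p.h
void doPartB(int y);
void couple(int x, int y);

#define DO_SOME_TASK(x, y)  do { doPartA(x); doPartB(y); couple(x, y); } while (0)

// Now all three calls execute on every iteration, on every platform:
// for (loop = 0; loop < 10; ++loop)
//     DO_SOME_TASK(loop, loop);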
Basically, you should try to keep the amount of code that is conditionally compiled to a minimum, because you should be trying to test all of it, and having lots of conditions makes that more difficult. It also reduces the readability of the code; conditionally compiling whole files is clearer, e.g., by putting platform-specific code in a separate file for each platform and having them all present the same API from the perspective of the rest of the program. Also try to avoid using it in function headers; again, that's because that's a place where it is particularly confusing.
But that's not to say that you should never use conditional compilation. Just try to keep it short and minimal. (Where I can, I use conditional compilation to control the definitions of other macros which are then just used in the rest of the code; that seems to be clearer to me at least.)
It's a bad idea whenever you don't know what you're doing. It can be a good idea when you're effectively solving an issue this way :).
The way you describe conditional compiling, include guards are part of it. Using them is not only a good idea, it's a way to avoid compilation errors.
For me, conditional compiling is also a way to target multiple compilers and operating systems. I'm involved in a lib that's supposed to be compilable on Windows XP and newer, 32 or 64 bit, using MinGW and Visual C++; on Linux, 32 and 64 bit, using gcc/g++; and on MacOS using I-don't-know-what (I'm not maintaining that, but I assume it's a gcc port). Without the preprocessor conditions, it would be pretty much impossible to create a single source file that's compilable anywhere.
Another pragmatic use of conditional compiles is to "comment out" sections of code which contain standard "C" comments (i.e. /* */). Some compilers do not allow nesting of these comments, for example:
/* comment out block of code
.... code ....
/* This is a standard
* comment.
*/ ... oops! Some compilers try to compile code after this closing comment.
.... code ....
end of block of code*/
(As you can see in the syntax highlighting, StackOverflow does not nest comments.)
Instead you can use #ifdef to get the right effect, for example:
#ifdef _NOT_DEFINED_
.... code ....
/* This is a standard
* comment.
*/
.... code ....
#endif
In the past, if you wanted to produce truly portable code, you had to resort to some form of conditional compilation. With the proliferation of portable libraries (such as APR, Boost, etc.) this reason carries little weight IMHO. If you are using conditional compilation simply to compile out blocks of code that are not needed for particular builds, you should really revisit your design - I should imagine that this would become a nightmare to maintain.
Having said all that, if you do need to use conditional compilation, I would hide as much of it as I can away from the main body of the code, and limit it to very specific cases that are very well understood.
Good/justifiable uses are based on cost/benefit analysis. Obviously, people here are very conscious of the risks:
in linking objects that saw different versions of classes, functions etc.
in making code hard to understand, test and reason about
But, there are uses which often fall into the net-benefit category:
header guards
code customisations for distinct software "ecosystems", such as Linux versus Windows, Visual C++ versus GCC, CPU-specific optimisations, sometimes word size and endianness factors (though with C++ you can often determine these at compile via template hackery, but that may prove messier still) - abstracts away lower-level differences to provide a consistent API across those environments
interacting with existing code that uses preprocessor defines to select versions of APIs, standards, behaviours, thread safety, protocols etc. (sad but true)
compilation that may use optional features when available (think of GNU configure scripts and all the tests they perform on OS interfaces etc)
requesting that extra code be generated in a translation unit, such as adding main() for a standalone app versus omitting it for a library
controlling code inclusion for distinct logical build modes such as debug and release
It is always a bad idea. What it does is effectively create multiple versions of your source code, all of which need to be tested, which is a pain, to say the least. Unfortunately, like many bad things it is sometimes unavoidable. I use it in very small amounts when writing code that needs to be ported between Windows and Linux, but if I found myself doing it a lot, I would consider alternatives, such as having two separate development sub-trees.

Best practices and tools for debugging differences between Debug and Release builds?

I've seen posts talk about what might cause differences between Debug and Release builds, but I don't think anybody has addressed from a development standpoint what is the most efficient way to solve the problem.
The first thing I do when a bug appears in the Release build but not in Debug is I run my program through valgrind in hopes of a better analysis. If that reveals nothing -- and this has happened to me before -- then I try various inputs in hopes of getting the bug to surface also in the Debug build. If that fails, then I try to track back through the changes to find the most recent version at which the two builds started to diverge in behavior. And finally, I guess, I resort to print statements.
Are there any best software engineering practices for efficiently debugging when the Debug and Release builds differ? Also, what tools are there that operate at a more fundamental level than valgrind to help debug these cases?
EDIT: I notice a lot of responses suggesting some general good practices such as unit testing and regression testing, which I agree are great for finding any bug. However, is there something specifically tailored to this Release vs. Debug problem? For example, is there such a thing as a static analysis tool that says "Hey, this macro or this code or this programming practice is dangerous because it has the potential to cause differences between your Debug/Release builds?"
One other "Best Practice", or rather a combination of two: Have Automated Unit Tests, and Divide and Conquer.
If you have a modular application, and each module has good unit tests, then you may be able to quickly isolate the errant piece.
The very existence of two configurations is a problem from a debugging point of view. Proper engineering would have the system on the ground and in the air behave the same way, and would achieve this by reducing the number of ways by which the system can tell the difference.
Debug and Release builds differ in 3 aspects:
_DEBUG define
optimizations
different version of the standard library
The best way around, the way I often work, is this:
Disable optimizations where performance is not critical; debugging is more important there. Most important is to disable function auto-inlining, keep the standard stack frame, and disable variable-reuse optimizations. These hinder debugging the most.
Monitor code for dependence on the DEBUG define. Never use debug-only asserts, or any other tools sensitive to the DEBUG define.
By default, compile and work with the release configuration.
When I come across a bug that only happens in release, the first thing I always look for is use of an uninitialized stack variable in the code that I am working on. On Windows, the debug C runtime will automatically initialise stack variables to a known bit pattern, 0xCDCDCDCD or something similar. In release, stack variables will contain whatever value was last stored at that memory location, which is going to be an unexpected value.
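A hedged illustration of that class of bug (the function and values are made up):

int computeTotal(int addBonus)
{
    int bonus;              // bug: assigned on only one path
    if (addBonus)
        bonus = 10;
    // addBonus == 0: the debug build reads the 0xCDCDCDCD fill pattern
    // (consistently wrong); release reads leftover stack garbage.
    return 100 + bonus;
}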
Secondly, I will try to identify what is different between the debug and release builds. I look at the compiler optimization settings that the compiler is passed in the Debug and Release configurations. You can see these on the last property page of the compiler settings in Visual Studio. I will start with the release config and change the command line arguments passed to the compiler one item at a time until they match the command line used for compiling in debug. After each change, I run the program and try to reproduce the bug. This will often lead me to the particular setting that causes the bug to happen.
A third technique can be to take a function that is misbehaving and disable optimizations around it using the pre-processor. This will allow you to run the program in release with the particular function compiled in debug. The behaviour of the program built in this way will help you learn more about the bug.
#pragma optimize( "", off )
int foo() {
    return 1;
}
#pragma optimize( "", on )
From experience, the problems are usually stack initialization, memory scrubbing in the memory allocator, or strange #define directives causing the code to be compiled incorrectly.
The most obvious cause is simply the use of #ifdef and #ifndef directives associated with DEBUG or similar symbols that change between the two builds.
Before going down the debugging road (which is not my personal idea of fun), I would inspect both command lines and check which flags are passed in one mode and not the other, then grep my code for these flags and check their uses.
One particular issue that comes to mind are macros:
#ifdef _DEBUG_
#define CHECK(CheckSymbol) { if (!(CheckSymbol)) throw CheckException(); }
#else
#define CHECK(CheckSymbol)
#endif
also known as a soft-assert.
Well, if you call it with a function that has side effects, or rely on it to guard a function (contract enforcement) and somehow catch the exception it throws in debug and ignore it... you will see differences in release :)
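For instance (a hedged sketch; ConnectToServer is hypothetical), guarding a call that has an essential side effect removes the call itself in release:

bool ConnectToServer();   // hypothetical; has a side effect we rely on

void Startup()
{
    // Debug build: ConnectToServer() runs and its result is checked.
    // Release build: CHECK(...) expands to nothing, so the connection
    // is never even attempted.
    CHECK(ConnectToServer());
}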
When debug and release differ it means:
your code depends on the _DEBUG or similar macros (defined when compiling a debug version - no optimizations)
your compiler has an optimization bug (I have seen this a few times)
You can easily deal with (1) (code modification), but with (2) you will have to isolate the compiler bug. After isolating the bug, you do a little "code rewriting" to get the compiler to generate correct binary code (I did this a few times - the most difficult part is isolating the bug).
I can say that when debug information is enabled for the release version, the debugging process works... (though because of optimizations you might see some "strange" jumps when running).
You will need to have some "black-box" tests for your application - valgrind is a solution in this case. These solutions help you find differences between release and debug (which is very important).
The best solution is to set up something like automated unit testing to thoroughly test all aspects of the application (not just individual components, but real world tests which use the application the same way a regular user would with all of the dependencies). This allows you to know immediately when a release-only bug has been introduced which should give you a good idea of where the problem is.
A good practice of actively monitoring and seeking out problems beats any tool that helps you fix them long after they happen.
However, when you have one of those cases where it's too late: too many builds have gone by, can't reproduce consistently, etc. then I don't know of any one tool for the job. Sometimes fiddling with your release settings can give a bit of insight as to why the bug is occurring: if you can eliminate optimizations which suddenly make the bug go away, that could give you some useful information about it.
Release-only bugs can fall into various categories, but the most common ones (aside from something like a misuse of assertions) are:
1) Uninitialized memory. I use this term rather than "uninitialized variables", as a variable may be initialized yet still point to memory which hasn't been initialized properly. For this, memory diagnostic tools like Valgrind can help.
2) Timing (ex: race conditions). These can be a nightmare to debug, but there are some multithreading profilers and diagnostic tools which can help. I can't suggest any off the bat, but there's Coverity Integrity Manager as one example.