I've been wondering if it is possible to enable any debug checks for libc++. One of my favorite things about MSVC's STL is that it catches some otherwise hard to find bugs right from the get go (though I wish it were easier to disable for speed). After peeking in the headers I tried:
#define _LIBCPP_DEBUG_LEVEL 2
However, this leads to build errors ('__get_db undeclared'). Any insights on whether this is a work in progress, or if there is a different expected way to enable these checks?
This is definitely a work in progress.
There's a (very old) status page here that I need to update.
The idea is that users will interact with it by setting the preprocessor symbol _LIBCPP_DEBUG. Just defining it will give basic checks; setting it to a number greater than 1 will give more extensive checks.
However, as you have found, it is currently non-functional.
It appears some progress has been made on this in the meantime. At least there is now some documentation that no longer states that the debug mode is horribly broken.
As stated in the docs I link, the debug mode is controlled by defining _LIBCPP_DEBUG to either 0 or 1; the macro _LIBCPP_DEBUG_LEVEL appears to be an internal switch.
However, judging by questions like this one, spurious compilation errors can and do still happen.
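For what it's worth, here is a minimal sketch of how you would turn it on today, assuming the documented behaviour (the exact diagnostics depend on your libc++ version):
// Sketch only: compile with something like
//   clang++ -std=c++11 -stdlib=libc++ -D_LIBCPP_DEBUG=1 demo.cpp
// assuming your libc++ build ships the debug-mode support.
#include <vector>
int main()
{
    std::vector<int> v = {1, 2, 3};
    std::vector<int>::iterator it = v.begin();
    v.push_back(4);   // may reallocate and invalidate 'it'
    return *it;       // debug mode is supposed to flag this use of an invalidated iterator
}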
I've been getting more into C++ programming as of late and keep running into the whole 'debug vs release' question for compiled versions. I feel like I've got a pretty decent understanding of some of the differences between release and debug versions of compiled code. For the debug version, the compiler doesn't attempt to optimize the code, so that you can run a debugger and step through your program line by line; essentially the compiled code closely resembles your source code in how it is executed. When compiling in release mode, the compiler attempts to optimize the program so that it has the same functionality but is more efficient.
However, I'm curious as to whether or not there are instances where the source code between release and debug version can be different. That is, when we refer to debug vs release, are we always just talking about the compiled code, or can there exist differences in the source code?
This question arises due to me working in a proprietary programming language in which a formal, step by step debugger doesn't exist, yet serial monitors do exist. Thus a lot of our 'debug' vs 'release' code is implemented via #defines which look something like this:
#ifdef _DEBUG
check that error didn't occur...
SerialPrint("Error occurred")
#endif
So to summarize my question, depending on your IDE, are there often settings for implementing what I've illustrated? That is, when you attempt to compile to a debug version, can it be integrated with changes in the source code? Or does release vs debug typically just refer to the compiled binaries?
Thank you!
Is there a difference in source code for release and debug compiled program?
It depends on the source code, and the options used to compile the library or program. Below are a few differences I am aware of.
ASSERTS
The simplest of "debugging and diagnostics" is an assert. They are in effect when NDEBUG is not defined. Asserts create self-debugging code, and they snap when an unexpected condition is encountered. The trick is you have to assert everything. Everywhere you validate parameters and state, you should see an assert. Everywhere there's an assert, you should see an if to validate parameters and state.
I laugh when I see a code base without asserts. I kind of say to myself, the devs have too much time on their hands if they are wasting it under the debugger. I often ask why they don't use asserts, and they usually answer with the following...
Posix assert sucks because it calls abort. If you are debugging a program, then you usually want to step through the code to see how it handles the negative condition that caused the assert to fire. Terminating the program runs afoul of the "debugging and diagnostics" purpose. It has got to be one of the dumbest decisions in the history of C/C++. No one seems to recall the reasoning for the abort (a few years ago I tried to track down the pedigree on various C/C++ standards lists).
Usually you replace the useless Posix assert with something more useful, like an assert that raises a SIGTRAP on Linux or calls DebugBreak on Windows. See, for example, a sample trap.h. You replace the Posix assert with your assert to ensure the libraries you are using get the updated behavior (if they have already been compiled, then it's too late).
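To sketch the idea (the macro name and layout here are mine, not the contents of the linked trap.h): an assert that breaks into the debugger instead of aborting, so you can keep stepping past the failure:
#if defined(NDEBUG)
#  define MY_ASSERT(x) ((void)0)
#elif defined(_MSC_VER)
#  include <intrin.h>
#  define MY_ASSERT(x) do { if (!(x)) __debugbreak(); } while (0)
#else
#  include <csignal>
#  define MY_ASSERT(x) do { if (!(x)) std::raise(SIGTRAP); } while (0)
#endif
// Usage: MY_ASSERT(ptr != nullptr); execution stops in the debugger rather than in abort().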
I also laugh when a project like ISC's BIND (the DNS server that powers the Internet) DoSes itself with its asserts (they have their own assert; they don't use the Posix assert). There are a number of CVEs against BIND for its self-inflicted DoS. DoS'ing yourself is right up there with "let's abort a program being debugged".
For completeness, Microsoft Foundation Classes (MFC) used to have something like 16,000 or 20,000 asserts to help catch mistakes early. That was back in the late 1990s or mid 2000s. I don't know what the state is today.
APIs
Some APIs exist that are purposefully built for "debugging and diagnostics". Other APIs can be used for it even though they are not necessarily safe to use in production.
An example of the former (purposefully built) is a Logging and DebugPrint API. Apple successfully used it to egress a user's FileVault passwords and keys. Also see os x filevault debug print.
An example of the latter (not safe for production) is Windows' IsBadReadPtr and IsBadWritePtr. It's not safe for production because it suffers from a race condition. But it's usually fine for development because you want the extra scrutiny.
When we perform security reviews and audits, we often ask/recommend removing all non-essential logging; and ensure the logging level cannot be changed at runtime. When an app goes production, the time for debugging is over. There's no reason to log everything.
Libraries
Sometimes there are special libraries to help with debugging and diagnostics. Linux's Electric Fence and Microsoft's CRT debug library come to mind. Both are memory checkers with APIs. In this case, your link command will be different, too.
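As a sketch of the kind of thing the Microsoft CRT debug library gives you (MSVC-specific, from memory; check the crtdbg.h documentation for the exact flags):
#ifdef _DEBUG
#include <crtdbg.h>
#endif
int main()
{
#ifdef _DEBUG
    // Report leaked allocations when the process exits (debug CRT only).
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
#endif
    int* leaked = new int[16];   // deliberately leaked for the demo
    (void)leaked;
    return 0;
}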
Options
Sometimes you need additional options or defines to help with debugging and diagnostics. GCC's libstdc++ and -D_GLIBCXX_DEBUG come to mind. Another one is concept checking, which used to be enabled by the define -D_GLIBCXX_CONCEPT_CHECKS. It's Boost code and it's broken, so you should not use it. In these cases, your compile flags will be different.
Another one I often laugh at is a Release build that lacks the NDEBUG define. That includes Debian and Ubuntu as a matter of policy. The NSA, GCHQ and other 3-letter agencies thank them for taking the sensitive information (like server keys), stripping the encryption (writing it to a file unprotected), and then egressing the sensitive information (sending it out via Windows Error Reporting, Apport Error Reporting, etc.).
Initialization
Some development environments perform initialization with special bit patterns when a value is not explicitly initialized. It's really just a feature of the tools, like the compiler or linker. Microsoft's tools come to mind; see When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete? GCC had a feature request for it, but I don't think anything was ever done with it.
I often laugh when I disassemble a production DLL and see the Microsoft debug bit patterns, because I know they are shipping a Debug DLL. I laugh because it often indicates the Release DLL has a memory error that the dev team was not able to clear. Adobe is notorious for doing this (not surprisingly, Adobe supplies some of the most insecure software on the planet, even though they don't supply an Operating System like Apple or Microsoft).
#ifdef _DEBUG
check that error didn't occur...
SerialPrint("Error occurred")
#endif
It makes me want to cry, but you still have to do this in 2016. GDB is (was?) broken under Aarch64, X32 and S/390, so you have to use printf's to debug your code.
The C++ standard supports a kind of debug versus release via the assert macro, whose behavior is governed by whether the NDEBUG macro symbol is defined. But this is not intended as an application wide setting. The standard explicitly notes that each time <assert.h> or <cassert> is included, regardless of how many times it's already been included, it changes the effective definition of assert according to the current definitional state of NDEBUG.
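A short illustration of that per-inclusion behaviour (this is standard-mandated, so it should behave the same on any conforming implementation):
#include <cassert>     // NDEBUG not defined here, so assert is active
void checked(int x)
{
    assert(x > 0);     // evaluated at runtime
}
#define NDEBUG
#include <cassert>     // re-inclusion redefines assert according to NDEBUG
void unchecked(int x)
{
    (void)x;           // only here to silence an unused-parameter warning
    assert(x > 0);     // now expands to ((void)0)
}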
The compiler vendor's implementation of the standard library may rely on other symbols.
And application frameworks may rely on yet other symbols, e.g. _DEBUG, which is a symbol defined by the Visual C++ compiler when you specify the (debug library) /MTd or /MDd option.
As regards IDE settings, you're free to do what you want. Yes, some IDEs (like MS Visual Studio) or tools like CMake add the _DEBUG macro definition specifically for debug configurations, but you could also define one yourself if it's missing. Also, the _DEBUG name is not set in stone; you could define MY_PROJECT_DEBUG or whatever instead.
If release and debug versions stay identical in regards to their primary functionality, you're fine. You could add any code wrapped in #ifdef _DEBUG (or otherwise #ifndef _DEBUG) as long as the end result produced by the program is the same.
The usual mistake there is when debug code, which is considered optional, produces side effects. Consider the assert example given by others; an approximate implementation looks like this:
#ifdef NDEBUG
#define assert(x) ((void)0)
#else
#define assert(x) ((x) ? (void)0 : abort())
#endif
Notice how assert doesn't evaluate x in release mode (provided that NDEBUG is defined only in release mode). This means that if the condition passed as the macro argument has side effects, your code will behave differently in debug and release modes:
#include <assert.h>
int main()
{
int x = 5;
assert(x-- == 5);
return x; // returns 5 in release mode, 4 in debug mode
}
The behavior above is not something you want, as it changes the end result. Real-world code may be more complex, with side effects that are less evident, e.g. assert(SomeFunctionCall()) and the like.
Note that asserts may not be the best example though, as some people like to have them enabled even in release builds.
I'm reviewing a C++ MFC project. At the beginning of some of the files there is this line:
#pragma optimize("", off)
I get that this turns optimization off for all following functions. But what would the motivation typically be for doing so?
I have used this exclusively to get better debug information in a particular set of code while the rest of the application is compiled with optimization on. This is very useful when running a full debug build is impossible due to the performance requirements of your application.
I've seen production code which is correct but so complicated that it confuses the optimiser into producing incorrect output. This could be the reason to turn optimisations off.
However, I'd consider it much more likely that the code is simply buggy, having Undefined Behaviour. The optimiser exposes that and leads to incorrect runtime behaviour or crashes. Without optimisations, the code happens to "work." And rather than find and remove the underlying problem, someone "fixed" it by disabling optimisations and leaving it at that.
Of course, this is about as fragile as workarounds can get. New hardware, a new OS patch, a new compiler patch: any of these can break such a "fix."
Even if the pragma is there for the first reason, it should be heavily documented.
Another alternative reason for these to be in a code base... it's an accident.
This is a very handy tool for turning off the optimizer on a specific file whilst debugging - as Ray mentioned above.
If changelists are not reviewed carefully before committing, it is very easy for these lines to make their way into codebases, simply because they were 'accidentally' still there when other changes were committed.
I know this is an old topic, but I would add that there is another reason to use this directive - though not relevant for most application developers.
When writing device drivers or other low-level code, the optimizer sometimes produces output that does not interact with the hardware correctly.
For example, code that needs to read a memory-mapped register (but not use the value read) in order to clear an interrupt might be optimized out by the compiler, producing assembly code that does not work.
This might also illustrate why (as Angew notes) use of this directive should be clearly documented.
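For illustration, a sketch of the kind of access that can be affected (the register name and address are made up; real drivers take them from the hardware documentation):
#include <cstdint>
// Hypothetical memory-mapped interrupt-status register.
static volatile std::uint32_t* const INT_STATUS_REG =
    reinterpret_cast<volatile std::uint32_t*>(0x40001000);
void clear_pending_interrupt()
{
    // The value is discarded; the hardware clears the pending interrupt as a
    // side effect of the read. Without 'volatile' (or with an over-aggressive
    // optimizer and the wrong settings) this access could be removed entirely.
    (void)*INT_STATUS_REG;
}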
It allows you to debug in release mode.
While working within C++ libraries, I've noticed that I am not granted any intellisense while inside directive blocks like "#ifndef CLIENT_DLL ... #endif". This is obviously due to the fact that "CLIENT_DLL" has been defined. I realize that I can work around this by simply commenting out the directives.
Are there any intellisense options that will enable intellisense regardless of directive evaluation?
By getting what you want, you would lose a lot.
Visual C++ IntelliSense is based on a couple major presumptions
1. that you want good/usable results.
2. that your current IntelliSense compiland will present information related to the "configuration" you are currently in.
Because your current configuration has that preprocessor directive, you will not be able to get results from the #ifndef region.
The reason makes sense if you think it through. What if the IntelliSense compiler just tried to compile the region you were in, regardless of #ifdef regions? You would get nonsense and non-compilable code. It would not be able to make heads or tails of your compiland.
I can imagine a very complex solution where it runs a smaller (new) parse on the region you are in, with only that region being assumed to be part of the compiland. However, there are so many holes in this approach (like nothing in that region being declared/defined) that this possible approach would immediately frustrate you, except in very very simple scenarios.
Generally it's best to avoid logic in #ifdef regions, and instead to delegate the usage of parameterized compilation to entire functions, so that the front-end of the compiler is always compiling those modules, but the linker/optimizer will select the correct OBJ later on.
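A small sketch of that pattern (the file and function names are mine): both configurations see the same declaration, and the build system picks which definition gets compiled and linked:
// logging.h - included everywhere, no #ifdef needed at call sites
void LogMessage(const char* text);
// logging_debug.cpp - compiled only in the Debug configuration
#include <cstdio>
void LogMessage(const char* text) { std::fprintf(stderr, "%s\n", text); }
// logging_release.cpp - compiled only in the Release configuration
void LogMessage(const char*) { /* intentionally a no-op */ }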
Hope that helps,
Will
Visual Studio 6.0 has a little better support for C++ in some areas such as this. If you need the IntelliSense, then just comment the directive out temporarily, build, and you should have IntelliSense. Just remember to re-comment it when you're through, if that was your intent.
I just wish Intellisense would work when it SHOULD in VS2008. MS "workarounds" don't work (deleting .ncb files) most of the time. Oooh,
here's another SO discussion..., let's see what IT has to say (I just love SO)
I'm often annoyed by that too ... but I wonder whether intellisense would actually be able to provide any useful information, in general, within a conditioned-out block?
The problem I see is that if the use of a variable or function changes depending on the value of a preprocessor directive, then so may its definition. If code-browsing features like "go to definition" were active within a conditioned-out block, would you want them to lead to the currently-enabled definition, or to one that was disabled by the same preprocessor conditions as the disabled code you're looking at?
I think the "principle of least surprise" dictates that the current behaviour is the safest, annoying though it is.
Why do you want to do this explicitly in the code?
There is already a configuration setting in VS with which you can enable and disable IntelliSense.
See these links:
http://msdn.microsoft.com/en-us/library/ms173379(VS.80).aspx
http://msdn.microsoft.com/en-us/library/ks1ka3t6(v=VS.80).aspx
These links may help you.
I've seen posts talk about what might cause differences between Debug and Release builds, but I don't think anybody has addressed from a development standpoint what is the most efficient way to solve the problem.
The first thing I do when a bug appears in the Release build but not in Debug is I run my program through valgrind in hopes of a better analysis. If that reveals nothing, -- and this has happened to me before -- then I try various inputs in hopes of getting the bug to surface also in the Debug build. If that fails, then I would try to track changes to find the most recent version for which the two builds diverge in behavior. And finally I guess I would resort to print statements.
Are there any best software engineering practices for efficiently debugging when the Debug and Release builds differ? Also, what tools are there that operate at a more fundamental level than valgrind to help debug these cases?
EDIT: I notice a lot of responses suggesting some general good practices such as unit testing and regression testing, which I agree are great for finding any bug. However, is there something specifically tailored to this Release vs. Debug problem? For example, is there such a thing as a static analysis tool that says "Hey, this macro or this code or this programming practice is dangerous because it has the potential to cause differences between your Debug/Release builds?"
One other "Best Practice", or rather a combination of two: Have Automated Unit Tests, and Divide and Conquer.
If you have a modular application, and each module has good unit tests, then you may be able to quickly isolate the errant piece.
The very existence of two configurations is a problem from a debugging point of view. Proper engineering would have the system "on the ground" and "in the air" behave the same way, and would achieve this by reducing the number of ways in which the system can tell the difference.
Debug and Release builds differ in 3 aspects:
_DEBUG define
optimizations
different version of the standard library
The best way around this, the way I often work, is this:
Disable optimizations where performance is not critical; debugging is more important. Most importantly, disable function auto-inlining, keep the standard stack frame, and disable variable-reuse optimizations, as these hinder debugging the most.
Monitor the code for dependence on the DEBUG define. Never use debug-only asserts, or any other tools sensitive to the DEBUG define.
By default, compile and work in the release configuration.
When I come across a bug that only happens in release, the first thing I always look for is use of an uninitialized stack variable in the code that I am working on. On Windows, the debug C runtime will automatically initialise stack variables to a known bit pattern, 0xcdcdcdcd or something. In release, stack variables will contain the value that was last stored at that memory location, which is going to be an unexpected value.
Secondly, I will try to identify what is different between the debug and release builds. I look at the compiler optimization settings passed in the Debug and Release configurations; you can see these on the last property page of the compiler settings in Visual Studio. I will start with the release config and change the command-line arguments passed to the compiler one item at a time until they match the command line used for compiling in debug. After each change I run the program and try to reproduce the bug. This will often lead me to the particular setting that causes the bug to happen.
A third technique can be to take a function that is misbehaving and disable optimizations around it using the pre-processor. This will allow you to run the program in release with the particular function compiled in debug. The behaviour of the program built in this way will help you learn more about the bug.
#pragma optimize( "", off )
int foo() {
return 1;
}
#pragma optimize( "", on )
From experience, the problems are usually stack initialization, memory scrubbing in the memory allocator, or strange #define directives causing the code to be compiled incorrectly.
The most obvious cause is simply the use of #ifdef and #ifndef directives associated DEBUG or similar symbol that change between the two builds.
Before going down the debugging road (which is not my personal idea of fun), I would inspect both command lines and check which flags are passed in one mode and not the other, then grep my code for these flags and check their uses.
One particular issue that comes to mind are macros:
#ifdef _DEBUG_
#define CHECK(CheckSymbol) { if (!(CheckSymbol)) throw CheckException(); }
#else
#define CHECK(CheckSymbol)
#endif
also known as a soft-assert.
Well, if you call it with a function that has side effects, or rely on it to guard a function (contract enforcement) and somewhere catch the exception it throws in debug and ignore it... you will see differences in release :)
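To make that concrete (TryReserveBuffer is a made-up stand-in for anything with a side effect):
#include <exception>
struct CheckException : std::exception {};
#ifdef _DEBUG_
#define CHECK(CheckSymbol) { if (!(CheckSymbol)) throw CheckException(); }
#else
#define CHECK(CheckSymbol)
#endif
bool TryReserveBuffer(int& reserved)
{
    reserved += 1;     // the side effect
    return true;
}
int main()
{
    int reserved = 0;
    CHECK(TryReserveBuffer(reserved));   // debug: evaluated; release: the call vanishes
    return reserved;                     // 1 in a debug build, 0 in a release build
}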
When debug and release differ it means:
your code depends on _DEBUG or similar macros (defined when compiling a debug version - no optimizations)
your compiler has an optimization bug (I have seen this a few times)
You can easily deal with (1) (code modification), but with (2) you will have to isolate the compiler bug. After isolating the bug, you do a little "code rewriting" to get the compiler to generate correct binary code (I did this a few times - the most difficult part is isolating the bug).
I can say that when enabling debug information for the release version, the debugging process works... (though because of optimizations you might see some "strange" jumps when running).
You will need to have some "black-box" tests for your application - valgrind is a solution in this case. Such tests help you find differences between release and debug (which is very important).
The best solution is to set up something like automated unit testing to thoroughly test all aspects of the application (not just individual components, but real world tests which use the application the same way a regular user would with all of the dependencies). This allows you to know immediately when a release-only bug has been introduced which should give you a good idea of where the problem is.
A good practice of actively monitoring and seeking out problems beats any tool that helps you fix them long after they happen.
However, when you have one of those cases where it's too late: too many builds have gone by, can't reproduce consistently, etc. then I don't know of any one tool for the job. Sometimes fiddling with your release settings can give a bit of insight as to why the bug is occurring: if you can eliminate optimizations which suddenly make the bug go away, that could give you some useful information about it.
Release-only bugs can fall into various categories, but the most common ones (aside from something like a misuse of assertions) are:
1) Uninitialized memory. I use this term over uninitialized variables as a variable may be initialized but still be pointing to memory which hasn't been initialized properly. For this, memory diagnostic tools like Valgrind can help.
2) Timing (ex: race conditions). These can be a nightmare to debug, but there are some multithreading profilers and diagnostic tools which can help. I can't suggest any off the bat, but there's Coverity Integrity Manager as one example.
We have a large C++ application, which sometimes we need to run as a debug build in order to investigate bugs. The debug build is much much slower than the release build, to the point of being almost unusable.
What tricks are available for making MSVC Debug builds execute faster without sacrificing too much on the debugability?
Use #pragma optimize("", off) at the top of selected files that you want to debug in release. This gives better stack trace/variable view.
Works well if it's only a few files you need to chase the bug in.
Why don't you just switch on debug information in your release configuration?
We turned off Iterator debugging with the preprocessor symbols:
_HAS_ITERATOR_DEBUGGING=0
_SCL_SECURE=0
It helped a bit, but was still not as fast as we'd like. We also ended up making our debug build more release-like by defining NDEBUG instead of _DEBUG. There were a couple other options that we changed too, but I'm not remembering them.
It's unfortunate that we needed to do all this, but our application has a certain amount of work that needs to be done every 50ms or it's unusable. VS2008 out of the box would give us ~60ms times for debug and ~6ms times for release. With the tweaks mentioned above we could get debug down to ~20ms or so, which is at least usable.
Profile the app and see what is taking the time. You should then be able to see which debugging features need to be tuned.
Are you using MFC?
In my experience, the main thing that can make a debug version slow is the class validation routines, which are usually disabled in release. If the data structure is at all tree-like, it can end up re-validating subtrees hundreds of times.
Regardless, if it is, say, 10 times slower than the release build, that means it is spending 1/10 of its time doing what's necessary, and 9/10 doing something else. If, while you're waiting for it, you just hit the "pause" button and look at the call stack, chances are 9/10 that you will see exactly what the problem is.
It's a quick & dirty, but effective way to find performance problems.
Create a ReleaseWithSymbols configuration, that defines NDEBUG and has no optimisations enabled. This will give you better performance while maintaining accurate symbols for debugging.
There are several differences between debug builds and release builds that influence both debuggability and speed. The most important are the _DEBUG/NDEBUG defines, the compiler optimizations, and the creation of debug information.
You might want to create a third Solution Configuration and play around with these settings. For example, adding debug information to a release build doesn't really decrease performance but you already get a sensible stack trace so you know which function you are in. Only the line information is not reliable because of the compiler optimizations.
If you want reliable line information, go on and turn off optimizations. This will slow down execution a bit, but it will still be faster than a normal debug build as long as the _DEBUG define is not set. Then you can do pretty good debugging; only everything that has "#ifdef _DEBUG" or similar around it won't be there (e.g. calls to assert etc.).
Hope this helps,
Jan
Which VS are you using? We moved from VS.net to VS2008 recently and I experienced the same slowness while debugging on a high-end machine on a > 500k LOC project. It turned out the IntelliSense database had become corrupted and would constantly try to update itself but get stuck somewhere. Deleting the .ncb file solved the problem.