On a Jenkins instance, I need Valgrind to check whether there are particular problems in a compiled C++ binary. However, I only need a yes/no answer, not a stack trace, for example. If there are any problems, I will launch Valgrind on the faulty code with debug flags activated on my personal machine. The build is managed with CMake on a machine running Linux (targeting gcc).
If I compile my code with -DCMAKE_BUILD_TYPE=Release on the Jenkins instance, will Valgrind detect the same problems in the binary as with -DCMAKE_BUILD_TYPE=Debug?
Valgrind works by instrumenting and replacing parts of your code at runtime, for example by redirecting calls to memory allocation functions. To do this it does not rely on debug information, but it can get confused by optimized code:
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes -O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors, or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether.
(from the Valgrind manual)
Since the Release build type uses optimizations, it is a bad fit for your case.
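Since you only need a yes/no answer, one practical approach (a sketch; --error-exitcode and --quiet are standard Valgrind options, and ./my_program stands in for your binary) is to have Valgrind return a non-zero exit code whenever Memcheck finds an error, so the Jenkins build step fails on its own:

valgrind --quiet --error-exitcode=1 ./my_program

If Memcheck reports no errors, the exit status is that of ./my_program itself; otherwise it is 1, which Jenkins treats as a failed step. You can then reproduce locally with a Debug build to get the full stack traces.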
Related
Hello everyone. As a beginning programmer in C++, I was looking into some differences between compilers. I imported the same source files into both gcc (Code::Blocks) and Visual C++ (Visual Studio Express), and I found some strange behaviour that I did not expect.
Visual C++ threw a bunch of errors which were, in my opinion, quite big: for example, iterating through a vector with different iterators, where the iterator came from a different instance of the vector than the one the operation was performed on. gcc compiled successfully and threw no errors at runtime, while Visual C++ threw a bunch of errors at compilation and then a runtime error of 'different iterator type'. Similarly, with dynamic char allocation using new char[str.length()+1] and strcpy_s() into it from a string, the Visual C++ debugger threw a runtime error about a corrupted heap while the Code::Blocks debugger ran just fine.
My question is: is there really this big a difference between these compilers and debuggers? Should I worry that my programming is at a bad level if the code runs perfectly with gcc and the Code::Blocks debugger but throws errors in Visual Studio?
I learned to program in C++ in Code::Blocks; Visual C++ has shown me mistakes that I was totally unaware of.
The problem is with your code, not with your compilers or setup. The types of problems you are describing are examples of undefined behaviour that result from rather bad programming or coding techniques (in fact, some of them are fairly hard to achieve without going out of your way to write very flawed code).
The thing is, compilers are not required to detect such things. Whether they do or not is a concern of compiler or library quality of implementation. In your case, it appears that your version of VC++ is detecting concerns that g++ is not, which is a point in favour of VC++.
My experience is actually the reverse of that: I find g++ detects more problems than VC++. However, both VC++ and g++ do diagnose problems that the other does not.
Which all just goes to show that your mileage will vary. Personally, I'm an advocate of feeding all my code through multiple compilers when possible - precisely because that widens the net of what problems are diagnosed.
And then I exercise a policy of ensuring my code compiles cleanly with all compilers (no diagnostics at all, which includes no warnings) without having to disable any diagnostics, and avoiding use of any code constructs that are designed to suppress compiler diagnostics.
One thing to realise is that compilers, when installed, are typically configured to NOT produce many diagnostics. The reasons for this are historical. It is necessary to turn on the settings to make the compiler give warnings or errors. With g++, command options like -Wall -pedantic (which can be enabled through Code::Blocks) really increase the number of problems that will be reported. There are similar options for VC++ (although I don't remember them offhand).
MSVC has "checked iterators" for std::vector, which perform a number of useful checks. You can turn on some of these types of checks in GCC by compiling with -D_GLIBCXX_DEBUG. If you want your access to always be bounds-checked, then you need to use std::vector::at(). Often, for performance reasons, it is better to ensure bounds checking outside of your loop and then use unchecked iterators or indexing in your loop.
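To make the difference concrete, here is a minimal sketch (the out-of-bounds index is deliberate): operator[] is unchecked, so out-of-range access is undefined behaviour, while at() always checks and throws:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v(3, 0);
    // v[10] = 1;       // unchecked: undefined behaviour, may appear to work
    try {
        v.at(10) = 1;   // bounds-checked: throws std::out_of_range
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}

Compiling with g++ -D_GLIBCXX_DEBUG additionally makes the unchecked v[10] abort with a diagnostic, much like MSVC's checked iterators do in Debug builds.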
GCC's standard library does exactly what you told it, bugs and all. Most of the time, it behaves as you expected, and you don't realize that the bugs are there. Don't be fooled by the fact the program appears to work, it may still have bugs.
Visual Studio has two variants of the standard library. In Release builds, it acts the same as GCC's: it does exactly what you told it to do, bugs and all. In Debug builds, it adds a ton of code behind the scenes to detect some of these errors, and will notify you, as you've observed. Fix these! Note that some of them, like "heap corruption", mean that it detected a bug that occurred some time earlier, not that the bug is at the free/delete itself. You should also go to the project properties and, under C++/General, make sure your Warning Level is set to Level3 or even Level4. This will reveal even more bugs at compile time.
The differences in the compilers in this respect aren't that significant, except that in Debug builds, Visual Studio adds tons of error checking that's finding bugs. The other implementations, and Visual Studio in Release builds, don't go out of their way to help you find bugs.
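As a concrete sketch of the class of bug the debug heap catches (a hypothetical example; the missing +1 is deliberate), writing the terminating '\0' one byte past an allocation corrupts the heap well before anything visibly fails:

#include <cstring>
#include <string>

int main() {
    std::string s = "hello";
    char* buf = new char[s.length()];   // bug: no room for the terminating '\0'
    std::strcpy(buf, s.c_str());        // writes one byte past the end of buf
    delete[] buf;                       // a debug heap typically reports corruption here
}

In a Release build (or with GCC's default library) this will often run "fine", which is exactly why a program appearing to work proves nothing.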
I have a Rust program that isn't running as fast as I think it should. Is there a way to tell the compiler to instrument the binary to generate profiling information?
I mean something like GCC's -p and -pg options or GHC's -prof.
The compiler doesn't support any form of instrumentation specifically for profiling (like -p/-pg/-prof), but compiled Rust programs can be profiled under tools that do not require custom instrumentation, such as Instruments on OS X, and perf or callgrind on Linux.
I believe such tools support using DWARF debug info (as emitted by -g) to provide more detailed performance diagnostics (per-line etc.), but enabling optimisations plays havoc with the debug info, and it's never really worked for me. When I'm analysing performance, diving into the asm is very common.
Making this easier would be really nice, and tooling is definitely a post-1.0 priority.
There's no direct switch that I'm aware of. However, I've successfully compiled my code with optimizations enabled as well as debugging symbols. I can then use OS X's Instruments to profile the code. Other people have used KCachegrind on Linux systems to the same effect.
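The key for either profiler is building with optimizations and debug info at the same time. A sketch of one way to do that (assuming a Linux machine with perf installed; -O and -g are standard rustc flags):

rustc -O -g main.rs      # optimized build that still carries DWARF debug info
perf record ./main       # sample the program under Linux perf
perf report              # inspect where the time went, per function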
I'm trying to debug a program on an embedded device. The problem is that it uses ARMv5, and Valgrind doesn't support that platform (there are some patches out there, but I was not able to make them work).
I tried some tools like gdb and memwatch, but they aren't enough to find the leaks.
Could anyone suggest a solution? I thought of maybe some kind of remote debugging or so.
Thanks for your answers
Valgrind is a very powerful tool, and it's pretty sad that it does not work on ARMv5, since that makes debugging memory leaks and invalid memory accesses on this platform much more difficult.
I see several less powerful options. You can try to enable some additional checks within the C library by setting the MALLOC_CHECK_ environment variable. If your compiler is GCC 4.8 or higher you can try AddressSanitizer (I never used it on ARMv5 though).
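Both are cheap to try. A sketch, assuming your program is ./my_app and your toolchain's g++ supports these options:

MALLOC_CHECK_=3 ./my_app                      # glibc checks heap consistency and aborts with a diagnostic on corruption
g++ -fsanitize=address -g main.cpp -o my_app  # instrument loads/stores with AddressSanitizer
./my_app                                      # ASan reports invalid accesses (newer GCCs also bundle LeakSanitizer for leaks)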
I recently noticed that running a program inside gdb on Windows makes it a lot slower, and I want to know why.
Here's an example:
It is a pure C++03 project, compiled with mingw32 (gcc 4.8.1, 32 bits).
It is statically linked against libstdc++ and libgcc, no other lib is used.
It is a cpu and memory intensive non-parallel process (a mesh edition operation, lots of news and deletes and queries to data structures involved).
The problem is not start-up time, the whole process is painfully slow.
Debug build (-O0 -g2) runs in 8 secs outside gdb, but in 140 secs within gdb.
Tested from the command line, just launching gdb and typing "run" (no breakpoints defined).
I also tested a release build (optimized, and without debugging information), and it is still much slower inside gdb (3 secs vs 140 secs; yes, inside gdb it takes the same time as the non-optimized version).
Tested with gdb 7.5 and 7.6 from mingw32 project, and with a gdb 7.8 compiled by me (all of them without python support).
I usually develop on a GNU/Linux box, and there I can't notice any speed difference between running with or without gdb.
I want to know what gdb is doing that makes the program run so slowly. I have some basic understanding of how a debugger works, but I cannot figure out what it is doing here, and googling didn't help me this time.
I've finally found the problem, thanks to greatwolf for asking me to test other debuggers. OllyDbg takes the same time as gdb, so it's not a gdb problem, it's a Windows problem. This tip changed my search criteria, and then I found this article* that explains the problem very well and gives a really simple solution: define an environment variable _NO_DEBUG_HEAP with the value 1. This disables the special debug heap that Windows provides and that C++ programs use when launched under a debugger.
* Here's the link: http://preshing.com/20110717/the-windows-heap-is-slow-when-launched-from-the-debugger/
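For reference, a sketch of the workaround (assuming you launch from a cmd prompt; _NO_DEBUG_HEAP is the variable described in the article above):

set _NO_DEBUG_HEAP=1
gdb my_program.exe
(gdb) run

With the variable set, Windows no longer switches the process to its slow debug heap when the process is created by a debugger.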
I once had issues with gdb being incredibly slow and I remember disabling nls (native language support, i.e. the translations of all the messages) would remedy this.
The configure time option is --disable-nls. I might have just been mistaken as to what is the true cause, but it's worth a shot for you anyways.
My bug report from back then is here, although the conclusion there would be that I was mistaken. If you can provide further insight into this, that would be great!
I use valgrind to debug my application. I have two machines where I want to run the code without errors.
One is an Ubuntu 11.10 machine running Valgrind 3.7.0, and the other is a Mac OS X 10.7.2 machine with Valgrind 3.6.0 and Valgrind 3.8.0.
I run the following valgrind command:
valgrind --track-origins=yes ./my_program
On the Linux machine I did not get any error reports. On the Mac valgrind complains about
==35723== Conditional jump or move depends on uninitialised value(s)
==35723== at 0x10004DCAF: boost::spirit ...
Although the error is reported in a Boost library, I do not think the error is in Boost itself (the Boost version is the same on both machines, 1.46.1).
What can be the cause for the different error reports?
Valgrind is not a static analysis tool, but a runtime one, i.e. Valgrind runs the program on a virtual machine. In many applications there is plenty of code that is not triggered, or not even compiled in, on every machine alike, which explains the differences.
Are you using different compilers on the two computers? Perhaps different compilers, or different compiler versions, produce code with different behaviour when accessing an uninitialised variable.
I've had statements of the form
if (A && B) {
do_stuff
}
in which B was only initialized if A was true. When I didn't use optimizations, the program (as expected) first checked A and then, if it were true, checked B. When optimizing, the compiler found it profitable to check B first; since neither A nor B had any side effects or depended on volatile memory this was equivalent. This latter behavior caused valgrind to give me the type of warning you're seeing even though there wasn't anything really wrong with the code. My guess is that something similar is going on here.
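A runnable sketch of that situation (hypothetical code; the function name is made up): at the source level b is never read uninitialised thanks to short-circuiting, but an optimizer may legally load b before testing a, which is what Memcheck then flags:

#include <cstdlib>
#include <iostream>

bool has_data() { return std::rand() % 2 == 0; }

int main() {
    bool a = has_data();
    bool b;                     // only initialised when a is true
    if (a) b = true;
    if (a && b) {               // optimized code may read b first; harmless, but
        std::cout << "both\n";  // Memcheck sees a branch on an uninitialised value
    }
}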