I use Valgrind to debug my application. I have two machines where I want the code to run without errors.
One is an Ubuntu 11.10 machine running Valgrind 3.7.0, and the other is a Mac OS X 10.7.2 machine where I have tried both Valgrind 3.6.0 and 3.8.0.
I run the following valgrind command:
valgrind --track-origins=yes ./my_program
On the Linux machine I did not get any error reports. On the Mac, Valgrind complains:
==35723== Conditional jump or move depends on uninitialised value(s)
==35723== at 0x10004DCAF: boost::spirit ...
As the error is reported inside a Boost library, and the Boost version is the same on both machines (1.46.1), I do not think the error is in Boost itself.
What can be the cause for the different error reports?
Valgrind is not a static analysis tool but a runtime one, i.e. Valgrind runs the program on a virtual machine. In many applications there is plenty of code that is not triggered by, or compiled identically for, every machine, which explains the differences.
Are you using different compilers on the two machines? Different compilers, or different compiler versions, may produce code with different behaviour when accessing an uninitialised variable.
I've had statements of the form
if (A && B) {
do_stuff
}
in which B was only initialised if A was true. Without optimisations, the program (as expected) first checked A and then, only if A was true, checked B. With optimisations, the compiler found it profitable to check B first; since neither check had side effects or depended on volatile memory, this was semantically equivalent. This latter behaviour caused Valgrind to give me the type of warning you're seeing, even though there wasn't anything really wrong with the code. My guess is that something similar is going on here.
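A minimal sketch of that pattern (the names are mine, not from my original code; whether the warning actually fires depends on the compiler and the optimisation level):

static bool check(bool a)
{
    bool b;          // deliberately left uninitialised
    if (a)
        b = true;    // b is only given a value when a is true
    // At -O0 the && short-circuits, so b is never read when a is false.
    // An optimiser may evaluate b first (the check has no side effects),
    // and Memcheck then reports a conditional jump that depends on an
    // uninitialised value.
    return a && b;
}

int main(int argc, char**)
{
    // derive a from a runtime value so the compiler cannot fold it away
    return check(argc > 1) ? 1 : 0;
}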
Related
For a couple of weeks I have been trying to profile a piece of numerical software, and I am unable to get useful results.
The code I'm profiling results in a huge function (__attribute__((flatten))) created out of many inlined functions and a few calls to std::exp/std::log/std::pow. This function is located inside a shared library and loaded via dlopen().
I've used
the Google CPU profiler (it hangs in the first fork(), which is interrupted by SIGPROF, restarted, interrupted again, and so on -- the same problem occurs with the g++ option -pg)
the Linux tool perf (it caused a reboot of the machine; I complained and they upgraded the OS (CentOS 6.5). The results only highlight two assembler instructions out of the above-mentioned huge function. I don't have permission to read accurate event sources (*:ppp))
some old version of VTune (difficult to operate, results are unreliable, no hardware drivers loaded)
sprof (the results tell me nothing, as there is only a single function to profile -- and when attribute flatten is not used, the behaviour is completely different)
I'm running
CentOS 6.5
and
g++ (GCC) 5.3.0
I have no influence over the OS version or the compiler version.
I complained about the ancient OS some weeks ago, and they upgraded me to what I mentioned above.
In a former life I successfully used the Google profiler -- when it was working (and not crashing or hanging due to signal-handling problems) it provided useful results.
Does anybody have any comments?
Could all these unclear results be a consequence of the Spectre fixes?
Do I need to insist that certain profiling options be enabled on the machine?
Do I need to insist on the VTune drivers being loaded?
Do I need to insist on an up-to-date copy of VTune being installed?
Should I compile with -fno-omit-frame-pointer?
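For what it's worth, the frame-pointer-based perf workflow I have in mind looks like this (a sketch only; the file names are placeholders, and I have not tried it on CentOS 6.5's old perf):

# keep frame pointers so perf's call-graph unwinding can walk the stack
g++ -O2 -g -fno-omit-frame-pointer -fPIC -shared mysim.cpp -o libmysim.so
# sample the driver with call-graph recording, then inspect the report
perf record -g ./driver
perf report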
On a Jenkins instance, I need Valgrind to check whether there are particular problems in a compiled C++ binary. However, I only need a yes/no answer, not a stack trace, for example. If there are any problems, I will launch Valgrind on the faulty code with debug flags activated on my personal machine. The build is managed with CMake on a machine running Linux (targeting gcc).
If I compile my code with -DCMAKE_BUILD_TYPE=Release on the Jenkins instance, will Valgrind detect the same problems in the binary as with -DCMAKE_BUILD_TYPE=Debug ?
Valgrind works by instrumenting and replacing parts of your code at runtime, for example by redirecting calls to memory-allocation functions. It does not rely on debug information for this, but it may get confused by optimized code:
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes -O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors, or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether.
(from the Valgrind manual)
Since the Release build type uses optimizations (-O3 with CMake's default flags), it is a bad fit for your case.
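For the yes/no answer on Jenkins, Valgrind's exit-code option is handy; a sketch (my_program stands in for your binary):

cmake -DCMAKE_BUILD_TYPE=Debug .
make
valgrind --error-exitcode=1 ./my_program

With --error-exitcode=1, Valgrind returns a non-zero exit status whenever it reports errors, so the Jenkins step fails without you having to parse the log.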
I'm trying to debug a program on an embedded device. The problem is that it uses ARMv5, and Valgrind doesn't support that platform (there are some patches out there, but I was not able to make them work).
I tried some tools like gdb and memwatch, but they aren't enough to find the leaks.
Anyone could suggest a solution? I thought of maybe some kind of remote debugging or so.
Thanks for your answers
Valgrind is a very powerful tool, and it's a pity that it does not work on ARMv5, because that makes debugging memory leaks and invalid memory accesses much harder on this platform.
I see several less powerful options. You can enable some additional checks within the C library by setting the MALLOC_CHECK_ environment variable. If your compiler is GCC 4.8 or higher, you can try AddressSanitizer (though I have never used it on ARMv5).
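A sketch of both options (my_program is a placeholder):

# glibc heap consistency checks: 3 = print a diagnostic and abort on corruption
MALLOC_CHECK_=3 ./my_program

# AddressSanitizer (GCC 4.8 or newer): instrument at compile time; invalid
# accesses are reported at runtime (leak checking depends on the GCC version)
g++ -fsanitize=address -fno-omit-frame-pointer -g my_program.cpp -o my_program
./my_program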
I recently noticed that running a program inside gdb on Windows makes it a lot slower, and I want to know why.
Here's an example:
It is a pure C++03 project, compiled with mingw32 (gcc 4.8.1, 32 bits).
It is statically linked against libstdc++ and libgcc, no other lib is used.
It is a CPU- and memory-intensive non-parallel process (a mesh editing operation, with lots of news and deletes and queries to data structures involved).
The problem is not start-up time, the whole process is painfully slow.
Debug build (-O0 -g2) runs in 8 secs outside gdb, but in 140 secs within gdb.
Tested from command line, just launching gdb and just typing "run" (no breakpoints defined).
I also tested a release build (optimized, and without debugging information), and it is still much slower inside gdb (3 secs vs 140 secs; yes, it takes the same time inside gdb as the unoptimized version).
Tested with gdb 7.5 and 7.6 from mingw32 project, and with a gdb 7.8 compiled by me (all of them without python support).
I usually develop on a GNU/Linux box, and there I can't notice any speed difference between running with or without gdb.
I want to know what gdb is doing that makes the program run so slowly. I have some basic understanding of how a debugger works, but I cannot figure out what is going on here, and googling didn't help me this time.
I've finally found the problem, thanks to greatwolf, who asked me to test other debuggers. OllyDbg takes the same time as gdb, so it's not a gdb problem, it's a Windows problem. This tip changed my search criteria, and then I found the article* that explains the problem very well and gives a really simple solution: define an environment variable _NO_DEBUG_HEAP with the value 1. This disables the special debug heap that Windows sets up for processes launched under a debugger.
* Here's the link: http://preshing.com/20110717/the-windows-heap-is-slow-when-launched-from-the-debugger/
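A sketch of the fix from a Windows command prompt (my_program is a placeholder):

set _NO_DEBUG_HEAP=1
gdb my_program.exe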
I once had issues with gdb being incredibly slow, and I remember that disabling nls (native language support, i.e. the translations of all the messages) remedied this.
The configure-time option is --disable-nls. I might simply be mistaken about the true cause, but it's worth a shot for you anyway.
My bug report from back then is here, although the conclusion there was that I was mistaken. If you can provide further insight, that would be great!
I am trying to build a project in QNX Momentics IDE (4.6) using qcc on Linux. The build fails with the following error:
virtual memory exhausted: Cannot allocate memory
/opt/qnx641/host/linux/x86/usr/lib/gcc/i386-pc-nto-qnx6.4.0/4.3.3/cc1plus error 1
The project has a single .cpp file with more than 1.3 MLOC. It is autogenerated from a large Matlab/Simulink simulation model, so it is not easy to divide and conquer.
It is hard to tell whether I am hitting a LOC limit of the qcc compiler or a consequence of some programming practice in the autogenerated code.
I would like to ask:
Is there any source file size limit for qcc?
What are the bad programming practices that cause this?
Any suggestions to fix virtual memory exhausted problem of cc1plus?
Q1: Is there any source file size limit for qcc?
A1: qcc = gcc. More accurately: qcc is a lightweight wrapper that calls gcc; all the real work is done by gcc. As a general philosophy, GNU software is designed not to impose arbitrary limits, and I presume this is especially true for gcc. Even if arbitrary limits exist, you are not hitting them: you are running out of system memory.
Random links:
preprocessor limits: http://gcc.gnu.org/onlinedocs/cpp/Implementation-limits.html
some gcc limits benchmarking: gcc module size limits
Q2: What are the bad programming practices that cause this?
A2: For example, dumping all source code into a single file, as your generator does. But I'd say this question is not relevant to your case, because you already stated you have no control over the generated code.
Q3: Any suggestions to fix virtual memory exhausted problem of cc1plus?
A3: a) Put more RAM into your host computer (this may or may not help, depending on how much you have and whether your OS is 32- or 64-bit); b) increase your swap space (the same caveat applies); c) if a/b do not help, upgrade your OS to 64-bit and try a/b again. Unfortunately, the 64-bit suggestion almost surely does not apply to the gcc version that QNX shipped with 6.4.1, and maybe not even to the latest one.
As a general suggestion, since qcc uses gcc, I'd recommend that you build the same code with the host's gcc (the gcc shipped with your Linux). Once that works, you can start looking for the differences, which likely boil down to 64-bit support.
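If you want to experiment, gcc also has tuning parameters that trade compile time for memory; a sketch (huge_model.cpp is a placeholder, and I have not verified this against the old gcc 4.3.3 that qcc wraps):

# lower the compiler's internal garbage-collection thresholds so it frees
# memory more aggressively, and skip optimisation entirely
g++ -c -O0 --param ggc-min-expand=10 --param ggc-min-heapsize=32768 huge_model.cpp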