Valgrind on ARMv5 - C++

I'm trying to debug a program on an embedded device. The problem is that the device uses ARMv5 and Valgrind doesn't support that platform (there are some patches out there, but I was not able to make them work).
I tried some tools like gdb and memwatch, but they aren't enough to find the leaks.
Could anyone suggest a solution? I was thinking of maybe some kind of remote debugging.
Thanks for your answers

Valgrind is a very powerful tool, and it's a real pity that it does not work on ARMv5, because its absence makes debugging memory leaks and invalid memory accesses much harder on that platform.
I see several less powerful options. You can enable additional checks inside the C library by setting the MALLOC_CHECK_ environment variable. If your compiler is GCC 4.8 or higher, you can also try AddressSanitizer (I have never used it on ARMv5, though).
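As a rough sketch of how both options behave (assuming a glibc-based system; the file and binary names here are made up for illustration), consider a tiny program with a deliberate heap bug:

    // leak.cpp - deliberately overruns a heap block
    #include <cstdlib>
    #include <cstring>

    int main() {
        char *buf = static_cast<char *>(std::malloc(8));
        std::strcpy(buf, "123456789");  // writes 10 bytes into an 8-byte block
        std::free(buf);                 // glibc's extra consistency checks fire here
        return 0;
    }

Exercising both checks would look something like:

    g++ -g -o leak leak.cpp
    MALLOC_CHECK_=3 ./leak                       # glibc prints a diagnostic and aborts
    g++ -g -fsanitize=address -o leak leak.cpp   # GCC >= 4.8
    ./leak                                       # ASan reports a heap-buffer-overflow

MALLOC_CHECK_ only catches simple errors (small overruns, double frees) when free or realloc is called, whereas AddressSanitizer pinpoints the faulting access itself, so the two complement each other.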

Related

Valgrind flags, debug vs release compilation

On a Jenkins instance, I need Valgrind to check whether there are particular problems in a compiled C++ binary. However, I only need a yes/no answer, not a stack trace, for example. If there are any problems, I will launch Valgrind on the faulty code, with debug flags activated, on my personal machine. The build is managed with CMake on a machine running Linux (targeting gcc).
If I compile my code with -DCMAKE_BUILD_TYPE=Release on the Jenkins instance, will Valgrind detect the same problems in the binary as with -DCMAKE_BUILD_TYPE=Debug ?
Valgrind works by instrumenting and replacing parts of your code at runtime, for example by redirecting calls to memory allocation functions. To do this it does not rely on debug information, but it can get confused by optimized code:
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes -O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors, or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether.
(from the Valgrind manual)
Since the Release build type uses optimizations, it is a bad fit for your case.
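To make the failure mode concrete, here is a minimal made-up example of the kind of bug affected by this: Memcheck reports the uninitialised read reliably in a -O0 (Debug) build, but at -O2 the optimizer may fold the branch away, so the report can vanish or move elsewhere:

    // uninit.cpp - 'x' is read before being initialised
    #include <iostream>

    int main() {
        int x;          // never initialised
        if (x > 0)      // Memcheck: "Conditional jump or move depends on uninitialised value(s)"
            std::cout << "positive\n";
        return 0;
    }

For the yes/no answer on Jenkins, Valgrind's --error-exitcode option is handy: running valgrind --error-exitcode=1 ./binary makes the Valgrind process exit with a non-zero status when any errors were reported, which the build step can then treat as a failure.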

Does the Rust compiler have a profiling option?

I have a Rust program that isn't running as fast as I think it should. Is there a way to tell the compiler to instrument the binary to generate profiling information?
I mean something like GCC's -p and -pg options or GHC's -prof.
The compiler doesn't support any form of instrumentation specifically for profiling (like -p/-pg/-prof), but compiled Rust programs can be profiled with tools that do not require custom instrumentation, such as Instruments on OS X, and perf or Callgrind on Linux.
I believe such tools can use DWARF debug info (as emitted by -g) to provide more detailed performance diagnostics (per line, etc.), but enabling optimisations plays havoc with the debug info, and it has never really worked for me. When I'm analysing performance, diving into the asm is very common.
Making this easier would be really nice, and tooling is definitely a post-1.0 priority.
There's no direct switch that I'm aware of. However, I've successfully compiled my code with optimizations enabled as well as debugging symbols. I can then use OS X's Instruments to profile the code. Other people have used KCachegrind on Linux systems to the same effect.
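For the perf workflow on Linux mentioned above, a typical sequence would be something like the following (the file and program names are placeholders, and flag spellings may differ between old rustc versions):

    rustc -O -g myprog.rs    # optimised build that still carries DWARF debuginfo
    perf record ./myprog     # sample the running program
    perf report              # browse the recorded hotspots

The same binary can also be fed to Callgrind with valgrind --tool=callgrind ./myprog and the output inspected in KCachegrind.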

Why is gdb so slow in Windows?

I recently noticed that running a program inside gdb on Windows makes it a lot slower, and I want to know why.
Here's an example:
It is a pure C++03 project, compiled with mingw32 (gcc 4.8.1, 32 bits).
It is statically linked against libstdc++ and libgcc, no other lib is used.
It is a CPU- and memory-intensive non-parallel process (a mesh editing operation, with lots of news and deletes and queries to data structures involved).
The problem is not start-up time, the whole process is painfully slow.
Debug build (-O0 -g2) runs in 8 secs outside gdb, but in 140 secs within gdb.
Tested from command line, just launching gdb and just typing "run" (no breakpoints defined).
I also tested a release build (optimized, and without debugging information), and it is still much slower inside gdb (3 secs vs 140 secs; yes, it takes the same time as the unoptimized version inside gdb).
Tested with gdb 7.5 and 7.6 from mingw32 project, and with a gdb 7.8 compiled by me (all of them without python support).
I usually develop on a GNU/Linux box, and there I can't notice any speed difference between running with or without gdb.
I want to know what gdb is doing that makes the program run so slowly. I have some basic understanding of how a debugger works, but I cannot figure out what it is doing here, and googling didn't help me this time.
I've finally found the problem, thanks to greatwolf for asking me to test other debuggers. OllyDbg takes the same time as gdb, so it's not a gdb problem, it's a Windows problem. This tip changed my search criteria, and then I found this article* that explains the problem very well and gives a really simple solution: define an environment variable _NO_DEBUG_HEAP and set it to 1. This disables the special debug heap that Windows provides to processes launched under a debugger, and which C++ programs end up using.
* Here's the link: http://preshing.com/20110717/the-windows-heap-is-slow-when-launched-from-the-debugger/
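Concretely, the variable can be set in the console session before launching the debugger, so that the debuggee inherits it (cmd.exe syntax; the program name is a placeholder):

    set _NO_DEBUG_HEAP=1
    gdb myprogram.exe

Setting it system-wide through the environment variables dialog also works, but the per-session form keeps the debug heap available for the cases where you actually want it.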
I once had issues with gdb being incredibly slow, and I remember that disabling NLS (native language support, i.e. the translations of all the messages) remedied this.
The configure-time option is --disable-nls. I might have just been mistaken about the true cause, but it's worth a shot for you anyway.
My bug report from back then is here, although the conclusion there would be that I was mistaken. If you can provide further insight into this, that would be great!

cc1plus: Virtual memory exhausted

I am trying to build a project in the QNX Momentics IDE (4.6) using qcc on Linux. The build fails with the following error:
virtual memory exhausted: Cannot allocate memory
/opt/qnx641/host/linux/x86/usr/lib/gcc/i386-pc-nto-qnx6.4.0/4.3.3/cc1plus error 1
The project has a cpp file with more than 1.3 MLOC. It is autogenerated code from a large Matlab/Simulink simulation model, so it is not easy to divide and conquer.
It is hard to tell whether this is a LOC limit of the qcc compiler or the result of a programming practice in the autogenerated code.
I would like to ask:
Is there any source file size limit for qcc?
What are the bad programming practices that cause this?
Any suggestions to fix virtual memory exhausted problem of cc1plus?
Q1: Is there any source file size limit for qcc?
A1: qcc = gcc. More accurately: qcc is a lightweight wrapper that calls gcc; all the real work is done by gcc. GNU software is, as a general philosophy, designed not to impose arbitrary limits, and I presume this is especially true for gcc. Even if arbitrary limits exist, you are not hitting them, because you are running out of system memory.
Random links:
preprocessor limits: http://gcc.gnu.org/onlinedocs/cpp/Implementation-limits.html
some gcc limits benchmarking: gcc module size limits
Q2: What are the bad programming practices that cause this?
A2: For example, dumping all source code into a single file, as in your case. That said, this question is not really relevant to your situation, because you have already stated that you have no control over the generated code.
Q3: Any suggestions to fix virtual memory exhausted problem of cc1plus?
A3: a) Put more RAM into your host computer (this may or may not help, depending on how much you have and whether your OS is 32- or 64-bit); b) increase your swap space (the same caveat applies; see the sketch below); c) if a/b do not help, upgrade your OS to 64-bit and try a/b again. Unfortunately, the 64-bit suggestion almost surely does not apply to the gcc version that QNX shipped with 6.4.1, and maybe not even to the latest one.
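For (b), a temporary swap file on the Linux host can be created along these lines (run as root; the size and path are only examples):

    dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4 GiB of zeroes
    chmod 600 /swapfile
    mkswap /swapfile                                # format it as swap
    swapon /swapfile                                # enable it immediately

free (or swapon -s) should then show the extra swap space before you retry the build.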
As a general suggestion, since qcc uses gcc, I'd recommend building the same code with the host's gcc (the gcc shipped with your Linux distribution). Once that works, you can start looking for the differences, which likely boil down to 64-bit support.

Heap corruption detection tool for C++

Is there any tool to help me detect heap corruption in C++? I can't provide source code because it's a big project. I can use any tool that works with Visual Studio or with Xcode. The tool should work fine with multithreading. The problem is not very common; it appears only after a long time and only in very special cases (which I have not been able to pin down precisely).
Thank you!
EDIT:
Thank you all for your answers! I will test the tools and accept an answer after the tests.
Valgrind is the de facto tool for memory instrumentation of native code. It does not, however, run on Windows (OS X is fine).
There are a few commercial tools that do run on Windows and feature a GUI, but in my opinion they are inferior to Valgrind.
The Debugging Tools for Windows include gflags and page heap, which help detect heap corruption.
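For example, full page heap can be turned on for one specific binary with the gflags utility (the executable name below is a placeholder):

    gflags /p /enable myapp.exe /full
    gflags /p /disable myapp.exe

With full page heap, each allocation is followed by an inaccessible guard page, so a buffer overrun crashes at the faulting instruction instead of corrupting the heap silently.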
On Mac OS X (which I presume is what you mean when you say Xcode), you already have a whole bunch of memory debugging tools, e.g. http://developer.apple.com/library/mac/#releasenotes/DeveloperTools/RN-MallocOptions/index.html , which lets you turn on heap checking via environment variables.
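For instance, the malloc debugging features can be enabled from a shell when launching the binary (the program name is a placeholder; the variables themselves are documented in the release notes linked above):

    MallocScribble=1 MallocGuardEdges=1 MallocStackLogging=1 ./myapp

MallocScribble fills freed blocks with a known pattern so use-after-free bugs show up quickly, MallocGuardEdges adds guard pages around large allocations, and MallocStackLogging records allocation backtraces for use with malloc_history.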
On Windows, use Application Verifier.