I use Valgrind to detect issues by running a program from the beginning.
Now I have memory/performance issues at a very specific moment in the program. Unfortunately, there is no feasible way to reach this point quickly from the start.
Is there a way to instrument a C++ program (Valgrind/Callgrind) part-way through its execution, for example by attaching to the running process?
Already answered here:
How use callgrind to profiling only a certain period of program execution?
There is no way to use valgrind on an already running program.
For callgrind, however, you can speed it up somewhat by only recording data later during execution.
For this, you might for example look at the callgrind options
--instr-atstart=no|yes Do instrumentation at callgrind start [yes]
--collect-atstart=no|yes Collect at process/thread start [yes]
--toggle-collect=<func> Toggle collection on enter/leave function
You can also control such aspects from within your program.
Refer to the Valgrind/Callgrind user manual for more details.
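As an illustration (the function name do_report and the program name are placeholders, not something from the question), collection can be limited to a single function with an invocation along these lines:
valgrind --tool=callgrind --collect-atstart=no --toggle-collect=do_report ./myprog
Here only the collection is restricted; instrumentation still runs for the whole execution, so the slowdown stays higher than when instrumentation itself is toggled.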
There are two things that Callgrind does that slow down execution.
Counting operations (the collect part)
Simulating the cache and branch predictor (the instrumentation part). This depends on the --cache-sim and --branch-sim options, which both default to "no". If you are using these options and you disable instrumentation then I expect that there will be some impact on the accuracy of the modelling as the cache and predictor won't be "warm" when they are toggled.
There are a few other approaches that you could use.
Use the client request mechanisms. This would require you to include a Valgrind header and add a few lines that use the Valgrind macros CALLGRIND_START_INSTRUMENTATION/CALLGRIND_STOP_INSTRUMENTATION and CALLGRIND_TOGGLE_COLLECT to start and stop instrumentation/collection. See the manual for details, and the first sketch after this list. Then just run your application under Valgrind with --instr-atstart=no --collect-atstart=no.
Use the gdb monitor commands. In this case you would have two terminals. In the first, you would run Valgrind with --instr-atstart=no --collect-atstart=no --vgdb=yes. In the second terminal, run gdb yourapplication and then, from the gdb prompt, enter target remote | vgdb. You can then use the monitor commands described in the manual, which include collection and instrumentation control; a second sketch after this list shows the workflow.
callgrind_control, part of the Valgrind distribution. To be honest I've never used this.
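A minimal sketch of the client request approach, assuming a made-up function interesting_part() stands in for the code you care about (only the header and macros come from Valgrind):

#include <valgrind/callgrind.h>

// Hypothetical stand-in for the code you actually want to profile.
static void interesting_part()
{
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i)
        sum += i;
}

int main()
{
    // Everything before this point runs uninstrumented when the program
    // is started with --instr-atstart=no --collect-atstart=no.
    CALLGRIND_START_INSTRUMENTATION;  // switch instrumentation on
    CALLGRIND_TOGGLE_COLLECT;         // start collecting event counts

    interesting_part();

    CALLGRIND_TOGGLE_COLLECT;         // stop collecting
    CALLGRIND_STOP_INSTRUMENTATION;   // switch instrumentation off again
    return 0;
}

The client request macros are effectively no-ops when the binary runs outside Valgrind, so the hooks can stay in the code.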
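A rough sketch of the two-terminal gdb monitor workflow (the program name is a placeholder; the monitor command names are the ones documented in the Callgrind manual and may differ between Valgrind versions):

In the first terminal:
valgrind --tool=callgrind --instr-atstart=no --collect-atstart=no --vgdb=yes ./yourapplication

In the second terminal:
gdb ./yourapplication
(gdb) target remote | vgdb
(gdb) monitor instrumentation on
(gdb) continue
(interrupt with Ctrl+C around the region of interest, then)
(gdb) monitor dump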
I recently did some profiling using the first technique without cache/branch simulation. When I used Callgrind on the whole run I saw a 23x runtime increase compared to running outside of Callgrind. When I profiled only the one function that I wanted to analyze, this fell to about a 5x slowdown. Obviously this will vary greatly from case to case.
Thanks, all, for the help.
It seems this has already been answered here: How use callgrind to profiling only a certain period of program execution?
but was not marked as answered for some reason.
Summary:
Starting callgrind with instrumentation off
valgrind --tool=callgrind --instr-atstart=no <PROG>
Controlling instrumentation (can be executed in another shell)
callgrind_control -i on/off
Related
I am working on a big code base. It is heavily multithreaded.
After running the Linux-based application for a few hours, at the end, right before reporting, the application goes silent. It doesn't die, it doesn't crash, it just waits there. Joins, mutexes, condition variables ... any of these could be the culprit.
If it had crashed, I would at least have a chance of finding the source using a debugger. But this way, I have no clue what tool to use to find the bug. I can't even post a code sample for you. The only thing that could possibly help is to sprinkle cout in MANY places to get a picture of where the application is.
Have you been in such a situation? What do you recommend?
If you're running under Linux then just use gdb to run the program. When the application goes silent, interrupt it with Ctrl+C, then type backtrace to see the call stack. With this you will find the function where your application is blocked.
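Roughly, such a session looks as follows (the program name and PID are placeholders):
gdb ./yourapp
(gdb) run
(wait for the application to go silent, then press Ctrl+C)
(gdb) thread apply all backtrace
Alternatively, attach to the already running process with gdb -p <pid> and go straight to the backtrace. Seeing all the stacks side by side usually makes it obvious which threads are stuck in pthread_mutex_lock, pthread_cond_wait or pthread_join, and on what.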
In the case of Linux, gdb will be a great help. Another tool that can be of great help is strace. (It can also be used when there are problems with a program whose source is not readily available, because strace does not need recompilation to trace it.)
strace intercepts and records the system calls made by a process and also the signals it receives. It shows the order of events and all the return/resumption paths of the calls. This can take you very close to the problem area.
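If the process is already hanging, attaching strace to it is a quick first check (the PID is a placeholder):
strace -f -tt -p <pid>
Here -f follows threads and child processes and -tt adds timestamps. A blocked process typically shows its threads parked in calls such as futex() or wait4(), which hints at whether it is waiting on a lock or condition variable, or on a child process.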
iotop, LTTng and ftrace are a few other tools that may be helpful to you in this scenario.
I am currently developing distributed software in C++ on Linux, which is executed on more than 20 nodes simultaneously. One of the most challenging issues I have found is how to debug it.
I heard it is possible to manage multiple remote sessions from a single gdb session (e.g. I create the gdb session on the master node and launch the program on every other node using gdbserver). Is this possible? If so, can you give an example? Do you know any other way to do it?
Thanks
You can try to do it like this:
First, start the nodes with gdbserver on the remote hosts. It is even possible to start it without a program to debug if you start it with the --multi flag. When the server is in multi mode, you can control it from your local session; that is, you can make it start the program you want to debug.
Then, start multiple inferiors in your gdb session
gdb> add-inferior -copies <number of servers>
switch them to a remote target and connect them to remote servers
gdb> inferior 1
gdb> target extended-remote host:port // use extended to switch gdbserver to multi mode
// start a program if gdbserver was started in multi mode
gdb> inferior 2
...
Now you have them all attached to one gdb session. The problem is that, AFAIK, it is not much better than starting multiple gdb instances in different console tabs. On the other hand, you can write scripts or automated tests this way. See the gdb documentation on gdbserver and on inferiors.
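For reference, a rough sketch of that setup (the host name, port number and program path are placeholders; check the gdb/gdbserver manuals for your versions):
On each remote node:
gdbserver --multi :2345
On the master node, for each inferior:
gdb> target extended-remote node1:2345
gdb> set remote exec-file /path/to/yourprog
gdb> file /path/to/yourprog
gdb> run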
I don't believe there is one simple answer to debugging "many remote applications". Yes, you can attach to a process on another machine and step through it in GDB. But it's quite awkward to debug a large number of interdependent processes, especially when the problem is complicated.
I believe a good set of logging capabilities in the code, supplemented with additional logs for specific debugging as needed, is more likely to give you a good/fast result.
Another option might be to run the processes on one machine, rather than on multiple machines. Perhaps even use threads within one process, to simulate the behaviour of multiple machines, simplifying the debugging process. Of course, this doesn't prevent bugs that appear ONLY when you run 20 processes on 20 different machines. But the basic idea is to reduce the number of those bugs to a minimum, and debug most things in a "simpler environment".
Aggressive use of defensive programming paradigms, such as liberal use of assert, is clearly a good idea (perhaps with a macro to turn it off for production runs). But make sure you don't just leave error paths completely unchecked - it is MUCH harder to work out that something crashed because a memory allocation failed when the NULL pointer only surfaces some 20 function calls away from the failed allocation.
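As a small illustration of that last point, a check that fires at the allocation site is far easier to act on than a crash twenty frames later. The CHECK macro below is invented for this sketch, not a standard facility:

#include <cassert>
#include <cstdio>
#include <cstdlib>

// A lightweight runtime check that stays active in release builds,
// unlike assert(), which NDEBUG compiles away.
#define CHECK(cond, msg)                                        \
    do {                                                        \
        if (!(cond)) {                                          \
            std::fprintf(stderr, "CHECK failed: %s (%s:%d)\n",  \
                         msg, __FILE__, __LINE__);              \
            std::abort();                                       \
        }                                                       \
    } while (0)

int* make_buffer(std::size_t n)
{
    int* p = static_cast<int*>(std::malloc(n * sizeof(int)));
    CHECK(p != nullptr, "buffer allocation failed");  // fail loudly here,
    return p;                                         // not 20 calls later
}

int main()
{
    int* buf = make_buffer(1024);
    assert(buf != nullptr);  // debug-only sanity check on top of CHECK
    buf[0] = 42;
    std::free(buf);
    return 0;
}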
I am trying to extract the execution sequence of my program (something like a program counter) with gdb on my local computer (Windows x86) and gdbserver on a remote target (arm-linux). The idea I had was to insert breakpoints at "important" lines of my source files (i.e. at the beginning of a specific function and, more generally, before and after conditional statements) with a high ignore count for each breakpoint, and then check whether each breakpoint was hit or not. I was actually able to get the information with this method, but there is a problem: the behavior of the application I am debugging is time-dependent, and this method slows down program execution too much. Do you think I could use some other method with gdb? I stumbled upon tracepoints, which seem to be exactly what I am looking for, but I was not able to find a property like a "hit counter" for them. The gdb version I am currently using is 7.5.
Thanks a lot in advance.
If your program execution must not be slowed down, you will probably need some HW tool. See these:
Keil real time trace
Lauterbach PowerDebug
(probably other similar solutions)
I'm writing a software renderer in g++ under mingw32 in Windows 7, using NetBeans 7 as my IDE.
I've been needing to profile it of late, and this need has reached critical mass now that I'm past laying down the structure. I looked around, and to me this answer shows the most promise in being simultaneously cross-platform and keeping things simple.
The gist of that approach is that possibly the most basic (and in many ways, the most accurate) way to profile/optimise is to simply sample the stack directly every now and then by halting execution... Unfortunately, NetBeans won't pause. So I'm trying to find out how to do this sampling with gdb directly.
I don't know a great deal about gdb. What I can tell from the man pages though, is that you set breakpoints before running your executable. That doesn't help me.
Does anyone know of a simple approach to getting gdb (or other gnu tools) to either:
Sample the stack when I say so (preferable)
Take a whole bunch of samples at random intervals over a given period
...given my stated configuration?
Have you tried simply running your executable in gdb, and then just hitting ^C (Ctrl+C) when you want to interrupt it? That should drop you to gdb's prompt, where you can simply run the where command to see where you are, and then carry on execution with continue.
If you find yourself in an irrelevant thread (e.g. a looping UI thread), use thread, info threads and thread n to go to the correct one, then execute where.
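A sketch of one such "sample" (the program name is a placeholder):
gdb ./renderer
(gdb) run
(press Ctrl+C whenever you want a sample)
(gdb) info threads
(gdb) thread 1
(gdb) where
(gdb) continue
Repeating this a handful of times and noting which frames keep showing up is essentially the whole technique.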
I want to use valgrind to do some profiling, since it does not require rebuilding the program (the program I want to profile is already built with "-g").
But valgrind (callgrind) is quite slow ... so here's what I want to do:
start the server (that is the server I want to profile)
somehow attach to that server
before I perform an operation on the server, start collecting profile data
after the operation is done, stop collecting profile data
analyze the profiling data.
I can do this kind of thing using Sun Studio on Solaris (using dbx). I just want to know: is it possible to do the same thing using valgrind (callgrind)?
Thanks
You should look at the callgrind documentation and read about callgrind_control.
Launch your app : valgrind --tool=callgrind --instr-atstart=no your_server.x
Attach: see step 1 (you cannot attach to an already running process, so the server has to be launched under callgrind from the start)
start collect profile data: callgrind_control -i on
end collect profile data: callgrind_control -i off
Analyze data with kcachegrind or callgrind_annotate/cg_annotate
For profiling only certain functions, you may also find CALLGRIND_START_INSTRUMENTATION and CALLGRIND_STOP_INSTRUMENTATION from the <valgrind/callgrind.h> header useful, combined with callgrind's --instr-atstart=no option, as suggested in Doomsday's answer.
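A minimal sketch of that combination, with a made-up handle_request() standing in for the real server operation (the macros, including CALLGRIND_DUMP_STATS_AT, come from <valgrind/callgrind.h>; everything else is illustrative):

#include <valgrind/callgrind.h>

// Hypothetical stand-in for the server operation to be profiled.
static void handle_request()
{
    volatile long work = 0;
    for (long i = 0; i < 100000; ++i)
        work += i;
}

int main()
{
    // With --instr-atstart=no, everything outside this window runs unprofiled.
    CALLGRIND_START_INSTRUMENTATION;
    handle_request();
    CALLGRIND_STOP_INSTRUMENTATION;
    CALLGRIND_DUMP_STATS_AT("after_request");  // write a profile dump tagged with this label
    return 0;
}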
You don't say what OS - I'm assuming Linux - in which case you might want to look at oprofile (free) or Zoom (not free, but you can get an evaluation licence), both of which are sampling profilers and can profile existing code without re-compilation. Zoom is much nicer and easier to use (it has a GUI and some nice additional features), but you probably already have oprofile on your system.