I am dealing with a large code base with tons of globals. Under some peculiar set of data it produces the wrong result. I wanted to automatically run a few scenarios with gdb in automatic step-by-step execution, periodically dumping some values and recording the trace in a file. Doing it manually will ruin my sight and my brain. I suspect that some globals get messed up. How can I do this automatically, using some scripting? All this is on RH Linux.
Thanks in advance.
I tried to do this manually using conditional breakpoints, but gave up after a while.
I wanted to automatically run a few scenarios with gdb in automatic step-by-step execution, periodically dumping some values and recording the trace in a file.
It may be significantly more effective to run the program under a reverse debugger (such as rr) and trace the wrong result back to its source.
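A typical rr workflow (the program name, the break location, and the global below are placeholders) is to record the failing scenario once, then replay it and walk backwards from the bad value with a watchpoint:

rr record ./your_program <args>
rr replay
(rr) break file.cpp:123          # wherever the wrong result becomes visible
(rr) continue
(rr) watch -l my_suspect_global
(rr) reverse-continue            # stops at the most recent write to that global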
How to do this automatically?
You can't do automatically what you can't express as an algorithm, and you haven't described the algorithm you want to use. If it's something like "stop every 100 times foo is called and print the values of these 500 globals", then that's trivially automatable with GDB.
More complicated algorithms are possible with the use of embedded Python.
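For instance, a minimal sketch of that "every 100th call" algorithm as a plain GDB script (foo, g_state and g_count are placeholder names):

set $hits = 0
break foo
commands
  silent
  set $hits = $hits + 1
  if $hits % 100 == 0
    printf "foo hit %d times\n", $hits
    print g_state
    print g_count
  end
  continue
end
run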
In the .gdbinit file in your home folder add
add-auto-load-safe-path /path_to_the_folder_containing_your_executable/
Now you can create another .gdbinit file in the folder containing your executable; it will be loaded when you start gdb from there (the .gdbinit file in your home directory is also read, which is useful if you have nice stuff there such as loading pretty printers).
In this .gdbinit file, add the code below
file your_executable_name
start
# Optional
set args "<any command line parameters you program might need>"
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Add gdb commands below to set breakpoints, print variable values, etc
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
run
Gdb is very powerful and I'll list a few things that might help this automation.
You can set breakpoints with b filename.ext:line_number or b function_name
You can use commands breakpoint_number and then list the commands that should be run after this breakpoint is hit. Use end to finish the command list (a short example follows this list)
You know the breakpoint number, since breakpoints are numbered sequentially (note that start counts as a breakpoint, so the first one you add will be breakpoint 2)
You can use convenience variables to store useful things such as the address of an important object
The python api is very powerful
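As a small illustration tying these together (the file, line and variable names are made up), breakpoint commands combined with set logging will record the trace to a file:

set pagination off
set logging file trace.log
set logging on
b my_source.cpp:42
commands
  echo --- reached my_source.cpp:42 ---\n
  p my_global
  continue
end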
One idea is to save the addresses of the important global variables (if they are not always accessible) in convenience variables. For instance, add a breakpoint where such a global variable is accessible, and then add to this breakpoint the commands to save the address of the variable to a convenience variable, followed by continue (with continue you will not see gdb stopping there).
set $var1 = &myglobal
continue
You might want to also delete this breakpoint with delete breakpoint_number before continue to avoid stopping at this breakpoint again.
Then, as long as the object exists, you can inspect it using p $var1 or p $var1->something when the program is stopped at a different breakpoint where myglobal might not be directly accessible.
In the commands that you add to be run when a breakpoint is hit you can do things such as echo message explaining where you are, p some_var to see the values of variables or even python some_complicated_python_code.
If you want to use python for more power, it is worth reading the section about it in the manual. Let me give you one example. Suppose one of your global variables is stored in a convenience variable called "$myvar". Then you can pass it to the python interpreter with
python myvar = gdb.parse_and_eval("$myvar")
You can also pass any variable in the current scope to parse_and_eval.
Now, suppose this global variable stores an object of a class with an "n_elem" attribute you want to check. You can print it with
python print(myvar["n_elem"])
You can also create a python file in the same folder and use
python from my_python_file import *
to import functions defined there.
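As a rough sketch of what such a file could contain (the global names and log path below are made up), you could define a helper that appends the current values of a few globals to a file, and call it from breakpoint commands with python dump_globals():

# my_python_file.py -- illustrative only
import gdb

def dump_globals(log_path="globals_trace.log"):
    # placeholder names for the globals you care about
    names = ["g_state", "g_count"]
    with open(log_path, "a") as log:
        for name in names:
            try:
                value = gdb.parse_and_eval(name)
                log.write("{} = {}\n".format(name, value))
            except gdb.error:
                log.write("{} is not accessible here\n".format(name))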
With these gdb features you can pretty much automate whatever you might need.
Related
I'm new to C++ debugging and LLDB. I'm using VSCode with its C++ adapter, LLDB as the debugger, and Bazel as the build system. My application deals with manipulating images. The application runs quickly, but debugging it is very slow: once I've loaded the images into memory, it takes about 20 seconds to a minute to step through each line. My assumption is that the raw images are too much for the debugger. If I use a small image, then I'm able to step through the code quickly inside the debugger.
My question is: is there a way to tell the debugger to ignore the image loaded variables? Or perhaps to lazy-load the image variable data? I'm more interested in the other variables such as the matrices.
The underlying debugger, lldb, doesn't fetch any variables unless explicitly asked. It's always the UI that requests variable values.
In Xcode, if you close the Locals View, Xcode won't ask lldb to fetch variables. That does speed up stepping in frames with big local variables.
Then if you need to keep an eye on one or two of the variables while stepping you can use tooltips or the debugger console to print them on demand. You can also set up target stop-hooks in the lldb Console and use them to auto-print the variables you are tracking.
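For example (the variable name is hypothetical), a stop-hook added from the lldb console prints just the variables you care about on every stop:

(lldb) target stop-hook add -o "frame variable transform_matrix"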
Some UI's also separate the "Locals" view from the "Watched Expression" view, so you can close the former and put the variables you need to see in the latter.
I don't know if VSCode allows you to close the Locals view, but if it does that might be a way to handle this problem.
When debugging a simple program in gdb, I want to continue the execution automatically after hitting breakpoints. As far as I know, there are two methods to accomplish this:
1) use hook-stop.
define hook-stop
continue
end
But it seems the hook-stop is triggered only once. When another breakpoint is hit next time, the execution still stops.
2) use gdb.events.stop.connect().
def handle_stop_event(event):
    if isinstance(event, gdb.BreakpointEvent):
        gdb.execute('continue')

gdb.events.stop.connect(handle_stop_event)
This method works well. But if too many breakpoints are hit, an error "Fatal Python error: Cannot recover from stack overflow." occurs.
It seems to be caused by recursive calls. I'm wondering why gdb.execute('continue') causes this issue.
I searched online and still didn't find a solution.
PS: gdb version 7.11.1 on Ubuntu 16.04
Any advice would be appreciated! Thanks in advance.
It seems, continue inside hook-stop doesn't work properly. Have you seen this question I posted yesterday?
I think, the best approach here is writing a convenience function in python and setting a conditional breakpoint. Or using commands — see the "Breakpoint Command Lists" section of the GDB user manual.
Here's how do to it (also described in the manual).
The python module:
import gdb

class should_skip_f(gdb.Function):
    def __init__(self):
        super(should_skip_f, self).__init__("should_skip")

    def invoke(self):
        return True  # Your condition here

should_skip_f()
(gdb) b <your target> if !$should_skip()
Or add the condition to existing breakpoints with
(gdb) condition <BNUM> !$should_skip()
The only downside is that you have to set the condition for each breakpoint individually, but that's scriptable. Also, I think, the commands syntax allows you to add commands to a list of breakpoints at once.
'commands [LIST...]'
'... COMMAND-LIST ...'
'end'
Specify a list of commands for the given breakpoints. The commands
themselves appear on the following lines. Type a line containing
just 'end' to terminate the commands.
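For the auto-continue behaviour in particular, breakpoint commands side-step the Python recursion entirely; a minimal sketch (the breakpoint numbers and printed variable are illustrative):

(gdb) commands 1 2 3
> silent
> print some_var
> continue
> end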
As for the recursion — yeah, that's a bad "design" of a debugger script (if one should talk about design of one-off throwaway things). You can examine what happens there if you extend your python script like so
import inspect
...
def handle_stop_event(event):
    ...
    print(len(inspect.stack()))  # Or you can print the frames themselves...
The Python interpreter doesn't know that execution does not return from gdb.execute("continue"), so the Python stack frames for invocations of this function are never destroyed.
You can increase the interpreter's recursion limit, but like I said, this script does not seem like the best solution to me.
Are there any debuggers, tools, gdb-scripts that can be used to do code-path analysis?
Say I have an executable (in C++, but the question is not language restricted) that runs fine with one input and crashes with another. I would like to see the difference between the two execution paths, without having to step (or instrument) through potentially thousands of lines of code.
Ideally, I would be able to compare between 2 different streams of (C++) statements (preferably not assembler) and pinpoint the difference(s). Maybe a certain if-branch is taken in one execution and not the other, etc.
Is there a way to achieve / automate that? Thanks in advance.
So, provided the source of the bug can be localized to one (or a few) source files, the simplest way to get comparative code-execution paths seems to be GDB scripting. You create a gdb script file:
set args <arg_list>
set logging off
set logging file <log_file_1>
set logging on
set pagination off
set breakpoint pending on
b <source_file>:<line_1>
commands
frame
c
end
...
b <source_file>:<line_n>
commands
frame
c
end
with a preamble (all the set commands) and then breakpoint + command for each line in the source file (which can be easily generated by a script; don't worry about blank or commented lines, they will be skipped).
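As a rough sketch of such a generator (file names, paths and the argument list are placeholders), a few lines of Python are enough to emit one breakpoint + commands block per source line:

# generate_gdb_script.py -- illustrative helper, adjust names as needed
src = "suspect_file.cpp"
preamble = (
    "set args <arg_list>\n"
    "set logging off\n"
    "set logging file log_file_1\n"
    "set logging on\n"
    "set pagination off\n"
    "set breakpoint pending on\n"
)
with open("gdb_script.txt", "w") as out:
    out.write(preamble)
    with open(src) as f:
        for lineno, _ in enumerate(f, start=1):
            out.write("b {0}:{1}\ncommands\nframe\nc\nend\n".format(src, lineno))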
Load the executable in gdb (properly built with debug flags, of course); source the gdb script file above (call it gdb_script.txt) and run:
source gdb_script.txt
run
Then repeat the process above with a slightly changed script file (gdb_script.txt). Specifically, change the <arg_list> to modify the input; and set logging file to a different file <log_file_2>.
Source and run. Then compare <log_file_1> vs. <log_file_2> with your preferred diffing tool (say, tkdiff).
This will not do a better job than a coverage tool such as gcov, but it can help restrict the output to the suspicious region of code.
I am writing a bash script that runs a C++ program multiple times. I use getenv() and putenv() to create, get, and update environment variables in the C++ program. After the C++ program ends, the bash script needs to grab these variables and perform some basic logic. The problem is that when the C++ program exits, the environment variables disappear. Is there any way to permanently store these variables after the program's termination so that the bash script can use them? If not, what is the best way to share variables between a bash script and a C++ program? The only solution I can think of is writing output to files. I do not want to print this data in the console. Any help would be greatly appreciated.
Each process has its own copy of the environment variables, which are initialised by copying them from the parent process when the new process is launched. When you change an environment variable in your process, the parent process has no knowledge of this.
In order to pass back information from a child to a parent, you will need to set up some other kind of communications channel. It could be files on disk, or a pipe, or (depending on the capabilities of your parent, bash might not be able to do all this) shared memory or some other IPC mechanism. The parent program would then be responsible for changing its own environment variables based on information received from the child.
I personally have only ever been able to do this in 16-bit DOS assembler, by tracing the pointer to the previous process until it points at itself, which means that you've reached the first instance of COMMAND.COM, and then altered its environment manually.
If your program returned the variables via standard output as string, like this:
FOO=23; BAR=45;
Then, bash could call it like this:
eval `./your_program`
and $FOO and $BAR will be accessible to bash.
To test this try:
eval `echo "FOO=23; BAR=45;"`
echo "$FOO $BAR"
Of course, with this method the program does not change the environment variables of the calling process (which is not possible); it just returns a string that bash then evaluates, and that evaluation sets the variables.
Do not use this method if your program processes input from an untrusted source. If someone tricked your program into printing "rm -rf /" to standard output, you would be doomed.
As far as I know, under a "standard" GNU/Linux environment you can set environment variables in 3 ways:
using command line utilities like export
editing files like ~/.profile or ~/.bashrc for a user, or the equivalent files under /etc for the whole system
feeding temporary values to a command like this CXX=g++ CUSTOM_VERSION=2.5 command
The last one is usually used to customize builds, and it's good because it doesn't harm the system, doesn't interfere with any system settings, values or files, and everything is back to normal after the command finishes. It's the best way if you want a temporary modification for a particular set of variables.
There is no way for a program to set environment variables in its parent. Or, well, no legitimate way. Hacking into its process with ptrace does not count. :-)
What you should do is output the environment variables on standard output. Have the shell script read them and set them. If the environment variables are all that you output the call is very simple:
eval `program`
The back-ticks capture the output of the program, and eval runs that output as shell commands, which set the shell variables. Then later in your shell script be sure to do:
export VAR1
export VAR2
You need the export commands in order to move them into the environment that is passed to programs launched from the shell.
You cannot set environment variables which survive the life-time of your process, so the easiest solution is to write to output files as you suggested or write to specific filehandles handed down from Bash:
C++:
#include <cstdlib>
#include <cstring>
#include <unistd.h>

int main(int argc, char* argv[])
{
    // missing error handling
    int fd = atoi(argv[1]);           // descriptor number passed in by the shell
    const char* env = "BLAH=SMURF";
    write(fd, env, strlen(env));      // write to the descriptor handed down from bash
    return 0;
}
Bash:
# capture fd 5: duplicate it onto the captured stdout first,
# then discard the program's own stdout and stderr;
# the argument "5" tells the process which descriptor to write to
VARIABLES=`./command 5 5>&1 1>/dev/null 2>&1`
This is probably a crack-pot idea but it should work :)
I am writing gdb command scripts to simplify debugging. One of the problems I have is that I am setting a breakpoint that I want to disable right away, and only enable again after another breakpoint is hit.
What I want to do is this
$my_break_number = break SomeFile.cpp:231
disable $my_break_number
but unfortunately gdb doesn't work this way. I have read the manual, but I cannot find any information on how to do this. Hopefully there is some information I have missed.
gdb will automatically set a convenience variable $bpnum with the last set breakpoint number.
You can possibly use that right after setting a breakpoint to disable it. (I haven't tested what happens when a breakpoint location is ambiguous and creates multiple breakpoints; I think it will still work and disable all the breakpoint locations created.)
see: http://sourceware.org/gdb/current/onlinedocs/gdb/Set-Breaks.html#Set-Breaks
If you need to use the breakpoint number from commands, that is probably not what you want, but it works for the question as specified.
It sounds like you may want to use the Python GDB scripting, which gives you a lot better programmatic access to breakpoints than what is possible with "regular" command scripts.
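As a hedged sketch of that approach (the file names and line numbers are made up), the Python API lets you keep the breakpoint as an object, start it disabled, and have a second breakpoint enable it when hit:

import gdb

target_bp = gdb.Breakpoint("SomeFile.cpp:231")
target_bp.enabled = False              # start out disabled

class EnablerBreakpoint(gdb.Breakpoint):
    # when this one is hit, enable the other breakpoint and keep running
    def stop(self):
        target_bp.enabled = True
        return False                   # False means: do not actually stop here

EnablerBreakpoint("SomeOtherFile.cpp:100")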
Also, info breakpoints gives useful information such as: the breakpoint number, how many times the breakpoint was hit, its address in memory, which function it is in, and the file and line number of the breakpoint.