I am trying to debug a service. The usual procedure is to start the service and attach gdb to the process, but I want to debug the code while the service is still starting up. It takes a while for gdb to load the libraries, and by the time I can set breakpoints the code I need has already executed. Any idea how to do it? Thanks!
Let's assume your service is called "myservice.exe"
If you can get on the box that the code is actually running on, then I would do the following:
$ gdb myservice.exe
(gdb) break myclass::myfunction
(gdb) run
This should get you what you want.
Note: if you can't run gdb directly, then put a "sleep" statement for 1 minute at the very start (before the part you want to debug); that should give you time to attach before the sensitive code runs.
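For example, a minimal sketch of that approach (the class and function names are placeholders): add sleep(60) as the first statement of the service's startup code, start the service, and attach while it is still sleeping:
$ gdb -p $(pidof myservice.exe)
(gdb) break myclass::myfunction
(gdb) continue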
Related
I'd like to debug a multiprocess C++ project with GDB. Specifically, I'd like to know if there is a way to achieve the following:
Attach multiple processes to a single instance of GDB while letting all the processes run
Setting up a breakpoint in the source code of one of the processes stops all the attached processes
The ideal solution would be something similar to what is offered by the Visual Studio debugger as described here.
At the moment I'm able to attach multiple processes to a GDB instance, but then only the currently selected inferior is executed while the others are stopped and waiting for a continue command.
In order to be able to run inferiors in the background, one needs to issue this gdb command
set target-async on
after start up and before running anything. With this option in effect, one can issue
continue&
(or just c&) and this will send the inferior to the background, giving you the opportunity to switch to another inferior and run it too.
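As a rough sketch of that workflow (the PIDs below are hypothetical, and the details vary between gdb versions), attaching two already-running processes and sending both to the background could look like:
(gdb) set target-async on
(gdb) attach 1234
(gdb) add-inferior
(gdb) inferior 2
(gdb) attach 5678
(gdb) continue&
(gdb) inferior 1
(gdb) continue&
Depending on your gdb version you may also need set non-stop on for the inferiors to run truly independently.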
Stopping all inferiors at once is a bit more difficult. There is no built-in command for that. Fortunately gdb is scriptable and it is possible to attach a script to a breakpoint. Once the breakpoint is hit, the commands are executed. Put inferior n and interrupt commands in the script for each inferior. It is probably more convenient to do that from a Python script, something like
(gdb) python
>inf = gdb.inferiors()
>for i in inf:
>    gdb.execute("inferior %d" % i.num)
>    gdb.execute("interrupt")
>end
I am trying to do a stack overflow exploit for a course at university. The binary I am supposed to exploit has a canary; however, there is a way to leak that canary to stdout. The canary of course consists of some random bytes, so I can't just read them from the string that the program outputs to stdout.
For this reason I am using Python and pwntools, with something like p.recv(timeout=0.01).encode("hex").
(I'm using pwntools only because I don't know another way to read the output in hex format; if there is an easier way I can of course use something else.)
This works more or less as expected; I manage to write to the memory area past the canary. However, I get a segfault, so I obviously have some problem with the stack overflow I am causing. I need a way of debugging this, like seeing the stack after I provide the input that causes the overflow.
And now without any further ado the actual question: Can I debug a process that I started with pwntools (like process("./myprog")) in GDB or some other program that can show me the content of the stack?
I already tried getting the pid in python and using gdb attach to attach to that pid, but that didn't work.
Note: The binary I am trying to exploit has the guid bit set. I don't know if that matters, though.
You can use the pwnlib.gdb module to interface with GDB.
You can use the gdb.attach() function:
From the docs:
from pwn import *

bash = process('bash')
# Attach the debugger
gdb.attach(bash, '''
set follow-fork-mode child
break execve
continue
''')
# Interact with the process
bash.sendline('whoami')
or you can use gdb.debug():
# Create a new process, and stop it at 'main'
io = gdb.debug('bash', '''
# Wait until we hit the main executable's entry point
break _start
continue
# Now set breakpoint on shared library routines
break malloc
break free
continue
''')
# Send a command to Bash
io.sendline("echo hello")
# Interact with the process
io.interactive()
The pwntools template contains code to get you started with debugging with gdb. You can create the pwntools template by running pwn template ./binary_name > template.py. Then you have to add the GDB arg when you run template.py to debug: ./template.py GDB.
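If you prefer not to use the generated file, here is a minimal sketch of what the template essentially does with the GDB arg (the binary name and gdbscript are illustrative; the real template is more elaborate):
from pwn import *

exe = context.binary = ELF('./binary_name')

def start():
    if args.GDB:
        # Launch the binary under gdb in a new terminal window and break at main
        return gdb.debug([exe.path], gdbscript='break main\ncontinue')
    return process([exe.path])

io = start()
io.interactive()
Running ./template.py GDB then opens the debugger automatically, while running it without the arg just starts the process normally.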
If you get [ERROR] Could not find a terminal binary to use., you might need to set context.terminal before you use gdb.
If you're using tmux, the following will automatically open up a gdb debugging session in a new horizontally split window:
context.terminal = ["tmux", "splitw", "-h"]
And to split the screen with the new gdb session window vertically:
context.terminal = ["tmux", "splitw", "-v"]
(Note: I never got this part working myself, so I don't know if it will work. Tell me if you get the gdb window working.)
(To use tmux, install tmux on your machine and just type tmux to start it. Then run python template.py GDB from inside the tmux session.)
If none of the above works, then you can always just start your script, use ps aux, find the PID, and then use gdb -p PID to attach to the running process.
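A minimal sketch of that manual approach with pwntools (the binary name is a placeholder): print the PID, pause the script, and attach from another terminal:
from pwn import *

io = process("./myprog")          # placeholder binary name
log.info("PID: %d" % io.pid)      # in another terminal: gdb -p <that PID>
pause()                           # blocks here, giving you time to attach and set breakpoints
# ... then continue sending your input to io as usual ...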
I have multiple processes communicating over IPC, and when debugging a single process using gdb, I want to send a message to the other processes whenever a breakpoint is hit. Is there a way to automatically invoke a function/piece of code (NotifyAll()) whenever a breakpoint is hit, without manually running commands and invoking the function in the gdb console?
Basically, whenever a gdb debugger is attached to one of these processes, I want gdb to know that it should invoke NotifyAll() whenever a breakpoint (application-wide) is hit.
Yes, this can be done using the Python scripting capabilities in gdb.
In particular you want to add a listener to gdb.events.stop that checks for a breakpoint stop event, then calls your function. It's possible (I don't know offhand) that you'll have to defer the calling of the function by posting an event to the gdb event loop.
To make this work with the minimum of manual intervention, use the gdb script auto-loading feature to associate this Python script with your application. This will require users to trust the script (read about add-auto-load-safe-path), but that's all.
Note that doing things like this is potentially confusing to people trying to debug your application. For example, setting a breakpoint in the RPC code will cause problems unless your script takes extra care.
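A minimal sketch of that idea, assuming the inferior exposes a callable NotifyAll() and that deferring via the event loop is needed (source this file from gdb, or set it up for auto-loading as described above):
import gdb

def notify_on_stop(event):
    # Only react to breakpoint stops, not to plain interrupts or signals.
    if isinstance(event, gdb.BreakpointEvent):
        # Defer the inferior call until gdb is back in its event loop.
        gdb.post_event(lambda: gdb.execute("call NotifyAll()"))

gdb.events.stop.connect(notify_on_stop)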
I'm currently debugging syslinux (a boot loader) through the gdb stub of qemu.
Recently, I wrote some gdb commands that (un)load the debug symbols every time a module is dynamically (un)loaded. In order not to disrupt the execution, I ended the commands with continue.
break com32/lib/sys/module/elf_module.c:282
commands
silent
python
name = gdb.parse_and_eval("module->name").string()
addr = int(str(gdb.parse_and_eval("module->base_addr")), 0)
gdb.execute("load-syslinux-module %s 0x%08x" % (name, addr))
end
continue
end
However, when stepping through the code line by line, if the next or step command makes the execution hit the breakpoint, the breakpoint takes precedence and its commands are executed, including the continue. Execution then continues regardless of the line-by-line debugging I was doing. This also happens if I try to step over the function that contains this breakpoint.
How can I keep (un)loading the debug symbols on the fly while not interfering with the debugging?
Is there an alternative to the continue command? Maybe using breakpoints isn't the right way? I'd take any solution.
This can't be done from the gdb CLI. However, it is easy to do from Python.
In Python the simplest way is to define one's own gdb.Breakpoint subclass, and define the stop method on it. This method can do the work you like, then return False to tell gdb to continue.
The stop facility was designed to avoid the problems with cont in commands. See the documentation for more details.
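A minimal sketch of the stop() approach, reusing the expressions from the breakpoint above (load-syslinux-module is the user-defined command from the question):
import gdb

class ModuleLoadBreakpoint(gdb.Breakpoint):
    def stop(self):
        # Runs whenever the location is reached, without the side effects
        # that continue has inside a commands block.
        name = gdb.parse_and_eval("module->name").string()
        addr = int(str(gdb.parse_and_eval("module->base_addr")), 0)
        gdb.execute("load-syslinux-module %s 0x%08x" % (name, addr))
        return False  # never actually stop, so next/step behave normally

ModuleLoadBreakpoint("com32/lib/sys/module/elf_module.c:282")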
So I am relatively new to coding, so please forgive improper vocab. What I am basically trying to do is create a script for, or perhaps enter commands into, GDB so that it can run my code with a test-case input file over and over. I am working on a project that makes heavy use of semaphores and mutexes, and somewhere, every once in a blue moon, my code breaks due to race conditions. If I could have gdb run my test case continuously until my code hits a seg fault, that would be ideal.
PS: please be specific about what I must do; I am not great at dissecting answers that are heavy on technical detail.
Thank You!
The simplest solution is an expect script. Expect is a program to automate interactions with programs that expose a text terminal interface.
Examples are available at http://en.wikipedia.org/wiki/Expect
The script should look something like this:
#!/usr/bin/expect
# start gdb
spawn gdb yourprogram
while {1} {
    # wait for gdb to start: expect the (gdb) prompt to appear
    expect "(gdb)"
    # send the command to run your program
    send "run your_args\n"
    # if the program exited normally, just run it again;
    # on an error message (e.g. "Program received signal SIGSEGV"),
    # hand control back so you can debug interactively
    expect {
        "exited normally" { continue }
        "(Some error message)" { interact }
    }
}
You can use GDB scripts in order to automate your GDB sessions. The GDB macro coding language consists of gdb commands along with basic looping statements and conditional statements.
You can find information about it here
http://www.adacore.com/adaanswers/gems/gem-119-gdb-scripting-part-1/
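For the run-until-it-crashes case specifically, a small Python script sourced into gdb may be easier than the macro language. A minimal sketch (the script file name is just an example, and any breakpoints you set would also stop the loop):
import gdb

gdb.execute("set pagination off")
gdb.execute("set confirm off")

while True:
    gdb.execute("run")
    # If there are still live threads, the program did not exit cleanly
    # (it crashed or stopped at a breakpoint): stop looping and inspect it.
    if gdb.selected_inferior().threads():
        break
You could launch it with something like gdb -x run_until_crash.py --args ./yourprogram your_args, and gdb will stop at the crashing run with the full state available for inspection.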