Step to Next Branching Instruction in GDB

Is there a way to continue execution until a branching instruction is reached in GDB? Much like the WinDbg "ph" command. If not, can it be scripted in Python?

I finally came across this question:
Does GDB have a "step-to-next-call" instruction?
I was able to modify the code there to create my own "ph" command:
import gdb

# Branch/jump mnemonics to stop on, per architecture.
mips_branches = ["beq", "beqz", "bne", "bnez", "bgtz", "bltz", "bgez", "blez",
                 "j", "jr", "jal", "jalr"]
arm_branches = ["b", "bl", "blx", "bx", "beq"]

class StepToNextBranch (gdb.Command):
    def __init__ (self):
        super (StepToNextBranch, self).__init__ ("ph", gdb.COMMAND_OBSCURE)

    def invoke (self, arg, from_tty):
        arch = gdb.selected_frame().architecture()
        while True:
            # Step one instruction (stepping over calls), then look at the
            # instruction we landed on.
            SILENT = True
            gdb.execute("nexti", to_string=SILENT)
            current_pc = int(gdb.selected_frame().read_register("pc"))
            disa = arch.disassemble(current_pc)[0]
            opcode = disa["asm"].split("\t")[0]
            if opcode in mips_branches or opcode in arm_branches:
                break
        gdb.execute("context")

StepToNextBranch()
I'm not as familiar with ARM as with MIPS, so I'm not sure this covers every branch instruction. Eventually I'll add x86 branching instructions (a possible starting list is sketched below).
Also, that gdb.execute("context") line is there because I use GEF.
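For x86, a rough starting list might look something like this (untested on my part, and certainly not exhaustive; the conditional-jump family alone has many more members):

# Untested sketch: common x86/x86-64 control-transfer mnemonics.
x86_branches = ["jmp", "call", "ret",
                "je", "jne", "jz", "jnz", "jg", "jge", "jl", "jle",
                "ja", "jae", "jb", "jbe", "js", "jns", "jo", "jno"]

The check in invoke would then also need an "or opcode in x86_branches".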

Related

How to avoid step into built in types in gdb? [duplicate]


Does GDB have a "step-to-next-call" instruction?

WinDbg and the related Windows kernel debuggers support a "pc" command, which runs the target until the next call statement (in assembly) is reached. In other words, it breaks just prior to creating a new stack frame, sort of the opposite of "finish". GDB's "start" runs until main begins, but in essence I want "start" with a wildcard of "any next frame".
I'm trying to locate similar functionality in GDB, but have not found it.
Is this possible?
Example WinDBG doc: http://windbg.info/doc/1-common-cmds.html#4_expr_and_cmds
Simple answer: no, step-to-next-call is not part of GDB commands.
GDB/Python-aware answer: no, it's not part of GDB commands, but it's easy to implement!
I'm not sure whether you want to stop before or after the call instruction executes.
To stop before, you need to stepi/nexti (step one assembly instruction) until you see a call in the current instruction:
import gdb

class StepBeforeNextCall (gdb.Command):
    def __init__ (self):
        super (StepBeforeNextCall, self).__init__ ("step-before-next-call",
                                                   gdb.COMMAND_OBSCURE)

    def invoke (self, arg, from_tty):
        arch = gdb.selected_frame().architecture()

        while True:
            current_pc = addr2num(gdb.selected_frame().read_register("pc"))
            disa = arch.disassemble(current_pc)[0]
            if "call" in disa["asm"]:  # or startswith ?
                break

            SILENT = True
            gdb.execute("stepi", to_string=SILENT)

        print("step-before-next-call: next instruction is a call.")
        print("{}: {}".format(hex(int(disa["addr"])), disa["asm"]))

def addr2num(addr):
    try:
        return int(addr)   # Python 3
    except:
        return long(addr)  # Python 2

StepBeforeNextCall()
To stop after the call, you compute the current stack depth, then step until it's deeper:
import gdb

def callstack_depth():
    depth = 1
    frame = gdb.newest_frame()
    while frame is not None:
        frame = frame.older()
        depth += 1
    return depth

class StepToNextCall (gdb.Command):
    def __init__ (self):
        super (StepToNextCall, self).__init__ ("step-to-next-call",
                                               gdb.COMMAND_OBSCURE)

    def invoke (self, arg, from_tty):
        start_depth = current_depth = callstack_depth()

        # step until we're one step deeper
        while current_depth == start_depth:
            SILENT = True
            gdb.execute("step", to_string=SILENT)
            current_depth = callstack_depth()

        # display information about the new frame
        gdb.execute("frame 0")

StepToNextCall()
Just put that in a file, source it from GDB (or from your .gdbinit), and that gives you the new commands step-before-next-call and step-to-next-call.
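For example, assuming you saved the code above as step-to-call.py (the file name here is just illustrative):

(gdb) source step-to-call.py
(gdb) step-before-next-call
(gdb) step-to-next-call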
Relevant documentation is here:
Python API table of contents
Basic Python
Python representation of architectures
Accessing inferior stack frames from Python.

Testing Motor calls with IOLoop

I'm running unit tests in the callbacks for Motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are being caught in the wrong test. The tracebacks point to different files.
My unit tests generally look like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
import traceback
import unittest
import Queue

from tornado.ioloop import IOLoop

class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
Here the error is reported in UserDBTest, but it clearly comes from test_question_db.py (which is QuestionsDbTest).
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest

from tornado.testing import AsyncTestCase, gen_test
import motor


# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)

        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()

        collection.insert({'_id': 123}, callback=self.callback)

        # Arguments passed to self.stop appear as return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)

        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)


if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance, create one MotorClient when your application begins and use that same client throughout the process's lifetime.)
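A minimal sketch of that pattern (the database and collection names here are just illustrative):

import motor

# Create a single client once, when the application starts up...
client = motor.MotorClient()
db = client.my_database

# ...and reuse it for every operation for the lifetime of the process, e.g.:
# db.my_collection.insert({'_id': 1}, callback=on_inserted)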
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unit-testing Tornado applications.
I gave a talk and wrote an article about testing in Tornado; there's more info here:
http://emptysqua.re/blog/eventually-correct-links/

ReactorNotRestartable error

I have a tool in which I am implementing UPnP discovery of devices connected to the network.
For that I have written a script that uses the datagram class.
Implementation:
Whenever the scan button is pressed in the tool, it runs the UPnP script and lists the devices in a box in the tool.
This was working fine.
But when I press the scan button again, it gives me the following error:
Traceback (most recent call last):
File "tool\ui\main.py", line 508, in updateDevices
upnp_script.main("server", localHostAddress)
File "tool\ui\upnp_script.py", line 90, in main
reactor.run()
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1191, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1171, in startRunning
ReactorBase.startRunning(self)
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 683, in startRunning
raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
Main function of upnp script:
def main(mode, iface):
    klass = Server if mode == 'server' else Client
    obj = klass
    obj(iface)
    reactor.run()
There is a Server class which sends the M-SEARCH command (UPnP) for discovering devices:
MS = 'M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\nMX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT)
In the Server class constructor, after sending M-SEARCH, I am stopping the reactor:
reactor.callLater(10, reactor.stop)
From Google I found that we cannot restart a reactor because that is a limitation of Twisted.
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted
Please guide me on how I can modify my code so that I am able to scan devices more than once and don't get this "reactor not restartable" error.
In response to "Please guide me how can I modify my code...": you haven't provided enough code for me to know how to guide you specifically; I would need to understand the (Twisted part of the) logic around your scan/search.
If I were to offer a generic design/pattern/mental model for the "twisted reactor", though, I would say: think of it as your program's main loop. (Thinking about the reactor that way is what makes the problem obvious to me, anyway.)
I.e. most long-running programs have a form something like:
def main():
    while True:
        check_and_update_some_stuff()
        time.sleep(10)
That same code in twisted is more like:
def main():
    # the LoopingCall adds the given function to the reactor loop
    l = task.LoopingCall(check_and_update_some_stuff)
    l.start(10.0)

    reactor.run()  # <--- this is the endless while loop
If you think of the reactor as "the endless loop that makes up the main() of my program", then you'll understand why no one is bothering to add support for "restarting" the reactor. Why would you want to restart an endless loop? Instead of stopping the core of your program, you should surgically stop only the task inside it that is complete, leaving the main loop untouched.
You seem to be implying that the current code will keep sending M-SEARCHes endlessly while the reactor is running. So change your sending code so that it stops repeating the "send" (I can't tell you exactly how to do this because you didn't provide code, but, for instance, a LoopingCall can be turned off by calling its .stop method).
A runnable example follows:
#!/usr/bin/python

from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class PollingIOThingy(object):
    def __init__(self):
        self.sendingcallback = None  # Note I'm pushing sendToAll into here in main()
        self.l = None                # Also being pushed in from main()
        self.iotries = 0

    def pollingtry(self):
        self.iotries += 1
        if self.iotries > 5:
            print "stopping this task"
            self.l.stop()
            return
        print "Polling runs: " + str(self.iotries)
        if self.sendingcallback:
            self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

class MyClientConnections(Protocol):
    def connectionMade(self):
        print "Got new client!"
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        print "Lost a client!"
        self.factory.clients.remove(self)

class MyServerFactory(ServerFactory):
    protocol = MyClientConnections

    def __init__(self):
        self.clients = []

    def sendToAll(self, message):
        for c in self.clients:
            c.transport.write(message)

# Normally I would define a class of ServerFactory here but I'm going to
# hack it into main() as they do in the twisted chat, to make things shorter

def main():
    client_connection_factory = MyServerFactory()

    polling_stuff = PollingIOThingy()

    # the following line is what this example is all about:
    polling_stuff.sendingcallback = client_connection_factory.sendToAll
    # push the client connections send def into my polling class

    # if you want to run something every second (instead of 1 second after
    # the end of your last code run, which could vary) do:
    l = task.LoopingCall(polling_stuff.pollingtry)
    polling_stuff.l = l
    l.start(1.0)
    # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html

    reactor.listenTCP(5000, client_connection_factory)
    reactor.run()

if __name__ == '__main__':
    main()
This script has extra cruft in it that you might not care about, so just focus on the self.l.stop() in PollingIOThingy's pollingtry method and the l-related stuff in main(); that's what illustrates the point.
(This code comes from the SO question "Persistent connection in twisted"; check that question if you want to know what the extra bits are about.)

Tell gdb to skip standard files

I'm debugging C++ code with GDB and when it enters a constructor of some object containing standard library objects, it shows me the constructor of these objects (like std::map) and everything that's underneath.
I know about the next command, but I'd prefer to basically blacklist any standard library code, which is never the source of the error I'm investigating. The desired behavior is that a simple skip would send me to the next "user-land" code.
gdb 7.12 supports file globbing to specify the files to skip in the debugger. The documentation is here:
https://sourceware.org/gdb/onlinedocs/gdb/Skipping-Over-Functions-and-Files.html
To skip stepping into all library headers in the directory /usr/include/c++/5/bits, add the following lines to ~/.gdbinit:
# To skip all .h files in /usr/include/c++/5/bits
skip -gfi /usr/include/c++/5/bits/*.h
To skip a specific file instead, say stl_vector.h, add the following lines to ~/.gdbinit:
# To skip the file /usr/include/c++/5/bits/stl_vector.h
skip file /usr/include/c++/5/bits/stl_vector.h
Doing the above with gdb 7.11 or below leads to the following error:
Ignore function pending future shared library load? (y or [n]) [answered N; input not from terminal]
However, gdb 7.12 seems to have solved the above issue.
This blog addresses the same problem for gdb version 7.11 or below.
Note: you can use the following command from the gdb prompt to list all the files marked for skipping:
info skip
* Changes in GDB 7.4
GDB now allows you to skip uninteresting functions and files when stepping with the "skip function" and "skip file" commands.
Step instructions and skip all files without source
This will be too slow for most applications, but it is fun!
Based on: Displaying each assembly instruction executed in gdb
import os

import gdb

class ContinueUntilSource(gdb.Command):
    def __init__(self):
        super().__init__(
            'cus',
            gdb.COMMAND_BREAKPOINTS,
            gdb.COMPLETE_NONE,
            False
        )

    def invoke(self, argument, from_tty):
        argv = gdb.string_to_argv(argument)
        if argv:
            gdb.write('Does not take any arguments.\n')
        else:
            thread = gdb.inferiors()[0].threads()[0]
            while True:
                # Single-step one instruction, silently.
                message = gdb.execute('si', to_string=True)
                if not thread.is_valid():
                    break
                try:
                    path = gdb.selected_frame().find_sal().symtab.fullname()
                except Exception:
                    pass
                else:
                    # Stop once the current frame maps to an existing source file.
                    if os.path.exists(path):
                        break

ContinueUntilSource()
Tested in Ubuntu 16.04, GDB 7.11. GitHub upstream.
std::function case
How to step debug into std::function user code from C++ functional with GDB?
Modified from Ciro Santilli's answer: the command ss steps inside a specific source file. You may specify a source file name, or the current one will be stepped through. Very handy for stepping through bison/yacc sources or other meta-sources that generate C code and insert #line directives.
import os.path

import gdb

class StepSource(gdb.Command):
    def __init__(self):
        super().__init__(
            'ss',
            gdb.COMMAND_BREAKPOINTS,
            gdb.COMPLETE_NONE,
            False
        )

    def invoke(self, argument, from_tty):
        argv = gdb.string_to_argv(argument)
        if argv:
            if len(argv) > 1:
                gdb.write('Usage:\nss [source-name]\n')
                return
            source = argv[0]
            # Treat the argument as a full path only if it contains directories.
            full_path = os.path.basename(source) != source
        else:
            source = gdb.selected_frame().find_sal().symtab.fullname()
            full_path = True
        thread = gdb.inferiors()[0].threads()[0]
        while True:
            message = gdb.execute('next', to_string=True)
            if not thread.is_valid():
                break
            try:
                cur_source = gdb.selected_frame().find_sal().symtab.fullname()
                if not full_path:
                    cur_source = os.path.basename(cur_source)
            except Exception:
                break
            else:
                if source == cur_source:
                    break

StepSource()
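For example (the script and source file names here are just illustrative), after sourcing the script:

(gdb) source step-source.py
(gdb) ss
(gdb) ss parser.y

With no argument, ss steps until execution is back in the current source file; with an argument, it steps until execution reaches a line from the named file.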
Known bugs
It doesn't interrupt the debugger on SIGINT while running.
The original pass was changed to break on exception, as I'm not sure whether that is right.