I'm trying to set a breakpoint (break) after attaching to a process in background mode (attach &). However, I get:
Cannot insert breakpoint 1.
Cannot access memory at address 0x5560c872b71a
Any reason why it's happening?
Setting a breakpoint in foreground mode works fine.
The program is written in C++.
Any reason why it's happening?
The program must be stopped when you insert a breakpoint into it. Inserting a breakpoint is not an atomic operation, and writing to program code (which is what breakpoint insertion amounts to) while that code is executing may result in all kinds of badness.
Use the interrupt command to stop the process and bring it to the foreground, insert your breakpoint, then continue & to put it into the background again.
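A typical sequence looks like this (my_function is just a placeholder for wherever you actually want to stop):

(gdb) interrupt
(gdb) break my_function
(gdb) continue &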
I know how to create/modify a breakpoint to stop only on a specific thread using breakpoint modify <breakpoint-id> -T <thread-name>, but can I do it the other way around, preventing the breakpoint from stopping on a specific thread?
I have a log thread that uses the same function I'm trying to debug on other threads, and it's annoying to have to let it continue every time the breakpoint is hit there.
You can do this sort of thing using the lldb Python API and Python breakpoint callbacks. The callback returns a "should stop" value: if it returns True, you stop at the breakpoint, and if it returns False, execution continues.
So for instance:
(lldb) breakpoint command add -s python -o 'return frame.thread.name != "MyThread"'
is what you wanted. Note, in this example, I didn't provide a breakpoint ID. In lldb that means "act on the last set breakpoint". But you can also supply the breakpoint ID if the one you want to add the command to doesn't happen to be the last breakpoint you set. And if you want to add it to multiple breakpoints, you can specify a breakpoint ID list.
There's more on the APIs and the callback format here:
https://lldb.llvm.org/python_api.html
https://lldb.llvm.org/use/python-reference.html#running-a-python-script-when-a-breakpoint-gets-hit
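If the condition outgrows a one-liner, the same check can live in a file-based callback. This is only a sketch using the standard callback signature; skip_log_thread.py, the function name, and "MyThread" are all placeholders you would rename:

# skip_log_thread.py -- auto-continue whenever the hit is on the log thread
def skip_log_thread(frame, bp_loc, internal_dict):
    # Returning False tells lldb not to stop for this hit.
    return frame.thread.name != "MyThread"

Load it and attach it to, say, breakpoint 1 with:

(lldb) command script import /path/to/skip_log_thread.py
(lldb) breakpoint command add -F skip_log_thread.skip_log_thread 1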
Note, however, that breakpoints with should-stop callbacks will force a stop every time the breakpoint is hit. lldb auto-continues pretty quickly, so for many applications this isn't a problem. But just getting from the stop over to the debugger and back is not cheap, so if your breakpoint is in a work loop and getting hit hundreds of thousands of times a second, this is going to slow down the run. If that ends up being a problem and you control your source code, tricks like the one Eljay suggested in the comments can be really handy.
I have a piece of network software that I need to debug. It forks in multiple places, and I need to debug one particular function handling one particular request.
Is there any way to setup a global breakpoint that would be caught even when it is in an inferior process?
I cannot use follow-fork-mode child because this will follow the first request, not the one I need to debug.
One way to do this is to have gdb remain attached to all the processes. Then you would set your breakpoint and run the program as usual; the breakpoint would fire in any sub-process that happened to hit that location. You can use breakpoint conditions to try to reduce the number of hits.
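For instance, if the request you care about can be identified by some variable in scope at the breakpoint, a condition like this narrows things down (handle_request and request_id are placeholders for names from your own code):

(gdb) break handle_request if request_id == 42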
To put gdb into multi-inferior mode, I use this:
set detach-on-fork off
set non-stop on
set pagination off
Depending on your version of gdb, you might also need set target-async on.
This mode can be a bit peculiar to work in. For example, when one thread stops, the others keep going. Also, breakpoint stops are reported but not always obvious; and I think gdb doesn't immediately switch to the stopping thread (this may have changed in gdb git, I forget).
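When a breakpoint does fire in one of the children, something like this lets you see which inferior stopped and switch to it (the inferior number here is just an example from a hypothetical run):

(gdb) info inferiors
(gdb) inferior 2
(gdb) info threads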
How are SW breakpoints handled (conceptually) by a gdb stub or server (I assume the client stub and the server handle them in pretty much the same way)?
I'm interested in a 'bare metal' target where the gdb stub/server runs, and both breakpoints and single stepping use software interrupts.
My actual questions:
When a breakpoint is hit, how is the stored instruction run so that the breakpoint can be 're-installed' and the (saved) machine status (including register contents) is not changed from the moment of hitting the breakpoint?
=> When is the breakpoint re-installed, and how? Between the breakpoint hit and entering the command interpreter, or during the next single step or continue?
Also, how does single-stepping over a breakpoint work such that the original non-breakpoint instruction gets executed, and the breakpoint still remains in place after being single-stepped over?
[edit]
I forgot to mention: the "GDB Internals" document seems to be missing that info - and actually the whole subchapter about single stepping in the "Algorithms" chapter.
[edit2]
Ah, I seem to need stronger glasses: the "Internals" manual says:
"When the user says to continue, GDB will restore the original instruction, single-step, re-insert the trap, and continue on."
Single-stepping over a breakpoint, however, is still an open question.
Single-stepping over a breakpoint, however, is still an open question.
It's done exactly the same way as continue, except for the last step ("and continue on"). That is:
1. The process stops. GDB "looks around" and discovers that $ip points to one of its breakpoints.
2. The user issues a continue, next, step or stepi command.
3. GDB restores the original instruction (i.e. removes the breakpoint).
4. GDB single-steps the process.
5. GDB re-inserts the breakpoint.
6. GDB continues (this is done for continue but not for next, step or stepi).
7. For stepi, control returns to the user (we are already at the next instruction due to step 4 above). For next, GDB continues single-stepping until it reaches a source line that is not the same line we were on at step 1 above.
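Conceptually, the stub/server side of steps 3-5 amounts to the little routine sketched below. This is not lifted from any real stub: write_mem and single_step stand in for whatever target-access primitives your stub provides, and TRAP for the target's breakpoint opcode.

# Hypothetical sketch of a stub stepping over a software breakpoint.
TRAP = b"\xcc"  # placeholder: the target's trap opcode (INT3 shown for x86)

def step_over_breakpoint(addr, saved_insn, write_mem, single_step):
    write_mem(addr, saved_insn)   # restore the original instruction bytes
    single_step()                 # execute exactly that one instruction
    write_mem(addr, TRAP)         # put the trap back so the breakpoint persists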
I'm using gdb to solve a binary bomb as part of a class. Every time we set the bomb off we lose points. I already have a breakpoint on the function that explodes the bomb, but I have accidentally stepped past it a couple of times already and exploded the bomb. Is there a way to make gdb exit at a specific point rather than just break?
You can attach commands to breakpoints. After you set the breakpoint, e.g. breakpoint #1, do this:
commands 1
quit
end
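Putting it together, you could keep the whole guard in a command file that you source at the start of each session. This is only a sketch: bomb-guard.gdb is an arbitrary file name, and explode_bomb stands in for whatever the detonating function is actually called in your binary.

# bomb-guard.gdb -- source this after loading the bomb binary
break explode_bomb
commands
  silent
  echo Reached the explode function, quitting before it can run\n
  quit
end

Then start gdb on the bomb and run source bomb-guard.gdb before you continue.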
I'm running on a Marvell Monahans PXA320 under Green Hills INTEGRITY 5.0.10, using MULTI 4.2.3 for development and an RTSERV connection for debugging. I've been asked to take over a menu-driven program.
I've noticed that if I halt the program (to modify breakpoints) and then resume it, the task gets into an infinite loop displaying the menu in the debugger I/O tab. After each instance of the menu that gets printed, it says that I have made an illegal selection. So some input is apparently being fed to the task as if I had typed it in (and that input obviously corresponds to an invalid menu selection), but I cannot see on the display what this phantom input is.
Is there anything I can do to prevent a halt / resume from screwing up the I/O?
Thanks,
Dave
My first guess is that getc() (or your equivalent) is returning -1. This can happen if your input buffers overflowed as a result of halting the application. I/O keeps flowing while the application is halted...
It is generally not a good idea to halt the program when debugging with INTEGRITY. You're better off attaching the debugger to a single thread (something idle or infrequently used), setting an "any-task" breakpoint in that thread, then resuming the thread. (Don't close the window! Doing so will delete the breakpoint.) You'll see a "DebugBrk" status on the thread that hits the breakpoint -- then you can double-click it and attach to that specific thread.
Following that alternate procedure should (hopefully!) prevent the I/O error.