I'm now dealing with a new project in which several structs are used for variable transfer in parallel programming. Can anybody give a hint on how to use gdb to watch the process of a struct changing, step by step? Thanks in advance :)
You can try GDB's watch command. However, as stated by this link:
Warning: In multi-thread programs, watchpoints have only limited usefulness. With the current watchpoint implementation, GDB can only watch the value of an expression in a single thread.
If that's a deal breaker for you, I guess you will have to instrument your code to manually track changes.
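For example, here is a minimal sketch of that kind of manual instrumentation (the struct and field names are hypothetical): funnel every write through one function, which gives you a single place to log from, and an ordinary breakpoint set there fires no matter which thread performs the write.

#include <stdio.h>

struct shared_state {     /* hypothetical struct passed between threads */
    int counter;
};

/* every writer calls this instead of assigning to the field directly */
void set_counter(struct shared_state *s, int value)
{
    fprintf(stderr, "counter: %d -> %d\n", s->counter, value);
    s->counter = value;
}

In a gdb session, b set_counter then catches every change, from any thread.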
Is it possible to intentionally crash the kernel at a specific point during the course of its execution (by inserting some C statement there, or otherwise) and then collect the core file for analysis with the normal gdb program?
Can somebody please share the steps and what needs to be done?
Is it possible to intentionally crash the kernel
Sure: just insert a call to panic() in desired place.
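A minimal sketch, with suspect_condition standing in for whatever situation you want to catch:

/* somewhere in the kernel code path under investigation */
if (suspect_condition)
    panic("debug: reached the point under investigation");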
The easiest way to do this is to use User-Mode Linux. The kernel becomes just a regular program, and you can execute it under GDB the usual way, setting breakpoints, looking at variables, etc.
If you need to do "bare metal" execution, you should probably start here or here.
I compile a C++ project that is not too large, about a 6 MB binary. When I debug it and want to print some variable, I type the first two characters and press Tab to complete. Then gdb reads symbols forever, freezing. How can I solve this problem? Thank you!
I type the first two characters and press Tab to complete. Then gdb reads symbols forever, freezing. How can I solve this problem
Doctor, it hurts when I do that.
Well, don't do that.
Seriously, if you have a very large binary (it's unclear whether your 6MB is the size with debug info or without), and lots of variables, then GDB will necessarily have to spend some time searching for variables matching your two initial characters.
That said,
we routinely debug binaries that are 2GB in size or larger, and
have spent quite a lot of effort improving the GDB experience with such binaries
So perhaps your first step should be to take the latest release of GDB, and see if the problem has already been solved for you.
Update:
My binary is 6MB with debug info
That's not large at all. Certainly it should not cause more than a few seconds delay to list all variables in such a binary.
My GDB version is "GNU gdb (GDB) 7.6.2"
That's the latest release.
It's probably safe to conclude that there is a bug in GDB.
If you can construct a minimal test case that shows the problem, then your best bet is to report it as a bug in http://sourceware.org/bugzilla.
If you can't, you'll have to debug GDB yourself. A reasonable place to start is running strace -p <pid-of-hung-gdb> and gdb -p <pid-of-hung-gdb>; (gdb) where to find out exactly where GDB is getting stuck.
If you can update to GDB 7.10, your tab-completion freeze-ups should disappear.
GDB 7.10 (as of August 2015) contains a feature to address this problem.
set max-completions
Set the maximum number of candidates to be considered during completion. The default value is 200. This limit allows GDB to avoid generating large completion lists, the computation of which can cause the debugger to become temporarily unresponsive.
[The above quote is taken from the patch shown on the gitweb site for gdb]
The GDB news release lists the feature as: "The number of candidates to be considered during completion can now be limited."
Updating to GDB 7.10 solved the problem for me. The default value of 200 for max-completions was sufficient. I did not customize it.
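For anyone who does want to adjust it, a minimal session sketch:

(gdb) set max-completions 50
(gdb) show max-completions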
Is there a way for my code to be instrumented to insert a breakpoint, or a watch on a memory location, that will be honored by gdb? (And presumably have no effect when gdb is not attached.)
I know how to do such things as gdb commands within the gdb session, but for certain types of debugging it would be really handy to do it "programmatically", if you know what I mean. For example, the bug only happens under a particular circumstance: not on any of the first 11,024 times the crashing routine is called, or the first 43,028,503 times that memory location is modified. So setting a simple breakpoint on the routine or a watchpoint on the variable is not helpful; it's all false positives.
I'm concerned mostly about Linux, but curious whether similar solutions exist for OS X (or Windows, though obviously not with gdb).
For breakpoints, on x86 you can break at any location with
asm("int3");
Unfortunately, I don't know how to detect whether you're running inside gdb (doing that outside a debugger will kill your program with a SIGTRAP signal).
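For what it's worth, a common approach on Linux (an addition to the answer above, not part of it) is to read the TracerPid field from /proc/self/status, which is nonzero while a tracer such as gdb is attached. A rough sketch:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* returns nonzero if some process (e.g. gdb) is currently tracing us */
static int debugger_attached(void)
{
    char line[256];
    int attached = 0;
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return 0;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "TracerPid:", 10) == 0) {
            attached = atoi(line + 10) != 0;
            break;
        }
    }
    fclose(f);
    return attached;
}

With that, if (debugger_attached()) asm("int3"); breaks only under a debugger.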
GDB supports a scripting language that can help in situations like this. For instance, you can attach a bit of custom script to a breakpoint that may decide to continue because some condition hasn't been met.
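A rough sketch of such a script, using a hypothetical function name handle_message and the count from the question:

set $hits = 0
break handle_message
commands
  silent
  set $hits = $hits + 1
  if $hits <= 11024
    continue
  end
end

This stays silent and continues for the first 11,024 hits, then drops you at the prompt.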
Not directly related to your question, but it may be helpful: have you looked at the backtrace and backtrace_symbols calls in execinfo.h?
http://linux.die.net/man/3/backtrace
This can help you log a backtrace whenever your condition is met. It isn't gdb, so you can't break and step through your program, but it may be useful as a quick diagnostic.
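A minimal sketch of that kind of logging (glibc assumed; link with -rdynamic to get readable symbol names):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* dump the current call stack to stderr */
static void log_backtrace(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);
    char **symbols = backtrace_symbols(frames, n);
    if (symbols) {
        for (int i = 0; i < n; i++)
            fprintf(stderr, "%s\n", symbols[i]);
        free(symbols);
    }
}

Call log_backtrace() wherever your rare condition is detected.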
A commonly used approach is to use a dummy function with a non-obvious name. Then you can augment your .gdbinit, or use whatever other technique, to always break on that symbol name.
Trivial dummy function:
void my_dummy_breakpoint_loc(void) {}
Code under test (can be an assert-like macro):
if (rare_condition)
my_dummy_breakpoint_loc();
gdb session (obvious, eh?):
b my_dummy_breakpoint_loc
It is important to make sure that "my_dummy_breakpoint_loc" is not optimized away by the compiler for this technique to work.
In the fanciest of cases, the actual assembler instruction that calls my_dummy_breakpoint_loc can be replaced by nops and enabled on a site-by-site basis through a bit of code self-modification at run time. This technique is used by Linux kernel development instrumentation, to name one example.
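One way to keep the function from being optimized away, assuming GCC or Clang attribute syntax:

/* noinline plus an empty volatile asm keeps both the call and the body alive */
__attribute__((noinline)) void my_dummy_breakpoint_loc(void)
{
    asm volatile("");
}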
I've stumbled onto a very interesting issue where a function (has to deal with the Windows clipboard) in my app only works properly when a breakpoint is hit inside the function. This got me wondering, what exactly does the debugger do (VS2008, C++) when it hits a breakpoint?
Without directly answering your question (since I suspect the debugger's internal workings may not really be the problem), I'll offer two possible reasons this might occur that I've seen before:
First, your program does pause when it hits a breakpoint, and often that delay is enough time for something to happen (perhaps in another thread or another process) that has to happen before your function will work. One easy way to verify this is to add a pause for a few seconds beforehand and run the program normally. If that works, you'll have to look for a more reliable way of finding the problem.
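A minimal sketch of that experiment on Windows, with a hypothetical function name standing in for the code under suspicion:

#include <windows.h>

void copy_to_clipboard(void)   /* hypothetical function that misbehaves */
{
    Sleep(3000);  /* temporary: mimic the pause a breakpoint would cause */
    /* ... the original clipboard code ... */
}

If the function suddenly works with the Sleep in place, timing is your culprit.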
Second, Visual Studio has historically (I'm not certain about 2008) over-allocated memory when running in debug mode. So, for example, if you have an array of int[10] allocated, it should, by rights, get 40 bytes of memory, but Visual Studio might give it 44 or more, presumably in case you have an out-of-bounds error. Of course, if you DO have an out-of-bounds error, this over-allocation might make it appear to be working anyway.
Typically, for software breakpoints, the debugger places an interrupt instruction at the location you set the breakpoint at. This transfers control of the program to the debugger's interrupt handler, and from there you're in a world where the debugger can decide what to do (present you with a command prompt, print the stack and continue, what have you.)
On a related note, "This works in the debugger but not when I run without a breakpoint" suggests to me that you have a race condition. So if your app is multithreaded, consider examining your locking discipline.
It might be a timing / thread synchronization issue. Do you do any multimedia or multithreading stuff in your program?
The reason your app only works properly when a breakpoint is hit might be that you have some watches with side effects still in your watch list from previous debugging sessions. When you hit the break point, the watch is executed and your program behaves differently.
http://en.wikipedia.org/wiki/Debugger
A debugger essentially allows you to step through your source code and examine how the code is working. If you set a breakpoint and run in debug mode, your code will pause at that breakpoint and allow you to step into the code. This has some distinct advantages. First, you can see what the state of your variables is in memory. Second, it allows you to make sure your code is doing what you expect it to do without having to add a whole ton of print statements. And third, it lets you make sure the logic is working the way you expect it to work.
Edit: A debugger is one of the more valuable tools in my development toolbox, and I'd recommend that you learn and understand how to use the tool to improve your development process.
I'd recommend reading the Wikipedia article for more information.
The debugger just halts execution of your program when it hits a breakpoint. If your program is working okay when it hits the breakpoint, but doesn't work without the breakpoint, that would indicate to me that you have a race condition or another threading issue in your code. The breakpoint is stopping the execution of your code, perhaps allowing another process to complete normally?
It stops the program counter for your process (the one you are debugging), shows the current values of your variables, and uses those values to evaluate expressions.
You must take into account, that if you edit some variable value when you hit a breakpoint, you are altering your process state, so it may behave differently.
Debugging is possible because the compiler inserts debugging information (such as function names, variable names, etc.) into your executable. It's possible not to include this information.
Debuggers sometimes change the way the program behaves in order to work properly.
I'm not sure about Visual Studio, but in Eclipse, for example, Java classes are not loaded the same way when run inside the IDE as when run outside of it.
You may also be having a race condition and the debugger stops one of the threads so when you continue the program flow it's at the right conditions.
More info on the program might help.
On Windows there is another difference caused by the debugger. When your program is launched by the debugger, Windows will use a different memory manager (heap manager to be exact) for your program. Instead of the default heap manager your program will now get the debug heap manager, which differs in the following points:
it initializes allocated memory to a pattern (0xCDCDCDCD comes to mind but I could be wrong)
it fills freed memory with another pattern
it overallocates heap allocations (like a previous answer mentioned)
All in all it changes the memory use patterns of your program, so if you have a memory-trashing bug somewhere, its behavior might change.
Two useful tricks:
Use PageHeap to catch memory accesses beyond the end of allocated blocks
Build using the /RTCsu (older Visual C++ compilers: /GZ) switch. This will initialize the memory for all your local variables to a nonzero bit pattern and will also throw a runtime error when an uninitialized local variable is accessed.
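For illustration, a minimal example of the kind of bug /RTCsu reports at run time (hypothetical code, not from the original answer):

int bad(void)
{
    int x;           /* never initialized; /RTCsu fills it with a pattern */
    return x + 1;    /* the run-time check fires on this read */
}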
I'm debugging an application and it segfaults at a position where it is almost impossible to determine which of the many instances causes the segfault.
I figured that if I'm able to resolve the position at which the object is created, I will know which instance is causing the problem and resolve the bug.
To be able to retrieve this information, gdb (or some other application) would of course have to override the default malloc/new/new[] implementations, but instrumenting my application with this would be alright.
One could argue that I could just put a breakpoint on the line before the one that segfaults and step into the object from there, but the problem is that this is a central message dispatcher loop which handles a lot of different messages and I'm not able to set a breakpoint condition in such a way as to trap my misbehaving object.
So, at the point where the segfault occurs, you have the object, but you don't know which of many pieces of code that create such objects created it, right?
I'd instrument all of those object-creation bits and have them log the address of each object created to a file, along with the file and line number (the __LINE__ and __FILE__ pre-defined macros can help make this easy).
Then run the app under the debugger, let it trap the segfault and look the address of the offending object up in your log to find out where it was created. Then peel the next layer of the onion.
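A minimal sketch of that instrumentation (the log file name and helper are hypothetical):

#include <stdio.h>

/* invoke right after each creation site */
#define LOG_CREATION(ptr) log_creation((ptr), __FILE__, __LINE__)

static void log_creation(void *p, const char *file, int line)
{
    FILE *log = fopen("creations.log", "a");
    if (log) {
        fprintf(log, "%p created at %s:%d\n", p, file, line);
        fclose(log);
    }
}

After the crash, grep the log for the address of the offending object.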
Have you tried using a memory debugging library (e.g. dmalloc)? Many of these already instrument new, etc. and record where an allocation is made. Some are easier to access from gdb than others, though.
This product has a memory debugging feature that does what you want: http://www.allinea.com/index.php?page=48
I would first try using the backtrace command in gdb when the segfault occurs. If that does not give me a good clue about what is going on, I would next try to use valgrind to check if there are any memory leaks occurring. These two steps are usually sufficient, in my experience, to narrow down and find the problem spot in most of the usual cases.
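A rough sketch of both steps, with myapp as a placeholder program name:

(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt

and, outside gdb:

valgrind --leak-check=full ./myapp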