I have a double-free bug. I am able to reproduce it in a debug build in which AddressSanitizer (ASan) detects it, but when I run under GDB, ASan kills the GDB session.
I found this AddressSanitizer wiki page with instructions on how to keep the GDB session alive:
https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger
but when I do:
(gdb) break __asan::ReportGenericError
at the beginning of the session, the GDB state still disappears after the problem is detected:
(gdb) bt
No stack.
"the GDB state still disappears after the problem is detected"
There are several possible reasons for this:
1. Somehow you didn't set the breakpoint correctly.
2. It's actually a child process that is dying.
3. Somehow the thread in which the error is detected is not attached by GDB.
To eliminate 1, use catch syscall exit_group (and possibly also catch syscall exit) -- this way GDB is sure to stop before the process disappears.
For 2, the AddressSanitizer message should indicate the thread id in which the error is detected, and that id should match one of the threads in GDB's info threads output.
For 3, we'd need to understand more about how that thread was created.
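Putting point 1 into practice, a minimal sketch of the whole setup might look like this (./myprog is a placeholder for your binary; which of the breakpoint or catchpoints fires first depends on how the sanitizer terminates the process):

$ gdb ./myprog
(gdb) break __asan::ReportGenericError
(gdb) catch syscall exit_group
(gdb) catch syscall exit
(gdb) run
...
(gdb) bt
(gdb) info threads

Whichever stop you land on, bt at that point should show how the process is being torn down, and info threads lets you verify (for point 3) that the reporting thread is one GDB has actually attached.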
Related
I have a web daemon and a request that makes it fail with SIGSEGV. So I start the daemon, attach with GDB, continue, send the request, and get this:
$ gdb attach -p 630066
(gdb) c
Continuing.
Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.
(gdb)
How can I make GDB print a stack trace before the application is killed? The application does not have subprocesses, just threads.
Thanks.
Your GDB session indicates that you have not attached to all threads of the multithreaded process, and some other thread (one you didn't attach to) ran into SIGSEGV and terminated the entire process.
Another (somewhat unlikely) possibility is that you are using a very old version of GDB, one which still has this bug in it (the bug was fixed in 2009).
When using gdb -p NNNN you need to be careful to specify the correct process id. pgrep daemon-name or ps aux | grep daemon-name should give you a good idea of which process to attach to.
Just enter backtrace or bt right in the gdb shell after getting SIGSEGV.
To explore the stack trace of each separate thread, start with info threads, then choose the thread you need, for example thread 3, and then type bt to see the stack trace for that thread.
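As a rough end-to-end sketch (the daemon name, PID, and thread number below are placeholders):

$ pgrep daemon-name
630066
$ gdb -p 630066
(gdb) continue
Continuing.

Program received signal SIGSEGV, Segmentation fault.
(gdb) info threads
(gdb) thread 3
(gdb) bt

Once every thread really is attached, the "Program terminated ... The program no longer exists." message should be replaced by a normal stop inside GDB, at which point bt works.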
I am trying to let my program run continuously under GDB.
Currently I have a bash script which starts GDB with my program and when it crashes it prints the backtrace and starts GDB again (endless loop).
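Roughly, the script does something like this (./myprog stands in for my program; the exact flags may differ):

while true; do
    # run under gdb, print a backtrace when it stops, then restart
    gdb -batch -ex run -ex bt ./myprog
done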
Now I have added a signal handler to my program which kills specific threads when the handler gets a signal from them. I can keep GDB from stopping by doing this:
handle SIGSEGV nostop
But this leads to the problem that I do not get a GDB backtrace, which I would like to print automatically without stopping the program (or at least while continuing automatically).
Any help would be appreciated!
Continue to use handle to suppress ordinary stops from SEGV. Then set a catchpoint that does what you want:
(gdb) catch signal SIGSEGV
(gdb) commands
> silent # this bit is optional
> bt
> continue
> end
This will print a backtrace on SIGSEGV but not otherwise interfere with normal operation. You may also want handle SIGSEGV noprint.
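If you want the same behaviour unattended (for example from a wrapper script rather than an interactive session), one way, sketched here with a hypothetical file name, is to put the commands into a GDB command file and run it in batch mode:

# segv-bt.gdb (hypothetical name)
handle SIGSEGV nostop noprint
catch signal SIGSEGV
commands
  silent
  bt
  continue
end
run

$ gdb -batch -x segv-bt.gdb ./myprog

The backtraces are printed each time a SIGSEGV is caught and the program keeps running; whether the signal is also delivered to your own handler is governed by the pass/nopass setting of handle.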
I will try to be as specific as I can, but so far I have worded this problem so poorly that Google failed to return any useful results (hence my question here).
I am attaching gdb to a multi-threaded C++ server process. All I can say is that strange things have been happening while trying to do the usual set-breakpoint-break-investigate.
First, while waiting for the breakpoint to be hit (in 'Continuing' mode), I suddenly got back the (gdb) prompt with the message:
Continuing.
[Thread 0x54d5b940 (LWP 28503) exited]
[New Thread 0x54d5b940 (LWP 28726)]
Cannot get thread event message: debugger service failed
Second, also while waiting for the breakpoint to be hit, I'm suddenly told the program has received SIGSEGV and - back to the (gdb) prompt - backtrace tells me the segfault happened in pthread_cancel(). Note the process under investigation does not normally segfault.
I clearly lack enough information about how gdb works to even begin guessing what is happening. Am I doing anything wrong? The steps I take are the same each time:
gdb attach
break 'MyFunction()'
continue
Thoughts? Thanks.
I fought with similar gdb issues for a while. My case involved spawning lots of threads that executed a few functions and then exited.
It appears that if a thread exits too fast and there are lots of these happening, gdb sometimes cannot keep up, and when it fails, it fails with style, as in it crashes :) Judging by the error message, I think it tries to attach to a thread that is already done.
I have seen this issue from gdb 6.5 through 7.6 and it is still happening. I did not try older versions.
My advice is to look for this use case or something similar. Once I changed my design to have a single thread serving a queue of requests, gdb worked flawlessly.
Design-wise, it is healthier to have already-created threads that digest actions than to always spawn new threads.
Still, the same code debugs without a problem in Visual Studio, so I do have to say that is a small disappointment to me with regard to gdb.
I use Eclipse, and looking at the GDB traces (usually enabled by default) will give you a better hint of where GDB fails. One of the buttons on the console shows you the GDB trace.
I am trying to catch floating point exception (SIGFPE) in GDB, not pass it to the process and continue debugging onwards.
I have given gdb this:
handle SIGFPE stop nopass
When a SIGFPE occurs GDB stops at the correct place. The problem is that I can't continue debugging, and I don't know how I can.
I have tried giving GDB
continue
or
signal 0
but it still hangs on the offending line and refuses to continue.
Is there a way to continue debugging after receiving a signal?
I am using GDB 7.5.1, which I have compiled myself and I have also tried with GDB 7.4, which comes with my 12.04 Ubuntu distribution. Both have the same behaviour.
The problem is that when you continue a program after a synchronous signal, it reexecutes the same instruction that caused the signal, which means you'll just get the signal again. If you tell it to ignore the signal (either directly or via gdb) it will go into a tight loop reexecuting that instruction repeatedly.
If you want to actually continue the program somewhere after the instruction that causes the signal, you need to manually set the $pc register to the next (or some other) instruction before issuing the continue command.
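For example, a session along these lines (the addresses and symbol names are made up) inspects the faulting instruction and then skips past it:

(gdb) x/2i $pc
=> 0x4005d4 <divide+20>:  idiv   %esi
   0x4005d6 <divide+22>:  mov    %eax,-0x4(%rbp)
(gdb) set $pc = 0x4005d6
(gdb) continue

Bear in mind that skipping the instruction this way leaves whatever result it was supposed to produce undefined, so it is mainly useful for inspecting the rest of the program's behaviour, not for fixing the computation.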
I have a program which crashes due to a segmentation fault. The core file is produced.
Running gdb on the core file gives me the following:
HP gdb 6.1 for HP Itanium (32 or 64 bit) and target HP-UX 11iv2 and 11iv3.
Core was generated by `gcpf1fwcApp'.
Program terminated with signal 6, Aborted.
I used the command
thread apply all bt
When I check the stack trace, I see the error in the main thread, which is in a waiting state.
However, when I run the same program under GDB, I get a completely different error in the stack trace, which seems more correct than the core dump.
The program has 31 threads.
Why do I get this kind of difference?
It is possible that you are simply looking at the wrong thread.
Try thread apply all where, and see if one of the threads is in fact abort()ing.
When debugging a live process, GDB will stop when a thread receives SIGABRT, and so will likely show you the relevant thread.
When debugging a core (post-mortem), GDB doesn't know which thread is relevant, and so shows them to you in whichever order the OS saved them into the core. Linux kernels save the thread which caused the process to die first, so on Linux GDB shows the relevant thread first when loading the core. I am guessing that HP-UX does not do that, and so GDB shows you a "random" thread instead.
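To locate the relevant thread by hand on such a system, something along these lines should work (the thread number 7 is only an illustration); in the thread apply all bt output, look for a thread whose top frames contain raise() and abort(), then switch to it:

$ gdb gcpf1fwcApp core
(gdb) thread apply all bt
(gdb) thread 7
(gdb) bt full

The thread whose backtrace goes through abort() is the one that received signal 6, and that is the place to start looking, rather than the waiting main thread.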