I have a couple of questions regarding core dumps. I am using gdb on Windows under Cygwin.
What is the location of the core dump file? Is it the a.exe.stackdump file? (That is the only file generated after a crash.) I read on other forums that the core dump file is named "core", but I don't see any file with that name.
What is the command for opening and examining a core dump file?
You need to configure Cygwin to produce core dumps by including
error_start=x:\path\to\dumper.exe
in your CYGWIN environment variable (see here in section "dumper" for more information). If you didn't do this, you will only get a stack trace, which may still help you diagnose the problem.
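For example, setting it from a Cygwin shell might look like this (the dumper path is just the placeholder from above; point it at your real dumper.exe and keep any other CYGWIN options you already use):
export CYGWIN="error_start=x:\path\to\dumper.exe"
Setting CYGWIN as a Windows-level environment variable instead makes it apply to every Cygwin process, not just the ones started from that shell.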
Start gdb as follows to attach it to a core dump file:
gdb myexecutable --core=mycorefile
You can now use the usual gdb commands to print a stacktrace, examine the values of variables, and so on.
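For example (these are standard gdb commands; myvar is just a placeholder):
(gdb) bt             # print a backtrace of the crashing thread
(gdb) frame 2        # select stack frame 2 from that backtrace
(gdb) info locals    # show the local variables of the selected frame
(gdb) print myvar    # print the value of a particular variable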
Yes, Cygwin creates a.exe.stackdump files by default. You need to configure it to create cores as well (Martin's answer covers that).
A simple tutorial on core dump debugging can be found here.
I'm using gdb to debug an intermittent crash. I can open the core dump, and see that the crash occurred inside a shared library. (I can see the function names and the file name of the library in the backtrace, though I don't have the source code for the library.)
Meanwhile, the library has been updated, so that file name now holds a different version of the library than the one that was loaded when the core dump was generated.
I can run disassemble to see the machine code for the function where the crash occurred - but would I see the code from the version in use when the crash occurred, or will gdb load the code from the library file on disk, thereby picking a mismatching version?
would I see the code from the version in use when the crash occurred, or will gdb load the code from the library file on disk, thereby picking a mismatching version?
The latter (mismatched version).
By default, the executable and other read-only mappings are not saved in the core, in order to save space; their contents are already available on disk.
On Linux you can ask your system to save read-only mappings with:
echo 0x7 > /proc/self/coredump_filter
See man 5 core.
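Since the value of coredump_filter is preserved across fork and execve, a typical pattern is to set it in the shell and then start the program from there (a rough sketch; ./myprogram is a placeholder):
# 0x7 enables anonymous private, anonymous shared and
# file-backed private mappings in the dump
echo 0x7 > /proc/self/coredump_filter
./myprogram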
I have the problem that has been discussed here several times: the application runs when started directly from the shell, but does not run when I try to start it within the debugger in the same shell. Running it inside GDB produces a "cannot open shared object" error.
I've read all the related posts and tried all the suggestions:
I set LD_LIBRARY_PATH manually and verified that my application runs and that ldd -r passes without errors
I set solib-search-path and solib-absolute-prefix inside GDB to the value of LD_LIBRARY_PATH and to '/', respectively. All the paths are absolute
I ran GDB with strace to see where GDB looks for the required shared libraries and found that it ignores the list of directories from LD_LIBRARY_PATH / solib-search-path
What can I do?
It's GDB 7.11.1 on RHEL 7.
I have the problem
If you do, you've done a very poor job of describing what your problem actually is.
I ran GDB with strace to see where GDB looks for the required shared libraries and found that it ignores the list of directories
GDB doesn't look for any shared libraries until it gets notified by the runtime loader that some shared library has been added to the process.
If you didn't attach to a running inferior, then GDB wouldn't look for any libraries, and you've likely arrived at a completely wrong conclusion.
P.S. GDB doesn't use LD_LIBRARY_PATH at all, and setting either solib-search-path or solib-absolute-prefix is usually not needed either (unless you are doing remote debugging, or analyzing a core that came from a different machine).
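One quick way to see what the debugged program will actually receive is gdb's own environment commands (the path below is only an example):
(gdb) show environment LD_LIBRARY_PATH               # what gdb will pass to the inferior
(gdb) set environment LD_LIBRARY_PATH /opt/mylibs    # override it for the next "run"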
Update:
the application runs when started directly from shell and does not run when I try to start it within the debugger
In 99.9% of the cases I've seen, this happens because ~/.bashrc or some such resets LD_LIBRARY_PATH inappropriately. GDB starts a new shell to run the application in, and if that (non-interactive) shell resets the environment, the application may behave differently inside vs. outside of GDB.
The correct solution is to make ~/.bashrc (or whatever startup file is appropriate for your shell) do nothing when running non-interactively.
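A minimal sketch of such a guard, placed near the top of ~/.bashrc (assuming bash):
# stop here for non-interactive shells, so the environment changes made
# further down never affect the shell gdb uses to start the program
case $- in
    *i*) ;;        # interactive: keep processing the rest of the file
    *)   return ;; # non-interactive: do nothing more
esac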
I am writing some C++ code that runs as part of Apache.
I have a segfault. Where would I find the core dump so I can debug this?
If there is no core dump, how do I tell Apache to create one (is there a debug flag?)
Whether a core is dumped or not is controlled via ulimit -c. It is not up to the application to decide whether to dump core (the core is generated by the OS, not by the app, which has already perished at that point).
A corefile should be located in the directory from which the application was started.
A core can / will be dumped whether the application is a debug version or not. (Of course, a core dump of a non-debug version is somewhat less helpful due to the lack of debugging symbols in the process image.)
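A rough sketch of what that looks like for Apache specifically (CoreDumpDirectory is a standard httpd directive; the directory path is a placeholder):
# in the shell (or init script) that starts httpd: lift the core size limit
ulimit -c unlimited
apachectl restart

# in httpd.conf: the directory must exist and be writable by the Apache user
CoreDumpDirectory /tmp/apache-cores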
I have a core dump generated by two applications: /usr/bin/python and /usr/bin/app1.
I know the dump can be analyzed by
gdb /path/to/app /path/to/core
but is there a way to include both applications in the arguments?
I did try gdb '/usr/bin/python /usr/bin/app1' core.xxx but that doesn't seem right.
Any suggestions?
I think you cannot achieve what you want with a single invocation of gdb. But you could run gdb twice, in different terminal windows. I did that more than once, and it works quite well (except of course that your own brain could be slightly overloaded).
A gdb process can debug only a single program, with a single debugged process or (for post-mortem debugging) a single core file.
And a given core file is produced by the abnormal termination of a single process (not several), so I don't understand your question.
Apparently, you have a crash in some execution of python, probably augmented by your faulty C code. I suggest getting a debuggable variant of Python, perhaps by installing the python3-all-dbg package or something similar, and then using gdb on it. Of course, compile the C code you plug into Python with debugging enabled. Perhaps you violated some invariant of the Python garbage collector.
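A hedged sketch of that last step (the module name, compiler flags and file names are placeholders; adapt them to your actual build):
# compile the C extension with debug info and without optimization
# (mymodule.c is a placeholder; use python-config for Python 2)
gcc -g -O0 -shared -fPIC $(python3-config --includes) mymodule.c -o mymodule.so
# then open the core against the interpreter that actually produced it
gdb /usr/bin/python core.xxx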
What are the 'best practices' when it comes to debugging core dumps using GDB?
Currently, I am facing a problem:
The release version of my application is compiled without the '-g' compiler flag.
The debug version of my application (compiled with '-g') is archived (along with the source code, and a copy of the release binary).
Recently, when a user gave me a core dump, I tried debugging it using
gdb --core=./core.pid ./my_app_debug-bin
The core was created by my_app_release-bin. There seems to be some kind of mismatch between the core file and the binary.
On the other hand, if I try
gdb --core=./core.pid ./my_app_release-bin
the core matches but I am unable to get source code line numbers (although I get the function names).
Is this what is normally practised? I feel I am missing something here.
It sounds like there are other differences between your release and debug builds than simply the absence/presence of the -g flag. Assuming that's the case, there is nothing you can do right now, but you can adjust your build to handle this better:
Here's what we do at my place of work.
Include the -g flag when building the release version.
Archive that version.
Run strip --strip-unneeded on the binary before shipping it to customers.
Now, when we get a crash we can use the archived version with symbols to do debugging.
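In practice the workflow looks roughly like this (file names and compiler flags are only illustrative):
# build the release with optimization and symbols
g++ -g -O2 -o my_app_release-bin src/*.cpp
# archive the unstripped binary alongside the sources
cp my_app_release-bin my_app_release-bin.unstripped
# strip the copy that actually ships to customers
strip --strip-unneeded my_app_release-bin
# later, when a core file arrives, debug it with the archived binary
gdb my_app_release-bin.unstripped --core=core.pid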
One thing to note is that if your release version includes optimization, debugging may be difficult even with symbols. For example, the optimizer can reorder your code so even though the debugger will say you crashed on line N, you can't assume that the code actually executed line N-1.
You need to do some additional work to create stripped binaries that you can still debug from cores with full symbols. The best description I could find is here.
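The usual mechanism is gdb's separate debug files; a rough sketch (file names are placeholders):
objcopy --only-keep-debug my_app my_app.debug     # extract the debug info
strip --strip-unneeded my_app                     # strip the binary you ship
objcopy --add-gnu-debuglink=my_app.debug my_app   # record the link to the debug file
# gdb picks up my_app.debug automatically when it sits next to my_app, or from
# a directory configured with: set debug-file-directory /path/to/debug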
No, you aren't missing anything. Debug and release are simply different binaries, so a core file from the release build doesn't match the debug binary. You have to look at machine code to get anything out of the release core dump.
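In that case you are limited to machine-level inspection, for example (standard gdb commands):
(gdb) bt                # function names from the symbol table, but no line numbers
(gdb) info registers    # CPU state at the moment of the crash
(gdb) x/10i $pc         # disassemble the instructions around the crash point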
You probably have to ask your user how the crash happened and collect additional log information, or whatever else your app produces.