How to resume the coredump file as a process? - gdb

Suppose the core dump, the executable, and the DWARF debug files are all available locally.
If the process can be reloaded locally, can the same be done on a remote machine?
I know how to use lldb -c coredump to load a core dump, but a core loaded this way seems different from a process lldb has attached to: the expr command is not fully supported.

How to resume the coredump file as a process?
That's impossible.
For one thing, all file descriptors from the old process have been closed. For another, the process and thread IDs may now belong to some other process.
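What you can do is inspect the core post-mortem. A typical session, with file names (./app, ./app.debug, core) assumed for illustration, looks like this:

```shell
# Post-mortem inspection (illustrative file names; not a resumed process):
#   gdb ./app core                    # load the core with its executable
#   (gdb) symbol-file ./app.debug    # attach split-out DWARF info, if any
#   (gdb) bt                          # backtrace of the crashing thread
# The lldb equivalent:
#   lldb ./app -c core
```

Both debuggers let you read memory, registers, and variables from the snapshot, but anything that requires executing code in the target (such as full expr support) is unavailable for the reasons above.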

Related

GDB: running cpp process debugging without symbols

A Linux system is running an application. The application is a C++ binary without any debug symbols. Somehow it is using 100% CPU, and I would like to find out why it runs indefinitely. If I stop it and replace the binary with one built with debug symbols, the issue may not be reproducible.
So I am running the same application, built with debug symbols, in another environment. There it runs fine.
Can I compare the two binaries (with and without debug symbols) and deduce what the problem is using GDB?
This application is a cpp binary without any debug symbols
You don't need any debug symbols to understand where it is spending time; you just need the application to not be fully stripped (most binaries aren't).
Use perf record -p $pid to collect CPU profile, then perf report to analyse it.
If the application is fully stripped, you can still use perf record to collect program counter values, then perf report --symfs ... to point it at an unstripped copy of the application. See the perf documentation for details.
Beware: both stripped and unstripped copies must be built with exactly the same build flags, or you'll get garbage. Best practice is to always save unstripped copy as part of the build process.
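The workflow above looks like this as a command sketch (the PID and the symfs directory layout are placeholders; perf must be installed):

```shell
# Sample the running process for ~30 seconds (1234 is a placeholder PID):
#   perf record -p 1234 -- sleep 30
# Break down CPU time by function:
#   perf report
# For a fully stripped binary, point perf at a directory tree holding the
# unstripped copy at the same relative path (e.g. ./symfs/usr/bin/app):
#   perf report --symfs ./symfs
```

A function that dominates the report is usually the body of the runaway loop; from there you can attach gdb and get a backtrace at the hot spot.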

Disassembling shared library - which version is shown?

I'm using gdb to debug an intermittent crash. I can open the core dump, and see that the crash occurred inside a shared library. (I can see the function names and the file name of the library in the backtrace, though I don't have the source code for the library.)
Meanwhile, the library has been updated, so that file name now holds a different version of the library than the one that was loaded when the core dump was generated.
I can run disassemble to see the machine code for the function where the crash occurred - but would I see the code from the version in use when the crash occurred, or will gdb load the code from the library file on disk, thereby picking a mismatching version?
would I see the code from the version in use when the crash occurred, or will gdb load the code from the library file on disk, thereby picking a mismatching version?
The latter (mismatched version).
By default, the executable and other read-only file-backed mappings are not saved in the core to save space: their contents are already available on disk.
On Linux you can ask your system to save read-only mappings with:
echo 0x7 > /proc/self/coredump_filter
See man 5 core.
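A sketch of inspecting and changing the filter (run it in the shell whose children you want to dump, since the filter is inherited across fork):

```shell
# Show the current filter; the kernel default is usually 0x33
# (anonymous private + anonymous shared + ELF headers + hugetlb private).
cat /proc/self/coredump_filter
# Enable bits 0-2: anonymous private, anonymous shared, and file-backed
# private mappings (the mapped-in executable and library text).
echo 0x7 > /proc/self/coredump_filter
# Read it back to confirm the new value.
cat /proc/self/coredump_filter
```

With bit 2 set, future cores from this shell's descendants include the code pages that were actually mapped, so disassemble shows the version in use at crash time even after the library file on disk has been replaced.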

GDB cannot open shared object

I have the problem that was discussed several times here: the application runs when started directly from shell and does not run when I try to start it within the debugger in the same shell. Running inside GDB produces the "cannot open shared object" error.
I've read all the posts and did all the suggestions:
I set LD_LIBRARY_PATH manually and verified that my application runs and ldd -r passes without errors
I set solib-search-path and solib-absolute-prefix inside GDB to the same value as LD_LIBRARY_PATH and '/' respectively. All the paths are absolute
I ran GDB with strace to see where GDB looks for the required shared libraries and found that it ignores the list of directories from LD_LIBRARY_PATH / solib-search-path
What can I do?
This is GDB 7.11.1 on RHEL 7.
I have the problem
If you do, you've done a very poor job of describing what your problem actually is.
I ran GDB with strace to see where GDB looks for the required shared libraries and found that it ignores the list of directories
GDB doesn't look for any shared libraries until it gets notified by the runtime loader that some shared library has been added to the process.
If you didn't attach to a running inferior, then GDB hasn't looked for any libraries yet, and you have likely arrived at a completely wrong conclusion.
P.S. GDB doesn't use LD_LIBRARY_PATH at all, and setting either solib-search-path or solib-absolute-prefix is usually not needed either (unless you are doing remote debugging, or analyzing a core that came from a different machine).
Update:
the application runs when started directly from shell and does not run when I try to start it within the debugger
In 99.9% of the cases I've seen, this happens because ~/.bashrc or some such resets LD_LIBRARY_PATH inappropriately. GDB starts a new shell to run the application in, and if that (non-interactive) shell resets the environment, the application may behave differently inside vs. outside of GDB.
The correct solution is to make ~/.bashrc (or whichever startup file your shell uses) do nothing when running non-interactively.
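A minimal sketch of such a guard, simulated here with a temporary file standing in for ~/.bashrc so the effect can be seen directly:

```shell
# A guard at the very top of ~/.bashrc makes non-interactive shells --
# including the one GDB spawns to run the inferior -- skip everything
# below it.  /tmp/bashrc_guard_demo stands in for ~/.bashrc here.
cat > /tmp/bashrc_guard_demo <<'EOF'
case $- in
  *i*) ;;       # interactive shell: continue with the rest of the file
  *) return ;;  # non-interactive: stop sourcing right here
esac
# Anything below the guard never runs non-interactively, e.g. a reset
# that would break the application when started under GDB:
export LD_LIBRARY_PATH=/broken/override
EOF
unset LD_LIBRARY_PATH
. /tmp/bashrc_guard_demo      # source it from this non-interactive shell
echo "LD_LIBRARY_PATH='${LD_LIBRARY_PATH-}'"   # prints LD_LIBRARY_PATH=''
```

Because $- contains no "i" in a non-interactive shell, the return fires and the harmful export below the guard is never reached, so the inferior started by GDB sees the same environment as your interactive shell.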

Callgrind does not see source in dynamically loaded SO

I'm attempting to run KCacheGrind on some results of callgrind. Basically the codebase is a plugin container that launches a shared object to run a specific function. Upon using Callgrind to profile this application, I can see the costs at the function level, but not at the source level.
I can see at the source level with the plugin container code, before it launches the SO, but I can't see any code contained in the SO that was launched.
I know I'm compiling with debug symbols enabled, but for some reason I cannot see the source code of the dynamically loaded SO.
I ran into this problem too. The way to fix it is to stop the host application from unloading the plugins before it exits. In my case I was trying to profile C modules for Lua, and Lua was unloading the modules when the VM exited normally. To fix this issue I had the script call os.exit() to force a shutdown.
Either disable plugin unloading in the plugin container, or create a plugin that allows you to force the application to exit (by calling _exit(0)).
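Concretely, the symptom and the workaround look like this as a command sketch (the host binary and plugin names are placeholders; your container's options will differ):

```shell
# Profile the plugin host as usual:
#   valgrind --tool=callgrind ./plugin_host --load my_plugin.so
# If the host dlclose()s the plugin before exiting, callgrind can no
# longer map the recorded addresses back to the plugin's source lines,
# so KCacheGrind shows only function-level costs for that SO.
# Workaround: exit without unloading -- e.g. have a plugin (or the host,
# behind a profiling flag) call _exit(0) instead of returning, so the
# SO is still mapped when callgrind writes its output.
```

The key point is that the shared object must still be mapped into the process when callgrind resolves addresses at exit; skipping the normal dlclose() path preserves that mapping.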

Analyzing core dump generated by multiple applications with gdb

I have a core dump generated by 2 applications -> /usr/bin/python and /usr/bin/app1.
I know the dump can be analyzed by
gdb /path/to/app /path/to/core
but is there a way to include both applications in the argument?
I did try gdb '/usr/bin/python /usr/bin/app1' core.xxx but that doesn't seem right.
Any suggestions?
I think you cannot achieve what you want with a single invocation of gdb. But you could run gdb twice, in different terminal windows. I did that more than once, and it works quite well (except of course that your own brain could be slightly overloaded).
A GDB process can debug only one program at a time, with one debugged process or (for post-mortem debugging) one core file.
And a given core file is produced by the abnormal termination of one single process (not several), so I don't understand your question.
Apparently, you have a crash in some execution of python, probably triggered by your faulty C code. I suggest installing a debuggable variant of Python, perhaps via the python3-all-dbg package or something similar, then using gdb on it. Of course, compile the C code you plug into Python with debugging enabled. Perhaps you violated some invariant of the Python garbage collector.
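The two-terminal approach looks like this (file names taken from the question; only one invocation will match the process that actually dumped core):

```shell
# Terminal 1: inspect the core against the Python interpreter
#   gdb /usr/bin/python core.xxx
# Terminal 2: inspect the same core against the other binary
#   gdb /usr/bin/app1 core.xxx
# Only the executable whose process actually crashed will typically
# produce a sensible backtrace; the mismatched one shows garbage frames.
# GDB also warns if the core does not match the given executable.
```

If the crash is in a C extension loaded into Python, the python invocation is the right one: the extension's .so appears in the backtrace like any other shared library.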