Analyzing core dump generated by multiple applications with gdb

I have a core dump generated by 2 applications -> /usr/bin/python and /usr/bin/app1.
I know the dump can be analyzed by
gdb /path/to/app /path/to/core
but is there a way to include both applications in the argument?
I did try gdb '/usr/bin/python /usr/bin/app1' core.xxx but that doesn't seem right.
Any suggestions?

I think you cannot achieve what you want with a single invocation of gdb, but you could run gdb twice, in different terminal windows. I have done that more than once, and it works quite well (except, of course, that your own brain can get slightly overloaded).
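For example, in two separate terminals (reusing the core name from the question):
gdb /usr/bin/python core.xxx   # in the first terminal
gdb /usr/bin/app1 core.xxx     # in the second terminal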
A gdb process can debug only a single program, with a single debugged process or (for post-mortem debugging) a single core file.
And a given core file is produced by the abnormal termination of a single process (not several), so I don't understand your question.
Apparently you have a crash in some execution of Python, probably triggered by your faulty C code. I suggest getting a debuggable variant of Python, perhaps by installing the python3-all-dbg package or something similar, then using gdb on it. Of course, compile the C code you plug into Python with debugging enabled. Perhaps you violated some invariant of the Python garbage collector.
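A rough sketch of that setup on a Debian-like system (the package, module, and script names are illustrative, and the debug interpreter's name varies by distribution):
sudo apt-get install python3-all-dbg
gcc -g -O0 -shared -fPIC $(python3-config --includes) -o mymodule.so mymodule.c   # requires python3-dev
gdb --args python3-dbg your_script.py   # reproduce the crash under the debug interpreter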

Related

How to trace the called functions and executed statements of a C/C++ program?

I want to know how I can trace the execution of a C/C++ program at run time. I am working on an already existing code base which is new to me; it is a large project. I want to run the project in a common use case and see which functions or methods get called at run time. In other words, I want to run the C/C++ program and be able to trace which functions and which statements get executed in this specific use case.
I am familiar with strace and ltrace, which trace the execution of the program's system calls and library functions respectively. I am looking for something similar, but instead I want to see which functions of the project itself are executed during the running time of the program. Is there a specialized tool for that?
Alternatively, can I run the program in gdb without setting any breakpoints that would block it, so that it runs through in real time, configure gdb to step into all the functions of the project but not into any library or system calls, and have gdb echo all the functions it steps into and all the statements it executes into a log file?
I have the source code, and I can compile it with -g -O0.
The -pg switch to gcc, together with gprof, will help, but you'll need the source of your program so you can recompile it.
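For instance, a minimal sketch (file and program names are illustrative):
gcc -pg -g -O0 -o myprog main.c   # instrument the build for gprof
./myprog                          # a normal run writes gmon.out
gprof ./myprog gmon.out > profile.txt
Note that gprof gives per-function call counts and timings rather than a statement-level trace.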

Can I set a breakpoint with gdb as I'm calling it?

I'm in the middle of a large debugging project, and every time I start running gdb I have to type b 253.
It would be really nice if I could set my run script so that gdb loads with that breakpoint already set.
To be more explicit: Here are the contents of run.csh:
gdb --args path/to/program arg1 arg2
Can I modify this so that, once I run it, I can just type r and the program breaks on line 253?
Yes. Read the GDB documentation.
You can extend GDB. You can have Canned Sequences of Commands.
You can define or use extensions written in Python or in Guile.
(You might need to recompile GDB itself from source, since sadly not all distributed gdb builds are configured with Guile support.)
You can have your own .gdbinit file (read about startup files and command files). By the way, you might prefer to break on function names rather than on line numbers there. Read more about specifying locations.
Actually, many large projects have some .gdbinit (perhaps generated) in their source repository.
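For example, a minimal canned-command file (the name cmds.gdb is illustrative) containing just:
break 253
could be loaded with gdb -x cmds.gdb --args path/to/program arg1 arg2 (or the same line placed in a .gdbinit), after which typing r starts the program and stops at line 253.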
Be sure to use a recent version of GDB. The latest one (in March 2018) is GDB 8.1.

Recompiling code during program execution

If I have a series of C/C++ programs that I need to build using Make, would it mess up a running program if I made changes to the code and recompiled while one of the executables is running? Or is all the information loaded from the executable before runtime?
Thanks.
This depends entirely on whatever operating system you're using.
Linux is perfectly happy continuing to execute a program whose binary has been removed, and replaced with a new binary.
It is my understanding that Microsoft Windows is, on the other hand, rather grumpy in the same situation, and won't be happy if something like this is attempted.
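A small illustration of the Linux behaviour (the program name is hypothetical):
./myprog &    # start the current binary
make          # the linker removes the old ./myprog and writes a new one
# the process started above keeps executing the old, now-unlinked binary;
# only the next ./myprog invocation uses the freshly built code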
If I understand correctly: on Linux you can edit and rebuild the code while the program runs, and the already-running process will not change; it keeps executing the old binary.

libtool slowing down gdb

I have a large C++ program with a lot of templates which I want to debug. Unfortunately, gdb takes several minutes to read the symbols.
http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html contains lots of options for debugging.
Which options would you suggest to make gdb faster/more usable?
Update: It looks like the slowdown is caused by libtool. If gdb is launched via libtool --mode=execute it is slow. If gdb is launched directly as gdb .libs/foo it is reasonably fast. Any ideas why the libtool route is so much slower?
Update: Another suggestion was -fvisibility=hidden; see http://gcc.gnu.org/wiki/Visibility
Sometimes using -fdebug-types-section can make things a bit faster. It isn't guaranteed though.
Several minutes to load... I wonder how big this executable is. If I were desperate, I might try compiling only selected modules with debug info, or check whether it is a gdb bug. If the program is split into an executable and some shared libraries, and some parts don't change very often, you could also look into using the "gdb index" feature (see the manual) to speed up the loading of debug info for those modules.
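A sketch of the gdb-index recipe from the GDB manual (paths are illustrative; recent GDB releases also ship a gdb-add-index helper script that does the same thing):
gdb --batch -ex 'save gdb-index .' .libs/foo
objcopy --add-section .gdb_index=foo.gdb-index --set-section-flags .gdb_index=readonly .libs/foo .libs/foo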

Debugging in Linux using core dumps

What are the 'best practices' when it comes to debugging core dumps using GDB?
Currently, I am facing a problem:
The release version of my application is compiled without the '-g' compiler flag.
The debug version of my application (compiled with '-g') is archived (along with the source code, and a copy of the release binary).
Recently, when a user gave me a core dump, I tried debugging it using
gdb --core=./core.pid ./my_app_debug-bin
The core was created by my_app_release-bin. There seems to be some kind of mismatch between the core file and the binary.
On the other hand, if I try
gdb --core=./core.pid ./my_app_release-bin
the core matches but I am unable to get source code line numbers (although I get the function names).
Is this what is practised? Because I feel I am missing something here.
It sounds like there are other differences between your release and debug builds than simply the absence/presence of the -g flag. Assuming that's the case, there is nothing you can do right now, but you can adjust your build to handle this better:
Here's what we do at my place of work.
Include the -g flag when building the release version.
Archive that version.
Run strip --strip-unneeded on the binary before shipping it to customers.
Now, when we get a crash we can use the archived version with symbols to do debugging.
One thing to note is that if your release version includes optimization, debugging may be difficult even with symbols. For example, the optimizer can reorder your code so even though the debugger will say you crashed on line N, you can't assume that the code actually executed line N-1.
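A minimal sketch of that workflow, using the names from the question (the .sym suffix for the archived copy is just an illustration):
gcc -g -O2 -o my_app_release-bin main.c        # release build, but with symbols
cp my_app_release-bin my_app_release-bin.sym   # archive the unstripped copy
strip --strip-unneeded my_app_release-bin      # ship this stripped binary
gdb --core=./core.pid ./my_app_release-bin.sym # later, analyse a customer core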
You need to do some additional work to create stripped binaries whose debug information you can still use when debugging cores. The best description I could find is here.
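One common variant (a sketch using GNU binutils; not necessarily the approach in the linked description) keeps the debug info in a separate file and links it back with a debuglink:
objcopy --only-keep-debug my_app_release-bin my_app_release-bin.debug
objcopy --strip-debug my_app_release-bin
objcopy --add-gnu-debuglink=my_app_release-bin.debug my_app_release-bin
gdb --core=./core.pid ./my_app_release-bin     # gdb finds the .debug file automatically if it sits next to the binary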
No, you aren't missing anything. Debug and release are just different binaries, so a core file from the release build doesn't match the debug binary. You have to look at the machine code to get something out of the release core dump.
You probably have to ask your user how the crash happened and collect additional log information, or whatever your app produces.