gdb - generate-core-file for remote target?

I'm debugging with the Codesourcery version of gdb for ARM (i.e. arm-none-eabi-gdb) and attempting to generate a corefile for later inspection. OpenOCD is my GDB target. All gdb tells me when I run 'gcore' or 'generate-core-file' is "Can't create corefile". Any suggestions? In general is it possible to do a core dump with a remote target?

It doesn't seem possible yet, but there is some promising discussion on the GDB mailing list
here and here. As an alternative maybe you could try the following?
dump memory filename.bin start_addr end_addr
restore filename.bin binary start_addr
where you fill in start_addr and end_addr appropriately. You'd have to save registers by hand.
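For example, a minimal sketch for an ARM target whose RAM is assumed to start at 0x20000000 and span 64 KB (placeholder values; substitute your device's memory map), logging the registers to a text file alongside the memory image:
(gdb) dump memory ram.bin 0x20000000 0x20010000
(gdb) set logging file regs.txt
(gdb) set logging on
(gdb) info registers
(gdb) set logging off
Later you can reload the RAM image into a fresh session with restore ram.bin binary 0x20000000 and compare against the saved register listing by hand.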

Related

gdb script: How can a script determine if it is invoked under `gdb` or `gdb-multiarch`?

I'd like to define a command which does X under gdb-multiarch, but prints out a helpful message when run under normal gdb. How can my script determine which of the two it's run under?
Why? When I start gdb-multiarch, I can bind to a qemu-arm session. When I try that in gdb, I get bizarre errors. It's easy to forget and run gdb (and not -multiarch), and I want my bind-to-qemu command to tell me "This must be run under gdb-multiarch".
Your question presumes that there is some difference between gdb and gdb-multiarch, but there doesn't have to be any such difference.
Presumably, on the OS you are using, gdb and gdb-multiarch are configured differently, with gdb supporting only the native architecture while gdb-multiarch supports cross-architecture debugging.
Presumably what you actually want to detect is whether the target architecture you need (arm?) is supported by the current gdb binary.
In the bind-to-qemu user-defined function, you can try to set architecture arm.
If that errors out, the rest of bind-to-qemu should not execute.
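A minimal sketch of such a guard, assuming bind-to-qemu lives in a sourced script such as .gdbinit (where # comment lines are allowed) and that qemu-arm is listening on port 1234 (the port is a placeholder):
define bind-to-qemu
  # Under a gdb built without ARM support this line raises an error,
  # which stops the rest of the command body from running.
  set architecture arm
  target remote localhost:1234
end
Under plain native-only gdb the set architecture line fails, so target remote is never reached and you get an immediate hint that you started the wrong binary.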

loading libc's symbols into gdb

I'm debugging a binary against an older libc version than my system's (my system has libc-2.31, the binary runs against 2.24). I run gdb with LD_LIBRARY_PATH set and it works like a charm, but I cannot load any symbols.
I downloaded the closest symbols file from http://archive.ubuntu.com/ubuntu/pool/main/g/glibc/libc6-dbg_2.23-0ubuntu11.2_amd64.deb, extracted it and after loading the binary into gdb, I execute:
add-symbol-file <path_to_libc-2.27.so from the deb package>
The file was loaded successfully, but the addresses are incorrect. For example, trying to examine a symbol such as 'main_arena' (x/40gx &main_arena) produces the following error:
0x3ebc40 <main_arena>: Cannot access memory at address 0x3ebc40
Obviously this address is too low, so I guess it's only an offset. What is my problem? Maybe I need to find the exact debug file that matches my version (2.24)? Because there isn't one.
Thanks!
I execute gdb with the LD_LIBRARY_PATH and it works like a charm,
It is not supposed to work, and if it happens to work today, it will likely break tomorrow.
The easiest solution is to debug inside a VM or a docker container with the desired version of GLIBC installed.
If you don't want to do that, see this answer on how to properly set things up for multiple GLIBCs on a single host.
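As a sketch of the container route, assuming the binary needs glibc 2.24 (Debian 9 "stretch" ships 2.24, though such an old image may need its apt sources pointed at archive.debian.org; adjust the image to whatever glibc version you actually need):
docker run --rm -it -v "$PWD":/work -w /work debian:9 bash
# inside the container, whose glibc matches the binary:
apt-get update && apt-get install -y gdb libc6-dbg
gdb ./your_binary
With the matching libc6-dbg installed in the container, gdb picks up the libc debug symbols at the right addresses without any add-symbol-file gymnastics.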

gdb/solaris: When attaching to a process, symbols not being loaded

I'm using gcc 4.9.2 & gdb 7.2 in Solaris 10 on sparc. The following was tested after compiling/linking with -g, -ggdb, and -ggdb3.
When I attach to a process:
~ gdb
/snip/
(gdb) attach pid_goes_here
... it is not loading symbolic information. I started with NetBeans, which starts gdb without specifying the executable name until after the attach occurs, but I've eliminated NetBeans as the cause.
I can force it to load the symbol table under NetBeans if I do one of the following:
Attach to the process, then in the debugger console do one of the following:
(gdb) detach
(gdb) file /path/to/file
(gdb) attach the_pid_goes_here
or
(gdb) file /path/to/file
(gdb) sharedlibrary .
I want to know if there's a more automatic way I can force this behavior. So far googling has turned up zilch.
I want to know if there's a more automatic way I can force this behavior.
It looks like a bug.
Are you sure that the main executable symbols are loaded? This bug says that attach pid without giving the binary doesn't work on Solaris at all.
In any case, it's supposed to work automatically, so your best bet to make it work better is probably to file a bug and wait for it to be fixed (or send a patch to fix it yourself :-)
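If you just want something more automatic than retyping those commands while you wait on a fix, a user-defined command along these lines might do (a sketch for a sourced .gdbinit; attach-with-syms is a made-up name, and the executable path still has to be supplied by hand):
define attach-with-syms
  # $arg0 = /path/to/file, $arg1 = pid
  file $arg0
  attach $arg1
  sharedlibrary
end
Then attach-with-syms /path/to/file the_pid does the file/attach/sharedlibrary dance in one step.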

How do I make use of core files to find application problems in C/C++?

I don't have any idea how I could find the root cause of a C/C++ Linux application's problem using the core files. I understand that core files are generated when something unexpected happens to an application. But I don't know where to start. Can anybody give me a jump start?
Learn to analyze core dumps from here; this is where I learnt from. Yes, it uses GDB.
And this
"gdb" is the main tool you can use to analyze Linux core dumps. Here are several good tutorials:
RMS's GDB tutorial
http://www.gentoo.org/proj/en/qa/backtraces.xml
Howto: Debug Crashed Linux Applications Like a Pro
Some generic help:
Install gdb using :
yum install gdb
gdb                  start GDB, with no debugging files
gdb program          begin debugging program
gdb program core     debug coredump core produced by program
gdb --help           describe command line options
1- First of all find the directory where the corefile is generated.
2- Then use "ls -ltr" command in the directory to find the latest generated corefile.
3- To load the corefile use
gdb <binary> <path-to-corefile>
This will load the corefile.
4- Then you can get the information using "bt" command. For detailed backtrace use "bt full".
5- To print the variables use "print variable-name" or "p variable-name".
6- To get any help on gdb use "help" option or use "apropos search-topic"
7- Use "frame frame-number" to go to desired frame number.
8- Use "up n" and "down n" commands to select frame n frames up and select frame n frames down respectively.
9- To stop gdb use "quit" or "q".
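Putting those steps together, a minimal sketch of a session (myprog, the frame number, and the variable name are placeholders; on many systems the core file is simply called core or core.<pid>):
gcc -g -o myprog myprog.c     # build with debug info
ulimit -c unlimited           # allow core dumps in this shell
./myprog                      # crashes and leaves a core file
gdb ./myprog core
(gdb) bt full
(gdb) frame 2
(gdb) print some_variable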

analysis of core file

I'm using Red Hat Linux 3; can someone explain how it is possible that I am able to analyze,
with gdb, a core dump generated on Red Hat Linux 5?
Not that I'm complaining :) but I need to be sure this will always work...?
EDIT: the shared libraries are the same version, so no worries about that; they are placed on shared storage so they can be accessed from both Red Hat 5 and Red Hat 3.
Thanks.
You can try the following GDB commands to open a core file:
gdb
(gdb) exec-file <path to executable>
(gdb) set solib-absolute-prefix <path to shared library>
(gdb) core-file <path to core file>
The reason you can't rely on it is that every process uses libc or other system shared libraries, which will certainly have changed from Red Hat 3 to Red Hat 5. So the instruction addresses and the number of instructions in native functions will differ, and that is where the debugger gets goofed up and can possibly show you wrong data to analyze. So it's always good to analyze the core on the same platform, or, if you can, copy all the required shared libraries to the other machine and set the path through set solib-absolute-prefix.
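For example, a sketch of that copy-the-libraries approach (host name and paths are placeholders), mirroring the other machine's libraries under a local prefix:
mkdir -p /tmp/rh5-root/lib64
scp rh5-box:/lib64/libc.so.6 /tmp/rh5-root/lib64/   # repeat for the other libraries the program uses
gdb ./myprog
(gdb) set solib-absolute-prefix /tmp/rh5-root
(gdb) core-file ./core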
In my experience, analysing a core file generated on another system does not work, because the standard library (and other libraries your program probably uses) will typically be different, so the addresses of the functions are different and you cannot even get a sensible backtrace.
Don't do it, because even if it works sometimes, you cannot rely on it.
You can always run gdb -c /path/to/corefile /path/to/program_that_crashed. However, if program_that_crashed has no debug info (i.e. was not compiled and linked with the -g gcc/ld flag) the coredump is not that useful unless you're a hard-core debugging expert ;-)
Note that the generation of corefiles can be disabled (and it's very likely that it is disabled by default on most distros). See man ulimit. Call ulimit -c to see the limit of core files, "0" means disabled. Try ulimit -c unlimited in this case. If a size limit is imposed the coredump will not exceed the limit size, thus maybe cutting off valuable information.
Also, the path where a coredump is generated depends on /proc/sys/kernel/core_pattern. Use cat /proc/sys/kernel/core_pattern to query the current pattern. It's actually a path, and if it doesn't start with / then the file will be generated in the current working directory of the process. And if cat /proc/sys/kernel/core_uses_pid returns "1" then the coredump will have the PID of the crashed process as its file extension. You can also set both values, e.g. echo -n /tmp/core > /proc/sys/kernel/core_pattern will force all coredumps to be generated in /tmp.
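In shell form, the checks from the previous paragraph look roughly like this (run the echo as root):
ulimit -c                 # "0" means core dumps are disabled
ulimit -c unlimited       # enable them for this shell
cat /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/core_uses_pid
echo -n /tmp/core > /proc/sys/kernel/core_pattern   # send all core dumps to /tmp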
I understand the question as:
how is it possible that I am able to analyse a core that was produced under one version of an OS under another version of that OS?
Just because you are lucky (even that is questionable). There are a lot of things that can go wrong by trying to do so:
the tool chains (gcc, gdb, etc.) will be of different versions
the shared libraries will be of different versions
so no, you shouldn't rely on that.
You have asked a similar question and accepted an answer, of course by yourself, here: Analyzing core file of shared object
Once you load the core file you can get the stack trace, find the last function call, and check the code for the reason for the crash.
There is a small tutorial here to get started with.
EDIT:
Assuming you want to know how to analyse a core file using gdb on Linux, as your question is a little unclear.
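In the simplest case, that boils down to something like this (paths are placeholders):
gdb /path/to/binary /path/to/corefile
(gdb) bt
(gdb) frame 0
(gdb) list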