How to interpret backtrace produced by stackshot into a readable format? - c++

I've started using the Stackshot utility in order to debug my executable and got the log below. The command I used is: sudo /usr/libexec/stackshot -i -p [pid]
However, I don't see an easy way to interpret the function backtrace into a readable format of function_name / line_number (similar to the equivalent command in lldb/gdb).
Thread ID: 0x130
Thread state: 0x9 == TH_WAIT|TH_UNINT
Thread priority: 0x52/0x52
Thread wait_event: 0xffffff8026cd77a8
Kernel Stack:
Return Frame
0xffffff80002d8452 0xffffff80005e3e45
0xffffff800023695b 0xffffff80005e4072
0xffffff8000235d5c 0xffffff800042f6e4
0xffffff800022d18e 0xffffff80002d7607
0xffffff80005e3e45 0x00000008feedface
0xffffff80005e4072 0x00000000ffffff80
0xffffff800042f6e4 0x0000000000000000
0xffffff80002d7607 0x0000000900000000

How to get thread names from a core dump?

I have a multi-threaded program, where each thread is assigned a unique name with help of pthread_setname_np. If I attach to a running program with gdb, I can see their names:
(gdb) i th
Id Target Id Frame
* 1 Thread 0x7f883dba1200 (LWP 3867757) "main" __pthread_clockjoin_ex (threadid=140222870320896, thread_return=0x0, clockid=<optimized out>, abstime=<optimized out>, block=<optimized out>)
at pthread_join_common.c:145
2 Thread 0x7f883477f700 (LWP 3867759) "log_aggregator" 0x00007f883dc8026f in __GI___clock_nanosleep (clock_id=clock_id#entry=0, flags=flags#entry=0, req=0x7f883477b220, rem=0x7f883477b220)
at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
Note the main and log_aggregator names. However, if I make a core dump and then load this dump with gdb, I don't see the names:
$ gcore 3867757
...
Saved corefile core.3867757
...
$ gdb -c core.3867757 my_program
GNU gdb (Ubuntu 10.2-0ubuntu1~20.04~1) 10.2
...
(gdb) i th
Id Target Id Frame
* 1 Thread 0x7f883dba1200 (LWP 3867757) __pthread_clockjoin_ex (threadid=140222870320896, thread_return=0x0, clockid=<optimized out>, abstime=<optimized out>, block=<optimized out>)
at pthread_join_common.c:145
2 Thread 0x7f883477f700 (LWP 3867759) 0x00007f883dc8026f in __GI___clock_nanosleep (clock_id=clock_id#entry=0, flags=flags#entry=0, req=0x7f883477b220, rem=0x7f883477b220)
at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
How do I get thread names in this case? Or should I create the core file differently so that they end up in it? I examined the coredump_filter option, but did not find anything related. I also discovered this discussion in the narkive mailing list archive, but it does not provide an answer, only a suggestion that this might be impossible.
My next question would be: is there any way to get the contents of /proc/self/task/[tid]/comm into the core file?
The /proc/.../comm is not a real file, it's a representation of kernel data structures related to this task generated on-demand (when you try to read that "file").
The current->comm field that you are looking for is not mentioned in the binfmt_elf.c file (which is where other core dumping code is), so it's probably not saved.
In fact this is easily confirmed by running a test program generating a random string, calling pthread_setname_np, dumping core and then running strings -a core | grep $string, which produces no output.
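A minimal sketch of such a test, assuming Linux/glibc (compile with g++ -g -pthread and make sure core dumps are enabled, e.g. ulimit -c unlimited; the names are illustrative):
#include <pthread.h>
#include <cstdio>
#include <cstdlib>
#include <string>

int main() {
    // Build a name that is unlikely to appear anywhere else in the core image
    std::string name = "tn" + std::to_string(std::rand() % 1000000);
    // Linux limits thread names to 15 characters + NUL
    pthread_setname_np(pthread_self(), name.c_str());
    std::printf("grep for: %s\n", name.c_str());
    std::abort();  // dumps core; then run: strings -a core | grep <name>
}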
There is no fundamental reason the current->comm field cannot be saved in the core, but currently it isn't.
P.S. Usually looking at the output of thread apply all where is enough to distinguish different threads, so a unique thread name is redundant.

Output thread IDs as seen by debugger

I'm developing a multi-threaded C++ application using GCC 4.4.5 and GDB 7.2.
At the moment, I have four threads. Each one interacts with a CAN bus in one form or another, either reading, writing, polling or handling messages.
In order to determine which thread is doing what, I have decided to add the thread IDs to log messages.
In my logging functions, I have the following code:
// This is for outputting debug messages (assumes <iostream>, <string>, <thread>)
void logDebug(string msg, thread::id threadId = thread::id()) {
#ifdef _DEBUG
    threadState.outputLock->lock();
    // A default-constructed thread::id means "no id supplied"
    if (threadId != thread::id())
        cout << "[Thread #" << threadId << "] ";
    // The rest of the output
    threadState.outputLock->unlock();
#endif
}
This is the (debug) output from the application:
[Thread #3085296768] [DEBUG] [Mon Jun 17 10:18:45 2019] CAN frame was empty or no message on bus...
----------
And this is the what GDB is telling me:
Thread #3 7575 [core: 0] (Suspended: Breakpoint)
----
Why is the debugger giving me different thread IDs/numbers from the ones the application prints, and is there a way for the application to output the same information that the debugger shows?
The expected behaviour is that the thread IDs are identical.
EDIT:
I forgot to add some possibly important information.
I'm cross-compiling to an embedded device powered by a POWERPC chip, running a derivative of Debian Wheezy.
You can get the thread id from your application with the following system call: syscall(SYS_gettid)
From there you can set the thread name by either:
writing the name directly to /proc/PID/task/TID/comm
using the pthread function int pthread_setname_np(pthread_t thread, const char *name)
Then in GDB you can easily match the given thread name, its Linux TID and the GDB thread ID with the info threads command.
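For illustration, a minimal sketch combining both steps, assuming Linux/glibc and C++11 threads (the thread name "can_reader" is just an example):
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <iostream>
#include <thread>

void canReader() {
    // Kernel thread id: the LWP number GDB prints in "info threads"
    pid_t tid = static_cast<pid_t>(syscall(SYS_gettid));
    // Optional: name the thread so it also shows up in GDB and /proc
    pthread_setname_np(pthread_self(), "can_reader");
    std::cout << "[LWP " << tid << "] reader started" << std::endl;
}

int main() {
    std::thread t(canReader);
    t.join();
}
Logging that LWP value alongside (or instead of) the std::thread::id gives output that can be matched directly against the LWP column of info threads.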
Hope this helps.

Why would a process hang within RtlExitUserProcess/LdrpDrainWorkQueue?

To debug a locked-file problem, we're calling Sysinternals' Handle64.exe 4.11 from a .NET process (via Process.Start with asynchronous output redirection). The calling process hangs in Process.WaitForExit because the Handle64 process doesn't exit (for more than two hours).
We took a dump of the corresponding Handle64 process and checked it in the Visual Studio 2017 debugger. It shows two threads ("Main Thread" and "ntdll.dll!TppWorkerThread").
Main thread's call stack:
ntdll.dll!NtWaitForSingleObject () Unknown
ntdll.dll!LdrpDrainWorkQueue() Unknown
ntdll.dll!RtlExitUserProcess() Unknown
kernel32.dll!ExitProcessImplementation () Unknown
handle64.exe!000000014000664c() Unknown
handle64.exe!00000001400082a5() Unknown
kernel32.dll!BaseThreadInitThunk () Unknown
ntdll.dll!RtlUserThreadStart () Unknown
Worker thread's call stack:
ntdll.dll!NtWaitForSingleObject() Unknown
ntdll.dll!LdrpDrainWorkQueue() Unknown
ntdll.dll!LdrpInitializeThread() Unknown
ntdll.dll!_LdrpInitialize() Unknown
ntdll.dll!LdrInitializeThunk() Unknown
My question is: Why would a process hang in LdrpDrainWorkQueue? From https://stackoverflow.com/a/42789684/62838, I gather that this is the Windows 10 parallel loader at work, but why would it get stuck while exiting the process? Can this be caused by how we invoke Handle64 from another process? I.e., are we doing something wrong or is this rather a bug in Handle64?
How long did you wait?
According to this analysis,
The worker thread idle timeout is set to 30 seconds. Programs which
execute in less than 30 seconds will appear to hang due to
ntdll!TppWorkerThread waiting for the idle timeout before the process
terminates.
I would recommend trying to set the registry key specified in that article to disable the parallel loader and see if this resolves the issue.
Parent Key: HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\handle64.exe
Value Name: MaxLoaderThreads
Type: DWORD
Value: 1 to disable
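For example, the value could be created from an elevated command prompt roughly like this (a sketch; verify the key path, and remove the value again once the test is done):
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\handle64.exe" /v MaxLoaderThreads /t REG_DWORD /d 1 /f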

How to run record instruction-history and function-call-history in GDB?

(EDIT: per the first answer below, the current "trick" seems to be using an Atom processor. But I hope some gdb guru can answer whether this is a fundamental limitation, or whether adding support for other processors is on the roadmap?)
Reverse execution seems to be working in my environment: I can reverse-continue, see a plausible record log, and move around within it:
(gdb) start
...Temporary breakpoint 5 at 0x8048460: file bang.cpp, line 13.
Starting program: /home/thomasg/temp/./bang
Temporary breakpoint 5, main () at bang.cpp:13
13 f(1000);
(gdb) record
(gdb) continue
Continuing.
Breakpoint 3, f (d=900) at bang.cpp:5
5 if(d) {
(gdb) info record
Active record target: record-full
Record mode:
Lowest recorded instruction number is 1.
Highest recorded instruction number is 1005.
Log contains 1005 instructions.
Max logged instructions is 200000.
(gdb) reverse-continue
Continuing.
Breakpoint 3, f (d=901) at bang.cpp:5
5 if(d) {
(gdb) record goto end
Go forward to insn number 1005
#0 f (d=900) at bang.cpp:5
5 if(d) {
However the instruction and function histories aren't available:
(gdb) record instruction-history
You can't do that when your target is `record-full'
(gdb) record function-call-history
You can't do that when your target is `record-full'
And the only target type available is full, the other documented type "btrace" fails with "Target does not support branch tracing."
So quite possibly it just isn't supported for this target, but as it's a mainstream modern one (gdb 7.6.1-ubuntu, on amd64 Linux Mint "Petra" running an "Intel(R) Core(TM) i5-3570") I'm hoping that I've overlooked a crucial step or config?
It seems that there is no other solution except a CPU that supports it.
More precisely, your kernel has to support Intel Processor Tracing (Intel PT). This can be checked in Linux with:
grep intel_pt /proc/cpuinfo
See also: https://unix.stackexchange.com/questions/43539/what-do-the-flags-in-proc-cpuinfo-mean
The commands only work in record btrace mode.
In the GDB source commit beab5d9, it is nat/linux-btrace.c:kernel_supports_pt that checks if we can enter btrace. The following checks are carried out:
check if /sys/bus/event_source/devices/intel_pt/type exists and read the type
do syscall(SYS_perf_event_open, &attr, child, -1, -1, 0) with the type that was read, and see if it returns >= 0. TODO: why not use the C wrapper?
The first check fails for me: the file does not exist.
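A rough user-space approximation of those two checks (a sketch, not GDB's actual code; even on PT-capable hardware it may fail for unrelated reasons such as perf_event_paranoid settings):
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/perf_event.h>
#include <cstdio>
#include <cstring>

int main() {
    // Check 1: this file only exists if the kernel registered the intel_pt PMU
    std::FILE* f = std::fopen("/sys/bus/event_source/devices/intel_pt/type", "r");
    if (!f) { std::puts("no intel_pt PMU type file"); return 1; }
    int type = 0;
    if (std::fscanf(f, "%d", &type) != 1) { std::fclose(f); return 1; }
    std::fclose(f);

    // Check 2: try to open a PT event on the current process
    perf_event_attr attr;
    std::memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = type;
    long fd = syscall(SYS_perf_event_open, &attr, 0 /* self */, -1 /* any cpu */, -1, 0);
    if (fd < 0) { std::puts("perf_event_open failed"); return 1; }
    std::puts("intel_pt event opened");
    close(static_cast<int>(fd));
    return 0;
}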
Kernel side
cd into the kernel 4.1 source and:
git grep '"intel_pt"'
we find arch/x86/kernel/cpu/perf_event_intel_pt.c which sets up that file. In particular, it does:
if (!test_cpu_cap(&boot_cpu_data, X86_FEATURE_INTEL_PT))
        goto fail;
so intel_pt is a pre-requisite.
How I've found kernel_supports_pt
First grep for:
git grep 'Target does not support branch tracing.'
which leads us to btrace.c:btrace_enable. After a quick debug with:
gdb -q -ex start -ex 'b btrace_enable' -ex c --args /home/ciro/git/binutils-gdb/install/bin/gdb --batch -ex start -ex 'record btrace' ./hello_world.out
VirtualBox does not support it either: Extract execution log from gdb record in a VirtualBox VM
Intel SDE
Intel SDE 7.21 already has this CPU feature, checked with:
./sde64 -- cpuid | grep 'Intel processor trace'
But I'm not sure if the Linux kernel can be run on it: https://superuser.com/questions/950992/how-to-run-the-linux-kernel-on-intel-software-development-emulator-sde
Other GDB methods
More generic questions, with less efficient software solutions:
call graph: List of all function calls made in an application
instruction trace: Displaying each assembly instruction executed in gdb
At least a partial answer (for the "am I doing it wrong" aspect) - from gdb-7.6.50.20140108/gdb/NEWS
* A new record target "record-btrace" has been added. The new target
uses hardware support to record the control-flow of a process. It
does not support replaying the execution, but it implements the
below new commands for investigating the recorded execution log.
This new recording method can be enabled using:
record btrace
The "record-btrace" target is only available on Intel Atom processors
and requires a Linux kernel 2.6.32 or later.
* Two new commands have been added for record/replay to give information
about the recorded execution without having to replay the execution.
The commands are only supported by "record btrace".
record instruction-history prints the execution history at
instruction granularity
record function-call-history prints the execution history at
function granularity
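On hardware where btrace is available, the session would then look roughly like this (a sketch; exact output differs):
(gdb) start
(gdb) record btrace
(gdb) continue
Continuing.
...
(gdb) record function-call-history
(gdb) record instruction-history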
It's not often that I envy the owner of an Atom processor ;-)
I'll edit the question to refocus upon the question of workarounds or plans for future support.

Efficient variable watching in C/C++

I'm currently writing a multi-threaded, highly efficient and scalable algorithm. Because I have to guess a parameter for the code and I'm not sure how the calculation performs on a specific data set, I would like to watch a variable. The test only works with a real-world, huge data set. It is possible to analyze the collected data after profiling. Imagine the following simple code example (real code can contain multiple watch points):
// function gets called by loops of multiple threads
void payload(data_t* data, double threshold) {
    double value = calc(data);
    // here I want to watch the value
    if (value < threshold) {
        doSomething(data);
    } else {
        doSomethingElse(data);
    }
}
I thought about the following approaches:
1. Using cout or other system outputs
2. Using a binary output (file, network)
3. Setting a breakpoint via gdb/lldb
4. Using variable watching + logging via gdb/lldb
I'm not happy with these options because: to use 1 and 2 I have to change the code, but this is a debugging/evaluating task. Furthermore, 1 requires locking and 1+2 require I/O operations, which heavily slow down the entire code and make testing with real data nearly impossible. 3 is also too slow. To use 4, I have to know the variable address, which isn't global, and because threads get created by a dynamic scheduler, finding it would require breaking + stepping for each thread.
So my conclusion is that I need a profiler/debugger that works at the machine-code level and dumps/logs/watches the variable without double->string conversion and is highly efficient. To sum up in other words: I would like to profile the internal state of my algorithm without heavy slow-down and without deep modification. Does anybody know a tool that is able to do this?
OK, this took some time, but now I'm able to present a solution for my problem: tracepoints. Instead of breaking the program every time, they are more lightweight and (ideally) don't change performance/timing too much. They do not require code changes. Here is an explanation of how to use them with gdb:
Make sure you compiled your program with debugging symbols (using the -g flag). Now, start the gdb server and provide a network port (e.g. 10000) and the program arguments:
gdbserver :10000 ./program --parameters you --want --to use
Now, switch to a second console and start gdb (program parameters are not required here):
gdb ./program
All following commands are entered in the gdb command line interface. So let's connect to the server:
target remote :10000
After you get the connection confirmation, use trace or ftrace to set a tracepoint at a specific source location (try ftrace first, it should be faster but doesn't work on all platforms):
trace source.c:127
This should create tracepoint #1. Now you can set up an action for this tracepoint. Here I want to collect the data from myVariable:
action 1
collect myVariable
end
If you expect much data or want to use the data later (after a restart), you can set a binary trace file:
tsave trace.bin
Now, start tracing and run the program:
tstart
continue
You can wait for program exit or interrupt your program using CTRL-C (still on the gdb console, not on the server side). Continue by telling gdb that you want to stop tracing:
tstop
Now we come to the tricky part, and I'm not really happy with the following code because it's really slow:
set pagination off
set logging file trace.txt
tfind start
while ($trace_frame != -1)
set logging on
printf "%f\n", myVariable
set logging off
tfind
end
This dumps all variable data to a text file. You can add some filtering or preparation here. Now you're done and you can exit gdb. This will also shut down the server:
quit
For detailed documentation, especially the explanation of filtering and more advanced tracepoint placement, see: http://sourceware.org/gdb/onlinedocs/gdb/Tracepoints.html
To isolate trace-file writing from your program's execution, you can use cgroups or another network-connected computer. When using another computer, you have to add the host to the port information (e.g. 192.168.1.37:10000). To load a binary trace file later, just start gdb as shown above (skip the server) and change the target:
gdb ./program
target tfile trace.bin
You can set a hardware watchpoint using the gdb debugger. For example, if you have a variable
bool b;
and you want to be notified every time its value changes (by any thread),
you would declare a watchpoint like this:
(gdb) watch *(bool*)0x7fffffffe344
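The raw address used above can be taken from gdb itself while the variable is in scope, for example (the concrete address will of course differ):
(gdb) print &b
$1 = (bool *) 0x7fffffffe344
Since gdb disables address space randomization by default, the stack address stays stable across runs started under the debugger, which is why hard-coding it in the watch expression works.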
example:
root#comp:~# gdb prog
GNU gdb (GDB) 7.5-ubuntu
Copyright ...
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /dist/Debug/GNU-Linux-x86/cppapp_socket5_ipaddresses...done.
(gdb) watch *(bool*)0x7fffffffe344
Hardware watchpoint 1: *(bool*)0x7fffffffe344
(gdb) start
Temporary breakpoint 2 at 0x40079f: file main.cpp, line 26.
Starting program: /dist/Debug/GNU-Linux-x86/cppapp_socket5_ipaddresses
Hardware watchpoint 1: *(bool*)0x7fffffffe344
Old value = true
New value = false
main () at main.cpp:50
50 if (strcmp(mask, "255.0.0.0") != 0) {
(gdb) c
Continuing.
Hardware watchpoint 1: *(bool*)0x7fffffffe344
Old value = false
New value = true
main () at main.cpp:41
41 if (ifa ->ifa_addr->sa_family == AF_INET) { // check it is IP4
(gdb) c
Continuing.
mask:255.255.255.0
eth0 IP Address 192.168.1.5
[Inferior 1 (process 18146) exited normally]
(gdb) q