How to automatically generate a stacktrace when my program crashes - c++

I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace.
My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc).
I would like my program to be able to generate a stack trace when it crashes, and the next time the user runs it, it will ask them if it is ok to send the stack trace to me so I can track down the problem. I can handle sending the info to me, but I don't know how to generate the trace string. Any ideas?

For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual.
Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:
#include <stdio.h>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
void handler(int sig) {
  void *array[10];
  size_t size;

  // get void*'s for all entries on the stack
  size = backtrace(array, 10);

  // print out all the frames to stderr
  fprintf(stderr, "Error: signal %d:\n", sig);
  backtrace_symbols_fd(array, size, STDERR_FILENO);
  exit(1);
}

void baz() {
  int *foo = (int*)-1; // make a bad pointer
  printf("%d\n", *foo); // causes segfault
}

void bar() { baz(); }
void foo() { bar(); }

int main(int argc, char **argv) {
  signal(SIGSEGV, handler); // install our handler
  foo(); // this will call foo, bar, and baz. baz segfaults.
}
Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace:
$ gcc -g -rdynamic ./test.c -o test
Executing this gets you this output:
$ ./test
Error: signal 11:
./test(handler+0x19)[0x400911]
/lib64/tls/libc.so.6[0x3a9b92e380]
./test(baz+0x14)[0x400962]
./test(bar+0xe)[0x400983]
./test(foo+0xe)[0x400993]
./test(main+0x28)[0x4009bd]
/lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb]
./test[0x40086a]
This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz.

It's even easier than "man backtrace": there's a little-documented, GNU-specific library distributed with glibc as libSegFault.so, which I believe was written by Ulrich Drepper to support the program catchsegv (see "man catchsegv").
This gives us 3 possibilities. Instead of running "program -o hai":
Run within catchsegv:
$ catchsegv program -o hai
Link with libSegFault at runtime:
$ LD_PRELOAD=/lib/libSegFault.so program -o hai
Link with libSegFault at compile time:
$ gcc -g1 -lSegFault -o program program.cc
$ program -o hai
In all 3 cases, you will get clearer backtraces with less optimization (gcc -O0 or -O1) and debugging symbols (gcc -g). Otherwise, you may just end up with a pile of memory addresses.
You can also catch more signals for stack traces with something like:
$ export SEGFAULT_SIGNALS="all" # "all" signals
$ export SEGFAULT_SIGNALS="bus abrt" # SIGBUS and SIGABRT
The output will look something like this (notice the backtrace at the bottom):
*** Segmentation fault
Register dump:

 EAX: 0000000c   EBX: 00000080   ECX: 00000000   EDX: 0000000c
 ESI: bfdbf080   EDI: 080497e0   EBP: bfdbee38   ESP: bfdbee20

 EIP: 0805640f   EFLAGS: 00010282

 CS: 0073   DS: 007b   ES: 007b   FS: 0000   GS: 0033   SS: 007b

 Trap: 0000000e   Error: 00000004   OldMask: 00000000
 ESP/signal: bfdbee20   CR2: 00000024

 FPUCW: ffff037f   FPUSW: ffff0000   TAG: ffffffff
 IPOFF: 00000000   CSSEL: 0000   DATAOFF: 00000000   DATASEL: 0000

 ST(0) 0000 0000000000000000   ST(1) 0000 0000000000000000
 ST(2) 0000 0000000000000000   ST(3) 0000 0000000000000000
 ST(4) 0000 0000000000000000   ST(5) 0000 0000000000000000
 ST(6) 0000 0000000000000000   ST(7) 0000 0000000000000000
Backtrace:
/lib/libSegFault.so[0xb7f9e100]
??:0(??)[0xb7fa3400]
/usr/include/c++/4.3/bits/stl_queue.h:226(_ZNSt5queueISsSt5dequeISsSaISsEEE4pushERKSs)[0x805647a]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/player.cpp:73(_ZN6Player5inputESs)[0x805377c]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:159(_ZN6Socket4ReadEv)[0x8050698]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:413(_ZN12ServerSocket4ReadEv)[0x80507ad]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:300(_ZN12ServerSocket4pollEv)[0x8050b44]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/main.cpp:34(main)[0x8049a72]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5)[0xb7d1b775]
/build/buildd/glibc-2.9/csu/../sysdeps/i386/elf/start.S:122(_start)[0x8049801]
If you want to know the gory details, the best source is unfortunately the source: See http://sourceware.org/git/?p=glibc.git;a=blob;f=debug/segfault.c and its parent directory http://sourceware.org/git/?p=glibc.git;a=tree;f=debug

Linux
While the use of the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault has already been suggested, I see no mention of the intricacies necessary to ensure the resulting backtrace points to the actual location of the fault (at least for some architectures - x86 & ARM).
The first two entries in the stack frame chain when you get into the signal handler contain a return address inside the signal handler and one inside sigaction() in libc. The stack frame of the last function called before the signal (which is the location of the fault) is lost.
Code
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifndef __USE_GNU
#define __USE_GNU
#endif
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* This structure mirrors the one found in /usr/include/asm/ucontext.h */
typedef struct _sig_ucontext {
  unsigned long     uc_flags;
  ucontext_t       *uc_link;
  stack_t           uc_stack;
  struct sigcontext uc_mcontext;
  sigset_t          uc_sigmask;
} sig_ucontext_t;

void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
  void *           array[50];
  void *           caller_address;
  char **          messages;
  int              size, i;
  sig_ucontext_t * uc;

  uc = (sig_ucontext_t *)ucontext;

  /* Get the address at the time the signal was raised */
#if defined(__i386__) // gcc specific
  caller_address = (void *) uc->uc_mcontext.eip; // EIP: x86 specific
#elif defined(__x86_64__) // gcc specific
  caller_address = (void *) uc->uc_mcontext.rip; // RIP: x86_64 specific
#else
#error Unsupported architecture. // TODO: Add support for other arch.
#endif

  fprintf(stderr, "signal %d (%s), address is %p from %p\n",
          sig_num, strsignal(sig_num), info->si_addr,
          (void *)caller_address);

  size = backtrace(array, 50);

  /* overwrite sigaction with caller's address */
  array[1] = caller_address;

  messages = backtrace_symbols(array, size);

  /* skip first stack frame (points here) */
  for (i = 1; i < size && messages != NULL; ++i)
  {
    fprintf(stderr, "[bt]: (%d) %s\n", i, messages[i]);
  }

  free(messages);

  exit(EXIT_FAILURE);
}

int crash()
{
  char * p = NULL;
  *p = 0;
  return 0;
}

int foo4()
{
  crash();
  return 0;
}

int foo3()
{
  foo4();
  return 0;
}

int foo2()
{
  foo3();
  return 0;
}

int foo1()
{
  foo2();
  return 0;
}

int main(int argc, char ** argv)
{
  struct sigaction sigact;

  sigact.sa_sigaction = crit_err_hdlr;
  sigact.sa_flags = SA_RESTART | SA_SIGINFO;

  if (sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL) != 0)
  {
    fprintf(stderr, "error setting signal handler for %d (%s)\n",
            SIGSEGV, strsignal(SIGSEGV));
    exit(EXIT_FAILURE);
  }

  foo1();

  exit(EXIT_SUCCESS);
}
Output
signal 11 (Segmentation fault), address is (nil) from 0x8c50
[bt]: (1) ./test(crash+0x24) [0x8c50]
[bt]: (2) ./test(foo4+0x10) [0x8c70]
[bt]: (3) ./test(foo3+0x10) [0x8c8c]
[bt]: (4) ./test(foo2+0x10) [0x8ca8]
[bt]: (5) ./test(foo1+0x10) [0x8cc4]
[bt]: (6) ./test(main+0x74) [0x8d44]
[bt]: (7) /lib/libc.so.6(__libc_start_main+0xa8) [0x40032e44]
All the hazards of calling the backtrace() functions in a signal handler still exist and should not be overlooked, but I find the functionality I described here quite helpful in debugging crashes.
It is important to note that the example I provided is developed/tested on Linux for x86. I have also successfully implemented this on ARM using uc_mcontext.arm_pc instead of uc_mcontext.eip.
Here's a link to the article where I learned the details for this implementation:
http://www.linuxjournal.com/article/6391

Even though a correct answer has been provided that describes how to use the GNU libc backtrace() function [1], and I provided my own answer that describes how to ensure a backtrace from a signal handler points to the actual location of the fault [2], I don't see any mention of demangling the C++ symbols output from the backtrace.
When obtaining backtraces from a C++ program, the output can be run through c++filt [1] to demangle the symbols, or abi::__cxa_demangle [1] can be used directly.
[1] Linux & OS X. Note that c++filt and __cxa_demangle are GCC specific.
[2] Linux
The following C++ Linux example uses the same signal handler as my other answer and demonstrates how c++filt can be used to demangle the symbols.
Code:
class foo
{
public:
  foo() { foo1(); }

private:
  void foo1() { foo2(); }
  void foo2() { foo3(); }
  void foo3() { foo4(); }
  void foo4() { crash(); }
  void crash() { char * p = NULL; *p = 0; }
};

int main(int argc, char ** argv)
{
  // Setup signal handler for SIGSEGV
  ...

  foo * f = new foo();

  return 0;
}
Output (./test):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(crash__3foo+0x13) [0x8048e07]
[bt]: (2) ./test(foo4__3foo+0x12) [0x8048dee]
[bt]: (3) ./test(foo3__3foo+0x12) [0x8048dd6]
[bt]: (4) ./test(foo2__3foo+0x12) [0x8048dbe]
[bt]: (5) ./test(foo1__3foo+0x12) [0x8048da6]
[bt]: (6) ./test(__3foo+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
Demangled Output (./test 2>&1 | c++filt):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(foo::crash(void)+0x13) [0x8048e07]
[bt]: (2) ./test(foo::foo4(void)+0x12) [0x8048dee]
[bt]: (3) ./test(foo::foo3(void)+0x12) [0x8048dd6]
[bt]: (4) ./test(foo::foo2(void)+0x12) [0x8048dbe]
[bt]: (5) ./test(foo::foo1(void)+0x12) [0x8048da6]
[bt]: (6) ./test(foo::foo(void)+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
The following builds on the signal handler from my original answer and can replace the signal handler in the above example to demonstrate how abi::__cxa_demangle can be used to demangle the symbols. This signal handler produces the same demangled output as the above example.
Code:
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
  sig_ucontext_t * uc = (sig_ucontext_t *)ucontext;

  void * caller_address = (void *) uc->uc_mcontext.eip; // x86 specific

  std::cerr << "signal " << sig_num
            << " (" << strsignal(sig_num) << "), address is "
            << info->si_addr << " from " << caller_address
            << std::endl << std::endl;

  void * array[50];
  int size = backtrace(array, 50);

  array[1] = caller_address;

  char ** messages = backtrace_symbols(array, size);

  // skip first stack frame (points here)
  for (int i = 1; i < size && messages != NULL; ++i)
  {
    char *mangled_name = 0, *offset_begin = 0, *offset_end = 0;

    // find parentheses and +address offset surrounding mangled name
    for (char *p = messages[i]; *p; ++p)
    {
      if (*p == '(')
      {
        mangled_name = p;
      }
      else if (*p == '+')
      {
        offset_begin = p;
      }
      else if (*p == ')')
      {
        offset_end = p;
        break;
      }
    }

    // if the line could be processed, attempt to demangle the symbol
    if (mangled_name && offset_begin && offset_end &&
        mangled_name < offset_begin)
    {
      *mangled_name++ = '\0';
      *offset_begin++ = '\0';
      *offset_end++ = '\0';

      int status;
      char * real_name = abi::__cxa_demangle(mangled_name, 0, 0, &status);

      // if demangling is successful, output the demangled function name
      if (status == 0)
      {
        std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
                  << real_name << "+" << offset_begin << offset_end
                  << std::endl;
      }
      // otherwise, output the mangled function name
      else
      {
        std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
                  << mangled_name << "+" << offset_begin << offset_end
                  << std::endl;
      }
      free(real_name);
    }
    // otherwise, print the whole line
    else
    {
      std::cerr << "[bt]: (" << i << ") " << messages[i] << std::endl;
    }
  }
  std::cerr << std::endl;

  free(messages);

  exit(EXIT_FAILURE);
}

Might be worth looking at Google Breakpad, a cross-platform crash dump generator and tools to process the dumps.
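To give a feel for what that involves, here is a minimal sketch of hooking Breakpad's Linux client into main(). It is based on Breakpad's documented ExceptionHandler API rather than on this answer, so treat the constructor arguments as something to verify; the dump directory and callback name are placeholders.
#include <stdio.h>
#include "client/linux/handler/exception_handler.h"

// Called by Breakpad after a minidump has been written.
static bool dumpCallback(const google_breakpad::MinidumpDescriptor& descriptor,
                         void* context, bool succeeded) {
    printf("Minidump written to: %s\n", descriptor.path());
    return succeeded;
}

int main(int argc, char* argv[]) {
    google_breakpad::MinidumpDescriptor descriptor("/tmp"); // where dumps go
    google_breakpad::ExceptionHandler eh(descriptor, NULL, dumpCallback,
                                         NULL, true, -1);
    // ... rest of the program; a crash now leaves a .dmp file in /tmp
    return 0;
}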

You did not specify your operating system, so this is difficult to answer. If you are using a system based on GNU libc, you might be able to use the libc function backtrace().
GCC also has two builtins that can assist you, though they may or may not be implemented fully on your architecture: __builtin_frame_address and __builtin_return_address. Both want an immediate integer level (by immediate, I mean it can't be a variable). If __builtin_frame_address for a given level is non-zero, it should be safe to grab the return address of the same level.
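As a minimal sketch of that idea (my own illustration, not from the answer above; it assumes frame pointers are kept, e.g. -O0 or -fno-omit-frame-pointer): each level has to be spelled out with a constant, since the builtins will not accept a variable.
#include <stdio.h>

// Print a couple of caller addresses, guarding each level as described above.
void show_callers(void) {
    if (__builtin_frame_address(1) != 0)
        printf("caller:          %p\n", __builtin_return_address(1));
    if (__builtin_frame_address(2) != 0)
        printf("caller's caller: %p\n", __builtin_return_address(2));
}

int main(void) {
    show_callers(); // from main, this prints main's caller(s) in the C runtime
    return 0;
}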

Thank you to enthusiasticgeek for drawing my attention to the addr2line utility.
I've written a quick and dirty script that uses the addr2line utility to process the output of jschmier's answer above (much thanks to jschmier!).
The script accepts a single argument: the name of the file containing the output from jschmier's utility.
The output should print something like the following for each level of the trace:
BACKTRACE: testExe 0x8A5db6b
FILE: pathToFile/testExe.C:110
FUNCTION: testFunction(int)
107
108
109 int* i = 0x0;
*110 *i = 5;
111
112 }
113 return i;
Code:
#!/bin/bash
LOGFILE=$1
NUM_SRC_CONTEXT_LINES=3
old_IFS=$IFS # save the field separator
IFS=$'\n' # new field separator, the end of line
for bt in `cat $LOGFILE | grep '\[bt\]'`; do
   IFS=$old_IFS # restore default field separator
   printf '\n'
   EXEC=`echo $bt | cut -d' ' -f3 | cut -d'(' -f1`
   ADDR=`echo $bt | cut -d'[' -f3 | cut -d']' -f1`
   echo "BACKTRACE: $EXEC $ADDR"
   A2L=`addr2line -a $ADDR -e $EXEC -pfC`
   #echo "A2L: $A2L"
   FUNCTION=`echo $A2L | sed 's/\<at\>.*//' | cut -d' ' -f2-99`
   FILE_AND_LINE=`echo $A2L | sed 's/.* at //'`
   echo "FILE: $FILE_AND_LINE"
   echo "FUNCTION: $FUNCTION"

   # print offending source code
   SRCFILE=`echo $FILE_AND_LINE | cut -d':' -f1`
   LINENUM=`echo $FILE_AND_LINE | cut -d':' -f2`
   if ([ -f $SRCFILE ]); then
      cat -n $SRCFILE | grep -C $NUM_SRC_CONTEXT_LINES "^ *$LINENUM\>" | sed "s/ $LINENUM/*$LINENUM/"
   else
      echo "File not found: $SRCFILE"
   fi
   IFS=$'\n' # new field separator, the end of line
done
IFS=$old_IFS # restore default field separator

ulimit -c <value> sets the core file size limit on unix. By default, the core file size limit is 0. You can see your ulimit values with ulimit -a.
Also, if you run your program from within gdb, it will halt your program on segmentation violations (SIGSEGV, generally raised when you access a piece of memory that you haven't allocated), or you can set breakpoints.
ddd and nemiver are front-ends for gdb which make working with it much easier for the novice.

It's important to note that once you generate a core file you'll need to use the gdb tool to look at it. For gdb to make sense of your core file, you must tell gcc to include debugging symbols in the binary: to do this, you compile with the -g flag:
$ g++ -g prog.cpp -o prog
Then, you can either set "ulimit -c unlimited" to let it dump a core, or just run your program inside gdb. I like the second approach more:
$ gdb ./prog
... gdb startup output ...
(gdb) run
... program runs and crashes ...
(gdb) where
... gdb outputs your stack trace ...
I hope this helps.

It looks like one of the recent Boost versions added a library that provides exactly what you want, and the code should be multiplatform.
It is boost::stacktrace, which you can use as in this Boost sample:
#include <filesystem>
#include <sstream>
#include <fstream>
#include <signal.h> // ::signal, ::raise
#include <boost/stacktrace.hpp>

const char* backtraceFileName = "./backtraceFile.dump";

void signalHandler(int)
{
    ::signal(SIGSEGV, SIG_DFL);
    ::signal(SIGABRT, SIG_DFL);
    boost::stacktrace::safe_dump_to(backtraceFileName);
    ::raise(SIGABRT);
}

void sendReport()
{
    if (std::filesystem::exists(backtraceFileName))
    {
        std::ifstream file(backtraceFileName);

        auto st = boost::stacktrace::stacktrace::from_dump(file);
        std::ostringstream backtraceStream;
        backtraceStream << st << std::endl;

        // sending the code from st

        file.close();
        std::filesystem::remove(backtraceFileName);
    }
}

int main()
{
    ::signal(SIGSEGV, signalHandler);
    ::signal(SIGABRT, signalHandler);

    sendReport();
    // ... rest of code
}
On Linux, you compile the code above with:
g++ --std=c++17 file.cpp -lstdc++fs -lboost_stacktrace_backtrace -ldl -lbacktrace
Example backtrace copied from boost documentation:
0# bar(int) at /path/to/source/file.cpp:70
1# bar(int) at /path/to/source/file.cpp:70
2# bar(int) at /path/to/source/file.cpp:70
3# bar(int) at /path/to/source/file.cpp:70
4# main at /path/to/main.cpp:93
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# _start

I've been looking at this problem for a while.
Buried deep in the Google Performance Tools README
http://code.google.com/p/google-perftools/source/browse/trunk/README
is a mention of libunwind:
http://www.nongnu.org/libunwind/
I would love to hear opinions of this library.
The problem with -rdynamic is that it can increase the size of the binary relatively significantly in some cases.
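For reference, here is a minimal sketch of libunwind's local unwinding API (my own illustration of typical usage, not code from this answer; it assumes the libunwind development headers are installed and you link with -lunwind):
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>

// Walk the current call stack and print one line per frame.
void print_trace(void) {
    unw_cursor_t cursor;
    unw_context_t context;

    unw_getcontext(&context);          // capture the current machine state
    unw_init_local(&cursor, &context); // start unwinding from this frame

    while (unw_step(&cursor) > 0) {    // step up one caller at a time
        unw_word_t ip, offset;
        char name[256];

        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        if (unw_get_proc_name(&cursor, name, sizeof(name), &offset) == 0)
            fprintf(stderr, "0x%lx: %s+0x%lx\n",
                    (unsigned long)ip, name, (unsigned long)offset);
        else
            fprintf(stderr, "0x%lx: ??\n", (unsigned long)ip);
    }
}

int main(void) {
    print_trace();
    return 0;
}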

The new king in town has arrived:
https://github.com/bombela/backward-cpp
One header to place in your code and one library to install.
Personally, I call it using this function:
#include "backward.hpp"
void stacker() {
using namespace backward;
StackTrace st;
st.load_here(99); //Limit the number of trace depth to 99
st.skip_n_firsts(3);//This will skip some backward internal function from the trace
Printer p;
p.snippet = true;
p.object = true;
p.color = true;
p.address = true;
p.print(st, stderr);
}
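If I remember the project's README correctly, backward-cpp also ships a helper object that installs the crash-signal handlers for you, so a trace is printed automatically when the program dies; the following sketch is based on that README rather than on the answer above, so double-check the current API:
#include "backward.hpp"

namespace backward {
// Constructing this object registers handlers for SIGSEGV, SIGABRT and
// friends; a crash then prints a stack trace before the process exits.
backward::SignalHandling sh;
} // namespace backward

int main() {
    char *p = nullptr;
    *p = 0; // crash: backward-cpp prints the trace
    return 0;
}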

Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.

You can use DeathHandler, a small C++ class which does everything for you, reliably.
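If memory serves, usage looks roughly like the sketch below. The header and class names here are taken from my recollection of the project's README, so treat them as assumptions and check the repository before copying:
#include "death_handler.h" // assumed header name from the DeathHandler project

int main() {
    Debug::DeathHandler dh; // assumed class name; installs SIGSEGV/SIGABRT handlers
    int* i = nullptr;
    *i = 0; // crash: DeathHandler prints a demangled stack trace
    return 0;
}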

Forget about changing your sources and doing hacks with the backtrace() function or macros - these are just poor solutions.
As a properly working solution, I would advise:
Compile your program with the "-g" flag to embed debug symbols into the binary (don't worry, this will not impact your performance).
On Linux, run "ulimit -c unlimited" to allow the system to write large crash dumps.
When your program crashes, you will see a file "core" in the working directory.
Run the following command to print the backtrace to stdout: gdb -batch -ex "backtrace" ./your_program_exe ./core
This will print a proper, readable backtrace of your program (with source file names and line numbers).
Moreover, this approach gives you the freedom to automate your system:
have a short script that checks whether the process created a core dump, and then sends the backtrace by email to developers, or logs it into some logging system.

ulimit -c unlimited
is a shell setting which allows a core dump to be created after your application crashes, in this case of unlimited size. Look for a file called core in the very same directory. Make sure you compiled your code with debugging information enabled!

Look at:
man 3 backtrace
And:
#include <execinfo.h>
int backtrace(void **buffer, int size);
These are GNU extensions.

As a Windows-only solution, you can get the equivalent of a stack trace (with much, much more information) using Windows Error Reporting. With just a few registry entries, it can be set up to collect user-mode dumps:
Starting with Windows Server 2008 and Windows Vista with Service Pack 1 (SP1), Windows Error Reporting (WER) can be configured so that full user-mode dumps are collected and stored locally after a user-mode application crashes. [...]
This feature is not enabled by default. Enabling the feature requires administrator privileges. To enable and configure the feature, use the following registry values under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps key.
You can set the registry entries from your installer, which has the required privileges.
Creating a user-mode dump has the following advantages over generating a stack trace on the client:
It's already implemented in the system. You can either use WER as outlined above, or call MiniDumpWriteDump yourself, if you need more fine-grained control over the amount of information to dump. (Make sure to call it from a different process.)
Way more complete than a stack trace. Among others it can contain local variables, function arguments, stacks for other threads, loaded modules, and so on. The amount of data (and consequently size) is highly customizable.
No need to ship debug symbols. This both drastically decreases the size of your deployment, as well as makes it harder to reverse-engineer your application.
Largely independent of the compiler you use. Using WER does not even require any code. Either way, having a way to get a symbol database (PDB) is very useful for offline analysis. I believe GCC can either generate PDB's, or there are tools to convert the symbol database to the PDB format.
Take note that WER can only be triggered by an application crash (i.e. the system terminating a process due to an unhandled exception). MiniDumpWriteDump can be called at any time. This may be helpful if you need to dump the current state to diagnose issues other than a crash.
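To make the MiniDumpWriteDump option concrete, here is a minimal in-process sketch (my own illustration, not part of this answer; as noted above, a production setup should write the dump from a separate process, and the output path here is a placeholder):
#include <windows.h>
#include <dbghelp.h> // link with Dbghelp.lib

// Top-level exception filter: write a minidump, then let the process terminate.
static LONG WINAPI WriteDumpFilter(EXCEPTION_POINTERS* info) {
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER; // terminate after dumping
}

int main() {
    SetUnhandledExceptionFilter(WriteDumpFilter);
    // ... rest of the program; an unhandled exception now writes crash.dmp
}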
Mandatory reading, if you want to evaluate the applicability of mini dumps:
Effective minidumps
Effective minidumps (Part 2)

See the Stack Trace facility in ACE (ADAPTIVE Communication Environment). It's already written to cover all major platforms (and more). The library is BSD-style licensed so you can even copy/paste the code if you don't want to use ACE.

I can help with the Linux version: the functions backtrace, backtrace_symbols, and backtrace_symbols_fd can be used. See the corresponding manual pages.

*nix:
you can intercept SIGSEGV (usually this signal is raised before crashing) and save the info to a file (besides the core file, which you can use to debug with gdb, for example).
win:
Check this from msdn.
You can also look at Google Chrome's code to see how it handles crashes. It has a nice exception handling mechanism.

I have seen a lot of answers here that install a signal handler and then exit.
That's the way to go, but remember a very important fact: if you want to get a core dump for the generated error, you can't call exit(status). Call abort() instead!
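A minimal sketch of that point, reusing the handler pattern from the accepted answer (my own illustration; it assumes core dumps are enabled, e.g. with ulimit -c unlimited):
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

void handler(int sig) {
    void *array[10];
    int size = backtrace(array, 10);
    backtrace_symbols_fd(array, size, STDERR_FILENO);

    signal(SIGABRT, SIG_DFL); // in case this handler is also installed for SIGABRT
    abort();                  // unlike exit(), this still produces a core dump
}

int main(void) {
    signal(SIGSEGV, handler);
    *(volatile int *)0 = 0;   // crash: trace goes to stderr, core gets dumped
}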

I found that @tgamblin's solution is not complete.
It cannot handle stack overflow.
I think that is because by default the signal handler is called on the same stack, so
SIGSEGV is raised a second time. To protect against this, you need to register an independent stack for the signal handler.
You can check this with the code below. By default the handler fails. With the macro STACK_OVERFLOW defined, it's all right.
#include <iostream>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <string>
#include <cassert>

using namespace std;

//#define STACK_OVERFLOW

#ifdef STACK_OVERFLOW
static char stack_body[64*1024];
static stack_t sigseg_stack;
#endif

static struct sigaction sigseg_handler;

void handler(int sig) {
  cerr << "sig seg fault handler" << endl;
  const int asize = 10;
  void *array[asize];
  size_t size;

  // get void*'s for all entries on the stack
  size = backtrace(array, asize);

  // print out all the frames to stderr
  cerr << "stack trace: " << endl;
  backtrace_symbols_fd(array, size, STDERR_FILENO);
  cerr << "resend SIGSEGV to get core dump" << endl;
  signal(sig, SIG_DFL);
  kill(getpid(), sig);
}

void foo() {
  foo();
}

int main(int argc, char **argv) {
#ifdef STACK_OVERFLOW
  sigseg_stack.ss_sp = stack_body;
  sigseg_stack.ss_flags = SS_ONSTACK;
  sigseg_stack.ss_size = sizeof(stack_body);
  assert(!sigaltstack(&sigseg_stack, nullptr));
  sigseg_handler.sa_flags = SA_ONSTACK;
#else
  sigseg_handler.sa_flags = SA_RESTART;
#endif
  sigseg_handler.sa_handler = &handler;

  assert(!sigaction(SIGSEGV, &sigseg_handler, nullptr));

  cout << "sig action set" << endl;

  foo();

  return 0;
}

I would use the code that generates a stack trace for leaked memory in Visual Leak Detector. This only works on Win32, though.

If you still want to go it alone as I did, you can link against bfd and avoid using addr2line as I have done here:
https://github.com/gnif/LookingGlass/blob/master/common/src/platform/linux/crash.c
This produces the output:
[E] crash.linux.c:170 | crit_err_hdlr | ==== FATAL CRASH (a12-151-g28b12c85f4+1) ====
[E] crash.linux.c:171 | crit_err_hdlr | signal 11 (Segmentation fault), address is (nil)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (0) /home/geoff/Projects/LookingGlass/client/src/main.c:936 (register_key_binds)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (1) /home/geoff/Projects/LookingGlass/client/src/main.c:1069 (run)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (2) /home/geoff/Projects/LookingGlass/client/src/main.c:1314 (main)
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f8aa65f809b]
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (4) ./looking-glass-client(_start+0x2a) [0x55c70fc4aeca]

In addition to the above answers, here is how you make Debian Linux generate a core dump:
Create a “coredumps” folder in the user's home folder
Go to /etc/security/limits.conf. Below the commented header line, add “* soft core unlimited”, and “root soft core unlimited” if enabling core dumps for root, to allow unlimited space for core dumps.
NOTE: “* soft core unlimited” does not cover root, which is why root has to be specified in its own line.
To check these values, log out, log back in, and type “ulimit -a”. “Core file size” should be set to unlimited.
Check the .bashrc files (user, and root if applicable) to make sure that ulimit is not set there. Otherwise, the value above will be overwritten on startup.
Open /etc/sysctl.conf.
Enter the following at the bottom: “kernel.core_pattern = /home//coredumps/%e_%t.dump”. (%e will be the process name, and %t will be the system time)
Exit and type “sysctl -p” to load the new configuration
Check /proc/sys/kernel/core_pattern and verify that this matches what you just typed in.
Core dumping can be tested by running a process on the command line (“ &”), and then killing it with “kill -11 ”. If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.

gdb -ex 'set confirm off' -ex r -ex bt -ex q <my-program>

On Linux/unix/MacOSX use core files (you can enable them with ulimit or compatible system call). On Windows use Microsoft error reporting (you can become a partner and get access to your application crash data).

I forgot about the GNOME tech of "apport", but I don't know much about using it. It is used to generate stacktraces and other diagnostics for processing and can automatically file bugs. It's certainly worth checking in to.

You are probably not going to like this - all I can say in its favour is that it works for me, and I have similar but not identical requirements: I am writing a compiler/transpiler for a 1970's Algol-like language which uses C as its output and then compiles the C, so that as far as the user is concerned, they're generally not aware of C being involved. So although you might call it a transpiler, it's effectively a compiler that uses C as its intermediate code. The language being compiled has a history of providing good diagnostics and a full backtrace in the original native compilers.
I've been able to find gcc compiler flags and libraries etc. that allow me to trap most of the runtime errors that the original compilers did (although with one glaring exception - unassigned variable trapping). When a runtime error occurs (e.g. arithmetic overflow, divide by zero, array index out of bounds, etc.) the original compilers output a backtrace to the console listing all variables in the stack frames of every active procedure call.
I struggled to get this effect in C, but eventually did so with what can only be described as a hack... When the program is invoked, the wrapper that supplies the C "main" looks at its argv, and if a special option is not present, it restarts itself under gdb with an altered argv containing both gdb options and the 'magic' option string for the program itself. This restarted version then hides those strings from the user's code by restoring the original arguments before calling the main block of the code written in our language. When an error occurs (as long as it is not one explicitly trapped within the program by user code), it exits to gdb, which prints the required backtrace.
Key lines of code in the startup sequence include:
if ((argc >= 1) && (strcmp(origargv[argc-1], "--restarting-under-gdb")) != 0) {
// initial invocation
// the "--restarting-under-gdb" option is how the copy running under gdb knows
// not to start another gdb process.
and
char *gdb [] = {
"/usr/bin/gdb", "-q", "-batch", "-nx", "-nh", "-return-child-result",
"-ex", "run",
"-ex", "bt full",
"--args"
};
The original arguments are appended to the gdb options above. That should be enough of a hint for you to do something similar for your own system.
I did look at other library-supported backtrace options (eg libbacktrace,
https://codingrelic.geekhold.com/2010/09/gcc-function-instrumentation.html, etc) but they only output the procedure call stack, not the local variables. However if anyone knows of any cleaner mechanism to get a similar effect, do please let us know. The main downside to this is that the variables are printed in C syntax, not the syntax of the language the user writes in. And (until I add suitable #line directives on every generated line of C :-() the backtrace lists the C source file and line numbers.
G
PS The gcc compile options I use are:
GCCOPTS=" -Wall -Wno-return-type -Wno-comment -g -fsanitize=undefined
-fsanitize-undefined-trap-on-error -fno-sanitize-recover=all -frecord-gcc-switches
-fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -ftrapv
-grecord-gcc-switches -O0 -ggdb3 "

Related

C/C++: How to print a stack trace? [duplicate]

I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace.
My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc).
I would like my program to be able to generate a stack trace when it crashes and the next time the user runs it, it will ask them if it is ok to send the stack trace to me so I can track down the problem. I can handle the sending the info to me but I don't know how to generate the trace string. Any ideas?
For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual.
Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:
#include <stdio.h>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
void handler(int sig) {
void *array[10];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, 10);
// print out all the frames to stderr
fprintf(stderr, "Error: signal %d:\n", sig);
backtrace_symbols_fd(array, size, STDERR_FILENO);
exit(1);
}
void baz() {
int *foo = (int*)-1; // make a bad pointer
printf("%d\n", *foo); // causes segfault
}
void bar() { baz(); }
void foo() { bar(); }
int main(int argc, char **argv) {
signal(SIGSEGV, handler); // install our handler
foo(); // this will call foo, bar, and baz. baz segfaults.
}
Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace:
$ gcc -g -rdynamic ./test.c -o test
Executing this gets you this output:
$ ./test
Error: signal 11:
./test(handler+0x19)[0x400911]
/lib64/tls/libc.so.6[0x3a9b92e380]
./test(baz+0x14)[0x400962]
./test(bar+0xe)[0x400983]
./test(foo+0xe)[0x400993]
./test(main+0x28)[0x4009bd]
/lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb]
./test[0x40086a]
This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz.
It's even easier than "man backtrace", there's a little-documented library (GNU specific) distributed with glibc as libSegFault.so, which was I believe was written by Ulrich Drepper to support the program catchsegv (see "man catchsegv").
This gives us 3 possibilities. Instead of running "program -o hai":
Run within catchsegv:
$ catchsegv program -o hai
Link with libSegFault at runtime:
$ LD_PRELOAD=/lib/libSegFault.so program -o hai
Link with libSegFault at compile time:
$ gcc -g1 -lSegFault -o program program.cc
$ program -o hai
In all 3 cases, you will get clearer backtraces with less optimization (gcc -O0 or -O1) and debugging symbols (gcc -g). Otherwise, you may just end up with a pile of memory addresses.
You can also catch more signals for stack traces with something like:
$ export SEGFAULT_SIGNALS="all" # "all" signals
$ export SEGFAULT_SIGNALS="bus abrt" # SIGBUS and SIGABRT
The output will look something like this (notice the backtrace at the bottom):
*** Segmentation fault Register dump:
EAX: 0000000c EBX: 00000080 ECX:
00000000 EDX: 0000000c ESI:
bfdbf080 EDI: 080497e0 EBP:
bfdbee38 ESP: bfdbee20
EIP: 0805640f EFLAGS: 00010282
CS: 0073 DS: 007b ES: 007b FS:
0000 GS: 0033 SS: 007b
Trap: 0000000e Error: 00000004
OldMask: 00000000 ESP/signal:
bfdbee20 CR2: 00000024
FPUCW: ffff037f FPUSW: ffff0000
TAG: ffffffff IPOFF: 00000000
CSSEL: 0000 DATAOFF: 00000000
DATASEL: 0000
ST(0) 0000 0000000000000000 ST(1)
0000 0000000000000000 ST(2) 0000
0000000000000000 ST(3) 0000
0000000000000000 ST(4) 0000
0000000000000000 ST(5) 0000
0000000000000000 ST(6) 0000
0000000000000000 ST(7) 0000
0000000000000000
Backtrace:
/lib/libSegFault.so[0xb7f9e100]
??:0(??)[0xb7fa3400]
/usr/include/c++/4.3/bits/stl_queue.h:226(_ZNSt5queueISsSt5dequeISsSaISsEEE4pushERKSs)[0x805647a]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/player.cpp:73(_ZN6Player5inputESs)[0x805377c]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:159(_ZN6Socket4ReadEv)[0x8050698]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:413(_ZN12ServerSocket4ReadEv)[0x80507ad]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:300(_ZN12ServerSocket4pollEv)[0x8050b44]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/main.cpp:34(main)[0x8049a72]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5)[0xb7d1b775]
/build/buildd/glibc-2.9/csu/../sysdeps/i386/elf/start.S:122(_start)[0x8049801]
If you want to know the gory details, the best source is unfortunately the source: See http://sourceware.org/git/?p=glibc.git;a=blob;f=debug/segfault.c and its parent directory http://sourceware.org/git/?p=glibc.git;a=tree;f=debug
Linux
While the use of the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault has already been suggested, I see no mention of the intricacies necessary to ensure the resulting backtrace points to the actual location of the fault (at least for some architectures - x86 & ARM).
The first two entries in the stack frame chain when you get into the signal handler contain a return address inside the signal handler and one inside sigaction() in libc. The stack frame of the last function called before the signal (which is the location of the fault) is lost.
Code
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifndef __USE_GNU
#define __USE_GNU
#endif
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* This structure mirrors the one found in /usr/include/asm/ucontext.h */
typedef struct _sig_ucontext {
unsigned long uc_flags;
ucontext_t *uc_link;
stack_t uc_stack;
sigcontext_t uc_mcontext;
sigset_t uc_sigmask;
} sig_ucontext_t;
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
void * array[50];
void * caller_address;
char ** messages;
int size, i;
sig_ucontext_t * uc;
uc = (sig_ucontext_t *)ucontext;
/* Get the address at the time the signal was raised */
#if defined(__i386__) // gcc specific
caller_address = (void *) uc->uc_mcontext.eip; // EIP: x86 specific
#elif defined(__x86_64__) // gcc specific
caller_address = (void *) uc->uc_mcontext.rip; // RIP: x86_64 specific
#else
#error Unsupported architecture. // TODO: Add support for other arch.
#endif
fprintf(stderr, "signal %d (%s), address is %p from %p\n",
sig_num, strsignal(sig_num), info->si_addr,
(void *)caller_address);
size = backtrace(array, 50);
/* overwrite sigaction with caller's address */
array[1] = caller_address;
messages = backtrace_symbols(array, size);
/* skip first stack frame (points here) */
for (i = 1; i < size && messages != NULL; ++i)
{
fprintf(stderr, "[bt]: (%d) %s\n", i, messages[i]);
}
free(messages);
exit(EXIT_FAILURE);
}
int crash()
{
char * p = NULL;
*p = 0;
return 0;
}
int foo4()
{
crash();
return 0;
}
int foo3()
{
foo4();
return 0;
}
int foo2()
{
foo3();
return 0;
}
int foo1()
{
foo2();
return 0;
}
int main(int argc, char ** argv)
{
struct sigaction sigact;
sigact.sa_sigaction = crit_err_hdlr;
sigact.sa_flags = SA_RESTART | SA_SIGINFO;
if (sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL) != 0)
{
fprintf(stderr, "error setting signal handler for %d (%s)\n",
SIGSEGV, strsignal(SIGSEGV));
exit(EXIT_FAILURE);
}
foo1();
exit(EXIT_SUCCESS);
}
Output
signal 11 (Segmentation fault), address is (nil) from 0x8c50
[bt]: (1) ./test(crash+0x24) [0x8c50]
[bt]: (2) ./test(foo4+0x10) [0x8c70]
[bt]: (3) ./test(foo3+0x10) [0x8c8c]
[bt]: (4) ./test(foo2+0x10) [0x8ca8]
[bt]: (5) ./test(foo1+0x10) [0x8cc4]
[bt]: (6) ./test(main+0x74) [0x8d44]
[bt]: (7) /lib/libc.so.6(__libc_start_main+0xa8) [0x40032e44]
All the hazards of calling the backtrace() functions in a signal handler still exist and should not be overlooked, but I find the functionality I described here quite helpful in debugging crashes.
It is important to note that the example I provided is developed/tested on Linux for x86. I have also successfully implemented this on ARM using uc_mcontext.arm_pc instead of uc_mcontext.eip.
Here's a link to the article where I learned the details for this implementation:
http://www.linuxjournal.com/article/6391
Even though a correct answer has been provided that describes how to use the GNU libc backtrace() function1 and I provided my own answer that describes how to ensure a backtrace from a signal handler points to the actual location of the fault2, I don't see any mention of demangling C++ symbols output from the backtrace.
When obtaining backtraces from a C++ program, the output can be run through c++filt1 to demangle the symbols or by using abi::__cxa_demangle1 directly.
1 Linux & OS X
Note that c++filt and __cxa_demangle are GCC specific
2 Linux
The following C++ Linux example uses the same signal handler as my other answer and demonstrates how c++filt can be used to demangle the symbols.
Code:
class foo
{
public:
foo() { foo1(); }
private:
void foo1() { foo2(); }
void foo2() { foo3(); }
void foo3() { foo4(); }
void foo4() { crash(); }
void crash() { char * p = NULL; *p = 0; }
};
int main(int argc, char ** argv)
{
// Setup signal handler for SIGSEGV
...
foo * f = new foo();
return 0;
}
Output (./test):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(crash__3foo+0x13) [0x8048e07]
[bt]: (2) ./test(foo4__3foo+0x12) [0x8048dee]
[bt]: (3) ./test(foo3__3foo+0x12) [0x8048dd6]
[bt]: (4) ./test(foo2__3foo+0x12) [0x8048dbe]
[bt]: (5) ./test(foo1__3foo+0x12) [0x8048da6]
[bt]: (6) ./test(__3foo+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
Demangled Output (./test 2>&1 | c++filt):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(foo::crash(void)+0x13) [0x8048e07]
[bt]: (2) ./test(foo::foo4(void)+0x12) [0x8048dee]
[bt]: (3) ./test(foo::foo3(void)+0x12) [0x8048dd6]
[bt]: (4) ./test(foo::foo2(void)+0x12) [0x8048dbe]
[bt]: (5) ./test(foo::foo1(void)+0x12) [0x8048da6]
[bt]: (6) ./test(foo::foo(void)+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
The following builds on the signal handler from my original answer and can replace the signal handler in the above example to demonstrate how abi::__cxa_demangle can be used to demangle the symbols. This signal handler produces the same demangled output as the above example.
Code:
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
sig_ucontext_t * uc = (sig_ucontext_t *)ucontext;
void * caller_address = (void *) uc->uc_mcontext.eip; // x86 specific
std::cerr << "signal " << sig_num
<< " (" << strsignal(sig_num) << "), address is "
<< info->si_addr << " from " << caller_address
<< std::endl << std::endl;
void * array[50];
int size = backtrace(array, 50);
array[1] = caller_address;
char ** messages = backtrace_symbols(array, size);
// skip first stack frame (points here)
for (int i = 1; i < size && messages != NULL; ++i)
{
char *mangled_name = 0, *offset_begin = 0, *offset_end = 0;
// find parantheses and +address offset surrounding mangled name
for (char *p = messages[i]; *p; ++p)
{
if (*p == '(')
{
mangled_name = p;
}
else if (*p == '+')
{
offset_begin = p;
}
else if (*p == ')')
{
offset_end = p;
break;
}
}
// if the line could be processed, attempt to demangle the symbol
if (mangled_name && offset_begin && offset_end &&
mangled_name < offset_begin)
{
*mangled_name++ = '\0';
*offset_begin++ = '\0';
*offset_end++ = '\0';
int status;
char * real_name = abi::__cxa_demangle(mangled_name, 0, 0, &status);
// if demangling is successful, output the demangled function name
if (status == 0)
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< real_name << "+" << offset_begin << offset_end
<< std::endl;
}
// otherwise, output the mangled function name
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< mangled_name << "+" << offset_begin << offset_end
<< std::endl;
}
free(real_name);
}
// otherwise, print the whole line
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << std::endl;
}
}
std::cerr << std::endl;
free(messages);
exit(EXIT_FAILURE);
}
Might be worth looking at Google Breakpad, a cross-platform crash dump generator and tools to process the dumps.
You did not specify your operating system, so this is difficult to answer. If you are using a system based on gnu libc, you might be able to use the libc function backtrace().
GCC also has two builtins that can assist you, but which may or may not be implemented fully on your architecture, and those are __builtin_frame_address and __builtin_return_address. Both of which want an immediate integer level (by immediate, I mean it can't be a variable). If __builtin_frame_address for a given level is non-zero, it should be safe to grab the return address of the same level.
Thank you to enthusiasticgeek for drawing my attention to the addr2line utility.
I've written a quick and dirty script to process the output of the answer provided here:
(much thanks to jschmier!) using the addr2line utility.
The script accepts a single argument: The name of the file containing the output from jschmier's utility.
The output should print something like the following for each level of the trace:
BACKTRACE: testExe 0x8A5db6b
FILE: pathToFile/testExe.C:110
FUNCTION: testFunction(int)
107
108
109 int* i = 0x0;
*110 *i = 5;
111
112 }
113 return i;
Code:
#!/bin/bash
LOGFILE=$1
NUM_SRC_CONTEXT_LINES=3
old_IFS=$IFS # save the field separator
IFS=$'\n' # new field separator, the end of line
for bt in `cat $LOGFILE | grep '\[bt\]'`; do
IFS=$old_IFS # restore default field separator
printf '\n'
EXEC=`echo $bt | cut -d' ' -f3 | cut -d'(' -f1`
ADDR=`echo $bt | cut -d'[' -f3 | cut -d']' -f1`
echo "BACKTRACE: $EXEC $ADDR"
A2L=`addr2line -a $ADDR -e $EXEC -pfC`
#echo "A2L: $A2L"
FUNCTION=`echo $A2L | sed 's/\<at\>.*//' | cut -d' ' -f2-99`
FILE_AND_LINE=`echo $A2L | sed 's/.* at //'`
echo "FILE: $FILE_AND_LINE"
echo "FUNCTION: $FUNCTION"
# print offending source code
SRCFILE=`echo $FILE_AND_LINE | cut -d':' -f1`
LINENUM=`echo $FILE_AND_LINE | cut -d':' -f2`
if ([ -f $SRCFILE ]); then
cat -n $SRCFILE | grep -C $NUM_SRC_CONTEXT_LINES "^ *$LINENUM\>" | sed "s/ $LINENUM/*$LINENUM/"
else
echo "File not found: $SRCFILE"
fi
IFS=$'\n' # new field separator, the end of line
done
IFS=$old_IFS # restore default field separator
ulimit -c <value> sets the core file size limit on unix. By default, the core file size limit is 0. You can see your ulimit values with ulimit -a.
also, if you run your program from within gdb, it will halt your program on "segmentation violations" (SIGSEGV, generally when you accessed a piece of memory that you hadn't allocated) or you can set breakpoints.
ddd and nemiver are front-ends for gdb which make working with it much easier for the novice.
It's important to note that once you generate a core file you'll need to use the gdb tool to look at it. For gdb to make sense of your core file, you must tell gcc to instrument the binary with debugging symbols: to do this, you compile with the -g flag:
$ g++ -g prog.cpp -o prog
Then, you can either set "ulimit -c unlimited" to let it dump a core, or just run your program inside gdb. I like the second approach more:
$ gdb ./prog
... gdb startup output ...
(gdb) run
... program runs and crashes ...
(gdb) where
... gdb outputs your stack trace ...
I hope this helps.
It looks like in one of last c++ boost version appeared library to provide exactly what You want, probably the code would be multiplatform.
It is boost::stacktrace, which You can use like as in boost sample:
#include <filesystem>
#include <sstream>
#include <fstream>
#include <signal.h> // ::signal, ::raise
#include <boost/stacktrace.hpp>
const char* backtraceFileName = "./backtraceFile.dump";
void signalHandler(int)
{
::signal(SIGSEGV, SIG_DFL);
::signal(SIGABRT, SIG_DFL);
boost::stacktrace::safe_dump_to(backtraceFileName);
::raise(SIGABRT);
}
void sendReport()
{
if (std::filesystem::exists(backtraceFileName))
{
std::ifstream file(backtraceFileName);
auto st = boost::stacktrace::stacktrace::from_dump(file);
std::ostringstream backtraceStream;
backtraceStream << st << std::endl;
// sending the code from st
file.close();
std::filesystem::remove(backtraceFileName);
}
}
int main()
{
::signal(SIGSEGV, signalHandler);
::signal(SIGABRT, signalHandler);
sendReport();
// ... rest of code
}
In Linux You compile the code above:
g++ --std=c++17 file.cpp -lstdc++fs -lboost_stacktrace_backtrace -ldl -lbacktrace
Example backtrace copied from boost documentation:
0# bar(int) at /path/to/source/file.cpp:70
1# bar(int) at /path/to/source/file.cpp:70
2# bar(int) at /path/to/source/file.cpp:70
3# bar(int) at /path/to/source/file.cpp:70
4# main at /path/to/main.cpp:93
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# _start
Ive been looking at this problem for a while.
And buried deep in the Google Performance Tools README
http://code.google.com/p/google-perftools/source/browse/trunk/README
talks about libunwind
http://www.nongnu.org/libunwind/
Would love to hear opinions of this library.
The problem with -rdynamic is that it can increase the size of the binary relatively significantly in some cases
The new king in town has arrived
https://github.com/bombela/backward-cpp
1 header to place in your code and 1 library to install.
Personally I call it using this function
#include "backward.hpp"
void stacker() {
using namespace backward;
StackTrace st;
st.load_here(99); //Limit the number of trace depth to 99
st.skip_n_firsts(3);//This will skip some backward internal function from the trace
Printer p;
p.snippet = true;
p.object = true;
p.color = true;
p.address = true;
p.print(st, stderr);
}
Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.
You can use DeathHandler - small C++ class which does everything for you, reliable.
Forget about changing your sources and do some hacks with backtrace() function or macroses - these are just poor solutions.
As a properly working solution, I would advice:
Compile your program with "-g" flag for embedding debug symbols to binary (don't worry this will not impact your performance).
On linux run next command: "ulimit -c unlimited" - to allow system make big crash dumps.
When your program crashed, in the working directory you will see file "core".
Run next command to print backtrace to stdout: gdb -batch -ex "backtrace" ./your_program_exe ./core
This will print proper readable backtrace of your program in human readable way (with source file names and line numbers).
Moreover this approach will give you freedom to automatize your system:
have a short script that checks if process created a core dump, and then send backtraces by email to developers, or log this into some logging system.
ulimit -c unlimited
is a system variable, wich will allow to create a core dump after your application crashes. In this case an unlimited amount. Look for a file called core in the very same directory. Make sure you compiled your code with debugging informations enabled!
regards
Look at:
man 3 backtrace
And:
#include <exeinfo.h>
int backtrace(void **buffer, int size);
These are GNU extensions.
As a Windows-only solution, you can get the equivalent of a stack trace (with much, much more information) using Windows Error Reporting. With just a few registry entries, it can be set up to collect user-mode dumps:
Starting with Windows Server 2008 and Windows Vista with Service Pack 1 (SP1), Windows Error Reporting (WER) can be configured so that full user-mode dumps are collected and stored locally after a user-mode application crashes. [...]
This feature is not enabled by default. Enabling the feature requires administrator privileges. To enable and configure the feature, use the following registry values under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps key.
You can set the registry entries from your installer, which has the required privileges.
Creating a user-mode dump has the following advantages over generating a stack trace on the client:
It's already implemented in the system. You can either use WER as outlined above, or call MiniDumpWriteDump yourself, if you need more fine-grained control over the amount of information to dump. (Make sure to call it from a different process.)
Way more complete than a stack trace. Among others it can contain local variables, function arguments, stacks for other threads, loaded modules, and so on. The amount of data (and consequently size) is highly customizable.
No need to ship debug symbols. This both drastically decreases the size of your deployment, as well as makes it harder to reverse-engineer your application.
Largely independent of the compiler you use. Using WER does not even require any code. Either way, having a way to get a symbol database (PDB) is very useful for offline analysis. I believe GCC can either generate PDB's, or there are tools to convert the symbol database to the PDB format.
Take note, that WER can only be triggered by an application crash (i.e. the system terminating a process due to an unhandled exception). MiniDumpWriteDump can be called at any time. This may be helpful if you need to dump the current state to diagnose issues other than a crash.
Mandatory reading, if you want to evaluate the applicability of mini dumps:
Effective minidumps
Effective minidumps (Part 2)
See the Stack Trace facility in ACE (ADAPTIVE Communication Environment). It's already written to cover all major platforms (and more). The library is BSD-style licensed so you can even copy/paste the code if you don't want to use ACE.
I can help with the Linux version: the function backtrace, backtrace_symbols and backtrace_symbols_fd can be used. See the corresponding manual pages.
*nix:
you can intercept SIGSEGV (usualy this signal is raised before crashing) and keep the info into a file. (besides the core file which you can use to debug using gdb for example).
win:
Check this from msdn.
You can also look at the google's chrome code to see how it handles crashes. It has a nice exception handling mechanism.
I have seen a lot of answers here performing a signal handler and then exiting.
That's the way to go, but remember a very important fact: If you want to get the core dump for the generated error, you can't call exit(status). Call abort() instead!
I found that #tgamblin solution is not complete.
It cannot handle with stackoverflow.
I think because by default signal handler is called with the same stack and
SIGSEGV is thrown twice. To protect you need register an independent stack for the signal handler.
You can check this with code below. By default the handler fails. With defined macro STACK_OVERFLOW it's all right.
#include <iostream>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <string>
#include <cassert>
using namespace std;
//#define STACK_OVERFLOW
#ifdef STACK_OVERFLOW
static char stack_body[64*1024];
static stack_t sigseg_stack;
#endif
static struct sigaction sigseg_handler;
void handler(int sig) {
cerr << "sig seg fault handler" << endl;
const int asize = 10;
void *array[asize];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, asize);
// print out all the frames to stderr
cerr << "stack trace: " << endl;
backtrace_symbols_fd(array, size, STDERR_FILENO);
cerr << "resend SIGSEGV to get core dump" << endl;
signal(sig, SIG_DFL);
kill(getpid(), sig);
}
void foo() {
foo();
}
int main(int argc, char **argv) {
#ifdef STACK_OVERFLOW
sigseg_stack.ss_sp = stack_body;
sigseg_stack.ss_flags = SS_ONSTACK;
sigseg_stack.ss_size = sizeof(stack_body);
assert(!sigaltstack(&sigseg_stack, nullptr));
sigseg_handler.sa_flags = SA_ONSTACK;
#else
sigseg_handler.sa_flags = SA_RESTART;
#endif
sigseg_handler.sa_handler = &handler;
assert(!sigaction(SIGSEGV, &sigseg_handler, nullptr));
cout << "sig action set" << endl;
foo();
return 0;
}
I would use the code that generates a stack trace for leaked memory in Visual Leak Detector. This only works on Win32, though.
If you still want to go it alone as I did you can link against bfd and avoid using addr2line as I have done here:
https://github.com/gnif/LookingGlass/blob/master/common/src/platform/linux/crash.c
This produces the output:
[E] crash.linux.c:170 | crit_err_hdlr | ==== FATAL CRASH (a12-151-g28b12c85f4+1) ====
[E] crash.linux.c:171 | crit_err_hdlr | signal 11 (Segmentation fault), address is (nil)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (0) /home/geoff/Projects/LookingGlass/client/src/main.c:936 (register_key_binds)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (1) /home/geoff/Projects/LookingGlass/client/src/main.c:1069 (run)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (2) /home/geoff/Projects/LookingGlass/client/src/main.c:1314 (main)
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f8aa65f809b]
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (4) ./looking-glass-client(_start+0x2a) [0x55c70fc4aeca]
In addition to above answers, here how you make Debian Linux OS generate core dump
Create a “coredumps” folder in the user's home folder
Go to /etc/security/limits.conf. Below the ' ' line, type “ soft core unlimited”, and “root soft core unlimited” if enabling core dumps for root, to allow unlimited space for core dumps.
NOTE: “* soft core unlimited” does not cover root, which is why root has to be specified in its own line.
To check these values, log out, log back in, and type “ulimit -a”. “Core file size” should be set to unlimited.
Check the .bashrc files (user, and root if applicable) to make sure that ulimit is not set there. Otherwise, the value above will be overwritten on startup.
Open /etc/sysctl.conf.
Enter the following at the bottom: “kernel.core_pattern = /home/<username>/coredumps/%e_%t.dump”. (%e will be the process name, and %t will be the system time)
Exit and type “sysctl -p” to load the new configuration
Check /proc/sys/kernel/core_pattern and verify that this matches what you just typed in.
Core dumping can be tested by running a process on the command line (“<program> &”), and then killing it with “kill -11 <pid>”. If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
gdb -ex 'set confirm off' -ex r -ex bt -ex q <my-program>
On Linux/unix/MacOSX use core files (you can enable them with ulimit or compatible system call). On Windows use Microsoft error reporting (you can become a partner and get access to your application crash data).
I forgot about the GNOME tech of "apport", but I don't know much about using it. It is used to generate stacktraces and other diagnostics for processing and can automatically file bugs. It's certainly worth checking in to.
You are probably not going to like this - all I can say in its favour is that it works for me, and I have similar but not identical requirements: I am writing a compiler/transpiler for a 1970s Algol-like language which uses C as its output and then compiles the C, so that as far as the user is concerned they're generally not aware of C being involved; although you might call it a transpiler, it's effectively a compiler that uses C as its intermediate code. The language being compiled has a history of providing good diagnostics and a full backtrace in the original native compilers. I've been able to find gcc compiler flags and libraries etc. that allow me to trap most of the runtime errors that the original compilers did (although with one glaring exception - unassigned variable trapping). When a runtime error occurs (e.g. arithmetic overflow, divide by zero, array index out of bounds, etc.) the original compilers output a backtrace to the console listing all variables in the stack frames of every active procedure call. I struggled to get this effect in C, but eventually did so with what can only be described as a hack... When the program is invoked, the wrapper that supplies the C "main" looks at its argv, and if a special option is not present, it restarts itself under gdb with an altered argv containing both gdb options and the 'magic' option string for the program itself. The restarted version then hides those strings from the user's code by restoring the original arguments before calling the main block of the code written in our language. When an error occurs (as long as it is not one explicitly trapped within the program by user code), it exits to gdb, which prints the required backtrace.
Key lines of code in the startup sequence include:
if ((argc >= 1) && (strcmp(origargv[argc-1], "--restarting-under-gdb")) != 0) {
// initial invocation
// the "--restarting-under-gdb" option is how the copy running under gdb knows
// not to start another gdb process.
and
char *gdb [] = {
"/usr/bin/gdb", "-q", "-batch", "-nx", "-nh", "-return-child-result",
"-ex", "run",
"-ex", "bt full",
"--args"
};
The original arguments are appended to the gdb options above. That should be enough of a hint for you to do something similar for your own system.
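For illustration, a rough sketch of how that restart can be done (my own simplification, not the actual wrapper; error handling omitted):

#include <stdlib.h>
#include <unistd.h>

/* Re-exec the current program under gdb, appending a marker option so the
   restarted copy knows not to start yet another gdb. */
static void restart_under_gdb(int argc, char **argv) {
    char *newargv[argc + 16]; /* gdb options + original args + marker + NULL */
    int n = 0;
    newargv[n++] = "/usr/bin/gdb";
    newargv[n++] = "-q"; newargv[n++] = "-batch"; newargv[n++] = "-nx"; newargv[n++] = "-nh";
    newargv[n++] = "-return-child-result";
    newargv[n++] = "-ex"; newargv[n++] = "run";
    newargv[n++] = "-ex"; newargv[n++] = "bt full";
    newargv[n++] = "--args";
    for (int i = 0; i < argc; i++) newargv[n++] = argv[i]; /* original arguments */
    newargv[n++] = "--restarting-under-gdb";               /* the 'magic' marker */
    newargv[n] = NULL;
    execv(newargv[0], newargv);
    exit(1); /* only reached if execv failed */
}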
I did look at other library-supported backtrace options (eg libbacktrace,
https://codingrelic.geekhold.com/2010/09/gcc-function-instrumentation.html, etc) but they only output the procedure call stack, not the local variables. However if anyone knows of any cleaner mechanism to get a similar effect, do please let us know. The main downside to this is that the variables are printed in C syntax, not the syntax of the language the user writes in. And (until I add suitable #line directives on every generated line of C :-() the backtrace lists the C source file and line numbers.
G
PS The gcc compile options I use are:
GCCOPTS=" -Wall -Wno-return-type -Wno-comment -g -fsanitize=undefined
-fsanitize-undefined-trap-on-error -fno-sanitize-recover=all -frecord-gcc-switches
-fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -ftrapv
-grecord-gcc-switches -O0 -ggdb3 "

Why is my program exiting before main is called?

When I execute the following program on my embedded Linux, nothing happens:
#include <boost/thread/thread.hpp>
#include <boost/lockfree/spsc_queue.hpp>
#include <iostream>
#include <boost/atomic.hpp>
void Test(void)
{
std::cout << "Hello World" << std::endl;
}
int main(int argc, char* argv[])
{
std::cout << "init";
boost::thread producer_thread(Test);
producer_thread.join();
std::cout << "end";
}
# ./prog -> nothing happens here
The last few lines from strace output are:
open("/lib/libboost_thread.so.1.55.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\240\272\0\0004\0\0\0"..., 512) = 512
lseek(3, 95536, SEEK_SET) = 95536
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1200) = 1200
lseek(3, 95226, SEEK_SET) = 95226
read(3, "A'\0\0\0aeabi\0\1\35\0\0\0\0055T\0\6\3\10\1\t\1\22\4\24\1\25\1"..., 40) = 40
exit_group(1) = ?
+++ exited with 1 +++
#
The cross-compiled libboost_thread is correctly installed at /lib.
The program exits before main() is called. The program runs normally under my Ubuntu.
Target: ARM with buildroot (sama5d3)
Toolchain: arm-linux-gnueabihf-
Regards
Maybe as a hint:
Have you linked against libpthread with the compile and link option -pthread for your target?
If not, it can have the same effect as seen in your environment: the program starts, tries to start a new thread, has no threading enabled, and calls the abort() function. Because abort() simply leaves the program with an error exit code, nothing else happens.
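As a hint, the link line for the target might look something like this (library names are just a guess for a typical Boost.Thread setup; adjust to your build):
arm-linux-gnueabihf-g++ -pthread prog.cpp -o prog -lboost_thread -lboost_system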
Can you also add your compile & link commands for debugging purpose please!
In addition:
Your outputs without endl will not be printed, because cout is buffered. The buffer is flushed only if you call flush or send an endl. Maybe change this in your example.
Hope that helps...
strace is a tool that traces system calls. In your example, this consists of calls to open(), lseek(), and read(). Specifically, the snippet that you pasted shows the OS's dynamic library loader opening the libboost_thread.so.1.55.0 file and reading its contents; nothing more. It doesn't really demonstrate anything about your program except that it is linked against that library.
I found the problem.
The boost library was compiled with arm-linux-gnueabi- (elibc) and the buildroot is compiled with uClibc.

How to get a stack trace for C++ using gcc with line number information?

We use stack traces in a proprietary assert-like macro to catch developer mistakes - when an error is caught, a stack trace is printed.
I find gcc's pair of backtrace()/backtrace_symbols() methods insufficient:
Names are mangled
No line information
The 1st problem can be resolved by abi::__cxa_demangle.
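For reference, a minimal demangling sketch (the mangled name is just an arbitrary example):

#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>

int main() {
    int status = 0;
    // abi::__cxa_demangle malloc()s the result, so we must free() it ourselves
    char *demangled = abi::__cxa_demangle("_Z3barv", nullptr, nullptr, &status);
    if (status == 0 && demangled)
        std::printf("%s\n", demangled); // prints "bar()"
    std::free(demangled);
}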
However, the 2nd problem is tougher. I found a replacement for backtrace_symbols().
This is better than gcc's backtrace_symbols(), since it can retrieve line numbers (if compiled with -g) and you don't need to compile with -rdynamic.
However, the code is GNU licensed, so IMHO I can't use it in commercial code.
Any proposal?
P.S.
gdb is capable of printing out arguments passed to functions.
Probably it's already too much to ask for :)
PS 2
Similar question (thanks nobar)
So you want a stand-alone function that prints a stack trace with all of the features that gdb stack traces have and that doesn't terminate your application. The answer is to automate the launch of gdb in a non-interactive mode to perform just the tasks that you want.
This is done by executing gdb in a child process, using fork(), and scripting it to display a stack-trace while your application waits for it to complete. This can be performed without the use of a core-dump and without aborting the application. I learned how to do this from looking at this question: How it's better to invoke gdb from program to print it's stacktrace?
The example posted with that question didn't work for me exactly as written, so here's my "fixed" version (I ran this on Ubuntu 9.04).
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sys/prctl.h>
void print_trace() {
char pid_buf[30];
sprintf(pid_buf, "%d", getpid());
char name_buf[512];
name_buf[readlink("/proc/self/exe", name_buf, 511)]=0;
prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0);
int child_pid = fork();
if (!child_pid) {
dup2(2,1); // redirect output to stderr - edit: unnecessary?
execl("/usr/bin/gdb", "gdb", "--batch", "-n", "-ex", "thread", "-ex", "bt", name_buf, pid_buf, NULL);
abort(); /* If gdb failed to start */
} else {
waitpid(child_pid,NULL,0);
}
}
As shown in the referenced question, gdb provides additional options that you could use. For example, using "bt full" instead of "bt" produces an even more detailed report (local variables are included in the output). The manpages for gdb are kind of light, but complete documentation is available here.
Since this is based on gdb, the output includes demangled names, line-numbers, function arguments, and optionally even local variables. Also, gdb is thread-aware, so you should be able to extract some thread-specific metadata.
Here's an example of the kind of stack traces that I see with this method.
0x00007f97e1fc2925 in waitpid () from /lib/libc.so.6
[Current thread is 0 (process 15573)]
#0 0x00007f97e1fc2925 in waitpid () from /lib/libc.so.6
#1 0x0000000000400bd5 in print_trace () at ./demo3b.cpp:496
#2 0x0000000000400c09 in recursive (i=2) at ./demo3b.cpp:636
#3 0x0000000000400c1a in recursive (i=1) at ./demo3b.cpp:646
#4 0x0000000000400c1a in recursive (i=0) at ./demo3b.cpp:646
#5 0x0000000000400c46 in main (argc=1, argv=0x7fffe3b2b5b8) at ./demo3b.cpp:70
Note: I found this to be incompatible with the use of valgrind (probably due to Valgrind's use of a virtual machine). It also doesn't work when you are running the program inside of a gdb session (can't apply a second instance of "ptrace" to a process).
Not too long ago I answered a similar question. You should take a look at the source code available on method #4, which also prints line numbers and filenames.
Method #4:
A small improvement I've done on method #3 to print line numbers. This could be copied to work on method #2 also.
Basically, it uses addr2line to convert addresses into file names and line numbers.
The source code below prints line numbers for all local functions. If a function from another library is called, you might see a couple of ??:0 instead of file names.
#include <stdio.h>
#include <signal.h>
#include <stdio.h>
#include <signal.h>
#include <execinfo.h>
void bt_sighandler(int sig, struct sigcontext ctx) {
void *trace[16];
char **messages = (char **)NULL;
int i, trace_size = 0;
if (sig == SIGSEGV)
printf("Got signal %d, faulty address is %p, "
"from %p\n", sig, ctx.cr2, ctx.eip);
else
printf("Got signal %d\n", sig);
trace_size = backtrace(trace, 16);
/* overwrite sigaction with caller's address */
trace[1] = (void *)ctx.eip;
messages = backtrace_symbols(trace, trace_size);
/* skip first stack frame (points here) */
printf("[bt] Execution path:\n");
for (i=1; i<trace_size; ++i)
{
printf("[bt] #%d %s\n", i, messages[i]);
/* find first occurence of '(' or ' ' in message[i] and assume
* everything before that is the file name. (Don't go beyond 0 though
* (string terminator)*/
size_t p = 0;
while(messages[i][p] != '(' && messages[i][p] != ' '
&& messages[i][p] != 0)
++p;
char syscom[256];
sprintf(syscom,"addr2line %p -e %.*s", trace[i], p, messages[i]);
//last parameter is the file name of the symbol
system(syscom);
}
exit(0);
}
int func_a(int a, char b) {
char *p = (char *)0xdeadbeef;
a = a + b;
*p = 10; /* CRASH here!! */
return 2*a;
}
int func_b() {
int res, a = 5;
res = 5 + func_a(a, 't');
return res;
}
int main() {
/* Install our signal handler */
struct sigaction sa;
sa.sa_handler = (void *)bt_sighandler;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGUSR1, &sa, NULL);
/* ... add any other signal here */
/* Do something */
printf("%d\n", func_b());
}
This code should be compiled as: gcc sighandler.c -o sighandler -rdynamic
The program outputs:
Got signal 11, faulty address is 0xdeadbeef, from 0x8048975
[bt] Execution path:
[bt] #1 ./sighandler(func_a+0x1d) [0x8048975]
/home/karl/workspace/stacktrace/sighandler.c:44
[bt] #2 ./sighandler(func_b+0x20) [0x804899f]
/home/karl/workspace/stacktrace/sighandler.c:54
[bt] #3 ./sighandler(main+0x6c) [0x8048a16]
/home/karl/workspace/stacktrace/sighandler.c:74
[bt] #4 /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0x3fdbd6]
??:0
[bt] #5 ./sighandler() [0x8048781]
??:0
There is a robust discussion of essentially the same question at: How to generate a stacktrace when my gcc C++ app crashes. Many suggestions are provided, including lots of discussion about how to generate stack traces at run-time.
My personal favorite answer from that thread was to enable core dumps which allows you to view the complete application state at the time of the crash (including function arguments, line numbers, and unmangled names). An additional benefit of this approach is that it not only works for asserts, but also for segmentation faults and unhandled exceptions.
Different Linux shells use different commands to enable core dumps, but you can do it from within your application code with something like this...
#include <sys/resource.h>
...
struct rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
assert( setrlimit( RLIMIT_CORE, &core_limit ) == 0 ); // enable core dumps for debug builds
After a crash, run your favorite debugger to examine the program state.
$ kdbg executable core
Here's some sample output...
It is also possible to extract the stack trace from a core dump at the command line.
$ ( CMDFILE=$(mktemp); echo "bt" >${CMDFILE}; gdb 2>/dev/null --batch -x ${CMDFILE} temp.exe core )
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 22857]
#0 0x00007f4189be5fb5 in raise () from /lib/libc.so.6
#0 0x00007f4189be5fb5 in raise () from /lib/libc.so.6
#1 0x00007f4189be7bc3 in abort () from /lib/libc.so.6
#2 0x00007f4189bdef09 in __assert_fail () from /lib/libc.so.6
#3 0x00000000004007e8 in recursive (i=5) at ./demo1.cpp:18
#4 0x00000000004007f3 in recursive (i=4) at ./demo1.cpp:19
#5 0x00000000004007f3 in recursive (i=3) at ./demo1.cpp:19
#6 0x00000000004007f3 in recursive (i=2) at ./demo1.cpp:19
#7 0x00000000004007f3 in recursive (i=1) at ./demo1.cpp:19
#8 0x00000000004007f3 in recursive (i=0) at ./demo1.cpp:19
#9 0x0000000000400849 in main (argc=1, argv=0x7fff2483bd98) at ./demo1.cpp:26
Since the GPL licensed code is intended to help you during development, you could simply not include it in the final product. The GPL restricts you from distributing GPL-licensed code linked with non-GPL-compatible code. As long as you only use the GPL code in-house, you should be fine.
Use the Google glog library for it. It has a new BSD license.
It contains a GetStackTrace function in the stacktrace.h file.
EDIT
I found here http://blog.bigpixel.ro/2010/09/09/stack-unwinding-stack-trace-with-gcc/ that there is a utility called addr2line that translates program addresses into file names and line numbers.
http://linuxcommand.org/man_pages/addr2line1.html
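For example (the address and binary name below are made up), you would run something like:
addr2line -f -C -e ./myprog 0x4007db
where -f also prints the function name and -C demangles C++ symbols.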
Here's an alternative approach. A debug_assert() macro programmatically sets a conditional breakpoint. If you are running in a debugger, you will hit a breakpoint when the assert expression is false -- and you can analyze the live stack (the program doesn't terminate). If you are not running in a debugger, a failed debug_assert() causes the program to abort and you get a core dump from which you can analyze the stack (see my earlier answer).
The advantage of this approach, compared to normal asserts, is that you can continue running the program after the debug_assert is triggered (when running in a debugger). In other words, debug_assert() is slightly more flexible than assert().
#include <iostream>
#include <cassert>
#include <sys/resource.h>
// note: The assert expression should show up in
// stack trace as parameter to this function
void debug_breakpoint( char const * expression )
{
asm("int3"); // x86 specific
}
#ifdef NDEBUG
#define debug_assert( expression )
#else
// creates a conditional breakpoint
#define debug_assert( expression ) \
do { if ( !(expression) ) debug_breakpoint( #expression ); } while (0)
#endif
void recursive( int i=0 )
{
debug_assert( i < 5 );
if ( i < 10 ) recursive(i+1);
}
int main( int argc, char * argv[] )
{
rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
setrlimit( RLIMIT_CORE, &core_limit ); // enable core dumps
recursive();
}
Note: Sometimes "conditional breakpoints" setup within debuggers can be slow. By establishing the breakpoint programmatically, the performance of this method should be equivalent to that of a normal assert().
Note: As written, this is specific to the Intel x86 architecture -- other processors may have different instructions for generating a breakpoint.
A bit late, but you can use libbfd to fetch the filename and line number like refdbg does in symsnarf.c. libbfd is internally used by addr2line and gdb.
here is my solution:
#include <execinfo.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <stdlib.h>
#include <iostream>
#include <zconf.h>
#include "regex"
std::string getexepath() {
char result[PATH_MAX];
ssize_t count = readlink("/proc/self/exe", result, PATH_MAX);
return std::string(result, (count > 0) ? count : 0);
}
std::string sh(std::string cmd) {
std::array<char, 128> buffer;
std::string result;
std::shared_ptr<FILE> pipe(popen(cmd.c_str(), "r"), pclose);
if (!pipe) throw std::runtime_error("popen() failed!");
while (!feof(pipe.get())) {
if (fgets(buffer.data(), 128, pipe.get()) != nullptr) {
result += buffer.data();
}
}
return result;
}
void print_backtrace(void) {
void *bt[1024];
int bt_size;
char **bt_syms;
int i;
bt_size = backtrace(bt, 1024);
bt_syms = backtrace_symbols(bt, bt_size);
std::regex re("\\[(.+)\\]");
auto exec_path = getexepath();
for (i = 1; i < bt_size; i++) {
std::string sym = bt_syms[i];
std::smatch ms;
if (std::regex_search(sym, ms, re)) {
std::string addr = ms[1];
std::string cmd = "addr2line -e " + exec_path + " -f -C " + addr;
auto r = sh(cmd);
std::regex re2("\\n$");
auto r2 = std::regex_replace(r, re2, "");
std::cout << r2 << std::endl;
}
}
free(bt_syms);
}
void test_m() {
print_backtrace();
}
int main() {
test_m();
return 0;
}
output:
/home/roroco/Dropbox/c/ro-c/cmake-build-debug/ex/test_backtrace_with_line_number
test_m()
/home/roroco/Dropbox/c/ro-c/ex/test_backtrace_with_line_number.cpp:57
main
/home/roroco/Dropbox/c/ro-c/ex/test_backtrace_with_line_number.cpp:61
??
??:0
"??" and "??:0" since this trace is in libc, not in my source
One of the solutions is to start gdb with a "bt" script in the failed-assert handler. It is not very easy to integrate such a gdb launch, but it will give you both the backtrace and the arguments, and demangled names (or you can pass the gdb output through the c++filt program).
Both programs (gdb and c++filt) will not be linked into your application, so the GPL will not require you to open-source the complete application.
The same approach (exec'ing a GPL program) can be used with backtrace_symbols: just generate an ASCII list of %eip values and the map of the executable file (/proc/self/maps) and pass them to a separate binary.
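A minimal sketch of that idea (the function name and output format are my own; error handling mostly omitted):

#include <execinfo.h>
#include <stdio.h>

/* Dump the raw return addresses plus the process memory map so that a
   separate (possibly GPL) tool can resolve them offline. */
void dump_raw_trace(const char *path) {
    void *addrs[64];
    int n = backtrace(addrs, 64);
    FILE *out = fopen(path, "w");
    if (!out) return;
    for (int i = 0; i < n; i++)
        fprintf(out, "%p\n", addrs[i]);
    /* append the memory map so the load addresses of shared libs are known */
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps) {
        int c;
        while ((c = fgetc(maps)) != EOF)
            fputc(c, out);
        fclose(maps);
    }
    fclose(out);
}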
You can use DeathHandler - a small C++ class which does everything for you, reliably.
I suppose line numbers are related to current eip value, right?
SOLUTION 1:
Then you can use something like GetThreadContext(), except that you're working on linux. I googled around a bit and found something similar, ptrace():
The ptrace() system call provides a
means by which a parent process may
observe and control the execution of
another process, and examine and
change its core image and registers. [...]
The parent can initiate a trace by
calling fork(2) and having the
resulting child do a PTRACE_TRACEME,
followed (typically) by an exec(3).
Alternatively, the parent may commence
trace of an existing process using
PTRACE_ATTACH.
Now I was thinking, you can write a 'main' program which checks for signals that are sent to its child, the real program you're working on. After fork() it calls waitid():
All of these system calls are used to
wait for state changes in a child of
the calling process, and obtain
information about the child whose
state has changed.
and if a SIGSEGV (or something similar) is caught call ptrace() to obtain eip's value.
PS: I've never used these system calls (well, actually, I've never seen them before ;) so I don't know if it's possible neither can help you. At least I hope these links are useful. ;)
SOLUTION 2:
The first solution is quite complicated, right? I came up with a much simpler one: using signal() catch the signals you are interested in and call a simple function that reads the eip value stored in the stack:
...
signal(SIGSEGV, sig_handler);
...
void sig_handler(int signum)
{
int eip_value;
asm {
push eax;
mov eax, [ebp - 4]
mov eip_value, eax
pop eax
}
// now you have the address of the
// **next** instruction after the
// SIGSEGV was received
}
That asm syntax is Borland's one, just adapt it to GAS. ;)
Here's my third answer -- still trying to take advantage of core dumps.
It wasn't completely clear in the question whether the "assert-like" macros were supposed to terminate the application (the way assert does) or they were supposed to continue executing after generating their stack-trace.
In this answer, I'm addressing the case where you want to show a stack-trace and continue executing. I wrote the coredump() function below to generate a core dump, automatically extract the stack-trace from it, then continue executing the program.
Usage is the same as that of assert(). The difference, of course, is that assert() terminates the program but coredump_assert() does not.
#include <iostream>
#include <sys/resource.h>
#include <cstdio>
#include <cstdlib>
#include <boost/lexical_cast.hpp>
#include <string>
#include <sys/wait.h>
#include <unistd.h>
std::string exename;
// expression argument is for diagnostic purposes (shows up in call-stack)
void coredump( char const * expression )
{
pid_t childpid = fork();
if ( childpid == 0 ) // child process generates core dump
{
rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
setrlimit( RLIMIT_CORE, &core_limit ); // enable core dumps
abort(); // terminate child process and generate core dump
}
// give each core-file a unique name
if ( childpid > 0 ) waitpid( childpid, 0, 0 );
static int count=0;
using std::string;
string pid = boost::lexical_cast<string>(getpid());
string newcorename = "core-"+boost::lexical_cast<string>(count++)+"."+pid;
string rawcorename = "core."+boost::lexical_cast<string>(childpid);
int rename_rval = rename(rawcorename.c_str(),newcorename.c_str()); // try with core.PID
if ( rename_rval == -1 ) rename_rval = rename("core",newcorename.c_str()); // try with just core
if ( rename_rval == -1 ) std::cerr<<"failed to capture core file\n";
#if 1 // optional: dump stack trace and delete core file
string cmd = "( CMDFILE=$(mktemp); echo 'bt' >${CMDFILE}; gdb 2>/dev/null --batch -x ${CMDFILE} "+exename+" "+newcorename+" ; unlink ${CMDFILE} )";
int system_rval = system( ("bash -c '"+cmd+"'").c_str() );
if ( system_rval == -1 ) std::cerr.flush(), perror("system() failed during stack trace"), fflush(stderr);
unlink( newcorename.c_str() );
#endif
}
#ifdef NDEBUG
#define coredump_assert( expression ) ((void)(expression))
#else
#define coredump_assert( expression ) do { if ( !(expression) ) { coredump( #expression ); } } while (0)
#endif
void recursive( int i=0 )
{
coredump_assert( i < 2 );
if ( i < 4 ) recursive(i+1);
}
int main( int argc, char * argv[] )
{
exename = argv[0]; // this is used to generate the stack trace
recursive();
}
When I run the program, it displays three stack traces...
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24251]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=2) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#6 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24259]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=3) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=2) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#6 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#7 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24267]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=4) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=3) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=2) at ./demo3.cpp:61
#6 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#7 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#8 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
I had to do this in a production environment with many constraints, so I wanted to explain the advantages and disadvantages of the already posted methods.
attach GDB
+ very simple and robust
- Slow for large programs, because GDB insists on loading the entire address-to-line-number database upfront instead of lazily
- Interferes with signal handling. When GDB is attached, it intercepts signals like SIGINT (ctrl-c), which will cause the program to get stuck at the GDB interactive prompt if some other process routinely sends such signals. Maybe there's some way around it, but this made GDB unusable in my case. You can still use it if you only care about printing a call stack once when your program crashes, but not multiple times.
addr2line. Here's an alternate solution that doesn't use backtrace_symbols.
+ Doesn't allocate from the heap, which is unsafe inside a signal handler
+ No need to parse the output of backtrace_symbols
- Won't work on MacOS, which doesn't have dladdr1. You can use _dyld_get_image_vmaddr_slide instead, which returns the same offset as link_map::l_addr.
- Requires adding a negative offset, or else the translated line # will be 1 greater. backtrace_symbols does this for you.
#include <execinfo.h>
#include <link.h>
#include <stdlib.h>
#include <stdio.h>
// converts a function's address in memory to its VMA address in the executable file. VMA is what addr2line expects
size_t ConvertToVMA(size_t addr)
{
Dl_info info;
link_map* link_map;
dladdr1((void*)addr,&info,(void**)&link_map,RTLD_DL_LINKMAP);
return addr-link_map->l_addr;
}
void PrintCallStack()
{
void *callstack[128];
int frame_count = backtrace(callstack, sizeof(callstack)/sizeof(callstack[0]));
for (int i = 0; i < frame_count; i++)
{
char location[1024];
Dl_info info;
if(dladdr(callstack[i],&info))
{
char command[256];
size_t VMA_addr=ConvertToVMA((size_t)callstack[i]);
//if(i!=crash_depth)
VMA_addr-=1; // https://stackoverflow.com/questions/11579509/wrong-line-numbers-from-addr2line/63841497#63841497
snprintf(command,sizeof(command),"addr2line -e %s -Ci %zx",info.dli_fname,VMA_addr);
system(command);
}
}
}
void Foo()
{
PrintCallStack();
}
int main()
{
Foo();
return 0;
}
I also want to clarify what addresses backtrace and backtrace_symbols generate and what addr2line expects.
addr2line expects Foo_VMA, or if you're using --section=.text, then Foo_file - text_file. backtrace returns Foo_mem. backtrace_symbols generates Foo_VMA somewhere.
One big mistake I made and saw in several other posts was assuming VMA_base = 0, i.e. Foo_VMA = Foo_file = Foo_mem - ELF_mem, which is easy to calculate.
That often works, but some compilers (i.e. their linker scripts) use VMA_base > 0. Examples would be GCC 5.4 on Ubuntu 16 (0x400000) and clang 11 on MacOS (0x100000000).
For shared libs, it's always 0. It seems VMA_base is only meaningful for non-position-independent code. Otherwise it has no effect on where the EXE is loaded in memory.
Also, neither karlphillip's solution nor this one requires compiling with -rdynamic. -rdynamic will increase the binary size, especially for a large C++ program or shared lib, with useless entries in the dynamic symbol table that never get imported.
AFAICS all of the solutions provided so far won't print function names and line numbers from shared libraries. That's what I needed, so I altered karlphillip's solution (and some other answer from a similar question) to resolve shared library addresses using /proc/self/maps.
#include <stdlib.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <execinfo.h>
#include <stdbool.h>
struct Region { // one mapped file, for example a shared library
uintptr_t start;
uintptr_t end;
char* path;
};
static struct Region* getRegions(int* size) {
// parse /proc/self/maps and get list of mapped files
FILE* file;
int allocated = 10;
*size = 0;
struct Region* res;
uintptr_t regionStart = 0x00000000;
uintptr_t regionEnd = 0x00000000;
char* regionPath = "";
uintmax_t matchedStart;
uintmax_t matchedEnd;
char* matchedPath;
res = (struct Region*)malloc(sizeof(struct Region) * allocated);
file = fopen("/proc/self/maps", "r");
while (!feof(file)) {
fscanf(file, "%jx-%jx %*s %*s %*s %*s%*[ ]%m[^\n]\n", &matchedStart, &matchedEnd, &matchedPath);
bool bothNull = matchedPath == 0x0 && regionPath == 0x0;
bool similar = matchedPath && regionPath && !strcmp(matchedPath, regionPath);
if(bothNull || similar) {
free(matchedPath);
regionEnd = matchedEnd;
} else {
if(*size == allocated) {
allocated *= 2;
res = (struct Region*)realloc(res, sizeof(struct Region) * allocated);
}
res[*size].start = regionStart;
res[*size].end = regionEnd;
res[*size].path = regionPath;
(*size)++;
regionStart = matchedStart;
regionEnd = matchedEnd;
regionPath = matchedPath;
}
}
fclose(file);
return res;
}
struct SemiResolvedAddress {
char* path;
uintptr_t offset;
};
static struct SemiResolvedAddress semiResolve(struct Region* regions, int regionsNum, uintptr_t address) {
// convert address from our address space to
// address suitable fo addr2line
struct Region* region;
struct SemiResolvedAddress res = {"", address};
for(region = regions; region < regions+regionsNum; region++) {
if(address >= region->start && address < region->end) {
res.path = region->path;
res.offset = address - region->start;
}
}
return res;
}
void printStacktraceWithLines(unsigned int max_frames)
{
int regionsNum;
fprintf(stderr, "stack trace:\n");
// storage array for stack trace address data
void* addrlist[max_frames+1];
// retrieve current stack addresses
int addrlen = backtrace(addrlist, sizeof(addrlist) / sizeof(void*));
if (addrlen == 0) {
fprintf(stderr, " <empty, possibly corrupt>\n");
return;
}
struct Region* regions = getRegions(&regionsNum);
for (int i = 1; i < addrlen; i++)
{
struct SemiResolvedAddress hres =
semiResolve(regions, regionsNum, (uintptr_t)(addrlist[i]));
char syscom[256];
sprintf(syscom, "addr2line -C -f -p -a -e %s 0x%jx", hres.path, (intmax_t)(hres.offset));
system(syscom);
}
free(regions);
}
C++23 <stacktrace>
Finally, this has arrived! More details/comparison with other systems at: print call stack in C or C++
stacktrace.cpp
#include <iostream>
#include <stacktrace>
void my_func_2(void) {
std::cout << std::stacktrace::current(); // Line 5
}
void my_func_1(double f) {
(void)f;
my_func_2(); // Line 10
}
void my_func_1(int i) {
(void)i;
my_func_2(); // Line 15
}
int main(int argc, char **argv) {
my_func_1(1); // Line 19
my_func_1(2.0); // Line 20
}
GCC 12.1.0 from Ubuntu 22.04 does not have support compiled in, so for now I built it from source as per: How to edit and re-build the GCC libstdc++ C++ standard library source? and set --enable-libstdcxx-backtrace=yes, and it worked!
Compile and run:
g++ -ggdb3 -O2 -std=c++23 -Wall -Wextra -pedantic -o stacktrace.out stacktrace.cpp -lstdc++_libbacktrace
./stacktrace.out
Output:
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(int) at /home/ciro/stacktrace.cpp:15
2# at :0
3# at :0
4# at :0
5#
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(double) at /home/ciro/stacktrace.cpp:10
2# at :0
3# at :0
4# at :0
5#
The trace is not perfect (missing main line) because of optimization I think. With -O0 it is better:
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(int) at /home/ciro/stacktrace.cpp:15
2# at /home/ciro/stacktrace.cpp:19
3# at :0
4# at :0
5# at :0
6#
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(double) at /home/ciro/stacktrace.cpp:10
2# at /home/ciro/stacktrace.cpp:20
3# at :0
4# at :0
5# at :0
6#
I don't know why the name main is missing, but the line is there.
The "extra" lines after main like:
3# at :0
4# at :0
5# at :0
6#
are probably stuff that runs before main and that ends up calling main: What happens before main in C++?

print call stack in C or C++

Is there any way to dump the call stack in a running process in C or C++ every time a certain function is called? What I have in mind is something like this:
void foo()
{
print_stack_trace();
// foo's body
return
}
Where print_stack_trace works similarly to caller in Perl.
Or something like this:
int main (void)
{
// will print out debug info every time foo() is called
register_stack_trace_function(foo);
// etc...
}
where register_stack_trace_function puts some sort of internal breakpoint that will cause a stack trace to be printed whenever foo is called.
Does anything like this exist in some standard C library?
I am working on Linux, using GCC.
Background
I have a test run that behaves differently based on some commandline switches that shouldn't affect this behavior. My code has a pseudo-random number generator that I assume is being called differently based on these switches. I want to be able to run the test with each set of switches and see if the random number generator is called differently for each one.
Survey of C/C++ backtrace methods
In this answer I will try to run a single benchmark for a bunch of solutions to see which one runs faster, while also considering other points such as features and portability.
| Tool | Time / call | Line number | Function name | C++ demangling | Recompile | Signal safe | As string | C |
|------|-------------|-------------|---------------|----------------|-----------|-------------|-----------|---|
| C++23 <stacktrace> GCC 12.1 | 7 us | y | y | y | y | n | y | n |
| Boost 1.74 stacktrace() | 5 us | y | y | y | y | n | y | n |
| Boost 1.74 stacktrace::safe_dump_to | | | | | | y | n | n |
| glibc backtrace_symbols_fd | 25 us | n | -rdynamic | hacks | y | y | n | y |
| glibc backtrace_symbols | 21 us | n | -rdynamic | hacks | y | n | y | y |
| GDB scripting | 600 us | y | y | y | n | y | n | y |
| GDB code injection | | | | | n | n | | y |
| libunwind | | | | | | | | y |
| libdwfl | 4 ms | | | | | n | y | |
| libbacktrace | | | | | | | | y |
Empty cells mean "TODO", not "no".
us: microsecond
Line number: shows actual line number, not just function name + a memory address.
It is usually possible to recover the line number from an address manually after the fact with addr2line. But it is a pain.
Recompile: requires recompiling the program to get your traces. Not recompiling is better!
Signal safe: crucial for the important use case of "getting a stack trace in case of segfault": How to automatically generate a stacktrace when my program crashes
As string: you get the stack trace as a string in the program itself, as opposed to e.g. just printing it to stdout. This usually implies not signal safe, as we don't know the size of the stack trace string in advance, and therefore it requires malloc, which is not async-signal-safe.
C: does it work on a plain-C project (yes, there are still poor souls out there), or is C++ required?
Test setup
All benchmarks will run the following
main.cpp
#include <cstdlib> // strtoul
#include <mystacktrace.h>
void my_func_2(void) {
print_stacktrace(); // line 6
}
void my_func_1(double f) {
(void)f;
my_func_2();
}
void my_func_1(int i) {
(void)i;
my_func_2(); // line 16
}
int main(int argc, char **argv) {
long long unsigned int n;
if (argc > 1) {
n = std::strtoul(argv[1], NULL, 0);
} else {
n = 1;
}
for (long long unsigned int i = 0; i < n; ++i) {
my_func_1(1); // line 27
}
}
This input is designed to test C++ name demangling, since my_func_1(int) and my_func_1(double) are necessarily mangled as a way to implement C++ function overloading.
We differentiate between the benchmarks by using different -I includes to point to different implementations of print_stacktrace().
Each benchmark is done with a command of form:
time ./stacktrace.out 100000 &>/dev/null
The number of iterations is adjusted for each implementation to produce a total runtime of the order of 1s for that benchmark.
-O0 is used on all tests below unless noted. Stack traces may be irreparably mutilated by certain optimizations. Tail call optimization is a notable example of that: What is tail call optimization? There's nothing we can do about it.
C++23 <stacktrace>
This method was previously mentioned at: https://stackoverflow.com/a/69384663/895245 please consider upvoting that answer.
This is the best solution... it's portable, fast, shows line numbers and demangles C++ symbols. This option will displace every other alternative as soon as it becomes more widely available, with the exception perhaps only of GDB for one-offs without the need for recompilation.
cpp20_stacktrace/mystacktrace.h
#include <iostream>
#include <stacktrace>
void print_stacktrace() {
std::cout << std::stacktrace::current();
}
GCC 12.1.0 from Ubuntu 22.04 does not have support compiled in, so for now I built it from source as per: How to edit and re-build the GCC libstdc++ C++ standard library source? and set --enable-libstdcxx-backtrace=yes, and it worked!
Compile with:
g++ -O0 -ggdb3 -Wall -Wextra -pedantic -std=c++23 -o cpp20_stacktrace.out main.cpp -lstdc++_libbacktrace
Sample output:
0# print_stacktrace() at cpp20_stacktrace/mystacktrace.h:5
1# my_func_2() at /home/ciro/main.cpp:6
2# my_func_1(int) at /home/ciro/main.cpp:16
3# at /home/ciro/main.cpp:27
4# at :0
5# at :0
6# at :0
7#
If we try to use GCC 12.1.0 from Ubuntu 22.04:
sudo apt install g++-12
g++-12 -ggdb3 -O2 -std=c++23 -Wall -Wextra -pedantic -o stacktrace.out stacktrace.cpp -lstdc++_libbacktrace
It fails with:
stacktrace.cpp: In function ‘void my_func_2()’:
stacktrace.cpp:6:23: error: ‘std::stacktrace’ has not been declared
6 | std::cout << std::stacktrace::current();
| ^~~~~~~~~~
Checking build options with:
g++-12 -v
does not show:
--enable-libstdcxx-backtrace=yes
so it wasn't compiled in. Bibliography:
How to use <stacktrace> in GCC trunk?
How can I generate a C++23 stacktrace with GCC 12.1?
It does not fail on the include because the header file:
/usr/include/c++/12
has a feature check:
#if __cplusplus > 202002L && _GLIBCXX_HAVE_STACKTRACE
Boost stacktrace
The library has changed quite a lot around Ubuntu 22.04, so make sure your version matches: Boost stack-trace not showing function names and line numbers
The library is pretty much superseded by the more portable C++23 implementation, but it remains a very good option for those who are not on that standard version yet but already have a "Boost clearance".
Documented at: https://www.boost.org/doc/libs/1_66_0/doc/html/stacktrace/getting_started.html#stacktrace.getting_started.how_to_print_current_call_stack
Tested on Ubuntu 22.04, boost 1.74.0, you should do:
boost_stacktrace/mystacktrace.h
#include <iostream>
#define BOOST_STACKTRACE_LINK
#include <boost/stacktrace.hpp>
void print_stacktrace(void) {
std::cout << boost::stacktrace::stacktrace();
}
On Ubuntu 19.10 boost 1.67.0 to get the line numbers we had to instead:
#include <iostream>
#define BOOST_STACKTRACE_USE_ADDR2LINE
#include <boost/stacktrace.hpp>
void print_stacktrace(void) {
std::cout << boost::stacktrace::stacktrace();
}
which would call out to the addr2line executable and be 1000x slower than the newer Boost version.
The package libboost-stacktrace-dev did not exist at all on Ubuntu 16.04.
The rest of this section considers only the Ubuntu 22.04, boost 1.74 behaviour.
Compile:
sudo apt-get install libboost-stacktrace-dev
g++ -O0 -ggdb3 -Wall -Wextra -pedantic -std=c++11 -o boost_stacktrace.out main.cpp -lboost_stacktrace_backtrace
Sample output:
0# print_stacktrace() at boost_stacktrace/mystacktrace.h:7
1# my_func_2() at /home/ciro/main.cpp:7
2# my_func_1(int) at /home/ciro/main.cpp:17
3# main at /home/ciro/main.cpp:26
4# __libc_start_call_main at ../sysdeps/nptl/libc_start_call_main.h:58
5# __libc_start_main at ../csu/libc-start.c:379
6# _start in ./boost_stacktrace.out
Note that the lines are off by one line. It was suggested in the comments that this is because the following instruction address is being considered.
Boost stacktrace header only
What BOOST_STACKTRACE_LINK does is require -lboost_stacktrace_backtrace at link time, so we would imagine that without it, things will just work header-only. This would be a good option for devs who don't have the "Boost clearance", to add as a one-off for debugging.
TODO: unfortunately it didn't go so well for me:
#include <iostream>
#include <boost/stacktrace.hpp>
void print_stacktrace(void) {
std::cout << boost::stacktrace::stacktrace();
}
then:
g++ -O0 -ggdb3 -Wall -Wextra -pedantic -std=c++11 -o boost_stacktrace_header_only.out main.cpp
contains the overly short output:
0# 0x000055FF74AFB601 in ./boost_stacktrace_header_only.out
1# 0x000055FF74AFB66C in ./boost_stacktrace_header_only.out
2# 0x000055FF74AFB69C in ./boost_stacktrace_header_only.out
3# 0x000055FF74AFB6F7 in ./boost_stacktrace_header_only.out
4# 0x00007F0176E7BD90 in /lib/x86_64-linux-gnu/libc.so.6
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# 0x000055FF74AFB4E5 in ./boost_stacktrace_header_only.out
which we can't even use with addr2line. Maybe we have to pass some other define from: https://www.boost.org/doc/libs/1_80_0/doc/html/stacktrace/configuration_and_build.html ?
Tested on Ubuntu 22.04. boost 1.74.
Boost boost::stacktrace::safe_dump_to
This is an interesting alternative to boost::stacktrace::stacktrace as it writes the stack trace in a async signal safe manner to a file, which makes it a good option for automatically dumping stack traces on segfaults which is a super common use case: How to automatically generate a stacktrace when my program crashes
Documented at: https://www.boost.org/doc/libs/1_70_0/doc/html/boost/stacktrace/safe_dump_1_3_38_7_6_2_1_6.html
TODO get it to work. All I see each time is a bunch of random bytes. My attempt:
boost_stacktrace_safe/mystacktrace.h
#include <unistd.h>
#define BOOST_STACKTRACE_LINK
#include <boost/stacktrace.hpp>
void print_stacktrace(void) {
boost::stacktrace::safe_dump_to(0, 1024, STDOUT_FILENO);
}
Sample output:
1[FU1[FU"2[FU}2[FUm1#n10[FU
Changes drastically each time, suggesting it is random memory addresses.
Tested on Ubuntu 22.04, boost 1.74.0.
glibc backtrace
This method is quite portable as it comes with glibc itself. Documented at: https://www.gnu.org/software/libc/manual/html_node/Backtraces.html
Tested on Ubuntu 22.04, glibc 2.35.
glibc_backtrace_symbols_fd/mystacktrace.h
#include <execinfo.h> /* backtrace, backtrace_symbols_fd */
#include <unistd.h> /* STDOUT_FILENO */
void print_stacktrace(void) {
size_t size;
enum Constexpr { MAX_SIZE = 1024 };
void *array[MAX_SIZE];
size = backtrace(array, MAX_SIZE);
backtrace_symbols_fd(array, size, STDOUT_FILENO);
}
Compile with:
g++ -O0 -ggdb3 -Wall -Wextra -pedantic -rdynamic -std=c++11 -o glibc_backtrace_symbols_fd.out main.cpp
Sample output with -rdynamic:
./glibc_backtrace_symbols.out(_Z16print_stacktracev+0x47) [0x556e6a131230]
./glibc_backtrace_symbols.out(_Z9my_func_2v+0xd) [0x556e6a1312d6]
./glibc_backtrace_symbols.out(_Z9my_func_1i+0x14) [0x556e6a131306]
./glibc_backtrace_symbols.out(main+0x58) [0x556e6a131361]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f175e7bdd90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f175e7bde40]
./glibc_backtrace_symbols.out(_start+0x25) [0x556e6a131125]
Sample output without -rdynamic:
./glibc_backtrace_symbols_fd_no_rdynamic.out(+0x11f0)[0x556bd40461f0]
./glibc_backtrace_symbols_fd_no_rdynamic.out(+0x123c)[0x556bd404623c]
./glibc_backtrace_symbols_fd_no_rdynamic.out(+0x126c)[0x556bd404626c]
./glibc_backtrace_symbols_fd_no_rdynamic.out(+0x12c7)[0x556bd40462c7]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f0da2b70d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f0da2b70e40]
./glibc_backtrace_symbols_fd_no_rdynamic.out(+0x10e5)[0x556bd40460e5]
To get the line numbers without -rdynamic we can use addr2line:
addr2line -C -e glibc_backtrace_symbols_fd_no_rdynamic.out 0x11f0 0x123c 0x126c 0x12c7
addr2line unfortunately cannot handle the function name + offset-in-function format that we get when using -rdynamic, e.g. _Z9my_func_2v+0xd.
GDB can however:
gdb -nh -batch -ex 'info line *(_Z9my_func_2v+0xd)' -ex 'info line *(_Z9my_func_1i+0x14)' glibc_backtrace_symbols.out
Line 7 of "main.cpp" starts at address 0x12d6 <_Z9my_func_2v+13> and ends at 0x12d9 <_Z9my_func_1d>.
Line 17 of "main.cpp" starts at address 0x1306 <_Z9my_func_1i+20> and ends at 0x1309 <main(int, char**)>.
A helper to make it more bearable:
addr2lines() (
perl -ne '$m = s/(.*).*\(([^)]*)\).*/gdb -nh -q -batch -ex "info line *\2" \1/;print $_ if $m' | bash
)
Usage:
xsel -b | addr2lines
glibc backtrace_symbols
A version of backtrace_symbols_fd that returns a string rather than printing to a file handle.
glibc_backtrace_symbols/mystacktrace.h
#include <execinfo.h> /* backtrace, backtrace_symbols */
#include <stdio.h> /* printf */
void print_stacktrace(void) {
char **strings;
size_t i, size;
enum Constexpr { MAX_SIZE = 1024 };
void *array[MAX_SIZE];
size = backtrace(array, MAX_SIZE);
strings = backtrace_symbols(array, size);
for (i = 0; i < size; i++)
printf("%s\n", strings[i]);
free(strings);
}
glibc backtrace with C++ demangling hack 1: -export-dynamic + dladdr
I couldn't find a simple way to automatically demangle C++ symbols with glibc backtrace.
https://panthema.net/2008/0901-stacktrace-demangled/
https://gist.github.com/fmela/591333/c64f4eb86037bb237862a8283df70cdfc25f01d3
Adapted from: https://gist.github.com/fmela/591333/c64f4eb86037bb237862a8283df70cdfc25f01d3
This is a "hack" because it requires changing the ELF with -export-dynamic.
glibc_ldl.cpp
#include <dlfcn.h> // for dladdr
#include <cxxabi.h> // for __cxa_demangle
#include <cstdio>
#include <string>
#include <sstream>
#include <iostream>
// This function produces a stack backtrace with demangled function & method names.
std::string backtrace(int skip = 1)
{
void *callstack[128];
const int nMaxFrames = sizeof(callstack) / sizeof(callstack[0]);
char buf[1024];
int nFrames = backtrace(callstack, nMaxFrames);
char **symbols = backtrace_symbols(callstack, nFrames);
std::ostringstream trace_buf;
for (int i = skip; i < nFrames; i++) {
Dl_info info;
if (dladdr(callstack[i], &info)) {
char *demangled = NULL;
int status;
demangled = abi::__cxa_demangle(info.dli_sname, NULL, 0, &status);
std::snprintf(
buf,
sizeof(buf),
"%-3d %*p %s + %zd\n",
i,
(int)(2 + sizeof(void*) * 2),
callstack[i],
status == 0 ? demangled : info.dli_sname,
(char *)callstack[i] - (char *)info.dli_saddr
);
free(demangled);
} else {
std::snprintf(buf, sizeof(buf), "%-3d %*p\n",
i, (int)(2 + sizeof(void*) * 2), callstack[i]);
}
trace_buf << buf;
std::snprintf(buf, sizeof(buf), "%s\n", symbols[i]);
trace_buf << buf;
}
free(symbols);
if (nFrames == nMaxFrames)
trace_buf << "[truncated]\n";
return trace_buf.str();
}
void my_func_2(void) {
std::cout << backtrace() << std::endl;
}
void my_func_1(double f) {
(void)f;
my_func_2();
}
void my_func_1(int i) {
(void)i;
my_func_2();
}
int main() {
my_func_1(1);
my_func_1(2.0);
}
Compile and run:
g++ -fno-pie -ggdb3 -O0 -no-pie -o glibc_ldl.out -std=c++11 -Wall -Wextra \
-pedantic-errors -fpic glibc_ldl.cpp -export-dynamic -ldl
./glibc_ldl.out
output:
1 0x40130a my_func_2() + 41
./glibc_ldl.out(_Z9my_func_2v+0x29) [0x40130a]
2 0x40139e my_func_1(int) + 16
./glibc_ldl.out(_Z9my_func_1i+0x10) [0x40139e]
3 0x4013b3 main + 18
./glibc_ldl.out(main+0x12) [0x4013b3]
4 0x7f7594552b97 __libc_start_main + 231
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f7594552b97]
5 0x400f3a _start + 42
./glibc_ldl.out(_start+0x2a) [0x400f3a]
1 0x40130a my_func_2() + 41
./glibc_ldl.out(_Z9my_func_2v+0x29) [0x40130a]
2 0x40138b my_func_1(double) + 18
./glibc_ldl.out(_Z9my_func_1d+0x12) [0x40138b]
3 0x4013c8 main + 39
./glibc_ldl.out(main+0x27) [0x4013c8]
4 0x7f7594552b97 __libc_start_main + 231
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f7594552b97]
5 0x400f3a _start + 42
./glibc_ldl.out(_start+0x2a) [0x400f3a]
Tested on Ubuntu 18.04.
glibc backtrace with C++ demangling hack 2: parse backtrace output
Shown at: https://panthema.net/2008/0901-stacktrace-demangled/
This is a hack because it requires parsing.
TODO get it to compile and show it here.
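In the meantime, here is my own rough sketch of the same idea (not the panthema.net code): compile with -rdynamic so that the mangled names show up in the backtrace_symbols() output, then demangle the part between '(' and '+'.

#include <execinfo.h>
#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

void print_stacktrace_demangled(void) {
    void *addrs[64];
    int n = backtrace(addrs, 64);
    char **syms = backtrace_symbols(addrs, n);
    for (int i = 0; i < n; i++) {
        // each entry looks like "./prog(_Z3foov+0x1d) [0x400d2a]"
        char *begin = std::strchr(syms[i], '(');
        char *plus = begin ? std::strchr(begin, '+') : nullptr;
        if (begin && plus && plus > begin + 1) {
            *plus = '\0'; // temporarily terminate the mangled name
            int status = 0;
            char *demangled = abi::__cxa_demangle(begin + 1, nullptr, nullptr, &status);
            std::printf("%s\n", status == 0 ? demangled : begin + 1);
            std::free(demangled);
            *plus = '+';  // restore the original string
        } else {
            std::printf("%s\n", syms[i]);
        }
    }
    std::free(syms);
}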
GDB scripting
We can also do this with GDB without recompiling by using: How to do an specific action when a certain breakpoint is hit in GDB?
We setup an empty backtrace function for our testing:
gdb/mystacktrace.h
void print_stacktrace(void) {}
and then with:
main.gdb
start
break print_stacktrace
commands
silent
backtrace
printf "\n"
continue
end
continue
we can run:
gdb -nh -batch -x main.gdb --args gdb.out
Sample output:
Temporary breakpoint 1 at 0x11a7: file main.cpp, line 21.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Temporary breakpoint 1, main (argc=1, argv=0x7fffffffc3e8) at main.cpp:21
warning: Source file is more recent than executable.
21 if (argc > 1) {
Breakpoint 2 at 0x555555555151: file gdb/mystacktrace.h, line 1.
#0 print_stacktrace () at gdb/mystacktrace.h:1
#1 0x0000555555555161 in my_func_2 () at main.cpp:6
#2 0x0000555555555191 in my_func_1 (i=1) at main.cpp:16
#3 0x00005555555551ec in main (argc=1, argv=0x7fffffffc3e8) at main.cpp:27
[Inferior 1 (process 165453) exited normally]
The above can be made more usable with the following Bash function:
gdbbt() (
tmpfile=$(mktemp /tmp/gdbbt.XXXXXX)
fn="$1"
shift
printf '%s' "
start
break $fn
commands
silent
backtrace
printf \"\n\"
continue
end
continue
" > "$tmpfile"
gdb -nh -batch -x "$tmpfile" -args "$@"
rm -f "$tmpfile"
)
Usage:
gdbbt print_stacktrace gdb.out 2
I don't know how to make commands with -ex without the temporary file: Problems adding a breakpoint with commands from command line with ex command
Tested in Ubuntu 22.04, GDB 12.0.90.
GDB code injection
TODO this is the dream! It might allow for compiled-like speeds, but without the need to recompile! Either:
with compile code + one of the other options, ideally C++23 <stacktrace>: How to call assembly in gdb? Might already be possible. But compile code is mega-quirky so I'm lazy to even try
a built-in dbt command analogous to dprintf dynamic printf: How to do an specific action when a certain breakpoint is hit in GDB?
libunwind
TODO does this have any advantage over glibc backtrace? Very similar output, also requires modifying the build command, but not part of glibc so requires an extra package installation.
Code adapted from: https://eli.thegreenplace.net/2015/programmatic-access-to-the-call-stack-in-c/
main.c
/* This must be on top. */
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
/* Paste this on the file you want to debug. */
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>
void print_trace() {
char sym[256];
unw_context_t context;
unw_cursor_t cursor;
unw_getcontext(&context);
unw_init_local(&cursor, &context);
while (unw_step(&cursor) > 0) {
unw_word_t offset, pc;
unw_get_reg(&cursor, UNW_REG_IP, &pc);
if (pc == 0) {
break;
}
printf("0x%lx:", pc);
if (unw_get_proc_name(&cursor, sym, sizeof(sym), &offset) == 0) {
printf(" (%s+0x%lx)\n", sym, offset);
} else {
printf(" -- error: unable to obtain symbol name for this frame\n");
}
}
puts("");
}
void my_func_3(void) {
print_trace();
}
void my_func_2(void) {
my_func_3();
}
void my_func_1(void) {
my_func_3();
}
int main(void) {
my_func_1(); /* line 46 */
my_func_2(); /* line 47 */
return 0;
}
Compile and run:
sudo apt-get install libunwind-dev
gcc -fno-pie -ggdb3 -O3 -no-pie -o main.out -std=c99 \
-Wall -Wextra -pedantic-errors main.c -lunwind
Either #define _XOPEN_SOURCE 700 must be on top, or we must use -std=gnu99:
Is the type `stack_t` no longer defined on linux?
Glibc - error in ucontext.h, but only with -std=c11
Run:
./main.out
Output:
0x4007db: (main+0xb)
0x7f4ff50aa830: (__libc_start_main+0xf0)
0x400819: (_start+0x29)
0x4007e2: (main+0x12)
0x7f4ff50aa830: (__libc_start_main+0xf0)
0x400819: (_start+0x29)
and:
addr2line -e main.out 0x4007db 0x4007e2
gives:
/home/ciro/main.c:34
/home/ciro/main.c:49
With -O0:
0x4009cf: (my_func_3+0xe)
0x4009e7: (my_func_1+0x9)
0x4009f3: (main+0x9)
0x7f7b84ad7830: (__libc_start_main+0xf0)
0x4007d9: (_start+0x29)
0x4009cf: (my_func_3+0xe)
0x4009db: (my_func_2+0x9)
0x4009f8: (main+0xe)
0x7f7b84ad7830: (__libc_start_main+0xf0)
0x4007d9: (_start+0x29)
and:
addr2line -e main.out 0x4009f3 0x4009f8
gives:
/home/ciro/main.c:47
/home/ciro/main.c:48
Tested on Ubuntu 16.04, GCC 6.4.0, libunwind 1.1.
libunwind with C++ name demangling
Code adapted from: https://eli.thegreenplace.net/2015/programmatic-access-to-the-call-stack-in-c/
unwind.cpp
#define UNW_LOCAL_ONLY
#include <cxxabi.h>
#include <libunwind.h>
#include <cstdio>
#include <cstdlib>
#include <iostream>
void backtrace() {
unw_cursor_t cursor;
unw_context_t context;
// Initialize cursor to current frame for local unwinding.
unw_getcontext(&context);
unw_init_local(&cursor, &context);
// Unwind frames one by one, going up the frame stack.
while (unw_step(&cursor) > 0) {
unw_word_t offset, pc;
unw_get_reg(&cursor, UNW_REG_IP, &pc);
if (pc == 0) {
break;
}
std::printf("0x%lx:", pc);
char sym[256];
if (unw_get_proc_name(&cursor, sym, sizeof(sym), &offset) == 0) {
char* nameptr = sym;
int status;
char* demangled = abi::__cxa_demangle(sym, nullptr, nullptr, &status);
if (status == 0) {
nameptr = demangled;
}
std::printf(" (%s+0x%lx)\n", nameptr, offset);
std::free(demangled);
} else {
std::printf(" -- error: unable to obtain symbol name for this frame\n");
}
}
}
void my_func_2(void) {
backtrace();
std::cout << std::endl; // line 43
}
void my_func_1(double f) {
(void)f;
my_func_2();
}
void my_func_1(int i) {
(void)i;
my_func_2();
} // line 54
int main() {
my_func_1(1);
my_func_1(2.0);
}
Compile and run:
sudo apt-get install libunwind-dev
g++ -fno-pie -ggdb3 -O0 -no-pie -o unwind.out -std=c++11 \
-Wall -Wextra -pedantic-errors unwind.cpp -lunwind -pthread
./unwind.out
Output:
0x400c80: (my_func_2()+0x9)
0x400cb7: (my_func_1(int)+0x10)
0x400ccc: (main+0x12)
0x7f4c68926b97: (__libc_start_main+0xe7)
0x400a3a: (_start+0x2a)
0x400c80: (my_func_2()+0x9)
0x400ca4: (my_func_1(double)+0x12)
0x400ce1: (main+0x27)
0x7f4c68926b97: (__libc_start_main+0xe7)
0x400a3a: (_start+0x2a)
and then we can find the lines of my_func_2 and my_func_1(int) with:
addr2line -e unwind.out 0x400c80 0x400cb7
which gives:
/home/ciro/test/unwind.cpp:43
/home/ciro/test/unwind.cpp:54
TODO: why are the lines off by one?
Tested on Ubuntu 18.04, GCC 7.4.0, libunwind 1.2.1.
Linux kernel
How to print the current thread stack trace inside the Linux kernel?
libdwfl
This was originally mentioned at: https://stackoverflow.com/a/60713161/895245 and it might be the best method, but I still have to benchmark it a bit more; please go upvote that answer.
TODO: I tried to minimize the code in that answer, which was working, to a single function, but it is segfaulting; let me know if anyone can find out why.
dwfl.cpp: answer reached 30k chars and this was the easiest cut: https://gist.github.com/cirosantilli/f1dd3ee5d324b9d24e40f855723544ac
Compile and run:
sudo apt install libdw-dev libunwind-dev
g++ -fno-pie -ggdb3 -O0 -no-pie -o dwfl.out -std=c++11 -Wall -Wextra -pedantic-errors dwfl.cpp -ldw -lunwind
./dwfl.out
We also need libunwind, as that makes the results more correct. Without it the code still runs, but you will see that some of the lines are a bit wrong.
Output:
0: 0x402b72 stacktrace[abi:cxx11]() at /home/ciro/test/dwfl.cpp:65
1: 0x402cda my_func_2() at /home/ciro/test/dwfl.cpp:100
2: 0x402d76 my_func_1(int) at /home/ciro/test/dwfl.cpp:111
3: 0x402dd1 main at /home/ciro/test/dwfl.cpp:122
4: 0x7ff227ea0d8f __libc_start_call_main at ../sysdeps/nptl/libc_start_call_main.h:58
5: 0x7ff227ea0e3f __libc_start_main@@GLIBC_2.34 at ../csu/libc-start.c:392
6: 0x402534 _start at ../csu/libc-start.c:-1
0: 0x402b72 stacktrace[abi:cxx11]() at /home/ciro/test/dwfl.cpp:65
1: 0x402cda my_func_2() at /home/ciro/test/dwfl.cpp:100
2: 0x402d5f my_func_1(double) at /home/ciro/test/dwfl.cpp:106
3: 0x402de2 main at /home/ciro/test/dwfl.cpp:123
4: 0x7ff227ea0d8f __libc_start_call_main at ../sysdeps/nptl/libc_start_call_main.h:58
5: 0x7ff227ea0e3f __libc_start_main@@GLIBC_2.34 at ../csu/libc-start.c:392
6: 0x402534 _start at ../csu/libc-start.c:-1
Benchmark run:
g++ -fno-pie -ggdb3 -O3 -no-pie -o dwfl.out -std=c++11 -Wall -Wextra -pedantic-errors dwfl.cpp -ldw
time ./dwfl.out 1000 >/dev/null
Output:
real 0m3.751s
user 0m2.822s
sys 0m0.928s
So we see that this method is 10x faster than Boost's stacktrace, and might therefore be applicable to more use cases.
Tested on Ubuntu 22.04 amd64, libdw-dev 0.186, libunwind 1.3.2.
libbacktrace
https://github.com/ianlancetaylor/libbacktrace
Considering the hardcore library author, it is worth trying this out; maybe it is The One. TODO: check it out.
A C library that may be linked into a C/C++ program to produce symbolic backtraces
As of October 2020, libbacktrace supports ELF, PE/COFF, Mach-O, and XCOFF executables with DWARF debugging information. In other words, it supports GNU/Linux, *BSD, macOS, Windows, and AIX. The library is written to make it straightforward to add support for other object file and debugging formats.
The library relies on the C++ unwind API defined at https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html. This API is provided by GCC and clang.
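A minimal usage sketch, based only on the entry points declared in libbacktrace's backtrace.h (backtrace_create_state, backtrace_full and the two callback types); treat it as a starting point, not tested code from this thread:
#include <backtrace.h>
#include <cstdint>
#include <cstdio>

// Called once per frame with the resolved program counter, file, line and function.
static int full_callback(void* /*data*/, uintptr_t pc,
                         const char* filename, int lineno,
                         const char* function) {
    std::printf("0x%lx %s at %s:%d\n",
                static_cast<unsigned long>(pc),
                function ? function : "??",
                filename ? filename : "??",
                lineno);
    return 0; // 0 means: keep walking the remaining frames
}

static void error_callback(void* /*data*/, const char* msg, int errnum) {
    std::fprintf(stderr, "libbacktrace error %d: %s\n", errnum, msg);
}

void print_stacktrace() {
    // Passing nullptr as the filename lets libbacktrace locate the current
    // executable itself; the state can be created once and reused.
    static backtrace_state* state =
        backtrace_create_state(nullptr, /*threaded=*/1, error_callback, nullptr);
    backtrace_full(state, /*skip=*/0, full_callback, error_callback, nullptr);
}

int main() {
    print_stacktrace();
}
Compile with -g so the DWARF information is available, and link with -lbacktrace.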
See also
How can one grab a stack trace in C?
How to make backtrace()/backtrace_symbols() print the function names?
Is there a portable/standard-compliant way to get filenames and linenumbers in a stack trace?
Best way to invoke gdb from inside program to print its stacktrace?
automatic stack trace on failure:
on C++ exception: C++ display stack trace on exception
generic: How to automatically generate a stacktrace when my program crashes
For a Linux-only solution you can use backtrace(3), which simply returns an array of void * (in fact, each of these points to the return address from the corresponding stack frame). To translate these into something useful, there's backtrace_symbols(3).
Pay attention to the notes section in backtrace(3):
The symbol names may be unavailable without the use of special linker options.
For systems using the GNU linker, it is necessary to use the -rdynamic linker option.
Note that names of "static" functions are not exposed, and won't be available in the backtrace.
C++23 adds <stacktrace>, and then you can do:
#include <stacktrace>
/* ... */
std::cout << std::stacktrace::current();
Further details:
  • https://en.cppreference.com/w/cpp/header/stacktrace
  • https://en.cppreference.com/w/cpp/utility/basic_stacktrace/operator_ltlt
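For a complete minimal program (assuming a compiler and standard library that already ship <stacktrace>; with GCC you may also need to link an extra support library such as -lstdc++exp, depending on the version):
#include <iostream>
#include <stacktrace>

void inner() {
    // Prints the current call stack: inner(), outer(), main(), plus runtime frames.
    std::cout << std::stacktrace::current() << '\n';
}

void outer() { inner(); }

int main() {
    outer();
}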
Is there any way to dump the call stack in a running process in C or C++ every time a certain function is called?
You can use a macro function instead of the return statement in the functions you care about. For example, instead of using return:
int foo(...)
{
if (error happened)
return -1;
... do something ...
return 0;
}
You can use a macro function.
#include "c-callstack.h"
int foo(...)
{
if (error happened)
NL_RETURN(-1);
... do something ...
NL_RETURN(0);
}
Whenever an error happens in a function, you will see a Java-style call stack like the one shown below.
Error(code:-1) at : so_topless_ranking_server (sample.c:23)
Error(code:-1) at : nanolat_database (sample.c:31)
Error(code:-1) at : nanolat_message_queue (sample.c:39)
Error(code:-1) at : main (sample.c:47)
Full source code is available here: c-callstack at https://github.com/Nanolat
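For illustration only, a hypothetical macro along these lines (not the actual c-callstack implementation) could expand to something like the sketch below; when every caller also checks the return code and uses the macro, each level prints its own line and you get the chain shown above:
#include <cstdio>

// Hypothetical sketch of the idea behind NL_RETURN, not the real implementation;
// a real version would presumably only log non-zero (error) codes.
#define NL_RETURN(code)                                               \
    do {                                                              \
        std::fprintf(stderr, "Error(code:%d) at : %s (%s:%d)\n",      \
                     (int)(code), __func__, __FILE__, __LINE__);      \
        return (code);                                                \
    } while (0)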
Linux specific, TLDR:
backtrace in glibc produces accurate stacktraces only when -lunwind is linked (undocumented platform-specific feature).
To output the function name, source file and line number, use #include <elfutils/libdwfl.h> (this library is documented only in its header file). backtrace_symbols and backtrace_symbols_fd are the least informative.
On modern Linux you can get the stacktrace addresses using the function backtrace. The undocumented way to make backtrace produce more accurate addresses on popular platforms is to link with -lunwind (libunwind-dev on Ubuntu 18.04); see the example output below. backtrace uses the function _Unwind_Backtrace, which by default comes from libgcc_s.so.1; that implementation is the most portable. When -lunwind is linked it provides a more accurate version of _Unwind_Backtrace, but this library is less portable (see the supported architectures in libunwind/src).
Unfortunately, the companion backtrace_symbols and backtrace_symbols_fd functions have not been able to resolve the stacktrace addresses to function names with source file name and line number for probably a decade now (see the example output below).
However, there is another method to resolve addresses to symbols, and it produces the most useful traces with function name, source file and line number: #include <elfutils/libdwfl.h> and link with -ldw (libdw-dev on Ubuntu 18.04).
Working C++ example (test.cc):
#include <stdexcept>
#include <iostream>
#include <cassert>
#include <cstdlib>
#include <string>
#include <boost/core/demangle.hpp>
#include <execinfo.h>
#include <elfutils/libdwfl.h>
struct DebugInfoSession {
Dwfl_Callbacks callbacks = {};
char* debuginfo_path = nullptr;
Dwfl* dwfl = nullptr;
DebugInfoSession() {
callbacks.find_elf = dwfl_linux_proc_find_elf;
callbacks.find_debuginfo = dwfl_standard_find_debuginfo;
callbacks.debuginfo_path = &debuginfo_path;
dwfl = dwfl_begin(&callbacks);
assert(dwfl);
int r;
r = dwfl_linux_proc_report(dwfl, getpid());
assert(!r);
r = dwfl_report_end(dwfl, nullptr, nullptr);
assert(!r);
static_cast<void>(r);
}
~DebugInfoSession() {
dwfl_end(dwfl);
}
DebugInfoSession(DebugInfoSession const&) = delete;
DebugInfoSession& operator=(DebugInfoSession const&) = delete;
};
struct DebugInfo {
void* ip;
std::string function;
char const* file;
int line;
DebugInfo(DebugInfoSession const& dis, void* ip)
: ip(ip)
, file()
, line(-1)
{
// Get function name.
uintptr_t ip2 = reinterpret_cast<uintptr_t>(ip);
Dwfl_Module* module = dwfl_addrmodule(dis.dwfl, ip2);
char const* name = dwfl_module_addrname(module, ip2);
function = name ? boost::core::demangle(name) : "<unknown>";
// Get source filename and line number.
if(Dwfl_Line* dwfl_line = dwfl_module_getsrc(module, ip2)) {
Dwarf_Addr addr;
file = dwfl_lineinfo(dwfl_line, &addr, &line, nullptr, nullptr, nullptr);
}
}
};
std::ostream& operator<<(std::ostream& s, DebugInfo const& di) {
s << di.ip << ' ' << di.function;
if(di.file)
s << " at " << di.file << ':' << di.line;
return s;
}
void terminate_with_stacktrace() {
void* stack[512];
int stack_size = ::backtrace(stack, sizeof stack / sizeof *stack);
// Print the exception info, if any.
if(auto ex = std::current_exception()) {
try {
std::rethrow_exception(ex);
}
catch(std::exception& e) {
std::cerr << "Fatal exception " << boost::core::demangle(typeid(e).name()) << ": " << e.what() << ".\n";
}
catch(...) {
std::cerr << "Fatal unknown exception.\n";
}
}
DebugInfoSession dis;
std::cerr << "Stacktrace of " << stack_size << " frames:\n";
for(int i = 0; i < stack_size; ++i) {
std::cerr << i << ": " << DebugInfo(dis, stack[i]) << '\n';
}
std::cerr.flush();
std::_Exit(EXIT_FAILURE);
}
int main() {
std::set_terminate(terminate_with_stacktrace);
throw std::runtime_error("test exception");
}
Compiled on Ubuntu 18.04.4 LTS with gcc-8.3:
g++ -o test.o -c -m{arch,tune}=native -std=gnu++17 -W{all,extra,error} -g -Og -fstack-protector-all test.cc
g++ -o test -g test.o -ldw -lunwind
Outputs:
Fatal exception std::runtime_error: test exception.
Stacktrace of 7 frames:
0: 0x55f3837c1a8c terminate_with_stacktrace() at /home/max/src/test/test.cc:76
1: 0x7fbc1c845ae5 <unknown>
2: 0x7fbc1c845b20 std::terminate()
3: 0x7fbc1c845d53 __cxa_throw
4: 0x55f3837c1a43 main at /home/max/src/test/test.cc:103
5: 0x7fbc1c3e3b96 __libc_start_main at ../csu/libc-start.c:310
6: 0x55f3837c17e9 _start
When no -lunwind is linked, it produces a less accurate stacktrace:
0: 0x5591dd9d1a4d terminate_with_stacktrace() at /home/max/src/test/test.cc:76
1: 0x7f3c18ad6ae6 <unknown>
2: 0x7f3c18ad6b21 <unknown>
3: 0x7f3c18ad6d54 <unknown>
4: 0x5591dd9d1a04 main at /home/max/src/test/test.cc:103
5: 0x7f3c1845cb97 __libc_start_main at ../csu/libc-start.c:344
6: 0x5591dd9d17aa _start
For comparison, backtrace_symbols_fd output for the same stacktrace is least informative:
/home/max/src/test/debug/gcc/test(+0x192f)[0x5601c5a2092f]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x92ae5)[0x7f95184f5ae5]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt9terminatev+0x10)[0x7f95184f5b20]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(__cxa_throw+0x43)[0x7f95184f5d53]
/home/max/src/test/debug/gcc/test(+0x1ae7)[0x5601c5a20ae7]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe6)[0x7f9518093b96]
/home/max/src/test/debug/gcc/test(+0x1849)[0x5601c5a20849]
In a production version (as well as in a C-language version) you may want to make this code extra robust by replacing boost::core::demangle, std::string and std::cout with their underlying calls.
You can also override __cxa_throw to capture the stacktrace when an exception is thrown and print it when the exception is caught. By the time execution enters the catch block the stack has already been unwound, so it is too late to call backtrace; that is why the stack must be captured at the throw site, which is implemented by the function __cxa_throw. Note that in a multi-threaded program __cxa_throw can be called simultaneously by multiple threads, so if it captures the stacktrace into a global array, that array must be thread_local.
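As a rough sketch of that idea (the storage names here are illustrative; the wrapper signature comes from the Itanium C++ ABI, and the original implementation is looked up with dlsym(RTLD_NEXT, ...), which on older glibc additionally requires linking -ldl):
#include <dlfcn.h>     // dlsym, RTLD_NEXT (g++ predefines _GNU_SOURCE)
#include <execinfo.h>  // backtrace
#include <typeinfo>

// Throw-site stacktrace, one per thread (names are illustrative).
thread_local void* g_throw_stack[64];
thread_local int g_throw_stack_size = 0;

extern "C" void __cxa_throw(void* thrown_exception,
                            std::type_info* tinfo,
                            void (*dest)(void*)) {
    // Capture the stacktrace here, before the stack is unwound.
    g_throw_stack_size = ::backtrace(g_throw_stack, 64);

    // Forward to the real __cxa_throw from the C++ runtime.
    using cxa_throw_t = void (*)(void*, std::type_info*, void (*)(void*));
    static cxa_throw_t real_cxa_throw =
        reinterpret_cast<cxa_throw_t>(dlsym(RTLD_NEXT, "__cxa_throw"));
    real_cxa_throw(thrown_exception, tinfo, dest);
    __builtin_unreachable(); // the real __cxa_throw never returns
}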
You can also make the stack trace printing function async-signal-safe, so that you can invoke it directly from your SIGSEGV and SIGBUS signal handlers (which should use their own stacks for robustness). Obtaining the function name, source file and line number using libdwfl from a signal handler may fail because it is not async-signal-safe, or because the address space of the process has been substantially corrupted, but in practice it succeeds 99% of the time (I haven't seen it fail).
To summarize, a complete production-ready library for automatic stacktrace output should:
Capture the stacktrace on throw into thread-specific storage.
Automatically print the stacktrace on unhandled exceptions.
Print the stacktrace in async-signal-safe manner.
Provide a robust signal handler function which uses its own stack and prints the stacktrace in an async-signal-safe manner. The user can install this function as a signal handler for SIGSEGV, SIGBUS, SIGFPE, etc.
The signal handler may as well print the values of all CPU registers at the point of the fault from the ucontext_t signal handler argument (possibly excluding vector registers), à la Linux kernel oops log messages.
Another answer to an old thread.
When I need to do this, I usually just use system() and pstack.
So something like this:
#include <sys/types.h>
#include <unistd.h>
#include <string>
#include <sstream>
#include <cstdlib>
void f()
{
pid_t myPid = getpid();
std::string pstackCommand = "pstack ";
std::stringstream ss;
ss << myPid;
pstackCommand += ss.str();
system(pstackCommand.c_str());
}
void g()
{
f();
}
void h()
{
g();
}
int main()
{
h();
}
This outputs
#0 0x00002aaaab62d61e in waitpid () from /lib64/libc.so.6
#1 0x00002aaaab5bf609 in do_system () from /lib64/libc.so.6
#2 0x0000000000400c3c in f() ()
#3 0x0000000000400cc5 in g() ()
#4 0x0000000000400cd1 in h() ()
#5 0x0000000000400cdd in main ()
This should work on Linux, FreeBSD and Solaris. I don't think that macOS has pstack or a simple equivalent, but this thread seems to have an alternative.
If you are using C, then you will need to use C string functions.
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
void f()
{
pid_t myPid = getpid();
/*
 * Length of the command: 7 for "pstack ", up to 7 digits for the PID, 1 for the NUL.
 */
char pstackCommand[7+7+1];
snprintf(pstackCommand, sizeof pstackCommand, "pstack %d", (int)myPid);
system(pstackCommand);
}
I've used 7 for the max number of digits in the PID, based on this post.
There is no standardized way to do that. For Windows, the functionality is provided by the DbgHelp library.
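A hedged sketch of what that looks like for the current thread with DbgHelp under MSVC (print_stacktrace is my name; the calls used are CaptureStackBackTrace, SymInitialize and SymFromAddr):
#include <windows.h>
#include <dbghelp.h>
#include <cstdio>

#pragma comment(lib, "dbghelp.lib")

void print_stacktrace() {
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, nullptr, TRUE);   // load symbols for all loaded modules

    void* frames[62];
    USHORT count = CaptureStackBackTrace(0, 62, frames, nullptr);

    // SYMBOL_INFO carries a flexible name buffer appended after the struct.
    alignas(SYMBOL_INFO) char buffer[sizeof(SYMBOL_INFO) + 256];
    SYMBOL_INFO* symbol = reinterpret_cast<SYMBOL_INFO*>(buffer);
    symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
    symbol->MaxNameLen = 255;

    for (USHORT i = 0; i < count; ++i) {
        DWORD64 displacement = 0;
        if (SymFromAddr(process, reinterpret_cast<DWORD64>(frames[i]),
                        &displacement, symbol)) {
            std::printf("%2u: %s + 0x%llx\n", i, symbol->Name,
                        static_cast<unsigned long long>(displacement));
        } else {
            std::printf("%2u: %p\n", i, frames[i]);
        }
    }
    SymCleanup(process);
}

int main() {
    print_stacktrace();
}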
You can use the Boost libraries to print the current callstack.
#include <boost/stacktrace.hpp>
// ... somewhere inside the `bar(int)` function that is called recursively:
std::cout << boost::stacktrace::stacktrace();
Documentation here: https://www.boost.org/doc/libs/1_65_1/doc/html/stacktrace.html
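A minimal complete program, assuming Boost is installed; depending on your Boost version and platform you may need to compile with -g and link -ldl, or link one of the boost_stacktrace_* libraries (see the configuration section of the documentation linked above):
#include <boost/stacktrace.hpp>
#include <iostream>

void bar(int depth) {
    if (depth == 0) {
        // Print the call stack at the deepest recursion level.
        std::cout << boost::stacktrace::stacktrace();
        return;
    }
    bar(depth - 1);
}

int main() {
    bar(3);
}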
I know this thread is old, but I think it can be useful for other people. If you are using gcc, you can use its instrumentation features (the -finstrument-functions option) to log every function call (entry and exit). Have a look at this for more information: http://hacktalks.blogspot.fr/2013/08/gcc-instrument-functions.html
You can thus, for instance, push and pop every call onto a stack, and when you want to print the trace, you just look at what is currently on that stack; a sketch of the hooks is shown below.
I've tested it; it works perfectly and is very handy.
UPDATE: you can also find information about the -finstrument-functions compile option in the GCC documentation on instrumentation options: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html
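A minimal sketch of the hooks that GCC calls when you compile with -finstrument-functions (__cyg_profile_func_enter and __cyg_profile_func_exit are the names GCC documents; here they just print raw addresses, which you can resolve later with addr2line):
#include <cstdio>

extern "C" {

// Keep the hooks themselves uninstrumented to avoid infinite recursion.
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void* this_fn, void* call_site) {
    // A real tracer would push this_fn onto a thread-local stack instead.
    std::fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void* this_fn, void* call_site) {
    std::fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}

} // extern "C"
Compile the code you want to trace with g++ -finstrument-functions -g and link these hooks in.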
You can implement the functionality yourself:
Use a global (string) stack; at the start of each function push the function name and other values (e.g. parameters) onto this stack, and pop it again at function exit.
Write a function that prints out the stack contents when it is called, and use it wherever you want to see the call stack.
This may sound like a lot of work, but it is quite useful; a minimal sketch follows below.
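A minimal sketch of that approach, assuming one tracer per scope (the names ScopeTracer, TRACE_SCOPE and print_call_stack are mine):
#include <cstdio>
#include <vector>

// One call stack per thread; stores pointers to static function-name strings.
thread_local std::vector<const char*> g_call_stack;

struct ScopeTracer {
    explicit ScopeTracer(const char* name) { g_call_stack.push_back(name); }
    ~ScopeTracer() { g_call_stack.pop_back(); }
};

// Put this at the top of every function you want to appear in the trace.
#define TRACE_SCOPE() ScopeTracer scope_tracer_(__func__)

void print_call_stack() {
    // Most recent call first, like a conventional stacktrace.
    for (auto it = g_call_stack.rbegin(); it != g_call_stack.rend(); ++it)
        std::printf("  %s\n", *it);
}

void inner() { TRACE_SCOPE(); print_call_stack(); }
void outer() { TRACE_SCOPE(); inner(); }

int main() {
    TRACE_SCOPE();
    outer();
}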
Of course, the next question is: will this be enough?
The main disadvantage of stack traces is that while you get the precise function being called, you do not get anything else, such as the values of its arguments, which are very useful for debugging.
If you have access to gcc and gdb, I would suggest using assert to check for a specific condition, and produce a memory dump if it is not met. Of course this means the process will stop, but you'll have a full-fledged report instead of a mere stack trace.
If you want a less obtrusive approach, you can always use logging. There are very efficient logging facilities out there, such as Pantheios, which once again could give you a much more accurate picture of what is going on.
You can use Poppy for this. It is normally used to gather the stack trace during a crash, but it can also output it for a running program.
Now here's the good part: it can output the actual parameter values for each function on the stack, and even local variables, loop counters, etc.
You can use the GNU profiler, gprof. It shows the call graph as well. You need to compile and link your code with the -pg option.
Is there any way to dump the call stack in a running process in C or C++ every time a certain function is called?
No there is not, although platform-dependent solutions might exist.

Need Help Tracking Down EXC_BAD_ACCESS on Function Entry on MacOS

I have a program that gets a KERN_PROTECTION_FAILURE with EXC_BAD_ACCESS in a very strange place when running multithreaded and I haven't the faintest idea how to troubleshoot it further. This is on MacOS 10.6 using GCC.
The very strange place where it gets this is when entering a function: not on the first line of the function, but on the actual jump to the function GetMachineFactors():
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0xb00009ec
[Switching to process 28242]
0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
168 MachineFactors* GetMachineFactors()
(gdb) bt
#0 0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
#1 0x000156d0 in CollectMachineFactorsThreadProc (parameter=0x200280) at Threads.cpp:341
#2 0x952f681d in _pthread_start ()
#3 0x952f66a2 in thread_start ()
(gdb)
If I run this non-threaded, it runs great, no issues:
#include "MachineFactors.h"
int main( int argc, char** argv )
{
MachineFactors* factors = GetMachineFactors();
std::string str = CreateJSONObject(factors);
cout << str;
delete factors;
return 0;
}
If I run this in a pthread, I get the EXC_BAD_ACCESS above.
THREAD_FUNCTION CollectMachineFactorsThreadProc( LPVOID parameter )
{
Main* client = (Main*) parameter;
if ( parameter == NULL )
{
ERRORLOG( "No data passed to machine identification thread. Aborting." );
return 0;
}
MachineFactors* mfactors = GetMachineFactors(); // This is where it dies.
// If I don't call GetMachineFactors and do something like mfactors =
// new MachineFactors(); everything is good and the threads communicate and exit
// normally.
if (mfactors == NULL)
{
ERRORLOG("Failed to collect machine identification: GetMachineFactors returned NULL." << endl)
return 0;
}
client->machineFactors = CreateJSONObject(mfactors);
delete mfactors;
EVENT_RAISE(client->machineFactorsEvent);
return 0;
}
Here is an excerpt from the GetMachineFactors() code:
MachineFactors* GetMachineFactors() // Dies on this line in multi-threaded.
{
// printf( "Getting machine factors.\n"); // Tried with and without this, never prints.
factors = new MachineFactors();
factors->OSName = "MacOS";
factors->Manufacturer = "Apple";
///…
// gather various machine metrics here.
//…
return factors;
}
For reference, I am using a socketpair to wait on the thread to complete:
// From the header file I use for cross-platform defines (this runs on OSX, Windows, and Linux).
struct _waitt
{
int fds[2];
};
#define THREAD_FUNCTION void*
#define THREAD_REFERENCE pthread_t
#define MUTEX_REFERENCE pthread_mutex_t*
#define MUTEX_LOCK(m) pthread_mutex_lock(m)
#define MUTEX_UNLOCK pthread_mutex_unlock
#define EVENT_REFERENCE struct _waitt
#define EVENT_WAIT(m) do { char lc; if (read(m.fds[0], &lc, 1)) {} } while (0)
#define EVENT_RAISE(m) do { char lc = 'j'; if (write(m.fds[1], &lc, 1)) {} } while (0)
#define EVENT_NULL(m) do { m.fds[0] = -1; m.fds[1] = -1; } while (0)
Here is the code where I launch the thread.
void Main::CollectMachineFactors()
{
#ifdef WIN32
machineFactorsThread = CreateThread(NULL, 0, CollectMachineFactorsThreadProc, this, 0, 0);
if ( machineFactorsThread == NULL )
{
ERRORLOG( "Could not create thread for machine id: " << ERROR_NO << endl )
}
#else
int retval = pthread_create(&machineFactorsThread, NULL, CollectMachineFactorsThreadProc, this);
if (retval)
{
ERRORLOG( "Return code from machine id pthread_create() is " << retval << endl )
}
#endif
}
Here's the simple failure case of running this multithreaded. It always fails for this code with the stack trace above:
CollectMachineFactors();
EVENT_WAIT(machineFactorsEvent);
cout << machineFactors;
return 0;
At first I suspected a library problem. Here's my makefile:
# Main executable file
PROGRAM = sysinfo
# Object files
OBJECTS = Version.h Main.o Protocol.o Socket.o SSLConnection.o Stats.o TimeElapsed.o Formatter.o OSX.o Threads.o
# Include directories
INCLUDE = -Itaocrypt/include -IyaSSL/taocrypt/mySTL -IyaSSL/include -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5
# Library settings
STATICLIBS = libtaocrypt.a libyassl.a -Wl,-rpath,. -ldl -lpthread -lz -lexpat
# Compile settings
RELCXX = g++ -g -ggdb -DDEBUG -Wall $(INCLUDE)
.SUFFIXES: .o .cpp
.cpp.o :
$(RELCXX) -c -Wall $(INCLUDE) -o $@ $<
all: $(PROGRAM)
$(PROGRAM): $(OBJECTS)
$(RELCXX) -o $(PROGRAM) $(OBJECTS) $(STATICLIBS)
clean:
rm -f *.o $(PROGRAM)
I can't for the life of me see anything particularly odd or dangerous and I'm not sure where to look. The same threaded process works fine on any Linux machine I have tried. Any suggestions? Any tools I should try?
I can add more info if it would be helpful.
I can see a problem with your Windows code, but not the OSX code that's crashing on you.
It seems that you're not posting the actual code for GetMachineFactors, since the variable factors is not declared. But regarding debugging, you should not take the non-appearance of printf output as conclusive proof that the statement hasn't been executed. Use debugger facilities such as setting a breakpoint, using special debugger trace output, and so on (I'm not sure what gdb handles; it's a fairly primitive debugger, but perhaps Apple has better tools?).
For Windows, you should use the runtime library's thread creation (_beginthreadex) instead of the Windows API CreateThread. That's because with CreateThread the runtime library isn't informed, so e.g. a new expression or other call that uses the runtime library might fail.
Sorry I can't help more.
I think it could perhaps have something to do with the GetMachineFactors code that you haven't shown?
It turns out, and I can't explain why, that a fork() call combined with a socketpair() as the IPC mechanism was the workaround to get things going as intended.
I wish I knew why it was failing in the first place (headscratch) but that approach seems to have been a good workaround.
It almost seemed like the kind of "build out of whack" problem that could be caused by failing to run a 'make clean' after changing header files, but that wasn't the case here.