I started by testing GoogleMock (1.8.0 release) on Windows. I wanted to show with an example that it is not thread safe. After demonstrating that successfully, I wanted to show that the same test runs fine on Linux. However, that failed, which did not match my expectation.
The GoogleMock documentation says that it is, or should be, thread safe on systems with pthreads, so it should be thread safe on Linux. I did have to add -pthread to the linker command line to build the executable, which means that GoogleMock or GoogleTest does use pthreads.
This is the code I use for testing:
#include <thread>
#include <vector>
#include "gmock/gmock.h"
class Dummy
{
public:
virtual void SomeMethod(int) {}
};
class DummyMock : public Dummy
{
public:
MOCK_METHOD1(SomeMethod, void(int));
};
using ::testing::Exactly;
constexpr static int nrCallsPerThread = 100 * 1000;
constexpr static int nrThreads = 10;
TEST(SomeTest, Test100)
{
DummyMock dummy;
std::vector<std::thread> threads;
for (int i = 0; i < nrThreads; i++)
{
EXPECT_CALL(dummy, SomeMethod(i)).Times(Exactly(nrCallsPerThread));
threads.emplace_back([&dummy, i]
{
for (int j = 0; j < nrCallsPerThread; j++)
{
dummy.SomeMethod(i);
}
});
}
for (auto& t: threads)
{
t.join();
}
}
int main(int argc, char** argv)
{
testing::InitGoogleMock(&argc, argv);
return RUN_ALL_TESTS();
}
On Linux, the problem is not exposed on every execution, but running the executable with --gtest_repeat=100 reproduces it nearly every time.
On Windows, using Visual Studio 2015, I get a Debug Assertion Failed! message with Expression: vector iterator not decrementable.
On Linux, Ubuntu 17.04, when I run the Debug build from the command line I get [ FATAL ] ../googletest-release-1.8.0/googletest/include/gtest/internal/gtest-port.h:1928:: pthread_mutex_lock(&mutex_)failed with error 22.
When running in a debugger on Linux, the program is (often) interrupted at gmock-spec-builders.h line 1100, which is frame 2 in this call stack:
0 0x5555555a6633 testing::Cardinality::ConservativeUpperBound() const () (??:??)
1 0x5555555a1a13 testing::internal::ExpectationBase::CheckActionCountIfNotDone() const () (??:??)
2 0x555555563f98 testing::internal::TypedExpectation<void (int)>::ShouldHandleArguments(std::tuple<int> const&) const(this=0x5555557f58a0, args=std::tuple containing = {...}) (../googletest-release-1.8.0/googlemock/include/gmock/gmock-spec-builders.h:1100)
3 0x55555556397d testing::internal::FunctionMockerBase<void (int)>::FindMatchingExpectationLocked(std::tuple<int> const&) const(this=0x7fffffffde38, args=std::tuple containing = {...}) (../googletest-release-1.8.0/googlemock/include/gmock/gmock-spec-builders.h:1723)
4 0x555555563578 testing::internal::FunctionMockerBase<void (int)>::UntypedFindMatchingExpectation(void const*, void const**, bool*, std::ostream*, std::ostream*)(this=0x7fffffffde38, untyped_args=0x7fffde7fbe14, untyped_action=0x7fffde7fb7d0, is_excessive=0x7fffde7fb7c7, what=0x7fffde7fb900, why=0x7fffde7fba90) (../googletest-release-1.8.0/googlemock/include/gmock/gmock-spec-builders.h:1687)
5 0x5555555a265e testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith(void const*) () (??:??)
6 0x55555555fcba testing::internal::FunctionMockerBase<void (int)>::InvokeWith(std::tuple<int> const&)(this=0x7fffffffde38, args=std::tuple containing = {...}) (../googletest-release-1.8.0/googlemock/include/gmock/gmock-spec-builders.h:1585)
7 0x55555555f16c testing::internal::FunctionMocker<void (int)>::Invoke(int)(this=0x7fffffffde38, a1=1) (../googletest-release-1.8.0/googlemock/include/gmock/gmock-generated-function-mockers.h:101)
8 0x55555555ecb6 DummyMock::SomeMethod(this=0x7fffffffde30, gmock_a1=1) (/home/jos/Programming/ThreadSafeGMock/main.cpp:16)
9 0x55555555d31e SomeTest_Test100_Test::<lambda()>::operator()(void) const(__closure=0x5555557f5478) (/home/jos/Programming/ThreadSafeGMock/main.cpp:36)
10 0x55555555de98 std::_Bind_simple<SomeTest_Test100_Test::TestBody()::<lambda()>()>::_M_invoke<>(std::_Index_tuple<>)(this=0x5555557f5478) (/usr/include/c++/6/functional:1391)
11 0x55555555de22 std::_Bind_simple<SomeTest_Test100_Test::TestBody()::<lambda()>()>::operator()(void)(this=0x5555557f5478) (/usr/include/c++/6/functional:1380)
12 0x55555555ddf2 std::thread::_State_impl<std::_Bind_simple<SomeTest_Test100_Test::TestBody()::<lambda()>()> >::_M_run(void)(this=0x5555557f5470) (/usr/include/c++/6/thread:197)
13 0x7ffff7b0a83f ??() (/usr/lib/x86_64-linux-gnu/libstdc++.so.6:??)
14 0x7ffff76216da start_thread(arg=0x7fffde7fc700) (pthread_create.c:456)
15 0x7ffff735b17f clone() (../sysdeps/unix/sysv/linux/x86_64/clone.S:105)
Since it should be thread safe, I suspect that I am doing something wrong. But I do not see what.
Or did I hit a bug in GoogleTest or GoogleMock?
From the fine manual:
Important note: Google Mock requires expectations to be set before the mock functions are called, otherwise the behavior is undefined. In particular, you mustn't interleave EXPECT_CALL()s and calls to the mock functions.
Your original code fails on my system (cygwin) intermittently with error 22 or sometimes no message/error code whatsoever. This modification works flawlessly:
for (int i = 0; i < nrThreads; i++)
{
EXPECT_CALL(dummy, SomeMethod(i)).Times(Exactly(nrCallsPerThread));
}
std::vector<std::thread> threads;
for (int i = 0; i < nrThreads; i++)
{
threads.emplace_back([&dummy, i] ...
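For completeness, here is the full rearranged test body (a sketch: it is simply the question's code with the EXPECT_CALL loop hoisted out of the thread-spawning loop):
TEST(SomeTest, Test100)
{
    DummyMock dummy;

    // set up all expectations before any thread touches the mock
    for (int i = 0; i < nrThreads; i++)
    {
        EXPECT_CALL(dummy, SomeMethod(i)).Times(Exactly(nrCallsPerThread));
    }

    std::vector<std::thread> threads;
    for (int i = 0; i < nrThreads; i++)
    {
        threads.emplace_back([&dummy, i]
        {
            for (int j = 0; j < nrCallsPerThread; j++)
            {
                dummy.SomeMethod(i);
            }
        });
    }

    for (auto& t : threads)
    {
        t.join();
    }
}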
I have written a C++ application (a daemon) which has some issues with crashes due to segmentation faults.
To get information about the code where the crash(es) happen, I use libunwind, which works very well on x86_64 Linux systems: there I get a nice stack trace.
But on ARM architectures the stack trace only covers the daemon code itself and does not go into the shared libraries I use.
I am using gcc 4.9 (for ARM: arm-linux-gnueabihf-g++ 4.9).
I am building the libraries and the daemon in debug mode (-g) with the following parameters to get the tables for stack tracing:
-funwind-tables -fasynchronous-unwind-tables -mapcs-frame -g
For the shared libraries I pass the following to the linker:
-shared -pthread -static-libgcc -static-libstdc++ -rdynamic
Here is a sample stack trace on x86:
1 0x0000000000403f68 sp=0x00007f0c2be8d630 crit_err_hdlr(int, siginfo_t*, void*) + 0x28
2 0x00007f0c2dfa33d0 sp=0x00007f0c2be8d680 __restore_rt + 0x0
3 0x00007f0c2dfa32a9 sp=0x00007f0c2be8dc18 raise + 0x29
4 0x00007f0c2e958bab sp=0x00007f0c2be8dc20 Raumserver::Request::RequestAction_Crash::crashLevel4() + 0x2b
5 0x00007f0c2e958d9c sp=0x00007f0c2be8dc60 Raumserver::Request::RequestAction_Crash::executeAction() + 0x12c
6 0x00007f0c2e96f045 sp=0x00007f0c2be8dce0 Raumserver::Request::RequestAction::execute() + 0x55
7 0x00007f0c2e940c83 sp=0x00007f0c2be8de00 Raumserver::Manager::RequestActionManager::requestProcessingWorkerThread() + 0x113
8 0x00007f0c2ee86380 sp=0x00007f0c2be8df00 execute_native_thread_routine + 0x20
9 0x00007f0c2df996fa sp=0x00007f0c2be8df20 start_thread + 0xca
10 0x00007f0c2dccfb5d sp=0x00007f0c2be8dfc0 clone + 0x6d
11 0x0000000000000000 sp=0x00007f0c2be8dfc8 + 0x6d
And here is the trace from the ARM device. It seems that the stack information
for the shared libraries is not present?!
1 0x0000000000013403 sp=0x00000000b49fe930 crit_err_hdlr(int, siginfo_t*, void*) + 0x1a
2 0x00000000b6a3b6a0 sp=0x00000000b49fe960 __default_rt_sa_restorer_v2 + 0x0
3 0x00000000b6b5774c sp=0x00000000b49fecd4 raise + 0x24
Here is a summary of the code that builds the stack trace, which is called from the signal handler:
#ifdef __arm__
#include <libunwind.h>
#include <libunwind-arm.h>
#else
#include <libunwind.h>
#include <libunwind-x86_64.h>
#endif
...
void backtrace()
{
unw_cursor_t cursor;
unw_context_t context;
unw_getcontext(&context);
unw_init_local(&cursor, &context);
int n=0;
int err = unw_step(&cursor);
while ( err )
{
unw_word_t ip, sp, off;
unw_get_reg(&cursor, UNW_REG_IP, &ip);
unw_get_reg(&cursor, UNW_REG_SP, &sp);
char symbol[256] = {"<unknown>"};
char buffer[256];
char *name = symbol;
if ( !unw_get_proc_name(&cursor, symbol, sizeof(symbol), &off) )
{
int status;
if ( (name = abi::__cxa_demangle(symbol, NULL, NULL, &status)) == 0 )
name = symbol;
}
sprintf(buffer, "#%-2d 0x%016" PRIxPTR " sp=0x%016" PRIxPTR " %s + 0x%" PRIxPTR "\n",
++n,
static_cast<uintptr_t>(ip),
static_cast<uintptr_t>(sp),
name,
static_cast<uintptr_t>(off));
if ( name != symbol )
free(name);
...
err = unw_step(&cursor);
}
}
And here is a summary of the daemon itself:
int main(int argc, char *argv[])
{
Raumserver::Raumserver raumserverObject;
//Set our Logging Mask and open the Log
setlogmask(LOG_UPTO(LOG_NOTICE));
openlog(DAEMON_NAME, LOG_CONS | LOG_NDELAY | LOG_PERROR | LOG_PID, LOG_USER);
pid_t pid, sid;
//Fork the Parent Process
pid = fork();
if (pid < 0) { syslog (LOG_NOTICE, "Error forking the parent process"); exit(EXIT_FAILURE); }
//We got a good pid, Close the Parent Process
if (pid > 0) { syslog (LOG_NOTICE, "Got pid, closing parent process"); exit(EXIT_SUCCESS); }
//Change File Mask
umask(0);
//Create a new Signature Id for our child
sid = setsid();
if (sid < 0) { syslog (LOG_NOTICE, "Signature ID for child process could not be created!"); exit(EXIT_FAILURE); }
// get the working directory of the executable
std::string workingDirectory = getWorkingDirectory();
//Change Directory
//If we cant find the directory we exit with failure.
if ((chdir("/")) < 0) { exit(EXIT_FAILURE); }
//Close Standard File Descriptors
close(STDIN_FILENO);
close(STDOUT_FILENO);
close(STDERR_FILENO);
// Add some system signal handlers for crash reporting
//raumserverObject.addSystemSignalHandlers();
AddSignalHandlers();
// set the log adapters we want to use (because we do not want to use the standard ones which includes console output)
std::vector<std::shared_ptr<Raumkernel::Log::LogAdapter>> adapters;
auto logAdapterFile = std::shared_ptr<Raumkernel::Log::LogAdapter_File>(new Raumkernel::Log::LogAdapter_File());
logAdapterFile->setLogFilePath(workingDirectory + "logs/");
adapters.push_back(logAdapterFile);
// create raumserver object and do init
raumserverObject.setSettingsFile(workingDirectory + "settings.xml");
raumserverObject.initLogObject(Raumkernel::Log::LogType::LOGTYPE_ERROR, workingDirectory + "logs/", adapters);
raumserverObject.init();
// go into an endless loop and wait until the daemon is killed by the system
while(true)
{
sleep(60); //Sleep for 60 seconds
}
//Close the log
closelog ();
}
Does anybody have an idea whether I have missed some compiler flags? Is something missing?
I have found a lot of posts and they all refer to the mentioned compiler flags or to libunwind. I have tried several other code snippets I found, but they only give me a stack depth of 1.
Thanks!
EDIT 11.11.2016:
For testing I built static libraries instead of shared ones and linked them into the application, so there are no dynamic links anymore. Unfortunately the stack trace is the same bad one.
I have even tried to compile everything in Thumb mode and changed the ARM compiler to gcc5/g++5:
-mthumb -mtpcs-frame -mtpcs-leaf-frame
Same issue :-(
I get the error
lldb: /home/hannes/.llvm/llvm/tools/clang/lib/AST/RecordLayoutBuilder.cpp:2271: uint64_t ::RecordLayoutBuilder::updateExternalFieldOffset(const clang::FieldDecl *, uint64_t): Assertion `ExternalFieldOffsets.find(Field) != ExternalFieldOffsets.end() && "Field does not have an external offset"' failed.
Aborted (core dumped)
when I try to print a vector<string>. Does anyone know why this happens, and how to fix it? The equivalent works just fine in gdb (there are a number of reasons why I'd rather use / have to use lldb over gdb).
I'm running Ubuntu 12.10 with llvm, clang and lldb trunk.
The program, build instructions and lldb command sequence:
#include <iostream>
#include <algorithm>
#include <vector>
#include <string>
using std::for_each;
using std::begin;
using std::end;
int main() {
std::vector<std::string> vec{"Hello","World","!"};
for_each(begin(vec),end(vec),[](const std::string& s) {
std::cout << s << " ";
});
std::cout << std::endl;
return 0;
}
clang++ -g -c -std=c++11 main.cpp
clang++ -std=c++11 main.o -o test
lldb test
Current executable set to 'test' (x86_64).
(lldb) break -n main
invalid command 'breakpoint -n'
(lldb) breat set -n main
error: 'breat' is not a valid command.
(lldb) break set -n main
Breakpoint 1: where = test`main + 26 at main.cpp:5, address = 0x0000000000400aea
(lldb) run
Process 24489 launched: '/home/hannes/Documents/Programming/CXX/test/test' (x86_64)
Process 24489 stopped
* thread #1: tid = 0x5fa9, 0x0000000000400aea test`main + 26 at main.cpp:5, stop reason = breakpoint 1.1
frame #0: 0x0000000000400aea test`main + 26 at main.cpp:5
2 #include <string>
3
4 int main() {
-> 5 std::vector<std::string> vec{"Hello","World","!"};
6 return 0;
7 }
n
Process 24489 stopped
* thread #1: tid = 0x5fa9, 0x0000000000400c72 test`main + 418 at main.cpp:6, stop reason = step over
frame #0: 0x0000000000400c72 test`main + 418 at main.cpp:6
3
4 int main() {
5 std::vector<std::string> vec{"Hello","World","!"};
-> 6 return 0;
7 }
frame variable
lldb: /home/hannes/.llvm/llvm/tools/clang/lib/AST/RecordLayoutBuilder.cpp:2271: uint64_t <anonymous namespace>::RecordLayoutBuilder::updateExternalFieldOffset(const clang::FieldDecl *, uint64_t): Assertion `ExternalFieldOffsets.find(Field) != ExternalFieldOffsets.end() && "Field does not have an external offset"' failed.
Aborted (core dumped)
Log output with debug level 10:
Logging from function (<frame object at 0x3172f20>, '/usr/lib/python2.7/dist-packages/lldb/formatters/cpp/gnu_libstdcpp.py', 141, '__init__', ['\t\tlogger = lldb.formatters.Logger.Logger()\n'], 0)
Providing synthetic children for a map named vec
Logging from function (<frame object at 0x3170d10>, '/usr/lib/python2.7/dist-packages/lldb/formatters/cpp/gnu_libstdcpp.py', 214, 'update', ['\t\tlogger = lldb.formatters.Logger.Logger()\n'], 0)
Does the same thing happen if you say frame variable --raw?
This command is meant to dump your vector with the data formatters disabled. In this specific case you will get (assuming no crashes) the in-memory layout of the vector instead of a printout of the strings you stored in it.
I am mostly trying to figure out whether the data formatters play a part in this issue.
We use stack traces in a proprietary assert-like macro to catch developer mistakes - when an error is caught, a stack trace is printed.
I find gcc's backtrace()/backtrace_symbols() pair insufficient:
Names are mangled
No line information
The 1st problem can be resolved by abi::__cxa_demangle, for example as sketched below.
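For illustration, a minimal sketch of that first fix (not the actual macro): feed the strings returned by backtrace_symbols() through abi::__cxa_demangle. The parsing of the glibc symbol-string format "module(mangled+0xoffset) [0xaddr]" is an assumption.
#include <execinfo.h>
#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

void print_demangled_trace()
{
    void *addrs[64];
    int n = backtrace(addrs, 64);
    char **symbols = backtrace_symbols(addrs, n);

    for (int i = 0; i < n; i++)
    {
        // each entry looks like "module(_ZMangledName+0xoffset) [0xaddress]"
        char *begin = strchr(symbols[i], '(');
        char *plus  = begin ? strchr(begin, '+') : NULL;
        if (begin && plus && plus > begin + 1)
        {
            std::string mangled(begin + 1, plus);
            int status = 0;
            char *demangled = abi::__cxa_demangle(mangled.c_str(), NULL, NULL, &status);
            printf("#%d %s\n", i, (status == 0 && demangled) ? demangled : symbols[i]);
            free(demangled);
        }
        else
        {
            printf("#%d %s\n", i, symbols[i]);
        }
    }
    free(symbols);
}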
The 2nd problem is tougher, however. I found a replacement for backtrace_symbols().
It is better than gcc's backtrace_symbols(), since it can retrieve line numbers (if compiled with -g) and you don't need to compile with -rdynamic.
However, that code is GPL-licensed, so IMHO I can't use it in commercial code.
Any proposal?
P.S.
gdb is capable of printing out the arguments passed to functions.
Probably that's already too much to ask for :)
PS 2
Similar question (thanks nobar)
So you want a stand-alone function that prints a stack trace with all of the features that gdb stack traces have and that doesn't terminate your application. The answer is to automate the launch of gdb in a non-interactive mode to perform just the tasks that you want.
This is done by executing gdb in a child process, using fork(), and scripting it to display a stack-trace while your application waits for it to complete. This can be performed without the use of a core-dump and without aborting the application. I learned how to do this from looking at this question: How it's better to invoke gdb from program to print it's stacktrace?
The example posted with that question didn't work for me exactly as written, so here's my "fixed" version (I ran this on Ubuntu 9.04).
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sys/prctl.h>
void print_trace() {
char pid_buf[30];
sprintf(pid_buf, "%d", getpid());
char name_buf[512];
name_buf[readlink("/proc/self/exe", name_buf, 511)]=0;
prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0);
int child_pid = fork();
if (!child_pid) {
dup2(2,1); // redirect output to stderr - edit: unnecessary?
execl("/usr/bin/gdb", "gdb", "--batch", "-n", "-ex", "thread", "-ex", "bt", name_buf, pid_buf, NULL);
abort(); /* If gdb failed to start */
} else {
waitpid(child_pid,NULL,0);
}
}
As shown in the referenced question, gdb provides additional options that you could use. For example, using "bt full" instead of "bt" produces an even more detailed report (local variables are included in the output). The manpages for gdb are kind of light, but complete documentation is available here.
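For instance, the execl() call above could be adapted like this (just a sketch; the rest of print_trace() stays the same):
// same as before, but ask gdb for thread info and a full backtrace with locals
execl("/usr/bin/gdb", "gdb", "--batch", "-n",
      "-ex", "info threads",
      "-ex", "bt full",
      name_buf, pid_buf, NULL);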
Since this is based on gdb, the output includes demangled names, line-numbers, function arguments, and optionally even local variables. Also, gdb is thread-aware, so you should be able to extract some thread-specific metadata.
Here's an example of the kind of stack traces that I see with this method.
0x00007f97e1fc2925 in waitpid () from /lib/libc.so.6
[Current thread is 0 (process 15573)]
#0 0x00007f97e1fc2925 in waitpid () from /lib/libc.so.6
#1 0x0000000000400bd5 in print_trace () at ./demo3b.cpp:496
#2 0x0000000000400c09 in recursive (i=2) at ./demo3b.cpp:636
#3 0x0000000000400c1a in recursive (i=1) at ./demo3b.cpp:646
#4 0x0000000000400c1a in recursive (i=0) at ./demo3b.cpp:646
#5 0x0000000000400c46 in main (argc=1, argv=0x7fffe3b2b5b8) at ./demo3b.cpp:70
Note: I found this to be incompatible with the use of valgrind (probably due to Valgrind's use of a virtual machine). It also doesn't work when you are running the program inside of a gdb session (can't apply a second instance of "ptrace" to a process).
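One possible guard for the second limitation (an assumption on my part, not something I have battle-tested): skip the gdb fork when a tracer is already attached, by checking the TracerPid field in /proc/self/status. A non-zero TracerPid means a debugger is already attached and ptrace() would fail anyway.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// returns non-zero if some process (e.g. gdb) is already tracing us
int already_traced(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return 0;
    char line[256];
    int traced = 0;
    while (fgets(line, sizeof line, f))
    {
        if (strncmp(line, "TracerPid:", 10) == 0)
        {
            traced = atoi(line + 10) != 0;
            break;
        }
    }
    fclose(f);
    return traced;
}
print_trace() could then simply return early when already_traced() is non-zero instead of forking gdb.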
Not too long ago I answered a similar question. You should take a look at the source code available on method #4, which also prints line numbers and filenames.
Method #4:
A small improvement I've done on method #3 to print line numbers. This could be copied to work on method #2 also.
Basically, it uses addr2line to convert addresses into file names and line numbers.
The source code below prints line numbers for all local functions. If a function from another library is called, you might see a couple of ??:0 instead of file names.
#include <stdio.h>
#include <signal.h>
#include <execinfo.h>
void bt_sighandler(int sig, struct sigcontext ctx) {
void *trace[16];
char **messages = (char **)NULL;
int i, trace_size = 0;
if (sig == SIGSEGV)
printf("Got signal %d, faulty address is %p, "
"from %p\n", sig, ctx.cr2, ctx.eip);
else
printf("Got signal %d\n", sig);
trace_size = backtrace(trace, 16);
/* overwrite sigaction with caller's address */
trace[1] = (void *)ctx.eip;
messages = backtrace_symbols(trace, trace_size);
/* skip first stack frame (points here) */
printf("[bt] Execution path:\n");
for (i=1; i<trace_size; ++i)
{
printf("[bt] #%d %s\n", i, messages[i]);
/* find first occurrence of '(' or ' ' in message[i] and assume
* everything before that is the file name. (Don't go beyond 0 though
* (string terminator)) */
size_t p = 0;
while(messages[i][p] != '(' && messages[i][p] != ' '
&& messages[i][p] != 0)
++p;
char syscom[256];
sprintf(syscom,"addr2line %p -e %.*s", trace[i], p, messages[i]);
//last parameter is the file name of the symbol
system(syscom);
}
exit(0);
}
int func_a(int a, char b) {
char *p = (char *)0xdeadbeef;
a = a + b;
*p = 10; /* CRASH here!! */
return 2*a;
}
int func_b() {
int res, a = 5;
res = 5 + func_a(a, 't');
return res;
}
int main() {
/* Install our signal handler */
struct sigaction sa;
sa.sa_handler = (void *)bt_sighandler;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGUSR1, &sa, NULL);
/* ... add any other signal here */
/* Do something */
printf("%d\n", func_b());
}
This code should be compiled as: gcc sighandler.c -o sighandler -rdynamic
The program outputs:
Got signal 11, faulty address is 0xdeadbeef, from 0x8048975
[bt] Execution path:
[bt] #1 ./sighandler(func_a+0x1d) [0x8048975]
/home/karl/workspace/stacktrace/sighandler.c:44
[bt] #2 ./sighandler(func_b+0x20) [0x804899f]
/home/karl/workspace/stacktrace/sighandler.c:54
[bt] #3 ./sighandler(main+0x6c) [0x8048a16]
/home/karl/workspace/stacktrace/sighandler.c:74
[bt] #4 /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0x3fdbd6]
??:0
[bt] #5 ./sighandler() [0x8048781]
??:0
There is a robust discussion of essentially the same question at: How to generate a stacktrace when my gcc C++ app crashes. Many suggestions are provided, including lots of discussion about how to generate stack traces at run-time.
My personal favorite answer from that thread was to enable core dumps which allows you to view the complete application state at the time of the crash (including function arguments, line numbers, and unmangled names). An additional benefit of this approach is that it not only works for asserts, but also for segmentation faults and unhandled exceptions.
Different Linux shells use different commands to enable core dumps, but you can do it from within your application code with something like this...
#include <sys/resource.h>
...
struct rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
assert( setrlimit( RLIMIT_CORE, &core_limit ) == 0 ); // enable core dumps for debug builds
After a crash, run your favorite debugger to examine the program state.
$ kdbg executable core
Here's some sample output...
It is also possible to extract the stack trace from a core dump at the command line.
$ ( CMDFILE=$(mktemp); echo "bt" >${CMDFILE}; gdb 2>/dev/null --batch -x ${CMDFILE} temp.exe core )
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 22857]
#0 0x00007f4189be5fb5 in raise () from /lib/libc.so.6
#0 0x00007f4189be5fb5 in raise () from /lib/libc.so.6
#1 0x00007f4189be7bc3 in abort () from /lib/libc.so.6
#2 0x00007f4189bdef09 in __assert_fail () from /lib/libc.so.6
#3 0x00000000004007e8 in recursive (i=5) at ./demo1.cpp:18
#4 0x00000000004007f3 in recursive (i=4) at ./demo1.cpp:19
#5 0x00000000004007f3 in recursive (i=3) at ./demo1.cpp:19
#6 0x00000000004007f3 in recursive (i=2) at ./demo1.cpp:19
#7 0x00000000004007f3 in recursive (i=1) at ./demo1.cpp:19
#8 0x00000000004007f3 in recursive (i=0) at ./demo1.cpp:19
#9 0x0000000000400849 in main (argc=1, argv=0x7fff2483bd98) at ./demo1.cpp:26
Since the GPL-licensed code is intended to help you during development, you could simply not include it in the final product. The GPL restricts you from distributing GPL-licensed code linked with non-GPL-compatible code. As long as you only use the GPL code in-house, you should be fine.
Use the Google glog library for this. It has a New BSD license.
It contains a GetStackTrace function in the stacktrace.h file.
EDIT
I found here http://blog.bigpixel.ro/2010/09/09/stack-unwinding-stack-trace-with-gcc/ that there is a utility called addr2line that translates program addresses into file names and line numbers.
http://linuxcommand.org/man_pages/addr2line1.html
Here's an alternative approach. A debug_assert() macro programmatically sets a conditional breakpoint. If you are running in a debugger, you will hit a breakpoint when the assert expression is false -- and you can analyze the live stack (the program doesn't terminate). If you are not running in a debugger, a failed debug_assert() causes the program to abort and you get a core dump from which you can analyze the stack (see my earlier answer).
The advantage of this approach, compared to normal asserts, is that you can continue running the program after the debug_assert is triggered (when running in a debugger). In other words, debug_assert() is slightly more flexible than assert().
#include <iostream>
#include <cassert>
#include <sys/resource.h>
// note: The assert expression should show up in
// stack trace as parameter to this function
void debug_breakpoint( char const * expression )
{
asm("int3"); // x86 specific
}
#ifdef NDEBUG
#define debug_assert( expression )
#else
// creates a conditional breakpoint
#define debug_assert( expression ) \
do { if ( !(expression) ) debug_breakpoint( #expression ); } while (0)
#endif
void recursive( int i=0 )
{
debug_assert( i < 5 );
if ( i < 10 ) recursive(i+1);
}
int main( int argc, char * argv[] )
{
rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
setrlimit( RLIMIT_CORE, &core_limit ); // enable core dumps
recursive();
}
Note: Sometimes "conditional breakpoints" setup within debuggers can be slow. By establishing the breakpoint programmatically, the performance of this method should be equivalent to that of a normal assert().
Note: As written, this is specific to the Intel x86 architecture -- other processors may have different instructions for generating a breakpoint.
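One way to make debug_breakpoint() a bit less architecture-specific (a sketch, only an assumption on my part and not tested everywhere): fall back to raising SIGTRAP, which debuggers also stop on and which, outside a debugger, still terminates with a core dump by default.
#include <csignal>

// note: The assert expression should show up in
// stack trace as parameter to this function
void debug_breakpoint( char const * expression )
{
#if defined(__i386__) || defined(__x86_64__)
    asm("int3");        // x86 breakpoint instruction, as in the original
#else
    raise(SIGTRAP);     // assumed fallback for other architectures
#endif
}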
A bit late, but you can use libbfd to fetch the file name and line number like refdbg does in symsnarf.c. libbfd is used internally by addr2line and gdb.
Here is my solution:
#include <execinfo.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <stdlib.h>
#include <limits.h>
#include <cstdio>
#include <array>
#include <memory>
#include <string>
#include <iostream>
#include <regex>
std::string getexepath() {
char result[PATH_MAX];
ssize_t count = readlink("/proc/self/exe", result, PATH_MAX);
return std::string(result, (count > 0) ? count : 0);
}
std::string sh(std::string cmd) {
std::array<char, 128> buffer;
std::string result;
std::shared_ptr<FILE> pipe(popen(cmd.c_str(), "r"), pclose);
if (!pipe) throw std::runtime_error("popen() failed!");
while (!feof(pipe.get())) {
if (fgets(buffer.data(), 128, pipe.get()) != nullptr) {
result += buffer.data();
}
}
return result;
}
void print_backtrace(void) {
void *bt[1024];
int bt_size;
char **bt_syms;
int i;
bt_size = backtrace(bt, 1024);
bt_syms = backtrace_symbols(bt, bt_size);
std::regex re("\\[(.+)\\]");
auto exec_path = getexepath();
for (i = 1; i < bt_size; i++) {
std::string sym = bt_syms[i];
std::smatch ms;
if (std::regex_search(sym, ms, re)) {
std::string addr = ms[1];
std::string cmd = "addr2line -e " + exec_path + " -f -C " + addr;
auto r = sh(cmd);
std::regex re2("\\n$");
auto r2 = std::regex_replace(r, re2, "");
std::cout << r2 << std::endl;
}
}
free(bt_syms);
}
void test_m() {
print_backtrace();
}
int main() {
test_m();
return 0;
}
output:
/home/roroco/Dropbox/c/ro-c/cmake-build-debug/ex/test_backtrace_with_line_number
test_m()
/home/roroco/Dropbox/c/ro-c/ex/test_backtrace_with_line_number.cpp:57
main
/home/roroco/Dropbox/c/ro-c/ex/test_backtrace_with_line_number.cpp:61
??
??:0
"??" and "??:0" since this trace is in libc, not in my source
One of the solutions is to start gdb with a "bt" script from the failed-assert handler. It is not very easy to integrate such a gdb launch, but it will give you both the backtrace and the arguments, with demangled names (or you can pipe the gdb output through the c++filt program).
Both programs (gdb and c++filt) will not be linked into your application, so the GPL will not require you to open-source the complete application.
You can use the same approach (exec'ing a GPL program) with backtrace_symbols: just generate an ASCII list of the %eip values plus the map of the executable (/proc/self/maps) and pass them to a separate binary, for example as sketched below.
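A minimal sketch of that hand-off (the output file names and the external symbolizer are hypothetical; only the raw return addresses and the module map are written here):
#include <execinfo.h>
#include <cstdio>

// dump the raw return addresses plus /proc/self/maps so a separately shipped
// (GPL) tool can symbolize them offline, outside this process
void dump_raw_trace()
{
    void *addrs[64];
    int n = backtrace(addrs, 64);

    FILE *out = fopen("trace-addrs.txt", "w");   // hypothetical output file
    if (!out) return;
    for (int i = 0; i < n; i++)
        fprintf(out, "%p\n", addrs[i]);
    fclose(out);

    // copy the module map so the tool knows where each object was loaded
    FILE *maps = fopen("/proc/self/maps", "r");
    FILE *copy = fopen("trace-maps.txt", "w");   // hypothetical output file
    if (maps && copy) {
        char buf[4096];
        size_t len;
        while ((len = fread(buf, 1, sizeof buf, maps)) > 0)
            fwrite(buf, 1, len, copy);
    }
    if (maps) fclose(maps);
    if (copy) fclose(copy);
}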
You can use DeathHandler - a small C++ class which does everything for you, reliably.
I suppose line numbers are related to current eip value, right?
SOLUTION 1:
Then you can use something like GetThreadContext(), except that you're working on Linux. I googled around a bit and found something similar, ptrace():
The ptrace() system call provides a
means by which a parent process may
observe and control the execution of
another process, and examine and
change its core image and registers. [...]
The parent can initiate a trace by
calling fork(2) and having the
resulting child do a PTRACE_TRACEME,
followed (typically) by an exec(3).
Alternatively, the parent may commence
trace of an existing process using
PTRACE_ATTACH.
Now I was thinking: you could write a 'main' program which checks for signals that are sent to its child, the real program you're working on. After fork() it calls waitid():
All of these system calls are used to
wait for state changes in a child of
the calling process, and obtain
information about the child whose
state has changed.
and if a SIGSEGV (or something similar) is caught, call ptrace() to obtain the value of eip.
PS: I've never used these system calls (well, actually, I've never seen them before ;) so I don't know whether this is possible, nor whether it can help you. At least I hope these links are useful. ;)
SOLUTION 2:
The first solution is quite complicated, right? I came up with a much simpler one: use signal() to catch the signals you are interested in and call a simple function that reads the eip value stored on the stack:
...
signal(SIGSEGV, sig_handler);
...
void sig_handler(int signum)
{
int eip_value;
asm {
push eax;
mov eax, [ebp - 4]
mov eip_value, eax
pop eax
}
// now you have the address of the
// **next** instruction after the
// SIGSEGV was received
}
That asm syntax is Borland's; just adapt it to GAS. ;)
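For what it's worth, on Linux you can avoid the inline asm entirely by installing the handler with SA_SIGINFO and reading the instruction pointer from the ucontext_t passed to the handler. This is only a sketch and is x86-64/glibc specific (32-bit x86 would use REG_EIP instead of REG_RIP):
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1    // needed for REG_RIP with glibc (g++ defines it already)
#endif
#include <signal.h>
#include <ucontext.h>
#include <stdio.h>
#include <unistd.h>

static void sig_handler(int signum, siginfo_t *info, void *context)
{
    ucontext_t *uc = (ucontext_t *)context;
    // REG_RIP indexes the saved registers on x86-64 Linux
    void *ip = (void *)uc->uc_mcontext.gregs[REG_RIP];
    fprintf(stderr, "signal %d, faulty address %p, ip %p\n",
            signum, info->si_addr, ip);
    _exit(1);
}

int main()
{
    struct sigaction sa = {};
    sa.sa_sigaction = sig_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    *(volatile int *)0 = 42;   // deliberately crash to exercise the handler
    return 0;
}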
Here's my third answer -- still trying to take advantage of core dumps.
It wasn't completely clear in the question whether the "assert-like" macros were supposed to terminate the application (the way assert does) or they were supposed to continue executing after generating their stack-trace.
In this answer, I'm addressing the case where you want to show a stack-trace and continue executing. I wrote the coredump() function below to generate a core dump, automatically extract the stack-trace from it, then continue executing the program.
Usage is the same as that of assert(). The difference, of course, is that assert() terminates the program but coredump_assert() does not.
#include <iostream>
#include <sys/resource.h>
#include <cstdio>
#include <cstdlib>
#include <boost/lexical_cast.hpp>
#include <string>
#include <sys/wait.h>
#include <unistd.h>
std::string exename;
// expression argument is for diagnostic purposes (shows up in call-stack)
void coredump( char const * expression )
{
pid_t childpid = fork();
if ( childpid == 0 ) // child process generates core dump
{
rlimit core_limit = { RLIM_INFINITY, RLIM_INFINITY };
setrlimit( RLIMIT_CORE, &core_limit ); // enable core dumps
abort(); // terminate child process and generate core dump
}
// give each core-file a unique name
if ( childpid > 0 ) waitpid( childpid, 0, 0 );
static int count=0;
using std::string;
string pid = boost::lexical_cast<string>(getpid());
string newcorename = "core-"+boost::lexical_cast<string>(count++)+"."+pid;
string rawcorename = "core."+boost::lexical_cast<string>(childpid);
int rename_rval = rename(rawcorename.c_str(),newcorename.c_str()); // try with core.PID
if ( rename_rval == -1 ) rename_rval = rename("core",newcorename.c_str()); // try with just core
if ( rename_rval == -1 ) std::cerr<<"failed to capture core file\n";
#if 1 // optional: dump stack trace and delete core file
string cmd = "( CMDFILE=$(mktemp); echo 'bt' >${CMDFILE}; gdb 2>/dev/null --batch -x ${CMDFILE} "+exename+" "+newcorename+" ; unlink ${CMDFILE} )";
int system_rval = system( ("bash -c '"+cmd+"'").c_str() );
if ( system_rval == -1 ) std::cerr.flush(), perror("system() failed during stack trace"), fflush(stderr);
unlink( newcorename.c_str() );
#endif
}
#ifdef NDEBUG
#define coredump_assert( expression ) ((void)(expression))
#else
#define coredump_assert( expression ) do { if ( !(expression) ) { coredump( #expression ); } } while (0)
#endif
void recursive( int i=0 )
{
coredump_assert( i < 2 );
if ( i < 4 ) recursive(i+1);
}
int main( int argc, char * argv[] )
{
exename = argv[0]; // this is used to generate the stack trace
recursive();
}
When I run the program, it displays three stack traces...
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24251]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=2) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#6 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24259]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=3) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=2) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#6 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#7 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
Core was generated by `./temp.exe'.
Program terminated with signal 6, Aborted.
[New process 24267]
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#0 0x00007f2818ac9fb5 in raise () from /lib/libc.so.6
#1 0x00007f2818acbbc3 in abort () from /lib/libc.so.6
#2 0x0000000000401a0e in coredump (expression=0x403303 "i < 2") at ./demo3.cpp:29
#3 0x0000000000401f5f in recursive (i=4) at ./demo3.cpp:60
#4 0x0000000000401f70 in recursive (i=3) at ./demo3.cpp:61
#5 0x0000000000401f70 in recursive (i=2) at ./demo3.cpp:61
#6 0x0000000000401f70 in recursive (i=1) at ./demo3.cpp:61
#7 0x0000000000401f70 in recursive (i=0) at ./demo3.cpp:61
#8 0x0000000000401f8b in main (argc=1, argv=0x7fffc229eb98) at ./demo3.cpp:66
I had to do this in a production environment with many constraints, so I wanted to explain the advantages and disadvantages of the already posted methods.
attach GDB
+ very simple and robust
- Slow for large programs because GDB insists on loading the entire address to line # database upfront instead of lazily
- Interferes with signal handling. When GDB is attached, it intercepts signals like SIGINT (ctrl-c), which will cause the program to get stuck at the GDB interactive prompt if some other process routinely sends such signals. Maybe there's some way around it, but this made GDB unusable in my case. You can still use it if you only care about printing a call stack once when your program crashes, but not multiple times.
addr2line. Here's an alternate solution that doesn't use backtrace_symbols.
+ Doesn't allocate from the heap, which is unsafe inside a signal handler
+ Don't need to parse output of backtrace_symbols
- Won't work on MacOS, which doesn't have dladdr1. You can use _dyld_get_image_vmaddr_slide instead, which returns the same offset as link_map::l_addr.
- Requires adding a negative offset or else the translated line # will be 1 greater. backtrace_symbols does this for you.
#include <execinfo.h>
#include <link.h>
#include <stdlib.h>
#include <stdio.h>
// converts a function's address in memory to its VMA address in the executable file. VMA is what addr2line expects
size_t ConvertToVMA(size_t addr)
{
Dl_info info;
link_map* link_map;
dladdr1((void*)addr,&info,(void**)&link_map,RTLD_DL_LINKMAP);
return addr-link_map->l_addr;
}
void PrintCallStack()
{
void *callstack[128];
int frame_count = backtrace(callstack, sizeof(callstack)/sizeof(callstack[0]));
for (int i = 0; i < frame_count; i++)
{
char location[1024];
Dl_info info;
if(dladdr(callstack[i],&info))
{
char command[256];
size_t VMA_addr=ConvertToVMA((size_t)callstack[i]);
//if(i!=crash_depth)
VMA_addr-=1; // https://stackoverflow.com/questions/11579509/wrong-line-numbers-from-addr2line/63841497#63841497
snprintf(command,sizeof(command),"addr2line -e %s -Ci %zx",info.dli_fname,VMA_addr);
system(command);
}
}
}
void Foo()
{
PrintCallStack();
}
int main()
{
Foo();
return 0;
}
I also want to clarify what addresses backtrace and backtrace_symbols generate and what addr2line expects.
addr2line expects Foo_VMA, or, if you're using --section=.text, Foo_file - text_file. backtrace returns Foo_mem. backtrace_symbols generates Foo_VMA somewhere.
One big mistake I made, and saw in several other posts, was assuming VMA_base = 0, i.e. that Foo_VMA = Foo_file = Foo_mem - ELF_mem, which is easy to calculate.
That often works, but some compilers (i.e. their linker scripts) use VMA_base > 0. Examples would be GCC 5.4 on Ubuntu 16 (0x400000) and clang 11 on MacOS (0x100000000).
For shared libs, it's always 0. It seems VMA_base is only meaningful for non-position-independent code; otherwise it has no effect on where the EXE is loaded in memory.
Also, neither karlphillip's solution nor this one requires compiling with -rdynamic, which would increase the binary size, especially for a large C++ program or shared lib, with useless entries in the dynamic symbol table that never get imported.
AFAICS all of the solutions provided so far won't print function names and line numbers from shared libraries. That's what I needed, so I altered karlphillip's solution (and another answer from a similar question) to resolve shared-library addresses using /proc/self/maps.
#include <stdlib.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <execinfo.h>
#include <stdbool.h>
struct Region { // one mapped file, for example a shared library
uintptr_t start;
uintptr_t end;
char* path;
};
static struct Region* getRegions(int* size) {
// parse /proc/self/maps and get list of mapped files
FILE* file;
int allocated = 10;
*size = 0;
struct Region* res;
uintptr_t regionStart = 0x00000000;
uintptr_t regionEnd = 0x00000000;
char* regionPath = "";
uintmax_t matchedStart;
uintmax_t matchedEnd;
char* matchedPath;
res = (struct Region*)malloc(sizeof(struct Region) * allocated);
file = fopen("/proc/self/maps", "r");
while (!feof(file)) {
fscanf(file, "%jx-%jx %*s %*s %*s %*s%*[ ]%m[^\n]\n", &matchedStart, &matchedEnd, &matchedPath);
bool bothNull = matchedPath == 0x0 && regionPath == 0x0;
bool similar = matchedPath && regionPath && !strcmp(matchedPath, regionPath);
if(bothNull || similar) {
free(matchedPath);
regionEnd = matchedEnd;
} else {
if(*size == allocated) {
allocated *= 2;
res = (struct Region*)realloc(res, sizeof(struct Region) * allocated);
}
res[*size].start = regionStart;
res[*size].end = regionEnd;
res[*size].path = regionPath;
(*size)++;
regionStart = matchedStart;
regionEnd = matchedEnd;
regionPath = matchedPath;
}
}
return res;
}
struct SemiResolvedAddress {
char* path;
uintptr_t offset;
};
static struct SemiResolvedAddress semiResolve(struct Region* regions, int regionsNum, uintptr_t address) {
// convert address from our address space to
// address suitable fo addr2line
struct Region* region;
struct SemiResolvedAddress res = {"", address};
for(region = regions; region < regions+regionsNum; region++) {
if(address >= region->start && address < region->end) {
res.path = region->path;
res.offset = address - region->start;
}
}
return res;
}
void printStacktraceWithLines(unsigned int max_frames)
{
int regionsNum;
fprintf(stderr, "stack trace:\n");
// storage array for stack trace address data
void* addrlist[max_frames+1];
// retrieve current stack addresses
int addrlen = backtrace(addrlist, sizeof(addrlist) / sizeof(void*));
if (addrlen == 0) {
fprintf(stderr, " <empty, possibly corrupt>\n");
return;
}
struct Region* regions = getRegions(&regionsNum);
for (int i = 1; i < addrlen; i++)
{
struct SemiResolvedAddress hres =
semiResolve(regions, regionsNum, (uintptr_t)(addrlist[i]));
char syscom[256];
sprintf(syscom, "addr2line -C -f -p -a -e %s 0x%jx", hres.path, (intmax_t)(hres.offset));
system(syscom);
}
free(regions);
}
C++23 <stacktrace>
Finally, this has arrived! More details/comparison with other systems at: print call stack in C or C++
stacktrace.cpp
#include <iostream>
#include <stacktrace>
void my_func_2(void) {
std::cout << std::stacktrace::current(); // Line 5
}
void my_func_1(double f) {
(void)f;
my_func_2(); // Line 10
}
void my_func_1(int i) {
(void)i;
my_func_2(); // Line 15
}
int main(int argc, char **argv) {
my_func_1(1); // Line 19
my_func_1(2.0); // Line 20
}
GCC 12.1.0 from Ubuntu 22.04 does not have support compiled in, so for now I built it from source as per: How to edit and re-build the GCC libstdc++ C++ standard library source? and set --enable-libstdcxx-backtrace=yes, and it worked!
Compile and run:
g++ -ggdb3 -O2 -std=c++23 -Wall -Wextra -pedantic -o stacktrace.out stacktrace.cpp -lstdc++_libbacktrace
./stacktrace.out
Output:
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(int) at /home/ciro/stacktrace.cpp:15
2# at :0
3# at :0
4# at :0
5#
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(double) at /home/ciro/stacktrace.cpp:10
2# at :0
3# at :0
4# at :0
5#
The trace is not perfect (the main line is missing), because of optimization I think. With -O0 it is better:
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(int) at /home/ciro/stacktrace.cpp:15
2# at /home/ciro/stacktrace.cpp:19
3# at :0
4# at :0
5# at :0
6#
0# my_func_2() at /home/ciro/stacktrace.cpp:5
1# my_func_1(double) at /home/ciro/stacktrace.cpp:10
2# at /home/ciro/stacktrace.cpp:20
3# at :0
4# at :0
5# at :0
6#
I don't know why the name main is missing, but the line is there.
The "extra" lines after main like:
3# at :0
4# at :0
5# at :0
6#
are probably stuff that runs before main and that ends up calling main: What happens before main in C++?