NS3 log time precision - C++

I am using ns-3.20 to find the timestamps of packets being sent from one node to another in a fat-tree topology, using the command
NS_LOG="*=level_info|prefix_func|prefix_time|prefix_node" ./waf --run scratch/fat-tree &> save-log.txt
After grepping a few selected lines containing info about packet 196, the output is of the form
Timestamp, Nodeid, Functioncalled, DevNo, pktid
Example lines are as follows
1.74547s 34 CsmaChannel:TransmitStart(): UID is 196)
1.74548s 34 CsmaChannel:TransmitEnd(): UID is 196)
1.74548s 23 Node:ReceiveFromDevice(): Node 23 ReceiveFromDevice: dev 2 (type=ns3::CsmaNetDevice) Packet UID 196
1.74548s 23 BridgeNetDevice:ReceiveFromDevice(): UID is 196
1.74548s 23 CsmaChannel:TransmitStart(): UID is 196)
1.74548s 34 CsmaChannel:PropagationCompleteEvent(): UID is 196)
.
.
.
For my research I need the timestamps to be in nanoseconds. Is there a way to configure this?
NB: I tried cout and fout within the functions, but that just prints the time without the node_id, which is useless for me.
Please help

First, I would suggest using the latest version of ns-3, ns-3.30.1; generally, it is a good idea to stay up to date. That said, my solution was tested with ns-3.30.1, so it is quite possible it doesn't work with ns-3.20. If it doesn't, refer to the comments in sample-log-time-format.cc.
I referenced §11.3 of the ns-3 manual, which pointed me to the example ./src/core/examples/sample-log-time-format.cc. Based on that example, here is a minimal reproducible example that produces your desired output.
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */

// set the time format and precision of LOG macros
// https://stackoverflow.com/questions/60148908
// this example is adapted from ./src/core/examples/sample-log-time-format.cc

#include "ns3/simulator.h"
#include "ns3/log.h"
#include "ns3/random-variable-stream.h"

using namespace ns3;

void
ReplacementTimePrinter (std::ostream &os)
{
  os << Simulator::Now ().GetNanoSeconds () << " ns";
}

void
ReplaceTimePrinter (void)
{
  std::cout << "Replacing time printer function after Simulator::Run ()" << std::endl;
  LogSetTimePrinter (&ReplacementTimePrinter);
}

int
main (int argc, char *argv[])
{
  LogComponentEnable ("RandomVariableStream", LOG_LEVEL_ALL);
  LogComponentEnableAll (LOG_PREFIX_TIME);

  Ptr<UniformRandomVariable> uniformRv = CreateObject<UniformRandomVariable> ();

  Simulator::Schedule (Seconds (0), &ReplaceTimePrinter);

  // schedule some bogus events to demonstrate the new TimePrinter
  Simulator::Schedule (NanoSeconds (1), &UniformRandomVariable::SetAntithetic, uniformRv, false);
  Simulator::Schedule (NanoSeconds (123), &UniformRandomVariable::SetAntithetic, uniformRv, false);
  Simulator::Schedule (NanoSeconds (123456), &UniformRandomVariable::SetAntithetic, uniformRv, false);
  Simulator::Schedule (NanoSeconds (123456789), &UniformRandomVariable::SetAntithetic, uniformRv, false);

  Simulator::Run ();
  Simulator::Destroy ();
}
Use
./waf --run <program-name>
to run your program, and output should be
RandomVariableStream:RandomVariableStream(0x7fc0aee287d0)
RandomVariableStream:UniformRandomVariable(0x7fc0aee287d0)
RandomVariableStream:SetStream(0x7fc0aee287d0, -1)
RandomVariableStream:SetAntithetic(0x7fc0aee287d0, 0)
Replacing time printer function after Simulator::Run ()
1 ns RandomVariableStream:SetAntithetic(0x7fc0aee287d0, 0)
123 ns RandomVariableStream:SetAntithetic(0x7fc0aee287d0, 0)
123456 ns RandomVariableStream:SetAntithetic(0x7fc0aee287d0, 0)
123456789 ns RandomVariableStream:SetAntithetic(0x7fc0aee287d0, 0)
RandomVariableStream:~RandomVariableStream(0x7fc0aee287d0)
Notice that rather than calling LogSetTimePrinter directly in our simulator code, we schedule an event that calls LogSetTimePrinter, with
Simulator::Schedule (Seconds (0), &ReplaceTimePrinter);
I wasn't sure why this is the case, but on further inspection of the documentation, it appears that GetImpl() calls LogSetTimePrinter, and GetImpl() is called by Simulator::Run(). This means that Simulator::Run() will "override" any call to LogSetTimePrinter made before Simulator::Run() is invoked. Thus, we need to schedule an event that sets the TimePrinter after the SimulatorImpl has been created.
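Applied to your fat-tree script, a minimal sketch of the same idea (hedged: the rest of your scratch/fat-tree code - topology, NS_LOG components, applications - is assumed and not shown here) would be to add a nanosecond TimePrinter and schedule its installation at time zero, before Simulator::Run ():
// Sketch only: drop into the existing scratch/fat-tree script.
#include "ns3/log.h"
#include "ns3/simulator.h"

using namespace ns3;

static void
NanosecondTimePrinter (std::ostream &os)
{
  // Replaces the default "1.74548s" prefix with e.g. "1745480000 ns"
  os << Simulator::Now ().GetNanoSeconds () << " ns";
}

static void
InstallNanosecondPrinter (void)
{
  LogSetTimePrinter (&NanosecondTimePrinter);
}

// ...inside the existing main (), just before Simulator::Run ():
//   Simulator::Schedule (Seconds (0), &InstallNanosecondPrinter);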

Related

C/C++: How to print a stack trace? [duplicate]

I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace.
My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc).
I would like my program to be able to generate a stack trace when it crashes, and the next time the user runs it, it will ask them if it is OK to send the stack trace to me so I can track down the problem. I can handle sending the info to me, but I don't know how to generate the trace string. Any ideas?
For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual.
Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:
#include <stdio.h>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

void handler(int sig) {
  void *array[10];
  size_t size;

  // get void*'s for all entries on the stack
  size = backtrace(array, 10);

  // print out all the frames to stderr
  fprintf(stderr, "Error: signal %d:\n", sig);
  backtrace_symbols_fd(array, size, STDERR_FILENO);
  exit(1);
}

void baz() {
  int *foo = (int*)-1; // make a bad pointer
  printf("%d\n", *foo); // causes segfault
}

void bar() { baz(); }
void foo() { bar(); }

int main(int argc, char **argv) {
  signal(SIGSEGV, handler);   // install our handler
  foo();                      // this will call foo, bar, and baz. baz segfaults.
}
Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace:
$ gcc -g -rdynamic ./test.c -o test
Executing this gets you this output:
$ ./test
Error: signal 11:
./test(handler+0x19)[0x400911]
/lib64/tls/libc.so.6[0x3a9b92e380]
./test(baz+0x14)[0x400962]
./test(bar+0xe)[0x400983]
./test(foo+0xe)[0x400993]
./test(main+0x28)[0x4009bd]
/lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb]
./test[0x40086a]
This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz.
It's even easier than "man backtrace": there's a little-documented (GNU-specific) library distributed with glibc as libSegFault.so, which I believe was written by Ulrich Drepper to support the program catchsegv (see "man catchsegv").
This gives us 3 possibilities. Instead of running "program -o hai":
Run within catchsegv:
$ catchsegv program -o hai
Link with libSegFault at runtime:
$ LD_PRELOAD=/lib/libSegFault.so program -o hai
Link with libSegFault at compile time:
$ gcc -g1 -lSegFault -o program program.cc
$ program -o hai
In all 3 cases, you will get clearer backtraces with less optimization (gcc -O0 or -O1) and debugging symbols (gcc -g). Otherwise, you may just end up with a pile of memory addresses.
You can also catch more signals for stack traces with something like:
$ export SEGFAULT_SIGNALS="all" # "all" signals
$ export SEGFAULT_SIGNALS="bus abrt" # SIGBUS and SIGABRT
The output will look something like this (notice the backtrace at the bottom):
*** Segmentation fault
Register dump:
 EAX: 0000000c   EBX: 00000080   ECX: 00000000   EDX: 0000000c
 ESI: bfdbf080   EDI: 080497e0   EBP: bfdbee38   ESP: bfdbee20
 EIP: 0805640f   EFLAGS: 00010282
 CS: 0073   DS: 007b   ES: 007b   FS: 0000   GS: 0033   SS: 007b
 Trap: 0000000e   Error: 00000004   OldMask: 00000000
 ESP/signal: bfdbee20   CR2: 00000024
 FPUCW: ffff037f   FPUSW: ffff0000   TAG: ffffffff
 IPOFF: 00000000   CSSEL: 0000   DATAOFF: 00000000   DATASEL: 0000
 ST(0) 0000 0000000000000000   ST(1) 0000 0000000000000000
 ST(2) 0000 0000000000000000   ST(3) 0000 0000000000000000
 ST(4) 0000 0000000000000000   ST(5) 0000 0000000000000000
 ST(6) 0000 0000000000000000   ST(7) 0000 0000000000000000
Backtrace:
/lib/libSegFault.so[0xb7f9e100]
??:0(??)[0xb7fa3400]
/usr/include/c++/4.3/bits/stl_queue.h:226(_ZNSt5queueISsSt5dequeISsSaISsEEE4pushERKSs)[0x805647a]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/player.cpp:73(_ZN6Player5inputESs)[0x805377c]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:159(_ZN6Socket4ReadEv)[0x8050698]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:413(_ZN12ServerSocket4ReadEv)[0x80507ad]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:300(_ZN12ServerSocket4pollEv)[0x8050b44]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/main.cpp:34(main)[0x8049a72]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5)[0xb7d1b775]
/build/buildd/glibc-2.9/csu/../sysdeps/i386/elf/start.S:122(_start)[0x8049801]
If you want to know the gory details, the best source is unfortunately the source: See http://sourceware.org/git/?p=glibc.git;a=blob;f=debug/segfault.c and its parent directory http://sourceware.org/git/?p=glibc.git;a=tree;f=debug
Linux
While the use of the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault has already been suggested, I see no mention of the intricacies necessary to ensure the resulting backtrace points to the actual location of the fault (at least for some architectures - x86 & ARM).
The first two entries in the stack frame chain when you get into the signal handler contain a return address inside the signal handler and one inside sigaction() in libc. The stack frame of the last function called before the signal (which is the location of the fault) is lost.
Code
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifndef __USE_GNU
#define __USE_GNU
#endif
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* This structure mirrors the one found in /usr/include/asm/ucontext.h */
typedef struct _sig_ucontext {
unsigned long uc_flags;
ucontext_t *uc_link;
stack_t uc_stack;
struct sigcontext uc_mcontext;
sigset_t uc_sigmask;
} sig_ucontext_t;
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
void * array[50];
void * caller_address;
char ** messages;
int size, i;
sig_ucontext_t * uc;
uc = (sig_ucontext_t *)ucontext;
/* Get the address at the time the signal was raised */
#if defined(__i386__) // gcc specific
caller_address = (void *) uc->uc_mcontext.eip; // EIP: x86 specific
#elif defined(__x86_64__) // gcc specific
caller_address = (void *) uc->uc_mcontext.rip; // RIP: x86_64 specific
#else
#error Unsupported architecture. // TODO: Add support for other arch.
#endif
fprintf(stderr, "signal %d (%s), address is %p from %p\n",
sig_num, strsignal(sig_num), info->si_addr,
(void *)caller_address);
size = backtrace(array, 50);
/* overwrite sigaction with caller's address */
array[1] = caller_address;
messages = backtrace_symbols(array, size);
/* skip first stack frame (points here) */
for (i = 1; i < size && messages != NULL; ++i)
{
fprintf(stderr, "[bt]: (%d) %s\n", i, messages[i]);
}
free(messages);
exit(EXIT_FAILURE);
}
int crash()
{
char * p = NULL;
*p = 0;
return 0;
}
int foo4()
{
crash();
return 0;
}
int foo3()
{
foo4();
return 0;
}
int foo2()
{
foo3();
return 0;
}
int foo1()
{
foo2();
return 0;
}
int main(int argc, char ** argv)
{
struct sigaction sigact;
sigact.sa_sigaction = crit_err_hdlr;
sigact.sa_flags = SA_RESTART | SA_SIGINFO;
if (sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL) != 0)
{
fprintf(stderr, "error setting signal handler for %d (%s)\n",
SIGSEGV, strsignal(SIGSEGV));
exit(EXIT_FAILURE);
}
foo1();
exit(EXIT_SUCCESS);
}
Output
signal 11 (Segmentation fault), address is (nil) from 0x8c50
[bt]: (1) ./test(crash+0x24) [0x8c50]
[bt]: (2) ./test(foo4+0x10) [0x8c70]
[bt]: (3) ./test(foo3+0x10) [0x8c8c]
[bt]: (4) ./test(foo2+0x10) [0x8ca8]
[bt]: (5) ./test(foo1+0x10) [0x8cc4]
[bt]: (6) ./test(main+0x74) [0x8d44]
[bt]: (7) /lib/libc.so.6(__libc_start_main+0xa8) [0x40032e44]
All the hazards of calling the backtrace() functions in a signal handler still exist and should not be overlooked, but I find the functionality I described here quite helpful in debugging crashes.
It is important to note that the example I provided is developed/tested on Linux for x86. I have also successfully implemented this on ARM using uc_mcontext.arm_pc instead of uc_mcontext.eip.
Here's a link to the article where I learned the details for this implementation:
http://www.linuxjournal.com/article/6391
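For reference, a sketch of what the architecture #if chain from crit_err_hdlr above might look like with the ARM case added (the arm_pc field is taken from the note above; I haven't verified it against every kernel/libc version):
/* Sketch: the #if chain from the example above, extended with the ARM case */
#if defined(__i386__)              // gcc specific
    caller_address = (void *) uc->uc_mcontext.eip;     // EIP: x86 specific
#elif defined(__x86_64__)          // gcc specific
    caller_address = (void *) uc->uc_mcontext.rip;     // RIP: x86_64 specific
#elif defined(__arm__)             // gcc specific
    caller_address = (void *) uc->uc_mcontext.arm_pc;  // ARM program counter
#else
#error Unsupported architecture.   // TODO: Add support for other arch.
#endif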
Even though a correct answer has been provided that describes how to use the GNU libc backtrace() function [1], and I provided my own answer that describes how to ensure a backtrace from a signal handler points to the actual location of the fault [2], I don't see any mention of demangling C++ symbols output from the backtrace.
When obtaining backtraces from a C++ program, the output can be run through c++filt [1] to demangle the symbols, or abi::__cxa_demangle [1] can be used directly.
[1] Linux & OS X. Note that c++filt and __cxa_demangle are GCC specific.
[2] Linux
The following C++ Linux example uses the same signal handler as my other answer and demonstrates how c++filt can be used to demangle the symbols.
Code:
class foo
{
public:
  foo() { foo1(); }

private:
  void foo1() { foo2(); }
  void foo2() { foo3(); }
  void foo3() { foo4(); }
  void foo4() { crash(); }
  void crash() { char * p = NULL; *p = 0; }
};

int main(int argc, char ** argv)
{
  // Setup signal handler for SIGSEGV
  ...

  foo * f = new foo();
  return 0;
}
Output (./test):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(crash__3foo+0x13) [0x8048e07]
[bt]: (2) ./test(foo4__3foo+0x12) [0x8048dee]
[bt]: (3) ./test(foo3__3foo+0x12) [0x8048dd6]
[bt]: (4) ./test(foo2__3foo+0x12) [0x8048dbe]
[bt]: (5) ./test(foo1__3foo+0x12) [0x8048da6]
[bt]: (6) ./test(__3foo+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
Demangled Output (./test 2>&1 | c++filt):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(foo::crash(void)+0x13) [0x8048e07]
[bt]: (2) ./test(foo::foo4(void)+0x12) [0x8048dee]
[bt]: (3) ./test(foo::foo3(void)+0x12) [0x8048dd6]
[bt]: (4) ./test(foo::foo2(void)+0x12) [0x8048dbe]
[bt]: (5) ./test(foo::foo1(void)+0x12) [0x8048da6]
[bt]: (6) ./test(foo::foo(void)+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
The following builds on the signal handler from my original answer and can replace the signal handler in the above example to demonstrate how abi::__cxa_demangle can be used to demangle the symbols. This signal handler produces the same demangled output as the above example.
Code:
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
sig_ucontext_t * uc = (sig_ucontext_t *)ucontext;
void * caller_address = (void *) uc->uc_mcontext.eip; // x86 specific
std::cerr << "signal " << sig_num
<< " (" << strsignal(sig_num) << "), address is "
<< info->si_addr << " from " << caller_address
<< std::endl << std::endl;
void * array[50];
int size = backtrace(array, 50);
array[1] = caller_address;
char ** messages = backtrace_symbols(array, size);
// skip first stack frame (points here)
for (int i = 1; i < size && messages != NULL; ++i)
{
char *mangled_name = 0, *offset_begin = 0, *offset_end = 0;
// find parentheses and +address offset surrounding the mangled name
for (char *p = messages[i]; *p; ++p)
{
if (*p == '(')
{
mangled_name = p;
}
else if (*p == '+')
{
offset_begin = p;
}
else if (*p == ')')
{
offset_end = p;
break;
}
}
// if the line could be processed, attempt to demangle the symbol
if (mangled_name && offset_begin && offset_end &&
mangled_name < offset_begin)
{
*mangled_name++ = '\0';
*offset_begin++ = '\0';
*offset_end++ = '\0';
int status;
char * real_name = abi::__cxa_demangle(mangled_name, 0, 0, &status);
// if demangling is successful, output the demangled function name
if (status == 0)
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< real_name << "+" << offset_begin << offset_end
<< std::endl;
}
// otherwise, output the mangled function name
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< mangled_name << "+" << offset_begin << offset_end
<< std::endl;
}
free(real_name);
}
// otherwise, print the whole line
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << std::endl;
}
}
std::cerr << std::endl;
free(messages);
exit(EXIT_FAILURE);
}
Might be worth looking at Google Breakpad, a cross-platform crash dump generator and tools to process the dumps.
You did not specify your operating system, so this is difficult to answer. If you are using a system based on gnu libc, you might be able to use the libc function backtrace().
GCC also has two builtins that can assist you, but which may or may not be implemented fully on your architecture, and those are __builtin_frame_address and __builtin_return_address. Both of which want an immediate integer level (by immediate, I mean it can't be a variable). If __builtin_frame_address for a given level is non-zero, it should be safe to grab the return address of the same level.
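To make that concrete, here is a minimal sketch of those two builtins (assuming they are implemented on your target; the argument must be a constant, and level 0 - the current function - is the only level that is guaranteed to be safe to query):
#include <stdio.h>

// Print the current frame address and the caller's return address
// using the GCC builtins; level 0 refers to this function itself.
void where_am_i(void)
{
    void *frame = __builtin_frame_address(0);
    void *ret   = __builtin_return_address(0);
    printf("frame: %p, return address: %p\n", frame, ret);
}

int main(void)
{
    where_am_i();
    return 0;
}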
Thank you to enthusiasticgeek for drawing my attention to the addr2line utility.
I've written a quick and dirty script to process the output of jschmier's answer above (much thanks to jschmier!) using the addr2line utility.
The script accepts a single argument: The name of the file containing the output from jschmier's utility.
The output should print something like the following for each level of the trace:
BACKTRACE: testExe 0x8A5db6b
FILE: pathToFile/testExe.C:110
FUNCTION: testFunction(int)
107
108
109 int* i = 0x0;
*110 *i = 5;
111
112 }
113 return i;
Code:
#!/bin/bash
LOGFILE=$1
NUM_SRC_CONTEXT_LINES=3
old_IFS=$IFS # save the field separator
IFS=$'\n' # new field separator, the end of line
for bt in `cat $LOGFILE | grep '\[bt\]'`; do
IFS=$old_IFS # restore default field separator
printf '\n'
EXEC=`echo $bt | cut -d' ' -f3 | cut -d'(' -f1`
ADDR=`echo $bt | cut -d'[' -f3 | cut -d']' -f1`
echo "BACKTRACE: $EXEC $ADDR"
A2L=`addr2line -a $ADDR -e $EXEC -pfC`
#echo "A2L: $A2L"
FUNCTION=`echo $A2L | sed 's/\<at\>.*//' | cut -d' ' -f2-99`
FILE_AND_LINE=`echo $A2L | sed 's/.* at //'`
echo "FILE: $FILE_AND_LINE"
echo "FUNCTION: $FUNCTION"
# print offending source code
SRCFILE=`echo $FILE_AND_LINE | cut -d':' -f1`
LINENUM=`echo $FILE_AND_LINE | cut -d':' -f2`
if ([ -f $SRCFILE ]); then
cat -n $SRCFILE | grep -C $NUM_SRC_CONTEXT_LINES "^ *$LINENUM\>" | sed "s/ $LINENUM/*$LINENUM/"
else
echo "File not found: $SRCFILE"
fi
IFS=$'\n' # new field separator, the end of line
done
IFS=$old_IFS # restore default field separator
ulimit -c <value> sets the core file size limit on unix. By default, the core file size limit is 0. You can see your ulimit values with ulimit -a.
also, if you run your program from within gdb, it will halt your program on "segmentation violations" (SIGSEGV, generally when you accessed a piece of memory that you hadn't allocated) or you can set breakpoints.
ddd and nemiver are front-ends for gdb which make working with it much easier for the novice.
It's important to note that once you generate a core file you'll need to use the gdb tool to look at it. For gdb to make sense of your core file, you must tell gcc to instrument the binary with debugging symbols: to do this, you compile with the -g flag:
$ g++ -g prog.cpp -o prog
Then, you can either set "ulimit -c unlimited" to let it dump a core, or just run your program inside gdb. I like the second approach more:
$ gdb ./prog
... gdb startup output ...
(gdb) run
... program runs and crashes ...
(gdb) where
... gdb outputs your stack trace ...
I hope this helps.
It looks like one of the recent C++ Boost versions added a library that provides exactly what you want, and the code is probably cross-platform.
It is boost::stacktrace, which you can use as in this Boost sample:
#include <filesystem>
#include <sstream>
#include <fstream>
#include <signal.h>     // ::signal, ::raise
#include <boost/stacktrace.hpp>

const char* backtraceFileName = "./backtraceFile.dump";

void signalHandler(int)
{
    ::signal(SIGSEGV, SIG_DFL);
    ::signal(SIGABRT, SIG_DFL);
    boost::stacktrace::safe_dump_to(backtraceFileName);
    ::raise(SIGABRT);
}

void sendReport()
{
    if (std::filesystem::exists(backtraceFileName))
    {
        std::ifstream file(backtraceFileName);

        auto st = boost::stacktrace::stacktrace::from_dump(file);
        std::ostringstream backtraceStream;
        backtraceStream << st << std::endl;

        // sending the code from st

        file.close();
        std::filesystem::remove(backtraceFileName);
    }
}

int main()
{
    ::signal(SIGSEGV, signalHandler);
    ::signal(SIGABRT, signalHandler);

    sendReport();

    // ... rest of code
}
On Linux you compile the code above with:
g++ --std=c++17 file.cpp -lstdc++fs -lboost_stacktrace_backtrace -ldl -lbacktrace
Example backtrace copied from boost documentation:
0# bar(int) at /path/to/source/file.cpp:70
1# bar(int) at /path/to/source/file.cpp:70
2# bar(int) at /path/to/source/file.cpp:70
3# bar(int) at /path/to/source/file.cpp:70
4# main at /path/to/main.cpp:93
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# _start
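If you just want to print the current call stack directly (outside of a signal handler) rather than dump it to a file, you can also stream a stacktrace object; a minimal sketch, assuming the same boost_stacktrace link flags as above:
#include <iostream>
#include <boost/stacktrace.hpp>

void showCallStack()
{
    // Captures the current call stack and prints it, one frame per line.
    std::cout << boost::stacktrace::stacktrace() << std::endl;
}

int main()
{
    showCallStack();
    return 0;
}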
I've been looking at this problem for a while.
Buried deep in the Google Performance Tools README
http://code.google.com/p/google-perftools/source/browse/trunk/README
there is a mention of libunwind:
http://www.nongnu.org/libunwind/
I would love to hear opinions of this library.
The problem with -rdynamic is that it can increase the size of the binary relatively significantly in some cases
The new king in town has arrived
https://github.com/bombela/backward-cpp
1 header to place in your code and 1 library to install.
Personally I call it using this function
#include "backward.hpp"
void stacker() {
using namespace backward;
StackTrace st;
st.load_here(99); //Limit the number of trace depth to 99
st.skip_n_firsts(3);//This will skip some backward internal function from the trace
Printer p;
p.snippet = true;
p.object = true;
p.color = true;
p.address = true;
p.print(st, stderr);
}
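If you would rather have backward-cpp print the trace automatically when the program crashes instead of calling stacker() by hand, the library also ships a signal-handling helper; a minimal sketch using its documented SignalHandling class:
#include "backward.hpp"

int main()
{
    // Installs handlers for SIGSEGV, SIGABRT and friends; a crash anywhere
    // below prints a stack trace to stderr before the process dies.
    backward::SignalHandling sh;

    // ... rest of the program
    return 0;
}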
Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.
You can use DeathHandler - small C++ class which does everything for you, reliable.
Forget about changing your sources and doing hacks with the backtrace() function or macros - these are just poor solutions.
As a properly working solution, I would advise:
Compile your program with the "-g" flag to embed debug symbols into the binary (don't worry, this will not impact your performance).
On Linux, run the command "ulimit -c unlimited" to allow the system to create big crash dumps.
When your program crashes, you will see a file called "core" in the working directory.
Run the following command to print the backtrace to stdout: gdb -batch -ex "backtrace" ./your_program_exe ./core
This will print a proper, readable backtrace of your program (with source file names and line numbers).
Moreover, this approach gives you the freedom to automate your system:
have a short script that checks whether the process created a core dump, and then sends the backtraces by email to developers, or logs them into some logging system.
ulimit -c unlimited
is a system setting which will allow a core dump to be created after your application crashes - in this case of unlimited size. Look for a file called core in the very same directory. Make sure you compiled your code with debugging information enabled!
regards
Look at:
man 3 backtrace
And:
#include <execinfo.h>
int backtrace(void **buffer, int size);
These are GNU extensions.
As a Windows-only solution, you can get the equivalent of a stack trace (with much, much more information) using Windows Error Reporting. With just a few registry entries, it can be set up to collect user-mode dumps:
Starting with Windows Server 2008 and Windows Vista with Service Pack 1 (SP1), Windows Error Reporting (WER) can be configured so that full user-mode dumps are collected and stored locally after a user-mode application crashes. [...]
This feature is not enabled by default. Enabling the feature requires administrator privileges. To enable and configure the feature, use the following registry values under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps key.
You can set the registry entries from your installer, which has the required privileges.
Creating a user-mode dump has the following advantages over generating a stack trace on the client:
It's already implemented in the system. You can either use WER as outlined above, or call MiniDumpWriteDump yourself, if you need more fine-grained control over the amount of information to dump. (Make sure to call it from a different process.)
Way more complete than a stack trace. Among others it can contain local variables, function arguments, stacks for other threads, loaded modules, and so on. The amount of data (and consequently size) is highly customizable.
No need to ship debug symbols. This both drastically decreases the size of your deployment, as well as makes it harder to reverse-engineer your application.
Largely independent of the compiler you use. Using WER does not even require any code. Either way, having a way to get a symbol database (PDB) is very useful for offline analysis. I believe GCC can either generate PDB's, or there are tools to convert the symbol database to the PDB format.
Take note, that WER can only be triggered by an application crash (i.e. the system terminating a process due to an unhandled exception). MiniDumpWriteDump can be called at any time. This may be helpful if you need to dump the current state to diagnose issues other than a crash.
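For the in-process MiniDumpWriteDump route, here is a rough sketch (hedged: the dump file name and dump type are arbitrary choices of mine, error handling is minimal, and - as noted above - a production setup should prefer WER or write the dump from a separate process):
#include <windows.h>
#include <dbghelp.h>   // link with Dbghelp.lib

// Top-level exception filter that writes a minidump next to the executable.
LONG WINAPI WriteMiniDump(EXCEPTION_POINTERS* exceptionInfo)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = exceptionInfo;
        mei.ClientPointers = FALSE;

        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate
}

int main()
{
    SetUnhandledExceptionFilter(WriteMiniDump);
    // ... rest of the program
    return 0;
}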
Mandatory reading, if you want to evaluate the applicability of mini dumps:
Effective minidumps
Effective minidumps (Part 2)
See the Stack Trace facility in ACE (ADAPTIVE Communication Environment). It's already written to cover all major platforms (and more). The library is BSD-style licensed so you can even copy/paste the code if you don't want to use ACE.
I can help with the Linux version: the function backtrace, backtrace_symbols and backtrace_symbols_fd can be used. See the corresponding manual pages.
*nix:
you can intercept SIGSEGV (usually this signal is raised before crashing) and save the info to a file (besides the core file, which you can use for debugging with gdb, for example).
win:
Check this from msdn.
You can also look at the google's chrome code to see how it handles crashes. It has a nice exception handling mechanism.
I have seen a lot of answers here performing a signal handler and then exiting.
That's the way to go, but remember a very important fact: If you want to get the core dump for the generated error, you can't call exit(status). Call abort() instead!
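In other words, a handler along the lines of the backtrace examples above, but ending in abort() so the core dump is still produced; a minimal sketch:
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Print a backtrace, then abort() so the kernel still writes a core dump
// (calling exit() here would suppress it).
void handler(int sig)
{
    void *frames[32];
    int n = backtrace(frames, 32);
    fprintf(stderr, "Error: signal %d:\n", sig);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    signal(SIGABRT, SIG_DFL);   // make sure abort() runs the default action
    abort();
}

int main()
{
    signal(SIGSEGV, handler);   // install our handler
    int *p = NULL;
    return *p;                  // crash: backtrace is printed, then core is dumped
}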
I found that @tgamblin's solution is not complete.
It cannot handle a stack overflow.
I think that is because, by default, the signal handler is called on the same stack and SIGSEGV is thrown twice. To protect against this, you need to register an independent stack for the signal handler.
You can check this with the code below. By default the handler fails. With the macro STACK_OVERFLOW defined, it's all right.
#include <iostream>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <string>
#include <cassert>
using namespace std;
//#define STACK_OVERFLOW
#ifdef STACK_OVERFLOW
static char stack_body[64*1024];
static stack_t sigseg_stack;
#endif
static struct sigaction sigseg_handler;
void handler(int sig) {
cerr << "sig seg fault handler" << endl;
const int asize = 10;
void *array[asize];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, asize);
// print out all the frames to stderr
cerr << "stack trace: " << endl;
backtrace_symbols_fd(array, size, STDERR_FILENO);
cerr << "resend SIGSEGV to get core dump" << endl;
signal(sig, SIG_DFL);
kill(getpid(), sig);
}
void foo() {
foo();
}
int main(int argc, char **argv) {
#ifdef STACK_OVERFLOW
sigseg_stack.ss_sp = stack_body;
sigseg_stack.ss_flags = SS_ONSTACK;
sigseg_stack.ss_size = sizeof(stack_body);
assert(!sigaltstack(&sigseg_stack, nullptr));
sigseg_handler.sa_flags = SA_ONSTACK;
#else
sigseg_handler.sa_flags = SA_RESTART;
#endif
sigseg_handler.sa_handler = &handler;
assert(!sigaction(SIGSEGV, &sigseg_handler, nullptr));
cout << "sig action set" << endl;
foo();
return 0;
}
I would use the code that generates a stack trace for leaked memory in Visual Leak Detector. This only works on Win32, though.
If you still want to go it alone as I did you can link against bfd and avoid using addr2line as I have done here:
https://github.com/gnif/LookingGlass/blob/master/common/src/platform/linux/crash.c
This produces the output:
[E] crash.linux.c:170 | crit_err_hdlr | ==== FATAL CRASH (a12-151-g28b12c85f4+1) ====
[E] crash.linux.c:171 | crit_err_hdlr | signal 11 (Segmentation fault), address is (nil)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (0) /home/geoff/Projects/LookingGlass/client/src/main.c:936 (register_key_binds)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (1) /home/geoff/Projects/LookingGlass/client/src/main.c:1069 (run)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (2) /home/geoff/Projects/LookingGlass/client/src/main.c:1314 (main)
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f8aa65f809b]
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (4) ./looking-glass-client(_start+0x2a) [0x55c70fc4aeca]
In addition to the above answers, here is how you can make a Debian Linux OS generate a core dump:
Create a “coredumps” folder in the user's home folder
Go to /etc/security/limits.conf. Below the '*' line, type “* soft core unlimited”, and “root soft core unlimited” if enabling core dumps for root, to allow unlimited space for core dumps.
NOTE: “* soft core unlimited” does not cover root, which is why root has to be specified in its own line.
To check these values, log out, log back in, and type “ulimit -a”. “Core file size” should be set to unlimited.
Check the .bashrc files (user, and root if applicable) to make sure that ulimit is not set there. Otherwise, the value above will be overwritten on startup.
Open /etc/sysctl.conf.
Enter the following at the bottom: “kernel.core_pattern = /home/<user>/coredumps/%e_%t.dump”. (%e will be the process name, and %t will be the system time)
Exit and type “sysctl -p” to load the new configuration
Check /proc/sys/kernel/core_pattern and verify that this matches what you just typed in.
Core dumping can be tested by running a process on the command line (“<executable> &”), and then killing it with “kill -11 <pid>”. If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
gdb -ex 'set confirm off' -ex r -ex bt -ex q <my-program>
On Linux/unix/MacOSX use core files (you can enable them with ulimit or compatible system call). On Windows use Microsoft error reporting (you can become a partner and get access to your application crash data).
I forgot about the GNOME tech of "apport", but I don't know much about using it. It is used to generate stacktraces and other diagnostics for processing and can automatically file bugs. It's certainly worth checking in to.
You are probably not going to like this - all I can say in its favour is that it works for me, and I have similar but not identical requirements: I am writing a compiler/transpiler for a 1970's Algol-like language which uses C as its output and then compiles the C, so that as far as the user is concerned, they're generally not aware of C being involved. Although you might call it a transpiler, it's effectively a compiler that uses C as its intermediate code.
The language being compiled has a history of providing good diagnostics and a full backtrace in the original native compilers. I've been able to find gcc compiler flags and libraries etc. that allow me to trap most of the runtime errors that the original compilers did (although with one glaring exception - unassigned variable trapping). When a runtime error occurs (e.g. arithmetic overflow, divide by zero, array index out of bounds, etc.) the original compilers output a backtrace to the console listing all variables in the stack frames of every active procedure call.
I struggled to get this effect in C, but eventually did so with what can only be described as a hack... When the program is invoked, the wrapper that supplies the C "main" looks at its argv, and if a special option is not present, it restarts itself under gdb with an altered argv containing both gdb options and the 'magic' option string for the program itself. This restarted version then hides those strings from the user's code by restoring the original arguments before calling the main block of the code written in our language. When an error occurs (as long as it is not one explicitly trapped within the program by user code), it exits to gdb which prints the required backtrace.
Key lines of code in the startup sequence include:
if ((argc >= 1) && (strcmp(origargv[argc-1], "--restarting-under-gdb")) != 0) {
  // initial invocation
  // the "--restarting-under-gdb" option is how the copy running under gdb knows
  // not to start another gdb process.
and
char *gdb [] = {
  "/usr/bin/gdb", "-q", "-batch", "-nx", "-nh", "-return-child-result",
  "-ex", "run",
  "-ex", "bt full",
  "--args"
};
The original arguments are appended to the gdb options above. That should be enough of a hint for you to do something similar for your own system.
I did look at other library-supported backtrace options (eg libbacktrace,
https://codingrelic.geekhold.com/2010/09/gcc-function-instrumentation.html, etc) but they only output the procedure call stack, not the local variables. However if anyone knows of any cleaner mechanism to get a similar effect, do please let us know. The main downside to this is that the variables are printed in C syntax, not the syntax of the language the user writes in. And (until I add suitable #line directives on every generated line of C :-() the backtrace lists the C source file and line numbers.
G
PS The gcc compile options I use are:
GCCOPTS=" -Wall -Wno-return-type -Wno-comment -g -fsanitize=undefined
-fsanitize-undefined-trap-on-error -fno-sanitize-recover=all -frecord-gcc-switches
-fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -ftrapv
-grecord-gcc-switches -O0 -ggdb3 "

Disable glog's "LOG(INFO)" logging

I'm trying to optimize my C++ program. It uses caffe.
When executing my program, caffe outputs around 1GB (!) of info logs every 15 minutes. I suspect this impacts efficiency significantly, but I haven't found a way to turn logging off. In this question someone suggested setting FLAGS_v manually.
With the following code I can disable VLOG logs by level, but LOG(x) logs are unaffected.
First lines in main():
FLAGS_v = 1; //disables vlog(2), vlog(3), vlog(4)
VLOG(0) << "Verbose 0";
VLOG(1) << "Verbose 1";
VLOG(2) << "Verbose 2";
VLOG(3) << "Verbose 3";
VLOG(4) << "Verbose 4";
LOG(INFO) << "LOG(INFO)";
LOG(WARNING) << "LOG(WARNING)";
LOG(ERROR) << "LOG(ERROR)";
Output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0523 19:06:51.484634 14115 main.cpp:381] Verbose 0
I0523 19:06:51.484699 14115 main.cpp:382] Verbose 1
I0523 19:06:51.484705 14115 main.cpp:386] LOG(INFO)
W0523 19:06:51.484710 14115 main.cpp:387] LOG(WARNING)
E0523 19:06:51.484715 14115 main.cpp:388] LOG(ERROR)
Is there another flag I'm unaware of? I'm thinking of commenting every LOG(INFO) line out, but I would like a more elegant solution. (I'd prefer a c++ solution over a command line flag solution).
This works in C++ source code.
google::InitGoogleLogging("XXX");
google::SetCommandLineOption("GLOG_minloglevel", "2");
you need to set your environment variable
GLOG_minloglevel=2
then run your executable.
You can find more information here (at the bottom of this page there is a section on stripping LOG()s from your code using a macro definition).
If you want to turn logging off at the code level, you can use this.
Just add the line below to your C++ code in the Init method of src/caffe/net.cpp and rebuild caffe:
fLI::FLAGS_minloglevel=3;
Partial view of the function where this line should be added:
template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
  fLI::FLAGS_minloglevel=3;
  // Set phase from the state.
  phase_ = in_param.state().phase();
  // Filter layers based on their include/exclude rules and
  // the current NetState.
  NetParameter filtered_param;
  FilterNet(in_param, &filtered_param);
  LOG(INFO) << "Initializing net from parameters: " << std::endl
            << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  NetParameter param;
  InsertSplits(filtered_param, &param);
  // Basically, build all the layers and set up their connections.
  name_ = param.name();
  .
  .
  .
  .
Set log level according to your necessity.
The environment variable "GLOG_minloglevel" will filter some log but they have been compile in your executable file. If you want to disable them during compiling time, define a macro:
"#define GOOGLE_STRIP_LOG 1"
This is the comment in logging.h:
111 // The global value of GOOGLE_STRIP_LOG. All the messages logged to
112 // LOG(XXX) with severity less than GOOGLE_STRIP_LOG will not be displayed.
113 // If it can be determined at compile time that the message will not be
114 // printed, the statement will be compiled out.
115 //
116 // Example: to strip out all INFO and WARNING messages, use the value
117 // of 2 below. To make an exception for WARNING messages from a single
118 // file, add "#define GOOGLE_STRIP_LOG 1" to that file _before_ including
119 // base/logging.h
120 #ifndef GOOGLE_STRIP_LOG
121 #define GOOGLE_STRIP_LOG 0
122 #endif
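A minimal sketch of what that looks like in a translation unit (per the comment above, the value 2 strips both INFO and WARNING messages at compile time):
#define GOOGLE_STRIP_LOG 2      // must come before including the glog header
#include <glog/logging.h>

int main(int argc, char* argv[])
{
    google::InitGoogleLogging(argv[0]);
    LOG(INFO)    << "compiled out entirely";
    LOG(WARNING) << "compiled out entirely";
    LOG(ERROR)   << "still present";
    return 0;
}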
To add to the answer of Qi Cai:
if there is a dependency on the gflags library, then remove the "GLOG_" prefix.
google::SetCommandLineOption("minloglevel", "2");
Each level maps to:
INFO : 0
WARNING : 1 ...
glog version
commit d4e8eb
Date: 2021-03-02 09:59:36 +0100
#include <glog/logging.h>
FLAGS_minloglevel = 100;

Process management code behaves differently on Linux and Windows - Why?

(Context) I'm developing a cross-platform (Windows and Linux) application for distributing files among computers, based on BitTorrent Sync. I've made it in C# already, and am now porting to C++ as an exercise.
BTSync can be started in API mode, and for such, one must start the 'btsync' executable passing the name and location of a config file as arguments.
At this point, my greatest problem is getting my application to deal with the executable. I found Boost.Process when searching for a cross-platform process management library, and decided to give it a try. It seems that v0.5 is its latest working release, as some evidence suggests, and it can be inferred that a number of people are using it.
I implemented the library as follows (relevant code only):
File: test.hpp
namespace testingBoostProcess
{
class Test
{
void StartSyncing();
};
}
File: Test.cpp
#include <string>
#include <vector>
#include <iostream>
#include <boost/process.hpp>
#include <boost/process/mitigate.hpp>
#include "test.hpp"
using namespace std;
using namespace testingBoostProcess;
namespace bpr = ::boost::process;
#ifdef _WIN32
const vector<wstring> EXE_NAME_ARGS = { L"btsync.exe", L"/config", L"conf.json" };
#else
const vector<string> EXE_NAME_ARGS = { "btsync", "--config", "conf.json" };
#endif
void Test::StartSyncing()
{
cout << "Starting Server...";
try
{
bpr::child exeServer = bpr::execute(bpr::initializers::set_args(EXE_NAME_ARGS),
bpr::initializers::throw_on_error(), bpr::initializers::inherit_env());
auto exitStatus = bpr::wait_for_exit(exeServer); // type will be either DWORD or int
int exitCode = BOOST_PROCESS_EXITSTATUS(exitStatus);
cout << " ok" << "\tstatus: " << exitCode << "\n";
}
catch (const exception& excStartExeServer)
{
cout << "\n" << "Error: " << excStartExeServer.what() << "\n";
}
}
(Problem) On Windows, the above code will start btsync and wait (block) until the process is terminated (either by using Task Manager or by the API's shutdown method), just like desired.
But on Linux, it finishes execution immediately after starting the process, as if wait_for_exit() isn't there at all, though the btsync process isn't terminated.
In an attempt to see if that has something to do with the btsync executable itself, I replaced it by this simple program:
File: Fake-Btsync.cpp
#include <cstdio>
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#define SLEEP Sleep(20000)
#include <Windows.h>
#else
#include <unistd.h>
#define SLEEP sleep(20)
#endif
using namespace std;
int main(int argc, char* argv[])
{
for (int i = 0; i < argc; i++)
{
printf(argv[i]);
printf("\n");
}
SLEEP;
return 0;
}
When used with this program, instead of the original btsync downloaded from the official website, my application works as desired. It will block for 20 seconds and then exit.
Question: What is the reason for the described behavior? The only thing I can think of is that btsync restarts itself on Linux. But how to confirm that? Or what else could it be?
Update: All I needed to do was to know about what forking is and how it works, as pointed in sehe's answer (thanks!).
Question 2: If I use the System Monitor to send an End command to the child process 'Fake-Btsync' while my main application is blocked, wait_for_exit() will throw an exception saying:
waitpid(2) failed: No child processes
Which is a different behavior than on Windows, where it simply says "ok" and terminates with status 0.
Update 2: sehe's answer is great, but didn't quite address Question 2 in a way I could actually understand. I'll write a new question about that and post the link here.
The problem is your assumption about btsync. Let's start it:
./btsync
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
BitTorrent Sync forked to background. pid = 24325. default port = 8888
So, that's the whole story right there: BitTorrent Sync forked to background. Nothing more. Nothing less. If you want to, btsync --help tells you to pass --nodaemon.
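Concretely, in the Test.cpp from the question that just means adding --nodaemon to the argument vector, e.g. (sketch of the Linux branch only, assuming the flags combine as btsync --help suggests):
#else
// Linux: keep btsync in the foreground so wait_for_exit() actually waits.
const vector<string> EXE_NAME_ARGS = { "btsync", "--nodaemon", "--config", "conf.json" };
#endif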
Testing Process Termination
Let's pass --nodaemon and run btsync using the test program. In a separate subshell, let's kill the child btsync process after 5 seconds:
sehe@desktop:/tmp$ (./test; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24553
[2] 24554
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
[20141029 10:51:16.344] total physical memory 536870912 max disk cache 2097152
[20141029 10:51:16.344] Using IP address 192.168.2.136
[20141029 10:51:16.346] Loading config file version 1.4.93
[20141029 10:51:17.389] UPnP: Device error "http://192.168.2.1:49000/l2tpv3.xml": (-2)
[20141029 10:51:17.407] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564. Deleting mapping and trying again: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.415] UPnP: ERROR removing TCP port 43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.423] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:21.428] Received shutdown request via signal 15
[20141029 10:51:21.428] Shutdown. Saving config sync.dat
Starting Server... ok status: 0
exit code 0
[1]- Done ( ./test; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m6.093s
user 0m0.003s
sys 0m0.026s
No problem!
A Better Fake Btsync
This should still be portable and be (much) better behaved when killed/terminated/interrupted:
#include <boost/asio/signal_set.hpp>
#include <boost/asio.hpp>
#include <algorithm>   // std::copy
#include <iterator>    // std::ostream_iterator
#include <string>
#include <iostream>
int main(int argc, char* argv[])
{
boost::asio::io_service is;
boost::asio::signal_set ss(is);
boost::asio::deadline_timer timer(is, boost::posix_time::seconds(20));
ss.add(SIGINT);
ss.add(SIGTERM);
auto stop = [&]{
ss.cancel(); // one of these will be redundant
timer.cancel();
};
ss.async_wait([=](boost::system::error_code ec, int sig){
std::cout << "Signal received: " << sig << " (ec: '" << ec.message() << "')\n";
stop();
});
timer.async_wait([&](boost::system::error_code ec){
std::cout << "Timer: '" << ec.message() << "'\n";
stop();
});
std::copy(argv, argv+argc, std::ostream_iterator<std::string>(std::cout, "\n"));
is.run();
return 0;
}
You can test whether it is well-behaved
(./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
The same test can be run with "official" btsync and "fake" btsync. Output on my linux box:
sehe@desktop:/tmp$ (./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24654
[2] 24655
./btsync
--nodaemon
Signal received: 15 (ec: 'Success')
Timer: 'Operation canceled'
exit code 0
[1]- Done ( ./btsync --nodaemon; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m5.014s
user 0m0.001s
sys 0m0.014s

OpenCV and Network Cameras -or- How to spy on the neighbors?

A bit of context: this program was built originally to work with USB cameras, but because of the setup between where the cameras need to be and where the computer is, it makes more sense to switch to cameras run over a network. Now I'm trying to convert the program to accomplish this, but my efforts thus far have met with poor results. I've also asked this same question over on the OpenCV forums. Help me spy on my neighbors! (This is with their permission, of course!) :D
I'm using:
OpenCV v2.4.6.0
C++
D-Link Cloud Camera 7100 (Installer is DCS-7010L, according to the instructions.)
I am trying to access the DLink camera's video feed through OpenCV.
I can access the camera through its IP address with a browser without any issues. Unfortunately, my program is less cooperative. When attempting to access the camera, the program gives the OpenCV-generated error:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:529)
This error occurs with just about everything I try that doesn't somehow generate more problems.
For reference - the code in OpenCV's cap_ffmpeg_impl.hpp around line 529 is as follows:
522 bool CvCapture_FFMPEG::open( const char* _filename )
523 {
524 unsigned i;
525 bool valid = false;
526
527 close();
528
529 #if LIBAVFORMAT_BUILD >= CALC_FFMPEG_VERSION(52, 111, 0)
530 int err = avformat_open_input(&ic, _filename, NULL, NULL);
531 #else
532 int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
533 #endif
...
616 }
...for which I have no idea what I'm looking at. It seems to be looking for the ffmpeg version - but I've already installed the latest ffmpeg on that computer, so that shouldn't be the issue.
This is the edited down version I tried to use as per Sebastian Schmitz's recommendation:
1 #include <fstream> // File input/output
2 #include <iostream> // cout / cin / etc
3 #include <windows.h> // Windows API stuff
4 #include <stdio.h> // More input/output stuff
5 #include <string> // "Strings" of characters strung together to form words and stuff
6 #include <cstring> // "Strings" of characters strung together to form words and stuff
7 #include <streambuf> // For buffering load files
8 #include <array> // Functions for working with arrays
9 #include <opencv2/imgproc/imgproc.hpp> // Image Processor
10 #include <opencv2/core/core.hpp> // Basic OpenCV structures (cv::Mat, Scalar)
11 #include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
12 #include "opencv2/calib3d/calib3d.hpp"
13 #include "opencv2/features2d/features2d.hpp"
14 #include "opencv2/opencv.hpp"
15 #include "resource.h" // Included for linking the .rc file
16 #include <conio.h> // For sleep()
17 #include <chrono> // To get start-time of program.
18 #include <algorithm> // For looking at whole sets.
19
20 #ifdef __BORLANDC__
21 #pragma argsused
22 #endif
23
24 using namespace std; // Standard operations. Needed for most basic functions.
25 using namespace std::chrono; // Chrono operations. Needed getting starting time of program.
26 using namespace cv; // OpenCV operations. Needed for most OpenCV functions.
27
28 string videoFeedAddress = "";
29 VideoCapture videoFeedIP = NULL;
30 Mat clickPointStorage; //Artifact from original program.
31
32 void displayCameraViewTest()
33 {
34 VideoCapture cv_cap_IP;
35 Mat color_img_IP;
36 int capture;
37 IplImage* color_img;
38 cv_cap_IP.open(videoFeedAddress);
39 Sleep(100);
40 if(!cv_cap_IP.isOpened())
41 {
42 cout << "Video Error: Video input will not work.\n";
43 cvDestroyWindow("Camera View");
44 return;
45 }
46 clickPointStorage.create(color_img_IP.rows, color_img_IP.cols, CV_8UC3);
47 clickPointStorage.setTo(Scalar(0, 0, 0));
48 cvNamedWindow("Camera View", 0); // create window
49 IplImage* IplClickPointStorage = new IplImage(clickPointStorage);
50 IplImage* Ipl_IP_Img;
51
52 for(;;)
53 {
54 cv_cap_IP.read(color_img_IP);
55 IplClickPointStorage = new IplImage(clickPointStorage);
56 Ipl_IP_Img = new IplImage(color_img_IP);
57 cvAdd(Ipl_IP_Img, IplClickPointStorage, color_img);
58 cvShowImage("Camera View", color_img); // show frame
59 capture = cvWaitKey(10); // wait 10 ms or for key stroke
60 if(capture == 27 || capture == 13 || capture == 32){break;} // if ESC, Return, or space; close window.
61 }
62 cv_cap_IP.release();
63 delete Ipl_IP_Img;
64 delete IplClickPointStorage;
65 cvDestroyWindow("Camera View");
66 return;
67 }
68
69 int main()
70 {
71 while(1)
72 {
73 cout << "Please Enter Video-Feed Address: ";
74 cin >> videoFeedAddress;
75 if(videoFeedAddress == "exit"){return 0;}
76 cout << "\nvideoFeedAddress: " << videoFeedAddress << endl;
77 displayCameraViewTest();
78 if(cvWaitKey(10) == 27){return 0;}
79 }
80 return 0;
81 }
Using added 'cout's I was able to narrow it down to line 38: "cv_cap_IP.open(videoFeedAddress);"
No value I enter for the videoFeedAddress variable seems to get a different result. I found THIS site that lists a number of possible addresses to connect to it. Since there exists no 7100 anywhere in the list & considering that the install is labeled "DCS-7010L" I used the addresses found next to the DCS-7010L listings. When trying to access the camera most of them can be reached through the browser, confirming that they reach the camera - but they don't seem to affect the outcome when I use them in the videoFeedAddress variable.
I've tried many of them both with and without username:password, the port number (554), and variations on ?.mjpg (the format) at the end.
I searched around and came across a number of different "possible" answers - but none of them seem to work for me. Some of them did give me the idea for including the above username:password, etc stuff, but it doesn't seem to be making a difference. Of course, the number of possible combinations is certainly rather large- so I certainly have not tried all of them (more direction here would be appreciated). Here are some of the links I found:
This is one of the first configurations my code was in. No dice.
This one is talking about files - not cameras. It also mentions codecs - but I wouldn't be able to watch it in a web browser if that were the problem, right? (Correct me if I'm wrong here...)
This one has the wrong error code/points to the wrong line of code!
This one mentions compiling OpenCV with ffmpeg support - but I believe 2.4.6.0 already comes with that all set and ready! Otherwise it's not that different from what I've already tried.
Now THIS one appears to be very similar to what I have, but the only proposed solution doesn't really help as I had already located a list of connections. I do not believe this is a duplicate, because as per THIS meta discussion I had a lot more information and so didn't feel comfortable taking over someone else's question - especially if I end up needing to add even more information later.
Thank you for reading this far. I realize that I am asking a somewhat specific question - although I would appreciate any advice you can think of regarding OpenCV & network cameras or even related topics.
TLDR: Network Camera and OpenCV are not cooperating. I'm unsure if
it's the address I'm using to direct the program to the camera or the
command I'm using - but no adjustment I make seems to improve the
result beyond what I've already done! Now my neighbors will go unwatched!
There are a number of ways to get the video; ffmpeg is not the only way, although it's the most convenient. To diagnose whether ffmpeg is capable of reading the stream, you should use the standalone ffmpeg/ffplay to try to open the url. If it can open it directly, the problem may be some minor thing like url formatting, such as double slashes (rtsp://IPADDRESS:554/live1.sdp instead of rtsp://IPADDRESS:554//live1.sdp). If it cannot open it directly, it may need some extra command-line switches to make it work. Then you would need to modify opencv's ffmpeg implementation at line 529 to pass options to avformat_open_input. This may require quite a bit of tweaking before you can get a working program.
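To give an idea of what "passing options to avformat_open_input" means, a rough sketch of the kind of change involved around that line (hedged: "rtsp_transport" is just an example option, and ic/_filename belong to OpenCV's CvCapture_FFMPEG code quoted above):
// Sketch: pass an AVDictionary of demuxer options instead of NULL
// (plain ffmpeg C API; the option names/values depend on the camera).
AVDictionary* options = NULL;
av_dict_set(&options, "rtsp_transport", "tcp", 0);   // e.g. force RTSP over TCP

int err = avformat_open_input(&ic, _filename, NULL, &options);
av_dict_free(&options);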
You can also check whether the camera provides an http mjpeg stream by consulting its manual. I do not have the camera you are using, so I cannot be of much help on this.
Alternatively, I have two suggestions below, which might help you up and running relatively quickly since you mentioned that vlc is working.
method 1
I assume that you can at least open an mjpeg url with your existing opencv/ffmpeg combination. Since vlc is working, just use vlc to transcode the video into mjpeg, like:
vlc.exe --ignore-config -I dummy rtsp://admin:admin@10.10.204.111 --sout="#transcode{vcodec=MJPG,vb=5000,scale=1,acodec=none}:std{access=http,mux=raw,dst=127.0.0.1:9080/frame.mjpg}"
After that, use http://127.0.0.1:9080/frame.mjpg to grab the frames using opencv VideoCapture. This just requires that you have a transcoder program that can convert the incoming stream into mjpeg.
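A minimal sketch of the OpenCV side of method 1, reading the transcoded MJPEG stream produced by the vlc command above:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Open the MJPEG stream that vlc is re-serving locally.
    cv::VideoCapture cap("http://127.0.0.1:9080/frame.mjpg");
    if (!cap.isOpened())
    {
        std::cout << "Could not open stream\n";
        return 1;
    }

    cv::Mat frame;
    for (;;)
    {
        if (!cap.read(frame) || frame.empty())
            break;                        // stream ended or dropped
        cv::imshow("Camera View", frame);
        if (cv::waitKey(10) == 27)        // ESC closes the window
            break;
    }
    return 0;
}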
method 2
You can also use the vlc api directly and programmatically. The following piece of code uses vlc to grab the frames. Relevant info for compilation:
C:\Program Files (x86)\VideoLAN\VLC\sdk\include
C:\Program Files (x86)\VideoLAN\VLC\sdk\lib
libvlc.lib,libvlccore.lib
code
#include "opencv2/highgui/highgui.hpp"
#include <windows.h>
#include <vlc/vlc.h>
using namespace cv;
struct ctx
{
Mat* image;
HANDLE mutex;
uchar* pixels;
};
bool isRunning=true;
Size getsize(const char* path)
{
libvlc_instance_t *vlcInstance;
libvlc_media_player_t *mp;
libvlc_media_t *media;
const char * const vlc_args[] = {
"-R",
"-I", "dummy",
"--ignore-config",
"--quiet",
};
vlcInstance = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);
media = libvlc_media_new_location(vlcInstance, path);
mp = libvlc_media_player_new_from_media(media);
libvlc_media_release(media);
libvlc_video_set_callbacks(mp, NULL, NULL, NULL, NULL);
libvlc_video_set_format(mp, "RV24",100,100, 100 * 24 / 8); // pitch = width * BitsPerPixel / 8
libvlc_media_player_play(mp);
Sleep(2000);//wait a while so that something get rendered so that size info is available
unsigned int width=640,height=480;
libvlc_video_get_size(mp,0,&width,&height);
if(width==0 || height ==0)
{
width=640;
height=480;
}
libvlc_media_player_stop(mp);
libvlc_release(vlcInstance);
libvlc_media_player_release(mp);
return Size(width,height);
}
void *lock(void *data, void**p_pixels)
{
struct ctx *ctx = (struct ctx*)data;
WaitForSingleObject(ctx->mutex, INFINITE);
*p_pixels = ctx->pixels;
return NULL;
}
void display(void *data, void *id){
(void) data;
assert(id == NULL);
}
void unlock(void *data, void *id, void *const *p_pixels)
{
struct ctx *ctx = (struct ctx*)data;
Mat frame = *ctx->image;
if(frame.data)
{
imshow("frame",frame);
if(waitKey(1)==27)
{
isRunning=false;
//exit(0);
}
}
ReleaseMutex(ctx->mutex);
}
int main( )
{
string url="rtsp://admin:admin#10.10.204.111";
//vlc sdk does not know the video size until it is rendered, so need to play it a bit so that size is known
Size sz = getsize(url.c_str());
// VLC pointers
libvlc_instance_t *vlcInstance;
libvlc_media_player_t *mp;
libvlc_media_t *media;
const char * const vlc_args[] = {
"-R",
"-I", "dummy",
"--ignore-config",
"--quiet",
};
vlcInstance = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);
media = libvlc_media_new_location(vlcInstance, url.c_str());
mp = libvlc_media_player_new_from_media(media);
libvlc_media_release(media);
struct ctx* context = ( struct ctx* )malloc( sizeof( *context ) );
context->mutex = CreateMutex(NULL, FALSE, NULL);
context->image = new Mat(sz.height, sz.width, CV_8UC3);
context->pixels = (unsigned char *)context->image->data;
libvlc_video_set_callbacks(mp, lock, unlock, display, context);
libvlc_video_set_format(mp, "RV24", sz.width, sz.height, sz.width * 24 / 8); // pitch = width * BitsPerPixel / 8
libvlc_media_player_play(mp);
while(isRunning)
{
Sleep(1);
}
libvlc_media_player_stop(mp);
libvlc_release(vlcInstance);
libvlc_media_player_release(mp);
free(context);
return 0;
}

How to automatically generate a stacktrace when my program crashes

I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace.
My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc).
I would like my program to be able to generate a stack trace when it crashes, and the next time the user runs it, it will ask them whether it is OK to send the stack trace to me so I can track down the problem. I can handle sending the info to me, but I don't know how to generate the trace string. Any ideas?
For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual.
Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:
#include <stdio.h>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
void handler(int sig) {
void *array[10];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, 10);
// print out all the frames to stderr
fprintf(stderr, "Error: signal %d:\n", sig);
backtrace_symbols_fd(array, size, STDERR_FILENO);
exit(1);
}
void baz() {
int *foo = (int*)-1; // make a bad pointer
printf("%d\n", *foo); // causes segfault
}
void bar() { baz(); }
void foo() { bar(); }
int main(int argc, char **argv) {
signal(SIGSEGV, handler); // install our handler
foo(); // this will call foo, bar, and baz. baz segfaults.
}
Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace:
$ gcc -g -rdynamic ./test.c -o test
Executing this gets you this output:
$ ./test
Error: signal 11:
./test(handler+0x19)[0x400911]
/lib64/tls/libc.so.6[0x3a9b92e380]
./test(baz+0x14)[0x400962]
./test(bar+0xe)[0x400983]
./test(foo+0xe)[0x400993]
./test(main+0x28)[0x4009bd]
/lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb]
./test[0x40086a]
This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz.
It's even easier than "man backtrace": there's a little-documented library (GNU specific) distributed with glibc as libSegFault.so, which I believe was written by Ulrich Drepper to support the program catchsegv (see "man catchsegv").
This gives us 3 possibilities. Instead of running "program -o hai":
Run within catchsegv:
$ catchsegv program -o hai
Link with libSegFault at runtime:
$ LD_PRELOAD=/lib/libSegFault.so program -o hai
Link with libSegFault at compile time:
$ gcc -g1 -lSegFault -o program program.cc
$ program -o hai
In all 3 cases, you will get clearer backtraces with less optimization (gcc -O0 or -O1) and debugging symbols (gcc -g). Otherwise, you may just end up with a pile of memory addresses.
You can also catch more signals for stack traces with something like:
$ export SEGFAULT_SIGNALS="all" # "all" signals
$ export SEGFAULT_SIGNALS="bus abrt" # SIGBUS and SIGABRT
The output will look something like this (notice the backtrace at the bottom):
*** Segmentation fault
Register dump:

 EAX: 0000000c   EBX: 00000080   ECX: 00000000   EDX: 0000000c
 ESI: bfdbf080   EDI: 080497e0   EBP: bfdbee38   ESP: bfdbee20

 EIP: 0805640f   EFLAGS: 00010282

 CS: 0073   DS: 007b   ES: 007b   FS: 0000   GS: 0033   SS: 007b

 Trap: 0000000e   Error: 00000004   OldMask: 00000000
 ESP/signal: bfdbee20   CR2: 00000024

 FPUCW: ffff037f   FPUSW: ffff0000   TAG: ffffffff
 IPOFF: 00000000   CSSEL: 0000   DATAOFF: 00000000   DATASEL: 0000

 ST(0) 0000 0000000000000000   ST(1) 0000 0000000000000000
 ST(2) 0000 0000000000000000   ST(3) 0000 0000000000000000
 ST(4) 0000 0000000000000000   ST(5) 0000 0000000000000000
 ST(6) 0000 0000000000000000   ST(7) 0000 0000000000000000
Backtrace:
/lib/libSegFault.so[0xb7f9e100]
??:0(??)[0xb7fa3400]
/usr/include/c++/4.3/bits/stl_queue.h:226(_ZNSt5queueISsSt5dequeISsSaISsEEE4pushERKSs)[0x805647a]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/player.cpp:73(_ZN6Player5inputESs)[0x805377c]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:159(_ZN6Socket4ReadEv)[0x8050698]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:413(_ZN12ServerSocket4ReadEv)[0x80507ad]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:300(_ZN12ServerSocket4pollEv)[0x8050b44]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/main.cpp:34(main)[0x8049a72]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5)[0xb7d1b775]
/build/buildd/glibc-2.9/csu/../sysdeps/i386/elf/start.S:122(_start)[0x8049801]
If you want to know the gory details, the best source is unfortunately the source: See http://sourceware.org/git/?p=glibc.git;a=blob;f=debug/segfault.c and its parent directory http://sourceware.org/git/?p=glibc.git;a=tree;f=debug
Linux
While the use of the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault has already been suggested, I see no mention of the intricacies necessary to ensure the resulting backtrace points to the actual location of the fault (at least for some architectures - x86 & ARM).
The first two entries in the stack frame chain when you get into the signal handler contain a return address inside the signal handler and one inside sigaction() in libc. The stack frame of the last function called before the signal (which is the location of the fault) is lost.
Code
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifndef __USE_GNU
#define __USE_GNU
#endif
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* This structure mirrors the one found in /usr/include/asm/ucontext.h */
typedef struct _sig_ucontext {
unsigned long uc_flags;
ucontext_t *uc_link;
stack_t uc_stack;
struct sigcontext uc_mcontext; /* matches struct sigcontext from <asm/sigcontext.h> */
sigset_t uc_sigmask;
} sig_ucontext_t;
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
void * array[50];
void * caller_address;
char ** messages;
int size, i;
sig_ucontext_t * uc;
uc = (sig_ucontext_t *)ucontext;
/* Get the address at the time the signal was raised */
#if defined(__i386__) // gcc specific
caller_address = (void *) uc->uc_mcontext.eip; // EIP: x86 specific
#elif defined(__x86_64__) // gcc specific
caller_address = (void *) uc->uc_mcontext.rip; // RIP: x86_64 specific
#else
#error Unsupported architecture. // TODO: Add support for other arch.
#endif
fprintf(stderr, "signal %d (%s), address is %p from %p\n",
sig_num, strsignal(sig_num), info->si_addr,
(void *)caller_address);
size = backtrace(array, 50);
/* overwrite sigaction with caller's address */
array[1] = caller_address;
messages = backtrace_symbols(array, size);
/* skip first stack frame (points here) */
for (i = 1; i < size && messages != NULL; ++i)
{
fprintf(stderr, "[bt]: (%d) %s\n", i, messages[i]);
}
free(messages);
exit(EXIT_FAILURE);
}
int crash()
{
char * p = NULL;
*p = 0;
return 0;
}
int foo4()
{
crash();
return 0;
}
int foo3()
{
foo4();
return 0;
}
int foo2()
{
foo3();
return 0;
}
int foo1()
{
foo2();
return 0;
}
int main(int argc, char ** argv)
{
struct sigaction sigact;
memset(&sigact, 0, sizeof(sigact)); /* zero sa_mask and other fields before use */
sigact.sa_sigaction = crit_err_hdlr;
sigact.sa_flags = SA_RESTART | SA_SIGINFO;
if (sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL) != 0)
{
fprintf(stderr, "error setting signal handler for %d (%s)\n",
SIGSEGV, strsignal(SIGSEGV));
exit(EXIT_FAILURE);
}
foo1();
exit(EXIT_SUCCESS);
}
Output
signal 11 (Segmentation fault), address is (nil) from 0x8c50
[bt]: (1) ./test(crash+0x24) [0x8c50]
[bt]: (2) ./test(foo4+0x10) [0x8c70]
[bt]: (3) ./test(foo3+0x10) [0x8c8c]
[bt]: (4) ./test(foo2+0x10) [0x8ca8]
[bt]: (5) ./test(foo1+0x10) [0x8cc4]
[bt]: (6) ./test(main+0x74) [0x8d44]
[bt]: (7) /lib/libc.so.6(__libc_start_main+0xa8) [0x40032e44]
All the hazards of calling the backtrace() functions in a signal handler still exist and should not be overlooked, but I find the functionality I described here quite helpful in debugging crashes.
It is important to note that the example I provided is developed/tested on Linux for x86. I have also successfully implemented this on ARM using uc_mcontext.arm_pc instead of uc_mcontext.eip.
Here's a link to the article where I learned the details for this implementation:
http://www.linuxjournal.com/article/6391
Even though a correct answer has been provided that describes how to use the GNU libc backtrace() function [1], and I provided my own answer that describes how to ensure a backtrace from a signal handler points to the actual location of the fault [2], I don't see any mention of demangling the C++ symbols in the backtrace output.
When obtaining backtraces from a C++ program, the output can be run through c++filt [1] to demangle the symbols, or by using abi::__cxa_demangle [1] directly.
[1] Linux & OS X. Note that c++filt and __cxa_demangle are GCC specific.
[2] Linux
The following C++ Linux example uses the same signal handler as my other answer and demonstrates how c++filt can be used to demangle the symbols.
Code:
class foo
{
public:
foo() { foo1(); }
private:
void foo1() { foo2(); }
void foo2() { foo3(); }
void foo3() { foo4(); }
void foo4() { crash(); }
void crash() { char * p = NULL; *p = 0; }
};
int main(int argc, char ** argv)
{
// Setup signal handler for SIGSEGV
...
foo * f = new foo();
return 0;
}
Output (./test):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(crash__3foo+0x13) [0x8048e07]
[bt]: (2) ./test(foo4__3foo+0x12) [0x8048dee]
[bt]: (3) ./test(foo3__3foo+0x12) [0x8048dd6]
[bt]: (4) ./test(foo2__3foo+0x12) [0x8048dbe]
[bt]: (5) ./test(foo1__3foo+0x12) [0x8048da6]
[bt]: (6) ./test(__3foo+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
Demangled Output (./test 2>&1 | c++filt):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(foo::crash(void)+0x13) [0x8048e07]
[bt]: (2) ./test(foo::foo4(void)+0x12) [0x8048dee]
[bt]: (3) ./test(foo::foo3(void)+0x12) [0x8048dd6]
[bt]: (4) ./test(foo::foo2(void)+0x12) [0x8048dbe]
[bt]: (5) ./test(foo::foo1(void)+0x12) [0x8048da6]
[bt]: (6) ./test(foo::foo(void)+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
The following builds on the signal handler from my original answer and can replace the signal handler in the above example to demonstrate how abi::__cxa_demangle can be used to demangle the symbols. This signal handler produces the same demangled output as the above example.
Code:
#include <cxxabi.h>   // abi::__cxa_demangle
#include <iostream>   // std::cerr (in addition to the headers from the original example)
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
sig_ucontext_t * uc = (sig_ucontext_t *)ucontext;
void * caller_address = (void *) uc->uc_mcontext.eip; // x86 specific
std::cerr << "signal " << sig_num
<< " (" << strsignal(sig_num) << "), address is "
<< info->si_addr << " from " << caller_address
<< std::endl << std::endl;
void * array[50];
int size = backtrace(array, 50);
array[1] = caller_address;
char ** messages = backtrace_symbols(array, size);
// skip first stack frame (points here)
for (int i = 1; i < size && messages != NULL; ++i)
{
char *mangled_name = 0, *offset_begin = 0, *offset_end = 0;
// find parentheses and +address offset surrounding mangled name
for (char *p = messages[i]; *p; ++p)
{
if (*p == '(')
{
mangled_name = p;
}
else if (*p == '+')
{
offset_begin = p;
}
else if (*p == ')')
{
offset_end = p;
break;
}
}
// if the line could be processed, attempt to demangle the symbol
if (mangled_name && offset_begin && offset_end &&
mangled_name < offset_begin)
{
*mangled_name++ = '\0';
*offset_begin++ = '\0';
*offset_end++ = '\0';
int status;
char * real_name = abi::__cxa_demangle(mangled_name, 0, 0, &status);
// if demangling is successful, output the demangled function name
if (status == 0)
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< real_name << "+" << offset_begin << offset_end
<< std::endl;
}
// otherwise, output the mangled function name
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< mangled_name << "+" << offset_begin << offset_end
<< std::endl;
}
free(real_name);
}
// otherwise, print the whole line
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << std::endl;
}
}
std::cerr << std::endl;
free(messages);
exit(EXIT_FAILURE);
}
Might be worth looking at Google Breakpad, a cross-platform crash dump generator and tools to process the dumps.
You did not specify your operating system, so this is difficult to answer. If you are using a system based on GNU libc, you might be able to use the libc function backtrace().
GCC also has two builtins that can assist you, __builtin_frame_address and __builtin_return_address, though they may or may not be fully implemented on your architecture. Both want an immediate integer level (by immediate, I mean it can't be a variable). If __builtin_frame_address for a given level is non-zero, it should be safe to grab the return address of the same level, as in the sketch below.
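A small sketch of how those builtins might be used (my own illustration, not from the original answer; the level argument must be a compile-time constant, and you may need -fno-omit-frame-pointer for the frame chain to be walkable):
#include <cstdio>
__attribute__((noinline)) void where_was_i_called_from()
{
    // level 0 is this frame; level 1 is the caller. Per the advice above, only
    // read the return address for a level whose frame address is non-null.
    if (__builtin_frame_address(1) != nullptr)
        std::printf("called from %p\n", __builtin_return_address(1));
}
int main()
{
    where_was_i_called_from();
    return 0;
}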
Thank you to enthusiasticgeek for drawing my attention to the addr2line utility.
I've written a quick and dirty script to process the output of jschmier's answer above (much thanks to jschmier!) using the addr2line utility.
The script accepts a single argument: the name of the file containing the output from jschmier's utility.
The output should print something like the following for each level of the trace:
BACKTRACE: testExe 0x8A5db6b
FILE: pathToFile/testExe.C:110
FUNCTION: testFunction(int)
107
108
109 int* i = 0x0;
*110 *i = 5;
111
112 }
113 return i;
Code:
#!/bin/bash
LOGFILE=$1
NUM_SRC_CONTEXT_LINES=3
old_IFS=$IFS # save the field separator
IFS=$'\n' # new field separator, the end of line
for bt in `cat $LOGFILE | grep '\[bt\]'`; do
IFS=$old_IFS # restore default field separator
printf '\n'
EXEC=`echo $bt | cut -d' ' -f3 | cut -d'(' -f1`
ADDR=`echo $bt | cut -d'[' -f3 | cut -d']' -f1`
echo "BACKTRACE: $EXEC $ADDR"
A2L=`addr2line -a $ADDR -e $EXEC -pfC`
#echo "A2L: $A2L"
FUNCTION=`echo $A2L | sed 's/\<at\>.*//' | cut -d' ' -f2-99`
FILE_AND_LINE=`echo $A2L | sed 's/.* at //'`
echo "FILE: $FILE_AND_LINE"
echo "FUNCTION: $FUNCTION"
# print offending source code
SRCFILE=`echo $FILE_AND_LINE | cut -d':' -f1`
LINENUM=`echo $FILE_AND_LINE | cut -d':' -f2`
if ([ -f $SRCFILE ]); then
cat -n $SRCFILE | grep -C $NUM_SRC_CONTEXT_LINES "^ *$LINENUM\>" | sed "s/ $LINENUM/*$LINENUM/"
else
echo "File not found: $SRCFILE"
fi
IFS=$'\n' # new field separator, the end of line
done
IFS=$old_IFS # restore default field separator
ulimit -c <value> sets the core file size limit on unix. By default, the core file size limit is 0. You can see your ulimit values with ulimit -a.
Also, if you run your program from within gdb, it will halt your program on a "segmentation violation" (SIGSEGV, generally raised when you access a piece of memory that you hadn't allocated), or you can set breakpoints.
ddd and nemiver are front-ends for gdb which make working with it much easier for the novice.
It's important to note that once you generate a core file you'll need to use the gdb tool to look at it. For gdb to make sense of your core file, you must tell gcc to instrument the binary with debugging symbols: to do this, you compile with the -g flag:
$ g++ -g prog.cpp -o prog
Then, you can either set "ulimit -c unlimited" to let it dump a core, or just run your program inside gdb. I like the second approach more:
$ gdb ./prog
... gdb startup output ...
(gdb) run
... program runs and crashes ...
(gdb) where
... gdb outputs your stack trace ...
I hope this helps.
It looks like one of the recent Boost releases includes a library that provides exactly what you want, and the code should be multiplatform.
It is boost::stacktrace, which you can use as in this Boost sample:
#include <filesystem>
#include <sstream>
#include <fstream>
#include <signal.h> // ::signal, ::raise
#include <boost/stacktrace.hpp>
const char* backtraceFileName = "./backtraceFile.dump";
void signalHandler(int)
{
::signal(SIGSEGV, SIG_DFL);
::signal(SIGABRT, SIG_DFL);
boost::stacktrace::safe_dump_to(backtraceFileName);
::raise(SIGABRT);
}
void sendReport()
{
if (std::filesystem::exists(backtraceFileName))
{
std::ifstream file(backtraceFileName);
auto st = boost::stacktrace::stacktrace::from_dump(file);
std::ostringstream backtraceStream;
backtraceStream << st << std::endl;
// sending the code from st
file.close();
std::filesystem::remove(backtraceFileName);
}
}
int main()
{
::signal(SIGSEGV, signalHandler);
::signal(SIGABRT, signalHandler);
sendReport();
// ... rest of code
}
On Linux, you compile the code above with:
g++ --std=c++17 file.cpp -lstdc++fs -lboost_stacktrace_backtrace -ldl -lbacktrace
Example backtrace copied from boost documentation:
0# bar(int) at /path/to/source/file.cpp:70
1# bar(int) at /path/to/source/file.cpp:70
2# bar(int) at /path/to/source/file.cpp:70
3# bar(int) at /path/to/source/file.cpp:70
4# main at /path/to/main.cpp:93
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# _start
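As a side note, if you only want to print the current call stack at an arbitrary point (not from a signal handler), Boost.Stacktrace can be streamed directly; a minimal sketch:
#include <boost/stacktrace.hpp>
#include <iostream>
void report_current_stack()
{
    // prints one "N# function at file:line" entry per frame, as in the output above
    std::cout << boost::stacktrace::stacktrace() << std::endl;
}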
I've been looking at this problem for a while.
Buried deep in the Google Performance Tools README
http://code.google.com/p/google-perftools/source/browse/trunk/README
there is a mention of libunwind:
http://www.nongnu.org/libunwind/
I would love to hear opinions of this library.
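For what it's worth, here is a minimal local-unwind sketch using libunwind, based on my reading of its documentation (it assumes libunwind is installed and you link with -lunwind; the function name and output format are my own choices):
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <cstdio>
void print_trace()
{
    unw_context_t context;
    unw_cursor_t cursor;
    unw_getcontext(&context);          // capture the current machine state
    unw_init_local(&cursor, &context); // start unwinding from this frame
    while (unw_step(&cursor) > 0)
    {
        unw_word_t ip = 0, offset = 0;
        char name[256];
        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        if (unw_get_proc_name(&cursor, name, sizeof(name), &offset) == 0)
            std::fprintf(stderr, "%p: %s+0x%lx\n", (void*)ip, name, (unsigned long)offset);
        else
            std::fprintf(stderr, "%p: ??\n", (void*)ip);
    }
}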
The problem with -rdynamic is that it can increase the size of the binary relatively significantly in some cases.
The new king in town has arrived
https://github.com/bombela/backward-cpp
1 header to place in your code and 1 library to install.
Personally I call it using this function
#include "backward.hpp"
void stacker() {
using namespace backward;
StackTrace st;
st.load_here(99); //Limit the number of trace depth to 99
st.skip_n_firsts(3);//This will skip some backward internal function from the trace
Printer p;
p.snippet = true;
p.object = true;
p.color = true;
p.address = true;
p.print(st, stderr);
}
Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.
You can use DeathHandler, a small C++ class which does everything for you and is reliable.
Forget about changing your sources and doing hacks with the backtrace() function or macros; those are just poor solutions.
As a properly working solution, I would advise:
Compile your program with the "-g" flag to embed debug symbols into the binary (don't worry, this will not impact your performance).
On Linux, run the command "ulimit -c unlimited" to allow the system to write large crash dumps.
When your program crashes, you will see a file named "core" in the working directory.
Run the following command to print the backtrace to stdout: gdb -batch -ex "backtrace" ./your_program_exe ./core
This will print a proper, readable backtrace of your program (with source file names and line numbers).
Moreover, this approach gives you the freedom to automate your system: have a short script that checks whether the process created a core dump, and then sends the backtrace by email to developers or logs it to some logging system.
ulimit -c unlimited
is a shell limit which allows a core dump to be created after your application crashes, in this case with unlimited size. Look for a file called core in the very same directory. Make sure you compiled your code with debugging information enabled!
Look at:
man 3 backtrace
And:
#include <execinfo.h>
int backtrace(void **buffer, int size);
These are GNU extensions.
As a Windows-only solution, you can get the equivalent of a stack trace (with much, much more information) using Windows Error Reporting. With just a few registry entries, it can be set up to collect user-mode dumps:
Starting with Windows Server 2008 and Windows Vista with Service Pack 1 (SP1), Windows Error Reporting (WER) can be configured so that full user-mode dumps are collected and stored locally after a user-mode application crashes. [...]
This feature is not enabled by default. Enabling the feature requires administrator privileges. To enable and configure the feature, use the following registry values under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps key.
You can set the registry entries from your installer, which has the required privileges.
Creating a user-mode dump has the following advantages over generating a stack trace on the client:
It's already implemented in the system. You can either use WER as outlined above, or call MiniDumpWriteDump yourself, if you need more fine-grained control over the amount of information to dump. (Make sure to call it from a different process.)
Way more complete than a stack trace. Among others it can contain local variables, function arguments, stacks for other threads, loaded modules, and so on. The amount of data (and consequently size) is highly customizable.
No need to ship debug symbols. This both drastically decreases the size of your deployment, as well as makes it harder to reverse-engineer your application.
Largely independent of the compiler you use. Using WER does not even require any code. Either way, having a way to get a symbol database (PDB) is very useful for offline analysis. I believe GCC can either generate PDB's, or there are tools to convert the symbol database to the PDB format.
Take note, that WER can only be triggered by an application crash (i.e. the system terminating a process due to an unhandled exception). MiniDumpWriteDump can be called at any time. This may be helpful if you need to dump the current state to diagnose issues other than a crash.
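To illustrate the MiniDumpWriteDump route, here is a hedged sketch (Windows only, links against dbghelp.lib; the file name and the MiniDumpNormal dump type are arbitrary choices, and as noted above it is more robust to call this from a separate watchdog process):
#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")
bool WriteDump(const wchar_t* path, EXCEPTION_POINTERS* exceptionInfo)
{
    HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return false;
    MINIDUMP_EXCEPTION_INFORMATION mei = {};
    mei.ThreadId = GetCurrentThreadId();
    mei.ExceptionPointers = exceptionInfo;   // may be NULL for an on-demand dump
    mei.ClientPointers = FALSE;
    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                                MiniDumpNormal, exceptionInfo ? &mei : NULL, NULL, NULL);
    CloseHandle(file);
    return ok == TRUE;
}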
Mandatory reading, if you want to evaluate the applicability of mini dumps:
Effective minidumps
Effective minidumps (Part 2)
See the Stack Trace facility in ACE (ADAPTIVE Communication Environment). It's already written to cover all major platforms (and more). The library is BSD-style licensed so you can even copy/paste the code if you don't want to use ACE.
I can help with the Linux version: the functions backtrace, backtrace_symbols and backtrace_symbols_fd can be used. See the corresponding manual pages.
*nix:
you can intercept SIGSEGV (usually this signal is raised before crashing) and write the info to a file (besides the core file, which you can use for debugging with gdb, for example).
win:
Check this from msdn.
You can also look at the google's chrome code to see how it handles crashes. It has a nice exception handling mechanism.
I have seen a lot of answers here performing a signal handler and then exiting.
That's the way to go, but remember a very important fact: If you want to get the core dump for the generated error, you can't call exit(status). Call abort() instead!
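A minimal sketch of that advice (assuming glibc/Linux and that "ulimit -c" allows core files): the handler prints a backtrace and then calls abort() rather than exit(), so a core dump is still produced.
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
static void handler(int sig)
{
    void *frames[32];
    int n = backtrace(frames, 32);
    fprintf(stderr, "caught signal %d\n", sig);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    signal(SIGABRT, SIG_DFL); // ensure abort() terminates with the default action
    abort();                  // unlike exit(), this still produces a core dump
}
int main()
{
    signal(SIGSEGV, handler);
    *(volatile int *)0 = 42;  // deliberate crash to demonstrate
    return 0;
}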
I found that #tgamblin's solution is not complete: it cannot handle a stack overflow.
I think that is because, by default, the signal handler is called on the same stack, so SIGSEGV is thrown twice. To protect against this you need to register an independent stack for the signal handler.
You can check this with the code below. By default the handler fails; with the macro STACK_OVERFLOW defined, it works.
#include <iostream>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <string>
#include <cassert>
using namespace std;
//#define STACK_OVERFLOW
#ifdef STACK_OVERFLOW
static char stack_body[64*1024];
static stack_t sigseg_stack;
#endif
static struct sigaction sigseg_handler;
void handler(int sig) {
cerr << "sig seg fault handler" << endl;
const int asize = 10;
void *array[asize];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, asize);
// print out all the frames to stderr
cerr << "stack trace: " << endl;
backtrace_symbols_fd(array, size, STDERR_FILENO);
cerr << "resend SIGSEGV to get core dump" << endl;
signal(sig, SIG_DFL);
kill(getpid(), sig);
}
void foo() {
foo();
}
int main(int argc, char **argv) {
#ifdef STACK_OVERFLOW
sigseg_stack.ss_sp = stack_body;
sigseg_stack.ss_flags = SS_ONSTACK;
sigseg_stack.ss_size = sizeof(stack_body);
assert(!sigaltstack(&sigseg_stack, nullptr));
sigseg_handler.sa_flags = SA_ONSTACK;
#else
sigseg_handler.sa_flags = SA_RESTART;
#endif
sigseg_handler.sa_handler = &handler;
assert(!sigaction(SIGSEGV, &sigseg_handler, nullptr));
cout << "sig action set" << endl;
foo();
return 0;
}
I would use the code that generates a stack trace for leaked memory in Visual Leak Detector. This only works on Win32, though.
If you still want to go it alone as I did you can link against bfd and avoid using addr2line as I have done here:
https://github.com/gnif/LookingGlass/blob/master/common/src/platform/linux/crash.c
This produces the output:
[E] crash.linux.c:170 | crit_err_hdlr | ==== FATAL CRASH (a12-151-g28b12c85f4+1) ====
[E] crash.linux.c:171 | crit_err_hdlr | signal 11 (Segmentation fault), address is (nil)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (0) /home/geoff/Projects/LookingGlass/client/src/main.c:936 (register_key_binds)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (1) /home/geoff/Projects/LookingGlass/client/src/main.c:1069 (run)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (2) /home/geoff/Projects/LookingGlass/client/src/main.c:1314 (main)
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f8aa65f809b]
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (4) ./looking-glass-client(_start+0x2a) [0x55c70fc4aeca]
In addition to the above answers, here is how you make Debian Linux generate a core dump:
Create a “coredumps” folder in the user's home folder
Go to /etc/security/limits.conf. Below the commented lines, add "* soft core unlimited", and "root soft core unlimited" if enabling core dumps for root, to allow unlimited space for core dumps.
NOTE: “* soft core unlimited” does not cover root, which is why root has to be specified in its own line.
To check these values, log out, log back in, and type “ulimit -a”. “Core file size” should be set to unlimited.
Check the .bashrc files (user, and root if applicable) to make sure that ulimit is not set there. Otherwise, the value above will be overwritten on startup.
Open /etc/sysctl.conf.
Enter the following at the bottom: "kernel.core_pattern = /home/<user>/coredumps/%e_%t.dump". (<user> is your username, %e will be the process name, and %t will be the system time)
Exit and type “sysctl -p” to load the new configuration
Check /proc/sys/kernel/core_pattern and verify that this matches what you just typed in.
Core dumping can be tested by running a process on the command line ("<program> &"), and then killing it with "kill -11 <pid>". If core dumping is successful, you will see "(core dumped)" after the segmentation fault indication.
gdb -ex 'set confirm off' -ex r -ex bt -ex q <my-program>
On Linux/unix/MacOSX use core files (you can enable them with ulimit or compatible system call). On Windows use Microsoft error reporting (you can become a partner and get access to your application crash data).
I forgot about the GNOME tech of "apport", but I don't know much about using it. It is used to generate stacktraces and other diagnostics for processing and can automatically file bugs. It's certainly worth checking in to.
You are probably not going to like this; all I can say in its favour is that it works for me, and I have similar but not identical requirements. I am writing a compiler/transpiler for a 1970's Algol-like language which uses C as its output and then compiles the C, so that as far as the user is concerned they are generally not aware of C being involved. Although you might call it a transpiler, it is effectively a compiler that uses C as its intermediate code. The language being compiled has a history of providing good diagnostics and a full backtrace in the original native compilers. I've been able to find gcc compiler flags and libraries etc. that allow me to trap most of the runtime errors that the original compilers did (although with one glaring exception: unassigned-variable trapping). When a runtime error occurs (e.g. arithmetic overflow, divide by zero, array index out of bounds, etc.) the original compilers output a backtrace to the console listing all variables in the stack frames of every active procedure call.
I struggled to get this effect in C, but eventually did so with what can only be described as a hack. When the program is invoked, the wrapper that supplies the C "main" looks at its argv, and if a special option is not present, it restarts itself under gdb with an altered argv containing both gdb options and the 'magic' option string for the program itself. This restarted version then hides those strings from the user's code by restoring the original arguments before calling the main block of the code written in our language. When an error occurs (as long as it is not one explicitly trapped within the program by user code), it exits to gdb, which prints the required backtrace.
Key lines of code in the startup sequence include:
if ((argc >= 1) && (strcmp(origargv[argc-1], "--restarting-under-gdb")) != 0) {
// initial invocation
// the "--restarting-under-gdb" option is how the copy running under gdb knows
// not to start another gdb process.
and
char *gdb [] = {
"/usr/bin/gdb", "-q", "-batch", "-nx", "-nh", "-return-child-result",
"-ex", "run",
"-ex", "bt full",
"--args"
};
The original arguments are appended to the gdb options above. That should be enough of a hint for you to do something similar for your own system.
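As a hedged illustration of that hint (my own sketch, not the author's code; it assumes Linux, gdb at /usr/bin/gdb, and the sentinel option "--restarting-under-gdb" shown above):
#include <unistd.h>
#include <cstdio>
#include <vector>
static void restart_under_gdb(int argc, char **argv)
{
    std::vector<char*> args = {
        (char*)"/usr/bin/gdb", (char*)"-q", (char*)"-batch", (char*)"-nx", (char*)"-nh",
        (char*)"-return-child-result",
        (char*)"-ex", (char*)"run",
        (char*)"-ex", (char*)"bt full",
        (char*)"--args"
    };
    for (int i = 0; i < argc; ++i)                    // original program and its arguments
        args.push_back(argv[i]);
    args.push_back((char*)"--restarting-under-gdb");  // sentinel the restarted copy hides
    args.push_back(nullptr);
    execv(args[0], args.data());                      // replaces this process with gdb
    std::perror("execv");                             // only reached if the exec failed
}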
I did look at other library-supported backtrace options (eg libbacktrace,
https://codingrelic.geekhold.com/2010/09/gcc-function-instrumentation.html, etc) but they only output the procedure call stack, not the local variables. However if anyone knows of any cleaner mechanism to get a similar effect, do please let us know. The main downside to this is that the variables are printed in C syntax, not the syntax of the language the user writes in. And (until I add suitable #line directives on every generated line of C :-() the backtrace lists the C source file and line numbers.
PS The gcc compile options I use are:
GCCOPTS=" -Wall -Wno-return-type -Wno-comment -g -fsanitize=undefined
-fsanitize-undefined-trap-on-error -fno-sanitize-recover=all -frecord-gcc-switches
-fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -ftrapv
-grecord-gcc-switches -O0 -ggdb3 "