This question already has answers here: Advice on Mocking System Calls (6 answers). Closed 9 years ago.
We are now introducing unit tests in our company and are thinking about the best way to mock system calls.
Consider the following code:
int fd = open(path, O_RDONLY);
if (fd < 0) {
    LOG.error() << "cannot load plugin " << path << std::endl;
    return ERROR(ERROR_OPENING_PLUGIN);
}
// do other stuff
Obviously, we need to mock the open system call.
I have found the following ways to do so:
1) Correct in terms of design, but ugly: create an interface and an implementation.
class ISystem
{
public:
    typedef std::auto_ptr<ISystem> Ptr;

    ISystem() {}
    virtual ~ISystem() {}

    virtual int open(const char* file, int flags) = 0;
};

class System : public ISystem
{
public:
    System() {}
    virtual ~System() {}

    virtual int open(const char* file, int flags);

    static ISystem::Ptr Get();
};
and use it:
Common::Error dlopen_signed(ISystem::Ptr& system, const char* path, int flags, void*& ret)
{
    int fd = system->open(path, O_RDONLY);
    if (fd < 0) {
        LOG.error() << "cannot load plugin " << path << std::endl;
        return ERROR(ERROR_OPENING_PLUGIN);
    }
    char fd_path[32];
    // ...
}
I don't like it because every function will then need one extra argument, ISystem::Ptr& system, which is the same throughout the production code.
I am also not sure about speed (this adds an extra virtual-call layer to basic system calls that must be really fast).
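For what it's worth, a handwritten test double for option 1 could look like the sketch below (MockSystem and the canned return value are my illustration, not part of the question):
class MockSystem : public ISystem
{
public:
    explicit MockSystem(int fd_to_return) : m_fd(fd_to_return) {}

    virtual int open(const char* /*file*/, int /*flags*/)
    {
        return m_fd; // canned result, e.g. -1 to simulate failure
    }

private:
    int m_fd;
};
A test could then pass an ISystem::Ptr holding a MockSystem(-1) to dlopen_signed and assert that it returns ERROR(ERROR_OPENING_PLUGIN).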
2) Use a link seam
The linker is designed so that it prefers your versions of functions over the system ones.
But this does not work for some system calls, for example open (I am not sure why), and the solution is a little bit hackish.
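For illustration, a link seam is usually just a second definition that the test binary links instead of the real one (a sketch; as noted above, whether this actually intercepts open depends on how libc exports the symbol):
// open_seam.cpp - linked into the test binary in place of the libc symbol
extern "C" int open(const char* path, int flags, ...)
{
    (void)path;
    (void)flags;
    return -1; // simulate "cannot open" for every caller
}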
3) Use the linker's --wrap functionality
--wrap symbol
Use a wrapper function for symbol. Any undefined reference to symbol will be resolved to __wrap_symbol. Any undefined reference to __real_symbol will be resolved to symbol. This can be used to provide a wrapper for a system function. The wrapper function should be called __wrap_symbol. If it wishes to call the system function, it should call __real_symbol. Here is a trivial example:
void *
__wrap_malloc (int c)
{
    printf ("malloc called with %d\n", c);
    return __real_malloc (c);
}
This solution is nice, but I guess it does not work with all toolchains.
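Applied to the example above, wrapping open with GNU ld would look roughly like this (a sketch; link with -Wl,--wrap=open; the fail-toggle flag is my placeholder):
extern "C" int __real_open(const char* path, int flags, ...);

static bool g_fail_open = true; // toggled by the test

extern "C" int __wrap_open(const char* path, int flags, ...)
{
    if (g_fail_open)
        return -1;                   // simulate failure
    return __real_open(path, flags); // note: drops the optional mode argument
}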
The question is: which one do you use in your projects?
You have to draw a line between what to mock and what to unit test. 100% unit-test coverage is not the ultimate goal.
If you really want to mock system calls, the best approach is to put them behind a wrapper (the first option in the question). Don't make one huge wrapper; split it into several wrappers by functionality.
System calls are infrastructure, and we wrap infrastructure logic in a wrapper.
Also design the wrapper so that there can be many wrappers based on context, e.g. one per functional area, as sketched below.
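For example, split the wrappers along functional lines rather than having one big ISystem (a sketch; the interface names are mine):
#include <ctime>

struct IFileSystem
{
    virtual ~IFileSystem() {}
    virtual int open(const char* path, int flags) = 0;
    virtual int close(int fd) = 0;
};

struct IClock
{
    virtual ~IClock() {}
    virtual std::time_t now() = 0;
};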
Also, I agree with BЈовић: 100% coverage is not the ultimate goal - balance it.
(1) The example is not really a mock; it is unclear what mocks what, maybe because you did not want to spell out what you are doing. Mocking is, in practice, an application of polymorphism: people tend to wrap things in wrapper functions and then even in wrapper classes.
IParam* param = new ParamMock(xxx);
File file(param); // mocked
// ...
file.open(xxx);
(2) I don't know about it
(3) It definitely shows you know too much and are spending too much effort on unit testing. I always keep in mind that unit tests will eventually be thrown away.
What development process are you following?
Is it possible to write some template function f() that takes a type T and a pointer-to-member-function of signature void (T::*pmf)() as (template and/or function) arguments, and returns a const char* that points to the member function's __func__ variable (or to the mangled function name)?
EDIT: I am asked to explain my use case. I am trying to write a unit-test library (I know there is the Boost Test library for this purpose), and my aim is to not use any macros at all:
struct my_test_case : public unit_test::test
{
    void some_test()
    {
        assert_test(false, "test failed.");
    }
};
My test suite runner will call my_test_case::some_test(), and if its assertion fails, I want it to log:
ASSERTION FAILED (&my_test_case::some_test()): test failed.
I can use <typeinfo> to get the name of the class, but the pointer-to-member-function is just an offset, which gives the user no clue about the test function being called.
It seems like what you are trying to achieve is to get the name of the calling function in assert_test(). With gcc you can use backtrace to do that. Here is a naive example:
#include <iostream>
#include <string>
#include <cstdlib>    // free
#include <execinfo.h> // backtrace, backtrace_symbols
#include <cxxabi.h>   // abi::__cxa_demangle

namespace unit_test
{
    struct test {};
}

std::string get_my_caller()
{
    std::string caller("???");
    void *bt[3]; // backtrace addresses
    char **bts;  // backtrace symbols
    size_t size = sizeof(bt) / sizeof(*bt);
    int ret = -4;

    /* get backtrace symbols */
    size = backtrace(bt, size);
    bts = backtrace_symbols(bt, size);
    if (size >= 3) {
        caller = bts[2];

        /* demangle function name */
        char *name;
        size_t pos = caller.find('(') + 1;
        size_t len = caller.find('+') - pos;
        name = abi::__cxa_demangle(caller.substr(pos, len).c_str(), NULL, NULL, &ret);
        if (ret == 0)
            caller = name;
        free(name);
    }
    free(bts);
    return caller;
}

void assert_test(bool expression, const std::string& message)
{
    if (!expression)
        std::cout << "ASSERTION FAILED " << get_my_caller() << ": " << message << std::endl;
}

struct my_test_case : public unit_test::test
{
    void some_test()
    {
        assert_test(false, "test failed.");
    }
};

int main()
{
    my_test_case tc;
    tc.some_test();
    return 0;
}
Compiled with:
g++ -std=c++11 -rdynamic main.cpp -o main
Output:
ASSERTION FAILED my_test_case::some_test(): test failed.
Note: This is a gcc (linux, ...) solution, which might be difficult to port to other platforms!
TL;DR: It is not possible to do this in a reasonably portable way other than by using macros. Using debug symbols is a genuinely hard solution which will introduce maintenance and architectural problems in the future, and a bad one.
The names of functions, in any form, are not guaranteed to be stored in the binary [or anywhere else, for that matter]. Static free functions certainly don't have to expose their names to the rest of the world, and there is no real need for virtual member functions to have their names exposed either (except when the vtable is formed in A.c and the member function is in B.c).
It is also entirely permissible for the linker to remove ALL names of functions and variables. Names MAY be used by shared libraries to find functions not present in the binary, but the "ordinal" way can avoid that too, if the system is using that method.
I can't see any other solution than making assert_test a macro - and this is actually a GOOD use-case for macros. [Well, you could of course pass __func__ as an argument, but that's certainly NOT better than using macros in this limited case.]
Something like:
#define assert_test(x, y) do_assert_test(x, y, __func__)
and then implement do_assert_test to do what your original assert_test would do [less the impossible bit of figuring out the name of the function].
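A minimal sketch of that pair (note that __func__ yields only the unqualified name, e.g. some_test rather than my_test_case::some_test):
#include <iostream>
#include <string>

void do_assert_test(bool expression, const std::string& message, const char* caller)
{
    if (!expression)
        std::cout << "ASSERTION FAILED " << caller << ": " << message << std::endl;
}

#define assert_test(x, y) do_assert_test(x, y, __func__)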
If it's unit tests, and you can be sure that you will always do this with debug symbols, you could solve it in a very non-portable way by building with debug symbols and then using the debug interface to find the name of the function you are currently in. The reason I say it's non-portable is that the debug API for a given OS is not standard - Windows does it one way, Linux another, and I'm not sure how it works in MacOS - and to make matters worse, my quick search on the subject seems to indicate that reading debug symbols doesn't have an API as such - there is a debug API that allows you to inspect the current process and figure out where you are, what the registers contain, etc, but not to find out what the name of the function is. So that's definitely a harder solution than "convince whoever needs to be convinced that this is a valid use of a macro".
Consider the following code:
file_1.hpp:
typedef void (*func_ptr)(void);
func_ptr file1_get_function(void);
file_1.cpp:
// file_1.cpp
#include "file_1.hpp"

static void some_func(void)
{
    do_stuff();
}

func_ptr file1_get_function(void)
{
    return some_func;
}
file_2.cpp:
// file_2.cpp
#include "file_1.hpp"

void file2_func(void)
{
    func_ptr function_pointer_to_file1 = file1_get_function();
    function_pointer_to_file1();
}
While I believe the above example is technically possible - calling a function with internal linkage via a function pointer - is it bad practice to do so? Could there be some funky compiler optimization (auto-inlining, for instance) that would make this situation problematic?
There's no problem; this is fine. In fact, IMHO, it is good practice: it lets your function be called without polluting the space of externally visible symbols.
It would also be appropriate to use this technique in the context of a function lookup table, e.g. a calculator which passes in a string representing an operator name, and expects back a function pointer to the function for doing that operation.
The compiler/linker isn't allowed to make optimizations which break correct code and this is correct code.
Historical note: back in C89, externally visible symbols had to be unique within the first 6 characters; this was relaxed in C99 and is also commonly relaxed by compiler extension.
In order for this to work, you have to expose some portion of it as external and that's the clue most compilers will need.
Is there a chance that there's a broken compiler out there that will make mincemeat of this strange practice because they didn't foresee someone doing it? I can't answer that.
I can only think of false reasons to want to do this, though: fingerprint hiding, which fails because you have to expose the signature in the function pointer declaration anyway, unless you are planning to cast your way around things, in which case the question becomes "how badly is this going to hurt?".
The other reason would be facading callbacks: you have some super-sensitive static local function in module m, and you now want to expose the functionality in another module for callback purposes, but you want to audit that, so you want a facade:
typedef void (*fnptr)(); // you tagged the question as C++, so C++ it is

static void voodoo_function() {
}

fnptr get_voodoo_function(const char* file, int line) {
    std::cout << "requested voodoo function from " << file << ":" << line << "\n";
    return voodoo_function;
}

// ...

auto* fn = get_voodoo_function(__FILE__, __LINE__);
but that's not really helping much; you really want a wrapper around execution of the function.
At the end of the day, there is a much simpler way to expose a function pointer. Provide an accessor function.
static void voodoo_function() {}

void do_voodoo_function() {
    // provide external access to voodoo
    voodoo_function();
}
Because here you provide the compiler with an optimization opportunity: when you link with whole-program optimization, it can detect that this is a facade and eliminate it, because you let it worry about the function pointers.
But is there a really compelling reason not to just remove the static from in front of voodoo_function, other than not exposing its internal name? And if so, why is the internal name so precious that you would go to these lengths to hide it?
static void ban_account_if_user_is_ugly() {
    ...;
}

fnptr do_that_thing() {
    return ban_account_if_user_is_ugly;
}

vs

void do_that_thing() { // ban account if user is ugly
    ...
}
--- EDIT ---
Conversion: your function pointer is int (*)(int), but your static function is unsigned int (*)(unsigned int), and you don't want to have to cast it.
Again: just providing a facade function would solve the problem, and it will convert to a function pointer later. Converting it to a function pointer by hand can only be a stumbling block for the compiler's whole-program optimization.
But if you're casting, let's consider this:
// v1
fnptr get_fn_ptr() {
    // brute-force cast because otherwise it's 'hassle'
    return (fnptr)(static_fn);
}

// v2
int facade_fn(int i) {
    auto ui = static_cast<unsigned int>(i);
    auto result = static_fn(ui);
    return static_cast<int>(result);
}
Ok, unsigned to signed, not a big deal. And then someone comes along and changes what fnptr needs to be to void (*)(int, float). One of the above becomes a weird runtime crash and the other becomes a compile error.
Imagine a class which does the following:
class AClass
{
public:
    AClass() : mode(0) {}

    void a()
    {
        if (mode != 0) throw ("Error mode should be 0");
        // we pass the test, so now do something
        ...
        mode = 1;
    }

    void b()
    {
        if (mode != 1) throw ("Error mode should be 1");
        // we pass the test, so now do something
        ...
    }

    int mode;
};
The class contains many methods (easily more than 20), and in each of them we need to check the value of mode, which is obviously a lot of code duplication. Furthermore, we can identify two categories of methods: those that throw an error if mode != 0 and those that throw an error if mode != 1. Could it somehow be possible to group these methods into the two categories (category A = methods that throw an error if mode != 0, category B = methods that throw an error if mode != 1)?
EDIT: Looking at the current answers, I realise the way I formulated the question and the problem is probably not clear enough. What I want to avoid is having to call a function in each method of the class. Whether we write the check at the beginning of the methods or put it in a function and call that function is not the point; the question is whether we can avoid this altogether - whether there is a technique that would automatically check that a call to a method of a class is valid depending on some context.
AClass is actually an API in the context of my project. a(), b(), etc. are functions that the programmer can call to use the API; however, some of these methods can only be called in a precise order. For example, you can see in the code that a() sets mode = 1. So the programmer can do something like this:
a(); // mode = 0 so it's good
b(); // mode = 1 so it's good
but the following needs to fail (it will compile, of course, but at execution time I need to throw an error mentioning that the context in which b() was called was wrong):
b(); // mode 0 so it won't work
a(); // it will compile but throw an exception
I tried to see whether any pattern could do this but couldn't find anything at all. It seems impossible to me, and I believe the only option is really to write the necessary code. Could anyone suggest something though? Thank you very much.
Just add private member functions:
void assert_mode_0() {
    assert_mode(0);
}

void assert_mode_1() {
    assert_mode(1);
}

void assert_mode(int m) {
    if (mode != m)
        throw msg[m];
}
with a suitable definition of msg, of course.
Aside from implementing the check in a dedicated method (a great suggestion), you could also consider decomposing the behavior in AClass into two distinct classes, or delegating the mode-specific portion to a new pair of classes. This seems especially appropriate if the mode is invariant for an instance (as it is in the example); see the sketch below.
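A rough sketch of that decomposition (the class names are mine): if the mode-0 operations hand back the object on which the mode-1 operations live, an invalid ordering fails to compile instead of throwing at run time.
class Started   // operations valid once a() has run (mode == 1)
{
public:
    void b() { /* the mode-1 work */ }
};

class Fresh     // operations valid in the initial state (mode == 0)
{
public:
    Started a() { /* the mode-0 work */ return Started(); }
};

int main()
{
    Fresh f;
    Started s = f.a(); // OK: a() before b()
    s.b();             // OK
    // f.b();          // would not compile: Fresh has no b()
    return 0;
}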
Well I guess the simplest solution would be defining a macro or some inline function like this:
#define checkErrorMode0(x) \
    if ((x) != 0) throw ("Error mode should be 0");

#define checkErrorMode1(x) \
    if ((x) != 1) throw ("Error mode should be 1");

// or, probably within your class
inline void checkErrorMode0(int x) {
    if (x != 0) throw ("Error mode should be 0");
}

inline void checkErrorMode1(int x) {
    if (x != 1) throw ("Error mode should be 1");
}
So you could simply call one of these methods inside of the functions that require them.
But most likely there is a more elegant workaround for what you want to do.
After looking into the problem a bit more, it seems that the closest helpful answer is (by Nick):
Try looking into Aspect Oriented Software Development en.wikipedia.org/wiki/Aspect-oriented_software_development – Nick
The Wikipedia page is not easy to read and doesn't provide a C++ example, so it stays very abstract at first, but if you search for Aspect Oriented Programming and C++ you will find links with examples.
The idea behind it (and this is just a very quick summary) is to find a way of adding "services" or "functionalities" to a class. These services can notably be added at compile time through the use of templates. This is what I was intuitively experimenting with as an attempt at solving my problem, and I am glad to see the technique has been around for many years.
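As a very small taste of the compile-time flavour (my sketch, not taken from the references below): a template wrapper that weaves the mode check around any operation, so the "aspect" lives in one place instead of in 20 methods.
#include <stdexcept>

template <int RequiredMode, typename Op>
void checked(int mode, Op op)
{
    if (mode != RequiredMode)
        throw std::runtime_error("wrong mode for this operation");
    op(); // run the actual method body
}

// usage inside AClass (C++11 lambdas):
//   void a() { checked<0>(mode, [this] { /* mode-0 work */ mode = 1; }); }
//   void b() { checked<1>(mode, [this] { /* mode-1 work */ }); }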
This document is a good reference:
Aspect-Oriented Programming & C++ By Christopher Diggins, August 01, 2004.
And I found this link with example useful to understand the concept:
Implementing Aspects using Generative Programming by Calum Grant.
This is my first question after a long time reading this marvelous webpage.
Probably my question is a little silly, but I want to know others' opinions about this. What is better: to create several specific methods or, on the other hand, only one generic method? Here is an example...
unsigned char *Method1(CommandTypeEnum command, ParamsCommand1Struct *params)
{
    if (params == NULL) return NULL;
    // Construct a string (command) with those specific params (params->element1, ...)
    return buffer; // buffer is a member of the class
}

unsigned char *Method2(CommandTypeEnum command, ParamsCommand2Struct *params)
{
    ...
}

unsigned char *Method3(CommandTypeEnum command, ParamsCommand3Struct *params)
{
    ...
}

unsigned char *Method4(CommandTypeEnum command, ParamsCommand4Struct *params)
{
    ...
}
or
unsigned char *Method(CommandTypeEnum command, void *params)
{
    switch (command)
    {
        case CMD_1:
        {
            if (params == NULL) return NULL;
            ParamsCommand1Struct *value = (ParamsCommand1Struct *) params;
            // Construct a string (command) with those specific params (params->element1, ...)
            return buffer;
        }
        // ...
        default:
            break;
    }
    return NULL; // no matching command
}
The main thing I do not really like about the latter option is this,
ParamsCommand1Struct *value = (ParamsCommand1Struct *) params;
because params might not point to a ParamsCommand1Struct but to a ParamsCommand2Struct or something else.
I really appreciate your opinions!
General Answer
In Writing Solid Code, Steve Maguire's advice is to prefer distinct functions (methods) for specific situations. The reason is that you can assert conditions that are relevant to the specific case, and you can debug more easily because you have more context.
An interesting example is the standard C run-time's functions for dynamic memory allocation. Most of it is redundant, as realloc can actually do (almost) everything you need: if you have realloc, you don't need malloc or free. But when you have such a general function, used for several different types of operations, it's hard to add useful assertions, it's harder to write unit tests, and it's harder to see what's happening when debugging. Maguire takes it a step further and suggests that not only should realloc just do reallocation, it should probably be two distinct functions: one for growing a block and one for shrinking a block.
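For reference, this is how realloc subsumes the other two (relying on realloc(p, 0) as a free is implementation-defined in modern run-times, so this is purely illustrative):
#include <cstdlib>

int main()
{
    void *p = std::realloc(NULL, 100); // behaves like malloc(100)
    if (!p) return 1;

    void *q = std::realloc(p, 200);    // grow the block
    if (!q) { std::free(p); return 1; }

    std::free(q);                      // historically realloc(q, 0) served this role
    return 0;
}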
While I generally agree with his logic, sometimes there are practical advantages to having one general-purpose method (often when the operations are highly data-driven). So I usually decide on a case-by-case basis, with a bias toward creating very specific methods rather than overly general-purpose ones.
Specific Answer
In your case, I think you need to find a way to factor out the common code from the specifics. The switch is often a signal that you should be using a small class hierarchy with virtual functions.
If you like the single method approach, then it probably should be just a dispatcher to the more specific methods. In other words, each of those cases in the switch statement simply call the appropriate Method1, Method2, etc. If you want the user to see only the general purpose method, then you can make the specific implementations private methods.
Generally, it's better to offer separate functions, because their prototypes - names and arguments - communicate directly and visibly to the user what is available; this also leads to more straightforward documentation.
The one time I use a multi-purpose function is for something like a query() function, where a number of minor query functions, rather than leading to a proliferation of functions, are bundled into one, with a generic input and output void pointer.
In general, think about what you're trying to communicate to the API user through the API prototypes themselves: a clear sense of what the API can do. The user doesn't need excessive minutiae; he does need to know the core functions which are the entire point of having the API in the first place.
First off, you need to decide which language you are using. Tagging the question with both C and C++ here makes no sense. I am assuming C++.
If you can create a generic function then of course that is preferable (why would you prefer multiple, redundant functions?). The question is: can you? You seem to be unaware of templates, however. We would need to see what you have omitted here to tell whether templates are suitable:
// Construct a string (command) with those specific params (params->element1, ...)
In the general case, assuming templates are appropriate, all of that turns into:
template <typename T>
unsigned char *Method(CommandTypeEnum command, T *params) {
    // more here
}
On a side note, how is buffer declared? Are you returning a pointer to dynamically allocated memory? If so, prefer RAII-type objects and avoid dynamically allocating memory like that.
If you are using C++ then I would avoid using void* as you don't really need to. There is nothing wrong with having multiple methods. Note that you don't actually have to rename the function in your first set of examples: you can just overload a function using different parameters, so that there is a separate function signature for each type, as sketched below. Ultimately, this kind of question is very subjective and there are a number of ways of doing things. Looking at your functions of the first type, you would perhaps also be well served by looking into templated functions.
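A sketch of the overload idea - same name, one signature per parameter type, and the compiler picks the right one from the pointer type at the call site:
unsigned char *Method(CommandTypeEnum command, ParamsCommand1Struct *params);
unsigned char *Method(CommandTypeEnum command, ParamsCommand2Struct *params);
unsigned char *Method(CommandTypeEnum command, ParamsCommand3Struct *params);
unsigned char *Method(CommandTypeEnum command, ParamsCommand4Struct *params);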
You could create a struct. That's what I use to handle console commands.
typedef int (*pFunPrintf)(const char*, ...);
typedef void (CommandClass::*pKeyFunc)(char *, pFunPrintf);

struct KeyCommand
{
    const char * cmd;
    unsigned char cmdLen;
    pKeyFunc pfun;
    const char * Note;
    long ID;
};

#define CMD_FORMAT(a) a, (sizeof(a)-1)

static KeyCommand Commands[] =
{
    {CMD_FORMAT("one"),   &CommandClass::CommandOne,   "String Parameter", 0},
    {CMD_FORMAT("two"),   &CommandClass::CommandTwo,   "String Parameter", 1},
    {CMD_FORMAT("three"), &CommandClass::CommandThree, "String Parameter", 2},
    {CMD_FORMAT("four"),  &CommandClass::CommandFour,  "String Parameter", 3},
};

#define AllCommands (sizeof(Commands)/sizeof(KeyCommand))
And the parser function:
void CommandClass::ParseCmd(char* Argcommand)
{
    unsigned int x;
    for (x = 0; x < AllCommands; x++)
    {
        if (!memcmp(Commands[x].cmd, Argcommand, Commands[x].cmdLen))
        {
            (this->*Commands[x].pfun)(&Argcommand[Commands[x].cmdLen], &::printf);
            break;
        }
    }
    if (x == AllCommands)
    {
        // Unknown command
    }
}
I use a thread-safe printf, pPrintf, so ignore it.
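Usage would look something like this (assuming a CommandClass instance and member functions with the pKeyFunc signature):
CommandClass cc;
char input[] = "one hello";
cc.ParseCmd(input); // matches "one" by prefix, then calls CommandClass::CommandOne
                    // with the remainder (" hello") and ::printf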
I don't really know what you want to do, but in C++ you should probably derive multiple classes from a Formatter base class, like this:
class Formatter
{
public:
    virtual ~Formatter() {}
    virtual void Format(unsigned char* buffer, Command command) const = 0;
};

class YourClass
{
public:
    void Method(Command command, const Formatter& formatter)
    {
        formatter.Format(buffer_, command);
    }

private:
    unsigned char* buffer_;
};

int main()
{
    // ...
    Params1Formatter formatter(/*...*/);
    YourClass yourObject;
    yourObject.Method(CommandA, formatter);
    // ...
}
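Params1Formatter is not defined in the answer; a concrete formatter might look roughly like this (my sketch, reusing ParamsCommand1Struct from the question):
class Params1Formatter : public Formatter
{
public:
    explicit Params1Formatter(const ParamsCommand1Struct& params)
        : params_(params) {}

    virtual void Format(unsigned char* buffer, Command command) const
    {
        // build the command string into buffer from params_.element1, ...
    }

private:
    ParamsCommand1Struct params_;
};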
This removes the responsibility for handling all that params stuff from your class and makes it closed for changes. If there are new commands or parameters during further development, you don't have to modify (and eventually break) existing code; you add new classes that implement the new stuff.
While not a full answer, this should guide you in the correct direction: ONE FUNCTION, ONE RESPONSIBILITY. Prefer code where each function is responsible for one thing only and does it well. Code with a huge switch statement (which is not bad by itself) where you need to cast void * to some other type is a smell.
By the way, I hope you do realise that, according to the standard, you can cast from void * to <type> * only when the original conversion was exactly from <type> * to void *.
I am writing a library that I would like to be portable. Thus, it should not depend on glibc or Microsoft extensions or anything else that is not in the standard. I have a nice hierarchy of classes derived from std::exception that I use to handle errors in logic and input. Knowing that a particular type of exception was thrown at a particular file and line number is useful, but knowing how the execution got there would be potentially much more valuable, so I have been looking at ways of acquiring the stack trace.
I am aware that this data is available when building against glibc using the functions in execinfo.h (see question 76822) and through the StackWalk interface in Microsoft's C++ implementation (see question 126450), but I would very much like to avoid anything that is not portable.
I was thinking of implementing this functionality myself in this form:
class myException : public std::exception
{
public:
    // ...
    void AddCall( std::string s )
    {
        m_vCallStack.push_back( s );
    }

    std::string ToStr() const
    {
        std::string l_sRet = "";
        // ...
        l_sRet += "Call stack:\n";
        for( size_t i = 0; i < m_vCallStack.size(); i++ )
            l_sRet += " " + m_vCallStack[i] + "\n";
        // ...
        return l_sRet;
    }

private:
    // ...
    std::vector< std::string > m_vCallStack;
};

ret_type some_function( param_1, param_2, param_3 )
{
    try
    {
        // ...
    }
    catch( myException& e )
    {
        e.AddCall( "some_function( " + param_1 + ", " + param_2 + ", " + param_3 + " )" );
        throw;
    }
}

int main( int argc, char * argv[] )
{
    try
    {
        // ...
    }
    catch ( const myException& e )
    {
        std::cerr << "Caught exception: \n" << e.ToStr();
        return 1;
    }
    return 0;
}
Is this a terrible idea? It would mean a lot of work adding try/catch blocks to every function, but I can live with that. It would not work when the cause of the exception is memory corruption or lack of memory, but at that point you are pretty much screwed anyway. It may provide misleading information if some functions in the stack do not catch exceptions, add themselves to the list, and rethrow, but I can at least provide a guarantee that all of my library functions do so. Unlike a "real" stack trace I will not get the line number in calling functions, but at least I would have something.
My primary concern is the possibility that this will cause a slowdown even when no exceptions are actually thrown. Do all of these try/catch blocks require additional set-up and tear-down on each function invocation, or is it somehow handled at compile time? Or are there other issues I have not considered?
I think this is a really bad idea.
Portability is a very worthy goal, but not when it results in a solution that is intrusive, performance-sapping, and an inferior implementation.
Every platform (Windows/Linux/PS2/iPhone/etc) I've worked on has offered a way to walk the stack when an exception occurs and match addresses to function names. Yes, none of these are portable but the reporting framework can be and it usually takes less than a day or two to write a platform-specific version of stack walking code.
Not only is this less time than it would take to create and maintain a cross-platform solution, but the results are far better:
No need to modify functions
Traps crashes in standard or third party libraries
No need for a try/catch in every function (slow and memory intensive)
Look up Nested Diagnostic Context. Here is a little hint:
class NDC {
public:
    static NDC* getContextForCurrentThread();
    int addEntry(char const* file, unsigned lineNo);
    void removeEntry(int key);
    void dump(std::ostream& os);
    void clear();
};

class Scope {
public:
    Scope(char const *file, unsigned lineNo) {
        NDC *ctx = NDC::getContextForCurrentThread();
        myKey = ctx->addEntry(file, lineNo);
    }
    ~Scope() {
        if (!std::uncaught_exception()) {
            NDC *ctx = NDC::getContextForCurrentThread();
            ctx->removeEntry(myKey);
        }
    }
private:
    int myKey;
};

#define DECLARE_NDC() Scope s__(__FILE__, __LINE__)

void f() {
    DECLARE_NDC(); // always declare the scope

    // only use try/catch when you want to handle an exception
    // and dump the stack
    try {
        // do stuff in here
    } catch (...) {
        NDC* ctx = NDC::getContextForCurrentThread();
        ctx->dump(std::cerr);
        ctx->clear();
    }
}
The overhead is in the implementation of the NDC. I was playing with a lazily evaluated version, as well as one that kept only a fixed number of entries. The key point is that constructors and destructors handle the stack, so you don't need all of those nasty try/catch blocks and explicit manipulation everywhere.
The only platform-specific headache is the getContextForCurrentThread() method. You can use a platform-specific implementation based on thread-local storage to handle the job in most if not all cases.
If you are more performance oriented and live in the world of log files, then change the scope to hold a pointer to the file name and line number and omit the NDC thing altogether:
class Scope {
public:
    Scope(char const* f, unsigned l) : fileName(f), lineNo(l) {}
    ~Scope() {
        if (std::uncaught_exception()) {
            log_error("%s(%u): stack unwind due to exception\n",
                      fileName, lineNo);
        }
    }
private:
    char const* fileName;
    unsigned lineNo;
};
This will give you a nice stack trace in your log file when an exception is thrown. No need for any real stack walking, just a little log message when an exception is being thrown ;)
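Usage is then just one scope declaration per function (log_error comes from the snippet above; the function itself is my example):
void load_plugin(const char* path)
{
    Scope s(__FILE__, __LINE__); // or wrap this in a DECLARE_NDC-style macro

    // ... work that may throw; if an exception unwinds through this frame,
    // ~Scope logs "file(line): stack unwind due to exception"
}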
I don't think there's a "platform independent" way to do this - after all, if there were, there wouldn't be a need for StackWalk or the special gcc stack-tracing features you mention.
It would be a bit messy, but the way I would implement this would be to create a class that offers a consistent interface for accessing the stack trace, then have #ifdefs in the implementation that use the appropriate platform-specific methods to actually put the stack trace together.
That way your usage of the class is platform independent, and just that class would need to be modified if you wanted to target some other platform.
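The shape of that class might be something like the following sketch (the name and the per-platform bodies are mine; the glibc branch would use backtrace from execinfo.h and the Windows branch StackWalk, as mentioned in the question):
// stack_trace.hpp - the portable interface
#include <string>

class StackTrace
{
public:
    static std::string Capture(); // formatted trace of the current thread
};

// stack_trace.cpp - platform-specific implementations behind #ifdefs
#ifdef _WIN32
std::string StackTrace::Capture()
{
    // walk the stack with the StackWalk API and format the frames
    return "<trace>";
}
#elif defined(__GLIBC__)
std::string StackTrace::Capture()
{
    // use backtrace() / backtrace_symbols() from <execinfo.h>
    return "<trace>";
}
#else
std::string StackTrace::Capture()
{
    return "<stack trace unavailable on this platform>";
}
#endif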
In the debugger:
To get the stack trace of where an exception is thrown from, I just stick a breakpoint in the std::exception constructor.
Thus, when the exception is created, the debugger stops, and you can then see the stack trace at that point. Not perfect, but it works most of the time.
Stack managing is one of those simple things that get complicated very quickly. Better to leave it to specialized libraries. Have you tried libunwind? It works great, and AFAIK it's portable, though I've never tried it on Windows.
This will be slower but looks like it should work.
From what I understand, the problem with making a fast, portable stack trace is that the stack implementation is both OS- and CPU-specific, so it is implicitly a platform-specific problem. An alternative would be to use the MS/glibc functions with #ifdef and appropriate preprocessor defines (e.g. _WIN32) to implement the platform-specific solutions in different builds.
Since stack usage is highly platform and implementation dependent, there is no way to do it directly that is completely portable. However, you could build a portable interface to a platform and compiler specific implementation, localizing the issues as much as possible. IMHO, this would be your best approach.
The tracer implementation would then link to whatever platform specific helper libraries are available. It would then operate only when an exception occurs, and even then only if you called it from a catch block. Its minimal API would simply return a string containing the whole trace.
Requiring the coder to inject catch and rethrow processing in the call chain has significant runtime costs on some platforms, and imposes a large future maintenance cost.
That said, if you do choose to use the catch/throw mechanism, don't forget that even C++ still has the C preprocessor available, and that the macros __FILE__ and __LINE__ are defined. You can use them to include the source file name and line number in your trace information.
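For instance, a small helper on top of the question's myException (ADD_CALL is my name; sketch only):
#include <sstream>

// Appends "description (file:line)" to the exception's recorded call chain.
#define ADD_CALL(e, desc)                                            \
    do {                                                             \
        std::ostringstream oss;                                      \
        oss << (desc) << " (" << __FILE__ << ":" << __LINE__ << ")"; \
        (e).AddCall(oss.str());                                      \
    } while (0)

// usage in a catch block:
//   catch (myException& e) {
//       ADD_CALL(e, "some_function");
//       throw;
//   }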