I am writing a library that I would like to be portable. Thus, it should not depend on glibc or Microsoft extensions or anything else that is not in the standard. I have a nice hierarchy of classes derived from std::exception that I use to handle errors in logic and input. Knowing that a particular type of exception was thrown at a particular file and line number is useful, but knowing how the execution got there would be potentially much more valuable, so I have been looking at ways of acquiring the stack trace.
I am aware that this data is available when building against glibc using the functions in execinfo.h (see question 76822) and through the StackWalk interface in Microsoft's C++ implementation (see question 126450), but I would very much like to avoid anything that is not portable.
I was thinking of implementing this functionality myself in this form:
class myException : public std::exception
{
public:
    ...
    void AddCall( const std::string & s )
    { m_vCallStack.push_back( s ); }
    std::string ToStr() const
    {
        std::string l_sRet = "";
        ...
        l_sRet += "Call stack:\n";
        for( size_t i = 0; i < m_vCallStack.size(); i++ )
            l_sRet += "  " + m_vCallStack[i] + "\n";
        ...
        return l_sRet;
    }
private:
    ...
    std::vector< std::string > m_vCallStack;
};
ret_type some_function( param_1, param_2, param_3 )
{
    try
    {
        ...
    }
    catch( myException & e )  // catch by reference to avoid slicing
    {
        e.AddCall( "some_function( " + param_1 + ", " + param_2 + ", " + param_3 + " )" );
        throw;  // rethrow the original rather than a copy
    }
}
int main( int argc, char * argv[] )
{
    try
    {
        ...
    }
    catch( myException & e )
    {
        std::cerr << "Caught exception:\n" << e.ToStr();
        return 1;
    }
    return 0;
}
Is this a terrible idea? It would mean a lot of work adding try/catch blocks to every function, but I can live with that. It would not work when the cause of the exception is memory corruption or lack of memory, but at that point you are pretty much screwed anyway. It may provide misleading information if some functions in the stack do not catch exceptions, add themselves to the list, and rethrow, but I can at least provide a guarantee that all of my library functions do so. Unlike a "real" stack trace I will not get the line number in calling functions, but at least I would have something.
My primary concern is the possibility that this will cause a slowdown even when no exceptions are actually thrown. Do all of these try/catch blocks require additional set-up and tear-down on each function invocation, or is it somehow handled at compile time? Or are there other issues I have not considered?
I think this is a really bad idea.
Portability is a very worthy goal, but not when it results in a solution that is intrusive, performance-sapping, and an inferior implementation.
Every platform (Windows/Linux/PS2/iPhone/etc) I've worked on has offered a way to walk the stack when an exception occurs and match addresses to function names. Yes, none of these are portable but the reporting framework can be and it usually takes less than a day or two to write a platform-specific version of stack walking code.
Not only is this less time than it'd take to create and maintain a cross-platform solution, but the results are far better:
No need to modify functions
Traps crashes in standard or third party libraries
No need for a try/catch in every function (slow and memory intensive)
Look up Nested Diagnostic Context once. Here is a little hint:
class NDC {
public:
    static NDC* getContextForCurrentThread();
    int addEntry(char const* file, unsigned lineNo);
    void removeEntry(int key);
    void dump(std::ostream& os);
    void clear();
};

class Scope {
public:
    Scope(char const* file, unsigned lineNo) {
        NDC* ctx = NDC::getContextForCurrentThread();
        myKey = ctx->addEntry(file, lineNo);
    }
    ~Scope() {
        // while unwinding, leave the entry in place so the handler can dump it
        if (!std::uncaught_exception()) {
            NDC* ctx = NDC::getContextForCurrentThread();
            ctx->removeEntry(myKey);
        }
    }
private:
    int myKey;
};
#define DECLARE_NDC() Scope s__(__FILE__,__LINE__)
void f() {
    DECLARE_NDC(); // always declare the scope

    // only use try/catch when you want to handle an exception
    // and dump the stack
    try {
        // do stuff in here
    } catch (...) {
        NDC* ctx = NDC::getContextForCurrentThread();
        ctx->dump(std::cerr);
        ctx->clear();
    }
}
The overhead is in the implementation of the NDC. I was playing with a lazily evaluated version, as well as one that only kept a fixed number of entries. The key point is to use constructors and destructors to maintain the stack, so that you don't need all of those nasty try/catch blocks and explicit manipulation everywhere.
The only platform specific headache is the getContextForCurrentThread() method. You can use a platform specific implementation using thread local storage to handle the job in most if not all cases.
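Since C++11 you can even do this portably: the thread_local keyword gives each thread its own instance. A minimal sketch, assuming the NDC class above:
NDC* NDC::getContextForCurrentThread() {
    // one NDC per thread, constructed on first use
    thread_local NDC context;
    return &context;
}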
If you are more performance oriented and live in the world of log files, then change the scope to hold a pointer to the file name and line number and omit the NDC thing altogether:
class Scope {
public:
    Scope(char const* f, unsigned l) : fileName(f), lineNo(l) {}
    ~Scope() {
        if (std::uncaught_exception()) {
            log_error("%s(%u): stack unwind due to exception\n",
                      fileName, lineNo);
        }
    }
private:
    char const* fileName;
    unsigned lineNo;
};
This will give you a nice stack trace in your log file when an exception is thrown. No need for any real stack walking, just a little log message when an exception is being thrown ;)
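To illustrate, here is roughly how those Scope objects might be dropped into a call chain (a sketch; the macro name and log_error are assumptions carried over from the snippets above):
#include <stdexcept>

#define TRACE_SCOPE() Scope s__(__FILE__, __LINE__)

void inner() {
    TRACE_SCOPE();
    throw std::runtime_error("boom"); // unwinding logs inner's line, then outer's
}

void outer() {
    TRACE_SCOPE();
    inner();
}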
I don't think there's a "platform independent" way to do this - after all, if there was, there wouldn't be a need for StackWalk or the special gcc stack tracing features you mention.
It would be a bit messy, but the way I would implement this would be to create a class that offers a consistent interface for accessing the stack trace, then have #ifdefs in the implementation that use the appropriate platform-specific methods to actually put the stack trace together.
That way your usage of the class is platform independent, and just that class would need to be modified if you wanted to target some other platform.
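A minimal sketch of that shape, with hypothetical names; each #ifdef branch would call into the platform's own facility (StackWalk-style APIs on Windows, the execinfo.h functions with glibc):
#include <string>
#include <vector>

class StackTrace {
public:
    static std::vector<std::string> capture(); // same interface on every platform
};

#ifdef _WIN32
std::vector<std::string> StackTrace::capture() {
    // ... walk the stack with CaptureStackBackTrace/StackWalk64 and symbol lookup ...
    return std::vector<std::string>();
}
#else
std::vector<std::string> StackTrace::capture() {
    // ... use backtrace() / backtrace_symbols() from execinfo.h ...
    return std::vector<std::string>();
}
#endif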
In the debugger:
To get the stack trace of where an exception is thrown from, I just stick a breakpoint in the std::exception constructor.
Thus when the exception is created the debugger stops, and you can then see the stack trace at that point. Not perfect, but it works most of the time.
Stack managing is one of those simple things that get complicated very quickly. Better leave it for specialized libraries. Have you tried libunwind? Works great and AFAIK it's portable, though I've never tried it on Windows.
This will be slower but looks like it should work.
From what I understand, the problem in making a fast, portable stack trace is that the stack implementation is both OS- and CPU-specific, so it is implicitly a platform-specific problem. An alternative would be to use the MS/glibc functions, with #ifdef and appropriate preprocessor defines (e.g. _WIN32) to implement the platform-specific solutions in different builds.
Since stack usage is highly platform and implementation dependent, there is no way to do it directly that is completely portable. However, you could build a portable interface to a platform and compiler specific implementation, localizing the issues as much as possible. IMHO, this would be your best approach.
The tracer implementation would then link to whatever platform specific helper libraries are available. It would then operate only when an exception occurs, and even then only if you called it from a catch block. Its minimal API would simply return a string containing the whole trace.
Requiring the coder to inject catch and rethrow processing in the call chain has significant runtime costs on some platforms, and imposes a large future maintenance cost.
That said, if you do choose to use the catch/throw mechanism, don't forget that even C++ still has the C preprocessor available, and that the macros __FILE__ and __LINE__ are defined. You can use them to include the source file name and line number in your trace information.
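For instance, a small macro along these lines (a sketch reusing myException::AddCall from the question) stamps the file and line into each trace entry:
#include <sstream>

#define ADD_CALL(ex, fn_name)                                        \
    do {                                                             \
        std::ostringstream os__;                                     \
        os__ << fn_name << " at " << __FILE__ << ":" << __LINE__;    \
        (ex).AddCall(os__.str());                                    \
    } while (0)

// usage in a catch block:
//   catch( myException & e ) { ADD_CALL(e, "some_function"); throw; }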
Related
In short: this is a question about smart pointers in C. Reason: embedded programming, and the need to ensure that when a complex algorithm is used, proper deallocation occurs with little effort on the developer's side.
My favorite feature of C++ is its ability to execute proper deallocation of an object that was allocated on the stack and goes out of scope. The Go language's defer provides the same functionality, and it is a bit closer in spirit to C.
Go's defer would be the desired way of doing things in C. Is there a practical way to add such functionality?
The goal of doing so is to simplify tracking when and where an object goes out of scope. Here is a quick example:
struct MyDataType *data = malloc(sizeof(struct MyDataType));
defer(data, deallocator);

if (condition) {
    // deallocator(data) is called automatically
    return;
}

// do something

if (irrelevant) {
    struct DT *localScope = malloc(...);
    defer(localScope, deallocator);
    // deallocator(localScope) is called when we exit this scope
}

struct OtherType *data2 = malloc(...);
defer(data2, deallocator);

if (someOtherCondition) {
    // deallocator(data) and deallocator(data2) are called in the order added
    return;
}
In other languages I could create an anonymous function inside the code block, assign it to a variable, and execute it manually before every return. That would be at least a partial solution. In Go, deferred functions can be chained. Manual chaining with anonymous functions in C is error-prone and impractical.
Thank you
In C++, I've seen "stack based classes" that follow the RAII pattern. You could make a general purpose Defer class (or struct) that can take any arbitrary function or lambda.
For example:
#include <cstdlib>
#include <functional>
#include <iostream>
#include <string>
using std::cout;
using std::endl;
using std::function;
using std::string;
struct Defer {
    function<void()> action;
    Defer(function<void()> doLater) : action{doLater} {}
    ~Defer() {
        action(); // the deferred action runs when this object goes out of scope
    }
};
void Subroutine(int i) {
    Defer defer1([]() { cout << "Phase 1 done." << endl; });
    if (i == 1) return;

    char const* p = new char[100];
    Defer defer2([p]() { delete[] p; cout << "Phase 2 done, and p deallocated." << endl; });
    if (i == 2) return;

    string s = "something";
    Defer defer3([&s]() { s = ""; cout << "Phase 3 done, and s set to empty string." << endl; });
}

int main() {
    cout << "Call Subroutine(1)." << endl;
    Subroutine(1);
    cout << "Call Subroutine(2)." << endl;
    Subroutine(2);
    cout << "Call Subroutine(3)." << endl;
    Subroutine(3);
    return EXIT_SUCCESS;
}
Many different answers, but a few interesting details have not been mentioned.
Of course the destructors of C++ are very strong and should be used very often; sometimes smart pointers can help you. But the mechanism that most resembles defer is ON_BLOCK_EXIT/ON_BLOCK_EXIT_OBJ (see http://www.drdobbs.com/cpp/generic-change-the-way-you-write-excepti/184403758 ). Do not forget to read about ByRef.
One big difference between C++ and Go is when the deferred code is called: in C++ it runs when the program leaves the scope where it was created, but in Go it runs when the program leaves the function. That means this code won't work at all:
for i := 0; i < 10; i++ {
    mutex.Lock()
    defer mutex.Unlock()
    /* do something under the mutex */
}
Of course C does not pretend to be object oriented, and therefore there are no destructors at all. That helps the readability of the code a lot, because you know that your program at line X does only what is written in that line, in contrast to C++, where each closing curly bracket could cause dozens of destructors to be called.
In C you can use the hated goto statement. Don't use it for anything else, but it is practical to have a cleanup label at the end of a function and goto cleanup from many places. It gets a bit more complicated when you want to release more than one resource: then you need more than one cleanup label. A function using this pattern ends up looking like this:
int process(pthread_mutex_t *mutex, const char *path) {
    int ret = -1;
    pthread_mutex_lock(mutex);
    FILE *f = fopen(path, "r");
    if (!f) goto cleanup_mutex;
    ret = 0; /* ... real work; on failure: goto cleanup_file ... */
cleanup_file:
    fclose(f);
cleanup_mutex:
    pthread_mutex_unlock(mutex);
    return ret;
}
C does not have destructors (unless you count the GCC-specific variable attribute cleanup, which is weird and rarely used; notice also that the GCC function attribute destructor is not what other languages, C++ notably, call a destructor). C++ has them. And C & C++ are very different languages.
In C++11, you might define your own class holding a std::vector of std::function-s, initialized using a std::initializer_list of lambda expressions (and perhaps dynamically augmented by some push_back). Then its destructor could mimic Go's defer-ed statements. But this is not idiomatic.
Go has defer statements and they are idiomatic in Go.
I recommend sticking to the idioms of your programming languages.
(In other words: don't think in Go while coding in C++)
You could also embed some interpreter (e.g. Lua or Guile) in your application. You might also learn more about garbage collection techniques and concepts and use them in your software (in other words, design your application with its specific GC).
Reason: embedded programming and need to ensure that if complex algorithm is used, then proper deallocation occurs with little effort on the developer side.
You might use arena-based allocation techniques, and de-allocate the arena when suitable... When you think about that, it is similar to copying GC techniques.
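A minimal bump-arena sketch to show the idea (all names here are invented for illustration); everything allocated from the arena is released by one call at the end:
#include <cstddef>
#include <cstdlib>

struct Arena {
    char*       base;
    std::size_t used, cap;
};

Arena arena_create(std::size_t cap) {
    Arena a = { static_cast<char*>(std::malloc(cap)), 0, cap };
    return a;
}

void* arena_alloc(Arena& a, std::size_t n) {
    n = (n + 7) & ~static_cast<std::size_t>(7); // keep allocations aligned
    if (a.used + n > a.cap) return nullptr;     // arena exhausted
    void* p = a.base + a.used;
    a.used += n;
    return p;
}

void arena_destroy(Arena& a) { // one call releases every allocation at once
    std::free(a.base);
    a.base = nullptr;
    a.used = a.cap = 0;
}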
Maybe you dream of some homoiconic language with a powerful macro system suitable for meta-programming. Then look into Common Lisp.
I just implemented a very simple defer, like Go's, several days ago.
The only behaviour different from Go is that my defer will not be executed when you throw an exception and never catch it. Another difference is that it cannot accept a function with multiple arguments the way Go's can, but we can deal with that by having a lambda capture local variables.
The implementation is here:
#include <functional>
#include <type_traits>
#include <utility>

class _Defer {
    std::function<void()> __callback;
public:
    _Defer(_Defer &&);
    ~_Defer();

    template <typename T>
    _Defer(T &&);
};

_Defer::_Defer(_Defer &&__that)
    : __callback{std::forward<std::function<void()>>(__that.__callback)} {
}

template <typename T>
_Defer::_Defer(T &&__callback)
    : __callback{
          static_cast<std::function<void()>>(std::forward<T>(__callback))
      } {
    static_assert(std::is_convertible<T, std::function<void()>>::value,
                  "Cannot be converted to std::function<void()>.");
}

_Defer::~_Defer() {
    this->__callback();
}
And then I defined some macros to make my defer look like a keyword in C++ (just for fun):
#define __defer_concatenate(__lhs, __rhs) \
    __lhs##__rhs
#define __defer_declarator(__id) \
    if (0); /* You may have forgotten a `;' or used defer outside of a scope. */ \
    _Defer __defer_concatenate(__defer, __id) =
#define defer \
    __defer_declarator(__LINE__)
The if (0); is used to prevent using defer outside of a scope. We can then use defer just like in Go:
#include <iostream>

void foo() {
    std::cout << "foo" << std::endl;
}

int main() {
    defer []() {
        std::cout << "bar" << std::endl;
    };
    defer foo;
}
This will print
foo
bar
to the screen.
GO defer would be the desired way of doing things in C. Is there a practical way to add such functionality?
The goal of doing so is simplification of tracking when and where object goes out of scope.
C does not have any built-in mechanism for automatically invoking any kind of behavior at the end of an object's lifetime. The object itself ceases to exist, and any memory it occupied is available for re-use, but there is no associated hook for executing code.
For some kinds of objects, that is entirely satisfactory by itself -- those whose values do not refer to other objects with allocated storage duration that need to be cleaned up as well. In particular, if struct MyDataType in your example is such a type, then you get automatic cleanup for free by declaring instances as automatic variables instead of allocating them dynamically:
void foo(void) {
    // not a pointer:
    struct MyDataType data /* = initializer */;
    // ...
    /* The memory (directly) reserved for 'data' is released */
}
For objects that require attention at the end of their lifetime, it is generally a matter of code style and convention to ensure that you know when to clean up. It helps, for example, to declare all of your variables at the top of the innermost block containing them, though C itself does not require this. It can also help to structure your code so that for each object that requires custom cleanup, all code paths that may execute during its lifetime converge at the end of that lifetime.
Myself, as a matter of personal best practices, I always try to write any cleanup code needed for a given object as soon as I write its declaration.
In other languages I could create an anonymous function inside the code block, assign it to the variable and execute manually in front of every return. This would be at least a partial solution. In GO language defer functions can be chained. Manual chaining with anonymous functions in C is error prone and impractical
C has neither anonymous functions nor nested ones. It often does make sense, however, to write (named) cleanup functions for data types that require cleanup. These are analogous to C++ destructors, but you must call them manually.
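For example, a paired create/destroy convention might look like this (a sketch, written to compile as C++ as well; MyDataType's contents are invented):
#include <cstdlib>

struct MyDataType { char* buffer; };

// the create/destroy pair plays the role of constructor and destructor
MyDataType* MyDataType_create(std::size_t n) {
    MyDataType* d = static_cast<MyDataType*>(std::malloc(sizeof *d));
    if (d) d->buffer = static_cast<char*>(std::malloc(n));
    return d;
}

void MyDataType_destroy(MyDataType* d) {
    if (!d) return;
    std::free(d->buffer); // release owned resources first
    std::free(d);         // then the object itself
}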
The bottom line is that many C++ paradigms such as smart pointers, and coding practices that depend on them, simply do not work in C. You need different approaches, and they exist, but converting a large body of existing C++ code to idiomatic C is a distinctly non-trivial undertaking.
For those using C, I’ve built a preprocessor in C (open source, Apache license) that inserts the deferred code at the end of each block:
https://sentido-labs.com/en/library/#cedro
GitHub: https://github.com/Sentido-Labs/cedro/
It includes a C utility that wraps the compiler (works out of the box with GCC and Clang, configurable) so you can use it as a drop-in replacement for cc, called cedrocc, and if you decide to get rid of it, running cedro on a C source file will produce plain C (see the examples in the manual).
The alternatives I know about are listed in the “Related work” part of the documentation:
Apart from the already mentioned «A defer mechanism for C», there are macros that use a for loop as for (allocation and initialization; condition; release) { actions } [a] or other techniques [b].
[a] “P99 Scope-bound resource management with for-statements” from the same author (2010), “Would it be possible to create a scoped_lock implementation in C?” (2016), ”C compatible scoped locks“ (2021), “Modern C and What We Can Learn From It - Luca Sas [ ACCU 2021 ] 00:17:18”, 2021
[b] “Would it be possible to create a scoped_lock implementation in C?” (2016), “libdefer: Go-style defer for C” (2016), “A Defer statement for C” (2020), “Go-like defer for C that works with most optimization flag combinations under GCC/Clang” (2021)
Compilers like GCC and Clang have non-standard features to do this, like the cleanup variable attribute (__attribute__((cleanup(...)))).
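A tiny sketch of that attribute (GCC/Clang only, not standard C or C++):
#include <cstdlib>

static void free_ptr(char** p) { std::free(*p); }

void demo() {
    // free_ptr(&s) runs automatically when s goes out of scope
    __attribute__((cleanup(free_ptr))) char* s =
        static_cast<char*>(std::malloc(16));
    (void)s; // ... use s ...
}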
This implementation avoids dynamic allocation and most limitations of other implementations shown here:
#include <type_traits>
#include <utility>

template <typename F>
struct deferred
{
    std::decay_t<F> f;
    template <typename G>
    deferred(G&& g) : f{std::forward<G>(g)} {}
    ~deferred() { f(); }
};

template <typename G>
deferred(G&&) -> deferred<G>;

#define CAT_(x, y) x##y
#define CAT(x, y) CAT_(x, y)
#define ANONYMOUS_VAR(x) CAT(x, __LINE__)
#define DEFER deferred ANONYMOUS_VAR(defer_variable) = [&]
And use it like this:
#include <iostream>

int main()
{
    DEFER {
        std::cout << "world!\n";
    };
    std::cout << "Hello ";
}
Now, whether to allow exceptions in DEFER is a design choice bordering on philosophy, and I'll leave it to Andrei to fill in the details.
Note that all such defer facilities in C++ are necessarily bound to the scope in which they are declared, as opposed to Go's defer, which is bound to the function in which it is declared.
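A quick illustration of that difference, reusing the DEFER macro above: in C++ the deferred action runs at the end of each loop iteration, where Go's defer would postpone all of them to function exit:
#include <iostream>

int main()
{
    for (int i = 0; i < 2; ++i) {
        DEFER { std::cout << "end of iteration " << i << "\n"; };
        std::cout << "iteration " << i << "\n";
    }
    // prints: iteration 0, end of iteration 0, iteration 1, end of iteration 1
}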
Consider the following code:
file_1.hpp:
typedef void (*func_ptr)(void);
func_ptr file1_get_function(void);
file_1.cpp:
#include "file_1.hpp"

static void some_func(void)
{
    do_stuff();
}

func_ptr file1_get_function(void)
{
    return some_func;
}
file_2.cpp:
#include "file_1.hpp"

void file2_func(void)
{
    func_ptr function_pointer_to_file1 = file1_get_function();
    function_pointer_to_file1();
}
While I believe the above example is technically possible (calling a function with internal linkage only via a function pointer), is it bad practice to do so? Could there be some funky compiler optimizations that take place (auto-inlining, for instance) that would make this situation problematic?
There's no problem; this is fine. In fact, IMHO, it is a good practice which lets your function be called without polluting the space of externally visible symbols.
It would also be appropriate to use this technique in the context of a function lookup table, e.g. a calculator which passes in a string representing an operator name, and expects back a function pointer to the function for doing that operation.
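A sketch of that lookup-table shape (operator names and functions invented for illustration):
#include <cstdio>
#include <cstring>

static double add(double a, double b) { return a + b; }
static double mul(double a, double b) { return a * b; }

typedef double (*binop)(double, double);

struct op_entry { const char* name; binop fn; };
static const op_entry ops[] = { { "add", add }, { "mul", mul } };

binop lookup(const char* name) {
    for (const op_entry& e : ops)
        if (std::strcmp(name, e.name) == 0) return e.fn;
    return nullptr; // unknown operator
}

int main() {
    if (binop f = lookup("mul"))
        std::printf("%g\n", f(6, 7)); // prints 42
}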
The compiler/linker isn't allowed to make optimizations which break correct code and this is correct code.
Historical note: back in C89, externally visible symbols had to be unique in the first 6 characters; this was relaxed in C99 and also commonly by compiler extension.
In order for this to work, you have to expose some portion of it as external and that's the clue most compilers will need.
Is there a chance that there's a broken compiler out there that will make mincemeat of this strange practice because they didn't foresee someone doing it? I can't answer that.
I can only think of false reasons to want to do this though: fingerprint hiding, which fails because you have to expose it in the function pointer declaration, unless you are planning to cast your way around things, in which case the question is "how badly is this going to hurt?".
The other reason would be facading callbacks: you have some super-sensitive static local function in module m and you now want to expose the functionality in another module for callback purposes, but you want to audit that, so you want a facade:
static void voodoo_function() {
}

fnptr get_voodoo_function(const char* file, int line) {
    // you tagged the question as C++, so C++ io it is.
    std::cout << "requested voodoo function from " << file << ":" << line << "\n";
    return voodoo_function;
}
...
// question tagged as c++, so I'm using c++ syntax
auto* fn = get_voodoo_function(__FILE__, __LINE__);
but that's not really helping much, you really want a wrapper around execution of the function.
At the end of the day, there is a much simpler way to expose a function pointer. Provide an accessor function.
static void voodoo_function() {}

void do_voodoo_function() {
    // provide external access to voodoo
    voodoo_function();
}
Because here you provide the compiler with an optimization opportunity - when you link, if you specify whole program optimization, it can detect that this is a facade that it can eliminate, because you let it worry about function pointers.
But is there a really compelling reason not just to remove the static from in front of voodoo_function, other than not exposing the internal name for it? And if so, why is the internal name so precious that you would go to these lengths to hide it?
static void ban_account_if_user_is_ugly() {
    ...;
}

fnptr do_that_thing() {
    return ban_account_if_user_is_ugly;
}

vs

void do_that_thing() { // ban account if user is ugly
    ...
}
--- EDIT ---
Conversion. Your function pointer is int(*)(int) but your static function is unsigned int(*)(unsigned int) and you don't want to have to cast it.
Again: Just providing a facade function would solve the problem, and it will transform into a function pointer later. Converting it to a function pointer by hand can only be a stumbling block for the compiler's whole program optimization.
But if you're casting, let's consider this:
// v1
fnptr get_fn_ptr() {
    // brute force cast because otherwise it's 'hassle'
    return (fnptr)(static_fn);
}

// v2
int facade_fn(int i) {
    auto ui = static_cast<unsigned int>(i);
    auto result = static_fn(ui);
    return static_cast<int>(result);
}
OK, unsigned to signed, not a big deal. And then someone comes along and changes what fnptr needs to be to void(*)(int, float). One of the above becomes a weird runtime crash and the other becomes a compile error.
What error handling schemes do people use in C++ when it's necessary, for one reason or another, to avoid exceptions? I've implemented my own strategy, but I want to know what other people have come up with, and to bring discussion on the topic about the benefits and drawbacks of each approach.
Now, to explain the scheme I'm using on a particular project, it can be summed up like this: methods that would normally need to throw implement an interface like:
bool methodName( ...parameters... , ErrorStack& errStack )
{
    if (someError) {
        errStack.frames.push_back( ErrorFrame( ErrorType, ErrorSource ) );
        return false;
    }
    ... normal processing ...
    return true;
}
In short, the return value says whether the processing was OK or an error occurred. The ErrorStack is basically a std::vector of error frames that contains detailed information about the error:
enum ErrorCondition {
    OK,
    StackOverflowImminent,
    IndexOutOfBounds,
    OutOfMemory
};

struct ErrorFrame {
    ErrorCondition condition;
    std::string source;

    ErrorFrame( ErrorCondition cnd, const char* src ) : condition(cnd), source(src) {}

    inline bool isOK() const {
        return OK == condition;
    }
};

struct ErrorStack {
    std::vector< ErrorFrame > frames;

    void clear() {
        frames.clear();
    }
};
The advantage of this approach is a detailed stack of errors, similar to what Java exceptions give, but without the runtime overhead of exceptions. The main drawback (besides the non-standardness, and that I still have to handle exceptions from third-party code somehow and translate them to an ErrorCondition) is that the ErrorCondition enum is hard to maintain, since multiple components of the code base require different errors. A second version of this strategy could use an inheritance hierarchy of some sort for the error conditions, but I'm still not confident about the best way to achieve it.
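To make that third-party boundary concrete, here is a sketch of the kind of translation wrapper meant above (third_party_operation is hypothetical; ErrorStack/ErrorFrame are the types from this question):
#include <new>

bool CallThirdParty(ErrorStack& errStack)
{
    try {
        third_party_operation(); // may throw
        return true;
    } catch (const std::bad_alloc&) {
        errStack.frames.push_back(ErrorFrame(OutOfMemory, "CallThirdParty"));
        return false;
    }
}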
Check out http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf, page 32 onwards.
I have a setup that looks like this.
class Checker
{
    // member data
    Results m_results; // see below
public:
    bool Check();
private:
    bool Check1();
    bool Check2();
    // ... and so on
};
Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores (see below).
bool Checker::Check()
{
    // initialisations etc.
    Check1();
    Check2();
    // ... and so on
}
A typical Check function would look like this:
bool Checker::Check1()
{
    double result;
    // lots of code
    m_results.SetCheck1Result(result);
}
And the results class looks something like this:
class Results
{
    double m_check1Result;
    double m_check2Result;
    // ...
public:
    void SetCheck1Result(double d);
    double GetOverallResult()
    { return max(m_check1Result, m_check2Result, ...); }
};
Note: all code is oversimplified.
The Checker and Result classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know if any of the results exceeds 1. If it does, subsequent checks need not be carried out (it's an optimisation). To achieve this, I could either:
Modify every CheckN function to check its result and return early; the parent Check function would keep checking m_results. OR
In Results::SetCheckNResult(), throw an exception if the value exceeds 1 and catch it at the end of Checker::Check().
The first is tedious, error prone and sub-optimal because every CheckN function further branches out into sub-checks etc.
The second is non-intrusive and quick. One disadvantage I can think of is that the Checker code may not necessarily be exception-safe (although no other exception is being thrown anywhere else). Is there anything else that's obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding?
Is there a better 3rd option?
I don't think this is a good idea. Exceptions should be limited to, well, exceptional situations. Yours is a question of normal control flow.
It seems you could very well move all the redundant code dealing with the result out of the checks and into the calling function. The resulting code would be cleaner and probably much easier to understand than non-exceptional exceptions.
Change your CheckX() functions to return the double they produce and leave dealing with the result to the caller. The caller can more easily do this in a way that doesn't involve redundancy.
If you want to be really fancy, put those functions into an array of function pointers and iterate over that. Then the code for dealing with the results would all be in a loop. Something like:
bool Checker::Check()
{
    for( std::size_t idx = 0; idx < sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
        double result = check_tbl[idx]();
        if( result > 1 )
            return false; // or whichever way your logic is (an enum might be better)
    }
    return true;
}
Edit: I had overlooked that you need to call one of the N SetCheckResultX() functions, too, which would be impossible to incorporate into my sample code. So either you can shoehorn this into an array, too (change them to SetCheckResult(std::size_t idx, double result)), or you would have to have two function pointers in each table entry:
struct check_tbl_entry {
    check_fnc_t checker;
    set_result_fnc_t setter;
};

check_tbl_entry check_tbl[] = { { &Checker::Check1, &Checker::SetCheck1Result }
                              , { &Checker::Check2, &Checker::SetCheck2Result }
                              // ...
                              };

bool Checker::Check()
{
    for( std::size_t idx = 0; idx < sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
        double result = check_tbl[idx].checker();
        check_tbl[idx].setter(result);
        if( result > 1 )
            return false; // or whichever way your logic is (an enum might be better)
    }
    return true;
}
(And, no, I'm not going to attempt to write down the correct syntax for a member function pointer's type. I've always had to look this up and still never got it right the first time... But I know it's doable.)
Exceptions are meant for cases that shouldn't happen during normal operation. They're hardly non-intrusive; their very nature involves unwinding the call stack, calling destructors all over the place, yanking the control to a whole other section of code, etc. That stuff can be expensive, depending on how much of it you end up doing.
Even if it were free, though, using exceptions as a normal flow control mechanism is a bad idea for one other, very big reason: exceptions aren't meant to be used that way, so people don't use them that way, so they'll be looking at your code and scratching their heads trying to figure out why you're throwing what looks to them like an error. Head-scratching usually means you're doing something more "clever" than you should be.
I have some kind of an ideological question, so:
Suppose I have some templated function
template <typename Stream>
void Foo(Stream& stream, Object& object) { ... }
which does something with this object and the stream (for example, serializes that object to the stream or something like that).
Let's say I also add some plain wrappers like (and let's say the number of these wrappers equals 2 or 3):
void FooToFile(const std::string& filename, Object& object)
{
    std::ofstream stream(filename.c_str());
    Foo(stream, object);
}
So, my question is:
Where in this case (ideologically) should I throw the exception if my stream is bad? Should I do this in each wrapper, or just move that check into my Foo, so that its body would look like:
if (!stream.good()) throw (something);
// Perform ordinary actions
I understand that this may not be the most important part of coding, and these solutions are actually nearly equivalent, but I just want to know the "proper" way to implement this.
Thank you.
In this case it's better to throw it in the lower-level Foo function so that you don't have to copy the validation and exception throwing code in all of your wrappers. In general using exceptions correctly can make your code a lot cleaner by removing a lot of data validation checking that you might otherwise do redundantly at multiple levels in the call stack.
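Concretely, that means a single check at the top of the templated function (a sketch reusing Foo from the question; the exception type is just an example):
#include <stdexcept>

template <typename Stream>
void Foo(Stream& stream, Object& object)
{
    if (!stream.good())
        throw std::runtime_error("Foo: stream is not in a good state");
    // ... serialize object to the stream as before ...
}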
I would prefer not to delay reporting an error. If you know, after you have created the stream, that it is no good, why call a method that works on it? I know that to reduce code redundancy you plan to move the check further down, but the downside of that approach is a less specific error message. So this depends to some extent on the source code context. If you can get away with a generic error message at the lower function level, you can add the code there; this will surely ease maintenance of the code, especially when there are new developers on the team. If you need a specific error message, it is better to handle it at the point of failure itself.
To avoid code redundancy, call a common function that creates this exception/error for you. Do not copy/paste the code into every wrapper.
The sooner you catch the exception the better. The more specific the exception is, the better. Don't be scared of including most of your code in try/catch blocks, apart from the declarations.
For example:
int count = 0;
bool isTrue = false;
MyCustomerObject* someObject = nullptr;
try
{
    // Initialise count, isTrue, someObject. Process.
}
catch(SpecificException& e)
{
    // Handle and throw up the stack. You don't want to lose the exception.
}
I like to use helper functions for this:
struct StreamException : std::runtime_error
{
    StreamException(const std::string& s) : std::runtime_error(s) { }
    virtual ~StreamException() throw() { }
};

void EnsureStreamIsGood(const std::ios& s)
{
    if (!s.good()) { throw StreamException("stream is not good"); }
}

void EnsureStreamNotFail(const std::ios& s)
{
    if (s.fail()) { throw StreamException("stream is in a failed state"); }
}
I test them immediately before and after performing stream operations if I don't expect a failure.
Traditionally in C++, stream operations don't throw exceptions. This is partly for historic reasons, and partly because streaming failures are expected errors. The way C++ standard stream classes deal with this is to set a flag on a stream to indicate an error has occurred, which user code can check. Not using exceptions makes resumption (which is often required for streaming ops) easier than if exceptions were thrown.
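For completeness: the standard library does let you opt in to exceptions per stream via std::ios::exceptions(). A small sketch:
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream in;
    in.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        in.open("missing.txt"); // failure now throws instead of just setting flags
    } catch (const std::ios_base::failure& e) {
        std::cerr << "stream error: " << e.what() << "\n";
    }
}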