I'm implementing a helper class with a number of useful functions that will be used in a large number of classes. However, a few of them are not designed to be called from certain sections of code (specifically from interrupt functions; this is an embedded project).
However, for users of this class the reasons why some functions are allowed while others are prohibited from being called from interrupt functions might not be immediately obvious, and in many cases the prohibited functions might appear to work but can cause very subtle and hard-to-find bugs later on.
The best solution for me would be to cause a compiler error if the offending function is called from a code section it shouldn't be called from.
I've also considered a few non-technical solutions, but a technical one would be preferred.
1. Indicate it in the documentation with a warning. This might be easily missed, especially when the function seems obvious: for something like read_byte(), why would anyone study the documentation to check whether the function is reentrant?
2. Indicate it in the function's name. Ugly. Who likes function names like read_byte_DO_NOT_CALL_FROM_INTERRUPT()?
3. Have a global variable in a common header, included in each and every file, which is set to true at the beginning of each interrupt and back to false at the end; the offending functions check it at their beginning and exit if it's set. Problem: interrupts might interrupt each other. It also doesn't cause compile-time warnings or errors.
4. Similar to #3, have a global handler with a stack, so that nested interrupts can be handled. This still only works at runtime, and it also adds a lot of overhead; interrupts should not waste more than a clock cycle or two on this feature, if at all.
5. Abuse the preprocessor. Unfortunately, the naive way, a #define at the beginning and an #undef at the end of each interrupt, with an #ifdef at the beginning of the offending function, doesn't work, because the preprocessor doesn't care about scope.
6. As interrupts are always classless functions, I could make the offending functions protected and declare them as friends in all classes which use them. This way, it would be impossible to use them directly from within interrupts. As main() is classless, I'd have to place most of it into a class method. I don't like this too much, as it can become needlessly complicated, and the error it generates is not obvious (users of these functions might wrap them to "solve" the problem without realizing what the real problem was). A compiler or linker error message like "ERROR: function_name() is not to be used from within an interrupt" would be much preferable.
7. Checking the interrupt registers within the function has several issues. On a large microcontroller there are a lot of registers to check. There is also a very small but dangerous chance of a false positive when an interrupt flag is set exactly one clock cycle before: my function would fail because it thinks it was called from an interrupt, while the interrupt would only be entered on the next cycle. In nested interrupts, the interrupt flags are cleared, causing a false negative. And finally, this is yet another runtime solution.
I did play with some very basic template metaprogramming a while ago, but I'm not experienced enough with it to find a simple and elegant solution. I would rather try other ways before committing to implementing template metaprogramming bloatware.
A solution working with only features available in C would also be acceptable, even preferable.
Some comments below. As a warning, they won't be fun reading, but I wouldn't be doing you a service by not pointing out what's wrong here.
If you are calling external functions from inside an ISR, no amount of documentation or coding will help you, since in most cases it is bad practice to do so. The programmer must know what they are doing; otherwise no documentation or coding mechanism will save the program.
Programmers do not design library functions specifically for the purpose of getting called from inside an ISR. Rather, programmers design ISRs with all the special restrictions that come with an ISR in mind: make sure interrupt flags are cleared correctly, keep the code short, do not call external functions, do not block the MCU longer than necessary, consider re-entrancy, consider dangerous compiler optimizations (use volatile). A person who does not know these things is not competent enough to write ISRs.
If you actually have a function int read_byte(int address) then this suggests that the program design is bad to begin with. This function could do one of two things:
Either it can read a byte from some peripheral hardware, in which case the function name is very bad and should be changed.
Or it could read any generic byte from an address, in which case the function is 100% useless "bloatware". You can safely assume that a somewhat competent C programmer can read a byte from a memory address without some bloatware holding their hand.
In either case, int is not a byte; it is a word of 16 or 32 bits. The function should be returning uint8_t. Similarly, if the parameter passed is used to describe a memory-mapped address of an MCU, it should have type void*, uint8_t* or uintptr_t. Everything else is wrong.
Notably, if you are using int rather than stdint.h for embedded systems programming, then this whole discussion is the least of your problems, as you haven't even gotten the fundamental basics right. Your programs will be filled to the brim with undefined behavior and implicit promotion bugs.
Overall, all the solutions you suggest are simply not acceptable. The root of the problem here appears to be the program design. Deal with that instead of inventing ways to defend the broken design with horrible meta programming.
I would suggest options 8 & 9: peer reviews & assertions.
You state in the comments that your interrupt functions are short. If that's really the case, then reviewing them will be trivial. Adding comments in the header will make it so that anyone can see what's going on. As for adding an assert: while it means debug builds will return the wrong result on a violation, it also ensures that you will catch any offending calls, and gives you a fighting chance to catch the problem during testing.
Ultimately, the macro processing just won't work, since the best you can do is catch whether a header has been included; if the call stack goes via another wrapper (one that doesn't have comments), you just can't catch that.
Alternatively, you could make your helper a template, but then every wrapper around your helper would also have to be a template so that it can know whether you're in an interrupt routine... which will ultimately be your entire code base.
If you have one file for all interrupt routines, then this might be helpful:
Define one macro in the class header, say FORBID_INTERRUPT_ROUTINE_ACCESS, and in the interrupt handler file check for that macro definition:
#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error "Functions from this class cannot be accessed from interrupt handlers."
#endif
If someone then includes the header file for that class in order to use it in an interrupt handler, the build will fail with that error. Note: #error unconditionally fails the compilation, so no special compiler flags are needed.
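A sketch of how the pieces fit together (the file names, app.h, and the ISR are all illustrative):

// helper.h -- the class whose functions must not be used from ISRs
#define FORBID_INTERRUPT_ROUTINE_ACCESS
class Helper {
public:
    static unsigned char read_byte(const volatile unsigned char* addr) { return *addr; }
};

// interrupts.cpp -- the single file containing all interrupt routines
#include "app.h" // whatever the ISRs legitimately need

#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error "helper.h was included (directly or indirectly) in the interrupt file."
#endif

void timer_isr() { /* safe: helper.h cannot have been included above */ }

As noted above, this only detects inclusion into the interrupt translation unit; a call routed through a wrapper in another file still slips past.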
Here is the C++ template functions suggestion.
I don't think this is metaprogramming or bloatware.
First, declare two classes that will define the context in which the user will be using the functions:
class In_Interrupt_Handler;
class In_Non_Interrupt_Handler;
If you have some common implementations between the two contexts, a base class can be added:
class Handy_Base
{
protected:
static int Handy_protected() { return 0; }
public:
static int Handy_public() { return 0; }
};
The primary template definition, without any implementations. The implementations will be provided by the specialization classes:
template< class Is_Interrupt_Handler >
class Handy_functions;
And the specializations.
// Functions can be used when inside an interrupt handler
template<>
struct Handy_functions< In_Interrupt_Handler >
: Handy_Base
{
static int Handy1() { return 1; }
static int Handy2() { return 2; }
};
// Functions can be used when inside any function
template<>
struct Handy_functions< In_Non_Interrupt_Handler >
: Handy_Base
{
static int Handy1() { return 4; }
static int Handy2() { return 8; }
};
In this way, if the user of the API wants to access the functions, the only way is by specifying which type of functions is needed.
Example of usage:
#include <iostream>

int main()
{
using IH_funcs = Handy_functions<In_Interrupt_Handler>;
std::cout << IH_funcs::Handy1() << '\n';
std::cout << IH_funcs::Handy2() << '\n';
using Non_IH_funcs = Handy_functions<In_Non_Interrupt_Handler>;
std::cout << Non_IH_funcs::Handy1() << '\n';
std::cout << Non_IH_funcs::Handy2() << '\n';
}
In the end I think the problem boils down to the developer using your framework, and how much boilerplate your framework requires of the developer.
The above does not stop the developer from calling the non-interrupt-handler functions from inside an interrupt handler.
I think that kind of analysis would require some sort of static analysis checking tool.
I can't quite wrap my head around how a user will be able to distinguish between the exceptions my functions can throw. One of my functions can throw two instances of std::invalid_argument.
For example, in a constructor:
#include <stdexcept> // std::invalid_argument
#include <string>

class Foo
{
public:
    Foo(int hour, int minute) // constructors have no return type
        : h(hour), m(minute)
    {
        if (hour < 0 || hour > 23)
            throw std::invalid_argument(std::string("..."));
        if (minute < 0 || minute > 59)
            throw std::invalid_argument(std::string("..."));
    }
private:
    int h, m;
};
Note: It's an example, please do not answer with bounded integers.
Say the user calls Foo(23, 62); how would the user's exception handler distinguish between the two possible instances of std::invalid_argument?
Or am I doing this wrong, and I should inherit from std::invalid_argument to distinguish between them? That is,
class InvalidHour : public std::invalid_argument
{
public:
    InvalidHour(const std::string& what_arg)
        : std::invalid_argument(what_arg) {}
};

class InvalidMinute : public std::invalid_argument
{
public:
    InvalidMinute(const std::string& what_arg)
        : std::invalid_argument(what_arg) {}
};
Then, throw InvalidHour and InvalidMinute?
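Presumably the caller could then tell them apart like this (the derived classes must be caught before the base class):

try {
    Foo f(23, 62);
}
catch (const InvalidMinute& e) {
    // handle the bad minute; e.what() carries the message
}
catch (const InvalidHour& e) {
    // handle the bad hour
}
catch (const std::invalid_argument& e) {
    // any other argument error
}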
Edit: Creating a class for every possible exception seems a little too much to me, especially in a large program. Does every program that effectively uses exceptions this way come with extensive documentation on what to catch?
As mentioned in an answer, I have considered assert too. Looking around Stack Overflow, I have found a majority of people saying you should throw an exception (as my particular case is in a constructor).
After looking around a lot of online information on when to use exceptions, the general consensus is to use an assert for logic errors and exceptions for runtime errors. But calling Foo(int, int) with invalid arguments could be a runtime error, and this is what I want to address.
The standard exception hierarchy is unsuitable for logic errors. Use an assert and be done with it. If you absolutely do want to transform hard bugs into harder-to-detect runtime errors, then note that there are only two reasonable things a handler can do: achieve the contractual goal in some possibly different way (possibly just retrying the operation), or in turn throw an exception (usually just rethrowing), and the exact cause of the original exception seldom plays any rôle in this. Finally, if you do want to support code that tries various combinations of arguments until it finds one that doesn't throw, no matter how silly that appears now that it's written out in words, you have std::system_error for passing an integer error code up, and you can define derived exception classes.
All that said, go for the assert.
That's what it's for.
You could also create further error classes that derive from invalid_argument, and that would make them distinguishable, but this is not a solution that scales. If what you actually want is to show the user a message they can understand, then the string parameter to invalid_argument serves that purpose.
The standard exceptions do not allow storing the additional information you want, and parsing exception messages is a bad idea. One solution is to subclass, as you mention. There are others: with the advent of std::exception_ptr, it is possible to use "inner" (or "nested") exceptions as in Java or .NET, though this feature is more applicable to exception translation. Some prefer Boost.Exception, another solution for exceptions extensible at runtime.
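In case the nested-exception route is unfamiliar, here is a minimal sketch (C++11; the function names are made up for illustration):

#include <exception>
#include <iostream>
#include <stdexcept>

void parse_minute(int m)
{
    if (m < 0 || m > 59)
        throw std::invalid_argument("minute out of range");
}

void make_time(int h, int m)
{
    try {
        parse_minute(m);
    }
    catch (...) {
        // wraps the in-flight exception inside a new one
        std::throw_with_nested(std::runtime_error("make_time failed"));
    }
}

int main()
{
    try {
        make_time(23, 62);
    }
    catch (const std::exception& e) {
        std::cout << e.what() << '\n';
        try { std::rethrow_if_nested(e); }
        catch (const std::invalid_argument& inner) {
            std::cout << "caused by: " << inner.what() << '\n';
        }
    }
}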
Don't fall into the "just assert" trap like Cheers and hth. A simple example:
void safe_copy(const char *from, std::size_t fromLen, char *buf, std::size_t bufLen)
{
assert( fromLen <= bufLen );
std::copy(from, from + fromLen, buf);
}
There's nothing wrong with the assert per se, but if the code is compiled for release (with NDEBUG defined), then safe_copy will not be safe at all, and the result may be a buffer overrun, potentially allowing a malicious party to take over the process. Throwing an exception to indicate a logic error has its own problems, as mentioned, but at least it will prevent the immediate undefined behavior in the release build. I'd therefore suggest, in security-critical functions, using assertions in the debug build and exceptions in the release build:
void safe_copy(const char *from, std::size_t fromLen, char *buf, std::size_t bufLen)
{
assert( fromLen <= bufLen );
if ( fromLen > bufLen )
throw std::invalid_argument("safe_copy: fromLen greater than bufLen");
std::copy(from, from + fromLen, buf);
}
Of course, if you use this pattern a lot, you may wish to define a macro of your own to simplify the task. This is beyond the scope of the current topic, however.
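Such a macro is not hard to write; here is one possible sketch (the name REQUIRE is mine, not standard):

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <stdexcept>

// Asserts in debug builds; in release builds (NDEBUG), the assert vanishes
// but the throw below still guards the function.
#define REQUIRE(cond) \
    do { \
        assert(cond); \
        if (!(cond)) \
            throw std::invalid_argument("requirement failed: " #cond); \
    } while (0)

void safe_copy2(const char* from, std::size_t fromLen, char* buf, std::size_t bufLen)
{
    REQUIRE(fromLen <= bufLen);
    std::copy(from, from + fromLen, buf);
}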
Two other reasons to throw exceptions rather than rely on assertions: when you are implementing a library or some form of exportable code and cannot tell a priori how a user will want to handle some form of error, or when you need checking even when the user builds your code in RELEASE mode (which users often do). Note that building in RELEASE mode typically defines NDEBUG, which "takes away" any assertions.
For example, take a look at this code:
#include <cassert>
#include <new> // std::nothrow

struct Node
{
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

// some code
Node* n = new (std::nothrow) Node(5); // plain new would throw std::bad_alloc instead of returning null
assert(n && "Nodes can't be null");
// use n

When this code is built in RELEASE mode, that assertion "doesn't exist" and the caller might get n being nullptr at runtime.
If the code threw an exception instead of relying on the assertion, the caller can still "react" to a nullptr anomaly in both debug and release builds. The downside is that the exception approach requires a lot more boilerplate code.
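For comparison, a sketch of the throwing version of the same check, using the Node struct above:

#include <new>
#include <stdexcept>

Node* make_node()
{
    Node* n = new (std::nothrow) Node(5);
    if (!n)
        throw std::runtime_error("failed to allocate Node"); // survives release builds
    return n;
}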
I started using exceptions a few weeks ago and now I wonder if there is a way to just throw a warning. This warning shouldn't force the application to exit if it isn't caught. I will give you an example of a situation in which I would like to use that.
There is a system that appends properties to unique ids. When I try to add a property to an id that doesn't exist yet, the system should create that id internally, add the property to it, and return the result. Of course this can't be done quietly. But since the application could stay running, I do not want to throw an exception.
How can I notify that something wasn't quite correct, but the system runs on?
Who do you want to notify? The end-user? In which case, just write a suitable message to cerr. Or better, write a wrapper function (e.g. LOG_WARNING()) to do it in a controlled manner. Or better still, use a logging framework.
But since the application could stay running, I do not want to throw an exception.
Note that an exception doesn't have to result in the application terminating. You can catch an exception higher up the stack, and handle the situation appropriately.
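For instance (a sketch; the unknown_id type and the functions are made up for illustration):

#include <iostream>
#include <stdexcept>

// Illustrative: thrown when a property is attached to an id that doesn't exist yet.
struct unknown_id : std::runtime_error {
    using std::runtime_error::runtime_error;
};

void add_property(int id, int prop)
{
    throw unknown_id("id 42 did not exist; created it");
}

int main()
{
    try {
        add_property(42, 7);
    }
    catch (const unknown_id& e) {
        std::cerr << "warning: " << e.what() << '\n'; // log and keep running
    }
    // execution continues here
}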
No, that's not possible. You can only throw and catch exceptions. If you want to be cheeky you could do
#include <exception>
#include <string>

class warning : public std::exception
{
public:
    explicit warning(const std::string& msg) : msg(msg) {}
    const char* what() const noexcept override { return msg.c_str(); } // message of the warning
private:
    std::string msg;
};
Then you can:
throw warning("this is a warning");
This could be an artificially made up warning system if you want.
While there's no such thing as throwing a warning, I believe the functionality you're looking for is available via errno.
You can set it to any of the standard errors or make up your own error codes (please document them well, though).
This can be useful if your library is meant to be used by other developers. An example of when it might help is a JSON parser. JSON supports arbitrarily large numbers with arbitrary accuracy, so if your parser internally uses doubles to represent numbers and it encounters a number it can't represent exactly, it could round to the nearest representable number and set errno = EDOM (argument out of range). That leaves the decision up to the developers as to whether the rounding matters. If you want to be super nice, you could even add a way to retrieve the locations of the rounds, possibly even with the original text.
All of that said, this should only be used in situations where:
the warning really can be bypassed completely in some scenarios
the root source of the warning is input to the library you're writing
in some situations the consumer of the library might care about the warning, but most of the time wouldn't
there's not a more suitable way to return the information (like a status passed by reference with an overload that doesn't require the status)
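A minimal sketch of the errno approach described above (the rounding check is faked for brevity):

#include <cerrno>
#include <cstdio>
#include <cstdlib>

// Illustrative: parse a JSON-style number, rounding silently but flagging it via errno.
double parse_number(const char* text)
{
    double value = std::strtod(text, nullptr);
    bool rounded = true; // placeholder for a real "did we lose precision?" check
    if (rounded)
        errno = EDOM; // tell the caller the value was rounded
    return value;
}

int main()
{
    errno = 0;
    double v = parse_number("0.1000000000000000000000001");
    if (errno == EDOM)
        std::printf("warning: value was rounded to %.17g\n", v);
}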
Just print a message to stderr or to your logs.
Very often you have a function which, for given arguments, can't generate a valid result or can't perform some task. Apart from exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.
The first approach mixes valid returns with a value that does not belong to the codomain of the function (very often -1) and indicates an error:
int foo(int arg) {
    if (everything_fine)
        return some_value;
    return -1; // on failure
}
The second approach is to return the function status and pass the result back through a reference parameter:
bool foo(int arg, int& result) {
    if (everything_fine) {
        result = some_value;
        return true;
    }
    return false; // on failure
}
Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?
Don't ignore exceptions, for exceptional and unexpected errors.
However, just answering your points: the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, whilst quietly nudging them to remember to check error conditions. In my opinion, this is nearly always "return a status code, and put the value in a separate reference", but this is entirely one man's personal view. My arguments for doing this...
If you choose to return a mixed value, then you've overloaded the concept of return to mean "Either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.
You often cannot easily find values in the function's codomain to co-opt as error codes, and so need to mix and match the two styles of error reporting within a single API.
There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code and stick some null-like concept in the return reference that will explode easily when used. If one uses the mixed error/value return model, it's very easy to pass it into another function in which the error part of the codomain is valid input (but meaningless in the context).
Arguments for returning the mixed error code/value model might be simplicity - no extra variables floating around, for one. But to me, the dangers are worse than the limited gains - one can easily forget to check the error codes. This is one argument for exceptions - you literally can't forget to handle them (your program will flame out if you don't).
boost::optional is a brilliant technique. An example will assist.
Say you have a function that returns a double, and you want to signify an error when the result cannot be calculated.
double divide(double a, double b){
    return a / b;
}
What to do in the case where b is 0?
#include <boost/optional.hpp>

boost::optional<double> divide(double a, double b){
    if (b != 0){
        return a / b;
    }else{
        return boost::none;
    }
}
Use it like below:
boost::optional<double> v = divide(a, b);
if(v){
    // Note the dereference operator
    cout << *v << endl;
}else{
    cout << "divide by zero" << endl;
}
The idea of special return values completely falls apart when you start using templates. Consider:
template <typename T>
T f( const T & t ) {
if ( SomeFunc( t ) ) {
return t;
}
else { // error path
return ???; // what can we return?
}
}
There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean types which must be checked, and passing the really interesting values back by reference, leads to a horrendous coding style.
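For contrast, a sketch of the exception-throwing version (SomeFunc here is just an illustrative predicate):

#include <stdexcept>

template <typename T>
bool SomeFunc(const T&) { return false; } // illustrative validity test

template <typename T>
T f(const T& t) {
    if (SomeFunc(t)) {
        return t;
    }
    // no special value of T is needed; any T stays a valid result
    throw std::domain_error("f: no valid result for this input");
}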
Quite a few books, etc., strongly advise the second, so you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.
While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second if you need/want to assign the result to more than one recipient, you have to do the call, then separately do a second assignment. I.e.,
account1.rate = account2.rate = current_rate();
vs.:
set_current_rate(account1.rate);
account2.rate = account1.rate;
or:
set_current_rate(account1.rate);
set_current_rate(account2.rate);
The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is due largely to this decision alone that essentially all code that uses the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.
Exception handling is usually a better way to handle things than either one though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection. Code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success, and make no attempt at even catching a problem -- especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.
In C, one of the more common techniques I have seen is that a function returns zero on success and non-zero (typically an error code) on error. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data more straightforward to use (vs. returning some data through the return value and some through a pointer).
Another C technique I see is to return 0 on success and on error, -1 is returned and errno is set to indicate the error.
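A sketch of the first convention (zero on success, data via an out-pointer; parse_count is made up):

#include <cerrno>
#include <cstdio>
#include <cstdlib>

// Returns 0 on success, an error code on failure; the result comes back
// through the out-pointer.
int parse_count(const char* text, int* out)
{
    char* end = nullptr;
    long v = std::strtol(text, &end, 10);
    if (end == text)       return EINVAL; // no digits at all
    if (v < 0 || v > 1000) return ERANGE; // outside the range we accept
    *out = static_cast<int>(v);
    return 0;
}

int main()
{
    int count = 0;
    int err = parse_count("12x3", &count);
    if (err != 0)
        std::printf("parse failed: %d\n", err);
    else
        std::printf("count = %d\n", count);
}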
The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservations: the technique that is best is the technique that is consistent throughout your entire program. Using different styles of error reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.
There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.
If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.
C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.
Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.
And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with; you don't raise an exception for a string not matching in strcmp().
It's not always possible, but regardless of which error reporting method you use, the best practice is to design a function, whenever possible, so that it does not have failure cases, and when that's not possible, to minimize the possible error conditions. Some examples:
Instead of passing a filename down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor. This eliminates the need to check for "failed to open file" and report it to the caller at each step.
If there's an inexpensive way to check (or find an upper bound) for the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory. In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.
When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.
These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.
For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answers/return codes. I've expounded upon this while answering another question. Take a look.
For C, I find the fugly out parameters less offensive than fugly "magic numbers".
You missed a method: Returning a failure indication and requiring an additional call to get the details of the error.
There's a lot to be said for this.
Example:
int count;
if (!TryParse("12x3", &count))
DisplayError(GetLastError());
Edit:
This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:
HKEY key;
long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
if (errcode != ERROR_SUCCESS)
return DisplayError(errcode);
Contrast this with:
HKEY key;
if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
return DisplayError(GetLastError());
(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)
In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:
HKEY key;
if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
return DisplayGenericError();
Edit:
Looking at R.'s request, I've found a scenario where it can actually be satisfied.
For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to rest in. Instead, we have no good alternative to using a global thread-local variable (TLV) that can be checked after failure.
However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.
I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.
In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.
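Roughly what the instance-as-context version could look like (an illustrative sketch, not a real registry API):

// Illustrative: the instance stores the error code for later retrieval.
class RegistryKey
{
public:
    bool Open(const char* path)
    {
        // Pretend the underlying call fails with code 2 ("not found")
        // whenever no path is supplied.
        errorCode = (path != nullptr) ? 0 : 2;
        return errorCode == 0;
    }
    int ErrorCode() const { return errorCode; }
private:
    int errorCode = 0;
};

int main()
{
    RegistryKey reg;
    if (!reg.Open(nullptr))
        return reg.ErrorCode(); // why it failed, from the instance context
}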
I think there is no right answer to this. It depends on your needs and on the overall application design, etc. I personally use the first approach, though.
I think a good compiler would generate almost the same code, with the same speed. It's a personal preference. I would go with the first.
If you have references and the bool type, you must be using C++, in which case: throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or tight embedded environments. Assuming neither of those, always, always throw an exception.
Well, the first one will compile in both C and C++, so it's fine for portable code.
The second one, although more "human readable", leaves you never knowing truthfully which value the program is returning; specifying it as in the first case gives you more control. That's what I think.
I prefer using a return code to indicate the type of error that occurred. This helps the caller of the API take appropriate error-handling steps.
Consider the GLib APIs, which most often return an error code and an error message along with the boolean return value.
Thus when a function call returns FALSE, you can check the context from the GError variable.
A failure in the second approach you describe will not help the caller take corrective action. It's a different matter when your documentation is very clear, but in other cases it will be a headache to figure out how to use the API call.
For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?
Apart from doing it the correct way, which of these two stupid ways do you prefer?
I prefer to use exceptions when I'm using C++ and need to throw an error, and in general when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, that condition means there is no way the caller can proceed, and every conceivable caller will be able to handle it... which is rare. I prefer to use stupid out parameters when modifying old code and for some reason I can change the number of parameters but not the return type or identify a special value or throw an exception, which so far has been never.
Does the additional parameter in the second method bring notable performance overhead?
Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.
I'm writing reactive software, which repeatedly receives input, processes it and emits relevant output. The main loop looks something like:
initialize();
while (true) {
    Message msg, out;
    receive(msg);
    process(msg, out);
    // no global state is saved between loop iterations!
    send(out);
}
I want that, whatever error occurs during the process phase, whether it is an out-of-memory error, a logical error, an invalid assertion, etc., the program cleans up whatever it did and keeps running. I'll assume the input was invalid and simply ignore it.
C++'s exceptions are exceptionally good for this situation: I could surround process with a try/catch clause and throw an exception whenever something goes wrong. The only thing I need to make sure of is that I clean up all my resources before throwing. This could be ensured by RAII, or by writing a global resource allocator (for instance, if your destructor might throw an exception) and using it exclusively for all resources.
Socket s = GlobalResourceHandler.manageSocket(new Socket());
...
try {
    process(msg, out);
}
catch (...) {
    GlobalResourceHandler.cleanUp();
}
However, using exceptions is forbidden by our coding standard (as they are in Google's C++ style guide, BTW); as a result, all the code is compiled with exceptions off, and I believe nobody's going to change the way everything works just for my design problem.
Also, this is code for an embedded platform, so the fewer extra C++ features we use, the faster and more portable the code becomes.
Is there an alternative design I can consider?
update:
I appreciate everyone's answers about the idiotic coding standard. The only thing I can say is: in big organizations you have to have strict, and sometimes illogical, rules to make sure no idiot comes along and makes your good code unmaintainable. The standard is more about people than about technicalities. Yes, a bad programmer can make a mess of any code, but it's much worse if you give him extra tools for the task.
I'm still looking for a technical answer.
Coding these kinds of services all day long, I understand your problem. Although we do have exceptions within our code, we don't return them to the external libraries that invoke it; instead we have a simple 'tribool':
enum ReturnCode
{
OK = 0, // OK (there is a reason for it to be 0)
KO, // An error occurred, wait for next message
FATAL // A critical error occurred, reboot
};
I must say FATAL is... exceptional. There isn't any code path in the application that returns it, apart from the initialization (you can't do much if you're not initialized properly).
C++ brings much to the table here with RAII, since it shrugs off multiple return paths and guarantees deterministic release of the objects it holds.
For the actual code checking, you can simply use some macros:
// Here is the reason for OK being 0 and KO and FATAL being something else
#define CHECK_RETURN(Expr) if (ReturnCode code = (Expr)) return code;
#define CHECK_BREAK(Expr) if (ReturnCode code = (Expr)) \
if (KO == code) break; else return code;
Then you can use them like so:
CHECK_RETURN( initialize() )
while(true)
{
Message msg,out;
CHECK_BREAK( receive(msg) )
CHECK_BREAK( process(msg,out) )
CHECK_BREAK( send(out) )
}
As noted, the real bummer is constructors: you can't have "normal" constructors in such a situation.
Perhaps you can use boost::optional; if you can't, I would really suggest duplicating its functionality. Combine that with systematic factory functions in lieu of constructors and you're good to go:
boost::optional<MyObject> obj = MyObject::Build(1, 2, 3);
if (!obj) return KO;
obj->foo();
Looks much like a pointer, except that it's stack allocated and thus involves near zero overhead.
If you can't throw an exception, then the alternative is to return an error code (or false, or a similar failure indicator).
Whether you throw or return, you still use C++ deterministic destructors to release resources.
The one thing that you can't easily just 'return' from is a constructor. If you have an unrecoverable error in a constructor, then it's a good time to throw; but if you're not allowed to throw, then instead you must return, and in that case you need some other way to signal construction failure:
Have private constructors and static factory methods; have the factory method return null on construction failure; and don't forget to check for a null return whenever you call a factory method (see the sketch after this list).
Have a "get_isConstructedOk()" property which you invoke after each construction (and don't forget to invoke and check it on every newly-constructed object).
Implement "two-stage" construction: any code which might fail mustn't be in a constructor, and must instead be in a separate bool initialize() method that's called after the constructor (and don't forget to call initialize, and don't forget to check its return value).
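A minimal sketch of the first option (the class and the failure condition are made up; a real embedded version might draw from a static pool instead of the heap):

// Private constructor plus a factory that reports failure by returning nullptr.
class Connection
{
public:
    static Connection* Create(int port)
    {
        if (port <= 0) return nullptr; // would-be constructor failure
        return new Connection(port);
    }
private:
    explicit Connection(int port) : port(port) {}
    int port;
};

int main()
{
    Connection* c = Connection::Create(-1);
    if (!c) return 1; // don't forget to check!
    delete c;
}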
However, using exceptions is forbidden in our coding standard (also in Google's C++ standard, BTW). Is there an alternative design I can consider?
The short answer is no.
The long answer is yes :). You can make all functions return an error code (similar to the implementation of Microsoft's COM platform).
The main disadvantages of this approach are:
you have to handle all exceptional cases explicitly
your code size increases dramatically
the code becomes more difficult to read.
Instead of:
initialize();
while (true) {
    Message msg, out;
    receive(msg);
    process(msg, out);
    // no global state is saved between loop iterations!
    send(out);
}
you have:
if (!succeeded(initialize()))
    return SOME_ERROR;

while (true) {
    Message msg, out;
    RetVal rv = receive(msg);
    if (!succeeded(rv)) {
        SomeErrorHandler(rv);
        break;
    }
    rv = process(msg, out);
    if (!succeeded(rv)) {
        SomeErrorHandler(rv);
        break;
    }
    // no global state is saved between loop iterations!
    rv = send(out);
    if (!succeeded(rv)) {
        SomeErrorHandler(rv);
        break;
    }
}
Furthermore, the implementations of all your functions will have to do the same: surround each function call with an if.
In the example above, you also have to decide whether the rv value on each iteration constitutes an error for the current function and (eventually) return it directly from the while, or break out of the while on any error and return the value.
In short, except for possibly using RAII in your code and templates (are you allowed to use them?), you end up close to "C code, using the C++ compiler".
Your code transforms each function from a two-liner into an eight-liner, and so on. You can improve this with the use of extra functions and #defined macros, but macros have their own idiosyncrasies that you have to be really careful about.
In short, your coding standards are making your code unnecessarily longer, more error prone, harder to understand and more difficult to maintain.
This is a good case to present to whoever is in charge of the coding standards in your company :(
Edit: You can also implement this with signals, but they are a bad replacement for exceptions: they do the same thing as exceptions, only they also disable RAII completely and make your code even less elegant and more error-prone.
Just because using exceptions is forbidden in your current coding standard, this does not mean you should dismiss them out of hand for future problems you encounter such as this. It may be the case that your current coding standard did not envisage such a scenario arising. If it had, it would probably give you help as to what the alternative implementation should be.
This sounds to me like a good time to challenge your current coding standards. If the people that wrote them are still there then speak to them directly as they will either be able to answer your question as to alternative strategies or they will accept that this is a valid use-case for exceptions.
However, using exceptions is forbidden in our coding standard (also in Google's C++ standard, BTW). Is there an alternative design I can consider?
Coding standards like that are nuts.
I suggest that you ask the person or people who developed and mandated that standard how to solve your problem. If they have no good answer, use this as justification for ignoring that part of the coding standard ... with your boss's permission, of course.
If you are running under Windows, you could use SEH exceptions. They also have the advantage of a pre-stack-unwind handler which can stop the unwind (EXCEPTION_CONTINUE_EXECUTION).
Off the top of my head, you might be able to achieve something similar with signals.
Set up a signal handler to catch appropriate signals and have it clean things up. For example, if your code generates a SIGSEGV as a result of something that would otherwise have thrown an exception a little earlier, you can try catching it with the signal handler.
There may be more to it than this as I have not thought it through.
Hope this helps.
Do you call any libraries that could raise exceptions? If that's the case, you will need a try/catch anyway. For your internal errors, each method will need to return an error code (use the return value only for error codes; use reference parameters to return the actual values). If you want to make memory cleanup 100% reliable, you could start your application from a monitor application: if your application crashes, the monitor starts it again. You still need to close files and DB connections, though.
Another approach is, instead of throwing an exception, to set a global error indicator and return legal but arbitrary data. Then check in every loop iteration whether the global error indicator is set, and if it is, return.
If you're careful enough, you can make sure that returning legal data will never cause a crash or undefined behaviour. Then it doesn't matter that the software keeps running a bit until it reaches the nearest error-checking condition.
For example
#define WHILE_R(cond, return_value) while (cond) { \
    if (exception_thrown) return (return_value);
#define ENDWHILE() }

bool exception_thrown = false; // the global error indicator

bool isPolyLegal(Poly p) {
    PolyIter it(p);
    WHILE_R(it.next(), true) // return value is arbitrary
        ...
        if (not_enough_memory) exception_thrown = true;
        ...
    ENDWHILE()
}