I have a question regarding my C++ class design. I often face little issues like this, and I'd like some advice as to what is more widely accepted.
I have a class which monitors the temperature of some device over UDP. If the device receives a packet of data, it must print "x\n" to stdout to show that it received something. Then it must check the data in that packet and verify that the temperature of the device is not too high. If it is too high, I must call one function; if it isn't, I must call another.
I'm not sure if I should do this:
enum temperature {TEMPERATURE_FINE, TEMPERATURE_EXCEEDED};
int main(int argc, char* argv[])
{
    std::vector<std::string> args(argv + 1, argv + argc);
    if(!args.size())
        std::cout << "No parameters entered.\n";
    else
    {
        CTemperatureMonitor tempMonitor(args);
        if(tempMonitor.MonitorTemperature() == TEMPERATURE_EXCEEDED)
            tempMonitor.ActivateAlarm();
        else
            tempMonitor.DisableAlarm();
    }
    return 0;
}
where tempMonitor.MonitorTemperature() itself calls std::cout << "x\n", i.e. the output is built into the class.
Or:
enum temperature {TEMPERATURE_FINE, TEMPERATURE_EXCEEDED};
int main(int argc, char* argv[])
{
    std::vector<std::string> args(argv + 1, argv + argc);
    if(!args.size())
        std::cout << "No parameters entered.\n";
    else
    {
        CTemperatureMonitor tempMonitor(args);
        temperature tempExceeded = tempMonitor.MonitorTemperature();
        std::cout << "x\n";
        if(tempExceeded == TEMPERATURE_EXCEEDED)
            tempMonitor.ActivateAlarm();
        else
            tempMonitor.DisableAlarm();
    }
    return 0;
}
where std::cout << "x\n" is not included in the class.
The std::cout << "x\n" must occur before calling CTemperatureMonitor::ActivateAlarm() and CTemperatureMonitor::DisableAlarm().
I know this might seem really minor and simplistic, but I'm often wondering what exactly should be part of the class. Should the class be outputting to stdout? Does it make any difference whether I do one or the other? Am I being too pedantic about this?
Also, as an aside, I know global variables are considered poor practice. I use the temperature enum in both main and the class. Should I declare it twice, once in main and once in the CTemperatureMonitor class, or once globally? Although this question seems rather specific, answering it would actually clear up a whole lot of other things for me.
Thanks.
First off, I'd like to point out that projects come in various sizes (and levels of criticality), and the right advice actually differs depending on that. So first, a rule of thumb:

The size of the "framework" you put in place (logger, option parser, etc.) should probably not exceed 10% of the total program. Beyond that point, it's just overkill. Unless the framework is the goal of the exercise!

That being said, we can have a look at your actual questions.
Also, as an aside, I know global variables are considered poor practice. I use the temperature enum in both main and the class. Should I declare it twice, once in main and once in the CTemperatureMonitor class, or once globally?
You are actually mistaking variables and types here. temperature is a type (of enum kind).
In general, types are used as bridges between various parts of the program, and for that it is important that all those parts share the same definition of the type. Therefore, for types, it would be bad practice to declare them twice.

Furthermore, not all globals are evil. Global variables are evil (shared mutable state), but global constants are fine, and usually play a role similar to types.
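For instance, a minimal sketch of the shared-header approach (the file name is mine):

// temperature.h -- one shared definition, visible both to main() and to the class
#ifndef TEMPERATURE_H
#define TEMPERATURE_H

enum temperature { TEMPERATURE_FINE, TEMPERATURE_EXCEEDED };

#endif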
I know this might seem really minor and simplistic, but I'm often wondering what exactly should be part of the class. Should the class be outputting to stdout? Does it make any difference whether I do one or the other? Am I being too pedantic about this?
There are two kinds of output:

- logging output, which is used to diagnose issues when they are encountered
- real output, which is what the program does

Depending on the program, you might have either, both, or none.

From a pedantic point of view, you would generally prefer not to mix the two. For example, you could send the logging to a file, or to stderr when it's serious, and use stdout for the "useful" stuff.
This actually drives the design somewhat, as then you need two sinks: one for each output.
Since yours is quite a simple program, the easiest way might be to simply pass two different std::ostream& to your class upon construction. Or, even simpler, just have two generic functions and use the (evil) global variables.
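For illustration, a minimal sketch of the two-sink idea, reusing the names from the question (the members and their wiring are my assumptions, and the temperature enum from the question is assumed to be in scope):

#include <ostream>
#include <string>
#include <vector>

class CTemperatureMonitor
{
public:
    CTemperatureMonitor(const std::vector<std::string>& args,
                        std::ostream& log, std::ostream& out)
        : m_log(log), m_out(out) { (void)args; }

    temperature MonitorTemperature()
    {
        m_log << "x\n";   // diagnostic trace goes to the logging sink
        // ... receive the packet and check the temperature ...
        return TEMPERATURE_FINE;
    }

private:
    std::ostream& m_log;   // e.g. std::clog or a file stream
    std::ostream& m_out;   // e.g. std::cout, for the "useful" output
};

main() can then pass std::clog/std::cout, a file stream, or a null sink, without the class knowing or caring.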
In a larger program, you would probably design a Logger class with various log levels, providing macros that automatically register the function name, file name and line number of each log line. You would probably also have a lightweight logging mechanism that allows you to compile out DEBUG/DEV-level traces in Release builds.
The single level of abstraction principle would favor doing all I/O in the same method instead of doing some at a high level and some at a low level of abstraction.
In other words, if you believe in that principle, keeping both input and output via cin/cout in the same method, instead of showing some and hiding some, is a good idea. It tends to give more readable code with fewer dependencies in each class.
According to the single responsibility principle, the second option is preferred (any class should have exactly one responsibility, which in your case is monitoring the temperature, not outputting the results), though you might want to set up another class to handle the monitoring results (e.g. to write them to a certain log file or something).
It's normal to have some way of logging information about your program.
It's also normal for this way to be, well, global.
Otherwise things just get too complicated when calling a method.
The only thing you should do to improve this is:
Have a logger class (or use an existing one) whose output stream can be set to whatever you choose (including std::cout, or a null stream that doesn't print anything).
Ultimately, have a logger that can be hidden behind a #define so that it does not slow down code running in Release mode.
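A minimal sketch of such a compile-time switch, assuming nothing beyond the standard library (the LOG name is mine):

#include <iostream>

#ifdef NDEBUG
    #define LOG(msg) ((void)0)   // compiled out entirely in Release builds
#else
    #define LOG(msg) (std::clog << __FILE__ << ':' << __LINE__ << ": " << (msg) << '\n')
#endif

A call like LOG("packet received"); then costs nothing in Release, since the whole expression disappears at preprocessing time.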
In my opinion, in such cases you should not build cout into the class, as later you may need to output to a file, or not output at all. If cout is not baked into the class, you will be able to reuse the class code without any changes.
Neither. The responsibility of main is to start the application with the command-line arguments and to provide a return value; everything else should not be there. You might want to take a look at a book like "Object Design: Roles, Responsibilities, and Collaborations".
I'm implementing a helper class which has a number of useful functions which will be used in a large number of classes. However, a few of them are not designed to be called from within certain sections of code (from interrupt functions, this is an embedded project).
However, for users of this class the reasons why some functions are allowed while others are prohibited from being called from interrupt functions might not be immediately obvious, and in many cases the prohibited functions might work but can cause very subtle and hard to find bugs later on.
The best solution for me would be to cause a compiler error if the offending function is called from a code section it shouldn't be called from.
I've also considered a few non-technical solutions, but a technical one would be preferred. Here is what I've considered so far:
1. Indicate it in the documentation with a warning. Might be easily missed, especially when the function seems obvious, like read_byte(); why would anyone study the documentation to check whether the function is reentrant or not?

2. Indicate it in the function's name. Ugly. Who likes function names like read_byte_DO_NOT_CALL_FROM_INTERRUPT()?

3. Have a global variable in a common header, included in each and every file, which is set to true at the beginning of each interrupt and to false at the end; the offending functions check it on entry and bail out if it's set. Problem: interrupts might interrupt each other. Also, it doesn't cause compile-time warnings or errors.

4. Similar to #3, have a global handler with a stack, so that nested interrupts can be handled. This still only works at runtime, and it also adds a lot of overhead; interrupts should not waste more than a clock cycle or two on this feature, if at all.

5. Abuse the preprocessor. Unfortunately, the naive way (a #define at the beginning and an #undef at the end of each interrupt, with an #ifdef at the beginning of the offending function) doesn't work, because the preprocessor doesn't care about scope.

6. As interrupts are always classless functions, I could make the offending functions protected and declare them as friends in all classes which use them. This way it would be impossible to use them directly from within interrupts. As main() is classless, I'd have to place most of it into a class method. I don't like this too much, as it can become needlessly complicated, and the error it generates is not obvious (users of these functions might wrap them to "solve" the problem without realizing what the real problem was). A compiler or linker error message like "ERROR: function_name() is not to be used from within an interrupt" would be much preferable.

7. Checking the interrupt registers within the function has several issues. In a large microcontroller there are a lot of registers to check. Also, there is a small but dangerous chance of a false positive when an interrupt flag is set exactly one clock cycle earlier, so the function would fail because it thinks it was called from an interrupt while the interrupt actually fires in the next cycle. Also, in nested interrupts the interrupt flags are cleared, causing a false negative. And finally, this is yet another runtime-only solution.
I did play with some very basic template metaprogramming a while ago, but I'm not experienced enough with it to find a simple and elegant solution; I would rather try other ways before committing myself to implementing template-metaprogramming bloatware.
A solution working with only features available in C would also be acceptable, even preferable.
Some comments below. As a warning, they won't be fun reading, but I wouldn't be doing you a service by not pointing out what's wrong here.

If you are calling external functions from inside an ISR, no amount of documentation or coding will help you, since in most cases it is bad practice to do so. The programmer must know what they are doing; otherwise no documentation or coding mechanism will save the program.

Programmers do not design library functions specifically for the purpose of being called from inside an ISR. Rather, programmers design ISRs with all the special restrictions that come with an ISR in mind: make sure interrupt flags are cleared correctly, keep the code short, do not call external functions, do not block the MCU longer than necessary, consider re-entrancy, consider dangerous compiler optimizations (use volatile). A person who does not know this is not competent enough to write ISRs.
If you actually have a function int read_byte(int address) then this suggests that the program design is bad to begin with. This function could do one of two things:
Either it reads a byte from some peripheral hardware, in which case the function name is very bad and should be changed.

Or it reads any generic byte from an address, in which case the function is 100% useless "bloatware". You can safely assume that a somewhat competent C programmer can read a byte from a memory address without some bloatware holding their hand.

In either case, int is not a byte: it is a word of 16 or 32 bits. The function should be returning uint8_t. Similarly, if the parameter passed is used to describe a memory-mapped address of an MCU, it should have type void*, uint8_t* or uintptr_t. Everything else is wrong.
Notably, if you are using int rather than stdint.h for embedded systems programming, then this whole discussion is the least of your problems, as you haven't even gotten the fundamental basics right. Your programs will be filled to the brim with undefined behavior and implicit promotion bugs.
Overall, all the solutions you suggest are simply not acceptable. The root of the problem here appears to be the program design. Deal with that instead of inventing ways to defend the broken design with horrible meta programming.
I would suggest options 8 and 9: peer reviews and assertions.

You state in the comments that your interrupt functions are short. If that's really the case, then reviewing them will be trivial. Adding comments in the header will make it so that anyone can see what's going on. An assert, while it means debug builds behave differently on the error path, will also ensure that you catch any offending calls, and it gives you a fighting chance to catch the problem during testing.
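A sketch of the assertion idea; in_isr_depth is a hypothetical counter that every ISR would increment on entry and decrement on exit (the names and the read_byte signature are assumptions, not from the question):

#include <assert.h>
#include <stdint.h>

extern volatile int in_isr_depth;   /* hypothetical: ++ on ISR entry, -- on exit */

uint8_t read_byte(const volatile uint8_t *address)
{
    /* compiled out in NDEBUG builds, so no cost in Release */
    assert(in_isr_depth == 0 && "read_byte() must not be called from an interrupt");
    return *address;
}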
Ultimately, the macro approach just won't work, since the best you can do is detect that a header has been included; if the call stack goes via another wrapper (one without the comments), you just can't catch that.

Alternatively, you could make your helper a template, but then every wrapper around your helper would also have to be a template so that it can know whether you're in an interrupt routine... which would ultimately be your entire code base.
If you have one file for all interrupt routines, then this might be helpful: define a macro in the class header, say FORBID_INTERRUPT_ROUTINE_ACCESS, and in the interrupt handler file check for that macro definition:
#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error : cannot access function from interrupt handlers.
#endif
If someone then includes that class's header in order to use the class in an interrupt handler, the build will fail with that error. Note that #error fails the build outright, so the message cannot be missed.
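To make the mechanics concrete, here is a sketch of how the two files would relate (the file names are made up):

/* helper.h -- the class header plants the flag */
#define FORBID_INTERRUPT_ROUTINE_ACCESS
/* ... class declaration ... */

/* interrupt_handlers.c -- fails to compile if helper.h was included above */
#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error : cannot access function from interrupt handlers.
#endif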
Here is the C++ template functions suggestion.
I don't think this is metaprogramming or bloatware.
First, make two classes which will define the context in which the user will be using the functions:
class In_Interrupt_Handler;
class In_Non_Interrupt_Handler;
If you have some common implementations between the two contexts, a base class can be added:
class Handy_Base
{
protected:
    static int Handy_protected() { return 0; }

public:
    static int Handy_public() { return 0; }
};
The primary template is declared without any implementation; the implementations will be provided by the specializations:
template< class Is_Interrupt_Handler >
class Handy_functions;
And the specializations.
// Functions that can be used when inside an interrupt handler
template<>
struct Handy_functions< In_Interrupt_Handler >
    : Handy_Base
{
    static int Handy1() { return 1; }
    static int Handy2() { return 2; }
};

// Functions that can be used when inside any function
template<>
struct Handy_functions< In_Non_Interrupt_Handler >
    : Handy_Base
{
    static int Handy1() { return 4; }
    static int Handy2() { return 8; }
};
In this way, if the user of the API wants to access the functions, the only way is by specifying what type of functions are needed.
Example of usage:

#include <iostream>

int main()
{
    using IH_funcs = Handy_functions<In_Interrupt_Handler>;
    std::cout << IH_funcs::Handy1() << '\n';
    std::cout << IH_funcs::Handy2() << '\n';

    using Non_IH_funcs = Handy_functions<In_Non_Interrupt_Handler>;
    std::cout << Non_IH_funcs::Handy1() << '\n';
    std::cout << Non_IH_funcs::Handy2() << '\n';
}
In the end, I think the problem boils down to the developer using your framework, and how much boilerplate your framework requires of them.

The above does not stop the developer from calling the non-interrupt-handler functions from inside an interrupt handler.

I think that type of analysis would require some kind of static analysis checking tool.
Suppose there are 30 numbers I have to input into an executable. Because of the large amount of input, it is not reasonable to pass them via the command line. One standard way is to save them in a single XML file and use an XML parser like tinyxml2 to parse them. The problem is that if I use tinyxml2 to parse the input directly, I will have a very bloated main function, which seems to contradict common good practice.
For example:
int main(int argc, char **argv){
    int a[30];
    tinyxml2::XMLDocument doc_xml;
    if (doc_xml.LoadFile(argv[1])){
        std::cerr << "failed to load input file";
    }
    else {
        tinyxml2::XMLHandle xml(&doc_xml);
        tinyxml2::XMLHandle a0_xml =
            xml.FirstChildElement("INPUT").FirstChildElement("A0");
        if (a0_xml.ToElement()) {
            a0_xml.ToElement()->QueryIntText(&a[0]);
        }
        else {
            std::cerr << "A0 missing";
        }
        tinyxml2::XMLHandle a1_xml =
            xml.FirstChildElement("INPUT").FirstChildElement("A1");
        if (a1_xml.ToElement()) {
            a1_xml.ToElement()->QueryIntText(&a[1]);
        }
        else {
            std::cerr << "A1 missing";
        }
        // parsing all the way to A29 ...
    }
    // do something with a
    return 0;
}
But on the other hand, if I write an extra class just to parse this specific type of input in order to shorten the main function, that doesn't seem right either, because the extra class would be useless except in conjunction with this particular main function; it can't be reused elsewhere.
int main(int argc, char **argv){
int a[30];
ParseXMLJustForThisExeClass ParseXMLJustForThisExeClass_obj;
ParseXMLJustForThisExeClass_obj.Run(argv[1], a);
// do something with a
return 0;
}
What is the best way to deal with it?
Note: besides reading XML files, you can also pass lots of data through stdin. It's pretty common practice to use e.g. mycomplexcmd | hexdump -C, where hexdump reads from stdin through the pipe.
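For example, a minimal sketch of reading whitespace-separated integers from stdin:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a;
    for (int x; std::cin >> x; )   // reads until EOF or a non-number
        a.push_back(x);
    // do something with a
}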
Now on to the rest of the question: there's a reason to go with your multiple-functions example (here it's not very important whether they're constructors or ordinary functions). It's pretty much the same reason you'd want any function to be smaller: readability. That said, I don't know about the "common good practice", and I've seen many terminal utilities with a very big main().

Imagine someone new is reading the first variant of main(). They'd be jumping through hoops, figuring out all these handles, queries, children and parents, when all they wanted was to look at the part after // do something with a. That's because they don't know whether the parsing is relevant to their problem or not. In the second variant they'll quickly figure it out: "Aha, this is the parsing logic; it's not what I'm looking for."

That said, you could of course break up the logic with detailed comments instead. But now imagine something went wrong, someone is debugging the code, and they have pinned the problem down to this function (alright, it's funny given the function is main(); maybe they just started debugging). The bug turns out to be very subtle and unclear, so they are checking everything in the function. Because you're dealing with a mutable language, you'd often find yourself thinking "hmm, maybe it's something to do with this variable; where is it changed?" First you look up every use of the variable in this large function, then the conditions that could lead to the blocks where it's changed; then you work out what another big block relevant to those conditions does, one that could have been extracted into a separate function, and which variables it uses; and by the time you've figured out what it's doing, you've already forgotten half of what you were looking at before!
Of course sometimes big functions are unavoidable. But if you asked the question, it's probably not your case.
Rule of thumb: when you see a function doing two different things that have little in common, you want to break it into two separate functions. In your case those are parsing the XML and "doing something with a". Though if that second part is only a few lines, it's probably not worth extracting; use your judgement. Don't worry about the overhead: compilers are good at optimizing. You can either use LTO, or declare a function used only in one .cpp file as static (non-class static), and depending on optimization options the compiler may inline it.
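As an illustration, here is a sketch of that extraction, with the A0..A29 repetition collapsed into a loop. ParseInputs is my name for it, and the tinyxml2 calls follow the question's own usage:

#include <iostream>
#include <string>
#include "tinyxml2.h"

// File-local helper: fills a[0..29] from <INPUT><A0>..<A29> elements.
static bool ParseInputs(const char* path, int (&a)[30])
{
    tinyxml2::XMLDocument doc;
    if (doc.LoadFile(path) != tinyxml2::XML_SUCCESS) {
        std::cerr << "failed to load input file\n";
        return false;
    }
    tinyxml2::XMLHandle xml(&doc);
    for (int i = 0; i < 30; ++i) {
        const std::string name = "A" + std::to_string(i);
        tinyxml2::XMLElement* elem =
            xml.FirstChildElement("INPUT").FirstChildElement(name.c_str()).ToElement();
        if (!elem) {
            std::cerr << name << " missing\n";
            return false;
        }
        elem->QueryIntText(&a[i]);
    }
    return true;
}

Being static, it is clearly scoped to this one translation unit, which also answers the "it can't be reused elsewhere" worry: it isn't meant to be.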
P.S.: you seem to be at the point where it would be very useful to learn and play with some Haskell. You don't have to use it for real, serious projects, but the insights you'd gain can be applied anywhere. It forces you into better design; in particular, you'd quickly start feeling when it's necessary to break up a function (among many other things).
Let's say I have a program made of several "basic" algorithms on integral variables, such as:

if (a < b)
    a += c;
Is there a tool that would allow me to automatically log all the changes made to the different variables at run time?
For instance, in that case it would write to a log file:
"condition passed because 5=a < b=10
a += 10; because c=10"
or some equivalent.
I am aware that I could manually log each operation but that would be much too complex.
Is there any tool that would allow me to do something like that? I don't care about refactoring / recompiling as long as it's not totally manual.
You can write your own integer class that overloads the operators accordingly (with automatic logging). If the class also provides implicit conversions (a constructor from int and a conversion operator to int), then you "only" need to change the types of variables and parameters to get automatic logging of values. Instead of names, though, you could only log addresses (or something derived from them, like var20). With the help of a #define you could easily switch between raw ints (without logging) and your integer class (with logging).
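A minimal sketch of such a wrapper, logging by address as described (LoggedInt and the log format are my inventions):

#include <iostream>

class LoggedInt {
public:
    LoggedInt(int v = 0) : value(v) {}        // implicit conversion from int
    operator int() const { return value; }     // implicit conversion to int

    LoggedInt& operator+=(int rhs) {
        std::clog << "var@" << this << ": " << value << " += " << rhs << '\n';
        value += rhs;
        return *this;
    }
    // ... the other operators follow the same pattern ...

private:
    int value;
};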
To also get the names of the variables into the log, one would either have to rewrite the operations with macros, like

if (LESS(a, b))
    INC(a, c)

or have a parser that automatically transforms your code into something like this. But I am not aware of any existing tool that provides either.
I have a hard time imagining that logging the complete execution of a program like this would be useful. Even a simple std::cout << "hello, world!\n"; would produce a mass of useless logs.
What do you actually need to do? If you want to debug code you should probably use a debugger to examine the program as it runs instead of using a printf-debugging-gone-horribly-wrong strategy. If you want a way to describe the complete execution for later examination/manipulation you could make sure the program behaves deterministically and just save the program input.
The right solution depends on the actual problem, but it's not likely that complete execution logging is the correct solution to anything.
Very often you have a function which, for given arguments, can't generate a valid result or can't perform some task. Apart from exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.

The first approach mixes valid returns with a value which does not belong to the codomain of the function (very often -1) and indicates an error:
int foo(int arg) {
    if (everything fine)
        return some_value;
    return -1; // on failure
}
The second approach is to return the function status and pass the result back through a reference parameter:
bool foo(int arg, int & result) {
    if (everything fine) {
        result = some_value;
        return true;
    }
    return false; // on failure
}
Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?
Don't ignore exceptions, for exceptional and unexpected errors.
However, just answering your points: the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, while quietly nudging them to remember to check error conditions. In my opinion, this is nearly always "return a status code, and put the value in a separate reference", but this is entirely one man's personal view. My arguments for doing this:

- If you choose to return a mixed value, then you've overloaded the concept of the return value to mean "either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.

- You often cannot easily find values in the function's codomain to co-opt as error codes, and so you need to mix and match the two styles of error reporting within a single API.

- There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code and stick some null-like value in the return reference that will explode easily when used. If one uses the mixed error/value return model, it's very easy to pass the result into another function in which the error part of the codomain is valid input (but meaningless in the context).

Arguments for the mixed error-code/value model might be simplicity: no extra variables floating around, for one. But to me, the dangers are worse than the limited gains: one can easily forget to check the error codes. This is one argument for exceptions: you literally can't forget to handle them (your program will flame out if you don't).
boost::optional is a brilliant technique. An example will assist. Say you have a function that returns a double, and you want to signal an error when the result cannot be calculated:
double divide(double a, double b){
    return a / b;
}
What do we do in the case where b is 0?
boost::optional<double> divide(double a, double b){
    if (b != 0){
        return a / b;
    } else {
        return boost::none;
    }
}
Use it like below:
boost::optional<double> v = divide(a, b);
if (v) {
    // Note the dereference operator
    std::cout << *v << std::endl;
} else {
    std::cout << "divide by zero" << std::endl;
}
The idea of special return values completely falls apart when you start using templates. Consider:
template <typename T>
T f( const T & t ) {
    if ( SomeFunc( t ) ) {
        return t;
    }
    else { // error path
        return ???; // what can we return?
    }
}
There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean values which must be checked, and passing the really interesting values back by reference, leads to a horrendous coding style.
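For completeness, a sketch of the same function with an exception, keeping the hypothetical SomeFunc from the snippet above:

#include <stdexcept>

template <typename T>
T f( const T & t ) {
    if ( SomeFunc( t ) ) {
        return t;
    }
    // No special value exists for an arbitrary T, so signal failure instead:
    throw std::invalid_argument( "f: SomeFunc rejected the input" );
}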
Quite a few books, etc., strongly advise the second, so that you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.

While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second, if you need/want to assign the result to more than one recipient, you have to make the call and then separately do a second assignment. I.e.:
account1.rate = account2.rate = current_rate();
vs.:
set_current_rate(account1.rate);
account2.rate = account1.rate;
or:
set_current_rate(account1.rate);
set_current_rate(account2.rate);
The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is largely due to this decision alone that essentially all code using the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.
Exception handling is usually a better way to handle things than either one though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection. Code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success, and make no attempt at even catching a problem -- especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.
In C, one of the more common techniques I have seen is that a function returns zero on success and non-zero (typically an error code) on failure. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data more straightforward to use (vs. returning some data through the return value and some through a pointer).
Another C technique I see is to return 0 on success and on error, -1 is returned and errno is set to indicate the error.
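A sketch combining the status-return and out-pointer conventions (parse_port is a hypothetical example, not a standard function):

#include <errno.h>
#include <stdlib.h>

/* Returns 0 on success, an error code on failure; data comes back via *out. */
int parse_port(const char *text, int *out)
{
    char *end;
    long value = strtol(text, &end, 10);
    if (end == text || *end != '\0' || value <= 0 || value > 65535)
        return EINVAL;
    *out = (int)value;
    return 0;
}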
The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservations: the technique that is best is the technique that is consistent throughout your entire program. Using different styles of error reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.
There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.
If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.
C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.
Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.
And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with; you don't raise an exception for a string not matching in strcmp().
It's not always possible, but regardless of which error-reporting method you use, the best practice is to design a function, whenever possible, so that it has no failure cases, and when that's not possible, to minimize the possible error conditions. Some examples:

Instead of passing a filename deep down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor (see the sketch after this list). This eliminates checking for "failed to open file" and reporting it to the caller at each step.
If there's an inexpensive way to check (or find an upper bound) for the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory. In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.
When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.
These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.
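As a sketch of the first example: the caller owns opening the file, so the function below has no "failed to open" case left to report (count_lines is a made-up example):

#include <stdio.h>

/* Counts newline characters; the caller has already opened the stream. */
static int count_lines(FILE *in)
{
    int c, count = 0;
    while ((c = fgetc(in)) != EOF)
        if (c == '\n')
            ++count;
    return count;
}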
For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answer/return codes. I've expounded upon this while answering another question. Take a look.
For C, I find the fugly out parameters less offensive than fugly "magic numbers".
You missed a method: returning a failure indication and requiring an additional call to get the details of the error.
There's a lot to be said for this.
Example:
int count;
if (!TryParse("12x3", &count))
    DisplayError(GetLastError());
Edit:
This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:
HKEY key;
long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
if (errcode != ERROR_SUCCESS)
    return DisplayError(errcode);
Contrast this with:
HKEY key;
if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
    return DisplayError(GetLastError());
(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)
In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:
HKEY key;
if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
    return DisplayGenericError();
Edit:
Looking at R.'s request, I've found a scenario where it can actually be satisfied.
For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to live in. Instead, we have no good alternative to using a global TLV (thread-local value) that can be checked after failure.
However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.
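A sketch of that instance-as-context idea, using a file instead of the registry so it stays self-contained (all names here are hypothetical):

#include <cerrno>
#include <cstdio>

class FileKey {
public:
    bool Open(const char* path) {
        handle = std::fopen(path, "rb");
        error = handle ? 0 : errno;   // the failure details live with the instance
        return handle != nullptr;
    }
    int ErrorCode() const { return error; }
    ~FileKey() { if (handle) std::fclose(handle); }

private:
    std::FILE* handle = nullptr;
    int error = 0;
};

Usage then mirrors the pattern under discussion: if (!key.Open("data.bin")) DisplayError(key.ErrorCode());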
I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.
In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.
I think there is no right answer to this. It depends on your needs, on the overall application design etc. I personally use the first approach though.
I think a good compiler would generate almost the same code, with the same speed. It's a personal preference. I would go with the first.
If you have references and the bool type, you must be using C++. In that case, throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or tight embedded environments. Assuming neither of those, always, always throw an exception.
Well, the first one will compile in both C and C++, so for portable code it's fine.

The second one, although more "human readable", means you never know truthfully which value the program is returning; specifying it as in the first case gives you more control. That's what I think.
I prefer using a return code to indicate the type of error that occurred; this helps the caller of the API take appropriate error-handling steps.

Consider the GLib APIs, which most often return an error code and an error message along with the boolean return value.

Thus when you get a negative return from a function call, you can check the context from the GError variable.

A failure in the second approach you specified will not help the caller take the correct actions. It's a different matter when your documentation is very clear, but in other cases it will be a headache to work out how to use the API call.
For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?
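A sketch of that suggestion (parse_int_or and its parameters are my inventions):

#include <cstdlib>

// Returns the parsed value; on failure, either calls the supplied handler
// with the offending input or falls back to the given default.
int parse_int_or(const char* text, int fallback,
                 int (*on_error)(const char* bad_input) = nullptr)
{
    char* end = nullptr;
    long v = std::strtol(text, &end, 10);
    if (end == text || *end != '\0')
        return on_error ? on_error(text) : fallback;
    return static_cast<int>(v);
}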
Apart from doing it the correct way, which of these two stupid ways do you prefer?
I prefer to use exceptions when I'm using C++ and need to throw an error, and in general when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, that condition means the caller cannot proceed, and every conceivable caller will be able to handle it... which is rare. I prefer to use stupid out parameters when modifying old code, when for some reason I can change the number of parameters but not the return type, cannot identify a special value, and cannot throw an exception; which so far has been never.
Does the additional parameter in the second method bring notable performance overhead?
Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.
This may be a short & simple question, but I've never found a satisfying answer to it:
What code does the main() function usually consist of in a large C++ project? Would it be incorrect to assume that it usually just initializes a (wrapping) class object and calls a function inside it to set things off?
Why is main() not a method in the first place? Is it to preserve backwards-compatibility with C?
In my code, it's basically a constructor call, possibly a method call, and some exception handling. This is the main from one of my projects (headers and comments omitted):
int main( int argc, char * argv[] ) {
    int result = 0;
    try {
        CLIHandler ch( argc, argv );
        result = ch.ExecCommand();
    }
    catch( const Exception & ex ) {
        result = ExceptionHandler::HandleMyError( ex );
    }
    catch( const std::exception & ex ) {
        result = ExceptionHandler::HandleOtherError( ex );
    }
    catch( ... ) {
        result = ExceptionHandler::HandleUnknownError();
    }
    return result;
}
Mine usually do:

- command-line parsing
- initialization of top-level objects
- exception handling
- entering the main 'exec' loop
As I understand it, int main(int argc, char *argv[]) is essentially a convention from the C heritage. It never struck me as odd, but rather as useful. C++ extends C, after all... (and yes, there are fine differences, but that wasn't the question here).
Yes, the reason is backward compatibility. main is the only entry point allowed in a C program producing an executable, and therefore in a C++ program.
As for what to do in a C++ main, it depends. In general, I tend to:

- perform global initialization (e.g. of the logging subsystem)
- parse the command-line arguments and define a proper class containing them
- allocate an application object, setting it up, etc.
- run the application object (in my case, an infinite-loop method; GUI programming)
- perform finalization after the object has completed its task

Oh, and I forgot the most important part of an application: show the splash screen.
The short answer: it depends. It may well create a few local objects that are needed for the duration of the program, configure them, tell them about each other and call a long running method on one of them.
A program needs an entry point. If main had to be a method on an object, what class type should it be?
With main as a global entry point it can choose what to set up.
My main() function often constructs various top-level objects, giving them references to one another. This helps minimize coupling, keeping the exact relationships between the different top-level objects confined to the main.
Often those top-level objects have distinct life cycles, with init(), start(), and stop() methods. The main() function manages getting the objects into the desired running state, waits for whatever indicates it is time to shut down, and then shuts everything down in a controlled fashion. Again, this helps keep things properly decoupled, and keeps top-level life-cycle management in one easily understood place. I see this pattern a lot in reactive systems, especially those with many threads.
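A sketch of that pattern with stub classes (all names are hypothetical):

#include <iostream>

struct Service {
    const char* name;
    void init()  { std::cout << name << " init\n";  }
    void start() { std::cout << name << " start\n"; }
    void stop()  { std::cout << name << " stop\n";  }
};

int main()
{
    Service net{"net"}, db{"db"}, server{"server"};

    // main() alone knows the relationships and the required ordering:
    net.init();  db.init();  server.init();
    net.start(); db.start(); server.start();

    // ... wait for whatever indicates it is time to shut down ...

    server.stop(); db.stop(); net.stop();   // tear down in reverse order
}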
You can use a static class member function in place of main with the MSVC++ compiler by choosing the entry point in the project settings, under the advanced linker options.
It really depends on your project as to what you want to place in there... if it is small, you may as well put message loops, initialization and shutdown code in there. In larger projects you will have to move these into their own classes/functions, or else have a monolithic entry-point function.
Not all C++ applications are OOP and either way all code requires some entry point to start from.
When I'm writing OOP code, my main() tends to include an object instantiation, maybe preceded by some user input. I do it this way because I feel that the 'work' is meant to be done within an object; otherwise the code isn't written in the 'spirit' of OOP.
I usually use main for reading in the command line, initializing global variables, and then calling the appropriate functions/methods.
Really large projects tend not to comprise only a single program, so there will be several executables, each with its own main. In passing, it's quite common for these executables to communicate asynchronously via queues.

Yes, each main does tend to be very small, initialising a framework or whatever.

Do you mean: why is main() a function rather than a method of a class? Well, what class would it be a method of? I think it's mostly C++'s heritage from C, but... everything has got to start somewhere :-)