Is (bool | bool) safe? - C++

I'm writing some C++ code, and I'd like to call two functions (checkXDirty and checkYDirty), and return true if either returns true. I need to evaluate both even if one returns true, so my first thought was to use
return checkXDirty() | checkYDirty();
This looks a little weird (dirty, perhaps). Does this always produce the correct result in C++? What about C, with the _Bool type? (This code might end up being adapted for either language, and I don't want unpleasant surprises when I port the code).

I need to evaluate both even if one returns true, so my first thought was to use...
Then stop trying to be tricky and to make your code fit into as few lines as possible. Just call both functions and make it obvious that they need to be called:
const bool x_dirty = is_x_dirty();
const bool y_dirty = is_y_dirty();
return x_dirty || y_dirty;
Next, rename or break apart your functions; something named is_xxx_dirty really should not be producing side effects. Your code is harder to maintain as a result.

As long as the values are not indeterminate, it's technically OK to use the bitwise operators. However, since that's fraught with problems as a general coding habit, I would instead just write a little inline OR-function and let the compiler optimize. The compiler is good at optimizing, so let it.
return eitherOrBothTrue( checkXDirty(), checkYDirty() );
Or perhaps, if you're bold and dare to take on the challenge of explaining the code to those who will maintain it,
return !bothFalse( checkXDirty(), checkYDirty() );
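For completeness, a minimal sketch of what such helpers might look like (the names come from this answer; the one-line bodies are my assumption). Since both arguments are evaluated at the call site before the helper runs, neither checkXDirty() nor checkYDirty() gets skipped:
inline bool eitherOrBothTrue(bool a, bool b) { return a || b; }
inline bool bothFalse(bool a, bool b) { return !a && !b; }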
Or, now that I've read @EdS's answer, it's perhaps equally good to just store the values in variables, but then do add const, like this:
bool const xIsDirty = checkXDirty();
bool const yIsDirty = checkYDirty();
return xIsDirty || yIsDirty;

Related

Evaluate expressions until one returns true

I have a savePotentiometerState(...) function, which returns true if there were changes to save, and false if nothing was done. Further, I know that on any single pass through my main loop, at most one of the potentiometers may have changed (due to the way they're read out).
This is on a very time-constrained embedded platform, so it's important (or at least matters) that I don't call savePotentiometerState more often than I have to. However, the code I come up with naturally seems silly, something likely to end up at thedailywtf:
if (!savePotentiometerState(pot1))
    if (!savePotentiometerState(pot2))
        ...
            if (!savePotentiometerState(potn));
Another way to do this would be to use short-circuit evaluation:
const bool retval = savePotentiometerState(pot1) || savePotentiometerState(pot2) || ... || savePotentiometerState(potn);
I suppose I could even drop the assignment here. But this doesn't feel like good style either, since I'm abusing the short circuiting of the || operator.
The various potn objects are member variables of the containing class, so there's no obvious way to write this as a loop.
I feel like I'm missing something obvious here, so my question is: is there an idiomatic/easy to read way to do this, which doesn't sacrifice efficiency? If it matters, I'm using C++17.
A loop seems the way to go:
for (auto& pot : {std::ref(pot1), std::ref(pot2), /*...,*/ std::ref(potn)}) {
    if (savePotentiometerState(pot)) {
        break;
    }
}
Since you can use C++17, you can leverage fold expressions and write a helper function to do the evaluation for you.
template<typename... Args>
bool returnPotentiometerState(Args&&... args)
{
    return (... || savePotentiometerState(args));
}
and then you would call it like this:
if (returnPotentiometerState(pot1, pot2, ..., potn))
This means you don't have a loop, and you get short circuiting.
Personally, I'd avoid the algorithm you're using.
I'd save the state for every pot all the time, track the current and previous values, and then only call a given callback if the value had changed.
This way, savePotState is always as fast as it needs to be for a given pot, and you'll never get into the state where pot1 to pot(n-1) can block potn from being read.
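A rough sketch of that idea, with all names hypothetical (readPot and saveValue are placeholders for the real hardware read and the real, expensive save):
#include <array>

constexpr int kNumPots = 4;

int readPot(int index) { return index; }                             // placeholder for the real ADC read
void saveValue(int index, int value) { (void)index; (void)value; }   // placeholder for the real save

std::array<int, kNumPots> previousValue{};

void pollPots() {
    for (int i = 0; i < kNumPots; ++i) {
        int current = readPot(i);
        if (current != previousValue[i]) {
            saveValue(i, current);        // only a pot whose value actually changed triggers a save
            previousValue[i] = current;
        }
    }
}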

Is there a way to name the object only once when I have to use it again and again?

Take for example:
int main(void) {
    numberComparator comparator1;
    comparator1.setA(78.321);
    comparator1.showA();
    comparator1.setB('c');
    comparator1.setB("Yes");
    comparator1.setB(124.213);
    comparator1.showB();
    comparator1.setB(12);
    return 0;
}
Instead of saying comparator1 over and over again, can I do something shorter?
I understand that this doesn't really change much about how the program works, but it does make it easier to play around when testing a class I make.
I am doing overloading so that for an assortment of inputs into my comparator, my program can handle them without making the results go crazy. In this case, I want the input to be an int, but what if the input isn't?
The answer could be lying around the internet somewhere, but as my title may suggest, I do not know how to phrase the question.
You are looking for something like the with keyword, which is part of, for example, the Pascal language.
Unfortunately, C++ doesn't provide a similar feature. Using references, one can shorten the name of the object and somewhat alleviate the pain, i.e.
Comparator comparator1;
...
{
    Comparator& cr = comparator1;
    cr.a();
    cr.b();
    cr.c();
}
It depends. If numberComparator has a "fluent" interface, then each member function will return a reference to *this, and you can write:
comparator1
    .setA(78.321)
    .showA()
    .setB('c')
    .setB("Yes")
    .setB(124.213)
    .showB()
    .setB(12);
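For reference, here is a sketch of what a fluent numberComparator could look like. Only the return-*this pattern matters; the member names and types are assumptions, and the overload set from the question is trimmed down:
#include <iostream>

class numberComparator {
public:
    numberComparator& setA(double a) { a_ = a; return *this; }                     // return *this to allow chaining
    numberComparator& showA() { std::cout << "A = " << a_ << '\n'; return *this; }
    numberComparator& setB(double b) { b_ = b; return *this; }
    numberComparator& showB() { std::cout << "B = " << b_ << '\n'; return *this; }

private:
    double a_ = 0.0;
    double b_ = 0.0;
};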
Note that this is a bitch to debug by step-into (you have to step into every function until you get to the one you are interested in).
The alternative of course is "use a shorter name".
int main(void) {
    numberComparator c1;
    c1.setA(78.321);
    c1.showA();
    c1.setB('c');
    c1.setB("Yes");
    c1.setB(124.213);
    c1.showB();
    c1.setB(12);
    return 0;
}
There is really no point in having a particularly long name if it is limited in scope to a few lines. For a local variable, if it isn't limited in scope to a few lines, your function is probably too long.

Reference parameters in C++

This is more of an ethical question:
If I have the following code:
void changeInt(int& value)
{
    value = 7;
}
and I do:
int number = 3;
changeInt(number);
then number will have the value 7.
I know that when the new stack frame is created for the changeInt function, new variables will be created and value will refer to number.
My concern here is that the caller, if not paying attention, can be fooled into thinking it is passing by value when, in fact, a reference will be created in the function's frame.
I know the caller can look in the header files, and it's a perfectly legitimate expression, but I still find it a bit unethical :)
I think this should somehow be marked and enforced by syntax, like in C#, where you have the ref keyword.
What do you guys think?
This is one of those things where references are less clear than pointers. However, using pointers may lead to something like this:
changeInt(NULL);
when they actually should have done:
changeInt(&number);
which is just as bad. If the function is as clearly named as this, it's hardly a mystery that it actually changes the value passed in.
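For contrast, here is a sketch of a pointer-taking variant (not from the question): the caller has to write &number, which makes the possible modification visible at the call site:
void changeInt(int* value) {
    if (value != nullptr) {   // the price of pointers: a null check
        *value = 7;
    }
}

// int number = 3;
// changeInt(&number);       // the & makes the intent to modify explicit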
Another solution is of course to do:
int calculateNewInt(/* may need some input */)
{
return 7;
}
now
int number = 3;
...
number = calculateNewInt();
is quite obviously (potentially) changing number.
But if the name of the function sounds like it changes the input value, then it's definitely fair to change the value. If in doubt, read the documentation. If you write code that has local variables that you don't want altered, make them const.
const int number = 3;
changeInt(number); /* Makes an error */
(Of course, that means the number is not changeable elsewhere either).
I know the caller can look in the header files, and it's a perfectly legitimate expression, but I still find it a bit unethical :)
I think that's perfectly normal and part of the language. Actually, this is one of the bad things about C and C++: you have to check the headers all the time when dealing with an unfamiliar API, since when calling a function you don't pass by reference explicitly.
That's not the case for all system languages though. IIRC Rust makes it obligatory to pass references explicitly.

Why use int functions over void?

I was looking over some example functions and methods (I'm currently in a C++ class), and I noticed that there were a few functions that, rather than being void, looked something like this:
int myFunction() {
    // ...;
    return 0;
}
where the ellipsis stands for some other statements. Why are they returning zero? What's the point of returning a specific value every time you run a function?
I understand that main() has to be int (at least according to the standards) because it is related (or is?) the exit code and thus works with the operating system. However, I can't think of a reason a non-main function would do this.
Is there any particular reason why someone might want to do this, as opposed to simply making a void function?
If that's really what they're doing, returning 0 regardless of what the function does, then it's entirely pointless and they shouldn't be doing it.
In the C world, an int return type is a convention so that you can return your own "error code", but not only is this not idiomatic C++ but if, again, your programmer is always returning 0, then it's entirely silly.
Specifically:
I understand that main() has to be int (at least according to the standards) because it is related (or is?) the exit code and thus works with the operating system. However, I can't think of a reason a non-main function would do this.
I agree.
There's a common convention of int functions returning 0 for success and some non-zero error code for failure.
An int function that always returns 0 might as well be a void function if viewed in isolation. But depending on the context, there might be good reasons to make it compatible with other functions that return meaningful results. It could mean that the function's return type won't have to be changed if it's later modified to detect errors, or it might be necessary for its declaration to be compatible with other int-returning functions if it's used as a callback or template argument.
I suggest examining other similar functions in the library or program.
It's a convention, particularly among C programmers, to return 0 if the function did not experience any errors and return a nonzero value if there was an error.
This has carried over into C++, and although it's less common and less of a convention due to exception handling and other more object-oriented-friendly ways of handling errors, it does come up often enough.
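As an illustration of that convention, here is a small hypothetical example (not from the question's code): 0 means success, and each nonzero value identifies a particular failure:
// Copies a C string into a fixed-size buffer.
// Returns 0 on success, 1 for bad arguments, 2 if the destination is too small.
int copyString(const char* src, char* dst, int dstSize) {
    if (src == nullptr || dst == nullptr || dstSize <= 0) return 1;
    int i = 0;
    for (; src[i] != '\0'; ++i) {
        if (i >= dstSize - 1) return 2;
        dst[i] = src[i];
    }
    dst[i] = '\0';
    return 0;
}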
One more issue that was not touched on by the other answers: within the ellipsis there may be another return statement:
int myFunction() {
    // ...;
    if (error)
        return code;
    // ...;
    return 0;
}
in which case myFunction is not always returning 0, but rather only when no error has occurred. Such return statements are often preferred over more structured but more verbose if/else code blocks, and may often be disguised within long, sloppy code.
Most of the time, a function like this should be returning void.
Another possibility is that this function is one of a series of closely related functions that share the same signature. The int return value may signal a status, say returning 0 for success, and a few of these functions always succeed. Changing the signature could break that consistency, or would make the function unusable as a function object (for example, as a callback) since the signature would no longer match.
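A sketch of that situation, with hypothetical names: every step in a table of handlers returns an int status, so even a step that can never fail keeps the int signature purely for compatibility.
typedef int (*StepFn)();

int loadConfig()  { /* ... may legitimately fail ... */ return 0; }
int printBanner() { /* cannot fail */ return 0; }    // always 0, kept only to match the signature

int runSteps() {
    StepFn steps[] = { loadConfig, printBanner };
    for (StepFn step : steps) {
        int status = step();
        if (status != 0) return status;              // propagate the first failure
    }
    return 0;
}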
Is there any particular reason why someone might want to do this, as opposed to simply making a void function?
Why does your mother cut the ends off the roast before putting it in the oven? Answer: Because that's what her grandmother did. However, her grandmother did that for a simple reason: Her roast pan wasn't big enough to hold a full-sized roast.
I work with a simulation tool that in its earliest incarnations required that all functions callable by the simulation engine return a success status: 0=success, non-zero=failure. Functions that could never fail were coded to always return zero. The simulation engine has been able to accommodate functions that return void for a long, long time. That returning an integer success code was required behavior in some previous millennium hasn't stopped cargo cult programmers from carrying the habit of writing functions that always return zero forward to the current day.
In certain programming languages you find procedures and functions. In C, C++ and similar languages you don't. Rather you only have functions.
In practice, a procedure is a part of a program that performs a certain task. A function on the other hand is like a procedure but the function can return an answer back.
Since C++ has only functions, how would you create a procedure? That's when you would either create a void function or return any value you like to show that the task is complete. It doesn't have to be 0. You can even return a character if you like to.
Take, for example, the cout statement. It just outputs something but does not return anything. This works like a procedure.
Now consider a math function like tan(x). It is meant to use x and return an answer back to the program that called it. In this case, you cannot return just anything. You must return the value of the TAN operation.
So if you need to write your own functions, you must return a value based on what you're doing. If there's nothing to return, you may just write a void function or return a dummy value like 0 or anything else.
In practice though, it's common to find functions returning 0 to indicate that 'all went off well' but this is not necessarily a rule.
Here's an example of a function I would write, which returns a value:
float Area(int radius)
{
    float Answer = 3.14159 * radius * radius;
    return Answer;
}
This takes the radius as a parameter and returns the calculated answer (area). In this case you cannot just say return 0.
I hope this is clear.

Which school of reporting function failures is better

Very often you have a function which, for given arguments, can't generate a valid result or can't perform its task. Apart from exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.
The first approach mixes valid returns with a value which does not belong to the codomain of the function (very often -1) and indicates an error:
int foo(int arg) {
    if (everything fine)
        return some_value;
    return -1; // on failure
}
The second approach is to return a status and pass the result back through a reference:
bool foo(int arg, int& result) {
    if (everything fine) {
        result = some_value;
        return true;
    }
    return false; // on failure
}
Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?
Don't ignore exceptions, for exceptional and unexpected errors.
However, just answering your points, the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, whilst quietly nudging them to remember to check error conditions. In my opinion, this is nearly always "return a status code, and put the value in a separate reference", but this is entirely one man's personal view. My arguments for doing this:
If you choose to return a mixed value, then you've overloaded the concept of return to mean "Either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.
You often cannot easily find values in the function's codomain to co-opt as error codes, and so need to mix and match the two styles of error reporting within a single API.
There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code and stick some null-like value in the return reference that will explode easily when used. If one uses the mixed error/value return model, it's very easy to pass the result into another function for which the error part of the codomain is valid input (but meaningless in the context).
Arguments for the mixed error code/value model might be simplicity: no extra variables floating around, for one. But to me, the dangers are worse than the limited gains; one can easily forget to check the error codes. This is one argument for exceptions: you literally can't forget to handle them (your program will flame out if you don't).
boost::optional is a brilliant technique. An example will assist.
Say you have a function that returns a double, and you want to signify an error when the result cannot be calculated.
double divide(double a, double b) {
    return a / b;
}
What do we do in the case where b is 0?
boost::optional<double> divide(double a, double b) {
    if (b != 0) {
        return a / b;
    } else {
        return boost::none;
    }
}
Use it like below:
boost::optional<double> v = divide(a, b);
if (v) {
    // Note the dereference operator
    cout << *v << endl;
} else {
    cout << "divide by zero" << endl;
}
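As a side note, since C++17 the same pattern is available in the standard library as std::optional; a minimal sketch of the divide example using it:
#include <iostream>
#include <optional>

std::optional<double> divide(double a, double b) {
    if (b != 0) {
        return a / b;
    }
    return std::nullopt;
}

int main() {
    if (auto v = divide(1.0, 4.0)) {
        std::cout << *v << '\n';
    } else {
        std::cout << "divide by zero" << '\n';
    }
}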
The idea of special return values completely falls apart when you start using templates. Consider:
template <typename T>
T f(const T& t) {
    if (SomeFunc(t)) {
        return t;
    }
    else { // error path
        return ???; // what can we return?
    }
}
There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean types which must be checked and passing the really interesting values back by reference leads to a horrendous coding style.
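For concreteness, a sketch of the exception-throwing variant of the same template (SomeFunc is assumed to exist, as in the snippet above):
#include <stdexcept>

template <typename T>
bool SomeFunc(const T&);   // assumed to exist, as in the snippet above

template <typename T>
T f(const T& t) {
    if (SomeFunc(t)) {
        return t;
    }
    throw std::invalid_argument("f: input rejected");   // no special value of T is needed
}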
Quite a few books, etc., strongly advise the second, so you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.
While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second if you need/want to assign the result to more than one recipient, you have to do the call, then separately do a second assignment. I.e.,
account1.rate = account2.rate = current_rate();
vs.:
set_current_rate(account1.rate);
account2.rate = account1.rate;
or:
set_current_rate(account1.rate);
set_current_rate(account2.rate);
The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is due largely to this decision alone that essentially all code that uses the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.
Exception handling is usually a better way to handle things than either one though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection. Code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success, and make no attempt at even catching a problem -- especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.
In C, one of the more common techniques I have seen is for a function to return zero on success and non-zero (typically an error code) on error. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data more straightforward to use (vs. returning some data through the return value and some through a pointer).
Another C technique I see is to return 0 on success and -1 on error, with errno set to indicate the error.
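A hypothetical example of the first technique (written so it compiles as C or C++): zero for success, a nonzero code for failure, and the result handed back through a pointer:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success and a nonzero error code on failure;
   the parsed port is written through out_port. */
int parse_port(const char* text, int* out_port) {
    char* end = NULL;
    long value;
    errno = 0;
    value = strtol(text, &end, 10);
    if (errno != 0 || end == text || value < 1 || value > 65535) {
        return 1;
    }
    *out_port = (int)value;
    return 0;
}

int main(void) {
    int port = 0;
    if (parse_port("8080", &port) == 0) {
        printf("port = %d\n", port);
    } else {
        printf("bad port string\n");
    }
    return 0;
}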
The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservations: the technique that is best is the technique that is consistent throughout your entire program. Using different styles of error reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.
There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.
If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.
C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.
Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.
And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with; you don't raise an exception for a string not matching in strcmp().
It's not always possible, but regardless of which error reporting method you use, the best practice is to, whenever possible, design a function so that it does not have failure cases, and when that's not possible, minimize the possible error conditions. Some examples:
Instead of passing a filename deep down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor. This eliminates having to check for "failed to open file" and report it to the caller at each step.
If there's an inexpensive way to check (or find an upper bound) for the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory. In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.
When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.
These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.
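For the first idea above, a small sketch with hypothetical names: the caller opens the file once, so the lower-level routine has no "failed to open file" case left to report:
#include <cstdio>

// Counts newline characters; it cannot fail to open anything because it never opens anything.
long countLines(std::FILE* in) {
    long lines = 0;
    int c;
    while ((c = std::fgetc(in)) != EOF) {
        if (c == '\n') ++lines;
    }
    return lines;
}

int main() {
    std::FILE* in = std::fopen("input.txt", "r");
    if (in == nullptr) {
        std::perror("input.txt");    // the only place that has to handle the open failure
        return 1;
    }
    std::printf("%ld lines\n", countLines(in));
    std::fclose(in);
    return 0;
}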
For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answers/return codes. I've expounded upon this while answering another question. Take a look.
For C, I find the fugly out parameters less offensive than fugly "magic numbers".
You missed a method: Returning a failure indication and requiring an additional call to get the details of the error.
There's a lot to be said for this.
Example:
int count;
if (!TryParse("12x3", &count))
    DisplayError(GetLastError());
edit
This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:
HKEY key;
long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
if (errcode != ERROR_SUCCESS)
    return DisplayError(errcode);
Contrast this with:
HKEY key;
if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
    return DisplayError(GetLastError());
(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)
In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:
HKEY key;
if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
    return DisplayGenericError();
edit
Looking at R.'s request, I've found a scenario where it can actually be satisfied.
For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to rest in. Instead, we have no good alternative to using a global thread-local variable (TLV) that can be checked after failure.
However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.
I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.
In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.
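A sketch of that instance-as-context idea (a hypothetical class, not a real API): the return value says whether the call succeeded, and the instance holds the details of why it failed:
class RegistryKey {
public:
    bool Open(const char* path) {
        // ... attempt to open; on failure, record the code and report false.
        (void)path;
        lastError_ = 2;        // illustrative "not found" code
        return false;
    }
    int ErrorCode() const { return lastError_; }   // details of the most recent failure

private:
    int lastError_ = 0;
};

// Usage:
//   RegistryKey reg;
//   if (!reg.Open("HKCU\\Software\\Example"))
//       DisplayError(reg.ErrorCode());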
I think there is no right answer to this. It depends on your needs, on the overall application design etc. I personally use the first approach though.
I think a good compiler would generate almost the same code, with the same speed. It's a personal preference; I would go with the first.
If you have references and the bool type, you must be using C++. In which case, throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or tight embedded environment. Assuming neither of those, always, always throw an exception.
Well, the first one will compile in both C and C++, so it's fine for portable code.
The second one, although more "human readable", means you never truly know which value the program is returning; specifying it as in the first case gives you more control. That's what I think.
I prefer using a return code to indicate the type of error that occurred. This helps the caller of the API take appropriate error-handling steps.
Consider the GLib APIs, which most often return an error code and an error message along with the boolean return value.
Thus, when you get a failure return from a function call, you can check the context in the GError variable.
A failure in the second approach you specified will not help the caller take correct actions. It's a different case when your documentation is very clear, but otherwise it will be a headache to figure out how to use the API call.
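A sketch in the spirit of that GLib-style convention (hypothetical function and error type, not real GLib calls): the boolean says whether the call worked, and a separate error object carries the code and message for the caller to inspect:
#include <cstdio>

struct Error {
    int code;
    const char* message;
};

// Returns true on success; on failure returns false and, if error is non-null,
// points it at a description of what went wrong.
bool loadConfig(const char* path, const Error** error) {
    static const Error notFound = { 2, "config file not found" };   // illustrative only
    (void)path;
    if (error != nullptr) {
        *error = &notFound;
    }
    return false;
}

int main() {
    const Error* err = nullptr;
    if (!loadConfig("app.conf", &err)) {
        std::fprintf(stderr, "error %d: %s\n", err->code, err->message);
    }
    return 0;
}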
For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?
Apart from doing it the correct way, which of these two stupid ways do you prefer?
I prefer to use exceptions when I'm using C++ and need to throw an error, and in general when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, that condition means there is no way the caller can proceed, and every conceivable caller will be able to handle it, which is rare. I prefer to use stupid out parameters when modifying old code where, for some reason, I can change the number of parameters but not change the return type, identify a special value, or throw an exception, which so far has been never.
Does the additional parameter in the second method bring notable performance overhead?
Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.