Expressions with no side effects in C++

See, what I don't get is, why should programs like the following be legal?
int main()
{
static const int i = 0;
i < i > i;
}
I mean, surely, nobody actually has any current programs that have expressions with no side effects in them, since that would be very pointless, and it would make parsing & compiling the language much easier. So why not just disallow them? What benefit does the language actually gain from allowing this kind of syntax?
Another example being like this:
int main() {
static const int i = 0;
int x = (i);
}
What is the actual benefit of such statements?
And things like the most vexing parse. Does anybody, ever, declare functions in the middle of other functions? I mean, we got rid of things like implicit function declaration, and things like that. Why not just get rid of them for C++0x?

Probably because banning them would make the specification more complex, which would make compilers more complex.

it would make parsing & compiling the language much easier
I don't see how. Why is it easier to parse and compile i < i > i if you're required to issue a diagnostic, than it is to parse it if you're allowed to do anything you damn well please provided that the emitted code has no side-effects?
The Java compiler forbids unreachable code (as opposed to code with no effect), which is a mixed blessing for the programmer, and it requires a little more work from the compiler than what a C++ compiler is actually required to do (basic block dependency analysis). Should C++ forbid unreachable code? Probably not. Even though C++ compilers certainly do enough optimization to identify unreachable basic blocks, in some cases they may do too much. Should if (foo) { ... } be an illegal unreachable block if foo is a false compile-time constant? What if it's not a compile-time constant, but the optimizer has figured out how to calculate the value anyway? Should the block then be legal, with the compiler obliged to realise that its reason for removing the code is implementation-specific and therefore not grounds for an error? More special cases.
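To make that dilemma concrete, here is a minimal sketch (my own illustration, not taken from the question) of a statically dead block that C++ happily accepts:
constexpr bool foo = false;

int main() {
    if (foo) {        // foo is a false compile-time constant, so this block is provably unreachable
        return 1;     // unreachable, yet perfectly legal C++
    }
    return 0;
}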
nobody actually has any current programs that have expressions with no side effects in them
Loads. For example, if NDEBUG is defined, then assert expands to a void expression with no effect. So that's yet more special cases needed in the compiler to permit some useless expressions, but not others.
The rationale, I believe, is that if it expanded to nothing then (a) compilers would end up throwing warnings for things like if (foo) assert(bar);, and (b) code like this would be legal in release but not in debug, which is just confusing:
assert(foo) // oops, forgot the semi-colon
foo.bar();
things like the most vexing parse
That's why it's called "vexing". It's a backward-compatibility issue really. If C++ now changed the meaning of those vexing parses, the meaning of existing code would change. Not much existing code, as you point out, but the C++ committee takes a fairly strong line on backward compatibility. If you want a language that changes every five minutes, use Perl ;-)
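For readers who haven't run into it, a minimal sketch of the parse in question (the type names here are invented for illustration):
struct Timer {};

struct Widget {
    Widget(Timer) {}
    int value() const { return 42; }
};

int main() {
    Widget w(Timer());    // most vexing parse: this declares a function named w, not a Widget
    // w.value();         // error: w is a function, so this would not compile
    Widget w2{Timer{}};   // C++11 brace-initialization removes the ambiguity
    return w2.value() == 42 ? 0 : 1;
}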
Anyway, it's too late now. Even if we had some great insight that the C++0x committee had missed, why some feature should be removed or incompatibly changed, they aren't going to break anything in the FCD unless the FCD is definitively in error.
Note that for all of your suggestions, any compiler could issue a warning for them (actually, I don't understand what your problem is with the second example, but certainly for useless expressions and for vexing parses in function bodies). If you're right that nobody does it deliberately, the warnings would cause no harm. If you're wrong that nobody does it deliberately, your stated case for removing them is incorrect. Warnings in popular compilers could pave the way for removing a feature, especially since the standard is authored largely by compiler-writers. The fact that we don't always get warnings for these things suggests to me that there's more to it than you think.

It's convenient sometimes to put useless statements into a program and compile it just to make sure they're legal - e.g. that the types involved can be resolved/matched etc.
Especially in generated code (macros as well as more elaborate external mechanisms, templates where Policies or types may introduce meaningless expansions in some no-op cases), having fewer special uncompilable cases to avoid keeps things simpler.
There may be some temporarily commented-out code that removes the meaningful usage of a variable, and it would be a pain to also have to identify and comment out every variable that is no longer used elsewhere.
While in your examples you show the variables being "int" immediately above the pointless usage, in practice the types may be much more complicated (e.g. operator<()) and whether the operations have side effects may even be unknown to the compiler (e.g. out-of-line functions), so any benefit's limited to simpler cases.
C++ needs a good reason to break backwards (and retained C) compatibility.

Why should doing nothing be treated as a special case? Furthermore, whilst the above cases are easy to spot, one could imagine far more complicated programs where it's not so easy to identify that there are no side effects.

As an iteration of the C++ standard, C++0x has to be backward compatible. Nobody can assert that the statements you wrote do not exist in some piece of critical software written/owned by, say, NASA or the DoD.
Anyway, regarding your very first example: the parser alone cannot determine that i is a static constant and that i < i > i is therefore a useless expression. If i were instead a templated type, i < i > i would be an (invalid) variable declaration rather than a useless computation, and still not a parse error.
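A small illustration of that ambiguity (the template name is made up): the same name < name > name token shape is either an expression or a declaration depending on what the first name refers to, so the parser alone cannot brand it useless.
template <class T> struct box {};   // a class template

int main() {
    int i = 0;
    i < i > i;        // i is a variable: two chained comparisons, an expression with no effect
    box<int> b;       // box is a template: the same token shape is a declaration
    (void)b;
    return 0;
}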

Maybe the operator was overloaded to have side effects, like cout << i; this is why such statements cannot be removed from the language now. On the other hand, C# forbids using expressions other than assignments and method calls as statements, and I believe this is a good thing, as it makes the code clearer and more semantically correct. However, C# had the opportunity to forbid this from the very beginning, which C++ did not.

Expressions with no side effects can turn up more often than you think in templated and macro code. If you've ever declared std::vector<int>, you've instantiated template code with no side effects. std::vector must destruct all its elements when releasing itself, in case you stored a class for type T. This requires, at some point, a statement similar to ptr->~T(); to invoke the destructor. int has no destructor though, so the call has no side effects and will be removed entirely by the optimizer. It's also likely it will be inside a loop, then the entire loop has no side effects, so the entire loop is removed by the optimizer.
So if you disallowed expressions with no side effects, std::vector<int> wouldn't work, for one.
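A hedged sketch of what that looks like inside a container's cleanup code (this is an illustrative pseudo-implementation, not the actual standard library source):
template <typename T>
void destroy_range(T* first, T* last) {
    for (; first != last; ++first)
        first->~T();   // for T = int this is a pseudo-destructor call: an expression with no side effects
}
With T = int, both the call and the loop around it collapse to nothing, which is exactly the kind of no-effect expression described above.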
Another common case is assert(a == b). In release builds you want these asserts to disappear - but you can't re-define them as an empty macro, otherwise statements like if (x) assert(a == b); suddenly pull the next statement into the if statement - a disaster! In this case assert(x) can be redefined as ((void)0), which is an expression with no side effects. Now the if statement works correctly in release builds too - it just does nothing.
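A hedged sketch of that definition pattern (using a made-up macro name rather than redefining the real assert):
#include <cstdlib>

#ifdef NDEBUG
  #define MY_ASSERT(expr) ((void)0)                        // release: an expression with no effect
#else
  #define MY_ASSERT(expr) ((expr) ? (void)0 : std::abort())
#endif

void check(bool x, bool a, bool b) {
    if (x) MY_ASSERT(a == b);   // stays well-formed and means the same thing in both builds
}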
These are just two common cases. There are many more you probably don't know about. So, while expressions with no side effects seem redundant, they're actually functionally important. An optimizer will remove them entirely so there's no performance impact, too.

Related

Why can I pass a reference to an uninitialized element in c++?

Why does the following code compile?
class Demo
{
public:
Demo() : a(this->a){}
int& a;
};
int main()
{
Demo d;
}
In this case, a is a reference to an integer. However, when I initialize Demo, I pass a reference to a reference of an integer which has not yet been initialized. Why does this compile?
This still compiles even if instead of int, I use a reference to a class which has a private default constructor. Why is this allowed?
Why does this compile?
Because it is syntactically valid.
C++ is not a safe programming language. There are several features that make it easy to do the right thing, but preventing someone from doing the wrong thing is not a priority. If you are determined to do something foolish, nothing will stop you. As long as you follow the syntax, you can try to do whatever you want, no matter how ludicrous the semantics. Keep that in mind: compiling is about syntax, not semantics.*
That being said, the people who write compilers are not without pity. They know the common mistakes (probably from personal experience), and they recognize that your compiler is in a good position to spot certain kinds of semantic mistakes. Hence, most compilers will emit warnings when you do certain things (not all things) that do not make sense. That is why you should always enable compiler warnings.
Warnings do not catch all logical errors, but for the ones they do catch (such as warning: 'Demo::a' is initialized with itself and warning: '*this.Demo::a' is used uninitialized), you've saved yourself a ton of debugging time.
* OK, there are some semantics involved in compiling, such as giving a meaning to identifiers. When I say compiling is not about semantics, I am referring to a higher level of semantics, such as the intended behavior.
Why does this compile?
Because there is no rule that would make the program ill-formed.
Why is this allowed?
To be clear, the program is well-formed, so it compiles. But the behaviour of the program is undefined, so from that perspective, the premise of your question is flawed. This isn't allowed.
It isn't possible to prove all cases where an indeterminate value is used, and it isn't easy to specify which of the easy cases should be detected by the compiler, and which would be considered to be too difficult. As such, the standard doesn't attempt to specify it, and leaves it up to the compiler to warn when it is able to detect it. For what it's worth, GCC is able to detect it in this case for example.
C++ allows you to pass a reference to uninitialized data because you might want to use the called function as the initializer.
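A minimal sketch of the legitimate pattern that rule exists to support (the function names are made up):
void read_value(int& out) { out = 42; }   // the callee acts as the initializer

int main() {
    int x;              // deliberately left uninitialized
    read_value(x);      // binding a reference to uninitialized storage is fine here
    return x == 42 ? 0 : 1;   // x is written before it is ever read
}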

How to force a compile error in C++(17) if a function return value isn't checked? Ideally through the type system

We are writing safety-critical code and I'd like a stronger way than [[nodiscard]] to ensure that checking of function return values is caught by the compiler.
[Update]
Thanks for all the discussion in the comments. Let me clarify that this question may seem contrived, or not a "typical use case", or not how someone else would do it. Please take this as an academic exercise if that makes it easier to ignore "well, why don't you just do it this way?". The question is exactly whether it's possible to create a type (or types) such that a function call returning it fails to compile unless the result is assigned to an l-value.
I know about [[nodiscard]], warnings-as-errors, and exceptions, and this question asks if it's possible to achieve something similar, that is a compile time error, not something caught at run-time. I'm beginning to suspect it's not possible, and so any explanation why is very much appreciated.
Constraints:
MSVC++ 2019
Something that doesn't rely on warnings
Warnings-as-Errors also doesn't work
It's not feasible to constantly run static analysis
Macros are OK
Not a runtime check, but caught by the compiler
Not exception-based
I've been trying to think of how to create a type (or types) such that, if a function's return value of that type is not assigned to a variable, the compiler flags an error.
Example:
struct MustCheck
{
bool success;
...???...
};
MustCheck DoSomething( args )
{
...
return MustCheck{true};
}
int main(void) {
MustCheck res = DoSomething(blah);
if( !res.success ) { exit(-1); }
DoSomething( bloop ); // <------- compiler error
}
If such a thing is provably impossible through the type system, I'll also accept that answer ;)
(EDIT) Note 1: I have been thinking about your problem and reached the conclusion that the question is ill-posed. It is not clear what you are looking for because of a small detail: what counts as checking? How do the checks compose, and how far from the point of the call may they happen?
For example, does this count as checking? Note that composition with other boolean values (results) and/or other runtime variables matters.
bool b = true; // for example
auto res1 = DoSomething1(blah);
auto res2 = DoSomething2(blah);
if((res1 and res2) or b){...handle error...};
The composition with other runtime variables makes it impossible to make any guarantee at compile-time and for composition with other "results" you will have to exclude certain logical operators, like OR or XOR.
(EDIT) Note 2: I should have asked before, but 1) if the handling is supposed to always abort: why not abort from the DoSomething function directly? 2) if the handling does a specific action on failure, then pass it as a lambda to DoSomething (after all, you control what it returns and what it takes). 3) composition of failures, or propagation, is the only non-trivial case, and it is not well defined in your question.
Below is the original answer.
This doesn't fulfill all the (edited) requirements you have (I think they are excessive) but I think this is the only path forward really.
Below are my comments.
As you hinted, for doing this at runtime there are recipes online about "exploding" types (they assert/abort on destruction if they were not checked, tracked by an internal flag).
Note that this doesn't use exceptions (but it is runtime and it is not that bad if you test the code often, it is after all a logical error).
For compile-time, it is more tricky: returning (for example) a bool marked [[nodiscard]] is not enough, because there are ways of not discarding without actually checking, for example assigning to a (bool) variable.
I think the next layer is to activate -Wunused-variable -Wunused-expression -Wunused-parameter (and treat them as errors, -Werror=...).
Then it is much harder not to check the bool, because comparison is pretty much the only operation you can really do with a bool.
(You can assign to another bool but then you will have to use that variable).
I guess that's quite enough.
There are still Machiavellian ways to mark a variable as used.
For that you can invent a bool-like type (class) that 1) is [[nodiscard]] itself (classes can be marked nodiscard), 2) supports only ==(bool) and !=(bool) as operations (maybe it is not even copyable), and return that from your function. (As a bonus you don't need to mark your function as [[nodiscard]], because that is automatic.)
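A hedged sketch of such a bool-like wrapper (the class name CheckedBool and the functions around it are invented for illustration):
struct [[nodiscard]] CheckedBool {
    explicit CheckedBool(bool v) : value(v) {}
    bool operator==(bool rhs) const { return value == rhs; }
    bool operator!=(bool rhs) const { return value != rhs; }
private:
    bool value;
};

CheckedBool DoSomething() { return CheckedBool{true}; }   // [[nodiscard]] applies automatically via the class

int main() {
    DoSomething();                          // warning (error with warnings-as-errors): value discarded
    if (DoSomething() == false) return 1;   // the intended usage pattern
    return 0;
}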
I guess it is impossible to avoid something like (void)b; but that in itself becomes a flag.
Even if you cannot avoid the absence of checking, you can force patterns that will immediately raise eyebrows at least.
You can even combine the runtime and compile time strategy.
(Make CheckedBool exploding.)
This will cover so many cases that you have to be happy at this point.
If compiler flags don’t protect you, you will still have a backup that can be detected in unit tests (regardless of taking the error path!).
(And don’t tell me now that you don’t unit test critical code.)
What you want is a special case of substructural types. Rust is famous for implementing a special case called "affine" types, where you can "use" something "at most once". Here, you instead want "relevant" types, where you have to use something at least once.
C++ has no official built-in support for such things. Maybe we can fake it? I thought not. In the "appendix" to this answer I include my original logic for why I thought so. Meanwhile, here's how to do it.
(Note: I have not tested any of this; I have not written any C++ in years; use at your own risk.)
First, we create a protected destructor in MustCheck. Thus, if we simply ignore the return value, we will get an error. But how do we avoid getting an error if we don't ignore the return value? Something like this.
(This looks scary: don't worry, we wrap most of it in a macro.)
int main(){
struct Temp123 : MustCheck {
void f() {
MustCheck* mc = this;
*mc = DoSomething();
}
} res;
res.f();
if(!res.success) std::cout << "oops";   // assuming <iostream> is included
}
Okay, that looks horrible, but after defining a suitable macro, we get:
int main(){
CAPTURE_RESULT(res, DoSomething());
if(!res.success) std::cout << "oops";   // assuming <iostream> is included
}
I leave the macro as an exercise to the reader, but it should be doable. You should probably use __LINE__ or something to generate the name Temp123, but it shouldn't be too hard.
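For what it's worth, one possible shape of that macro, equally untested and simply wrapping the local-struct trick shown above (macro names are made up):
#define CONCAT_IMPL(a, b) a##b
#define CONCAT(a, b) CONCAT_IMPL(a, b)

// Generates a unique local struct per use via __LINE__, assigns the call's result into it, and names it.
#define CAPTURE_RESULT(name, call)                             \
    struct CONCAT(CaptureResult_, __LINE__) : MustCheck {      \
        void f() { MustCheck* mc = this; *mc = (call); }       \
    } name;                                                    \
    name.f()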
Disclaimer
Note that this is all sorts of hacky and terrible, and you likely don't want to actually use this. Using [[nodiscard]] has the advantage of allowing you to use natural return types, instead of this MustCheck thing. That means that you can create a function, and then one year later add nodiscard, and you only have to fix the callers that did the wrong thing. If you migrate to MustCheck, you have to migrate all the callers, even those that did the right thing.
Another problem with this approach is that it is unreadable without macros, but IDEs can't follow macros very well. If you really care about avoiding bugs then it really helps if your IDE and other static analyzers understand your code as well as possible.
As mentioned in the comments you can use [[nodiscard]] as per:
https://learn.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-160
And modify to use this warning as compile error:
https://learn.microsoft.com/en-us/cpp/preprocessor/warning?view=msvc-160
That should cover your use case.
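Concretely, the combination might look like the sketch below. C4834 is the warning number MSVC documents for discarded [[nodiscard]] values; double-check it against your toolset before relying on it.
#pragma warning(error : 4834)   // promote "discarding return value of function with 'nodiscard'" to an error

[[nodiscard]] bool DoSomething() { return true; }

int main() {
    // DoSomething();             // would now fail to compile under MSVC
    bool ok = DoSomething();      // fine: the result is used
    return ok ? 0 : 1;
}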

Why can't constexpr just be the default?

constexpr permits expressions which can be evaluated at compile time to be ... evaluated at compile time.
Why is this keyword even necessary? Why not permit or require that compilers evaluate all expressions at compile time if possible?
The standard library has an uneven application of constexpr which causes a lot of inconvenience. Making constexpr the "default" would address that and likely improve a huge amount of existing code.
It already is permitted to evaluate side-effect-free computations at compile time, under the as-if rule.
What constexpr does is provide guarantees on what data-flow analysis a compliant compiler is required to do to detect[1] compile-time-computable expressions, and also allow the programmer to express that intent so that they get a diagnostic if they accidentally do something that cannot be precomputed.
Making constexpr the default would eliminate that very useful diagnostic ability.
[1] In general, requiring "evaluate all expressions at compile time if possible" is a non-starter, because detecting the "if possible" requires solving the Halting Problem, and computer scientists know that this is not possible in the general case. So instead a relaxation is used where the outputs are { "Computable at compile-time", "Not computable at compile-time or couldn't decide" }. And the ability of different compilers to decide would depend on how smart their test was, which would make this feature non-portable. constexpr defines the exact test to use. A smarter compiler can still pre-compute even more expressions than the Standard test dictates, but if they fail the test, they can't be marked constexpr.
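A small sketch of the guarantee versus mere as-if optimization (my own example, not from the answer):
constexpr int square_c(int x) { return x * x; }   // guaranteed usable in constant expressions
int square_r(int x) { return x * x; }             // same body, but no guarantee

int main() {
    constexpr int a = square_c(4);      // OK on every conforming compiler: the standard's test is satisfied
    // constexpr int b = square_r(4);   // error: not constexpr, even though it is trivially computable
    int arr[square_c(3)] = {};          // also usable where the language requires a constant
    (void)arr;
    return a == 16 ? 0 : 1;
}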
Note: despite the below, I admit to liking the idea of making constexpr the default. But you asked why it wasn't already done, so to answer that I will simply elaborate on mattnewport's last comment:
Consider the situation today. You're trying to use some function from the standard library in a context that requires a constant expression. It's not marked as constexpr, so you get a compiler error. This seems dumb, since "clearly" the ONLY thing that needs to change for this to work is to add the word constexpr to the definition.
Now consider life in the alternate universe where we adopt your proposal. Your code now compiles, yay! Next year you decide to add Windows support to whatever project you're working on. How hard can it be? You'll compile using Visual Studio for your Windows users and keep using gcc for everyone else, right?
But the first time you try to compile on Windows, you get a bunch of compiler errors: this function can't be used in a constant expression context. You look at the code of the function in question, and compare it to the version that ships with gcc. It turns out that they are slightly different, and that the version that ships with gcc meets the technical requirements for constexpr by sheer accident, and likewise the one that ships with Visual Studio does not meet those requirements, again by sheer accident. Now what?
No problem you say, I'll submit a bug report to Microsoft: this function should be fixed. They close your bug report: the standard never says this function must be usable in a constant expression, so we can implement however we want. So you submit a bug report to the gcc maintainers: why didn't you warn me I was using non-portable code? And they close it too: how were we supposed to know it's not portable? We can't keep track of how everyone else implements the standard library.
Now what? No one did anything really wrong. Not you, not the gcc folks, nor the Visual Studio folks. Yet you still end up with un-portable code and are not a happy camper at this point. All else being equal, a good language standard will try to make this situation as unlikely as possible.
And even though I used an example of different compilers, it could just as well happen when you try to upgrade to a newer version of the same compiler, or even try to compile with different settings. For example: the function contains an assert statement to ensure it's being called with valid arguments. If you compile with assertions disabled, the assertion "disappears" and the function meets the rules for constexpr; if you enable assertions, then it doesn't meet them. (This is less likely these days now that the rules for constexpr are very generous, but was a bigger issue under the C++11 rules. But in principle the point remains even today.)
Lastly we get to the admittedly minor issue of error messages. In today's world, if I try to do something like stick in a cout statement in constexpr function, I get a nice simple error right away. In your world, we would have the same situation that we have with templates, deep stack-traces all the way to the very bottom of the implementation of output streams. Not fatal, but surely annoying.
This is a year and a half late, but I still hope it helps.
As Ben Voigt points out, compilers are already allowed to evaluate anything at compile time under the as-if rule.
What constexpr also does is lay out clear rules for expressions that can be used in places where a compile time constant is required. That means I can write code like this and know it will be portable:
constexpr int square(int x) { return x * x; }
...
int a[square(4)] = {};
...
Without the keyword and clear rules in the standard I'm not sure how you could specify this portably and provide useful diagnostics on things the programmer intended to be constexpr but don't meet the requirements.

Should I really massively introduce the explicit keyword?

When I used the (recently released) Cppcheck 1.69 on my code[1], it showed a whole lot of messages where I expected none. Disabling noExplicitConstructor proved that all of them were of exactly this kind.
But I found that I'm not the only one with a lot of new Cppcheck messages; look at the results of the analysis of LibreOffice (which I'm allowed to show in public).
What would an experienced programmer do:
Suppress the check?
Massively introduce the explicit keyword?
[1] This is of course not my code but code I have to work with at work; it's legacy code: a mix of C and C++ in several (pre-)standard flavors (let's say C++98), and it's a pretty large code base.
I've been bitten in the past by performance hits introduced by implicit conversions as well as outright bugs. So I tend to always use explicit for all constructors that I do not want to participate in implicit conversions so that the compiler can help me catch my errors - and I then try to always also add a "// implicit intended" comment to the ctors where I explicitly intend for them to be used as converting ctors implicitly. I find that this helps me write more correct code with fewer surprises.
… So I'd say "yes, go add explicit" - in the long run you'll be glad you did - that's what I did when I first learned about it, and I'm glad I did.
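A small sketch of the difference that policy buys you (the types here are invented for illustration):
struct Buffer {
    explicit Buffer(int size) : size_(size) {}   // explicit: no silent int -> Buffer conversion
    int size_;
};

struct Name {
    Name(const char* s) : s_(s) {}               // implicit intended
    const char* s_;
};

void use(Buffer) {}
void greet(Name) {}

int main() {
    // use(42);         // error: explicit blocks the accidental int -> Buffer conversion
    use(Buffer{42});    // fine: the conversion is spelled out
    greet("Ada");       // fine: the converting constructor was left implicit on purpose
    return 0;
}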

Why use 'function address == NULL' instead of 'false'?

Browsing among some legacy code I've found such function:
static inline bool EmptyFunc()
{
return (void*) EmptyFunc == NULL;
}
What are the differences from this one:
static inline bool EmptyFunc()
{
return false;
}
This code was created to compile under several different platforms, like PS2, Wii, PC... Are there any reason to use the first function? Like better optimization or avoiding some strange compiler misbehavior?
Semantically both functions are the same: they always return false*. Folding the first expression to a constant value "false" is completely allowed by the standard since it would not change any observable side-effects (of which there are none). Since the compiler sees the entire function it also free to optimize away any calls to it and replace it with a constant "false" value.
That is, there is no "general" value in the first form, and it is likely a mistake on the part of the programmer. The only possibility is that it exploits some special behaviour (or defect) in a specific compiler/version. To what end, I don't know. If you wish to prevent inlining, using a compiler-specific attribute would be the correct approach -- anything else is prone to breaking should the compiler change.
(*This assumes that NULL is never defined to be EmptyFunc, which would result in true being returned.)
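For completeness, a hedged sketch of the attribute-based approach mentioned above (the attribute spellings are compiler-specific; check your toolchain):
#if defined(_MSC_VER)
  #define NO_INLINE __declspec(noinline)
#else
  #define NO_INLINE __attribute__((noinline))   // GCC / Clang spelling
#endif

NO_INLINE static bool EmptyFunc()
{
    return false;   // the intent is stated directly, with no pointer tricks
}

int main() { return EmptyFunc() ? 1 : 0; }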
Strictly speaking, a function pointer may not be cast to a void pointer; what happens then is outside the scope of the standard. The C11 standard lists it as a "common extension" in J.5.7 (I suspect that the same applies in C++). So the only difference between the two cases is that the former is non-portable.
It would seem that the most likely cause of the former version is either a confused programmer or a confused compiler. We can tell for certain that the programmer was confused/sloppy from the lack of an explanatory comment.
It doesn't really make much sense to declare a function as inline and then try to trick the compiler into not inlining the code by including the function address in the code. So I think we can rule out that theory, unless of course the programmer was confused and thought it made sense.