What will a "single variable as a statement" do? - c++

Below is a C++ constructor from a project I recently took over. Each of the last two statements is just a variable name, with no assignment. What does a statement like that do? I have been seeing this kind of statement a lot lately.
__fastcall TCardActionArea::TCardActionArea(TComponent* Owner)
    : TArea(Owner, "CardActionArea")
{
    // Get the thread id
    ThreadId = std::__threadid();
    this->Visible = false;
    m_pBackGroundPicture = NULL;
    m_pActionButtonMap.clear();
    m_ActionsButtonDisplayed.clear();
    m_changecnt = 0;
    m_isNextbtn = true;
    m_PictureParamPath1;
    m_PictureParamPath2;
}

Normally, these statements do not do anything, and it is definitely not a common practice to write them.
Maybe the author just wanted to explicitly note that they do not need to assign any values to these members (although a comment would do better).
Maybe this is some hack for a particular compiler to prevent some optimization (e.g. to prevent the member from being optimized away), but that would be a very slippery hack that might not work with the next compiler version.
Maybe the author intended to assign something to these variables and just forgot to do this, so this may be a bug.
Or maybe the author just had some kind of template, e.g. listing all the members to make sure they did not forget anything, and kept the parts of the template they did not need to change.

The only time I've seen statements like this used was to silence compiler warnings about unreferenced variables (usually function arguments). I haven't checked whether MSVC (which some features of this code lead me to believe was used, at least originally) issues such warnings about unused members, though that seems a stretch, as it would only work in some whole-program analysis mode.
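For comparison, here is roughly what that idiom looks like for function arguments (a minimal sketch; the function and names are invented, not taken from the code above):

#include <cstdio>

// Naming a parameter as a bare statement marks it as "referenced" on some
// compilers; the (void) cast is the more portable spelling of the same intent.
void onResize(int width, int height)
{
    (void)height;              // explicitly "intentionally unused"
    // height;                 // a bare expression statement: same intent, less portable
    std::printf("new width: %d\n", width);
}

int main()
{
    onResize(640, 480);
}

Note that the bare height; form can itself draw a "statement has no effect" warning on some compilers, which is why the (void) cast is the usual choice.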

Related

How to force a compile error in C++(17) if a function return value isn't checked? Ideally through the type system

We are writing safety-critical code and I'd like a stronger way than [[nodiscard]] to ensure that unchecked function return values are caught by the compiler.
[Update]
Thanks for all the discussion in the comments. Let me clarify that this question may seem contrived, not a "typical use case", or not how someone else would do it. Please take it as an academic exercise if that makes it easier to ignore "well, why don't you just do it this way?". The question is exactly whether it's possible to create a type (or types) that fails to compile if it is not assigned to an l-value as the return result of a function call.
I know about [[nodiscard]], warnings-as-errors, and exceptions, and this question asks if it's possible to achieve something similar, that is a compile time error, not something caught at run-time. I'm beginning to suspect it's not possible, and so any explanation why is very much appreciated.
Constraints:
MSVC++ 2019
Something that doesn't rely on warnings
Warnings-as-Errors also doesn't work
It's not feasible to constantly run static analysis
Macros are OK
Not a runtime check, but caught by the compiler
Not exception-based
I've been trying to think of how to create a type (or types) such that, if a function's return value of that type is not assigned to a variable, the compiler flags an error.
Example:
struct MustCheck
{
    bool success;
    ...???...
};

MustCheck DoSomething( args )
{
    ...
    return MustCheck{true};
}

int main(void) {
    MustCheck res = DoSomething(blah);
    if( !res.success ) { exit(-1); }
    DoSomething( bloop ); // <------- compiler error
}
If such a thing is provably impossible through the type system, I'll also accept that answer ;)
(EDIT) Note 1: I have been thinking about your problem and reached the conclusion that the question is ill-posed. It is not clear what you are looking for because of a small detail: what counts as checking? How do checks compose, and how far from the call site can they happen?
For example, does this count as checking? Note that composition with boolean values (other results) and/or other runtime variables matters.
bool b = true; // for example
auto res1 = DoSomething1(blah);
auto res2 = DoSomething2(blah);
if((res1 and res2) or b){...handle error...};
The composition with other runtime variables makes it impossible to make any guarantee at compile-time and for composition with other "results" you will have to exclude certain logical operators, like OR or XOR.
(EDIT) Note 2: I should have asked before, but 1) if the handling is supposed to always abort: why not abort from the DoSomething function directly? 2) if the handling does a specific action on failure, then pass it as a lambda to DoSomething (after all, you control what it returns and what it takes). 3) composition or propagation of failures is the only non-trivial case, and it is not well defined in your question.
Below is the original answer.
This doesn't fulfill all the (edited) requirements you have (I think they are excessive) but I think this is the only path forward really.
Below my comments.
As you hinted, for doing this at runtime there are recipes online about "exploding" types (they assert/abort on destruction if they were not checked, tracked by an internal flag).
Note that this doesn't use exceptions (but it is a runtime check, and that is not so bad if you test the code often; it is, after all, a logic error).
For compile time it is more tricky; returning (for example) a bool with [[nodiscard]] is not enough, because there are ways of not discarding without actually checking, for example assigning it to a (bool) variable.
I think the next layer is to activate -Wunused-variable -Wunused-expression -Wunused-parameter (and treat them as errors, -Werror=...).
Then it is much harder not to check the bool, because comparison is pretty much the only operation you can really do with a bool.
(You can assign to another bool but then you will have to use that variable).
I guess that's quite enough.
There are still Machiavellian ways to mark a variable as used.
For that you can invent a bool-like type (class) that 1) is [[nodiscard]] itself (classes can be marked nodiscard), and 2) supports only ==(bool) and !=(bool) as operations (maybe it is not even copyable), and return that from your function. (As a bonus, you don't need to mark your function as [[nodiscard]], because it is automatic.)
I guess it is impossible to avoid something like (void)b; but that in itself becomes a flag.
Even if you cannot avoid the absence of checking, you can force patterns that will immediately raise eyebrows at least.
You can even combine the runtime and compile time strategy.
(Make CheckedBool exploding.)
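A rough, untested sketch of such a combined CheckedBool (all names and details here are mine, not required by anything above):

#include <cstdlib>

// The class itself is [[nodiscard]], the only useful operation is comparison
// with bool, and it "explodes" (aborts) if it is destroyed without ever
// having been compared.
class [[nodiscard]] CheckedBool
{
public:
    explicit CheckedBool(bool value) : value_(value) {}

    CheckedBool(const CheckedBool&) = delete;             // copies could hide the check
    CheckedBool(CheckedBool&& other) noexcept
        : value_(other.value_), checked_(other.checked_)
    {
        other.checked_ = true;                            // moved-from object is exempt
    }

    ~CheckedBool()
    {
        if (!checked_) std::abort();                      // runtime backup for missed checks
    }

    bool operator==(bool rhs) const { checked_ = true; return value_ == rhs; }
    bool operator!=(bool rhs) const { checked_ = true; return value_ != rhs; }

private:
    bool value_;
    mutable bool checked_ = false;
};

CheckedBool DoSomething() { return CheckedBool{true}; }   // no [[nodiscard]] needed on the function

int main()
{
    // DoSomething();                    // discarded: -Wunused-result / C4834 territory
    // auto r = DoSomething();           // stored but never compared: aborts at scope exit
    if (DoSomething() != true) return 1; // the intended pattern
}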
This will cover so many cases that you have to be happy at this point.
If compiler flags don't protect you, you will still have a backup that can be detected in unit tests (regardless of whether the error path is taken!).
(And don’t tell me now that you don’t unit test critical code.)
What you want is a special case of substructural types. Rust is famous for implementing a special case called "affine" types, where you can "use" something "at most once". Here, you instead want "relevant" types, where you have to use something at least once.
C++ has no official built-in support for such things. Maybe we can fake it? I thought not. In the "appendix" to this answer I include my original logic for why I thought so. Meanwhile, here's how to do it.
(Note: I have not tested any of this; I have not written any C++ in years; use at your own risk.)
First, we give MustCheck a protected destructor. Thus, if we simply ignore the return value, we get an error. But how do we avoid getting an error when we don't ignore the return value? Something like this.
(This looks scary: don't worry, we wrap most of it in a macro.)
int main(){
    struct Temp123 : MustCheck {
        void f() {
            MustCheck* mc = this;
            *mc = DoSomething();
        }
    } res;
    res.f();
    if(!res.success) print "oops";
}
Okay, that looks horrible, but after defining a suitable macro, we get:
int main(){
    CAPTURE_RESULT(res, DoSomething());
    if(!res.success) print "oops";
}
I leave the macro as an exercise to the reader, but it should be doable. You should probably use __LINE__ or something to generate the name Temp123, but it shouldn't be too hard.
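For what it's worth, here is a rough, untested sketch of what such a macro could look like (the helper names are invented):

#define MUSTCHECK_CONCAT_IMPL(a, b) a##b
#define MUSTCHECK_CONCAT(a, b) MUSTCHECK_CONCAT_IMPL(a, b)

// Expands to the hand-written pattern above; __LINE__ pasting gives the local
// struct a unique-enough name per use.
#define CAPTURE_RESULT(var, call)                                      \
    struct MUSTCHECK_CONCAT(MustCheckCapture_, __LINE__) : MustCheck { \
        void f() {                                                     \
            MustCheck* mc = this;                                      \
            *mc = call;                                                \
        }                                                              \
    } var;                                                             \
    var.f()

Used as CAPTURE_RESULT(res, DoSomething());. Note that, like the hand-written version, the call expression cannot refer to automatic variables of the enclosing function, because it ends up inside a member function of a local class.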
Disclaimer
Note that this is all sorts of hacky and terrible, and you likely don't want to actually use this. Using [[nodiscard]] has the advantage of allowing you to use natural return types, instead of this MustCheck thing. That means that you can create a function, and then one year later add nodiscard, and you only have to fix the callers that did the wrong thing. If you migrate to MustCheck, you have to migrate all the callers, even those that did the right thing.
Another problem with this approach is that it is unreadable without macros, but IDEs can't follow macros very well. If you really care about avoiding bugs then it really helps if your IDE and other static analyzers understand your code as well as possible.
As mentioned in the comments you can use [[nodiscard]] as per:
https://learn.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-160
And modify to use this warning as compile error:
https://learn.microsoft.com/en-us/cpp/preprocessor/warning?view=msvc-160
That should cover your use case.
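A minimal sketch of how the two combine (untested on your exact toolset; in recent MSVC the "discarding return value of function with 'nodiscard' attribute" warning is C4834, but verify the number your compiler reports):

#pragma warning(error: 4834)   // promote that warning to a hard compile error

[[nodiscard]] bool DoSomething() { return true; }

int main()
{
    bool ok = DoSomething();   // fine: the value is not discarded
    // DoSomething();          // uncommenting this line now fails to compile
    return ok ? 0 : 1;
}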

Why not use __if_exists with local variables?

The MSDN documentation for the Microsoft-specific __if_exists statement says the following (emphasis added):
Apply the __if_exists statement to identifiers both inside or outside a class. Do not apply the __if_exists statement to local variables.
Unfortunately there is no explanation for why you should not apply this to local variables. It compiles fine and has the expected effect, so I'm wondering if anyone knows why they say not to do this. Is it a correctness issue, or a maintainability issue, or something else?
I realize that this is a Microsoft-specific feature and not portable, but let's assume for argument's sake that there's a good reason to use it.
EDIT: Some folks are curious why I'm doing this, so here's an explanation. I realize this is a dirty hack, so unless you have a good suggestion for a better way to do it, please don't bother pointing out that it's gross. It's the least-gross alternative we were able to find given the large size of the code base.
We have a large body of legacy code (millions of lines) that uses the Microsoft-specific __FUNCTION__ macro as part of an error logging package. A significant fraction of that code is now wrapped inside lambda functions so that we can catch structured exceptions (with __try/__except) and still use unwindable objects. Inside those lambda functions, __FUNCTION__ evaluates to something useless like `anonymous-namespace'::<lambda23>::operator(), which is not useful for anything. Our workaround is to define a new __FUNCTION__-like macro which checks, using __if_exists, for the existence of an alternate local variable holding the enclosing function's name. Due to how the macros work, we can easily switch to the new __FUNCTION__ substitute and easily define the alternate name variable without changing tons of code, so it's a reasonably clean solution given the limitations. That is, of course, assuming that it's valid to use __if_exists this way.
As I said above, I know it's a dirty hack, so please don't tell me how ugly it is unless you have good ideas on how to do it better.
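For concreteness, this is the rough shape of the workaround (every identifier below is invented for illustration, not our real macro names; the sketch is MSVC-specific and untested here):

#include <cstdio>

// The enclosing function declares a static alias for its own name; inside a
// lambda, the logging macro prefers that alias when it exists and otherwise
// falls back to plain __FUNCTION__.
#define DECLARE_FUNC_NAME() static const char s_funcName[] = __FUNCTION__

#define LOG_HERE(msg)                                       \
    do {                                                    \
        const char* fn_ = __FUNCTION__;                     \
        __if_exists(s_funcName) { fn_ = s_funcName; }       \
        std::printf("%s: %s\n", fn_, msg);                  \
    } while (0)

void doWork()
{
    DECLARE_FUNC_NAME();                // remembers "doWork"
    auto body = [&]() {
        LOG_HERE("inside the lambda");  // logs "doWork", not the lambda's operator()
    };
    body();
}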
I don't know for sure, but one guess is that a local variable might be optimized away by the compiler (or might not, of course), which would make the __if_exists test unreliable.
I also don't see the reason to do this for a local variable: you are in that specific scope and you know everything that's there, so why would you want to test whether a local variable exists?
__if_exists is a dirty old hack inside Visual C++, with severe implementation limitations as it was only intended for ATL.
Local variables are special because you can have two local variables with the same name:
void foo()
{
    int i = 1;
    {
        int i = 2;
    }
}
This means there's a more complicated data structure inside the compiler to track them. __if_exists has to do a name lookup, which may not be correct for some kinds of nested scopes like this.
Another historical case is that in older Visual C++, the for-loop variable wasn't correctly scoped:
void foo()
{
    for (int i = 1; false; ) { }
    __if_exists(i) // What do you expect? VC++ let i escape.
}

Why is there no warning when I write an empty main?

If I write a program like the following one, g++ and visual studio have the courtesy of warning me that the local variable a is never used :
int main()
{
    int a; // An unused variable? Warning! Warning!
}
If I remove the unused variable (to make the compiler happy), it leaves me with the following program :
int main()
{
    // An empty main? That's fine.
}
Now, I am left with a useless program.
Maybe I am missing something, but, if an unused variable is bad enough to raise a warning, why would an empty program be ok?
The example above is pretty simple. But in real life, if I have a big program with an empty main (because I forgot to put anything in it), then having a warning would be a good thing, wouldn't it?
Maybe I am missing an option in g++ or visual studio that can raise a warning/error when the main is empty?
The reason for this is simple: if there is no return statement in main, it implicitly returns 0 (i.e. success), as defined by the standard.
So an empty main is fine, no return needed, no function calls needed, nothing.
As for why GCC doesn't warn you: warnings are there to help you with common mistakes. Leaving a variable unused can lead to confusing errors and code bloat.
However, forgetting entirely to fill in the body of main isn't a common mistake by anyone but a beginner, and it isn't worth warning about (because it's entirely legal as well).
I suspect a lot of it is that compilers generally try to warn about things that are potential problems, but aren't necessarily apparent.
Now it's certainly true that if all your main contains is a definition of a variable that's never used, that's fairly apparent -- but if you've defined 16 variables (or whatever) and one of them is no longer used, that may not be so obvious.
In the case of main containing nothing, I suppose the same could happen with an empty main -- for example, you could have a whole web of #ifdef/#elif/etc., that led to main being entirely empty for some particular platform. I'm pretty sure I've never run across this though, and I'm pretty sure I've never heard of anybody else seeing it either. At least to me, that suggests that it probably doesn't arise often enough in practice for most people to care much about the possibility.
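A contrived sketch of that situation (the macro and function names are invented):

void run_platform_a();
void run_platform_b();

int main()
{
#if defined(PLATFORM_A)
    run_platform_a();
#elif defined(PLATFORM_B)
    run_platform_b();
#endif
    // On any other platform, main ends up completely empty -- and still legal.
}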
if an unused variable is bad enough to raise a warning, why would an empty program be ok?
First of all, an empty main does not equal an empty program. There could be static objects with non-trivial constructors/destructors. These would get invoked irrespective of whether main is empty.
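A minimal sketch of the point:

#include <cstdio>

// An empty main is not an empty program: this object's constructor runs
// before main and its destructor runs after main returns.
struct Tracer
{
    Tracer()  { std::puts("constructed before main"); }
    ~Tracer() { std::puts("destroyed after main"); }
};

static Tracer g_tracer;

int main()
{
    // deliberately empty
}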
Secondly, one could think of lots and lots of potential errors that a compiler could warn about, but most compilers don't. I think this particular one doesn't come up very often (and takes seconds to figure out). I therefore don't see a compelling case for specifically diagnosing it.
When I was cleaning up inherited C code that made up the customized runner for Informix 4GL, I set the warning flags to catch everything, fixed every warning, and there were lots of warnings.
I haven't used Visual C++ in a long time. Can't VC++ be configured to use its strictest warning level? It is probably not the default setting, but one you have to change.
Then it is possible that at least the unused variable would be flagged.
In a global sense, int main() is just the definition of the program's main function, which returns success when it finishes.
The main function is the point where all C++ programs start their execution, independently of its location within the source code.
So this:
int main()
{
    // An empty main? That's fine.
    // notice that the "return 0;" part is here by default, whether you wrote it or not
}
is just the definition of a function which returns an admissible value.
So everything is ok, that's why the compiler is silent.

Expressions with no side effects in C++

See, what I don't get is, why should programs like the following be legal?
int main()
{
    static const int i = 0;
    i < i > i;
}
I mean, surely nobody actually has any current programs containing expressions with no side effects, since that would be quite pointless, and disallowing them would make parsing & compiling the language much easier. So why not just disallow them? What benefit does the language actually gain from allowing this kind of syntax?
Another example being like this:
int main() {
    static const int i = 0;
    int x = (i);
}
What is the actual benefit of such statements?
And things like the most vexing parse. Does anybody, ever, declare functions in the middle of other functions? I mean, we got rid of things like implicit function declaration, and things like that. Why not just get rid of them for C++0x?
Probably because banning them would make the specification more complex, which would make compilers more complex.
it would make parsing & compiling the language much easier
I don't see how. Why is it easier to parse and compile i < i > i if you're required to issue a diagnostic, than it is to parse it if you're allowed to do anything you damn well please provided that the emitted code has no side-effects?
The Java compiler forbids unreachable code (as opposed to code with no effect), which is a mixed blessing for the programmer, and requires a little bit of extra work from the compiler than what a C++ compiler is actually required to do (basic block dependency analysis). Should C++ forbid unreachable code? Probably not. Even though C++ compilers certainly do enough optimization to identify unreachable basic blocks, in some cases they may do too much. Should if (foo) { ...} be an illegal unreachable block if foo is a false compile-time constant? What if it's not a compile-time constant, but the optimizer has figured out how to calculate the value, should it be legal and the compiler has to realise that the reason it's removing it is implementation-specific, so as not to give an error? More special cases.
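For instance (a sketch; the names are invented):

// Legal-but-unreachable code driven by a compile-time constant. A rule that
// rejected the dead block would break configuration switches like this one.
constexpr bool enable_extra_checks = false;   // could come from a build option

int process(int value)
{
    if (enable_extra_checks)
    {
        // Never executed in this configuration, but it still has to compile.
        if (value < 0) return -1;
    }
    return value;
}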
nobody actually has any current programs that have expressions with no side effects in them
Loads. For example, if NDEBUG is defined, then assert expands to a void expression with no effect. So that's yet more special cases needed in the compiler to permit some useless expressions but not others.
The rationale, I believe, is that if it expanded to nothing then (a) compilers would end up throwing warnings for things like if (foo) assert(bar);, and (b) code like this would be legal in release but not in debug, which is just confusing:
assert(foo) // oops, forgot the semi-colon
foo.bar();
things like the most vexing parse
That's why it's called "vexing". It's a backward-compatibility issue really. If C++ now changed the meaning of those vexing parses, the meaning of existing code would change. Not much existing code, as you point out, but the C++ committee takes a fairly strong line on backward compatibility. If you want a language that changes every five minutes, use Perl ;-)
Anyway, it's too late now. Even if we had some great insight that the C++0x committee had missed, why some feature should be removed or incompatibly changed, they aren't going to break anything in the FCD unless the FCD is definitively in error.
Note that for all of your suggestions, any compiler could issue a warning for them (actually, I don't understand what your problem is with the second example, but certainly for useless expressions and for vexing parses in function bodies). If you're right that nobody does it deliberately, the warnings would cause no harm. If you're wrong that nobody does it deliberately, your stated case for removing them is incorrect. Warnings in popular compilers could pave the way for removing a feature, especially since the standard is authored largely by compiler-writers. The fact that we don't always get warnings for these things suggests to me that there's more to it than you think.
It's convenient sometimes to put useless statements into a program and compile it just to make sure they're legal - e.g. that the types involved can be resolved/matched etc.
Especially in generated code (macros, as well as more elaborate external mechanisms, or templates where Policies or types may introduce meaningless expansions in some no-op cases), having fewer special uncompilable cases to avoid keeps things simpler.
There may be some temporarily commented code that removes the meaningful usage of a variable, but it could be a pain to have to similarly identify and comment all the variables that aren't used elsewhere.
While in your examples you show the variables being "int" immediately above the pointless usage, in practice the types may be much more complicated (e.g. operator<()) and whether the operations have side effects may even be unknown to the compiler (e.g. out-of-line functions), so any benefit's limited to simpler cases.
C++ needs a good reason to break backwards (and retained C) compatibility.
Why should doing nothing be treated as a special case? Furthermore, whilst the above cases are easy to spot, one could imagine far more complicated programs where it's not so easy to identify that there are no side effects.
As an iteration of the C++ standard, C++0x has to be backward compatible. Nobody can assert that the statements you wrote do not exist in some piece of critical software written/owned by, say, NASA or DoD.
Anyway regarding your very first example, the parser cannot assert that i is a static constant expression, and that i < i > i is a useless expression -- e.g. if i is a templated type, i < i > i is an "invalid variable declaration", not a "useless computation", and still not a parse error.
Maybe the operator was overloaded to have side effects, like cout << i; this is the reason why they cannot be removed now. On the other hand, C# forbids expressions other than assignments and method calls from being used as statements, and I believe this is a good thing, as it makes the code clearer and more semantically correct. However, C# had the opportunity to forbid this from the very beginning, which C++ did not.
Expressions with no side effects can turn up more often than you think in templated and macro code. If you've ever declared std::vector<int>, you've instantiated template code with no side effects. std::vector must destruct all its elements when releasing itself, in case you stored a class for type T. This requires, at some point, a statement similar to ptr->~T(); to invoke the destructor. int has no destructor though, so the call has no side effects and will be removed entirely by the optimizer. It's also likely it will be inside a loop, then the entire loop has no side effects, so the entire loop is removed by the optimizer.
So if you disallowed expressions with no side effects, std::vector<int> wouldn't work, for one.
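Here is a simplified sketch of the kind of code involved (my own illustration, not the actual library source):

// Simplified sketch of what a vector-like container does on destruction.
// For T = int, first->~T() has no side effects, so the whole loop can be
// removed by the optimizer -- yet the statement must still be legal to write.
template <typename T>
void destroy_range(T* first, T* last)
{
    for (; first != last; ++first)
        first->~T();   // a no-op for trivially destructible types like int
}

int main()
{
    int storage[4] = {1, 2, 3, 4};
    destroy_range(storage, storage + 4);   // compiles, does nothing useful
}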
Another common case is assert(a == b). In release builds you want these asserts to disappear - but you can't re-define them as an empty macro, otherwise statements like if (x) assert(a == b); suddenly put the next statement into the if statement - a disaster! In this case assert(x) can be redefined as ((void)0), which is a statement that has no side effects. Now the if statement works correctly in release builds too - it just does nothing.
These are just two common cases. There are many more you probably don't know about. So, while expressions with no side effects seem redundant, they're actually functionally important. An optimizer will remove them entirely so there's no performance impact, too.

c++ optimization

I'm working on some existing c++ code that appears to be written poorly, and is very frequently called. I'm wondering if I should spend time changing it, or if the compiler is already optimizing the problem away.
I'm using Visual Studio 2008.
Here is an example:
void someDrawingFunction(....)
{
    GetContext().DrawSomething(...);
    GetContext().DrawSomething(...);
    GetContext().DrawSomething(...);
    .
    .
    .
}
Here is how I would do it:
void someDrawingFunction(....)
{
    MyContext &c = GetContext();
    c.DrawSomething(...);
    c.DrawSomething(...);
    c.DrawSomething(...);
    .
    .
    .
}
Don't guess at where your program is spending time. Profile first to find your bottlenecks, then optimize those.
As for GetContext(), that depends on how complex it is. If it's just returning a class member variable, then chances are that the compiler will inline it. If GetContext() has to perform a more complicated operation (such as looking up the context in a table), the compiler probably isn't inlining it, and you may wish to only call it once, as in your second snippet.
If you're using GCC, you can also tag the GetContext() function with the pure attribute. This will allow it to perform more optimizations, such as common subexpression elimination.
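A GCC-specific sketch (assuming GetContext really has no side effects and always refers to the same context; MyContext stands in for the real type from the question):

struct MyContext { void DrawSomething(); };

// 'pure' promises the result depends only on arguments and global state the
// function does not modify, which enables common subexpression elimination.
MyContext& GetContext() __attribute__((pure));

void someDrawingFunction()
{
    GetContext().DrawSomething();   // with the attribute, the compiler may
    GetContext().DrawSomething();   // evaluate GetContext() once and reuse it
}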
If you're sure it's a performance problem, change it. If GetContext is a function call (as opposed to a macro or an inline function), then the compiler is going to HAVE to call it every time, because the compiler can't necessarily see what it's doing, and thus, the compiler probably won't know that it can eliminate the call.
Of course, you'll need to make sure that GetContext ALWAYS returns the same thing, and that this 'optimization' is safe.
If it is logically correct to do it the second way, i.e. calling GetContext() once instead of multiple times does not affect your program logic, I'd do it the second way even if you profile it and prove that there is no performance difference either way, so the next developer looking at this code will not ask the same question again.
Obviously, if GetContext() has side effects (I/O, updating globals, etc.) then the suggested optimization will produce different results.
So unless the compiler can somehow detect that GetContext() is pure, you should optimize it yourself.
If you're wondering what the compiler does, look at the assembly code.
That is such a simple change, I would do it.
It is quicker to fix it than to debate it.
But do you actually have a problem?
Just because it's called often doesn't mean it's called TOO often.
If it seems qualitatively piggy, sample it to see what it's spending time at.
Chances are excellent that it is not what you would have guessed.