I don't know if what I'd like is possible, but I figured I'd ask anyway.
I have some Boost library code where I'd like to hint to clang-tidy that it should issue a warning when static analysis shows that a clear instance of undefined behaviour could occur due to bad logic. https://akrzemi1.wordpress.com/2016/12/12/concealing-bugs/ suggests that __builtin_unreachable() might make clang-tidy trip like this, but I have failed to make it happen (though it trips the UB sanitiser very nicely):
#include <optional>

int main()
{
    std::optional<int> f;
    // Spot the UB before it happens and flag it
    if(!f)
    {
        __builtin_unreachable();
    }
    // Here is the UB
    return *f;
}
In the code above, a static analyser can clearly tell that __builtin_unreachable() must be called. I want clang-tidy to report this, yet clang-tidy-5.0 -checks=* -header-filter=.* temp.cpp -- -std=c++17 reports nothing.
Note I don't need to use __builtin_unreachable(), it's just what Andrzej's C++ Blog suggested. Any technique for getting the clang static analyser, or the MSVC static analyser, or ideally clang-tidy, to deduce when UB must obviously occur through static deduction and flag it at compile time is what I am looking for.
What I am not looking for is a construct which always trips a warning during static analysis irrespective of use case. I only want the static analysis warning to appear when through static analysis alone, it is obvious at compile time that UB could be invoked given some statically deducible logic error.
My thanks in advance!
So, yeah, it turns out that this cannot be done in clang-tidy at least, and probably most other static analysers.
The gory details of why not can be found at https://lists.llvm.org/pipermail/cfe-dev/2017-June/054120.html, but in essence:
The halting problem: a static analyser cannot, in general, determine whether and how loops execute.
clang-tidy and other analysers are built to avoid flagging dead code as much as possible. That is exactly the opposite of what the kind of check above needs.
Related
#include <string>

struct Thing {
    std::string something;
};
Clang-tidy complains:
warning: an exception may be thrown in function 'Thing' which should not throw exceptions [bugprone-exception-escape]
I know about the bugprone-exception-escape.FunctionsThatShouldNotThrow setting, but I could not figure out what to put there to suppress that warning for all structures that contain strings.
Any suggestions?
It is a bug in the check, see https://github.com/llvm/llvm-project/issues/54668, apparently introduced with LLVM 14. Going by the issue description there isn't really much you can do except suppressing it individually for each class.
If that is too much work, you might want to consider disabling the check until they have fixed this. According to the linked issue a fix is already under review: https://reviews.llvm.org/D134588
#include <vector>

std::vector<int>::iterator foo();
void bar(void*) {}

int main()
{
    void* p;
    while (foo() != foo() && (p = 0, true))
    {
        bar(p);
    }
    return 0;
}
Results in error:
c:\users\jessepepper\source\repos\testcode\consoleapplication1\consoleapplication1.cpp(15): error C4703: potentially uninitialized local pointer variable 'p' used
It's kind of a bug, but very typical for the kind of code you write.
First, this isn't an error, it's a warning. C4703 is a level 4 warning (meaning that it isn't even enabled by default). So in order to get it reported as an error (and thus interrupt compilation), compiler arguments or pragmas were passed to enable this warning and turn it into an error (/W4 and /WX are the most likely, I think).
Then there's a trade-off in the compiler. How complex should the data flow analysis be to determine whether a variable is actually uninitialized? Should it be interprocedural? The more complex it is, the slower the compiler gets (and because of the halting problem, the issue may be undecidable anyway). The simpler it is, the more false positives you get because the condition that guarantees initialization is too complex for the compiler to understand.
In this case, I suspect that the compiler's analysis works as follows: the assignment to p is behind a conditional (it only happens if foo() != foo()). The usage of p is also behind a conditional (it only happens if that complex and-expression is true). The compiler cannot establish a relationship between these conditions (the analysis is not complex enough to realize that foo() != foo() is a precondition to the entire while loop condition being true). Thus, the compiler errs on the side of assuming that the access could happen without prior initialization and emits the warning.
So it's an engineering trade-off. You could report the bug, but if you do, I suggest you supply a more compelling real-world example of idiomatic code to argue in favor of making the analysis more complex. Are you sure you can't restructure your original code to make it more approachable to the compiler, and more readable for humans at the same time?
I did some experimenting with VC++2017 Preview.
It's definitely a bug. It makes it impossible to compile and link code that might be correct, albeit smelly.
A warning would be acceptable (see Sebastian Redl's answer). But in the latest and greatest VC++2017, it is being treated as an error, not a warning, even with warnings turned off and "Treat warnings as errors" set to No. Something odd is happening. The "error" is thrown late, after it says "Generating code". I would guess, and it's only a guess, that the "Generating code" pass is doing global analysis to determine whether uninitialized access is possible, and it's getting it wrong. Even then, you should be able to disable the error, IMO.
I do not know if this is new behavior. Reading Sebastian's answer, I presume it is. When I get any kind of warning at any level, I always fix it in the code, so I would not know.
Jesse, click on the triangular flag near the top right of Visual Studio, and report it.
For sure it's a bug. I tried to remove it in every possible way, including #pragma. The real problem is that this is reported as an error, not as a warning as Microsoft says it should be. This is a big mistake on Microsoft's part: it's not a warning, it's an error.
What I'm doing is trying to compile some third-party library whose sources I do not want to fix in any way, and which should compile in normal cases, but it DOESN'T compile in VS2017 because of the infamous "error C4703: potentially uninitialized local pointer variable *** used".
Has anyone found a solution for this?
constexpr permits expressions which can be evaluated at compile time to be ... evaluated at compile time.
Why is this keyword even necessary? Why not permit or require that compilers evaluate all expressions at compile time if possible?
The standard library has an uneven application of constexpr which causes a lot of inconvenience. Making constexpr the "default" would address that and likely improve a huge amount of existing code.
It already is permitted to evaluate side-effect-free computations at compile time, under the as-if rule.
What constexpr does is provide guarantees on what data-flow analysis a compliant compiler is required to do to detect1 compile-time-computable expressions, and also allow the programmer to express that intent so that they get a diagnostic if they accidentally do something that cannot be precomputed.
Making constexpr the default would eliminate that very useful diagnostic ability.
1 In general, requiring "evaluate all expressions at compile time if possible" is a non-starter, because detecting the "if possible" requires solving the Halting Problem, and computer scientists know that this is not possible in the general case. So instead a relaxation is used where the outputs are { "Computable at compile-time", "Not computable at compile-time or couldn't decide" }. And the ability of different compilers to decide would depend on how smart their test was, which would make this feature non-portable. constexpr defines the exact test to use. A smarter compiler can still pre-compute even more expressions than the Standard test dictates, but if they fail the test, they can't be marked constexpr.
Note: despite the below, I admit to liking the idea of making constexpr the default. But you asked why it wasn't already done, so to answer that I will simply elaborate on mattnewport's last comment:
Consider the situation today. You're trying to use some function from the standard library in a context that requires a constant expression. It's not marked as constexpr, so you get a compiler error. This seems dumb, since "clearly" the ONLY thing that needs to change for this to work is to add the word constexpr to the definition.
Now consider life in the alternate universe where we adopt your proposal. Your code now compiles, yay! Next year you decide to add Windows support to whatever project you're working on. How hard can it be? You'll compile using Visual Studio for your Windows users and keep using gcc for everyone else, right?
But the first time you try to compile on Windows, you get a bunch of compiler errors: this function can't be used in a constant expression context. You look at the code of the function in question, and compare it to the version that ships with gcc. It turns out that they are slightly different: the version that ships with gcc meets the technical requirements for constexpr by sheer accident, while the one that ships with Visual Studio fails them, again by sheer accident. Now what?
No problem you say, I'll submit a bug report to Microsoft: this function should be fixed. They close your bug report: the standard never says this function must be usable in a constant expression, so we can implement however we want. So you submit a bug report to the gcc maintainers: why didn't you warn me I was using non-portable code? And they close it too: how were we supposed to know it's not portable? We can't keep track of how everyone else implements the standard library.
Now what? No one did anything really wrong. Not you, not the gcc folks, nor the Visual Studio folks. Yet you still end up with un-portable code and are not a happy camper at this point. All else being equal, a good language standard will try to make this situation as unlikely as possible.
And even though I used an example of different compilers, it could just as well happen when you try to upgrade to a newer version of the same compiler, or even try to compile with different settings. For example: the function contains an assert statement to ensure it's being called with valid arguments. If you compile with assertions disabled, the assertion "disappears" and the function meets the rules for constexpr; if you enable assertions, then it doesn't meet them. (This is less likely these days now that the rules for constexpr are very generous, but was a bigger issue under the C++11 rules. But in principle the point remains even today.)
Lastly we get to the admittedly minor issue of error messages. In today's world, if I try to stick a cout statement in a constexpr function, I get a nice simple error right away. In your world, we would have the same situation that we have with templates: deep stack traces all the way to the very bottom of the implementation of output streams. Not fatal, but surely annoying.
This is a year and a half late, but I still hope it helps.
As Ben Voigt points out, compilers are already allowed to evaluate anything at compile time under the as-if rule.
What constexpr also does is lay out clear rules for expressions that can be used in places where a compile time constant is required. That means I can write code like this and know it will be portable:
constexpr int square(int x) { return x * x; }
...
int a[square(4)] = {};
...
Without the keyword and clear rules in the standard I'm not sure how you could specify this portably and provide useful diagnostics on things the programmer intended to be constexpr but don't meet the requirements.
I have developed a cross-platform library which makes fair use of type-punning in socket communications. This library is already being used in a number of projects, some of which I may not be aware of.
Using this library incorrectly can result in dangerously Undefined Behavior. I would like to ensure to the best of my ability that this library is being used properly.
Aside from documentation of course, under G++ the best way I'm aware of to do that is to use the -fstrict-aliasing and -Wstrict-aliasing options.
Is there a way under GCC to apply these options at a source file level?
In other words, I'd like to write something like the following:
MyFancyLib.h
#ifndef MY_FANCY_LIB_H
#define MY_FANCY_LIB_H
#pragma (something that pushes the current compiler options)
#pragma (something to set -fstrict-aliasing and -Wstrict-aliasing)
// ... my stuff ...
#pragma (something to pop the compiler options)
#endif
Is there a way?
I rather dislike nay-sayers. You can see an excellent post at this page: https://www.codingame.com/playgrounds/58302/using-pragma-for-compile-optimization
All the other answers clearly have nothing to do with the question so here is the actual documentation for GCC:
https://gcc.gnu.org/onlinedocs/gcc/Pragmas.html
Other compilers will have their own methods so you will need to look those up and create some macros to handle this.
Best of luck. Sorry that it took you 10 years to get any relevant answer.
Let's start with what I think is a false premise:
Using this library incorrectly can result in dangerously Undefined Behavior. I would like to ensure to the best of my ability that this library is being used properly.
If your library does type punning in a way that -fstrict-aliasing breaks, then it has undefined behavior according to the C++ standard regardless of what compiler flags are passed. The fact that the program seems to work on certain compilers when compiled with certain flags (in particular, -fno-strict-aliasing) does not change that.
Therefore, the best solution is to do what Florian said: change the code so it conforms to the C++ language specification. Until you do that, you're perpetually on thin ice.
"Yes, yes", you say, "but until then, what can I do to mitigate the problem?"
I recommend including a run-time check, used during library initialization, to detect the condition of having been compiled in a way that will cause it to misbehave. For example:
// Given two pointers to the *same* address, return 1 if the compiler
// is behaving as if -fstrict-aliasing is specified, and 0 if not.
//
// Based on https://blog.regehr.org/archives/959 .
static int sae_helper(int *h, long *k)
{
    // Write a 1.
    *h = 1;

    // Overwrite it with all zeroes using a pointer with a different type.
    // With naive semantics, '*h' is now 0. But when -fstrict-aliasing is
    // enabled, the compiler will think 'h' and 'k' point to different
    // memory locations ...
    *k = 0;

    // ... and therefore will optimize this read as 1.
    return *h;
}

int strict_aliasing_enabled()
{
    long k = 0;

    // Undefined behavior! But we're only doing this because other
    // code in the library also has undefined behavior, and we want
    // to predict how that code will behave.
    return sae_helper((int*)&k, &k);
}
(The above is C rather than C++ just to ease use in both languages.)
Now in your initialization routine, call strict_aliasing_enabled(), and if it returns 1, bail out immediately with an error message saying the library has been compiled incorrectly. This will help protect end users from misbehavior and alert the developers of the client programs that they need to fix their build.
I have tested this code with gcc-5.4.0 and clang-8.0.1. When -O2 is passed, strict_aliasing_enabled() returns 1. When -O2 -fno-strict-aliasing is passed, that function returns 0.
But let me emphasize again: my code has undefined behavior! There is (can be) no guarantee it will work. A standard-conforming C++ compiler could compile it into code that returns 0, crashes, or that initiates Global Thermonuclear War! Which is also true of the code you're presumably already using elsewhere in the library if you need -fno-strict-aliasing for it to behave as intended.
You can try the diagnostic pragmas and raise the level of specific warnings to error. More details here:
http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html
If your library is a header-only library, I think the only way to deal with this is to fix the strict aliasing violations. If the violations occur between types you define, you can use the usual tricks involving unions, or the may_alias type attribute. If your library uses the predefined sockaddr types, this could be difficult.
I'm a student, and I'm trying to write and run some test code for an assignment, to check it before I turn it in. What I'm trying to do now is test that my code prevents value semantics properly. In my assignment, each of my classes declares its own private copy constructor and assignment operator with no definition, so they cannot be used. When they are called in my test program, I get compile errors like I expected. Something like this:
error: 'myClass::myClass(const myClass&)' is private
error: 'myClass& myClass::operator=(const myClass&)' is private
Is there a way to use try/catch so that my test code will compile and run, but show me that these errors did occur?
I've tried:
myClass obj1(...);
myClass obj2(...);

try {
    obj1 = obj2;
    throw 1;
}
catch(int e) {
    assert(e == 1);
}
but the compiler is still giving me the above errors. Are these not 'exceptions'? Will they not trigger a throw?
If I'm understanding try/catch correctly, it handles runtime errors, not the kind of errors I was getting above, correct?
After doing some more research, it seems that there is no (easy) way of testing for certain compile errors natively within C++ (this maybe true for most languages, now that I think about it). I read a post that suggests writing some test code in a scripting language that attempts to compile snippets of C++ code and checks for any errors, and another post that recommends using Boost.Build.
What is the easiest/best way of doing what I'm trying to do?
I looked at the documentation for Boost.Build and it's a bit over my head. If I used it, how would I test that a file, say 'test.cpp' compiles, and maybe handle specific compile errors that occur with 'test.cpp'?
Thanks for your help!
P.S. This is one of my first posts, hopefully I've done "enough" research, and done everything else properly. Sorry if I didn't.
These are compiler errors, not exceptions. Exceptions are a mechanism for programmers to throw run-time errors and catch/handle them. The compiler fails to even build an executable for you to run because it recognizes that the code is malformed and is invalid C++ code.
If you want to make this a run-time error, make the method public (or use friends, or whatever else you need to do to provide access) and throw an exception in the method's definition; then catch and handle the exception in the calling code.
I don't see a purpose in doing this however. Always prefer a compile-time error to a run-time error. Always.
The C++ standard defines what is valid or invalid code, with some things left as undefined and other things left up to whoever implements the compiler. Any standard compliant C++ compiler will give an error because something does not meet the standard/definition and is thus invalid. The errors are generally to say that something is ambiguous or straight up nonsensical and you need to revise what you've written.
Run-time errors are either crashes or behavior that is unintended and unwanted from the perspective of the user. Compiler errors are the compiler saying "I don't understand what you're saying. This doesn't make sense.". Compiler warnings are the compiler saying "I'll let you do this, but I probably shouldn't. Are you really sure this is what you meant?".
try-catch happens at runtime, whereas the compiler checks access to those private functions statically at compile time, so compilation will always fail.
Alternatively, if you are willing to use C++ exceptions, you could just implement the copy and assignment methods, make them public, and throw an exception in the body of those functions. Note that in basically every situation, you should prefer static/compile-time checks over runtime checks if you have a choice.
What you really want to test is not the compiler failing, but you want to test certain assumptions about your class.
In your test file, put #include <type_traits>
and then add
assert((std::is_assignable<myClass&, myClass>::value) == false);
assert((std::is_copy_assignable<myClass>::value) == false);
assert((std::is_copy_constructible<myClass>::value) == false);
The various traits you can check for are documented here:
http://en.cppreference.com/w/cpp/types
Notice, you'll have to compile for C++11 to use most of these functions.
(as described first in Assert that code does NOT compile)
After doing some more research, it seems that there is no (easy) way of testing for certain compile errors natively within C++
I think this might not be the case anymore if you can use C++20.
As I am currently writing tests for templated code, I also tried to test for compile time errors.
In particular, I want to test for a negative feature, hence provide a guarantee that a certain construct will fail to compile. That is possible using C++20 requires-expressions as follows:
Simple example
Below, I check that the nonexistent function invalid_function cannot be called on a struct of type S:
struct S {}; //< Example struct on which I perform the test

template <typename T>
constexpr bool does_invalid_function_compile = requires(T a) {
    a.invalid_function();
};

static_assert(!does_invalid_function_compile<S>, "Error, invalid function does compile.");
Note that you could replace the static_assert with the appropriate function of your testing framework, which records a test error at runtime and hence keeps this compile-time test from stopping other tests from executing.
Example form question
This example can of course be adapted to work with the scenario depicted in the question, which might look approximately like this:
/// Test struct with deleted assignment operator
struct myClass {
    myClass& operator=(myClass const &) = delete;
};

/// Requires expression which is used in order to check if assignment
/// between two lvalues of type T compiles
template <typename T>
constexpr bool does_assignment_compile = requires(T a, T b) {
    a = b;
};

int main() {
    // static_assert can be used here because the result is known at
    // compile time; otherwise, feed the boolean to your test framework.
    static_assert(!does_assignment_compile<myClass>);
}
The code is available on Compiler Explorer.
Use cases
I use this for template metaprogramming in order to make sure that the code complies with certain theoretical constraints.
This kind of compile error can't be suppressed. They are errors from the C++ standard's point of view.
Surely you could suppress some of them in your own (or patched) compiler.