Is an implementation allowed to issue a diagnostic message for a well-formed program?
For example, some compilers issue a warning about an unused expression result when compiling the following well-formed program:
int main() { 0; }
Are such compilers allowed to consider that warning a diagnostic message?
It's perfectly fine to issue a diagnostic, as long as the rules below are met in any corresponding scenario. §1.4/2:
Although this International Standard states only requirements on C++
implementations, those requirements are often easier to understand if
they are phrased as requirements on programs, parts of programs, or
execution of programs. Such requirements have the following meaning:
If a program contains no violations of the rules in this International Standard, a conforming implementation shall, within
its resource limits, accept and correctly execute that program.
If a program contains a violation of any diagnosable rule or an occurrence of a construct described in this Standard as
“conditionally-supported” when the implementation does not support
that construct, a conforming implementation shall issue at least one
diagnostic message.
If a program contains a violation of a rule for which no diagnostic is required, this International Standard places no requirement on
implementations with respect to that program.
"Accepting" solely addresses the acknowledgment of the implementation that this is a well-formed program, not the absence of any diagnostics. After all, despite any warnings issued in the process, implementations still yield the object file you asked for.
However, there is one rule concerning templates that does require that there be no diagnostic issued; §14.6/8:
No diagnostic shall be issued for a template for which a valid
specialization can be generated.
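To make that template rule concrete, here is a small sketch of my own (not from the quoted answer); the names are placeholders:

// The compiler shall not reject this template definition, because a valid
// specialization can be generated (e.g. with T = int):
template <class T>
T twice(T x)
{
    return x + x;   // fine for int, double, std::string, ...
}

// This one, by contrast, has no valid specialization (sizeof(T) is never 0),
// so a program containing it would be ill-formed, no diagnostic required; it
// is left commented out because a compiler is free to accept or reject it:
// template <class T>
// void never_valid(T)
// {
//     static_assert(sizeof(T) == 0, "no valid specialization exists");
// }

int main()
{
    return twice(1) == 2 ? 0 : 1;
}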
An implementation can issue any number of diagnostics it wants(1), as long as it does issue the required diagnostics.
It must accept correct programs, to the degree that it's able to,
C++14 §1.4/2:
“If a program contains no violations of the rules in this International Standard, a conforming implementation shall, within its resource limits, accept and correctly execute that program”
but it can issue diagnostics about it.
The C++ standard does not differentiate between error messages and warning messages, but the distinction is a de facto standard. An error message means (by convention) that no binary is produced, because the problem is too severe. A warning message means (by convention) that there is a potential problem, but not a direct violation of language rules, so a binary is produced unless there are also errors.
Sometimes the lines are a bit blurred, where implementations, incorrectly but for pragmatic reasons, accept invalid code with only warnings or even no diagnostics. For new code one may therefore ask the compiler to treat every warning as an error, and aim for completely clean compiles. As I understand it, that is now quite common.
With some compilers, e.g. Visual C++, that can however be problematic, because the compiler issues too many silly warnings: warnings about perfectly legitimate and non-problematic constructs. One then has to suppress those warnings somehow, e.g. via #pragma directives, if possible, or by code rewrites.
Happily for Visual C++ there exists a header with such #pragma directives that turn off silly warnings, compiled about five years ago from a community effort in the comp.lang.c++ Usenet group. And happily, for the community edition of Visual Studio 2015 there is an extension that provides a project template with that header included. Both of these are by me.
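As an illustration only (this is not the actual community header), such a suppression header boils down to a handful of #pragma warning directives; the warning number shown here, C4100 "unreferenced formal parameter", is just an assumed example of the kind of warning one might turn off:

// suppress_sillywarnings.h  (illustrative sketch, not the real header)
#pragma once
#ifdef _MSC_VER
#   pragma warning( disable : 4100 )    // C4100: unreferenced formal parameter
#endif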
For the code in question,
int main() { 0; }
… instead of suppressing the warning, which generally is a useful one, you should rewrite the code to express your intent explicitly:
int main() { (void)0; }
The (void) cast tells the compiler that it's your intent to discard the value of that expression.
In the case of using this construct for an otherwise unused function argument, you can additionally declare an incomplete class of the same name, to prevent inadvertent use of the name:
(void)arg_name; struct arg_name;
But since it's unconventional it may trip up other programmers – with the compilers I use the error message for later use of the name is not exactly intuitive.
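A minimal sketch of the (void) discard idiom for an unused parameter (my own example; the names are placeholders, and the incomplete-class trick above is intentionally left out):

#include <cstdio>

void on_event(int event_id, void* context)
{
    (void)context;                          // intent: context is deliberately unused
    std::printf("event %d\n", event_id);
}

int main()
{
    on_event(1, nullptr);
}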
(1) Except as noted by Columbo in his answer, C++14 §14.6/8 “No diagnostic shall be issued for a template for which a valid specialization can be generated.”.
While writing an answer to How is it possible to use pow without including cmath library, I fear I have shown that missing an include of a needed header is actually undefined behavior, but since I have not found any consensus on that, I would like to pose the formal question:
Is code that is missing a required header, i.e. code such as the following:
#include <iostream>
int main()
{
std::cout << std::pow(10, 2);
}
1. Ill-formed ([defns.ill.formed]) code?
2. Invoking undefined behavior ([defns.undefined])?
3. If it is neither 1 nor 2, is it unspecified behavior ([defns.unspecified]) or implementation-defined behavior ([defns.impl.defined])?
4. If not 1, i.e. if this code is well-formed, wouldn't that contradict [using.headers] and [intro.compliance] ("accept and correctly execute a well-formed program")?
As in my answer, I tend to affirm both 1 and 2, but [using.headers] is very confusing because of Difference between Undefined Behavior and Ill-formed, no diagnostic message required. Since [defns.well.formed] implies that a program constructed according to the ODR is well-formed, and there is no specification of whether, for example, <iostream> may or may not define pow, one could argue this is still unspecified behavior ([defns.unspecified]). I don't want to rely only on my own standard-interpretation skills for a definitive answer to such an important question. Note that the accepted answer there (i.e. the only other answer) does not say whether the code is UB, nor does that question ask it.
It is unspecified whether this program is well-formed or ill-formed (with a required diagnostic, because name lookup doesn’t find pow). The possibilities arise from the statement that one C++ header may include another, which grants permission to the implementation to give this program either of just two possible interpretations.
Several similar rules (e.g., that a template must have at least one valid potential specialization) are described as rendering the program ill-formed, no diagnostic required, but in this situation that freedom is not extended to the implementation (which is arguably preferable). That said, an implementation is allowed to process an ill-formed program in an arbitrary fashion so long as it issues at least one diagnostic message, so it’s not completely unreasonable to group this situation with true undefined behavior even though the symptoms differ usefully in practice.
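Whatever the formal classification, the portable fix is simply to include every header whose declarations the program uses, so that well-formedness no longer depends on which standard headers happen to include which others:

#include <cmath>     // declares std::pow
#include <iostream>  // declares std::cout

int main()
{
    std::cout << std::pow(10, 2) << '\n';
}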
The [[nodiscard]] attribute was introduced in the C++17 standard, and in the case of the
... potentially-evaluated discarded-value expression,..., implementations are encouraged to issue a warning in such cases.
Source: n4659, C++17 final working draft.
Similar phrasing is used on cppreference: in case of a "violation",
the compiler is encouraged to issue a warning.
Why is the word encouraged used instead of required? Are there situations (except the explicit cast to void) when a compiler is better off not issuing a warning? What is the reason behind softening the standard's language in this particular case of a relatively safe requirement to issue a warning no matter what (again, except, say, an explicit cast to void)?
The C++ standard specifies the behavior of a valid C++ program. In so doing, it also defines what "valid C++ program" means.
Diagnostics are only required for code which is ill-formed, code which is syntactically or semantically incorrect (and even then, there are some ill-formed circumstances that don't require diagnostics). Either the code is well-formed, or it is ill-formed and (usually) a diagnostic is displayed.
So the very idea of a "warning" is just not something the C++ standard recognizes, or is meant to recognize. Notice that even the "implementations are encouraged to issue a warning" statement is a non-normative note, rather than a normative specification of behavior.
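For completeness, a small C++17 sketch of my own of the behaviour under discussion:

[[nodiscard]] int compute()
{
    return 42;
}

int main()
{
    compute();          // discarded-value expression: a warning is encouraged,
                        // but not required, here
    (void)compute();    // the explicit cast to void expresses intent and is the
                        // case the encouragement explicitly excludes
}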
For an ill-formed C++ program like:
foo^##$bar%$
Is it standard-compliant for a compiler to yield a compiled binary together with a diagnostic message, rather than aborting the compilation as g++/clang++ do?
[intro.compliance] states that:
If a program contains a violation of any diagnosable rule or an
occurrence of a construct described in this Standard as
“conditionally-supported” when the implementation does not support
that construct, a conforming implementation shall issue at least one
diagnostic message.
which does not require a compilation error in this case.
Possibly related:
What is the C++ compiler required to do with ill-formed programs according to the Standard?
Ill-Formed, No Diagnostic Required (NDR): ConstExpr Function Throw in C++14
Yes, it is legal for the implementation to produce a binary when the input is an ill-formed program. Here is [intro.compliance]/8 in C++14:
A conforming implementation may have extensions (including additional library functions), provided they do
not alter the behavior of any well-formed program. Implementations are required to diagnose programs that
use such extensions that are ill-formed according to this International Standard. Having done so, however,
they can compile and execute such programs.
In such cases the diagnostic would usually be referred to as a "warning" (as opposed to "error").
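For instance (a hedged example of my own), variable-length arrays are a g++/clang++ extension and not part of ISO C++, so the program below is ill-formed; with -pedantic the compilers issue the required diagnostic as a warning and still produce a binary:

#include <cstdio>

int main()
{
    int n = 4;
    int a[n];                       // VLA: not ISO C++, accepted as an extension
    for (int i = 0; i < n; ++i)
        a[i] = i;
    std::printf("%d\n", a[n - 1]);
}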
Could C++ standards gurus please enlighten me:
Since which C++ standard version does a statement like this fail, given that (v) seems to be treated as equivalent to (*&v)?
For example, code like:
#define DEC(V) ( ((V)>0)? ((V)-=1) : 0 )
...{...
register int v=1;
int r = DEC(v) ;
...}...
This now produces warnings under -std=c++17 like:
cannot take address of register variable
left hand side of operand must be lvalue
Many C macros enclose ALL macro parameters in parentheses, of which the above is meant only to be a representative example.
The actual macros that produce warnings are for instance
the RTA_* macros in /usr/include/linux/rtnetlink.h.
Short of not using/redefining these macros in C++, is there any workaround?
If you look at the revision summary of the latest C++1z draft, you'll see this in [diff.cpp14.dcl.dcl]:
[dcl.stc]
Change: Removal of register storage-class-specifier.
Rationale: Enable repurposing of deprecated keyword in future
revisions of this International Standard.
Effect on original feature: A valid C++ 2014 declaration utilizing the register
storage-class-specifier is ill-formed in this International Standard.
The specifier can simply be removed to retain the original meaning.
The warning may be due to that.
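A sketch of the fix that rationale suggests, assuming the macro itself can stay unchanged: drop register, which had no effect in C++ anyway, and the code compiles cleanly as C++17:

#define DEC(V) ( ((V)>0)? ((V)-=1) : 0 )

int main()
{
    int v = 1;              // was: register int v = 1;
    int r = DEC(v);
    return r;               // 0
}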
register is no longer a storage class specifier; you should remove it. Compilers may not be issuing the right errors or warnings, but your code should not contain register to begin with.
The following is a quote from the standard informing people what they should do with regard to register in their code (relevant part emphasized). You probably have an old version of that file.
C.1.6 Clause 10: declarations [diff.dcl]
Change: In C++, register is not a storage class specifier.
Rationale: The storage class specifier had no effect in C++.
Effect on original feature: Deletion of semantically well-defined feature.
Difficulty of converting: Syntactic transformation.
How widely used: Common.
Your worry is unwarranted since the file in question does not actually contain the register keyword:
grep "register" /usr/include/linux/rtnetlink.h
outputs nothing. Either way, you shouldn't be receiving the warning since:
System headers don't emit warnings by default, at least in GCC
It isn't wise to try to compile a file that belongs to a systems project like the linux kernel in C++ mode, as there may be subtle and nasty breaking changes
Just include the file normally or link the C code into your C++ binary. If you really are getting a warning that should normally be suppressed, report a bug to your compiler vendor.
Most people are familiar with the "undefined" and "unspecified" behaviour notes in C++, but what about "no diagnostic required"?
I note this question and answer, dealing with ill formed programs, but not much detail on the root of "no diagnostic required" statements.
What is the general approach applied by the committee when classifying something as "no diagnostic required"?
How bad does the error need to be for the standards committee to specify it as such?
Are these errors of such a nature that they would be nearly impossible to detect, and hence to diagnose?
Examples of "undefined" and "unspecified" behaviour are not in short supply; short of the ODR, what practical example(s) are there of the "no diagnostic required" type of error?
There was a discussion here: https://groups.google.com/a/isocpp.org/forum/#!topic/std-discussion/lk1qAvCiviY with utterances by various committee members.
The general consensus appears to be that:
there is no normative difference;
"ill-formed; no diagnostic required" is used only for compile-time rule violations, never for runtime rule violations.
As I said in that thread, I did once hear in a discussion (I can't remember anymore which one, but I'm certain there were insightful committee members involved) that the split is roughly:
"ill-formed; no diagnostic required" is for cases that clearly are bad rule violations, that can in principle be diagnosed at compile time, but that would require huge effort from an implementation;
undefined behavior is for things that implementations could find useful meanings for, so they are not necessarily pure evil, and for any runtime violation that results in arbitrary consequences.
The rough guide for me is: if it is at compile time, it tends to be "ill-formed; no diagnostic required", and if it is at runtime, it is always "undefined behavior".
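One concrete compile-time example that fits this rough guide (my own sketch, based on the C++14 rule in [dcl.constexpr] that a constexpr function with no possible constant-expression invocation is ill-formed, no diagnostic required):

// No argument value can make an invocation of this function a constant
// expression (it always throws), so a program containing it is ill-formed,
// NDR; a compiler may accept it silently or reject it, as it sees fit.
constexpr int always_throws(int)
{
    return (throw 0, 0);
}

int main()
{
    // constexpr int x = always_throws(1);  // would be rejected: not a
                                            // constant expression
    return 0;
}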
I will try to explain "no diagnostic required" for behaviours categorized as undefined behaviour (UB).
By saying "UB doesn't require a diagnostic"(1), the Standard gives compilers total freedom to optimize the code: the compiler can eliminate many overheads only by assuming that your program is completely well-defined (i.e. that it contains no UB). That is a good assumption, because if it turns out to be wrong, then anything the compiler does based on it will behave in an undefined (i.e. unpredictable) way, which is entirely consistent, since your program has undefined behaviour anyway!
Note that a program which contains UB is free to behave in any way whatsoever. Note again that I said "consistent", because it is consistent with the Standard's stance: neither the language specification nor the compilers give any guarantee about your program's behaviour if it contains UB.
1. The opposite is "diagnostic required", which means the compiler is required to provide a diagnostic to the programmer, by emitting either a warning or an error message. In other words, it is not allowed to silently assume the program is well-defined so as to optimize certain parts of the code.
Here is an article (on LLVM blog) which explains this further using example:
Advantages of Undefined Behavior in C, with Examples (part one)
An excerpt from the article (emphasis mine):
Signed integer overflow: If arithmetic on an 'int' type (for example)
overflows, the result is undefined. One example is that "INT_MAX+1" is
not guaranteed to be INT_MIN. This behavior enables certain classes of
optimizations that are important for some code. For example, knowing
that INT_MAX+1 is undefined allows optimizing "X+1 > X" to "true".
Knowing the multiplication "cannot" overflow (because doing so would
be undefined) allows optimizing "X*2/2" to "X". While these may seem
trivial, these sorts of things are commonly exposed by inlining and
macro expansion. A more important optimization that this allows is for
"<=" loops like this:
for (i = 0; i <= N; ++i) { ... }
In this loop, the compiler can assume that the loop will iterate
exactly N+1 times if "i" is undefined on overflow, which allows a
broad range of loop optimizations to kick in. On the other hand, if
the variable is defined to wrap around on overflow, then the compiler
must assume that the loop is possibly infinite (which happens if N is
INT_MAX) - which then disables these important loop optimizations.
This particularly affects 64-bit platforms since so much code uses
"int" as induction variables.
It is worth noting that unsigned overflow is guaranteed to be defined
as 2's complement (wrapping) overflow, so you can always use them. The
cost to making signed integer overflow defined is that these sorts of
optimizations are simply lost (for example, a common symptom is a ton
of sign extensions inside of loops on 64-bit targets). Both Clang and
GCC accept the "-fwrapv" flag which forces the compiler to treat
signed integer overflow as defined (other than divide of INT_MIN by
-1).
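To tie the excerpt to code, here is a tiny sketch of my own of the "X+1 > X" case: because signed overflow is undefined, an optimizing compiler may fold the function below to return true unconditionally, whereas building with -fwrapv forces the comparison to be performed:

#include <iostream>

bool next_is_greater(int x)
{
    return x + 1 > x;   // the compiler may assume x + 1 never overflows
}

int main()
{
    std::cout << std::boolalpha << next_is_greater(42) << '\n';
}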
I would recommend reading the entire article; it has three parts, and all of them are good.
part two
part three
Hope that helps.