Why is gcc's option "-Wstrict-prototypes" not valid for C++? - c++

Here is a warning I, and lots of people out there on the web, see when running gcc on C++ code:
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
The warning text is very clear: 'C++' is not in the set [Ada/C/ObjC], so I have no question about why gcc gives this warning when compiling C++ code. (FYI, the reason we have this flag turned on despite having C++ code is that the code base is mostly C, we have chosen a strict (high) level of warning options, and we've added a bit of C++ code.)
My question is: Why isn't this warning valid for C++?
The gcc documentation for the warning option, from http://gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc/Warning-Options.html, is:
-Wstrict-prototypes (C and Objective-C only)
Warn if a function is declared or defined without specifying the argument types. (An old-style function definition is permitted without a warning if preceded by a declaration which specifies the argument types.)
Now I just know I'm forgetting something obvious about C++, but doesn't C++ also require specifying argument types for functions in a prototype? True that those function prototypes are often in class declarations because the functions are often member functions, but aren't prototypes nevertheless required? Or even if they're just good practice, then why wouldn't gcc offer support by this option? Or if not, by a parallel option for C++?

I suppose it's because C++ requires strict prototypes as part of the language, so the option is superfluous. Why that means GCC needs to complain about it is beyond me.
I have that option set in my build script for small sample/test C or C++ programs, and the warning kind of irritates me - it seems like there's no reason to warn just because the default behavior for a language is what I'm asking for. But it's there, so one day when it irritates me enough I'll fix my script to not bother with that option for C++ builds.

It's required by the C++ standard so there's no meaning of turning it on or off: It's always on in the language.

It is implicit in C++ because declaring/defining a function without specifying the argument types is illegal C++ by the standard (yes, this is one of the differences between C and C++ which makes C++ not a true superset).
This is legal C99, but not legal C++03:
void foo(x, y)
    int x;
    char *y;
{
    // ...
}
GCC gives a warning for this in C if compiled with -Wstrict-prototypes.

Another interesting special case:
extern int foo();
By C semantics this declaration specifies an incomplete type for foo, as a function where the number and type of arguments remains unspecified. This is nevertheless a fully valid declaration in C99/C11; however -Wstrict-prototypes forces a warning for this valid declaration in C.
By C++ semantics, this declaration specifies a complete type for foo, as a function that takes no arguments (i.e., it is equivalent to extern int foo(void)). Hence -Wstrict-prototypes is not relevant for this case in C++.

Related

What does the ms-extensions flag do exactly with gcc?

GCC has a flag -fms-extensions.
What does this flag do exactly? Why is it sometimes on by default, and why does it exist?
According to the gcc 9.1.0 source code (grepped for flag_ms_extensions) the effects are:
(C) Allow Microsoft's version of anonymous unions and struct. This includes support for C11 anonymous unions and structs as well as Microsoft-specific flavours, including omitting the braced member list entirely, and placing members in the parent namespace even if the struct/union had an identifier.
(C++) Allow a class member to have the same name as its type (e.g. using foo = int; struct A { foo foo; };). With ms-extensions disabled, such code is rejected in C++ with the error declaration of NAME changes meaning of NAME, although it remains accepted in C (where it is legal) and, unless the -pedantic flag was given, inside an extern "C" block.
(C++) Allow implicit int; any situation that would have produced the diagnostic ISO C++ forbids declaration of NAME with no type is now allowed, with int assumed as the type. Examples: const *p; or const f();.
(C++) Allow implicit conversion from a qualified-id naming a non-static member function, to a pointer-to-member. In ISO C++ the & operator is required to perform that conversion.
(C++) Allow &f to form a pointer-to-member, if f (an unqualified-id) names a non-overloaded member function in that context. ISO C++ requires explicit qualification with the class name.
The flag is turned on by default if the Target ABI is a Microsoft ABI. It can be disabled by manually specifying -fno-ms-extensions.
The rationale behind this is a tougher question. The documentation has this to say:
Accept some non-standard constructs used in Microsoft header files.
Disable Wpedantic warnings about constructs used in MFC.
So I assume the rationale is to allow g++ to build MFC applications which depend on non-standard code in MSVC vendor-supplied headers.
I am not sure how relevant that still is in 2019 and I think a good case could be made for gcc to default to having this flag turned off. (Users can always specify it if they want to build an old MFC app).
For example MSVC 19.xx (the latest version to date) no longer allows the last three bullet points in its default mode. (It does still allow foo foo; even with /Za flag).

Why is C++ more restrictive regarding forward declaration of function prototypes (signatures)? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I noticed that C++ is more restrictive than C about declaring function signatures before use, even when the function definition appears after the function that calls it.
I always thought that C is more restrictive but it seems like this is not the case.
Why has the philosophy changed when making the standards for C++ programming language?
For example, the following code compiles fine with gcc but produces an error when compiled with g++:
#include <stdio.h>
int main()
{
    int a = sum(4, 6);
    printf("%d", a);
    return 0;
}
int sum(int a, int b)
{
    return a + b;
}
The error is
‘sum’ was not declared in this scope
In older C standards (before C99), there is a thing called "implicit function declaration", which was removed in C99.
So if you compile in C90 mode, a compiler has to support that "feature". Whereas in C++, "implicit function declaration" has never been there. So GCC errors out. Your code is not valid in modern C (C99 or later) either.
Compile with stricter compiler switches (e.g. -std=c99 -Wall -Wextra -pedantic-errors) and pay attention to all diagnostics.
I always thought that C is more restrictive but it seems like this is not the case.
You have it backward. In nearly all the places where C++ isn't a superset of C, it is because C++ is more restrictive. The C++ type system is stricter than the C type system, legacy features like the one you tripped over ("implicit declaration") are removed, there are many more reserved words, etc.
It is true that C++ has many more features than C, but you mustn't confuse the number of features a language has with a lack of restrictions. A major aspect of the design philosophy of the ML/Haskell language family is to provide lots and lots of features but also lots and lots of strictness.
C originally allowed functions to be called without having been declared, allowing them to be "defined later". If you didn't declare the function, the compiler simply made up a calling convention: "OK, I don't know what this function returns, so let's guess that it returns int." Similar assumptions were made for the parameters, whose types you could optionally specify. Old "K&R style" C functions looked like
int func (a, b)
    int a;
    int b;
{
    ...
}
To force the type of the parameters, you would have to use so-called "prototype format", with a forward declaration like:
int func (int a, int b); // function prototype since the parameter types are explicit
All of this implicit function declaration behavior was plain dangerous nonsense and led to fatal bugs. Yet the dangerous behavior was only partially phased out in the 1990 standardization: the compiler was still allowed to make implicit assumptions about a function if no prototype was visible. (For example, this is why malloc used to freak out completely if you forgot to include stdlib.h.)
This is why your code compiles: you are using an old version of gcc (4.x or older), which defaults to -std=gnu90 (the 1990 C standard plus non-standard extensions). Newer versions of gcc, 5.0 or later, default to -std=gnu11, the current C standard (C11) plus non-standard extensions.
C++ never allowed this behavior, and C fixed it too, with the C99 standard in 1999. Even with an old gcc compiler, you should be able to compile with gcc -std=c99 -pedantic-errors, which means "actually follow the C standard, 1999 version". Then you get compiler errors if no proper function declaration/definition is visible before a call.
There are many reasons. One of them is function overloading:
void func(double);
// void func(int);
int main()
{
    func(1);
}
If I uncomment the line with void func(int x);, it will be called, otherwise 1 will be promoted to double and void func(double) will be called.
When a C compiler sees a call to a function that it doesn't know about, it guesses what the return value and the parameter types should be. The return type is guessed as int, and the parameter types are guessed to be the same as the value passed in after applying the "usual promotions".
So if you just call
double result = cube (1);
the compiler guesses that the function "cube" has one argument of type int, and returns int.
What happens if that "guess" is wrong? Tough. You have undefined behaviour, your code may crash or worse.
Because of this "guessing", the call sum(4, 6) is allowed in C, and because the actual function happens to have all the right types (two arguments of type int, a return value of type int), it actually works. But this is obviously a very dangerous thing to do.
Because it is so dangerous, C++ doesn't allow implicit declarations (that is, the C++ compiler isn't allowed to guess the argument types). That's why your code doesn't compile.
There are a few languages around nowadays where the compiler doesn't need a function to be declared before it is used.

C vs C++ function questions

I am learning C, and after starting out learning C++ as my first compiled language, I decided to "go back to basics" and learn C.
There are two questions that I have concerning the ways each language deals with functions.
Firstly, why does C "not care" about the scope that functions are defined in, whereas C++ does?
For example,
int main()
{
    donothing();
    return 0;
}
void donothing() { }
the above will not compile in a C++ compiler, whereas it will compile in a C compiler. Why is this? Isn't C++ mostly just an extension on C, and should be mostly "backward compatible"?
Secondly, the book that I found (Link to pdf) does not seem to state a return type for the main function. I check around and found other books and websites and these also commonly do not specify return types for the main function. If I try to compile a program that does not specify a return type for main, it compiles fine (although with some warnings) in a C compiler, but it doesn't compile in a C++ compiler. Again, why is that? Is it better style to always specify the return type as an integer rather than leaving it out?
Thanks for any help, and just as a side note, if anyone can suggest a better book that I should buy that would be great!
Firstly, why does C "not care" about the scope that functions are defined in, whereas C++ does?
Actually, C does care. It’s just that C89 allows implicitly declared functions, inferring the return type as int and the parameters from usage. C99 no longer allows this.
So in your example it’s as if you had declared a prototype as
int donothing();
The same goes for implicit return types: missing return types are inferred as int in C89 but not C99. Compiling your code with gcc -std=c99 -pedantic-errors yields something similar to the following:
main.c: In function 'main':
main.c:2:5: error: implicit declaration of function 'donothing' [-Wimplicit-function-declaration]
main.c: At top level:
main.c:5:6: error: conflicting types for 'donothing'
main.c:2:5: note: previous implicit declaration of 'donothing' was here
For the record, here’s the code I’ve used:
int main() {
    donothing();
    return 0;
}
void donothing() { }
It's because C++ supports optional parameters. When C++ sees donothing(); it can't tell if donothing is:
void donothing(void);
or
void donothing(int j = 0);
It has to pass different parameters in these two cases. It's also because C++ is more strongly typed than C.
int main() {
    donothing();
    return 0;
}
void donothing() { }
Nice minimum working example.
With gcc 4.2.1, the above code gets a warning regarding the conflicting types for void donothing() with default compiler settings. That's what the C89 standard says to do with this kind of problem. With clang, the above code fails on void donothing(). The C99 standard is a bit stricter.
It's a good idea to compile your C++ code with warnings enabled and set to a high threshold. This becomes even more important in C. Compile with warnings enabled and treat implicit function declarations as an error.
Another difference between C and C++: in C++ there is no difference between the declarations void donothing(void); and void donothing();. There is a huge difference between the two in C: the first declares a function that takes no parameters, while the second declares a function whose arguments are unspecified.
Never use donothing() to specify a function that takes no arguments. The compiler has no choice but to accept donothing(1,2,3) with this form. It knows to reject donothing(1,2,3) when the function is declared as void donothing(void).
The above will not compile in a C++ compiler, whereas it will compile in a C compiler. Why is this?
Because C++ requires a declaration (or definition) of the function to be in scope at the point of the call.
Isn't C++ mostly just an extension on C
Not exactly. It was originally based on a set of C extensions, and it refers to the C standard (with a few modifications) for the definitions of the contents of standard headers from C. The C++ "language itself" is similar to C but is not an extension of it.
and should be mostly "backward compatible"?
Emphasis on "mostly". Most C features are available in C++, and a lot of the ones removed were to make C++ a more strictly typed language than C. But there's no particular expectation that C code will compile as C++. Even when it does, it doesn't always have the same meaning.
I check around and found other books and websites and these also commonly do not specify return types for the main function
The C and C++ standards have always said that main returns int.
In C89, if you omit the return type of a function it is assumed to be int. C++ and C99 both lack this implicit int return type, but a lot of C tutorial books and tutorials (and compilers and code) still use the C89 standard.
C has some allowances for implementations to accept other return types, but not for portable programs to demand them. Both languages have a concept of a "freestanding implementation", which can define program entry and exit any way it likes -- again, because this is specific to an implementation it's not suitable for general teaching of C.
IMO, even if you're going to use a C89 compiler it's worth writing your code to also be valid C99 (especially if you have a C99 compiler available to check it). The features removed in C99 were considered harmful in some way. It's not worth even trying to write code that's both C and C++, except in header files intended for inter-operation between the languages.
I decided to "go back to basics" and learn C.
You shouldn't think of C as a prerequisite or "basic form" of C++, because it isn't. It is a simpler language, though, with fewer features for higher-level programming. This is often cited as an advantage of C by users of C. And an advantage of C++ by users of C++. Sometimes those users are the same people using the languages for different purposes.
Typical coding style in C is different from typical coding style in C++, and so you might well learn certain basics more readily in C than in C++. It is possible to learn low-level programming using C++, and the code you write when you do so may or may not end up looking a lot like C code.
So, what you learn while learning C may or may not inform the way you write C++. If it does, that may or may not be for the better.
C++ has changed these rules on purpose, to make C++ a more typesafe language.
C.1.4 Clause 5: expressions [diff.expr]
5.2.2
Change: Implicit declaration of functions is not allowed
Rationale: The type-safe nature of C++.
Effect on original feature: Deletion of semantically well-defined feature. Note: the original feature was
labeled as “obsolescent” in ISO C.
Difficulty of converting: Syntactic transformation. Facilities for producing explicit function declarations
are fairly widespread commercially.
How widely used: Common.
You can find other similar changes in appendix C of this Draft C++ standard
Isn't C++ mostly just an extension on C
No. If you think of C++ as "C with Classes", you're doing it very, very wrong. Whilst strictly, most valid C is valid C++, there's virtually no good C that's good C++. The reality is that good C++ code is vastly different to what you'd see as good C code.
Firstly, why does C "not care" about the scope that functions are
defined in, whereas C++ does?
Essentially, because not enforcing the same rules as C++ makes doing this in C hideously unsafe and in fact, nobody sane should ever do that. C99 tightened this up, along with implicit-int and other defects in the C language.

Shouldn't you always need to define functions before using them in a C file?

I have the following bit of C code:
int main() {
    myFunctionABC(2);
    return 0;
}
void myFunctionABC(int n) {
    printf("%d\n", n);
}
So... this code is working and I don't understand why. I always thought that a C compiler would always need every referred function to be already "known", otherwise would fail the compilation process.
Why is this working?
There has never been any requirement to define functions before calling them in C or in C++ (as the title of your question suggests). What is required in C++ and C99 (and in some cases in C89/90) is to declare functions before calling them.
As for your code... Your code is not "working". The best you can hope for is that your code will produce undefined behavior that will just happen to resemble "working".
Firstly, the code will not even compile as C++ or as C99 (and you tagged your question as both C and C++). C++ and C99 unconditionally require functions to be declared before they are called.
Secondly, with C89/90 compiler the code might compile, but will produce the aforementioned undefined behavior anyway. Even in C89/90 calling variadic functions (like printf) without declaring them first is illegal - it produces undefined behavior.
For non-variadic functions, calling them without declaring them is OK; the implicit declaration rules of C89/90 take care of that. But these rules make the compiler conclude that your undeclared myFunctionABC function returns an int, while in reality you defined it as returning void; this discrepancy leads to undefined behavior as well. Most self-respecting compilers will at least warn you about the problem.
gcc rightfully complains:
make 4356180
4356180.c:6: warning: conflicting types for ‘myFunctionABC’
4356180.c:2: note: previous implicit declaration of ‘myFunctionABC’ was here
If I add -Wall, I get these at well:
make CFLAGS=-Wall 4356180
4356180.c: In function ‘main’:
4356180.c:2: warning: implicit declaration of function ‘myFunctionABC’
It is not erroneous to declare a function prototype before actually calling it, even if its definition comes later; in fact it is good practice, since it lets the compiler check the call. See this similar post.

How does this function definition work?

I generated a hash function with gperf a couple of days ago. What I saw in the generated hash function was alien to me. It looked something like this (I don't remember the exact syntax):
unsigned int
hash(str, size)
    register char *str;
    register unsigned int size;
{
    //Definition
}
Now, when I tried to compile with a C++ compiler (g++) it threw errors at me for not having str and size declared. But this compiled on the C compiler (gcc). So, questions:
I thought C++ was a superset of C. If its so, this should compile with a C++ compiler as well right?
How does the C compiler understand the definition? str and size are undeclared when they first appear.
What is the purpose of declaring str and size after function signature but before function body rather than following the normal approach of doing it in either of the two places?
How do I get this function to compile on g++ so I can use it in my C++ code? Or should I try generating C++ code from gperf? Is that possible?
1. C++ is not a superset of C, although this is not standard C either.
2/3. This is a K&R-style function definition. See What are the major differences between ANSI C and K&R C?
4. gperf does in fact have an option, -L, to specify the output language. You can just use -L C++ to generate C++.
The Old C syntax for the declaration of a function's formal arguments is still supported by some compilers.
For example
int func (x)
    int x;
{
}
is old style (K&R style) syntax for defining a function.
I thought C++ was a superset of C. If its so, this should compile with a C++ compiler as well right?
Nopes! C++ is not a superset of C. This style(syntax) of function declaration/definition was once a part of C but has never been a part of C++. So it shouldn't compile with a C++ compiler.
This appears to be "old-school" C code. Declaring the types of the parameters outside the parentheses but before the opening curly brace of the function body is a relic of the early days of C programming (I'm not sure why, but I guess it has something to do with variable management on the stack and/or compiler design).
To answer your questions:
Calling C++ a "superset" of C is somewhat of a misnomer. While they share basic syntax, and you can even make all sorts of C library calls from C++, they have striking differences with respect to type safety, warnings vs. errors (C is more permissive), and compiler/preprocessor options.
Most contemporary C compilers understand legacy code (such as this appears to be). The C compiler holds the function parameter names sort of like "placeholders" until their type can be declared immediately following the function header name.
No real "purpose" other than again, this appears to be ancient code, and the style back in the day was like this. The "normal" approach is IMO the better, more intuitive way.
My suggestion:
unsigned int hash(register char *str, register unsigned int size)
{
    // Definition
}
A word of advice: Consider abandoning the register keyword. It was used in old C programs as a way of asking for the variable to be stored in a CPU register (for speed/efficiency), but nowadays compilers optimize this themselves, and modern compilers generally ignore the hint. Also, in C you cannot use the & (address-of) operator on a register variable; C++ does allow it, and C++17 removed the register keyword's meaning entirely.