Why does this not return an error for a duplicate function definition, since the C++ standard library uses external linkage?
This shouldn't be function overloading, since its type is double(double, double), which is exactly the same as the pow declared in math.h:
#include <iostream>
#include <math.h>

double pow(double base, double exponent) {
    return 1;
}

int main() {
    std::cout << pow(2, 2);
}
The function signature pow(double, double) is reserved to the language implementation in the global namespace. By defining a reserved name, the behaviour of the program is undefined.
Since the behaviour of the program is undefined, you are not guaranteed to get an error for a duplicate definition.
Related standard rules (from latest draft):
[reserved.names.general]
If a program declares or defines a name in a context where it is reserved, other than as explicitly allowed by [library], its behavior is undefined.
[extern.names]
Each name from the C standard library declared with external linkage is reserved to the implementation for use as a name with extern "C" linkage, both in namespace std and in the global namespace.
Each function signature from the C standard library declared with external linkage is reserved to the implementation for use as a function signature with both extern "C" and extern "C++" linkage, or as a name of namespace scope in the global namespace.
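As an aside, a minimal sketch of the conforming alternative: put your own pow-like function in your own namespace, so the reserved global signature is never touched (the namespace name my below is made up for illustration).
#include <iostream>
#include <cmath>

namespace my { // hypothetical namespace, not part of the original question
    double pow(double base, double exponent) {
        return 1; // our own behaviour; the reserved ::pow signature is untouched
    }
}

int main() {
    std::cout << my::pow(2, 2) << '\n';  // 1, our function
    std::cout << std::pow(2, 2) << '\n'; // 4, the library function
}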
Why does this not return an error for duplicate function definition...?
Because the definition of pow provided by the C library does not need to be in the included header file. The header (math.h) may contain only its declaration. In such a case the resulting translation unit may look like:
...
double pow(double, double);
...

double pow(double base, double exponent)
{
    return 1;
}

int main()
{
    std::cout << pow(2, 2);
}
There is absolutely nothing wrong with this translation unit. The problem occurs during linking: at the machine-code level the linker now has two candidate definitions for your pow call, the one provided by the C library and the one in the object file created from this translation unit.
What the linker will actually do is not defined by the C++ Standard. The Standard only says that the behavior of your code is undefined.
Related
#include <iostream>
#include <cmath>

/* Intentionally incorrect abs() which seems to override std::abs() */
int abs(int a) {
    return a > 0 ? -a : a;
}

int main() {
    int a = abs(-5);
    int b = std::abs(-5);
    std::cout << a << std::endl << b << std::endl;
    return 0;
}
I expected the output to be -5 and 5, but the output is -5 and -5.
Why does this happen? Does it have anything to do with the use of std?
The language specification allows implementations to implement <cmath> by declaring (and defining) the standard functions in the global namespace and then bringing them into namespace std by means of using-declarations. It is unspecified whether this approach is used:
20.5.1.2 Headers
4 [...] In the C++ standard library, however, the declarations (except for names which are defined as macros in C) are within namespace scope (6.3.6) of the namespace std. It is unspecified whether these names (including any overloads
added in Clauses 21 through 33 and Annex D) are first declared within the global namespace scope and are then injected into namespace std by explicit using-declarations (10.3.3).
Apparently, you are dealing with one of the implementations that follow this approach (e.g. GCC), i.e. your implementation provides ::abs, while std::abs simply "refers" to ::abs.
One question that remains in this case is why, in addition to the standard ::abs, you were able to declare your own ::abs, i.e. why there's no multiple-definition error. This might be caused by another feature provided by some implementations (e.g. GCC): they declare standard functions as so-called weak symbols, thus allowing you to "replace" them with your own definitions.
These two factors together create the effect you observe: weak-symbol replacement of ::abs also results in replacement of std::abs. How well this agrees with the language standard is a different story... In any case, don't rely on this behavior - it is not guaranteed by the language.
In GCC this behavior can be reproduced by the following minimalistic example. One source file
#include <iostream>
void foo() __attribute__((weak));
void foo() { std::cout << "Hello!" << std::endl; }
Another source file
#include <iostream>
void foo();
namespace N { using ::foo; }
void foo() { std::cout << "Goodbye!" << std::endl; }
int main()
{
    foo();
    N::foo();
}
In this case you will also observe that the new definition of ::foo ("Goodbye!") in the second source file also affects the behavior of N::foo. Both calls will output "Goodbye!". And if you remove the definition of ::foo from the second source file, both calls will dispatch to the "original" definition of ::foo and output "Hello!".
The permission given by the above 20.5.1.2/4 is there to simplify implementation of <cmath>. Implementations are allowed to simply include C-style <math.h>, then redeclare the functions in std and add some C++-specific additions and tweaks. If the above explanation properly describes the inner mechanics of the issue, then a major part of it depends on replaceability of weak symbols for C-style versions of the functions.
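A user-level model of that "declare globally, then inject into a namespace with a using-declaration" mechanism (the function half and namespace mylib are made up, since user code must not add names to std):
#include <iostream>

// A "C-style" function living in the global namespace.
double half(double x) { return x / 2; }

// Inject it into a namespace via a using-declaration, the same mechanism
// the wording quoted above permits an implementation to use for namespace std.
namespace mylib {
    using ::half;
}

int main() {
    std::cout << half(4.0) << '\n';        // 2, the global name
    std::cout << mylib::half(4.0) << '\n'; // 2, the very same function seen through the using-declaration
}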
Note that if we simply globally replace int with double in the above program, the code (under GCC) will behave "as expected" - it will output -5 5. This happens because the C standard library does not have an abs(double) function. By declaring our own abs(double), we do not replace anything.
But if, after switching from int to double, we also switch from abs to fabs, the original weird behavior will reappear in its full glory (output -5 -5).
This is consistent with the above explanation.
Your code causes undefined behaviour.
C++17 [extern.names]/4:
Each function signature from the C standard library declared with external linkage is reserved to the implementation for use as a function signature with both extern "C" and extern "C++" linkage, or as a name of namespace scope in the global namespace.
So you cannot define a function with the same prototype as the standard C library function int abs(int);, regardless of which headers you actually include or whether those headers also put C library names into the global namespace.
However, it would be allowed to overload abs if you provide different parameter types.
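For example, a sketch of a permitted overload on a distinct parameter type (the type Fixed is made up for illustration):
#include <cstdlib>   // std::abs(int)
#include <iostream>

struct Fixed { int raw; };  // made-up type for illustration

// A new overload on a user-defined type: it reuses no C library signature,
// so nothing reserved is being redefined.
Fixed abs(Fixed f) {
    return Fixed{ f.raw < 0 ? -f.raw : f.raw };
}

int main() {
    std::cout << std::abs(-5) << '\n';         // 5, the library overload
    std::cout << abs(Fixed{ -5 }).raw << '\n'; // 5, our overload
}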
Consider the following code:
#include <iostream>
#include <math.h>

double log(double) { return 42; }

int main() {
    std::cout << log(1) << std::endl;
}
When building the debug version, all the compilers I used (MSVC, GCC, Clang) print 42.
But when I build (and run) in release mode I get:
a compilation error in MSVC: error C2169: 'log' : intrinsic function, cannot be defined;
42 printed by GCC;
0 printed by Clang.
Why do the release and debug results differ for the same compiler?
Why do different compilers give different results in release mode?
You are defining a function that is already declared in <math.h> with external linkage.
C11 standard, §7.12.6.7:
#include <math.h>
double log(double x);
§7.1.2:
Any declaration of a library function shall have external linkage.
[extern.names]/3:
Each name from the Standard C library declared with external linkage is reserved to the implementation for use as a name with extern "C" linkage, both in namespace std and in the global namespace.
According to [reserved.names]/2 the behavior is undefined; the implementation can thus do what it wants, including issuing nonsensical error messages.
So according to the standard (17.6.1.2.4):
In the C++ standard library, however, the declarations (except for names which are defined as macros in C) are within namespace scope (3.3.6) of the namespace std. It is unspecified whether these names are first declared within the global namespace scope and are then injected into namespace std by explicit using-declarations (7.3.3).
It's unspecified whether the log() in math.h (really cmath) is in namespace std or not. If it is (as it is for GCC's libstdc++), then calling log(1) quite simply calls your function, because the other one is named std::log(). But apparently Clang puts it in the global namespace. Since there is a
template <typename T> double log(T x);
That will be preferred over yours since you're passing an int, so on Clang it will call that one. (I can't check this right now since I can't access coliru and don't have Clang installed, but this is a best guess.)
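A small model of that guess (the name value is made up; this is not the actual library machinery): a function template that matches an int argument exactly is preferred over a non-template double overload.
#include <iostream>

double value(double) { return 42; }  // stand-in for the user's double overload

template <typename T>
double value(T) { return 0; }        // stand-in for the extra template overload

int main() {
    std::cout << value(1) << '\n';   // 0: value<int> is an exact match and wins
    std::cout << value(1.0) << '\n'; // 42: both match exactly, so the non-template is preferred
}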
I'm a newbie to C++. I don't understand why it is okay (i.e. why the compiler allows it) for one function to be declared twice. For example, the following code is legal:
#include <iostream>
#include <string>

int hello();
int hello();

int main() {
    std::cout << "hello, world" << std::endl;
}

int hello() {
    return 1;
}
Why does the compiler not complain?
In C and C++ forward declarations are very weak. They provide a formal "promise" to the compiler that if a function with that name ever shows up, it will have the signature that you specify. The function is not even guaranteed to appear: unless you call or otherwise reference the declared function, the compiler is not going to complain that there is a declaration with no definition. The standard requires compilers to treat identical forward declarations as referring to one and the same function.
Unlike definitions, which must be unique according to the one-definition rule
3.2 No translation unit shall contain more than one definition of any variable, function, class type, enumeration type, or template
declarations are merely required to refer to the same entity, i.e. to be equivalent to each other:
3.3.4 Given a set of declarations in the same declarative region, each of which specifies the same unqualified name, they shall all refer to the same entity, or all refer to functions or function templates, [...]
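In code, a minimal sketch of both rules:
int hello();   // declaration
int hello();   // identical redeclaration: fine, refers to the same entity
int hello();   // still fine

int hello() { return 1; }       // the one and only definition
// int hello() { return 2; }    // error: redefinition, would violate the one-definition rule

// double hello();              // error: same name, conflicting declaration (differs only in return type)

int main() { return hello(); }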
Your doubt will be cleared by the "One Definition Rule". It is defined in the ISO C++ standard (ISO/IEC 14882:2003), section 3.2.
It states that:
In any translation unit, a template, type, function, or object can have no more than one definition. Some of these can have any number of declarations.
Read more about it on Wikipedia (http://en.wikipedia.org/wiki/One_Definition_Rule)
In a mixed C/C++ project, we need to make a call from C to a C++ function. The function to be called is overloaded as three separate functions, but we can ignore that from the C side; we just pick the most suitable one and stick with it.
There are two ways to do this: (1) write a small C++ wrapper with an extern "C" function that forwards the call to the chosen overload, or (2) the hackish way of just declaring the one function we want to call from C as extern "C".
The question is: are there any disadvantages (apart from nightmares and bad karma) to going for the second variant? In other words, given three overloaded functions, where one is declared extern "C", should we expect trouble on the C++ side, or is this well defined according to the standard?
I believe the language in the standard is specifically written to allow exactly one function with "C" linkage, and an arbitrary number of other functions with "C++" linkage that overload the same name (§[dcl.link]/6):
At most one function with a particular name can have C language linkage. Two declarations for a function with C language linkage with the same function name (ignoring the namespace names that qualify it) that appear in different namespace scopes refer to the same function. Two declarations for an object with C language linkage with the same name (ignoring the namespace names that qualify it) that appear in different namespace scopes refer to the same object.
The standard shows the following example:
complex sqrt(complex); // C++ linkage by default
extern "C" {
    double sqrt(double); // C linkage
}
Even if it is allowed by the standard, future maintainers of the code will probably be extremely confused and might even remove the extern "C", breaking the C code (possibly long enough afterwards that the breakage isn't obviously connected to the change).
Just write the wrapper.
EDIT:
From C++03 7.5/5:
If two declarations of the same function or object specify different linkage specifications (that is, the linkage specifications of these declarations specify different string literals), the program is ill-formed if the declarations appear in the same translation unit, and the one definition rule (3.2) applies if the declarations appear in different translation units...
I interpret this to not apply since C and C++ functions with the same name aren't actually the same function but this interpretation may be wrong.
Then from C++03 7.5/6:
At most one function with a particular name can have C language linkage...
This then implies that you could have other, non-C-linkage, functions with the same name. In this case, C++ overloads.
As long as you follow the other rules for extern "C" functions (such as their special naming requirements), specifying one of the overloads as extern "C" is fine according to the standard. If you happen to use function pointers to these functions, be aware that language linkage is part of the function type, and needing a function pointer to this function may decide the issue for you.
Otherwise, I don't see any significant disadvantages. Even the potential disadvantage of copying parameters and the return value can be mitigated by compiler and implementation specifics that allow you to inline the function, if that is determined to be a problem.
namespace your_project { // You do use one, right? :)
    void f(int x);
    void f(char x);
    void f(other_overloads x);
}

extern "C"
void f(int x) {
    your_project::f(x);
}
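Regarding the function-pointer caveat mentioned above, a small sketch (note that many compilers do not enforce language linkage in the type system, so the "mismatched" line often compiles anyway; the names here are made up):
extern "C" void handler(int) {}

extern "C" {
    typedef void (*c_callback)(int);  // pointer to a function with C language linkage
}
typedef void (*cpp_callback)(int);    // pointer to a function with C++ language linkage

c_callback ok = handler;      // linkage of pointer and function agree
cpp_callback iffy = handler;  // formally a linkage mismatch, though most compilers accept it

int main() { ok(0); }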
(This answer applies to C++14; other answers so far are C++03).
It is permitted to use overloading. If there is an extern "C" function definition of some particular name then the following conditions apply (references to C++14 in brackets):
The declaration of the extern "C" function must be visible at the point of any declaration or definition of overloads of that function name (7.5/5)
There must be no other extern "C" definition of a function or variable with the same name, anywhere. (7.5/6)
An overloaded function with the same name must not be declared at global scope. (7.5/6)
Within the same namespace as the extern "C" function, there must not be another function declaration with the same name and parameter list. (7.5/5)
If any violation of the above rules occurs in the same translation unit the compiler must diagnose it; otherwise it is undefined behaviour with no diagnostic required.
So your header file might look something like:
#include <string>

namespace foo
{
    extern "C" void bar();
    void bar(int);
    void bar(std::string);
}
The last bullet point says that you cannot overload solely on linkage; this is ill-formed:
namespace foo
{
    extern "C" void bar();
    void bar(); // error
}
However, you can do this in different namespaces:
extern "C" void bar();
namespace foo
{
void bar();
}
in which case the normal rules of unqualified lookup determine whether a call to bar() in some code finds ::bar, finds foo::bar, or is ambiguous.
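A quick sketch of those lookup outcomes (function bodies added only so the snippet is self-contained):
extern "C" void bar() {}
namespace foo { void bar() {} }

void calls_global()    { bar(); }      // only ::bar is visible here
void calls_qualified() { foo::bar(); } // explicitly names foo::bar

void calls_after_using() {
    using namespace foo;
    // bar();  // would not compile: lookup now finds both ::bar and foo::bar, so the call is ambiguous
}

int main() {
    calls_global();
    calls_qualified();
    calls_after_using();
}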
I noticed a very curious behavior that, if standard, I would be very happy to exploit (what I'd like to do with it is fairly complex to explain and irrelevant to the question).
The behavior is:
static void name();
void name() {
    /* This function is now static, even if in the declaration
     * there is no static keyword. Tested on GCC and VS. */
}
What's curious is that the inverse produces a compile time error:
void name();
static void name() {
    /* Illegal */
}
So, is this standard and can I expect other compilers to behave the same way? Thanks!
C++ standard:
7.1.1/6: "A name declared in a namespace scope without a storage-class-specifier has external linkage unless it has internal linkage because of a previous declaration" [or unless it's const].
In your first case, name is declared in namespace scope (specifically, the global namespace). The second declaration has no storage-class-specifier, so it keeps the internal linkage established by the first (static) declaration.
The inverse is banned because:
7.1.1/7: "The linkages implied by successive declarations for a given entity shall agree".
So, in your second example, the first declaration has external linkage (by 7.1.1/6), and the second has internal linkage (explicitly), and these do not agree.
You also ask about C, and I imagine it's the same sort of thing. But I have the C++ book right here, whereas you're as capable of looking in a draft C standard online as I am ;-)
Qualifiers that you put on the function prototype (or that are implied) carry over when the function is later defined.
So in your second case, the lack of static on the prototype meant the function was declared as NOT static, and when it was later defined as static, that was an error.
If you were to leave off the return type in the prototype, then the default would be int and then you would get an error again with the void return type. The same thing happens with __crtapi and __stdcall and __declspec() (in the Microsoft C compiler).