The following piece of code does not work correctly on Alpine Linux:
#ifndef __cpp_lib_uncaught_exceptions
namespace std {
int uncaught_exceptions() noexcept {
return std::uncaught_exception();
}
}
#endif
Error:
[ 89%] Linking CXX executable AdsLibTest.bin
/usr/lib/gcc/x86_64-alpine-linux-musl/11.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: /usr/lib/gcc/x86_64-alpine-linux-musl/11.2.1/../../../../lib/libstdc++.a(eh_catch.o): in function `std::uncaught_exceptions()':
/home/buildozer/aports/main/gcc/src/gcc-11.2.1_git20220219/libstdc++-v3/libsupc++/eh_catch.cc:149: multiple definition of `std::uncaught_exceptions()'; CMakeFiles/AdsLibTest.bin.dir/main.cpp.o:main.cpp:(.text+0x8d0): first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [_deps/ads-build/AdsLibTest/CMakeFiles/AdsLibTest.bin.dir/build.make:98: _deps/ads-build/AdsLibTest/AdsLibTest.bin] Error 1
It looks like GCC does not provide the __cpp_lib_uncaught_exceptions feature-test macro. Why could this happen?
I see a few issues with your code:
__cpp_lib_uncaught_exceptions is only documented to be defined (when applicable) if you've included <version> or <exception>; you've included neither. Add #include <exception> above that feature test, and it should work.
Your code as written will redefine uncaught_exceptions() in every compilation unit that includes it when the macro is not defined, because you placed the definition in a header without making it inline or static, so every .cpp file including your header ends up with its own exportable definition. Moreover, without the necessary headers included in your header, whether the feature-test macro is defined depends on whether each .cpp file includes <exception>/<version>, and whether it does so before or after including your header. If the .cpp files aren't uniform in this respect, some could get your header's definition while others get the built-in one.
Adding declarations to namespace std causes (with a few specific exceptions) undefined behavior. There is no reason that this should work, even if the compiler does correctly report that it doesn't provide std::uncaught_exceptions.
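Putting the first two points together, a corrected shim might look like this sketch (it still adds a name to namespace std, so the undefined-behavior caveat above stands):

```cpp
#include <exception>  // without this, __cpp_lib_uncaught_exceptions is never defined

#ifndef __cpp_lib_uncaught_exceptions
namespace std {
// `inline` lets every translation unit that includes this header carry
// the same definition without violating the one-definition rule.
inline int uncaught_exceptions() noexcept {
    return std::uncaught_exception() ? 1 : 0;
}
}
#endif
```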
In particular, if the standard library implementation supports std::uncaught_exceptions but the file is compiled with the language standard set below C++17, the feature-test macro will claim that std::uncaught_exceptions is unsupported, while the standard library .a/.so may still provide a definition for it. This would cause a duplicate-definition error.
As @ShadowRanger notes, there is also likely an inline missing on the function definition, because it may be included in multiple translation units.
Also, in order to use a feature test macro, it is necessary to include <version> (since C++20) or the header file corresponding to that feature, e.g. in this case <exception>. Otherwise the macro is not defined and the feature check will always fail. It seems that the header does not include <exception>, but does include <stdexcept>, which technically is not sufficient. In practice, however, <stdexcept> is likely to include <exception>, so it would probably still work; for current libstdc++ this seems to be the case at least.
So, I would assume that your issue lies in the chosen compiler options, e.g. the language standard to compile for.
Related
I am writing a library for neural nets. There are some necessary functions I needed, so I separated them into a separate header file with include guards. I included the header file in only one file, but the linker still claims that there are multiple definitions of all the functions in the program.
The library structure is as follows:
namespace maya:
class neuron [neuron.hpp, neuron.cpp]
class ffnet [ffnet.hpp, ffnet.cpp]
struct connection [connection.hpp]
functions [functions.hpp]
the functions header file is written something like this:
#ifndef FUNCTIONS_HPP
#define FUNCTIONS_HPP
// some functions here
double random_double() { /* some code */ }
#endif
this functions.hpp file is included only once, in neuron.hpp, and since ffnet depends on neuron, I included neuron.hpp in ffnet only once. This ffnet.hpp is included in main.cpp only once. main.cpp is the file I use for testing my library.
the linker throws an error something like this:
/usr/bin/ld: /tmp/ccN7ywby.o: in function `maya::random_double()':
neuron.cpp:(.text+0x0): multiple definition of `maya::random_double()'; /tmp/ccvDr1aG.o:main.cpp:(.text+0x0): first defined here
/usr/bin/ld: /tmp/cc66mBIr.o: in function `maya::random_double()':
ffnet.cpp:(.text+0x0): multiple definition of `maya::random_double()'; /tmp/ccvDr1aG.o:main.cpp:(.text+0x0): first defined here
Also, I compiled my program using:
g++ main.cpp neuron.cpp ffnet.cpp -o net
I don't think this will be needed, but just in case:
$ uname -a
Linux brightprogrammer 4.19.0-kali3-amd64 #1 SMP Debian 4.19.20-1kali1 (2019-02-14) x86_64 GNU/Linux
You must put the body of random_double() in a .cpp file rather than in the .hpp or .h file. Alternatively, add inline before double random_double() { /* some code */ } if you want to keep the definition in your header.
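A minimal sketch of the inline option; the original body of random_double() wasn't posted, so the uniform-[0, 1) body below is an assumption for illustration:

```cpp
// functions.hpp (sketch): `inline` allows the identical definition to
// appear in every translation unit that includes this header.
#include <cstdlib>  // std::rand, RAND_MAX

inline double random_double() {
    // hypothetical body: uniform double in [0, 1)
    return std::rand() / (RAND_MAX + 1.0);
}
```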
The problem
You have your function definitions, with their full bodies, in a header that you include in several compilation units. This causes the function to be defined in each compilation unit (.cpp), which breaks the One Definition Rule (ODR).
The include guards only make sure that the same definition doesn't occur several times in the same compilation unit (e.g. if you include functions.hpp in neuron.hpp and also include it directly). But here the header is included, directly or indirectly, in main.cpp, ffnet.cpp and neuron.cpp, which makes a first definition and two invalid redefinitions.
The solution
You must change functions.hpp to keep only the function declaration:
#ifndef FUNCTIONS_HPP
#define FUNCTIONS_HPP
double random_double(); // no body !!
#endif
and move the function bodies to a separate functions.cpp, which must be added to your compiler command.
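Collapsed into one snippet for illustration, the split could look like this; the function body is a guess, since the original wasn't posted:

```cpp
// functions.hpp would contain only the declaration:
double random_double();  // no body

// functions.cpp would contain the single program-wide definition:
#include <random>

double random_double() {
    // hypothetical body: uniform double in [0, 1); fixed seed for illustration
    static std::mt19937 gen{12345};
    static std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(gen);
}
```

You would then compile with: g++ main.cpp neuron.cpp ffnet.cpp functions.cpp -o net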
The advantages of this approach are:
You can then compile the utility functions separately. Every time you change a function body, you no longer have to recompile all the .cpp files that include the header.
Encapsulation is improved, by sharing in the hpp only what other modules need to know, and hiding the implementation details.
Reuse could be facilitated across projects by making a library of functions.
The includes would be shorter (if in some distant future your code evolves into a large project with thousands of .hpp files, this could save you some time).
Additional remarks
Not sure whether it applies here, but be aware also that it's not a good idea to include a header inside a namespace.
I also recommend reading this article about headers. It's old but the advice is still very relevant :-)
Note that there are exceptions to the ODR for classes and inline functions, in which case the multiple definitions must be exactly the same sequence of tokens.
The following program compiles correctly:
#include <algorithm>
int main(int argc, char *argv[]) {
return int(log(23.f));
}
(under g++ 4.9.2 with the flag -std=c++11)
The code uses the function log, which is defined in <cmath>. However, it does not include the header <cmath>, only the header <algorithm>. Why is it that g++ doesn't give any warnings, and compiles the code correctly?
According to the standard, some headers do include others. As an example, <cinttypes> includes <cstdint>. See the Includes section here. With respect to <algorithm>, there is no such statement as to which other headers it should include (see here). So, the conclusion is, <algorithm> is not required to include <cmath>, and your example code is not portable. It may fail to compile on other C++ implementations.
In the C++11 standard, [res.on.headers]/1 states that

A C++ header may include other C++ headers. A C++ header shall provide the declarations and definitions that appear in its synopsis. A C++ header shown in its synopsis as including other C++ headers shall provide the declarations and definitions that appear in the synopses of those other headers.
Now consider [algorithms.general]/2:
Header <algorithm> synopsis
#include <initializer_list>
namespace std {
// ......
<cmath> isn't listed, and it is clearly not included by <initializer_list>. Thus your program is not guaranteed to compile on a standard-conforming implementation. One should never rely on "implicit inclusion": the general guideline is to include every header from which an entity is used.
Exceptions are e.g. <iostream> including <ostream>, which is guaranteed since C++11.
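Following that guideline, the example from the question becomes portable by naming the header that actually declares the function; this sketch wraps it in a helper:

```cpp
#include <algorithm>  // not guaranteed to drag in <cmath>
#include <cmath>      // the header that actually declares std::log

int truncated_log(float x) {
    return int(std::log(x));  // no longer relies on an indirect include
}
```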
To answer your question:
Why is it that g++ doesn't give any warnings, and compiles the code correctly?
Because C++ implementations aren't required to and it's actually quite difficult to implement this warning given the way #include works. Attempts have been made, but there are problems that haven't been entirely addressed.
Moving to a different model can enable this kind of checking. However, in the interests of backwards compatibility and of allowing the easiest possible transition, the 'modularizations' of the standard library I've used happen to explicitly allow code that previously depended on indirect includes to continue to work.
You can see this, for example, in libc++'s module map; those export * lines declare "any modules imported by this module are also exported." Which is to say, a module std.algorithm that imports a module std.cmath also exports it, so anyone who imports std.algorithm also gets access to std.cmath.
For new code it would be very nice if these 'legacy exports' could be turned off, but for pre-existing large projects it is very nice to be able to just flip on -fmodules and have the project work with no changes.
Using clang's implementation of modules with libc++, and modifying the module map file to remove the non-portable, indirect include behavior, clang reports such errors like:
main.cpp:5:16: error: declaration of 'log' must be imported from module 'Darwin.C.math' before it is required
return int(log(23.f));
^
/usr/include/math.h:387:15: note: previous declaration is here
extern double log(double);
^
1 error generated.
libc++ <algorithm> doesn't include <cmath>, so I used <random> instead. Otherwise the source that produced the above is the same as what you show.
I'm collaborating with a friend using MSVC and I'm using clang. One thing I've noticed is that clang seems to be automatically including some standard headers, while MSVC is not. For example, I just used the functions assert() and round() from assert.h and math.h respectively.
The code complied fine for me without explicitly including the headers -- is there a switch to turn this behavior off? It's driving me up a wall. I want there to be an error unless I explicitly include the header.
The Standard allows standard header files to include each other, or forward declare items normally found in other files... most likely what is happening is that these definitions are being provided (directly or indirectly) by one of the headers you are including.
This behavior is Standard-conformant, so unfortunately writing portable code is going to require testing using multiple compilers.
A C++ header may include other C++ headers. A C++ header shall provide the declarations and definitions that appear in its synopsis. A C++ header shown in its synopsis as including other C++ headers shall provide the declarations and definitions that appear in the synopses of those other headers.
Certain types and macros are defined in more than one header. Every such entity shall be defined such that any header that defines it may be included after any other header that also defines it (3.2).
The C standard headers (D.5) shall include only their corresponding C++ standard header, as described in 17.6.1.2.
In addition to headers included by other headers, there's also the fact that the C++ headers may or may not introduce global names, while the C headers may or may not introduce names inside namespace std. I've seen a lot of people recommending "This is C++, you should be using <cxyz> instead of xyz.h". This advice is misleading; the two headers do not provide the same definitions, so inclusions of the C++ header cannot simply replace inclusion of the C header. Whether all the code should be rewritten to use the C++ qualified names instead is debatable... but is a prerequisite for changing the includes.
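A small sketch of that guaranteed-versus-optional split (fabs chosen arbitrarily as the example function):

```cpp
#include <cmath>   // guarantees std::fabs; a global ::fabs is only optional
#include <math.h>  // guarantees ::fabs; std::fabs is only optional

double via_std(double x)    { return std::fabs(x); }  // portable with <cmath>
double via_global(double x) { return ::fabs(x); }     // portable with <math.h>
```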
One thing that may help you is to list the headers actually used using gcc -M. Then you can decide whether any of the indirectly included headers should be listed in your code.
Standard headers can include each other. The headers you want to not include might be included by another header you use.
In this case you may want to manually #define the header guards of the headers you don't want before including anything, but that's a little dirty.
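The trick can be simulated in a single file; WIDGET_H and the "widget" body are hypothetical stand-ins for a real header and its guard:

```cpp
#define WIDGET_H  // pre-define the guard: the guarded body below is skipped

#ifndef WIDGET_H
#define WIDGET_H
#define WIDGET_BODY_COMPILED 1  // what the unwanted header would have defined
#endif

#ifndef WIDGET_BODY_COMPILED
#define WIDGET_BODY_COMPILED 0  // reached only because the body was skipped
#endif

const int widget_body_compiled = WIDGET_BODY_COMPILED;
```

As noted above, this is dirty: it silently removes declarations that other headers may depend on.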
In my code base, I 'hide' implementation details of heavily templated code in .tcc files inside a bits sub-directory, i.e.
// file inc/foo.h:
#ifndef my_foo_h // include guard
#define my_foo_h
namespace my {
/* ... */ // templated code available for user
}
#include "bits/foo.tcc" // includes implementation details
namespace my {
/* ... */ // more templated code using details from foo.tcc
}
#endif
// file inc/bits/foo.tcc:
#ifndef my_foo_tcc // include guard
#define my_foo_tcc
#ifndef my_foo_h
# error foo.tcc must be #included from foo.h
#endif
namespace my { namespace details {
/* ... */ // defails needed in foo.h
} }
#endif
Of course, there must only be one file bits/foo.tcc in the include path. Otherwise, there will be a clash and (hopefully) a compilation error. This just happened to me with bits/vector.tcc, which is included from gcc's (4.8) vector but also from my own header (using #include "bits/vector.tcc" and not #include <bits/vector.tcc>).
My question: is this formally a bug of gcc (since it uses a name bits/vector.tcc which is not protected by the standard) or correct, i.e. even formally my fault? If the latter, what names for header files are guaranteed to be okay to use?
(note I don't want to hear obvious advices of how to avoid this).
Edit The problem is that the header file vector provided by the standard library (shipped by the compiler) has a preprocessor directive #include <bits/vector.tcc> which causes the preprocessor to load my file rather than that provided with the standard library.
Here's what the C++11 standard [cpp.include] has to say about this:
1 A #include directive shall identify a header or source file that can be processed by the implementation.
2 A preprocessing directive of the form
# include < h-char-sequence> new-line
searches a sequence of implementation-defined places for a header identified uniquely by the specified sequence
between the < and > delimiters, and causes the replacement of that directive by the entire contents
of the header. How the places are specified or the header identified is implementation-defined.
3 A preprocessing directive of the form
# include " q-char-sequence" new-line
causes the replacement of that directive by the entire contents of the source file identified by the specified
sequence between the " delimiters. The named source file is searched for in an implementation-defined
manner. If this search is not supported, or if the search fails, the directive is reprocessed as if it read
# include < h-char-sequence> new-line
with the identical contained sequence (including > characters, if any) from the original directive.
In other words, #include < > is intended for searching for headers only. A header is one of the things provided by the standard library. I say "things" because the standard doesn't specify what a header is; it doesn't have to be a file at all (although all compilers I know of implement headers as files).
#include " " is intended for "everything else" - in terms of the standard, they're all "source files," although in general speech we usually refer to files intended for being #included as "header files." Also note that if no such source file is found, a (standard library) header will be searched for instead.
So, in your case:
The standard doesn't say anything about files like bits/vector.tcc; in fact, it doesn't say anything about any files. All of this falls under the "implementation-defined" heading and is thus up to your compiler and its documentation.
At the same time (thanks to @JamesKanze for pointing this out in the comments), the standard clearly specifies what #include <vector> should do, and never mentions that it could depend on a file's presence or absence. So in this regard, gcc loading your bits/vector.tcc instead of its own is a gcc bug. If gcc loaded its own bits/vector.tcc instead of yours, it would be within its "implementation-defined" scope.
#include "vector" is primarily intended to include a source file named vector. However, if no such file is found, the effect is the same as including the standard header <vector> (which causes class template std::vector to be considered defined).
The standard is pretty open, but... including <vector> should work; I don't see anything that authorizes it not to (provided you've done #include <vector>, and not #include "vector"), regardless of the names of your personal includes.
More generally, the more or less universal algorithm for searching for a header is to first search in the directory which contains the file doing the include. This is done precisely to avoid the type of problem you have encountered. Not doing this (or not using some other mechanism to ensure that includes from standard headers find the files they're supposed to) is an error in the compiler. A serious one, IMHO. (Of course, the compiler may document that certain options introduce certain restrictions, or that you need to use certain options for it to behave in a standard manner. I don't think that g++ documents -I as being incompatible with the standard headers, but it does say that if you use -iquote, it shouldn't influence anything included using <...>.)
EDIT:
The second paragraph above really only applies to the "..." form of the include. #include <vector> should find the standard header, even if you have a file vector in the same directory as the file you are compiling.
In the absence of -I options, this works. Universally, however, the -I option adds the directory to the search lists for both types of include. The reason for this is that you, as a developer, will probably want to treat various third-party libraries (e.g. X Windows) as if they were part of the system as well. (I think Linux does put X Windows as part of the system, placing its headers in /usr/include, but this wasn't the usual case in other Unices in the past.) So you use -I to specify them, as well as your other include directories. And if you have a file vector in one of those other directories, it will "override" the system one.
This is clearly a flaw: if I recall correctly (but it's been some time), g++ at one time had additional options to put a directory in the list for only one type of include. And in modern gcc/g++, there's -iquote (and -I-, which specifies that all of the earlier -I options are for the "..." includes only). These features are little used, however, because gcc/g++ is the only compiler which supported them.
Given all this, the gcc/g++ handling is probably the best you can hope for. And the error isn't in the compiler itself, but in the library headers, which use <bits/vector.tcc> when they absolutely want the include file from the same directory as the file doing the including. (Another way of saying this is that bits/vector.tcc isn't a system header, in any sense of the word, but an implementation header of the system library.)
None of which helps the original poster much, unless he feels like modifying the library headers for g++. (If portability isn't an issue, and he's not considering his headers as part of the system, he could change the -I to -iquote.)
Suppose I am editing some large C++ source file, and I add a few lines of code that happen to use auto_ptr, as in the following example.
#include <string>
// ... (much code here)
void do_stuff()
{
std::string str("hello world");
// ... (much code here too)
std::auto_ptr<int> dummy; // MY NEW CODE
// ...
}
This example compiles on gcc 3.4.4 (cygwin), because the standard header <string> happens to include the header <memory> needed for compilation of auto_ptr. However, this doesn't work on gcc 4.5.0 (mingw); they seem to have cleaned up their header files or something.
So, when I add code that uses auto_ptr, should I immediately go and check whether the file contains #include <memory> at the beginning, as this answer implies? I never do it (I find it too annoying); I always rely on the compiler to check whether any #include is missing.
Is there any option that would not be disruptive to coding, and would ensure portability of my code?
Is there a C++ standard library implementation whose headers don't include each other more than is required?
If you use something in the standard library, you should include the header in which it is defined. That is the only portable option. That way you avoid the instance you cite where one header happens to include another in one version or compiler, but not another. auto_ptr is defined in <memory>, so if you use it, include that header.
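A sketch of the fixed snippet; since std::auto_ptr is deprecated (and removed in C++17), std::unique_ptr stands in here, but the point is the same: name <memory> explicitly instead of relying on <string> to pull it in.

```cpp
#include <memory>  // the header that defines the standard smart pointers
#include <string>

int read_value() {
    std::string str("hello world");
    std::unique_ptr<int> dummy(new int(42));  // modern stand-in for auto_ptr
    return *dummy;
}
```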
[edit...]
In answer to your comment... Are you asking if the compiler can help detect when you use something from a standard header you didn't directly include? This would be helpful, but I think it's a little too much to ask. This would require the compiler to know which standard library headers contain which standard library definitions, and then check that you included the right ones for the definitions you used.
Determining exactly how a header was included is also a tall task. If you are using a standard library definition, then you must be including the header somehow. The compiler would have to tell whether you included the header yourself (possibly through headers of your own or a third-party library) or whether it came through another standard library header. (For instance, in your example, it would have to be able to tell the difference between <memory> being included via <string> or being included within your own code.)
It would have to handle different versions of the standard library (e.g. C++03 vs C++0x) and different vendors. And if a vendor's stdlib does not exactly follow the standard, you could get incorrect warnings about which headers to include.
I'm only saying this to try to explain (with my limited compiler/stdlib knowledge) why I don't think compilers have this feature. I do agree, it would be helpful, but I think the cost outweighs the benefit.
The best way is to include the correct header in which the construct is defined.
Also, include files should protect against multiple inclusion through the use of macros that "guard" the files.
Generally, header files have "include guards" surrounding them. The guards are formed like this:
MyHeader.h:
#ifndef MY_HEADER_H // avoid names like __MY_HEADER_H__; double underscores are reserved for the implementation
#define MY_HEADER_H
//body of MyHeader.h
#endif
So, you could include MyHeader.h as many times as you want:
#include "MyHeader.h"
#include "MyHeader.h"
#include "MyHeader.h"
#include "MyHeader.h"
And it won't cause any problems for the compiler (the body will only ever be included once). Moreover, you could include another file that includes "MyHeader.h", and the same rule would apply.
What this means is, if you ever want to use something that is defined in a header - include it! (Even if you think something else might include it, there is no reason not to be safe).
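The mechanism can be simulated in a single file by repeating the guarded section, as if a header had been #included twice:

```cpp
#ifndef MY_HEADER_H
#define MY_HEADER_H
const int answer = 42;  // "body of MyHeader.h"
#endif

#ifndef MY_HEADER_H  // guard already defined: this whole body is skipped
#define MY_HEADER_H
const int answer = 0;  // would otherwise be a redefinition error
#endif
```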