I was trying out some CUDA/Thrust code on Linux/GCC and wanted to use some TR1 libraries, when I noticed something peculiar: Most libraries will invariably pull in tr1_impl/type_traits (4.4) or just type_traits (4.6), and that header will always contain variadic templates, like so:
template<typename _Res, typename... _ArgTypes>
struct is_function<_Res(_ArgTypes...)>
: public true_type { };
However, these headers also get used when I run GCC in C++98 or C++03 mode! How can this work?
The actual problem I encountered is that the CUDA toolchain doesn't recognize C++0x constructions, and cudafe++ (the CUDA front end, i.e. the program that separates the joint source code into host and device source code) rightly aborts with an error when encountering the variadic template parameter.
So... how can GCC support and rely on variadic templates in non-0x dialects of C++? And is there a way to obtain a genuine C++03 version of TR1?
Welp, an implementation is not actually required to provide headers as files. It's only required that an #include <stuff> does The Right Thing. So if an implementation does use real header files for this functionality, those files aren't required to be conforming C++ themselves. And in fact GCC has supported variadic templates as an extension for quite some time.
Furthermore, I can't help but notice
#pragma GCC system_header
in the <tr1/random> header that you mention. GCC treats such a file specially and, for example, does not report warnings in it. I would have thought that using an extension in conforming mode could easily be turned into an error, so I'm not entirely sure what's going on, but at least legally it's an option.
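To illustrate (a minimal sketch, not the actual libstdc++ code; the header name is made up): a header marked with that pragma can use the variadic-template extension without tripping -pedantic diagnostics in C++98/03 mode, while the same code in an ordinary translation unit would at least warn.

// my_tr1_like.h -- hypothetical header, NOT part of libstdc++
#pragma GCC system_header

// Variadic templates are a GCC extension in C++98/03 mode; because this
// file declares itself a system header, -std=c++98 -pedantic stays quiet.
template <typename Res, typename... Args>
struct is_function_like { static const bool value = false; };

Compiling a file that includes this with g++ -std=c++98 -pedantic -c should produce no diagnostic; move the same template into the .cpp itself and -pedantic complains about variadic templates.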
There's also the special status of TR1, which is not binding. On my implementation, as far as I can tell, the only C++03 header that includes <type_traits> is <functional>, and it properly only does that in C++0x mode (i.e. the rest of the time it is a valid C++03 file after preprocessing, unlike <tr1/random>). (I didn't check for other cases, though.)
Related
I've encountered a case where I may want to use a C++ shared object library compiled with one version of gcc with some code that will be compiled with another version of gcc. In particular, I want to use methods that return some STL containers like std::string and std::map.
The gcc website and many old stackoverflow posts (e.g. here) discuss this issue. My current understanding is that
Most of the concern and most of the posts on this issue are about cross-compatibility between .so files and .dll files. This is very difficult, due to different compiler ABIs.
For cross-compatibility between .so files compiled with different versions of gcc (at least with gcc version >= 3.4), all you need to ensure is that the standard library API hasn't changed (and, if it has, there is dual ABI support).
My question has to do with how this works at a machine level. It seems like it is possible that gcc can change the header implementing std::string, even if the library API has not changed, in order to make it more efficient or for other reasons. If so, then two different pieces of code are compiled with two different std::string headers, and are basically defining two different classes with the same name. How can we be guaranteed that, when we pass a std::string from code that uses one header to code that uses another, the object won't be mangled or misread somehow?
For example, suppose that I have the following files:
// File a.h:
#ifndef FILE_A
#define FILE_A
#include <string>
class X {
public:
    std::string f();
};
#endif // FILE_A
// File a.cpp:
#include "a.h"
std::string X::f() {
return "hello world";
}
// File b.cpp:
#include <iostream>
#include <string>
#include "a.h"
int main() {
    std::string x = X().f();
    std::cout << x << std::endl;
}
(The only purpose of the class X here is to introduce a bit more name-mangling into the shared object library while I am testing how this works.)
Now I compile these as follows:
/path/to/gcc/version_a/bin/g++ -fPIC -shared a.cpp -o liba.so
/path/to/gcc/version_b/bin/g++ -L. b.cpp -la -o b
When I execute b, then b has a definition of std::string that comes from the header in version_b. But the object that is produced by X().f() relies on machine code that was compiled using a copy of the header that came from version_a of gcc.
I don't understand very much about the low-level mechanics of compilers, linkers, and machine instructions. But it seems to me like we are breaking a fundamental rule here, which is that the definition of a class has to be the same every time it is used, and if not, we have no guarantee that the scenario above will work.
Edit: I think that the main resolution to my confusion is that the phrase "library API" means something much more general in this context than it does in the uses of the term "API" that I am used to. The gcc documentation seems to indicate, in a very vague way, that pretty much any change to the include files that implement the standard library can be considered a change in the library API. See the discussion in the comments on Mohan's answer for details.
GCC has to do whatever it takes so that our programs work. If using different implementations of std::string in different translation units means our programs are broken, then gcc is not allowed to do that.
This is applicable to any given version of GCC.
GCC goes out of its way to remain backwards compatible. That is, it strives to keep the above applicable across different versions of GCC, not just within a given version. It cannot, however, guarantee that all of its versions will remain compatible until eternity. When keeping backward compatibility is no longer possible, an ABI change is introduced.
The big GCC 5 ABI change was introduced in such a way that it deliberately tries to break your build if you combine old and new binaries. It does so by renaming the std::string and std::list classes at the binary level, and the renaming propagates to all functions and templates that have std::string or std::list parameters. If you try to pass e.g. an std::string between translation units compiled against incompatible ABI versions, your program will fail to link. The mechanism is not 100% foolproof, but it catches many common cases.
The alternative would be to silently produce broken executables, which no one wants.
The dual ABI is a way for the newer versions of the GCC standard library binary to remain compatible with older executables. Basically it has two versions of everything that involves std::string and std::list, with different symbol names for the linker, so older programs that use the old versions of the names can still be loaded and run.
There's also a compilation flag that allows the newer versions of GCC to produce binaries compatible with the older ABI (and incompatible with newer binaries produced without the compatibility flag). It is not recommended to use it unless you absolutely have to.
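For reference, the "flag" is really a preprocessor macro understood by libstdc++ (a sketch; check the libstdc++ manual for your version): predefining _GLIBCXX_USE_CXX11_ABI=0 selects the old std::string/std::list ABI even on GCC 5 and later.

// abi_probe.cpp -- reports which string ABI this translation unit uses.
//   old ABI: g++ -D_GLIBCXX_USE_CXX11_ABI=0 abi_probe.cpp
//   new ABI: g++ abi_probe.cpp          (the default since GCC 5)
#include <iostream>
#include <string>

int main() {
#if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
    std::cout << "new (__cxx11) std::string ABI\n";
#else
    std::cout << "old (copy-on-write) std::string ABI\n";
#endif
    return 0;
}

Every binary that exchanges std::string or std::list objects has to agree on that setting, which is exactly what the renaming described above enforces at link time.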
It seems like it is possible that gcc can change the header implementing std::string
It can't make arbitrary changes. That would (as you surmise) break things. But only some changes to std::string will affect the memory layout of the class, and those are the ones that matter.
For an example of an optimisation that wouldn't affect the memory layout: they could change the code inside
size_t string::find (const string& str, size_t pos = 0) const;
to use a more efficient algorithm. That wouldn't change the memory layout of the string.
In fact, if you temporarily ignore the fact that everything is templated and so has to be in header files, you can imagine string as being defined in a .h file and implemented in a .cpp file. The memory layout is determined only from the contents of the header file. Anything in the .cpp file could be safely changed.
An example of something they couldn't do is to add a new data member to string. That would definitely break things.
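A hypothetical stand-in (not the real std::string) makes the distinction concrete: the private members below pin down the object's size and layout, which every caller bakes into its own machine code, while the body of find() lives in a .cpp and can change freely.

// ministr.h -- made-up illustration, not libstdc++ code
#include <cstddef>

class MiniString {
public:
    // The algorithm behind find() can be swapped out in the .cpp at will;
    // callers only depend on its signature.
    std::size_t find(const MiniString& s, std::size_t pos = 0) const;
private:
    char*       data_;
    std::size_t size_;
    std::size_t capacity_;
    // std::size_t cached_hash_;  // adding a member like this would change
                                  // sizeof(MiniString) and break old callers
};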
You mentioned the dual ABI case. What happened there is that they needed to make a breaking change, and so they had to introduce a new string class. One of the classes is std::string and the other is std::__cxx11::string. (Messy things happen under the hood, so most users don't realise they are using std::__cxx11::string on newer versions of the compiler/standard library.)
I got Netbeans 7.4 installed today with support for PHP, Java, and C++.
However, it seems that I only have partial support for C++11 (even with GCC 4.8.1 and -std=c++11 or -std=c++0x).
I've looked at the header files (e.g. chrono) and there's an inline namespace named _V2, introduced by the following comment:
// To support the (forward) evolution of the library's defined
// clocks, wrap inside inline namespace so that the current
// defintions of system_clock, steady_clock, and
// high_resolution_clock types are uniquely mangled. This way, new
// code can use the latests clocks, while the library can contain
// compatibility definitions for previous versions. At some
// point, when these clocks settle down, the inlined namespaces
// can be removed. XXX GLIBCXX_ABI Deprecated
So, for instance, if I want to use high_resolution_clock, I need
chrono::_V2::high_resolution_clock
Instead of
chrono::high_resolution_clock
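(To check my understanding of inline namespaces, here is a minimal test I would expect to pass with -std=c++11; the _V2 namespace is specific to this libstdc++ version, so the names are not portable:)

// clock_check.cpp -- g++ -std=c++11 clock_check.cpp
#include <chrono>
#include <type_traits>

int main() {
    // _V2 is an *inline* namespace, so the two spellings should name
    // exactly the same type and either one should compile.
    static_assert(std::is_same<std::chrono::high_resolution_clock,
                               std::chrono::_V2::high_resolution_clock>::value,
                  "the _V2 clock is the chrono clock");
    return 0;
}

If that compiles from the command line, the issue would seem to be the IDE's code model rather than GCC itself.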
Also, to_string from <string> is reported as undefined.
Am I missing something? Do I have to update my headers in Netbeans? If so, may I ask you to advise me on how to proceed?
Thanks a lot!
We are in the process of designing a new C++ library and decided to go with a template-based approach along with some specific partial template specialisations for corner cases. In particular, this will be a header-only template library.
Now, there is some concern that this will lead to a lot of code duplication in the binaries, since this template 'library' will be compiled into any other shared library or executable that uses it (arguably only those parts that are used). I still think that this is not a problem (in particular, the compiler might even inline things which it could not across shared library boundaries).
However, since we know the finite set of types this is going to be used for, is there a way to compile this header into a library, and provide a different header with only the declarations and nothing else? Note that the library must contain not only the generic implementations but also the partial specialisations.
Yes. What you can do is explicitly instantiate the templates in CPP files using the compiler's explicit template instantiation syntax. Here is how to use explicit instantiation in VC++: http://msdn.microsoft.com/en-us/library/by56e477(v=VS.100).aspx. G++ has a similar feature: http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html#Template-Instantiation.
Note that C++11 introduced a standard syntax for explicit instantiation, described in [14.7.2] Explicit instantiation of the FDIS:
The syntax for explicit instantiation is:
explicit-instantiation:
    extern(opt) template declaration
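A minimal sketch of how the two pieces fit together (names invented for illustration): the explicit instantiation definition in one .cpp is what actually emits the code for the chosen types, and the C++11 extern form in the header stops every other translation unit from re-instantiating them.

// stack.h -- generic definition plus a promise that Stack<int> exists elsewhere
template <typename T>
class Stack {
public:
    void push(const T& v);
};

extern template class Stack<int>;   // explicit instantiation declaration (C++11)

// stack.cpp -- the one place where Stack<int>'s code is generated
#include "stack.h"

template <typename T>
void Stack<T>::push(const T& v) { /* ... */ }

template class Stack<int>;           // explicit instantiation definition

This is the scheme the question asks for: ship the header (or even a declarations-only variant of it), and compile the .cpp with its explicit instantiations into the library.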
C++ Shared Library with Templates: Undefined symbols error
Some answers there cover this topic. To sum up briefly: it is possible if you explicitly instantiate the templates in the shared library code. It does require explicitly listing every used type for every used template on the shared library side, though.
If it is really templates-only, then there is no shared library. See various Boost projects for concrete examples. Only when you have non-template code will you have a library. A concrete example is eg Boost Date_Time and date formatting and parsing; you can use the library with or without that feature and hence with or without linking.
Not having a shared library is nice in the sense of having fewer dependencies. The downside is that your binaries may get a little bigger and that you have somewhat higher compile-time costs. But storage is fairly cheap (unless you work in embedded systems or other special circumstances) and compiling is usually a fixed one-time cost.
Although there isn't a standard way to do it, it is usually possible with implementation specific techniques. I did it a long time ago with Borland's C++ Builder. The idea is to declare your templates to be exported from the shared library where they need to reside and import them where they are used. The way I did it was along these lines:
// A.h
#ifdef GENERATE
# define DECL __declspec(dllexport)
#else
# define DECL __declspec(dllimport)
#endif
template <typename T> class DECL A {
};
// A.cpp
#define GENERATE
#include "A.h"
template class DECL A<int>;
Beware that I don't have access to the original code, so it may contain mistakes. This blog entry describes a very similar approach.
From your wording I suspect you're not on Windows, so you'll have to find out if and how this approach can be adopted with your compiler. I hope this is enough to put you in the right direction.
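For what it's worth, on GCC/ELF there is no dllimport/dllexport split; the rough equivalent I'd try (a sketch, untested against your toolchain) is the visibility attribute plus the same explicit instantiation inside the library:

// A.h -- hypothetical GCC/ELF variant of the pattern above
#define DECL __attribute__((visibility("default")))

template <typename T>
class DECL A {
public:
    void f();
};

// A.cpp -- compiled into the shared library, e.g.:
//   g++ -fvisibility=hidden -fPIC -shared A.cpp -o libA.so
#include "A.h"

template <typename T>
void A<T>::f() { /* ... */ }

template class A<int>;   // emits A<int> and exports it from libA.so

The attribute only matters if the library is built with -fvisibility=hidden; with GCC's defaults every symbol is exported anyway, which is why explicit template instances usually "just work" across an .so boundary.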
I'm trying to port my own lib from Visual Studio to g++ on GNU/Linux, and I'm getting some problems with template compilation. Indeed, in Visual C++, templates are generated only when they are explicitly used in the code, while it seems (from my errors) that g++ evaluates the contents of templates before they are first used. This results in the following error:
error: incomplete type ‘X’ used in nested name specifier
... because I include some classes after the template code, rather than before. I am doing this due to a cross-use conflict.
To sum up, it seems that Visual C++ does not attempt to resolve a template's contents until it is used, while g++ does the resolution as early as possible.
class MyClass;

template<class _Ty>
void func(MyClass* a_pArg)
{
    a_pArg->foo();
}
(_Ty isn't used but it doesn't matter, it's just to explain the problem)
In that case Visual C++ would compile (even if MyClass isn't predeclared), while g++ will not, because MyClass is not a complete type at that point.
Is there a way to tell g++ to instantiate templates only on use?
No, that's the way two-phase lookup works. MSVC implements it incorrectly: it essentially skips the first phase, in which the template is parsed at its point of definition; MSVC only does some basic syntax checking there. In the second phase, at the point of actual use of the template, only the dependent names are supposed to be looked up, but MSVC does all of its parsing there instead. GCC implements two-phase lookup correctly.
In your case, since MyClass isn't dependent on a template parameter, GCC can inspect it in phase one. You just need to include your class's header before the template definition.
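Another way to see the phases in action (a small sketch based on the code in the question): once the offending name depends on a template parameter, its lookup is deferred to phase two, so the call compiles even though MyClass is defined only later in the file.

class MyClass;                    // incomplete here, as in the question

template <class U>                // U is a template parameter, so p->foo()
void func(U* p)                   // is a dependent expression: GCC checks it
{                                 // only when the template is instantiated
    p->foo();
}

class MyClass {
public:
    void foo() {}
};

int main() {
    MyClass m;
    func(&m);                     // instantiation point: MyClass is complete now
    return 0;
}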
As was indicated in another answer, gcc is correct in looking up non-dependent names in the first lookup phase, while VC++ shifts most checks to the second phase (which is incorrect). In order to fix your code, you don't need to search for some broken version of gcc; you need to separate the declaration and the implementation (at least for the parts that use non-dependent names). Using your example:
// provide declarations
class MyClass;

template<class T>
void func(MyClass* a_pArg);

// provide definition of MyClass
class MyClass
{
    // whatever
};

// provide definition of func
template<class T>
void func(MyClass* a_pArg)
{
    a_pArg->foo();
}
If you are willing to use Clang instead of gcc, Clang supports the -fdelayed-template-parsing option (which postpones the parsing of template definitions until the end of the translation unit), implied by the -fms-extensions option that is specifically designed to compile MSVC code (and its numerous quirks).
According to Francois Pichet, who is leading the Clang effort to fully compile MSVC code (and actually doing most of the work), Clang should be able to parse all of the MFC code base in about 2 to 3 months, with only a couple of non-trivial issues remaining. Already most of MFC is correctly interpreted (i.e., interpreted the way VC++ does it).
Visual C++ does not, by default, implement the two-phase lookup specified by the standard.
However, it looks like two-phase lookup behaves a bit better in Visual Studio 2015 with the /Za option. Perhaps you can do the opposite and add the /Za option to mimic GCC's template instantiation behaviour in some cases.
I know that in the original C++98 standard there was a feature called export.
But I can't find a description or explanation of this feature. What is it supposed to do? Also, which compilers support it?
Although Standard C++ has no such requirement, some compilers require that all function templates be made available in every translation unit they are used in. In effect, for those compilers, the bodies of template functions must be made available in a header file. To repeat: that means those compilers won't allow them to be defined in non-header files such as .cpp files. To clarify, in C++ese this means that this:
// ORIGINAL version of xyz.h
template <typename T>
struct xyz
{
    xyz();
    ~xyz();
};
would NOT be satisfied with these definitions of the ctor and dtors:
// ORIGINAL version of xyz.cpp
#include "xyz.h"
template <typename T>
xyz<T>::xyz() {}
template <typename T>
xyz<T>::~xyz() {}
because using it:
// main.cpp
#include "xyz.h"
int main()
{
    xyz<int> xyzint;
    return 0;
}
will produce an error. For instance, with Comeau C++ you'd get:
C:\export>como xyz.cpp main.cpp
C++'ing xyz.cpp...
Comeau C/C++ 4.3.4.1 (May 29 2004 23:08:11) for MS_WINDOWS_x86
Copyright 1988-2004 Comeau Computing. All rights reserved.
MODE:non-strict warnings microsoft C++
C++'ing main.cpp...
Comeau C/C++ 4.3.4.1 (May 29 2004 23:08:11) for MS_WINDOWS_x86
Copyright 1988-2004 Comeau Computing. All rights reserved.
MODE:non-strict warnings microsoft C++
main.obj : error LNK2001: unresolved external symbol xyz<T1>::~xyz<int>() [with T1=int]
main.obj : error LNK2019: unresolved external symbol xyz<T1>::xyz<int>() [with T1=int] referenced in function _main
aout.exe : fatal error LNK1120: 2 unresolved externals
because there is no use of the ctor or dtor within xyz.cpp, and therefore there are no instantiations that need to occur from there. For better or worse, this is how templates work.
One way around this is to explicitly request the instantiation of xyz, in this example of xyz<int>. In a brute force effort, this could be added to xyz.cpp by adding this line at the end of it:
template struct xyz<int>;
which requests that (all of) xyz<int> be instantiated. That's kind of in the wrong place though, since it means that every time a new xyz type is needed, the implementation file xyz.cpp must be modified. A less intrusive way to avoid modifying that file is to create another one:
// xyztir.cpp
#include "xyz.cpp" // .cpp file!!!, not .h file!!
template struct xyz<int>;
This is still somewhat painful because it still requires manual intervention every time a new xyz is brought forth. In a non-trivial program this could be an unreasonable maintenance demand.
So instead, another way to approach this is to #include "xyz.cpp" into the end of xyz.h:
// xyz.h
// ... previous content of xyz.h ...
#include "xyz.cpp"
You could of course literally bring (cut and paste) the contents of xyz.cpp to the end of xyz.h and get rid of xyz.cpp altogether; it's a question of file organization, and the result of preprocessing will be the same either way: the ctor and dtor bodies end up in the header and are therefore brought into any compilation that uses the header. Either way, this has the side effect that every template definition is now in your header file. It can slow compilation, and it can result in code bloat. One way to approach the latter is to declare the functions in question, in this case the ctor and dtor, as inline, which would require you to modify xyz.cpp in the running example.
As an aside, some compilers also require that some functions be defined inline inside a class, and not outside of one, so the setup above would need to be tweaked further in the case of those compilers. Note that this is a compiler issue, not one of Standard C++, so not all compilers require this. For instance, Comeau C++ does not, nor should it. Check out http://www.comeaucomputing.com/4.0/docs/userman/ati.html for details on our current setup. In short, Comeau C++ supports many models, including one which comes close to what the export keyword's intentions are (as an extension) as well as even supporting export itself.
Lastly, note that the C++ export keyword is intended to alleviate the original question. However, currently Comeau C++ is the only compiler which is being publicized to support export. See http://www.comeaucomputing.com/4.0/docs/userman/export.html and http://www.comeaucomputing.com/4.3.0/minor/win95+/43stuff.txt for some details. Hopefully as other compilers reach compliance with Standard C++, this situation will change. In the example above, using export means returning to the original code which produced the linker errors, and making a change: declare the template in xyz.h with the export keyword:
// xyz.h
export
// ... ORIGINAL contents of xyz.h ...
The ctor and dtor in xyz.cpp will be exported simply by virtue of #including xyz.h, which it already does. So, in this case you don't need xyztir.cpp, nor the instantiation request at the end of xyz.cpp, and you don't need the ctor or dtor bodies manually brought into xyz.h. With the command line shown earlier, it's possible that the compiler will do it all for you automatically.
See this explanation for its use
Quite a few compilers don't support it, either because it's too new or, in the case of gcc, because they disapprove of it.
This post describes standard support for many compilers.
Visual Studio support for new C / C++ standards?
See here and here for Herb Sutter's treatment of the subject.
Basically: export has been implemented in only one compiler - and in that implementation, export actually increases the coupling between template definition and declaration, whereas the only point in introducing export was to decrease this coupling.
That's why most compilers don't bother. I would have thought they would just remove export from the language in C++0x, and in the end that is what happened: exported templates were dropped in C++11, with the keyword kept reserved. Maybe some day there will be a good way to implement something with the intended use.
To put it simply:
export lets you separate the declaration (i.e. the header) from the definition (i.e. the code) when you write your template classes. If export is not supported by your compiler then you need to put the declaration and definition in one place.
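For completeness, this is roughly what the separation looked like in source form (a sketch of the removed C++98 feature, with made-up names; of the compilers mentioned here essentially only the Comeau/EDG front end ever accepted it, so don't expect it to build with a current gcc or MSVC):

// maxval.h -- declaration only; no template body in the header
export template <typename T>
T max_of(T a, T b);

// maxval.cpp -- the definition lives in exactly one translation unit
#include "maxval.h"

export template <typename T>
T max_of(T a, T b) { return a < b ? b : a; }

// main.cpp -- uses the template with only the declaration in scope
#include "maxval.h"

int main() { return max_of(1, 2); }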
Export is a feature that introduces a circular dependency between linker and compiler. As others noted, it allows one translation unit to contain the definition of a template used in another. The linker will be the first to detect this, but it needs the compiler for the instantiation of the template. And this involves real hard work, like name lookup.
Comeau introduced it first, about 5 years ago IIRC. It worked quite well on the first beta release I got. Even test cases like A<2> using B<2> using A<1> using B<1> using A<0> worked, provided templates A and B came from different TUs. Sure, the linker was repeatedly invoking the compiler, but all name lookups worked OK: the instantiation of A<1> found names from A.cpp that were invisible in B.cpp.
Standard Features Missing From VC++ 7.1. Part II: export
The only compilers that support exported templates at the moment (as far as I know) are Comeau, the one that came with Borland C++ Builder X but not the current C++ Builder, and Intel (at least unofficially, if not officially, not sure).