We are using a third-party library in our Visual Studio 2013 C++ project. This library, I suspect, was built against a different STL implementation than ours. We are getting some link errors, but no compile-time errors. As I understand it, the STL is primarily template-based code and thus requires no linking; yet it looks like some parts of the STL do require linking against an STL library. I'm wondering if there is a way to force the STL to be completely inline. Regards.
There is no single answer to this question. In general, C++ is well known to not be "link-compatible" (or "binary compatible") between different versions of compilers (or different brands of compilers, etc, etc). So someone supplying a library SHOULD specify exactly which compiler they expect to be used, and the user of the library will need to use that one. [I've even seen problems using exactly the same version, but two different "builds" of the compiler]
To POSSIBLY find a solution, you'd have to look at what it is the compiler/linker is attempting to find, and then see if that function is present at all in the sources for the STL - if it is, then applying a liberal sprinkling (on the relevant functions) of "always_inline", or whatever that particular compiler uses for that functionality, might help. But chances are that the functions you're missing aren't provided in the header files in the first place. This of course assumes that you actually have the source for the library, so that it can be recompiled [or you can convince the provider to recompile with new settings].
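As a rough sketch of what that sprinkling might look like (my illustration, not part of the original answer; the FORCE_INLINE macro name is made up):

// Hedged sketch: asking the compiler to always inline a function.
// FORCE_INLINE is a hypothetical macro; adjust for your toolchain.
#if defined(_MSC_VER)
#  define FORCE_INLINE __forceinline
#elif defined(__GNUC__) || defined(__clang__)
#  define FORCE_INLINE inline __attribute__((always_inline))
#else
#  define FORCE_INLINE inline   // plain hint as a fallback
#endif

FORCE_INLINE int add_one(int x) { return x + 1; }

Even so, this only helps for functions whose definitions are visible in the headers; it cannot conjure up definitions that exist only in a separately compiled library.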
And you'll potentially still have problems with things that are implementation dependent. The "same version, different build" problem I mentioned above arose because someone who wrote the STL implementation decided to change a constructor parameter from "unsigned int" to "size_t" at some point, which (depending on the platform) can change the size of the data passed to the constructor - so the function does not behave the same when you call it [though in that case the shared library loader detected the mismatch and refused to even load the executable/shared library combination].
[As Lightness Races in Orbit says, the above applies to the "Standard Template Library", which is things like std::vector, std::map, std::random and many other things, but it does not include ALL of the runtime functionality that is required to write any non-trivial C++ program]
STL is primarily template-based code
The "STL" as the term is commonly used (which in itself is inaccurate) relates only to a very small subset of the C++ standard library, and plenty of the C++ standard library is not templates. And that still doesn't take into consideration any of the C++ runtime that your implementation has to link into your project.
So, assuming you mean either the C++ standard library or (more likely) the whole Visual Studio runtime … no, you can't inline it all.
You should probably rebuild the third-party library against your own toolchain.
Why must I #include various libraries, such as iostream?
I have learned that libraries, such as iostream specifically, are part of the C++ 'Standard Library' "which are written in the core language and part of the C++ ISO Standard itself."
This is pretty neat, but, if they are so 'standard' and such a 'core part' of the language, why do I need to #include them? Why is it that I can write a program that uses +, -, *, /, or other things without #including any libraries? [As a person completely unqualified to have an opinion on this matter,] I feel like being able to add two operands together is just as essential as being able to retrieve input from the user (std::cin).
What exactly is the issue? Is it that my specific compiler (using Visual Studio exclusively so far) doesn't have those functions built-in? But the designers chose to innately include simple arithmetic operators instead? I understand the need to do this for an obscure or lesser known library, but not so much iostream.
Secondly, how can I 'look inside' a standard header file, such as iostream? How can I inspect the source code of common header files to see how string compare or cin works? I understand the beauty of abstraction and not having to know the nitty gritty details, but I'm obsessed with knowing how everything works.
Third, are there penalties associated with #including header files? For example, if I #include iostream, but I only use a single operator, such as std::cin, is the entire header file included in the final product? Would my program/file or the machine running it be burdened with the unused portions of iostream?
When looking at a C++ "implementation", you are really looking at two separate parts: the compiler, and the library. Basics like integer arithmetic, pointers, references and the like are defined by the core language, and implemented in the compiler core.
Higher level functions are defined in libraries. Of these, the Standard Library is only one, and defined in the same standard document as the language core.
But the standard library can be, and often is, a separate project that only happens to work in tandem with the compiler. You might use the compiler with a different library implementation, or may use the library implementation with a different compiler.
That you have to #include headers before you can use the library has something to do with efficiency. Both the compiler and you (or any other developer reading your source later on) only have to "know" about those parts of any libraries that are actually #included by your source. The compiler has less looking-up to do when parsing your code, and a developer knows pretty much exactly which part of the documentation to look at for further information -- especially if you resist the urge to write using namespace ... in your source. This becomes even more important later on when you will be using several different libraries with different namespaces.
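A small sketch of what that looks like in practice (my illustration):

#include <iostream> // brings in std::cin, std::cout, ...
#include <string>   // brings in std::string

int main()
{
    std::string name;                  // a reader knows to check <string>
    std::cin >> name;                  // ...and <iostream> for these
    std::cout << "Hello, " << name << '\n';
}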
You should not use the header files for information. Highly abstracted C++ source can be somewhat of a handful to understand, and the headers are usually not written for the end user anyway. You are very much better off by referring to actual documentation, like cppreference.com, instead.
As for the third part of your question, including a header will introduce all the identifiers from that header (and, with C++, likely several other headers as well) into your translation unit. (With C, the library implementation is actually forbidden to introduce any identifiers from other standard headers unless that header was explicitly included.) This should not bother you, though. Things that are not needed do not result in actual code in your resulting binary. With really small example programs and <iostream> specifically, the "burden" might seem comparatively large because something "simple" like std::cin requires significant support (files, streams, buffers and the like). But this effect rapidly diminishes as your program becomes larger, because whatever support code is added to your binary is only added once.
And finally, most of the above is the same in other languages as well, with only minor differences. Java, C# or Python might not have "headers" per se, but in the end the issues with the standard library being separate from the compiler, identifier namespacing, and (as far as compiled languages are concerned) effect on resulting binary are similar across the board.
In C++ the standard library is wrapped in the std namespace and the programmer is not supposed to define anything inside that namespace. Of course the standard include files don't step on each other's names inside the standard library (so it's never a problem to include a standard header).
Then why isn't the whole standard library included by default instead of forcing programmers to write for example #include <vector> each time? This would also speed up compilation as the compilers could start with a pre-built symbol table for all the standard headers.
Pre-including everything would also solve some portability problems: for example, when you include <map> it's defined which symbols are taken into the std namespace, but it's not guaranteed that other standard symbols are not also loaded into it, so (in theory) you could end up with std::vector becoming available as well.
It sometimes happens that a programmer forgets to include a standard header but the program compiles anyway because of an include dependency of the specific implementation. When moving the program to another environment (or just another version of the same compiler), the same source code could then stop compiling.
From a technical point of view I can imagine a compiler just preloading (with mmap) an optimal perfect-hash symbol table for the standard library.
This should be faster to do than loading and doing a C++ parse of even a single standard include file and should be able to provide faster lookup for std:: names. This data would also be read-only (thus probably allowing a more compact representation and also shareable between multiple instances of the compiler).
These are, however, just "shoulds", as I never implemented this.
The only downside I see is that we C++ programmers would lose compilation coffee breaks and Stack Overflow visits :-)
EDIT
Just to clarify: the main advantage I see is for programmers who today, despite the C++ standard library being a single monolithic namespace, are required to know which sub-part (include file) contains which function/class. To add insult to injury, when they make a mistake and forget an include file, the code may still compile or not depending on the implementation (thus leading to non-portable programs).
The short answer is that this is not the way the C++ language is supposed to be used.
There are good reasons for that:
namespace pollution - even if this could be mitigated, because the std namespace is supposed to be self-coherent and programmers are not forced to write using namespace std;. But including the whole library together with using namespace std; would certainly lead to a big mess...
it forces the programmer to declare the modules he wants to use, which avoids inadvertently calling the wrong standard function, because the standard library is now huge and not all programmers know all of its modules
history: C++ still has a strong inheritance from C, where namespaces do not exist and where the standard library is supposed to be used like any other library.
Going in your direction, the Windows API is an example where you have only one big include (windows.h) that loads many other smaller include files. And in fact, precompiled headers allow that to be fast enough.
So IMHO a new language deriving from C++ could decide to automatically declare the whole standard library. A new major release could also do it, but it could break code that makes intensive use of the using namespace directive or that has custom implementations using the same names as some standard modules.
But all the common languages that I know (C#, Python, Java, Ruby) require the programmer to declare the parts of the standard library that he wants to use, so I suppose that systematically making every piece of the standard library available is still more awkward than really useful for the programmer - at least until someone finds a way to declare the parts that should not be loaded. That's why I spoke of a new derivative of C++.
Most of the C++ standard library is template-based, which means that the code it generates ultimately depends on how you use it. In other words, there is very little that could be compiled before you instantiate a template, as in std::vector<MyType> m_collection;.
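A minimal sketch of that point (my illustration; MyType is a made-up type):

#include <vector>

struct MyType { int x; };

// No vector<MyType>-specific code exists in any object file until this
// line forces the template to be instantiated for MyType:
std::vector<MyType> m_collection;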
Also, C++ is probably the slowest language to compile, and there is a lot of parsing work that compilers have to do when you #include a header file that itself includes other headers.
Well, first things first: C++ tries to adhere to "you only pay for what you use".
The standard-library is sometimes not part of what you use at all, or even of what you could use if you wanted.
Also, you can replace it if there's a reason to do so: See libstdc++ and libc++.
That means just including it all without question isn't actually such a bright idea.
Anyway, the committee is slowly plugging away at creating a module system (it takes lots of time; hopefully it will work for C++1z: C++ Modules - why were they removed from C++0x? Will they be back later on?), and when that's done most downsides to including more of the standard library than strictly necessary should disappear, and the individual modules should more cleanly exclude symbols they need not contain.
Also, as those modules are pre-parsed, they should give the compilation-speed improvement you want.
You offer two advantages of your scheme:
Compile-time performance. But nothing in the standard prevents an implementation from doing what you suggest[*] with a very slight modification: that the pre-compiled table is only mapped in when the translation unit includes at least one standard header. From the POV of the standard, it's unnecessary to impose potential implementation burden over a QoI issue.
Convenience to programmers: under your scheme we wouldn't have to specify which headers we need. We do this in order to support C++ implementations that have chosen not to implement your idea of making the standard headers monolithic (which currently is all of them), and so from the POV of the C++ standard it's a matter of "supporting existing practice and implementation freedom at a cost to programmers considered acceptable". Which is kind of the slogan of C++, isn't it?
Since no C++ implementation (that I know of) actually does this, my suspicion is that in point of fact it does not grant the performance improvement you think it does. Microsoft provides precompiled headers (via stdafx.h) for exactly this performance reason, and yet it still doesn't give you an option for "all the standard libraries"; instead it requires you to say what you want in it. It would be dead easy for this or any other implementation to provide an implementation-specific header defined to have the same effect as including all standard headers. This suggests to me that at least in Microsoft's opinion there would be no great overall benefit to providing that.
If implementations were to start providing monolithic standard libraries with a demonstrable compile-time performance improvement, then we'd discuss whether or not it's a good idea for the C++ standard to continue permitting implementations that don't. As things stand, it has to.
[*] Except perhaps for the fact that <cassert> is defined to have different behaviour according to the definition of NDEBUG at the point it's included. But I think implementations could just preprocess the user's code as normal, and then map in one of two different tables according to whether it's defined.
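That <cassert> quirk is easy to see directly; a minimal sketch (my example):

#include <cassert>          // NDEBUG not defined here: assert() is active
void checked(int x)   { assert(x > 0); }   // aborts if x <= 0

#define NDEBUG
#include <cassert>          // <cassert> may legally be included again;
                            // assert() is redefined per NDEBUG at this point
void unchecked(int x) { assert(x > 0); }   // now expands to ((void)0)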
I think the answer comes down to C++'s philosophy of not making you pay for what you don't use. It also gives you more flexibility: you aren't forced to use parts of the standard library if you don't need them. And then there's the fact that some platforms might not support things like throwing exceptions or dynamically allocating memory (like the processors used in the Arduino, for example). And there's one other thing you said that is incorrect: you are allowed to specialize std::swap inside the std namespace for your own class, as long as that class is not itself a template (only full specializations are permitted there, not partial ones).
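A sketch of that last point (my example; Widget is a made-up class):

#include <utility>

class Widget {
    int data_ = 0;
public:
    void swapWith(Widget& other) { std::swap(data_, other.data_); }
};

namespace std {
    // A full specialization of a standard function template for a
    // user-defined type is permitted.
    template <>
    void swap<Widget>(Widget& a, Widget& b) noexcept { a.swapWith(b); }
}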
First of all, I am afraid that having a prelude is a bit late to the game. Or rather, seeing as preludes are not easily extensible, we have to content ourselves with a very thin one (built-in types...).
As an example, let's say that I have a C++03 program:
#include <boost/unordered_map.hpp>
#include <string>  // for std::string, used below

using namespace std;
using boost::unordered_map;

static unordered_map<int, string> const Symbols = ...;
It all works fine, but suddenly when I migrate to C++11:
error: ambiguous symbol "unordered_map", do you mean:
- std::unordered_map
- boost::unordered_map
Congratulations, you have invented the least backward compatible scheme for growing the standard library (just kidding, whoever uses using namespace std; is to blame...).
Alright, let's not pre-include them, but still bundle the perfect hash table anyway. The performance gain would be worth it, right?
Well, I seriously doubt it. First of all because the Standard Library is tiny compared to most other header files that you include (hint: compare it to Boost). Therefore the performance gain would be... smallish.
Oh, not all programs are big; but the small ones compile fast already (by virtue of being small) and the big ones include much more code than the Standard Library headers so you won't get much mileage out of it.
Note: and yes, I did benchmark the file look-up in a project with "only" a hundred -I directives; the conclusion was that pre-computing the "include path" to "file location" map and feeding it to gcc resulted in a 30% speed-up (after using ccache already). Generating it and keeping it up-to-date was complicated, so we never used it...
But could we at least include a provision that the compiler could do it in the Standard?
As far as I know, it is already included. I cannot remember if there is a specific blurb about it, but the Standard Library is really part of the "implementation" so resolving #include <vector> to an internal hash-map would fall under the as-if rule anyway.
But they could do it, still!
And lose any flexibility. For example Clang can use either libstdc++ or libc++ on Linux, and I believe it to be compatible with the Dinkumware derivative that ships with VC++ (or if not completely, at least greatly).
This is another point of customization: if the Standard Library does not fit your needs, or your platform, by virtue of being mostly treated like any other library you can replace part or most of it with relative ease.
But! But!
#include "stdafx.h"
If you work on Windows, you will recognize it. This is called a pre-compiled header. It must be included first (or all benefits are lost) and in exchange instead of parsing files you are pulling in an efficient binary representation of those parsed files (ie, a serialized AST version, possibly with some type resolution already performed) which saves off maybe 30% to 50% of the work. Yep, this is close to your proposal; this is Computer Science for you, there's always someone else who thought about it first...
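To make that concrete, a sketch of the usual Visual Studio arrangement (file name and contents are illustrative, not prescriptive):

// stdafx.h - compiled once with /Yc"stdafx.h"; every other .cpp is then
// built with /Yu"stdafx.h" and pulls in the pre-parsed binary form.
#pragma once
#include <windows.h>
#include <string>
#include <vector>
#include <map>

// every .cpp in the project then begins with:
//   #include "stdafx.h"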
Clang and gcc have a similar mechanism; though from what I've heard it can be so painful to use that people prefer the more transparent ccache in practice.
And all of these will come to naught with modules.
This is the true solution to this pre-processing/parsing/type-resolving madness. Because modules are truly isolated (ie, unlike headers, not subject to inclusion order), an efficient binary representation (like pre-compiled headers) can be pre-computed for each and every module you depend on.
This not only means the Standard Library, but all libraries.
Your solution, more flexible, and dressed to the nines!
One could use an alternative implementation of the C++ Standard Library to the one shipped with the compiler. Or wrap headers with one's definitions, to add, enable or disable features (see GNU wrapper headers). Plain text headers and the C inclusion model are a more powerful and flexible mechanism than a binary black box.
Please try not to judge me too harshly for this question.
Will I have problems if I try to use a DLL (written in C++) in a Visual Studio project that was compiled with the MinGW toolchain (g++)?
If the answer is yes, could someone explain to me why?
As far as code execution goes, if it targets Windows, then it should run.
For interoperability, it depends on the API exposed from the DLL.
If it is a "C" style API then you should be fine.
If it is a C++ API with classes and STL use, then you may run into trouble using different compilers (even different settings in the compilers). C++ doesn't have a standard ABI.
As noted in the comments and probably worth touching on here: the "C" style API is more than just the function calls; compatibility includes (but is not limited to) things such as:
Memory allocation; general rule is "deallocation must be done by the same code as the allocation" - i.e. in the dll such that the matching allocation/deallocation is used (e.g. new and delete, malloc and free, CustomAlloc and CustomDealloc)
Data alignment and packing, not all compilers have the same defaults
Calling conventions, again, not all compilers have the same defaults, when in doubt, be explicit on what that convention should be (e.g. __stdcall)
As for the C++ ABI, in addition to all of the above, incompatibilities include:
Name mangling
STL class and container implementations
Separate and different implementations of new and delete, with custom memory pooling etc.
Exception mechanisms, e.g. generation, and stack unwinding
These are not comprehensive lists, but they go some way to motivating the general advice that when possible, use the same toolchains and same compiler/linker settings.
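As a hedged illustration of the points above, one common shape for a "C" style DLL surface (all names here are made up):

// Exported from the DLL: no name mangling, explicit calling convention,
// opaque handle, and allocation/deallocation kept on the DLL side.
extern "C" {
    typedef struct widget widget;   // opaque: layout never crosses the boundary

    __declspec(dllexport) widget* __stdcall widget_create(int size);
    __declspec(dllexport) void    __stdcall widget_destroy(widget* w);
    __declspec(dllexport) int     __stdcall widget_value(const widget* w);
}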
The problem is not really about the DLL itself; it's about how you describe that DLL to your compiler (i.e. header files and exports).
Those descriptions affect what data layout your code expects (e.g. different compilers may have different default alignment and padding), and even which code path is taken (e.g. resolving to different inline functions and macros, as in the STL case).
Each compiler may also have a different name-mangling scheme, so you may need a bit of hacking to glue the code together.
With very special care on the above issues, you should be fine using a DLL from a different compiler.
No, because it is all one standard. And in fact your C++ application compiled with gcc calls system functions from OS DLL files. So no matter if you call them manually or automatically, it is all the same.
I'm attempting to switch a large project to C++11. I ran into a large number of linker errors which seem to be caused by mismatched namespaces on STL classes between libraries compiled with C++11 and those compiled with C++03.
As an example, say library B is a dependency of A. B has the following templated class as part of its interface.
template <class Type>
class VectorParameter
{
public:
    VectorParameter();
    virtual ~VectorParameter();
    ...
};
Library A instantiates the template with VectorParameter<std::pair<float, float>>.
When I recompiled A with C++11 without recompiling B, I ran into a linker error complaining that
LFE::VectorParameter<std::__1::pair<float, float>>::~VectorParameter() is undefined symbol.
I think the issue here is that library A uses std::__1::pair while B still uses std::pair. Following this reasoning, I assume that I will need to recompile all dependency libraries that refer to STL types in their interfaces.
If this is the case, then migrating a large project to C++11 will require all involved groups to switch at the same time, which doesn't seem very practical on a complex project. What would be the best practice for dealing with this issue?
It is almost certain that the library header files have changed, therefore to remain in compliance with the One Definition Rule you must recompile everything.
You don't specify your platform or compiler/libraries.
There are some interesting notes here (although a bit out of date) about ABI compatibility for GNU libStdc++ - there is some insulation from ABI compatibility at the expense of true compliance. It rather looks as if it's all or nothing here if you use std::pair.
libc++ (which is clang's standard library) takes another approach and intentionally inserts an extra inline namespace (called __1, I believe) into all of its exported symbols, which means it's possible to link both libstdc++ and libc++ in the same executable.
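A simplified sketch (mine, not the real libc++ source) of how that versioning namespace works:

// How an inline namespace versions symbols without changing user code.
namespace std {
inline namespace __1 {                  // "inline" re-exports names into std
    template <class T1, class T2>
    struct pair { T1 first; T2 second; };
}
}

// Source code still says std::pair<float, float>, but the symbol the
// linker sees is mangled as std::__1::pair<float, float>.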
Providing you're not passing STL objects across the boundaries between your old and new libraries, you may well be able to get this to work.
Is it possible for gcc to link against a library that was created with Visual C++? If so, are there any conflicts/problems that might arise from doing so?
Some of the comments in the answers here are slightly too general.
While it's true that, in the specific case mentioned, gcc binaries won't link with a VC++ library (AFAIK), the actual means of interlinking code/libraries is a question of the ABI standard being used.
An increasingly common standard in the embedded world is the EABI (or ARM ABI) standard (based on work done during Itanium development http://www.codesourcery.com/cxx-abi/). If compilers are EABI compliant they can produce executables and libraries which will work with each other. An example of multiple toolchains working together is ARM's RVCT compiler which produces binaries which will work with GCC ARM ABI binaries.
(The code sourcery link is down at the moment but can be google cached)
I would guess not. C++ compilers usually have quite different methods of name mangling, which means that the linkers will fail to find the correct symbols. This is a good thing by the way, because C++ compilers are allowed by the standard to have much greater levels of incompatibility than just this - incompatibilities that will cause your program to crash, die, eat puppies and smear paint all over the wall.
Usual schemes to work around this involve language-independent techniques like COM or CORBA. A simpler, sanctioned method is to use C "wrappers" around your C++ code.
It is not possible. It's usually not even possible to link libraries produced by different versions of the same compiler.
No. Plain and simple :-)
Yes, if you make it a dynamic link and make the interface C-style. lib.exe will generate import libraries which are compatible with the gcc toolchain.
That will resolve your linking problems. However that is just the start of the problem.
Your larger problems will be things like exceptions, and memory allocation.
You must ensure that no exceptions cross from VC++ to gcc code; there are no guarantees of compatibility.
Every object from the VC++ library will need to live on the heap because:
Do not mix gcc new/delete with anything from VC++; bad things will happen. This goes for object construction on the stack too. However, if you expose an interface like create_some_obj()/delete_some_obj(), you do not end up using gcc's new to construct VC++ objects. Maybe make a small handler object that handles construction and destruction, as sketched below. This way you preserve RAII, but still use the C interface for the true interface.
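A sketch of that handler-object idea (my example; the some_obj names follow the hypothetical interface above):

struct some_obj;                           // opaque type owned by the VC++ DLL
extern "C" some_obj* create_some_obj();
extern "C" void delete_some_obj(some_obj*);

// RAII handle on the gcc side: construction and destruction both go
// through the DLL, so gcc's new/delete never touch the object.
class SomeObjHandle {
    some_obj* p_;
public:
    SomeObjHandle() : p_(create_some_obj()) {}
    ~SomeObjHandle() { delete_some_obj(p_); }
    SomeObjHandle(const SomeObjHandle&) = delete;
    SomeObjHandle& operator=(const SomeObjHandle&) = delete;
    some_obj* get() const { return p_; }
};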
Calling conventions must be correct. In VC++ there is cdecl and stdcall. If gcc tries to call an imported function with the wrong calling convention, bad things will happen.
The bottom line is keep a simple interface that is ANSI C compliant, and you should be fine. The fact that crazy C++ goes on behind is okay, as long as it is contained.
Oh and make sure all the code is re-entrant, or you risk opening a whole nother can-o-worms.