Replace Visual Studio standard library - C++

I am on Windows, and suppose I want to use a different implementation of the standard C++ library for my projects - say, libstdc++ or libc++.
Is there a way to persuade Visual Studio to use it instead of the MSVC library, so I can still use #include <algorithm> and not #include <custom/algorithm>? I believe I could achieve this by simply adding the path to my headers to the project, but I am looking for a more "system-wide" way, so I wouldn't have to repeat it for every single project.
Would it actually be worth the hassle - specifically in terms of modern C++ features being available / standard-compliant?
If so, what would be the cons of such a replacement, apart from the possibility of using some features that are not present in other implementations?
Particularly, answers to this question mention that there may be compatibility issues with other libs - does this apply only to the Linux world, or would I have problems on Windows too?
Note: this is mostly a theoretical question - I'm fine with the MSVC library, but I'd really like to know more about different stdlib implementations.

Theoretically it's not impossible to swap out stdlib implementations. With clang, you can choose between libc++ (clang's) and libstdc++ (GCC's).
However, in practice, stdlib implementations are often tied fairly fundamentally to the internals of the compiler they ship with, especially when it comes to newly added C++ features, and nowhere is this truer than with Visual Studio.
Could you make it work with a lot of hacking around? Maybe. Would it be worthwhile? I very much doubt it. Even if you succeeded, you would have sacrificed a reproducible build environment and would be relying on some deeply dark arts. Your project would not be reusable.
There is no indication in your question as to why you think you need to switch implementations, but it seems unlikely that any reason you could come up with would be worth the trouble.

Related

How to find Boost libraries that do not contain any platform-specific code

For our current project, we are thinking to use Boost framework.
However, the project should be truly cross-platform and might be shipped to some exotic platforms. Therefore, we would like to use only Boost packages (libraries) that do not contain any platform-specific code: pure C++ and that's all.
Boost has the idea of header-only packages (libraries).
Can one assume that these packages (libraries) are free of platform-specific code?
If not, is there a way to identify such packages in Boost?
All C++ code is platform-specific to some extent. On the one side, there is this ideal concept of "pure standard C++ code", and on the other side, there is reality. Most of the Boost libraries are designed to maintain the ideal situation on the user-side, meaning that you, as the user of Boost, can write platform-agnostic standard C++ code, while all the underlying platform-specific code is hidden away in the guts of those Boost libraries (for those that need them).
But at the core of this issue is the problem of how to define platform-specific code versus standard C++ code in the real world. You can, of course, look at the standard document and say that anything outside of it is platform-specific, but that's nothing more than an academic discussion.
If we start from this scenario: assume we have a platform that only has a C++ compiler and a C++ standard library implementation, and no other OS or OS-specific API to rely on for other things that aren't covered by the standard library. Well, at that point, you still have to ask yourself:
What compiler is this? What version?
Is the standard library implementation correct? Bug-free?
Are those two entirely standard-compliant?
As far as I know, there is essentially no universal answer to this and there are no realistic guarantees. Most exotic platforms rely on exotic (or old) compilers with partial or non-compliant standard library implementations, and sometimes have self-imposed restrictions (e.g., no exceptions, no RTTI, etc.). An enormous amount of "pure standard C++ code" would never compile on these platforms.
Then, there is also the reality that most platforms today, even really small embedded systems, have an operating system. The vast majority of them are POSIX-compliant to some level (except for Windows, but Windows doesn't run on exotic platforms anyway). So, in effect, platform-specific code that relies on POSIX functions is not really that bad, since most exotic platforms are likely to have them, for the most part.
I guess what I'm really getting at here is that this pure dividing line that you have in your mind about "pure C++" versus platform-specific code is really just an imaginary one. Every platform (compiler + std-lib + OS + ext-libs) lies somewhere along a continuum of level of support for standard language features, standard library features, OS API functions, and so on. And by that measure, all C++ code is platform-specific.
The only real question is how wide of a net it casts. For example, most Boost libraries (except for recent "flimsy" ones) generally support compilers down to a reasonable level of C++98 support, and many even try to support as far back as early 90s compilers and std-libs.
To know whether a library, part of Boost or not, has wide enough support for your intended applications or platforms, you have to define the boundaries of that support. Just saying "pure C++" is not enough; it means nothing in the real world. You cannot say that you will be using C++11 compilers right after you've held up Boost.Thread as an example of a library with platform-specific code. Many C++11 implementations have very flimsy support for std::thread, but others do better, and that issue is just as much of a "platform-specific" issue as using Boost.Thread will ever be.
The only real way to ever be sure about your platform support envelope is to actually set up machines (e.g., virtual machines, emulators, or real hardware) that provide representative worst cases. You have to select those worst-case machines based on a realistic assessment of what your clients may be using, and you have to keep that assessment up to date. You can create a regression test suite for your particular project, using the particular (Boost) libraries, and run that suite on all your worst-case test environments. Whatever doesn't pass the test doesn't pass the test; it's that simple. And yes, you might find out in the future that some Boost library won't work on some new exotic platform. If that happens, you need to either get the Boost dev team to add code to support it, or rewrite your code to get around it. But that's what software maintenance is all about, and it's a cost you have to anticipate; such problems will come not only from Boost, but from the OS and from the compiler vendors too! At least with Boost you can fix the code yourself and contribute the fix back, which you can't always do with OS or compiler vendors.
We had "Boost or not" discussion too. We decided not to use it.
We had some atypical hardware platforms to serve with one source code base. In particular, running Boost on AVR was simply impossible, because RTTI and exceptions, which Boost requires for a lot of things, aren't available there.
There are parts of Boost which use compiler-specific "hacks" to, e.g., get information about class structure.
We tried splitting out the packages, but the interdependencies are quite high (at least they were 3 or 4 years ago).
In the meantime, C++11 was underway and GCC started supporting more and more of it. With that, many reasons to use Boost faded (see: Which Boost features overlap with C++11?). We implemented the rest of our needs from scratch (with relatively low effort, thanks to variadic templates and other TMP features in C++11).
After a steep learning curve we have all we need without external libraries.
At the same time, we pondered the future of Boost. We expected the newly standardized C++11 features to be removed from Boost. I don't know the current roadmap for Boost, but at the time our uncertainty made us vote against it.
This is not a real answer to your question, but it may help you decide whether to use Boost. (And sorry, it was too large for a comment.)

C++ Standard Library Portability

I work on large-scale, multi-platform, real-time networked applications. The projects I work on lack any real use of containers or the Standard Library in general - no smart pointers or really any "modern" C++ language features. Lots of raw dynamically allocated arrays are commonplace.
I would very much like to start using the Standard Library and some of the C++11 spec; however, there are many people also working on my projects who are against it, because "the STL / C++11 isn't as portable; we take a risk using it". We run software on a wide variety of embedded systems as well as fully fledged Ubuntu/Windows/Mac OS systems.
So, to my question; what are the actual issues of portability with concern to the Standard Library and C++11? Is it just a case of having g++ past a certain version? Are there some platforms that have no support? Are compiled libraries required and if so, are they difficult to obtain/compile? Has anyone had serious issues being burnt by non-portable pure C++?
Library support for the new C++11 standard is pretty complete in Visual C++ 2012, gcc >= 4.7, and Clang >= 3.1, apart from some concurrency stuff. Compiler support for all the individual language features is another matter. See this link for an up-to-date overview of supported C++11 features.
For an in-depth analysis of C++ in an embedded/real-time environment, Scott Meyers's presentation materials are really great. They discuss the costs of virtual functions, exception handling, templates, and much more. In particular, you might want to look at his analysis of C++ features such as heap allocations, runtime type information, and exceptions, which have indeterminate worst-case timing guarantees - something that matters for real-time systems.
It's those kinds of issues, and not portability, that should be your major concern (if you care about your granny's pacemaker...).
Any compiler for C++ should support some version of the standard library. The standard library is part of C++; not supporting it means the compiler is not a C++ compiler. I would be very surprised if any of the compilers you're using at the moment didn't portably support the C++03 standard library, so there's no excuse there. Of course, the compiler will have to have been updated since 2003, but unless you're compiling for some archaic system that is only supported by an archaic compiler, you'll have no problems.
As for C++11, support is pretty good at the moment. Both GCC and MSVC have a large portion of the C++11 standard library supported already. Again, if you're using the latest versions of these compilers and they support the systems you want to compile for, then there's no reason you can't use the subset of the C++11 standard library that they support - which is almost all of it.
C++ without the standard library just isn't C++. The language and library features go hand in hand.
There are lists of supported C++11 library features for GCC's libstdc++ and MSVC 2012. I can't find anything similar for LLVM's libc++, but they do have a clang c++11 support page.
The people you are talking to are confusing several different issues. C++11 isn't really portable today. I don't think any compiler supports it 100% (although I could be wrong); you can get away with using large parts of it if (and only if) you limit yourself to the most recent compilers on two or three platforms (Windows and Linux, and probably Apple). While these are the most visible platforms, they represent but a small part of all machines. (If you're working on large scale networked applications, Solaris will probably be important, and Sun CC. Unless Sun has greatly changed since I last worked with it, that means that there are even parts of C++03 that you can't count on.)
The STL is a completely different issue. It depends partially on what you mean by the STL, but there is certainly no portability problem today in using std::vector. locale might be problematic on a very few compilers (it was with Sun CC—with both the Rogue Wave and the STLport libraries), as might some of the algorithms, but for the most part, you can pretty much count on all of C++03.
And in the end, what are the alternatives? If you don't have std::vector, you end up implementing something pretty much like it. If you're really worried about the presence of std::vector, wrap it in your own class—if ever it's not available (highly unlikely, unless you go back with a time machine), just reimplement it, exactly like we did in the pre-standard days.
Use STLPort with your existing compiler, if it supports it. It is no more than a library of code, and you use other libraries without problems, right?
Every permitted implementation-defined behaviour is listed in the publicly available standard draft. There is next to nothing less portable in C++11 than in C++98.

Embedded C++ project - smart pointer support wanted. Possible portability issues?

I'm working on a big project for embedded systems.
The project is a library and some binaries that must be integrated into customer's code/solution.
So, it must be as OS/platform independent as possible.
We've been working on embedded Linux so far without problems. However, it is possible that non-Linux-based platforms will join the fun in the near future.
To illustrate the kind of platform we are working with: they must be capable of running demanding modules such as a Java Virtual Machine.
I'm not sure which kind of platform may show up and what kind of compilers they may offer.
So I'm a little worried about using advanced C++ features or libraries that may cause a lot of trouble. Mainly, I want to avoid the possibility of incompatibility for that reason.
We are refactoring a few C++ modules of our solution. They are really tricky and smart pointers support would help a lot.
At first, I thought about writing a custom smart pointer package, but that seems a little risky to me (bugs here would cause a huge headache).
So I thought about using boost's smart pointers.
Do you guys think I'm going to have trouble in the future if I use the boost's smart pointers?
I tried to extract the Boost smart pointer package using bcp; however, a lot of other things come along with it - something like 4 MB of code.
The extracted directories are:
config/compiler
config/stdlib
config/platform
config/abi
config/no_tr1
detail
smart_ptr
mpl (and subdirs)
preprocessor (and subdirs)
exception (and subdirs)
type_traits (and subdirs)
That doesn't seem very portable to me (but I may be wrong about it).
What do you guys think about it?
Thanks very much for your help.
Newer compilers include shared_ptr as part of TR1/C++11. If you have a reasonably modern compiler - which you really want to have, because of C++11 - then it should not be problematic.
If you do not right now have a customer who cannot use TR1, then rock on with it. You can deal with future customers when they arrive - YAGNI applies here, and smart pointers are very important. As are C++11 features like move semantics.
However, if you were desperate, you could roll your own shared_ptr- the concept is not particularly complex.
Don't hesitate with using smart pointers. The smart pointer package you extracted should be portable to all decent compilers.
If it doesn't work with your compiler, you can replace conflicting parts of the code manually. Boost code is more complicated because it contains workarounds for various compiler bugs and missing functionality. That's one of the reasons why Boost.Preprocessor and Boost.TypeTraits were added.
Boost is very portable; the source code size of the library is no indication of target image size; much of the library code will remain unused and will not be included in the target image.
Moreover, most common (and not-so-common and obsolete) 32-bit platforms are supported by "bare-metal" ports of GCC. However, while GCC is portable without an OS, GNU libc targets POSIX-compliant OSes, so bare-metal and non-POSIX ports usually use alternative libraries such as uClibc or Newlib. On top of these, GNU libstdc++ will run happily, and so will many Boost libraries. Parts of Boost such as threads will need porting for unsupported targets; purely data-structure-related features such as smart pointers have no target environment dependencies.

Developing cross-platform C++11 code [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
With C++03 it was (and still is) possible to write cross-platform code with both MSVC and GCC, sharing C++ code bases between Windows, Linux and Mac OS X.
Now, what is the situation with C++11? It seems that different C++ compilers implement different features of C++11. To build cross-platform C++11 code, is it safe to take MSVC10 (VS2010) as a kind of "least common denominator"? i.e. if we restrict the approved C++11 features to those implemented by MSVC10, will the resulting C++11 code be compilable with GCC (and so usable on both Linux and Mac OS X) ?
Or is it just better to wait for C++11 compilers to mature and stick with C++03 if we need cross-platform code?
Thanks.
You can compile code for Windows using GCC. You don't need to use Microsoft's compiler.
If you want to use C++11 features painlessly at the moment, that's going to be your best solution. Microsoft still has yet to implement a lot of C++11, and not all of it is slated to be in VS11, either.
Otherwise, yes, you can obviously just use the subset of the C++11 features that are supported by the compiler implementation that represents the lowest-common-denominator. You'll need to check and make sure that that is Microsoft's compiler for all of the new features rather than just assuming that it is.
I don't believe GCC has gotten around to everything yet, and there's no guarantee that their implementation of all the features is perfect and matches Microsoft's 100%. Writing completely portable code is and has always been hard.
Using only C++03 features is obviously the safe approach, but it doesn't allow you to use C++11 features (obviously). Whether or not that's important is a decision that only you can make.
C++11 is not ready for prime time yet, as you already figured out.
Not only is the parsing stage still being worked out by the various compilers, but there is also the issue that some, while appearing to accept some features, may have quirks and bugs in the releases you currently have.
The only sound approach I can think of is to first select the compilers you want to use:
you can use gcc/Clang on Windows (with libstdc++); however, this will prevent you from interacting with libraries compiled with VC++
you can on the other hand validate your code for both gcc/Clang and VC++ (and perhaps a few others if you need to)
Once you have determined the compilers you want to use, you then have to pick the features of C++11 that you want to use, and that work on all those compilers.
gcc is probably the more advanced here
Clang does not have lambdas, but has move semantics and variadic templates
VC++ is the most behind I think
And you need to set up a test suite with all those compilers, on all the platforms you target, and be especially wary of possible code generation issues. I recommend using Valgrind on Linux, for example, and perhaps Purify (or an equivalent) on Windows, as they both help spot those runtime issues.
Beware that both VC++ and g++ may have extensions accepted by default that are not standard, and may also base their interpretation of the code on previous drafts of C++11.
Honestly, for production use, I think this is still a bit wonky.
If you are writing new code, you are probably not releasing it tomorrow.
So plan for your release date. There are some features that will be accepted more slowly than the rest - mostly hard-to-implement features and duplicated features (like the range-based for loop).
I wouldn't worry much about using the new library features, those are already supported very well across all compilers.
Currently there isn't any least common denominator, since Microsoft decided to concentrate on the library first, while the rest has gone (mostly) for language features.
This depends largely on your project. If you ship only binaries, you need to figure out a toolset you want to use and stick to what that toolset supports. Should your team use different tools, everybody should make sure their code builds with the common build system (be it C++03 or C++11).
The situation changes as soon as you ship headers that contain more than just declarations. First of all, you need some infrastructure to determine what is supported by which compiler. You can either write those tests yourself and integrate them with your build system, or stick to Boost.Config. Then you can #ifdef platform-dependent code. This sounds simple at first but really isn't. Every time you have C++11 code that can be implemented with C++03 workarounds, you want to have both versions available for your users (e.g., variadic templates vs. the preprocessor). This leads to duplicated code and comes with a significant maintenance cost. I usually only include C++11 code if it provides a clear benefit over the workaround (better compiler error messages (variadic templates vs. macros), better performance (move semantics)).
Visual Studio's support for C++11 is quite good, so if you use GCC 4.7 and VS2010 you will be able to use an ample set of the most interesting features of C++11 while remaining cross-platform.
Support for C++11 overview for VC10 and VC11
http://blogs.msdn.com/b/vcblog/archive/2011/09/12/10209291.aspx
Table for all the compilers:
https://wiki.apache.org/stdcxx/C++0xCompilerSupport
GCC C++11 support:
http://gcc.gnu.org/projects/cxx0x.html
Also related: C++11 features in Visual Studio 2012
Use only those features of C++11 at the moment which improve your code in some manner.
Let me explain: I don't look up C++11 features in order to use them; rather, when they solve my problem, I adopt them. (This is the way I learned about them, all on SO.) This approach will change in the future, but for now this is what I do.
I currently use only a few features of C++11, which incidentally work in both VS2010 and GCC.
Also, if there is a great feature you want to use and VS doesn't have it, why not use GCC? It is cross-platform, so it will work on Windows as well.

When should you use an STL other than the one that comes with your compiler?

I was curious about STL implementations outside of what's packaged with gcc or Visual Studio, so a quick Google search turned up a few results, such as:
Apache stdcxx
uSTL
rdeSTL
Under what circumstances should one use an alternative standard template library?
For instance, Apache's page has a list including items such as "full conformance to the C++ standard" and "optimized for fast compiles and extremely small executable file sizes". If it's so good, why wouldn't it replace libstdc++?
For the sake of completeness, here are some of the other STL implementations:
STLPort
STXXL (which is sort of special purpose, meant for large data sets that won't fit in memory)
Dinkumware (commercial)
SGI STL
libstdc++ (GCC's implementation)
I have never had to use an STL version other than the one packaged with the compiler. But here are some points that come to mind.
Thread-safety: The STL from apache provides a compile switch to turn on/off some thread-safety features.
Localization: Again the STL from apache comes with nice support for many different locales.
Data structures: You might need a basic_string implementation that is based on COW (copy-on-write), and the STL version that came with your compiler doesn't offer that.
Non-standard extensions: Particular features you like from some other STL implementations. For example, hash_map (and related) versions from Dinkumware (which ships with Visual Studio) have a significantly different design from hash_map (and related) from STLPort.
Binary issues: Constraints in some environments (embedded software) due to code size. In such cases, if you don't need the whole STL, it could be interesting to use a reduced version.
Performance: What if you discovered, after profiling, that the "other" STL implementation gives you significantly better performance for a particular application? (With so many details concerning algorithms and data structures, this could actually be possible.)
Debug mode: Some STL implementations provide nice features for debugging. For instance, checking the ranges of iterators.
I sometimes use STLPort rather than the STL that ships with Visual Studio. Back when VC6 was supported the STL that shipped with it was buggy and so using STLPort (or another STL) made a lot of sense (especially if you were building multi-threaded code).
Now it's often more about performance (size or speed). For example, the STL that ships with VS2008 isn't that friendly in multi-threaded situations, as it uses locking around locale objects, which causes things to synchronise across threads that you wouldn't expect to. (See Convert a number to a string with specified length in C++ for details of one example of this.)
Third parties can implement improved versions of STL that attempt to offer various things, such as smaller size, faster execution, etc. You might choose one of these alternative implementations because you want one of those attributes of their implementation. You might also choose one of them when doing cross-platform development because you want to avoid running into differences in behavior between the gcc and Visual Studio versions of your product (as just one example).
There is no need to wait for a new release of a compiler with a bundled implementation of STL in order to reach out for a fresh implementation of it if you have specific needs.
I've never had a need to use an alternative STL, but I could envision some scenarios where it might be useful to use, for example, the Apache version, if you need small executables because you're developing for an embedded platform.
Another reason might be to use an STL version that guarantees certain things which are not necessarily guaranteed by the standard. For example, to ensure you have non-COW strings so you can write thread-safe code.
STLport has what they call a "power debug mode" that does a whole slew of run-time checking for "the correctness of iterators and containers usage". Helps catch a number of errors that would not be immediately obvious. I highly recommend use of STLport while debugging and testing.
The C++ standard library can be implemented in a variety of ways. Some implementers try to keep up with modern ideas, so using an optimized implementation may result in faster and smaller executables.
Take SCARY for example. Some implementers haven't done it yet, although it reduces STL bloat to a great extent. When you do the following:
#include <algorithm>
#include <vector>
using namespace std;

vector<int> f;
vector<int, MyAllocator> s;  // MyAllocator: some user-defined allocator type
size_t fc = count(f.begin(), f.end(), SomeValue);
size_t sc = count(s.begin(), s.end(), SomeOtherValue);
An "old" implementation could produce two different count functions in the resulting executable, because the type of f is not the same as that of s. That's because the iterator type depends on the type of the vector itself, although it doesn't need to be that way. A better idea is to separate the iterator type out into its own class and provide a typedef in vector; then the compiler produces only one count. That was just an example, but I think there is more to say about the quality of some implementations.
Aside from the reasons already given, I could imagine using a different STL due to debugging support or as a way to guarantee I was not relying on vendor extensions.
It would also be a first step in testing whether a library I was shipping worked well on other platforms.
Folks mentioning STLport have cited performance and portability, but there's also a pretty good debug mode available. I think that's a great reason to be using a different STL, if your current compiler's library is limited in this way.
Aaand ... it looks like Max and I tied on mentioning debugging! ;^)~
One reason is better thread-safety. I was using the default STL that came with Visual Studio (VC6, in fact), then had to shift to STLPort, as it had much better thread-safety.
We are currently using STLPort, an external implementation of the STL, because we have to use (for various reasons) the quite old Microsoft Visual C++ 6.0 compiler (1998 release date), and the compiler-supplied library (by Dinkumware) is of course very out of date.
Debugging has been mentioned by several people, in terms of the ability to switch on extra diagnostic information, however another important aspect is that if you're using the platform's own STL then you may get a better experience in the debugger. This is especially true if you're using Visual Studio which has visualisers for all the standard containers.
STLPort has support for files bigger than 2GB through std::fstreams. Visual Studio 2005/2008 cannot handle files bigger than 2GB.
You can test your STL implementation by displaying: std::numeric_limits<std::streamsize>::max()
Both MSVC++ and GNU g++ come with pretty good implementations of the C++ Standard Library, but there are compilers that don't, and if I had to support such compilers I would look for a 3rd party implementation of STL.