I'm trying to use the C++11 emplace() function of a map, but NetBeans says a map has no such function. Looking at the headers, it's "right" - there is no mention (on Fedora 16) of emplace(). Which is all well and good, you know... but I kinda wanna use emplace().
How do I go about enabling this functionality? I know for a fact that it's existed since March of last year, probably earlier. A thorough search shows that emplace() basically only exists on my system in the headers for lists and vectors. Since there hasn't been a major revision of C++ in almost a decade, I'm not having any luck finding documentation on what to do if the libraries are "wrong".
If your implementation doesn't support something, you have two choices:
don't use the feature
use another implementation which supports what you need.
The fact that there is a new standard doesn't widen the choices. In fact, it reduces them, as you'll have additional difficulty finding an implementation which supports everything that you want for each one of your targets.
Note that for pure library things, the other implementation could be one you write yourself: compatibility wrappers tend to appear during transition periods. But you have to pay attention to name-clash effects (the visibility of a compatibility wrapper may introduce ambiguities into the code once the feature appears in its standard place).
GCC 4.7 does not support this, as there were unresolved issues with the standard at the time. It is implemented in 4.8 and above. You will need -std=c++11.
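For reference, a minimal sketch of the call that becomes available once you are on GCC 4.8 or later with -std=c++11 (compiled with, e.g., g++ -std=c++11 example.cpp):

#include <map>
#include <string>

int main() {
    std::map<int, std::string> m;
    m.emplace(1, "one");   // constructs the pair<const int, std::string> in place
    m.emplace(2, "two");   // no temporary std::pair needed at the call site
}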
Related
I am using C++ in Xcode version 8.1. I need to use the functionality of boost::any but am strongly opposed to pulling any part of Boost into our project (let's not debate it please).
I see that std::any is "merged into C++17" here.
I want to use this in my Xcode 8.1 project. I have tried using -std=c++1z as a custom flag on the project, but I can't seem to find a header for it.
How can I use std::any or std::experimental::any in my Xcode project?
Can I download the appropriate headers from an implementation and throw them into my project's source code? Or, even better, is it actually available to me now in my version of Xcode/Clang/C++?
You can't say "I want the default Xcode compiler [which has no support for any]" and at the same time request it to support any. You also can't mix standard library headers for different compiler versions.
You can either
use a compiler version that provides std::any or
use any third party library that provides another any-like type.
Your installation does not support the C++17 standard. std::any simply is not available to you unless you get a compiler with at least experimental support for what you want.
Clang Cxx Status
You'd probably have a lot better luck just using boost::any.
If you're really set on not bringing a third-party library into play, the reality is that creating your own any isn't that difficult. I don't normally recommend reinventing the wheel, but in this case the effort is modest.
Here's a SO question with an answer showing a way to do 'any'.
It is illegal for a third-party library to inject new types into std. You can upgrade your compiler, switch to a different standard library that your compiler supports, use a third-party library that provides an any in another namespace, or write your own.
The first you said no to.
The second is hard, as Xcode does not advertise what its compiler actually is. There are generally two common standard libraries that work with clang/LLVM-derived compilers: libc++ and libstdc++. That kind of swap tends to be very expensive, even if the other one has the feature you want.
The third is basically "use boost" or equivalent.
The last isn't hard; a few days' work (mostly bugs after the fact), based on writing types of similar complexity, assuming "good enough" is good enough (i.e., not getting caught up in ideal exception guarantees, matching the standard exactly, etc.). An implementation will require hyperbolic effort to approach perfection, naturally.
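To give a feel for the scale of that last option, here is a hedged, illustrative sketch of a move-only any-like type built on type erasure. The name simple_any and its members are made up for this example; it deliberately skips copying, the small-object optimisation and bad_any_cast, so it is nowhere near a drop-in std::any replacement.

#include <memory>
#include <typeinfo>
#include <utility>

// Illustrative only: type erasure via a virtual base and a templated holder.
class simple_any {
    struct base {
        virtual ~base() {}
        virtual const std::type_info& type() const = 0;
    };
    template <class T>
    struct holder : base {
        explicit holder(T v) : value(std::move(v)) {}
        const std::type_info& type() const override { return typeid(T); }
        T value;
    };
    std::unique_ptr<base> p_;
public:
    simple_any() {}
    template <class T>
    explicit simple_any(T v) : p_(new holder<T>(std::move(v))) {}
    bool has_value() const { return p_ != nullptr; }
    template <class T>
    T* cast() {                                   // nullptr on type mismatch
        holder<T>* h = dynamic_cast<holder<T>*>(p_.get());
        return h ? &h->value : nullptr;
    }
};

int main() {
    simple_any a(42);
    if (int* p = a.cast<int>()) { ++*p; }         // succeeds: stored type is int
    double* d = a.cast<double>();                 // fails: returns nullptr
    (void)d;
}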
Xcode 9.0 beta can now be downloaded (https://developer.apple.com/download/). It supports the C++17 language option.
Edit: Xcode 9.2 is publicly available with std::any support.
I am a new user of the Boost library. I find myself thinking more about adopting Boost for a number of reasons. From what I can tell, the Boost library is a sort of skunkworks sandbox where various C++ TR features headed for standardization are tried out before being adopted by the C++ committee - think boost::filesystem and boost::regex.
As an example, I was trying out some of the C++11 regex features in Visual Studio via the #include <regex> header - this worked great until I ported to a target PowerPC platform which, at the time, used CodeSourcery's GCC 4.7.3. Unfortunately, I realized at run time that much of the regex implementation was incomplete or empty (even though it compiled) - with a bit of homework I should have realized this beforehand; however, now that GCC 4.8.x is out, the implementation is part of the standard C++ library (libstdc++-v3), so it is a different story now.
In an ideal world, the standard library should be like developing for Java - write once, deploy everywhere - but that is not a reality. I would eventually like to move to the standard library implementation rather than Boost's regex and filesystem implementations.
My question, given the above regex history, is: how should developers use Boost? Is it possible to do a simple search-and-replace of the Boost headers and namespaces once the features are adopted by the standard library, or are there many more things to consider? I would like to use pure C++11 code without dependency on 3rd party libraries.
The amount of work required to move from a Boost library to its C++11 counterpart depends on the degree of C++11 conformance of that particular Boost library. In the simplest case it can be a matter of including another set of headers and using another namespace (see the sketch below).
In a more complicated case, a Boost library may have some subtle non-conformance with C++11 (e.g., in Boost.Thread v1, ~thread used to call detach()) - such things might "silently" break the code's correctness, but they are easy to fix.
Finally, a Boost library may implement functionality that doesn't exist in C++11 (e.g., boost::bind can be extended via a get_pointer function). Apparently, porting such code to C++11 would be far from trivial.
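For the simplest case mentioned above, the change can be almost mechanical. A hedged sketch (the function name is made up, and it assumes your standard library ships a complete <regex>, which, as the question notes, older GCC releases did not):

#include <regex>
#include <string>

// Previously: #include <boost/regex.hpp> and the boost:: namespace.
bool is_lowercase_word(const std::string& s) {
    static const std::regex re("[a-z]+");    // was: boost::regex re("[a-z]+");
    return std::regex_match(s, re);          // was: boost::regex_match(s, re);
}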
Let's begin with your statement
I would like to use pure C++11 code without dependency on 3rd party
libraries.
It is clear that this is not possible now. You will have to use 3rd party libraries for any non-trivial program.
Unfortunately, C++ with Boost is not a complete platform either. You need 3rd party libraries to do things that are available out of the box in languages like Java, C#, Python, etc.
So, you have to select libraries according to your requirements: performance, supported platforms, multithreading etc.
Again, Boost shouldn't be your default choice. It is not as useful now as it was 10 years ago. Most of the must-have stuff has already gone into the C++ standard library.
If you are supporting an existing C++ codebase, find the best C++ library for your needs (e.g. re2 for regex). If you are starting a new project, I would suggest using Qt as a platform.
A "simple" way to migrate is to use a preprocessor define as a "using Boost" switch: put all Boost-dependent code inside an #if/#else, carefully writing it so that sections with no C++11 equivalent do not break (or at least behave as expected). Then, by simply not defining the switch at the beginning of your code, the C++11 features are used instead, as sketched below.
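A hedged sketch of that pattern; the macro name MYPROJECT_USE_BOOST and the compat namespace are made up, and shared_ptr is just one example of a feature that exists in both places:

// Define MYPROJECT_USE_BOOST in the build to keep using Boost; leave it
// undefined to pick up the C++11 standard-library equivalent instead.
#ifdef MYPROJECT_USE_BOOST
    #include <boost/shared_ptr.hpp>
    namespace compat { using boost::shared_ptr; }
#else
    #include <memory>
    namespace compat { using std::shared_ptr; }
#endif

// Call sites always spell it compat::shared_ptr, so flipping the macro
// switches implementations without touching them.
compat::shared_ptr<int> p(new int(42));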
See this and this
One link points to an old Stack Overflow question, the other to an interesting talk given by Stephan T. Lavavej.
This is a very uninformed question, but:
I would like to start using C++11. Can I continue to use my large collection of libraries which were compiled with my old gcc 4.2.1 compiler or do I need to recompile them all with a new compiler? I would think (or hope) the answer is no, but I am only a dabbler.
So that I may have at least part of my ignorance removed, can you explain why in either case?
Thanks
Yes, you should.
The weaker reason is not binary compatibility; the problem is about expectations. A C++11-enabled compiler will expect a number of features to be available (move constructors among them) and use them when appropriate. And that's only scratching the surface: there are several other incompatibilities (auto, 0 and its interaction with pointers, ...).
It means that any inline method in a header may suddenly be interpreted differently, in light of the C++11 Standard.
The stronger reason is that each version of the compiler comes with its own Standard Library implementation. You don't really want to start mixing various versions, and especially not when they have undergone such major changes (once again, rvalue references...).
Believe me, it's simpler to recompile now rather than having the nagging thought that each bug that appears may be due to an incompatibility between old and new libraries...
It's a compiler question. For instance, if you have a single compiler which supports both C++03 and C++11 depending on a compiler switch, you can very likely mix libraries. There's nothing new in C++11 that forces an incompatibility with C++03.
However, you mentioned that your libraries were compiled with GCC 4.2.1. Since C++11 was just an idea back then, it's quite likely that GCC was implemented in ways that turned out to be incompatible with C++11.
For instance, std::list::size() must be O(1) in C++11, but it could be O(N) in C++03. GCC chose a O(N) implementation back then, not knowing the future requirements. The current GCC std::list::size implementation is compatible with both C++11 and C++03, since O(1) is better than O(N).
The answer totally depends on the API of your library and its implementation dependencies.
The conditions that guarantee that you don't need to recompile are:
-- Your library doesn't use C++ specific features in its public API.
That implies:
Your library doesn't provide classes/class-templates/function-templates/other C++ specific stuff.
You don't accept/return C++ classes to/from your library functions.
You don't pass function parameters by reference.
You don't provide public inline functions with C++ specific implementations.
You don't throw exceptions from your functions.
You don't (because you have no reason to) include C++ specific library headers in your public headers. (It wouldn't hurt if you do include them but everything must be OK if you remove such includes. It is like an indicator.)
-- Your library depends only on libraries that are binary-compatible with those available in your new build environment.
If those conditions are not satisfied then there is no guarantee that your library will work correctly. In most cases it is much easier to recompile than to make sure that everything works correctly.
Either way, if you are going to make a binary-compatible API that satisfies the conditions above, then it is much better to design and implement a C-language API. This way you automatically satisfy the above conditions and don't fall into the sin of writing C-style C++ code.
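A hedged sketch of what such a C-language API can look like; all names here are illustrative. The header exposes only an opaque handle and plain C functions, while the implementation file is free to use C++ internally:

/* parser_api.h -- public C API; no C++ types cross the boundary */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct parser parser;                 /* opaque handle */
parser* parser_create(void);
void    parser_feed(parser* p, const char* data, unsigned long len);
void    parser_destroy(parser* p);

#ifdef __cplusplus
}
#endif

// parser_api.cpp -- internals may use any C++ they like
#include <string>
struct parser { std::string buffer; };
extern "C" parser* parser_create(void)   { return new parser(); }
extern "C" void parser_feed(parser* p, const char* data, unsigned long len) {
    p->buffer.append(data, len);
}
extern "C" void parser_destroy(parser* p) { delete p; }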
You can use large parts of C++11 without recompiling (assuming ABI compatibility), but one particularly important part, for me at least, will not be accessible to the already-compiled code - move semantics.
Move semantics can make your code faster just by recompiling with a C++11 compiler (and preferably, a C++11 stdlib).
There are also other reasons. Maybe your preferred library has become C++11 aware since you last compiled it, and is now more efficient, safer or easier to use if compiled with a C++11 compiler?
For your own libraries, with C++11 you surely can make them more efficient, safer and easier to use? :)
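A small illustration of the kind of code that gets cheaper simply by recompiling as C++11: returning a large container from a function. Under C++03 semantics this could mean a deep copy (unless RVO kicked in); under C++11 the return is moved at worst.

#include <string>
#include <vector>

std::vector<std::string> make_names() {
    std::vector<std::string> v(1000, "some name");
    return v;     // C++11: moved (or elided); no element-by-element copy
}

int main() {
    std::vector<std::string> names = make_names();
    return names.empty() ? 1 : 0;
}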
If your interfaces to the compiled code use any template which was modified by C++11, then yes, you have to recompile. Otherwise you can probably continue to use your old libraries (unless the compiler vendor also decided to change the ABI at the same time, since a new standard is a great opportunity to fix long-standing ABI bugs that are otherwise left alone because of binary incompatibilities).
Often I read of some software cutting out some C++ features in order to be compliant with poor/old/exotic C++ compilers.
This one is just the last one I got into: Box2D isn't using namespaces because they need to support:
poor C++ compilers where namespace support can be spotty
A bigger example I can think of is Qt, which relies upon MOC and limits template usage a lot (well, this is at least true for Qt3 and previous versions; Qt4 mostly does that to keep to their conventions).
I'm wondering what compilers are that poor?
There are lots of C++ compilers out there (I never heard of most of them), but I would hope that all of them are supporting most common(/simple?) C++ features like namespaces (unless they're dead); isn't this the case?
What are the most unsupported features?
I can easily expect the lack of external templates, maybe template partial specialization and similar features. At most even RTTI or exceptions, but I would never have suspected namespaces.
In my experience, people are just scared of new things and especially of things that broke on them once, two decades ago. There's no valid reason against using namespaces in anything written during this century.
If you're looking for things to toss out though, if you happened to be targeting Windows not too long ago you had to do more than just toss features out of C++ and not use them; you had to use different syntax. Templates come to mind as one of the worst-supported features in VC. They've gotten much better but still fail sometimes.
Another one that is (STILL!) not supported by that particular compiler is overriding virtual functions to return pointers to types derived from those the base version returned - covariant return types - when using MI. VC just plain freaks out, and you end up having to provide a virtual_xxx() plus a non-virtual xxx() to replicate the standard behavior.
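A rough sketch of the workaround pattern described above, under the assumption that the compiler chokes on the covariant return when multiple inheritance is involved; names like virtual_clone are placeholders:

struct Mixin { virtual ~Mixin() {} };

struct Base {
    virtual ~Base() {}
    // Standard C++ would allow Derived::clone() to covariantly return Derived*.
    // Workaround: keep the virtual function's return type fixed at Base*...
    virtual Base* virtual_clone() const = 0;
    // ...and expose a non-virtual clone() with the stable type.
    Base* clone() const { return virtual_clone(); }
};

struct Derived : Mixin, Base {                 // multiple inheritance
    Derived* clone() const { return static_cast<Derived*>(virtual_clone()); }
private:
    virtual Base* virtual_clone() const { return new Derived(*this); }
};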
What are the most unsupported features?
Let's abstract away export, which, sadly, was an experiment that failed. Then, by count of compiler users this is probably two-phase lookup, which Visual C++ still doesn't properly implement. And VC has a lot of users.
Basically anything that was added as part of the standardization process in C++98. That is:
Templates (Not just partial specialization -- I mean all templates)
Exceptions (Often because this one requires runtime support/overhead -- they're often not available on things like microcontrollers)
Namespaces
RTTI
If you go even older, you can add
Most everything in the standard library.
Several old compilers are just plain bad; OTOH, I would strongly recommend not worrying about them. There are already several good, freely available, reasonably standards-compliant compilers for pretty much every platform of which I am aware (mostly via G++).
Namespaces have been around forever. Some of the darker places in templates, maybe. But namespaces? No way. Even with templates, the vast, vast majority of usage scenarios are fine. Some compilers aren't perfect (they're software), but flat out not supporting a C++03 feature that isn't export? That just doesn't happen. Most compilers extend the language, not reduce it, and are moving to support C++0x.
RTTI and exceptions are often accused of being implemented inefficiently, but not of poor conformance.
If you are programming in the mainstream with a mainstream compiler, then usually you find everything in the C++03 standard supported (except export, as mentioned by sbi - but then again nobody used that feature, as it was not supported). The further from the mainstream the compiler is, the fewer features it usually supports (but they are all moving forward, some more slowly than others).
It's only when you get to less-used hardware that has its own proprietary compiler that support for features starts to dwindle.
The main example would be mobile devices and their associated compilers (though because of their popularity in the main stream in recent years these compilers are getting updated more quickly than before).
A secondary example would be SoC devices. Their compilers are so specific that they are usually in-house compilers, and they only get as much work as the SoC company can afford; as a result they tend to lag a long way (or at least further) behind other compilers.
Value initialization (a C++03 feature) is not properly implemented in MSVC++.
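For context, a hedged sketch of the kind of value-initialization involved; standard C++03 requires the values below to be zeroed, whereas the affected MSVC++ versions reportedly could leave them indeterminate:

#include <cassert>

struct Point { int x; int y; };     // aggregate, no user-declared constructor

int main() {
    Point p = Point();              // value-initialization: x and y must be zero
    assert(p.x == 0 && p.y == 0);

    int* q = new int();             // also value-initialization: *q must be zero
    assert(*q == 0);
    delete q;
    return 0;
}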
Qt doesn't use templates much because it is older than templates.
I was curious about STL implementations outside of what's packaged with gcc or Visual Studio, so a quick Google search turned up a few results, such as:
Apache stdcxx
uSTL
rdeSTL
Under what circumstances should one use an alternative standard template library?
For instance, Apache's page has a list including items such as "full conformance to the C++ standard" and "optimized for fast compiles and extremely small executable file sizes". If it's so good, why wouldn't it replace libstdc++?
For the sake of completeness, here are some of the other STL implementations:
STLPort
STXXL (which is sort of special purpose, meant for large data sets that won't fit in memory)
Dinkumware (commercial)
SGI STL
libstdc++ (GCC's implementation)
I never had to use an STL version other than the one packed with the compiler. But here are some points that come into my mind.
Thread-safety: The STL from Apache provides a compile switch to turn on/off some thread-safety features.
Localization: Again, the STL from Apache comes with nice support for many different locales.
Data structures: You might need a basic_string implementation that is based on COW (copy-on-write) and the STL version that came with your compiler doesn't offer that.
Non-standard extensions: Particular features you like from some other STL implementations. For example, hash_map (and related) versions from Dinkumware (which ships with Visual Studio) have a significantly different design from hash_map (and related) from STLPort.
Binary issues: Constraints in some environment (embedded software) due to code size. In such case, if you don't need the whole STL it could be interesting to use a reduced version.
Performance: What if you discovered, after profiling, that the "other" STL implementation gives you significantly better performance for a particular application? (With so many details concerning algorithms and data structures, this could actually be possible.)
Debug mode: Some STL implementations provide nice features for debugging, for instance checking the ranges of iterators.
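As one concrete example of such a debug mode (here libstdc++'s, enabled with -D_GLIBCXX_DEBUG; STLport and others have their own switches), the out-of-range access below is caught at run time with a diagnostic instead of silently corrupting memory:

// Compile with: g++ -D_GLIBCXX_DEBUG example.cpp
#include <vector>

int main() {
    std::vector<int> v(3);
    // With the debug mode enabled this aborts with a diagnostic; without it,
    // the out-of-range access is silent undefined behaviour.
    return v[5];
}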
I sometimes use STLPort rather than the STL that ships with Visual Studio. Back when VC6 was supported the STL that shipped with it was buggy and so using STLPort (or another STL) made a lot of sense (especially if you were building multi-threaded code).
Now it's often more about performance (size or speed). For example, the STL that ships with VS2008 isn't that friendly in a multi-threaded situation, as it uses locking around locale objects, which causes things that you wouldn't expect to synchronise across threads. (See "Convert a number to a string with specified length in C++" for details of one example of this.)
Third parties can implement improved versions of STL that attempt to offer various things, such as smaller size, faster execution, etc. You might choose one of these alternative implementations because you want one of those attributes of their implementation. You might also choose one of them when doing cross-platform development because you want to avoid running into differences in behavior between the gcc and Visual Studio versions of your product (as just one example).
There is no need to wait for a new release of a compiler with a bundled implementation of STL in order to reach out for a fresh implementation of it if you have specific needs.
I've never had a need to use an alternative STL, but I could envision some scenarios where it might be useful to use, for example, the Apache version, if you need small executables because you're developing for an embedded platform.
Another reason might be to use an STL version that guarantees certain things which are not necessarily guaranteed by the standard. For example, to ensure you have non-COW strings so you can write thread-safe code.
STLport has what they call a "power debug mode" that does a whole slew of run-time checking for "the correctness of iterators and containers usage". Helps catch a number of errors that would not be immediately obvious. I highly recommend use of STLport while debugging and testing.
The C++ standard library can be implemented in a variety of ways, and some implementers try to keep up with modern ideas. So, using an optimized implementation may result in faster and smaller executables.
Take SCARY for example. Some implementers haven't done it yet, although it reduces STL bloat to a great extent. When you do the following:
#include <algorithm>
#include <vector>

std::vector<int> f;                      // default allocator
std::vector<int, MyAllocator> s;         // custom allocator (MyAllocator defined elsewhere)
size_t fc = std::count(f.begin(), f.end(), SomeValue);
size_t sc = std::count(s.begin(), s.end(), SomeOtherValue);
An "old" implementation could produce two different count instantiations in the resulting executable, because the type of f is not the same as that of s. That's because the iterator type depends on the type of the vector itself, although it doesn't need to be like that. A better idea is to define the iterator type in a separate class and provide a typedef in vector; the compiler would then produce only one count. That was just an example, but I think there is more to say about the quality of some implementations.
Aside from the reasons already given, I could imagine using a different STL due to debugging support or as a way to guarantee I was not relying on vendor extensions.
It would also be a first step in testing whether a library I was shipping worked well on other platforms.
Folks mentioning STLport have cited performance and portability, but there's also a pretty good debug mode available. I think that's a great reason to be using a different STL, if your current compiler's library is limited in this way.
Aaand ... it looks like Max and I tied on mentioning debugging! ;^)~
One reason is better thread-safety. I was using the default STL that came with Visual Studio (VC6 in fact) and then had to shift to STLPort, as it had much better thread-safety.
We are currently using STLPort - an external implementation of the STL - because we have to use (for various reasons) the quite old Microsoft Visual C++ 6.0 compiler (1998 release date), and the compiler-supplied library (by Dinkumware) is of course very out of date.
Debugging has been mentioned by several people in terms of the ability to switch on extra diagnostic information; however, another important aspect is that if you're using the platform's own STL, then you may get a better experience in the debugger. This is especially true if you're using Visual Studio, which has visualisers for all the standard containers.
STLPort has support for files bigger than 2GB through std::fstreams. Visual Studio 2005/2008 cannot handle files bigger than 2GB.
You can test your STL implementation by displaying: std::numeric_limits<std::streamsize>::max()
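A tiny check along those lines; on an implementation limited to 2 GB streams you would expect to see a 32-bit value (2147483647) printed here:

#include <iostream>
#include <limits>

int main() {
    // The largest offset/size the stream layer can represent; 2147483647
    // suggests the 2 GB file-size limitation described above.
    std::cout << std::numeric_limits<std::streamsize>::max() << std::endl;
    return 0;
}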
Both MSVC++ and GNU g++ come with pretty good implementations of the C++ Standard Library, but there are compilers that don't, and if I had to support such compilers I would look for a 3rd party implementation of STL.