Using C++20 in 2019 [closed] - c++

I'm starting a new C++ project that I probably will be working on and gradually extending for quite a while (at least a year). I'm trying to keep up with C++20 and I would love to start using some of the new features. I don't really care about supporting multiple compilers (GCC or Clang is enough). So far, I've been only experimenting with some of these features, but never considered using C++20 features in a real project.
Edit: My original question was about the current state of the C++20 standard and its support from compilers. I've been asked to narrow down the actual question, so I'll stick to my main reason to use C++20:
The main feature I'm interested in is concepts. I've experimented with concepts on GCC with the -fconcepts flag. As I understand it, this corresponds to the Concepts TS. But what's the state of concepts in the current standard? I've noticed that there are some minor syntactical differences between the TS and some other sources I've found on C++20. Is it realistic to use the current GCC implementation (or maybe another compiler that does it better) in a way that will be (at least with a high probability) valid in the actual finalized standard? Are there any reliable sources to keep track of the currently agreed-upon specification of concepts and other features?
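To make the syntactical differences I mean concrete, here is a minimal sketch (my own example, based on what g++ -fconcepts accepts today versus the current draft wording, so the C++20 spelling may still shift):

    // Concepts TS style (g++ -fconcepts), shown here only as comments:
    //   template<typename T> concept bool Addable = requires(T a, T b) { a + b; };
    //   auto sum(Addable a, Addable b) { return a + b; }   // terse form without 'auto'

    // C++20 draft style: no 'bool', and the terse form requires 'auto'.
    template<typename T>
    concept Addable = requires(T a, T b) { a + b; };

    auto sum(Addable auto a, Addable auto b) { return a + b; }

    int main() { return sum(1, 2) == 3 ? 0 : 1; }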
The original questions:
What's the state of the C++20 standard? When can I expect it to be complete, or at least in such a state that I can use it safely without worrying about my code not being valid in the final standard? I use cppreference as my primary source of information on language details. When it says "since C++20", does that mean it is a finalized feature that will stay in the standard?
What's the state of C++20 support? When can I expect it to be fully implemented (or at least the most important parts) in GCC, Clang, or maybe MSVC? In particular, what's the state of concepts and modules? I know that GCC has experimental support for concepts with -fconcepts (though cppreference says that it supports the TS only), and there's a branch of GCC that supports modules with -fmodules (but doesn't work with concepts).
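For reference, modules code along these lines is what I have in mind (purely illustrative; the exact flags and file conventions differ between the GCC branch and MSVC, and the file names here are just a convention I picked):

    // math.cppm - module interface unit
    export module math;            // declares the module 'math'

    export int add(int a, int b)   // 'export' makes add visible to importers
    {
        return a + b;
    }

    // main.cpp - consumer of the module
    import math;

    int main()
    {
        return add(2, 3) == 5 ? 0 : 1;
    }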

The C++20 standard, barring catastrophic circumstances, will be complete in... 2020. This ain't rocket science ;)
The C++20 draft was designated feature complete at the last standards meeting, so new things will generally not be added. The likelihood of features being removed or having significant alterations is also low, but non-zero.
As for support for various C++20 features, that will take time. Not only that, it will take further time for said support to reach maturity. If you just want to play around with C++20 features, odds are good that you can do so in some compiler for many C++20 features sometime in 2020. But if you want to actually produce a product that's stable, it would be better to wait for compiler/library maturity until 2021 or 2022.
Visual Studio has a tendency to take longer to implement features than the other compilers. But generally, they take less time to implement library features, and will typically do so immediately upon shipping any dependent language features. By contrast, libc++ and libstdc++ tend to be much slower about getting library features done than their respective compilers about getting language features done.
Also for C++20, Microsoft has been pushing coroutines and modules hard, and they have the most mature implementations of both at present. So if that's what you're looking for, VS will likely have you covered more than the others.

Related

Is it safe to use a C++ Technical Specification approved for a future standard in an earlier standard?

The Filesystem Technical Specification (TS) has recently been merged into the C++17 standard.
The same TS is also available for C++14, but in this case it's technically only "experimental". However the fact that it's been approved for C++17 makes me think it's mature enough and that it can be used safely.
When working on a C++14 project that will most likely be upgraded to C++17 in the future, and assuming the compiler I use supports it on both versions, would you advise against using the "experimental" TS, considering that it will officially be part of the next standard?
My question of course extends to any TS that has been accepted in a future C++ version and that is available for earlier standards.
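For context, the concrete pattern I'm considering (my own sketch, not something prescribed anywhere) is to hide the TS behind a namespace alias so that the later switch to C++17 is a one-line change:

    // Assumes GCC/Clang, which support __has_include; the experimental library
    // may also need an extra link flag (e.g. -lstdc++fs with libstdc++).
    #if defined(__has_include)
    #  if __has_include(<filesystem>)
    #    include <filesystem>
         namespace fs = std::filesystem;
    #  else
    #    include <experimental/filesystem>
         namespace fs = std::experimental::filesystem;
    #  endif
    #endif

    #include <iostream>

    int main()
    {
        std::cout << fs::current_path() << '\n';
    }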
The real question is whether or not somebody's implemented it, not whether or not it's been approved/merged/whatever into some arbitrary document. Features can be excised, added or modified at any point in the standardization process. We've seen things get cut from C++14 right before release, and also things that couldn't make it that were later amended. Vendors rely on specific versions of documents when implementing features, so the only surefire way is to consult the documentation of whatever compiler you're using.
Actual implementations can contain features that are not in the current standard, can have flaws in other features that are defined in the standard, or can even fail to implement specific parts - Microsoft was known to leave parts of the standard unimplemented.
But if a compiler supports a feature, and if that feature is part of next standard, there is little risk if any that it will disappear in a future version of that particular compiler.
Simply put, some other compiler may not implement it as soon as it is approved in the standard, but you know whether that is a problem in your specific use case.
Is it safe to use a C++ Technical Specification approved for a future standard in an earlier standard?
It depends on what you mean by "safe"
Is it portable?
No.
Does it work?
You need to check the release notes of your toolset's version, and the release notes of your standard library's version (they may be different).
Will it work tomorrow?
Who knows?
Should I invest time in code that assumes it works?
Probably not.
In summary, the answer is "no".
Use the boost version until the standard is published and your compiler and standard library conforms to it.

Toolchain support for the C++11 standard [closed]

I am currently updating my knowledge on C++ to the new standard. It makes me feel like a little kid that just got the awesomest toy: I want to play with it all the time but I don't want to lose my friends because of it.
I am involved in a few open source projects for which some of the new features would be extremely useful, so I am quite keen on using them. My question is how many users can compile C++11 code, e.g. what's the adoption rate of C++11-complete compilers in the general public? Does anyone have related information?
I know that gcc 4.8.1 and clang 3.3 are C++11 feature complete, but I have no idea how many people actually use compilers that are up to date. I know most codemonkeys surely do, but what about the average open source user? Slapping potential users in the face and telling them to update their compilers is not really an option.
I am aware that this question may be criticised/closed for being similar to these questions:
How are you using C++11 today?
To use or not to use C++0x features
I would like to point out that things are different now, since we are talking about an actual approved standard. I believe that being aware of its adoption rate is important in practice for programming.
You should probably first decide which C++11 you absolutely want to be able to use, and then lookup the lowest compiler version that supports this on the platforms that you want to support. Apache has a nice survey of the earliest version of each major compiler (gcc, clang, visual c++, intel, etc.) that supported the various C++11 features.
In my experience, gcc 4.7 and Clang 3.2 are almost feature complete (except for things like inheriting constructors, which are useful but not game changers). You could get a lot of useful features with gcc 4.6 (but take the 4.6.3 version to avoid many bugs) or Clang 3.1, which is nice since gcc 4.6 is also the official Android NDK compiler (if you are looking to support that).
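For reference, "inheriting constructors" is the using-declaration shown below; a small sketch (my example) of the one notable thing you would be giving up at the gcc 4.7 / Clang 3.2 level:

    #include <string>

    struct Base {
        explicit Base(int id) : id_(id) {}
        Base(int id, std::string name) : id_(id), name_(name) {}
        int id_;
        std::string name_;
    };

    struct Derived : Base {
        using Base::Base;   // inheriting constructors: re-exposes both Base constructors
    };

    int main() {
        Derived a(42);             // uses Base(int)
        Derived b(7, "widget");    // uses Base(int, std::string)
        return a.id_ + b.id_ == 49 ? 0 : 1;
    }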
If you are looking to support Linux, you can take a look at DistroWatch, where you can see which gcc versions were installed for each distro version. E.g. many popular distributions based on Ubuntu have been on gcc 4.7 for almost a year now, and are going to upgrade to gcc 4.8.1 (feature complete) in their next releases.
On Windows, there is the Nuwen Distro currently running MinGW 4.8.1 (only 32-bit and no threading). Visual C++ is not up to the job and will take a while (year or more?) to get where gcc 4.8 and Clang 3.3 are.
Even if the distros don't officially support a recent version, there are private package repositories (often maintained by the same people doing the official packaging) that provide cutting-edge builds. The LLVM project even provides pre-built nightly SVN snapshots that enable many of the C++14 features (in -std=c++1y mode). For gcc there are no nightly packages AFAIK.
As for forcing developers to upgrade compilers / distros: I don't think it is such a big deal (but the point by @ArneMertz about consulting with them first is a very good one). Virtual machines are a breeze to install (~45 minutes end-to-end), so if you only want to release a binary-only product, then go ahead. For users that's another matter, so if you are providing a header-only template library that all regular users need to compile, that should make you a lot more conservative in your transition pace.
I think this is a hard one to answer, since it's a somewhat broad question. You are asking about "adoption in the general public", and that is quite dependent on how you define that.
I'd say in the majority of companies adoption of new compilers is slow, because for larger projects changing parts of the toolchain comes with some costs and risks. This is true especially for bigger and "older" companies. Smaller startups often are more likely to embrace new technology.
Open source projects on the other hand are often composed of people who do programming for fun and are keen to adopt new promising things. I am sure that many of your fellow contributors will feel the same as you. How your user community adopts the new compilers cannot be said without knowing more about your projects. There are projects and communities that just want the program to work and don't care about newer compilers, and there are communities that will want you to use the newest technology available, because it's cool, faster, better, whatever.
Bottom line: ask the other contributors of your projects how they think about adopting the new standard, as well as the user community of your projects.

Developing cross-platform C++11 code [closed]

With C++03 it was (and still is) possible to write cross-platform code with both MSVC and GCC, sharing C++ code bases between Windows, Linux and Mac OS X.
Now, what is the situation with C++11? It seems that different C++ compilers implement different features of C++11. To build cross-platform C++11 code, is it safe to take MSVC10 (VS2010) as a kind of "least common denominator"? i.e. if we restrict the approved C++11 features to those implemented by MSVC10, will the resulting C++11 code be compilable with GCC (and so usable on both Linux and Mac OS X) ?
Or is it just better to wait for C++11 compilers to mature and stick with C++03 if we need cross-platform code?
Thanks.
You can compile code for Windows using GCC. You don't need to use Microsoft's compiler.
If you want to use C++11 features painlessly at the moment, that's going to be your best solution. Microsoft has yet to implement a lot of C++11, and not all of it is slated to be in VS11, either.
Otherwise, yes, you can obviously just use the subset of the C++11 features that are supported by the compiler implementation that represents the lowest-common-denominator. You'll need to check and make sure that that is Microsoft's compiler for all of the new features rather than just assuming that it is.
I don't believe GCC has gotten around to everything yet, and there's no guarantee that their implementation of all the features is perfect and matches Microsoft's 100%. Writing completely portable code is and has always been hard.
Using only C++03 features is obviously the safe approach, but it doesn't allow you to use C++11 features (obviously). Whether or not that's important is a decision that only you can make.
C++11 is not ready for prime time yet, as you already figured out.
Not only is the parsing stage still being worked out by the various compilers, but there is also the issue that some, while appearing to accept some features, may have quirks and bugs in the releases you currently have.
The only sound approach I can think of is to first select the compilers you want to use:
you can use gcc/Clang on Windows (with libstdc++); however, this will prevent you from interacting with libraries compiled by VC++
you can on the other hand validate your code for both gcc/Clang and VC++ (and perhaps a few others if you need to)
Once you have determined the compilers you want to use, you then have to pick the features of C++11 that you want to use, and that work on all those compilers.
gcc is probably the more advanced here
Clang does not have lambdas, but has move semantics and variadic templates
VC++ is the most behind I think
And you need to set up a test suite with all those compilers, and on all the platforms you target, and be especially wary of possible code generation issues. I recommend using Valgrind on Linux, for example, and perhaps Purify (or equivalent) on Windows, as they both help spot those runtime issues.
Beware that both VC++ and g++ may have extensions accepted by default that are not standard, and may also base their interpretation of the code on previous drafts of C++11.
Honestly, for production use, I think this is still a bit wonky.
If you are writing new code, you are probably not releasing it tomorrow.
So plan for your release date. There are some features that will be adopted more slowly than the rest, mostly hard-to-implement features and duplicated features (like the range-based for loop).
I wouldn't worry much about using the new library features; those are already supported very well across all compilers.
Currently there isn't any least common denominator, since Microsoft decided to concentrate on the library first, while the rest has gone (mostly) for language features.
This depends largely on your project. If you ship only binaries, you need to figure out a toolset you want to use and stick to what this toolset supports. Should your team use different tools, everybody should make sure their code builds with the common build system (be it C++03 or C++11).
The situation changes as soon as you ship headers that contain more than just declarations. First of all, you need some infrastructure to determine what is supported by which compiler. You can either write those tests yourself and integrate them with your build system, or stick to Boost.Config. Then you can ifdef platform-dependent code. This sounds simple at first but really isn't. Every time you have C++11 code that can be implemented with C++03 work-arounds, you want to have both versions available for your users (e.g. variadic templates vs. the preprocessor). This leads to duplicated code and comes with a significant maintenance cost. I usually only include C++11 code if it provides a clear benefit over the workaround (better compiler error messages (variadic templates vs. macros), better performance (move semantics)).
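To illustrate the kind of duplication meant above, here is a sketch using Boost.Config's feature macro for variadic templates (macro name as documented in current Boost releases; older releases spelled it without the CXX11 part, so check your version):

    #include <boost/config.hpp>
    #include <iostream>

    #ifndef BOOST_NO_CXX11_VARIADIC_TEMPLATES
    // C++11 path: one variadic template covers any arity.
    inline void log() { std::cout << '\n'; }

    template <typename Head, typename... Tail>
    void log(const Head& head, const Tail&... tail)
    {
        std::cout << head << ' ';
        log(tail...);
    }
    #else
    // C++03 fallback: fixed-arity overloads, often generated with Boost.Preprocessor.
    template <typename A>
    void log(const A& a) { std::cout << a << '\n'; }

    template <typename A, typename B>
    void log(const A& a, const B& b) { std::cout << a << ' ' << b << '\n'; }
    #endif

    int main()
    {
        log("answer:", 42);
    }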
Visual Studio's support for C++11 is quite good, so if you use GCC 4.7 and VS2010 you will be able to use an ample set of the most interesting features of C++11 while staying cross-platform.
Support for C++11 overview for VC10 and VC11
http://blogs.msdn.com/b/vcblog/archive/2011/09/12/10209291.aspx
Table for all the compilers:
https://wiki.apache.org/stdcxx/C++0xCompilerSupport
GCC C++11 support:
http://gcc.gnu.org/projects/cxx0x.html
Also related: C++11 features in Visual Studio 2012
Use only those features of C++11 at the moment which improve your code in some manner.
Let me explain: I don't look up C++11 features in order to use them; rather, when they solve my problem, I adopt them. (This is the way I learned about them, all on SO.) This approach will change in the future, but for now this is what I do.
I currently use only a few features of C++11, which incidentally work in both VS2010 and GCC.
Also, if there is a great feature you want to use and VS doesn't have it, why not use GCC? It is cross-platform, so it will work on Windows as well.

Using C++11 in a production environment with GCC [closed]

C++11 provides us with a lot of great and immensely useful new tools. GCC support for C++11 has already made good progress. So I have thought about when to switch to C++11. This question relates to gcc only; I do not expect to compile my (our) code with any other compiler.
Would you (did you) switch to C++11 before gcc supports the entire C++11 standard, to benefit from the features already implemented? Would you still do this in a production environment where stability and correctness are very important? Do you think it would be a reasonable approach to allow developers to use only certain C++11 features?
How would you (do you) go about deciding when GCCs C++11 support is ready for a production environment?
(Note: I'm aware of this question, but it specifically relates to gcc 4.4 and is somewhat outdated)
It depends.
If it were to power my blog or something like this ? Definitely.
If it were to power a critical service ? Of course not.
I believe that the support of C++11 is too immature as it is now, to be called production ready.
You may settle on a version of gcc, but the truth is that because the successive drafts evolved as new problems were discovered and tackled, the code you write now may well be rejected by a later version, or its behavior may change slightly.
Therefore, I think this judgement truly depends on what you intend to be doing. There is a reason the space shuttle is powered by an old and proven technology: it's a matter of trade-off between ease of development and confidence in the tools.
It's your judgment, you know your situation better than we do.
The GCC C++ developers still think their C++03 support is not up to par, and therefore aren't even setting the __cplusplus version correctly (citation needed, I can look up the bug and discussion). They marked the support as experimental because they started implementing the basics before there was a final draft/standard. By now (i.e. GCC 4.6), most major flaws have been removed, although some details remain inconsistent with the exact standard wording.
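To illustrate the __cplusplus point with a tiny (hypothetical) check: with GCC of that era this prints the pessimistic branch even under -std=c++0x, which is exactly why it cannot be used as a feature test yet.

    #include <iostream>

    int main()
    {
    #if __cplusplus >= 201103L
        std::cout << "compiler claims full C++11 (" << __cplusplus << ")\n";
    #else
        std::cout << "compiler reports " << __cplusplus
                  << " - treat C++11 support as partial\n";
    #endif
    }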
If possible, you should also test with Clang, which IMHO strives for, and succeeds at, adhering more closely to the fine details in most places where GCC lacks the necessary enforcement. Production use is a personal call. Me, I think that every compiler has bugs, and although a bug in the "new stuff" is statistically more probable, chances are you'll also encounter an older bug messing with your perfectly compliant code. That's why I suggest using at least two compilers to prevent any incompatibilities (or at least reduce them as much as possible).
As for the Standard library, libstdc++ is functional for the most part, but lacking in some large and useful parts like <regex>, which is sad. If you're feeling lucky, you should be able to get LLVM's libc++ working on at least Linux and Mac; it is a feature-complete C++11 library (minus <atomic>), but also the "new kid on the block".
To summarize: the more compilers and Standard libraries you run your code against, the better (although you should check which ones are correct and which are buggy). This inevitably reduces the amount of C++11 features available to you, although if you go with GCC/Clang, only lambdas, uniform initializers and <atomic> fall outside your scope. MSVC is a different story...

To use or not to use C++0x features [duplicate]

Possible Duplicate:
How are you using C++0x today?
I'm working with a team on a fairly new system. We're talking about migrating to MSVC 2010 and we've already migrated to GCC 4.5. These are the only compilers we're using and we have no plans to port our code to different compilers any time soon.
I suggested that after we do it, we start taking advantage of some of the C++0x features already provided like auto. My co-worker suggested against this, proposing to wait "until C++0x actually becomes standard." I have to disagree, but I can see the appeal in the way he worded it. Nevertheless, I can't help but think that this counter-argument comes more out of fear and trepidation of learning C++0x than a genuine concern for standardization.
Given the new state of the system, I want for us to take advantage of the new technology available. Just auto, for instance, would make our daily lives easier (just writing iterator-based for loops until range-based loops come along, e.g.).
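To make the auto point concrete, a small sketch (not from our codebase, just an illustration):

    #include <map>
    #include <string>
    #include <iostream>

    int main()
    {
        std::map<std::string, int> counts;
        counts["foo"] = 1;

        // C++03: spell out the iterator type in full.
        for (std::map<std::string, int>::const_iterator it = counts.begin();
             it != counts.end(); ++it)
            std::cout << it->first << '=' << it->second << '\n';

        // C++0x: auto removes the noise; range-based for will shorten it further
        // once compilers catch up.
        for (auto it = counts.begin(); it != counts.end(); ++it)
            std::cout << it->first << '=' << it->second << '\n';
    }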
Am I wrong to think this? It is not as though I'm proposing we radically change our budding codebase, but just start making use of C++0x features where convenient. We know what compilers we're using and have no immediate plans to port (if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform). Otherwise it seems to me like avoiding the use of iostreams in 1997 just because the ISO C++ standard was not published yet in spite of the fact that all compilers already provided them in a portable fashion.
If you all agree, could you provide me arguments I could use to strengthen my position? If not, could I get a bit more details on this "until the C++0x is standard" idea? BTW, anyone know when that's going to be?
I'd make the decision on a per-feature basis.
Remember that the standard is really close to completion. All that is left is voting, bugfixing and more voting.
So a simple feature like auto is not going to go away, or have its semantics changed. So why not use it.
Lambdas are complex enough that they might have their wording changed and the semantics in a few corner cases fixed up a bit, but on the whole, they're going to behave the way they do today (although VS2010 has a few bugs about the scope of captured variables, MS has stated that they are bugs, and as such may be fixed outside of a major product release).
If you want to play it safe, stay away from lambdas. Otherwise, use them where they're convenient, but avoid the super tricky cases, or just be ready to inspect your lambda usage when the standard is finalized.
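For instance (my own sketch of the distinction): a capture-less lambda handed to an algorithm is the boring, stable case, while captures are where the drafts and VS2010 still moved, so that is the part worth re-checking later.

    #include <algorithm>
    #include <vector>

    int main()
    {
        std::vector<int> v;
        v.push_back(3); v.push_back(1); v.push_back(2);

        // The stable case: a capture-less lambda as a comparator.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });

        // Captures are where the corner cases live (capturing 'this', nested
        // lambdas, reference captures outliving their scope) - use with more care.
        int threshold = 2;
        bool one_above = std::count_if(v.begin(), v.end(),
                             [threshold](int x) { return x > threshold; }) == 1;
        return one_above ? 0 : 1;
    }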
Most features can be categorized like this, they're either so simple and stable that their implementation in GCC/MSVC are exactly how they're going to work in the final standard, or they're tricky enough that they might get a few bugfixes applied, and so they can be used today, but you run the risk of running into a few rough edges in certain border cases.
It does sound silly to avoid C++0x features solely because they're not formalized yet. Avoid the features that you don't trust to be complete, bug-free and stable, but use the rest.
Theoretical but not practical disadvantages of using C++0x:
Makes it harder to port to different compilers.
Not adhering to any published standard.
Practical advantages of using C++0x:
Makes your daily lives easier, hence more productive.
It's a debate between what's theoretically right, and what's practical. If your team has any intent of actually doing something with this code, the practical should outweigh the theoretical tenfold.
One thing you (mostly) don't need to worry about now is features being added or taken away, because the working draft reached "Final Committee Draft" (FCD) status back in March. Feature-wise it should be frozen; the standards committee will not accept any more proposals for C++0x.
The downside is that it's still a draft and not finalized yet; the standards committee is in the phase of making corrections and adjustments before finalizing and publishing the ISO standard (expected release: March 2011). That could mean minor syntactic or semantic/behaviour changes which could make your code fail to compile, or fail to work correctly, once you compile with a compiler that is more standard-compliant than the one you were using when you wrote the code.
You'll probably have to wait some time for compilers like VC++10 to get updated with any corrections/adjustments that are made.
We had the exact same problem, so we compromised. We took the C++0x TR1 release and then only took the portions that we knew we wanted to use. Sounds like a lot of work, but so far it has worked out well. We're using the regex libraries, tuples, and a couple of others. Once the standard is ratified, we'll migrate to the full C++0x. This obviously isn't the best solution, but it is one that has worked well for us.
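In case it helps, this is roughly what that looks like in practice (header locations are compiler-specific: GCC exposes TR1 under <tr1/...>, while MSVC 2008 SP1+ puts std::tr1 into the regular headers, so adjust the include accordingly):

    #include <tr1/tuple>   // MSVC: #include <tuple>
    #include <string>
    #include <iostream>

    int main()
    {
        std::tr1::tuple<int, std::string> entry =
            std::tr1::make_tuple(1, std::string("foo"));

        std::cout << std::tr1::get<0>(entry) << ' '
                  << std::tr1::get<1>(entry) << '\n';

        // Once the standard is ratified, the migration is mostly
        // s/std::tr1::/std::/ plus dropping the tr1/ include prefix.
    }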
If you intend to make your system open source within a not-too-far future, then that's an argument for not using too many bleeding-edge features. A production system running Debian or Red Hat won't necessarily have a bleeding-edge compiler installed.
You said
if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform
but the fact that a compiler exists for a platform doesn't always mean that it's installed/used/wanted, especially on production systems.
If, on the other hand, you intend to do all the compiling yourself, this is not an issue.