C++ Template Portability

I am updating a code base that is 10 years old and used Metrowerks Code Warrior on Mac and Windows.
I am updating to OS X, XCode 3.2, Universal Binary.
I seem to be getting a lot of template-related errors and, not being a genius on templates (and forgetting to eat a healthy dose of frosted templates for breakfast), I find myself wondering about template portability issues.
IIRC, templates are, or can be, compiler-specific?
Does anyone have advice or a tutorial on templates that they recommend?

Yes and no -- most reasonable template code written for one current compiler will work fine on other current compilers. Compilers have progressed over time, so a lot of new code won't work on old compilers, and vice versa. The biggest culprit with old code on new compilers is needing "typename" in quite a few places where old compilers would accept the code without it.
The most common problem is with code something like this:
template <class T>
class XYZ {
    T::y a;
};
Most older compilers would (incorrectly) interpret "T::y" as a type -- but in a template, it's actually impossible to be sure of that, because T might be any type. To make the code work with a modern (more accurate) compiler, you need to change it to typename T::y a;, to let the compiler know that T::y is the name of a type.
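To make that concrete, here is the corrected class together with a type that satisfies it -- a minimal sketch (the HasY struct is invented for illustration):

template <class T>
class XYZ {
    typename T::y a;  // typename tells the compiler that T::y names a type
};

struct HasY {
    typedef int y;  // a nested type for T::y to find
};

int main() {
    XYZ<HasY> x;  // instantiates cleanly on a modern compiler
    (void)x;
    return 0;
}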

Templates themselves have well-defined behavior, as specified in §14 of the standard.
What is implementation-dependent is the limits of template use. For example, from Annex B (which lists recommended minimum limits):
Template arguments in a template declaration [1024].
Recursively nested template instantiations [17].
If you're depending on behavior beyond these limits, it may be implementation-dependent. Note that a compiler does not even have to provide these minimum limits to remain standards-compliant.
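To illustrate what "depending on" a limit means, here is a minimal sketch (hypothetical, not from the question) whose validity rests on the recursive-instantiation limit:

template <unsigned N>
struct Nest {
    typedef typename Nest<N - 1>::type type;  // each level instantiates the next
};

template <>
struct Nest<0> {          // explicit specialization stops the recursion
    typedef int type;
};

Nest<20>::type x = 0;  // needs 21 nested instantiations -- beyond the
                       // recommended minimum of 17, so a minimally
                       // compliant compiler could reject it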
If you post some actual code/errors, we can tell you why you're getting an error. Likely, your old code used some compiler-specific extensions or was otherwise allowed to use explicitly forbidden behavior.

The ability of compilers has improved a lot in ten years.
I would question the standards compliance of a compiler and STL from 10 years ago. I believe the standard had only just been introduced ten years ago, and it takes compilers a while to catch up with it.
With modern compilers I think you will find that template code is relatively portable across compilers, and the standards committee is very careful about changes to the standard, to make sure it does not break compatibility (very often).

There wasn't a single compiler in 2000 which implemented all aspects of standard templates. I'd dare to say no one even realised what was possible with templates till Alexandrescu released his Modern C++ Design in 2001.
That said, Metrowerks was one of the better ones. If it compiles on version 7 or later, it should be very feasible to get it working on a modern, more standards-compliant, compiler rather fast.
If I remember correctly, the biggest problem with Metrowerks compilers in the early days was that wherever typename appeared, whatever followed was simply interpreted as a type, with no further checking.
This meant you could do, and I've seen, completely non-standard stuff with that, like forward-declaring typedefs.
Another part of templates that took them pretty long to get right was everything w.r.t. template template parameters and default template arguments.
Post some specific errors if you can't get it to work, they'll probably all fall in one or two 'classes' of problems, and someone will quickly be able to help you.
As I already said, Metrowerks had a pretty good C++ compiler, especially their STL implementation, largely thanks to Howard Hinnant I think.

Related

Why doesn't compiler help us with type traits instead of resorting to language quirks?

Type traits are cool and I've used them since they originated in Boost a few years ago. However, when you look at their implementation, it is built out of arcane language quirks (check out the "How does is_base_of work?" StackOverflow thread).
Why won't the compiler help here? For example, if you want to check whether some class is a base of another, the compiler already knows that -- why can't it just tell us? This would make things like concepts so much easier to implement and use. You could use language constructs right there.
I am not sure, but I am guessing that it would also improve performance. It is like asking the compiler for help, instead of emulating the answer in the C++ language.
I suspect that the primary reason will sound something like "we need to maintain backwards compatibility", and I agree, but why won't the compiler be more active in generating generic templated code?
Actually... some do.
The thing is that if something can be implemented in pure C++ code, there is no reason, other than simplifying the code, to hard-wire it into the compiler. It is then a matter of trade-off: is the value brought by the code simplification worth the hard-wiring?
This depends on several points:
correctness (sometimes software may only partially emulate the trait)
complexity of the code ~~ maintenance burden
performance
...
Once all those points have been weighed, you can determine whether it's more advantageous to put things in the library or the compiler; the more likely outcome is that you will end up with a mixed strategy: a couple of intrinsics in the compiler used as building blocks to provide the required interface in the library.
Note that the maintenance burden is much more significant in a compiler: any C++ developer sufficiently acquainted with the language can delve into a library implementation, whereas the compiler code is a black-box. Therefore, it'll be much easier to debug and patch the library than the compiler, so there is incentive not to put things in the compiler unless you have a very good reason to.
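To make the trade-off concrete, here is a minimal sketch (illustrative only, not any particular library's code) of one trait that needs no compiler help next to one that is usually built on an intrinsic:

// Pure C++: partial specialization is all is_same needs.
template <class T, class U>
struct is_same { static const bool value = false; };

template <class T>
struct is_same<T, T> { static const bool value = true; };

// Not expressible in pure C++: is_union must ask the compiler.
// __is_union is the intrinsic provided by GCC, Clang and MSVC;
// other compilers may spell it differently.
template <class T>
struct is_union { static const bool value = __is_union(T); };

union U { int a; float b; };

int main() {
    // All three values are computed at compile time.
    return (is_same<int, int>::value
            && !is_same<int, long>::value
            && is_union<U>::value) ? 0 : 1;
}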
It's hard to give an objective answer here, but I suspect the following.
The code using language quirks to find out this stuff has often already been written (Boost, etc).
The compiler does not have to be changed to implement this if it can be done with language quirks (which saves a lot of time in writing, compiling, debugging and testing).
It's basically a "don't fix what isn't broken" mentality.
Compiler help for type traits has always been a design goal. TR1 formally introduced type traits, and included a section that described acceptable incorrect results in some cases to enable writing the type traits in straight C++. When those type traits were added to C++11 (with some name changes that don't affect their implementation) the allowance for incorrect results was removed, effectively requiring compiler help to implement some of them. And even for those that can be implemented in straight C++, compiler writers prefer intrinsics to complicated templates so that you don't have to put a drip pan under your computer to catch the slag as the overworked compiler causes the computer to melt down.

Few questions about C++ compilers : GCC, MSVC, Clang, Comeau etc

I have a few questions about C++ compilers.
Are C++ compilers required to be one-pass compilers? Does the Standard talk about it anywhere?
In particular, is GCC a one-pass compiler? If it is, then why does it generate the following error twice in this example (though the template argument is different in each error message)?
error: declaration of ‘adder<T> item’ shadows a parameter
error: declaration of ‘adder<char [21]> item’ shadows a parameter
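A minimal sketch (hypothetical -- the asker's actual example is not shown here) that reproduces that pair of diagnostics:

template <class T>
struct adder { };

template <class T>
void accumulate(adder<T> item)
{
    adder<T> item;  // reported once when the template itself is parsed...
}

int main()
{
    adder<char[21]> a;
    accumulate(a);  // ...and again when instantiated with T = char[21]
}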
A more general question
What are the advantages and disadvantages of one-pass compiler and multi-pass compiler?
Useful links:
A List of C/C++ compilers (wikipedia)
An incomplete list of C++ compilers (Bjarne Stroustrup's site)
The standard sets no requirements whatsoever with regards to how a compiler is implemented. But what do you mean by "one-pass"? Most compilers today do only read the input file once. They create an in-memory representation (often in the form of some sort of parse tree), and may make multiple passes over that -- and almost certainly make multiple passes over parts of it. The compiler must make a "pass" over the internal representation of a template each time it is instantiated, for example; there's no way of avoiding that. G++ also makes a "pass" over the template when it is defined, before any instantiation, and reports some errors then. (The standards committee expressly designed templates to allow a maximum of error detection at the point of definition. This is the motivation behind the requirement for typename in certain places, for example.) Even without templates, a compiler will generally have to make two passes over a class definition if there are functions defined in it.
With regards to the more general question, again, I think you'd have to define exactly what you mean by "one-pass". I don't know of any compiler today which reads the source file several times, but almost all will visit some or all of the nodes in the parse tree more than once. Is this one-pass or multi-pass? The distinction was more significant in the past, when memory wasn't sufficient to maintain much of the source code in an internal representation. Languages like Pascal and, to a lesser degree, C were sometimes designed to be easy to implement with a single-pass compiler, since a single-pass compiler would be significantly faster. Today, this issue is largely irrelevant, and modern languages, including C++, tend to ignore it; where C++ seems to conform to the needs of a one-pass compiler, it's largely for reasons of C compatibility, and where C compatibility is not an issue (e.g. in a class definition), it often makes order of declaration irrelevant.
From what I know, 30 years ago it was important for a compiler to be one-pass, because reads and writes to disk (or magnetic tape) were very slow and there was not enough memory to hold the whole code (thanks, James Kanze). Also, a single pass is a requirement for scripting/interactive languages.
Nowadays compilers are usually not one-pass; there are several intermediate representations (e.g. Abstract Syntax Tree or Static Single Assignment form) that the code is transformed into and then analysed/optimised.
Some elements in C++ cannot be resolved without intermediate steps, e.g. in a class you can reference members which are defined only later in the class body. Also, all templates need to be somehow remembered for further access during instantiation.
What does not usually happen is the source code being parsed several times --- there is no need for that. So you should not see the same syntactic error reported several times.
No, I would be surprised if you found a heavily used single-pass C++ compiler.
No, it does multiple passes and even different optimizations based on the flags you pass it.
Advantages (single-pass): fast! Since all the source only needs to be examined once, the compilation phase (and thus beginning of execution) can happen very quickly. It is also an attractive model because it makes the compiler easy to understand and often "easier" to implement. (I worked on a single-pass Pascal compiler once, but don't encounter them often, whereas single-pass interpreters are common.)
Disadvantages (single-pass): optimization, semantic/syntactic analysis. A single look at the code sometimes lets through things that are easily caught by simple mechanisms in multiple passes. (Kind of why we have things like JSLint.)
Advantages (multi-pass): optimizations, semantic/syntactic analysis. Even pseudo-interpreted languages like JRuby go through a pipeline compilation process to get to Java/JVM bytecode before execution; you could consider this multi-pass, and the multiple looks at the varying representations of code (and consequently the resulting optimizations) can make it very fast.
Disadvantages (multi-pass): complexity, sometimes time (depending on if AOT/JIT is being used as your compilation method)
Also, single-pass is pretty common in academia to help learn the aspects of compiler design.
Walter Bright, the developer of the first native C++ compiler, has stated that he believes it is not possible to compile C++ without at least 3 passes. And, yes, that means 3 full text-transforming passes over the source, not just traversals through an internal tree representation. See his Dr. Dobb's magazine article, "Why is C++ compilation so slow?" So any hope of finding a true one-pass compiler seems doomed. (I think this was part of the motivation Bright had to develop D, his C++ alternative.)
The compiler only needs to look at the sources once top down, but that does not mean that it does not have to process the parsed contents more than once. In particular with templates, it has to instantiate the templated code with the type, and that cannot happen until the template is used (or explicitly instantiated by the user), which is the reason for your duplicate errors:
When the template is defined, the compiler detects an error and at that point the type has not been substituted. When the actual instantiation occurs it substitutes the template arguments and processes the result, which is what triggers the second error. Note that if the template was specialized after the first definition, and before the instantiation, for that particular type, the second error need not occur.
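A sketch of that last point (hypothetical, and deliberately ill-formed in the primary template): the explicit specialization supplies its own body, so the buggy primary-template code is never instantiated for int and only the definition-time diagnostic appears:

template <class T>
struct adder {
    void add(adder<T> item) {
        adder<T> item;  // definition-time error: shadows the parameter
    }
};

template <>
struct adder<int> {
    void add(adder<int> item) { (void)item; }  // no shadowing here
};

int main() {
    adder<int> a, b;
    a.add(b);  // uses the specialization: no instantiation-time error
}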

Is new C++ backward compatible

I haven't coded in C++ for years. I recently discovered that during those years it has changed quite dramatically. I'm not sure I like the changes, but that's another discussion.
I still have some C++ code knocking around my hard drive. If I got it out and tried to compile it with a nice new C++ compiler, say the latest version of g++, would it compile? Without warnings (assuming it compiled without warnings before)?
I did get to mess around with a little VC++ 2010 recently and found some things I expected to work just not working, and got different messages depending on context when I tried to use NULL. But in one part of that code I used NULL without even a warning.
It depends. Usually newer compilers are more standard-compliant, so many constructs that compiled on earlier compilers don't compile now without fixing. For example:
for( int i = 0; ... );
i++;
compiled in Visual C++ 7, but not in Visual C++ 9.
In general, yes it is backwards compatible. However, the devil is in the details. You likely will find things where conventions change, or specific libraries fall into or out of use.
NULL is a macro - prefer to use 0 (or nullptr in C++0x).
Not sure just how old your code is, but Visual C++ v6 suffers from limitations that result in code that simply won't compile on newer compilers. VS2005 and up are a lot better (in the sense of more correct wrt the contemporaneous C++ standard).
I would not expect it to be a huge amount of work to get old code compiled, though. I've done some pretty major ports from VC6 -> VS2005 and for the most part it's a matter of hours, not days. Perhaps once the culture shock has worn off this will not seem so daunting. Really, VC++ 10 is very nice.
This depends on what you're comparing to.
Visual Studio 2010 partially implements the upcoming C++0x draft (recent versions of GCC also implement a subset of this draft, which is expected to be standardized next year)
C++98/C++03 was the first standardized version of C++, and is still the official one (as C++0x is still only a draft)
and of course, there are the dialects from before the language was standardized
C++0x is pretty much backwards compatible with C++03/98. There may be a couple of obscure corner cases that have changed, but you are unlikely to encounter them.
However, a lot of changes occurred when the language was first standardized, meaning that C++98 isn't entirely (but nearly) compatible with pre-standard C++.
But more likely, what you're into isn't a question of C++ backwards compatibility, but simply that compilers have gotten stricter. They have become much better at following the standard, and no longer allow many non-standard tricks that were common earlier. Most likely, your old code was never valid C++, but worked because compilers used to deviate more from the standard.
The language itself hasn't changed since it was standardized in 1998. Idioms have changed, and compilers have both improved their support for the standard and become stricter about non-standard code, but any standard-conforming C++ code that compiled in the past should still compile today. Code that relies on non-standard compiler features or quirks might not work, but compilers typically offer command-line options to make them accept non-standard code that used to be accepted by previous versions of the same compiler.
NULL is a macro that's defined in <cstddef>. Other standard headers may include that header as an implementation detail, so it's often possible to use NULL without explicitly including <cstddef>, but relying on that has always been non-portable. It's OK to use NULL as long as you include its header, though just using 0 is preferred in idiomatic C++.
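A minimal sketch of the portable usage (the take function is just an illustrative placeholder):

#include <cstddef>  // the header that defines NULL

void take(int* p) { (void)p; }

int main() {
    take(NULL);  // fine once <cstddef> is included
    take(0);     // the traditional idiomatic C++ spelling
    return 0;
}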
All versions of C++ should be backwards compatible.
Although there might be some unusual cases where there could be a problem, e.g. noexcept on destructors in C++0x (although this has not yet been decided).
Newer C++ standards clarify things and then take decisions on which usage is "more correct" or accepted, so I would expect warnings. Some other decisions change what can be done or not, so I would also expect errors. I've come across the same situation, and I had to tweak the code for it to compile. It was not a great deal, but this was taking into account my knowledge of C++. In general, you should not encounter great problems.
There are also other things. For example, not all compilers implemented all the rules of the C++ standard, or they had bugs. When you get used to a compiler, some of those errors or missing features may pass unnoticed by your compiler, but may cause errors in future versions of the same compiler.
That's the major reason why we have standards. You don't have to worry about compatibility, you will just tell the compiler to compile the code as C++98 (or 2003).
MSVC is unfortunately very bad at supporting C++ standards. It is slowly getting better (and that's why old code doesn't compile, since it shouldn't compile in the first place).
Well, I know a lot of our old VS6 code started throwing tons of warnings when we upgraded to VS 2005 (with full warnings on, of course, as everyone should). Most were good things though, warning of potential information loss, warning that some types may be 64-bit instead of 32-bit like old code might expect, etc.
For your specific example of NULL, that wasn't even really standard C back in the day. Every compiler just had a #define for it that set it to 0. These days it is generally considered more correct (and clear) to just use 0.
If you have used old versions of various libraries such as Boost, expect some problems.
Slightly random, but the keyword export is being removed from the standard. So previously standards-compliant code that used export would now be illegal. Of course, very few compilers even began implementing that keyword.
Very similar to sharptooth's answer, there are bits of older C and C++ code that will need /Zc:forScope- set (i.e. don't force conformance in for-loop scope), e.g.
int myfunc(int x, int y, int z)
{
    int j;
    for (int i = 0; i <= 10; i++)
    {
        if (f(i) == 0)
            break;
    }
    j = i;
    for (i = 0; i <= 10; i++)
    {
        if (g(i) == 0)
            break;
    }
    if (i > j)
        return j;
    return i;
}
This type of thing is quite common in much older code, where bytes cost money and variable re-use was commonplace.
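For reference, the conforming rewrite is a small mechanical change -- a sketch, assuming f and g are declared elsewhere as in the original:

int f(int);
int g(int);

int myfunc(int x, int y, int z)
{
    int j;
    int i;  // declared outside the loops so it survives them
    for (i = 0; i <= 10; i++)
    {
        if (f(i) == 0)
            break;
    }
    j = i;
    for (i = 0; i <= 10; i++)
    {
        if (g(i) == 0)
            break;
    }
    if (i > j)
        return j;
    return i;
}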

Concept Checking change in C++?

I'm porting over some code from one project to another within my company and I encountered a generic "sets_intersect" function that won't compile:
template<typename _InputIter1, typename _InputIter2, typename _Compare>
bool sets_intersect(_InputIter1 __first1, _InputIter1 __last1,
                    _InputIter2 __first2, _InputIter2 __last2,
                    _Compare __comp)
{
    // Standard library concept requirements.
    // These statements confuse automatic indentation tools.
    __glibcpp_function_requires(_InputIteratorConcept<_InputIter1>)
    __glibcpp_function_requires(_InputIteratorConcept<_InputIter2>)
    __glibcpp_function_requires(_SameTypeConcept<
        typename iterator_traits<_InputIter1>::value_type,
        typename iterator_traits<_InputIter2>::value_type>)
    __glibcpp_function_requires(_OutputIteratorConcept<_OutputIter,
        typename iterator_traits<_InputIter1>::value_type>)
    __glibcpp_function_requires(_BinaryPredicateConcept<_Compare,
        typename iterator_traits<_InputIter1>::value_type,
        typename iterator_traits<_InputIter2>::value_type>)

    while (__first1 != __last1 && __first2 != __last2)
        if (__comp(*__first1, *__first2))
            ++__first1;
        else if (__comp(*__first2, *__first1))
            ++__first2;
        else {
            return true;
        }
    return false;
}
I'm new to this concept of "concepts" (sorry for the pun), so I did some poking around in the C++ standard library and some googling, and I can see that these __glibcpp_function_requires macros were changed to __glibcxx_function_requires. So that fixed my compiler error; however, since this is new to me, I'm curious about what this code is doing for me, and I'm having trouble finding any documentation or deciphering the code in the library.
I'm assuming that the point of these macros is that when the compiler expands the templated function, these will run some type checking at compile time to see if the container being used is compatible with this algorithm. In other words, I'm assuming the first call is checking that _InputIter1 conforms to the _InputIteratorConcept. Am I just confused or am I on the right track? Also, why were the names of these macros changed in the C++ standard library?
"Concepts" were a proposed feature for the next version of C++, but they were (relatively) recently voted out of the standard so won't resurface for quote some time now.
They were designed to allow early checking of requirements for template parameters and, amongst other things, would have enabled much more succinct error messages when a type that didn't meet the required constrains was used to instantiate a template.
2nd Edit: (see comments from dribeas and Jerry Coffin) These g++ macros are an internal concept checking mechanism and are not directly related to the proposed new language feature of the same name. As they are internal to g++ you can (and perhaps should) safely remove them without any loss of functionality in your function template.
You are correct: the first call is checking that _InputIter1 implements the "input iterator" concept.
These macros are internal libstdc++ implementation details (hence the leading double underscore), so the implementers are allowed to change them at will. They are not supposed to be used by user code.
Since "concepts" are no longer part of the C++0x draft, in order to have portable concept checking, you should use a third-party library, like the Boost Concept Check Library.
There are two concept concepts (pun intended) around. The standard as it was being defined had a proposal for concepts as a standard language feature that would help in compilation, and there's quite a bit of literature and discussion around about C++0x concepts.
The other concept concept is the one you have just hit. The STL that is deployed with g++ implementations does have specific implementor checks that are also meant to aid in error detection. These are different to the previous concepts in that they are not a language feature and are not meant to be used by programmers, but rather they are used internally in the library. As the names are reserved (they begin with double underscore) the compiler/library implementor is free to add anything there as long as the behavior of the library does not differ from what the standard defines.
Going back to what you are doing: the code that you are trying to port to a newer compiler is a modified version of std::set_intersection as defined in the standard [lib.set.intersection], changed to return only whether the two ranges intersect, without having to walk both ranges in full. I would either use the standard version and check whether the output iterator was modified, or, if it is a performance issue, implement it without the concept checks -- depending on non-standard, hidden, compiler-defined symbols is asking for maintenance trouble when upgrading the compiler. But that you know already.
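For example, a sketch of the same function with the internal macros simply removed -- portable to any compiler, with behaviour unchanged given the precondition that both ranges are sorted by comp:

template <typename InputIter1, typename InputIter2, typename Compare>
bool sets_intersect(InputIter1 first1, InputIter1 last1,
                    InputIter2 first2, InputIter2 last2,
                    Compare comp)
{
    while (first1 != last1 && first2 != last2) {
        if (comp(*first1, *first2))
            ++first1;        // *first1 is smaller: advance it
        else if (comp(*first2, *first1))
            ++first2;        // *first2 is smaller: advance it
        else
            return true;     // neither is smaller: equivalent elements found
    }
    return false;            // a range was exhausted without a match
}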
As Charles already pointed out, direct support for concepts as part of C++0x was removed from the language relatively recently, and they almost certainly won't even be reconsidered until the next round of standardization. Hopefully by then there will be greater agreement on what they should really be/do.
A fair number of libraries attempt to provide similar capabilities though. The Boost Concept Check Library is probably the most obvious, and I believe most of the others are based on it, at least in concept (if you'll pardon the pun). As to why the g++ guys decided to change from '*cpp' to '*cxx', I can't even begin to guess -- other than that they seem to think breaking backward compatibility as often as possible is a good thing (though these are really only intended for internal use, so changing the name shouldn't break much but their own code).
This is also fairly similar to the basic idea of what Andrei Alexandrescu presented in §2.1 of Modern C++ Design. If you want some idea of how to write concept checking of your own, you might want to read that (as well as §2.7, where he applies similar techniques to testing for convertibility and inheritance). While the concepts being checked vary, those do a good job of explaining most of the basic techniques, so you stand a decent chance of reading and understanding concept-checking templates, and if you ever decide to write some of your own, you have a starting point.
Edit: It's probably worth noting that the standard libraries for most current C++ compilers include at least some sort of concept-checking templates. Obviously GNU does. So does Comeau. In fact, the only one I can think of that doesn't seem to include any such thing anymore is MS VC++ (which uses the Dinkumware library). They've concentrated primarily on some (rather expensive) run-time debugging instead. This is also somewhat useful, but in an entirely different direction, and the two aren't mutually exclusive at all.

C++ Developer Tools: The Dark Areas [closed]

While C++ Standards Committee works hard to define its intricate but powerful features and maintain its backward compatibility with C, in my personal experience I've found many aspects of programming with C++ cumbersome due to lack of tools.
For example, I recently tried to refactor some C++ code, replacing many shared_ptr by T& to remove pointer usages where not needed within a large library. I had to perform almost the whole refactoring manually as none of the refactoring tools out there would help me do this safely.
Dealing with STL data structures using the debugger is like raking out the phone number of a stranger when she disagrees.
In your experience, what essential developer tools are lacking in C++?
My dream tool would be a compile-time template debugger. Something that'd let me interactively step through template instantiations and examine the types as they get instantiated, just like the regular debugger does at runtime.
In your experience, what essential developer tools are lacking in C++?
Code completion. Seriously. Refactoring is a nice-to-have feature, but I think code completion is much more fundamental and more important for API discoverability and usability.
Basically, tools that require any understanding of C++ code suck.
Code generation of class methods. When I type in the declaration, you should be able to figure out the definition. And while I'm on the topic, can we fix "go to declaration / go to definition" always going to the declaration?
Refactoring. Yes, I know it's formally impossible because of the pre-processor -- but the compiler could still do a better job of a search and replace on a variable name than I can manually. You could also syntax-highlight locals, members and parameters while you're at it.
Lint. So the variable I just defined shadows a higher one? C would have told me that in 1979, but c++ in 2009 apparently prefers me to find out on my own.
Some decent error messages. If I promise never to define a class with the same name inside the method of a class, do you promise to tell me about a missing "}"? In fact, can the compiler have some knowledge of history, so that if I added an unbalanced "{" or "(" to a previously working file, we could consider mentioning this in the message?
Can the STL error messages please (sorry to quote another comment) not look like you read "/dev/random", stuck "!/bin/perl" in front and then ran the tax code through the result?
How about some warnings for useful things? "Integer used as bool performance warning" is not useful, it doesn't make any performance difference, I don't have a choice - it's what the library does, and you have already told me 50 times.
But if I miss a ";" from the end of a class declaration or a "}" from the end of a method definition you don't warn me - you go out of your way to find the least likely (but theoretically) correct way to parse the result.
It's like the built in spell checker in this browser which happily accepts me misspelling wether (because that spelling is an archaic term for a castrated male goat! How many times do I write about soprano herbivores?)
How about spell checking? 40 years ago, mainframe Fortran compilers had spell checking, so if you misspelled "WRITE" you didn't come back the next day to a pile of cards and a snotty error message. You got a warning that "WRIET" had been changed to WRITE in line X. Now the compiler happily continues and spends 10 minutes building some massive browse file and debugger output before telling you that you misspelled prinft 10,000 lines ago.
ps. Yes a lot of these only apply to Visual C++.
pps. Yes they are coming with my medication now.
If talking about MS Visual Studio C++, Visual Assist is a very handy tool for code completion and some refactorings -- e.g. rename all/selected references, find/go to declaration -- but I still miss the richness of Java IDEs like JBuilder or IntelliJ.
What I still miss is a semantic diff tool -- you know, one which does not compare two files line-by-line, but statement-by-statement/expression-by-expression. What I've found on the internet are only some abandoned attempts -- if you know one, please write it in a comment.
The main problem with C++ is that it is hard to parse. That's why there are so very few tools out there that work on source code. (And that's also why we're stuck with some of the most horrific error messages in the history of compilers.) The result is that, with very few exceptions (I only know doxygen and Visual Assist), it's down to the actual compiler to support everything needed to assist us writing and massaging the code. With compilers traditionally being rather streamlined command-line tools, that's a very weak foundation to build rich editor support on.
For about ten years now, I've been working with VS. Meanwhile, its code completion is almost usable. (Yes, I'm working on dual-core machines. I wouldn't have said this otherwise, would I?) If you use Visual Assist, code completion is actually quite good. Both VS itself and VA come with some basic refactoring nowadays. That, too, is almost usable for the few things it aims for (even though it's still notably less so than code completion). Of course, after >15 years of refactoring with search & replace as the only tool in the box, my demands are probably much too deteriorated compared to other languages, so this might not mean much.
However, what I am really lacking is still: fully standard-conforming compilers and standard library implementations on all platforms my code is ported to. And I'm saying this >10 years after the release of the last standard and about a year before the release of the next one! (Which just adds another item to the list: C++1x being widely adopted by 2011.)
Once these are solved, there are a few things that keep being mentioned now and then, but which vendors, still fighting with compliance to a >10-year-old standard (or, as is actually the case with some features, having even given up on it), never got around to actually tackling:
usable, sensible, comprehensible compiler messages (como is actually pretty good, but that's only if you compare it to other C++ compilers)
a linker that doesn't just throw up its hands and say "something's wrong, I can't continue" (if you have taught C++ as a first language, you'll know what I mean)
concepts ('nuff said)
an IO stream implementation that doesn't throw away all the compile-time advantages which overloading operator<<() gives us by resorting to calling the run-time-parsing printf() under the hood (Dietmar Kühl once set out to do this, unfortunately his implementation died without the techniques becoming widespread)
STL implementations on all platforms that give rich debugging support (Dinkumware is already pretty good in that)
standard library implementations on all platforms that use every trick in the book to give us stricter checking at compile-time and run-time and more performance (whatever happened to yasli?)
the ability to debug template meta programs (yes, jalf already mentioned this, but it cannot be said too often)
a compiler that renders tools like lint useless (no need to fear, lint vendors, that's just wishful thinking)
If all these and a lot of others that I have forgotten to mention (feel free to add) are solved, it would be nice to get refactoring support that almost plays in the same league as, say, Java or C#. But only then.
A compiler which tries to optimize the compilation model.
Rather than naively include headers as needed, parsing them again in every compilation unit, why not parse the headers once first, build complete syntax trees for them (which would have to include preprocessor directives, since we don't yet know which macros are defined), and then simply run through that syntax tree whenever the header is included, applying the known #defines to prune it.
It could even be used as a replacement for precompiled headers: every header could be precompiled individually, just by dumping this syntax tree to disk. We wouldn't need one single monolithic and error-prone precompiled header, and would get finer granularity on rebuilds, rebuilding as little as possible even if a header is modified.
Like my other suggestions, this would be a lot of work to implement, but I can't see any fundamental problems rendering it impossible.
It seems like it could dramatically speed up compile-times, pretty much rendering it linear in the number of header files, rather than in the number of #includes.
A fast and reliable indexer. Most of the fancy features come after this.
A common tool to enforce coding standards.
One that takes all the common standards and allows you to turn them on/off as appropriate for your project.
Currently, a bunch of Perl scripts usually has to substitute for this.
I'm pretty happy with the state of C++ tools. The only thing I can think of is a default install of Boost in VS/gcc.
Refactoring, refactoring, refactoring. And compilation while typing. For refactorings, I am missing at least half of what most modern Java IDEs can do. While Visual Assist X goes a long way, a lot of refactoring is missing. The task of writing C++ code is still pretty much just that: writing C++ code. The more the IDE supports high-level refactoring, the more it becomes construction; the more malleable the structure is, the easier it will be to iterate over the structure and improve it. Pick up a demo version of IntelliJ and see what you are missing. These are just some that I remember from a couple of years ago.
Extract interface: take a few classes with a common interface, move the common functions into an interface class (for C++ this would be an abstract base class) and declare the designated functions as abstract.
Better extract method: mark a section of code and have the IDE write a function that executes that code, constructing the correct parameters and return values.
Know the type of each of the symbols that you are working with, so that not only can completion be correct for derived values, e.g. symbol->..., but it can also offer only those functions that return a type usable in the current expression, e.g. for
UiButton button = window->...
At the "...", only offer functions that actually return a UiButton.
A tool all on its own: Naming Conventions.
Intelligent Intellisense/Code Completion even for template-heavy code.
When you're inside a function template, of course the compiler can't say anything for sure about the template parameter (at least not without Concepts), but it should be able to make a lot of guesses and estimates. Depending on how the type is used in the function, it should be able to narrow the possible types down, in effect a kind of conservative ad-hoc Concepts. If one line in the function calls .Foo() on a template type, obviously a Foo member method must exist, and Intellisense should suggest it in the rest of the function as well.
It could even look at where the function is invoked from, and use that to determine at least one valid template parameter type, and simply offer Intellisense inside the function based on that.
If the function is called with an int as a template parameter, then obviously, use of int must be valid, and so the IDE could use that as a "sample type" inside the function and offer Intellisense suggestions based on that.
JavaScript just got Intellisense support in VS, which had to overcome a lot of similar problems, so it can be done. Of course, with C++'s level of complexity, it'd be a ridiculous amount of work. But it'd be a nice feature.