Why allow function declarations inside function bodies - c++

Why does the syntax allow function declarations inside function bodies?
It does create a lot of questions here on SO where function declarations are mistaken for variable initializations and the like.
a object();
Not to mention most vexing parse.
Is there any use-case that is not easily achieved by the more common scope hiding means like namespaces and members?
Is it for historical reasons?
Addendum: if it is for historical reasons, inherited from C to limit scope, what would be the problem with banning them now?
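To illustrate the confusion (a minimal sketch; the type and function names are invented):

    struct a {};

    int main() {
        // Looks like it creates a default-constructed object, but it is
        // actually a declaration of a function named `object` taking no
        // arguments and returning an `a`.
        a object();

        // A function declaration inside a function body, valid because the
        // grammar is inherited from C; it declares (not defines) a helper
        // visible only inside main.
        int helper(int);

        return 0;
    }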

Whilst many C++ applications are written solely in C++ code, there is also a lot of code that is some mixture of C and C++. This compatibility is an important part of C++'s usefulness: existing APIs, anything from OpenGL or cURL to custom hardware drivers written in C, can pretty much be used directly with very little effort, whereas trying to interface your custom hardware C driver with a Basic interpreter is pretty difficult.
If we start breaking that compatibility by removing things of "no particular value", then C++ is no longer as useful. Aside from enabling better error messages in a confusing situation, which is of course useful in itself, it's hard to see what removing this would buy - and that assumes no existing C++ code uses it, which I wouldn't bet on; I wouldn't be surprised if it does happen at times even in modern code (for whatever good or bad reasons).
In general, C++ tries very hard not to break backwards compatibility - and this, in my mind, is a good thing. That's why the keyword static is used for a bunch of different things, rather than adding a new keyword, and auto means something different now than it used to in C, but it's not a "new" keyword that could break existing code that happened to use whichever other word might have been chosen (repurposing auto is a small break, but almost nobody had used it in the previous 20 years anyway).

Well, the ability to declare functions within function bodies is inherited from C so, by definition, there are historical and backward-compatibility reasons. When there is likely to be real-world code which uses a feature, the argument for removing that feature from the language is weakened.
People - particularly those who only use the latest version of the language and are not required to maintain legacy code - do tend to underestimate how strong an argument backward compatibility is in C++. The original C++ standard was specifically required to maintain backward compatibility with C. As a rough rule, standards discourage removing old features if doing so is likely to break existing code. It can be done, however, if the only possible usage causes a danger that cannot be prevented (which is the reason gets() was removed, for example).
When maintaining legacy code there are often significant costs in updating a code base to replace all instances of an old construct with some modern replacement. A coding change that may be insignificant for a hobbyist programmer can be extremely costly when maintaining large-scale code bases in regulated environments, where it is necessary to provide formal evidence and an audit trail showing that the change does not affect the code's ability to meet its original requirements.
There are certain programming styles where it is useful to be able to limit the scope of any declarations. Not everyone uses such programming styles, but the reason such features are in the language is to allow the programmer the choice of programming technique. And, whether advocates of removing such features like it or not, there is a certain amount of code which uses such constructs usefully. That significantly weakens the case for removing the feature from the language.
These sorts of arguments tend to come up for languages that are used in large-scale development, to develop systems in regulated environments, and so on. C and C++ (and a number of other languages) are used in such settings, so they tend to accumulate some set of features kept for "historical" or "backward compatibility" reasons. It is possible to make a case for removing such features by providing evidence that the feature is not in real-world use. But, since the argument is about justifying a negative claim, that is difficult (all it takes is someone providing ONE example of continuing real-world beneficial usage, and suddenly a counter-example exists which supports the case for keeping the feature).

Related

Testing C++17 in safety critical systems

I'm currently thinking about C++ in safety-critical software (DO-178C DAL-D) and definitions of a coding standard. I was looking at MISRA C++, which is by now 10 years old and misses all the C++11…17 features.
While being conservative regarding safety is often not a bad idea, the new language features might be beneficial to safety.
During reviews one has to argue why certain decisions were made. And one can always argue that the new language features make the code clearer, and thus cause fewer errors due to misunderstandings, especially if the compiler is able to test and verify your assumptions.
But it is hard to find language features that carry the safety aspects more prominently than "make things clearer". What aspects of modern C++ really help regarding safety?
I'm setting up a small exercise project to test these ideas and am currently totally focused on "let the compiler check your assumptions". For example, we have just started to use [[nodiscard]] and found at least two bugs this way within the first hour. But what aspects of modern C++ were designed with safety in mind and should be used that way?
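For instance, a minimal sketch of the kind of check [[nodiscard]] gives you (the names here are invented):

    #include <cstdio>

    enum class Status { Ok, Error };

    [[nodiscard]] Status write_record(int /*id*/) { return Status::Ok; }

    int main() {
        write_record(42);                   // compiler warns: result discarded
        if (write_record(42) != Status::Ok)
            std::puts("write failed");      // result actually checked
    }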
These come to my mind first:
std::atomic and the memory model: they allow writing portable code in concurrent / lock-free contexts.
unique_ptr: helps simplify memory handling.
override lets you find bugs at compile time.
if constexpr lets the code be written closer to where it is used, which helps you write fewer bugs (previously, to specialize behaviour according to a template parameter, you would write a class with n specializations; now you can write if constexpr with n branches instead - see the sketch after this list).
etc... in a way, considering the benefits to code clarity and portability, I think every feature of C++11/14/17 helps.
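A minimal sketch of the if constexpr point, assuming C++17 (the function and the trait checks are just for illustration):

    #include <iostream>
    #include <string>
    #include <type_traits>

    // One function with if constexpr branches instead of n specializations.
    template <typename T>
    std::string describe(const T& value) {
        if constexpr (std::is_integral_v<T>)
            return "integer: " + std::to_string(value);
        else if constexpr (std::is_floating_point_v<T>)
            return "float: " + std::to_string(value);
        else
            return "something else";
    }

    int main() {
        std::cout << describe(42) << '\n';     // integer: 42
        std::cout << describe(3.5) << '\n';    // float: 3.500000
        std::cout << describe("text") << '\n'; // something else
    }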
And one can always argue that the new language features make the code clearer, and thus cause fewer errors due to misunderstandings, especially if the compiler is able to test and verify your assumptions.
In my not so humble opinion, there are few language features, that is, standard general-purpose programming language features, that both fall outside of the allowed standards AND are worth the time and energy it takes to argue your way through an assessment. If you are aiming for a higher level of abstraction (which is a good thing for safety too, although you'll hardly find anyone openly admitting this, because it would render half of the safety industry unemployed and the other half severely outdated), then you'd be better off resorting to a domain-specific language and putting the effort into a flawless compilation (to source) for a standard-conforming platform. If you don't work in an engineering culture which allows this, then you can resort to some of the patches that the other answer here proposes, but it is always difficult to convincingly convey the intention and meaning of non-specific measures to other safety engineers (a dedicated domain-specific language is much easier either to support or to object to).
That said I think the advances in parallel programming of modern C++ will find their way into the standards relatively quickly.

Is it worth writing part of code in C instead of C++ as micro-optimization?

I am wondering whether, with modern compilers and their optimizations, it is still worthwhile to write some critical code in C instead of C++ to make it faster.
I know C++ might lead to bad performance when classes are copied where they could have been passed by reference, or when classes are created automatically by the compiler, typically with overloaded operators and many other similar cases; but for a good C++ developer who knows how to avoid all of this, is it still worth writing code in C to improve performance?
I'm going to agree with a lot of the comments. C syntax is supported, intentionally (with divergence only in C99), in C++. Therefore all C++ compilers have to support it. In fact I think it's hard to find any dedicated C compilers anymore. For example, in GCC you'll actually end up using the same optimization/compilation engine regardless of whether the code is C or C++.
The real question is then: does writing plain C code and compiling it as C++ suffer a performance penalty? The answer is, for all intents and purposes, no. There are a few tricky points about exceptions and RTTI, but those are mainly size changes, not speed changes. You'd be so hard pressed to find an example that actually takes a performance hit that it doesn't seem worth it to write a dedicated module.
What was said about what features you use is important. It is very easy in C++ to get sloppy about copy semantics and suffer huge overheads from copying memory. In my experience this is the biggest cost -- in C you can also suffer this cost, but not as easily I'd say.
Virtual function calls are ever so slightly more expensive than normal function calls. At the same time, forced inline functions are cheaper than normal function calls. In both cases it is likely the cost of pushing/popping parameters on the stack that is more expensive. Worrying about function call overhead, though, should come quite late in the optimization process, as it is rarely a significant problem.
Exceptions are costly at throw time (in GCC at least). But setting up catch statements and using RAII doesn't have a significant cost associated with it. This was by design in the GCC compiler (and others) so that truly only the exceptional cases are costly.
But to summarize: a good C++ programmer would not be able to make their code run faster simply by writing it in C.
measure! measure before thinking about optimizing, measure before applying optimization, measure after applying optimization, measure!
If you must run your code 1 nanosecond faster (because it's going to be used by 1000 people, 1000 times in the next 1000 days and that second is very important) anything goes.
Yes! It is worth ...
changing languages (C++ to C; Python to COBOL; Matlab to Fortran; PHP to Lisp)
tweaking the compiler (enable/disable all the -f options)
using different libraries (even writing your own)
etc
etc
What you must not forget is to measure!
pmg nailed it. Just measure instead of making global assumptions. Also think of it this way: compilers like GCC separate the front, middle, and back end, so the front ends for Fortran, C, C++, Ada, etc. all end up in the same internal middle language, if you will, and that is where most of the optimization happens. Then that generic middle language is turned into assembler for the specific target, and target-specific optimizations occur there. So the language may or may not induce more code from the front end to the middle when the languages differ greatly, but for C/C++ I would assume it is the same or very similar. Now, the binary size is another story: the libraries that may get pulled into the binary for C only vs. C++ (even if it is only C syntax) can and will vary. That doesn't necessarily affect execution performance, but it can bulk up the program file, costing storage and transfer differences as well as memory requirements if the program is loaded as a whole into RAM. Here again, just measure.
I would also add to the measure comment: compile to assembler and/or disassemble the output, and compare the results of your different language/compiler choices. This can/will supplement the timing differences you see when you measure.
The question has been answered to death, so I won't add to that.
Simply as a generic question, assuming you have measured, etc, and you have identified that a certain C++ (or other) code segment is not running at optimal speed (which generally means you have not used the right tool for the job); and you know you can get better performance by writing it in C, then yes, definitely, it is worth it.
There is a certain mindset that is common: trying to do everything with one tool (Java or SQL or C++). Not just Maslow's hammer, but the actual belief that they can code a C construct in Java, etc. This leads to all kinds of performance problems. Architecture, as a true profession, is about placing code segments in the appropriate architectural location or platform. It is the correct combination of Java, SQL and C that will deliver performance. That produces an app that does not need to be revisited; uneventful execution. In which case, it will not matter if or when C++ implements this constructor or that.
I am wondering whether, with modern compilers and their optimizations, it is still worthwhile to write some critical code in C instead of C++ to make it faster.
no. keep it readable. if your team prefers c++ or c, prefer that - especially if it is already functioning in production code (don't rewrite it without very good reasons).
I know C++ might lead to bad performance when classes are copied where they could have been passed by reference
then forbid copying and assigning
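for example, a minimal sketch (C++11 = delete syntax; pre-C++11 the same effect comes from declaring them private and leaving them undefined):

    struct image {
        image() = default;
        image(const image&) = delete;             // copying is now a compile error
        image& operator=(const image&) = delete;
        image(image&&) = default;                 // moving stays cheap
        image& operator=(image&&) = default;
        // large pixel buffer, file handles, ...
    };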
or when classes are created automatically by the compiler, typically with overloaded operators and many other similar cases
could you elaborate? if you are referring to templates, they don't have additional cost in runtime (although they can lead to additional exported symbols, resulting in a larger binary). in fact, using a template method can improve performance if (for example) a conversion would otherwise be necessary.
but for a good C++ developer who knows how to avoid all of this, is it still worth writing code in C to improve performance?
in my experience, an expert c++ developer can create a faster, more maintainable program.
you have to be selective about the language features that you use (and do not use). if you break c++ features down to the set available in c (e.g., remove exceptions, virtual function calls, rtti) then you're off to a good start. if you learn to use templates, metaprogramming, optimization techniques, avoid type aliasing (which becomes increasingly difficult or verbose in c), etc. then you should be on par or faster than c - with a program which is more easily maintained (since you are familiar with c++).
if you're comfortable using the features of c++, use c++. it has plenty of features (many of which have been added with speed/cost in mind), and can be written to be as fast as c (or faster).
with templates and metaprogramming, you could turn many runtime variables into compile-time constants for exceptional gains. sometimes that goes well into micro-optimization territory.
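a small sketch of that idea (names invented; the first form is the classic template metaprogram, the second is C++11 constexpr):

    // classic template metaprogram: the value is computed by the compiler
    template <unsigned N>
    struct factorial {
        static const unsigned long value = N * factorial<N - 1>::value;
    };
    template <>
    struct factorial<0> {
        static const unsigned long value = 1;
    };

    // c++11 constexpr expresses the same thing more directly
    constexpr unsigned long factorial_fn(unsigned n) {
        return n == 0 ? 1 : n * factorial_fn(n - 1);
    }

    // both of these are plain constants in the generated code
    static const unsigned long table_size = factorial<8>::value;        // 40320
    static_assert(factorial_fn(8) == 40320, "computed at compile time");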

C++11: a new language?

Recently I started reading (just a bit) the current draft for the future C++11 standard.
There are lots of new features, some of them already available via Boost Libs. Of course, I'm pretty happy with this new standard and I'd like to play with all the new features as soon as possible.
Anyway, speaking about this draft with some friends, long-time C++ devs, some worries emerged. So, I ask you (to answer them):
1) The language itself
This update is huge, maybe too huge for a single standard update. Huge for the compiler vendors (even if most of them already started implementing some features) but also for the end-users.
In particular, a friend of mine told me "this is a sort of new language".
Can we consider it a brand new language after this update?
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
2) Knowledge of the language
How will the learning curve be impacted by the new standard?
Will teaching the language be more difficult?
Some features, while pretty awesome, seem a bit too "academic" to me (in how they are defined, I mean). Am I wrong?
Mastering all these new additions could be a nightmare, couldn't it?
In short, no, we can't consider this a new language. It's the same language, new features. But instead of being bolted on by using the Boost libs, they're now going to be standard inclusions if you're using a compiler that supports the 0x standard.
One doesn't have to use the new standard while using a compiler that supports the new standard. One will have to learn and use the new standard if certain constraints exist on the software being developed, however, but that's a constraint with any software endeavor. I think that the new features that the 0x standard brings will make doing certain things easier and less error prone, so it's to one's advantage to learn what the new features are, and how they will improve their design strategy for future work. One will also have to learn it so that when working on software developed with it, they will understand what's going on and not make large boo-boos.
As to whether I will "switch to the new standard", if that means that I will learn the new standard and use it where applicable and where it increases my productivity, then yes, I certainly plan to switch. However, if this means that I will limit myself to only working with the new features of the 0x standard, then no, since much of my work involves code written before the standard and it would be a colossal undertaking to redesign everything to use the new features. Not only that, but it may introduce new bugs and performance issues that I'm not aware of without experience.
Learning C++ has always been one of the more challenging journeys a programmer can undertake. Adding new features to the language will not change the difficulty of learning its syntax and how to use it effectively, but the approach will change. People will still learn about pointers and how they work, but they'll also learn about smart pointers and how they're managed. In some cases, people will learn things differently than before. For example, people will still need to learn how to initialize things, but now they'll learn about Uniform Initialization and Initializer Lists as primary ways to do things. In some cases, perhaps understanding things will be easier with the addition of the new for syntax for ranges or the auto return type in a function declaration. I think that overall, C++ will become easier to learn and use while at the same time becoming easier to teach.
Mastering a language is a long-term goal; it cannot be done overnight. It's silly to think that one can have mastery over something as complex as C++ quickly. It takes practice, experience and debugging code to really hammer something in. Learning academically is one thing, but putting that knowledge to use is an entirely different monster. I think that if one already has mastery of the C++ language, the new concepts will not pose too much of a burden, but a newcomer may have an advantage in that they won't bother learning some of the more obsolete ways of doing things.
1) The language itself
As far as I'm aware, there are really no breaking changes between C++03 and C++0x. The only one I can think of here relates to using auto as a storage class specifier, but since it had no semantic meaning I don't see that being an issue.
There are a lot of other academic fixes to the standard which are very necessary, for example better descriptions for the layout of member data. Finally, with multi-core/CPU architectures becoming the norm, fixing the memory model was a must.
2) Knowledge of the language
Personally, I feel that for 99.9% of C++ developers the newer language is going to be easier to use. I'm specifically thinking of features such as auto, lambdas and constexpr. These features really should make using the language more enjoyable.
At a more advanced level, you have other features such as variadic templates etc. that help the more advanced users.
But there's nothing new here; I'm still surprised at the number of everyday C++ developers that haven't used (or even heard of) the STL.
From a personal perspective, the only feature I'm a bit concerned about in the new standard is concepts. As it is such a large change, the same problems that occurred with templates (i.e. completely broken implementations) are a real danger.
Update post FDIS going out for voting:
As it happens, 'concepts' was dropped for C++0x and will be taken up again for C++1x. In the end there are some changes other than auto which could break your code, but in practice they'll probably be pretty rare. The key differences can be found in Appendix C.2 of the FDIS (pdf).
For me, one of the most important, will be:
unique_ptr + std::move() !
Imagine:
Smart pointer without any overhead:
no reference counting operations
no additional storage for reference counter variable
Smart pointer that can be moved, i.e. no destructor/constructor calls when moved
What does this give you? Exception-safe, cheap (pointers..) containers without any costs. The container will be able to just memcpy() unique_ptrs, so there will be no performance loss caused by wrapping a regular pointer in a smart pointer! So, once again:
You can use pointers
It will be safe (no memory leaks)
It will cost you nothing
You will be able to store them in containers, and they will be able to do "massive" moves (memcpy-like) with them cheaply.
It will be exception safe
:)
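A rough sketch of the usage being described (C++11; the type name is invented):

    #include <memory>
    #include <utility>
    #include <vector>

    struct Texture { /* big resource */ };

    int main() {
        std::unique_ptr<Texture> p(new Texture);   // sole owner, no ref count
        std::vector<std::unique_ptr<Texture>> cache;

        cache.push_back(std::move(p));             // ownership moves, nothing is copied
        // p is now null; the Texture is destroyed exactly once, when the
        // element (or the whole cache) goes away.
    }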
Another point of view:
Actually, when you move a group of objects using copy(), there is a constructor and a destructor call for every object instance. When you copy 1000 objects of 1 kB size, there will be at least one memcpy() and 2000 function calls.
If you wanted to avoid those thousands of calls, you would have to use pointers.
But pointers are dangerous, etc. Existing smart pointers will not help you; they solve other problems.
There is no solution for now. You must pay for the C++ RAII/pointer/value-variable design from time to time. But with C++0x, using unique_ptr will allow you to do "massive" moves of objects (yes, practically objects, because the pointer will be smart) without "massive" constructor/destructor calls, and without the risk of using raw pointers! For me, this is really important.
It's like relaxing the RAII concept (because of using pointers) without losing the RAII benefits. Another aspect: a pointer wrapped in unique_ptr will behave in many respects like a Java reference variable. The difference is that a unique_ptr can exist in only one scope at a time.
Your friend is partially right but mostly wrong: it's the same language with extra features.
The good thing is, you don't have to master all the new features. One of the primary mandates for a standards body is to not break existing code, so you'll be able to go on, happily coding in your old style (I'm still mostly a C coder even though I do "C++" applications :-).
Only when you want to have a look at the new features will you need to bone up on the changes. This is a process you can stretch over years if need be.
My advice is to learn what all the new features are at a high level (if only to sound knowledgeable in job interviews) but learn the details slowly.
In some respects, C++0x should be easier to teach/learn than current C++:
Looping through a container - the new for syntax is far easier than for_each + functor or looping manually using iterators (see the sketch after this list)
Initialising containers: we'll be able to initialise sequences with the same syntax as arrays
Memory management: out goes dodgy old auto_ptr, in comes well-defined unique_ptr and shared_ptr
Lambdas, although necessarily more complex than other languages' equivalents, will be easier to learn than the C++98 process of defining function objects in a different scope.
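A minimal sketch of the first two points, assuming a C++11 compiler:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // initialising a sequence with array-like syntax
        std::vector<int> primes = { 2, 3, 5, 7, 11 };

        // range-based for instead of iterators or for_each + functor
        for (int p : primes)
            std::cout << p << ' ';
        std::cout << '\n';

        // the same pattern works for associative containers
        std::map<std::string, int> ages = { { "alice", 30 }, { "bob", 25 } };
        for (const auto& kv : ages)
            std::cout << kv.first << " is " << kv.second << '\n';
    }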
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
A year ago, I was writing strict C89, because the product in question was aggressively portable to embedded platforms, some of which had compilers with radically different ideas of which bits of C99 it's worth supporting. So a 20-year-old standard still hasn't been fully replaced by its 10-year-old successor.
So I don't expect to be able to get away from C++03 any time soon.
I do expect to use C++0x features where appropriate. Just as I use C99 features in C code, and gcc extensions in C and C++ (and would use MSVC extensions, although I've never worked on MSVC-only code for more than trivial amounts of time). But I expect it to be "nice to have" rather than baseline, pretty much indefinitely.
You have a point, but it's always been the case. There is a lot of C++ code out there that still doesn't incorporate anything from the '98 standard just because of the innate conservatism of some coders. Some of us remember a dark time before the std:: namespace (before namespaces, in fact), when everyone wrote their own string class, and pointers walked around naked all the time. There is a reason why we talk about "modern C++ style" - to distinguish from the earlier style because some people still have to maintain or update code in that style.
Any language has to evolve to survive, and any language that evolves will have a divided user base, if only because people vary in their attitude towards estimating opportunity costs in applying new language features to their own work.
With the advent of C++0x in shipping compilers, this conversation will be played out over and over in dev teams across the world:
YOUNGSTER: I've just discovered these things called lambdas! And I'm finding lots of ways to use them to make our code more expressive! Look, I rewrote your old Foo class, isn't that much neater?
OLDSTER: There was nothing wrong with my old Foo class. You're just looking for excuses to use a "cool" new feature unnecessarily. Why do you keep trying to make my life so complicated? Why do I keep having to learn new things? We need another war, that's what we need.
YOUNGSTER: You're just too stuck in your ways, old man, we shouldn't even be using C++ these days... if it was up to me -
OLDSTER: If it was up to me we'd have stuck with PL/1, but no... my wife had to vote for Carter and now we're stuck with all this object-oriented crap. There's nothing you can do with std::transform and lambdas that I can't do with a goto and a couple of labels.
etc.
Your programming career will always involve learning and re-learning. You can't expect C++ to stay the same till you retire and to be using the same methods and practices that you were using 40 years ago. Technology rolls on, and it rolls quickly. It's your job to keep up with it. Of course you can ignore that, and continue to work the same way you currently do, but in 5 / 10 years' time you'll have become so outdated that you'll be forced to learn it all when you're trying to change jobs. And it will have been a lot easier to learn on the job all those years before :)
A few months ago I heard Bjarne Stroustrup give a talk titled 50 years of C++. Admittedly, I'm not a C++ programmer, but it seemed to me that he certainly doesn't think 0x is a new language!
Whether or not we can consider it a "new language", I think that's semantics. It doesn't make a difference. It's backwards compatible with our current C++ code, and it's a better language. Whether or not we consider it "the same language" doesn't matter.
About learning the language, remember that a lot of the new features are there to make the language easier to learn and use. Most of the features that add complexity are intended for library developers only. They can use these new features to make better, more efficient, and easier to use libraries, that you can then use without knowing about the features. Several of the changes actually simplify and generalize existing features, making them easier for newcomers to learn.
It is a big update, yes, but it is guided by a decade of experience with the current C++ standard. Every change is there because experience has shown that it is needed. In fact, the committee has been extremely cautious and conservative, and has refused a huge number of other language improvements. What is added here is only the fundamentals that 1) everyone could agree on, and 2) could be specified in time, without delaying the new standard.
It is not simply a few language designers sitting down and brainstorming new features they'd like to try.
Concepts and Concept Maps are going to greatly increase the grokability of template frameworks. If you've ever pored over the Boost source you'll know what I mean. You're constantly going from source to docs because the language just doesn't have the facilities to express template concepts. Hopefully Concepts + Duck Typing will give us the best of both worlds, whereby entry points to template libraries can explicitly declare requirements but still have the freedom that Duck Typing provides when writing generic code.
There are lots of good things in C++0x, but they're mostly evolutionary changes that refine or extend existing ideas. I don't think it's different enough to justify calling it a "new language".

Is there a C++ styled language without C trappings?

The main reason to use C++ over a managed language these days is to gain the benefits C++ brings to the table. One of the pillars of C++ is "you don't pay for what you don't need". It can be argued though that sometimes you don't want to pay for backward compatibility with C. Many of the odd quirks of C++ can be attributed to this backward compatibility. What other languages are there where "you don't pay for what you don't need" including backward compatibility with C?
Edit/clarification: The real killer for me is in that second sentence. Is there a language truly designed from the ground up that doesn't impose things you don't want on your code? C++ has that as its design philosophy: don't want RTTI? It doesn't exist. Don't want garbage collection? It's not there. The problem with C++ is it (IMO) violates this requirement when it refuses to break from the past. I don't want the cruft of backward compatibility with 20 year old code hampering my going forward. C++ isn't willing to do that. What is/has?
Edit2: I suppose I should have been more clear about what a cost is. There are multiple potential costs. The one I was initially focusing on was runtime cost.
In C++ polymorphism through virtual methods has a cost. But not all methods pay that cost. A non-virtual C++ method is called with the same runtime cost as a plain old C function (having at least one parameter). C++ does not require you to use polymorphism. In other OOP languages all methods are virtual and so the cost of polymorphism cannot be avoided.
Runtime costs are the most important, but other costs weigh against them. Assembly language would obviously have the least runtime overhead, but the writing and maintenance costs of assembly language are a huge strike against it.
With that in mind the idea is to find languages that provide useful abstractions which, when not in use, do not affect runtime costs.
D language
D is a general purpose systems and applications programming language. It is a higher level language than C++, but retains the ability to write high performance code and interface directly with the operating system API's and with hardware. D is well suited to writing medium to large scale million line programs with teams of developers. D is easy to learn, provides many capabilities to aid the programmer, and is well suited to aggressive compiler optimization technology.
The general look of D is like C and C++. This makes it easier to learn and port code to D. Transitioning from C/C++ to D should feel natural. The programmer will not have to learn an entirely new way of doing things.
D drops C source code compatibility. Extensions to C that maintain source compatibility have already been done (C++ and ObjectiveC). Further work in this area is hampered by so much legacy code it is unlikely that significant improvements can be made.
I'm tempted to downvote this question (but as of yet I have not). Your requirement that "you don't pay for what you don't use" depends very heavily on what exactly you use. You already mentioned in one of your comments that assembly is perhaps the most fluff-free language there is, but you complain about C, which sits somewhere between assembly and C++.
If you find garbage collection and explicit object-oriented features "fluff" then frankly, I think C is probably the best candidate. The language is actually small and elegantly designed. It meets most people's "fits in one's head" requirement. For a language that gives such close control over the hardware, it's very expressive.
If you are not tied to the hardware, then Scheme or some other minimalist dialect of Lisp probably fits the bill for "doesn't impose what you don't want on your code" but again, it all depends so heavily on what exactly it is that you don't want.
If there are some higher-level features you "can't live without" (which it seems you are implying by naming "C++ without C compatibility" as your language of choice) then you should say explicitly what those are. What exactly is C++ bringing to the table that you don't want to sacrifice?
Ada is another alternative - gc is optional in the implementation of the language:
http://en.wikipedia.org/wiki/Ada_(programming_language)
You can also try Ch from : http://www.softintegration.com/
SPECS is alternative syntactic binding for C++. This binding includes a simple syntax for declaring and defining types, functions, objects and templates, and changes several problematic operators and control structures. The resulting syntax is LALR(1) parsable.
Eiffel. Looks like Pascal, has C++-style semantics. It also adds special "programming by contract" assertions that are built into the language definition, years before people were talking about "X"Unit libraries.
Try Aikido. Syntactically similar to C++, and I don't think it inherits C style backward compatibility.
Aikido # Wikipedia
I'm not exactly sure what costs backward compatibility with C has...
I'm not exactly sure there are any runtime hits associated with C backwards compatibility, could you elaborate? Certainly the backwards compatibility does give you more rope to hang yourself with but strict enforcement of rules can mitigate that. So I suppose my answer to you is C++ is the language that does what you ask for and you just need a tool to enforce valid constructs.
Edit: odd that I didn't notice the post directly below me with the same first-line comment.

What can C++ do that is too hard or messy in any other language?

I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here, please, if you have strong opinions about not liking C++ don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it.
I'm particularly interested in aspects of C++ that are little known, or underutilised.
RAII / deterministic finalization. No, garbage collection is not just as good when you're dealing with a scarce, shared resource.
Unfettered access to OS APIs.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language.
Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
Shooting oneself in the foot.
No other language offers such a creative array of tools. Pointers, multiple inheritance, templates, operator overloading and a preprocessor.
A wonderfully powerful language that also provides abundant opportunities for foot shooting.
Edit: I apologize if my lame attempt at humor has offended some. I consider C++ to be the most powerful language that I have ever used -- with abilities to code at the assembly language level when desired, and at a high level of abstraction when desired. C++ has been my primary language since the early '90s.
My answer was based on years of experience of shooting myself in the foot. At least C++ allows me to do so elegantly.
Deterministic object destruction leads to some magnificent design patterns. For instance, while RAII is not as general a technique as garbage collection, it leads to some impressive capabilities which you cannot get with GC.
C++ is also unique in that its template machinery is Turing-complete at compile time (on top of the textual preprocessor). This allows you to prefer (as in the opposite of defer) a lot of code tasks to compile time instead of run time. For instance, in real code you might have an assert() statement to test for a never-happen. The reality is that it will sooner or later happen... and happen at 3:00am when you're on vacation. A C++ compile-time assert does the same test at compile time. Compile-time asserts fail between 8:00am and 5:00pm while you're sitting in front of the computer watching the code build; run-time asserts fail at 3:00am when you're asleep in Hawai'i. It's pretty easy to see the win there.
In most languages, strategy patterns are done at run time and throw exceptions in the event of a type mismatch. In C++, strategies can be selected at compile time through templates and can be guaranteed typesafe.
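A rough sketch of the compile-time checks being described, using C++11 static_assert for brevity (pre-C++11 code used template tricks such as BOOST_STATIC_ASSERT); the Logger/Storage names are invented:

    #include <type_traits>

    // A "never happen" condition caught while the code builds, not at 3:00am.
    static_assert(sizeof(void*) >= 4, "this code assumes at least 32-bit pointers");

    // A compile-time strategy: the policy is a template parameter, and a wrong
    // combination simply fails to compile instead of throwing at run time.
    template <typename Storage>
    class Logger {
        static_assert(std::is_default_constructible<Storage>::value,
                      "Storage policy must be default constructible");
        Storage storage_;
    public:
        void log(const char* /*msg*/) { /* write via storage_ */ }
    };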
Write inline assembly (MMX, SSE, etc.).
Deterministic object destruction. I.e. real destructors. Makes managing scarce resources easier. Allows for RAII.
Easier access to structured binary data. It's easier to cast a memory region as a struct than to parse it and copy each value into a struct (see the sketch below).
Multiple inheritance. Not everything can be done with interfaces. Sometimes you want to inherit actual functionality too.
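A minimal sketch of the structured-binary-data point (the wire format here is invented; memcpy is the aliasing-safe way to reinterpret the bytes, though much real code simply casts the buffer, and byte order and padding still have to match the wire format):

    #include <cstdint>
    #include <cstring>

    struct PacketHeader {
        std::uint16_t type;
        std::uint16_t length;
        std::uint32_t sequence;
    };

    PacketHeader parse_header(const unsigned char* buffer) {
        PacketHeader h;
        // copy the raw bytes straight into the struct instead of parsing
        // field by field
        std::memcpy(&h, buffer, sizeof h);
        return h;
    }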
I think I'm just going to praise C++ for its ability to use templates to capture expressions and execute them lazily when needed. For those not knowing what this is about, here is the rough idea.
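A rough sketch of the idea (a tiny expression-template-style example; all names are invented, and real expression-template libraries are considerably more elaborate):

    #include <iostream>

    // The addition is not performed where it is written; it is captured in a
    // type and evaluated lazily when the result is actually requested.
    // (For simplicity the operands are stored by value here; classic
    // expression templates store references and evaluate in-expression.)
    struct Value {
        double v;
        double eval() const { return v; }
    };

    template <typename L, typename R>
    struct Sum {
        L lhs;
        R rhs;
        double eval() const { return lhs.eval() + rhs.eval(); }
    };

    template <typename L, typename R>
    Sum<L, R> operator+(const L& lhs, const R& rhs) { return Sum<L, R>{lhs, rhs}; }

    int main() {
        Value a{1.0}, b{2.0}, c{3.0};
        auto expr = a + b + c;             // nothing computed yet, just a type
        std::cout << expr.eval() << '\n';  // evaluated lazily here: prints 6
    }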
Template mixins provide reuse that I haven't seen elsewhere. With them you can build up a large object with lots of behaviour as though you had written the whole thing by hand. But all these small aspects of its functionality can be reused, it's particularly great for implementing parts of an interface (or the whole thing), where you are implementing a number of interfaces. The resulting object is lightning-fast because it's all inlined.
Speed may not matter in many cases, but when you're writing component software, and users may combine components in unthought-of complicated ways to do things, the speed of inlining and C++ seems to allow much more complex structures to be created.
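A small sketch of the kind of mixin reuse being described (names invented):

    #include <iostream>
    #include <string>

    // Small reusable pieces of behaviour, combined by inheritance and inlined away.
    template <typename Derived>
    struct Printable {
        void print() const {
            std::cout << static_cast<const Derived&>(*this).name() << '\n';
        }
    };

    template <typename Derived>
    struct Comparable {
        bool same_name(const Derived& other) const {
            return static_cast<const Derived&>(*this).name() == other.name();
        }
    };

    struct Widget : Printable<Widget>, Comparable<Widget> {
        std::string name() const { return "widget"; }
    };

    int main() {
        Widget a, b;
        a.print();                              // behaviour reused from the mixin
        std::cout << a.same_name(b) << '\n';    // prints 1
    }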
Absolute control over the memory layout, alignment, and access when you need it. If you're careful enough you can write some very cache-friendly programs. For multi-processor programs, you can also eliminate a lot of slow downs from cache coherence mechanisms.
(Okay, you can do this in C, assembly, and probably Fortran too. But C++ lets you write the rest of your program at a higher level.)
This will probably not be a popular answer, but I think what sets C++ apart are its compile-time capabilities, e.g. templates and #define. You can do all sorts of text manipulation on your program using these features, much of which has been abandoned in later languages in the name of simplicity. To me that's way more important than any low-level bit fiddling that's supposedly easier or faster in C++.
C#, for instance, doesn't have a real macro facility. You can't #include another file directly into the source, or use #define to manipulate the program as text. Think about any time you had to mechanically type repetitive code and you knew there was a better way. You may even have written a program to generate code for you. Well, the C++ preprocessor automates all of these things.
The "generics" facility in C# is similarly limited compared to C++ templates. C++ lets you apply the dot operator to a template type T blindly, calling (for example) methods that may not exist, and checks-for-correctness are only applied once the template is actually applied to a specific class. When that happens, if all the assumptions you made about T actually hold, then your code will compile. C# doesn't allow this... type "T" basically has to be dealt with as an Object, i.e. using only the lowest common denominator of operations available to everything (assignment, GetHashCode(), Equals()).
C# has done away with the preprocessor, and real generics, in the name of simplicity. Unfortunately, when I use C#, I find myself reaching for substitutes for these C++ constructs, which are inevitably more bloated and layered than the C++ approach. For example, I have seen programmers work around the absence of #include in several bloated ways: dynamically linking to external assemblies, re-defining constants in several locations (one file per project) or selecting constants from a database, etc.
As Ms. Crabapple from The Simpsons once said, this is "pretty lame, Milhouse."
In terms of Computer Science, these compile-time features of C++ enable things like call-by-name parameter passing, which is known to be more powerful than call-by-value and call-by-reference.
Again, this is perhaps not the popular answer- any introductory C++ text will warn you off of #define, for example. But having worked with a wide variety of languages over many years, and having given consideration to the theory behind all of this, I think that many people are giving bad advice. This seems especially to be the case in the diluted sub-field known as "IT."
Passing POD structures across processes with minimum overhead. In other words, it allows us to easily handle blobs of binary data.
C# and Java force you to put your 'main()' function in a class. I find that weird, because it dilutes the meaning of a class.
To me, a class is a category of objects in your problem domain. A program is not such an object. So there should never be a class called 'Program' in your program. This would be equivalent to a mathematical proof using a symbol to notate itself -- the proof -- alongside symbols representing mathematical objects. It'll be just weird and inconsistent.
Fortunately, unlike C# and Java, C++ allows global functions. That lets your main() function exist outside any class. Therefore C++ offers a simpler, more consistent and perhaps truer implementation of the object-oriented idiom. Hence, this is one thing C++ can do, but C# and Java cannot.
I think that operator overloading is a quite nice feature. Of course it can be very much abused (like in Boost lambda).
Tight control over system resources (esp. memory) while offering powerful abstraction mechanisms optionally. The only language I know of that can come close to C++ in this regard is Ada.
C++ provides complete control over memory and, as a result, makes the flow of program execution much more predictable.
Not only can you say precisely at what time allocations and deallocations of memory occur, you can define your own heaps, have multiple heaps for different purposes and say precisely where in memory data is allocated (see the sketch at the end of this answer). This is frequently useful when programming on embedded/real-time systems, such as games consoles, cell phones, mp3 players, etc..., which:
have strict upper limits on memory that are easy to reach (contrast with a PC, which just gets slower as you run out of physical memory)
frequently have a non-homogeneous memory layout. You may want to allocate objects of one type in one piece of physical memory, and objects of another type in another piece.
have real time programming constraints. Unexpectedly calling the garbage collector at the wrong time can be disastrous.
AFAIK, C and C++ are the only sensible option for doing this kind of thing.
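A rough sketch of the kind of control being described (a hand-managed memory region plus placement new; the names are invented and C++11 alignas is used for brevity):

    #include <cstddef>
    #include <new>

    // Invented example: a fixed region of "fast" on-chip memory.
    alignas(std::max_align_t) static unsigned char fast_ram[4096];
    static std::size_t fast_ram_used = 0;

    void* fast_alloc(std::size_t n) {
        // trivial bump allocator: no freeing, no per-allocation alignment,
        // no thread safety -- illustration only
        if (fast_ram_used + n > sizeof fast_ram) return nullptr;
        void* p = fast_ram + fast_ram_used;
        fast_ram_used += n;
        return p;
    }

    struct Particle { float x, y, z; };

    int main() {
        // placement new: the object is constructed exactly where we choose
        void* slot = fast_alloc(sizeof(Particle));
        Particle* p = new (slot) Particle;
        p->x = 1.0f; p->y = 2.0f; p->z = 3.0f;
        p->~Particle();   // explicit destruction, since we own the memory
    }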
Well, to be quite honest, you can do just about anything if you're willing to write enough code.
So to answer your question: no, there is nothing C++ can do that you can't do in another language. It's just a question of how much patience you have and whether you are willing to devote the long sleepless nights to get it to work.
There are things that C++ wrappers make it easy to do (because they can read the header files), like Office development. But again, it's because someone wrote lots of code to "wrap" it for you in an RCW or "Runtime Callable Wrapper"
EDIT: You also realize this is a loaded question.