Testing C++17 in safety critical systems - c++

I'm currently thinking about C++ in safety-critical software (DO-178C DAL-D) and the definition of a coding standard. I was looking at MISRA C++, which is by now 10 years old and misses all the C++11…17 features.
While being conservative regarding safety is often not a bad idea, the new language features might be beneficial to safety.
During reviews one has to argue about why you made certain decisions. And one can always argue that the new language features make the code clearer …thus fewer errors regarding misunderstandings; especially if the compiler is able to test and verify your assumptions.
But it is hard to find language features that carry the safety aspects more prominently than "make things clearer". What aspects of modern C++ really help regarding safety?
I'm setting up a small exercise project to test these ideas and am currently totally focused on "let the compiler check your assumptions". For example, we have just started to use [[nodiscard]] and found at least two bugs this way within the first hour. But what aspects of modern C++ were designed with safety in mind and should be used that way?
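To make that concrete, here is a minimal sketch of the kind of mistake [[nodiscard]] catches (Status and read_sensor are invented names for the example):

    #include <cstdio>

    enum class Status { Ok, Timeout, CrcError };

    // Hypothetical driver call: the attribute makes the compiler warn whenever
    // the returned status is silently dropped.
    [[nodiscard]] Status read_sensor(int channel) {
        return channel >= 0 ? Status::Ok : Status::CrcError;
    }

    int main() {
        read_sensor(3);                       // warning: ignoring [[nodiscard]] value
        if (read_sensor(3) != Status::Ok) {   // fine: the result is checked
            std::puts("sensor read failed");
        }
    }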

These come to my mind first:
std::atomic and the memory model: they allow writing portable code in concurrent / lock-free contexts.
unique_ptr: helps simplify memory handling.
override lets you find bugs at compile time.
if constexpr lets the code be written closer to where it is used, which helps avoid bugs (sometimes, to specialize a behaviour according to a template parameter, you would write a class with n specializations; now you can use if constexpr with n branches instead - see the sketch after this list).
etc. In a way, considering the benefits to code clarity and portability, I think every feature of C++11/14/17 helps.
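As a rough sketch of the if constexpr point above (encoded_size and Telemetry are made-up names, not anything from a real project):

    #include <cstddef>
    #include <cstdint>
    #include <type_traits>

    // Instead of a class template with n specializations, the per-type
    // behaviour sits in one function, next to the code that uses it.
    template <typename T>
    constexpr std::size_t encoded_size(const T& value) {
        if constexpr (std::is_integral_v<T>) {
            return sizeof(T);                 // fixed-width integers as-is
        } else if constexpr (std::is_floating_point_v<T>) {
            return sizeof(std::uint64_t);     // floats sent as scaled integers
        } else {
            return value.encoded_size();      // user-defined types provide it
        }
    }

    struct Telemetry {
        std::size_t encoded_size() const { return 12; }
    };

    static_assert(encoded_size(42) == sizeof(int));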

And one can always argue that the new language features make the code clearer …thus fewer errors regarding misunderstandings; especially if the compiler is able to test and verify your assumptions.
In my not so humble opinion, there are few language features - that is, standard general-purpose programming language features - that both fall outside the allowed standards and are worth the time and energy to argue your way through in an assessment. If you are aiming for a higher level of abstraction (which is a good thing also for safety, although you'll hardly find anyone openly admitting this, because it would render half of the safety industry unemployed and the other half severely outdated), then you'd be better off resorting to a domain-specific language and putting the effort into a flawless compilation (to source) for a standard-conforming platform. If you don't work in an engineering culture which allows this, then you can resort to some of the patches that the other answer here proposes, but it is always difficult to convincingly transport the intention and meaning of non-specific measures to other safety engineers (a dedicated domain-specific language is much easier both to support and to object to).
That said I think the advances in parallel programming of modern C++ will find their way into the standards relatively quickly.

Related

Why is C++ template use not recommended in a space/radiated environment?

By reading this question, I understood, for instance, why dynamic allocation or exceptions are not recommended in environments where radiation is high, like in space or in a nuclear power plant.
Concerning templates, I don't see why. Could you explain it to me?
Considering this answer, it says that it is quite safe to use.
Note: I'm not talking about complex standard library stuff, but purpose-made custom templates.
Notice that space-compatible (radiation-hardened, aeronautics-compliant) computing devices are very expensive (including to launch into space, since their weight is counted in kilograms), and that a single space mission costs perhaps a hundred million € or US$. Losing the mission because of software or computer concerns generally has a prohibitive cost, so it is unacceptable and justifies costly development methods and procedures that you won't even dream of using for developing your mobile phone applet, and using probabilistic reasoning and engineering approaches is recommended, since cosmic rays are still somewhat an "unusual" event. From a high-level point of view, a cosmic ray and the bit flip it produces can be considered as noise in some abstract form of signal or of input. You could look at that "random bit-flip" problem as a signal-to-noise-ratio problem, and then randomized algorithms may provide a useful conceptual framework (notably at the meta level, that is, when analyzing your safety-critical source code or compiled binary, but also, at critical-system run time, in some sophisticated kernel or thread scheduler), with an information-theory viewpoint.
Why C++ template use is not recommended in space/radiated environment?
That recommendation is a generalization, to C++, of the MISRA C coding rules, the Embedded C++ rules, and the DO-178C recommendations, and it is not related to radiation but to embedded systems. Because of radiation and vibration constraints, the embedded hardware of any space rocket computer has to be very small (e.g. for economic and energy-consumption reasons, it is - in computer power - more a Raspberry Pi-like system than a big x86 server system). Space-hardened chips cost 1000x as much as their civilian counterparts. And computing the WCET on space-embedded computers is still a technical challenge (e.g. because of CPU cache related issues). Hence, heap allocation is frowned upon in safety-critical embedded software-intensive systems (how would you handle out-of-memory conditions there? Or how would you prove that you have enough RAM for all real run-time cases?)
Remember that in the safety-critical software world, you not only somehow "guarantee" or "promise", and certainly assess (often with some clever probabilistic reasoning), the quality of your own software, but also that of all the software tools used to build it (in particular: your compiler and your linker; Boeing or Airbus won't change the version of the GCC cross-compiler used to compile their flight control software without prior written approval from e.g. the FAA or DGAC). Most of your software tools need to be somehow approved or certified.
Be aware that, in practice, most C++ (but certainly not all) templates internally use the heap. And standard C++ containers certainly do. Writing templates which never use the heap is a difficult exercise. If you are capable of that, you can use templates safely (assuming you do trust your C++ compiler and its template expansion machinery, which is the trickiest part of the C++ front-end of most recent C++ compilers, such as GCC or Clang).
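As an illustration of a template that stays off the heap, all of its storage can live inside the object itself; this is only a sketch, with invented names:

    #include <array>
    #include <cstddef>

    // Hypothetical fixed-capacity vector: storage is embedded in the object,
    // so no instantiation ever calls new/malloc.
    template <typename T, std::size_t Capacity>
    class StaticVector {
    public:
        bool push_back(const T& value) {
            if (size_ == Capacity) return false;   // report failure, never allocate
            storage_[size_++] = value;
            return true;
        }
        std::size_t size() const { return size_; }
        const T& operator[](std::size_t i) const { return storage_[i]; }
    private:
        std::array<T, Capacity> storage_{};
        std::size_t size_ = 0;
    };

    // Usage: StaticVector<int, 16> readings; readings.push_back(42);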
I guess that for similar (toolset reliability) reasons, it is frowned upon to use many source code generation tools (doing some kind of metaprogramming, e.g. emitting C++ or C code). Observe, for example, that if you use bison (or RPCGEN) in some safety critical software (compiled by make and gcc), you need to assess (and perhaps exhaustively test) not only gcc and make, but also bison. This is an engineering reason, not a scientific one. Notice that some embedded systems may use randomized algorithms, in particular to cleverly deal with noisy input signals (perhaps even random bit flips due to rare-enough cosmic rays). Proving, testing, or analyzing (or just assessing) such random-based algorithms is a quite difficult topic.
Look also into Frama-Clang and CompCert and observe the following:
C++11 (or following) is a horribly complex programming language. It has no complete formal semantics. The people expert enough in C++ number only a few dozen worldwide (probably, most of them are on its standards committee). I am capable of coding in C++, but not of explaining all the subtle corner cases of move semantics, or of the C++ memory model. Also, C++ requires in practice many optimizations to be used efficiently.
It is very difficult to make an error-free C++ compiler, in particular because C++ practically requires tricky optimizations, and because of the complexity of the C++ specification. But current ones (like recent GCC or Clang) are in practice quite good, and they have few (but still some) residual compiler bugs. There is no CompCert++ for C++ yet, and making one would require several million € or US$ (but if you can collect such an amount of money, please contact me by email, e.g. at basile.starynkevitch#cea.fr, my work email). And the space software industry is extremely conservative.
It is difficult to make a good C or C++ heap memory allocator. Coding one is a matter of trade-offs. As a joke, consider adapting this C heap allocator to C++.
Proving safety properties (in particular, lack of race conditions or undefined behavior such as buffer overflow at run time) of template-related C++ code is still, in Q2 2019, slightly ahead of the state of the art of static program analysis of C++ code. My draft Bismon technical report (it is a draft H2020 deliverable, so please skip the pages for European bureaucrats) has several pages explaining this in more detail. Be aware of Rice's theorem.
A whole-system test of C++ embedded software could require a rocket launch (à la Ariane 5 test flight 501), or at least complex and heavy experimentation in the lab. It is very expensive. Even testing a Mars rover on Earth takes a lot of money.
Think of it: you are coding some safety-critical embedded software (e.g. for train braking, autonomous vehicles, autonomous drones, a big oil platform or oil refinery, missiles, etc.). You naively use some C++ standard container, e.g. some std::map<std::string,long>. What should happen in out-of-memory conditions? How do you "prove", or at least "convince", the people in the organizations funding a 100M€ space rocket that your embedded software (including the compiler used to build it) is good enough? A decades-old rule was to forbid any kind of dynamic heap allocation.
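To make the dilemma concrete, this is roughly what "handling" an allocation failure from such a container would look like (illustrative only):

    #include <cstdio>
    #include <map>
    #include <new>
    #include <string>

    int main() {
        std::map<std::string, long> counters;
        try {
            counters["telemetry_frames"] = 1;   // may allocate a tree node on the heap
        } catch (const std::bad_alloc&) {
            // On a flight computer there is usually no meaningful recovery here,
            // which is why the rule is to avoid heap allocation altogether rather
            // than to try to handle its failure.
            std::puts("out of memory");
            return 1;
        }
    }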
I'm not talking about complex standard library stuff but purpose-made custom templates.
Even these are difficult to prove, or more generally to assess the quality of (and you'll probably want to use your own allocator inside them). In space, code size is a strong constraint. So you would compile with, for example, g++ -Os -Wall or clang++ -Os -Wall. But how did you prove - or simply test - all the subtle optimizations done by -Os (and these are specific to your version of GCC or of Clang)? Your space funding organization will ask you that, since any run-time bug in embedded C++ space software can crash the mission (read again about the Ariane 5 first flight failure - coded in some dialect of Ada which had, at that time, a "better" and "safer" type system than C++17 has today - but don't laugh too much at Europeans: the Boeing 737 MAX with its MCAS is a similar mess).
My personal recommendation (but please don't take it too seriously; in 2019 it is more a pun than anything else) would be to consider coding your space embedded software in Rust, because it is slightly safer than C++. Of course, you'll have to spend 5 to 10 M€ (or MUS$) over 5 to 7 years to get a fine Rust compiler suitable for space computers (again, please contact me professionally if you are capable of spending that much on a free-software CompCert/Rust-like compiler). But that is just a matter of software engineering and software project management (read both The Mythical Man-Month and Bullshit Jobs for more, and be also aware of the Dilbert principle: it applies as much to the space software industry, or the embedded compiler industry, as to anything else).
My strong and personal opinion is that the European Commission should fund (e.g. through Horizon Europe) a free-software CompCert++ (or even better, a CompCert/Rust-like) project (and such a project would need more than 5 years and more than 5 top-class PhD researchers). But, at the age of 60, I sadly know it is not going to happen (because the E.C. ideology - mostly inspired by German policies, for obvious reasons - is still the illusion of the End of History, so H2020 and Horizon Europe are, in practice, mostly a way to implement tax optimizations for corporations in Europe through European tax havens), and I say that after several private discussions with several members of the CompCert project. I sadly expect DARPA or NASA to be much more likely to fund some future CompCert/Rust project than the E.C.
NB. The European avionics industry (mostly Airbus) uses much more formal-methods approaches than the North American one (Boeing). Hence some (not all) unit tests are avoided, since they are replaced by formal proofs over the source code, perhaps with tools like Frama-C or Astrée (neither has been certified for C++, only for a subset of C forbidding dynamic memory allocation and several other features of C). And this is permitted by DO-178C (not by its predecessor DO-178B) and approved by the French regulator, DGAC (and, I guess, by other European regulators).
Also notice that many SIGPLAN conferences are indirectly related to the OP's question.
The argument against the use of templates in safety code is that they are considered to increase the complexity of your code without real benefit. This argument is valid if you have bad tooling and a classic idea of safety. Take the following example:
template<class T> void fun(T t) {
    do_some_thing(t);
}
In the classic way of specifying a safety system you have to provide a complete description of each and every function and structure of your code. That means you are not allowed to have any code without a specification, so you have to give a complete description of the functionality of the template in its general form. For obvious reasons that is not possible. That is, by the way, the same reason why function-like macros are also forbidden. If you change the approach so that you describe all actual instantiations of this template, you overcome this limitation, but you need proper tooling to prove that you really described all of them.
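One possible way to make that set of instantiations explicit - a sketch, not a claim that any particular standard or assessor accepts it - is to list every allowed instantiation as an explicit instantiation definition:

    void do_some_thing(int) {}
    void do_some_thing(float) {}

    template<class T> void fun(T t) {
        do_some_thing(t);
    }

    // Explicit instantiation definitions: the complete, reviewable list of
    // instantiations that exist; each one can be specified and tested like an
    // ordinary, non-template function.
    template void fun<int>(int);
    template void fun<float>(float);

Combined with extern template declarations at the points of use, implicit instantiation of the listed specializations can also be suppressed.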
The second problem is this one:
fun(b);
This line is not a self-contained line. You need to look up the type of b to know which function is actually called. Proper tooling which understands templates helps here. But in this case it is true that it makes the code harder to check manually.
This statement about templates being a cause of vulnerability seems completely surrealistic to me. For two main reasons:
templates are "compiled away", i.e. instantiated and code-generated like any other function/member, and there is no behavior specific to them. Just as if they never existed;
no construct in any language is either safe or vulnerable in itself; if an ionizing particle changes a single bit of memory, be it in code or in data, anything is possible (from no noticeable problem up to a processor crash). The way to shield a system against this is by adding hardware memory error detection/correction capabilities, not by modifying the code!

Why allow function declarations inside function bodies

Why does the syntax allow function declarations inside function bodies?
It does create a lot of questions here on SO where function declarations are mistaken for variable initializations and the like.
a object();
Not to mention most vexing parse.
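A tiny illustration of that ambiguity, assuming a class type named a as in the snippet above:

    struct a {};

    int main() {
        a object();     // declares a function named 'object' returning 'a',
                        // not a default-constructed variable
        a object2{};    // C++11 brace initialization: unambiguously a variable
        (void)object2;
    }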
Is there any use-case that is not easily achieved by the more common scope hiding means like namespaces and members?
Is it for historical reasons?
Addendum: If it is for historical reasons, inherited from C to limit scope, what is the problem with banning them?
Whilst many C++ applications are written solely in C++ code, there is also a lot of code that is some mixture of C and C++. This mixture is definitely an important part of C++'s usefulness (including the "easy" interfacing with existing APIs: anything from OpenGL or cURL to custom hardware drivers written in C can pretty much be used directly with very little effort, whereas trying to interface your custom hardware C driver with a Basic interpreter is pretty difficult).
If we start breaking that compatibility by removing things "for no particular value", then C++ is no longer as useful. Aside from giving better error messages in a confusing situation, which of course is useful in itself, it's hard to see how it is useful to REMOVE this - and that is of course assuming NONE of the existing C++ code is using this in itself - and I wouldn't be surprised if it DOES happen at times, even in modern code (for whatever good or bad reasons).
In general, C++ tries very hard to not break backwards compatibility - and this, in my mind, is a good thing. That's why the keyword static is used for a bunch of different things, rather than adding a new keyword, and auto means something different now than it used to in C, but it's not a "new" keyword that could break existing code that happened to use whatever other word chosen (and that is a small break, but nobody really used it for the past 20 years anyway).
Well, the ability to declare functions within function bodies is inherited from C, so by definition there are historical and backward-compatibility reasons for it. When there is likely to be real-world code which uses a feature, the argument to remove that feature from the language is weakened.
People - particularly those who only use the latest version of the language and are not required to maintain legacy code - do tend to under-estimate how strong an argument backward compatibility is in C++. The original C++ standard was specifically required to maintain backward compatibility with C. As a rough rule, standards discourage removing old features if doing so is likely to break existing code. It can be done, however, if the only possible usage causes a danger that cannot be prevented (which is the reason for the removal of gets(), for example).
When maintaining legacy code there are often significant costs in updating a code base to replace all instances of an old construct with some modern replacement. A coding change that may be insignificant for a hobbyist programmer may be extremely costly when maintaining large-scale code bases in regulated environments, where it is necessary to provide formal evidence and an audit trail that the change of code does not affect the code's ability to meet its original requirements.
There are certain programming styles where it is useful to be able to limit the scope of any declarations. Not everyone uses such programming styles, but the reason such features are in the language is to allow the programmer the choice of programming technique. And, whether advocates of removing such features like it or not, there is a certain amount of code which uses such constructs usefully. That significantly weakens the case for removing the feature from the language.
These sorts of arguments will tend to come up for languages that are used in large-scale development, to develop systems in regulatory environments, etc etc. C and C++ (and a number of other languages) are used in such settings, so will tend to accumulate some set of features for "historical" or "backward compatibility" reasons. It is possible to make a case for removing such features by providing evidence that the feature is not in real-world use. But, since the argument is about justifying a negative claim, that is difficult (all it needs is someone to provide ONE example of continuing real-world beneficial usage and suddenly a counter-example exists which supports the case for keeping the feature).

Safe c++ in mission critical realtime apps

I'd like to hear various opinions on how to safely use C++ in mission-critical real-time applications.
More precisely, it is probably possible to create some macros/templates/class library for safe data manipulation (saturating on overflow, divide-by-zero producing infinity values, or division being possible only for special "nonzero" data types), arrays with bounds checking and foreach loops, safe smart pointers (similar to boost::shared_ptr, for instance) and even a safe multithreading/distributed model (message passing and lightweight processes like the ones defined in the Erlang language).
Then we prohibit some dangerous C/C++ constructs such as raw pointers, some raw types, the native "new" operator and native C/C++ arrays (for application programmers, not for library writers, of course). Ideally, we should create a special preprocessor/checker; at the least we must have some formal checking procedure which can be applied to the sources using some tool, or manually by some person.
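A rough sketch of the kind of library types described above; the names and the exact failure policies are invented for the example:

    #include <array>
    #include <cstddef>
    #include <stdexcept>

    // Bounds-checked array: every access is range-checked.
    template <typename T, std::size_t N>
    class CheckedArray {
    public:
        T& at(std::size_t i) {
            if (i >= N) throw std::out_of_range("CheckedArray index");
            return data_[i];
        }
    private:
        std::array<T, N> data_{};
    };

    // "Nonzero" data type: a zero divisor is rejected when the value is built,
    // so the division itself can never divide by zero.
    class NonZero {
    public:
        explicit NonZero(int v) : value_(v) {
            if (v == 0) throw std::invalid_argument("NonZero(0)");
        }
        int get() const { return value_; }
    private:
        int value_;
    };

    inline int safe_div(int numerator, NonZero divisor) {
        return numerator / divisor.get();
    }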
So, my questions:
1) Are there any existing libraries/projects that utilize such an idea? (Embedded C++ is apparently not of the desired kind.)
2) Is it a good idea at all? Or might it be useful only for prototyping some other hypothetical language? Or is it totally unusable?
3) Any other thoughts (or links) on this matter also welcome
Sorry if this question is not actually a question, off-topic, a duplicate, etc., but I haven't found a more appropriate place to ask it.
For good rules on how to write C++ for mission critical real-time applications have a look at the Joint Strike Fighter coding standards. Many of the rules there are based on the MISRA C coding standards, which I believe are proprietary. PC-Lint is a C++ code checker with rule sets like what you want (including the MISRA rules). I believe you can customize your own rules as well.
We use C++ in mission-critical real-time applications, although I suppose we have it easy (in theory) because we have to only provide real-time guarantees as good as the hardware our clients use. Thus, sufficient profiling lets us get by without mlockall() or stack pre-loading or any other RT traditions. As for the language itself, I think everyday modern C++ coding practices (ones that discourage C concepts) are entirely sufficient to write robust applications that can be used in RT contexts, given 21st century hardware.
Unit tests and QA should be the main focus of effort, instead of in-house libraries that duplicate existing language features.
If you're writing critical high-performance realtime S/W in C++, you probably need every microsecond you can get out of the hardware. As such, I wouldn't necessarily suggest implementing all the extra checks such as ones that you mentioned, at least the ones with overhead implications on program execution. You can obviously mask floating point exceptions to prevent divide by zero from crashing the program.
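As an illustration, standard C++ at least lets you inspect the IEEE status flags; actually masking or unmasking hardware traps is platform specific (e.g. feenableexcept() on glibc) and not shown here:

    #include <cfenv>
    #include <cstdio>

    int main() {
        std::feclearexcept(FE_ALL_EXCEPT);
        volatile double zero = 0.0;                // volatile: prevent constant folding
        double r = 1.0 / zero;                     // IEEE: yields +inf, raises FE_DIVBYZERO
        if (std::fetestexcept(FE_DIVBYZERO)) {
            std::printf("divide-by-zero flagged, result = %f\n", r);
        }
    }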
Some observations:
Peer review all code (possibly multiple reviewers). This will go a long way to improving quality without requiring lots of runtime checks.
DO make use of diagnostic tools and non-release-only asserts.
Do make use of simulation systems to test on non-embedded hardware.
C++ was specifically designed without things like bounds checking for performance reasons.
In general I don't suggest arbitrarily restricting the language, although making use of RAII and smart pointers should have minimal overhead and provides a nice benefit.
Someone else pointed out that if you want Ada, just use Ada.

C++11: a new language?

Recently I started reading (just a bit) the current draft for the future C++11 standard.
There are lots of new features, some of them already available via the Boost libs. Of course, I'm pretty happy with this new standard and I'd like to play with all the new features as soon as possible.
Anyway, speaking about this draft with some friends, long-time C++ devs, some worries emerged. So, I ask you (to answer them):
1) The language itself
This update is huge, maybe too huge for a single standard update. Huge for the compiler vendors (even if most of them already started implementing some features) but also for the end-users.
In particular, a friend of mine told me "this is a sort of new language".
Can we consider it a brand new language after this update?
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
2) Knowledge of the language
How will the learning curve be impacted by the new standard?
Will teaching the language be more difficult?
Some features, while pretty awesome, seem a bit too "academic" to me (in their definition, I mean). Am I wrong?
Mastering all these new additions could be a nightmare, couldn't it?
In short, no, we can't consider this a new language. It's the same language, new features. But instead of being bolted on by using the Boost libs, they're now going to be standard inclusions if you're using a compiler that supports the 0x standard.
One doesn't have to use the new standard while using a compiler that supports the new standard. One will have to learn and use the new standard if certain constraints exist on the software being developed, however, but that's a constraint with any software endeavor. I think that the new features that the 0x standard brings will make doing certain things easier and less error prone, so it's to one's advantage to learn what the new features are, and how they will improve their design strategy for future work. One will also have to learn it so that when working on software developed with it, they will understand what's going on and not make large boo-boos.
As to whether I will "switch to the new standard", if that means that I will learn the new standard and use it where applicable and where it increases my productivity, then yes, I certainly plan to switch. However, if this means that I will limit myself to only working with the new features of the 0x standard, then no, since much of my work involves code written before the standard and it would be a colossal undertaking to redesign everything to use the new features. Not only that, but it may introduce new bugs and performance issues that I'm not aware of without experience.
Learning C++ has always been one of the more challenging journeys a programmer can undertake. Adding new features to the language will not change the difficulty of learning its syntax and how to use it effectively, but the approach will change. People will still learn about pointers and how they work, but they'll also learn about smart pointers and how they're managed. In some cases, people will learn things differently than before. For example, people will still need to learn how to initialize things, but now they'll learn about Uniform Initialization and Initializer Lists as primary ways to do things. In some cases, perhaps understanding things will be easier with the addition of the new for syntax for ranges or the auto return type in a function declaration. I think that overall, C++ will become easier to learn and use while at the same time becoming easier to teach.
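A few illustrative C++11 lines for the features just mentioned (the names are made up):

    #include <map>
    #include <string>
    #include <vector>

    std::vector<int> make_values() { return {1, 2, 3}; }   // initializer list

    int main() {
        // Uniform initialization: the same brace syntax for aggregates,
        // containers and nested pairs.
        int point[]{640, 480};
        std::map<std::string, int> ages{{"alice", 30}, {"bob", 25}};

        // auto removes the iterator spelling beginners used to trip over.
        for (auto it = ages.begin(); it != ages.end(); ++it) {
            point[0] += it->second;
        }
        (void)point;
        (void)make_values();
    }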
Mastering a language is a long-term goal; it cannot be done overnight. It's silly to think that one can have mastery over something as complex as C++ quickly. It takes practice, experience and debugging code to really hammer something in. Academically learning is one thing, but putting that knowledge to use is an entirely different monster. I think that if one already has mastery of the C++ language, the new concepts will not pose too much of a burden, but a newcomer may have an advantage in that they won't bother learning some of the more obsolete ways of doing things.
1) The language itself
As far as I'm aware, there are really no breaking changes between C++'03 and C++'0x. The only one I can think of here relates to using auto as a storage class specifier, but since it had no semantic meaning I don't see that being an issue.
There are a lot of other academic fixes to the standard which are very necessary, for example better descriptions of the layout of member data. Finally, with multi-core/CPU architectures becoming the norm, fixing the memory model was a must.
2) Knowledge of the language
Personally, I feel that for 99.9% of C++ developers the newer language is going to be easier to use. I'm specifically thinking of features such as auto, lambdas and constexpr. These features really should make using the language more enjoyable.
At a more advanced level, you have other features such as variadic templates etc. that help the more advanced users.
But there's nothing new here; I'm still surprised at the number of everyday C++ developers that haven't used (or even heard of) the STL.
From a personal perspective, the only feature I'm a bit concerned about in the new standard is concepts. As it is such a large change, the same problems that occurred with templates (i.e. completely broken implementations) are a real danger.
Update post FDIS going out for voting:
As it happens, 'concepts' was dropped for C++0x and will be taken up again for C++1x. In the end there are some changes other than auto which could break your code, but in practice they'll probably be pretty rare. The key differences can be found in Appendix C.2 of the FDIS (pdf).
For me, one of the most important, will be:
unique_ptr + std::move() !
Imagine:
Smart pointer without any overhead:
no reference counting operations
no additional storage for reference counter variable
Smart pointer that can be moved, i.e. no destructor/constructor calls when moved
What does this give you? Exception-safe, cheap (pointers...) containers without any costs. The container will be able to just memcpy() unique_ptrs, so there will be no performance loss caused by wrapping a regular pointer in a smart pointer! So, once again:
You can use pointers
It will be safe (no memory leaks)
It will cost you nothing
You will be able to store them in containers, and they will be able to do "massive" moves (memcpy-like) with them cheaply (see the sketch below).
It will be exception safe
:)
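A short sketch of that combination, with Telemetry standing in for whatever heavy payload type you have:

    #include <memory>
    #include <utility>
    #include <vector>

    struct Telemetry { int values[256]; };

    int main() {
        // Exclusive ownership, no reference count, same size as a raw pointer
        // in the common case.
        std::unique_ptr<Telemetry> frame(new Telemetry());

        std::vector<std::unique_ptr<Telemetry>> frames;
        frames.push_back(std::move(frame));     // ownership moves; Telemetry is not copied

        // When the vector grows, only the pointers move; there are no extra
        // constructor/destructor pairs for the payload objects.
        frames.push_back(std::unique_ptr<Telemetry>(new Telemetry()));
    }   // everything is released here, exception safely, with no manual delete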
Another point of view:
Actually, when you move a group of objects using copy(), there is a constructor and a destructor call for every object instance. When you copy 1000 objects of 1 kB each, there will be at least one memcpy() and 2000 function calls.
If you want to avoid those thousands of calls, you have to use pointers.
But raw pointers are dangerous, etc. Today's smart pointers will not help you; they solve other problems.
There is no solution for now. You must pay for the C++ RAII/pointer/value-variable design from time to time. But with C++0x, using unique_ptr will allow you to do "massive" moves of objects (yes, practically of the objects, because the pointers will be smart) without "massive" constructor/destructor calls, and without the risks of using raw pointers! For me, this is really important.
It's like relaxing the RAII concept (because of using pointers) without losing the RAII benefits. Another aspect: a pointer wrapped in a unique_ptr will behave in many respects like a Java reference variable. The difference is that a unique_ptr can exist in only one scope at a time.
Your friend is partially right but mostly wrong: it's the same language with extra features.
The good thing is, you don't have to master all the new features. One of the primary mandates for a standards body is to not break existing code, so you'll be able to go on, happily coding in your old style (I'm still mostly a C coder even though I do "C++" applications :-).
Only when you want to have a look at the new features will you need to bone up on the changes. This is a process you can stretch over years if need be.
My advice is to learn what all the new features are at a high level (if only to sound knowledgeable in job interviews) but learn the details slowly.
In some respects, C++0x should be easier to teach/learn than current C++:
Looping through a container - the new for syntax is far easier than for_each + functor or looping manually using iterators (see the sketch after this list)
Initialising containers: we'll be able to initialise sequences with the same syntax as arrays
Memory management: out goes dodgy old auto_ptr, in comes well-defined unique_ptr and shared_ptr
Lambdas, although necessarily more complex than other languages' equivalents, will be easier to learn than the C++98 process of defining function objects in a different scope.
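As a side-by-side sketch of the first point in the list above:

    #include <algorithm>
    #include <vector>

    // C++98 style: a named function object, defined away from the loop.
    struct Doubler {
        void operator()(int& x) const { x *= 2; }
    };

    int main() {
        std::vector<int> v{1, 2, 3, 4};                 // C++11 initializer list

        std::for_each(v.begin(), v.end(), Doubler());   // old: functor

        for (int& x : v) { x *= 2; }                    // new: range-based for

        std::for_each(v.begin(), v.end(),               // or: lambda, defined in place
                      [](int& x) { x *= 2; });
    }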
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
A year ago, I was writing strict C89, because the product in question was aggressively portable to embedded platforms, some of which had compilers with radically different ideas of which bits of C99 were worth supporting. So a 20-year-old standard still hasn't been fully replaced by its 10-year-old successor.
So I don't expect to be able to get away from C++03 any time soon.
I do expect to use C++0x features where appropriate. Just as I use C99 features in C code, and gcc extensions in C and C++ (and would use MSVC extensions, although I've never worked on MSVC-only code for more than trivial amounts of time). But I expect it to be "nice to have" rather than baseline, pretty much indefinitely.
You have a point, but it's always been the case. There is a lot of C++ code out there that still doesn't incorporate anything from the '98 standard just because of the innate conservatism of some coders. Some of us remember a dark time before the std:: namespace (before namespaces, in fact), when everyone wrote their own string class, and pointers walked around naked all the time. There is a reason why we talk about "modern C++ style" - to distinguish from the earlier style because some people still have to maintain or update code in that style.
Any language has to evolve to survive, and any language that evolves will have a divided user base, if only because people vary in their attitude towards estimating opportunity costs in applying new language features to their own work.
With the advent of C++0x in shipping compilers, this conversation will be played out over and over in dev teams across the world:
YOUNGSTER: I've just discovered these things called lambdas! And I'm finding lots of ways to use them to make our code more expressive! Look, I rewrote your old Foo class, isn't that much neater?
OLDSTER: There was nothing wrong with my old Foo class. You're just looking for excuses to use a "cool" new feature unnecessarily. Why do you keep trying to make my life so complicated? Why do I keep having to learn new things? We need another war, that's what we need.
YOUNGSTER: You're just too stuck in your ways, old man, we shouldn't even be using C++ these days... if it was up to me -
OLDSTER: If it was up to me we'd have stuck with PL/1, but no... my wife had to vote for Carter and now we're stuck with all this object-oriented crap. There's nothing you can do with std::transform and lambdas that I can't do with a goto and a couple of labels.
etc.
Your programming career will always involve learning and re-learning. You can't expect C++ to stay the same until you retire, or to keep using the same methods and practices that you were using 40 years ago. Technology rolls on, and it rolls quickly. It's your job to keep up with it. Of course you can ignore that and continue to work the same way you currently do, but in 5 or 10 years' time you'll have become so outdated that you'll be forced to learn it all when you're trying to change jobs. And it would have been a lot easier to learn on the job all those years before :)
A few months ago I heard Bjarne Stroustrup give a talk titled 50 years of C++. Admittedly, I'm not a C++ programmer, but it seemed to me that he certainly doesn't think 0x is a new language!
Whether or not we can consider it a "new language", I think that's semantics. It doesn't make a difference. It's backwards compatible with our current C++ code, and it's a better language. Whether or not we consider it "the same language" doesn't matter.
About learning the language, remember that a lot of the new features are there to make the language easier to learn and use. Most of the features that add complexity are intended for library developers only. They can use these new features to make better, more efficient, and easier to use libraries, that you can then use without knowing about the features. Several of the changes actually simplify and generalize existing features, making them easier for newcomers to learn.
It is a big update, yes, but it is guided by a decade of experience with the current C++ standard. Every change is there because experience has shown that it is needed. In fact, the committee is being extremely cautious and conservative, and have refused a huge number of other language improvements. What is added here is only the fundamentals that 1) everyone could agree on, and 2) could be specified in time, without delaying the new standard.
It is not simply a few language designers sitting down and brainstorming new features they'd like to try.
Concepts and Concept Maps are going to greatly increase the grokability of template frameworks. If you've ever pored over the Boost source you'll know what I mean. You're constantly going from source to docs because the language just doesn't have the facilities to express template concepts. Hopefully Concepts + Duck Typing will give us the best of both worlds, whereby entry points to template libraries can explicitly declare requirements but still have the freedom that Duck Typing provides when writing generic code.
There are lots of good things in C++0x, but they're mostly evolutionary changes that refine or extend existing ideas. I don't think it's different enough to justify calling it a "new language".

Is there a C++ styled language without C trappings?

The main reason to use C++ over a managed language these days is to gain the benefits C++ brings to the table. One of the pillars of C++ is "you don't pay for what you don't need". It can be argued though that sometimes you don't want to pay for backward compatibility with C. Many of the odd quirks of C++ can be attributed to this backward compatibility. What other languages are there where "you don't pay for what you don't need" including backward compatibility with C?
Edit/clarification: The real killer for me is in that second sentence. Is there a language truly designed from the ground up that doesn't impose things you don't want on your code? C++ has that as its design philosophy: don't want RTTI? It doesn't exist. Don't want garbage collection? It's not there. The problem with C++ is it (IMO) violates this requirement when it refuses to break from the past. I don't want the cruft of backward compatibility with 20 year old code hampering my going forward. C++ isn't willing to do that. What is/has?
Edit2: I suppose I should have been more clear about what a cost is. There are multiple potential costs. The one I was initially focusing on was runtime cost.
In C++ polymorphism through virtual methods has a cost. But not all methods pay that cost. A non-virtual C++ method is called with the same runtime cost as a plain old C function (having at least one parameter). C++ does not require you to use polymorphism. In other OOP languages all methods are virtual and so the cost of polymorphism cannot be avoided.
Runtime costs are the most important, but other costs weigh in as well. Assembly language would obviously have the least runtime overhead, but the writing and maintenance costs of assembly language are a huge strike against it.
With that in mind the idea is to find languages that provide useful abstractions which, when not in use, do not affect runtime costs.
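A minimal sketch of that "pay only where you opt in" point about virtual methods (Sensor and PressureSensor are invented names):

    struct Sensor {
        int raw() const { return 42; }              // non-virtual: a direct call,
                                                    // same cost as a plain C function
        virtual int calibrated() const { return raw() + 1; }   // virtual: vtable dispatch
        virtual ~Sensor() = default;
    };

    struct PressureSensor : Sensor {
        int calibrated() const override { return raw() * 2; }  // only this call path
                                                                // pays for polymorphism
    };

    int main() {
        PressureSensor p;
        const Sensor& s = p;
        int value = s.raw() + s.calibrated();   // raw(): static call; calibrated(): dynamic
        return value == 126 ? 0 : 1;
    }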
D language
D is a general-purpose systems and applications programming language. It is a higher-level language than C++, but retains the ability to write high-performance code and interface directly with the operating system APIs and with hardware. D is well suited to writing medium to large scale, million-line programs with teams of developers. D is easy to learn, provides many capabilities to aid the programmer, and is well suited to aggressive compiler optimization technology.
The general look of D is like C and C++. This makes it easier to learn and port code to D. Transitioning from C/C++ to D should feel natural. The programmer will not have to learn an entirely new way of doing things.
D drops C source code compatibility. Extensions to C that maintain source compatibility have already been done (C++ and Objective-C). Further work in this area is hampered by so much legacy code that it is unlikely significant improvements can be made.
I'm tempted to downvote this question (but as of yet I have not). Your requirement that "you don't pay for what you don't use" depends very heavily on what exactly you use. You already mentioned in one of your comments that assembly is perhaps the most fluff-free language there is, but you complain about C, which sits somewhere between assembly and C++.
If you find garbage collection and explicit object-oriented features "fluff" then frankly, I think C is probably the best candidate. The language is actually small and elegantly designed. It meets most people's "fits in one's head" requirement. For a language that gives such close control over the hardware, it's very expressive.
If you are not tied to the hardware, then Scheme or some other minimalist dialect of Lisp probably fits the bill for "doesn't impose what you don't want on your code" but again, it all depends so heavily on what exactly it is that you don't want.
If there are some higher-level features you "can't live without" (which it seems you are implying by naming "C++ without C compatibility" as your language of choice) then you should say explicitly what those are. What exactly is C++ bringing to the table that you don't want to sacrifice?
Ada is another alternative - gc is optional in the implementation of the language:
http://en.wikipedia.org/wiki/Ada_(programming_language)
You can also try Ch, from: http://www.softintegration.com/
SPECS is an alternative syntactic binding for C++. This binding includes a simple syntax for declaring and defining types, functions, objects and templates, and changes several problematic operators and control structures. The resulting syntax is LALR(1) parsable.
Eiffel. Looks like Pascal, has C++-type semantics. It also adds special "programming by contract" assertions that were built into the language definition years before people were talking about "X"Unit libraries.
Try Aikido. Syntactically similar to C++, and I don't think it inherits C style backward compatibility.
Aikido # Wikipedia
I'm not exactly sure what costs backward compatibility with C has...
I'm not exactly sure there are any runtime hits associated with C backwards compatibility - could you elaborate? Certainly the backwards compatibility does give you more rope to hang yourself with, but strict enforcement of rules can mitigate that. So I suppose my answer to you is that C++ is the language that does what you ask for; you just need a tool to enforce valid constructs.
Edit: odd that I didn't notice the post directly below mine with the same first-line comment.