I'm currently thinking about C++ in safety-critical software (DO-178C DAL-D) and the definition of a coding standard. I was looking at MISRA C++, which is now 10 years old and misses all the C++11…17 features.
While being conservative regarding safety is often not a bad idea, the new language features might be beneficial to safety.
During reviews one has to argue why certain decisions were made. And one can always argue that the new language features make the code clearer, and thus cause fewer errors from misunderstandings; especially if the compiler is able to test and verify your assumptions.
But it is hard to find language features that carry the safety aspects more prominently than "make things clearer". What aspects of modern C++ really help regarding safety?
I'm setting up a small exercise project to test these ideas and am currently totally focused on "let the compiler check your assumptions". For example, we have just started to use [[nodiscard]] and found at least two bugs this way within the first hour. But which aspects of modern C++ were designed with safety in mind and should be used accordingly?
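For illustration, here is a minimal sketch of the kind of check involved (the Status type and read_sensor function are hypothetical stand-ins, not our real code):

```cpp
// Hypothetical sketch: a status type marked [[nodiscard]] (C++17).
// Silently ignoring the result becomes a compiler diagnostic.
#include <cstdio>

enum class [[nodiscard]] Status { Ok, Timeout, CrcError };

Status read_sensor(int channel) {         // hypothetical device read
    return channel == 0 ? Status::Ok : Status::Timeout;
}

int main() {
    read_sensor(0);                       // warning: discarded nodiscard value
    if (read_sensor(1) != Status::Ok)     // fine: the result is checked
        std::puts("sensor read failed");
}
```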
These come to my mind first:
std::atomic and the memory model: they allow writing portable code in concurrent/lock-free contexts.
unique_ptr: helps simplify memory handling.
override: lets you find bugs at compile time.
if constexpr: lets code be written closer to where it is used, which helps avoid bugs (sometimes, to specialize a behaviour according to a template parameter, you would write a class with n specializations; now you can use if constexpr with n branches instead). A sketch of the last two items follows.
etc. ... in a way, considering the benefits to code clarity and portability, I think every feature of C++11/14/17 helps.
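To make the last two items concrete, a minimal sketch (hypothetical types, not from a real codebase):

```cpp
#include <string>
#include <type_traits>

struct Base {
    virtual void handle(int code) {}
    virtual ~Base() = default;
};

struct Derived : Base {
    // void handle(long) override {}   // would not compile: overrides nothing
    void handle(int code) override {}  // signature verified by the compiler
};

// One function with if constexpr branches, instead of a class
// template with n specializations.
template <typename T>
std::string describe(const T& value) {
    if constexpr (std::is_integral_v<T>)
        return "integral: " + std::to_string(value);
    else
        return "non-integral";
}
```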
In my not so humble opinion, there are few language features, that is, standard general-purpose programming language features, that both fall outside the allowed standards AND are worth the time and energy to argue your way through in an assessment. If you are aiming for a higher level of abstraction (which is a good thing for safety too, although you'll hardly find anyone openly admitting this, because it would render half of the safety industry unemployed and the other half severely outdated), then you'd be better off resorting to a domain-specific language and putting the effort into a flawless compilation (to source) for a standard-conforming platform. If you don't work in an engineering culture which allows this, then you can resort to some of the patches that the other answer here proposes, but it is always difficult to convincingly convey the intention and meaning of non-specific measures to other safety engineers (a dedicated domain-specific language is much easier both to support and to object to).
That said I think the advances in parallel programming of modern C++ will find their way into the standards relatively quickly.
Why does the syntax allow function declarations inside function bodies?
It does create a lot of questions here on SO where function declarations are mistaken for variable initializations and the like.
a object(); // declares a function named object returning an a, not a default-constructed variable
Not to mention the most vexing parse.
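For instance (a self-contained sketch of the ambiguity):

```cpp
struct a {};
struct b { b(a); };

int main() {
    a object();       // declares a function object() returning a
    b instance(a());  // the most vexing parse: also a function declaration
    // b instance{a{}};  // C++11 braces express the intended construction
}
```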
Is there any use-case that is not easily achieved by the more common scope hiding means like namespaces and members?
Is it for historical reasons?
Addendum: if it is for historical reasons, inherited from C to limit scope, what is the problem with banning them?
Whilst many C++ applications are written solely in C++ code, there is also a lot of code that is some mixture of C and C++. This mixture is definitely an important part of C++'s usefulness (including the "easy" interfacing to existing APIs; anything from OpenGL or cURL to custom hardware drivers written in C can pretty much be used directly with very little effort, whereas trying to interface your custom hardware C driver into a Basic interpreter is pretty difficult).
If we start breaking that compatibility by removing things "for no particular value", then C++ is no longer as useful. Aside from giving better error messages in a confusing situation, which of course is useful in itself, it's hard to see how it's useful to REMOVE this; and that of course assumes NONE of the existing C++ uses it, and I wouldn't be surprised if it DOES happen at times even in modern code (for whatever good or bad reasons).
In general, C++ tries very hard not to break backwards compatibility, and this, in my mind, is a good thing. That's why the keyword static is used for a bunch of different things rather than adding a new keyword, and auto means something different now than it used to in C, but it's not a "new" keyword that could break existing code that happened to use whatever other word might have been chosen (and that is a small break, but nobody really used auto as a storage class for the past 20 years anyway).
Well, the ability to declare functions within function bodies is inherited from C so, by definition, there is a reason involving historical and backward-compatibility reasons. When there is likely to be real-world code which uses a feature, the argument to remove that feature from the language is weakened.
People - particularly those who only use the latest version of the language and are not required to maintain legacy code - do tend to underestimate how strong an argument backward compatibility is in C++. The original C++ standard was specifically required to maintain backward compatibility with C. As a rough rule, standards discourage removing old features if doing so is likely to break existing code. It can be done, however, if the only possible usage causes a danger that cannot be prevented (which is the reason for the removal of gets(), for example).
When maintaining legacy code there are often significant costs in updating a code base to replace all instances of an old construct with some modern replacement. A coding change that may be insignificant for a hobbyist programmer may be extremely costly when maintaining large-scale code bases in regulated environments, where it is necessary to provide formal evidence and an audit trail showing that the change does not affect the code's ability to meet its original requirements.
There are certain programming styles where it is useful to be able to limit the scope of any declarations. Not everyone uses such programming styles, but the reason such features are in the language is to allow the programmer the choice of programming technique. And, whether advocates of removing such features like it or not, there is a certain amount of code which uses such constructs usefully. That significantly weakens the case for removing the feature from the language.
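For what it's worth, a sketch of such a style (hypothetical code): the declaration of log_event() is visible only inside handle_request(), so no other function in this file can call it without declaring it themselves.

```cpp
void handle_request() {
    void log_event(int id);  // declaration scoped to this function body
    log_event(7);            // OK here
}
// log_event(3);  // would be an error at file scope: the name is not visible

void log_event(int id) { /* write to the event log */ }
```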
These sorts of arguments will tend to come up for languages that are used in large-scale development, to develop systems in regulatory environments, etc etc. C and C++ (and a number of other languages) are used in such settings, so will tend to accumulate some set of features for "historical" or "backward compatibility" reasons. It is possible to make a case for removing such features by providing evidence that the feature is not in real-world use. But, since the argument is about justifying a negative claim, that is difficult (all it needs is someone to provide ONE example of continuing real-world beneficial usage and suddenly a counter-example exists which supports the case for keeping the feature).
Many embedded engineers use C++, but some argue it's bad because it's "object oriented".
Is it true that being object oriented makes it bad for embedded systems, and if so, why is that really the case?
Edit: Here's a quick reference for those who asked:
so we prefer people not to use divide ..., malloc ..., or other object oriented practice that carry large penalty.
I guess the question is are objects considered heavyweight in the context of an embedded system? Some of the answers here suggest they are and some suggest they're not.
Whilst I'm not sure it answers your question, I can summarise the reasons my previous company's source code was pure C.
It's firstly worth summarising the situation:
we wanted to write a large amount of "core" code that would be highly portable across a large number of ARM embedded systems (mostly mid-range mobile phones; both smart phones and ones running RTOSs of various ages)
the platforms generally had a workable C compiler, though some, for example, didn't support floating-point doubles.
in some cases the platform had a reasonable implementation of the standard library, but in many cases it didn't.
a C++ compiler was not available on most platforms, and where it was available support for the C++ standard library, STL or exceptions was highly variable.
debuggers often weren't available (a serial port you could send debug printfs to was considered a luxury)
we always had access to a reasonable amount of memory, but often not to a reasonable malloc() implementation
Given that, we worked entirely in C, and even then only a restricted subset of C89. The resulting code was highly portable. We often used object-oriented concepts, though.
These days "embedded" is a very wide definition. It covers everything from 8 bit microprocessors with no RAM or C compilers upto what are essentially high end PCs (albeit not running Microsoft Windows) - I don't know where your project/company sits in that range.
Taking your quote at face value, dynamic memory allocation is a completely separate concept from object-oriented software design, so the claim is outright false. You can have an object-oriented design and not use dynamic memory allocation.
In fact, you can do OO in C to an extent (that's what the Linux kernel does). The real reason many embedded developers don't like C++ is that it's very complex and it is hard to write straightforward and predictable code in it. Linus has a good recent rant on why he does not like C++ (it's better and more reasoned than his old one, I promise). Probably most folks just don't articulate the objection very well.
What makes you say that C++ is Object Oriented? C++ is multiparadigm, and not all of the features that C++ provides are useful for the embedded market due to their overheads. (So... Just don't use those features! Problem solved!)
Object Oriented is great for embedded systems. It focuses a lot on encapsulation, data hiding, and code sharing. One can have Object Oriented embedded systems without division or dynamic memory allocation.
Division and dynamic memory allocation are enemies of embedded systems regardless of Object Oriented, Data Oriented or procedural programming. These concepts may or may not be used in the implementation of Object Oriented designs.
Object Oriented allows for a UART class to transmit instances of Message objects without knowing the content of the Message objects. A Message could be the base class and have several descendant classes.
The C++ language helps promote safe coding in embedded systems by providing constructors, copy constructors, and destructors, which would only be remembered consistently in the most disciplined C-language embedded projects.
Exception handling is also a pain to get working in the C language; C++ provides better facilities built into the language.
The C++ language provides templates for writing common code to handle different data types. A classic example is a ring buffer or circular queue. In the C language, one would have to use pointers to void so that any object could be passed. C++ offers templates, so one can write a Circular_Queue class that works with different data types and has better compile-time type checking; a sketch follows.
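A minimal sketch of such a template (a hypothetical, simplified Circular_Queue; a production version would need more care):

```cpp
#include <cstddef>

template <typename T, std::size_t N>
class Circular_Queue {
public:
    bool push(const T& item) {
        if (count == N) return false;       // queue full
        data[(head + count) % N] = item;
        ++count;
        return true;
    }
    bool pop(T& out) {
        if (count == 0) return false;       // queue empty
        out = data[head];
        head = (head + 1) % N;
        --count;
        return true;
    }
private:
    T data[N] = {};
    std::size_t head = 0;
    std::size_t count = 0;
};

// The same code is reused, type-checked, for any element type:
Circular_Queue<int, 8> commands;
Circular_Queue<float, 32> samples;
```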
Inheritance allows for better code sharing: the shared code is factored into a base class, and child classes can be created that share the same functionality through inheritance.
The C language provides function pointers. The C++ language provides facilities for function objects (in effect, function pointers that can carry state).
Sorry, I just don't like those people who limit embedded systems to C language because of rumors and little knowledge and experience with C++.
Object-oriented design by itself isn't bad. The answer lies in your quote. Especially in real-time embedded systems, you want to make your code as light and efficient as possible. The things mentioned in your quote (objects, division, dynamic memory allocation) are relatively heavyweight and can usually be replaced with simpler alternatives (e.g. using bit manipulation to approximate division, allocating memory on the stack or from static pools) to improve performance in time-critical systems.
Nothing about 'object-oriented' is bad for embedded systems. OO is just a way of thinking about software.
What's bad for embedded systems is that, in general, they have less sophisticated debuggers, and C++ does a lot of crazy stuff 'behind your back', so to speak. Those pieces of hard-to-get-access-to code will drive you nuts.
C++ was designed with the philosophy of don't pay for what you don't use. So apart from the lack of good embedded compilers, there's no real reason.
Maybe CFront could have compiled C++ into C, which has a myriad of compilers...
Edit: The Comeau compiler transforms C++ into plain C, so the no-compiler argument doesn't hold.
As others have noted, 'embedded' encompasses a broad and varied range of hardware/software options. But...
The quote you give will give microcontroller embedded types shivers. Dynamic allocation is a no-no: if you have an error, you crash the system in unpredictable ways. Divides are heavily discouraged since they take forever in execution time. Objects are only discouraged insofar as they tend to carry lots of 'stuff' around with them; all that 'stuff' takes up space, and microcontrollers don't have any.
I think of embedded as being projects that are small and specific; you don't worry much about extensibility or portability. You write clean code in C that does only and exactly what you want your device to do, reliably. You choose one chip family so you can move (almost) the same code among different hardware options with minor tweaks to the port you're writing to or the initialization of configuration fuses.
So, you don't need to define
4 wheeled Transportation
Car
Toyota
Since you're only working on Toyotas. And the difference in accelerations between a Camry and Corolla are stored as constants in a register.
As said above, it's what object-oriented / malloc / math do behind your back that carries a penalty - both in code size and CPU cycles which are usually in short supply in embedded.
As an example, including the sqrt() function in a loop added so much overhead in recursive calculations that we had to remove it and work a fast approximation around it, using a lookup table if I remember correctly.
By all means use any tools/languages you like, but you need to at least be able to lift the lid and check just how much extra code is being generated behind your back.
Programming is always about using the right tool for the job. There are no pat answers, and that is especially true in the embedded world. If you want to become skilled in embedded development you will be just as intimately familiar with C as you are with C++.
First, let's pick this apart:
so we prefer people not to use divide ..., malloc ..., or other object oriented practice that carry large penalty.
Division and malloc are not unique to object oriented programming, they are present in procedural languages too (and presumably functional, and whatever other paradigms you might think of).
Division and malloc can be a problem on an embedded system if the system has sufficiently limited resources, that much is true, but they will be a problem no matter what programming paradigm you use.
Onto the main issue "Is object orientation bad for embedded systems?".
Firstly, 'object orientation' is quite a broad spectrum. Some people have conflicting ideas about what it actually means. There's a minimalist definition where an object is essentially just a bundle of functions (or 'methods') and data, and there's a more 'purist' definition that also includes features like inheritance and polymorphism.
If you take the minimalist definition of OOP then no - OOP is not bad for embedded systems. It does depend on the language, but it's entirely possible for using objects to be just as cheap as not using objects (and possibly cheaper in some situations). In C++, an object (without virtual methods, which I'll get to in a moment) takes up no more memory than its individual fields would if they weren't part of an object. In C++, (the size of) an object is equal to the sum of (the size of) its parts.
However, if you take the 'purist' view of OOP and insist on including inheritance and polymorphism, then the answer is 'yes, it is bad'. Inheritance is perhaps less of a concern (depending on the language), but polymorphism via virtual methods is a definite memory chewer. Virtual functions are usually implemented by maintaining a 'vtable' (a virtual method table) that stores pointers to the correct functions, and each object must store a pointer to its vtable so that dynamic dispatch (the process of calling virtual functions) works properly. There are circumstances in which inheritance can use more memory than solutions that don't require it; typically, when comparing inheritance to composition, composition sometimes uses less memory.
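A rough illustration of both claims (sizes are implementation-dependent; the values in the comments are typical for a 64-bit target):

```cpp
#include <cstdint>

struct Plain {                 // no virtual methods
    std::int32_t a;
    std::int32_t b;
};                             // sizeof(Plain) == 8: just the two fields

struct Polymorphic {           // one virtual method
    virtual void f() {}
    std::int32_t a;
    std::int32_t b;
};                             // sizeof(Polymorphic) == 16: fields + vtable pointer
```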
One last point, particularly for the case of C++ (since that's usually what people mean when they talk about using OOP on embedded systems). People often say that "C++ does strange things behind your back". It does do some things without them being immediately obvious in the code, such as producing the vtables I mentioned earlier, but it doesn't do these things "behind your back" in an attempt to thwart you; it does them because they are simply needed to implement the features being used. Overall there are very few detrimental things that happen 'behind the scenes', and the things it does aren't exactly arcane or mysterious: they're generally quite well known, and they're things the programmer ought to be aware of when programming for an embedded system. If you don't know about them, then you don't know your tools properly and you should research them some more.
Having said all that, remember that people are free to be selective with which language features they do and don't use. It makes absolute sense to avoid the more expensive features like virtual functions, but it doesn't make sense to forgo an entire language (like C++) simply because it has a few expensive entirely optional features (like virtual functions) - that's just throwing the baby out with the bathwater.
Recently I started reading (just a bit) the current draft for the future C++11 standard.
There are lots of new features, some of them already available via the Boost libs. Of course, I'm pretty happy with this new standard and I'd like to play with all the new features as soon as possible.
Anyway, speaking about this draft with some friends, long-time C++ devs, some worries emerged. So, I ask you (to answer them):
1) The language itself
This update is huge, maybe too huge for a single standard update. Huge for the compiler vendors (even if most of them already started implementing some features) but also for the end-users.
In particular, a friend of mine told me "this is a sort of new language".
Can we consider it a brand new language after this update?
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
2) Knowledge of the language
How will the learning curve be impacted by the new standard?
Will teaching the language be more difficult?
Some features, while pretty awesome, seem a bit too "academic" to me (in the way they are defined, I mean). Am I wrong?
Mastering all these new additions could be a nightmare, couldn't it?
In short, no, we can't consider this a new language. It's the same language, new features. But instead of being bolted on by using the Boost libs, they're now going to be standard inclusions if you're using a compiler that supports the 0x standard.
One doesn't have to use the new standard while using a compiler that supports the new standard. One will have to learn and use the new standard if certain constraints exist on the software being developed, however, but that's a constraint with any software endeavor. I think that the new features that the 0x standard brings will make doing certain things easier and less error prone, so it's to one's advantage to learn what the new features are, and how they will improve their design strategy for future work. One will also have to learn it so that when working on software developed with it, they will understand what's going on and not make large boo-boos.
As to whether I will "switch to the new standard", if that means that I will learn the new standard and use it where applicable and where it increases my productivity, then yes, I certainly plan to switch. However, if this means that I will limit myself to only working with the new features of the 0x standard, then no, since much of my work involves code written before the standard and it would be a colossal undertaking to redesign everything to use the new features. Not only that, but it may introduce new bugs and performance issues that I'm not aware of without experience.
Learning C++ has always been one of the more challenging journeys a programmer can undertake. Adding new features to the language will not change the difficulty of learning its syntax and how to use it effectively, but the approach will change. People will still learn about pointers and how they work, but they'll also learn about smart pointers and how they're managed. In some cases, people will learn things differently than before. For example, people will still need to learn how to initialize things, but now they'll learn about Uniform Initialization and Initializer Lists as primary ways to do things. In some cases, perhaps understanding things will be easier with the addition of the new for syntax for ranges or the auto return type in a function declaration. I think that overall, C++ will become easier to learn and use while at the same time becoming easier to teach.
Mastering a language is a long-term goal; it cannot be done overnight. It's silly to think that one can have mastery over something as complex as C++ quickly. It takes practice, experience and debugging code to really hammer something in. Academic learning is one thing, but putting that knowledge to use is an entirely different monster. I think that if one already has mastery of the C++ language, the new concepts will not pose too much of a burden, but a newcomer may have an advantage in that they won't bother learning some of the more obsolete ways of doing things.
1) The language itself
As far as I'm aware, there are really no breaking changes between C++'03 and C++'0x. The only one I can think of here relates to using auto as a storage class specifier, but since it had no semantic meaning I don't see that being an issue.
There are a lot of other academic fixes to the standard which are very necessary, for example better descriptions for the layout of member data. Finally, with multi-core/CPU architectures becoming the norm, fixing the memory model was a must.
2) Knowledge of the language
Personally, I feel that for 99.9% of C++ developers the newer language is going to be easier to use. I'm specifically thinking of features such as auto, lambdas and constexpr. These features really should make using the language more enjoyable.
At a more advanced level, you have other features, such as variadic templates etc., that help the more advanced users.
But there's nothing new here; I'm still surprised at the number of everyday C++ developers that haven't used (or even heard of) the STL.
From a personal perspective, the only feature I'm a bit concerned about in the new standard is concepts. As it is such a large change, the same problem that occurred with templates (i.e. completely broken implementations) is a real danger.
Update post FDIS going out for voting:
As it happens, 'concepts' was dropped for C++ 0x and will be taken up again for C++ 1x. In the end there are some changes other than auto which could break your code, but in practise they'll probably be pretty rare. The key differences can be found in Appendix C.2 of the FDIS (pdf).
For me, one of the most important features will be:
unique_ptr + std::move() !
Imagine:
Smart pointer without any overhead:
no reference counting operations
no additional storage for reference counter variable
Smart pointer that can be moved, i.e. no deep copies or reallocation when moved
What does this give you? Exception-safe, cheap (pointer-sized) containers without extra costs. The container will be able to move unique_ptrs about as cheaply as memcpy()ing raw pointers, so there will be no performance loss caused by wrapping a regular pointer in a smart pointer (a sketch follows the list below)! So, once again:
You can use pointers
It will be safe (no memory leaks)
It will cost you nothing
You will be able to store them in containers, and they will be able to do "massive" moves (memcpy-like) with them cheaply.
It will be exception safe
:)
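A small sketch of what that looks like in practice (C++11 syntax):

```cpp
#include <memory>
#include <vector>

struct Big { char payload[1024]; };   // an expensive-to-copy object

int main() {
    std::vector<std::unique_ptr<Big>> v;
    std::unique_ptr<Big> p(new Big);
    v.push_back(std::move(p));   // ownership moves into the container; no copy
    // When v reallocates, only the pointers are moved; the 1 KB payloads
    // never move, and everything is deleted when v goes out of scope.
}
```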
Another point of view:
Actually, when you move a group of objects using copy(), there is a constructor and a destructor call for every object instance. If you copy 1000 objects of 1 KB each, there will be at least one memcpy() and 2000 function calls.
If you wanted to avoid those thousands of calls, you would have to use pointers.
But pointers are dangerous, and so on. Existing smart pointers will not help you; they solve other problems.
There is no solution for now; you must pay for the C++ RAII/pointer/value-variable design from time to time. But with C++0x, using unique_ptr will allow you to do "massive" moves of objects (practically objects, because the pointers will be smart) without "massive" constructor/destructor calls, and without the risk of using raw pointers! For me, this is really important.
It's like relaxing the RAII concept (by using pointers) without losing RAII's benefits. Another aspect: a pointer wrapped in a unique_ptr will behave in many respects like a Java reference variable. The difference is that a unique_ptr can exist in only one scope at a time.
Your friend is partially right but mostly wrong: it's the same language with extra features.
The good thing is, you don't have to master all the new features. One of the primary mandates for a standards body is to not break existing code, so you'll be able to go on, happily coding in your old style (I'm still mostly a C coder even though I do "C++" applications :-).
Only when you want to have a look at the new features will you need to bone up on the changes. This is a process you can stretch over years if need be.
My advice is to learn what all the new features are at a high level (if only to sound knowledgeable in job interviews) but learn the details slowly.
In some respects, C++0x should be easier to teach/learn than current C++ (a short sketch follows this list):
Looping through a container - the new for syntax is far easier than for_each + functor or looping manually using iterators
Initialising containers: we'll be able to initialise sequences with the same syntax as arrays
Memory management: out goes dodgy old auto_ptr, in comes well-defined unique_ptr and shared_ptr
Lambdas, although necessarily more complex than other languages' equivalents, will be easier to learn than the C++98 process of defining function objects in a different scope.
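A short sketch of the first two items, side by side with the C++98 way:

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values = {1, 2, 3, 5, 8};   // initialise like an array

    for (int v : values)                          // new range-based for
        std::cout << v << ' ';

    // The C++98 equivalent of the loop above, for contrast:
    // for (std::vector<int>::const_iterator it = values.begin();
    //      it != values.end(); ++it)
    //     std::cout << *it << ' ';
}
```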
Do you plan to switch to the new standard or keep up with the "old" standard(s)?
A year ago, I was writing strict C89, because the product in question was aggressively portable to embedded platforms, some of which had compilers with radically different ideas of which bits of C99 it's worth supporting. So a 20-year-old standard still hasn't been fully replaced by its 10-year-old successor.
So I don't expect to be able to get away from C++03 any time soon.
I do expect to use C++0x features where appropriate. Just as I use C99 features in C code, and gcc extensions in C and C++ (and would use MSVC extensions, although I've never worked on MSVC-only code for more than trivial amounts of time). But I expect it to be "nice to have" rather than baseline, pretty much indefinitely.
You have a point, but it's always been the case. There is a lot of C++ code out there that still doesn't incorporate anything from the '98 standard just because of the innate conservatism of some coders. Some of us remember a dark time before the std:: namespace (before namespaces, in fact), when everyone wrote their own string class, and pointers walked around naked all the time. There is a reason why we talk about "modern C++ style" - to distinguish from the earlier style because some people still have to maintain or update code in that style.
Any language has to evolve to survive, and any language that evolves will have a divided user base, if only because people vary in their attitude towards estimating opportunity costs in applying new language features to their own work.
With the advent of C++0x in shipping compilers, this conversation will be played out over and over in dev teams across the world:
YOUNGSTER: I've just discovered these things called lambdas! And I'm finding lots of ways to use them to make our code more expressive! Look, I rewrote your old Foo class, isn't that much neater?
OLDSTER: There was nothing wrong with my old Foo class. You're just looking for excuses to use a "cool" new feature unnecessarily. Why do you keep trying to make my life so complicated? Why do I keep having to learn new things? We need another war, that's what we need.
YOUNGSTER: You're just too stuck in your ways, old man, we shouldn't even be using C++ these days... if it was up to me -
OLDSTER: If it was up to me we'd have stuck with PL/1, but no... my wife had to vote for Carter and now we're stuck with all this object-oriented crap. There's nothing you can do with std::transform and lambdas that I can't do with a goto and a couple of labels.
etc.
Your programming career will always involve learning and re-learning. You can't expect c++ to stay the same till you retire and to be using the same methods and practises that you were using 40 years ago. Technology rolls on, and it rolls quickly. It's your job to keep up with it. Of course you can ignore that, and continue to work the same way you currently do, but in 5 / 10 years time you'll become so outdated that you'll be forced to learn it all then when you're trying to change job. And it will have been a lot easier to learn on the job all those years before :)
A few months ago I heard Bjarne Stroustrup give a talk titled 50 years of C++. Admittedly, I'm not a C++ programmer, but it seemed to me that he certainly doesn't think 0x is a new language!
Whether or not we can consider it a "new language", I think that's semantics. It doesn't make a difference. It's backwards compatible with our current C++ code, and it's a better language. Whether or not we consider it "the same language" doesn't matter.
About learning the language, remember that a lot of the new features are there to make the language easier to learn and use. Most of the features that add complexity are intended for library developers only. They can use these new features to make better, more efficient, and easier to use libraries, that you can then use without knowing about the features. Several of the changes actually simplify and generalize existing features, making them easier for newcomers to learn.
It is a big update, yes, but it is guided by a decade of experience with the current C++ standard. Every change is there because experience has shown that it is needed. In fact, the committee is being extremely cautious and conservative, and have refused a huge number of other language improvements. What is added here is only the fundamentals that 1) everyone could agree on, and 2) could be specified in time, without delaying the new standard.
It is not simply a few language designers sitting down and brainstorming new features they'd like to try.
Concepts and concept maps are going to greatly increase the grokability of template frameworks. If you've ever pored over the Boost source you'll know what I mean. You're constantly going from source to docs because the language just doesn't have the facilities to express template concepts. Hopefully concepts + duck typing will give us the best of both worlds, whereby entry points to template libraries can explicitly declare requirements but still have the freedom that duck typing provides when writing generic code.
There are lots of good things in C++0x, but they're mostly evolutionary changes that refine or extend existing ideas. I don't think it's different enough to justify calling it a "new language".
I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here, please, if you have strong opinions about not liking C++ don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it.
I'm particularly interested in aspects of C++ that are little known, or underutilised.
RAII / deterministic finalization. No, garbage collection is not just as good when you're dealing with a scarce, shared resource.
Unfettered access to OS APIs.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language.
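As a sketch of that expression (Surface and Volume here are hypothetical stand-ins, not the author's actual types):

```cpp
struct Volume { double cubic_metres; };

struct Surface {
    double mean_height;   // a grossly simplified surface model
    double area;
};

// Overloaded subtraction reads like the problem domain itself.
Volume operator-(const Surface& design, const Surface& ground) {
    return Volume{(design.mean_height - ground.mean_height) * design.area};
}

// Usage: Volume MaterialVolume = DesignSurface - GroundSurface;
```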
Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
Shooting oneself in the foot.
No other language offers such a creative array of tools. Pointers, multiple inheritance, templates, operator overloading and a preprocessor.
A wonderfully powerful language that also provides abundant opportunities for foot shooting.
Edit: I apologize if my lame attempt at humor has offended some. I consider C++ to be the most powerful language that I have ever used -- with abilities to code at the assembly language level when desired, and at a high level of abstraction when desired. C++ has been my primary language since the early '90s.
My answer was based on years of experience of shooting myself in the foot. At least C++ allows me to do so elegantly.
Deterministic object destruction leads to some magnificent design patterns. For instance, while RAII is not as general a technique as garbage collection, it leads to some impressive capabilities which you cannot get with GC.
C++ is also unique in that its template system is Turing-complete. This allows you to prefer (as in the opposite of defer) a lot of code tasks to compile time instead of run time. For instance, in real code you might have an assert() statement to test for a never-happen. The reality is that it will sooner or later happen... and happen at 3:00am when you're on vacation. A C++ compile-time assert does the same test at compile time. Compile-time asserts fail between 8:00am and 5:00pm while you're sitting in front of the computer watching the code build; run-time asserts fail at 3:00am when you're asleep in Hawai'i. It's pretty easy to see the win there.
In most languages, strategy patterns are done at run time and throw exceptions in the event of a type mismatch. In C++, strategies can be done at compile time through templates and can be guaranteed typesafe.
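A sketch of both ideas in the template style the answer describes (static_assert now does this natively; the C++98-compatible form below shows the mechanism):

```cpp
#include <cstring>

// A compile-time assertion: only the 'true' case has a definition,
// so a false condition fails at build time, not at 3:00am.
template <bool Condition> struct CompileTimeAssert;    // undefined
template <> struct CompileTimeAssert<true> {};         // complete type

CompileTimeAssert<(sizeof(int) >= 4)> int_size_check;

// A compile-time strategy: the policy is a template parameter, and a
// type mismatch is a compile error rather than a run-time exception.
struct FastCopy { static void copy(char* d, const char* s, unsigned n) { std::memcpy(d, s, n); } };
struct SafeCopy { static void copy(char* d, const char* s, unsigned n) { std::memmove(d, s, n); } };

template <typename Strategy>
void transfer(char* dst, const char* src, unsigned n) {
    Strategy::copy(dst, src, n);   // resolved at compile time, fully typesafe
}
```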
Write inline assembly (MMX, SSE, etc.).
Deterministic object destruction. I.e. real destructors. Makes managing scarce resources easier. Allows for RAII.
Easier access to structured binary data. It's easier to cast a memory region as a struct than to parse it and copy each value into a struct (see the sketch after this list).
Multiple inheritance. Not everything can be done with interfaces. Sometimes you want to inherit actual functionality too.
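To illustrate the structured-binary-data point (a hypothetical message format; real code must also mind endianness and alignment, and memcpy is the aliasing-safe way to do the overlay):

```cpp
#include <cstdint>
#include <cstring>

struct Header {
    std::uint16_t type;
    std::uint16_t length;
    std::uint32_t crc;
};

Header parse(const unsigned char* buffer) {
    Header h;
    std::memcpy(&h, buffer, sizeof h);  // one copy instead of field-by-field parsing
    return h;
}
```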
I think I'm just going to praise C++ for its ability to use templates to capture expressions and evaluate them lazily when needed. For those not knowing what this is about, here is an example.
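A minimal sketch of the technique (so-called expression templates): a + b builds a tiny expression object, and the element-wise additions only run, lazily, when the result is assigned. (A real library would constrain operator+ instead of templating it over everything.)

```cpp
#include <cstddef>

template <typename L, typename R>
struct Sum {                       // captures 'l + r' without evaluating it
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

struct Vec {
    double data[4];
    double operator[](std::size_t i) const { return data[i]; }

    template <typename E>
    Vec& operator=(const E& e) {   // evaluation finally happens here
        for (std::size_t i = 0; i < 4; ++i) data[i] = e[i];
        return *this;
    }
};

template <typename L, typename R>
Sum<L, R> operator+(const L& l, const R& r) { return Sum<L, R>{l, r}; }

// Usage: given Vec a, b, c, v;  v = a + b + c;
// runs one loop with no Vec-sized temporaries.
```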
Template mixins provide reuse that I haven't seen elsewhere. With them you can build up a large object with lots of behaviour as though you had written the whole thing by hand. But all these small aspects of its functionality can be reused; it's particularly great for implementing parts of an interface (or the whole thing) when you are implementing a number of interfaces. The resulting object is lightning-fast because it's all inlined.
Speed may not matter in many cases, but when you're writing component software, and users may combine components in unthought-of complicated ways to do things, the speed of inlining and C++ seems to allow much more complex structures to be created.
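A small sketch of that mixin style (hypothetical classes): each slice of behaviour is a template, and the composed object inherits them all, with every call inlined:

```cpp
#include <iostream>
#include <string>

template <typename Derived>
struct Printable {                       // one reusable aspect
    void print() const {
        std::cout << static_cast<const Derived&>(*this).name() << '\n';
    }
};

template <typename Derived>
struct Comparable {                      // another reusable aspect
    bool same_as(const Derived& other) const {
        return static_cast<const Derived&>(*this).name() == other.name();
    }
};

// The composed object behaves as though the whole thing were hand-written.
struct Widget : Printable<Widget>, Comparable<Widget> {
    std::string name() const { return "widget"; }
};
```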
Absolute control over the memory layout, alignment, and access when you need it. If you're careful enough you can write some very cache-friendly programs. For multi-processor programs, you can also eliminate a lot of slow downs from cache coherence mechanisms.
(Okay, you can do this in C, assembly, and probably Fortran too. But C++ lets you write the rest of your program at a higher level.)
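For example, one common cache-coherence trick is to keep per-thread data on separate cache lines so processors never fight over a line (alignas is the C++11 spelling; older code used compiler-specific attributes for the same effect):

```cpp
// Each counter occupies its own 64-byte cache line, so writes by one
// thread never invalidate the line holding another thread's counter.
struct alignas(64) PerThreadCounter {
    long count;   // padding to 64 bytes is implied by the alignment
};

static_assert(sizeof(PerThreadCounter) == 64, "one counter per cache line");

PerThreadCounter counters[8];   // counters[i] is touched only by thread i
```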
This will probably not be a popular answer, but I think what sets C++ apart are its compile-time capabilities, e.g. templates and #define. You can do all sorts of text manipulation on your program using these features, much of which has been abandoned in later languages in the name of simplicity. To me that's way more important than any low-level bit fiddling that's supposedly easier or faster in C++.
C#, for instance, doesn't have a real macro facility. You can't #include another file directly into the source, or use #define to manipulate the program as text. Think about any time you had to mechanically type repetitive code and you knew there was a better way. You may even have written a program to generate code for you. Well, the C++ preprocessor automates all of these things.
The "generics" facility in C# is similarly limited compared to C++ templates. C++ lets you apply the dot operator to a template type T blindly, calling (for example) methods that may not exist, and checks-for-correctness are only applied once the template is actually applied to a specific class. When that happens, if all the assumptions you made about T actually hold, then your code will compile. C# doesn't allow this... type "T" basically has to be dealt with as an Object, i.e. using only the lowest common denominator of operations available to everything (assignment, GetHashCode(), Equals()).
C# has done away with the preprocessor, and real generics, in the name of simplicity. Unfortunately, when I use C#, I find myself reaching for substitutes for these C++ constructs, which are inevitably more bloated and layered than the C++ approach. For example, I have seen programmers work around the absence of #include in several bloated ways: dynamically linking to external assemblies, re-defining constants in several locations (one file per project) or selecting constants from a database, etc.
As Ms. Crabapple from The Simpsons once said, this is "pretty lame, Milhouse."
In terms of Computer Science, these compile-time features of C++ enable things like call-by-name parameter passing, which is known to be more powerful than call-by-value and call-by-reference.
Again, this is perhaps not the popular answer- any introductory C++ text will warn you off of #define, for example. But having worked with a wide variety of languages over many years, and having given consideration to the theory behind all of this, I think that many people are giving bad advice. This seems especially to be the case in the diluted sub-field known as "IT."
Passing POD structures across processes with minimum overhead. In other words, it allows us to easily handle blobs of binary data.
C# and Java force you to put your 'main()' function in a class. I find that weird, because it dilutes the meaning of a class.
To me, a class is a category of objects in your problem domain. A program is not such an object. So there should never be a class called 'Program' in your program. This would be equivalent to a mathematical proof using a symbol to notate itself -- the proof -- alongside symbols representing mathematical objects. It'll be just weird and inconsistent.
Fortunately, unlike C# and Java, C++ allows global functions. That lets your main() function exist outside any class. Therefore C++ offers a simpler, more consistent and perhaps truer implementation of the object-oriented idiom. Hence, this is one thing C++ can do that C# and Java cannot.
I think that operator overloading is a quite nice feature. Of course it can be very much abused (like in Boost lambda).
Tight control over system resources (esp. memory) while offering powerful abstraction mechanisms optionally. The only language I know of that can come close to C++ in this regard is Ada.
C++ provides complete control over memory and, as a result, makes the flow of program execution much more predictable.
Not only can you say precisely when allocations and deallocations of memory occur, you can define your own heaps, have multiple heaps for different purposes, and say precisely where in memory data is allocated. This is frequently useful when programming on embedded/real-time systems, such as games consoles, cell phones, mp3 players, etc., which:
have strict upper limits on memory that are easy to reach (contrast with a PC, which just gets slower as you run out of physical memory)
frequently have a non-homogeneous memory layout. You may want to allocate objects of one type in one piece of physical memory, and objects of another type in another piece.
have real time programming constraints. Unexpectedly calling the garbage collector at the wrong time can be disastrous.
AFAIK, C and C++ are the only sensible options for doing this kind of thing.
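A sketch of the kind of control meant here: a fixed pool carved out of a chosen memory region, with objects constructed in place by placement new (pinning the buffer to a particular region, e.g. fast on-chip RAM via a linker-section attribute, is target-specific and only hinted at in the comments):

```cpp
#include <cstddef>
#include <new>        // placement new

struct Sample { int channel; int value; };

// On a real target this buffer could be placed in a chosen physical
// memory region through a linker-section attribute.
alignas(Sample) static unsigned char pool[16 * sizeof(Sample)];
static std::size_t used = 0;

Sample* allocate_sample() {
    if (used + sizeof(Sample) > sizeof(pool)) return nullptr;  // pool exhausted
    void* slot = pool + used;
    used += sizeof(Sample);
    return new (slot) Sample();   // constructed in place: no malloc, no GC
}
```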
Well, to be quite honest, you can do just about anything if you're willing to write enough code.
So to answer your question: no, there is nothing C++ can do that you can't do in another language. It's just a question of how much patience you have, and whether you're willing to devote the long sleepless nights to getting it to work.
There are things that C++ wrappers make it easy to do (because they can read the header files), like Office development. But again, it's because someone wrote lots of code to "wrap" it for you in an RCW or "Runtime Callable Wrapper"
EDIT: You also realize this is a loaded question.