Functional languages and concurrency

When lists of advantages of functional languages are given, they usually mention that functional languages make concurrency easier because there are no variables that are subject to change. But, as I remember from my assembler lessons at school, there are registers in the CPU and there is memory, both of which are mutable. So when high-level functional code is compiled into low-level code, it becomes imperative and mutable. So I don't understand what the advantage of using functional languages is for concurrency. Can anyone explain this?

It's programmers, not computers, that have a hard time with concurrency. So saying that immutability makes concurrency easier means easier for humans, not for computers.
(I include compiler writing as a human activity.)
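To illustrate with a hedged, hypothetical sketch (not from the original post): this is roughly what "easier for humans" looks like in C++. Two threads share a value that nothing mutates, so there is no lock to get wrong and no race to reason about.

    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // A value nobody mutates after creation: safe to share without a lock.
        const std::vector<int> data{1, 2, 3, 4, 5, 6};

        long front = 0, back = 0;

        // Two threads read the same structure concurrently. Because it never changes,
        // there is no race and nothing to synchronise -- that is the "easier for humans" part.
        std::thread t1([&] { front = std::accumulate(data.begin(), data.begin() + 3, 0L); });
        std::thread t2([&] { back  = std::accumulate(data.begin() + 3, data.end(), 0L); });
        t1.join();
        t2.join();

        std::printf("%ld %ld\n", front, back);   // 6 and 15
    }

The compiled code still uses mutable registers and memory, but the compiler and runtime deal with that; the programmer only has to reason about values that never change.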


Testing C++17 in safety critical systems

I'm currently thinking about C++ in safety-critical software (DO-178C DAL-D) and the definition of a coding standard. I was looking at MISRA C++, which is by now 10 years old and misses all the C++11…17 features.
While being conservative regarding safety is often not a bad idea, the new language features might be beneficial to safety.
During reviews one has to argue about why certain decisions were made. And one can always argue that the new language features make the code clearer, and thus cause fewer errors due to misunderstandings, especially if the compiler is able to check and verify your assumptions.
But it is hard to find language features that carry the safety aspects more prominently than "make things clearer". What aspects of modern C++ really help regarding safety?
I'm setting up a small exercise project to test these ideas and am currently totally focused on "let the compiler check your assumptions". For example, we have just started to use [[nodiscard]] and found at least two bugs this way within the first hour. But what aspects of modern C++ were designed with safety in mind and should be used that way?
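For context, the kind of bug [[nodiscard]] caught for us looks roughly like this (a simplified, hypothetical sketch, not our actual code): an error code that used to be silently ignored now produces a compiler warning.

    #include <cstdio>

    // [[nodiscard]] (C++17): ignoring the result becomes a diagnosable mistake.
    [[nodiscard]] bool write_record(const char* data) {
        return std::fputs(data, stdout) >= 0;   // pretend this write could fail
    }

    int main() {
        write_record("x\n");                    // warning: return value discarded
        if (!write_record("y\n")) return 1;     // fine: the result is checked
    }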
These come to mind first:
std::atomic and the memory model: they allow writing portable code in concurrent / lock-free contexts.
unique_ptr: helps simplify memory handling.
override: lets you find bugs at compile time.
if constexpr: lets the code be written closer to where it is used, which helps avoid bugs (sometimes, to specialize a behaviour according to a template parameter, you would write a class with n specializations; now you can use if constexpr with n branches instead).
etc. In a way, considering the benefits for code clarity and portability, I think every feature of C++11/14/17 helps. (override and if constexpr are sketched just below.)
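A brief sketch of those two items (hypothetical types, just to illustrate): override turns a silently wrong signature into a compile error, and if constexpr keeps per-type behaviour next to the generic code.

    #include <iostream>
    #include <type_traits>

    struct Sensor {
        virtual ~Sensor() = default;
        virtual double read() const { return 0.0; }
    };

    struct PressureSensor : Sensor {
        // A forgotten 'const' here would be a compile error thanks to 'override',
        // instead of a silently unused overload.
        double read() const override { return 1.3; }
    };

    // One function with per-type branches instead of n class specializations.
    template <class T>
    void print(const T& value) {
        if constexpr (std::is_floating_point_v<T>)
            std::cout << "float: " << value << '\n';
        else
            std::cout << "other: " << value << '\n';
    }

    int main() {
        PressureSensor s;
        std::cout << s.read() << '\n';
        print(3.14);
        print(42);
    }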
And one can always argue that the new language features make the code clearer, and thus cause fewer errors due to misunderstandings, especially if the compiler is able to check and verify your assumptions.
In my not so humble opinion, there are few language features, that is, standard general-purpose programming language features, that both fall outside the allowed standards and are worth the time and energy of arguing your way through an assessment. If you are aiming for a higher level of abstraction (which is a good thing for safety too, although you will hardly find anyone openly admitting this, because it would render half of the safety industry unemployed and the other half severely outdated), then you'd be better off resorting to a domain-specific language and putting the effort into a flawless compilation (to source) for a standard-conforming platform. If you don't work in an engineering culture that allows this, then you can resort to some of the measures the other answer here proposes, but it is always difficult to convincingly convey the intention and meaning of non-specific measures to other safety engineers (a dedicated domain-specific language is much easier both to support and to object to).
That said, I think the advances in parallel programming in modern C++ will find their way into the standards relatively quickly.

Immutable Data Structure - Application maintenance

I have been reading about immutable data structures and understand that change detection becomes easy with them. Quite often I also hear that they make application maintenance simpler and provide an easy-to-understand programming model.
I need help understanding how they simplify the job.
The Clojure community has embraced immutability and it is an eye-opener. The best I can do is send you to the source: Rich Hickey's essay on State and his talk The Value of Values. Rich explains how separating the concept of a variable into three distinct concepts (identity, state, and value) helps you model your system and reason about it.
It boils down to this: in your programming model, you should only allow things to change if they change in the system you are trying to model. Otherwise you are adding moving parts (mutable variables and objects) to a model that doesn't need them. This makes it harder to understand the model (especially as it evolves over time) but has little or no benefit.
Even though reading helps, the only way to grok this is to program in a language that treats immutability as the default, until you realize that most of the systems you model actually have only a handful of things that change, instead of pages and pages of mutable variables.
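To make the identity/state/value split concrete, here is a small hedged sketch in C++ (a hypothetical Cart type; Clojure gives you this for free): the value is an immutable snapshot, and the identity is the one cell that moves from snapshot to snapshot.

    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    // Value: an immutable snapshot. Nobody can change it once built.
    struct Cart {
        std::vector<std::string> items;
    };

    // Identity: the one mutable cell, which points at successive immutable values.
    using CartRef = std::shared_ptr<const Cart>;

    // "Changing" the cart means producing a new value, not editing the old one.
    CartRef add_item(const CartRef& current, std::string item) {
        Cart next = *current;                            // copy the old value
        next.items.push_back(std::move(item));
        return std::make_shared<const Cart>(std::move(next));
    }

    int main() {
        CartRef cart = std::make_shared<const Cart>();   // the identity's initial state
        CartRef before = cart;                           // old values can be kept safely
        cart = add_item(cart, "book");                   // only the identity moved

        std::printf("before=%zu after=%zu\n", before->items.size(), cart->items.size());
    }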
Immutability is certainly more embraced in functional languages than in imperative ones, even if you can have a Java programming style that limits mutability (see this for immutability in Java). That said, I will just comment on [functional/immutability] and [object/mutability].
I'm a Clojure fan and find functional programming really powerful, but...
Maybe I have spent too much time with C++ and Java and not enough with Lisp and Clojure, but I reckon that the simpler-maintenance argument has yet to be proven by facts. I'm not sure there are reliable surveys on the actual cost of maintenance in big production systems with data on the technology used and the associated costs.
Certainly, in terms of LOC, a language like Clojure is much more focused and concise than Java. Hence you can say that less code leads to less maintenance, but I think the functional style produces really compact code that needs very focused attention to fully understand what a function is doing, compared to the imperative style, which is more verbose but kind of straightforward. One big advantage of functional programming combined with immutability is the ability to isolate a function and experiment with it without needing to drag along a heavy context of satellite objects or build a bunch of mocks, which is very often the case with OO languages. Putting experimentation aside, a pure function won't modify its arguments, which eases the fear of unintentionally breaking some piece of code outside the scope of the function.
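A tiny hedged example of that last point (a hypothetical function; any language with pure functions works the same way): the function can be exercised in complete isolation, with no mocks and no fear of hidden mutation.

    #include <cassert>
    #include <vector>

    // Pure function: takes values, returns a new value, touches nothing else.
    std::vector<int> with_discount(std::vector<int> prices, int percent) {
        for (int& p : prices) p -= p * percent / 100;   // works on its own copy
        return prices;
    }

    int main() {
        const std::vector<int> original{100, 200};
        const auto discounted = with_discount(original, 10);
        assert(original[0] == 100);    // the caller's data is untouched
        assert(discounted[0] == 90);   // testable in isolation, no context to set up
    }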
But, putting aside the merits of functional/immutability over OOP/mutability, in terms of maintenance my experience leads me to think that it's not the technology that is the main issue, but the design, the code quality, and the evolution of that code over time, even when the initial version was of good quality. By "good", I mean that the code respects style conventions (like basic naming), keeps complexity under control, and has a sensible test harness, in a continuous (or at least automated) build environment.
Then the question becomes: is there a paradigm (functional/immutability, object-oriented/mutability) that enforces a better design and better code? My feeling is that functional languages are the land of computer science enthusiasts, whereas OOP is more mainstream. Is that because OOP is easier to apprehend, or is it just a matter of education? And then, in order to maintain a system in the long run, should one go for a "clever" functional environment with few people able to tackle it, or for some mainstream OO technology, with its unsafeness or permissiveness, but with lots of people having some knowledge of it?
Certainly the solution is to choose the right technologies (plural) with the right, motivated people...

Efficiency of program

I want to know whether there is an effect on program efficiency from adopting an object-oriented approach to a problem as compared to a structured programming approach, in any programming language but especially in C++.
Maybe. Maybe not.
You can write efficient object-oriented code. You can write inefficient structured code.
It depends on the application, how well the code is written, and how heavily the code is optimized. In general, you should write code so that it has a good, clean, modular architecture and is well designed, then if you have problems with performance optimize the hot spots that are causing performance issues.
Use object oriented programming where it makes sense to use it and use structured programming where it makes sense to use it. You don't have to choose between one and the other: you can use both.
I remember back in the early 1990's when C++ was young there were studies done about this. If I remember correctly, the guys who took (well written) C++ programs and recoded them in C got around a 15% increase in speed. The guys who took C programs and recoded them in C++, and modified the imperative style of C to an OO style (but same algorithms) for C++ got the same or better performance. The apparent contradiction was explained by the observation that the C programs, in being translated to an object oriented style, became better organized. Things that you did in C because it was too much code and trouble to do better could more easily be done properly in C++.
Thinking back on this, I wonder about the conclusion a little. Writing a program a second time will always result in a better program, so it didn't have to be the move from an imperative to an OO style that made the difference. Today's computer architectures are designed with hardware support for common operations done by OO programs, and compilers have gotten better at using those instructions, so I think that whatever overhead a virtual function call had in 1992 is far smaller today.
There doesn't have to be, if you are very careful to avoid it. If you just take the most straightforward approach, using dynamic allocation, virtual functions, and (especially) passing objects by value, then yes there will be inefficiency.
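For instance, the pass-by-value point looks roughly like this (hypothetical type): both functions do the same job, but the first one copies the whole object on every call.

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Document {
        std::vector<std::string> lines;   // potentially megabytes of text
    };

    // The "straightforward" signature: the whole document is copied on every call.
    std::size_t count_by_value(Document d) { return d.lines.size(); }

    // Same result for the caller, no copy: the usual cure.
    std::size_t count_by_ref(const Document& d) { return d.lines.size(); }

    int main() {
        Document doc{{"one", "two", "three"}};
        std::printf("%zu %zu\n", count_by_value(doc), count_by_ref(doc));
    }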
It doesn't have to be. The algorithm is all that matters. I agree encapsulation will slow you down a little bit, but compilers are there to optimize.
You would say no if this were a question on a computer science exam.
However, in a real development environment this tends to be true, if the OOP paradigm is used correctly. The reason is that in a real development process we generally need to maintain our code base, and that is where the OOP paradigm can help. One strong point of OOP over structured programming like C is that OOP makes it easier to keep the code maintainable. When the code is more maintainable, that means fewer bugs, less time spent fixing bugs, and less time needed to implement new features. The bottom line is that we then have more time to focus on the efficiency of the application.
The problem is not technical, it is psychological. It is in what it encourages you to do by making it easy.
To make a mundane analogy, it is like a credit card. It is much more efficient than writing checks or using cash. If that is so, why do people get in so much trouble with credit cards? Because they are so easy to use that they abuse them. It takes great discipline not to over-use a good thing.
The way OO gets abused is by
Creating too many "layers of abstraction"
Creating too much redundant data structure
Encouraging the use of notification-style code, attempting to maintain consistency within redundant data structures.
It is better to minimize data structure, and if it must be redundant, be able to tolerate temporary inconsistency.
ADDED:
As an illustration of the kind of thing that OO encourages, here's what I see sometimes in performance tuning: Somebody sets SomeProperty = true;. That sounds innocent enough, right? Well that can ripple to objects that contain that object, often through polymorphism that's hard to trace. That can mean that some list or dictionary somewhere needs to have things added to it or removed from it. That can mean that some tree or list control needs controls added or removed or shuffled. That can mean windows are being created or destroyed. It can also mean some things need to be changed in a database, which might not be local so there's some I/O or mutex locking to be done.
It can really get crazy. But who cares? It's abstract.
There could be: the OO approach tends to be closer to a decoupled approach where different modules don't go poking around inside each other. They are restricted to public interfaces, and there is always a potential cost in that. For example, calling a getter instead of just directly examining a variable; or calling a virtual function by default because the type of an object isn't sufficiently obvious for a direct call.
That said, there are several factors that diminish this as a useful observation.
A well-written structured program should have the same modularity (i.e. hiding implementations), and therefore incur the same costs of indirection. The cost of calling a function pointer in C is probably going to be very similar to the cost of calling a virtual function in C++ (see the sketch after this list).
Modern JITs, and even the use of inline methods in C++, can remove the indirection cost.
The costs themselves are probably relatively small (typically just a few extra simple operations per function call). This will be insignificant in a program where the real work is done in tight loops.
Finally, a more modular style frees the programmer to tackle more complicated, but hopefully less complex algorithms without the peril of low level bugs.
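A hedged sketch of the first point (hypothetical shapes): the C-style function pointer and the C++ virtual call both pay one indirection, which is why the costs end up comparable.

    #include <cstdio>

    // C-style indirection: a function pointer stored in the struct.
    struct ShapeC {
        double (*area)(const ShapeC*);
        double side;
    };
    double square_area(const ShapeC* s) { return s->side * s->side; }

    // C++-style indirection: a virtual function.
    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };
    struct Square : Shape {
        double side;
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
    };

    int main() {
        ShapeC c{&square_area, 3.0};
        Square sq{3.0};
        const Shape& s = sq;
        // Both calls go through one pointer indirection; the per-call cost is similar.
        std::printf("%f %f\n", c.area(&c), s.area());
    }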

As a programmer with no CS degree, do I have to learn C++ extensively?

I'm a programmer with 2 years of experience. I have worked at 4 places and I really think of myself as a confident and fluent developer.
Most of my colleagues have CS degrees, and I don't really feel any difference! However, to keep my mind on the same stream as these guys, I studied C (read Beginning C: From Novice to Professional), data structures with C, and also OOP with C++.
I have a reasonable understanding of pointers and memory management, and I also attended a scholarship program of which C, data structures, and C++ were a part.
I want to note that my familiarity with C and C++ does not exceed reading some pages, and executing some demos; I haven't worked on any project using C or C++.
Lately a friend of mine advised me to learn C, and C++ extensively, and then move to OpenGL and learn about graphics programming. He said that the insights I may gain by learning these topics will really help me throughout my entire life as a programmer.
PS: I work as a full-time developer mostly working on ASP.NET applications using C#.
Recommendations?
For practical advancement:
From a practical sense, pick a language that suits the domain you want to work in.
There is no need to learn C nor C++ for most programming spaces. You can be a perfectly competent programmer without writing a line of code in those languages.
If however you are not happy working in the exact field you are in now, you can learn C or C++ so that you may find a lower level programming job.
Helping you be a better programmer:
You can learn a lot from learning multiple languages though. So it is always good to broaden your horizons that way.
If you want more experience in another language and have not tried it yet, I would recommend learning a functional programming language such as Scheme, Lisp, or Haskell.
First, having a degree has nothing to do with knowing C++. I know several people who graduated from CS without ever writing more than 50 lines of C/C++. CS is not about programming (in the same sense that surgery is not about knives), and it certainly isn't about individual languages. A CS degree requires you to poke your nose into several different languages, on your way to somewhere else. CS teaches the underlying concepts, an understanding of compilers, operating systems, the hardware your code is running on, algorithms and data structures and many other fascinating subjects. But it doesn't teach programming. Whatever programming experience a CS graduate has is almost incidental. It's something he picked up on the fly, or because of a personal interest in programming.
Second, let's be clear that it's very possible to have a successful programming career without knowing C++. In fact, I'd expect that most programmers fall into this category. So you certainly don't need to learn C++.
That leaves two possible reasons to learn C++:
Self-improvement
Changing career track
#2 is simple. If you want to transition to a field where C++ is the dominant language, learning it would obviously be a good idea. You mentioned graphics programming as an example, and if you want to do that for a living, learning C++ will probably be a good idea. (However, I don't think it's a particularly good suggestion for "insights that will help throughout your life as a programmer". There are other fields that are much more generally applicable. Learning graphics programming will teach you graphics programming, and not much else.)
That leaves #1, which is a bit more interesting. Will you become a better programmer simply by knowing C++? Perhaps, but not as much as some may think. There are several useful things that C++ may teach you, but there also seems to be a fair bit of superstition about it: it's low-level and has pointers, so by learning C++, you will achieve enlightenment.
If you want to understand what goes on under the hood, C or C++ will be helpful, sure, but you could cut out the middle man and just go directly into learning about compilers. That'd give you an even better idea. Supplement that with some basic information on how CPU's work, and a bit about operating systems as well, and you've learned all the underlying stuff much better than you would from C++.
However, some things I believe are worth picking up from C++, in no particular order:
(several of them are likely to make you despair at C#, which, despite adopting a lot of brilliant features, is still missing some that seem blindingly obvious to a C++ programmer)
Paranoia: Becoming good at C++ implies becoming a bit of a language lawyer. The language leaves a lot of things undefined or unspecified, so a good C++ programmer is paranoid. "The code I just wrote looks OK, and it seems to behave OK when I run it - but is it well-defined by the standard? Will it break tomorrow, on his computer, or when I compile with an updated compiler? I have to check the standard." That's less necessary in other languages, but it may still be a healthy experience to carry with you. Sometimes, the compiler doesn't have the final word.
RAII: C++ has pioneered a pretty clever way to deal with resource management (including the dreaded memory management). Create an object on the stack, which in its constructor acquires the resource in question (database connection, chunk of memory, a file, a network socket or whatever else), and in its destructor ensures that this resource is released. This simple mechanism means that you virtually never write new/delete in your top level code; it is always hidden inside constructors or destructors. And because destructors are guaranteed to execute when the object goes out of scope, even if an exception is thrown, your resource is guaranteed to be released. No memory leaks, no unclosed database connections. C# doesn't directly support this, but being familiar with the technique sometimes lets you see a way to emulate it in C#, in the cases where it's useful. (Obviously memory management isn't a concern, but ensuring that database connections are released quickly might still be.) (A minimal sketch of RAII appears at the end of this answer.)
Generic programming, templates, the STL and metaprogramming: The C++ standard library (or the part of it commonly known as the STL) is a pretty interesting example of library design. In some ways, it is lightyears ahead of .NET or Java's class libraries, although LINQ has patched up some of the worst shortcomings of .NET. Learning about it might give you some useful insights into clever ways to work with sequences or sets of data. It also has a strong flavor of functional programming, which is always nice to poke around with. It's implemented in terms of templates, which are another remarkable feature of C++, and template metaprogramming may be beneficial to learn about as well. Not because it is directly applicable to many other languages, but because it might give you some ideas for writing more generic code in other languages as well.
Straightforward mapping to hardware: C++ isn't necessarily a low level language. But most of its abstractions have been modelled so that they can be implemented to map directly to common hardware features. That means it might help provide a good understanding of the "magic" that occurs between your managed .net code and the CPU at the other end. How is the CLR implemented, what do the heap and stack actually mean, and so on.
p/invoke: Let's face it, sometimes .NET doesn't offer the functionality you need. You have to call some unmanaged code. And then it's useful to actually know the language you might be using. (If you can get around it with just a single P/Invoke call, you only need to be able to read C function signatures on MSDN so you know which arguments to pass, but sometimes it may be preferable to write your own C++ wrapper and call into that instead.)
I don't know if you should learn C++. There are valid reasons why doing so may make you a better programmer, but then again, there are plenty of other things you could spend your time on that would also make you a better programmer. The choice is yours. :)
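To make the RAII point above concrete, here is a minimal hedged sketch (a hypothetical file wrapper, not production code): the resource is acquired in the constructor and released in the destructor, no matter how the scope is left.

    #include <cstdio>
    #include <stdexcept>

    class File {
    public:
        explicit File(const char* path) : f_(std::fopen(path, "w")) {
            if (!f_) throw std::runtime_error("cannot open file");   // acquire or fail loudly
        }
        ~File() { if (f_) std::fclose(f_); }                         // always released

        File(const File&) = delete;              // one owner, no accidental double-close
        File& operator=(const File&) = delete;

        void write(const char* text) { std::fputs(text, f_); }

    private:
        std::FILE* f_;
    };

    void save() {
        File log("log.txt");        // acquired here
        log.write("hello\n");
        // even if something throws here, ~File() still runs
    }                               // released here, deterministically

    int main() { save(); }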
Experience is the best teacher.
While you can read about things like memory management, data structures (and their implementations), algorithms, etc., you won't really get it until you've had a chance to put it into practice. While I don't know if it's truly necessary to use C or C++ to learn these things, I would put some effort into actually writing some code that manages its own memory and implements some common data structures. I think you'll learn things that will help you to understand your code better; to know what's really going on under the hood, so to speak. I would also recommend reading up on computer organization and operating systems, computer security, and boolean logic. On the other hand, I've never really found a need to do any OpenGL programming, though I did do some X Windows stuff once upon a time.
Having a degree has got nothing to do with C/C++, actually. What matters is stuff like big-O estimation, data structures, or even the mathematical background. For example, linear algebra turns out to be very useful, even in contexts that seemingly have nothing to do with it (e.g. search engines).
For example, a typical error that a good coder without any theoretical knowledge might commit is to try to solve an NP-complete problem with an exact algorithm rather than an approximation.
Now, why do universities teach you C/C++? Because it lets you see how it all works "under the hood". You get the opportunity to see how the call stack works, how memory management works, how pointers work. Of course you don't need that knowledge to use most modern languages. But you need it to understand how their "magic" works. E.g. you can't understand how a GC works if you have no idea about pointers and memory allocation.
I've often asked this question (of myself). I think the more general version is, "how can I call myself a programmer if I don't know how to kick around a language that doesn't have automatic garbage collection, with pointers and all that 'complex' stuff?" I've never learned C++ except to do a few HelloWorlds, so my answer is limited by that lack:
I think that the feeling that you need to learn C++ (or assembler, really) comes from the feeling that you're always working on someone else's abstractions: the "rocket scientists" who write the JVM, CLR, whatever. So if you can get to a lower level language, you'll really know what you're talking about. I think this is quite wrong. One is always building on a set of abstractions: even Assembler is translated into binary, which can be learned as well. And beyond that, you still couldn't make a computer out of firewood, even if you had a pair of pliers and a bit of titanium.
In my experience as a corporate trainer in software dev (in Java, mostly), the best people were not those who knew C++, but rather those who took the language they were working in as an independent space for "play". Although memory management comes up all the time in C# and Java, you never have to think about anything beyond freeing your object from references (and a few other cliche places, like using streams instead of throwing around huge objects in memory). Pointers and all that stuff do not help you there, except as a rite of passage (and a good one, I'm sure).
So in summary, work in the language you're in and branch out into as many relevant things as possible. These days I find myself dipping into JavaScript, though the APIs are supposed to make this unnecessary, and doing some stuff in Fireworks while I mess with CSS by hand. And this is all in addition to the development I'm really doing in RoR, PHP and ActionScript. So my point is: focus on abstractions that you need, because they're more likely to be relevant than the lower-level stuff that underlies your platform.
Edit: I made some slight changes in response to jalf's comments, thanks.
I have a first-class Software Engineering degree and work for a large console manufacturer, developing a game engine in a team of programmers, all of whom program across a wide range of languages from asm to C++ to C# to Lua and know the hardware inside out.
I would say that 5% of my degree was useful and that by far the most important trait for furthering my career has been enthusiasm and self-development.
In fact many of the colleagues I've worked with haven't had a degree and on average have probably been the better ones.
I'd say this is because they've had to replace that piece of paper from a university with actual working code that they've developed in their own spare time, learning the skills off their own back rather than being spoon-fed.
My driving instructor used to tell me that I would only start learning how to drive after I passed my test, i.e. you only really learn from the practical application of the basics. A CS degree gives you the basics, which, if you've had a job programming in any of the major languages for 6 months, you will already have. A degree just opens up doors that you may not have otherwise - it doesn't help that much once inside the door.
By the sounds of it, knowing how the software interacts with the hardware is the most important area for you at the moment; only then does the 'mystery' or 'magic' really disappear and you can be confident of what you're talking about elsewhere. Learning C and C++ will undoubtedly help in this respect, as will knowing an API like OpenGL.
But I'd say the most important thing is to find something you have an interest in and code that. If you have real enthusiasm for it, you will naturally learn more low-level information and become a better programmer - if indeed that is your definition of being a better programmer!
I've been working as a developer with no degree for almost 15 years now. I started with Ada and moved quickly into C/C++, but it's been my experience that there will always be some language that you "have to learn." If it's not C++, it will be C# or C or Java or Lisp. My advice is to make sure you're solid on the basics that apply to any language (my best friend as a dev with no degree was the CLR book), and you should be able to move relatively easily between languages and frameworks.
You don't absolutely have to learn C/C++, but both languages will teach you to think about how your software interacts with the underlying OS and hardware, which is an essential skill. You say that you already know about pointers, memory management and so on, which is great. Many programmers without a CS degree lack this important knowledge.
Another good reason to learn C/C++ is that there's a lot of code written in these languages and a good way to learn more about programming is to read other people's code. If you're interested in writing low level code like drivers, OS, file systems and the like C/C++ is pretty much the only way to go.
Do you have to learn it extensively? I expect not.
However, it's best to always be learning things that help you look at programming from a different perspective. Learning C or C++ is worth it for the insight into how things work at a lower level. For C and C++ programmers, the same thing might be accomplished by learning assembly. Most people won't use assembly in a project, but knowing how it works can be very helpful from time to time.
My recommendation is always to learn as much as you can. If you're not working on a C++ project in the near future I wouldn't be too worried about learning the ins and outs, but it's always good to be able to look at problems from another angle and learning new languages is one way to do that.
Today for the majority of applications, C and C++ can be viewed as an academic exercise: "How can we write programs without garbage collection?"
The answer is: you can, but it's a mostly painful experience. Most of the details of best practices in C++ are related to the lack of garbage collection.
Given the brilliant performance of modern GCs, and the general increase in computing power, even cell phones have GCs these days. And in a platform with a GC, you can always code in such a way as to limit the pressure you put on the GC.
Listen to or read SO podcast 44, where Joel plays his favorite song, Write in C.
Spolsky: Yeah, it's not paying the proper royalties to the Beatles anyway. We'll link to that from the shownotes. Awesome song, Write in C.
Atwood: That's right, Joel's favourite song. Write everything in C, because Joel does in fact write everything in C, don't you, Joel?
Spolsky: I started using a little bit of C99, the latest version of C, which lets you declare variables after you've written some statements.
...
If you don't have a professional reason (other than the good practice of self-improvement) to learn C or C++, then you should have a passionate side project planned out that you could write in C or C++. Once the going gets tough on the side project, you'll need your enthusiasm and curiosity to take you over the hump (since on a side project, you naturally don't have the motivation of pay or the de-motivation of a superior looming over you).
Also, most CS degrees are using Java as their language of choice now. This just proves the point that experience gained in the language of choice and exposure to some of the theory involved in the other classes in the degree is the main benefit for people with CS degrees, and not so much the specific language (though I think the higher they go up the abstraction scale, the worse it is for the students in the long run).
Without a practical reason for learning a programming language it is pretty hard going.
If you can think of particular problems or a specific task which the language is suited for, then the learning experience is driven by needs rather than simple academics.
I only just recently switched from VB to C# (1 month ago). While that is not as significant a difference as a switch from C# to C, because I switched for a particular reason I found it much easier to learn. I had dabbled previously without a specific problem to solve; needless to say, I switched back.
If you have a different style of learning, as in self-taught, then my recommendation for becoming a better programmer is to research topics within your domain. From bottom to top, slowly climb up the ladder. There is a fair variety of different programmers; no one will excel at everything, so don't start off with that expectation in mind.
Best of luck to you.
C++ is just a programming language. What you don't have that other students (if they paid attention in class) have is the deeper understanding that comes through studying concepts.
Being a programmer is not and should not be the end goal of any CS graduate. However it is as far as most people get without such a degree.
Here is an analogy: An engineer and an architect both at some point learn to draft buildings using CAD. Also, someone completely untrained can come in and start work using CAD and be very effective. This is a good career and it pays well, but for both the engineer and the architect it is not where you want to be when you are 30.
One value of knowing C is that many other languages including C#, Java, C++, JavaScript, Python, and PHP have their roots in C syntax.
Another value, and arguably more important, is that it will build your confidence. Programmers are a confident group and very optimistic (you have to be confident to think that you can write the equivalent of a 1000 page book without a single spelling or grammatical error). And confidence in your ability to learn and effectively use any language will grow considerably with a pure C application or two under your belt.
So write a non trivial program in C; something that at least reads and writes files, allocates and deallocates memory, and manages a data structure like a queue or binary tree.
Your confidence will thank you.
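For what it's worth, the kind of exercise meant above is small: a hand-rolled queue with manual allocation (sketched here in C-flavoured C++ with new/delete; pure C would use malloc/free).

    #include <cstdio>

    struct Node {
        int value;
        Node* next;
    };

    struct Queue {
        Node* head = nullptr;
        Node* tail = nullptr;
    };

    void push(Queue& q, int v) {
        Node* n = new Node{v, nullptr};          // you own this now
        if (q.tail) q.tail->next = n; else q.head = n;
        q.tail = n;
    }

    bool pop(Queue& q, int& out) {
        if (!q.head) return false;
        Node* n = q.head;
        out = n->value;
        q.head = n->next;
        if (!q.head) q.tail = nullptr;
        delete n;                                // forget this and you have a leak
        return true;
    }

    int main() {
        Queue q;
        for (int i = 1; i <= 3; ++i) push(q, i);
        for (int v = 0; pop(q, v); ) std::printf("%d\n", v);
    }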

What can C++ do that is too hard or messy in any other language?

I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here, please, if you have strong opinions about not liking C++ don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it.
I'm particularly interested in aspects of C++ that are little known, or underutilised.
RAII / deterministic finalization. No, garbage collection is not just as good when you're dealing with a scarce, shared resource.
Unfettered access to OS APIs.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language.
Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
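As a hedged sketch of the MaterialVolume = DesignSurface - GroundSurface style mentioned above (a hypothetical Surface type, far simpler than real surface modelling): operator overloading lets domain code read like the domain.

    #include <cstddef>
    #include <vector>

    struct Surface {
        std::vector<double> heights;   // heights on a grid
    };

    // Element-wise difference of two surfaces.
    Surface operator-(const Surface& a, const Surface& b) {
        Surface diff;
        diff.heights.reserve(a.heights.size());
        for (std::size_t i = 0; i < a.heights.size(); ++i)
            diff.heights.push_back(a.heights[i] - b.heights[i]);
        return diff;
    }

    int main() {
        Surface DesignSurface{{10.0, 12.0}}, GroundSurface{{9.0, 9.5}};
        Surface MaterialVolume = DesignSurface - GroundSurface;   // reads like the problem statement
        return MaterialVolume.heights[0] > 0.0 ? 0 : 1;
    }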
Shooting oneself in the foot.
No other language offers such a creative array of tools. Pointers, multiple inheritance, templates, operator overloading and a preprocessor.
A wonderfully powerful language that also provides abundant opportunities for foot shooting.
Edit: I apologize if my lame attempt at humor has offended some. I consider C++ to be the most powerful language that I have ever used -- with abilities to code at the assembly language level when desired, and at a high level of abstraction when desired. C++ has been my primary language since the early '90s.
My answer was based on years of experience of shooting myself in the foot. At least C++ allows me to do so elegantly.
Deterministic object destruction leads to some magnificent design patterns. For instance, while RAII is not as general a technique as garbage collection, it leads to some impressive capabilities which you cannot get with GC.
C++ is also unusual in that its compile-time machinery (templates, on top of the preprocessor) is Turing-complete. This allows you to prefer (as in the opposite of defer) a lot of code tasks to compile time instead of run time. For instance, in real code you might have an assert() statement to test for a never-happen. The reality is that it will sooner or later happen... and happen at 3:00 am when you're on vacation. A C++ compile-time assert (static_assert today, template tricks before that) does the same test at compile time. Compile-time asserts fail between 8:00 am and 5:00 pm while you're sitting in front of the computer watching the code build; run-time asserts fail at 3:00 am when you're asleep in Hawai'i. It's pretty easy to see the win there.
In most languages, strategy patterns are done at run time and throw exceptions in the event of a type mismatch. In C++, strategies can be selected at compile time through templates and can be guaranteed type-safe.
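A minimal hedged sketch of the compile-time assert idea, using static_assert (C++11; the same effect needed template tricks before that). The invariant is checked while the code builds, not at 3:00 am in production.

    #include <cstdint>

    // Hypothetical wire-format header: its size is part of the contract.
    struct PacketHeader {
        std::uint32_t id;
        std::uint16_t flags;
        std::uint16_t length;
    };
    static_assert(sizeof(PacketHeader) == 8,
                  "wire format changed: PacketHeader must stay 8 bytes");

    constexpr int kMaxRetries = 5;
    static_assert(kMaxRetries > 0, "retry loop would never run");

    int main() {}   // nothing to do at run time; the checks already happened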
Write inline assembly (MMX, SSE, etc.).
Deterministic object destruction. I.e. real destructors. Makes managing scarce resources easier. Allows for RAII.
Easier access to structured binary data. It's easier to cast a memory region as a struct than to parse it and copy each value into a struct.
Multiple inheritance. Not everything can be done with interfaces. Sometimes you want to inherit actual functionality too.
I think I'm just going to praise C++ for its ability to use templates to capture expressions and evaluate them lazily when needed. For those not knowing what this is about, here is an example.
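The original link is gone, so here is a minimal hedged sketch of the idea (a toy lazy vector sum, nowhere near a real expression-template library): operator+ only builds a description of the work, and the work happens once, element by element, in operator=.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Expression node: represents l + r without computing anything yet.
    template <class L, class R>
    struct Sum {
        const L& l;
        const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec {
        std::vector<double> data;
        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Evaluation happens only here, one element at a time, with no temporaries.
        template <class Expr>
        Vec& operator=(const Expr& e) {
            data.resize(e.size());
            for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    // These overloads build the expression tree instead of doing the work.
    template <class R>
    Sum<Vec, R> operator+(const Vec& l, const R& r) { return {l, r}; }
    template <class L1, class R1, class R>
    Sum<Sum<L1, R1>, R> operator+(const Sum<L1, R1>& l, const R& r) { return {l, r}; }

    int main() {
        Vec a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, result;
        result = a + b + c;                 // builds Sum<Sum<Vec,Vec>,Vec>; evaluated in operator=
        std::printf("%f\n", result[0]);     // 12
    }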
Template mixins provide reuse that I haven't seen elsewhere. With them you can build up a large object with lots of behaviour as though you had written the whole thing by hand. But all these small aspects of its functionality can be reused, it's particularly great for implementing parts of an interface (or the whole thing), where you are implementing a number of interfaces. The resulting object is lightning-fast because it's all inlined.
Speed may not matter in many cases, but when you're writing component software, and users may combine components in unthought-of complicated ways to do things, the speed of inlining and C++ seems to allow much more complex structures to be created.
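A small hedged sketch of the mixin idea (hypothetical pieces): each fragment of behaviour is written once and stacked onto a core class through its base-class template parameter; the result is one flat, fully inlined object.

    #include <iostream>

    struct Core {
        int value = 0;
    };

    // Reusable behaviour fragments, parameterised on whatever they extend.
    template <class Base>
    struct WithCounter : Base {
        int count = 0;
        void tick() { ++count; }
    };

    template <class Base>
    struct WithLogging : Base {
        void log(const char* msg) const { std::cout << "[log] " << msg << '\n'; }
    };

    // Compose a "large object" out of small, independently reusable pieces.
    using Widget = WithLogging<WithCounter<Core>>;

    int main() {
        Widget w;
        w.value = 42;    // from Core
        w.tick();        // from WithCounter
        w.log("ready");  // from WithLogging
        std::cout << w.value << ' ' << w.count << '\n';
    }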
Absolute control over the memory layout, alignment, and access when you need it. If you're careful enough you can write some very cache-friendly programs. For multi-processor programs, you can also eliminate a lot of slow downs from cache coherence mechanisms.
(Okay, you can do this in C, assembly, and probably Fortran too. But C++ lets you write the rest of your program at a higher level.)
This will probably not be a popular answer, but I think what sets C++ apart are its compile-time capabilities, e.g. templates and #define. You can do all sorts of text manipulation on your program using these features, much of which has been abandoned in later languages in the name of simplicity. To me that's way more important than any low-level bit fiddling that's supposedly easier or faster in C++.
C#, for instance, doesn't have a real macro facility. You can't #include another file directly into the source, or use #define to manipulate the program as text. Think about any time you had to mechanically type repetitive code and you knew there was a better way. You may even have written a program to generate code for you. Well, the C++ preprocessor automates all of these things.
The "generics" facility in C# is similarly limited compared to C++ templates. C++ lets you apply the dot operator to a template type T blindly, calling (for example) methods that may not exist, and checks-for-correctness are only applied once the template is actually applied to a specific class. When that happens, if all the assumptions you made about T actually hold, then your code will compile. C# doesn't allow this... type "T" basically has to be dealt with as an Object, i.e. using only the lowest common denominator of operations available to everything (assignment, GetHashCode(), Equals()).
C# has done away with the preprocessor, and real generics, in the name of simplicity. Unfortunately, when I use C#, I find myself reaching for substitutes for these C++ constructs, which are inevitably more bloated and layered than the C++ approach. For example, I have seen programmers work around the absence of #include in several bloated ways: dynamically linking to external assemblies, re-defining constants in several locations (one file per project) or selecting constants from a database, etc.
As Ms. Crabapple from The Simpsons once said, this is "pretty lame, Milhouse."
In terms of Computer Science, these compile-time features of C++ enable things like call-by-name parameter passing, which is known to be more powerful than call-by-value and call-by-reference.
Again, this is perhaps not the popular answer- any introductory C++ text will warn you off of #define, for example. But having worked with a wide variety of languages over many years, and having given consideration to the theory behind all of this, I think that many people are giving bad advice. This seems especially to be the case in the diluted sub-field known as "IT."
Passing POD structures across processes with minimum overhead. In other words, it allows us to easily handle blobs of binary data.
C# and Java force you to put your 'main()' function in a class. I find that weird, because it dilutes the meaning of a class.
To me, a class is a category of objects in your problem domain. A program is not such an object. So there should never be a class called 'Program' in your program. This would be equivalent to a mathematical proof using a symbol to notate itself -- the proof -- alongside symbols representing mathematical objects. It'll be just weird and inconsistent.
Fortunately, unlike C# and Java, C++ allows global functions. That lets your main() function exist outside any class. Therefore C++ offers a simpler, more consistent and perhaps truer implementation of the object-oriented idiom. Hence, this is one thing C++ can do that C# and Java cannot.
I think that operator overloading is a quite nice feature. Of course it can be very much abused (like in Boost lambda).
Tight control over system resources (esp. memory) while offering powerful abstraction mechanisms optionally. The only language I know of that can come close to C++ in this regard is Ada.
C++ provides complete control over memory and, as a result, makes the flow of program execution much more predictable.
Not only can you say precisely when allocations and deallocations of memory occur, you can define your own heaps, have multiple heaps for different purposes, and say precisely where in memory data is allocated (a small sketch follows this answer). This is frequently useful when programming on embedded/real-time systems, such as games consoles, cell phones, mp3 players, etc., which:
have strict upper limits on memory that are easy to reach (contrast with a PC, which just gets slower as you run out of physical memory)
frequently have a non-homogeneous memory layout. You may want to allocate objects of one type in one piece of physical memory, and objects of another type in another piece.
have real time programming constraints. Unexpectedly calling the garbage collector at the wrong time can be disastrous.
AFAIK, C and C++ are the only sensible option for doing this kind of thing.
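A hedged sketch of the "say precisely where data lives" point (a toy bump allocator; a real engine would use proper arena allocators with per-allocation alignment): the memory region is chosen up front and objects are constructed into it with placement new.

    #include <cstddef>
    #include <cstdio>
    #include <new>

    // A fixed region reserved up front -- think fast on-chip memory on a console.
    alignas(std::max_align_t) static unsigned char arena[1024];
    static std::size_t arena_used = 0;

    // Bump allocator: deterministic, no GC pauses, no surprises at run time.
    // (Ignores per-allocation alignment for brevity.)
    void* arena_alloc(std::size_t size) {
        if (arena_used + size > sizeof arena) return nullptr;
        void* p = arena + arena_used;
        arena_used += size;
        return p;
    }

    struct Particle {
        float x, y, z;
    };

    int main() {
        // Construct the object exactly where we decided it should live.
        void* slot = arena_alloc(sizeof(Particle));
        Particle* p = new (slot) Particle{1.0f, 2.0f, 3.0f};
        std::printf("%f\n", p->x);
        p->~Particle();   // placement new means explicit destruction, no delete
    }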
Well, to be quite honest, you can do just about anything if you're willing to write enough code.
So to answer your question: no, there is nothing C++ can do that you can't do in another language. It's just a question of how much patience you have and whether you're willing to devote the long sleepless nights to get it to work.
There are things that C++ wrappers make easy to do (because they can read the header files), like Office development. But again, it's because someone wrote lots of code to "wrap" it for you in an RCW or "Runtime Callable Wrapper".
EDIT: You also realize this is a loaded question.