Practical uses of exploiting RTTI in C++

Having finished the first volume of Thinking in C++ by Bruce Eckel, I have started reading the second volume. The chapter devoted to RTTI (Run-Time Type Identification) amazes me the most. I have been reading about typeid, dynamic_cast, etc.
But a question keeps floating in my mind: are there any practical uses of exploiting RTTI through the operators mentioned, i.e. some examples from real-life projects? Also, what limitations were encountered that made its use necessary?

dynamic_cast can be useful for adding optional functionality:
void foo(ICoolStuff *cs)
{
    // See whether this object also supports the optional interface.
    auto ecs = dynamic_cast<IEvenCoolerStuff*>(cs);
    if (ecs != nullptr)
    {
        ecs->DoEvenCoolerStuff();
    }
    cs->DoCoolStuff();
}
When you design from scratch it might be possible to put DoEvenCoolerStuff into ICoolStuff and have empty implementations in the classes which don't support it, but that's often not feasible when you need to change existing code.
Another use is in messaging-system implementations, where one might use dynamic_cast to distinguish the messages one is interested in. More generally speaking, you might need it when faced with the expression problem.
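As a hedged illustration of that message-dispatch idea (the Message/TextMessage types are invented for the example, not taken from any particular framework):
#include <iostream>
#include <string>

class Message
{
public:
    virtual ~Message() = default;
};

class TextMessage : public Message
{
public:
    std::string text;
};

void handle(Message *m)
{
    // React only to the message types this handler cares about.
    if (auto *tm = dynamic_cast<TextMessage*>(m))
    {
        std::cout << "text: " << tm->text << '\n';
    }
    // Other message types fall through untouched.
}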

The most common example of RTTI in production code that I have seen in my travels is dynamic_cast, but it is almost always used as a band-aid for a poor design.
dynamic_cast is useful primarily for polymorphic classes, and then for going from base to derived. But think about it. If you have a base pointer to a properly designed polymorphic class, why would you ever need a pointer to a derived type? You should, in theory, only ever need to call the virtual functions, and have the actual instantiation deal with the implementation details.
Now, that being said, there are cases where even though dynamic_cast is a band-aid, it is still the lesser of two evils. This is particularly true when "fixing" the broken design would imply a large maintenance project and would have no performance implications. Suppose you have a 1 MLOC application, and fixing something that is academically broken would mean having to touch 100k lines of code. If there is no performance reason to make that change, then you are fixing it purely for the sake of fixing it, but you run the risk of creating dozens or hundreds of new bugs. It might not be worth it.


Template abuse?

I wanted to transform the dynamic_casts from base class to derived class, from this style:
auto derived = dynamic_cast<Derived*>(object);
To something more compact. For that I have added in Base class the following template:
template<typename T>
T As() { return dynamic_cast<T>(this); }
So now the previous statement would be rewritten as
auto derived = object->As<Derived*>();
I like this style more, but I know there might be readability issues (subjective), or perhaps increased memory usage of the class? If I am correct, this will generate a function for each derived type I cast to, and that number can potentially be large (100 different derived classes).
Should I just stick to plain dynamic_cast?
If you read material from a number of experts who have participated in the design of C++ (Stroustrup, Sutter, the list goes on) you will find that dynamic_cast (and all the _casts) are verbose and clumsy for the programmer BY DESIGN.
Where at all possible, it is considered best to AVOID using them. While all of the _cast operators have their place (i.e. there are circumstances in which they are genuinely the best solution to a problem) they are also blunt instruments that can be used to work around problems due to bad design. Unfortunately, given a choice, a lot of programmers will reach for such blunt instruments rather than applying a bit more effort to learn appropriate techniques, and to clean up their design - which has benefits such as making the code easier to get working right, and easier to maintain.
dynamic_cast is, arguably, the worst of the _cast operators, since it almost invariably introduces an overhead at run time. If it is used to work around deficiencies due to bad design, there is a distinct run-time penalty.
Making the syntax clumsy and verbose encourages a programmer to find alternatives (e.g. design types and operations on types, in a way that avoids the need for such conversions).
What you're asking for is a way to allow programmers to use dynamic_cast easily and with less thought. That will encourage bad design, by allowing a programmer to easily use the _cast operators to work around design problems, when they would often be better off applying a bit more effort to avoid a need for such conversions in the first place. There is plenty of information available about techniques that can be used to avoid use of operations like dynamic_cast.
So, yes, if you really need to use such conversions, I suggest you stick to use of dynamic_cast.
Better yet, you might want to also apply effort to learn design techniques and idioms that reduce how often you need to use it.

C++: inheritance without virtuality

I wonder if what I'm currently doing is a shame for C++, or if it is OK.
I work on code for computational purposes. For some classes, I use a normal inheritance scheme with virtuality/polymorphism. But I need some classes to do intensive computation, and it would be great to avoid the overhead due to virtuality.
Basically, I want to use these classes without pointers or indirection: inheritance is here only to avoid a lot of copy/paste of code (the source file of the base class is about 60 KB, which is a lot of code). So no virtual functions, and no virtual destructor.
I wonder if this is perfectly OK from a C++ point of view, or if it can create side effects (the classes concerned will be used a lot in the program).
Thank you very much.
Using polymorphism in C++ is neither good nor bad. Polymorphism serves a purpose, as does a lack of polymorphism. There is nothing wrong with using inheritance without using polymorphism on its own.
Since polymorphism serves a purpose, and the lack of polymorphism also serves a purpose, you should design your classes with those purposes in mind. If, for example, you need runtime binding of behavior to class instances, you need polymorphism.
That all being said, there are right and wrong reasons for choosing one approach over the other. If you are designing your classes without polymorphism strictly because you want to "avoid overhead" that is likely a wrong reason. This is an instance of premature optimization so long as you are making design changes or decisions without having profiled your code and proved that polymorphism is an actual problem.
Design by architectural requirements first. Later go back and refactor if the design proves to be non-performant.
I would rephrase the question:
What does inheritance bring that composition could not achieve, if you eschew polymorphism?
If the answer is nothing, which I suspect, then perhaps inheritance is not required in the first place.
Not using virtual members/inheritance is perfectly OK. C++ is designed to serve a vast audience, and it doesn't restrict anyone to a particular paradigm.
You can use C++ to write procedural, generic, object-oriented code, or any mix of them. Just try to make the best of it.
I wonder if what I'm currently doing is a shame for C++, or if it is OK.
Not at all.
Rather, imposing an OO design you don't need, just for the sake of it, would be a shame.
Basically, I want to use these classes without pointers or indirection ...
In fact, you are going in the right direction. Raw pointers, arrays, and other such low-level features are better left to advanced, specialised code; prefer std::shared_ptr, std::vector, and the other standard library containers instead.
Basically, you are using inheritance without polymorphism. And that's ok.
Object-oriented programming has other features besides polymorphism. If you can benefit from them, just use them.
In general, it is not a good idea to use inheritance merely to reuse code. Inheritance is rather meant for code that was designed to use your base class. I would suggest a different approach to the problem: consider some of the alternatives, such as composition, moving the functionality into free functions rather than a base class, or static polymorphism (through the use of templates).
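As a hedged illustration of the static-polymorphism alternative, a minimal CRTP sketch with invented names; the shared code lives in the base, and there is no virtual dispatch:
#include <cmath>

// The base provides the shared code once; the derived class supplies its
// piece at compile time, so no vtable or virtual call is involved.
template <typename Derived>
class SolverBase
{
public:
    double step(double x)
    {
        return std::sqrt(static_cast<Derived*>(this)->evaluate(x));
    }
};

class FastSolver : public SolverBase<FastSolver>
{
public:
    double evaluate(double x) { return x * x + 1.0; }
};

// Usage: FastSolver s; double y = s.step(2.0);   // resolved statically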
It's not a performance problem until you can prove it.
Check out that answer and the "Fastest possible delegates" article.

Is the PIMPL idiom really used in practice?

I am reading the book "Exceptional C++" by Herb Sutter, and in that book I have learned about the PIMPL idiom. Basically, the idea is to create a structure for the private objects of a class and dynamically allocate them to decrease the compilation time (and also hide the private implementations in a better manner).
For example:
class X
{
private:
    C c;
    D d;
};
could be changed to:
class X
{
private:
    struct XImpl;
    XImpl* pImpl;
};
and, in the .cpp file, the definition:
struct X::XImpl
{
    C c;
    D d;
};
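For completeness, a hedged sketch of the forwarding that would also live in the .cpp, assuming for illustration that X declares a constructor, a destructor, and a member function f() (none of which are in the book's snippet):
X::X() : pImpl(new XImpl) {}

X::~X()
{
    delete pImpl;             // must live here, where XImpl is a complete type
}

void X::f()
{
    // Public members just forward to the hidden state.
    pImpl->c.doSomething();   // hypothetical call on the hidden C member
}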
This seems pretty interesting, but I have never seen this kind of approach before, neither in the companies I have worked for nor in the open source projects whose source code I've seen. So, I am wondering whether this technique is really used in practice.
Should I use it everywhere, or with caution? And is this technique recommended to be used in embedded systems (where the performance is very important)?
So, I am wondering whether this technique is really used in practice? Should I use it everywhere, or with caution?
Of course it is used. I use it in my project, in almost every class.
Reasons for using the PIMPL idiom:
Binary compatibility
When you're developing a library, you can add/modify fields in XImpl without breaking binary compatibility with your clients (which would mean crashes!). Since the binary layout of the X class doesn't change when you add new fields to the XImpl class, it is safe to add new functionality to the library in minor version updates.
Of course, you can also add new public/private non-virtual methods to X/XImpl without breaking the binary compatibility, but that's on par with the standard header/implementation technique.
Data hiding
If you're developing a library, especially a proprietary one, it might be desirable not to disclose what other libraries or implementation techniques were used to implement the public interface of your library. Either because of intellectual property issues, or because you believe that users might be tempted to make dangerous assumptions about the implementation, or just break the encapsulation by using terrible casting tricks. PIMPL solves/mitigates that.
Compilation time
Compilation time is decreased, since only the source (implementation) file of X needs to be rebuilt when you add/remove fields and/or methods in the XImpl class (which maps to adding private fields/methods in the standard technique). In practice, this is a common operation.
With the standard header/implementation technique (without PIMPL), when you add a new field to X, every client that ever allocates X (either on the stack or on the heap) needs to be recompiled, because it must adjust the size of the allocation. Well, every client that never allocates X also needs to be recompiled, but that's just overhead (the resulting code on the client side will be the same).
What is more, with the standard header/implementation separation, XClient1.cpp needs to be recompiled even when a private method X::foo() was added to X and X.h changed, even though XClient1.cpp can't possibly call this method for encapsulation reasons! As above, this is pure overhead and is related to how real-life C++ build systems work.
Of course, recompilation is not needed when you just modify the implementation of the methods (because you don't touch the header), but that's on par with the standard header/implementation technique.
Is this technique recommended to be used in embedded systems (where the performance is very important)?
That depends on how powerful your target is. However the only answer to this question is: measure and evaluate what you gain and lose. Also, take into consideration that if you're not publishing a library meant to be used in embedded systems by your clients, only the compilation time advantage applies!
It seems that a lot of libraries out there use it to stay stable in their API, at least for some versions.
But as for all things, you should never use anything everywhere without caution. Always think before using it. Evaluate what advantages it gives you, and if they are worth the price you pay.
The advantages it may give you are:
helps in keeping binary compatibility of shared libraries
hiding certain internal details
decreasing recompilation cycles
Those may or may not be real advantages to you. Me, for instance: I don't care about a few minutes of recompilation time. End users usually don't either, as they compile the project once, from scratch.
Possible disadvantages are (also here, depending on the implementation and whether they are real disadvantages for you):
Increase in memory usage due to more allocations than with the naïve variant
increased maintenance effort (you have to write at least the forwarding functions)
performance loss (the compiler may not be able to inline stuff as it is with a naïve implementation of your class)
So carefully give everything a value, and evaluate it for yourself. For me, it almost always turns out that using the PIMPL idiom is not worth the effort. There is only one case where I personally use it (or at least something similar):
My C++ wrapper for the Linux stat call. Here the struct from the C header may be different, depending on what #defines are set. And since my wrapper header can't control all of them, I only #include <sys/stat.h> in my .cxx file and avoid these problems.
I agree with all the others about the benefits, but let me point out a limitation: it doesn't work well with templates.
The reason is that template instantiation requires the full definition to be available where the instantiation takes place. (And that's the main reason you don't see template methods defined in .cpp files.)
You can still refer to templatised subclasses, but since you have to include them all, every advantage of "implementation decoupling" at compile time (avoiding including all platform-specific code everywhere, shortening compilation) is lost.
It is a good paradigm for classic OOP (inheritance based), but not for generic programming (specialization based).
Other people have already provided the technical up/downsides, but I think the following is worth noting:
First and foremost, don't be dogmatic. If PIMPL works for your situation, use it - don't use it just because "it's better OO since it really hides implementation", etc. Quoting the C++ FAQ:
encapsulation is for code, not people (source)
Just to give you an example of open source software where it is used and why: OpenThreads, the threading library used by the OpenSceneGraph. The main idea is to remove from the header (e.g., <Thread.h>) all platform-specific code, because internal state variables (e.g., thread handles) differ from platform to platform. This way one can compile code against your library without any knowledge of the other platforms' idiosyncrasies, because everything is hidden.
I would mainly consider PIMPL for classes exposed as an API to other modules. This has many benefits: changes made in the PIMPL implementation do not force recompilation of the rest of the project. Also, for API classes it promotes binary compatibility (changes in a module's implementation do not affect clients of that module; they don't have to be recompiled, as the new implementation has the same binary interface, the one exposed by the PIMPL).
As for using PIMPL for every class, I would exercise caution, because all those benefits come at a cost: an extra level of indirection is required in order to access the implementation methods.
I think this is one of the most fundamental tools for decoupling.
I was using PIMPL (and many other idioms from Exceptional C++) on an embedded project (a set-top box).
The particular purpose of this idiom in our project was to hide the types the XImpl class uses.
Specifically, we used it to hide implementation details for different hardware, where different headers would be pulled in. We had one implementation of the XImpl class for one platform and a different one for the other. The layout of class X stayed the same regardless of the platform.
I used to use this technique a lot in the past but then found myself moving away from it.
Of course it is a good idea to hide the implementation detail away from the users of your class. However, you can also do that by having users of the class work through an abstract interface, with the implementation detail living in the concrete class.
The advantages of pImpl are:
Assuming there is just one implementation of this interface, it is clearer not to split it into an abstract class and a concrete implementation.
If you have a suite of classes (a module) such that several classes access the same "impl" but users of the module will only use the "exposed" classes.
No v-table if this is assumed to be a bad thing.
The disadvantages I found with pImpl (where an abstract interface works better):
Whilst you may have only one "production" implementation, with an abstract interface you can also create a "mock" implementation that works in unit testing.
(The biggest issue.) Before the days of unique_ptr and move semantics, you had restricted choices for how to store the pImpl. With a raw pointer you had issues with your class being non-copyable. An old auto_ptr wouldn't work with a forward-declared class (not on all compilers, anyway). So people started using shared_ptr, which was nice in making your class copyable, but of course both copies had the same underlying shared_ptr, which you might not expect (modify one and both are modified). So the solution was often to use a raw pointer for the inner one, make the class non-copyable, and return a shared_ptr to that instead. So two calls to new. (Actually three, given that the old shared_ptr gave you a second allocation.)
Technically not really const-correct as the constness isn't propagated through to a member pointer.
In general, I have therefore moved away over the years from pImpl and towards using abstract interfaces instead (with factory methods to create instances).
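For reference, a hedged sketch of how the storage question above is commonly settled today with std::unique_ptr (C++11; std::make_unique is C++14), which is not the approach this answer ultimately recommends; the one subtlety is that the special member functions must be defined out of line, where XImpl is a complete type:
// X.h
#include <memory>

class X
{
public:
    X();
    ~X();                        // defined in X.cpp, where XImpl is complete
    X(X&&) noexcept;
    X& operator=(X&&) noexcept;
private:
    struct XImpl;
    std::unique_ptr<XImpl> pImpl;
};

// X.cpp
struct X::XImpl { /* private data lives here */ };

X::X() : pImpl(std::make_unique<XImpl>()) {}
X::~X() = default;
X::X(X&&) noexcept = default;
X& X::operator=(X&&) noexcept = default;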
As many others have said, the PIMPL idiom allows one to achieve complete information hiding and compilation independency, unfortunately at the cost of a performance loss (an additional pointer indirection) and additional memory (the member pointer itself). The additional cost can be critical in embedded software development, particularly in scenarios where memory must be economized as much as possible.
Using C++ abstract classes as interfaces would lead to the same benefits at the same cost.
This actually shows a big deficiency of C++: without resorting to C-like interfaces (global functions with an opaque pointer as a parameter), it is not possible to have true information hiding and compilation independency without additional resource drawbacks. This is mainly because the declaration of a class, which must be included by its users, exports not only the interface of the class (the public methods) needed by the users, but also its internals (the private members), which the users do not need.
Here is an actual scenario I encountered, where this idiom helped a great deal. I recently decided to support DirectX 11, as well as my existing DirectX 9 support, in a game engine.
The engine already wrapped most DX features, so none of the DX interfaces were used directly; they were just defined in the headers as private members. The engine uses DLL files as extensions, adding keyboard, mouse, joystick, and scripting support, as well as many other extensions. While most of those DLLs did not use DX directly, they required knowledge of and linkage to DX simply because they pulled in headers that exposed DX.
In adding DX 11, this complexity would have increased dramatically, and unnecessarily. Moving the DX members into a PIMPL, defined only in the source file, eliminated this imposition.
On top of this reduction of library dependencies, my exposed interfaces became cleaner as I moved private member functions into the PIMPL, exposing only front facing interfaces.
One benefit I can see is that it allows the programmer to implement certain operations in a fairly fast manner:
X( X&& move_semantics_are_cool ) : pImpl(nullptr)
{
    this->swap(move_semantics_are_cool);   // steal the other object's implementation
}

X& swap( X& rhs )
{
    std::swap( pImpl, rhs.pImpl );
    return *this;
}

X& operator=( X&& move_semantics_are_cool )
{
    return this->swap(move_semantics_are_cool);
}

X& operator=( const X& rhs )
{
    X temporary_copy(rhs);                 // assumes a deep-copying copy constructor
    return this->swap(temporary_copy);
}
PS: I hope I'm not misunderstanding move semantics.
It is used in practice in a lot of projects. Its usefulness depends heavily on the kind of project. One of the more prominent projects using this is Qt, where the basic idea is to hide implementation or platform-specific code from the user (other developers using Qt).
This is a noble idea, but there is a real drawback to it: debugging.
As long as the code hidden in private implementations is of premium quality, this is all well and good. But if there are bugs in there, the user/developer has a problem, because all they have is a dumb pointer to a hidden implementation, even if they have the implementation's source code.
So as in nearly all design decisions there are pros and cons.
I thought I would add an answer because although some authors hinted at this, I didn't think the point was made clear enough.
The primary purpose of PIMPL is to solve the N*M problem. This problem may have other names in other literature; a brief summary follows.
You have some kind of inheritance hierarchy where adding a new subclass to the hierarchy would require you to implement N or M new methods.
This is only an approximate hand-wavey explanation, because I only recently became aware of this and so I am by my own admission not yet an expert on this.
Discussion of existing points made
However I came across this question, and similar questions a number of years ago, and I was confused by the typical answers which are given. (Presumably I first learned about PIMPL some years ago and found this question and others similar to it.)
Enables binary compatibility (when writing libraries)
Reduces compile time
Hides data
Taking into account the above "advantages", none of them are a particularly compelling reason to use PIMPL, in my opinion. Hence I have never used it, and my program designs suffered as a consequence because I discarded the utility of PIMPL and what it can really be used to accomplish.
Allow me to comment on each to explain:
1.
Binary compatibility is only of relevance when writing libraries. If you are compiling a final executable program, then this is of no relevance, unless you are using someone else's (binary) libraries. (In other words, you do not have the original source code.)
This means this advantage is of limited scope and utility. It is only of interest to people who write libraries which are shipped in proprietary form.
2.
I don't personally consider this to be of much relevance in the modern day, when it is rare to be working on projects where compile time is of critical importance. Maybe this is important to the developers of Google Chrome. The associated disadvantages, which probably increase development time significantly, more than offset this advantage. I might be wrong about this, but I find it unlikely, especially given the speed of modern compilers and computers.
3.
I don't immediately see the advantage that PIMPL brings here. The same result can be accomplished by shipping a header file and a binary object file. Without a concrete example in front of me, it is difficult to see why PIMPL is relevant here. The relevant "thing" is shipping binary object files rather than original source code.
What PIMPL actually does:
You will have to forgive my slightly hand-wavey answer. While I am not a complete expert in this particular area of software design, I can at least tell you something about it. This information is mostly repeated from Design Patterns, whose authors call it the "Bridge Pattern", a.k.a. Handle/Body.
In this book, the example of writing a Window manager is given. The key point here is that a window manager can implement different types of windows as well as different types of platform.
For example, one may have a
Window
Icon window
Fullscreen window with 3d acceleration
Some other fancy window
These are types of windows which can be rendered
as well as
Microsoft Windows implementation
OS X platform implementation
Linux X Window Manager
Linux Wayland
These are different types of rendering engines, with different OS calls and possibly fundamentally different functionality as well
The list above is analogous to the one given in another answer, where another user described writing software which should work with different kinds of hardware for something like a DVD player. (I forget exactly what the example was.)
I give slightly different examples here compared to what is written in the Design Patterns book.
The point being that there are two separate kinds of things which should be implemented using an inheritance hierarchy, yet a single inheritance hierarchy does not suffice here. (The N*M problem: the complexity scales like the product of the sizes of the two bullet-point lists, which is not feasible for a developer to implement.)
Hence, using PIMPL, one separates the types of windows from the platform implementations and provides a pointer to an instance of an implementation class.
So PIMPL:
Solves the N*M problem
Decouples two fundamentally different things which are being modelled using inheritance, so that there are two or more hierarchies rather than just one monolith
Permits runtime exchange of the exact implementation behaviour (by changing a pointer). This may be advantageous in some situations, whereas a single monolith forces static (compile-time) behaviour selection rather than runtime selection
There may be other ways to implement this, for example with multiple inheritance, but this is usually a more complicated and difficult approach, at least in my experience.
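To make that concrete, a hedged sketch of the bridge-style separation (all names invented for the example, not taken from Design Patterns verbatim): the window hierarchy varies independently of the platform hierarchy, and each Window holds a pointer to its platform implementation.
// Platform side: one hierarchy.
class WindowImpl
{
public:
    virtual ~WindowImpl() = default;
    virtual void drawRect(int x, int y, int w, int h) = 0;
};

class X11WindowImpl : public WindowImpl
{
public:
    void drawRect(int x, int y, int w, int h) override { /* X11 calls here */ }
};

// Abstraction side: a second, independent hierarchy.
class Window
{
public:
    explicit Window(WindowImpl *impl) : impl(impl) {}
    virtual ~Window() = default;
    virtual void draw() { impl->drawRect(0, 0, 100, 100); }
protected:
    WindowImpl *impl;   // the handle/body pointer; can be swapped at run time
};

class IconWindow : public Window
{
public:
    using Window::Window;
    void draw() override { impl->drawRect(0, 0, 32, 32); }
};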

Fast dynamic casting progress

A little while ago, I found a very interesting paper on a very neat performance upgrade for dynamic_cast in C++: http://www2.research.att.com/~bs/fast_dynamic_casting.pdf.
Basically, it makes dynamic_cast in C++ much faster than the traditional search of the inheritance tree. As stated in the paper, the method provides a fast, constant-time dynamic casting algorithm.
This paper was published in 2005. Now, I am wondering if the technique was ever implemented somewhere or if there are plans to implement it anywhere?
I do not know what implementations various compilers use besides GCC (which isn't linear). However, it is important to stress that the paper does not necessarily propose a method that is always faster than existing implementations for all (or even common) usage. It proposes a general solution that is asymptotically better as inheritance hierarchies grow.
However, it is rarely a good design to have large inheritance hierarchies, as they tend to force the application to become monolithic and inflexible to change. Programs with flexible design tend to only have hierarchies mostly with 2 levels, an abstract base and an implementation of runtime polymorphic roles to support the Open/Closed Principle. In these cases, walking the inheritance graph can be as simple as a single pointer dereference and compare, which can be faster than the index-sum-then-dereference-then-compare presented by Gibbs and Stroustrup.
Also, it is important to stress that it is never necessary to write a program that uses dynamic_cast unless your own business rules require it. The use of dynamic_cast is always an indication that polymorphism is not being properly used and that reuse is being compromised. If you need behavior that depends on the derived type when you hold a pointer to the base, adding a virtual method gives the clean solution. If you have a code section that does dynamic_cast-checks on types, that section of code will never "close" (in the meaning of the Open/Closed Principle), and will need to be updated for every new type added to the system. A virtual dispatch, on the other hand, is added only on new types, allowing you to remain open to expansion while closing the behaviors operating on the base type.
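To make that contrast concrete, a small hedged sketch with invented shape types: the cast-checking function must be edited for every new type, while the virtual-dispatch version stays closed.
class Shape
{
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

class Circle : public Shape
{
public:
    double r = 1.0;
    double area() const override { return 3.14159 * r * r; }
};

class Square : public Shape
{
public:
    double side = 1.0;
    double area() const override { return side * side; }
};

// Cast-checking style: must be edited for every new Shape added later.
double area_by_casting(const Shape *s)
{
    if (auto c = dynamic_cast<const Circle*>(s)) return 3.14159 * c->r * c->r;
    if (auto q = dynamic_cast<const Square*>(s)) return q->side * q->side;
    return 0.0;   // silently wrong for any type added later
}

// Virtual-dispatch style: new Shapes need no changes here.
double area_by_dispatch(const Shape *s) { return s->area(); }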
So this is really a rather academic suggestion (equating to changing a map to a hash_map algorithmically) that shouldn't have real world effects if good design is followed. If business rules forbid good design (some shops may have code barriers or code ownership issues where you cannot change existing architectures the way they need to be, nor do they allow adaptors to be built as would commonly be used for 3rd party libraries), then it is best not to make the decision on which compiler to use based on what algorithm is implemented. As always, if performance is key and you have to use a feature like dynamic_cast, profile your code. It is possible (and likely in many cases) that the tree-walking implementation is faster in practice.
See also the standards committee's review of implementations, including dynamic_cast, and a well-known look at C++ in embedded environments and good use (which mentions Gibbs and Stroustrup in passing).

For C/C++, When is it beneficial not to use Object Oriented Programming?

I find myself always trying to fit everything into the OOP methodology, when I'm coding in C/C++. But I realize that I don't always have to force everything into this mold. What are some pros/cons for using the OOP methodology versus not? I'm more interested in the pros/cons of NOT using OOP (for example, are there optimization benefits to not using OOP?). Thanks, let me know.
Of course it's very easy to explain a million reasons why OOP is a good thing. These include: design patterns, abstraction, encapsulation, modularity, polymorphism, and inheritance.
When not to use OOP:
Putting square pegs in round holes: Don't wrap everything in classes when they don't need to be. Sometimes there is no need and the extra overhead just makes your code slower and more complex.
Object state can get very complex: There is a really good quote from Joe Armstrong who invented Erlang:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Your code is already not OOP: It's not worth porting your code if your old code is not OOP. There is a quote from Richard Stallman from 1995:
Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program.
Portability with C: You may need to export a set of functions to C. Although you can simulate OOP in C by making a struct and a set of functions whose first parameter takes a pointer to that struct (see the sketch below), it isn't always natural.
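For what it's worth, a minimal sketch of that emulation with an invented Counter type: a plain struct plus free functions taking the object pointer, valid as both C and C++.
#include <stdio.h>

struct Counter { int value; };

/* "methods": free functions whose first parameter is the object */
void counter_init(struct Counter *c)       { c->value = 0; }
void counter_add(struct Counter *c, int n) { c->value += n; }
int  counter_get(const struct Counter *c)  { return c->value; }

int main(void)
{
    struct Counter c;
    counter_init(&c);
    counter_add(&c, 3);
    printf("%d\n", counter_get(&c));
    return 0;
}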
You may find more reasons in the paper entitled Bad Engineering Properties of Object-Oriented Languages.
Wikipedia's Object Oriented Programming page also discusses some pros and cons.
One school of thought with object-oriented programming is that you should have all of the functions that operate on a class as methods on the class.
Scott Meyers, one of the C++ gurus, actually argues against this in this article:
How Non-Member Functions Improve Encapsulation.
He basically says, unless there's a real compelling reason to, you should keep the function SEPARATE from the class. Otherwise the class can turn into this big bloated unmanageable mess.
Based on experiences in a previous large project, I totally agree with him.
A benefit of non-OOP functionality is that it often makes exporting your functionality to different languages easier. For example, a simple DLL containing only functions is much easier to use from C#: you can use P/Invoke to simply call the C++ functions. So in this sense it can be useful for writing extremely time-critical algorithms that fit nicely into single/few function calls.
OOP is used a lot in GUI code, computer games, and simulations. Windows should be polymorphic - you can click on them, resize them, and so on. Computer game objects should be polymorphic - they probably have a location, a path to follow, they might have health, and they might have some AI behavior. Simulation objects also have behavior that is similar, but breaks down into classes.
For most things though, OOP is a bit of a waste of time. State usually just causes trouble, unless you have put it safely in the database where it belongs.
I suggest you read Bjarne's Paper about Why C++ is not just an Object-Oriented Programming Language
Consider, for a moment, not object-orientation itself but one of the keystones of object-orientation: encapsulation.
It can be shown that change-propagation probability cannot increase with distance from the change: if A depends on B and B depends on C, and we change C, then the probability that A will change cannot be larger than the probability that B will change. If B is a direct dependency on C and A is an indirect dependency on C, then, more generally, to minimise the potential cost of any change in a system we must minimise the potential number of direct dependencies.
The ISO defines encapsulation as the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
We use encapsulation to minimise the number of potential dependencies with the highest change-propagation probability. Basically, encapsulation mitigates the ripple effect.
Thus one reason not to use encapsulation is when the system is so small or so unchanging that the cost of potential ripple effects is negligible. This is also, therefore, a case where OO might be dispensed with without costly consequences.
Well, there are several alternatives. Non-OOP code in C++ may instead be:
C-style procedural code, or
C++-style generic programming
The only advantages to the first are the simplicity and backwards-compatibility. If you're writing a small trivial app, then messing around with classes is just a waste of time. If you're trying to write a "Hello World", just call printf already. Don't bother wrapping it in a class. And if you're working with an existing C codebase, it's probably not object-oriented, and trying to force it into a different paradigm than it already uses is just a recipe for pain.
For the latter, the situation is different, in that this approach is often superior to "traditional OOP".
Generic programming gives you greater performance (among other things because you often avoid the overhead of vtables, and because with less indirection the compiler is better able to inline), better type safety (because the exact type is known rather than being hidden behind an interface), and often cleaner and more concise code as well (STL iterators and algorithms enable much of this without using a single instance of runtime polymorphism or virtual functions).
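As a hedged illustration of that point (types invented for the example): the template version resolves the call at compile time, with no vtable and easy inlining, while the virtual version pays for an indirect call on every iteration.
#include <vector>

// Runtime polymorphism: every call goes through the vtable.
struct Scorer
{
    virtual ~Scorer() = default;
    virtual int score(int x) const = 0;
};

int total_virtual(const std::vector<int>& xs, const Scorer& s)
{
    int sum = 0;
    for (int x : xs) sum += s.score(x);   // indirect call each iteration
    return sum;
}

// Generic programming: the exact type is known, calls can be inlined.
template <typename ScoreFn>
int total_generic(const std::vector<int>& xs, ScoreFn score)
{
    int sum = 0;
    for (int x : xs) sum += score(x);     // direct (typically inlined) call
    return sum;
}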
OOP is little more than an aging buzzword. A methodology that everyone misunderstood (the version supported by C++ and Java has little to do with what OOP originally meant, as in Smalltalk), and then pretended was the holy grail. There are aspects of it that are useful, certainly, but it is often not the best approach for designing an application.
Rather, express the overall logic by other means, for example generic programming, and when you need a class to encapsulate some simple concept, by all means design it according to OOP principles.
OOP is just a tool among many. The goal is not to write OOP code, but to write good code. Sometimes, the way to do this is by using OOP principles, but often, you can get better code using generic programming principles, or functional programming.
It is a very project-dependent decision. My general feeling about OOP is that it's useful for organizing large projects that involve multiple components. One area where I find OOP especially pointless is school assignments. Except for those specifically designed to teach OOP concepts, or large software-design concepts, many of my assignments, specifically those in the more algorithm-heavy classes, were best suited to a non-OOP design.
So specifically: smaller projects that are not likely to grow large, and projects that center around a single algorithm, seem to be non-OOP candidates in my book. Also, if you can write the specification as a linear set of steps, e.g. with no interactive GUI or state to maintain, that would also be an opportunity.
Of course, if you're required to use an OOP design or an OOP toolkit, or if you have well-defined 'objects' in your spec, or if you need the features of polymorphism, etc., there are plenty of reasons to use it; the above just seem to be indicators of when it would be simple not to.
Just my $0.02.
Having an Ada background, I develop in C in terms of packages containing data and their associated functions. This gives very modular code, with pieces that can be taken apart and reused on other projects. I don't feel the need to use OOP.
When I develop in Objective-C, objects are the natural container for data and code. I still develop with more or less the package concept in mind with some new cool features.
I used to be an OOP fanboy... then I realized that using functions, generics and callbacks can often make for a more elegant and change-friendly solution in C++ than classes and virtual functions.
Other big names realized it too: http://harmful.cat-v.org/software/OO_programming/
IMHO, I have a feeling that the OOP concept does not really suit the needs of Big Data, as OOP assumes all the stuff is kept in memory (the concept of objects and member variables). This always results in memory-demanding, heavy applications when OOP is used, for example, for processing big images. Instead, the simplicity of C may be combined with intensive parallel I/O, making apps more efficient and easier to implement. It is the year 2019 as I write this message... Everything may change in a year! :)
In my mind it comes down to what kind of model suits the problem at hand. It seems to me that OOP is best suited to coding GUI programs, in that the data and functionality for a graphical object are easily bundled together. Other problems (such as a web server, as an example off the top of my head) might be more easily modeled with a data-centric approach, where there's no strong advantage to having a method and its data near each other.
tl;dr depends on the problem.
I'd say the greatest benefit of C++ OOP is inheritance and polymorphism (virtual functions, etc.).
This allows for code reuse and extensibility.
C++: use OOP. C: no, with certain exceptions.
In C++ you should use OOP. It's a nice abstraction and it's the tool you are given. You either use it or leave it in the box where it can't help. You don't use the power saw for everything but I would read the manual and have it ready for the right job.
In C, it's a more difficult call. While you can certainly write arbitrarily object-oriented code in C, it's enough of a pain that you immediately find yourself fighting the language in order to use it. You may be more productive dropping the doesn't-fit-so-well design pattern and programming as C was intended to be used.
Furthermore, every time you make an array of function pointers or something in an OOP-in-C design pattern, you sever almost completely all visible links in the inheritance chain, making the code hard to maintain. In real OOP languages, there is an obvious chain of derived classes, often analyzed and documented for you. (mmm, javadoc.) Not so in OOP-in-C, and the tools available won't be able to see it.
So, I would argue in general against OOP in C. For a really complex program, you may well need the abstraction, and then you will have to do it despite needing to fight the language in the process and despite making the program quite hard to follow by anyone other than the original author.
But if you knew the program was going to become that complicated, you shouldn't have written it in C in the first place...
In C, there are some times when I 'emulate' the object oriented approach, by defining some sort of constructor with granular control over things like callbacks, when running several instances of it.
For instance, let's say I have some spiffy event-handler library and I know that down the road I'm going to need many allocated copies:
So I would have (in C)
MyEvent *ev1 = new_eventhandler();           /* "constructor" */
set_event_callback_func(ev1, callback_one);
ev1->setfd(ev1, fd1);                        /* setfd is a function-pointer member; pass the handle explicitly */

MyEvent *ev2 = new_eventhandler();
set_event_callback_func(ev2, callback_two);
ev2->setfd(ev2, fd2);

/* ... handle events via the registered callbacks ... */

destroy_eventhandler(ev1);                   /* "destructor" */
destroy_eventhandler(ev2);
Obviously, I would later do something useful with that, like handling received events in the two respective callback functions. I'm not going to elaborate on the method of typing the function pointers and the structures that hold them, nor on what would go on in the 'constructor', because it's pretty obvious.
I think this approach works well for more advanced interfaces where it's desirable to allow the user to define their own callbacks (and change them on the fly), or when working on complex non-blocking I/O services.
Otherwise, I much prefer a more procedural / functional approach.
Probably an unpopular idea, but I think you should stick with non-OOP unless it adds something useful. In most practical problems OOP is useful, but if I'm just playing with an idea I start by writing non-object code and only move functions and data into classes if it becomes useful.
Of course I still use other objects in my code (std::vector et al) and I use namespaces to help organise my functions but why put code into objects until it is useful? Equally don't shy away from free functions in an OO solution.
The question is tricky because OOP encompasses several concepts: object encapsulation, polymorphism, inheritance, etc. It's easy to take those ideas too far. Here's a concrete example:
When C++ first caught on, zillions of string classes sprang into being. Everything you could possibly imagine doing to a string (upcasing, downcasing, trimming, tokenizing, parsing, etc.) was a member function of some string class.
Notice, though, that std::strings from the STL don't have all these methods. STL is object-oriented--the state and implementation details of a string object are well encapsulated, only a small, orthogonal interface is exposed to the world. All the crazy manipulations that people used to include as member functions are now delegated to non-member functions.
This is powerful, because these functions can now work on any string class that exposes the same interface. If you use STL strings for most things and a specialty version tuned to your program's idiosyncrasies, you don't have to duplicate member functions. You just have to implement the basic string interface and then you can reuse all those crazy manipulations.
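For instance, a hedged sketch of one such non-member manipulation, written as a template so it works with std::string or any string-like class exposing size() and operator[] (the function name is made up for the example):
#include <cctype>
#include <string>

// Works with std::string or any string-like type with the same basic interface.
template <typename StringT>
StringT to_upper_copy(const StringT& s)
{
    StringT result = s;
    for (typename StringT::size_type i = 0; i < result.size(); ++i)
        result[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(result[i])));
    return result;
}

// Usage: std::string upper = to_upper_copy(std::string("hello"));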
Some people call this hybrid approach generic programming. It's still object-oriented programming, but it moves away from the "everything is a member-function" mentality that a lot of people associate with OOP.