Related
I wanted to transform downcasts (dynamic_cast from a base class to a derived class) from this style:
auto derived = dynamic_cast<Derived*>(object);
to something more compact. To that end, I added the following template to the Base class:
template<typename T>
T As() { return dynamic_cast<T>(this); }
So now the previous statement would be rewritten as
auto derived = object->As<Derived*>();
I like this style more. But I know there might be readability issues (subjective), or perhaps an effect on memory use? If I am correct, this will generate a function for each derived type I cast to, and that number could potentially be large (100 different derived classes).
Should I just stick to plain dynamic_cast?
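For reference, a complete minimal version of the idea might look like this (a sketch: note that dynamic_cast requires Base to be polymorphic, e.g. via a virtual destructor):

#include <cassert>

struct Base {
    virtual ~Base() {}                       // makes the hierarchy polymorphic
    template <typename T>
    T As() { return dynamic_cast<T>(this); } // returns nullptr on a failed pointer cast
};

struct Derived : Base {};

int main() {
    Base* object = new Derived;
    auto derived = object->As<Derived*>();
    assert(derived != nullptr);
    delete object;
}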
If you read material from a number of experts who have participated in the design of C++ (Stroustrup, Sutter, the list goes on) you will find that dynamic_cast (and all the _casts) are verbose and clumsy for the programmer BY DESIGN.
Where at all possible, it is considered best to AVOID using them. While all of the _cast operators have their place (i.e. there are circumstances in which they are genuinely the best solution to a problem) they are also blunt instruments that can be used to work around problems due to bad design. Unfortunately, given a choice, a lot of programmers will reach for such blunt instruments rather than applying a bit more effort to learn appropriate techniques, and to clean up their design - which has benefits such as making the code easier to get working right, and easier to maintain.
dynamic_cast is, arguably, the worst of the _cast operators, since it almost invariably introduces an overhead at run time. If it is used to work around deficiencies due to bad design, there is a distinct run-time penalty.
Making the syntax clumsy and verbose encourages a programmer to find alternatives (e.g. design types and operations on types, in a way that avoids the need for such conversions).
What you're asking for is a way to allow programmers to use dynamic_cast easily and with less thought. That will encourage bad design, by allowing a programmer to easily use the _cast operators to work around design problems, when they would often be better off applying a bit more effort to avoid a need for such conversions in the first place. There is plenty of information available about techniques that can be used to avoid use of operations like dynamic_cast.
So, yes, if you really need to use such conversions, I suggest you stick to use of dynamic_cast.
Better yet, you might want to also apply effort to learn design techniques and idioms that reduce how often you need to use it.
I want to know whether there is an effect on program efficiency from adopting an object-oriented approach to a problem (as compared to the structured programming approach) in any programming language, but especially in C++.
Maybe. Maybe not.
You can write efficient object-oriented code. You can write inefficient structured code.
It depends on the application, how well the code is written, and how heavily the code is optimized. In general, you should write code so that it has a good, clean, modular architecture and is well designed, then if you have problems with performance optimize the hot spots that are causing performance issues.
Use object oriented programming where it makes sense to use it and use structured programming where it makes sense to use it. You don't have to choose between one and the other: you can use both.
I remember back in the early 1990s, when C++ was young, there were studies done about this. If I remember correctly, the guys who took (well written) C++ programs and recoded them in C got around a 15% increase in speed. The guys who took C programs and recoded them in C++, converting the imperative style of C to an OO style (but keeping the same algorithms), got the same or better performance. The apparent contradiction was explained by the observation that the C programs, in being translated to an object-oriented style, became better organized. Things that you did in C because it was too much code and trouble to do better could more easily be done properly in C++.
Thinking back about this, I wonder about the conclusion some. Writing a program a second time will always result in a better program, so it didn't have to be the imperative-to-OO change that made the difference. Today's computer architectures are designed with hardware support for common operations done by OO programs, and compilers have gotten better at using those instructions, so I think it is likely that whatever overhead a virtual function call had in 1992 is far smaller today.
There doesn't have to be, if you are very careful to avoid it. If you just take the most straightforward approach, using dynamic allocation, virtual functions, and (especially) passing objects by value, then yes there will be inefficiency.
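To illustrate the pass-by-value point, here is a minimal sketch (names are illustrative): the by-value version copies the whole vector on every call, while the by-reference version does the same work with no copy.

#include <vector>

double sum_by_value(std::vector<double> v) {       // copies the entire vector per call
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}

double sum_by_ref(const std::vector<double>& v) {  // no copy, same result
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}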
It doesn't have to be. The algorithm is what matters most. I agree encapsulation will slow you down a little, but compilers are there to optimize.
You would say no if this were a question on a computer science exam.
However, in a real development environment this tends to be true if the OOP paradigm is used correctly. The reason is that in a real development process we generally need to maintain our code base, and that is where the OOP paradigm can help. One strong point of OOP over structured programming like C is that OOP makes it easier to keep the code maintainable. When the code is more maintainable, there are fewer bugs, less time is spent fixing them, and less time is needed to implement new features. The bottom line is that we then have more time to focus on the efficiency of the application.
The problem is not technical, it is psychological. It is in what it encourages you to do by making it easy.
To make a mundane analogy, it is like a credit card. It is much more efficient than writing checks or using cash. If that is so, why do people get in so much trouble with credit cards? Because they are so easy to use that they abuse them. It takes great discipline not to over-use a good thing.
The way OO gets abused is by
Creating too many "layers of abstraction"
Creating too much redundant data structure
Encouraging the use of notification-style code, attempting to maintain consistency within redundant data structures.
It is better to minimize data structure, and if it must be redundant, be able to tolerate temporary inconsistency.
ADDED:
As an illustration of the kind of thing that OO encourages, here's what I see sometimes in performance tuning: Somebody sets SomeProperty = true;. That sounds innocent enough, right? Well that can ripple to objects that contain that object, often through polymorphism that's hard to trace. That can mean that some list or dictionary somewhere needs to have things added to it or removed from it. That can mean that some tree or list control needs controls added or removed or shuffled. That can mean windows are being created or destroyed. It can also mean some things need to be changed in a database, which might not be local so there's some I/O or mutex locking to be done.
It can really get crazy. But who cares? It's abstract.
There could be: the OO approach tends to be closer to a decoupled approach where different modules don't go poking around inside each other. They are restricted to public interfaces, and there is always a potential cost in that. For example, calling a getter instead of just directly examining a variable; or calling a virtual function by default because the type of an object isn't sufficiently obvious for a direct call.
That said, there are several factors that diminish this as a useful observation.
A well-written structured program should have the same modularity (i.e. hiding implementations), and therefore incur the same costs of indirection. The cost of calling a function pointer in C is probably going to be very similar to the cost of calling a virtual function in C++ (a sketch comparing the two follows this list).
Modern JITs, and even the use of inline methods in C++, can remove the indirection cost.
The costs themselves are probably relatively small (typically just a few extra simple operations per function call). This will be insignificant in a program where the real work is done in tight loops.
Finally, a more modular style frees the programmer to tackle more complicated, but hopefully less complex algorithms without the peril of low level bugs.
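As a minimal sketch of that comparison (names are illustrative), both styles pay for roughly one indirect call per dispatch:

#include <cstdio>

// C style: an explicit function pointer stored in the struct.
struct shape_c {
    double (*area)(const shape_c*);
    double radius;
};
double circle_area_c(const shape_c* s) { return 3.14159 * s->radius * s->radius; }

// C++ style: the compiler builds the equivalent table (the vtable) for us.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};
struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
};

int main() {
    shape_c c = { &circle_area_c, 2.0 };
    Circle circle(2.0);
    Shape* p = &circle;  // dispatch through the base pointer, i.e. the vtable
    std::printf("%f %f\n", c.area(&c), p->area());
}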
I would like to start my question by stating that this is a C++ design question more than anything, limiting the scope of the discussion to what is achievable in that language.
Let us pretend that I am working on a vehicle simulator that is intended to model modern highway systems. As part of this simulation, entities will be interacting with each other to avoid accidents, stop at stop lights and perhaps eventually even model traffic enforcement with radar guns and subsequent exciting high speed chases.
Being a spatial simulation written in C++, it seems like it would be ideal to start with some kind of Vehicle hierarchy, with cars and trucks deriving from some common base class. However, a common problem I have run into is that such a hierarchy is usually very rigidly defined, and introducing unexpected changes - modeling a boat, for instance - brings unexpected complexity that tends to grow over time into something quite unwieldy.
This simple approach seems to suffer from a combinatoric explosion of classes. Imagine if I created a MoveOnWater interface and a MoveOnGround interface, and used them to define Car and Boat. Then let's say I add RadarEquipment. Now I have to add something like the classes RadarBoat and RadarCar. Add more capabilities using this approach and the whole thing rapidly becomes quite unmanageable.
One approach I have been investigating to address this inflexibility issue is to do away with the inheritance hierarchy altogether. Instead of trying to come up with a type-safe way to define everything that could ever be in this simulation, I defined one class - I will call it 'Entity' - and the capabilities that make up an entity - can it drive, can it fly, can it use radar - are all created as interfaces and added to a kind of capability list that the Entity class contains. At runtime, the proper capabilities are created and attached to the entity, and functions that want to use these interfaces must first query the entity object and check for their existence. This approach seems to be the most obvious alternative, and is working well for the time being. I do, however, worry about the maintenance issues this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might become one when someone else starts trying to use and modify the code.
As one potential alternative, I pondered using the template system to achieve type safety while keeping the same kind of flexibility. I imagine I could create entities that inherit whatever combination of interfaces I wanted. Using these objects would entail creating a template class or function that uses any combination of the interfaces. One example might be simple "move on road" logic using just the MoveOnRoad interface, whereas more complex logic, like a "high speed freeway chase", could use methods from both the MoveOnRoad and Radar interfaces.
Of course, making this approach usable mandates the use of Boost Concept Check just to make debugging feasible. Also, this approach has the unfortunate side effect of making "optional" interfaces all but impossible. It is not simple to write a function that does one thing if the entity has a RadarEquipment interface, and something else if it doesn't. In this regard, type safety is somewhat of a curse. I think some trickery with Boost.Any might be able to pull it off, but I haven't figured out how to make that work, and it seems like way too much complexity for what I am trying to achieve.
Thus, we are left with the dynamic "list of capabilities", and the goal of having decision logic that drives behavior based on what the entity is capable of becomes trivial to achieve.
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning. I am eager to learn of a design pattern or idiom that is commonly used to address this issue, and the sort of tradeoffs I will have to make.
I also want to mention that I have been contemplating perhaps an even more random design. Even though my gut tells me that this should be designed as a high-performance C++ simulation, a part of me wants to do away with the Entity class and the object-oriented foo altogether and use a relational model to define all of these entity states. My initial thought is to treat entities as an in-memory database and use procedural query logic to read and write the various state information, with the necessary behavior logic that drives these queries written in C++. I am somewhat concerned about performance, although it would not surprise me if that were a non-issue. I am perhaps more concerned about the maintenance issues and additional complexity this would introduce, as opposed to the relatively simple list-of-capabilities approach.
Encapsulate what varies and Prefer object composition to inheritance, are the two OOAD principles at work here.
Check out the Bridge Design pattern. I visualize Vehicle abstraction as one thing that varies, and the other aspect that varies is the "Medium". Boat/Bus/Car are all Vehicle abstractions, while Water/Road/Rail are all Mediums.
I believe that in such a mechanism there may be no need to maintain any capability list. For example, if a Bus cannot move on Water, that behavior can be modelled as a NOP in the Vehicle abstraction.
Use the Bridge pattern when:
- you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
- both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
- changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
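A minimal sketch of that idea (names are illustrative): the Vehicle abstraction holds a pointer to a Medium implementation, so the two hierarchies can vary independently.

#include <iostream>
#include <string>

class Medium {
public:
    virtual ~Medium() {}
    virtual std::string traverse() const = 0;
};

class Road : public Medium {
public:
    std::string traverse() const override { return "rolling on the road"; }
};

class Water : public Medium {
public:
    std::string traverse() const override { return "sailing on the water"; }
};

class Vehicle {
public:
    explicit Vehicle(Medium* m) : medium_(m) {}
    virtual ~Vehicle() {}
    virtual std::string name() const = 0;
    void move() const { std::cout << name() << " is " << medium_->traverse() << '\n'; }
private:
    Medium* medium_;  // the bridge: the implementation can be swapped at run time
};

class Car : public Vehicle {
public:
    explicit Car(Medium* m) : Vehicle(m) {}
    std::string name() const override { return "Car"; }
};

int main() {
    Road road;
    Car car(&road);
    car.move();  // prints "Car is rolling on the road"
}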
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning.
You may be erring in using C++ to define a system for which you as yet have no need/no requirements:
This approach seems to be the most obvious alternative, and is working well for the time being. I do, however, worry about the maintenance issues this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might become one when someone else starts trying to use and modify the code.
Maybe you should be considering principles like YAGNI as opposed to BDUF.
Some of my personal favourites are from Systemantics:
"15. A complex system that works is invariably found to have evolved from a simple system that works"
"16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."
You're also worrying about performance, when you have no defined performance requirements and no actual performance problems:
I am somewhat concerned about performance, although it would not surprise me if that were a non-issue.
Also, I hope you know about double-dispatch, which might be useful for implementing anything-to-anything interactions (it's described in some detail in More Effective C++ by Scott Meyers).
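For readers unfamiliar with the technique, here is a minimal sketch of double dispatch (names are illustrative; Meyers discusses several variants): the first virtual call resolves the dynamic type of one operand, the second resolves the other.

#include <iostream>

class Car;
class Boat;

class Entity {
public:
    virtual ~Entity() {}
    virtual void collideWith(Entity& other) = 0;  // first dispatch
    virtual void collideWithCar(Car&) = 0;        // second dispatch
    virtual void collideWithBoat(Boat&) = 0;
};

class Car : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWithCar(*this); }
    void collideWithCar(Car&) override { std::cout << "car/car collision\n"; }
    void collideWithBoat(Boat&) override { std::cout << "car/boat collision\n"; }
};

class Boat : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWithBoat(*this); }
    void collideWithCar(Car&) override { std::cout << "car/boat collision\n"; }
    void collideWithBoat(Boat&) override { std::cout << "boat/boat collision\n"; }
};

int main() {
    Car car;
    Boat boat;
    Entity& a = car;
    Entity& b = boat;
    a.collideWith(b);  // two virtual calls resolve to "car/boat collision"
}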
Having spent some time playing around in Haskell and other functional languages, I've come to appreciate the simplicity of design that comes from describing problems in general terms. While many aspects of template programming can be far from simple, some uses are common enough that I don't think they're an impediment to clarity (especially function templates). I find templates can often simplify the current design while automatically adding a bit of future-resistance. Why should their functionality be relegated to the library writers?
On the other hand, some people seem to avoid templates like the plague. I could understand this a decade ago when the very concept of generic types was foreign to much of the programming community. But now all of the popular statically-typed OO languages support generics of one form or another. The added familiarity seems to warrant an adjustment of the conservative attitudes.
One such conservative attitude was expressed to me recently:
You should never make anything more general than necessary - basic rule of software development.
I was quite honestly surprised to see this stated so dismissively, as if it should've been self-evident. Personally I find it far from self-evident, what with languages like Haskell where everything is generic unless you specify otherwise. That being said, I think I understand where this point of view comes from.
In the back of my mind, I do have something like that rule rattling around. Now that it's at the forefront, I realize I've always interpreted it in the light of overall architecture. For example, if you have a class, you don't want to load it up with tons of features you might one day use. Don't bother making interfaces if you only need one concrete version (though mockability might be a counterargument to this one). Things like that...
What I don't do, however, is apply this principle on the micro level. If I have a small utility function that has no reason to be dependent on any particular type, I'll make a template.
So what do you think, SO? What do you consider to be over-generalizing? Does this rule have differing applicability depending on the context? Do you even agree this is a rule?
Over generalizing makes me crazy. I'm not scared of templates (nowhere near) and I like general solutions. But I also like solving the problem for which the client is paying. If it's a one week project, why am I now funding a one month extravaganza which will continue to work not only through obvious possible future changes like new taxes, but probably through the discovery of new moons or life on mars?
Bringing this back to templates: the client asks for some capability that involves your writing a function that takes a string and a number. You give me a templated solution that takes any two types and does the right thing for my specific case, and something that might or might not be right (due to the absence of requirements) in the rest of the cases, and I will not be grateful. I will be ticked off that in addition to paying you, I have to pay someone to test it, someone to document it, and someone to work within your constraints in the future if a more general case should happen to come along.
Of course, not all generalization is over generalization. Everything should be as simple as possible, but no simpler. As general as necessary, but no more general. As tested as we can afford, but no more tested. Etc. Also, "predict what might change and encapsulate it." All these rules are simple, but not easy. That's why wisdom matters in developers and those who manage them.
If you can do it in the same time, and the code is at least as clear, generalization is always better than specialization.
There's a principle that XP people follow called YAGNI - You Ain't Gonna Need It.
The wiki has this to say:
Even if you're totally, totally, totally sure that you'll need a feature later on, don't implement it now. Usually, it'll turn out either a) you don't need it after all, or b) what you actually need is quite different from what you foresaw needing earlier.
This doesn't mean you should avoid building flexibility into your code. It means you shouldn't overengineer something based on what you think you might need later on.
Too generic? I must admit I am a fan of Generic Programming (as a principle), and I really like the idea that Haskell and Go are applying there.
While programming in C++ however, you are offered two ways to achieve similar goals:
Generic Programming: by way of templates, even though there are issues with compilation time, dependency on the implementation, etc.
Object-Oriented Programming: its ancestor, in a way, which places the issue on the object itself (class/struct) rather than on the function...
Now, when to use which? It's a difficult question, for sure. Most of the time it's not much more than a gut feeling, and I've certainly seen abuse of either.
From experience I would say that the smaller a function/class, the more basic its goal, the easier it is to generalize. As an example, I carry around a toolbox in most of my pet projects and at work. Most of the functions / classes there are generic... a bit like Boost in a way ;)
// No container implements this; it's easy... but better to write it only once!
// (Needs <algorithm> for std::remove_if and std::lower_bound.)
#include <algorithm>

template <class Container, class Pred>
void erase_if(Container& c, Pred p)
{
    c.erase(std::remove_if(c.begin(), c.end(), p), c.end());
}

// Same as the STL algorithm, but with precondition validation in debug mode.
// ASSERT and is_sorted here are the author's own debug helpers; the default
// template argument on a function template requires C++11.
template <class Container, class Iterator = typename Container::iterator>
Iterator lower_bound(Container& c, typename Container::value_type const& v)
{
    ASSERT(is_sorted(c));
    return std::lower_bound(c.begin(), c.end(), v);
}
On the other hand, the closer you get to the business-specific job, the less likely you are to be generic.
That's why I myself appreciate the principle of least effort. When I am thinking of a class or method, I first take a step backward and think a bit:
Would it make sense for it to be more generic?
What would be the cost?
Depending on the answers, I adapt the degree of genericity, and I strive to avoid premature locking, i.e. I avoid using an insufficiently generic construct when it doesn't cost much to use a slightly more generic one.
Example:
void Foo::print() { std::cout << /* some stuff */ << '\n'; }
// VS
std::ostream& operator<<(std::ostream& out, Foo const& foo)
{
    return out << /* some stuff */ << '\n';
}
Not only is it more generic (I can specify where to output), it's also more idiomatic.
Something is over-generalized when you're wasting time generalizing it. If you are going to use the generic features in the future then you're probably not wasting time. It's really that simple [in my mind].
One thing to note is that making your software generalized isn't necessarily an improvement if it also makes it more confusing. There is often a trade off.
I think you should consider two basic principles of programming: KISS (keep it simple and straightforward) and DRY (don't repeat yourself). Most of the time I start with the first: implement the needed functionality in the most straightforward and simple way. Quite often it's enough, because it can already satisfy my requirements. In this case it remains simple (and not generic).
When the second (or at most the third) time I need something similar, I try to generalize the problem (function, class, design, whatever) based on the concrete real-life examples -> it's unlikely that I do the generalization just for its own sake.
Next similar problem: if it fits into the current picture elegantly, fine, I can solve it easily. If not, I check whether the current solution can be further generalized (without making it too complicated or less elegant).
I think you should do something similar even if you know in advance that you will need a general solution: take some concrete examples, and do the generalization based on them. Otherwise it's too easy to run into dead ends where you have a "nice", general solution, but it's not usable to solve the real problems.
However, there might be some exceptional cases to this.
a) When a general solution takes almost exactly the same effort and complexity. Example: writing a Queue implementation using generics is not much more complicated than doing the same just for Strings (a sketch follows these examples).
b) If it's easier to solve the problem in the general way, and the solution is also easier to understand. It does not happen too often; I can't come up with a simple, real-life example of this at the moment :-(. But even in this case, having/analyzing concrete examples beforehand is a must IMO, as only that can confirm that you are on the right track.
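As a minimal sketch of example a) (assuming a simple std::deque-backed design; names are illustrative), the generic version costs barely more than a String-only one would:

#include <deque>

template <class T>
class Queue {
public:
    void push(T const& value) { items_.push_back(value); }
    T pop() {                        // precondition: !empty()
        T front = items_.front();
        items_.pop_front();
        return front;
    }
    bool empty() const { return items_.empty(); }
private:
    std::deque<T> items_;
};

// Usage: Queue<std::string> comes for free alongside Queue<int>.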
One can say experience can overcome the prerequisite of having concrete problems, but I think in this case experience means that you have already seen and thought about concrete, similar problems and solutions.
If you have some time, you could have a look at Structure and Interpretation of Computer Programs. It has a lot of interesting material about how to find the right balance between genericity and complexity, and how to keep the complexity at the minimum really required by your problem.
And of course, the various agile processes also recommend something similar: start with the simple, refactor when it's needed.
For me, over-generalization is when the abstraction has to be broken at some later step. An example from the project I work on:
Object saveOrUpdate(Object object)
This method is too generic: it is exposed to the client within a 3-tier architecture, so the server has to validate the saved object without any context.
There are two examples of over-generalization from Microsoft:
1.) CObject (MFC)
2.) Object (.NET)
Both were used to "realise" generics in the absence of real generic types, a generality most people never actually exploited. In practice, everyone ended up doing runtime type checks on parameters passed as CObject/Object.
I find myself always trying to fit everything into the OOP methodology when I'm coding in C/C++. But I realize that I don't always have to force everything into this mold. What are some pros/cons of using the OOP methodology versus not? I'm more interested in the pros/cons of NOT using OOP (for example, are there optimization benefits to not using OOP?). Thanks, let me know.
Of course it's very easy to explain a million reasons why OOP is a good thing. These include: design patterns, abstraction, encapsulation, modularity, polymorphism, and inheritance.
When not to use OOP:
Putting square pegs in round holes: Don't wrap everything in classes when they don't need to be. Sometimes there is no need and the extra overhead just makes your code slower and more complex.
Object state can get very complex: There is a really good quote from Joe Armstrong who invented Erlang:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Your code is already not OOP: It's not worth porting your code if your old code is not OOP. There is a quote from Richard Stallman from 1995:
Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program.
Portability with C: You may need to export a set of functions to C. Although you can simulate OOP in C by making a struct and a set of functions whose first parameter takes a pointer to that struct, it isn't always natural.
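A minimal sketch of that simulation (names are illustrative):

#include <stdio.h>

typedef struct {
    double balance;
} Account;

/* "Methods" are free functions whose first parameter is the "object". */
void account_init(Account* a, double opening) { a->balance = opening; }
void account_deposit(Account* a, double amount) { a->balance += amount; }
double account_balance(const Account* a) { return a->balance; }

int main(void) {
    Account a;
    account_init(&a, 100.0);
    account_deposit(&a, 25.0);
    printf("balance: %.2f\n", account_balance(&a));
    return 0;
}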
You may find more reasons in this paper entitled Bad Engineering Properties of Object-Oriented Languages.
Wikipedia's Object Oriented Programming page also discusses some pros and cons.
One school of thought with object-oriented programming is that you should have all of the functions that operate on a class as methods on the class.
Scott Meyers, one of the C++ gurus, actually argues against this in this article:
How Non-Member Functions Improve Encapsulation.
He basically says that unless there's a really compelling reason to, you should keep the function SEPARATE from the class. Otherwise the class can turn into a big, bloated, unmanageable mess.
Based on experiences in a previous large project, I totally agree with him.
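As a minimal sketch of Meyers' point (names are illustrative), the query below needs nothing from the class beyond its public interface, so it has no business being a member:

#include <string>

class Document {
public:
    void set_title(std::string const& t) { title_ = t; }
    std::string const& title() const { return title_; }
private:
    std::string title_;
};

// Non-member, non-friend: it can only use the public interface, so it does not
// increase the amount of code with access to the private parts of Document.
bool has_title(Document const& d) { return !d.title().empty(); }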
A benefit of non-OOP functionality is that it often makes exporting your functionality to other languages easier. For example, a simple DLL containing only functions is much easier to use from C#: you can use P/Invoke to call the C++ functions directly. So in this sense it can be useful for writing extremely time-critical algorithms that fit nicely into single/few function calls.
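A minimal sketch of such an export (Windows-flavoured; the library name and function are illustrative):

// Built into, say, mathutils.dll:
extern "C" __declspec(dllexport)
double mean(const double* values, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; ++i) sum += values[i];
    return count > 0 ? sum / count : 0.0;
}

/* C# side, via P/Invoke (illustrative):
   [DllImport("mathutils.dll")] static extern double mean(double[] values, int count);
*/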
OOP is used a lot in GUI code, computer games, and simulations. Windows should be polymorphic - you can click on them, resize them, and so on. Computer game objects should be polymorphic - they probably have a location, a path to follow, they might have health, and they might have some AI behavior. Simulation objects also have behavior that is similar, but breaks down into classes.
For most things though, OOP is a bit of a waste of time. State usually just causes trouble, unless you have put it safely in the database where it belongs.
I suggest you read Bjarne Stroustrup's paper "Why C++ is not just an Object-Oriented Programming Language".
Consider, for a moment, not object-orientation itself but one of the keystones of object-orientation: encapsulation.
It can be shown that change-propagation probability cannot increase with distance from the change: if A depends on B and B depends on C, and we change C, then the probability that A will change cannot be larger than the probability that B will change. If B is a direct dependency on C and A is an indirect dependency on C, then, more generally, to minimise the potential cost of any change in a system we must minimise the potential number of direct dependencies.
The ISO defines encapsulation as the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
We use encapsulation to minimise the number of potential dependencies with the highest change-propagation probability. Basically, encapsulation mitigates the ripple effect.
Thus one reason not to use encapsulation is when the system is so small or so unchanging that the cost of potential ripple effects is negligible. This is also, therefore, a case in which OO might be forgone without potentially costly consequences.
Well, there are several alternatives. Non-OOP code in C++ may instead be:
C-style procedural code, or
C++-style generic programming
The only advantages to the first are the simplicity and backwards-compatibility. If you're writing a small trivial app, then messing around with classes is just a waste of time. If you're trying to write a "Hello World", just call printf already. Don't bother wrapping it in a class. And if you're working with an existing C codebase, it's probably not object-oriented, and trying to force it into a different paradigm than it already uses is just a recipe for pain.
For the latter, the situation is different, in that this approach is often superior to "traditional OOP".
Generic programming gives you greater performance (among other things because you often avoid the overhead of vtables, and because with less indirection the compiler is better able to inline), better type safety (because the exact type is known, rather than hidden behind an interface), and often cleaner and more concise code as well (STL iterators and algorithms enable much of this without using a single instance of runtime polymorphism or virtual functions).
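A minimal sketch of that contrast: std::sort takes its comparison as a template parameter, so the call below is a candidate for inlining, which a call through a virtual comparator interface would not be.

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(3); v.push_back(1); v.push_back(2);
    // The comparison is a concrete type known at compile time: no vtable involved.
    std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });
    // v is now {3, 2, 1}
}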
OOP is little more than an aging buzzword. A methodology that everyone misunderstood (the version supported by C++ and Java has little to do with what OOP originally meant, as in Smalltalk), and then pretended was the holy grail. There are aspects of it that are useful, certainly, but it is often not the best approach for designing an application.
Rather, express the overall logic by other means, for example generic programming, and when you need a class to encapsulate some simple concept, by all means design it according to OOP principles.
OOP is just a tool among many. The goal is not to write OOP code, but to write good code. Sometimes the way to do this is by using OOP principles, but often you can get better code using generic programming principles, or functional programming.
It is a very project-dependent decision. My general feeling about OOP is that it's useful for organizing large projects that involve multiple components. One area where I find OOP especially pointless is school assignments. Excepting those specifically designed to teach OOP concepts, or large software design concepts, many of my assignments, specifically those in more algorithm-oriented classes, are best suited to non-OOP design.
So specifically: smaller projects that are not likely to grow large, and projects that center around a single algorithm, seem to be non-OOP candidates in my books. Also, if you can write the specification as a linear set of steps (e.g., no interactive GUI or state to maintain), that is a good opportunity to go without OOP as well.
Of course, if you're required to use an OOP design, or an OOP toolkit, or if you have well-defined 'objects' in your spec, or if you need the features of polymorphism, etc., there are plenty of reasons to use it; the above seem to be indicators of when it would be simple not to.
Just my $0.02.
Having an Ada background, I develop in C in terms of packages containing data and their associated functions. This yields very modular code, with pieces that can be taken apart and reused in other projects. I don't feel the need to use OOP.
When I develop in Objective-C, objects are the natural container for data and code. I still develop with more or less the package concept in mind with some new cool features.
I used to be an OOP fanboy... then realized that using functions, generics, and callbacks can often make a more elegant and change-friendly solution in C++ than classes and virtual functions.
Other big names realized it too: http://harmful.cat-v.org/software/OO_programming/
IMHO, I have a feeling that the OOP concept does not really suit the needs of Big Data, since OOP assumes all the stuff is kept in memory (the concept of objects and member variables). This always results in memory-demanding and heavy applications when OOP is used, for example, for big image processing. Instead, the simplicity of C may be used with intensive parallel I/O, making apps more efficient and easier to implement. It is the year 2019 as I write this message... Everything may change in a year! :)
In my mind it comes down to what kind of model suits the problem at hand. It seems to me that OOP is best suited to coding GUI programs, in that the data and functionality for a graphical object is easily bundled together. Other problems- (such as a webserver, as an example off the top of my head), might be more easily modeled with a data centric approach, where there's no strong advantage to having a method and its data near each-other.
tl;dr depends on the problem.
I'd say the greatest benefit of C++ OOP is inheritance and polymorphism (virtual functions, etc.).
This allows for code reuse and extensibility.
C++: use OOP. C: no, with certain exceptions.
In C++ you should use OOP. It's a nice abstraction and it's the tool you are given. You either use it or leave it in the box where it can't help. You don't use the power saw for everything but I would read the manual and have it ready for the right job.
In C, it's a more difficult call. While you can certainly write arbitrarily object-oriented code in C, it's enough of a pain that you immediately find yourself fighting the language in order to use it. You may be more productive dropping the doesn't-fit-so-well design pattern and programming as C was intended to be used.
Furthermore, every time you make an array of function pointers or something in an OOP-in-C design pattern, you sever almost completely all visible links in the inheritance chain, making the code hard to maintain. In real OOP languages, there is an obvious chain of derived classes, often analyzed and documented for you. (mmm, javadoc.) Not so in OOP-in-C, and the tools available won't be able to see it.
So, I would argue in general against OOP in C. For a really complex program, you may well need the abstraction, and then you will have to do it despite needing to fight the language in the process and despite making the program quite hard to follow by anyone other than the original author.
But if you knew the program was going to become that complicated, you shouldn't have written it in C in the first place...
In C, there are some times when I 'emulate' the object oriented approach, by defining some sort of constructor with granular control over things like callbacks, when running several instances of it.
For instance, let's say I have some spiffy event handler library and I know that down the road I'm going to need many allocated copies:
So I would have (in C)
MyEvent *ev1 = new_eventhandler();           /* "constructor": allocates and initializes */
set_event_callback_func(ev1, callback_one);  /* install the user-defined callback */
ev1->setfd(fd1);                             /* setfd is a function pointer set up by the constructor */

MyEvent *ev2 = new_eventhandler();
set_event_callback_func(ev2, callback_two);
ev2->setfd(fd2);

/* ... run the event loop; the callbacks fire as events arrive ... */

destroy_eventhandler(ev1);                   /* "destructor": releases the instance */
destroy_eventhandler(ev2);
Obviously, I would later do something useful with that, like handling received events in the two respective callback functions. I'm not going to elaborate on the method of typing function pointers and the structures that hold them, nor on what would go on in the 'constructor', because it's pretty obvious.
I think this approach works for more advanced interfaces where it's desirable to allow the user to define their own callbacks (and change them on the fly), or when working on complex non-blocking I/O services.
Otherwise, I much prefer a more procedural / functional approach.
Probably an unpopular idea, but I think you should stick with non-OOP unless it adds something useful. In most practical problems OOP is useful, but if I'm just playing with an idea I start by writing non-object code and move functions and data into classes if it becomes useful.
Of course I still use other objects in my code (std::vector et al) and I use namespaces to help organise my functions but why put code into objects until it is useful? Equally don't shy away from free functions in an OO solution.
The question is tricky because OOP encompasses several concepts: object encapsulation, polymorphism, inheritance, etc. It's easy to take those ideas too far. Here's a concrete example:
When C++ first caught on, zillions of string classes sprung into being. Everything you could possibly imagine doing to a string (upcasing, downcasing, trimming, tokenizing, parsing, etc.) was a member function of some string class.
Notice, though, that std::strings from the STL don't have all these methods. The STL is object-oriented: the state and implementation details of a string object are well encapsulated, and only a small, orthogonal interface is exposed to the world. All the crazy manipulations that people used to include as member functions are now delegated to non-member functions.
This is powerful, because these functions can now work on any string class that exposes the same interface. If you use STL strings for most things and a specialty version tuned to your program's idiosyncrasies, you don't have to duplicate member functions. You just have to implement the basic string interface and then you can re-use all those crazy manipulations.
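As a minimal sketch (the function name is illustrative), a non-member template needs only a small interface from the string type it operates on:

#include <cctype>
#include <string>

// Works with std::string or any string type exposing size_type, value_type,
// size(), and operator[].
template <class String>
String to_upper_copy(String s) {
    for (typename String::size_type i = 0; i < s.size(); ++i)
        s[i] = static_cast<typename String::value_type>(
            std::toupper(static_cast<unsigned char>(s[i])));
    return s;
}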
Some people call this hybrid approach generic programming. It's still object-oriented programming, but it moves away from the "everything is a member-function" mentality that a lot of people associate with OOP.