I'm coming from a scientific background and I'm having trouble thinking in an object-oriented way. I'm always thinking of functions, not objects.
For example, I've got a Dataset containing multiple 2D arrays, and I'd like to extract Regions of Interest (RoIs) out of these arrays. So my first design was something like a RoIFinder class to which I pass a reference to the Dataset object. The RoIFinder object did its magic and returned the RoIs.
But this gives me a bad feeling, since it looks more like a function than an object; it's more like a black box. But I have no idea how it is done properly.
How would you do something like that?
OO is not a silver bullet. Do your work in whichever way seems correct from the different points of view: problem decomposition, efficiency, code simplicity, testing, etc.
Do not make the code look OO if you don't need to. OO is about simplifying life when the problem is complex, not about solving simple problems in a complicated way.
Specifically for your problem, I see nothing bad in your approach. It possibly doesn't use some advanced techniques, but so what?
To me it sounds like in the specific situation you describe, this could be a fine OO design.
OO in brief is about bundling together
state
behaviour
(identity).
Whenever you have data which represents the state of (a part of) the system, and you have behaviour tied to (typically manipulating) that data, you have a candidate for an object. Optionally, these objects may also have an identity, but this may not be always necessary.
If you potentially have multiple different criteria for picking regions of interest out of the Dataset, you may implement these as distinct *Finder classes implementing a common base interface. There you have an OO class hierarchy! From then on, the Finders can even be used as interchangeable Strategies in your code.
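For illustration, a minimal sketch of what such a hierarchy of interchangeable finders might look like; the class and member names (RoiFinder, ThresholdRoiFinder, the RegionOfInterest fields) are invented for this sketch, not taken from the question:

#include <vector>

// Hypothetical types -- the real Dataset and RegionOfInterest would come
// from the questioner's code base.
struct RegionOfInterest { int x, y, width, height; };
class Dataset { /* holds the 2D arrays */ };

// Common interface for all finders.
class RoiFinder {
public:
    virtual ~RoiFinder() = default;
    virtual std::vector<RegionOfInterest> find(const Dataset& data) const = 0;
};

// One concrete criterion...
class ThresholdRoiFinder : public RoiFinder {
public:
    explicit ThresholdRoiFinder(double threshold) : threshold_(threshold) {}
    std::vector<RegionOfInterest> find(const Dataset& data) const override {
        std::vector<RegionOfInterest> rois;
        // ... scan the arrays and collect regions above threshold_ ...
        return rois;
    }
private:
    double threshold_;
};

// ...and the finders become interchangeable strategies:
std::vector<RegionOfInterest> analyse(const Dataset& data, const RoiFinder& finder) {
    return finder.find(data);
}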
The alternative would be to put the finder functionality in the Dataset. This might be OK if you are absolutely sure you won't have more different criteria to extract regions. Even then, your Dataset has two distinct responsibilities, which is usually not a good idea. It is better to have each class be responsible for one thing, and do it well.
We don't know what you are supposed to do with the data in the arrays - there may be some possibility to find more abstractions there and build some OO types and objects on these too.
Note though, that these all are just possibilities. Implement them only if they are actually useful (for solving problems, simplifying your code, or - last but not least - helping you gain practical experience with new concepts).
What you've done is probably better than the obvious OO approach of adding a find_roi() method to the Dataset class itself. Why? Because it sounds like you've created the RoIFinder functionality based only on the public API of Dataset, and keeping Dataset simpler is also good.
The STL (well, these days it's just part of the Standard Library) has examples of this in the way algorithms such as sort are applicable to multiple containers, rather than each container having its own sort member function (although in the case of list, optimisation opportunities lead it to implement its own version). The STL also has std::string, which controversially embeds a lot of functionality that could have been factored out. In my opinion it is well designed, prioritising convenient and elegant usage, which is important in such a ubiquitous class that is so frequently and heavily operated on. So, pick what suits the situation.
Anyway, there's no reason to put the RoIFinder into a class if it could equally be just a function, but if you've found some state (i.e. data members) that is convenient to preserve, or it helps usability in some other way, then that's a good enough reason to stick with your object.
Your 2D array can be implemented as a matrix class. One object
A region of interest is another class.
Getting a region of interest out of a matrix is a method.
"Iterators" into your matrices are classes.
Related
Right now I have a DirectX engine with a couple of classes - Application, Graphics, Sound - and each of them is around 1k lines, and they each reference each other. I initially tried to limit the use of classes and, for things like the D3D device, made it global for all classes to use. But I see in everyone else's engine that everything is split up into many classes and they have stuff like Engine->GetRenderer->Render(MyD3DContext); isn't that terribly inefficient? Why not just make MyD3DContext global and use it directly in the Render function? And one last thing I don't get: how are you supposed to make classes that work independently of each other? Sounds weird.
Firstly, why do you think that's terribly inefficient? Besides being much easier to code and maintain, that is also blazingly fast. OOP isn't a bottleneck; it's a boon for large projects with multiple developers and multiple concerns (such as real-world games).
Let me give you an example, since you mentioned "games":
The game is a Simulation
The simulation contains entities(Objects)
Objects can do things and have attributes; hence objects are like an encapsulation of attributes and actions. This is what puts the "Object" in "Object-Oriented Programming". You can think of them as being created in a fictional factory in your simulator. The blueprint of an object is the "class", and this bundling of attributes and actions is called encapsulation.
Each of these objects is bound to your world, probably through some sort of highly mathematical, Half-Life-2 (Source) level physics engine. You wouldn't want to code the "physics" for each class; instead you would inherit from a class (or interface) "IPhysics". And then whenever you change the gravity from 10.0 to 15.0, this value is propagated throughout the "world" scenario. This is inheritance.
Each object in your game (say, Gordon Freeman in Half-Life 2) can at the same time act as a "Player" and as something "Can-Be-Scripted", if you know what I mean. This is polymorphism: one object acting as different types.
So you see, it is pretty easy (and terribly EFFICIENT) to model and represent the fictional game in OOP.
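As a hedged illustration of the inheritance and polymorphism being described (IPhysics, IScriptable and PlayerCharacter are made-up names, not a real engine API):

#include <iostream>

// Illustrative only -- these interfaces are invented for the sketch.
class IPhysics {
public:
    virtual ~IPhysics() = default;
    virtual void applyGravity(float g) = 0;   // shared physics behaviour
};

class IScriptable {
public:
    virtual ~IScriptable() = default;
    virtual void runScript(const char* name) = 0;
};

// One object, several "types" it can act as (polymorphism):
class PlayerCharacter : public IPhysics, public IScriptable {
public:
    void applyGravity(float g) override { verticalSpeed_ -= g; }
    void runScript(const char* name) override {
        std::cout << "running script " << name << '\n';
    }
private:
    float verticalSpeed_ = 0.0f;
};

// Any physics body can be simulated through the common interface.
void simulatePhysics(IPhysics& body, float gravity) { body.applyGravity(gravity); }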
It isn't terribly inefficient. And you definitely need an introduction to OOP of some sort, maybe even something online.
Yes.
As the project becomes larger having one global anything will cause a vast list of problems. It's also not particularly inefficient to traverse a few pointers. Worry about efficiency in the right areas, areas that you have proven by running tests are inefficient, and try and maintain code clarity and separation at all times.
If you're worried about inefficiency, why not knock together a test app that has exactly that kind of structure and time how long it takes to do all that dereferencing? You'll find it insignificant compared to, say, building up the list of polys in sight.
The only way you'll see the benefit of having well encapsulated non-global objects will be as your project grows and you change things around.
There are a couple of big tenets of OO design, in particular code reuse/modularity and scope/isolation. Globals are generally frowned upon these days because they just don't scale well to large development efforts and always end up causing problems, so OO attempts to limit the scope of any given call to the minimum required to perform the function.
As for modularity/reuse: the larger a sub-module grows, generally the more specific it becomes, and as such the less likely it is to fit all the purposes it could if it were broken apart into more modular chunks. As a result you spend less time rewriting the same code for a slightly tweaked purpose, which also reduces the chance of breaking adjacent functionality while implementing code for your new purpose. That makes it more efficient to implement, though there may or may not be some slight cost at runtime; likely not, though. Remember, it doesn't take a lick more binary to run Render() whether it's defined in a root module or composed several layers deep in a composed object: it's still just a function pointer.
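A tiny sketch of why the chained call is cheap; Engine, Renderer and Context here are invented stand-ins for the poster's classes:

struct Context { /* device, state, etc. */ };

class Renderer {
public:
    void Render(Context& /*ctx*/) { /* draw using the context */ }
};

class Engine {
public:
    Renderer* GetRenderer() { return &renderer_; }
private:
    Renderer renderer_;
};

void frame(Engine& engine, Context& ctx) {
    // A couple of pointer dereferences and a (likely inlined) call -- no
    // measurable cost compared with a free Render() reading a global context.
    engine.GetRenderer()->Render(ctx);
}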
these are just general concepts, so take what you like.
hope that helps.
I am trying to develop an object-oriented representation of a finite element model. The finite element model consists of some output files (outputs of a commercial finite element program) in binary format which keep certain data related to the model. For people who are not in the field, these data correspond to the nodes of the model (coordinates of the nodes), the elements of the model, element connectivity (which elements are associated with which nodes) and element matrix information. These data could also be represented more generally as vectors of ints and doubles and some other array structures, if you will.
I was thinking of making a general class FeModel and making the above-mentioned files members of this class, with the files themselves also being objects. However, the files that keep the information are not conceptually part of the FeModel class I had in mind. Since the idea does not map to a real-world representation, I thought there should be a better way; those files are just there to keep the information.
I am now considering the option of just reading the necessary information from these files in the FeModel class by creating suitable member functions, and building what I want that way so that the interface is more or less minimal. On the other hand, dividing this task into different classes representing the above-mentioned files and using those as members in the FeModel class does not look like a bad option to me either. What are the decision criteria in these situations? I am aware that a problem can be solved in many different ways, but are there some guidelines to follow in similar cases where one hesitates between options?
Greetz,
U.
My, that's a bit of a wall of text. It could use a bit of filtering/rewriting.
As I understand it you have several files representing your 3D elements etc.
A nice OO way of doing it (at least in my mind) would be to have a separate class for each type of file you want to load: pass the filename in the constructor, load in the data, and have accessible members to access the data.
But without further information or structure to what you're asking, it's a bit difficult to say.
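For illustration, a minimal sketch of what one of those per-file classes might look like; NodeFile, Node and the binary layout here are assumptions, not the questioner's actual format:

#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

struct Node { double x, y, z; };

class NodeFile {
public:
    explicit NodeFile(const std::string& filename) {
        std::ifstream in(filename, std::ios::binary);
        if (!in) throw std::runtime_error("cannot open " + filename);
        // ... read the binary records into nodes_ ...
    }
    const std::vector<Node>& nodes() const { return nodes_; }
private:
    std::vector<Node> nodes_;
};

// FeModel can then be built from the file objects without knowing the
// binary details itself:
class FeModel {
public:
    explicit FeModel(const NodeFile& nodeFile) : nodes_(nodeFile.nodes()) {}
private:
    std::vector<Node> nodes_;
};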
You probably want to read about SOLID principles of OOP.
In my opinion, it's all about making your design less fragile in the face of changes. In a few words, you should strive for a design that is change-friendly: minor changes in the domain model should be reflected by minor and local changes in the design and the implementation.
Don't forget that C++ is not an OO language, but rather a language that supports several paradigms (including OO), and other paradigms could fit the task at hand better. If you have access to Bjarne Stroustrup's excellent "The C++ Programming Language (Special Edition)", read through Part IV ("Design using C++") to get an idea of the roles of classes and other C++ concepts.
An OO solution focuses on behavior, not data. You haven't told us what behaviors you need so we can't help with an OO solution.
That said, find your invariants and create a class for each related set of invariants and you should end up with a decent modular solution.
I suggest you read the book Object-Oriented Design Heuristics by Arthur Riel.
I would like to start my question by stating that this is a C++ design question more than anything, limiting the scope of the discussion to what is accomplishable in that language.
Let us pretend that I am working on a vehicle simulator that is intended to model modern highway systems. As part of this simulation, entities will be interacting with each other to avoid accidents, stop at stop lights and perhaps eventually even model traffic enforcement with radar guns and subsequent exciting high speed chases.
Being a spatial simulation written in C++, it seems like it would be ideal to start with some kind of Vehicle hierarchy, with cars and trucks deriving from some common base class. However, a common problem I have run into is that such a hierarchy is usually very rigidly defined, and unexpected changes - modeling a boat, for instance - tend to introduce complexity that grows over time into something quite unwieldy.
This simple approach seems to suffer from a combinatorial explosion of classes. Imagine if I created a MoveOnWater interface and a MoveOnGround interface, and used them to define Car and Boat. Then let's say I add RadarEquipment. Now I have to do something like add the classes RadarBoat and RadarCar. Add more capabilities using this approach and the whole thing rapidly becomes quite unreasonable.
One approach I have been investigating to address this inflexibility is to do away with the inheritance hierarchy altogether. Instead of trying to come up with a type-safe way to define everything that could ever be in this simulation, I defined one class - I will call it 'Entity' - and the capabilities that make up an entity (can it drive, can it fly, can it use radar) are all created as interfaces and added to a kind of capability list that the Entity class contains. At runtime, the proper capabilities are created and attached to the entity, and functions that want to use these interfaces must first query the entity object and check for their existence. This approach seems to be the most obvious alternative, and is working well for the time being. I do, however, worry about the maintenance issues this approach will have: effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might become one when someone else starts trying to use and modify the code.
As one potential alternative, I pondered using the template system to achieve type safety while keeping the same kind of flexibility. I imagine I could create entities that inherit whatever combination of interfaces I wanted. Using these objects would entail creating a template class or function that used any combination of the interfaces. One example might be simply moving on a road using just the MoveOnRoad interface, whereas more complex logic, like a "high-speed freeway chase", could use methods from both the MoveOnRoad and Radar interfaces.
Of course, making this approach usable mandates the use of Boost Concept Check just to make debugging feasible. Also, this approach has the unfortunate side effect of making "optional" interfaces all but impossible. It is not simple to write a function that does one thing if the entity has a RadarEquipment interface, and something else if it doesn't. In this regard, type safety is somewhat of a curse. I think some trickery with Boost.Any might be able to pull it off, but I haven't figured out how to make that work, and it seems like way too much complexity for what I am trying to achieve.
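For what it's worth, a rough sketch of such a capability list might look like the following; Capability, MoveOnRoad, Radar and Entity::get are assumptions made for illustration, not the actual code:

#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

struct Capability {
    virtual ~Capability() = default;
};

struct MoveOnRoad : Capability {
    void drive(double metres) { /* ... */ }
};

struct Radar : Capability {
    bool ping() { return true; }
};

class Entity {
public:
    // Attach a capability at runtime.
    template <class C>
    void add(std::unique_ptr<C> cap) {
        capabilities_[std::type_index(typeid(C))] = std::move(cap);
    }

    // Callers query for a capability and get nullptr if it is absent.
    template <class C>
    C* get() const {
        auto it = capabilities_.find(std::type_index(typeid(C)));
        return it == capabilities_.end() ? nullptr
                                         : static_cast<C*>(it->second.get());
    }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Capability>> capabilities_;
};

// Decision logic driven by what the entity is capable of:
void highSpeedChase(Entity& e) {
    if (auto* radar = e.get<Radar>()) { radar->ping(); }
    if (auto* road  = e.get<MoveOnRoad>()) { road->drive(100.0); }
}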
Thus, we are left with the dynamic "list of capabilities", where achieving the goal of having decision logic that drives behavior based on what the entity is capable of becomes trivial.
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning. I am eager to learn of a design pattern or idiom that is commonly used to address this issue, and the sort of tradeoffs I will have to make.
I also want to mention that I have been contemplating perhaps an even more random design. Even though my gut tells me this should be designed as a high-performance C++ simulation, a part of me wants to do away with the Entity class and object-oriented foo altogether and use a relational model to define all of these entity states. My initial thought is to treat the entities as an in-memory database and use procedural query logic to read and write the various state information, with the necessary behavior logic that drives these queries written in C++. I am somewhat concerned about performance, although it would not surprise me if that was a non-issue. I am perhaps more concerned about what maintenance issues and additional complexity this would introduce, as opposed to the relatively simple list-of-capabilities approach.
Encapsulate what varies and Prefer object composition to inheritance, are the two OOAD principles at work here.
Check out the Bridge Design pattern. I visualize Vehicle abstraction as one thing that varies, and the other aspect that varies is the "Medium". Boat/Bus/Car are all Vehicle abstractions, while Water/Road/Rail are all Mediums.
I believe that in such a mechanism there may be no need to maintain any capability list. For example, if a Bus cannot move on Water, such behavior can be modelled by a NOP (no-op) in the Vehicle abstraction.
Use the Bridge pattern when:
you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
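A minimal, illustrative sketch of that Bridge arrangement (Vehicle as the abstraction, Medium as the implementor; all names are invented for this sketch):

#include <iostream>
#include <memory>

class Medium {
public:
    virtual ~Medium() = default;
    virtual void traverse(double distance) = 0;
};

class Road : public Medium {
public:
    void traverse(double d) override { std::cout << "rolling " << d << " km\n"; }
};

class Water : public Medium {
public:
    void traverse(double d) override { std::cout << "sailing " << d << " km\n"; }
};

// The abstraction delegates to whichever implementor it was given.
class Vehicle {
public:
    explicit Vehicle(std::unique_ptr<Medium> medium) : medium_(std::move(medium)) {}
    virtual ~Vehicle() = default;
    void move(double distance) { medium_->traverse(distance); }
private:
    std::unique_ptr<Medium> medium_;
};

class Bus  : public Vehicle { public: Bus()  : Vehicle(std::make_unique<Road>())  {} };
class Boat : public Vehicle { public: Boat() : Vehicle(std::make_unique<Water>()) {} };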
Now, with that background in mind, I am open to any design gurus telling me where I err'd in my reasoning.
You may be erring in using C++ to define a system for which you as yet have no need/no requirements:
This approach seems to be the most obvious alternative, and is working well for the time being. I do, however, worry about the maintenance issues this approach will have: effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might become one when someone else starts trying to use and modify the code.
Maybe you should be considering principles like YAGNI as opposed to BDUF.
Some of my personal favourites are from Systemantics:
"15. A complex system that works is invariably found to have evolved from a simple system that works"
"16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."
You're also worrying about performance, when you have no defined performance requirements and no problems with performance:
I am somewhat concerned about performance, although it would not surprise me if that was a non-issue.
Also, I hope you know about double-dispatch, which might be useful for implementing anything-to-anything interactions (it's described in some detail in More Effective C++ by Scott Meyers).
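For reference, a minimal double-dispatch sketch (the classic collide example; the Car/Boat classes here are invented for illustration, not the poster's design):

#include <iostream>

class Car; class Boat;

class Entity {
public:
    virtual ~Entity() = default;
    virtual void collideWith(Entity& other) = 0;   // first dispatch on *this
    virtual void collideWith(Car& car)   = 0;      // second dispatch on the argument
    virtual void collideWith(Boat& boat) = 0;
};

class Car : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWith(*this); }
    void collideWith(Car&)  override { std::cout << "car/car crash\n"; }
    void collideWith(Boat&) override { std::cout << "car/boat crash\n"; }
};

class Boat : public Entity {
public:
    void collideWith(Entity& other) override { other.collideWith(*this); }
    void collideWith(Car&)  override { std::cout << "boat/car crash\n"; }
    void collideWith(Boat&) override { std::cout << "boat/boat crash\n"; }
};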
So, I've come back to ask, once more, a patterns-related question. This may be too generic to answer, but my problem is this (I am programming and applying concepts that I learn as I go along):
I have several structures within structures (note, I'm using the word structure in the general sense, not in the strict C struct sense (whoa, what a tongue twister)), and quite a bit of complicated inter-communications going on. Using the example of one of my earlier questions, I have Unit objects, UnitStatistics objects, General objects, Army objects, Soldier objects, Battle objects, and the list goes on, some organized in a tree structure.
After researching a little bit and asking around, I decided to use the mediator pattern because the interdependencies were becoming a trifle too much, and the classes were starting to appear too tightly coupled (yes, another term which I just learned and am too happy about not to use it somewhere). The pattern makes perfect sense and it should straighten some of the chaotic spaghetti that I currently have boiling in my project pot.
But well, I guess I haven't learned enough about OO design yet. My question is this (finally; PS, I hope it makes sense): should I have one central mediator that deals with all communications within the program, and is that even possible? Or should I have, say, an abstract mediator and one subclassed mediator per structure type that deals with communication of a particular set of classes, e.g. a concrete mediator per army which helps out the army, its general, its units, etc.?
I'm leaning more towards the second option, but I really am no expert when it comes to OO design. So the third question is: what should I read to learn more about this kind of subject? (I've looked at Head First Design Patterns and the GoF book, but they're more of a "learn the vocabulary" kind of book than a "learn how to use your vocabulary" kind of book, which is what I need in this case.)
As always, thanks for any and all help (including the witty comments).
I don't think you've provided enough info above to be able to make an informed decision as to which is best.
From looking at your other questions it seems that most of the communication occurs between components within an Army. You don't mention much occurring between one Army and another. In that case it would seem to make sense to have each Mediator instance coordinate communication between the components comprising a single Army - i.e. the Generals, Soldiers etc. So if you have 10 Army instances then you will have 10 ArmyMediator instances.
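A rough sketch of that per-Army mediator; Soldier, General and ArmyMediator here are stand-ins for your classes, not your actual code:

#include <iostream>

class ArmyMediator;

class Soldier {
public:
    explicit Soldier(ArmyMediator& m) : mediator_(m) {}
    void reportCasualty();                 // talks only to the mediator
private:
    ArmyMediator& mediator_;
};

class General {
public:
    void notifyCasualty() { std::cout << "General adjusts the battle plan\n"; }
};

class ArmyMediator {
public:
    explicit ArmyMediator(General& g) : general_(g) {}
    void soldierDown() { general_.notifyCasualty(); }   // routes the message
private:
    General& general_;
};

void Soldier::reportCasualty() { mediator_.soldierDown(); }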
If you really want to learn O-O Design you're going to have to try things out and run the risk of getting it wrong from time to time. I think you'll learn just as much, if not more, from having to refactor a design that doesn't quite model the problem correctly into one that does, as you will from getting the design right the first time around.
Often you just won't have enough information up front to be able to choose the right design from the go anyway. Just choose the simplest one that works for now, and improve it later when you have a better idea of the requirements and/or the shortcomings of the current design.
Regarding books, personally I think the GoF book is more useful if you focus less on the specific set of patterns they describe, and focus more on the overall approach of breaking classes down into smaller reusable components, each of which typically encapsulates a single unit of functionality.
I can't answer your question directly, because I have never used that design pattern. However, whenever I have this problem of message passing between various objects, I use the signal-slot pattern. Usually I use Qt's, but my second option is Boost's. They both solve the problem by having a single, global message-passing handler. They are also both type-safe and quite efficient, both in terms of CPU cycles and in productivity. Because they are so flexible - i.e. any object can emit any kind of signal, and any other object can receive any signal - you'll end up solving, I think, what you describe.
Sorry if I just made things worse by not choosing either of the two options, but instead adding a third!
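A minimal Boost.Signals2 sketch of that idea (the Battle/onUnitDestroyed names are invented for illustration):

#include <boost/signals2.hpp>
#include <iostream>
#include <string>

struct Battle {
    // Anyone can connect a slot; the Battle doesn't know who is listening.
    boost::signals2::signal<void(const std::string&)> onUnitDestroyed;
};

int main() {
    Battle battle;

    // Any object can subscribe; here a plain lambda stands in for a General
    // or an Army reacting to the event.
    battle.onUnitDestroyed.connect([](const std::string& name) {
        std::cout << name << " was destroyed\n";
    });

    battle.onUnitDestroyed("3rd Cavalry");   // emit the signal
    return 0;
}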
In order to use Mediator you need to determine:
(1) What does the group of objects, which need mediation, consist of?
(2) Among these, which are the ones that have a common interface?
The Mediator design pattern relies on the group of objects that are to be mediated having a "common interface"; i.e., the same base class: the widgets in the GoF book example inherit from the same Widget base, etc.
So, for your application:
(1) Which are the structures (Soldier, General, Army, Unit, etc.) that need mediation between each other?
(2) Which ones of those (Soldier, General, Army, Unit, etc.) have a common base?
This should help you determine, as a first step, an outline of the participants in the Mediator design pattern. You may find out that some structures in (1) fall outside of (2). Then you may need to force them to adhere to a common interface too, if you can change that or if you can afford to make that change (it may turn out to be too much redesign work, and it violates the Open-Closed Principle: your design should be, as much as possible, open to adding new features but closed to modifications of existing ones).
If you discover that (1) and (2) above result in a partition into separate groups, each with its own mediator, then the number of these partitions dictates the number of different types of mediators. Now, should these different mediators have a common interface of their own? Maybe, maybe not. Polymorphism is a way of handling complexity by grouping different entities under a common interface such that they can be handled as a group rather than individually. So, would there be any benefit to grouping all these supposedly different types of mediators under a common interface (like the DialogDirector in the GoF book example)? Possibly, if:
(a) You may have to use a heterogeneous collection of mediators;
or
(b) You envision that these mediators will evolve in the future (and they probably will). Providing an abstract interface then allows you to derive more evolved versions of mediators without affecting existing ones or their colleagues (the clients of the mediators).
So, without knowing more, I'd have to guess that, yes, it's probably better to use abstract mediators and to subclass them, for each group partition, just to prepare yourself for future changes without having to redesign your mediators (remember the Open-Closed principle).
Hope this helps.
I find myself always trying to fit everything into the OOP methodology when I'm coding in C/C++. But I realize that I don't always have to force everything into this mold. What are some pros/cons of using the OOP methodology versus not? I'm more interested in the pros/cons of NOT using OOP (for example, are there optimization benefits to not using OOP?). Thanks, let me know.
Of course it's very easy to explain a million reasons why OOP is a good thing. These include: design patterns, abstraction, encapsulation, modularity, polymorphism, and inheritance.
When not to use OOP:
Putting square pegs in round holes: Don't wrap everything in classes when they don't need to be. Sometimes there is no need and the extra overhead just makes your code slower and more complex.
Object state can get very complex: There is a really good quote from Joe Armstrong, who invented Erlang:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Your code is already not OOP: It's not worth porting your code if your old code is not OOP. There is a quote from Richard Stallman in 1995:
Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program.
Portability with C: You may need to export a set of functions to C. Although you can simulate OOP in C by making a struct and a set of functions whose first parameter takes a pointer to that struct, it isn't always natural.
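A hypothetical sketch of that struct-plus-functions style (Counter and its functions are invented names; the snippet compiles both as C and as C++):

#include <stdio.h>

/* Simulating an "object" in C: a struct holds the state, and free functions
 * take a pointer to it as their first parameter. */
typedef struct Counter {
    int value;
} Counter;

void counter_init(Counter* c)      { c->value = 0; }
void counter_increment(Counter* c) { c->value += 1; }
int  counter_get(const Counter* c) { return c->value; }

int main(void) {
    Counter c;
    counter_init(&c);
    counter_increment(&c);
    printf("%d\n", counter_get(&c));
    return 0;
}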
You may find more reasons in the paper entitled Bad Engineering Properties of Object-Oriented Languages.
Wikipedia's Object Oriented Programming page also discusses some pros and cons.
One school of thought with object-oriented programming is that you should have all of the functions that operate on a class as methods on the class.
Scott Meyers, one of the C++ gurus, actually argues against this in this article:
How Non-Member Functions Improve Encapsulation.
He basically says that unless there's a really compelling reason, you should keep the function SEPARATE from the class. Otherwise the class can turn into a big, bloated, unmanageable mess.
Based on experiences in a previous large project, I totally agree with him.
A benefit of non-OOP functionality is that it often makes exporting your functionality to other languages easier. For example, a simple DLL containing only functions is much easier to use from C#: you can use P/Invoke to simply call the C++ functions. So in this sense it can be useful for writing extremely time-critical algorithms that fit nicely into a single function call or a few of them.
OOP is used a lot in GUI code, computer games, and simulations. Windows should be polymorphic - you can click on them, resize them, and so on. Computer game objects should be polymorphic - they probably have a location, a path to follow, they might have health, and they might have some AI behavior. Simulation objects also have behavior that is similar, but breaks down into classes.
For most things though, OOP is a bit of a waste of time. State usually just causes trouble, unless you have put it safely in the database where it belongs.
I suggest you read Bjarne's Paper about Why C++ is not just an Object-Oriented Programming Language
If we consider, for a moment, not object-orientation itself but one of the keystones of object-orientation: encapsulation.
It can be shown that change-propagation probability cannot increase with distance from the change: if A depends on B and B depends on C, and we change C, then the probability that A will change cannot be larger than the probability that B will change. If B is a direct dependency on C and A is an indirect dependency on C, then, more generally, to minimise the potential cost of any change in a system we must minimise the potential number of direct dependencies.
The ISO defines encapsulation as the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
We use encapsulation to minimise the number of potential dependencies with the highest change-propagation probability. Basically, encapsulation mitigates the ripple effect.
Thus one reason not to use encapsulation is when the system is so small or so unchanging that the cost of potential ripple effects is negligible. This is also, therefore, a case where OO might be omitted without potentially costly consequences.
Well, there are several alternatives. Non-OOP code in C++ may instead be:
C-style procedural code, or
C++-style generic programming
The only advantages of the first are simplicity and backwards compatibility. If you're writing a small trivial app, then messing around with classes is just a waste of time. If you're trying to write "Hello World", just call printf already; don't bother wrapping it in a class. And if you're working with an existing C codebase, it's probably not object-oriented, and trying to force it into a different paradigm than it already uses is just a recipe for pain.
For the latter, the situation is different, in that this approach is often superior to "traditional OOP".
Generic programming gives you greater performance (among other things because you often avoid the overhead of vtables, and because with less indirection the compiler is better able to inline), better type safety (because the exact type is known, rather than being hidden behind an interface), and often cleaner and more concise code as well (STL iterators and algorithms enable much of this, without using a single instance of runtime polymorphism or virtual functions).
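A small, hedged illustration of the generic style (count_matching is an invented helper): the compiler sees the concrete container and predicate types, so there is no vtable and the call can be inlined.

#include <algorithm>
#include <cstddef>
#include <vector>

// Works with any container and any callable, checked at compile time.
template <typename Container, typename Predicate>
std::size_t count_matching(const Container& c, Predicate pred) {
    return std::count_if(c.begin(), c.end(), pred);
}

int main() {
    std::vector<int> values = {1, 2, 3, 4, 5};
    std::size_t evens = count_matching(values, [](int v) { return v % 2 == 0; });
    return static_cast<int>(evens);
}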
OOP is little more than an aging buzzword: a methodology that everyone misunderstood (the version supported by C++ and Java has little to do with what OOP originally meant, as in Smalltalk), and then pretended was the holy grail. There are aspects of it that are useful, certainly, but it is often not the best approach for designing an application.
Rather, express the overall logic by other means, for example generic programming, and when you need a class to encapsulate some simple concept, by all means design it according to OOP principles.
OOP is just a tool among many. The goal is not to write OOP code, but to write good code. Sometimes the way to do this is by using OOP principles, but often you can get better code using generic programming principles, or functional programming.
It is a very project-dependent decision. My general feeling about OOP is that it's useful for organizing large projects that involve multiple components. One area where I find OOP especially pointless is school assignments. Except for those specifically designed to teach OOP concepts or large software design concepts, many of my assignments, specifically those in more algorithm-heavy classes, are best suited to non-OOP design.
So specifically: smaller projects that are not likely to grow large, and projects that center around a single algorithm, seem to be non-OOP candidates in my books. Also, if you can write the specification as a linear set of steps, e.g. with no interactive GUI or state to maintain, that would also be an opportunity.
Of course, if you're required to use an OOP design or an OOP toolkit, or if you have well-defined 'objects' in your spec, or if you need the features of polymorphism, etc., there are plenty of reasons to use it; the above just seem to be indicators of when it would be simple not to.
Just my $0.02.
Having an Ada background, I develop in C in terms of packages containing data and their associated functions. This gives very modular code, with pieces that can be taken apart and reused on other projects. I don't feel the need to use OOP.
When I develop in Objective-C, objects are the natural container for data and code. I still develop with more or less the package concept in mind with some new cool features.
I used to be an OOP fanboy... then I realized that using functions, generics and callbacks can often make for a more elegant and change-friendly solution in C++ than classes and virtual functions.
Other big names realized it too: http://harmful.cat-v.org/software/OO_programming/
IMHO, I have a feeling that the OOP concept does not really suit the needs of Big Data, as OOP assumes all the stuff is kept in memory (the concept of objects and member variables). This always results in memory-demanding, heavy applications when OOP is used, for example, for big image processing. Instead, the simplicity of C may be used with intensive parallel I/O, making apps more efficient and easier to implement. It is the year 2019 as I am writing this message... Everything may change in a year! :)
In my mind it comes down to what kind of model suits the problem at hand. It seems to me that OOP is best suited to coding GUI programs, in that the data and functionality for a graphical object are easily bundled together. Other problems (such as a web server, as an example off the top of my head) might be more easily modeled with a data-centric approach, where there's no strong advantage to having a method and its data near each other.
tl;dr depends on the problem.
I'd say the greatest benefit of C++ OOP is inheritance and polymorphism (virtual functions etc.).
This allows for code reuse and extensibility.
C++: use OOP. C: no, with certain exceptions.
In C++ you should use OOP. It's a nice abstraction and it's the tool you are given. You either use it or leave it in the box where it can't help. You don't use the power saw for everything but I would read the manual and have it ready for the right job.
In C, it's a more difficult call. While you can certainly write arbitrarily object-oriented code in C, it's enough of a pain that you immediately find yourself fighting the language in order to use it. You may be more productive dropping the doesn't-fit-so-well design pattern and programming as C was intended to be used.
Furthermore, every time you make an array of function pointers or something in an OOP-in-C design pattern, you sever almost completely all visible links in the inheritance chain, making the code hard to maintain. In real OOP languages, there is an obvious chain of derived classes, often analyzed and documented for you (mmm, Javadoc). Not so in OOP-in-C, and the available tools won't be able to see it.
So, I would argue in general against OOP in C. For a really complex program, you may well need the abstraction, and then you will have to do it despite needing to fight the language in the process and despite making the program quite hard to follow by anyone other than the original author.
But if you knew the program was going to become that complicated, you shouldn't have written it in C in the first place...
In C, there are some times when I 'emulate' the object-oriented approach by defining some sort of constructor with granular control over things like callbacks, when running several instances of it.
For instance, let's say I have some spiffy event handler library and I know that down the road I'm going to need many allocated copies:
So I would have (in C)
MyEvent *ev1 = new_eventhandler();          /* 'constructor' */
set_event_callback_func(ev1, callback_one); /* per-instance callback */
ev1->setfd(ev1, fd1);                       /* a function pointer stored in the struct
                                               needs the instance passed explicitly in C */

MyEvent *ev2 = new_eventhandler();
set_event_callback_func(ev2, callback_two);
ev2->setfd(ev2, fd2);

destroy_eventhandler(ev1);                  /* 'destructor' */
destroy_eventhandler(ev2);
Obviously, I would later do something useful with that, like handle received events in the two respective callback functions. I'm not going to really elaborate on the method of typing function pointers and structures to hold them, nor on what would go on in the 'constructor', because it's pretty obvious.
I think this approach works for more advanced interfaces where it's desirable to allow the user to define their own callbacks (and change them on the fly), or when working on complex non-blocking I/O services.
Otherwise, I much prefer a more procedural / functional approach.
Probably an unpopular idea, but I think you should stick with non-OOP unless it adds something useful. In most practical problems OOP is useful, but if I'm just playing with an idea I start writing non-object code and move functions and data into classes if it becomes useful.
Of course I still use other objects in my code (std::vector et al.), and I use namespaces to help organise my functions, but why put code into objects until it is useful? Equally, don't shy away from free functions in an OO solution.
The question is tricky because OOP encompasses several concepts: object encapsulation, polymorphism, inheritance, etc. It's easy to take those ideas too far. Here's a concrete example:
When C++ first caught on, zillions of string classes sprung into being. Everything you could possibly imagine doing to a string (upcasing, downcasing, trimming, tokenizing, parsing, etc.) was a member function of some string class.
Notice, though, that std::string from the STL doesn't have all these methods. The STL is object-oriented: the state and implementation details of a string object are well encapsulated, and only a small, orthogonal interface is exposed to the world. All the crazy manipulations that people used to include as member functions are now delegated to non-member functions.
This is powerful, because these functions can now work on any string class that exposes the same interface. If you use STL strings for most things, and a specialty version tuned to your program's idiosyncrasies, you don't have to duplicate member functions; you just have to implement the basic string interface and then you can re-use all those crazy manipulations.
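A minimal sketch of such a non-member manipulation, written as a template so it works with std::string or any string-like class exposing begin()/end() (the function name is invented):

#include <algorithm>
#include <cctype>
#include <string>

// Upcase in place; only requires the string type to provide mutable iterators.
template <typename StringT>
void to_upper_in_place(StringT& s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char ch) { return static_cast<char>(std::toupper(ch)); });
}

int main() {
    std::string name = "stl";
    to_upper_in_place(name);   // name == "STL"
    return 0;
}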
Some people call this hybrid approach generic programming. It's still object-oriented programming, but it moves away from the "everything is a member-function" mentality that a lot of people associate with OOP.