I'm studying for an exam and am trying to figure this question out. The specific question is "Inheritance and object composition both promote code reuse. (T/F)", but I believe I understand the inheritance portion of the question.
I believe inheritance promotes code reuse because similar methods can be placed in an abstract base class, so that they do not have to be implemented identically in multiple child classes. For example, if you have three kinds of shapes, and each shape's method "getName" simply returns a data member '_name', then why re-implement this method in each of the child classes when it can be implemented once in the abstract base class "Shape"?
However, my best understanding of object composition is the "has-a" relationship between objects/classes. For example, a student has a school, and a school has a number of students. This can be seen as object composition, since they can't really exist without each other (a school without any students isn't exactly a school, is it? etc.). But I see no way that these two objects "having" each other as data members will promote code reuse.
Any help? Thanks!
Object composition can promote code reuse because you can delegate implementation to a different class, and include that class as a member.
Instead of putting all your code in your outermost classes' methods, you can create smaller classes with smaller scopes, and smaller methods, and reuse those classes/methods throughout your code.
class Inner
{
public:
    void DoSomething() { /* the shared implementation lives here */ }
};

class Outer1
{
public:
    void DoSomethingBig()
    {
        // We delegate part of the work to inner, allowing us to reuse its code
        inner.DoSomething();
        // Todo: Do something else interesting here
    }

private:
    Inner inner;
};

class Outer2
{
public:
    void DoSomethingElseThatIsBig()
    {
        // We used the implementation twice in different classes,
        // without copying and pasting the code.
        // This is the most basic possible case of reuse.
        inner.DoSomething();
        // Todo: Do something else interesting here
    }

private:
    Inner inner;
};
As you mentioned in your question, this is one of the two most basic object-oriented programming principles, called a "has-a" relationship. Inheritance is the other relationship, and is called an "is-a" relationship.
You can also combine inheritance and composition in quite useful ways that will often multiply your code (and design) reuse potential. Any real world and well-architected application will constantly combine both of these concepts to gain as much reuse as possible. You'll find out about this when you learn about Design Patterns.
Edit:
Per Mike's request in the comments, a less abstract example:
// Assume the Person class exists
#include <list>

class Bus
{
public:
    void Board(Person newOccupant);
    std::list<Person>& GetOccupants();

private:
    std::list<Person> occupants;
};
In this example, instead of re-implementing a linked list structure, you've delegated it to a list class. Every time you use that list class, you're re-using the code that implements the list.
In fact, since list semantics are so common, the C++ standard library gave you std::list, and you just had to reuse it.
1) The student knows about a school, but this is not really a HAS-A relationship; while you would want to keep track of what school the student attends, it would not be logical to describe the school as being part of the student.
2) More people occupy the school than just students. That's where the reuse comes in. You don't have to re-define the things that make up a school each time you describe a new type of school-attendee.
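For illustration, here's a minimal sketch of that distinction (all names are hypothetical): the student merely refers to a school, while the school is composed of occupants whose classes are defined once and reused.

#include <vector>

class School;                       // the student only *refers* to a school

class Student
{
public:
    explicit Student(const School* s) : school(s) {}

private:
    const School* school;           // "knows about", not "is composed of"
};

class Teacher { /* ... */ };

// Student and Teacher are defined once and reused here, rather than
// re-described for every kind of school.
class School
{
private:
    std::vector<Student*> students;
    std::vector<Teacher*> teachers;
};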
I have to agree with @Karl Knechtel -- this is a pretty poor question. As he said, it's hard to explain why, but I'll give it a shot.
The first problem is that it uses a term without defining it -- and "code reuse" means a lot of different things to different people. To some people, cutting and pasting qualifies as code reuse. As little as I like it, I have to agree with them to at least some degree. Other people define code reuse in ways that rule out cutting and pasting (classing another copy of the same code as separate code, not reuse of the same code). I can see that viewpoint too, though I tend to think their definition is intended more to serve a specific end than to be really meaningful (i.e., "code reuse" -> good, "cut-n-paste" -> bad, therefore "cut-n-paste" != "code reuse"). Unfortunately, what we're looking at here is right on the border, where you need a very specific definition of what code reuse means before you can answer the question.
The definition used by your professor is likely to depend heavily upon the degree of enthusiasm he has for OOP -- especially during the '90s (or so) when OOP was just becoming mainstream, many people chose to define it in ways that only included the cool new OOP "stuff". To achieve the nirvana of code reuse, you had to not only sign up for their OOP religion, but really believe in it! Something as mundane as composition couldn't possibly qualify -- no matter how strangely they had to twist the language for that to be true.
As a second major point, after decades of use of OOP, a few people have done some fairly careful studies of what code got reused and what didn't. Most that I've seen have reached a fairly simple conclusion: it's quite difficult (i.e., essentially impossible) to correlate coding style with reuse. Nearly any rule you attempt to make about what will or won't result in code reuse can and will be violated on a regular basis.
Third, and what I suspect tends to be foremost in many people's minds, is the fact that asking the question at all makes it sound as if this is something that can/will affect a typical coder -- that you might want to choose between composition and inheritance (for example) based on which "promotes code reuse" more, or something on that order. The reality is that (just for example) you should choose between composition and inheritance primarily based upon which more accurately models the problem you're trying to solve and which does more to help you solve it.
Though I don't have any serious studies to support the contention, I would posit that the chances of code being reused depend heavily upon a couple of factors that are rarely even considered in most studies: 1) how similar a problem somebody else needs to solve, and 2) whether they believe it will be easier to adapt your code to their problem than to write new code.
I should add that in some of the studies I've seen, there were factors found that seemed to affect code reuse. To the best of my recollection, the one that stuck out as most important/telling was not the code itself at all, but the documentation available for that code. Being able to use the code without basically reverse-engineering it contributes a great deal toward its being reused. The second point was simply the quality of the code -- a number of the studies were done in places/situations where they were trying to promote code reuse. In a fair number of cases, people attempted to reuse quite a bit more code than they ultimately did, but had to give up on it simply because the code wasn't good enough -- everything from bugs to clumsy interfaces to poor portability prevented reuse.
Summary: I'll go on record as saying that code reuse has probably been the most overhyped, under-delivered promise in software engineering over at least the last couple of decades. Even at best, code reuse remains a fairly elusive goal. Trying to simplify it to the point of treating it as a true/false question based on two factors is oversimplifying the question to the point that it's not only meaningless, but utterly ridiculous. It appears to trivialize and demean nearly the entire practice of software engineering.
I have an object Car and an object Engine:
#include <string>

class Engine {
    int horsepower;
};

class Car {
    std::string make;
    Engine cars_engine;
};
A Car has an Engine; this is composition. However, I don't need to redefine Engine to put an engine in a car -- I simply say that a Car has an Engine. Thus, composition does indeed promote code reuse.
Object composition does promote code re-use. Without object composition, if I understand your definition of it properly, every class could have only primitive data members, which would be beyond awful.
Consider the classes
class Vector3
{
    double x, y, z;
    double vectorNorm;
};

class Object
{
    Vector3 position;
    Vector3 velocity;
    Vector3 acceleration;
};
Without object composition, you would be forced to have something like
class Object
{
    double positionX, positionY, positionZ, positionVectorNorm;
    double velocityX, velocityY, velocityZ, velocityVectorNorm;
    double accelerationX, accelerationY, accelerationZ, accelerationVectorNorm;
};
This is just a very simple example, but I hope you can see how even the most basic object composition promotes code reuse. Now think about what would happen if Vector3 contained 30 data members. Does this answer your question?
In code I have been writing recently, I have been forced to directly access a member of an object to call its functions. However, it feels wrong to do this, because it seems to violate encapsulation and the Law of Demeter. Yet the only good alternative I can come up with is to write my own function in the class for every single function of that member I may want to call, which would be very tedious and redundant. Example:
class Object
{
public:
    void setNum(int x)
    {
        num = x;
    }

private:
    int num;
};

class Object2
{
public:
    Object obj;
};

int main()
{
    Object2 obj2;
    obj2.obj.setNum(5);
}
vs
class Object
{
public:
    void setNum(int x)
    {
        num = x;
    }

private:
    int num;
};

class Object2
{
public:
    void setNum(int x)
    {
        obj.setNum(x);
    }

private:
    Object obj;
};

int main()
{
    Object2 obj2;
    obj2.setNum(5);
}
The call to setNum in Object2 is forwarded to the same function in Object. Is such a design considered bad practice? Would accessing obj directly be any better?
I could also have Object2 inherit from Object, but in this case the class I would be inheriting from is not designed to be a base class, it would expose protected members to Object2, and it seems unfitting to begin with because it is not an is-a relationship; composition would be preferred.
My specific situation: I am making a game using SFML. There is a class Ship that of course needs a sprite to represent it in the world. Anytime I want to set or get the ship's position, rotation, etc., I have to either directly access its sprite or write a redundant forwarding function in Ship. The issue is that doing either one of those things seems like a code smell: either violate encapsulation and the Law of Demeter, or write redundant code.
What would be considered best practice here? Am I being overly picky about writing simple forwarding functions? Or is there really nothing wrong with directly accessing the sprite of Ship in this case?
This question, C++ Forward method calls to embed object without inheritance, is in fact exactly what I'm asking. However, neither answer gives a good solution: one is not condoned and apparently has very poor encapsulation, the other simply uses a getter, which seems to be a placebo if anything, and no one addresses my main question, which is: what is the most elegant and acceptable solution?
Which solution is best depends heavily on the underlying semantics of encapsulation. You need to decouple your code as much as possible. I'll describe that with examples:
You have a Ship class and it has a Sprite. You may want to separate game logic and rendering, so all the Ship knows about rendering is that it has a Sprite object which handles it. In this case you are separating responsibilities, and that's good: a simple getter is a good solution.
But if absolute coordinates and rotation must be stored in the sprite, then things change: game logic usually needs them too, so they must be set consistently both inside the Ship and the Sprite. The best way to achieve that is to make the Ship's setPosition and setRotation methods set the Sprite's position and rotation too. That way you simplify the code which works with the Ship, at the expense of the Ship's complexity. Given that a Ship is manipulated from several places, that's worth it. NOTE: You still may want to expose the Sprite through a getter for rendering purposes. And you may want to prevent anything except the Ship from setting the Sprite's position and rotation, if that does not bloat your code too much.
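As a rough sketch of this option (assuming SFML 2.x's sf::Sprite API; the Ship members are invented for illustration):

#include <SFML/Graphics.hpp>

class Ship
{
public:
    void setPosition(float x, float y)
    {
        m_x = x;                      // game-logic state...
        m_y = y;
        m_sprite.setPosition(x, y);   // ...kept consistent with the sprite
    }

    void setRotation(float degrees)
    {
        m_rotation = degrees;
        m_sprite.setRotation(degrees);
    }

    // Read-only access for the rendering code.
    const sf::Sprite& sprite() const { return m_sprite; }

private:
    float m_x = 0, m_y = 0, m_rotation = 0;
    sf::Sprite m_sprite;
};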
Now let's imagine that the Ship class is mostly devoted to rendering. In that situation you may want to hide from outer classes the fact that you use sprites for graphics (because if you change rendering engines, it will be good if you don't need to rewrite anything except the rendering code). In that situation you will want to proxy all setPosition and setRotation calls to the Sprite through the Ship, just to hide the Sprite's existence.
In none of these cases is inheritance used, because inheritance means that the child is a variation of its ancestor. You can say that a BattleShip is a variant of a Ship, but a Ship is not a variant of a Sprite; they are too different and mean different things.
So: if the encapsulated class is too specific to be visible outside, or if it must be operated consistently with its master object, then writing a bunch of proxy methods is a good solution. Otherwise those methods will just bloat your code, and it's better to provide a way to get the nested object. In that case I vote for a getter method instead of a public member.
Whatever the classic Java-school OOP crowd may think, never forget that DRY (Don't Repeat Yourself) is also considered good practice.
And while OOP is just one of many programming paradigms C++ supports, DRY has been the very essence of programming since the very first assembler got macros, long before OOP's inventors were even a distant thought in their own parents' minds.
For all my experience is worth: if respecting good OOP practice forces you to write useless boilerplate of repeated code, it means either
the language you are using is fundamentally broken for the purpose you want to achieve, not supporting the paradigm correctly, or...
OOP is the wrong paradigm for the purpose you are trying to reach. And in C++, chances are that OOP really is the broken paradigm.
To come to your specific problem: delegation (that's how the pattern is commonly named) makes sense if:
the way it is delegating can be changed or
the delegation is to hide part of the member interface.
In your case, you have a function that calls a fixed method reachable from a fixed member.
Is that specific to this particular sample, or will your implementation always be that way by design?
If so, delegation adds no semantic value other than reducing a.b.m() to a.m(). If you are writing more than ... let's say three "do nothing, just forward" functions, you are wasting your time.
If b has 50 methods and you are making it private to delegate only 5 of them, then it makes perfect sense.
There are two schools of thought on how to best extend, enhance, and reuse code in an object-oriented system:
Inheritance: extend the functionality of a class by creating a subclass. Override superclass members in the subclasses to provide new functionality. Make methods abstract/virtual to force subclasses to "fill-in-the-blanks" when the superclass wants a particular interface but is agnostic about its implementation.
Aggregation: create new functionality by taking other classes and combining them into a new class. Attach a common interface to this new class for interoperability with other code.
What are the benefits, costs, and consequences of each? Are there other alternatives?
I see this debate come up on a regular basis, but I don't think it's been asked on Stack Overflow yet (though there is some related discussion). There's also a surprising lack of good Google results for it.
It's not a matter of which is the best, but of when to use what.
In the 'normal' cases, a simple question is enough to find out whether we need inheritance or aggregation:
If the new class is more or less the original class, use inheritance. The new class is now a subclass of the original class.
If the new class must have the original class, use aggregation. The new class now has the original class as a member.
However, there is a big gray area. So we need several other tricks.
If we have used inheritance (or plan to use it) but only use part of the interface, or we are forced to override a lot of functionality to keep the correlation logical, then we have a big nasty smell indicating that we should have used aggregation.
If we have used aggregation (or plan to use it) but find out we need to copy almost all of the functionality, then we have a smell pointing in the direction of inheritance.
To cut it short: we should use aggregation if part of the interface is not used or has to be changed to avoid an illogical situation. We only need inheritance if we need almost all of the functionality without major changes. And when in doubt, use aggregation.
Another possibility, for the case where we have a class that needs part of the functionality of the original class, is to split the original class into a root class and a subclass, and let the new class inherit from the root class. But take care not to create an illogical separation.
Let's add an example. We have a class 'Dog' with methods 'Eat', 'Walk', 'Bark', and 'Play'.
class Dog
    Eat;
    Walk;
    Bark;
    Play;
end;
We now need a class 'Cat' that needs 'Eat', 'Walk', 'Purr', and 'Play'. So first we try to extend it from Dog:
class Cat is Dog
    Purr;
end;
Looks alright, but wait: this cat can Bark (cat lovers will kill me for that), and a barking cat violates the principles of the universe. So we need to override the Bark method so that it does nothing:
class Cat is Dog
    Purr;
    Bark = null;
end;
OK, this works, but it smells bad. So let's try aggregation:
class Cat
    has Dog;
    Eat = Dog.Eat;
    Walk = Dog.Walk;
    Play = Dog.Play;
    Purr;
end;
OK, this is nice. This cat does not bark anymore, not even silently. But it still has an internal dog that wants out. So let's try solution number three:
class Pet
    Eat;
    Walk;
    Play;
end;

class Dog is Pet
    Bark;
end;

class Cat is Pet
    Purr;
end;
This is much cleaner: no internal dogs, and cats and dogs are at the same level. We can even introduce other pets to extend the model. Unless it is a fish, or something else that does not walk; in that case we again need to refactor. But that is something for another time.
At the beginning of the GoF book, they state:
Favor object composition over class inheritance.
This is further discussed here
The difference is typically expressed as the difference between "is a" and "has a". Inheritance, the "is a" relationship, is summed up nicely in the Liskov Substitution Principle. Aggregation, the "has a" relationship, is just that - it shows that the aggregating object has one of the aggregated objects.
Further distinctions exist as well: private inheritance in C++ indicates an "is implemented in terms of" relationship, which can also be modeled by aggregating non-exposed member objects.
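A minimal sketch of the two forms of "is implemented in terms of", using std::vector as the implementation detail (the Stack classes are invented for illustration):

#include <vector>

// 1) Private inheritance: "implemented in terms of" std::vector,
//    which stays invisible to users of Stack.
template <typename T>
class Stack : private std::vector<T>
{
public:
    void push(const T& v) { this->push_back(v); }
    void pop()            { this->pop_back(); }
    const T& top() const  { return this->back(); }
};

// 2) Aggregation of a non-exposed member: the same effect.
template <typename T>
class Stack2
{
public:
    void push(const T& v) { data.push_back(v); }
    void pop()            { data.pop_back(); }
    const T& top() const  { return data.back(); }

private:
    std::vector<T> data;
};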
Here's my most common argument:
In any object-oriented system, there are two parts to any class:
Its interface: the "public face" of the object. This is the set of capabilities it announces to the rest of the world. In a lot of languages, the set is well defined in the "class" itself. Usually these are the method signatures of the object, though it varies a bit by language.
Its implementation: the "behind the scenes" work that the object does to satisfy its interface and provide functionality. This is typically the code and member data of the object.
One of the fundamental principles of OOP is that the implementation is encapsulated (ie:hidden) within the class; the only thing that outsiders should see is the interface.
When a subclass inherits from a superclass, it typically inherits both the implementation and the interface. This, in turn, means that you're forced to accept both as constraints on your class.
With aggregation, you get to choose either implementation or interface, or both -- but you're not forced into either. The functionality of an object is left up to the object itself. It can defer to other objects as it likes, but it's ultimately responsible for itself. In my experience, this leads to a more flexible system: one that's easier to modify.
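For illustration, a minimal sketch of that choice (names are hypothetical): the class commits to an interface but delegates the implementation to a member it can swap at will.

class Printer                     // the interface we announce to the world
{
public:
    virtual ~Printer() = default;
    virtual void print() const = 0;
};

class PlainFormatter              // an implementation we merely use
{
public:
    void format() const { /* ... */ }
};

// Report commits to the Printer interface, but is free to delegate its
// implementation to PlainFormatter -- or swap in something else later.
class Report : public Printer
{
public:
    void print() const override { formatter.format(); }

private:
    PlainFormatter formatter;
};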
So, whenever I'm developing object-oriented software, I almost always prefer aggregation over inheritance.
I gave an answer to "Is a" vs "Has a" : which one is better?.
Basically I agree with other folks: use inheritance only if your derived class truly is the type you're extending, not merely because it contains the same data. Remember that inheritance means the subclass gains the methods as well as the data.
Does it make sense for your derived class to have all the methods of the superclass? Or do you just quietly promise yourself that those methods should be ignored in the derived class? Or do you find yourself overriding methods from the superclass, making them no-ops so no one calls them inadvertently? Or giving hints to your API doc generation tool to omit the method from the doc?
Those are strong clues that aggregation is the better choice in that case.
I see a lot of "is-a vs. has-a; they're conceptually different" responses on this and the related questions.
The one thing I've found in my experience is that trying to determine whether a relationship is "is-a" or "has-a" is bound to fail. Even if you can correctly make that determination for the objects now, changing requirements mean that you'll probably be wrong at some point in the future.
Another thing I've found is that it's very hard to convert from inheritance to aggregation once there's a lot of code written around an inheritance hierarchy. Just switching from a superclass to an interface means changing nearly every subclass in the system.
And, as I mentioned elsewhere in this post, inheritance tends to be less flexible than aggregation.
So, you have a perfect storm of arguments against inheritance whenever you have to choose one or the other:
Your choice will likely be the wrong one at some point
Changing that choice is difficult once you've made it.
Inheritance tends to be a worse choice as it's more constraining.
Thus, I tend to choose aggregation -- even when there appears to be a strong is-a relationship.
The question is normally phrased as Composition vs. Inheritance, and it has been asked here before.
I wanted to make this a comment on the original question, but 300 characters bites [;<).
I think we need to be careful. First, there are more flavors than the two rather specific examples made in the question.
Also, I suggest that it is valuable not to confuse the objective with the instrument. One wants to make sure that the chosen technique or methodology supports achievement of the primary objective, but I don't think out-of-context which-technique-is-best discussion is very useful. It does help to know the pitfalls of the different approaches along with their clear sweet spots.
For example, what are you out to accomplish, what do you have available to start with, and what are the constraints?
Are you creating a component framework, even a special purpose one? Are interfaces separable from implementations in the programming system, or is it accomplished by a practice using a different sort of technology? Can you separate the inheritance structure of interfaces (if any) from the inheritance structure of classes that implement them? Is it important to hide the class structure of an implementation from the code that relies on the interfaces the implementation delivers? Are there multiple implementations to be usable at the same time, or is the variation more over time as a consequence of maintenance and enhancement? This and more needs to be considered before you fixate on a tool or a methodology.
Finally, is it that important to lock distinctions in the abstraction and how you think of it (as in is-a versus has-a) to different features of the OO technology? Perhaps so, if it keeps the conceptual structure consistent and manageable for you and others. But it is wise not to be enslaved by that and the contortions you might end up making. Maybe it is best to stand back a level and not be so rigid (but leave good narration so others can tell what's up). [I look for what makes a particular portion of a program explainable, but some times I go for elegance when there is a bigger win. Not always the best idea.]
I'm an interface purist, and I am drawn to the kinds of problems and approaches where interface purism is appropriate, whether building a Java framework or organizing some COM implementations. That doesn't make it appropriate for everything, not even close to everything, even though I swear by it. (I have a couple of projects that appear to provide serious counter-examples against interface purism, so it will be interesting to see how I manage to cope.)
I'll cover the where-these-might-apply part. Here's an example of both, in a game scenario. Suppose, there's a game which has different types of soldiers. Each soldier can have a knapsack which can hold different things.
Inheritance here?
There's a marine, a green beret, and a sniper; these are types of soldiers. So there's a base class Soldier, with Marine, GreenBeret, and Sniper as derived classes.
Aggregation here?
The knapsack can contain grenades, guns (of different types), a knife, a medikit, etc. A soldier can be equipped with any of these at any given point in time, and he can also have a bulletproof vest, which acts as armor when attacked so that his injury decreases by a certain percentage. The Soldier class contains an object of the bulletproof-vest class and a knapsack class which contains references to these items.
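A rough sketch of how that might look (all names invented for illustration):

#include <memory>
#include <vector>

class Item { public: virtual ~Item() = default; };
class Grenade : public Item {};
class Medikit : public Item {};

class Knapsack                     // aggregation: holds references to items
{
public:
    void Add(std::unique_ptr<Item> item) { items.push_back(std::move(item)); }

private:
    std::vector<std::unique_ptr<Item>> items;
};

class BulletproofVest {};

class Soldier                      // composition: every soldier has his gear
{
public:
    virtual ~Soldier() = default;

private:
    Knapsack knapsack;
    BulletproofVest vest;
};

class Marine     : public Soldier {};   // inheritance: a Marine is-a Soldier
class GreenBeret : public Soldier {};
class Sniper     : public Soldier {};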
I think it's not an either/or debate. It's just that:
is-a (inheritance) relationships occur less often than has-a (composition) relationships.
Inheritance is harder to get right, even when it's appropriate to use, so due diligence has to be taken: it can break encapsulation, encourage tight coupling by exposing implementation details, and so forth.
Both have their place, but inheritance is riskier.
Although of course it wouldn't make sense to have a Shape class 'having' Point and Square classes; a Square 'is-a' Shape, so here inheritance is due.
People tend to think about inheritance first when trying to design something extensible; that is what's wrong.
Favouring happens when both candidates qualify: A and B are options, and you favour A. The reason is that composition offers more extension/flexibility possibilities than generalization. This extension/flexibility refers mostly to runtime/dynamic flexibility.
The benefit is not immediately visible; to see it, you need to wait for the next unexpected change request. So in most cases, those who stick to generalization fail when compared with those who embrace composition (except in one obvious case, mentioned later). Hence the rule. From a learning point of view, if you can implement dependency injection successfully, then you should know which one to favour and when. The rule helps you make a decision as well: if you are not sure, then select composition.
Summary: Composition: the coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. Generalization: from an API point of view, defining that a method can be overridden is a stronger commitment than defining that a method can be called (there are very few occasions when generalization wins). And never forget that with composition you are using inheritance too, from an interface instead of a big class.
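As a hedged illustration of that last point, here is a minimal dependency-injection sketch (the Engine/Car names are invented): composition through a small interface, with the concrete part plugged in from outside.

#include <memory>

class Engine                       // the small interface we compose against
{
public:
    virtual ~Engine() = default;
    virtual void Start() = 0;
};

class PetrolEngine : public Engine
{
public:
    void Start() override { /* ... */ }
};

class Car
{
public:
    // The concrete engine is injected from outside; Car only knows the
    // Engine interface, so the behaviour can be re-wired at runtime.
    explicit Car(std::unique_ptr<Engine> e) : engine(std::move(e)) {}
    void Drive() { engine->Start(); }

private:
    std::unique_ptr<Engine> engine;
};

// Usage: Car car(std::make_unique<PetrolEngine>());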
Both approaches are used to solve different problems. You don't always need to aggregate over two or more classes when inheriting from one class.
Sometimes you do have to aggregate a single class because that class is sealed or has otherwise non-virtual members you need to intercept, so you create a proxy layer. That obviously isn't valid in terms of inheritance, but so long as the class you are proxying has an interface you can subscribe to, this can work out fairly well.
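A minimal sketch of that proxy idea (the Legacy/IComputer names are hypothetical):

class Legacy                        // sealed-ish: nothing here is virtual
{
public:
    int Compute(int x) { return x * 2; }
};

class IComputer                     // the interface we can subscribe to
{
public:
    virtual ~IComputer() = default;
    virtual int Compute(int x) = 0;
};

class LegacyProxy : public IComputer
{
public:
    int Compute(int x) override
    {
        // Interception point: log, cache, or adjust before forwarding.
        return wrapped.Compute(x);
    }

private:
    Legacy wrapped;
};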
I have a chess variants engine that plays suicide chess and losers chess along with normal chess. I might, over time, add more variants to my engine. The engine is implemented completely in C++ with proper usage of OOP. My question is related to design of such a variant engine.
Initially the project started as a suicide-only engine, and over time I added other flavors. For adding new variants, I first experimented with runtime polymorphism in C++. For instance, a MoveGenerator abstract class had two subclasses, SuicideMoveGenerator and NormalMoveGenerator, and depending on the type of game chosen by the user, a factory would instantiate the right subclass. But I found this to be much slower - obviously because instantiating classes containing virtual functions and calling virtual functions in tight loops are both quite inefficient.
But then it occurred to me to use C++ templates with template specialization to separate the logic for different variants with maximum reuse of code. This also seemed very logical, because dynamic binding is not really necessary in this context: once you choose the type of game, you basically stick with it until the end of the game. C++ template specialization provides exactly this - static polymorphism. The template parameter is either SUICIDE, LOSERS, or NORMAL.
enum GameType { LOSERS, NORMAL, SUICIDE };
So once the user selects a game type, the appropriate game object is instantiated and everything called from there is appropriately templatized. For instance, if the user selects suicide chess, let's say:
ComputerPlayer<SUICIDE>
object is instantiated, and that instantiation is basically linked into the whole control flow statically. Functions in ComputerPlayer<SUICIDE> work with MoveGenerator<SUICIDE>, Board<SUICIDE>, and so on, while the corresponding NORMAL ones work appropriately in the same way.
On the whole, this lets me instantiate the right template-specialized class at the beginning, and without any other if conditions anywhere, the whole thing works perfectly. The best thing is that there is no performance penalty at all!
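A condensed sketch of such a scheme might look like this (the member functions and bodies are invented; only the enum comes from the question):

enum GameType { LOSERS, NORMAL, SUICIDE };

template <GameType G>
struct MoveGenerator
{
    void Generate() { /* normal-chess move rules */ }
};

template <>
struct MoveGenerator<SUICIDE>
{
    void Generate() { /* suicide-chess rules: captures are forced */ }
};

template <GameType G>
class ComputerPlayer
{
public:
    void Think() { generator.Generate(); }  // resolved at compile time, no virtual dispatch

private:
    MoveGenerator<G> generator;
};

int main()
{
    ComputerPlayer<SUICIDE> player;  // chosen once, at game setup
    player.Think();
}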
The main downside with this approach, however, is that using templates makes your code a bit harder to read. Also, template specialization, if not handled appropriately, can lead to major bugs.
I wonder what other variant engine authors normally do for separation of logic (with good reuse of code)? I found C++ template programming quite suitable, but if there's anything better out there, I would be glad to embrace it. In particular, I checked the Fairymax engine by Dr. H. G. Muller, but that uses config files for defining game rules. I don't want to do that, because many of my variants have different extensions, and by making the engine generic to the level of config files it might not grow strong. Another popular engine, Sjeng, is littered with if conditions everywhere, and I personally feel that's not a good design.
Any new design insights would be very useful.
"Calling virtual functions in tight loops are inefficient"
I would actually be pretty surprised if this caused any real slowdown: if all the objects in the loop have the same dynamic type, then I would expect the CPU to fetch the corresponding instructions from its L1 cache and thus not suffer much.
However there is one part that worries me:
"obviously because instantiating classes containing virtual functions [is] quite inefficient"
Now... I am really surprised.
The cost of instantiating a class with virtual functions is nearly indistinguishable from the cost of instantiating a class without any virtual functions: it's one more pointer to set, and that's all (on popular compilers, this corresponds to the _vptr).
I surmise that your problem lies elsewhere. So I am going to take a wild guess:
do you, by any chance, have a lot of dynamic instantiation going on (calls to new)?
If that is the case, you would gain much by removing them.
There is a design pattern called Strategy which would be eminently suitable for your precise situation. The idea of this pattern is akin, in fact, to the use of virtual functions, but it actually externalizes those functions.
Here is a simple example:
class StrategyInterface
{
public:
    Move GenerateMove(Player const& player) const
    {
        return GenerateMoveImpl(player);   // non-virtual interface idiom
    }

private:
    virtual Move GenerateMoveImpl(Player const& player) const = 0;
};

class SuicideChessStrategy: public StrategyInterface
{
    virtual Move GenerateMoveImpl(Player const& player) const;   // concrete override
};
// Others
Once implemented, you need a function to get the right strategy:
StrategyInterface& GetStrategy(GameType gt)
{
    static std::array<StrategyInterface*, 3> strategies
        = { new SuicideChessStrategy(), .... };
    return *(strategies[gt]);
}
And finally, you can delegate the work without using inheritance for the other structures:
class Player
{
public:
    Move GenerateMove() const { return GetStrategy(gt).GenerateMove(*this); }

private:
    GameType gt;
};
The cost is pretty much similar to that of using virtual functions; however, you no longer need dynamically allocated memory for the basic objects of your game, and stack allocation is a LOT faster.
I'm not quite sure if this is a fit, but you may be able to achieve static polymorphism via the CRTP (curiously recurring template pattern) with some slight modifications to your original design.
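For what it's worth, a minimal CRTP sketch under that assumption (names invented to echo the question):

template <typename Derived>
class MoveGeneratorBase
{
public:
    void Generate()
    {
        // Statically dispatch to the derived implementation: no vtable involved.
        static_cast<Derived*>(this)->GenerateImpl();
    }
};

class SuicideMoveGenerator : public MoveGeneratorBase<SuicideMoveGenerator>
{
public:
    void GenerateImpl() { /* suicide-chess rules */ }
};

class NormalMoveGenerator : public MoveGeneratorBase<NormalMoveGenerator>
{
public:
    void GenerateImpl() { /* normal-chess rules */ }
};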
The more I get into writing unit tests the more often I find myself writing smaller and smaller classes. The classes are so small now that many of them have only one public method on them that is tied to an interface. The tests then go directly against that public method and are fairly small (sometimes that public method will call out to internal private methods within the class). I then use an IOC container to manage the instantiation of these lightweight classes because there are so many of them.
Is this typical of trying to do things in more of a TDD manner? I fear that I have now refactored a legacy 3,000-line class that had one method in it into something that is also difficult to maintain on the other side of the spectrum, because there are now literally about 100 different class files.
Is what I am doing going too far? I am trying to follow the single responsibility principle with this approach but I may be treading into something that is an anemic class structure where I do not have very intelligent "business objects".
This multitude of small classes would drive me nuts. With this design style it becomes really hard to figure out where the real work gets done. I am not a fan of having a ton of interfaces each with a corresponding implementation class, either. Having lots of "IWidget" and "WidgetImpl" pairings is a code smell in my book.
Breaking up a 3,000 line class into smaller pieces is great and commendable. Remember the goal, though: it's to make the code easier to read and easier to work with. If you end up with 30 classes and interfaces you've likely just created a different type of monster. Now you have a really complicated class design. It takes a lot of mental effort to keep that many classes straight in your head. And with lots of small classes you lose the very useful ability to open up a couple of key files, pick out the most important methods, and get an idea of what the heck is going on.
For what it's worth, though, I'm not really sold on test-driven design. Writing tests early, that's sensible. But reorganizing and restructuring your class design so it can be more easily unit tested? No thanks. I'll make interfaces only if they make architectural sense, not because I need to be able to mock up some objects so I can test my classes. That's putting the cart before the horse.
You might have gone a bit too far if you are asking this question. Having only one public method in a class isn't bad as such, if that class has a clear responsibility/function and encapsulates all logic concerning that function, even if most of it is in private methods.
When refactoring such legacy code, I usually try to identify the components in play at a high level that can be assigned distinct roles/responsibilities, and separate them into their own classes. I think about which functions should be which component's responsibility and move the methods into that class.
You write a class so that instances of the class maintain state. You put this state in a class because all the state in the class is related. You have functions to manage this state so that invalid permutations of state can't be set (the infamous square that has members width and height, but if width doesn't equal height it's not really a square).
If you don't have state, you don't need a class, you could just use free functions (or in Java, static functions).
So, the question isn't "should I have one function?" but rather "what state-ful entity does my class encapsulate?"
Maybe you have one function that sets all the state, and you should make it more granular, so that, e.g., instead of having void Rectangle::setWidthAndHeight(int x, int y), you have a setWidth and a separate setHeight.
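A tiny sketch of the more granular version, with each setter also guarding the state (the validity checks are mine, for illustration):

class Rectangle
{
public:
    void setWidth(int w)  { if (w > 0) width = w; }   // each setter guards the state
    void setHeight(int h) { if (h > 0) height = h; }
    int  area() const     { return width * height; }

private:
    int width  = 1;
    int height = 1;
};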
Perhaps you have a ctor that sets things up, and a single function that doesIt, whatever "it" is. Then you have a functor, and a single doIt might make sense. E.g., class Add implements Operation { Add( int howmuch); Operand doIt(Operand rhs);}
(But then you may find that you really want something like the Visitor Pattern -- a pure functor is more likely if you have purely value objects, Visitor if they're arranged in a tree and are related to each other.)
Even if these many small, single-function objects are the correct level of granularity, you may want something like a Facade pattern to compose often-used complex operations out of the primitive ones.
There's no one answer. If you really have a bunch of functors, it's cool. If you're really just making each free function a class, it's foolish.
The real answer lies in answering the question, "what state am I managing, and how well do my classes model my problem domain?"
I'd be speculating if I gave a definite answer without looking at the code.
However, it sounds like you're concerned, and that is a definite flag for reviewing the code. The short answer to your question comes back to the definition of Simple Design: a minimal number of classes and methods is one of its criteria. If you feel like you can take away some elements without losing the other desirable attributes, go ahead and collapse/inline them.
Some pointers to help you decide:
Do you have a good check for "Single Responsibility"? It's deceptively difficult to get right, but it is a key skill (I still don't see it like the masters do). It doesn't necessarily translate to one-method classes. A good yardstick is 5-7 public methods per class, with each class having 0-5 collaborators. Also, to validate against the SRP, ask what could drive a change into this class. If there are multiple unrelated answers (e.g. a change in the packet structure (parsing) plus a change in the packet-contents-to-action map (command dispatcher)), maybe the class needs to be split. At the other extreme, if you feel that a change in the packet structure can affect 4 different classes, you've run off the other cliff; maybe you need to combine them into one cohesive class.
If you have trouble naming the concrete implementations, maybe you don't need the interface; e.g. XXXImpl classes implementing XXX need to be looked at. I recently learned of a naming convention where the interface describes a role and the implementation is named by the technology used to implement the role (or, failing that, by what it does), e.g. XmppAuction implements Auction (or SniperNotifier implements AuctionEventListener).
Lastly, are you finding it difficult to add, modify, or test existing code (e.g. test setup is long or painful)? Those can be signs that you need to go refactoring.
In a couple of recent projects that I took part in, I was almost addicted to the following coding pattern (I'm not sure if there is a proper name for it, but anyway...):
Let's say some object is in some determined state and we want to change this state from outside. These changes could mean any behaviour and could invoke any algorithms, but the fact is that they focus on changing the state (member state, data state, etc.) of some object.
And let's call one discrete way of changing such an object a Mutator. Mutators are applied once (generally) and have some internal method like apply(Target& target, ...), which instantly changes the state of the object (in fact, they're a sort of functional object).
They can also easily be assembled into chains and applied one by one (Mutator m1, m2, ...); they can also derive from some basic BasicMutator with a virtual void apply(...) method.
I have introduced classes called InnerMutator and ExplicitMutator, which differ in terms of access: the first of them can also change the internal state of the object and should be declared a friend (friend InnerMutator::access;).
In those projects my logic turned to work the following way:
Prepare available mutators, choose which to apply
Create and set the object to some determined state
foreach (mutator) mutator.apply(object);
Now the question.
This scheme turns out to work well and (to me) seems like an example of a non-standard but useful design pattern.
What makes me feel uncomfortable is the InnerMutator stuff. I don't think declaring a mutator a friend of every object whose state could be changed is a good idea, and I want to find an appropriate alternative.
Could this situation be solved in terms of Mutators, or could you advise some alternative pattern with the same result?
Thanks.
I hope this isn't taken offensively, but I can't help but think this design is an overly fancy solution for a rather simple problem.
Why do you need to aggregate mutators, for instance? One can merely write:
template <class T>
void mutate(T& t)
{
    t.mutate1(...);
    t.mutate2(...);
    t.mutate3(...);
}
There's your aggregate mutator, only it's a simple function template relying on static polymorphism rather than a combined list of mutators making one super-mutator.
I once did something that might have been rather similar: I made function objects which could be combined by operator && into super function objects that applied both operands (in order from left to right), so that one could write code like:
foreach(v.begin(), v.end(), attack && burn && drain);
It was really neat and seemed really clever to me at the time, and it was pretty efficient because it generated new function objects on the fly (no runtime mechanisms involved). But when my co-workers saw it, they thought I was insane. In retrospect, I was trying to cram everything into a functional style of programming, which has its uses but probably shouldn't be over-relied on in C++. It would have been easy enough to write:
BOOST_FOREACH(Something& something, v)
{
    attack(something);
    burn(something);
    drain(something);
}
Granted, it is more lines of code, but it has the benefit of being centralized and easy to debug, and it doesn't introduce alien concepts to other people working with my code. I've found it much more beneficial to focus on tight logic rather than trying to forcefully reduce lines of code.
I recommend you think deeply about this and really consider whether it's worth continuing with this mutator-based design, no matter how clever and tight it is. If other people can't quickly understand it, then unless you have the authority of the boost authors, it's going to be hard to convince people to like it.
As for InnerMutator, I think you should avoid it like the plague. Having external mutator classes which can modify a class's internals as directly as they can here defeats a lot of the purpose of having internals at all.
Would it not be better to simply make those Mutators methods of the classes they're changing? If there's common functionality you want several classes to have, then why not make them inherit from the same base class? Doing this would deal with all the inner-mutator discomfort, I would imagine.
If you want to queue up changes, you could store them as sets of function calls to the methods within the classes.
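One possible reading of that suggestion, sketched with std::function (the Widget class and its methods are hypothetical):

#include <functional>
#include <vector>

class Widget
{
public:
    void SetColor(int c)      { color = c; }
    void Resize(int w, int h) { width = w; height = h; }

private:
    int color = 0, width = 0, height = 0;
};

int main()
{
    // Queue the changes as callables...
    std::vector<std::function<void(Widget&)>> pending;
    pending.push_back([](Widget& w) { w.SetColor(3); });
    pending.push_back([](Widget& w) { w.Resize(640, 480); });

    // ...and apply them to the target later, in order.
    Widget target;
    for (auto& change : pending)
        change(target);
}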