Programming pattern for components that are toggleable at runtime - C++

I'm wondering if there is some kind of logical programming pattern or structure that I should use when a component should sometimes be used at runtime and other times not. The obvious simple solution is to use if-else statements everywhere, but I'm trying to avoid littering my code with them: once the component is toggled on, it will more than likely stay on for a while, and I wonder if it's worth rechecking whether the same component is active all over the place when the answer will most likely not have changed between checks.
Thanks
A brief example of what I'm trying to avoid
class MainClass
{
public:
    // constructors, destructors, etc.
private:
    ComponentClass m_TogglableComponent;
};
// somewhere else in the codebase
if (m_TogglableComponent.IsActive())
{
    // do stuff
}

// somewhere totally different in the codebase
if (m_TogglableComponent.IsActive())
{
    // do some different stuff
}

Looks like you're headed towards a feature toggle. This is a common occurrence when there's a piece of functionality that you need to be able to toggle on or off at run time. The key piece of insight with this approach is to use polymorphism instead of if/else statements, leveraging object oriented practices.
Martin Fowler details an approach here, as well as his rationale: http://martinfowler.com/articles/feature-toggles.html
But for a quick answer: instead of having state in your ComponentClass that tells observers whether it's active or not, you'll want to make a base class, AbstractComponentClass, and two derived classes, ActiveComponentClass and InactiveComponentClass. Bear in mind that m_TogglableComponent is currently an automatic member; you'll need to make it a pointer under this new setup.
AbstractComponentClass will define pure virtual methods that both derived classes need to implement. In ActiveComponentClass you put your normal functionality, as if the feature were enabled. In InactiveComponentClass you do as little as possible, just enough to make the component invisible as far as MainClass is concerned: void functions will do nothing, and functions returning values will return neutral values.
The last step is creating an instance of one of these two classes. This is where you bring in dependency injection. The constructor of MainClass takes a pointer of type AbstractComponentClass; from there on it doesn't care whether it's active or inactive, it just calls the virtual functions. Whoever owns or controls MainClass injects the kind you want, either active or inactive, which could be driven by configuration or however else your system decides when to toggle.
If you need to change the behaviour at run time, you'll also need a setter method that takes another AbstractComponentClass pointer and replaces the one from the constructor.
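As a hedged sketch of how that might look (DoStuff and Compute are hypothetical operations standing in for whatever ComponentClass really does; the class names follow the suggestions above):

#include <memory>
#include <utility>

// Interface both variants implement.
class AbstractComponentClass
{
public:
    virtual ~AbstractComponentClass() = default;
    virtual void DoStuff() = 0;   // hypothetical operation
    virtual int Compute() = 0;    // hypothetical operation returning a value
};

// Normal behaviour, used when the feature is toggled on.
class ActiveComponentClass : public AbstractComponentClass
{
public:
    void DoStuff() override { /* real work */ }
    int Compute() override { return 42; /* real result */ }
};

// Null-object variant: does as little as possible.
class InactiveComponentClass : public AbstractComponentClass
{
public:
    void DoStuff() override {}            // no-op
    int Compute() override { return 0; }  // neutral value
};

class MainClass
{
public:
    explicit MainClass(std::unique_ptr<AbstractComponentClass> component)
        : m_TogglableComponent(std::move(component)) {}

    // Setter for swapping the implementation at run time.
    void SetComponent(std::unique_ptr<AbstractComponentClass> component)
    {
        m_TogglableComponent = std::move(component);
    }

    void Update()
    {
        // No IsActive() checks anywhere; the virtual call does the right thing.
        m_TogglableComponent->DoStuff();
    }

private:
    std::unique_ptr<AbstractComponentClass> m_TogglableComponent;
};

Whoever constructs MainClass decides which variant to inject, e.g. MainClass m(std::make_unique<ActiveComponentClass>());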

Related

C++ What design pattern do I need to use?

I have a question about design patterns. My project is getting bigger and bigger, and I feel like my current structure is imprecise in terms of maintainability, scalability, and readability. The way my project is currently structured, I have a MainLoopFile where I poll my events, handle the events, draw the textures, and, most importantly, instantiate external classes such as MyClassA up to some arbitrary MyClassZ.
The issue is that I'm making only one instance of each MyClassX, just so that I can access their member functions fooA(), fooB() (via MyClassX x; x.fooA();) from within the main loop file.
Is this the best approach to do this? I feel like
Write a class
Make an instance of the class
Call the functions of that class
methodology is useful whenever one needs to make multiple instances of that class. In my case, I'm only making one. Here is an example
MyClassA.hpp

class MyClassA {
public:
    void fooA();
    void fooB();
};

MyClassB.hpp

class MyClassB {
public:
    void fooA();
    void fooB();
};

...

MyClassZ.hpp

class MyClassZ {
public:
    void fooA();
    void fooB();
};

MainLoopFile.cpp

#include "MyClassA.hpp"
#include "MyClassB.hpp"
...
#include "MyClassZ.hpp"

int main() {
    MyClassA a;
    MyClassB b;
    ...
    MyClassZ z;
    while (running) {   // main event loop
        Event event;
        // poll event, then do operations on a, b, ..., z
    }
}
Is this design pattern correct? Is there anything better?
Things I have considered
Making namespaces, although I don't understand whether that is the correct solution
Making some of the classes static (since only one instance is necessary), but then half of my project's content becomes static, and it just feels odd
Reading online about design patterns, but I was not able to find an answer to this specific problem
The model you describe is called a 'singleton', and it's a common design pattern. The debate between 'raw code' not in any class, singletons, and static methods of a class is a long-running, lively one.
A singleton is better if at some point you might need two: then you simply create another instance. If it's all static, you have a lot of rewriting to do.
The thing that encapsulates the start-up logic ('MainFile' in your case) is clearly best static, since you are only running one program.
Things that encapsulate objects that you operate on are probably best as singletons.
The advantage of statics is that they are always there; you don't have to hand pointers/references around. You can just say Froodle::Noodle(widget) and there you are. The downside is that it's a huge reorganization if you suddenly need two Froodles.
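To make the trade-off concrete, a hypothetical sketch (Froodle, Noodle, and widget are the answer's placeholder names, not real APIs):

#include <iostream>

// All-static version: convenient (StaticFroodle::Noodle(w) from anywhere),
// but if you ever need two independent Froodles, every call site changes.
struct StaticFroodle {
    static void Noodle(int widget) { std::cout << "noodling " << widget << "\n"; }
};

// Instance version behind a singleton accessor: call sites go through
// instance(), so a second Froodle later is just another instance to
// create and pass around.
class Froodle {
public:
    static Froodle& instance() {
        static Froodle s;  // Meyers singleton, constructed on first use
        return s;
    }
    void Noodle(int widget) { std::cout << "noodling " << widget << "\n"; }
private:
    Froodle() = default;
};

int main() {
    StaticFroodle::Noodle(1);
    Froodle::instance().Noodle(2);
}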
Just so I understand what you're doing -- you're using MyClassA .. Z in order to organize your code. That seems reasonable. But you don't actually have data associated with them.
The two most obvious answers, if I understand your problem correctly, could be:
Use namespaces and free functions within the namespaces
Use static methods so at least you're not instantiating empty objects
For the latter, you could then just call MyClassA::foo(); without having an instance of MyClassA.
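A quick sketch of both options (MyModuleA and the foo bodies are placeholders, not from the question):

#include <iostream>

// Option 1: a namespace with free functions -- no object needed at all.
namespace MyModuleA {
    inline void fooA() { std::cout << "MyModuleA::fooA\n"; }
    inline void fooB() { std::cout << "MyModuleA::fooB\n"; }
}

// Option 2: static member functions -- callable without an instance.
struct MyClassA {
    static void fooA() { std::cout << "MyClassA::fooA\n"; }
    static void fooB() { std::cout << "MyClassA::fooB\n"; }
};

int main() {
    MyModuleA::fooA();  // free function in a namespace
    MyClassA::fooA();   // static method, no MyClassA object constructed
}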
When asking a design question, the most important part is "what you want to achieve and what the limitations are", not "what you've done" (unless you're facing a legacy codebase, in which case the current architecture is the limitation).
Please explain the context: what do classes A through Z do?
Do they hold state, or do they just share the same function names?
Why do you need so many different flavors of the same function name?
You may combine multiple patterns to achieve what you want to do.
By your sample code, I presume you want an event loop system (event loop is a design pattern as well!).
You may have multiple event handler instances. Making them singletons might be what you want, but I don't suggest making them singletons if your instances depend on other instances.
To handle events, I think the Observer pattern is too big for this case. I'd suggest the Strategy pattern here, since you might want to use different handlers for different event types.
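A minimal sketch of that Strategy idea, assuming a simple Event struct and EventType enum (all names here are hypothetical):

#include <functional>
#include <map>
#include <utility>

enum class EventType { KeyDown, MouseMove, Quit };

struct Event {
    EventType type;
    // ... payload fields would go here
};

// One interchangeable handler ("strategy") per event type, selected at
// run time by lookup instead of a growing if/else chain.
class EventDispatcher {
public:
    using Handler = std::function<void(const Event&)>;

    void setHandler(EventType type, Handler h) {
        handlers_[type] = std::move(h);
    }

    void dispatch(const Event& e) {
        auto it = handlers_.find(e.type);
        if (it != handlers_.end()) it->second(e);
    }

private:
    std::map<EventType, Handler> handlers_;
};

Inside the main loop, dispatch(event) replaces the per-type branching; registering a handler looks like dispatcher.setHandler(EventType::Quit, [](const Event&) { /* shut down */ });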

Is using an empty base class justified in this example?

I'm writing a Window class which propagates different types of events, listed in
enum Event {WINDOW_CLOSE=0x1, WINDOW_REDRAW=0x2, MOUSE_MOVE=0x4, ...};
to objects which have registered for notification with the window. For each type of event, I have an abstract class which any object must extend in order to allow notification. To react to, say, a MOUSE_MOVE event, my object would inherit from MouseMoveListener, which has a process_mouse_move_event() method which is called by Window. Listening to many events can be combined by extending multiple of these classes, which all inherit from the EventListener base class. To register an object, I would call
void Window::register(EventListener* object, int EventTypes)
{
    if (EventTypes & WINDOW_CLOSE)
        /* dynamic_cast object to WindowCloseListener*, add to internal list
           of all WindowCloseListeners if the cast works, else raise error */
    if (EventTypes & MOUSE_MOVE)
        /* dynamic_cast object to MouseMoveListener*, add to internal list
           of all MouseMoveListeners if the cast works, else raise error */
    ...
}
This works fine, but my gripe is that EventListener is completely empty and that seems code smelly to me. I know I could avoid this by removing EventListener altogether and having a separate Window::register for each type of event, but I feel that this would blow up my interface needlessly (especially since methods other than register might crop up with the same problem). So I guess I am looking for answers that either say:
"You can keep doing it the way you do, because ..." or
"Introduce the separate Window::register methods anyway, because ..." or of course
"You are doing it all wrong, you should ...".
EDIT:
From the link in Igor's comment: what I do above only works if there is at least one virtual member in EventListener (for example a virtual destructor), so the class is not technically completely empty.
EDIT 2:
I prematurely accepted n.m.'s solution as one of the type "I'm doing it all wrong". However, it is of the second type. Even if I can call EventListener->register(Window&) polymorphically, Window needs to implement a highly redundant interface (in terms of declared methods) that allows EventListeners to register for selective notification. This is equivalent to my alternative solution described above, only with the additional introduction of the EventListener class for no good reason. In conclusion, the canonical answer seems to be:
Don't do dynamic_cast + empty base class just to avoid declaring many similar functions, it will hurt you when maintaining the code later. Write the many functions.
EDIT 3:
I found a solution (using templates) which is satisfactory for me. It does not use an empty base class any more and it does not exhibit the maintenance problem pointed out by n.m.
object->registerWindow(this, EventTypes);
Of course you need to implement registerWindow for all EventListener descendants. Let them check for the event types that are relevant to them.
UPDATE
If this means you need to redesign your code, then you need to redesign your code. Why is it so? Because dynamic_cast is not a proper way to do switch-on-types. It is not a proper way because every time you add a class in your hierarchy, you need to go over and possibly update all switches-by-dynamic-cast in your old code. This becomes very messy and unmaintainable very quickly, and this is exactly the reason why virtual functions were invented.
If you do your switch-on-types with virtual functions, every time you change your hierarchy you have to do... nothing. The virtual call mechanism will take care of your changes.
This is what I ended up doing:
template <int EventType>
void register_(EventListener<EventType>* listener)
{
    // do stuff with listener, using more templates
}
It turned out that static polymorphism was better suited for my needs - I just wanted to avoid writing
register_mouse_motion_event(...)
register_keyboard_event(...)
and so on. This approach also nicely eliminates the need for an empty base class.
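The post doesn't show the full templated version; here is one plausible reconstruction along those lines (the per-type storage via static vectors is my assumption, purely to keep the sketch short; a real Window would likely hold the lists as members):

#include <vector>

enum Event { WINDOW_CLOSE = 0x1, WINDOW_REDRAW = 0x2, MOUSE_MOVE = 0x4 };

// One listener interface per event type, tagged by a non-type template
// parameter, so no empty common base class is needed.
template <int EventType>
class EventListener {
public:
    virtual ~EventListener() = default;
    virtual void process_event() = 0;
};

class Window {
public:
    template <int EventType>
    void register_(EventListener<EventType>* listener) {
        listeners<EventType>().push_back(listener);
    }

    template <int EventType>
    void notify() {
        for (auto* l : listeners<EventType>()) l->process_event();
    }

private:
    // One list per event type, instantiated on demand.
    template <int EventType>
    static std::vector<EventListener<EventType>*>& listeners() {
        static std::vector<EventListener<EventType>*> v;
        return v;
    }
};

// A class listening to mouse moves:
struct MyWidget : EventListener<MOUSE_MOVE> {
    void process_event() override { /* react to the mouse move */ }
};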

Is it better to take object argument or use member object?

I have a class which I can write like this:
class FileNameLoader
{
public:
    virtual bool LoadFileNames(PluginLoader&) = 0;
    virtual ~FileNameLoader() {}
};
Or this:
class FileNameLoader
{
public:
    virtual bool LoadFileNames(PluginLoader&, Logger&) = 0;
    virtual ~FileNameLoader() {}
};
The first one assumes that there is a member Logger& in the implementation of FileNameLoader. The second one does not. However, I have some classes which have a lot of methods which internally use Logger. So the second method would make me write more code in that case. Logger is a singleton for the moment. My guess is that it will remain that way. What is the more 'beautiful' of the two and why? What is the usual practice?
EDIT:
What if this class was not named Logger? :). I have a Builder also. How about then?
I don't see what extra advantage approach two has over one (even considering unit testing!). In fact, with two you have to ensure that everywhere you call a particular method, a Logger is available to pass in - and that could make things complicated...
Once you construct an object with the logger, do you really see the need to change it? If not, why bother with approach two?
I prefer the second method as it allows for more robust black box testing. Also it makes the interface of the function clearer (the fact that it uses such a Logger object).
The first thing is to be sure that the Logger dependency is being provided by the user in either case. Presumably in the first case, the constructor for FileNameLoader takes a Logger& parameter?
In no case would I, under any circumstances, make the Logger a Singleton. Never, not ever, no way, no how. It's either an injected dependency, or else a Log free function, or, if you absolutely must, a global reference to a std::ostream object as your universal default logger. A Singleton Logger class is a way of creating hurdles to testing, for absolutely no practical benefit. So what if some program does create two Logger objects? Why is that even bad, let alone worth creating trouble for yourself in order to prevent? One of the first things I find myself doing in any sophisticated logging system is creating a PrefixLogger, which implements the Logger interface but prints a specified string at the start of all messages to show some context. Singleton is incompatible with this kind of dynamic flexibility.
The second thing, then, is to ask whether users are going to want to have a single FileNameLoader, and call LoadFileNames on it several times, with one logger the first time and another logger the second time.
If so, then you definitely want a Logger parameter to the function call, because an accessor to change the current Logger is (a) not a great API, and (b) impossible with a reference member anyway: you'd have to change to a pointer. You could perhaps make the logger parameter a pointer with a default value of 0, though, with 0 meaning "use the member variable". That would allow uses where the users initial setup code knows and cares about logging, but then that code hands the FileNameLoader object off to some other code that will call LoadFileNames, but doesn't know or care about logging.
If not, then the Logger dependency is an invariant for each instance of the class, and using a member variable is fine. I always worry slightly about reference member variables, but for reasons unrelated to this choice.
[Edit regarding the Builder: I think you can pretty much search and replace in my answer and it still holds. The crucial difference is whether "the Builder used by this FileNameLoader object" is invariant for a given object, or whether "the Builder used in the call" is something that callers to LoadFileNames need to configure on a per-call basis.
I might be slightly less adamant that Builder should not be a Singleton. Slightly. Might.]
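A minimal sketch of the optional per-call logger described above, assuming a simple Logger interface (the names here are illustrative, not from the question):

#include <iostream>
#include <string>

// Minimal Logger interface, assumed for illustration.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& msg) = 0;
};

class ConsoleLogger : public Logger {
public:
    void log(const std::string& msg) override { std::cout << msg << "\n"; }
};

class PluginLoader { /* ... */ };

class FileNameLoader {
public:
    // The invariant logger is injected once, at construction.
    explicit FileNameLoader(Logger& logger) : m_logger(&logger) {}

    // Callers that care can override the logger per call; passing nullptr
    // (the default) means "use the member".
    bool LoadFileNames(PluginLoader&, Logger* logger = nullptr) {
        Logger& log = logger ? *logger : *m_logger;
        log.log("loading file names");
        // ... actual loading work ...
        return true;
    }

private:
    Logger* m_logger;  // pointer rather than reference, so it could be re-seated
};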
In general I think fewer arguments make for a better function. Typically, the more arguments a function has, the more "common" the function tends to become, and this, in turn, can lead to large, complicated functions that try to do everything.
Under the assumption that the Logger interface is for tracing, in this case I doubt the user of the FileNameLoader class really wants to be concerned with providing the particular logging instance that should be used.
You can also probably apply the Law of Demeter as an argument against providing the logging instance on a function call.
Of course there will be specific times where this isn't appropriate. General examples might be:
For performance (should only be done after identification of specific performance issues).
To aid testing through mock objects (in this case I think the constructor is a more appropriate injection point; for logging, remaining a singleton is probably the better option...)
I would stick with the first method and use the Logger as a singleton. Different sinks and identifying where data was logged from is a different problem altogether. Identifying the sink can be as simple or as complex as you want. For example (assuming Singleton<> is a base-class for singletons in your code):
class Logger : public Singleton<Logger>
{
public:
    void Log(const std::string& _sink, const std::string& _data);
};
Your class:
class FileNameLoader
{
public:
    virtual bool LoadFileNames(PluginLoader& _pluginLoader)
    {
        Logger::getSingleton().Log("FileNameLoader", "loading xyz");
        return true;
    }
    virtual ~FileNameLoader() {}
};
You can have an inherently complex Log Manager with different sinks, different log-levels different outputs. Your Log() method on the log manager should support simple logging as described above, and then you can allow for more complex examples. For debugging purposes, for example, you could define different outputs for different sinks as well as having a combined log.
The approach to logging that I like best is to have a member of type Logger in my class (not a reference or pointer, but an actual object).
Depending on the logging infrastructure, that makes it possible to decide, on a per-class basis, where the output should go or which prefix to use.
This has the advantage over your second approach that you can't (accidentally) create a situation where members of the same class can not be easily identified as such in the logfiles.

C++ program design question

In a couple of recent projects I took part in, I was almost addicted to the following coding pattern (I'm not sure if there is a proper name for it, but anyway...):
Let's say some object is in some determined state and we want to change this state from outside. These changes could mean any behaviour and could invoke any algorithms, but the fact is that they focus on changing the state (member state, data state, etc.) of some object.
Let's call one discrete way of changing such an object a Mutator. Mutators are applied once (generally), and they have some internal method like apply(Target& target, ...) which immediately changes the state of the object (in fact, they're a sort of function object).
They can also easily be assembled into chains and applied one by one (Mutator m1, m2, ...); they can also derive from some basic BasicMutator with a virtual void apply(...) method.
I have introduced classes called InnerMutator and ExplicitMutator, which differ in terms of access: the first can also change the internal state of the object and must be declared a friend (friend InnerMutator::access;).
In those projects my logic turned to work the following way:
Prepare available mutators, choose which to apply
Create and set the object to some determined state
foreach (mutator) mutator.apply(object);
Now the question.
This scheme turns out to work well and (to me) seems like an example of a non-standard but useful design pattern.
What makes me feel uncomfortable is the InnerMutator stuff. I don't think declaring a mutator a friend of every object whose state could be changed is a good idea, and I want to find an appropriate alternative.
Could this situation be solved in terms of Mutators, or could you advise an alternative pattern with the same result?
Thanks.
I hope this isn't taken offensively, but I can't help but think this design is an overly fancy solution for a rather simple problem.
Why do you need to aggregate mutators, for instance? One can merely write:
template <class T>
void mutate(T& t)
{
    t.mutate1(...);
    t.mutate2(...);
    t.mutate3(...);
}
There's your aggregate mutator, only it's a simple function template relying on static polymorphism rather than a combined list of mutators making one super-mutator.
I once did something that might have been rather similar. I made function objects which could be combined by operator && into super function objects that applied both operands (in the order from left to right) so that one could write code like:
foreach(v.begin(), v.end(), attack && burn && drain);
It was really neat and seemed really clever to me at the time, and it was pretty efficient because it generated new function objects on the fly (no runtime mechanisms involved). But when my co-workers saw it, they thought I was insane. In retrospect, I was trying to cram everything into functional-style programming, which has its uses but probably shouldn't be over-relied on in C++. It would have been easy enough to write:
BOOST_FOREACH(Something& something, v)
{
    attack(something);
    burn(something);
    drain(something);
}
Granted, it is more lines of code, but it has the benefit of being centralized and easy to debug, and it doesn't introduce alien concepts to other people working with my code. I've found it much more beneficial to focus on tight logic rather than trying to forcefully reduce lines of code.
I recommend you think deeply about this and really consider whether it's worth continuing with this mutator-based design, no matter how clever and tight it is. If other people can't quickly understand it, then unless you have the authority of the Boost authors, it's going to be hard to convince people to like it.
As for InnerMutator, I think you should avoid it like the plague. Having external mutator classes which can modify a class's internals as directly as they can here defeats a lot of the purpose of having internals at all.
Would it not be better to simply make those Mutators methods of the classes they're changing? If there's common functionality you want several classes to have then why not make them inherit from the same base class? Doing this will deal with all the inner mutator discomfort I would imagine.
If you want to queue up changes you could store them as sets of function calls to the methods within the classes.
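A small sketch of that queuing idea, assuming a hypothetical Target class with ordinary member functions:

#include <functional>
#include <vector>

struct Target {
    int state = 0;
    void grow(int by) { state += by; }
    void reset()      { state = 0; }
};

int main() {
    Target t;

    // Queue of pending changes: each entry is a bound call to a member
    // function, applied later in order. No friend declarations needed;
    // the callables go through the class's public interface.
    std::vector<std::function<void(Target&)>> pending;
    pending.push_back([](Target& x) { x.grow(5); });
    pending.push_back([](Target& x) { x.reset(); });
    pending.push_back([](Target& x) { x.grow(2); });

    for (auto& change : pending) change(t);  // t.state == 2 afterwards
}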

Do polymorphism or conditionals promote better design?

I recently stumbled across this entry in the Google Testing Blog about guidelines for writing more testable code. I was in agreement with the author until this point:
Favor polymorphism over conditionals: If you see a switch statement you should think polymorphisms. If you see the same if condition repeated in many places in your class you should again think polymorphism. Polymorphism will break your complex class into several smaller simpler classes which clearly define which pieces of the code are related and execute together. This helps testing since simpler/smaller class is easier to test.
I simply cannot wrap my head around that. I can understand using polymorphism instead of RTTI (or DIY-RTTI, as the case may be), but that seems like such a broad statement that I can't imagine it actually being used effectively in production code. It seems to me, rather, that it would be easier to add additional test cases for methods which have switch statements, rather than breaking down the code into dozens of separate classes.
Also, I was under the impression that polymorphism can lead to all sorts of other subtle bugs and design issues, so I'm curious to know if the tradeoff here would be worth it. Can someone explain to me exactly what is meant by this testing guideline?
Actually, this makes both testing and code easier to write.
If you have one switch statement based on an internal field you probably have the same switch in multiple places doing slightly different things. This causes problems when you add a new case as you have to update all the switch statements (if you can find them).
By using polymorphism you can use virtual functions to get the same functionality, and because a new case is a new class, you don't have to search your code for things that need to be checked: it is all isolated within each class.
class Animal
{
public:
    Noise warningNoise();
    Noise pleasureNoise();
private:
    AnimalType type;
};

Noise Animal::warningNoise()
{
    switch (type)
    {
        case Cat: return Hiss;
        case Dog: return Bark;
    }
}

Noise Animal::pleasureNoise()
{
    switch (type)
    {
        case Cat: return Purr;
        case Dog: return Bark;
    }
}
In this simple case, every new animal requires both switch statements to be updated.
You forget one? What is the default? BANG!!
Using polymorphism
class Animal
{
public:
    virtual Noise warningNoise() = 0;
    virtual Noise pleasureNoise() = 0;
};

class Cat : public Animal
{
    // The compiler forces you to define both methods;
    // otherwise you can't have a Cat object.
    // All code local to the cat belongs to the cat.
};
By using polymorphism you can test the Animal class.
Then test each of the derived classes separately.
Also, this allows you to ship the Animal class (closed for alteration) as part of your binary library. But people can still add new animals (open for extension) by deriving new classes from the Animal header. If all this functionality had been captured inside the Animal class, then all animals would need to be defined before shipping (closed/closed).
Do not fear...
I guess your problem lies with familiarity, not technology. Familiarize yourself with C++ OOP.
C++ is an OOP language
Among its multiple paradigms, it has OOP features and is more than able to stand comparison with most pure OO languages.
Don't let the "C part inside C++" make you believe C++ can't deal with other paradigms. C++ can handle a lot of programming paradigms quite gracefully, and among them, OOP is the most mature C++ paradigm after the procedural one (i.e. the aforementioned "C part").
Polymorphism is Ok for production
There is no "subtle bugs" or "not suitable for production code" thing. There are developers who remain set in their ways, and developers who'll learn how to use tools and use the best tools for each task.
switch and polymorphism are [almost] similar...
... But polymorphism removes most of the errors.
The difference is that you must handle switches manually, whereas polymorphism is more natural once you get used to inheritance and method overriding.
With switches, you have to compare a type variable against different types and handle the differences. With polymorphism, the variable itself knows how to behave; you only have to organize the variables in logical ways and override the right methods.
In the end, if you forget to handle a case in a switch, the compiler won't tell you, whereas you will be told if you derive from a class without overriding its pure virtual methods. Thus most switch errors are avoided.
All in all, the two features are about making choices, but polymorphism enables you to make more complex and, at the same time, more natural and thus easier choices.
Avoid using RTTI to find an object's type
RTTI is an interesting concept, and can be useful. But most of the time (i.e. 95% of the time), method overriding and inheritance will be more than enough, and most of your code should not even know the exact type of the object handled, but trust it to do the right thing.
If you use RTTI as a glorified switch, you're missing the point.
(Disclaimer: I am a great fan of the RTTI concept and of dynamic_casts. But one must use the right tool for the task at hand, and most of the time RTTI is used as a glorified switch, which is wrong)
Compare dynamic vs. static polymorphism
If your code does not know the exact type of an object at compile time, then use dynamic polymorphism (i.e. classic inheritance, virtual methods overriding, etc.)
If your code knows the type at compile time, then perhaps you could use static polymorphism, i.e. the CRTP pattern http://en.wikipedia.org/wiki/Curiously_Recurring_Template_Pattern
The CRTP will enable you to have code that smells like dynamic polymorphism, but whose every method call will be resolved statically, which is ideal for some very critical code.
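For illustration, a minimal CRTP sketch (Shape and Square are hypothetical names, not from the original code):

#include <iostream>

// CRTP: the base knows its derived type at compile time, so area()
// dispatches without any virtual call or vtable.
template <class Derived>
class Shape {
public:
    double area() const {
        return static_cast<const Derived*>(this)->areaImpl();
    }
};

class Square : public Shape<Square> {
public:
    explicit Square(double s) : side_(s) {}
    double areaImpl() const { return side_ * side_; }
private:
    double side_;
};

// Generic code, resolved statically per concrete type.
template <class T>
void printArea(const Shape<T>& s) { std::cout << s.area() << "\n"; }

int main() {
    printArea(Square(3.0));  // prints 9; every call resolved at compile time
}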
Production code example
Code similar to this one (from memory) is used in production.
The easier solution revolved around the procedure called by the message loop (a WinProc in Win32, though I wrote a simpler version for simplicity's sake). To summarize, it was something like:
void MyProcedure(int p_iCommand, void * p_vParam)
{
    // Each case has a lot of code, with both similarities and
    // differences, and of course casting p_vParam into something,
    // hoping no one made a mistake by associating the wrong command
    // with the wrong data type in p_vParam.
    switch (p_iCommand)
    {
        case COMMAND_AAA: { /* A LOT OF CODE (see above) */ } break;
        case COMMAND_BBB: { /* A LOT OF CODE (see above) */ } break;
        // etc.
        case COMMAND_XXX: { /* A LOT OF CODE (see above) */ } break;
        case COMMAND_ZZZ: { /* A LOT OF CODE (see above) */ } break;
        default: { /* call default procedure */ } break;
    }
}
Each added command added a case.
The problem is that some commands were similar and shared part of their implementation, so mixing the cases was a risk for evolution.
I resolved the problem by using the Command pattern, that is, creating a base Command object, with one process() method.
So I rewrote the message procedure, reducing the dangerous code (i.e. playing with void*, etc.) to a minimum, and wrote it so I would never need to touch it again:
void MyProcedure(int p_iCommand, void * p_vParam)
{
    switch (p_iCommand)
    {
        // Only one case. Isn't it cool?
        case COMMAND:
        {
            Command * c = static_cast<Command *>(p_vParam);
            c->process();
        }
        break;
        default: { /* call default procedure */ } break;
    }
}
And then, for each possible command, instead of adding code in the procedure, and mixing (or worse, copy/pasting) the code from similar commands, I created a new command, and derived it either from the Command object, or one of its derived objects:
This led to the hierarchy (represented as a tree):
[+] Command
|
+--[+] CommandServer
| |
| +--[+] CommandServerInitialize
| |
| +--[+] CommandServerInsert
| |
| +--[+] CommandServerUpdate
| |
| +--[+] CommandServerDelete
|
+--[+] CommandAction
| |
| +--[+] CommandActionStart
| |
| +--[+] CommandActionPause
| |
| +--[+] CommandActionEnd
|
+--[+] CommandMessage
Now, all I needed to do was to override process for each object.
Simple, and easy to extend.
For example, say the CommandAction was supposed to do its process in three phases: "before", "while" and "after". Its code would be something like:
class CommandAction : public Command
{
    // etc.
    virtual void process() // overrides Command::process, a pure virtual method
    {
        this->processBefore();
        this->processWhile();
        this->processAfter();
    }
    virtual void processBefore() = 0; // to be overridden
    virtual void processWhile()
    {
        // Do something common to all CommandAction objects
    }
    virtual void processAfter() = 0; // to be overridden
};
And, for example, CommandActionStart could be coded as:
class CommandActionStart : public CommandAction
{
    // etc.
    virtual void processBefore()
    {
        // Do something common to all CommandActionStart objects
    }
    virtual void processAfter()
    {
        // Do something common to all CommandActionStart objects
    }
};
As I said: Easy to understand (if commented properly), and very easy to extend.
The switch is reduced to its bare minimum (i.e. if-like, because we still needed to delegate Windows commands to the Windows default procedure), and there is no need for RTTI (or worse, in-house RTTI).
The same code inside a switch would be quite amusing, I guess (if only judging by the amount of "historical" code I saw in our app at work).
Unit testing an OO program means testing each class as a unit. A principle that you want to learn is "Open to extension, closed to modification". I got that from Head First Design Patterns. But it basically says that you want to have the ability to easily extend your code without modifying existing tested code.
Polymorphism makes this possible by eliminating those conditional statements. Consider this example:
Suppose you have a Character object that carries a Weapon. You can write an attack method like this:
if (weapon is a rifle) then
    // code to attack with rifle
else if (weapon is a plasma gun) then
    // code to attack with plasma gun
etc.
With polymorphism, the Character does not have to "know" the type of weapon; simply calling
weapon.attack()
would work. What happens if a new weapon is invented? Without polymorphism, you have to modify your conditional statement. With polymorphism, you add a new class and leave the tested Character class alone.
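A minimal sketch of that Weapon hierarchy (class names and behaviour are illustrative):

#include <iostream>
#include <memory>
#include <utility>

// The Character only knows the Weapon interface; adding a new weapon
// means adding a class, not editing Character.
class Weapon {
public:
    virtual ~Weapon() = default;
    virtual void attack() = 0;
};

class Rifle : public Weapon {
public:
    void attack() override { std::cout << "bang\n"; }
};

class PlasmaGun : public Weapon {
public:
    void attack() override { std::cout << "zap\n"; }
};

class Character {
public:
    explicit Character(std::unique_ptr<Weapon> w) : weapon_(std::move(w)) {}
    void attack() { weapon_->attack(); }  // no type checks needed
private:
    std::unique_ptr<Weapon> weapon_;
};

int main() {
    Character c(std::make_unique<Rifle>());
    c.attack();  // "bang"; swapping in PlasmaGun requires no Character change
}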
I'm a bit of a skeptic: I believe inheritance often adds more complexity than it removes.
I think you are asking a good question, though, and one thing I consider is this:
Are you splitting into multiple classes because you are dealing with different things? Or is it really the same thing, acting in a different way?
If it's really a new type, then go ahead and create a new class. But if it's just an option, I generally keep it in the same class.
I believe the default solution is the single-class one, and the onus is on the programmer proposing inheritance to prove their case.
Not an expert in the implications for test cases, but from a software development perspective:
Open-closed principle -- Classes should be closed to alteration, but open to extension. If you manage conditional operations via a conditional construct, then if a new condition is added, your class needs to change. If you use polymorphism, the base class need not change.
Don't repeat yourself -- An important part of the guideline is the "same if condition." That indicates that your class has some distinct modes of operation that can be factored into a class. Then, that condition appears in one place in your code -- when you instantiate the object for that mode. And again, if a new one comes along, you only need to change one piece of code.
Polymorphism is one of the corner stones of OO and certainly is very useful.
By dividing concerns over multiple classes you create isolated and testable units.
So instead of doing a switch...case where you call methods on several different types or implementations, you create a unified interface with multiple implementations.
When you need to add an implementation, you do not need to modify the clients, as is the case with switch...case. Very important as this helps to avoid regression.
You can also simplify your client algorithm by dealing with just one type : the interface.
Very important to me is that polymorphism is best used with a pure interface/implementation pattern (like the venerable Shape <- Circle, etc.).
You can also have polymorphism in concrete classes with template methods (a.k.a. hooks), but its effectiveness decreases as complexity increases.
Polymorphism is the foundation on which our company's codebase is built, so I consider it very practical.
Switches and polymorphism do the same thing.
In polymorphism (and in class-based programming in general) you group the functions by their type. When using switches you group the types by function. Decide which view is good for you.
So if your interface is fixed and you only add new types, polymorphism is your friend.
But if you add new functions to your interface you will need to update all implementations.
In certain cases you may have a fixed number of types while new functions keep coming; then switches are better. But adding new types makes you update every switch.
With switches you are duplicating sub-type lists; with polymorphism you are duplicating operation lists. You traded one problem for a different one. This is the so-called expression problem, which is not solved by any programming paradigm I know. The root of the problem is the one-dimensional nature of the text used to represent code.
Since pro-polymorphism points are well discussed here, let me provide a pro-switch point.
OOP has design patterns to avoid common pitfalls. Procedural programming has design patterns too (but no one has written them down yet, AFAIK; we need another Gang of N to make a bestseller book of them...). One such pattern could be: always include a default case.
Switches can be done right:
switch (type)
{
    case T_FOO: doFoo(); break;
    case T_BAR: doBar(); break;
    default:
        fprintf(stderr, "You, who are reading this, add a new case for %d to the FooBar function ASAP!\n", type);
        assert(0);
}
This code will point your favorite debugger to the location where you forgot to handle a case. A compiler can force you to implement your interface, but this forces you to test your code thoroughly (at least to see that the new case is noticed).
Of course, if a particular switch is used in more than one place, it gets factored out into a function (don't repeat yourself).
If you want to extend these switches, just run grep -rn 'case[ ]*T_BAR' . (on Linux) and it will spit out the locations worth looking at. Since you need to look at the code, you will see some context, which helps you add the new case correctly. When you use polymorphism the call sites are hidden inside the system, and you depend on the correctness of the documentation, if it exists at all.
Extending switches does not break the OCP either, since you do not alter the existing cases, just add a new one.
Switches also help the next guy trying to get accustomed to and understand the code:
The possible cases are before your eyes. That's a good thing when reading code (less jumping around).
But virtual method calls are just like normal method calls. One can never know if a call is virtual or normal (without looking up the class). That's bad.
But if the call is virtual, possible cases are not obvious (without finding all derived classes). That's also bad.
When you provide an interface to a third party so they can add behavior and user data to a system, that's a different matter. (They can set callbacks and pointers to user data, and you give them handles.)
Further debate can be found here: http://c2.com/cgi/wiki?SwitchStatementsSmell
I'm afraid my "C-hacker's syndrome" and anti-OOPism will eventually burn all my reputation here. But whenever I needed to hack or bolt something onto a procedural C system, I found it quite easy: the lack of constraints, forced encapsulation, and extra abstraction layers lets me "just do it". In a C++/C#/Java system, where tens of abstraction layers have been stacked on top of each other over the software's lifetime, I sometimes need to spend hours, even days, to find out how to correctly work around all the constraints and limitations that other programmers built into their system to keep others from "messing with their class".
This is mainly to do with encapsulation of knowledge. Let's start with a really obvious example - toString(). This is Java, but easily transfers to C++. Suppose you want to print a human friendly version of an object for debugging purposes. You could do:
switch (obj.type) {
    case 1: cout << "Type 1" << obj.foo << ...; break;
    case 2: cout << "Type 2" << ...; break;
}
This would, however, clearly be silly. Why should one method somewhere know how to print everything? It will often be better for the object itself to know how to print itself, e.g.:
cout << object.toString();
That way the toString() can access member fields without needing casts. They can be tested independently. They can be changed easily.
You could argue, however, that how an object prints shouldn't be associated with the object; it should be associated with the print method. In this case, another design pattern comes in helpful: the Visitor pattern, used to fake double dispatch. Describing it fully is too long for this answer, but you can read a good description here.
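For a flavor of it, here is a minimal Visitor sketch in C++ (Shape, Circle, Square, and Printer are illustrative names):

#include <iostream>

class Circle;
class Square;

// The visitor declares one overload per concrete type.
class ShapeVisitor {
public:
    virtual ~ShapeVisitor() = default;
    virtual void visit(const Circle&) = 0;
    virtual void visit(const Square&) = 0;
};

class Shape {
public:
    virtual ~Shape() = default;
    virtual void accept(ShapeVisitor& v) const = 0;
};

class Circle : public Shape {
public:
    void accept(ShapeVisitor& v) const override { v.visit(*this); }  // 2nd dispatch
};

class Square : public Shape {
public:
    void accept(ShapeVisitor& v) const override { v.visit(*this); }
};

// Printing logic lives with the printer, not inside the shapes.
class Printer : public ShapeVisitor {
public:
    void visit(const Circle&) override { std::cout << "a circle\n"; }
    void visit(const Square&) override { std::cout << "a square\n"; }
};

int main() {
    Printer p;
    Circle c;
    const Shape& s = c;
    s.accept(p);  // first dispatch on Shape, second on the visitor overload
}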
If you are using switch statements everywhere, you run into the possibility that when upgrading you miss one place that needs an update.
It works very well if you understand it.
There are also two flavors of polymorphism. The first is very easy to understand in Java-esque pseudo-code:
interface A {
    int foo();
}

final class B implements A {
    int foo() { print("B"); }
}

final class C implements A {
    int foo() { print("C"); }
}
B and C share a common interface. B and C in this case can't be extended, so you're always sure which foo() you're calling. Same goes for C++, just make A::foo pure virtual.
The second, and trickier, flavor is run-time polymorphism. It doesn't look too bad in pseudo-code:
class A {
    int foo() { print("A"); }
}

class B extends A {
    int foo() { print("B"); }
}

class C extends B {
    int foo() { print("C"); }
}

...

class Z extends Y {
    int foo() { print("Z"); }
}

main() {
    F* f = new Z();
    A* a = f;
    a->foo();
    f->foo();
}
But it is a lot trickier. Especially if you're working in C++ where some of the foo declarations may be virtual, and some of the inheritance might be virtual. Also the answer to this:
A* a = new Z;
A a2 = *a;
a->foo();
a2.foo();
might not be what you expect: assuming foo is virtual, a->foo() prints "Z", but A a2 = *a; slices the object down to a plain A, so a2.foo() prints "A".
Just keep keenly aware of what you do and don't know if you're using run-time polymorphism. Don't get overconfident, and if you're not sure what something is going to do at run-time, then test it.
I must reiterate that finding all switch statements can be a non-trivial process in a mature code base. If you miss any, the application is likely to crash because of an unmatched case statement, unless you have a default set.
Also check out "Martin Fowlers" book on "Refactoring"
Using a switch instead of polymorphism is a code smell.
It really depends on your style of programming. While this may be correct in Java or C#, I don't agree that automatically reaching for polymorphism is correct. You can split your code into lots of little functions and perform an array lookup with function pointers (initialized at compile time), for instance. In C++, polymorphism and classes are often overused; probably the biggest design mistake made by people coming from strong OOP languages into C++ is believing that everything should go into a class. A class should only contain the minimal set of things that make it work as a whole. If a subclass or friend is necessary, so be it, but they shouldn't be the norm. Any other operations on the class should be free functions in the same namespace; ADL will allow these functions to be used without explicit qualification.
C++ is not an OOP language, don't make it one. It's as bad as programming C in C++.
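As a sketch of the function-pointer-table alternative mentioned above (all names hypothetical):

#include <cstdio>

// Handlers as free functions...
void doFoo() { std::puts("foo"); }
void doBar() { std::puts("bar"); }

// ...dispatched through a table initialized at compile time, as an
// alternative to both switch statements and virtual functions.
using Handler = void (*)();
constexpr Handler handlers[] = { doFoo, doBar };

enum Op { OP_FOO = 0, OP_BAR = 1 };

int main() {
    Op op = OP_BAR;
    handlers[op]();  // array lookup replaces the switch
}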