My game logic model consists of multiple connected classes: Board, Cell, Character, etc. A Character can be placed in (and moved between) Cells (a one-to-one relationship).
There are two approaches:
Make each model class implement an interface, so that the classes can be mocked and each one can be tested independently. This forces each class's implementation not to rely on the others. But in practice it's hard to keep the Board from knowing too much about Cells, or the Characters from knowing how Cells store them. I have Character.Cell and Cell.CurrentCharacter properties. For the setters to work correctly (without recursing endlessly into each other) they have to rely on each other's implementation (see the sketch after these two options). It feels like the model logic should be treated as a single unit.
Make all public members return interfaces, but use the concrete classes internally (which can involve some downcasting). The cons here are that I have to test the whole model as a single unit and can't use mocking to test its parts independently. Also, there is no point in using dependency injection inside the model; it is only useful for getting a different complete model implementation from the controller.
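For illustration, here is a minimal C++ sketch of the mutual dependency I mean (the names mirror my properties; the guard checks are what stop the recursion):

class Cell;

class Character {
    Cell* cell = nullptr;
public:
    Cell* getCell() const { return cell; }
    void setCell(Cell* c);   // defined after Cell is complete
};

class Cell {
    Character* current = nullptr;
public:
    Character* getCurrentCharacter() const { return current; }
    void setCurrentCharacter(Character* ch);
};

// The two setters must see each other's implementation - exactly the
// coupling described above.
void Character::setCell(Cell* c) {
    if (cell == c) return;                          // already consistent
    Cell* old = cell;
    cell = c;                                       // update this side first
    if (old && old->getCurrentCharacter() == this)
        old->setCurrentCharacter(nullptr);          // detach the old cell
    if (c && c->getCurrentCharacter() != this)
        c->setCurrentCharacter(this);               // sync the new cell
}

void Cell::setCurrentCharacter(Character* ch) {
    if (current == ch) return;                      // already consistent
    Character* old = current;
    current = ch;
    if (old && old->getCell() == this)
        old->setCell(nullptr);
    if (ch && ch->getCell() != this)
        ch->setCell(this);
}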
So what to do?
UPDATE
You can propose other options.
Why are these the only 2 options?
If you intend to have different versions/types of the classes then interfaces/abstract base classes are a good option to enforce shared behaviour and generalize many operations. However the idea of building the classes independently without knowledge of each other is ridiculous.
It is always a good idea to keep storage/behaviour in the class/layer it belongs to, e.g. no business logic code in the data layer. But the classes need to know about each other in order to function properly. If you make everything independent and interface-based, you run the risk of over-generalizing the application and reducing your efficiency.
Basically if you think you would need to ever downcast the incoming objects to more than one type it's a good idea to look at the design and see if you are gaining anything for the performance loss and nasty casting code you are about to write. If you will be required to handle every type of downcast object you have not gained anything and using polymorphism and a base class is a much better way to go.
Using interfaces does not eliminate your trouble in testing. You will still have to instantiate some version of the objects to test most of the functions on the cell/board anyway, and full regression testing will require you to test each character's interaction with both.
Don't get me wrong, your character class should most likely have a base class or have an interface. All characters will (I'm sure) share many actions and can benefit from this design. E.g. Moving a character on the board is a fairly generic operation and can be made independent of the character except for a few pieces of information (such as how the character moves, if they are allowed to move, etc.) which should be part of said base class/interface.
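For instance, a hypothetical sketch of such an interface (the names are made up, not a prescription):

// Only the data the generic movement logic needs.
class ICharacter {
public:
    virtual ~ICharacter() = default;
    virtual int  movementRange() const = 0;  // how many cells it may cross
    virtual bool canMove() const = 0;        // e.g. false while stunned
};

// The move routine itself stays generic; only the queries are per-character.
bool tryMove(ICharacter& ch, int distance) {
    if (!ch.canMove()) return false;
    if (distance > ch.movementRange()) return false;
    return true;  // ...perform the actual cell bookkeeping here
}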
When it is reasonable, design classes independently so that they can be tested on their own, but do not use testing as a reason to write bad code. Simple stubs or basic testing instances can be created to help with component testing, and they take far less time and effort than fixing unnecessarily complex code.
Interfaces have a purpose, but if you will not be treating 2 classes the same... that is not it.
Using MVC gives you a leg up on testing as well. If done correctly you should be able to swap out any of the layers to ease your testing of a single layer.
I am reading the book "Exceptional C++" by Herb Sutter, and in that book I have learned about the PIMPL idiom. Basically, the idea is to create a structure for the private objects of a class and dynamically allocate them to decrease the compilation time (and also hide the private implementations in a better manner).
For example:
class X
{
private:
    C c;
    D d;
};
could be changed to:
class X
{
private:
    struct XImpl;
    XImpl* pImpl;
};
and, in the .cpp file, the definition:
struct X::XImpl
{
    C c;
    D d;
};
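For this to work, X also declares a constructor and destructor in the header and defines them in the .cpp, where XImpl is complete (a minimal sketch of the glue, assuming plain raw-pointer management):

// in X.h, inside class X:
//     X();
//     ~X();

// in the .cpp file:
X::X() : pImpl(new XImpl) {}
X::~X() { delete pImpl; }  // XImpl is a complete type here, so delete is safe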
This seems pretty interesting, but I have never seen this kind of approach before, neither in the companies I have worked for, nor in the open source projects whose source code I've seen. So, I am wondering whether this technique is really used in practice.
Should I use it everywhere, or with caution? And is this technique recommended to be used in embedded systems (where the performance is very important)?
So, I am wondering if this technique is really used in practice? Should I use it everywhere, or with caution?
Of course it is used. I use it in my project, in almost every class.
Reasons for using the PIMPL idiom:
Binary compatibility
When you're developing a library, you can add/modify fields in XImpl without breaking binary compatibility with your clients (which would otherwise mean crashes!). Since the binary layout of the X class doesn't change when you add new fields to the XImpl class, it is safe to add new functionality to the library in minor version updates.
Of course, you can also add new public/private non-virtual methods to X/XImpl without breaking the binary compatibility, but that's on par with the standard header/implementation technique.
Data hiding
If you're developing a library, especially a proprietary one, it might be desirable not to disclose what other libraries / implementation techniques were used to implement the public interface of your library. Either because of intellectual property issues, or because you believe that users might be tempted to make dangerous assumptions about the implementation, or just break the encapsulation by using terrible casting tricks. PIMPL solves/mitigates that.
Compilation time
Compilation time is decreased, since only the source (implementation) file of X needs to be rebuilt when you add/remove fields and/or methods in the XImpl class (which maps to adding private fields/methods in the standard technique). In practice, this is a common operation.
With the standard header/implementation technique (without PIMPL), when you add a new field to X, every client that ever allocates X (either on the stack or on the heap) needs to be recompiled, because it must adjust the size of the allocation. Well, every client that doesn't ever allocate X also needs to be recompiled, but that's just overhead (the resulting code on the client side will be the same).
What is more, with the standard header/implementation separation, XClient1.cpp needs to be recompiled even when a private method X::foo() was added to X and X.h changed, even though XClient1.cpp can't possibly call this method for encapsulation reasons! Like above, it's pure overhead and is related to how real-life C++ build systems work.
Of course, recompilation is not needed when you just modify the implementation of the methods (because you don't touch the header), but that's on par with the standard header/implementation technique.
Is this technique recommended to be used in embedded systems (where the performance is very important)?
That depends on how powerful your target is. However the only answer to this question is: measure and evaluate what you gain and lose. Also, take into consideration that if you're not publishing a library meant to be used in embedded systems by your clients, only the compilation time advantage applies!
It seems that a lot of libraries out there use it to stay stable in their API, at least for some versions.
But as for all things, you should never use anything everywhere without caution. Always think before using it. Evaluate what advantages it gives you, and if they are worth the price you pay.
The advantages it may give you are:
helps in keeping binary compatibility of shared libraries
hiding certain internal details
decreasing recompilation cycles
Those may or may not be real advantages to you. For me, I don't care about a few minutes of recompilation time. End users usually don't either, as they compile the project only once, from scratch.
Possible disadvantages are (also here, depending on the implementation and whether they are real disadvantages for you):
increase in memory usage due to more allocations than with the naïve variant
increased maintenance effort (you have to write at least the forwarding functions)
performance loss (the compiler may not be able to inline things as it can with a naïve implementation of your class)
So carefully give everything a value, and evaluate it for yourself. For me, it almost always turns out that using the PIMPL idiom is not worth the effort. There is only one case where I personally use it (or at least something similar):
My C++ wrapper for the Linux stat call. Here the struct from the C header may be different, depending on what #defines are set. And since my wrapper header can't control all of them, I only #include <sys/stat.h> in my .cxx file and avoid these problems.
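A rough sketch of what I mean (simplified; names hypothetical, error handling omitted):

// FileStatus.h - no <sys/stat.h> here, so clients never see its #defines
#include <string>

class FileStatus {
public:
    explicit FileStatus(const std::string& path);
    ~FileStatus();
    long long sizeInBytes() const;
private:
    struct Impl;   // holds the real struct stat
    Impl* pImpl;
};

// FileStatus.cxx - the only file that includes the C header
#include <sys/stat.h>

struct FileStatus::Impl { struct stat st; };

FileStatus::FileStatus(const std::string& path) : pImpl(new Impl) {
    ::stat(path.c_str(), &pImpl->st);   // error handling omitted
}
FileStatus::~FileStatus() { delete pImpl; }
long long FileStatus::sizeInBytes() const { return pImpl->st.st_size; }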
I agree with all the others about the benefits, but let me point out one limitation: PIMPL doesn't work well with templates.
The reason is that template instantiation requires the full declaration to be available where the instantiation takes place. (And that's the main reason you don't see template methods defined in .cpp files.)
You can still refer to templatised subclasses, but since you have to include them all, every advantage of "implementation decoupling" at compile time (avoiding the inclusion of all platform-specific code everywhere, shortening compilation) is lost.
It is a good paradigm for classic OOP (inheritance based), but not for generic programming (specialization based).
Other people have already provided the technical up/downsides, but I think the following is worth noting:
First and foremost, don't be dogmatic. If PIMPL works for your situation, use it - don't use it just because "it's better OO since it really hides implementation", etc. Quoting the C++ FAQ:
encapsulation is for code, not people (source)
Just to give you an example of open source software where it is used and why: OpenThreads, the threading library used by the OpenSceneGraph. The main idea is to remove from the header (e.g., <Thread.h>) all platform-specific code, because internal state variables (e.g., thread handles) differ from platform to platform. This way one can compile code against your library without any knowledge of the other platforms' idiosyncrasies, because everything is hidden.
I would mainly consider PIMPL for classes exposed to be used as an API by other modules. This has many benefits: it means that recompilation of changes made in the PIMPL implementation does not affect the rest of the project. Also, for API classes it promotes binary compatibility (changes in a module's implementation do not affect clients of those modules; they don't have to be recompiled, since the new implementation has the same binary interface - the interface exposed by the PIMPL).
As for using PIMPL for every class, I would exercise caution, because all those benefits come at a cost: an extra level of indirection is required to access the implementation's methods.
I think this is one of the most fundamental tools for decoupling.
I was using PIMPL (and many other idioms from Exceptional C++) on an embedded project (a set-top box).
The particular purpose of this idiom in our project was to hide the types the XImpl class uses.
Specifically, we used it to hide details of the implementations for different hardware, where different headers would be pulled in. We had different implementations of the XImpl class for each platform, but the layout of class X stayed the same regardless of the platform.
I used to use this technique a lot in the past but then found myself moving away from it.
Of course it is a good idea to hide the implementation details away from the users of your class. However, you can also do that by having users of the class program against an abstract interface, with the implementation details in the concrete class.
The advantages of pImpl are:
Assuming there is just one implementation of the class, it is clearer than an abstract class / concrete implementation split.
If you have a suite of classes (a module) such that several classes access the same "impl" but users of the module will only use the "exposed" classes.
No v-table, if that is assumed to be a bad thing.
The disadvantages I found of pImpl (where an abstract interface works better):
Whilst you may have only one "production" implementation, by using an abstract interface you can also create a "mock" implementation that works in unit testing.
(The biggest issue.) Before the days of unique_ptr and move semantics, you had restricted choices for how to store the pImpl. A raw pointer, and your class was non-copyable. An old auto_ptr wouldn't work with a forward-declared class (not on all compilers, anyway). So people started using shared_ptr, which was nice in making your class copyable, but of course both copies shared the same underlying pImpl, which you might not expect (modify one and both are modified). So the solution was often to use a raw pointer for the inner one, make the class non-copyable, and return a shared_ptr to that instead. So two calls to new. (Actually three, given that the old shared_ptr gave you a second one.) See the sketch after this list.
Technically not really const-correct as the constness isn't propagated through to a member pointer.
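With C++11/14 that biggest issue has largely gone away; a minimal sketch of the unique_ptr variant (move-only by default, a single allocation):

// X.h
#include <memory>

class X {
public:
    X();
    ~X();                        // declared here...
    X(X&&) noexcept;
    X& operator=(X&&) noexcept;
private:
    struct Impl;
    std::unique_ptr<Impl> pImpl;
};

// X.cpp
struct X::Impl { /* private members */ };

X::X() : pImpl(std::make_unique<Impl>()) {}
X::~X() = default;               // ...but defined here, where Impl is complete
X::X(X&&) noexcept = default;
X& X::operator=(X&&) noexcept = default;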
In general, I have therefore moved away over the years from pImpl and towards abstract interfaces instead (with factory methods to create instances).
As many others said, the PIMPL idiom lets you achieve complete information hiding and compilation independence, unfortunately at the cost of a performance loss (an additional pointer indirection) and additional memory (the member pointer itself). The additional cost can be critical in embedded software development, in particular in scenarios where memory must be economized as much as possible.
Using C++ abstract classes as interfaces would lead to the same benefits at the same cost.
This actually exposes a big deficiency of C++: without resorting to C-like interfaces (global functions taking an opaque pointer as a parameter), it is not possible to have true information hiding and compilation independence without additional resource costs. This is mainly because the declaration of a class, which must be included by its users, exports not only the interface of the class (the public methods) that the users need, but also its internals (the private members), which the users do not need.
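For comparison, the C-like opaque-pointer interface mentioned above looks like this (a sketch; names hypothetical):

// widget.h - an opaque handle: nothing about the internals leaks out
struct Widget;   // never defined for clients
Widget* widget_create();
void    widget_render(Widget* w);
void    widget_destroy(Widget* w);

// widget.cpp - the only translation unit that knows the layout
struct Widget { int x, y; };
Widget* widget_create()           { return new Widget{}; }
void    widget_render(Widget* w)  { /* draw using w->x, w->y */ }
void    widget_destroy(Widget* w) { delete w; }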
Here is an actual scenario I encountered, where this idiom helped a great deal. I recently decided to support DirectX 11, as well as my existing DirectX 9 support, in a game engine.
The engine already wrapped most DX features, so none of the DX interfaces were used directly; they were just defined in the headers as private members. The engine uses DLL files as extensions, adding keyboard, mouse, joystick, and scripting support, as well as many other extensions. While most of those DLLs did not use DX directly, they required knowledge of, and linkage to, DX simply because they pulled in headers that exposed DX.
In adding DX 11, this complexity was about to increase dramatically, and quite unnecessarily. Moving the DX members into a PIMPL, defined only in the source file, eliminated this imposition.
On top of this reduction in library dependencies, my exposed interfaces became cleaner as I moved private member functions into the PIMPL, exposing only front-facing interfaces.
One benefit I can see is that it allows the programmer to implement certain operations in a fairly fast manner:
// Requires <utility> for std::swap; assumes X's copy constructor deep-copies pImpl.
X( X&& move_semantics_are_cool ) : pImpl(nullptr) {
    this->swap(move_semantics_are_cool);   // steal the guts of the expiring object
}

X& swap( X& rhs ) {
    std::swap( pImpl, rhs.pImpl );         // constant time: just exchange pointers
    return *this;
}

X& operator=( X&& move_semantics_are_cool ) {
    return this->swap(move_semantics_are_cool);
}

X& operator=( const X& rhs ) {
    X temporary_copy(rhs);                 // copy-and-swap
    return this->swap(temporary_copy);
}
PS: I hope I'm not misunderstanding move semantics.
It is used in practice in a lot of projects. Its usefulness depends heavily on the kind of project. One of the more prominent projects using it is Qt, where the basic idea is to hide implementation or platform-specific code from the user (the other developers using Qt).
This is a noble idea, but there is a real drawback to it: debugging.
As long as the code hidden in the private implementation is of premium quality this is all well, but if there are bugs in there, then the user/developer has a problem, because all they have is a dumb pointer to a hidden implementation, even if they have the implementation's source code.
So as in nearly all design decisions there are pros and cons.
I thought I would add an answer because although some authors hinted at this, I didn't think the point was made clear enough.
The primary purpose of PIMPL is to solve the N*M problem. This problem may have other names in other literature; a brief summary is this.
You have some kind of inheritance hierarchy that varies along two dimensions, where adding a new subclass along one dimension requires you to implement N (or M) new methods, one for each class along the other dimension.
This is only an approximate hand-wavey explanation, because I only recently became aware of this and so I am by my own admission not yet an expert on this.
Discussion of existing points made
However I came across this question, and similar questions a number of years ago, and I was confused by the typical answers which are given. (Presumably I first learned about PIMPL some years ago and found this question and others similar to it.)
Enables binary compatibility (when writing libraries)
Reduces compile time
Hides data
Taking into account the above "advantages", none of them are a particularly compelling reason to use PIMPL, in my opinion. Hence I have never used it, and my program designs suffered as a consequence because I discarded the utility of PIMPL and what it can really be used to accomplish.
Allow me to comment on each to explain:
1.
Binary compatibility is only of relevance when writing libraries. If you are compiling a final executable program, then it is of no relevance, unless you are using someone else's (binary-only) libraries. (In other words, you do not have the original source code.)
This means this advantage is of limited scope and utility. It is only of interest to people who write libraries which are shipped in proprietary form.
2.
I don't personally consider this to be of any relevance in the modern day, when it is rare to be working on projects where compile time is of critical importance. Maybe this matters to the developers of Google Chrome. The associated disadvantages, which probably increase development time significantly, more than offset this advantage. I might be wrong about this, but I find it unlikely, especially given the speed of modern compilers and computers.
3.
I don't immediately see the advantage that PIMPL brings here. The same result can be accomplished by shipping a header file and a binary object file. Without a concrete example in front of me it is difficult to see why PIMPL is relevant here. The relevant "thing" is shipping binary object files rather than original source code.
What PIMPL actually does:
You will have to forgive my slightly hand-wavey answer. While I am not a complete expert in this particular area of software design, I can at least tell you something about it. This information is mostly repeated from Design Patterns, whose authors call it the Bridge pattern, also known as Handle/Body.
In this book, the example of writing a Window manager is given. The key point here is that a window manager can implement different types of windows as well as different types of platform.
For example, one may have a
Window
Icon window
Fullscreen window with 3d acceleration
Some other fancy window
These are types of windows which can be rendered
as well as
Microsoft Windows implementation
OS X platform implementation
Linux X Window Manager
Linux Wayland
These are different types of rendering engines, with different OS calls and possibly fundamentally different functionality as well
The list above is analogous to the one given in another answer, where another user described writing software which should work with different kinds of hardware for something like a DVD player. (I forget exactly what the example was.)
I give slightly different examples here compared to what is written in the Design Patterns book.
The point being that there are two separate kinds of things which should each be implemented using an inheritance hierarchy; however, a single inheritance hierarchy does not suffice here. (This is the N*M problem: the complexity scales as the product of the sizes of the two bullet-point lists above, which is not feasible for a developer to implement.)
Hence, using PIMPL, one separates out the types of windows and provides a pointer to an instance of an implementation class.
So PIMPL:
Solves the N*M problem
Decouples two fundamentally different things which are being modelled using inheritance, such that there are 2 or more hierarchies rather than just one monolith
Permits runtime exchange of the exact implementation behaviour (by changing a pointer). This may be advantageous in some situations, whereas a single monolith enforces static (compile-time) behaviour selection rather than runtime behaviour selection
There may be other ways to implement this, for example with multiple inheritance, but this is usually a more complicated and difficult approach, at least in my experience.
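For concreteness, a compressed sketch of the shape of that solution (names invented, following the Design Patterns example):

#include <memory>

// Implementor hierarchy: one subclass per platform (M classes).
class WindowImpl {
public:
    virtual ~WindowImpl() = default;
    virtual void drawRect(int x, int y, int w, int h) = 0;
};
class X11WindowImpl : public WindowImpl {
    void drawRect(int, int, int, int) override { /* X11 calls */ }
};
class WaylandWindowImpl : public WindowImpl {
    void drawRect(int, int, int, int) override { /* Wayland calls */ }
};

// Abstraction hierarchy: one subclass per kind of window (N classes).
class Window {
public:
    explicit Window(std::unique_ptr<WindowImpl> impl) : impl(std::move(impl)) {}
    virtual ~Window() = default;
    virtual void draw() = 0;
protected:
    std::unique_ptr<WindowImpl> impl;  // N + M classes instead of N * M
};
class IconWindow : public Window {
public:
    using Window::Window;
    void draw() override { impl->drawRect(0, 0, 32, 32); }
};

// Usage: IconWindow w(std::make_unique<WaylandWindowImpl>());
// Swapping the impl pointer exchanges the behaviour at runtime.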
I am tasked to maintain and update a library which allows a computer to send commands to a hardware device and then receive its response. Currently the code is set up in such a way that every single possible command the device can receive is sent via its own function. Code repetition is everywhere; a DRY advocate's worst nightmare.
Obviously there is much opportunity for improvement. The problem is that each command has a different payload. Currently the data that is to be the payload is passed to each command function in the form of arguments. It's difficult to consolidate functionality without pushing the complexity up to the level that calls the library.
When a response is received from the device, its data is put into an object of a class solely responsible for holding that data; these classes do nothing else. There are hundreds of classes like this. These objects are then used by the app layer to access the returned data.
My objectives:
Thoroughly reduce code repetition
Maintain a similar level of complexity at the application layer
Make it easier to add new commands
My idea:
Have one function to send a command and one to receive (the receiving function is automatically called when a response from the device is detected). Have a struct holding all command/response data which will be passed to sending function and returned by receiving function. Since each command has a corresponding enum value, have a switch statement which sets up any command specific data for sending.
Is my idea the best way to do it? Is there a design pattern I could use here? I've looked and looked but nothing seems to fit my needs.
Thanks in advance! (Please let me know if clarification is necessary)
This reminds me of the REST vs. SOA debate, albeit on a smaller physical scale.
If I understand you correctly, right now you have calls like
device->DoThing();
device->DoOtherThing();
and then sometimes I get a callback like
callback->DoneThing(ThingResult&);
callback->DoneOtherThing(OtherThingResult&)
I suggest that the user is the key component here. Do the current library users like the interface at the level it is designed? Is the interface consistent, even if it is large?
You seem to want to propose
device->Do(ThingAndOtherThingParameters&)
callback->Done(ThingAndOtherThingResult&)
so to have a single entry point with more complex data.
The downside, from a library user's perspective, may be that now I have to use a manual switch() or similar statement to tell what really happened. While the dispatching to the appropriate result callback used to be done for me, now you have made it a burden upon the library user.
Unless this bought me, as a user, some level of flexibility that I actually wanted, I would consider it a step backwards.
For your part as an implementor, one suggestion would be to go to the generic form internally, and then offer both interfaces externally. Perhaps the old specific interface could even be auto-generated somehow.
Good Luck.
Well, your question implies that there is a balance between the library's complexity and the client's. When those are the only two choices, one almost always goes with making the client's life easier. However, those are rarely really the only two choices.
Now in the text you talk about a command processing architecture where each command has a different set of data associated with it. In the olden days, this would typically be implemented with a big honking case statement in a loop, where each case called a different routine with different parameters and perhaps some setup code. Grisly. McCabe complexity analysers hate this.
These days what you can do with an OO language is use dynamic dispatch. Create a base abstract "command" class with a standard "handle()" method, and have each different command inherit from it to add their own members (to represent the different "arguments" to the different commands). Then you create a big honking array of these at startup, usually indexed by the command ID. For languages like C++ or Ada it has to be an array of pointers to "command" objects, for the dynamic dispatch to work. Then you can just call the appropriate command object for the command ID you read from the client. The big honking case statement is now handled implicitly by the dynamic dispatch.
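A minimal sketch of that structure (names invented for illustration):

#include <cstddef>
#include <memory>
#include <vector>

// Abstract command; each concrete command adds its own "argument" members.
class Command {
public:
    virtual ~Command() = default;
    virtual void handle() = 0;  // replaces one case of the old switch
};

class MoveCommand : public Command {
    int x, y;                   // this command's specific parameters
public:
    MoveCommand(int x, int y) : x(x), y(y) {}
    void handle() override { /* act on x and y */ }
};

// The big honking array, filled at startup and indexed by command ID.
std::vector<std::unique_ptr<Command>> commands;

void dispatch(std::size_t commandId) {
    commands[commandId]->handle();  // dynamic dispatch replaces the switch
}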
Where you can get the big savings in this scenario is in subclassing. Do you have several commands that use the exact same parameters? Make a subclass for them, and then derive all of those commands from that subclass. Do you have several commands that have to perform the same operation on one of the parameters? Make a subclass for them with that one method implemented for that operation, and then derive all those commands from that subclass.
Your first objective should be to produce a library that decouples the higher software layers from the hardware. Users of your library shouldn't care that you have a hardware device that can execute a number of functions, each with a different payload. They should only care about what the device does at a higher level. In this sense, it is in my opinion a good thing that each command is mapped to its own function.
My plan will be:
Identify the objects the higher data layers need to get the job done. Model the objects in C++ classes from their perspective, not from the perspective of the hardware
Define the interface of the library using the above objects
Start the implementation of the library. Perhaps an intermediate layer that maps software objects to hardware objects is necessary
There are many things you can do to reduce code repetition. You can use polymorphism. Define a class with the base functionality and extend it. You can also use utility classes, that implement functions needed for many commands.
I have never used Factories before, for the simple reason that I don't understand when I need them. I have been working on a little game in my spare time, and I decided to implement FMOD for the sound. I looked at a wrapper designed for OpenAL (a different sound setup) and it looked something like...
SoundObject*
SoundObjectManager*
SoundObjectFactory*
The SoundObject was basically the instance of each sound object. The SoundObjectManager just manages all of these objects. This is straightforward enough and makes plenty of sense, but I don't get what the factory is doing or what it is used for. I have been reading up on Factories but still don't really get them.
Any help would be appreciated!
Think of Factory as a "virtual constructor". It lets you construct objects with a common compile time type but different runtime types. You can switch behavior simply by telling the Factory to create an instance of a different runtime type.
Factories are used when the implementation needs to be parametrized. FMOD is cross-platform; it needs to decide what concrete implementation to give you for your platform. That is what the Factory is doing. There are two main patterns here: the Abstract Factory pattern and the Factory Method pattern.
Hypothetical situation: I'm writing a sound library that I want to run on multiple platforms. I'll try to make as much of the code as possible be platform independent, but certainly some of it will need to change for Windows versus OSX versus Linux.
So I write all these different implementations, but I don't want the end user to have to make their program depend on Linux or Windows or whatever. I also don't want to maintain 4 different interfaces to my API. (Note these are just some of the reasons you might create a factory -- there are certainly other situations).
So I define this nice generic SoundObject base class that defines all the methods the client gets to use. Then I make my LinuxSoundObject, WindowsSoundObject, and 5 others derive from SoundObject. But I'm going to hide all these concrete implementations from the user and only provide them with a SoundObject. Instead, you have to call my SoundObjectFactory to grab what appears to you to be a plain old SoundObject, but really I've chosen the correct runtime type for you and instantiated it myself.
2 years later, a new OS comes about and displaces Windows. Instead of forcing you to rewrite your software, I just update my library to support the new platform and you never see a change to the interface.
This is all pretty contrived, but hopefully you get the idea.
Factories isolate consumers of an interface from what runtime type (i.e. implementation) is really being used.
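In code, the hypothetical sound library above boils down to something like this (a sketch; the platform check is simplified):

#include <memory>

class SoundObject {               // the only type clients ever see
public:
    virtual ~SoundObject() = default;
    virtual void play() = 0;
};

class LinuxSoundObject : public SoundObject {
    void play() override { /* Linux audio calls */ }
};
class WindowsSoundObject : public SoundObject {
    void play() override { /* Windows audio calls */ }
};

class SoundObjectFactory {
public:
    // Picks the concrete runtime type; the caller never names it.
    static std::unique_ptr<SoundObject> create() {
#ifdef _WIN32
        return std::make_unique<WindowsSoundObject>();
#else
        return std::make_unique<LinuxSoundObject>();
#endif
    }
};

// Client code: auto sound = SoundObjectFactory::create(); sound->play();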
Factories can be used to implement inversion of control, and to separate instantiation code (the 'new's) from the logic of your components. This is helpful when you're writing unit tests since you may not want tested objects to create a bunch of other objects.
I'm developing a simple simulation with OpenGL, and this simulation has some global constants that are changed by the user during the simulation's execution. I would like to know if the Singleton design pattern is the best way to implement a temporary, execution-time "configuration repository".
A singleton is probably the best option if you need to keep these settings truly "global".
However, for simulation purposes, I'd consider whether you can design your algorithms to pass a reference to a configuration instance, instead. This would make it much easier to store configurations per simulation, and eventually allow you to process multiple simulations with separate configurations concurrently, if requirements change.
Often, trying to avoid global state is a better, long term approach.
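A sketch of the difference (hypothetical names):

struct SimulationConfig {
    double timeStep   = 0.01;
    int    iterations = 1000;
};

// Instead of reading a Config::instance() singleton inside, take the
// settings explicitly:
void runSimulation(const SimulationConfig& cfg) {
    for (int i = 0; i < cfg.iterations; ++i) {
        /* advance the simulation by cfg.timeStep */
    }
}

// Two simulations with different settings can now run side by side:
//     SimulationConfig a, b;
//     b.timeStep = 0.001;
//     runSimulation(a);
//     runSimulation(b);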
I think in the past I've used namespaces for this purpose, not singleton classes, but this should work too (probably even better).
Of course, if you want to be able to change the configuration without recompiling, you might want to move everything to a separate properties file (or XML or YAML or CSV or whatever you prefer) and then load it at application start up. Then you WOULD need a "config" class to store all the values in a hashmap (or something like that).
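For example, a minimal loader for a key=value properties file (a sketch, no error handling):

#include <fstream>
#include <map>
#include <sstream>
#include <string>

// Reads "key=value" lines into a map at application start-up.
std::map<std::string, std::string> loadConfig(const std::string& path) {
    std::map<std::string, std::string> values;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string key, value;
        if (std::getline(fields, key, '=') && std::getline(fields, value))
            values[key] = value;
    }
    return values;
}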