Understanding Factories, and should I use them? - C++

I have never used Factories before for the simple reason that I don't understand when I need them. I have been working on a little game in my spare time, and I decided to implement FMOD for the sound. I looked at a wrapper designed for OpenAL (a different sound setup) and it looked something like...
SoundObject*
SoundObjectManager*
SoundObjectFactory*
The SoundObject was basically the instance of each sound object. The SoundObjectManager just manages all of these objects. This is straightforward enough and makes plenty of sense, but I don't get what the factory is doing or what it is used for. I have been reading up on factories but still don't really get them.
Any help would be appreciated!

Think of a Factory as a "virtual constructor". It lets you construct objects with a common compile-time type but different runtime types. You can switch behavior simply by telling the Factory to create an instance of a different runtime type.
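A minimal sketch of that idea (all names are made up for illustration): callers always get the compile-time type Sound, and the factory function decides the runtime type.

#include <memory>
#include <string>

// Common compile-time type.
struct Sound {
    virtual ~Sound() = default;
    virtual void play() = 0;
};

// Two different runtime types.
struct WavSound : Sound { void play() override { /* decode WAV data */ } };
struct OggSound : Sound { void play() override { /* decode OGG data */ } };

// The "virtual constructor": callers only ever see Sound.
std::unique_ptr<Sound> makeSound(const std::string& kind) {
    if (kind == "wav") return std::make_unique<WavSound>();
    return std::make_unique<OggSound>();
}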

Factories are used when the implementation needs to be parameterized. FMOD is cross-platform; it needs to decide which concrete implementation to give you for your platform. That is what the Factory is doing. There are two main patterns: the Abstract Factory pattern and the Factory Method pattern.

Hypothetical situation: I'm writing a sound library that I want to run on multiple platforms. I'll try to make as much of the code as possible be platform independent, but certainly some of it will need to change for Windows versus OSX versus Linux.
So I write all these different implementations, but I don't want the end user to have to make their program depend on Linux or Windows or whatever. I also don't want to maintain 4 different interfaces to my API. (Note these are just some of the reasons you might create a factory -- there are certainly other situations).
So I define this nice generic SoundObject base class that defines all the methods the client gets to use. Then I make my LinuxSoundObject, WindowsSoundObject, and 5 others derive from SoundObject. But I'm going to hide all these concrete implementations from the user and only provide them with a SoundObject. Instead, you have to call my SoundObjectFactory to grab what appears to you to be a plain old SoundObject, but really I've chosen the correct runtime type for you and instantiated it myself.
2 years later, a new OS comes about and displaces Windows. Instead of forcing you to rewrite your software, I just update my library to support the new platform and you never see a change to the interface.
This is all pretty contrived, but hopefully you get the idea.
Factories isolate consumers of an interface from what runtime type (i.e. implementation) is really being used.
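A rough sketch of how that hypothetical library might look. SoundObject, WindowsSoundObject and LinuxSoundObject are the names from the story above; the factory class layout, the preprocessor checks and the unique_ptr usage are my own additions, not FMOD's or OpenAL's actual API.

// SoundObject.h -- the only header clients ever include.
#include <memory>

class SoundObject {
public:
    virtual ~SoundObject() = default;
    virtual void play() = 0;
};

class SoundObjectFactory {
public:
    // Picks the right concrete runtime type; clients only ever see SoundObject.
    static std::unique_ptr<SoundObject> create();
};

// SoundObjectFactory.cpp -- the platform choice lives here, out of sight.
#if defined(_WIN32)
    #include "WindowsSoundObject.h"   // derives from SoundObject
    std::unique_ptr<SoundObject> SoundObjectFactory::create() {
        return std::make_unique<WindowsSoundObject>();
    }
#else
    #include "LinuxSoundObject.h"     // derives from SoundObject
    std::unique_ptr<SoundObject> SoundObjectFactory::create() {
        return std::make_unique<LinuxSoundObject>();
    }
#endif

If the library later adds a new platform, only this .cpp file changes; client code keeps calling SoundObjectFactory::create() and never sees the difference.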

Factories can be used to implement inversion of control, and to separate instantiation code (the 'new's) from the logic of your components. This is helpful when you're writing unit tests since you may not want tested objects to create a bunch of other objects.
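For example, a component that takes a factory instead of calling new itself can be handed a stub factory in a test. Everything here is hypothetical, just a sketch of the shape:

#include <memory>

// The thing being created.
struct Connection {
    virtual ~Connection() = default;
    virtual void send(const char* msg) = 0;
};

// The factory interface the component depends on instead of calling new.
struct ConnectionFactory {
    virtual ~ConnectionFactory() = default;
    virtual std::unique_ptr<Connection> create() = 0;
};

class Mailer {
public:
    explicit Mailer(ConnectionFactory& f) : factory(f) {}
    void notify() { factory.create()->send("hello"); }  // no 'new' in here
private:
    ConnectionFactory& factory;
};

// In a unit test, pass a factory that returns a fake Connection which
// records calls instead of touching a real network.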

Related

Given an abstract interface, is the only clue that you need to provide your own implementation the fact that you do NOT find a factory function?

Admittedly a quite theoretical question.
And I would like to ask it more from the perspective of a library designer than of a library user.
Although the goal is to provide the easiest possible design for the user.
Is there any guideline / best practice for how to communicate that a given interface is always supposed to be implemented by the user? Or that factory functions are provided somewhere which create reasonable objects implementing that interface?
Of course, in almost all cases this should be clear from the context. Another library function expecting such an interface as a parameter could make it self-explanatory where to get one from, because it would just be a link in some chain.
But I hope some of you can imagine a rather evolved system or library that is not that easily understood anymore.
How can you prevent the understanding of interfaces from getting more and more difficult, regarding the basic question of whether there are factory functions somewhere or whether the user always needs to provide their own implementation?
Does the answer lie in comments, documentation or code?
I would just guess that factory functions should always be declared in the immediate vicinity of the interface, and that if none is declared there, there is none.
But I don't know whether this is too soft a guideline, or whether it can even be followed all the time.
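One way to express the "there is a default implementation" case is exactly what the last guess suggests: declare the factory function right next to the interface, in the same header. A sketch with made-up names:

// Codec.h
#include <memory>

class Codec {
public:
    virtual ~Codec() = default;
    virtual int decode(const char* data, int size) = 0;
};

// Declared right next to the interface: the library ships an implementation.
// If no such function were declared here, the reader would know the
// interface is theirs to implement.
std::unique_ptr<Codec> createDefaultCodec();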

Is the PIMPL idiom really used in practice?

I am reading the book "Exceptional C++" by Herb Sutter, and in that book I have learned about the PIMPL idiom. Basically, the idea is to create a structure for the private objects of a class and dynamically allocate them to decrease the compilation time (and also hide the private implementations in a better manner).
For example:
class X
{
private:
    C c;
    D d;
};
could be changed to:
class X
{
private:
    struct XImpl;
    XImpl* pImpl;
};
and, in the .cpp file, the definition:
struct X::XImpl
{
    C c;
    D d;
};
This seems pretty interesting, but I have never seen this kind of approach before, neither in the companies I have worked, nor in open source projects that I've seen the source code. So, I am wondering whether this technique is really used in practice.
Should I use it everywhere, or with caution? And is this technique recommended to be used in embedded systems (where the performance is very important)?
So, I am wondering if this technique is really used in practice? Should I use it everywhere, or with caution?
Of course it is used. I use it in my project, in almost every class.
Reasons for using the PIMPL idiom:
Binary compatibility
When you're developing a library, you can add/modify fields in XImpl without breaking binary compatibility with your clients (which would mean crashes!). Since the binary layout of class X doesn't change when you add new fields to the XImpl class, it is safe to add new functionality to the library in minor version updates.
Of course, you can also add new public/private non-virtual methods to X/XImpl without breaking the binary compatibility, but that's on par with the standard header/implementation technique.
Data hiding
If you're developing a library, especially a proprietary one, it might be desirable not to disclose what other libraries / implementation techniques were used to implement the public interface of your library. Either because of intellectual property issues, or because you believe that users might be tempted to make dangerous assumptions about the implementation or just break the encapsulation by using terrible casting tricks. PIMPL solves/mitigates that.
Compilation time
Compilation time is decreased, since only the source (implementation) file of X needs to be rebuilt when you add/remove fields and/or methods to the XImpl class (which maps to adding private fields/methods in the standard technique). In practice, it's a common operation.
With the standard header/implementation technique (without PIMPL), when you add a new field to X, every client that ever allocates X (either on the stack or on the heap) needs to be recompiled, because it must adjust the size of the allocation. Well, every client that never allocates X also needs to be recompiled, but that's just overhead (the resulting code on the client side will be the same).
What is more, with the standard header/implementation separation, XClient1.cpp needs to be recompiled even when only a private method X::foo() was added to X and X.h changed, even though XClient1.cpp can't possibly call this method for encapsulation reasons! Like above, it's pure overhead and is related to how real-life C++ build systems work.
Of course, recompilation is not needed when you just modify the implementation of the methods (because you don't touch the header), but that's on par with the standard header/implementation technique.
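To make the forwarding concrete, here is a minimal modern sketch of the X example from the question, using std::unique_ptr. C and D are the same placeholder members as in the snippets above, and the out-of-line destructor is needed because XImpl is an incomplete type in the header.

// X.h
#include <memory>

class X {
public:
    X();
    ~X();                       // defined in X.cpp, where XImpl is complete
    void foo();                 // forwarded to the implementation
private:
    struct XImpl;
    std::unique_ptr<XImpl> pImpl;
};

// X.cpp
#include "X.h"

struct X::XImpl {
    C c;                        // adding fields here never changes sizeof(X)
    D d;
    void foo() { /* real work */ }
};

X::X() : pImpl(std::make_unique<XImpl>()) {}
X::~X() = default;              // XImpl is complete here, so this compiles
void X::foo() { pImpl->foo(); }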
Is this technique recommended to be used in embedded systems (where the performance is very important)?
That depends on how powerful your target is. However, the only answer to this question is: measure and evaluate what you gain and lose. Also, take into consideration that if you're not publishing a library meant to be used in embedded systems by your clients, only the compilation-time advantage applies!
It seems that a lot of libraries out there use it to stay stable in their API, at least for some versions.
But as for all things, you should never use anything everywhere without caution. Always think before using it. Evaluate what advantages it gives you, and if they are worth the price you pay.
The advantages it may give you are:
helps in keeping binary compatibility of shared libraries
hiding certain internal details
decreasing recompilation cycles
Those may or may not be real advantages to you. For me, for example, a few minutes of recompilation time doesn't matter much. End users usually don't care either, as they compile the whole thing once, from scratch, anyway.
Possible disadvantages are (also here, depending on the implementation and whether they are real disadvantages for you):
increase in memory usage due to more allocations than with the naïve variant
increased maintenance effort (you have to write at least the forwarding functions)
performance loss (the compiler may not be able to inline stuff as it is with a naïve implementation of your class)
So carefully give everything a value, and evaluate it for yourself. For me, it almost always turns out that using the PIMPL idiom is not worth the effort. There is only one case where I personally use it (or at least something similar):
My C++ wrapper for the Linux stat call. Here the struct from the C header may be different, depending on which #defines are set. And since my wrapper header can't control all of them, I only #include <sys/stat.h> in my .cxx file and avoid these problems.
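Roughly, that wrapper looks like the following sketch (the names are invented, not the actual code). The point is that the header never mentions struct stat, so none of its #define-dependent layout leaks into client translation units.

// FileStat.h -- deliberately does not include <sys/stat.h>.
#include <memory>
#include <string>

class FileStat {
public:
    explicit FileStat(const std::string& path);
    ~FileStat();
    long long size() const;
private:
    struct Impl;                       // holds the real 'struct stat'
    std::unique_ptr<Impl> impl;
};

// FileStat.cxx -- the only translation unit that sees the system header.
#include "FileStat.h"
#include <sys/stat.h>

struct FileStat::Impl { struct stat st{}; };

FileStat::FileStat(const std::string& path) : impl(new Impl) {
    ::stat(path.c_str(), &impl->st);
}
FileStat::~FileStat() = default;
long long FileStat::size() const { return impl->st.st_size; }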
I agree with all the others about the benefits, but let me point out a limitation: PIMPL doesn't work well with templates.
The reason is that template instantiation requires the full definition to be visible wherever the instantiation takes place. (And that's the main reason you don't see template methods defined in .cpp files.)
You can still refer to templatised subclasses, but since you have to include them all, every advantage of "implementation decoupling" at compile time (not having to include all the platform-specific code everywhere, shorter compilation) is lost.
It is a good paradigm for classic OOP (inheritance based), but not for generic programming (specialization based).
Other people have already provided the technical up/downsides, but I think the following is worth noting:
First and foremost, don't be dogmatic. If PIMPL works for your situation, use it - don't use it just because "it's better OO since it really hides implementation", etc. Quoting the C++ FAQ:
encapsulation is for code, not people (source)
Just to give you an example of open source software where it is used and why: OpenThreads, the threading library used by the OpenSceneGraph. The main idea is to remove from the header (e.g., <Thread.h>) all platform-specific code, because internal state variables (e.g., thread handles) differ from platform to platform. This way one can compile code against your library without any knowledge of the other platforms' idiosyncrasies, because everything is hidden.
I would mainly consider PIMPL for classes exposed as an API to other modules. This has many benefits: recompilation after changes made in the PIMPL implementation does not affect the rest of the project. Also, for API classes it promotes binary compatibility (changes in a module's implementation do not affect clients of that module; they don't have to be recompiled, as the new implementation has the same binary interface - the interface exposed by the PIMPL).
As for using PIMPL for every class, I would advise caution, because all those benefits come at a cost: an extra level of indirection is required in order to access the implementation methods.
I think this is one of the most fundamental tools for decoupling.
I was using PIMPL (and many other idioms from Exceptional C++) on an embedded project (a set-top box).
The particular purpose of this idiom in our project was to hide the types the XImpl class uses.
Specifically, we used it to hide implementation details for different hardware, where different headers would be pulled in. We had different implementations of the XImpl class for one platform and different ones for the other. The layout of class X stayed the same regardless of the platform.
I used to use this technique a lot in the past but then found myself moving away from it.
Of course it is a good idea to hide the implementation detail away from the users of your class. However you can also do that by getting users of the class to use an abstract interface and for the implementation detail to be the concrete class.
The advantages of pImpl are:
Assuming there is just one implementation, it is clearer than an abstract class / concrete implementation split.
If you have a suite of classes (a module) such that several classes access the same "impl" but users of the module will only use the "exposed" classes.
No v-table if this is assumed to be a bad thing.
The disadvantages I found of pImpl (where abstract interface works better)
Whilst you may have only one "production" implementation, by using an abstract interface you can also create a "mock" implementation for unit testing.
(The biggest issue.) Before the days of unique_ptr and move semantics you had restricted choices for how to store the pImpl. A raw pointer meant issues with your class being non-copyable. The old auto_ptr wouldn't work with a forward-declared class (not on all compilers, anyway). So people started using shared_ptr, which was nice in making your class copyable, but of course both copies then shared the same underlying pImpl, which you might not expect (modify one and both are modified). So the solution was often to use a raw pointer for the inner one, make the class non-copyable, and return a shared_ptr to that instead. So two calls to new. (Actually three, given that the old shared_ptr gave you a second one.)
Technically not really const-correct, as the constness isn't propagated through to the member pointer.
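To make the const point concrete, here is a tiny made-up fragment (not a full PIMPL, just enough to compile): inside a const member function only the pointer itself is const, not the pointee, so the implementation can still be mutated unless you add propagation by hand (e.g. const-qualified accessors or std::experimental::propagate_const).

class Widget {
public:
    void inspect() const;              // looks read-only from the outside
private:
    struct Impl { int counter = 0; };  // defined inline here only for brevity
    Impl* pImpl = nullptr;
};

void Widget::inspect() const {
    // Compiles: in a const member function pImpl is 'Impl* const',
    // so the pointee itself is still mutable.
    pImpl->counter++;
}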
In general, I have therefore moved away from pImpl over the years and towards abstract interfaces instead (with factory methods to create instances).
As many others have said, the PIMPL idiom lets you achieve complete information hiding and compilation independence, unfortunately at the cost of a performance loss (additional pointer indirection) and additional memory (the member pointer itself). The additional cost can be critical in embedded software development, particularly in scenarios where memory must be economized as much as possible.
Using C++ abstract classes as interfaces would lead to the same benefits at the same cost.
This actually exposes a big deficiency of C++: without resorting to C-like interfaces (global functions with an opaque pointer as a parameter), it is not possible to have true information hiding and compilation independence without extra resource costs. This is mainly because the declaration of a class, which must be included by its users, exports not only the interface of the class (public methods) needed by the users, but also its internals (private members), which the users do not need.
Here is an actual scenario I encountered, where this idiom helped a great deal. I recently decided to support DirectX 11, as well as my existing DirectX 9 support, in a game engine.
The engine already wrapped most DX features, so none of the DX interfaces were used directly; they were just defined in the headers as private members. The engine uses DLL files as extensions, adding keyboard, mouse, joystick, and scripting support, as well as many other extensions. While most of those DLLs did not use DX directly, they required knowledge of and linkage to DX simply because they pulled in headers that exposed DX.
In adding DX 11, this complexity was going to increase dramatically, and quite unnecessarily. Moving the DX members into a PIMPL, defined only in the source file, eliminated this imposition.
On top of this reduction of library dependencies, my exposed interfaces became cleaner as I moved private member functions into the PIMPL, exposing only front facing interfaces.
One benefit I can see is that it allows the programmer to implement certain operations in a fairly fast manner:
X( X&& move_semantics_are_cool ) : pImpl(nullptr) {
    // Steal the other object's implementation; it is left empty.
    this->swap(move_semantics_are_cool);
}

X& swap( X& rhs ) {
    // Only two pointers change hands; no deep copy.
    std::swap( pImpl, rhs.pImpl );
    return *this;
}

X& operator=( X&& move_semantics_are_cool ) {
    return this->swap(move_semantics_are_cool);
}

X& operator=( const X& rhs ) {
    // Classic copy-and-swap; the temporary's destructor frees our old pImpl.
    X temporary_copy(rhs);
    return this->swap(temporary_copy);
}
PS: I hope I'm not misunderstanding move semantics.
It is used in practice in a lot of projects. Its usefulness depends heavily on the kind of project. One of the more prominent projects using this is Qt, where the basic idea is to hide implementation and platform-specific code from the user (other developers using Qt).
This is a noble idea, but there is a real drawback to it: debugging.
As long as the code hidden in the private implementations is of premium quality, all is well. But if there are bugs in there, the user/developer has a problem, because all they see is a dumb pointer to a hidden implementation, even if they have the implementation's source code.
So as in nearly all design decisions there are pros and cons.
I thought I would add an answer because although some authors hinted at this, I didn't think the point was made clear enough.
The primary purpose of PIMPL is to solve the N*M problem. This problem may have other names in other literature, but a brief summary is this.
You have some kind of inheritance hierarchy where, if you were to add a new subclass, it would require you to implement N or M new methods.
This is only an approximate hand-wavey explanation, because I only recently became aware of this and so I am by my own admission not yet an expert on this.
Discussion of existing points made
However I came across this question, and similar questions a number of years ago, and I was confused by the typical answers which are given. (Presumably I first learned about PIMPL some years ago and found this question and others similar to it.)
Enables binary compatibility (when writing libraries)
Reduces compile time
Hides data
Taking into account the above "advantages", none of them are a particularly compelling reason to use PIMPL, in my opinion. Hence I have never used it, and my program designs suffered as a consequence because I discarded the utility of PIMPL and what it can really be used to accomplish.
Allow me to comment on each to explain:
1.
Binary compatibility is only of relevance when writing libraries. If you are compiling a final executable program, then this is of no relevance, unless you are using someone else's (binary) libraries. (In other words, you do not have the original source code.)
This means this advantage is of limited scope and utility. It is only of interest to people who write libraries which are shipped in proprietary form.
2.
I don't personally consider this to be of any relevance these days, when it is rare to work on projects where the compile time is of critical importance. Maybe this is important to the developers of Google Chrome. The associated disadvantages probably increase development time enough to more than offset this advantage. I might be wrong about this, but I find it unlikely, especially given the speed of modern compilers and computers.
3.
I don't immediately see the advantage that PIMPL brings here. The same result can be accomplished by shipping a header file and a binary object file. Without a concrete example in front of me, it is difficult to see why PIMPL is relevant here. The relevant "thing" is shipping binary object files rather than the original source code.
What PIMPL actually does:
You will have to forgive my slightly hand-wavy answer. While I am not a complete expert in this particular area of software design, I can at least tell you something about it. This information is mostly repeated from Design Patterns, whose authors call it the "Bridge" pattern, also known as Handle/Body.
In that book, the example of writing a window manager is given. The key point is that a window manager can implement different types of windows as well as target different platforms.
For example, one may have a
Window
Icon window
Fullscreen window with 3d acceleration
Some other fancy window
These are types of windows which can be rendered
as well as
Microsoft Windows implementation
OS X platform implementation
Linux X Window Manager
Linux Wayland
These are different types of rendering engines, with different OS calls and possibly fundamentally different functionality as well
The list above is analogous to the one given in another answer, where another user described writing software that should work with different kinds of hardware for something like a DVD player. (I forget exactly what the example was.)
I give slightly different examples here compared to what is written in the Design Patterns book.
The point is that there are two separate kinds of thing which should be implemented using an inheritance hierarchy, and a single inheritance hierarchy does not suffice here. (The N*M problem: the complexity scales like the product of the number of things in the two bullet lists, which is not feasible for a developer to implement.)
Hence, using PIMPL, one separates out the types of windows and provides a pointer to an instance of an implementation class.
So PIMPL:
Solves the N*M problem
Decouples two fundamentally different things being modelled with inheritance, so that there are two or more hierarchies rather than one monolith
Permits runtime exchange of the exact implementation behaviour (by changing a pointer). This may be advantageous in some situations, whereas a single monolith forces behaviour to be selected statically (at compile time) rather than at runtime
There may be other ways to implement this, for example with multiple inheritance, but this is usually a more complicated and difficult approach, at least in my experience.
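For what it's worth, here is a compressed sketch of the window example. The class names follow the bullet lists above; everything else (the methods, the drawRect call) is invented for illustration.

#include <memory>

// Implementation hierarchy: one class per platform / rendering backend.
struct WindowImpl {
    virtual ~WindowImpl() = default;
    virtual void drawRect(int x, int y, int w, int h) = 0;
};
struct X11WindowImpl     : WindowImpl { void drawRect(int, int, int, int) override { /* Xlib calls */ } };
struct WaylandWindowImpl : WindowImpl { void drawRect(int, int, int, int) override { /* Wayland calls */ } };

// Abstraction hierarchy: one class per kind of window.
class Window {
public:
    explicit Window(std::unique_ptr<WindowImpl> i) : impl(std::move(i)) {}
    virtual ~Window() = default;
    virtual void draw() { impl->drawRect(0, 0, 640, 480); }
protected:
    std::unique_ptr<WindowImpl> impl;   // the "pointer to an implementation"
};
class IconWindow : public Window {
public:
    using Window::Window;
    void draw() override { impl->drawRect(0, 0, 32, 32); }
};

// N window kinds and M platforms cost N + M classes instead of N * M.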

Is it better to have lot of interfaces or just one?

I have been working on this plugin system. I thought I passed design and started implementing. Now I wonder if I should revisit my design. my problem is the following:
Currently in my design I have:
An interface class FileNameLoader for loading the names of all the shared libraries my application needs to load, e.g. load all files in a directory, load all files specified in an XML file, load all files the user inputs, etc.
An interface class LibLoader that actually loads the shared object. This class is only responsible for loading a shared object once its file name has been given. There are various ways one may need to load a shared lib, e.g. use RTLD_NOW/RTLD_LAZY, check whether the lib has already been loaded, etc.
An ABC Plugin which loads the functions I need from a handle to a library once that handle is supplied. There are so many ways this could change.
An interface class PluginFactory which creates Plugins.
An ABC PluginLoader which is the mother class which manages everything.
Now, my problem is I feel that FileNameLoader and LibLoader can go inside Plugin. But this would mean that if someone wanted to just change RTLD_NOW to RTLD_LAZY he would have to change Plugin class. On the other hand, I feel that there are too many classes here. Please give some input. I can post the interface code if necessary. Thanks in advance.
EDIT:
After giving this some thought, I have come to the conclusion that more interfaces is better (in my scenario at least). Suppose there are x implementations of FileNameLoader, y implementations of LibLoader, and z implementations of Plugin. If I keep these classes separate, I have to write x + y + z implementation classes, and I can then combine them to get any functionality possible. On the other hand, if all these interfaces were folded into the Plugin class, I'd have to write x*y*z implementation classes to get all the possible functionality, which is larger than x + y + z given that there are at least 2 implementations per interface. That is just one side of it. The other advantage is that the purpose of each interface is clearer when there are more interfaces. At least that is what I think.
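A sketch of that composition argument, using the interface names from the question (the member functions and the concrete classes are hypothetical):

#include <memory>
#include <string>
#include <vector>

struct FileNameLoader {
    virtual ~FileNameLoader() = default;
    virtual std::vector<std::string> load() = 0;
};

struct LibLoader {
    virtual ~LibLoader() = default;
    virtual void* open(const std::string& path) = 0;
};

// Each axis is implemented on its own (x + y classes)...
struct DirectoryNameLoader : FileNameLoader {
    std::vector<std::string> load() override { return {}; /* scan a directory */ }
};
struct XmlNameLoader : FileNameLoader {
    std::vector<std::string> load() override { return {}; /* parse an XML list */ }
};
struct LazyLibLoader : LibLoader {
    void* open(const std::string&) override { return nullptr; /* dlopen RTLD_LAZY */ }
};
struct NowLibLoader : LibLoader {
    void* open(const std::string&) override { return nullptr; /* dlopen RTLD_NOW */ }
};

// ...and any of the x * y combinations is obtained by composing them.
class PluginLoader {
public:
    PluginLoader(std::unique_ptr<FileNameLoader> n, std::unique_ptr<LibLoader> l)
        : names(std::move(n)), libs(std::move(l)) {}
private:
    std::unique_ptr<FileNameLoader> names;
    std::unique_ptr<LibLoader> libs;
};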
My C++ projects generally consist of objects that implement one or more interfaces.
I have found that this approach has the following effects:
Use of interfaces enforces your design.
(my opinion only) ensures a better program design.
Related functionality is grouped into interfaces.
The compiler will let you know if your implementation of the interface is incomplete or incorrect (good for changes to interfaces).
You can pass interface pointers around instead of entire objects.
Passing around interface pointers has the benefit that you're exposing only the functionality required to other objects.
COM employs the use of interfaces heavily, as its modular design is useful for IPC (inter-process communication), promotes code reuse and enables backwards compatibility.
Microsoft uses COM extensively and bases its OS and most important APIs (DirectX, DirectShow, etc.) on COM for these reasons, and although it's hardly the most accessible technology, COM is not going away any time soon.
Will these aid your own program(s)? Up to you. If you're going to turn a lot of your code into COM objects, it's definitely the right approach.
The other good stuff you get with interfaces that I've mentioned - make your own judgement as to how useful they'll be to you. Personally, I find interfaces indispensable.
Generally the only time I provide more than one interface is when I have two completely different kinds of clients (e.g. clients and the server). In that case, yes, it is perfectly OK.
However, this statement worries me:
I thought I passed design and started implementing
That's old-fashioned Waterfall thinking. You are never done designing. You will almost always have to do a fairly major redesign the first time a real client tries to use your class. Thereafter, every now and then you'll discover edge cases of client use that require (or would greatly benefit from) an extra call or two, or a slightly different approach to all the calls.
You might be interested in the Interface Segregation Principle, which results in more, smaller interfaces.
"Clients should not be forced to depend on interfaces that they do not use."
More detail on this principle is provided by this paper: http://www.objectmentor.com/resources/articles/isp.pdf
This is part of Bob Martin's synergistic SOLID principles.
There isn't a golden rule. It'll depend on the scenario, and even then you may find in the future some assumptions have changed and you need to update it accordingly.
Personally I like the way you have it now. You can replace at the top level, or very specific pieces.
Having the One Big Class That Does Everything is wrong. So is having One Big Interface That Defines Everything.

Keeping modules independent, while still using each other

A big part of my C++ application uses classes to describe the data model, e.g. something like ClassType (which actually emulates reflection in plain C++).
I want to add a new module to my application and it needs to make use of these ClassTypes, but I prefer not to introduce dependencies from my new module on ClassType.
So far I have the following alternatives:
Not making it independent and introducing a dependency on ClassType, with the risk of creating more 'spaghetti' dependencies in my application (this is my least-preferred solution)
Introduce a new class, e.g. IType, and letting my module only depend on IType. ClassType should then inherit from IType.
Use strings as identification method, and forcing the users of the new module to convert the ClassType to a string or vice versa where needed.
Use GUID's (or even simple integers) as identification, also requiring conversions between GUID's and ClassType's
How far should you try to go when decoupling modules in an application?
Just introduce an interface and let all the other modules rely on the interface? (like the IType described above)
even decouple it further by using other identifications like strings or GUID's?
I'm afraid that by decoupling it too far, the code becomes more unstable and more difficult to debug. I've seen one such example in Qt: signals and slots are linked using strings, and if you make a typo, the functionality doesn't work but it still compiles.
How far should you keep your modules decoupled?
99% of the time, if your design is based on reflection, then you have major issues with the design.
Generally speaking, something like
if (dynamic_cast<MyClass*>(x)) { /* ... */ }
else if (dynamic_cast<AnotherClass*>(x)) { /* ... */ }
else { /* ... */ }
is a poor design because it neglects polymorphism. If you're doing this, then the item x is in violation of the Liskov Substitution Principle.
Also, given that C++ already has RTTI, I don't see why you'd reinvent the wheel. That's what typeid and dynamic_cast are for.
I'll steer away from thinking about your reflection and just look at the dependency ideas.
Decouple what it's reasonable to decouple. Coupling implies that if one thing changes, so must another. Your NewCode is using ClassType; if some aspects of ClassType change, then you surely must change NewCode - it can't be completely decoupled. Which of the following do you want to decouple from?
Semantics, what ClassType does.
Interface, how you call it.
Implementation, how it's implemented.
To my eyes the first two are reasonable coupling. But surely an implementation change should not require NewCode to change. So code to Interfaces. We try to keep Interfaces fixed, we tend to extend them rather than change them, keeping them back-compatible if at all possible. Sometimes we use name/value pairs to try to make the interface extensible, and then hit the typo kind of errors you allude to. It's a trade-off between flexibility and "type-safety".
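Coupling the new module only to an interface could look roughly like this, reusing the IType/ClassType names from the question (the members and the registerType function are invented for illustration):

#include <string>

// Lives with the new module; the module never sees ClassType itself.
struct IType {
    virtual ~IType() = default;
    virtual std::string name() const = 0;
};

// The existing reflection class adapts itself to the interface.
class ClassType : public IType {
public:
    explicit ClassType(std::string n) : typeName(std::move(n)) {}
    std::string name() const override { return typeName; }
    // ...the rest of the reflection machinery stays here...
private:
    std::string typeName;
};

// The new module depends only on IType, so ClassType's implementation
// (and whatever it pulls in) can change without touching the module.
void registerType(const IType& type);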
It's a philosophical question; it depends on the type of module, and the trade-offs. I think I have personally done all of them at various times, except for the GUID to type mapping, which doesn't have any advantages over the string to type mapping in my opinion, and at least strings are readable.
I would say you need to look at what level of decoupling is required for the particular module, given the expected external usage and code organization, and go from there. You've hit all the conceptual methods as far as I know, and they are each useful in particular situations.
That's my opinion, anyway.

Guidelines for writing flexible software?

I've been developing an interpreter in C++ for my (esoteric, if you like) programming language for some time now. One of the main things I have noticed: I start with a flexible concept, and the further I code (Tokenizer -> Parser -> Interpreter), the less flexible the whole system gets.
For example: I didn't implement an include function at first, yet the interpreter was already up and running - I had extreme difficulties implementing it and it felt like "patching something on" later. My system had lost flexibility very quickly.
How can I learn to keep relatively small C++ projects as flexible and extensible as possible during development?
If you need to keep
C++ projects as flexible and extensible as possible during development
then you haven't got a product specification, you have no real goal, and no way of defining a finished product.
For a commercial product this is the worst situation to be in. To paraphrase one well-known blogger (I can't remember who): "you haven't got a product until you define what you aren't going to do."
For personal projects this might not be a problem. Chalk it up to experience and remember for future reference. Refactor and move on.
1. Define the structure of the project before you start coding. Outline your main objectives and think about how you can achieve them.
2. Code the headers.
3. Check whether it's possible to implement every feature using this set of interfaces.
4. If not, go back to (2).
5. If yes, code the .cpp files.
6. Enjoy.
Of course, this doesn't apply to really large projects. But if your design is modular, there shouldn't be any problem dividing the project into separate parts.
Don't fear Evolution (Refactoring).
If there are many classes that fit a theme, create a common base class.
Instead of hard coding data members, use pointers to an abstract base class.
For example, instead of using std::ifstream, use std::istream.
In my project, I have abstract classes for Reading and Writing. Classes that support reading and writing use these interfaces. I can pass specialized readers to these classes without changing any code. A database reader would inherit from the base Reader class, and thus can be used anywhere a reader is used.
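That Reader idea, sketched with hypothetical names (the real classes will differ):

#include <string>

// Abstract base: consumers depend only on this interface.
struct Reader {
    virtual ~Reader() = default;
    virtual std::string readLine() = 0;
};

class FileReader : public Reader {
public:
    std::string readLine() override { return {}; /* read from an std::ifstream */ }
};

class DatabaseReader : public Reader {
public:
    std::string readLine() override { return {}; /* fetch the next row */ }
};

// Works with any Reader; handing it a DatabaseReader needs no changes here.
void import(Reader& in) {
    std::string line = in.readLine();
    // ...
}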