Name of this C++ implementation pattern

There is a certain implementation pattern in C++, described below, that is used in the std::iostream library and other similar libraries.
Can anybody recall the name of this pattern?
The pattern is described like this:
- There is a central class IO used for output of data, or for conversion of data (e.g., std::ostream).
- For every application class for which output conversion is defined, the "converters" are GLOBAL functions, not member functions of IO. The motivation for this pattern is
(1) the designer of IO wants to have it "finished", not needing any changes when another application class with a converter is added, and
(2) you want IO to be a small, manageable class, not a class with 100 members and thousands of lines. This pattern is common when decoupling is needed between the IO class and a multitude of "user" classes.
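For illustration, the typical shape of what I mean looks roughly like this (a sketch; Point is a hypothetical application class, not anything from a real library):

#include <ostream>

struct Point { int x, y; };   // hypothetical "application" class

// The "converter" is a global function, not a member of std::ostream:
std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << '(' << p.x << ", " << p.y << ')';
}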
What is the name of this pattern?

Looks like it's Herb Sutter's "Interface Principle";
at least I read it in one of his books:
the interface must be minimal; all functions that do not need private data (for compile-time or runtime speed) should be free functions outside the class.

This is not a design pattern at all.
Design patterns are not tied to a programming language. What you describe is done because the class std::ostream comes from a library, so you cannot conveniently add "operator<<(MyClass &ob)" member functions.
The proper term instead of design pattern is "idiom". See e.g.: http://en.wikibooks.org/wiki/C++_Programming/Idioms or http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms (not sure if your case is listed; at first glance I did not find it).

Related

Using functions with the same name from different classes; what pattern is this?

I would like to know what OOP technique this is so that I can gain a better understanding of how to use it, and what it does in a wider range of applications.
I have seen this technique used in a few programs. Yet I have tried researching it and I have not found any article anywhere that even makes mention of it.
in fileA.cpp, where I have this member function of class A:
// an object of class B is a member of class A
B _classB;
void A::signup(int n) {
    _classB.signup(n);
    // rest of the function
}
then in fileB.cpp we have this member function of class B:
void B::signup(int n) {
    // rest of the function
}
Is there a definition for this technique, where a function with a particular name calls a function with a similar name from another class? Are there any articles I can read about this, so that I can use it further in my programming?
I'm not aware of any particular name for what you're doing, because it's not a pattern nor anything really particularly special/magical.
That being said, in colloquial discussions I might say that these functions "forward" to the other functions. If I'm "forwarding" in this manner I'll usually have the same function names, because they do the same things.
But, again, this is just me.
Other terms (stolen from the comments section) may include façade or proxy.
When you start doing things like hiding implementations, or wrapping symbols for transport across ABI boundaries, you might be using techniques like marshalling or patterns like PIMPL. Again, these terms do not describe your function names at all, but you'll almost certainly find a degree of function name re-use when employing such techniques/patterns.
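For reference, a minimal PIMPL sketch might look like this (Widget and Impl are made-up names, not from the question):

// widget.h - the public header exposes no implementation details
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                     // defined in widget.cpp, where Impl is complete
    void doWork();                 // the public method forwards to the hidden Impl
private:
    class Impl;                    // forward declaration of the hidden implementation
    std::unique_ptr<Impl> impl_;   // the "pointer to implementation"
};

// widget.cpp
class Widget::Impl {
public:
    void doWork() { /* real work happens here */ }
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() = default;
void Widget::doWork() { impl_->doWork(); }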

Name this design pattern

While doing a study on the practical use of inheritance concepts in C#, I encountered an interesting pattern of code. A non-generic interface I inherits from a generic type I<T> multiple times, each with a different type argument. The only reason I inherits from I<T> is to declare overloads; I<T> is never referenced anywhere else in the code, except for the inheritance relation. To illustrate:
interface Combined : Operations<Int32>, Operations<Int64>, Operations<double> {}
interface Operations<T> {
    T Add(T left, T right);
    T Multiply(T left, T right);
}
In practice, the Operations<T> interface has 30 methods with extensive XML documentation, so it seems logical not to want to repeat these declarations so many times. I googled for 'overload repeat design' and 'method declaration reuse design pattern' etc. but could not find any useful information.
Maybe this pattern has a more profound use in languages supporting multiple inheritance, like C++, where the implementation of the operations could also be provided (see the sketch below).
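For instance, a rough C++ analog of the example above might look like this (just a sketch; the trivial method bodies are invented here for illustration):

#include <cstdint>

template <typename T>
struct Operations {
    T Add(T left, T right) { return left + right; }
    T Multiply(T left, T right) { return left * right; }
};

// Inheriting the template several times makes Add/Multiply overloads of Combined.
// The using-declarations merge the names from the bases into one overload set,
// which C++ needs to avoid ambiguity.
struct Combined : Operations<std::int32_t>, Operations<std::int64_t>, Operations<double> {
    using Operations<std::int32_t>::Add;
    using Operations<std::int64_t>::Add;
    using Operations<double>::Add;
    using Operations<std::int32_t>::Multiply;
    using Operations<std::int64_t>::Multiply;
    using Operations<double>::Multiply;
};

int main() {
    Combined c;
    double d = c.Add(1.5, 2.5);                          // resolves to Operations<double>::Add
    auto i = c.Add(std::int32_t(1), std::int32_t(2));    // resolves to Operations<std::int32_t>::Add
    (void)d; (void)i;
    return 0;
}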
tl;dr: Can you name the design pattern in the above code example?
I don't think it has a name. The classic set of patterns was based largely on code in older Java and pre-standardization C++, neither of which supported parametric polymorphism (templates/generics), so patterns that require them don't really show up. As far as the GoF is concerned, that's just inheriting from three different interfaces.
It's also a little bit too ugly to qualify as a pattern. Why just those three types? Why not Int16, or UInt32? Why is the interface generic, rather than the methods?
One suggestion: it could be the Adapter pattern, in the part that says
A non-generic interface I inherits from a generic type I<T> multiple times, each with a different type argument. The only reason I inherits from I<T> is to declare overloads.
I use it with classes too. It helps to convert the interface of a class into another interface that clients expect. Adapter lets classes work together that otherwise couldn't because of incompatible interfaces.
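For context, a minimal Adapter sketch in C++ (the names are invented for illustration):

#include <string>

// Existing class with an interface the client cannot use directly.
class LegacyPrinter {
public:
    void printRaw(const char* text) { /* ... */ }
};

// Interface the client code expects.
class Printer {
public:
    virtual ~Printer() = default;
    virtual void print(const std::string& text) = 0;
};

// The adapter converts the expected interface into calls on the legacy one.
class LegacyPrinterAdapter : public Printer {
public:
    void print(const std::string& text) override { legacy_.printRaw(text.c_str()); }
private:
    LegacyPrinter legacy_;
};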
To be honest, in your case I don't know what concept is implemented in the non-generic interface I, but I suppose it is there because, when calling a generic method for storing an object, there is occasionally a need to handle a specific type differently.

Extending libraries in C++

Is it possible to extend a class from a C++ library without the source code? Would having the header be enough to allow you to use inheritance? I am just learning C++ and am getting into the theory. I would test this but I don't know how.
Short answer:
YES, you definitely can.
Long answer:
WARNING: the following text may hurt children and sensitive OOP fundamentalists. If you consider yourself one of them, stay away from this answer: your life, and everyone else's, will be easier.
Let me reveal a secret: STL code is nothing more than regular C++ code that comes with headers and libraries, exactly like your code can (and most likely does).
STL authors are just programmers LIKE YOU. They are not special at all as far as the compiler is concerned; they don't have any superpowers over it. They sit on their toilet exactly like you do on yours, to do exactly what you do. Don't over-mystify them.
STL code follows exactly the same rules as your own code: what is overridden will be called instead of the base version: always, if it is virtual; and only according to the static type of the referring pointer or reference, if it is not virtual, like every other piece of C++ code. No more, no less.
The important thing is not to subvert the design: respect the STL naming conventions and semantics, so that every further use of your code will not confuse people's expectations, including your own when you read your code ten years from now and no longer remember certain decisions.
For example, an override of std::exception::what() must return an explanatory, persistent C string (as the STL documentation says) and not add unexpected fuzzy side actions.
Also, overriding streams or streaming operators should be done considering the entire design (do you really need to override the stream, or just the streambuf, or just add a specific facet to the locale it has imbued?). In other words, study not just "the class" but the design of its entire "world" to properly understand how it works with what is around it.
Last, but not least, one of the most controversial aspects is containers and everything else that does not have a virtual destructor.
My opinion is that the noise about the classic OOP rule "don't derive from what has no virtual destructor" is over-inflated: simply don't expect a cow to become a horse just because you put a saddle on it.
If you need (really, really need) a class that manages a sequence of characters with the exact same interface as std::string, that converts implicitly into std::string, and that has something more, you have two ways:
do what the good girls do: embed a std::string and rewrite all of its 112 (yes, more than 100) methods as functions that do nothing more than forward to it, so that you arrive pure at the marriage with other programmers' code, or ...
after discovering that this takes about 30 years and that nobody will be interested in such purity by then, be more practical, sacrifice it, and derive from std::string. The only thing you will lose is the chance to marry a fundamentalist, and you may even discover that is not necessarily a problem.
The only thing you have to take care of is that, since std::string is not polymorphic, your derivation will not make it so; don't expect a std::string* or std::string& referring to your string to call your methods, including the destructor, which is not special compared to any other method; it just follows exactly the same rules.
But ... hey, if you embed and write an implicit conversion operator you will get exactly the same result, no more, no less!
The rule is easy: don't bother making your destructor virtual, and don't expect the "OOP substitution principle" to work with something that is not designed for OOP, and everything will go right.
With all the OOP fundamentalists requiescant in pace in their eternal sleep, your code will work, while they are still rewriting the 100+ std::string methods just to embed it.
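To make that caveat concrete, here is a small sketch (MyString and its extra helper are invented names; the point is that it is only ever used directly, never deleted through a std::string*):

#include <iostream>
#include <string>

class MyString : public std::string {
public:
    using std::string::string;   // inherit std::string's constructors

    // Hypothetical extra functionality on top of the full std::string interface.
    bool contains(const std::string& s) const { return find(s) != npos; }
};

int main() {
    MyString m("hello world");
    std::cout << std::boolalpha << m.contains("world") << '\n';  // prints true
    std::string copy = m;   // behaves like any std::string when converted/copied
    return copy.empty();
}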
Yes, the declaration of the class is enough to derive from it.
The rest of the code will be picked up when you link against the library.
Yes, you can extend classes in the standard C++ library. The header file is enough for that.
Some examples:
extending the std::exception class to create a custom exception (sketched below)
extending the streams library to create custom streams in your application
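For the first example, a minimal custom exception might look like this (a sketch; the class name is made up):

#include <exception>

// std::exception is designed for derivation (it has a virtual destructor).
class ParseError : public std::exception {
public:
    const char* what() const noexcept override {
        return "parse error";   // persistent explanatory C string, as expected from what()
    }
};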
But one thing you should be aware of: don't extend classes that do not have a virtual destructor. Examples are std::vector and std::string.
Edit: I just found another SO question on this topic: Extending the C++ Standard Library by inheritance?
Just having a header file is enough for inheriting from that class.
C++ programs are built in two stages:
Compilation
The compiler looks for the definitions of types and checks your program for language correctness. This generates object files.
Linking
The compiled object files are linked together to form an executable.
So as long as you have the header file (needed for compilation) and the library (needed for linking), you can derive from a class.
But note that one has to be careful whether that class is indeed meant for inheritance.
For example: if a class has a non-virtual destructor, then that class is not meant for inheritance, just like all the standard library container classes.
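As a concrete illustration of that caveat (a sketch; MyVec is a made-up name):

#include <vector>

struct MyVec : std::vector<int> {   // derived from a container with a non-virtual destructor
    int extra = 0;
};

int main() {
    MyVec v;              // fine: created and destroyed as a MyVec
    v.push_back(1);

    std::vector<int>* p = new MyVec;
    delete p;             // undefined behaviour: ~vector is not virtual, ~MyVec is never run
    return 0;
}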
So in short, just having the interface of a class is enough for derivation, but the implementation and design semantics of the class do play an important role.

What is a fat Interface?

Ciao, I work in the movie industry simulating and applying studio effects. May I ask what a fat interface is? I heard someone online around here mention it.
Edit: it is said here by Nicol Bolas (a very good pointer, I believe):
fat interface - an interface with more member functions and friends than are logically necessary. TC++PL 24.4.3
source
A very simple explanation is here:
The Fat Interface approach [...]: in addition to the core services (that are part of the thin interface) it also offers a rich set of services that satisfy common needs of client code. Clearly, with such classes the amount of client code that needs to be written is smaller.
When should we use fat interfaces? If a class is expected to have a long life span or if a class is expected to have many clients it should offer a fat interface.
Maxim quotes Stroustrup's glossary:
fat interface - an interface with more member functions and friends than are logically necessary. TC++PL 24.4.3
Maxim provides no explanation, and other existing answers to this question misinterpret the above (or, sans the Stroustrup quote, the term itself) as meaning an interface with an arguably excessive number of members. It's not.
It's actually not about the number of members, but whether the members make sense for all the implementations.
That subtle aspect doesn't come through very clearly in Stroustrup's glossary, but at least in the old version of TC++PL I have, it is clear where the term is used in the text. Once you understand the difference, the glossary entry is clearly consistent with it, but "more member functions and friends than are logically necessary" is a test that should be applied from the perspective of each of the implementations of a logical interface. (My understanding is also supported by Wikipedia, for whatever that's worth ;-o.)
Specifically when you have an interface over several implementations, and some of the interface actions are only meaningful for some of the implementations, then you have a fat interface in which you can ask the active implementation to do something that it has no hope of doing, and you have to complicate the interface with some "not supported" discovery or reporting, which soon adds up to make it harder to write reliable client code.
For example, if you have a Shape base class and derived Circle and Square classes, and contemplate adding a double get_radius() const member: you could do so and have it throw or return some sentinel value like NaN or -1 if called on a Square - you'd then have a fat interface.
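A sketch of that example, to make the problem visible (the method bodies are invented here; this uses the throwing variant rather than a sentinel):

#include <stdexcept>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
    // Fat-interface member: only meaningful for circles.
    virtual double get_radius() const { throw std::logic_error("not a circle"); }
};

struct Circle : Shape {
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
    double get_radius() const override { return r_; }
private:
    double r_;
};

struct Square : Shape {
    explicit Square(double s) : s_(s) {}
    double area() const override { return s_ * s_; }
    // get_radius() is inherited but can never succeed here, so every caller
    // now has to cope with the "not supported" case.
private:
    double s_;
};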
"Uncle Bob" puts a different emphasis on it below (boldfacing mine) in the context of the Interface Segregation Principle (ISP) (a SOLID principle that says to avoid fat interfaces):
[ISP] deals with the disadvantages of “fat” interfaces. Classes that have “fat” interfaces are classes whose interfaces are not cohesive. In other words, the interfaces of the class can be broken up into groups of member functions. Each group serves a different set of clients. Thus some clients use one group of member functions, and other clients use the other groups.
This implies you could have e.g. virtual functions that all derived classes implement with non-no-op behaviours, but still consider the interface "fat" if typically any given client using that interface would only be interested in one group of its functions. For example: if a string class provided regexp functions and 95% of client code never used any of those, and especially if the 5% that did didn't tend to use the non-regexp string functions, then you should probably separate the regexp functionality from the normal textual string functionality. In that case, though, there's a clear distinction in member function functionality that forms two groups, and when you were writing your code you'd have a clear idea whether you wanted regexp functionality or normal text handling. With the actual std::string class, although it has a lot of functions, I'd argue that there's no clear grouping of functions where it would be weird to evolve a need to use some functions (e.g. begin/end) after having initially needed only, say, insert/erase. I don't personally consider that interface "fat", even though it's huge.
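To connect that to code, a sketch of splitting such an interface along its client groups (the names are invented for illustration):

#include <cstddef>
#include <string>

// The "normal text handling" group that most clients use.
class TextBuffer {
public:
    virtual ~TextBuffer() = default;
    virtual void insert(std::size_t pos, const std::string& s) = 0;
    virtual void erase(std::size_t pos, std::size_t len) = 0;
};

// The "regexp" group, used only by the minority of clients that need it.
class RegexSearchable {
public:
    virtual ~RegexSearchable() = default;
    virtual bool matches(const std::string& pattern) const = 0;
};

// A concrete class may still implement both; clients depend only on the
// interface whose group of functions they actually use.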
Of course, such an evocative term will have been picked up by other people to mean whatever they think it should mean, so it's no surprise that the web contains examples of the simpler larger-than-necessary-interface usage, as evidenced by the link in relaxxx's answer, but I suspect that's more people guessing at a meaning than "educated" about prior usage in Computing Science literature....
An interface with more methods or friends than is really necessary.

Should I use nested classes in this case?

I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes which do the video decoding and video encoding.
I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine.
The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use, and to avoid any possible naming conflicts. I don't know about these justifications. Nested classes are a new concept to me. I just want to see what programmers think about the issue.
I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure.
My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem.
I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation.
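A rough sketch of that driver/front-end split (MediaDriver, Mpeg4Driver, and Player are made-up names):

#include <memory>

// Abstract back-end ("workhorse") interface.
class MediaDriver {
public:
    virtual ~MediaDriver() = default;
    virtual void decodeFrame() = 0;
    virtual void encodeFrame() = 0;
};

class Mpeg4Driver : public MediaDriver {
public:
    void decodeFrame() override { /* codec-specific decoding */ }
    void encodeFrame() override { /* codec-specific encoding */ }
};

// Front-end class the client uses; it performs its operations through the driver.
class Player {
public:
    explicit Player(std::unique_ptr<MediaDriver> d) : driver_(std::move(d)) {}
    void play()   { driver_->decodeFrame(); /* ... */ }
    void record() { driver_->encodeFrame(); /* ... */ }
private:
    std::unique_ptr<MediaDriver> driver_;
};

int main() {
    Player p(std::make_unique<Mpeg4Driver>());
    p.play();
}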
You would use a nested class to create a (small) helper class that's required to implement the main class. Or for example, to define an interface (a class with abstract methods).
In this case, the main disadvantage of nested classes is that this makes it harder to re-use them. Perhaps you'd like to use your VideoDecoder class in another project. If you make it a nested class of VideoPlayer, you can't do this in an elegant way.
Instead, put the other classes in separate .h/.cpp files, which you can then use in your VideoPlayer class. The client of VideoPlayer now only needs to include the file that declares VideoPlayer, and still doesn't need to know about how you implemented it.
One way of deciding whether or not to use nested classes is to think about whether the class plays a supporting role or has a part of its own.
If it exists solely for the purpose of helping another class, then I generally make it a nested class. There are a whole load of caveats to that, some of which seem contradictory, but it all comes down to experience and gut feeling.
Sounds like a case where you could use the Strategy pattern.
Sometimes it's appropriate to hide the implementation classes from the user; in these cases it's better to put them in a foo_internal.h than inside the public class definition. That way, readers of your foo.h will not see what you'd prefer they not be troubled with, but you can still write tests against each of the concrete implementations of your interface.
We hit an issue with a semi-old Sun C++ compiler and the visibility of nested classes, whose behavior changed in the standard. This is not a reason not to use nested classes, of course, just something to be aware of if you plan on compiling your software on lots of platforms, including old compilers.
Well, if you use pointers to your workhorse classes in your interface class and don't expose them as parameters or return types in your interface methods, you will not need to include the definitions of those workhorses in your interface header file (you just forward-declare them instead). That way, users of your interface will not need to know about the classes in the background.
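A small sketch of that approach, using the class names from this thread (the exact members are invented):

// VideoPlayer.h
class VideoDecoder;   // forward declarations only; the full definitions
class VideoEncoder;   // are not needed anywhere in this header

class VideoPlayer {
public:
    VideoPlayer();
    ~VideoPlayer();
    void play();
    void record();
private:
    VideoDecoder* decoder_;   // pointers to the workhorses, so only
    VideoEncoder* encoder_;   // VideoPlayer.cpp needs their headers
};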
You definitely don't need to nest classes for this. In fact, separate class files will actually make your code a lot more readable and easier to manage as your project grows. It will also help you later on if you need to subclass (say, for different content/codec types).
Here's more information on the PIMPL pattern (section 3.1.1).
You should use an inner class only when you cannot implement it as a separate class using the would-be outer class' public interface. Inner classes increase the size, complexity, and responsibility of a class so they should be used sparingly.
Your encoder/decoder class sounds like it better fits the Strategy Pattern
One reason to avoid nested classes is if you ever intend to wrap the code with swig (http://www.swig.org) for use with other languages. Swig currently has problems with nested classes, so interfacing with libraries that expose any nested classes becomes a real pain.
Another thing to keep in mind is whether you ever envision different implementations of your work functions (such as decoding and encoding). In that case, you would definitely want an abstract base class with different concrete classes which implement the functions. It would not really be appropriate to nest a separate subclass for each type of implementation.