So in a recent C++ project I'm starting to find that a quick way to decouple a lot of code is to write template classes which inherit from the template argument. Here's a general example:
#include <string>

class BaseBehavior
{
public:
    // this class has a well-defined and extensive interface; I'll show this function as an example
    virtual const std::string name() const { return "base1"; }
};

class DerivedBehavior : public BaseBehavior
{
public:
    // may add functions to the interface or override any virtual in BaseBehavior
    virtual const std::string name() const { return "base2"; }
};
Those are two different behaviors, which are then inherited by at least two other classes:
template<class T>
class ImplementBehavior1 : public T
{
public:
    // an important feature is that this inherits the interface of T as well
    // (note: name() must be called through this-> because it lives in a dependent base)
    virtual const std::string greet() const { return "hello " + this->name(); }
};

template<class T>
class ImplementBehavior2 : public ImplementBehavior1<T>
{
public:
    // now this has T's interface as well as ImplementBehavior1's
    virtual const std::string greet() const { return "good evening " + this->name(); }
};
I used this technique (in a more useful case) in my code where I essentially wanted a table of behaviors: here we can have 4 different classes with 4 different behaviors. I first noticed that this strategy could have the same benefit without templates, using polymorphic components, but my code didn't require the implementations to be swappable at runtime, and this approach decoupled a lot of code since I was able to inherit the interface without having to write a stub interface. Further, it lets a lot of things happen at compile time, which I'd imagine makes it more efficient at runtime.
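For concreteness, the four combinations look roughly like this (a quick sketch using the classes above):
// The four behavior/implementation combinations:
ImplementBehavior1<BaseBehavior>    a; // greet() == "hello base1"
ImplementBehavior1<DerivedBehavior> b; // greet() == "hello base2"
ImplementBehavior2<BaseBehavior>    c; // greet() == "good evening base1"
ImplementBehavior2<DerivedBehavior> d; // greet() == "good evening base2"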
I've never seen this style suggested anywhere, and it certainly looks obscure; however, I've found it was the best fit for my case, and I could see myself applying it to a lot of situations. Are there any inherent flaws in this structure that I'm missing?
As you're asking about
"Is inheriting a template argument bad practice?"
I'd say it (as so often) totally depends on your actual use case. There might be valid uses, but more often one of the following will apply:
If the template class should be a wrapper for T, then in most cases a T member variable will be the most appropriate choice (see (1) below).
If the template class should provide some mixed-in behavior, then the classic CRTP, where T inherits from a mix-in implementation, will be the better choice (see (2) below).
There are rare cases, for the situation mentioned in the first point, where simply deriving from T with a wrapper class could save effort (see (3) below), though this might introduce further problems (e.g. clashing inheritance structures).
(1)
template<typename T>
class Wrapper {
public:
    void foo() { member.foo(); }
protected:
    T member;
};
(2)
template<class Derived>
class MixIn {
public:
    void foo() { static_cast<Derived*>(this)->doFoo(); }
protected:
    MixIn() {}
    void doFoo() {
        // Provide a default implementation
    }
};

class Impl : public MixIn<Impl> {
    friend class MixIn<Impl>;
    // Optionally provide a deviating implementation:
    // void doFoo() {
    //     // Optionally include the default behavior
    //     MixIn<Impl>::doFoo();
    // }
};
(3)
template<class Base>
class Adapter : public Base {
public:
    Adapter() : Base() {}
    Adapter(const Adapter& rhs) : Base(rhs) {}
    Adapter& operator=(const Adapter& rhs) {
        Base::operator=(rhs);
        return *this;
    }
    // Totally depends on what needs to be adapted
};
Don't worry:
Plain inheritance is almost always the wrong choice anyway; that topic isn't specific to templates or meta-programming in particular.
I guess it depends on how your concept is really used whether it's the best way or not, but using class templates to do generic work at compile time is a pretty common approach.
At the moment I'm using a library at work for processing medical images which is completely template based and works quite well, so don't second-guess your concept and go ahead!
Cheers Usche
PS: here is the template-based lib: http://www.itk.org/ITK/help/documentation.html
Related
I am trying to understand the internals of https://github.com/vshymanskyy/TinyGSM/tree/master/src and am confused with how the classes are constructed.
In particular I see that in TinyGsmClientBG96.h they define a class that inherits from multiple templated parent classes.
class TinyGsmBG96 : public TinyGsmModem<TinyGsmBG96>,
public TinyGsmGPRS<TinyGsmBG96>,
public TinyGsmTCP<TinyGsmBG96, TINY_GSM_MUX_COUNT>,
public TinyGsmCalling<TinyGsmBG96>,
public TinyGsmSMS<TinyGsmBG96>,
public TinyGsmTime<TinyGsmBG96>,
public TinyGsmGPS<TinyGsmBG96>,
public TinyGsmBattery<TinyGsmBG96>,
public TinyGsmTemperature<TinyGsmBG96>
Fair enough. If I look at one of these, for example TinyGsmTemperature, I find some confusing code.
It looks like the static cast is in place so that we can call the hardware-agnostic interface getTemperature() and use the implementation defined in TinyGsmBG96.
Why not use function overriding in this case?
What is the thinking behind this implementation?
Is this a common pattern in c++?
template <class modemType>
class TinyGsmTemperature
{
public:
    /*
     * Temperature functions
     */
    float getTemperature()
    {
        return thisModem().getTemperatureImpl();
    }

    /*
     * CRTP Helper
     */
protected:
    inline const modemType& thisModem() const
    {
        return static_cast<const modemType&>(*this);
    }
    inline modemType& thisModem()
    {
        return static_cast<modemType&>(*this);
    }

    float getTemperatureImpl() TINY_GSM_ATTR_NOT_IMPLEMENTED;
};
Is this a common pattern in c++?
Yes, it is called CRTP - curiously recurring template pattern.
Why not use function overriding in this case?
Overriding relies on virtual tables (dynamic dispatch), which adds runtime overhead.
What is the thinking behind this implementation?
Say, we want a class hierarchy with overridable methods. The classic OOP approach is virtual functions. However, they aren't zero-cost: when you have
void foo(Animal& pet) { pet.make_noise(); }
you don't statically know (in general) which implementation has been passed to foo() because you've erased its type from Dog (or Cat? or something else?) to Animal. So, the OOP approach uses virtual tables to find the right function at runtime.
How do we avoid this? We can instead remember the derived type statically:
#include <iostream>

template<typename Derived /* here's where we keep the type */> struct Animal {
    void make_noise() {
        // we statically know we're a Derived - no runtime dispatch!
        static_cast<Derived&>(*this).make_noise();
    }
};

struct Dog : public Animal<Dog /* here's how we "remember" the type */> {
    void make_noise() { std::cout << "Woof!"; }
};
Now, let's rewrite foo() in a zero-cost manner:
template<typename Derived> void foo(Animal<Derived>& pet) { pet.make_noise(); }
Unlike the first attempt, we haven't erased the type from ??? to Animal: we know Animal<Derived> is actually a Derived, which is a templated - therefore, fully known to the compiler - type. This turns the virtual function call into a direct one (so, even allows inlining).
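For completeness, a quick usage sketch of the classes above:
int main() {
    Dog rex;
    foo(rex); // instantiates foo<Dog>; the call resolves to Dog::make_noise at compile time and prints "Woof!"
}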
I'm somewhat new to the more advanced features of C++. Yesterday, I posted the following question and I learned about virtual inheritance and the dreaded diamond of death.
Inheriting from both an interface and an implementation C++
I also learned, through other links, that multiple inheritance is typically a sign of a bad code design and that the same results can usually be better achieved without using MI. The question is... I don't know what is a better, single-inheritance approach for the following problem.
I want to define an Interface for two types of Digital Points. An Input Digital Point and an Output Digital Point. The Interface is to be slim, with only what's required to access the information. Of course, the vast majority of properties are common to both types of digital points. So to me, this is a clear case of Inheritance, not Composition.
My Interface Definitions look something like this:
// Interface Definitions
class IDigitalPoint
{
public:
    virtual void CommonDigitalMethod1() = 0;
};

class IDigitalInputPoint : virtual public IDigitalPoint
{
public:
    virtual void DigitialInputMethod1() = 0;
};

class IDigitalOutputPoint : virtual public IDigitalPoint
{
public:
    virtual void DigitialOutputMethod1() = 0;
};
My implementations look like this:
// Implementation of IDigitalPoint
class DigitalPoint : virtual public IDigitalPoint
{
public:
    void CommonDigitalMethod1();
    void ExtraCommonDigitalMethod2();
};

// Implementation of IDigitalInputPoint
class DigitalInputPoint : public DigitalPoint, public IDigitalInputPoint
{
public:
    void DigitialInputMethod1();
    void ExtraDigitialInputMethod2();
};

// Implementation of IDigitalOutputPoint
class DigitalOutputPoint : public DigitalPoint, public IDigitalOutputPoint
{
public:
    void DigitialOutputMethod1();
    void ExtraDigitialOutputMethod2();
};
So how could I reformat this structure, to avoid MI?
"multiple inheritance is typically a sign of a bad code design" - parents that are pure interfaces are not counted in regards to this rule. Your I* classes are pure interfaces (only contain pure virtual functions) so you Digital*Point classes are OK in this respect
(Multiple) inheritance and interfaces tend to needlessly complicate simple relationships.
Here we need only a simple structure and a few freestanding functions:
namespace example {
    using T = double;  // stand-in for whatever the coordinate type actually is

    struct Point { T x; T y; };

    Point read_method();
    void write_method(const Point&);
    void common_method(Point&);
    void extra_common_method(Point&);
} // namespace example
The common_method might be a candidate for a member function of Point.
The extra_common_method, which is not so common, might be a candidate for another class encapsulating a Point.
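For example, that other class could look something like this (just a sketch; the name is invented):
namespace example {
    // Hypothetical wrapper owning a Point and providing the less common behavior.
    class SpecialPoint {
    public:
        explicit SpecialPoint(Point p) : p_(p) {}
        void extra_common_method();
    private:
        Point p_;
    };
} // namespace example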
This is exactly the situation in which the standard library does use virtual inheritance, in the std::basic_iostream hierarchy.
So, it may be the rare case where it genuinely makes sense.
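As a reminder of the shape of that hierarchy, here is a simplified sketch (stand-in names, not the real declarations):
class ios_like {};                                // plays the role of std::basic_ios
class istream_like : virtual public ios_like {};  // std::basic_istream
class ostream_like : virtual public ios_like {};  // std::basic_ostream
class iostream_like : public istream_like,
                      public ostream_like {};     // std::basic_iostream, one shared basic_ios subobject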
However, this depends on exactly the fine details you've removed for clarity, so it isn't possible to say for certain whether a better solution exists.
For example, why is an input point different from an output point? A DigitalPoint sounds like a thing with properties, that might be modeled by a class. A DigitalInputPoint, however, just sounds like ... a DigitalPoint somehow coupled to an input source. Does it have different properties? Different behaviour? What are they and why?
You can go to the link below to understand more about multiple inheritance:
Avoid Multiple Inheritance
Also, in your case, multiple inheritance makes sense!
You may use composition if you want.
Consider a different approach:
class DigitalPoint
{
public:
    void CommonDigitalMethod1();
    void ExtraCommonDigitalMethod2();
};

// Implementation of IDigitalInputPoint
class DigitalInputPoint
{
public:
    void CommonDigitalMethod1();
    void DigitialInputMethod1();
    void ExtraDigitialInputMethod2();
};

// Implementation of IDigitalOutputPoint
class DigitalOutputPoint
{
public:
    void CommonDigitalMethod1();
    void DigitialOutputMethod1();
    void ExtraDigitialOutputMethod2();
};
To be used like this:
template <class T>
void do_input_stuff(T& digitalInputPoint) {
    digitalInputPoint.DigitialInputMethod1();
}
You get an easier implementation with a clearer design, less coupling, and most likely better performance. One downside is that the interface is implicitly defined by the usage. This can be mitigated by documenting what the template expects, and eventually with concepts you will be able to have the compiler check it for you (see the sketch below).
Another downside is that you cannot have a vector<IDigitalPoint*> anymore.
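A rough C++20-style sketch of what that could eventually look like (the concept name is made up here):
// Hypothetical concept spelling out what do_input_stuff expects from its argument.
template <class T>
concept DigitalInputPointLike = requires(T p) {
    p.CommonDigitalMethod1();
    p.DigitialInputMethod1();
};

template <DigitalInputPointLike T>
void do_input_stuff(T& digitalInputPoint) {
    digitalInputPoint.DigitialInputMethod1();
}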
Are you really sure that you need 3 interfaces?
class IDigitalPoint
{
public:
    virtual void CommonDigitalMethod1() = 0;
};

enum class Direction : bool { Input, Output };

template <Direction direction>
class DigitalPoint : public IDigitalPoint
{
public:
    void CommonDigitalMethod1() {}
    void ExtraCommonDigitalMethod2() {}
    virtual void DigitialMethod1() = 0;
};

class DigitalInputPoint : public DigitalPoint<Direction::Input>
{
public:
    void DigitialInputMethod1() {}
    void ExtraDigitialInputMethod2() {}

    // This is like DigitialInputMethod1()
    virtual void DigitialMethod1() override
    {}
};

class DigitalOutputPoint : public DigitalPoint<Direction::Output>
{
public:
    void DigitialOutputMethod1() {}
    void ExtraDigitialOutputMethod2() {}

    // This is like DigitialOutputMethod1()
    virtual void DigitialMethod1() override
    {}
};
You could use composition instead of inheritance. Live Example
If the child classes do not use functionality from DigitalPoint, then you can try using CRTP. It can be confusing if you don't understand CRTP, but it works like a charm when it fits properly. Live Example
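Roughly, the CRTP variant could be shaped like this (only a sketch, not the linked example):
template <class Derived>
class DigitalPointBase
{
public:
    void CommonDigitalMethod1() {}
    void ExtraCommonDigitalMethod2() {}
    // Static dispatch to the derived class, no virtual call involved.
    void DigitialMethod1() { static_cast<Derived&>(*this).DigitialMethod1Impl(); }
};

class DigitalInputPoint : public DigitalPointBase<DigitalInputPoint>
{
public:
    void DigitialInputMethod1() {}
    void DigitialMethod1Impl() { /* input-specific behavior */ }
};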
I'm looking for some advice on what would be an appropriate interface for dealing with aspects of classes that are not part of the actual class they describe (meta-aspects). This needs some explanation...
In my specific example I need to implement a custom RTTI system that is a bit more complex than the one offered by C++ (I won't go into why I need that). My base object is FooBase and each child class of this base is associated a FooTypeInfo object.
// Given a base pointer that holds a derived type,
// I need to be able to find the actual type of the
// derived object I'm holding.
FooBase* base = new FooDerived;
// The obvious approach is to use virtual functions...
const FooTypeInfo& info = base->typeinfo();
Using virtual functions to deal with the run-time type of the object doesn't feel right to me. I tend to think of the run-time type of an object as something that goes beyond the scope of the class, and as such it should not be part of its explicit interface. The following interface makes me feel a lot more comfortable...
FooBase* base = new FooDerived;
const FooTypeInfo& info = foo::typeinfo(base);
However, even though the interface is not part of the class, the implementation would still have to use virtual functions, in order for this to work:
class FooTypeInfo;
class FooBase;

// The free function must be declared before it can be befriended.
namespace foo
{
    const FooTypeInfo& typeinfo(const FooBase* ptr);
}

class FooBase
{
protected:
    virtual const FooTypeInfo& typeinfo() const = 0;

    friend const FooTypeInfo& ::foo::typeinfo(const FooBase*);
};

namespace foo
{
    const FooTypeInfo& typeinfo(const FooBase* ptr) {
        return ptr->typeinfo();
    }
}
Do you think I should use this second interface (the one that feels more appropriate to me) and deal with the slightly more complex implementation, or should I just go with the first interface?
@Seth Carnegie
This is a difficult problem if you don't even want derived classes to know about being part of the RTTI ... because you can't really do anything in the FooBase constructor that depends on the runtime type of the class being instantiated (for the same reason you can't call virtual methods in a ctor or dtor).
FooBase is the common base of the hierarchy. I also have a separate CppFoo<> class template that reduces the amount of boilerplate and makes the definition of types easier. There's another PythonFoo class that works with Python-derived objects.
template<typename FooClass>
class CppFoo : public FooBase
{
private:
    const FooTypeInfo& typeinfo() const {
        return ::foo::typeinfo<FooClass>();
    }
};

class SpecificFoo : public CppFoo<SpecificFoo>
{
    // The class can now be implemented agnostic of the
    // RTTI system that works behind the scenes.
};
A few more details about how the system works can be found here:
► https://stackoverflow.com/a/8979111/627005
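Putting the pieces together, usage would then look roughly like this (a sketch based on the snippets above):
FooBase* base = new SpecificFoo;
const FooTypeInfo& info = foo::typeinfo(base); // dispatches to CppFoo<SpecificFoo>::typeinfo()
// (cleanup omitted; a real FooBase would also want a virtual destructor)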
You can tie the dynamic type to the static type via the typeid keyword and use the returned std::type_info objects as a means of identification. Furthermore, if you apply typeid to a separate class created specially for the purpose, it will be totally non-intrusive for the classes you are interested in, although their names still have to be known in advance. It is important that typeid is applied to a type which supports dynamic polymorphism - it has to have some virtual function.
Here is an example:
#include <typeinfo>
#include <cstdio>

class Base;
class Derived;

// The destructor needs a definition: it anchors the vtable (and thus the type_info).
template <typename T> struct sensor { virtual ~sensor() {} };

extern const std::type_info& base = typeid(sensor<Base>);
extern const std::type_info& derived = typeid(sensor<Derived>);

template <const std::type_info* Type> struct type
{
    static const char* name;
    static void stuff();
};

template <const std::type_info* Type>
const char* type<Type>::name = Type->name();

template<> void type<&base>::stuff()
{
    std::puts("I know about Base");
}

template<> void type<&derived>::stuff()
{
    std::puts("I know about Derived");
}

int main()
{
    std::puts(type<&base>::name);
    type<&base>::stuff();
    std::puts(type<&derived>::name);
    type<&derived>::stuff();
}
Needless to say, since std::type_info instances are proper objects, and they are unique and ordered, you can manage them in a collection and thus erase the queried type from the interface:
#include <set>   // additionally needed for this variant (Base and Derived declared as before)

template <typename T> struct sensor { virtual ~sensor() {} };

struct type
{
    const std::type_info& info;

    template <typename T>
    explicit type(sensor<T> t) : info(typeid(t)) {}
};

bool operator<(const type& lh, const type& rh)
{
    return lh.info.before(rh.info);
}

int main()
{
    std::set<type> t;
    t.insert(type(sensor<Base>()));
    t.insert(type(sensor<Derived>()));

    for (std::set<type>::iterator i = t.begin(); i != t.end(); ++i)
        std::puts(i->info.name());
}
Of course you can mix and match both, as you see fit.
Two limitations:
there is no actual introspection here. You can add it to the sensor template via clever metaprogramming; it's a very wide subject (and mind-bending, sometimes).
names of all types you want to support have to be known in advance.
One possible variation is adding an RTTI "framework hook" such as static const sensor<MyClass> rtti_MyClass; to implementation files where the class names are already known, and letting the constructor do the work. The classes would also have to be complete types at that point to enable introspection in sensor.
I have a number of classes, all with exactly the same interface. This interface defines a few methods, some of which are templated (the class itself may or may not be).
So the interface looks something like this
class MyClass
{
public:
    void Func1();

    template <typename T>
    void Func2(T param);
};
I have a number of functions which take various objects which conform to this interface but want to avoid having to know the exact implementation at compile time.
Obviously, the default C++ solution would be to have a base type that all these classes derive from and pass around a pointer to that and have polymorphism do all the work.
The problem is that templated member functions cannot be virtual so this method cannot be used. I also want to avoid changing the current set of classes that follow this interface because there are a large number of them, some of which are defined outside the scope of my project.
The other solution is to template the functions that use these objects so they specialise for the right type. This could be a solution, but due to legacy requirements templating a large number of functions may not be possible (this is something I cannot do anything about, as the client code isn't something I have responsibility for).
My initial thought was to provide some kind of carrier class which is type neutral and in effect wraps the common interface here, and has a base interface class to pass around the internal type.
Something along the lines of
class MyInterface
{
public:
    virtual void Func1() = 0;
};

// Derive from the interface so the wrapper can be passed around via MyInterface*.
template <typename T>
class MyImplementation : public MyInterface
{
public:
    virtual void Func1()
    {
        m_impl->Func1();
    }
private:
    T* m_impl;
};
But again the templated member functions seem to block this approach.
I looked at the boost::any and boost::function classes which I thought might offer some kind of solution but they don't seem to give me the right answer.
So, does anyone have any suggestions or work around on how to make this possible, if indeed it is? Personally I'm leaning towards having to template the various functions that require these objects - since that's the functionality templates provide - but thought it worth investigating first.
Thanks in advance
What's not entirely clear to me is how you're resolving the parameter T to Func2, do you need some kind of dynamic dispatch on that too, or is it known at compile time at the call site?
In the former case, it sounds like multimethods. In the latter, how about this variation on your interface idea:
#include <iostream>

template<class T> struct generic_delegate
{
    virtual ~generic_delegate() {}   // needed if delegates are ever deleted through this base
    virtual void call(T param) = 0;
};

template<class U, class T> class fn_delegate : public generic_delegate<T>
{
    U* obj;
    void (U::*fn)(T);
public:
    fn_delegate(U* o, void (U::*f)(T)) :
        obj(o), fn(f)
    {}

    virtual void call(T param)
    {
        (obj->*fn)(param);
    }
};

class A
{
public:
    template<class T> void fn(T param)
    {
        std::cout << "A: " << param << std::endl;
    }
};

class B
{
public:
    template<class T> void fn(T param)
    {
        std::cout << "B: " << param << std::endl;
    }
};

template<class T, class U> generic_delegate<T>* fn_deleg(U* o)
{
    return new fn_delegate<U, T>(o, &U::template fn<T>);
}

int main()
{
    A a;
    B b;

    generic_delegate<int>* i = fn_deleg<int>(&a);
    generic_delegate<int>* j = fn_deleg<int>(&b);

    i->call(4);
    j->call(5);
}
Obviously, the thing you'd be passing around are the generic delegate pointers.
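Since fn_deleg allocates with new, in practice you would probably hand the result straight to a smart pointer; here is a small variation on the helper (not part of the original code, and it relies on generic_delegate having a virtual destructor as above):
#include <memory>

template<class T, class U>
std::unique_ptr<generic_delegate<T>> make_fn_deleg(U* o)
{
    // Owning pointer: the delegate is destroyed automatically through the base class.
    return std::unique_ptr<generic_delegate<T>>(new fn_delegate<U, T>(o, &U::template fn<T>));
}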
If you use templates you need to know AT COMPILE TIME which type(s) you're using. That's just the nature of templates (templates look like code that's dynamic at runtime, but in reality it's just shorthand that tells the compiler what versions of the function to compile and include in the object code). The best case scenario is something like this:
template <class T>
void DoSomethingWithMyInterface(MyInterface<T> X)
{
    // do something
}

...

switch (MyObject.GetTypeCode())
{
    case TYPE1: DoSomethingWithMyInterface<type1>(MyObject); break;
    case TYPE2: DoSomethingWithMyInterface<type2>(MyObject); break;
    case TYPE3: DoSomethingWithMyInterface<type3>(MyObject); break;
    case TYPE4: DoSomethingWithMyInterface<type4>(MyObject); break;
}
I actually run into this situation a lot. I write templated C++ code that does the processing for a dynamically typed language. That means the top-level language doesn't know the data types until run time, but I need to know them at compile time. So I create this "TypeSwitch" (I actually have a fancy reusable one) that looks at the data types at run time and then figures out which of the already-compiled template functions to run.
Note that this requires knowing all the types I'm going to support beforehand (and I do), and the switch statement causes the compiler to generate all of the code that could ever be executed; at runtime the right version is selected.
Say you have a class whose job it is to connect to a remote server. I want to abstract this class to provide two versions, one that connects through UDP and the other through TCP. I want to build the leanest runtime code possible, and instead of using polymorphism I am considering templates. Here is what I'm envisioning, but I'm not sure it's the best way of doing it:
class udp {};
class tcp {};

template<class T, typename X>
class service
{
private:
    // Make this private so this non specialized version can't be used
    service();
};

template<typename X>
class service<udp, X>
{
private:
    udp _udp;
    X _x;
};

template<typename X>
class service<tcp, X>
{
private:
    tcp _tcp;
    X _x;
};
So the end benefit is that the genericness of T is still available, but the very different code required to set up a UDP or TCP connection has been specialized. I suppose you could put both into one class, or provide another class that adheres to some pure virtual interface for setting up the network connection, like IConnectionManager.
But this leaves the problem that the code for the generic T now has to be written and maintained in both specialized versions, even though it is ultimately the same. How best to address this? I have a feeling I am going about this all wrong.
This can be best done using a policy for the transport protocol:
template<typename Transport>
class service : Transport {
public:
    typedef Transport transport_type;

    // common code
    void do_something() {
        this->send(....);
    }
};

class tcp {
public:
    void send(....) {
    }
};

class udp {
public:
    void send(....) {
    }
};

typedef service<tcp> service_tcp;
typedef service<udp> service_udp;
Note that this is also polymorphic - it's called compile-time polymorphism. Putting the policy into a base class lets you benefit from the empty base class optimization (EBO): the base class does not need to take any space (see the small demo below). Making the policy a member has the additional drawback that you always have to delegate calls to that member, which can become annoying over time. The book Modern C++ Design describes this pattern in depth.
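To see the effect concretely, here is a tiny standalone EBO demo (not part of the answer's code; the struct names are made up):
#include <iostream>

struct empty_policy {};

struct uses_base : empty_policy { int x; };     // the empty base adds no size
struct uses_member { empty_policy p; int x; };  // the member needs its own address

int main() {
    // Typically prints "4 8" on common ABIs, showing the space saved by EBO.
    std::cout << sizeof(uses_base) << ' ' << sizeof(uses_member) << '\n';
}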
Ideally, the transport protocol doesn't need to know anything about the protocol above it. But if for some reason you have to get some information about it, you can use the CRTP pattern (wiki):
template<template<typename Service> class Transport>
class service : Transport<service> {
    // since we derive privately, make the transport layer a friend of us,
    // so that it can cast its this pointer down to us.
    friend class Transport<service>;
public:
    typedef Transport<service> transport_type;

    // common code
    void do_something() {
        this->send(....);
    }
};

template<typename Service>
class tcp {
public:
    void send(....) {
    }
};

template<typename Service>
class udp {
public:
    void send(....) {
    }
};

typedef service<tcp> service_tcp;
typedef service<udp> service_udp;
You don't have to put your templates into headers. If you explicitly instantiate them, you will gain faster compilation times, as much less code has to be included. Put this into service.cpp:
template class service<tcp>;
template class service<udp>;
Now, code that uses service does not need to know about the template code of service, since that code is already generated into the object file of service.cpp.
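As a side note going beyond the answer: if the template definition does stay in a header, C++11's explicit instantiation declarations let you tell other translation units not to instantiate it themselves:
// service.h (C++11)
extern template class service<tcp>;
extern template class service<udp>;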
I would use the curiously recurring template pattern, aka the Five Point Palm Exploding Alexandrescu Technique:
template <typename Underlying>
class Transmit
{
public:
    void send(...)
    {
        _U.send(...);
    }
private:
    Underlying _U;
};

class Tcp
{
public:
    void send(...) {}
};

class Udp
{
public:
    void send(...) {}
};
There would probably be many more template parameters and subclasses, but you get the idea; you can also use static methods.
By the way, template code is generally more efficient, but also much bigger.
Templates are not necessary (though they are a possible solution). This is just dependency injection via templates rather than via a constructor. Personally I would do it via a constructor, but doing it via a template gives you the dubious benefit of a cheaper method call (it does not need to be virtual). It also allows for easier compiler optimization.
Both the udp and tcp objects must still support the same interface.
If you do it via inheritance they must both implement a common interface (virtual base class); if it is done via templates this is not necessary, but the compiler will check that they support the same method calls that the Service object requires.
As asked in the original question, I see no explicit need (or benefit) for partial template specialization in the situation as described.
Template Method
class udp { public: /*Interface Plop*/ static void plop(Message&); };
class tcp { public: /*Interface Plop*/ static void plop(Message&); };

template<typename T>
class Service
{
public:
    void doPlop(Message& m) { T::plop(m); }
    // Do not actually need to store an object if you make the methods static.

    // Alternatively, store the policy and delegate to it:
    // public:
    //     void doPlop(Message& m) { protocol.plop(m); }
    // private:
    //     T protocol;
};
Polymorphic Version
class Plop
{
public:
    virtual void plop(Message&) = 0;   // Destructor omitted for brevity
};

class udp : public Plop { /*Interface Plop*/ void plop(Message&); };
class tcp : public Plop { /*Interface Plop*/ void plop(Message&); };

class Service
{
public:
    Service(Plop& p) : protocol(p) {}
    void doPlop(Message& m) { protocol.plop(m); }
private:
    Plop& protocol;
};
I think that the main point in choosing between polymorphism and template specialization, in this particular case at least, is whether you want to choose which behavior to use at run time or at compile time.
If you want to have a UDP or a TCP connection based, for example, on a connection string provided by the user, then polymorphism best fits your needs: create a concrete class and then pass it to generic code that handles a pointer to a base interface.
Otherwise, you might consider using templates - I'm not sure if you need template specialization.
Hope this helps :)