I will put my question first and add some longer explanation below. I have the following class design which is not working as C++ does not support virtual template methods. I would be happy to learn about alternatives and workarounds to implement this behaviour.
class LocalParametersBase
{
public:
template<unsigned int target>
virtual double get() const = 0; //<--- not allowed by C++
};
template<unsigned int... params>
class LocalParameters : public LocalParametersBase
{
public:
template<unsigned int target>
double get() const; //<--- this function should be called
};
Using a simple function argument instead of the template parameter is at the moment no alternative for the following reasons:
The implementation of this method in the derived class relies on some template meta-programming (using the variadic class template arguments). As far as I know it is not possible to use function arguments (even if they are of constant integral type) as template arguments.
The method will be only called with compile-time constants. Performance is crucial in my application and therefore I want to benefit from the calculation at compile time.
The common base class is needed (I have left out the rest of the interface for brevity).
Any suggestions are highly appreciated.
Update: Motivation
As there were many questions about the motivation for this kind of layout, I'll try to explain it with a simple example. Imagine you want to measure a trajectory in three-dimensional space. In my specific case these are tracks of charged particles (of fixed mass) in a magnetic field. You measure these tracks with sensitive detectors, which are approximated as 2D surfaces. At each intersection of a track with a sensitive detector, the trajectory is uniquely identified by 5 parameters:
two local coordinates describing the intersection point of the track with the surface in the local coordinate system of the detector surface (that's why the class names are chosen this way),
two angles specifying the direction of the trajectory,
one parameter containing the information about the momentum and the electric charge of the particle.
A trajectory is therefore completely identified by a set of five parameters (and the associated surface). However, individual measurements only consist of the first two parameters (the intersection point in the local 2D coordinate system of the surface). These coordinate systems can be of different types (Cartesian, cylindrical, spherical etc). So each measurement potentially constrains different parameters out of the full set of 5 (or maybe even non-linear combinations of those). Nevertheless, a fitting algorithm (think of a simple chi2 minimizer, for instance) should not depend on the specific type of a measurement. It only needs to calculate residuals. That looks like
class LocalParametersBase
{
public:
virtual double getResidual(const AtsVector& fullParameterSet) const = 0;
};
This works fine as each derived class knows how to map the full 5-d parameter set on its local coordinate system and then it can calculate the residuals. I hope this explains a bit why I need a common base class. There are other framework related reasons (such like the existing I/O infrastructure) which you could think of as external constraints.
You may be wondering why the above example does not seem to require the templated get method I am asking about. Only the base class is supposed to be exposed to the user. It would be very confusing if you had a list of LocalParametersBase objects, could fit a trajectory using them, and could even get the values of the measured local parameters, but could not find out which parameters were actually measured (which renders the previous information useless).
I hope this could shed some light on my problem. I appreciate all the comments received so far.
For my current project I am writing a class whose main purpose is to act as a wrapper around a sparse vector of fixed size. Instead of storing the whole vector (which is the representation of some system state) my class has a vector of reduced size as member variable (= corresponding to a sub-domain of the total parameter space). I hope the illustration below gives you an idea of what I am trying to describe:
VectorType(5) allParameters = {0.5, 2.1, -3.7, 4, 15/9}; //< full parameter space
VectorType(2) subSpace = {2.1, 4}; //< sub domain only storing parameters with index 1 and 3
In order to be able to make the connection to the original vector, I need to "store" the indexes which are copied to my "shortened" vector. This is achieved using non-type variadic template parameters. I also need to be able to query the value of the parameter with a certain index. This should yield a compile time error in case this parameter is not stored in the "shortened" vector. My simplified code for this looks like:
template<unsigned int... index>
class LocalParameters
{
public:
template<unsigned int target>
double get() const;
private:
AtsVectorX m_vValues;
};
LocalParameters<0,1,4> loc;
//< ... do some initialisation ...
loc.get<1>(); //< query value of parameter at index 1
loc.get<2>(); //<-- this should yield a compile time error as the parameter at index 2 is not stored in this local vector class
I managed to implement this behaviour using some simple template programming. But other parts of my code need to treat these "shortened" vectors uniformly through one interface. I still want to be able to access through the interface LocalParametersBase the information whether a parameter with a specific index is stored (if not I want to get a compile time error), and if yes, I would like to access the value of this parameter. In code this should look similar to
LocalParametersBase* pLoc = new LocalParameters<0,1,3>();
pLoc->get<1>();
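For illustration, the compile-time membership check inside LocalParameters could look something like the sketch below (a reconstruction, not necessarily the asker's actual implementation): contains<> tests whether an index appears in the parameter pack, index_of<> maps a stored index to its position in the reduced vector, and a static_assert turns a missing index into a compile-time error.

```cpp
#include <cassert>

// contains<>: compile-time membership test for the parameter pack
template <unsigned int target, unsigned int... params>
struct contains {
    static const bool value = false; // empty pack: not found
};

template <unsigned int target, unsigned int first, unsigned int... rest>
struct contains<target, first, rest...> {
    static const bool value = (target == first) || contains<target, rest...>::value;
};

// index_of<>: position of `target` within the pack (assumes membership)
template <unsigned int target, unsigned int... params>
struct index_of;

template <unsigned int target, unsigned int... rest>
struct index_of<target, target, rest...> {
    static const unsigned int value = 0;
};

template <unsigned int target, unsigned int first, unsigned int... rest>
struct index_of<target, first, rest...> {
    static const unsigned int value = 1 + index_of<target, rest...>::value;
};

template <unsigned int... index>
class LocalParameters {
public:
    // values stored in the order the indices are listed;
    // public here only to keep the sketch short
    double m_values[sizeof...(index)];

    template <unsigned int target>
    double get() const {
        static_assert(contains<target, index...>::value,
                      "parameter with this index is not stored in this sub-space");
        return m_values[index_of<target, index...>::value];
    }
};
```

With this, LocalParameters<0,1,4>().get<2>() fails with the static_assert message at compile time, while get<1>() resolves to a plain array access.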
A suggestion
Without more information about what you are doing, I am only making educated guesses about what is driving you towards this approach.
A common performance problem with code that depends on a virtual interface is that the framework provides generic functionality that dispatches to the virtual methods at very high frequency. This seems to be the issue that you are facing. You have code that is performing computation on sparse vectors, and you want to provide to it a generic interface representing each sparse vector you happen to create.
void compute (LocalParametersBase *lp) {
// code that makes lots of calls to lp->get<4>()
}
However, an alternative approach is to make the computation generic by using a template parameter to represent the derived object type being manipulated.
template <typename SPARSE>
void perform_compute (SPARSE *lp) {
// code that makes lots of calls to lp->get<4>()
}
Each get<> call in the template version of compute is against the derived object. This allows the computation to occur as fast as if you had written code to directly manipulate a LocalParameters<0,1,4>, rather than performing a dynamic dispatch per get<> call.
If you must let the framework control when the computation is performed, so that the computation is initiated through the base class, the base class version can dispatch to a virtual method.
class ComputeBase {
public:
virtual void perform_compute () = 0;
};
void compute (LocalParametersBase *lp) {
    auto c = dynamic_cast<ComputeBase *>(lp);
    if (c) c->perform_compute();
}
By using CRTP, you can create a helper class that takes the derived type as a template parameter and implements this virtual method by passing the derived object to the templated computation. Thus, the computation costs only one dynamic dispatch, and the rest is performed on the actual sparse vector itself.
template <typename Derived>
class CrtpCompute : public ComputeBase {
    void perform_compute () {
        auto d = static_cast<Derived *>(this);
        ::perform_compute(d); // the free function template shown above
    }
};
Now your sparse vector derives from this helper class.
template <unsigned int... params>
class LocalParameters
: public LocalParametersBase,
public CrtpCompute<LocalParameters<params...>> {
public:
template <unsigned int target> double get() const;
};
Making your interface work the way you have specified it
After the results are computed, you want to place the resulting sparse vector into a container for later retrieval. However, that should no longer be a performance sensitive operation, so you can use the method described below to achieve that.
Base template method → Base template class virtual method → Derived template method
If you wish to use polymorphism, delegate the template method call in the base class to a virtual function. Since it is a template method, the virtual function has to come from a template class. You can use a dynamic_cast to get to the corresponding template class instance.
template <unsigned int target>
class Target {
public:
virtual double get() const = 0;
};
class LocalParametersBase {
public:
virtual ~LocalParametersBase () = default;
template <unsigned int target> double get() const {
    auto d = dynamic_cast<const Target<target> *>(this); // nullptr if target is not stored here
return d->get();
}
};
To automate the implementation of the virtual methods for each Target, you can again use CRTP, passing in the derived type to the helper. The helper casts to the derived type to invoke the corresponding template method.
template <typename, unsigned int...> class CrtpTarget;
template <typename Derived, unsigned int target>
class CrtpTarget<Derived, target> : public Target<target> {
double get() const {
auto d = static_cast<const Derived *>(this);
return d->template get<target>();
}
};
template <typename Derived, unsigned int target, unsigned int... params>
class CrtpTarget<Derived, target, params...>
: public CrtpTarget<Derived, target>,
public CrtpTarget<Derived, params...> {
};
And now, you inherit appropriately from your derived class.
template <unsigned int... params>
class LocalParameters
: public LocalParametersBase,
public CrtpCompute<LocalParameters<params...>>,
public CrtpTarget<LocalParameters<params...>, params...> {
public:
template <unsigned int target> double get() const;
};
Related
I am working on a fairly tightly coupled library which up until now has explicitly assumed all computations are done with doubles. I'm in the process of converting some of the core classes to templates so that we can start computing with std::complex<double>. I've templated about 10 of our classes so far and have noticed a tendency toward proliferation of templates: as one class becomes templated, any other class that uses it appears to need templating as well.
I think I can avoid some of this proliferation by defining abstract base classes for my templates, so that other classes can just use pointers to the abstract class and then refer to either a double or std::complex<double> version of the derived class. This seems to work at the header level, but when I dive into the source files, the templated class will often have functions which compute a value or container of values of type double or std::complex<double>. It seems like a waste to template a whole class just because a couple of lines in the source file differ because of some other class's return type.
The use of auto seems like a possible way to fix this, but I'm not 100% sure it would work. Suppose I have an abstract base class AbstractFunction from which Function<Scalar> derives, where Scalar can be double or std::complex<double>. Now suppose we have two member functions:
virtual Scalar Function<Scalar>::value(double x);
virtual void Function<Scalar>::values(std::vector<Scalar> &values, std::vector<double> x);
And suppose I have some other class (that I don't want to template) with a member function that calls one of these.
// populate double x and std::vector<double> xs
auto value = functionPtr->value(x);
std::vector<auto> values;
functionPtr->values(values, xs);
// do something with value and values
where functionPtr is of type std::shared_ptr<AbstractFunction>.
I could see auto working for the first case, but I don't believe I could construct a vector of auto to be filled with the second one. Does this necessitate making the calling class a template? Can someone recommend another strategy to cut down on the proliferation of templates?
I think you are already wrong in assuming that the first use-case is going to work. If you have an abstract base class, then either value is a member of it and you can call it through std::shared_ptr<AbstractFunction>, or value is not a member of it and is only available if you know the derived class's type. In the first case, the AbstractFunction::value method must have a fixed return type; it cannot depend on Scalar, which is the template parameter of the derived class.
That said: in my experience the two concepts often don't mix well. You either want to create an abstract base class with the full interface, or you want a template. In the latter case, there is often no need for (and no benefit from) an abstract base class. It then follows that the code using your template also works with templates.
What might help you is to "export" the template parameter from Function, i.e.
template<typename T>
class Function
{
public:
using value_type = T;
value_type value() const;
// ...
};
and in other parts of the code, use a template which accepts any T that behaves like Function, if you don't want to write out (and limit yourself to) Function directly:
template<typename T>
void something( const std::shared_ptr<T>& functionPtr )
{
// ignoring where x comes from...
using V = typename T::value_type;
V value = functionPtr->value(x);
std::vector<V> values;
functionPtr->values(values, xs);
}
Note that this is just one option, I don't know if it is the best option for your use-case.
I am making the engine for a game and I can't seem to solve the following problem.
So, I have a base component class from which all the different components are derived. A GameObject is basically a container for different components. The components are stored in a vector containing pointers to the base component class. Now I need the GameObject class to have a getComponent member function template that will return the component with the requested type from the vector.
To be more clear:
class Component
{
/..../
};
class RigidBody : Component
{
/..../
};
class Animation : Component
{
/..../
};
class GameObject
{
public:
template <class T>
T* getComponent();
void addComponent(Component*);
private:
std::vector<Component*> m_components;
};
/...../
GameObject test;
test.addComponent(new RigidBody());
test.addComponent(new Animation());
Animation * animation = test.getComponent<Animation>();
Or something among those lines.
For simplicity's sake say that the vector is guaranteed to have the component that we are looking for and that there are no components of the same type.
Since the pointers in the vector are of the base component type, how can I check if they originally were of the requested type? Thanks in advance!
Assuming that Component has at least one virtual function (otherwise what's the point of inheriting from it, right?) you should be able to do what you need using Runtime Type Information (RTTI) and dynamic_cast, like this:
template <class T> T* getFirstComponent() {
    for (std::size_t i = 0 ; i != m_components.size() ; i++) {
T *candidate = dynamic_cast<T*>(m_components[i]);
if (candidate) {
return candidate;
}
}
return nullptr;
}
Recall that dynamic_cast<T*> would return a non-null value only when the cast has been successful. The code above goes through all pointers, and picks the first one for which dynamic_cast<T*> succeeds.
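Put together as a minimal runnable example (the mass/frames payloads are made up for illustration; the method is named getFirstComponent as above):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class Component {
public:
    virtual ~Component() {} // polymorphic base, required for dynamic_cast
};

class RigidBody : public Component {
public:
    double mass;
    RigidBody() : mass(1.0) {}
};

class Animation : public Component {
public:
    int frames;
    Animation() : frames(24) {}
};

class GameObject {
public:
    ~GameObject() {
        for (std::size_t i = 0; i != m_components.size(); ++i)
            delete m_components[i];
    }

    void addComponent(Component *c) { m_components.push_back(c); }

    // returns the first component of the requested type, or nullptr
    template <class T>
    T *getFirstComponent() {
        for (std::size_t i = 0; i != m_components.size(); ++i) {
            if (T *candidate = dynamic_cast<T *>(m_components[i]))
                return candidate;
        }
        return nullptr;
    }

private:
    std::vector<Component *> m_components;
};
```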
Important note: While this should do the trick at making your program do what you want, consider changing your design: rather than pulling out objects by type, give them virtual functions that would let you use them all in a uniform way. It is pointless to put objects of different classes into one container, only to pull them apart at some later time. RTTI should be used as the last resort, not as a mainstream tool, because it makes your program harder to understand.
Another valid approach would be to store the individual components separately, not in a single vector, and get the vector only when you need to treat the objects uniformly.
Less important note: if nullptr does not compile on your system, replace with return 0.
There are occasions where a system needs to group derived types out of a base-class vector, for example to optimise multithreading.
One system I cooked up uses polymorphism with a user-defined type id to avoid typeid or dynamic_cast; here is some pseudo code...
class BaseType {
public:
    virtual ~BaseType() {}
    virtual int getType() = 0;
};
class ThisType : public BaseType {
public:
    int getType() { return 1; }
};
class TypeMaster {
private:
    std::vector<ThisType*> myObjects;
public:
    void add(ThisType* bc) { myObjects.push_back(bc); }
};
std::map<int,TypeMaster*> masters;
std::vector<BaseType*> objects;
for (size_t i = 0; i < objects.size(); i++) {
    // the cast is safe because the type id identifies the dynamic type
    masters.find(objects[i]->getType())->second->add(static_cast<ThisType*>(objects[i]));
}
You would have to do a bit of work to make a full system, but the rudiments are there to convey the idea. This code processes an arbitrary vector of base objects and appends each to the vector of its type master.
My example has a collection of execution pools with multiple instances of the type master, meaning the type master cannot be polymorphed, because in that scenario the object would not be able to move around execution pools.
Note the lack of typeid or casts driven by the derived class. For me, implementations using native types keep it simple, without importing bloated libraries or any unnecessary execution fuss. You could run speed trials, but I have always found simple native-type implementations to be quite succinct.
I have a library where there is a lot of small objects, which now all have virtual functions. It goes to such an extent that the size of the pointer to a virtual function table can exceed the size of the useful data in the object (it can often be just a structure with a single float in it). The objects are elements in a numerical simulation on a sparse graph, and as such cannot be easily merged / etc.
I'm not concerned as much about the cost of the virtual function call, rather about the cost of the storage. What is happening is that the pointer to the virtual function table is basically reducing the efficiency of the cache. I'm wondering if I would be better off with a type id stored as an integer, instead of the virtual function.
I cannot use static polymorphism, as all of my objects are in a single list, and I need to be able to perform operations on items, selected by an index (which is a runtime value - therefore there is no way to statically determine the type).
The question is: is there a design pattern or a common algorithm, that can dynamically call a function from an interface, given a list of types (e.g. in a typelist) and a type index?
The interface is defined and does not change much, but new objects will be declared in the future by (possibly less-skilled) users of the library and there should not be a large effort needed in doing so. Performance is paramount. Sadly, no C++11.
So far, I have perhaps a silly proof of concept:
typedef MakeTypelist(ClassA, ClassB, ClassC) TList; // list of types
enum {
num_types = 3 // number of items in TList
};
std::vector<CommonBase*> uniform_list; // pointers to the objects
std::vector<int> type_id_list; // contains type ids in range [0, num_types)
template <class Op, class L>
class Resolver { // helper class to build the list of functions
    typedef typename L::Head T;
public:
    // calls op on the object, cast to its concrete type T
    static void Specialize(CommonBase *p, Op op)
    {
        op(*static_cast<T*>(p));
    }
    // add a new item to the list of functions
    static void BuildList(void (**function_list)(CommonBase*, Op))
    {
        *function_list = &Specialize;
        Resolver<Op, typename L::Tail>::BuildList(function_list + 1);
    }
};
template <class Op>
class Resolver<Op, TypelistEnd> { // specialization for the end of the list
public:
    static void BuildList(void (**)(CommonBase*, Op))
    {}
};
/**
 * @param[in] i is the index of the item
 * @param[in] op is an STL-style function object with a template operator ()
 */
template <class Op>
void Resolve(size_t i, Op op)
{
void (*function_list[num_types])(CommonBase*, Op);
Resolver<Op, TList>::BuildList(function_list);
// fill the list of functions using the typelist
(*function_list[type_id_list[i]])(uniform_list[i], op);
// call the function
}
I have not looked into the assembly yet, but I believe that if the function pointer array is made static, its creation could become virtually free. Another alternative is to use a binary search tree generated from the typelist, which would enable inlining.
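Here is a condensed, self-contained version of that sketch. A hand-rolled two-element typelist stands in for the MakeTypelist macro, and a small recording functor stands in for Op, so the names differ from the original; the mechanics (one function pointer per type, indexed by type id, no virtual functions on the objects) are the same.

```cpp
#include <cassert>

// note: no virtual functions, hence no vtable pointer in the objects
struct CommonBase {};
struct ClassA : CommonBase {};
struct ClassB : CommonBase {};

// hand-rolled typelist standing in for MakeTypelist(ClassA, ClassB)
struct TypelistEnd {};
template <class H, class T> struct Typelist { typedef H Head; typedef T Tail; };
typedef Typelist<ClassA, Typelist<ClassB, TypelistEnd> > TList;
enum { num_types = 2 };

template <class Op, class L>
struct Resolver {
    typedef typename L::Head T;
    // calls op on the object, cast to its concrete type T
    static void Specialize(CommonBase *p, Op op) { op(*static_cast<T *>(p)); }
    static void BuildList(void (**function_list)(CommonBase *, Op)) {
        *function_list = &Specialize;
        Resolver<Op, typename L::Tail>::BuildList(function_list + 1);
    }
};

template <class Op>
struct Resolver<Op, TypelistEnd> {
    static void BuildList(void (**)(CommonBase *, Op)) {}
};

// dispatch: one indirect call through the thunk table, no virtual functions
template <class Op>
void Resolve(CommonBase *obj, int type_id, Op op) {
    void (*function_list[num_types])(CommonBase *, Op);
    Resolver<Op, TList>::BuildList(function_list);
    (*function_list[type_id])(obj, op);
}

// an operation with an overload per concrete type, recording what it saw
struct RecordType {
    int *seen;
    explicit RecordType(int *s) : seen(s) {}
    void operator()(ClassA &) { *seen = 0; }
    void operator()(ClassB &) { *seen = 1; }
};
```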
I ended up using the "thunk table" concept that I outlined in the question. For each operation, there is a single instance of a thunk table (which is static and is shared through a template - the compiler will therefore automatically make sure that there is only a single table instance per operation type, not per invocation). Thus my objects have no virtual functions whatsoever.
Most importantly - the speed gain from using simple function pointer instead of virtual functions is negligible (but it is not slower, either). What gains a lot of speed is implementing a decision tree and linking all the functions statically - that improved the runtime of some not very compute intensive code by about 40%.
An interesting side effect is being able to have "virtual" template functions, which is not usually possible.
One problem I needed to solve was that all my objects still needed some interface, as they end up being accessed by calls other than the functors. I devised a detached facade for that. A facade is an abstract class declaring the interface of the objects. A detached facade is an instance of this class, specialized for a given type (for all types in the list; operator [] returns the detached facade for the type of the selected item).
class CDetachedFacade_Base {
public:
virtual void DoStuff(BaseType *pthis) = 0;
};
template <class ObjectType>
class CDetachedFacade : public CDetachedFacade_Base {
public:
    virtual void DoStuff(BaseType *pthis)
    {
        static_cast<ObjectType*>(pthis)->DoStuff();
        // statically linked; ObjectType is a final type
    }
};
class CMakeFacade {
BaseType *pthis;
CDetachedFacade_Base *pfacade;
public:
CMakeFacade(BaseType *p, CDetachedFacade_Base *f)
:pthis(p), pfacade(f)
{}
    inline void DoStuff()
    {
        pfacade->DoStuff(pthis);
    }
};
To use this, one needs to do:
static CDetachedFacade<CMyObject> facade;
// this is generated and stored in a templated table
// this needs to be separate to avoid having to call operator new all the time
CMyObject myobj;
myobj.DoStuff(); // statically linked
BaseType *obj = &myobj;
//obj->DoStuff(); // can't do, BaseType does not have virtual functions
CMakeFacade obj_facade(obj, &facade); // choose facade based on type id
obj_facade.DoStuff(); // calls CMyObject::DoStuff()
This allows me to use the optimized thunk table in the high performance portion of the code and still have polymorphically behaving objects to be able to conveniently handle them where performance is not required.
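Assembled into a minimal runnable form (the calls counter is my addition, just to make the static dispatch observable):

```cpp
#include <cassert>

struct BaseType { /* no virtual functions, no vtable pointer */ };

struct CMyObject : BaseType {
    int calls;
    CMyObject() : calls(0) {}
    void DoStuff() { ++calls; }
};

class CDetachedFacade_Base {
public:
    virtual ~CDetachedFacade_Base() {}
    virtual void DoStuff(BaseType *pthis) = 0;
};

template <class ObjectType>
class CDetachedFacade : public CDetachedFacade_Base {
public:
    virtual void DoStuff(BaseType *pthis) {
        static_cast<ObjectType *>(pthis)->DoStuff(); // statically bound
    }
};

class CMakeFacade {
    BaseType *pthis;
    CDetachedFacade_Base *pfacade;
public:
    CMakeFacade(BaseType *p, CDetachedFacade_Base *f) : pthis(p), pfacade(f) {}
    void DoStuff() { pfacade->DoStuff(pthis); }
};
```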
CRTP is a compile time alternative to virtual functions:
template <class Derived>
struct Base
{
void interface()
{
// ...
static_cast<Derived*>(this)->implementation();
// ...
}
static void static_func()
{
// ...
Derived::static_sub_func();
// ...
}
};
struct Derived : Base<Derived>
{
void implementation();
static void static_sub_func();
};
It relies on the fact that definitions of member functions are not instantiated until they are called. So Base should refer to members of Derived only inside the definitions of its own member functions, never in prototypes or data members.
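A minimal runnable illustration of the pattern (the calls counter is my addition, to make the dispatch observable):

```cpp
#include <cassert>

template <class Derived>
struct Base {
    void interface() {
        // static dispatch: resolved at compile time, no vtable needed
        static_cast<Derived *>(this)->implementation();
    }
};

struct Derived : Base<Derived> {
    int calls;
    Derived() : calls(0) {}
    void implementation() { ++calls; }
};
```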
I am writing a few algorithms to build random forests, each forest will be
trained on separate data with separate functions (each tree will use a set of
functions with a fixed signature however different trees will be trained using
different sets of functions which could have a different signature), however I
would like to just write the code to build the random trees once, using
templates. I currently have something like the following:
// template class T corresponds to the training data type (i.e. image patch or pixel)
// template class V corresponds to the function pointer type
template<class T, class V>
class RandomTree{
void build(RandomTreeNode<T>& current_node,
vector<V>& functions,
vector<T>& data) {
... some code that basically calls a function passing in data T
}
};
and I create the object like so:
typedef double (*function_ptr)(TrainingDataPoint& data_point);
RandomTree<TrainingDataPoint, function_ptr> tree = ...
The problem is that, for efficiency reasons, for one of the trees I'm
building, I want the set of functions (function_ptr's) to take in not only the
TrainingDataPoint(template type T) but a cache of data. So that my function
pointer will look like:
typedef double (*function_ptr)(TrainingDataPoint&,
unordered_map<string, cv::Mat>& preloaded_images);
Now the problem is, I can't think of a way to keep the RandomTree class generic but have some function sets (template type V) that take more than just the training point (template type T).
So far I have thought of:
Making the cache global so that the functions can access it
adding a pointer to the cache to each training data point (but who is responsible for the clean up?)
Adding a third template parameter to the RandomTree, but in this case if I am building a tree that doesn't require this third parameter, what do I put there?
None of these options seem particularly appealing to me, hopefully someone can lend some experience and tell me of a better way?
Thanks
Use a functor for the functions that need state. A functor in C++ is a class (or struct) with an overloaded operator(), so that an instance of the functor can be "called like" a function. The arguments to the functor in the RandomTree should be exactly those parameters that vary and are under the control of the RandomTree, the rest should be bound outside. A sample functor with additional state that wraps a function:
template<typename Retval, typename Arg1, typename ExtraData>
struct BindExtraData
{
typedef Retval(*func_type)(Arg1, ExtraData);
BindExtraData( ExtraData const& d_, func_type func_ ):d(d_), func(func_) {};
ExtraData d;
func_type func;
Retval operator()( Arg1 a1 )
{
return func(a1, d);
}
};
but you can do better. If this is a one-off, there is no need to make it a template. bind2nd(well, binder2nd) is the standard library version of the above, and will be better written.
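A usage sketch of the functor above (the score function and cache type are made up for illustration; they are not from the question):

```cpp
#include <cassert>
#include <map>
#include <string>

template <typename Retval, typename Arg1, typename ExtraData>
struct BindExtraData {
    typedef Retval (*func_type)(Arg1, ExtraData);
    BindExtraData(ExtraData const &d_, func_type func_) : d(d_), func(func_) {}
    ExtraData d;
    func_type func;
    Retval operator()(Arg1 a1) { return func(a1, d); }
};

// a hypothetical scoring function that needs a cache of preloaded data
typedef std::map<std::string, double> Cache;

double score(const std::string &key, Cache cache) {
    return cache[key]; // 0.0 for unknown keys (operator[] default-constructs)
}
```

The tree only ever sees a callable taking one argument; the cache rides along inside the functor.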
You could add another template parameter to RandomTree that takes in a cache type; the default would be an empty cache if none is provided. For example:
template<typename T, typename V, typename CacheDataType = EmptyCache>
class RandomTree{ ... }
RandomTree<TrainingDataPoint, function_ptr, PreloadedImageCache>
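A compilable sketch of that suggestion. EmptyCache comes from the answer; the member names, the apply method, and the uniform two-argument function signature (every function receives a cache reference, possibly empty) are my additions to make the idea concrete.

```cpp
#include <cassert>
#include <map>
#include <string>

struct EmptyCache {}; // placeholder for trees that need no extra data

template <typename T, typename V, typename CacheDataType = EmptyCache>
class RandomTree {
public:
    CacheDataType m_cache; // handed to every function by reference

    double apply(V func, T &point) { return func(point, m_cache); }
};

// a training-point type and two function flavours, for illustration
struct TrainingDataPoint { double value; };

double plain(TrainingDataPoint &p, EmptyCache &) { return p.value; }

typedef std::map<std::string, double> ImageCache;
double cached(TrainingDataPoint &p, ImageCache &c) { return p.value + c["offset"]; }
```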
I'm pretty sure the answer is "you can't use templates, you have to use virtual functions (dynamic polymorphism)", but it seems like I'd have to duplicate a lot of code if I went that route. Here is the setup:
I currently have two classes, ColorImageSegmentation and GrayscaleImageSegmentation. They do essentially the same thing, but there are three differences
- they operate on different types (ColorImage and GrayscaleImage)
- a parameter, the dimensionality of the histogram (3 vs 1) is different
- The PixelDifference function is different based on the image type
If I create a class
template <typename TImageType>
class ImageSegmentation
{
};
I would be in good shape. However, I want to have this object as a member of another class:
class MyMainClass
{
ImageSegmentation MyImageSegmentation;
};
But the user needs to determine the type of MyImageSegmentation (if the user opens a grayscale image, I want to instantiate MyImageSegmentation<GrayScaleType>. Likewise for a color image, MyImageSegmentation<ColorType>.)
With derived classes, I could store a pointer and then do:
class MyMainClass
{
ImageSegmentation* MyImageSegmentation;
};
... user does something...
MyImageSegmentation = new ColorImageSegmentation;
but how would I do something like this with templates? The problem is I have a lot of:
typedef typename TImageType::HistogramType HistogramType;
typedef typename TImageType::PixelType PixelType;
sort of things going on, so I don't know how I would convert them to the dynamic polymorphic model without duplicating a whole bunch of code.
Sorry for the rambling... does anyone have any suggestions for me?
Thanks,
David
Maybe there are additional requirements you haven't told us about, but from what you have so far, you can pass the type down through the containing class:
template<typename TImage>
class MyMainClass
{
ImageSegmentation<TImage> MyImageSegmentation;
};
Most likely you'll need some layer of dynamic dispatch, but only at the highest level of abstraction:
struct IMainClass
{
virtual bool SaveToFile(std::string filename) = 0;
virtual bool ApplySharpenFilter(int level) = 0;
...
};
template<typename TImage>
class MyMainClass : public IMainClass
{
ImageSegmentation<TImage> MyImageSegmentation;
public:
virtual bool SaveToFile(std::string filename);
virtual bool ApplySharpenFilter(int level);
};
IMainClass* pMain = new MyMainClass<GrayscaleImage>();
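A self-contained sketch of "dynamic dispatch only at the top". The image types, the ImageSegmentation body, and the return values are all stand-ins invented for the demo; only the overall shape follows the answer.

```cpp
#include <cassert>
#include <string>

struct GrayscaleImage { typedef int PixelType; };
struct ColorImage { struct Rgb { int r, g, b; }; typedef Rgb PixelType; };

template <typename TImage>
class ImageSegmentation {
public:
    typedef typename TImage::PixelType PixelType;
    int segmentCount() const { return 2; } // placeholder result
};

struct IMainClass {
    virtual ~IMainClass() {}
    virtual bool SaveToFile(std::string filename) = 0;
    virtual bool ApplySharpenFilter(int level) = 0;
};

template <typename TImage>
class MyMainClass : public IMainClass {
    ImageSegmentation<TImage> MyImageSegmentation; // fully typed internally
public:
    virtual bool SaveToFile(std::string) {
        return MyImageSegmentation.segmentCount() > 0;
    }
    virtual bool ApplySharpenFilter(int level) { return level >= 0; }
};
```

Everything below IMainClass stays statically typed; only the top-level handle is polymorphic.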
You want to create a templated version of your objects, but have those objects take different parameter types based on the template parameter? That's not an easy thing to integrate into a library, but there are a few ways of going about it.
Take a look at unary_function for inspiration. There they are using templated traits to carry around the type parameters without having to work any sort of magic:
template <class Arg, class Result>
struct unary_function {
typedef Arg argument_type;
typedef Result result_type;
};
'unary_function' does not contain any functionality other than declaring typedefs. These typedefs, however, allow you to express named relationships between code segments, in code and at compile time. They leverage the way template parameters are checked.
What this means is that you can have objects that work on this:
template<typename T>
struct Foo{
    typedef typename T::argument_type argument_type;
    T m_Func;
    Foo(T _myFunc) : m_Func(_myFunc) {}
    void myWrappedFunction(argument_type _argument){ m_Func( _argument ); }
};
which contains within it the argument type without your having to specify it in advance. So if you have pixel_type or something similar defined for each of your image objects, simply writing typename T::pixel_type will forward the type parameter you need.