Jörg's answer to this question nicely delineates between "normal" templates (what the question refers to, perhaps erroneously, as generics), which operate on data, and meta templates, which operate on a program. Jörg then wisely mentions that programs are data, so it's really all one and the same. That said, meta-templates are still a different beast. Where do normal templates end and meta templates begin?
The best test I can come up with is if a template's arguments are exclusively class or typename the template is "normal" and meta otherwise. Is this test correct?
The boundary: Signature with Logical Behaviour
Well, in my opinion the boundary line is to be drawn where a template's signature stops being a simple signature yielding runtime code and becomes a definition of explicit or implicit logic, which will be executed/resolved at compile time.
Some examples and explanation
Regular templates, i.e. those with only typename, class, or possibly non-type (value) template parameters, produce executable C++ code once instantiated during compilation.
The important point: that code is not executed at compile time.
E.g. (very simple and most likely unrealistic example, but explains the concept):
#include <string>

template<typename T>
T add(const T& lhs, const T& rhs) {
    return lhs + rhs;
}

// Full specialization for std::string: "addition" becomes concatenation.
template<>
std::string add<std::string>(
    const std::string& lhs,
    const std::string& rhs) {
    return lhs + rhs;
}

int main() {
    double result = add(1.0, 2.0);                          // 3.0
    std::string s = add(std::string("This is "),
                        std::string("the template specialization..."));
}
During compilation, the primary template is used to instantiate the code above for the type double, but it is not executed.
Likewise, the specialization is instantiated for the text concatenation, but again: it is not executed at compile time.
This example, however:
#include <iostream>
#include <string>
#include <type_traits>
class INPCWithVoice {
    void doSpeak() { ; }  // never called; the class serves mainly as a marker base for the check below
};
class DefaultNPCWithVoice
: public INPCWithVoice {
public:
inline std::string doSpeak() {
return "I'm so default, it hurts... But at least I can speak...";
}
};
class SpecialSnowflake
: public INPCWithVoice {
public:
inline std::string doSpeak() {
return "WEEEEEEEEEEEH~";
}
};
class DefaultNPCWithoutVoice {
public:
inline std::string doSpeak() {
return "[...]";
}
};
// SFINAE: the parameter type is only well-formed if TNPC derives from INPCWithVoice;
// otherwise this template drops out of the candidate set and the call fails to compile.
template <typename TNPC>
static inline void speak(
    typename std::enable_if<std::is_base_of<INPCWithVoice, TNPC>::value, TNPC>::type& npc)
{
    std::cout << npc.doSpeak() << std::endl;
}
int main()
{
DefaultNPCWithVoice npc0 = DefaultNPCWithVoice();
SpecialSnowflake npc1 = SpecialSnowflake();
DefaultNPCWithoutVoice npc2 = DefaultNPCWithoutVoice();
speak<DefaultNPCWithVoice>(npc0);
speak<SpecialSnowflake>(npc1);
// speak<DefaultNPCWithoutVoice>(npc2); // Won't compile, since DefaultNPCWithoutVoice does not derive from INPCWithVoice
}
This sample shows template meta programming (a simple case of it, in fact).
What happens here is that the speak function's parameter type is resolved at compile time and becomes TNPC, but only if the type passed for it derives from INPCWithVoice.
If it does not, the template has no valid instantiation for the call and compilation fails.
Look up SFINAE for this technique: http://eli.thegreenplace.net/2014/sfinae-and-enable_if/
At this point there is some logic executed at compile time, and the entire program will be fully resolved once linked into the executable/library.
Another very good example is: https://akrzemi1.wordpress.com/2012/03/19/meta-functions-in-c11/
Here you can see a template meta programming implementation of the factorial function, demonstrating that the generated machine code can be identical to using a fixed value, if the meta-template collapses to a constant.
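A minimal sketch of such a meta-function, in the spirit of the linked article (the names here are my own illustration, not taken from it):
// Compile-time factorial as a class-template meta-function; factorial<5>::value
// is a constant that the compiler folds directly into the generated code.
template <unsigned N>
struct factorial {
    static const unsigned value = N * factorial<N - 1>::value;
};

template <>
struct factorial<0> {
    static const unsigned value = 1;
};

// Usage: same object code as writing the literal 120 by hand.
unsigned f = factorial<5>::value;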
Finalizing example: Fibonacci
#include <cstdint>
#include <iostream>

// Compile-time recursion: each N instantiates its own fibonacci<N>() function.
template <std::intmax_t N>
unsigned int fibonacci() {
    return fibonacci<N - 1>() + fibonacci<N - 2>();
}

template <>
unsigned int fibonacci<1>() {
    return 1;
}

template <>
unsigned int fibonacci<2>() {
    return fibonacci<1>();
}

// A "loop" unrolled by the compiler: Loop<MAX> instantiates Loop<MAX - 1>, and so on down to Loop<0>.
template <std::intmax_t MAX>
void Loop() {
    std::cout << "Fibonacci at " << MAX << ": " << fibonacci<MAX>() << std::endl;
    Loop<MAX - 1>();
}

template <>
void Loop<0>() {
    std::cout << "End" << std::endl;
}

int main()
{
    Loop<10>();
}
This code implements template meta programming using only scalar (non-type) template arguments to compute the Fibonacci sequence at position N.
In addition, it shows a compile-time unrolled loop counting down from 10 to 0!
Finally
I hope this clarifies things a bit.
Remember though: the loop and Fibonacci examples instantiate the above templates once for each index!
Consequently, there's a horrible amount of redundancy and binary bloat!
I'm not an expert myself, and I'm sure there's a template meta programming kung fu master on Stack Overflow who can add any information that is missing.
Attempt to differentiate and define the terms
Let's first try to roughly define the terms. I start with a hopefully good enough definition of "programming", and then repeatedly apply the "usual" meaning of meta- to it:
programming
Programming results in a program that transforms some data.
int add(int value) { return value + 42; }
I just wrote code that will result in a program which transforms some data - an integer - to some other data.
templates (meta programming)
Meta programming results in a "program" that transforms some program into another. With C++ templates, there's no tangible "program"; it's an implicit part of what the compiler does.
template<typename T>
std::pair<T,T> two_of_them(T thing) {
return std::make_pair(thing, thing);
}
I just wrote code to instruct the compiler to behave like a program that emits (code for) another program.
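For instance, a single call site is enough to make the compiler "run" that implicit program and emit an instantiated function (my own illustrative snippet, assuming <utility> for std::make_pair):
#include <utility>

// At this call site the compiler instantiates and emits a concrete
// std::pair<int, int> two_of_them(int):
auto p = two_of_them(42);   // p is std::pair<int, int>{42, 42}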
meta templates (meta meta programming?)
Writing a meta template results in a ""program"" that results in a "program" which results in a program. Thus, in C++, writing code that results in new templates. (From another answer of mine:)
// map :: ([T] -> T) -> (T -> T) -> ([T] -> T)
//        "List"        "Mapping"   result "type" (also a "List")
// --------------------------------------------------------------
template<template<typename...> class List,
         template<typename>    class Mapping>
struct map {
    template<typename... Elements>
    using type = List<typename Mapping<Elements>::type...>;
};
That's a description of how the compiler can transform two given templates into a new template.
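A short usage sketch of the above (my own illustration, picking std::tuple as the List and std::add_pointer as the Mapping):
#include <tuple>
#include <type_traits>

// map produces a new template; applying it to <int, double> maps every
// element type through std::add_pointer:
static_assert(
    std::is_same<
        map<std::tuple, std::add_pointer>::type<int, double>,
        std::tuple<int*, double*>
    >::value,
    "the compiler built a new template and applied it element-wise");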
Possible objection
Looking at the other answers, one could argue that my example of meta programming is not "real" meta programming but rather "generic programming" because it does not implement any logic at the "meta" level. But then, can the example given for programming be considered "real" programming? It does not implement any logic either, it's a simple mapping from data to data, just as the meta programming example implements a simple mapping from code (auto p = two_of_them(42);) to code (the template "filled" with the correct type).
Thus, IMO, adding conditionals (via specialization for example) just makes a template more complex, but does not change its nature.
Your test
Definitely no. Consider:
template<typename X>
struct foo {
template<typename Y>
using type = X;
};
foo is a template with a single typename parameter, but "results" in a template (named foo::type ... just for consistency) that "results" - no matter what parameter is given - in the type given to foo (and thus in the behavior, the program, implemented by that type).
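A quick sanity check of that claim (a sketch, assuming <type_traits> is available):
#include <type_traits>

// No matter which Y is supplied, foo<X>::type<Y> is always X:
static_assert(std::is_same<foo<int>::type<double>, int>::value, "always int");
static_assert(std::is_same<foo<int>::type<char>,   int>::value, "always int");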
Let me start answering using a definition from dictionary.com
Definition
meta -
a prefix added to the name of a subject and designating another subject that analyzes the original one but at a more abstract, higher level: metaphilosophy; metalinguistics.
a prefix added to the name of something that consciously references or comments upon its own subject or features: a meta-painting of an artist painting a canvas.
Template programming is foremost used as a way to express relations in the type system of C++. I would argue it is therefore fair to say that template programming inherently makes use of the type system itself.
From this perspective, we can rather directly apply the definition given above. The difference between template programming and meta (template-)programming lies in the treatment of template arguments and in the intended result.
Template code that inspects its arguments clearly falls under the former definition, while the creation of new types from template arguments arguably falls under the latter. Note that this must also be combined with the intent of your code to operate on types.
Examples
Let's take a look at some examples:
Implementation of std::aligned_storage:
template<std::size_t Len, std::size_t Align /* default alignment not implemented */>
struct aligned_storage {
typedef struct {
alignas(Align) unsigned char data[Len];
} type;
};
This code fulfills the second condition: the type std::aligned_storage is used to create another type. We could make this even clearer by creating a wrapper:
template<typename T>
using storage_of = typename std::aligned_storage<sizeof(T), alignof(T)>::type;
Now we fulfill both of the above: we inspect the argument type T to extract its size and alignment, then we use that information to construct a new type that depends on our argument. This clearly constitutes meta-programming.
The original std::aligned_storage is a less clear-cut case, but it still qualifies. We provide a result in the form of a type, and both arguments are used to create a new type. The inspection arguably happens when the internal array type of type::data is created.
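To make the intent of storage_of tangible, a small usage sketch (my own illustration; the demo function is hypothetical, and <new> is needed for placement new):
#include <new>

void demo() {
    storage_of<double> buffer;               // raw, suitably sized and aligned bytes
    double* d = new (&buffer) double(3.14);  // construct a double inside that storage
    *d += 1.0;
    // double is trivially destructible, so no explicit destructor call is needed here
}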
A counter-example for completeness of the argument:
template<
class T,
class Container = std::vector<T>,
class Compare = std::less<typename Container::value_type>
> class priority_queue { /*Implementation defined implementation*/ };
Here, you might have the question:
But doesn't priority queue also do type inspection, for example to retrieve the underlying Container, or to assess the type of its iterators?
And yes it does, but the goal is different. The type std::priority_queue itself does not constitute meta template programming, since it doesn't make use of the information to operate within the type system. Meanwhile the following would be meta template programming:
template<typename C>
using PriorityQueue = std::priority_queue<C>;
The intent here is to provide a type, not the operations on the data themselves. This gets clearer when we look at the changes we can make to each piece of code.
We can change the implementation of std::priority_queue, perhaps to change the permitted operations, for example to support faster access, additional operations, or compact storage of the bits inside the container. But all of that concerns the actual runtime functionality and is not concerned with the type system.
In contrast, look at what we can do to PriorityQueue. If we were to choose a different underlying implementation, for example because we found that we like Boost.Heap better, or because we link against Qt anyway and want to use their implementation, that's a single-line change. This is what meta programming is for: we make choices within the type system based on arguments formed by other types.
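For instance, the Boost.Heap variant of that single-line change might look roughly like this (a sketch; boost::heap::priority_queue does not expose exactly the same interface as std::priority_queue):
#include <boost/heap/priority_queue.hpp>

// Hypothetical one-line swap of the underlying implementation:
template<typename C>
using PriorityQueue = boost::heap::priority_queue<C>;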
(Meta-)Template signatures
Regarding your test, as we have seen above, storage_of has exclusively typename arguments but is very clearly meta programming. If you dig deeper, you will find that the type system itself is, with templates, Turing-complete. Without even needing to explicitly state any integral variables, we could for example easily replace them by recursively stacked templates (i.e. a Zermelo-style construction of the natural numbers):
using Z = void;                                      // zero
template<typename> struct Zermelo;                   // never defined; used only as a type
template<typename N> using Successor = Zermelo<N>;   // "N + 1"
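To stress that no runtime code is involved, here is a purely type-level operation on these numbers (my own sketch; Add and the encoding of addition are illustrative):
// Add<A, B>::type is the type-level sum; Z plays the role of zero and Zermelo<N> of "N + 1".
template<typename A, typename B> struct Add;

template<typename B>
struct Add<Z, B> { using type = B; };                        // 0 + b = b

template<typename A, typename B>
struct Add<Zermelo<A>, B> {
    using type = Zermelo<typename Add<A, B>::type>;          // (a+1) + b = (a+b) + 1
};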
A better test in my eyes would be to ask whether the given implementation has runtime effects. If a template struct or alias does not contain any definition whose effect happens at runtime, it is probably template meta programming.
Closing words
Of course normal template programming might utilize meta template programming. You can use meta template programming to determine properties of normal template arguments.
For example, you might choose different output strategies (assuming some meta-programming implementation of template<class Iterator> struct is_pointer_like;):
template<class It> void generateSomeData(It outputIterator) {
    if constexpr(is_pointer_like<It>::value) {
        generateFastIntoBuffer(static_cast<typename It::pointer>(std::addressof(*outputIterator)));
    } else {
        generateOneByOne(outputIterator);
    }
}
This constitutes template programming employing the feature implemented with meta template programming.
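For completeness, one possible sketch of the assumed is_pointer_like trait. Here "pointer-like" is reduced to "exposes the nested ::pointer typedef the fast path above relies on"; a real implementation would certainly check more (e.g. contiguity):
#include <type_traits>

// Default: nothing is assumed to be pointer-like.
template <class Iterator, class = void>
struct is_pointer_like : std::false_type {};

// Detected: the iterator exposes a nested ::pointer typedef.
template <class Iterator>
struct is_pointer_like<Iterator, std::void_t<typename Iterator::pointer>>
    : std::true_type {};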
Where do normal templates end and meta templates begin?
When the code generated by templates relies on the fundamental aspects of programming, such as branching and looping, you have crossed the line from normal templates to template meta programming.
Following the description from the article you linked:
A regular function
bool greater(int a, int b)
{
return (a > b);
}
A regular function that works with only one type (ignoring implicit conversions for the time being).
A function template (generic programming)
template <typename T>
bool greater(T a, T b)
{
return (a > b);
}
By using a function template, you have created generic code that can be applied to many types. However, depending on its usage, it may not be correct for null-terminated C strings.
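For example, with char const* arguments the generic version silently compares addresses rather than characters:
const char* a = "apple";
const char* b = "banana";
bool r = greater(a, b);   // compares the pointer values of the literals, not their contents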
Template Metaprogramming
#include <cstring>

// Generic implementation
template <typename T>
struct greater_helper
{
    bool operator()(T a, T b) const
    {
        return (a > b);
    }
};

template <typename T>
bool greater(T a, T b)
{
    return greater_helper<T>()(a, b);
}

// Specialization for char const*
template <>
struct greater_helper<char const*>
{
    bool operator()(char const* a, char const* b) const
    {
        return (std::strcmp(a, b) > 0);
    }
};
Here, you have written code as if to say:
If T is char const*, use a special function.
For all other values of T, use the generic function.
Now you have crossed the threshold from normal templates to template metaprogramming. You have introduced the notion of if-else branching using templates.
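A quick usage sketch showing which branch is selected at compile time:
int main() {
    bool x = greater(3, 5);           // generic path: greater_helper<int>
    bool y = greater("abc", "abd");   // T deduced as char const*: the strcmp specialization
}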
Related
I'm working on a program where some data is statically allocated and some is dynamically allocated. Now I want to have another type that can be called with any instantiation of that template as its argument.
#include <array>
#include <vector>
template <int size> class Foo {
std::array<int, size> data;
public:
int& operator[](std::size_t idx) {return data[idx];}
};
template <> class Foo<-1> {
std::vector<int> data;
public:
int& operator[](std::size_t idx) {return data[idx];}
};
// option 1- polymorphism
struct FooCaller {
virtual void operator()(Foo data) = 0; // how would I make this work with both forms of Foo?
};
// option 2- generic programming
template <class T> concept CanCallFoo = requires (const T& t) {
t(std::declval<Foo&>()); // how do I ensure that this can call any overload of Foo?
};
Both methods would be fine, but I'm not sure how to go about this. Because the full code is more complex, I'd rather not have both Foos inherit from a base.
A callable F could be written with a restriction that it can only be called with a Foo<x> for which some arbitrary function of x is true.
In order for your "can be called with any Foo" test to work, you would have to invert an arbitrary function at compile time.
There is no practical way to do this short of examining all 2^32 possible values of x. No, really. The problem you have is that the type F is possibly too powerful for you to determine its properties. This is related to the halting problem and Rice's theorem, and to the fact that template metaprogramming and C++ overload resolution are Turing-complete (only related, because 2^32 is finite).
In the other case, you could type erase. Write a type RefAnyFoo/AnyFooValue that can be constructed from any Foo type and "type erases" the operations you want, the way std::function<void(int)> type-erases. It could either duck-type the Foo or actually restrict itself to instances of types made from the template.
Then your interface is one that takes an AnyFooValue/RefAnyFoo (depending on whether you are talking about copies or references).
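A rough sketch of what such a type-erasing reference wrapper could look like (names and the erased operation set are my own illustration; here only operator[] is erased):
#include <cstddef>
#include <functional>

// Erases "something you can index with std::size_t and get an int&";
// any Foo<N>, including Foo<-1>, satisfies that.
class RefAnyFoo {
    std::function<int&(std::size_t)> index_;
public:
    template <int N>
    RefAnyFoo(Foo<N>& foo)
        : index_([&foo](std::size_t i) -> int& { return foo[i]; }) {}

    int& operator[](std::size_t i) const { return index_(i); }
};

// An interface can now take RefAnyFoo instead of being templated on Foo<N>:
// void consume(RefAnyFoo foo) { foo[0] = 42; }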
Now, flipping this over, you can write a concept that accepts an F that takes RefAnyFoo. This isn't quite what you asked for, but plausibly you have an X/Y problem; you had a real problem, came up with incomplete solutions, then asked about how to get your solutions working instead of asking about the original problem.
Similarly, it is possible you only care about certain duck-typed properties of Foo<?> and not the specific type. Then a concept that checks those properties could be used by your callable; this again doesn't solve your problem so much as flip it upside down, because you cannot verify from outside the callable that it accepts an entire concept.
In my job, there are several templated mathematical classes (e.g. matrix).
An object can be implemented using either floats or doubles (or other numerical types but for this matter, it doesn't really matter).
A double object can only interact with another double object. For this reason, the function convert() was implemented for various types, with an implementation similar to this:
template <typename T1, typename T2>
Matrix<T2> convert(const Matrix<T1>& m, T2 /*dummy*/) {
    // create a matrix of type T2 and cast m's values into it:
    // retMatrix(i, j) = (T2)m(i, j)
}
You would call it with:
auto floatMatrix = convert(doubleMatrix, 0.f);
Or the slightly more verbose:
auto floatMatrix = convert(doubleMatrix, float());
I want to add a function like the one below that will enable a cleaner (IMHO) way to call these functions:
template <typename T, typename S>
auto convert(S&& s) -> decltype(convert(s, T())) {
return convert(s, T());
}
Now they can be called using:
auto floatMatrix = convert<float>(doubleMatrix);
My question is that my function signature is pretty awkward: I need to repeat convert(s, T()) both in the decltype and in the actual function body.
How do I overcome this?
Thanks.
Edit:
Currently, we are not using C++14.
Edit #2:
The Matrix class was just an example; there are quite a few relevant classes that have the convert() function implemented for them. Each of them is already "specialized" like someone suggested in an answer below (deleted meanwhile). I would like to adjust the way convert() is called without re-implementing everything.
Edit #3:
The supported types are obviously more than just float and double. Please treat the example I gave as an example and not the actual problem I'm trying to solve.
The "dummy" functions are already implemented; I was trying to make it work with minimal effort, instead of refactoring 30 functions and all their usages.
I don't think the question is so far-fetched considering C++14 allows just removing the -> decltype() thingy...
It's a little confusing why templates are needed in the first place instead of function overloading:
Matrix<double> convert(const Matrix<float>& m) {
// ...
}
Matrix<float> convert(const Matrix<double>& m) {
// ...
}
float->float and double->double don't seem like meaningful operations that need to be preserved so actually making that a compiler error seems beneficial (whereas the template mechanism might actually succeed and just create an unnecessary copy).
Also the need for the dummy parameter in the first place is confusing without a more complete example.
If you're set on templates (e.g. types go beyond just these two):
// T2 comes first so callers can write convert<float>(m) and let T1 be deduced.
template <typename T2, typename T1>
Matrix<T2> convert(const Matrix<T1>& m) {
    // retMatrix(i, j) = static_cast<T2>(m(i, j));
}
If you're trying to write one single generic conversion function on top of all your other conversion functions, you have no way of simplifying what you wrote until C++14 (I mean there are other ways to write it, but it seems unlikely to be simpler).
One option might be helper classes that know the converted type (Matrix<T2> for a given Matrix<T1> and T2), but they are no prettier than the decltype expression, which is readable and local to the code.
Could Matrix<T> derive from a base class that knows how to generate Matrix<T2> from T2? Perhaps as a member template so you can write:
template <class T> class Matrix;  // forward declaration

class MatrixBase
{
public:
    template <class T2> struct To
    { typedef Matrix<T2> type; };
};

template <class T>
class Matrix : public MatrixBase {
    // ...
};
All this just to write: -> typename S::template To<T>::type
As you say, come the C++14 revolution you can do the fully automatic return type thing.
So basically I'm trying to find a way to use C++ functions in Lua that are not lua_CFunctions (that don't return an int and take a lua_State as a parameter), basically your regular old C++ function. The catch, though, is that I'm trying to find a way to do it without writing a dedicated lua_CFunction for each of them (so basically imagine I already have a program or a bunch of functions in C++ that I want to use in Lua, and I don't want to have to write a new function for each of them).
So, say I have a very simple C++ function:
static int doubleInt(int a) {
return a*2;
}
(with or without the static, it shouldn't(?) matter).
Say I want to use this function in Lua by calling doubleInt(10) in a lua script. Is there a way to do this without writing a separate
static int callFunc(lua_State *L) {
//do stuff to make the function work in lua
}
for every individual function? So something along the lines of what luaBind does with their def() function (and I know it sucks, but I can't really use a separate dedicated binding library; have to write my own).
I know I have to write a class with templates for this but I don't even have the slightest idea about how to go about getting the function in Lua. I don't think there is a way in C++ to automatically generate a custom function (presumably at compile time) - that would be amazing - so I don't even know where to start.
This is a very open-ended question.
I have been working on a lua binding library recently, so I can explain how I did this, but there are many ways you could do it.
You didn't tag this question C++11. I'm going to assume however that you are using C++11. If not, then it is extremely difficult and I would say not at all practical to roll your own especially if you don't already know enough about boost::mpl to have some idea how to do it. You should definitely just use luabind in that case.
The first thing you need is some basic infrastructure that tells you how to convert C++ types to corresponding Lua types and back.
IMO the best way to do this is using a type trait, and not one massive overloaded function. So, you will define a primary template:
namespace traits {
template <typename T>
struct push;
template <>
struct push<int> {
static void to_stack(lua_State * L, int x) { lua_pushinteger(L, x); }
};
template <>
struct push<double> {
static void to_stack(lua_State * L, double d) { lua_pushnumber(L, d); }
};
...
} // end namespace traits
Etc. You probably also want to specialize it for std::string and things like that.
Then you can make a generic push function like this:
template <typename T>
void push(lua_State * L, const T & t) {
traits::push<T>::to_stack(L, t);
}
The advantage here is that implicit conversions are not considered when you call push. Either the type you passed exactly matches something you defined the trait for, or it fails. And you can't get ambiguous overload problems between double and int etc., which can be a big pain in the butt.
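A small illustration of that exactness (assuming the traits defined above and a valid lua_State*; the demo function is hypothetical):
// Each call resolves to exactly one trait specialization, so there is no
// int/double overload ambiguity to worry about:
void demo(lua_State * L) {
    push(L, 42);      // uses traits::push<int>
    push(L, 3.14);    // uses traits::push<double>
    // push(L, 42u);  // would not compile until a traits::push<unsigned> is added
}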
Then, you have to do the same thing for read, so you have a trait that tells you how to read values of a given type off the stack. Your read technique needs to signal failures somehow; you can decide whether that should be done with exceptions or some other technique.
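A minimal sketch of that read side, assuming exceptions are the chosen failure signal (which matches the catch in the adapt code below); the exact names and shape are illustrative:
#include <stdexcept>
#include <string>

template <typename T>
T read(lua_State * L, int index);

template <>
int read<int>(lua_State * L, int index) {
    if (!lua_isnumber(L, index)) { throw std::runtime_error("expected a number"); }
    return static_cast<int>(lua_tointeger(L, index));
}

template <>
std::string read<std::string>(lua_State * L, int index) {
    if (!lua_isstring(L, index)) { throw std::runtime_error("expected a string"); }
    return lua_tostring(L, index);
}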
Once you have this, you can try to make an adapt template that will take an arbitrary function pointer and try to adapt it into a lua_CFunction that does roughly the same thing.
Basically, you want to use variadic templates so that you can specialize against all the parameters of the function pointer. You pass those types one by one to your read method, and use an index sequence to read from the correct stack positions. You try to read them all, and if you can do it without errors, then you can call the target function, and then you return its results.
If you want to also push generic C++ objects back as the return value, then you can call your push function at the end.
First, to help, you need an "index sequence" facility. If you are in C++14 you can use std::make_integer_sequence, if not then you have to roll your own. Mine looks like this:
namespace detail {
/***
* Utility for manipulating lists of integers
*/
template <std::size_t... Ss>
struct SizeList {
static constexpr std::size_t size = sizeof...(Ss);
};
template <typename L, typename R>
struct Concat;
template <std::size_t... TL, std::size_t... TR>
struct Concat<SizeList<TL...>, SizeList<TR...>> {
typedef SizeList<TL..., TR...> type;
};
/***
* Count_t<n> produces a sizelist containing numbers 0 to n-1.
*/
template <std::size_t n>
struct Count {
typedef
typename Concat<typename Count<n - 1>::type, SizeList<n - 1>>::type type;
};
template <>
struct Count<0> {
typedef SizeList<> type;
};
template <std::size_t n>
using Count_t = typename Count<n>::type;
} // end namespace detail
Here's what your adapt class might look like:
// Primary template
template <typename T, T>
class adapt;
// Specialization for C++ functions: int (lua_State *, ...)
template <typename... Args, int (*target_func)(lua_State * L, Args...)>
class adapt<int (*)(lua_State * L, Args...), target_func> {
template <typename T>
struct impl;
template <std::size_t... indices>
struct impl<detail::SizeList<indices...>> {
static int adapted(lua_State * L) {
try {
return target_func(L, read<Args>(L, 1 + indices)...);
} catch (std::exception & e) {
return luaL_error(L, "Caught an exception: %s", e.what());
}
}
};
public:
static int adapted(lua_State * L) {
using I = detail::Count_t<sizeof...(Args)>;
return impl<I>::adapted(L);
}
};
The real code from my implementation is here. I decided to do it without using exceptions.
This technique also works at compile-time -- since you are passing a function pointer to an arbitrary C++ function as a non-type template parameter, and the adapt template produces a lua_CFunction as a static class member, when you take a pointer to adapt<...>::adapted, it has to all be resolved at compile-time. This means that all the different bits can be inlined by the compiler.
To work around the inability to deduce the type of a non-type template parameter like a function pointer (prior to C++17), I use a macro which looks like this:
#define PRIMER_ADAPT(F) &::primer::adapt<decltype(F), (F)>::adapted
So, I can take a complicated C++ function f, and then use PRIMER_ADAPT(&f) as if it were simply a lua_CFunction.
You should realize though that making all this stuff and testing it takes a really long time. I worked on this library more than a month, and it is refactored out from some code in another project where I had refined it longer. There are also a lot of pitfalls in lua related to "automating" stack operations like this, because it doesn't do any bounds checking for you and you need to call lua_checkstack to be strictly correct.
You should definitely use one of the existing libraries unless you have a really compelling need that prevents it.
If you're not limited to the standard Lua lib, you can try LuaJIT. It has FFI support. Calling an external function is as simple as:
local ffi = require("ffi")
ffi.cdef[[
int printf(const char *fmt, ...);
]]
ffi.C.printf("Hello %s!", "world")
Since we define the template type above the class declaration, why do we have to specify it again for each function definition? I'm confused because it's even in the same file, so it seems almost unnecessary to have to specify it for every function, and since we are using the :: operator, shouldn't it go back to the class declaration and see that T is already defined?
I'm new to C++ and still need to clear up some misunderstandings.
#ifndef __Foo_H__
#define __Foo_H__
template <class T>
class FooBar{
private:
    bool foo1(T);
    bool foo2(T);
public:
    FooBar();
};

template <class T> bool FooBar<T>::foo1(T data){
    // code...
}

template <class T> bool FooBar<T>::foo2(T data){
    // code...
}
#endif
First, you may rename the parameter, just as for a normal function:
template <class U> bool FooBar<U>::foo1(U and_here_too){/**/}
It also manages to handle full and partial specialization (note that the partial case additionally requires a matching partial specialization of the class template to exist):
template <> bool FooBar<int>::foo1(int i){/**/}
template <typename T> bool FooBar<std::vector<T>>::foo1(std::vector<T> v){/**/}
Templates are an example of generic programming. The idea is to reuse code/algorithms. In languages with strict type control you come across seemingly unnecessary constraints. For instance, you may have some sorting function doing a great job in one project but incompatible with the types used in another.
C++, C#, and Java introduce generic programming as templates (C++) and generics (C#, Java). In generics (let's talk about Java), classes are existing entities and class parameters serve mainly as a type-control service. That is their purpose in collections. When you inspect how a list works, you see that the list gathers Objects and casts back to the parameterized type only when an object is retrieved. When you write a class you can only assume the parameterized type is Object or a declared interface, as in the following example:
class Test<T extends Comparable> {}
Here you can use T as Comparable. Unless you explicitly declare the interface, the parameter is treated as Object.
Now comes the difference between generics and templates. In C++ you can assume much more about the parameterized type in the implementation. You can write sorting for objects of an unknown type. In Java you at least have to know what interface the parameter type implements. As a consequence, C++ has to build a new class for each parameter (in order to check whether the code is correct): Vector<int> is a completely separate type from Vector<float>, while in Java there exists one class, Vector<? extends Comparable>.
:: is the scope operator. You can access the scope of Vector<int> because that class exists; Vector on its own does not.
As a result, Java can compile generics separately and C++ cannot. All templates have to be available in headers to all programmers; you cannot hide them (there has been some work on compiling templates separately, but I don't know its status).
So when you use generics you can refer to the method Vector.add(), while with templates you have to specify the parameter: template<class T> ... Vector<T>.
PS. Since the template parameter is an integral part of the class name, you may use templates for compile-time calculations, like the Fibonacci sequence:
template<int N> struct Fibonacci {
    static const int element = Fibonacci<N-1>::element + Fibonacci<N-2>::element;
};
template<> struct Fibonacci<1> {
    static const int element = 1;
};
template<> struct Fibonacci<0> {
    static const int element = 0;
};
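With that, the result is available as a compile-time constant; for example (C++11 static_assert):
// element(0) = 0 and element(1) = 1, so the 10th element is 55:
static_assert(Fibonacci<10>::element == 55, "computed entirely by the compiler");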
I have stumbled many times upon classes defined like:
class PureVirtualClass
{
virtual int foo() = 0;
virtual bool bar() = 0;
};
template <class T> class ImplClass : public virtual PureVirtualClass
{
virtual ~ImplClass(){};
int foo() { return 42;}
bool bar() { return true;}
//several other methods having nothing to do with T
};
This "design" appears so often I want to think the original developer knew what he was doing by defining ImplClass as template class but without any reference to the template argument T anywhere. My own c++ template knowledge is kinda limited.
Is there a benefit to this or is it just a confused programmer?
There can be a benefit to classes being templated but not depending on the argument. Most often you see such things used to define (empty) tag structures for template metaprogramming:
template <class X>
struct some_tag {};
The benefit of classes like yours in general is that, while you have the same functionality in each class, they are different classes and you can't copy one into the other, i.e. an object of type ImplClass<int> is not compatible with an object of type ImplClass<float>.
There are many useful cases of the idea mentioned by Arne. For instance, looking at Very basic tuple implementation, this is how a single tuple element is defined:
template <size_t N, typename T>
class TupleElem
{
T elem;
public:
T& get() { return elem; }
const T& get() const { return elem; }
};
It is templated on N, without depending on it. Why? Because the tuple implementation
template <size_t... N, typename... T>
class TupleImpl <sizes <N...>, T...> : TupleElem <N, T>...
{
//..
};
derives from multiple such elements, each with a unique N serving as an identifier. Without it, TupleImpl would derive from the same class twice, had two element types been identical within parameter pack T.... Neither random access to elements would work in this case (via an explicit call to function get() of the appropriate TupleElem base class, which would be ambiguous), nor would the empty base optimization (via specializing TupleElem for empty types T to not have a data member of type T).
This is a real use case, and exactly how std::tuple is implemented by clang. Of course, a class like TupleElem would be a hidden implementation detail, and not part of the interface. For instance, gcc follows an entirely different recursive class design.
In general, you will need to study the context where classes are used to understand the intent of the designer.
Maybe that developer was simply too lazy to split the classes into .h and .cpp files?
Without using templates, linker errors would occur if the classes (with their member functions defined in the header) are used in multiple compilation units. When using templates, the linker usually discards duplicate instantiations of a template at link time (or handles the problem in a different way).
While this may be an answer to "why did the developer do this", I would not recommend it if the question was "when should I introduce template arguments which are never used" (see the other answers for that). Even though it is annoying to split code into .h and .cpp files (especially when one is used to languages like Java or C#), it's the usual C++ way. And it is definitely easier to read/understand than using templates only for this purpose. Also, it makes the use of the classes less readable.
While this may be an answer to "why did the developer do this", I would not recommend this if the question was "when should I introduce template arguments which are never used" (see the other answers for this). Even though it is annoying to split code into .h and .cpp (especially when used to languages like Java or C#), it's the usual C++ way. And it is definitely easier to read/understand than using templates only for this purpose. Also, it makes the use of the classes less readable.