ComputeLibrary data type templates - C++

In the ARM ComputeLibrary, we can have a Tensor object of various types. When choosing the type of a Tensor, we pass the type to the initialiser of the Tensor's allocator, such as F32 (32-bit float) here:
my_tensor.allocator()->init(armcl::TensorInfo(shape_my_tensor, 1, armcl::DataType::F32));
A better introduction to the topic of tensor allocation can be found here.
There are several ARMCL types to choose from (see here for a list). Notice that the ComputeLibrary types are not primitive ones, though one can easily copy primitive typed data to them.
However, when writing templated C++ code, where one can have functions defined for arbitrary types, this "type choice which is not a type" creates a design problem.
Say I want to write a function that takes data of primitive types such as int, float, or double. In the templated function, this type would be referred to as being of type T (or whatever).
Now say I want to copy this data to an ARMCL Tensor within the scope of the templated function. This tensor needs to be initialised with the correct DataType, and that DataType must be a good fit for the type T: if T is a float, then our ARMCL tensor should be of type F32; if T is an int then our tensor should be S8, etc.
We need some sort of mapping between primitive types and the ARMCL types.
Would a sensible approach be a utility function that takes the type T and, using a switch statement together with something like std::is_same, returns the appropriate ARM Compute Library DataType for T? Or is there a different approach that might be more elegant?
I have been looking around the docs to see if this is already handled, but to no avail yet. If it isn't handled, then perhaps this question is not specific to ARMCL and is broader in scope.

Well... armcl types are, if I understand correctly, enum values.
So a possible approach is a template struct with full specializations, each carrying the appropriate value.
I mean... something as
template <typename>
struct typeMap;
template <>
struct typeMap<int>
{ static constexpr auto value = armcl::DataType::S8; };
template <>
struct typeMap<float>
{ static constexpr auto value = armcl::DataType::F32; };
// other cases
You can use it as follows
template <typename T>
void foo ()
{ bar(typeMap<T>::value); }
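For convenience, one could layer a C++14 variable template on top (a sketch, assuming the typeMap above; note also that S8 is an 8-bit type, so depending on your platform's int width, something like DataType::S32 may be the closer match for int):

template <typename T>
constexpr auto typeMap_v = typeMap<T>::value;

// hypothetical use when initialising a tensor holding data of type T:
// my_tensor.allocator()->init(armcl::TensorInfo(shape, 1, typeMap_v<T>));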

Related

Dynamic convertibility type trait

Is it possible to enumerate a list of types to which a type can be converted? Or would this require a technique similar to static reflection?
struct a {
operator int() const
{ return i; }
explicit operator float() const
{ return f; }
int i;
float f;
};
// enumerated type list for type 'a': tuple<int, float>
One of the things I would like to be able to do—which is related to this but with a somewhat narrower scope—is to check if a type is convertible to, let's say, an integral type, without having to explicitly list them.
The only way I'm able to do this now is to create a tuple-like type-list of all those integer types, throw it in std::is_convertible, and expand them within a std::conjunction. But I would really prefer a way which doesn't require me to write out all the types of a specific type class. I'm specifically looking for a solution that is compatible with C++17, but if that is not possible or simply too cumbersome, a C++20 solution is acceptable too.
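For reference, a minimal C++17 sketch of that "list them by hand" workaround (convertible_to_any_of is a made-up name, and the type list is deliberately abbreviated; std::disjunction expresses "convertible to at least one of them"):

#include <type_traits>

template <typename From, typename... To>
using convertible_to_any_of = std::disjunction<std::is_convertible<From, To>...>;

static_assert(convertible_to_any_of<char, short, int, long>::value);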
The first part of my question seems to be well answered by Igor Tandetnik and Barry. Enumerating all of the types to which a type can be converted would be undesirable for the reasons they have laid out. Enumerating the non-templated user-defined conversion operators of a type could be useful, but that would require something like static reflection, which isn't possible for the time being.
Regarding the second part of my question, it seems that there are some nuances involved in type conversions between arithmetic types. Since they are all convertible to each other, finding the type trait that fits my needs may require a different approach. I've decided to list a couple of examples to better describe the requirements of this type trait.
#include <type_traits>
#include <utility>
struct a {
operator int() const
{ return i; }
explicit operator float() const
{ return f; }
int i;
float f;
};
// this succeeds, but instead of explicitly writing out 'int' I would like to
// express my intent and write something like 'any_integral_type' (note: I
// understand that integral types might be too narrow to properly detect,
// so 'any_arithmetic_type' could be acceptable too) and I would
// like to do so without listing all of the integral types by hand
static_assert(std::is_convertible_v<a, int>);
struct b {
int i;
float f;
};
// the difference between type 'a' and type 'b' seems rather obvious and
// I would like to have a type trait that can express that. how to write
// 'assert that b cannot be converted to any type that belongs to the
// std::integral_types' without explicitly writing 'int'?
static_assert(not std::is_convertible_v<b, int>);
// another idea is to use the unary plus operator to force the implicit
// conversion, this way we don't have to be upfront about which type to convert
// to, but this runs into ambiguity issues when there is more than one viable
// conversion operator (e.g.: when operator float isn't marked explicit)
static_assert(std::is_integral_v<decltype(+std::declval<a>())>);
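To make that last idea concrete, here is a minimal C++17 sketch (converts_to_arithmetic is a made-up name; it assumes the a and b defined above) that reports false whenever the unary plus expression is ill-formed, e.g. because of ambiguity between several viable conversion operators:

#include <type_traits>
#include <utility>

template <typename T, typename = void>
struct converts_to_arithmetic : std::false_type {};

// chosen only when +t is well-formed; then checks the resulting type
template <typename T>
struct converts_to_arithmetic<T, std::void_t<decltype(+std::declval<T>())>>
    : std::is_arithmetic<decltype(+std::declval<T>())> {};

static_assert(converts_to_arithmetic<a>::value);   // via operator int
static_assert(!converts_to_arithmetic<b>::value);  // no viable conversion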
Is it possible to enumerate a list of types to which a type can be converted?
No. Such a list of types is infinite. For instance, your a is convertible to int and float, yes. But also short and double and char and so forth, as far as obvious things go.
But then also std::any because it's copyable. And std::optional<a>. And std::variant<a>. And then std::variant<a, T> for all types T that are not a or a const (even if a is convertible to T, like int). Which is an obviously infinite list, even by itself. And then std::variant<a, T1, T2>, etc.
So such a list of types is not merely long; it is infinite, and there is no way to enumerate it.
Or would this require a technique similar to static reflection?
I suspect what you actually are asking for is a very narrow question: given a type T, what are all of its conversion functions? For those conversion functions that are not function templates (you can't really enumerate template <typename T> operator T() const;, for instance), then yes -- static reflection would let you enumerate that list.
But note that that list is just going to be the list of types that T has conversion functions into. That list is not the list of types that T is convertible to. Just a subset thereof.

Variables that can be used as data types

Is there any way in C++ through which I can store data types (like int, char, std::string, etc.) in some particular kind of variable and then use that variable in place of a regular data type (for example, to declare other variables)?
For example:
T = some-function("int")
now std::vector<T> is equivalent to std::vector<int>?
You can use templates and decltype.
A minimal, working example based on your snippet:
#include <vector>
template<typename T>
T some_function() { return {}; }
int main() {
// t has type int
auto t = some_function<int>();
// vec has type std::vector<int> now
std::vector<decltype(t)> vec;
}
You can alias types (give them another name) with the using keyword:
using vi = std::vector<int>; // I recommend against such short names
// ...
vi some_vector_with_integers;
Of course this happens purely at compile time.
Wrapping such declarations in templates allows for compile-time programming:
template<int N>
using X = typename std::conditional<(N > 42), int, double>::type;
C++ is a statically typed language, which implies that types pretty much do not exist at runtime. A function's return type is completely determined by its parameter types (const char*) and cannot depend on parameter values ("int").
Computational flow can be influenced by types, e.g. via overloading - but not vice versa. As a result, you cannot "compute" a type by calling some function.
Instead you can use templates/decltype/auto to produce complex and context-dependent types in compile time, or use polymorphic types.
Polymorphic types do indeed have runtime-defined behavior: you can make your some-function return an abstract factory, and then use that factory to produce your objects - their concrete type would be unknown at compile time. Of course, you would still need to instantiate the vector with some static type - usually a pointer to the generic class (AbstractType*).
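A minimal sketch of that shape (all names here are invented for illustration):

#include <memory>
#include <vector>

struct AbstractType {
    virtual ~AbstractType() = default;
};

struct IntValue : AbstractType { int value = 0; };

struct AbstractFactory {
    virtual ~AbstractFactory() = default;
    virtual std::unique_ptr<AbstractType> create() const = 0;
};

struct IntFactory : AbstractFactory {
    std::unique_ptr<AbstractType> create() const override {
        return std::make_unique<IntValue>();  // concrete type hidden behind the base
    }
};

// the vector itself is still statically typed: it holds base-class pointers
std::vector<std::unique_ptr<AbstractType>> objects;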
The fact that you mention int, char and std::string hints that you probably don't want the whole polymorphic hierarchy and can manage with static types.
Here are some templates to determine the result type of calling a function. Notice that the function is not even called - again, the return type only depends on parameter types, not some computation.
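The linked templates aren't reproduced here, but a minimal C++11 sketch in the same spirit might look like this (call_result is a made-up name; the call expression lives only inside decltype, so f is never invoked):

#include <type_traits>
#include <utility>

template <typename F, typename... Args>
using call_result = decltype(std::declval<F>()(std::declval<Args>()...));

int f(double);
static_assert(std::is_same<call_result<decltype(&f), double>, int>::value,
              "return type deduced without calling f");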

Why does Boost MPL have integral constants?

Since you can take integral values as template parameters and perform arithmetic on them, what's the motivation behind boost::mpl::int_<> and other integral constants? Does this motivation still apply in C++11?
You can take integral values as template parameters, but you cannot take both types and non-type template parameters with a single template. Long story short, treating non-type template parameters as types allows them to be used with a myriad of things within MPL.
For instance, consider a metafunction find that works with types and looks for an equal type within a sequence. If you wished to use it with non-type template parameters you would need to reimplement new algorithm 'overloads', like a find_c for which you have to manually specify the type of the integral value. Now imagine you want it to work with mixed integral types, as the rest of the language does, or to mix types and non-types: you get an explosion of 'overloads' that also happen to be harder to use, since you have to specify the type of each non-type parameter everywhere.
This motivation does still apply in C++11.
This motivation will still apply to C++y and any other future version, unless we get some new rule that allows conversion from non-type template parameters to type template parameters: for instance, whenever you use 5 and the template requests a type, instantiating it with std::integral_constant<int, 5> instead.
tldr; Encoding a value as a type allows it to be used in far more places than a simple value. You can overload on types, you can't overload on values.
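A tiny illustration of that point, using the C++11 std::integral_constant (the standard's equivalent of mpl::int_):

#include <type_traits>

using five = std::integral_constant<int, 5>;  // the value 5, encoded as a type

template <typename T> struct wants_a_type {};

wants_a_type<five> ok;   // fine: five is a type
// wants_a_type<5> no;   // ill-formed: 5 is a value, not a type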
K-Ballo's answer is great.
There's something else I think is relevant though. The integral constant types aren't only useful as template parameters, they can be useful as function arguments and function return types (using the C++11 types in my examples, but the same argument applies to the Boost ones that predate them):
template<typename R, typename... Args>
std::integral_constant<std::size_t, sizeof...(Args)>
arity(R (*)(Args...))
{ return {}; }
This function takes a function pointer and returns a type telling you the number of arguments the function takes. Before we had constexpr functions there was no way to call a function in a constant expression, so to ask questions like "how many arguments does this function type take?" you'd need to return a type, and extract the integer value from it.
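For instance, a usage sketch (reusing the arity template above):

int f(int, double);
static_assert(decltype(arity(&f))::value == 2, "f takes two arguments");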
Even with constexpr in the language (which means the function above could just return sizeof...(Args); and that integer value would be usable at compile time) there are still good uses for integral constant types, e.g. tag dispatching:
template<typename T>
void frobnicate(T&& t)
{
frob_impl(std::forward<T>(t), std::is_copy_constructible<T>{});
}
This frob_impl function can be overloaded based on the integral_constant<bool, b> type passed as its second argument:
template<typename T>
void frob_impl(T&& t, std::true_type)
{
// do something
}
template<typename T>
void frob_impl(T&& t, std::false_type)
{
// do something else
}
You could try doing something similar by making the boolean a template parameter:
frob_impl<std::is_copy_constructible<T>::value>(std::forward<T>(t));
but it's not possible to partially specialize a function template, so you couldn't make frob_impl<true, T> and frob_impl<false, T> do different things. Overloading on the type of the boolean constant allows you to easily do different things based on the value of the "is copy constructible" trait, and that is still very useful in C++11.
Another place where the constants are useful is for implementing traits using SFINAE. In C++03 the conventional approach was to have overloaded functions that return two types with different sizes (e.g. an int and a struct containing two ints) and test the "value" with sizeof. In C++11 the functions can return true_type and false_type, which is far more expressive: a trait that tests "does this type have a member called foo?" can make the function indicating a positive result return true_type and the function indicating a negative result return false_type. What could be clearer than that?
As a standard library implementor I make very frequent use of true_type and false_type, because a lot of compile-time "questions" have true/false answers, but when I want to test something that can have more than two different results I will use other specializations of integral_constant.
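As a sketch of what such a C++11 trait can look like (names invented; this one detects a data member called foo):

#include <type_traits>
#include <utility>

template <typename T>
auto test_foo(int) -> decltype(std::declval<T>().foo, std::true_type{});
template <typename>
auto test_foo(...) -> std::false_type;

template <typename T>
using has_member_foo = decltype(test_foo<T>(0));

struct X { int foo; };
static_assert(has_member_foo<X>::value, "X has a member foo");
static_assert(!has_member_foo<int>::value, "int does not");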

C++ reflection how it is achieved

I know that C++ does not support reflection, but I went through the paper "Reflection support by means of template meta-programming" and did not understand how this is achieved. Would anybody have more details or examples of how this can be achieved in C++ using template meta-programming?
Here is an example of a struct that tests at compile time whether a type Obj has a public data member of type Type that is named "foo". It uses C++11 features. While it can be done using only C++03 features, I consider this approach superior.
First, we check if Obj is a class using std::is_class. If it is not a class, it cannot have data members so the test returns false. This is achieved with the partial template specialization below.
We will use SFINAE to detect whether the object contains the data member. We declare the struct helper, which has a template parameter of type "pointer to data member of type Type of the class Obj". Then we declare two overloaded versions of the static function test: the first, which returns a type indicating a failed test, accepts any parameter via the ellipsis. Note that the ellipsis has the lowest precedence in overload resolution. The second, which returns a type indicating success, accepts a pointer to the helper struct with template parameter &U::foo.
Now we check what a call to test with U bound to Obj returns if called with a nullptr, and typedef that to testresult. The compiler tries the second version of test first, since the ellipsis is tried last. If helper<&Obj::foo> is a legal type, which is only true if Obj has a public data member of type Type, then this overload is chosen and testresult will be std::true_type. If it is not a legal type, the overload is excluded from the list of possible candidates (SFINAE), so the remaining version of test, which accepts any parameter type, is chosen and testresult will be std::false_type. Finally, the static member value of testresult is assigned to our static member value, which indicates whether the test was successful or not.
One downside of that technique is that you need to know the name of the data member you are testing explicitly ("foo" in my example) so to do that for different names you would have to write a macro.
You can write similar tests to test if a type has a static data member with a certain name and type, if it has an inner type or typedef with a certain name, if it has a member function with a certain name that can be called with given parameter types and so on but that exceeds the scope of my time right now.
#include <type_traits>

template <typename Obj, typename Type, bool b = std::is_class<Obj>::value>
struct has_public_member_foo
{
template <Type Obj::*>
struct helper;
template <typename U>
static std::false_type test(...);
template <typename U>
static std::true_type test(helper<&U::foo> *);
typedef decltype(test<Obj>(nullptr)) testresult;
static const bool value = testresult::value;
};
template <typename Obj, typename Type>
struct has_public_member_foo<Obj, Type, false> : std::false_type { };
struct Foo
{
double foo;
};
struct Bar
{
int bar;
};
void stackoverflow()
{
static_assert(has_public_member_foo<Foo, double>::value == true, "oops");
static_assert(has_public_member_foo<Foo, int>::value == false, "oops");
static_assert(has_public_member_foo<Bar, int>::value == false, "oops");
static_assert(has_public_member_foo<double, int>::value == false, "oops");
}
It is possible to query certain characteristics of a type at compile time. The simplest case is probably the built-in sizeof operator. As MadScientist posted, you can also probe for specific members.
Within frameworks that use generic programming or template metaprogramming there are typically contracts on the synopsis of classes (formalized as Concepts). The STL, for instance, uses a member typedef result_type for function objects. boost::result_of (which later became std::result_of) extended this contract to allow a nested class template in order to compute the result type of a function object whose parameters are generic (in other words, one having an overloaded or template operator()). boost::result_of would then perform compile-time reflection to let client code determine the result type of a function pointer, an STL function object, or a "templated function object" uniformly, allowing more generic code that will "just work" in each case. Side note: C++11 can do better in this particular case - I used it as the example because it is both nontrivial and based on widespread components.
Further, it is possible to use client code that emits a data structure containing meta information deduced at compile time (or even passed in by client code) when registering a certain type. The framework code could, e.g., use the typeid operator to obtain a runtime representation of the type, generate call stubs for a specific constructor, the destructor, and a set of member functions (some might be optional), and store this information in a std::map keyed by std::type_index (or a hand-written wrapper around std::type_info for older versions of the language). At a later point in the program this information can be looked up given (a runtime representation of) some object's type, in order to run an algorithm that e.g. creates more instances of the same type (some with temporary lifetime), runs some operations, and tidies up.
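A compressed sketch of that registration idea (all names invented; real frameworks store richer stubs than a single factory function):

#include <functional>
#include <map>
#include <memory>
#include <typeindex>
#include <typeinfo>

struct Registry {
    std::map<std::type_index, std::function<std::shared_ptr<void>()>> makers;

    // record, at registration time, how to construct a T
    template <typename T>
    void register_type() {
        makers[std::type_index(typeid(T))] = [] { return std::make_shared<T>(); };
    }

    // later: create an instance from a runtime type handle
    std::shared_ptr<void> make(std::type_index t) const {
        return makers.at(t)();
    }
};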
Combining both techniques is very powerful, because performance-critical code can be generated at compile time using aggressive inlining, with possibly many variations generated from templates on the fly, interfacing by similar means with less time-critical parts at program run time.

Reason for using non-type template parameter instead of regular parameter?

In C++ you can create templates using a non-type template parameter like this:
#include <iostream>

template< int I >
void add( int& value )
{
value += I;
}
int main( int argc, char** argv )
{
int i = 10;
add< 5 >( i );
std::cout << i << std::endl;
}
Which prints "15" to cout. What is the use for this? Is there any reason for using a non-type template parameter instead of something more conventional like:
void add( int& value, int amount )
{
value += amount;
}
Sorry if this has already been asked (I looked but couldn't find anything).
There are many applications for non-type template arguments; here are a few:
You can use non-type arguments to implement generic types representing fixed-sized arrays or matrices. For example, you might parameterize a Matrix type over its dimensions, so you could make a Matrix<4, 3> or a Matrix<2, 2>. If you then define overloaded operators for these types correctly, you can prevent accidental errors from adding or multiplying matrices of incorrect dimensions, and can make functions explicitly communicate the expected dimensions of the matrices they accept. This prevents a huge class of runtime errors from occurring, by detecting the violations at compile time. A sketch of the idea (a hypothetical Matrix type; only the multiplication signature is shown):
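#include <array>
#include <cstddef>

template <std::size_t Rows, std::size_t Cols>
struct Matrix {
    std::array<double, Rows * Cols> data{};
};

// only defined when the inner dimensions agree, so mismatched
// products such as Matrix<4, 3> * Matrix<2, 2> fail to compile
template <std::size_t R, std::size_t K, std::size_t C>
Matrix<R, C> operator*(const Matrix<R, K>&, const Matrix<K, C>&) {
    return {};  // a real implementation would compute the product
}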
You can use non-type arguments to implement compile-time function evaluation through template metaprogramming. For example, here's a simple template that computes factorial at compile-time:
template <unsigned n> struct Factorial {
enum {
result = n * Factorial<n - 1>::result
};
};
template <> struct Factorial<0> {
enum {
result = 1
};
};
This allows you to write code like Factorial<10>::result to obtain, at compile time, the value of 10!. This can prevent extra code execution at runtime.
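For example, a compile-time check (using the Factorial template above):

static_assert(Factorial<10>::result == 3628800, "evaluated by the compiler");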
Additionally, you can use non-type arguments to implement compile-time dimensional analysis, which allows you to define types for kilograms, meters, seconds, etc. such that the compiler can ensure that you don't accidentally use kilograms where you meant meters, etc.
Hope this helps!
You're probably right in this case, but there are cases where you need to know this information at compile time:
But how about this?
template <std::size_t N>
std::array<int, N> get_array() { ... }
std::array needs to know its size at compile time (as it is allocated on the stack).
You can't do something like this:
std::array<int>(5);
Well, this is the typical choice between compile-time polymorphism and run-time polymorphism.
From the wording of your question it appears that you see nothing unusual in "ordinary" template parameters, while perceiving non-type parameters as something strange and/or redundant. In reality the same issue applies to template type parameters (what you called "ordinary" parameters) as well. Identical functionality can often be implemented either through polymorphic classes with virtual functions (run-time polymorphism) or through template type parameters (compile-time polymorphism). One can also ask why we need template type parameters, since virtually everything can be implemented using polymorphic classes.
In case of non-type parameters, you might want to have something like this one day
template <int N> void foo(char (&array)[N]) {
...
}
which cannot be implemented with a run-time value.
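A usage sketch (N is deduced from the array's declared size):

char buffer[16];
foo(buffer);  // N is deduced as 16 at compile time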
In that particular instance, there's not really any advantage. But using template parameters like that, you can do a lot of things you couldn't do otherwise, like effectively bind variables to functions (like boost::bind), specify the size of a compile-time array in a function or class (std::array being a ready example of that), etc.
For instance, with that function, you can write a function like:
template<typename T>
void apply(T f) {
    f(somenum);  // somenum: some int variable in scope (illustrative)
}
Then you can pass apply a function:
apply(&add<23>);
That's an extremely simple example, but it demonstrates the principle. More advanced applications include applying functions to every value in a collection, calculating things like the factorial of a number at compile time, and more.
You couldn't do any of that any other way.
There are lots of reasons, like doing template metaprogramming (check Boost.MPL). But there is no need to go that far: C++11's std::tuple has an accessor std::get<i> that needs to be indexed at compile time, since the result type depends on the index.
The most frequent use for a value parameter that I can think of is std::get<N>, which retrieves the Nth element of a std::tuple<Args...>. The second-most frequent use would be std::integral_constant and its main derivatives std::true_type and std::false_type, which are ubiquitous in any sort of trait classes. In fact, type traits are absolutely replete with value template parameters. In particular, there are SFINAE techniques which leverage a template of signature <typename T, T> to check for the existence of a class member.
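A classic sketch of that <typename T, T> technique (names invented; this one checks for a member function void serialize()):

#include <type_traits>

template <typename T, T>          // a type, then a value of that type
struct check;

template <typename C>
std::true_type  has_serialize_impl(check<void (C::*)(), &C::serialize>*);
template <typename>
std::false_type has_serialize_impl(...);

struct S { void serialize() {} };

static_assert(decltype(has_serialize_impl<S>(nullptr))::value, "S has serialize");
static_assert(!decltype(has_serialize_impl<int>(nullptr))::value, "int does not");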