I have a templated class whose template argument is the number of dimensions of the data points the class stores. This class has a specialized version MyClass<-1> that allows for dimensions not known at compile time.
How can I cast a specific class (say MyClass<2>) to this more general form?
To be a bit more concrete, here is an artificial example that shows the situation. (I use the Eigen library, but I suppose this should not matter for the general principle.)
using namespace Eigen;
template <int dim>  // dim must be a signed type so that -1 (Eigen's Dynamic) is representable
class MyClass {
public:
    // Some constructors...

    // A sample function:
    Matrix<double, dim, 1> returnPoint();

    // Some more functions here

private:
    Matrix<double, dim, 1> point;
};
Now, suppose I have the following code segment:
MyClass<2> *foo;
MyClass<Dynamic> *bar; // Dynamic is an Eigen constant, defined as -1
// Do something here
// How to do this:
bar = some_cast<MyClass<Dynamic> *>(foo);
Thinking about the problem, I suppose what I want is impossible to achieve without actually copying the values of point. Is anybody able to prove me wrong or confirm this assumption?
It is possible to achieve the cast without actually copying the values, but only if you have been careful to make it work.
When you instantiate a class template with two different sets of arguments, you get two distinct classes that are not related, unless you specifically define one to inherit from the other. For example:
namespace with_inheritance {
    template <class T, long sz>
    class vector : public vector<T, -1> {
        typedef vector<T, -1> base_t;
    public:
        vector() : base_t(sz) { }
    };

    template <class T>
    class vector<T, -1> {
        T* v_;
        size_t sz_;
    public:
        vector(size_t sz) : v_(new T[sz]), sz_(sz) { }
        ~vector() { delete [] v_; }
        T& operator[](size_t i)
        {
            if (i >= sz_) throw i;
            return v_[i];
        }
    };
} // with_inheritance
So in this case you can cast as in:
namespace wi = with_inheritance;
wi::vector<double, 10> v;
wi::vector<double, -1>* p = &v;
std::cout << (*p)[1] << '\n';
Without the inheritance relationship, casting between them will not be permitted. You can, however, use reinterpret_cast to get around the type system when you want to. But you have to be very careful that the objects have identical layout and invariants to make sure everything will work. As in:
namespace with_lots_of_care {
    template <class T, long sz>
    class vector {
        T* v_;
        size_t sz_;
    public:
        vector() : v_(new T[sz]), sz_(sz) { }
        ~vector() { delete [] v_; }
        T& operator[](size_t i)
        {
            if (i >= sz_) throw i;
            return v_[i];
        }
    };

    template <class T>
    class vector<T, -1> {
        T* v_;
        size_t sz_;
    public:
        vector(size_t sz) : v_(new T[sz]), sz_(sz) { }
        ~vector() { delete [] v_; }
        T& operator[](size_t i)
        {
            if (i >= sz_) throw i;
            return v_[i];
        }
    };
} // with_lots_of_care
And then cast as in:
namespace wc = with_lots_of_care;
wc::vector<double, 10> v;
wc::vector<double, -1>* p = reinterpret_cast<wc::vector<double, -1>*>(&v);
std::cout << (*p)[1] << '\n';
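A sanity check one might add before relying on such a cast (my own sketch, not part of the original answer): assert that the two instantiations agree on size and are standard-layout. Passing these checks does not make the reinterpret_cast formally well-defined, but it catches obvious layout mismatches.
#include <type_traits>

// Sketch only: these assertions rule out obvious layout differences between
// the fixed-size and the dynamic instantiation.
static_assert(sizeof(wc::vector<double, 10>) == sizeof(wc::vector<double, -1>),
              "sizes differ");
static_assert(std::is_standard_layout<wc::vector<double, 10>>::value,
              "fixed-size vector is not standard-layout");
static_assert(std::is_standard_layout<wc::vector<double, -1>>::value,
              "dynamic vector is not standard-layout");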
Not sure if this can be done using templates but I want to give it a try.
I have a template class which takes any struct, stores it and returns it. Additionally, I want an interface that resets the struct's data whenever requested.
#define MYDEFAULT {1,2,3}

template <typename ITEM, ITEM Default>
class myClass {
public:
    myClass(ITEM item) : _item(item) {}
    const ITEM* get() {
        return &_item;
    }
    void reset() {
        _item = Default;
    }
    ITEM _item;
};
// Set to default when instantiated
myClass<myStruct, MYDEFAULT> ABC(MYDEFAULT);
Of course that's not working at all, but what I want to achieve is the replacement of Default in reset(). I mean, it would work if _item were of type int.
How can this be realized?
EDIT: I want something like this:
template <typename Y, Y T>
class myclass {
public:
    void reset() {
        xxx = T;
    }
    Y xxx{10};
};

void test()
{
    myclass<int, 5> _myclass;
}
Initially xxx is 10, and after invoking reset it is 5. This works for int, so it seems it is just not possible for POD or class types?
EDIT2: It seems it is all about non-type template arguments. https://stackoverflow.com/a/2183121/221226
So there is no way around traits when using structs.
As a viable solution, you can use a trait class as shown in the following working example:
#include <cassert>

struct S {
    int i;
};

template<typename T>
struct Traits {
    static constexpr auto def() { return T{}; }
};

template<>
struct Traits<S> {
    static constexpr auto def() { return S{42}; }
};

template <typename ITEM>
class myClass {
public:
    myClass(): _item(Traits<ITEM>::def()) {}
    myClass(ITEM item): _item(item) {}

    const ITEM* get() {
        return &_item;
    }

    void reset() {
        _item = Traits<ITEM>::def();
    }

    ITEM _item;
};

int main() {
    myClass<S> ABC{};
    myClass<int> is;
    assert((ABC.get()->i == 42));
    assert((*is.get() == 0));
}
The basic trait uses the default constructor of the type ITEM.
You can then specialize it whenever you want a different defaulted value for a specific class.
The same can be accomplished with a factory function:
template<typename T>
constexpr auto def() { return T{}; }
template<>
constexpr auto def<S>() { return S{42}; }
Anyway, traits can easily provide more types and functions all at once.
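As an aside (my own addition, not part of the original answers): if C++20 is available, "structural" class types (roughly, literal types whose non-static data members are all public and non-mutable) are allowed as non-type template parameters, so the approach from the question can work directly for simple aggregates. A sketch, assuming the hypothetical myStruct is such a plain aggregate:
// C++20 sketch: relies on class-type non-type template parameters.
struct myStruct { int a, b, c; };   // assumed "structural" aggregate

template<typename ITEM, ITEM Default>
class myClass {
public:
    myClass(ITEM item) : _item(item) {}
    const ITEM* get() { return &_item; }
    void reset() { _item = Default; }
private:
    ITEM _item;
};

// Usage:
myClass<myStruct, myStruct{1, 2, 3}> ABC({1, 2, 3});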
You can maybe simulate it using a data structure with a data member of type std::array.
A minimal, working example follows:
#include <cstddef>
#include <array>
#include <cassert>

template<typename T, T... I>
struct S {
    S(): arr{ I... } {}

    S(const T (&val)[sizeof...(I)]) {
        for(std::size_t i = 0; i < sizeof...(I); ++i) {
            arr[i] = val[i];
        }
    }

    const T * get() {
        return arr.data();
    }

    void reset() {
        arr = { I... };
    }

private:
    std::array<T, sizeof...(I)> arr;
};

int main() {
    S<int, 1, 3, 5> s{{ 0, 1, 2 }};
    assert(s.get()[1] == 1);
    s.reset();
    assert(s.get()[1] == 3);
}
I'm not sure I got exactly what you are asking for, but the interface in the example is close to the one in the question and the implementation details should not affect the users of your class.
I made an N-dimensional structure with vectors and templates:
//----------------N-dimensional vector--------------------------------
template<int dim, typename T> class n_dim_vector {
public:
    typedef std::vector<typename n_dim_vector<dim - 1, T>::vector> vector;
};

template<typename T> class n_dim_vector<0, T> {
public:
    typedef T vector;
};
It can be instantiated with different dimension counts and is part of a class that represents a search space.
template<int dim, typename T> class n_dim_ssc {
private:
    typename n_dim_vector<dim, T>::vector searchspace;
};
My problem: I cannot get operator[] right to access searchspace properly, specifically the return type.
I tried:
template<typename V> std::vector<V>& operator[](unsigned i) {
    return searchspace[i];
}

T& operator[](unsigned i) {
    return searchspace[i];
}
at first, thinking the compiler would deduce typename V as whatever type searchspace contained at all but the last level. That's what T& operator[](unsigned i) was for.
But alas, it doesn't work this way, and I cannot work out how it would.
EDIT: Don't fear, I do not access empty memory; the structure is initialized and filled, I just didn't include the code for clarity's sake.
Also, I don't intend to access it with a single integer; I wanted to use searchspace[i][j]..[k]
The way to let the compiler deduce the return type is auto:
In C++14:
auto operator[](unsigned i) { return searchspace[i]; }
In C++11:
auto operator[](unsigned i) -> decltype(searchspace[i]) { return searchspace[i]; }
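One caveat worth adding to this answer (my note, not from the original): the C++14 auto version deduces the return type by value, so the caller gets a copy and writes through it never reach searchspace. If a reference is wanted without repeating decltype(searchspace[i]), C++14's decltype(auto) preserves it:
// C++14 sketch: decltype(auto) keeps the reference returned by the vector's
// operator[], so chained indexing modifies the underlying storage rather
// than a temporary copy.
decltype(auto) operator[](unsigned i) { return searchspace[i]; }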
I'm answering your comment: "Feel free to recommend something better, I'd appreciate it."
The following code shows one way to handle the multidimensional vector at once, i.e. non-recursively. It could be improved in several ways which I didn't consider for now (for instance, I wouldn't want to use and pass that many arrays but rather use variadic parameter lists. That, however, requires much more and more difficult code, so I'll let it be.)
#include <array>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

using std::size_t;

template<size_t Dim, typename T>
struct MultiDimVector
{
    std::array<size_t, Dim> Ndim;
    std::array<size_t, Dim> stride;
    std::vector<T> container;

    MultiDimVector(std::array<size_t, Dim> const& _Ndim) : Ndim(_Ndim), container(size())
    {
        stride[0] = 1;
        for (size_t i = 1; i < Dim; ++i)
        {
            stride[i] = stride[i - 1] * Ndim[i - 1];
        }
    }

    size_t size() const
    {
        return std::accumulate(Ndim.begin(), Ndim.end(), 1, std::multiplies<size_t>());
    }

    size_t get_index(std::array<size_t, Dim> const& indices) const
    {
        //here one could also use some STL algorithm ...
        size_t ret = 0;
        for (size_t i = 0; i < Dim; ++i)
        {
            ret += stride[i] * indices[i];
        }
        return ret;
    }

    T const& operator()(std::array<size_t, Dim> const& indices) const
    {
        return container[get_index(indices)];
    }
};
You can use it like
MultiDimVector<3, double> v({ 3, 2, 5 }); //initialize vector of dimension 3x2x5
auto a = v({0,1,0}); //get element 0,1,0
But as I wrote, the curly brackets suck, so I'd rewrite the whole thing using variadic templates.
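For what it's worth, here is a sketch of what that variadic accessor might look like as an additional member of MultiDimVector (my own guess at the intended rewrite, not code from the answer):
// Sketch: takes the indices directly, e.g. v(0, 1, 0), instead of a std::array.
template<typename... Is>
T& operator()(Is... indices)
{
    static_assert(sizeof...(Is) == Dim, "wrong number of indices");
    return container[get_index({ static_cast<size_t>(indices)... })];
}
With that member in place, the example above could be written as v(0, 1, 0) = 2.5; instead of passing braces.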
The problem with your approach is that you're not initializing any memory inside the vector and are just trying to return non-existent memory spots. Something along the lines of the following (WARNING: uncleaned and unrefactored code ahead):
#include <iostream>
#include <vector>

template<int dim, typename T> class n_dim_vector {
public:
    typedef std::vector<typename n_dim_vector<dim - 1, T>::vector> vector;
};

template<typename T> class n_dim_vector<0, T> {
public:
    typedef T vector;
};

template<int dim, typename T> class n_dim_ssc {
public:
    typename n_dim_vector<dim, T>::vector searchspace;

    n_dim_ssc() {}
    n_dim_ssc(typename n_dim_vector<dim, T>::vector space) : searchspace(space) {}

    n_dim_ssc<dim-1, T> operator[](std::size_t i) {
        if(searchspace.size() < ++i)
            searchspace.resize(i);
        return n_dim_ssc<dim-1, T>(searchspace[--i]);
    }

    typename n_dim_vector<dim, T>::vector get() {
        return searchspace;
    }
};

template<typename T> class n_dim_ssc<0, T> {
public:
    typename n_dim_vector<0, T>::vector searchspace;

    n_dim_ssc() {}
    n_dim_ssc(typename n_dim_vector<0, T>::vector space) : searchspace(space) {}

    typename n_dim_vector<0, T>::vector get() {
        return searchspace;
    }
};

int main(int argc, char** argv) {
    n_dim_ssc<0, int> ea;
    int a = ea.get();

    n_dim_ssc<1, int> ea2;
    auto dd2 = ea2[0].get();

    n_dim_ssc<2, int> ea3;
    auto dd3 = ea3[0][0].get();
}
This works with an accessor method (you can modify this as you want).
Anyway, I strongly have to agree with Kerrek: a contiguous memory space accessed in a multi-dimensional array fashion will both prove faster and be definitely more maintainable/easier to use and read.
I'd like to write an n-dimensional histogram class. It should be in the form of bins that contain other bins, etc., where each bin contains a min and max range and a pointer to the next dimension's bins.
A bin is defined like:
template<typename T>
class Bin {
    float minRange, maxRange;
    vector<Bin<either Bin or ObjectType>> bins;
};
This definition is recursive. So at run time the user defines the dimension of the histogram.
So if it's just 1-dimensional, then
Bin<Obj>
while for 3 dimensions
Bin<Bin<Bin<Obj>>>
Is this possible?
Regards
Certainly, C++11 has variable length parameter lists for templates. Even without C++11 you can use specialisation, if all your dimensions have the same type:
template <typename T, unsigned nest>
struct Bin {
    std::vector<Bin<T, (nest-1)> > bins;
};

template <typename T>
struct Bin<T, 0> {
    T content;
};
You can only specify the dimension at runtime to a certain degree. If it is bound by a fixed value you can select the appropriate type even dynamically. However, consider using a one-dimensional vector instead of a multi-dimensional jagged vector!
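To illustrate the "bound by a fixed value" remark, here is a sketch of my own (not from the answer) that dispatches a runtime dimension of at most 3 to the statically nested type, assuming a C++14 generic lambda on the caller's side:
#include <stdexcept>

// Sketch: maps a runtime dimension (bounded by 3) to the matching Bin type
// and hands an instance of it to a generic callable.
template <typename T, class F>
void with_bin(unsigned dim, F f) {
    switch (dim) {
        case 1: f(Bin<T, 1>{}); break;
        case 2: f(Bin<T, 2>{}); break;
        case 3: f(Bin<T, 3>{}); break;
        default: throw std::out_of_range("unsupported dimension");
    }
}

// Usage:
// with_bin<double>(2, [](auto bin) { /* bin is a Bin<double, 2> */ });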
To get the exact syntax you proposed, do:
template <typename T>
class Bin
{
    float minRange, maxRange;
    std::vector<T> bins;
};
And it should do exactly what you put in your question:
Bin< Bin< Bin<Obj> > > bins;
To do it dynamically (at runtime), I employed some polymorphism. The example is a bit complex. First, there is a base type.
template <typename T>
class BinNode {
public:
    virtual ~BinNode () {}
    typedef std::shared_ptr< BinNode<T> > Ptr;
    virtual T * is_object () { return 0; }
    virtual const T * is_object () const { return 0; }
    virtual Bin<T> * is_vector() { return 0; }
    const T & operator = (const T &t);
    BinNode<T> & operator[] (unsigned i);
};
BinNode figures out if the node is actually another vector, or the object.
template <typename T>
class BinObj : public BinNode<T> {
    T obj;
public:
    T * is_object () { return &obj; }
    const T * is_object () const { return &obj; }
};
BinObj inherits from BinNode, and represents the object itself.
template <typename T>
class Bin : public BinNode<T> {
    typedef typename BinNode<T>::Ptr Ptr;
    typedef std::map<unsigned, std::shared_ptr<BinNode<T> > > Vec;
    const unsigned dimension;
    Vec vec;
public:
    Bin (unsigned d) : dimension(d) {}
    Bin<T> * is_vector() { return this; }
    BinNode<T> & operator[] (unsigned i);
};
Bin is the container of BinNodes (a map keyed by index, so entries can be created lazily).
template <typename T>
inline const T & BinNode<T>::operator = (const T &t) {
    if (!is_object()) throw 0;
    return *is_object() = t;
}
Allows assignment to a BinNode if it is actually the object.
template <typename T>
BinNode<T> & BinNode<T>::operator[] (unsigned i) {
    if (!is_vector()) throw 0;
    return (*is_vector())[i];
}
Allows the BinNode to be indexed if it is a vector.
template <typename T>
inline BinNode<T> & Bin<T>::operator[] (unsigned i)
{
    if (vec.find(i) != vec.end()) return *vec[i];
    if (dimension > 1) vec[i] = Ptr(new Bin<T>(dimension-1));
    else vec[i] = Ptr(new BinObj<T>);
    return *vec[i];
}
Returns the indexed item if it is present, otherwise creates the appropriate entry, depending on the current dimension depth. Adding a redirection operator for pretty printing:
template <typename T>
std::ostream &
operator << (std::ostream &os, const BinNode<T> &n) {
    if (n.is_object()) return os << *(n.is_object());
    return os << "node:" << &n;
}
Then you can use Bin like this:
int dim = 3;
Bin<float> v(dim);
v[0][1][2] = 3.14;
std::cout << v[0][1][2] << std::endl;
It doesn't currently support 0 dimension, but I invite you to try to do it yourself.
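In case it helps, one possible direction for that exercise (my own sketch, not part of the answer): a small factory that hands back a BinObj directly when the requested dimension is 0 and a Bin otherwise.
// Sketch: returns the polymorphic node appropriate for the requested depth.
template <typename T>
typename BinNode<T>::Ptr make_bin(unsigned dim) {
    if (dim == 0) return typename BinNode<T>::Ptr(new BinObj<T>);
    return typename BinNode<T>::Ptr(new Bin<T>(dim));
}

// Usage:
// auto node = make_bin<float>(0);
// *node = 3.14f;   // assigns straight to the contained object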
I'm trying to reduce the number of template function instantiations, but am running into a snag.
Suppose we have the following class (I know it's not optimized: this is done on purpose to illustrate the issue):
//class no_inherit is implemented the same way as class base (below).
//This is done to illustrate the issue I'm seeing.
template<typename T, size_t SIZE>
class no_inherit
{
private:
    T m_data[SIZE];
    const size_t m_size;
public:
    no_inherit() : m_size(SIZE) {}
    T& operator[](size_t i)
        {return m_data[i];}
    inline size_t size() const
        {return m_size;}
};
The following function:
template<typename T>
void huge_func(T& v)
{
    //..do lots of stuff with v. For example
    for(size_t i = 0; i < v.size(); ++i)
        v[i] = v[i] + i;
    //...do lots more with v
}
And the following code:
int main()
{
    no_inherit<int, 4> v1;
    no_inherit<int, 2> v2;
    huge_func(v1);
    huge_func(v2);
}
huge_func() would get instantiated twice:
void huge_func(no_inherit<int, 4>& v);
void huge_func(no_inherit<int, 2>& v);
Since huge_func() is, well, huge, I'm trying to reduce the instantiation count by taking one of the template parameters and turning it into a dynamic parameter by creating the following class hierarchy:
//Base class only has 1 template parameter.
template<typename T>
class base
{
private:
    T *m_data;
    const size_t m_size; //hold child's templated size parameter.
protected:
    inline base(T* data, size_t size): m_data(data), m_size(size) {}
public:
    T& operator[](size_t i)
        {return m_data[i];}
    inline size_t size() const
        {return m_size;}
};

//Child class has two template parameters
template<typename T, size_t SIZE>
class inherit: public base<T>
{
private:
    T m_data[SIZE];
public:
    //Pass template parameter to base class
    inherit() : base<T>(m_data, SIZE) {}
};
And I call huge_func() as follows:
int main()
{
    inherit<int, 4> v1;
    inherit<int, 2> v2;

    //make sure only one instantiation of huge_func() is made
    //by using the same type.
    base<int> &v1b = v1;
    base<int> &v2b = v2;

    huge_func(v1b);
    huge_func(v2b);
}
This would only instantiate a single huge_func() function:
void huge_func(base<int>& v);
And thus would decrease the code size.
But ALAS! The code size increases when I use the class hierarchy. How is this possible?
Even more bizarre: if I have the following code,
int main()
{
    inherit<int, 4> v1;
    inherit<int, 2> v2;
    huge_func(v1);
    huge_func(v2);
}
The code size is the same as calling huge_func(v1b) and huge_func(v2b).
What is the compiler doing?
First of all, if huge_func is indeed "huge", you likely would benefit from splitting it up into several reusable smaller functions.
That aside, you can also template-ize it:
// Note: SIZE must be size_t here to match no_inherit's non-type parameter,
// otherwise template argument deduction fails.
template<typename T, size_t SIZE> void huge_func(no_inherit<T, SIZE>& v)
{
    // function implementation goes here
}
Then you are implementing it once, and you maintain your flat class structure.
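A further option in the same spirit (my suggestion, not part of the original answer): keep the heavy body in a helper that is templated only on the element type and operates on a pointer plus length, and forward to it from a thin wrapper. Then the large code is instantiated once per element type, and only the trivial wrapper is stamped out per SIZE.
#include <cstddef>

// Heavy body: instantiated once per element type T, independent of SIZE.
template<typename T>
void huge_func_impl(T* data, std::size_t n)
{
    //..do lots of stuff with data/n. For example
    for (std::size_t i = 0; i < n; ++i)
        data[i] = data[i] + static_cast<T>(i);
    //...do lots more with data/n
}

// Thin wrapper: one tiny instantiation per (T, SIZE) that should inline away.
template<typename T, std::size_t SIZE>
void huge_func(no_inherit<T, SIZE>& v)
{
    huge_func_impl(&v[0], v.size());
}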