I have to create a class that uses an array. The array has a length that is defined (with an actual number) at object construction, meaning I know its size at compile time.
So I tried to create a template where I could pass in the size and have the object constructed with that size. This way I could still use plain arrays.
But this code doesn't compile.
use of class template requires template argument list
What am I doing wrong?
h:
#pragma once
#include <map>

class BinaryMemoryReader;

template <int bytesPerValue>
class ChunkBlockChannel
{
public:
    std::map<unsigned int, unsigned char[bytesPerValue * 1024]> cbcLayer;
    std::map<unsigned int, unsigned char[bytesPerValue]> jj;

    void read(BinaryMemoryReader &reader);

    ChunkBlockChannel();
    ~ChunkBlockChannel();
};
cpp:
#include "ChunkBlockChannel.h"
#include "BinaryMemoryReader.h"
#include "ChunkBlockChannel.h"
#include "BinaryMemoryReader.h"

template <int bytesPerValue>
void ChunkBlockChannel::read(BinaryMemoryReader &reader) {
    for (int i = 0; i < 64; ++i) {
        bool flag;
        reader.read<bool>(flag);
        std::vector<unsigned char> cbcLayerData;
        std::vector<unsigned char> jjData;
        if (!flag) {
            unsigned char data[bytesPerValue * 1024];
            reader.readBytes(data, bytesPerValue * 1024);
            cbcLayer[i] = data;
        } else {
            unsigned char data[bytesPerValue];
            reader.readBytes(data, bytesPerValue);
            jj[i] = data;
        }
    }
}

template <int bytesPerValue>
ChunkBlockChannel::ChunkBlockChannel() {}

template <int bytesPerValue>
ChunkBlockChannel::~ChunkBlockChannel() {}
There may be other problems, but one thing stands out: you've forgotten the template argument list in the method definitions.
I mean:
template <int bytesPerValue>
void ChunkBlockChannel<bytesPerValue>::read(BinaryMemoryReader &reader) {
// remember this ^^^^^^^^^^^^^^^
and
template <int bytesPerValue>
ChunkBlockChannel<bytesPerValue>::ChunkBlockChannel() {}
// and this ^^^^^^^^^^^^^^^
and
template <int bytesPerValue>
ChunkBlockChannel<bytesPerValue>::~ChunkBlockChannel() {}
// and this ^^^^^^^^^^^^^^^
I've got a communication class which can handle 2 different protocols. The protocol is chosen by an enum template parameter.
Now one of the two protocols only supports 2-byte values, while the other supports 4-byte values.
Communicating is done via a template member function. How can I change the static_assert so it accepts 2 or 4 bytes depending on the class specialization (i.e. the chosen protocol)?
#include <iostream>
#include <cstdint>

enum Protocol { P1, P2 };

template <Protocol P>
class Communicator {
public:
    template <typename T>
    void communicate(T arg) {
        static_assert(sizeof(arg) <= sizeof(float), "argument size must be <= 4 bytes");
        float value = (float)arg;
        uint8_t length = sizeof(arg); // length in bytes
        _communicate(value, length);
    }

    void _communicate(float f, uint8_t length) {
        std::cout << f;
    }
};
EDIT: I can just pick 1 answer as correct. And although I learned the most from Roy, I picked MM's answer because it keeps things as simple as possible. (upvoted both though)
There are several ways to approach this... here is one:
template<Protocol P>
constexpr size_t value_size();
template<> constexpr size_t value_size<P1>() { return 2; }
template<> constexpr size_t value_size<P2>() { return 4; }

// ... inside your other function
static_assert(sizeof(arg) <= value_size<P>(), "argument too large for this protocol");
(The functions are marked constexpr so the result is usable inside a static_assert.)
Here's a different approach
#include <iostream>
#include <cstdint>

// enum Protocol { P1, P2 }; // let's use types with traits instead.
struct P1
{
constexpr static const int protocol_id = 1;
//^^ or maybe use an enum
// type, this may need refactoring
// to fit your code and style.
using data_type = uint16_t; //< or whatever your 2-byte datatype is.
// can add more data traits and even static member functions here
// you could also add protocol specific non-static data if you add a
// P1 data member to your Communicator class.
// A struct with traits is a very good entry point for many compile-time
// polymorphism techniques.
};
struct P2
{
constexpr static const int protocol_id = 2;
using data_type = uint32_t; //< or whatever your 4-byte datatype is.
};
template <typename _Proto>
class Communicator {
public:
using data_type = typename _Proto::data_type;
constexpr static const int proto_id = _Proto::protocol_id; // no 'typename' here: it's a value, not a type
public:
void communicate(data_type arg) // we can use the trait from either P1 or P2
{
float value = (float)arg;
uint8_t length = sizeof(data_type); //length in bytes
_communicate(value, length);
}
void _communicate(float f, uint8_t length)
{
std::cout << f;
}
};
Here's code to convert an enum (if that's what you already have) to a class:
enum protocol_t { p1, p2 };
template <protocol_t _p> struct Protocol {};
// simply derive the template specialization from the desired type
template <> struct Protocol<p1> : P1 {};
// simply derive the template specialization from the desired type
template <> struct Protocol<p2> : P2 {};
You can also derive from P1, P2 to help organize code.
struct P1
{
// ... + specialized functions:
void _communicate(data_type x) { ... } // that you'll call from Communicator
void _communicate(const data_type* begin, const data_type* end) { ... }
};
struct P2 { /* same thing */ };
template <typename _Proto>
class Communicator : _Proto // < can control visibility here.
{ ... };
CUB provides an iterator for texture references, the implementation of which is readily accessible.
Since I couldn't figure out how to implement template-able texture references myself - they "can only be declared as a static global variable" - I am now trying to understand how it's done in CUB. But some of it is beyond my C++ knowledge, and I haven't been able to find the answers elsewhere (then again, I don't really know what to search for).
Specifically:
Is the unnamed namespace surrounding IteratorTexRef significant? I can only think that it is to limit IteratorTexRef::TexId::ref to file/translation unit scope.
What is the purpose of IteratorTexRef? It only wraps TexId, but removing it results in unintelligible (to me) compile-time errors.
This code, a stripped-down version of the linked-to implementation, compiles and runs:
#include <thrust/device_vector.h>

namespace {

template <typename T>
struct IteratorTexRef
{
    template <int UNIQUE_ID>
    struct TexId
    {
        // Assume T is a valid texture word size.
        typedef texture<T> TexRef;
        static TexRef ref;

        static __device__ T fetch(ptrdiff_t offset)
        {
            return tex1Dfetch(ref, offset);
        }
    };
};

template <typename T>
template <int UNIQUE_ID>
typename IteratorTexRef<T>::template TexId<UNIQUE_ID>::TexRef
IteratorTexRef<T>::template TexId<UNIQUE_ID>::ref;

} // Anonymous namespace

template <typename T, int UNIQUE_ID = 0>
class TextureRefIterator
{
private:
    typedef typename IteratorTexRef<T>::template TexId<UNIQUE_ID> TexId;
    ptrdiff_t tex_offset;

public:
    __device__ T operator[](int i) const
    {
        return TexId::fetch(this->tex_offset + i);
    }

    cudaError_t bind(
        const T* const ptr,
        size_t bytes = size_t(-1))
    {
        size_t offset;
        cudaError_t state = cudaBindTexture(&offset, TexId::ref, ptr, bytes);
        this->tex_offset = (ptrdiff_t) (offset / sizeof(T));
        return state;
    }
};

template <typename TexIter>
__global__ void kernel(TexIter iter)
{
    int a = iter[threadIdx.x];
    printf("tid %d, a %d\n", threadIdx.x, a);
}

template <typename T>
void launch_kernel(T* d_in)
{
    TextureRefIterator<T> tex_iter;
    tex_iter.bind(d_in);
    kernel<<<1, 32>>>(tex_iter);
}

int main()
{
    thrust::device_vector<float> d_in(32, 1);
    launch_kernel(thrust::raw_pointer_cast(d_in.data()));
}
The closest I got was something similar to the below, based on how one would normally access a static template member. For clarity, the below simply eliminates IteratorTexRef from the above:
#include <thrust/device_vector.h>

namespace {

template <typename T, int UNIQUE_ID>
struct TexId
{
    // Assume T is a valid texture word size.
    typedef texture<T> TexRef;
    static TexRef ref;

    static __device__ T fetch(ptrdiff_t offset)
    {
        return tex1Dfetch(ref, offset);
    }
};

template <typename T, int UNIQUE_ID>
typename TexId<T, UNIQUE_ID>::TexRef TexId<T, UNIQUE_ID>::ref;

} // Anonymous namespace

template <typename T, int UNIQUE_ID = 0>
class TextureRefIterator
{
private:
    typedef TexId<T, UNIQUE_ID> TexId;
    ptrdiff_t tex_offset;

public:
    __device__ T operator[](int i) const
    {
        return TexId::fetch(this->tex_offset + i);
    }

    cudaError_t bind(
        const T* const ptr,
        size_t bytes = size_t(-1))
    {
        size_t offset;
        cudaError_t state = cudaBindTexture(&offset, TexId::ref, ptr, bytes);
        this->tex_offset = (ptrdiff_t) (offset / sizeof(T));
        return state;
    }
};

template <typename TexIter>
__global__ void kernel(TexIter iter)
{
    int a = iter[0];
    printf("tid %d, a %d\n", threadIdx.x, a);
}

template <typename T>
void launch_kernel(T* d_in)
{
    TextureRefIterator<T> tex_iter;
    tex_iter.bind(d_in);
    kernel<<<1, 32>>>(tex_iter);
}

int main()
{
    thrust::device_vector<float> d_in(32, 1);
    launch_kernel(thrust::raw_pointer_cast(d_in.data()));
}
It gives these somewhat esoteric compile-time errors. (Compiled with nvcc iter.cu and CUDA 7.0):
In file included from tmpxft_000057d4_00000000-4_test2.cudafe1.stub.c:1:0:
/tmp/tmpxft_000057d4_00000000-4_test2.cudafe1.stub.c:30:3737: error: macro "__text_var" passed 3 arguments, but takes just 2
dIfLi0EE3refE,::_NV_ANON_NAMESPACE::TexId<float, (int)0> ::ref), 1, 0, 0);__cudaReg
^
/tmp/tmpxft_000057d4_00000000-4_test2.cudafe1.stub.c:30:1: error: macro "__device__text_var" passed 3 arguments, but takes just 2
static void __nv_cudaEntityRegisterCallback(void **__T2202){__nv_dummy_param_ref(__
^
/tmp/tmpxft_000057d4_00000000-4_test2.cudafe1.stub.c:30:1: error: macro "__name__text_var" passed 3 arguments, but takes just 2
That compile error is due to the generated code using macros whose arguments contain template types: the commas inside the template argument lists make the preprocessor think there are more arguments than the macro takes. I fixed this by patching the crt/host_runtime header and making the cpp parameter of those macros (__text_var, __device__text_var, and __name__text_var) variadic. In other words, replace cpp with cpp....
In continuation of this topic, Variadic template heterogeneous container, I would like to ask the following. Assume we have several classes, each with two members that are dynamic arrays. Now suppose there is a sequence of objects of these classes, packed into a heterogeneous container. In this sequence, one array member is the "output" vector and the other is the "input" vector, which is a pointer to the corresponding output array of the preceding object. The sequence is implemented as a variadic template class:
// Classes whose objects are members of the sequence
template<int NumberElements>
struct A
{
    A() : output(new float[NumberElements]) {} // allocate output
    ~A() { delete[] output; }
    float *input;  // input vector: pointer to the output vector of the preceding object
    float *output; // output vector (size NumberElements) of the current member
};

template<int NumberElements>
struct B
{
    B() : output(new float[NumberElements]) {}
    ~B() { delete[] output; }
    float *input;
    float *output;
};

template<int NumberElements>
struct C
{
    C() : output(new float[NumberElements]) {}
    ~C() { delete[] output; }
    float *input;
    float *output;
};

// Container
template<typename... Arg>
struct HeterogenousContainer
{
    HeterogenousContainer(); // do something to set up the sequence
    std::tuple<Arg...> elements;
};
How can I properly allocate memory (via new/malloc) for the output vectors, and set up the input pointers to the preceding output vectors? For example, say I write:
HeterogenousContainer<A<5>, B<7>, C<9>> sequence;
I want the input of the first member of the sequence to be nullptr, the input of the second to point to the output of the first, and so on. How do I implement this correctly?
Firstly, don't mess around with manual (de)allocation if you can avoid it. For a simple array, you can:
#include <array>
#include <tuple>

template<int NumberElements>
struct A
{
    float *input;
    std::array<float, NumberElements> output;
};
You just need to recurse down the tuple, and specialize for the termination case. I'm doing it backwards since the first element is your special case.
namespace detail {

template <int N> struct Connector;

template <>
struct Connector<0> {
    template <typename... TL> static void connect(std::tuple<TL...> &t) {
        std::get<0>(t).input = nullptr;
    }
};

template <int N> struct Connector {
    template <typename... TL> static void connect(std::tuple<TL...> &t) {
        std::get<N>(t).input = &std::get<N-1>(t).output.front();
        Connector<N-1>::connect(t);
    }
};

template <typename... TL> void connect(std::tuple<TL...> &t) {
    Connector<sizeof...(TL)-1>::connect(t);
}

} // namespace detail
and use it something like this:
template <typename... Arg> struct HeterogenousContainer {
    std::tuple<Arg...> elements;
    HeterogenousContainer() { detail::connect(elements); }
};
Inspired by Useless' answer (no pun intended), I came up with this:
template<typename... Arg>
struct HeterogenousContainer
{
    std::tuple<Arg...> elements;

    void init(std::integral_constant<std::size_t, 0>)
    {
        std::get<0>(elements).input = nullptr;
    }

    template <std::size_t index = sizeof...(Arg) - 1>
    void init(std::integral_constant<std::size_t, index> = {})
    {
        // .data() because output is the std::array member from the answer above
        std::get<index>(elements).input = std::get<index-1>(elements).output.data();
        init(std::integral_constant<std::size_t, index-1>{});
    }

    HeterogenousContainer()
        : elements{}
    {
        init();
    }
};
I have the following problem:
Suppose I have some basic counter class Counter. And suppose we also have some set of classes that can be counted; let's name a couple of them class CountedA and class CountedB.
Now, every class which can be counted (such as CountedA and CountedB) has the following statically declared parts: one enum and one int, which act as parts of the counted data.
For example, its declaration could look the following way:
enum CountedType { A, B };
template <CountedType Type, int N>
class Counted { };
// Now we can declare 'CountedA' and 'CountedB'
typedef Counted<A, 25> CountedA;
typedef Counted<B, 7> CountedB;
Now, the declaration of the counter:
// C++0x variadic or simply bunch of 'typename XX' definitions for C++03
template <typename T0, typename T1, typename ...>
class Counter
{
// I don't know how to implement this
// for now!
int GetTotalN() { ... }
// Retrieve the corresponding type
// so that GetTypeAt<0> returns
// enum from 'T0'
template <int Pos>
CountedType GetTypeAt() { ... }
};
I want to be able to write something like:
class RealCounter : public Counter<CountedA, CountedB> { };
And use it the following way:
RealCounter counter;
int n = counter.GetTotalN();
CountedType type = counter.GetTypeAt<0>();
Now, I'm pretty sure this can be done. But what's the best way of implementing it? (Don't ask me why I would need such a crazy kind of thing :)
Does boost::mpl offer something for this case?
Thank you.
Small update:
In this particular example, GetTotalN() should return 25 + 7.
If we add, for example, typedef Counted<C, 2> CountedC, then the result for
RealCounter : public Counter<CountedA, CountedB, CountedC>
should become 25 + 7 + 2.
Here's C++03 code which works (for up to 10 template arguments). The main trick is giving class Counter a multiple inheritance, and passing objects of type Counter to function templates which must select a base class. The actual summation is done recursively.
Counter.hpp
enum CountedType { A, B };

template <CountedType Type, int N>
struct Counted {};

struct DummyCounted {};

template <int Pos, typename T>
struct IndexedType {};

template <unsigned int Terms>
struct PartialSum
{
    template <typename CounterT>
    static int getSum(const CounterT& ctr)
    { return PartialSum<Terms-1>::getSum(ctr) + ctr.template GetNAt<Terms>(); }
};

template <> struct PartialSum<0U>
{
    template <typename CounterT>
    static int getSum(const CounterT& ctr)
    { return ctr.template GetNAt<0>(); }
};

template <typename T0, typename T1=DummyCounted,
          typename T2=DummyCounted, typename T3=DummyCounted,
          typename T4=DummyCounted, typename T5=DummyCounted,
          typename T6=DummyCounted, typename T7=DummyCounted,
          typename T8=DummyCounted, typename T9=DummyCounted>
class Counter :
    public IndexedType<0, T0>, public IndexedType<1, T1>,
    public IndexedType<2, T2>, public IndexedType<3, T3>,
    public IndexedType<4, T4>, public IndexedType<5, T5>,
    public IndexedType<6, T6>, public IndexedType<7, T7>,
    public IndexedType<8, T8>, public IndexedType<9, T9>
{
public:
    static int GetTotalN() {
        return PartialSum<9>().getSum( Counter() );
    }

    template <int Pos>
    static CountedType GetTypeAt() { return _getTypeAt<Pos>( Counter() ); }

    template <int Pos>
    static int GetNAt() { return _getNAt<Pos>( Counter() ); }

private:
    template <int Pos, CountedType Type, int N>
    static CountedType _getTypeAt(const IndexedType<Pos, Counted<Type,N> >&)
    { return Type; }

    template <int Pos, CountedType Type, int N>
    static int _getNAt(const IndexedType<Pos, Counted<Type,N> >&)
    { return N; }

    template <int Pos>
    static int _getNAt(const IndexedType<Pos, DummyCounted>&)
    { return 0; }
};
Counter.cpp
#include "Counter.hpp"
#include <iostream>
typedef Counted<A, 25> CountedA;
typedef Counted<B, 7> CountedB;
class RealCounter : public Counter<CountedA, CountedB> {};
int main()
{
RealCounter counter;
int n = counter.GetTotalN();
CountedType type = counter.GetTypeAt<0>();
std::cout << "n is " << n
<< "\ntype check is " << (type == A) << std::endl;
return 0;
}
Output:
n is 32
type check is 1
That C++0x variadic template stuff looks interesting, but I haven't taken a good look at it yet. But I do think in C++0x, all this example's functions (except main of course) could be constexpr.
I'm not sure why you need to embed those parameters in the template arguments rather than simply in a constructor, since they are all the same types for each "derived" CountedA/B type.
Anyway, you can embed the resulting types into a std::tuple as shown in the link below (see the Message class for an example). Then create a variadic template function, similar to the applyTuple version in the link, that adds all your integer arguments and returns the final result once all arguments have been unrolled. As for returning the enum value for the item at "Pos", simply call get( tuple ).getEnum() or .value.
How do I expand a tuple into variadic template function's arguments?
Can anyone help me with this code? I'm trying to specialise a method. At the moment it doesn't work with one specialisation (1), but ultimately I'd like to have lots of specialisations (2, 3, 4, 5, etc.).
class X
{
public:
    // declaration
    template< int FLD >
    void set_native( char *ptr, unsigned int length );

    // specialisations
    template<> void set_native< 1 >( char *ptr, unsigned int length )
    {
    }
};
The error messages I'm getting are..
x.cpp:13: error: explicit specialization in non-namespace scope 'class X'
x.cpp:13: error: template-id 'set_native<1>' for 'void set_native(char*, unsigned int)' does not match any template declaration
x.cpp:13: error: invalid function declaration
As Benoit proposed, you have to specialize the member function in the surrounding namespace:
struct X {
template<int N> void f() {}
};
template<> void X::f<1>() {} // explicit specialization at namespace scope
This is because of §14.7.3 (C++03):
An explicit specialization shall be declared in the namespace of which the template is a member, or, for member templates, in the namespace of which the enclosing class or enclosing class template is a member.
VC however doesn't conform to the standard in that regard and thus creates some portability headaches.
Try the following
class X
{
public:
    // declaration
    template< int FLD >
    void set_native( char *ptr, unsigned int length );
};

// specialisations
template<> void X::set_native< 1 >( char *ptr, unsigned int length )
{
}
If that does not work, try putting a templated class behind set_native:
template<int FLD> struct SetNative;

class X
{
public:
    // declaration
    template< int FLD >
    void set_native( char *ptr, unsigned int length )
    { return SetNative<FLD>()(ptr, length); }
};

template<> struct SetNative<1>
{
    void operator()( char *ptr, unsigned int length ) { /*...*/ }
};