CUDA device functor factory - C++

Let's say there is a C++ functor:
class Dummy
{
public:
    int operator() (const int a, const int b)
    {
        return a + b;
    }
};
This functor doesn't use anything that can't execute on the GPU, but it can't be called from a CUDA kernel because there is no __device__ declaration in front of operator(). I would like to create a factory class that converts such functors into device-compatible functors that can be called within a CUDA kernel. For example:
Dummy d;
auto cuda_d = CudaFunctorFactory::get(d);
Can this be accomplished in any way? Feel free to add some constraints as long as it can be accomplished...

The one word answer is no, this isn't possible.
There is no getting around the fact that in the CUDA compilation model, any method code contained in a class or structure which will execute on the GPU must be statically declared and defined at compile time. Somewhere in that code, there has to be a __device__ function available during compilation, otherwise the compilation fails. That is a completely non-negotiable cornerstone of CUDA as it exists today.
A factory design pattern can't sidestep that requirement. Further, I don't think it is possible to implement a factory for GPU instances in host code, because there still isn't any way of directly accessing __device__ function pointers from the host, and no way of directly instantiating a GPU class from the host, because the constructor must execute on the GPU. At the moment, the only program units which the host can run on the GPU are __global__ functions (i.e. kernels), and these cannot be contained within classes. In CUDA, GPU classes passed by argument must be concretely defined; virtual methods aren't supported (and there is no RTTI). That eliminates all the paths I can think of to implement a factory in CUDA C++ for the GPU.
In summary, I don't see any way to make magic that can convert host code to device code at runtime.

Related

How to integrate CUDA into an existing class structure?

I have a working CPU-based implementation of a simple deep learning framework where the main components are nodes of a computation graph which can perform computations on tensors.
Now I need to extend my implementation to the GPU. I would like to use the existing class structure and only extend its functionality to the GPU; however, I'm not sure if that's even possible.
Most of the classes have methods that work on and return tensors such as:
tensor_ptr get_output();
where tensor_ptr is simply a std::shared_ptr to my tensor class. Now what I would like to do is add a GPU version of each such method. The idea I had in mind was to define a struct in a separate file tensor_gpu.cuh as follows:
struct cu_shape {
    int n_dims;
    int x, y, z;
    int len;
};
struct cu_tensor {
    __device__ float * array;
    cu_shape shape;
};
and then the previous function would be mirrored by:
cu_tensor cu_get_output();
The problem seems to be that the .cuh file gets treated as a regular header, is compiled by the default C++ compiler, and gives the error:
error: attribute "device" does not apply here
on the line with the definition of __device__ float * array.
I am aware that you cannot mix CUDA and pure C++ code, so I planned to hide all the CUDA runtime API calls in .cu files behind functions declared in .h files. The problem is that I wanted to store the device pointers within my classes and then pass those to the CUDA-calling functions.
This way I could still use all of my existing object structure and only modify the initialization and computation parts.
If a regular C++ class cannot touch anything with the __device__ flag, then how can you even integrate CUDA code into C++ code?
Can you only use CUDA runtime calls and keywords literally just in .cu files?
Or is there some smart way to hide from the C++ compiler the fact that it is dealing with CUDA pointers?
Any insight is deeply appreciated!
EDIT: There was a misunderstanding on my part. You don't need to put the __device__ flag on the member, and you'll still be able to use it as a pointer to device memory. If you have something valuable to add about good practices for CUDA integration, or want to clarify something else, don't hesitate!
'__' is reserved for implementation purposes. That is why the Nvidia implementation can use __device__, while other "regular" C++ implementations have their own reserved symbols.
In hindsight Nvidia could have designed a better solution, but that is not going to help you here.

How to properly implement an execute-function-on-each-element with CUDA?

I have a class representing one or several containers of objects. The class offers a function to run a callback for each of the elements. A simple implementation could look like:
struct MyData {
    Foo* foo;
    void doForAllFoo(std::function<void(Foo)> fct) {
        for ( /* all indices i in foo */ ) {
            fct(foo[i]);
        }
    }
};
Driving code:
MyData d = MyData(...);
TypeX param1 = create_some_param();
TypeY param2 = create_some_more_param();
d.doForAllFoo([&](Foo f) {my_function(f, param1, param2);});
I think this is a good solution for flexible callbacks on a container.
Now I'd like to parallelize this with CUDA. I'm not quite sure about what is allowed with lambdas in CUDA and I'm also not sure about compilation for __device__ and __host__.
I can (and will probably have to) change MyData, but I'd like to have no trace of the CUDA background in the driving code, except that I have to allocate memories in a CUDA-accessible way of course.
I think a minimal example would be very helpful.
Before you start to write a C-style CUDA kernel function, you could check the Thrust library. It is part of CUDA and provides a high-level abstraction for simple GPU algorithm development.
Here is a code example showing the use of function objects and lambda expressions with Thrust.
https://github.com/thrust/thrust/blob/master/examples/lambda.cu
Even with Thrust, you still need to use __device__ and __host__ to ask the compiler to generate device code and host code for you. Since there's no place to put them in a standard C++ lambda expression, you will probably need to write longer code.

Using external library classes in CUDA project

I am trying to enhance a small C++ project with CUDA.
My project is using a custom library's classes and functions for example Matrix3d, Vector3d, Plane2d etc. They are mostly geometric objects.
When I try to use my code in the device (either __host__ __device__ functions or a kernel) all the library functions/objects are considered as host code and I get multiple warnings and errors for example error: identifier "Plane3d::~Plane3d" is undefined in device code
Is there a way to use my library on device as well? How is it done?
I don't have experience on CUDA and C++ (I have only used CUDA with simple C code without classes) so I don't get the strategy very well.
Is there a method to avoid changing the library source code? It is possible to change the library's code but it would be better if I could avoid it.
Thanks a lot.
There is no particular problem with using C++ classes in CUDA. The object model is only slightly different from standard C++.
Any class or structure data members are automatically defined in whichever memory space (host or device) the class or structure is instantiated in. What is not automatic is the code generation for member functions and operators of classes and structures. The programmer must explicitly define and compile those for whichever memory space the object will be instantiated in. This latter requirement means you must have both __device__ and __host__ definitions of every function you call on the object. This includes the constructor and destructor, the latter being the source of the error you show in your question.
You don't need to change the source code - what you need is to write an adapter.
CUDA kernels work with low-level structures, e.g. double*, double**, double*** (or their float equivalents), as well as with the built-in CUDA types.
CUDA cannot work directly on memory allocated outside CUDA anyway (only on memory allocated on the graphics card, not regular RAM), so you will have to copy your data into graphics memory.
If you provide methods which give access to the buffers used by your types, you can copy them to the graphics card using the CUDA memory-copy functions (in one go if your types use contiguous memory, or in chunks if not), and then process them in kernels as double*** using simple indexing.

CUDA kernel as member function of a class

I am using CUDA 5.0 and a Compute Capability 2.1 card.
The question is quite straightforward: Can a kernel be part of a class?
For example:
class Foo
{
private:
    //...
public:
    __global__ void kernel();
};

__global__ void Foo::kernel()
{
    //implementation here
}
If not, then is the solution to make a wrapper function that is a member of the class and calls the kernel internally?
And if yes, will it have access to the private attributes like a normal private member function?
(I'm not just trying it and see what happens because my project has several other errors right now and also I think it's a good reference question. It was difficult for me to find reference for using CUDA with C++. Basic functionality examples can be found but not strategies for structured code.)
Let me leave CUDA dynamic parallelism out of the discussion for the moment (i.e. assume compute capability 3.0 or prior).
Remember that __global__ is used for CUDA functions that will (only) be called from the host (but execute on the device). If you instantiate this object on the device, it won't work. Furthermore, for device-accessible private data to be available to the member function, the object would have to be instantiated on the device.
So you could have a kernel invocation (i.e. mykernel<<<blocks,threads>>>(...);) embedded in a host object member function, but the kernel definition (i.e. the function definition with the __global__ decorator) would normally precede the object definition in your source code. And as stated already, such a methodology could not be used for an object instantiated on the device. It would also not have access to ordinary private data defined elsewhere in the object. (It may be possible to come up with a scheme for a host-only object that creates device data, using pointers in global memory, that would then be accessible on the device, but such a scheme seems quite convoluted to me at first glance.)
Normally, device-usable member functions are preceded by the __device__ decorator. In this case, all the code in the device member function executes within the thread that called it.
This question gives an example (in my edited answer) of a C++ object with a member function callable from both the host and the device, with appropriate data copying between host and device objects.

Serializing function objects

Is it possible to serialize and deserialize a std::function, a function object, or a closure in general in C++? How? Does C++11 facilitate this? Is there any library support available for such a task (e.g., in Boost)?
For example, suppose a C++ program has a std::function which is needed to be communicated (say via a TCP/IP socket) to another C++ program residing on another machine. What do you suggest in such a scenario?
Edit:
To clarify, the functions which are to be moved are supposed to be pure and side-effect-free. So I do not have security or state-mismatch problems.
A solution to the problem is to build a small embedded domain specific language and serialize its abstract syntax tree.
I was hoping that I could find some language/library support for moving a machine-independent representation of functions instead.
Yes for function pointers and closures. Not for std::function.
A function pointer is the simplest — it is just a pointer like any other so you can just read it as bytes:
template <typename _Res, typename... _Args>
std::string serialize(_Res (*fn_ptr)(_Args...)) {
    return std::string(reinterpret_cast<const char*>(&fn_ptr), sizeof(fn_ptr));
}

template <typename _Res, typename... _Args>
_Res (*deserialize(std::string str))(_Args...) {
    return *reinterpret_cast<_Res (**)(_Args...)>(const_cast<char*>(str.c_str()));
}
But I was surprised to find that even without recompilation, the address of a function will change on every invocation of the program. Not very useful if you want to transmit the address. This is due to ASLR (address space layout randomization), which you can turn off on Linux by starting your program with setarch $(uname -m) -LR your_program.
Now you can send the function pointer to a different machine running the same program, and call it! (This does not involve transmitting executable code. But unless you are generating executable code at run-time, I don't think you are looking for that.)
A lambda function is quite different.
std::function<int(int)> addN(int N) {
auto f = [=](int x){ return x + N; };
return f;
}
The value of f will be the captured int N. Its representation in memory is the same as an int! The compiler generates an unnamed class for the lambda, of which f is an instance. This class has operator() overloaded with our code.
The class being unnamed presents a problem for serialization. It also presents a problem for returning lambda functions from functions. The latter problem is solved by std::function.
std::function as far as I understand is implemented by creating a templated wrapper class which effectively holds a reference to the unnamed class behind the lambda function through the template type parameter. (This is _Function_handler in functional.) std::function takes a function pointer to a static method (_M_invoke) of this wrapper class and stores that plus the closure value.
Unfortunately, everything is buried in private members, and the size of the closure value is not stored. (It does not need to be, because the lambda function knows its own size.)
So std::function does not lend itself to serialization, but works well as a blueprint. I followed what it does, simplified it a lot (I only wanted to serialize lambdas, not the myriad other callable things), saved the size of the closure value in a size_t, and added methods for (de)serialization. It works!
No.
C++ has no built-in support for serialization and was never conceived with the idea of transmitting code from one process to another, let alone from one machine to another. Languages that can do so generally feature both an IR (a machine-independent intermediate representation of the code) and reflection.
So you are left with writing yourself a protocol for transmitting the actions you want, and the DSL approach is certainly workable... depending on the variety of tasks you wish to perform and the need for performance.
Another solution would be to go with an existing language. For example, the Redis NoSQL database embeds a Lua engine and can execute Lua scripts; you could do the same and transmit Lua scripts over the network.
No, but there are some restricted solutions.
The most you can hope for is to register functions in some sort of global map (e.g. keyed by strings) that is common to the sending code and the receiving code (whether on different computers, or before and after serialization).
You can then serialize the string associated with the function and look it up on the other side.
As a concrete example the library HPX implements something like this, in something called HPX_ACTION.
This requires a lot of protocol and it is fragile with respect to changes in code.
But after all this is no different from something that tries to serialize a class with private data. In some sense the code of the function is its private part (the arguments and return interface is the public part).
What leaves you a sliver of hope is that, depending on how you organize the code, these "objects" can be global or common, and if all goes right they are available during serialization and deserialization through some kind of predefined runtime indirection.
This is a crude example:
serializer code:
// common:
// common:
class C{
    double d;
public:
    C(double d) : d(d){}
    double operator()(double x) const{return d*x;}
};
C c1{1.};
C c2{2.};
std::map<std::string, C*> const m{{"c1", &c1}, {"c2", &c2}};
// :common
int main(int argc, char** argv){
    C* f = (argc == 2)?&c1:&c2;
    (*f)(5.); // yields 5 or 10 depending on the runtime args
    serialize(f); // somehow write "c1" or "c2" to a file
}
deserializer code:
// common:
// common:
class C{
    double d;
public:
    C(double d) : d(d){}
    double operator()(double x) const{return d*x;}
};
C c1{1.};
C c2{2.};
std::map<std::string, C*> const m{{"c1", &c1}, {"c2", &c2}};
// :common
int main(){
    C* f;
    deserialize(f); // somehow read "c1" or "c2" and assign the pointer via the translation "map"
    (*f)(3.); // yields 3 or 6 depending on the code of the **other** run
}
(code not tested).
Note that this forces a lot of common and consistent code, but depending on the environment you might be able to guarantee this.
The slightest change in the code can produce a hard to detect logical bug.
Also, I played here with global objects (which can be used with free functions), but the same can be done with scoped objects; what becomes trickier is how to establish the map locally (#include the common code inside a local scope?).