CUDA function callable by either the device or host - C++

I have a reusable function in some CUDA code that needs to be called from both the device and the host. Is there an appropriate qualifier for this?
e.g. what's the correct definition for func1 in this case:
int func1(int a, int b) {
    return a + b;
}

__global__ void devicecode(float *A) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    A[i] = func1(i, i);
}

int main() {
    // Normal cuda memory set-up
    // Call func1 from inside main:
    int j = func1(2, 4);
    // Normal cuda memory copy / program run / retrieve data
}
So far I can only get this to work by defining the function twice: once explicitly for the device and once for the host. Is there a better way?

From the CUDA Programming Guide:
The __device__ and __host__ qualifiers can be used together, however, in which case the function is compiled for both the host and the device.
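For example, here is a minimal sketch applying both qualifiers to func1 from the question, with the memory set-up stripped down to the bare minimum:
// One definition, compiled for both host and device.
__host__ __device__ int func1(int a, int b) {
    return a + b;
}

__global__ void devicecode(float *A) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    A[i] = func1(i, i);          // device-side call
}

int main() {
    float *A;
    cudaMalloc(&A, 32 * sizeof(float));
    devicecode<<<1, 32>>>(A);
    int j = func1(2, 4);         // the same function called on the host
    cudaDeviceSynchronize();
    cudaFree(A);
    return j == 6 ? 0 : 1;
}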

Related

Can't use my template class in cuda kernel

I thought I knew how to write clean CUDA code, until I tried to make a simple template class and use it in a simple kernel.
I've been troubleshooting for days, and every single thread I've visited made me feel a little more stupid.
For error checking I used this
Here is my class.h:
#pragma once
template <typename T>
class MyArray
{
public:
    const int size;
    T *data;

    __host__ MyArray(int size);                 //gpuErrchk(cudaMalloc(&data, size * sizeof(T)));
    __device__ __host__ T GetValue(int);        //return data[i]
    __device__ __host__ void SetValue(T, int);  //data[i] = val;
    __device__ __host__ T& operator()(int);     //return data[i];
    ~MyArray();                                 //gpuErrchk(cudaFree(data));
};
template class MyArray<double>;
The relevant content of class.cu is in the comments. If you think the whole thing is relevant I'd be happy to add it.
Now for the main file:
__global__ void test(MyArray<double> array, double *data, int size)
{
    int j = threadIdx.x;
    //array.SetValue(1, j);   //doesn't work
    //array(j) = 1;           //doesn't work
    //array.data[j] = 1;      //doesn't work
    data[j] = 1;              //This does work!
    printf("Reach this code\n");
}

int main(int argc, char **argv)
{
    MyArray<double> x(20);
    test<<<1, 20>>>(x, x.data, 20);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());
}
When I say "doesn't work", I mean that the program stops there (before reaching the printf) without outputting any error. Plus I get the following error both from cudaDeviceSynchronize and from cudaFree:
an illegal memory access was encountered
What I can't understand is that there should be no issue with memory management since sending the array directly to the kernel works fine. So why doesn't it work when I send a class and try to access the classes data? And why do I receive no warning or error message when clearly my code bumped into some error?
Here is the output of nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
(Editorial note: there is quite a bit of disinformation in the comments on this question, so I have assembled an answer as a community wiki entry.)
There is no particular reason why a template class cannot be passed as an argument to a kernel. There are some limitations which need to be clearly understood before doing so:
1. CUDA kernel arguments are, for all intents and purposes, always passed by value. Pass by reference is supported only under an extremely limited set of circumstances (the argument in question must be stored in managed memory), which does not apply here.
2. As a result of (1), POD arguments just work, because they are trivially copyable and rely on no special behaviour.
3. Classes are different, in that when you pass a class by value you are implicitly invoking copy construction or move construction semantics. That means that classes passed by value as kernel arguments must be trivially copy constructible. There is no way to run non-trivial copy constructors on the device as part of a kernel launch.
4. CUDA further requires that classes do not contain virtual members.
5. Although the <<< >>> kernel launch syntax looks like a simple function call, it isn't. There are several layers of boilerplate abstraction and an API call between what you write in host code and what the toolchain actually emits on the host side. This means that there are several copy-construction operations between your code and the GPU. If you do something like put a cudaFree call in your destructor, you should assume it will be called as part of the function call sequence which launches a kernel, when one of those copies falls out of scope. You do not want that. (A minimal sketch illustrating points 2 and 5 follows below.)
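To make points 2 and 5 concrete, here is a minimal sketch of a trivially copyable aggregate that is safe to pass by value; Params and scale_kernel are hypothetical names, not part of the question's code:
#include <cstdio>

// A trivially copyable aggregate (no user-defined copy constructor or
// destructor), so it satisfies point 3 and can be passed by value (point 2).
// Crucially, there is no destructor calling cudaFree, so the host-side copies
// made during the launch sequence are harmless (point 5).
struct Params {
    int     n;
    double  scale;
    double *data;   // raw device pointer, copied by value with the struct
};

__global__ void scale_kernel(Params p) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < p.n)
        p.data[i] *= p.scale;
}

int main() {
    Params p;
    p.n = 32;
    p.scale = 2.0;
    cudaMalloc(&p.data, p.n * sizeof(double));
    cudaMemset(p.data, 0, p.n * sizeof(double));
    scale_kernel<<<1, 32>>>(p);
    cudaDeviceSynchronize();
    cudaFree(p.data);
    printf("done\n");
    return 0;
}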
You did not show how the class member functions were actually implemented, so it is impossible to say why the various permutations hinted at in your code comments did or did not work, beyond noting that passing the raw pointer to the kernel works because the pointer is a trivially copyable POD value, whereas the class almost certainly was not.
Here is a simple, complete example showing how to make this work:
$cat classy.cu
#include <vector>
#include <iostream>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}

template <typename T>
class MyArray
{
public:
    int len;
    T *data;

    __device__ __host__ void SetValue(T val, int i) { data[i] = val; };
    __device__ __host__ int size() { return sizeof(T) * len; };

    __host__ void DevAlloc(int N) {
        len = N;
        gpuErrchk(cudaMalloc(&data, size()));
    };

    __host__ void DevFree() {
        gpuErrchk(cudaFree(data));
        len = -1;
    };
};

__global__ void test(MyArray<double> array, double val)
{
    int j = threadIdx.x;
    if (j < array.len)
        array.SetValue(val, j);
}

int main(int argc, char **argv)
{
    const int N = 20;
    const double val = 5432.1;

    gpuErrchk(cudaSetDevice(0));
    gpuErrchk(cudaFree(0));

    MyArray<double> x;
    x.DevAlloc(N);

    test<<<1, 32>>>(x, val);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());

    std::vector<double> y(N);
    gpuErrchk(cudaMemcpy(&y[0], x.data, x.size(), cudaMemcpyDeviceToHost));
    x.DevFree();

    for(int i=0; i<N; ++i) std::cout << i << " = " << y[i] << std::endl;

    return 0;
}
which compiles and runs like so:
$ nvcc -std=c++11 -arch=sm_53 -o classy classy.cu
$ cuda-memcheck ./classy
========= CUDA-MEMCHECK
0 = 5432.1
1 = 5432.1
2 = 5432.1
3 = 5432.1
4 = 5432.1
5 = 5432.1
6 = 5432.1
7 = 5432.1
8 = 5432.1
9 = 5432.1
10 = 5432.1
11 = 5432.1
12 = 5432.1
13 = 5432.1
14 = 5432.1
15 = 5432.1
16 = 5432.1
17 = 5432.1
18 = 5432.1
19 = 5432.1
========= ERROR SUMMARY: 0 errors
(CUDA 10.2/gcc 7.5 on a Jetson Nano)
Note that I have included host side functions for allocation and deallocation which do not interact with the constructor and destructor. Otherwise the class is extremely similar to your design and has the same properties.

Tensorflow GPU new op memory allocation

I am trying to create a new tensorflow GPU op following the instructions on their website.
Looking at their example, it seems they feed a C++ pointer directly into the CUDA kernel without allocating device memory and copying the contents of the host pointer to the device pointer.
From what I understand of CUDA, you always have to allocate memory on the device and then use device pointers inside the kernels.
What am I missing? I checked that input_tensor.flat<T>().data() returns a regular C++ pointer. Here is a copy of the code I am referring to:
// kernel_example.cu.cc
#ifdef GOOGLE_CUDA
#define EIGEN_USE_GPU
#include "example.h"
#include "tensorflow/core/util/cuda_kernel_helper.h"
using namespace tensorflow;
using GPUDevice = Eigen::GpuDevice;
// Define the CUDA kernel.
template <typename T>
__global__ void ExampleCudaKernel(const int size, const T* in, T* out) {
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < size;
       i += blockDim.x * gridDim.x) {
    out[i] = 2 * ldg(in + i);
  }
}
// Define the GPU implementation that launches the CUDA kernel.
template <typename T>
void ExampleFunctor<GPUDevice, T>::operator()(
    const GPUDevice& d, int size, const T* in, T* out) {
  // Launch the cuda kernel.
  //
  // See core/util/cuda_kernel_helper.h for example of computing
  // block count and thread_per_block count.
  int block_count = 1024;
  int thread_per_block = 20;
  ExampleCudaKernel<T>
      <<<block_count, thread_per_block, 0, d.stream()>>>(size, in, out);
}
// Explicitly instantiate functors for the types of OpKernels registered.
template struct ExampleFunctor<GPUDevice, float>;
template struct ExampleFunctor<GPUDevice, int32>;
#endif // GOOGLE_CUDA
When you look at these code lines on https://www.tensorflow.org/extend/adding_an_op you will see that the allocation is done in kernel_example.cc:
void Compute(OpKernelContext* context) override {
  // Grab the input tensor
  const Tensor& input_tensor = context->input(0);

  // Create an output tensor
  Tensor* output_tensor = NULL;
  OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                   &output_tensor));

  // Do the computation.
  OP_REQUIRES(context, input_tensor.NumElements() <= tensorflow::kint32max,
              errors::InvalidArgument("Too many elements in tensor"));
  ExampleFunctor<Device, T>()(
      context->eigen_device<Device>(),
      static_cast<int>(input_tensor.NumElements()),
      input_tensor.flat<T>().data(),
      output_tensor->flat<T>().data());
}
In context->allocate_output(...) they hand over a reference to the output Tensor pointer, which is then allocated. The context knows whether it is running on the GPU or the CPU and allocates the tensor accordingly, either on the host or on the device. The pointer handed over to the CUDA kernel then simply points to the actual data within the Tensor class.
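For comparison, here is a minimal stand-alone sketch (not TensorFlow code; DoubleKernel is a hypothetical name whose body mirrors ExampleCudaKernel above, with a plain load in place of ldg) of the explicit allocate/copy/launch sequence you would write in plain CUDA:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void DoubleKernel(const int size, const float* in, float* out) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < size;
         i += blockDim.x * gridDim.x) {
        out[i] = 2 * in[i];
    }
}

int main() {
    const int size = 1024;
    float h_in[size], h_out[size];
    for (int i = 0; i < size; ++i) h_in[i] = float(i);

    // In plain CUDA you allocate device memory and copy the inputs yourself.
    // Inside TensorFlow the context allocates tensors on the right device and
    // the input data already lives there, so the op just receives raw device
    // pointers.
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, size * sizeof(float));
    cudaMalloc(&d_out, size * sizeof(float));
    cudaMemcpy(d_in, h_in, size * sizeof(float), cudaMemcpyHostToDevice);

    DoubleKernel<<<32, 32>>>(size, d_in, d_out);

    cudaMemcpy(h_out, d_out, size * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h_out[10] = %f\n", h_out[10]);   // expect 20.0

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}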

Copy huge structure of arrays to GPU

I need to transform an existing SPH (Smoothed Particle Hydrodynamics) code into one that can run on a GPU.
Unfortunately, it has a lot of data structures that I need to copy from the CPU to the GPU. I have already looked around on the web and thought I had done the right thing with my copying code, but unfortunately I get an error (something about an unhandled exception).
When I opened the debugger, I saw that no information is being passed to the variables that should be copied to the GPU. It just says "The memory could not be read".
So here is an example of one data structure that needs to be copied to the GPU:
__device__ struct d_particle_data
{
float Pos[3]; /*!< particle position at its current time */
float PosMap[3]; /*!< initial boundary particle postions */
float Mass; /*!< particle mass */
float Vel[3]; /*!< particle velocity at its current time */
float GravAccel[3]; /*!< particle acceleration due to gravity */
}*d_P;
and I pass it to the GPU with the following:
cudaMalloc((void**)&d_P, N*sizeof(sph_particle_data));
cudaMemcpy(d_P, P, N*sizeof(d_sph_particle_data), cudaMemcpyHostToDevice);
The data structure P looks the same as the data structure d_P. Can anybody help me?
EDIT
So, here's a pretty small part of that code:
First, the headers I have to use in the code:
Allvars.h: Variables that I need on the host
struct particle_data
{
float a;
float b;
}
*P;
proto.h: Header with all the functions
extern void main_GPU(int N, int Ntask);
Allvars_gpu.h: all the variables that have to be on the GPU
__device__ struct d_particle_data
{
float a;
float b;
}
*d_P;
So now I call the .cu file from the .cpp file:
hydra.cpp:
#include <stdio.h>
#include <cuda_runtime.h>
extern "C" {
#include "proto.h"
}
int main(void) {
int N_gas = 100; // Number of particles
int NTask = 1; // Number of CPUs (Code has MPI-stuff included)
main_GPU(N_gas,NTask);
return 0;
}
Now the action takes place in the .cu file:
hydro_gpu.cu:
#include <cuda_runtime.h>
#include <stdio.h>
extern "C" {
#include "Allvars_gpu.h"
#include "allvars.h"
#include "proto.h"
}
__device__ void hydro_evaluate(int target, int mode, struct d_particle_data *P) {
    int c = 5;
    float a, b;
    a = P[target].a;
    b = P[target].b;
    P[target].a = a + c;
    P[target].b = b + c;
}

__global__ void hydro_particle(struct d_particle_data *P) {
    int i = threadIdx.x + blockIdx.x*blockDim.x;
    hydro_evaluate(i, 0, P);
}

void main_GPU(int N, int Ntask) {
    int Blocks;
    cudaMalloc((void**)&d_P, N*sizeof(d_particle_data));
    cudaMemcpy(d_P, P, N*sizeof(d_particle_data), cudaMemcpyHostToDevice);
    Blocks = (N+N-1)/N;
    hydro_particle<<<Blocks,N>>>(d_P);
    cudaMemcpy(P, d_P, N*sizeof(d_particle_data), cudaMemcpyDeviceToHost);
    cudaFree(d_P);
}
The really short answer is probably not to declare *d_P as a static __device__ symbol. Those cannot be passed as device pointer arguments to cudaMalloc, cudaMemcpy, or kernel launches, and your use of __device__ is both unnecessary and incorrect in this example.
If you make that change, your code might start working. Note that I lost interest in trying to actually compile your MCVE code some time ago, and there might well be other problems, but I'm too bored with this question to look for them. This answer has mostly been added to get this question off the unanswered queue for the CUDA tag.
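For illustration, here is a minimal sketch of that change, with d_P as an ordinary host-scope pointer rather than a __device__ symbol; the struct is reduced to the two-member version from the EDIT, and the same struct is used on both host and device for simplicity:
#include <cuda_runtime.h>

// Same two-member struct on both host and device for simplicity.
struct d_particle_data { float a; float b; };

__global__ void hydro_particle(d_particle_data *P) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    P[i].a += 5.0f;
    P[i].b += 5.0f;
}

int main() {
    const int N = 100;
    d_particle_data hostP[N] = {};     // stand-in for the host-side P array

    d_particle_data *d_P = nullptr;    // ordinary host-scope pointer, no __device__
    cudaMalloc(&d_P, N * sizeof(d_particle_data));
    cudaMemcpy(d_P, hostP, N * sizeof(d_particle_data), cudaMemcpyHostToDevice);

    hydro_particle<<<1, N>>>(d_P);

    cudaMemcpy(hostP, d_P, N * sizeof(d_particle_data), cudaMemcpyDeviceToHost);
    cudaFree(d_P);
    return 0;
}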

How to access a class from one cuda kernel in the next kernel

I have a dev variable which I used to allocate space on the device for a class defined in a header.
Neu *dev_NN;
cudaStatus = cudaMalloc((void**)&dev_NN, sizeof(Neu));
Then I call a kernel which initialises the class on the GPU.
KGNN<<<1, threadsPerBlock>>>(dev_LaySze, dev_NN);
in the kernel
__global__ void KGNN(int * dev_LaySze, Neu * NN)
{
...
NN = Neu(dev_LaySze[0], dev_LaySze[1], dev_LaySze[2]);
}
After the return of this kernel I want to use another kernel to input data to class methods and retrieve output data (the allocators and copies are already done and work), such as
__global__ void KGFF(double *dev_inp, double *dev_outp, int *DataSize)
{
int i = threadIdx.x;
...
NN.Analyse(dev_inp, dev_outp, DataSize );
}
The second kernel knows nothing about the class that was created. As you would expect NN is unrecognised. How do I access the first NN without re-creating the class and re-initialising it? The second kernel has to be called several times, remembering the changes it made to the class variables earlier. I don't want to use the class with the CPU, only the GPU, and I don't want to pass it back and forth each time.
I don't think this has anything to do with CUDA, actually. I believe a similar problem would be observed if you tried this in ordinary C++ (assuming the pointer to NN is not a global variable).
The key aspect of the solution, as pointed out by Park Young-Bae, is simply to pass the pointer to the allocated space for NN to both kernels. There were a few other changes that I think needed to be made to what you have shown, according to my understanding of what you are trying to do (since you haven't posted complete code). Here's a fully worked example:
$ cat t635.cu
#include <stdio.h>
class MC {
    int md;
public:
    __host__ __device__ int get_md() { return md; }
    __host__ __device__ MC(int val) { md = val; }
};

__global__ void kernel1(MC *d){
    *d = MC(3);
}

__global__ void kernel2(MC *d){
    printf("val = %d\n", d->get_md());
}

int main(){
    MC *d_obj;
    cudaMalloc(&d_obj, sizeof(MC));
    kernel1<<<1,1>>>(d_obj);
    kernel2<<<1,1>>>(d_obj);
    cudaDeviceSynchronize();
    return 0;
}
$ nvcc -arch=sm_20 -o t635 t635.cu
$ ./t635
val = 3
$
The other changes I suggest:
in your first kernel, you're passing a pointer (NN) (for which you have presumably made a device allocation), and then you are creating an object and copying that object to the allocated space. In that case I think you need:
*NN = Neu(dev_LaySze[0], dev_LaySze[1], dev_LaySze[2]);
in your second kernel, if NN is a pointer, we must use:
NN->Analyse(dev_inp, dev_outp, DataSize );
I have made those two changes to my posted example. Again, I think this is all just C++ mechanics, not anything specific to CUDA.

Implicit constructor in CUDA kernel call

I'm trying to pass some POD arguments to a kernel whose parameters are non-POD types with non-explicit constructors. The idea behind this is: allocate some memory on the host, pass that memory to the kernel, and have it encapsulated in the objects without the user having to do that step explicitly.
The constructors are marked as __device__ code, but they are not called when passing the parameters, and I can't figure out why.
My question is not really related about how should I do the thing, but trying to understand what's happening behind the scenes.
Here an example (I'm using CUDA 5 with a GPU of capability 2.1, hence the printf).
#include <stdio.h>
struct Test {
    __device__ Test() {
        printf("Default\n");
        _n = 0;
    }
    __device__ Test(int n) {
        printf("Construct %d\n", n);
        _n = n;
    }
    __device__ Test(const Test &t) {
        printf("Copy constr %d\n", t._n);
        _n = t._n;
    }
    __device__ Test &operator=(const Test &t) {
        printf("Assignment %d\n", t._n);
        _n = t._n;
        return *this;
    }
    __device__ int calc() const {
        printf("Calculating %d\n", threadIdx.x + 10 * _n);
        return threadIdx.x + 10 * _n;
    }
    int _n;
};

__global__ void dosome(Test a, Test b) {
    printf("Kernel data %d %d\n", a._n, b._n);
    a.calc();
    b.calc();
}

int main(int argc, char **argv) {
    dosome<<<1, 2>>>(2, 3);
    cudaError_t cudaerr = cudaDeviceSynchronize();
    if (cudaerr != cudaSuccess)
        printf("kernel launch failed with error:\n\t%s\n", cudaGetErrorString(cudaerr));
    return 0;
}
EDIT: Forgot to say that none of the constructor messages are printed, but the calc and kernel messages are.
EDIT2: Is it guaranteed that CUDA will initialize a Test object before copying it to the device?
You have to think of a constructor just like a normal method. If you qualify it with __host__, then you'll be able to call it host-side. If you qualify it with __device__, you'll be able to call it device-side. If you qualify it with both, you'll be able to call it on both sides.
What happens when you do dosome<<<1, 2>>>(2, 3); is that the two objects are implicitly constructed (because your constructor is not explicit, so maybe that's confusing you too) on the host side and then memcpy'd to the device. There is no copy constructor involved in the process.
Let's illustrate this:
__global__ void dosome(Test a, Test b) {
    a.calc();
    b.calc();
}

int main(int argc, char **argv) {
    dosome<<<1, 2>>>(2, 3); // Constructors must be at least __host__
    return 0;
}
// Outputs:
Construct 2 (from the host side)
Construct 3 (from the host side)
Now if you change your kernel to take ints instead of Test:
__global__ void dosome(int arga, int argb) {
    // Constructors must be at least __device__
    Test a(arga);
    Test b(argb);
    a.calc();
    b.calc();
}

int main(int argc, char **argv) {
    dosome<<<1, 2>>>(2, 3);
    return 0;
}
// Outputs:
Construct 2 (from the device side)
Construct 3 (from the device side)
OK, I found that it works (the constructors are called) if I add both the __host__ and __device__ qualifiers to the constructors. The construction of the objects happens on the host side, and then they are copied to the device (stack?). This is why the constructors weren't being called: they were device-only code (but then what was being called on the host side?!?)
Using both __host__ and __device__ on the constructors allowed the class to be used without problems.
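A minimal sketch of that fix, reduced from the Test struct above (the printf diagnostics and the copy/assignment operators are omitted):
struct Test {
    // __host__ __device__ lets the implicit host-side construction during the
    // kernel launch and any device-side construction both compile and run.
    __host__ __device__ Test(int n) : _n(n) {}
    __device__ int calc() const { return threadIdx.x + 10 * _n; }
    int _n;
};

__global__ void dosome(Test a, Test b) {
    a.calc();
    b.calc();
}

int main() {
    dosome<<<1, 2>>>(2, 3);   // Test(2) and Test(3) are constructed on the host
    cudaDeviceSynchronize();
    return 0;
}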
EDIT: Still, I'm not sure if the construction always happens before the copy to device.