CUDA memory error - C++

I run high-performance calculations on multiple GPUs (two GPUs per machine); currently I test my code on a GeForce GTX TITAN. Recently I noticed that random memory errors occur, so that I can't rely on the results anymore. I tried to debug this and ran into things I don't understand. I'd appreciate it if someone could help me understand why the following is happening.
So, here's my GPU:
$ nvidia-smi -a
Driver Version : 331.67
GPU 0000:03:00.0
Product Name : GeForce GTX TITAN
...
VBIOS Version : 80.10.2C.00.02
FB Memory Usage
Total : 6143 MiB
Used : 14 MiB
Free : 6129 MiB
Ecc Mode
Current : N/A
Pending : N/A
My Linux machine (Ubuntu 12.04 64-bit):
$ uname -a
Linux cluster-cn-211 3.2.0-61-generic #93-Ubuntu SMP Fri May 2 21:31:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Here's my code (basically: allocate 4 GB of memory, fill it with zeros, copy it back to the host, and check that all values are zero; spoiler: they're not):
#include <cstdio>

#define check(e) { if (e != cudaSuccess) { \
    printf("%d: %s\n", e, cudaGetErrorString(e)); \
    return 1; } }

int main() {
    size_t num = 1024*1024*1024;       // 1 billion elements
    size_t size = num * sizeof(float); // 4 GB of memory
    float *dp;
    float *p = new float[num];
    cudaError_t e;

    e = cudaMalloc((void**)&dp, size); // allocate
    check(e);
    e = cudaMemset(dp, 0, size);       // set to zero
    check(e);
    e = cudaMemcpy(p, dp, size, cudaMemcpyDeviceToHost); // copy back
    check(e);

    for (size_t i = 0; i < num; i++) {
        if (p[i] != 0) // this should never happen, amiright?
            printf("%lu %f\n", i, p[i]);
    }
    return 0;
}
I run it like this
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Sat_Jan_25_17:33:19_PST_2014
Cuda compilation tools, release 6.0, V6.0.1
$ nvcc test.cu
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
$ ./a.out | head
516836128 -0.000214
516836164 -0.841684
516836328 -3272.289062
516836428 -644673853950867887966360388719607808.000000
516836692 0.000005
516850472 232680927002624.000000
516850508 909806289566040064.000000
...
$ echo $?
0
This is not what I expected: many elements are non-zero. Here are a couple of observations:
I checked with cuda-memcheck - no errors. Checked with valgrind's memcheck - no errors.
the memory allocation works as expected, nvidia-smi reports 4179MiB / 6143MiB
the same happens if I
allocate less memory (e.g. 2 GB)
compile with -arch sm_30 or -arch compute_30 (see capabilities)
go from SDK version 6.0 back to 5.5
go from GTX Titan to Tesla K20c (here the ECC checking is enabled and all counters are zero); behavior is the same, I was able to test it on five different GPU cards.
allocate multiple smaller arrays on the device
the errors disappear if I test on a GTX 680
Again, the question is: why do I see those memory errors and how can I ensure that this never happens?

I also perform calculations using the CPU, and we have found the same issue. We are using a GeForce GTX 660 Ti.
I have checked that the number of errors increases with the time the GPU has been running.
The problem can be cleared by shutting down the computer (a reboot alone does not help), but after the machine has been working for some time the problem starts again.
I have no idea why this happens. I have tried several programs to check the memory and all of them give the same result.
As far as I have checked, this problem cannot be avoided, and the only way to be sure that your results are OK is to check the memory after the calculations and to shut down the machine every so often. I know this is not a good solution, but it is the only one I have found.
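To illustrate what I mean by checking the memory after the calculations, here is a minimal sketch of such a check (my own illustration, not a standard tool; the buffer size and the bit pattern are arbitrary choices): write a known pattern from a kernel, copy it back to the host and count mismatches.
#include <cstdio>
#include <stdint.h>
#include <vector>
#include <cuda_runtime.h>

// Fill a device buffer with a known pattern, copy it back and count mismatches.
__global__ void fillPattern(uint32_t *buf, size_t n, uint32_t pattern) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = pattern;
}

int main() {
    const size_t n = (size_t)256 * 1024 * 1024; // 1 GiB worth of uint32_t values
    const uint32_t pattern = 0xA5A5A5A5u;
    uint32_t *dbuf = 0;
    if (cudaMalloc((void**)&dbuf, n * sizeof(uint32_t)) != cudaSuccess) return 1;
    fillPattern<<<(n + 255) / 256, 256>>>(dbuf, n, pattern);
    cudaDeviceSynchronize();
    std::vector<uint32_t> hbuf(n);
    cudaMemcpy(&hbuf[0], dbuf, n * sizeof(uint32_t), cudaMemcpyDeviceToHost);
    size_t bad = 0;
    for (size_t i = 0; i < n; i++)
        if (hbuf[i] != pattern) bad++;
    printf("mismatches: %lu\n", (unsigned long)bad);
    cudaFree(dbuf);
    return bad ? 1 : 0;
}
A non-zero mismatch count after such a run is a sign that the card is in the faulty state described above.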

Related

CUDA GPU __global__ function does not complete

#include <cstdio>

__global__ void functionA()
{
    printf("functionA");
}

int main()
{
    printf("main1");
    functionA<<<1,1>>>();
    printf("main2");
}
I'm trying to run a simple test with the above. But the program only outputs "main1". The program should output "functionA" and "main2" too.
There seem to be two possible reasons for this:
First of all, you need to add
cudaDeviceSynchronize();
after the kernel launch in order to block the host until the device has completed all outstanding work.
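For illustration, a minimal corrected sketch of the snippet above (newlines added so the output is flushed promptly):
#include <cstdio>

__global__ void functionA()
{
    printf("functionA\n");
}

int main()
{
    printf("main1\n");
    functionA<<<1,1>>>();
    cudaDeviceSynchronize(); // block until the kernel (and its device-side printf) has finished
    printf("main2\n");
    return 0;
}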
Furthermore, this can happen if you specify the wrong GPU architecture/compute capability XX when compiling the code:
$ nvcc -gencode=arch=compute_XX,code=sm_XX -o my_app my_app.cu
In this case it seems only the host code is run while the parts for the accelerator are omitted. You can find an overview of the corresponding number XX for the different hardware generations over here. The K20m you are running corresponds to 35, so it should be
$ nvcc -gencode=arch=compute_35,code=sm_35 -o my_app my_app.cu
in your case.
This might also occur if you have multiple graphics accelerators in your system and the code is executed on the wrong one. Each graphics card/accelerator is assigned a particular device id. Device 0 is supposed to be assigned to the most capable device and is used by default. On my system, which contains a powerful Tesla K80 (architecture 37) and a low-power Quadro P620 (architecture 60), I therefore first compiled for 37 and got the same error as you, while compiling for 60 made the code run. I then used the Querying Device Properties example to list the CUDA-capable devices and their corresponding device ids, only to find out that on my system the Tesla K80 is set as 1 and 2 while the simple Quadro P620 graphics card is set as 0. I assume this is the case because the K80 is deprecated in CUDA 11!
You can select the device inside your code with cudaSetDevice or change it when launching the program with
$ CUDA_VISIBLE_DEVICES="1" ./my_app
where 1 has to be replaced by the device id you wish to use. Doing so should make your code run without any problems.
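If you prefer to handle this from code, here is a minimal sketch (the device id 1 at the end is just an example and has to be adapted to your system):
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // print each device id together with its name and compute capability
        printf("device %d: %s (compute %d.%d)\n", i, prop.name, prop.major, prop.minor);
    }
    cudaSetDevice(1); // pick the device id you actually want to use
    return 0;
}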
You can also test whether this really is the issue by cloning the GitHub repository of "Learn CUDA Programming", browsing to Chapter01/01_cuda_introduction/01_hello_world/, compiling with $ make, and finally running it with $ ./hello_world. That example compiles for multiple architectures/compute capabilities and should therefore run without any issue!

Strange behaviour of Parallel Boost Graph Library example code

I have set up simple tests with the Parallel Boost Graph Library (PBGL), which I have never used before, and observed entirely unexpected behaviour that I would like to have explained.
My steps were as follows:
Dump test data in METIS format (a kind of social graph with 50 million vertices and 100 million edges);
Build a modified PBGL example from graph_parallel\example\dijkstra_shortest_paths.cpp
The example was slightly extended to run the eager Dijkstra, Crauser, and delta-stepping algorithms.
Note: building the example required an obscure workaround around the MUTABLE_QUEUE define in crauser_et_al_shortest_paths.hpp (the example code is in fact incompatible with the new mutable_queue)
int lookahead = 1;
delta_stepping_shortest_paths(g, start, dummy_property_map(), get(vertex_distance, g), get(edge_weight, g), lookahead);
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)).lookahead(lookahead));
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)));
Run
mpiexec -n 1 mytest.exe mydata.me
mpiexec -n 2 mytest.exe mydata.me
mpiexec -n 4 mytest.exe mydata.me
mpiexec -n 8 mytest.exe mydata.me
The observed behaviour:
-n 1:
mem usage: 35 GB in 1 running process, which utilizes exactly 1 CPU thread (processor load 12.5%)
delta stepping time: about 1 min 20 s
eager time: about 2 min
crauser time: about 3 min 20 s.
-n 2:
crash in the stage of data load.
-n 4:
mem usage: 40+ GB in roughly equal parts across 4 running processes, each of which utilizes exactly 1 CPU thread
calculation times are unchanged within the margin of observation error.
-n 8:
mem usage: 44+ GB in roughly equal parts across 8 running processes, each of which utilizes exactly 1 CPU thread
calculation times are unchanged within the margin of observation error.
So, apart from the inappropriate memory usage and very low overall performance, the only changes I observe when more MPI processes are running are a slightly increased total memory consumption and a linear rise in processor load.
The fact that the initial graph is somehow partitioned between the processes (probably by vertex number ranges) is nevertheless evident.
What is wrong with this test (and, probably, with my idea of MPI usage as a whole)?
My environment:
- one Windows 10 PC with 64 GB RAM and 8 cores;
- MS MPI 10.0.12498.5;
- MSVC 2017, toolset 141;
- Boost 1.71
N.B. See original example code here.

OpenCV: is OpenCL actually enabled?

I'm trying to speed up the research of features of stitching algorithm using OpenCL. I'm using the code of the example provided here: https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp
I read online that the only thing I have to do is change Mat to UMat. I did it.
However I am not sure my code is actually using OpenCL.
First: I'm working on an Ubuntu 16.04 virtual machine using Parallels Desktop on a MacBook Pro, therefore the only device supported by OpenCL will be the CPU (no GPU). I installed the correct drivers, SDK, etc., and the CPU should work correctly. You can see the result of the command "clinfo" in the Ubuntu shell below. Working with a CPU, I do not expect a performance improvement. My plan is just to get things working in my virtual machine and then deploy the code on a real Ubuntu machine.
Second: the code shows no improvement (as expected, see above). Actually the required time seems to be the same. Is that right? I mean, I know I am still working on the CPU, but I expected some differences. Moreover, looking at the call graph profiled with gprof and gprof2dot, there are no differences (for those who have never heard of gprof, it is simply a code profiler that can generate a call graph showing all calls among functions: which function calls which other function, and so on). Is that possible? Should OpenCV with and without OpenCL call exactly the same functions?
How can I be sure the code is actually using OpenCL? I read online that there were some bugs in feature finding with OpenCL, and therefore I would like to check for myself. Moreover, obviously, I would like to keep working on and editing the code; this is just the beginning.
I'm using this code to check if OpenCL is working:
void checkOpenCL() {
    if (!cv::ocl::haveOpenCL())
    {
        cout << "OpenCL is not available..." << endl;
        //return;
    }

    cv::ocl::Context context;
    if (!context.create(cv::ocl::Device::TYPE_ALL))
    {
        cout << "Failed creating the context..." << endl;
        //return;
    }

    // This bit provides an overview of the OpenCL devices you have in your computer
    cout << context.ndevices() << " CPU devices are detected." << endl;
    for (int i = 0; i < context.ndevices(); i++)
    {
        cv::ocl::Device device = context.device(i);
        cout << "name: " << device.name() << endl;
        cout << "available: " << device.available() << endl;
        cout << "imageSupport: " << device.imageSupport() << endl;
        cout << "OpenCL_C_Version: " << device.OpenCL_C_Version() << endl;
        cout << endl;
    }

    cv::ocl::Device(context.device(0)); // Here is where you change which GPU to use (e.g. 0 or 1)
}
And it prints:
1 CPU devices are detected.
name: Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
available: 1
imageSupport: 1
OpenCL_C_Version: OpenCL C 1.2
Running clinfo in the Ubuntu shell reports:
Number of platforms 1
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 1.2 LINUX
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64
Platform Extensions function suffix INTEL
Platform Name Intel(R) OpenCL
Number of devices 1
Device Name Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 1.2 (Build 25)
Driver Version 1.2.0.25
Device OpenCL C Version OpenCL C 1.2
Device Type CPU
Device Profile FULL_PROFILE
Max compute units 4
Max clock frequency 2300MHz
Device Partition (core)
Max number of sub-devices 4
Supported partition types by counts, equally, by names (Intel)
Max work item dimensions 3
Max work item sizes 8192x8192x8192
Max work group size 8192
Preferred work group size multiple 128
Preferred / native vector sizes
char 1 / 32
short 1 / 16
int 1 / 8
long 1 / 4
half 0 / 0 (n/a)
float 1 / 8
double 1 / 4 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 6103834624 (5.685GiB)
Error Correction support No
Max memory allocation 1525958656 (1.421GiB)
Unified memory for Host and Device Yes
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Global Memory cache type Read/Write
Global Memory cache size 262144
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 480
Max size for 1D images from buffer 95372416 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 480
Max number of write image args 480
Local memory type Global
Local memory size 32768 (32KiB)
Max constant buffer size 131072 (128KiB)
Max number of constant args 480
Max size of kernel argument 3840 (3.75KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Local thread execution (Intel) Yes
Prefer user sync for interop No
Profiling timer resolution 1ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels Yes
SPIR versions 1.2
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Available Yes
Compiler Available Yes
Linker Available Yes
Device Extensions cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [INTEL]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
Compile and run facedetect.cpp from OpenCV's T-API folder in the samples folder; it will tell you whether OpenCL is ON or OFF, displayed on the rendered window.
Here is how to run the command:
./ufacedetect --cascade=/home/user/opencv/data/haarcascades/haarcascade_frontalface_alt.xml /dev/video5 --scale=1.3
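Alternatively, you can toggle and query the OpenCL code path directly from your own code. Here is a minimal sketch using the T-API (the image size and blur parameters are arbitrary choices of mine):
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::ocl::setUseOpenCL(true); // request the OpenCL code path
    std::cout << "OpenCL enabled: " << cv::ocl::useOpenCL() << std::endl;

    // A simple T-API operation on UMat; with OpenCL enabled this should be
    // dispatched to the OpenCL device (the CPU in your setup).
    cv::UMat src(2048, 2048, CV_8UC1, cv::Scalar(0)), dst;
    cv::GaussianBlur(src, dst, cv::Size(7, 7), 1.5);
    cv::ocl::finish(); // wait for the queued OpenCL work to complete
    return 0;
}
If useOpenCL() prints 1 and the operation completes, the T-API is dispatching work through OpenCL; on your virtual machine that device is still the CPU, so don't expect a speed-up.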

Why does my "Hello world" program take almost 10s?

I have installed the CUDA runtime and drivers, version 7.0, on my workstation (Ubuntu 14.04, 2x Intel Xeon E5 + 4x Tesla K20m). I've used the following program to check whether my installation works:
#include <stdio.h>

__global__ void helloFromGPU()
{
    printf("Hello World from GPU!\n");
}

int main(int argc, char **argv)
{
    printf("Hello World from CPU!\n");
    helloFromGPU<<<1, 1>>>();
    printf("Hello World from CPU! Again!\n");
    cudaDeviceSynchronize();
    printf("Hello World from CPU! Yet again!\n");
    return 0;
}
I get the correct output, but it takes an enormous amount of time:
$ nvcc hello.cu -O2
$ time ./hello > /dev/null
real 0m8.897s
user 0m0.004s
sys 0m1.017s
If I remove all device code the overall execution takes 0.001s. So why does my simple program take almost 10 seconds?
The apparent slow runtime of your example is due to the underlying fixed cost of setting up the GPU context.
Because you are running on a platform that supports unified addressing, the CUDA runtime has to map 64GB of host RAM and 4 x 5120MB from your GPUs into a single virtual address space and register that with the Linux kernel.
There are a lot of kernel API calls required to do that, and it isn't fast. I would guess that is the main source of the slow performance you are observing. You should view this as a fixed start-up cost which must be amortised over the life of your application. In real world applications, a 10 second startup is trivial and of no real importance. In a hello world example, it isn't.
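If you want to see that fixed cost in isolation, here is a minimal sketch that times the first CUDA runtime call against a later one (cudaFree(0) is a conventional way to force context creation; compile with nvcc -std=c++11 for the chrono calls):
#include <cstdio>
#include <chrono>
#include <cuda_runtime.h>

int main()
{
    auto t0 = std::chrono::steady_clock::now();
    cudaFree(0); // first runtime call: triggers context creation and address-space setup
    auto t1 = std::chrono::steady_clock::now();
    printf("first CUDA call:  %.2f s\n", std::chrono::duration<double>(t1 - t0).count());

    t0 = std::chrono::steady_clock::now();
    cudaFree(0); // subsequent calls reuse the initialised context and are cheap
    t1 = std::chrono::steady_clock::now();
    printf("second CUDA call: %.6f s\n", std::chrono::duration<double>(t1 - t0).count());
    return 0;
}
On your machine the first number should account for most of the 8-9 seconds you measured, while the second should be negligible.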

pthread multithreading in Mac OS vs Windows multithreading

I've developed a multi-platform program (using the FLTK toolkit) and implemented multithreading to do intensive background tasks.
I have followed the FLTK tutorials/examples on multithreading, which involve using pthreads on Mac, i.e. the function pthread_create, and Windows threading on Windows, i.e. _beginthread.
What I have noticed is that the performance is much higher on Windows, i.e. these background threads execute 3 to 4 times faster.
Why might this be? Is it the threading libraries I'm using? Surely there shouldn't be such a difference? Or could it be the runtime libraries underneath it all?
Here are my machine details
Mac:
Intel(R) Core(TM) i7-3820QM CPU @ 2.70GHz
16 GB DDR3 1600 MHz
Model MacBookPro9,1
OS: Mac OSX 10.8.5
Windows:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
16 GB DDR3 1600 MHz
Model: Dell Latitude E5530
OS: Windows 7 Service Pack 1
EDIT
To do a basic speed comparison, I compiled the following on both machines and ran it from the command line:
#include <cmath>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int argc, char **argv)
{
    time_t t = time(NULL);
    tm* tt = localtime(&t);
    std::stringstream s;
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/" << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":" << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "1: " << s.str() << std::endl;

    double sum = 0;
    for (int i = 0; i < 100000000; i++) {
        double ii = i*0.123456789;
        sum = sum + sin(ii)*cos(ii);
    }

    t = time(NULL);
    tt = localtime(&t);
    s.str("");
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/" << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":" << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "2: " << s.str() << std::endl;
}
Windows takes less than a second. Mac takes 4-5 seconds. Any ideas?
On the Mac I'm compiling with g++; on Windows with Visual Studio 2013.
SECOND EDIT
if I change the line
std::cout<<"2: "<<s.str()<<std::endl;
to
std::cout<<"2: "<<s.str()<<" "<<sum<<std::endl;
Then all of a sudden Windows takes a little bit longer...
This makes me think that the whole thing might come down to compiler optimisation. So the question would be: is g++ (4.2 is the version I have) worse at optimisation, or do I need to provide additional flags?
THIRD(!) AND FINAL EDIT
I can report that I achieve comparable performance by ensuring the g++ optimisation flag -O was provided at compile time. One of those annoying things that happens so often:
A: Im tearing my hair out on problem x
B: Are you sure you're not doing y?
A: That works, why is this information not plastered all over the place and in every tutorial on problem x on the web?
B: Did you read the manual?
A: No, if I completely read the manual for every single bit of code/program I used I would never actually get round to doing anything...
Meh.