With OpenCL, How to get GPU memory usage? - c++

I'm looking for a reliable way to determine current GPU memory usage with OpenCL.
I have found the NVIDIA CUDA call cudaMemGetInfo(size_t* free, size_t* total), which returns the free and total memory on the current device.
But I'm looking for a solution for AMD and OpenCL. I could not find similar functionality in OpenCL, and I don't know whether AMD has something equivalent.
I don't want to know how much free memory there is on OpenCL devices before allocating buffers, but how much free memory there is after allocating buffers.
A priori, as indicated in How do I determine available device memory in OpenCL?, with OpenCL there is no way to know it, and supposedly no need to.
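For reference, the CUDA-side query I mentioned looks roughly like this (CUDA runtime API, NVIDIA only):

    // CUDA runtime API, NVIDIA only: free/total memory on the current device.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        size_t freeBytes = 0, totalBytes = 0;
        if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess)
            std::printf("free: %zu MiB, total: %zu MiB\n",
                        freeBytes >> 20, totalBytes >> 20);
        return 0;
    }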

For AMD, perhaps try CL_DEVICE_GLOBAL_FREE_MEMORY_AMD from the cl_amd_device_attribute_query extension - this extension will probably only work with proprietary drivers, though.
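If the extension is present, it's just a clGetDeviceInfo query. A rough sketch (the token is defined in CL/cl_ext.h; per the extension spec the value is reported in kilobytes, and the exact return format may vary by driver):

    // Hedged sketch: query free global memory via the AMD-specific extension.
    // CL_DEVICE_GLOBAL_FREE_MEMORY_AMD comes from the cl_amd_device_attribute_query
    // extension; per the spec it reports kilobytes (driver-dependent details apply).
    #include <CL/cl.h>
    #include <CL/cl_ext.h>
    #include <cstdio>

    void printFreeMemoryAMD(cl_device_id device) {
        size_t freeMemKB[2] = {0, 0};   // per spec: total free, largest free block (KB)
        cl_int err = clGetDeviceInfo(device, CL_DEVICE_GLOBAL_FREE_MEMORY_AMD,
                                     sizeof(freeMemKB), freeMemKB, nullptr);
        if (err == CL_SUCCESS)
            std::printf("free global memory: ~%zu KB\n", freeMemKB[0]);
        else
            std::printf("cl_amd_device_attribute_query not supported (err %d)\n", err);
    }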
In the general case, it's impossible, because AFAIK there's no way to know when buffers are allocated on the device. In this sense OpenCL is higher-level than CUDA: buffers belong to contexts, not devices. Calling clCreateBuffer() can, but doesn't have to, allocate any memory on any device; the implementation automatically migrates buffers to device memory before it executes a kernel that needs them, and moves them away from the device if it needs to free memory for the next kernel. Even if you get the free memory of a device, you can't 100% reliably use it to decide whether to run a kernel, because clEnqueueNDRangeKernel() doesn't necessarily launch the kernel immediately (it just enqueues it; if there's something else in the queue, it can be delayed), and some other application on the same computer could get scheduled on the GPU in the meantime.
If you want to avoid swapping memory, you'll have to make sure that 1) your application is the only one using the GPU, and 2) for each of your kernels, the total size of the buffer arguments is <= CL_DEVICE_GLOBAL_MEM_SIZE.
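For rule 2, the relevant limits are standard device queries; a minimal sketch:

    // Sketch: query the standard OpenCL limits you can budget against.
    #include <CL/cl.h>
    #include <cstdio>

    void printMemoryLimits(cl_device_id device) {
        cl_ulong globalMem = 0, maxAlloc = 0;
        clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(globalMem), &globalMem, nullptr);
        clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(maxAlloc), &maxAlloc, nullptr);
        std::printf("global memory: %llu MiB, max single allocation: %llu MiB\n",
                    (unsigned long long)(globalMem >> 20),
                    (unsigned long long)(maxAlloc >> 20));
    }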

Related

Does cudaMallocManaged() create a synchronized buffer in RAM and VRAM?

In an NVIDIA developer blog post, An Even Easier Introduction to CUDA, the writer explains:
To compute on the GPU, I need to allocate memory accessible by the GPU. Unified Memory in CUDA makes this easy by providing a single memory space accessible by all GPUs and CPUs in your system. To allocate data in unified memory, call cudaMallocManaged(), which returns a pointer that you can access from host (CPU) code or device (GPU) code.
I found this both interesting (since it seems potentially convenient) and confusing:
returns a pointer that you can access from host (CPU) code or device (GPU) code.
For this to be true, it seems like cudaMallocManaged() must be syncing 2 buffers across VRAM and RAM. Is this the case? Or is my understanding lacking?
In my work so far with GPU acceleration on top of the WebGL abstraction layer via GPU.js, I learned the distinct performance difference between passing VRAM-based buffers (textures in WebGL) from kernel to kernel (keeping the buffer on the GPU, which is highly performant) and reading the buffer back outside of the kernels to access it in RAM through JavaScript (pulling the buffer off the GPU, which takes a performance hit, since buffers in VRAM don't magically move to RAM).
Forgive my highly abstracted understanding / description of the topic, since I know most CUDA / C++ devs have a much more granular understanding of the process.
So is cudaMallocManaged() creating synchronized buffers in both RAM and VRAM for convenience of the developer?
If so, wouldn't doing so come with an unnecessary cost in cases where we might never need to touch that buffer with the CPU?
Does the compiler perhaps just check if we ever reference that buffer from CPU and never create the CPU side of the synced buffer if it's not needed?
Or do I have it all wrong? Are we not even talking VRAM? How does this work?
So is cudaMallocManaged() creating synchronized buffers in both RAM and VRAM for convenience of the developer?
Yes, more or less. The "synchronization" is referred to in the managed memory model as migration of data. Virtual address carveouts are made for all visible processors, and the data is migrated to (i.e. moved to, and given a physical allocation on) whichever processor attempts to access it.
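As a rough illustration (a sketch, not code from the referenced blog; exact migration behavior depends on GPU generation and operating system):

    // Sketch (CUDA C++): one managed pointer, physically migrated to whichever
    // processor touches it.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;                   // GPU touch: pages migrate to VRAM
    }

    int main() {
        const int n = 1 << 20;
        float* data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float));  // one VA range visible to CPU and GPU
        for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU touch: pages live in system RAM
        scale<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize();
        std::printf("data[0] = %f\n", data[0]);       // CPU touch again: pages migrate back
        cudaFree(data);
        return 0;
    }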
If so, wouldn't doing so come with an unnecessary cost in cases where we might never need to touch that buffer with the CPU?
If you never need to touch the buffer on the CPU, then what will happen is that the VA carveout will be made in the CPU VA space, but no physical allocation will be made for it. When the GPU attempts to actually access the data, it will cause the allocation to "appear" and use up GPU memory. Although there are "costs" to be sure, there is no usage of CPU (physical) memory in this case. Furthermore, once instantiated in GPU memory, there should be no ongoing additional cost for the GPU to access it; it should run at "full" speed. The instantiation/migration process is a complex one, and what I am describing here is what I would consider the "principal" modality or behavior. There are many factors that could affect this.
Does the compiler perhaps just check if we ever reference that buffer from CPU and never create the CPU side of the synced buffer if it's not needed?
No, this is managed by the runtime, not at compile time.
Or do I have it all wrong? Are we not even talking VRAM? How does this work?
No, you don't have it all wrong. Yes, we are talking about VRAM.
The blog you reference barely touches on managed memory, which is a fairly involved subject. There are numerous online resources to learn more about it, and you might want to review some of them; here is one. There are good GTC presentations on managed memory, including here. There is also an entire section of the CUDA programming guide covering managed memory.

Where is device memory mapped to, in virtual addressing, when using Intel I/OAT?

When I use Intel I/OAT for zero-copy / zero-CPU-cycle DMA transfers through async_memcpy, where is the device memory mapped to in virtual addressing: to a kernel buffer (kernel space) or to a user buffer (user space)?
And does it make any sense to use I/OAT on modern x86_64 CPUs, where a CPU core can access RAM quickly without going through the chipset's northbridge?
http://www.intel.com/content/www/us/en/wireless-network/accel-technology.html
Given that the memory is physical memory, it can be any memory that the kernel can address, including both kernel buffers and user-space buffers. It does, however, have to be "pinned" or "locked", so that the memory doesn't get taken away (e.g. someone calling free on the memory should not release it back to the OS for reassignment to another process, because you could get very interesting effects if that happens). These are, of course, the same rules that apply to various other DMA accesses.
I doubt very much this helps in copying data structures for your average user-mode application. On the other hand, I don't believe Intel would put this sort of feature into the processor unless they thought it was beneficial in some way. The way I understand it, it's helpful for copying the network receive buffer into the user-mode application that is receiving the data, with less CPU involvement. It doesn't necessarily speed up the actual memory transfer much (if at all), but it offloads the copy from the CPU, freeing it to do other things.
I'm pretty sure I saw something not so long ago about this technology [or something very similar] also going into the latest models of processors, so I expect there is some advantage to it.

libGL heap usage

I am working on a Linux-based C++ OpenGL application, using the NVIDIA 290.10 64-bit drivers. I am trying to reduce its memory footprint, as it makes use of quite a lot of live data.
I've been using valgrind/massif to analyze heap usage, and while it helped me optimize various things, by now the largest chunk of heap memory used is allocated by libGL. No matter how I set the threshold, massif doesn't let me see in detail where those allocations come from, just that it's libGL. At peak times, I see about 250MB allocated by libGL (out of 900MB total heap usage). I hold a similar amount of memory on the graphics card, as VBOs and Textures (mostly one big 4096*4096 texture).
So it appears as if a similar amount of memory as what I upload to GPU memory is allocated on the heap by libGL. The libGL allocations also peak when the volume of VBOs peaks. Is that normal? I thought one of the benefits of having a lot of GPU memory is that it keeps the RAM free?
What you experience is perfectly normal, because an OpenGL implementation must keep a copy of the data in system memory for various reasons.
In OpenGL there's no exclusive access to the GPU, so depending on its use, it may become necessary to swap out data (or just release some objects from GPU memory). Also, GPUs may crash, and drivers then just silently reset them without the user noticing. This too requires a full copy of all the buffer data.
And don't forget that there's a major difference between address space allocation (the value reported by Valgrind) and actual memory utilization.

CUDA Zero Copy memory considerations

I am trying to figure out if using cudaHostAlloc (or cudaMallocHost?) is appropriate.
I am trying to run a kernel where my input data is more than the amount available on the GPU.
Can I cudaMallocHost more space than there is on the GPU? If not, and let's say I allocate 1/4 of the space that I need (which will fit on the GPU), is there any advantage to using pinned memory?
I would essentially still have to copy from that 1/4-sized buffer into my full-size malloc'd buffer, and that's probably no faster than just using normal cudaMalloc, right?
Is this a typical and correct usage scenario for cudaMallocHost:
1. allocate pinned host memory (let's call it "h_p")
2. populate h_p with input data
3. get a device pointer on the GPU for h_p
4. run a kernel using that device pointer to modify the contents of the array
5. use h_p like normal, which now has the modified contents
So no copy has to happen between step 4 and step 5, right?
If that is correct, then I can see the advantage, at least for kernels whose data will fit on the GPU all at once.
Memory transfer is an important factor when it comes to the performance of CUDA applications. cudaMallocHost can do two things:
allocate pinned memory: this is page-locked host memory that the CUDA runtime can track. If host memory allocated this way is involved in cudaMemcpy as either source or destination, the CUDA runtime will be able to perform an optimized memory transfer.
allocate mapped memory: this is also page-locked memory that can be used in kernel code directly as it is mapped to CUDA address space. To do this you have to set the cudaDeviceMapHost flag using cudaSetDeviceFlags before using any other CUDA function. The GPU memory size does not limit the size of mapped host memory.
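A minimal sketch of the mapped (zero-copy) path, assuming a device that supports host memory mapping:

    // Sketch: mapped (zero-copy) host memory. Kernels read and write the host
    // RAM directly over PCIe through d_p; no explicit cudaMemcpy is issued.
    #include <cuda_runtime.h>

    int setupMapped(size_t bytes, float** h_p, float** d_p) {
        // Must be called before the CUDA context is created.
        if (cudaSetDeviceFlags(cudaDeviceMapHost) != cudaSuccess) return -1;
        // Page-locked host allocation, mapped into the device address space.
        if (cudaHostAlloc((void**)h_p, bytes, cudaHostAllocMapped) != cudaSuccess) return -1;
        // Device-side alias of the same physical host pages.
        if (cudaHostGetDevicePointer((void**)d_p, *h_p, 0) != cudaSuccess) return -1;
        return 0;
    }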
I'm not sure about the performance of the latter technique. It could allow you to overlap computation and communication very nicely.
If you access the memory in blocks inside your kernel (i.e. you don't need the entire data but only a section), you could use a multi-buffering scheme with asynchronous memory transfers (cudaMemcpyAsync) and multiple buffers on the GPU: compute on one buffer, transfer one buffer to the host, and transfer one buffer to the device at the same time.
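A rough sketch of that multi-buffering idea with two streams; process_chunk is a placeholder kernel, and h_in/h_out are assumed to be pinned (cudaMallocHost) so the asynchronous copies can actually overlap with computation:

    // Sketch: double-buffered, streamed processing of data larger than the GPU.
    #include <cuda_runtime.h>
    #include <algorithm>

    __global__ void process_chunk(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * 2.0f;            // placeholder computation
    }

    void processAll(const float* h_in, float* h_out, size_t total, size_t chunk) {
        cudaStream_t stream[2];
        float* d_in[2];
        float* d_out[2];
        for (int i = 0; i < 2; ++i) {
            cudaStreamCreate(&stream[i]);
            cudaMalloc(&d_in[i],  chunk * sizeof(float));
            cudaMalloc(&d_out[i], chunk * sizeof(float));
        }
        for (size_t off = 0, it = 0; off < total; off += chunk, ++it) {
            int s = it % 2;                          // alternate buffers/streams
            size_t n = std::min(chunk, total - off);
            cudaMemcpyAsync(d_in[s], h_in + off, n * sizeof(float),
                            cudaMemcpyHostToDevice, stream[s]);
            process_chunk<<<(unsigned)((n + 255) / 256), 256, 0, stream[s]>>>(
                d_in[s], d_out[s], (int)n);
            cudaMemcpyAsync(h_out + off, d_out[s], n * sizeof(float),
                            cudaMemcpyDeviceToHost, stream[s]);
        }
        cudaDeviceSynchronize();                     // wait for all chunks to finish
        for (int i = 0; i < 2; ++i) {
            cudaStreamDestroy(stream[i]);
            cudaFree(d_in[i]);
            cudaFree(d_out[i]);
        }
    }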
I believe your assertions about the usage scenario are correct when using cudaDeviceMapHost type of allocation. You do not have to do an explicit copy but there certainly will be an implicit copy that you don't see. There's a chance it overlaps nicely with your computation. Note that you might need to synchronize the kernel call to make sure the kernel finished and that you have the modified content in h_p.
Using host memory would be orders of magnitude slower than on-device memory. It has both very high latency and very limited throughput. For example, the capacity of PCIe x16 is a mere 8 GB/s, while the device memory bandwidth of a GTX 460 is 108 GB/s.
Neither the CUDA C Programming Guide nor the CUDA Best Practices Guide mentions that the amount allocated by cudaMallocHost can't be bigger than the device memory, so I conclude it's possible.
Data transfers from page-locked memory to the device are faster than normal data transfers, and even faster if using write-combined memory. Also, the memory allocated this way can be mapped into device memory space, eliminating the need to (manually) copy the data at all. It happens automatically as the data is needed, so you should be able to process more data than fits into device memory.
However, system performance (of the host) can greatly suffer, if the page-locked amount makes up a significant part of the host memory.
So when should you use this technique? Simple: if the data needs to be read only once and written only once, use it. It will yield a performance gain, since the data would have to be copied back and forth at some point anyway. But as soon as you need to store intermediate results that don't fit into registers or shared memory, process chunks of your data that fit into device memory with cudaMalloc.
Yes, you can cudaMallocHost more space than there is on the GPU.
Pinned memory can have higher bandwidth, but can decrease host performance. It is very easy to switch between normal host memory, pinned memory, write-combined memory, and even mapped (zero-copy) memory. Why don't you use normal host memory first and compare the performance?
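For reference, the variants mentioned above differ only in the allocation call and flags; a small sketch:

    #include <cuda_runtime.h>
    #include <cstdlib>

    void allocationVariants(size_t bytes) {
        void* pageable = std::malloc(bytes);                // 1) normal pageable host memory
        void* pinned = nullptr;
        cudaMallocHost(&pinned, bytes);                     // 2) pinned: faster, async-capable copies
        void* writeCombined = nullptr;
        cudaHostAlloc(&writeCombined, bytes,
                      cudaHostAllocWriteCombined);          // 3) fast host->device, slow host reads
        void* mapped = nullptr;
        cudaHostAlloc(&mapped, bytes, cudaHostAllocMapped); // 4) zero-copy: visible to kernels
        cudaFreeHost(mapped);
        cudaFreeHost(writeCombined);
        cudaFreeHost(pinned);
        std::free(pageable);
    }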
Yes, your usage scenario should work.
Keep in mind that global device memory access is slow, and zero-copy host memory access is even slower. Whether zero-copy is right for you depends entirely on how you use the memory.
Also consider using streams to overlap data transfer and kernel execution.
This lets the GPU work on chunks of data while other chunks are still being transferred.

Should I orphan OpenCL buffers?

In OpenGL it is a common practice to orphan buffers that are updated frequently. Ideally, the driver notices that a buffer of the same size is requested and, if possible, hands back the old buffer once it is no longer needed. New memory is only allocated when the old buffer is still in use and can't be reused.
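For comparison, the OpenGL orphaning idiom usually looks roughly like this (a sketch; the declarations typically come from a GL loader such as GLEW or glad):

    // Sketch of buffer orphaning in OpenGL: re-specify the data store with a
    // null pointer, then upload; if the old store is still in use by the GPU,
    // the driver can hand out a fresh store and recycle the old one later.
    // Requires a GL loader header, e.g. <GL/glew.h>.
    void updateVbo(GLuint vbo, const void* data, GLsizeiptr size) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, size, nullptr, GL_STREAM_DRAW); // orphan old store
        glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);              // fill the new store
    }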
In OpenCL (on NVIDIA hardware, using the latest developer drivers) I am not sure about this technique. I have a 256 kB buffer, handled by the C++ wrapper's reference counting, which I reallocate frequently. Most of the time this works fine, but in some cases OpenCL throws a CL_OUT_OF_MEMORY error while allocating a new buffer.
Do you think I should switch my approach (e.g. use a constant number of buffers)? Or should I investigate another possible cause of this problem?
Kind regards,
Florian
OpenCL uses C semantics for memory allocation and deallocation, so it will not automatically reuse buffers: you have to explicitly release a buffer and allocate a new one later. In practice it is good to reuse buffers manually, since allocation can be quite an expensive operation.
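A sketch of the manual-reuse approach, assuming a persistent 256 kB buffer whose contents are refreshed with clEnqueueWriteBuffer instead of re-creating the cl_mem each time:

    // Sketch: keep one cl_mem alive and only rewrite its contents,
    // instead of releasing and re-creating it every iteration.
    #include <CL/cl.h>

    cl_mem createPersistentBuffer(cl_context ctx, size_t bytes, cl_int* err) {
        return clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, nullptr, err);
    }

    cl_int refreshBuffer(cl_command_queue queue, cl_mem buf,
                         const void* src, size_t bytes) {
        // Blocking write for simplicity; a non-blocking write plus an event
        // would let the copy overlap with other work.
        return clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, bytes, src,
                                    0, nullptr, nullptr);
    }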