Does cudaMallocManaged() create a synchronized buffer in RAM and VRAM? - c++

In the Nvidia developer blog post An Even Easier Introduction to CUDA, the writer explains:
To compute on the GPU, I need to allocate memory accessible by the
GPU. Unified Memory in CUDA makes this easy by providing a single
memory space accessible by all GPUs and CPUs in your system. To
allocate data in unified memory, call cudaMallocManaged(), which
returns a pointer that you can access from host (CPU) code or device
(GPU) code.
I found this both interesting (since it seems potentially convenient) and confusing:
returns a pointer that you can access from host (CPU) code or device
(GPU) code.
For this to be true, it seems like cudaMallocManaged() must be syncing 2 buffers across VRAM and RAM. Is this the case? Or is my understanding lacking?
In my work so far with GPU acceleration on top of the WebGL abstraction layer via GPU.js, I learned the distinct performance difference between passing VRAM-based buffers (textures in WebGL) from kernel to kernel (keeping the buffer on the GPU, which is highly performant) and retrieving the buffer value outside of the kernels to access it in RAM through JavaScript (pulling the buffer off the GPU, which takes a performance hit, since buffers in VRAM on the GPU don't magically move to RAM).
Forgive my highly abstracted understanding / description of the topic, since I know most CUDA / C++ devs have a much more granular understanding of the process.
So is cudaMallocManaged() creating synchronized buffers in both RAM
and VRAM for convenience of the developer?
If so, wouldn't doing so come with an unnecessary cost in cases where
we might never need to touch that buffer with the CPU?
Does the compiler perhaps just check if we ever reference that buffer
from CPU and never create the CPU side of the synced buffer if it's
not needed?
Or do I have it all wrong? Are we not even talking VRAM? How does
this work?

So is cudaMallocManaged() creating synchronized buffers in both RAM and VRAM for convenience of the developer?
Yes, more or less. The "synchronization" is referred to in the managed memory model as migration of data. Virtual address carveouts are made for all visible processors, and the data is migrated to (i.e. moved to, and given a physical allocation on) the processor that attempts to access it.
If so, wouldn't doing so come with an unnecessary cost in cases where we might never need to touch that buffer with the CPU?
If you never need to touch the buffer on the CPU, then what will happen is that the VA carveout will be made in the CPU VA space, but no physical allocation will be made for it. When the GPU attempts to actually access the data, it will cause the allocation to "appear" and use up GPU memory. Although there are "costs" to be sure, there is no usage of CPU (physical) memory in this case. Furthermore, once instantiated in GPU memory, there should be no ongoing additional cost for the GPU to access it; it should run at "full" speed. The instantiation/migration process is a complex one, and what I am describing here is what I would consider the "principal" modality or behavior. There are many factors that could affect this.
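To make that concrete, here is a minimal sketch of my own (not from the blog post) of an allocation that only the GPU ever touches until the very end: the managed buffer gets its physical backing in GPU memory on first access by the kernel, and only the final host read would cause pages to migrate to system memory.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Trivial kernel: the first GPU access to 'data' is what triggers the
    // physical allocation (and any migration) on the GPU.
    __global__ void fill(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = 2.0f * i;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // One pointer, usable from host or device code.
        cudaMallocManaged(&data, n * sizeof(float));

        fill<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize();

        // This host read is the only thing that migrates pages to system
        // memory; remove it and the data never occupies CPU physical memory.
        printf("data[42] = %f\n", data[42]);

        cudaFree(data);
        return 0;
    }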
Does the compiler perhaps just check if we ever reference that buffer from CPU and never create the CPU side of the synced buffer if it's not needed?
No, this is handled by the runtime, not at compile time.
Or do I have it all wrong? Are we not even talking VRAM? How does this work?
No, you don't have it all wrong. Yes, we are talking about VRAM.
The blog you reference barely touches on managed memory, which is a fairly involved subject. There are numerous online resources to learn more about it, and you might want to review some of them; here is one. There are good GTC presentations on managed memory, including here. There is also an entire section of the CUDA programming guide covering managed memory.

Related

Does OpenGL take care of GPU memory fragmentation?

So basically, whenever I create buffer objects, OpenGL allocates some memory on the GPU.
Consider scenario 1 where I generate 2 uniform buffers for 2 uniform variables.
Now consider scenario 2 where I create a single buffer and enclose the 2 uniform variables inside an interface block.
My understanding is that for scenario 1, two separate regions of memory get allocated, while for scenario 2, one big contiguous block of memory gets allocated. If so, then scenario 1 might be susceptible to memory fragmentation, and if this happens, is it managed by OpenGL or something else, OR should we keep this in mind before writing performance-critical code?
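For concreteness, the two scenarios might be set up roughly like this (the GLSL block names and sizes are just placeholders, assuming a current OpenGL 3.1+ context):

    // Scenario 1: two uniform blocks, each backed by its own buffer object.
    // GLSL:  layout(std140) uniform BlockA { vec4 color; };
    //        layout(std140) uniform BlockB { mat4 transform; };
    GLuint ubo[2];
    glGenBuffers(2, ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo[0]);
    glBufferData(GL_UNIFORM_BUFFER, 4 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo[1]);
    glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);

    // Scenario 2: one interface block holding both variables, one allocation.
    // GLSL:  layout(std140) uniform Block { vec4 color; mat4 transform; };
    GLuint singleUbo;
    glGenBuffers(1, &singleUbo);
    glBindBuffer(GL_UNIFORM_BUFFER, singleUbo);
    glBufferData(GL_UNIFORM_BUFFER, (4 + 16) * sizeof(float), NULL, GL_DYNAMIC_DRAW);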
Actually I have to fix that for you. It's
So basically, whenever I create buffer objects, OpenGL allocates some memory.
You don't know – and it's invalid to make assumptions about – where this memory is located. You just get the assurance that it's there (somewhere) and that you can make use of it.
managed by OpenGL or something
Yes. In fact, any reasonable OpenGL implementation does have to move data around on a regular basis. Think about it: on a modern computer system, several applications use the GPU in parallel, and no process (usually) cares about or respects the inner workings of the other processes that coinhabit the same machine. Yet the user (naturally) expects that all processes will "just work" regardless of the situation.
The GPU drivers do a lot of data pushing in the background, moving stuff between system memory, GPU memory, or even swap space on storage devices, without processes noticing any of it.
OR should we keep this in mind before writing performance critical code?
The average-Joe programmer will get the best performance by just using the OpenGL API in a straightforward way, without trying to outsmart the implementation. Every OpenGL implementation (= combination of GPU model + driver version) has "fast paths"; however, short of having access to intimately detailed knowledge about the GPU and driver internals, those are very difficult to hit.
Usually only the GPU makers themselves have this knowledge; if you're a AAA game studio, you usually have a few GPU vendor engineers on quick dial to come visit your office and do their voodoo; most people visiting this site probably don't.

CUDA pinned memory and coalescing

On a compute capability 2.x device, how would I make sure that the GPU uses coalesced memory access when using mapped pinned memory, assuming that normally, when using global memory, the 2D data would require padding?
I can't seem to find information about this anywhere, perhaps I should be looking better or perhaps I am missing something. Any pointers in the right direction are welcome...
The coalescing approach should be applied when using zero copy memory. Quoting the CUDA C BEST PRACTICES GUIDE:
Because the data is not cached on the GPU, mapped
pinned memory should be read or written only once, and the global loads and stores
that read and write the memory should be coalesced.
Quoting the "CUDA Programming" book, by S. Cook
If you think about what happens with access to global memory, an entire cache line is brought in from memory on compute 2.x hardware. Even on compute 1.x hardware the same 128 bytes, potentially reduced to 64 or 32, is fetched from global memory.
NVIDIA does not publish the size of the PCI-E transfers it uses, or details on how zero copy is actually implemented. However, the coalescing approach used for global memory could be used with PCI-E transfer. The warp memory latency hiding model can equally be applied to PCI-E transfers, providing there is enough arithmetic density to hide the latency of the PCI-E transfers.
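To make the quoted advice concrete, here is a hedged sketch (kernel and names are my own, not from either source) of mapped pinned (zero-copy) memory accessed in a coalesced, read-once/write-once pattern:

    #include <cuda_runtime.h>

    // Consecutive threads touch consecutive addresses, and each element is
    // read and written exactly once, so the accesses over PCI-E coalesce.
    __global__ void scale(const float *in, float *out, int n, float s) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = s * in[i];
    }

    int main() {
        const int n = 1 << 20;
        float *h_in, *h_out, *d_in, *d_out;

        cudaSetDeviceFlags(cudaDeviceMapHost);               // allow mapped pinned memory
        cudaHostAlloc((void **)&h_in,  n * sizeof(float), cudaHostAllocMapped);
        cudaHostAlloc((void **)&h_out, n * sizeof(float), cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&d_in,  h_in,  0); // device view of the host buffers
        cudaHostGetDevicePointer((void **)&d_out, h_out, 0);

        for (int i = 0; i < n; ++i) h_in[i] = float(i);

        scale<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 3.0f);
        cudaDeviceSynchronize();                             // results are now visible in h_out

        cudaFreeHost(h_in);
        cudaFreeHost(h_out);
        return 0;
    }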

Where is device memory mapped to, in virtual addressing, when using Intel I/OAT?

When I use Intel I/OAT for DMA zero-copy/zero-cycles (without CPU) transfer through async_memcpy, where is the device memory mapped to in virtual addressing: to a kernel buffer (kernel space) or to a user buffer (user space)?
And does it make any sense to use I/OAT on modern x86_64 CPUs (where a CPU core can access RAM quickly without going through the chipset's northbridge)?
http://www.intel.com/content/www/us/en/wireless-network/accel-technology.html
Given that the memory is physical memory, it can be any memory that the kernel can address, including both kernel buffers and user-space buffers. It does however have to be "pinned" or "locked", so that the memory doesn't get taken away (e.g. someone calling free on the memory should not release it back to the OS for reassignment to another process, because you could get very interesting effects if that happens). These are of course the same rules that apply to various other DMA accesses.
I doubt very much this helps in copying data structures for your average user-mode application. On the other hand, I don't believe Intel would put this sort of feature into the processor unless they thought it was beneficial in some way. The way I understand it, it's helpful for copying the network receive buffer into the user-mode application that is receiving the data, with less CPU involvement. It doesn't necessarily speed up the actual memory transfer much (if at all), but it offloads the CPU, freeing it to do other things.
I'm pretty sure I saw something not so long ago about this technology [or something very similar] also going into the latest models of processors, so I expect there is some advantage to it.

Confusion regarding memory management in OpenGL

I'm asking this question because I don't want to spend time writing some code that duplicates functionalities of the OpenGL drivers.
Can the OpenGL driver/server hold more data than the video card? Say, I have enough video RAM to hold 10 textures. Can I ask OpenGL to allocate 15 textures without getting a GL_OUT_OF_MEMORY error?
If I can rely on the driver to cleverly send the textures/buffers/objects from the 'normal' RAM to the video RAM when needed then I don't really need to Gen/Delete these objects myself. I become limited by the 'normal' RAM which is often plentiful when compared to the video RAM.
The approach "memory is abundant so I don't need to delete" is bad, and the approach "memory is abundant, so I'll never get out of memory errors" is flawed.
OpenGL memory management is obscure, both for technical reasons (see t.niese's comment above) and for ideological reasons ("you don't need to know, you don't want to know"). There do exist vendor extensions (such as ATI_meminfo) that let you query some non-authoritative numbers (non-authoritative insofar as they could change the next millisecond, and they do not take effects like fragmentation into account).
Generally, for the most part, your assumption that you can use more memory than there is GPU memory is correct.
However, you are usually not able to use all available memory. More likely, there is a limit well below "all available RAM" due to constraints on what memory regions (and how large regions) the driver can allocate, lock, and DMA to/from. And even though you can normally use more memory than will fit on the GPU (even if you used it exclusively), this does not mean careless allocations can't and won't eventually fail.
Usually, but not necessarily, you consume as much system memory as GPU memory, too (without you knowing it, the driver does that secretly). Since the driver swaps resources in and out as needed, it needs to maintain a copy. Sometimes it is necessary to keep 2 or 3 copies (e.g. when streaming or for ARB_copy_buffer operations). Sometimes, mapping a buffer object is yet another copy in a specially allocated block, and sometimes you're allowed to write straight into the driver's memory.
On the other hand, PCIe 2.0 (and PCIe 3.0 even more so) is fast enough to stream vertices from main memory, so you do not even strictly need GPU memory (other than a small buffer). Some drivers will stream dynamic geometry right away from system memory.
Some GPUs do not even have separate system and GPU memory (Intel Sandy Bridge or AMD Fusion).
Also, you should note that deleting objects does not necessarily delete them (at least not immediately). Usually, with very few exceptions, deleting an OpenGL object is merely a tentative delete which prevents you from further referencing the object. The driver will keep the object valid for as long as it needs to.
On the other hand, you really should delete what you do not need any more, and you should delete early. For example, you should delete a shader immediately after attaching it to the program object. This ensures that you do not leak resources, and it is guaranteed to work. Deleting and re-specifying the in-use vertex or pixel buffer when streaming (by calling glBufferData(..., NULL)) is a well-known idiom. This only affects your view of the object, and it allows the driver to continue using the old object in parallel for as long as it needs to.
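A minimal sketch of both practices (object names and the GL_STREAM_DRAW usage hint are placeholders of mine; this assumes a current GL context):

    // Delete a shader as soon as it has been attached; the driver keeps it
    // alive internally for as long as the program object needs it.
    glAttachShader(program, vertexShader);
    glDeleteShader(vertexShader);

    // Streaming: "orphan" the buffer's old storage so the driver can keep
    // using it for in-flight draws while handing us a fresh block to fill.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bufSize, NULL, GL_STREAM_DRAW);    // orphan
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytesThisFrame, vertexData); // refill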
Some additional information to my comment that did not fit in there.
There are different reasons why this is not part of OpenGL.
It isn't an easy task for the system/driver to guess which resources are and will be required. The driver could certainly build an internal heuristic for whether a resource will be required often or rarely (much like a CPU does for if statements, speculatively executing certain code paths based on that guess). But the GPU will not know (without knowing the application code) which resource will be required next. It also has no knowledge of where the geometry is placed in the scene (because you do this yourself, with the model and view matrices you pass to your shader).
If you e.g. have a game where you can walk through a scene, you normally won't render the parts that are out of view. So the GPU could think that these resources are not required anymore, but if you turn around, then all these textures and geometry are required again and need to be moved from system memory to GPU memory, which could result in really bad performance. But the game engine itself, because of its use of octrees (or similar techniques) and the possible paths that can be walked, has in-depth knowledge about the scene: which resources could be removed from the GPU, which could be moved to the GPU while playing, and where it would be necessary to display a loading screen.
If you look at the evolution of OpenGL and which features became deprecated, you will see that it moves in the direction of removing everything except the really required features that can be done best by the graphics card, driver, and system. Everything else is up to the user to implement on their own to get the best performance. (You e.g. create your projection matrix yourself to pass it to the shader, so OpenGL does not even know where the object is placed in the scene.)
Here's my TL;DR answer, I recommend reading Daemon's and t.niese's answers as well:
Can the OpenGL driver/server hold more data than the video card?
Yes
Say, I have enough video RAM to hold 10 textures. Can I ask OpenGL to allocate 15 textures without getting a GL_OUT_OF_MEMORY error?
Yes. Depending on the driver / GPU combination it might even be possible to allocate a single texture that exceeds the GPU's memory, and actually use it for rendering. At my current occupation I exploit that fact to extract slices of arbitrary orientation and geometry from large volumetric datasets, using shaders to apply filters on the voxel data in situ. Works well, but doesn't work for interactive frame rates.

OpenGL Buffer Object internal workings?

I've started to use Pixel Buffer Objects and while I understand how to use them and the gist of what they're doing, I really don't know what's going on under the hood. I'm aware that the OpenGL spec allows for leeway in regards to the exact implementation, but that's still beyond me.
So far as I understand, the Buffer Object typically resides server side in GRAM; though this apparently may vary depending on target and usage. This makes perfect sense as this would be why OpenGL calls on the BOs would operate so fast. But in what such instances would it reside in AGP or system memory? (side question: does PCI-e have an equivalent of AGP memory?)
Also, glMapBuffer() returns a pointer to a block of memory of the BO so the data may be read/written/changed. But how is this done? The manipulations are taking place client side, so the data still has to go from server to client somehow. If so, how is it better than glReadPixels()?
PBOs are clearly better than glReadPixels(), as is obvious from the performance difference; I just don't understand how.
I haven't used FBOs yet, but I've heard they're better to use. Is this true? if so, why?
I can't tell you in what memory the buffer object will be allocated. Actually you mostly answered that question yourself, so you can hope that a good driver will actually do it this way.
glMapBuffer can be implemented the same way as memory-mapped files. Remember the difference between physical memory and virtual address space: when you write to a memory location, the address is mapped through a page table to a physical location. If the required page is marked as swapped out, an interrupt occurs and the system loads the required page from swap into RAM. This mechanism can be used to map files and other resources (like GPU memory) into your process's virtual address space. When you call glMapBuffer, the system allocates some address range (not memory, just addresses) and prepares the relevant entries in the page table. When you try to read/write to these addresses, the system loads/sends the data from/to the GPU. Of course this would be slow, so some buffering is done on the way.
If you constantly transfer data between CPU and GPU, I doubt that PBOs will be faster. They are faster when you make many manipulations on the GPU (like loading from the frame buffer, changing a few texels with the CPU, and using it as a texture again on the GPU). Well, they can be faster in the case of an integrated graphics processor or AGP memory, because then glMapBuffer can map the addresses directly to physical memory, effectively eliminating one copy operation.
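To illustrate why the PBO path can beat a plain glReadPixels, here is a sketch of the usual asynchronous readback pattern (names are placeholders; assumes a current GL context):

    // With a buffer bound to GL_PIXEL_PACK_BUFFER, the last argument of
    // glReadPixels is an offset into that buffer, not a CPU pointer, so the
    // call can return without waiting for the copy to finish.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    // ... do other CPU/GPU work here while the copy proceeds ...

    // Mapping is where the transfer to client-visible memory has to complete.
    void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        // read or modify the pixel data here
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);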
Are FBOs better? For what? They are better when you need to render to texture. That's again because they eliminate one data copy operation.
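And a minimal render-to-texture sketch for the FBO case (again with placeholder names), where the rendered image never has to leave GPU memory before being reused as a texture:

    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    // (Check glCheckFramebufferStatus(GL_FRAMEBUFFER) before using it.)

    // Draw calls now render straight into 'tex'; afterwards the texture can
    // be sampled in another pass without any readback or extra copy.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);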