I have run into an issue I am unsure of how to properly handle. I recently began creating a particle system for my game, and have been using a structure called 'Particle' for my particle data. 'Particle' contains the vertex information for rendering.
The reason I am having issues is that I am pooling my particle structures in heap memory to save on large numbers of allocations. However, I am unsure of how to use an array of pointers with glBufferData; I am under the impression that glBufferData requires the actual structure instances rather than pointers to the structure instances.
I know I can rebuild an array of floats each render just to draw my particles, but is there an OpenGL call like glBufferData which I am missing somewhere that is able to de-reference my pointers as it is going through the data I supply? I would ideally like to prevent having to iterate over the array just to copy the data.
I am under the impression that glBufferData requires the actual structure instance rather then a pointer to the structure instance.
Correct. Effectively glBufferData creates a flat copy of the data presented to it at the address pointed to by the data parameter.
which I am missing somewhere that is able to de-reference my pointers as it is going through the data I supply?
You're thinking of client-side vertex arrays, and those are among the oldest features of OpenGL. They have been around since OpenGL-1.1, released 19 years ago.
You just don't use a buffer object, i.e. you don't call glGenBuffers, glBindBuffer or glBufferData, and instead pass your client-side data address directly to glVertexPointer or glVertexAttribPointer.
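As a rough sketch of what that looks like with the legacy fixed-function pipeline (the Particle layout, MAX_PARTICLES and particleCount below are hypothetical placeholders, and an active GL context is assumed):

    // Client-side vertex array: no buffer object involved; the pointer passed
    // to glVertexPointer refers straight into your own (contiguous!) memory.
    struct Particle { float x, y, z; /* ...other per-particle data... */ };
    Particle particles[MAX_PARTICLES];

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Particle), particles);
    glDrawArrays(GL_POINTS, 0, particleCount);
    glDisableClientState(GL_VERTEX_ARRAY);

Note that even here OpenGL dereferences only the one pointer you pass in; it still expects the vertex data itself to lie contiguously behind that address, so an array of pointers to particles will not work either.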
However, I strongly advise you to actually use buffer objects. The data must be copied to the GPU anyway so that it can be rendered, and doing it through a buffer object lets the OpenGL driver work more efficiently. Also, since OpenGL-4 the use of buffer objects is no longer optional.
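If you go the buffer object route, the copy the data has to undergo anyway is the natural place to dereference your pooled pointers. A minimal sketch, assuming hypothetical names particlePtrs (the pooled pointer array), liveCount and vbo, plus <vector>:

    // Gather the pooled particles into one contiguous staging array...
    std::vector<Particle> staging;
    staging.reserve(liveCount);
    for (size_t i = 0; i < liveCount; ++i)
        staging.push_back(*particlePtrs[i]);   // dereference each pooled particle

    // ...and hand that flat block to the buffer object.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 staging.size() * sizeof(Particle),
                 staging.data(),
                 GL_STREAM_DRAW);              // re-uploaded every frame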
Related
I am currently working with OpenGL - version 4.4 and I had a question about the glMapBuffer function. In some other APIs I have used (e.g. DX12 and Vulkan) you can keep the pointer that the map function returns alive and flush the memory with a separate call instead of deallocating that pointer.
Is there a way to keep this pointer around for a longer duration and update the GPU memory without invalidating the pointer by calling glUnmapBuffer?
A buffer object which has immutable storage (using gl(Named)BufferStorage from GL 4.4 or ARB_buffer_storage) can be so allocated with the GL_MAP_PERSISTENT_BIT flag. This allows glMap(Named)BufferRange to be given the same flag and thereby persistently map that range of the buffer's storage.
A buffer whose storage is persistently mapped can be used for many buffer operations without being unmapped first. However, the onus of all synchronization for data access is now on the user. So get comfortable with using fences and double-buffering to access/manipulate the data.
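A minimal sketch of that pattern (buf and SIZE are placeholders; assumes a GL 4.4 context or ARB_buffer_storage):

    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, SIZE, nullptr, flags);   // immutable storage

    // The pointer stays valid across draw calls; no glUnmapBuffer needed.
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, SIZE, flags);

    // Synchronization is your job: fence (glFenceSync) after submitting work
    // that reads the buffer, and glClientWaitSync on that fence before
    // overwriting a region the GPU may still be reading.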
OpenGL provides the functions glInvalidateBufferData and glInvalidateBufferSubData for invalidating buffers.
What these functions do according to the OpenGL wiki:
When a buffer or region thereof is invalidated, it means that the contents of that buffer are now undefined.
and
The idea is that, by invalidating a buffer or range thereof, the implementation will simply grab a new piece of memory to use for any later operations. Thus, while previously issued GL commands can still read the buffer's original data, you can fill the invalidated buffer with new values (via mapping or glBufferSubData) without causing implicit synchronization.
I'm having a hard time understanding when this function should be called and when it shouldn't. In theory, if the contents of the buffer are going to be overwritten, it makes absolutely no difference whether the previous contents were trashed or not. Does this mean the function should be called before every write? Or only in certain situations?
In theory, if the contents of the buffer are going to be overwritten it makes absolutely no difference if the previous contents were trashed or not.
You're absolutely right.
In essence, whether you call glBufferData or glInvalidateBufferData, you're achieving the same end goal. The difference is that with glBufferData you're saying "I need this much memory now", which in turn means that the old memory is discarded, whereas with glInvalidateBufferData you're saying "I don't need this memory anymore". The same goes for glInvalidateBufferSubData() vs glBufferSubData(), as well as all the other glInvalidate*() functions.
The key here is this: if you have a buffer that you currently don't need anymore, but you're keeping the handle around for later use, then you can call glInvalidateBufferData to tell the driver that the memory can be released.
The same goes for glInvalidateBufferSubData(): if you suddenly don't need the last half of the memory assigned to the buffer, the driver now knows that this chunk of memory can be reassigned.
When should glInvalidateBufferData be used?
So to answer your question: call it when you have a buffer lying around and you don't need its memory anymore.
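For example (vbo, newSize and newData are placeholders):

    // The handle stays alive, but the driver may recycle the old storage.
    glInvalidateBufferData(vbo);

    // Later, refill it; previously issued commands that read the old contents
    // don't force an implicit synchronization.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, newSize, newData);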
From OpenGL wiki:
If you will be updating a small section, use glBufferSubData. If you will update the entire VBO, use glBufferData (this information reportedly comes from a nVidia document). However, another approach reputed to work well when updating an entire buffer is to call glBufferData with a NULL pointer, and then glBufferSubData with the new contents. The NULL pointer to glBufferData lets the driver know you don't care about the previous contents so it's free to substitute a totally different buffer, and that helps the driver pipeline uploads more efficiently.
I realize from my research that what they mean by "helps pipeline uploads more efficiently" is that it allows asynchronous buffer writes, which makes dynamic buffers faster.
However, this excerpt and others seem to suggest that the NULL argument to glBufferData is very important, that it is the cause of the orphaning, and that it is what enables asynchronous writes.
But my common sense tells me it doesn't need to be NULL; you could put a pointer to some of your data there instead. The important part for the orphaning is simply the fact that you're calling glBufferData again and therefore reallocating the buffer's storage, thereby orphaning the old storage and allowing asynchronous writes. Am I correct in thinking this?
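For reference, the orphaning pattern the quoted excerpt describes looks roughly like this (vbo, size and data are placeholders):

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Reallocate the storage; the old memory is "orphaned" and can still be
    // read by commands already in flight.
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    // Fill the fresh storage without stalling on those in-flight commands.
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);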
I want to customize std::vector class in order to use an OpenGL buffer object as storage.
Is it possible to do so without relying on a specific implementation of the STL, by writing a custom allocator and/or subclassing the container?
My problem is how to create wrapper methods for glMapBuffer/glUnmapBuffer in order to use the buffer object for rendering while leaving the container in a consistent state.
Is it possible to do so without relying on a specific implementation of the STL, by writing a custom allocator and/or subclassing the container?
You can, but that doesn't make it a good idea. And even that you can is dependent on your compiler/standard library.
Before C++11, allocators could not have state. They cannot have useful members, because the containers are not required to actually use the allocator instance you pass them; they are allowed to create their own. So you can set the type of allocator, but you cannot give the container a specific allocator instance and expect that instance to always be used.
So your allocator cannot just create a buffer object and store the object internally. It would have to use a global (or private static or whatever) buffer. And even then, multiple instances would be using the same buffer.
You could get around this by having the allocator store (in private static variables) a series of buffer objects and mapped pointers. This would allow you to allocate a buffer object of a particular size and get a mapped pointer back. The deallocator would use the pointer to figure out which buffer object it came from and do the appropriate cleanup for it.
Of course, this would be utterly useless for actually doing anything with those buffers. You can't use a buffer that is currently mapped. And if your allocator deletes the buffer once the vector is done with the memory, then you can never actually use that buffer object to do something.
Also, don't forget: unmapping a buffer can fail for unspecified reasons. If it does fail, you have no way of knowing that it did, because the unmap call is wrapped up in the allocator. And destructors shouldn't throw exceptions.
C++11 does make it so that allocators can have state. Which means that it is more or less possible. You can have the allocator survive the std::vector that built the data, and therefore, you can query the allocator for the buffer object post-mapping. You can also store whether the unmap failed.
That still doesn't make it a good idea. It'll be much easier overall to just use a regular old std::vector and use glBufferSubData to upload it. After all, mapping a buffer with READ_WRITE almost guarantees that it's going to be regular memory rather than a GPU address. Which means that unmapping is just going to perform a DMA, which glBufferSubData does. You won't gain much performance by mapping.
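That simpler approach is just (Vertex, vertices, buildVertices and vbo are placeholder names; requires <vector>):

    std::vector<Vertex> vertices = buildVertices();   // hypothetical fill

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    vertices.size() * sizeof(Vertex),
                    vertices.data());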
Reallocation with buffer objects is going to be much more painful. Since the std::vector object is the one that decides how much extra memory to reserve, you can't play games like allocating a large buffer object and then just expanding the amount of memory that the container uses. Every time the std::vector thinks it needs more memory, you're going to have to create a new buffer object name, and the std::vector will do an element-wise copy from mapped memory to mapped memory.
Not very fast.
Really, what you want is to just make your own container class. It isn't that hard. And it'll be much easier to control when it is mapped and when it is not.
I want to customize std::vector class in order to use an OpenGL buffer object as storage.
While this certainly is possible, I strongly discourage doing so. Mapped buffer objects must be unmapped before they can be used by OpenGL as a data source. Thus such a derived class, let's call it glbuffervector, would have to map/unmap the buffer object for each and every access. Also, taking the address of a dereferenced element will not work, since after dereferencing, the buffer object would be unmapped again.
Instead of trying to make a vector that stores in a buffer object, I'd implement a referencing container, which can be created from an existing buffer object, together with a layout, so that iterators can be obtained. Following an RAII scheme the buffer object would be mapped creating an instance, and unmapped with instance deallocation.
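A rough sketch of such a referencing RAII wrapper (all names here are illustrative, not an existing API; assumes a current GL context):

    template <typename T>
    class MappedBufferView {
    public:
        MappedBufferView(GLuint buffer, std::size_t count,
                         GLbitfield access = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT)
            : buffer_(buffer), count_(count)
        {
            glBindBuffer(GL_ARRAY_BUFFER, buffer_);
            ptr_ = static_cast<T*>(glMapBufferRange(GL_ARRAY_BUFFER, 0,
                                                    count_ * sizeof(T), access));
        }
        ~MappedBufferView()
        {
            glBindBuffer(GL_ARRAY_BUFFER, buffer_);
            glUnmapBuffer(GL_ARRAY_BUFFER);   // can fail; a real class should surface that
        }
        T* begin() { return ptr_; }
        T* end()   { return ptr_ + count_; }
    private:
        GLuint      buffer_;
        std::size_t count_;
        T*          ptr_;
    };

While an instance is alive the buffer is mapped and its elements can be iterated like any other range; once the instance goes out of scope the buffer is unmapped and usable for rendering again.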
Is it possible to do so without relying on a specific implementation of the STL, by writing a custom allocator and/or subclassing the container?
If you are using Microsoft Visual C++, there is a blog post describing how to define a custom STL allocator: "The Mallocator".
I think writing custom allocators is STL-implementation-specific.
I've been using glBufferData, and it makes sense to me that you'd have to specify usage hints (e.g. GL_DYNAMIC_DRAW).
However, it was recently suggested to me on Stack Overflow that I use glMapBuffer or glMapBufferRange to modify non-contiguous blocks of vertex data.
When using glMapBuffer, there does not seem to be any point at which you specify a usage hint. So, my questions are as follows:
Is it valid to use glMapBuffer on a given VBO if you've never called glBufferData on that VBO?
If so, how does OpenGL guess the usage, since it hasn't been given a hint?
What are the advantages/disadvantages of glMapBuffer vs glBufferData? (I know they don't do exactly the same thing. But it seems that by getting a pointer with glMapBuffer and then writing to that address, you can do the same thing glBufferData does.)
Is it valid to use glMapBuffer on a given VBO if you've never called glBufferData on that VBO?
No, because to map some memory, it must be allocated first.
If so, how does OpenGL guess the usage, since it hasn't been given a hint?
It doesn't. You must call glBufferData at least once to initialize the buffer object. If you don't want to actually upload data (because you're going to use glMapBuffer), just pass a null pointer for the data pointer. This works just like with glTexImage: the buffer/texture object is created and later filled with glBufferSubData/glTexSubImage, or, in the case of a buffer object, also through a memory mapping.
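A sketch of that allocate-then-map flow (vbo, SIZE and myData are placeholders; memcpy needs <cstring>):

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, SIZE, NULL, GL_DYNAMIC_DRAW);  // allocate only, no upload

    void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    memcpy(ptr, myData, SIZE);                                   // fill the storage via the mapping
    glUnmapBuffer(GL_ARRAY_BUFFER);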
What are the advantages/disadvantages of glMapBuffer vs glBufferData? (I know they don't do exactly the same thing. But it seems that by getting a pointer with glMapBuffer and then writing to that address, you can do the same thing glBufferData does.)
glMapBuffer allows you to write to the buffer asynchronously from another thread. And with some implementations it may be possible that the OpenGL driver gives your process direct access to the GPU's DMA memory, or even better, to the memory of the GPU itself, for example on SoC architectures with integrated graphics.
No, this appears to be invalid. You must call glBufferData, because otherwise OpenGL cannot know the size of your buffer.
As to which is faster, neither I nor the internet at large appears to know a definite answer. Just test it and see.