What I'm used to from OpenGL is binding resources to the shader between draw calls, e.g. with glUniformMatrix4fv.
Now, as far as I can see, the only direct equivalent is vkCmdPushConstants.
But there is also the option to create one large buffer with the data of all the objects, and then use vkCmdBindDescriptorSets to change the offset so the shader uses the data for the corresponding object (correct me if something is wrong here; this is how I suppose it could be done).
But now, what is the "right" way to get per-object data to your shader? And in what way does it depend on the amount of data the shader needs to change per object?
The other question I have has to do with synchronizing the GPU and CPU.
You need to wait for a frame to finish before you copy the data for the next frame onto the GPU.
So how can you make the buffer copy happen inside a command buffer? Something like a vkFlushMappedMemoryRanges that takes a command buffer. Then you could set semaphores and wait for all uses of the data to complete before overwriting the old data on the GPU with the next frame's data from RAM. And in RAM, use a separate buffer for each image in the swapchain, so you can already start writing the data for the next frames (up to the swapchain image count).
If you cannot synchronize that buffer copy, it seems to me you would need a buffer on the GPU with per-object data for each swapchain image, and that seems like a waste of space to me.
The problem, spelled out a bit: if there is only one buffer containing shader data, both in RAM and in GPU memory, and you do not want the GPU to be idle after each frame (I think you only want to wait if you have already submitted the command buffers for all the frames that fit in the swapchain):
cpu pushes objects positions for frame 0 to the gpu
cpu commits the command buffer for frame 0
gpu starts rendering frame 0
cpu pushes object positions for frame 1 to the gpu
cpu commits the command buffer for frame 1
gpu finishes frame 0
gpu starts frame 1
gpu finishes frame 1
In this example, the data for frame 1 is already pushed to the GPU while it is still rendering frame 0, and this corrupts frame 0.
I'm sorry if my post is a bit incoherent or vague, but it's hard to explain a problem that you do not fully understand.
EDIT:
per-vertex should have been per-object.
A function I would be looking for is:
VkCmdFillGpuMemory(VkCommandBuffer commandbuffer, VkDeviceMemory myMemory, void* ramData).
Preferably also with a range option to copy only part of the data (so there is the option to copy data only for objects whose data has changed).
Uniform data now all has to go through either push constants or UBO-like buffers.
Per-vertex state (vertex attributes) is set with VkPipelineVertexInputStateCreateInfo in VkGraphicsPipelineCreateInfo, and you bind the buffers to be used with vkCmdBindVertexBuffers.
There is a vkCmdCopyBuffer to copy data between buffers.
I am trying to sample a few fragments' depth data that I need to use in my client code (which runs on the CPU).
I tried glReadPixels() on my framebuffer object, but it turns out it stalls the render pipeline, as it transfers data from video memory to main memory through the CPU, and thus causes unbearable lag (please correct me if I am wrong).
I read about pixel buffer objects: we can use them as copies of other buffers and, very importantly, perform the glReadPixels() operation without stalling the pipeline, though at the cost of using outdated information. (That's OK for me.)
But I am unable to understand how to use pixel buffers.
What I've learnt is that we need to sample data from a texture to store it in a pixel buffer. But I am trying to sample from a renderbuffer, which I've read is not possible.
So here's my problem: I want to sample the depth information stored in my renderbuffer, store it in RAM, process it, and do other stuff, without causing any issues to the rendering pipeline. If I use a depth texture instead of a renderbuffer, I don't know how to use it for depth testing.
Is it possible to copy the entire Renderbuffer to the Pixelbuffer and perform read operations on it?
Is there any other way to achieve what I am trying to do?
Thanks!
glReadPixels can also transfer from a framebuffer to a standard GPU-side buffer object. If you generate a buffer and bind it to the GL_PIXEL_PACK_BUFFER target, the data pointer argument to glReadPixels is instead interpreted as an offset into the buffer object. (So it should probably be 0 unless you are doing something clever.)
Once you've copied the pixels you need into a buffer object, you can transfer or map or whatever back to the CPU at a time convenient for you.
I am trying to update my vertex buffer data with the Map function in DX. It does update the data once, but if I update it repeatedly the model disappears. I am actually trying to manipulate vertices in real time via user input, and to do so I have to update the vertex buffer every frame while the vertex is selected.
Perhaps this happens because the Map function disables GPU access to the vertices until the Unmap function is called. So if access is blocked every frame, it kind of makes sense for the mesh to fail to render. However, when I update the vertex every frame and then stop after some time, theoretically the mesh should show up again, but it doesn't.
I know that the proper way to update data every frame is to use constant buffers, but manipulating vertices with constant buffers might not be a good idea, and I don't think there is any other way to update the vertex data. I expect dynamic vertex buffers to be able to handle being updated every frame.
D3D11_MAPPED_SUBRESOURCE mappedResource;
ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
// Disable GPU access to the vertex buffer data.
pRenderer->GetDeviceContext()->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
// Update the vertex buffer here.
memcpy((Vertex*)mappedResource.pData + index, pData, sizeof(Vertex));
// Reenable GPU access to the vertex buffer data.
pRenderer->GetDeviceContext()->Unmap(pVBuffer, 0);
As has already been answered, the key issue is that you are using Discard (which means you won't be able to retrieve the previous contents from the GPU), so I thought I would add a little in terms of options.
The question I have is whether you require performance or the convenience of having the data in one location?
There are a few configurations you can try.
1. Set up your buffer with both CPU read and write access. This, though, means you will be pushing and pulling your buffer up and down the bus. It also causes performance issues on the GPU, such as blocking (waiting for the data to be moved back onto the GPU). I personally don't use this in my editor.
2. If memory is not an issue, keep a copy of your buffer on the CPU side; each frame, map with Discard and block-copy the data across. This is performant, but also memory intensive. You obviously have to manage the data partitioning and indexing into this space. I don't use this; I toyed with it, but it was too much effort!
3. Bite the bullet: map the buffer as per 2, and write each vertex object into the mapped buffer. I do this, and unless the buffer is freaking huge, I haven't had issues with it in my own editor.
4. Use a compute shader to update the buffer: create a resource view and an access view, and pass the updates via a constant buffer. A bit of a sledgehammer to crack a walnut, and it still doesn't change the fact that you may need to pull the data back off the GPU, as per item 1.
There are some variations on managing the buffer that you can also play with, such as interleaving (two copies, one on the GPU while the other is being written to). There are also some rather ornate mechanisms, such as building the contents of the buffer in another thread and then flagging the update.
At the end of the day, DX11 doesn't offer the ability (someone may know better) to edit data in GPU memory directly; there is a lot of shifting between CPU and GPU.
Good luck with whichever technique you choose.
Mapping a buffer with the D3D11_MAP_WRITE_DISCARD flag causes the entire buffer's contents to become invalid. You cannot use it to update just a single vertex. Keep a copy of the buffer on the CPU side instead, and then update the entire buffer on the GPU side once per frame.
If you develop for UWP, use of Map/Unmap may result in sync problems. ID3D11DeviceContext methods are not thread safe: https://learn.microsoft.com/en-us/windows/win32/direct3d11/overviews-direct3d-11-render-multi-thread-intro.
If you update the buffer from one thread and render from another, you may get various errors. In that case you must use some synchronization mechanism, such as critical sections. An example is here: https://developernote.com/2015/11/synchronization-mechanism-in-directx-11-and-xaml-winrt-application/
This article is commonly referenced when anyone asks about video streaming textures in OpenGL.
It says:
To maximize the streaming transfer performance, you may use multiple pixel buffer objects. The diagram shows that 2 PBOs are used simultaneously; glTexSubImage2D() copies the pixel data from a PBO while the texture source is being written to the other PBO.
For nth frame, PBO 1 is used for glTexSubImage2D() and PBO 2 is used to get new texture source. For n+1th frame, 2 pixel buffers are switching the roles and continue to update the texture. Because of asynchronous DMA transfer, the update and copy processes can be performed simultaneously. CPU updates the texture source to a PBO while GPU copies texture from the other PBO.
They provide a simple benchmark program which allows you to cycle between texture updates without PBOs, with a single PBO, and with two PBOs used as described above.
I see a slight performance improvement when enabling one PBO.
But the second PBO makes no real difference.
Right before the code calls glMapBuffer on the PBO, it calls glBufferData with the pointer set to NULL. It does this to avoid a sync stall.
// map the buffer object into client's memory
// Note that glMapBufferARB() causes sync issue.
// If GPU is working with this buffer, glMapBufferARB() will wait(stall)
// for GPU to finish its job. To avoid waiting (stall), you can call
// first glBufferDataARB() with NULL pointer before glMapBufferARB().
// If you do that, the previous data in PBO will be discarded and
// glMapBufferARB() returns a new allocated pointer immediately
// even if GPU is still working with the previous data.
So, here is my question...
Doesn't this make the second PBO completely useless? Just a waste of memory!?
With two PBOs, the texture data is stored three times: once in the texture, and once in each PBO.
With a single PBO, there are two copies of the data, and temporarily a third in the event that glMapBuffer creates a new buffer because the existing one is currently being DMA'ed to the texture.
The comments seem to suggest that OpenGL drivers are internally capable of creating that second buffer if and only when it is required to avoid stalling the pipeline: the in-use buffer is being DMA'ed, and my call to map yields a new buffer for me to write to.
The author of that article appears to be more knowledgeable in this area than myself. Have I completely misunderstood the point?
Answering my own question... but I won't accept it as an answer... (yet).
There are many problems with the benchmark program linked to in the question. It uses immediate mode. It uses GLUT!
The program was spending most of its time doing things we are not interested in profiling. Mainly rendering text via GLUT, and writing pretty stripes to the texture. So I have removed those functions.
I cranked the texture resolution up to 8K, and added more PBO modes:
No PBO (yields 6 fps)
1 PBO. Orphan previous buffer. (yields 12.2 fps)
2 PBOs. Orphan previous buffer. (yields 12.2 fps)
1 PBO. Don't orphan previous buffer. (possible stall; added by myself; yields 12.4 fps)
2 PBOs. Don't orphan previous buffer. (possible stall; added by myself; yields 12.4 fps)
If anyone else would like to examine my code, it is available here.
I have experimented with different texture sizes and different updatePixels functions... I cannot, despite my best efforts, get the double-PBO implementation to perform any better than the single-PBO implementation.
Furthermore, NOT orphaning the previous buffer actually yields better performance, exactly the opposite of what the article claims.
Perhaps modern drivers/hardware do not suffer from the problem this design is attempting to fix...
Perhaps my graphics hardware / driver is buggy, and not taking advantage of the double-PBO...
Perhaps the commonly referenced article is completely wrong?
Who knows. . . .
My test hardware is Intel(R) HD Graphics 5500 (Broadwell GT2).
Using glReadPixels on a single pixel stalls the pipeline, even if I have swapped the buffers just before.
I don't need synchronization; I can do something like this:
pixel = DEFAULT_VALUE;
while (1) {
    draw(pixel);
    swapBuffers();
    pixel = glRead???;
}
How can I do this in an optimized(not stalling) way?
You can do asynchronous pixel transfers via Pixel Buffer Objects (PBOs). When you issue a read call without PBOs, the pipeline is flushed and the CPU has to wait for the GPU to finish rendering and transferring the data. With PBOs, you provide a buffer in advance, and the data will be copied into that buffer when the GPU is ready, so the read call will not stall. It will, of course, stall when you try to access that buffer before it is ready (e.g. via glGetBufferSubData(), or by mapping the buffer for reading, etc.). So ideally, before reading back the data, you can queue up some other render commands and also do some other CPU work before accessing the buffer. The extension spec I linked has an example section, which is quite interesting.
This stuff can also be combined with sync objects. In that case, you can add a fence sync after the read call which will copy the data into the PBO. Then, on the CPU you can actually check if the operation is already completed. If not, you can do some other work and check back.
The main problem with all these asynchronous transfers is that you trade throughput for latency. So if you need that pixel value immediately, and don't have any other work for the GPU and CPU that can be done in between, there is not much to gain; you cannot really avoid the stalling then.
Okay, I read everything about PBOs here: http://www.opengl.org/wiki/Pixel_Buffer_Object
and here: http://www.songho.ca/opengl/gl_pbo.html , but I still have a question, and I don't know if I'll get any benefit out of a PBO in my case.
I'm doing video streaming. Currently I have a function copying my data buffers to 3 different textures; then I do some maths in a fragment shader and display the texture.
I thought a PBO could speed up the CPU -> GPU upload, but consider this example, taken from the second link above:
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds[nextIndex]);
// map the buffer object into client's memory
// Note that glMapBufferARB() causes sync issue.
// If GPU is working with this buffer, glMapBufferARB() will wait(stall)
// for GPU to finish its job. To avoid waiting (stall), you can call
// first glBufferDataARB() with NULL pointer before glMapBufferARB().
// If you do that, the previous data in PBO will be discarded and
// glMapBufferARB() returns a new allocated pointer immediately
// even if GPU is still working with the previous data.
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, DATA_SIZE, 0, GL_STREAM_DRAW_ARB);
GLubyte* ptr = (GLubyte*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
if(ptr)
{
    // update data directly on the mapped buffer
    updatePixels(ptr, DATA_SIZE);
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // release pointer to mapping buffer
}
// measure the time modifying the mapped buffer
t1.stop();
updateTime = t1.getElapsedTimeInMilliSec();
///////////////////////////////////////////////////
// it is good idea to release PBOs with ID 0 after use.
// Once bound with 0, all pixel operations behave normal ways.
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
Well, whatever the behavior of the updatePixels function, it is still using CPU cycles to copy the data to the mapped buffer, isn't it?
So let's say I wanted to use a PBO in this manner: update my frame pixels to the PBO in one function, and then in the display function call glTexSubImage2D (which should return immediately)... Would I see any speed-up in terms of performance?
I can't see why it would be faster... okay, we're no longer waiting during the glTex* call, but we're waiting during the function that uploads the frame to the PBO, aren't we?
Could someone clear that out for me please?
Thanks
The point about buffer objects is that they can be used asynchronously. You can map a BO and then have some other part of the program update it (think threads, think asynchronous I/O) while you keep issuing OpenGL commands. A typical usage scenario with triple-buffered PBOs may look like this:
wait_for_video_frame_load_complete(buffer[k-2])
glUnmapBuffer buffer[k-2]
glTexSubImage2D from buffer[k-2]
buffer[k] = glMapBuffer
start_load_next_video_frame(buffer[k]);
draw_texture
SwapBuffers
This allows your program to do useful work, and even upload data to OpenGL, while the data is also being used for rendering.