Sending a shader resource to the GPU in DirectX 11 - C++

Let's say I have a simple 2D texture (shader resource):
ID3D11ShaderResourceView* srvTexture;
And a default (immediate) device context
ID3D11DeviceContext* dc;
Now I set my texture in the pixel shader like this:
ID3D11ShaderResourceView* srvArrayTexture[1];
srvArrayTexture[0] = srvTexture;
dc->PSSetShaderResources(
0, // start slot (not important in this case)
1, // nb of views (one texture)
srvArrayTexture); // my texture wrapped in an array (the API expects an array)
I understand this process as sending the actual texture from RAM to GPU memory. I wonder why there are also similar methods like VSSetShaderResources, GSSetShaderResources, and so on. Does it mean that every pipeline stage (VS, GS, ...) has its own GPU memory?
If I call
dc->VSSetShaderResources(A);
dc->GSSetShaderResources(A);
dc->PSSetShaderResources(A);
Does it mean that I am sending the same data three times? Or is my concept of how the data is sent simply inefficient?

These three functions bind a resource so that a particular pipeline stage (vertex shader, geometry shader, pixel shader) can read it; they do not copy anything. A resource living in GPU memory can be read by several stages of the pipeline.
In your example there is still only one resource "A". Every shader stage you bind it to simply gains the right to read that same resource.
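To make that concrete, here is a minimal sketch reusing srvTexture and dc from the question. None of these calls moves texel data; each one only stores a reference to the same GPU resource in that stage's binding slot:
ID3D11ShaderResourceView* views[1] = { srvTexture };
dc->VSSetShaderResources(0, 1, views); // same resource, now readable in the VS
dc->GSSetShaderResources(0, 1, views); // ...and in the GS
dc->PSSetShaderResources(0, 1, views); // ...and in the PS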

Related

OpenGL 3/4: Can I bind the same buffer object to different targets?

In my specific case, I'm trying to bind a vertex buffer object as a uniform buffer object.
For more details: in my opaque-object rendering pipeline in deferred shading, I create a G-buffer, then render light volumes one point light at a time using a light VBO.
I then need all these lights available as a UBO for iteration in forward rendering of translucent objects.
Texture objects are directly and forever associated with the target type with which they are first used. This is not the case for buffer objects.
There is no such thing as a "vertex buffer object" or a "uniform buffer object" (ignore the name of the corresponding extensions). There are only "buffer objects", which can be used for various OpenGL operations, like providing arrays of vertex data, or the storage for uniform blocks, or any number of other things. It is 100% fine to use a buffer as a source for vertex data, then use the same buffer (and same portion of that buffer) as a source for uniform data.
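As a rough sketch of what this answer describes (lightBuf and the attribute layout are illustrative assumptions, not taken from the question; the buffer is assumed to be created and filled elsewhere):
GLuint lightBuf; // assume glGenBuffers/glBufferData already happened

// Use it as the vertex source for the light-volume pass:
glBindBuffer(GL_ARRAY_BUFFER, lightBuf);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);

// Later, expose the very same bytes as a uniform block for the forward pass:
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightBuf); // binding point 0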

Get data back from OpenGL shader?

My computer doesn't support OpenCL on the GPU or OpenGL compute shaders, so I was wondering if it would be a straightforward process to get data back from a vertex or fragment shader?
My goal is to pass 2 textures to the shader and have the shader compute the locations where one texture exists in the other, i.e. where there is a pixel match. I need to retrieve the locations of possible matches from the shader.
Is this feasible? If so, how would I go about it? I have basic OpenGL knowledge; I have set up a program that draws polygons with colors. I really just need a way to get position values back from the shader.
You can render to memory instead of to the screen, and then fetch the data back:
1. Create and bind a Framebuffer Object.
2. Create a Renderbuffer Object and attach it to the Framebuffer Object.
3. Render your scene. The result will end up in the bound Framebuffer Object instead of on the screen.
4. Use glReadPixels to pull data from the Framebuffer Object.
Be aware that glReadPixels, like most methods of fetching data from GPU memory back to main memory, is slow and likely unsuitable for real-time applications. But it's the best you can do if you don't have features intended for that, like compute shaders, and aren't willing to do the transfer asynchronously with Pixel Buffer Objects.
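A minimal sketch of those four steps might look like this (width, height, and drawScene() are placeholders; framebuffer-completeness and error checks are omitted):
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);

drawScene(); // renders into the FBO instead of the screen

std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());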

Pass stream hint to existing texture?

I have a texture that was created by another part of my code (with Qt5's bindTexture, but this isn't relevant).
How can I set an OpenGL hint that this texture will be frequently updated?
glBindTexture(GL_TEXTURE_2D, textures[0]);
//Tell OpenGL that I plan on streaming this texture
glBindTexture(GL_TEXTURE_2D, 0);
There is no mechanism for indicating that a texture will be updated repeatedly; such usage hints exist only for buffer objects (e.g., VBOs) through the usage parameter. However, there are two possibilities:
Attach your texture to a framebuffer object and update it that way. That's probably the most efficient method to do what you're asking. The memory associated with the texture remains resident on the GPU, and you can update it at rendering speeds.
Try using a pixel buffer object (commonly called a PBO, which has the OpenGL buffer target GL_PIXEL_UNPACK_BUFFER) as the buffer that Qt writes its generated texture into, and mark that buffer as GL_DYNAMIC_DRAW. You'll still need to call glTexImage*D() with the buffer offset of the PBO (probably zero) for each update, but that approach may afford some efficiency over just blasting texels to the pipe directly through glTexImage*D().
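A rough sketch of that second option (width, height, and newTexels are placeholders, not Qt API; glTexSubImage2D is used here rather than glTexImage2D since the texture already has storage):
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, nullptr, GL_DYNAMIC_DRAW);

// Each update: map the PBO, write the new texels, unmap, then retarget the texture.
void* ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, newTexels, width * height * 4); // newTexels: your generated image
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// With a PBO bound, the last argument is a byte offset into it, not a pointer.
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);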
There is no such hint. OpenGL defines functionality, not performance. Just upload to it whenever you need to.

How To Buffer Many Vertex, Geometry, and Pixel Shaders

What is the best way to buffer Vertex Shaders, Pixel Shaders, etc into the Device/Device Context without having to reload them from the filesystem every time?
ID3D11Device::CreateVertexShader
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476524(v=vs.85).aspx
ID3D11DeviceContext::VSSetShader
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476493(v=vs.85).aspx
Does ID3D11Device::CreateVertexShader buffer a single instance of the shader in system (not GPU) memory? Can I buffer more than one?
Does ID3D11DeviceContext::VSSetShader buffer a single instance of the shader in GPU (not system) memory? Can I buffer more than one?
What are the recommended methods for buffering shaders within the system?
Thanks!
When you use ID3D11Device::CreateVertexShader, you get back a reference (an ID3D11VertexShader*) that represents your vertex shader on the GPU. So if you have 3 vertex shaders you do:
ID3D11VertexShader* vsref1;
ID3D11VertexShader* vsref2;
ID3D11VertexShader* vsref3;
device->CreateVertexShader(bytecode1, sizeofbytecode1, NULL, &vsref1);
device->CreateVertexShader(bytecode2, sizeofbytecode2, NULL, &vsref2);
device->CreateVertexShader(bytecode3, sizeofbytecode3, NULL, &vsref3);
Make sure you keep track of vsref1, 2, and 3 (for example, as class members). Once created, the shaders are uploaded to your GPU and there is no need to do it again; VSSetShader is then called to select which one you want to use.
Then you can assign a vertex shader to the pipeline at any time using:
dc->VSSetShader(vsref1, NULL, 0);
or
dc->VSSetShader(vsref2, NULL, 0);
That doesn't cause an upload; it just tells the GPU which vertex shader to use for the next Draw call.
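One common pattern for "buffering shaders within the system" (hypothetical names, not from the question) is to create every shader once at load time, cache the pointers, and only switch between the already-created objects per draw:
#include <string>
#include <unordered_map>

std::unordered_map<std::string, ID3D11VertexShader*> vsCache;

void LoadVS(ID3D11Device* device, const std::string& name,
            const void* bytecode, SIZE_T size)
{
    ID3D11VertexShader* vs = nullptr;
    device->CreateVertexShader(bytecode, size, nullptr, &vs);
    vsCache[name] = vs; // created once; lives on the GPU from now on
}

// Per draw call: a cheap state change, no re-upload.
// dc->VSSetShader(vsCache["skinned"], nullptr, 0);
// Remember to Release() each cached shader at shutdown.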

What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?

What is the difference between FBO and PBO? Which one should I use for off-screen rendering?
What is the difference between FBO and PBO?
A better question is how they are similar. The only thing that is similar about them is their names.
A Framebuffer Object (note the capitalization: framebuffer is one word, not two) is an object that contains multiple images which can be used as render targets.
A Pixel Buffer Object is:
A Buffer Object. FBOs are not buffer objects. Again: framebuffer is one word.
A buffer object that is used for asynchronous uploading/downloading of pixel data to/from images.
If you want to render to a texture or just a non-screen framebuffer, then you use FBOs. If you're trying to read pixel data back to your application asynchronously, or you're trying to transfer pixel data to OpenGL images asynchronously, then you use PBOs.
They're nothing alike.
A FBO (Framebuffer object) is a target where you can render images other than the default frame buffer or screen.
A PBO (Pixel Buffer Object) allows asynchronous transfers of pixel data to and from the device. This can be helpful to improve overall performance when rendering if you have other things that can be done while waiting for the pixel transfer.
I would read VBOs, PBOs and FBOs:
"Apple has posted two very nice bits of sample code demonstrating PBOs and FBOs. Even though these are Mac-specific, as sample code they're good on any platform, because PBOs and FBOs are OpenGL extensions, not windowing system extensions."
I want to highlight something: an FBO is not a block of memory. Think of it as a struct of attachment pointers. You must attach a texture to the FBO before you can use it; once a texture is attached, you can draw into it for offscreen rendering or for a second-pass effect.
struct FBO {
    AttachColor0* ptr0; // color attachment 0
    AttachColor1* ptr1; // color attachment 1
    AttachColor2* ptr2; // color attachment 2
    AttachDepth*  ptr3; // depth attachment
};
On the other hand, a PBO is a block of memory. Try to think of it as a malloc of x bytes: you can then memcpy data from it into a texture/FBO, or from them back into it.
Why use a PBO? It gives you an intermediate buffer that interfaces with host memory, so OpenGL can keep drawing while a texture is uploaded to or downloaded from the host.
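To illustrate that "don't stall drawing" idea, here is a sketch of an asynchronous readback through a PBO (width and height are assumed; in practice you would map the buffer a frame or two later, once the copy has likely finished):
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

// Returns immediately; the pixels are copied into the PBO asynchronously.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);

// ...render more frames, then:
void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// use data...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);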