How To Buffer Many Vertex, Geometry, and Pixel Shaders - c++

What is the best way to buffer Vertex Shaders, Pixel Shaders, etc into the Device/Device Context without having to reload them from the filesystem every time?
ID3D11Device::CreateVertexShader
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476524(v=vs.85).aspx
ID3D11DeviceContext::VSSetShader
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476493(v=vs.85).aspx
Does ID3D11Device::CreateVertexShader buffer a single instance of the shader in system (not GPU) memory? Can I buffer more than one?
Does ID3D11DeviceContext::VSSetShader buffer a single instance of the shader in GPU (not system) memory? Can I buffer more than one?
What are the recommended methods for buffering shaders within the system?
Thanks!

When you call ID3D11Device::CreateVertexShader, you get back a reference that represents your vertex shader on the GPU, so if you have three vertex shaders you do:
ID3D11VertexShader* vsref1;
ID3D11VertexShader* vsref2;
ID3D11VertexShader* vsref3;
// Each call uploads the compiled bytecode to the GPU once and returns a reference.
device->CreateVertexShader(bytecode1, sizeofbytecode1, nullptr, &vsref1);
device->CreateVertexShader(bytecode2, sizeofbytecode2, nullptr, &vsref2);
device->CreateVertexShader(bytecode3, sizeofbytecode3, nullptr, &vsref3);
Make sure you keep track of vsref1, vsref2, and vsref3 (e.g., as class members). Once created, they are uploaded to your GPU and there is no need to do it again; VSSetShader is then called to select which one you want to use.
Then you can assign your vertex shader to the pipeline anytime using:
deviceContext->VSSetShader(vsref1, nullptr, 0);
or
deviceContext->VSSetShader(vsref2, nullptr, 0);
That doesn't cause an upload; it just tells your GPU which vertex shader to use for the next Draw call.
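If you have many shaders, a natural extension of this is a small cache that creates each shader once at load time and hands out the same reference afterwards, so nothing is ever reloaded from the filesystem. A minimal sketch; the VertexShaderCache class and its member names are hypothetical, not part of the D3D11 API:

#include <d3d11.h>
#include <map>
#include <string>

class VertexShaderCache {
public:
    // Create the shader from compiled bytecode once and store it under a name.
    HRESULT Add(ID3D11Device* device, const std::string& name,
                const void* bytecode, SIZE_T length)
    {
        ID3D11VertexShader* shader = nullptr;
        HRESULT hr = device->CreateVertexShader(bytecode, length, nullptr, &shader);
        if (SUCCEEDED(hr))
            m_shaders[name] = shader;
        return hr;
    }

    // Look up the cached shader; no GPU upload happens here.
    ID3D11VertexShader* Get(const std::string& name) const
    {
        auto it = m_shaders.find(name);
        return it != m_shaders.end() ? it->second : nullptr;
    }

    // Shaders are COM objects and must be released when the cache goes away.
    ~VertexShaderCache()
    {
        for (auto& entry : m_shaders)
            entry.second->Release();
    }

private:
    std::map<std::string, ID3D11VertexShader*> m_shaders;
};

Binding is then just deviceContext->VSSetShader(cache.Get("basic"), nullptr, 0); per draw call, which only selects the already-uploaded shader.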

Related

Sending shader resource to GPU in DirectX 11

Let's say I have a simple 2D texture (shader resource)
ID3D11ShaderResourceView* srvTexture;
And a default (immediate) device context
ID3D11DeviceContext* dc;
Now I set my texture in the pixel shader like this
ID3D11ShaderResourceView* srvArrayTexture[1];
srvArrayTexture[0] = srvTexture;
dc->PSSetShaderResources(
    0,                // start slot (not important in this case)
    1,                // number of views (one texture)
    srvArrayTexture); // my texture as an array (because DirectX expects an array)
I understand this process as sending the actual texture from RAM to GPU memory. I wonder why there are also similar methods like VSSetShaderResources, GSSetShaderResources, and so on. Does it mean that every pipeline stage (VS, GS, ...) has its own GPU memory?
If I call
dc->VSSetShaderResources(0, 1, &A);
dc->GSSetShaderResources(0, 1, &A);
dc->PSSetShaderResources(0, 1, &A);
Does it mean that I am sending same data three times? Or maybe my data sending concept is inefficient?
These three functions only bind, they do not copy: each one attaches a specific resource to a different shader stage (vertex, geometry, or pixel shader). A resource can be read during different stages of the pipeline.
In your example, there is only one buffer, "A". All the shader stages bound to this buffer are simply allowed to read from it.
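To make the point concrete, here is a short sketch (reusing the dc and srvTexture names from the question) that binds the same view to two stages; neither call moves any texture data, and both stages end up reading the same GPU resource:

ID3D11ShaderResourceView* views[1] = { srvTexture };

// Two bindings, one resource: neither call copies anything.
dc->VSSetShaderResources(0, 1, views); // vertex shader can now sample it
dc->PSSetShaderResources(0, 1, views); // pixel shader reads the same memory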

Get data back from OpenGL shader?

My computer doesn't support OpenCL on the GPU or OpenGL compute shaders, so I was wondering if it would be a straightforward process to get data from a vertex or fragment shader?
My goal is to pass 2 textures to the shader and have the shader compute the locations where one texture exists in the other, i.e. where there is a pixel match. I need to retrieve the locations of possible matches from the shader.
Is this plausible? If so, how would I go about it? I have basic OpenGL knowledge; I have set up a program that draws polygons with colors. I really just need a way to get position values back from the shader.
You can render to memory instead of to screen, and then fetch data from it.
Create and bind a Framebuffer Object
Create a Renderbuffer Object and attach it to the Framebuffer Object
Render your scene. The result will end up in the bound Framebuffer Object instead of on the screen.
Use glReadPixels to pull data from the Framebuffer Object.
Be aware that glReadPixels, like most methods of fetching data from GPU memory back to main memory, is slow and likely unsuitable for real-time applications. But it's the best you can do if you don't have features intended for this, like compute shaders, and aren't willing to do the transfer asynchronously with Pixel Buffer Objects.
You can read more about Framebuffers here.
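Sketching those four steps in code (GL 3.x-style calls; width and height are assumed to be defined, and error handling is reduced to a single check):

#include <vector>

GLuint fbo = 0, rbo = 0;

// 1. Create and bind a Framebuffer Object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// 2. Create a Renderbuffer Object and attach it as the color target.
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error

// 3. Render the scene; the result lands in the FBO instead of on screen.
// ... draw calls here ...

// 4. Pull the pixels back into main memory (slow: stalls the pipeline).
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer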

I need my GLSL fragment shader to return the distance calculation

I'm using some standard GLSL (version 120) vertex and fragment shaders to simulate LIDAR. In other words, instead of just returning a color at each x,y position (each pixel, via the fragment shader), it should return color and distance.
I suppose I don't actually need all of the color bits, since I really only want the intensity; so I could store the distance in gl_FragColor.b, for example, and use .rg for the intensity. But then I'm not entirely clear on how I get the value back out again.
Is there a simple way to return values from the fragment shader? I've tried varying variables, but it seems the fragment shader can't write to variables other than gl_FragColor.
I understand that some people use the GLSL pipeline for general-purpose (non-graphics) GPU processing, and that might be an option — except I still do want to render my objects normally.
OpenGL already returns this "distance calculation" via the depth buffer, although it's not linear. You can simply create a frame buffer object (FBO), attach colour and depth buffers, render to it, and you have the result sitting in the depth buffer (although you'll have to undo the depth transformation). This is the easiest option to program provided you are familiar with the depth calculations.
Another method, as you suggest, is storing the value in a colour buffer. You don't have to use the main colour buffer because then you'd lose your colour or have to render twice. Instead, attach a second render target (texture) to your FBO (GL_COLOR_ATTACHMENT1) and use gl_FragData[0] for normal colour and gl_FragData[1] for your distance (for newer GL versions you should be declaring out variables in the fragment shader). It depends on the precision you need, but you'll probably want to make the distance texture 32 bit float (GL_R32F and write to gl_FragData[1].r).
- This is a decent place to start: http://www.opengl.org/wiki/Framebuffer_Object
Yes, GLSL can be used for compute purposes, especially with ARB_image_load_store and NVIDIA's bindless graphics. You even have access to shared memory via compute shaders (though I've never gotten one to run at better than a fifth of the speed). As @Jherico says, fragment shaders generally output to a single place in a framebuffer attachment/render target, and recent features such as image units (ARB_image_load_store) allow you to write to arbitrary locations from a shader. It's probably overkill and slower, but you could also write your distances to a buffer via image units.
Finally, if you want the data back on the host (CPU accessible) side, use glGetTexImage with your distance texture (or glMapBuffer if you decided to use image units).
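A sketch of the second-render-target setup described above (distanceTex, fbo, width, and height are assumed to exist already; GL_R32F needs GL 3.0 or ARB_texture_rg):

// Give the distance texture 32-bit float storage, one channel.
glBindTexture(GL_TEXTURE_2D, distanceTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
             GL_RED, GL_FLOAT, nullptr);

// Attach it as the second color target of the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, distanceTex, 0);

// Tell GL that the fragment shader writes two outputs.
GLenum targets[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, targets);

// In the GLSL 120 fragment shader the two writes are then:
//   gl_FragData[0] = color;                         // normal color output
//   gl_FragData[1] = vec4(distance, 0.0, 0.0, 0.0); // goes to attachment 1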
Fragment shaders output to a rendering buffer. If you want to use the GPU for computing and fetch data back into host memory, you have a few options:
Create a framebuffer and attach a texture to it to hold your data. Once the image has been rendered you can read back information from the texture into host memory.
Use CUDA, OpenCL, or an OpenGL compute shader to write the data into an arbitrary bound buffer, and read back the buffer contents

Storing per-object data for fragment shader

I have a fragment shader that uses a few uniforms which are set on a per-object basis. Is there a way to store these uniforms on the graphics card somehow? I've heard of (but cannot find a tutorial for) vertex buffer objects; is there a trick to storing the information in there, so that I don't need to re-set the variables every time I draw a new object?
Each object has very few vertices, but they are completely static.
There are indeed Uniform Buffer Objects in later versions of OpenGL: http://www.opengl.org/wiki/Uniform_Buffer_Object
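For reference, a minimal UBO sketch (GL 3.1+; the PerObject block name, binding point 0, and the struct layout are illustrative, and std140 padding rules apply to anything less regular than this):

// GLSL side: layout(std140) uniform PerObject { mat4 model; vec4 tint; };
struct PerObjectData {
    float model[16]; // mat4
    float tint[4];   // vec4
};
PerObjectData data = {};

GLuint ubo = 0;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(data), &data, GL_STATIC_DRAW);

// Attach the buffer to binding point 0 and point the shader block at it.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
GLuint blockIndex = glGetUniformBlockIndex(program, "PerObject");
glUniformBlockBinding(program, blockIndex, 0);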
If you use the same shader program ID for all the objects, then you can just set the uniforms once before you render your objects as their value will stay the same until you set them again. So e.g. in your code where you load and compile the shader source, set the uniform variables that are common for all the objects, then render your objects, only setting the per-object uniforms.
The uniform buffer idea in one of the answers can be used if you have different shaders for different objects but you want to share some data between them. This is not necessary in your case as you mention a single shader.
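To illustrate the single-program approach from the answer above, a minimal render-loop sketch (the uniform locations, the objects container, and drawObject are hypothetical):

glUseProgram(shaderID);            // same program for every object
glUniform1f(locSharedValue, 0.5f); // common uniform: set once, stays set

for (const Object& obj : objects) {
    // Only the per-object uniforms change inside the loop.
    glUniformMatrix4fv(locModel, 1, GL_FALSE, obj.modelMatrix);
    drawObject(obj);
}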

OpenGL vertex array pointers, different buffers per component

A bit of context :
I'm working on a GPU emulator (the NV2A if you want to know) at the push-buffer level, and I'm trying to implement the drawing using OpenGL. The GPU commands that I have to emulate contain separate pointers for each vertex component (so positions are in an entirely different memory address than fog coordinates, colors, texture coordinates, etc.)
Other data, like vertex component size, type and stride are also present in the push-buffer, but those are not really relevant to this question.
I've been reading about Vertex Array Objects, but as far as my tests go, the pointers you can set with glVertexAttribPointer should all be relative to a Vertex Buffer Object - something I would like to avoid, as I've already got a copy of the data in memory.
The question :
Is it possible in OpenGL to draw vertices using separate pointers (not managed by any OpenGL API) per vertex component? And what would the code look like, roughly?
PS: Since I'm emulating a GPU, I have to take vertex shader programs into account too. I haven't worked on these yet, so any suggestion on that is welcome too. TIA!
You don't need to use VBOs: glVertexAttribPointer takes a normal CPU pointer if no VBO is bound (you can call glBindBuffer(GL_ARRAY_BUFFER, 0) to make sure). And yes, you can set up one address per attribute stream.
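Roughly, the setup could look like this (compatibility profile / pre-core GL only, since core profile forbids client-side attribute arrays; positionsPtr, colorsPtr, the strides, and vertexCount stand in for the addresses and counts decoded from the push-buffer):

// Make sure no VBO is bound, so the pointers are plain CPU addresses.
glBindBuffer(GL_ARRAY_BUFFER, 0);

// Each attribute stream can live in a completely separate memory block.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, positionStride, positionsPtr);

glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, colorStride, colorsPtr);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);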