Specifying the target layer of a 3D rendertarget in vertex shader? [HLSL]

When working in HLSL/DirectX 11, I see there are two methods for binding a 3D rendertarget: either you bind the entire target or you bind it while specifying a layer.
If you bind the entire target, how does one specify in HLSL code the layer to which the output color is applied?
I have a suspicion this requires a geometry shader ... is that correct?
Is there any other approach which would allow this to be done in the vertex shader or elsewhere?

If you bind your whole volume texture (or texture array), you indeed need to use a Geometry Shader to write to a specific slice.
Your GS output structure will look like this:
struct GSOutput
{
    float4 pos : SV_Position;
    uint slice : SV_RenderTargetArrayIndex;
    // Add anything else you need for your triangle
};
Please note that the slice index is not interpolated, so if you need to emit to several slices you need to push one primitive per slice.
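For the whole-array case, the render target view simply covers every slice, so the geometry shader can route each primitive with SV_RenderTargetArrayIndex. A minimal D3D11/C++ sketch (device, textureArray, textureFormat, sliceCount and fullArrayRTV are placeholder names):
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = textureFormat;                      // same format as the array resource
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
rtvd.Texture2DArray.MipSlice = 0;
rtvd.Texture2DArray.FirstArraySlice = 0;
rtvd.Texture2DArray.ArraySize = sliceCount;       // all slices visible to the GS
device->CreateRenderTargetView(textureArray, &rtvd, &fullArrayRTV);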
The second case is where you do not want to use a Geometry Shader.
Create a render target view description with the same parameters as the previous one, but for each slice change these parameters (this is for a Texture2DArray, but it's mostly the same if you use a Texture3D):
D3D11_RENDER_TARGET_VIEW_DESC rtvd;
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
rtvd.Texture2DArray.ArraySize = 1;
rtvd.Texture2DArray.FirstArraySlice = yourslice;
Now you have a render target for the slice only, so you can directly bind a single slice in the pipeline.
Please note that this only works if you know in advance (on the CPU) which slice your draw call will render to. Also, you can render to only a single slice with that draw call.
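Putting it together, a minimal sketch of creating and binding such a view (D3D11/C++; device, context, textureArray, sliceRTV and depthStencilView are placeholder names):
// Using the description filled in above for the chosen slice:
device->CreateRenderTargetView(textureArray, &rtvd, &sliceRTV);
// Bind it so the next draw call renders into that single slice
// (depthStencilView may be NULL if you don't need depth):
context->OMSetRenderTargets(1, &sliceRTV, depthStencilView);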

Related

Is it possible to reprocess a fragment shader before it is drawn to the screen?

Is there a way to make the fragment shader pass through another fragment shader before it is drawn? As in the following example:
Consider that I want to draw a scene but only inside a shape; I can check in the shader whether the TexCoords of the fragment are inside the shape I want.
Pass 1: Bind post processing shader
Pass 2: Draw the scene
Pass 3: Bind default or disable post processing shader
Drawing without post processing shader
Drawing with post processing shader
I'm aware of the framebuffer approach, and it works, but it goes through a process of rendering the whole screen, and that can cost me performance down the line, especially considering that this post-processing shader will be turned on, off and reset several times during the rendering of a frame.
OpenGL does not recognize the idea of chaining shader stages in the manner you suggest. During any particular rendering operation, there is exactly one fragment shader active. Period.
Of course, OpenGL also does not care where your shader strings come from. It doesn't care if there's a single file on a disk with that text in it or not. All it cares about is that you pass text corresponding to valid GLSL to glShaderSource.
So if you like, you can manufacture a single shader from multiple conceptual "shaders". This can be as simple as just concatenating a bunch of file strings together (which glShaderSource can do for you, since it takes multiple strings), or it can be a complex operation where you recognize certain variables as interface variables and carefully synthesize a main function from these disparate pieces.
How you go about doing that is ultimately up to you.
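For example, a minimal sketch of the "concatenate strings" route (the piece names are illustrative):
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
const GLchar *pieces[] = { commonHeader, lightingFunctions, fogFunctions, mainBody };
glShaderSource(fs, 4, pieces, NULL); // NULL lengths: the strings are null-terminated
glCompileShader(fs);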
Alternatively, you can take an "ubershader" approach. That is, put all of the possible post-processing stuff in one shader, and use uniform variables to tell whether or not a particular post-processing step is currently active.
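A rough sketch of that idea (the GLSL source is embedded as a string; program, enableEffect, uApplyEffect and the grayscale step are just illustrative):
const char *fragSrc = R"(
    #version 330 core
    uniform sampler2D uScene;
    uniform bool uApplyEffect;     // post-processing step on/off
    in vec2 vTexCoord;
    out vec4 fragColor;
    void main() {
        vec4 c = texture(uScene, vTexCoord);
        if (uApplyEffect)
            c.rgb = vec3(dot(c.rgb, vec3(0.299, 0.587, 0.114))); // e.g. grayscale
        fragColor = c;
    }
)";
// At draw time, with the program bound, flip the step on or off without switching programs:
glUniform1i(glGetUniformLocation(program, "uApplyEffect"), enableEffect ? 1 : 0);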
Write a shader that does both things: calculates the colour, and discards the fragment if it's outside a certain shape. Render the scene with that shader.
Perhaps if you want to avoid wasting time processing pixels outside the shape, you can set the "scissor rectangle" to the bounding box of your shape, so OpenGL won't even run the shader for pixels outside that box.
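For example (a hedged sketch; boxX/boxY/boxW/boxH stand for the shape's bounding box in window pixels):
glEnable(GL_SCISSOR_TEST);
glScissor(boxX, boxY, boxW, boxH); // origin is the lower-left corner of the window
// ... draw the scene with the combined shader ...
glDisable(GL_SCISSOR_TEST);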

Bad result using compute shader

I made 2 examples of rendering (here I only render triangles with a single texture).
The first uses forward rendering: I simply draw triangles directly to the swapchain with a simple fragment shader:
outColor = vec4(texture(texAlbedo, fragTexCoord).rgba);
The second uses deferred shading: a first pass draws the scene with the texture and a second pass copies the resulting pixels to another texture. The second pass only uses a compute shader.
imageStore(resultImage, ivec2(gl_GlobalInvocationID.xy), vec4(albedo.bgr, 1.0));
I think both results should be the same, but the second has rendering problems.
The result of the first example:
The result of the second example:
I don't understand this problem. Thank you for your help! :)
Thank you for your responses.
The problem was that in the second example, I was creating image views without mipmap levels.
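A minimal sketch of a Vulkan image view that keeps its mip levels (albedoImage, mipLevelCount and albedoView are placeholder names, and the format is assumed):
VkImageViewCreateInfo viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image = albedoImage;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format = VK_FORMAT_R8G8B8A8_UNORM;                 // assumed format
viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.levelCount = mipLevelCount;       // or VK_REMAINING_MIP_LEVELS, not just the base level
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount = 1;
vkCreateImageView(device, &viewInfo, nullptr, &albedoView);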

Can I use different shader programs for the same rendering job?

EDIT:
My question was unclear at first, I'll try to rephrase it:
How do I use different shaders to do different rendering operations on the same mesh polygons? For example, I want to add lighting using one shader and add fog using another shader. I need to use the color interpolated from the first shader in the calculation of the second shader, but I don't know how to do that if I can't (or rather am not supposed to) pass the color buffer around between shaders.
Also (and that was where my question started), I need the same world-view-projection calculations for both shaders, so am I supposed to calculate them in every shader separately? Am I supposed to use one big shader for all my rendering operations?
Original question:
Say I have two different shader programs. The first one calculates the vertex positions in the vertex shader and does some operations in the fragment shader.
Let's say I want to use the fragment shader to do different calculations, but I still want to use the same vertex positions calculated by the first vertex shader. Do I have to calculate the vertex positions again or is there a way to share state between different shader programs?
You have several options:
multi pass
This one usually renders the geometry into depth and "color" buffers first, and then in later passes uses those as input textures while rendering a single rectangle covering the whole screen/view. Deferred shading is an example of this, but there are many other implementations of effects that are not deferred-shading related. Here is an example of multi pass:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
In the first pass the planets, stars and other objects are rendered; in the second the atmosphere is added.
You can combine the passes either by blending or by direct rendering. Direct rendering requires that you render each pass to a texture and compose them in the last one. Blending changes the color of the output in each pass.
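For the blending route, a rough OpenGL sketch (additive blending while drawing the second pass, e.g. the atmosphere, over the first pass's result):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // add this pass's color to what is already in the framebuffer
// ... draw the second pass ...
glDisable(GL_BLEND);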
single pass
What you describe sounds more like you should encode the different shaders as functions inside a single fragment shader... Yes, you can combine more shaders into a single one if they are compatible, and combine their results into the final output color.
A big shader is a performance hit, but I think it would still be faster than having multiple passes doing the same work.
Take a look at this example:
Normal mapping gone horribly wrong
This one computes environmental reflection, lighting and geometry color, and combines them into a single output color.
Exotic shaders
There are also exotic shaders that work around the pipeline limitations, like this one:
Reflection and refraction impossible without recursive ray tracing?
These are used for effects that are believed to be impossible to implement in the GL/GLSL pipeline. Anyway, if the limitations are too binding you can still use a compute shader...

How to apply a vertex shader to all vertices in a scene in OpenGL?

I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post processing. The way it works is that you first render the entire scene to a texture using the shaders you already have. You can then render this texture to the screen using a fragment shader to apply various effects like motion blur, warping or depth of field.
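In rough OpenGL terms it looks like this (a minimal sketch; width, height and the shaders themselves are assumed to exist already):
GLuint fbo, sceneTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
// (a depth attachment would normally be added as well)
// Pass 1: draw the whole scene into sceneTex with the usual shaders.
// Pass 2: glBindFramebuffer(GL_FRAMEBUFFER, 0), bind sceneTex and the
// post-processing shader, then draw a full-screen quad.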
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to just bind a single shader program at the beginning of a batch and every primitive of that batch is subjected to this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first then rerender the texture onto the screen but applying some changes to it (warping it for example). You can also extract the depth buffer and use it if you have a more complex change that you want to apply.
If you bind the shader program you want before the render loop, it will affect all items drawn until you un-bind it (i.e. bind program 0) or bind a different program.

What is the most efficient method of rendering sprites in DirectX 10?

I am currently experimenting with various ways of displaying 2D sprites in DirectX 10. I began by using the ID3DX10Sprite interface to batch draw my sprites in a single call. Eventually, however, I wanted a little more control over how my sprites were rendered, so I decided to look into quad-based sprite rendering (ie each sprite being represented by a quad with a texture applied).
I started out simple: I created a single vertex buffer consisting of 4 vertices that was applied once before the sprites were drawn. I then looped through my sprites, setting the appropriate properties to be passed into the shader, and making a draw call for each sprite, like so: d3dDevice->Draw(4, 0). Though it worked, the draw call for every sprite bugged me, so I looked for a more efficient method.
After searching around, I learned about object instancing and decided to try it out. Everything went well until I tried implementing the most important part of sprites: textures. In short, though I had a texture array (declared at the top of my shader like so: Texture2D textures[10];) that could be successfully sampled within my pixel shader using literals/constants as indexes, I could not figure out how to control which textures were applied to which instances via a texture index.
The idea would be for me to pass in a texture index per instance, that could then be used to sample the appropriate texture in the array within the pixel shader. However, after searching around more, I could not find an example of how it could be done (and found many things suggesting that it could not be done without moving to DirectX 11).
Is that to say that the only way to successfully render sprites via object instancing in DirectX 10 is to render them in batches based on texture? So, for example, if my scene consists of 100 sprites with 20 different textures (each texture referenced by 5 sprites), then it would take 20 separate draw calls to display the scene, and I would only be sending 5 sprites at a time.
In the end, I am rather at a loss. I have done a lot of searching, and seem to be coming up with conflicting information. For example, in this article in paragraph 6 it states:
Using DirectX 10, it is possible to apply different textures in the array to different instances of the same object, thus making them look different
In addition, on page 3 of this whitepaper, it mentions the option to:
Read a custom texture per instance from a texture array
However, I cannot seem to find a concrete example of how the shader can be setup to access a texture array using a per instance texture index.
In the end, the central question is: What is the most efficient method of rendering sprites using DirectX 10?
If the answer is instancing, then is it possible to control which texture is applied to each specific instance within the shader--thereby making it possible to send in much larger batches of sprites along with their appropriate texture index with only a single draw call? Or must I be content with only instancing sprites with the same texture at a time?
If the answer is returning to the use of the provided DX10 Sprite interface, then is there a way for me to have more control over how it is rendered?
As a side note, I have also looked into using a Geometry Shader to create the actual quad, so I would only have to pass in a series of points instead of managing a vertex and instance buffer. Again, though, unless there is a way to control which textures are applied to the generated quads, then I'm back to only batching sprites by textures.
There are a few ways (as usual) to do what you describe.
Please note that using
Texture2D textures[10];
will not allow you to use a variable index for lookup in the Pixel Shader (since technically this declaration will allocate a slot per texture).
So what you need is to create a Texture2DArray instead. This is a bit like a volume texture, but the z component is a whole number (the slice index) and there is no filtering along it.
You will need to generate this texture array, though. The easy way is, on startup, to do one full-screen quad draw call per texture to draw it into a slice of the array (you can create a RenderTargetView for a specific slice). The shader is a simple passthrough here.
To create a Texture Array (the code is in SlimDX, but the options are similar in C++):
var texBufferDesc = new Texture2DDescription
{
    ArraySize = TextureCount,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    Format = format,
    Height = h,
    Width = w,
    OptionFlags = ResourceOptionFlags.None,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
};
Then the shader resource view looks like this:
ShaderResourceViewDescription srvd = new ShaderResourceViewDescription()
{
    ArraySize = TextureCount,
    FirstArraySlice = 0,
    Dimension = ShaderResourceViewDimension.Texture2DArray,
    Format = format,
    MipLevels = 1,
    MostDetailedMip = 0
};
Finally, to get a render target for a specific slice:
RenderTargetViewDescription rtd = new RenderTargetViewDescription()
{
    ArraySize = 1,
    FirstArraySlice = SliceIndex,
    Dimension = RenderTargetViewDimension.Texture2DArray,
    Format = this.Format
};
Bind that to your passthrough shader, set the desired texture as input and the slice as output, and draw a full-screen quad (or full-screen triangle).
Please note that this texture array can also be saved in DDS format (which saves you from regenerating it every time you start your program).
Looking up your texture is then done like this:
Texture2DArray myarray;
In the Pixel Shader:
myarray.Sample(mySampler, float3(uv, SliceIndex));
Now about rendering sprites, you also have the option of GS expansion.
So you create a vertex buffer containing only the position/size/texture index/whatever else you need, with one vertex per sprite.
Issue a draw call with n sprites (the topology needs to be set to point list).
Pass the data through from the vertex shader to the geometry shader.
Expand your point into a quad in the geometry shader; you can find an example doing that (ParticlesGS) in the Microsoft SDK. It's a bit overkill for your case since you only need the rendering part of it, not the animation. If you need some cleaned-up code let me know and I'll quickly make a DX10-compatible sample (in my case I use StructuredBuffers instead of a VertexBuffer).
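The submission side then boils down to something like this (a rough C++/D3D10 sketch; SpriteVertex, spriteVB and spriteCount are illustrative names):
UINT stride = sizeof(SpriteVertex), offset = 0;   // one SpriteVertex per sprite
d3dDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_POINTLIST);
d3dDevice->IASetVertexBuffers(0, 1, &spriteVB, &stride, &offset);
d3dDevice->Draw(spriteCount, 0);                  // one point per sprite, expanded to a quad in the GS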
Doing a pre-made quad and passing the above data in a per-instance VertexBuffer is also possible, but if you have a high number of sprites it will easily blow up your graphics card (by high I mean something like over 3 million particles, which is not much by today's standards; if you're under half a million sprites you'll be totally fine ;)
Include the texture index within the instance buffer and use this to select the correct texture from the texture array per instance:
struct VS
{
    float3 Position: POSITION;
    float2 TexCoord: TEXCOORD0;
    float TexIndex: TexIndex; // From the instance buffer, not the vertex buffer
};
Then pass this value on through to the pixel shader:
struct PS
{
    float4 Position: SV_POSITION;
    float3 TexCoord: TEXCOORD0;
};
..
vout.TexCoord = float3(vin.TexCoord, vin.TexIndex);
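On the CPU side, the matching input layout marks the texture index as per-instance data coming from a second vertex buffer slot. A hedged sketch (C++/D3D10; the offsets, slot assignments and the 4-vertex quad are assumptions for illustration):
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    // slot 0: per-vertex quad data
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA,   0 },
    // slot 1: per-instance texture index, advanced once per instance
    { "TexIndex", 0, DXGI_FORMAT_R32_FLOAT,       1, 0,  D3D10_INPUT_PER_INSTANCE_DATA, 1 },
};
// ...
d3dDevice->DrawInstanced(4, spriteCount, 0, 0); // 4 vertices per quad, one instance per sprite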