I have a few textures that I want to set in my HLSL shader as an array.
Each texture is represented as an ID3D11ShaderResourceView*.
Each texture may be a DIFFERENT size.
Now, if I set them in D3D as an array:
ID3D11ShaderResourceView* m_array[3];
m_array[0] = ...;
m_array[1] = ...;
m_array[2] = ...;
m_deviceContext->PSSetShaderResources(
0, // Start slot
3, // Nb of textures
m_array); // Array
And in my HLSL shader I declared:
Texture2D g_textures[3];
Will it be mapped correctly?
This is, in general, a valid way to map texture arrays from the runtime through to shader execution. It does not matter whether the texture dimensions in the array match; however, you may need to account for the differing sizes within your shader code, depending on exactly how you are sampling the textures.
Also, in your HLSL you are making the assumption that the g_textures array is assigned to slot 0, so if for some reason it doesn't actually go there (e.g. there is another texture resource that comes before it in the shader source), then you won't be setting the intended resource to the correct slot. I find it's better to map them explicitly, e.g.:
Texture2D g_textures[3] : register(t0);
If there is a collision, it will be found at (shader) compile time.
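For illustration, a minimal pixel shader sketch of that layout; the sampler name and the averaging are made up for the example, and GetDimensions is just one way to account for the differing texture sizes:
Texture2D g_textures[3] : register(t0);
SamplerState g_sampler : register(s0);

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    // The textures may have different dimensions; query them if your
    // sampling logic depends on texel size.
    float width, height;
    g_textures[1].GetDimensions(width, height);

    // Note: indexing into a resource array like this must use a
    // compile-time constant in D3D11-era HLSL.
    float4 a = g_textures[0].Sample(g_sampler, uv);
    float4 b = g_textures[1].Sample(g_sampler, uv);
    float4 c = g_textures[2].Sample(g_sampler, uv);
    return (a + b + c) / 3.0;
}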
Related
I have a program with two textures: one from a video, and one from an image.
For the image texture, do I have to pass it to the program at each rendering pass, or can I do it just once? I.e. can I do
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), texture.id)
glUniform1i(textureLocation, 1)
just once? I believed so, but in my experiments this works fine as long as no video texture is involved; as soon as I add the video texture, which I attach at every rendering pass (since it's changing), the only way to get the image is to run the above code at each rendering frame.
Let's dissect what you're doing, including some unnecessary stuff, and what the GL does.
First of all, none of the C-style casts you're doing in your code are necessary. Just use GL_TEXTURE_2D and so on instead of GLenum(GL_TEXTURE_2D).
glActiveTexture(GL_TEXTURE0 + i), where i is in the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1], selects the currently active texture unit. Commands that alter texture unit state will affect unit i as long as you don't call glActiveTexture with another valid unit identifier.
As soon as you call glBindTexture(target, name) while texture unit i is active, the state of that texture unit is changed to refer to name for the specified target when sampling it with the appropriate sampler in a shader (i.e. name might be bound to GL_TEXTURE_2D and the corresponding sampler would have to be a sampler2D). You can only bind one texture object to a specific target for the currently active texture unit, so if you need to sample two 2D textures in your shader, you need to use two texture units.
From the above, it should be obvious what glUniform1i(samplerLocation, i) does.
So, if you have two 2D textures you need to sample in a shader, you need two texture units and two samplers, each referring to one specific unit:
GLuint regularTextureName = 0;
GLuint videoTextureName = 0;
GLint regularTextureSamplerLocation = ...;
GLint videoTextureSamplerLocation = ...;
GLint regularTextureUnit = 0;
GLint videoTextureUnit = 1;
// setup texture objects and shaders ...
// make successfully linked shader program current and query
// locations, or better yet, assign locations explicitly in
// the shader (see below) ...
glActiveTexture(GL_TEXTURE0 + regularTextureUnit);
glBindTexture(GL_TEXTURE_2D, regularTextureName);
glUniform1i(regularTextureSamplerLocation, regularTextureUnit);
glActiveTexture(GL_TEXTURE0 + videoTextureUnit);
glBindTexture(GL_TEXTURE_2D, videoTextureName);
glUniform1i(videoTextureSamplerLocation, videoTextureUnit);
Your fragment shader, where I assume you'll be doing the sampling, would have to have the corresponding samplers:
layout(binding = 0) uniform sampler2D regularTextureSampler;
layout(binding = 1) uniform sampler2D videoTextureSampler;
And that's it. If both texture objects bound to the above units are set up correctly, it doesn't matter if the contents of a texture change dynamically before each fragment shader invocation; there are numerous scenarios where this is commonplace, e.g. deferred rendering or any other render-to-texture algorithm, so you're not exactly breaking new ground with a video texture.
As to the question on how often you need to do this: you need to do it when you need to do it - don't change state that doesn't need changing. If you never change the bindings of the corresponding texture unit, you don't need to rebind the texture at all. Set them up once correctly and leave them alone.
The same goes for the sampler bindings: if you don't sample other texture objects with your shader, you don't need to change the shader program state at all. Set it up once and leave it alone.
In short: don't change state if you don't have to.
EDIT: I'm not quite sure if this is the case or not, but if you're using the same shader with one sampler for both textures in separate shader invocations, you'd have to change something, but guess what, it's as simple as letting the sampler refer to another texture unit:
// same texture unit setup as before
// shader program is current
while (rendering)
{
glUniform1i(samplerLocation, regularTextureUnit);
// draw call sampling the regular texture
glUniform1i(samplerLocation, videoTextureUnit);
// draw call sampling the video texture
}
You should bind the texture before every draw. You only need to set the sampler location once; you can also use layout(binding = 1) in your shader code for that. The location uniform stays with the program, but the texture binding is global GL state. Also be careful with glActiveTexture: it is global GL state too.
Good practice would be:
On program creation, once, set texture location (uniform)
On draw: SetActive(i), Bind(texture), Draw, then SetActive(i), Bind(0), SetActive(0) (sketched below)
Then optimize later for redundant calls.
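Concretely, using the names from the question, the sequence might look like this sketch (program is assumed to be the linked shader program):
// once, after linking the program:
glUseProgram(program);
glUniform1i(textureLocation, 1);        // sampler reads from unit 1, set once

// every draw:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture.id);
// ... issue the draw call ...
glBindTexture(GL_TEXTURE_2D, 0);        // optional: unbind to avoid stale state
glActiveTexture(GL_TEXTURE0);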
Let's say I have a simple 2D texture (shader resource view):
ID3D11ShaderResourceView* srvTexture;
And a default (immediate) device context
ID3D11DeviceContext* dc;
Now, I set my texture in Pixel Shader like this
ID3D11ShaderResourceView* srvArrayTexture[1];
srvArrayTexture[0] = srvTexture;
dc->PSSetShaderResources(
0, // start slot (not important in this case)
1, // nb of views (one texture)
srvArrayTexture); // my texture as array (because DirectX wants array)
I understand this process as sending the actual texture from RAM to GPU memory. I wonder why there are also similar methods like VSSetShaderResources, GSSetShaderResources and so on. Does it mean that every pipeline stage (VS, GS, ...) has its own GPU memory?
If I call
dc->VSSetShaderResources(A);
dc->GSSetShaderResources(A);
dc->PSSetShaderResources(A);
Does it mean that I am sending same data three times? Or maybe my data sending concept is inefficient?
These three functions only bind, they do not copy: they attach the given resource views to different shader stages (vertex shader, pixel shader, geometry shader). The same resource can be read during different stages of the pipeline.
In your example there is only one copy of the resource A; every shader stage it is bound to has the right to read that one buffer.
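For example, binding the same view to two stages is just two bind calls against one resource (a sketch; srvA is illustrative):
// One view over one resource; binding it to two stages copies nothing,
// it just lets both stages read the same GPU memory.
ID3D11ShaderResourceView* srvA = /* created once */ nullptr;
dc->VSSetShaderResources(0, 1, &srvA);
dc->PSSetShaderResources(0, 1, &srvA);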
I am using glDrawRangeElements() to draw textured quads (as triangles). My problem is that I can only bind one texture before that function call, and so all quads are drawn using the same texture.
How to bind a different texture for each quad?
Is this possible when using the glDrawRangeElements() function? If not, what other OpenGL function should I look at?
First, you need to give your fragment shader access to multiple textures. To do this you can use:
Array textures - basically a 3D texture where the 3rd dimension is the number of different 2D texture layers. The restriction is that all the textures in the array must be the same size. Cube map textures can also be used (cube map arrays, GL 4.0 and later) to stack multiple textures.
Bindless textures - these you can use on relatively new hardware only; for Nvidia that's Kepler and later. Because a bindless texture is essentially a pointer to texture memory on the GPU, you can fill an array or uniform buffer with thousands of those and then index into that array in the fragment shader, getting access to the sampler object directly.
Now, how can you index into those arrays per primitive? There are a number of ways. First, you can use instanced drawing if you render the same primitives several times. Here you have gl_InstanceID in GLSL to track which instance is currently being drawn.
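A minimal GLSL sketch of the instanced case, assuming an array texture so the fragment shader can sample with vec3(uv, layer):
// vertex shader (only the lines relevant to the index)
flat out int texLayer;          // flat: not interpolated

void main()
{
    texLayer = gl_InstanceID;   // one array layer per instance
    // ... compute gl_Position as usual ...
}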
In the case where you don't use instancing and also try to texture different parts of the geometry in a single draw call, it is more complex. You should add texture index information on a per-vertex basis. That is, if your geometry has an interleaved per-vertex structure looking like this:
VTN,VTN,VTN... where (V - vertices, T - texture coords, N - normals), you should add another set of data, let's call it I (texture index), so your vertex array will
have the structure VTNI,VTNI,VTNI...
You can also use a separate vertex buffer containing only the texture indices, but for large geometry buffers that will probably be less efficient; interleaving usually allows faster data access.
Once you have it, you can pass that texture index as a varying into the fragment shader (declared flat to make sure it is not interpolated) and index into the specific texture. Yes, that means your vertex array will be larger and contain redundant data, but that's the downside of using multiple textures at the single-primitive level.
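A sketch of that per-vertex variant (attribute location and varying names are illustrative):
// vertex shader (only the lines relevant to the index)
layout(location = 3) in float texIndex;  // the extra "I" in VTNI
flat out float vTexIndex;                // flat: no interpolation
// ... inside main(): vTexIndex = texIndex;

// fragment shader
uniform sampler2DArray textures;
flat in float vTexIndex;
in vec2 vUV;
out vec4 fragColor;

void main()
{
    fragColor = texture(textures, vec3(vUV, vTexIndex));
}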
Hope it helps.
I am currently experimenting with various ways of displaying 2D sprites in DirectX 10. I began by using the ID3DX10Sprite interface to batch draw my sprites in a single call. Eventually, however, I wanted a little more control over how my sprites were rendered, so I decided to look into quad-based sprite rendering (i.e. each sprite being represented by a quad with a texture applied).
I started out simple: I created a single vertex buffer consisting of 4 vertices that was applied once before the sprites were drawn. I then looped through my sprites, setting the appropriate properties to be passed into the shader, and making a draw call for each sprite, like so: d3dDevice->Draw(4, 0);. Though it worked, the draw call for every sprite bugged me, so I looked for a more efficient method.
After searching about, I learned about object instancing, and decided to try it out. Everything went well until I tried implementing the most important part of sprites--textures. In short, though I had a texture array (declared at the top of my shader like so Texture2D textures[10];) that could be successfully sampled within my pixel shader using literals/constants as indexes, I could not figure out how to control which textures were applied to which instances via a texture index.
The idea would be for me to pass in a texture index per instance, that could then be used to sample the appropriate texture in the array within the pixel shader. However, after searching around more, I could not find an example of how it could be done (and found many things suggesting that it could not be done without moving to DirectX 11).
Is that to say that the only way to successfully render sprites via object instancing in DirectX 10 is to render them in batches based on texture? So, for example, if my scene consists of 100 sprites with 20 different textures (each texture referenced by 5 sprites), then it would take 20 separate draw calls to display the scene, and I would only be sending 5 sprites at a time.
In the end, I am rather at a loss. I have done a lot of searching, and seem to be coming up with conflicting information. For example, in this article in paragraph 6 it states:
Using DirectX 10, it is possible to apply different textures in the array to different instances of the same object, thus making them look different
In addition, on page 3 of this whitepaper, it mentions the option to:
Read a custom texture per instance from a texture array
However, I cannot seem to find a concrete example of how the shader can be setup to access a texture array using a per instance texture index.
In the end, the central question is: What is the most efficient method of rendering sprites using DirectX 10?
If the answer is instancing, then is it possible to control which texture is applied to each specific instance within the shader--thereby making it possible to send in much larger batches of sprites along with their appropriate texture index with only a single draw call? Or must I be content with only instancing sprites with the same texture at a time?
If the answer is returning to the use of the provided DX10 Sprite interface, then is there a way for me to have more control over how it is rendered?
As a side note, I have also looked into using a Geometry Shader to create the actual quad, so I would only have to pass in a series of points instead of managing a vertex and instance buffer. Again, though, unless there is a way to control which textures are applied to the generated quads, then I'm back to only batching sprites by textures.
There are a few ways (as usual) to do what you describe.
Please note that using
Texture2D textures[10];
will not allow you to use a variable index for lookup in the pixel shader (since technically this declaration allocates one slot per texture).
So what you need is to create a Texture2DArray instead. This is a bit like a volume texture, but the z component is a whole slice index and there is no filtering between slices.
You will need to generate this texture array, though. An easy way is, on startup, to do one full-screen quad draw call per texture to draw it into a slice of the array (you can create a RenderTargetView for a specific slice). The shader is a simple passthrough here.
To create a texture array (code is in SlimDX, but the options are similar):
var texBufferDesc = new Texture2DDescription
{
ArraySize = TextureCount,
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.None,
Format = format,
Height = h,
Width = w,
OptionFlags = ResourceOptionFlags.None,
SampleDescription = new SampleDescription(1,0),
Usage = ResourceUsage.Default,
};
Then the shader resource view is like this:
ShaderResourceViewDescription srvd = new ShaderResourceViewDescription()
{
ArraySize = TextureCount,
FirstArraySlice = 0,
Dimension = ShaderResourceViewDimension.Texture2DArray,
Format = format,
MipLevels = 1,
MostDetailedMip = 0
};
Finally, to get a render target for a specific slice:
RenderTargetViewDescription rtd = new RenderTargetViewDescription()
{
ArraySize = 1,
FirstArraySlice = SliceIndex,
Dimension = RenderTargetViewDimension.Texture2DArray,
Format = this.Format
};
Bind that to your passthrough shader, set the desired texture as input and the slice as output, and draw a full-screen quad (or full-screen triangle).
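The passthrough shader itself can be as simple as this sketch (resource names are illustrative):
Texture2D sourceTexture : register(t0);
SamplerState linearSampler : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Just copy the source texture into whatever slice is bound as render target.
    return sourceTexture.Sample(linearSampler, uv);
}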
Please note that this texture can also be saved in dds format (which saves you from regenerating it every time you start your program).
Looking up your Texture is like:
Texture2DArray myarray;
In the pixel shader:
myarray.Sample(mySampler, float3(uv, SliceIndex));
Now, about rendering sprites: you also have the option of GS expansion.
You create a vertex buffer containing only position/size/texture index/whatever else you need, with one vertex per sprite.
Send a draw call with n sprites (topology needs to be set to point list).
Pass the data through from the vertex shader to the geometry shader.
Expand your point into a quad in the geometry shader; there is an example doing that, ParticlesGS, in the Microsoft SDK. It's a bit overkill for your case, since you only need the rendering part of it, not the animation. If you need some cleaned-up code, let me know and I'll quickly make a DX10-compatible sample (in my case I use StructuredBuffers instead of a VertexBuffer).
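A trimmed HLSL sketch of that expansion step, under assumed input fields (center and half-size packed in posSize, plus a texture array slice):
struct SpritePoint
{
    float4 posSize  : POSITION;   // xy = sprite center, zw = half-size
    float  texIndex : TEXCOORD0;  // slice in the texture array
};

struct QuadVertex
{
    float4 pos : SV_Position;
    float3 uv  : TEXCOORD0;       // z = texture array slice
};

[maxvertexcount(4)]
void GS(point SpritePoint input[1], inout TriangleStream<QuadVertex> stream)
{
    // Strip order (-1,-1), (-1,1), (1,-1), (1,1) forms the quad.
    float2 corners[4] = { float2(-1, -1), float2(-1, 1), float2(1, -1), float2(1, 1) };

    for (int i = 0; i < 4; i++)
    {
        QuadVertex v;
        v.pos = float4(input[0].posSize.xy + input[0].posSize.zw * corners[i], 0, 1);
        v.uv  = float3(corners[i] * 0.5 + 0.5, input[0].texIndex);
        stream.Append(v);
    }
}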
Doing a pre-made quad and passing the above data in a per-instance vertex buffer is also possible, but if you have a very high number of sprites it will easily blow up your graphics card (by high I mean something like over 3 million particles, which is not much by today's standards, but if you're under half a million sprites you'll be totally fine ;)
Include the texture index within the instance buffer and use this to select the correct texture from the texture array per instance:
struct VS
{
float3 Position: POSITION;
float2 TexCoord: TEXCOORD0;
float TexIndex: TexIndex; // From the instance buffer not the vertex buffer
};
Then pass this value on through to the pixel shader
struct PS
{
float4 Position: SV_POSITION;
float3 TexCoord: TEXCOORD0;
};
..
vout.TexCoord = float3(vin.TexCoord, vin.TexIndex);
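On the API side, the matching input layout would mark that element as per-instance data; roughly this D3D10 sketch, with offsets assuming the struct above:
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    // per-vertex data, from the vertex buffer in slot 0
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA,   0 },
    // per-instance data, from the instance buffer in slot 1, stepped once per instance
    { "TexIndex", 0, DXGI_FORMAT_R32_FLOAT,       1, 0,  D3D10_INPUT_PER_INSTANCE_DATA, 1 },
};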
Using this code I can send one texture to the shader:
devcon->PSSetShaderResources(0, 1, &pTexture);
Of course, I made pTexture with D3DX11CreateShaderResourceViewFromFile.
Shader:
Texture2D Texture;
return color * Texture.Sample(ss, texcoord);
I'm currently only sending one texture to the shader, but I would like to send multiple textures, how is this possible?
Thank You.
You can use multiple textures as long as their count does not exceed your shader profile's limits. Here is an example:
HLSL Code:
Texture2D diffuseTexture : register(t0);
Texture2D anotherTexture : register(t1);
C++ Code:
devcon->V[P|D|G|C|H]SSetShaderResources(texture_index, 1, &texture);
So for example for above HLSL code it will be:
devcon->PSSetShaderResources(0, 1, &diffuseTextureSRV);
devcon->PSSetShaderResources(1, 1, &anotherTextureSRV); (SRV stands for Shader Resource View)
OR:
ID3D11ShaderResourceView * textures[] = { diffuseTextureSRV, anotherTextureSRV};
devcon->PSSetShaderResources(0, 2, textures);
HLSL names can be arbitrary and don't have to correspond to any specific name - only the indexes matter. While the register(tXX) statements are not required, I'd recommend you use them to avoid confusion as to which texture corresponds to which slot.
By using texture arrays. When you fill out your D3D11_TEXTURE2D_DESC, look at the ArraySize member. This desc struct is the one that gets passed to ID3D11Device::CreateTexture2D. Then in your shader you use a 3rd texcoord sampling component, which indicates which 2D texture in the array you are referring to.
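In C++ that looks roughly like this (a sketch mirroring the SlimDX snippet earlier in the thread; w, h, textureCount and device are illustrative):
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = w;                           // all slices share one size
desc.Height = h;
desc.MipLevels = 1;
desc.ArraySize = textureCount;            // number of slices in the array
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* textureArray = nullptr;
device->CreateTexture2D(&desc, nullptr, &textureArray);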
Update: I just realised you might be talking about doing it over multiple draw calls (i.e. for different geometry), in which case you update the shader's texture resource view. If you are using the effects framework you can use ID3DX11EffectShaderResourceVariable::SetResource, or alternatively rebind a new texture using PSSetShaderResources. However, if you are trying to blend between multiple textures, then you should use texture arrays.
You may also want to look into 3D textures, which provide a natural way to interpolate between adjacent textures (whereas a 2D array's slice index is snapped to the nearest integer) via the 3rd element of the texcoord. See the HLSL Sample remarks.
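For comparison, a minimal HLSL sketch of that (resource names are illustrative):
Texture3D volumeTexture : register(t0);
SamplerState linearSampler : register(s0);

float4 PS(float2 uv : TEXCOORD0, float slice : TEXCOORD1) : SV_Target
{
    // With a linear sampler, the third coordinate blends between adjacent
    // slices, unlike a Texture2DArray where the slice snaps to an integer.
    return volumeTexture.Sample(linearSampler, float3(uv, slice));
}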