I have to create a texture in Direct3D 11 with a format equivalent to Direct3D 9's D3DFMT_A8L8, but the note in the documentation is not clear to me. Would someone explain what should be done? While creating the input layout I am getting an invalid parameter error.
D3DFMT_A8L8 => DXGI_FORMAT_R8G8_UNORM
Note: Use swizzle .rrrg in shader to duplicate red and move green to the alpha components to get Direct3D 9 behavior.
In Direct3D 9, the "Luminance" formats were automatically read in the shader as replicated RGB values because they were greyscale formats. In Direct3D 10+, there are no "Luminance" formats. There are one- and two-channel formats, and there's no "special behavior" with them.
Therefore, in a shader a DXGI_FORMAT_R8G8_UNORM texture will have the first channel in the 'r' channel and the second channel in the 'g' channel. DXGI_FORMAT_R8G8_UNORM has the same memory footprint as D3DFMT_A8L8, i.e. two 8-bit channels of unsigned normalized integer data, but if you want behavior in a modern shader that matches the old one, you have to do it explicitly with a shader swizzle:
Texture2D<float4> Texture : register(t0);
sampler Sampler : register(s0);

float4 color = Texture.Sample(Sampler, pin.TexCoord);

// Special-case for DXGI_FORMAT_R8G8_UNORM treated as D3DFMT_A8L8:
// replicate the first channel into rgb, move the second into alpha.
color = color.rrrg;
This has nothing at all to do with input layouts. It's just the way a texture sampler works.
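For completeness, here is a minimal sketch of creating the replacement texture itself in Direct3D 11; the device, width, height, and pixels names are assumptions for illustration, not from the question:

#include <d3d11.h>
#include <wrl/client.h>

// Create the D3DFMT_A8L8 replacement as DXGI_FORMAT_R8G8_UNORM.
// 'pixels' is assumed to hold two bytes per texel: luminance, then alpha.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8_UNORM;   // L lands in R, A lands in G
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = pixels;
initData.SysMemPitch = width * 2;       // two 8-bit channels per texel

Microsoft::WRL::ComPtr<ID3D11Texture2D> texture;
HRESULT hr = device->CreateTexture2D(&desc, &initData, &texture);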
Related
In shader model 5.1 we can use dynamic indexing for textures like so:
Texture2D textures[5] : register(t0);

float4 PixelShader(Input p) : SV_TARGET
{
    float4 color = textures[0].Sample(someSampler, p.UV);
    return color;
}
Here it's assumed that all textures have 4 channels (RGBA). However, I have no idea how to sample when my texture array is a mix of different formats like BC3 (RGBA), BC4 (single channel, R), BC5 (dual channel, RG). For example, in the case of BC4 I could try
float R = textures[0].Sample(someSampler, p.UV).r;
But wouldn't this just skip over three texels?
HLSL Shader Model 5.1 is quite confusing, because there is a distinction between a "texture array" and a "texture array"...
The first meaning is the one that appeared with DX10: a single texture resource made of several slices, which a shader can index into. The major limitation is that every slice has to share the same size and format.
The second meaning, introduced with APIs like DX12 and Vulkan, is closer to "an array of textures". You can now group multiple resource objects into an array of descriptors, and the shader can freely use any of them with dynamic indexing. The constraints of a texture array are lifted. The one limitation is the need for the NonUniformResourceIndex intrinsic to let the driver fix up indexing limitations a GPU may have.
As for your original question, it is then up to you to know which texture is where. If you group textures with formats like BC4 and BC7, it is probably because one is an albedo map while the other is a gloss map; your shader gives the semantics to what it reads. But if you want a BC4 texture to expand as RRRR instead of the default R001, you can use the component mapping in the shader resource view, as sketched below.
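As a concrete illustration, here is a minimal Direct3D 12 sketch of that component mapping; the device, bc4Texture, and srvHandle names are hypothetical:

#include <d3d12.h>

// Expand a BC4 (single red channel) texture as RRRR instead of the
// default R001 by routing the red component into all four channels.
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_BC4_UNORM;
srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
srvDesc.Shader4ComponentMapping =
    D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING(0, 0, 0, 0); // R,R,R,R
device->CreateShaderResourceView(bc4Texture, &srvDesc, srvHandle);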
This is not a 'texture array'. This is just a way to declare 5 textures bound individually, and the syntax lets you use indices to select t0 through t4. A 'texture array' is declared as follows:
Texture2DArray textures : register(t0);
Every texture in the texture array must be the same format (it's a single resource), and you use a float3 to index it for sampling, with the third component selecting the slice.
float4 color = textures.Sample(someSampler, float3(p.UV, 0));
What you are doing above is basically the same thing as:
Texture2D texture0 : register(t0);
Texture2D texture1 : register(t1);
Texture2D texture2 : register(t2);
Texture2D texture3 : register(t3);
Texture2D texture4 : register(t4);
As such, the formats of each texture are completely independent. The code here:
float R = textures[0].Sample(someSampler, p.UV).r;
This just samples the texture bound to t0 as normal, returning just the red channel. For a BC4 texture, the hardware will decompress the correct 4x4 block (or blocks, depending on the UV and sampler mode) and return the red channel of the reconstruction.
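To make the "independent formats" point concrete, here is a minimal Direct3D 11 sketch that binds a BC3 and a BC4 texture to adjacent slots; the context, bc3Srv, and bc4Srv names are hypothetical shader resource views created elsewhere:

#include <d3d11.h>

// Each slot holds its own resource, so the formats are independent:
// t0 can be DXGI_FORMAT_BC3_UNORM while t1 is DXGI_FORMAT_BC4_UNORM.
ID3D11ShaderResourceView* views[2] = { bc3Srv, bc4Srv };
context->PSSetShaderResources(0, 2, views); // t0 = BC3, t1 = BC4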
If you are new to DirectX and HLSL, I strongly recommend not starting with DirectX 12. It's a fairly unforgiving API designed for graphics experts, so you should consider starting with DirectX 11 instead. Both APIs drive the same hardware; they just do it with different programmer abstractions. DirectX 12 documentation also generally assumes you are already an expert with DirectX 11, and the HLSL usage is basically the same (with the addition of programmatic control over root signatures). See DirectX Tool Kit for DirectX 11 and DirectX 12.
Suppose I have a problem like this: a framebuffer with a texture that contains only one color component (for example, GL_RED) already bound to it. What should the fragment shader look like? I guess the answer is:
...
out float ex_color;
ex_color = ...;
Here comes my question: will the shader automatically detect the format of the framebuffer and write values to it? What if the fragment shader outputs float values but the framebuffer format is GL_RGBA?
By the way, what is the correct approach to creating a texture that has only one component? I read examples from g-truc, which has a sample like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, GLsizei(Texture.dimensions().x), GLsizei(Texture.dimensions().y), 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
What's the meaning of passing GL_RGB as the pixel data format?
Just like vertex shader inputs don't have to match the exact size of the data specified by glVertexAttribPointer, fragment shader outputs don't have to match the exact size of the image they're being written to. If the output provides more components than the destination image format, then the extra components are ignored. If it provides fewer, then the other components have undefined values (unlike vertex inputs, where unspecified values have well-defined values).
What's the meaning of passing GL_RGB as the pixel data format?
That's the pixel transfer format. That describes the format of pixels you're providing to OpenGL, not the format that OpenGL will store them as.
You should always use sized internal formats, not unsized ones like GL_RED.
For a decent explanation of the internal format and format, see:
http://opengl.org/wiki/Image_Format and http://opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml.
You basically want GL_RED for format and likely want GL_R8 (unsigned normalized 8-bit fixed-point) for the internal format.
A long time ago, luminance textures were the norm for single-channel data, but that is a deprecated format in modern GL; red is now the logical "drawable" texture format for single-channel data, just as red/green is the most logical format for two-channel data.
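Putting that together, a minimal sketch of creating a single-channel texture with a sized internal format; the width, height, and data names are assumptions:

// Sized internal format GL_R8 defines the storage; GL_RED + GL_UNSIGNED_BYTE
// describe only the data being uploaded.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of 8-bit texels may not be 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, data);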
As for your shader, there are rules for component expansion defined by the core specification. If you have a texture with one channel as an input but sample it as a vec4, the result is equivalent to vec4(RED, 0.0, 0.0, 1.0).
Writing to the texture is a little bit different.
From the OpenGL 4.4 Spec, section 15.2 (Shader Execution), p. 441 -- http://www.opengl.org/registry/doc/glspec44.core.pdf
When a fragment shader terminates, the value of each active user-defined output variable is written to components of the fragment color output to which it is bound. The set of fragment color components written is determined according to the variable’s data type and component index binding, using the mappings in table 11.1 [pp. 341].
By default, if your fragment shader's output is a float, it is going to write to the x (red) component of your texture. You could use a layout qualifier (e.g. layout (component=1) out float color;) to specify that it should write to y (green), z (blue) or w (alpha) (assuming you have an RGBA texture).
Let's say I have a 32bpp pixel array, but I am using only the blue channel/component of the pixels. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0, x, x, x) in the texture.
I am using OpenGL v1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced with the single-component internal format GL_RED (sized form: GL_R8). In a shader you can use a swizzle, e.g. (with tex and uv standing in for your sampler and texture coordinate):
color.rgb = texture(tex, uv).rrr;
But there's also the option to set a "static" swizzle, as you may call it, in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);
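Since the question targets OpenGL 1.5, where neither GL_RED internal formats nor texture swizzles exist, GL_LUMINANCE is the direct route. A minimal sketch of extracting the blue channel and uploading it; src, width, and height are hypothetical, and blue is assumed to be the first byte of each 32bpp pixel (BGRA byte order):

#include <vector>

// Copy the blue byte of each pixel into a tightly packed buffer, then
// upload it as GL_LUMINANCE, which samples as (x, x, x, 1).
std::vector<unsigned char> lum(width * height);
for (int i = 0; i < width * height; ++i)
    lum[i] = src[i * 4];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, lum.data());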
For offscreen rendering to a texture, I'm attaching to GL_COLOR_ATTACHMENT0 a texture defined by
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
I then render to the FBO using a fragment shader that outputs a vec4, so normally that should be OK. To check that I display the texture correctly, I use glTexSubImage2D() to add some grey pixels in the middle of the texture. The texture IS correctly displayed (I can perfectly see these pixels at the right place), but the rest of the texture is only noisy artifacts (when it's not black).
Does this come from the fact that I use GL_FLOAT for a GL_RGBA texture? If yes, how can I, in GLSL, convert a uvec4 to vec4? The output of my main shader is a vec4, and I don't know how to convert the uvec4 output of a usampler2D texture to my final vec4.
Thank you for any answer you might provide :)
EDIT: I found the solution: I wasn't clearing GL_COLOR_BUFFER_BIT between my two renders.
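For reference, the fix amounts to clearing the attachment before rendering into the FBO again; a minimal sketch, with fbo as a hypothetical framebuffer object name:

// Clear the color attachment so untouched texels don't keep whatever
// garbage happened to be in memory.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw ...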
I know this question was self-answered, but a discussion needs to be had about this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
That does not create a floating-point texture. It creates a texture that contains normalized, unsigned integers.
The actual format of a texture is the third parameter; that defines what the texture actually contains. The last three parameters define how you are trying to upload data to the texture; if the pointer is NULL (and no PBO is bound), then the parameters mean nothing.
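If an actual floating-point texture was the intent, a sized internal format has to say so. A minimal sketch under that assumption:

// GL_RGBA32F makes the storage floating-point; the transfer format/type
// still only describe data being uploaded (none here, so NULL).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);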
Basically, my program uses a framebuffer object to render to a 3D texture.
If the 3D texture I attach to the FBO is in GL_RGB8 format, which is 24 bits per texel, there is no problem (only 8 bits of them are used).
The problem happens when I try to use GL_ALPHA8 (8 bits per texel) as the internal format for the 3D texture, the rendering result is blank.
The Cg fragment shader which I use to render to the FBO looks like this:
void F_PureDecoder(
    float3 vIndexCoord : TEXCOORD0,
    out float color : COLOR)
{
    ... ...
    color = float4(fL3, fL3, fL3, fL3);
}
Am I doing something wrong, or do FBOs not support rendering to 8-bit-per-texel textures? I am primarily unsure whether the output format of the fragment shader is correct, because the framebuffer is 8 bits but the output is 24 bits. Should I modify the output format of the fragment shader? If so, what format should I use?
Thanks very much for any input!
To render to one- or two-component textures, you MUST use the GL_ARB_texture_rg extension; it will NOT work with any other format. See the extension specification for why it doesn't work:
It is also desirable to be able to render to one- and two-
component format textures using capabilities such as framebuffer
objects (FBO), but rendering to I/L/LA formats is under-specified
(specifically how to map R/G/B/A values to I/L/A texture channels).
So, use the GL_R8 internal format. To write to it, just write the full RGBA output; only the R component will be written.
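A minimal sketch of that setup, assuming GL_ARB_texture_rg (or OpenGL 3.0+) is available; a 2D texture is used for brevity, and width and height are hypothetical:

// Create a GL_R8 texture and attach it to an FBO as the color target.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer
}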