Can't get texture.Sample to work, although I can get texture.Load to work fine in a Direct3D 11 shader (HLSL)

In the HLSL for my Direct3D 11 app, I'm having a problem where the texture.Sample intrinsic always returns 0. I know my data and parameters are correct, because if I use texture.Load instead of Sample, the value returned is correct.
Here are my declarations:
extern Texture2D<float> texMask;
SamplerState TextureSampler : register (s2);
Here is the code in my pixel shader that works -- this confirms that my texture is correctly available to the shader and that my texcoord values are correct:
float maskColor = texMask.Load(int3(8192*texcoord.x, 4096*texcoord.y, 0));
If I substitute for this the following line, maskColor is always 0, and I can't figure out why.
float maskColor = texMask.Sample(TextureSampler, texcoord);
TextureSampler has the default state values; texMask is defined with 1 mip level.
I've also tried:
float maskColor = texMask.SampleLevel(TextureSampler, texcoord, 0);
and that also always returns 0.
C++ code for setting up sampler:
D3D11_SAMPLER_DESC sd;
ZeroMemory(&sd, sizeof(D3D11_SAMPLER_DESC));
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
ID3D11SamplerState* pSampler;
dev->CreateSamplerState(&sd, &pSampler);
devcon->PSSetSamplers(2, 1, &pSampler);

Forgive me for reviving such an old post, but I figured it was important to add another possible cause of this sort of issue for others, and this post is the most relevant place I could find to post in.
I, too, had an issue where the HLSL Sample function would always return 0, but only on specific textures and not on others. I checked that the texture was properly bound and that its color values should not have been 0, and was still left wondering why I always got 0 back for this one specific texture but not for others used in the same shader pass. The Load function worked fine, but then I lost the nice features that samplers give us.
As it turns out, in my case, I had accidentally created this texture's description as:
D3D11_TEXTURE2D_DESC desc;
desc.Width = _width;
desc.Height = _height;
desc.MipLevels = 0; // <- Bad!
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
This worked and created a texture that was visible and renderable. However, setting MipLevels to 0 tells DirectX to allocate an entire mip chain for that texture. Me being me, I forgot this while working further on my project, and while DirectX may allocate the textures for the mip chain, drawing to the texture does not cascade through all the levels of the chain (which does make sense, I suppose).
Now, I suppose it's important to note that I'm still new to the whole graphics programming thing, if that wasn't already obvious enough. I have absolutely no idea what mip level, or combination of mip levels, the regular Sample function uses. But I can say that in my case it didn't happen to be level 0. Maybe it will for a smaller mip chain, but this texture in particular had 12 levels in total, of which only level 0 had any actual color information drawn to it. Using the Load function, or SampleLevel to explicitly access mip level 0, worked fine. As I do not need, nor want, the texture I'm trying to sample to have a mip chain, I simply changed its description to fix it.
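For reference, a minimal sketch of the corrected description (assuming, as above, that no mip chain is wanted; only the MipLevels line changes):
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = _width;
desc.Height = _height;
desc.MipLevels = 1; // <- allocate only the top level, so Sample reads the data actually drawn
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;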

I found my problem -- I needed to specify a register for the texture as well as the sampler in my HLSL. I can't find any documentation anywhere that describes why this is necessary, but it did fix my problem.
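For anyone hitting the same wall, a sketch of what the fix looks like end to end (the t0 slot for the texture is my assumption; the sampler already lives in s2): in the HLSL, declare Texture2D<float> texMask : register(t0); alongside SamplerState TextureSampler : register(s2);, and on the C++ side bind the resources to the matching slots:
// pMaskSRV is the ID3D11ShaderResourceView created for texMask (assumed to already exist).
devcon->PSSetShaderResources(0, 1, &pMaskSRV); // matches register(t0)
devcon->PSSetSamplers(2, 1, &pSampler);        // matches register(s2)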

Related

Sampling Back Buffer in vertex Shader always returns 0 and float1 instead of float4

I am totally lost now. I have been trying to read the back buffer inside a vertex shader for days with no luck whatsoever.
I'm trying to read the vertex's position from the back buffer, along with its neighboring pixels. (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.) I've created a separate ID3D11Texture2D and an SRV to go with the back buffer. I copy the back buffer into this SRV's resource and bind the SRV using VSSetShaderResources, but I just can't seem to be able to read from it inside the vertex shader.
I will share some code from the creation of these elements, as well as some RenderDoc screenshots showing that the SRV is bound to the VS stage and has the right texture associated with it. Yet every Load, [] operator, tex2Dlod, or SampleLevel (I bound a SamplerState too) just keeps returning a single 1.0 value, with the rest of the float4 never being returned, meaning I only get a float1 back. I will also include a RenderDoc capture file if anyone wants to take a look.
This is a simple scene from tutorial 42 on the rastertek.com site; there is a ground plane with a cube and a sphere on it:
https://i.imgur.com/cbVC48E.gif
// Here is some code for creating the secondary texture and SRV that houses a copy of the back buffer
// Get the pointer to the back buffer.
result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&backBufferPtr);
if(FAILED(result))
{
MessageBox((*(hwnd)), L"Get the pointer to the back buffer FAILED", L"Error", MB_OK);
return false;
}
// Create another texture2d that we will use to make an SRV out of, and this texture2d will be used to copy the backbuffer to so we can read it in a shader
D3D11_TEXTURE2D_DESC bbDesc;
backBufferPtr->GetDesc(&bbDesc);
bbDesc.MipLevels = 1;
bbDesc.ArraySize = 1;
bbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bbDesc.Usage = D3D11_USAGE_DEFAULT;
bbDesc.MiscFlags = 0;
bbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
result = m_device->CreateTexture2D(&bbDesc, NULL, &m_backBufferTx2D);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Create a Tx2D for backbuffer SRV FAILED", L"Error", MB_OK);
return false;
}
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(descSRV));
descSRV.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = 1;
descSRV.Texture2D.MostDetailedMip = 0;
result = GetDevice()->CreateShaderResourceView(m_backBufferTx2D, &descSRV, &m_backBufferSRV);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Creating BackBuffer SRV FAILED.", L"Error", MB_OK);
return false;
}
// Create the render target view with the back buffer pointer.
result = m_device->CreateRenderTargetView(backBufferPtr, NULL, &m_renderTargetView);
First I render the scene in all white, and then I copy that to the SRV and bind it for the next shader that's supposed to sample it. I'm expecting to get a float4(1.0, 1.0, 1.0, 1.0) value returned when I sample the back buffer at the vertex's on-screen position.
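The copy-and-bind step itself isn't shown above, but it amounts to something like this (a sketch; the context member name m_deviceContext is my assumption):
// Copy the finished white pass from the back buffer into the readable texture,
// then expose that copy to the vertex shader stage (slot t0 in the shader).
m_deviceContext->CopyResource(m_backBufferTx2D, backBufferPtr);
m_deviceContext->VSSetShaderResources(0, 1, &m_backBufferSRV);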
https://i.imgur.com/N9CYg8c.png
As shown on the top left in the event browser, there were three DrawIndexed calls for rendering everything in white, and then a CopyResource.
I've selected the next (fourth) DrawIndexed, and on the right side, outlined in red, are the inputs for this next shader, clearly showing that the back buffer has been successfully bound to the vertex shader.
And now for the part that's giving me trouble
https://i.imgur.com/ENuXk0n.png
I'm going to be debugging this top-left vertex, as shown in the screenshot.
The vertex shader has a
Texture2D prevBackBuffer: register(t0);
written at the top
https://i.imgur.com/8cihNsq.png
When trying to sample the left neighboring pixel
this line of code returns newCoord = float2(158, 220)
When I enter these pixel values in the texture view, I get this pixel:
https://i.imgur.com/DT72Fl1.png
So the coordinates are OK so far, and as outlined, I'm expecting to get a float4(0.0, 0.0, 0.0, 1.0) returned when I sample this pixel.
(I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.)
AND YET, when I sample that pixel right after altering the pixel coordinates (since Load counts pixels from the bottom left, I need
newCoord = float2(158, 379)), I get this:
https://i.imgur.com/8SuwOzz.png
Why is this? Even if it's out of range, Load should return all zeros. Since I'm not sure about the whole Load-counts-from-the-bottom-left thing, I also tried sampling using the top-left coordinates (158, 220), but I end up getting 0.0, ?, ?, ?.
I'm completely stumped and have no idea what to try next. I've tried using a sampler state:
// Create a clamp texture sampler state description.
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = device->CreateSamplerState(&samplerDesc, &m_sampleStateClamp);
but still never get a proper float4 back when reading the texture.
Any ideas, suggestions - I'll take anything at this point.
Oh, and here's a RenderDoc capture file of the frame I was examining:
http://www.mediafire.com/file/1bfiqdpjkau4l0n/my_capture.rdc/file
So from my experience, reading from the back buffer is not really an operation that you want to be doing in the first place. If you have to do any operation on the rendered scene, the best way to do that is to render the scene to an intermediate texture, perform the operation on that texture, then render the final scene to the back buffer. This is generally how things like dynamic shadows are done - the scene is rendered from the perspective of the light, and the resulting buffer is interpreted to get a shadow value that is then applied to the final scene (this is also why dynamic light sources are limited in commercial game engines - they're rather expensive to use).
A similar idea can be applied here. First, render the whole scene to an intermediate texture, bound as a render target view (where the pixel format is specified by you, the programmer). Next, rebind that intermediate texture as a shader resource view, and render the scene again, using the edge detection shader and the real back buffer (where the pixel format is defined by the hardware).
This, fundamentally, is what I believe the issue is - a back buffer is a device dependent resource, and its format can change depending on the hardware. Therefore, using it from a shader is not safe, as you don't always know what the format will be. A device independent resource, on the other hand, will always have the same format, and you can safely use it however you like from a shader.
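A minimal sketch of the intermediate-texture setup described above (the names, dimensions, and the 8-bit UNORM format are my assumptions, not code from the question): create one texture that can serve as both a render target and a shader input, plus the two views onto it:
// One texture, usable as a render target in pass 1 and as a shader input in pass 2.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // a format we choose, unlike the back buffer's
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* sceneTex = nullptr;
ID3D11RenderTargetView* sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &sceneTex);
device->CreateRenderTargetView(sceneTex, nullptr, &sceneRTV);
device->CreateShaderResourceView(sceneTex, nullptr, &sceneSRV);
// Pass 1: render the scene into sceneRTV.
// Pass 2: unbind sceneRTV, bind sceneSRV to the shader, and render to the real back buffer.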
I wasn't able to get sampling an SRV in the vertex shader to work, but what I was able to get working is using backBuffer.SampleLevel inside a compute shader.
I also had to change the sampler to something like this:
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0.5f;
samplerDesc.BorderColor[1] = 0.5f;
samplerDesc.BorderColor[2] = 0.5f;
samplerDesc.BorderColor[3] = 0.5f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = 0;
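For completeness, a sketch of the C++ side of that compute-shader approach (the shader, resource, and dimension names here are placeholders, not code from this answer):
// Bind the back-buffer copy and the point/border sampler to the compute stage,
// plus a UAV for whatever the shader writes out, then dispatch.
context->CSSetShader(edgeCountCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &backBufferSRV);
context->CSSetSamplers(0, 1, &pointBorderSampler);
context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
context->Dispatch((width + 7) / 8, (height + 7) / 8, 1);
// Unbind afterwards so the resources can be bound elsewhere next frame.
ID3D11ShaderResourceView* nullSRV = nullptr;
ID3D11UnorderedAccessView* nullUAV = nullptr;
context->CSSetShaderResources(0, 1, &nullSRV);
context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);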

D3D11: Rendering (depth) to texture results in red square, normal rendering works

I'm currently working on a D3D project and want to implement directional shadow mapping. I set everything up according to the Microsoft Guide, but it just doesn't work.
I've created a 2D texture object, a depth stencil view and a shader resource view and set them up using the following descriptions:
D3D11_TEXTURE2D_DESC shadowMapDesc;
ZeroMemory(&shadowMapDesc, sizeof(D3D11_TEXTURE2D_DESC));
shadowMapDesc.Width = width;
shadowMapDesc.Height = height;
shadowMapDesc.MipLevels = 1;
shadowMapDesc.ArraySize = 1;
shadowMapDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
shadowMapDesc.SampleDesc.Count = 1;
shadowMapDesc.SampleDesc.Quality = 0;
shadowMapDesc.Usage = D3D11_USAGE_DEFAULT;
shadowMapDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
shadowMapDesc.CPUAccessFlags = 0;
shadowMapDesc.MiscFlags = 0;
ID3D11Device& d3ddev = dev.getD3DDevice();
uint32_t *initData = new uint32_t[width * height];
ZeroMemory(initData, sizeof(uint32_t) * width * height);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = initData;
data.SysMemPitch = sizeof(uint32_t) * width;
data.SysMemSlicePitch = 0;
HRESULT hr = d3ddev.CreateTexture2D(&shadowMapDesc, &data, &texture_);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
hr = d3ddev.CreateDepthStencilView(texture_, &depthStencilViewDesc, &stencilView_);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
ZeroMemory(&shaderResourceViewDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
hr = d3ddev.CreateShaderResourceView(texture_, &shaderResourceViewDesc, &shaderView_);
Between these steps there is additional error checking, but all the create-functions return successfully. I then bind the texture, render my scene and unbind the texture using the following functions:
void D3DDepthTexture2D::bindAsTarget(D3DDevice& dev)
{
dev.getDeviceContext().ClearDepthStencilView(stencilView_, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
// Bind target
dev.getDeviceContext().OMSetRenderTargets(0, 0, stencilView_);
// Set viewport
dev.setViewport(static_cast<float>(width_), static_cast<float>(height_), 0.0f, 0.0f);
}
void D3DDepthTexture2D::unbindAsTarget(D3DDevice& dev, float width, float height)
{
// Unbind target
dev.resetRenderTarget();
// Reset viewport
dev.setViewport(width, height, 0.0f, 0.0f);
}
My render-to-depth-texture routine basically looks like this (removing all the unnecessary details):
camera = buildCameraFromLight(light);
setCameraCBuffer(camera);
bindTexture();
activateShader();
for(Object j : objects) {setTransformationCBuffer(j); renderObject(j);}
deactivateShader();
unbindTexture();
Rendering the scene from the light's perspective to the normal render target (screen) results in the proper image (both the actual image and just rendering the depth values). I use a simple vertex shader that just transforms the vertices and a pixel shader that does nothing at all OR returns the depth values (I tried both, doesn't change anything about the end result since we don't care about the color buffer).
After clearing the texture and rendering to it, I render it onto a quad to my screen, but all I get is a red square - so the depth value is 1.0f, the value I've cleared the texture to. I'm really at a loss for what to do; I tried everything, implemented every possible solution from online tutorials, and changed things around on my own, but nothing helps. Here's a list of all the things I already checked:
All FAILED(hr)-calls return false, no error message is printed to the console
I tested whether the geometry gets transformed properly by rendering the geometry and their depth values (z / w) to screen, which worked and looked correct
I tested calculating the depth values in the fragment shader and rendering to a normal render target (basically trying to render my color buffer to texture) instead of a depth stencil texture, but that didn't work either, red square
I tested different formats and format combinations for the shadow map and the views, which either caused the creation to fail or didn't change a thing
I checked whether any call between setting and unsetting my texture as the render target during the render call reset the depth stencil target to something else - not the case
I debugged my texture-to-screen/quad rendering routine already and it works properly with other textures, so I am in fact seeing what the depth texture looks like
I changed the geometry and camera perspective around to see whether that makes anything visible in the depth texture - it doesn't
I came across this similar StackOverflow problem and checked whether my default depth stencil buffer had the same dimensions, AA settings etc. as my texture - and it does (count 1, quality 0)
I really don't know what's up; I've been trying to debug this for hours and hours. I hope someone here can give me advice on what I'm doing wrong or what I could try to fix this. I'm using C++11 with Direct3D11.
Note: I can't debug any of this using NSight or any Visual Studio tools since they don't seem to work properly with my system right now and I don't have any administrative rights to fix any of it. I just have to deal with it for now. I hope the given information and code samples are enough to provide a rough idea of what I could also try to make this work.
Thanks in advance.
I got NSight to work and debugged the whole thing with that. Turns out the depth texture was properly created and filled with the depth and stencil data, and I had just forgotten that all the depth information is stored in the first channel - so I ignored the g and b data, used 1.0 for a, and it worked. Using the g and b channels somehow made the whole thing red (maybe someone wants to add to this and explain why).
Debugging this got much easier once I could observe the texture that is present in the shader - I should've used a debugging tool like NSight or RenderDoc way earlier. Thanks to @EgorShkorov for the advice.

How can I achieve noSmooth() with the P3D renderer?

I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In OF I remember calling setTextureMinMagFilter(GL_NEAREST,GL_NEAREST); on a texture.
What would be the equivalent in Processing ?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;
void setup() {
size(320, 240, P3D);
noSmooth();
//hint(DISABLE_TEXTURE_MIPMAPS);
//((PGraphicsOpenGL)g).textureSampling(0);
//PGL pgl = beginPGL();
//PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
//PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
//endPGL();
buffer=createGraphics(width/8, height/8, P3D);
buffer.noSmooth();
buffer.beginDraw();
//buffer.hint(DISABLE_TEXTURE_MIPMAPS);
//((PGraphicsOpenGL)buffer).textureSampling(0);
PGL bpgl = buffer.beginPGL();
//PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;//commenting this back in results in a blank buffer
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
buffer.endPGL();
buffer.background(0);
buffer.stroke(255);
buffer.line(0, 0, buffer.width, buffer.height);
buffer.endDraw();
}
void draw() {
image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)
You were actually on the right track; you were just passing the wrong value to textureSampling(). Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it using a decompiler, which led me to Texture::usingMipmaps(). There I was able to see the values and what they reflect (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
Where PGraphicsOpenGL's default textureSampling is 5 (TRILINEAR).
I also later found this old comment on an issue equally confirming it.
So to get point/nearest filtering you only need to call noSmooth() on the application itself, and call textureSampling() on your PGraphics.
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
So, considering the above, and only including the code you used to draw the line and to draw the buffer to the application, that gives the desired result.
I needed to combine both GL_LINEAR and GL_NEAREST with one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
PGL pgl = beginPGL();
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();

I need some clarification on the concept of depth/stencil buffers in Direct3D 11 (C++)

I am following tutorials online to help me create my first game, and so far I am understanding every concept that Direct3D 11 has thrown at me.
But there's a certain concept that I can't seem to completely grasp yet: the depth/stencil buffers.
I understand that depth/stencil buffers are used to "compare" the depths of pixels from different objects in a game. If two objects are overlapping each other, then the object whose pixels have less depth will show up closer to the camera. And you define a depth/stencil buffer by filling out a D3D11_TEXTURE2D_DESC.
But my question is: if I fill out the D3D11_TEXTURE2D_DESC structure, am I telling DirectX HOW to compare the pixels of different objects in a game?
If you don't understand my question, please just try to explain the concept of depth/stencil buffers as simply as you can. Also, please try to explain what exactly I am defining by filling out the D3D11_TEXTURE2D_DESC structure.
Thank you.
When you fill out the D3D11_TEXTURE2D_DESC, you are describing the depth/stencil buffer itself: how large it is, what format it uses, and how you want to bind it to the pipeline.
The 'boiler-plate' construction for this is as follows (taken from the Direct3D Win32 Game Visual Studio template, using the C++ helper CD3D11_TEXTURE2D_DESC):
CD3D11_TEXTURE2D_DESC depthStencilDesc(depthBufferFormat,
backBufferWidth, backBufferHeight, 1, 1,
D3D11_BIND_DEPTH_STENCIL);
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(&depthStencilDesc, nullptr,
depthStencil.GetAddressOf()));
The depthBufferFormat is determined by what level of precision you want, whether or not you are using a stencil buffer, and your Direct3D Feature Level. The template uses DXGI_FORMAT_D24_UNORM_S8_UINT by default, which works on all feature levels and provides reasonable precision for depth plus an 8-bit stencil. The size must exactly match your color back buffer.
You bind the depth-stencil buffer to the render pipeline by creating a 'view' for the buffer.
CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed(
m_d3dDevice->CreateDepthStencilView(depthStencil.Get(),
&depthStencilViewDesc, m_depthStencilView.ReleaseAndGetAddressOf()));
You then 'clear' the view each frame and then bind the view for rendering:
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(),
D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(),
m_depthStencilView.Get());
You tell Direct3D how to do the comparison with D3D11_DEPTH_STENCIL_DESC (or the C++ helper CD3D11_DEPTH_STENCIL_DESC).
The 'default' depth/stencil state is:
DepthEnable = TRUE;
DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
DepthFunc = D3D11_COMPARISON_LESS;
StencilEnable = FALSE;
StencilReadMask = D3D11_DEFAULT_STENCIL_READ_MASK;
StencilWriteMask = D3D11_DEFAULT_STENCIL_WRITE_MASK;
const D3D11_DEPTH_STENCILOP_DESC defaultStencilOp =
{ D3D11_STENCIL_OP_KEEP,
D3D11_STENCIL_OP_KEEP,
D3D11_STENCIL_OP_KEEP,
D3D11_COMPARISON_ALWAYS };
FrontFace = defaultStencilOp;
BackFace = defaultStencilOp;
In the DirectX Tool Kit, we provide three common depth states:
// DepthNone
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthEnable = FALSE;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
// DepthDefault
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
// DepthRead
CD3D11_DEPTH_STENCIL_DESC desc(D3D11_DEFAULT);
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
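To actually use one of these descriptions, you create an ID3D11DepthStencilState from it and bind that state to the output-merger stage. A minimal sketch (my addition, following the template's style; the second argument of OMSetDepthStencilState is the stencil reference value):
// Create the state object once...
ComPtr<ID3D11DepthStencilState> depthStencilState;
DX::ThrowIfFailed(
m_d3dDevice->CreateDepthStencilState(&desc, depthStencilState.GetAddressOf()));
// ...then bind it before drawing.
m_d3dContext->OMSetDepthStencilState(depthStencilState.Get(), 0);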

DirectX 9 not rendering after adding transforms

So far I have a cube rendered without any transforms (thus it was rendered with an orthographic projection), and I am working on the previous code to get it into a perspective view, with all the matrices involved. I changed the Flexible Vertex Format so as not to have RHW (thus only having XYZ coordinates and color; I tried ARGB and XRGB but I don't think it matters), and I added a function that sets all the matrices.
Debugging showed that the matrices are being created correctly, functions return correctly (as far as I could see), and there are no crashes (DirectX will never complain if something goes wrong, it just doesn't render); in general, step-by-step debugging shows no paranormal activity.
Existing project (which I modify and eventually prevent from working):
As I advance I also write tutorials of sorts so I can go back and see what I did last time to get it to work, and this time I've kept versions so you can get the code here along with the VS2010 solution; all the DirectX work is done in 3Dheader.h and D3DLoader.h.
Changes:
- the custom vertex format FVF_CUSTOMVERTEX has been changed so as not to include RHW, as I understand it has to be removed so as to be computed through the transformations
- In Render() I add a call to the function setMatrices() which does all the matrix and transform work, and is as follows:
void setMatrices()
{
//--------------transformation code----------------//
D3DXMATRIX objectM, translationM, rotationM, projectionM, lookAtM, finalM;
HRESULT hr;
D3DXMatrixIdentity(&objectM);
D3DXMatrixRotationY(&rotationM, D3DX_PI/4);
//D3DXMatrixMultiply(&finalM, &objectM, &rotationM);
D3DXMatrixPerspectiveFovLH(&projectionM,D3DX_PI/4,(float)yRes/xRes, 1, 100);
D3DXVECTOR3 camera;
camera.x = -10;
camera.y = 0;
camera.z = 0;
D3DXVECTOR3 cameraTarget;
cameraTarget.x = 0;
cameraTarget.y = 0;
cameraTarget.z = 0;
D3DXVECTOR3 cameraUp;
cameraUp.x = 0;
cameraUp.y = 1;
cameraUp.z = 0;
D3DXMatrixLookAtLH(&lookAtM,&camera,&cameraTarget, &cameraUp);
hr = pd3dDevice->SetTransform(D3DTS_WORLD, &objectM);
hr = pd3dDevice->SetTransform(D3DTS_PROJECTION, &projectionM);
hr = pd3dDevice->SetTransform(D3DTS_VIEW, &lookAtM);
D3DVIEWPORT9 view_port;
view_port.X=0;
view_port.Y=0;
view_port.Width=xRes;
view_port.Height=yRes;
view_port.MinZ=0.0f;
view_port.MaxZ=1.0f;
pd3dDevice->SetViewport(&view_port);
}
Note of course that some elements may not be needed and were placed there just in case during my attempts; this is the code I currently have, so we have a common reference.
Thanks in advance for any answers and/or attempts to answer.
In your code (downloaded), xRes and yRes are ints. Due to integer division, yRes/xRes will be zero, because xRes > yRes. You are passing this into the D3DXMatrixPerspectiveFovLH function as the aspect ratio, which will produce an invalid matrix. Instead, cast them to floats first, before doing the division, and pass the result in.
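For example, a sketch of the corrected call (note also that the Aspect parameter of D3DXMatrixPerspectiveFovLH is conventionally width divided by height):
// Cast both dimensions to float before dividing so the aspect ratio isn't truncated to 0.
float aspect = (float)xRes / (float)yRes; // width / height
D3DXMatrixPerspectiveFovLH(&projectionM, D3DX_PI / 4, aspect, 1.0f, 100.0f);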