Passing Texture through Shader DirectX 9 - C++

I am trying to render a texture that gets passed through a pixel shader.
Currently my shader is as follows:
float4 EffectProcess( float2 Tex : TEXCOORD0 ) : COLOR0
{
    return float4(1, 0, 0, 1);
}

technique MyTechnique
{
    pass p0
    {
        VertexShader = null;
        PixelShader  = compile ps_2_0 EffectProcess();
    }
}
As you can see, it is a very basic shader that simply forces every pixel to be red.
UINT uiPasses = 0;
res = g_lpEffect->Begin(&uiPasses, 0);
for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
    res = g_lpEffect->BeginPass(uiPass);
    res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE);
    res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
    res = sprite->End();
    res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();
That is how I am drawing the texture with the shader. I am not sure this is the correct way to do it, though, and I have found very few resources on the subject.
The shader and the texture are both created correctly, and every call returns an HRESULT of S_OK, yet when I run the code the texture shows up perfectly normal, without being overwritten by red.

Both sprite and effect objects store the initial pipeline state by default: they set up their own state when Begin is called and restore the saved state when End is called. So I suspect that sprite->Begin(D3DXSPRITE_SORT_TEXTURE); disables effect processing and your pixel shader is never called. You may try to pass something like D3DXSPRITE_DONOTMODIFY_RENDERSTATE into Begin to prevent it from modifying pipeline state, though this may break sprite rendering. It would be better to get rid of the sprite altogether and write your own sprite shader (both vertex and pixel), because fixed-pipeline rendering is mostly deprecated these days.
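As an illustration only, here is a minimal, untested sketch of that suggested flag change applied to the loop from the question. With D3DXSPRITE_DONOTMODIFY_RENDERSTATE the sprite no longer configures the render and texture stage states it normally sets up, so you may have to set those yourself:
// Sketch: keep ID3DXSprite from touching pipeline state so the pixel shader
// bound by BeginPass() stays active for the batched sprite draw.
UINT uiPasses = 0;
HRESULT res = g_lpEffect->Begin(&uiPasses, 0);
for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
    res = g_lpEffect->BeginPass(uiPass);
    res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DONOTMODIFY_RENDERSTATE);
    res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
    res = sprite->End();          // End()/Flush() submits the batched draw while the shader is still bound
    res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();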

Related

Applying HLSL Pixel Shaders to Win32 Screen Capture

A little background: I'm attempting to make a Windows 10 application which makes the screen look like an old CRT monitor, scanlines, blur, and all. I'm using the official Microsoft screen capture demo as a starting point. At this stage I can capture a window and display it back in a new mouse-through window as if it were the original window.
I am attempting to use the CRT-Royale CRT shaders, which are generally considered the best CRT shaders; these are available in .cg format. I transpile them with cgc to HLSL, then compile the HLSL files to compiled shader byte code with fxc. I am able to successfully load the compiled shaders and create the pixel shader. I then set the pixel shader in the D3D context. I then attempt to copy the capture surface frame to a pixel shader resource and set the created shader's resource. All of this builds and runs, but I do not see any difference in the output image and am not sure how to proceed. Below is the relevant code.
I am not a C++ developer and am making this as a personal project which I plan on open-sourcing once I have a primitive working version. Any advice is appreciated, thanks.
SimpleCapture::SimpleCapture(
IDirect3DDevice const& device,
GraphicsCaptureItem const& item)
{
m_item = item;
m_device = device;
// Set up
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
d3dDevice->GetImmediateContext(m_d3dContext.put());
auto size = m_item.Size();
m_swapChain = CreateDXGISwapChain(
d3dDevice,
static_cast<uint32_t>(size.Width),
static_cast<uint32_t>(size.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
2);
// ADDED THIS
HRESULT hr1 = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
HRESULT hr = d3dDevice->CreatePixelShader(
ps_1_buffer->GetBufferPointer(),
ps_1_buffer->GetBufferSize(),
nullptr,
&ps_1
);
m_d3dContext->PSSetShader(
ps_1,
nullptr,
0
);
// END OF ADDED CHANGES
// Create framepool, define pixel format (DXGI_FORMAT_B8G8R8A8_UNORM), and frame size.
m_framePool = Direct3D11CaptureFramePool::Create(
m_device,
DirectXPixelFormat::B8G8R8A8UIntNormalized,
2,
size);
m_session = m_framePool.CreateCaptureSession(m_item);
m_lastSize = size;
m_frameArrived = m_framePool.FrameArrived(auto_revoke, { this, &SimpleCapture::OnFrameArrived });
}
void SimpleCapture::OnFrameArrived(
Direct3D11CaptureFramePool const& sender,
winrt::Windows::Foundation::IInspectable const&)
{
auto newSize = false;
{
auto frame = sender.TryGetNextFrame();
auto frameContentSize = frame.ContentSize();
if (frameContentSize.Width != m_lastSize.Width ||
frameContentSize.Height != m_lastSize.Height)
{
// The thing we have been capturing has changed size.
// We need to resize our swap chain first, then blit the pixels.
// After we do that, retire the frame and then recreate our frame pool.
newSize = true;
m_lastSize = frameContentSize;
m_swapChain->ResizeBuffers(
2,
static_cast<uint32_t>(m_lastSize.Width),
static_cast<uint32_t>(m_lastSize.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
0);
}
{
auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
com_ptr<ID3D11Texture2D> backBuffer;
check_hresult(m_swapChain->GetBuffer(0, guid_of<ID3D11Texture2D>(), backBuffer.put_void()));
// ADDED THIS
D3D11_TEXTURE2D_DESC txtDesc = {};
txtDesc.MipLevels = txtDesc.ArraySize = 1;
txtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
txtDesc.SampleDesc.Count = 1;
txtDesc.Usage = D3D11_USAGE_IMMUTABLE;
txtDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
ID3D11Texture2D *tex;
d3dDevice->CreateTexture2D(&txtDesc, NULL,
&tex);
frameSurface.copy_to(&tex);
d3dDevice->CreateShaderResourceView(
tex,
nullptr,
srv_1
);
auto texture = srv_1;
m_d3dContext->PSSetShaderResources(0, 1, texture);
// END OF ADDED CHANGES
m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
}
}
DXGI_PRESENT_PARAMETERS presentParameters = { 0 };
m_swapChain->Present1(1, 0, &presentParameters);
... // Truncated
Shaders define how things are drawn. However, you don't draw anything - you just copy, which is why the shader doesn't do anything.
What you should do is remove the CopyResource call and instead draw a full-screen quad to the back buffer (which requires you to create a vertex buffer that you can bind, set the back buffer as the render target, and finally call Draw/DrawIndexed to actually render something, which will then invoke the shader).
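For illustration, a rough sketch of such a draw in this code base. This is hedged: quadVB, quadLayout, quadVS, frameSRV and the Vertex struct are placeholders you would still have to create; only backBuffer, m_d3dContext, m_lastSize and ps_1 come from the question.
// Sketch only - not the original code. Bind the back buffer as a render target,
// bind the captured frame as a shader resource, and issue a real draw call so
// the pixel shader actually runs.
com_ptr<ID3D11RenderTargetView> rtv;
check_hresult(d3dDevice->CreateRenderTargetView(backBuffer.get(), nullptr, rtv.put()));

ID3D11RenderTargetView* rtvs[] = { rtv.get() };
m_d3dContext->OMSetRenderTargets(1, rtvs, nullptr);

D3D11_VIEWPORT vp = { 0.0f, 0.0f,
                      static_cast<float>(m_lastSize.Width),
                      static_cast<float>(m_lastSize.Height),
                      0.0f, 1.0f };
m_d3dContext->RSSetViewports(1, &vp);

// Full-screen quad as a 4-vertex triangle strip (vertex buffer, input layout and
// a pass-through vertex shader - quadVB, quadLayout, quadVS - must already exist).
UINT stride = sizeof(Vertex), offset = 0;
m_d3dContext->IASetInputLayout(quadLayout);
m_d3dContext->IASetVertexBuffers(0, 1, &quadVB, &stride, &offset);
m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
m_d3dContext->VSSetShader(quadVS, nullptr, 0);
m_d3dContext->PSSetShader(ps_1, nullptr, 0);
m_d3dContext->PSSetShaderResources(0, 1, &frameSRV);   // SRV of the captured frame
m_d3dContext->Draw(4, 0);                               // this is what invokes the pixel shader

// Unbind the SRV so the capture texture can be written again next frame.
ID3D11ShaderResourceView* nullSRV = nullptr;
m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);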
Also - since I'm not sure whether you already do this and just stripped it from the shown code - functions like CreatePixelShader don't return HRESULTs just for the fun of it - you should check what is actually returned, because DirectX silently returns most errors and expects you to handle them, instead of crashing your program.
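For example, nothing more than a sketch over the two calls added in the question:
// Check the results instead of ignoring hr/hr1.
HRESULT hr = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
if (FAILED(hr))
{
    // e.g. the path is relative to the current working directory, not the .exe
    return;
}
hr = d3dDevice->CreatePixelShader(ps_1_buffer->GetBufferPointer(),
                                  ps_1_buffer->GetBufferSize(),
                                  nullptr, &ps_1);
if (FAILED(hr))
{
    // e.g. the blob is not valid ps_4_0+ bytecode for this device
    return;
}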

Sampling Back Buffer in vertex Shader always returns 0 and float1 instead of float4

I am totally lost now. I have been trying to read the back buffer inside a vertex shader for days with no luck whatsoever.
I'm trying to read the vertex's position from the back buffer along with its neighboring pixels. (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.) I've created a separate ID3D11Texture2D and an SRV to go with the back buffer. I copy the back buffer into this SRV's resource and bind the SRV using VSSetShaderResources, but I just can't seem to be able to read from it inside the vertex shader.
I will share some code here from the creation of these elements, as well as some RenderDoc screenshots that keep showing that the SRV is bound to the VS stage and has the right texture associated with it. Yet every Load, [] operator, tex2Dlod, or SampleLevel call (I bound a SamplerState too) just keeps returning a single 1.0 value, with the rest of the float4 never being returned, meaning I only get a float1 back. I will also include a RenderDoc capture file if anyone wants to take a look.
This is a simple scene from tutorial 42 on the rastertek.com site, there is a ground plane with a cube and a sphere on it :
https://i.imgur.com/cbVC48E.gif
// Here is some code from creating the secondary texture and SRV that house a copy of the back buffer
// Get the pointer to the back buffer.
result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&backBufferPtr);
if(FAILED(result))
{
MessageBox((*(hwnd)), L"Get the pointer to the back buffer FAILED", L"Error", MB_OK);
return false;
}
// Create another texture2d that we will use to make an SRV out of, and this texture2d will be used to copy the backbuffer to so we can read it in a shader
D3D11_TEXTURE2D_DESC bbDesc;
backBufferPtr->GetDesc(&bbDesc);
bbDesc.MipLevels = 1;
bbDesc.ArraySize = 1;
bbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bbDesc.Usage = D3D11_USAGE_DEFAULT;
bbDesc.MiscFlags = 0;
bbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
result = m_device->CreateTexture2D(&bbDesc, NULL, &m_backBufferTx2D);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Create a Tx2D for backbuffer SRV FAILED", L"Error", MB_OK);
return false;
}
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(descSRV));
descSRV.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = 1;
descSRV.Texture2D.MostDetailedMip = 0;
result = GetDevice()->CreateShaderResourceView(m_backBufferTx2D, &descSRV, &m_backBufferSRV);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Creating BackBuffer SRV FAILED.", L"Error", MB_OK);
return false;
}
// Create the render target view with the back buffer pointer.
result = m_device->CreateRenderTargetView(backBufferPtr, NULL, &m_renderTargetView);
First I render the scene entirely in white and then I copy that to the SRV and bind it for the next shader that's supposed to sample it. I'm expecting to get a float4(1.0, 1.0, 1.0, 1.0) value returned when I sample the back buffer at the vertex's on-screen position:
https://i.imgur.com/N9CYg8c.png
As shown on the top left in the event browser, there were three DrawIndexed calls for rendering everything in white, followed by a CopyResource.
I've selected the next (fourth) DrawIndexed, and on the right side, outlined in red, are the inputs for this next shader, clearly showing that the back buffer has been successfully bound to the vertex shader.
And now for the part that's giving me trouble
https://i.imgur.com/ENuXk0n.png
I'm going to debug the top-left vertex shown in the screenshot.
The vertex shader has a
Texture2D prevBackBuffer: register(t0);
declaration at the top.
https://i.imgur.com/8cihNsq.png
When trying to sample the left neighboring pixel,
this line of code returns newCoord = float2(158, 220).
When I enter these pixel values in the texture view, I get this pixel:
https://i.imgur.com/DT72Fl1.png
So the coordinates are OK so far, and as outlined, I'm expecting to get a float4(0.0, 0.0, 0.0, 1.0) returned when I sample this pixel
(again, I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader).
AND YET, when I sample that pixel right after adjusting the pixel coordinates (since Load counts pixels from the bottom left, I need
newCoord = float2(158, 379)), I get this:
https://i.imgur.com/8SuwOzz.png
Why is this? Even if it's out of range, Load should return all zeros. Since I'm not sure about the whole "Load counts from the bottom left" thing, I also tried sampling using the top-left coordinates (158, 220), but I end up getting 0.0, ?, ?, ?.
I'm completely stumped and have no idea what to try next. I've tried using a sampler state:
// Create a clamp texture sampler state description.
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = device->CreateSamplerState(&samplerDesc, &m_sampleStateClamp);
but I still never get a proper float4 back when reading the texture.
Any ideas or suggestions - I'll take anything at this point.
Oh, and here's a RenderDoc file of the frame I was examining:
http://www.mediafire.com/file/1bfiqdpjkau4l0n/my_capture.rdc/file
So from my experience, reading from the back buffer is not really an operation that you want to be doing in the first place. If you have to do any operation on the rendered scene, the best way to do that is to render the scene to an intermediate texture, perform the operation on that texture, then render the final scene to the back buffer. This is generally how things like dynamic shadows are done - the scene is rendered from the perspective of the light, and the resulting buffer is interpreted to get a shadow value that is then applied to the final scene (this is also why dynamic light sources are limited in commercial game engines - they're rather expensive to use).
A similar idea can be applied here. First, render the whole scene to an intermediate texture, bound as a render target view (where the pixel format is specified by you, the programmer). Next, rebind that intermediate texture as a shader resource view, and render the scene again, using the edge detection shader and the real back buffer (where the pixel format is defined by the hardware).
This, fundamentally, is what I believe the issue is - a back buffer is a device dependent resource, and its format can change depending on the hardware. Therefore, using it from a shader is not safe, as you don't always know what the format will be. A device independent resource, on the other hand, will always have the same format, and you can safely use it however you like from a shader.
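A minimal sketch of that setup, assuming a D3D11 device in m_device (all other names here are placeholders, not the asker's code):
// Create an intermediate scene texture that can be both rendered to and sampled.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = screenWidth;                    // match the back-buffer size
desc.Height = screenHeight;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;    // a format you choose, device independent
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* sceneTex = nullptr;
ID3D11RenderTargetView* sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
m_device->CreateTexture2D(&desc, nullptr, &sceneTex);
m_device->CreateRenderTargetView(sceneTex, nullptr, &sceneRTV);
m_device->CreateShaderResourceView(sceneTex, nullptr, &sceneSRV);

// Pass 1: render the whole scene with sceneRTV bound via OMSetRenderTargets.
// Pass 2: bind sceneSRV with VSSetShaderResources/PSSetShaderResources, set the real
//         back-buffer RTV as the render target, and run the edge-detection shader.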
I wasn't able to get sampling an SRV in the vertex shader to work, but what I was able to get working is using backBuffer.SampleLevel inside a compute shader. I also had to change the sampler to something like this:
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0.5f;
samplerDesc.BorderColor[1] = 0.5f;
samplerDesc.BorderColor[2] = 0.5f;
samplerDesc.BorderColor[3] = 0.5f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = 0;
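A hedged sketch of how that sampler might be created and bound for a compute shader pass (device, context, edgeDetectCS, backBufferSRV and outputUAV are placeholder names, not the asker's code):
ID3D11SamplerState* pointSampler = nullptr;
device->CreateSamplerState(&samplerDesc, &pointSampler);

context->CSSetShader(edgeDetectCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &backBufferSRV);   // copy of the back buffer
context->CSSetSamplers(0, 1, &pointSampler);
context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
// SampleLevel works in a compute shader because the mip level is given explicitly;
// plain Sample needs derivatives and is only available in pixel shaders.
context->Dispatch((width + 7) / 8, (height + 7) / 8, 1); // assuming 8x8 thread groups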

D3D11: Rendering (depth) to texture results in red square, normal rendering works

I'm currently working on a D3D project and want to implement directional shadow mapping. I set everything up according to the Microsoft Guide, but it just doesn't work.
I've created a 2D texture object, a depth stencil view and a shader resource view and set them up using the following descriptions:
D3D11_TEXTURE2D_DESC shadowMapDesc;
ZeroMemory(&shadowMapDesc, sizeof(D3D11_TEXTURE2D_DESC));
shadowMapDesc.Width = width;
shadowMapDesc.Height = height;
shadowMapDesc.MipLevels = 1;
shadowMapDesc.ArraySize = 1;
shadowMapDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
shadowMapDesc.SampleDesc.Count = 1;
shadowMapDesc.SampleDesc.Quality = 0;
shadowMapDesc.Usage = D3D11_USAGE_DEFAULT;
shadowMapDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
shadowMapDesc.CPUAccessFlags = 0;
shadowMapDesc.MiscFlags = 0;
ID3D11Device& d3ddev = dev.getD3DDevice();
uint32_t *initData = new uint32_t[width * height];
ZeroMemory(initData, sizeof(uint32_t) * width * height);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = initData;
data.SysMemPitch = sizeof(uint32_t) * width;
data.SysMemSlicePitch = 0;
HRESULT hr = d3ddev.CreateTexture2D(&shadowMapDesc, &data, &texture_);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
hr = d3ddev.CreateDepthStencilView(texture_, &depthStencilViewDesc, &stencilView_);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
ZeroMemory(&shaderResourceViewDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
hr = d3ddev.CreateShaderResourceView(texture_, &shaderResourceViewDesc, &shaderView_);
Between these steps there is additional error checking, but all the create-functions return successfully. I then bind the texture, render my scene and unbind the texture using the following functions:
void D3DDepthTexture2D::bindAsTarget(D3DDevice& dev)
{
    dev.getDeviceContext().ClearDepthStencilView(stencilView_, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
    // Bind target
    dev.getDeviceContext().OMSetRenderTargets(0, 0, stencilView_);
    // Set viewport
    dev.setViewport(static_cast<float>(width_), static_cast<float>(height_), 0.0f, 0.0f);
}

void D3DDepthTexture2D::unbindAsTarget(D3DDevice& dev, float width, float height)
{
    // Unbind target
    dev.resetRenderTarget();
    // Reset viewport
    dev.setViewport(width, height, 0.0f, 0.0f);
}
My render-to-depth-texture routine basically looks like this (removing all the unnecessary details):
camera = buildCameraFromLight(light);
setCameraCBuffer(camera);
bindTexture();
activateShader();
for(Object j : objects) {setTransformationCBuffer(j); renderObject(j);}
deactivateShader();
unbindTexture();
Rendering the scene from the light's perspective to the normal render target (screen) results in the proper image (both the actual image and just rendering the depth values). I use a simple vertex shader that just transforms the vertices and a pixel shader that does nothing at all OR returns the depth values (I tried both, doesn't change anything about the end result since we don't care about the color buffer).
After clearing the texture and rendering to it, I render it onto a quad to my screen, but all I get is a red square - so the depth value is 1.0f, the value I've cleared the texture to. I'm really at a loss for what to do, I tried everything, implemented every possible solution from online tutorials or changed things around on my own, but nothing helps. Here's a list of all the things I already checked:
All FAILED(hr)-calls return false, no error message is printed to the console
I tested whether the geometry gets transformed properly by rendering the geometry and their depth values (z / w) to screen, which worked and looked correct
I tested calculating the depth values in the fragment shader and rendering to a normal render target (basically trying to render my color buffer to texture) instead of a depth stencil texture, but that didn't work either, red square
I tested different formats and format combinations for the shadow map and the views, which either caused the creation to fail or didn't change a thing
I checked whether any call between setting and unsetting my texture as the render target during the render call reset the depth stencil target to something else - not the case
I debugged my texture-to-screen/quad rendering routine already and it works properly with other textures, so I am in fact seeing what the depth texture looks like
I changed the geometry and camera perspective around to see whether that makes anything visible in the depth texture - it doesn't
I came across this similar StackOverflow problem and checked whether my default depth stencil buffer had the same dimensions, AA settings etc. as my texture - and it does (count 1, quality 0)
I really don't know what's up, I've been trying to debug this for hours and hours. I hope someone here can give me any advice on what I'm doing wrong or what I could try to fix this. I'm using C++11 with Direct3D11.
Note: I can't debug any of this using NSight or any Visual Studio tools since they don't seem to work properly with my system right now and I don't have any administrative rights to fix any of it. I just have to deal with it for now. I hope the given information and code samples are enough to provide a rough idea of what I could also try to make this work.
Thanks in advance.
I got NSight to work and debugged the whole thing with that. It turns out the depth texture was properly created and filled with the depth and stencil data; I had just forgotten that all the depth information is stored in the first channel - so I ignored the g and b data, used 1.0 for a, and it worked. Using the g and b channels somehow made the whole thing red (maybe someone wants to add to this and explain why).
Debugging this got much easier once I could observe the texture that is present in the shader - I should've used a debugging tool like NSight or RenderDoc way earlier. Thanks to @EgorShkorov for the advice.

HLSL multi-pass geometry shader culling issue

I'm working on a shader that runs two passes:
A standard vertex / pixel shader that draws the original object with some alpha blending
A geometry shader that subdivides the mesh and makes it crumble
My issue however is that the geometry of the first pass occludes the geometry of the second pass. Even when the original geometry is fully transparent, it completely culls whatever geometry from the second pass is behind it.
The original geometry is a sphere, which as you can see here occludes the result of the geometry shader
The first pass uses this blending mode:
BlendState SrcAlphaBlendingAdd
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};
Both passes currently use this DepthStencilState:
DepthStencilState Depth {
    // Depth test parameters
    DepthEnable = true;
    DepthWriteMask = all;
    DepthFunc = less;
    StencilEnable = false;
};
Cullmode is set to NONE for both passes.
Is there a way for both passes to use depth testing, without the first pass occluding the second?
If need be I can separate these passes into two shaders and just render the same object with both shaders in engine, but I would like to figure out if it's possible just using multiple passes.

DirectX using multiple Render Targets as input to each other

I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
nextStep = 0;
mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
mGraphicsObj->SetZBufferState(false);
mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());
ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
mFrameArray[prevStep]->GetTextureResource() };
result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
if (!result)
return false;
// perform any extra input
I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
if (inputSystem->IsMouseLeftDown()) {
int x,y;
inputSystem->GetMousePos(x,y);
int width,height;
mGraphicsObj->GetScreenDimensions(width,height);
float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);
mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
return false;
}
mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();
prevStep = simStep;
simStep = nextStep;
ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();
// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
return false;
mGraphicsObj->SetZBufferState(true);
The problem is nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation actually runs. I can provide the shader code if required, but I am certain it works, since I've implemented this before on the CPU using the same algorithm. I'm just not certain how well D3D render targets work and whether I'm simply drawing wrong every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();
    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());
    float color[4];
    // Setup the color to clear the buffer to.
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;
    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);
    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const {
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists - nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
Okay, as I understand it, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create your textures with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render target views and shader resource views from those textures (a minimal sketch of this follows the list).
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
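For illustration, a sketch of creating one such ping-pong texture plus its two views (repeat for each entry of something like mFrameArray; the names below are placeholders, not the asker's code):
D3D11_TEXTURE2D_DESC td = {};
td.Width = width;
td.Height = height;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;  // a float format suits a wave-equation state grid
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
ID3D11RenderTargetView* rtv = nullptr;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateTexture2D(&td, nullptr, &tex);
device->CreateRenderTargetView(tex, nullptr, &rtv);
device->CreateShaderResourceView(tex, nullptr, &srv);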
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error-prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer (a short sketch follows below). Sometimes we make obvious errors and the debug output can point to them.
Use a graphics debugger to review your textures after each draw call.
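Enabling the debug layer is just a device-creation flag; a minimal sketch (debug builds only):
UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // validation messages then show up in the debugger's Output window
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);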
Edit 1:
As I can see, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. But what you need is to interleave (a rough D3D11 version of this sequence follows the pseudocode):
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
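In raw D3D11 calls that interleaving could look roughly like this (all names are placeholders; note the SRV unbinds - a texture cannot stay bound as an SRV while it becomes the current render target, which is exactly the warning the debug layer reported in Edit 2):
ID3D11ShaderResourceView* nullSRV = nullptr;

// step 1: texture1 -> texture2
context->PSSetShaderResources(0, 1, &srv1);
context->OMSetRenderTargets(1, &rtv2, nullptr);
context->DrawIndexed(indexCount, 0, 0);

// step 2: texture2 -> texture3
context->PSSetShaderResources(0, 1, &nullSRV);       // unbind before texture2 changes role
context->OMSetRenderTargets(1, &rtv3, nullptr);
context->PSSetShaderResources(0, 1, &srv2);
context->DrawIndexed(indexCount, 0, 0);

// step 3: texture3 -> back buffer
context->PSSetShaderResources(0, 1, &nullSRV);
context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
context->PSSetShaderResources(0, 1, &srv3);
context->DrawIndexed(indexCount, 0, 0);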
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It is probably better to review your Begin()/End() design to make the resource binding more clearly visible.
Happy coding! =)