How do I: convert a surface to a texture, create a texture with specific multisampling parameters, or render a surface with an alpha layer? (C++)

I am creating two render targets, both of which must share the back buffer's depth buffer, so it is important that I give them the same multisampling parameters. However, pDevice->CreateTexture(...) does not expose any parameters for setting the multisampling type. So instead I created two render target surfaces using pDevice->CreateRenderTarget(...), passing the same values as the depth buffer. Now the depth buffer works in conjunction with my render targets, but I am unable to render them over the screen properly, because alpha blending does not work with ->StretchRect (or so I have been told, and it did not work when I tried).
So the title of this question is basically my question. How do I:
- convert a surface to a texture or
- create a texture with certain multisampling parameters or
- render a surface properly with an alpha layer

The documentation for StretchRect specifically explains how to do this:
Using StretchRect to downsample a Multisample Rendertarget
You can use StretchRect to copy from one rendertarget to another. If the source rendertarget is multisampled, this results in downsampling the source rendertarget. For instance you could:
- Create a multisampled rendertarget.
- Create a second rendertarget of the same size that is not multisampled.
- Copy (using StretchRect) the multisampled rendertarget to the second rendertarget.
Note that use of the extra surface involved in using StretchRect to downsample a Multisample Rendertarget will result in a performance hit.

This is a new response to an old question, but I came across it and thought I'd supply an answer in case someone else runs into this problem. Here is the solution, with a stripped-down version of my wrappers and functions for it.
I have a game in which the renderer has several layers, one of which is a geometry layer. When rendering, it iterates over all layers, calling their Draw functions. Each layer has its own instance of my RenderTarget wrapper. When the layer Draws, it "activates" its render target, clears the buffer to alpha, draws the scene, then "deactivates" its render target. After all layers have drawn to their render targets, all of those render targets are then combined onto the backbuffer to produce the final image.
GeometryLayer::Draw
* Activates the render target used by this layer
* Sets needed render states
* Clears the buffer
* Draws geometry
* Deactivates the render target used by this layer
void GeometryLayer::Draw( const math::mat4& viewProjection )
{
    m_pRenderTarget->Activate();
    pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    pDevice->Clear(0, 0, D3DCLEAR_TARGET, m_clearColor, 1.0f, 0);
    pDevice->BeginScene();
    pDevice->Clear(0, 0, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    for(auto it = m->visibleGeometry.begin(); it != m->visibleGeometry.end(); ++it)
        it->second->Draw(viewProjection);
    pDevice->EndScene();
    m_pRenderTarget->Deactivate();
}
My RenderTarget wrapper contains an IDirect3DTexture9* (m_pTexture), created with D3DXCreateTexture, which is the texture that ultimately gets drawn. It also holds an IDirect3DSurface9* (m_pSurface) obtained from that texture, and a second IDirect3DSurface9* (m_pMSAASurface).
In the initialization of my RenderTarget, there is an option to enable multisampling. If this option is turned off, m_pMSAASurface is initialized to nullptr. If it is turned on, m_pMSAASurface is created with IDirect3DDevice9::CreateRenderTarget, passing my current multisampling settings as the 4th and 5th arguments.
RenderTarget::Init
* Creates a texture
* Gets a surface off the texture (adds to surface's ref count)
* If MSAA, creates msaa-enabled surface
void RenderTarget::Init(const int width, const int height, const bool enableMSAA)
{
    m_bEnableMSAA = enableMSAA;
    D3DXCreateTexture(pDevice,
        width,
        height,
        1,
        D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8,
        D3DPOOL_DEFAULT,
        &m_pTexture
    );
    m_pTexture->GetSurfaceLevel(0, &m_pSurface);
    if(enableMSAA)
    {
        // d3dpp is the D3DPRESENT_PARAMETERS the device was created with
        Renderer::GetInstance()->GetDevice()->CreateRenderTarget(
            width,
            height,
            D3DFMT_A8R8G8B8,
            d3dpp.MultiSampleType,
            d3dpp.MultiSampleQuality,
            FALSE,
            &m_pMSAASurface,
            NULL
        );
    }
}
If this MSAA setting is off, RenderTarget::Activate sets m_pSurface as the render target. If this MSAA setting is on, RenderTarget::Activate sets m_pMSAASurface as the render target and enables the multisampling render state.
RenderTarget::Activate
* Stores the current render target (adds to that surface's ref count)
* If not MSAA, sets surface as the new render target
* If MSAA, sets msaa surface as the new render target, enables msaa render state
void RenderTarget::Activate()
{
    pDevice->GetRenderTarget(0, &m_pOldSurface);
    if(!m_bEnableMSAA)
    {
        pDevice->SetRenderTarget(0, m_pSurface);
    }
    else
    {
        pDevice->SetRenderTarget(0, m_pMSAASurface);
        pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);
    }
}
If this MSAA setting is off, RenderTarget::Deactivate simply restores the original render target. If this MSAA setting is on, RenderTarget::Deactivate restores the original render target too, but also copies m_pMSAASurface onto m_pSurface.
RenderTarget::Deactivate
* If MSAA, disables MSAA render state
* Restores the previous render target
* Drops ref counts on the previous render target
void RenderTarget::Deactivate()
{
    if(m_bEnableMSAA)
    {
        pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, FALSE);
        pDevice->StretchRect(m_pMSAASurface, NULL, m_pSurface, NULL, D3DTEXF_NONE);
    }
    pDevice->SetRenderTarget(0, m_pOldSurface);
    m_pOldSurface->Release();
    m_pOldSurface = nullptr;
}
When the Renderer later asks the geometry layer for its RenderTarget texture in order to combine it with the other layers, that texture has the image copied from m_pMSAASurface on it. Assuming you're using a format that facilitates an alpha channel, this texture can be blended with others, as I'm doing with the render targets of several layers.

Related

Is it possible to combine HDR with MSAA in a DirectX 12 desktop application?

Using the DirectX Tool Kit for DirectX 12, I'm able to successfully compile and run the individual MSAA and HDR tutorial samples.
When I combined the relevant code for the MSAA and HDR components together into a single Game.cpp file, however, it fails at runtime with the debug layer error:
D3D12 ERROR: ID3D12CommandList::ResolveSubresource: The specified format is not compatible with the source resource. Format: R10G10B10A2_UNORM, Source Resource Format: R16G16B16A16_FLOAT [RESOURCE_MANIPULATION ERROR #878: RESOLVESUBRESOURCE_INVALID_FORMAT]
I am using the HDR sample code for an SDR display monitor, and therefore need to apply tone mapping. With respect to the order in which calls are made, I make a call to end the HDR scene before attempting to resolve the MSAA render target:
// 3d rendering completed
m_hdrScene->EndScene(commandList);

if (m_msaa)
{
    // Resolve the MSAA render target.
    //PIXBeginEvent(commandList, PIX_COLOR_DEFAULT, L"Resolve");
    auto backBuffer = m_deviceResources->GetRenderTarget();
    ...
Then following the MSAA resolve block, I place the tone mapping statement as follows:
// Unbind depth/stencil for sprites (UI)
auto rtvDescriptor = m_deviceResources->GetRenderTargetView();
commandList->OMSetRenderTargets(1, &rtvDescriptor, FALSE, nullptr);

// set texture descriptor heap in prep for sprite drawing
commandList->SetDescriptorHeaps(static_cast<UINT>(std::size(heaps)), heaps);

// apply tonemapping to hdr scene
switch (m_deviceResources->GetColorSpace())
{
default:
    m_toneMap->Process(commandList);
    break;
...
I found that attempting to tone map before setting the descriptor heap for drawing 2D sprites (over the 3D scene) would result in the error:
D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: The descriptor heap (0x0000025428203230:'DescriptorHeap') containing handle 0x80000253a82ff205 is different from currently set descriptor heap 0x0000025428203540:'EffectTextureFactory'. [ EXECUTION ERROR #708: SET_DESCRIPTOR_TABLE_INVALID]
D3D12: BREAK enabled for the previous message, which was: [ ERROR EXECUTION #708: SET_DESCRIPTOR_TABLE_INVALID ]
I admit this was a rather naive first attempt to combine HDR and MSAA, but I'm concerned that these features could be incompatible and/or mutually exclusive in DirectX 12. I understand why a resource compatibility issue arises during MSAA resolve, as we need to use floating point render targets for HDR. I should note that my program will run and render correctly with HDR if I skip the MSAA code blocks by setting my m_msaa boolean to false.
Looking forward to any advice anyone may have. If sufficient code or other details about the program are required, I'll be happy to update my post.
To successfully render an HDR scene with 4x MSAA, I made the following revisions:
In CreateWindowSizeDependentResources(), the MSAA render target (and view) are defined with the same format as the HDR render texture:
// Create an MSAA render target.
D3D12_RESOURCE_DESC msaaRTDesc = CD3DX12_RESOURCE_DESC::Tex2D(
    //m_deviceResources->GetBackBufferFormat(),
    m_hdrScene->GetFormat(), // set matching format to HDR target (R16G16B16A16_FLOAT)
    backBufferWidth,
    backBufferHeight,
    1, // This render target view has only one texture.
    1, // Use a single mipmap level
    4  // <-- Use 4x MSAA
);
...
The next major change is in Render(), where the MSAA resolve to the HDR target occurs after ending the HDR scene render and before the tone mapping process:
// Resolve the MSAA render target.
auto backBuffer = m_deviceResources->GetRenderTarget();
auto hdrBuffer = m_hdrScene->GetResource(); // HDR destination texture
{
    D3D12_RESOURCE_BARRIER barriers[2] =
    {
        // transition msaa texture from target to resolve source
        CD3DX12_RESOURCE_BARRIER::Transition(
            m_msaaRenderTarget.Get(),
            D3D12_RESOURCE_STATE_RENDER_TARGET,
            D3D12_RESOURCE_STATE_RESOLVE_SOURCE),
        // transition hdr texture from pixel shader resource to resolve destination
        CD3DX12_RESOURCE_BARRIER::Transition(
            hdrBuffer,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_RESOLVE_DEST)
    };
    commandList->ResourceBarrier(2, barriers);
}

//commandList->ResolveSubresource(backBuffer, 0, m_msaaRenderTarget.Get(), 0,
//    m_deviceResources->GetBackBufferFormat());

// resolve MSAA to the single-sample 16FP resource destination
commandList->ResolveSubresource(hdrBuffer, 0, m_msaaRenderTarget.Get(), 0,
    m_hdrScene->GetFormat());

// prepare backbuffer for 2D sprite drawing, typically rendered without MSAA
{
    D3D12_RESOURCE_BARRIER barriers[2] =
    {
        // transition hdr texture from resolve destination back to p.s. resource
        CD3DX12_RESOURCE_BARRIER::Transition(
            hdrBuffer,
            D3D12_RESOURCE_STATE_RESOLVE_DEST,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE),
        // transition backbuffer from Present to render target (for sprite drawing after the MSAA resolve)
        CD3DX12_RESOURCE_BARRIER::Transition(
            backBuffer,
            D3D12_RESOURCE_STATE_PRESENT,
            D3D12_RESOURCE_STATE_RENDER_TARGET)
    };
    commandList->ResourceBarrier(2, barriers);
}
... do tone mapping
... do 2D sprite drawing
Output was to a conventional 1080p display (no HDR10 or V-Sync). The LHS screen capture showed an HDR scene without MSAA, whereas the RHS capture showed the smoothing effect of 4x MSAA on the 3D wireframe objects, kettle & earth models, and triangle billboard.
The cat and 'Hello World' text are 2D sprites drawn to the backbuffer after MSAA resolve and tone mapping.
Thanks again to @ChuckWalbourn for pointing me in the right direction, and I look forward to progressing my projects with the DirectX Tool Kit.
ResolveSubresource cannot do the conversion from 16fp to 10:10:10:2 HDR10.
Generally you need to:
1. Render to MSAA floating-point in High Dynamic Range (linear color space).
2. Resolve MSAA to single-sample floating-point. (Many games choose to use advanced software antialiasing techniques as part of this process, such as FXAA, SMAA, etc.)
3. Perform the tone-map and Rec.2020 colorspace conversion from floating-point to 10:10:10:2.
4. Display HDR10.
Rendering the UI sometimes happens before step 3, other times after. If done before, you typically have to 'scale up' the UI colors to make them stand out.
See the SimpleMSAA DX12 and SimpleHDR DX12 samples for the technical details.
DirectX Tool Kit includes a PostProcess class which can perform the HDR10 tone-map. See this tutorial.

OpenGL offscreen rendering

I was trying to render a 2D scene into an off-screen Framebuffer Object and use the glFramebufferTexture2D function to use the frame as a texture for a cube.
The 2D scene is rendered in one context and the texture is used in another one in the same thread.
The problem is that when I textured the cube, the alpha channel seemed to be incorrect. I used apitrace to check the texture; it has the correct alpha values, and the shader was merely out_color = texture(in_texture, uv_coords)
The problem was solved if I blit the off-screen framebuffer color attachment to anything, whether it be itself or framebuffer 0 (output window).
I was wondering why this is happening and how to solve it without needing to blit the framebuffer.
It turned out that I was using single buffering for the static 2D scene, which requires a glFlush to flush the pipeline.

Is it possible to save the current viewport and then redraw the saved viewport in OpenGL and C++ during the next draw cycle?

I want to know if I can save a bitmap of the current viewport in memory and then on the next draw cycle simply draw that memory to the viewport?
I'm plotting a lot of data points as a 2D scatter plot in a 256x256 area of the screen. I could in theory re-render the entire plot each frame, but that would require me to store a lot of data points (50K-100K), most of which would be redundant, since a 256x256 box only has ~65K pixels.
So instead of redrawing and rendering the entire scene at time t I want to take a snapshot of the scene at t-1 and draw that first, then I can draw updates on top of that.
Is this possible? If so how can I do it, I've looked around quite a bit for clues as to how to do this but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then first draw this texture (using a textured full-screen quad) before drawing the additional points. Using FBOs you can directly render into a texture without any data copies. If these are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it already contains the data of the previous frame and you just need to render the additional points. Then all you need to do to display it is drawing the texture. So you would do something like:
1. Render the additional points for time t into the texture (which already contains the data of time t-1) using an FBO.
2. Display the texture by rendering a textured full-screen quad into the display framebuffer.
3. t = t+1 -> go to step 1.
You might even use the framebuffer_blit extension (which is core since OpenGL 3.0, I think) to copy the FBO data onto the screen framebuffer, which might even be faster than drawing the textured quad.
Without FBOs it would be something like this (requiring a data copy):
1. Render the texture containing the data of time t-1 into the display framebuffer.
2. Render the additional points for time t on top of it.
3. Capture the framebuffer into the texture (using glCopyTexSubImage2D) for the next loop.
4. t = t+1 -> go to step 1.
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first, and draw the changing things on top.

How does one use clip() to perform alpha testing?

This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.GreaterEqual,
    ReferenceStencil = 254,
    DepthBufferEnable = true
};

DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;

// Only opaque texels should be drawn.
DrawTexture1();

gd.DepthStencilState = old;

// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to draw only pixels that are at (or within epsilon of) full alpha (1.0, or 255) the first time, while not affecting those same pixels the second time.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on alpha - val, where val is a number in (0,1) that sets the alpha cutoff; clip() discards a pixel whenever its argument is negative.
I have written a more detailed answer here:
Stencil testing in XNA 4

Texture from Surface in DirectX 8.1

I'm doing double-buffering by creating a render target with its associated depth and stencil buffer, drawing to it, and then drawing a fullscreen, possibly stretched, quad with the back buffer as the texture.
To do this I'm using a CreateTexture() call to create the back buffer, and then a GetSurfaceLevel() call to get the surface from the texture. This works fine.
However, I'd like to use CreateRenderTarget() directly. It returns a Surface. But then I need a Texture to draw a quad to the front buffer.
The problem is, I can't find a function to get a texture from a surface. I've searched the DX8.1 doc again and again with no luck. Does such function even exist?
You can create an empty texture matching the size and color format of the surface, then copy the contents of the surface to the texture's surface.
Here is a snippet from my DirectX 9 code, without error handling and other complications. It actually fills a full mipmap chain.
Note the StretchRect call, which does the actual copying by stretching the source surface to match the geometry of each destination surface.
IDirect3DSurface9* srcSurface = renderTargetSurface;
IDirect3DTexture9* tex = textureFromRenderTarget;
int levels = tex->GetLevelCount();
for (int i = 0; i < levels; i++)
{
    IDirect3DSurface9* destSurface = 0;
    tex->GetSurfaceLevel(i, &destSurface);
    pd3dd->StretchRect(srcSurface, NULL, destSurface, NULL, D3DTEXF_LINEAR);
    destSurface->Release(); // GetSurfaceLevel adds a reference
}
But of course, this is for DirectX 9. For 8.1 you can try CopyRects or Blt.
On Dx9 there is ID3DXRenderToSurface, which can use a surface from a texture directly. I am not sure if that's possible with Dx8.1, but the copy method above should work.
If backwards compatibility is the reason you're using D3D8, try using SwiftShader instead. http://transgaming.com/business/swiftshader
It's a software implementation of D3D9 that you can use when you don't have a video card. It costs about $12k though.