I'm trying to use a texture to share rendered surfaces between two different applications. Everything appears to work correctly, except that when sampling the shared texture I only ever receive black.
1) First, I create a shared texture as documented here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee913554(v=vs.85).aspx. The source application successfully supplies the destination application with the handle of the shared resource, the resource is opened, and a resource view is created without complaint.
(source application)
D3D11_TEXTURE2D_DESC SharedTextureDesc = TextureDesc;
SharedTextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
SharedTextureDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
hResult = pD3D11Device->CreateTexture2D(&SharedTextureDesc, NULL, &m_pSharedRenderBuffer);
(destination application)
HANDLE hSharedRenderHandle = pDisplayCallback->GetSharedRenderBuffer();
hr = pDevice->OpenSharedResource( hSharedRenderHandle,__uuidof(ID3D11Texture2D), (LPVOID*) &m_pSharedRenderTexture);
pDevice->CreateShaderResourceView(m_pSharedRenderTexture, NULL, &m_RenderTextureView);
// Create the sample state
D3D11_SAMPLER_DESC sampDesc;
ZeroMemory( &sampDesc, sizeof(sampDesc) );
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = -FLT_MAX;
sampDesc.MaxLOD = FLT_MAX;
hr = pDevice->CreateSamplerState( &sampDesc, &m_TexSamplerState );
2) At the end of the source application's render, I copy the back buffer to the shared render texture.
(acquireSync)
pDeviceContext->CopyResource(pContext->m_pSharedRenderBuffer, pContext->m_pBackBuffer);
(releaseSync)
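For completeness, the acquire/release above is the usual IDXGIKeyedMutex handshake; a rough sketch of what it looks like (the key value 0 and INFINITE timeout here are placeholders, not necessarily what I use):
// Sketch of the keyed-mutex handshake around the copy; key/timeout values are placeholders.
IDXGIKeyedMutex* pKeyedMutex = nullptr;
HRESULT hrSync = pContext->m_pSharedRenderBuffer->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&pKeyedMutex);
if (SUCCEEDED(hrSync))
{
    pKeyedMutex->AcquireSync(0, INFINITE);   // wait for ownership of the shared texture
    pDeviceContext->CopyResource(pContext->m_pSharedRenderBuffer, pContext->m_pBackBuffer);
    pKeyedMutex->ReleaseSync(0);             // hand ownership back to the other application
    pKeyedMutex->Release();
}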
3) During the destination render, at a certain point I render a full-screen quad with the contents of the original render.
// Set the sampler state in the pixel shader.
pContext->PSSetSamplers(0, 1, &m_TexSamplerState);
pContext->PSSetShaderResources(0, 1, &m_RenderTextureView);
4) The vertex shader creates the full-screen quad, and the pixel shader simply sets the color according to the texture value.
VS (described here: http://www.altdevblogaday.com/2011/08/08/interesting-vertex-shader-trick/)
VS_Output Output;
Output.Tex = float2((id << 1) & 2, id & 2);
Output.Pos = float4(Output.Tex * float2(2,-2) + float2(-1,1), 0, 1);
return Output;
PS (for debugging, I set a portion of the screen to the UVs and sample the texture for the rest)
float4 DiffuseColor = float4(vsIn.Tex.x, vsIn.Tex.y, 0, 1);
if (vsIn.Tex.x > 0.5 || vsIn.Tex.y < 0.5)
DiffuseColor = g_RenderBuffer.Sample(g_Sampler, vsIn.Tex);
return DiffuseColor;
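For reference, the C++ side of this trick is just a three-vertex draw with no vertex or index buffer bound, so SV_VertexID drives the vertex shader; a minimal sketch of what that draw looks like:
pContext->IASetInputLayout(NULL);   // no vertex data; the VS works purely from SV_VertexID
pContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pContext->Draw(3, 0);               // single oversized triangle that covers the whole screen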
Theoretically, this is all I need. However, things get weird.
The render is happening correctly: the full screen is rendered according to the output of my pixel shader.
My UVs are correct, because rendering them gives roughly what you'd expect.
I have tried loading a texture from disk and rendering that directly. This works fine.
I have tried writing my shared buffer to disk, to ensure that what I am receiving has some values set - this also works.
I've tried comparing the descriptions of the shared texture/view against the working disk-based texture/view, and they seem similar. The only differences I can see are the lack of mip levels on the shared texture (which is fine, I don't want mip levels) and the format: the shared texture matches the source render surface, DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, while the loaded texture is BC3_UNORM.
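The comparison itself was nothing fancy; a sketch of roughly what I dumped (the disk-texture variable names here are illustrative):
D3D11_TEXTURE2D_DESC sharedDesc = {}, diskDesc = {};
m_pSharedRenderTexture->GetDesc(&sharedDesc);
m_pDiskTexture->GetDesc(&diskDesc);          // m_pDiskTexture: the texture loaded from disk (illustrative name)
D3D11_SHADER_RESOURCE_VIEW_DESC sharedViewDesc = {}, diskViewDesc = {};
m_RenderTextureView->GetDesc(&sharedViewDesc);
m_DiskTextureView->GetDesc(&diskViewDesc);   // m_DiskTextureView: its SRV (illustrative name)
// Compared Format, MipLevels, BindFlags, MiscFlags, etc. side by side in the debugger.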
The syncing seems to be fine, or at least when I first tried this several days ago D3D threw warnings that the resource was not synced, and I fixed that.
By every measure I can find, everything is working; it's just that, according to D3D, the texture is black!
Any thoughts? I'm not a graphics programmer by any stretch, so debugging this issue has been a lot of fun so far. Last note: I can't attach PIX or use VS2012 graphics debugging, as it crashes on app start.
Good afternoon!
I'm a beginner in DirectX, and there is a lot I don't know.
I have a texture in video memory in RGBA format (DXGI_FORMAT_R8G8B8A8_UNORM). It is an intercepted game buffer.
Before copying its contents into system memory, I need to convert this texture to BGRA format (DXGI_FORMAT_B8G8R8A8_UNORM). How can I convert a texture from one pixel format to another using the video card?
// DXGI_FORMAT_R8G8B8A8_UNORM
ID3D11Texture2D *pTexture1;
// DXGI_FORMAT_B8G8R8A8_UNORM
ID3D11Texture2D *pTexture2;
D3D11_TEXTURE2D_DESC desc2 = {};
desc2.Width = swapChainDesc.BufferDesc.Width;
desc2.Height = swapChainDesc.BufferDesc.Height;
desc2.MipLevels = 1;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc2.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc2.SampleDesc.Count = 1;
desc2.Usage = D3D11_USAGE_DEFAULT;
desc2.MiscFlags = 0;
void pixelConvert(ID3D11Texture2D *pTexture1, ID3D11Texture2D *pTexture2)
{
//
// Code....
//
}
I do not understand this solution:
DX11 convert pixel format BGRA to RGBA
It is not complete.
On the GPU, the simple solution is to just render the texture to another render target as a 'full-screen quad'. There's no need for any specialized shader; it's just simple rendering. When you load the texture, it gets 'swizzled' if needed into the standard Red, Green, and Blue channels, and when you write to the render target the same thing happens, depending on the format.
See SpriteBatch in the DirectX Tool Kit, and this tutorial in particular, if you need a solution that works on all Direct3D hardware feature levels. If you can require D3D_FEATURE_LEVEL_10_0 or later, then see PostProcess, which is optimized for the fact that you are always drawing the quad to the entire render target.
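If it helps, a rough sketch of the SpriteBatch route (this assumes the DirectX Tool Kit is available; SRV/RTV creation and error handling are omitted, the textures are the same size, and the names are illustrative rather than a drop-in implementation):
#include <SpriteBatch.h>   // DirectX Tool Kit

// pSrcSRV: shader resource view of pTexture1 (DXGI_FORMAT_R8G8B8A8_UNORM)
// pDstRTV: render target view of pTexture2 (DXGI_FORMAT_B8G8R8A8_UNORM)
void pixelConvert(ID3D11DeviceContext* pContext,
                  ID3D11ShaderResourceView* pSrcSRV,
                  ID3D11RenderTargetView* pDstRTV,
                  UINT width, UINT height)
{
    pContext->OMSetRenderTargets(1, &pDstRTV, nullptr);

    D3D11_VIEWPORT vp = { 0.0f, 0.0f, (float)width, (float)height, 0.0f, 1.0f };
    pContext->RSSetViewports(1, &vp);

    // Drawing the RGBA texture into the BGRA render target lets the hardware reorder the channels.
    DirectX::SpriteBatch sprites(pContext);
    sprites.Begin();
    sprites.Draw(pSrcSRV, DirectX::XMFLOAT2(0.f, 0.f));
    sprites.End();
}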
If you want a CPU solution, see DirectXTex.
I am using Three.js to render the world to a WebGLRenderTarget. My world does not fill the whole screen and thus has a transparent background. The purpose is to provide alpha-channel-aware image effects.
I render the world to a WebGLRenderTarget buffer
I try to post-process this by reading from the buffer and writing to the real screen
My post-processing function depends on the alpha channel. However, it looks like the Three.js post-processing shader somehow fails to read the alpha channel correctly - it is all 1.0 no matter what values I try to put into the WebGLRenderTarget.
A simple way to demonstrate the problem:
I create a render target:
var rtParameters = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat};
for(var i=0; i<this.bufferCount; i++) {
this.buffers[i] = new THREE.WebGLRenderTarget(this.width, this.height, rtParameters);
}
I clear the buffer, setting alpha to 0.3:
function clear(buf) {
// For debugging purposes we set clear color alpha
renderer.setClearColor(new THREE.Color(0xff00ff), 0.3);
renderer.clearTarget(buf, true, true, true);
}
// Clean up both buffers for the start
clear(buffers[0]);
And then I use this buffer as a read buffer and render to the screen in my post-processing fragment shader (based on the Three.js post-processing examples):
"void main() {",
// texture is the buffer I rendered before
"vec4 sample = texture2D( texture, vUv);",
// Everything goes to white (1.0) when trying to visualize the
// alpha channel of previously rendered WebGLTarget.
// It should get value 0.3 - slight gray
"gl_FragColor = vec4(sample.a, sample.a, sample.a, 1.0);",
"}"
Other color values are read correctly. If I use vec4(sample.r, sample.g, sample.b, 1.0) it directly copies as expected.
Is there a problem with clearing the alpha channel of a WebGLRenderTarget?
Is there a problem with reading alpha values when a WebGLRenderTarget is used as a texture for 2D image post-processing in a GLSL shader?
Here is a fiddle that implements what I believe you are trying to achieve.
http://jsfiddle.net/6vK6W/3/
I am creating two render targets, both of which must share the back buffer's depth buffer, so it is important that I give them the same multisampling parameters. However, pDevice->CreateTexture(..) does not take any parameters for setting the multisampling type. So I created two render target surfaces using pDevice->CreateRenderTarget(...), giving the same values as the depth buffer; now the depth buffer works in conjunction with my render targets, but I am unable to render them over the screen properly because alpha blending does not work with ->StretchRect (or so I have been told, and it did not work when I tried).
So the title of this question is basically my question; how do I:
- convert a surface to a texture or
- create a texture with certain multisampling parameters or
- render a surface properly with an alpha layer
The documentation for StretchRect specifically explains how to do this:
Using StretchRect to downsample a Multisample Rendertarget
You can use StretchRect to copy from one rendertarget to another. If the source rendertarget is multisampled, this results in downsampling the source rendertarget. For instance you could:
Create a multisampled rendertarget.
Create a second rendertarget of the same size, that is not multisampled.
Copy (using StretchRect) the multisample rendertarget to the second rendertarget.
Note that use of the extra surface involved in using StretchRect to downsample a Multisample Rendertarget will result in a performance hit.
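In code, those three steps look roughly like this (a sketch only; the format, sample count, and variable names are illustrative, error handling is omitted, and the multisample settings should match your depth buffer's):
// 1) Multisampled render target (this cannot be created as a texture).
IDirect3DSurface9* pMSAARenderTarget = NULL;
pDevice->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                            D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                            &pMSAARenderTarget, NULL);

// 2) Non-multisampled texture of the same size; its level-0 surface is the resolve destination.
IDirect3DTexture9* pResolveTexture = NULL;
pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pResolveTexture, NULL);
IDirect3DSurface9* pResolveSurface = NULL;
pResolveTexture->GetSurfaceLevel(0, &pResolveSurface);

// ... render the scene with pMSAARenderTarget set as the render target ...

// 3) Downsample the multisampled surface into the texture's surface.
pDevice->StretchRect(pMSAARenderTarget, NULL, pResolveSurface, NULL, D3DTEXF_NONE);
// pResolveTexture can now be sampled and alpha-blended like any other texture.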
So, a new response to an old question, but I came across this and thought I'd supply an answer in case someone else runs into this problem. Here is the solution, with a stripped-down version of my wrappers and functions.
I have a game in which the renderer has several layers, one of which is a geometry layer. When rendering, it iterates over all layers, calling their Draw functions. Each layer has its own instance of my RenderTarget wrapper. When the layer Draws, it "activates" its render target, clears the buffer to alpha, draws the scene, then "deactivates" its render target. After all layers have drawn to their render targets, all of those render targets are then combined onto the backbuffer to produce the final image.
GeometryLayer::Draw
* Activates the render target used by this layer
* Sets needed render states
* Clears the buffer
* Draws geometry
* Deactivates the render target used by this layer
void GeometryLayer::Draw( const math::mat4& viewProjection )
{
m_pRenderTarget->Activate();
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND,D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND,D3DBLEND_INVSRCALPHA);
pDevice->Clear(0,0,D3DCLEAR_TARGET,m_clearColor,1.0,0);
pDevice->BeginScene();
pDevice->Clear(0,0,D3DCLEAR_ZBUFFER,0,1.0,0);
for(auto it = m->visibleGeometry.begin(); it != m->visibleGeometry.end(); ++it)
it->second->Draw(viewProjection);
pDevice->EndScene();
m_pRenderTarget->Deactivate();
}
My RenderTarget wrapper contains an IDirect3DTexture9* (m_pTexture), which is used with D3DXCreateTexture to generate the texture to be drawn to. It also contains an IDirect3DSurface9* (m_pSurface), which is obtained from the texture, and another IDirect3DSurface9* (m_pMSAASurface).
In the initialization of my RenderTarget, there is an option to enable multisampling. If this option is turned off, the m_pMSAASurface is initialized to nullptr. If this option is turned on, the m_pMSAASurface is created for you using the IDirect3DDevice9::CreateRenderTarget function, specifying my current multisampling settings as the 4th and 5th arguments.
RenderTarget::Init
* Creates a texture
* Gets a surface off the texture (adds to surface's ref count)
* If MSAA, creates msaa-enabled surface
void RenderTarget::Init(const int width,const int height,const bool enableMSAA)
{
m_bEnableMSAA = enableMSAA;
D3DXCreateTexture(pDevice,
width,
height,
1,
D3DUSAGE_RENDERTARGET,
D3DFMT_A8R8G8B8,
D3DPOOL_DEFAULT,
&m_pTexture
);
m_pTexture->GetSurfaceLevel(0,&m_pSurface);
if(enableMSAA)
{
Renderer::GetInstance()->GetDevice()->CreateRenderTarget(
width,
height,
D3DFMT_A8R8G8B8,
d3dpp.MultiSampleType,
d3dpp.MultiSampleQuality,
false,
&m_pMSAASurface,
NULL
);
}
}
If this MSAA setting is off, RenderTarget::Activate sets m_pSurface as the render target. If this MSAA setting is on, RenderTarget::Activate sets m_pMSAASurface as the render target and enables the multisampling render state.
RenderTarget::Activate
* Stores the current render target (adds to that surface's ref count)
* If not MSAA, sets surface as the new render target
* If MSAA, sets msaa surface as the new render target, enables msaa render state
void RenderTarget::Activate()
{
pDevice->GetRenderTarget(0,&m_pOldSurface);
if(!m_bEnableMSAA)
{
pDevice->SetRenderTarget(0,m_pSurface);
}
else
{
pDevice->SetRenderTarget(0,m_pMSAASurface);
pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS,true);
}
}
If this MSAA setting is off, RenderTarget::Deactivate simply restores the original render target. If this MSAA setting is on, RenderTarget::Deactivate restores the original render target too, but also copies m_pMSAASurface onto m_pSurface.
RenderTarget::Deactivate
* If MSAA, disables MSAA render state
* Restores the previous render target
* Drops ref counts on the previous render target
void RenderTarget::Deactivate()
{
if(m_bEnableMSAA)
{
pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS,false);
pDevice->StretchRect(m_pMSAASurface,NULL,m_pSurface,NULL,D3DTEXF_NONE);
}
pDevice->SetRenderTarget(0,m_pOldSurface);
m_pOldSurface->Release();
m_pOldSurface = nullptr;
}
When the Renderer later asks the geometry layer for its RenderTarget texture in order to combine it with the other layers, that texture has the image copied from m_pMSAASurface on it. Assuming you're using a format that facilitates an alpha channel, this texture can be blended with others, as I'm doing with the render targets of several layers.
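The combine step itself isn't shown above; one way to do it is to draw each layer's texture over the backbuffer with alpha blending enabled. A stripped-down sketch (this uses ID3DXSprite purely for brevity and a hypothetical GetTexture() accessor on the wrapper; it is illustrative, not the exact code from my renderer):
// Composite each layer's render-target texture onto the backbuffer, back to front.
// Requires d3dx9.h; layer->GetTexture() is assumed to return the wrapper's m_pTexture.
void Renderer::CombineLayers(const std::vector<RenderTarget*>& layers)
{
    ID3DXSprite* pSprite = NULL;
    D3DXCreateSprite(pDevice, &pSprite);

    pDevice->BeginScene();
    pSprite->Begin(D3DXSPRITE_ALPHABLEND);   // SRCALPHA / INVSRCALPHA blending
    for (auto it = layers.begin(); it != layers.end(); ++it)
        pSprite->Draw((*it)->GetTexture(), NULL, NULL, NULL, 0xFFFFFFFF);
    pSprite->End();
    pDevice->EndScene();

    pSprite->Release();
}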
This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
StencilEnable = true,
StencilFunction = CompareFunction.GreaterEqual,
ReferenceStencil = 254,
DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to only draw pixels that are within epsilon of full Alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full Alpha the second.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on floor(alpha - val) - 1 where val is a number in (0,1) that limits the alpha values drawn.
I have written a more detailed answer here:
Stencil testing in XNA 4
I'm doing double-buffering by creating a render target with its associated depth and stencil buffer, drawing to it, and then drawing a fullscreen, possibly stretched, quad with the back buffer as the texture.
To do this I'm using a CreateTexture() call to create the back buffer texture, and then a GetSurfaceLevel() call to get the render surface from that texture. This works fine.
However, I'd like to use CreateRenderTarget() directly. It returns a Surface. But then I need a Texture to draw a quad to the front buffer.
The problem is, I can't find a function to get a texture from a surface. I've searched the DX8.1 documentation again and again with no luck. Does such a function even exist?
You can create an empty texture matching the size and color format of the surface, then copy the contents of the surface to the texture's surface.
Here is a snippet from my DirectX 9 code, without error handling and other complications. It actually fills a whole mipmap chain.
Note the StretchRect call, which does the actual copying by stretching the source surface to match the geometry of the destination surface.
IDirect3DSurface9* srcSurface = renderTargetSurface;
IDirect3DTexture9* tex = textureFromRenderTarget;
int levels = tex->GetLevelCount();
for (int i=0; i<levels; i++)
{
IDirect3DSurface9* destSurface = 0;
tex->GetSurfaceLevel(i, &destSurface);
pd3dd->StretchRect(srcSurface, NULL, destSurface, NULL, D3DTEXF_LINEAR);
}
But of course, this is for DirectX 9. For 8.1 you can try CopyRects or Blt.
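On 8.1, the closest equivalent to that copy is IDirect3DDevice8::CopyRects, which copies without stretching, so the texture must be the same size and format as the render target. A rough sketch, untested, mirroring the snippet above:
IDirect3DSurface8* srcSurface = renderTargetSurface;   // from CreateRenderTarget()
IDirect3DTexture8* tex = textureFromRenderTarget;      // same size and format as the render target

IDirect3DSurface8* destSurface = 0;
tex->GetSurfaceLevel(0, &destSurface);
// Passing NULL rect arrays copies the entire surface.
pd3dd->CopyRects(srcSurface, NULL, 0, destSurface, NULL);
destSurface->Release();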
On DX9 there is ID3DXRenderToSurface, which can use a surface from a texture directly. I am not sure whether that's possible with DX8.1, but the copy method above should work.
If backwards compatibility is the reason you're using D3D8, try using SwiftShader instead: http://transgaming.com/business/swiftshader
It's a software implementation of D3D9. You can use it when you don't have a video card. It costs about 12k though.