I'm attempting to render to a texture for shadow mapping in DirectX 11. I've set up and bound a separate render target to draw to. The problem is that after calling OMSetRenderTargets, it's still rendering to the previously bound render target.
The graphics diagnostics event list shows that OMSetRenderTargets is being called, setting "obj:30" as the render target view. However, the following DrawIndexed call shows the render target as "obj:17", which is the previously bound render target.
[Screenshot: Event List]
[Screenshot: Draw Call]
I have the DirectX debug layer enabled; however, it does not show any errors or warnings. I've also ensured that the texture is not bound as a shader resource when the draw call happens, but no luck there either.
Both calls are made from the following function:
void GraphicsHandler::DrawSceneToRenderTarget(ID3D11RenderTargetView* RenderTarget, ID3D11VertexShader* WithVertexShader, ID3D11PixelShader* WithPixelShader)
{
    const unsigned int VertexSize = sizeof(Vertex);
    const unsigned int Offset = 0;

    DeviceContext->ClearDepthStencilView(DepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0.0f);
    DeviceContext->VSSetShader(WithVertexShader, nullptr, 0);
    DeviceContext->PSSetShader(WithPixelShader, nullptr, 0);
    DeviceContext->OMSetRenderTargets(1, &RenderTarget, DepthStencilView); //Render target set here

    for (auto& Obj : ActiveScene.Objects)
    {
        ObjectInfo ObjectData;
        ObjectData.ObjectTransform = XMMatrixIdentity();
        ObjectData.ObjectTransform *= XMMatrixRotationRollPitchYaw(Obj->Rotator.X, Obj->Rotator.Y, Obj->Rotator.Z);
        ObjectData.ObjectTransform *= XMMatrixTranslation(Obj->Position.X, Obj->Position.Y, Obj->Position.Z);
        ObjectData.ObjectTransform *= XMMatrixScaling(Obj->Scale.X, Obj->Scale.Y, Obj->Scale.Z);
        ObjectData.NormalMatrix = XMMatrixTranspose(XMMatrixInverse(nullptr, ObjectData.ObjectTransform));

        DeviceContext->UpdateSubresource(ObjectBuffer, 0, nullptr, &ObjectData, 0, 0);
        DeviceContext->UpdateSubresource(MaterialBuffer, 0, nullptr, &Obj->Mat, 0, 0);
        DeviceContext->IASetVertexBuffers(0, 1, &Obj->VertexBuffer, &VertexSize, &Offset);
        DeviceContext->IASetIndexBuffer(Obj->IndexBuffer, DXGI_FORMAT_R16_UINT, 0);
        DeviceContext->VSSetConstantBuffers(0, 1, &ObjectBuffer);
        //DeviceContext->PSSetConstantBuffers(0, 1, &MaterialBuffer);
        DeviceContext->DrawIndexed(Obj->Indices.size(), 0, 0); //Draw called here
    }
}
The problematic calls to that function come from the following two functions:
void GraphicsHandler::RenderSceneDepth()
{
    DeviceContext->RSSetState(RasterizerState);
    DeviceContext->PSSetShaderResources(0, 1, &SceneDepthSRV);
    DeviceContext->UpdateSubresource(CameraBuffer, 0, nullptr, &ActiveScene.SceneCamera.GetCameraVSInfo(), 0, 0);
    DeviceContext->VSSetConstantBuffers(1, 1, &CameraBuffer);
    DeviceContext->ClearRenderTargetView(SceneDepthRTV, Colors::Black);
    DrawSceneToRenderTarget(SceneDepthRTV, VertexShader, DepthShader);
}
void GraphicsHandler::RenderShadowMap(ShadowMap& SM)
{
    //Clear shader resources, as the texture can't be bound as input and output
    ID3D11ShaderResourceView* NullResources[2] = { nullptr, nullptr };
    DeviceContext->PSSetShaderResources(0, 2, NullResources);
    DeviceContext->RSSetState(SMRasterizerState); //Need to render back faces only

    ID3D11SamplerState* Samplers[2] = { SamplerState, ShadowSamplerState };
    DeviceContext->PSSetSamplers(0, 2, Samplers);

    //If the light is a directional source, render a directional shadow map
    DirectionalLight* DirLight = nullptr;
    DirLight = dynamic_cast<DirectionalLight*>(SM.ParentLight);
    if (DirLight)
    {
        ID3D11RenderTargetView* RTV = SM.RTVs[0];
        SM.LightPovCamera.ForwardDirection = DirLight->Direction;
        DeviceContext->ClearRenderTargetView(RTV, Colors::Black);
        DeviceContext->UpdateSubresource(LightPovBuffer, 0, nullptr, &SM.LightPovCamera.GetCameraVSInfo(), 0, 0);
        DeviceContext->VSSetConstantBuffers(1, 1, &LightPovBuffer);
        DrawSceneToRenderTarget(RTV, VertexShader, DepthShader);
    }
    //Otherwise, render to each face of the texturecube
    else
    {
        for (int N = 0; N < 6; N++)
        {
            DeviceContext->ClearRenderTargetView(SM.RTVs[N], Colors::Black);
            Camera POVCam = SM.GetCameraForCubemapFace(N);
            DeviceContext->UpdateSubresource(LightPovBuffer, 0, nullptr, &POVCam.GetCameraVSInfo(), 0, 0);
            DeviceContext->VSSetConstantBuffers(1, 1, &LightPovBuffer);
            DrawSceneToRenderTarget(SM.RTVs[N], VertexShader, DepthShader);
        }
    }
}
Whoops, my mistake: the debug layer actually wasn't enabled. The error was caused by the render target having different dimensions from the depth stencil view. Apologies!
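For reference, here is a minimal sketch of what the fix amounts to, with illustrative names (ShadowMapSize, TargetRTV, TargetSRV) rather than my exact ones: the render target texture and the depth texture both have to be created with the same Width/Height, otherwise, as far as I can tell, OMSetRenderTargets fails validation and the previously bound targets stay active. The debug layer that catches this is enabled by passing D3D11_CREATE_DEVICE_DEBUG when creating the device.
//Sketch: the colour target and the depth texture must share dimensions
D3D11_TEXTURE2D_DESC TexDesc = {};
TexDesc.Width = ShadowMapSize; //e.g. 1024
TexDesc.Height = ShadowMapSize; //must match the depth texture below
TexDesc.MipLevels = 1;
TexDesc.ArraySize = 1;
TexDesc.Format = DXGI_FORMAT_R32_FLOAT;
TexDesc.SampleDesc.Count = 1;
TexDesc.Usage = D3D11_USAGE_DEFAULT;
TexDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* TargetTex = nullptr;
Device->CreateTexture2D(&TexDesc, nullptr, &TargetTex);
Device->CreateRenderTargetView(TargetTex, nullptr, &TargetRTV);
Device->CreateShaderResourceView(TargetTex, nullptr, &TargetSRV);

//Depth texture reuses the same Width/Height so the two views are compatible
D3D11_TEXTURE2D_DESC DepthDesc = TexDesc;
DepthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
DepthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

ID3D11Texture2D* DepthTex = nullptr;
Device->CreateTexture2D(&DepthDesc, nullptr, &DepthTex);
Device->CreateDepthStencilView(DepthTex, nullptr, &DepthStencilView);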
I am trying to render to the OpenGL framebuffer via an OpenGL renderbuffer from an OpenCL kernel. The issue is that even though I can (probably) render/write to the renderbuffer from an OpenCL kernel, the screen stays empty (black).
I am reaching the limit of what I can test in finite time, so I am asking someone with much more experience to give a tip about what I am missing.
I personally suspect that I forgot to bind a buffer at the right point, but since I don't see which one or where, this is practically impossible to check.
Now for some reduced code, so you don't have to look at all the error checking etc.; these functions are called during the render routine:
void TestBuffer(){
    GLubyte *buffer = (GLubyte *) malloc(1000 * 1000 * 4);

    glReadBuffer(GL_COLOR_ATTACHMENT0);
    error = glGetError();
    if(error != GL_NO_ERROR){
        printf("error with readBuffer, %i\n", error);
    }
    glReadPixels(0, 0, 1000, 1000, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
    error = glGetError();
    if(error != GL_NO_ERROR){
        printf("error with readpixels\n");
    }
    for(int i = 0; i < 1000*100; i++){
        if(buffer[i] != 0){
            printf("buffer was not empty # %i: %u\n", i, buffer[i]);
            free(buffer);
            return;
        }
    }
    printf("buffer was empty\n");
    free(buffer);
}
void runShader(){
    glFinish(); //Make sure that OpenGL isn't using our objects
    ret = clEnqueueAcquireGLObjects(command_queue, 1, &cl_renderBuffer, 0, NULL, NULL);

    // Execute the OpenCL kernel on the list
    size_t global_item_size = 1000 * 1000; // Process the entire lists
    size_t local_item_size = 1000; // Divide work items into groups of ScreenWidth
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size, &local_item_size, 0, NULL, NULL);

    ret = clEnqueueReleaseGLObjects(command_queue, 1, &cl_renderBuffer, 0, NULL, NULL);
    clFlush(command_queue);
    clFinish(command_queue);

    // We are going to blit into the window (default framebuffer)
    glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
    glDrawBuffer (GL_BACK); // Use backbuffer as color dst.

    // Read from your FBO
    glBindFramebuffer (GL_READ_FRAMEBUFFER, gl_frameBuffer);
    glReadBuffer (GL_COLOR_ATTACHMENT0); // Use Color Attachment 0 as color src.

    // Copy the color and depth buffer from your FBO to the default framebuffer
    glBlitFramebuffer (0,0, 1000, 1000, 0,0, 1000, 1000, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);

    TestBuffer();
}
My ideas were:
Blit the contents of the renderbuffer to the screen buffer, in case I messed up binding the new framebuffer object (created earlier) or attaching the renderbuffer (which you can see in the last few lines of the code above; a sketch of that setup follows this list)
Check if I messed up with the double buffering or something: this is the TestBuffer() function
Flushing before finishing, just in case
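For reference, this is roughly how the framebuffer, renderbuffer and OpenCL registration are set up earlier in the program. It is reduced and partly from memory, so treat it as a sketch of my assumptions rather than the exact code (gl_renderBuffer and context are illustrative names for the GL renderbuffer id and the CL context):
// Sketch: FBO with an RGBA8 renderbuffer as color attachment 0, shared with OpenCL
glGenFramebuffers(1, &gl_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gl_frameBuffer);
glGenRenderbuffers(1, &gl_renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, gl_renderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 1000, 1000);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, gl_renderBuffer);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE){
    printf("framebuffer is not complete\n");
}
// Register the renderbuffer with OpenCL so the kernel can write to it
cl_renderBuffer = clCreateFromGLRenderbuffer(context, CL_MEM_WRITE_ONLY, gl_renderBuffer, &ret);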
The kernel code is deliberately simple, to check whether the other stuff actually works (.w should be alpha, which should be opaque so we can see the result; the rest is just a gray rainbow):
#pragma OPENCL EXTENSION all : enable

#define ScreenWidth 1000
#define ScreenHight 1000

const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_NONE | CLK_FILTER_NEAREST;

__kernel void rainbow(__write_only image2d_t asd) {
    int i = get_global_id(0);
    unsigned int x = i%ScreenWidth;
    unsigned int y = i/ScreenHight;

    uint4 pixel; //I wish I could access this as an array
    pixel.x = i;
    pixel.y = i;
    pixel.z = i;
    pixel.w = 255;

    write_imageui(asd, (int2)(x, y), pixel);
}
Some further information:
I am only rendering to GL_COLOR_ATTACHMENT0, since I don't care about the depth or stencil buffer in my use case. This could be an issue, though (I didn't even generate buffers for them).
I am compiling for Windows 10
The format of the renderbuffer is GL_RGBA8, but I think the natural format is RGBA24. It was originally just RGBA, as you can see in the TestBuffer routine, but I think this should be fine.
What could cause the screen to stay black/empty?
I have attached a texture to a set of 4 indexed vertices stored in a dynamic vertex buffer. I have also added a translation matrix to the vertex shader's constant buffer. However, when I update the constant buffer to alter the translation matrix so that I can move the sprite, the sprite does not move smoothly: it stops randomly for short amounts of time before moving a short distance again.
Below are the render functions, the main loop and the shaders being used:
void Sprite::Render(ID3D11DeviceContext* devcon, float dt) {
    // 2D rendering on backbuffer here
    UINT stride = sizeof(VERTEX);
    UINT offset = 0;
    spr_const_data.translateMatrix.r[3].m128_f32[0] += 60.0f*dt;

    devcon->IASetInputLayout(m_data_layout);
    devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    devcon->VSSetShader(m_spr_vert_shader, 0, 0);
    devcon->PSSetShader(m_spr_pixel_shader, 0, 0);
    devcon->VSSetConstantBuffers(0, 1, &m_spr_const_buffer);
    devcon->PSSetSamplers(0, 1, &m_tex_sampler_state);
    devcon->PSSetShaderResources(0, 1, &m_shader_resource_view);

    // select vertex and index buffers
    devcon->IASetIndexBuffer(m_sprite_index_buffer, DXGI_FORMAT_R32_UINT, offset);
    devcon->IASetVertexBuffers(0, 1, &m_sprite_vertex_buffer, &stride, &offset);

    D3D11_MAPPED_SUBRESOURCE ms;
    ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
    devcon->Map(m_spr_const_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
    memcpy(ms.pData, &spr_const_data, sizeof(spr_const_data));
    devcon->Unmap(m_spr_const_buffer, 0);

    // select which primitive type to use
    // draw vertex buffer to backbuffer
    devcon->DrawIndexed(6, 0, 0);
}
void RenderFrame(float dt) {
    float background_color[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    // clear backbuffer
    devcon->ClearRenderTargetView(backbuffer, background_color);

    knight->Render(devcon, dt);

    // switch back and front buffer
    swapchain->Present(0, 0);
}
void MainLoop() {
    MSG msg;
    auto tp1 = std::chrono::system_clock::now();
    auto tp2 = std::chrono::system_clock::now();

    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        tp2 = std::chrono::system_clock::now();
        std::chrono::duration<float> dt = tp2 - tp1;
        tp1 = tp2;

        TranslateMessage(&msg);
        DispatchMessage(&msg);

        RenderFrame(dt.count());
    }
}
cbuffer CONST_BUFFER_DATA : register(b0)
{
    matrix orthoMatrix;
    matrix translateMatrix;
};

struct VOut {
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

VOut VShader(float4 position : POSITION, float2 tex : TEXCOORD0) {
    VOut output;

    position = mul(translateMatrix, position);
    output.position = mul(orthoMatrix, position);
    output.tex = tex;

    return output;
}

Texture2D square_tex;
SamplerState tex_sampler;

float4 PShader(float4 position : SV_POSITION, float2 tex : TEXCOORD0) : SV_TARGET
{
    float4 tex_col = square_tex.Sample(tex_sampler, tex);
    return tex_col;
}
I have also included the initialization of the swap chain and backbuffer, in case the mistake lies there.
DXGI_SWAP_CHAIN_DESC scd; // hold swap chain information
ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
// fill swap chain description struct
scd.BufferCount = 1; // one back buffer
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how swap chain is to be used (draw into back buffer)
scd.OutputWindow = hWnd; // window to be used
scd.SampleDesc.Count = 1; // how many multisamples
scd.Windowed = true; // windowed/full screen
// create device, device context and swap chain using scd
D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, D3D11_CREATE_DEVICE_DEBUG, nullptr, NULL, D3D11_SDK_VERSION, &scd, &swapchain, &dev, nullptr, &devcon);
// get address of back buffer
ID3D11Texture2D* pBackBuffer;
swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
// use back buffer address to create render target
dev->CreateRenderTargetView(pBackBuffer, nullptr, &backbuffer);
pBackBuffer->Release();
// set the render target as the backbuffer
devcon->OMSetRenderTargets(1, &backbuffer, nullptr);
// Set the viewport
D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = WINDOW_WIDTH;
viewport.Height = WINDOW_HEIGHT;
devcon->RSSetViewports(1, &viewport); // activates viewport
I have also tried getting a pointer to the vertex data via the pData member of the D3D11_MAPPED_SUBRESOURCE object and then casting it to a VERTEX* to manipulate the data, but the problem persists. I would like to know how to move the sprite smoothly across the window.
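For reference, that attempt looked roughly like this (a reduced sketch; quad_vertices and the pos field are illustrative names, not the exact members of my class):
// Keep a CPU-side copy of the quad, shift it, then rewrite the whole dynamic buffer
for (int i = 0; i < 4; ++i)
    quad_vertices[i].pos.x += 60.0f*dt; // illustrative field name

D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
devcon->Map(m_sprite_vertex_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
VERTEX* v = (VERTEX*)ms.pData;
// WRITE_DISCARD hands back an uninitialised region, so all four vertices are rewritten
for (int i = 0; i < 4; ++i)
    v[i] = quad_vertices[i];
devcon->Unmap(m_sprite_vertex_buffer, 0);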
I solved the problem by writing a fixed-FPS game loop and passing values interpolated between the previous and current states to the render function. I was also updating the constant buffer in an incorrect manner.
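The loop ended up looking roughly like this. It is a reduced sketch rather than the exact code: the 60 Hz step, the prevX/currX state and the changed RenderFrame signature (it now receives an interpolated position instead of dt) are illustrative.
void MainLoop() {
    MSG msg = {};
    const float step = 1.0f / 60.0f; // fixed update step (illustrative)
    float accumulator = 0.0f;
    float prevX = 0.0f, currX = 0.0f; // previous and current sprite positions
    auto prev = std::chrono::system_clock::now();

    while (msg.message != WM_QUIT) {
        // Drain pending messages instead of blocking in GetMessage,
        // so the loop keeps rendering even when no messages arrive
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        auto now = std::chrono::system_clock::now();
        accumulator += std::chrono::duration<float>(now - prev).count();
        prev = now;

        // Advance the simulation in fixed steps
        while (accumulator >= step) {
            prevX = currX;
            currX += 60.0f * step; // the sprite's movement update
            accumulator -= step;
        }

        // Render with a position interpolated between the last two updates
        float alpha = accumulator / step;
        RenderFrame(prevX + (currX - prevX) * alpha);
    }
}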
I got an IDirect3DSurface9* from a DXVA2 video decoder. I'd like to modify that surface with a shader. I'm able to render the video frames without a shader by "drawing" the surface into the back buffer.
I use the following code to render the frames without a shader:
void Draw(Example* exps){
    IDirect3DSurface9* backbuffer;

    hwctx->d3d9device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    hwctx->d3d9device->BeginScene();
    hwctx->swap_chain->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
    hwctx->d3d9device->StretchRect(videoSurface, NULL, backbuffer, NULL, D3DTEXF_LINEAR);
    hwctx->d3d9device->EndScene();
    hwctx->swap_chain->Present(0, 0, 0, 0, 0);

    backbuffer->Release();
}
Up to this point, everything works.
I would like to modify the Draw function to render the video frames with the following shader:
uniform extern float4x4 gWVP;
uniform extern texture gTexRGB;
uniform extern texture gTexAlpha;

sampler TexRGB = sampler_state
{
    Texture = <gTexRGB>;
    AddressU = WRAP;
    AddressV = WRAP;
};

sampler TexAlpha = sampler_state
{
    Texture = <gTexAlpha>;
    AddressU = WRAP;
    AddressV = WRAP;
};

struct OutputVS
{
    float4 posH : POSITION0;
    float2 tex0 : TEXCOORD0;
};

OutputVS DirLightTexVS(float3 posL : POSITION0, float3 normalL : NORMAL0, float2 tex0: TEXCOORD0)
{
    // Zero out our output.
    OutputVS outVS = (OutputVS)0;

    // Transform to homogeneous clip space.
    outVS.posH = mul(float4(posL, 1.0f), gWVP);

    // Pass on texture coordinates to be interpolated in rasterization.
    outVS.tex0 = tex0;

    // Done--return the output.
    return outVS;
}

float4 DirLightTexPS(float4 c : COLOR0, float4 spec : COLOR1, float2 tex0 : TEXCOORD0) : COLOR
{
    float3 rgb = tex2D(TexRGB, tex0).rgb;
    float alpha = tex2D(TexAlpha, tex0).g;
    return float4(rgb, alpha);
}

technique DirLightTexTech
{
    pass P0
    {
        // Specify the vertex and pixel shader associated with this pass.
        vertexShader = compile vs_2_0 DirLightTexVS();
        pixelShader = compile ps_2_0 DirLightTexPS();
    }
}
where TexRGB is the texture associated with the video frame, while TexAlpha is another texture that contains alpha values.
How can I pass the decoded surface to the shader? I have never used DirectX 9, so an example would be appreciated and could help me solve the problem.
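From what I have gathered so far, the general pattern seems to be to copy the decoded surface into the level-0 surface of a render-target texture with StretchRect, and then bind that texture to the effect. The following is only a sketch of my current understanding, not verified code; videoTexture, effect and hTexRGB are illustrative names, and videoTexture is assumed to be a D3DPOOL_DEFAULT render-target texture with the same size as the frame:
// Copy the DXVA2 decoder surface into a texture the shader can sample
IDirect3DSurface9* texSurface = NULL;
videoTexture->GetSurfaceLevel(0, &texSurface);
d3d9device->StretchRect(videoSurface, NULL, texSurface, NULL, D3DTEXF_LINEAR);
texSurface->Release();

// Bind the texture to the effect parameter and draw a textured quad with it
effect->SetTexture(hTexRGB, videoTexture);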
UPDATE 1:
I created the InitEffect function to load the effect from a file:
void InitEffect(Example* ctx) {
    auto hwctx = ctx->decoder->stream->HWAccelCtx;

    // Create the FX from a .fx file.
    ID3DXBuffer* errors = 0;
    D3DXCreateEffectFromFile(hwctx->d3d9device, "basicEffect.fx", 0, 0, D3DXSHADER_DEBUG, 0, &ctx->mFX, &errors);
    if (errors)
        MessageBox(0, (LPCSTR)errors->GetBufferPointer(), 0, 0);

    // Obtain handles.
    ctx->mhTech = ctx->mFX->GetTechniqueByName("DirLightTexTech");
    ctx->mhWVP = ctx->mFX->GetParameterByName(0, "gWVP");
    ctx->mhTexAlpha = ctx->mFX->GetParameterByName(0, "gTexAlpha");
    ctx->mhTexRGB = ctx->mFX->GetParameterByName(0, "gTexRGB");
}
and changed the rendering function to:
void Draw(Example* ctx) {
    InitMatrices(ctx);

    IDirect3DSurface9* backbuffer;
    hwctx->d3d9device->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xffeeeeee, 1.0f, 0);
    hwctx->d3d9device->BeginScene();

    ctx->mFX->SetTechnique(ctx->mhTech);
    ctx->mFX->SetMatrix(ctx->mhWVP, &(ctx->mCrateWorld*ctx->mView*ctx->mProj));

    ctx->texRGB->GetSurfaceLevel(0, &ctx->surfRGB);
    hwctx->d3d9device->SetRenderTarget(0, ctx->surfRGB);
    hwctx->d3d9device->StretchRect((IDirect3DSurface9*)s->frame->data[3], NULL, ctx->surfRGB, NULL, D3DTEXF_LINEAR);

    ctx->mFX->SetTexture(ctx->mhTexAlpha, ctx->texAlpha);
    ctx->mFX->SetTexture(ctx->mhTexRGB, ctx->texRGB);

    // Enable alpha blending.
    hwctx->d3d9device->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
    hwctx->d3d9device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    hwctx->d3d9device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    hwctx->d3d9device->SetVertexDeclaration(VertexPNT::Decl);
    hwctx->d3d9device->SetStreamSource(0, ctx->mBoxVB, 0, sizeof(VertexPNT));
    hwctx->d3d9device->SetIndices(ctx->mBoxIB);

    UINT numPasses = 0;
    ctx->mFX->Begin(&numPasses, 0);
    for (UINT i = 0; i < numPasses; ++i){
        ctx->mFX->BeginPass(i);
        hwctx->d3d9device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 24, 0, 12);
        ctx->mFX->EndPass();
    }
    ctx->mFX->End();

    hwctx->d3d9device->EndScene();
    hwctx->swap_chain->Present(0, 0, 0, 0, 0);
    backbuffer->Release();

    // Disable alpha blending.
    hwctx->d3d9device->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
}
but it still doesn't work.
UPDATE 2
I modified the code by following the code Asesh shared. In the InitEffect function I added the following lines to create a render target surface:
ctx->texRGB->GetSurfaceLevel(0, &ctx->surfRGB);

// store original rendertarget
hwctx->d3d9device->GetRenderTarget(0, &ctx->origTarget_);
D3DSURFACE_DESC desc;
ctx->origTarget_->GetDesc(&desc);

// create our surface as render target
hwctx->d3d9device->CreateRenderTarget(1920, 1080, D3DFMT_X8R8G8B8,
                                      desc.MultiSampleType, desc.MultiSampleQuality,
                                      false, &ctx->surfRGB, NULL);
The draw function is:
void drawScene(Example* ctx) {
    InitMatrices(ctx);

    auto hwctx = ctx->decoder->stream->HWAccelCtx;
    auto s = (VdrStreamContext*)ctx->decoder->stream->vdrCodecCtx->opaque;
    IDirect3DSurface9* backbuffer;

    hwctx->d3d9device->SetRenderTarget(0, ctx->surfRGB);
    hwctx->d3d9device->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xffeeeeee, 1.0f, 0);
    hwctx->d3d9device->BeginScene();

    hwctx->d3d9device->StretchRect((IDirect3DSurface9*)s->vdrFrame->data[3], NULL, ctx->surfRGB, NULL, D3DTEXF_NONE);
    hwctx->d3d9device->SetRenderTarget(0, ctx->origTarget_);

    if (!hwctx->d3d9device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer)) {
        hwctx->d3d9device->StretchRect(ctx->surfRGB, NULL, backbuffer, NULL, D3DTEXF_NONE);
    }

    ctx->mFX->SetTechnique(ctx->mhTech);
    ctx->mFX->SetMatrix(ctx->mhWVP, &(ctx->mCrateWorld*ctx->mView*ctx->mProj));
    ctx->mFX->SetTexture(ctx->mhTexAlpha, ctx->texAlpha);
    ctx->mFX->SetTexture(ctx->mhTexRGB, ctx->texRGB);

    // Enable alpha blending.
    hwctx->d3d9device->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
    hwctx->d3d9device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    hwctx->d3d9device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    hwctx->d3d9device->SetVertexDeclaration(VertexPNT::Decl);
    hwctx->d3d9device->SetStreamSource(0, ctx->mBoxVB, 0, sizeof(VertexPNT));
    hwctx->d3d9device->SetIndices(ctx->mBoxIB);

    UINT numPasses = 0;
    ctx->mFX->Begin(&numPasses, 0);
    for (UINT i = 0; i < numPasses; ++i){
        ctx->mFX->BeginPass(i);
        hwctx->d3d9device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 24, 0, 12);
        ctx->mFX->EndPass();
    }
    ctx->mFX->End();

    hwctx->d3d9device->EndScene();
    hwctx->d3d9device->Present(0, 0, 0, 0);
    backbuffer->Release();

    // Disable alpha blending.
    hwctx->d3d9device->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
}
By drawing into the backbuffer with hwctx->d3d9device->StretchRect(ctx->surfRGB, NULL, backbuffer, NULL, D3DTEXF_NONE);, the video frame is shown on screen even though ctx->surfRGB is associated with the texture passed to the shader, but alpha blending is not applied. If I remove that StretchRect call, the video frame is not shown at all, even though ctx->surfRGB is not empty.
I have been following a tutorial in the Frank Luna book "3D Game Programming with DirectX 11" and have been working on a skybox. The skybox is rendering correctly apart from a small tweak needed to the texture. I have created a separate vertex and pixel shader for the skybox, as it doesn't need as much work in the .fx file. When I draw my objects they all draw, but when I use the normal vertex and pixel shader (which usually works fine) my objects appear black (I think they are not able to get the colour from their shader).
_pImmediateContext->RSSetState(_solidFrame);
_pImmediateContext->VSSetShader(_pSkyVertexShader, nullptr, 0);
_pImmediateContext->PSSetShaderResources(3, 1, &_pTextureSkyMap);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureRV);
_pImmediateContext->PSSetSamplers(0, 1, &_pSamplerLinear);
_pImmediateContext->PSSetShader(_pSkyPixelShader, nullptr, 0);
//Imported Sky
world = XMLoadFloat4x4(&_sky.GetWorld());
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
//Draw the Pitch
_sky.Draw(_pd3dDevice, _pImmediateContext);
_pImmediateContext->VSSetShader(_pVertexShader, nullptr, 0);
_pImmediateContext->VSSetConstantBuffers(0, 1, &_pConstantBuffer);
_pImmediateContext->PSSetConstantBuffers(0, 1, &_pConstantBuffer);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureMetalRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureRV);
_pImmediateContext->PSSetSamplers(0, 1, &_pSamplerLinear);
_pImmediateContext->PSSetShader(_pPixelShader, nullptr, 0);
//Floor
// Render opaque objects //
// Set vertex buffer for the Floor
_pImmediateContext->IASetVertexBuffers(0, 1, &_pVertexBufferFloor, &stride, &offset);
// Set index buffer
_pImmediateContext->IASetIndexBuffer(_pIndexBufferFloor, DXGI_FORMAT_R16_UINT, 0);
world = XMLoadFloat4x4(&_worldFloor);
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
_pImmediateContext->DrawIndexed(96, 0, 0);
//Imported Pitch
world = XMLoadFloat4x4(&_pitch.GetWorld());
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
//Draw the Pitch
_pitch.Draw(_pd3dDevice, _pImmediateContext);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureMetalRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureMetalRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureMetalRV);
Am I missing a line of code to clear something when changing shaders, so that the wrong data isn't used?
It was a problem with a small edit to the .fx file that I hadn't noticed. It is now fixed by reverting that file.
I have a WinForms application with a panel (500x500 pixels) that I want to render something in. At this point I am just trying to fill it in with a specific color. I want to use OpenGL/CUDA interop to do this.
I got the panel configured to be the region to render stuff in, however when I run my code, the panel just gets filled with the glClear(..) color, and nothing assigned by the kernel is displayed. It sort of worked this morning (inconsistently), and in my attempt to sort out the SwapBuffers() mess, I think I screwed it up.
Here is the pixel format initialization for OpenGL. It seems to work fine, I have the two buffers as I expected, and the context is correct:
static PIXELFORMATDESCRIPTOR pfd=
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
16, // Select Our Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
16, // 16Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
GLint iPixelFormat;

// get the device context's best, available pixel format match
if((iPixelFormat = ChoosePixelFormat(hdc, &pfd)) == 0)
{
    MessageBox::Show("ChoosePixelFormat Failed");
    return 0;
}

// make that match the device context's current pixel format
if(SetPixelFormat(hdc, iPixelFormat, &pfd) == FALSE)
{
    MessageBox::Show("SetPixelFormat Failed");
    return 0;
}

if((m_hglrc = wglCreateContext(m_hDC)) == NULL)
{
    MessageBox::Show("wglCreateContext Failed");
    return 0;
}

if((wglMakeCurrent(m_hDC, m_hglrc)) == NULL)
{
    MessageBox::Show("wglMakeCurrent Failed");
    return 0;
}
After this is done, I set up the ViewPort as such:
glViewport(0,0,iWidth,iHeight); // Reset The Current Viewport
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
glEnable(GL_DEPTH_TEST);
Then I set up the clear color and do a clear:
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
Now I set up the CUDA/OpenGL interop:
cudaDeviceProp prop; int dev;
memset(&prop, 0, sizeof(cudaDeviceProp));
prop.major = 1; prop.minor = 0;
checkCudaErrors(cudaChooseDevice(&dev, &prop));
checkCudaErrors(cudaGLSetGLDevice(dev));
glBindBuffer = (PFNGLBINDBUFFERARBPROC)GET_PROC_ADDRESS("glBindBuffer");
glDeleteBuffers = (PFNGLDELETEBUFFERSARBPROC)GET_PROC_ADDRESS("glDeleteBuffers");
glGenBuffers = (PFNGLGENBUFFERSARBPROC)GET_PROC_ADDRESS("glGenBuffers");
glBufferData = (PFNGLBUFFERDATAARBPROC)GET_PROC_ADDRESS("glBufferData");
GLuint bufferID;
cudaGraphicsResource * resourceID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, bufferID);
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, fWidth*fHeight*4, NULL, GL_DYNAMIC_DRAW_ARB);
checkCudaErrors(cudaGraphicsGLRegisterBuffer( &resourceID, bufferID, cudaGraphicsMapFlagsNone ));
Now I try to call my kernel (which just paints each pixel a specific color) and have that displayed.
uchar4* devPtr;
size_t size;
// First clear the back buffer:
glClearColor(1.0f, 0.5f, 0.0f, 0.0f); // orange
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
checkCudaErrors(cudaGraphicsMapResources(1, &resourceID, NULL));
checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, resourceID));
animate(devPtr); // This will call the kernel and do a sync (see later)
checkCudaErrors(cudaGraphicsUnmapResources(1, &resourceID, NULL));
// Swap buffers to bring back buffer forward:
SwapBuffers(m_hDC);
At this point I expect to see the kernel colors on the screen, but no! I see orange, which is the clear color that I just set.
Here is the call to the kernel:
void animate(uchar4* dispPtr)
{
    checkCudaErrors(cudaDeviceSynchronize());
    animKernel<<<blocks, threads>>>(dispPtr, envdim);
    checkCudaErrors(cudaDeviceSynchronize());
}
Here envdim is just the dimensions (so 500x500). The kernel itself:
__global__ void animKernel(uchar4 *optr, dim3 matdim)
{
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * matdim.x;

    if (x < matdim.x && y < matdim.y)
    {
        // BLACK:
        optr[offset].x = 0; optr[offset].y = 0; optr[offset].z = 0;
    }
}
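For completeness, the blocks and threads launch parameters used in animate() would be something like the following for the 500x500 case. These are illustrative values, not necessarily the exact ones I used:
// Illustrative launch configuration: 16x16 thread blocks, enough blocks to cover 500x500
dim3 threads(16, 16);
dim3 blocks((500 + threads.x - 1) / threads.x,
            (500 + threads.y - 1) / threads.y);
dim3 envdim(500, 500, 1);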
Things I've done:
The size returned by cudaGraphicsResourceGetMappedPointer is 1,000,000, which corresponds to the 500x500 matrix of uchar4, so that's good.
Each kernel thread printed the value and location it was writing to, and that seemed OK.
Played with the alpha value for the clear color, but that doesn't seem to do anything (yet?)
Ran the animate() function several times. Don't know why I thought that would help, but I tried it...
So I guess I'm missing something, but I'm going kind of crazy looking for it. Any advice? Help?
It's another one of those questions I answer myself! Hmph, as I figured, it was a one-line issue. The problem resides in the rendering call itself.
The configuration is fine, the one issue I have with the code above is:
I never called glDrawPixels(), which is necessary in order for the OpenGL driver to copy the shared buffer (GL_PIXEL_UNPACK_BUFFER_ARB) source to the display buffer. The correct rendering sequence is then:
uchar4* devPtr;
size_t size;
// First clear the back buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
checkCudaErrors(cudaGraphicsMapResources(1, &resourceID, NULL));
checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, resourceID));
animate(devPtr); // This will call the kernel and do a sync (see later)
checkCudaErrors(cudaGraphicsUnmapResources(1, &resourceID, NULL));
// This is necessary to copy the shared buffer to display
glDrawPixels(fWidth, fHeight, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Swap buffers to bring back buffer forward:
SwapBuffers(m_hDC);
I'd like to thank the Acade-- uh, CUDA By Example, once again for helping me. Even though the example code from the book used GLUT (which was completely useless for this...), the book referenced normal gl functions.