DirectX: Small distortion between 2 sprite polygons - C++

Hello, I have been rendering sprites with DirectX the same way for a long time, but here I am rendering the scene to a texture and then drawing that texture on the screen with one big sprite.
For the camera I use this:
vUpVec=D3DXVECTOR3(0,1,0);
vLookatPt=D3DXVECTOR3(0,0,0);
vFromPt=D3DXVECTOR3(0,0,-1);
D3DXMatrixLookAtRH( &matView, &vFromPt, &vLookatPt, &vUpVec );
g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
D3DXMatrixOrthoRH( &matProj, 1,1, 0.5f, 20 );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
And to render the sprite:
CUSTOMVERTEX* v;
spritevb->Lock( 0, 0, (void**)&v, 0 );
v[0].position = D3DXVECTOR3(-0.5f,-0.5f,0); v[0].u=0; v[0].v=1;
v[1].position = D3DXVECTOR3(-0.5f,0.5f,0); v[1].u=0; v[1].v=0;
v[2].position = D3DXVECTOR3(0.5f,-0.5f,0); v[2].u=1; v[2].v=1;
v[3].position = D3DXVECTOR3(0.5f,0.5f,0); v[3].u=1; v[3].v=0;
spritevb->Unlock();
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
This is very basic and works; my sprite is rendered full screen.
But looking closer I can see that there's a faint diagonal line across the screen (between the 2 polygons), not a colored one, but as if the polygons weren't perfectly positioned.
I thought it might be filtering and tried removing everything, but maybe I'm forgetting something...
Thanks

To render full screen, the best way is to not define any camera at all.
If you use these input positions
SimpleVertex vertices[] =
{
{ XMFLOAT3( -1.0f, 1.0f, 0.5f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, 0.5f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 0.5f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, 0.5f ), XMFLOAT2( 0.0f, 1.0f ) },
};
and in the Vertex Shader do
VS_OUTPUT RenderSceneVS( VS_INPUT input )
{
VS_OUTPUT Output;
Output.Position = input.Position;
Output.TextureUV = input.TextureUV;
return Output;
}
you get a full-screen render as well, without having to worry about the viewing frustum. Using this I never saw any lines between the two triangles.
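For reference, here is a minimal sketch (assuming D3D11 with <d3d11.h> and <DirectXMath.h>; the function and variable names are mine, not from the code above) of how those clip-space vertices could be uploaded once and drawn as two triangles covering the whole render target:
// Hypothetical helper: builds an immutable vertex buffer and an index buffer
// for the full-screen quad. The positions are already in clip space, so the
// pass-through vertex shader above can be used without any matrices.
HRESULT CreateFullscreenQuad( ID3D11Device* device,
                              ID3D11Buffer** vertexBuffer,
                              ID3D11Buffer** indexBuffer )
{
    using namespace DirectX;
    SimpleVertex vertices[] =
    {
        { XMFLOAT3( -1.0f,  1.0f, 0.5f ), XMFLOAT2( 0.0f, 0.0f ) },
        { XMFLOAT3(  1.0f,  1.0f, 0.5f ), XMFLOAT2( 1.0f, 0.0f ) },
        { XMFLOAT3(  1.0f, -1.0f, 0.5f ), XMFLOAT2( 1.0f, 1.0f ) },
        { XMFLOAT3( -1.0f, -1.0f, 0.5f ), XMFLOAT2( 0.0f, 1.0f ) },
    };
    WORD indices[] = { 0, 1, 2,  0, 2, 3 }; // two triangles sharing one diagonal

    D3D11_BUFFER_DESC bd = {};
    bd.Usage = D3D11_USAGE_IMMUTABLE;
    bd.ByteWidth = sizeof( vertices );
    bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = vertices;
    HRESULT hr = device->CreateBuffer( &bd, &init, vertexBuffer );
    if( FAILED( hr ) ) return hr;

    bd.ByteWidth = sizeof( indices );
    bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
    init.pSysMem = indices;
    return device->CreateBuffer( &bd, &init, indexBuffer );
}
To draw it, bind the two buffers (DXGI_FORMAT_R16_UINT for the WORD indices), set D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST and call DrawIndexed( 6, 0, 0 ).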

Related

Deferred rendering: Problems when passing Render Targets as Shader Resource Views to shader

I'm implementing deferred rendering/shading for the very first time and I ran into some problems which I'm having trouble solving on my own :/
When rendering the geometry pass and deferred pass together I get this weird-looking output.
I'm using a green clear color at the beginning of my deferred pass, before setting the topology, input layout etc., so that's where the green comes from. I'm not sure why the output image is split in half, though.
My main problem, however, is successfully passing the render targets from my geometry pass as shader resource views to my deferred shader. This is the result from my geometry pass.
So judging by the output image, I seem to have managed the transform to the correct space, right?
In my geometry pass I set my render targets
ID3D11RenderTargetView* renderTargetsToSet[] = { mGBuffers[0]->RenderTargetView(),
mGBuffers[1]->RenderTargetView(),
mGBuffers[2]->RenderTargetView(),
mGBuffers[3]->RenderTargetView() };
mDeviceContext->OMSetRenderTargets( NUM_GBUFFERS, renderTargetsToSet, mDepthStencilView );
In the deferred pass I set them as shader resource views
ID3D11ShaderResourceView* viewsToSet[] = { mGBuffers[0]->mShaderResourceView,
mGBuffers[1]->mShaderResourceView,
mGBuffers[2]->mShaderResourceView,
mGBuffers[3]->mShaderResourceView };
mDeviceContext->PSSetShaderResources( 0, 4, viewsToSet );
In my deferred shader I register them
Texture2D worldPosTexture : register( t0 );
Texture2D normalTexture : register( t1 );
Texture2D diffuseTexture : register( t2 );
Texture2D specularTexture : register( t3 );
And sample them
float3 worldPosSample = worldPosTexture.Sample( samplerState, input.texCoord ).xyz;
float3 normalSample = normalTexture.Sample( samplerState, input.texCoord ).xyz;
float3 diffuseSample = diffuseTexture.Sample( samplerState, input.texCoord ).xyz;
float3 specularSample = specularTexture.Sample( samplerState, input.texCoord ).xyz;
To get the exact same output that the geometry pass gave I write
return float4( worldPosSample, 1.0f );
But all I get is that black and green split image that I posted.
To debug this I put in some if-statements that return a color if one of the elements in a float3 sample is 0.0f, and ALL of the elements are 0.0f!
Am I really setting the gbuffer render targets as shader resource views correctly?
My understanding was that when a gbuffer contains an ID3D11ShaderResourceView* and an ID3D11RenderTargetView*, and the ID3D11Texture2D* used for creating both is created with the D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE bind flags, then whatever is rendered through the render target view is automatically "copied" to the gbuffer's shader resource view, which can later be used as input in a shader.
Feel free to correct me and/or broaden my horizons on the subject.
Any suggestions on my problem? Thank you!
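For reference, a minimal sketch of the bind-flag setup described above (the variable names here are hypothetical, not taken from this question's code): one G-buffer texture created with both bind flags, with a render target view and a shader resource view over the same resource. Both views reference the same texture memory, so nothing is literally copied; whatever the geometry pass writes through the RTV is what the deferred pass reads through the SRV, as long as the texture is no longer bound as a render target when it is sampled.
// Hypothetical G-buffer target: one texture, two views over it.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = screenWidth;                 // assumed to match the back buffer
texDesc.Height = screenHeight;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D( &texDesc, nullptr, &texture );

ID3D11RenderTargetView* rtv = nullptr;        // written during the geometry pass
device->CreateRenderTargetView( texture, nullptr, &rtv );

ID3D11ShaderResourceView* srv = nullptr;      // sampled during the deferred pass
device->CreateShaderResourceView( texture, nullptr, &srv );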
I figured out what I was doing wrong!
The black and green split image was a result of sampling in the deferred shader with incorrect UV-coordinates. I made the mistake of simply passing the geometry again and sampling with its texture coordinates.
The solution was to define a very simple quad and a new vertex buffer to store it in
vertices[0].position = XMFLOAT3( -1.0f, 1.0f, 0.0f ); vertices[0].normal = XMFLOAT3( 0.0f, 0.0f, -1.0f ); vertices[0].texCoord = XMFLOAT2( 0.0f, 0.0f );
vertices[1].position = XMFLOAT3( 1.0f, 1.0f, 0.0f ); vertices[1].normal = XMFLOAT3( 0.0f, 0.0f, -1.0f ); vertices[1].texCoord = XMFLOAT2( 1.0f, 0.0f );
vertices[2].position = XMFLOAT3( -1.0f, -1.0f, 0.0f ); vertices[2].normal = XMFLOAT3( 0.0f, 0.0f, -1.0f ); vertices[2].texCoord = XMFLOAT2( 0.0f, 1.0f );
vertices[3].position = XMFLOAT3( 1.0f, -1.0f, 0.0f ); vertices[3].normal = XMFLOAT3( 0.0f, 0.0f, -1.0f ); vertices[3].texCoord = XMFLOAT2( 1.0f, 1.0f );
The quad has its normal pointing along the negative Z-axis, so its orientation matches the textures that were produced in the geometry pass. I also created a new ID3D11InputLayout* containing only POSITION, NORMAL and TEXCOORD for the deferred pass, as well as changed the topology to D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP, since my geometry pass uses tessellation.
This is the final output :)
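For reference, a minimal sketch of what the deferred-pass draw described above might look like (the back-buffer RTV, quad buffer and input layout names are hypothetical):
// Hypothetical deferred (lighting) pass: draw the full-screen quad as a
// triangle strip and sample the four G-buffers.
UINT stride = sizeof( QuadVertex );   // the POSITION/NORMAL/TEXCOORD layout described above
UINT offset = 0;

// Render to the back buffer again; the G-buffers must no longer be bound as render targets.
mDeviceContext->OMSetRenderTargets( 1, &mBackBufferRTV, nullptr );

mDeviceContext->IASetInputLayout( mQuadInputLayout );
mDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
mDeviceContext->IASetVertexBuffers( 0, 1, &mQuadVertexBuffer, &stride, &offset );

ID3D11ShaderResourceView* viewsToSet[] = { mGBuffers[0]->mShaderResourceView,
                                           mGBuffers[1]->mShaderResourceView,
                                           mGBuffers[2]->mShaderResourceView,
                                           mGBuffers[3]->mShaderResourceView };
mDeviceContext->PSSetShaderResources( 0, 4, viewsToSet );

mDeviceContext->Draw( 4, 0 );         // 4 strip vertices -> 2 triangles covering the screen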

DirectX 11: Pointing to vertex buffer from another function / Trying to use multiple textures

Edit/Update: I've put in my most recent code, and I'm asking a new question about texturing, if you still have time to help me.
Original problem: I need to have the vertex buffer in its own function. I'm trying to build it with variables for the vertices so I can run an array of randomly generated coordinates through it and end up with many instances of cubes whose size I can control.
Your advice set me on the right track and I was able to make the vertex buffer work in a separate function as desired. I may have set myself up for problems later on so I'm trying to show as much relevant code as possible just in case.
New problem: My next step is to do what I just did but drawing a different set of cubes (Friendlies, so different size, which is why I wanted to make the buffer more dynamic so I can re-use it for everything).
I think I can manage that part fine, but first I need to figure out how to use multiple textures so I can tell which is which (also because onscreen text will be done by texturing squares with pictures of letters/numbers).
Here is the code involved:
struct VERTEX {FLOAT X, Y, Z; D3DXVECTOR3 Normal; FLOAT U, V;};
void InitGraphics()
{
// create the vertex buffer
D3D11_BUFFER_DESC bd;
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(VERTEX) * 24; // size is the VERTEX struct * amount of vertices stored
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
// create the index buffer out of DWORDs
DWORD IndexList[] =
{
0,1,2,3,
4,5,6,7,
8,9,10,11,
12,13,14,15,
16,17,18,19,
20,21,22,23,
};
// create the index buffer
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(DWORD) * 24; // Changed to match the amount of indices used
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
bd.MiscFlags = 0;
dev->CreateBuffer(&bd, NULL, &pIBuffer);
devcon->Map(pIBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);
memcpy(ms.pData, IndexList, sizeof(IndexList));
devcon->Unmap(pIBuffer, NULL);
D3DX11CreateShaderResourceViewFromFile
(dev, // the Direct3D device
L"Wood.png", // load Wood.png in the local folder
NULL, // no additional information
NULL, // no multithreading
&pTexture, // address of the shader-resource-view
NULL); // no pointer to receive the HRESULT
}
void RenderFrame(void)
{
...
devcon->UpdateSubresource(pCBuffer, 0, 0, &cBuffer, 0, 0);
devcon->PSSetShaderResources(0, 1, &pTexture);
...
DrawStuff();
}
I was able to follow your directions and bring the memcpy line into the other function as seen below but had to bring a couple other lines along with it to make it work. I included more of the code this time to show what else is in the InitGraphics function as my next problem is trying to figure out how to use multiple textures.
The vertex buffer now looks like this:
void VertBuffer()
{
VERTEX VertList[] =
{
{(vVX - vS + 0.0f), (- vS + 0.0f), (vVZ - vS + 0.0f), D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 0.0f},
{(vVX - vS + 0.0f), (vS + 0.0f), (vVZ - vS + 0.0f), D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 1.0f},
{(vVX + vS + 0.0f), (- vS + 0.0f), (vVZ - vS + 0.0f), D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 0.0f},
{(vVX + vS + 0.0f), (vS + 0.0f), (vVZ - vS + 0.0f), D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 1.0f}, // side 1
...
};
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);
memcpy(ms.pData, VertList, sizeof(VertList));
devcon->Unmap(pVBuffer, NULL);
}
Here vVX and vVZ are coordinates randomly generated and stored in an array, and vS is used to control the size of the cube. When I last posted I was still having problems with it: I had managed to get the vertex buffer working in its own function, but I could still only call it once at the beginning, which meant none of the variables took effect. I tried putting it in DrawStuff(), but that caused the program to crash after 3-6 seconds. Since then, I have absolutely no idea what I changed or edited, but somehow the problem got fixed, so now I have a working draw function which looks like this and calls the vertex buffer function every cycle to constantly update the locations of the cubes.
void DrawStuff()
{
for (j = 0; j < 10; j++) // Draw 10 Creeps
{
for (int i = 0; i < 6; i++)
{
vS = 2; // Creep size
vVX = aCMgr [j][0];
vVZ = aCMgr [j][1];
VertBuffer();
devcon->DrawIndexed(4, i * 4, 0);
}
}
}
So that seems to be working great now and I'm just going to make multiple of these. One for bad cubes (Creeps), one for the player + friendly cube, and one for lots of squares which will be textured to make up a rudimentary GUI.
After 12 hours of Google searching and re-reading the tutorial website as well as my own code, I've got as far as learning that I need to change the array size in D3D11_TEXTURE2D_DESC and then run the part in InitGraphics multiple times to load up each texture, but I still cannot for the life of me figure out at what point to control applying different textures to different objects.
Here's (I think) all the code I have relating to textures:
ID3D11ShaderResourceView *pTexture; // The pointer to the texture's shader resource view
void InitD3D(HWND hWnd)
{
...
D3D11_TEXTURE2D_DESC texd;
ZeroMemory(&texd, sizeof(texd));
texd.Width = 512;
texd.Height = 512;
texd.ArraySize = 3;
texd.MipLevels = 1;
texd.SampleDesc.Count = 4;
texd.Format = DXGI_FORMAT_D32_FLOAT;
texd.BindFlags = D3D11_BIND_DEPTH_STENCIL;
ID3D11Texture2D *pDepthBuffer;
dev->CreateTexture2D(&texd, NULL, &pDepthBuffer);
...
}
I changed ArraySize to 3, assuming I will have 3 different images which will be used to texture everything. From my understanding, I need to run D3DX11CreateShaderResourceViewFromFile three times, once for each texture? Where would I go from here?
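One possible way that could look, sketched under the assumption of three separate ID3D11ShaderResourceView* pointers (the extra file names and variable names here are hypothetical):
// Hypothetical: one shader resource view per image file.
ID3D11ShaderResourceView *pWoodTexture  = NULL;
ID3D11ShaderResourceView *pCreepTexture = NULL;
ID3D11ShaderResourceView *pFontTexture  = NULL;

void LoadTextures()
{
    // One call per image; each call fills in its own shader resource view.
    D3DX11CreateShaderResourceViewFromFile(dev, L"Wood.png",  NULL, NULL, &pWoodTexture,  NULL);
    D3DX11CreateShaderResourceViewFromFile(dev, L"Creep.png", NULL, NULL, &pCreepTexture, NULL);
    D3DX11CreateShaderResourceViewFromFile(dev, L"Font.png",  NULL, NULL, &pFontTexture,  NULL);
}

void DrawStuff()
{
    // Bind whichever texture the next batch should use to slot t0,
    // then issue the draw calls for that batch.
    devcon->PSSetShaderResources(0, 1, &pCreepTexture);
    // ... DrawIndexed calls for the creep cubes ...

    devcon->PSSetShaderResources(0, 1, &pWoodTexture);
    // ... DrawIndexed calls for the player / friendly cubes ...

    devcon->PSSetShaderResources(0, 1, &pFontTexture);
    // ... DrawIndexed calls for the GUI squares ...
}
Note that each D3DX11CreateShaderResourceViewFromFile call creates its own texture resource, so the depth-buffer D3D11_TEXTURE2D_DESC shown above doesn't need to change for this.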
You can put the vertex array and the memcpy together in the same function and call that function in InitGraphics().
void InitVertexBuffer()
{
VERTEX VertList[] =
{
{-1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 0.0f},
{-1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 1.0f},
{1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 0.0f},
{1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 1.0f},
};
// ... map the vertex buffer to get ms, as before ...
memcpy(ms.pData, VertList, sizeof(VertList));
}
void InitGraphics()
{
// ... Code to initialized D3D11
InitVertexBuffer();
}
Another thing I want to point out is that in DirectX 11 you don't necessarily need to use memcpy; you can supply the vertex data when you create the vertex buffer, as below.
// The vertex format
struct SimpleVertex
{
DirectX::XMFLOAT3 Pos; // Position
DirectX::XMFLOAT3 Color; // color
};
VOID InitVertexBuffer()
{
// Create the vertex buffer
SimpleVertex vertices[] =
{
{ XMFLOAT3( -1.0f, 1.0f, -1.0f ), XMFLOAT3( 0.0f, 0.0f, 1.0f) },
{ XMFLOAT3( 1.0f, 1.0f, -1.0f ), XMFLOAT3( 0.0f, 1.0f, 0.0f) },
{ XMFLOAT3( 1.0f, 1.0f, 1.0f ), XMFLOAT3( 0.0f, 1.0f, 1.0f) },
{ XMFLOAT3(-1.0f, 1.0f, 1.0f ), XMFLOAT3( 1.0f, 0.0f, 0.0f) },
{ XMFLOAT3(-1.0f, -1.0f, -1.0f ), XMFLOAT3( 1.0f, 0.0f, 1.0f) },
{ XMFLOAT3( 1.0f, -1.0f, -1.0f ), XMFLOAT3( 1.0f, 1.0f, 0.0f) },
{ XMFLOAT3( 1.0f, -1.0f, 1.0f ), XMFLOAT3( 1.0f, 1.0f, 1.0f) },
{ XMFLOAT3(-1.0f, -1.0f, 1.0f ), XMFLOAT3( 0.0f, 0.0f, 0.0f) },
};
// Vertex Buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(vertices);
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
// copy vertex buffer data
D3D11_SUBRESOURCE_DATA initData;
ZeroMemory(&initData, sizeof(initData));
initData.pSysMem = vertices; ////////////////// Bind your vertex data here.
HRESULT hr = g_pd3dDevice->CreateBuffer(&bd, &initData, &g_pVertexBuffer);
if(FAILED(hr))
{
MessageBox(NULL, L"Create vertex buffer failed", L"Error", 0);
}
}

How to convert world coordinates to screen coordinates

I am creating a game that will have 2d pictures inside a 3d world.
I originally started off by not caring about my images being stretched to a square while I learnt more about how game mechanics work... but it's now time to get my textures to display in the correct ratio and... size.
Just a side note, I have played with orthographic left-handed projections, but I noticed that you cannot do 3D in that... (I guess that makes sense... but I could be wrong; I tried it and when I rotated my image, it went all stretchy and weird).
The nature of my game is as follows:
In the image it says -1.0 to 1.0... I'm not fussed if the coordinates are:
topleft = 0,0,0
bottom right = 1920, 1200, 0
But if that's the solution, then whatever... (P.S. the game is not currently set up so that -1.0 and 1.0 are the left and right of the screen; in fact I'm not sure how I'm going to make the screen edges the boundaries, but that's a question for another day.)
Question:
The issue I am having is that the image for my player (2D) is 128 x 64 pixels. After the world matrix multiplication (I think that's what it is), the vertices I put in scale my texture hugely... which makes sense, but it looks ugly, and I don't want to just whack a massive scaling matrix into the mix because it'll be difficult to work out how to make the texture 1:1 with my screen pixels (although maybe you will tell me that's actually how you do it, and you just need a formula to work out what the scaling should be).
But basically, I want the vertices to hold a 1:1 pixel size of my image, unstretched...
So I assume I need to convert my world coords to screen coords before outputting my textures and vertices? I'm not sure how it works...
Anyway, here are my vertices... you may notice what I've done:
struct VERTEX
{
float X, Y, Z;
//float R, G, B, A;
float NX, NY, NZ;
float U, V; // texture coordinates
};
const unsigned short SquareVertices::indices[ 6 ] = {
0, 1, 2, // side 1
2, 1, 3
};
const VERTEX SquareVertices::vertices[ 4 ] = {
//{ -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
//{ 1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
//{ -1.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
//{ 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
{ -64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
{ 64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -64.0f, 32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 64.0f, 64.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
(128 pixels / 2 = 64), (64 / 2 = 32) because the centre is 0.0... but what do I need to do with the projection, the world transforms and whatnot to get from world space to screen space?
My current setups look like this:
// called 1st
void Game::SetUpViewTransformations( )
{
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -20.0f, 0 );
XMVECTOR vecCamLookAt = XMVectorSet( 0, 0, 0, 0 );
XMVECTOR vecCamUp = XMVectorSet( 0, 1, 0, 0 );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
}
// called 2nd
void Game::SetUpMatProjection( )
{
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
windowWidth / windowHeight, // aspect ratio
1, // the near view-plane
100 ); // the far view-plane
}
and here is a sneaky look at my update and render methods:
// called 3rd
void Game::Update( )
{
world->Update();
worldRotation = XMMatrixRotationY( world->rotation );
player->Update( );
XMMATRIX matTranslate = XMMatrixTranslation( player->x, player->y, 0.0f );
//XMMATRIX matTranslate = XMMatrixTranslation( 0.0f, 0.0f, 1.0f );
matWorld[ 0 ] = matTranslate;
}
// called 4th
void Game::Render( )
{
// set our new render target object as the active render target
d3dDeviceContext->OMSetRenderTargets( 1, rendertarget.GetAddressOf( ), zbuffer.Get( ) );
// clear the back buffer to a deep blue
float color[ 4 ] = { 0.0f, 0.2f, 0.4f, 1.0f };
d3dDeviceContext->ClearRenderTargetView( rendertarget.Get( ), color );
d3dDeviceContext->ClearDepthStencilView( zbuffer.Get( ), D3D11_CLEAR_DEPTH, 1.0f, 0 ); // clear the depth buffer
CBUFFER cBuffer;
cBuffer.DiffuseVector = XMVectorSet( 0.0f, 0.0f, 1.0f, 0.0f );
cBuffer.DiffuseColor = XMVectorSet( 0.5f, 0.5f, 0.5f, 1.0f );
cBuffer.AmbientColor = XMVectorSet( 0.2f, 0.2f, 0.2f, 1.0f );
//cBuffer.Final = worldRotation * matWorld[ 0 ] * matView * matProjection;
cBuffer.Final = worldRotation * matWorld[ 0 ] * matView * matProjection;
cBuffer.Rotation = XMMatrixRotationY( world->rotation );
// calculate the view transformation
SetUpViewTransformations();
SetUpMatProjection( );
//matFinal[ 0 ] = matWorld[0] * matView * matProjection;
UINT stride = sizeof( VERTEX );
UINT offset = 0;
d3dDeviceContext->PSSetShaderResources( 0, 1, player->texture.GetAddressOf( ) ); // Set up texture
d3dDeviceContext->IASetVertexBuffers( 0, 1, player->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices will be drawn
d3dDeviceContext->IASetIndexBuffer( player->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &cBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( player->indices ), 0, 0 ); // draw
swapchain->Present( 1, 0 );
}
Just to clarify: if I make my vertices use 2 and 1 (reflecting the fact that my image is 128 x 64), I get a normal-looking image size... and yet at 0,0,0 it's not at 1:1 size...
{ -2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
{ 2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -2.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 2.0f, 2.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
Desired outcome to the max:
Cool picture, isn't it? :D
Comment help:
I'm not familiar with DirectX, but as far as I can see the thing with your image is that screen coordinates are [-1...+1] on x and y. So the total length on both axes equals 2, and your image is scaled by 2. Try to account for this scale in the camera matrix.
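To make the comment concrete: one common way to get a 1:1 mapping between world units and screen pixels is an orthographic projection sized to the back buffer, used instead of (or alongside) the perspective one. A rough sketch, assuming the same XMMath-based setup as above (this is an illustration, not the poster's code; SetUpSpriteProjection is a hypothetical name):
// Hypothetical: with this projection one world unit equals one pixel, so the
// 128 x 64 quad (vertices at +/-64, +/-32) shows up as 128 x 64 pixels on screen.
void Game::SetUpSpriteProjection( )
{
    matProjection = XMMatrixOrthographicLH(
        (float)windowWidth,    // view volume width in world units == pixels
        (float)windowHeight,   // view volume height in world units == pixels
        1.0f,                  // the near view-plane
        100.0f );              // the far view-plane
}
The 3D world can keep using the perspective projection; only the sprites/GUI would be drawn with the pixel-sized orthographic one.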

DirectX: Nothing is drawn on screen

I'm trying to develop a program using DirectX (10) to display on screen.
Thing is, it displays nothing but the color I use to clear the backbuffer.
(I apologize for the rather big chunks of code that follow.)
Here is my rendering function:
void DXEngine::renderOneFrame()
{
//First, we clear the back buffer
m_device->ClearRenderTargetView(m_renderTargetView,D3DXCOLOR(0.0f, 0.125f, 0.3f, 1.0f));
//Then, we clear the depth buffer
m_device->ClearDepthStencilView(m_depthStencilView,D3D10_CLEAR_DEPTH,1.0f, 0);
//Update variables
m_worldVariable->SetMatrix((float*)&m_world);
m_viewVariable->SetMatrix((float*)&m_view);
m_projectionVariable->SetMatrix((float*)&m_projection);
//Render the cube
D3D10_TECHNIQUE_DESC techDesc;
m_technique->GetDesc(&techDesc);
for(UINT pass = 0; pass < techDesc.Passes ; pass++){
m_technique->GetPassByIndex(pass)->Apply(0);
m_device->DrawIndexed(36,0,0);
}
m_swapChain->Present(0,0);
}
It is exactly the same as the 5th tutorial on DirectX10 in the DirectX SDK (June 2010) under the "Samples" folder, except it's encapsulated in an object.
My scene is initialized as follows:
HRESULT DXEngine::initStaticScene()
{
HRESULT hr;
//Vertex buffer creation and initialization
Vertex1Pos1Col vertices [] =
{
{ D3DXVECTOR3( -1.0f, 1.0f, -1.0f ), D3DXVECTOR4( 0.0f, 0.0f, 1.0f, 1.0f ) },
{ D3DXVECTOR3( 1.0f, 1.0f, -1.0f ), D3DXVECTOR4( 0.0f, 1.0f, 0.0f, 1.0f ) },
{ D3DXVECTOR3( 1.0f, 1.0f, 1.0f ), D3DXVECTOR4( 0.0f, 1.0f, 1.0f, 1.0f ) },
{ D3DXVECTOR3( -1.0f, 1.0f, 1.0f ), D3DXVECTOR4( 1.0f, 0.0f, 0.0f, 1.0f ) },
{ D3DXVECTOR3( -1.0f, -1.0f, -1.0f ), D3DXVECTOR4( 1.0f, 0.0f, 1.0f, 1.0f ) },
{ D3DXVECTOR3( 1.0f, -1.0f, -1.0f ), D3DXVECTOR4( 1.0f, 1.0f, 0.0f, 1.0f ) },
{ D3DXVECTOR3( 1.0f, -1.0f, 1.0f ), D3DXVECTOR4( 1.0f, 1.0f, 1.0f, 1.0f ) },
{ D3DXVECTOR3( -1.0f, -1.0f, 1.0f ), D3DXVECTOR4( 0.0f, 0.0f, 0.0f, 1.0f ) },
};
D3D10_BUFFER_DESC desc;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.ByteWidth = sizeof(Vertex1Pos1Col) * 8;
desc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
D3D10_SUBRESOURCE_DATA data;
data.pSysMem = vertices;
hr = m_device->CreateBuffer(&desc,&data,&m_vertexBuffer);
if(FAILED(hr)){
MessageBox(NULL,TEXT("Vertex buffer creation failed"), TEXT("Error"),MB_OK);
return hr;
}
UINT stride = sizeof(Vertex1Pos1Col);
UINT offset = 0;
m_device->IASetVertexBuffers(0,1,&m_vertexBuffer,&stride,&offset);
//Index buffer creation and initialization
DWORD indices[] =
{
3,1,0,
2,1,3,
0,5,4,
1,5,0,
3,4,7,
0,4,3,
1,6,5,
2,6,1,
2,7,6,
3,7,2,
6,4,5,
7,4,6,
};
desc.Usage = D3D10_USAGE_DEFAULT;
desc.ByteWidth = sizeof(DWORD) * 36;
desc.BindFlags = D3D10_BIND_INDEX_BUFFER;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
data.pSysMem = vertices;
hr = m_device->CreateBuffer(&desc,&data,&m_indexBuffer);
if(FAILED(hr)){
MessageBox(NULL,TEXT("Index buffer creation failed"), TEXT("Error"),MB_OK);
return hr;
}
m_device->IASetIndexBuffer(m_indexBuffer,DXGI_FORMAT_R32_FLOAT,0);
//Set the primitive topology, i.e. how indices should be interpreted (here, as a triangle list)
m_device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
D3DXMatrixIdentity(&m_world);
D3DXVECTOR3 eye(0.0f, 1.0f, -10.0f);
D3DXVECTOR3 at(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&m_view, &eye, &at, &up);
D3DXMatrixPerspectiveFovLH(&m_projection, (float)D3DX_PI * 0.25f, m_width/(FLOAT)m_height, 0.1f, 100.0f);
return hr;
}
Once again, it's the exact same code (but encapsulated) as the tutorial I mentioned earlier.
When I open the Tutorial Visual Studio Solution in my IDE, it works and displays nicely what is described in the scene, but when I try to run my "encapsulated" version of this code, nothing shows up but the background color...
Note: My window's message pump works fine, I can even handle user input the way I want; everything's fine. My application performs my engine initialization correctly (I check every single returned error code and there's nothing but S_OK codes).
I have no clue where to search now. I've checked my code time and time again and it's exactly the same as the tutorial, I've checked that everything I encapsulate is set and accessed correctly, etc., but I still can't display anything other than the background color...
I was wondering if anyone here could have an idea of what could possibly be causing this, or at least hints on where to look...
EDIT: Effect file used:
//--------------------------------------------------------------------------------------
// File: Tutorial05.fx
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//--------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------
// Constant Buffer Variables
//--------------------------------------------------------------------------------------
matrix World;
matrix View;
matrix Projection;
//--------------------------------------------------------------------------------------
struct VS_INPUT
{
float4 Pos : POSITION;
float4 Color : COLOR;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float4 Color : COLOR;
};
//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = mul( input.Pos, World );
output.Pos = mul( output.Pos, View );
output.Pos = mul( output.Pos, Projection );
output.Color = input.Color;
return output;
}
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 PS( PS_INPUT input) : SV_Target
{
return input.Color;
}
//--------------------------------------------------------------------------------------
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
}
}
I think that this could be the error:
The input assembler stage of the D3D (10 and 11) pipeline always expects a DXGI_FORMAT_***_UINT format for index buffers. MSDN confirms this:
A DXGI_FORMAT that specifies the format of the data in the index buffer. The only formats allowed for index buffer data are 16-bit (DXGI_FORMAT_R16_UINT) and 32-bit (DXGI_FORMAT_R32_UINT) integers.
Then look at your code that binds your buffer to IA:
m_device->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_FLOAT, 0);
I think you should use DXGI_FORMAT_R32_UINT for your case, like this:
m_device->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
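For completeness, a minimal sketch of an index-buffer setup the input assembler will accept, using the same variable names as the question (this is an illustration of the general pattern, not the poster's exact code):
// DWORD indices are 32-bit unsigned integers, so the matching format is DXGI_FORMAT_R32_UINT.
D3D10_BUFFER_DESC desc;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.ByteWidth = sizeof(DWORD) * 36;
desc.BindFlags = D3D10_BIND_INDEX_BUFFER;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

D3D10_SUBRESOURCE_DATA data;
data.pSysMem = indices;                        // the index array (not the vertex array)

m_device->CreateBuffer(&desc, &data, &m_indexBuffer);
m_device->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);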

OpenGL Color Matrix

How do I get the OpenGL color matrix transforms working?
I've modified a sample program that just draws a triangle, and added some color matrix code to see if I can change the colors of the triangle but it doesn't seem to work.
static float theta = 0.0f;
glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClearDepth(1.0);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glRotatef( theta, 0.0f, 0.0f, 1.0f );
glMatrixMode(GL_COLOR);
GLfloat rgbconversion[16] =
{
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f
};
glLoadMatrixf(rgbconversion);
glMatrixMode(GL_MODELVIEW);
glBegin( GL_TRIANGLES );
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( 0.0f, 1.0f , 0.5f);
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( 0.87f, -0.5f, 0.5f );
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( -0.87f, -0.5f, 0.5f );
glEnd();
glPopMatrix();
As far as I can tell, the color matrix I'm loading should change the triangle to black, but it doesn't seem to work. Is there something I'm missing?
The color matrix only applies to pixel transfer operations such as glDrawPixels, which aren't hardware accelerated on current hardware. However, implementing a color matrix using a fragment shader is really easy: you can just pass your matrix as a uniform mat4 and multiply the fragment color by it.
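A rough sketch of that idea (GLSL 1.20-style source embedded as a C string; the names colorMatrix and program are hypothetical, and compiling/linking the shader is omitted):
/* Hypothetical fragment shader implementing a color matrix:
   the interpolated vertex color is multiplied by a uniform mat4. */
const char* colorMatrixFragmentSource =
    "uniform mat4 colorMatrix;                  \n"
    "void main()                                \n"
    "{                                          \n"
    "    gl_FragColor = colorMatrix * gl_Color; \n"
    "}                                          \n";

/* Application side, once the shader has been compiled and linked into 'program': */
GLint location = glGetUniformLocation(program, "colorMatrix");
glUseProgram(program);
glUniformMatrix4fv(location, 1, GL_FALSE, rgbconversion); /* the same 4x4 array as above */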
It looks like you're doing it correctly, but your current color matrix sets the triangle's alpha value to 0 as well, so while it is being drawn, it does not appear on the screen.
"Additionally, if the ARB_imaging extension is supported, GL_COLOR is also accepted."
From the glMatrixMode documentation. Is the extension supported on your machine?
I have found the possible problem.
The color matrix is part of the "Image Processing Subset". On most hardware it is implemented by the driver in software.
Solution:
Add this line after glEnd():
glCopyPixels(0,0, getWidth(), getHeight(),GL_COLOR);
It's very slow....