Drawing a full screen quad? - C++

What's wrong with this:
pVertexBuffer[0].Position = D3DXVECTOR3(0.0f,0.0f,0.0f);
pVertexBuffer[0].TexCoord = D3DXVECTOR2(0.0f,0.0f);
pVertexBuffer[1].Position = D3DXVECTOR3(m_ScreenResolutionX,0.0f,0.0f);
pVertexBuffer[1].TexCoord = D3DXVECTOR2(1.0f,0.0f);
pVertexBuffer[2].Position = D3DXVECTOR3(0.0f,m_ScreenResolutionY,0.0f);
pVertexBuffer[2].TexCoord = D3DXVECTOR2(0.0f,1.0f);
pVertexBuffer[3].Position = D3DXVECTOR3(0.0f,m_ScreenResolutionY,0.0f);
pVertexBuffer[3].TexCoord = D3DXVECTOR2(0.0f,1.0f);
pVertexBuffer[4].Position = D3DXVECTOR3(m_ScreenResolutionX,0.0f,0.0f);
pVertexBuffer[4].TexCoord = D3DXVECTOR2(1.0f,0.0f);
pVertexBuffer[5].Position = D3DXVECTOR3(m_ScreenResolutionX,m_ScreenResolutionY,0.0f);
pVertexBuffer[5].TexCoord = D3DXVECTOR2(1.0f,1.0f);
If I try to render this, I don't see anything. In the vertex shader I use these vertex positions without transforming them.

Vertex shaders output vertices in homogeneous clip-space coordinates; they are usually independent of the screen resolution. In other words, you should output coordinates in the range (-1,-1,0) to (1,1,0), not pixel positions.
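For example, the same quad defined directly in clip space needs no transform in the vertex shader (a sketch assuming the vertex struct from the question; note that the V texture coordinate runs top-to-bottom while clip-space Y runs bottom-to-top):
// Full-screen quad in clip space: same triangle order as the question,
// only the pixel coordinates are replaced by the (-1,-1)..(1,1) range.
pVertexBuffer[0].Position = D3DXVECTOR3(-1.0f,  1.0f, 0.0f); // top left
pVertexBuffer[0].TexCoord = D3DXVECTOR2(0.0f, 0.0f);
pVertexBuffer[1].Position = D3DXVECTOR3( 1.0f,  1.0f, 0.0f); // top right
pVertexBuffer[1].TexCoord = D3DXVECTOR2(1.0f, 0.0f);
pVertexBuffer[2].Position = D3DXVECTOR3(-1.0f, -1.0f, 0.0f); // bottom left
pVertexBuffer[2].TexCoord = D3DXVECTOR2(0.0f, 1.0f);
pVertexBuffer[3].Position = D3DXVECTOR3(-1.0f, -1.0f, 0.0f); // bottom left
pVertexBuffer[3].TexCoord = D3DXVECTOR2(0.0f, 1.0f);
pVertexBuffer[4].Position = D3DXVECTOR3( 1.0f,  1.0f, 0.0f); // top right
pVertexBuffer[4].TexCoord = D3DXVECTOR2(1.0f, 0.0f);
pVertexBuffer[5].Position = D3DXVECTOR3( 1.0f, -1.0f, 0.0f); // bottom right
pVertexBuffer[5].TexCoord = D3DXVECTOR2(1.0f, 1.0f);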

Related

How can the particle objects have different textures?

I have a problem making particles, so I'm going to ask a question. The problem is that when I create several particle systems, every system ends up using the texture of the system that was created last. This is my code.
m_pParticleBuffer = new CStructuredBuffer;
m_pParticleBuffer->Create(sizeof(tParticle), m_iMaxParticle, nullptr);
m_pSharedBuffer = new CStructuredBuffer;
m_pSharedBuffer->Create(sizeof(tParticleShared), 1, nullptr);
m_pMesh = CResMgr::GetInst()->FindRes<CMesh>(L"PointMesh");
m_pMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleMtrl");
Ptr<CTexture> pParticle = _pTexture;
m_pMtrl->SetData(SHADER_PARAM::TEX_0, pParticle.GetPointer());
m_pUpdateMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleUpdateMtrl");
This is where the particles are initialized.
float fRatio = tData[_in.iInstID].m_fCurTime / tData[_in.iInstID].m_fLifeTime;
float4 vCurColor = (g_vec4_1 - g_vec4_0) * fRatio + g_vec4_0;
return vCurColor * g_tex_0.Sample(g_sam_0, _in.vUV);
This is the HLSL for the particles' pixel shader.
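A likely cause, judging from the initialization code above: every particle system fetches the same shared CMaterial instance (L"ParticleMtrl") from CResMgr, so each call to SetData(SHADER_PARAM::TEX_0, ...) overwrites the texture for every system sharing that material, and the last system created wins. A hedged sketch of one possible fix, giving each system its own material instance (CMaterial::Clone() is a hypothetical helper; the engine may expose duplication differently):
// Hypothetical: copy the shared material so SetData() only affects this system.
Ptr<CMaterial> pSharedMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleMtrl");
m_pMtrl = pSharedMtrl->Clone();
m_pMtrl->SetData(SHADER_PARAM::TEX_0, _pTexture.GetPointer());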

Sampling the back buffer in a vertex shader always returns 0 and a float1 instead of a float4

I am totally lost now. I have been trying to read the back buffer inside a vertex shader for days with no luck whatsoever.
I'm trying to read the vertex's position from the back buffer, along with its neighboring pixels. (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.) I've created a separate ID3D11Texture2D and an SRV to go with the back buffer. I copy the back buffer into this SRV's resource and bind the SRV using VSSetShaderResources, but I just can't seem to read from it inside the vertex shader.
I will share some code from the creation of these elements, as well as some RenderDoc screenshots that keep showing the SRV being bound to the VS stage with the right texture associated with it. Yet every Load, [] operator, tex2Dlod, or SampleLevel (I bound a SamplerState too) keeps returning a single 1.0 value with the rest of the float4 never being returned, meaning I only get a float1 back. I will also include a RenderDoc capture file if anyone wants to take a look.
This is a simple scene from tutorial 42 on the rastertek.com site; there is a ground plane with a cube and a sphere on it:
https://i.imgur.com/cbVC48E.gif
// Here is the code that creates the secondary texture and SRV that house a copy of the back buffer.
// Get the pointer to the back buffer.
result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&backBufferPtr);
if (FAILED(result))
{
    MessageBox((*(hwnd)), L"Get the pointer to the back buffer FAILED", L"Error", MB_OK);
    return false;
}
// Create another Texture2D that we will use to make an SRV out of; the back buffer will be copied into it so we can read it in a shader.
D3D11_TEXTURE2D_DESC bbDesc;
backBufferPtr->GetDesc(&bbDesc);
bbDesc.MipLevels = 1;
bbDesc.ArraySize = 1;
bbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bbDesc.Usage = D3D11_USAGE_DEFAULT;
bbDesc.MiscFlags = 0;
bbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
result = m_device->CreateTexture2D(&bbDesc, NULL, &m_backBufferTx2D);
if (FAILED(result))
{
    MessageBox((*(m_hwnd)), L"Create a Tx2D for backbuffer SRV FAILED", L"Error", MB_OK);
    return false;
}
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(descSRV));
descSRV.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = 1;
descSRV.Texture2D.MostDetailedMip = 0;
result = GetDevice()->CreateShaderResourceView(m_backBufferTx2D, &descSRV, &m_backBufferSRV);
if (FAILED(result))
{
    MessageBox((*(m_hwnd)), L"Creating BackBuffer SRV FAILED.", L"Error", MB_OK);
    return false;
}
// Create the render target view with the back buffer pointer.
result = m_device->CreateRenderTargetView(backBufferPtr, NULL, &m_renderTargetView);
First I render the scene in all white, then I copy that to the SRV and bind it for the next shader that's supposed to sample it. I'm expecting a float4(1.0, 1.0, 1.0, 1.0) to be returned when I sample the back buffer at the vertex's on-screen position:
https://i.imgur.com/N9CYg8c.png
As shown on the top left in the event browser, there were three DrawIndexed calls rendering everything in white, followed by a CopyResource.
I've selected the next (fourth) DrawIndexed, and on the right side, outlined in red, are the inputs for this next shader, clearly showing that the back buffer has been successfully bound to the vertex shader.
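For reference, the copy-and-bind step shown in the event browser would look roughly like this in code (a sketch; m_deviceContext is assumed from the RasterTek codebase, the other names are from the creation code above):
// Copy the freshly rendered back buffer into the readable texture,
// then bind its SRV and the clamp sampler to the vertex shader stage.
m_deviceContext->CopyResource(m_backBufferTx2D, backBufferPtr);
m_deviceContext->VSSetShaderResources(0, 1, &m_backBufferSRV); // register(t0)
m_deviceContext->VSSetSamplers(0, 1, &m_sampleStateClamp);     // register(s0)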
And now for the part that's giving me trouble
https://i.imgur.com/ENuXk0n.png
I'm going to debug the top-left vertex shown on the screenshot.
The vertex shader has
Texture2D prevBackBuffer : register(t0);
declared at the top.
https://i.imgur.com/8cihNsq.png
When trying to sample the left neighboring pixel, this line of code returns newCoord = float2(158, 220).
When I enter these pixel values in the texture view, I get this pixel:
https://i.imgur.com/DT72Fl1.png
So the coordinates are OK so far, and as outlined, I'm expecting a float4(0.0, 0.0, 0.0, 1.0) to be returned when I sample this pixel (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader).
AND YET, when I sample that pixel right after altering the pixel coordinates (since Load counts pixels from the bottom left, I need newCoord = float2(158, 379)), I get this:
https://i.imgur.com/8SuwOzz.png
Why is this? Even if the coordinate were out of range, Load should return all zeros. Since I'm not sure about the whole "Load counts from the bottom left" thing, I also tried sampling using the top-left coordinates (158, 220), but I end up getting 0.0, ?, ?, ?.
I'm completely stumped and have no idea what to try next. I've tried using a sampler state:
// Create a clamp texture sampler state description.
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = device->CreateSamplerState(&samplerDesc, &m_sampleStateClamp);
but still never get a proper float4 back when reading the texture.
Any ideas or suggestions? I'll take anything at this point.
Oh, and here's a RenderDoc capture of the frame I was examining:
http://www.mediafire.com/file/1bfiqdpjkau4l0n/my_capture.rdc/file
So from my experience, reading from the back buffer is not really an operation that you want to be doing in the first place. If you have to do any operation on the rendered scene, the best way to do that is to render the scene to an intermediate texture, perform the operation on that texture, then render the final scene to the back buffer. This is generally how things like dynamic shadows are done - the scene is rendered from the perspective of the light, and the resulting buffer is interpreted to get a shadow value that is then applied to the final scene (this is also why dynamic light sources are limited in commercial game engines - they're rather expensive to use).
A similar idea can be applied here. First, render the whole scene to an intermediate texture, bound as a render target view (where the pixel format is specified by you, the programmer). Next, rebind that intermediate texture as a shader resource view, and render the scene again, using the edge detection shader and the real back buffer (where the pixel format is defined by the hardware).
This, fundamentally, is what I believe the issue is: a back buffer is a device-dependent resource, and its format can change depending on the hardware. Therefore, using it from a shader is not safe, as you don't always know what the format will be. A device-independent resource, on the other hand, will always have the same format, and you can safely use it however you like from a shader.
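As a sketch, the intermediate texture only needs both bind flags set at creation; everything else can mirror the back buffer (screenWidth/screenHeight and the scene* names are assumptions, m_device is from the question's code):
// An intermediate texture with a format chosen by the programmer, usable as a
// render target in the first pass and a shader resource in the second pass.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = screenWidth;
desc.Height = screenHeight;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* sceneTex = nullptr;
ID3D11RenderTargetView* sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
m_device->CreateTexture2D(&desc, nullptr, &sceneTex);
m_device->CreateRenderTargetView(sceneTex, nullptr, &sceneRTV);
m_device->CreateShaderResourceView(sceneTex, nullptr, &sceneSRV);
// Pass 1: bind sceneRTV and draw the scene. Pass 2: bind the real back buffer
// RTV again, set sceneSRV on the shader, and draw with edge detection.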
I wasn't able to get sampling an SRV in the vertex shader to work, but what I was able to get working is using backBuffer.SampleLevel inside a compute shader instead. I also had to change the sampler to something like this:
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0.5f;
samplerDesc.BorderColor[1] = 0.5f;
samplerDesc.BorderColor[2] = 0.5f;
samplerDesc.BorderColor[3] = 0.5f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = 0;
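For reference, the C++ side of that compute-shader path might look roughly like this (a sketch; the shader object, output UAV, width/height, and the 8x8 thread-group size are placeholders, not code from the project):
// Bind the back-buffer copy and the border sampler to the compute stage and
// dispatch one thread per pixel, assuming [numthreads(8,8,1)] in the HLSL.
context->CSSetShader(m_edgeDetectCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &m_backBufferSRV);
context->CSSetSamplers(0, 1, &m_sampleStateBorder);
context->CSSetUnorderedAccessViews(0, 1, &m_outputUAV, nullptr);
context->Dispatch((width + 7) / 8, (height + 7) / 8, 1);
// Unbind the SRV afterwards so the texture can be written again next frame.
ID3D11ShaderResourceView* nullSRV = nullptr;
context->CSSetShaderResources(0, 1, &nullSRV);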

glViewport offset and ortho projection

All the tutorials I've found about creating a projection matrix based on the viewport size assume that the bottom-left coordinates of the viewport are (0,0).
Now I want to draw to different parts of the screen, and for that purpose I want to switch viewports accordingly:
glViewport(0,0,windowWidth/2, windowHeight/2); //left bottom
glViewport(0,windowHeight/2,windowWidth/2, windowHeight/2);//left top
glViewport(windowWidth/2,0,windowWidth/2, windowHeight/2);//right bottom
glViewport(windowWidth/2, windowHeight/2,windowWidth/2, windowHeight/2);//right top
Now I have a problem with defining my projection matrix. Without any (x,y) offset I was using this code to calculate my ortho projection matrix:
if (m_WindowWidth > m_WindowHeight)
{
    auto viewportAspectRatio = (float)m_WindowWidth / (float)m_WindowHeight;
    m_ProjectionMatrix.m_fLeft = (-1.0f) * m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fRight = m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fBottom = (-1.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fTop = m_fWindowSize;
    m_ProjectionMatrix.m_fNear = -(10.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fFar = (10.0f) * m_fWindowSize;
    m_fMoveSpeed = static_cast<GLfloat>(m_fWindowSize * 2 / static_cast<float>(m_WindowHeight));
}
else
{
    auto viewportAspectRatio = (float)m_WindowHeight / (float)m_WindowWidth;
    m_ProjectionMatrix.m_fLeft = (-1.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fRight = m_fWindowSize;
    m_ProjectionMatrix.m_fBottom = (-1.0f) * m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fTop = m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fNear = -(10.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fFar = (10.0f) * m_fWindowSize;
    m_fMoveSpeed = static_cast<GLfloat>(m_fWindowSize * 2 / static_cast<float>(m_WindowWidth));
}
And this works fine UNTIL I add any (x,y) offset to my viewport. The effect is the following when using glViewport(0, m_WindowHeight/2, m_WindowWidth/2, m_WindowHeight/2):
And with glViewport(0, 0, m_WindowWidth/2, m_WindowHeight/2):
How can I make it work?
First, aspect ratio is always width/height.
Then I think what you are looking for is:
m_ProjectionMatrix.m_fLeft = x;
m_ProjectionMatrix.m_fRight = x + m_WindowWidth;
m_ProjectionMatrix.m_fBottom = y;
m_ProjectionMatrix.m_fTop = y + m_WindowHeight;
m_ProjectionMatrix.m_fNear = -(10.0f)*m_fWindowSize;
m_ProjectionMatrix.m_fFar = (10.0f)*m_fWindowSize;
Have a look at this wiki page
I found a solution to this problem and posted it on the gamedev forum:
https://gamedev.stackexchange.com/questions/122284/glviewport-offset-and-ortho-projection/122289#122289
In short:
I was drawing my whole scene to a framebuffer and then rendering the generated texture to the screen. This caused unwanted accumulation of glViewport() transformations.
The solution is to reset glViewport() to the origin before rendering to the framebuffer, and then set the offset only when rendering the texture to the screen, as in the sketch below.
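A sketch of that fix (the fbo handle and the two draw calls stand in for the application's own code):
// Pass 1: render the scene into the framebuffer with the viewport at the origin.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, windowWidth, windowHeight);
drawScene();
// Pass 2: render the generated texture into one quadrant of the default
// framebuffer; the (x, y) offset is applied only here.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, windowHeight / 2, windowWidth / 2, windowHeight / 2); // left top
drawFullscreenTexture();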

OpenGL billboard matrix

I am writing a viewer for a proprietary mesh & animation format in OpenGL.
During rendering a transformation matrix is created for each bone (node) and is applied to the vertices that bone is attached to.
It is possible for a bone to be marked as "billboarded", which, as most everyone knows, means it should always face the camera.
So the idea is to generate a matrix for that bone which when used to transform the vertices it's attached to, causes the vertices to be billboarded.
On my test model it should look like this:
However currently it looks like this:
Note that, despite its incorrect orientation, it is billboarded: no matter which direction the camera looks, those vertices always face that direction at that orientation.
My code for generating the matrix for bones marked as billboarded is:
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec4 camPos = vec4(-view[3].x, -view[3].y, -view[3].z,1);
vec3 camUp = vec3(view[0].y, view[1].y, view[2].y);
// zero the translation in the matrix, so we can use the matrix to transform
// the camera position to world coordinates using the view matrix
view[3].x = view[3].y = view[3].z = 0;
// the view matrix is how to get to the gluLookAt pos from what we gave as
// input for the camera position, so to go the other way we need to reverse
// the rotation. Transposing the matrix will do this.
{
float * matrix = (float*)&view;
float temp[16];
// copy this into temp
memcpy(temp, matrix, sizeof(float) * 16);
matrix[1] = temp[4]; matrix[4] = temp[1];
matrix[2] = temp[8]; matrix[8] = temp[2];
matrix[6] = temp[9]; matrix[9] = temp[6];
}
// get the correct position of the camera in world space
camPos = view * camPos;
//vec3 pos = pivot;
vec3 look = glm::normalize(vec3(camPos.x-pos.x,camPos.y-pos.y,camPos.z-pos.z));
vec3 right = glm::cross(camUp,look);
vec3 up = glm::cross(look,right);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
I am using GLM to do the math involved.
Though this part of the code is based on the tutorial here, other parts of the code are based on an open-source program similar to the one I'm building. However, that program was written for DirectX and I haven't had much luck directly converting it. The (working) DirectX code for billboarding looks like this:
D3DXMatrixRotationY(&CameraRotationMatrixY, -Camera.GetPitch());
D3DXMatrixRotationZ(&CameraRotationMatrixZ, Camera.GetYaw());
D3DXMatrixMultiply(&CameraRotationMatrix, &CameraRotationMatrixY, &CameraRotationMatrixZ);
D3DXQuaternionRotationMatrix(&CameraRotation, &CameraRotationMatrix);
D3DXMatrixTransformation(&CameraRotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &CameraRotation, NULL);
D3DXMatrixDecompose(&Scaling, &Rotation, &Translation, &BaseMatrix);
D3DXMatrixTransformation(&RotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &Rotation, NULL);
D3DXMatrixMultiply(&TempMatrix, &CameraRotationMatrix, &RotationMatrix);
D3DXMatrixMultiply(&BaseMatrix, &TempMatrix, &BaseMatrix);
Note that the results are stored in BaseMatrix in the DirectX version.
EDIT2: Here's the code I came up with when I tried to modify my code according to datenwolf's suggestions. I'm pretty sure I still made some mistakes. This attempt produces heavily distorted results, with one end of the object directly in the camera.
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec3 pos = vec3(calculatedMatrix[3].x,calculatedMatrix[3].y,calculatedMatrix[3].z);
mat4 inverted = glm::inverse(view);
vec4 plook = inverted * vec4(0,0,0,1);
vec3 look = vec3(plook.x,plook.y,plook.z);
vec3 right = orthogonalize(vec3(view[0].x,view[1].x,view[2].x),look);
vec3 up = orthogonalize(vec3(view[0].y,view[1].y,view[2].y),look);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
calculatedMatrix = bmatrix;
vec3 orthogonalize(vec3 toOrtho, vec3 orthoAgainst) {
    float bottom = (orthoAgainst.x*orthoAgainst.x)+(orthoAgainst.y*orthoAgainst.y)+(orthoAgainst.z*orthoAgainst.z);
    float top = (toOrtho.x*orthoAgainst.x)+(toOrtho.y*orthoAgainst.y)+(toOrtho.z*orthoAgainst.z);
    return toOrtho - top/bottom*orthoAgainst;
}
Creating a parallel-to-view billboard matrix is as simple as setting the upper-left 3×3 submatrix of the total modelview matrix to identity. Only in some cases do you actually require the look vector.
Anyway, you're thinking far too complicated. All your tinkering with the matrix completely misses the point: the modelview transformation assumes that the camera is always at (0,0,0) and moves the world and models in the opposite direction. What you are trying to do is find the vector in model space that points towards the camera, which is simply the vector that will point toward (0,0,0) after transformation.
So all we have to do is invert the modelview matrix and transform (0,0,0,1) with it. That's your look vector. For your calculation of the right and up vectors, orthogonalize the 1st (X) and 2nd (Y) columns of the modelview matrix against that look vector.
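In GLM, matching the question's code style, that suggestion might look like this (a sketch; modelview and pos are the question's own variables, and the sign of look may need flipping depending on conventions):
// Camera position in model space: the inverse modelview applied to the origin.
mat4 inv = glm::inverse(modelview);
vec3 camPosModel = vec3(inv * vec4(0, 0, 0, 1));
vec3 look = glm::normalize(camPosModel - pos);
// Orthogonalize the modelview's first (X) and second (Y) columns against look.
vec3 right = glm::normalize(vec3(modelview[0]) - glm::dot(vec3(modelview[0]), look) * look);
vec3 up    = glm::normalize(vec3(modelview[1]) - glm::dot(vec3(modelview[1]), look) * look);
// Assemble the billboard matrix column by column.
mat4 bmatrix(vec4(right, 0), vec4(up, 0), vec4(look, 0), vec4(pos, 1));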
Figured it out myself. It turns out the model format I'm using has different axes for billboarding. Most billboarding implementations (including the one I used) use the X and Y coordinates to position the billboarded object; the format I was reading uses Y and Z.
The clue was that there was a billboarding effect, just facing the wrong direction. To fix this I played with the different camera vectors until I arrived at the correct matrix calculation:
bmatrix[1].x = right.x;
bmatrix[1].y = right.y;
bmatrix[1].z = right.z;
bmatrix[1].w = 0;
bmatrix[2].x = up.x;
bmatrix[2].y = up.y;
bmatrix[2].z = up.z;
bmatrix[2].w = 0;
bmatrix[0].x = look.x;
bmatrix[0].y = look.y;
bmatrix[0].z = look.z;
bmatrix[0].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
My attempts to follow datenwolf's advice did not succeed, and at this time he hasn't offered any additional explanation, so I'm unsure why. Thanks anyway!

How should my texture sampler be for rendering a bitmap font/sprite?

I have a texture and was curious what the texture sampler should be for sampling a sprite texture. I am using DirectX 11, though if you know what it should be for DX9/10, I believe it is transferable.
I tried
AddressU = D3D11_TEXTURE_ADDRESS_WRAP
AddressV = D3D11_TEXTURE_ADDRESS_WRAP
AddressW = D3D11_TEXTURE_ADDRESS_WRAP
ComparisonFunc = D3D11_COMPARISON_NEVER
Filter = D3D11_FILTER_MIN_MAG_MIP_POINT
MaxAnisotropy = 1;
MaxLOD = D3D11_FLOAT32_MAX;
MinLOD = 0;
MipLODBias = 0;
Although when rendering, there appeared to be artifacts, and it did not seem as clear as it should be.
This is an example of what the artifacts look like. In the top text with a light blue background you can see artifacts (for example, on the A and C). The bottom text with the black background is the original image.
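For pixel-exact bitmap fonts, a commonly used setup is clamp addressing with point filtering, combined with making sure each glyph quad maps 1:1 onto screen pixels; a hedged sketch (device and m_fontSampler are hypothetical names):
// Clamp so edge texels don't bleed in from the opposite side of the atlas,
// point filtering so texels map 1:1 to pixels. Artifacts like those described
// often also come from quads not aligned to pixel centers, or from UVs that
// land on texel boundaries between glyphs in the atlas.
D3D11_SAMPLER_DESC desc = {};
desc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
desc.MaxAnisotropy = 1;
desc.MinLOD = 0;
desc.MaxLOD = D3D11_FLOAT32_MAX;
device->CreateSamplerState(&desc, &m_fontSampler);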