Direct3D11 (C++): Updating texture coordinates in a constant buffer?

I'm trying to make a rather basic 2D engine with Direct3D.
I made a LoadImage() function which stores all of the image's mostly static state in an object (shaders, vertex buffers, samplers, etc.).
I am planning to do the positioning of the vertices with matrices in constant buffers.
However, I would also like to have a DrawImage() function with a parameter that tells which part of the texture should be drawn (clipped), so I would have to update the texture coordinates.
Since the vertex buffer is already pre-defined, I wondered whether there is a way to update the texture coordinates via a constant buffer sent to the vertex shader.
I hope my question is clear enough; if you have any doubts, look at the code below.
bool GameManager::GMLoadImage(Image* pImage, const char* pkcFilePath, ImageDesc* pImDesc)
{
    pImage = new Image();
    ID3D11ShaderResourceView* pColorMap = pImage->GetpColorMap();

    /// CREATE SHADER RESOURCE VIEW (from file) ///
    HRESULT result = D3DX11CreateShaderResourceViewFromFileA(m_pDevice,
                                                             pkcFilePath,
                                                             0,
                                                             0,
                                                             &pColorMap,
                                                             0);
    if (FAILED(result)) {
        MessageBoxA(NULL, "Error loading ShaderResourceView from file", "Error", MB_OK);
        return false;
    }

    /// RECEIVE TEXTURE DESC ///
    ID3D11Resource* pColorTex;
    pColorMap->GetResource(&pColorTex);
    ((ID3D11Texture2D*)pColorTex)->GetDesc(&pImage->GetColorTexDesc());
    pColorTex->Release();

    /// CREATE VERTEX BUFFER ///
    D3D11_TEXTURE2D_DESC colorTexDesc = pImage->GetColorTexDesc();
    float halfWidth = static_cast<float>(colorTexDesc.Width) / 2.0f;
    float halfHeight = static_cast<float>(colorTexDesc.Height) / 2.0f;
    Vertex::PosTex vertices[] =
    {
        {XMFLOAT3( halfWidth,  halfHeight, 1.0f), XMFLOAT2(1.0f, 0.0f)},
        {XMFLOAT3( halfWidth, -halfHeight, 1.0f), XMFLOAT2(1.0f, 1.0f)},
        {XMFLOAT3(-halfWidth, -halfHeight, 1.0f), XMFLOAT2(0.0f, 1.0f)},
        {XMFLOAT3(-halfWidth, -halfHeight, 1.0f), XMFLOAT2(0.0f, 1.0f)},
        {XMFLOAT3(-halfWidth,  halfHeight, 1.0f), XMFLOAT2(0.0f, 0.0f)},
        {XMFLOAT3( halfWidth,  halfHeight, 1.0f), XMFLOAT2(1.0f, 0.0f)}
    };
    D3D11_BUFFER_DESC vertexDesc;
    ZeroMemory(&vertexDesc, sizeof(vertexDesc));
    vertexDesc.Usage = D3D11_USAGE_DEFAULT;
    vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    vertexDesc.ByteWidth = sizeof(Vertex::PosTex) * 6;
    D3D11_SUBRESOURCE_DATA resourceData;
    ZeroMemory(&resourceData, sizeof(resourceData));
    resourceData.pSysMem = vertices;
    ID3D11Buffer* pVBuffer = pImage->GetpVertexBuffer();
    result = m_pDevice->CreateBuffer(&vertexDesc, &resourceData, &pVBuffer);
    if (FAILED(result))
    {
        MessageBoxA(NULL, "Error Creating VBuffer", "Error", MB_OK);
        return false;
    }

    /// SET POINTER TO IMAGEDESC ///
    ImageDesc* pThisImDesc = pImage->GetpImageDesc();
    pThisImDesc = pImDesc;
    return true;
}
bool GameManager::GMDrawImage(Image* pImage, const CLIPRECT& rkClip)
{
    ImageDesc* thisImDesc = pImage->GetpImageDesc();
    if (thisImDesc != m_pImDesc) {
        m_pImDesc = thisImDesc;
        m_pContext->IASetInputLayout(m_pImDesc->pInputLayout);
        m_pContext->IASetPrimitiveTopology(m_pImDesc->Topology);
        m_pContext->VSSetShader(m_pImDesc->pSolidColorVS, 0, 0);
        m_pContext->PSSetShader(m_pImDesc->pSolidColorPS, 0, 0);
        m_pContext->PSSetSamplers(0, 1, &m_pImDesc->pSampler);
        m_pContext->OMSetBlendState(m_pImDesc->pBlendState, NULL, 0xFFFFFFFF);
    }
    UINT stride = m_pImDesc->VertexSize;
    UINT offset = 0;
    ID3D11Buffer* pVBuffer = pImage->GetpVertexBuffer();
    ID3D11ShaderResourceView* pColorMap = pImage->GetpColorMap();
    m_pContext->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
    m_pContext->PSSetShaderResources(0, 1, &pColorMap);
    // set constant buffers?
    m_pContext->Draw(6, 0);
    return true;
}

Yes, as long as your texture coordinates are hardcoded to 0.0 through 1.0 in your vertex buffer, you can use a texture transformation matrix. It's a 3x3 matrix that will transform your 2D texture coordinates.
For instance, if you want to use the bottom-right quadrant of your texture (assuming top-left is the origin), you could use the following matrix:
0.5 0.0 0.0
0.0 0.5 0.0
0.5 0.5 1.0
Multiplying a row vector (u, v, 1) by this matrix gives (0.5u + 0.5, 0.5v + 0.5, 1), which maps the 0..1 range into 0.5..1 on both axes.
Then, in the vertex shader, you multiply the texture coordinates by that matrix like so:
float3 coord = float3(In.texCoord, 1.0);
coord = mul(coord, textureTransform);
Out.texCoord = coord.xy / coord.z;
In.texCoord and Out.texCoord being float2 input and output texture coordinates respectively.
The division by Z is optional if you are only doing affine transformations (translations, scaling, rotations and skews) so feel free to remove it if not needed.
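For reference, the whole shader side could look something like the sketch below; the cbuffer name, the register assignments and the input/output structs are illustrative assumptions, not code from the question:
cbuffer TextureTransformBuffer : register(b1)
{
    row_major float3x3 textureTransform; // HLSL pads each row of a float3x3 to a full float4
};
struct VSInput
{
    float3 pos : POSITION;
    float2 texCoord : TEXCOORD0;
};
struct VSOutput
{
    float4 pos : SV_POSITION;
    float2 texCoord : TEXCOORD0;
};
VSOutput main(VSInput In)
{
    VSOutput Out;
    Out.pos = float4(In.pos, 1.0); // a real shader would apply the world/projection matrices here
    float3 coord = mul(float3(In.texCoord, 1.0), textureTransform);
    Out.texCoord = coord.xy / coord.z; // drop the divide for purely affine transforms
    return Out;
}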
To generalize the matrix:
Sx 0.0 0.0
0.0 Sy 0.0
Tx Ty 1.0
Where (Tx, Ty) is the position of the clip area and (Sx, Sy) the size of the clip area, both in texture coordinates.
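On the C++ side, GMDrawImage can fill this matrix from the clip rectangle and upload it before drawing. Below is a minimal sketch; it assumes CLIPRECT holds left/top/right/bottom in pixels, and pTexTransformCB is a hypothetical ID3D11Buffer* created beforehand with D3D11_BIND_CONSTANT_BUFFER, D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE:
// CPU-side mirror of the cbuffer; each row is padded to 16 bytes
// to match HLSL packing of a float3x3.
struct TexTransform
{
    float m[3][4];
};

TexTransform cb = {};
float w = static_cast<float>(colorTexDesc.Width);
float h = static_cast<float>(colorTexDesc.Height);
cb.m[0][0] = (rkClip.right - rkClip.left) / w; // Sx
cb.m[1][1] = (rkClip.bottom - rkClip.top) / h; // Sy
cb.m[2][0] = rkClip.left / w;                  // Tx
cb.m[2][1] = rkClip.top / h;                   // Ty
cb.m[2][2] = 1.0f;

// Upload the new values and bind the buffer to the slot used in the shader (b1 above).
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(m_pContext->Map(pTexTransformCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    memcpy(mapped.pData, &cb, sizeof(cb));
    m_pContext->Unmap(pTexTransformCB, 0);
}
m_pContext->VSSetConstantBuffers(1, 1, &pTexTransformCB);
With this in place, the vertex buffer keeps its fixed 0..1 texture coordinates and never has to be rewritten; only a 48-byte constant buffer is updated per draw.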

Related

Can't render a triangle with D3D12

I want to render a triangle with D3D12, but it doesn't work. I followed the Direct3D Programming Guide by Microsoft.
I can clear the render target view, and in the output I get a COMMAND_LIST_DRAW_VERTEX_BUFFER_TOO_SMALL warning (DrawInstanced: Vertex Buffer at the input vertex slot 0 is not big enough for what the Draw*() call expects to traverse...) every frame. So my thought is that the problem lies with the vertices, or with copying the vertices.
Here is my code:
struct Vertex {
    float x;
    float y;
};

const Vertex vertices[] = {
    { 0.0f, 0.5f },
    { 0.5f, -0.5f },
    { -0.5f, -0.5f }
};

const UINT vertexBufferSize = sizeof(vertices);

// Create and load the vertex buffers
// I'm using an upload heap, because I can't find any solution about a default heap.
CD3DX12_HEAP_PROPERTIES vertexBufferHeapProperties(D3D12_HEAP_TYPE_UPLOAD/*D3D12_HEAP_TYPE_DEFAULT*/);
auto vertexBufferResourceDesc = CD3DX12_RESOURCE_DESC::Buffer(vertexBufferSize);
THROW_IF_FAILED(m_pDevice->CreateCommittedResource(
    &vertexBufferHeapProperties,
    D3D12_HEAP_FLAG_NONE,
    &vertexBufferResourceDesc,
    D3D12_RESOURCE_STATE_GENERIC_READ/*D3D12_RESOURCE_STATE_COPY_SOURCE*/,
    nullptr,
    IID_PPV_ARGS(&m_pVertexBuffer)
));

// Copy the vertex data to the vertex buffer
UINT8* pVertexDataBegin;
CD3DX12_RANGE readRange(0, 0);
THROW_IF_FAILED(m_pVertexBuffer->Map(0, &readRange, reinterpret_cast<void**>(&pVertexDataBegin)));
memcpy(pVertexDataBegin, vertices, sizeof(vertices));
m_pVertexBuffer->Unmap(0, nullptr);

// Create the vertex buffer views
m_vertexBufferView.BufferLocation = m_pVertexBuffer->GetGPUVirtualAddress();
m_vertexBufferView.SizeInBytes = sizeof(Vertex);
m_vertexBufferView.StrideInBytes = vertexBufferSize;
In the input layout I'm using a DXGI_FORMAT_R32G32_FLOAT and D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA for position.
And in the PopulateCommandList function:
const float clearColor[] = { 0.6f, 0.7f, 0.9f, 1.0f };
m_pCommandList->ClearRenderTargetView(renderTargetViewHandle, clearColor, 0, 0);
m_pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_pCommandList->IASetVertexBuffers(0, 1, &m_vertexBufferView);
m_pCommandList->DrawInstanced(3, 1, 0, 0);
It should be
m_vertexBufferView.BufferLocation = m_pVertexBuffer->GetGPUVirtualAddress();
m_vertexBufferView.SizeInBytes = vertexBufferSize;
m_vertexBufferView.StrideInBytes = sizeof(Vertex);
instead of
m_vertexBufferView.BufferLocation = m_pVertexBuffer->GetGPUVirtualAddress();
m_vertexBufferView.SizeInBytes = sizeof(Vertex);
m_vertexBufferView.StrideInBytes = vertexBufferSize;
With the two values swapped, the view declares a buffer that is only sizeof(Vertex) = 8 bytes long with a stride of 24 bytes, so DrawInstanced(3, 1, 0, 0) reads past the declared end of the buffer; that is exactly what the COMMAND_LIST_DRAW_VERTEX_BUFFER_TOO_SMALL warning is reporting.

Setting Constant Buffer Directx

So, I'm trying to get the constant buffer in my shader to hold an orthographic projection matrix...
Can anyone tell me why I'm rendering nothing now that I tried to do this?
I assumed I needed to make two mapped subresources and two buffers, one for the vertices and the other for the constants; maybe this is wrong?
Here is my code:
Vertex OurVertices[] =
{
    { D3DXVECTOR2(-0.5f, -0.5f), D3DXCOLOR(0.0f, 0.0f, 0.0f, 1.0f) },
    { D3DXVECTOR2(-0.5f, 0.5f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
    { D3DXVECTOR2(0.5f, 0.5f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) },
    { D3DXVECTOR2(-0.5f, -0.5f), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) },
    { D3DXVECTOR2(0.5f, 0.5f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
    { D3DXVECTOR2(0.5f, -0.5f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) }
};

// create the vertex buffer
D3D11_BUFFER_DESC vertexBufferDesc;
ZeroMemory(&vertexBufferDesc, sizeof(vertexBufferDesc));
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;             // write access by CPU, read access by GPU
vertexBufferDesc.ByteWidth = sizeof(Vertex) * 6;          // size is the Vertex struct * 6
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;    // use as a vertex buffer
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write into the buffer
dev->CreateBuffer(&vertexBufferDesc, NULL, &pVBuffer);    // create the buffer

D3D11_MAPPED_SUBRESOURCE ms_Vertex;
devcon->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms_Vertex);
memcpy(ms_Vertex.pData, OurVertices, sizeof(OurVertices)); // copy the data
devcon->Unmap(pVBuffer, NULL);                             // unmap the buffer
devcon->VSSetConstantBuffers(NULL, 1, &pVBuffer); // finally set the constant buffer in the vertex shader with the updated values

MatrixBufferType* dataPtr;
D3D11_BUFFER_DESC constantBufferDesc; // create the constant buffer
ZeroMemory(&constantBufferDesc, sizeof(constantBufferDesc));
constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
constantBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
constantBufferDesc.ByteWidth = sizeof(MatrixBufferType);
constantBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
constantBufferDesc.MiscFlags = 0;
constantBufferDesc.StructureByteStride = 0;
dev->CreateBuffer(&constantBufferDesc, NULL, &pCBuffer); // create the buffer

D3D11_MAPPED_SUBRESOURCE ms_CBuffer;
devcon->Map(pCBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms_CBuffer);
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
D3DXMatrixTranspose(&m_orthoMatrix, &m_orthoMatrix);
dataPtr = (MatrixBufferType*)ms_CBuffer.pData;
dataPtr->projection = m_orthoMatrix;
memcpy(ms_CBuffer.pData, &dataPtr, sizeof(MatrixBufferType));
devcon->Unmap(pCBuffer, NULL);
devcon->VSSetConstantBuffers(NULL, 1, &pCBuffer); // finally set the constant buffer in the vertex shader with the updated values
So, attempting to put my projection matrix to use in my shader code:
cbuffer ConstantBuffer
{
    matrix world;
    matrix view;
    matrix projection;
};

VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
    VOut output;
    output.position = position;
    output.position = mul(output.position, projection);
    output.color = color;
    return output;
}
This causes my quad, which was originally rendering, to disappear.
That leads me to believe I'm doing something wrong; I just don't know what yet.

DirectX 11 LINELIST_TOPOLOGY uncompleted 2D rectangle at left-top coordinates

I'm doing something with DirectX 11 and came to drawing a rectangle (empty, non-colored). It seemed simple to me at the start (line-list topology, 8 indices), but when I have it on the screen I see that my rectangle is somewhat incomplete at the left-top coordinate: there is a dot of the background color there. The code is not complicated at all; the vertices are in 2D space:
SIMPLEVERTEX gvFrameVertices[4] =
{
    {XMFLOAT3(0.0f,  0.0f, 1.0f), XMFLOAT2(0.0f, 0.0f)},
    {XMFLOAT3(1.0f,  0.0f, 1.0f), XMFLOAT2(1.0f, 0.0f)},
    {XMFLOAT3(1.0f, -1.0f, 1.0f), XMFLOAT2(1.0f, 1.0f)},
    {XMFLOAT3(0.0f, -1.0f, 1.0f), XMFLOAT2(0.0f, 1.0f)}
};
indices:
WORD gvRectangularIndices[8] = { 0, 1, 1, 2, 2, 3, 3, 0 };
The pixel shader just returns the color given in a constant buffer:
float4 PS_PANEL(PS_INPUT input) : SV_Target
{
    return fontcolor;
}
Function code itself:
VOID rectangle(INT _left, INT _top, INT _width, INT _height, XMFLOAT4 _color)
{
    XMMATRIX scale;
    XMMATRIX translate;
    XMMATRIX world;
    scale = XMMatrixScaling(_width, _height, 1.0f);
    translate = XMMatrixTranslation(_left, gvHeight - _top, 1.0f);
    world = scale * translate;
    gvConstantBufferData.world = XMMatrixTranspose(world);
    gvConstantBufferData.index = 1.0f;
    gvConstantBufferData.color = _color;
    gvContext->PSSetShader(gvPanelPixelshader, NULL, 0);
    gvContext->UpdateSubresource(gvConstantBuffer, 0, NULL, &gvConstantBufferData, 0, 0);
    gvContext->IASetIndexBuffer(gvLinelistIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
    gvContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
    gvContext->DrawIndexed(8, 0, 0);
    gvContext->IASetIndexBuffer(gvTriangleslistIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
    gvContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
};
gvHeight - _top: I'm using an orthographic matrix for projection, so the coordinate origin is at the bottom-left; that's why I need to subtract to get the proper Y coordinate.
gvOrthographicProjection = XMMatrixOrthographicOffCenterLH( 0.0f, gvWidth, 0.0f, gvHeight, 0.01f, 100.0f );
Do you have any idea what could cause this single-pixel incompleteness of the rectangle in my case, or do I need to supply more code? (I don't really want to paste all of the initialization, because it is very plain, written at a C++/DirectX amateur level.)
Thank you :)

DirectX 11.2 triangle shape is distorted by normal vectors

I am new to DirectX, and I have a somewhat silly question.
I am using DirectX 11.2 on Windows 8.1 with a right-handed coordinate system.
I tried to apply a texture to an equilateral triangle lying in the x-y plane and centred at (0,0,0).
But the output shape is distorted (it doesn't look like an equilateral triangle at all!). Also, in theory, looking along the x-axis I should see nothing, because the triangle lies in the x-y plane, yet it turns out that I can see the triangle. In addition, if I change the values of the normal vectors, the output shape changes too! I do not understand why; please help!
Here is the view matrix configuration:
static const XMVECTORF32 eye = { 0.0f, 0.0f, 1.5f, 0.0f };
static const XMVECTORF32 at = { 0.0f, 0.0f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
Here is the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
};

struct VertexShaderInput
{
    float3 pos : POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

PixelShaderInput main(VertexShaderInput input)
{
    PixelShaderInput vertexShaderOutput;
    float4 pos = float4(input.pos, 1.0f);
    pos = mul(pos, model);
    pos = mul(pos, view);
    pos = mul(pos, projection);
    vertexShaderOutput.pos = pos;
    vertexShaderOutput.tex = input.tex;
    vertexShaderOutput.norm = mul(float4(normalize(input.norm), 0.0f), model).xyz;
    return vertexShaderOutput;
}
Here is the pixel shader:
Texture2D Texture : register(t0);
SamplerState Sampler : register(s0);

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    float3 lightDirection = normalize(float3(0, 0, -1));
    return Texture.Sample(Sampler, input.tex); //* (0.8f * saturate(dot(normalize(input.norm), -lightDirection)) + 0.2f);
}
Here are the coordinates:
VertexPositionTexture vertexPositionTexture[] =
{
    { XMFLOAT3(-1.5, -0.5*sqrtf(3), 0.0f), XMFLOAT3(0.0f, 0.0f, 0.4f), XMFLOAT2(0.0, 1.0) },
    { XMFLOAT3(0.0f, sqrtf(3), 0.0f), XMFLOAT3(0.0f, 0.0f, 0.4f), XMFLOAT2(0.5, 0.0) },
    { XMFLOAT3(1.5, -0.5*sqrtf(3), 0.0f), XMFLOAT3(0.0f, 0.0f, 0.4f), XMFLOAT2(1.0, 1.0) },
};
The index array would simply be {0,1,2} in clockwise order.
So if I change the normal vector value in vertexPositionTexture from XMFLOAT3(0.0, 0.0, 0.4) to XMFLOAT3(0.0, 0.0, -1), the shape definitely changes, and I don't know why.
Here is how I create the DeviceDependentResources:
void TextureSceneRenderer::CreateDeviceDependentResources()
{
    // Load shaders asynchronously.
    auto loadVSTask = DX::ReadDataAsync(L"TextureVertexShader.cso");
    auto loadPSTask = DX::ReadDataAsync(L"TexturePixelShader.cso");

    BasicLoader^ loader = ref new BasicLoader(m_deviceResources->GetD3DDevice());
    loader->LoadTexture(
        L"cat.dds",
        &m_texture,
        &m_textureSRV
    );

    // create the sampler
    D3D11_SAMPLER_DESC samplerDesc;
    ZeroMemory(&samplerDesc, sizeof(D3D11_SAMPLER_DESC));
    samplerDesc.Filter = D3D11_FILTER_ANISOTROPIC;
    samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.MipLODBias = 0.0f;
    // use 4x on feature level 9.2 and above, otherwise use only 2x
    samplerDesc.MaxAnisotropy = 8;
    samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    samplerDesc.BorderColor[0] = 0.0f;
    samplerDesc.BorderColor[1] = 0.0f;
    samplerDesc.BorderColor[2] = 0.0f;
    samplerDesc.BorderColor[3] = 0.0f;
    // allow use of all mip levels
    samplerDesc.MinLOD = 0;
    samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
    DX::ThrowIfFailed(
        m_deviceResources->GetD3DDevice()->CreateSamplerState(
            &samplerDesc,
            &m_sampler)
    );

    // After the vertex shader file is loaded, create the shader and input layout.
    auto createVSTask = loadVSTask.then([this](const std::vector<byte>& fileData) {
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateVertexShader(
                &fileData[0],
                fileData.size(),
                nullptr,
                &m_vertexShader
            )
        );
        static const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateInputLayout(
                vertexDesc,
                ARRAYSIZE(vertexDesc),
                &fileData[0],
                fileData.size(),
                &m_inputLayout
            )
        );
    });

    // After the pixel shader file is loaded, create the shader and constant buffer.
    auto createPSTask = loadPSTask.then([this](const std::vector<byte>& fileData) {
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreatePixelShader(
                &fileData[0],
                fileData.size(),
                nullptr,
                &m_pixelShader
            )
        );
        CD3D11_BUFFER_DESC constantBufferDesc(sizeof(ModelViewProjectionConstantBuffer), D3D11_BIND_CONSTANT_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &constantBufferDesc,
                nullptr,
                &m_constantBuffer
            )
        );
    });

    // Once both shaders are loaded, create the mesh.
    auto createCubeTask = (createPSTask && createVSTask).then([this]() {
        D3D11_SUBRESOURCE_DATA vertexBufferData = { 0 };
        vertexBufferData.pSysMem = this->model->getVertexPositionTexture();
        vertexBufferData.SysMemPitch = 0;
        vertexBufferData.SysMemSlicePitch = 0;
        CD3D11_BUFFER_DESC vertexBufferDesc(sizeof(this->model->getVertexPositionTexture()[0]) * this->model->n_texture_vertex, D3D11_BIND_VERTEX_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &vertexBufferDesc,
                &vertexBufferData,
                &m_vertexBuffer
            )
        );
        m_indexCount = this->model->n_mesh; //ARRAYSIZE(cubeIndices);
        D3D11_SUBRESOURCE_DATA indexBufferData = { 0 };
        indexBufferData.pSysMem = this->model->getMeshTextureIndex();
        indexBufferData.SysMemPitch = 0;
        indexBufferData.SysMemSlicePitch = 0;
        CD3D11_BUFFER_DESC indexBufferDesc(sizeof(this->model->getMeshTextureIndex()[0]) * m_indexCount, D3D11_BIND_INDEX_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &indexBufferDesc,
                &indexBufferData,
                &m_indexBuffer
            )
        );
    });

    // Once the cube is loaded, the object is ready to be rendered.
    createCubeTask.then([this]() {
        m_loadingComplete = true;
    });
}
Here is the render function:
void TextureSceneRenderer::Render()
{
    // Loading is asynchronous. Only draw geometry after it's loaded.
    if (!m_loadingComplete)
    {
        return;
    }

    auto context = m_deviceResources->GetD3DDeviceContext();

    // Set render targets to the screen.
    ID3D11RenderTargetView* const targets[1] = { m_deviceResources->GetBackBufferRenderTargetView() };
    context->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

    // Prepare the constant buffer to send it to the graphics device.
    context->UpdateSubresource(
        m_constantBuffer.Get(),
        0,
        NULL,
        &m_constantBufferData,
        0,
        0
    );

    // Each vertex is one instance of the VertexPositionColor struct.
    UINT stride = sizeof(VertexPositionColor);
    UINT offset = 0;
    context->IASetVertexBuffers(
        0,
        1,
        m_vertexBuffer.GetAddressOf(),
        &stride,
        &offset
    );
    context->IASetIndexBuffer(
        m_indexBuffer.Get(),
        DXGI_FORMAT_R16_UINT, // Each index is one 16-bit unsigned integer (short).
        0
    );
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->IASetInputLayout(m_inputLayout.Get());

    // Attach our vertex shader.
    context->VSSetShader(
        m_vertexShader.Get(),
        nullptr,
        0
    );
    // Send the constant buffer to the graphics device.
    context->VSSetConstantBuffers(
        0,
        1,
        m_constantBuffer.GetAddressOf()
    );
    // Attach our pixel shader.
    context->PSSetShader(
        m_pixelShader.Get(),
        nullptr,
        0
    );
    context->PSSetShaderResources(
        0,
        1,
        m_textureSRV.GetAddressOf()
    );
    context->PSSetSamplers(
        0, // starting at the first sampler slot
        1, // set one sampler binding
        m_sampler.GetAddressOf()
    );

    // Draw the objects.
    context->DrawIndexed(
        m_indexCount,
        0,
        0
    );
}

scaling different objects using mouse wheel

I use GLFW and GLM.
If I scroll up, I want to make the object bigger; when I scroll down, I want to make the object smaller.
How do I do that?
I use this function to handle mouse scrolling:
static void mousescroll(GLFWwindow* window, double xoffset, double yoffset)
{
    if (yoffset > 0) {
        scaler += yoffset * 0.01; // make it bigger than current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
    else {
        scaler -= yoffset * 0.01; // make it smaller than current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
}
By default scaler is 1.0.
I can describe the problem like this: there is an object. If I scroll up, the value of scaler becomes 1.01, so the object becomes 1.01 times bigger. When I scroll up again, as far as I can tell, the object becomes 1.02 times bigger than its previous size (which was already 1.01 times the original), but I want it to be 1.02 times the original size.
How can I solve this problem?
The world matrix looks like this:
glm::mat4 world = glm::mat4(
    glm::vec4(1.0f, 0.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, 1.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 1.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
And I calculate the vertex positions in the shader:
gl_Position = world * vec4(Position, 1.0);
"But I want it to be 1.02 times the original size."
Then reset the transform each time instead of accumulating the scales:
world = glm::scale(glm::mat4(1.0f), glm::vec3(scaler));
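Put together, the callback might look like the following sketch; this assumes scaler and world are the file-scope globals from the question and uses the glm::scale(mat4, vec3) overload from <glm/gtc/matrix_transform.hpp>:
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

static double scaler = 1.0; // total scale relative to the original size
static glm::mat4 world = glm::mat4(1.0f);

static void mousescroll(GLFWwindow* window, double xoffset, double yoffset)
{
    // Accumulate the desired total scale (scroll up grows, scroll down shrinks)...
    scaler += yoffset * 0.01;
    // ...then rebuild the matrix from identity so the factors don't compound.
    world = glm::scale(glm::mat4(1.0f), glm::vec3(static_cast<float>(scaler)));
}
Each scroll step now changes the size by 1% of the original, instead of 1% of whatever the current size happens to be.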