C++ DirectX10 Mesh Disappearing After One Frame, Why?

First time poster on this site, but I have hit a serious block and am lost. If this is too much to read, the question is at the bottom, but I thought the background would help.
A little background on the project:
I currently have a rendering engine set up using DirectX10. I have implemented a design where component (data) pointers live inside a component manager (which holds pointers to the components of the entities in the world), and that data is sent to an interface manager (which holds all the methods for loading, creation, updating, rendering, etc.).
Here is an image of a class diagram to make it easier to visualize:
edited: (I do not have enough rep to post images, so here is a link instead: http://imgur.com/L3nOyoY)
edit:
My rendering engine has a weird issue of rendering a cube (parsed from a Wavefront .obj file and put into an array of custom vertices) for only one frame. After the initial presentation of the back buffer, my cube disappears.
I have gone through a lot of debugging to fix this issue, but have gotten no answers. I have dump files tracking the changes in the vectors for position, scale, rotation, etc., as well as the world matrix. All data regarding the position of the object in world space and the camera position in world space maintains its integrity. I have also checked whether other pointers were being corrupted, modified where they ought not to be, or deleted by accident.
Stepping through the locals over several updates and renders, I find no change in Movement components, no change in Texture components, and no change to my shader variables and pointers.
Here is the code that runs initially, followed by the loop:
void GameWorld::Load()
{
    // initialize all members here
    mInterface = InterfaceManager(pDXManager);

    // create entities here
    mEHouse = CEntity(
        CT::MESH | CT::MOVEMENT | CT::SHADER | CT::TEXTURE,
        "House", &mCManager);
    mECamera = CEntity(
        CT::CAMERA | CT::MOVEMENT | CT::LIGHT,
        "Camera", &mCManager);

    // HACKS FOR TESTING ONLY
    // Ideally create script to parse to create entities;
    // GameWorld will have dynamic entity list;
    //////////////////////////////////////////////////
    tm = CMesh("../Models/Box.obj");
    mCManager.mMesh[0] = &tm;

    // hmmm.... how to make non-RDMS style entities...
    tc = CCamera(
        XMFLOAT3(0.0f, 0.0f, 1.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f),
        XMFLOAT3(0.0f, 1.0f, 0.0f), 1);
    mCManager.mCamera[0] = &tc;

    tmc = CMovement(
        XMFLOAT3(0.0f, 0.0f, -10.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f));
    mCManager.mMovement[1] = &tmc;
    ////////////////////////////////////////////////////

    // only after all entities are created
    mInterface.onLoad(&mCManager);
}
// core game logic goes here
void GameWorld::Update(float dt)
{
    mInterface.Update(dt, &mCManager);
}

// core rendering logic goes here
void GameWorld::Render()
{
    pDXManager->BeginScene();

    // render calls go here
    mInterface.Render(&mCManager);

    // disappears after end scene
    pDXManager->EndScene();
}
And here are the interface onLoad, Update, and Render methods:
void InterfaceManager::onLoad(CComponentManager* pCManager)
{
    // create all
    for(int i = 0; i < pCManager->mTexture.size(); ++i)
    {
        mTexture2D.loadTextureFromFile(pDXManager->mD3DDevice, pCManager->mTexture[i]);
    }
    for(int i = 0; i < pCManager->mMesh.size(); ++i)
    {
        mMesh.loadMeshFromOBJ(pCManager->mMesh[i]);
        mMesh.createMesh(pDXManager->mD3DDevice, pCManager->mMesh[i]);
    }
    for(int i = 0; i < pCManager->mShader.size(); ++i)
    {
        mShader.Init(pDXManager->mD3DDevice, pDXManager->mhWnd,
                     pCManager->mShader[i], pCManager->mTexture[i]);
    }

    // TODO: put this somewhere else to maintain structure
    XMMATRIX pFOVLH = XMMatrixPerspectiveFovLH(
        (float)D3DX_PI / 4.0f,
        (float)pDXManager->mWindowWidth / pDXManager->mWindowHeight,
        0.1f, 1000.0f);
    XMStoreFloat4x4(&pCManager->mCamera[0]->mProjectionMat, pFOVLH);
}
void InterfaceManager::Update(float dt, CComponentManager* pCManager)
{
    // update input
    // update ai
    // update collision detection
    // update physics

    // update movement
    for(int i = 0; i < pCManager->mMovement.size(); ++i)
    {
        mMovement.transformToWorld(pCManager->mMovement[i]);
    }

    // update animations

    // update camera
    // There is only ever one active camera
    // TODO: somehow set up for an activecamera variable
    mCamera.Update(pCManager->mCamera[0],
                   pCManager->mMovement[pCManager->mCamera[0]->mOwnerID]);
}

void InterfaceManager::Render(CComponentManager* pCManager)
{
    for(int i = 0; i < pCManager->mMesh.size(); ++i)
    {
        // render meshes
        mMesh.RenderMeshes(pDXManager->mD3DDevice, pCManager->mMesh[i]);

        // set shader variables
        mShader.setShaderMatrices(pCManager->mCamera[0], pCManager->mShader[i],
                                  pCManager->mMovement[i]);
        mShader.setShaderLight(pCManager->mLight[i], pCManager->mShader[i]);
        mShader.setShaderTexture(pCManager->mTexture[i]);

        // render shader
        mShader.RenderShader(pDXManager->mD3DDevice, pCManager->mShader[i],
                             pCManager->mMesh[i]);
    }
}
In short, my question is this: why is my cube only rendering for one frame and then disappearing?
UPDATE: I found the method causing the issue by isolating it. It lies within my Update() method, before Render(). It is when my camera is updated that the problem occurs. Here is the code for that method; perhaps someone can see what I am missing?
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
    XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
                                                   pMovement->mRotation.y,
                                                   pMovement->mRotation.z);
    XMMATRIX view = XMLoadFloat4x4(&pCamera->mViewMat);
    XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
    XMVECTOR lookAt = XMLoadFloat3(&pCamera->mEye);
    XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);

    lookAt = XMVector3TransformCoord(lookAt, rotMat);
    up = XMVector3TransformCoord(up, rotMat);
    lookAt = pos + lookAt;

    view = XMMatrixLookAtLH(pos, lookAt, up);

    XMStoreFloat3(&pCamera->mEye, lookAt);
    XMStoreFloat3(&pCamera->mUp, up);
    XMStoreFloat4x4(&pCamera->mViewMat, view);
}

From the camera update code, one obvious issue is the lookAt variable.
Usually lookAt is a "point", not a "vector" (direction), but in your code it seems you saved it as a point yet used it as a vector. Your camera definition is also incomplete; a standard camera should contain at least a position, an up direction, and a view direction (or a lookAt point).
I assume you want to rotate and translate your camera in the ICamera::Update function, so you rotate the up and view directions and translate the position.
I guess your CMovement gives you a new camera position and a rotation to apply to the camera. If so, you can try modifying the code as below (mEye and mLookAt are points, mUp is a direction):
// rotMat is built from pMovement->mRotation exactly as before
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt);
XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
XMVECTOR viewDir = lookAt - oldPos;  // a direction, computed from two points
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);

viewDir = XMVector3TransformCoord(viewDir, rotMat); // rotate the view direction
up = XMVector3TransformCoord(up, rotMat);           // rotate the up direction
lookAt = pos + viewDir;                             // rebuild the lookAt point

XMMATRIX view = XMMatrixLookAtLH(pos,
                                 lookAt,
                                 up);

XMStoreFloat3(&pCamera->mEye, pos);       // store the new position, not lookAt
XMStoreFloat3(&pCamera->mLookAt, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
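Assembled into the original function, a minimal corrected Update might look like this (a sketch only; mLookAt is an assumed new CCamera member holding the look-at point, which the original class does not have):
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
    // Rotation supplied by the movement component
    XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
                                                   pMovement->mRotation.y,
                                                   pMovement->mRotation.z);

    // Load the stored camera state
    XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
    XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt); // assumed member: a point
    XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
    XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);

    // Rotate the view direction (a vector) rather than the eye point
    XMVECTOR viewDir = XMVector3TransformCoord(lookAt - oldPos, rotMat);
    up = XMVector3TransformCoord(up, rotMat);
    lookAt = pos + viewDir;

    XMMATRIX view = XMMatrixLookAtLH(pos, lookAt, up);

    // Store points as points and directions as directions
    XMStoreFloat3(&pCamera->mEye, pos);
    XMStoreFloat3(&pCamera->mLookAt, lookAt);
    XMStoreFloat3(&pCamera->mUp, up);
    XMStoreFloat4x4(&pCamera->mViewMat, view);
}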

Related

OpenGL: screen-to-world transformation and good use of glm::unProject

I have what I believed to be a basic need: from the "2D position of the mouse on the screen", I need to get "the closest 3D point in the 3D world". This looks like the common ray-casting problem (even if it's not exactly mine).
I googled and read a lot: the topic is messy and lots of things unfortunately get intricate quickly. My initial problem involves lots of 3D points that I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coordinate of each point is 0.f or 0.5f in the local frame), and to see if I can "recover" the 3D point positions from the mouse cursor as I move it over the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer to which I added a GUI to control the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that enable changing the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved with the mouse or the keyboard arrows; it does not act on model, only on view and proj (that's what I understand from tutorials).
The shader then uses model, view and proj this way:
layout (location = 0) in vec3 aPos;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

void main()
{
    // note that we read the multiplication from right to left
    gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Screen to world: as using glm::unProject didn't always return the results I expected, I added a control to choose whether to use it (or to back-project by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
    // screen to world transformation
    xposScreen = xposIn;
    yposScreen = yposIn;

    int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
    glfwGetWindowSize(window, &windowWidth, &windowHeight);
    int frameWidth = 0, frameHeight = 0; // size in pixels.
    glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
    glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
                              glm::vec2(windowWidth, windowHeight);

    glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
    glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
    frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.

    glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    frame3DPos.x = frame2DPos.x;
    frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left
    frame3DPos.z = 0.f;
    glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
                 1, 1, GL_DEPTH_COMPONENT,
                 GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
    frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject or not (back-projecting by hand) according to the controls in the GUI:
    glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    if (screen2WorldUsingGLM) {
        glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
        world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
    } else {
        glm::mat4 trans = proj * view * model;
        glm::vec4 frame4DPos(frame3DPos, 1.f);
        frame4DPos = glm::inverse(trans) * frame4DPos;
        world3DPos.x = frame4DPos.x / frame4DPos.w;
        world3DPos.y = frame4DPos.y / frame4DPos.w;
        world3DPos.z = frame4DPos.z / frame4DPos.w;
    }
Question: the glm::unProject doc says "map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
In pictures, here is what I get:
The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, as the z-axis points positively from the screen toward me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the camera's front vector).
What I get
Triangle in initial setting
I have correct xpos and ypos in the world frame but an incorrect zpos = 0. (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't "back-projecting" by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5 I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not by view or proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" shape to me, so they should behave the same)?
Cube moved with translation
Translating along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including the model transformation) from the position of the mouse cursor, I'd like to understand how.
Question: the glm::unProject doc says "map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view or clip space described here?
As I am new to OpenGL, I didn't get that "object coordinates" in the glm::unProject doc is another way to refer to local space. Solution: either pass view * model to glm::unProject and then apply model to the result, or pass only view to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all the weird behaviors I observed.
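For reference, a minimal sketch of the two equivalent fixes, reusing the question's variable names:
glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);

// Option 1: unProject into local (object) space, then apply model to reach world space
glm::vec3 local3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
glm::vec3 world3DPos = glm::vec3(model * glm::vec4(local3DPos, 1.f));

// Option 2: pass only view, so the "object coordinates" are already world coordinates
world3DPos = glm::unProject(frame3DPos, view, proj, viewport);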

Mouse to world position without gluUnProject?

I'm trying to implement an editor-like placing mode with OpenGL.
When you click somewhere on the screen, an object gets placed at that position.
So far this is my code:
void RayCastMouse(double posx, double posy)
{
    glm::fvec2 NDCCoords = glm::fvec2((2.0 * posx) / float(SCR_WIDTH) - 1.f,
                                      (-2.0 * posy) / float(SCR_HEIGHT) + 1.f);
    glm::mat4 viewProjectionInverse = glm::inverse(projection * camera.GetViewMatrix());
    glm::vec4 worldSpacePosition(NDCCoords.x, NDCCoords.y, 0.0f, 1.0f);
    glm::vec4 worldRay = viewProjectionInverse * worldSpacePosition;
    printf("X=%f / y=%f / z=%f\n", worldRay.x, worldRay.y, worldRay.z);
    m_Boxes.emplace_back(worldRay.x, 0, worldRay.z);
}
The problem is that the object isn't placed at the correct position; the worldRay vector is used to translate the model matrix.
Can anyone please help with this? I will really appreciate it.
By the way, dividing the worldRay xyz components by worldRay.w places the object at the camera position.
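For what it's worth, a common approach here is to unproject the near and far planes, form a ray, and intersect it with the plane the boxes live on. Below is a sketch using the question's names; it assumes the boxes sit on the y = 0 plane (which the emplace_back call suggests), and it is not the asker's confirmed solution:
void RayCastMouse(double posx, double posy)
{
    glm::vec2 ndc((2.0f * (float)posx) / (float)SCR_WIDTH - 1.f,
                  (-2.0f * (float)posy) / (float)SCR_HEIGHT + 1.f);
    glm::mat4 invVP = glm::inverse(projection * camera.GetViewMatrix());

    // Unproject two depths; the perspective divide by w is essential
    glm::vec4 nearPt = invVP * glm::vec4(ndc, -1.f, 1.f); // NDC near plane
    glm::vec4 farPt  = invVP * glm::vec4(ndc,  1.f, 1.f); // NDC far plane
    glm::vec3 rayOrigin = glm::vec3(nearPt) / nearPt.w;   // ~ camera position
    glm::vec3 rayDir = glm::normalize(glm::vec3(farPt) / farPt.w - rayOrigin);

    // Intersect the ray with the ground plane y = 0
    if (rayDir.y != 0.f)
    {
        float t = -rayOrigin.y / rayDir.y;
        if (t > 0.f)
        {
            glm::vec3 hit = rayOrigin + t * rayDir;
            m_Boxes.emplace_back(hit.x, 0.f, hit.z);
        }
    }
}
This also explains the observation above: dividing the unprojected point by w yields a point on the near plane, which sits right in front of the camera.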

Matrix multiplication - How to make a planet spin on its own axis, and orbit its parent?

I've got two planets, the sun and the earth. I want them to spin on their own axes and orbit their parents at the same time.
I can get these two behaviors to work individually, but I'm stumped as to how to combine them.
void planet::render() {
    mat4 axisPos = mat4(1.0f);
    axialRotation = rotate(axisPos, axisRotationSpeedConstant, vec3(0.0f, 1.0f, 0.0f));

    if (hostPlanet != NULL) {
        mat4 hostTransformPosition = mat4(1.0f);
        hostTransformPosition[3] = hostPlanet->getTransform()[3];

        orbitalSpeed += orbitalSpeedConstant;
        orbitRotation = rotate(hostTransformPosition, orbitalSpeed, vec3(0.0f, 1.0f, 0.0f));
        orbitRotation = translate(orbitRotation, vec3(distanceFromParent, 0.0f, 0.0f));

        // rotTransform will make them spin on their axis, but not orbit their parent planet
        mat4 rotTransform = transform * axialRotation;
        //transform *= rotTransform;

        // orbitRotation will make the planet orbit, but it won't spin on its own axis.
        transform = orbitRotation;
    }
    else {
        transform *= axialRotation;
    }

    glUniform4fv(gColLoc, 1, &color[0]);
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &getTransform()[0][0]);
    glDrawArrays(GL_LINES, 0, NUMVERTS);
};
Woohoo! As usual, asking the question led me to being able to answer it. Knowing that transform[0] to transform[2] represent the rotation in the 4x4 matrix (with transform[3] representing the position in 3D space), I thought to replace the rotation part of the orbit matrix with the rotation from the spin calculation.
Badabing, I got my answer.
transform = orbitRotation;
transform[0] = rotTransform[0];
transform[1] = rotTransform[1];
transform[2] = rotTransform[2];
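Equivalently, the same fix can be written the other way around, overwriting the translation column instead of copying the three rotation columns. A sketch under the same assumption (the basis columns hold only rotation, no scale):
mat4 rotTransform = transform * axialRotation; // accumulate the axial spin
rotTransform[3] = orbitRotation[3];            // take the world position from the orbit
transform = rotTransform;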

Modifying camera location based on keypress

I have an application which renders graphics on screen using DirectX. The default first-person view is set using the code below, setting the Y-axis value to 0. The code is contained within the SetupCamera() function, which is called prior to Render().
D3DXVECTOR3 vCamera(5.0f, 0.0f, -45.0f);
I am using DirectInput to manage user input. I want to allow the user to press SPACE to change the Y-axis value to 90, switching to a top-down view, then press it again to switch back. I currently have the code below in my ProcessKeyboardInput() function, called prior to Render().
if (KEYDOWN(buffer, DIK_SPACE))
{
    // ???
}
But I am unsure what I need to do to let the user adjust the value without interrupting the rendering. I must be missing something simple here. Any help would be much appreciated. Thanks!
Full SetupCamera() code...
void SetupCamera()
{
    // Setup view matrix
    D3DXVECTOR3 vCamera(5.0f, 0.0f, -45.0f);  // camera location x,y,z plane
    D3DXVECTOR3 vLookat(5.0f, 5.0f, 0.0f);    // camera direction x,y,z plane
    D3DXVECTOR3 vUpVector(0.0f, 1.0f, 0.0f);  // which way is up x,y,z plane
    D3DXMATRIX matView;
    D3DXMatrixLookAtLH(&matView, &vCamera, &vLookat, &vUpVector);
    D3D_Device->SetTransform(D3DTS_VIEW, &matView);

    // Setup projection matrix to transform 2D geometry into 3D space
    D3DXMATRIX matProj;
    D3DXMatrixPerspectiveFovLH(&matProj, D3DX_PI/4, 1.0f, 1.0f, 100.0f);
    D3D_Device->SetTransform(D3DTS_PROJECTION, &matProj);
}
Full ProcessKeyboardInput() code...
void WINAPI ProcessKeyboardInput()
{
// Define a macro to represent the key detection predicate
#define KEYDOWN(name, key) (name[key] & 0x80)
// Create buffer to contain keypress data
char buffer[256];
HRESULT hr;
// Clear the buffer prior to use
ZeroMemory(&buffer, 256);
hr = g_pDIKeyboardDevice -> GetDeviceState(sizeof(buffer),(LPVOID)&buffer);
if FAILED(hr)
{
// If device state cannot be attained, check if it has been lost and try to aquire it again
hr = g_pDIKeyboardDevice -> Acquire();
while (hr == DIERR_INPUTLOST) hr = g_pDIKeyboardDevice -> Acquire();
hr = g_pDIKeyboardDevice -> GetDeviceState(sizeof(buffer),(LPVOID)&buffer);
}
bool topView = false;
if (KEYDOWN(buffer, DIK_Q))
{
// 'Q' has been pressed - instruct the application to exit.
PostQuitMessage(0);
}
if (KEYDOWN(buffer, DIK_SPACE))
{
// Space has been pressed - swap from 1st person view to overhead view
// topView is true
topView = !topView;
// if topView is true, adjust camera accordingly
if (topView)
{
vCamera.y = 90.f;
}
else // if untrue, set camera to 1st person
{
vCamera.y = 0.f;
}
}
}
I think you're on the right track. Don't worry about "interrupting the rendering"; there is a lot of processing going on between render calls in any graphics application. How about something like this:
// Have this somewhere appropriate in the code
bool topView = false;

if (KEYDOWN(buffer, DIK_SPACE))
{
    // Toggle top view
    topView = !topView;

    // Set camera vector
    if (topView) {
        vCamera.y = 90.f;
    } else {
        vCamera.y = 0.f;
    }
}
After updating, you need to send your matrices to DirectX again. Basically, you need to call the following chunk of code again now that vCamera (or any of the other view vectors) has changed. I'd suggest bundling the code up in the SetupCamera() function and calling it in the Render() function, after checking for user input. That would also require moving vCamera etc. outside the function so that other functions can use them.
// vCamera, vLookat, vUpVector defined outside the function
// Call this in the rendering loop to set up the camera
void SetupCamera() {
    // Calculate new view matrix from view vectors
    D3DXMATRIX matView;
    D3DXMatrixLookAtLH(&matView, &vCamera, &vLookat, &vUpVector);
    D3D_Device->SetTransform(D3DTS_VIEW, &matView);

    // Setup projection matrix to transform 2D geometry into 3D space
    D3DXMATRIX matProj;
    D3DXMatrixPerspectiveFovLH(&matProj, D3DX_PI/4, 1.0f, 1.0f, 100.0f);
    D3D_Device->SetTransform(D3DTS_PROJECTION, &matProj);
}
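A rough sketch of how the pieces could fit together each frame (names follow the question; the render loop itself is assumed, not taken from the question's code):
// File scope, visible to both input handling and camera setup
D3DXVECTOR3 vCamera(5.0f, 0.0f, -45.0f);
D3DXVECTOR3 vLookat(5.0f, 5.0f, 0.0f);
D3DXVECTOR3 vUpVector(0.0f, 1.0f, 0.0f);
bool topView = false;

void Render()
{
    ProcessKeyboardInput(); // may toggle topView and change vCamera.y
    SetupCamera();          // rebuilds and re-sends the view/projection matrices
    // ... draw the scene and present as before ...
}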

DirectX 11 D3DXMatrixTranslation continues to run?

I want to bind moving an object to a button press. When I press the button, the object vanishes quickly and reappears as if the first translation had always been running. Then, when I let go of the button, it quickly vanishes again and ends up where it would have been without touching the button, 'bouncing' between the two states as I press/release the button.
D3DXMATRIX worldMatrix, viewMatrix, projectionMatrix;
bool result;

// Generate the view matrix based on the camera's position.
m_Camera->Render();

// Get the world, view, and projection matrices from the camera and d3d objects.
m_Camera->GetViewMatrix(viewMatrix);
m_Direct3D->GetWorldMatrix(worldMatrix);
m_Direct3D->GetProjectionMatrix(projectionMatrix);

// Move the world matrix by the rotation value so that the object will move.
if(m_Input->IsAPressed() == true) {
    D3DXMatrixTranslation(&worldMatrix, 1.0f*rotation, 0.0f, 0.0f);
}
else {
    D3DXMatrixTranslation(&worldMatrix, 0.1f*rotation, 0.0f, 0.0f);
}

// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());

// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(),
                               worldMatrix, viewMatrix, projectionMatrix,
                               m_Model->GetTexture(), m_Light->GetDirection(),
                               m_Light->GetDiffuseColor());
if(!result)
{
    return false;
}

// Present the rendered scene to the screen.
m_Direct3D->EndScene();
I'm still really new to DX11 and I've had a good look around. I'm pulling my hair out here trying to work out what's going on.
That's what your code does: if the button is pressed you set one world matrix; if not, another. What you need to do is multiply the world matrix by a newly generated translation matrix. Notice that this multiplication will happen ~60 times every second, so each step needs to move only a very tiny distance.
Your code should be like this:
if (m_Input->IsAPressed() == true) {
    D3DXMATRIX translation;
    D3DXMatrixTranslation(&translation, 0.05f, 0.0f, 0.0f);
    worldMatrix *= translation;
}
You may need to call
m_Direct3D->SetWorldMatrix(worldMatrix);
or something similar. I'm not familiar with the classes you're using for m_Camera and m_Direct3D.
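As a side note on the step size: scaling the translation by the elapsed frame time keeps the movement speed constant if the frame rate varies. A sketch; speed and dt are assumed to come from your own timer code, they are not part of the question's classes:
if (m_Input->IsAPressed() == true) {
    D3DXMATRIX translation;
    // speed is in units per second, dt is seconds since the last frame
    D3DXMatrixTranslation(&translation, speed * dt, 0.0f, 0.0f);
    worldMatrix *= translation;
}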