I want to bind moving an object to a button press. When I press the button, the object quickly vanishes and reappears as if the first translation had been running the whole time. Then when I let go of the button, it quickly vanishes again and ends up where it would have been had I never pressed it: it 'bounces' between the two paths as I press/release the button.
D3DXMATRIX worldMatrix, viewMatrix, projectionMatrix;
bool result;
// Generate the view matrix based on the camera's position.
m_Camera->Render();
// Get the world, view, and projection matrices from the camera and d3d objects.
m_Camera->GetViewMatrix(viewMatrix);
m_Direct3D->GetWorldMatrix(worldMatrix);
m_Direct3D->GetProjectionMatrix(projectionMatrix);
// Move the world matrix by the rotation value so that the object will move.
if(m_Input->IsAPressed() == true) {
D3DXMatrixTranslation(&worldMatrix, 1.0f*rotation, 0.0f, 0.0f);
}
else {
D3DXMatrixTranslation(&worldMatrix, 0.1f*rotation, 0.0f, 0.0f);
}
// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());
// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
m_Model->GetTexture(), m_Light->GetDirection(), m_Light->GetDiffuseColor());
if(!result)
{
return false;
}
// Present the rendered scene to the screen.
m_Direct3D->EndScene();
I'm still really new to DX11 and I've had a good look around. I'm pulling my hair out here trying to work out what's going on.
That's what your code does: if the button is pressed you set one world matrix; if not, another. What you need to do instead is multiply the world matrix by a newly generated translation matrix. Notice that this multiplication will happen ~60 times every second, so each step should move only a very tiny distance.
Your code should be like this
if (m_Input->IsAPressed() == true) {
D3DXMATRIX translation;
D3DXMatrixTranslation(&translation, 0.05f, 0.0f, 0.0f);
worldMatrix *= translation;
}
You may need to do
m_Direct3D->SetWorldMatrix(worldMatrix);
or something similar; I'm not familiar with the classes you're using for m_Camera and m_Direct3D.
I am trying to animate an object in Qt3D to rotate around a specific axis (not the origin) while performing other transformations (e.g. scaling and translating).
The following code rotates the object as I want it but without animation yet.
QMatrix4x4 mat = QMatrix4x4();
mat.scale(10);
mat.translate(QVector3D(-1.023, 0.836, -0.651));
mat.rotate(QQuaternion::fromAxisAndAngle(QVector3D(0,1,0), -20));
mat.translate(-QVector3D(-1.023, 0.836, -0.651));
//scaling here after rotating/translating shifts the rotation back to be around the origin (??)
Qt3DCore::QTransform *transform = new Qt3DCore::QTransform(root);
transform->setMatrix(mat);
//...
entity->addComponent(transform); //the entity of the object i am animating
I have not managed to incorporate a QPropertyAnimation into this code as I'd like. Animating only the rotationY property does not let me include the rotation origin, so the object rotates around the wrong axis. Animating the matrix property produces the correct end result, but rotates in a way that is not desired/realistic in my scenario. So how can I animate this rotation so that it rotates around a given axis?
EDIT: There is a QML equivalent to what I want. There, you can specify the origin of the rotation and just animate the angle values:
Rotation3D{
id: doorRotation
angle: 0
axis: Qt.vector3d(0,1,0)
origin: Qt.vector3d(-1.023, 0.836, -0.651)
}
NumberAnimation {target: doorRotation; property: "angle"; from: 0; to: -20; duration: 500}
How can I do this in C++?
I think it is possible to use the Qt 3D: Simple C++ Example to obtain what you are looking for by simply modifying the updateMatrix() method in orbittransformcontroller.cpp:
void OrbitTransformController::updateMatrix()
{
m_matrix.setToIdentity();
// Move to the origin point of the rotation
m_matrix.translate(40, 0.0f, -200);
// Infinite 360° rotation
m_matrix.rotate(m_angle, QVector3D(0.0f, 1.0f, 0.0f));
// Radius of the rotation
m_matrix.translate(m_radius, 0.0f, 0.0f);
m_target->setMatrix(m_matrix);
}
Note: It is easier to change the torus into a small sphere to observe the rotation.
EDIT from question asker: This idea is indeed a very good way of solving the problem! To apply it specifically to my scenario the updateMatrix() function has to look like this:
void OrbitTransformController::updateMatrix()
{
//take the existing matrix to not lose any previous transformations
m_matrix = m_target->matrix();
// Move to the origin point of the rotation, _rotationOrigin would be a member variable
m_matrix.translate(_rotationOrigin);
// rotate (around the y axis)
m_matrix.rotate(m_angle, QVector3D(0.0f, 1.0f, 0.0f));
// translate back
m_matrix.translate(-_rotationOrigin);
m_target->setMatrix(m_matrix);
}
I have made _rotationOrigin also a property in the controller class, which can then be set externally to a different value for each controller.
I'm making a weather simulation in OpenGL 4.0 and am trying to create the sky by drawing a fullscreen quad in the background. I'm trying to do that by having the vertex shader pass through four vertices and then drawing a triangle strip. Everything compiles just fine and I can see all the other objects I've made before, but the sky is nowhere to be seen. What am I doing wrong?
main.cpp
GLint stage = glGetUniformLocation(myShader.Program, "stage");
//...
glBindVertexArray(FS); //has four coordinates (-1,-1,1) to (1,1,1) in buffer object
glUniform1i(stage, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
vertex shader
uniform int stage;
void main()
{
if (stage==1)
{
gl_Position = vec4(position, 1.0f);
}
else
{
//...
}
}
fragment shader
uniform int stage;
void main()
{
if (stage==1)
{ //placeholder gray colour so I can see the sky
color = vec4(0.5f, 0.5f, 0.5f, 1.0f);
}
else
{
//...
}
}
I should also mention that I'm a beginner in OpenGL and that it really has to be in OpenGL 4.0 or later.
EDIT:
I've figured out where the problem is, but still don't know how to fix it. The square exists, but it only displays if I multiply it by the view and projection matrices (but then it doesn't stay glued to the screen and just rotates along with the rest of the scene, which I do not want). Essentially, I somehow need to switch to 2D/screen space, draw the square, and then switch back to 3D so that all the other objects still work. How?
The issue was with putting a 1 as the z coordinate: putting 0.999f instead solved it.
First time poster on this site. But I have hit a serious block and am lost. If this is too much to read, the question is at the bottom. But I thought the background would help.
A little background on the project:
I currently have a rendering engine set up using DirectX10. The issue appeared after implementing my design: component (data) pointers stored inside a component manager (which holds pointers to the components of the entities in the world), with that data handed to an interface manager (which holds all the methods for loading, creation, updating, rendering, etc.).
Here is an image of a class diagram to make it easier to visualize:
edited: (I do not have enough rep to post an image, so here is a link instead: http://imgur.com/L3nOyoY)
edit:
My rendering engine has a weird issue of rendering a cube (parsed from a Wavefront .obj file and put into an array of custom vertices) for only one frame. After the initial presentation of the back buffer, my cube disappears.
I have gone through a lot of debugging to fix this issue, but have yielded no answers. I have files of dumps of the change in vectors for position, scale, rotation, etc. as well as the world matrix. All data in regards to the position of the object in world space and the camera position in world space maintains its integrity. I have also checked to see if there were issues with other pointers either being corrupted, being modified where they ought not to be, or deleted by accident.
Stepping through the locals over several updates and renders, I find no change in Movement components, no change in Texture components, and no change to my shader variables and pointers.
Here is the code running initially, and then the loop,
void GameWorld::Load()
{
//initialize all members here
mInterface = InterfaceManager(pDXManager);
//create entities here
mEHouse = CEntity(
CT::MESH | CT::MOVEMENT | CT::SHADER | CT::TEXTURE,
"House", &mCManager);
mECamera = CEntity(
CT::CAMERA | CT::MOVEMENT | CT::LIGHT,
"Camera", &mCManager);
//HACKS FOR TESTING ONLY
//Ideally create script to parse to create entities;
//GameWorld will have dynamic entity list;
//////////////////////////////////////////////////
tm = CMesh("../Models/Box.obj");
mCManager.mMesh[0] = &tm;
//hmmm.... how to make non-RDMS style entities...
tc = CCamera(
XMFLOAT3(0.0f, 0.0f, 1.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f),
XMFLOAT3(0.0f, 1.0f, 0.0f), 1);
mCManager.mCamera[0] = &tc;
tmc = CMovement(
XMFLOAT3(0.0f, 0.0f, -10.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f));
mCManager.mMovement[1] = &tmc;
////////////////////////////////////////////////////
//only after all entities are created
mInterface.onLoad(&mCManager);
}
//core game logic goes here
void GameWorld::Update(float dt)
{
mInterface.Update(dt, &mCManager);
}
//core rendering logic goes here
void GameWorld::Render()
{
pDXManager->BeginScene();
//render calls go here
mInterface.Render(&mCManager);
//disappears after end scene
pDXManager->EndScene();
}
And here is the interface render and update methods:
void InterfaceManager::onLoad(CComponentManager* pCManager)
{
//create all
for(int i = 0; i < pCManager->mTexture.size(); ++i)
{
mTexture2D.loadTextureFromFile(pDXManager->mD3DDevice, pCManager->mTexture[i]);
}
for(int i = 0; i < pCManager->mMesh.size(); ++i)
{
mMesh.loadMeshFromOBJ(pCManager->mMesh[i]);
mMesh.createMesh(pDXManager->mD3DDevice, pCManager->mMesh[i]);
}
for(int i = 0; i < pCManager->mShader.size(); ++i)
{
mShader.Init(pDXManager->mD3DDevice, pDXManager->mhWnd, pCManager->mShader[i], pCManager->mTexture[i]);
}
//TODO: put this somewhere else to maintain structure
XMMATRIX pFOVLH = XMMatrixPerspectiveFovLH((float)D3DX_PI / 4.0f, (float)pDXManager->mWindowWidth/pDXManager->mWindowHeight, 0.1f, 1000.0f);
XMStoreFloat4x4(&pCManager->mCamera[0]->mProjectionMat, pFOVLH);
}
void InterfaceManager::Update(float dt, CComponentManager* pCManager)
{
//update input
//update ai
//update collision detection
//update physics
//update movement
for(int i = 0; i < pCManager->mMovement.size(); ++i)
{
mMovement.transformToWorld(pCManager->mMovement[i]);
}
//update animations
//update camera
//There is only ever one active camera
//TODO: somehow set up for an activecamera variable
mCamera.Update(pCManager->mCamera[0], pCManager->mMovement[pCManager->mCamera[0]->mOwnerID]);
}
void InterfaceManager::Render(CComponentManager* pCManager)
{
for(int i = 0; i < pCManager->mMesh.size(); ++i)
{
//render meshes
mMesh.RenderMeshes(pDXManager->mD3DDevice, pCManager->mMesh[i]);
//set shader variables
mShader.setShaderMatrices(pCManager->mCamera[0], pCManager->mShader[i], pCManager->mMovement[i]);
mShader.setShaderLight(pCManager->mLight[i], pCManager->mShader[i]);
mShader.setShaderTexture(pCManager->mTexture[i]);
//render shader
mShader.RenderShader(pDXManager->mD3DDevice, pCManager->mShader[i], pCManager->mMesh[i]);
}
}
In short, my question could be this: Why is my cube only rendering for one frame, then disappearing?
UPDATE: I found the method causing the issue by isolating it. It lies within my Update() method, before Render(): it is when my camera is updated that the problem appears. Here is the code for that method; perhaps someone can see what I am missing?
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
pMovement->mRotation.y,
pMovement->mRotation.z);
XMMATRIX view = XMLoadFloat4x4(&pCamera->mViewMat);
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mEye);
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
lookAt = XMVector3TransformCoord(lookAt, rotMat);
up = XMVector3TransformCoord(up, rotMat);
lookAt = pos + lookAt;
view = XMMatrixLookAtLH(pos,
lookAt,
up);
XMStoreFloat3(&pCamera->mEye, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
XMStoreFloat4x4(&pCamera->mViewMat, view);
}
From the camera update code, one obvious issue is the lookAt variable.
Usually lookAt is a "point", not a "vector" (direction), but your code stores it as a point while using it as a vector. Your camera definition is also incomplete: a standard camera should contain at least a position, an up direction, and a view direction (or lookAt point).
I assume you want to rotate and translate your camera in the ICamera::Update function, so you rotate the up and view directions and translate the position.
I guess your CMovement gives you a new camera position and a rotation to apply to the camera. You can then modify the code as below (mEye and mLookAt are points, mUp is a direction):
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt);
XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
XMVECTOR viewDir = lookAt - oldPos;
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
viewDir = XMVector3TransformCoord(viewDir, rotMat);
up = XMVector3TransformCoord(up, rotMat);
lookAt = pos + viewDir;
view = XMMatrixLookAtLH(pos,
lookAt,
up);
XMStoreFloat3(&pCamera->mEye, pos);
XMStoreFloat3(&pCamera->mLookAt, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
I'm writing an engine and using Light 0 as the "sun" for the scene. The sun is a directional light.
I set up the scene's ortho viewpoint, then set up the light to be on the "east" side of the screen (and of the character). x/y are the coordinates of the plane terrain, with positive z facing the camera and indicating "height" on the terrain; the scene is also rotated on the x axis for an isometric view.
The light seems to shine fine "east" of (0,0,0), but it does not shift as the character moves (CenterCamera does a glTranslatef with the negatives of the values provided, so that callers can specify world coordinates). Meaning, the further I move to the west, it's ALWAYS dark, with no light.
Graphics.BeginRenderingLayer();
{
Video.MapRenderingMode();
Graphics.BeginLightingLayer( Graphics.AmbientR, Graphics.AmbientG, Graphics.AmbientB, Graphics.DiffuseR, Graphics.DiffuseG, Graphics.DiffuseB, pCenter.X, pCenter.Y, pCenter.Z );
{
Graphics.BeginRenderingLayer();
{
Graphics.CenterCamera( pCenter.X, pCenter.Y, pCenter.Z );
RenderMap( pWorld, pCenter, pCoordinate );
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer();
{
Graphics.DrawMan( pCenter );
}
Graphics.EndRenderingLayer();
}
Graphics.EndLightingLayer();
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer = PushMatrix; EndRenderingLayer = PopMatrix. Video.MapRenderingMode = ortho projection and scene rotation/zoom. CenterCamera translates by the opposite of X/Y/Z, so that the character is now centered at X/Y/Z in the middle of the screen.
Any thoughts? Maybe I've confused some of my code here a little?
The lighting code is as follows:
public static void BeginLightingLayer( float pAmbientRed, float pAmbientGreen, float pAmbientBlue, float pDiffuseRed, float pDiffuseGreen, float pDiffuseBlue, float pX, float pY, float pZ )
{
Gl.glEnable( Gl.GL_LIGHTING );
Gl.glEnable( Gl.GL_NORMALIZE );
Gl.glEnable( Gl.GL_RESCALE_NORMAL );
Gl.glEnable( Gl.GL_LIGHT0 );
Gl.glShadeModel( Gl.GL_SMOOTH );
float[] AmbientLight = new float[4] { pAmbientRed, pAmbientGreen, pAmbientBlue, 1.0f };
float[] DiffuseLight = new float[4] { pDiffuseRed, pDiffuseGreen, pDiffuseBlue, 1.0f };
float[] PositionLight = new float[4] { pX + 10.0f, pY, 0, 0.0f };
//Light position/direction is 10 to the east of the player.
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_AMBIENT, AmbientLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_DIFFUSE, DiffuseLight );
Gl.glLightfv( Gl.GL_LIGHT0, Gl.GL_POSITION, PositionLight );
Gl.glEnable( Gl.GL_COLOR_MATERIAL );
Gl.glColorMaterial( Gl.GL_FRONT_AND_BACK, Gl.GL_AMBIENT_AND_DIFFUSE );
}
You will need to provide normals for each surface. What is happening (without normals) is that the directional light essentially shines on everything east of zero, positionally, while everything there has the default normal of (0, 0, 1) (it faces west).
As far as I can tell you do not need to send a normal with every vertex; rather, because GL is a state machine, you need to make sure that whenever the normal changes you set it. So if you're rendering a face of a cube, the 'west' face should have a single call:
glNormal3i(0,0,1);
glTexCoord..
glVertex3f...
glTexCoord..
etc.
In the case of x-y-z-aligned rectangular prisms, integers are sufficient. For faces that do not face one of the six cardinal directions, you will need to normalize the normals. In my experience you only need to compute the normal from the first three points unless the quad is not flat. This is done by finding the normal of the triangle formed by the first three vertices of the quad.
There are a few simple tuts on 'Calculating Normals' that I found enlightening.
The second part of this is that since it is a directional light (w = 0), repositioning it with the player position doesn't make sense. Unless the light itself is being emitted from behind the camera and you are rotating an object in front of you (like a model) that you wish to always be front-lit, its position should probably be something like
float[] PositionLight = new float[4] { 0.0f, 0.0f, 1.0f, 0.0f };
Or, if the GL x direction is being interpreted as the east-west direction (i.e. you initially face north/south):
float[] PositionLight = new float[4] { 1.0f, 0.0f, 0.0f, 0.0f };
The concept is that you are calculating the light per-face, and if the light doesn't move and the scene itself is not moving (just the camera moving around the scene) the directional calculation will always remain correct. Provided the normals are accurate, GL can figure out the intensity of light showing on a particular face.
The final thing here is that GL will not automatically handle shadows for you. Basic GL_Light is sufficient for a controlled lighting of a series of convex shapes, so you will have to figure out whether or not a light (such as the sun) should be applied to a face. In some cases this is just taking the solid the face belongs to and seeing if the vector of the sun's light intersects with another solid before reaching the 'sky'.
Look for stuff on lightmaps as well as shadowmapping for this.
One thing that can trip up many people is that the position sent to glLightFv is translated by the current matrix stack. Thus if you want to have your light set to a specific position in world coordinates, your camera and projection matrices must be set and active on the matrix stack at the time of the glLightFv call.
Say I use glRotatef to rotate the current view based on some arbitrary user input (e.g. if the left key is pressed then rtri += 2.5f):
glRotatef(rtri,0.0f,1.0f,0.0f);
Then I draw the triangle in the rotated position:
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd(); // Finished Drawing The Triangle
How do I get the resulting transformed vertices for use in collision detection? Or will I have to apply the transform manually myself, thus doubling up the work?
The reason I ask is that I wouldn't mind implementing display lists.
The objects you use for collision detection are usually not the objects you use for display. They are usually simpler and faster.
So yes, the way to do it is to maintain the transformation you're using manually but you wouldn't be doubling up the work because the objects are different.
Your game loop should look like this (in C++):
void Scene::Draw()
{
this->setClearColor(0.0f, 0.0f, 0.0f);
for(std::vector<GameObject*>::iterator it = this->begin(); it != this->end(); ++it)
{
GameObject* obj = *it; //the iterator yields a GameObject*, so dereference it first
this->updateColliders(obj);
glPushMatrix();
glRotatef(obj->rotation.angle, obj->rotation.x, obj->rotation.y, obj->rotation.z);
glTranslatef(obj->position.x, obj->position.y, obj->position.z);
glScalef(obj->scale.x, obj->scale.y, obj->scale.z);
obj->Draw();
glPopMatrix();
}
this->runNextFrame(this->Draw, Scene::MAX_FPS);
}
So, for instance, if I use a basic box collider with a cube, the draw method will:
Fill the screen with a black color (rgb : (0,0,0))
For each object
Compute the collisions using position and size information
Save the actual ModelView matrix state
Transform the ModelView matrix (rotate, translate, scale)
Draw the cube
Restore the ModelView matrix state
Check the FPS and run the next frame at the right time
Note: the Scene class inherits from std::vector<GameObject*>.
I hope it will help! :)