DirectX 11: Moving my square (C++)

I am learning DirectX 11 and have gotten myself to a point where I have a square displayed. My 4 vertices are defined as:
VertexPos vertices[] =
{
    XMFLOAT3(-0.2f,  0.2f, 0.2f),
    XMFLOAT3( 0.2f,  0.2f, 0.2f),
    XMFLOAT3(-0.2f, -0.2f, 0.2f),
    XMFLOAT3( 0.2f, -0.2f, 0.2f)
};
They then go through the necessary stages to render to the screen (like the hello world of DirectX programming). I have combined in some code from a DirectInput demo and would like to be able to move the square around using the arrow keys. So far I have:
void SquareMove::Update( float dt )
{
    keyboardDevice_->GetDeviceState(sizeof(keyboardKeys_), (LPVOID)&keyboardKeys_);

    // Key-release event: DOWN was held last frame and is up this frame.
    if (KEYDOWN(prevKeyboardKeys_, DIK_DOWN) && !KEYDOWN(keyboardKeys_, DIK_DOWN))
    {
        PostQuitMessage(0);
    }
}
(That was just to test that my down-arrow handler works.) I am now at a loss as to how to implement the steps needed to actually move my square. I understand it has to do with a translation matrix (XMMatrixTranslation / D3DXMatrixTranslation), but I am struggling to see how all the pieces fit together to perform the necessary operations. Thanks

I am not quite clear whether you actually understand the effect of the translation matrix or not. There are many great tutorials to guide you through the implementation details, so I will just share what was interesting to me: the way I first came to understand rendering.
In short, you have to learn to visualize the effect of the mathematics on your object. Your square's vertices are currently in what is called "model" (or "local") space, i.e. they only describe the shape and size of the model, relative to the (0, 0, 0) origin. Now you have to position it in your "world". To do so, you have to move ("translate") each vertex of your object to its new place while keeping the size and shape you defined in model space: translate each of them by the same length and in the same direction, in other words by the same vector, applying the same calculation to every vertex.
In rendering, transformation of an object is achieved through matrices. Each matrix has values at certain positions which, when multiplied with the coordinates of the object, change them in some way. Applying matrices to the object's coordinates one after another (via multiplication) applies successive transformations: an object may first be rotated (this rotates it around the center of your world, i.e. around (0, 0, 0)), then translated, then rotated again... and that is just a multiplication of 3 matrices.
The specific implementation may be found at plenty of places: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-5

Just for completeness, this is how I managed to do it. In the render function I added the following:
XMMATRIX m_Translation = XMMatrixTranslation(0.0f, fY, 0.0f);
XMMATRIX mvp = world * vpMatrix_ * m_Translation;
(Previously it was just world * vpMatrix_.) Here fY is controlled via key events:
if (KEYDOWN(prevKeyboardKeys_, DIK_DOWN) && !KEYDOWN(keyboardKeys_, DIK_DOWN))
{
    fY -= 0.1f;
}
And when running the application my object moves!

Related

Correct camera transformation for first person camera

I am making a camera in OpenGL and I am having trouble with a first-person camera. I have had a few versions of the camera transformation, and each had its own problems.

At first I did the transformations in this order: I would translate the object in the positive direction when trying to move away from it, and in the negative direction when trying to move towards it. After this translation I would perform rotations around the X and Y axes. When I used this camera I found that with objects in my scene, say a few cubes, rotating is fine, but when I translate after a rotation, all of the objects converge on me, or better to say, on the "player". After giving this some thought I realized that, because I do the translations first, in the next frame, when I try to translate the player in the direction the camera is looking at that moment, the objects get translated first and then rotated, so the result is movement of the objects towards or away from the player. The code for this is here (and don't mind the camUp and camRight vectors; these are just the Y- and X-axis vectors and are not transformed at all):
m_ViewMatrix = inverse(glm::rotate(glm::mat4(1.0f), m_Rotation, camUp)) * inverse(glm::rotate(glm::mat4(1.0f), m_TiltRotation, camRight)) * glm::translate(glm::mat4(1.0f), m_Position);
But the option to rotate and then translate is not good either, because then I get an editor-style camera, which is fine but not what I want.
So I thought about it some more and tried making small transformations and then resetting the parameters, accumulating all the transformations this way:
m_ViewMatrix = inverse(glm::rotate(glm::mat4(1.0f), m_Rotation, camUp)) * glm::translate(glm::mat4(1.0f), m_Position)* inverse(glm::rotate(glm::mat4(1.0f), m_TiltRotation, camRight))*m_ViewMatrix;
m_Position = { 0.0f, 0.0f, 0.0f };
m_Rotation = 0.0f;
m_TiltRotation = 0.0f;
But now I have a problem with rotations around the Z axis, which I don't want; this problem was not there before. So now I have no idea what to do. I read some answers here but couldn't apply them, I don't know why. If anyone could help me in the context of the code I copied here, that would be great.

Efficient way to render multiple mesh objects in different positions using DirectX / C++

When using only one translation matrix, multiple meshes appear overlapping onscreen.
The solution I tried was to create multiple translation matrices to set different initial xyz coordinates for each mesh. It works, but this method seems pretty inefficient in terms of the number of lines of code used. (The final project will probably incorporate 20+ meshes so I was hoping I would not need to create 20 different translation matrices using basically the same section of code).
I'd greatly appreciate any suggestions as to the best way of rendering multiple meshes with the most efficient use of code (i.e. the fewest instructions with the least amount of repetition).
This is only a small graphical demo so getting a high framerate is not the priority, but achieving the result with the most efficient use of code is.
The code below is a sample of how I'm currently rendering multiple meshes in different positions...
// initial locations for each mesh
D3DXVECTOR3 Translate1(-30.0f, 0.0f, 0.0f);
D3DXVECTOR3 Translate2(-30.0f, 10.0f, 0.0f);
D3DXVECTOR3 Translate3(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 Translate4(0.0f, 10.0f, 0.0f);
//set scaling on all x y and z axis
D3DXVECTOR3 Scaling(g_ScaleX, g_ScaleY, g_ScaleZ);
/////////////////////////////////////////////////////////////////////////
// create first transformation matrix
D3DXMATRIX world1;
// create first translation matrix
D3DXMATRIX matrixTranslate;
D3DXMatrixTransformation(&world1, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate1);
// set world1 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world1);
// recompute normals
g_pd3dDevice -> SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);
// render first mesh
mesh1.Render(g_pd3dDevice);
/////////////////////////////////////////////////////////////////////////
D3DXMATRIX world2;
D3DXMATRIX matrixTranslate2;
D3DXMatrixTransformation(&world2, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate2);
// set world2 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world2);
// render second mesh
mesh2.Render(g_pd3dDevice);
////////////////////////////////////////////////////////////////////////
D3DXMATRIX world3;
D3DXMATRIX matrixTranslate3;
D3DXMatrixTransformation(&world3, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate3);
// set world3 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world3);
//render thirdmesh
mesh3.Render(g_pd3dDevice);
//////////////////////////////////////////////////////////////////////
Edit: I see that by "efficient" you meant compact code; it usually means less CPU usage :)
In that case, yes, I noticed that you copy and paste essentially the same code. Why not use a function that takes parameters, including a transform and a mesh? That way you can write one function that draws a mesh and call it for each mesh. Better yet, also store the meshes in an array, then iterate over the array, calling the draw function for each element. I think you should read up on basic C/C++ tutorials. Have fun!
http://www.cplusplus.com/doc/tutorial/
Original comment:
The cost of calculating and setting a transform is much smaller than the cost of rendering a mesh, so I think there is no problem here for us to solve. For example, is your framerate low?
When thinking about performance (in computer graphics or otherwise), try to express your problem as a measurable statement, rather than guesses based on feel. Start by describing your purpose (e.g. drawing multiple meshes at a good frame rate), then describe what isn't working, then develop theories as to why, which you can then test.

Setting up a camera in OpenGL

I've been working on a game engine for a while. I started out with 2D graphics using just SDL, but I've slowly been moving towards 3D by using OpenGL. Most of the documentation I've seen about "how to get things done" uses GLUT, which I am not using.
The question is: how do I create a "camera" in OpenGL that I can move around a 3D environment and that properly displays 3D models as well as sprites (for example, a sprite with a fixed position and rotation)? What functions should I be concerned with in order to set up a camera in OpenGL, and in what order should they be called?
Here is some background information leading up to why I want an actual camera.
To draw a simple sprite, I create a GL texture from an SDL surface and draw it onto the screen at the coordinates (SpriteX - CameraX, SpriteY - CameraY). This works fine, but when moving towards actual 3D models it doesn't work quite right. The camera's location is a custom vector class (i.e. not using the standard libraries) with X, Y, Z integer components.
I have a 3D cube made up of triangles; by itself, I can draw it and rotate it, and I can actually move the cube around (although in an awkward way) by passing in the camera location when I draw the model and using the components of the location vector to calculate the model's position. Problems become evident with this approach when I rotate the model, though: the origin of the rotation isn't the model itself but seems to be the origin of the screen. Some googling tells me I need to save the location of the model, rotate it about the origin, then restore the model to its original location.
Instead of passing in the location of my camera and calculating where things should be drawn in the viewport by computing new vertices, I figured I would create an OpenGL "camera" to do this for me, so all I would need to do is pass the coordinates of my Camera object into the OpenGL camera and it would translate the view automatically. This task seems extremely easy with GLUT, but I'm not sure how to set up a camera using just OpenGL.
EDIT #1 (after some comments):
Following some suggestions, here is the update method that gets called throughout my program. It has been updated to create perspective and view matrices. All drawing happens before this is called, and a similar set of methods is executed when OpenGL starts up (minus the buffer swap). The x, y, z coordinates come straight from an instance of Camera and its location vector. If the camera was at (256, 32, 0), then 256, 32 and 0 would be passed into the Update method. Currently z is set to 0, as there is no way to change that value at the moment. The 3D model being drawn is a set of vertices/triangles plus normals at location X=320, Y=240, Z=-128. This is what is drawn in FILL mode, then in LINE mode, and again in FILL mode after I move the camera a little to the right. It looks like my normals may be the cause, but I think it has more to do with me missing something extremely important, or not completely understanding what the NEAR and FAR parameters of glFrustum actually do.
Before I implemented these changes, I was using glOrtho and the cube rendered correctly. Now if I switch back to glOrtho, one face renders (Green) and the rotation is quite weird - probably due to the translation. The cube has 6 different colors, one for each side. Red, Blue, Green, Cyan, White and Purple.
int VideoWindow::Update(double x, double y, double z)
{
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glFrustum(0.0f, GetWidth(), GetHeight(), 0.0f, 32.0f, 192.0f);

    glMatrixMode( GL_MODELVIEW );
    SDL_GL_SwapBuffers();
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glRotatef(0, 1.0f, 0.0f, 0.0f);
    glRotatef(0, 0.0f, 1.0f, 0.0f);
    glRotatef(0, 0.0f, 0.0f, 1.0f);
    glTranslated(-x, -y, 0);
    return 0;
}
EDIT FINAL:
The problem turned out to be an issue with the near and far arguments of glFrustum and the Z value of glTranslated. While changing the values fixed it, I'll probably have to learn more about the relationship between the two functions.
You need a view matrix, and a projection matrix. You can do it one of two ways:
Load the matrix yourself, using glMatrixMode() and glLoadMatrixf(), after you use your own library to calculate the matrices.
Use combinations of glMatrixMode(GL_MODELVIEW) with glTranslate() / glRotate() to create your view matrix, and glMatrixMode(GL_PROJECTION) with glFrustum() to create your projection matrix. Remember: your view matrix is the negative translation of your camera's position (it is where you move the world to, relative to the camera at the origin), combined with the inverse of any rotations applied (pitch/yaw).
Hope this helps, if I had more time I'd write you a proper example!
You have to do it using the matrix stack, as with an object hierarchy, but the camera is inside the hierarchy, so you have to put the inverse of the camera's transform on the stack before drawing the objects, since OpenGL only uses the matrix that maps from world space to camera space.
If you have not checked it already, then looking at the following project may explain in detail what "tsalter" wrote in his post.
Camera from OGL SDK (CodeColony)
Also look at the Red Book for an explanation of viewing and how the model-view and projection matrices help you create a camera. It starts with a good comparison between an actual camera and the corresponding OpenGL concepts. See Chapter 3 - Viewing.

Making an object orbit a fixed point in directx?

I am trying to make a very simple object rotate around a fixed point in 3D space.
Basically my object is created from a single D3DXVECTOR3, which indicates the current position of the object relative to a single constant point, let's just say (0, 0, 0).
I already calculate my angle based on the current in-game time of day.
But how can I apply that angle to the position so that it rotates?
Sorry, I'm pretty new to DirectX.
So are you trying to plot the sun or the moon?
If so, then presumably your celestial object is something like a sphere that has (0, 0, 0) as its center point.
Probably the easiest way to rotate it into position is to do something like the following
D3DXMATRIX matRot;
D3DXMATRIX matTrans;
D3DXMatrixRotationX( &matRot, angle );
D3DXMatrixTranslation( &matTrans, 0.0f, 0.0f, orbitRadius );
D3DXMATRIX matFinal = matTrans * matRot;
Then set that matrix as your world matrix.
What this does: it creates a rotation matrix that rotates the object by "angle" around the X axis (i.e. in the Y-Z plane), and a translation matrix that pushes the object out to the appropriate place at angle 0 (orbitRadius may be better off as the 3rd parameter of the translation call, depending on where your zero point is). The final line multiplies these two matrices together. Matrix multiplication is non-commutative (i.e. M1 * M2 != M2 * M1). The combination above moves the object orbitRadius units along the Z axis and then rotates that around the point (0, 0, 0). You can think of rotating an object held in your hand: if orbitRadius is the distance from your elbow to your hand, then any rotation around your elbow (at (0, 0, 0)) will make the hand sweep an arc through the air.
I hope that helps, but I would really recommend doing some serious reading up on Linear Algebra. The more you know the easier questions like this will be to solve yourself :)

OpenGL Rotation

I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
But the result is not a triangle rotated 90 degrees.
Edit
Hmm, thanks to Mike Haboustak: it turned out my code was calling a SetCamera function that used glOrtho. I'm too new to OpenGL to have any idea what this meant, but disabling it and rotating about the Z axis produced the desired result.
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
glMatrixMode(GL_MODELVIEW);
Otherwise, you may be modifying either the projection or a texture matrix instead.
Do you get a 1-unit straight line? A 90 degree rotation around Y is going to have you looking at the side of a triangle with no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry: the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The "accepted answer" is not fully correct: rotating around Z will not help you see this triangle unless you've done something strange prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.
Two major problems with the code as written:
Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookAt() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work.
You are drawing the triangle in a left-handed, or clockwise, order, whereas the default for OpenGL is a right-handed, or counterclockwise, coordinate system. This means that if you are culling backfaces (which you probably are not yet, but likely will be as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices: (1,1) to (3,2) to (3,1). When you do this, your thumb faces away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right-handed order, since that is the common way it is done in OpenGL.
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow.
Regarding Projection matrix, you can find a good source to start with here:
http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx
It explains a bit about how to construct one type of projection matrix. Orthographic projection is the most basic, primitive form of such a matrix: what it does is take 2 of the 3 axis coordinates and project them to the screen (you can still flip axes and scale them, but there is no warp or perspective effect).
Transformation by matrices is most likely one of the most important things in 3D rendering, and it basically involves 3 matrix stages:
Transform1 = object (model) coordinate system to world (placing the object in the right place; e.g. object translation, rotation and scale)
Transform2 = world coordinate system to camera (the view transform)
Transform3 = camera coordinate system to screen space (projecting to the screen)
The result of multiplying the 3 matrices is usually referred to as the WorldViewProjection matrix (in case you ever bump into this term), since it transforms coordinates from model space through world and camera space and finally to the screen representation.
Have fun