Direct3D (v10) Multiple World Transformations - C++

In my code I am trying to apply two (at the moment; probably more in the future) matrix transformations to my world matrix.
Like so:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
where rotation is a changing float and worldMatrix is a D3DXMATRIX. My problem is that only the last of the two transformation calls takes effect: in the code above, worldMatrix gets translated but not rotated, and if I swap the two statements, it gets rotated but not translated. However, I played around with it, and this code works just fine:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMATRIX temp = worldMatrix;
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
worldMatrix *= temp;
After this, the worldMatrix is translated and rotated. Why doesn't it work if I only use the variables and don't include the temp matrix? Thank you!!

D3DXMatrixTranslation takes an output parameter as its first argument. The created translation matrix is written to that matrix, overwriting whatever elements it already held; the call does not multiply onto the existing matrix.
Your new code is fine; you could also write it like this (with D3DX's row-vector convention, rot * trans applies the rotation first, then the translation):
D3DXMATRIX rot;
D3DXMATRIX trans;
D3DXMatrixRotationY(&rot, rotation);
D3DXMatrixTranslation(&trans, 0.0f, -1.0f, 0.0f);
D3DXMATRIX world = rot * trans;
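If you prefer an explicit API call over the overloaded operator (operator* wraps the same function), the last line could instead be written with D3DXMatrixMultiply:
D3DXMATRIX world;
// world = rot * trans, written as an explicit call.
D3DXMatrixMultiply(&world, &rot, &trans);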

Related

Create an OpenGL 2D view camera using model-view-projection matrices

I am trying to create a 2D, top-down style camera in OpenGL. I would like to stick to the convention of using model-view-projection matrices so I can switch between a 3D view and a top-down view while the application runs. I am using the glm::lookAt function to create the view matrix.
However, there is something missing in my understanding. I am rendering a triangle on the screen, [very close to this tutorial][1], and that works perfectly fine (so no problems with windowing, display loops, vertex buffers, etc.). The triangle is centered at (0, 0), and its vertices are at ±0.5 (so already in NDC).
I then added a uniform mat4 mpv; to the vertex shader. If I set the mpv matrix to:
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
I get the same, unmodified triangle, as expected, since these are (from my understanding) the default values for OpenGL.
Now I thought that if I changed the Z value of the camera position, it would have the same effect as zooming in and out; however, all I get is the clear color, and no triangle is rendered.
// Trying to simulate zoom in and out by changing z value of camera
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -3.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
So I printed the view matrix, and noticed that all I was doing was translating the Z value, which makes sense.
I then added an ortho projection matrix, to make sure everything is in NDC, but I still get nothing.
// *2 because I'm on a Mac/high-res screen and the framebuffer scale is 2.
// Doing projection * view in one step and just updating view uniform until I get it working.
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
Where is my misunderstanding taking place? I would like to:
Simulate a top down view where I can zoom in and out on the target.
Create a 2D camera that follows a target (racing car), so the camera_pos XY and target_pos XY will be the same.
Eventually add an option to switch to a 3D following camera, like a standard racing game 3rd person view, hence the MPV vs just using simple translations.
[1]: https://learnopengl.com/Getting-started/Hello-Triangle
The vertex coordinates are in the range [-0.5, 0.5], but the orthographic projection maps the cuboid volume with the left, bottom, near corner (0, 0, 0.1) and the right, top, far corner (800.0 * 2, 600.0 * 2, 100) onto the viewport.
Therefore, the triangle mesh just covers one fragment in the lower left of the viewport.
Change the orthographic projection from
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
to
view = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f) * view;
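As for the zoom: with an orthographic projection, moving the camera along Z does not change the size of anything, so zoom by scaling the bounds of the orthographic volume instead. A minimal sketch (the zoom factor and the symmetric [-1, 1] bounds are assumptions, not from the question):
float zoom = 2.0f; // > 1 zooms in, < 1 zooms out
// Shrinking the visible extent makes on-screen objects larger.
view = glm::ortho(-1.0f / zoom, 1.0f / zoom, -1.0f / zoom, 1.0f / zoom, 0.1f, 100.0f) * view;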

Incorrect ray direction from inverse VP matrix and camera position

I have a problem with my ray generation that I do not understand: the direction of my ray is computed wrongly. I ported this code from DirectX 11, where it works fine, to Vulkan, so I was surprised I could not get it to work:
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - camPos.xyz);
Yet this code works perfectly:
vec4 nearPos = inverseViewProj * vec4(screenPos, 0, 1);
nearPos /= nearPos.w;
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - nearPos.xyz);
[Edit] Matrix and camera positions are set like this:
const glm::mat4 clip(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 0.0f, 0.5f, 1.0f);
projMatrix = clip * glm::perspectiveFov(FieldOfView, float(ViewWidth), float(ViewHeight), NearZ, FarZ);
viewMatrix = glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = viewMatrix[3];
[Edit2] What I see on screen is correct if I start at the origin. However, if I move left, for example, it looks as if I am moving right. All my rays seem to be perturbed. In some cases, strafing the camera looks as if I am moving around a different point in space. I assume the camera position is not equal to the singularity of my perspective matrix, yet I cannot figure out why.
I think I am misunderstanding something basic. What am I missing?
Thanks to the comments I have found the problem. I was building my view matrix incorrectly, in the exact same way as in this post:
glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
This is equivalent to translating first and then rotating, which of course leads to something unwanted. In addition, Position was negated, and camPos was taken from the last column of the view matrix instead of the inverse view matrix, which is wrong.
It was not noticeable with my fractal raycaster simply because I never moved far away from the origin. That, and the fact that there is no point of reference in such an environment.
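For reference, a corrected construction might look like this (a sketch, assuming Rotation is a glm::quat and Position is the camera's world-space position):
// Build the camera's world transform (rotate, then place at Position),
// and invert it to get the view matrix.
glm::mat4 camWorld = glm::translate(glm::mat4(1.0f), Position) * glm::toMat4(Rotation);
viewMatrix = glm::inverse(camWorld);
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = glm::vec4(Position, 1.0f); // the camera position is simply Position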

Moving Camera on X axis with glm::lookAt()

I'm trying to make a 2D camera in OpenGL using the function glm::lookAt(). The problem is that once everything is rendered, I can't move the camera. I'm only trying to move it horizontally.
glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(this->Width), static_cast<GLfloat>(this->Height), 0.0f, 0.1f, 500.0f);
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
You declare cameraPos as a local every frame, so any change you make to it is lost on the next frame. Store it outside the per-frame code (or make it static). So, something like:
void Update()
{
static glm::vec3 cameraPos(0,0,-1);
cameraPos.x += 0.1f;
... etc
}
(Though, I'd suggest creating a Camera class, or at least storing the vec3 outside of this method, for anything beyond experimentation.) Everything else should work fine.
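A minimal sketch of such a Camera class (the names are assumptions, mirroring the variables from the question):
struct Camera
{
    glm::vec3 position{0.0f, 0.0f, 3.0f};
    glm::vec3 front{0.0f, 0.0f, -1.0f};
    glm::vec3 up{0.0f, 1.0f, 0.0f};

    // Rebuild the view matrix from the current state each frame.
    glm::mat4 View() const { return glm::lookAt(position, position + front, up); }
};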
Typically, you'd also want to take into account the time difference between frames. There are many ways to measure this - whatever GL framework you're using probably provides a function for it - but you're probably running at 60 Hz, so assume the time between frames is 16.6 ms. In which case, you might do something like this:
float velocity = 10; // units per second
glm::vec3 cameraPos(0,0,-3);
float deltaT = 16.6e-3f; // 16.6 milliseconds
void Update()
{
cameraPos.x += velocity * deltaT;
glm::lookAt(cameraPos, .....);
}
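If you happen to be using GLFW (an assumption; any GL framework provides an equivalent timer), you can measure the real frame time instead of assuming 16.6 ms:
double lastTime = glfwGetTime(); // seconds since GLFW was initialized
void Update()
{
    double now = glfwGetTime();
    float deltaT = float(now - lastTime); // real seconds since the last frame
    lastTime = now;
    cameraPos.x += velocity * deltaT;
}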
You might run into a situation where everything renders the first frame, then disappears. In that case, drop the velocity to zero to make sure everything is working as it was before, then try some really small velocity values (like 0.001). It depends on how big your geometry is, its distance from the camera, and a bunch of other stuff. A camera Z of -3 is pretty close; you might try backing it off some more while you're working this out.
Good luck!

OpenGL convert/translate/normalize coordinates?

I am looking for examples of (non-deprecated) OpenGL code that actually show, step by step, how coordinates are converted from start to finish in 2D, using non-standard coordinates, i.e.
GLfloat Square[] =
{
-5.5f, -5.0f, 0.0f, 1.0f,
-5.5f, 5.0f, 0.0f, 1.0f,
5.5f, 5.0f, 0.0f, 1.0f,
5.5f, -5.0f, 0.0f, 1.0f
};
And then, taking coordinates such as the above, convert them into the necessary coordinates so they map to [-1, 1] and can be output to the screen.
Every example and tutorial I have found only uses coordinates already in the range -1 to 1.
Environment: C++.
Well, if you use glm (http://glm.g-truc.net/index.html), you can just use glm::ortho to create a projection matrix in the same way you would use glOrtho in old-style OpenGL.
E.g.:
glm::mat4 viewMatrix = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, 0.0f, 1.0f);
If you then plug that into your shader you should get a mapping from -5 to +5 instead of -1 to +1, or whatever scale it is that you want.
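For reference, plugging the matrix into the shader could look like this (a sketch; shaderProgram and the uniform name "viewMatrix" are assumptions):
#include <glm/gtc/type_ptr.hpp>

// Upload the matrix to the named uniform of the linked shader program.
GLint loc = glGetUniformLocation(shaderProgram, "viewMatrix");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(viewMatrix));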
When you use no transformations in your project, you simply need to use coordinates that lie in normalized device space: a box ranging from -1 to 1 on each axis.
In that case, the vertex shader will contain a line similar to this:
gl_Position = attribVertexPos; // no transformation
For 2D, if you want to provide coordinates in a different range, all you need to do is scale the position:
for the range -10 to 10, use a vertex shader with the code gl_Position = vec4(attribVertexPos.xyz * 0.1, 1.0); (scale only xyz; scaling w as well would cancel the effect after the perspective divide)
Or you can build a scaling matrix and use that instead, as shown below.
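A sketch of such a matrix with glm, for the same -10 to 10 range:
// Scales x, y and z by 0.1 (w stays 1), mapping [-10, 10] onto [-1, 1].
glm::mat4 scale = glm::scale(glm::mat4(1.0f), glm::vec3(0.1f));
Multiply the vertex position by this matrix in the vertex shader, just like any other transformation matrix.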

Changing From Perspective to Orthographic Matrix

I have a scene with one simple triangle, and I am using perspective projection. I have my MVP matrix set up (with the help of GLM) like this:
glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 View = glm::lookAt(
glm::vec3(0,0,5), // Camera is at (0,0,5), in World Space
glm::vec3(0,0,0), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
glm::mat4 Model = glm::mat4(1.0f);
glm::mat4 MVP = Projection * View * Model;
And it all works OK; I can change the camera values and the triangle is still displayed properly.
But I want to use an orthographic projection. When I change the projection matrix to orthographic, it behaves unpredictably: I can't see the triangle, or I see just one small part of it in the corner of the screen. To use the orthographic projection, I do this:
glm::mat4 Projection = glm::ortho( 0.0f, 800.0f, 600.0f, 0.0f,-5.0f, 5.0f);
while I don't change anything in the View and Model matrices. It just doesn't work properly.
I just need a push in the right direction: am I doing something wrong? What am I missing? What should I do to properly set up the orthographic projection?
PS: I don't know if it's needed, but these are the coordinates of the triangle:
static const GLfloat g_triangle[] = {
-1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 2.0f, 0.0f,
};
Your triangle is only a couple of units across. If you use an orthographic projection matrix that is 800 x 600 units in size, it is natural that your triangle appears very small. Just decrease the bounds of the orthographic matrix and make sure that the triangle is inside this area (e.g. the first vertex is currently outside the view, because its x-coordinate is less than 0).
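For example, a sketch (the exact bounds are an assumption, chosen to contain the triangle and keep the 4:3 aspect ratio of the perspective version):
// An 8 x 6 unit orthographic volume centered on the origin easily
// contains the triangle (x in [-1, 1], y in [0, 2]).
glm::mat4 Projection = glm::ortho(-4.0f, 4.0f, -3.0f, 3.0f, 0.1f, 100.0f);
With the camera at (0, 0, 5), the triangle sits 5 units in front of it, comfortably inside the 0.1 to 100 depth range.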
Furthermore, make sure that your triangle is not erased by backface culling or z-clipping. By the way, negative values for zNear are a bit unusual, but they work for orthographic projections.