I have 3 transformations in the following order and with the following variables:
glTranslate(dirX, dirY, dirZ);
glRotate(angleX, 1, 0, 0);
glRotate(angleY, 0, 1, 0);
With these, I'm able to transform my ModelView matrix in 3D and achieve several effects (translate the object around in space, rotate the object around its center, zoom in and out from the origin).
With the same variables, and using gluLookAt(), I want to achieve the last two (rotate around the object's center, zoom from the origin).
target = object_position
pos.x = zoom * sin(phi) * cos(theta);
pos.y = zoom * cos(phi);
pos.z = zoom * sin(phi) * sin(theta);
pos += target;
gluLookAt(pos, target, vec3(0, 1, 0)); // up vector is fixed...
The code above creates a 'camera' that looks at the object's centre and can orbit around it (using spherical coordinates).
http://mathworld.wolfram.com/SphericalCoordinates.html
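A minimal C++ sketch of the same idea, with an illustrative function name; zoom, phi and theta (in radians) would come from your own input handling:
#include <cmath>
#include <GL/glu.h>

void placeCamera(float zoom, float phi, float theta,
                 float targetX, float targetY, float targetZ)
{
    // Spherical coordinates around the target (the object's centre).
    float posX = zoom * sinf(phi) * cosf(theta) + targetX;
    float posY = zoom * cosf(phi)               + targetY;
    float posZ = zoom * sinf(phi) * sinf(theta) + targetZ;

    // gluLookAt takes the eye position, the point looked at and the up vector.
    gluLookAt(posX, posY, posZ,
              targetX, targetY, targetZ,
              0.0, 1.0, 0.0); // up vector is fixed
}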
I'm trying to set up view and projection matrices to work with my intended world coordinates and handedness. I'm going for a left-handed coordinate system: +X to your right, +Y above you and +Z in front of you.
Y coordinates are working fine, but objects placed in front of the camera (+Z) show up behind it, so I have to turn the camera 180 degrees to see them. That was an easy fix, as flipping the view matrix's Z did it, but now objects are mirrored along X (text reads as in a mirror). I tried negating each object's Z in its model matrix and that works fine, but I feel there should be a cleaner solution.
My issue is similar to this: Inverted X axis in OpenGL, but I couldn't find a proper solution.
This is the projection matrix code.
Matrix4 BuildPerspectiveMatrix(const float32 fov, const float32 aspectRatio, const float32 nearPlane, const float32 farPlane)
{
    Matrix4 matrix;

    //Tangent of half the vertical view angle.
    const auto yScale = 1.0f / Tangent(fov * 0.5f);
    const auto far_m_near = farPlane - nearPlane;

    matrix[0][0] = yScale / aspectRatio; //xScale
    matrix[1][1] = -yScale;
    matrix[2][2] = farPlane / (nearPlane - farPlane);
    matrix[2][3] = (farPlane * nearPlane) / (nearPlane - farPlane);
    matrix[3][2] = -1.0f;
    matrix[3][3] = 0.0f;

    return matrix;
}
The scene is set up like this:
Camera is at (0, 0, 0) (center of the world), object 1 is at (0, 0, 2) (2 units forward in front of the camera) and object 2 is at (1, 0, 2) (1 unit to the right and 2 units in front of the camera).
Any help is appreciated!
Vulkan, like non-legacy OpenGL and DX 11+, is independent of any chosen "handedness"; that's an artefact of the math library you're using (if any).
As to your actual question, the matrix you're building is right-handed because you assign -1 to matrix[3][2]. The left-handed version is the same except it has 1 for that location.
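For illustration, a hedged sketch of the left-handed variant, reusing the question's own Matrix4 and Tangent helpers (assumed to behave as in the original code). Besides matrix[3][2] becoming +1, the depth scale in matrix[2][2] also changes sign if the near plane should still map to 0 and the far plane to 1:
Matrix4 BuildPerspectiveMatrixLH(const float32 fov, const float32 aspectRatio, const float32 nearPlane, const float32 farPlane)
{
    Matrix4 matrix;

    const auto yScale = 1.0f / Tangent(fov * 0.5f);

    matrix[0][0] = yScale / aspectRatio;
    matrix[1][1] = -yScale; // keep the Vulkan-style Y flip from the original
    matrix[2][2] = farPlane / (farPlane - nearPlane);
    matrix[2][3] = (farPlane * nearPlane) / (nearPlane - farPlane);
    matrix[3][2] = 1.0f; // +z_view goes into clip w: left-handed
    matrix[3][3] = 0.0f;

    return matrix;
}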
I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by projecting the ray origin and direction by the inverse of the transformations.
When I just had a model matrix (no view or projection) in the vertex shader, I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use a perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]: the coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). The window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, each point is in homogeneous coordinates and can be converted to a Cartesian coordinate by the perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by a point r and a normalized direction d, finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized();
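For reference, these steps can be combined into one helper; this is just a sketch (Eigen, C++) with illustrative parameter names. Note that (projection * view * model).inverse() equals model.inverse() * view.inverse() * projection.inverse(), computed with a single inversion:
#include <Eigen/Dense>
using Eigen::Matrix4f; using Eigen::Vector3f; using Eigen::Vector4f;

void mouseRayInModelSpace(float mouse_x, float mouse_y,
                          float window_width, float window_height,
                          const Matrix4f &model, const Matrix4f &view,
                          const Matrix4f &projection,
                          Vector3f &r, Vector3f &d)
{
    // Map window coordinates to NDC x/y in [-1, 1] (y flipped).
    float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
    float y_ndc = 1.0f - 2.0f * mouse_y / window_height;

    // Points on the near (z = -1) and far (z = 1) planes, taken back to model space.
    Matrix4f inv = (projection * view * model).inverse();
    Vector4f p_near_h = inv * Vector4f(x_ndc, y_ndc, -1.0f, 1.0f);
    Vector4f p_far_h  = inv * Vector4f(x_ndc, y_ndc,  1.0f, 1.0f);

    // Perspective divide, then build the ray.
    Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
    Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();
    r = p0;
    d = (p1 - p0).normalized();
}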
I'm trying to implement raycasting with DirectX in C++ but have run into problems. I've tried two approaches, using XMVector3Unproject and following the instructions provided here on stackoverflow a while ago.
Unproject Approach:
//p is mouse location vector
//Don't know if I need to normalize mouse input
//p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
//p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;
Vector3 orig = XMVector3Unproject(Vector3(p.x, p.y, 0),
                                  0, 0,
                                  g_nScreenWidth, g_nScreenHeight,
                                  0, 1,
                                  GameRenderer.m_matProj,
                                  GameRenderer.m_matView,
                                  GameRenderer.m_matWorld);

Vector3 dest = XMVector3Unproject(Vector3(p.x, p.y, 1),
                                  0, 0,
                                  g_nScreenWidth, g_nScreenHeight,
                                  0, 1,
                                  GameRenderer.m_matProj,
                                  GameRenderer.m_matView,
                                  GameRenderer.m_matWorld);
Vector3 direction = dest - orig;
direction.Normalize();
//shoot a bullet to visualize the ray for testing
g_cObjectManager.createObject(BUL_OBJ, "bullet", orig, direction*100);
Matrix Inversion Approach:
//Normalized device coordinates
p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;
XMVECTOR det; //Determinant, needed for matrix inverse function call
Vector3 origin = Vector3(p.x, p.y, 0);
Vector3 faraway = Vector3(p.x, p.y, 1);
XMMATRIX invViewProj = XMMatrixInverse(&det, GameRenderer.m_matView * GameRenderer.m_matProj);
Vector3 rayorigin = XMVector3Transform(origin, invViewProj);
Vector3 rayend = XMVector3Transform(faraway, invViewProj);
Vector3 raydirection = rayend - rayorigin;
raydirection.Normalize();
g_cObjectManager.createObject(BUL_OBJ, "bullet", rayorigin, raydirection * 5);
I assume at least the matrix inversion approach from stackoverflow should work, but for some reason my attempt doesn't. Do I need the world matrix as well, or is there some step I'm missing?
This is my first stackoverflow post, so if anything is unclear or more information is needed please let me know.
In the end I realized my problem originated with switching from perspective to orthographic view to draw the HUD and never resetting the view matrix. I was able to use the XMVector3Unproject() function with the identity matrix in place of the world matrix and everything works flawlessly now.
Thanks Nico Schertler for the information and reassurance!
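For anyone hitting the same issue, a hedged sketch of what the working unproject call looks like with an identity world matrix; the GameRenderer members and screen-size globals mirror the names used in the question and are assumptions here:
#include <DirectXMath.h>
using namespace DirectX;

XMMATRIX world = XMMatrixIdentity();

XMVECTOR orig = XMVector3Unproject(XMVectorSet(p.x, p.y, 0.0f, 0.0f),
                                   0.0f, 0.0f,
                                   (float)g_nScreenWidth, (float)g_nScreenHeight,
                                   0.0f, 1.0f,
                                   GameRenderer.m_matProj, // the perspective projection, not the HUD ortho
                                   GameRenderer.m_matView, // reset after drawing the HUD
                                   world);

XMVECTOR dest = XMVector3Unproject(XMVectorSet(p.x, p.y, 1.0f, 0.0f),
                                   0.0f, 0.0f,
                                   (float)g_nScreenWidth, (float)g_nScreenHeight,
                                   0.0f, 1.0f,
                                   GameRenderer.m_matProj,
                                   GameRenderer.m_matView,
                                   world);

XMVECTOR direction = XMVector3Normalize(XMVectorSubtract(dest, orig));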
I'm trying to use what many people seem to find a good approach: I call gluUnProject twice with different z-values and then calculate the direction vector for the ray from these two points.
I read this question and tried to use the structure there for my own code:
glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
But this only returns homogeneous clip coordinates (I believe), since they only range from -1 to 1 on every axis.
How to actually get coordinates from which I can create a ray?
EDIT: This is how I construct the matrices:
Matrix projectionMatrix = vecmath.perspectiveMatrix(60f, aspect, 0.1f, 100f);

//The matrix of the camera = viewMatrix
setTransformation(vecmath.lookatMatrix(eye, center, up));

//And every object sets a ModelMatrix in its display method
Matrix modelMatrix = parentMatrix.mult(vecmath.translationMatrix(translation));
modelMatrix = modelMatrix.mult(vecmath.rotationMatrix(1, 0, 1, angle));
EDIT 2:
This is how the function looks right now:
private void calcMouseInWorldPosition(float mouseX, float mouseY, Matrix proj, Matrix view) {
    Vector start = vecmath.vector(0, 0, 0);
    Vector end = vecmath.vector(0, 0, 0);

    FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16);
    modelBuffer.put(view.asArray());
    modelBuffer.rewind();
    FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16);
    projBuffer.put(proj.asArray());
    projBuffer.rewind();
    FloatBuffer startBuffer = BufferUtils.createFloatBuffer(16);
    FloatBuffer endBuffer = BufferUtils.createFloatBuffer(16);
    IntBuffer viewBuffer = BufferUtils.createIntBuffer(16);

    //The two calls for the projection and modelView matrix are disabled here,
    //as I use my own matrices in this case
    // glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
    // glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
    glGetInteger(GL_VIEWPORT, viewBuffer);

    //I know this is really ugly and bad, but I know that the height and width are always 600
    //and this is just for testing purposes
    mouseY = 600 - mouseY;

    gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
    gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);

    start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
    end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));

    direction = vecmath.vector(end.x() - start.x(), end.y() - start.y(), end.z() - start.z());
}
I'm trying to use my own projection and view matrix, but this only seems to give weirder results.
With the glGet... calls I get this for a click in the upper right corner:
start: (0.97333336, -0.98, -1.0)
end: (0.97333336, -0.98, 1.0)
When I use my own stuff I get this for the same position:
start: (-2.4399707, -0.55425626, -14.202201)
end: (-2.4399707, -0.55425626, -16.198204)
Now I actually need a modelView matrix instead of just the view matrix, but I don't know how I am supposed to get it, since it is altered and created anew in every display call of every object.
But is this really the problem? In this tutorial he says "Normally, to get into clip space from eye space we multiply the vector by a projection matrix. We can go backwards by multiplying by the inverse of this matrix." and in the next step he multiplies again by the inverse of the view matrix, so I thought this is what I should actually do?
EDIT 3:
Here I tried what user42813 suggested:
Matrix view = cam.getTransformation();
view = view.invertRigid();
mouseY = height - mouseY - 1;
//Here I only use these values, because the Z and W values would be 0
//following your suggestion, so there is no use adding them here
float tempX = view.get(0, 0) * mouseX + view.get(1, 0) * mouseY;
float tempY = view.get(0, 1) * mouseX + view.get(1, 1) * mouseY;
float tempZ = view.get(0, 2) * mouseX + view.get(1, 2) * mouseY;
origin = vecmath.vector(tempX, tempY, tempZ);
direction = cam.getDirection();
But now the direction and origin values are always the same:
origin: (-0.04557252, -0.0020000197, -0.9989586)
direction: (-0.04557252, -0.0020000197, -0.9989586)
OK, I finally managed to work this out; maybe this will help someone.
I found a formula for this and applied it to the coordinates I was getting, which ranged from -1 to 1:
float tempX = (float) (start.x() * 0.1f * Math.tan(Math.PI * 60f / 360));
float tempY = (float) (start.y() * 0.1f * Math.tan(Math.PI * 60f / 360) * height / width);
float tempZ = -0.1f;
direction = vecmath.vector(tempX, tempY, tempZ); //create new vector with these x,y,z
direction = view.transformDirection(direction);
//multiply this new vector with the INVERTED viewMatrix
origin = view.getPosition(); //set the origin to the position values of the matrix (the right column)
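A general C++ sketch of this construction (using GLM here purely for illustration), with the aspect ratio applied to x; for the question's square 600x600 window both variants give the same ray. The 60-degree field of view and 0.1 near plane match the values above:
#include <cmath>
#include <glm/glm.hpp>

// 'invView' is the inverse of the view matrix, i.e. the camera's transformation.
void cameraRay(float xNdc, float yNdc, float fovYRadians, float aspect,
               float nearPlane, const glm::mat4 &invView,
               glm::vec3 &origin, glm::vec3 &direction)
{
    float t = std::tan(fovYRadians * 0.5f);

    // Direction through the clicked point on the near plane, in camera space.
    glm::vec3 dirCam(xNdc * t * aspect * nearPlane,
                     yNdc * t * nearPlane,
                     -nearPlane);

    // Rotate the direction into world space (w = 0, so no translation is applied)
    // and take the ray origin from the camera position (the matrix's last column).
    direction = glm::normalize(glm::vec3(invView * glm::vec4(dirCam, 0.0f)));
    origin    = glm::vec3(invView[3]);
}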
I don't really use deprecated OpenGL, but I'll share my thoughts.
First, it would be helpful if you showed us how you build your View matrix.
Second, the View matrix you have is in the local space of the camera.
Now, typically you would multiply your mouseX and (ScreenHeight - mouseY - 1) by the View matrix (I think the inverse of that matrix, sorry, not sure!); then you will have the mouse coordinates in camera space. Then you add the Forward vector to the vector created by the mouse, and you have it. It would look something like this:
float mouseCoord[] = { mouseX, screen_height - mouseY - 1, 0, 0 }; /* 0, 0 because we are multiplying by a 4x4 matrix. */
mouseCoord = multiply( ViewMatrix /*Or: inverse(ViewMatrix)*/, mouseCoord );
float ray[] = add( mouseCoord, forwardVector );
I loaded an object from a .obj file.
I am trying to apply glm::rotate, glm::translate and glm::scale to it.
The movement (translation and rotation) is made using keyboard input like this:
// speed is 10
// angle starts at 0
if (keys[UP]) {
    movex += speed * sin(angle);
    movez += speed * cos(angle);
}
if (keys[DOWN]) {
    movex -= speed * sin(angle);
    movez -= speed * cos(angle);
}
if (keys[RIGHT]) {
    angle -= PI / 180;
}
if (keys[LEFT]) {
    angle += PI / 180;
}
and then, the transformations:
// use shader
glUseProgram(gl_program_shader);
// send the uniform matrices to the shader
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(model_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "view_matrix"), 1, false, glm::value_ptr(view_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "projection_matrix"), 1, false, glm::value_ptr(projection_matrix));
glm::mat4 rotate = glm::rotate(model_matrix, angle, glm::vec3(0, 1, 0));
glm::mat4 translate = glm::translate(model_matrix, glm::vec3(RADIUS + movex, 0, -RADIUS + movez));
glm::mat4 scale = glm::scale(model_matrix, glm::vec3(20.0, 20.0, 20.0));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(translate * scale * rotate));
glBindVertexArray(ironMan->vao);
glDrawElements(GL_TRIANGLES, ironMan->num_indices, GL_UNSIGNED_INT, 0);
If I don't use KEY_RIGHT / KEY_LEFT, it works as expected (which means that it translates the object forward and backward).
If I use them, it rotates around its own center, but when I press KEY_UP / KEY_DOWN, it translates... well, not as expected.
I've added a case in the notifyKeyPressed() function to print out the movex, movey and angle values, and it seems that the angle is not what it should be:
when the IronMan has its back to me, angle = 0 - this is the starting point;
when the IronMan faces to the right, the angle should be around -PI / 2 (-1.57), but it is around -80/-90.
I thought this was because of the glm::scale, but I changed the values and nothing changed.
Any idea why is the angle acting like this?
Your rotation center stays at the origin: you need to translate the rotation center of your object to the origin, apply your scale and rotation, translate back, then translate.
EDIT
More importantly, and related to your exact problem: glm::rotate works in degrees.
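Putting both points together, a minimal sketch of the order described above (names are illustrative, not from the question). Older GLM expects the angle in degrees here unless GLM_FORCE_RADIANS is defined; newer GLM expects radians:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 buildModelMatrix(const glm::vec3 &center, const glm::vec3 &movement, float angle)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, movement + center);        // translate back, then translate
    m = glm::rotate(m, angle, glm::vec3(0, 1, 0));   // rotate about the object's own centre
    m = glm::scale(m, glm::vec3(20.0f));             // same uniform scale as in the question
    m = glm::translate(m, -center);                  // move the rotation centre to the origin first
    return m;
}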