I'm trying to implement raycasting with DirectX in C++ but have run into problems. I've tried two approaches: using XMVector3Unproject, and following the instructions provided in an older Stack Overflow answer.
Unproject Approach:
//p is mouse location vector
//Don't know if I need to normalize mouse input
//p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
//p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;
Vector3 orig = XMVector3Unproject(Vector3(p.x, p.y, 0),
0,
0,
g_nScreenWidth,
g_nScreenHeight,
0,
1,
GameRenderer.m_matProj,
GameRenderer.m_matView,
GameRenderer.m_matWorld);
Vector3 dest = XMVector3Unproject(Vector3(p.x, p.y, 1),
0,
0,
g_nScreenWidth,
g_nScreenHeight,
0,
1,
GameRenderer.m_matProj,
GameRenderer.m_matView,
GameRenderer.m_matWorld);
Vector3 direction = dest - orig;
direction.Normalize();
//shoot a bullet to visualize the ray for testing
g_cObjectManager.createObject(BUL_OBJ, "bullet", orig, direction*100);
Matrix Inversion Approach:
//Normalized device coordinates
p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;
XMVECTOR det; //Determinant, needed for matrix inverse function call
Vector3 origin = Vector3(p.x, p.y, 0);
Vector3 faraway = Vector3(p.x, p.y, 1);
XMMATRIX invViewProj = XMMatrixInverse(&det, GameRenderer.m_matView * GameRenderer.m_matProj);
Vector3 rayorigin = XMVector3Transform(origin, invViewProj);
Vector3 rayend = XMVector3Transform(faraway, invViewProj);
Vector3 raydirection = rayend - rayorigin;
raydirection.Normalize();
g_cObjectManager.createObject(BUL_OBJ, "bullet", rayorigin, raydirection * 5);
I assume at least the matrix inversion approach from Stack Overflow should work, but for some reason my attempt doesn't. Do I need the world matrix as well, or is there some step I'm missing?
This is my first Stack Overflow post, so if anything is unclear or more information is needed, please let me know.
In the end I realized my problem originated with switching from perspective to orthographic view to draw the HUD and never resetting the view matrix. I was able to use the XMVector3Unproject() function with the identity matrix in place of the world matrix and everything works flawlessly now.
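For reference, here is a minimal sketch of what the fixed call can look like, with the identity matrix standing in for the world matrix (same DirectXTK SimpleMath types as in the snippets above; this is just an illustration, not the exact project code):
//Sketch only: unproject near and far points with an identity world matrix
Vector3 orig = XMVector3Unproject(Vector3(p.x, p.y, 0),
0,
0,
g_nScreenWidth,
g_nScreenHeight,
0,
1,
GameRenderer.m_matProj,
GameRenderer.m_matView,
Matrix::Identity);
Vector3 dest = XMVector3Unproject(Vector3(p.x, p.y, 1),
0,
0,
g_nScreenWidth,
g_nScreenHeight,
0,
1,
GameRenderer.m_matProj,
GameRenderer.m_matView,
Matrix::Identity);
Vector3 direction = dest - orig;
direction.Normalize();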
Thanks Nico Schertler for the information and reassurance!
I'm encountering a problem trying to replicate the OpenGL behaviour in an environment without OpenGL.
Basically I need to create an SVG file from a list of lines my program creates. These lines are created using an orthographic projection.
I'm sure that these lines are calculated correctly, because if I use them in an OpenGL context with an orthographic projection and save the result into an image, the image is correct.
The problem arises when I use exactly the same lines without OpenGL.
I've replicated the OpenGL projection and view matrices and I process every line point like this:
3D_output_point = projection_matrix * view_matrix * 3D_input_point
and then I calculate its screen (SVG file) position like this:
2D_point_x = (windowWidth / 2) * 3D_point_x + (windowWidth / 2)
2D_point_y = (windowHeight / 2) * 3D_point_y + (windowHeight / 2)
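A minimal C++ sketch of that viewport mapping, including the divide by w (a no-op for a pure orthographic projection) and the Y flip that SVG's downward-pointing y axis needs; the type and variable names here are my own:
struct Vec4 { float x, y, z, w; };
struct Vec2 { float x, y; };
//Sketch only: map a projected point to SVG pixel coordinates
Vec2 ndcToSvg(const Vec4& clip, float windowWidth, float windowHeight)
{
float ndcX = clip.x / clip.w; //w is 1 for a pure orthographic projection
float ndcY = clip.y / clip.w;
Vec2 p;
p.x = (windowWidth / 2) * ndcX + (windowWidth / 2);
p.y = (windowHeight / 2) * (-ndcY) + (windowHeight / 2); //flip Y: SVG y grows downwards
return p;
}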
I calculate the orthographic projection matrix like this:
float range = 700.0f;
float l, t, r, b, n, f;
l = -range;
r = range;
b = -range;
t = range;
n = -6000;
f = 8000;
matProj.SetValore(0, 0, 2.0f / (r - l));
matProj.SetValore(0, 1, 0.0f);
matProj.SetValore(0, 2, 0.0f);
matProj.SetValore(0, 3, 0.0f);
matProj.SetValore(1, 0, 0.0f);
matProj.SetValore(1, 1, 2.0f / (t - b));
matProj.SetValore(1, 2, 0.0f);
matProj.SetValore(1, 3, 0.0f);
matProj.SetValore(2, 0, 0.0f);
matProj.SetValore(2, 1, 0.0f);
matProj.SetValore(2, 2, (-1.0f) / (f - n));
matProj.SetValore(2, 3, 0.0f);
matProj.SetValore(3, 0, -(r + l) / (r - l));
matProj.SetValore(3, 1, -(t + b) / (t - b));
matProj.SetValore(3, 2, -n / (f - n));
matProj.SetValore(3, 3, 1.0f);
and the view matrix this way:
CVettore position, lookAt, up;
position.AssegnaCoordinate(rtRay->m_pCam->Vp.x, rtRay->m_pCam->Vp.y, rtRay->m_pCam->Vp.z);
lookAt.AssegnaCoordinate(rtRay->m_pCam->Lp.x, rtRay->m_pCam->Lp.y, rtRay->m_pCam->Lp.z);
up.AssegnaCoordinate(rtRay->m_pCam->Up.x, rtRay->m_pCam->Up.y, rtRay->m_pCam->Up.z);
up[0] = -up[0];
up[1] = -up[1];
up[2] = -up[2];
CVettore zAxis, xAxis, yAxis;
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - position[0];
zAxis[1] = lookAt[1] - position[1];
zAxis[2] = lookAt[2] - position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * position[0]) + (xAxis[1] * position[1]) + (xAxis[2] * position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * position[0]) + (yAxis[1] * position[1]) + (yAxis[2] * position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * position[0]) + (zAxis[1] * position[1]) + (zAxis[2] * position[2])) * -1.0f;
// Set the computed values in the view matrix.
matView.SetValore(0, 0, xAxis[0]);
matView.SetValore(0, 1, yAxis[0]);
matView.SetValore(0, 2, zAxis[0]);
matView.SetValore(0, 3, 0.0f);
matView.SetValore(1, 0, xAxis[1]);
matView.SetValore(1, 1, yAxis[1]);
matView.SetValore(1, 2, zAxis[1]);
matView.SetValore(1, 3, 0.0f);
matView.SetValore(2, 0, xAxis[2]);
matView.SetValore(2, 1, yAxis[2]);
matView.SetValore(2, 2, zAxis[2]);
matView.SetValore(2, 3, 0.0f);
matView.SetValore(3, 0, result1);
matView.SetValore(3, 1, result2);
matView.SetValore(3, 2, result3);
matView.SetValore(3, 3, 1.0f);
The results I get from OpenGL and from the SVG output are quite different, but after two days I still haven't come up with a solution.
This is the OpenGL output
And this is my SVG output
As you can see, its rotation isn't correct.
Any idea why? The line points are the same and the matrices too, hopefully.
Passing the matrices I was creating didn't work. I mean, the matrices were probably wrong, because OpenGL didn't show anything.
So I tried doing the opposite: I created the matrices in OpenGL and used them with my code. The result is better, but not perfect yet.
Now I think I'm doing something wrong when mapping the 3D points to 2D screen points, because the points I get are inverted in Y and I still have some lines that don't match perfectly.
This is what I get using the OpenGL matrices and my previous approach to map 3D points to 2D screen space (this is the SVG, not OpenGL render):
Ok this is the content of the view matrix I get from OpenGL:
This is the projection matrix I get from OpenGL:
And this is the result I get with those matrices and by changing my 2D point Y coordinate calculation like bofjas said:
It looks like some rotations are missing. My camera has a rotation of 30° on both the X and Y axes, and it looks like they're not computed correctly.
Now I'm using the same matrices OpenGL does. So I think that I'm doing some wrong calculations when I map the 3D point into 2D screen coordinates.
Rather than debugging your own code, you can use transform feedback to compute the projections of your lines using the OpenGL pipeline. Instead of rasterizing them on the screen, you can capture them in a memory buffer and save them directly to the SVG afterwards. Setting this up is a bit involved and depends on the exact setup of your OpenGL code path, but it might be a simpler solution.
As per your own code, it looks like you either mixed x and y coordinates somewhere, or row-major and column-major matrices.
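A rough sketch of the transform-feedback capture suggested above, assuming a shader-based path whose vertex shader writes a user output such as outClipPos (all names here are placeholders, and an OpenGL loader plus <vector> are assumed to be included):
//Sketch only: capture the projected line vertices instead of rasterizing them
const GLchar* varyings[] = { "outClipPos" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog); //must (re)link after declaring the varyings
GLuint tfBuffer = 0;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, lineCount * 2 * 4 * sizeof(GLfloat), NULL, GL_STATIC_READ);
glEnable(GL_RASTERIZER_DISCARD); //nothing needs to reach the screen
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_LINES);
glDrawArrays(GL_LINES, 0, lineCount * 2);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
//Read the clip-space vertices back, divide each by its w, then write the SVG
std::vector<GLfloat> clip(lineCount * 2 * 4);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, clip.size() * sizeof(GLfloat), clip.data());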
I've solved this problem in a really simple way. Since drawing with OpenGL works, I just created the matrices in OpenGL and then retrieved them with glGet(). Using those matrices, everything is OK.
You're looking for a specialized version of orthographic (oblique) projections called isometric projections. The math is really simple if you want to know what's inside the matrix. Have a look at Wikipedia.
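If you do want the classic isometric numbers, here is a minimal sketch (my own illustration, not code from the question): rotate 45° about the vertical axis, then about the horizontal axis by atan(1/√2) ≈ 35.264°, then drop z.
#include <cmath>
//Sketch only: classic isometric projection of a 3D point onto 2D
void isometricProject(float x, float y, float z, float& outX, float& outY)
{
const float ay = 0.785398163f; //45 degrees about the Y axis
const float ax = std::atan(1.0f / std::sqrt(2.0f)); //~35.264 degrees about the X axis
//Rotate about Y
float rx = x * std::cos(ay) + z * std::sin(ay);
float rz = -x * std::sin(ay) + z * std::cos(ay);
//Rotate about X, then keep only x and y (the orthographic drop of z)
outX = rx;
outY = y * std::cos(ax) - rz * std::sin(ax);
}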
OpenGL loads matrices in column-major order (the opposite of a C/C++ row-major array). For example, this matrix:
[1 ,2 ,3 ,4 ,
5 ,6 ,7 ,8 ,
9 ,10,11,12,
13,14,15,16]
loads this way in memory:
1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, 16
So I suppose you should transpose those matrices from OpenGL (if you're doing it row-major).
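As a minimal sketch (the helper name is mine), transposing a flat row-major 4×4 array into OpenGL's column-major layout could look like this:
//Sketch only: convert a row-major 4x4 float array to column-major order
void transpose4x4(const float rowMajor[16], float columnMajor[16])
{
for (int row = 0; row < 4; ++row)
for (int col = 0; col < 4; ++col)
columnMajor[col * 4 + row] = rowMajor[row * 4 + col];
}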
I'm trying to use what many people seem to consider a good approach: I call gluUnProject twice with different z-values and then calculate the ray's direction vector from the two resulting points.
I read this question and tried to use the structure there for my own code:
glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
But this only returns the Homogeneous Clip Coordinates (I believe), since they only range from -1 to 1 on every axis.
How to actually get coordinates from which I can create a ray?
EDIT: This is how I construct the matrices:
Matrix projectionMatrix = vecmath.perspectiveMatrix(60f, aspect, 0.1f,
100f);
//The matrix of the camera = viewMatrix
setTransformation(vecmath.lookatMatrix(eye, center, up));
//And every object sets a ModelMatrix in its display method
Matrix modelMatrix = parentMatrix.mult(vecmath
.translationMatrix(translation));
modelMatrix = modelMatrix.mult(vecmath.rotationMatrix(1, 0, 1, angle));
EDIT 2:
This is how the function looks right now:
private void calcMouseInWorldPosition(float mouseX, float mouseY, Matrix proj, Matrix view) {
Vector start = vecmath.vector(0, 0, 0);
Vector end = vecmath.vector(0, 0, 0);
FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16);
modelBuffer.put(view.asArray());
modelBuffer.rewind();
FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16);
projBuffer.put(proj.asArray());
projBuffer.rewind();
FloatBuffer startBuffer = BufferUtils.createFloatBuffer(16);
FloatBuffer endBuffer = BufferUtils.createFloatBuffer(16);
IntBuffer viewBuffer = BufferUtils.createIntBuffer(16);
//The two calls for projection and modelView matrix are disabled here,
//as I use my own matrices in this case
// glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
// glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
//I know this is really ugly and bad, but I know that the height and width are always 600
// and this is just for testing purposes
mouseY = 600 - mouseY;
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
}
I'm trying to use my own projection and view matrix, but this only seems to give weirder results.
With the GlGet... stuff I get this for a click in the upper right corner:
start: (0.97333336, -0.98, -1.0)
end: (0.97333336, -0.98, 1.0)
When I use my own stuff I get this for the same position:
start: (-2.4399707, -0.55425626, -14.202201)
end: (-2.4399707, -0.55425626, -16.198204)
Now I actually need a modelView matrix instead of just the view matrix, but I don't know how I am supposed to get it, since it is altered and created anew in every display call of every object.
But is this really the problem? In this tutorial he says "Normally, to get into clip space from eye space we multiply the vector by a projection matrix. We can go backwards by multiplying by the inverse of this matrix." and in the next step he multiplies again by the inverse of the view matrix, so I thought this is what I should actually do?
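For what it's worth, here is a minimal C++/GLM sketch of that "inverse projection, then inverse view" idea (equivalently, multiplying by the inverse of the combined view-projection matrix and then dividing by w); it is only meant to illustrate the math, not to replace the Java code above:
#include <glm/glm.hpp>
//Sketch only: build a picking ray from normalized device coordinates in [-1, 1]
void buildRay(float ndcX, float ndcY,
const glm::mat4& proj, const glm::mat4& view,
glm::vec3& origin, glm::vec3& direction)
{
glm::mat4 invVP = glm::inverse(proj * view);
glm::vec4 nearPoint = invVP * glm::vec4(ndcX, ndcY, -1.0f, 1.0f); //on the near plane
glm::vec4 farPoint = invVP * glm::vec4(ndcX, ndcY, 1.0f, 1.0f); //on the far plane
nearPoint /= nearPoint.w; //perspective divide
farPoint /= farPoint.w;
origin = glm::vec3(nearPoint);
direction = glm::normalize(glm::vec3(farPoint - nearPoint));
}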
EDIT 3:
Here I tried what user42813 suggested:
Matrix view = cam.getTransformation();
view = view.invertRigid();
mouseY = height - mouseY - 1;
//Here I only use these values, because the Z and W values would be 0
//following your suggestion, so there's no use adding them here
float tempX = view.get(0, 0) * mouseX + view.get(1, 0) * mouseY;
float tempY = view.get(0, 1) * mouseX + view.get(1, 1) * mouseY;
float tempZ = view.get(0, 2) * mouseX + view.get(1, 2) * mouseY;
origin = vecmath.vector(tempX, tempY, tempZ);
direction = cam.getDirection();
But now the direction and origin values are always the same:
origin: (-0.04557252, -0.0020000197, -0.9989586)
direction: (-0.04557252, -0.0020000197, -0.9989586)
OK, I finally managed to work this out; maybe this will help someone.
I found a formula for this and applied it to the coordinates I was getting, which ranged from -1 to 1:
float tempX = (float) (start.x() * 0.1f * Math.tan(Math.PI * 60f / 360));
float tempY = (float) (start.y() * 0.1f * Math.tan(Math.PI * 60f / 360) * height / width);
float tempZ = -0.1f;
direction = vecmath.vector(tempX, tempY, tempZ); //create new vector with these x,y,z
direction = view.transformDirection(direction); //multiply this new vector by the inverted viewMatrix
origin = view.getPosition(); //set the origin to the position values of the matrix (the right column)
I don't really use deprecated OpenGL, but I'll share my thoughts.
First, it would be helpful if you showed us how you build your view matrix.
Second, the view matrix you have is in the local space of the camera.
Typically you would multiply your mouseX and (ScreenHeight - mouseY - 1) by the view matrix (I think by the inverse of that matrix, sorry, not sure!); then you will have the mouse coordinates in camera space. Then you add the forward vector to the vector created by the mouse, and you have your ray. It would look something like this:
float mouseCoord[] = { mouseX, screen_height - mouseY - 1, 0, 0 }; /* 0, 0 because we're multiplying by a 4x4 matrix. */
mouseCoord = multiply( ViewMatrix /*Or: inverse(ViewMatrix)*/, mouseCoord );
float ray[] = add( mouseCoord, forwardVector );
I am trying to understand what the following code does:
glm::mat4 Projection = glm::perspective(35.0f, 1.0f, 0.1f, 100.0f);
Does it create a projection matrix? Does it clip off anything that is not in the user's view?
I wasn't able to find anything on the API page, and the only thing I could find in the pdf on their website was this:
gluPerspective:
glm::mat4 perspective(float fovy, float aspect, float zNear, float zFar);
glm::dmat4 perspective(double fovy, double aspect, double zNear, double zFar);
From GLM_GTC_matrix_transform extension: <glm/gtc/matrix_transform.hpp>
But it doesn't explain the parameters. Maybe I missed something.
It creates a projection matrix, i.e. the matrix that describes the set of linear equations that transforms vectors from eye space into clip space. Matrices really are not black magic. In the case of OpenGL they happen to be a 4-by-4 arrangement of numbers:
X_x Y_x Z_x T_x
X_y Y_y Z_y T_y
X_z Y_z Z_z T_z
X_w Y_w Z_w W_w
You can multiply a 4-vector by a 4×4 matrix:
v' = M * v
v'_x = M_xx * v_x + M_yx * v_y + M_zx * v_z + M_tx * v_w
v'_y = M_xy * v_x + M_yy * v_y + M_zy * v_z + M_ty * v_w
v'_z = M_xz * v_x + M_yz * v_y + M_zz * v_z + M_tz * v_w
v'_w = M_xw * v_x + M_yw * v_y + M_zw * v_z + M_tw * v_w
After reaching clip space (i.e. after the projection step), the primitives are clipped. The vertices resulting from the clipping then undergo the perspective divide, i.e.
v'_x = v_x / v_w
v'_y = v_y / v_w
v'_z = v_z / v_w
( v_w = 1 = v_w / v_w )
And that's it. There's really nothing more going on in all those transformation steps than ordinary matrix-vector multiplication.
Now the cool thing about this is that matrices can be used to describe the relative alignment of one coordinate system within another. What the perspective transform does is let the vertices' z-values "slip" into their projected w-values as well. Through the perspective divide, a non-unity w then "distorts" the vertex coordinates: vertices with small z are divided by a small w, so their coordinates "blow up", whereas vertices with large z are "squeezed", which is what causes the perspective effect.
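As a small, self-contained illustration of the multiply-then-divide steps described above (note that newer GLM versions expect the field of view in radians, while older ones took degrees):
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
int main()
{
glm::mat4 Projection = glm::perspective(glm::radians(35.0f), 1.0f, 0.1f, 100.0f);
glm::vec4 eyePoint(0.5f, 0.5f, -10.0f, 1.0f); //a point in front of the camera
glm::vec4 clip = Projection * eyePoint; //matrix * vector: into clip space
glm::vec3 ndc = glm::vec3(clip) / clip.w; //perspective divide
std::printf("ndc = (%f, %f, %f)\n", ndc.x, ndc.y, ndc.z);
return 0;
}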
This is a standalone C version of the same function, roughly a copy-paste of the original.
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h> /* for bzero */
typedef struct s_mat {
float *array;
int width;
int height;
} t_mat;
t_mat *mat_new(int width, int height)
{
t_mat *to_return;
to_return = (t_mat*)malloc(sizeof(t_mat));
to_return->array = malloc(width * height * sizeof(float));
to_return->width = width;
to_return->height = height;
return (to_return);
}
void mat_zero(t_mat *dest)
{
bzero(dest->array, dest->width * dest->height * sizeof(float));
}
void mat_set(t_mat *m, int x, int y, float val)
{
if (m == NULL || x > m->width || y > m->height)
return ;
m->array[m->width * (y - 1) + (x - 1)] = val;
}
t_mat *mat_perspective(float angle, float ratio,
float near, float far)
{
t_mat *to_return;
float tan_half_angle;
to_return = mat_new(4, 4);
mat_zero(to_return);
tan_half_angle = tan(angle / 2);
mat_set(to_return, 1, 1, 1 / (ratio * tan_half_angle));
mat_set(to_return, 2, 2, 1 / (tan_half_angle));
mat_set(to_return, 3, 3, -(far + near) / (far - near));
mat_set(to_return, 4, 3, -1);
mat_set(to_return, 3, 4, -(2 * far * near) / (far - near));
return (to_return);
}
I'm new to C++ 3D programming, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D?
You map 3D to 2D via projection. You map 2D to 3D by inserting the appropriate value in the Z element of the vector.
It is a matter of casting a ray from the screen onto a plane which is parallel to the x-y plane and lies at the required z location. You then need to find out where the ray intersects that plane.
Here's one example, considering that screen_x and screen_y range over [0, 1], where 0 is the left-most or top-most coordinate and 1 is the right-most or bottom-most, respectively:
Vector3 point_of_contact(-1.0f, -1.0f, -1.0f);
Matrix4 view_matrix = camera->getViewMatrix();
Matrix4 proj_matrix = camera->getProjectionMatrix();
Matrix4 inv_view_proj_matrix = (proj_matrix * view_matrix).inverse();
float nx = (2.0f * screen_x) - 1.0f;
float ny = 1.0f - (2.0f * screen_y);
Vector3 near_point(nx, ny, -1.0f);
Vector3 mid_point(nx, ny, 0.0f);
// Get the ray origin (on the near plane) and a second point along the ray, in world space
Vector3 ray_origin, ray_target;
ray_origin = inv_view_proj_matrix * near_point;
ray_target = inv_view_proj_matrix * mid_point;
Vector3 ray_direction = ray_target - ray_origin;
ray_direction.normalise();
// Check for collision with the plane
Vector3 plane_normal(0.0f, 0.0f, 1.0f);
float denominator = plane_normal.dotProduct(ray_direction);
if (fabs(denominator) >= std::numeric_limits<float>::epsilon())
{
float num = plane_normal.dotProduct(ray_origin) - z_pos; // plane z = z_pos, i.e. n.p + d = 0 with d = -z_pos
float distance = -(num / denominator);
if (distance > 0)
{
point_of_contact = ray_origin + (ray_direction * distance);
}
}
return point_of_contact;
Disclaimer Notice: This solution was taken from bits and pieces of Ogre3D graphics library.
The simplest way is to do a divide by z. Therefore ...
screenX = projectionX / projectionZ;
screenY = projectionY / projectionZ;
That does perspective projection based on distance. The thing is, it is often better to use homogeneous coordinates, as this simplifies matrix transformations (everything becomes a multiply). Equally, this is what D3D and OpenGL use. Understanding how to use non-homogeneous coordinates (i.e. an (x, y, z) coordinate triple) will be very helpful for things like shader optimisations, however.
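A tiny sketch of that divide-by-z idea, with made-up names (the focal length scales the projection and the half-width/half-height offsets move the origin to the screen centre):
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };
//Sketch only: perspective-project a camera-space point to pixel coordinates
Vec2 projectToScreen(const Vec3& p, float focalLength, float width, float height)
{
float sx = focalLength * p.x / p.z; //farther points move toward the centre
float sy = focalLength * p.y / p.z;
return { sx + width * 0.5f, height * 0.5f - sy }; //y flipped: screen y grows downwards
}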
One lame solution:
^ y
|
|
| /z
| /
+/--------->x
Angle is the angle between the Ox and Oz axes.
#include <cmath>
typedef struct {
double x,y,z;
} Point3D;
typedef struct {
double x,y;
} Point2D;
const double angle = M_PI/4; //can be changed
Point2D* projection(Point3D& point) {
Point2D* p = new Point2D();
p->x = point.x + point.z * sin(angle);
p->y = point.y + point.z * cos(angle);
return p;
}
However there are lots of tutorials on this on the net... Have you googled for it?