I have a problem.
I have an OpenGL project and I want to apply the OpenGL camera matrix to an OpenSceneGraph camera so both show the same view.
I use this code to get the OpenGL camera:
GLdouble camera_pos[3];
double rationZoom = // RatioZoom
int* iViewport = // the ViewPort
double* projetMatrix = // Projection Matrix
double* modelMatrix = // ModelView matrix
// screenCenter is the position of my model.
double screenCenterX = // The center in x Axis
double screenCenterY = // The center in y Axis
gluUnProject((iViewport[2] - iViewport[0]) / 2, (iViewport[3] - iViewport[1]) / 2,
0.0, modelMatrix, projetMatrix, iViewport, &camera_pos[0], &camera_pos[1], &camera_pos[2]);
//Camera_pos is the position X,Y,Z of my camera.
And in OpenSceneGraph I use this code to set the camera's eye/center/up via LookAt (to get the same view as in OpenGL):
// I use a zoom ratio to get the same distance.
double X = ((camera_pos[0]/2) - ((screenCenterX)))/rationZoom;
double Y = ((camera_pos[1]/2) - ((screenCenterY)))/rationZoom;
double Z = ((camera_pos[2]/2)) / rationZoom;
osg::Vec3 updateCameraPosition(X, Y, Z);
osg::Matrix matrixCameraPosition;
matrixCameraPosition.setTrans(updateCameraPosition);
// Yes, for the center I invert the camera position matrix
osg::Matrix matrixCameraCenter;
matrixCameraCenter.invert(matrixCameraPosition);
osg::Vec3f eye(matrixCameraPosition.getTrans());
osg::Vec3f up(0, 0, 1);
osg::Vec3f centre = osg::Vec3f(matrixCameraCenter.getTrans().x(),
matrixCameraCenter.getTrans().y(),
matrixCameraCenter.getTrans().z());
// And I set the view on the camera
nodeCamera->setViewMatrixAsLookAt(eye, centre, up);
The initial position works fine, but when I pan the model in the OpenGL project I no longer get the same view.
If I'm not mistaken, OpenGL's convention is X-right, Y-up, Z-backward (towards the viewer), while OpenSceneGraph's is X-right, Y-forward, Z-up.
Maybe this is the problem, and I have to set Y as up instead of Z in OpenSceneGraph?
I have solved my problem.
I don't need to compute the camera position or call setViewMatrixAsLookAt; I just have to take the OpenGL modelview and projection matrices and set them on the OpenSceneGraph camera, like this:
double* projetMatrix = // the Projection matrix
double* modelMatrix = // The modelView matrix
osg::Matrixd modelViewMatrix(modelMatrix[0], modelMatrix[1], modelMatrix[2], modelMatrix[3],
modelMatrix[4], modelMatrix[5], modelMatrix[6], modelMatrix[7],
modelMatrix[8], modelMatrix[9], modelMatrix[10], modelMatrix[11],
modelMatrix[12], modelMatrix[13], modelMatrix[14], modelMatrix[15]
);
osg::Matrixd projectionMatrix(projetMatrix[0], projetMatrix[1], projetMatrix[2], projetMatrix[3],
projetMatrix[4], projetMatrix[5], projetMatrix[6], projetMatrix[7],
projetMatrix[8], projetMatrix[9], projetMatrix[10], projetMatrix[11],
projetMatrix[12], projetMatrix[13], projetMatrix[14], projetMatrix[15]
);
And :
camera->setViewMatrix(modelViewMatrix);
camera->setProjectionMatrix(projectionMatrix);
camera->setViewport(new osg::Viewport(iViewport[0], iViewport[1], iViewport[2], iViewport[3]));
glm::vec3 Position(0, 0, 500);
glm::vec3 Front(0, 0, 1);
glm::vec3 Up(0, 1, 0);
glm::vec3 vPosition = glm::vec3(Position.x, Position.y, Position.z);
glm::vec3 vFront = glm::vec3(Front.x, Front.y, Front.z);
glm::vec3 vUp = glm::vec3(Up.x, Up.y, Up.z);
glm::mat4 view1 = glm::lookAt(vPosition, vPosition + vFront, vUp);
glm::mat4 projection1 = glm::perspective(glm::radians(45.0f), (float)1920 / (float)1080, 0.1f, 1000.0f);
glm::mat4 VPMatrix = projection1 * view1;
float testZ = 0.0f;
glm::vec3 modelVertices(-50.0f, 50.0f, testZ);
glm::vec4 finalPositionMin = VPMatrix * glm::vec4(modelVertices, 1.0);
Print() << finalPositionMin.x;
In this code, changing the FOV value passed to perspective affects the object drawn on screen: for smaller values the object's size on screen increases.
At a FOV of 45, finalPositionMin.x is -50.
At a FOV of 25, finalPositionMin.x is -126.
But if I move the camera closer to the object, that should also affect it: the closer we get to the object, the more finalPositionMin.x should change.
Why does changing the Z position of the camera not affect the finalPositionMin.x of the object?
To get a Cartesian coordinate, you need to divide the x, y, and z components by the w component. You have to print finalPositionMin.x/finalPositionMin.w.
Likely you are confusing "window" coordinates and "world" coordinates. In your example, finalPositionMin is not in world space; it is in clip space. To get a world-space coordinate you would multiply modelVertices by the model matrix, and nothing else.
Note: the view matrix (view1) transforms from world space to view space. The projection matrix (projection1) transforms from view space to clip space. The perspective divide then transforms from clip space to normalized device coordinates.
To get window coordinates ("pixel" coordinates) you have to map the NDC onto the viewport (width, height is the size of the viewport):
x = width * (ndc.x+1)/2
y = height * (1-ndc.y)/2
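A minimal NumPy sketch of this (the matrix layout mirrors glm::perspective; unlike the question's Front = (0, 0, 1), the camera here is assumed to look down -z toward the vertex, so the vertex is actually in front of it). It shows that clip.x stays the same when the camera moves, while ndc.x, after the divide by w, does change with distance:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Same layout as glm::perspective (column vectors, z clipped to [-1, 1])
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

vertex = np.array([-50.0, 50.0, 0.0, 1.0])
proj = perspective(45.0, 1920.0 / 1080.0, 0.1, 1000.0)

results = {}
for cam_z in (500.0, 250.0):
    view = np.eye(4)
    view[2, 3] = -cam_z              # camera at (0, 0, cam_z) looking down -z
    clip = proj @ view @ vertex
    ndc_x = clip[0] / clip[3]        # perspective divide by w
    results[cam_z] = (clip[0], ndc_x)

# clip.x is identical at both camera distances; ndc.x is not
```

This is exactly why printing finalPositionMin.x without dividing by finalPositionMin.w hides the effect of moving the camera.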
In my SharpGL project (C#) I have used the Unproject function in order to get the world coordinates from mouse coordinates.
This procedure, though quite trivial, fails when the drawing is scaled. I found many articles about this issue, but none suited me.
By "scaled" I mean that in the main draw procedure I apply this code:
_gl.Scale(_params.ScaleFactor, _params.ScaleFactor, _params.ScaleFactor);
Then, when I intercept the mouse move, I want to display the world coordinates. These coordinates are precise when the scale factor is 1, but when I change it they are wrong.
for example:
a world point (10, 10)
scaled 1 is detected (10, 10)
scaled 1.25 is detected (8, 8)
scaled 1.5 is detected (6.65, 6.65)
This is my simple code, consider that scale_factor is just passed for debugging.
public static XglVertex GetWorldCoords(this OpenGL gl, int x, int y, float scale_factor)
{
double worldX = 0;
double worldY = 0;
double worldZ = 0;
int[] viewport = new int[4];
double[] modelview = new double[16];
double[] projection = new double[16];
gl.GetDouble(OpenGL.GL_MODELVIEW_MATRIX, modelview); //get the modelview info
gl.GetDouble(OpenGL.GL_PROJECTION_MATRIX, projection); //get the projection matrix info
gl.GetInteger(OpenGL.GL_VIEWPORT, viewport); //get the viewport info
float winX = (float)x;
float winY = (float)viewport[3] - (float)y;
float winZ = 0;
//get the world coordinates from the screen coordinates
gl.UnProject(winX, winY, winZ, modelview, projection, viewport, ref worldX, ref worldY, ref worldZ);
XglVertex vres = new XglVertex((float)worldX, (float)worldY, (float)worldZ);
Debug.Print(string.Format("World Coordinate: x = {0}, y = {1}, z = {2}, sf = {3}", vres.X, vres.Y, vres.Z, scale_factor));
return vres;
}
I found a solution!
There was a portion of code in the main draw procedure that altered the results of the UnProject function:
gl.PushMatrix();
gl.Translate(_mouse.CursorPosition.X * _params.ScaleFactor, _mouse.CursorPosition.Y * _params.ScaleFactor, _mouse.CursorPosition.Z * _params.ScaleFactor);
foreach (var el in _cursor.Elements) //objects to be drawn
{
gl.LineWidth(1f);
gl.Begin(el.Mode);
foreach (var v in el.Vertex)
{
gl.Color(v.Color.R, v.Color.G, v.Color.B, v.Color.A);
gl.Vertex(v.X, v.Y, v.Z);
}
gl.End();
}
gl.PopMatrix();
I need help with the matrix transformations needed to find the corners of my viewport in 3D (world-space) coordinates.
I have done some tests but I cannot find the solution.
Step 1:
I have the projection and model matrices available (and the viewport size too). To find the center of my "screen" I have used this:
OpenGL.UnProject(0.0, 0.0, 0.0) <- this function tells me where the center of my screen is in 3D space (correct!)
Another approach is to multiply the coordinate (0, 0, 0) by ProjectionMtx.Inverse.
Step 2:
Now I need, for example, the top-left corner of my viewport; how can I find that 3D point in world space?
Probably I should work with the viewport size, but how?
This is my unproject method:
double[] mview = new double[16];
GetDouble(GL_MODELVIEW_MATRIX, mview);
double[] prj = new double[16];
GetDouble(GL_PROJECTION_MATRIX, prj);
int[] vp = new int[4];
GetInteger(GL_VIEWPORT, vp);
double[] r = new double[3];
gluUnProject(winx, winy, winz, mview, prj, vp, ref r[0], ref r[1], ref r[2]);
For Example:
if I have my camera at (-40, 0, 0) and my viewport is [0, 0, 1258, 513], and I unproject my near-plane points, I get this result:
left_bottom_near =>X=-39.7499881839701,Y=-0.0219584744091603,Z=0.946276352352364
right_bottom_near =>X=-39.7499881839701,Y=-0.0219584744091603,Z=0.946446903614738
left_top_near =>X=-39.7499881839701,Y=-0.0217879231516134,Z=0.946276352352364
right_top_near =>X=-39.7499881839701,Y=-0.0217879231516134,Z=0.946446903614738
I can understand the X value of my points, which is close to the X world value of my camera position, but what about Y and Z? I cannot understand them.
gluUnProject converts from window coordinates to world space (or model space).
The view matrix converts from world space to view space.
The projection matrix transforms from view space to normalized device space. Normalized device space is a cube with left, bottom, near at (-1, -1, -1) and right, top, far at (1, 1, 1).
Finally, the x and y coordinates in normalized device space are mapped to the viewport (window coordinates). The viewport is defined by glViewport. The z component is mapped to the depth range set by glDepthRange (default [0.0, 1.0]). gluUnProject does the opposite of all these steps.
With an orthographic projection the viewing volume is a cuboid; with a perspective projection it is a frustum. The projection matrix transforms from view space to normalized device space.
What you see in the viewport is the projection of the normalized device space onto its xy plane.
In general, the corners of the viewing volume can be obtained by un-projecting the 8 corner points of the window coordinates. For that you have to know the viewport rectangle and the depth range.
I assume that the viewport has the size of the window (0, 0, width, height) and that the depth range is [0.0, 1.0]:
left = 0.0;
right = width;
bottom = 0.0;
top = height;
left_bottom_near = gluUnProject(left, bottom, 0.0)
right_bottom_near = gluUnProject(right, bottom, 0.0)
left_top_near = gluUnProject(left, top, 0.0)
right_top_near = gluUnProject(right, top, 0.0)
left_bottom_far = gluUnProject(left, bottom, 1.0)
right_bottom_far = gluUnProject(right, bottom, 1.0)
left_top_far = gluUnProject(left, top, 1.0)
right_top_far = gluUnProject(right, top, 1.0)
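The steps above can be sketched numerically. This is a hypothetical NumPy version of gluUnProject (a symmetric perspective projection and an identity view matrix are assumed; the viewport size is taken from the question, the fov/near/far values are made up). The un-projected near corners land on the near plane, the far corners on the far plane:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix (column vectors)
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def unproject(win_x, win_y, win_z, inv_pv, viewport):
    # window coordinates -> NDC (cube [-1, 1]^3, depth range [0, 1]) -> world
    x0, y0, w, h = viewport
    ndc = np.array([2.0 * (win_x - x0) / w - 1.0,
                    2.0 * (win_y - y0) / h - 1.0,
                    2.0 * win_z - 1.0,
                    1.0])
    world = inv_pv @ ndc
    return world[:3] / world[3]        # perspective divide

viewport = (0, 0, 1258, 513)           # size taken from the question
proj = perspective(60.0, 1258 / 513, 0.1, 100.0)   # assumed fov/near/far
view = np.eye(4)                       # assumed: camera at origin looking down -z
inv_pv = np.linalg.inv(proj @ view)

near_lb = unproject(0, 0, 0.0, inv_pv, viewport)       # left_bottom_near
near_rt = unproject(1258, 513, 0.0, inv_pv, viewport)  # right_top_near
far_lb  = unproject(0, 0, 1.0, inv_pv, viewport)       # left_bottom_far
```

With these assumptions, near_lb.z comes out as -near and far_lb.z as -far, which is the frustum you would expect from the camera's point of view.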
I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by projecting the ray origin and direction by the inverse of the transformations.
When I just had a model (no view or projection) in the vertex shader I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use a perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]. The coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). The window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, the point is in homogeneous coordinates; it can be converted to a Cartesian coordinate by the perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by a point r and a normalized direction d, finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized();
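The same construction can be checked with a small NumPy sketch (not the Eigen code above; the view and model matrices are assumed to be identity, and the perspective parameters are made up). Clicking the centre of the window should give a ray straight down the camera's -z axis:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix (column vectors)
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

width, height = 800, 600
mouse_x, mouse_y = 400, 300            # centre of the window

# window -> NDC, y flipped as above
x_ndc = 2.0 * mouse_x / width - 1.0
y_ndc = 1.0 - 2.0 * mouse_y / height

projection = perspective(45.0, width / height, 0.1, 100.0)
view = np.eye(4)                       # assumed: camera at origin looking down -z
model = np.eye(4)                      # assumed: untransformed model

# (P * V * M)^-1 == M^-1 * V^-1 * P^-1, same order as the Eigen code
inv = np.linalg.inv(projection @ view @ model)

p_near_h = inv @ np.array([x_ndc, y_ndc, -1.0, 1.0])   # z near = -1
p_far_h  = inv @ np.array([x_ndc, y_ndc,  1.0, 1.0])   # z far  =  1

p0 = p_near_h[:3] / p_near_h[3]        # perspective divide
p1 = p_far_h[:3] / p_far_h[3]

d = (p1 - p0) / np.linalg.norm(p1 - p0)   # normalized ray direction
```

The ray origin p0 sits on the near plane and d points down -z, which is what a centred click should produce with these assumed matrices.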
I want to draw a 2D box (the smallest possible containing the object) around a 3D object with OpenGL.
Image: http://imgur.com/h1Vyy4b
What I have is:
Camera X/Y/Z/yaw/pitch, Object X/Y/Z/width/height/depth
I can draw on a 2d surface and a 3d surface.
How would I go about this?
I went here and found a function for getting screen coordinates out of your 3D points:
function point2D get2dPoint(Point3D point3D, Matrix viewMatrix,
Matrix projectionMatrix, int width, int height) {
Matrix4 viewProjectionMatrix = projectionMatrix * viewMatrix;
//transform world to clipping coordinates
point3D = viewProjectionMatrix.multiply(point3D);
//perspective divide: clip space -> normalized device coordinates
point3D = point3D.divide(point3D.getW());
int winX = (int) Math.round((( point3D.getX() + 1 ) / 2.0) * width);
//we calculate -point3D.getY() because the screen Y axis is
//oriented top->down
int winY = (int) Math.round((( 1 - point3D.getY() ) / 2.0) * height);
return new Point2D(winX, winY);
}
If you're not sure how to get the matrices:
glGetDoublev (GL_MODELVIEW_MATRIX, mvmatrix);
glGetDoublev (GL_PROJECTION_MATRIX,pjmatrix);
After getting your 2D coordinates you go like this (pseudo code):
int minX = INT_MAX, maxX = INT_MIN, minY = INT_MAX, maxY = INT_MIN;
for each 2dpoint p:
if (p.x<minX) minX=p.x;
if (p.x>maxX) maxX=p.x;
if (p.y<minY) minY=p.y;
if (p.y>maxY) maxY=p.y;
Then you draw a box with
P1=(minX,minY)
P2=(maxX,minY)
P3=(maxX,maxY)
P4=(minX,maxY)
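Putting the whole pipeline together, here is a hedged NumPy sketch (the scene is invented: a unit cube at the origin, a camera 5 units back on +z, a 640x480 window; the matrix layout is the standard OpenGL one). It projects the 8 corners, takes the min/max, and yields the four box points:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix (column vectors)
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

width, height = 640, 480

# Hypothetical object: unit cube centred at the origin (8 corners)
corners = [np.array([x, y, z, 1.0]) for x in (-0.5, 0.5)
                                    for y in (-0.5, 0.5)
                                    for z in (-0.5, 0.5)]

view = np.eye(4)
view[2, 3] = -5.0                      # camera at (0, 0, 5) looking down -z
vp_matrix = perspective(45.0, width / height, 0.1, 100.0) @ view

pts = []
for c in corners:
    clip = vp_matrix @ c
    ndc = clip[:3] / clip[3]                        # perspective divide
    win_x = round((ndc[0] + 1.0) / 2.0 * width)
    win_y = round((1.0 - ndc[1]) / 2.0 * height)    # screen y runs top->down
    pts.append((win_x, win_y))

min_x = min(p[0] for p in pts); max_x = max(p[0] for p in pts)
min_y = min(p[1] for p in pts); max_y = max(p[1] for p in pts)
# 2D box: (min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)
```

Since the cube and camera are both centred on the view axis in this made-up scene, the resulting box comes out symmetric about the middle of the window.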