glm::unProject appears to be mixing up the screen Y coordinate - opengl

I'm trying to convert my mouse's cursor position in my OpenGL viewport to world coordinates. I'm using glm::unProject() to do this. However, it appears that the mouse position's Y coordinate is being negated somehow.
If I orient my camera so the world's Y axis points up and X points right, moving the mouse left/right gives me the correct coordinates; however, when moving the mouse up/down, the "world" Y coordinates I get are reversed (positive Y goes downwards).
If I reorient the camera so X now points up and Y points left, moving the mouse left/right gives the correct Y coordinates, but moving up/down gives reversed X coordinates. The same thing happens when I orient for Z.
This page mentions that device coordinates use a left-handed system (LHS); maybe this is the cause? Is there something I need to do to handle the case where device coordinates are in a different system, and is there a way to detect that?
I'm also noticing that my transformed coordinates are half what they should be (the mouse on an object at (1,0,0) shows (0.5,0,0)), but I think this is a separate issue, so I'll ask another question once I solve this one.

The basic problem is that OpenGL defines the origin as the lower left corner of the window, while most windowing systems use the upper left instead. The solution is simple: subtract the mouse Y coordinate from the window height:
gl_x = mouse_x;
gl_y = windowHeight - mouse_y;
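For example, applying that flip before calling glm::unProject might look roughly like this (a sketch assuming GLM; windowWidth, windowHeight, view, model and projection stand in for your own values):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: convert the window-system mouse position (origin top-left) to
// OpenGL window coordinates (origin bottom-left) before unprojecting.
float glX = (float)mouse_x;
float glY = (float)windowHeight - (float)mouse_y;   // flip Y

glm::vec4 viewport(0.0f, 0.0f, (float)windowWidth, (float)windowHeight);
glm::vec3 onNearPlane = glm::unProject(glm::vec3(glX, glY, 0.0f),   // winZ = 0 -> point on the near plane
                                       view * model, projection, viewport);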

Related

mfc when to use logical/device coordinates

I have heard that rectangles, mouse coordinates, and other things involved in drawing all use device coordinates. Is this true? Is there any way to tell whether I have logical or device coordinates?
I could look at the documentation of functions that give me the coordinates, but sometimes they don't explicitly say if these are logical or device coordinates. For example, the documentation for GetCursorPos function says it "retrieves the position of the mouse cursor, in screen coordinates."
I am assuming screen coordinates are the same as device coordinates? Does this mean I have to convert the screen coordinates I get from the function into client coordinates?
You know that coordinate (0,0) is at the top-left corner of the screen. But on paper, when we draw a graph, (0,0) may be at the bottom left, or at the centre of the graph paper.
By default, the logical coordinates and the physical/screen coordinates are the same, and (0,0) is at the top-left. But what if you want to draw a line from the bottom left to somewhere in the middle of the screen, in a way that matches the math/trigonometry you've learnt or are practising? Well, then you move to changing the logical coordinate system to something of your liking.
You'd use SetMapMode to change the logical coordinate system. You can then use LPtoDP, DPtoLP, ClientToScreen, ScreenToClient, etc. for logical/device mapping and for mapping between screen coordinates and your window's client coordinates.
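A rough Win32-level sketch of that chain, using the plain API rather than the MFC wrappers (hWnd and the MM_LOENGLISH map mode are just examples):

#include <windows.h>

// Sketch: screen -> client (device) -> logical coordinates.
POINT pt;
GetCursorPos(&pt);               // screen coordinates, origin at the top-left of the screen
ScreenToClient(hWnd, &pt);       // client/device coordinates, origin at the top-left of the client area

HDC hdc = GetDC(hWnd);
SetMapMode(hdc, MM_LOENGLISH);   // example map mode: logical units of 0.01 inch, y grows upward
DPtoLP(hdc, &pt, 1);             // device -> logical coordinates under that map mode
ReleaseDC(hWnd, hdc);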
About Coordinate Spaces and Transformations

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration, is by having camera looking down the negative z-axis.
I think it has something to do with the mirror image? But this explanation just confused me... why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera with a transformation matrix anyway (whatever we want from the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary what to pick for z direction.
But your pick has a lot of deep impact.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity". So a front face becomes a back face, and a znear depth test effectively becomes a zfar one. So it is wise to pick one convention early on and stick with it.
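To illustrate, if you do flip one axis, you also have to compensate for the flipped winding in the culling state, roughly like this (plain GL calls, just as an illustration):

// Default convention: counter-clockwise triangles are front faces.
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glCullFace(GL_BACK);

// After mirroring a single axis, every triangle's on-screen winding flips,
// so your front faces would be culled unless you also flip the winding rule:
glFrontFace(GL_CW);
// (and a depth test like glDepthFunc(GL_LESS) may similarly need to become GL_GREATER)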
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the X axis, your index finger the Y axis, and when you point those in the right directions, Z (your middle finger) points towards you. Why has the Z axis been chosen to point into/out of the screen? Because then the X and Y axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/perspective matrices, and you get it.
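For example, a z-up view like that maze case can be had just by building the view matrix accordingly (a sketch assuming GLM; the eye position is arbitrary):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: camera sits back along -Y and looks towards the origin, with +Z as "up",
// so movement naturally happens in the XY plane and Z points upwards.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, -10.0f, 3.0f),   // eye
                             glm::vec3(0.0f,   0.0f, 0.0f),   // target
                             glm::vec3(0.0f,   0.0f, 1.0f));  // up = +Z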
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. a full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL have by default (you can change that) been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
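You can see this with GLM: a "camera" sitting at the origin, looking along -z with +y up, is exactly the identity view matrix (my own sketch, not part of the original answer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// With no transforms at all, the view is simply the identity matrix.
glm::mat4 identityView(1.0f);

// Building a view at the origin that looks along -z with +y up
// gives back that same identity matrix -- which is all that
// "OpenGL looks down -z by default" really means.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f,  0.0f),    // eye at the origin
                             glm::vec3(0.0f, 0.0f, -1.0f),    // looking along -z
                             glm::vec3(0.0f, 1.0f,  0.0f));   // +y is up
// view == identityView (up to floating-point precision)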

How do I convert from center origin to bottom origin?

I will start by apologizing: I highly doubt I will have any of the correct terminology; unfortunately, after a few hours of raw testing and mashing my head against the wall, I can't figure this out.
I am working with an engine that orients its models using a bottom-aligned system. Meaning that on the z axis (in a z-up system), the origin is at z - radius, i.e. if the model is sitting at 0,0,0, all the tris would be in positive-z space.
I am integrating Bullet, which is a center-aligned system, meaning that the object's origin is at the center of mass (in this case a simple AABB cube).
The problem I have is that the yaw, pitch, roll and origin I pass into the renderer are offset by radius in the +z direction. The biggest issue comes when the pitch or roll becomes something other than 0: because Bullet's center-aligned system applies pitch and roll around the center, while the renderer rotates around the bottom, there is a clear difference in where the model and the bounding box line up.
So is there an algorithm to convert between these two forms of orientation?
OK I figured out my own issue.
So, for anyone that stumbles upon this, I'll explain what exactly is happening and how to fix it.
Simply put, my question is how to convert from world space (the x, y, z axes) to local space (relative x, y, z axes). So if you were to take an arrow and face it in the direction of 0x 0y 0z, while its origin was in positive space, say 1x 0y 0z, your arrow would be facing in world space. Meaning that if you were to move forward you would be moving along the x axis, left along the y axis, and up along the z axis.
Now if that arrow rotated x degrees about its yaw, so that it was pointing at 1x 1y 0z, then when you move forward you are no longer moving along just the x axis. This is what's called moving in local space, meaning you are moving along the axes that are relative to the yaw and pitch of your node (object, model, etc.).
So in my case I have Bullet, which is working in "world space", and I have my renderer, which is working in "model space", so I just need a matrix that converts between the two.
To convert between the spaces I need a transformation matrix that maps one into the other.
So here's the link to the source that I found, which does this conversion and explains the relevant math fairly easily: http://www.codinglabs.net/article_world_view_projection_matrix.aspx
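As a concrete sketch of the idea (my own illustration, assuming GLM, a z-up world, and radius being the distance from the model's bottom to its physics center):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// bulletTransform: the rigid body's transform from Bullet, whose local origin
// is the center of the box (center of mass).
// The renderer expects a transform whose local origin is the bottom of the model,
// i.e. that bottom point sits at (0, 0, -radius) in Bullet's local frame.
glm::mat4 bottomToCenter  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -radius));
glm::mat4 renderTransform = bulletTransform * bottomToCenter;

This way pitch and roll still rotate around the physics center, but the bottom-origin model lines up with the collision box.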
thanx every one
chasester

OpenGL - have object follow mouse

I want to have an object follow around my mouse on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is:
Get the coordinates within the window with glfwGetCursorPos.
The window was created with
window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL);
and the code to get coordinates is
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
Next, I use glm::unProject to get the coordinates in "object space":
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);
There are two potential problems I can already see. The viewport is fine, as the x,y coordinates of the lower left are indeed 0,0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero; but glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.
Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, possibly because the position vector I pass into unproject always has a value of 0.0 for z. However, varying the last coordinate in the position vector doesn't fix the fact that the object moves much slower than the mouse.
I think your main issue is actually the z coordinate. When you consider a point on the screen, this will not just specify a point in object space, but a straight line. When you use a perspective projection, you can draw a line from the eye position to any object-space point, and every point on such a line will be projected to the same screen-space point.
So what you get when you unproject with z=0 is actually the point on the near plane. Now it will depend on how far your object is away from the camera, as sketched here in top view.
What you get is point A in coordinates relative to the object-space origin. You could get point B back if you read the Z value of the pixel under the mouse cursor from the depth buffer.
However, I think that neither point A nor point B really helps you. You need some further constraint. For example, you could specify that the distance of the object from the camera is not to change, and that the pixel the mouse is pointing at shall be where the object center (or any other reference point in object space) should appear. Then you could compute the point on the ray you have to move that reference point to. But it is not really clear what kind of movement you actually want to implement.
As a side note: Manually adjusting the vertex coordinates is not a good idea. The GPU can do this far better. You should just manipulate the model matrix to move the object around. And it would be useful in such a scheme not to project the point or ray into the object space, but into world space, and use that to update the model matrix of the object.
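To make that concrete, here is a rough sketch of the ray approach with GLM and GLFW (my own example, not from the answer above; it assumes a 1024x768 window, that view and projection are your camera matrices, and that the object should keep its current distance from the camera):

#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Unproject the cursor twice (near plane and far plane) to get a ray in world space,
// then pick the point on that ray at the object's current camera-facing depth.
glm::vec3 mouseToWorld(GLFWwindow* window, const glm::mat4& view, const glm::mat4& projection,
                       const glm::vec3& objectPos)
{
    double xpos, ypos;
    glfwGetCursorPos(window, &xpos, &ypos);

    glm::vec4 viewport(0.0f, 0.0f, 1024.0f, 768.0f);
    float winX = (float)xpos;
    float winY = 768.0f - (float)ypos;   // flip Y: GLFW origin is top-left, GL is bottom-left

    // winZ = 0 -> point on the near plane, winZ = 1 -> point on the far plane.
    // Passing only the view matrix as the "model" argument yields world-space points.
    glm::vec3 nearPoint = glm::unProject(glm::vec3(winX, winY, 0.0f), view, projection, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(winX, winY, 1.0f), view, projection, viewport);
    glm::vec3 rayDir    = glm::normalize(farPoint - nearPoint);

    // Keep the object at its current distance in front of the camera:
    // intersect the ray with the camera-facing plane through objectPos.
    glm::mat4 camToWorld = glm::inverse(view);
    glm::vec3 camPos     = glm::vec3(camToWorld[3]);                    // camera position in world space
    glm::vec3 camForward = -glm::normalize(glm::vec3(camToWorld[2]));   // camera -z axis in world space
    float t = glm::dot(objectPos - camPos, camForward) / glm::dot(rayDir, camForward);
    return camPos + t * rayDir;   // new world-space position for the object
}

// Then, instead of editing vertices, just rebuild the model matrix:
//   Model = glm::translate(glm::mat4(1.0f), mouseToWorld(window, View, Projection, objectPos));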

How to move an object depending on the camera in OpenGL.

As shown in the image below.
The user moves the ball by changing its x, y, z coordinates, which correspond to right/left, up/down, and near/far movements respectively. But when we change the camera from position A to position B, things look weird: "right" doesn't look right any more, because the ball still moves in the previous coordinate frame, shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect the way its displacement looks?
Simple example: if we place the camera so that it is looking from the positive X axis, changes in the z coordinate now look like right and left movements. In reality, changing z should always be near and far.
Thought I would answer it here:
I solved it by simply multiplying the ball's coordinates by the camera's modelview matrix.
Here is the code:
glGetDoublev( GL_MODELVIEW_MATRIX, cam );
matsumX=cam[0]*tx+cam[1]*ty+cam[2]*tz+cam[3];
matsumY=cam[4]*tx+cam[5]*ty+cam[6]*tz+cam[7];
matsumZ=cam[8]*tx+cam[9]*ty+cam[10]*tz+cam[11];
where tx, ty, tz are the ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates, which change according to the camera.
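A note on why this works: the array filled in by glGetDoublev is column-major, so indexing it row-by-row like this effectively multiplies by the transpose of the modelview's rotation part, which (for a modelview without scaling) is the camera-to-world rotation, i.e. it turns a camera-relative displacement into a world-space one. A GLM version of the same idea might look like this (a sketch; viewMatrix is assumed to hold the camera/view transform):

#include <glm/glm.hpp>

// Sketch: treat (tx, ty, tz) as a displacement in camera space (right/up, near/far)
// and rotate it into world space with the inverse of the view rotation.
// Assumes viewMatrix contains no scaling, so its transposed 3x3 is the inverse rotation.
glm::vec3 camDelta(tx, ty, tz);
glm::mat3 camToWorld = glm::transpose(glm::mat3(viewMatrix));
glm::vec3 worldDelta = camToWorld * camDelta;
ballPosition += worldDelta;   // the ball now moves relative to the current camera view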