My goal is to create an intuitive 3D manipulator to handle rotations of meshes displayed in my 3D editor, made with Qt / QML.
To do that, when the user clicks on an entity, 3 tori are spawned around the mesh, representing the Euler angles the user can act on. If the user then clicks on one torus, I want him to be able to rotate the mesh by dragging the mouse. The natural way users seem to do that is by dragging the mouse around the torus in the direction they want the mesh to rotate.
I therefore need a way to know how the user is rotating his mouse. I thought of a way: when the user clicks on the torus, I retrieve the position of the center of the torus. Then, I translate this world position to its screen position. Then, I monitor the angle between the cursor of the mouse and the center of the torus. The evolution of this angle should tell me everything I need: if the angle increases clockwise, the mesh should rotate clockwise and vice versa. This solution should yield a result good enough for my application, since it won't depend on the angle of the camera, or only very minimally.
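The angle tracking itself seems simple enough; here is a minimal sketch of what I mean (plain C++, the names are mine):

#include <cmath>

// Signed angle step of the cursor around the projected torus center since
// the last mouse event. Positive means counter-clockwise in math convention
// (with screen y pointing down, that appears clockwise on screen).
float angleDelta(float mouseX, float mouseY,
                 float centerX, float centerY, float &lastAngle)
{
    const float kPi = 3.14159265f;
    float angle = std::atan2(mouseY - centerY, mouseX - centerX);
    float delta = angle - lastAngle;
    // Unwrap across the -pi/+pi seam so a full turn accumulates smoothly.
    if (delta >  kPi) delta -= 2.0f * kPi;
    if (delta < -kPi) delta += 2.0f * kPi;
    lastAngle = angle;
    return delta;
}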
However, I can't find a way to translate a world position to its screen position with Qt. I found the function QVector3D::project(const QMatrix4x4 &modelView, const QMatrix4x4 &projection, const QRect &viewport), but its documentation is very scarce and I couldn't find anyone using it... I might have found what to feed in for the projection argument (the projectionMatrix property from QCamera, here https://doc.qt.io/qt-5/qml-qt3d-render-camera.html), but that's it. What is the modelView? And the viewport? Is it simply QRect(0, 0, 1920, 1080)?
If anyone has any kind of lead, it would be amazing; I can't find anything anywhere and I'm kind of losing hope now. Or maybe another, simpler solution to my problem? Please note that the user can also freely move the camera around the mesh, which adds complexity.
Thanks a lot for your time, and have a nice day!
Yes, you should be able to translate from world position to screen position using the mentioned function. You are correct about the projection argument. As for the modelView argument, you should use the viewMatrix property from QCamera, which is missing from the official documentation, but it works for me. The viewport parameter represents the dimensions of the part of the screen you are projecting onto. You could use QRect(0, 0, 1920, 1080) if you render full screen in Full HD; otherwise use something like QRect(QPoint(0, 0), view->size()), where view is the widget or window with your 3D image. Be careful: the resulting screen position will have y = 0 at the bottom with positive values going up, which is the opposite of the usual screen coordinate convention.
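Put together, a minimal sketch (assuming camera is your Qt3DRender::QCamera, viewSize is the window size, and worldToScreen is just a name I made up):

#include <Qt3DRender/QCamera>
#include <QVector3D>
#include <QRect>
#include <QPointF>

QPointF worldToScreen(const QVector3D &worldPos,
                      Qt3DRender::QCamera *camera,
                      const QSize &viewSize)
{
    // viewMatrix maps world space to eye space; there is no separate model
    // matrix here because worldPos is already given in world space.
    const QVector3D screen = worldPos.project(camera->viewMatrix(),
                                              camera->projectionMatrix(),
                                              QRect(QPoint(0, 0), viewSize));
    // Flip y: project() uses OpenGL-style coordinates (y up), while mouse
    // coordinates usually have y = 0 at the top.
    return QPointF(screen.x(), viewSize.height() - screen.y());
}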
I want to have an object follow my mouse around on the screen in OpenGL (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is:
Get the coordinates within the window with glfwGetCursorPos.
The window was created with
window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);
and the code to get coordinates is
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
Next, I use glm::unProject to get the coordinates in "object space":
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);
There are two potential problems I can already see. The viewport is fine, as the initial x, y coordinates of the lower left are indeed (0, 0), and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.
Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, possibly because the position vector I pass into unproject always has a value of 0.0 for z. However, varying the last coordinate in the position vector doesn't fix the fact that the object moves much slower than the mouse.
I think your main issue is actually the z coordinate. When you consider a point on the screen, this will not just specify a point in object space, but a straight line. When you use a perspective projection, you can draw a line from the eye position to any object-space point, and every point on such a line will be projected to the same screen-space point.
So what you get when you unproject with z=0 is actually the point on the near plane. Now it will depend on how far your object is from the camera, as sketched here in top view.
What you get is point A in coordinates relative to the object space origin. You could get point B back if you just read the Z value of the pixel under the mouse cursor back from the depth buffer.
However, I think that neither point A nor B is really helping you. You need some further constraint. For example, you could specify that the distance of the object is not changed, and that the pixel the mouse is pointing at shall be the point where the object center (or any other reference point in object space) should appear. Then, you could just compute the point on the ray you have to move the point to. But it is not really clear what kind of movement you actually want to implement.
As a side note: Manually adjusting the vertex coordinates is not a good idea. The GPU can do this far better. You should just manipulate the model matrix to move the object around. And it would be useful in such a scheme not to project the point or ray into the object space, but into world space, and use that to update the model matrix of the object.
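A minimal GLM sketch of that scheme, under the constraint that the object keeps its current distance from the camera (Model, View, Projection and the 1024x768 window are assumed from the question):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::unProject

glm::vec4 viewport(0.0f, 0.0f, 1024.0f, 768.0f);
// GLFW reports y from the top; OpenGL window coordinates have y = 0 at the bottom.
glm::vec3 winNear((float)xpos, 768.0f - (float)ypos, 0.0f);
glm::vec3 winFar ((float)xpos, 768.0f - (float)ypos, 1.0f);

// Unproject into world space (pass only View), not object space.
glm::vec3 rayStart = glm::unProject(winNear, View, Projection, viewport);
glm::vec3 rayEnd   = glm::unProject(winFar,  View, Projection, viewport);
glm::vec3 rayDir   = glm::normalize(rayEnd - rayStart);

// Keep the object at its current distance from the camera and slide it
// along the mouse ray.
glm::vec3 objectPos(Model[3]);  // current world position
float     dist   = glm::length(objectPos - rayStart);
glm::vec3 newPos = rayStart + rayDir * dist;

// Move the object by updating its model matrix, not its vertices.
Model[3] = glm::vec4(newPos, 1.0f);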
I am working on my first real OpenGL project. It is a 3x3x3 Rubik's Cube. Here is a link to a simple screenshot of what I have so far (my rubiks cube).
Rotating the cube is done by dragging the mouse while holding the right mouse button. This works using the example of an arcball from NeHe Tutorials (NeHe Arcball).
I have the class singleCubes, which represents one cube via 6 actual quads, stored in a display list that can be used in its draw method.
Class ComplexCube has an array of 3x3x3 singleCubes and is used as the interface when interacting with the complete Rubik's cube.
Now I want to rotate each specific face according to the mouse dragging with the left mouse button down. I use picking to get the id of the corresponding side of the single cube the user clicked on. This works too. So I click on a side of one cube on a face, and depending on the direction of the dragging I set a rotation and offset factor for the cubes that get affected. (I also want to implement that you actually see the face rotate instead of just changing the color.)
Now my problem is that when I rotate the Rubik's cube in any direction with right mouse dragging, it becomes upside down, for example. So when I click on a side and want to rotate the face to the right, it goes in the wrong direction because I can't keep track of whether the cube is upside down or whatever. Due to the use of the arcball rotation I don't have an x- or y-rotation angle which I could use to determine this.
Question 1: How can I keep track of, or later on retrieve, the information whether the cube is upside down, tilted, etc., in order to translate the mouse dragging information (when rotating one face) when using the arcball example linked above?
// In render function
glPushMatrix();
{
    glMultMatrixf(Transform.M); // Rotation applied by arcball object
    complCube.draw();           // Draw all the cubes using display lists
}
glPopMatrix();
Setup: C++ with Microsoft Visual Studio 2008, GLEW, freeglut
You could use gluUnProject to convert mouse coordinates to 3D space and get a vector (the difference between two points). This vector could then be used to apply a "force" to the selected face. Since gluUnProject uses the modelview and projection matrices, it automatically deals with the orientation of the camera.
Basically, once you get your "force" vector, you project it onto the three axes (so onto (1,0,0), (0,1,0), (0,0,1)). Then choose the one with the largest magnitude. Then you have to convert this direction into a rotation axis as in the diagram below (sorry for the bad paint skills):
So what we have is the "force" vector in black and the selected rubiks face in grey. To get the rotation axis, just take the cross product of the "force" vector with the normal of the selected face. This gives the red arrow. From that, you should be able to rotate your cubes in the right direction.
Edit to answer the question in more detail
So continuing from my explanation, I will give an example of how this will help you. Let's first assume your screen is 800x800 pixels and your Rubik's cube is always centred. Now let's also assume, as per your drawings in the comments, that we are in the case on the left.
We drag the mouse and get two positions which, using gluUnProject, are transformed into world coordinates (the numbers were chosen to illustrate my point, not by any calculation):
p1 : (600, 600) -> (1, -0.5, 0)
p2 : (630, 605) -> (1.3, -0.505, 0)
Now we get the difference vector: p2 - p1 = v = (0.3, -0.05, 0). The reason I was saying to "project onto the three axes" is so that you extract your major movement (which in this case is 0.3 along the x axis), since the Rubik's cube can't rotate along diagonals. To do the "projection" you just take the x, y, z components individually and create vectors from them, so you wind up with:
v1 = (0.3, 0, 0)
v2 = (0, -0.05, 0)
v3 = (0, 0, 0)
Now take the magnitudes and discard the smaller vectors, so we are left with the vector v1 = (0.3, 0, 0). This is your movement vector in world space. Now you take the cross product of that vector with the normal vector of the selected face (which in this case would be (0, 0, 1)). This gives you a vector which points down, (0, -1, 0), after normalization. (In this step you will probably also have to extract the largest component only, e.g. (0.02, 1.2, 0.8) -> (0, 1, 0), otherwise you would get bizarre rotations if your camera was not pointing directly along the main axes.) You can now use that vector as the rotation axis and use 0.3 as your rotation amount (if it rotates in the opposite direction to that expected, just negate the axis).
Now how does this help if your cube is upside down? Suppose we click on the screen in the same way. We now get:
p1 : (600, 600) -> (-1, 0.5, 0)
p2 : (630, 605) -> (-1.3, 0.505, 0)
See the difference in the world coordinates? They are inverted! So when you take the difference vector p2 - p1 = v = (-0.3, 0.05, 0) and extract the largest component vector, you get (-0.3, 0, 0). Doing the cross product once again gives you the rotation axis, but now the rotation is in the opposite direction, which is what you want.
Another reason for the cross product with the normal of the face is that if you were to select the faces on the top (in our drawings), it would give a rotation axis along either the x or z axis (to the left, or into the screen), which is what you want for the top faces.
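A hedged sketch of the whole procedure in GLM-style code (the helper name dragToRotation and the use of GLM are my choices, not from the original posts):

#include <glm/glm.hpp>
#include <cmath>

// Turn a world-space drag vector `v` (the difference of two gluUnProject
// results) on a face with normal `faceNormal` into a rotation axis and a
// signed amount.
void dragToRotation(const glm::vec3 &v, const glm::vec3 &faceNormal,
                    glm::vec3 &axisOut, float &amountOut)
{
    // Snap the drag vector to its dominant world axis
    // (the "project onto the three axes, keep the largest" step).
    int major = 0;
    for (int i = 1; i < 3; ++i)
        if (std::fabs(v[i]) > std::fabs(v[major])) major = i;
    glm::vec3 snapped(0.0f);
    snapped[major] = v[major];

    // Rotation axis = drag direction x face normal, snapped the same way.
    glm::vec3 axis = glm::cross(glm::normalize(snapped), faceNormal);
    int a = 0;
    for (int i = 1; i < 3; ++i)
        if (std::fabs(axis[i]) > std::fabs(axis[a])) a = i;
    axisOut    = glm::vec3(0.0f);
    axisOut[a] = (axis[a] > 0.0f) ? 1.0f : -1.0f;
    amountOut  = std::fabs(v[major]);
}

With v = (0.3, -0.05, 0) and faceNormal = (0, 0, 1) this yields the axis (0, -1, 0) and amount 0.3, matching the walkthrough above.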
Like most of us, you will encounter the famous problem called Gimbal Lock.
see: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=208925
This problem is extremely well documented so there is not much point for me to go into details here. I am sure you will find a ton of information about it.
Original Question/Code
I am fine-tuning the rendering of a 3D object and attempting to implement a camera that follows the object using gluLookAt, because the object's center y position constantly increases once it reaches its maximum height. Below is the section of code where I set up the ModelView and Projection matrices:
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(centerX - diam, centerX + diam,
          centerY - diam, centerY + diam,
          diam, 40 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0., 0., 2. * diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
Currently the object displays very far away and appears to move further back into the screen (-z) and down (-y) until it eventually disappears.
What am I doing wrong? How can I get my surface to appear in the center of the screen, taking up the full view, and the camera moving with the object as it is updated?
Updated Code and Current Issue
This is my current code, which is now putting the object dead center and filling up my window.
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(centerX - diam, centerX,
        centerY - diam, centerY,
        1.0, 1.0 + 4 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(centerX, _cameraOffset, diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
I still have one problem: when the object being viewed starts moving, it does not stay perfectly centered. It appears to jitter up by a pixel and then down by two pixels on each update. Eventually the object leaves the current view. How can I solve this jitter?
Your problem is with understanding what the projection does, in your case glFrustum. I think the best way to explain glFrustum is with a picture (which I just drew by hand). You start off in a space called Eye Space. It's the space your vertices are in after they have been transformed by the modelview matrix. This space needs to be transformed to a space called Normalized Device Coordinates. This happens in a two-step process:
The Eye Space is transformed to Clip Space by the projection (matrix)
The perspective divide {X,Y,Z} = {x,y,z}/w is applied, taking it into Normalized Device Coordinate space.
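In code, these two steps look like this; a small numeric sketch assuming GLM (the numbers are mine, just for illustration):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::frustum

// A symmetric frustum (left = -right, top = -bottom).
glm::mat4 proj = glm::frustum(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 100.0f);

glm::vec4 eye  = glm::vec4(0.5f, 0.5f, -2.0f, 1.0f); // a point in Eye Space
glm::vec4 clip = proj * eye;                         // 1. Clip Space
glm::vec3 ndc  = glm::vec3(clip) / clip.w;           // 2. perspective divide -> NDC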
The visible effect of this is a kind of "lens" of OpenGL. In the picture below you can see a green highlighted area (technically it's a 3-volume) in eye space, which is the NDC space backprojected into it. The upper picture shows the effect of a symmetric frustum, i.e. left = -right, top = -bottom. The bottom picture shows an asymmetric frustum, i.e. left ≠ -right, top ≠ -bottom.
Take note that applying such an asymmetry (by your center offset) will not turn, i.e. rotate, your frustum, but skew it. The "camera" however will stay at the origin, still pointing down the -Z axis. Of course the center of the image projection will shift, but that's not what you want in your case.
Skewing the frustum like that has applications. Most importantly, it's the correct method to implement the different views of the left and right eye in a stereoscopic rendering setup.
The answer by Nicol Bolas pretty much tells you what you're doing wrong, so I'll skip that. You are looking for a solution rather than an explanation of what is wrong, so let's step right into it.
This is the code I use for the projection matrix:
glViewport(0, 0, mySize.x, mySize.y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovy, (float)mySize.x/(float)mySize.y, nearPlane, farPlane);
Some words to describe it:
glViewport sets the size and position of the display area for OpenGL inside the window. I always include it when updating the projection. If you use it like me, where mySize is a 2D vector with the window dimensions, the OpenGL render region will occupy the whole window. You should be familiar with the next 2 calls, so finally that gluPerspective. The first parameter is your field of view on the Y axis. It specifies the angle in degrees of how much you will see; I have never used anything other than 45. It could be used for zooming, though I prefer to leave that to the camera handling. The second parameter is the aspect ratio: it ensures that if you render a square and your window sides aren't in a 1:1 ratio, it will still appear square. The third is the near clipping plane; geometry closer than this to the camera won't get rendered. The same goes for farPlane, but on the contrary it sets the maximum distance at which geometry gets rendered.
This is the code for the modelview matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camera.GetEye().x,    camera.GetEye().y,    camera.GetEye().z,
          camera.GetLookAt().x, camera.GetLookAt().y, camera.GetLookAt().z,
          camera.GetUp().x,     camera.GetUp().y,     camera.GetUp().z);
And again something you should know: you can use the first 2 calls as before, so we skip to gluLookAt. I have a camera class that handles all the movement, rotations, and things like that. Eye, LookAt and Up are 3D vectors, and these 3 are really everything the camera is specified by. Eye is the position of the camera, where in space it is. LookAt is the position of the object you're looking at, or better, the point in 3D space at which you're looking, because it can be anywhere, not just the center of an object. And if you're wondering about the Up vector, it's really simple: it's a vector perpendicular to the vector (LookAt - Eye), but because there's an infinite number of such vectors, you must specify one. If your camera is at (0, 0, 0), you are looking at (0, 0, -1), and you want to be standing on your legs, the up vector will be (0, 1, 0). If you'd like to stand on your head instead, use (0, -1, 0). If you don't get the idea, just write a comment.
As you don't have any camera class, you need to store these 3 vectors separately yourself. I believe you have something like the center of the 3D object you're moving. Set that position as LookAt after every update. Also, in the initialization stage (when you're creating the 3D object), choose the position of the camera and the up vector. After every update to the object position, update the camera position the same way. If you move your object 1 unit up on the Y axis, do the same to the camera position. The up vector remains constant if you don't want to rotate the camera. And after every such update, call gluLookAt with the updated vectors.
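A minimal sketch of that update scheme (the Vec3 type and the function name are mine, just for illustration):

struct Vec3 { float x, y, z; };

Vec3 eye    = {0.0f, 0.0f, 10.0f};  // chosen once at initialization
Vec3 up     = {0.0f, 1.0f, 0.0f};   // constant while the camera doesn't roll
Vec3 lookAt;                        // refreshed on every object update

void onObjectMoved(const Vec3 &objectCenter, const Vec3 &delta)
{
    lookAt = objectCenter;  // always look at the object
    eye.x += delta.x;       // move the camera by the same amount,
    eye.y += delta.y;       // so the object stays centered in view
    eye.z += delta.z;
}

// then, each frame:
// gluLookAt(eye.x, eye.y, eye.z, lookAt.x, lookAt.y, lookAt.z, up.x, up.y, up.z);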
For the updated post:
I don't really get what's happening without a bigger frame of reference (but I don't need to know it anyway). There are a few things I'm curious about. If center is a 3D vector that stores your object's position, why are you setting the center of this object to be in the top right corner of your window? If it is the center, you should have those +diam terms in the 2nd and 4th parameters of glOrtho as well; if things get bad by doing this, you are using wrong names for your variables or doing something wrong somewhere before this. You're setting the LookAt position right in your updated post, but I don't see why you are using those parameters for Eye. You should have something more like centerX, centerY, centerZ - diam as the first 3 parameters in gluLookAt. That gives you the camera at the same X and Y position as your object, looking at it along the Z axis from a distance of diam.
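Spelled out, the suggested call would look something like this (a sketch using the names from the updated post):

gluLookAt(centerX, centerY, centerZ - diam,  // Eye: same X/Y as the object, diam away along Z
          centerX, centerY, centerZ,         // LookAt: the object's center
          0.0, 1.0, 0.0);                    // Up: constant, no camera roll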
The perspective projection matrix generated by glFrustum defines a camera space (the space of vertices that it takes as input) with the camera at the origin. You are trying to create a perspective matrix with a camera that is not at the origin. glFrustum can't do that, so the way you're attempting to do it simply will not work.
There are ways to generate a perspective matrix where the camera is not at the origin. But there's really no point in doing that.
The way a moving camera is generally handled is by simply adding a transform from the world space to the camera space of the perspective projection. This just rotates and translates the world to be relative to the camera. That's the job of gluLookAt. But your parameters to that are wrong too.
The first three values are the world space location of the camera. The next three should be the world-space location that the camera should look at (the position of your object).
In my program I'm loading in a 3D mesh to view and interact with. The user can rotate and scale the view. I will be using a rotation matrix for the rotation and calling glMultMatrix to rotate the view, and scaling using glScalef. The user can also paint the mesh, and this is why I need to translate the mouse coordinates to see if the resulting ray intersects with the mesh.
I've read http://www.opengl.org/resources/faq/technical/selection.htm and use the method where I call gluUnProject at the near and far planes and subtract the results. I have had some success with it, but only when gluLookAt's position is (0, 0, z), where z can be any reasonable number. When I move the position to, say, (0, 1, z), it goes haywire and returns an intersection where there is only empty space, and returns no intersection where the mesh is clearly underneath the mouse.
This is how I'm making the ray from the mouse click to the scene:
float hx, hy;
hx = mouse_x;
hy = mouse_y;

GLdouble m1x, m1y, m1z, m2x, m2y, m2z;
GLint viewport[4];
GLdouble projMatrix[16], mvMatrix[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, mvMatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);

// unproject to find actual coordinates
gluUnProject(hx, scrHeight - hy, 2.0, mvMatrix, projMatrix, viewport, &m1x, &m1y, &m1z);
gluUnProject(hx, scrHeight - hy, 10.0, mvMatrix, projMatrix, viewport, &m2x, &m2y, &m2z);

gmVector3 dir = gmVector3(m2x, m2y, m2z) - gmVector3(m1x, m1y, m1z);
dir.normalize();

gmVector3 point;
bool intersected = findIntersection(gmVector3(0, 0, 5), dir, point);
I'm using glFrustum if that makes any difference.
The findIntersection code is really long and I'm pretty confident it works, so I won't post it unless someone wants to see it. The gist of it is that for every face in the mesh, it finds the intersection between the ray and the face's plane, then checks whether the intersection point is inside the face.
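For reference, the plane part of that test could look something like this minimal sketch (using GLM types for illustration; my gmVector3 is analogous):

#include <glm/glm.hpp>
#include <cmath>

// Intersect a ray with the plane of a face; the point-in-face test that
// follows is omitted here.
bool rayPlaneIntersect(const glm::dvec3 &origin, const glm::dvec3 &dir,
                       const glm::dvec3 &planePoint, const glm::dvec3 &normal,
                       glm::dvec3 &hit)
{
    double denom = glm::dot(normal, dir);
    if (std::fabs(denom) < 1e-9) return false;   // ray parallel to the plane
    double t = glm::dot(normal, planePoint - origin) / denom;
    if (t < 0.0) return false;                   // plane is behind the ray
    hit = origin + dir * t;                      // candidate intersection point
    return true;
}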
I believe that it has to do with the camera's position and look-at vector, but I don't know how, or what to do with them so that the mouse clicks land on the mesh properly. Can anyone help me with this?
I also haven't yet made the rotation matrix or done anything with glScalef, so can anyone give me insight into this? Like, does gluUnProject account for the glMultMatrix and glScalef calls when calculating?
Many thanks!
The solution is ray tracing. The ray you shoot is defined by two points: the first is the origin of the camera, the second is the mouse position projected onto the view plane in the scene (the plane you describe with glFrustum). The intersection of this ray with your model is the point where your mouse click hit the model.
When making the ray from the camera into the scene using the ray direction dir, I should've used:
bool intersected = findIntersection(gmVector3(m1x,m1y,m1z), dir, point);
(Notice the different vector being passed to the function.) This solved my problem, and it didn't have anything to do with gluLookAt after all!
Also, for the second part of the question that I asked: yes, gluUnProject does take into account the glScalef and glMultMatrix calls.