Object position in world coordinates - xtk

I have some objects in a scene and I want to know how to get the world coordinates of the object after some rotations.
For example, I used this: X.matrix.multiplyByVector(X.cube.transform.matrix, 0, 0, 0); to get the world coordinates at the very beginning of the rendering process. The coordinates are (201.5, -54.5, 102.5).
Then I applied some rotations and used the formula again, but it displays the same coordinates as before, even though the object (a cube in this example) is now in another place in the scene.

Did you check the value of X.cube.transform.matrix before and after the rotation, to see if it gets modified?
Moreover, if your object is centered on (0, 0, 0) and you rotate it around its center (0, 0, 0), the position of the center will still be the same...
In this case you could try with the cube borders.
Does it make sense?
Hope this helps
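For illustration, here is a minimal sketch of that last point using GLM rather than xtk (the names are mine, not part of any xtk API): a pure rotation about the origin leaves a point at the origin unchanged, while a corner point does move, so checking a corner instead of the center should reveal the rotation.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // 90-degree rotation around the Y axis, about the origin
    glm::mat4 rotation = glm::rotate(glm::mat4(1.0f),
                                     glm::radians(90.0f),
                                     glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec4 center(0.0f, 0.0f, 0.0f, 1.0f); // cube centered at the origin
    glm::vec4 corner(1.0f, 1.0f, 1.0f, 1.0f); // one corner of the cube

    glm::vec4 centerAfter = rotation * center; // still (0, 0, 0): the center does not move
    glm::vec4 cornerAfter = rotation * corner; // roughly (1, 1, -1): the corner does
    return 0;
}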

Related

Determining position of rotated object

I'm writing a project in OpenGL and I've encountered a problem with determining the position of an object after translating and rotating the model-view matrix.
Just to visualize this, imagine how the Earth rotates around the Sun; basically, I need to determine the position of the Earth at runtime.
I'll divide my code into a few steps, let's assume we are at starting position of (0,0,0) and our rotation is equal to 0.
while(true)
{
modelViewMatrix.PushMatrix(); // save the current matrix
modelViewMatrix.Translate(1, 1, 0); // 1 - move to (1, 1, 0)
modelViewMatrix.Rotate(k++, 0, 1, 0); // 2 - rotate by an ever-increasing angle around the Y axis
object.Draw(); // 3 - draw the object
modelViewMatrix.PopMatrix(); // restore the saved matrix
}
1 - At this point determining the position is easy: it's (1, 1, 0).
2 - Now we rotate the object by a constantly incrementing angle to keep it moving around position (0, 0, 0).
3 - Drawing the object.
Now I know that modelViewMatrix stores information like rotation and position but I don't know how to utilize this to find out the actual position of my object after translating and rotating it.
Here's my try at drawing what I'm talking about, the red question mark (?) indicates an example position of object that I'm trying to find.
You should be able to create a Vec3 at (0, 0, 0) and transform it by your matrix; that will give you the position of your 'Earth'. Your object probably already has a position, so you really should be using your matrix to transform the object's actual position rather than changing your entire model-view matrix just to draw the object there.
If you're curious how these matrices work, google "homogeneous transformation matrix" to read up on them.
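A hedged sketch of that suggestion with GLM (assuming your modelViewMatrix can be read back as, or mirrored by, a glm::mat4; the function name is mine):

#include <glm/glm.hpp>

// Where the object's local origin ends up after the matrix is applied --
// i.e. the position of your 'Earth' in the space the matrix maps into.
glm::vec3 objectPosition(const glm::mat4& modelViewMatrix) {
    glm::vec4 p = modelViewMatrix * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
    return glm::vec3(p); // equivalently, the translation column of the matrix
}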

C++ OpenGL dragging multiple objects with mouse

Just wondering how someone would go about dragging 4 different
objects in OpenGL. I have very simple code to draw these objects:
glPushMatrix();
glTranslatef(mouse_x, mouse_y, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x2, mouse_y2, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x3, mouse_y3, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x4, mouse_y4, 0);
glutSolidIcosahedron();
glPopMatrix();
I understand how to move an object, but I want to learn how to drag and drop any one of these objects.
I've been researching about the Name Stack and Selection Mode, but it just confused the hell out of me.
And I also know about having to have some sort of glutMouseFunc.
It's just the selection of each shape I'm puzzled on.
The first thing you need to do is capture the position of the mouse on the screen when the button is clicked. There are plenty of ways to do it, but I believe it's outside the scope of this question.
Once you have the screen X, Y coords you must detect whether any object is selected and which one it is. There are two possible approaches. You can either keep track of a bounding rectangle for each object (in screen space), in which case testing whether the cursor is inside any of those rectangles is quite simple. Or you can cast a ray from the eye through the cursor position in world space and check the intersection of this ray with each object.
The second approach is more versatile for 3D graphics, but you seem to be using only X and Y coords, so you don't need to worry about the Z order of objects.
In the case of the first solution the main problem is: how do you know how big your object is on the screen? glutSolidIcosahedron() renders an object of radius 1. To calculate its screen radius you can either use some matrix math or, in this case, simple trigonometry. You will need to know the distance from the camera to the drawing plane (I believe you're using some glTranslatef(0, 0, X) before you render; X is your distance). You also need to know the view angle of the camera, which you set in the projection matrix. Now take a piece of paper, draw a cone of angle alpha intersecting a plane at distance X, and, knowing that the object has radius 1, you can easily calculate how large an area of the screen it occupies. (I'll leave this calculation for you.)
Now if you know the radius on screen, simply test the distance from your click position to each object; if the distance is below the radius, it's selected. If more than one object passes this test, just select the first of them.
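A minimal sketch of that screen-space test, assuming you already have each object's screen position (your mouse_x/mouse_y pairs) and have worked out screenRadius as described above; all names here are placeholders, not part of your code:

#include <cmath>

struct Object2D {
    float x, y; // screen-space position (same units as the click coordinates)
};

// Returns the index of the first object whose circle contains the click,
// or -1 if nothing was hit.
int pickObject(const Object2D* objects, int count,
               float clickX, float clickY, float screenRadius) {
    for (int i = 0; i < count; ++i) {
        float dx = clickX - objects[i].x;
        float dy = clickY - objects[i].y;
        if (std::sqrt(dx * dx + dy * dy) <= screenRadius)
            return i; // selected: start dragging this object
    }
    return -1;
}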

OpenGL - have object follow mouse

I want to have an object follow around my mouse on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is:
Get the coordinates within the window with glfwGetCursorPos.
The window was created with
window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL);
and the code to get coordinates is
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
Next, I use GLM's unProject to get the coordinates in "object space":
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);
There are two potential problems I can already see. The viewport is fine, as the initial x, y coordinates of the lower left are indeed 0, 0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero, but glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (since computer screens are 2D).
Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.
Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results: the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, possibly because the position vector I pass into unProject always has a z value of 0.0. However, varying the last coordinate of the position vector doesn't fix the fact that the object moves much more slowly than the mouse.
I think your main issue is actually the z coordinate. When you consider a point on the screen, it does not specify just a point in object space, but a whole straight line. When you use a perspective projection, you can draw a line from the eye position to any object-space point, and every point on such a line will be projected to the same screen-space point.
So what you get when you unproject with z=0 is actually the point on the near plane. Now it will depend on how far your object is away from the camera, as sketched here in top view.
What you get is point A in coordinates relative to the object-space origin. You could get point B back if you just read the Z value of the pixel under the mouse cursor back from the depth buffer.
However, I think that neither point A nor B really helps you. You need some further constraint. For example, you could specify that the distance of the object is not to change, and that the pixel the mouse is pointing at shall be the point where the object center (or any other reference point in object space) should appear. Then you could compute the point on the ray that the object has to be moved to. But it is not really clear what kind of movement you actually want to implement.
As a side note: Manually adjusting the vertex coordinates is not a good idea. The GPU can do this far better. You should just manipulate the model matrix to move the object around. And it would be useful in such a scheme not to project the point or ray into the object space, but into world space, and use that to update the model matrix of the object.
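To make the "distance stays fixed" idea concrete, here is a hedged GLM sketch: unproject the cursor at depths 0 and 1 to get a world-space ray (note View only, not View*Model), intersect it with the plane z = objectZ, and rebuild the model matrix from the result. The window size (1024x768, as in the question), the plane constraint, and the parameter names are assumptions, not something from your code.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 followMouse(double xpos, double ypos,
                      const glm::mat4& View, const glm::mat4& Projection,
                      float objectZ)
{
    glm::vec4 viewport(0.0f, 0.0f, 1024.0f, 768.0f);

    // GLFW gives y from the top; unProject expects it from the bottom.
    float winX = static_cast<float>(xpos);
    float winY = 768.0f - static_cast<float>(ypos);

    // Two points on the ray: one on the near plane (z = 0), one on the far plane (z = 1).
    glm::vec3 nearPt = glm::unProject(glm::vec3(winX, winY, 0.0f), View, Projection, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(winX, winY, 1.0f), View, Projection, viewport);

    // Intersect the ray with the plane z = objectZ in world space.
    glm::vec3 dir = farPt - nearPt;
    float t = (objectZ - nearPt.z) / dir.z; // assumes the ray is not parallel to the plane
    glm::vec3 hit = nearPt + t * dir;

    // Move the object there by rebuilding its model matrix instead of editing vertices.
    return glm::translate(glm::mat4(1.0f), hit);
}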

Drawing multiple objects

I have been learning DirectX for a while now and I have got to the point of drawing multiple objects to the screen from .obj files. My problem is that if I draw 2 objects, the 2nd draw always ends up on top of the 1st.
Example:
Obj1 = Cardboard box
Obj2 = Cube
loadStuff_&_draw(Obj1);
loadStuff_&_draw(Obj2);
That would draw the cube outside/in front of the box even if it is inside/behind it.
How would I make multiple objects draw together so that they overlap correctly?
The only drawing things I know of are:
Load vertex, index, constant buffers
updateSubresource()
drawIndexed()
Edit:
Here is a picture of a cube in a box. It shows that the cube is behind the box rather than inside. It also shows that the rim of the box clips behind the box. Not sure what happened.
I drew the cube, then drew the hollow box.
This is what the depth buffer (sometimes called the z-buffer) is for. When you write a pixel during the drawing of one object, it records the distance from the viewpoint for that pixel in the depth buffer. Then when you draw a second object that would also write to that pixel, it can compare the new object's depth value at that pixel with the buffered value (from previous objects) and only overwrites the pixel if the new value would be nearer than the old one. If the pixel gets drawn, then the depth value is updated to reflect the closer value.
Some links that might give you some ideas on how to implement this:
http://msdn.microsoft.com/en-gb/library/windows/desktop/bb205074%28v=vs.85%29.aspx
http://www.rastertek.com/dx11tut35.html
https://stackoverflow.com/a/8641210/28875
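For reference, here is a minimal sketch of creating, binding, and clearing a depth buffer in Direct3D 11; device, context, renderTargetView, width, and height are assumed to come from your existing setup, and error checking is omitted:

#include <d3d11.h>

// One-time setup: create a depth texture and its view, then bind it with the render target.
ID3D11DepthStencilView* createAndBindDepthBuffer(ID3D11Device* device,
                                                 ID3D11DeviceContext* context,
                                                 ID3D11RenderTargetView* renderTargetView,
                                                 UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC depthDesc = {};
    depthDesc.Width = width;   // should match your back buffer
    depthDesc.Height = height;
    depthDesc.MipLevels = 1;
    depthDesc.ArraySize = 1;
    depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    depthDesc.SampleDesc.Count = 1;
    depthDesc.Usage = D3D11_USAGE_DEFAULT;
    depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

    ID3D11Texture2D* depthTexture = nullptr;
    device->CreateTexture2D(&depthDesc, nullptr, &depthTexture);

    ID3D11DepthStencilView* depthView = nullptr;
    device->CreateDepthStencilView(depthTexture, nullptr, &depthView);

    // Bind the depth view together with the existing render target.
    context->OMSetRenderTargets(1, &renderTargetView, depthView);
    return depthView;
}

// Per frame, before any drawIndexed() calls, clear the depth buffer to "farthest".
void clearDepth(ID3D11DeviceContext* context, ID3D11DepthStencilView* depthView)
{
    context->ClearDepthStencilView(depthView, D3D11_CLEAR_DEPTH, 1.0f, 0);
}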
This depends on how you set the world matrix of each object. If you want the second object to be behind the first, set the correct world matrix before drawing. Suppose object 1 is placed at the origin (0, 0, 0); then translating object 2 to (0, 0, -10) will place it behind object 1, and translating it to (0, 0, 10) will place it in front of object 1.
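A hedged DirectXMath sketch of that second point (how the matrices reach your constant buffer depends on your own updateSubresource() setup; the variable names are mine):

#include <DirectXMath.h>
using namespace DirectX;

// Each object gets its own world matrix; translating along Z decides which one
// ends up behind the other (the sign depends on the handedness convention you use).
XMMATRIX worldBox  = XMMatrixIdentity();                      // object 1 at the origin
XMMATRIX worldCube = XMMatrixTranslation(0.0f, 0.0f, -10.0f); // object 2 placed behind it, as described above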

OpenGL rubiks cube - face rotation with mouse

I am working on my first real OpenGL project. It is a 3x3x3 Rubik's Cube. Here is a link to a simple screenshot of what I have so far (my rubiks cube).
Rotating the cube is done by dragging the mouse while holding the right mouse button. This works using the arcball example from the NeHe Tutorials (NeHe Arcball).
I have the class singleCubes, which represents one cube via 6 actual quads, stored in a display list that can be used in its draw method.
Class ComplexCube has a 3x3x3 array of singleCubes and is used as the interface when interacting with the complete Rubik's cube.
Now I want to rotate each specific face according to the mouse dragging while the left mouse button is down. I use picking to get the id of the corresponding side of the single cube the user clicked on; this also works. So I click on a side of one cube on a face and, depending on the direction of the dragging, I set a rotation and offset factor for the cubes that get affected. (I also want to implement it so that you actually see the face rotate instead of just changing the color.)
Now my problem is that when I rotate the Rubik's cube in any direction with right-mouse dragging, it can end up upside down, for example. So when I click on a side and want to rotate the face to the right, it goes in the wrong direction because I can't keep track of whether the cube is upside down or whatever. Due to the use of the arcball rotation I don't have an x- or y-rotation angle which I could use to determine this.
Question 1: How can I keep track of, or later retrieve, whether the cube is upside down, tilted, etc., in order to translate the mouse-dragging information (when rotating one face) while using the arcball example linked above?
// In render function
glPushMatrix();
{
glMultMatrixf(Transform.M); // Rotation applied by arcball object
complCube.draw(); // Draw all the cubes using display lists
}
glPopMatrix();
Setup: C++ with Microsoft Visual Studio 2008, GLEW, freeglut
You could use gluUnProject to convert mouse coordinates to 3d space and get a vector (difference between two points). This vector could then be used to apply a "force" to the selected face. Since gluUnProject uses the projection matrix, it would automatically deal with the orientation of the camera.
Basically, once you get your "force" vector, you project it onto the three axes (so onto (1,0,0), (0,1,0), (0,0,1)). Then choose the one with the largest magnitude. Then you have to convert this direction into a rotation axis as in the diagram below (sorry for the bad paint skills):
So what we have is the "force" vector in black and the selected Rubik's face in grey. To get the rotation axis, just take the cross product of the "force" vector with the normal of the selected face. This gives the red arrow. From that, you should be able to rotate your cubes in the right direction.
Edit to answer the question in more detail
So, continuing from my explanation, I will give an example of how this will help you. Let's first assume your screen is 800x800 pixels and your Rubik's cube is always centred. Now let's also assume, as per your drawings in the comments, that we are in the case on the left.
We drag the mouse and get two positions which, using gluUnProject, are transformed into world coordinates (the numbers were chosen to show my point, not by any calculation):
p1 : (600, 600) -> (1, -0.5, 0)
p2 : (630, 605) -> (1.3, -0.505, 0)
Now we get the difference vector: p2 - p1 = v = (0.3, -0.05, 0). The reason I was saying to "project onto the three axes" is so that you extract your major movement (which in this case is 0.3 along the x axis), since the Rubik's cube can't rotate along diagonals. To do the "projection" you just have to take the x, y, z components individually and create vectors from them, so you wind up with:
v1 = (0.3, 0, 0)
v2 = (0, -0.05, 0)
v3 = (0, 0, 0)
Now take the magnitudes and discard the smaller vectors, so we are left with the vector v1 = (0.3, 0, 0). This is your movement vector in world space. Now you take the cross product of that vector with the normal vector of the selected face (which in this case would be (0, 0, 1)). This gives you a vector which points down, (0, 1, 0) after normalization (in this step you will probably also have to extract only the largest component, e.g. (0.02, 1.2, 0.8) -> (0, 1, 0), otherwise you would get bizarre rotations if your camera was not pointing directly along the main axes). You can now use that vector as the rotation axis and use 0.3 as your rotation amount (if it rotates in the opposite direction to that expected, just put a -).
Now how does this help if your cube is upside down? Suppose we click on the screen in the same way. We now get:
p1 : (600, 600) -> (-1, 0.5, 0)
p2 : (630, 605) -> (-1.3, 0.505, 0)
See the difference in the world coordinates? They are inverted! So now the difference vector is p2 - p1 = v = (-0.3, 0.05, 0). Extracting the largest-component vector gives (-0.3, 0, 0). Doing the cross product once again gives you the rotation axis, but now the rotation is in the opposite direction, which is what you want.
Another reason for the cross product with the normal of the face is that if you were to select the faces on the top (in our drawings), it would give a rotation axis along either the x or the z axis (to the left, or into the screen), which is what you want for the top faces.
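If it helps, here is a hedged GLM sketch of those steps; the function and parameter names are mine, and p1 and p2 are the two unprojected world-space mouse positions:

#include <cmath>
#include <glm/glm.hpp>

// Zero out everything except the component with the largest magnitude.
glm::vec3 dominantAxis(const glm::vec3& v) {
    glm::vec3 a(std::fabs(v.x), std::fabs(v.y), std::fabs(v.z));
    if (a.x >= a.y && a.x >= a.z) return glm::vec3(v.x, 0.0f, 0.0f);
    if (a.y >= a.z)               return glm::vec3(0.0f, v.y, 0.0f);
    return glm::vec3(0.0f, 0.0f, v.z);
}

// faceNormal: the normal of the picked face, e.g. (0, 0, 1) in the example above.
// No handling here for a drag parallel to the face normal (zero cross product).
void faceRotationFromDrag(const glm::vec3& p1, const glm::vec3& p2,
                          const glm::vec3& faceNormal,
                          glm::vec3& rotationAxis, float& rotationAmount)
{
    glm::vec3 force = dominantAxis(p2 - p1);                        // e.g. (0.3, 0, 0)
    rotationAxis    = dominantAxis(glm::cross(force, faceNormal));  // keep the major component
    rotationAxis    = glm::normalize(rotationAxis);                 // e.g. (0, 1, 0) up to sign
    rotationAmount  = glm::length(force);                           // e.g. 0.3; negate if it turns the wrong way
}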
Like most of us, you will encounter the famous problem called Gimbal Lock.
see: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=208925
This problem is extremely well documented so there is not much point for me to go into details here. I am sure you will find a ton of information about it.