I am attempting to cast a ray from the center of the screen and check for collisions with objects.
When rendering, I use these calls to set up the camera:
GL11.glRotated(mPitch, 1, 0, 0);
GL11.glRotated(mYaw, 0, 1, 0);
GL11.glTranslated(mPositionX, mPositionY, mPositionZ);
I am having trouble creating the ray, however. This is the code I have so far:
ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
ray.direction = new Vector(?, ?, ?);
My question is: what should I put in the question-mark spots? That is, how can I create the ray direction from the pitch and yaw?
I answered a question not unlike yours just recently, so I suggest you read this: 3d coordinate from point and angles
This applies to your question as well, except that you don't want just a point, but a ray. Remember that a point can be treated as a displacement-from-origin vector, and that a ray is defined as
r(t) = v*t + s
In your case, s is the camera position, and v would be a point relative to the camera's position. You figure the rest (or ask, if things are still unclear).
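For what it's worth, here is a minimal C++-style sketch of the direction part. It assumes mPitch and mYaw are in degrees and that zero angles mean looking straight down the negative z-axis (the usual OpenGL convention); adapt the signs if your setup differs.
#include <cmath>
// Minimal sketch: build the direction vector v from pitch and yaw.
const double DEG2RAD = 3.14159265358979323846 / 180.0;
double pitch = mPitch * DEG2RAD;
double yaw   = mYaw   * DEG2RAD;
double dirX =  std::sin(yaw) * std::cos(pitch);
double dirY = -std::sin(pitch);
double dirZ = -std::cos(yaw) * std::cos(pitch);
// ray.origin    = the camera position, i.e. s in r(t) = v*t + s
// ray.direction = (dirX, dirY, dirZ), i.e. v, which is already unit length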
I'm writing a project in OpenGL and I've encountered a problem with determining the position of an object after translating and rotating the model-view matrix.
Just to visualize this, imagine how the Earth rotates around the Sun; basically, I need to determine the position of the Earth at runtime.
I'll divide my code into a few steps. Let's assume we are at a starting position of (0, 0, 0) and our rotation is equal to 0.
while(true)
{
modelViewMatrix.PushMatrix(); //
modelViewMatrix.Translate(1, 1, 0); // 1
modelViewMatrix.Rotate(k++, 0, 1, 0); // 2
object.Draw(); // 3
modelViewMatrix.PopMatrix(); //
}
1 - At this point determining position is easy, it's (1, 1, 0)
2 - Now we rotate the object by a constantly incrementing value to keep it moving around position (0, 0, 0)
3 - Drawing the object
Now I know that modelViewMatrix stores information like rotation and position but I don't know how to utilize this to find out the actual position of my object after translating and rotating it.
Here's my attempt at drawing what I'm talking about; the red question mark (?) indicates an example position of the object that I'm trying to find.
You should be able to create a Vec3 at (0, 0, 0) and transform it by your matrix. That will give you the position of your 'Earth'. Your object probably already has a position, so you really should be using your matrix to transform your object's actual position rather than changing your entire model-view matrix just to draw the object there.
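As a sketch of that idea (GLM is used here purely for illustration, and angleDegrees stands in for the k from the loop above; use whatever matrix type your stack actually exposes):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Rebuild the same transform that is fed to the matrix stack...
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 1.0f, 0.0f));                  // step 1
model = glm::rotate(model, glm::radians(angleDegrees), glm::vec3(0, 1, 0));  // step 2
// ...and transform the local origin by it; the w = 1 makes the translation apply.
glm::vec3 objectPos = glm::vec3(model * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));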
If you're curious how these matrices work, google "homogeneous transformation matrix" to read up on them.
I want to rotate my model matrix around the x, y, and z axes, but it rotates in an unexpected way.
I use Qt.
QMatrix4x4 mMatrix;
mMatrix.setToIdentity();
mMatrix.rotate(yAngle, QVector3D(0, 1, 0));
mMatrix.rotate(zAngle, QVector3D(0, 0, 1));
mMatrix.translate(cube->getPosition());
After the first rotation, the subsequent rotations rotate around the axes of the new model matrix, while I want them to rotate around the world origin.
I drew a little sketch so my problem might be clearer (black shows how it is right now, the green arrow shows how I want it to be):
I think you will get the desired result if you simply reverse the order of calls you make on the QMatrix4x4 class. To combine the two rotations:
QMatrix4x4 mMatrix;
mMatrix.setToIdentity();
mMatrix.rotate(zAngle, QVector3D(0, 0, 1));
mMatrix.rotate(yAngle, QVector3D(0, 1, 0));
The documentation is kind of lacking, but from looking at the QMatrix4x4 source code, I'm getting the impression that it applies matrix operations the way it's more or less standard with matrix libraries that are used for OpenGL, and the way the OpenGL fixed pipeline used to work.
This means that when you combine matrices, the new matrix is multiplied from the right. As a result, when the combined matrix is applied to vectors, the last specified sub-transformation is applied to the vectors first. Or putting it differently, you specify transformations in the reverse of the order you want them applied.
In your example, if you want to rotate around the y-axis first, then around the z-axis, you need to specify the z-rotation first, then the y-rotation.
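If you want to convince yourself of the order, a small hypothetical check (reusing mMatrix, yAngle and zAngle from the snippet above) is to compare the combined matrix against the two rotations applied one after the other:
#include <QMatrix4x4>
#include <QVector3D>
QMatrix4x4 yOnly, zOnly;                       // both start out as identity
yOnly.rotate(yAngle, QVector3D(0, 1, 0));
zOnly.rotate(zAngle, QVector3D(0, 0, 1));
QVector3D v(1.0f, 0.0f, 0.0f);
QVector3D combined = mMatrix.map(v);           // z specified first, then y
QVector3D stepwise = zOnly.map(yOnly.map(v));  // y applied first, then z
// combined and stepwise should match (up to floating-point error).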
I figured it out by myself: each object needs to have its own rotation matrix / quaternion.
A new rotation around the world origin is done by creating a rotation matrix for the wanted rotation and right-multiplying it by the existing rotation matrix of the object (so the new rotation ends up on the left).
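A small sketch of that accumulation (cube->rotation and angleStep are made-up names; the rest uses the same Qt types as above):
// The new rotation is built about a world axis and pre-multiplied.
QMatrix4x4 delta;
delta.rotate(angleStep, QVector3D(1, 0, 0));
cube->rotation = delta * cube->rotation;   // world-space rotation goes on the left
// When drawing, build the model matrix from the accumulated rotation:
QMatrix4x4 mMatrix;
mMatrix.setToIdentity();
mMatrix *= cube->rotation;
mMatrix.translate(cube->getPosition());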
Well, if you want the green arrow thing to happen, then it means that you want a rotation around the x-axis, 90 degrees in the negative direction (or 270 degrees in the positive direction).
To make things simple, think of it like this:
Which of the axes there remains the same? x, right? Since x doesn't change, it looks like you're rotating around the x, and you are.
Now, point your thumb towards the direction that positive x points at and casually close the rest of your fingers like a cylinder. The direction in which the rest of your fingers curl is the direction of the rotation.
Since you want a rotation of 90 degrees in the opposite direction that those 4 fingers of yours are curling at, or 270 in the same direction... yeah.
(1, 0, 0) would be the direction vector pointing towards the positive x. While I don't know so much about the functions you're using, my guess is that the second call should be:
mMatrix.rotate(270, QVector3D(1, 0, 0));
instead, or optionally -90 if negative values are supported.
In my program I'm loading in a 3D mesh to view and interact with. The user can rotate and scale the view. I will be using a rotation matrix for the rotation and calling multmatrix to rotate the view, and scaling using glScalef. The user can also paint the mesh, and this is why I need to translate the mouse coordinates to see if it intersects with the mesh.
I've read http://www.opengl.org/resources/faq/technical/selection.htm and the method where I use gluUnproject at the near and far plane and subtracting, and I have some success with it, but only when gluLookAt's position is (0, 0, z) where z can be any reasonable number. When I move the position to say (0, 1, z), it goes haywire and returns an intersection where there is only empty space, and returns no intersection where the mesh is clearly underneath the mouse.
This is how I'm making the ray from the mouse click to the scene:
float hx, hy;
hx = mouse_x;
hy = mouse_y;
GLdouble m1x,m1y,m1z,m2x,m2y,m2z;
GLint viewport[4];
GLdouble projMatrix[16], mvMatrix[16];
glGetIntegerv(GL_VIEWPORT,viewport);
glGetDoublev(GL_MODELVIEW_MATRIX,mvMatrix);
glGetDoublev(GL_PROJECTION_MATRIX,projMatrix);
//unproject to find actual coordinates
gluUnProject(hx,scrHeight-hy,2.0,mvMatrix,projMatrix,viewport,&m1x,&m1y,&m1z);
gluUnProject(hx,scrHeight-hy,10.0,mvMatrix,projMatrix,viewport,&m2x,&m2y,&m2z);
gmVector3 dir = gmVector3(m2x,m2y,m2z) - gmVector3(m1x,m1y,m1z);
dir.normalize();
gmVector3 point;
bool intersected = findIntersection(gmVector3(0,0,5), dir, point);
I'm using glFrustum if that makes any difference.
The findIntersection code is really long and I'm pretty confident it works, so I won't post it unless someone wants to see it. The gist of it is that for every face in the mesh, find intersection between the ray and the plane, then see if the intersection point is inside the face.
I believe that it has to do with the camera's position and look at vector, but I don't know how, and what to do with them so that the mouse clicks on the mesh properly. Can anyone help me with this?
I also haven't yet made the rotation matrix or done anything with the glScalef, so can anyone give me insight into this? Like, does gluUnProject account for the glMultMatrix and glScalef calls when calculating?
Many thanks!
The solution is with raytracing. The ray you shoot is defined through two points. The first one is the origin of the camera; the second one is the mouse position projected onto the view plane in the scene (the plane you describe with glFrustum). The intersection of this ray with your model is the point where your mouse click has hit the model.
When making the ray from the camera to the scene using the ray direction, I should've used:
bool intersected = findIntersection(gmVector3(m1x,m1y,m1z), dir, point);
(notice the different vector being passed to the function). This solved my problem, and didn't have anything to do with the gluLookAt after all!
Also, for the second part of the question that I asked: yes, gluUnProject does take into account the glScalef and glMultMatrix calls.
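For completeness, here is a sketch of the ray construction with that fix folded in (same variables and findIntersection as in the question; the conventional 0 and 1 window-depth values are used for the near and far planes):
// Unproject the mouse position at the near and far planes (winZ in 0..1).
gluUnProject(hx, scrHeight - hy, 0.0, mvMatrix, projMatrix, viewport, &m1x, &m1y, &m1z);
gluUnProject(hx, scrHeight - hy, 1.0, mvMatrix, projMatrix, viewport, &m2x, &m2y, &m2z);
gmVector3 origin(m1x, m1y, m1z);                   // start the ray at the near point...
gmVector3 dir = gmVector3(m2x, m2y, m2z) - origin;
dir.normalize();
gmVector3 point;
bool intersected = findIntersection(origin, dir, point);  // ...not at a hard-coded (0,0,5)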
I've been working on a space sim for some time now.
At first I was using my own 3D engine with a software rasterizer.
But I gave up when the time came to implement textures.
Now I've started again after some time, and I'm using OpenGL (with SDL) instead to render the 3D models.
But now I hit another brick wall.
I can't figure out how to make proper rotations.
Being a space simulator, I want controls similar to a flight sim's.
Using
glRotatef(angleX, 1.0f, 0.0f, 0.0f);
glRotatef(angleY, 0.0f, 1.0f, 0.0f);
glRotatef(angleZ, 0.0f, 0.0f, 1.0f);
or similar,
does not work properly if I first rotate the model (spaceship) 90 degrees to the left and then rotate it "up".
Instead it rolls.
Here's an image that illustrates my problem.
Image Link
I tried several tricks to counter this, but somehow I feel I'm missing something.
It doesn't help either that simulator-style rotation examples are almost impossible to find.
So I'm searching for examples, links, and the theory of rotating a 3D model (like a spaceship or airplane).
Should I be using three vectors (left, up, forward) for the orientation? I'm also going to have to calculate things like acceleration from thrusters, which changes with the rotation (orientation?) and which, from the model's perspective, points in a fixed direction (like rocket engines do).
I'm not very good with math, and trying to visualize a solution just gives me a headache.
I'm not sure I entirely understand the situation, but it sounds like you might be describing gimbal lock. You might want to look at using quaternions to represent your rotations.
Getting this right certainly can be challenging. The problem I think you are facing is that you are using the same transformation matrices for rotations regardless of how the 'ship' is already oriented. But you don't want to rotate your ship based on how it would turn when it's facing forward, you want to rotate based on how it's facing now. To do that, you should transform your controlled turn matrices the same way you transform your ship.
For instance, say we've got three matrices, each representing the kinds of turns we want to do.
float theta = 10.0 * (pi / 180.0)
matrix<float> roll = [[  cos(theta), sin(theta), 0 ],
                      [ -sin(theta), cos(theta), 0 ],
                      [  0,          0,          1 ]]
matrix<float> pitch = [[  cos(theta), 0, sin(theta) ],
                       [  0,          1, 0          ],
                       [ -sin(theta), 0, cos(theta) ]]
matrix<float> yaw = [[ 1,  0,           0          ],
                     [ 0,  cos(theta),  sin(theta) ],
                     [ 0, -sin(theta),  cos(theta) ]]
matrix<float> orientation = [[ 1, 0, 0 ],
                             [ 0, 1, 0 ],
                             [ 0, 0, 1 ]]
each of which represents 10 degrees of rotation about one of the three flight attitude axes. We also have a matrix for your ship's orientation, initially just straight ahead. You will transform your ship's vertices by that orientation matrix to display it.
Then, to get your orientation after a turn, you need just a bit of cleverness: first transform the attitude control matrices into the player's coordinates, and then apply that to the orientation to get a new orientation. Something like:
function do_roll(float amount):
matrix<float> local_roll = amount * (roll * orientation)
orientation = orientation * local_roll
function do_pitch(float amount):
matrix<float> local_pitch = amount * (pitch * orientation)
orientation = orientation * local_pitch
function do_yaw(float amount):
matrix<float> local_yaw = amount * (yaw * orientation)
orientation = orientation * local_yaw
so that each time you want to rotate in one way or another, you just call one of those functions.
What you're going to want to use here is quaternions. They eliminate the strange behaviors you are experiencing. Think of them as a matrix on steroids with similar functionality. You CAN use matrices better than you are (in your code above) by using whatever OpenGL functionality lets you build a rotation matrix about a particular axis vector, but quaternions will store your rotations for future modification. For example, you start with an identity quaternion and rotate it about a particular axis vector. The quaternion then gets converted into a world matrix for your object, but you keep the quaternion stored within your object. The next time you need to perform a rotation, just modify that quaternion further instead of having to keep track of X, Y, and Z axis rotation degrees, etc.
My experience is with DirectX (sorry, no OpenGL experience here), though I did once run into your problem when I was attempting to rotate beach balls that were bouncing around a room and rotating as they encountered floors, walls, and each other.
Google has a bunch of options on "OpenGL Quaternion", but this, in particular, appears to be a good source:
http://gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
As you may have guessed by now, Quaternions are great for handling cameras within your environment. Here's a great tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=Quaternion_Camera_Class
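As a rough sketch of the "store a quaternion per object, modify it on each rotation, convert it to a matrix for drawing" flow described above (GLM is used purely for illustration here; Ship, pitchShip and shipMatrix are made-up names):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
struct Ship {
    glm::quat orientation = glm::quat(1.0f, 0.0f, 0.0f, 0.0f);  // identity = facing forward
};
// Pitch the ship by 'degrees' about its own right axis (local x).
void pitchShip(Ship& ship, float degrees) {
    glm::quat delta = glm::angleAxis(glm::radians(degrees), glm::vec3(1.0f, 0.0f, 0.0f));
    ship.orientation = ship.orientation * delta;          // post-multiply: local-axis turn
    ship.orientation = glm::normalize(ship.orientation);  // keep it unit length
}
// When rendering, convert the stored quaternion into a matrix for the pipeline.
glm::mat4 shipMatrix(const Ship& ship) {
    return glm::mat4_cast(ship.orientation);
}
Post-multiplying by delta turns the ship about its own current axis; pre-multiplying (delta * orientation) would turn it about the fixed world axis instead.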
You should study 3D mathematics so that you can gain a deeper understanding of how to control rotations. If you don't know the theory, it can be hard to even copy and paste correctly. Specifically, texts such as 3D Math Primer (Amazon) and sites like http://gamemath.com will greatly aid you in your project (and all future ones).
I understand you may not like math now, but learning the relevant arithmetic will be the best solution to your issue.
Quaternions may help, but a simpler solution may be to observe a strict order of rotations. It sounds like you're rotating around y, and then rotating around x. You must always rotate x first, then y, then z. Not that there's anything particularly special about that order, just that if you do it that way, rotations tend to work a little bit closer to how you expect them to work.
Edit: to clarify a little bit, you also shouldn't accumulate rotations across time in the game. Each frame, you should start your model out at the identity position and then rotate x, y, then z to that frame's new position.
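A minimal fixed-pipeline sketch of that per-frame rebuild (drawShip is a made-up stand-in for whatever draws the model, and angleX/angleY/angleZ are this frame's absolute angles, not increments):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                     // start from identity every frame
glRotatef(angleX, 1.0f, 0.0f, 0.0f);  // x first...
glRotatef(angleY, 0.0f, 1.0f, 0.0f);  // ...then y...
glRotatef(angleZ, 0.0f, 0.0f, 1.0f);  // ...then z
drawShip();                           // hypothetical draw call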
General rotations are difficult. Physicists tend to use some set of so-called Euler angles to describe them. In this method a general rotation is described by three angles taken about three axes in fixed succession. But the three axes are not the X-, Y-, and Z-axes of the original frame. They are often the Z-, Y-, and Z-axes of the original frame (yes, it really is completely general), or two axes from the original frame followed by an axis in the intermediate frame. Many choices are available, and making sure that you are following the same convention all the way through can be a real hassle.
I am trying to make a very simple object rotate around a fixed point in 3D space.
Basically my object is created from a single D3DXVECTOR3, which indicates the current position of the object relative to a single constant point. Let's just say (0, 0, 0).
I already calculate my angle based on the current in-game time of day.
But how can I apply that angle to the position so it will rotate?
Sorry, I'm pretty new to DirectX.
So are you trying to plot the sun or the moon?
If so then one assumes your celestial object is something like a sphere that has (0,0,0) as its center point.
Probably the easiest way to rotate it into position is to do something like the following
D3DXMATRIX matRot;
D3DXMATRIX matTrans;
D3DXMatrixRotationX( &matRot, angle );
D3DXMatrixTranslation( &matTrans, 0.0f, 0.0f, orbitRadius );
D3DXMATRIX matFinal = matTrans * matRot;
Then set that matrix as your world matrix.
What it does is create a rotation matrix to rotate the object by "angle" around the X axis (i.e. in the Y-Z plane). It then creates a matrix that pushes the object out to the appropriate place at the zero angle (orbitRadius may be better off as the 3rd parameter in the translation call, depending on where your zero point is). The final line multiplies these two matrices together. Matrix multiplication is non-commutative (i.e. M1 * M2 != M2 * M1). What the above does is move the object orbitRadius units along the Z axis and then rotate that around the point (0, 0, 0). You can think of it as rotating an object held in your hand: if orbitRadius is the distance from your elbow to your hand, then any rotation around your elbow (at (0, 0, 0)) is going to form an arc through the air.
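To actually apply it, the usage would be something along these lines (g_pDevice is a made-up name for your IDirect3DDevice9 pointer, and matFinal is the matrix built above):
// Set the combined matrix as the world transform, then draw the object
// whose mesh is centered on (0, 0, 0).
g_pDevice->SetTransform(D3DTS_WORLD, &matFinal);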
I hope that helps, but I would really recommend doing some serious reading up on Linear Algebra. The more you know the easier questions like this will be to solve yourself :)