gluProject on NDS?

I've been struggling with this for a good while now. I'm trying to determine the screen coordinates of the vertices of a model on the screen of my NDS using devkitPro. The library seems to implement some functionality of OpenGL, but in particular the gluProject function is missing, which would (I assume) allow me to do exactly that, easily.
I've been trying for a good while now to calculate the screen coordinates manually using the projection matrices that are stored in the DS's registers, but I haven't had much luck, even when trying to build the projection matrix from scratch based on OpenGL's documentation. Here is the code I'm trying to use:
void get2DPoint(v16 x, v16 y, v16 z, float &result_x, float &result_y)
{
    //Wait for the graphics engine to be ready
    /*while (*(int*)(0x04000600) & BIT(27))
        continue;*/

    //Read in the matrix that we're currently transforming with
    double currentMatrix[4][4]; int i;
    for (i = 0; i < 16; i++)
        currentMatrix[0][i] =
            (double(((int*)0x04000640)[i]))/(double(1<<12));

    //Now this hurts-- take that matrix, and multiply it by the projection
    //matrix, so we obtain proper screen coordinates.
    double f = 1.0 / tan(70.0/2.0);
    double aspect = 256.0/192.0;
    double zNear = 0.1;
    double zFar = 40.0;
    double projectionMatrix[4][4] =
    {
        { (f/aspect), 0.0, 0.0, 0.0 },
        { 0.0, f, 0.0, 0.0 },
        { 0.0, 0.0, ((zFar + zNear) / (zNear - zFar)), ((2*zFar*zNear)/(zNear - zFar)) },
        { 0.0, 0.0, -1.0, 0.0 },
    };

    double finalMatrix[4][4];
    //Ugh...
    int mx = 0; int my = 0;
    for (my = 0; my < 4; my++)
        for (mx = 0; mx < 4; mx++)
            finalMatrix[mx][my] =
                currentMatrix[my][0] * projectionMatrix[0][mx] +
                currentMatrix[my][1] * projectionMatrix[1][mx] +
                currentMatrix[my][2] * projectionMatrix[2][mx] +
                currentMatrix[my][3] * projectionMatrix[3][mx] ;

    double dx = ((double)x) / (double(1<<12));
    double dy = ((double)y) / (double(1<<12));
    double dz = ((double)z) / (double(1<<12));

    result_x = dx*finalMatrix[0][0] + dy*finalMatrix[0][1] + dz*finalMatrix[0][2] + finalMatrix[0][3];
    result_y = dx*finalMatrix[1][0] + dy*finalMatrix[1][1] + dz*finalMatrix[1][2] + finalMatrix[1][3];

    result_x = ((result_x*1.0) + 4.0)*32.0;
    result_y = ((result_y*1.0) + 4.0)*32.0;

    printf("Result: %f, %f\n", result_x, result_y);
}
There are lots of shifts involved; the DS works internally with fixed-point notation, and I need to convert that to doubles to work with. What I'm getting seems to be somewhat correct: the pixels are translated perfectly if I'm using a flat quad that's facing the screen, but the rotation is wonky. Also, since I'm going by the projection matrix (which accounts for the screen width/height?), the last steps I'm using don't seem right at all. Shouldn't the projection matrix be accomplishing the step up to screen resolution for me?
I'm rather new to all of this. I've got a fair grasp of matrix math, but I'm not as skilled as I would like to be in 3D graphics. Does anyone here know a way, given the 3D, non-transformed coordinates of a model's vertices, and also given the matrices which will be applied to them, to actually come up with the screen coordinates, without using OpenGL's gluProject function? Can you see something blatantly obvious that I'm missing in my code? (I'll clarify when possible; I know it's rough, this is a prototype I'm working on, and cleanliness isn't a high priority.)
Thanks a bunch!
PS: As I understand it, currentMatrix, which I pull from the DS's registers, should be giving me the combined projection, translation, and rotation matrix, as it should be the exact matrix that's going to be used for the transformation by the DS's own hardware, at least according to the specs at GBATEK. In practice, it doesn't seem to actually have the projection applied to it, which I suppose has something to do with my issues. But I'm not sure, as calculating the projection myself isn't generating different results.

That is almost correct.
The correct steps are:
Multiply the Modelview with the Projection matrix (as you've already done).
Extend your 3D vertex to a homogeneous coordinate by adding a W component with value 1. E.g. your (x,y,z) vector becomes (x,y,z,w) with w = 1.
Multiply this vector with the matrix product. Your matrix should be 4x4 and your vector of size 4. The result will be a vector of size 4 as well (don't drop w yet!). The result of this multiplication is your vector in clip space. FYI: You can already do a couple of very useful things here with this vector, such as testing whether the point is on the screen at all. The six conditions are:
x < -w : Point is outside the screen (left of the viewport)
x > w : Point is outside the screen (right of the viewport)
y < -w : Point is outside the screen (below the viewport)
y > w : Point is outside the screen (above the viewport)
z < -w : Point is outside the screen (closer than znear)
z > w : Point is outside the screen (beyond zfar)
Project your point into 2D space. To do this divide x and y by w:
x' = x / w;
y' = y / w;
If you're interested in the depth value (i.e. what gets written to the z-buffer) you can project z as well:
z' = z / w
Note that the previous step won't work if w is zero. This case happens if your point is equal to the camera position. The best you can do then is to set x' and y' to zero (which will move the point to the center of the screen in the next step).
Final Step: Get the OpenGL viewport coordinates and apply it:
x_screen = viewport_left + (x' + 1) * viewport_width * 0.5;
y_screen = viewport_top + (y' + 1) * viewport_height * 0.5;
Important: The y coordinate of your screen may be upside down. Contrary to most other graphics APIs, in OpenGL y = 0 denotes the bottom of the screen.
That's all.
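For reference, here's what those steps can look like in plain C++ (my own sketch, not NDS-specific: it assumes a row-major mvp that already holds Projection * Modelview, and hard-codes the DS's 256x192 viewport):

void project(const float mvp[4][4], float x, float y, float z,
             float &screen_x, float &screen_y, bool &on_screen)
{
    // Homogeneous input vector (w = 1)
    float v[4] = { x, y, z, 1.0f };

    // Multiply into clip space: clip = mvp * v
    float clip[4];
    for (int row = 0; row < 4; row++)
        clip[row] = mvp[row][0]*v[0] + mvp[row][1]*v[1]
                  + mvp[row][2]*v[2] + mvp[row][3]*v[3];

    // The six clip tests from above, collapsed: visible iff -w <= x,y,z <= w
    float w = clip[3];
    on_screen = (-w <= clip[0] && clip[0] <= w &&
                 -w <= clip[1] && clip[1] <= w &&
                 -w <= clip[2] && clip[2] <= w);

    // Perspective divide (guard against w == 0, i.e. point at the camera)
    float px = 0.0f, py = 0.0f;
    if (w != 0.0f) { px = clip[0] / w; py = clip[1] / w; }

    // Viewport transform with viewport_left/top = 0 on a 256x192 screen
    screen_x = (px + 1.0f) * 256.0f * 0.5f;
    screen_y = (py + 1.0f) * 192.0f * 0.5f;  // may need flipping, see above
}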

I'll add some more thoughts to Nils' thorough answer.
Don't use doubles. I'm not familiar with the NDS, but I doubt it has any hardware for double math (see the fixed-point sketch after this list).
I also suspect the modelview and projection are already multiplied together if you are reading the hardware registers. I have yet to see a hardware platform that does not use the full MVP in the registers directly.
The matrix storage in the registers may or may not be in the same order as OpenGL's. If it is not, the matrix-vector multiplication needs to be done in the other order.
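For example, a multiply in the DS's 20.12 fixed-point clip-matrix format (per GBATEK) needs only a 64-bit intermediate rather than doubles. A minimal sketch, assuming signed 1.19.12 values like the question's registers:

#include <stdint.h>

typedef int32_t fx32; // 1 sign bit, 19 integer bits, 12 fraction bits

static inline fx32 fx_mul(fx32 a, fx32 b)
{
    // Widen to 64 bits so the intermediate product can't overflow,
    // then shift the extra 12 fraction bits back out.
    return (fx32)(((int64_t)a * b) >> 12);
}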

Related

How to rotate model to follow path

I have a spaceship model that I want to move along a circular path. I want the nose of the ship to always point in the direction it is moving in.
Here is the code I have to move it in a circle right now:
glm::mat4 m = glm::mat4(1.0f);
//time
long value_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::time_point_cast<std::chrono::milliseconds>(
            std::chrono::high_resolution_clock::now())
        .time_since_epoch())
    .count();
//translate
m = glm::translate(m, translate);
m = glm::translate(m, glm::vec3(-50, 0, -20));
m = glm::scale(m, glm::vec3(0.025f, 0.025f, 0.025f));
m = glm::translate(m, glm::vec3(1800, 0, 3000));
float speed = .002;
float x = 100 * cos(value_ms * speed); // + 1800;
float y = 0;
float z = 100 * sin(value_ms * speed); // + 3000;
m = glm::translate(m, glm::vec3(x, y, z));
How would I move it so the nose always points ahead? I tried doing glm::rotate with the rotation axis set as x or y or z but I cannot get it to work properly.
First see Understanding 4x4 homogenous transform matrices as I am using terminology and stuff from there...
It's usual to use the object's transform matrix for its navigation purposes, and not the other way around... So you should have a transform matrix M for your space ship that represents its position and orientation in [GCS] (global coordinate system). On top of that, sometimes another matrix M0 is multiplied in to align your space ship mesh to the first matrix (you know, some meshes are not centered around (0,0,0) nor axis aligned...).
Now when you are moving your object you just do local transformations on M, so moving forward is just translating M's origin position by a multiple of the forward-axis basis vector. The same goes for sliding to the sides (just use a different basis vector), with the result that the object is always aligned with respect to its movement. The same goes for turns. So going in a circle is just moving forward and turning at constant speeds per time iteration step (timer).
You are doing this backwards: first you compute position and orientation, and then you are trying to find operations resulting in a matrix that would do the same... In such a case it is much, much easier to construct the matrix M directly instead of deriving the transformations that will create it... So what you need is:
origin position
3 perpendicular (most likely unit) basis vectors
So the origin is your x,y,z position. Two basis vectors can be obtained from the circle: forward is the tangent (or position - last_position), and the vector towards the circle center can be used as right (or left). The third vector can be obtained by cross product, so let's assume:
+X axis is right
+Y axis is up
+Z axis is forward
you got:
r=100.0
a=speed*t
pos = (r*cos(a),0.0,r*sin(a))
center = (0.0,0.0,0.0)
so:
Z = (cos(a-0.5*M_PI),0.0,sin(a-0.5*M_PI))
X = (cos(a),0.0,sin(a)) - center
Y = cross(X,Z)
O = pos
normalize:
X /= length(X)
Y /= length(Y)
Z /= length(Z)
So now just feed your X,Y,Z,O to your matrix (depending on the conventions you use like multiplication order, direct/inverse matrix, row-major or column-major matrices ...)
so for example like this:
double M[16]=
{
X[0],X[1],X[2],0.0,
Y[0],Y[1],Y[2],0.0,
Z[0],Z[1],Z[2],0.0,
O[0],O[1],O[2],1.0,
};
or:
double M[16]=
{
X[0],Y[0],Z[0],O[0],
X[1],Y[1],Z[1],O[1],
X[2],Y[2],Z[2],O[2],
0.0 ,0.0 ,0.0 ,1.0,
};
And that is all... The matrix might need to be transposed, inverted, etc. based on the conventions you use. Sorry, I do not use GLM, but the syntax should be very similar... feeding the matrix might be even simpler if rows or columns can be loaded from a vector...
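Since the question uses GLM, a sketch of feeding X, Y, Z, O into a glm::mat4 might look like this (my transcription of the formulas above, not the answerer's code; GLM is column-major, one basis vector per column):

#include <cmath>
#include <glm/glm.hpp>

// Build the ship's model matrix straight from the circle parameters
// (circle of radius r centered at the origin, as assumed above).
glm::mat4 shipMatrix(float a, float r)
{
    glm::vec3 O(r * cosf(a), 0.0f, r * sinf(a));   // origin: position on circle
    glm::vec3 X(cosf(a), 0.0f, sinf(a));           // right: away from center
    glm::vec3 Z(cosf(a - 1.5707963f), 0.0f,
                sinf(a - 1.5707963f));             // forward: tangent
    glm::vec3 Y = glm::cross(X, Z);                // up, equals (0,1,0) here

    return glm::mat4(glm::vec4(X, 0.0f),
                     glm::vec4(Y, 0.0f),
                     glm::vec4(Z, 0.0f),
                     glm::vec4(O, 1.0f));
}

All three basis vectors are already unit length here, so the normalize step can be skipped; if the nose ends up pointing backwards, flip the sign of Z.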

Need rotation matrix for opengl 3D transformation

The problem is I have two points in 3D space, where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between the two points, so that both of its end centers touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if(point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x)/2.0, (point.y + parent.y)/2.0, (point.z + parent.z)/2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;

    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point, with its parent being the point that spawned it (the one before it). I want the branches to fit onto the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I got most of it, but I want to map the branches to it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there's an arbitrary number of rotation matrices satisfying your constraints, but any of them will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. Then you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate base. We start with an arbitrary vector that's not parallel to z_t: one of (1,0,0), (0,1,0) or (0,0,1). Take the scalar product · (also called the inner or dot product) of each with z_t and use the vector for which the absolute value is the smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local y transformation: y_t = normalize(u - (z_t · u) z_t).
Finally create the x transformation by taking the cross product x_t = z_t × y_t
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t written side by side as a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form with a 0,0,0,1 bottommost row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, apply the rotation with glMultMatrix; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
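Here's one possible transcription of that recipe into GLM (my own sketch, following the answer's conventions):

#include <cmath>
#include <glm/glm.hpp>

glm::mat4 cylinderRotation(glm::vec3 p1, glm::vec3 p2)
{
    glm::vec3 z_t = glm::normalize(p1 - p2);

    // Pick the coordinate axis least parallel to z_t
    glm::vec3 axes[3] = { glm::vec3(1,0,0), glm::vec3(0,1,0), glm::vec3(0,0,1) };
    glm::vec3 u = axes[0];
    float mindotabs = fabsf(glm::dot(z_t, axes[0]));
    for (int i = 1; i < 3; i++) {
        float dotabs = fabsf(glm::dot(z_t, axes[i]));
        if (dotabs < mindotabs) { mindotabs = dotabs; u = axes[i]; }
    }

    // Orthogonalize u against z_t, then complete the base with a cross product
    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t);
    glm::vec3 x_t = glm::cross(z_t, y_t);

    // Pad to homogeneous 4x4; GLM is column-major, one axis per column
    return glm::mat4(glm::vec4(x_t, 0.0f),
                     glm::vec4(y_t, 0.0f),
                     glm::vec4(z_t, 0.0f),
                     glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
}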
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'd like to call your two points in world coordinates P1 and P2, and we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
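In GLM, a literal transcription of this pseudo-code might look like the following (a sketch only: each glm call right-multiplies onto m, reproducing the pseudo-code's multiplication order, and modern GLM expects angles in radians):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 getTransformation(glm::vec3 P1, glm::vec3 P2)
{
    float length = glm::distance(P1, P2);
    glm::vec3 direction = glm::normalize(P2 - P1);
    float pitch = acosf(direction.y);              // angle from the +Y pole
    float yaw   = atan2f(direction.z, direction.x);

    glm::mat4 m(1.0f);
    m = glm::translate(m, P1);                          // translate(P1)
    m = glm::scale(m, glm::vec3(1.0f, length, 1.0f));   // scaleY(length)
    m = glm::rotate(m, pitch, glm::vec3(1, 0, 0));      // rotateX(pitch)
    m = glm::rotate(m, yaw,   glm::vec3(0, 1, 0));      // rotateY(yaw)
    return m;
}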
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(B_x/B_y), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(B_z/B_y), and that's the angle of the second rotation.

Rotate a 3D- Point around another one

I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny) the coordinates aren't accurate any more. I commented the line I think the fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;

    float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
    float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
    float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; //<- there must be that fault

    x_n = cos(w_bx)*sin(w_by)*length_+x_m;
    z_n = sin(w_bx)*sin(w_by)*length_+z_m;
    y_n = cos(w_by)*length_+y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see the link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct; the computation you do yields two inclination angles (one measured from the x axis, the other from the y axis)
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore in this case using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(wn_x) * b
b = RotationMatrixAroundY(wn_y) * b
n = m + b
See basic rotation matrices.
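A minimal C++ sketch of those four steps, with the two basic rotation matrices written out component-wise (my illustration, assuming angles in radians and the X-then-Y order shown above):

#include <cmath>

struct Vec3 { float x, y, z; };

// Basic rotation about the X axis: y' = y cos a - z sin a, z' = y sin a + z cos a
static Vec3 rotateX(Vec3 v, float a)
{
    return { v.x,
             v.y * cosf(a) - v.z * sinf(a),
             v.y * sinf(a) + v.z * cosf(a) };
}

// Basic rotation about the Y axis
static Vec3 rotateY(Vec3 v, float a)
{
    return { v.x * cosf(a) + v.z * sinf(a),
             v.y,
            -v.x * sinf(a) + v.z * cosf(a) };
}

// Rotate point p around pivot m by w_nx (about X), then w_ny (about Y)
Vec3 rotateAround(Vec3 p, Vec3 m, float w_nx, float w_ny)
{
    Vec3 b = { p.x - m.x, p.y - m.y, p.z - m.z };  // b = p - m
    b = rotateX(b, w_nx);                          // b = Rx(w_nx) * b
    b = rotateY(b, w_ny);                          // b = Ry(w_ny) * b
    return { m.x + b.x, m.y + b.y, m.z + b.z };    // n = m + b
}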
Try to use vector math. Decide in which order you rotate: first around x, then around y, perhaps.
If you rotate around the z-axis [z' = z]:
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for the y-axis [y'' = y']:
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
Again rotating around the x-axis [x''' = x'']:
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny isn't measured relative to the fixed xyz coordinate system, but relative to the coordinate system that results from applying the angle w_nx.
As kakTuZ observed, your code converts the point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, it's ok with me.
The result of not rotating the reference axis for w_ny along with the first rotation is that two points that are 1 km apart at the equator move closer to each other toward the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia...
But when you work with OpenGL you really should use OpenGL functions like glRotatef. These will be calculated on the GPU, not on the CPU like your function. The doc is here.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix on top of the stack at the point of its rendering. Obtain that matrix with glGetFloatv, and then multiply it with either your own vector-matrix multiplication function or one of the many you can find online.
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
This is a good starting point.
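A sketch of the glGetFloatv route (my illustration; the fixed-function modelview matrix is stored column-major, so element [c*4 + r] is row r of column c, and strictly speaking the result is in eye space, which coincides with world space only when the view transform is identity):

#include <GL/gl.h>

// Multiply the object-space position (x, y, z, 1) by the current modelview matrix.
void transformByModelView(GLfloat x, GLfloat y, GLfloat z, GLfloat out[3])
{
    GLfloat mv[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, mv);  // column-major, as OpenGL stores it

    out[0] = mv[0]*x + mv[4]*y + mv[8]*z  + mv[12];
    out[1] = mv[1]*x + mv[5]*y + mv[9]*z  + mv[13];
    out[2] = mv[2]*x + mv[6]*y + mv[10]*z + mv[14];
}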

3d coordinate from point and angles

I'm working on a simple OpenGL world, and so far I've got a bunch of cubes randomly placed about, and it's pretty fun to go zooming about. However, I'm ready to move on. I would like to drop blocks in front of my camera, but I'm having trouble with the 3D angles. I'm used to 2D stuff, where to find an end point we simply do something along the lines of:
endy = y + (sin(theta)*power);
endx = x + (cos(theta)*power);
However, when I add the third dimension I'm not sure what to do! It seems to me that the power of the second dimensional plane would be determined by the z axis's cos(theta)*power, but I'm not positive. If that is correct, it seems to me I'd do something like this:
endz = z + (sin(xtheta)*power);
power2 = cos(xtheta) * power;
endx = x + (cos(ytheta) * power2);
endy = y + (sin(ytheta) * power2);
(where xtheta is the up/down theta and ytheta is the left/right theta)
Am I even close to the right track here? How do I find an end point given a current point and an two angles?
Working with Euler angles doesn't work so well in 3D environments; there are several issues and corner cases in which they simply don't work. And you actually don't even have to use them.
What you should do is exploit the fact that transformation matrices are nothing else than coordinate system bases written down in a comprehensible form. So you have your modelview matrix MV. This consists of a model-space transformation, followed by a view transformation (column-major matrices multiply right to left):
MV = V * M
So what we want to know is in which way the "camera" lies within the world. That is given to you by the inverse view matrix V^-1. You can of course invert the view matrix using the Gauss-Jordan method, but most of the time your view matrix will consist of a 3×3 rotation matrix R with a translation vector column P added:
R P
0 1
Recall that
(M * N)^-1 = N^-1 * M^-1
and also
(M * N)^T = N^T * M^T
so it seems there is some kind of relationship between transposition and inversion. Not all transposed matrices are their inverse, but there are some where the transpose of a matrix is its inverse: namely the so-called orthonormal matrices. Rotations are orthonormal. So
R^-1 = R^T
neat! This allows us to find the inverse of the view matrix as follows (I suggest you try to prove it as an exercise):
V = / R P \
    \ 0 1 /

V^-1 = / R^T   -R^T·P \
       \  0       1   /

So how does this help us to place a new object in the scene at a certain distance from the camera? Well, V is the transformation from world space into camera space, so V^-1 transforms from camera to world space. So given a point in camera space you can transform it back to world space. Say you wanted to place something at the center of the view at distance d. In camera space that would be the point (0, 0, -d, 1), since OpenGL's camera looks down the negative Z axis. Multiply that with V^-1:

V^-1 * (0, 0, -d, 1) = R^T * (0, 0, -d) - R^T·P
Which is exactly what you want. In your OpenGL program you somewhere have your view matrix V, probably not properly named yet, but anyway it is there. Say you use old OpenGL-1 and GLU's gluLookAt:
void display(void)
{
    /* setup viewport, clear, set projection, etc. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(...);
    /* the modelview matrix now holds the View transform */
At this point we can extract the modelview matrix
GLfloat view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view);
Now view is in column-major order. If we were to use it directly we could directly address the columns. But remember that the transpose is the inverse of a rotation, so we actually want the 3rd row vector. So let's assume you keep view around, so that in your event handler (outside display) you can do the following:
GLfloat z_row[3];
z_row[0] = view[2];
z_row[1] = view[6];
z_row[2] = view[10];
And we want the position
GLfloat * const p_column = &view[12];
Now we can calculate the new object's position at distance d. Note that R^T has to be applied to P as well; each line below also dots one column of R with P:

GLfloat new_object_pos[3] = {
    -z_row[0]*d - (view[0]*p_column[0] + view[1]*p_column[1] + view[2]*p_column[2]),
    -z_row[1]*d - (view[4]*p_column[0] + view[5]*p_column[1] + view[6]*p_column[2]),
    -z_row[2]*d - (view[8]*p_column[0] + view[9]*p_column[1] + view[10]*p_column[2]),
};
There you are. As you can see, nowhere did you have to work with angles or trigonometry; it's just straight linear algebra.
Well, I was close. After some testing, I found the correct formula for my implementation. It looks like this:
endy = cam.get_pos().y - (sin(toRad(180-cam.get_rot().x))*power1);
power2 = cos(toRad(180-cam.get_rot().x))*power1;
endx = cam.get_pos().x - (sin(toRad(180-cam.get_rot().y))*power2);
endz = cam.get_pos().z - (cos(toRad(180-cam.get_rot().y))*power2);
This takes my camera's position and rotation angles and gets the corresponding points. Works like a charm =]

Direct3D & iPhone Accelerometer Matrix

I am using a WinSock connection to get the accelerometer info off an iPhone and into a Direct3D application. I have modified Apple's GLGravity sample code to get my helicopter moving in relation to gravity, however I need to "cap" the movement so the helicopter can't fly upside down! I have tried to limit the output of the accelerometer like so:
if (y < -0.38f) {
    y = -0.38f;
}
Except this doesn't seem to work!? The only thing I can think of is that I need to modify the custom matrix, but I can't seem to get my head around what I need to change. The matrix code is below.
_x = acceleration.x;
_y = acceleration.y;
_z = acceleration.z;

float length;
D3DXMATRIX matrix, t;
memset(matrix, '\0', sizeof(matrix));
D3DXMatrixIdentity(&matrix);

// Make sure the acceleration value is big enough.
length = sqrtf(_x * _x + _y * _y + _z * _z);
if (length >= 0.1f && kInFlight == TRUE) { // We have an acceleration value good enough to work with.
    matrix._44 = 1.0f;

    // First matrix column is the gravity vector.
    matrix._11 = _x / length;
    matrix._12 = _y / length;
    matrix._13 = _z / length;

    // Second matrix column is an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz},
    // defined by the equation Gx * x + Gy * y + Gz * z = 0, in which we set x = 0 and y = 1.
    matrix._21 = 0.0f;
    matrix._22 = 1.0f;
    matrix._23 = -_y / _z;

    length = sqrtf(matrix._21 * matrix._21 + matrix._22 * matrix._22 + matrix._23 * matrix._23);
    matrix._21 /= length;
    matrix._22 /= length;
    matrix._23 /= length;

    // Set the third matrix column as the cross product of the first two.
    matrix._31 = matrix._12 * matrix._23 - matrix._13 * matrix._22;
    matrix._32 = matrix._21 * matrix._13 - matrix._23 * matrix._11;
    matrix._33 = matrix._11 * matrix._22 - matrix._12 * matrix._21;
}
If anyone can help it would be much appreciated!
I think double integration is probably over-complicating things. If I understand the problem correctly, the iPhone is giving you a vector of values from the accelerometers. Assuming the user isn't waving it around, that vector will be of roughly constant length, and pointing directly downwards with gravity.
There is one major problem with this, and that is that you can't tell when the user rotates the phone around the horizontal. Imagine you lay your phone on the table, with the bottom facing you as you're sitting in front of it; the gravity vector would be (0, -1, 0). Now rotate your phone 90 degrees so the bottom is facing off to your left, but it is still flat on the table. The gravity vector is still going to be (0, -1, 0). But you'd really want your helicopter to have turned with the phone. It's a basic limitation of the fact that the iPhone only has a 2D accelerometer, and it's extrapolating a 3D gravity vector from that.
So let's assume that you've told the user they're not allowed to rotate their phone like that, and they have to keep it with the bottom point to you. That's fine, you can still get a lot of control from that.
Next, you need to cap the input such that the helicopter never goes more than 90 degrees over on its side. Imagine the vector that you're given as a stick attached to your phone, dangling with gravity. The vector you have describes the direction of gravity relative to the phone's flat surface. If it were (0, -1, 0), the stick is pointing directly downwards (-y). If it were (1, 0, 0), the stick is pointing to the right of the phone (+x), which implies that the phone has been twisted 90 degrees clockwise (looking away from you at the phone).
Assume in this metaphor that the stick has full rotational freedom. It can be pointing in any direction from the phone. So moving the stick around describes the surface of a sphere. But crucially, you only want the stick to be able to move around the lower half of that sphere. If the user twists the phone so that the stick would be in the upper half of the sphere, you want it to cap such that it's pointing somewhere around the equator of the sphere.
You can achieve this quite cleanly by using polar co-ordinates. 3D vectors and polar co-ordinates are interchangeable - you can convert to and from without losing any information.
Convert the vector you have (normalised, of course) into a set of 3D polar co-ordinates (you should be able to find this logic on the web quite easily). This will give you an angle around the horizontal plane and an angle for the vertical plane (and a distance from the origin; for a normalised vector, this should be 1.0). If the vertical angle is positive, the vector is in the upper half of the sphere; if negative, it's in the lower half. Then cap the vertical angle so that it is always zero or less (and so in the lower half of the sphere). Then you can take the horizontal and capped vertical angles and convert them back into a vector.
This new vector, if plugged into the matrix code you already have, will give you the correct orientation, limited to the range of motion you need. It will also be stable if the user turns their phone slightly beyond the 90 degree mark - this logic will keep your directional vector as close to the user's current orientation as possible, without going beyond the limit you set.
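A sketch of that cap (my illustration; it assumes the input vector is already normalised, with y as the vertical axis as described above):

#include <cmath>

// Clamp a unit gravity vector to the lower half of the sphere by going
// through polar coordinates: a horizontal angle plus a vertical (elevation) angle.
void capToLowerHemisphere(float &x, float &y, float &z)
{
    float horizontal = atan2f(z, x);  // angle around the horizontal plane
    float vertical   = asinf(y);      // elevation; positive = upper hemisphere

    if (vertical > 0.0f)              // cap at the equator
        vertical = 0.0f;

    // Convert the (possibly capped) angles back into a unit vector
    x = cosf(vertical) * cosf(horizontal);
    y = sinf(vertical);
    z = cosf(vertical) * sinf(horizontal);
}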
Try normalizing the acceleration vector first. (edit: after you check the length) (edit edit: I guess I need to learn how to read... how do I delete my answer?)
So if I understand this correctly, the iPhone is feeding you accelerometer data, saying how hard you're moving the iPhone in 3 axes.
I'm not familiar with that Apple sample, so I don't know what it's doing. However, it sounds like you're mapping acceleration directly to orientation, but I think what you want to do is doubly integrate the acceleration to obtain a position, and look at changes in position to orient the helicopter. Basically, this is more of a physics problem than a Direct3D problem.
It looks like you are using the acceleration vector from the phone to define one axis of an orthogonal frame of reference, and I suppose +Y points towards the ground, so you are concerned about the case when the vector points towards the sky.
Consider the case when the iPhone reports {0, -6.0, 0}. You will change this vector to {0, -.38, 0}. But they both normalize to {0, -1.0, 0}. So the effect of clamping y at -.38 is influenced by the magnitude of the other two components of the vector.
What you really want is to limit the angle of the vector to the XZ plane when Y is negative.
Say you want to limit it to be no more than 30 degrees to the XZ plane when Y is negative. First normalize the vector then:
const float limitAngle = 30.f * PI/180.f; // angle in radians
const float sinLimitAngle = sinf(limitAngle);
const float XZLimitLength = sqrtf(1 - sinLimitAngle*sinLimitAngle);

if (_y < -sinLimitAngle)
{
    _y = -sinLimitAngle;
    float XZlengthScale = XZLimitLength / sqrtf(_x*_x + _z*_z);
    _x *= XZlengthScale;
    _z *= XZlengthScale;
}