convert Cartesian width and height to Isometric - c++

I'm trying to create an Isometric game in the SDL graphics library.
When you render in SDL you need a source rectangle and a destination rectangle. The source rectangle is the portion of the loaded texture you wish to render, and the destination rectangle is the area of the screen you are rendering to. Rectangles simply consist of four properties: X position, Y position, Width, and Height.
Now say I have a Cartesian destination rectangle with these coordinates:
X=20, Y=10, Width=16, Height=16
Now say I wanted to convert this to Isometric. To convert the X and Y coordinates I would use:
isometricX = cartX - cartY;
isometricY = (cartX + cartY) / 2;
Now what I don't understand is how to convert the Cartesian Width and Height to an Isometric Width and Height, to create the illusion that the viewport has been rotated 45 degrees to one side and tilted 30 degrees down, giving the isometric look.
EDIT:
I'd like to clarify my question a little. When I convert the Cartesian coordinates to Isometric, I change this: http://i.stack.imgur.com/I79yK.png
to this: http://i.stack.imgur.com/ZCJg1.png. So now I am trying to figure out how to rotate the tiles so they all fit together, and how I need to adjust the Width and Height to make that happen.

To start with you'll need these operations to convert to and from isometric coordinates:
isoX = carX + carY;
isoY = carY - carX / 2.0;
carX = (isoX - isoY) / 1.5;
carY = isoX / 3.0 + isoY / 1.5;
Right-angled corners in the top-left and bottom-right become 120 degrees, and the other two corners become 60 degrees. The bottom-right corner becomes the bottom corner, and the top-left corner becomes the top. This also assumes that y increases going up and x increases going right (if your system is different, flip the signs accordingly). You can verify via substitution that these operations are each other's inverse.
For a rectangle you need all 4 corner points transformed, as the result will not be 'rectangular' for the purposes of SDL (it will be a parallelogram). This is easier to see numerically.
First, assign the corners names of some sort. I prefer clockwise starting with the bottom-left: this coordinate shall be known as C1, with an associated X1 and Y1; the others will be C2-4.
C2 - C3
| |
C1 - C4
Then compute their Cartesian coordinates...
X1 = RECT.X;
Y1 = RECT.Y;
X2 = X1; // moving vertically
Y2 = RECT.Y + RECT.HEIGHT;
X3 = RECT.X + RECT.WIDTH;
Y3 = Y2; // moving horizontally
X4 = X3; // moving vertically
Y4 = RECT.Y;
And lastly, apply the transform individually to each coordinate to get the I1, I2, I3, I4 coordinates...
iX1 = X1 + Y1;
iY1 = Y1 - X1 / 2.0;
// etc
And what you end up with is on-screen coordinates I1-4, which take this shape:
I2
/ \
I1 I3
\ /
I4
But unlike this shoddy depiction, the angles at I4 and I2 will be ~127 deg, and at I1 and I3 ~53 deg. (This could be fine-tuned to be exactly 60/120; it depends on the 2.0 factor for carX when computing isoY - it should be sqrt(3) rather than 2.0, but meh, close enough.)
If you use the inverse transform, you can turn the I1-4 coordinates back into C1-4, or locate a world coordinate from a screen coordinate, etc.
Implementing a camera / viewport gets a little tricky, if only at first, but it's beyond what was asked so I won't go there (without further prodding)...
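Putting the corner transform together, here's a minimal sketch (Point and Rect are illustrative stand-ins, not SDL types) that turns a Cartesian rectangle into the four on-screen points I1-4:
struct Point { float x, y; };
struct Rect  { float x, y, width, height; };

// Transform the four corners of a Cartesian rectangle (y increasing up)
// into their isometric screen positions, clockwise from the bottom-left.
void cartRectToIso(const Rect& r, Point (&iso)[4])
{
    const Point cart[4] = {
        { r.x,           r.y            },   // C1 bottom-left
        { r.x,           r.y + r.height },   // C2 top-left
        { r.x + r.width, r.y + r.height },   // C3 top-right
        { r.x + r.width, r.y            }    // C4 bottom-right
    };
    for (int i = 0; i < 4; ++i) {
        iso[i].x = cart[i].x + cart[i].y;         // isoX = carX + carY
        iso[i].y = cart[i].y - cart[i].x / 2.0f;  // isoY = carY - carX / 2.0
    }
}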
(Edit) Regarding SDL...
SDL does not appear to "play nice" with generalized transforms. I haven't used it, but its interface is remarkably similar to GDI (Windows), which I've played around with for a game engine before, and I ran into this exact issue (rotating + scaling textures).
There is one (looks to be non-standard) SDL function that does both scaling and rotating of textures, but it does it in the wrong order so it always maintains the perspective of the image, and that's not what is needed here.
Basic geometry will be fine, as you've seen, because it's fills and lines which don't need to be scaled, only positioned. But for textures... You're going to have to either write code to render the texture one pixel at a time, or use a combination of transforms (scale after rotate), or layering (drawing an alpha-masked isometric square and rendering a pre-computed texture) and so on...
Or of course, if it's an option for you, use something suited for raw geometry and texture data like OpenGL / Direct3D. Personally I'd go with OpenGL / SFML for something like this.

Unfortunately I cannot comment to ask for clarification, so I must answer with a question: can you not convert all four points and then calculate the width and height from the transformed points?
X=20, Y=10, Width=16, Height=16
as you've said
isometricX = cartX - cartY;
isometricY = (cartX + cartY) / 2;
so
isometricX1 = cartX1 - cartY1;
isometricY1 = (cartX1 + cartY1) / 2;
and
isometricWidth = std::abs(isometricY - isometricY1)
There's probably a more efficient route, but as I do not know Cartesian geometry I can't find that solution.
EDIT: isometricWidth found the distance between the two points, not the width and height.
Another note is you'd need opposite corners (yes, I realize the other guy probably has a much better answer :P).
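For what it's worth, a rough sketch of that idea (illustrative types; this just takes the bounding box of the four transformed corners, using the conversion from the question):
#include <algorithm>

struct Pt { float x, y; };

// Transform all four Cartesian corners and take the extents of the result.
void isoBounds(const Pt (&cart)[4], float& outWidth, float& outHeight)
{
    float minX = 1e30f, maxX = -1e30f, minY = 1e30f, maxY = -1e30f;
    for (const Pt& c : cart) {
        const float ix = c.x - c.y;             // isometricX = cartX - cartY
        const float iy = (c.x + c.y) / 2.0f;    // isometricY = (cartX + cartY) / 2
        minX = std::min(minX, ix); maxX = std::max(maxX, ix);
        minY = std::min(minY, iy); maxY = std::max(maxY, iy);
    }
    outWidth  = maxX - minX;
    outHeight = maxY - minY;
}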

Related

Clipping triangles in screen space per pixel

As I understand it, in OpenGL polygons are usually clipped in clip space, and only those triangles (or parts of triangles, if the clipping process splits them) that survive the comparison with ±w are kept. This then requires implementation of a polygon clipping algorithm such as Sutherland-Hodgman.
I am implementing my own CPU rasterizer and for now would like to avoid doing that. I have the NDC coordinates of vertices available (not really normalized since I did not clip anything so the positions may not be in range [-1, 1]). I would like to interpolate these values for all pixels and only draw pixels the NDC coordinates of which fall within [-1, 1] in the x, y and z dimensions. I would then additionally perform the depth test.
Would this work? If yes, what would the interpolation look like? Can I use the OpenGL spec's formula for attribute interpolation (page 427, equation 14.9) as described here? Alternatively, should I use formula 14.10, which is used for depth (z) interpolation, for all 3 coordinates (I don't really understand why a different one is used there)?
Update:
I have tried interpolating the NDC values per pixel by two methods:
w0, w1, w2 are the barycentric weights of the vertices.
1)
float x_ndc = w0 * v0_NDC.x + w1 * v1_NDC.x + w2 * v2_NDC.x;
float y_ndc = w0 * v0_NDC.y + w1 * v1_NDC.y + w2 * v2_NDC.y;
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;
2)
float x_ndc = (w0*v0_NDC.x/v0_NDC.w + w1*v1_NDC.x/v1_NDC.w + w2*v2_NDC.x/v2_NDC.w) /
(w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float y_ndc = (w0*v0_NDC.y/v0_NDC.w + w1*v1_NDC.y/v1_NDC.w + w2*v2_NDC.y/v2_NDC.w) /
(w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;
The clipping + depth test always looks like this:
if (-1.0f < z_ndc && z_ndc < 1.0f && z_ndc < currentDepth &&
-1.0f < y_ndc && y_ndc < 1.0f &&
-1.0f < x_ndc && x_ndc < 1.0f)
Case 1) corresponds to using equation 14.10 for their interpolation. Case 2) corresponds to using equation 14.9 for interpolation.
Results documented in gifs on imgur.
1) Strange things happen when the second cube is behind the camera or when I go into a cube.
2) Strange artifacts are not visible, but as the camera approaches vertices they start disappearing. And since this is the perspective-correct interpolation of attributes, vertices (nearer to the camera?) have greater weight, so as soon as a vertex gets clipped this information is interpolated with a strong weight onto the triangle's pixels.
Is all of this expected or have I done something wrong?
Clipping against the near plane is not strictly necessary, unless the triangle goes to or past 0 in the camera-space Z. Once that happens, the homogeneous coordinate math gets weird.
Most hardware only bothers to clip triangles if they extend more than a screen's width outside the clip space or if they cross the camera-Z of zero. This kind of clipping is called "guard-band clipping", and it saves a lot of performance, since clipping isn't cheap.
So yes, the math can work fine. The main thing you have to do, when setting up your scan lines, is figure out where each of them start/end on screen. The interpolation math is the same either way.
I don't see any reason why this wouldn't work. But it will be way slower than traditional clipping. Note that you might get into trouble with triangles close to the projection center, since they will be vanishingly small and might cause problems in the barycentric coordinate calculation.
The difference between equations 14.9 and 14.10 is that depth is basically z/w (remapped to [0, 1]). Since the perspective divide has already happened, it has to be left out during interpolation.
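To make the distinction concrete, here is a rough sketch (names are made up; w0..w2 are the screen-space barycentric weights, iw0..iw2 are the per-vertex 1/w values, i.e. one over clip-space w) of interpolating a generic attribute in the 14.9 style versus depth in the 14.10 style:
// Equation 14.9 style: divide each attribute by w, interpolate linearly,
// then renormalize by the interpolated 1/w (perspective-correct).
float interpAttribute(float w0, float w1, float w2,
                      float a0, float a1, float a2,
                      float iw0, float iw1, float iw2)
{
    float num   = w0 * a0 * iw0 + w1 * a1 * iw1 + w2 * a2 * iw2;
    float denom = w0 * iw0 + w1 * iw1 + w2 * iw2;
    return num / denom;
}

// Equation 14.10 style: NDC/window z has already been divided by w,
// so depth is interpolated linearly in screen space with no extra divide.
float interpDepth(float w0, float w1, float w2,
                  float z0, float z1, float z2)
{
    return w0 * z0 + w1 * z1 + w2 * z2;
}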

Find a point inside a rotated rectangle

Ok so, this should be super simple, but I'm not a smart man. Technically I want to know whether a point resides inside a rectangle; however, the rectangle can be in different states. In my current context, when I want to draw a rectangle rotated by, let's say, 45° clockwise, what I do is rotate the entire x,y axis centered at the top-left corner of the rectangle and then just draw the rectangle as if nothing had happened. The same goes if I want to draw the rectangle at a random coordinate. Given that it is the coordinate system that gets translated and rotated, the rectangle always thinks it's being drawn at (0,0) with 0°; therefore, the best way to find out if a given point is inside the rectangle would be to project the point based on the translation + rotation of the rectangle. But I have no idea how to do that.
This is what I currently do in order to find out if a point is inside a rectangle (not taking into consideration rotation):
bool Image::isPointInsideRectangle(int x, int y, const ofRectangle & rectangle){
    return x - xOffset >= rectangle.getX() && x - xOffset <= rectangle.getX() + rectangle.getWidth() &&
           y - yOffset >= rectangle.getY() && y - yOffset <= rectangle.getY() + rectangle.getHeight();
}
I already have angleInDegrees stored; as long as I can use it to project the (x,y) point I receive, I should be able to find out if the point is inside the rectangle.
Cheers!
Axel
The easiest way is to un-rotate x,y in the reverse direction relative to the origin and rotation of the rectangle.
For example, if angleInDegrees is 45 degrees, you would rotate the point to test -45 degrees (or 315 degrees if your rotation routine only allows positive rotations). This will plot the x,y on the same coordinate system as the unrotated rectangle.
Then, you can use the function you already provided to test whether the point is within the rectangle.
Note that prior to rotating x,y, you will probably need to adjust x,y relative to the point of rotation - the upper-left corner of the rectangle - since the rotation is relative to that point rather than the overall coordinate origin 0,0. You can compute the difference between x,y and the upper-left corner of your rectangle (which won't change during rotation), rotate the adjusted point by -angleToRotate, then add the origin point difference back into the unrotated point to get absolute coordinates in your coordinate system.
Edited:
#include <cmath>

bool Image::isPointInsideRectangle(int x, int y, const ofRectangle & rectangle){
    // <cmath> has no cosd/sind: convert deg (the stored angleInDegrees) to radians first
    const float rad = deg * 3.14159265f / 180.0f;
    const float rx  = x * std::cos(rad) - y * std::sin(rad) + xOffset;
    const float ry  = x * std::sin(rad) + y * std::cos(rad) + yOffset;
    return rx >= rectangle.getX() && rx <= rectangle.getX() + rectangle.getWidth() &&
           ry >= rectangle.getY() && ry <= rectangle.getY() + rectangle.getHeight();
}
As you have already said, you could translate the coordinates of your point into the space of the rectangle. This is a common task in many software products that work with geometry. Each object has its own coordinate space and works as if it were at position (0, 0) without rotation. If your rectangle is at position v and rotated by b degrees/radians, then you can translate your point P into the space of the rectangle with the following formula:
| cos(-b)  -sin(-b) |   | P_x - v_x |
|                    | ⋅ |           |
| sin(-b)   cos(-b)  |   | P_y - v_y |
Many of the most important transformations can be represented as matrices, at least if you are using homogeneous coordinates, and it is very common to do that. Depending on the complexity and the goals of your program, you could consider using a math library like glm and storing the transformations of your objects as matrices. Then you could write something like inverse(rectangle.transformation()) * point to get the point translated into the space of the rectangle.
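A plain, self-contained sketch of that formula (hypothetical names; angle b in radians, rectangle position v at its corner of rotation, point P in world coordinates):
#include <cmath>

// Un-rotate P into the rectangle's local space and do an axis-aligned test.
bool pointInRotatedRect(float px, float py,          // point P
                        float vx, float vy,          // rectangle position v
                        float b,                     // rotation in radians
                        float width, float height)
{
    const float dx = px - vx, dy = py - vy;
    // Apply the rotation matrix for -b: [cos(-b) -sin(-b); sin(-b) cos(-b)]
    const float localX =  std::cos(b) * dx + std::sin(b) * dy;
    const float localY = -std::sin(b) * dx + std::cos(b) * dy;
    return localX >= 0.0f && localX <= width &&
           localY >= 0.0f && localY <= height;
}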

Positioning a widget involving intersection of line and a circle?

Here is the problem I'm trying to solve for my game.
I have this scenario:
I'm trying to solve for the position and size of the green rectangle. The circle is at 50%, 40% of the screen and its radius is proportional to the height of the screen.
The green rectangle must always be 10 pixels away from the bottom. Its left corner must be 10 pixels away also. And as can be seen in the image, the distance from the top right corner until the rectangle touches the circle is 10 pixels also.
Another constraint is that the green rectangle must always be 3 times wider than its height (aspect ratio).
Given these constraints, how can I solve for the position and size of the green rectangle?
Essentially, the Game Window can have a bunch of different aspect ratios so the green rectangle must look good in any of these situations.
I'm not necessarily looking for code but just an idea on how this could be solved.
Thanks
The thing to do in these situations is to describe the constraints mathematically, and see if it simplifies. This is an essential skill for geometric processing.
Let's assume the bottom left corner of the image area is (0,0). That puts the bottom-left corner of the rectangle at (10,10); we'll call the top-right corner (x1,y1). I'll assume you've already calculated where the circle will be since that's pretty straight-forward, we'll call the center (x2,y2) and the radius r.
The first constraint: the rectangle is 3 times wider than it is tall.
x1-10 = 3 * (y1-10) or x1 = 3 * (y1-10) + 10 or x1 = 3*y1 - 20
The second constraint: x1,y1 lies 10 pixels away from the circle. If we describe another circle 10 pixels larger than the first, the point will lie on it.
(x1-x2)^2 + (y1-y2)^2 = (r+10)^2
Substituting for x1:
(3*y1 - 20 - x2)^2 + (y1-y2)^2 = (r+10)^2
This is great, because r, x2, and y2 are known; the only unknown left is y1. Let's see if we can gather all the y1's together.
(3*y1 + (-20 - x2))^2 + (y1-y2)^2 = (r+10)^2
3^2*y1^2 + 2*3*y1*(-20-x2) + (-20-x2)^2 + y1^2 - 2*y2*y1 + y2^2 = (r+10)^2
3^2*y1^2 + y1^2 + 6*(-20-x2)*y1 - 2*y2*y1 + (20+x2)^2 + y2^2 = (r+10)^2
(3^2+1)*y1^2 + (-120 - 6*x2 - 2*y2)*y1 + (20+x2)^2 + y2^2 = (r+10)^2
At this point it's looking almost like a quadratic equation. One more little tweak:
10 * y1^2 + (-120 - 6*x2 - 2*y2) * y1 + ((20+x2)^2 + y2^2 - (r+10)^2) = 0
The final step is to apply the Quadratic Formula.
a*y1^2 + b*y1 + c = 0
a = 10
b = (-120 - 6*x2 - 2*y2)
c = ((20+x2)^2 + y2^2 - (r+10)^2)
y1 = (-b +/- sqrt(b^2 - 4*a*c)) / (2*a)
There are two possible answers from the quadratic equation, but one of them will put the rectangle on the far side of the circle. It should be easy to eliminate that case.
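A minimal sketch of that derivation (assuming the same setup: origin at the bottom-left, y increasing upward, the rectangle's bottom-left corner pinned at (10, 10), and the circle centre (cx, cy) and radius r already computed):
#include <cmath>

struct Rect { float x, y, w, h; };

// Solve for the green rectangle; returns false if the constraints cannot be met.
bool solveGreenRect(float cx, float cy, float r, Rect& out)
{
    const float a = 10.0f;
    const float b = -120.0f - 6.0f * cx - 2.0f * cy;
    const float c = (20.0f + cx) * (20.0f + cx) + cy * cy - (r + 10.0f) * (r + 10.0f);

    const float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;

    // The smaller root is normally the corner on the near side of the circle;
    // the larger one would put the rectangle on the far side.
    const float y1 = (-b - std::sqrt(disc)) / (2.0f * a);
    const float x1 = 3.0f * y1 - 20.0f;
    out = { 10.0f, 10.0f, x1 - 10.0f, y1 - 10.0f };
    return true;
}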
What you have there is a classic circle-line intersection problem. You know a point on the line - the bottom left corner of the rectangle. And you know the slope of the line (from the aspect ratio). The circle you intersect with can be your red circle with its radius grown by 10, to give you your 10 pixel gap. The intersection will be the top right corner of the desired rectangle. That should be enough for an idea.

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This returns a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, with deltas applied at each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows you where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
Vec3 projected = m_Rotation * (a_Point - m_Position);
// #define FOV_ANGLE 60.f
// static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
// static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
float zcamera = DEPTH / projected.z;
return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The ZBuffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The ZBuffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this can not be the issue.
I made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence what you've got is not bad! A thing like this could be caused by a very small error.
A general tip for debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc.) and step through them with the debugger to see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading you also have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
First order the coords (x,y,z,u,v) so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
The second part of the triangle is calculated depending on whether the left side of the triangle is really on the left side (or needs swapping).
code fragment:
if (leftDeltaX < rightDeltaX)
{
leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
rightDeltaX2 = rightDeltaX
leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
swap(leftDeltaX, rightDeltaX);
leftDeltaX2 = leftDeltaX;
rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaU2 = leftDeltaU
leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaV2 = leftDeltaV
leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x,u,v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats)
(For my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display.)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes every time is too slow)
unsigned int *buf = destbuf + (y * pitch) + startX; (unsigned int in case you are doing 24 bit or 32 bit rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilinear interpolation) you do the following:
code fragment:
float tv = startV / startZ;
float tu = startU / startZ;
tv %= texturePitch; //make sure the texture coordinates stay on the texture if they are too wide/high
tu %= texturePitch; //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf+tu + (tv*texturePitch); //in case of fixedpoints one could have shifted the tv. Now we have to multiply everytime.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower than ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I read at the time is available online: Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
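As a rough sketch of that last point (illustrative names only): store u/z, v/z and 1/z per vertex, interpolate those linearly across the scanline, and recover the true UV per pixel with one divide:
// Varyings stored in perspective-correct form.
struct PerspVary { float u_over_z, v_over_z, one_over_z; };

// Linear interpolation of the already-divided quantities is valid in screen space.
PerspVary lerpVary(const PerspVary& a, const PerspVary& b, float t)
{
    return { a.u_over_z   + (b.u_over_z   - a.u_over_z)   * t,
             a.v_over_z   + (b.v_over_z   - a.v_over_z)   * t,
             a.one_over_z + (b.one_over_z - a.one_over_z) * t };
}

// Recover the real texture coordinates: dividing by 1/z == multiplying by z.
void recoverUV(const PerspVary& p, float& u, float& v)
{
    const float z = 1.0f / p.one_over_z;
    u = p.u_over_z * z;
    v = p.v_over_z * z;
}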

Direct3D & iPhone Accelerometer Matrix

I am using a WinSock connection to get the accelerometer info off an iPhone and into a Direct3D application. I have modified Apple's GLGravity sample code to get my helicopter moving in relation to gravity; however, I need to "cap" the movement so the helicopter can't fly upside down! I have tried to limit the output of the accelerometer like so:
if (y < -0.38f) {
y = -0.38f;
}
Except this doesn't seem to work!? The only thing I can think of is that I need to modify the custom matrix, but I can't seem to get my head around what I need to change. The matrix code is below.
_x = acceleration.x;
_y = acceleration.y;
_z = acceleration.z;
float length;
D3DXMATRIX matrix, t;
memset(matrix, '\0', sizeof(matrix));
D3DXMatrixIdentity(&matrix);
// Make sure acceleration value is big enough.
length = sqrtf(_x * _x + _y * _y + _z * _z);
if (length >= 0.1f && kInFlight == TRUE) { // We have an acceleration value good enough to work with.
matrix._44 = 1.0f; //
// First matrix column is a gravity vector.
matrix._11 = _x / length;
matrix._12 = _y / length;
matrix._13 = _z / length;
// Second matrix column is an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz},
// defined by the equation Gx * x + Gy * y + Gz * z = 0 in which we set x = 0 and y = 1.
matrix._21 = 0.0f;
matrix._22 = 1.0f;
matrix._23 = -_y / _z;
length = sqrtf(matrix._21 * matrix._21 + matrix._22 * matrix._22 + matrix._23 * matrix._23);
matrix._21 /= length;
matrix._22 /= length;
matrix._23 /= length;
// Set third matrix column as a cross product of the first two.
matrix._31 = matrix._12 * matrix._23 - matrix._13 * matrix._22;
matrix._32 = matrix._21 * matrix._13 - matrix._23 * matrix._11;
matrix._33 = matrix._11 * matrix._22 - matrix._12 * matrix._21;
}
If anyone can help it would be much appreciated!
I think double integration is probably over-complicating things. If I understand the problem correctly, the iPhone is giving you a vector of values from the accelerometers. Assuming the user isn't waving it around, that vector will be of roughly constant length, and pointing directly downwards with gravity.
There is one major problem with this, and that is that you can't tell when the user rotates the phone around the horizontal. Imagine you lay your phone on the table, with the bottom facing you as you're sitting in front of it; the gravity vector would be (0, -1, 0). Now rotate your phone 90 degrees so the bottom is facing off to your left, but it is still flat on the table. The gravity vector is still going to be (0, -1, 0). But you'd really want your helicopter to have turned with the phone. It's a basic limitation of working from the gravity vector alone: rotating the phone around that vector doesn't change it.
So let's assume that you've told the user they're not allowed to rotate their phone like that, and they have to keep it with the bottom point to you. That's fine, you can still get a lot of control from that.
Next, you need to cap the input such that the helicopter never goes more than 90 degrees over on its side. Imagine the vector that you're given as being a stick attached to your phone, dangling with gravity. The vector you have describes the direction of gravity relative to the phone's flat surface. If it were (0, -1, 0), the stick is pointing directly downwards (-y). If it were (1, 0, 0), the stick is pointing to the right of the phone (+x), which implies that the phone has been twisted 90 degrees clockwise (looking away from you at the phone).
Assume in this metaphor that the stick has full rotational freedom. It can be pointing in any direction from the phone. So moving the stick around describes the surface of a sphere. But crucially, you only want the stick to be able to move around the lower half of that sphere. If the user twists the phone so that the stick would be in the upper half of the sphere, you want it to cap such that it's pointing somewhere around the equator of the sphere.
You can achieve this quite cleanly by using polar co-ordinates. 3D vectors and polar co-ordinates are interchangeable - you can convert to and from without losing any information.
Convert the vector you have (normalised of course) into a set of 3D polar co-ordinates (you should be able to find this logic on the web quite easily). This will give you an angle around the horizontal plane and an angle for the vertical plane (and a distance from the origin - for a normalised vector, this should be 1.0). If the vertical angle is positive, the vector is in the upper half of the sphere; if negative, it's in the lower half. Then cap the vertical angle so that it is always zero or less (and so in the lower half of the sphere). Then you can take the horizontal and capped vertical angles and convert them back into a vector.
This new vector, if plugged into the matrix code you already have, will give you the correct orientation, limited to the range of motion you need. It will also be stable if the user turns their phone slightly beyond the 90 degree mark - this logic will keep your directional vector as close to the user's current orientation as possible, without going beyond the limit you set.
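A small sketch of that capping step (hypothetical helper; assumes the vector is already normalised and that -y is "down", as in the table example above):
#include <cmath>

// Clamp a normalised direction vector to the lower half of the sphere by
// converting to spherical angles, capping the vertical angle at zero, and
// converting back.
void clampToLowerHemisphere(float& x, float& y, float& z)
{
    const float horizontal = std::atan2(z, x);   // angle around the horizontal (XZ) plane
    float vertical = std::asin(y);               // angle above (+) or below (-) that plane
    if (vertical > 0.0f)                         // upper half of the sphere?
        vertical = 0.0f;                         // pin the "stick" to the equator
    const float c = std::cos(vertical);
    x = c * std::cos(horizontal);
    y = std::sin(vertical);
    z = c * std::sin(horizontal);
}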
Try normalizing the acceleration vector first. (edit: after you check the length) (edit edit: I guess I need to learn how to read... how do I delete my answer?)
So if I understand this correctly, the iPhone is feeding you accelerometer data, saying how hard you're moving the iPhone in 3 axes.
I'm not familiar with that Apple sample, so I don't know what it's doing. However, it sounds like you're mapping acceleration directly to orientation, but I think what you want to do is doubly integrate the acceleration in order to obtain a position, and look at changes in position in order to orient the helicopter. Basically, this is more of a physics problem than a Direct3D problem.
It looks like you are using the acceleration vector from the phone to define one axis of an orthogonal frame of reference. And I suppose +Y points towards the ground, so you are concerned about the case when the vector points towards the sky.
Consider the case when the iPhone reports {0, -6.0, 0}. You will change this vector to {0, -.38, 0}. But they both normalize to {0, -1.0, 0}. So, the effect of clamping y at -.38 is influenced by the magnitude of the other two components of the vector.
What you really want is to limit the angle of the vector to the XZ plane when Y is negative.
Say you want to limit it to be no more than 30 degrees to the XZ plane when Y is negative. First normalize the vector then:
const float limitAngle = 30.f * PI/180.f; // angle in radians
const float sinLimitAngle = sinf(limitAngle);
const float XZLimitLength = sqrtf(1-sinLimitAngle*sinLimitAngle);
if (_y < -sinLimitAngle)
{
_y = -sinLimitAngle;
float XZlengthScale = XZLimitLength / sqrtf(_x*_x + _z*_z);
_x *= XZlengthScale;
_z *= XZlengthScale;
}