I am trying to produce random equilateral triangles on the console screen.
The method I am using is: create a randomly positioned centre point for the triangle, move that centre to the origin (0,0), and create three points from the centre by adding the radius (a random number) to the Y axis of each point. I then rotate two of the points, one by 120 degrees and the other by 240, making an equilateral triangle, and draw lines between the points. Finally I move the points back to their original plot relative to the centroid.
Most of the time this works and I get an equilateral triangle; however, other times I don't quite get an equilateral triangle, and I am at a complete loss as to why.
I am using Bresenham's line algorithm to draw the lines between points.
Image of working triangle: http://imgur.com/GpF406O
Image of broken triangle: http://imgur.com/Oa2BYun
Here is the code that plots the coords for the triangle:
void Triangle::createVertex(Vertex cent)
{
// angle of 120 in radians
double s120 = sin(2.0943951024);
double c120 = cos(2.0943951024);
// angle of 240 in radians
double s240 = sin(4.1887902048);
double c240 = cos(4.1887902048);
// bringing centroid to the origin and saving old pos to move later on
int x = cent.getX();
int y = cent.getY();
cent.setX(0);
cent.setY(0);
// creating the points all equal distance from the centroid
Vertex v1(cent.getX(), cent.getY() + radius);
Vertex v2(cent.getX(), cent.getY() + radius);
Vertex v3(cent.getX(), cent.getY() + radius);
// rotate points
double newx = v1.getX() * c120 - v1.getY() * s120;
double newy = v1.getY() * c120 + v1.getX() * s120;
double xnew = v2.getX() * c240 - v2.getY() * s240;
double ynew = v2.getY() * c240 + v2.getX() * s240;
// giving the points their actual location in relation to the old pos of the centroid
v1.setX(newx + x);
v1.setY(newy + y);
v2.setX(xnew + x);
v2.setY(ynew + y);
v3.setX(x);
v3.setY(y + radius);
// adding them to a list (the list is used in a function to draw the lines)
vertices.push_back(v1);
vertices.push_back(v2);
vertices.push_back(v3);
}
Looking at the images of your two triangles (and at the line drawing algorithm), you are drawing lines as a series of discrete pixels. That means a vertex must fall inside a pixel (it can't sit on a boundary), like in this image.
So what happens if your vertex falls on* a border between pixels? Your line drawing algorithm has to decide which pixel to put the vertex in.
Looking at the algorithm description on Wikipedia and the C++ implementation on a page at www.cs.helsinki.fi,
I see that both implementations use integer arithmetic**, which in this case is not unreasonable given you have discrete rows of pixels. This means that if your floating point calculations put one vertex above the threshold of the integer label for the next row of pixels when the floor (the conversion from float to int) is done, but the other vertex below that threshold, then the two vertices will be placed on different rows.
Think v1.y = 5.00000000000000000001 and v2.y = 4.99999999999999999999, which leads to v1 being placed on row 5 and v2 being placed on row 4.
This explains why you only see the issue occurring occasionally: you only occasionally have your vertices land on a boundary like this.
In order to fix this, a couple of things come to mind:
Fix it when you assign values to your vertices; the y values are the same anyway.
given:
v1.getX() = v2.getX() = 0 (defined by your code)
v1.getY() = v2.getY() = radius (defined by your code)
cos(120 degrees) = cos(240 degrees) ('tis true)
This reduces your two y values to
double newy = v1.getY() * c120
double ynew = v1.getY() * c120
ergo:
v1.setY(newy + y);
v2.setY(newy + y);
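Putting that together, here is a minimal sketch of the fixed createVertex, assuming the Vertex interface from the question (getX/getY/setX/setY and a two-argument constructor); the only real change is that v2 reuses v1's computed y value, so the two rotated vertices always land on the same row:
void Triangle::createVertex(Vertex cent)
{
    // 120 and 240 degrees in radians
    double s120 = sin(2.0943951024);
    double c120 = cos(2.0943951024);
    double s240 = sin(4.1887902048);
    double c240 = cos(4.1887902048);
    int x = cent.getX();
    int y = cent.getY();
    // unrotated point relative to the origin, at distance `radius` above the centroid
    double px = 0.0;
    double py = radius;
    // rotate by 120 degrees for v1
    double newx = px * c120 - py * s120;
    double newy = py * c120 + px * s120;
    // rotate by 240 degrees for v2; since px == 0 and cos(120) == cos(240),
    // v2's y is mathematically identical to v1's, so we reuse newy directly
    double xnew = px * c240 - py * s240;
    Vertex v1(0, 0), v2(0, 0), v3(0, 0);
    v1.setX(newx + x);
    v1.setY(newy + y);
    v2.setX(xnew + x);
    v2.setY(newy + y); // same y as v1, by construction
    v3.setX(x);
    v3.setY(y + radius);
    vertices.push_back(v1);
    vertices.push_back(v2);
    vertices.push_back(v3);
}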
If you wrote your own Bresenham's algorithm implementation, you could add a check in that code to make sure your vertices are at the same height, but that seems like a really bad place to put that kind of check, since the height of the endpoints is specific to your problem and not to drawing lines in general.
*Or not exactly on, but close enough that you can't tell the difference after accounting for floating point error.
**The algorithm is not restricted to integer arithmetic, but given the irregularity of your problem, the way the algorithm has been presented, and the fact that you are using discrete characters for the lines in your images, I suspect the integer arithmetic is the issue.
Related
I wanted to draw a circle using graphics.h in C++, but not directly using the circle() function. The circle I want to draw uses smaller circles as its points, i.e. the smaller circles would constitute the circumference of the larger circle. So I thought, if I did something like this, it would work:
{
int radius = 4;
// Points at which smaller circles would be drawn
int x, y;
int maxx = getmaxx();
int maxy = getmaxy();
// Co-ordinates of center of the larger circle (centre of the screen)
int h = maxx/2;
int k = maxy/2;
//Cartesian cirle formula >> (X-h)^2 + (Y-k)^2 = radius^2
//Effectively, this nested loop goes through every single coordinate on the screen
int gdriver = DETECT, gmode;
initgraph(&gdriver, &gmode, "");
for(x = 0; x<maxx; x++)
{
for(y = 0; y<maxy; y++)
{
if((((x-h)*(x-h)) + ((y-k)*(y-k))) == (radius*radius))
{
circle(x, y, 5); //Draw smaller circle with radius 5
} //at points which satisfy the circle equation only!
}
}
getch();
}
This is when I'm using graphics.h on Turbo C++, as that is the compiler we're learning with at school.
I know it's ancient.
So, theoretically, since the nested for loops check all the points on the screen and draw a small circle at every point that satisfies the circle equation, I thought I would get a large circle of the radius entered, whose circumference is made up of the smaller circles drawn in the for loop.
However, when I try the program, I get four hyperbolas (all pointing towards the center of the screen), and when I increase the radius, the pointiness (for lack of a better word) of the hyperbolas increases, until finally, when the radius is 256 or more, the two hyperbolas on the top and bottom intersect to make a large cross on my screen, like: "That's it, user, I give up!"
I came to the value 256 as I noticed that if the radius was a multiple of 4, the figures looked ... better?
I looked around for a solution for quite some time, but couldn't get any answers, so here I am.
Any suggestions???
EDIT >> Here's a rough diagram of the output I got...
There are two issues in your code:
First: You should really call initgraph before you call getmaxx and getmaxy, otherwise they will not necessarily return the correct dimensions of the graphics mode. This may or may not be a contributing factor depending on your setup.
Second, and most importantly: In Turbo C++, int is 16-bit. For example, here is circle with radius 100 (after the previous initgraph order issue was fixed):
Note the stray circles in the four corners. If we do a little debugging and add some print-outs (a useful strategy that you should file away for future reference):
if((((x-h)*(x-h)) + ((y-k)*(y-k))) == (radius*radius))
{
printf(": (%d-%d)^2 + (%d-%d)^2 = %d^2\n", x, h, y, k, radius);
circle(x, y, 5); //Draw smaller circle with radius
} //at points which satisfy circle equation only!
You can see what's happening (first line is maxx and maxy, not shown in above snippet):
In particular that circle at (63, 139) is one of the corners. If you do the math, you see that:
(63 - 319)² + (139 - 239)² = 75536
And since your ints are 16-bit, 75536 modulo 65536 = 10000 = the value that ends up being calculated = 100² = a circle where it shouldn't be.
An easy solution to this is to just change the relevant variables to long:
maxx, maxy
x, y
h, k
So:
long x, y;
...
initgraph(...);
...
long maxx = getmaxx();
long maxy = getmaxy();
...
long h = maxx / 2;
long k = maxy / 2;
And then you'll end up with correct output:
Note of course that like other answers point out, since you are using ints, you'll miss a lot of points. This may or may not be OK, but some values will produce noticeably poorer results (e.g. radius 256 only seems to have 4 integer solutions). You could introduce a tolerance if you want. You could also use a more direct approach but that might defeat the purpose of your exercise with the Cartesian circle formula. If you're into this sort of thing, here is a 24-page document containing a bunch of discussion, proofs, and properties about integers that are the sum of two squares.
I don't know enough about Turbo C++ to know if you can make it use 32-bit ints, I'll leave that as an exercise to you.
First of all, maxx and maxy are integers, which you initialize using functions that return the borders of the screen, and then later you use them as if they were functions. Just remove the parentheses:
// Co-ordinates of center of the larger circle (centre of the screen)
int h = maxx/2;
int k = maxy/2;
Then, you are testing for exact equality to decide whether a point is on the circle. Since the screen is a grid of pixels, many of your points will be missed. You need to add a tolerance: a maximum distance between the point you check and the actual circle. So change this line:
if(((x-h)*(x-h)) + ((y-k)*(y-k)) == radius*radius)
to this:
if(abs(((x-h)*(x-h)) + ((y-k)*(y-k)) - radius*radius) < 2)
Introducing some level of tolerance will solve the problem.
But it is not wise to check every point in the graphical window. Would you consider changing the approach? You can draw the needed small circles without any checks at all:
To fill the whole circumference of the big circle (radius RBig), you need NCircles small circles of radius RSmall:
NCircles = round to integer (Pi / ArcSin(RSmall / RBig));
The center of the i-th small circle is at
cx = mx + Round(RBig * Cos(i * 2 * Pi / NCircles));
cy = my + Round(RBig * Sin(i * 2 * Pi / NCircles));
where (mx, my) is the center of the big circle.
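As a minimal sketch of this direct approach, using the same graphics.h circle() call as the question (the function name is mine, and math.h is assumed to be included):
void drawCircleOfCircles(int mx, int my, int rBig, int rSmall)
{
    const double Pi = 3.14159265358979;
    // number of small circles that fit around the big circle's circumference
    int nCircles = (int)floor(Pi / asin((double)rSmall / (double)rBig) + 0.5);
    for (int i = 0; i < nCircles; i++)
    {
        double a = i * 2.0 * Pi / nCircles;
        int cx = mx + (int)floor(rBig * cos(a) + 0.5); // Round(RBig * Cos(...))
        int cy = my + (int)floor(rBig * sin(a) + 0.5); // Round(RBig * Sin(...))
        circle(cx, cy, rSmall);
    }
}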
Does anyone know an algorithm to calculate the number of sides required to approximate a circle using a polygon, if the radius r of the circle and the maximum departure D of the polygon from circularity are given? I really need to find the number of sides, as I need to draw the approximated circle in OpenGL.
Also, we have the resolution of the screen in NDC coordinates per pixel given by P, and by setting D = P/2 we could guarantee that our circle is within half a pixel of accuracy.
What you're describing here is effectively a quality factor, which often goes hand-in-hand with error estimates.
A common way we handle this is to calculate the error for a small portion of the circumference of the circle. The most trivial approach is to determine the difference in arc length of a slice of the circle compared to the length of the line segment joining the same two points on the circumference. You could use more effective measures, like the difference in area, radius, etc., but this method should be adequate.
Think of an octagon, circumscribed with a perfect circle. In this case, the error is the difference in length of the line between two adjacent points on the octagon, and the arc length of the circle joining those two points.
The arc length is easy enough to calculate: r * theta, where r is your radius and theta is the angle, in radians, between the two points, assuming you draw lines from each of these points to the center of the circle/polygon. For a closed polygon with n sides, the angle is just (2*PI/n) radians. Let the arc length corresponding to this value of n be equal to A, i.e. A = 2*PI*r/n.
The line length between the two points is easily calculated. Just divide your circle into n isosceles triangles, and each of those into two right-triangles. You know the angle in each right triangle is theta/2 = (2*PI/n)/2 = (PI/n), and the hypotenuse is r. So, you get your equation of sin(PI/n)=x/r, where x is half the length of the line segment joining two adjacent points on your circumscribed polygon. Let this value be B (ie: B=2x, so B=2*r*sin(PI/n)).
Now, just calculate the relative error, E = |A-B| / A (i.e. |TrueValue-ApproxValue|/|TrueValue|), and you get a nice little percentage, represented in decimal, of your error. You can use the above equations to set a constraint on E (i.e. it cannot be greater than some value, say, 0.05) in order for it to "look good".
So, you could write a function that calculates A, B, and E from the above equations, and loop through values of n, and have it stop looping when the calculated value of E is less than your threshold.
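A minimal sketch of such a loop (the function name is mine; math.h is assumed, and the threshold is whatever value you choose):
// find the smallest polygon side count whose relative chord-vs-arc error
// is below maxRelativeError (e.g. 0.05 for 5%); assumes r > 0
int sidesForError(double r, double maxRelativeError)
{
    const double Pi = 3.14159265358979323846;
    for (int n = 3; ; n++)
    {
        double A = 2.0 * Pi * r / n;      // arc length between adjacent vertices
        double B = 2.0 * r * sin(Pi / n); // chord length between the same vertices
        double E = fabs(A - B) / A;       // relative error
        if (E < maxRelativeError)
            return n;
    }
}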
I would say that you need to set the number of sides depending on two variables: the radius and the zoom (if you allow zoom).
A circle of radius 20 pixels can look OK with 32 to 56 sides, but if you use the same number of sides for a radius of 200 pixels, that number of sides will not be enough.
numberOfSides = radius * 3
If you allow zooming in and out, you will need to do something like this:
numberOfSides = radiusOfPaintedCircle * 3
When you zoom in, radiusOfPaintedCircle will be bigger than the radius "property" of the circle being drawn.
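As a sketch of that heuristic (the lower bound of 8 sides is my own assumption, not part of the answer):
// pick a side count from the radius of the circle as painted on screen, in pixels
int numberOfSides(float radiusOfPaintedCircle)
{
    int n = (int)(radiusOfPaintedCircle * 3);
    return n < 8 ? 8 : n; // hypothetical floor for very small circles
}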
I've got an algorithm to draw a circle using fixed function opengl, maybe it'll help?
It's hard to know what you mean when you say you want to "approximate a circle using polygon".
You'll notice in my algorithm below that I don't calculate the number of lines needed to draw the circle; I just iterate from 0 to 2*Pi, stepping the angle by 0.1 each time, drawing a line with glVertex2f to that point on the circle from the previous point.
void Circle::Render()
{
glLoadIdentity();
glPushMatrix();
glBegin(GL_LINES);
glColor3f(_vColour._x, _vColour._y, _vColour._z);
glVertex3f(_State._position._x, _State._position._y, 0);
glVertex3f(
(_State._position._x + (sinf(_State._angle)*_rRadius)),
(_State._position._y + (cosf(_State._angle)*_rRadius)),
0
);
glEnd();
glTranslatef(_State._position._x, _State._position._y, 0);
glBegin(GL_LINE_LOOP);
glColor3f(_vColour._x, _vColour._y, _vColour._z);
for(float angle = 0.0f; angle < g_k2Pi; angle += 0.1f)
glVertex2f(sinf(angle)*_rRadius, cosf(angle)*_rRadius);
glEnd();
glPopMatrix();
}
I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny), the coordinates are no longer accurate. I commented the line I think my fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx ,float w_ny)
{
float z_b = z_p - z_m;
float x_b = x_p - x_m;
float y_b = y_p - y_m;
float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; //<- there must be that fault
x_n = cos(w_bx)*sin(w_by)*length_+x_m;
z_n = sin(w_bx)*sin(w_by)*length_+z_m;
y_n = cos(w_by)*length_+y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angle (see link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct; the computation you do uses two inclination angles (one measured against the x axis, the other against the y axis)
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore in this case using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(w_nx) * b
b = RotationMatrixAroundY(w_ny) * b
n = m + b
(These are applications of the basic rotation matrices.)
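A minimal sketch of that recipe with hand-rolled rotation matrices (the struct and function names are mine, not from any particular library; math.h is assumed):
struct Vec3f { float x, y, z; };

// rotate v around the x axis by angle a (radians)
Vec3f rotateX(Vec3f v, float a)
{
    Vec3f r;
    r.x = v.x;
    r.y = v.y * cos(a) - v.z * sin(a);
    r.z = v.y * sin(a) + v.z * cos(a);
    return r;
}

// rotate v around the y axis by angle a (radians)
Vec3f rotateY(Vec3f v, float a)
{
    Vec3f r;
    r.x =  v.x * cos(a) + v.z * sin(a);
    r.y =  v.y;
    r.z = -v.x * sin(a) + v.z * cos(a);
    return r;
}

// n = m + Ry(w_ny) * Rx(w_nx) * (p - m)
Vec3f rotateAround(Vec3f p, Vec3f m, float w_nx, float w_ny)
{
    Vec3f b;
    b.x = p.x - m.x; b.y = p.y - m.y; b.z = p.z - m.z; // difference vector
    b = rotateX(b, w_nx);
    b = rotateY(b, w_ny);
    b.x += m.x; b.y += m.y; b.z += m.z;                // move back
    return b;
}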
Try to use vector math. Decide in which order you rotate; perhaps first along x, then along y.
If you rotate along z-axis, [z' = z]
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for y-axis: [y'' = y']
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
Again rotating along x-axis: [x''' = x'']
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally, the question of rotating around some specific point:
First subtract the point from the coordinates, then apply the rotations, and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny can't be measured relative to the fixed xyz coordinate system, but only relative to the coordinate system that results from applying the angle w_nx.
As kakTuZ observed, your code converts point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, it's ok with me.
The result of not rotating the next reference axis along with the first rotation is that two points that are 1 km apart at the equator move closer to each other towards the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia ...
But when you work with OpenGL you really should use OpenGL functions like glRotatef. These functions are calculated on the GPU, not on the CPU as in your function. The doc is here.
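For example, a minimal sketch of the usual fixed-function pattern for rotating around a specific point (standard glTranslatef/glRotatef calls; the angle variable names are mine):
// rotate the subsequently drawn geometry around the point (x_m, y_m, z_m)
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(x_m, y_m, z_m);            // 3. move the pivot back into place
glRotatef(angleDegY, 0.0f, 1.0f, 0.0f); // 2. rotate around the y axis (degrees)
glRotatef(angleDegX, 1.0f, 0.0f, 0.0f); //    then around the x axis
glTranslatef(-x_m, -y_m, -z_m);         // 1. move the pivot to the origin
// ... draw the point/object here ...
glPopMatrix();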
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix on top of the stack at the point of its rendering. Obtain that matrix with glGetFloatv, and then multiply it with either your own vector-matrix multiplication function, or use one of the many ones you can obtain easily online.
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
This is a good starting point.
I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This returns a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta each scanline. When rendering the top part, the deltas are x_delta[0] and x_delta[1]; when rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, with deltas applied at each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part tells you where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
Vec3 projected = m_Rotation * (a_Point - m_Position);
// #define FOV_ANGLE 60.f
// static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
// static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
float zcamera = DEPTH / projected.z;
return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The ZBuffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The ZBuffer will, for every pixel of the triangle, determine whether a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this can not be the issue.
I've made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence, what you've got is not bad! A thing like this could be caused by a very small error.
General tips in debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc etc) and step through it with the debugger and see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading, you also have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down into 2 parts: the top part and the bottom part. The top runs from y[0] to y[1] and the bottom part from y[1] to y[2]. For both parts you need to calculate the step variables with which you are interpolating. The example below shows you how to do the top part. If needed I can supply the bottom part too.
Please note that the interpolation offsets needed for the bottom part are already calculated in the 'pseudocode' fragment below.
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
the second part of the triangle is calculated dependent on: if the left side of the triangle is now really on the leftside (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
rightDeltaX2 = rightDeltaX
leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
swap(leftDeltaX, rightDeltaX);
leftDeltaX2 = leftDeltaX;
rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaU2 = leftDeltaU
leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaV2 = leftDeltaV
leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x,u,v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats)
For my fixedpoint algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps then the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for y=startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes everytime is too slow)
unsigned int buf = destbuf + (ypitch) + startX; (unsigned int in case you are doing 24bit or 32 bits rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilineair interpolation you do the following):
code fragment:
float tv = startV / startZ;
float tu = startU / startZ;
tv %= texturePitch; //make sure the texture coordinates stay on the texture if they are too wide/high
tu %= texturePitch; //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf+tu + (tv*texturePitch); //in case of fixedpoints one could have shifted the tv. Now we have to multiply everytime.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower then ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update the Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I had read at the time is available online: Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
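To make the fix concrete, here is a minimal sketch of the setup and per-pixel math (the variable names are illustrative, not taken from the poster's code):
// per-vertex setup: these three quantities interpolate linearly in screen space
float u_over_z   = u / z;
float v_over_z   = v / z;
float one_over_z = 1.0f / z;
// per-pixel recovery: divide the interpolated U/z and V/z by the
// interpolated 1/z (equivalently, multiply by the recovered z)
float z_pixel = 1.0f / one_over_z_current;
float u_pixel = u_over_z_current * z_pixel;
float v_pixel = v_over_z_current * z_pixel;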
I am using a WinSock connection to get the accelerometer info off an iPhone and into a Direct3D application. I have modified Apple's GLGravity sample code to get my helicopter moving in relation to gravity; however, I need to "cap" the movement so the helicopter can't fly upside down! I have tried to limit the output of the accelerometer like so:
if (y < -0.38f) {
y = -0.38f;
}
Except this doesn't seem to work!? The only thing I can think of is that I need to modify the custom matrix, but I can't seem to get my head around what I need to change. The matrix code is below.
_x = acceleration.x;
_y = acceleration.y;
_z = acceleration.z;
float length;
D3DXMATRIX matrix, t;
memset(matrix, '\0', sizeof(matrix));
D3DXMatrixIdentity(&matrix);
// Make sure acceleration value is big enough.
length = sqrtf(_x * _x + _y * _y + _z * _z);
if (length >= 0.1f && kInFlight == TRUE) { // We have a acceleration value good enough to work with.
matrix._44 = 1.0f; //
// First matrix column is a gravity vector.
matrix._11 = _x / length;
matrix._12 = _y / length;
matrix._13 = _z / length;
// Second matrix is arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz}.
// defined by the equation Gx * x + Gy * y + Gz * z = 0 in which we set x = 0 and y = 1.
matrix._21 = 0.0f;
matrix._22 = 1.0f;
matrix._23 = -_y / _z;
length = sqrtf(matrix._21 * matrix._21 + matrix._22 * matrix._22 + matrix._23 * matrix._23);
matrix._21 /= length;
matrix._22 /= length;
matrix._23 /= length;
// Set third matrix column as a cross product of the first two.
matrix._31 = matrix._12 * matrix._23 - matrix._13 * matrix._22;
matrix._32 = matrix._21 * matrix._13 - matrix._23 * matrix._11;
matrix._33 = matrix._11 * matrix._22 - matrix._12 * matrix._21;
}
If anyone can help it would be much appreciated!
I think double integration is probably over-complicating things. If I understand the problem correctly, the iPhone is giving you a vector of values from the accelerometers. Assuming the user isn't waving it around, that vector will be of roughly constant length, and pointing directly downwards with gravity.
There is one major problem with this: you can't tell when the user rotates the phone in the horizontal plane. Imagine you lay your phone on the table, with the bottom facing you as you're sitting in front of it; the gravity vector would be (0, -1, 0). Now rotate your phone 90 degrees so the bottom is facing off to your left, but it is still flat on the table. The gravity vector is still going to be (0, -1, 0). But you'd really want your helicopter to have turned with the phone. It's a basic limitation of the fact that the iPhone only has a 2D accelerometer, and it's extrapolating a 3D gravity vector from that.
So let's assume that you've told the user they're not allowed to rotate their phone like that, and that they have to keep it with the bottom pointing towards you. That's fine; you can still get a lot of control from that.
Next, you need to cap the input such that the helicopter never goes more than 90 degrees over on its side. Imagine the vector that you're given as a stick attached to your phone, dangling with gravity. The vector you have describes the direction of gravity relative to the phone's flat surface. If it were (0, -1, 0), the stick is pointing directly downwards (-y). If it were (1, 0, 0), the stick is pointing to the right of the phone (+x), which implies that the phone has been twisted 90 degrees clockwise (looking away from you at the phone).
Assume in this metaphor that the stick has full rotational freedom. It can be pointing in any direction from the phone. So moving the stick around describes the surface of a sphere. But crucially, you only want the stick to be able to move around the lower half of that sphere. If the user twists the phone so that the stick would be in the upper half of the sphere, you want it to cap such that it's pointing somewhere around the equator of the sphere.
You can achieve this quite cleanly by using polar co-ordinates. 3D vectors and polar co-ordinates are interchangeable - you can convert to and from without losing any information.
Convert the vector you have (normalised of course) into a set of 3D polar co-ordinates (you should be able to find this logic on the web quite easily). This will give you an angle around the horizontal plane, and an angle for vertical plane (and a distance from the origin - for a normalised vector, this should be 1.0). If the vertical angle is positive, the vector is in the upper half of the sphere, negative it's in the lower half. Then, cap the vertical angle so that it is always zero or less (and so in the lower half of the sphere). Then you can take the horizontal and capped vertical angle, and convert it back into a vector.
This new vector, if plugged into the matrix code you already have, will give you the correct orientation, limited to the range of motion you need. It will also be stable if the user turns their phone slightly beyond the 90 degree mark - this logic will keep your directional vector as close to the user's current orientation as possible, without going beyond the limit you set.
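A minimal sketch of that clamp, assuming the input vector is already normalised and that -y is "down" as described above (the function name is mine; math.h is assumed):
// clamp a normalised gravity vector to the lower half of the sphere
void clampToLowerHemisphere(float& x, float& y, float& z)
{
    // to spherical: a vertical angle above/below the horizontal plane,
    // plus a heading around the vertical axis
    float vertical   = asinf(y);     // > 0 means the upper hemisphere
    float horizontal = atan2f(z, x); // heading in the horizontal plane
    if (vertical > 0.0f)
        vertical = 0.0f;             // cap at the "equator"
    // back to a Cartesian unit vector
    float c = cosf(vertical);
    x = c * cosf(horizontal);
    y = sinf(vertical);
    z = c * sinf(horizontal);
}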
Try normalizing the acceleration vector first. (edit: after you check the length) (edit edit: I guess I need to learn how to read... how do I delete my answer?)
So if I understand this correctly, the iPhone is feeding you accelerometer data saying how hard you're moving the iPhone along 3 axes.
I'm not familiar with that Apple sample, so I don't know what it's doing. However, it sounds like you're mapping acceleration directly to orientation, but I think what you want to do is doubly integrate the acceleration to obtain a position, and look at changes in position in order to orient the helicopter. Basically, this is more of a physics problem than a Direct3D problem.
It looks like you are using the acceleration vector from the phone to define one axis of an orthogonal frame of reference, and I suppose +Y points towards the ground, so you are concerned about the case when the vector points towards the sky.
Consider the case when the iPhone reports {0, -6.0, 0}. You will change this vector to {0, -.38, 0}. But they both normalize to {0, -1.0, 0}. So the effect of clamping y at -.38 is influenced by the magnitude of the other two components of the vector.
What you really want is to limit the angle of the vector to the XZ plane when Y is negative.
Say you want to limit it to no more than 30 degrees from the XZ plane when Y is negative. First normalize the vector, then:
const float limitAngle = 30.f * PI/180.f; // angle in radians
const float sinLimitAngle = sinf(limitAngle);
const float XZLimitLength = sqrtf(1-sinLimitAngle*sinLimitAngle);
if (_y < -sinLimitAngle)
{
_y = -sinLimitAngle;
float XZlengthScale = XZLimitLength / sqrtf(_x*_x + _z*_z);
_x *= XZlengthScale;
_z *= XZlengthScale;
}