Shadow volumes - finding a silhouette - C++

I'm working on my OpenGL task, and the next stage is loading models and producing shadows using the shadow volumes algorithm. I do it in 3 stages:
setConnectivity - finding the neighbours of each triangle and storing their indices in the neigh parameter of each triangle,
markVisible(float* lp) - if lp represents the vector of the light's position, it marks triangles as visible = true or visible = false depending on the dot product of its normal vector and the light position (a simplified sketch of this step follows the list),
markSilhouette(float *lp) - marking silhouette edges and building the volume itself, extending the silhouette to infinity (100 units is enough) in the direction opposite to the light.
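A rough sketch of the idea behind markVisible, just to show what I mean by the second step (the loop and the normal/centre fields here are simplified placeholders, not my exact member names):

void Model::markVisible(float* lp){
    for (int i = 0; i < m_numTriangles; i++){
        Triangle* pTri = &m_pTriangles[i];
        // direction from the triangle towards the light
        float lx = lp[0] - pTri->m_center[0];
        float ly = lp[1] - pTri->m_center[1];
        float lz = lp[2] - pTri->m_center[2];
        // dot product of the triangle's normal and the light direction
        float dot = pTri->m_normal[0]*lx + pTri->m_normal[1]*ly + pTri->m_normal[2]*lz;
        pTri->visible = (dot > 0.0f); // facing the light
    }
}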
I checked all stages, and can definitely say that it's all OK with the first two, so the problem is in the third function, which I included in my question. I use the algorithm introduced in this tutorial: http://www.3dcodingtutorial.com/Shadows/Shadow-Volumes.html
Briefly, an edge is included in the silhouette if it belongs to a visible triangle and a non-visible triangle at the same time.
Here is a pair of screenshots to show you what's wrong:
http://prntscr.com/17dmg , http://prntscr.com/17dmq
As you can see, the green sphere represents the light's position, and these ugly green-blue polygons are the faces of the "shadow volume". You can also see that I'm applying this function to the model of a cube, and one of the volume's sides is missing (it's not closed, but it should be). Can someone suggest what's wrong with my code and how I can fix it? Here is the code I promised to include (variable names are self-explanatory, I suppose, but if you don't think so I can add a description for each of them):
void Model::markSilhouette(float* lp){
    glBegin(GL_QUADS);
    for ( int i = 0; i < m_numMeshes; i++ )
    {
        for ( int t = 0; t < m_pMeshes[i].m_numTriangles; t++ )
        {
            int triangleIndex = m_pMeshes[i].m_pTriangleIndices[t];
            Triangle* pTri = &m_pTriangles[triangleIndex];
            if (pTri->visible){
                for(int j=0;j<3;j++){
                    int triangleIndex = m_pMeshes[i].m_pTriangleIndices[pTri->neigh[j]-1];
                    Triangle* pTrk = &m_pTriangles[triangleIndex];
                    if(!pTrk->visible){
                        int p1j=pTri->m_vertexIndices[j];
                        int p2j=pTri->m_vertexIndices[(j+1)%3];
                        float* v1=m_pVertices[p1j].m_location;
                        float* v2=m_pVertices[p2j].m_location;
                        float x1=m_pVertices[p1j].m_location[0];
                        float y1=m_pVertices[p1j].m_location[1];
                        float z1=m_pVertices[p1j].m_location[2];
                        float x2=m_pVertices[p2j].m_location[0];
                        float y2=m_pVertices[p2j].m_location[1];
                        float z2=m_pVertices[p2j].m_location[2];
                        t=100;
                        float xl1=(x1-lp[0])*t;
                        float yl1=(y1-lp[1])*t;
                        float zl1=(z1-lp[2])*t;
                        float xl2=(x2-lp[0])*t;
                        float yl2=(y2-lp[1])*t;
                        float zl2=(z2-lp[2])*t;
                        glColor3f(0,0,1);
                        glVertex3f(x1 + xl1, y1 + yl1, z1 + zl1);
                        glVertex3f(x1, y1, z1);
                        glColor3f(0,1,0);
                        glVertex3f(x2 + xl2, y2 + yl2, z2 + zl2);
                        glVertex3f(x2, y2, z2);
                    }
                }
            }
        }
    }
    glEnd();
}

I've found it. It looks like if you don't see an obvious algorithm mistake for a few days, then you've made a really stupid mistake.
My triangle index variable is called t. Guess what? My extending vector length is also called t, they are in the same scope, and I set t=100 after the FIRST visible triangle :D So now the volumes look like this:
outside http://prntscr.com/17l3n
inside http://prntscr.com/17l40
And it looks good for all light positions (acceptable by the shadow volumes algorithm, of course). So the working code for drawing a shadow volume is the following:
void Model::markSilouette(float* lp){
    glDisable(GL_LIGHTING);
    glPointSize(4.0);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_FILL);
    glBegin(GL_QUADS);
    for ( int i = 0; i < m_numMeshes; i++ )
    {
        for ( int t = 0; t < m_pMeshes[i].m_numTriangles; t++ )
        {
            int triangleIndex = m_pMeshes[i].m_pTriangleIndices[t];
            Triangle* pTri = &m_pTriangles[triangleIndex];
            if (pTri->visible){
                for(int j=0;j<3;j++){
                    Triangle* pTrk;
                    if(pTri->neigh[j]){
                        int triangleIndex = m_pMeshes[i].m_pTriangleIndices[pTri->neigh[j]-1];
                        pTrk = &m_pTriangles[triangleIndex];
                    }
                    if((!pTri->neigh[j]) || !pTrk->visible){
                        int p1j=pTri->m_vertexIndices[j];
                        int p2j=pTri->m_vertexIndices[(j+1)%3];
                        float* v1=m_pVertices[p1j].m_location;
                        float* v2=m_pVertices[p2j].m_location;
                        float x1=m_pVertices[p1j].m_location[0];
                        float y1=m_pVertices[p1j].m_location[1];
                        float z1=m_pVertices[p1j].m_location[2];
                        float x2=m_pVertices[p2j].m_location[0];
                        float y2=m_pVertices[p2j].m_location[1];
                        float z2=m_pVertices[p2j].m_location[2];
                        float f=100; // THE PROBLEM WAS HERE
                        float xl1=(x1-lp[0])*f;
                        float yl1=(y1-lp[1])*f;
                        float zl1=(z1-lp[2])*f;
                        float xl2=(x2-lp[0])*f;
                        float yl2=(y2-lp[1])*f;
                        float zl2=(z2-lp[2])*f;
                        glColor3f(0,0,0);
                        glVertex3f(x1 + xl1, y1 + yl1, z1 + zl1);
                        glVertex3f(x1, y1, z1);
                        glVertex3f(x2, y2, z2);
                        glVertex3f(x2 + xl2, y2 + yl2, z2 + zl2);
                    }
                }
            }
        }
    }
    glEnd();
}

I think everything is OK; you are just rendering the volume without a depth test =)
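Something along these lines around the volume drawing should already make it look sane; this is only a sketch, and model / lightPos are placeholder names:

glEnable(GL_DEPTH_TEST);          // test the volume against the scene's depth
glDepthMask(GL_FALSE);            // but don't let the volume overwrite the depth buffer
model.markSilouette(lightPos);    // the volume-drawing function from above
glDepthMask(GL_TRUE);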

Related

How does fWorldPerScreenWidthPixel work for drawing in a loop

This code draws a sine wave using function(). In the following panning/zooming code, I am trying to understand how fWorldPerScreenWidthPixel is being used to draw the line segments.
WorldToScreen(fWorldLeft - fWorldPerScreenWidthPixel, -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f, opx, opy);
It is setting opx and opy, but why is fWorldPerScreenWidthPixel subtracted from fWorldLeft?
It seems strange to want to start left of fWorldLeft in the for loop where it draws the line. fWorldLeft starts at -25.
I have included the necessary code to explain:
// Draw Chart
float fWorldPerScreenWidthPixel = (fWorldRight - fWorldLeft) / ScreenWidth();
float fWorldPerScreenHeightPixel = (fWorldBottom - fWorldTop) / ScreenHeight();
int px, py, opx = 0, opy = 0;
WorldToScreen(fWorldLeft - fWorldPerScreenWidthPixel, -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f, opx, opy);
for (float x = fWorldLeft; x < fWorldRight; x += fWorldPerScreenWidthPixel)
{
    float y = -function(x - 5.0f) + 5.0f;
    WorldToScreen(x, y, px, py);
    DrawLine(opx, opy, px, py, PIXEL_SOLID, FG_GREEN);
    opx = px;
    opy = py;
}
Call to set fWorldLeft:
// Clip
float fWorldLeft, fWorldTop, fWorldRight, fWorldBottom;
ScreenToWorld(0, 0, fWorldLeft, fWorldTop);
This calls ScreenToWorld, which sets fWorldLeft:
// Convert coordinates from Screen Space --> World Space
void ScreenToWorld(int nScreenX, int nScreenY, float &fWorldX, float &fWorldY)
{
    fWorldX = ((float)nScreenX / fScaleX) + fOffsetX;
    fWorldY = ((float)nScreenY / fScaleY) + fOffsetY;
}
and while I'm at it, World to Screen:
// Convert coordinates from World Space --> Screen Space
void WorldToScreen(float fWorldX, float fWorldY, int &nScreenX, int &nScreenY)
{
    nScreenX = (int)((fWorldX - fOffsetX) * fScaleX);
    nScreenY = (int)((fWorldY - fOffsetY) * fScaleY);
}
Thank you!
Josh
Let's break it down
WorldToScreen(
    fWorldLeft - fWorldPerScreenWidthPixel,
    -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f,
    opx, opy);
A clearer way to write that would be
x = fWorldLeft - fWorldPerScreenWidthPixel;
WorldToScreen(
    x,
    -function(x - 5.0f) + 5.0f,
    opx, opy);
This transforms the position (x, f(x)) from world space to screen space and stores the result in (opx, opy). Let's see how these two variables are used:
for(...)
{
    ...
    DrawLine(opx, opy, px, py, PIXEL_SOLID, FG_GREEN);
    ...
}
This draws a line from (opx, opy) to (px, py), where (px, py) is the current point on the function and (opx, opy) is the old point on the function. And that is exactly what the initialization above does: you set (opx, opy) to a point that is one pixel outside of the screen to ensure that there are no gaps at the border.
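Equivalently, you could fold that initialization into the loop and just skip the very first draw; a rough sketch using the same names as above:

int px, py, opx = 0, opy = 0;
bool havePrevious = false;
// start one world-pixel left of the visible range so the first segment reaches the border
for (float x = fWorldLeft - fWorldPerScreenWidthPixel; x < fWorldRight; x += fWorldPerScreenWidthPixel)
{
    float y = -function(x - 5.0f) + 5.0f;
    WorldToScreen(x, y, px, py);
    if (havePrevious)
        DrawLine(opx, opy, px, py, PIXEL_SOLID, FG_GREEN);
    havePrevious = true;
    opx = px;
    opy = py;
}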

Calculating points on a 3D angle between two lines [closed]

I'm trying to calculate multiple points on an angle (circle segment) so that I can store it as a VBO of Vector3 and render it in OpenGL.
Imagine each of those points on the dotted line as a coordinate I want to calculate
I know I can find the magnitude of the angle using the dot product, and in 2 dimensions I would be able to calculate the points on the angle just using sin and cos of this angle. But how do I apply this in 3 dimensions?
I thought maybe I should split the angle down into components, but then wasn't sure how to calculate magnitudes in that situation.
So what is the best method for calculating those points, and how do I do it?
I'm working in C# but of course pseudo code or just methods would do.
Normalize and scale both vectors, then slerp between them.
Slerp stands for spherical linear interpolation; it is referenced mostly for quaternions but is valid here as well:
const float DOT_THRESHOLD = 0.9995f; // treat nearly parallel vectors as a special case

vec3 slerp(vec3 a, vec3 b, float t){
    float dotp = dot(a, b);
    if (dotp > DOT_THRESHOLD) {
        // If the inputs are too close for comfort, linearly interpolate
        // and normalize the result to avoid division by near 0
        vec3 result = a + t*(b - a);
        result.normalize();
        return result;
    }
    float theta = acos(dotp);
    return (sin(theta*(1-t))*a + sin(theta*t)*b)/sin(theta);
}
The trick is to compute the points as if your two vectors were the unit basis vectors; the arc is at most half a circle. But then, instead of writing a point as (x,y) = (1,0,0)*x + (0,1,0)*y..., put your two red and green vectors in as the new basis.
Compute your 2D x,y circle points. Then the 3D point is redvector*x + greenvector*y.
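To make that concrete, here is a minimal sketch (the Vec3 type and helpers are stand-ins); it orthonormalizes the second vector first so that the traced arc is truly circular and ends exactly at the green direction:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// count >= 2 points swept from the red direction to the green direction
std::vector<Vec3> arcPoints(Vec3 red, Vec3 green, int count) {
    Vec3 u = normalize(red);
    Vec3 g = normalize(green);
    float angle = std::acos(dot(u, g));               // magnitude of the angle
    Vec3 w = normalize(add(g, scale(u, -dot(g, u)))); // green made perpendicular to red
    std::vector<Vec3> pts;
    for (int i = 0; i < count; ++i) {
        float a = angle * i / (count - 1);
        pts.push_back(add(scale(u, std::cos(a)), scale(w, std::sin(a))));
    }
    return pts;
}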
Here is a C++ DLL for your C# project (tested; the angles are not uniform, but there is no division by zero as long as the vectors have a non-zero magnitude):
Usage:
[DllImport("angle.dll")]
extern static void anglePoints(
    float x1,
    float y1,
    float z1,
    float x2,
    float y2,
    float z2,
    float [] points
);
Inside the DLL:
class float3
{
public:
    float x,y,z;
    float3(float X, float Y, float Z)
    {
        x=X; y=Y; z=Z;
    }
    float3 sub(float X, float Y, float Z)
    {
        float3 tmp(0,0,0);
        tmp.x=x-X;
        tmp.y=y-Y;
        tmp.z=z-Z;
        return tmp;
    }
    float3 sub(float3 b)
    {
        float3 tmp(0,0,0);
        tmp.x=x-b.x;
        tmp.y=y-b.y;
        tmp.z=z-b.z;
        return tmp;
    }
    float3 add(float3 b)
    {
        float3 tmp(0,0,0);
        tmp.x=x+b.x;
        tmp.y=y+b.y;
        tmp.z=z+b.z;
        return tmp;
    }
    void normalize()
    {
        float r=sqrt(x*x+y*y+z*z);
        x/=r;
        y/=r;
        z/=r;
    }
    void scale(float s)
    {
        x*=s;y*=s;z*=s;
    }
    void set(float3 v)
    {
        x=v.x;y=v.y;z=v.z;
    }
};

extern "C" __declspec(dllexport) void anglePoints(
    float x1,
    float y1,
    float z1,
    float x2,
    float y2,
    float z2,
    float * points
)
{
    float3 A(x1,y1,z1);
    float3 B(x2,y2,z2);
    float3 tmp(0,0,0);
    float3 diff(0,0,0);
    for(int i=0;i<10;i++)
    {
        tmp.set(A);
        diff.set(B.sub(A));
        diff.scale(0.1*((float)i)); // simple and not efficient :P
        diff.set(diff.add(tmp));
        diff.normalize();           // normalized values so you can
        points[i*3+0]=diff.x;       // simply use them
        points[i*3+1]=diff.y;
        points[i*3+2]=diff.z;
    }
}
Example:
float[] tmp = new float[30];
anglePoints(0,1,1,10,10,10,tmp);
for (int i = 0; i < 30; i++)
{
    Console.WriteLine(tmp[i]);
}
Output:
0 // starts as 0,1,1 normalized
0,7071068
0,7071068
0,34879
0,6627011
0,6627011
0,4508348
0,6311687
0,6311687
0,4973818
0,6134375
0,6134375
0,5237828
0,6023502
0,6023502
0,540738
0,5948119
0,5948119
0,5525321
0,5893675
0,5893675
0,5612046
0,5852562
0,5852562
0,5678473
0,5820435
0,5820435
0,5730973 //ends as 10,10,10 but normalized
0,579465
0,579465

Making balls bounce off each other (OpenGL)

I'm trying to make an application where balls bounce off the walls and also off each other. The bouncing off the walls works fine, but I'm having some trouble getting them to bounce off each other. Here's the code I'm using to make them bounce off another ball (for testing I only have 2 balls)
// Calculate the distance using Pyth. Thrm.
GLfloat x1, y1, x2, y2, xd, yd, distance;
x1 = balls[0].xPos;
y1 = balls[0].yPos;
x2 = balls[1].xPos;
y2 = balls[1].yPos;
xd = x2 - x1;
yd = y2 - y1;
distance = sqrt((xd * xd) + (yd * yd));
if(distance < (balls[0].ballRadius + balls[1].ballRadius))
{
    std::cout << "Collision\n";
    balls[0].xSpeed = -balls[0].xSpeed;
    balls[0].ySpeed = -balls[0].ySpeed;
    balls[1].xSpeed = -balls[1].xSpeed;
    balls[1].ySpeed = -balls[1].ySpeed;
}
What happens is that they randomly bounce, or pass through each other. Is there some physics that I'm missing?
EDIT: Here's the full function
// Callback handler for window re-paint event
void display()
{
    glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);

    // FOR LOOP
    for (int i = 0; i < numOfBalls; i++)
    {
        glLoadIdentity(); // Reset model-view matrix
        int numSegments = 100;
        GLfloat angle = 0;
        glTranslatef(balls[i].xPos, balls[i].yPos, 0.0f); // Translate to (xPos, yPos)

        // Use triangular segments to form a circle
        glBegin(GL_TRIANGLE_FAN);
        glColor4f(balls[i].colorR, balls[i].colorG, balls[i].colorB, balls[i].colorA);
        glVertex2f(0.0f, 0.0f); // Center of circle
        for (int j = 0; j <= numSegments; j++)
        {
            // Last vertex same as first vertex
            angle = j * 2.0f * PI / numSegments; // 360 deg for all segments
            glVertex2f(cos(angle) * balls[i].ballRadius, sin(angle) * balls[i].ballRadius);
        }
        glEnd();

        // Animation Control - compute the location for the next refresh
        balls[i].xPos += balls[i].xSpeed;
        balls[i].yPos += balls[i].ySpeed;

        // Calculate the distance using Pyth. Thrm.
        GLfloat x1, y1, x2, y2, xd, yd, distance;
        x1 = balls[0].xPos;
        y1 = balls[0].yPos;
        x2 = balls[1].xPos;
        y2 = balls[1].yPos;
        xd = x2 - x1;
        yd = y2 - y1;
        distance = sqrt((xd * xd) + (yd * yd));
        if(distance < (balls[0].ballRadius + balls[1].ballRadius))
        {
            std::cout << "Collision\n";
            balls[0].xSpeed = -balls[0].xSpeed;
            balls[0].ySpeed = -balls[0].ySpeed;
            balls[1].xSpeed = -balls[1].xSpeed;
            balls[1].ySpeed = -balls[1].ySpeed;
        }
        else
        {
            std::cout << "No collision\n";
        }

        // Check if the ball exceeds the edges
        if (balls[i].xPos > balls[i].xPosMax)
        {
            balls[i].xPos = balls[i].xPosMax;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        else if (balls[i].xPos < balls[i].xPosMin)
        {
            balls[i].xPos = balls[i].xPosMin;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        if (balls[i].yPos > balls[i].yPosMax)
        {
            balls[i].yPos = balls[i].yPosMax;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
        else if (balls[i].yPos < balls[i].yPosMin)
        {
            balls[i].yPos = balls[i].yPosMin;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
    }
    glutSwapBuffers(); // Swap front and back buffers (of double buffered mode)
}
Note: most of the function uses a for loop with numOfBalls as the counter, but to test collision I'm only using 2 balls, hence the balls[0] and balls[1].
Here are some things to consider.
If the length of (xSpeed, ySpeed) is roughly comparable with .ballRadius, it is possible for two balls to travel "through" each other between "ticks" of the simulation's clock (one step). Consider two balls which are traveling perfectly vertically, one up, one down, and 1 .ballRadius apart horizontally. In real life they would clearly collide, but it would be easy for your simulation to miss this event if .ySpeed ~ .ballRadius.
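If you want to see that idea in code, here is a rough sketch of one way to reduce the tunnelling: move in several sub-steps per frame and test the distance after each one (the Ball struct below just mirrors the fields used in the question):

#include <cmath>

struct Ball { float xPos, yPos, xSpeed, ySpeed, ballRadius; };

// Returns true if the two balls touch at any sub-step of this frame.
bool stepAndDetect(Ball &a, Ball &b, int substeps)
{
    for (int s = 0; s < substeps; ++s)
    {
        a.xPos += a.xSpeed / substeps;  a.yPos += a.ySpeed / substeps;
        b.xPos += b.xSpeed / substeps;  b.yPos += b.ySpeed / substeps;
        float xd = b.xPos - a.xPos, yd = b.yPos - a.yPos;
        if (std::sqrt(xd * xd + yd * yd) < a.ballRadius + b.ballRadius)
            return true;   // a collision happened somewhere inside this frame
    }
    return false;
}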
Second, your change to the velocity vector of the balls results in each ball coming to rest, since
balls[0].xSpeed -= balls[0].xSpeed;
is a really exotic way of writing
balls[0].xSpeed = 0;
For the physics to be almost correct, you need to invert only the component perpendicular to the plane of contact.
In other words, take collision_vector to be the vector between the centers of the balls (just subtract one point's coordinates from the other's). Because you have spheres, this also happens to be the normal of the collision plane.
Now, for each ball in turn, you need to decompose its speed. The A component is the one aligned with collision_vector; you can obtain it with some vector arithmetic: A = dot(Speed, collision_vector) * collision_vector, with collision_vector normalized. This is the part you want to invert. You also want to extract the B component that is parallel to the collision plane. Because it's parallel, it won't change because of the collision. You obtain it by subtracting A from the speed vector.
Finally, the new speed will be something like B - A. If you want the balls to spin, you will need an angular momentum in the direction of A - B. If the balls have different masses, then you will need to use the weight ratio as a multiplier for A in the first formula.
This will make the collision look legit. The detection still needs to happen correctly: make sure that the speeds are significantly smaller than the radius of the balls. For comparable or bigger speeds you will need more complex algorithms.
Note: most of the stuff above is vector arithmetic. Also, it's late here, so I might have mixed up some signs (sorry). Take a simple example on paper and work it out; it will also help you understand the solution better.
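If it helps, the decomposition above written out as a minimal 2D sketch (Vec2 is a stand-in type; n must be the normalized collision vector between the two centres):

struct Vec2 { float x, y; };

Vec2 reflectVelocity(Vec2 v, Vec2 n)
{
    float vn = v.x * n.x + v.y * n.y;    // dot(Speed, collision_vector)
    Vec2 A = { n.x * vn, n.y * vn };     // component along the collision vector
    Vec2 B = { v.x - A.x, v.y - A.y };   // component parallel to the contact plane
    return { B.x - A.x, B.y - A.y };     // B - A: only the normal part is inverted
}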

How do I manually apply an OpenGL translation matrix to a vertex?

I have a specific need to apply a stored OpenGL matrix to a vertex by hand. I admit a weak spot with regard to matrix math, but I have read through all the documentation I can find, and I'm reasonably sure I'm doing this correctly; still, I'm getting an unexpected result. What am I missing?
(Note that this may be a math question, but I suspect I'm actually misunderstanding how to apply the translation matrix, so I thought I'd try here)
In the code snippet below, #1 works fine, #2 fails...
float x=1;
float y=1;
float z=1;
float w=1;
float x2=0;
float y2=0;
float z2=0;
float w2=1;
// 1 THIS WORKS:
glLoadIdentity();
// Convert from NSArray to C float
float modelMatrix[16];
for(int x=0;x<16;x++){modelMatrix[x]=[[cs.modelView objectAtIndex:x] floatValue];}
// Load the matrix the openGL way
glLoadMatrixf(modelMatrix);
// Custom function takes two coordinates and draws a box
[self drawBoxFromX:x FromY:y FromZ:z ToX:x2 ToY:y2 ToZ:z2];
//2 THIS DOES NOT WORK: Apply the matrix by hand
glLoadIdentity();
float new_x = (x*modelMatrix[0])+(y*modelMatrix[4])+(z*modelMatrix[8])+(w*modelMatrix[12]);
float new_y = (x*modelMatrix[1])+(y*modelMatrix[5])+(z*modelMatrix[9])+(w*modelMatrix[13]);
float new_z = (x*modelMatrix[2])+(y*modelMatrix[6])+(z*modelMatrix[10])+(w*modelMatrix[14]);
float new_x2 = (x2*modelMatrix[0])+(y2*modelMatrix[4])+(z2*modelMatrix[8])+(w2*modelMatrix[12]);
float new_y2 = (x2*modelMatrix[1])+(y2*modelMatrix[5])+(z2*modelMatrix[9])+(w2*modelMatrix[13]);
float new_z2 = (x2*modelMatrix[2])+(y2*modelMatrix[6])+(z2*modelMatrix[10])+(w2*modelMatrix[14]);
// Should draw a box identical to the one above, but gives a strange result
[self drawBoxFromX:new_x FromY:new_y FromZ:new_z ToX:new_x2 ToY:new_y2 ToZ:new_z2];
Update:
Based on a helpful comment below I realized I was only rotating two of the vertices rather than all 8 of the cube. The following code works as expected; I'm posting it here for anyone who runs into a similar problem wrapping their head around 3D/OpenGL stuff. (Note: in case it is not obvious, this is not production code. There are many more efficient and less manual ways to multiply matrices and describe cubes (see comments). The purpose of this code is simply to illustrate a behavior explicitly.)
struct Cube myCube;
myCube.a1.x=-1;
myCube.a1.y=-1;
myCube.a1.z=-1;
myCube.b1.x=-1;
myCube.b1.y=-1;
myCube.b1.z=1;
myCube.c1.x=1;
myCube.c1.y=-1;
myCube.c1.z=1;
myCube.d1.x=1;
myCube.d1.y=-1;
myCube.d1.z=-1;
myCube.a2.x=-1;
myCube.a2.y=1;
myCube.a2.z=-1;
myCube.b2.x=-1;
myCube.b2.y=1;
myCube.b2.z=1;
myCube.c2.x=1;
myCube.c2.y=1;
myCube.c2.z=1;
myCube.d2.x=1;
myCube.d2.y=1;
myCube.d2.z=-1;
//1 Load modelview and draw a box (this works fine)
glLoadIdentity();
float modelMatrix[16];
for(int x=0;x<16;x++){modelMatrix[x]=[[cs.modelView objectAtIndex:x] floatValue];}
glLoadMatrixf(modelMatrix);
[self drawCube:myCube];
//2 Load the matrix by hand (identical to above)
glLoadIdentity();
float w=1;
float new_Ax = (myCube.a1.x*modelMatrix[0])+(myCube.a1.y*modelMatrix[4])+(myCube.a1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Ay = (myCube.a1.x*modelMatrix[1])+(myCube.a1.y*modelMatrix[5])+(myCube.a1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Az = (myCube.a1.x*modelMatrix[2])+(myCube.a1.y*modelMatrix[6])+(myCube.a1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Bx = (myCube.b1.x*modelMatrix[0])+(myCube.b1.y*modelMatrix[4])+(myCube.b1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_By = (myCube.b1.x*modelMatrix[1])+(myCube.b1.y*modelMatrix[5])+(myCube.b1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Bz = (myCube.b1.x*modelMatrix[2])+(myCube.b1.y*modelMatrix[6])+(myCube.b1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Cx = (myCube.c1.x*modelMatrix[0])+(myCube.c1.y*modelMatrix[4])+(myCube.c1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Cy = (myCube.c1.x*modelMatrix[1])+(myCube.c1.y*modelMatrix[5])+(myCube.c1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Cz = (myCube.c1.x*modelMatrix[2])+(myCube.c1.y*modelMatrix[6])+(myCube.c1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Dx = (myCube.d1.x*modelMatrix[0])+(myCube.d1.y*modelMatrix[4])+(myCube.d1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Dy = (myCube.d1.x*modelMatrix[1])+(myCube.d1.y*modelMatrix[5])+(myCube.d1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Dz = (myCube.d1.x*modelMatrix[2])+(myCube.d1.y*modelMatrix[6])+(myCube.d1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_A2x = (myCube.a2.x*modelMatrix[0])+(myCube.a2.y*modelMatrix[4])+(myCube.a2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_A2y = (myCube.a2.x*modelMatrix[1])+(myCube.a2.y*modelMatrix[5])+(myCube.a2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_A2z = (myCube.a2.x*modelMatrix[2])+(myCube.a2.y*modelMatrix[6])+(myCube.a2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_B2x = (myCube.b2.x*modelMatrix[0])+(myCube.b2.y*modelMatrix[4])+(myCube.b2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_B2y = (myCube.b2.x*modelMatrix[1])+(myCube.b2.y*modelMatrix[5])+(myCube.b2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_B2z = (myCube.b2.x*modelMatrix[2])+(myCube.b2.y*modelMatrix[6])+(myCube.b2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_C2x = (myCube.c2.x*modelMatrix[0])+(myCube.c2.y*modelMatrix[4])+(myCube.c2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_C2y = (myCube.c2.x*modelMatrix[1])+(myCube.c2.y*modelMatrix[5])+(myCube.c2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_C2z = (myCube.c2.x*modelMatrix[2])+(myCube.c2.y*modelMatrix[6])+(myCube.c2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_D2x = (myCube.d2.x*modelMatrix[0])+(myCube.d2.y*modelMatrix[4])+(myCube.d2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_D2y = (myCube.d2.x*modelMatrix[1])+(myCube.d2.y*modelMatrix[5])+(myCube.d2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_D2z = (myCube.d2.x*modelMatrix[2])+(myCube.d2.y*modelMatrix[6])+(myCube.d2.z*modelMatrix[10])+(w*modelMatrix[14]);
myCube.a1.x=new_Ax;
myCube.a1.y=new_Ay;
myCube.a1.z=new_Az;
myCube.b1.x=new_Bx;
myCube.b1.y=new_By;
myCube.b1.z=new_Bz;
myCube.c1.x=new_Cx;
myCube.c1.y=new_Cy;
myCube.c1.z=new_Cz;
myCube.d1.x=new_Dx;
myCube.d1.y=new_Dy;
myCube.d1.z=new_Dz;
myCube.a2.x=new_A2x;
myCube.a2.y=new_A2y;
myCube.a2.z=new_A2z;
myCube.b2.x=new_B2x;
myCube.b2.y=new_B2y;
myCube.b2.z=new_B2z;
myCube.c2.x=new_C2x;
myCube.c2.y=new_C2y;
myCube.c2.z=new_C2z;
myCube.d2.x=new_D2x;
myCube.d2.y=new_D2y;
myCube.d2.z=new_D2z;
[self drawCube:myCube];
Drawing a rotated box is not the same as rotating two of the box corners and then drawing an axis-parallel box. The simplest way to draw a transformed 3D box is to transform all 8 vertices.
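As an aside, the hand-unrolled multiplication in the update can be collapsed into one small helper. This is only a sketch with an assumed Vertex type; it applies a column-major OpenGL 4x4 matrix to a point with w = 1:

struct Vertex { float x, y, z; };

Vertex transformPoint(const float m[16], const Vertex &p)
{
    Vertex r;
    r.x = p.x * m[0] + p.y * m[4] + p.z * m[8]  + m[12];
    r.y = p.x * m[1] + p.y * m[5] + p.z * m[9]  + m[13];
    r.z = p.x * m[2] + p.y * m[6] + p.z * m[10] + m[14];
    return r;
}
// Transform each of the cube's 8 corners with this helper instead of repeating the arithmetic.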

OpenGL draw circle, weird bugs

I'm no mathematician, but I need to draw a filled in circle.
My approach was to use someone else's math to get all the points on the circumference of a circle, and turn them into a triangle fan.
I need the vertices in a vertex array, no immediate mode.
The circle does appear. However, when I try to overlay circles, strange things happen. They appear for only a second and then disappear. When I move my mouse out of the window, a triangle sticks out from nowhere.
Here's the class:
class circle
{
    //every coordinate will have an X and Y
private:
    GLfloat *_vertices;
    static const float DEG2RAD = 3.14159/180;
    GLfloat _scalex, _scaley, _scalez;
    int _cachearraysize;
public:
    circle(float scalex, float scaley, float scalez, float radius, int numdegrees)
    {
        //360 degrees, 2 per coordinate, 2 coordinates for center and end of triangle fan
        _cachearraysize = (numdegrees * 2) + 4;
        _vertices = new GLfloat[_cachearraysize];
        for(int x = 2; x < (_cachearraysize-2); x = x + 2)
        {
            float degreeinRadians = x*DEG2RAD;
            _vertices[x] = cos(degreeinRadians)*radius;
            _vertices[x + 1] = sin(degreeinRadians)*radius;
        }
        //get the X as X of 0 and X of 180 degrees, subtract to get diameter. divide
        //by 2 for radius and add back to X of 180
        _vertices[0] = ((_vertices[2] - _vertices[362])/2) + _vertices[362];
        //same idea for Y
        _vertices[1] = ((_vertices[183] - _vertices[543])/2) + _vertices[543];
        //close off the triangle fan at the same point as start
        _vertices[_cachearraysize - 1] = _vertices[0];
        _vertices[_cachearraysize] = _vertices[1];
        _scalex = scalex;
        _scaley = scaley;
        _scalez = scalez;
    }
    ~circle()
    {
        delete[] _vertices;
    }
    void draw()
    {
        glScalef(_scalex, _scaley, _scalez);
        glVertexPointer(2, GL_FLOAT, 0, _vertices);
        glDrawArrays(GL_TRIANGLE_FAN, 0, _cachearraysize);
    }
};
That's some ugly code, I'd say - lots of magic numbers et cetera.
Try something like:
#include <cmath>
#include <vector>

struct Point {
    Point(float x, float y) : x(x), y(y) {}
    float x, y;
};

std::vector<Point> points;
const float step = 0.1;
const float radius = 2;

// centre of the fan
points.push_back(Point(0, 0));
// iterate over the angles
for (float a = 0; a < 2*M_PI; a += step) {
    points.push_back(Point(cos(a)*radius, sin(a)*radius));
}
// duplicate the first circumference vertex (the one after the centre) to close the fan
points.push_back(points.at(1));

// rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, &points[0]);
glDrawArrays(GL_TRIANGLE_FAN, 0, points.size());
It's up to you to rewrite this as a class if you prefer. The math behind it is really simple; don't be afraid to try to understand it.