How do I manually apply an OpenGL translation matrix to a vertex?

I have a specific need to apply a stored OpenGL matrix to a vertex by hand. I admit a weak spot with regard to matrix math, but I have read through all the documentation I can find and I'm reasonably sure I'm doing this correctly, yet I'm getting an unexpected result. What am I missing?
(Note that this may be a math question, but I suspect I'm actually misunderstanding how to apply the translation matrix, so I thought I'd try here)
In the code snippet below, #1 works fine, #2 fails...
float x=1;
float y=1;
float z=1;
float w=1;
float x2=0;
float y2=0;
float z2=0;
float w2=1;
// 1 THIS WORKS:
glLoadIdentity();
// Convert from NSArray to C float
float modelMatrix[16];
for(int i=0;i<16;i++){modelMatrix[i]=[[cs.modelView objectAtIndex:i] floatValue];}
// Load the matrix the openGL way
glLoadMatrixf(modelMatrix);
// Custom function takes two coordinates and draws a box
[self drawBoxFromX:x FromY:y FromZ:z ToX:x2 ToY:y2 ToZ:z2];
//2 THIS DOES NOT WORK: Apply the matrix by hand
glLoadIdentity();
float new_x = (x*modelMatrix[0])+(y*modelMatrix[4])+(z*modelMatrix[8])+(w*modelMatrix[12]);
float new_y = (x*modelMatrix[1])+(y*modelMatrix[5])+(z*modelMatrix[9])+(w*modelMatrix[13]);
float new_z = (x*modelMatrix[2])+(y*modelMatrix[6])+(z*modelMatrix[10])+(w*modelMatrix[14]);
float new_x2 = (x2*modelMatrix[0])+(y2*modelMatrix[4])+(z2*modelMatrix[8])+(w2*modelMatrix[12]);
float new_y2 = (x2*modelMatrix[1])+(y2*modelMatrix[5])+(z2*modelMatrix[9])+(w2*modelMatrix[13]);
float new_z2 = (x2*modelMatrix[2])+(y2*modelMatrix[6])+(z2*modelMatrix[10])+(w2*modelMatrix[14]);
// Should draw a box identical to the one above, but gives a strange result
[self drawBoxFromX:new_x FromY:new_y FromZ:new_z ToX:new_x2 ToY:new_y2 ToZ:new_z2];
Update:
Based on a helpful comment below, I realized I was only transforming two of the cube's vertices rather than all 8. The following code works as expected; I'm posting it here for anyone who runs into a similar problem wrapping their head around 3D/OpenGL stuff. (Note: in case it is not obvious, this is not production code. There are many more efficient and less manual ways to multiply matrices and describe cubes (see comments). The purpose of this code is simply to illustrate the behavior explicitly.)
struct Cube myCube;
myCube.a1.x=-1;
myCube.a1.y=-1;
myCube.a1.z=-1;
myCube.b1.x=-1;
myCube.b1.y=-1;
myCube.b1.z=1;
myCube.c1.x=1;
myCube.c1.y=-1;
myCube.c1.z=1;
myCube.d1.x=1;
myCube.d1.y=-1;
myCube.d1.z=-1;
myCube.a2.x=-1;
myCube.a2.y=1;
myCube.a2.z=-1;
myCube.b2.x=-1;
myCube.b2.y=1;
myCube.b2.z=1;
myCube.c2.x=1;
myCube.c2.y=1;
myCube.c2.z=1;
myCube.d2.x=1;
myCube.d2.y=1;
myCube.d2.z=-1;
//1 Load modelview and draw a box (this works fine)
glLoadIdentity();
float modelMatrix[16];
for(int x=0;x<16;x++){modelMatrix[x]=[[cs.modelView objectAtIndex:x] floatValue];}
glLoadMatrixf(modelMatrix);
[self drawCube:myCube];
//2 Load the matrix by hand (identical to above)
glLoadIdentity();
float w=1;
float new_Ax = (myCube.a1.x*modelMatrix[0])+(myCube.a1.y*modelMatrix[4])+(myCube.a1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Ay = (myCube.a1.x*modelMatrix[1])+(myCube.a1.y*modelMatrix[5])+(myCube.a1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Az = (myCube.a1.x*modelMatrix[2])+(myCube.a1.y*modelMatrix[6])+(myCube.a1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Bx = (myCube.b1.x*modelMatrix[0])+(myCube.b1.y*modelMatrix[4])+(myCube.b1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_By = (myCube.b1.x*modelMatrix[1])+(myCube.b1.y*modelMatrix[5])+(myCube.b1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Bz = (myCube.b1.x*modelMatrix[2])+(myCube.b1.y*modelMatrix[6])+(myCube.b1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Cx = (myCube.c1.x*modelMatrix[0])+(myCube.c1.y*modelMatrix[4])+(myCube.c1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Cy = (myCube.c1.x*modelMatrix[1])+(myCube.c1.y*modelMatrix[5])+(myCube.c1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Cz = (myCube.c1.x*modelMatrix[2])+(myCube.c1.y*modelMatrix[6])+(myCube.c1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_Dx = (myCube.d1.x*modelMatrix[0])+(myCube.d1.y*modelMatrix[4])+(myCube.d1.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_Dy = (myCube.d1.x*modelMatrix[1])+(myCube.d1.y*modelMatrix[5])+(myCube.d1.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_Dz = (myCube.d1.x*modelMatrix[2])+(myCube.d1.y*modelMatrix[6])+(myCube.d1.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_A2x = (myCube.a2.x*modelMatrix[0])+(myCube.a2.y*modelMatrix[4])+(myCube.a2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_A2y = (myCube.a2.x*modelMatrix[1])+(myCube.a2.y*modelMatrix[5])+(myCube.a2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_A2z = (myCube.a2.x*modelMatrix[2])+(myCube.a2.y*modelMatrix[6])+(myCube.a2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_B2x = (myCube.b2.x*modelMatrix[0])+(myCube.b2.y*modelMatrix[4])+(myCube.b2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_B2y = (myCube.b2.x*modelMatrix[1])+(myCube.b2.y*modelMatrix[5])+(myCube.b2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_B2z = (myCube.b2.x*modelMatrix[2])+(myCube.b2.y*modelMatrix[6])+(myCube.b2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_C2x = (myCube.c2.x*modelMatrix[0])+(myCube.c2.y*modelMatrix[4])+(myCube.c2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_C2y = (myCube.c2.x*modelMatrix[1])+(myCube.c2.y*modelMatrix[5])+(myCube.c2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_C2z = (myCube.c2.x*modelMatrix[2])+(myCube.c2.y*modelMatrix[6])+(myCube.c2.z*modelMatrix[10])+(w*modelMatrix[14]);
float new_D2x = (myCube.d2.x*modelMatrix[0])+(myCube.d2.y*modelMatrix[4])+(myCube.d2.z*modelMatrix[8])+(w*modelMatrix[12]);
float new_D2y = (myCube.d2.x*modelMatrix[1])+(myCube.d2.y*modelMatrix[5])+(myCube.d2.z*modelMatrix[9])+(w*modelMatrix[13]);
float new_D2z = (myCube.d2.x*modelMatrix[2])+(myCube.d2.y*modelMatrix[6])+(myCube.d2.z*modelMatrix[10])+(w*modelMatrix[14]);
myCube.a1.x=new_Ax;
myCube.a1.y=new_Ay;
myCube.a1.z=new_Az;
myCube.b1.x=new_Bx;
myCube.b1.y=new_By;
myCube.b1.z=new_Bz;
myCube.c1.x=new_Cx;
myCube.c1.y=new_Cy;
myCube.c1.z=new_Cz;
myCube.d1.x=new_Dx;
myCube.d1.y=new_Dy;
myCube.d1.z=new_Dz;
myCube.a2.x=new_A2x;
myCube.a2.y=new_A2y;
myCube.a2.z=new_A2z;
myCube.b2.x=new_B2x;
myCube.b2.y=new_B2y;
myCube.b2.z=new_B2z;
myCube.c2.x=new_C2x;
myCube.c2.y=new_C2y;
myCube.c2.z=new_C2z;
myCube.d2.x=new_D2x;
myCube.d2.y=new_D2y;
myCube.d2.z=new_D2z;
[self drawCube:myCube];

Drawing a rotated box is not the same as rotating two of the box corners and then drawing an axis-parallel box. The simplest way to draw a transformed 3D box is to transform all 8 vertices.
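If it helps, the per-vertex multiply can also be factored into a small helper so the eight corner transforms don't have to be written out by hand. A minimal sketch (the helper name is mine, not from the post), assuming the same column-major layout glLoadMatrixf expects, with the translation in elements 12-14:

void transformPoint(const float m[16], float x, float y, float z, float out[3])
{
    // column-major 4x4 matrix times the point (x, y, z, 1)
    for (int row = 0; row < 3; ++row)
        out[row] = x * m[row] + y * m[row + 4] + z * m[row + 8] + m[row + 12];
}

Calling it once per cube corner reproduces the hand-expanded code above.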

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
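For readers following along, the diffuse model at issue is the standard Lambertian term from that book; as a one-line sketch with hypothetical names (kd the diffuse color, I the light intensity, n and l unit vectors for the surface normal and the direction from the surface point to the light):

pixel_color = kd * I * max(0.0f, dot(n, l)); // zero when the light is behind the surface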
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so the light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector whose components all lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Wouldn't a z value of -15.8284 imply that the sphere center and the hit position are ~16 units apart along the z axis? These two numbers are within 1 of each other in absolute value, and that's what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - part3 * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d * part2) + sqrt(discriminant)) / part3;
        float t_sub = ((d * part2) - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d * part2) + sqrt(discriminant)) / part3;
        float t_sub = ((d * part2) - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z half-space, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
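For anyone else chasing the same sign, here is a minimal, self-contained sketch of the standard ray-sphere solve; the vec3 helpers are assumptions, not the poster's actual types. With the ray p(t) = e + t*d and B = d.(e-c), the roots are t = (-B +/- sqrt(B*B - (d.d)*((e-c).(e-c) - r*r))) / (d.d), and the leading minus on B is exactly the negation that was dropped:

#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }

// Nearest positive t for ray e + t*d against sphere (c, r), or -1 on a miss.
static float intersectSphere(Vec3 e, Vec3 d, Vec3 c, float r)
{
    Vec3 ec = sub(e, c);
    float A = dot(d, d);
    float B = dot(d, ec);            // half the usual quadratic's middle coefficient
    float C = dot(ec, ec) - r * r;
    float disc = B * B - A * C;
    if (disc < 0.0f) return -1.0f;   // ray misses the sphere
    float sq = std::sqrt(disc);
    float t = (-B - sq) / A;         // note the negation of B
    if (t > 0.0f) return t;
    t = (-B + sq) / A;
    return (t > 0.0f) ? t : -1.0f;
}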

Kernel failed error while using CUDA 5.5 on Mac OS X

I am writing a CUDA ray tracer and seem to be stuck at a weird problem. I am using CUDA 5.5 along with GCC 4.2.1 on Mac OS X, and am using GLM 0.9.4.4.
Whenever I call my raycastFromCameraKernel function, I get this error:
Cuda error: Kernel failed!: OS call failed or operation not supported on this OS.
After some debugging, I think I have narrowed down the problem to the glm::normalize(temp) function. If I substitute this by writing my own normalize function, the code works fine. Interestingly, when I wrote a sample program using glm::normalize just to see if it was working, it compiled and ran properly!
Here is the code to the function having the issue:
__host__ __device__ ray raycastFromCameraKernel(glm::vec2 resolution, float time, int x, int y, glm::vec3 eye, glm::vec3 view, glm::vec3 up, glm::vec2 fov)
{
    glm::vec3 eyePoint = eye;
    glm::vec3 V = up;
    glm::vec3 W = view;
    glm::vec3 U = glm::cross(V, W); // Peter Shirley, page 74 (creating orthonormal vectors)
    float fovY = fov.y;
    // d is the near clip plane
    float distance = (resolution.y / 2.0f) / tan(fovY);
    float left = -resolution.x / 2;
    float right = resolution.x / 2;
    float top = resolution.y / 2;
    float bottom = -resolution.y / 2;
    float u = left + (right - left) * (x + 0.5) / resolution.x;
    float v = bottom + (top - bottom) * (y + 0.5) / resolution.y;
    ray r;
    r.origin = eyePoint;
    glm::vec3 temp = -1 * distance * W + u * U + v * V;
    r.direction = glm::normalize(temp);
    return r;
}
Could someone please help?
So the problem was that I had a divide-by-zero error caused by very small (near-zero) values in temp for particular values of distance, u, and v, and this was causing the divide by zero in glm::normalize. I solved this by checking the magnitude of temp before normalizing it, and only normalizing temp when it was above a given threshold. That solved the problem.
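A minimal sketch of that guard (the epsilon is an assumed value to tune, and this presumes GLM is set up for device code as in the original project):

__host__ __device__ glm::vec3 safeNormalize(glm::vec3 v)
{
    float len2 = glm::dot(v, v);   // squared length avoids an extra sqrt
    const float kEps2 = 1e-12f;    // assumed threshold; tune to the scene's scale
    if (len2 > kEps2)
        return v / sqrtf(len2);
    return glm::vec3(0.0f);        // degenerate direction: fall back to the zero vector
}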

incrementing my spherical coordinates clockwise

I am launching a projectile around a sphere. My code moves it in a counterclockwise direction just fine. However, I would like it to move in a clockwise direction instead.
I'm guessing that it's a matter of tuning my math.
// these are my stepping and incrementing variables
int goose1_egg1_step = 1;
int &r_goose1_egg1_step = goose1_egg1_step;
float goose1_egg1_divider = 17500;
// the starting theta/phi values are: 5 and 5
int goose1_egg1_theta=5;
int goose1_egg1_phi=5;
// the ending theta/phi values are: 7 and 1
// there is a difference of 2 between the start and end theta values
// there is a difference of 4 between the start and end phi values
float goose1_egg1_theta_increment = 2/goose1_egg1_divider;
float goose1_egg1_phi_increment = 4/goose1_egg1_divider;
This is my function that displays the updated coordinates each frame with a sphere:
if (goose1_egg1_step < goose1_egg1_divider)
{
    float goose1_egg1_theta_math = (goose1_egg1_theta + (goose1_egg1_theta_increment * r_goose1_egg1_step)) / 10.0 * M_PI;
    float goose1_egg1_phi_math = (goose1_egg1_phi - (goose1_egg1_phi_increment * r_goose1_egg1_step)) / 10.0 * 2 * M_PI;
    r_goose1_egg1_x = Radius * sin(goose1_egg1_theta_math) * cos(goose1_egg1_phi_math);
    r_goose1_egg1_y = Radius * sin(goose1_egg1_theta_math) * sin(goose1_egg1_phi_math);
    r_goose1_egg1_z = Radius * cos(goose1_egg1_theta_math);
    glPushMatrix();
    glTranslatef(r_goose1_egg1_x, r_goose1_egg1_y, r_goose1_egg1_z);
    glColor3f(1.0, 0.0, 0.0);
    glutSolidSphere(0.075, 5, 5); // no glBegin/glEnd pair is needed around glutSolidSphere
    glPopMatrix();
}
And here is how I increment the step value:
if (r_goose1_egg1_step < goose1_egg1_divider)
{
    ++r_goose1_egg1_step;
}
else
{
    r_goose1_egg1_step = 1;
}
Even though you are talking about "clockwise" motion on a sphere, when it only really makes sense to me in a plane, it seems to me that what you want can be done just by changing the signs in the two lines where you create goose1_egg1_theta_math and goose1_egg1_phi_math, like this:
float goose1_egg1_theta_math = (goose1_egg1_theta-(goose1_egg1_theta_increment* r_goose1_egg1_step))/10.0*M_PI;
float goose1_egg1_phi_math = (goose1_egg1_phi+(goose1_egg1_phi_increment* r_goose1_egg1_step))/10.0*2*M_PI;
This should reverse the way you increment your spherical coordinates, giving you the clockwise motion you're looking for.
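A slightly more general way to express the same idea, as a sketch (this hypothetical helper takes theta and phi already in radians, unlike the scaled values above): pass an explicit direction sign, since flipping the azimuth reverses the orbit.

void sphericalPosition(float radius, float theta, float phi, float dir,
                       float &x, float &y, float &z)
{
    float p = dir * phi;  // dir = +1 or -1 selects the direction of travel
    x = radius * sinf(theta) * cosf(p);
    y = radius * sinf(theta) * sinf(p);
    z = radius * cosf(theta);
}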

Shadow volumes - finding a silhouette

I'm working on my OpenGL task, and the next stage is loading models and producing shadows using the shadow volumes algorithm. I do it in 3 stages:
1. setConnectivity - finding the neighbours of each triangle and storing their indices in the neigh parameter of each triangle,
2. markVisible(float* lp) - if lp represents the light's position, marking triangles as visible = true or visible = false depending on the dot product of their normal vector and the light position,
3. markSilhouette(float* lp) - marking silhouette edges and building the volume itself, extending the silhouette to infinity (100 units is enough) in the direction opposite to the light.
I checked all the stages, and can definitely say that the first two are OK, so the problem is in the third function, which I have included below. I use the algorithm introduced in this tutorial: http://www.3dcodingtutorial.com/Shadows/Shadow-Volumes.html
Briefly, an edge is included in the silhouette if it belongs to a visible triangle and a non-visible triangle at the same time.
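In code, that rule is a one-liner; a sketch (the visible and neigh fields match the listings below, and, as the fixed version later shows, a missing neighbour should also count):

bool isSilhouetteEdge(const Triangle &tri, const Triangle *neighbour)
{
    // an edge shared by a front-facing triangle and a back-facing
    // (or absent) neighbour lies on the silhouette
    return tri.visible && (neighbour == NULL || !neighbour->visible);
}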
Here is a pair of screenshots to show you whats wrong:
http://prntscr.com/17dmg , http://prntscr.com/17dmq
As you can see, the green sphere represents the light's position, and the ugly green-blue polygons are the faces of the "shadow volume". You can also see that I'm applying this function to the model of a cube, and one of the volume's sides is missing (it's not closed, but it should be). Can someone suggest what's wrong with my code and how I can fix it? Here is the code I promised to include (the variable names are self-explanatory, I suppose, but if you don't think so I can add a description for each of them):
void Model::markSilhouette(float* lp){
    glBegin(GL_QUADS);
    for ( int i = 0; i < m_numMeshes; i++ )
    {
        for ( int t = 0; t < m_pMeshes[i].m_numTriangles; t++ )
        {
            int triangleIndex = m_pMeshes[i].m_pTriangleIndices[t];
            Triangle* pTri = &m_pTriangles[triangleIndex];
            if (pTri->visible){
                for(int j = 0; j < 3; j++){
                    int triangleIndex = m_pMeshes[i].m_pTriangleIndices[pTri->neigh[j] - 1];
                    Triangle* pTrk = &m_pTriangles[triangleIndex];
                    if(!pTrk->visible){
                        int p1j = pTri->m_vertexIndices[j];
                        int p2j = pTri->m_vertexIndices[(j + 1) % 3];
                        float* v1 = m_pVertices[p1j].m_location;
                        float* v2 = m_pVertices[p2j].m_location;
                        float x1 = m_pVertices[p1j].m_location[0];
                        float y1 = m_pVertices[p1j].m_location[1];
                        float z1 = m_pVertices[p1j].m_location[2];
                        float x2 = m_pVertices[p2j].m_location[0];
                        float y2 = m_pVertices[p2j].m_location[1];
                        float z2 = m_pVertices[p2j].m_location[2];
                        t = 100;
                        float xl1 = (x1 - lp[0]) * t;
                        float yl1 = (y1 - lp[1]) * t;
                        float zl1 = (z1 - lp[2]) * t;
                        float xl2 = (x2 - lp[0]) * t;
                        float yl2 = (y2 - lp[1]) * t;
                        float zl2 = (z2 - lp[2]) * t;
                        glColor3f(0, 0, 1);
                        glVertex3f(x1 + xl1, y1 + yl1, z1 + zl1);
                        glVertex3f(x1, y1, z1);
                        glColor3f(0, 1, 0);
                        glVertex3f(x2 + xl2, y2 + yl2, z2 + zl2);
                        glVertex3f(x2, y2, z2);
                    }
                }
            }
        }
    }
    glEnd();
}
I've found it. It looks like if you don't see an obvious algorithm mistake for a few days, then you've made a f*cking stupid mistake.
My triangle index variable is called t. Guess what? My extending vector's length is also called t, they are in the same scope, and I set t=100 after the FIRST visible triangle :D So now the volumes look like this:
outside http://prntscr.com/17l3n
inside http://prntscr.com/17l40
And it looks good for all light positions (acceptable by the shadow volumes algorithm, of course). So the working code for drawing a shadow volume is the following:
void Model::markSilhouette(float* lp){
    glDisable(GL_LIGHTING);
    glPointSize(4.0);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
    glBegin(GL_QUADS);
    for ( int i = 0; i < m_numMeshes; i++ )
    {
        for ( int t = 0; t < m_pMeshes[i].m_numTriangles; t++ )
        {
            int triangleIndex = m_pMeshes[i].m_pTriangleIndices[t];
            Triangle* pTri = &m_pTriangles[triangleIndex];
            if (pTri->visible){
                for(int j = 0; j < 3; j++){
                    Triangle* pTrk = NULL;
                    if(pTri->neigh[j]){
                        int triangleIndex = m_pMeshes[i].m_pTriangleIndices[pTri->neigh[j] - 1];
                        pTrk = &m_pTriangles[triangleIndex];
                    }
                    if((!pTri->neigh[j]) || !pTrk->visible){
                        int p1j = pTri->m_vertexIndices[j];
                        int p2j = pTri->m_vertexIndices[(j + 1) % 3];
                        float* v1 = m_pVertices[p1j].m_location;
                        float* v2 = m_pVertices[p2j].m_location;
                        float x1 = m_pVertices[p1j].m_location[0];
                        float y1 = m_pVertices[p1j].m_location[1];
                        float z1 = m_pVertices[p1j].m_location[2];
                        float x2 = m_pVertices[p2j].m_location[0];
                        float y2 = m_pVertices[p2j].m_location[1];
                        float z2 = m_pVertices[p2j].m_location[2];
                        float f = 100; // THE PROBLEM WAS HERE: a separate name, so the loop index t is no longer clobbered
                        float xl1 = (x1 - lp[0]) * f;
                        float yl1 = (y1 - lp[1]) * f;
                        float zl1 = (z1 - lp[2]) * f;
                        float xl2 = (x2 - lp[0]) * f;
                        float yl2 = (y2 - lp[1]) * f;
                        float zl2 = (z2 - lp[2]) * f;
                        glColor3f(0, 0, 0);
                        glVertex3f(x1 + xl1, y1 + yl1, z1 + zl1);
                        glVertex3f(x1, y1, z1);
                        glVertex3f(x2, y2, z2);
                        glVertex3f(x2 + xl2, y2 + yl2, z2 + zl2);
                    }
                }
            }
        }
    }
    glEnd();
}
I think everything is OK; you are just rendering the volume without a depth test =)
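In fixed-function GL that amounts to something like the following sketch (it assumes the context was created with a depth buffer, e.g. GLUT_DEPTH):

// at init: glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene, then the shadow volume ...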

How to draw an Arc in OpenGL

While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semicircles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My curve follows a simple quadratic form (y = +- sqrt(mx + c), a sideways parabola).
This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw it, however, it gives me a straight vertical line where the curve's tangent approaches -1.0 / 1.0.
Is this a limitation of the GL_LINE_STRIP style or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?
void Ball::drawBounce()
{
    float piecesToDraw = 100.0f;
    float arcWidth = 10.0f;
    float arcAngle = 4.0f;
    glBegin(GL_LINE_STRIP);
    for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // positive half
    {
        float currentX = (i / piecesToDraw) * arcWidth;
        glVertex2f(currentX, sqrtf((-currentX * arcAngle) + arcWidth));
    }
    for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // negative half (go backwards in X direction now)
    {
        float currentX = (j / piecesToDraw) * arcWidth;
        glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
    }
    glEnd();
}
Thanks in advance.
What is the purpose of sqrtf((-currentX * arcAngle) + arcWidth)? When i > 25, that expression becomes imaginary. The proper way to do this is to use sin()/cos() to generate the X and Y coordinates of a semicircle, as stated in your question. If you want to use a parabola instead, the cleaner way would be to calculate y = H - H(x/W)^2.
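A minimal sketch of that sin()/cos() approach (the centre, radius, and segment count are placeholder parameters, not values from the question):

#include <cmath>

void drawArc(float cx, float cy, float r, int segments)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= segments; ++i)
    {
        float angle = (float)M_PI * i / segments; // 0..pi sweeps the upper semicircle
        glVertex2f(cx + r * cosf(angle), cy + r * sinf(angle));
    }
    glEnd();
}

Stepping the angle instead of x keeps the vertex spacing even, so the near-vertical parts of the arc no longer collapse into a straight line.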