I am rendering a 3D surface in OpenGL by drawing a bunch of triangles. Some of my primitives are see-through: I don't simply mean that the color behind them blends through, I mean that I can see completely through them. I have no idea why I am able to see through these primitives, and I would like that not to be the case (unless I specify alpha blending, which I have not).
Unfortunately I cannot link any code (there are ~1800 lines right now and I don't know where the error would be!), but any help would be great.
I hope I have given enough information; if not, please feel free to ask me to clarify!
EDIT: more info ...
I call plotPrim(ix,iy,iz), which uses marching cubes to plot a triangle (or a few) through the current cube of a rectangular grid.
myInit() is ...
void myInit()
{
    // initialize vectors
    update_vectors();
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_BLEND | GL_DEPTH_TEST);
}
plotMesh() is where I do my work of going through each cube and plotting the primitives:
void plotMesh()
{
    if(plot_prop)
    {
        // do some stuff
    }
    else
    {
        glBegin(GL_TRIANGLES);
        for(int ix = 0; ix < snx-1; ix++)
        {
            //x = surf_x[ix];
            for(int iy = 0; iy < sny-1; iy++)
            {
                //y = surf_y[iy];
                for(int iz = 0; iz < snz-1; iz++)
                {
                    //z = surf_z[iz];
                    // front face
                    a = sv(ix+0, iy+0, iz+0);
                    b = sv(ix+0, iy+1, iz+0);
                    g = sv(ix+0, iy+0, iz+1);
                    d = sv(ix+0, iy+1, iz+1);
                    // back face
                    al = sv(ix+1, iy+0, iz+0);
                    be = sv(ix+1, iy+1, iz+0);
                    ga = sv(ix+1, iy+0, iz+1);
                    de = sv(ix+1, iy+1, iz+1);
                    // test to see if a primitive needs to be plotted
                    plotPrim(ix, iy, iz);
                }
            }
        }
        glEnd();
    }
}
One example of a primitive being plotted in plotPrim() is ...
if((val>a && val<g && val<b && val<al) || (val<a && val>g && val>b && val>al)) // "a" corner
{
    tx = (val-a)/(al-a);
    ty = (val-a)/(b-a);
    tz = (val-a)/(g-a);
    x1 = surf_x[ix] + tx*surf.dx;
    y1 = surf_y[iy];
    z1 = surf_z[iz];
    x2 = surf_x[ix];
    y2 = surf_y[iy] + ty*surf.dy;
    z2 = surf_z[iz];
    x3 = surf_x[ix];
    y3 = surf_y[iy];
    z3 = surf_z[iz] + tz*surf.dz;
    getColor( (1.0-tx)*sv(ix,iy,iz) + tx*sv(ix+1,iy,iz) );
    glVertex3f(x1,y1,z1);
    getColor( (1.0-ty)*sv(ix,iy,iz) + ty*sv(ix,iy+1,iz) );
    glVertex3f(x2,y2,z2);
    getColor( (1.0-tz)*sv(ix,iy,iz) + tz*sv(ix,iy,iz+1) );
    glVertex3f(x3,y3,z3);
}
glEnable(GL_BLEND | GL_DEPTH_TEST);
is wrong: glEnable takes a single capability to enable, not a bitmask. OR-ing the two enum values produces a number that is most likely not a valid capability, so the call fails with GL_INVALID_ENUM and enables nothing. In particular, depth testing never gets turned on, and without a depth test, triangles drawn later overwrite whatever is in front of them, which is exactly the see-through effect you describe. You might have more errors, but you want to change the above to:
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
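As an aside, this kind of mistake is easy to catch at runtime by polling glGetError(). A minimal sketch (the helper name is mine; the loop drains every pending error flag):

#include <cstdio>

void checkGLErrors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error at %s: 0x%04X\n", where, err);
}

Calling checkGLErrors("myInit") right after the original glEnable call would most likely have reported GL_INVALID_ENUM (0x0500) and pointed straight at the bug.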
The idea is to present a drawn 3D object "centered" on the screen. After loading the object with WaveFrontReader, I get an array of vertices, from which I compute the bounding box:
float bmin[3], bmax[3];
bmin[0] = bmin[1] = bmin[2] = std::numeric_limits<float>::max();
bmax[0] = bmax[1] = bmax[2] = -std::numeric_limits<float>::max();
for (int k = 0; k < 3; k++)
{
    for (auto& v : objx->wfr.vertices)
    {
        if (k == 0)
        {
            bmin[k] = std::min(v.position.x, bmin[k]);
            bmax[k] = std::max(v.position.x, bmax[k]);
        }
        if (k == 1)
        {
            bmin[k] = std::min(v.position.y, bmin[k]);
            bmax[k] = std::max(v.position.y, bmax[k]);
        }
        if (k == 2)
        {
            bmin[k] = std::min(v.position.z, bmin[k]);
            bmax[k] = std::max(v.position.z, bmax[k]);
        }
    }
}
I got the idea from the Viewer in TinyObjLoader (which uses OpenGL though), and then:
float maxExtent = 0.5f * (bmax[0] - bmin[0]);
if (maxExtent < 0.5f * (bmax[1] - bmin[1])) {
    maxExtent = 0.5f * (bmax[1] - bmin[1]);
}
if (maxExtent < 0.5f * (bmax[2] - bmin[2])) {
    maxExtent = 0.5f * (bmax[2] - bmin[2]);
}
_3dp.scale[0] = maxExtent;
_3dp.scale[1] = maxExtent;
_3dp.scale[2] = maxExtent;
_3dp.translation[0] = -0.5 * (bmax[0] + bmin[0]);
_3dp.translation[1] = -0.5 * (bmax[1] + bmin[1]);
_3dp.translation[2] = -0.5 * (bmax[2] + bmin[2]);
However, this doesn't work. With an object like this spider, whose vertex coordinates do not extend beyond +/-100, the formula above gives a scale of about 100x, and yet, with the current view at (0,0,0), the object is too close; I have to set the Z translation manually to something like 50000 to fit it into a full box with a D3D11_VIEWPORT viewport = { 0.0f, 0.0f, w, h, 0.0f, 1.0f };. Not to mention that the Y is not centered either.
Is there a proper algorithm to center the object into view?
Thanks a lot
You can actually change the position of the camera itself rather than the objects. Editing the camera position is the approach recommended in OpenGL tutorials.
In games, the camera (which captures the viewpoint from which the rendered objects are seen) is not placed in the middle of the scene but farther away, so you can see everything going on in the view/scene.
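If you would rather compute that camera placement than hand-tune the Z translation, one common approach is to frame the object's bounding sphere. A sketch under the assumption of a perspective projection with vertical field of view fovY (every name besides bmin/bmax is illustrative):

#include <cmath>

// Center and radius of the object's bounding sphere.
float cx = 0.5f * (bmax[0] + bmin[0]);
float cy = 0.5f * (bmax[1] + bmin[1]);
float cz = 0.5f * (bmax[2] + bmin[2]);
float ex = bmax[0] - bmin[0];
float ey = bmax[1] - bmin[1];
float ez = bmax[2] - bmin[2];
float radius = 0.5f * std::sqrt(ex*ex + ey*ey + ez*ez);

// Pull the camera back until the sphere fits inside the frustum.
float fovY = 3.14159265f / 4.0f;            // 45 degrees, for example
float dist = radius / std::tan(0.5f * fovY);

Then build the view matrix looking at (cx, cy, cz) from (cx, cy, cz + dist). This centers the object in both X and Y and removes the need to scale it by maxExtent at all.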
I'm trying to make an application where balls bounce off the walls and also off each other. The bouncing off the walls works fine, but I'm having some trouble getting them to bounce off each other. Here's the code I'm using to make them bounce off another ball (for testing I only have 2 balls):
// Calculate the distance using the Pythagorean theorem
GLfloat x1, y1, x2, y2, xd, yd, distance;
x1 = balls[0].xPos;
y1 = balls[0].yPos;
x2 = balls[1].xPos;
y2 = balls[1].yPos;
xd = x2 - x1;
yd = y2 - y1;
distance = sqrt((xd * xd) + (yd * yd));
if (distance < (balls[0].ballRadius + balls[1].ballRadius))
{
    std::cout << "Collision\n";
    balls[0].xSpeed = -balls[0].xSpeed;
    balls[0].ySpeed = -balls[0].ySpeed;
    balls[1].xSpeed = -balls[1].xSpeed;
    balls[1].ySpeed = -balls[1].ySpeed;
}
What happens is that they randomly bounce, or pass through each other. Is there some physics that I'm missing?
EDIT: Here's the full function
// Callback handler for window re-paint event
void display()
{
    glClear(GL_COLOR_BUFFER_BIT);   // Clear the color buffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    for (int i = 0; i < numOfBalls; i++)
    {
        glLoadIdentity();           // Reset model-view matrix
        int numSegments = 100;
        GLfloat angle = 0;
        glTranslatef(balls[i].xPos, balls[i].yPos, 0.0f);  // Translate to (xPos, yPos)
        // Use triangular segments to form a circle
        glBegin(GL_TRIANGLE_FAN);
        glColor4f(balls[i].colorR, balls[i].colorG, balls[i].colorB, balls[i].colorA);
        glVertex2f(0.0f, 0.0f);     // Center of circle
        for (int j = 0; j <= numSegments; j++)
        {
            // Last vertex same as first vertex
            angle = j * 2.0f * PI / numSegments;  // 360 deg for all segments
            glVertex2f(cos(angle) * balls[i].ballRadius, sin(angle) * balls[i].ballRadius);
        }
        glEnd();
        // Animation control - compute the location for the next refresh
        balls[i].xPos += balls[i].xSpeed;
        balls[i].yPos += balls[i].ySpeed;
        // Calculate the distance using the Pythagorean theorem
        GLfloat x1, y1, x2, y2, xd, yd, distance;
        x1 = balls[0].xPos;
        y1 = balls[0].yPos;
        x2 = balls[1].xPos;
        y2 = balls[1].yPos;
        xd = x2 - x1;
        yd = y2 - y1;
        distance = sqrt((xd * xd) + (yd * yd));
        if (distance < (balls[0].ballRadius + balls[1].ballRadius))
        {
            std::cout << "Collision\n";
            balls[0].xSpeed = -balls[0].xSpeed;
            balls[0].ySpeed = -balls[0].ySpeed;
            balls[1].xSpeed = -balls[1].xSpeed;
            balls[1].ySpeed = -balls[1].ySpeed;
        }
        else
        {
            std::cout << "No collision\n";
        }
        // Check if the ball exceeds the edges
        if (balls[i].xPos > balls[i].xPosMax)
        {
            balls[i].xPos = balls[i].xPosMax;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        else if (balls[i].xPos < balls[i].xPosMin)
        {
            balls[i].xPos = balls[i].xPosMin;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        if (balls[i].yPos > balls[i].yPosMax)
        {
            balls[i].yPos = balls[i].yPosMax;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
        else if (balls[i].yPos < balls[i].yPosMin)
        {
            balls[i].yPos = balls[i].yPosMin;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
    }
    glutSwapBuffers();  // Swap front and back buffers (of double buffered mode)
}
Note: Most of the function uses a for loop with numOfBalls as the counter, but to test collision I'm only using 2 balls, hence the balls[0] and balls[1].
Here are some things to consider.
If the length of (xSpeed, ySpeed) is roughly comparable with .ballRadius, it is possible for two balls to travel "through" each other between "ticks" of the simulation's clock (one step). Consider two balls traveling perfectly vertically, one up, one down, and 1 .ballRadius apart horizontally. In real life they would clearly collide, but it would be easy for your simulation to miss the event if .ySpeed ~ .ballRadius.
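A cheap guard against that tunneling is to sub-step the motion so a ball never moves more than a fraction of its radius between tests. A sketch reusing the question's field names:

// Split one frame's motion into small steps and test after each one.
GLfloat speed = sqrt(balls[i].xSpeed * balls[i].xSpeed + balls[i].ySpeed * balls[i].ySpeed);
int substeps = (int)ceil(speed / (0.5f * balls[i].ballRadius));
if (substeps < 1) substeps = 1;
for (int s = 0; s < substeps; ++s)
{
    balls[i].xPos += balls[i].xSpeed / substeps;
    balls[i].yPos += balls[i].ySpeed / substeps;
    // ...run the distance/collision test here...
}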
Second, your change to the balls' velocity vectors brings each ball to rest, since
balls[0].xSpeed -= balls[0].xSpeed;
is a really exotic way of writing
balls[0].xSpeed = 0;
To get the physics almost correct, you need to invert only the velocity component perpendicular to the plane of contact.
In other words, take collision_vector to be the vector between the centers of the balls (just subtract one center's coordinates from the other's). Because you have spheres, this also happens to be the normal of the collision plane.
Now, for each ball in turn, you need to decompose its speed. The A component is the one aligned with collision_vector; you can obtain it with some vector arithmetic: A = dot(Speed, collision_vector) * collision_vector (with collision_vector normalized). This is the part you want to invert. You also want to extract the B component that is parallel to the collision plane. Because it is parallel, the collision does not change it. You obtain it by subtracting A from the speed vector.
Finally, the new speed will be something like B - A. If you want the balls to spin, you will need an angular momentum in the direction of A - B. If the balls have different masses, you will need to use the mass ratio as a multiplier for A in the first formula.
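Putting that decomposition together, a sketch for two equal-mass balls (the standard elastic result exchanges the normal components between the balls, which reduces to the B - A reflection above when they approach each other symmetrically; all names here are illustrative):

#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Swap the velocity components along the collision normal (equal masses).
void resolveCollision(Vec2 &v0, Vec2 &v1, Vec2 p0, Vec2 p1)
{
    Vec2 n = { p1.x - p0.x, p1.y - p0.y };  // collision_vector: center to center
    float len = std::sqrt(dot(n, n));
    if (len == 0.0f) return;                // balls exactly overlapping; skip
    n.x /= len; n.y /= len;                 // normalize: this is the contact normal

    float a0 = dot(v0, n);                  // A component of ball 0 along n
    float a1 = dot(v1, n);                  // A component of ball 1 along n
    if (a1 - a0 >= 0.0f) return;            // already separating; nothing to do

    // Exchange the normal components; the parallel (B) components are untouched.
    v0.x += (a1 - a0) * n.x;  v0.y += (a1 - a0) * n.y;
    v1.x += (a0 - a1) * n.x;  v1.y += (a0 - a1) * n.y;
}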
This will make the collision look legit. The detection still needs to happen correctly: make sure the speeds are significantly smaller than the radius of the balls. For comparable or bigger speeds you will need more complex algorithms.
Note: most of the stuff above is vector arithmetic. Also, it's late here, so I might have mixed up some signs (sorry). Take a simple example on paper and work it out; it will also help you understand the solution better.
This is how I position my torus (satellite) upon a sphere, and then rotate it around the sphere:
int satellite_1_1_step = 0;
int &r_satellite_1_1_step = satellite_1_1_step;
float satellite_1_1_divider = 300;
float satellite_1_1_theta = 6.5;
float satellite_1_1_phi = 1;
float satellite_1_1_theta_increment = 20/satellite_1_1_divider;
float satellite_1_1_phi_increment = 20/satellite_1_1_divider;

void satellite_1_1 ()
{
    float satellite_1_1_theta_math = (satellite_1_1_theta-(satellite_1_1_theta_increment * r_satellite_1_1_step))/10.0*M_PI;
    float satellite_1_1_phi_math = (satellite_1_1_phi-(satellite_1_1_phi_increment * r_satellite_1_1_step))/10.0*2*M_PI;
    r_satellite_1_1_x = radius_exodus_pos * sin(satellite_1_1_theta_math) * cos(satellite_1_1_phi_math);
    r_satellite_1_1_y = radius_exodus_pos * sin(satellite_1_1_theta_math) * sin(satellite_1_1_phi_math);
    r_satellite_1_1_z = radius_exodus_pos * cos(satellite_1_1_theta_math);
    glPushMatrix();
    glTranslatef(r_satellite_1_1_x, r_satellite_1_1_y, r_satellite_1_1_z);
    glColor3f(1,0,0);
    glutSolidTorus(0.04, 0.2, 10, 100);
    glPopMatrix();
}
This is how I update and increment its position:
void satellite_1_1_increment()
{
    if (r_satellite_1_1_step < satellite_1_1_divider)
    {
        ++(r_satellite_1_1_step);
    }
    if (r_satellite_1_1_step >= satellite_1_1_divider)
    {
        r_satellite_1_1_step = 1;
    }
}
So my torus (satellite) moves around the sphere, ends back up in its starting position, and starts over again, which is great. However, the path it takes wobbles around the poles (I think) along the way, rather than simply circumnavigating the sphere.
Is there an improvement that can be made to my math which will cause the satellite to circumnavigate the sphere in a more circular path?
The first issue I see is this:
void satellite_1_1_increment()
{
    if (r_satellite_1_1_step < satellite_1_1_divider)
    {
        ++(r_satellite_1_1_step);
    }
    if (r_satellite_1_1_step >= satellite_1_1_divider)
    {
        r_satellite_1_1_step = 1;
    }
}
What happens at the edge case when the step is incremented by the first test such that it satisfies the second test? It is immediately reset, so that value is skipped. I think you want it written like this to avoid that problem:
void satellite_1_1_increment()
{
    if (r_satellite_1_1_step >= satellite_1_1_divider)
        r_satellite_1_1_step = 1;
    else
        ++r_satellite_1_1_step;
}
Is 1 the correct reset value? Maybe it should be 0?
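As for the wobble itself: the code varies both theta and phi every step, so the satellite traces a spiral over the sphere rather than a great circle. Holding theta fixed and sweeping only phi gives a clean circular path. A sketch using the question's names (the equator case; tilt the result with a glRotatef if you want an inclined orbit):

float t = (float)r_satellite_1_1_step / satellite_1_1_divider;  // 0..1 over one orbit
float phi = t * 2.0f * M_PI;                // sweep one full revolution
float theta = M_PI / 2.0f;                  // fixed: stay on the equator
r_satellite_1_1_x = radius_exodus_pos * sin(theta) * cos(phi);
r_satellite_1_1_y = radius_exodus_pos * sin(theta) * sin(phi);
r_satellite_1_1_z = radius_exodus_pos * cos(theta);  // constant (0) on the equator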
I changed the first two lines of satellite_1_1() to:
    float satellite_1_1_theta_math = (satellite_1_1_theta+(satellite_1_1_theta_increment* r_satellite_1_1_step))*M_PI;
    float satellite_1_1_phi_math = (satellite_1_1_phi-(satellite_1_1_phi_increment* r_satellite_1_1_step))*M_PI/360;
Now the satellite orbits 360 degrees along the equator. Adding a glRotatef after my glPushMatrix lets me fine-tune its axis.
Thanks again wallyk. - kropcke
I am trying to add shading/lighting to my terrain generator, but for some reason my output still looks blocky, even after I calculate surface normals.
set<pair<int,int> >::const_iterator it;
for ( it = mRandomPoints.begin(); it != mRandomPoints.end(); ++it )
{
    for ( int i = 0; i < GetXSize(); ++i )
    {
        for ( int j = 0; j < GetZSize(); ++j )
        {
            float pd = sqrt(pow((*it).first - i,2) + pow((*it).second - j,2))*2 / mCircleSize;
            if (fabs(pd) <= 1.0)
            {
                mMap[i][j][2] += mCircleHeight/2 + cos(pd*3.14)*mCircleHeight/2;
            }
        }
    }
}
/*
The three points being considered to compute normals are
    (i,   j)
    (i+1, j)
    (i,   j+1)
*/
for ( int i = 0; i < GetXSize() - 1; ++i )
{
    for ( int j = 0; j < GetZSize() - 1; ++j )
    {
        float b[] = {mMap[i+1][j][0]-mMap[i][j][0], mMap[i+1][j][1]-mMap[i][j][1], mMap[i+1][j][2]-mMap[i][j][2]};
        float c[] = {mMap[i][j+1][0]-mMap[i][j][0], mMap[i][j+1][1]-mMap[i][j][1], mMap[i][j+1][2]-mMap[i][j][2]};
        float a[] = {b[1]*c[2]-b[2]*c[1], b[2]*c[0]-b[0]*c[2], b[0]*c[1]-b[1]*c[0]};  // cross product b x c
        float Vnorm = sqrt(pow(a[0],2) + pow(a[1],2) + pow(a[2],2));
        mNormalMap[i][j][0] = a[0]/Vnorm;
        mNormalMap[i][j][1] = a[1]/Vnorm;
        mNormalMap[i][j][2] = a[2]/Vnorm;
    }
}
Then when drawing this I use the following
float ***normal = map->GetNormalMap();
for (int i = 0; i < map->GetXSize() - 1; ++i)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (int j = 0; j < map->GetZSize() - 1; ++j)
    {
        glNormal3fv(normal[i][j]);
        float color = 1 - (terrain[i][j][2]/height);
        glColor3f(color, color, color);
        glVertex3f(terrain[i][j][0], terrain[i][j][2], terrain[i][j][1]);
        glVertex3f(terrain[i+1][j][0], terrain[i+1][j][2], terrain[i+1][j][1]);
        glVertex3f(terrain[i][j+1][0], terrain[i][j+1][2], terrain[i][j+1][1]);
        glVertex3f(terrain[i+1][j+1][0], terrain[i+1][j+1][2], terrain[i+1][j+1][1]);
    }
    glEnd();
}
EDIT: Initialization Code
glFrontFace(GL_CCW);
glCullFace(GL_FRONT); // glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glMatrixMode(GL_PROJECTION);
Am I calculating the normals properly?
In addition to what Bovinedragon suggested, namely glShadeModel(GL_SMOOTH);, you should probably use per-vertex normals. This means that each glVertex3f would be preceded by a glNormal3fv call, which would define the average normal of all adjacent faces. To obtain it, you can simply add up these neighbouring normal vectors and normalize the result.
Reference this question: Techniques to smooth face edges in OpenGL
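A minimal sketch of that averaging for the interior vertices, reusing the per-face normals in mNormalMap from the question (vertexNormals is an illustrative array for the result; the boundary rows and columns are omitted):

// Each interior vertex (i, j) touches the four faces whose normals were
// stored at (i-1, j-1), (i-1, j), (i, j-1) and (i, j): sum them, renormalize.
for (int i = 1; i < GetXSize() - 1; ++i)
{
    for (int j = 1; j < GetZSize() - 1; ++j)
    {
        float n[3] = { 0.0f, 0.0f, 0.0f };
        for (int di = -1; di <= 0; ++di)
            for (int dj = -1; dj <= 0; ++dj)
                for (int k = 0; k < 3; ++k)
                    n[k] += mNormalMap[i + di][j + dj][k];
        float len = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        for (int k = 0; k < 3; ++k)
            vertexNormals[i][j][k] = n[k] / len;
    }
}

Then pass vertexNormals[i][j] to glNormal3fv before each glVertex3f instead of the per-face normal.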
Have you set glShadeModel to GL_SMOOTH?
See: http://www.khronos.org/opengles/documentation/opengles1_0/html/glShadeModel.html
This setting also affects vertex colors, in addition to lighting. You seem to say it was blocky even before lighting, which makes me think this is the issue.
I am attempting to add features to a ray tracer in C++, namely texture mapping on the spheres. For simplicity, I am using an array to store the texture data; I obtained the data by using a hex editor and copying the correct byte values into an array in my code, just for testing purposes. When this array corresponds to an image that is simply red, it appears to work close to what is expected, except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like. That sphere is drawn with one set colour, not a texture map.
Another problem is that when the texture map contains pixels of more than one colour, the sphere turns white. My test image is a picture of water, and when it is mapped, only one ring of bluish pixels shows around the white.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;
        Vector ve = Point(oc.x, oc.y, oc.z+1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x, oc.y+1, oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);
        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;
        float u;
        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 / (float)sin(phi));
        float theta = num / (float)(2 * M_PI);
        if (theta < 0 || std::isnan(theta)) { theta = 0; }  // guard against NaN (a == NAN comparison is always false)
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }
        int x = (u * IMAGE_WIDTH) - 1;
        int y = (v * IMAGE_WIDTH) - 1;
        int p = (y * IMAGE_WIDTH + x) * 3;
        return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
}
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);
float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;
Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f / (len * len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included is where I define the texture data, which, as I said, is taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how in the solid red case, there doesn't seem to be any gradation around the sphere. The red looks like it's saturated.
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1, and that the index p into your TEXT_DATA array is also within range.
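For instance (a sketch; TEXT_WIDTH and TEXT_HEIGHT stand in for your texture's real dimensions, since the snippet indexes with IMAGE_WIDTH for both axes):

#include <cassert>

assert(u >= 0.0f && u <= 1.0f);
assert(v >= 0.0f && v <= 1.0f);
assert(x >= 0 && x < TEXT_WIDTH);
assert(y >= 0 && y < TEXT_HEIGHT);
assert(p >= 0 && p + 2 < TEXT_WIDTH * TEXT_HEIGHT * 3);

If any of these fire on the water texture but not on the solid red one, an out-of-range read from TEXT_DATA would explain the odd colours.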
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.