Does depth clamping occur before or after the depth test?
I am rendering a primitive with coordinates > 1.0 and < -1.0, using depth clamping together with the depth test. But when I enable the depth test, no geometry is rendered.
Here is my code:
GLfloat vertices[]=
{
0.5f,0.5f,0.5f,
-0.5f,0.5f,0.5f,
-0.5f,-0.5f,0.5f,
0.5f,-0.5f,0.5f,
0.5f,-0.5f,-0.5f,
-0.5f,-0.5f,-0.5f,
-0.5f,0.5f,-0.5f,
0.5f,0.5f,-0.5f
};
for(int i=0;i<24;i++)
vertices[3*i+2]*=25;
glEnable(GL_DEPTH_CLAMP);
// when I comment out the statement below, it draws the triangle strips
glEnable(GL_DEPTH_TEST);
glClearDepth(15.0f);
glClearColor (1.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP,0,6);
How do I use the depth test and depth clamping together?
Why does the above code not draw anything on the screen with the depth test enabled?
From the OpenGL 4.3 specification, section 17.3.6:
If depth clamping (see section 13.5) is enabled, before the incoming fragment's zw is compared, zw is clamped to the range [min(n, f), max(n, f)].
I am assuming that the depth function is GL_LEQUAL by default.
That's why you should post fully working examples when you don't know what's going on. The default depth function is GL_LESS, not GL_LEQUAL. Your clamped fragments arrive with a depth of exactly 1.0, and the clear depth is also 1.0 (glClearDepth clamps its argument to [0, 1], so your 15.0 becomes 1.0). 1.0 is not less than 1.0, so all of those fragments fail the depth test.
Also, your loop:
vertices[3*i+2] *= 25;
This is overwriting memory: your index runs far off the end of the array, which only has 24 elements (8 vertices). You probably meant to loop to 8.
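Putting both fixes together, a minimal sketch (the GL_LEQUAL choice is my assumption about the intent, so that fragments clamped to the far plane still pass):
for (int i = 0; i < 8; i++)   // 8 vertices, not 24
    vertices[3*i+2] *= 25;    // push z well outside [-1, 1]
glEnable(GL_DEPTH_CLAMP);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);       // 1.0 <= 1.0 passes, unlike the default GL_LESS
glClearDepth(1.0f);           // the value is clamped to [0, 1] anyway
glClearColor(1.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);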
I have built a simple scene like the following:
The problem is, the blue shape is lower than the red one but somehow bleeds through. It looks proper when I rotate it like the following:
From what I searched this could be related to the order of vertices being sent, and here is my definition for those:
Shape* Obj1 = new Quad(Vec3(-5.0, 5.0, 0.0), Vec3(5.0, 5.0, 0.0), Vec3(5.0, 5.0, -10.0), Vec3(-5.0, 5.0, -10.0));
Shape* Obj2 = new Quad(Vec3(-5.0, 3.0, 0.0), Vec3(5.0, 3.0, 0.0), Vec3(5.0, 3.0, -10.0), Vec3(-5.0, 3.0, -10.0));
The Vec3 class just holds three doubles for the x, y, z coordinates. I add these Vec3 objects to a vector, and iterate through them when I want to draw, as such:
glBegin(GL_QUADS);
for (auto it = vertex_list.begin(); it != vertex_list.end(); ++it)
glVertex3d(it->get_x(), it->get_y(), it->get_z());
glEnd();
Finally, my settings:
glEnable(GL_ALPHA_TEST | GL_DEPTH_TEST | GL_CULL_FACE);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glAlphaFunc(GL_GREATER, 0.0f);
glViewport(0, 0, WINDOW_X, WINDOW_Y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0f, 300.0);
// eye position xyz, point to look at xyz, up vector xyz
gluLookAt(10, 10, -20, 2.5, 2.5, -10, 0, 1, 0);
You should enable the depth test, face culling, and alpha testing separately:
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
They are not bit flags, so you cannot OR them together like that.
See glEnable:
glEnable — enable or disable server-side GL capabilities
void glEnable(GLenum cap);
cap Specifies a symbolic constant indicating a GL capability.
This means the parameter of glEnable is a single constant, not a set of bits: GL_ALPHA_TEST, GL_DEPTH_TEST, and GL_CULL_FACE are symbolic constants, not bits of a bit set.
Change your code like this:
glEnable(GL_ALPHA_TEST);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
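For illustration, a sketch of why the OR'd call fails (the hex values are taken from the standard GL headers; the OR'd result, 0x0BF5, is not any GL capability):
// GL_CULL_FACE  = 0x0B44
// GL_DEPTH_TEST = 0x0B71
// GL_ALPHA_TEST = 0x0BC0
GLenum bogus = GL_ALPHA_TEST | GL_DEPTH_TEST | GL_CULL_FACE; // 0x0BF5
glEnable(bogus); // generates GL_INVALID_ENUM and enables nothing
// glGetError() now returns GL_INVALID_ENUM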
See OpenGL Specification - 17.3.4 Depth Buffer Test, p. 500:
17.3.4 Depth Buffer Test
The depth buffer test discards the incoming fragment if a depth comparison fails. The comparison is enabled or disabled with the generic Enable and Disable commands using target DEPTH_TEST.
See OpenGL Specification - 14.6.1 Basic Polygon Rasterization, p. 473:
Culling is enabled or disabled by calling Enable or Disable with target CULL_FACE.
I have a sphere. I would like to clip it with planes like in the picture below. I need more than 10 clipping planes, but the maximum glClipPlane limit is 6. How can I solve this problem?
My sample code is below:
double[] eqn = { 0.0, 1.0, 0.0, 0.72};
double[] eqn2 = { -1.0, 0.0, -0.5, 0.80 };
double[] eqnK = { 0.0, 0.0, 1.0, 0.40 };
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE0, eqn);
Gl.glEnable(Gl.GL_CLIP_PLANE0);
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE1, eqn2);
Gl.glEnable(Gl.GL_CLIP_PLANE1);
Gl.glClipPlane(Gl.GL_CLIP_PLANE2, eqnK);
Gl.glEnable(Gl.GL_CLIP_PLANE2);
//// draw sphere
Gl.glColor3f(0.5f, .5f, 0.5f);
Glu.gluSphere(quadratic, 0.8f, 50, 50);
Glu.gluDeleteQuadric(quadratic);
Gl.glDisable(Gl.GL_CLIP_PLANE0);
Gl.glDisable(Gl.GL_CLIP_PLANE1);
Gl.glDisable(Gl.GL_CLIP_PLANE2);
You should consider multi-pass rendering and the stencil buffer.
Say you need 10 user clip planes and you are limited to 6: you can set up the first 6, render the scene into the stencil buffer, and then do a second pass with the remaining 4 clip planes. You would then use the stencil buffer to reject parts of the screen that were clipped in the prior pass. This way you get the effect of 10 user clip planes even though the implementation only supports 6.
// In this example you want 10 clip planes but you can only do 6 per-pass,
// so you need 1 extra pass.
const int num_extra_clip_passes = 1;
glEnable (GL_STENCIL_TEST); // the stencil test must be enabled for the stencil ops below to apply
glClear (GL_STENCIL_BUFFER_BIT);
// Disable color and depth writes for the extra clipping passes
glDepthMask (GL_FALSE);
glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Increment the stencil buffer value by 1 for every part of the sphere
// that is not clipped.
glStencilOp (GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc (GL_ALWAYS, 1, 0xFFFF);
// Setup Clip Planes: 0 through 5
// Draw Sphere
// Reject any part of the sphere that did not pass _all_ of the clipping passes
glStencilFunc (GL_EQUAL, num_extra_clip_passes, 0xFFFF);
// Re-enable color and depth writes
glDepthMask (GL_TRUE);
glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Setup Leftover Clip Planes
// DrawSphere
It is not perfect: it is quite fill-rate intensive and limits you to a total of 1536 clip planes (given an 8-bit stencil buffer), but it will get the job done without resorting to features present only in GLSL 1.30+ (namely gl_ClipDistance[]).
You can also just reuse Gl.glEnable(Gl.GL_CLIP_PLANE1); and the other planes for the second pass, since you disable them again after drawing ...
I'm using glReadPixels to get the depth value of a selected pixel, but I always get 1. How can I solve this? Here is the code:
glEnable(GL_DEPTH_TEST);
..
glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, z);
Am I missing anything? My rendering code is shown below. I use different shaders to draw different parts of the scene, so how should I correctly read the depth value from the buffer?
void onDisplay(void)
{
// Clear the window and the depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// calculate the view matrix.
GLFrame eyeFrame;
eyeFrame.MoveUp(gb_eye_height);
eyeFrame.RotateWorld(gb_eye_theta * 3.1415926 / 180.0, 1.0, 0.0, 0.0);
eyeFrame.RotateWorld(gb_eye_phi * 3.1415926 / 180.0, 0.0, 1.0, 0.0);
eyeFrame.MoveForward(-gb_eye_radius);
eyeFrame.GetCameraMatrix(gb_hit_modelview);
gb_modelViewMatrix.PushMatrix(gb_hit_modelview);
// draw coordinate system
if(gb_bCoord)
{
DrawCoordinateAxis();
}
if(gb_bTexture)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
GLfloat vAmbientColor[] = { 0.2f, 0.2f, 0.2f, 1.0f };
GLfloat vDiffuseColor[] = { 1.0f, 1.0f, 1.0f, 1.0f};
glUseProgram(normalMapShader);
glUniform4fv(locAmbient, 1, vAmbientColor);
glUniform4fv(locDiffuse, 1, vDiffuseColor);
glUniform3fv(locLight, 1, vEyeLight);
glUniform1i(locColorMap, 0);
glUniform1i(locNormalMap, 1);
gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
}
else
{
if(!gb_bOnlyVoxel)
{
if(gb_bPoints)
{
//GLfloat vPointColor[] = { 1.0, 1.0, 0.0, 0.6 };
GLfloat vPointColor[] = { 0.2, 0.0, 0.0, 0.9 };
gb_shaderManager.UseStockShader(GLT_SHADER_FLAT, gb_transformPipeline.GetModelViewProjectionMatrix(), vPointColor);
gb_treeskl.Display(NULL, NULL, 1);
}
if(gb_bSkeleton)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
glUseProgram(adsPhongShader);
glUniform3fv(locLight, 1, vEyeLight);
gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
}
}
if(gb_bVoxel)
{
GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
glUseProgram(adsPhongShader);
glUniform3fv(locLight, 1, vEyeLight);
SetVoxelColor();
glPolygonMode(GL_FRONT, GL_LINE);
glLineWidth(1.0f);
gb_treeskl.DisplayVoxel();
glPolygonMode(GL_FRONT, GL_FILL);
}
}
//glUniformMatrix4fv(locMVP, 1, GL_FALSE, gb_transformPipeline.GetModelViewProjectionMatrix());
//glUniformMatrix4fv(locMV, 1, GL_FALSE, gb_transformPipeline.GetModelViewMatrix());
//glUniformMatrix3fv(locNM, 1, GL_FALSE, gb_transformPipeline.GetNormalMatrix());
//gb_sphereBatch.Draw();
gb_modelViewMatrix.PopMatrix();
glutSwapBuffers();
}
I think you are reading it correctly; the only problem is that you are not linearizing the depth from the buffer back to the <znear, zfar> range, hence the ~1 value for the whole screen. Because of the non-linear encoding of depth, almost all of the stored values are very close to 1.
I do it like this:
double glReadDepth(double x,double y,double *per=NULL) // x,y [pixels], per[16]
{
GLfloat _z=0.0; double m[16],z,zFar,zNear;
if (per==NULL){ per=m; glGetDoublev(GL_PROJECTION_MATRIX,per); } // use actual perspective matrix if not passed
zFar =0.5*per[14]*(1.0-((per[10]-1.0)/(per[10]+1.0))); // compute zFar from perspective matrix
zNear=zFar*(per[10]+1.0)/(per[10]-1.0); // compute zNear from perspective matrix
glReadPixels(x,y,1,1,GL_DEPTH_COMPONENT,GL_FLOAT,&_z); // read depth value
z=_z; // logarithmic
z=(2.0*z)-1.0; // logarithmic NDC
z=(2.0*zNear*zFar)/(zFar+zNear-(z*(zFar-zNear))); // linear <zNear,zFar>
return -z;
}
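For example, a usage sketch (mouse_x and mouse_y are hypothetical window coordinates with a top-left, y-down origin, as most window systems report them):
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT,viewport);
double depth=glReadDepth(mouse_x,viewport[3]-mouse_y-1); // flip y to GL's bottom-left origin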
Do not forget that x,y are in pixels and (0,0) is the bottom left corner!!! The returned depth is in the range <zNear, zFar>. The function assumes you are using a perspective transform like this:
void glPerspective(double fovy,double aspect,double zNear,double zFar)
{
double per[16],f;
// note: divide(a,b) is my safe-division helper and deg converts degrees to radians (pi/180)
for (int i=0;i<16;i++) per[i]=0.0;
// original gluProjection
// f=divide(1.0,tan(0.5*fovy*deg))
// per[ 0]=f/aspect;
// per[ 5]=f;
// corrected gluProjection
f=divide(1.0,tan(0.5*fovy*deg*aspect));
per[ 0]=f;
per[ 5]=f*aspect;
// z range
per[10]=divide(zFar+zNear,zNear-zFar);
per[11]=-1.0;
per[14]=divide(2.0*zFar*zNear,zNear-zFar);
glLoadMatrixd(per);
}
Beware: without a linear depth buffer, the depth accuracy will be good only for objects close to the camera. For more info see:
How to correctly linearize depth in OpenGL ES in iOS?
If the problem persists, there might also be another reason for it. Do you have a depth buffer in your pixel format? On Windows you can check like this:
Getting a window's pixel format
A missing depth buffer could explain why the value is always exactly 1 (rather than something like ~0.997). In such a case you need to change the init of your window, enabling some bits for the depth buffer (16/24/32). See:
What is the proper OpenGL initialisation on Intel HD 3000?
For more detailed info about using this technique (with a C++ example) see:
OpenGL 3D-raypicking with high poly meshes
Well, you missed pasting the really relevant parts of the code. Also, the state of the depth testing unit has no influence on what glReadPixels delivers. How about you post your rendering code as well?
Update
After a buffer swap (SwapBuffers) the contents of the back buffer are undefined, and the default state for framebuffer reads is to read from the back buffer. Technically, double buffering applies only to the color component, not the depth and stencil components, but you might run into a driver issue with that.
I suggest two tests to rule those out (both sketched below):
Do a read of the depth buffer with glReadBuffer(GL_BACK); right before the SwapBuffers.
Select the front buffer with glReadBuffer(GL_FRONT); for reading after SwapBuffers.
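A sketch of both tests, reusing the glReadPixels call from the question (glutSwapBuffers stands in for whatever SwapBuffers call your framework uses):
// Test 1: explicitly read from the back buffer, before the swap
glReadBuffer(GL_BACK);
glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, z);
glutSwapBuffers();
// Test 2: read from the front buffer, after the swap
glutSwapBuffers();
glReadBuffer(GL_FRONT);
glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, z);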
Also, please specify in which context (program, not OpenGL, well, the latter too) you did your glReadPixels when this problem occurs. Also check whether you can read color values correctly.
I draw a glutSolidCube and a glutSolidTeapot on the screen.
Whenever I activate glEnable(GL_CULL_FACE) I get different results for each object. I can either get the cube to be shown properly (glCullFace(GL_BACK)), or the teapot (glCullFace(GL_FRONT)), but never both of them.
If I disable culling, then both of them are shown properly, but I would like to be able to activate it.
Other objects defined by me are not being shown properly either.
Since these objects are defined by GLUT, I can guess it's not a problem with their normals, or is it?
I show an image of the effect:
Light definition:
void setLighting(void) {
//setMaterial();
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
//ambient light color variables
GLfloat alr = 0.0;
GLfloat alg = 0.0;
GLfloat alb = 0.0;
//diffuse light color variables
GLfloat dlr = 1.0;
GLfloat dlg = 1.0;
GLfloat dlb = 1.0;
//specular light color variables
GLfloat slr = 1.0;
GLfloat slg = 1.0;
GLfloat slb = 1.0;
//light position variables
GLfloat lx = 0.0;
GLfloat ly = 100.0;
GLfloat lz = 100.0;
GLfloat lw = 0.0;
GLfloat DiffuseLight[] = {dlr, dlg, dlb}; //set DiffuseLight[] to the specified values
GLfloat AmbientLight[] = {alr, alg, alb}; //set AmbientLight[] to the specified values
GLfloat SpecularLight[] = {slr, slg, slb}; //set SpecularLight[] to the specified values
GLfloat LightPosition[] = {lx, ly, lz, lw}; //set the LightPosition to the specified values
GLfloat global_ambient[] = { 0.1f, 0.1f, 0.1f, 1.0f};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
glEnable(GL_LIGHTING);
glLightfv (GL_LIGHT0, GL_DIFFUSE, DiffuseLight); //change the light accordingly
glLightfv (GL_LIGHT0, GL_AMBIENT, AmbientLight); //change the light accordingly
glLightfv (GL_LIGHT0, GL_SPECULAR, SpecularLight); //change the light accordingly
glLightfv (GL_LIGHT0, GL_POSITION, LightPosition); //change the light accordingly
}
Depth test and culling enabling:
glEnable(GL_DEPTH_TEST); // Enable the depth buffer
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Ask for nicest perspective correction
glEnable(GL_CULL_FACE); // Cull back facing polygons
glCullFace(GL_BACK);
Your depth buffering looks bad.
Do you:
ask for a depth buffer of adequate resolution? (something like 32 bits)
pass GL_DEPTH_BUFFER_BIT to glClear?
try reducing the distance between your near and far clipping planes to get better resolution?
I know the part you posted is about your own objects vs. the GLUT teapot, but the GLUT teapot definitely has CCW vertex winding, so the fact that it doesn't show up when you turn on culling makes me think depth buffer.
Simple face culling is based on the order of the vertices: whether they end up clockwise or anti-clockwise in screen space. So the problem could be in the definition order of the vertices, or you might be applying some kind of transform which flips the order (e.g. a negative scale). The normals only play a role in lighting.
This statement seems odd:
Whenever I activate glEnable(GL_CULL_FACE) I get different results for each object. I can either get the cube to be shown properly (glCullFace(GL_BACK)), or the teapot (glCullFace(GL_FRONT)), but never both of them.
There are two possible explanations for this:
Either your GLUT is buggy, or you've (accidentally?) swapped the Z axis (for example by using all negative distances for the near and far clipping planes).
You switched the front face winding direction (call to glFrontFace)
Other objects defined by me are not being shown properly either.
The front face is determined by the screen-space vertex winding, i.e. whether the vertex order on screen is clockwise or counterclockwise. By default, OpenGL treats faces drawn with their vertices in counterclockwise order as front faces. Normals are not taken into account for this.
I think the easiest way for you to fix this is by swapping the order in which you submit the vertices.
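If reordering the vertices is inconvenient, you can instead flip what OpenGL considers front facing; a small sketch (drawMyObject is a hypothetical stand-in for your own draw call):
glFrontFace(GL_CW);  // treat clockwise-wound faces as front faces (the default is GL_CCW)
drawMyObject();
glFrontFace(GL_CCW); // restore the default for everything else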
I think the GLUT teapot only renders correctly without CULL_FACE enabled, which would indicate that the normals that are generated don't correspond to the vertex order. The visual effect of this is that the teapot seems to rotate in the opposite direction as other objects. This is the well-known effect of rotating a hollow mask. The brain tries to make sense of the scene taking its cues from the lighting. If I invert the space by a glScalef(-1,-1,-1) before drawing, it looks like a normal teapot, even when culling.
Also note the teapot is not a closed surface, and that culling allows you to see through it along the gap around the lid.
If you want to combine 'cullable' objects next to this uncullable teapot, you can switch off culling just when rendering the teapot.
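A sketch of that (assuming the standard glutSolidTeapot call):
glDisable(GL_CULL_FACE); // the teapot's winding does not survive culling
glutSolidTeapot(1.0);
glEnable(GL_CULL_FACE);  // re-enable for the rest of the scene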
OMG, I have been working on this for the last few days and just found the solution.
Obviously, you have to set:
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE); // Don't waste energy calling glCullFace(GL_BACK),
                        // since it is set to GL_BACK by default.
                        // Yes, it does cut triangles NOT facing you.
Also, make sure your camera is not using crazy distances, like 0.0000001f to 1 billion.
Then (I didn't find this anywhere except in an obscure tutorial) you have to set up your depth renderbuffer (yep, another 5 lines right there).
Put these 5 lines anywhere after you generate a framebuffer name (glGenFramebuffers(1, &FramebufferName);) and before your display loop, and that WILL do the trick. I tried it before the FramebufferName generation and it didn't work, but before or after texture generation it still works.
// The depth buffer
GLuint depthrenderbuffer = 0;
glGenRenderbuffers(1, &depthrenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, windowWidth, windowHeight); // a sized depth format (e.g. GL_DEPTH_COMPONENT24) is the safe choice here
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);
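It is also worth verifying that the attachment actually worked; a minimal check (assuming your framebuffer is still bound):
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "depth attachment failed, framebuffer is incomplete\n");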
OpenGL really did drive me crazy for years, and I still can't find a complete higher-level C++ library that wraps every OpenGL call. I'm building one right now, and if anyone is interested, I might publish it one of these days.
I have a simple particle effect in OpenGL using GL_POINTS. The following function is called with the particles ordered from the one furthest from the camera to the one nearest the camera:
void draw_particle(particle* part) {
/* The following is how the distance is calculated when ordering.
* GLfloat distance = sqrt(pow(get_camera_pos_x() - part->pos_x, 2) + pow(get_camera_pos_y() - part->pos_y, 2) + pow(get_camera_pos_z() - part->pos_z, 2));
*/
static GLfloat quadratic[] = {0.005, 0.01, 1/600.0};
glPointParameterfvARB(GL_POINT_DISTANCE_ATTENUATION_ARB, quadratic);
glPointSize(part->size);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_ARB);
glEnable(GL_TEXTURE_2D);
glBegin(GL_POINTS);
glColor4f(part->r, part->g, part->b, part->a);
glVertex3f(part->pos_x, part->pos_y, part->pos_z);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_POINT_SPRITE_ARB);
}
However, there is some artifacting when rendering, as can be seen in this image: http://img199.imageshack.us/img199/9574/particleeffect.png
The problems go away if I disable depth testing, but I need the effects to be able to interact with other elements of the scene, appearing in front of and behind elements of the same GL_TRIANGLE_STRIP depending on depth.
If your particles are already sorted you can render like this (see the sketch after these steps):
Render the particles with depth writes enabled but without depth testing (I don't remember the exact constants).
Render the rest of the scene with the depth test enabled.
This way you are sure to get scene interaction with nice-looking particles.
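A sketch of what that ordering could look like (DrawParticlesSorted and DrawScene are hypothetical stand-ins for your own draw functions):
glDepthMask(GL_TRUE);     // write depth...
glDisable(GL_DEPTH_TEST); // ...but do not test it while drawing the sorted particles
DrawParticlesSorted();
glEnable(GL_DEPTH_TEST);  // normal depth testing for the rest of the scene
DrawScene();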
Note: Please specify which OpenGL version you use when you post questions. That goes for any API.
When you want to render primitives with alpha blending, you can't have depth writes enabled. You need to draw your blended primitives sorted back-to-front. If you have opaque objects in the scene, render them first, and then draw your transparent primitives in a back-to-front sorted fashion with depth test enabled and depth writes disabled.
glEnable(GL_DEPTH_TEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE);
glEnable(GL_CULL_FACE);
while(1)
{
...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_BLEND);
glDepthMask(1); // opaque pass: depth writes on, no blending
RenderOpaque();
SortSprites(); // sort the blended sprites back-to-front
glEnable(GL_BLEND);
glDepthMask(0); // blended pass: depth test stays on, depth writes off
DrawSprites();
...
}