OpenGL – Get distance to farthest face from camera instead of closest face

I'm using OpenGL (with Python bindings) to render depth maps of models saved as .obj files. The output is a numpy array with the depth to the object at each pixel.
The seemingly relevant parts of the code I'm using look like this:
glDepthFunc(GL_LESS) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
I can use this to successfully render depth images of the object.
However, instead of obtaining the depth of the face closest to the camera along each ray, I want to obtain the depth of the face furthest from the camera. I want faces to be rendered regardless of which direction they are facing – i.e. I don't want to backface-cull.
I tried to achieve this by modifying the above code like so:
glDisable(GL_CULL_FACE) # Turn off backface culling
glDepthFunc(GL_GREATER) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
However, when I do this I don't get any depth rendered at all. I suspect this is because I am using GL_DEPTH_COMPONENT to read out the depth; however, I am not sure what to change to fix this.
How can I get the depth of the faces furthest from the camera?

Switching the depth test to GL_GREATER will in principle do the trick; you just overlooked a tiny detail: you need to initialize the depth buffer differently. By default, it is initialized to 1.0 when clearing the depth buffer, so that GL_LESS comparisons can update it to values smaller than 1.0.
Now you want it to work the other way around, so you must initialize it to 0.0. To do so, just add a glClearDepth(0.0) before the glClear(... | GL_DEPTH_BUFFER_BIT).
Furthermore, you yourself mention that you don't want backface culling here. But instead of disabling culling you can also switch it around using glCullFace. You probably want GL_FRONT in that case, as GL_BACK is the default. Disabling it entirely will work too, of course.
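Putting this together with the question's Python code, a minimal sketch of the changed setup might look like this (Width and Height as in the question):
glDisable(GL_CULL_FACE) # render faces regardless of winding
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
glDepthFunc(GL_GREATER) # keep the fragment farthest from the camera
glClearDepth(0.0) # initialize depth to 0.0 instead of the default 1.0
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)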

Make sure you use glDepthMask(GL_TRUE); to enable depth writing to the buffer.

Related

Why can't I render my depth map on a quad?

Some intro:
I'm currently trying to see how I can convert a depth map into a point cloud. To do this, I render a scene as usual and produce a depth map. From the depth map I try to recreate the scene as a point cloud from the given camera angle.
To do this I created an FBO so I can render my scene's depth map to a texture. The depth map is rendered to the texture successfully. I know this works because I'm able to generate the point cloud from the depth texture using glGetTexImage and converting the acquired data.
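For context, a depth-only FBO of this kind can be set up roughly as follows (a sketch in the Python bindings used in the first question; all names are illustrative, not the asker's actual code):
depthTexture = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, depthTexture)
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, None)
fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0)
glDrawBuffer(GL_NONE) # depth-only: no color attachment to draw into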
The problem:
For presentation purposes, I want the depth map to be visible in a separate window. So I just created a simple shader to draw the depth map texture on a quad. However, instead of the depth texture being drawn on the quad, the texture being drawn is the last one that was bound using glBindTexture. For example:
glUseProgram(simpleTextureViewerProgram);
glBindVertexArray(quadVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,randomTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(quadTextureSampler, 0);
glDrawArrays(GL_TRIANGLES, 0, 6);
The code above renders "randomTexture" on the quad instead of "depthTexture". As I said earlier, "depthTexture" is the one I use in glGetTexImage, so it does contain the depth map.
I may be wrong, but if I had to guess, the last glBindTexture command fails because "depthTexture" is not an RGB texture but a depth-component texture. Is this the reason? How can I draw my depth map on the quad then?

How do I make my object transparent but still show the texture?

I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (yes, I have learned this quickly) and I'm at a bit of a stop with textures.
I'm having a bit of trouble making my texture's alpha work. In this image, I have this character from Spiral Knights. As you can see on the top of his head, there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
You must sort primitives back to front and render them in that order (order-independent transparency in depth-buffer-based renderers is still an ongoing research topic).
When using textures to control the alpha channel you must either write a shader that passes the texture's alpha values down to the resulting fragment color, or – if you're using the fixed-function pipeline – use the GL_MODULATE texture environment mode, or GL_DECAL with the primitive color's alpha set to 0, or GL_REPLACE.
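A minimal sketch of the back-to-front idea, in the Python style used elsewhere on this page (triangles, camera_pos, and draw_triangle are hypothetical names standing in for your own scene data and draw call; np is numpy as in the first question):
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
# sort by squared distance to the camera, farthest first
triangles.sort(key=lambda t: -np.sum((t.center - camera_pos) ** 2))
for tri in triangles:
    draw_triangle(tri) # hypothetical per-triangle draw call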

Drawing the grid over the texture

Before diving into details: I have added the opengl tag because JoGL is a Java OpenGL binding, and the question should be accessible to experts in either.
Basically what I am trying to do is render a grid over a texture in JoGL using GLSL. So far my idea was to render the texture first and then draw the grid on top. So what I am doing is:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
// skipped the code where I setup the model-view matrix and where I do fill the buffers
gl2.glVertexAttribPointer(positionAttrId, 3, GL2.GL_FLOAT, false, 0, vertexBuffer.rewind());
gl2.glVertexAttribPointer(textureAttrId, 2, GL2.GL_FLOAT, false, 0, textureBuffer.rewind());
gl2.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
And after I draw the grid, using:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl2.glDrawElements(GL2.GL_LINE_STRIP, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
Without the depth test enabled, the result looks pretty awesome.
But when I start updating the coordinates of the vertices (namely one of the axes, which corresponds to height), the rendering goes wrong (some things that should be in front appear behind, which makes sense with the depth test disabled). So I have enabled the depth test:
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);
And the result of the rendering is the following:
You can clearly see that the lines of the grid are blurred, some of them are displayed thinner than others, etc. What I have tried in order to fix the problem is some line smoothing:
gl2.glHint(GL2.GL_LINE_SMOOTH_HINT, GL2.GL_NICEST);
gl2.glEnable(GL2.GL_LINE_SMOOTH);
The result is better, but I am not still satisfied.
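For what it's worth, fixed-function GL_LINE_SMOOTH only takes visible effect with blending enabled (sketched here in the Python style used elsewhere on this page; the JoGL calls are analogous):
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(GL_LINE_SMOOTH)
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST)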
QUESTION: So basically the question is how to improve the solution further, so that the lines are solid and displayed nicely when I start updating the vertex coordinates.
If required, I can provide the shader code (it is really simple: the vertex shader just calculates the position from the projection matrix, the model-view matrix and the vertex coordinates, and the fragment shader looks the color up in a texture sampler).

Rendering 3D Models With Textures That Have Alpha In OpenGL

So I'm trying to figure out the best way to render a 3D model in OpenGL when some of the textures applied to it have alpha channels.
When I have the depth buffer enabled and start drawing all the triangles in a 3D model, if it draws a triangle that is in front of another triangle in the model, it will simply not render the back triangle when it gets to it. The problem is when the front triangle has alpha transparency and should be see-through to the triangle behind it, but the triangle behind is still not rendered.
Disabling the depth buffer eliminates that problem, but creates the obvious issue that if the front triangle IS opaque, triangles behind it that are rendered later will still be drawn on top of it.
For example, I am trying to render a pine tree that is basically some cones stacked on top of each other that have a transparent base. The following picture shows the problem that arises when the depth buffer is enabled:
You can see how you can still see the outline of the transparent triangles.
The next picture shows what it looks like when the depth buffer is disabled.
Here you can see how some of the triangles on the back of the tree are being rendered in front of the rest of the tree.
Any ideas how to address this issue, and render the pine tree properly?
P.S. I am using shaders to render everything.
If you're not using any partial transparency (every alpha value is either 0 or 255), you can glEnable(GL_ALPHA_TEST) and that should help you. The problem is that if you render the top cone first, it deposits the whole quad into the z-buffer (even the transparent parts), so the lower branches underneath get z-rejected when it's their turn to be drawn. With alpha testing enabled, fragments that fail the alpha test (set with glAlphaFunc) are not written to the z-buffer.
If you want to use partial transparency, you'll need to sort the order of rendering objects from back to front, or bottom to top in your case.
You'll need to leave the z-buffer enabled as well.
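A minimal sketch of that fixed-function alpha-test setup, in the Python bindings used elsewhere on this page:
glEnable(GL_DEPTH_TEST) # keep the z-buffer on, as noted above
glEnable(GL_ALPHA_TEST)
glAlphaFunc(GL_GREATER, 0.5) # fragments with alpha <= 0.5 are rejected before any depth write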
[edit] Whoops, I realize those functions don't work when you're using shaders. In the shader case you want to use discard in the fragment shader if the alpha value is close to zero:
// in the fragment shader, assuming `color` holds the sampled texel
if (color.a < 0.01) {
    discard; // the fragment is dropped entirely, so it never reaches the depth buffer
} else {
    outcolor = color;
}
You need to implement a two-pass algorithm.
The first pass renders only the back faces, while the second pass renders only the front faces.
This way you don't need to order the triangles, but some artifacts may occur depending on whether your geometry is convex or not.
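A sketch of that two-pass idea using face culling, in the Python style used elsewhere on this page (draw_model is a hypothetical stand-in for the model's draw calls):
glEnable(GL_CULL_FACE)
glCullFace(GL_FRONT) # pass 1: cull front faces, so only back faces are drawn
draw_model()
glCullFace(GL_BACK) # pass 2: cull back faces, so only front faces are drawn
draw_model()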
I may be wrong, but this happens because when you render in 3D, Direct3D's default settings do not render the back side of triangles; with the Z test removed it draws them in submission order, and with Z on it no longer draws the back side of the triangles.
It is possible to show both sides of the triangle even with Z enabled, although I'm thinking there might be a reason culling is normally enabled, such as speed.
Device->SetRenderState(D3DRS_CULLMODE, Value);
where Value can be one of:
D3DCULL_NONE - shows both sides of the triangle
D3DCULL_CW - culls the front side of the triangle
D3DCULL_CCW - the default state

C++, OpenGL Z-buffer prepass

I'm making a simple voxel engine (think Minecraft) and am currently at the stage of getting rid of occluded faces to gain some precious fps. I'm not very experienced in OpenGL and do not quite understand how the glColorMask magic works.
This is what I have:
// new and shiny
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// this one goes without saying
glEnable(GL_DEPTH_TEST);
// I want to see my code working, so fill the mask
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
// fill the z-buffer, or whatever
glDepthFunc(GL_LESS);
glColorMask(0,0,0,0);
glDepthMask(GL_TRUE);
// do a first draw pass
world_display();
// now only show lines, so I can see the occluded lines do not display
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
// I guess the error is somewhere here
glDepthFunc(GL_LEQUAL);
glColorMask(1,1,1,1);
glDepthMask(GL_FALSE);
// do a second draw pass for the real rendering
world_display();
This somewhat works, but once I change the camera position the world starts to fade away; I see fewer and fewer lines until there is nothing at all.
It sounds like you are not clearing your depth buffer.
You need to have depth writing enabled (via glDepthMask(GL_TRUE);) while you attempt to clear the depth buffer with glClear. You probably still have it disabled from the previous frame, causing all your clears to be no-ops in subsequent frames. Just move your glDepthMask call before the glClear.
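Applied to the snippet above, the reordering would look like this (a sketch, using only calls already present in the question's code):
// depth writes must be on for glClear to touch the depth buffer
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... the two world_display() passes follow as before ...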
glColorMask and glDepthMask determine which parts of the frame buffer are actually written to.
The idea of early Z culling is to render only the depth buffer part first -- the actual savings come from sorting the geometry near to far, so that the GPU can quickly discard occluded fragments. While drawing the Z buffer you don't want to write the color component: this allows you to switch off shaders, texturing -- in short, everything that's computationally intensive.
A word of warning: early Z only works with opaque geometry. Actually, the whole depth buffer algorithm only works for opaque stuff. As soon as you're doing blending, you'll have to sort far to near and not use depth buffering (search for "order independent transparency" for algorithms to overcome the associated problems).
So if you've got anything that's blended, remove it from the early-Z stage.
In the first pass you set
glDepthMask(1); // enable depth buffer writes
glColorMask(0,0,0,0); // disable color buffer writes (glColorMask takes four flags: R, G, B, A)
glDepthFunc(GL_LESS); // use normal depth order testing
glEnable(GL_DEPTH_TEST); // and we want to perform depth tests
After the Z pass is done you change the settings a bit
glDepthMask(0); // don't write to the depth buffer
glColorMask(1,1,1,1); // now write the color components
glDepthFunc(GL_EQUAL); // only draw if the depth of the incoming fragment
// matches the depth already in the depth buffer
GL_LEQUAL does the job too, but it also lets fragments pass that are even closer than the depth stored in the buffer. Since the depth buffer is no longer updated, anything lying between the camera and the stored depth will overwrite the pixel, each time something is drawn there.
A slight variation on the theme is using an 'early Z' populated depth buffer as a geometry buffer in multiple deferred shading passes afterwards.
To cull even more geometry, take a look at occlusion queries. With occlusion queries you ask the GPU how many fragments, if any, pass all tests. Since this is a voxel engine, you're probably using an octree or kd-tree. Drawing the spatially dividing faces of a tree branch (with glDepthMask(0), glColorMask(0,0,0,0)) before traversing the branch tells you whether any geometry in that branch is visible at all. Combined with a near-to-far sorted traversal and (coarse) frustum culling on the tree, this will give you HUGE performance benefits.
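A minimal occlusion-query sketch, in the Python bindings used elsewhere on this page (draw_branch_bounds and traverse_branch are hypothetical helpers for the tree traversal described above):
query = glGenQueries(1)[0]
# draw the branch's bounding faces invisibly, counting the fragments that pass
glDepthMask(GL_FALSE)
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
glBeginQuery(GL_SAMPLES_PASSED, query)
draw_branch_bounds() # hypothetical: renders the branch's bounding faces
glEndQuery(GL_SAMPLES_PASSED)
glDepthMask(GL_TRUE)
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
# note: reading the result immediately stalls the pipeline; real code would
# poll GL_QUERY_RESULT_AVAILABLE or reuse the previous frame's result
if glGetQueryObjectuiv(query, GL_QUERY_RESULT) > 0:
    traverse_branch() # hypothetical: some geometry in the branch may be visible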
A Z pre-pass can also work with translucent objects: just don't render them in the pre-pass; then z-sort them and render them in a separate pass afterwards.