I have a small OpenGL app for scientific visualization with a deferred rendering pipeline. I have two passes: a geometry pass, where I render textures with positions, normals, albedo, segmentation, etc.; and a lighting pass, where I map some of that data onto a quad and render it to the screen, or even save some images to disk. So, classic deferred rendering.
Now I need to render a wireframe into an additional texture.
I thought about doing it in a geometry shader, but it seemed kind of complicated and performance wasn't an issue, so I just set up a third pass with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);, where I render the wireframe to a texture and then pass it to the lighting pass along with the other stuff.
It works okay, but I was wondering if it's possible to somehow use the depth buffer and not render wireframe lines behind the model? Sure, I can cull back-facing polygons, but there will still be lines behind front-facing polygons that are themselves front-facing. What I want is to cull them as if the polygons were filled, but render only the wireframe.
It would also be okay to render the model and then render the wireframe on top of it, but I can't do this, because I render the model to a texture in the geometry pass with glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); and the wireframe in another pass with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);, so I can't simply reuse the default depth buffer.
So, do you guys have any thoughts?
Okay, I solved the problem. Before implementing solid wireframe rendering as #httpdigest suggested, I tried simply saving the depth buffers from both passes and rendering the model over the wireframe wherever the model's depth is less than the wireframe's.
It turned out to be pretty much what I needed.
But I'm almost sure the geometry-shader approach would be much faster and use less memory. Then again, I'm not developing a 60 fps game, so...
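For reference, the "render the model, then the wireframe on top" variant from the question also becomes possible if the two passes share a depth attachment. A minimal sketch, assuming gDepthTex is the geometry pass's depth texture and wireFBO is the wireframe pass's framebuffer (both hypothetical names):

glBindFramebuffer(GL_FRAMEBUFFER, wireFBO);
// Reuse the geometry pass's depth texture, so lines behind filled
// surfaces fail the depth test.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, gDepthTex, 0);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);                 // don't disturb the stored depth
glDepthFunc(GL_LEQUAL);                // lines lying on surfaces still pass
glEnable(GL_POLYGON_OFFSET_LINE);
glPolygonOffset(-1.0f, -1.0f);         // nudge lines toward the camera
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawModel();                           // hypothetical scene draw call
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glDisable(GL_POLYGON_OFFSET_LINE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);

The polygon offset pulls the lines slightly toward the camera so they win the depth comparison against the exactly coincident filled surface.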
Related
I just have some questions about deferred shading. I've gotten to the point where I have the Color, Position, and Normal textures from the Multiple Render Targets. My questions pertain to what I do next. To make sure I've read the correct data from the textures, I've put a plane on the screen and rendered the textures onto it. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or quad that takes up the screen and apply all the calculations onto that plane? If I do that, I'm confused about how I'd get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking about this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel of the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically (a rough code sketch follows these steps):
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register to find the uv)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
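To make that loop concrete, here is a rough C-side sketch of the simple full-screen-quad version, assuming additive blending and hypothetical helpers lightShader, setLightUniforms() and drawFullScreenQuad():

// Accumulate each light's contribution additively.
glDisable(GL_DEPTH_TEST);              // full-screen quads need no depth test
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);           // dst += src: sum the lighting terms
glUseProgram(lightShader);             // samples the g-buffer textures
for (int i = 0; i < numLights; ++i) {
    setLightUniforms(lightShader, &lights[i]);  // position, color, radius...
    drawFullScreenQuad();              // one quad per light: simple but slow
}
glDisable(GL_BLEND);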
Once you've got this working, there's loads of different ways to speed it up (scissor rect, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once and higher level techniques such as tiling).
There are a lot of different slants on Deferred Shading these days, but the original technique is covered thoroughly here: http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf
I'm trying to implement a deferred shader with OpenGL and GLSL and I'm having trouble with the light geometry. These are the steps I'm taking:
Bind multitarget framebuffer
Render color, position, normal and depth
Unbind framebuffer
Enable blend
Disable depth testing
Render every light
Enable depth testing
Disable blend
Render to screen
But since I'm only rendering the front faces, when I'm inside a light volume it disappears completely. Rendering the back faces as well doesn't work either, since I would get double the light contribution (and, when inside, half of that [i.e. the normal amount]).
How can I render the same light value from inside and outside the light geometry?
Well, in my case I do it like this (a fleshed-out sketch follows the steps):
Bind gbuffer framebuffer
Render color, position, normal
Unbind framebuffer
Enable blend
Enable depth testing
glDepthMask(0);
glCullFace(GL_FRONT); //to render only backfaces
glDepthFunc(GL_GEQUAL); //to test if light fragment is "behind geometry", or it shouldn't affect it
Bind light framebuffer
Blit depth from gbuffer to light framebuffer //so you can depth-test light volumes against geometry
Render every light
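Filled out as actual code, that state setup might look like this (the FBO handles, light array, and draw helpers are hypothetical names):

// Share the scene depth with the light pass so volumes can be depth-tested.
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, lightFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);       // accumulate light additively
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);             // test against depth, never write it
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);              // render only the volume's back faces
glDepthFunc(GL_GEQUAL);            // pass only where the back face is behind
                                   // (or on) the scene geometry
glUseProgram(lightVolumeShader);
for (int i = 0; i < numLights; ++i) {
    setLightUniforms(lightVolumeShader, &lights[i]);
    drawLightSphere(&lights[i]);   // hypothetical bounding-sphere mesh
}
glCullFace(GL_BACK);               // restore defaults
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);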
If I remember correctly, in my deferred renderer I just render only the back faces of the light volume. The drawback is that you cannot depth-test that way; you only know whether a light is behind geometry after the light calculation is done, and then discard the pixel.
As another answer explained, you can do depth testing. Test for greater-or-equal to see whether the back face is behind or on the geometry, and therefore whether the light volume intersects the geometry's surface.
Alternatively, you could check whether the camera is inside the light volume when rendering and switch the culled faces accordingly.
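A sketch of that check, assuming a simple point light with a bounding-sphere radius (the field names and near-plane margin are illustrative; sqrtf comes from <math.h>):

// If the camera is inside the volume (plus a near-plane margin), the
// front faces are behind the eye, so draw the back faces instead.
float dx = camPos[0] - light.position[0];
float dy = camPos[1] - light.position[1];
float dz = camPos[2] - light.position[2];
float dist = sqrtf(dx*dx + dy*dy + dz*dz);
if (dist < light.radius + nearPlane) {
    glCullFace(GL_FRONT);
    glDepthFunc(GL_GEQUAL);
} else {
    glCullFace(GL_BACK);
    glDepthFunc(GL_LEQUAL);
}
drawLightSphere(&light);           // hypothetical draw helper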
I'd like to implement a pencil-rendering algorithm. I have 32 textures with different stroke intensities and some models. The process goes like this: first, render the model using Phong shading to determine the intensity; then, map the stroke texture onto the rendered result.
The textures should be stored in a 3D texture. The problem is that I don't know how to do multipass rendering with OpenGL and shaders, and I don't know how to access the textures with the right coordinates. What if the faces of the mesh are smaller than the texture? Can anybody show me some examples of doing this?
You need to render to some sort of offscreen buffer in order to do multi-pass rendering. Framebuffers are the current method of doing offscreen rendering in OpenGL. Apple has a good page describing how to use them. (It's not Mac-specific.)
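In case it helps, a minimal framebuffer setup for the offscreen pass might look like this (a GL 3.x-style sketch; the 1024x1024 size and all variable names are arbitrary):

GLuint fbo, colorTex, depthRbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment: the texture the first pass renders into.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Depth attachment, so the offscreen pass can depth-test normally.
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 1024);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; /* handle the error */

// Pass 1: render the Phong-shaded model into colorTex.
// Pass 2: bind colorTex as an input texture and map intensities to strokes.
glBindFramebuffer(GL_FRAMEBUFFER, 0);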
I'm not sure I understand your question about texturing, so can't answer that part.
I'm making a simple voxel engine (think Minecraft) and am currently at the stage of getting rid of occluded faces to gain some precious fps. I'm not very experienced with OpenGL and don't quite understand how the glColorMask magic works.
This is what I have:
// new and shiny
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// this one goes without saying
glEnable(GL_DEPTH_TEST);
// I want to see my code working, so fill the mask
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
// fill the z-buffer, or whatever
glDepthFunc(GL_LESS);
glColorMask(0,0,0,0);
glDepthMask(GL_TRUE);
// do a first draw pass
world_display();
// now only show lines, so I can see the occluded lines do not display
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
// I guess the error is somewhere here
glDepthFunc(GL_LEQUAL);
glColorMask(1,1,1,1);
glDepthMask(GL_FALSE);
// do a second draw pass for the real rendering
world_display();
This somewhat works, but once I change the camera position the world starts to fade away; I see fewer and fewer lines until nothing is left at all.
It sounds like you are not clearing your depth buffer.
You need to have depth writing enabled (via glDepthMask(GL_TRUE);) while you attempt to clear the depth buffer with glClear. You probably still have it disabled from the previous frame, causing all your clears to be no-ops in subsequent frames. Just move your glDepthMask call before the glClear.
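In other words, keeping the rest of the code from the question unchanged:

glDepthMask(GL_TRUE);   // depth writes must be on for glClear to
                        // actually clear the depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);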
glColorMask and glDepthMask determine which parts of the framebuffer are actually written to.
The idea of early-Z culling is to render only the depth buffer part first; the actual savings come from sorting the geometry near to far, so that the GPU can quickly discard occluded fragments. While filling the Z buffer you don't want to write the color component: this allows you to switch off shaders, texturing, in short everything that's computationally intensive.
A word of warning: early Z only works with opaque geometry. Actually, the whole depth-buffer algorithm only works for opaque stuff. As soon as you're doing blending, you'll have to sort far to near and can't rely on depth buffering (search for "order independent transparency" for algorithms to overcome the associated problems).
So if you've got anything that's blended, remove it from the early-Z stage.
In the first pass you set
glDepthMask(1);          // enable depth buffer writes
glColorMask(0,0,0,0);    // disable color buffer writes
glDepthFunc(GL_LESS);    // use normal depth-order testing
glEnable(GL_DEPTH_TEST); // and we want to perform depth tests
After the Z pass is done you change the settings a bit
glDepthMask(0);          // don't write to the depth buffer
glColorMask(1,1,1,1);    // now write the color components
glDepthFunc(GL_EQUAL);   // only draw if the depth of the incoming fragment
                         // matches the depth already in the depth buffer
GL_LEQUAL does the job too, but it also lets fragments that are even closer than the stored depth pass. Since the depth buffer is no longer updated, anything between the viewer and the stored depth will overwrite the color, each time something is drawn there.
A slight variation on the theme is to use an 'early Z'-populated depth buffer as part of the geometry buffer in multiple deferred shading passes afterwards.
To cull even more geometry, take a look at occlusion queries. With occlusion queries you ask the GPU how many fragments, if any, passed all tests. Since this is a voxel engine, you're probably using an octree or k-d tree. Drawing the spatially dividing faces of a tree branch (with glDepthMask(0), glColorMask(0,0,0,0)) before traversing it tells you whether any geometry in that branch is visible at all. Combined with a near-to-far sorted traversal and (coarse) frustum culling on the tree, this will give you HUGE performance benefits. A sketch follows.
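A minimal occlusion-query sketch (GL 1.5-style glBeginQuery API; drawBranchBounds() and drawBranchContents() are hypothetical helpers). Note that reading the result immediately stalls the pipeline; real engines reuse the previous frame's result or poll GL_QUERY_RESULT_AVAILABLE:

GLuint query;
glGenQueries(1, &query);

// Draw the branch's cheap proxy geometry without touching the framebuffer.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawBranchBounds(branch);              // hypothetical bounding geometry
glEndQuery(GL_SAMPLES_PASSED);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // stalls; see note above
if (samples > 0)
    drawBranchContents(branch);        // hypothetical real geometry draw
glDeleteQueries(1, &query);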
A Z-pre-pass can still work alongside translucent objects: just don't render them in the pre-pass; afterwards, z-sort them and render them back to front.
I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible then, or will I need to construct a custom texture on the fly based on what I want and then draw one quad with that texture? I was really hoping to take advantage of blending here...
Have a look at which glDepthFunc you're using. Perhaps you're using GL_LESS/GL_GREATER; it could work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to make out from the question exactly what you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL, you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, then this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend with.
Try rendering your opaque quad first, then your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
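A minimal sketch of that ordering (the two draw helpers are hypothetical):

glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
drawOpaqueQuad();                  // opaque geometry first

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);             // transparent: test depth, don't write it
drawTransparentQuad();             // placed slightly nearer the camera
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);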