I just have some questions about deferred shading. I have gotten to the point where I have the color, position, and normal data in textures from the multiple render targets. My questions are about what to do next. To make sure I had the correct data in the textures, I put a plane on the screen and rendered the textures onto it. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or quad that takes up the screen and apply all the calculations to it? If I do that, I'm confused about how I would get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel covered by the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register to find the uv)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
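To make the lighting step concrete, here is a minimal GLSL fragment-shader sketch for a single point light (the g-buffer layout, uniform names and attenuation model are assumptions for illustration, not part of the original description; in GLSL, gl_FragCoord plays the role of the vpos register mentioned above):

    #version 330 core
    uniform sampler2D uGAlbedo;      // g-buffer: diffuse albedo
    uniform sampler2D uGNormal;      // g-buffer: world-space normal (float target)
    uniform sampler2D uGPosition;    // g-buffer: world-space position
    uniform vec2  uScreenSize;       // render-target size in pixels
    uniform vec3  uLightPos;         // point-light position (world space)
    uniform vec3  uLightColor;
    uniform float uLightRadius;

    out vec4 fragColor;

    void main()
    {
        // gl_FragCoord gives the pixel being shaded; divide to get the g-buffer UV.
        vec2 uv = gl_FragCoord.xy / uScreenSize;

        vec3 albedo   = texture(uGAlbedo,   uv).rgb;
        vec3 normal   = normalize(texture(uGNormal, uv).xyz);
        vec3 worldPos = texture(uGPosition, uv).xyz;

        vec3  toLight = uLightPos - worldPos;
        float dist    = length(toLight);
        vec3  l       = toLight / dist;

        float atten = clamp(1.0 - dist / uLightRadius, 0.0, 1.0);  // simple linear falloff
        float ndotl = max(dot(normal, l), 0.0);

        fragColor = vec4(albedo * uLightColor * ndotl * atten, 1.0);
    }

Draw the light geometry with additive blending (glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE);) so each light's contribution accumulates into the destination render target.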
Once you've got this working, there are loads of different ways to speed it up (scissor rects, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once, and higher-level techniques such as tiling).
There are a lot of different slants on Deferred Shading these days, but the original technique is covered thoroughly here: http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf
I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except the occlusion does not work correctly, i.e., from certain viewpoints the curve that is supposed to be in the very front appears to be occluded, and vice versa: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; a terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I was not handling the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth-buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z/vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
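For reference, here is a stripped-down geometry-shader sketch of the fix (the Bezier/strip expansion is omitted; only the depth handling is shown, with names following the description above):

    #version 330 core
    layout(lines_adjacency) in;
    layout(line_strip, max_vertices = 4) out;

    void main()
    {
        for (int i = 0; i < 4; ++i)
        {
            vec4 clip = gl_in[i].gl_Position;      // ModelViewProjectionMatrix * Vertex
            vec2 screenCoords = clip.xy / clip.w;  // NDC x,y (scale by Viewport only if you
                                                   // offset in pixel space, then map back)
            float zValue = clip.z / clip.w;        // keep the real NDC depth...
            gl_Position = vec4(screenCoords, zValue, 1.0);  // ...instead of writing 0
            EmitVertex();
        }
        EndPrimitive();
    }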
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up by default exactly as I need.
If you don't use the depth buffer, then the most recently rendered object will always be on top.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).
I'm trying to develop a high-level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the Geometry shader exists. Both the Tessellation and Geometry shaders seem to do the same thing to me. Can someone explain what the Geometry shader does differently from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
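As an illustration of the point-expansion case, here is a minimal sketch that turns each point into a camera-facing quad (it assumes the points arrive in view space and uses made-up uniform names):

    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform mat4  uProjection;   // projection matrix (input points assumed in view space)
    uniform float uHalfSize;     // half the particle size, in view-space units

    out vec2 vUV;

    void main()
    {
        vec4 center = gl_in[0].gl_Position;
        vec2 corners[4] = vec2[](vec2(-1, -1), vec2(1, -1), vec2(-1, 1), vec2(1, 1));
        for (int i = 0; i < 4; ++i)
        {
            vUV = corners[i] * 0.5 + 0.5;
            gl_Position = uProjection * (center + vec4(corners[i] * uHalfSize, 0.0, 0.0));
            EmitVertex();
        }
        EndPrimitive();
    }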
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example, you can render cube shadow maps for point lights in a single pass by duplicating the geometry and projecting it six times, once onto each of the cube's faces.
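A sketch of that single-pass cube-map idea, assuming the vertex shader passes world-space positions through untouched and a per-face view-projection array is supplied (names are illustrative):

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 18) out;   // 3 vertices x 6 faces

    uniform mat4 uFaceViewProj[6];   // one view-projection matrix per cube face

    void main()
    {
        for (int face = 0; face < 6; ++face)
        {
            gl_Layer = face;         // route this copy of the triangle to cube face 'face'
            for (int i = 0; i < 3; ++i)
            {
                gl_Position = uFaceViewProj[face] * gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    }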
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/
I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthographic projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = ModelViewProjectionMatrix * Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screenshot of the scene).
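Something like this is what I have in mind for that extra conversion (a minimal sketch; the texture and uniform names are just placeholders):

    // vertex shader: compute the clip-space position and pass it along
    #version 330 core
    layout(location = 0) in vec4 aVertex;
    uniform mat4 uModelViewProjectionMatrix;
    out vec4 vClipPos;

    void main()
    {
        gl_Position = uModelViewProjectionMatrix * aVertex;
        vClipPos = gl_Position;
    }

    // fragment shader: turn the interpolated clip position into [0,1] screen UVs
    #version 330 core
    uniform sampler2D uSceneTex;   // the pre-rendered, screen-sized texture
    in vec4 vClipPos;
    out vec4 fragColor;

    void main()
    {
        vec2 screenUV = (vClipPos.xy / vClipPos.w) * 0.5 + 0.5;
        fragColor = texture(uSceneTex, screenUV);
    }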
If I am correct, then I must be able to pre-render the scene with a perspective view identical to the one used in the final render, rather than an orthographic view. This is where I have trouble. I can make an orthographic view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts, I used a fragment shader only. The perspective projection worked, but, of course, the screen aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.
First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position in the world, and also to interpolate the color using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could manually map every vertex drawn to the corresponding color from the colormap.
...Or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
The question is: is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, regardless of its size, so the update should touch only the specific part of the lightmap I want to change.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertices, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
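A minimal vertex-shader sketch of that lookup, assuming the lighting cube map and a world-center uniform (all names are illustrative):

    #version 330 core
    layout(location = 0) in vec3 aPosition;

    uniform mat4 uModel;
    uniform mat4 uViewProj;
    uniform samplerCube uLightCube;   // the cube map the lights were rendered into
    uniform vec3 uWorldCenter;        // center of the virtual cube around the world

    out vec3 vLightColor;             // picked up (interpolated) by the fragment shader

    void main()
    {
        vec3 worldPos = (uModel * vec4(aPosition, 1.0)).xyz;
        vec3 dir = normalize(worldPos - uWorldCenter);   // ray from the center through the vertex
        vLightColor = textureLod(uLightCube, dir, 0.0).rgb;
        gl_Position = uViewProj * vec4(worldPos, 1.0);
    }

In the fragment shader you would then multiply vLightColor with the surface color (the "modulate" idea above).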
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
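A minimal sketch of such a per-vertex (Gouraud) diffuse pass, with illustrative uniform names:

    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;

    uniform mat4 uModelView;
    uniform mat4 uProjection;
    uniform mat3 uNormalMatrix;
    uniform vec3 uLightPosView;   // light position in view space
    uniform vec3 uLightColor;
    uniform vec3 uDiffuse;        // material diffuse reflectance

    out vec3 vColor;              // interpolated across the face for the fragment shader

    void main()
    {
        vec3 posView = (uModelView * vec4(aPosition, 1.0)).xyz;
        vec3 n = normalize(uNormalMatrix * aNormal);
        vec3 l = normalize(uLightPosView - posView);

        vColor = uDiffuse * uLightColor * max(dot(n, l), 0.0);
        gl_Position = uProjection * vec4(posView, 1.0);
    }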
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
That is possible, but it will not be effective on current hardware.
You want to render the light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volume has to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader; see the sketch after this list)
Branch in geometry shader directly from a single draw call
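For the instanced option, a minimal geometry-shader sketch (assuming one instance is drawn per slice and the vertex shader forwards gl_InstanceID in a varying named vSlice):

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    flat in int vSlice[];   // = gl_InstanceID, forwarded by the vertex shader

    void main()
    {
        gl_Layer = vSlice[0];   // route this primitive to the matching slice of the layered target
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }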
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
In Computer graphics, what's the difference between material and texture?
In OpenGL, a material is a set of coefficients that define how the lighting model interacts with the surface. In particular, ambient, diffuse, and specular coefficients for each color component (R, G, B) are defined, applied to a surface, and effectively multiplied by the amount of light of each kind/color that strikes the surface. A final emissivity coefficient is then added to each color component, which allows objects to appear luminous without actually interacting with other objects.
A texture, on the other hand, is a set of 1-, 2-, 3-, or 4-dimensional bitmap (image) data that is applied and interpolated onto a surface according to texture coordinates at the vertices. Texture data alters the color of the surface whether or not lighting is enabled (and depending on the texture mode, e.g. decal, modulate, etc.). Textures are frequently used to provide sub-polygon-level detail on a surface, e.g. applying a repeating brick-and-mortar texture to a quad to simulate a brick wall, rather than modeling the geometry of each individual brick.
In the classical (fixed-pipeline) OpenGL model, textures and materials are somewhat orthogonal. In the new programmable-shader world, the line has blurred quite a bit. Frequently, textures are used to influence lighting in other ways. For example, bump maps are textures that are used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
The question suggests a common misunderstanding of various computer graphics concepts. It is one born of pre-shader thinking and coding.
A texture is nothing more than a container for a series of one or more images, where an image is an array of some dimensionality (1D, 2D, etc) of values, where each value can be a vector of between 1 and 4 numbers. Textures also have some special techniques for accessing values from them that allow for interpolation and the minimizing of aliasing artifacts from sampling.
A texture can contain colors, but textures do not have to contain colors. Textures can be used to vary parameters across an object's surface, but that is not all textures can be used for.
Textures have no direct association with "materials"; you can use them for a wide variety of things (or nothing at all).
A material is a concept in lighting. Given a particular light and a point on the surface, the intensity (ie: color) of light reflected from that surface at that point is governed by a lighting equation. This equation is a function of many parameters. But those parameters are grouped into two categories.
One category of light equation parameters are the light parameters. These describe the characteristics of the light source. Point lights vs. directional lights vs. spot lights. The light intensity (again: color) is another parameter. Since the intensity itself may vary depending on the direction of the surface point relative to the light (think flashlights or spotlights), the intensity may be accessed from a texture. That's how many games project flashlights over a dark room.
The other category of light equation parameters describes the characteristics of the surface at that point. These are called material parameters. The material parameters, or material for short, describe important properties of the surface at the point in question. The normal at that point is an important one. There is also the diffuse reflectance (color), specular reflectance, specular shininess (exponent for Phong/Blinn-Phong) and various other parameters, depending on how comprehensive your lighting equation is.
Where do these values come from? Light parameters tend to be fixed in the world. Lights don't move per-object (though if you're doing lighting in object space, then each object would have its own light position). The light intensity may vary. But that's mostly it; anything else happens between frames, not within a single frame's rendering. So most light parameters are shader uniforms.
Material parameters can come from a variety of sources. Using a shader uniform effectively means that all points on the surface have the same value. So you could have the diffuse color come from a uniform, which would give the surface a uniform color (modified by lighting, of course). You can also vary material parameters per-vertex, by passing them as vertex attributes or computing them from other attributes.
Or you can provide a parameter by mapping a texture to a surface. This mapping involves associating texture coordinates with vertex positions, so that the texture is directly attached to the surface. The texture is sampled at that location during rendering, and that value is used to perform the lighting.
The most common textures you're familiar with, "color textures", are simply varying the diffuse reflectance of the surface. The texture provides the diffuse color at each point along the surface.
This is not the only function of textures. You could just as easily vary the specular shininess over the surface. Or the light intensity. Or anything else.
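For example, a fragment-shader sketch where some material parameters come from uniforms and others from textures (names are illustrative, and the lighting model is a basic Blinn-Phong):

    #version 330 core
    in vec3 vNormal;     // interpolated surface normal
    in vec3 vLightDir;   // direction to the light
    in vec3 vViewDir;    // direction to the viewer
    in vec2 vUV;         // texture coordinates mapped onto the surface

    uniform sampler2D uDiffuseMap;     // varies diffuse reflectance per-texel
    uniform sampler2D uShininessMap;   // varies specular shininess per-texel
    uniform vec3 uSpecularColor;       // constant over the surface (a uniform)
    uniform vec3 uLightColor;          // a light parameter, not a material one

    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 l = normalize(vLightDir);
        vec3 h = normalize(l + normalize(vViewDir));   // Blinn-Phong half vector

        vec3  diffuse   = texture(uDiffuseMap, vUV).rgb * max(dot(n, l), 0.0);
        float shininess = texture(uShininessMap, vUV).r * 128.0;
        vec3  specular  = uSpecularColor * pow(max(dot(n, h), 0.0), shininess);

        fragColor = vec4(uLightColor * (diffuse + specular), 1.0);
    }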
Textures are tools. Materials are just a group of light equation parameters.
What I think of with those terms:
A texture is an image that is mapped onto a 3D object.
A material simulates a physical material. Take "glass" for example. You couldn't produce a glass effect with a plain texture map. It has parameters like how it reflects and refracts light at different angles. A material could also be a simple texture map so sometimes the terms mean the same thing.
Although the terms can be used interchangeably, it's common to refer to a bitmap as a texture.
While a fully defined texture, with lighting properties, bump mapping etc, would more usually be referred to as a material.
But I should stress that the terminology depends on the tools being used, and each tool's terminology tends to be adopted by the community around it.