I'm working on a Minecraft-like engine as a hobby project to see how far the concept of voxel terrains can be pushed on modern hardware and OpenGL >= 3. So, all my geometry consists of quads, or squares to be precise.
I've built a raycaster to estimate ambient occlusion, and use the technique of "bent normals" to do the lighting. So my normals aren't perpendicular to the quad, nor do they have unit length; rather, they point roughly towards the space where least occlusion is happening, and are shorter when the quad receives less light. The advantage of this technique is that it just requires a one-time calculation of the occlusion, and is essentially free at render time.
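In the fragment shader, the lighting then essentially reduces to something like this (a minimal sketch; variable names are illustrative, not from my engine):

```glsl
#version 330 core

in vec3 vBentNormal;      // interpolated bent normal; shorter where more occluded
uniform vec3 uLightDir;   // assumed to be a normalized direction towards the light

out vec4 fragColor;

void main() {
    // dot() scales with the length of the bent normal, so the baked occlusion
    // darkens the quad with no extra work at render time.
    float diffuse = max(dot(vBentNormal, uLightDir), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}
```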
However, I run into trouble when I try to assign different normals to different vertices of the same quad in order to get smooth lighting. Because the quad is split up into triangles, and linear interpolation happens over each triangle, the result of the interpolation clearly shows the presence of the triangles as ugly diagonal artifacts:
The problem is that OpenGL uses barycentric interpolation over each triangle, which is a weighted sum over 3 out of the 4 corners. Ideally, I'd like to use bilinear interpolation, where all 4 corners are being used in computing the result.
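To make the contrast concrete (writing $v$ for the interpolated value, $\lambda_i$ for the barycentric weights inside one triangle, and $(s,t)$ for the position across the quad):

```latex
% Barycentric interpolation over one triangle uses only 3 of the 4 corners:
v(p) = \lambda_0 v_0 + \lambda_1 v_1 + \lambda_2 v_2,
\qquad \lambda_0 + \lambda_1 + \lambda_2 = 1

% Bilinear interpolation over the quad uses all 4 corners:
v(s,t) = (1-s)(1-t)\,v_{00} + s(1-t)\,v_{10} + (1-s)t\,v_{01} + s\,t\,v_{11}
```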
I can think of some workarounds:
Stuff the normals into a 2x2 RGB texture, and let the texture processor do the bilinear interpolation. This happens at the cost of a texture lookup in the fragment shader. I'd also need to pack all these mini-textures into larger ones for efficiency.
Use vertex attributes to attach all 4 normals to each vertex. Also attach some [0..1] coefficients to each vertex, much like texture coordinates, and do the bilinear interpolation in the fragment shader. This happens at the cost of passing 4 normals to the shader instead of just 1.
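For reference, a rough fragment-shader sketch of workaround 2 (attribute and uniform names are mine, purely illustrative):

```glsl
#version 330 core

in vec2 vCoeff;        // (s, t) in [0..1], passed per vertex like texture coordinates
in vec3 vNormal00;     // bent normal at corner (0,0)
in vec3 vNormal10;     // bent normal at corner (1,0)
in vec3 vNormal01;     // bent normal at corner (0,1)
in vec3 vNormal11;     // bent normal at corner (1,1)

uniform vec3 uLightDir;  // assumed normalized

out vec4 fragColor;

void main() {
    // The four normals are identical at every vertex of the quad, so their
    // interpolation is a no-op; only vCoeff actually varies across the face.
    vec3 bottom = mix(vNormal00, vNormal10, vCoeff.x);
    vec3 top    = mix(vNormal01, vNormal11, vCoeff.x);
    vec3 bent   = mix(bottom, top, vCoeff.y);   // bilinear blend; not renormalized,
                                                // since the length carries occlusion
    fragColor = vec4(vec3(max(dot(bent, uLightDir), 0.0)), 1.0);
}
```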
I think both these techniques can be made to work, but they strike me as kludges for something that should be much simpler. Maybe I could transform the normals somehow, so that OpenGL's interpolation would give a result that does not depend on the particular triangulation used.
(Note that the problem is not specific to normals; it is equally applicable to colours or any other value that needs to be smoothly interpolated across a quad.)
Any ideas how else to approach this problem? If not, which of the two techniques above would be best?
As you clearly understand, the triangle interpolation that GL does is not what you want.
So the normal data can't be coming directly from the vertex data.
I'm afraid the solutions you're envisioning are about the best you can achieve. And no matter which one you pick, you'll need to pass [0..1] coefficients down from the vertex to the fragment shader (the 2x2-texture approach included: you need them as texture coordinates).
There are some tricks you can do to somewhat simplify the process, though.
Using the vertex ID can help you out with finding which vertex "corner" to pass from vertex to fragment shader (our [0..1] values). A simple bit test on the lowest 2 bits can let you know which corner to pass down, without actual vertex data input. If packing texture data, you still need to pass an identifier inside the texture, so this may be moot.
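A sketch of that trick in the vertex shader (this assumes the four vertices of each quad are emitted consecutively in a known order, which is an assumption about your mesh layout):

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;

uniform mat4 uMVP;

out vec2 vCoeff;   // the [0..1] corner coefficients, derived without extra vertex data

void main() {
    int corner = gl_VertexID & 3;                   // lowest 2 bits pick the corner
    vCoeff = vec2(corner & 1, (corner >> 1) & 1);   // 0:(0,0) 1:(1,0) 2:(0,1) 3:(1,1)
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```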
If you use 2x2 textures to do the interpolation, there are (were?) some gotchas. Some texture interpolators don't necessarily give high-precision interpolation if the source data is low precision to begin with. This may require you to change the texture data type to something of higher precision to avoid banding artifacts.
Well... as you're using the bent-normals technique, the best way to improve the result is to pre-tessellate the mesh and re-compute the occlusion on the higher-tessellation mesh.
Another way would be some tricks within the pixel shader... for example, you can interpolate the texture on your own (instead of relying on the built-in interpolator) in the pixel shader, which could help you a lot. And you're not limited to bilinear interpolation; you could do better, e.g. bicubic interpolation ;)
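A sketch of that manual interpolation (the texture name and 2x2 layout are assumptions; swapping smoothstep or a bicubic kernel in for the plain mix is then straightforward):

```glsl
#version 330 core

in vec2 vCoeff;                 // [0..1] coefficients across the quad
uniform sampler2D uNormalTex;   // assumed to be the 2x2 normal texture for this quad
uniform vec3 uLightDir;         // assumed normalized

out vec4 fragColor;

void main() {
    // Fetch the four texels directly, bypassing the hardware filter entirely.
    vec3 n00 = texelFetch(uNormalTex, ivec2(0, 0), 0).xyz;
    vec3 n10 = texelFetch(uNormalTex, ivec2(1, 0), 0).xyz;
    vec3 n01 = texelFetch(uNormalTex, ivec2(0, 1), 0).xyz;
    vec3 n11 = texelFetch(uNormalTex, ivec2(1, 1), 0).xyz;

    // Plain bilinear blend; replace vCoeff with smoothstep(0.0, 1.0, vCoeff)
    // or a bicubic weighting for a higher-order falloff.
    vec3 bent = mix(mix(n00, n10, vCoeff.x), mix(n01, n11, vCoeff.x), vCoeff.y);
    fragColor = vec4(vec3(max(dot(bent, uLightDir), 0.0)), 1.0);
}
```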
Related
I have a question about the usage of multiple shadow maps in deferred shading. I have implemented a single shadow map in forward shading.
In forward rendering, in the vertex shader of each object I calculated its position in light space and compared it to the shadow map in the fragment shader. I can see that working with multiple maps, using an array of projection matrices and an array of shadow maps as uniforms.
In the case of deferred shading, I was wondering what the common practice is. The way I see it there are a few options:
In the deferred shading pass, for each pixel I calculate its position in each light space and compare it to the corresponding shadow map. (That way I do the calculation for each fragment and each matrix, which might be too expensive?)
In forward rendering I calculate the position of each vertex in each light projection and write a G-buffer output for each position. I then do the comparison in the deferred pass. (That way I compute the position only once per vertex instead of once per pixel, but I need a shadow map and a light-space position output for each shadow, which seems suboptimal.)
A bit like 2, but I do the shadow test in the forward pass. That way I can store, for each shadow, a boolean saying whether the fragment is lit or not, all packed into one integer texture. The problem is that I can't do soft shadows that way.
Maybe something better?
To summarise: 1 needs many matrix multiplications but is easy to implement; 2 needs few matrix multiplications but many textures and outputs (which are limited by the graphics card); and 3 needs few outputs and few calculations per pixel, but I can't get soft shadows because the result is an array of booleans.
I am not doing it really for better performance but mostly to learn new things. I'm open to suggestions. Maybe I'm misunderstanding something. Is there a standard way to do it?
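For what it's worth, a rough sketch of option 1 (per-fragment light-space transforms in the deferred pass); the G-buffer layout, the use of a texture array for the shadow maps, and all names are assumptions for illustration:

```glsl
#version 330 core

#define NUM_SHADOWS 4

in vec2 vUV;

uniform sampler2D uGPosition;              // world-space position stored in the G-buffer
uniform sampler2DArray uShadowMaps;        // one depth layer per light
uniform mat4 uLightViewProj[NUM_SHADOWS];  // one projection * view matrix per light

out vec4 fragColor;

void main() {
    vec3 worldPos = texture(uGPosition, vUV).xyz;
    float lit = 0.0;
    for (int i = 0; i < NUM_SHADOWS; ++i) {
        vec4 ls = uLightViewProj[i] * vec4(worldPos, 1.0);
        vec3 proj = ls.xyz / ls.w * 0.5 + 0.5;                          // NDC -> [0,1]
        float closest = texture(uShadowMaps, vec3(proj.xy, float(i))).r;
        lit += (proj.z - 0.005 > closest) ? 0.0 : 1.0;                  // small bias against acne
    }
    fragColor = vec4(vec3(lit / float(NUM_SHADOWS)), 1.0);
}
```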
I'm making a voxel engine that divides the world into chunks of 32 x 32 x 32 blocks each. I've been implementing many rendering optimizations, such as not rendering faces covered by solid voxels, or face merging for same-looking neighbouring voxels.
The thing is that I want to implement per-vertex ambient occlusion, calculating the occlusion on the CPU and passing it to the shaders, just like in that image. It works almost fine, but the face merging produces bigger gradients on larger faces because of the linear interpolation OpenGL applies to the final fragment color.
I would like to know if there is a way to fix that problem, for example by telling the fragment shader how far a fragment is from each vertex, or something like that.
It would be better to ditch your "face merging" system altogether, though not just because it would let you do this occlusion stuff more effectively.
One problem with your face-merging system is that renderers guarantee that the edge between two triangles will not have gaps only if the shared positions are binary-identical values. Your face-merging system produces many triangles that only partially share edges (T-junctions). GPUs don't guarantee anything about edges in those cases, which can lead to visible cracks between them.
A cube with different colored faces in immediate mode is very simple. But doing the same thing with shaders seems to be quite a challenge.
I have read that in order to create a cube with different coloured faces, I should create 24 vertices instead of 8 vertices for the cube - in other words, I visualise this as 6 squares that don't quite touch.
Is another (perhaps better?) solution to texture the faces of the cube using a really simple texture of a flat color - perhaps a 1x1 pixel texture?
My texturing idea seems simpler to me from a coder's point of view, but which method would be the most efficient from a GPU/graphics card perspective?
I'm not sure what your overall goal is (e.g. what you're learning to do in the long term), but generally for high-performance applications (e.g. games) your goal is to reduce GPU load. Every time you switch certain states (e.g. change textures, render targets, shader uniform values, etc.) the GPU stalls while reconfiguring itself to meet your demands.
So, you can pass in a 1x1 pixel texture for each face, but then you'd need six draw calls (usually not so bad, but there is some prep work and potential cache misses) and six texture sets (can be very bad, often as bad as changing shader uniform values).
Suppose you wanted to pass in one texture and use that as a texture map for the cube. This is a little less trivial than it sounds -- you need to express each cube face on the texture in a way that maps to the vertices. Often you need to pass in a texture coordinate for each vertex, and due to the spatial configuration of the texture this normally doesn't end up meaning one texture coordinate per spatial vertex.
However, if you use an environmental/reflection map, the complexities of mapping are handled for you. In this way, you could draw a single texture on all sides of your cube. (Or on your sphere, or whatever sphere-mapped shape you wanted.) I'm not sure I'd call this easier, since you have to form the environmental texture carefully, and you still have to set a different texture for each new color you want to represent -- or change the texture either via the GPU or in step with the GPU, and that's tricky and usually not performant.
Which brings us back to the canonical way of doing it, as you mentioned: use vertex values -- they're fast, you can draw many, many cubes very quickly by only specifying different vertex data, and it's easy to understand. It really is the best way, and it's how GPUs are designed to run quickly.
Additionally: yes, you can do this with just shaders, but it'd be ugly and slow, and the GPU would end up computing it per pixel. Pass the object-space coordinates to the fragment shader, and in the fragment shader test which side you're on and output the corresponding color. Highly not recommended -- it's not particularly easier, and it's definitely not faster for the GPU, since to change colors you'd again end up changing uniform values for the shaders.
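For completeness, a sketch of that shader-only approach (GLSL 1.20 style; the varying name and the color choices are made up), shown only to illustrate why it isn't attractive:

```glsl
varying vec3 vObjPos;   // object-space position forwarded by the vertex shader

void main() {
    vec3 a = abs(vObjPos);
    vec3 color;
    // Pick the dominant axis of a cube centered at the origin and choose a
    // hard-coded color per face.
    if (a.x >= a.y && a.x >= a.z)
        color = (vObjPos.x > 0.0) ? vec3(1.0, 0.0, 0.0) : vec3(0.0, 1.0, 0.0);
    else if (a.y >= a.z)
        color = (vObjPos.y > 0.0) ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 1.0, 0.0);
    else
        color = (vObjPos.z > 0.0) ? vec3(1.0, 0.0, 1.0) : vec3(0.0, 1.0, 1.0);
    gl_FragColor = vec4(color, 1.0);
}
```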
I can't seem to understand the OpenGL pipeline process from a vertex to a pixel.
Can anyone tell me how important vertex normals are in these two shading techniques? As far as I know, in Gouraud shading, lighting is calculated at each vertex and the resulting color is then interpolated across the polygon between vertices (is this done in fragment operations, before rasterizing?), while Phong shading consists of first interpolating the vertex normals and then calculating the illumination with each of these interpolated normals.
Another thing is when bump mapping is applied to, let's say, a plane (2 triangles) with a brick texture as the diffuse map and its respective bump map, all of this with Gouraud shading.
Bump mapping consists of altering the normals by a gradient depending on a bump map. But which normals does it alter, and when (in the fragment shader?), if there are only 4 normals (4 vertices = plane) and all 4 are the same? In Gouraud shading you interpolate the color of each vertex after the illumination calculation, but that calculation is done after altering the normals.
How does the lighting work?
Vertex normals are absolutely essential for both Gouraud and Phong shading.
In Gouraud shading the lighting is calculated per vertex and then interpolated across the triangle.
In Phong shading the normal is interpolated across the triangle and then the calculation is done per-pixel/fragment.
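A minimal Gouraud pair to make that concrete (GLSL 1.20 style; uLightDir is assumed to be a normalized direction in eye space). Phong shading just moves the dot product into the fragment shader and interpolates the normal instead of the color:

```glsl
// --- vertex shader ---
varying vec3 vColor;
uniform vec3 uLightDir;
void main() {
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vColor = vec3(max(dot(n, uLightDir), 0.0));   // lighting happens here, once per vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader ---
varying vec3 vColor;
void main() {
    gl_FragColor = vec4(vColor, 1.0);             // only the color is interpolated
}
```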
Bump-mapping refers to a range of different technologies. When doing normal mapping (probably the most common variety these days) the normal, bi-tangent (often erroneously called bi-normal) and tangent are calculated per-vertex to build a basis matrix. This basis matrix is then interpolated across the triangle. The normal retrieved from the normal map is then transformed by this basis matrix, and the lighting is performed per pixel.
There are extensions to the normal mapping technique above that allow bumps to hide other bumps behind them. This is, usually, performed by storing a height map along with the normal map and then ray marching through the height map to find parts that are being obscured. This technique is called Relief Mapping.
There are other, older forms such as DUDV bump mapping (which was implemented in DirectX 6 as Environment-Mapped Bump Mapping, or EMBM).
You also have emboss bump mapping, which was a really early way of doing bump mapping.
Edit: In answer to your comment, emboss bump mapping CAN be performed on Gouraud-shaded triangles. Other forms of bump-mapping are necessarily per-pixel (due to the fact that they work by modifying the surface normals on a per-pixel, or at least per-texel, basis). I wouldn't be surprised if there were other methods that can be performed with per-vertex lighting, but I can't think of any off the top of my head. The results will look pretty rubbish compared to doing it on a per-pixel basis, though.
Re: Tangents and Bi-Tangents are actually quite simple once you get your head round them (took me years though, tbh ;)). Any 3D coordinate frame can be defined by a set of vectors that form an orthogonal basis matrix. By setting up the normal, tangent and bi-tangent per vertex you are merely setting up the coordinate frame at each vertex. From this you have the ability to transform a world or object space vector into the triangle's own coordinate frame. From here you can simply translate a light vector (or position) into the coordinate frame of a given pixel on the surface of the triangle. This then means that the normals in the normal map don't need to be stored in the object's space and hence as those triangles move around (when being animated, for example) the normals are already being handled in their own local space.
Normal mapping, one of the techniques to simulate bumped surfaces, basically perturbs the per-pixel normals before you compute the light equation for that pixel.
For example, one way to implement it requires you to interpolate the surface normal and binormal (2 of the tangent-space basis vectors) and compute the third per pixel (2+1 vectors, which form the tangent basis). This technique also requires interpolating the light vector. With those 3 (2+1 computed) vectors (the tangent-space basis) you have a way to change the light vector from object space into tangent space, because these 3 vectors can be arranged as a 3x3 matrix which can be used to change the basis of your light direction vector.
Then it is simply a matter of using that tangent-space light vector to compute the light equation per pixel, which in its most basic form is a dot product between the tangent-space light vector and the normal fetched from the normal map (your bump texture).
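A sketch of that per-pixel step (GLSL 1.20 style; it assumes the tangent-space light vector was computed in the vertex shader and that the normal map stores tangent-space normals remapped to [0,1]):

```glsl
varying vec3 vLightDirTS;        // light direction, already transformed into tangent space
varying vec2 vUV;
uniform sampler2D uNormalMap;

void main() {
    vec3 n = texture2D(uNormalMap, vUV).xyz * 2.0 - 1.0;   // unpack from [0,1] to [-1,1]
    float diffuse = max(dot(normalize(n), normalize(vLightDirTS)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```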
This is how a normal map looks (the normal components are stored in the channels of the texture and are already in tangent space):
This is one way to do it; you can also compute things in view space, but the above is easier to understand.
Old bump mapping was way simpler and was also kind of a fake effect.
All bump-mapping techniques operate at the pixel level, as they perturb, in one way or another, how the surface is rendered. Even the old emboss bump mapping did some computation per pixel.
EDIT: I added a few more clarifications; when I have some spare minutes I will try to add some math and examples, although there are great resources out there that explain this in great detail.
First of all, you don't need to understand the whole graphics pipeline to write a simple shader :). But, of course, you should know what's going on. You could read the graphics pipeline chapter in Real-Time Rendering, 3rd Edition (Akenine-Möller, Haines, Hoffman). What you describe is per-vertex and per-fragment lighting. For both calculations the vertex normals are part of the equation. For the bump-mapping shader you alter the interpolated normals. So after rasterization you have fragments where the missing data has to be calculated to determine the final pixel color.
In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things.
The result of not normalizing the normals before using them in your fragment shader.
An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem would be unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length -- so you need to renormalize them in the fragment shader before you calculate your fragment lighting.
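The fix, sketched in GLSL 1.20 since the question targets OpenGL 2.1 (names are illustrative):

```glsl
varying vec3 vNormal;      // interpolated from the vertex shader, no longer unit length
uniform vec3 uLightDir;    // assumed normalized

void main() {
    vec3 n = normalize(vNormal);                    // restore unit length per fragment
    float diffuse = max(dot(n, uLightDir), 0.0);    // Lambert term
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```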