Using GLSL shaders + lighting / normals - OpenGL

I've got this not-so-small-anymore tile-based game, which is my first real OpenGL project. I want to render every tile as a 3D object. So at first I created some objects, like a cube and a sphere, gave them vertex normals, and rendered them in immediate mode with flat shading. But since I've got around 10,000 objects per level, that was a bit slow. So I put my vertices and normals into VBOs.
That's where I encountered the first problem: before using VBOs I just pushed and popped matrices for every object and used glTranslate/glRotate to place them in my scene. But when I did the same with VBOs, the lighting started to behave strangely. Instead of staying at a fixed position behind the camera, the light seemed to rotate with my objects; when I moved 180 degrees around them, I could see only their shadowed side.
So I did some research. I could not find an answer to my specific problem, but I read that instead of using glTranslate/glRotate one should implement shaders and provide them with uniform matrices.
I thought that might fix my problem too, so I implemented a first small vertex shader that only stretches my objects a bit, just to see whether I could get a shader working before focusing on the details:
void main(void)
{
    vec4 v = gl_Vertex;
    v.x = v.x * 0.5;
    v.y = v.y * 0.5;
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Well, my objects get stretched - but now OpenGL's flat shading is broken: I just get white shades. And I can't find any helpful information. So I have a few questions:
Can I only use one shader at a time, and does using my own shader turn off OpenGL's flat shading? Do I have to implement flat shading myself?
What about my vertex normals? I read somewhere that there is something called a normal matrix. Do I have to apply operations to my normals as well when I modify the vertices?

That your lighting gets messed up by matrix operations means that your calls to glLightfv(..., GL_POSITION, ...) happen in the wrong context (not the OpenGL context, but the current matrix state): the light position you pass is transformed by the modelview matrix that is current at the time of the call, so if you set it while a per-object transform is active, the light follows the object.
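As a minimal sketch of the fix (fixed-function API; applyCameraTransform and drawObject are hypothetical stand-ins for your own code): set the light position once per frame, right after the view transform is loaded and before any per-object glTranslate/glRotate calls.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyCameraTransform();                      /* hypothetical: gluLookAt etc. */

/* GL_POSITION is transformed by the modelview matrix current right now,
   so the light stays fixed relative to the scene, not to any object. */
GLfloat lightPos[] = { 0.0f, 5.0f, 10.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

glPushMatrix();                              /* per-object transforms follow */
glTranslatef(objX, objY, objZ);
drawObject();                                /* hypothetical draw call */
glPopMatrix();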
Well, my objects get stretched - but now OpenGL's flat shading is broken: I just get white shades.
I think you mean Gouraud shading (flat shading means something different). The thing is: if you're using a vertex shader, you must do everything the fixed-function pipeline did yourself, and that includes the lighting calculation. Lighthouse3D has a nice tutorial (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), as does Nicol Bolas (http://arcsynthesis.org/gltut/Illumination/Illumination.html).
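For illustration, here is a minimal sketch of per-vertex (Gouraud) diffuse lighting in old-style GLSL 1.20, using the fixed-function built-ins that match the shader above; it also answers the normal-matrix question, since gl_NormalMatrix is exactly that matrix. With no fragment shader bound, the fixed-function fragment stage consumes gl_FrontColor.
void main(void)
{
    // Transform the normal with the normal matrix (the inverse-transpose
    // of the modelview matrix's upper-left 3x3), then renormalize.
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);

    // Light direction in eye space (a directional light is assumed here).
    vec3 l = normalize(gl_LightSource[0].position.xyz);

    float NdotL = max(dot(n, l), 0.0);
    gl_FrontColor = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * NdotL;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}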

Related

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bézier curves (3D coordinates of points). The drawing itself works as I want, except that the occlusion does not work correctly: under certain viewpoints, the curve that is supposed to be in the very front appears to be occluded, and the reverse: a part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Screenshot 1: the colored curve is closer to the camera, so it is rendered correctly here.
Screenshot 2: the colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry into the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I was not handling the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth-buffer set-up, OpenGL handles the occlusions correctly by itself, so I did not need to reinvent the wheel.
I searched through my GLSL program and found that I was essentially setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z/vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
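In context, the fix looks roughly like this inside the geometry shader (a sketch following the question's own names; vertex is the clip-space position and Viewport the scaling factor from the question):
// Before: the depth was silently dropped (z defaulted to 0).
vec2 screenCoord = vec2(vertex.xy / vertex.w) * Viewport;

// After: carry the normalized depth through to the emitted vertex.
float zValue = vertex.z / vertex.w;
gl_Position = vec4(screenCoord, zValue, 1.0);
EmitVertex();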
Regarding the depth buffer settings, I didn't have to explicitly specify them since the library I use set them up by default correctly as I need.
If you don't use the depth buffer, then the most recently rendered object will always be on top.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
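Put together, that setup looks like this in host code (a minimal sketch):
glEnable(GL_DEPTH_TEST);    /* once, at initialization */
glDepthFunc(GL_LEQUAL);     /* or whichever test you prefer */

/* every frame, before drawing: */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);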
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

Weird noise on rendered objects - OpenGL

To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometry is rendered with forward rendering, blending one pass per light I add.
My first guess was to install the newest graphics driver (I'm using a GTX 660M), but that didn't solve it. Could VSync be an issue here? (I'm rendering in a window rather than in full-screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
/* loop here: draw once for each light I had */
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
    gl_Position = projectionMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
And we can work from there.
This answer is slightly speculative, but based on the symptoms and the code you posted, I suspect a precision problem. The rendering code you linked looks like this in shortened form:
useShader(FWD_AMBIENT);
part.render();
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (Light light : lights) {
    useShader(light.getShaderType());
    part.render();
}
So you're rendering the same thing multiple times with different shaders, relying on the resulting pixels to end up with the same depth value (the depth comparison function is GL_EQUAL). This is not a safe assumption. Quote from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.
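For illustration, a sketch of how this looks in each of the vertex shaders used for the passes (the uniform and attribute names are placeholders):
// In *every* vertex shader of the multi-pass pipeline:
invariant gl_Position;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;

void main()
{
    // Same expression and same inputs in every pass; 'invariant'
    // guarantees the compiled results match exactly.
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}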

glsl effect on low poly surface

I've got a vertex/fragment shader with a point light and attenuation. I need to apply this shader to a cube face and see a gradation of colours. With a high-poly mesh everything works quite well and the effect is nice; my goal is to get the same gradient on a low-poly mesh.
I tried gl_FragColor = vec4(n, 1) with n = normal, but I get a solid colour per surface.
Can this be the reason why I don't see a gradation?
Cheers
What you are observing is correct behaviour. A cube face is perfectly flat, so the normals at each of its vertices are the same.
Note, however, that the Phong lighting calculation should also use the fragment's position, which is interpolated between the 3 (or 4, when using quads) vertices of the given (sub)face. From it you can compute the light and eye vectors at each fragment, and those vary across a flat face even when the normal does not; see the sketch below.
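A minimal per-fragment sketch of that idea (GLSL 1.20 style; the varying and uniform names are placeholders, and the attenuation coefficients are made up):
varying vec3 normal0;      // eye-space normal (constant across a cube face)
varying vec3 eyePos0;      // eye-space fragment position (varies per fragment)

uniform vec3 lightPosEye;  // point-light position in eye space
uniform vec4 lightDiffuse;

void main()
{
    vec3 n = normalize(normal0);
    vec3 toLight = lightPosEye - eyePos0;  // differs for every fragment
    float dist = length(toLight);
    vec3 l = toLight / dist;

    float atten = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist * dist);
    float NdotL = max(dot(n, l), 0.0);

    // Even with a constant n, l and dist vary across the face,
    // so a gradient appears even on a low-poly surface.
    gl_FragColor = lightDiffuse * NdotL * atten;
}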
I've experienced similar problems lately, and I figured out that your cube really needs to shine if you want to see something non-flat; and I mean literally. Set the shininess to a reasonably high value (250-500). You should see a focused, moving point of light on the face that is reflecting directly at you. If not, your lighting shader is probably wrong.

OpenGL/GLSL varying vectors: How to avoid starburst around vertices?

In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things:
1. The result of not normalizing the normals before using them in your fragment shader.
2. An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem is unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before you calculate your fragment lighting.
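A short illustration of the fix (the varying and uniform names are assumptions):
varying vec3 normal0;       // interpolated, no longer unit length
uniform vec3 lightDirEye;   // assumed: light direction in eye space

void main()
{
    vec3 n = normalize(normal0);  // renormalize per fragment before lighting
    float diffuse = max(dot(n, normalize(lightDirEye)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}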

Is it possible to thicken a quadratic Bézier curve using the GPU only?

I draw lots of quadratic Bézier curves in my OpenGL program. Right now, the curves are one-pixel thin and software-generated, because I'm at a rather early stage, and it is enough to see what works.
Simply enough, given 3 control points (P0 to P2), I evaluate the following equation with t varying from 0 to 1 (with steps of 1/8) in software and use GL_LINE_STRIP to link them together:
B(t) = (1 − t)²·P0 + 2(1 − t)t·P1 + t²·P2
Where B, obviously enough, results in a 2-dimensional vector.
This approach worked 'well enough', since even my largest curves don't need much more than 8 steps to look curved. Still, one pixel thin curves are ugly.
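For reference, the software evaluation described above looks roughly like this (a sketch; the type and function names are mine):
#include <GL/gl.h>

typedef struct { float x, y; } Vec2;

/* Quadratic Bézier point for parameter t in [0, 1]. */
Vec2 bezier2(Vec2 p0, Vec2 p1, Vec2 p2, float t)
{
    float u = 1.0f - t;
    Vec2 b;
    b.x = u * u * p0.x + 2.0f * u * t * p1.x + t * t * p2.x;
    b.y = u * u * p0.y + 2.0f * u * t * p1.y + t * t * p2.y;
    return b;
}

/* 9 samples at t = 0, 1/8, ..., 1, linked with GL_LINE_STRIP. */
void drawCurve(Vec2 p0, Vec2 p1, Vec2 p2)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= 8; i++) {
        Vec2 b = bezier2(p0, p1, p2, (float)i / 8.0f);
        glVertex2f(b.x, b.y);
    }
    glEnd();
}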
I wanted to write a GLSL shader that would accept control points and a uniform thickness variable to, well, make the curves thicker. At first I thought about writing only a fragment shader that colors the pixels within a distance of thickness / 2 from the curve, but doing so requires solving a third-degree polynomial, and choosing between three solutions inside a shader doesn't look like the best idea ever.
I then tried to look up whether other people had already done it. I stumbled upon a white paper by Loop and Blinn from Microsoft Research where they show an easy way of filling the area under a curve. While it works well to that extent, I'm having trouble adapting the idea to drawing between two bounding curves.
Finding bounding curves that match a single curve is rather easy with a geometry shader. The problems come with the fragment shader that should fill the whole thing. Their approach uses the interpolated texture coordinates to determine whether a fragment falls over or under the curve, but I couldn't figure out a way to do it with two curves (I'm pretty new to shaders and not a maths expert, so the fact that I didn't figure it out certainly doesn't mean it's impossible).
My next idea was to separate the filled curve into triangles and use the Bézier fragment shader only on the outer parts. But for that I would need to split the inner and outer curves at variable spots, which again means solving the equation, so that isn't really an option either.
Are there viable algorithms for stroking quadratic Bézier curves with a shader?
This partly continues my previous answer, but is actually quite different since I got a couple of central things wrong in that answer.
To allow the fragment shader to only shade between two curves, two sets of "texture" coordinates are supplied as varying variables, to which the technique of Loop-Blinn is applied.
varying vec2 texCoord1, texCoord2;
varying float insideOutside;
varying vec4 col;

void main()
{
    float f1 = texCoord1[0] * texCoord1[0] - texCoord1[1];
    float f2 = texCoord2[0] * texCoord2[0] - texCoord2[1];
    float alpha = (sign(insideOutside * f1) + 1.0) * (sign(-insideOutside * f2) + 1.0) * 0.25;
    gl_FragColor = vec4(col.rgb, col.a * alpha);
}
So far, easy. The hard part is setting up the texture coordinates in the geometry shader. Loop-Blinn specifies them for the three vertices of the control triangle, and they are interpolated appropriately across the triangle. But here we need the same interpolated values to be available while actually rendering a different triangle.
The solution to this is to find the linear function mapping from (x,y) coordinates to the interpolated/extrapolated values. Then, these values can be set for each vertex while rendering a triangle. Here's the key part of my code for this part.
vec2[3] tex = vec2[3]( vec2(0.0, 0.0), vec2(0.5, 0.0), vec2(1.0, 1.0) );

mat3 uvmat;
uvmat[0] = vec3(pos2[0].x, pos2[1].x, pos2[2].x);
uvmat[1] = vec3(pos2[0].y, pos2[1].y, pos2[2].y);
uvmat[2] = vec3(1.0, 1.0, 1.0);
mat3 uvInv = inverse(transpose(uvmat));

vec3 uCoeffs = vec3(tex[0][0], tex[1][0], tex[2][0]) * uvInv;
vec3 vCoeffs = vec3(tex[0][1], tex[1][1], tex[2][1]) * uvInv;

float[3] uOther, vOther;
for (int i = 0; i < 3; i++) {
    uOther[i] = dot(uCoeffs, vec3(pos1[i].xy, 1.0));
    vOther[i] = dot(vCoeffs, vec3(pos1[i].xy, 1.0));
}

insideOutside = 1.0;
for (int i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_ModelViewProjectionMatrix * pos1[i];
    texCoord1 = tex[i];
    texCoord2 = vec2(uOther[i], vOther[i]);
    EmitVertex();
}
EndPrimitive();
Here pos1 and pos2 contain the coordinates of the two control triangles. This part renders the triangle defined by pos1, but with texCoord2 set to the translated values from the pos2 triangle. Then the pos2 triangle needs to be rendered similarly, and finally the gap between the two triangles at each end needs to be filled, with both sets of coordinates translated appropriately.
The calculation of the matrix inverse requires either GLSL 1.50 or it needs to be coded manually. It would be better to solve the equation for the translation without calculating the inverse. Either way, I don't expect this part to be particularly fast in the geometry shader.
You should be able to use the technique of Loop and Blinn from the paper you mentioned.
Basically you'll need to offset each control point in the normal direction, both ways, to get the control points for two curves (inner and outer). Then follow the technique in Section 3.1 of Loop and Blinn - this breaks up sections of the curve to avoid triangle overlaps, and then triangulates the main part of the interior (note that this part requires the CPU). Finally, these triangles are filled, and the small curved parts outside of them are rendered on the GPU using Loop and Blinn's technique (at the start and end of Section 3).
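As a rough illustration of the offsetting step (an approximation only; exact offset curves of a quadratic Bézier are not themselves quadratic Béziers, and the function and variable names here are mine):
vec2 edgeNormal(vec2 d) { return normalize(vec2(-d.y, d.x)); }

// Offsets the control points P0..P2 by +/- halfWidth along blended
// edge normals, yielding approximate outer (Q) and inner (R) curves.
void offsetControls(vec2 P0, vec2 P1, vec2 P2, float halfWidth,
                    out vec2 Q[3], out vec2 R[3])
{
    vec2 n0 = edgeNormal(P1 - P0);      // normal at the start
    vec2 n2 = edgeNormal(P2 - P1);      // normal at the end
    vec2 n1 = normalize(n0 + n2);       // blended normal for the middle point

    Q[0] = P0 + halfWidth * n0;  R[0] = P0 - halfWidth * n0;
    Q[1] = P1 + halfWidth * n1;  R[1] = P1 - halfWidth * n1;
    Q[2] = P2 + halfWidth * n2;  R[2] = P2 - halfWidth * n2;
}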
An alternative technique that may work for you is described here:
Thick Bezier Curves in OpenGL
EDIT:
Ah, you want to avoid even the CPU triangulation - I should have read more closely.
One issue you have is the interface between the geometry shader and the fragment shader - the geometry shader will need to generate primitives (most likely triangles) that are then individually rasterized and filled via the fragment program.
In your case, with constant thickness, I think quite a simple triangulation will work, using Loop and Blinn for all the "curved bits". When the two control triangles don't intersect it's easy. When they do, the part outside the intersection is easy. So the only hard part is within the intersection (which should be a triangle).
Within the intersection you want to shade a pixel only if both control triangles lead to it being shaded via Loop and Blinn. So the fragment shader needs to be able to do texture lookups for both triangles. One can be as standard, and you'll need to add a vec2 varying variable for the second set of texture coordinates, which you'll need to set appropriately for each vertex of the triangle. You'll also need a uniform sampler2D variable for the texture, which you can then sample via texture2D. Then you just shade the fragments that satisfy the checks for both control triangles (within the intersection).
I think this works in every case, but it's possible I've missed something.
I don't know exactly how to solve this, but it's very interesting. I think you need every different processing unit in the GPU:
Vertex shader
Throw a normal line of points at your vertex shader, and let the vertex shader displace the points onto the Bézier.
Geometry shader
Let your geometry shader create an extra point per vertex.
foreach (point p in bezierCurve)
    new point(p + (0, thickness, 0))  // in tangent with p1-p2
Fragment shader
To stroke your Bézier with a patterned stroke, you can use a texture with an alpha channel: check the alpha value and, if it's zero, clip (discard) the pixel. This way you can still make the system treat it as a solid line instead of a half-transparent one, and you can apply arbitrary patterns via the alpha channel.
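A sketch of that alpha-clip idea in a fragment shader (the varying, uniform, and texture names are placeholders):
varying vec2 strokeCoord;         // texture coordinate along/across the stroke
uniform sampler2D strokeTexture;  // alpha channel encodes the stroke pattern

void main()
{
    vec4 texel = texture2D(strokeTexture, strokeCoord);
    if (texel.a == 0.0)
        discard;                  // clip the pixel: no color, no depth write
    gl_FragColor = texel;
}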
I hope this helps you on your way. You will have to figure out a lot yourself, but I think the geometry shader will speed your Bézier up.
Still, for the stroking I would stick with my choice of a GL_QUAD_STRIP and an alpha-channel texture.