GLSL effect on low poly surface

I've got a vertex/fragment shader with a point light and attenuation. I need to apply this shader to a cube face and see a gradation of colours across it. With a high-poly mesh everything works quite well and the effect is nice; my goal is to get the same gradient on a low-poly mesh.
I tried outputting the normal directly (gl_FragColor = vec4(n, 1.0), where n is the normal), but I get a solid colour per face.
Could this be the reason why I don't see a gradation?
Cheers

It is correct behaviour that you are observing. A cube's faces are perfectly flat, thus the normals at each vertex of a face are all the same.
Note, however, that the Phong lighting calculation should also use the position of the fragment, which is interpolated between the 3 (or 4, when using quads) vertices of the given (sub)face. It can be used to calculate the angle between the light direction and the eye vector at the given fragment's position.
I've experienced similar problems lately, and I figured out that your cube really needs to shine if you want to see something non-flat; and I mean that literally. Set the shininess to a reasonably high value (250-500). You should see a focused, moving point of light on the face that reflects directly towards you. If not, your lighting shader is probably wrong.
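For illustration, a minimal per-fragment sketch of both points (all names here are placeholders, not the asker's actual code). Even though the normal is constant across a flat face, the interpolated fragment position makes the light direction, attenuation and specular term vary per fragment:

varying vec3 fragPos;   // interpolated world-space position from the vertex shader
varying vec3 normal;    // identical at all vertices of a flat face
uniform vec3 lightPos;  // point light position
uniform vec3 eyePos;    // camera position

void main()
{
    vec3 n = normalize(normal);
    vec3 toLight = lightPos - fragPos;              // varies across the face
    float dist = length(toLight);
    vec3 l = toLight / dist;
    float atten = 1.0 / (1.0 + 0.09 * dist + 0.032 * dist * dist);  // arbitrary constants
    float diff = max(dot(n, l), 0.0);
    vec3 h = normalize(l + normalize(eyePos - fragPos));            // half vector
    float spec = pow(max(dot(n, h), 0.0), 300.0);   // high shininess, as suggested above
    gl_FragColor = vec4(vec3((diff + spec) * atten), 1.0);
}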

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u' = 4/pi * atan(u)
v' = 4/pi * atan(v)
Let's first recognize that the term is misleading: even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact, no 2D projection of any part of a sphere can be truly equiangular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle a little and expands them near the corners, so that they cover more nearly equal areas.
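For context, here is how that correction might slot into the question's original fragment shader (same in/out names; a sketch, not a tested drop-in):

#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;

void main(void) {
    vec3 d = cube_edge;
    d /= max(abs(d.x), max(abs(d.y), abs(d.z)));  // project onto the unit cube
    d = atan(d) / atan(1.0);                      // EAC correction: 4/pi * atan
    color = texture(skybox_sampler, d).rgb;
}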
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment, but at the inside of a small cube in which you are sitting. The environment map should behave as if you were always at the center of the map and the environment were infinitely far away.

I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position is 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)

It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
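Put together, a self-contained version of that vertex shader might look like this (the attribute and uniform names are assumptions):

#version 400 core
layout(location = 0) in vec3 inVertex;
uniform mat4 projection;
uniform mat4 view;          // ideally with the camera translation removed
out vec3 cube_edge;

void main(void)
{
    cube_edge = inVertex;   // direction vector for the samplerCube lookup
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;  // forces z == w, i.e. depth 1.0, the far plane
}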
The solution is also explained in detail at LearnOpenGL - Cubemap.

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except that occlusion does not work correctly: from certain viewpoints, the curve that is supposed to be in the very front appears to be occluded, and the reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are couple examples of screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assume it is occlusion culling (update: it actually indicates a problem with the depth buffer; a terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass further the color info to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding the culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
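In the geometry shader the fix looks roughly like this (a sketch reconstructed from the snippets above, not my full shader):

// inside the emit loop; 'vertex' is the clip-space position of the point
// being emitted, and 'Viewport' my screen scaling uniform
vec2 screenCoord = vec2(vertex.xy / vertex.w) * Viewport;
float zValue = vertex.z / vertex.w;            // previously I left this at 0.0
gl_Position = vec4(screenCoord, zValue, 1.0);  // emit with a real depth
EmitVertex();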
Regarding the depth buffer settings, I didn't have to explicitly specify them since the library I use set them up by default correctly as I need.
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't get into trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning cull-face and camera position, which are precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have 3 advantages:
No cull-face issue
No camera-position-in-light-sphere issue
Much more efficient (vertex count severely reduced + no stencil test)
Are there any disadvantages to using this technique?
My second question deals with implementing the mentioned method. The circle's center position can be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now, how do I calculate the scaling of the resulting circle?
It should depend on the distance (camera to light) and somehow on the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle is behind the near plane; in this case none of the fragments will be generated, and the light will "disappear".
The lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would certainly just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulations of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here.
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length. It must be a point on the border in screen space, obviously.
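As a sketch (using the question's matrix name; cameraRight is an assumed uniform holding the camera's world-space right axis), that calculation could look like:

uniform mat4 modelViewProjectionMatrix;
uniform vec3 cameraRight;   // world-space right axis of the camera (assumed)

float circleScale(vec3 lightCenter, float lightRadius)
{
    vec4 c = modelViewProjectionMatrix * vec4(lightCenter, 1.0);
    vec4 e = modelViewProjectionMatrix * vec4(lightCenter + cameraRight * lightRadius, 1.0);
    vec2 cNdc = c.xy / c.w;   // perspective division for both points
    vec2 eNdc = e.xy / e.w;
    return length(eNdc - cNdc);   // radius of the projected circle in NDC units
}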

Low polygon cone - smooth shading at the tip

If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. Here is a detail of what choosing the average normal for each tip vertex looks like.
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling a bit with cones made up of triangles in modern OpenGL (i.e. with shaders), but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders as it was used for calculation dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I was only transforming the vertex positions (multiplying by the projection and model-view matrices) and also transforming the normal vectors (multiplying by the transpose of the inverse of the model-view matrix). Then the transformed positions, normal vectors and untransformed colors were passed to the fragment shader, where I calculated the dot product of the light direction and the normal vector and multiplied this number with the color.
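In GLSL that vertex shader looks roughly like this (a sketch with assumed names and locations, not my exact code):

#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal_vector;
layout(location = 2) in vec3 color;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;   // transpose(inverse(mat3(modelViewMatrix)))

out vec3 interpolatedNormal;
out vec3 interpolatedColor;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    interpolatedNormal = normalMatrix * normal_vector;  // not normalized here; see attempt #3
    interpolatedColor = color;
}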
Let me start with what I did and found unsatisfactory:
Attempt #1: Each cone face (triangle) used a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting; each face had a constant color because all fragments of the triangle had the same normal vector. Wrong.
Attempt #2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt #1). Well, this was better because I had smooth lighting in the part closer to the base of the cone, but not smooth near the tip. Wrong.
But then I found the solution:
Attempt #3: I did everything as in attempt #2, except that I assigned the zero vector vec3(0.0f, 0.0f, 0.0f) as the normal of the cone-tip vertices. This is the key to the trick! This zero normal vector is then passed to the fragment shader (i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices). Of course you then need to normalize the vector in the fragment (!) shader, because it does not have a constant size of 1 (which I need for the dot product). So I normalize it; of course this is not possible at the very tip of the cone, where the interpolated normal has zero length, but it works for all other points. And that's it.
There is one important thing to remember: you can only normalize the normal vector in the fragment shader. You will get an error if you try to normalize a vector of zero size in C++. So if for some reason you need normalization before entering the fragment shader, make sure you exclude the normal vectors of zero size (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at all points except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader; the only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader or even before.
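And the matching fragment shader sketch (again with assumed names), where the normalization of the interpolated normal happens:

#version 330 core
in vec3 interpolatedNormal;   // zero-length only at the tip vertex itself
in vec3 interpolatedColor;
uniform vec3 lightDirection;  // assumed unit length, pointing towards the light
out vec4 outColor;

void main()
{
    // the interpolated vector is not unit length (and degenerates only at the
    // single tip pixel), so normalize here, in the fragment shader
    vec3 n = normalize(interpolatedNormal);
    float diffuse = max(dot(n, lightDirection), 0.0);
    outColor = vec4(interpolatedColor * diffuse, 1.0);
}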
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear:
Here's some things you could try...
Use quads (as #WhitAngl says). To hell with new OpenGL, there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though it looks a bit strange against the unlit side. Unfortunately this goes against your question title, low polygon cone.
Making sure your cone is centred around the object space origin (or procedurally generating it in the vertex shader), use the fragment position to generate the normal...
in vec2 coneSlope; // normal x/z magnitude and y
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;
void main()
{
    // radial direction in the xz plane scaled by the slope; the .xzy swizzle
    // reorders the components so the cone axis is y (adjust to your convention)
    vec3 osNormal = vec3(normalize(objectSpaceFragPos.xz) * coneSlope.x, coneSlope.y).xzy;
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there are some fancy tricks you can do to reduce fragment shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones. In which case you might get into geometry shaders or instancing. Better yet you could draw the cones using quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.

OpenGL/GLSL varying vectors: How to avoid starburst around vertices?

In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things.
The result of not normalizing the normals before using them in your fragment shader.
An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem is unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before you calculate your fragment lighting.
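In GLSL 1.20 terms (matching OpenGL 2.1), the fix is a one-line renormalization in the fragment shader; the varying names below are placeholders:

varying vec3 vNormal;     // written from the vertex shader; shrinks when interpolated
varying vec3 vLightDir;   // direction towards the light

void main()
{
    vec3 n = normalize(vNormal);          // renormalize the interpolated normal
    vec3 l = normalize(vLightDir);
    float diffuse = max(dot(n, l), 0.0);  // Lambert's law
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}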