How to implement flat shading in LWJGL (OpenGL)

I'm working on a small project in LWJGL. Currently I'm storing vertices, normals, and vertex colors from an obj/mtl file.
I'm trying to prevent vertex shading. That is, I want each polygon to have its defined color. I've done some reading online and I've seen suggestions to use the flat qualifier on shader variables and to call glDisable(GL_BLEND). So far the only thing that has given me the desired result is creating 6 separate planes and shading them accordingly in Blender.
The separate-planes approach seems like a total hack, and I don't see how it would scale well.
The cube on the left is simply a cube exported from Blender with 6 different materials. The one on the right is what I'm trying to achieve, but it's 6 planes modeled to look like a cube, then exported.
What is the proper way to achieve my goal of independently colored faces?

Adding the flat qualifier to the outputs of my vertex shader and to the corresponding inputs of my fragment shader gave me the desired results:
scene_vertex.vs
flat out vec3 mvVertexNormal;
flat out vec3 mvVertexPos;
scene_fragment.fs
flat in vec3 mvVertexNormal;
flat in vec3 mvVertexPos;
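For reference, a minimal vertex/fragment pair built on the same idea might look like the sketch below; the #version, attribute locations, uniform names, and the per-vertex color input are assumptions rather than the original shaders. Because of the flat qualifier, every fragment of a face receives the value from the provoking vertex instead of an interpolated one.
scene_vertex.vs (sketch)
#version 330 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

flat out vec3 vColor;   // 'flat' disables interpolation across the face

void main() {
    vColor = inColor;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(inPosition, 1.0);
}
scene_fragment.fs (sketch)
#version 330 core
flat in vec3 vColor;
out vec4 fragColor;

void main() {
    fragColor = vec4(vColor, 1.0);   // the whole face gets the provoking vertex's color
}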

Related

Handling per-primitive normals when there are fewer vertices than primitives in OpenGL 4.5

I've had a bit of trouble coming up with a solution for passing the correct per-triangle normal to the fragment shader in OpenGL 4.5, so that I can use per-triangle normals while doing indexed triangle rendering. (I want to use an IBO.)
My current solution, which works for some models, is basically to make the first vertex of each primitive the provoking vertex and use the provoking vertex's normal as the primitive's normal (of course adding the flat qualifier to the normal attribute in the shaders).
This should work for most models but I've realized that it just doesn't work when there are more triangle primitives than vertices in a model. The simplest example I can come up with is a triangular bipyramid.
Is there a typical way this is done in industry for OpenGL? In industry are models just so large that per vertex normals are easier to implement and look better?
As others mentioned in the comments, "in the industry" one would often duplicate vertices that have discontinuous normals. This is unavoidable when only parts of your geometry are flat shaded and parts are smooth, or there are creases in it.
If your geometry is entirely flat shaded, an alternative is to use gl_PrimitiveID to fetch the per-primitive normal from an SSBO in the fragment shader:
layout(std430, binding = 0) buffer NormalsBuffer {
    vec4 NORMALS[];   // one normal per primitive
};

void main() {
    // gl_PrimitiveID indexes the primitive being rasterized, so no per-vertex data is needed
    vec3 normal = NORMALS[gl_PrimitiveID].xyz;
    // ...
}
You can also use the unpackSnorm2x16 or similar functions to read normals stored in smaller datatypes and thus reduce the bandwidth, much like with vertex array attributes.
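For example, a bandwidth-reduced variant might store each normal as 16-bit snorm components and decode them with unpackSnorm2x16; the uvec2 layout below (x and y in the first uint, z in the second) is an assumption, just one possible packing:
layout(std430, binding = 0) buffer PackedNormalsBuffer {
    uvec2 PACKED_NORMALS[];   // per-primitive normals, snorm16-packed
};

void main() {
    uvec2 p = PACKED_NORMALS[gl_PrimitiveID];
    vec2 xy = unpackSnorm2x16(p.x);      // decode x and y
    float z = unpackSnorm2x16(p.y).x;    // decode z (second component unused)
    vec3 normal = normalize(vec3(xy, z));
    // ...
}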

Calculating surface normals of dynamic mesh (without geometry shader)

I have a mesh whose vertex positions are generated dynamically by the vertex shader. I've been using https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal to calculate the surface normal for each primitive in the geometry shader, which seems to work fine.
Unfortunately, I'm planning on switching to an environment where using a geometry shader is not possible. I'm looking for alternative ways to calculate surface normals. I've considered:
Using compute shaders in two passes: one to generate the vertex positions, another (using the generated vertex positions) to calculate the surface normals, and then passing that data into the shader pipeline. (A sketch of such a normal-calculation pass is shown after this question.)
Using ARB_shader_image_load_store (or related) to write the vertex positions to a texture (in the vertex shader), which can then be read from the fragment shader. The fragment shader should be able to safely access the vertex positions (since it will only ever access the vertices used to invoke the fragment), and can then calculate the surface normal per fragment.
I believe both of these methods should work, but I'm wondering if there is a less complicated way of doing this, especially considering that this seems like a fairly common task. I'm also wondering if there are any problems with either of the ideas I've proposed, as I've had little experience with both compute shaders and image_load_store.
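To illustrate the first option above, a compute pass that derives one face normal per triangle from a position buffer could look roughly like the sketch below; the buffer bindings, the use of an index buffer, and the uniform name are assumptions:
#version 430
layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly buffer Positions { vec4 positions[]; };
layout(std430, binding = 1) readonly buffer Indices   { uint indices[]; };
layout(std430, binding = 2) writeonly buffer Normals  { vec4 faceNormals[]; };

uniform uint triangleCount;

void main() {
    uint tri = gl_GlobalInvocationID.x;
    if (tri >= triangleCount) return;

    // fetch the three corners of triangle 'tri' through the index buffer
    vec3 a = positions[indices[3u * tri + 0u]].xyz;
    vec3 b = positions[indices[3u * tri + 1u]].xyz;
    vec3 c = positions[indices[3u * tri + 2u]].xyz;

    // the cross product of two edges gives the face normal
    faceNormals[tri] = vec4(normalize(cross(b - a, c - a)), 0.0);
}
The resulting buffer could then be bound during the render pass and indexed with gl_PrimitiveID, or copied back into a vertex attribute.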
See Diffuse light with OpenGL GLSL. If you just want the face normals, you can use the partial derivative functions dFdx and dFdy. Here is a basic fragment shader that calculates the normal vector (N) in the same space as the position:
in vec3 position;   // vertex position interpolated per fragment (any consistent space)

void main()
{
    // screen-space derivatives of the position lie in the plane of the triangle
    vec3 dx = dFdx(position);
    vec3 dy = dFdy(position);
    vec3 N = normalize(cross(dx, dy));
    // [...]
}
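For completeness, the position input above would typically be written by a vertex shader along these lines; the uniform names and the choice of view space are assumptions:
#version 330 core
layout(location = 0) in vec3 inPosition;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

out vec3 position;   // interpolated per fragment; dFdx/dFdy are taken in this (view) space

void main() {
    vec4 viewPos = modelViewMatrix * vec4(inPosition, 1.0);
    position = viewPos.xyz;
    gl_Position = projectionMatrix * viewPos;
}
Note that position must not be declared flat, since the derivatives of a flat input are zero within a primitive.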

Calculate normals for plane inside fragment shader

I have a situation where I need to do lighting. I don't have a vertex shader, so I can't interpolate normals into my fragment shader. Also, I have no ability to pass in a normal map. Can I generate normals completely in the fragment shader, based, for example, on fragment coordinates? The geometry is always planar in my case.
And to extend on what I am trying to do:
I am using the NV_path_rendering extension, which allows rendering pure vector graphics on the GPU. The problem is that only the fragment stage is accessible via a shader, which basically means I can't use a vertex shader with NV_path objects.
Since your shapes are flat and NV_path_rendering requires the compatibility profile, you can pass the normal through one of the built-in varyings gl_Color or gl_SecondaryColor (a sketch follows below).
The extension description says that there is some kind of interpolation:
Interpolation of per-vertex data (section 3.6.1). Path primitives have neither conventional vertices nor per-vertex data. Instead fragments generate interpolated per-fragment colors, texture coordinate sets, and fog coordinates as a linear function of object-space or eye-space path coordinates, or using the current color, texture coordinate set, or fog coordinate state directly.
http://developer.download.nvidia.com/assets/gamedev/files/GL_NV_path_rendering.txt
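A minimal compatibility-profile fragment shader for that approach might look like the sketch below; it assumes the (unit) normal was remapped to [0,1] and set as the current color (e.g. with glColor3f) before covering the path:
#version 120

void main() {
    // remap the color back from [0,1] to [-1,1] to recover the normal
    vec3 normal = normalize(gl_Color.rgb * 2.0 - 1.0);
    // ... lighting with 'normal' goes here
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);   // placeholder visualization
}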
Here's a method which "sets the normal as the face normal", without knowing anything about vertex normals (as I understand it).
https://stackoverflow.com/a/17532576/738675
I have a three.js demo working here:
http://meetar.github.io/three.js-normal-map-0/index6.html
My implementation is getting vertex position data from the vertex shader, but it sounds like you're able to get that through other means.

OpenGL/GLSL varying vectors: How to avoid starburst around vertices?

In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things.
1. The result of not normalizing the normals before using them in your fragment shader.
2. An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem is unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before you calculate your fragment lighting.
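A minimal GLSL 1.20 fragment shader doing that renormalization might look like this; the varying names are assumptions:
#version 120
varying vec3 vNormal;     // written per vertex, generally not unit length after interpolation
varying vec3 vLightDir;   // light direction, also computed in the vertex shader

void main() {
    vec3 N = normalize(vNormal);             // renormalize here, in the fragment shader
    vec3 L = normalize(vLightDir);
    float diffuse = max(dot(N, L), 0.0);     // Lambert's law
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}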

Vertex shader vs Fragment Shader [duplicate]

I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. Their values are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices as red. If you want specific effects like a gradient between the vertices, you have to do that in the fragment shader.
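As a small GLSL sketch of that idea (the question is about Cg, but the concept is the same; the attribute locations and variable names are assumptions), a per-vertex color written in the vertex shader arrives in the fragment shader already interpolated:
#version 330 core
// vertex shader: pass a per-vertex color through; the rasterizer interpolates it
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
out vec3 vColor;

void main() {
    vColor = inColor;
    gl_Position = vec4(inPosition, 1.0);
}

#version 330 core
// fragment shader: receives the interpolated ("gradient") color for each fragment
in vec3 vColor;
out vec4 fragColor;

void main() {
    fragColor = vec4(vColor, 1.0);
}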
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between model coordinate transformation and polygon clipping, I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D rendering that does not use the fixed-function pipeline. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader; i.e. any geometric change to the vertices can be done in the vertex shader (see the sketch below).
The fragment shader takes the output from the vertex shader and associates colors, a depth value, etc. with each fragment. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, for example lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the fragment shader generally gives better results than the vertex shader.
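As a sketch of that kind of deformation (a simple sine wave moving the vertices; the uniform names are assumptions):
#version 330 core
layout(location = 0) in vec3 inPosition;

uniform mat4 modelViewProjection;
uniform float time;

void main() {
    vec3 p = inPosition;
    p.y += 0.1 * sin(4.0 * p.x + time);   // displace vertices along Y over time
    gl_Position = modelViewProjection * vec4(p, 1.0);
}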
When rendering images via 3D hardware you typically have a mesh (points, polygons, lines); these are defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate the colours between the vertices you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you also use fragment shaders.