Increasing specular shininess decreases lighting in OpenGL

I've got an issue with changing the specular power component in my OpenGL 4.3 shader. The specular highlight works fine when I use a shininess value between 0 and 10; however, as the value is increased to make the material more shiny, the lighting decreases in intensity. Here is my code:
//Direct Lighting
vec3 L = normalize(worldPos.xyz - lightPos.xyz);
vec3 D = normalize(lightDir.xyz);
float dist = length(-L);
float I = max(dot(N.xyz, -L), 0.0);
vec3 h = normalize(-L.xyz + E);
float specLighting = pow(max(dot(N.xyz, h), 0.0), 50.0);
fColor.xyz = vec3(specLighting);
So if I increase the shininess value to something like 50, there is no lighting at all. By the way, in this code I am only displaying the specular term, to debug the problem.
[EDIT]
Sorry for the lack of detail in the explanation; I have attached some screenshots of the results of changing the specular shininess value from 1 to 7. As you can see, as the specular highlight reduces in size (which is what I want), the lighting also fades (which is not what I want). It gets to the point where, after about 10, it becomes completely black.
By the way, I am doing this entirely in the pixel/fragment shader.
I have also added a screenshot from my DirectX 11 test engine using the exact same code for specular lighting, but with a shininess factor of 100.
DirectX 11:

If you want to maintain a minimum level of illumination, you should add a non-specular component. The specular component is typically used to add highlights to a material, not as the sole contributor.
Anyway, the darkening you report is a natural result of increasing the exponent. Think about it: because the vectors are pre-normalized, dot(N.xyz, h) is no more than 1.0. Raising a number between 0 and 1 to a high power will naturally tend to make the result very small very quickly... which is exactly what you want for a sharp specular highlight!
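To put rough numbers on it (my own figures, purely for illustration): take a pixel slightly off the centre of the highlight, where dot(N.xyz, h) is about 0.9.
pow(0.9, 10.0)  ~= 0.35      // still clearly visible
pow(0.9, 50.0)  ~= 0.005     // essentially black
pow(0.9, 100.0) ~= 0.00003   // black
Only pixels where dot(N.xyz, h) is extremely close to 1.0 survive a large exponent, which is why the highlight shrinks to a small, sharp spot.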
Of course, reducing the size of the highlight will reduce the average specular reflection (unless you made the maximum specular value much brighter, of course...). But, very few actual objects have only specular reflection. And, if an object did have only specular reflection, it would look very dark except for the specular spots.
Another way to look at it: your formula gives you a specular brightness whose maximum value is 1.0 (which is in some ways practically convenient for conventional, low-dynamic-range computer graphics where each color channel maxes out at 1.0). However, in the real world, a shinier surface will typically cause the specular highlights to get brighter as well as smaller, such that the average specular brightness stays the same. It is the contrast between these two cases which makes the situation confusing. For practical purposes the formula is "working as designed -- will not fix"; typically, the artist will simply adjust the specular exponent and brightness until he gets the appearance he wants.

Thanks for all your help guys,
It turned out that it was just a silly mistake in the vertex shader:
instead of:
fNorm = vec4(mat3(worldSpace)*vNorm.xyz,0);
I had just:
fNorm = vNorm;
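For anyone hitting the same thing, here is the relevant vertex/fragment pair as a minimal sketch (worldSpace, vNorm and fNorm are the names from the snippet above; the rest is assumed):
// vertex shader: rotate the object-space normal into world space
// (mat3(worldSpace) is fine for rotation/uniform scale; use the inverse
//  transpose of the model matrix if it contains non-uniform scale)
fNorm = vec4(mat3(worldSpace) * vNorm.xyz, 0.0);

// fragment shader: interpolation shortens the normal, so re-normalize per pixel
vec3 N = normalize(fNorm.xyz);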

I originally wrote this answer before the question was updated with enough information. So it's obviously not the right answer in this case, but may apply to someone with a similar problem...
One possible explanation as to why specular lighting would decrease in intensity with increasing power is if you are calculating it at the vertex level, rather than per pixel. If the highlight happens to fall in the middle of a polygon, rather than right at a vertex, then as it decreases in size with increasing shininess, the vertex contributions will drop off rapidly, and the highlight will disappear. Really, for good specular (and for good lighting in general), you need to calculate per pixel, and only interpolate things that actually vary smoothly across the polygon, such as position or normal.
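As a rough illustration of the per-pixel approach (a sketch only; the uniform and varying names are assumptions, not taken from the question):
#version 430 core

in vec3 fWorldPos;   // interpolated per-fragment position
in vec3 fNormal;     // interpolated per-fragment normal

uniform vec3 lightPos;
uniform vec3 eyePos;
uniform float shininess;   // e.g. 50.0

out vec4 fColor;

void main()
{
    // re-normalize the interpolated normal per pixel
    vec3 N = normalize(fNormal);
    vec3 L = normalize(lightPos - fWorldPos);   // surface-to-light
    vec3 E = normalize(eyePos   - fWorldPos);   // surface-to-eye
    vec3 H = normalize(L + E);                  // Blinn-Phong half vector

    float spec = pow(max(dot(N, H), 0.0), shininess);
    fColor = vec4(vec3(spec), 1.0);             // specular only, for debugging
}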

If you increase the "shininess", your material will be less shiny.

Related

Simplest 2D Lighting in GLSL

Hullo, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top left corner of the window is (0, 0) and the bottom right is (window.width, window.height). I have one uniform variable in the fragment shader uniform vec2 lightPosition; which is currently set to the mouse position (again, in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), then no matter how close the light gets to it, it should not change more than that; going brighter adds annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is a rather simple code to implement (and I feel ashamed for not knowing it).
Why not scale the distance up to the <0..1> range by clamping it to some maximum visibility distance vd and dividing by it:
d = min(length(fragment_pos - light_pos), vd) / vd;
That should get you a <0..1> range for the distance of the fragment to the light. Now you can optionally perform a simple non-linearization if you want (using pow, which does not change the range...):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent ...) and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
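Putting the pieces together, a minimal fragment-shader sketch (lightPosition, face_color and vd follow the names above; everything else is an assumption):
#version 330 core

uniform vec2 lightPosition;   // light position in window coordinates (as in the question)
uniform vec3 face_color;      // the fragment's base color
uniform float vd;             // maximum visibility distance

out vec4 fragColor;

void main()
{
    // note: gl_FragCoord.xy has its origin at the bottom-left corner, so you
    // may need to flip Y if lightPosition uses a top-left origin
    float d = min(length(gl_FragCoord.xy - lightPosition), vd) / vd;

    // optional non-linear falloff; the exponent is a matter of taste
    d = pow(d, 2.0);

    // 0.8 = light source strength, 0.2 = ambient floor; the result never
    // exceeds face_color and never drops to full black
    vec3 col = face_color * ((1.0 - d) * 0.8 + 0.2);
    fragColor = vec4(col, 1.0);
}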

Combine material coefficients and textures in Phong Shading

I'm trying to implement a simple Phong shader that supports non-physically-based materials and textures. Even though the material has a texture for each light component, I still want the respective material coefficient to have some effect. My question is how to combine them: should I mix, multiply, or sum them? Right now I've multiplied them:
ambient = material.ambient_color * light.ambient * texture_ambient;
diffuse = material.diffuse_color * light.diffuse * diffuse_strength * texture_diffuse;
specular = material.specular_color * light.specular * specular_strength * texture_specular;
It seems kinda dark. Is this the correct way to combine material coefficients and textures?
This depends. In the end it's up to you to decide. If it's too dark, try something else.
You could just apply the material coefficient to your texture. I guess that's how it's done most of the time. But as BDL mentioned: this is not physically based, so it's really up to you and there is no right or wrong.
That said: If you want to keep it separate, you could try to use high(er) material coefficients. If they only range from 0.9 to 1 they won't darken the object too much. If they range from 0 to 1 you could make use of smoothstep(0.9, 1, material.coefficient) or something like that.
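A small sketch of those two combinations, diffuse only (a sketch, not a definitive recipe; the names follow the snippets above, and the other components would be handled the same way):
// direct combination, as in the question: the material color scales light and texture
diffuse = material.diffuse_color * light.diffuse * diffuse_strength * texture_diffuse;

// remapped variant: smoothstep(0.9, 1.0, c) maps each channel below 0.9 to 0.0
// and eases values between 0.9 and 1.0 up to 1.0
vec3 remapped = smoothstep(vec3(0.9), vec3(1.0), material.diffuse_color);
diffuse = remapped * light.diffuse * diffuse_strength * texture_diffuse;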

OpenGL lighting - Specular highlight is colored?

I am facing a problem: my specular lighting is also coloured. In my code I have:
LightColor = ambient+diffuse+specular;
FragColor = BodyColor * LightColor;
// My specular light appears in the same color as that of the body. Can somebody help me fix this?
Of course it is the same color as the body - because you're doing a component-wise multiplication of BodyColor against LightColor. That means the specular highlight is influenced by the body color just as much as the diffuse and ambient are, which is obviously not what you want.
For example, if BodyColor is [1.0 0.0 0.0], aka red (let's ignore alpha for argument's sake), and the combined light color is [0.8 0.8 0.8], aka light gray, then:
FragColor = [1.0 0.0 0.0] * [0.8 0.8 0.8] = [0.8 0.0 0.0]
Which is not-quite-full red.
You can never escape the output being based upon the body color with the math you have. At least, not without going through a convoluted division, and even then, it won't work with zero components.
There are a number of solutions to your problem, of varying degrees of accuracy, but the simplest (and what it seems you are trying to do) is to multiply the body by the combined lighting except specular, and add specular:
FragColor = BodyColor * (LightDiffuse + Ambient) + LightSpecular;
Specular lighting represents a fundamentally different phenomenon than diffuse/ambient: it is the light bounced off of the surface, or out of a thin layer at the surface (which is what colored specular usually represents). As such, it should not be multiplied against the color of the body. It might be multiplied against a separate surface specular color, but usually is not.
You still need some rather complex linear algebra to arrive at the LightSpecular term, since it is not applied flat to the surface, but depends upon the angle of the camera to the object and from the object to the light. Since you didn't ask, I am assuming you have that; although since you probably don't, I will suggest that there are many pages about that topic all over the internet.
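For reference, a rough sketch of one common way to arrive at that LightSpecular term (classic Phong with a reflection vector; the vectors N, L, V and the lightColor/shininess names are assumptions, not from the question):
// N: unit surface normal, L: unit direction from the surface to the light,
// V: unit direction from the surface to the camera
vec3 R = reflect(-L, N);                           // L mirrored about the normal
float spec = pow(max(dot(R, V), 0.0), shininess);
vec3 LightSpecular = spec * lightColor;            // tinted by the light, not by the body

FragColor = BodyColor * (LightDiffuse + Ambient) + LightSpecular;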

Simulating lighting function of fixed pipeline, which role plays the material emission?

So, I am trying to port this sample to JOGL.
Kind of ironic, because he is using compute shaders with deprecated OpenGL, but anyway, I'd like to emulate that.
He sets light:
ambient
diffuse
specular
cut-off
exponent
Material:
ambient
diffuse
specular
shininess
emission
I found an almost perfect link, where all the parameters he sets are covered, except one: the material emission.
I also found another nice link where I can see all the default values for the fixed pipeline and I am going to use them to set what he doesn't.
So, where (and how) shall I insert the material emission in the function?
Edit: for the downvoter, I find it hard to be more clear and explicit than the question above. Maybe if you tell me what you did not comprehend I can try to help you, but you should have some basic notions about OpenGL and lighting first in order to get it.
The emission color is similar to the ambient color, in the sense that both of them are used for terms that are independent of the light/normal directions.
The overall lighting calculation can be expressed as a sum of different terms:
emission + ambient + diffuse + specular
The difference between emission and ambient is:
The material emission color is used directly as a term in the overall lighting calculation.
The material ambient color is multiplied with the light model ambient color, and the ambient colors of each active light source. These terms are then all summed up to obtain the overall ambient contribution.
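As a sketch of how these terms fit together for a single light (a simplified version of the fixed-function formula, ignoring attenuation and the spotlight factor; all names here are my own):
// NdotL and spec are the usual diffuse and specular factors:
//   NdotL = max(dot(N, L), 0.0)
//   spec  = pow(max(dot(N, H), 0.0), material_shininess)
vec3 color = material_emission                               // independent of any light
           + material_ambient  * lightModelAmbient           // global (scene) ambient
           + material_ambient  * light_ambient               // this light's ambient
           + material_diffuse  * light_diffuse  * NdotL      // diffuse
           + material_specular * light_specular * spec;      // specular
With several lights, everything after the emission and the global ambient is summed once per light; the emission term is added only once.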
For details, check out the section "The Mathematics of Lighting" in the Red Book, which is available for free online (direct link to the section).

Should the gl_FragColor value be normalized?

I am writing a Phong lighting shader and I have a hard time deciding whether the value I pass to gl_FragColor should be normalized or not.
If I use normalized values, the lighting is a bit weird. For example, an object far away from a light source (unlit) would have its color determined by the sum of the emissive component, ambient component and global ambient light. Let us say that adds up to (0.3, 0.3, 0.3). Normalizing this gives roughly (0.57, 0.57, 0.57), which is quite a bit more luminous than what I'm expecting.
However, if I use non-normalized values, for close objects the specular areas get really, really bright and I have to make sure I generally use low values for my material constants.
As a note, I am normalizing only the RGB component and the alpha component is always 1.
I am a bit swamped and I could not find anything related to this. Either that or my searches were completely wrong.
No. Normalizing the color creates an interesting effect, but I think you don't really want it most, if not all of the time.
Normalization of the color output causes loss of information, even though it may seem to provide greater detail to a scene in some cases. If all your fragments have their color normalized, it means that all RGB vectors have their norm equal to 1. This means that there are colors that simply cannot exist in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), etc. Another issue you may face is normalizing zero vectors (i.e. black).
Another way to understand why color normalization is wrong is to think about the less general case of rendering a grayscale image. As there is only one color component, normalization does not really make sense at all, as it would make all your colors 1.0.
The problem with using the values without normalization arises from the fact that your output image has to have its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. As the specular parts of your object reflect more light than those that only reflect diffuse light, the computed color value may quite possibly exceed (1.0, 1.0, 1.0) and get clamped to white for most of the specular area, so these areas become, perhaps, too bright.
A simple solution would be to lower the material constant values, or the light intensity. You could go one step further and make sure that the values for the material constants and light intensity are chosen such that the computed color value cannot exceed (1.0, 1.0, 1.0). The same result could be achieved with a simple division of the computed color value if consistent values are used for all the materials and all the lights in the scene, but that is kind of overkill, as the scene would probably be too dark.
The more complex, but better looking solution involves HDR rendering and exposure filters such as bloom to obtain more photo-realistic images. This basically means rendering the scene into a float buffer which can handle a greater range than the [0, 255] RGB buffer, then simulating the camera or human eye behavior of adapting to a certain light intensity and the image artefacts caused by this mechanism (i.e. bloom).
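As a very rough illustration of that last approach, here is a Reinhard-style tone-mapping pass (a sketch assuming the scene was first rendered into a floating-point texture, here called hdrBuffer):
#version 330 core

uniform sampler2D hdrBuffer;   // floating-point render target with values possibly > 1.0
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 hdrColor = texture(hdrBuffer, uv).rgb;
    vec3 mapped = hdrColor / (hdrColor + vec3(1.0));   // compresses [0, inf) into [0, 1)
    fragColor = vec4(mapped, 1.0);
}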