Why does fwidth() behave differently? - glsl

I'm working on a WebGL project that draws isolines on a 3D surface, developing on macOS with an AMD GPU. My idea is to colour the pixels based on elevation in the fragment shader. With some optimizations I can achieve a relatively consistent line width, and I'm happy with that. However, when I tested it on Windows it behaved differently.
I then figured out that it's because of fwidth(). I use fwidth() to prevent the fragment shader from colouring the whole horizontal plane when it happens to lie exactly at an isolevel. Please see the screenshot:
I solved this issue by adding the following GLSL line:
if (fwidth(vPositionZ) < 0.001) { /* then do not colour the isoline on these pixels */ }
It works very well on macOS, where I get the expected result.
However, on Windows with an NVIDIA GPU all the isolines are gone, because fwidth(vPositionZ) always evaluates to 0.0, which doesn't make sense to me.
What am I doing wrong? Is there any better way to solve the issue presented in the first screenshot? Thank you all!
EDIT:
Here is my fragment shader. It's simplified, but I think it includes everything relevant. I know the loop is slow, but I'm not worried about that for now.
uniform float zmin;        // min elevation
uniform vec3 lineColor;    // isoline colour
varying float vPositionZ;  // elevation value for each vertex

// COUNT, lineWidth and finalColor come from the full shader (omitted here); interval is assigned elsewhere
float interval;
vec3 originColor = finalColor.rgb; // original surface colour
for ( int i = 0; i < COUNT; i ++ ) {
    float elevation = zmin + float( i + 1 ) * interval;
    // a uniform cannot be written to, so blend into a local colour instead
    vec3 isoColor = mix( originColor, lineColor, step( 0.001, fwidth( vPositionZ ) ) );
    if ( vPositionZ <= elevation + lineWidth && vPositionZ >= elevation - lineWidth ) {
        finalColor.rgb = isoColor;
    }
    // same thing but without the condition:
    // finalColor.rgb = mix( mix( originColor, isoColor, step( elevation - lineWidth, vPositionZ ) ),
    //                       originColor,
    //                       step( elevation + lineWidth, vPositionZ ) );
}
gl_FragColor = finalColor;
Environment: WebGL 2.0, GLSL ES 3.00, Chrome browser.

Putting fwidth(vPositionZ) before the loop works. Otherwise, fwidth() evaluates to 0 for anything computed inside the loop.
I suspect this is a bug in the NVIDIA driver.
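A minimal sketch of that workaround, reusing the names from the shader above (COUNT, interval, lineWidth and finalColor are assumed to be defined as in the question):

// evaluate the derivative once, in uniform control flow, and reuse it
float zDelta = fwidth( vPositionZ );
vec3 originColor = finalColor.rgb;

for ( int i = 0; i < COUNT; i ++ ) {
    float elevation = zmin + float( i + 1 ) * interval;
    // suppress the isoline where the surface is locally flat (tiny derivative)
    vec3 isoColor = mix( originColor, lineColor, step( 0.001, zDelta ) );
    if ( abs( vPositionZ - elevation ) <= lineWidth ) {
        finalColor.rgb = isoColor;
    }
}
gl_FragColor = finalColor;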

Related

OpenGL Normal Mapping Issues - Normals Possibly Facing Wrong Direction?

I am currently working on my first OpenGL-based game engine. I need normal mapping as a feature, but it isn't working correctly.
Here is an animation of what is happening:
The artifacts are affected by the angle between the light and the normals on the surface. Camera movement does not affect it in any way. I am also (at least for now) going the route of the less efficient method where the normal extracted from the normal map is converted into view space rather than converting everything to tangent space.
Here are the relevant pieces of my code:
Generating Tangents and Bitangents
for(int k = 0; k < (int)mb->getIndexCount(); k += 3)
{
    // indices of the current triangle
    unsigned int i1 = mb->getIndex(k);
    unsigned int i2 = mb->getIndex(k+1);
    unsigned int i3 = mb->getIndex(k+2);

    JGE_v3f v0 = mb->getVertexPosition(i1);
    JGE_v3f v1 = mb->getVertexPosition(i2);
    JGE_v3f v2 = mb->getVertexPosition(i3);

    JGE_v2f uv0 = mb->getVertexUV(i1);
    JGE_v2f uv1 = mb->getVertexUV(i2);
    JGE_v2f uv2 = mb->getVertexUV(i3);

    // edge vectors in position and UV space
    JGE_v3f deltaPos1 = v1 - v0;
    JGE_v3f deltaPos2 = v2 - v0;
    JGE_v2f deltaUV1 = uv1 - uv0;
    JGE_v2f deltaUV2 = uv2 - uv0;

    // determinant of the UV matrix; skip triangles with degenerate UVs
    float ur = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x;
    if(ur != 0)
    {
        float r = 1.0 / ur;
        JGE_v3f tangent;
        JGE_v3f bitangent;
        tangent = ((deltaPos1 * deltaUV2.y) - (deltaPos2 * deltaUV1.y)) * r;
        tangent.normalize();
        bitangent = ((deltaPos1 * -deltaUV2.x) + (deltaPos2 * deltaUV1.x)) * r;
        bitangent.normalize();

        // accumulate per vertex so shared vertices end up with averaged tangents
        tans[i1] += tangent;
        tans[i2] += tangent;
        tans[i3] += tangent;
        btans[i1] += bitangent;
        btans[i2] += bitangent;
        btans[i3] += bitangent;
    }
}
Calculating the TBN matrix in the Vertex Shader
(mNormal corrects the normal for non-uniform scales)
vec3 T = normalize((mVW * vec4(tangent, 0.0)).xyz);
tnormal = normalize((mNormal * n).xyz);
vec3 B = normalize((mVW * vec4(bitangent, 0.0)).xyz);
tmTBN = transpose(mat3(
    T.x, B.x, tnormal.x,
    T.y, B.y, tnormal.y,
    T.z, B.z, tnormal.z));
Finally, here is where I use the sampled normal from the normal map and attempt to convert it to view space in the fragment shader:
fnormal = normalize(nmapcolor.xyz * 2.0 - 1.0);
fnormal = normalize(tmTBN * fnormal);
"nmapcolor" is the sampled color from the normal map.
"fnormal" is then used like normal in the lighting calculations.
I have been trying to solve this for so long and have absolutely no idea how to get this working. Any help would be greatly appreciated.
EDIT - I slightly modified the code to work in world space and outputted the results. The big platform does not have normal mapping (and it works correctly) while the smaller platform does.
I added an indication of which direction the normals are facing. They should both be generally the same color, but they're clearly different. It seems the tmTBN matrix isn't transforming the tangent-space normal into world (and normally view) space properly.
Well... I solved the problem. Turns out my normal mapping implementation was perfect. The problem actually was in my texture class. This is, of course, my first time writing an OpenGL rendering engine, and I did not realize that the unlock() function in my texture class saved ALL my textures as GL_SRGB_ALPHA including normal maps. Only diffuse map textures should be GL_SRGB_ALPHA. Temporarily forcing all textures to load as GL_RGBA fixed the problem.
Can't believe I had this problem for 11 months, only to find it was something so small.

volume rendering raycasting artifacts

I am trying to implement a simple raycasting volume rendering in WebGL.
It is kind of working, but there are some artifacts when you rotate the volume around (i.e. the head appears deformed).
Live demo:
http://fnndsc.github.io/vjs/#shaders_raycasting_adibrain
GLSL Code used for debugging:
https://github.com/FNNDSC/vjs/blob/master/src/shaders/shaders.raycasting.secondPass.frag
Simplified version of the code:
for(int rayStep = 0; rayStep < maxSteps; rayStep++){
    // map world coordinates to data (voxel) coordinates
    vec4 dataCoordinatesRaw = uWorldToData * currentPosition;
    ivec3 dataCoordinates = ivec3(int(floor(dataCoordinatesRaw.x)),
                                  int(floor(dataCoordinatesRaw.y)),
                                  int(floor(dataCoordinatesRaw.z)));
    float intensity = getIntensity(dataCoordinates);

    // we have the intensity now; accumulate front to back
    vec3 colorSample = vec3(intensity);
    float alphaSample = intensity;
    accumulatedColor += (1.0 - accumulatedAlpha) * colorSample * alphaSample;
    accumulatedAlpha += alphaSample;

    // advance the ray
    currentPosition += deltaDirection;
    accumulatedLength += deltaDirectionLength;
    if(accumulatedLength >= rayLength || accumulatedAlpha >= 1.0) break;
}
I do not understand what could explain those artifacts.
Could it be because I do not use gradients to modulate opacity/color?
Any hint would be very welcome.
The backface coordinates were not computed properly during the first pass of the raycasting: the range of the "normalized" coordinates was not [0, 1] but [-0.5, 1.5], which created the visualization artifact because all values outside the [0, 1] range were clamped.
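For reference, a minimal sketch of a first-pass fragment shader that outputs properly normalized box coordinates; the names uBoxMin, uBoxMax and vWorldPosition are assumptions, not the ones used in the linked code:

uniform vec3 uBoxMin;        // minimum corner of the volume's bounding box
uniform vec3 uBoxMax;        // maximum corner of the volume's bounding box
varying vec3 vWorldPosition; // interpolated world-space position of the backface

void main(void) {
    // remap the position into [0, 1] so the second pass can ray-march in data space
    vec3 normalized = (vWorldPosition - uBoxMin) / (uBoxMax - uBoxMin);
    gl_FragColor = vec4(normalized, 1.0);
}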

detecting if a gl_LightSource is enabled in glsl compatibility profile

I am writing a GLSL program as part of a plugin running inside Maya, a closed-source 3D application. Maya uses the fixed-function pipeline to define its lights, so my program has to get its light information from the gl_LightSource array using the compatibility profile. My light evaluation is working fine (thanks Nicol Bolas) except for one thing: I cannot figure out how to determine whether a particular light in the array is enabled or disabled. Here is what I have so far:
#version 410 compatibility

vec3 incidentLight (in gl_LightSourceParameters light, in vec3 position)
{
    if (light.position.w == 0) {
        return normalize (-light.position.xyz);
    } else {
        vec3 offset = position - light.position.xyz;
        float distance = length (offset);
        vec3 direction = normalize (offset);
        float intensity;
        if (light.spotCutoff <= 90.) {
            float spotCos = dot (direction, normalize (light.spotDirection));
            intensity = pow (spotCos, light.spotExponent) *
                        step (light.spotCosCutoff, spotCos);
        } else {
            intensity = 1.;
        }
        intensity /= light.constantAttenuation +
                     light.linearAttenuation * distance +
                     light.quadraticAttenuation * distance * distance;
        return intensity * direction;
    }
}
void main ()
{
    for (int i = 0; i < gl_MaxLights; ++i) {
        if (/* ??? gl_LightSource[i] is enabled ??? */ true) {
            vec3 incident = incidentLight (gl_LightSource[i], position);
            <snip>
        }
    }
    <snip>
}
When Maya enables new lights my program works as expected, but when Maya disables a previously enabled light, presumably using glDisable (GL_LIGHTi), its parameters are not reset in the gl_LightSource array and gl_MaxLights obviously does not change, so my program continues to use that stale light information in its shading computation. Although I am not showing it above, the light colors, for example gl_LightSource[i].diffuse, also continue to hold stale non-zero values after the lights are disabled.
Maya draws all other geometry using the fixed-function pipeline (no GLSL), and those objects correctly ignore disabled lights. How can I mimic this behavior in GLSL?
const vec4 AMBIENT_BLACK = vec4(0.0, 0.0, 0.0, 1.0);
const vec4 DEFAULT_BLACK = vec4(0.0, 0.0, 0.0, 0.0);

bool isLightEnabled(in int i)
{
    // A separate variable is used to get
    // rid of a linker error.
    bool enabled = true;

    // If all the colors of the light are set
    // to black then we know we don't need to bother
    // doing a lighting calculation on it.
    if ((gl_LightSource[i].ambient  == AMBIENT_BLACK) &&
        (gl_LightSource[i].diffuse  == DEFAULT_BLACK) &&
        (gl_LightSource[i].specular == DEFAULT_BLACK))
        enabled = false;

    return(enabled);
}
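A usage sketch for the question's main loop, replacing the placeholder test (position and the snipped lighting code are as in the question):

for (int i = 0; i < gl_MaxLights; ++i) {
    if (isLightEnabled(i)) {
        vec3 incident = incidentLight (gl_LightSource[i], position);
        // ... lighting calculation as before ...
    }
}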
Unfortunately I looked at the GLSL spec and I don't see anything that provides this information. I also saw another thread which seemed to come to the same conclusion.
Is there any way you can modify the light values in your plugin, or add an extra uniform that can be used as an enable/disable flag?
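If the plugin can supply such a flag, a minimal sketch of the idea (the uniform name and per-light upload are assumptions, not part of Maya's API):

// one flag per light, set by the plugin (e.g. via glUniform1iv) whenever lights change
uniform bool uLightEnabled[gl_MaxLights];

void main ()
{
    for (int i = 0; i < gl_MaxLights; ++i) {
        if (uLightEnabled[i]) {
            vec3 incident = incidentLight (gl_LightSource[i], position);
            // ... lighting calculation ...
        }
    }
}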

GLSL shaders and WebGL problem

I have created a shader that works perfectly in Firefox, but in Chrome the fragment and vertex shaders cannot be linked. They compile just fine, but at the linking stage something goes wrong. I have localized the problem to the following bit of code:
else if (uLightType[i] == 1) { // point light
    NdotL = dot(n, normalize(lightDir[i]));
    if (NdotL > 0.0) {
        distance = length(lightDir[i]);
        att = (1.0 / (uLightAttenuation[i] * distance * distance));
        color += vec3(uLightColor[i] * NdotL * uLightIntensity[i] * att);
    }
}
This small piece of code calculates the diffuse color reflected from a point light; it's part of a larger for loop. As shown here it won't link at all, but if I remove uLightAttenuation from the att calculation, like so:
att = (1.0 / (distance * distance));
it works just fine. If I replace it with any other uniform, say uLightIntensity,
att = (1.0 / (uLightIntensity[i] * distance * distance));
again it won't work. If I replace it with a simple constant value or float variable, strangely enough, it compiles. And what is even stranger: if I remove att from the color calculation but keep the uniform in its current position, it runs just fine:
att = (1.0 / (uLightAttenuation[i] * distance * distance));
color += vec3(uLightColor[i] * NdotL * uLightIntensity[i]);
The uniform is a float value, and even if it were a problem with type casting it should fail at compilation, not linking.
Here are the complete shaders, maybe I missed something elsewhere in the code.
Fragment Shader
Vertex Shader
I have managed to make it work; it turns out I had two problems. One was division by 0 when calculating att: it wouldn't let me divide by a float uniform, so I combined uLightAttenuation and uLightIntensity into a single vec2 uniform, and after that that part worked. Secondly, when calculating color I had to reference every component individually (color[0], color[1], etc.) and work only with float variables, not vectors. After that it worked correctly in Chrome too.
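A rough sketch of that workaround with assumed names (MAX_LIGHTS, uLightColor, NdotL and the surrounding loop come from the full shader, which is only linked above):

// x = attenuation factor, y = intensity, packed per light
uniform vec2 uLightAttIntensity[MAX_LIGHTS];

// inside the lighting loop:
float att = 1.0 / (uLightAttIntensity[i].x * distance * distance);
color[0] += uLightColor[i].r * NdotL * uLightAttIntensity[i].y * att;
color[1] += uLightColor[i].g * NdotL * uLightAttIntensity[i].y * att;
color[2] += uLightColor[i].b * NdotL * uLightAttIntensity[i].y * att;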

C++ shader question

I am using NVIDIA Cg and Direct3D 9 and have a question about the following code.
It compiles, but doesn't load (using a cgLoadProgram wrapper), and the resulting failure is described simply as "D3D failure happened".
It's part of a pixel shader compiled with the shader model set to 3.0.
What may be interesting is that this shader loads fine in the following cases:
1) Manually unrolling the while statement (into many if { } statements).
2) Removing the line with the tex2D function in the loop.
3) Switching to shader model 2_X and manually unrolling the loop.
The problematic part of the shader code:
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
float h = 1.0 - tex2D(height_texture1, tex);
float height = 1.00;
while ( h < height )
{
    height -= 0.1;
    tex += dtex;
    // Remove the next line and it works (not as expected,
    // of course)
    h = tex2D( height_texture1, tex );
}
If someone knows why this can happen, or could test similar code in a non-Cg environment, or could help me in some other way, I'm waiting for you ;)
Thanks.
I think you need to determine the gradients before the loop using ddx/ddy on the texture coordinates and then use tex2D(sampler2D samp, float2 s, float2 dx, float2 dy).
The GPU always renders quads, not single pixels (even at polygon borders - the superfluous pixels are discarded by the render backend). It does this because it allows it to always calculate the screen-space texture derivatives, even when you use calculated texture coordinates: it just takes the difference between the values at the pixel centers.
But this doesn't work when using dynamic branching like in the code in the question, because the shader processors at the individual pixels can diverge in control flow. So you need to calculate the derivatives manually via ddx/ddy before the program flow can diverge.
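The question's code is Cg/HLSL, but the same idea expressed in GLSL (using dFdx/dFdy and textureGrad, with an assumed interpolated coordinate vTexCoord) looks roughly like this:

uniform sampler2D height_texture1;
in vec2 vTexCoord; // interpolated texture coordinate from the vertex shader

void main()
{
    // compute the derivatives once, while control flow is still uniform
    vec2 dx = dFdx(vTexCoord);
    vec2 dy = dFdy(vTexCoord);

    vec2 tex = vTexCoord;
    vec2 dtex = vec2(0.01, 0.01);
    float h = 1.0 - textureGrad(height_texture1, tex, dx, dy).r;
    float height = 1.0;
    while (h < height)
    {
        height -= 0.1;
        tex += dtex;
        // explicit gradients: no implicit derivative is needed inside divergent flow
        h = textureGrad(height_texture1, tex, dx, dy).r;
    }
    // ... use tex / h for the final lookup ...
}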