I'm trying to do normal mapping in my fragment shader, but it causes rendering issues. I've boiled the problem down to the following:
I have my normal vector n and tangent t. If I do:
return n;
It all works fine and dandy (but obviously not normal mapped). However, if I do:
return n + 0.0 * t;
It screws everything up. To me, these seem like they should be the same thing, but apparently they're not.
return n + vec3(0.0, 0.0, 0.0);
Works fine, so obviously the issue is that:
0.0 * t != vec3(0.0, 0.0, 0.0)
The only case I can think of where this can happen is if x, y, or z of t is infinity. However, that's not the case, as:
A:
0.0 * vec3(1.0/0.0, 1.0/0.0, 1.0/0.0)
works fine, and
B:
I've checked whether it is infinity using isinf(), which returns false.
I've also checked my code for any instance where the tangent could possibly become infinity (when normalized), but removing the normalization doesn't fix it. I've also verified that every tangent vector has length 1, so that shouldn't be the problem either way.
Edit: I wasn't aware of the NaN possibility before. Now that I've checked for it and confirmed that that's indeed the case, I know the problem isn't in my GLSL code but somewhere in my OBJ model importing code. I guess this question can be marked as solved now?
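For anyone hitting the same symptom, the underlying arithmetic is IEEE-754 NaN propagation: multiplying NaN by 0.0 yields NaN, not 0.0, so n + 0.0 * t poisons the whole result whenever t contains a NaN (for example from a degenerate tangent in the imported model). A minimal sketch of that arithmetic in Python, which follows the same IEEE-754 rules (note that GLSL implementations are not strictly required to follow IEEE-754, which may be why the explicit-infinity test in the shader behaved differently):

```python
import math

nan = float("nan")
inf = float("inf")

# 0.0 * NaN is NaN, not 0.0 -- so "n + 0.0 * t" becomes NaN component-wise
assert math.isnan(0.0 * nan)
assert math.isnan(5.0 + 0.0 * nan)

# 0.0 * infinity is also NaN under IEEE-754
assert math.isnan(0.0 * inf)

# isinf() on a NaN reports False, which is why the infinity check passed
assert not math.isinf(nan)
```

This also explains why n + vec3(0.0, 0.0, 0.0) worked: the NaN never entered the expression.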
Related
I am having trouble converting a cg shader to glsl.
In cg, there is a line :
float4 dst = tex2D(DST, i.uv);
float4 outputColor = (dst > 0.5 ? 1.0 : 2.0);
And when I convert it to glsl:
vec4 dst = texture2D(DST, v_texCoord);
vec4 outputColor = (dst > 0.5 ? 1.0 : 2.0);
I am having the error:
'>' : comparison operator only defined for scalars
And then I tried :
vec4 outputColor = (dst > vec4(0.5) ? 1.0 : 2.0);
Still the same error.
Can anybody give me some advice on how to convert this to GLSL? Thanks :)
Assuming that the Cg comparison is broadcasting each of those operations to the 4 components of the vector, GLSL doesn't have a simple, built-in operator for it. But it does have a way to do it.
Modern GLSL (i.e., versions where texture2D has long since been deprecated) has access to component-wise comparison functions that have the effect of your condition. They produce boolean vectors that say whether the corresponding components satisfy the condition.
You can then use the mix function to do component-wise selection. However, you have to manually do the broadcasting of the scalars to make this work.
So the equivalent GLSL code would be:
vec4 outputColor = mix(vec4(2.0), vec4(1.0), greaterThan(dst, vec4(0.5)));
Yes, the order of the values in mix is "backwards": the value taken for a false condition (not greater than) is the first one; the true condition is the second.
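To see why this argument order reproduces the Cg ternary, here is a component-wise sketch in Python (hypothetical helper names; they mimic GLSL's greaterThan and the bool-selector behavior of mix, where the first argument is taken for false and the second for true):

```python
def greater_than(v, threshold):
    # Like GLSL greaterThan(): component-wise comparison producing booleans
    return [x > threshold for x in v]

def mix_select(false_val, true_val, selector):
    # Like GLSL mix() with a boolean selector: picks true_val where True
    return [t if s else f for f, t, s in zip(false_val, true_val, selector)]

# dst > 0.5 ? 1.0 : 2.0, broadcast over 4 components as Cg does
dst = [0.2, 0.7, 0.5, 0.9]
result = mix_select([2.0] * 4, [1.0] * 4, greater_than(dst, 0.5))
assert result == [2.0, 1.0, 2.0, 1.0]
```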
In the following shader, m1 and m2 should have the same value because cos(asin(x)) == sqrt(1.0 - x*x).
However, the field produced using m1 shows a black ring in the lower left corner whereas m2 produces the expected smooth field:
precision highp float;
void main() {
float scale = 10000.0;
float p = length(gl_FragCoord.xy / scale);
float m1 = cos(asin(p));
float m2 = sqrt(1.0 - p*p);
float v = asin(m1); // change to m2 to see correct behavior
float c = degrees(v) / 90.0;
gl_FragColor = vec4(vec3(c), 1.0);
}
This behavior is really puzzling. What explains the black ring? I thought it might be a precision issue, but highp produces the same result. Or perhaps the black ring represents NaN results, but NaNs shouldn't occur there.
This replicates on macOS 10.10.5 in Chrome/Firefox. It does not replicate on Windows 10 or iOS 9.3.3. Could something like this be a driver issue?
(For the curious, these formulas calculate latitude for an orthographic projection centered on the north pole.)
--UPDATE--
Confirmed today that macOS 10.11.6 does not show the rendering error. This really seems like a driver/OS issue.
According to the spec
asin(x) : Results are undefined if ∣x∣ > 1.
and
sqrt(x) : Results are undefined if x < 0.
Does either of those point to the issue?
Try
float m1 = cos(asin(clamp(p, -1., 1.)));
float m2 = sqrt(abs(1.0 - p*p));
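A scalar sketch of why the clamp and abs matter, in Python (where an out-of-domain asin raises instead of returning an undefined result, but the domain boundary is the same as in the GLSL spec):

```python
import math

def clamp(x, lo, hi):
    # Like GLSL clamp(): constrain x to the range [lo, hi]
    return max(lo, min(hi, x))

# Rounding in length()/division can push p slightly above 1.0; asin and
# sqrt are then out of domain, which the GLSL spec leaves undefined
# (some drivers return NaN, others happen to return something usable).
p = 1.0000001
assert 1.0 - p * p < 0.0                        # sqrt argument would be negative

m1 = math.cos(math.asin(clamp(p, -1.0, 1.0)))   # clamp keeps asin in [-1, 1]
m2 = math.sqrt(abs(1.0 - p * p))                # abs keeps sqrt non-negative
assert abs(m1 - m2) < 1e-3                      # both stay well-defined and close
```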
I'm trying to implement a simple viewer, and I was adding light attenuation for a point light.
The problem I have is the following:
I have that unnatural line going over the sphere.
The relevant code in shader is:
....
vec3 Ldist = uLightPosition-vPosition.xyz;
vec3 L = normalize(Ldist);
....
float NdotL = max(dot(N,L),0.0);
float attenuation = 1.0/ (Ldist*Ldist);
vec3 light = uAmbientColor;
if(NdotL>0.0){
specularWeighting = rho_s * computeBRDF(roughness, Didx, Gidx, Fidx, L, N, V);
light = light + NdotL*uLightColor*attenuation*(specularWeighting*specularColor*envColor.rgb + diffuseColor);
}
Being new to slightly more advanced lighting, I really can't see what could be wrong.
(I know this should maybe be a separate question, but since it's so small I was wondering if I could ask it here as well: is there any rule of thumb for choosing the light position and intensity to get a nice result on a single object like the sphere up there?)
The following doesn't really make sense:
vec3 Ldist = uLightPosition-vPosition.xyz;
[...]
float attenuation = 1.0/ (Ldist*Ldist);
First of all, this shouldn't even compile: Ldist is a vec3, and the * operator does a component-wise multiplication, so 1.0/(Ldist*Ldist) is a scalar divided by a vector, which yields a vec3 that cannot be assigned to a float. But apart from the syntax issues, and assuming that just length(Ldist) was meant (which I will call d in the following), the attenuation term still does not make sense. Typically, the attenuation term used is
1.0/(a + b*d + c * d*d)
with a, b and c being the constant, linear and quadratic light attenuation coefficients, respectively. What is important to note here is that if the denominator of that equation becomes < 1, the "attenuation" will be above 1, so the opposite effect is achieved. Since in a general scene the distance can be as low as 0, the only way to make sure this never happens is by setting a >= 1, which is typically done. So I recommend that you use at least 1.0/(1.0 + d) as the attenuation term, or add some constant attenuation coefficient in general.
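A quick numeric check of that claim, sketched in Python with hypothetical distances (d is the light distance):

```python
def naive_attenuation(d):
    # 1/d^2: "attenuation" exceeds 1.0 whenever d < 1, brightening instead
    return 1.0 / (d * d)

def safe_attenuation(d, a=1.0, b=1.0, c=0.0):
    # constant/linear/quadratic coefficients; a >= 1 caps the term at 1.0
    return 1.0 / (a + b * d + c * d * d)

assert naive_attenuation(0.1) > 99.0    # ~100x brighter, not attenuated
assert safe_attenuation(0.0) == 1.0     # never exceeds 1 when a >= 1
assert safe_attenuation(0.1) < 1.0      # actually attenuates at close range
```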
I'm calculating the surface normal for my analytical surface.
Some of the normals I'm getting are correct, but not all.
Code is :
SurfaceVertices3f[pos] = i;
SurfaceVertices3f[pos+1] = j;
SurfaceVertices3f[pos+2] = (cos(i)*sin(j));
/* a and b hold the output of the partial differentiation of the vertices from the three lines above; a is wrt i and b is wrt j */
a[0]=1;
a[1]=0;
a[2]=-sin(i)*sin(j);
b[0]=0;
b[1]=1;
b[2]=cos(i)*cos(j);
normal_var=Vec3Df::crossProduct( a, b);
normal_var.normalize();
My output looks like this; the right image is mine and the left one is the reference I'm using:
http://tinypic.com/view.php?pic=73l9co&s=5
Could anyone tell me what mistake I'm making?
Your normal calculation is correct. The reference image has just a different way to map normals to colors.
If you have a look at the green ground color, you will see that the color's norm is not 1. But normals should have a norm of 1. If we assume another common mapping from normal to color like this one:
color.rgb = normal.xyz / 2 + 0.5
We see that this is no unit vector either. So either they used yet a different mapping or they just don't have unit length normals.
So, I've got an imposter (the real geometry is a cube, possibly clipped, and the imposter geometry is a Menger sponge) and I need to calculate its depth.
I can calculate the amount to offset in world space fairly easily. Unfortunately, I've spent hours failing to perturb the depth with it.
The only correct results I can get are when I go:
gl_FragDepth = gl_FragCoord.z;
Basically, I need to know how gl_FragCoord.z is calculated so that I can:
Take the inverse transformation from gl_FragCoord.z to eye space
Add the depth perturbation
Transform this perturbed depth back into the same space as the original gl_FragCoord.z.
I apologize if this seems like a duplicate question; a number of other posts here address similar things. However, after implementing all of them, none works correctly. Rather than trying to pick one to get help with, at this point I'm asking for complete code that does it. It should only be a few lines.
For future reference, the key code is:
float far = gl_DepthRange.far;
float near = gl_DepthRange.near;
vec4 eye_space_pos = gl_ModelViewMatrix * /*something*/
vec4 clip_space_pos = gl_ProjectionMatrix * eye_space_pos;
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
float depth = (((far-near) * ndc_depth) + near + far) / 2.0;
gl_FragDepth = depth;
For another future reference, this is the same formula as given by imallett, which was working for me in an OpenGL 4.0 application:
vec4 v_clip_coord = modelview_projection * vec4(v_position, 1.0);
float f_ndc_depth = v_clip_coord.z / v_clip_coord.w;
gl_FragDepth = (1.0 - 0.0) * 0.5 * f_ndc_depth + (1.0 + 0.0) * 0.5;
Here, modelview_projection is the 4x4 modelview-projection matrix and v_position is the object-space position of the pixel being rendered (in my case calculated by a raymarcher).
The equation comes from the window-coordinates section of this manual. Note that in my code, near is 0.0 and far is 1.0, which are the default values of gl_DepthRange. Note that gl_DepthRange is not the same thing as the near/far distances in the formula for the perspective projection matrix! The only trick is to use 0.0 and 1.0 (or gl_DepthRange, in case you actually need to change it); I struggled for an hour with the other near/far range, but that one is already "baked" into my (perspective) projection matrix.
Note that this way, the depth-range part of the equation reduces to a single multiply by a constant ((far - near) / 2) and a single add of another constant ((far + near) / 2). Compare that to the multiply, add and divide (possibly converted to a multiply by an optimizing compiler) required in imallett's code.
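To check that the two answers agree when gl_DepthRange is the default [0, 1], here is the window-depth computation sketched in Python with hypothetical clip-space values:

```python
def frag_depth(clip_z, clip_w, near=0.0, far=1.0):
    # clip space -> NDC (perspective divide) -> window depth, as above
    ndc_depth = clip_z / clip_w
    return ((far - near) * ndc_depth + near + far) / 2.0

def frag_depth_simplified(clip_z, clip_w):
    # The near=0, far=1 special case: 0.5 * ndc + 0.5
    return 0.5 * (clip_z / clip_w) + 0.5

# NDC z of -1 maps to depth 0 (near plane), +1 maps to 1 (far plane)
assert frag_depth(-2.0, 2.0) == 0.0
assert frag_depth(2.0, 2.0) == 1.0

# Both formulas give the same result for the default depth range
assert frag_depth(1.0, 4.0) == frag_depth_simplified(1.0, 4.0)
```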