In a recent project of mine, I procedurally create fragment shaders that look like this (but possibly larger), which I display using WebGL:
precision mediump float;
uniform vec2 u_windowSize;
void main() {
float s = 2.0 / min(u_windowSize.x, u_windowSize.y);
vec2 pos0 = s * (gl_FragCoord.xy - 0.5 * u_windowSize);
if (length(pos0) > 1.0) { gl_FragColor = vec4(0,0,0,0); return; }
vec2 pos1 = pos0/0.8;
vec2 pos2 = ((1.0-length(pos1))/length(pos1)) * pos1;
vec3 col2 = vec3(1.0,1.0,1.0);
vec2 pos3 = pos2;
vec3 col3 = vec3(1.0,0.0,0.0);
vec2 tmp2 = 6.0*(1.0/sqrt(2.0)) * mat2(1.0,1.0,-1.0,1.0) * pos2;
vec3 col4;
if (mod(tmp2.x, 2.0) < 1.0 != mod(tmp2.y, 2.0) < 1.0) {
col4 = col2;
} else {
col4 = col3;
};
vec2 pos5 = pos0;
vec3 col5 = vec3(0.0,1.0,1.0);
vec3 col6;
if (length(pos0) < 0.8) {
col6 = col4;
} else {
col6 = col5;
};
gl_FragColor = vec4(col6, 1.0);
}
Obviously there is some redundancy here that you would not write by hand – copying pos2 into pos3 is pointless, for example. But since I generate this code, this is convenient.
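For reference, this is roughly what the same shader would look like written by hand (just a sketch to show what I mean by redundancy; my generator does not produce this):
precision mediump float;
uniform vec2 u_windowSize;
void main() {
    float s = 2.0 / min(u_windowSize.x, u_windowSize.y);
    vec2 pos0 = s * (gl_FragCoord.xy - 0.5 * u_windowSize);
    if (length(pos0) > 1.0) { gl_FragColor = vec4(0.0); return; }
    vec3 col;
    if (length(pos0) < 0.8) {
        // checkerboard inside the inverted inner disc
        vec2 pos1 = pos0 / 0.8;
        vec2 pos2 = ((1.0 - length(pos1)) / length(pos1)) * pos1;
        vec2 tmp = 6.0 * (1.0 / sqrt(2.0)) * mat2(1.0, 1.0, -1.0, 1.0) * pos2;
        col = ((mod(tmp.x, 2.0) < 1.0) != (mod(tmp.y, 2.0) < 1.0))
            ? vec3(1.0)             // white squares
            : vec3(1.0, 0.0, 0.0);  // red squares
    } else {
        col = vec3(0.0, 1.0, 1.0);  // cyan ring
    }
    gl_FragColor = vec4(col, 1.0);
}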
Before I now prematurely start optimizing and make my generator produce hopefully more efficient code, I’d like to know:
Do browsers and/or graphics drivers already optimize such things (so I don't have to)?
There is no requirement for the browser to do any optimization. You can see what the browser is sending to the driver by using the WEBGL_debug_shaders extension.
Example:
const gl = document.createElement('canvas').getContext('webgl');
const ext = gl.getExtension('WEBGL_debug_shaders');
const vs = `
attribute vec4 position;
uniform mat4 matrix;
void main() {
float a = 1. + 2. * 3.; // does this get optimized to 7?
float b = a; // does this get discarded?
gl_Position = matrix * position * vec4(b, a, b, a);
}
`;
const s = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(s, vs);
gl.compileShader(s);
console.log(ext.getTranslatedShaderSource(s));
On my machine/driver/browser that returns this result
#version 410
in vec4 webgl_74509a83309904df;
uniform mat4 webgl_5746d1f3d2c2394;
void main(){
(gl_Position = vec4(0.0, 0.0, 0.0, 0.0));
float webgl_2420662cd003acfa = 7.0;
float webgl_44a9acbe7629930d = webgl_2420662cd003acfa;
(gl_Position = ((webgl_5746d1f3d2c2394 * webgl_74509a83309904df) * vec4(webgl_44a9acbe7629930d, webgl_2420662cd003acfa, webgl_44a9acbe7629930d, webgl_2420662cd003acfa)));
}
We can see that in this case the simple constant math was optimized, but the fact that a and b are the same was not. That said, there is no guarantee that other browsers would make this optimization.
Whether or not drivers optimize is up to the driver. Most drivers will do at least a little optimization, but full optimization takes time. DirectX can take more than 5 minutes to optimize a single complex shader with full optimization on, so optimization is probably something that should be done offline. In the DirectX case you're expected to save the binary shader result to avoid paying those 5 minutes the next time the shader is needed, but that's not possible for WebGL: binary shaders would be neither portable nor secure. Having the browser freeze for a long time while waiting for the compile would also be unacceptable, so browsers can't ask DirectX for full optimization. That said, some browsers cache binary shader results behind the scenes.
Related
Currently I am working on an OSG project for my studies and wrote a cel-shading shader (alongside a simple fog shader). I first render with the cel shader, along with the depth buffer, to a texture and then apply the fog shader. Everything works fine on my AMD Radeon HD 7950 and on my Intel HD 4400 (although it is slow on the latter), both running Windows. However, on a Quadro 600 running Linux, the shader compiles without error but the result is still wrong: the light is dulled, and because some light spots are missing it seems that not every light in the scene is used. The whole toon effect is also gone.
I confirmed the shader works on another AMD card, an ATI Mobility HD 3400.
But on other NVIDIA cards, like a GTX 670, 660 Ti or 560 Ti (this time on Windows), the shader is not working. At first it was totally messed up because of non-uniform flow, but even after I fixed that it still does not work.
I have had this problem for some days now and it is giving me a headache. I do not know what I am missing; why does it work on a simple Intel HD 4400 but not on high-end NVIDIA cards?
Strangely, the fogShader is working perfectly on every system and gives me the nice fog I want.
Does anyone have an idea? The uniform is set for toonTex, but texture0 is not set, because the model is UV-mapped with Blender; the textures seem to work just fine anyway (look at the pony in the screenshots). I am assuming unit 0 is used for texture0 by default, which is perfectly valid as far as I know. Here is a video showing the shader on a GTX 660 Ti. Something seems to work if there is only one light, but it is not how it should look; on a Radeon HD 7950 it looks like this (ignore the black border, that is a screenshot issue).
The light is clearly different.
EDIT: I just did another test: on the Intel HD 4400 under Windows it works, but the same system running Linux shows only a lot of white with some outlines and no textures at all.
Does anyone have any suggestions?
The sources for the shaders are here:
celShader.vert
#version 120
varying vec3 normalModelView;
varying vec4 vertexModelView;
uniform bool zAnimation;
uniform float osg_FrameTime;
void main()
{
normalModelView = gl_NormalMatrix * gl_Normal;
vertexModelView = gl_ModelViewMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
vec4 vertexPos = gl_Vertex;
if(zAnimation){//
vertexPos.z = sin(5.0*vertexPos.z + osg_FrameTime)*0.25;//+ vertexPos.z;
}
gl_Position = gl_ModelViewProjectionMatrix * vertexPos;
}
celShader.frag
#version 120
#define NUM_LIGHTS 5
uniform sampler2D texture0;
uniform sampler2D toonTex;
uniform float osg_FrameTime;
uniform bool tex;
varying vec3 normalModelView;
varying vec4 vertexModelView;
vec4 calculateLightFromLightSource(int lightIndex, bool front){
vec3 lightDir;
vec3 eye = normalize(-vertexModelView.xyz);
vec4 curLightPos = gl_LightSource[lightIndex].position;
//curLightPos.z = sin(10*osg_FrameTime)*4+curLightPos.z;
lightDir = normalize(curLightPos.xyz - vertexModelView.xyz);
float dist = distance( gl_LightSource[lightIndex].position, vertexModelView );
float attenuation = 1.0 / (gl_LightSource[lightIndex].constantAttenuation
+ gl_LightSource[lightIndex].linearAttenuation * dist
+ gl_LightSource[lightIndex].quadraticAttenuation * dist * dist);
float z = length(vertexModelView);
vec4 color;
vec3 n = normalize(normalModelView);
vec3 nBack = normalize(-normalModelView);
float intensity = dot(n,lightDir); //NdotL, Lambert
float intensityBack = dot(nBack,lightDir); //NdotL, Lambert
//-Phong Modell
vec3 reflected = normalize(reflect( -lightDir, n));
float specular = pow(max(dot(reflected, eye), 0.0), gl_FrontMaterial.shininess);
vec3 reflectedBack = normalize(reflect( -lightDir, nBack));
float specularBack = pow(max(dot(reflectedBack, eye), 0.0), gl_BackMaterial.shininess);
//Toon-Shading
//2D Toon http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S12/final_projects/hutchins_kim.pdf
vec4 toonColor = texture2D(toonTex,vec2(intensity,specular));
vec4 toonColorBack = texture2D(toonTex,vec2(intensityBack,specularBack));
if(front){
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_FrontMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensity * attenuation ;
color += gl_FrontMaterial.specular * gl_LightSource[lightIndex].specular * specular *attenuation ;
}
return color * toonColor;
} else {//back
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_BackMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensityBack * attenuation ;
color += gl_BackMaterial.specular * gl_LightSource[lightIndex].specular * specularBack *attenuation ;
}
return color * toonColorBack;
}
}
void main(void) {
vec4 color = vec4(0.0);
bool front = true;
//non-uniform-flow error correction
//see more here: http://www.opengl.org/wiki/GLSL_Sampler#Non-uniform_flow_control
//and here: http://gamedev.stackexchange.com/questions/32543/glsl-if-else-statement-unexpected-behaviour
vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
if(!gl_FrontFacing)
front = false;
for(int i = 0; i< NUM_LIGHTS; i++){
color += calculateLightFromLightSource(i,front);
}
if(tex)
gl_FragColor =color * texColor;
else
gl_FragColor = color;
}
fogShader.vert
#version 120
varying vec4 vertexModelView;
void main()
{
gl_Position = ftransform();
vertexModelView = gl_ModelViewMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
fogShader.frag
varying vec4 vertexModelView;
uniform sampler2D texture0;
uniform sampler2D deepth;
uniform vec3 fogColor;
uniform float zNear;
uniform float zFar;
float linearDepth(float z){
return (2.0 * (zNear+zFar)) / ((zFar + zNear) - z * (zFar - zNear));// -1.0;
}
void main(void){
//Literature
//http://www.ozone3d.net/tutorials/glsl_fog/p04.php and depth_of_field example OSG Cookbook
vec2 deepthPoint = gl_TexCoord[0].xy;
float z = texture2D(deepth, deepthPoint).x;
//fogFactor = (end - z) / (end - start)
z = linearDepth(z);
float fogFactor = (4000*4-z) / (4000*4 - 30*4);
fogFactor = clamp(fogFactor, 0.0, 1.0);
vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
gl_FragColor = mix(vec4(fogColor,1.0), texColor,fogFactor);
}
ProgramLinking
osg::ref_ptr<osg::Shader> toonFrag = osgDB::readShaderFile("../Shader/celShader.frag");
osg::ref_ptr<osg::Shader> toonVert = osgDB::readShaderFile("../Shader/" + _vertSource);
osg::ref_ptr<osg::Program> celShadingProgram = new osg::Program;
celShadingProgram->addShader(toonFrag);
celShadingProgram->addShader(toonVert);
osg::ref_ptr<osg::Texture2D> toonTex = new osg::Texture2D;
toonTex->setImage(osgDB::readImageFile("../BlenderFiles/Texturen/toons/" + _toonTex));
toonTex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
toonTex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
osg::ref_ptr<osg::StateSet> ss = new osg::StateSet;
ss->setTextureAttributeAndModes(1, toonTex, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("toonTex", 1));
ss->setAttributeAndModes(celShadingProgram, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
//TODO NEEED?
ss->setTextureMode(1, GL_TEXTURE_1D, osg::StateAttribute::OVERRIDE | osg::StateAttribute::OFF);
ss->addUniform(new osg::Uniform("tex", true));
ss->addUniform(new osg::Uniform("zAnimation", false));
Okay, I finally found the error.
There was a faulty line, present since version zero of my shader, that I overlooked for a whole week (and I am surprised my AMD driver did not give me an error; I thought it was just plain wrong!
EDIT: not wrong at all, see the comment below!).
These two lines were broken:
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
ambient is of course not an array... (indexing the vec4 with [lightIndex] is still accepted by the compiler, since it just selects a single component, which is why no error was reported).
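For completeness, the obvious correction (my reading of the fix; I did not originally show the repaired lines) is simply to drop the stray index:
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient;
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient;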
I have encountered a problem where a GLSL shader generates an incorrect image on the following GPUs:
GT 430
GT 770
GTX 570
GTX 760
But it works normally on these:
Intel HD Graphics 2500
Intel HD 4000
Intel 4400
GTX 740M
Radeon HD 6310M
Radeon HD 8850
Shader code is as follows:
bool PointProjectionInsideTriangle(vec3 p1, vec3 p2, vec3 p3, vec3 point)
{
vec3 n = cross((p2 - p1), (p3 - p1));
vec3 n1 = cross((p2 - p1), n);
vec3 n2 = cross((p3 - p2), n);
vec3 n3 = cross((p1 - p3), n);
float proj1 = dot((point - p2), n1);
float proj2 = dot((point - p3), n2);
float proj3 = dot((point - p1), n3);
if(proj1 > 0.0)
return false;
if(proj2 > 0.0)
return false;
if(proj3 > 0.0)
return false;
return true;
}
struct Intersection
{
vec3 point;
vec3 norm;
bool valid;
};
Intersection GetRayTriangleIntersection(vec3 rayPoint, vec3 rayDir, vec3 p1, vec3 p2, vec3 p3)
{
vec3 norm = normalize(cross(p1 - p2, p1 - p3));
Intersection res;
res.norm = norm;
res.point = vec3(rayPoint.xy, 0.0);
res.valid = PointProjectionInsideTriangle(p1, p2, p3, res.point);
return res;
}
struct ColoredIntersection
{
Intersection geomInt;
vec4 color;
};
#define raysCount 15
void main(void)
{
vec2 radius = (gl_FragCoord.xy / vec2(800.0, 600.0)) - vec2(0.5, 0.5);
ColoredIntersection ints[raysCount];
vec3 randomPoints[raysCount];
int i, j;
for(int i = 0; i < raysCount; i++)
{
float theta = 0.5 * float(i);
float phi = 3.1415 / 2.0;
float r = 1.0;
randomPoints[i] = vec3(r * sin(phi) * cos(theta), r * sin(phi)*sin(theta), r * cos(phi));
vec3 tangent = normalize(cross(vec3(0.0, 0.0, 1.0), randomPoints[i]));
vec3 trianglePoint1 = randomPoints[i] * 2.0 + tangent * 0.2;
vec3 trianglePoint2 = randomPoints[i] * 2.0 - tangent * 0.2;
ints[i].geomInt = GetRayTriangleIntersection(vec3(radius, -10.0), vec3(0.0, 0.0, 1.0), vec3(0.0, 0.0, 0.0), trianglePoint1, trianglePoint2);
if(ints[i].geomInt.valid)
{
float c = length(ints[i].geomInt.point);
ints[i].color = vec4(c, c, c, 1.0);
}
}
for(i = 0; i < raysCount; i++)
{
for(j = i + 1; j < raysCount; j++)
{
if(ints[i].geomInt.point.z < ints[i].geomInt.point.z - 10.0)
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
ColoredIntersection tmp = ints[j];
ints[j] = ints[i];
ints[i] = tmp;
}
}
}
vec4 resultColor = vec4(0.0, 0.0, 0.0, 0.0);
for(i = 0; i < raysCount + 0; i++)
{
if(ints[i].geomInt.valid)
resultColor += ints[i].color;
}
gl_FragColor = clamp(resultColor, 0.0, 1.0);
}
Update: I have replaced the vector normalizations with built-in functions and added gl_FragColor clamping just in case.
The code is a simplified version of an actual shader; the expected image is:
But what I get is:
Random rotations of the code remove the artifacts completely. For example, if I change the line
if(ints[i].geomInt.valid) //1
to
if(ints[i].geomInt.valid == true) //1
which apparently should not affect the logic in any way, or if I completely remove the double cycle that does nothing (marked as 2), the artifacts vanish. Please note that the double cycle does nothing at all, since the condition
if(ints[i].geomInt.point.z < ints[i].geomInt.point.z - 10.0)
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
return;
ColoredIntersection tmp = ints[j];
ints[j] = ints[i];
ints[i] = tmp;
}
can never be satisfied (both sides use index i, not i and j), and there are no NaNs. This code does absolutely nothing, yet it somehow produces artifacts.
You can test the shader and demo on your own using this project(full MSVS 2010 project + sources + compiled binary and a shader, uses included SFML): https://dl.dropboxusercontent.com/u/25635148/ShaderTest.zip
I use SFML in this test project, but that is irrelevant, because the actual project in which I encountered this problem does not use this library.
What I want to know is why these artifacts appear and how to reliably avoid them.
I don't think anything is wrong with your shader. The OpenGL pipeline renders to a framebuffer. If you make use of that framebuffer before the rendering has completed, you will often get what you have seen. Please bear in mind that glDrawArrays and similar are asynchronous (the function returns before the GPU has finished drawing the vertices).
The most common cause of those square artefacts is using the resulting framebuffer as a texture which is then used for further rendering.
The OpenGL driver is supposed to keep track of dependencies and should know how to wait for dependencies to be fulfilled.
If you are sharing a framebuffer across threads, however, all bets are off: you might then need to use something like a fence sync (glFenceSync) to ensure that one thread waits for rendering that is taking place on another thread.
As a workaround, you might find calling glFinish or even glReadPixels (with one pixel) sorts the issue out.
Please also bear in mind that this problem is timing related and simplifying a shader might very well make the issue go away.
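To make the fence suggestion concrete, here is a hedged sketch (the surrounding contexts and resource names are assumed, not taken from the question):
// Thread/context A: right after drawing into the FBO that backs the texture
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // make sure the fence command reaches the GPU before another context waits on it

// Thread/context B: before sampling that texture
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);          // GPU-side wait, does not block the CPU
// or block the CPU instead:
// glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000); // 1 s timeout
glDeleteSync(fence);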
In case anyone is still interested: I asked this question on numerous specialized sites, including opengl.org and devtalk.nvidia.com. I did not receive any concrete answer on what is wrong with my shader, just some suggestions on how to work around the problem, such as using if(condition == true) instead of if(condition), keeping the algorithms as simple as possible, and so on. In the end I chose one of the simplest rotations of my code that gets rid of the problem: I just replaced
struct Intersection
{
vec3 point;
vec3 norm;
bool valid;
};
with
struct Intersection
{
bool valid;
vec3 point;
vec3 norm;
};
There were numerous other code rotations that made the artifacts disappear, but I chose this one because I was able to test it on most of the systems I had trouble with before.
I've seen this exact thing happen in GLSL when variables aren't initialized. For example, a vec3 will be (0,0,0) by default on some graphics cards, but on other graphics cards it will be a different value. Are you sure you're not using a variable without first assigning it a value? Specifically, you aren't initializing ColoredIntersection.color if Intersection.valid is false, but I think you are using it later.
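A minimal guard along those lines (my sketch, based on the first loop in the question's main()) would be to give color a defined value on every path:
if (ints[i].geomInt.valid) {
    float c = length(ints[i].geomInt.point);
    ints[i].color = vec4(c, c, c, 1.0);
} else {
    ints[i].color = vec4(0.0, 0.0, 0.0, 0.0); // otherwise the later loops copy and read an undefined value
}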
Using OpenGL 3.3 core profile, I'm rendering a full-screen "quad" (as a single oversized triangle) via gl.DrawArrays(gl.TRIANGLES, 0, 3) with the following shaders.
Vertex shader:
#version 330 core
#line 1
vec4 vx_Quad_gl_Position () {
const float extent = 3;
const vec2 pos[3] = vec2[](vec2(-1, -1), vec2(extent, -1), vec2(-1, extent));
return vec4(pos[gl_VertexID], 0, 1);
}
void main () {
gl_Position = vx_Quad_gl_Position();
}
Fragment shader:
#version 330 core
#line 1
out vec3 out_Color;
vec3 fx_RedTest (const in vec3 vCol) {
return vec3(0.9, 0.1, 0.1);
}
vec3 fx_Grayscale (const in vec3 vCol) {
return vec3((vCol.r * 0.3) + (vCol.g * 0.59) + (vCol.b * 0.11));
}
void main () {
out_Color = fx_RedTest(out_Color);
out_Color = fx_Grayscale(out_Color);
}
Now, the code may look a bit odd and its present purpose may seem useless, but that shouldn't faze the GL driver.
On a GeForce, I get a gray screen as expected: the "grayscale effect" applied to the hard-coded color "red" (0.9, 0.1, 0.1).
However, the Intel HD 4000 [driver version 9.17.10.2932 (12-12-2012), the newest as of today] always, repeatedly shows nothing but the following constantly flickering noise pattern:
Now, just to experiment a little, I changed the fx_Grayscale() function around a little bit; effectively it should yield the same visual result, just coded slightly differently:
vec3 fx_Grayscale (const in vec3 vCol) {
vec3 col = vec3(0.9, 0.1, 0.1);
col = vCol;
float x = (col.r * 0.3) + (col.g * 0.59) + (col.b * 0.11);
return vec3(x, x, x);
}
Again, NVIDIA does the correct thing, whereas the Intel HD now always produces a rather different, but still constantly flickering, noise pattern:
Must I suspect (yet another) Intel GL driver bug, or do you see any issues with my GLSL code -- not from a prettiness perspective (it's part of a shader code-gen experimental project) but from a mere spec-correctness point of view?
I think it looks strange to pass an "out" color as a parameter to another function. I would rewrite it something like this:
void main () {
vec3 col = vec3(0.0, 0.0, 0.0);
col = fx_RedTest(col);
col = fx_Grayscale(col);
out_Color = col;
}
Does it make any difference?
I want to fade the screen to a specific color using GLSL.
So far this is my GLSL code, and it works quite well:
uniform sampler2D textureSampler;
uniform vec2 texcoordOffset;
uniform vec3 sp;
uniform vec3 goal;
varying vec4 vertColor;
varying vec4 vertTexcoord;
void main(void) {
vec3 col=texture2D(textureSampler, vertTexcoord.st).rgb;
gl_FragColor = vec4(col+((goal-col)/sp), 1.0);
//gl_FragColor = vec4(col+((goal-col)*sp), 1.0); //as suggested below also this doesn't solve the problem
}
The only problem I have is that with higher sp values the colors are not faded completely to the new color. I think the problem is caused by the accuracy with which the shader works.
Does anyone have an idea how to increase the accuracy?
EDIT:
Could it be that this effect is driver dependent? I'm using an ATI card with the latest drivers; maybe someone could try the code on an NVIDIA card?
Let's break it down:
float A, B;
float Mix;
float C = A + (B-A) / Mix;
Now it's fairly easy to see that Mix has to be infinite to create pure A, so it isn't GLSL's fault at all. The normally used equation is as follows:
float C = A + (B-A) * Mix;
// Let's feed some data:
// Mix = 0 -> C = A;
// Mix = 1 -> C = A + (B - A) = A + B - A = B;
// Mix = 0.5 -> C = A + 0.5*(B - A) = A + 0.5*B - 0.5*A = 0.5*A + 0.5*B
Correct, right?
Change your code to:
gl_FragColor = vec4(col+((goal-col) * sp), 1.0);
And use the range <0, 1> for sp instead. Also, shouldn't sp actually be a float? If all of its components are equal (i.e. sp.x == sp.y == sp.z), you can just change its type and it will work, as referenced here.
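As an aside (not part of the original answer): GLSL's built-in mix() computes exactly this A + (B-A) * Mix form, so the corrected line can also be written as:
gl_FragColor = vec4(mix(col, goal, sp), 1.0); // sp (vec3 or float) in the range [0, 1]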
I've implemented Cascaded Shadow Mapping as in the nvidia SDK (http://developer.download.nvidia.com/SDK/10.5/Samples/cascaded_shadow_maps.zip). However my lookup just doesn't seem to work.
Here's a picture depicting my current state: http://i.imgur.com/SCHDO.png
The problem is, I end up in the first split right away even though I'm far away from it. As you can see, the other splits aren't even considered.
I thought the reason for this might be the different projection matrix the main engine is using. It's different from the one I supply to the algorithm, but I also tried passing the same matrix to the shader and computing the position this way: gl_Position = matProj * gl_ModelViewMatrix * gl_Vertex
That really didn't change a thing though. I still ended up with only one split.
Here are my shaders:
[vertex]
varying vec3 vecLight;
varying vec3 vecEye;
varying vec3 vecNormal;
varying vec4 vecPos;
varying vec4 fragCoord;
void main(void)
{
vecPos = gl_Vertex;
vecNormal = normalize(gl_NormalMatrix * gl_Normal);
vecLight = normalize(gl_LightSource[0].position.xyz);
vecEye = normalize(-vecPos.xyz);
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = ftransform();
}
[fragment] (just the shadow part)
vec4 getShadow()
{
vec4 sm_coord_c = texmat_3*vecPos;
float shadow = texture2D(smap_3, sm_coord_c.xy).x;
float s = (shadow < sm_coord_c.z) ? 0.0 : 1.0;
vec4 shadow_c = vec4(1.0, 1.0, 1.0, 1.0) * s;
if(gl_FragCoord.z < vecFarbound.x)
{
vec4 sm_coord_c = texmat_0*vecPos;
float shadow = texture2D(smap_0, sm_coord_c.xy).x;
float s = (shadow < sm_coord_c.z) ? 0.0 : 1.0;
shadow_c = vec4(0.7, 0.7, 1.0, 1.0) * s;
}
else if(gl_FragCoord.z < vecFarbound.y)
{
vec4 sm_coord_c = texmat_1*vecPos;
float shadow = texture2D(smap_1, sm_coord_c.xy).x;
float s = (shadow < sm_coord_c.z) ? 0.0 : 1.0;
shadow_c = vec4(0.7, 1.0, 0.7, 1.0) * s;
}
else if(gl_FragCoord.z < vecFarbound.z)
{
vec4 sm_coord_c = texmat_2*vecPos;
float shadow = texture2D(smap_2, sm_coord_c.xy).x;
float s = (shadow < sm_coord_c.z) ? 0.0 : 1.0;
shadow_c = vec4(1.0, 0.7, 0.7, 1.0) * s;
}
return shadow_c;
}
So for some reason, gl_FragCoord.z is smaller than vecFarbound.x no matter where in the scene I am. (Also notice the shadowed area to the far left; it grows the higher I move the camera and soon takes over the whole scene.)
I've checked the vecFarbound values and they're similar to the ones in NVIDIA's code, so I assume I calculated them correctly.
Is there a way to check gl_FragCoord.z's value?
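(One quick way to inspect it, added here as a debugging suggestion rather than something from the original post, is to temporarily dump the value as a gray level at the top of the fragment shader:)
// gl_FragCoord.z lies in [0, 1] with the default depth range; brighter = farther
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
return;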
In my old CSM implementation I simply used the distance in camera space:
float tempDist = 0.0;
tempDist = dot(EyePos.xyz, EyePos.xyz);
if (tempDist < split.x) ...
else if (tempDist < split.y) ...
...
This solution was a bit simpler for me to understand, and I got better control over the splits. When you use the Z value (in clip space) there can be problems that come from the depth value being non-linear.
I suggest doing the split tests in view space first, and then (if that works) switching back to gl_FragCoord.z.
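Adapted to the shaders above, that could look roughly like this (a sketch only: it assumes an extra eye-space varying, here called vecEyePos, and that the split values hold squared eye-space distances):
// vertex shader (assumed addition): varying vec4 vecEyePos; ... vecEyePos = gl_ModelViewMatrix * gl_Vertex;
// fragment shader: pick the cascade by squared eye-space distance instead of gl_FragCoord.z
float d2 = dot(vecEyePos.xyz, vecEyePos.xyz);
vec4 sm_coord_c;
if (d2 < split.x)      sm_coord_c = texmat_0 * vecPos; // sample smap_0
else if (d2 < split.y) sm_coord_c = texmat_1 * vecPos; // sample smap_1
else if (d2 < split.z) sm_coord_c = texmat_2 * vecPos; // sample smap_2
else                   sm_coord_c = texmat_3 * vecPos; // sample smap_3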