In the following shadertoy I illustrate an artefact that occurs when raymarching
https://www.shadertoy.com/view/stdGDl
This is my "scene" (see the code fragment below). It renders a primitive tunnel_fragment, which is an SDF (signed distance function), and uses modulo on the coordinates to produce "infinite" repetitions of these fragments. It then also calculates which disk we are in (odd/even) in order to displace them.
I really don't understand why these artefacts appear when the alternating movement in the x direction becomes large. (See tunnel_fragment: removing a comment turns the disks into rings, and the artefacts are there either way.)
The artefacts don't appear when the disk structure moves to the right as a whole; they only show up when the disks alternate and the overall structure becomes more complex.
What am I doing wrong? It's really boggling me.
vec2 scene(in vec3 p)
{
    float thick = 0.1;
    vec3 cp = p;
    // Use modulo to simulate infinitely many disks
    vec3 c = vec3(0, 0, 6.0*thick);
    vec3 q = mod(cp + 0.5*c, c) - 0.5*c;
    // Find the index of the disk we are in
    vec3 disk = (cp + 0.5*c) / c;
    float idx = floor(disk.z);
    // Do something simple with odd/even disks
    // (t is a time-dependent value defined elsewhere in the shader)
    // Note: changing this shows the artefacts are always there
    if (mod(idx, 2.0) == 0.0) {
        q.x += sin(disk.z*t)*t*t;
    } else {
        q.x -= sin(disk.z*t)*t*t;
    }
    float d = tunnel_fragment(q, vec3(0.0), vec3(0.0, 0.0, 1.0), 2.0, thick, 0.2);
    return vec2(d, idx);
}
The problem is illustrated with this diagram:
When the current disk (based on the modulo) is offset by more than the spacing between the disks, the distance you calculate can be larger than the distance to the next disk. Consequently you risk overstepping the next disk.
To solve this you need to either limit the offset (as said, to no more than the spacing between the disks), or sample the odd and even disks separately and take the min() of the two distances.
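To make the overstepping concrete, here is a small Python sketch (my own hypothetical 2D analogue of the scene, not the asker's code) comparing the modulo-based SDF against a brute-force minimum over the real disk instances:

```python
import math

# Hypothetical 2D analogue: circles ("disks") of radius R centred every C
# units along z, with even/odd instances offset by +/-offset in x. The
# modulo-based SDF only ever sees the disk in the sample point's own cell.

C = 0.6   # cell spacing along z (6 * thick in the shader)
R = 0.2   # disk radius

def disk_sdf(p, center):
    """Signed distance from p=(x, z) to a circle of radius R at center."""
    return math.hypot(p[0] - center[0], p[1] - center[1]) - R

def repeated_sdf(p, offset):
    """Modulo repetition: evaluate only the disk in p's own cell."""
    idx = math.floor((p[1] + 0.5 * C) / C)
    local_z = p[1] - idx * C                  # same as mod(p+0.5c, c)-0.5c
    ox = offset if idx % 2 == 0 else -offset  # odd/even displacement
    return disk_sdf((p[0] - ox, local_z), (0.0, 0.0))

def true_sdf(p, offset, n=50):
    """Ground truth: brute-force min over many real disk instances."""
    return min(disk_sdf(p, (offset if i % 2 == 0 else -offset, i * C))
               for i in range(-n, n + 1))

# A point sitting right next to an *odd* disk, while its own cell is even:
p = (-2.0, 0.25)
est = repeated_sdf(p, 2.0)   # large: only sees the far-away even disk
real = true_sdf(p, 2.0)      # small: the odd disk just above is very close
print(est, real)             # est >> real, so sphere tracing oversteps
```

With a small offset the two agree, but once the offset exceeds the cell spacing the per-cell estimate badly overestimates the true distance, which is exactly the condition under which a sphere-tracing step skips past a neighbouring disk.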
I'm working on a WebGL project to create isolines on a 3D surface on macOS with an AMD GPU. My idea is to colour the pixels based on elevation in the fragment shader. With some optimizations I can achieve a relatively consistent line width, and I am happy with that. However, when I tested it on Windows it behaved differently.
I then figured out it's because of fwidth(). I use fwidth() to prevent the fragment shader from colouring an entire horizontal plane that happens to sit exactly at an isolevel. Please see the screenshot:
I solved this issue by adding the following GLSL line:
if (fwidth(vPositionZ) < 0.001) { /* then do not colour the isoline on these pixels */ }
It works very well on macOS, where I get this:
However, on Windows with an NVIDIA GPU, all isolines are gone because fwidth(vPositionZ) always evaluates to 0.0, which doesn't make sense to me.
What am I doing wrong? Is there a better way to solve the issue shown in the first screenshot? Thank you all!
EDIT:
Here I attach my fragment shader. It's simplified, but I think it contains everything relevant. I know the loop is slow, but for now I'm not worried about that.
uniform float zmin;       // min elevation
uniform vec3 lineColor;
varying float vPositionZ; // elevation value for each vertex

float interval;           // isoline spacing (set elsewhere; declarations of
                          // finalColor, lineWidth and COUNT omitted here)

vec3 originColor = finalColor.rgb; // original surface color
for ( int i = 0; i < COUNT; i ++ ) {
    float elevation = zmin + float( i + 1 ) * interval;
    // suppress isolines where the surface is flat (fwidth ~ 0);
    // note: a uniform cannot be assigned to, so use a local here
    vec3 isoColor = mix( originColor, lineColor, step( 0.001, fwidth( vPositionZ ) ) );
    if ( vPositionZ <= elevation + lineWidth && vPositionZ >= elevation - lineWidth ) {
        finalColor.rgb = isoColor;
    }
    // same thing but without the condition:
    // finalColor.rgb = mix( mix( originColor, isoColor, step( elevation - lineWidth, vPositionZ ) ),
    //                       originColor,
    //                       step( elevation + lineWidth, vPositionZ ) );
}
gl_FragColor = finalColor;
Environment: WebGL2.0, es version 300, chrome browser.
Putting the fwidth(vPositionZ) call before the loop makes it work; inside the loop, fwidth() always evaluated to 0.
I suspect this is an NVIDIA driver bug. In any case, GLSL only guarantees well-defined derivatives in uniform control flow, so computing fwidth() once outside the loop is the more portable pattern.
I've written a raytracer in C++. This is the snippet for calculating the diffuse component:
// diffuse component
color diffuse(0, 0, 0);
if (intrs.mat.diffuseness > 0)
{
    for (auto &light : lights)
    {
        // define a ray from the hit point to the light
        ray light_dir(intrs.point, (light->point - intrs.point).normalize());
        double nl = light_dir.direction * intrs.normal; // dot product
        double diminish_coeff = 1.0;
        double dist = intrs.point.sqrDistance(light->point);
        // check whether the ray reaches the light unobstructed
        if (nl > 0)
        {
            for (int i = 0; i < (int)shapes.size(); ++i)
            {
                shape::intersection temp_intrs(shapes[i]->intersect(light_dir, shapes[i]->interpolate_normals));
                if (temp_intrs.valid && temp_intrs.point.sqrDistance(intrs.point) < dist)
                {
                    diminish_coeff *= shadow_darkness;
                    break;
                }
            }
        }
        diffuse += intrs.mat.diffuseness * intrs.mat.col * light->light_color * light->light_intensity * nl * diminish_coeff;
    }
}
Of course I can't post the entire code, but I think it should be clear what I'm doing here: intrs is the current intersection of a ray and an object, and shapes is a vector of all objects in the scene.
Colors are represented as RGB in the (0,1) range; addition and multiplication of colors are simple member-wise operations. Only when the raytracing is over and I want to write the image file do I multiply my colors by 255, clamping any component that ends up larger than that.
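The write-out step described above can be sketched in a few lines of Python (my own mock of the described behaviour, not the asker's code):

```python
def color_to_rgb8(c):
    """Member-wise: multiply by 255 and clamp any component above 255."""
    return tuple(min(255, int(round(ch * 255))) for ch in c)

print(color_to_rgb8((0.5, 1.0, 1.7)))  # (128, 255, 255)
```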
Currently, there is one point light in the scene and it's white: color(1,1,1), intensity = 1.0.
This is my rendered image:
So this is not right: the cupboard on the left is supposed to be green, and the box is supposed to be red.
Is there something obviously wrong with my implementation? I can't seem to figure it out. I'll post some more code if necessary.
It seems that your diffuse += line should be inside the if (nl > 0) condition, not outside it; as written, a light behind the surface contributes a negative nl and subtracts color.
I found the problem. For some reason, my intrs.normal vector wasn't normalized. Thank you everyone for your help.
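The accepted fix is easy to demonstrate outside the raytracer. A small Python sketch (my own, not the asker's code) shows how an unnormalized normal distorts the Lambert term:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

light_dir = (0.0, 1.0, 0.0)   # unit vector toward the light
bad_normal = (0.0, 3.0, 0.0)  # correct direction, wrong length

nl_bad = dot(light_dir, bad_normal)              # 3.0: pushes colors past 1
nl_good = dot(light_dir, normalize(bad_normal))  # 1.0: the correct cosine
print(nl_bad, nl_good)
```

The dot product is only the Lambert cosine when both vectors are unit length; a normal of length 3 scales every diffuse contribution by 3, which washes colored surfaces out toward white once the components are clamped.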
To learn OpenGL, I'm creating a simple 3D graphics engine using OpenGL 3.3. I recently added light attenuation over distance, and this has turned all objects completely black. It was done by adding the following code to my light calculations in the fragment shader:
float distance = length(lite.position - FragPos);
float attenuation = 1.0f/(lite.constant + (lite.linear * distance) + (lite.quadratic * (distance * distance)));
ambient *= attenuation;
diffuse *= attenuation;
specular *= attenuation;
result += (ambient + diffuse + specular);
It seems safe to assume that attenuation is very small (effectively or actually 0) or negative (black). To test this I used result += vec3(attenuation); the result was white objects, indicating that attenuation is not near 0 but rather 1.0 or larger. An additional test, result += vec3(attenuation/500000);, still produced white, which suggests attenuation is quite large, perhaps infinite. So I ran some infinity and NaN checks on it. The NaN check told me it is a number; the infinity checks told me it is sometimes infinite and sometimes isn't. In fact, they told me it is both infinite and not infinite at the same time. I determined this using the following code segment:
if (isinf(attenuation)) {
    result += vec3(1.0, 0.0, 0.0);
}
if (isinf(attenuation) && !isinf(attenuation)) {
    result += vec3(0.0, 1.0, 0.0);
}
if (!isinf(attenuation)) {
    result += vec3(0.0, 0.0, 1.0);
}
My objects turned purple/magenta. Were attenuation infinite, I would expect my objects to appear red; were it not infinite, I would expect blue; were it somehow both infinite and not infinite, I would expect green. If I change the result += ... to result = ..., the objects appear red. In that case, were it both infinite and not infinite as the purple suggests, result would first be set to red and then to blue, producing blue objects (assuming the green check somehow fails).
I hope this conveys the source of my confusion: my testing shows that attenuation is infinite, that it is not infinite, AND that it is neither.
To top everything off, when I use:
float attenuation = 1.0f/(1.0 + (0.0014 * distance) + (0.000007* (distance * distance)));
to determine the attenuation factor, everything works exactly as expected; however, the values shown here as constants are exactly what is passed in from my OpenGL calls (C++):
glUniform1f(lightConstantLoc, 1.0f);
glUniform1f(lightLinearLoc, 0.0014f);
glUniform1f(lightQuadraticLoc, 0.000007f);
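For what it's worth, those constants produce a perfectly tame attenuation curve. A quick Python evaluation of the same formula (my own check) shows values falling smoothly from 1.0, nowhere near infinity:

```python
def attenuation(d, constant=1.0, linear=0.0014, quadratic=0.000007):
    """The shader's attenuation formula with the constants quoted above."""
    return 1.0 / (constant + linear * d + quadratic * d * d)

for d in (0.0, 10.0, 100.0, 1000.0):
    print(d, attenuation(d))  # 1.0 at d=0, decreasing with distance
```

This supports the conclusion below: the math is fine, so the values actually reaching the shader cannot have been these constants.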
From that I should conclude that my data is not being delivered to my shaders correctly; however, I'm confident my lite.constant etc. values have been set correctly, and that distance is a reasonable value. When I single each one out as a color, the objects do turn that color, i.e. using this
result = vec3(lite.constant, 0.0, 0.0);
my objects turn some shade of red, and likewise for lite.linear etc.
Searching google and stack overflow for things like "glsl isinf true and false" or "glsl variable is and isn't infinite" gives me absolutely no relevant results.
I get the feeling I'm distinctly ignorant of something happening here, or the way something works. And so I turn to you, am I missing something obvious, doing this all wrong, or is this a true mystery?
I'm not sure why your attenuation is so large, but the explanation for your is/isn't infinite issue is simple -- at least one component of attenuation is infinite, while at least one of the other components is not.
When you do if (bvec) -- testing a condition that is a boolean vector rather than a single boolean -- it acts as if you did if (any(bvec)). So it will take the true branch if any of the components are true. When you have isinf(attenuation), you get a boolean vector. For example, if the red component is infinite and the others are not, you'll get (true, false, false). So !isinf(attenuation) will be (false, true, true), and the result of the && in the middle if is (false, false, false).
So it executes the first and third if (and not the second), giving you magenta.
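The component-wise behaviour is easy to mimic outside GLSL. Here is a Python illustration (my own) of the three tests with a vector whose first component is infinite:

```python
import math

# attenuation as a 3-vector where only the first component is infinite
att = (float("inf"), 0.5, 0.25)

is_inf = tuple(math.isinf(x) for x in att)              # (True, False, False)
not_inf = tuple(not b for b in is_inf)                  # (False, True, True)
both = tuple(a and b for a, b in zip(is_inf, not_inf))  # (False, False, False)

# GLSL's if(bvec) behaves like if(any(bvec)):
print(any(is_inf), any(both), any(not_inf))  # True False True -> red + blue
```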
The problem lies in code that is not shown in the question.
The crucial piece of information is that my shader supports up to 5 light sources and iterates across all 5 of them, even when fewer than 5 light sources are provided. With this in mind, changing
vec3 result = vec3(0.0);
for (int i = 0; i < 5; ++i) {
    Light lite = light[i];
    ...
to
vec3 result = vec3(0.0);
for (int i = 0; i < 1; ++i) {
    Light lite = light[i];
    ...
solves the problem, and everything now behaves perfectly. Apparently data that was never supplied is neither infinite nor finite; the values read from the unset array entries are simply undefined, which makes sense to some degree.
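One ordinary value that is classified as neither infinite nor finite is NaN. Whether that is exactly what the unset array entries produced here is speculation, but the classification itself is easy to check in Python (my own illustration):

```python
import math

x = float("nan")  # stand-in for an undefined value read from unset data
print(math.isinf(x), math.isfinite(x))  # False False: neither infinite nor finite
```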
I need to debug a GLSL program, but I don't know how to output intermediate results.
Is it possible to make some debug traces (like with printf) in GLSL?
You can't easily communicate back to the CPU from within GLSL. Using glslDevil or other tools is your best bet.
A printf would require getting data back to the CPU from the GPU running the GLSL code. Instead, you can push ahead to the display: rather than trying to output text, output something visually distinctive to the screen. For example, you can paint something a specific color only if you reach the point in your code where you would want to add a printf. If you need to printf a value, you can set the color according to that value.
void main() {
    float bug = 0.0;
    vec3 tile = texture2D(colMap, coords.st).xyz;
    vec4 col = vec4(tile, 1.0);

    if (something) bug = 1.0;
    col.x += bug;

    gl_FragColor = col;
}
I have found Transform Feedback to be a useful tool for debugging vertex shaders. You can use this to capture the values of VS outputs, and read them back on the CPU side, without having to go through the rasterizer.
Here is another link to a tutorial on Transform Feedback.
GLSL Sandbox has been pretty handy to me for shaders.
Not debugging per se (which, as answered above, isn't really possible), but handy for quickly seeing changes in output.
You can try this: https://github.com/msqrt/shader-printf which is an implementation called appropriately "Simple printf functionality for GLSL."
You might also want to try ShaderToy, and maybe watch a video like this one (https://youtu.be/EBrAdahFtuo) from "The Art of Code" YouTube channel, where you can see some of the techniques that work well for debugging and visualising. I can strongly recommend his channel, as he writes some really good stuff and has a knack for presenting complex ideas in novel, highly engaging and easy-to-digest formats (his Mandelbrot video is a superb example: https://youtu.be/6IWXkV82oyY).
I hope nobody minds this late reply, but the question ranks high on Google searches for GLSL debugging and much has of course changed in 9 years :-)
PS: Other alternatives are NVIDIA Nsight and AMD ShaderAnalyzer, which offer full stepping debuggers for shaders.
If you want to visualize the variations of a value across the screen, you can use a heatmap function similar to this (I wrote it in HLSL, but it is easy to adapt to GLSL):
float4 HeatMapColor(float value, float minValue, float maxValue)
{
#define HEATMAP_COLORS_COUNT 6
    float4 colors[HEATMAP_COLORS_COUNT] =
    {
        float4(0.32, 0.00, 0.32, 1.00),
        float4(0.00, 0.00, 1.00, 1.00),
        float4(0.00, 1.00, 0.00, 1.00),
        float4(1.00, 1.00, 0.00, 1.00),
        float4(1.00, 0.60, 0.00, 1.00),
        float4(1.00, 0.00, 0.00, 1.00),
    };
    float ratio = (HEATMAP_COLORS_COUNT - 1.0) * saturate((value - minValue) / (maxValue - minValue));
    float indexMin = floor(ratio);
    float indexMax = min(indexMin + 1, HEATMAP_COLORS_COUNT - 1);
    return lerp(colors[indexMin], colors[indexMax], ratio - indexMin);
}
Then in your pixel shader you just output something like:
return HeatMapColor(myValue, 0.00, 50.00);
And can get an idea of how it varies across your pixels:
Of course you can use any set of colors you like.
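If it helps to verify expected colors on the CPU, here is a Python mirror of the function above (my own port, RGB only):

```python
def heatmap(value, vmin, vmax):
    """Python mirror of the HLSL HeatMapColor above (RGB only)."""
    colors = [(0.32, 0.0, 0.32), (0.0, 0.0, 1.0), (0.0, 1.0, 0.0),
              (1.0, 1.0, 0.0), (1.0, 0.6, 0.0), (1.0, 0.0, 0.0)]
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))  # saturate()
    ratio = (len(colors) - 1) * t
    lo = int(ratio)                     # floor(ratio) for ratio >= 0
    hi = min(lo + 1, len(colors) - 1)
    f = ratio - lo
    # component-wise lerp(colors[lo], colors[hi], f)
    return tuple(a + (b - a) * f for a, b in zip(colors[lo], colors[hi]))

print(heatmap(0.0, 0.0, 50.0))   # coldest ramp color (dark purple)
print(heatmap(50.0, 0.0, 50.0))  # hottest ramp color (red)
```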
At the bottom of this answer is an example of GLSL code which lets you output the full float value as a color, encoded as IEEE 754 binary32. I use it as follows (this snippet outputs the yy component of the modelview matrix):
vec4 xAsColor = toColor(gl_ModelViewMatrix[1][1]);
if (bool(1)) // put 0 here to get the lowest byte instead of the three highest
    gl_FrontColor = vec4(xAsColor.rgb, 1);
else
    gl_FrontColor = vec4(xAsColor.a, 0, 0, 1);
After you get this on screen, you can just take any color picker and format the color as HTML (appending 00 to the RGB value if you don't need higher precision, or doing a second pass to get the lowest byte if you do), and you get the hexadecimal representation of the float as IEEE 754 binary32.
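The color-picker step can be finished off on the CPU; for example, in Python (my own helper, with hypothetical hex strings):

```python
import struct

def color_hex_to_float(hexstr):
    """Decode an 8-hex-digit color string as a big-endian IEEE 754 binary32."""
    return struct.unpack(">f", bytes.fromhex(hexstr))[0]

# "3f8000" picked from screen, "00" appended for the missing low byte:
print(color_hex_to_float("3f800000"))  # 1.0
```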
Here's the actual implementation of toColor():
const int emax=127;
// Input: x>=0
// Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
// -emax if x==0
// emax+1 otherwise
int floorLog2(float x)
{
    if (x == 0.) return -emax;
    // NOTE: there exist values of x for which floor(log2(x)) will give a wrong
    // (off by one) result compared to the one calculated with infinite precision,
    // so we do it in a brute-force way.
    for (int e = emax; e >= 1-emax; --e)
        if (x >= exp2(float(e))) return e;
    // If we are here, x must be infinity or NaN
    return emax + 1;
}
// Input: any x
// Output: IEEE 754 biased exponent with bias=emax
int biasedExp(float x) { return emax+floorLog2(abs(x)); }
// Input: any x such that (!isnan(x) && !isinf(x))
// Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
// undefined otherwise
float significand(float x)
{
    // converting int to float so that exp2(genType) gets correctly-typed value
    float expo = float(floorLog2(abs(x)));
    return abs(x) / exp2(expo);
}
// Input: x\in[0,1)
// N>=0
// Output: Nth byte as counted from the highest byte in the fraction
int part(float x, int N)
{
    // All comments about exactness here assume that underflow and overflow don't occur
    const float byteShift = 256.;
    // Multiplication is exact since it's just an increase of exponent by 8
    for (int n = 0; n < N; ++n)
        x *= byteShift;
    // Cut higher bits away.
    // $q \in [0,1) \cap \mathbb Q'.$
    float q = fract(x);
    // Shift and cut lower bits away. Cutting lower bits prevents potentially unexpected
    // results of rounding by the GPU later in the pipeline when transforming to TrueColor
    // the resulting subpixel value.
    // $c \in [0,255] \cap \mathbb Z.$
    // Multiplication is exact since it's just an increase of exponent by 8
    float c = floor(byteShift * q);
    return int(c);
}
// Input: any x acceptable to significand()
// Output: significand of x split to (8,8,8)-bit data vector
ivec3 significandAsIVec3(float x)
{
    ivec3 result;
    float sig = significand(x)/2.; // shift all bits to fractional part
    result.x = part(sig, 0);
    result.y = part(sig, 1);
    result.z = part(sig, 2);
    return result;
}
// Input: any x such that !isnan(x)
// Output: IEEE 754 defined binary32 number, packed as ivec4(byte3,byte2,byte1,byte0)
ivec4 packIEEE754binary32(float x)
{
    int e = biasedExp(x);
    // sign to bit 7
    int s = x < 0. ? 128 : 0;

    ivec4 binary32;
    binary32.yzw = significandAsIVec3(x);
    // clear the implicit integer bit of significand
    if (binary32.y >= 128) binary32.y -= 128;
    // put lowest bit of exponent into its position, replacing just cleared integer bit
    binary32.y += 128*int(mod(float(e), 2.));
    // prepare high bits of exponent for fitting into their positions
    e /= 2;
    // pack highest byte
    binary32.x = e + s;

    return binary32;
}
vec4 toColor(float x)
{
    ivec4 binary32 = packIEEE754binary32(x);
    // Transform color components to [0,1] range.
    // Division is inexact, but works reliably for all integers from 0 to 255 if
    // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
    // The result will be multiplied by 255 back when transformed
    // to TrueColor subpixel value by OpenGL.
    return vec4(binary32)/255.;
}
I am sharing a fragment shader example of how I actually debug.
#version 410 core
uniform sampler2D samp;

in VS_OUT
{
    vec4 color;
    vec2 texcoord;
} fs_in;

out vec4 color;

void main(void)
{
    vec4 sampColor;
    if (texture2D(samp, fs_in.texcoord).x > 0.8f)    // check if the color contains red
        sampColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);    // if yes, set it to white
    else
        sampColor = texture2D(samp, fs_in.texcoord); // else sample from the original
    color = sampColor;
}
The existing answers are all good stuff, but I wanted to share one more little gem that has been valuable in debugging tricky precision issues in a GLSL shader. With very large int numbers represented as floating point, one needs to take care to use floor(n) and floor(n + 0.5) properly to implement round() to an exact int. It is then possible to render a float value that holds an exact int by packing its byte components into the R, G, and B output values, as follows.
// Break components out of 24 bit float with rounded int value
// scaledWOB = (offset >> 8) & 0xFFFF
float scaledWOB = floor(offset / 256.0);
// c2 = (scaledWOB >> 8) & 0xFF
float c2 = floor(scaledWOB / 256.0);
// c0 = offset - (scaledWOB << 8)
float c0 = offset - floor(scaledWOB * 256.0);
// c1 = scaledWOB - (c2 << 8)
float c1 = scaledWOB - floor(c2 * 256.0);
// Normalize to byte range
vec4 pix;
pix.r = c0 / 255.0;
pix.g = c1 / 255.0;
pix.b = c2 / 255.0;
pix.a = 1.0;
gl_FragColor = pix;
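The same decomposition can be verified on the CPU. Here is a Python mirror (my own, using integer arithmetic in place of the float floor() tricks) together with the inverse:

```python
def split24(offset):
    """Integer mirror of the float math above: split a 24-bit value into bytes."""
    scaledWOB = offset // 256      # floor(offset / 256.0)
    c2 = scaledWOB // 256          # floor(scaledWOB / 256.0)
    c0 = offset - scaledWOB * 256  # offset - (scaledWOB << 8)
    c1 = scaledWOB - c2 * 256      # scaledWOB - (c2 << 8)
    return c0, c1, c2

def join24(c0, c1, c2):
    """Reassemble the original value from the three byte components."""
    return c0 + (c1 << 8) + (c2 << 16)

print(split24(0xABCDEF))  # (239, 205, 171)
```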
The GLSL shader source code is compiled and linked by the graphics driver and executed on the GPU.
If you want to debug the shader, you have to use a graphics debugger like RenderDoc or NVIDIA Nsight.
I found a very nice GitHub library (https://github.com/msqrt/shader-printf) that lets you use a printf function directly in a shader file.
Use this:
vec3 dd(vec3 finalColor, vec3 valueToDebug) {
    // debugging: show valueToDebug in the lower-left corner of the screen
    finalColor.x = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.x : finalColor.x;
    finalColor.y = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.y : finalColor.y;
    finalColor.z = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.z : finalColor.z;
    return finalColor;
}

// in the main function, the second argument is the value to debug
colour = dd(colour, vec3(0.0, 1.0, 1.0));
gl_FragColor = vec4(clamp(colour * 20., 0., 1.), 1.0);
Do offline rendering to a texture and evaluate the texture's data.
You can find related code by googling for "render to texture" OpenGL.
Then use glReadPixels to read the output into an array and perform assertions on it (since looking through such a huge array in the debugger is usually not very useful).
You might also want to disable clamping, so that output values outside [0,1] are preserved; that is only supported for floating-point textures.
I personally was bothered by the problem of properly debugging shaders for a while, and there does not seem to be a good way. If anyone finds a good (and not outdated/deprecated) debugger, please let me know.