OpenGL shader fill works with constant color, doesn't work with interpolation, how to debug? - glsl

We have code that mostly works for filling polygons on a map, though it currently draws convex hulls and therefore over-fills some areas (proper tessellation will be needed later).
The shader is given a set of triangle-fan draws and renders them with a hardcoded yellow, and that works.
When we instead try to interpolate the color based on the incoming value, everything turns black (does not work).
Here is the fragment shader. The incoming values are all in the range 0.0 to 1.0,
with minVal = 0.0, maxVal = 1.0,
and the colors set to (0,0,1) and (1,0,0).
While I would appreciate knowing the bug, I would much rather learn how to debug it myself. I need to be able to see the values inside the shader and watch what is happening; in short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics) but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;
void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    //FragColor = vec4(1,1,0,1); //texture(ourTexture, f);
    FragColor = vec4(minColor*(1.0-f) + maxColor*f, 1.0);
}

It turns out that we were using glUniform4fv to set a color with RGBA values, while minColor and maxColor are declared as vec3 in the shader; the matching call is glUniform3fv.
There was no compile or link error, and glUniform* calls have no error return value. The mismatch does, however, raise GL_INVALID_OPERATION, which glGetError (or a debug-output callback) would have reported.
Because the uniforms were never actually set, they kept their default value of zero, so the interpolation
vec4(minColor*(1.0-f) + maxColor * f, 1.0);
was always black.
I have since found printf-style techniques on Stack Overflow that would have allowed viewing this kind of information, e.g. "Convert floating-point numbers to decimal digits in GLSL".
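Short of a real GLSL debugger, the usual fallback is to route the suspect values straight to the fragment output and judge them from the rendered colors. A sketch based on the shader above (the commented-out lines are the debug outputs, to be enabled one at a time):

#version 330 core
out vec4 FragColor;
in float val;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;

void main()
{
    float f = (val - minVal) / (maxVal - minVal);

    // Debug outputs - enable one at a time and look at the picture:
    // FragColor = vec4(vec3(f), 1.0);  // grey ramp => the interpolant f is fine
    // FragColor = vec4(minColor, 1.0); // solid black => the uniform never arrived
    // FragColor = vec4(maxColor, 1.0); // solid black => the uniform never arrived

    FragColor = vec4(mix(minColor, maxColor, f), 1.0); // same as the lerp above
}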


WebGL: Interpolate between old and new value

I have been struggling with this for two days now; it seems like such a simple task, but I cannot find the right solution.
With p5.js I am creating a GL instance and sending uniforms to the vertex shader.
Now I want to change the global variable that backs the uniform and have the shader interpolate from the old value to the new value.
In the example below, after the button is clicked, it should not change the color instantly, but do small increments to get to the new color.
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 iResolution;
uniform float iTime;
uniform vec3 uSky;
void main()
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = gl_FragCoord.xy/iResolution.xy;
    uv -= 0.5;
    uv.x *= iResolution.x/iResolution.y;

    vec3 sky = uSky;
    //sky = mix(sky, uSky, 0.001);
    // ^- the 0.001 would need to be ramped up to 1 every time the uniform changes

    // Output to screen
    gl_FragColor = vec4(sky, 1.0);
}
https://glitch.com/~shader-prb
As Rabbid76 commented, I cannot change the uniform's value from within the shader.
The solution was to lerp the color on the host and upload the current value as the uniform each frame.
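The shader has no memory between frames, so a blend factor cannot accumulate inside it; the transition has to be driven from the host. Besides lerping the color on the host as described, the host could also upload the previous color and the time of the change and let the shader derive the blend factor from iTime. A sketch of that variant, with hypothetical uniforms uPrevSky and uChangeTime that the original code does not have:

#ifdef GL_ES
precision mediump float;
#endif

uniform float iTime;
uniform vec3 uSky;         // target color, as before
uniform vec3 uPrevSky;     // hypothetical: the color before the button click
uniform float uChangeTime; // hypothetical: the iTime value at the moment of the click

void main()
{
    // Ramp from 0 to 1 over one second after the change, then stay at 1.
    float t = clamp(iTime - uChangeTime, 0.0, 1.0);
    vec3 sky = mix(uPrevSky, uSky, t);
    gl_FragColor = vec4(sky, 1.0);
}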

Banding Problem in Multi Step Shader with Ping Pong Buffers, does not happen in ShaderToy

I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points along a 1D kernel in a given direction; the kernel step grows exponentially with each pass, and the color values are weighted by their distance from the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the frag shader:
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;
const float kernelSize = 4.0;
const float atten = 0.95;
vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for(int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}

void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: Disregard the first shader in Buffer A, it just filters out the dim colors in the input texture to emulate a star field since afaik ShaderToy doesn't allow uploading custom textures)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times, increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to lose brightness much faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it on JSFiddle in the hope that someone can spot the problem. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that ShaderToy uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I verified this with that modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);

Failed attempt to create a pixelating shader for opengl

I'm making a libGDX based game and I've tried to write a pixelating shader. To me it looks like it should work, but it doesn't: I just see one color of the texture all over the screen. The goal was to turn a detailed texture into a pixelated texture.
Here is the code of my fragment shader:
precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform sampler2D u_sampler2D;
void main(){
    ivec2 texSize = textureSize(u_sampler2D, 0);
    vec4 color = texture2D(u_sampler2D, vec2(int(v_texCoord0.x * texSize.x) / texSize.x, int(v_texCoord0.y * texSize.y) / texSize.y)) * v_color;
    gl_FragColor = color;
}
What I am trying to do here is: I get the size of the texture. Then, with that size, I 'pixelate' v_texCoord0 and get the color of that pixel.
As soon as I remove the int casts in
int(v_texCoord0.x * texSize.x) / texSize.x, int(v_texCoord0.y * texSize.y)
I see the texture as normal; otherwise I see what I described at the beginning of this post. That said, anything in my code could be wrong.
I hope someone with experience could help me fix this problem!
You are doing an integer division here:
ivec2 texSize;
[...] int(v_texCoord0.x * texSize.x) / texSize.x
The result can only be an integer, and if v_texCoord0.x is in the range [0,1], it will produce zero everywhere except at the rightmost edge of your texture, where the fragment samples exactly at the border.
You should apply floating-point math to get the effect you want:
vec2 texSize=vec2(textureSize( u_sampler2D, 0));
vec4 color = texture2D(u_sampler2D, floor(v_texCoord0 * texSize) / texSize);
(Also note that there is no need to work with the x and y components individually, you can use vector operations.)
But what you're doing here is not completely clear. Conceptually, you are emulating what GL_NEAREST filtering gives you for free (except that your selection is shifted by half a texel), so the question is what you gain from it. If you use GL_LINEAR filtering, the above formula will always sample at texel corners, so the linear filter averages the colors of a 2x2 texel block. If you use GL_NEAREST, the formula will not give you any more pixelation than you had before; it just shifts the texture in a weird way. If you use a filter with mipmapping, the formula will completely break the mipmap selection because the equation is non-continuous (this also prevents the GL from reliably distinguishing between texture minification and magnification, so it does not break only mipmapping).
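For completeness (this goes beyond the answer above): if the goal really is a blockier image, the coordinate has to be snapped to blocks of several texels rather than to single texels. A sketch with a hypothetical u_pixelSize uniform controlling the block size:

precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform sampler2D u_sampler2D;
uniform float u_pixelSize; // hypothetical: block size in texels, e.g. 8.0

void main() {
    // textureSize() needs GLSL 1.30 / ES 3.00; otherwise pass the size as a uniform.
    vec2 texSize = vec2(textureSize(u_sampler2D, 0));
    // Size of one block in texture-coordinate space.
    vec2 block = u_pixelSize / texSize;
    // Sample at the center of each block, so GL_LINEAR filtering stays inside the block.
    vec2 snapped = (floor(v_texCoord0 / block) + 0.5) * block;
    gl_FragColor = texture2D(u_sampler2D, snapped) * v_color;
}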

Qt 5 with QOpenGLTexture and 16 bits integers images

For a while I've been using RGB images at 32-bit floating-point precision in textures with QOpenGLTexture, and I had no trouble with it.
Originally those images have an unsigned short data type, and I'd like to keep that data type when sending the data to OpenGL (by the way, does doing so actually save any memory?). After many attempts, I can't get QOpenGLTexture to display the image; all I end up with is a black image.
Below is how I set up the QOpenGLTexture. The part that used floating point, which worked so far, is commented out. The part that assumes images in 16-bit unsigned integers is right below it, uncommented. I'm using OpenGL 3.3, GLSL 330, core profile, on a MacBook Pro Retina with Iris graphics.
QOpenGLTexture *oglt = new QOpenGLTexture(QOpenGLTexture::Target2D);
oglt->setMinificationFilter(QOpenGLTexture::NearestMipMapNearest);
oglt->setMagnificationFilter(QOpenGLTexture::NearestMipMapNearest);
//oglt->setFormat(QOpenGLTexture::RGB32F); // works
oglt->setFormat(QOpenGLTexture::RGB16U);
oglt->setSize(naxis1, naxis2);
oglt->setMipLevels(10);
//oglt->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32); // works
//oglt->setData(QOpenGLTexture::RGB, QOpenGLTexture::Float32, tempImageRGB.data); // works
oglt->allocateStorage(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16);
oglt->setData(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16, tempImageRGB.data);
So, in just these lines above, is there something wrong?
The data in tempImageRGB.data are in [0, 65535] when I use UInt16. When I use QOpenGLTexture::Float32, the values in tempImageRGB.data are already normalized, so they are within [0, 1].
Then, here is my fragment shader:
#version 330 core
in mediump vec2 TexCoord;
out vec4 color;
uniform mediump sampler2D ourTexture;
void main()
{
    mediump vec3 textureColor = texture(ourTexture, TexCoord).rgb;
    color = vec4(textureColor, 1.0);
}
What am I missing?
It seems I fixed the problem simply by not using NearestMipMapNearest as the magnification filter; things work if I only use it for minification. That makes sense in general (the mipmapped modes are only valid as minification filters), but I don't understand why I had no problem using NearestMipMapNearest for both magnification and minification in the floating-point case.
So the code works after simply changing 'sampler2D' to 'usampler2D' in the shader and changing 'setMagnificationFilter(QOpenGLTexture::NearestMipMapNearest)' to 'setMagnificationFilter(QOpenGLTexture::Nearest)'. The minification filter does not need to change. In addition, although it works either way, I did not need to set the mip levels explicitly, so oglt->setMipLevels(10) can be removed.
To be clear, here is the corrected code:
QOpenGLTexture *oglt = new QOpenGLTexture(QOpenGLTexture::Target2D);
oglt->setMinificationFilter(QOpenGLTexture::NearestMipMapNearest);
oglt->setMagnificationFilter(QOpenGLTexture::Nearest);
//oglt->setFormat(QOpenGLTexture::RGB32F); // works
oglt->setFormat(QOpenGLTexture::RGB16U); // now works with integer images (unsigned)
oglt->setSize(naxis1, naxis2);
//oglt->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32); // works
//oglt->setData(QOpenGLTexture::RGB, QOpenGLTexture::Float32, tempImageRGB.data); // works
oglt->allocateStorage(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16); // now works with integer images (unsigned)
oglt->setData(QOpenGLTexture::RGB_Integer, QOpenGLTexture::UInt16, tempImageRGB.data); // now works with integer images (unsigned)
The fragment shader becomes simply:
#version 330 core
in mediump vec2 TexCoord;
out vec4 color;
uniform mediump usampler2D ourTexture;
void main()
{
    mediump vec3 textureColor = texture(ourTexture, TexCoord).rgb;
    color = vec4(textureColor, 1.0);
}
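One caveat worth adding (it is not part of the answer above): an integer sampler returns raw, unnormalized values, so the sampled components are still in the range [0, 65535]. If they are meant to end up as colors in [0, 1], they need to be scaled explicitly, for example:

#version 330 core
in mediump vec2 TexCoord;
out vec4 color;
uniform mediump usampler2D ourTexture;

void main()
{
    // texture() on a usampler2D yields unsigned integers; convert them and
    // scale down from the 16-bit range to [0, 1] for display.
    mediump vec3 textureColor = vec3(texture(ourTexture, TexCoord).rgb) / 65535.0;
    color = vec4(textureColor, 1.0);
}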

OpenGL / GLSL varying not interpolated across GL_QUAD

I am using GLSL to render a basic cube (made from GL_QUADS surfaces). I would like to pass the gl_Vertex content from the vertex shader into the fragment shader. Everything works if I use gl_FrontColor (vertex shader) and gl_Color (fragment shader) for this, but it doesn't work when I use a plain varying (see code and image below). It appears the varying is not interpolated across the surface for some reason. Any idea what could cause this in OpenGL?
glShadeModel is set to GL_SMOOTH - I can't think of anything else that could cause this effect right now.
Vertex Shader:
#version 120
varying vec4 frontSideValue;
void main() {
    frontSideValue = gl_Vertex;
    gl_Position = transformPos;
}
Fragment Shader:
#version 120
varying vec4 frontSideValue;
void main() {
    gl_FragColor = frontSideValue;
}
The result looks exactly as if you were using values outside the range [0,1] for the color vector. You are basically using the untransformed vertex position, which may well lie outside that range. Your cube seems to be centered around the origin, so the narrow transition where the values actually are in [0,1] appears as that unsharp band.
With the built-in gl_FrontColor, the value apparently gets clamped before the interpolation.
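One way to get the smooth gradient with a plain varying is to remap the position into [0,1] before it leaves the vertex shader. A minimal sketch, assuming the cube's object-space coordinates lie in [-1, 1] (the scale and offset would need adjusting for a different cube):

#version 120
varying vec4 frontSideValue;

void main() {
    // Remap the object-space position from [-1, 1] into [0, 1] before it is
    // interpolated, so the varying stays inside the displayable color range.
    frontSideValue = vec4(gl_Vertex.xyz * 0.5 + 0.5, 1.0);
    gl_Position = transformPos; // position transform as in the original shader
}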