Problem with ShaderToy shader converted to PixiJS v4 - glsl

I am trying to convert the most basic ShaderToy shader (https://www.shadertoy.com/new) for use with PixiJS v4.5.5. All I am getting is a completely still background.
The background is supposed to move and blend between colors, just like in the ShaderToy example. I am not getting any errors in the console.
My code:
let width = window.innerWidth;
let height = window.innerHeight;
let app = new PIXI.Application(width, height);
document.body.appendChild(app.view);

let shaderFrag = `
precision mediump float;
uniform vec3 iResolution; // viewport resolution (in pixels)
uniform float iTime; // shader playback time (in seconds)

void main() {
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = gl_FragCoord.xy/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    // Output to screen
    gl_FragColor = vec4(col,1.0);
}
`;

let container = new PIXI.Container();
container.filterArea = app.screen;
app.stage.addChild(container);

let filter = new PIXI.Filter(null, shaderFrag);
filter.uniforms.iResolution = [width, height, 1.0];
filter.uniforms.iTime = 1.0;
container.filters = [filter];

// Animate the filter
app.ticker.add(function(delta) {
    filter.uniforms.iTime += 0.1;
});
What could be the issue here?
PS: the exact same shader code works perfectly with Three.js.

You are facing a bug in PixiJS.
See issue #5048, "Comments in shader can comment out lines of code".
I debugged the code and found that PixiJS (version 4.8.2) parses the fragment shader source to find the uniforms.
Here is the original code, copied from the library:
function extractUniformsFromString(string)
{
    const maskRegex = new RegExp('^(projectionMatrix|uSampler|filterArea|filterClamp)$');
    const uniforms = {};
    let nameSplit;

    // clean the lines a little - remove extra spaces / tabs etc
    // then split along ';'
    const lines = string.replace(/\s+/g, ' ').split(/\s*;\s*/);

    // loop through..
    for (let i = 0; i < lines.length; i++)
    {
        const line = lines[i].trim();

        if (line.indexOf('uniform') > -1)
        {
            const splitLine = line.split(' ');
            const type = splitLine[1];
            let name = splitLine[2];
            let size = 1;

            if (name.indexOf('[') > -1)
            {
                // array!
                nameSplit = name.split(/\[|]/);
                name = nameSplit[0];
                size *= Number(nameSplit[1]);
            }

            if (!name.match(maskRegex))
            {
                uniforms[name] = {
                    value: defaultValue(type, size),
                    name,
                    type,
                };
            }
        }
    }

    return uniforms;
}
If a line comment ends up in front of a uniform declaration, as the comment trailing iResolution does in front of iTime in your fragment shader, the uniform cannot be found. PixiJS's automatic uniform synchronisation therefore breaks down, and the value of the uniform is never set.
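To see why, note that replace(/\s+/g, ' ') first collapses the whole shader onto one line, and split(/\s*;\s*/) then cuts it at every ';'. One of the resulting fragments is "// viewport resolution (in pixels) uniform float iTime", which is the trailing comment of the iResolution line glued to the next declaration. This fragment contains the word "uniform", so the parser processes it, but splitting it on spaces yields type = "viewport" and name = "resolution". The iTime uniform is therefore never registered, and updating filter.uniforms.iTime has no effect.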
The workaround is simple: remove the comments that follow the uniform declarations:
precision mediump float;
uniform vec3 iResolution;
uniform float iTime;
....
See the full example:
let width = window.innerWidth;
let height = window.innerHeight;
let app = new PIXI.Application(width, height);
document.body.appendChild(app.view);

let shaderFrag = `
precision mediump float;
uniform vec3 iResolution;
uniform float iTime;

void main() {
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = gl_FragCoord.xy/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    // Output to screen
    gl_FragColor = vec4(col,1.0);
}
`;

let container = new PIXI.Container();
container.filterArea = app.screen;
app.stage.addChild(container);

let filter = new PIXI.Filter(null, shaderFrag);
filter.uniforms.iResolution = [width, height, 1.0];
filter.uniforms.iTime = [1.0];
container.filters = [filter];

app.ticker.add(function(delta) {
    filter.uniforms.iTime[0] += 0.1;
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.8.2/pixi.min.js"></script>

Related

Optimizing pixel-swapping shader

I'm using a shader that swaps colors/palettes on a texture. The shader checks a pixel for transparency and then sets the pixel if not transparent. Is there an efficient way to ignore 0 alpha pixels other than a potential branch? In this case, where I set pixel = newPixel:
uniform bool alternate;
uniform sampler2D texture;

void main()
{
    vec4 pixel = texture2D(bitmap, openfl_TextureCoordv);
    if (alternate)
    {
        vec4 newPixel = texture2D(texture, vec2(pixel.r, pixel.b));
        if (newPixel.a != 0.0)
            pixel = newPixel;
    }
    gl_FragColor = pixel;
}
You can use mix and step:
void main()
{
    vec4 pixel = texture2D(bitmap, openfl_TextureCoordv);
    vec4 newPixel = texture2D(texture, vec2(pixel.r, pixel.b));
    gl_FragColor = mix(pixel, newPixel,
        float(alternate) * (1.0 - step(newPixel.a, 0.0)));
}
You may want to make a smooth transition depending on the alpha channel. In this case you only need mix:
gl_FragColor = mix(pixel, newPixel, float(alternate) * newPixel.a);
I would look into the step function. You can express the if statement arithmetically, weighting each outcome by the condition.
For example:

if (a >= 0.0) {
    b = c;
}

is equivalent to

b = step(0.0, a) * c + (1.0 - step(0.0, a)) * b;
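For reference, step(edge, x) returns 0.0 when x < edge and 1.0 otherwise, so these factors act as branch-free booleans, and multiplying two of them ANDs two conditions. A minimal illustrative fragment shader (the 800.0 viewport width is an assumption for normalisation, not something from the question):

precision mediump float;

void main() {
    // Normalize x against an assumed 800-pixel-wide viewport.
    float x = gl_FragCoord.x / 800.0;
    // step(0.25, x) is 1.0 when x >= 0.25; step(x, 0.75) is 1.0 when x <= 0.75.
    // Their product is 1.0 exactly when both conditions hold.
    float inBand = step(0.25, x) * step(x, 0.75);
    gl_FragColor = mix(vec4(vec3(0.1), 1.0), vec4(1.0), inBand);
}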

Why are my texture coordinates inverted each time I run my GLSL shader in p5.js?

I am trying to use a GLSL shader with p5.js to create a simulation like the Game of Life. To do that, I want to create a shader which takes a texture as a uniform and draws a new texture based on that previous texture. In the next iteration this new texture will be used as the uniform, which should allow me to create a simulation following the idea exposed here. I am experienced with p5.js, but I'm completely new to shader programming, so I'm probably missing something.
For now my code is as straightforward as possible:
In the preload() function, I create a texture using the createImage() function and set some pixels to white and the others to black.
In the setup() function I use this texture to run the shader a first time and create a new texture. I also set a timer to run the shader at regular intervals and draw the result into a buffer.
In the draw() function I draw the buffer onto the canvas.
To keep things simple I keep the canvas and the texture the same size.
My issue is that at some point the y coordinates in my code seem to get inverted, and I don't understand why. My understanding is that my code should show a still image, but each time I run the shader the image is inverted. Here is what I mean:
I am not sure if my issue comes from how I use GLSL, how I use p5, or a mix of both. Can someone explain to me where this weird y inversion comes from?
Here is my minimal reproducible example (which is also in the p5 editor here):
The sketch file:
const sketch = (p5) => {
    const D = 100;
    let initialTexture;
    let shader;
    let graphics;

    p5.preload = () => {
        // Create the initial image
        initialTexture = p5.createImage(D, D);
        initialTexture.loadPixels();
        for (let i = 0; i < initialTexture.width; i++) {
            for (let j = 0; j < initialTexture.height; j++) {
                const alive = i === j || i === 10 || j === 40;
                const color = p5.color(250, 250, 250, alive ? 250 : 0);
                initialTexture.set(i, j, color);
            }
        }
        initialTexture.updatePixels();

        // Initialize the shader
        shader = p5.loadShader('uniform.vert', 'test.frag');
    };

    p5.setup = () => {
        const canvas = p5.createCanvas(D, D, p5.WEBGL);
        canvas.parent('canvasDiv');

        // Create the buffer the shader will draw on
        graphics = p5.createGraphics(D, D, p5.WEBGL);
        graphics.shader(shader);

        /*
         * Initial step to set up the initial texture
         */
        // Used to normalize the frag coordinates
        shader.setUniform('u_resolution', [p5.width, p5.height]);
        // First state of the simulation
        shader.setUniform('u_texture', initialTexture);
        graphics.rect(0, 0, p5.width, p5.height);

        // Call the shader each time interval
        setInterval(updateSimulation, 1000);
    };

    const updateSimulation = () => {
        // Use the previous state as a texture
        shader.setUniform('u_texture', graphics);
        graphics.rect(0, 0, p5.width, p5.height);
    };

    p5.draw = () => {
        p5.background(0);
        // Use the buffer on the canvas
        p5.image(graphics, -p5.width / 2, -p5.height / 2);
    };
};

new p5(sketch);
The fragment shader, which for now only takes the color from the texture and reuses it (I tried using st instead of uv, to no avail):
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;

void main() {
    // Normalize the position between 0 and 1
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    // Get the texture coordinate from the vertex shader
    vec2 uv = vTexCoord;
    // Get the color at the texture coordinate
    vec4 c = texture2D(u_texture, uv);
    // Reuse the same color
    gl_FragColor = c;
}
And the vertex shader, which I took from an example; it does nothing except pass the coordinates through:
/*
 * vert file and comments from adam ferriss
 * https://github.com/aferriss/p5jsShaderExamples
 * with additional comments from Louise Lessel
 */
precision highp float;

// This "vec3 aPosition" is built-in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas.
attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
    // copy the texcoords
    vTexCoord = aTexCoord;
    // copy the position data into a vec4, using 1.0 as the w component
    vec4 positionVec4 = vec4(aPosition, 1.0);
    positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
    // Send the vertex information on to the fragment shader
    // (this is done automatically, as long as you put it into the built-in variable gl_Position)
    gl_Position = positionVec4;
}
Long story short: the texture coordinates for a rectangle or a plane drawn with p5.js are (0, 0) in the bottom left and (1, 1) in the top right, whereas the coordinate system for sampling values from a texture is (0, 0) in the top left and (1, 1) in the bottom right. You can verify this by commenting out your color-sampling code in the fragment shader and using the following:
float val = (uv.x + uv.y) / 2.0;
gl_FragColor = vec4(val, val, val, 1.0);
As you can see by the resulting image:
The value (0 + 0) / 2 results in black in the lower left, and (1 + 1) / 2 results in white in the upper right.
So, to sample the correct portion of the texture you just need to flip the y component of the uv vector:
texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));
const sketch = (p5) => {
    const D = 200;
    let initialTexture;
    let shader;
    let graphics;

    p5.preload = () => {
        // This doesn't actually need to go in preload
        // Create the initial image
        initialTexture = p5.createImage(D, D);
        initialTexture.loadPixels();
        for (let i = 0; i < initialTexture.width; i++) {
            for (let j = 0; j < initialTexture.height; j++) {
                // draw a big checkerboard
                const alive = (p5.round(i / 10) + p5.round(j / 10)) % 2 == 0;
                const color = alive
                    ? p5.color('white')
                    : p5.color(150, p5.map(j, 0, D, 50, 200), p5.map(i, 0, D, 50, 200));
                initialTexture.set(i, j, color);
            }
        }
        initialTexture.updatePixels();
    };

    p5.setup = () => {
        const canvas = p5.createCanvas(D, D, p5.WEBGL);
        // Create the buffer the shader will draw on
        graphics = p5.createGraphics(D, D, p5.WEBGL);
        // Initialize the shader
        shader = graphics.createShader(vert, frag);
        graphics.shader(shader);

        /*
         * Initial step to set up the initial texture
         */
        // Used to normalize the frag coordinates
        shader.setUniform('u_resolution', [p5.width, p5.height]);
        // First state of the simulation
        shader.setUniform('u_texture', initialTexture);
        graphics.rect(0, 0, p5.width, p5.height);

        // Call the shader each time interval
        setInterval(updateSimulation, 100);
    };

    const updateSimulation = () => {
        // Use the previous state as a texture
        shader.setUniform('u_texture', graphics);
        graphics.rect(0, 0, p5.width, p5.height);
    };

    p5.draw = () => {
        p5.background(0);
        // Use the buffer on the canvas
        p5.texture(graphics);
        p5.rect(-p5.width / 2, -p5.height / 2, p5.width, p5.height);
    };

    const frag = `
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;
varying vec2 vPos;

void main() {
    // Get the texture coordinate from the vertex shader
    vec2 uv = vTexCoord;
    gl_FragColor = texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));

    //// For debugging uv coordinate orientation
    // float val = (uv.x + uv.y) / 2.0;
    // gl_FragColor = vec4(val, val, val, 1.0);
}
`;

    const vert = `
/*
 * vert file and comments from adam ferriss
 * https://github.com/aferriss/p5jsShaderExamples
 * with additional comments from Louise Lessel
 */
precision highp float;

// This "vec3 aPosition" is built-in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas
attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
    // copy the texcoords
    vTexCoord = aTexCoord;
    // copy the position data into a vec4, using 1.0 as the w component
    vec4 positionVec4 = vec4(aPosition, 1.0);
    // This maps positions 0..1 to -1..1
    positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
    // Send the vertex information on to the fragment shader
    // (this is done automatically, as long as you put it into the built-in variable gl_Position)
    gl_Position = positionVec4;
}
`;
};
new p5(sketch);
<script src="https://cdn.jsdelivr.net/npm/p5#1.3.1/lib/p5.js"></script>

Is it possible to tween the red color to blue?

I have some code I wrote in the Book of Shaders editor:
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

float remap(float a, float b, float c, float d, float t) {
    return ((t - a) / (b - a)) * (d - c) + c;
}

float outline(vec2 st) {
    return smoothstep(0.99, 1.0, st.y) + smoothstep(0.99, 1.0, st.x)
         + smoothstep(0.01, 0.0, st.y) + smoothstep(0.01, 0.0, st.x);
}

float mouseFoo(vec2 scaledSt, vec2 u_mouse, float scaleVal) {
    vec2 scaledMouse = u_mouse * scaleVal;
    if (scaledSt.x < ceil(scaledMouse.x) && scaledSt.x > floor(scaledMouse.x)
        && scaledSt.y < ceil(scaledMouse.y) && scaledSt.y > floor(scaledMouse.y)) {
        // if(u_mouse.x < 100.0) {
        return 1.0;
    } else {
        return 0.0;
    }
}

void main() {
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    vec3 color = vec3(0.03, 0.07, 0.15);
    vec3 redColor = vec3(1.0, 0.0, 0.0);
    vec3 outlineColor = vec3(1.0);
    float floorSt;
    float scaleVal = 5.0;
    vec2 scaledSt = st * scaleVal;

    // tile
    st *= scaleVal;
    floorSt = floor(st.x);
    st = fract(st);

    // inner color
    color = mix(color, redColor, mouseFoo(scaledSt, u_mouse / u_resolution.xy, scaleVal));
    // outline
    color = mix(color, outlineColor, outline(st));

    gl_FragColor = vec4(color, 1.0);
}
I'm wondering if it's possible to have the red color tween to the blue color when the mouse hovers off a box. I think I might have an idea of how to do it if I were to write data to a texture and look that up, but even then I'm not entirely sure.
Use mix
Use mix to interpolate between red and blue. You need another variable that transitions from 0 to 1 to drive the blend; that variable is mix's third parameter.
ShaderToy example:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;

    vec3 red = vec3(1, 0, 0);
    vec3 blue = vec3(0, 0, 1);

    // Output to screen
    fragColor = vec4(mix(red, blue, uv.x), 1.0);
}
which produces:
In your case, you'll want the third parameter (the alpha or lerp parameter) to be driven over some time (say, 0.2 seconds) after the hover state changes. You'll need to do one of the following (a sketch of the second option follows this list):
Detect the hover change at a higher level and pass the time it happened in as a uniform
Drive the third parameter directly from a uniform
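For instance, here is a minimal sketch of the second option, assuming the host application passes a hypothetical u_hoverStart uniform holding the u_time value at which the hover change occurred (this uniform is an assumption, not part of the Book of Shaders environment):

#ifdef GL_ES
precision mediump float;
#endif

uniform float u_time;
uniform float u_hoverStart; // set by the host when the hover state changes (assumed uniform)

void main() {
    vec3 redColor = vec3(1.0, 0.0, 0.0);
    vec3 blueColor = vec3(0.0, 0.0, 1.0);
    // Ramp from 0 to 1 over 0.2 seconds after the hover change.
    float t = clamp((u_time - u_hoverStart) / 0.2, 0.0, 1.0);
    gl_FragColor = vec4(mix(redColor, blueColor, t), 1.0);
}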

OpenGL - strange SSAO artifact

I followed the tutorial at Learn OpenGL to implement screen-space ambient occlusion (SSAO). Things are mostly looking okay, aside from a strange artifact at the top and bottom of the window.
The problem is more obvious when moving the camera, when it appears as if top parts of the image are imprinted on the bottom and vice versa, as shown in this video.
The artifact worsens when standing close to a wall and looking up and down, so perhaps the Znear value is contributing? The scale of my scene does seem small compared to other demos; Znear and Zfar are 0.01f and 1000, and the width of the shown hallway is around 1.2f.
I've read into the common SSAO artifacts and haven't found anything resembling this.
#version 330 core

in vec2 TexCoords;
layout (location = 0) out vec3 FragColor;

uniform sampler2D MyTexture0; // Position
uniform sampler2D MyTexture1; // Normal
uniform sampler2D MyTexture2; // TexNoise

const int samples = 64;
const float radius = 0.25;
const float bias = 0.025;

uniform mat4 projectionMatrix;
uniform float screenWidth;
uniform float screenHeight;

void main()
{
    // tile noise texture over screen based on screen dimensions divided by noise size
    vec2 noiseScale = vec2(screenWidth / 4.0, screenHeight / 4.0);

    vec3 sample_sphere[64];
    sample_sphere[0] = vec3(0.04977, -0.04471, 0.04996);
    sample_sphere[1] = vec3(0.01457, 0.01653, 0.00224);
    sample_sphere[2] = vec3(-0.04065, -0.01937, 0.03193);
    sample_sphere[3] = vec3(0.01378, -0.09158, 0.04092);
    sample_sphere[4] = vec3(0.05599, 0.05979, 0.05766);
    sample_sphere[5] = vec3(0.09227, 0.04428, 0.01545);
    sample_sphere[6] = vec3(-0.00204, -0.0544, 0.06674);
    sample_sphere[7] = vec3(-0.00033, -0.00019, 0.00037);
    sample_sphere[8] = vec3(0.05004, -0.04665, 0.02538);
    sample_sphere[9] = vec3(0.03813, 0.0314, 0.03287);
    sample_sphere[10] = vec3(-0.03188, 0.02046, 0.02251);
    sample_sphere[11] = vec3(0.0557, -0.03697, 0.05449);
    sample_sphere[12] = vec3(0.05737, -0.02254, 0.07554);
    sample_sphere[13] = vec3(-0.01609, -0.00377, 0.05547);
    sample_sphere[14] = vec3(-0.02503, -0.02483, 0.02495);
    sample_sphere[15] = vec3(-0.03369, 0.02139, 0.0254);
    sample_sphere[16] = vec3(-0.01753, 0.01439, 0.00535);
    sample_sphere[17] = vec3(0.07336, 0.11205, 0.01101);
    sample_sphere[18] = vec3(-0.04406, -0.09028, 0.08368);
    sample_sphere[19] = vec3(-0.08328, -0.00168, 0.08499);
    sample_sphere[20] = vec3(-0.01041, -0.03287, 0.01927);
    sample_sphere[21] = vec3(0.00321, -0.00488, 0.00416);
    sample_sphere[22] = vec3(-0.00738, -0.06583, 0.0674);
    sample_sphere[23] = vec3(0.09414, -0.008, 0.14335);
    sample_sphere[24] = vec3(0.07683, 0.12697, 0.107);
    sample_sphere[25] = vec3(0.00039, 0.00045, 0.0003);
    sample_sphere[26] = vec3(-0.10479, 0.06544, 0.10174);
    sample_sphere[27] = vec3(-0.00445, -0.11964, 0.1619);
    sample_sphere[28] = vec3(-0.07455, 0.03445, 0.22414);
    sample_sphere[29] = vec3(-0.00276, 0.00308, 0.00292);
    sample_sphere[30] = vec3(-0.10851, 0.14234, 0.16644);
    sample_sphere[31] = vec3(0.04688, 0.10364, 0.05958);
    sample_sphere[32] = vec3(0.13457, -0.02251, 0.13051);
    sample_sphere[33] = vec3(-0.16449, -0.15564, 0.12454);
    sample_sphere[34] = vec3(-0.18767, -0.20883, 0.05777);
    sample_sphere[35] = vec3(-0.04372, 0.08693, 0.0748);
    sample_sphere[36] = vec3(-0.00256, -0.002, 0.00407);
    sample_sphere[37] = vec3(-0.0967, -0.18226, 0.29949);
    sample_sphere[38] = vec3(-0.22577, 0.31606, 0.08916);
    sample_sphere[39] = vec3(-0.02751, 0.28719, 0.31718);
    sample_sphere[40] = vec3(0.20722, -0.27084, 0.11013);
    sample_sphere[41] = vec3(0.0549, 0.10434, 0.32311);
    sample_sphere[42] = vec3(-0.13086, 0.11929, 0.28022);
    sample_sphere[43] = vec3(0.15404, -0.06537, 0.22984);
    sample_sphere[44] = vec3(0.05294, -0.22787, 0.14848);
    sample_sphere[45] = vec3(-0.18731, -0.04022, 0.01593);
    sample_sphere[46] = vec3(0.14184, 0.04716, 0.13485);
    sample_sphere[47] = vec3(-0.04427, 0.05562, 0.05586);
    sample_sphere[48] = vec3(-0.02358, -0.08097, 0.21913);
    sample_sphere[49] = vec3(-0.14215, 0.19807, 0.00519);
    sample_sphere[50] = vec3(0.15865, 0.23046, 0.04372);
    sample_sphere[51] = vec3(0.03004, 0.38183, 0.16383);
    sample_sphere[52] = vec3(0.08301, -0.30966, 0.06741);
    sample_sphere[53] = vec3(0.22695, -0.23535, 0.19367);
    sample_sphere[54] = vec3(0.38129, 0.33204, 0.52949);
    sample_sphere[55] = vec3(-0.55627, 0.29472, 0.3011);
    sample_sphere[56] = vec3(0.42449, 0.00565, 0.11758);
    sample_sphere[57] = vec3(0.3665, 0.00359, 0.0857);
    sample_sphere[58] = vec3(0.32902, 0.0309, 0.1785);
    sample_sphere[59] = vec3(-0.08294, 0.51285, 0.05656);
    sample_sphere[60] = vec3(0.86736, -0.00273, 0.10014);
    sample_sphere[61] = vec3(0.45574, -0.77201, 0.00384);
    sample_sphere[62] = vec3(0.41729, -0.15485, 0.46251);
    sample_sphere[63] = vec3(-0.44272, -0.67928, 0.1865);

    // get input for SSAO algorithm
    vec3 fragPos = texture(MyTexture0, TexCoords).xyz;
    vec3 normal = normalize(texture(MyTexture1, TexCoords).rgb);
    vec3 randomVec = normalize(texture(MyTexture2, TexCoords * noiseScale).xyz);

    // create TBN change-of-basis matrix: from tangent-space to view-space
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    // iterate over the sample kernel and calculate occlusion factor
    float occlusion = 0.0;
    for (int i = 0; i < samples; ++i)
    {
        // get sample position
        vec3 sample = TBN * sample_sphere[i]; // from tangent to view-space
        sample = fragPos + sample * radius;

        // project sample position (to get its position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projectionMatrix * offset;   // from view to clip-space
        offset.xyz /= offset.w;               // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;  // transform to range 0.0 - 1.0

        // get sample depth
        float sampleDepth = texture(MyTexture0, offset.xy).z;

        // range check & accumulate
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / samples);

    FragColor = vec3(occlusion);
}
As Rabbid76 suggested, the artifacts were caused by sampling outside of the screen borders. I added a check to prevent this, and things are looking much better:
vec4 clipSpacePos = projectionMatrix * vec4(sample, 1.0); // from view to clip-space
vec3 ndcSpacePos = clipSpacePos.xyz / clipSpacePos.w;     // perspective divide
vec2 windowSpacePos = ((ndcSpacePos.xy + 1.0) / 2.0) * vec2(screenWidth, screenHeight);

if ((windowSpacePos.y > 0.0) && (windowSpacePos.y < screenHeight))
    if ((windowSpacePos.x > 0.0) && (windowSpacePos.x < screenWidth))
    {
        // THEN APPLY AMBIENT OCCLUSION
    }
It hasn't entirely fixed the issue, though, as areas close to the window's edge now appear lighter than they should because fewer samples are tested there. Perhaps somebody can suggest an approach that moves the sample area to an appropriate location?
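One possible mitigation (a sketch built on the same inputs as the shader above, not a tested fix) is to keep the bounds check but count how many samples actually land on screen, then normalise the occlusion sum by that count so edge pixels are not biased brighter:

float occlusion = 0.0;
float validSamples = 0.0;
for (int i = 0; i < samples; ++i)
{
    vec3 samplePos = fragPos + (TBN * sample_sphere[i]) * radius;

    vec4 offset = projectionMatrix * vec4(samplePos, 1.0); // view to clip-space
    offset.xyz /= offset.w;                                // perspective divide
    offset.xyz = offset.xyz * 0.5 + 0.5;                   // to range 0.0 - 1.0

    // Only accumulate when the projected sample lies inside the screen.
    if (offset.x > 0.0 && offset.x < 1.0 && offset.y > 0.0 && offset.y < 1.0)
    {
        float sampleDepth = texture(MyTexture0, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
        validSamples += 1.0;
    }
}
occlusion = 1.0 - (occlusion / max(validSamples, 1.0));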

OpenGL: GLSL float has low precision

I have a float alpha texture that contains amplitude values. It is converted to decibels and displayed in grayscale.
Here is the conversion code (C++):
const float db_min = -100, db_max = 0;
float image[height][width];
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        // assuming `a` holds the amplitudes in row-major order
        image[y][x] = 20.f * log(a[y * width + x]) / log(10.f);
        image[y][x] = (image[y][x] - db_min) / (db_max - db_min);
    }
}
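For reference, this implements dB = 20 * log10(a), remapped from the [-100, 0] dB range to [0, 1]: an amplitude of 0.01, for example, is -40 dB, which maps to (-40 - (-100)) / (0 - (-100)) = 0.6.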
Here is the fragment shader (GLSL):
#version 120
precision highp float;

varying vec2 texcoord;
uniform sampler2D texture;

void main() {
    float value = texture2D(texture, texcoord).a;
    gl_FragColor = vec4(value, value, value, 0);
}
Here is a screenshot:
Looks perfect! Now I want to do the conversion in the fragment shader itself, instead of in C++:
#version 120
precision highp float;

varying vec2 texcoord;
uniform sampler2D texture;

const float db_min = -100., db_max = 0.;

void main() {
    float value = texture2D(texture, texcoord).a;
    value = 20. * log(value) / log(10.);
    value = (value - db_min) / (db_max - db_min);
    gl_FragColor = vec4(value, value, value, 0);
}
Here is a screenshot:
Why are the results different? What am I doing wrong?
Limited texel precision could be the problem. I would guess you are keeping your values (which have a very high range, judging by the log) in an 8-bit-per-channel texture (e.g. RGBA8). You could instead use a floating-point texture format, or pack your high-precision/high-range values into the 4-byte format (e.g. as fixed point).
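If switching to a floating-point texture format (e.g. GL_R32F) is not an option, the packing route can be sketched as follows; these GLSL helpers (a common trick, not from the original answer) assume the value has already been normalised to [0, 1):

// Pack a float in [0, 1) into four 8-bit channels of an RGBA8 texture.
vec4 packFloat(float v) {
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    // Remove each channel's contribution from the lower-order channels.
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Recover the float from the four channels when sampling.
float unpackFloat(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}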