How can I repeat a texture within a WebGL shader?

I am using non-power-of-two textures within shader programs. I am trying to implement scrolling text with repeats. Scrolling works fine, but as soon as I try to get the texture to repeat via logic in my vertex shader, I suddenly get a quad with a single set of stretched pixels across the entire range. I assume this is happening due to the filtering algorithm.
As background, I want to generate the texture coordinates within the vertex program, since I then do further distortion on them in the fragment programs, and it is easier to manage if the inputs to the fragment programs already account for the scroll. Note that I access textureCoordinateVarying in the corresponding fragment shaders.
This works, albeit with no repeated texture once the text scrolls through:
attribute vec4 position;
attribute vec2 texcoord;

uniform mat3 matrixUniform;
uniform float horizontalTextureOffsetUniform;

varying vec2 textureCoordinateVarying;

void main() {
    gl_Position = vec4((matrixUniform * vec3(position.x, position.y, 1)).xy, 0, 1);

    textureCoordinateVarying = vec2(
        // I get a nice scrolling animation by changing the offset here, but the texture doesn't
        // repeat, since it is NPO2 and therefore doesn't have repeating enabled
        texcoord.x + horizontalTextureOffsetUniform,
        texcoord.y
    );
}
On the other hand, this gives me a stretched out image, as you can see:
attribute vec4 position;
attribute vec2 texcoord;

uniform mat3 matrixUniform;
uniform float horizontalTextureOffsetUniform;

varying vec2 textureCoordinateVarying;

void main() {
    gl_Position = vec4((matrixUniform * vec3(position.x, position.y, 1)).xy, 0, 1);

    textureCoordinateVarying = vec2(
        // Note how I am using fract here, which should make the texture repeat at the 1.0
        // texture boundary, but instead renders a blurry stretched texture
        fract(texcoord.x + horizontalTextureOffsetUniform),
        texcoord.y
    );
}
Any ideas on how I can solve this?
Thanks!

You need to do the repeat math in the fragment shader, not the vertex shader. A varying is interpolated across the triangle between the values set at the vertices, so if you apply fract per vertex the in-between values no longer wrap. With an offset of 0.5, for example, the left edge becomes fract(0.0 + 0.5) = 0.5 and the right edge becomes fract(1.0 + 0.5) = 0.5, so the whole quad interpolates a constant 0.5 and you see a single stretched column of texels. Pass the unwrapped coordinate to the fragment shader and apply fract there, right before sampling. For example:
const gl = document.querySelector('canvas').getContext('webgl');

const vs = `
void main() {
  gl_Position = vec4(0,0,0,1);
  gl_PointSize = 100.0;
}`;

const fs = `
precision highp float;
uniform sampler2D tex;
uniform vec2 offset;
void main() {
  gl_FragColor = texture2D(tex, fract(gl_PointCoord.xy + offset));
}`;

// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);

// calls gl.createTexture, gl.texImage2D, gl.texParameteri
const tex = twgl.createTexture(gl, {
  src: 'https://i.imgur.com/v38pV.jpg'
});

function render(time) {
  time *= 0.001;

  gl.useProgram(programInfo.program);

  // calls gl.activeTexture, gl.bindTexture, gl.uniform
  twgl.setUniformsAndBindTextures(programInfo, {
    tex,
    offset: [time, time * 0.1],
  });

  gl.drawArrays(gl.POINTS, 0, 1);

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
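Applied to the shaders in the question, a minimal sketch would keep the first (non-fract) vertex shader exactly as it is and wrap the coordinate per fragment, right before sampling. The sampler name below is an assumption, and since you do further distortion in your fragment shaders, apply the fract after that distortion:
precision mediump float;

uniform sampler2D textureUniform; // hypothetical name; use whatever sampler your fragment shader already binds
varying vec2 textureCoordinateVarying;

void main() {
    // ...any further distortion of textureCoordinateVarying goes here...
    // wrap only the x coordinate so the scrolled text repeats at the 1.0 boundary
    vec2 wrapped = vec2(fract(textureCoordinateVarying.x), textureCoordinateVarying.y);
    gl_FragColor = texture2D(textureUniform, wrapped);
}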

Related

Banding Problem in Multi Step Shader with Ping Pong Buffers, does not happen in ShaderToy

I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points with a 1D kernel in a given direction. The kernel size grows exponentially in each step. Color values are weighted based on the distance to the sampled point and summed. The result is a smooth tail/smear/light streak effect in that direction. Here is the frag shader:
precision highp float;

uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;

const float kernelSize = 4.0;
const float atten = 0.95;

vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for (int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}

void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: Disregard the first shader in Buffer A, it just filters out the dim colors in the input texture to emulate a star field since afaik ShaderToy doesn't allow uploading custom textures)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times, increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to be losing brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot what the problem is. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target. With a regular 8-bit RGBA target, every ping-pong pass quantizes the accumulated colors to 256 levels, so the small weighted contributions of the streak get rounded away and visible banding appears.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I could verify it with that modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);

Why does my WebGL shader not let me use varyings?

When I try to link my vertex and fragment shaders into a program, WebGL throws: "Varyings with the same name but different type, or statically used varyings in fragment shader are not declared in vertex shader: textureCoordinates"
I have varying vec2 test in both my vertex and fragment shaders, and can't see any reason why the compiler wouldn't be able to find the same varying in both.
Vertex Shader:
varying vec2 test;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 0.0);
test = vec2(1.0, 0.0);
}
Fragment Shader:
precision highp float;
varying vec2 test;
void main(void) {
gl_FragColor = vec4(test.xy, 0.0, 1.0);
}
Test code:
const canvas = document.createElement('canvas');
gl = canvas.getContext('webgl')
let vert = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vert, "varying vec2 test;\nvoid main(void) {\n gl_Position = vec4(0.0, 0.0, 0.0, 0.0);\n test = vec2(1.0, 0.0);\n}");
gl.compileShader(vert);
let frag = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(frag, "precision highp float;\nvarying vec2 test;\nvoid main() {\n\tgl_FragColor = vec4(test.xy, 0.0, 1.0);\n}");
gl.compileShader(frag);
let program = gl.createProgram();
gl.attachShader(program, vert);
gl.attachShader(program, frag);
gl.linkProgram(program);
gl.useProgram(program);
Just a guess, but I wonder if it's because you're not using the textureCoordinates varying in your fragment shader. The names & types match just fine, so I don't think that's the issue. I've done the same thing here:
Frag:
// The fragment shader is the rasterization process of webgl
// use float precision for this shader
precision mediump float;
// the input texture coordinate values from the vertex shader
varying vec2 vTextureCoord;
// the texture data, this is bound via gl.bindTexture()
uniform sampler2D texture;
// the colour uniform
uniform vec3 color;
void main(void) {
// gl_FragColor is the output colour for a particular pixel.
// use the texture data, specifying the texture coordinate, and modify it by the colour value.
gl_FragColor = texture2D(texture, vec2(vTextureCoord.s, vTextureCoord.t)) * vec4(color, 1.0);
}
Vert:
// setup passable attributes for the vertex position & texture coordinates
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
// setup a uniform for our perspective * lookat * model view matrix
uniform mat4 uMatrix;
// setup an output variable for our texture coordinates
varying vec2 vTextureCoord;
void main() {
// take our final matrix to modify the vertex position to display the data on screen in a perspective way
// With shader code here, you can modify the look of an image in all sorts of ways
// the 4th value here is the w coordinate, and it is called Homogeneous coordinates, (x,y,z,w).
// It effectively allows the perspective math to work. With 3d graphics, it should be set to 1. Less than 1 will appear too big
// Greater than 1 will appear too small
gl_Position = uMatrix * vec4(aVertexPosition, 1);
vTextureCoord = aTextureCoord;
}
The issue was resolved by updating Chrome for OS X from v51.something to 52.0.2743.82 (64-bit). Weird.

Passing on two rectangles with different meaning from geometry to fragment shader

I've adapted the Cinder library's signed-distance font handling to Delphi, and am now implementing a twist: uploading all the data for multiple texts in a single call, and having some control over relative size when zooming (using a uniform instead of the 1.0001 factor in the geometry shader; not yet working in this code).
The basic signed-distance handling is not altered; I only tried to calculate the needed rectangles using the geometry shader. I understand how to create the destination rectangle (where the character must appear) using triangle_strip, but am having problems passing the texcoord to the fragment shader.
destination rectangle: the input topleft.xy + widthheight (dimensions) is used to calculate the destination rectangle of each character on the screen, via gl_Position.
texture source rectangle: texcoordtl + texdimens, the top-left point + dimensions of the character in the font texture. This is the main point where I'm unsure. Passed to the fragment shader using the texcoord in/out parameter.
I'd be grateful for any pointers or avenues to research, and I especially wonder about the way I calculate the texcoord coordinates and pass them on to the fragment shader.
An array of the record below is bound with GL_ARRAY_BUFFER and described using a series of glGetAttribLocation/glEnableVertexAttribArray/glVertexAttribPointer calls.
Drawing is done using
glDrawArrays(GL_POINTS, 0, numberofelements_in_array );
The record:
TGLCharacter = packed record   // 5*2 * single + 1*4-byte color + 1*4-byte detail = 48 bytes per character drawn
  origin      : TGLVectorf2;   // origin of the text ( = glfloat[2])
  topleft     : TGLVectorf2;   // origin of this character
  widthheight : TGLVectorf2;   // width and height of this character
  texcoordtl  : TGLVectorf2;   // coordinates of the top left in the texture
  texdimens   : TGLVectorf2;   // sizes in the texture
  col         : TGLVectorub4;  // 4 colors, 1 per rect vertex
  detail      : integer;       // layer. Not used in this example.
end;
Geometry shader code first, because I expect the problems are here:
#version 150 compatibility

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in vec2 gorigin[];
in vec2 gtopleft[];
in vec2 gwidthheight[];
in vec2 gtexcoordtl[];
in vec2 gtexdimens[];
in vec4 gcolor[];

out vec3 fColor;
out vec2 texcoord;

void main() {
    // calculate distance cur char - first char of this text
    vec2 dxcoordinate = (gtopleft[0] - gorigin[0]);

    // now multiply with uniform here and calc new coordinate:
    // for now we use a factor slightly above 1 to make debugging easier and to keep
    // nvidia's shader compiler from optimizing gorigin out (with exactly 1.0 it gets optimized out)
    vec2 x1y1 = 1.0001 * gorigin[0] + dxcoordinate;
    vec2 x2y2 = x1y1 + gwidthheight[0] * 1.0001;

    vec2 texx1y1 = gtexcoordtl[0];
    vec2 texx2y2 = gtexcoordtl[0] + gtexdimens[0];

    fColor = vec3(gcolor[0].rgb);

    gl_Position = gl_ModelViewProjectionMatrix * vec4(x1y1, 0, 1.0);
    texcoord = texx1y1.xy;
    EmitVertex();

    gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2.x, x1y1.y, 0, 1.0);
    texcoord = vec2(texx2y2.x, texx1y1.y);
    EmitVertex();

    gl_Position = gl_ModelViewProjectionMatrix * vec4(x1y1.x, x2y2.y, 0, 1.0);
    texcoord = vec2(texx1y1.x, texx2y2.y);
    EmitVertex();

    gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2, 0, 1.0);
    texcoord = texx2y2.xy;
    EmitVertex();

    EndPrimitive();
}
Fragment shader code:
#version 150 compatibility

uniform sampler2D font_map;
uniform float smoothness;

const float gamma = 2.2;

in vec3 fColor;
in vec2 texcoord;

void main()
{
    // retrieve signed distance
    float sdf = texture2D(font_map, texcoord.xy).r;

    // perform adaptive anti-aliasing of the edges
    float w = clamp(smoothness * (abs(dFdx(texcoord.x)) + abs(dFdy(texcoord.y))), 0.0, 0.5);
    float a = smoothstep(0.5 - w, 0.5 + w, sdf);

    // gamma correction for linear attenuation
    a = pow(a, 1.0 / gamma);
    if (a < 0.1)
        discard;

    // final color
    gl_FragColor.rgb = fColor.rgb;
    gl_FragColor.a = gl_Color.a * a;
}
The vertex shader code is probably OK, I guess:
#version 150 compatibility

in vec2 anorigin;
in vec2 topleft;
in vec2 widthheight;
in vec2 texcoordtl;
in vec2 texdimens;
in vec4 color;

out vec2 gorigin;
out vec2 gtopleft;
out vec2 gwidthheight;
out vec2 gtexcoordtl;
out vec2 gtexdimens;
out vec4 gcolor;

void main()
{
    gorigin = anorigin;
    gtopleft = topleft;
    gwidthheight = widthheight;
    gtexcoordtl = texcoordtl;
    gtexdimens = texdimens;
    gcolor = color;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(anorigin.xy, 0, 1.0);
}
The above code works. The problem was in the uploading code, so the wrong vertex data was uploaded. I did some minor fixes to the above code while debugging and added them to the question, so the question now shows working code.
Here is some possible code that changes the last two lines of the frag shader to create outline fonts. I'm not really happy with it yet, though: when zooming out, the color of the font seems to change.
vec3 othercol; // to be added to the declarations
The rest of the shader below the discard statement becomes:
othercol = vec3(1.0, 1.0, 1.0);
if (sqrt(0.299 * fColor.r * fColor.r + 0.587 * fColor.g * fColor.g + 0.114 * fColor.b * fColor.b) > 0.5)
    { othercol = vec3(0, 0, 0); }

// final color
if (sdf > 0.25 && sdf < 0.75)
    { gl_FragColor.rgb = othercol.rgb; }
else
    { gl_FragColor.rgb = fColor.rgb; }
gl_FragColor.a = gl_Color.a * a;
}

Gradient with fixed number of levels

I am drawing a set of quads. For each quad I have a color defined at each of its vertices.
E.g. my set of quads currently looks like this:
I achieve this result in a rather primitive way, just passing the color of each vertex of a quad into the vertex shader as an attribute.
My shaders are pretty simple:
Vertex shader
#version 150 core
in vec3 in_Position;
in vec3 in_Color;
out vec3 pass_Color;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main(void) {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
pass_Color = in_Color;
}
Fragment Shader
#version 150 core
in vec3 pass_Color;
out vec4 out_Color;
void main(void) {
out_Color = vec4(pass_Color, 1.0);
}
Now my goal is to get a non-continuous distribution of colors from vertex to vertex. It could also be called a "level distribution".
My set of quads should look like this:
How can I achieve such a result?
EDIT:
With vesan's and Nico Schertler's suggestions the plate looks like this (not an acceptable variant).
My guess is there will be issues with the hue colours you're using and vertex interpolation (e.g. skipping some bands). Instead, maybe pass in a single-channel value and calculate the hue and discrete levels (as @vesan does) within the fragment shader. I use these functions myself...
vec3 hueToRGB(float h)
{
h = fract(h) * 6.0;
vec3 rgb;
rgb.r = clamp(abs(3.0 - h)-1.0, 0.0, 1.0);
rgb.g = clamp(2.0 - abs(2.0 - h), 0.0, 1.0);
rgb.b = clamp(2.0 - abs(4.0 - h), 0.0, 1.0);
return rgb;
}
vec3 heat(float x)
{
return hueToRGB(2.0/3.0-(2.0/3.0)*clamp(x,0.0,1.0));
}
and then
float discrete = floor(pass_Value * steps + 0.5) / steps; //0.5 to round
out_Color = vec4(heat(discrete), 1.0);
where the input value (in float in_Value in the vertex shader, passed through to the fragment shader as pass_Value) ranges from 0 to 1.
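Putting those pieces together, a minimal complete fragment shader might look like the sketch below; the pass_Value varying and the steps constant are assumptions you would adapt to your own inputs:
#version 150 core

in float pass_Value;      // assumed single-channel value in [0, 1] from the vertex shader
out vec4 out_Color;

const float steps = 10.0; // number of discrete color bands (assumption)

vec3 hueToRGB(float h)
{
    h = fract(h) * 6.0;
    vec3 rgb;
    rgb.r = clamp(abs(3.0 - h) - 1.0, 0.0, 1.0);
    rgb.g = clamp(2.0 - abs(2.0 - h), 0.0, 1.0);
    rgb.b = clamp(2.0 - abs(4.0 - h), 0.0, 1.0);
    return rgb;
}

vec3 heat(float x)
{
    return hueToRGB(2.0/3.0 - (2.0/3.0) * clamp(x, 0.0, 1.0));
}

void main(void)
{
    // quantize the value into a fixed number of levels, then map it to a hue
    float discrete = floor(pass_Value * steps + 0.5) / steps; // +0.5 to round
    out_Color = vec4(heat(discrete), 1.0);
}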
Just expanding on Nico Schertler's comment: you can modify your fragment shader to:
void main(void) {
    out_Color = vec4(pass_Color, 1.0);
    out_Color = floor(out_Color * steps) / steps;
}
where steps is the number of color steps you want. The floor function will indeed work on a vector; however, the steps will be calculated separately for every color channel, so the result might not be exactly what you want (the bands might not be as nice as in your example).
Alternatively, you can use some form of "toon shading" (see for example here). That means that you only pass a single number (think a color in grayscale) to your shader, then use your shader to select a color from a color table. The table can either be hardcoded in the shader or selected from a 1-dimensional texture.
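A minimal sketch of the lookup-table variant: assuming a single grayscale value arrives in a pass_Value varying and a 1D texture colorTable (with nearest filtering) holds one texel per level, the fragment shader only has to snap the value to a level and fetch the color:
#version 150 core

in float pass_Value;          // assumed single value in [0, 1]
out vec4 out_Color;

uniform sampler1D colorTable; // assumed 1D texture with one texel per level
const float steps = 10.0;     // number of levels (assumption)

void main(void)
{
    // snap the value to the center of one of the discrete levels,
    // then look the corresponding color up in the table
    float level = floor(pass_Value * steps) / steps + 0.5 / steps;
    out_Color = vec4(texture(colorTable, level).rgb, 1.0);
}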

Why is this simple mask shader failing?

I'm trying to create a "transition" effect between two 2D scenes. I have 3 textures: before, after, and mask. before and after are self-explanatory. mask is a simple monochrome texture that defines how the first two get composited. It changes over time, to perform the transition. All 3 textures are the same size.
I've verified that all 3 textures contain the correct data, but when I try to perform the compositing, I end up with either before in its entirety, or after in its entirety, seemingly at random.
Here's what I'm doing:
Application code:
glEnable(GL_MULTISAMPLE);
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
after.handle.bind;
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
mask.handle.bind;
glActiveTextureARB(GL_TEXTURE0_ARB);
before.handle.bind;
GShaders.UseShaderProgram(maskProgramHandle); //GShaders: Global shader engine
GShaders.SetUniformValue(maskProgramHandle, 'before', 0);
GShaders.SetUniformValue(maskProgramHandle, 'after', 1);
GShaders.SetUniformValue(maskProgramHandle, 'mask', 2);
before.DrawFull; //draws the texture to the screen as a quad.
glDisable(GL_MULTISAMPLE);
Vertex shader:
varying vec4 v_color;
varying vec2 texture_coordinate;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
texture_coordinate = vec2(gl_MultiTexCoord0);
v_color = gl_Color;
gl_FrontColor = gl_Color;
}
Fragment shader:
uniform sampler2DRect before;
uniform sampler2DRect after;
uniform sampler2DRect mask;
varying vec2 texture_coordinate;
void main()
{
vec3 maskValue = texture2DRect(mask, texture_coordinate).rgb;
float alpha = (maskValue.r + maskValue.g + maskValue.b) / 3.0;
vec4 beforeValue = texture2DRect(before, texture_coordinate);
vec4 afterValue = texture2DRect(after, texture_coordinate);
gl_FragColor = mix(beforeValue, afterValue, alpha);
}
Any idea what's going wrong?
This is only a guess, but have you tried this:
gl_FragColor = mix(beforeValue, afterValue, alpha / 255.0);