I'm drawing a simple textured quad (two triangles) using a one-dimensional texture that holds 512 values ranging from 0 to 1. I'm using GL_RGBA32F on an NVIDIA GeForce GT 750M, with GL_LINEAR interpolation and GL_CLAMP_TO_EDGE.
I draw the quad using the following shader:
varying vec2 v_texcoord;
uniform sampler1D u_texture;

void main()
{
    float x = v_texcoord.x;
    float v = 512.0 * abs(texture1D(u_texture, x).r - x);
    gl_FragColor = vec4(v, v, v, 1.0);
}
Basically, I'm displaying the difference between the texture values and the fragment's x coordinate. I was hoping to get a black quad (no difference), but here is what I get instead:
I tried to narrow down the problem and generated a one-dimensional texture with only two values (0 and 1), displaying it using:
varying vec2 v_texcoord;
uniform sampler1D u_texture;

void main()
{
    float x = v_texcoord.x;
    float v = texture1D(u_texture, x).r;
    gl_FragColor = vec4(v, v, v, 1.0);
}
and then I get:
Obviously this is not a linear interpolation from 0 to 1. The result is split into three areas: black, interpolated, and white. I tried different wrapping values without success (but with different results). Any idea what I'm doing wrong here?
After searching a bit more, it seems the texture coordinate needs a small adjustment that depends on the texture size:
varying vec2 v_texcoord;
uniform sampler1D u_texture;
uniform float u_texture_shape;

void main()
{
    float epsilon = 1.0 / u_texture_shape;
    float x = epsilon / 2.0 + (1.0 - epsilon) * v_texcoord.x;
    float v = 512.0 * abs(texture1D(u_texture, x).r - v_texcoord.x);
    gl_FragColor = vec4(v, v, v, 1.0);
}
I guess this is related to the wrapping mode, but I did not find information on how wrapping is enforced at the GPU level.
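For what it's worth, the adjustment above can be explained by texel centres: with GL_LINEAR and GL_CLAMP_TO_EDGE, texel i of an N-texel texture is centred at (i + 0.5)/N, and outside the first and last centres the sampler clamps to the edge texel. A minimal sketch of the remap (texelCenterCoord is just an illustrative name):

float texelCenterCoord(float x, float n) // n = texture width in texels
{
    // maps x in [0, 1] onto [0.5/n, 1 - 0.5/n], the interval between the first
    // and last texel centres, where interpolation actually happens
    return 0.5 / n + x * (1.0 - 1.0 / n);
}

With n = 2.0 this maps [0, 1] onto [0.25, 0.75]; in the unadjusted two-texel test above, coordinates below 0.25 clamp to pure black and those above 0.75 clamp to pure white, which is exactly the three bands observed.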
I've encoded some data into a 44487x1 luminance texture:
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es

in vec4 position;

void main() {
    gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif

uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; // data texture

out vec4 fragColor;

void main(){
    // data texture dimensions
    vec2 dims = vec2(44487., 1.0);
    // amount by which to translate the data texture
    vec2 offset = vec2(u_time * .5, 0.);
    // canvas coords
    vec2 uv = gl_FragCoord.xy / u_resolution.xy;
    // texture aspect ratio, w/h
    float textureAspect = 44487. / 1.;
    vec3 col = vec3(0.);
    // texture width is 44487x larger than uv, I guess?
    vec2 textCoords = vec2((uv.x / textureAspect) + offset.x, uv.y);
    // get texture values
    vec3 text = texture(u_texture_7, textCoords).rgb;
    // output
    fragColor = vec4(text, 1.);
}
However, this doesn't seem to work; all I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in file size, but I'm wondering if the dimensions might still be causing an issue.
Alternatively, instead of providing one large texture, could I provide a smaller texture and update its contents via JS?
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in the 2D canvas rather than in the shader, and then sending that canvas to the shader as a texture.
It might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
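For comparison, the scrub can also be done entirely in the shader. A minimal sketch, assuming the offset is supplied in texels through a hypothetical u_offset uniform (GLSL ES 3.00, as in the fragment shader above):

uniform float u_offset; // scrub offset in texels (hypothetical uniform)

// inside main(): one canvas pixel = one texel
float texelIndex = gl_FragCoord.x + u_offset;
float u = (texelIndex + 0.5) / 44487.0; // +0.5 samples the texel centre
vec3 value = texture(u_texture_7, vec2(u, 0.5)).rgb;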
I'm able to translate the code and run it, but it behaves differently from the original fork.
https://www.shadertoy.com/view/llS3zc -- original
https://editor.p5js.org/jorgeavav/sketches/i9cd4lE7H -- translation
Here is the code:
uniform vec2 resolution;
uniform float time;
uniform float mouse;
uniform sampler2D texture;
uniform sampler2D texture2;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec4 texCol = vec4(texture2D(texture, uv + time / 10.0));

    mat3 tfm;
    tfm[0] = vec3(texCol.z, 0.0, 0.0);
    tfm[1] = vec3(0.0, texCol.y, 0.0);
    tfm[2] = vec3(0.0, 0.0, 1.0);

    vec2 muv = (vec3(uv, 1.0) * tfm).xy - 0.1 * time;
    texCol = vec4(texture2D(texture2, muv));
    gl_FragColor = texCol;
}
You have two issues:
Your textures are a lot larger than the ShaderToy ones; either use smaller textures or scale down the UV coordinates (uv *= 0.1 results in a similar scale).
Your textures are not wrapping, and their dimensions are not a power of two (which is required to enable wrapping in WebGL1). You need to resize the textures and apply wrapping using textureWrap(REPEAT), or wrap in the shader, for example by using fract to wrap the lookup coordinates in your texture2D calls, as sketched below.
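A minimal sketch of the fract approach, applied to the two lookups in the shader above:

// emulate REPEAT for non-power-of-two textures by wrapping the
// coordinates into [0, 1) before each lookup
vec4 texCol = vec4(texture2D(texture, fract(uv + time / 10.0)));
// ... and for the second lookup:
texCol = vec4(texture2D(texture2, fract(muv)));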
I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points with a 1D kernel in a given direction; the kernel size grows exponentially with each pass, and color values are weighted based on distance to the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the fragment shader:
precision highp float;

uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;

const float kernelSize = 4.0;
const float atten = 0.95;

vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for (int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}

void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: Disregard the first shader in Buffer A; it just filters out the dim colors in the input texture to emulate a star field, since AFAIK ShaderToy doesn't allow uploading custom textures.)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader four times (increasing the kernel size at each iteration according to the algorithm), and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to lose brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL/LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot the problem. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I verified this with said modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension first (MUST be done; without it, gl.FLOAT is an invalid type in WebGL1).
gl.getExtension('OES_texture_float');

// Same allocation as before, but with a floating-point type, so intermediate
// pass results are no longer quantized to 8 bits (the source of the banding).
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);
How do I determine what mipmap level was used when sampling a texture in a GLSL fragment shader?
I understand that I can manually sample a particular mipmap level of a texture using the textureLod(...) function:
uniform sampler2D myTexture;

void main()
{
    float mipmapLevel = 1.0;
    vec2 textureCoord = vec2(0.5, 0.5);
    gl_FragColor = textureLod(myTexture, textureCoord, mipmapLevel);
}
Or I could allow the mipmap level to be selected automatically using texture(...) like
uniform sampler2D myTexture;

void main()
{
    vec2 textureCoord = vec2(0.5, 0.5);
    gl_FragColor = texture(myTexture, textureCoord);
}
I prefer the latter, because I trust the driver's judgment about appropriate mipmap level more than I do my own.
But I'd like to know what mipmap level was used in the automatic sampling process, to help me rationally sample nearby pixels. Is there a way in GLSL to access the information about what mipmap level was used for an automatic texture sample?
Below are three distinct approaches to this problem, depending on which OpenGL features are available to you:
As pointed out by Andon M. Coleman in the comments, the solution in OpenGL version 4.00 and above is simple; just use the textureQueryLod function:
#version 400

uniform sampler2D myTexture;
in vec2 textureCoord; // in normalized units
out vec4 fragColor;

void main()
{
    float mipmapLevel = textureQueryLod(myTexture, textureCoord).x;
    fragColor = textureLod(myTexture, textureCoord, mipmapLevel);
}
In earlier versions of OpenGL (2.0+?), you might be able to load an extension to similar effect. This approach worked in my case. NOTE: the function name is capitalized differently in the extension vs. the built-in (textureQueryLOD vs. textureQueryLod).
#version 330
#extension GL_ARB_texture_query_lod : enable

uniform sampler2D myTexture;
in vec2 textureCoord; // in normalized units
out vec4 fragColor;

void main()
{
    float mipmapLevel = 3.0; // default in case the extension is unavailable...
#ifdef GL_ARB_texture_query_lod
    mipmapLevel = textureQueryLOD(myTexture, textureCoord).x; // NOTE CAPITALIZATION
#endif
    fragColor = textureLod(myTexture, textureCoord, mipmapLevel);
}
If loading the extension does not work, you could estimate the automatic level of detail using the approach contributed by genpfault:
#version 330

uniform sampler2D myTexture;
in vec2 textureCoord; // in normalized units
out vec4 fragColor;

// Does not take into account GL_TEXTURE_MIN_LOD/GL_TEXTURE_MAX_LOD/GL_TEXTURE_LOD_BIAS,
// nor implementation-specific flexibility allowed by the OpenGL spec
float mip_map_level(in vec2 texture_coordinate) // in texel units
{
    vec2 dx_vtc = dFdx(texture_coordinate);
    vec2 dy_vtc = dFdy(texture_coordinate);
    float delta_max_sqr = max(dot(dx_vtc, dx_vtc), dot(dy_vtc, dy_vtc));
    float mml = 0.5 * log2(delta_max_sqr);
    return max(0.0, mml); // Thanks @Nims
}

void main()
{
    // convert normalized texture coordinates to texel units before calling mip_map_level
    float mipmapLevel = mip_map_level(textureCoord * vec2(textureSize(myTexture, 0)));
    fragColor = textureLod(myTexture, textureCoord, mipmapLevel);
}
In any case, for my particular application, I ended up just computing the mipmap level on the host side, and passing it to the shader, because the automatic level-of-detail turned out to be not exactly what I needed.
From here:
Take a look at the OpenGL 4.2 spec, chapter 3.9.11, equation 3.21. The mipmap level is calculated based on the lengths of the derivative vectors:
float mip_map_level(in vec2 texture_coordinate)
{
    vec2 dx_vtc = dFdx(texture_coordinate);
    vec2 dy_vtc = dFdy(texture_coordinate);
    float delta_max_sqr = max(dot(dx_vtc, dx_vtc), dot(dy_vtc, dy_vtc));
    return 0.5 * log2(delta_max_sqr);
}
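For reference, equation 3.21 computes λ = log2(ρ), where (ignoring anisotropy and LOD bias) ρ = max(‖∂(u,v)/∂x‖, ‖∂(u,v)/∂y‖) with the texture coordinates measured in texels; the code above avoids the square roots by using the identity 0.5 · log2(ρ²) = log2(ρ).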
Let's say we are texturing a quad (two triangles). I think this question is similar to texture splatting, as in the next example:
precision lowp float;

uniform sampler2D Terrain;
uniform sampler2D Grass;
uniform sampler2D Stone;
uniform sampler2D Rock;

varying vec2 tex_coord;

void main(void)
{
    vec4 terrain = texture2D(Terrain, tex_coord);
    vec4 tex0 = texture2D(Grass, tex_coord * 4.0); // Tile
    vec4 tex1 = texture2D(Rock, tex_coord * 4.0);  // Tile
    vec4 tex2 = texture2D(Stone, tex_coord * 4.0); // Tile

    tex0 *= terrain.r; // Red channel - puts grass
    tex1 = mix(tex0, tex1, terrain.g); // Green channel - puts rock, mixed with grass
    vec4 outColor = mix(tex1, tex2, terrain.b); // Blue channel - puts stone, mixed with the others

    gl_FragColor = outColor; // final color
}
But I want to place just one decal on the base quad texture at a desired location.
The algorithm is just the same, but I think we don't need an extra texture with one filled layer to hold the decal's position (e.g. where the red layer != 0); somehow we must generate our own "terrain.r" (is this a float?) value and mix the base texture and the decal texture with it.
precision lowp float;

uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; // where we want to place the decal (e.g. 0.5, 0.5 is the center of the quad)

varying vec2 base_tex_coord;
varying vec2 decal_tex_coord;

void main(void)
{
    vec4 v_base = texture2D(base, base_tex_coord);
    vec4 v_decal = texture2D(decal, decal_tex_coord);
    float decal_layer = /* somehow get our decal_layer based on decal_location */;
    gl_FragColor = mix(v_base, v_decal, decal_layer);
}
How can I achieve such a thing?
Or should I just generate the splat texture on the OpenGL side and pass it to the first shader? That would give me up to four different decals on the quad, but would be slow for frequent updates (e.g. machine-gun hits on a wall).
float decal_layer = /* somehow get our decal_layer based on decal_location */;
Well, it's up to you how you interpret decal_location. I think a simple distance metric would suffice, but this also requires the size of the decal relative to the quad. Let's assume you provide this through an additional uniform decal_radius. Then we can fade the decal out with distance from its centre:
decal_layer = 1.0 - clamp(length(decal_location - base_tex_coord) / decal_radius, 0.0, 1.0);
Yes, decal_layer is a float, as you've described; its range is 0 to 1. But you don't have quite enough info: here you've specified decal_location but no size for the decal. You also need to know where this fragment falls in the quad; you'll need a varying vec2 quad_coord; or similar input from the vertex shader (such as your base_tex_coord) if you want to know where this fragment is relative to the quad being rendered.
But let's try a different approach. Edit the top of your second example to include these uniforms:
uniform vec2 decal_location; // Location of decal relative to base_tex_coord
uniform float decal_size; // Size of decal relative to base_tex_coord
Now, in main(), you should be able to compute decal_layer with something like this:
float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size,
                                     max(abs(decal_location.x - base_tex_coord.x),
                                         abs(decal_location.y - base_tex_coord.y)));
Basically you're trying to get decal_layer to be 1.0 within the decal and 0.0 outside it. I've added a 0.01 fuzzy edge at the boundary that you can play with. Good luck!
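Putting the pieces together, here is a minimal sketch of the full fragment shader under the assumptions above (a square decal with a soft 0.01 edge; a round decal would use length() instead of the max() of the two axes):

precision lowp float;

uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; // decal centre, in base_tex_coord space
uniform float decal_size;    // decal half-size, in base_tex_coord space

varying vec2 base_tex_coord;
varying vec2 decal_tex_coord;

void main(void)
{
    vec4 v_base = texture2D(base, base_tex_coord);
    vec4 v_decal = texture2D(decal, decal_tex_coord);

    // Chebyshev distance from this fragment to the decal centre -> square footprint
    float d = max(abs(decal_location.x - base_tex_coord.x),
                  abs(decal_location.y - base_tex_coord.y));

    // 1.0 inside the decal, 0.0 outside, with a 0.01-wide soft edge
    float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size, d);

    gl_FragColor = mix(v_base, v_decal, decal_layer);
}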