Should I write logic in fragment shader in this case? - glsl

I have 3 objects:
cube1
cube2
cube3
I want to draw:
cube1 as red(1,0,0),
cube2 with texture1
cube3 with texture2.
In the fragment shader I used
FragColor = Color * texture2D(u_texture, TextureCoordinates);
as usual, but this also paints my first cube with the texture colors, when I want it to be plain red, so the colors get mixed up. My question is: should I write logic in the fragment shader to separate these cases?

Create a 1x1 texture with a single (white) color and use it for the uniformly colored cubes.
Binding and using this texture is much "cheaper" than switching the shader program:
// Create a 1x1 RGBA texture containing a single opaque white texel.
let whiteTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, whiteTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
              new Uint8Array([255, 255, 255, 255]));
// NEAREST filtering makes the texture complete without mipmaps.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
Note that a lookup into this texture (texture2D(u_texture, TextureCoordinates)) always returns vec4(1.0).
So if this texture is bound (to the texture unit assigned to u_texture), then
FragColor = Color * texture2D(u_texture, TextureCoordinates);
sets the same fragment color as
FragColor = Color * vec4(1.0);
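With this in place, a single fragment shader covers all three cubes. A minimal sketch (WebGL 1 / GLSL ES 1.00), assuming Color is supplied as a uniform (the question doesn't show its declaration; it could equally be a varying):
precision mediump float;

uniform sampler2D u_texture; // texture1, texture2, or the 1x1 white texture
uniform vec4 Color;          // (1,0,0,1) for cube1, (1,1,1,1) for cube2 and cube3

varying vec2 TextureCoordinates;

void main() {
    // cube1: the white texture returns vec4(1.0), leaving just Color.
    // cube2/cube3: Color is white, leaving just the sampled texel.
    gl_FragColor = Color * texture2D(u_texture, TextureCoordinates);
}
Drawing cube1 then just means binding whiteTexture and setting Color to (1, 0, 0, 1); no per-fragment branching is needed.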

Related

openGL Translating pixel brightness to colormap texture produces incorrect result

A GIF switching between the RGB and colormap renderings shows the problem: the two images are different.
I am drawing dots that are RGB white (1.0, 1.0, 1.0). The alpha channel controls pixel brightness, which creates the dot blur; that's what you see in the brighter image. Then I have a 2-pixel texture of black and white, (0.0, 0.0, 0.0, 1.0) and (1.0, 1.0, 1.0, 1.0), and in the fragment shader I do:
#version 330
precision highp float;

uniform sampler2D originalColor;
uniform sampler1D colorMap;

in vec2 uv;
out vec4 color;

void main()
{
    vec4 oldColor = texture(originalColor, uv);
    color = texture(colorMap, oldColor.a);
}
Very simply: take the originalColor texture's alpha value (0 to 1) for the fragment and translate it into a new color via the black-to-white colorMap texture. There should be no difference between the two images! Or... at least, that's my goal.
Here's my setup for the colormap texture:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &colormap_texture_id); // get texture id
glBindTexture(GL_TEXTURE_1D, colormap_texture_id);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required: stop texture wrapping
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // required: scale texture with linear sampling
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, colormapColors.size(), 0, GL_RGBA, GL_FLOAT, colormapColors.data()); // setup memory
Render loop:
GLuint textures[] = { textureIDs[currentTexture], colormap_texture_id };
glBindTextures(0, 2, textures);
colormapShader->use();
colormapShader->setUniform("originalColor", 0);
colormapShader->setUniform("colorMap", 1);
renderFullScreenQuad(colormapShader, "position", "texCoord");
I am using a 1D texture as a colormap because it seems to be the only way to keep 1000 to 2000 colormap entries in GPU memory. If there's a better way, let me know. I assume the problem is that the math for interpolating between two texels is not right for my purposes.
What should I do to get my expected results?
To make sure there are no shenanigans, I tried the following shader code:
color = texture(colorMap, oldColor.a); // incorrect results
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b) / 3); // incorrect
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b + oldColor.a) / 4); // incorrect
color = vec4(oldColor.a); // incorrect
color = oldColor; // CORRECT... obviously...
I think to be more accurate, you'd need to change:
color = texture(colorMap, oldColor.a);
to
color = texture(colorMap, oldColor.a * 0.5 + 0.25);
Or more generally
color = texture(colorMap, oldColor.a * (1.0 - (1.0 / texWidth)) + (0.5 / texWidth));
Normally you wouldn't notice the error; it's only because texWidth is so tiny here that the difference is significant.
The reason is that the texture only starts linear filtering from black to white once you pass the centre of the first texel (at 0.25 in your 2-texel-wide texture). The interpolation is complete once you pass the centre of the last texel (at 0.75).
If you had a 1024-texel texture like you mention you plan to end up with, the interpolation would start at 0.000488 (0.5 / 1024), and I doubt you'd notice the error.
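As a sketch of that general formula in shader code (the texWidth uniform is an assumption, not something in the original shader):
// Remap a 0..1 value so it spans the centres of the first and last texel
// instead of the texture's edges.
uniform float texWidth; // assumed uniform, e.g. 2.0 for the 2-texel colormap

float remapToTexelCentres(float a)
{
    float halfTexel = 0.5 / texWidth;              // 0.25 when texWidth == 2.0
    return a * (1.0 - 1.0 / texWidth) + halfTexel; // a * 0.5 + 0.25 for texWidth == 2.0
}

// in main():
//   color = texture(colorMap, remapToTexelCentres(oldColor.a));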

Alpha channel value always returning 1.0 after rendering-to-texture in OpenGL

This problem is driving me crazy since the code was working perfectly before. I have a fragment shader which combines two textures based on the value set in the alpha channel. The output is rendered to a third texture using an FBO.
Since I need to perform a post-processing step on the combined texture, I check the value of the alpha channel to determine whether that texel will need post-processing or not (i.e., I'm using the alpha channel value as a mask). The problem is, the post-processing shader is reading a value of 1.0 for all the texels in the input texture!
Here is the fragment shader that combines the two textures:
uniform samplerRect tex1;
uniform samplerRect tex2;

in vec2 vTexCoord;
out vec4 fColor;

void main(void) {
    vec4 color1, color2;
    color1 = texture(tex1, vTexCoord.st);
    color2 = texture(tex2, vTexCoord.st);

    if (color1.a == 1.0) {
        fColor = color2;
    } else if (color2.a == 1.0) {
        fColor = color1;
    } else {
        fColor = (color1 + color2) / 2.0;
    }
}
The texture object that I attach to the FBO is set up as follows:
glGenTextures(1, &glBufferTex);
glBindTexture(GL_TEXTURE_RECTANGLE, glBufferTex);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Code that attaches the texture to the FBO is:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_RECTANGLE, glBufferTex, 0);
I even added a call to glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE) before attaching the FBO! What could possibly be going wrong to make the next-stage fragment shader read 1.0 for all texels?!
NOTE: I did check that not all the values of the alpha channel for texels in the two textures that I combine are 1.0. Most of them actually are not.
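For reference, a hypothetical sketch of the mask test the post-processing pass is described as doing (the sampler name and the darkening step are invented for illustration; they are not from the question):
uniform samplerRect combinedTex; // hypothetical name for the combined FBO texture

in vec2 vTexCoord;
out vec4 fColor;

void main(void) {
    vec4 c = texture(combinedTex, vTexCoord.st);
    if (c.a == 1.0) {
        fColor = c;                      // alpha mask says: leave untouched
    } else {
        fColor = vec4(c.rgb * 0.5, c.a); // placeholder post-processing step
    }
}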

WebGL - Hardware skinning with a bone texture

I am trying to get hardware skinning working in WebGL, and can't seem to get it to work with a texture containing all my matrices.
I am feeding a float texture like this:
var buffer = new Float32Array(...);
...
// One RGBA float texel per vec4, so width = byteLength / 16 and height = 1.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, buffer.byteLength / 16, 1, 0, gl.RGBA, gl.FLOAT, buffer);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
In addition to the texture, I also send the size of each matrix and of each vector relative to the total size of the texture; this is used to map bone indices to texture coordinates, since there is no texel fetch in WebGL.
E.g., if I have 40 bones, then each matrix spans 1/40 of the texture's width, and each vector spans 1/160 (= 1/40/4).
Here are the relevant vertex shader parts:
...
uniform sampler2D u_bone_map;
uniform float u_matrix_fraction;
uniform float u_vector_fraction;
...
mat4 boneMatrix(float bone) {
    return mat4(texture2D(u_bone_map, vec2(u_matrix_fraction * bone, 0)),
                texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction, 0)),
                texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 2.0, 0)),
                texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 3.0, 0)));
}
...
This doesn't work, and no matter how I try to change it, I just get junk on my screen.
Is this feasible without sane functions like texelFetch (and actual uniform buffers)?
I have the same code running with a uniform array of matrices, but with my current setup it can't support more than 62 bones (62 mat4s already take 248 vec4 uniform slots, which runs into the maximum-uniform-vectors restriction), and that is not enough for some 3D models.
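One thing worth checking, offered as a hedged sketch rather than a confirmed fix: the lookups in boneMatrix() land exactly on texel edges (u_matrix_fraction * bone is a texel boundary), and with NEAREST filtering an edge coordinate can resolve to either neighbouring texel. Sampling at texel centres (and at y = 0.5 for the one-row texture) is more robust:
// Sketch of boneMatrix() sampling at texel centres, assuming the same layout:
// one row, bone b occupying four consecutive RGBA texels, so
// u_matrix_fraction == 4.0 * u_vector_fraction.
mat4 boneMatrixCentred(float bone) {
    float x = u_matrix_fraction * bone + 0.5 * u_vector_fraction; // centre of column 0
    float y = 0.5;                                                // centre of the single row
    return mat4(texture2D(u_bone_map, vec2(x, y)),
                texture2D(u_bone_map, vec2(x + u_vector_fraction, y)),
                texture2D(u_bone_map, vec2(x + u_vector_fraction * 2.0, y)),
                texture2D(u_bone_map, vec2(x + u_vector_fraction * 3.0, y)));
}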

texturing using texelFetch()

When I pass non-max values into the texture buffer, rendering draws the geometry with colors at their max values. I found this issue while using the glTexBuffer() API.
E.g., assume my texture data is GLubyte: when I pass any value less than 255, the color is the same as with 255, instead of a mixture of black and that color.
I tried on AMD and NVIDIA cards, but the results are the same.
Can you tell me where I could be going wrong?
I am copying my code here:
Vert shader:
in vec2 a_position;
uniform float offset_x;

void main()
{
    gl_Position = vec4(a_position.x + offset_x, a_position.y, 1.0, 1.0);
}
Frag shader:
out vec4 Color;
uniform isamplerBuffer sampler;
uniform int index;

void main()
{
    Color = texelFetch(sampler, index);
}
Code:
GLubyte arr[] = {128, 5, 250};

glGenBuffers(1, &bufferid);
glBindBuffer(GL_TEXTURE_BUFFER, bufferid);
glBufferData(GL_TEXTURE_BUFFER, sizeof(arr), arr, GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);

glGenTextures(1, &buffer_texture);
glBindTexture(GL_TEXTURE_BUFFER, buffer_texture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);

glUniform1f(glGetUniformLocation(shader_data.psId, "offset_x"), 0.0f);
glUniform1i(glGetUniformLocation(shader_data.psId, "sampler"), 0);
glUniform1i(glGetUniformLocation(shader_data.psId, "index"), 0);

glGenBuffers(1, &bufferid1);
glBindBuffer(GL_ARRAY_BUFFER, bufferid1);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices4), vertices4, GL_STATIC_DRAW);

attr_vertex = glGetAttribLocation(shader_data.psId, "a_position");
glVertexAttribPointer(attr_vertex, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(attr_vertex);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

glUniform1i(glGetUniformLocation(shader_data.psId, "index"), 1);
glVertexAttribPointer(attr_vertex, 2, GL_FLOAT, GL_FALSE, 0, (void *)(32));
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

glUniform1i(glGetUniformLocation(shader_data.psId, "index"), 2);
glVertexAttribPointer(attr_vertex, 2, GL_FLOAT, GL_FALSE, 0, (void *)(64));
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
In this case it draws all three squares in a dark red color.
uniform isamplerBuffer sampler;
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
There's your problem: they don't match.
You created the texture's storage as unsigned 8-bit integers, which are normalized to floats upon reading. But you told the shader that you were giving it signed 8-bit integers, which will be read as integers, not floats.
You confused OpenGL by being inconsistent. Mismatching sampler types with texture formats yields undefined behavior.
That should be a samplerBuffer, not an isamplerBuffer.
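Put together, a corrected fragment shader would look like this sketch; only the sampler type changes:
out vec4 Color;

// A float samplerBuffer matches the normalized GL_R8 format; texelFetch
// then returns e.g. vec4(128.0 / 255.0, 0.0, 0.0, 1.0) for arr[0].
uniform samplerBuffer sampler;
uniform int index;

void main()
{
    Color = texelFetch(sampler, index);
}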

GLSL and FBOs - glActiveTexture doesn't work?

I'm trying to write a simple shader that adds together textures attached to FBOs. There is no problem with the FBO initialization and such (I've tested it). The problem, I believe, is with
glActiveTexture(GL_TEXTURE0); it doesn't seem to be doing anything. Here is my fragment shader
(the shader does get called; I've tested that by outputting gl_FragColor = vec4(0, 1, 0, 1);):
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
    vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
    vec4 vec = texel0;
    gl_FragColor = texel0;
}
And in the C++ code I have:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, iFrameBufferAccumulation);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);

// (Render something - this works fine; it renders into the iTextureImgAccumulation
// texture attached to GL_COLOR_ATTACHMENT0_EXT.)

glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_RECTANGLE_NV);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation); // bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);

glLoadIdentity(); // load the identity matrix to reset our drawing locations
glTranslatef(0.0f, 0.0f, -2.0f);

xShader.bind();
glBegin(GL_QUADS);
    glTexCoord2f(0, OPT.m_nHeight);            glVertex3f(-1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, OPT.m_nHeight); glVertex3f(1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, 0);             glVertex3f(1, 1, 0);
    glTexCoord2f(0, 0);                        glVertex3f(-1, 1, 0);
glEnd();
glBindTexture(GL_TEXTURE_RECTANGLE_NV, 0);
xShader.unbind();
Result: a black screen when displaying the second texture with the shader enabled (without the shader it's fine). I'm aware that this shader shouldn't do much, but it doesn't even display the first texture.
I'm in the middle of testing things, but the idea is that after rendering to the first texture, I would add the first texture to the second one. To do this I imagine this fragment shader would work:
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
    vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
    vec4 texel1 = texture2D(Texture1, gl_TexCoord[0].st);
    vec4 vec = texel0 + texel1;
    vec.w = 1.0;
    gl_FragColor = vec;
}
And the whole idea is that in a loop, tex2 = tex2 + tex1 (would it be possible to sample tex2 in this shader while rendering to GL_COLOR_ATTACHMENT1_EXT, which tex2 is attached to?).
I've tested calling xShader.bind() both before and after initializing the uniform variables; in both cases, a black screen. For the moment I'm pretty sure there is some problem with the initialization of the texture samplers (maybe because the textures are attached to an FBO?). I've checked the rest and it works fine.
Also, another small problem:
How can I render a texture to the whole screen?
I've tried something like the following, but it doesn't work (I have to translate the quad a bit):
glViewport(0, 0, OPT.m_nWidth, OPT.m_nHeight);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, iTextureImg /*iTextureImgAccumulation*/); // bind our frame buffer texture
glBegin(GL_QUADS);
    glTexCoord2f(0, OPT.m_nHeight);            glVertex3f(-1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, OPT.m_nHeight); glVertex3f(1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, 0);             glVertex3f(1, 1, 0);
    glTexCoord2f(0, 0);                        glVertex3f(-1, 1, 0);
glEnd();
It doesn't work with glVertex2f either.
Edit: I've checked, and I can initialize other uniform variables; only the textures are problematic. I've changed the order but it still doesn't work. :( By the way, the other uniform values are working well, and I've displayed the texture I want to pass to the shader; it renders fine. But for some unknown reason the texture sampler isn't initialized in the fragment shader. Maybe it has something to do with this texture being created as glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB16F /*GL_FLOAT_R32_NV*/, OPT.m_nWidth, OPT.m_nHeight, 0, GL_RED, GL_FLOAT, NULL); (it's not GL_TEXTURE_2D)?
It's not clear what your xShader.bind() does; I guess you call glUseProgram(...) there. But uniform variables (the sampler index in your case) should be set up after glUseProgram(...) is called, in this order:
glUseProgram(your_shaders); // probably your xShader.bind() does this

GLuint sampler_idx = 0;
GLint location = glGetUniformLocation(your_shaders, "Texture0");
if (location != -1) glUniform1i(location, sampler_idx);
else error("can't get uniform location");

glActiveTexture(GL_TEXTURE0 + sampler_idx);
glBindTexture(GL_TEXTURE_2D, iTextureImg);
And yes, you can render to an FBO texture and then use it in a shader in a later pass:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, your_fbo_id);
// render to the FBO here
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
Then use your FBO texture the same way as you use regular textures.
glActiveTexture(GL_TEXTURE0);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation ); // Bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);
This is a rectangle texture.
uniform sampler2D Texture0;
This is a 2D texture. They are not the same thing. The sampler type must match the texture type: you need to use a rectangle sampler (spelled sampler2DRect in standard GLSL), assuming your version of GLSL supports it.
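A minimal sketch of that correction, assuming the ARB_texture_rectangle extension is available for the fixed-function-style pipeline used above:
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect Texture0;

void main()
{
    // Rectangle textures take unnormalized coordinates, which matches the
    // glTexCoord2f(0..width, 0..height) values used when drawing the quad.
    gl_FragColor = texture2DRect(Texture0, gl_TexCoord[0].st);
}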