Texturing using texelFetch() - OpenGL

When I pass non-max values into a texture buffer, the rendered geometry is drawn with colors as if the values were at max. I found this issue while using the glTexBuffer() API.
E.g. assume my texture data is GLubyte: when I pass any value less than 255, the color is the same as if it were drawn with 255, instead of a mixture of black and that color.
I tried on AMD and NVIDIA cards, but the results are the same.
Can you tell me where I could be going wrong?
I am copying my code here:
Vert shader:
in vec2 a_position;
uniform float offset_x;
void main()
{
gl_Position = vec4(a_position.x + offset_x, a_position.y, 1.0, 1.0);
}
Frag shader:
out vec4 Color;
uniform isamplerBuffer sampler;
uniform int index;
void main()
{
Color=texelFetch(sampler,index);
}
Code:
GLubyte arr[]={128,5,250};
glGenBuffers(1,&bufferid);
glBindBuffer(GL_TEXTURE_BUFFER,bufferid);
glBufferData(GL_TEXTURE_BUFFER,sizeof(arr),arr,GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER,0);
glGenTextures(1, &buffer_texture);
glBindTexture(GL_TEXTURE_BUFFER, buffer_texture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
glUniform1f(glGetUniformLocation(shader_data.psId,"offset_x"),0.0f);
glUniform1i(glGetUniformLocation(shader_data.psId,"sampler"),0);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),0);
glGenBuffers(1,&bufferid1);
glBindBuffer(GL_ARRAY_BUFFER,bufferid1);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices4),vertices4,GL_STATIC_DRAW);
attr_vertex = glGetAttribLocation(shader_data.psId, "a_position");
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0, 0);
glEnableVertexAttribArray(attr_vertex);
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),1);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(32) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),2);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(64) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
In this case it draws all three squares with a dark red color.

uniform isamplerBuffer sampler;
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
There's your problem: they don't match.
You created the texture's storage as unsigned 8-bit integers, which are normalized to floats upon reading. But you told the shader that you were giving it signed 8-bit integers which will be read as integers, not floats.
You confused OpenGL by being inconsistent. Mismatching sampler types with texture formats yields undefined behavior.
That should be a samplerBuffer, not an isamplerBuffer.
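For reference, here is a minimal sketch of the fragment shader with only the sampler type changed to match the GL_R8 storage (everything else kept as in the question). With a samplerBuffer, texelFetch() returns the unsigned normalized data as floats in [0, 1], so 128 reads back as roughly 0.5 and 250 as roughly 0.98:
out vec4 Color;
uniform samplerBuffer sampler; // float sampler: matches GL_R8 (unsigned normalized)
uniform int index;
void main()
{
    // .r holds the normalized value; for an R format, .gba default to 0, 0, 1
    Color = texelFetch(sampler, index);
}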


OpenGL horizontal pixel pairs drawn swapped

I have a problem that is extremely similar to the one described in OpenGL pixels drawn with each horizontal pair swapped. The main difference is that I'm getting this distortion even when I feed the texture one-byte red-only values.
EDIT: On closer inspection of normal textures, I have discovered that this problem manifests when rendering any 2D texture. I tried rotating the resulting texture by swapping the texture coordinates. The resulting picture still has its horizontal pixel pairs swapped, so I'm assuming that the data in the texture is good and the distortion occurs when rendering the texture.
Here are the relevant parts of the code:
C++:
struct coord_t { float x; float y; };
GLint loc = glGetAttribLocation(program, "coord");
if (loc != -1) {
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE,
        sizeof(coord_t), reinterpret_cast<void *>(offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
loc = glGetAttribLocation(program, "tex_coord");
if (loc != -1) {
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t),
        reinterpret_cast<void *>(4 * sizeof(coord_t) + offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
// ... Texture binding to GL_TEXTURE_2D ...
coord_t pos[] = {coord_t{-1.f, -1.f}, coord_t{1.f, -1.f},
                 coord_t{-1.f, 1.f}, coord_t{1.f, 1.f}
};
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(pos), pos); // position
glBufferSubData(GL_ARRAY_BUFFER, sizeof(pos), sizeof(pos), pos); // texture coordinates
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Corresponding vertex shader:
#version 110
attribute vec2 coord;
attribute vec2 tex_coord;
varying vec2 tex_out;
void main(void) {
gl_Position = vec4(coord.xy, 0.0, 1.0);
tex_out = tex_coord;
}
Corresponding fragment shader:
#version 110
uniform sampler2D my_texture;
varying vec2 tex_out;
void main(void) {
gl_FragColor = texture2D(my_texture, tex_out);
}
After extensive code investigation, I managed to find the culprit.
I was setting the blending function incorrectly, using GL_SRC1_ALPHA and GL_ONE_MINUS_SRC1_ALPHA instead of GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA.
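For reference, what the fix amounts to on the application side (a sketch, assuming blending is enabled in the usual way). GL_SRC1_ALPHA and GL_ONE_MINUS_SRC1_ALPHA are dual-source blending factors that take their alpha from a second fragment shader output, which this shader never writes:
glEnable(GL_BLEND);
// standard alpha blending, instead of the dual-source factors used by mistake
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);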

Getting maximum/minimum luminance of texture OpenGL

I'm starting with OpenGL, and I want to create a tone-mapping algorithm.
I know that my first step is to get the max/min luminance value of the HDR image.
I have the image in a texture attached to an FBO, and I'm not sure how to start.
I think the best way is to pass texture coordinates to a fragment shader and then go through all the pixels, generating progressively smaller textures.
But I don't know how to do the downsampling manually until I have a 1x1 texture; do I need a lot of FBOs, and where do I create each new texture?
I've searched a lot of information, but almost nothing is clear to me yet.
I would appreciate some help getting oriented and started.
EDIT 1: Here are my shaders and how I pass texture coordinates to the vertex shader.
To pass texture coordinates and vertex positions, I draw a quad using a VBO:
void drawQuad(Shaders* shad){
// coords: vertex (3) + texture (2)
std::vector<GLfloat> quadVerts = {
-1, 1, 0, 0, 0,
-1, -1, 0, 0, 1,
1, 1, 0, 1, 0,
1, -1, 0, 1, 1};
GLuint quadVbo;
glGenBuffers(1, &quadVbo);
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glBufferData(GL_ARRAY_BUFFER, 4 * 5 * sizeof(GLfloat), &quadVerts[0], GL_STATIC_DRAW);
// Shader attributes
GLuint vVertex = shad->getLocation("vVertex");
GLuint vUV = shad->getLocation("vUV");
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 3 * sizeof(GLfloat), NULL);
// Set attribs
glEnableVertexAttribArray(vVertex);
glVertexAttribPointer(vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, 0);
glEnableVertexAttribArray(vUV);
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (void*)(3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // Draw
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(vVertex);
glDisableVertexAttribArray(vUV);
}
Vertex shader:
#version 420
in vec2 vUV;
in vec4 vVertex;
smooth out vec2 vTexCoord;
uniform mat4 MVP;
void main()
{
vTexCoord = vec2(vUV.x * 1024,vUV.y * 512);
gl_Position = MVP * vVertex;
}
And fragment shader:
#version 420
smooth in vec2 vTexCoord;
layout(binding=0) uniform sampler2D texHDR; // Tex image unit binding
layout(location=0) out vec4 color; //Frag data output location
vec4[4] col;
void main(void)
{
for(int i=0;i<=1;++i){
for(int j=0;j<=1;++j){
col[(2*i+j)] = texelFetch(texHDR, ivec2(2*vTexCoord.x+i,2*vTexCoord.y+j),0);
}
}
color = (col[0]+col[1]+col[2]+col[3])/4;
}
In this test code, I have a texture of size 1024x512. My idea is to render to a texture attached to GL_COLOR_ATTACHMENT0 of an FBO (layout(location=0)) using these shaders and the texture bound to GL_TEXTURE0, which holds the image (layout(binding=0)).
My goal is to get the image from texHDR into my FBO texture at half the size.
For downsampling, all you need to do in the fragment shader is multiple texture lookups, then combine them for the output fragment. For example, you could do 2x2 lookups, so each pass would reduce the resolution in x and y by a factor of 2.
Let's say you want to reduce a 1024x1024 image. You would then render a quad into a 512x512 image. Set it up so your vertex shader simply generates values for x and y between 0 and 511. The fragment shader then calls texelFetch(tex, ivec2(2*x+i, 2*y+j)), where i and j each loop from 0 to 1. Cache those four values, and output their min and max to your texture.
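To make that concrete, here is a minimal sketch of the first reduction pass as a fragment shader (the name texSrc and the Rec. 709 luminance weights are my own choices, not from the question). It assumes vTexCoord carries the texel coordinates of the destination pixel, as in the question's vertex shader:
#version 420
smooth in vec2 vTexCoord;                     // texel coordinates of the output pixel
layout(binding = 0) uniform sampler2D texSrc; // the HDR image (or the previous pass's output)
layout(location = 0) out vec4 minMax;         // .r = min luminance, .g = max luminance
void main(void)
{
    float lo =  1.0e30;
    float hi = -1.0e30;
    ivec2 base = 2 * ivec2(vTexCoord);        // top-left texel of the 2x2 source block
    for (int i = 0; i <= 1; ++i) {
        for (int j = 0; j <= 1; ++j) {
            vec4 c = texelFetch(texSrc, base + ivec2(i, j), 0);
            float lum = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
            lo = min(lo, lum);
            hi = max(hi, lum);
        }
    }
    minMax = vec4(lo, hi, 0.0, 1.0);
}
Subsequent passes would read the previous pass's output instead and take min(lo, c.r) and max(hi, c.g) rather than recomputing luminance; after enough passes, the 1x1 result holds the global minimum and maximum.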

Why is texelFetch always returning 0 for a 1D single channel texture?

I'm using a 1D texture to store single-channel integer data which needs to be accessed by the fragment shader. Coming from the application, the integer data type is GLubyte and needs to be accessed as an unsigned integer in the shader. Here is how the texture is created (note that there are other texture units being bound after, which I'm hoping are unrelated to the problem):
GLuint mTexture[2];
std::vector<GLubyte> data;
///... populate with 289 elements, all with value of 1
glActiveTexture(GL_TEXTURE0);
{
glGenTextures(1, &mTexture[0]);
glBindTexture(GL_TEXTURE_1D, mTexture[0]);
{
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8UI, data.size(), 0,
GL_RED_INTEGER, GL_UNSIGNED_BYTE, &data[0]);
}
glBindTexture(GL_TEXTURE_1D, 0);
}
glActiveTexture(GL_TEXTURE1);
{
//Setup the other texture using mTexture[1]
}
The fragment shader looks like this:
#version 420 core
smooth in vec2 tc;
out vec4 color;
layout (binding = 0) uniform usampler1D buffer;
layout (binding = 1) uniform sampler2DArray sampler;
uniform float spacing;
void main()
{
vec3 pos;
pos.x = tc.x;
pos.y = tc.y;
if (texelFetch(buffer, 0, 0).r == 1)
pos.z = 3.0;
else
pos.z = 0.0;
color = texture(sampler, pos);
}
The value returned from texelFetch in this example basically dictates which texture layer to use from the 2D array for the final output color. I want it to return the value 1, but it always returns 0 and hits the else clause in the fragment shader. Using NVIDIA's Nsight tool, I can see that the texture does contain the value 1, 289 times.

Port from OpenGL to GLES 2.0

I have used https://github.com/akrinke/Font-Stash.git for some desktop applications. Now I want to use it on a Raspberry Pi, which uses GLES2. I looked into the code and the only code path that doesn't work on GLES is the flush_draw function:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts);
glTexCoordPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts+2);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I'm trying to port it to GLES like this:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
GLint position_index = get_attrib(stash->program, "position");
glEnableVertexAttribArray(position_index);
glVertexAttribPointer (position_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts);
GLint texture_coord_index = get_attrib(stash->program, "texCoord");
glEnableVertexAttribArray(texture_coord_index);
glVertexAttribPointer (texture_coord_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts + 2);
GLint texture_index = get_uniform(stash->program, "texture");
glUniform1i(texture_index, 0);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
with this vertex shader
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
gl_Position = position;
texCoordVar = texCoord;
}
and this fragment shader
precision mediump float; // set default precision for floats to medium
uniform sampler2D texture; // shader texture uniform
varying vec2 texCoordVar; // fragment texture coordinate varying
void main() {
// sample the texture at the interpolated texture coordinate
// and write it to gl_FragColor
gl_FragColor = texture2D( texture, texCoordVar);
}
but I can't get anything; nothing appears on screen.
Can anybody show me what's wrong with my code?
You should set up transformations in your vertex shader. The best way to port a fixed-function OpenGL app is to write vertex and fragment shaders that replicate the fixed pipeline, with the transformations set as uniforms, and update those uniforms every time the transform changes.
glEnable(GL_TEXTURE_2D) is not valid in GLES2, by the way. Also, you're not doing any manipulation of the position in your vertex shader, so unless the coordinates are guaranteed to sit within the frustum and you're just passing them through to the rasterizer, you are leaving it to luck whether or not they end up in the frustum. Are you sure you've accounted for everything the fixed-function pipe used to handle regarding transforms?
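As an illustration of what that means for your vertex shader, it could take the old fixed-function modelview-projection matrix as a uniform and apply it, rather than passing pixel-space positions straight through (the uniform name u_mvp is my own invention; set it from the application whenever the transform changes, e.g. with an orthographic projection replacing the old glOrtho setup):
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
uniform mat4 u_mvp; // replacement for the fixed-function projection * modelview
void main() {
    gl_Position = u_mvp * position;
    texCoordVar = texCoord;
}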

Trouble with imageStore() (OpenGL 4.3)

I'm trying to output some data from a compute shader to a texture, but imageStore() seems to do nothing. Here's the shader:
#version 430
layout(rgba32f) uniform image2D image;
layout (local_size_x = 1, local_size_y = 1) in;
void main() {
imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
}
and the application code is here:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(program->GetName());
glUniform1i(program->GetUniformLocation("image"), 0);
glDispatchCompute(WIDTH, HEIGHT, 1);
Then a full-screen quad is rendered with that texture, but currently it only shows some random old data from video memory. Any idea what could be wrong?
EDIT:
This is how I display the texture:
// This comes right after the previous block of code
glUseProgram(drawProgram->GetName());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(drawProgram->GetUniformLocation("sampler"), 0);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glfwSwapBuffers();
and the drawProgram consists of:
#version 430
#extension GL_ARB_explicit_attrib_location : require
layout(location = 0) in vec2 position;
out vec2 uvCoord;
void main() {
gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
uvCoord = position;
}
and:
#version 430
in vec2 uvCoord;
out vec4 color;
uniform sampler2D sampler;
void main() {
vec2 uv = (uvCoord + vec2(1.0f)) / 2.0f;
uv.y = 1.0f - uv.y;
color = texture(sampler, uv);
//color = vec4(uv.x, uv.y, 0.0f, 1.0f);
}
The last commented line in the fragment shader produces this output (see the render output screenshot).
The vertex array object (vao) has one buffer with 6 2D vertices:
-1.0, -1.0
1.0, -1.0
1.0, 1.0
1.0, 1.0
-1.0, 1.0
-1.0, -1.0
This is how I display the texture:
That's not good enough. I don't see a call to glMemoryBarrier, so there's no guarantee that your code actually works.
Remember: writes to images via Image Load/Store are not memory coherent. They require explicit user synchronization before they become visible. If you want to use an image you have stored to as a texture later, there must be an explicit glMemoryBarrier call after the rendering command that writes to it, but before the rendering command that samples from it as a texture.
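In terms of the code from the question, that means something like this (a sketch; the barrier bit chosen matches the later use of the image as a texture):
glDispatchCompute(WIDTH, HEIGHT, 1);
// make the compute shader's image writes visible to subsequent texture fetches
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
// ... now bind tex and draw the full-screen quad that samples it ...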
Why that is a problem, I don't know
Because desktop OpenGL is not OpenGL ES.
The last three parameters only describe the arrangement of the pixel data you're giving OpenGL. They change nothing about how OpenGL stores the data. In ES, they do, but that's only because ES doesn't do format conversions.
In desktop OpenGL, it is perfectly legal to upload floating-point data to a normalized integer texture; OpenGL is expected to convert the data as best it can. ES doesn't do conversions, so it has to change the internal format (the third parameter) to match the data.
Desktop GL does not. If you want a specific image format, you ask for it. Desktop GL gives you what you ask for, and only what you ask for.
Always use sized internal formats.
GL_RGBA is not a sized internal format, so you can't know which format you will actually get. Most often, OpenGL turns it into GL_RGBA8.
In your case, the GL_FLOAT parameter only describes the pixel data you would upload to the texture.
Read table 2 here to see what you can set as an internal texture format.
Okay I found the solution. The problem lies here:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
This line doesn't specify a sized internal format (it only says GL_RGBA). When I supplied GL_RGBA32F it started working. Why that is a problem, I don't know (hopefully somebody will be able to explain).
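For completeness, the corrected allocation requests the sized format that matches both the GL_RGBA32F image binding and the float data the compute shader writes:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);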