Sampling unsigned integer texture data in a shader - opengl

I want to render an unsigned integer texture with a fragment shader, using the following code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
and part of the fragment shader code:
#version 330
uniform usampler2D tex;
void main(void){
vec3 vec_tex;
vec_tex = (texture(tex), TexCoordOut).r
}
It is written in the OpenGL Programming Guide that if I want to receive integers in the shader, then I should use an integer sampler type, an integer internal format, and an integer external format and type. Here I use GL_R8UI as the internal format, GL_RED_INTEGER as the external format, and GL_UNSIGNED_BYTE as the data type. I also use usampler2D in the shader. But when the program starts to render, I always get the error "implicit cast from 'int' to 'uint'". It seems that the texture data is stored as int, and the unsigned sampler cannot convert that. But I did use GL_R8UI as the internal format, so the texture data should be stored as unsigned. Why does the unsigned sampler only get signed int? How can I solve this problem?

The texture function call is not correct. Secondly, because the sampler is a usampler2D, the texture function returns unsigned integer values, which need to be handled in the shader by converting the components to float and dividing them by 255.0 (as you use GL_R8UI) before writing the fragment color output.
#version 330
uniform usampler2D tex;
in vec2 TexCoordOut;
out vec3 OutColor;
void main(void){
    uvec3 vec_tex = texture(tex, TexCoordOut).rgb;
    OutColor = vec3(vec_tex) / 255.0;
}
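One more thing that is easy to miss with integer textures (the question does not show the texture parameter setup, so this is just a guess at it): integer textures cannot be linearly filtered. With the default GL_NEAREST_MIPMAP_LINEAR minification filter a GL_R8UI texture is incomplete and sampling it returns nothing useful. A minimal setup sketch, assuming a texture object named tex_id (hypothetical name):
glBindTexture(GL_TEXTURE_2D, tex_id);  // tex_id is a placeholder for your texture object
// integer textures require nearest filtering to be complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);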

Related

Multiple samplers for one texture array

How can I create multiple samplers for one texture array?
So far I have relied upon OpenGL figuring out that the declared uniform sampler2DArray txa sampler refers to the texture array I bound with glBindTexture.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height,
             layerCount, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture_array);
...
glGenTextures(1,&texture_ID);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_ID);
...
//fragment shader
uniform sampler2DArray txa;
...
vec2 tc;
tc.x = (1.0f - tex_coord.x) * tex_quad[0] + tex_coord.x * tex_quad[1];
tc.y = (1.0f - tex_coord.y) * tex_quad[2] + tex_coord.y * tex_quad[3];
vec4 sampled_color = texture(txa, vec3(tc, tex_id));
I tried specifying two samplers in the fragment shader but I get a compilation error for the fragment shader:
uniform sampler2DArray txa;
uniform sampler2DArray txa2;
...
vec4 texture = texture(txa, vec3(tc, tex_id));
vec4 texture2 = texture(txa2, vec3(tc2, tex_id));
I didn't expect this to work; however, I am not sure that the fragment shader compiler checks whether samplers are assigned textures, so maybe something else is wrong.
I tried generating and binding the sampler objects but I still get a fragment shader error:
GLuint sampler_IDs[2];
glGenSamplers(2,sampler_IDs);
glBindSampler(texture_ID, sampler_IDs[0]);
glBindSampler(texture_ID, sampler_IDs[1]);
I would like to stick to lower versions of OpenGL, is it possible? Any help is appreciated, thank you!
The error is caused by the line
vec4 texture = texture(txa, vec3(tc, tex_id));
The name of the variable texture is the same as the name of the built-in function texture. The variable is declared in local scope, so in this scope texture is a variable, and calling the function texture causes an error.
Rename the variable to solve the issue, e.g.:
vec4 texture1 = texture(txa, vec3(tc, tex_id));
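As for using two samplers at once: that is perfectly fine and needs nothing beyond what you already have. Multiple sampler uniforms work on any shader-capable GL version; texture arrays themselves need OpenGL 3.0 (or EXT_texture_array). Each sampler uniform just has to be pointed at its own texture unit, with a texture array bound to that unit. Also note that glBindSampler takes a texture unit index as its first argument, not a texture object ID. A sketch of the host-side setup, assuming a second texture array texture_ID2 and a shader program programID (both names are hypothetical):
// bind the two texture arrays to two different texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_ID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_ID2);
// tell each sampler uniform which unit it reads from
glUseProgram(programID);
glUniform1i(glGetUniformLocation(programID, "txa"), 0);
glUniform1i(glGetUniformLocation(programID, "txa2"), 1);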

Shaders casting uint8 to float, and reinterpreting it back to uint

I have a vertex attribute that's being chewed up very weirdly by my shaders. It's uploaded to the VBO as a (uint8)1, but when the fragment shader sees it, it's interpreted as 1065353216, or 0x3F800000, which some of you might recognize as the bit pattern for a 1.0f in floating point.
I have no idea why. I can confirm that it is uploaded to the VBO as a 1 (0x00000001), though.
The vertex attribute is defined as:
struct Vertex{
...
glm::u8vec4 c2; // attribute with problems
};
// not-normalized
glVertexAttribPointer(aColor2, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, c2));
While the shader has that attribute bound with
glBindAttribLocation(programID, aColor2, "c2");
The vertex shader passes along the attribute pretty uneventfully:
#version 330
in lowp uvec4 c2; // <-- this value is uploaded to the VBO as 0x00, 0x00, 0x00, 0x01;
flat out lowp uvec4 indices;
void main(){
indices = c2;
}
And finally the fragment shader gets ahold of it:
flat in lowp uvec4 indices; // <-- this value is now 0, 0, 0, 0x3F800000
out lowp vec4 fragColor;
void main(){
fragColor = vec4(indices) / 256.0;
}
The indices varying leaves the vertex shader as a 0x3F800000 for indices.w according to my shader inspector, so something odd is happening there? What could be causing this?
If the type of a vertex attribute is integral, then you have to use glVertexAttribIPointer rather than glVertexAttribPointer (focus on the I). See glVertexAttribPointer.
The type which is specified in glVertexAttribPointer is the type of the data in the source buffer and doesn't specify the target attribute type in the shader. If you use glVertexAttribPointer, then the type of the attribute in the shader program is assumed to be floating point, and the integral data is converted.
If you use glVertexAttribIPointer, then the values are left as integer values.
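For the attribute in the question that would look roughly like this (a sketch reusing the names from the question; note that the I variant has no normalized parameter):
// keep the bytes as integers so the shader's uvec4 receives 0, 0, 0, 1 - not 1.0f
glVertexAttribIPointer(aColor2, 4, GL_UNSIGNED_BYTE, sizeof(Vertex),
                       (void*)offsetof(Vertex, c2));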

Reading texels after imageStore()

I'm modifying texels of a texture with imageStore(), and after that I'm reading those texels in another shader as a sampler2D with texture(), but I get the values which were stored in the texture before the imageStore(). With imageLoad() it works fine, but I need to use filtering and the performance of texture() is better, so is there a way to get the modified data with texture()?
Edit:
First fragment shader(for writing):
#version 450 core
layout (binding = 0, rgba32f) uniform image2D img;
in vec2 vs_uv_out;
void main()
{
imageStore(img, ivec2(vs_uv_out), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Second fragment shader(for reading):
#version 450 core
layout (binding = 0) uniform sampler2D tex;
in vec2 vs_uv_out;
out vec4 out_color;
void main()
{
out_color = texture(tex, vs_uv_out);
}
That's how I run the shaders:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
I made this simple application to test that because the real one is very complex. I first clear the texture with red, but the texels won't appear blue (except when using imageLoad in the second fragment shader).
Oh, that's easy then. Image load/store writes use an incoherent memory model, not the synchronous model most of the rest of OpenGL uses. As such, just because you write something with image load/store doesn't mean it's visible to anyone else. You have to explicitly make it visible for reading.
You need a glMemoryBarrier call between the rendering operation that writes the data and the operation that reads it. And since the reading operation is a texture fetch, the correct barrier to use is GL_TEXTURE_FETCH_BARRIER_BIT.
And FYI: your imageLoad was able to read the written data only due to pure luck. Nothing guaranteed that it would be able to read the written data. To ensure such reads, you'd need a memory barrier as well. Though obviously a different one: GL_SHADER_IMAGE_ACCESS_BARRIER_BIT.
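In the draw sequence from the question, a sketch of where that barrier would go:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
// make the imageStore() writes visible to the texture() fetches of the next pass
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);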
Also, texture takes normalized texture coordinates. imageStore takes integer pixel coordinates. Unless that texture is a rectangle texture (and it's not, since you used sampler2D), it is impossible to pass the exact same coordinate to both imageStore and texture.
Therefore, either your pixels are being written to the wrong location, or your texture is being sampled from the wrong location. Either way, there's a clear miscommunication. Assuming that vs_uv_out really is non-normalized, then you should either use texelFetch or you should normalize it. Fortunately, you're using OpenGL 4.5, so that ought to be fairly simple:
ivec2 size = textureSize(tex, 0);
vec2 texCoord = vs_uv_out / vec2(size);
out_color = texture(tex, texCoord);
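Alternatively, if filtering turns out not to be needed at that point, the texelFetch route mentioned above is a one-liner (integer pixel coordinates, mipmap level 0):
out_color = texelFetch(tex, ivec2(vs_uv_out), 0);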

What is the most efficient way to store/pass color values (vertex attribute)?

Is there a way to store color values in VRAM other than as a float per color component?
Since a color can be represented as one byte per component, how can I force my fragment shader to treat the color components as ranging over [0, 255] instead of the default range [0.0, 1.0]?
If I use GL_UNSIGNED_BYTE as the type, do I have to set the normalized parameter to GL_TRUE to convert them to 0.0-1.0 values that can be interpreted by the fragment shader?
The output of the fragment shader is independent of the input of the vertex shader. For example, if you are storing colors in an RGBA 8-bit format, it would look something like this.
//...
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_FALSE, 4, (void*)0);
//...
in the vertex shader
//the unsigned bytes are automatically converted to floats in the range [0,255]
//if normalized would have been set to true the range would be [0,1]
layout(location = 0) in vec4 color;
out vec4 c;
//...
c = color; //pass color to fragment shader
fragment shader
in vec4 c;
out vec4 out_color; //output (a texture from a framebuffer)
//....
//the output of the fragment shader must be converted to range [0,1] unless
//you're writing to integer textures (i'm asuming not here)
out_color = c / 255.0f;
A VBO is just a bunch of bytes in the first place. You need to tell OpenGL some information about the data in the VBO. One does that by invoking glVertexAttribPointer.
glVertexAttribPointer(index, size, GL_FLOAT, ...)
With GL_FLOAT, OpenGL knows that your data comes in as float32 (4 bytes). In your case, you could use GL_UNSIGNED_BYTE, which is an 8-bit type, so you can encode values from 0 to 255.
Since this information is only stored in the VAO, one could use the same VBO with different views on the data. Here one can find all available types.
According to the documentation of glVertexAttribPointer, you have to set the normalized parameter to let the bytes be scaled to the range 0.0 to 1.0.
But as I can see in your comment to another answer, your real problem is with the output. The shader type vec4 always consists of floats, so the values must be in the range 0.0 to 1.0.
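Putting both answers together, the usual compact layout is one unsigned byte per component with normalization turned on, so the shader keeps working in the standard [0.0, 1.0] range and no division by 255 is needed. A sketch (attribute location 0 is an assumption):
// GL_TRUE: the bytes arrive in the vertex shader as floats in [0.0, 1.0]
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), (void*)0);
glEnableVertexAttribArray(0);
// in the shader: layout(location = 0) in vec4 color;  // already normalized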

Fragment shader always uses 1.0 for alpha channel

I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object that with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Texcoord = texcoord;
Color = color;
gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
float t = texture(tex, Texcoord);
outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn/error you for doing what you are trying to do right now. I suspect yours does as well, but you are not checking the shader info log when you compile your shader.
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically setup to return the following vec4:
       vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
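If all you want is to use a single-channel 8-bit image as the alpha source, a plain color format sidesteps that special-case behaviour entirely (and remember that alpha only has a visible effect once blending is enabled). A sketch, keeping the question's gs object:
// ordinary single-channel texture: the shader reads the value from .r
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, gs.width(), gs.height(), 0,
             GL_RED, GL_UNSIGNED_BYTE, gs.buffer());
and in the fragment shader:
float t = texture(tex, Texcoord).r;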
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA - 4 components, not just three.