In a compute shader, I'm using a r16ui image and I have a problem:
layout (binding = 0, r16ui) uniform writeonly uimage2D texture;
imageStore(texture, iTextureCoords, uvec4(0xffff, 0, 0, 0));
The result in the buffer is not 0xffff but 32767. How can I properly convert the int 0xffff to uint inside the shader?
OK, problem solved!
I had passed the texture to the compute shader (via glBindImageTexture) as GL_R16I instead of GL_R16UI. That was the problem.
I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being present, no change is made to the texture it's supposedly writing to.
Here is the compute shader code. It was intended to do something else, but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting through RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
Order of operation
Listed images
The red squares are test fills made with glTexSubImage3D, again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
Example in RenderDoc
Additionally I'm using glDebugMessageCallback which usually catches all errors so I would assume that there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make a very long post and I'm unsure which parts are the most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for the layered parameter of glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
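Applied to the binding code from the question, the fix would look like this (a sketch reusing the question's variable names; only the layered argument changes):

```c
/* Pass GL_TRUE for layered so the entire 3D mipmap level is bound,
   not just a single layer of it. */
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);
```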
I'm using a 1D texture to store single-channel integer data which needs to be accessed by the fragment shader. Coming from the application, the integer data type is GLubyte and needs to be accessed as an unsigned integer in the shader. Here is how the texture is created (note that there are other texture units being bound after, which I'm hoping are unrelated to the problem):
GLuint mTexture[2];
std::vector<GLubyte> data;
///... populate with 289 elements, all with value of 1
glActiveTexture(GL_TEXTURE0);
{
glGenTextures(1, &mTexture[0]);
glBindTexture(GL_TEXTURE_1D, mTexture[0]);
{
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8UI, data.size(), 0,
GL_RED_INTEGER, GL_UNSIGNED_BYTE, &data[0]);
}
glBindTexture(GL_TEXTURE_1D, 0);
}
glActiveTexture(GL_TEXTURE1);
{
//Setup the other texture using mTexture[1]
}
The fragment shader looks like this:
#version 420 core
smooth in vec2 tc;
out vec4 color;
layout (binding = 0) uniform usampler1D buffer;
layout (binding = 1) uniform sampler2DArray sampler;
uniform float spacing;
void main()
{
vec3 pos;
pos.x = tc.x;
pos.y = tc.y;
if (texelFetch(buffer, 0, 0).r == 1)
pos.z = 3.0;
else
pos.z = 0.0;
color = texture(sampler, pos);
}
The value returned from texelFetch in this example basically dictates which texture layer to use from the 2D array for the final output color. I want it to return the value 1, but it always returns 0 and hits the else clause in the fragment shader. Using NVIDIA's Nsight tool, I can see the texture does contain the value 1, 289 times.
I want to render an unsigned integer texture with a fragment shader using the following code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
and part of the fragment shader code:
#version 330
uniform usampler2D tex;
void main(void){
vec3 vec_tex;
vec_tex = (texture(tex), TexCoordOut).r
}
It is written in the OpenGL Programming Guide that if I want to receive integers in the shader, then I should use an integer sampler type, an integer internal format, and an integer external format and type. Here I use GL_R8UI as the internal format, GL_RED_INTEGER as the external format, and GL_UNSIGNED_BYTE as the data type. I also use usampler2D in the shader. But when the program starts to render, I always get the error "implicit cast from 'int' to 'uint'". It seems that the texture data is stored as int, and the unsigned sampler cannot convert it. But I did use GL_R8UI as the internal format, so the texture data should be stored as unsigned. Why does the unsigned sampler only get signed int? How can I solve this problem?
The texture function call is not correct. Also, with a usampler2D the texture function returns unsigned integer values; to produce a fragment color, they need to be converted to floats and divided by 255.0 (as you use GL_R8UI) before being written to the output.
uniform usampler2D tex;
in vec2 TexCoordOut;
out vec4 OutColor;
void main(void){
uvec3 vec_tex = texture(tex, TexCoordOut).rgb;
OutColor = vec4(vec3(vec_tex) / 255.0, 1.0);
}
I'm trying to work out how I can achieve a palette swap using fragment shaders (looking at this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I am new to OpenGL, so I'd be glad if someone could explain my issue.
Here is code snippet which I am trying to reproduce:
http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
I set up the OpenGL environment so that I can create a window, load textures and shaders, and render my single square, which is mapped to the corners of the window (when I resize the window, the image gets stretched too).
I am using a vertex shader to convert coordinates from screen space to texture space, so my texture is stretched too:
attribute vec2 position;
varying vec2 texcoord;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
texcoord = position * vec2(0.5) + vec2(0.5);
}
The fragment shader is
uniform float fade_factor;
uniform sampler2D textures[2];
varying vec2 texcoord;
void main()
{
vec4 index = texture2D(textures[0], texcoord);
vec4 texel = texture2D(textures[1], index.xy);
gl_FragColor = texel;
}
textures[0] is the indexed texture (the one I'm trying to colorize).
Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255), 9 colors total; that's why it looks almost black. I want to encode my colors using the value stored in the red channel.
textures[1] is a table of colors (9x1 pixels, each pixel a unique color, zoomed to 90x10 for posting).
So, as you can see from the fragment shader excerpt, I want to read an index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color value from the pixel stored at (x=5, y=0) in the second texture. Same as written in the wiki.
But instead of painted image I get:
Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate, as in vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0). But I can get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture coordinates are normalized, mapping my 9x1 pixels to (0,0)->(1,1). But how can I "disable" that feature and simply load my palette texture so I could just ask "give me the color value of the pixel stored at (x, y), please"?
Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)
Wrong. Every pixel has the color values: (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).
Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating point values. So what you think is "255" is really just "1.0" when you fetch it from the shader.
The correct way to handle this is to first transform the normalized values back into their non-normalized form, by multiplying by 255. Then convert them into texture coordinates by dividing by the palette texture's width minus 1. Also, your palette texture should be 1D, not 2D:
#version 330 //Always include a version.
uniform float fade_factor;
uniform sampler2D palettedTexture;
uniform sampler1D palette;
in vec2 texcoord;
layout(location = 0) out vec4 outColor;
void main()
{
float paletteIndex = texture(palettedTexture, texcoord).r * 255.0;
outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}
The above code is written for GLSL 3.30. If you're using earlier versions, translate it accordingly.
Also, you shouldn't be using an RGBA texture for your paletted texture. It's just one channel, so use either GL_LUMINANCE or GL_R8.
When I pass non-max values into the texture buffer, the rendered geometry is drawn with colors at max values. I found this issue while using the glTexBuffer() API.
For example, assume my texture data is GLubyte; when I pass any value less than 255, the color is the same as if I had passed 255, instead of a mixture of black and that color.
I tried on AMD and NVIDIA cards, but the results are the same.
Can you tell me where I could be going wrong?
I am copying my code here:
Vert shader:
in vec2 a_position;
uniform float offset_x;
void main()
{
gl_Position = vec4(a_position.x + offset_x, a_position.y, 1.0, 1.0);
}
Frag shader:
out vec4 Color;
uniform isamplerBuffer sampler;
uniform int index;
void main()
{
Color=texelFetch(sampler,index);
}
Code:
GLubyte arr[]={128,5,250};
glGenBuffers(1,&bufferid);
glBindBuffer(GL_TEXTURE_BUFFER,bufferid);
glBufferData(GL_TEXTURE_BUFFER,sizeof(arr),arr,GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER,0);
glGenTextures(1, &buffer_texture);
glBindTexture(GL_TEXTURE_BUFFER, buffer_texture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
glUniform1f(glGetUniformLocation(shader_data.psId,"offset_x"),0.0f);
glUniform1i(glGetUniformLocation(shader_data.psId,"sampler"),0);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),0);
glGenBuffers(1,&bufferid1);
glBindBuffer(GL_ARRAY_BUFFER,bufferid1);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices4),vertices4,GL_STATIC_DRAW);
attr_vertex = glGetAttribLocation(shader_data.psId, "a_position");
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0, 0);
glEnableVertexAttribArray(attr_vertex);
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),1);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(32) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),2);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(64) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
In this case it draws all 3 squares with a dark red color.
uniform isamplerBuffer sampler;
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
There's your problem: they don't match.
You created the texture's storage as unsigned 8-bit integers, which are normalized to floats upon reading. But you told the shader that you were giving it signed 8-bit integers which will be read as integers, not floats.
You confused OpenGL by being inconsistent. Mismatching sampler types with texture formats yields undefined behavior.
That should be a samplerBuffer, not an isamplerBuffer.