OpenGL Compute Shader: Writing to texture seemingly does nothing - c++

I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being present, no change is being made to the texture it's supposedly writing to.
The compute shader code follows. It was intended to do something else, but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
    ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
    imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting the capture in RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I would assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
[Image: order of operation]
[Image: listed images]
The red squares are test fills made with glTexSubImage3D, again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
[Image: example in RenderDoc]
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I would assume that there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make for a very long post, and I'm unsure which parts are the most relevant to show.

I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for the layered parameter of glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
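Applied to the dispatch code above, the corrected bindings would look like this (same handles as before):

// Pass GL_TRUE for 'layered' so the entire 3D texture is bound,
// not just a single slice of it.
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);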

Related

Fragment Shader seems to fail after combining GL_TEXTURE_BUFFER and GL_TEXTURE_1D

Currently, I'm trying to implement a fragment shader, which mixes colors of different fluid particles by combining the percentage of the fluids' phases inside the particle. So for example, if fluid 1 possesses 15% of the particle and fluid 2 possesses 85%, the resulting color should reflect that proportion. Therefore, I have a buffer texture containing the percentage reflected as a float value in [0,1] per particle and per phase and a texture containing the fluid colors.
The buffer texture currently contains the percentages for each particle in one consecutive list. That is, for example:
| Particle 1 percentage 1 | Particle 1 percentage 2 | Particle 2 percentage 1 | Particle 2 percentage 2 | ...
I already tested the correctness of the textures by assigning them to the particles directly or by assigning the volFrac to the red component of the final color. I also tried different GLSL debuggers to analyze the problem, but none of the popular options worked on my machine.
#version 330
uniform float radius;
uniform mat4 projection_matrix;
uniform uint nFluids;
uniform sampler1D colorSampler;
uniform samplerBuffer volumeFractionSampler;
in block
{
    flat vec3 mv_pos;
    flat float pIndex;
} In;
out vec4 out_color;
void main(void)
{
    vec3 fluidColor = vec3(0.0, 0.0, 0.0);
    for (int fluidModelIndex = 0; fluidModelIndex < int(nFluids); fluidModelIndex++)
    {
        float volFrac = texelFetch(volumeFractionSampler, int(nFluids * In.pIndex) + fluidModelIndex).x;
        vec3 phaseColor = texture(colorSampler, float(fluidModelIndex) / (int(nFluids) - 1)).xyz;
        fluidColor = volFrac * phaseColor;
    }
    out_color = vec4(fluidColor, 1.0);
}
And also a short snippet of the texture initialization
//Texture Initialisation and Manipulation here
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, nFluids, 0, GL_RGB, GL_FLOAT, color_map);
//Creation and Initialisation for Buffer Texture containing the volume Fractions
glBindBuffer(GL_TEXTURE_BUFFER, m_texBuffer);
glBufferData(GL_TEXTURE_BUFFER, nFluids * nParticles * sizeof(float), m_volumeFractions.data(), GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, m_texBuffer);
The problem now is that if I multiply the information of the buffer texture with the information of the texture, the particles that should be rendered disappear completely, without any warnings or other error messages. So the particles disappear if I use the statement:
fluidColor = volFrac * phaseColor;
Does anybody know, why this is the case or how I can further debug this problem?
Does anybody know, why this is the case
Yes. You seem to use the same texture unit for both colorSampler and volumeFractionSampler, which is simply not allowed as per the spec. Quoting from section 7.11 of the OpenGL 4.6 core profile spec:
It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only be detected at the next rendering command issued which triggers shader invocations, and an INVALID_OPERATION error will then be generated.
So while you can bind different textures to the different targets of texture unit 0 at the same time, each draw call can only use one particular target per texture unit. If you only use one sampler or the other (and the shader compiler will aggressively optimize samplers out if they don't influence the outputs of your shader), you are in a legal use case, but as soon as you use both, it will not work.
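A minimal sketch of the fix, reusing the texture handles from the question (the program handle name shaderProgram is an assumption): give each sampler its own texture unit, then bind each texture to the matching unit.

// Once, after linking: assign each sampler uniform its own texture unit.
glUseProgram(shaderProgram); // hypothetical program handle
glUniform1i(glGetUniformLocation(shaderProgram, "colorSampler"), 0);
glUniform1i(glGetUniformLocation(shaderProgram, "volumeFractionSampler"), 1);

// At draw time: bind each texture to its own unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);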

Reading texels after imageStore()

I'm modifying texels of a texture with imageStore(), and after that I'm reading those texels in some other shader as a sampler2D with texture(), but I get the values which were stored in the texture before the imageStore(). With imageLoad() it works fine, but I need to use filtering and the performance of texture() is better, so is there a way to get the modified data with texture()?
Edit:
First fragment shader(for writing):
#version 450 core
layout (binding = 0, rgba32f) uniform image2D img;
in vec2 vs_uv_out;
void main()
{
    imageStore(img, ivec2(vs_uv_out), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Second fragment shader(for reading):
#version 450 core
layout (binding = 0) uniform sampler2D tex;
in vec2 vs_uv_out;
out vec4 out_color;
void main()
{
    out_color = texture(tex, vs_uv_out);
}
That's how I run the shaders:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
I made this simple application to test that, because the real one is very complex. I first clear the texture with red, but the texels won't appear blue (except when using imageLoad in the second fragment shader).
Oh, that's easy then. Image Load/Store writes use an incoherent memory model, not the synchronous model most of the rest of OpenGL uses. As such, just because you write something with Image Load/Store doesn't mean it's visible to anyone else. You have to explicitly make it visible for reading.
You need a glMemoryBarrier call between the rendering operation that writes the data and the operation that reads it. And since the reading operation is a texture fetch, the correct barrier to use is GL_TEXTURE_FETCH_BARRIER_BIT.
And FYI: your imageLoad was able to read the written data only due to pure luck. Nothing guaranteed that it would be able to read the written data. To ensure such reads, you'd need a memory barrier as well. Though obviously a different one: GL_SHADER_IMAGE_ACCESS_BARRIER_BIT.
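Applied to the draw sequence from the question, a sketch of where the barrier belongs (if the second shader read via imageLoad instead, the barrier bit would be GL_SHADER_IMAGE_ACCESS_BARRIER_BIT):

glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);

// Make the image writes visible to subsequent texture fetches.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);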
Also, texture takes normalized texture coordinates. imageStore takes integer pixel coordinates. Unless that texture is a rectangle texture (and it's not, since you used sampler2D), it is impossible to pass the exact same coordinate to both imageStore and texture.
Therefore, either your pixels are being written to the wrong location, or your texture is being sampled from the wrong location. Either way, there's a clear miscommunication. Assuming that vs_uv_out really is non-normalized, then you should either use texelFetch or you should normalize it. Fortunately, you're using OpenGL 4.5, so that ought to be fairly simple:
ivec2 size = textureSize(tex, 0); // textureSize requires an explicit LOD for sampler2D
vec2 texCoord = vs_uv_out / size;
out_color = texture(tex, texCoord);

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but I'm getting this:
[Image: the incorrectly rendered object]
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've checked that program is valid, and such.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on. My GameObject class has a struct called Mesh with members GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on and creates and binds VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object, it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
    GLuint program = material->shader->program;
    glUseProgram(program);

    glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
    GLint mvpLoc = glGetUniformLocation(program, "mvp"); // glGetUniformLocation returns a GLint
    printf("MVP Location: %d\n", mvpLoc); // prints -1
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);

    for (unsigned int i = 0; i < meshes.size(); i++) {
        meshes.at(i)->render(); // renders the element array for each mesh in the GameObject
    }
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
    gl_Position = mvp * vec4(position, 1);
    vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
    color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the named uniform in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling or linking the program. Are you sure you are using this shader in the program?
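One quick way to check, as a minimal sketch (program is your linked program handle from the code above): query the link status and print the info log.

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    printf("Program link failed: %s\n", log); // shows compiler/linker errors
}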
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
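To make the relationship concrete, here is a minimal sketch of the usual setup (the buffer contents and names positions/indices are illustrative). The VAO records, for each attribute index, which VBO holds the data and how to interpret it:

float positions[] = { /* x, y, z per vertex */ };
GLuint indices[]  = { /* triangle indices */ };

GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ebo);

glBindVertexArray(vao);                              // start recording state into the VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo);                  // the buffer the next pointer refers to
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); // layout, recorded in the VAO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);          // the index buffer is also part of VAO state
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glBindVertexArray(0);

// Later, binding the VAO restores all of that state in one call:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);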

How can I pass multiple textures to a single shader?

I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04.
Now, I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:
uniform sampler2D DecalTex; //The texture
uniform sampler2D BumpTex; //The bump-map
What they don't mention is how to pass two textures to the shader in the first place.
Previously I had:
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
//Fragment shader
gl_FragColor = color * texture2D(DecalTex,gl_TexCoord[0].xy);
so now I have:
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
glBindTexture(GL_TEXTURE_2D, bumpHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
//Fragment shader
gl_FragColor = color * texture2D(BumpTex,gl_TexCoord[0].xy);
//no bump logic yet, just testing I can use texture 1 instead of texture 0
but this doesn't work. The texture disappears completely (effectively the teapot is white). I've tried GL_TEXTURE_2D_ARRAY, glActiveTexture and a few other likely-seeming but fruitless options.
After sifting through the usual mixed bag of references to OpenGL and GLSL new and old, I've come to the conclusion that I probably need glGetUniformLocation. How exactly do I use this in the OpenGL cpp file to pass the already-populated texture handles to the fragment shader?
How to pass an array of textures with different sizes to GLSL?
Passing Multiple Textures from OpenGL to GLSL shader
Multiple textures in GLSL - only one works
(This is homework so please answer with minimal code fragments (if at all). Thanks!)
Failing that, does anyone have a tea cosy mesh?
It is very simple, really. All you need is to bind the sampler to some texture unit with glUniform1i. So for your code sample, assuming the two uniform samplers:
uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
uniform sampler2D BumpTex; // The bump-map (we'll bind to texture unit 1)
In your initialization code:
// Get the uniform variables location. You've probably already done that before...
decalTexLocation = glGetUniformLocation(shader_program, "DecalTex");
bumpTexLocation = glGetUniformLocation(shader_program, "BumpTex");
// Then bind the uniform samplers to texture units:
glUseProgram(shader_program);
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
OK, shader uniforms set, now we render. To do so, you will need the usual glBindTexture plus glActiveTexture:
glActiveTexture(GL_TEXTURE0 + 0); // Texture unit 0
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE0 + 1); // Texture unit 1
glBindTexture(GL_TEXTURE_2D, bumpHandle);
// Done! Now you render normally.
And in the shader, you will use the textures samplers just like you already do:
vec4 a = texture2D(DecalTex, tc);
vec4 b = texture2D(BumpTex, tc);
Note: For techniques like bump-mapping, you only need one set of texture coordinates, since the textures are the same, only containing different data. So you should probably pass texture coordinates as a vertex attribute.
instead of using:
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
in your code, you can have:
layout(binding = 0) uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
layout(binding = 1) uniform sampler2D BumpTex;  // The bump-map (we'll bind to texture unit 1)
in your shader. That also means you don't have to query for the location. (Note that the binding layout qualifier requires GLSL 4.20 or the ARB_shading_language_420pack extension.)

OpenGL - GLSL assigning to varying variable breaks the vertex positioning

I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my gnu/linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value for the texture and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;

varying float v_index;

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
    gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use to represent vertices:
struct vertex {
    GLfloat x;
    GLfloat y;
    GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't effective, and the locations are assigned by the GL. Without the assignment, the index attribute is not used and only the position one is, so it is very likely that position gets location 0. When index is actually used, it might get location 0 instead (on nvidia, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only take effect when the program is linked, so they have to be made before glLinkProgram(), and you have to re-link the program when you want to change those bindings (which you should really avoid). The code you have given suggests that these calls happen during your regular draw calls, so they never have any effect on the linked program.
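A minimal sketch of the intended order, using the names from the question (shader compilation and attachment omitted):

// Bind attribute locations BEFORE linking; they take effect at link time.
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program);

// At draw time, only enable the arrays and set the pointers.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));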