I have an OpenGL program which shows a (huge) lack of repeatability. In the fragment shader, I look for a specific condition. If that condition is met, I write to an SSBO the worldspace coordinates associated with that fragment, which I have carried over from the vertex shader.
My problem is that, from one run of the program to the next, the number of write operations to the SSBO varies widely. Could it be that, if multiple shader invocations want to write to the SSBO at the same time, the writes are not always performed?
I do not read from the SSBO in any other shader, so it is not a write/read synchronization problem. I read the SSBO only when I am back in the CPU application.
I use OpenGL 4.3 on an NVIDIA GTX 645 card.
Yes, I have fixed it.
After looking around on the web for a long time, I found out that calling glDeleteBuffers(1, &_SSBO); alone might not always be enough to delete the buffer: its name must be released as well. You would think the SSBO has been deleted and you can bind a new one with glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_SSBO); but, from what I learned (and it worked), the new binding may end up reusing the same SSBO name (or handle). To prevent this, make sure the SSBO is unbound by calling glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); before binding a new one. The argument 0 ensures the SSBO has really been unbound. Good luck!
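A minimal sketch of that cleanup order (assuming the buffer was bound to indexed binding point 0, as in the code below):
// Unbind the SSBO from the generic target and the indexed binding
// point before deleting it, so the old name cannot be silently
// reused by the next glBindBuffer call.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
glDeleteBuffers(1, &Detected_Vertices_SSBO);
Detected_Vertices_SSBO = 0; // drop the stale handle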
In the CPU application:
//Create one Shader Storage Buffer Object for vertices of detected points
GLuint Detected_Vertices_SSBO;
glGenBuffers(1, &Detected_Vertices_SSBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_SSBO);
glBufferData(GL_SHADER_STORAGE_BUFFER, (sizeN - NbTotalreflectionPoints.at(i)) * sizeof(glm::vec4), NULL, GL_DYNAMIC_DRAW);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Vertices SSBO at creation time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, Detected_Vertices_SSBO);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Vertices SSBO at binding time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
//Create one Shader Storage Buffer Object for colors of detected points
GLuint Detected_Vertices_Colors_SSBO;
glGenBuffers(1, &Detected_Vertices_Colors_SSBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_Colors_SSBO);
glBufferData(GL_SHADER_STORAGE_BUFFER, (sizeN - NbTotalreflectionPoints.at(i)) * sizeof(glm::vec4), NULL, GL_DYNAMIC_DRAW);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Colors of Vertices SSBO at creation time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, Detected_Vertices_Colors_SSBO);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Vertices Colors SSBO at binding time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
….
glDrawArrays(GL_POINTS, 2*3 + NbTargets1*12*3, NbPoints);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
glFinish();
…
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_SSBO);
glm::vec4 *SSBO_Position_ptr = (glm::vec4 *)glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_ONLY);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Vertices SSBO at mapping time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
glFinish();
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_Colors_SSBO);
glm::vec4 *SSBO_Color_ptr = (glm::vec4 *)glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_ONLY);
sprintf(OpenGLErrorMessage, "%s\n", "Error on Vertices Colors SSBO at mapping time.");
QueryForOpenGLErrors(OpenGLErrorMessage);
glFinish();
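One note: the snippet above elides the unmapping step. Once the CPU-side read is done, each mapped buffer should be released again; a sketch:
// Unmap both SSBOs after reading them on the CPU side.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_SSBO);
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, Detected_Vertices_Colors_SSBO);
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);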
Then, in the fragment shader:
#version 430 core
layout(early_fragment_tests) in;
// Interpolated values from the vertex shaders
in vec4 fragmentColor;
in vec4 worldspace;
//SSBO's for detected points and their colors
layout (binding = 0, std430) coherent buffer detected_vertices_storage
{
    vec4 detected_vertices[];
} Positions;
layout (binding = 1, std430) coherent buffer detected_vertices_colors_storage
{
    vec4 detected_vertices_colors[];
} Colors;
...
int index = int(round(Condition * ((gl_FragCoord.y - 0.5) * 1024.0 + (gl_FragCoord.x - 0.5)) / (1024.0 * 1024.0) * NbOfCosines));
Positions.detected_vertices[index] = worldspace;
Colors.detected_vertices_colors[index] = color;
where Condition is 0 or 1.
I hope this helps. I am sorry to say I am still having trouble with the formatting on Stack Overflow. I must be too old!
I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being present, no change is made to the texture it is supposedly writing to.
The compute shader code is below. It was intended to do something else, but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
    ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
    imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting through RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I would assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
[Screenshots: order of operations; the images as listed in RenderDoc]
The red squares are test fills made with glTexSubImage3D, again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
[Screenshot: example in RenderDoc]
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I assume there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make for a very long post, and I'm unsure which parts are the most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for the layered parameter of glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
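Applied to the dispatch code from the question, the fix is one argument per binding call (a sketch using the names from the snippets above):
glUseProgram(programID);
// For a 3D texture the image binding must be layered (GL_TRUE);
// a non-layered binding exposes only a single 2D slice, which does
// not match the image3D declared in the shader.
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);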
I am working on code written by someone else, and at the moment I have a fairly limited understanding of the codebase. That is why I wasn't sure how to formulate my question properly, or whether it is an OpenGL question or a debugging-strategy question. Furthermore, I obviously can't share the whole codebase, and the reason things are not working is most likely rooted in there. Regardless, perhaps someone might have an idea of what is going on, or where I should look.
I have a vertex structure defined in the following way:
struct Vertex {
    Vertex(glm::vec3 position, glm::vec3 normal)
        : _position(position), _normal(normal) {}
    glm::vec3 _position;
    glm::vec3 _normal;
};
I have a std::vector of vertices which I fill with vertex data extracted from a certain structure. For the sake of simplicity, let's assume it's another vector:
// std::vector<Vertex> data - contains vertex data
std::vector<Vertex> Vertices;
Vertices.reserve(data.size());
for (size_t i = 0; i < data.size(); i++) {
    Vertices.emplace_back(data[i]._position, data[i]._normal);
}
Then I generate a vertex buffer object, buffer my data and enable vertex attributes:
GLuint VB;
glGenBuffers(1, &VB);
glBindBuffer(GL_ARRAY_BUFFER, VB);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * Vertices.size(), &Vertices[0], GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)(sizeof(GLfloat) * 3)); // byte offset of _normal; note sizeof(GLfloat), not sizeof(GL_FLOAT)
Finally, I bind a shader and set up uniforms, then I call glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, Vertices.size());
// clean-up
// swap buffers (comes at some point in the code, I haven't figured out where yet)
At this point nothing gets rendered. However, initially I made a mistake and swapped the parameters of the draw call, passing the element count where the offset belongs and vice versa:
glDrawArrays(GL_TRIANGLES, Vertices.size(), 0);
And surprisingly, that actually rendered what I wanted to render in the first place. However, the documentation clearly says that the offset comes first and the number of elements after it, which means that glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) should have shown nothing, since I specified zero elements to be drawn.
Now, there are multiple windows in the application and shared vertex buffer objects. At some point I thought that the vertex buffer object I generated somehow gets passed around in a part of the code I haven't explored yet, which uses it to draw geometry I didn't expect to be drawn. However, that still doesn't explain why glDrawArrays(GL_TRIANGLES, Vertices.size(), 0), with zero as the number of elements to be drawn, shows the geometry, whereas when I order the parameters according to the documentation, nothing gets shown.
Given this scarce information, does someone by any chance have an idea of what might be going on? If not, how would you tackle this: how would you go about debugging (or understanding) it?
EDIT: Vertex and Fragment shader
Mind that this is a dummy shader that simply paints the whole geometry red. Regardless, the shader is not the cause of my problems, given how the geometry gets drawn depending on how I use the draw call (see above).
Second EDIT: Note that as long as I don't activate blending, the alpha component (which is zero in the shader) won't have any effect on the produced image.
vertex shader:
#version 440
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
uniform mat4 MVP; // model-view-projection matrix
void main() {
    gl_Position = MVP * vec4(position, 1.0);
}
fragment shader:
#version 440
out vec4 outColor;
void main()
{
    outColor = vec4(1, 0, 0, 0);
}
Regarding the glDrawArrays parameter inversion, have you tried stepping into that function call? Perhaps you are using an OpenGL wrapper of some sort which swaps the order of the arguments. I can confirm, however, that the documentation you quote is not wrong about the parameter order!
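For instance, a wrapper like the following (purely hypothetical, not from the question's codebase) would produce exactly the observed behaviour: call sites written as (count, offset) get forwarded to OpenGL in the order it expects.
// Hypothetical wrapper: if the codebase routes draw calls through
// something like this, the call-site order is (mode, count, first)
// rather than OpenGL's (mode, first, count).
void drawArrays(GLenum mode, GLsizei count, GLint first) {
    glDrawArrays(mode, first, count); // note the swap
}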
I'm modifying texels of a texture with imageStore(), and after that I'm reading those texels in another shader as a sampler2D with texture(), but I get the values which were stored in the texture before the imageStore(). With imageLoad() it works fine, but I need filtering, and the performance of texture() is better, so is there a way to get the modified data with texture()?
Edit:
First fragment shader(for writing):
#version 450 core
layout (binding = 0, rgba32f) uniform image2D img;
in vec2 vs_uv_out;
void main()
{
    imageStore(img, ivec2(vs_uv_out), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Second fragment shader(for reading):
#version 450 core
layout (binding = 0) uniform sampler2D tex;
in vec2 vs_uv_out;
out vec4 out_color;
void main()
{
    out_color = texture(tex, vs_uv_out);
}
This is how I run the shaders:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
I made this simple application to test this, because the real one is very complex. I first clear the texture with red, but the texels won't appear blue (except when using imageLoad in the second fragment shader).
Oh, that's easy then. Image Load/Store writes use an incoherent memory model, not the synchronous model most of the rest of OpenGL uses. As such, just because you write something with Image Load/Store doesn't mean it's visible to anyone else. You have to explicitly make it visible for reading.
You need a glMemoryBarrier call between the rendering operation that writes the data and the operation that reads it. And since the reading operation is a texture fetch, the correct barrier to use is GL_TEXTURE_FETCH_BARRIER_BIT.
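Applied to the code from the question, the barrier goes between the two draws (a sketch; names as in the question):
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
// Make the imageStore writes visible to subsequent texture fetches.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);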
And FYI: your imageLoad was able to read the written data only due to pure luck. Nothing guaranteed that it would be able to read the written data. To ensure such reads, you'd need a memory barrier as well. Though obviously a different one: GL_SHADER_IMAGE_ACCESS_BARRIER_BIT.
Also, texture takes normalized texture coordinates. imageStore takes integer pixel coordinates. Unless that texture is a rectangle texture (and it's not, since you used sampler2D), it is impossible to pass the exact same coordinate to both imageStore and texture.
Therefore, either your pixels are being written to the wrong location, or your texture is being sampled from the wrong location. Either way, there's a clear miscommunication. Assuming that vs_uv_out really is non-normalized, you should either use texelFetch or normalize it. Fortunately, you're using OpenGL 4.5, so that ought to be fairly simple:
ivec2 size = textureSize(tex, 0); // textureSize requires a level-of-detail argument
vec2 texCoord = vs_uv_out / vec2(size);
out_color = texture(tex, texCoord);
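Alternatively, if filtering is not needed at that point, texelFetch takes the integer pixel coordinates directly (still assuming vs_uv_out holds pixel coordinates):
out_color = texelFetch(tex, ivec2(vs_uv_out), 0); // no filtering, no normalization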
I am trying to render 2 textures in OpenGL 3.
I created two arrays of vertices of GLfloat type, generated and bound the buffers, etc.
Note: the texture loading function is working fine; I have already loaded a texture before. Now I just need 2 textures rendered at the same time.
Then I load my textures like this:
GLuint grass = texture.loadTexture("grass.bmp");
GLuint grassLoc = glGetUniformLocation(programID, "grassSampler");
glUniform1i(grassLoc, 0);
GLuint crate = texture.loadTexture("crate.bmp");
GLuint crateLoc = glGetUniformLocation(programID, "crateSampler");
glUniform1i(crateLoc, 1);
This is how I draw them:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, grass);
glDrawArrays(GL_TRIANGLES, 0, 6);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, crate);
glDrawArrays(GL_TRIANGLES, 2, 6);
Vertex shader:
#version 330 core
layout(location = 0) in vec3 grassPosition;
layout(location = 1) in vec2 grassUvPosition;
layout(location = 2) in vec3 cratePosition;
layout(location = 3) in vec2 crateUvPosition;
out vec2 grassUV;
out vec2 crateUV;
uniform mat4 MVP;
void main(){
    gl_Position = MVP * vec4(grassPosition, 1);
    gl_Position = MVP * vec4(cratePosition, 1);
    grassUV = grassUvPosition;
    crateUV = crateUvPosition;
}
Fragment shader:
#version 330 core
in vec2 grassUV;
in vec2 crateUV;
out vec3 grassColor;
out vec3 crateColor;
uniform sampler2D grassSampler;
uniform sampler2D crateSampler;
void main(){
    crateColor = texture(grassSampler, grassUV).rgb;
    grassColor = texture(crateSampler, crateUV).rgb;
}
Can anyone see what I am doing wrong?
EDIT:
I am trying to render 2 different textures on 2 different VAOs.
You're kinda doing everything wrong; it's hard to pick out one thing.
Your shaders look like they're trying to take two positions and two texture coordinates, presumably generate two triangles, then sample from two textures and write colors to two different images.
That's not how it works. Unless you use a geometry shader (and please do not take that as an endorsement), your call to glDrawArrays(GL_TRIANGLES, 0, 6); will render exactly 2 triangles, no matter what your VS or FS's say.
A vertex has only one position. Writing to gl_Position twice will simply overwrite the previous value, just like writing to any variable twice in C++ would. And the number of triangles to be rendered is defined by the number of vertices. A vertex shader cannot create vertices. It can't even destroy them (though, through gl_CullDistance, it can potentially cull whole primitives).
It is not clear what you mean by "I just need 2 textures rendered at the same time." Or more to the point, what "at the same time" refers to. I don't know what your code ought to be trying to do.
Given the data your vertex shader expects, it looks like you have two separate sets of triangles, with their own positions and texture coordinates. You want to render one set of triangles with one texture, then render another set with a different texture.
So... do that. Instead of having your VAOs send 2 positions and 2 texture coordinates, send just one. Your VS should also take one position/texcoord, and your FS should similarly take a single texture and write to a single output. The difference will be determined by what VAO is currently active and which texture is bound to texture unit 0 at the time you issue the render call.
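A minimal sketch of that single-attribute version (illustrative names, not from the question's code):
// vertex shader
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;
out vec2 texCoord;
uniform mat4 MVP;
void main(){
    gl_Position = MVP * vec4(position, 1);
    texCoord = uv;
}
// fragment shader
#version 330 core
in vec2 texCoord;
out vec3 color;
uniform sampler2D tex;
void main(){
    color = texture(tex, texCoord).rgb;
}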
If you truly intend to write to different output images, the way your FS suggests, then change FBOs between rendering as well.
If however, your goal is to have the same triangle use two textures with two mappings, writing separate results to two images, you can do that too. The difference is that you only provide a single position, and textures must be bound to both texture units 0 and 1 when you issue your rendering command.
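For the two-sets-of-triangles case, the host side might then look like this sketch (the VAO names are hypothetical; the texture names come from the question):
glActiveTexture(GL_TEXTURE0);
glBindVertexArray(grassVAO);          // hypothetical VAO holding the grass triangles
glBindTexture(GL_TEXTURE_2D, grass);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(crateVAO);          // hypothetical VAO holding the crate triangles
glBindTexture(GL_TEXTURE_2D, crate);
glDrawArrays(GL_TRIANGLES, 0, 6);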
I am trying to render an object using GLM for matrix transformations, but I'm getting this: [screenshot of the broken render]
EDIT: Forgot to mention that the object I'm trying to render is a simple torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've also checked program to make sure it is valid, and such.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with members GLuint vao and GLuint vbo[4] (vertex array object and vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO, then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object, it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
    GLuint program = material->shader->program;
    glUseProgram(program);
    glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
    GLint mvpLoc = glGetUniformLocation(program, "mvp"); // GLint: the location can be -1
    printf("MVP Location: %d\n", mvpLoc); // prints -1
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
    for (unsigned int i = 0; i < meshes.size(); i++) {
        meshes.at(i)->render(); // renders the element array for each mesh in the GameObject
    }
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
    gl_Position = mvp * vec4(position, 1);
    vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
    color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling or linking the program. Are you sure you are using this shader in the program?
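A quick sanity check (a sketch using standard GL calls) is to query the link status and info log right after linking:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "Program link error: %s\n", log);
}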
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
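To make the connection explicit, here is a minimal sketch (with hypothetical names) of how the VAO records the attribute setup once and replays it at draw time:
// Creation time: the bound VAO records, per attribute, which buffer
// to read from and how the data is laid out.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // e.g. the position buffer
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindVertexArray(0);
// Draw time: binding the VAO restores all of that state.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);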
This is not related to your missing uniform.