I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my GNU/Linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value for the texture and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use to represent vertices:
struct vertex {
GLfloat x;
GLfloat y;
GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't effective and the locations are assigned by the GL. Without the assignment, the index attribute is not used, and only the position one is, so it is very likely that it gets location 0. When index is actually used, it might get 0 (on NVIDIA, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only take effect when the program is linked, so they have to be made before glLinkProgram(), and you have to re-link the program whenever you want to change the bindings (which you should really avoid). The code you have shown suggests that these calls are made during your regular draw calls, so they never have any effect on the linked program.
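For clarity, a minimal sketch of an order that would make the bindings effective, reusing the identifiers from the question (error checking omitted):

// Bind the locations first, while the program is not yet linked...
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
// ...then link: the bindings only take effect now.
glLinkProgram(shader_program);
// Alternatively, don't bind at all and ask the GL what it assigned:
GLint position_loc = glGetAttribLocation(shader_program, "position");
GLint index_loc = glGetAttribLocation(shader_program, "index");

With the second variant, pass position_loc and index_loc to the glEnableVertexAttribArray/glVertexAttribPointer calls instead of the hard-coded 0 and 1.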
I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being present, no change is made to the texture it is supposedly writing to.
The compute shader code. It was intended to do something else but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting through RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I would assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
[Screenshot: order of operations]
[Screenshot: the listed images]
The red squares are test fills made with glTexSubImage3D. Again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
[Screenshot: example in RenderDoc]
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I would assume that there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make a very long post and I'm unsure which parts are the most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for layered in glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
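Applied to the dispatch code above, the fix is just flipping the layered argument; a sketch with everything else unchanged:

// layered = GL_TRUE binds the entire mipmap level of the 3D texture.
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);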
Currently, I'm trying to implement a fragment shader, which mixes colors of different fluid particles by combining the percentage of the fluids' phases inside the particle. So for example, if fluid 1 possesses 15% of the particle and fluid 2 possesses 85%, the resulting color should reflect that proportion. Therefore, I have a buffer texture containing the percentage reflected as a float value in [0,1] per particle and per phase and a texture containing the fluid colors.
The buffer texture currently contains the percentages for each particle in one contiguous list, for example:
| Particle 1 percentage 1 | Particle 1 percentage 2 | Particle 2 percentage 1 | Particle 2 percentage 2 | ...
I already tested the correctness of the textures by assigning them to the particles directly or by assigning the volFrac to the red channel of the final color. I also tried different GLSL debuggers to analyze the problem, but none of the popular options worked on my machine.
#version 330
uniform float radius;
uniform mat4 projection_matrix;
uniform uint nFluids;
uniform sampler1D colorSampler;
uniform samplerBuffer volumeFractionSampler;
in block
{
flat vec3 mv_pos;
flat float pIndex;
}
In;
out vec4 out_color;
void main(void)
{
vec3 fluidColor = vec3(0.0, 0.0, 0.0);
for (int fluidModelIndex = 0; fluidModelIndex < int(nFluids); fluidModelIndex++)
{
float volFrac = texelFetch(volumeFractionSampler, int(nFluids * In.pIndex) + fluidModelIndex).x;
vec3 phaseColor = texture(colorSampler, float(fluidModelIndex)/(int(nFluids) - 1)).xyz;
fluidColor = volFrac * phaseColor;
}
out_color = vec4(fluidColor, 1.0);
}
Here is also a short snippet of the texture initialization:
//Texture Initialisation and Manipulation here
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, nFluids, 0, GL_RGB, GL_FLOAT, color_map);
//Creation and Initialisation for Buffer Texture containing the volume Fractions
glBindBuffer(GL_TEXTURE_BUFFER, m_texBuffer);
glBufferData(GL_TEXTURE_BUFFER, nFluids * nParticles * sizeof(float), m_volumeFractions.data(), GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, m_texBuffer);
The problem now is that if I multiply the information from the buffer texture with the information from the 1D texture, the particles that should be rendered disappear completely, without any warnings or other error messages. So the particles disappear if I use the statement:
fluidColor = volFrac * phaseColor;
Does anybody know why this is the case, or how I can further debug this problem?
Does anybody know why this is the case
Yes. You seem to use the same texture unit for both colorSampler and volumeFractionSampler, which is simply not allowed as per the spec. Quoting from section 7.11 of the OpenGL 4.6 core profile spec:
It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only
be detected at the next rendering command issued which triggers shader invocations, and an INVALID_OPERATION error will then be generated.
So while you can bind different textures to the different targets of texture unit 0 at the same time, each draw call can only use one particular target per texture unit. If you only use one sampler or the other (and the shader compiler will aggressively optimize them out if they don't influence the outputs of your shader), you are in a legal use case, but as soon as you use both, it will not work.
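A minimal sketch of a legal setup, reusing the names from the question (the shaderProgram handle and the glUniform1i calls are assumptions, since that part of the code isn't shown):

// Give each sampler its own texture unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_textureMap); // colorSampler -> unit 0
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, m_bufferTexture); // volumeFractionSampler -> unit 1
// Tell the shader which unit each sampler should read from.
glUseProgram(shaderProgram); // hypothetical program handle
glUniform1i(glGetUniformLocation(shaderProgram, "colorSampler"), 0);
glUniform1i(glGetUniformLocation(shaderProgram, "volumeFractionSampler"), 1);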
I am working on code written by someone else, and at the moment I have a fairly limited understanding of the codebase. That is why I wasn't sure how to formulate my question properly, and whether it is an OpenGL question or a debugging-strategy question. Furthermore, I obviously can't share the whole codebase, and the reason things are not working is most likely rooted in there. Regardless, perhaps someone might have an idea of what is going on, or of where I should look.
I have a vertex structure defined in the following way:
struct Vertex {
Vertex(glm::vec3 position, glm::vec3 normal):
_position(position), _normal(normal) {};
glm::vec3 _position;
glm::vec3 _normal;
};
I have a std::vector of vertices which I fill with vertex data extracted from a certain structure. For the sake of simplicity, let's assume it's another vector:
// std::vector<Vertex> data - contains vertex data
std::vector<Vertex> Vertices;
Vertices.reserve(data.size());
for (size_t i = 0; i < data.size(); i++) {
Vertices.emplace_back(data[i]._position, data[i]._normal);
}
Then I generate a vertex buffer object, buffer my data and enable vertex attributes:
GLuint VB;
glGenBuffers(1, &VB);
glBindBuffer(GL_ARRAY_BUFFER, VB);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*Vertices.size(), &Vertices[0],
GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)
(sizeof(GLfloat)*3));
Finally, I bind a shader and set up uniforms, then I call glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, Vertices.size());
// clean-up
// swap buffers (comes at some point in the code, I haven't figured out where yet)
At this point nothing gets rendered. However, initially I made a mistake and swapped the parameters of the draw call, such that the number of elements to draw came before the offset:
glDrawArrays(GL_TRIANGLES, Vertices.size(), 0);
And surprisingly, that actually rendered what I wanted to render in the first place. However, the documentation clearly says that the offset comes first and the number of elements after it. Which means that glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) should have drawn nothing, since I specified zero elements to be drawn.
Now there are multiple windows in the application and shared vertex buffer objects. At some point I thought that the vertex buffer object I generated somehow gets passed around in the part of the code I haven't explored yet, which uses it to draw geometry I didn't expect to be drawn. However, that still doesn't explain the fact that when I use glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) with zero as the number of elements to be drawn, I see the geometry, whereas when I order the parameters according to the documentation, nothing gets shown.
Given this scarce information that I shared, does someone by any chance have an idea of what might be going on? If not, how would you tackle this, how would you go about debugging (or understanding) it?
EDIT: Vertex and Fragment shader
Mind that this is a dummy shader that simply paints the whole geometry red. Regardless, the shader is not the cause of my problems, given how the geometry gets drawn depending on how I use the draw call (see above).
EDIT 2: Note that as long as I don't activate blending, the alpha component (which is zero in the shader) won't have any effect on the produced image.
vertex shader:
#version 440
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
uniform mat4 MVP; // model-view-projection matrix
void main() {
gl_Position = MVP*vec4(position, 1.0);
}
fragment shader:
#version 440
out vec4 outColor;
void main()
{
outColor = vec4(1, 0, 0, 0);
}
Regarding the glDrawArrays parameter inversion, have you tried stepping into that function call? Perhaps you are using an OpenGL wrapper of some sort which modifies the order of the arguments. I can confirm, however, that the documentation you quote is not wrong about the parameter order!
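If stepping into the call isn't possible, one cheap sanity check is to bracket the draw with glGetError; a generic sketch, not tied to this codebase:

while (glGetError() != GL_NO_ERROR) {} // drain any stale errors first
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)Vertices.size());
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "glDrawArrays failed with error 0x%X\n", err);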
I am trying to render an object using GLM for matrix transformations, but I'm getting this:
[Screenshot: the broken render]
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've checked program to make sure it is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with members GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture-coordinate buffer, and index buffer. To render the object, it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLint mvpLoc = glGetUniformLocation(program, "mvp");
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling or linking the program. Are you sure you are using this shader in the program?
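One way to confirm that suspicion is to query the link status right after building the program; a generic sketch, independent of the code in the question:

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    printf("Program link failed: %s\n", log);
}
// The same pattern works per shader with glGetShaderiv(shader, GL_COMPILE_STATUS, ...)
// and glGetShaderInfoLog.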
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
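To make the relationship concrete, here is a sketch of the pattern such tutorials typically follow, using the vao/vbo names from the question (which VBO holds which data is an assumption):

glBindVertexArray(vao); // start recording attribute state into the VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // positions
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); // normals
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]); // indices, also stored in the VAO
glBindVertexArray(0);
// At draw time, binding the VAO restores all of the above in one call:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);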
I'm fairly new to OpenGL, and I seem to be experiencing some difficulties. I've written a simple GLSL shader that is supposed to transform vertices by given joint matrices, allowing simple skeletal animation. Each vertex has a maximum of two bone influences: indices and corresponding weights (stored as the x and y components of a vec2) that refer into an array of transformation matrices. They are specified as attribute variables in my shader and set using the glVertexAttribPointer function.
Here's where the problem arises: I've managed to set the uniform array of matrices properly; when I check those values in the shader, all of them are imported correctly and contain the right data. However, when I attempt to set the joint indices attribute, the vertices are multiplied by arbitrary transformation matrices! They jump to seemingly random positions in space (different every time), so I am assuming that the indices are set incorrectly and my shader is reading past the end of the joint-matrix array into the following memory. I'm not exactly sure why, because upon reading all of the information I could find on the subject, I was surprised to see the same (or very similar) code in the examples, and it seemed to work for them.
I have attempted to solve this problem for quite some time now, and it's really beginning to get on my nerves. I know that the matrices are correct, and when I manually change the index value in the shader to an arbitrary integer, it reads the correct matrix values and works the way it should, transforming all the vertices by that matrix. But when I try to use the code I wrote to set the attribute variables, it does not seem to work.
The code I am using to set the variables is as follows...
// this works properly...
GLuint boneMatLoc = glGetUniformLocation([[[obj material] shader] programID], "boneMatrices");
glUniformMatrix4fv( boneMatLoc, matCount, GL_TRUE, currentBoneMatrices );
GLfloat testBoneIndices[8] = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
// this however, does not...
GLint boneIndexLoc = glGetAttribLocation([[[obj material] shader] programID], "boneIndices");
glEnableVertexAttribArray( boneIndexLoc );
glVertexAttribPointer( boneIndexLoc, 2, GL_FLOAT, GL_FALSE, 0, testBoneIndices );
And my vertex shader looks like this...
// this shader is supposed to transform the bones by a skeleton, a maximum of two
// bones per vertex with varying weights...
uniform mat4 boneMatrices[32]; // matrices for the bones
attribute vec2 boneIndices; // x for the first bone, y for the second
//attribute vec2 boneWeight; // the blend weights between the two bones
void main(void)
{
gl_TexCoord[0] = gl_MultiTexCoord0; // just set up the texture coordinates...
vec4 vertexPos1 = 1.0 * boneMatrices[ int(boneIndices.x) ] * gl_Vertex;
//vec4 vertexPos2 = 0.5 * boneMatrices[ int(boneIndices.y) ] * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * (vertexPos1);
}
This is really beginning to frustrate me, and any and all help will be appreciated,
-Andrew Gotow
OK, I've figured it out. OpenGL draws triangles with the glDrawArrays function by reading every 9 values as a triangle (3 vertices with 3 components each). Because of this, vertices are repeated between triangles, so if two adjacent triangles share a vertex, it comes up twice in the array. So my cube, which I originally thought had 8 vertices, actually has 36!
Six sides, two triangles per side, three vertices per triangle: it all multiplies out to a total of 36 independent vertices instead of 8 shared ones.
The entire problem was an issue with specifying too few values. As soon as I extended my test array to include 36 values it worked perfectly.
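For illustration, a minimal sketch of the two options (the coordinate data is omitted and the array names are made up):

// Option 1: non-indexed drawing; every triangle carries its own vertices.
GLfloat cube[36 * 3] = { /* 6 sides * 2 triangles * 3 vertices, xyz each */ };
glDrawArrays(GL_TRIANGLES, 0, 36);

// Option 2: keep the 8 unique corners and let an index buffer do the sharing.
GLfloat corners[8 * 3] = { /* the 8 cube corners */ };
GLubyte indices[36] = { /* 12 triangles referencing corners 0..7 */ };
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);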