Problem passing position from vertex shader to fragment shader - OpenGL

//Vertex Shader
#version 450 core
out vec3 vs_color;
layout(location=0) in vec3 position;
layout(location=1) in vec3 colors;
uniform float x_offset;
void main(void)
{
gl_Position = vec4(position,1.0);
gl_Position.x += x_offset;
//vs_color = colors;
vs_color = position;
}
//Fragment Shader
#version 450 core
out vec4 color;
in vec3 vs_color;
void main(void)
{
color = vec4(vs_color,1);
}
This only works if I use vs_color = colors in the vertex shader. For any other value, such as position (which contains the xyz coordinates of the vertex) or vec3(0.1, 0.1, 0.1), it throws this exception at glDrawArrays():
Exception thrown at 0x048D1AD0 (nvoglv32.dll) in OpenGL Starting Out.exe:
0xC0000005: Access violation reading location 0x00000000.
Why does this happen, and how can I fix it? (I want to use the position value as the color value.)
EDIT:
Also if I don't enable the second vertex attribute using
glEnableVertexArrayAttrib(VAO,1) //VAO is my vertex array object
I am able to do what I want (pass position to fragment shader)
But if I enable it, I need to pass it to the fragment shader and output it as the color (if I don't do anything with it in the fragment shader, it gives the same error).
Here is how the attributes are set up:
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesAndcolors), verticesAndcolors, GL_STATIC_DRAW);
glVertexAttribPointer(
glGetAttribLocation(rendering_program,"position"),
3,
GL_FLOAT,
GL_FALSE,
6*sizeof(float),
(void*)0
);
glVertexAttribPointer(
glGetAttribLocation(rendering_program,"colors"),
3,
GL_FLOAT,
GL_FALSE,
6*sizeof(float),
(void*)(3*sizeof(float))
);
glEnableVertexArrayAttrib(VAO, 0);
glEnableVertexArrayAttrib(VAO, 1);
Edit-2:
If I do:
gl_Position = vec4(colors,1.0);
vs_color = position;
it does not give the access violation and works.
I checked how I set up my vertex attributes, but I am not able to get any further than this.

The root cause of the issue lies here:
glVertexAttribPointer(glGetAttribLocation(rendering_program,"position"), ...)
glVertexAttribPointer(glGetAttribLocation(rendering_program,"colors"), ...);
...
glEnableVertexArrayAttrib(VAO, 0);
glEnableVertexArrayAttrib(VAO, 1);
Only active attributes will have a location, and it does not matter whether you qualify an attribute with layout(location=...). If the attribute is not used in the shader, it will be optimized out and therefore will not have a location. glGetAttribLocation() returns a signed integer and uses the return value -1 to signal that there was no active attribute with that name. glVertexAttribPointer() expects an unsigned attribute location, and (GLuint)-1 wraps around to a very large number which is outside of the allowed range, so this call will just generate a GL error and not set any pointer.
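A minimal sketch of the safe pattern, reusing the question's rendering_program (the error handling is just illustrative):
GLint posLoc = glGetAttribLocation(rendering_program, "position");
if (posLoc == -1) {
    // "position" is not an active attribute (optimized out or misspelled):
    // skip the setup instead of feeding (GLuint)-1 to the GL.
} else {
    glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE,
                          6 * sizeof(float), (void*)0);
    glEnableVertexAttribArray((GLuint)posLoc); // same index as the pointer call
}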
However, you use glEnableVertexArrayAttrib() with hard-coded locations 0 and 1.
The default vertex attribute pointer for each attribute is 0, and the default vertex array buffer binding this attribute will be sourced from is 0 too, so the pointer will be interpreted as a pointer into client-side memory.
This means that if both position and colors are active (meaning: used in the code in a way that the shader compiler/linker can't rule out as affecting the output), your code will work as expected.
But if you use only one of them, you will not set the pointer for the other, yet you still enable that array, which means your driver will actually try to read memory at address 0:
Exception thrown at 0x048D1AD0 (nvoglv32.dll) in OpenGL Starting Out.exe:
0xC0000005: Access violation reading location 0x00000000.
So there are a few takeaways here:
Always check the result of glGetAttribLocation(), and handle the -1 case properly.
Never use different means to obtain the attribute index passed to glVertexAttribPointer() and to the corresponding glEnableVertexAttribArray()/glEnableVertexArrayAttrib() call.
Check for GL errors. Wherever possible, use the GL debug output feature during application development to get a convenient and efficient way to be notified of all GL errors (and other issues your driver can detect). For example, my implementation (Nvidia/Linux) will report API error high [0x501]: GL_INVALID_VALUE error generated. Index out of range. when I call glVertexAttribPointer(-1, ...).
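As a minimal sketch of enabling debug output (this assumes a GL 4.3+ or KHR_debug context; the callback body is just an illustration):
// Minimal debug-output setup; needs <cstdio> for fprintf.
void GLAPIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                              GLenum severity, GLsizei length,
                              const GLchar* message, const void* userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}
// After context creation:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report errors inside the offending call
glDebugMessageCallback(debugCallback, nullptr);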

I finally was able to fix the issue:
I was calling glUseProgram(programObj) inside my drawing loop; moving it out of the loop fixed the problem.
I am not sure why that caused the issue, but my guess is that OpenGL did something when the vertex attribute was not being used, and that change then caused the access violation in the next iteration.
Feel free to tell me if you know the reason


glDrawArrays unexpected behavior - wrong order of arguments produces desired image

I am working on code written by someone else, and at the moment I have a fairly limited understanding of the codebase. That is why I wasn't sure how to formulate my question properly, or whether it is an OpenGL question or a debugging-strategy question. Furthermore, I obviously can't share the whole codebase, and the reason things are not working is most likely rooted in there. Regardless, perhaps someone might have an idea of what could be going on, or where I should look.
I have a vertex structure defined in the following way:
struct Vertex {
Vertex(glm::vec3 position, glm::vec3 normal):
_position(position), _normal(normal) {};
glm::vec3 _position;
glm::vec3 _normal;
};
I have a std vector of vertices which I fill out with vertex data extracted from a certain structure. For the sake of simplicity, let's assume it's another vector:
// std::vector<Vertex> data - contains vertex data
std::vector<Vertex> Vertices;
Vertices.reserve(data.size());
for (int i = 0; i < data.size(); i++) {
Vertices.emplace_back(Vertex(data[i]._position, data[i]._normal));
}
Then I generate a vertex buffer object, buffer my data and enable vertex attributes:
GLuint VB;
glGenBuffers(1, &VB);
glBindBuffer(GL_ARRAY_BUFFER, VB);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*Vertices.size(), &Vertices[0],
GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3,GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 3,GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)
(sizeof(GL_FLOAT)*3));
Finally, I bind a shader and set up uniforms, then I call glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, Vertices.size());
// clean-up
// swap buffers (comes at some point in the code, I haven't figured out where yet)
At this point nothing gets rendered. However, initially I made a mistake and swapped the parameters of the draw call, such that offset comes before the number of elements to draw:
glDrawArrays(GL_TRIANGLES, Vertices.size(), 0);
And surprisingly, that actually rendered what I wanted to render in the first place. However, the documentation clearly says that the offset comes first and the number of elements after it, which means that glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) should have shown nothing, since I specified zero elements to be drawn.
Now there are multiple windows in the application and shared vertex buffer objects. At some point I thought that the vertex buffer object I generated somehow gets passed around in a part of the code I haven't explored yet, which uses it to draw geometry I didn't expect to be drawn. However, that still doesn't explain why glDrawArrays(GL_TRIANGLES, Vertices.size(), 0), with zero as the number of elements to be drawn, shows the geometry, whereas ordering the parameters according to the documentation shows nothing.
Given this scarce information that I shared, does someone by any chance have an idea of what might be going on? If not, how would you tackle this, how would you go about debugging (or understanding) it?
EDIT: Vertex and Fragment shader
Mind that this is a dummy shader that simply paints the whole geometry red. Regardless, the shader is not the cause of my problems, given how the geometry gets drawn depending on how I order the draw call arguments (see above).
EDIT 2: Note that as long as I don't activate blending, the alpha component (which is zero in the shader) won't have any effect on the produced image.
vertex shader:
#version 440
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
uniform mat4 MVP; // model-view-projection matrix
void main() {
gl_Position = MVP*vec4(position, 1.0);
}
fragment shader:
#version 440
out vec4 outColor;
void main()
{
outColor = vec4(1, 0, 0, 0);
})glsl";
Regarding the glDrawArrays parameter inversion: have you tried stepping into that function call? Perhaps you are using an OpenGL wrapper of some sort which modifies the order of the arguments. I can confirm, however, that the documentation you quote is not wrong about the parameter order!
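For reference, the documented signature is glDrawArrays(GLenum mode, GLint first, GLsizei count); a direct sanity check that bypasses any wrapper might look like this (the cast assumes Vertices.size() returns a size_t):
// first = index of the first vertex, count = number of vertices to draw
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)Vertices.size()); // should draw everything
glDrawArrays(GL_TRIANGLES, (GLint)Vertices.size(), 0);   // should draw nothing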

OpenGL instancing : how to debug missing per instance data

I am relatively familiar with instanced drawing and per instance data: I've implemented this in the past with success.
Now I am refactoring some old code, and I have introduced a bug in how per-instance data is supplied to the shaders.
The relevant bits are the following:
I have a working render loop implemented using glMultiDrawElementsIndirect: if I ignore the per instance data everything draws as expected.
I have a VBO storing the world transforms of my objects. I used AMD's CodeXL to debug this: the buffer is correctly populated with data, and is bound when a frame is drawn.
glBindBuffer(GL_ARRAY_BUFFER,batch.mTransformBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * OBJ_NUM, &xforms, GL_DYNAMIC_DRAW);
The shader specifies the input location explicitly:
#version 450
layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec4 vertexCol;
//...
layout(location = 6)uniform mat4 ViewProj;
layout(location = 10)uniform mat4 Model;
The ViewProj matrix is equal for all instances and is set correctly using:
glUniformMatrix4fv(6, 1, GL_FALSE, &viewProjMat[0][0]);
Model is the per-instance world matrix, and it's wrong: it contains all zeros.
After binding the buffer and before drawing each frame, I am trying to set up the attribute pointers and divisors in such a way that every drawn instance will receive a different transform:
for (size_t i = 0; i < 4; ++i)
{
glEnableVertexAttribArray(10 + i);
glVertexAttribPointer(10 + i, 4, GL_FLOAT, GL_FALSE,
sizeof(GLfloat) * 16,
(const GLvoid*) (sizeof(GLfloat) * 4 * i));
glVertexAttribDivisor(10 + i, 1);
}
Now, I've looked at the code for a while and I really can't figure out what I am missing. CodeXL clearly shows that Model (location 10) isn't correctly filled. No OpenGL error is generated.
My question is: does anyone know under which circumstances the setup of per instance data may fail silently? Or any suggestion on how to debug further this issue?
layout(location = 6)uniform mat4 ViewProj;
layout(location = 10)uniform mat4 Model;
These are uniforms, not input values. They don't get fed by attributes; they get fed by glUniform* calls. If you want Model to be an input value, then qualify it with in, not uniform.
Equally importantly, inputs and uniforms do not get the same locations. What I mean is that uniform locations have a different space from input locations. An input can have the same location index as a uniform, and they won't refer to the same thing. Input locations only refer to attribute indices; uniform locations refer to uniform locations.
Lastly, uniform locations don't work like input locations. With attributes, each vec4-equivalent uses a separate attribute index. With uniform locations, every basic type (anything that isn't a struct or an array) uses a single uniform location. So ViewProj, being a mat4 uniform, takes up only 1 uniform location. But if Model becomes an input, it takes up 4 attribute indices.
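As a sketch, the fix is to declare Model as a per-instance input; the attribute-pointer/divisor loop over locations 10 to 13 from the question then matches it:
layout(location = 6) uniform mat4 ViewProj; // still a uniform, fed via glUniformMatrix4fv
layout(location = 10) in mat4 Model;        // input: consumes attribute indices 10..13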

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but I'm getting this:
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it is declared and used in the vertex shader. I've checked that program is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with the members GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLuint mvpLoc = glGetUniformLocation(program, "mvp");
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared or not used (if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling or linking the program. Are you sure you are using this shader in the program?
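A quick way to verify this, as a sketch using standard GL calls (the log buffer size is arbitrary):
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    printf("Program link failed: %s\n", log);
}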
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
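To make the VAO/VBO relationship concrete, here is a minimal sketch of the usual pattern (positions and indices are placeholder data, not from the tutorial): the VAO records, per attribute, which VBO was bound when glVertexAttribPointer was called, so binding the VAO later is all glDrawElements needs.
GLfloat positions[] = { 0,0,0,  1,0,0,  0,1,0 }; // one triangle
GLuint indices[] = { 0, 1, 2 };

GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                       // start recording state into the VAO

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);           // data store for positions
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); // VAO now remembers vbo + layout

glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);   // index buffer binding is stored in the VAO too
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);                         // later: bind vao, then glDrawElements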

OpenGL - GLSL assigning to varying variable breaks the vertex positioning

I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my GNU/Linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know, there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value for the texture and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use to represent vertices:
struct vertex {
GLfloat x;
GLfloat y;
GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't taking effect and the locations are being assigned by the GL. Without the assignment to v_index, the index attribute is not used, and only the position one is, so it is very likely that position gets location 0. When index is actually used, it might get location 0 instead (on Nvidia, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only have an effect when linking the program, so they have to be made before glLinkProgram(), and you have to re-link the program whenever you want to change the bindings (which you should really avoid). The code you have shown suggests that these calls happen during your regular draw calls, so they never have any effect on the linked program.
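As a sketch with the question's names, the calls have to happen in this order:
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program); // the bindings are baked in here
// Sanity check: glGetAttribLocation(shader_program, "index") should now return 1.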

Unhandled exception (nvoglv32.dll) during drawing (rift)

I'm currently working on AR with the Oculus Rift HMD.
I'm not an OpenGL pro, and I'm sure that is the source of my problem.
I get this error:
Unhandled exception at 0x064DBD07 (nvoglv32.dll) in THING.exe: 0xC0000005: Access violation reading location 0x00000000.
It happens during the drawing in quadGeometry_supcam->draw(), which is inside renderCamera(eye):
glDrawElements(elementType, elements * verticesPerElement,
GL_UNSIGNED_INT, (void*) 0); // Stops at this line in the debugger
Here's the drawing code
frameBuffer.activate();
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl::Stacks::with_push(pr, mv, [&]{
mv.preMultiply(eyeArgs.modelviewOffset);
pr.preMultiply(eyeArgs.projectionOffset);
//renderCamera(eye); //If I uncomment this line, it crashes
renderScene(eye);
//renderCamera(eye); //If I uncomment this line instead, it works, but it's not what I want
});
frameBuffer.deactivate();
glDisable(GL_DEPTH_TEST);
viewport(eye);
distortProgram->use();
glActiveTexture(GL_TEXTURE1);
eyeArgs.distortionTexture->bind();
glActiveTexture(GL_TEXTURE0);
frameBuffer.color->bind();
quadGeometry->bindVertexArray();
quadGeometry->draw();
gl::VertexArray::unbind();
gl::Program::clear();
//////////////////////////////////////////////////////////////
void renderScene(StereoEye eye) {
GlUtils::renderSkybox(Resource::IMAGES_SKY_CITY_XNEG_PNG);
GlUtils::renderFloorGrid(player);
gl::MatrixStack & mv = gl::Stacks::modelview();
gl::Stacks::with_push(mv, [&]{
mv.translate(glm::vec3(0, eyeHeight, 0)).scale(ipd);
GlUtils::drawColorCube(true); // renderCamera crashes when called before this line, works after it
});
}
void renderCamera(StereoEye eye) {
gl::ProgramPtr program;
program = GlUtils::getProgram(
Resource::SHADERS_TEXTURED_VS,
Resource::SHADERS_TEXTURED_FS);
program->use();
textures[eye]->bind();
quadGeometry_supcam->bindVertexArray();
quadGeometry_supcam->draw();
}
If I call renderCamera before GlUtils::drawColorCube(true), it crashes, but after it, it works.
But I need to draw the camera before the rest.
I won't go into drawColorCube, because it uses another shader.
I suppose that the problem comes from something missing for the shader program. So here are the fragment and vertex shaders.
Vertex:
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
layout(location = 0) in vec4 Position;
layout(location = 1) in vec2 TexCoord0;
out vec2 vTexCoord;
void main() {
gl_Position = Projection * ModelView * Position;
vTexCoord = TexCoord0;
}
Fragment:
uniform sampler2D sampler;
in vec2 vTexCoord;
out vec4 vFragColor;
void main() {
vFragColor = texture(sampler, vTexCoord);
}
(I want to draw the scene on top of the camera image.)
Any ideas?
You aren't cleaning up the GL state after you call quadGeometry_supcam->draw(). Try adding these lines after that call:
gl::VertexArray::unbind();
gl::Program::clear();
(Edited)
I thought the problem was in NULL being passed as the indices parameter to glDrawElements() here:
glDrawElements(elementType, elements * verticesPerElement,
GL_UNSIGNED_INT, (void*) 0);
The last parameter, indices, used to be mandatory, and passing in 0 or NULL wouldn't work, because the GL would then try to read the indices from that address, producing exactly this Access violation reading location 0x00000000.
But as Jherico pointed out, it is possible to have a buffer with indices bound to GL_ELEMENT_ARRAY_BUFFER, in which case the indices parameter is an offset into that buffer, not a pointer. Passing 0 then simply means "no offset". And the Oculus Rift SDK apparently uses this method.
The program crashing with that invalid access to address 0x0 at the glDrawElements() line therefore does not indicate that 0 must not be used, but that the GL_ELEMENT_ARRAY_BUFFER binding is not set up correctly, probably because of improper cleanup, as Jherico pointed out in his better answer.
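A minimal sketch of the two cases (indexBuffer and indexCount are illustrative names, not from the question's code):
// Case 1: an index buffer is bound, so the last argument is a byte offset.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0); // offset 0: fine

// Case 2: no index buffer bound, so the last argument is a client-memory pointer.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL); // reads address 0 -> crash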