Unhandled exception (nvoglv32.dll) during drawing (Rift) - C++

I'm currently working on AR with the Oculus Rift HMD.
I'm not an OpenGL expert, and I'm fairly sure that's the source of my problem.
I get this error:
Unhandled exception at 0x064DBD07 (nvoglv32.dll) in THING.exe: 0xC0000005: Access violation reading location 0x00000000.
It happens during the drawing in quadGeometry_supcam->draw();, which is called from renderCamera(eye);:
glDrawElements(elementType, elements * verticesPerElement,
               GL_UNSIGNED_INT, (void*) 0); // Stops at this line in the debugger
Here's the drawing code:
frameBuffer.activate();
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl::Stacks::with_push(pr, mv, [&]{
    mv.preMultiply(eyeArgs.modelviewOffset);
    pr.preMultiply(eyeArgs.projectionOffset);
    //renderCamera(eye); // If I uncomment this call here, it crashes
    renderScene(eye);
    //renderCamera(eye); // If I uncomment it here instead, it works, but it's not what I want
});
frameBuffer.deactivate();
glDisable(GL_DEPTH_TEST);
viewport(eye);
distortProgram->use();
glActiveTexture(GL_TEXTURE1);
eyeArgs.distortionTexture->bind();
glActiveTexture(GL_TEXTURE0);
frameBuffer.color->bind();
quadGeometry->bindVertexArray();
quadGeometry->draw();
gl::VertexArray::unbind();
gl::Program::clear();
//////////////////////////////////////////////////////////////
void renderScene(StereoEye eye) {
    GlUtils::renderSkybox(Resource::IMAGES_SKY_CITY_XNEG_PNG);
    GlUtils::renderFloorGrid(player);
    gl::MatrixStack & mv = gl::Stacks::modelview();
    gl::Stacks::with_push(mv, [&]{
        mv.translate(glm::vec3(0, eyeHeight, 0)).scale(ipd);
        GlUtils::drawColorCube(true); // renderCamera crashes when called before this line, but works when called after it
    });
}
void renderCamera(StereoEye eye) {
    gl::ProgramPtr program;
    program = GlUtils::getProgram(
        Resource::SHADERS_TEXTURED_VS,
        Resource::SHADERS_TEXTURED_FS);
    program->use();
    textures[eye]->bind();
    quadGeometry_supcam->bindVertexArray();
    quadGeometry_supcam->draw();
}
If I call renderCamera before GlUtils::drawColorCube(true); it crashes; if I call it after, it works.
But I need to draw the camera image before the rest.
I won't go into drawColorCube in detail, because it uses a different shader.
I suspect the problem is something missing for the shader program, so here are the vertex and fragment shaders.
Vertex:
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
layout(location = 0) in vec4 Position;
layout(location = 1) in vec2 TexCoord0;
out vec2 vTexCoord;
void main() {
gl_Position = Projection * ModelView * Position;
vTexCoord = TexCoord0;
}
Fragment:
uniform sampler2D sampler;
in vec2 vTexCoord;
out vec4 vFragColor;
void main() {
vFragColor = texture(sampler, vTexCoord);
}
(I want to draw the scene on top of the camera image.)
Any idea?

You aren't cleaning up the GL state after you call quadGeometry_supcam->draw();. Try adding these lines after that call:
gl::VertexArray::unbind();
gl::Program::clear();
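For context, a sketch of renderCamera() with that cleanup in place, reusing the helper classes from the question (so not a self-contained example):
void renderCamera(StereoEye eye) {
    gl::ProgramPtr program = GlUtils::getProgram(
        Resource::SHADERS_TEXTURED_VS,
        Resource::SHADERS_TEXTURED_FS);
    program->use();
    textures[eye]->bind();
    quadGeometry_supcam->bindVertexArray();
    quadGeometry_supcam->draw();
    // Undo the bindings so the following passes start from a known state
    gl::VertexArray::unbind();
    gl::Program::clear();
}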

(Edited)
I thought the problem was NULL being passed as the indices parameter to glDrawElements() here:
glDrawElements(elementType, elements * verticesPerElement,
GL_UNSIGNED_INT, (void*) 0);
The last parameter, indices, used to be mandatory, and passing 0 or NULL wouldn't work, because the call would then try to read the indices from that address and produce exactly this error: Access violation reading location 0x00000000.
But as Jherico pointed out, it is possible to have a buffer with indices bound to GL_ELEMENT_ARRAY_BUFFER, in which case the indices parameter is a byte offset into that buffer, not a pointer. Passing 0 then simply means 'no offset'. The Oculus Rift SDK apparently uses this method.
The program crashing with that invalid access to address 0x0 at the glDrawElements() line therefore does not mean that 0 must not be used; it means that the GL_ELEMENT_ARRAY_BUFFER binding is not set up or enabled correctly, probably because of improper cleanup, as Jherico pointed out in his better answer.
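To illustrate the two cases, here is a minimal standalone sketch (not code from the question; the client-memory form is legacy/compatibility-profile only):
GLuint quadIndices[] = { 0, 1, 2, 2, 3, 0 };
// Case 1: no element buffer bound, so the last argument is a real pointer
// into client memory and must not be 0.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, quadIndices);
// Case 2: indices live in a buffer object, so the last argument is a byte
// offset into that buffer and 0 is perfectly valid.
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*) 0);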

Related

Problem in passing position from vertex shader to fragment shader

//Vertex Shader
#version 450 core
out vec3 vs_color;
layout(location=0) in vec3 position;
layout(location=1) in vec3 colors;
uniform float x_offset;
void main(void)
{
gl_Position = vec4(position,1.0);
gl_Position.x += x_offset;
//vs_color = colors;
vs_color = position;
}
//Fragment Shader
#version 450 core
out vec4 color;
in vec3 vs_color;
void main(void)
{
color = vec4(vs_color,1);
}
This only works if I use vs_color = colors in the vertex shader. For any other value, such as position (which contains the xyz coordinates of the vertex) or vec3(0.1, 0.1, 0.1), it throws this exception at glDrawArrays():
Exception thrown at 0x048D1AD0 (nvoglv32.dll) in OpenGL Starting Out.exe:
0xC0000005: Access violation reading location 0x00000000.
Why does this happen, and how can I fix it? (I want to use the position value as the color value.)
EDIT:
Also, if I don't enable the second vertex attribute using
glEnableVertexArrayAttrib(VAO, 1) // VAO is my vertex array object
I am able to do what I want (pass the position to the fragment shader).
But if I enable it, I have to pass it to the fragment shader and then output the color (if I don't do anything with it in the fragment shader, it gives the same error).
Here is how the attributes are set up:
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesAndcolors), verticesAndcolors, GL_STATIC_DRAW);
glVertexAttribPointer(
    glGetAttribLocation(rendering_program, "position"),
    3,
    GL_FLOAT,
    GL_FALSE,
    6 * sizeof(float),
    (void*) 0
);
glVertexAttribPointer(
    glGetAttribLocation(rendering_program, "colors"),
    3,
    GL_FLOAT,
    GL_FALSE,
    6 * sizeof(float),
    (void*) (3 * sizeof(float))
);
glEnableVertexArrayAttrib(VAO, 0);
glEnableVertexArrayAttrib(VAO, 1);
Edit-2:
If I do:
gl_Position = vec4(colors,1.0);
vs_color = position;
it does not give the access violation and works.
I checked how I set up my vertex attributes, and I can't get any further than this.
The root cause of the issue lies here:
glVertexAttribPointer(glGetAttribLocation(rendering_program,"position"), ...)
glVertexAttribPointer(glGetAttribLocation(rendering_program,"colors"), ...);
...
glEnableVertexArrayAttrib(VAO, 0);
glEnableVertexArrayAttrib(VAO, 1);
Only active attributes will have a location, and it does not matter if you qualify an attribute with layout(location=...): if the attribute is not used in the shader, it will be optimized out and therefore will not have a location. glGetAttribLocation() returns a signed integer and uses the return value -1 to signal that there was no active attribute with that name. glVertexAttribPointer() expects an unsigned attribute location, and (GLuint)-1 ends up as a very high number outside the allowed range, so this call will just generate a GL error and not set any pointer.
However, you use glEnableVertexArrayAttrib() with hard-coded locations 0 and 1.
The default vertex attribute pointer for each attribute is 0, and the default vertex array buffer binding this attribute would be sourced from is 0 too, so the pointer will be interpreted as a pointer into client-side memory.
This means that if both position and colors are active (meaning: used in the code in a way that the shader compiler/linker can't completely rule out that they affect the output), your code will work as expected.
But if only one of them is used, you will not set the pointer for the other, yet still enable that array, which means your driver will actually access the memory at address 0:
Exception thrown at 0x048D1AD0 (nvoglv32.dll) in OpenGL Starting Out.exe:
0xC0000005: Access violation reading location 0x00000000.
So there are a few things to take away here:
Always check the result of glGetAttribLocation(), and handle the -1 case properly.
Never use different means to obtain the attribute index passed to glVertexAttribPointer() and to the corresponding glEnableVertexAttribArray()/glEnableVertexArrayAttrib() call (see the sketch below).
Check for GL errors. Wherever possible, use the GL debug output feature during application development to get a nice and efficient way to be notified of all GL errors (and other issues your driver can detect). For example, my implementation (Nvidia/Linux) will report API error high [0x501]: GL_INVALID_VALUE error generated. Index out of range. when I call glVertexAttribPointer(-1, ...).
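A sketch of that pattern applied to the setup code from the question (glEnableVertexAttribArray() works on the currently bound VAO, so here it stands in for the hard-coded glEnableVertexArrayAttrib(VAO, 0/1) calls):
// Query each location once, reuse it for both calls, and skip inactive attributes.
GLint posLoc = glGetAttribLocation(rendering_program, "position");
if (posLoc >= 0) {
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE,
                          6 * sizeof(float), (void*) 0);
    glEnableVertexAttribArray(posLoc);
}
GLint colLoc = glGetAttribLocation(rendering_program, "colors");
if (colLoc >= 0) {
    glVertexAttribPointer(colLoc, 3, GL_FLOAT, GL_FALSE,
                          6 * sizeof(float), (void*) (3 * sizeof(float)));
    glEnableVertexAttribArray(colLoc);
}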
I finally was able to fix the issue:
I was calling glUseProgram(programObj) in my drawing loop; moving it out of the loop fixed the problem.
I am not sure why that caused the issue, but my guess is that OpenGL did something while the vertex attribute was not being used, and that change then caused the access violation in the next iteration.
Feel free to tell me if you know the reason.

OpenGL: Zero active uniforms

I'm referring to the OpenGL SuperBible. I use their framework to create my own program. I wanted to do something with an interface block (specifically a uniform block). If I call
glGetActiveUniformsiv(program, 1, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
I get an error, namely GL_INVALID_VALUE.
But if I call the same function with 0 instead of 1, it doesn't produce that error. I therefore assumed that I have no active uniforms. However, I should have 3 of them.
How do I activate them? Here's my shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
uniform TransformBlock {
mat4 translation;
mat4 rotation;
mat4 projection_matrix;
};
void main(void)
{
mat4 mvp = projection_matrix * translation * rotation ;
gl_Position = mvp * position;
vs_color = color;
}
Here is some code from the startup method:
static const GLchar* uniformNames[3] = {
"TransformBlock.translation",
"TransformBlock.rotation",
"TransformBlock.projection_matrix",
};
GLuint uniformIndices[3];
glUseProgram(program);
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
GLint uniformOffsets[3];
GLint matrixStrides[3];
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_MATRIX_STRIDE, matrixStrides);
unsigned char* buffer1 = (unsigned char*)malloc(4096);
//fill buffer1 in a for-loop
GLuint block_index = glGetUniformBlockIndex(program, "TransformBlock");
glUniformBlockBinding(program, block_index, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, (GLuint)buffer1);
free(buffer1);
However, as a consequence of the function returning GL_INVALID_VALUE, there's an error with the calls
*((float *)(buffer1 + offset)) = ...
and the whole program aborts. Without adding the offset, I don't get an error there, so I think the second error is a consequence of the first one.
I think it goes wrong at glGetUniformIndices, because you prefixed your uniform names with TransformBlock. You don't access the uniforms with that prefix in the GLSL code either. If you wanted that, you'd have to set an instance name for the uniform block; the block name is not relevant for accessing or naming the uniforms at all. It is only used for matching interfaces when linking together multiple shaders that access the same interface block.
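A sketch of the corrected query, assuming the block stays unnamed as in the shader above (the members are then referred to by their plain uniform names):
static const GLchar* uniformNames[3] = {
    "translation",
    "rotation",
    "projection_matrix",
};
GLuint uniformIndices[3];
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
// Every entry should now be a valid index rather than GL_INVALID_INDEX,
// and glGetActiveUniformsiv() can be called with a count of 3.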

OpenGL mapping texture to sphere

I have an OpenGL program in which I want to texture a sphere with a bitmap of the Earth. I prepared the mesh in Blender and exported it to an OBJ file. The program loads the mesh data (vertices, UVs and normals) and the bitmap properly; I have verified this by texturing a cube with a bone bitmap.
My program textures the sphere, but incorrectly (or at least not the way I expect): each triangle of the sphere contains a deformed copy of the whole bitmap. I've checked the bitmap and the UVs and they seem to be fine. I've tried many bitmap sizes (powers of 2, multiples of 2, etc.).
Here's the texture:
Screenshot of my program (it looks as if my UV coordinates were ignored):
Here is how I mapped the UVs in Blender:
Code setting up the texture after loading it (apart from the code adding the texture to the VBO, which I think is fine):
GLuint texID;
glGenTextures(1,&texID);
glBindTexture(GL_TEXTURE_2D,texID);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,width,height,0,GL_BGR,GL_UNSIGNED_BYTE,(GLvoid*)&data[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
Is any extra code needed to map this texture properly?
[Edit]
Initializing the textures (the code shown earlier is in the LoadTextureBMP_custom() function):
bool Program::InitTextures(string texturePath)
{
textureID = LoadTextureBMP_custom(texturePath);
GLuint TBO_ID;
glGenBuffers(1,&TBO_ID);
glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
glBufferData(GL_ARRAY_BUFFER,uv.size()*sizeof(vec2),&uv[0],GL_STATIC_DRAW);
return true;
}
My main loop:
bool Program::MainLoop()
{
    bool done = false;
    mat4 projectionMatrix;
    mat4 viewMatrix;
    mat4 modelMatrix;
    mat4 MVP;
    Camera camera;
    shader.SetShader(true);
    while(!done)
    {
        if( (glfwGetKey(GLFW_KEY_ESC)))
            done = true;
        if(!glfwGetWindowParam(GLFW_OPENED))
            done = true;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Matrix transformations start here
        camera.UpdateCamera();
        modelMatrix = mat4(1.0f);
        viewMatrix = camera.GetViewMatrix();
        projectionMatrix = camera.GetProjectionMatrix();
        MVP = projectionMatrix*viewMatrix*modelMatrix;
        // End of transformations
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D,textureID);
        shader.SetShaderParameters(MVP);
        SetOpenGLScene(width,height);
        glEnableVertexAttribArray(0); // Expose the vertex shader variable => vertexPosition_modelspace
        glBindBuffer(GL_ARRAY_BUFFER,VBO_ID);
        glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
        glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,0,(void*)0);
        glDrawArrays(GL_TRIANGLES,0,vert.size());
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glfwSwapBuffers();
    }
    shader.SetShader(false);
    return true;
}
VS:
#version 330
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
    vec4 v = vec4(vertexPosition,1.0f);
    gl_Position = MVP*v;
    UV = vertexUV;
}
FS:
#version 330
in vec2 UV;
out vec4 color;
uniform sampler2D texSampler; // Texture handle
void main()
{
    color = texture(texSampler, UV);
}
I haven't done any professional GL programming, but I've worked with 3D software quite a lot.
Your UVs are most likely bad.
Your texture is a bad fit for projecting onto a sphere.
Since the UVs look bad, you might want to check your normals as well.
Consider an icosphere instead of a regular UV sphere to make more efficient use of polygons.
You are currently using a flat texture with flat mapping, which may give you very ugly results, since you will have very low resolution around the "outer" perimeter and most likely a nasty seam artifact where the two projections meet if you, say, rotate the planet.
Note that you don't have to have any particular UV map; it just needs to be consistent with the geometry, which it doesn't look like it is right now. The spherical mapping will take care of the rest. You could probably get away with a cylindrical map as well, since most Earth textures are in a suitable projection.
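For reference, the usual equirectangular mapping can also be computed from the unit-sphere position instead of relying on exported UVs; a minimal sketch using GLM (sphericalUV is a hypothetical helper, not part of the question's code):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

// Map a point on the unit sphere to equirectangular (longitude/latitude) UVs.
glm::vec2 sphericalUV(const glm::vec3& p) {
    float u = 0.5f + std::atan2(p.z, p.x) / (2.0f * glm::pi<float>());
    float v = 0.5f - std::asin(p.y) / glm::pi<float>();
    return glm::vec2(u, v);
}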
Finally, I've found the answer. The error was here:
bool Program::InitTextures(string texturePath)
{
textureID = LoadTextureBMP_custom(texturePath);
// GLuint TBO_ID; _ERROR_
glGenBuffers(1,&TBO_ID);
glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
glBufferData(GL_ARRAY_BUFFER,uv.size()*sizeof(vec2),&uv[0],GL_STATIC_DRAW);
}
Here is the relevant part of the Program class declaration:
class Program
{
private:
    Shader shader;
    GLuint textureID;
    GLuint VAO_ID;
    GLuint VBO_ID;
    GLuint TBO_ID; // Member shadowed by the local variable declaration in InitTextures()
    ...
};
I had erroneously declared a local TBO_ID that shadowed the TBO_ID in class scope. The UVs were generated with poor precision and the seams are horrible, but they weren't the problem.
I have to admit that the information I supplied was too sparse to make helping possible; I should have posted the whole Program class. Thanks to everybody who tried.

texture3d generates GL_INVALID_OPERATION

I have a strange issue with my GLSL shader. It renders nothing (i.e. a black screen) and makes my glDrawElements generate a GL_INVALID_OPERATION. The shader in use is shown below. When I comment out the line with v = texture3D(texVol,pos).r; and replace it with v = 0.4;, it outputs what is expected (an orange-like color) and no GL error is generated.
uniform sampler2D texBack;
uniform sampler3D texVol;
uniform vec3 texSize;
uniform vec2 winSize;
uniform float iso;
varying vec3 inCoords;
vec4 raytrace(in vec3 entryPoint,in vec3 exitPoint){
    vec3 dir = exitPoint - entryPoint;
    vec3 pos = entryPoint;
    vec4 color = vec4(0.0,0.0,0.0,0.0);
    int steps = int(2.0*length(texSize));
    dir = dir * (1.0/steps);
    vec3 n;
    float v,m=0.0,avg=0.0,avg2=0.0;
    for(int i = 0;i<steps || i < 2500;i++){
        v = texture3D(texVol,pos).r;
        m = max(v,m);
        avg += v;
        pos += dir;
    }
    return vec4(avg/steps,m,0,1);
}
void main()
{
    vec2 texCoord = gl_FragCoord.xy/winSize;
    vec3 exitPoint = texture2D(texBack,texCoord).xyz;
    gl_FragColor = raytrace(inCoords,exitPoint);
}
I am using a VBO to render a color cube as the entry and exit points for my rays. They are stored in FBOs, and they look fine when I render them directly to the screen.
I have tried switching to glBegin/glEnd and drawing the cube with quads, and I get the same errors.
I can't find what I am doing wrong, and now I need your help. Why is my texture3D call generating GL_INVALID_OPERATION?
Note:
I have enabled both 2D and 3D textures.
Edit:
I've just uploaded the whole project to GitHub; browse to https://github.com/r-englund/rGraphicsLibrary for more code.
This is tested on both an Intel HD 3000 and an Nvidia GT 550M.
According to the OpenGL specification, glDrawElements() generates GL_INVALID_OPERATION in the following cases:
If a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
If a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped.
This means the problem has nothing to do with your fragment shader. If you don't use geometry shaders, you should fix the buffer objects accordingly.
It looks like you are not providing the relevant additional information in your question.
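If the mapped-buffer case is the culprit, one way to check it right before the draw call might look like this (vboId and iboId are placeholder names for your buffers, not identifiers from the question):
// Make sure none of the buffers feeding the draw call is still mapped.
GLint mapped = GL_FALSE;
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_MAPPED, &mapped);
if (mapped)
    glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_MAPPED, &mapped);
if (mapped)
    glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);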

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen; however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);
    //deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord,2,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*2,(void*)0);
    //now the other one
    glBindBuffer(GL_ARRAY_BUFFER,resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex,3,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*3,(void*)0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
    //clean up a bit
};
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
vVarryingTexCoord = vTexCoord;
gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
};
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
vVaryingFragColor = vec4(1.0,1.0,1.0,1.0);
};
The vertex buffer for the position coordinates makes a simple cube (with all coordinates at ±0.25), while the model-view-projection matrix is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in the same order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render on screen?
You're looking pretty good so far.
The only thing that I see right now is that you've got DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer is initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is never cleared.
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the error state clean, but that would be my next step.
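A sketch of both suggestions applied to the start of testRender() (the fprintf reporting is just an example, not part of the question's code):
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth as well as color
// ... draw calls as before ...
// Drain and report any pending GL errors (needs <cstdio> for fprintf).
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error: 0x%x\n", err);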