glGenVertexArrays giving me an invalid memory address - opengl

I am new to OpenGL. I have searched around the web and followed some tutorials, but I am still having an issue. When I run my project I get an error:
Unhandled exception at 0x0000000000000. Access Violation executing location 0x0000000000000
Below is the code I am executing that causes this exception; I would love some help nailing down the issue:
GLuint vertexBuffer;
GLuint vertexArrayID;
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
The exception occurs when I try to bind both the VBO and VAO.
Thanks for the help in advance!

All OpenGL functions beyond version 1.1, as well as all extension functions, have to be loaded at runtime before they can be used. This can be done, for example, with GLEW, which must be initialized after an OpenGL context has been created and made current:
glewExperimental = true;
GLenum err = glewInit();
if (GLEW_OK != err)
{
/* Problem: glewInit failed, something is seriously wrong. */
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
...
}
(Example code is from http://glew.sourceforge.net/)
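Note that the order of operations matters: glewInit() requires a current OpenGL context, and only after it succeeds are the OpenGL 3.0+ entry points such as glGenVertexArrays usable. A minimal sketch of that order, assuming GLFW is used for window and context creation (any other windowing library works the same way):
#include <GL/glew.h>     // include before other GL headers
#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstdlib>
int main()
{
    if (!glfwInit()) return EXIT_FAILURE;
    GLFWwindow* window = glfwCreateWindow(640, 480, "VAO test", NULL, NULL);
    if (!window) { glfwTerminate(); return EXIT_FAILURE; }
    glfwMakeContextCurrent(window);      // a context must be current before glewInit()

    glewExperimental = GL_TRUE;          // needed for core profile contexts
    GLenum err = glewInit();             // loads the function pointers
    if (err != GLEW_OK) {
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
        return EXIT_FAILURE;
    }

    // Only now is it safe to call the loaded entry points.
    GLuint vertexArrayID = 0, vertexBuffer = 0;
    glGenVertexArrays(1, &vertexArrayID);
    glBindVertexArray(vertexArrayID);
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);

    glfwDestroyWindow(window);
    glfwTerminate();
    return EXIT_SUCCESS;
}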

Related

OpenGL 3 Rendering Problems Linux (Ubuntu Mate)

I'm unable to get OpenGL (with GLFW) to render content to the screen. I'm not even able to set a clear color and have it displayed when I run my application; I'm just consistently presented with a black screen.
I have installed the requisite dependencies on my system and set up the build environment such that I'm able to compile my application (and its dependencies) without error. Here is a snippet of the problematic code... You will note that much of the rendering code has been commented out; for now, it is sufficient to just have the clear color I chose displayed, to verify that everything is set up correctly:
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
//Include GLEW. Always include it before gl.h and glfw3.h, since it's a bit magic.
#include <GL/glew.h>
// Include GLFW
#include <GLFW/glfw3.h>
// Include GLM
#include <glm/glm.hpp>
#include <GL/glu.h>
#include <common/shader.h>
#include <iostream>
using namespace glm;
int main()
{
// Initialise GLFW
glewExperimental = true; // Needed for core profile
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
return -1;
}
// Open a window and create its OpenGL context
GLFWwindow* window; // (In the accompanying source code, this variable is global for simplicity)
window = glfwCreateWindow( 1024, 768, "Tutorial 02", NULL, NULL);
if( window == NULL ){
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window); // Initialize GLEW
//glewExperimental=true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
//INIT VERTEX ARRAY OBJECT (VAO)...
//create Vertex Array Object (VAO)
GLuint VertexArrayID;
//Generate 1 buffer, put the resulting identifier in our Vertex array identifier.
glGenVertexArrays(1, &VertexArrayID);
//Bind the Vertex Array Object (VAO) associated with the specified identifier.
glBindVertexArray(VertexArrayID);
// Create an array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
//INIT VERTEX BUFFER OBJECT (VBO)...
// This will identify our vertex buffer
GLuint VertexBufferId;
// Generate 1 buffer, put the resulting identifier in VertexBufferId
glGenBuffers(1, &VertexBufferId);
//Bind the Vertex Buffer Object (VBO) associated with the specified identifier.
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
//Compile our Vertex and Fragment shaders into a shader program.
/**
GLuint programId = LoadShaders("../tutorial2-drawing-triangles/SimpleVertexShader.glsl","../tutorial2-drawing-triangles/SimpleFragmentShader.glsl");
if(programId == -1){
printf("An error occured whilst attempting to load one or more shaders. Exiting....");
exit(-1);
}
//glUseProgram(programId); //use our shader program
*/
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
do{
// Clear the screen. It's not mentioned before Tutorial 02, but it can cause flickering, so it's there nonetheless.
glClearColor(8.0f, 0.0f, 0.0f, 0.3f);
//glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// DRAW OUR TRIANGLE...
/**
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
glEnableVertexAttribArray(0); // 1st attribute buffer : vertices
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// plot the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0); //clean up attribute array
*/
// Swap buffers
glfwSwapBuffers(window);
//poll for and process events.
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
}
Again, pretty straightforward as far as OpenGL goes: all rendering logic, loading of shaders, etc. has been commented out, and I'm just trying to set a clear color and have it displayed, to be sure my environment is configured correctly. To build the application I'm using Qt Creator with a custom CMake file. I can post it if you think it may help determine the problem.
So I managed to solve the problem. I'll attempt to succinctly outline the source of the problem and how I arrived at a resolution in the hope that it may be useful to others that encounter the same issue:
In a nutshell, the source of the problem was a driver issue. I neglected to mention that I was actually running OpenGL inside an Ubuntu Mate 18.0 VM (via Parallels 16) on a MacBook Pro (with dedicated graphics). Therein lies the problem: until very recently, neither Parallels nor Ubuntu supported the more modern OpenGL versions, 3.3 and upwards. I discovered this by adding the following lines to the posted code in order to force a specific OpenGL version:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
On doing this, the application immediately began to crash, and glGetError() reported that I needed to downgrade to an earlier version of OpenGL, as 3.3 was not compatible with my system.
The solution was two-fold:
Update Parallels to version 17, which now includes a dedicated third-party virtual GPU (virGL) that is capable of running OpenGL 3.3 code.
Update Ubuntu, or at the very least the kernel, as virGL only works with Linux kernel versions 5.10 and above. (Ubuntu Mate 18 only ships with kernel version 5.04.)
That's it. Making the changes as described enabled me to run the code exactly as posted and successfully render a basic triangle to the screen.
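For anyone hitting the same wall, a quick sanity check is to print what the driver actually gave you right after creating the window: if the reported version is lower than the one you hinted for, the problem is in the driver/VM stack rather than in your code. A small sketch meant to be dropped into the posted program (the printf formatting is mine):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(1024, 768, "Version check", NULL, NULL);
if (!window) {
    // The driver/VM could not create a 3.3 core context at all.
    fprintf(stderr, "3.3 core context not available on this system\n");
} else {
    glfwMakeContextCurrent(window);
    // glGetString exists in every GL version, so this is safe even on a legacy context.
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
}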

glDrawElements throws an exception without error code

I am trying to draw a simple triangle and set up the buffers as follows:
triangle t;
point3f vertices[] = { t.p1(), t.p2(), t.p3() };
GLushort indices[] = { 0, 1, 2 };
gl_vertex_array vao{ 3 };
vao.bind_vertex_array();
gl_vertex_buffer position_vbo{ buffer_type::array_buf };
position_vbo.bind_vertex_buffer();
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0],
GL_STATIC_DRAW);
position_vbo.unbind_vertex_buffer();
gl_vertex_buffer index_vbo{ buffer_type::element_array_buf };
index_vbo.bind_vertex_buffer();
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0],
GL_STATIC_DRAW);
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
The setup of the buffers and VAO is fine, I think; I checked with glGetError at each stage and everything seems to be working. In my render function, I do the following:
glClearColor(0.4f, 0.3f, 0.6f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
o.vao.bind_vertex_array();
o.sp.use_program();
GLenum error = glGetError();
assert(error == GL_NO_ERROR);
//glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
error = glGetError();
assert(error == GL_NO_ERROR);
o.sp.unuse_program();
o.vao.unbind_vertex_array();
The render call with glDrawArrays works just fine, but when I try to render with glDrawElements, an exception is thrown. Moreover, this is a hard exception: I can't step to the next line to see the error code. I didn't know that OpenGL calls could throw. I am stuck here. What might be the problem?
Here is a similar discussion
nvoglv32.dll throws the exception
The problem lies in the VAO setup code. The index buffer gets unbound before the VAO is unbound:
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
Since the VAO stores the last bound GL_ELEMENT_ARRAY_BUFFER as part of its state, this effectively removes the index buffer from the VAO. The exception then happens because the draw call tries to read from an index buffer that is not bound. The solution is to swap these two lines and unbind the VAO first:
vao.unbind_vertex_array();
index_vbo.unbind_vertex_buffer();
As Nicol Bolas mentioned in the comments, you can actually leave out the unbinding of the element buffer completely: once the VAO is unbound, no element buffer is bound anymore.
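For readers not using the wrapper classes from the question, the same ordering with raw GL calls looks roughly like this (a sketch; the data is a placeholder triangle and the attribute-pointer setup is omitted):
GLfloat vertices[] = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f };
GLushort indices[] = { 0, 1, 2 };

GLuint vao, positionVbo, indexVbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &positionVbo);
glGenBuffers(1, &indexVbo);

glBindVertexArray(vao);

// The GL_ARRAY_BUFFER binding is not VAO state; it is only captured when
// glVertexAttribPointer is called, so unbinding it here would be harmless.
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// The GL_ELEMENT_ARRAY_BUFFER binding *is* VAO state: do not unbind it while
// the VAO is still bound, or the VAO will record "no index buffer".
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);                      // unbind the VAO first
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);  // optional; harmless once the VAO is unbound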

openvdb viewer & opengl

I am trying to use OpenVDB and its viewer; I just want to view an OpenVDB file with the viewer.
Some of the viewer's functions use OpenGL calls and return an OpenGL error.
Below is a run of the OpenVDB viewer and its error message.
C:\Users\user\Documents\Visual Studio 2013\Projects\openvdb_test\Debug>openvdb_test.exe armadillo.vdb -i
ls_armadillo (1276 x 1519 x 1160 voxels)
Glew init (Windows)
INFO vertex sizes 2934312
INFO sizeof(GLfloat) 4
error genvertexbuffer 1281
openvdb_test.exe: Error: Unable to upload vertex buffer data
C:\Users\user\Documents\Visual Studio 2013\Projects\openvdb_test\Debug>
And this is the function in RenderModules of the OpenVDB viewer that prints the error message.
I added some lines for debugging.
void BufferObject::genVertexBuffer(const std::vector<GLfloat>& v)
{
if (glIsBuffer(mVertexBuffer) == GL_TRUE) glDeleteBuffers(1, &mVertexBuffer);
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
if (glIsBuffer(mVertexBuffer) == GL_FALSE) throw "Error: Unable to create vertex buffer";
printf("INFO vertex sizes %d \n", v.size());
printf("INFO sizeof(GLfloat) %d \n", sizeof(GLfloat));
int size = sizeof(GLfloat) * v.size();
glBufferData(GL_ARRAY_BUFFER, size, &v[0], GL_STATIC_DRAW);
GLenum err=glGetError();
if (GL_NO_ERROR != err)
{
printf("error genvertexbuffer %d\n", err);
throw "Error: Unable to upload vertex buffer data";
}
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
I know that OpenGL error 1281 (GL_INVALID_VALUE) indicates an invalid value, such as a bad size.
But the size of the vertex data is not negative, and the other arguments look fine to me.
Did I miss something?
I just changed
if (glIsBuffer(mVertexBuffer) == GL_FALSE) throw "Error: Unable to create vertex buffer";
to
do {
glIsBuffer(mVertexBuffer);
} while (glGetError() != GL_NO_ERROR);
For me it failed for the vertex and the color buffer; now it works.
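In other words, the replacement simply drains the error queue instead of throwing on the glIsBuffer check. If you prefer to keep a throwing check, a small helper along these lines (the helper name is mine, not part of the viewer code) isolates the errors of the one call you actually care about:
// Discard any errors left over from earlier GL calls so that the next
// glGetError() refers only to the call being checked.
static void clearStaleGLErrors()
{
    while (glGetError() != GL_NO_ERROR) { /* discard */ }
}

// Usage inside genVertexBuffer, right before the upload:
//   clearStaleGLErrors();
//   glBufferData(GL_ARRAY_BUFFER, size, &v[0], GL_STATIC_DRAW);
//   GLenum err = glGetError();   // now specific to glBufferData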

openGL: glGenVertexArrays Access violation executing location 0x00000000

I'm trying to teach myself OpenGL, so I bought a book about it, and the first chapter contains example code. I tried it and something went wrong. At line 17 (glGenVertexArrays(NumVAOs, VAOs);) and line 18 (glBindVertexArray(VAOs[Triangles]);), VS 2013 Ultimate reports an error, exactly: "Unhandled exception at 0x77350309 in openGL_3.exe: 0xC0000005: Access violation executing location 0x00000000." So I think something is wrong with memory, but I do not know what. Can someone help me?
#include <iostream>
using namespace std;
#include <vgl.h>
#include <LoadShaders.h>
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
void init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
You can try setting:
glewExperimental = GL_TRUE;
before your call to glewInit(). Sources:
https://stackoverflow.com/a/20822876/833188
https://stackoverflow.com/a/22820845/833188
Are you sure that you can request version 4.3 before the call to glewInit()? Every version >= 3.0 requires WGL_ARB_create_context (Windows) / GLX_ARB_create_context (Linux), which is an extension.
Usually, to create a "modern" OpenGL context (3.0+), it is required to (a condensed sketch follows after the links below):
Create a temporary context and make it current.
Initialize extensions (either manually or with a loader like GLEW or GLFW).
Request the desired version (and profile, if you create version 3.2 or higher and WGL_ARB_create_context_profile/GLX_ARB_create_context_profile is present).
Delete the temporary context from step 1 and bind your new context.
You may want to look at:
OpenGL 3.0 Context Creation (GLX)
OpenGL 3.1 The First Triangle (WGL)
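To make those steps concrete, here is a condensed WGL sketch, assuming hdc is the device context of a window whose pixel format has already been set; error handling is kept to a minimum and this is not a drop-in replacement for the tutorials linked above:
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // WGL_CONTEXT_* tokens and PFNWGLCREATECONTEXTATTRIBSARBPROC

HGLRC createModernContext(HDC hdc)
{
    // 1. Temporary legacy context, made current so extensions can be queried.
    HGLRC tempCtx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, tempCtx);

    // 2. Load the extension entry point (GLEW could do this step for you).
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return tempCtx;   // the driver cannot create 3.0+ contexts; keep the legacy one

    // 3. Request the desired version and profile.
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    HGLRC modernCtx = wglCreateContextAttribsARB(hdc, NULL, attribs);

    // 4. Drop the temporary context and bind the new one.
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(tempCtx);
    wglMakeCurrent(hdc, modernCtx);
    return modernCtx;
}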
I do not know what the connection is between requesting a context version in GLUT and GLEW initialization (I often use GLEW, but context creation is something I always do manually, with the platform-specific API), but obviously the pointers to the newer OpenGL API are not initialized when you call glGenVertexArrays. That is the reason for your error: you try to call a function through a pointer which is NULL.
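In the posted program you can guard against exactly this kind of crash by checking, after glewInit(), whether the entry point was actually loaded before calling it; a sketch using GLEW's generated variables, placed at the top of init():
if (!GLEW_ARB_vertex_array_object || glGenVertexArrays == NULL) {
    cerr << "glGenVertexArrays is not available in this context ... exiting" << endl;
    exit(EXIT_FAILURE);
}
glGenVertexArrays(NumVAOs, VAOs);   // safe to call now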
I generally agree with Sga's answer recommending to set glewExperimental = GL_TRUE before calling glewInit(). GLEW will fail to initialize in a core profile if this option is not set.
However, the fact that glewInit() does not fail implies that the OP did not get a core profile at all (or that GLEW has finally been fixed, but that is more of a theoretical possibility).
I already had a look into freeglut's implementation of the (freeglut-specific) glutInitContextVersion API for the question "Where is the documentation for glutInitContextVersion?", and the conclusions from back then might be helpful in this case. To quote myself:
From looking into the code (for the current stable version 2.8.1), one will see that freeglut implements the following logic: if the implementation cannot fulfill the version constraints but does support the ARB_create_context extension, it will generate an error and no context will be created. However, if a version is requested but the implementation does not even support the relevant extensions, a legacy GL context is created, effectively ignoring the version request completely.
So from the reported behavior, I deduce that the OP only got a legacy context, possibly even Microsoft's default OpenGL 1.1 implementation.
This also explains why glGenVertexArrays() is a NULL pointer even after glewInit() succeeded: the extension is not supported for this context.
You should check what glGetString(GL_VERSION) and glGetString(GL_RENDERER) actually return right after the glutCreateWindow() call. Depending on the output, you might consider checking or updating your graphics drivers, or you might learn that your GPU is just not capable of modern GL at all.
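That check can be as simple as this, placed right after the glutCreateWindow() call (glGetString has been available since GL 1.0, so it works even on a legacy context):
glutCreateWindow(argv[0]);
cout << "GL_VERSION:  " << (const char*)glGetString(GL_VERSION) << endl;
cout << "GL_RENDERER: " << (const char*)glGetString(GL_RENDERER) << endl;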

OpenGL and SDL on OSX - EXC_BAD_ACCESS on glDrawElements() - works on GLUT, not with SDL

I wrote a simple application using GLUT that I'm porting to SDL now to turn it into a game.
I am having a bizarre issue specifically with glDrawElements and vertex buffer objects, using SDL 1.2.14 on OS X. If I don't use VBOs the program runs fine; it only throws an "EXC_BAD_ACCESS" when using VBOs. To make the matter more obscure, the program runs totally fine in GLUT. I must be missing something, perhaps in my initialization, that is causing this.
Here's the drawing code:
if (glewGetExtension("GL_ARB_vertex_buffer_object"))
{
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Load vertices
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_vertices);
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
//Load normals
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_normals);
glNormalPointer(GL_FLOAT, 0, BUFFER_OFFSET(0));
//Load UVs
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_uvs);
glTexCoordPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this->mesh->vbo_index);
App dies here -----> glDrawElements(GL_TRIANGLES, 3*this->mesh->numFaces, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
} else {
//BTW: If I run this block of code instead of the above, everything renders fine. App doesn't die.
//Drawing with vertex arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, this->mesh->vertexArray);
glNormalPointer(GL_FLOAT, 0, this->mesh->normalsArray);
glTexCoordPointer(2, GL_FLOAT, 0, this->mesh->uvArray);
glDrawElements(GL_TRIANGLES, 3*this->mesh->numFaces, GL_UNSIGNED_INT, this->mesh->indexArray);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Here's the debug information:
Program received signal: “EXC_BAD_ACCESS”.
Thread-1-<com.apple.main-thread>
#0 0x17747a93 in gleRunVertexSubmitImmediate
#1 0x1774772c in gleLLVMArrayFunc
#2 0x177476e4 in gleSetVertexArrayFunc
#3 0x1773073c in gleDrawArraysOrElements_ExecCore
#4 0x176baa7b in glDrawElements_Exec
#5 0x97524050 in glDrawElements
asm gleRunVertexSubmitImmediate
0x17747a93 <+0771> mov (%eax,%ecx,4),%eax <-- the app dies on this.
Here's my SDL initialization code:
//Initialize SDL
if (SDL_Init(SDL_INIT_VIDEO) < 0)
{
cout << "Could not initialize SDL" << endl << SDL_GetError();
exit(2);
}
//Set window
SDL_WM_SetCaption("Hello World!", "Hello World!");
//Set openGL window
if ( SDL_SetVideoMode(width, height, 32, SDL_OPENGL | SDL_RESIZABLE) == NULL ) {
cout << "Unable to create OpenGL context: %s\n" << endl << SDL_GetError();
SDL_Quit();
exit(2);
}
//Set up event handling
SDL_Event event;
bool quit = false;
//Initialize GLEW
GLenum err = glewInit();
if (GLEW_OK != err)
{
//Problem: glewInit failed, something is seriously wrong.
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
exit(1);
}
fprintf(stdout, "Status: Using GLEW %s\n", glewGetString(GLEW_VERSION));
I don't see anything wrong with your code.
When you get EXC_BAD_ACCESS, it is typically because of an attempt to access an object that is not allocated, or that has been deallocated.
You can get more detailed debug information on the object in question by enabling the NSZombieEnabled environment variable. This is a simple blog post on how to enable this environment variable (I am not the author).
This will hopefully help get more information in the debug console on why the crash is happening.
If you don't have vn (vertex normal) entries in your model (.obj) file, you should comment out the following lines:
//glEnableClientState(GL_NORMAL_ARRAY);
//glDisableClientState(GL_NORMAL_ARRAY);
That one worked for me.
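A slightly more defensive variant of the same fix is to key the client state on whether the mesh actually has normals instead of commenting the lines out by hand. The check shown here assumes the mesh exposes its normal data (the question's code has a normalsArray member in the non-VBO path); adapt it to however your mesh class reports that:
bool hasNormals = (this->mesh->normalsArray != NULL);   // or however your mesh reports it

glEnableClientState(GL_VERTEX_ARRAY);
if (hasNormals) glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

// ... pointer setup and glDrawElements exactly as in the question ...

glDisableClientState(GL_VERTEX_ARRAY);
if (hasNormals) glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);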